And if Duplicacy only did that, and you had ECC RAM, this line of reasoning would be true. But it doesn’t, so it isn’t. Also, sync != backup. Backup is more than just bits.
Well, I like to spin it at 5400 or 7200 rpm.
A single external drive absolutely is a valid destination as part of 3-2-1. In fact, I recently did a real-world restoration when my server was offline. You might not have a use case, but the only thing invalid here is your argument, coz I do…
My backup data effectively resides across 6 drives (excluding source):
- 4 in an off-site RAIDZ2 (auto scrubbed weekly, chunks auto checked weekly),
- 1 local server internal drive with Erasure Coding (auto scrubbed monthly, chunks auto checked weekly),
- 1 external (mostly offline) with EC (manually scrubbed for bad sectors, chunks checked after every manual copy).
Who has the shitty backup?
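For what it's worth, the schedule above is nothing exotic: cron plus Duplicacy's own `check -chunks` (which downloads and verifies every chunk). A minimal sketch; the paths and the scrub pool name are made up for illustration:

```shell
# Hypothetical crontab; paths and pool name are illustrative, not my real setup.

# Weekly chunk verification on the off-site RAIDZ2 storage:
0 3 * * 0  cd /mnt/backups/offsite && duplicacy check -chunks

# Weekly chunk verification on the local server storage (Erasure Coding enabled):
0 4 * * 0  cd /srv/backups/local && duplicacy check -chunks

# Monthly ZFS scrub on the local pool:
0 2 1 * *  zpool scrub tank
```

The external drive doesn't appear here because it's mostly offline; that one gets checked by hand after each copy.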
Not one snapshot. Metadata chunks are reusable.
If that chunk doesn't change while other chunks do, each subsequent revision references the same metadata chunk. `-persist` won't help you, as it won't continue once it encounters corrupted metadata. You will have lost data.
This is all the more reason to separate metadata chunks, as we could then run, say, a `check -meta`.
But does it? You seem to be missing the point, which is no surprise. Have you tested this assumption?
Here’s a salient example of the risk of putting undue ‘trust’ in an API. Here's another one, for OneDrive. These bugs were discovered by users verifying their data. Now consider how few AWS users are willing to do the same.

Even this forum runs on an ancient version of Discourse. It works, doesn't it?
I don’t know, you tell me.

> CISA urges admins to patch critical Discourse code execution bug
>
> A critical Discourse remote code execution (RCE) vulnerability tracked as CVE-2021-41163 was fixed via an urgent update by the developer on Friday.