Best Storage Backend in 2025

And if Duplicacy only did that, and you had ECC RAM, this line of reasoning would be true. But it doesn’t, so it isn’t. Also, sync != backup. Backup is more than just bits.

Well, I like to spin it at 5400 or 7200 rpm. :wink:

A single external drive absolutely is a valid destination as part of 3-2-1. In fact, I recently did a real-world restore when my server was offline. You might not have a use case, but the only thing invalid is your argument, coz I do…

My backup data effectively resides across 6 drives (excluding source):

  • 4 in an off-site RAIDZ2 (auto-scrubbed weekly, chunks auto-checked weekly),
  • 1 local server internal drive with Erasure Coding (auto-scrubbed monthly, chunks auto-checked weekly),
  • 1 external (mostly offline) with EC (manually scrubbed for bad sectors, chunks checked after every manual copy).
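A routine like the one above could be wired up as cron entries. A sketch only: the pool name (tank), the storage names (offsite, local), and the binary paths are placeholder assumptions, not details from my setup.

```cron
# Weekly ZFS scrub of the off-site RAIDZ2 pool ("tank" is a placeholder name)
0 3 * * 0  /sbin/zpool scrub tank
# Weekly chunk verification: download and verify every chunk the snapshots reference
0 4 * * 0  /usr/local/bin/duplicacy -log check -storage offsite -chunks
# Monthly verification pass against the local server drive's storage
0 4 1 * *  /usr/local/bin/duplicacy -log check -storage local -chunks
```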

Who has the shitty backup?

Not one snapshot. Metadata chunks are reusable.

If that chunk doesn’t change while other chunks do, each subsequent revision references the same metadata chunk. -persist won’t help you, as it won’t continue once it encounters corrupted metadata. You will have lost data.

This is all the more reason to separate metadata chunks, since we could then run, say, a check -meta.

But does it? You seem to be missing the point, which is no surprise; have you tested this assumption?

Here’s a salient example of the risk of putting undue ‘trust’ in an API. Another one for OneDrive. These bugs were discovered by users verifying their data. Now consider the low prevalence of AWS users willing to do the same.

I don’t know, you tell me.

There are many more things than ECC RAM. There are many more layers that you have to trust, because you cannot verify everything. Well, you think you can, but my premise is that you can’t realistically be verifying every bit of a backup. So if you disagree with this core assumption, there is really no reason to argue about anything that is a consequence of it.

Lol :slight_smile:

It’s like smoking at a gas station. 99.99% of the time nothing will happen. That doesn’t mean it’s a good idea.

You can entirely forgo this single drive and save yourself the hassle of maintaining it, because it does not guarantee anything. It’s just a waste of time.

Good point. On the other hand, corruption of a single config file also makes the entire backup go poof. So the point is not to let any bits rot, or to make the probability of it low enough to be irrelevant.
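How low is “low enough”? A back-of-the-envelope sketch, assuming, purely for illustration, independent copies that each have a 1% chance of being lost in a given year (both the failure rate and the independence are assumptions, not measurements):

```python
# Probability that ALL independent copies are lost in the same year.
# With independence, losses multiply, so each extra copy cuts the
# total-loss probability by the per-copy failure rate.
p_single = 0.01  # assumed annual probability of losing one copy

for copies in (1, 2, 3, 6):
    p_all = p_single ** copies  # independence assumption
    print(f"{copies} copies: P(total loss) ~ {p_all:.0e}")
```

At six copies the illustrative number is so small that other risks (ransomware, user error, a bad restore procedure) dominate, which is the sense in which it becomes irrelevant.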

I agree metadata chunks should be separated, though not for this reason; but if this is another drop in the ocean toward getting them separated, I’ll subscribe to this argument :smiley:

Yes, by selectively restoring a few files. Sanity-testing the backup. This is enough to convince me that it works, with a probability that allows me to sleep well at night. Remember, I’m not going to test every byte.
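That spot check is easy to script: restore a handful of files to a scratch directory with whatever backup tool you use, then compare hashes against the live copies. A minimal sketch; the directory layout and sample list are placeholders:

```python
import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    """Hash a file in 1 MiB blocks so large files don't load into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(1 << 20), b""):
            h.update(block)
    return h.hexdigest()

def spot_check(source_dir: Path, restored_dir: Path, samples: list[str]) -> bool:
    """Compare a few sampled files between the live source and a test restore."""
    return all(
        sha256(source_dir / rel) == sha256(restored_dir / rel)
        for rel in samples
    )
```

Run your tool’s restore for the sampled files first, then call spot_check on them; a mismatch means the backup is not returning the bytes you stored, which is exactly what a sanity test is meant to catch.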

Lol. Dropbox and OneDrive. Nobody should be using those services as backup destinations. They were not designed for this bulk object-storage use case. How is it surprising that they fail? I keep repeating this like a broken record: don’t use *Drive services. Remove support for *Drive services from duplicacy. But it won’t happen, because I guess this sells the app: “look, you can back up to the storage you already have through your employer!”.

This again reiterates my claim that the backup is just as reliable as the underlying storage.
There is no point in checking and fixing an ill-suited backend. It’s like, I don’t know, a kettle. If you buy a crappy kettle and it breaks, you don’t go replacing it under warranty or repairing it; you defenestrate it and buy one that won’t break. (To be pedantic, one that is significantly less likely to break.)

In this case, I trust that my bank will be very unhappy if AWS lost data. So AWS won’t lose data. As simple as that. OneDrive? Three dudes lost their Excel spreadsheets. Big deal, restore from backup.

Not everything needs to be constantly patched and kept up to date. Some things are easier to fix after they break as opposed to preventing them from breaking. Others not. By using old software you assume some ever-increasing risk, as things around it change, including from security perspective, yes. Whether this risk justifies having another full time job watching, patching, and testing daily any time yet another security hole is found – is not a black and white decision, it’s a balance between cost of risk and available resources.

But I don’t want this to turn into a discussion of the security of internet-facing services; it’s an entirely different topic, with a necessarily different balance between what’s acceptable in terms of risk vs effort.

I mean, if a 10.0 CVSS isn’t enough to cause concern (let alone prove my point), we’re most definitely done here. Up is down, right is wrong. Truly mind-boggling…

Prove what point?

You have conflated security and functionality. I stated above — security of internet-facing services is out of scope of this discussion. Only functionality. The functionality of duplicacy uploading data to AWS is fine and atomic. It won’t silently break. No need to run around preemptively fixing things until it does. Fewer changes → fewer bugs.

Security is irrelevant — duplicacy runs in the trusted environment. Amazon worries about S3 security.

I can take it a step further: I’m (the user) paying my storage provider to store my data and worry about all that security minutiae. I’m paying my backup software vendor to make backups. What else do you want me to do? Run errands verifying their jobs? No thank you, I have better things to do with my time (for example, argue the obvious on this and other forums).

Security seems to bother you too much, likely because your occupation seems to be related to providing IT services. That’s fine, and even good for your customers.

For the rest of the people it’s not such a big deal. That’s why they have backups and redundancy. Stuff will happen. You can’t protect against everything all the time.

Taking it even further — I don’t think convenience must be sacrificed in the name of security. I know you disagree, that’s ok.

How this all relates to the original discussion: it’s quite simple. The overall backup solution shall be “good enough”. It is not, and cannot be, perfect, and it should not take any nontrivial amount of time or money to maintain. That means sanity-testing occasionally, and using reliable software and services. Not obsessing over the integrity of every stored byte, or AWS API changes, or watching the changelogs of every duplicacy dependency like a hawk, or reading security bulletins.

Anyway, it seems we are going in circles. This is my perspective as a user. Not a developer or service provider.

YOU brought up the point about the Duplicacy forum being hosted on out-of-date software. When clearly the site deserves a bloody update! “Not broken, don’t fix,” you said. It’s literally broken, and you’re arguing against the very idea of applying updates. You’re just blathering, and we’re wasting each other’s time.