If the entire snapshots folder is lost (because it can happen), what can I restore from a backup?
I’ve tried deleting it and then running a restore command; it just output an error saying “Snapshot 1 at revision 1 does not exist”.
What are you actually trying to do?
Snapshots contain the information needed to reassemble files from the pieces stored as chunks; or, more often, they contain information on how to assemble the large list of pieces required to reassemble the files.
Elaborate. How do you end up in a scenario where the snapshots folder can vanish and yet everything else stays intact?
(Say, this is a nice car, but if I chop off the engine (because it can happen), it does not drive. It says “engine missing”. How far can I drive this car?)
Hi. I’m just trying to figure out how robust the backup scheme is before using it or making a purchase, not trying to present a challenge.
The more likely scenario is that data on the storage gets corrupted, such as failed sectors on a local drive or errors in the cloud storage, resulting in the loss of the snapshots AND other files (the lost data happens to include the snapshots folder, as opposed to the snapshots folder being the only data lost, which is possible but very unlikely).
Your car metaphor is similar: I wouldn’t chop off the engine; it fails on its own.
Regardless, the answer I’ve got is that the backup will be unrestorable without the snapshots. Thank you.
Yes, you need a snapshot to restore files from it. But snapshots don’t depend on each other, so if you lose 79283 out of 79284 snapshots, you can still restore from the remaining lucky one. That is, however, an unrealistic scenario; it’s much more likely that you will lose chunks due to media rot, simply because they are larger.
Each backup behaves as an independent one, while still enjoying space efficiency due to deduplication. (Read about content-addressable storage.)
If you delete a bunch of chunk files, only those unlucky files (or snapshots) whose chunks were deleted will be affected, and you can restore everything else.
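To make that concrete, here is a minimal, hypothetical sketch of content-addressable chunk storage in Python. It is not Duplicacy’s actual on-disk format; the chunk size, the snapshot structure, and the `store_file`/`restore_file` helpers are invented purely for illustration:

```python
# Toy content-addressable chunk store (illustration only, not Duplicacy's
# real format): chunks are keyed by their hash, and snapshots are just
# lists of chunk hashes per file.
import hashlib

CHUNK_SIZE = 4  # unrealistically small, just to keep the example readable

chunks = {}  # the "chunks" folder: hash -> chunk bytes

def store_file(data: bytes) -> list:
    """Split a file into chunks, store each chunk under its hash,
    and return the list of hashes (what a snapshot records)."""
    hashes = []
    for i in range(0, len(data), CHUNK_SIZE):
        piece = data[i:i + CHUNK_SIZE]
        h = hashlib.sha256(piece).hexdigest()
        chunks[h] = piece          # identical pieces land on the same key
        hashes.append(h)
    return hashes

def restore_file(hashes: list) -> bytes:
    """Reassemble a file from its chunk hashes; this only fails if a
    referenced chunk is missing."""
    return b"".join(chunks[h] for h in hashes)

# Two "backups" of overlapping data share chunks automatically.
snapshot_1 = {"docs/a.txt": store_file(b"AAAABBBBCCCC")}
snapshot_2 = {"docs/a.txt": store_file(b"AAAABBBBDDDD")}

print(len(chunks))  # 4 unique chunks, not 6: deduplication at work

# Losing one snapshot does not affect the other...
del snapshot_1
print(restore_file(snapshot_2["docs/a.txt"]))  # still restores fine

# ...but losing a chunk would affect every file, in any snapshot, that
# references it, and nothing else.
```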
Duplicacy also supports Reed-Solomon erasure coding, where data is written redundantly so that rotting media can be tolerated to some degree.
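For intuition only, here is a tiny Python sketch of the simplest possible erasure code: a single XOR parity shard that can rebuild any one lost data shard. Duplicacy’s actual implementation uses Reed-Solomon codes, which generalize this to multiple parity shards; the shard sizes and helper names below are made up for the example:

```python
# Toy erasure coding: one XOR parity shard can repair any single missing
# data shard. Reed-Solomon (as used by Duplicacy) extends the same idea
# to tolerate several missing shards.
from functools import reduce

def xor_all(parts):
    """XOR a list of equally sized byte strings together."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), parts)

def rebuild(shards, parity):
    """Rebuild at most one missing shard (marked as None) from the rest."""
    missing = [i for i, s in enumerate(shards) if s is None]
    if len(missing) > 1:
        raise ValueError("a single XOR parity shard can only repair one loss")
    if missing:
        present = [s for s in shards if s is not None] + [parity]
        shards[missing[0]] = xor_all(present)
    return shards

data = [b"AAAA", b"BBBB", b"CCCC"]   # equally sized data shards
parity = xor_all(data)               # the redundant shard written alongside

data[1] = None                       # pretend this shard rotted away
print(rebuild(data, parity))         # [b'AAAA', b'BBBB', b'CCCC']
```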
It does not rely on a locking database that can get corrupted; there simply isn’t one. Everything is filesystem-based CAS.
Duplicacy is, in my opinion, the most robust and resilient backup solution on the market, and I’ve tested a lot of backup tools; while it has its shortcomings, resilience, performance, and reliability are not among them.
I highly recommend reading the paper; it will answer a lot of your questions: Duplicacy paper accepted by IEEE Transactions on Cloud Computing
At the end of the day, you should be practicing proper backup strategies, such as 3-2-1 (at least 3 copies including the original, on 2 different types of media, with 1 copy off-site).
You can cheat by making secondary copies of the snapshots folder, since the metadata there is typically very small. That, alongside erasure coding, might protect you from potential recovery issues, although it isn’t foolproof, and again: always make more backups. Duplicacy makes this easy with the copy command.
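As a rough illustration (the paths below are assumptions; point them at your actual storage), keeping an extra copy of the snapshots folder can be as simple as a scripted copy, since it holds only small metadata files. Duplicacy’s copy command remains the proper way to replicate a full backup to another storage:

```python
# Hypothetical sketch: keep a dated extra copy of a storage's snapshots
# folder. The paths are made up; adjust them for your setup.
import shutil
from datetime import datetime

storage = "/mnt/backup/duplicacy-storage"                  # assumed location
dest = f"/mnt/other-disk/snapshots-copy-{datetime.now():%Y%m%d}"

# The snapshots folder holds only small metadata, so this is cheap.
shutil.copytree(f"{storage}/snapshots", dest)
print(f"Copied snapshots metadata to {dest}")
```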
A feature that I’ve been calling for, which would make Duplicacy even more robust, is the separation of metadata chunks from file content chunks. Then you could replicate the snapshot and metadata chunks even further, more than just 3 copies, since this data should be relatively small. Keep 3+ copies of erasure-coded data, plus umpteen metadata copies, and you have bulletproof backups.