How do I do a checkup, and how should I upgrade?

I’ve been happily running Duplicacy GUI on 3 machines (1 PC and two Macs) for the last 18 months. On the PC I run 2.1.0, on the Mac 2.1.1 Beta 4. I’ve just been running it normally, and I haven’t ever needed to use my backup.
I figure it is time to check on how things are going!
I tend to distrust old backups… As an old geek, I personally experienced the old days when old backups weren’t trustworthy!
So I’m tempted to start over: wipe my 1.1 TB at Backblaze B2, put the new Web Edition on my 3 machines, and just start from that clean slate. Of course, that will involve a long upload process, but I am on a gigabit fiber connection, so it’s not horrible.
Are there better suggestions? Can I easily verify the integrity of my backup at B2 and just migrate to the new Web Duplicacy?

Thanks for any comments!
Carl

You can just do a full restore and compare the old and restored files afterwards.
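A minimal sketch of that restore-and-compare workflow with the CLI; the snapshot ID, bucket name, revision number, and paths below are all placeholders to adapt to your setup:

```shell
# Restore a revision into an empty scratch directory, then diff it
# against the live files. All IDs and paths here are hypothetical.
mkdir -p /tmp/restore-check && cd /tmp/restore-check
duplicacy init my-mac b2://my-bucket    # same storage and snapshot ID as the backup
duplicacy list                          # find the revision number to restore
duplicacy restore -r 42                 # restore that revision into this directory
diff -rq /tmp/restore-check /path/to/original   # list any files that differ
```

`diff -rq` recurses and prints only the names of files that differ, so an empty output means the restored tree matches the live one.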

Instead of nuking everything, you could just create new snapshot IDs used only for the web UI, and keep the existing ones as they are (at least for another year).

Well, the full-restore-to-a-new-drive-and-compare method of verification would work… but gee, that seems arduous. And that would be a big “diff” run. So there is no verify-type command, I guess?

The keeping the old stuff and just picking up with the new web-UI is a good idea. Hmm… I might even just start a whole new repository and keep the old one until I’m confident in the new backup.

Yep, just use different repository IDs and de-duplication will re-use most of the chunks. No need to start from scratch imo.
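For example (storage URL and snapshot IDs are hypothetical), pointing a fresh repository at the same existing B2 storage under a new snapshot ID lets deduplication skip the chunks that are already there:

```shell
# In the directory you want to back up under the new setup:
cd /path/to/files
duplicacy init my-mac-v2 b2://my-bucket   # new snapshot ID, same existing storage
duplicacy backup -stats                   # most chunks already exist, so little is re-uploaded
```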

However, doing a full or partial restore OR check -files is definitely a good idea from time to time.

A simple check will only ensure all chunks that should exist do exist. The integrity of those chunks - the data itself - should be tested…
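The two levels of checking look roughly like this with the CLI (run from an initialized repository):

```shell
# Fast check: verifies that every chunk referenced by each snapshot
# exists in the storage, but does not read the chunk data itself.
duplicacy check

# Thorough check: downloads chunks and verifies file contents against
# their stored hashes; much slower, but catches corrupted chunks.
duplicacy check -files
```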

Recently came across a situation where a restore failed on one file because of a handful of chunks. They existed, but they couldn’t be decrypted. Using a hex editor, I saw that the chunks contained data, but it was completely wrong, despite each file beginning with the usual header that signifies a Duplicacy chunk. No idea how it happened. Luckily, the bad chunks were in a ‘copied’ storage, so the originals were fine and I was able to fix it by deleting the bad chunks and re-copying. Glad I tested it!


I forgot about this!