Corrupted chunks. Now what?

I’ve been hitting various problems with chunks going missing and finally decided to run a full check (`-all -fossils -resurrect -chunks -stats -persist`). It ended with the very troubling “8894 out of 202923 chunks are corrupted”.

So what do I do now? Presumably this means that many of my backup revisions are incomplete. Is there some way to identify which ones? That number seems abnormally high; is there something wrong with my setup that I should be investigating?

Curious, what storage are you using?


Anything above 0 is abnormally high. With more than 4% of your chunks corrupted, I doubt you’ll be able to salvage much, unless it was some one-time event where only the new chunks for certain revisions got corrupted.


I use B2 as the backend.

Very strange. I’ve been using B2 for several years and I’ve never had a single issue with chunks getting corrupted.

We send the SHA-1 checksum as one of the HTTP headers when uploading a chunk to B2. You can pick any corrupted chunk, find it in the B2 portal page, then click the file and you’ll see its SHA-1 checksum. Download the file locally and check whether the SHA-1 matches. This tells you whether it is a B2 issue or whether the chunk was already corrupted before it was uploaded.
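That manual comparison is easy to script. A minimal sketch using `sha1sum`; the chunk file and the expected hash below are stand-in example values, not real ones (in practice you would paste the checksum shown in the B2 portal):

```shell
# Expected SHA-1 as shown in the B2 portal (example value only).
expected="aaf4c61ddcc5e8a2dabede0f3b482cd9aea9434d"

# Stand-in for the chunk you downloaded from B2.
printf 'hello' > /tmp/chunk

# Recompute the SHA-1 of the local copy and compare.
actual=$(sha1sum /tmp/chunk | awk '{print $1}')
if [ "$actual" = "$expected" ]; then
  echo "match: B2 stored exactly what was uploaded (corruption predates upload)"
else
  echo "mismatch: the chunk was altered in storage or transit (B2-side issue)"
fi
```

A match means the corruption was already present when Duplicacy uploaded the chunk; a mismatch points at the storage side.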

You can also use the official B2 CLI. Its download command will automatically verify the SHA-1 of the downloaded file against the checksum header provided by Duplicacy at upload time.
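A sketch of the CLI route, assuming the classic `download-file-by-name` command (newer CLI releases have been renaming commands, so check `b2 --help` for your version); the bucket name and chunk path are hypothetical. The CLI recomputes the SHA-1 of the downloaded bytes and compares it against the checksum stored at upload time, failing loudly on a mismatch:

```shell
# Guarded so the sketch is a no-op on machines without the b2 CLI.
if command -v b2 >/dev/null 2>&1; then
  # Hypothetical bucket and chunk path; substitute a corrupted chunk's path.
  b2 download-file-by-name my-bucket chunks/00/0123abcd /tmp/chunk.dl \
    && echo "download ok: SHA-1 verified by the CLI"
else
  echo "b2 CLI not installed; skipping"
fi
```

If the CLI reports a checksum error on download, the corruption happened on the storage side rather than before upload.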