I have a duplicacy backup that has 265 increments over the course of several years. I ran:
duplicacy check -r 265 -chunks -threads 32 -persist
and it gave me:
42 out of 1405935 chunks are corrupted
This seems very bad… I have a few questions about this:
- How do I figure out which chunks are bad?
- It seems the way to fix this going forward is to (1) delete the corrupted chunks; (2) change the repository id; (3) make a new backup of the same files and hope those chunks get re-uploaded; (4) change the repository id back. Is this accurate? (I've written out a rough sketch of this just after this list.)
- How do I figure out what caused this and prevent it from happening in the future? I've run `check` (without `-chunks`) many times in the past and no errors were found, as all chunks were present.
Edit: I ran the same command (`duplicacy check -r 265 -chunks -threads 32 -persist`) a few more times and am left with more questions than answers. I ran it 4 more times after the first run. After run #1 (described above), I was told I had 42 corrupted chunks. After run #2, 5 chunks apparently verified correctly and I was left with 37 corrupted chunks. After run #3, it went down another 5, to 32 corrupted chunks. Run #4 didn't successfully verify any chunks. Run #5 verified one more chunk, and its output was:
31 out of 1405935 chunks are corrupted
Added 1 chunks to the list of verified chunks
What could cause this type of behavior, where chunks take many, many tries to verify successfully? Can I be confident that these chunks are actually not corrupted?
I am using Backblaze B2.
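
In the meantime, to get a better handle on which chunks are persistently bad versus only transiently failing, I'm thinking of saving the flagged chunks from each run and comparing them across runs, along these lines. Again, the "corrupt" pattern I'm matching on is an assumption about the log wording, and if the lines carry timestamps they would need stripping before comparing:

```bash
# Save several check runs and compare which chunks get flagged each time.
for i in 1 2 3; do
    duplicacy check -r 265 -chunks -threads 32 -persist 2>&1 | tee "check-run-$i.log"
    grep -i "corrupt" "check-run-$i.log" | sort > "flagged-run-$i.txt"
done

# Flagged in every run: more likely genuine corruption in the storage.
comm -12 flagged-run-1.txt flagged-run-2.txt | comm -12 - flagged-run-3.txt

# Flagged in run 1 but no longer in run 3: more likely a transient
# download/verification error on the B2 side.
comm -23 flagged-run-1.txt flagged-run-3.txt
```

It also looks like `check` keeps a list of already-verified chunks between runs (presumably that's what the "Added 1 chunks to the list of verified chunks" line is about), so whatever is still flagged after a few runs is the set I'd focus on.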