Question About Failure During File Checks

I decided to run a `check -files` to ensure my backups were intact, but about 3 hours into the process I ran into the error below. Unfortunately, I have no indication how far duplicacy got into the check, nor how long it will take to complete if run again. Wasabi doesn’t charge egress costs, but I’d rather not push their policy and have to start from the beginning. Is there a way to know how far it got, or to restart from where it failed?

Options: [-log check -storage xxx-backups -a -stats -files -a -tabular]
2020-01-08 09:13:02.623 INFO SNAPSHOT_CHECK 1 snapshots and 2 revisions
2020-01-08 09:13:02.631 INFO SNAPSHOT_CHECK Total chunk size is 829,235M in 172828 chunks
2020-01-08 09:13:22.567 INFO SNAPSHOT_VERIFY All files in snapshot bid-1 at revision 1 have been successfully verified
2020-01-09 00:22:24.441 ERROR DOWNLOAD_CHUNK Failed to download the chunk xxx read tcp x.x.x.x:38470->y.y.y.y:443: read: connection timed out


The log file indicates that it successfully checked everything in revision 1, out of 2 total revisions. Presumably it hit an issue with downloading a chunk while checking the second revision.

If you don’t want to start from the beginning, you can specify `-r 2` to check only the second revision.
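As a sketch, the re-run would look something like the command below, reusing the storage name and options shown in the log at the top of this thread (adjust them to match your own setup):

```shell
# Re-run the check for revision 2 only, verifying file contents.
# Storage name and options are taken from the log above; -r 2 restricts
# the check to the second revision instead of all revisions (-a).
duplicacy -log check -storage xxx-backups -r 2 -stats -files
```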


Thanks, I saw that revision 1 completed, but I think that was a smaller set. Revision 2 has a lot more, and I was hoping to find out what it had already checked so I wouldn’t have to start revision 2 from the beginning. I was hoping for something like knowing that chunks a–m were done so I could resume with n–z.

You can probably get an idea of which file it got to in the snapshot revision from the chunk it failed on, but I don’t think there’s a way to tell the check command to validate the integrity of only a subset of files.