I would like to confirm that the backups I store in Wasabi are valid. I have been running successful daily backups, checks, and prunes for the last month or so, but have now decided to run a more intensive file check to actually validate the backup sets:
-log check -storage main-nas-backups -r 6 -stats -files -a -tabular
I chose revision 6 because I thought I would start with a smaller set. (EDIT: oops… I just realized that `-r 6` and `-a` probably conflict. That was a mistake, but I'm not sure it changes the problem.)
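What I meant to run, presumably, was a check limited to a single revision, something like the sketch below (same flags as above, just without `-a`; the actual run goes through the web UI, so this is only my rough idea of the CLI equivalent):

```
duplicacy -log check -storage main-nas-backups -r 6 -stats -files -tabular
```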
I started the run via the web UI, which displayed a “1/28” progress bar; it began to use a lot of bandwidth and ran for about 13 hours. It ended with the following message:
2020-02-03 23:02:53.386 FATAL DOWNLOAD_CHUNK Chunk ...snip...6f32a8d8495dc0e3256be272d87f6cafacc665f618ae50437ad43 can't be found Chunk ...snip...6f32a8d8495dc0e3256be272d87f6cafacc665f618ae50437ad43 can't be found
I found this surprising because my daily backups and checks do not indicate any problems (though I gather a plain check only confirms that referenced chunks exist, while `-files` actually downloads and verifies them). At this point, I have very little confidence in the integrity of my offsite backups and would appreciate some guidance on the best next steps.
I am willing to abandon this set and create a new one if required, but Wasabi will charge extra for that. All I want is to ensure I have a valid backup, whether or not I can keep the existing backup data. I have a fairly simple setup (one backup set in one location, simple pruning; think home NAS backups), so I don't know where this process has gone wrong.
The full error:
Options: [-log check -storage main-nas-backups -r 6 -stats -files -a -tabular]
2020-02-03 10:18:01.902 INFO STORAGE_SET Storage set to wasabi://us-east-2@s3.us-east-2.wasabisys.com/...snip.../nas-backups
2020-02-03 10:18:02.875 INFO SNAPSHOT_CHECK Listing all chunks
2020-02-03 10:19:07.755 INFO SNAPSHOT_CHECK 1 snapshots and 28 revisions
2020-02-03 10:19:07.764 INFO SNAPSHOT_CHECK Total chunk size is 1541G in 327379 chunks
2020-02-03 10:19:51.497 INFO SNAPSHOT_VERIFY All files in snapshot bid-1 at revision 1 have been successfully verified
2020-02-03 23:02:53.386 FATAL DOWNLOAD_CHUNK Chunk ...snip...6f32a8d8495dc0e3256be272d87f6cafacc665f618ae50437ad43 can't be found Chunk ...snip...6f32a8d8495dc0e3256be272d87f6cafacc665f618ae50437ad43 can't be found
For comparison, a daily check run where everything looks fine:
Running check command from /cache/localhost/all
Options: [-log check -storage main-nas-backups -tabular -a -tabular]
2020-02-03 00:08:24.031 INFO STORAGE_SET Storage set to wasabi://us-east-2@s3.us-east-2.wasabisys.com/...snip.../nas-backups
2020-02-03 00:08:24.398 INFO SNAPSHOT_CHECK Listing all chunks
2020-02-03 00:09:34.735 INFO SNAPSHOT_CHECK 1 snapshots and 28 revisions
2020-02-03 00:09:34.750 INFO SNAPSHOT_CHECK Total chunk size is 1541G in 327379 chunks
2020-02-03 00:09:34.778 INFO SNAPSHOT_CHECK All chunks referenced by snapshot bid-1 at revision 1 exist
2020-02-03 00:09:35.910 INFO SNAPSHOT_CHECK All chunks referenced by snapshot bid-1 at revision 4 exist
2020-02-03 00:09:37.899 INFO SNAPSHOT_CHECK All chunks referenced by snapshot bid-1 at revision 5 exist
2020-02-03 00:09:39.778 INFO SNAPSHOT_CHECK All chunks referenced by snapshot bid-1 at revision 6 exist
...snip...
Some details about the backups (from the tabular check output):
snap | rev | | files | bytes | chunks | bytes | uniq | bytes | new | bytes |
bid-1 | 1 | @ 2020-01-05 13:14 -hash | 3353 | 489,718K | 100 | 410,896K | 7 | 12,524K | 100 | 410,896K |
bid-1 | 4 | @ 2020-01-07 17:34 | 437769 | 991,081M | 172677 | 828,858M | 39 | 74,409K | 172584 | 828,469M |
bid-1 | 5 | @ 2020-01-09 00:00 | 451230 | 1719G | 323455 | 1526G | 21 | 44,802K | 150817 | 734,279M |
bid-1 | 6 | @ 2020-01-11 00:00 | 451238 | 1719G | 323477 | 1526G | 38 | 56,174K | 43 | 65,036K |
...snip...
My prune parameters:
> Running prune command from /cache/localhost/all
> Options: [-log prune -storage main-nas-backups -keep 0:1800 -keep 90:730 -keep 30:365 -keep 7:180 -keep 3:90 -threads 8 -a]
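For context, this is how I read that retention schedule, assuming the documented `-keep n:m` semantics (keep one revision every n days for revisions older than m days, with n = 0 meaning delete). Again, the real run goes through the web UI, so this is just a sketch of the equivalent CLI call:

```
# My reading of the retention schedule ("-keep n:m" = keep one revision every
# n days for revisions older than m days; n = 0 deletes them entirely):
#   -keep 0:1800   delete every revision older than 1800 days
#   -keep 90:730   one revision per 90 days once older than 730 days
#   -keep 30:365   one revision per 30 days once older than 365 days
#   -keep 7:180    one revision per 7 days once older than 180 days
#   -keep 3:90     one revision per 3 days once older than 90 days
duplicacy -log prune -storage main-nas-backups \
  -keep 0:1800 -keep 90:730 -keep 30:365 -keep 7:180 -keep 3:90 \
  -threads 8 -a
```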
Thanks for the help.
Running: web v1.1.0, CLI v2.3.0, in Docker.