Pinging @gchen for visibility.
Have an issue with my `check` failing on my GCD:
```
...
All chunks referenced by snapshot dolores-c_programdata at revision 3358 exist
All chunks referenced by snapshot dolores-c_programdata at revision 3370 exist
Chunk eaa25cb5e76c31c3218877c6e5b6067d94c01b3516c12bfe6e7e69024847dda3 can't be found
Chunk ae84899894827746437a47f475e1400560af3e404acf6eca712b32821335ec84 can't be found
Failed to load chunks for snapshot dolores-c_programdata at revision 3374: unexpected end of JSON input
```
I believe the problem was caused by an attempt to restore deleted files via Google Workspace’s restore data feature. (Long story short: I was using Stablebit CloudDrive to store a separate bunch of non-important data, which got corrupted when the software resized the ‘drive’, so I needed to do a restore. Workspace lets you undelete files, even from the bin, up to 25 days later, but this ‘restore’ feature doesn’t let you filter which files to restore, only a date range across the entire GCD. Some Duplicacy snapshots inadvertently got restored along with it.)
So… I ended up with a bunch of snapshot revisions that reference chunks which were deleted in a prior `prune`.
My problem is I have a lot of these bad snapshots, and my only way to fix this is to delete each snapshot one by one, because Duplicacy halts as soon as it detects a missing chunk that a snapshot references. This is incredibly tedious! A `check` run with `-threads 16` can take 40 minutes.
So here’s my feature request: please make `-persist` error out only after all of the snapshots have been iterated over. A complete list of bad snapshots would let me fix this after just one `check` operation. Right now, there’s no end in sight: run `check`, delete one snapshot, run `check` again, ad infinitum.
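In the meantime, the check/delete cycle can at least be scripted. This is only a rough sketch, not something tested against a real storage: it assumes the `duplicacy check -threads N` and `duplicacy prune -id <snapshot> -r <revision>` invocations behave as documented, and that the failure line keeps the exact format quoted in the log above.

```shell
#!/bin/sh
# Pull the snapshot id and revision out of a Duplicacy failure line like:
#   Failed to load chunks for snapshot dolores-c_programdata at revision 3374: ...
parse_failure() {
  sed -n 's/^Failed to load chunks for snapshot \([^ ]*\) at revision \([0-9][0-9]*\).*/\1 \2/p'
}

# Sketch of the loop (only attempted where the duplicacy CLI is installed):
# re-run check, prune whichever revision it trips over, until check passes.
if command -v duplicacy >/dev/null 2>&1; then
  while :; do
    out=$(duplicacy check -threads 16 2>&1) && break
    set -- $(printf '%s\n' "$out" | parse_failure | head -n 1)
    [ $# -eq 2 ] || break   # check failed for some other reason; stop
    duplicacy prune -id "$1" -r "$2"
  done
fi
```

Even so, each iteration still pays the full cost of a `check` run up to the first bad snapshot, which is exactly why a one-pass `-persist` would be so much better.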
As a bonus, maybe make Duplicacy’s `prune` delete snapshot files first, and only then chunks. (I know that wasn’t the cause in this instance, but I’ve come across the issue before, as have others, albeit I was lucky there were fewer snapshots to delete.)
Thanks!