Noticed this error in my routine (hourly) backup job to Google Drive
2022-09-16 13:06:09.065 INFO SNAPSHOT_CHECK Listing all chunks
2022-09-16 13:06:12.143 ERROR DOWNLOAD_DECRYPT Failed to decrypt the file snapshots/critical-gdrv/47: No enough encrypted data (0 bytes) provided
Check if you have multiple files named snapshots/critical-gdrv/47, one of them being empty. GDrive allows multiple objects with the same name, and I’ve sometimes seen it create duplicates like that, with one file being the correct one and one being empty. The empty one then gets picked up, leading to weird errors. I’ve seen this with chunks, not snapshots, but the same can probably happen to any file.
If this is your case, just remove the empty file with the duplicate name.
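If you want to confirm the duplicate before deleting anything, you can check the folder listing programmatically. A minimal sketch – it assumes you exported the listing to JSON (e.g. with rclone lsjson, whose Name/Size fields are mirrored here), and the sample data is made up:

```python
import json
from collections import defaultdict

def find_empty_duplicates(listing):
    """Group entries by name; flag names that appear more than once
    with at least one zero-byte copy (the GDrive duplicate symptom)."""
    by_name = defaultdict(list)
    for entry in listing:
        by_name[entry["Name"]].append(entry["Size"])
    return {
        name: sizes
        for name, sizes in by_name.items()
        if len(sizes) > 1 and 0 in sizes
    }

# Example listing in the shape rclone lsjson produces (fabricated data):
listing = json.loads("""[
    {"Name": "47", "Size": 1843},
    {"Name": "47", "Size": 0},
    {"Name": "46", "Size": 1790}
]""")
print(find_empty_duplicates(listing))  # -> {'47': [1843, 0]}
```

Any name this flags is a candidate for removing the zero-byte copy.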
Don’t try to edit it. Download that file directly from Google Drive to your local machine and check its size.
If it is not zero – try Duplicacy again. Maybe it was a temporary glitch.
If it is indeed zero – something must have happened during upload, or prune.
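The size check is easy to script once you’ve downloaded the file. A minimal sketch – snapshot_47 and its dummy content are stand-ins so the snippet is self-contained; point it at wherever you actually saved the file:

```shell
# Create a dummy copy standing in for the downloaded snapshot file;
# replace snapshot_47 with your real download path.
printf 'dummy snapshot payload' > snapshot_47

size=$(wc -c < snapshot_47)
if [ "$size" -eq 0 ]; then
  echo "zero bytes: something happened during upload or prune"
else
  echo "non-zero ($size bytes): retry Duplicacy; it may have been transient"
fi

rm snapshot_47
```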
Look for upload logs that list that snapshot, and also check prune logs that mention that snapshot.
If you don’t find anything interesting in the logs – add -d (a global flag) to your backup and prune commands, wait for the issue to occur again, and then review the logs – they will be much more verbose.
To recover - delete the bad snapshot file.
It would also be useful to run rclone dedupe on your google drive to clean up any issues related to what @sevimo is describing (eventual consistency resulting in duplicated files).
This is to catch the issue the next time it happens, assuming the file was actually truncated or corrupted during upload – so the failure can be tracked down in case the present logs are not enough.
Did you find the backup logs that created the snapshot 47? Is there anything interesting? Maybe it failed, but google kept the partial file anyway? Please look up the backup log.
Ok, so it’s non-zero. What’s the size of the other snapshot files that don’t fail check?
If this one is significantly smaller – it got truncated; perhaps the Google Drive API glitched, or maybe it’s a Duplicacy bug. Delete the file.
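One way to spot a truncated snapshot among its neighbors is to compare each file’s size against the median. A quick sketch – the revision numbers and sizes are hypothetical, and the 50% threshold is an arbitrary choice:

```python
from statistics import median

def truncation_suspects(sizes, ratio=0.5):
    """Flag files whose size falls well below the median of their peers."""
    med = median(sizes.values())
    return [name for name, size in sizes.items() if size < med * ratio]

# Hypothetical snapshot file sizes in bytes:
sizes = {"45": 1810, "46": 1790, "47": 412, "48": 1835}
print(truncation_suspects(sizes))  # -> ['47']
```

Snapshot files for the same repository tend to be of similar size, so a big negative outlier is worth a closer look.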
Yeah, it looks like Duplicacy did not detect any issues while making that backup, which leaves two possibilities:
1. Some shenanigans at Google, where the API glitched and ended up saving a truncated file – but since it’s about the same size as its neighbors, this is very unlikely.
2. A local system (filesystem? memory?) glitch, where the file was correctly assembled but a corrupted copy was uploaded to the cloud.
Per your logs you are running on Windows; Windows is notorious for developing filesystem corruption out of the blue. I would recommend running chkdsk c: /f and rebooting to schedule a disk check at boot. Watch it progress; if it reboots the machine before getting to the login screen – you know it fixed some inconsistencies. I have witnessed these issues affecting all sorts of subsystems, from drivers not installing to Windows services configuration failures. Or I should say: I have seen and spent time fruitlessly debugging many weird issues on Windows that ultimately disappeared after a chkdsk… Run it, it won’t hurt.
I would also suggest running memtest just to rule out failing RAM.
Having done all of this, if you keep seeing this issue periodically – then it deserves some scrutiny: I would enable debug logging for backup jobs to get more verbose log information; perhaps reviewing the upload code would be justified, followed by contacting Google Drive support to get clarity on what they have seen on their end with the problematic file.
It is not unheard of for an API to experience temporary failures – you can find discussions here of B2 having an issue where they were returning bad data, and of OneDrive saving truncated files… I have never heard anything about Google Drive [yet!], but nothing is infallible.
Once you check your disk and memory, I would recommend running check -chunks, since you have free egress. This will download all chunks and verify them, just in case the corruption is not an isolated issue.