No enough encrypted data (0 bytes) provided

Noticed this error in my routine (hourly) backup job to Google Drive

Full(er) log:

2022-09-16 13:06:09.065 INFO SNAPSHOT_CHECK Listing all chunks
2022-09-16 13:06:12.143 ERROR DOWNLOAD_DECRYPT Failed to decrypt the file snapshots/critical-gdrv/47: No enough encrypted data (0 bytes) provided
Failed to decrypt the file snapshots/critical-gdrv/47: No enough encrypted data (0 bytes) provided

This specific log is from the (failed) check job

What am I supposed to understand from this?

Check if you have multiple files named snapshots/critical-gdrv/47, one of them being empty. GDrive allows multiple objects with the same name, and I’ve seen Duplicacy sometimes creating them as such, with one file being the correct one and one being empty. The empty one then gets picked up, leading to weird errors. I’ve seen it for chunks, not snapshots, but the same can probably still happen for any file.

If this is your case, just remove the empty file with the duplicate name.
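If you have rclone configured against the same Google Drive, something like this would list size and path for every object in that folder, so an empty duplicate would stand out as a 0 (the remote name and storage path below are just placeholders for whatever your setup uses):

rclone lsf --format "sp" gdrive-remote:duplicacy-storage/snapshots/critical-gdrv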

There are no empty files in that folder (I looked both at the mounted drive and in the web interface). In particular, there’s also no duplicate of any file in that folder.

However, when I open the file it does indeed seem to contain nothing, even though it is reported as having at least 1KB of data.

So, what do I do?

And why would it happen to begin with?

Don’t try to edit it. Download that file directly from Google Drive to your local machine and check its size.

If it is not zero – try Duplicacy again. Maybe it was a temporary glitch.
If it is indeed zero – something must have happened during upload or prune.
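If you have rclone pointed at the same drive, something along these lines shows the exact object size without opening the file (remote name and storage path are placeholders again):

rclone lsl gdrive-remote:duplicacy-storage/snapshots/critical-gdrv/47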

Look for upload logs that list that snapshot, and also check prune logs that mention that snapshot.

If you don’t find anything interesting in the logs – add -d to your backup and prune commands (global flag) and wait for the issue to occur again, and then review logs – they will be much more verbose.
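On the CLI this is just the global -d option placed in front of the command, roughly like this (your backup options themselves stay whatever you already use; same idea in front of prune once you start pruning):

duplicacy -d backup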

To recover, delete the bad snapshot file.
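For example via rclone, or just delete it in the Drive web interface (remote name and storage path are placeholders; the snapshots/critical-gdrv/47 part is the file from the error above):

rclone deletefile gdrive-remote:duplicacy-storage/snapshots/critical-gdrv/47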

It would also be useful to run rclone dedupe on your Google Drive to clean up any issues related to what @sevimo is describing (eventual consistency resulting in duplicated files).
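Something along these lines, run against the storage root (remote name and path are placeholders; rclone will ask interactively what to do with each duplicate, and you can add --dry-run first to see what it would touch):

rclone dedupe gdrive-remote:duplicacy-storage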

Not sure what you mean by “try it again” – try what? The backups are working; it’s the check step that fails.

I don’t prune (yet)

It occurs only during check.

I downloaded it to a local HDD.
The size is non-zero: it’s 1062 bytes (but when opened it looks empty).

Try check.

This is to catch the issue the next time it happens, assuming the file was actually getting truncated or corrupted during upload – so that the failure can be tracked down in case the present logs are not enough.

Did you find the backup log from the run that created snapshot 47? Is there anything interesting in it? Maybe that backup failed, but Google kept the partial file anyway? Please look up that backup log.

Ok, so it’s non-zero. What’s the size of the other snapshot files that don’t fail check?

If this one is significantly smaller – it got truncated; perhaps the Google Drive API glitched, or maybe it’s a Duplicacy bug. Delete the file.

Backups run every 2 hours, check runs every 6 –
the backup always succeeds, the check now always fails.

I don’t understand what I should try again.

I don’t see anything special in the logs of that particular backup:

22:00:04.185 INFO SNAPSHOT_FILTER Loaded 30 include/exclude pattern(s)
2022-09-04 22:00:07.560 INFO BACKUP_END Backup for C:\root at revision 47 completed
2022-09-04 22:00:07.560 INFO BACKUP_STATS Files: 129886 total, 89,186M bytes; 0 new, 0 bytes
2022-09-04 22:00:07.560 INFO BACKUP_STATS File chunks: 18034 total, 89,186M bytes; 0 new, 0 bytes, 0 bytes uploaded
2022-09-04 22:00:07.560 INFO BACKUP_STATS Metadata chunks: 9 total, 37,166K bytes; 0 new, 0 bytes, 0 bytes uploaded
2022-09-04 22:00:07.560 INFO BACKUP_STATS All chunks: 18043 total, 89,223M bytes; 0 new, 0 bytes, 0 bytes uploaded
2022-09-04 22:00:07.560 INFO BACKUP_STATS Total running time: 00:00:06

After deleting the “47” file, the checks now complete successfully.

Some are identical, some a bit different. And no, it’s not significantly smaller; it’s the same size as its “neighbors”.

Yeah, it looks like Duplicacy did not detect any issues while making that backup, which leaves two possibilities:

  • Some shenanigans at Google, where the API glitched somehow and ended up saving a truncated file – but since it’s about the same size as its neighbors, this is very unlikely.
  • A local system (filesystem? memory?) glitch where the file was correctly assembled, but then a corrupted copy was uploaded to the cloud.

Per your logs you are running on Windows; Windows is notorious for having filesystem corruption out of the blue. I would recommend running chkdsk c: /f and rebooting to schedule a disk check at boot. Watch it progress; if it reboots the machine before getting to the login screen – you know it fixed some inconsistencies. I have witnessed these issues affecting all sorts of subsystems, from drivers not installing to Windows services configuration failures. Or I should say: I have seen and spent time fruitlessly debugging many weird issues on Windows that ultimately disappeared after a chkdsk… Run it, it won’t hurt.

I would also suggest running memtest just to rule out failing RAM.

Having done all of this, if you keep seeing this issue periodically, then it would deserve some scrutiny: I would enable debug logging for backup jobs to have more verbose log information, and perhaps reviewing the upload code would be justified, along with contacting Google Drive support to get clarity on what they have seen on their end with the problematic file.

It is not unheard of for an API to experience temporary failures – you can find discussions here of B2 having an issue where they were returning bad data, and OneDrive was saving truncated files… I have never heard anything about Google Drive [yet!], but nothing is infallible.

Once you check your disk and memory, I would recommend running check -chunks, since you have free egress. This will download all chunks and verify them, just in case the corruption is not an isolated issue.
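For reference, that is just the following, run from the repository directory (it re-downloads every chunk, so expect it to take a while):

duplicacy check -chunks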