Receiving “Failed to decrypt the chunk” messages

Over the past few weeks I’ve had a recurring issue that I attempt to correct, only to have it return again. When performing a scheduled backup on one of my laptops, I receive the following error message:

Running backup command from /Users/tom/.duplicacy-web/repositories/localhost/0 to back up /Users/tom
Options: [-log backup -storage cifs -threads 4 -stats]
2020-05-26 23:00:01.777 INFO REPOSITORY_SET Repository set to /Users/tom
2020-05-26 23:00:01.777 INFO STORAGE_SET Storage set to /Volumes/storage/Backups/duplicacy
2020-05-26 23:00:03.001 WARN DOWNLOAD_RETRY Failed to decrypt the chunk 90e88c5673672b2648194a81be282d54a9bb95d882b6c4e6a4fe048f2b0f69c1: No enough encrypted data (0 bytes) provided; retrying
2020-05-26 23:00:03.008 WARN DOWNLOAD_RETRY Failed to decrypt the chunk 90e88c5673672b2648194a81be282d54a9bb95d882b6c4e6a4fe048f2b0f69c1: No enough encrypted data (0 bytes) provided; retrying
2020-05-26 23:00:03.014 WARN DOWNLOAD_RETRY Failed to decrypt the chunk 90e88c5673672b2648194a81be282d54a9bb95d882b6c4e6a4fe048f2b0f69c1: No enough encrypted data (0 bytes) provided; retrying
2020-05-26 23:00:03.021 ERROR DOWNLOAD_DECRYPT Failed to decrypt the chunk 90e88c5673672b2648194a81be282d54a9bb95d882b6c4e6a4fe048f2b0f69c1: No enough encrypted data (0 bytes) provided
Failed to decrypt the chunk 90e88c5673672b2648194a81be282d54a9bb95d882b6c4e6a4fe048f2b0f69c1: No enough encrypted data (0 bytes) provided

The previous backup succeeds and creates revision 341.

Running backup command from /Users/tom/.duplicacy-web/repositories/localhost/0 to back up /Users/tom
Options: [-log backup -storage cifs -threads 4 -stats]
2020-05-26 18:51:30.898 INFO REPOSITORY_SET Repository set to /Users/tom
2020-05-26 18:51:30.898 INFO STORAGE_SET Storage set to /Volumes/storage/Backups/duplicacy
2020-05-26 18:51:33.714 INFO BACKUP_START Last backup at revision 340 found
2020-05-26 18:51:33.714 INFO BACKUP_INDEXING Indexing /Users/tom
2020-05-26 18:51:33.714 INFO SNAPSHOT_FILTER Parsing filter file /Users/tom/.duplicacy-web/repositories/localhost/0/.duplicacy/filters
2020-05-26 18:51:33.717 INFO SNAPSHOT_FILTER Loaded 19 include/exclude pattern(s)
[ SNIP BACKUP CONTENTS ]
2020-05-26 18:51:48.662 INFO BACKUP_END Backup for /Users/tom at revision 341 completed
2020-05-26 18:51:48.662 INFO BACKUP_STATS Files: 74939 total, 81,238M bytes; 198 new, 248,655K bytes
2020-05-26 18:51:48.662 INFO BACKUP_STATS File chunks: 22082 total, 81,481M bytes; 19 new, 107,717K bytes, 106,016K bytes uploaded
2020-05-26 18:51:48.665 INFO BACKUP_STATS Metadata chunks: 11 total, 31,433K bytes; 11 new, 31,433K bytes, 8,673K bytes uploaded
2020-05-26 18:51:48.665 INFO BACKUP_STATS All chunks: 22093 total, 81,511M bytes; 30 new, 139,151K bytes, 114,690K bytes uploaded
2020-05-26 18:51:48.665 INFO BACKUP_STATS Total running time: 00:00:18
2020-05-26 18:51:48.665 WARN BACKUP_SKIPPED 6 files were not included due to access errors

The 6 skipped files are socket files from gnupg running in the background, and obviously aren’t normal files. At the next backup, 6 hours later, I receive the “Failed to decrypt the chunk” message. If I run a check against this host’s ID, all previous revisions pass and only the latest one fails.
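As an aside, I could probably silence those skipped-file warnings by excluding the sockets in my filters file. Something along these lines should work, though I haven’t double-checked the pattern against Duplicacy’s wildcard syntax, and the .gnupg path is just where my sockets happen to live:

-.gnupg/S.*    # exclude the gpg-agent/dirmngr socket files under ~/.gnupg

The warnings are cosmetic, though; the chunk error is the real problem.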

To fix this issue, I have attempted pruning revision 341, then pruning my storage with the “-exclusive” command line option to make sure that the corrupt chunk file is removed. Sometimes I need to manually delete the file, and other times the prune feature will remove the file on its own. I suspect this could be down to a version difference, though, because all machines now appear to remove the chunk file on their own.
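For reference, the cleanup I run from the repository directory looks roughly like this (storage name “cifs” as in the log above; the revision number varies, and I add -exhaustive so the sweep actually finds chunks no longer referenced by any snapshot):

duplicacy prune -storage cifs -r 341                    # drop the failed revision
duplicacy prune -storage cifs -exclusive -exhaustive    # sweep the storage so the corrupt chunk file itself is deleted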

The backup is destined for a CIFS file share on an Ubuntu server, which I have mounted from my Mac laptop. Multiple machines back up to the Ubuntu server, which will then copy the local storage to Backblaze. Because of the corrupt or empty chunk file, the storage checks on my Ubuntu server will fail, as will the copy task for Backblaze.
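For completeness, the moving parts are roughly these (the storage names and the server/share names below are placeholders; the actual jobs are configured in the web UI):

mount_smbfs //tom@server/storage /Volumes/storage    # CIFS share mounted on the Mac laptop
duplicacy copy -from local -to backblaze             # scheduled copy job on the Ubuntu server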

I don’t know what’s causing this issue, but the fact that it keeps returning after I’ve attempted to clean it up is a concern. Worse, after each cleanup I can only get manual backups to complete before the scheduled backups start failing again, which means my last good backup keeps getting older and older.

Any tips on how I can track down what’s going on here?

Laptop:
Duplicacy Web Edition 1.3.0
CLI duplicacy_osx_x64_2.5.2
Mac OS X 10.15.4

Server:
Duplicacy Web Edition 1.3.0
CLI duplicacy_linux_x64_2.5.2
Ubuntu 20.04 LTS

I have been using this software for years but have been having this same issue lately. I have a local backup and a remote one that is managed with the duplicacy copy command. The copy command has been failing for a couple of months or so due to this same error message about 0 byte chunks. I just ran a search and there are 1100 zero-byte chunks on my local drive…
find /mnt/backups/ -type f -empty >> /mnt/backups/debugEmptyChunks.list

Like OP, I removed revisions up through the last successful run, but I continue to receive this error. There is a possibility that the 0 byte chunks are from older snapshots.

My guess is we will want to delete all of the 0 byte chunks, run a check to find all affected snapshots (I’m assuming all of my snapshots ¯\_(ツ)_/¯), remove those snapshots (without pruning), then run the backup again, followed by a prune exhaustive… and keep an eye out for 0 byte chunks in the future. It is concerning that this is happening.

find /mnt/backups/ -type f -empty -print -delete >> /mnt/backups/debugEmptyChunks.list

@Charles is this a samba drive too?

When uploading to a local or samba drive, Duplicacy first writes to a temporary file, syncs and then closes the file, and finally renames it into place. I don’t know how it is still possible to end up with a 0-size file with these steps, unless a samba drive needs some extra step.
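In shell terms the upload sequence is roughly equivalent to the following (an illustration of the order of operations only, not the actual code; the paths are made up):

cp chunk.data /storage/chunks/90/.tmp.random                      # write the full chunk under a random temporary name
sync                                                              # flush it to disk
mv /storage/chunks/90/.tmp.random /storage/chunks/90/e88c56...    # rename it into place

On a local filesystem the final rename is atomic, so you should end up with either no chunk file or a complete one, never a truncated one.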

Note that this issue was reported before (Failed to decrypt the file snapshots), and may have been caused by a kernel panic. The sync operation was added later.

A recent change that might be relevant was to ignore the sync error if the operation is not supported (Fix "Failed to upload the chunk ... sync ...: operation not supported" by fbarthez · Pull Request #590 · gilbertchen/duplicacy · GitHub).

Mine is fully local with both drives attached to the same machine. OP didn’t mention using samba. CIFS is a network attached file system that looks like a local disk. I’ve had issues with those in the past since they lack some advanced file system functionality like Linux ACLs etc., so if I wasn’t experiencing the same thing with local drives, that would be the first thing I’d investigate.

PS/edit: (EDIT 2: this is not the same thing. Nevermind.) Could this at all be related to this post from a few days ago? Maybe it’s not an issue with Dropbox but something further up the stack. Duplicacy failing with Dropbox (0 byte files, restore impossible)

A few thoughts I have on it…

What does duplicacy do with 0 byte/sparse files during the backup process? Skip them? Encrypt them but end up with a non-sparse file, etc.?

(I now see what you mean about the temp file avoiding this. Nevermind.) Is it possible that the file could be opened, then the backup fails due to an interruption like a power outage or a disconnected drive, and on the next backup it continues but sees that the file name is already there and skips it? (I had some system stability issues due to a hardware failure for a bit, which has since been fixed.)

(Nevermind, not the same issue.) If you were able to reproduce it on Dropbox with threading, then maybe it’s possible here too, just less likely due to the more robust nature of other storages. I’ve always used threading.

UPDATE:
I downloaded the latest version and performed a check with the new -chunks flag on, and it was able to detect… something. I was hoping it would identify which snapshots are affected, but it seems to stop running after it finds 1 bad chunk and doesn’t say which snapshots it’s in.

I figured out what was wrong with that dropbox issue. It was caused by retrying without resetting the reader that reads the chunk, so it is unrelated to this issue.

The temporary file is given a random name so it is unlikely that an incomplete temporary file will be picked up by the next run.

This is perhaps the only thing you can do now. I’ll add a size check to the check command so you’ll be able to catch such errors immediately after a backup when you run the check command.

@372800b839af73b7fc0b So the solution is clear: it’s not enough to prune using -exclusive. You would need to use both -exclusive and -exhaustive on every snapshot that is affected, and currently there isn’t a good way to tell which ones are affected using only duplicacy, so it’s better to delete these manually, then run a check without the -chunks or -files flags. Instead of exhaustively pruning these, though, I recommend manually deleting or moving the snapshots out of the snapshot folder. This will leave the chunks that have already been generated properly in the destination to save time when you back up again.

So

  1. Update to the latest version to get the fix for file.sync() in case your CIFS doesn’t support that
  2. find /mnt/backups/ -type f -empty -delete #Delete the 0 byte chunks. Might be different on your system, especially on MacOS. Maybe test this without the delete flag first too.
  3. Run a check
  4. Delete/ move all snapshots with missing chunks out of the snapshot folder
  5. Run a backup as you normally would
  6. Run prune with the -exhaustive flag to remove all orphaned chunks from your destination storages (a rough consolidated sketch of these steps follows)
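Put together, on the storage machine that’s roughly the following (my paths; run the duplicacy commands from a repository that points at this storage, and note that step 5 runs on each source machine, not on the server):

find /mnt/backups/ -type f -empty -print -delete    # step 2: remove the 0 byte chunks
duplicacy check                                     # step 3: reports snapshots with missing chunks
# step 4: move the affected snapshot files out of snapshots/<backup id>/ in the storage by hand
duplicacy backup                                    # step 5: re-run the backup on each source machine
duplicacy prune -exclusive -exhaustive              # step 6: remove the now-orphaned chunks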

I’ve done exactly this, but I continue to have issues cropping up. I’ve cleaned up the environment, and then at some point I have a backup that kicks off and creates the situation again. I’m trying to track down what’s causing the initial problem rather than how to clean up after it happens.

@gchen I agree that this seems nearly impossible… but perhaps it would be best to change the error handling so that it checks whether the file at fullPath is 0 bytes, and in that else-if branch deletes both the temp and destination files and throws an error. That’s the only possible issue I see. If this is easily reproducible you could always test it by not deleting the temp file.
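In pseudo-shell terms, the extra check after the rename would just be something like this (illustrative only; $tempPath and $fullPath stand in for the variables in the Go upload code):

if [ ! -s "$fullPath" ]; then       # rename "succeeded" but the destination is 0 bytes
    rm -f "$tempPath" "$fullPath"   # clean up both files
    echo "chunk came out empty" >&2
    exit 1
fi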

edit: I checked a few files in my list of 0 byte chunks and no temporary files were left behind. Perhaps the move failed but somehow left an empty file behind?

Since there seems to be no help for discovering the source of my corruption, is there a procedure I should be using to identify which backups certain chunk files belong to on my NAS? Or do I have to search through the plain text snapshot files on each device individually? Is there a procedure I should be following to be completely sure that any reference to a chunk file is removed?

You can run find /storage/chunks -type f -empty | xargs rm to remove the 0-size files. Now these chunks become missing chunks, and you can run a check command to find the snapshots that are affected, and a prune command to remove those snapshots. The check command may fail if a metadata chunk is missing, in which case you can remove the corresponding snapshot file manually from the storage. Once the check command no longer complains about missing chunks, run duplicacy prune -exclusive -exhaustive to remove unreferenced chunks.

In the next release I’ll make the check command check for 0-size chunk files by default. For now you can run check -chunks after each backup run, if that doesn’t take too long.
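That is, after each backup run something like this from the repository (storage name from your setup):

duplicacy check -storage cifs -chunks    # downloads and verifies every chunk

Without -chunks the check only verifies that every referenced chunk exists in the storage, which is why a 0-size chunk file currently slips through.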


I cleaned up, ran checks, made a backup successfully, attempted an additional backup and it failed with the same error message. I’m really at a loss for what’s suddenly causing this problem, but not being able to back up one of my laptops makes my licenses probably not worth renewing.

Since the storage server is running Ubuntu, I would suggest switching to SFTP for accessing the same storage. Generally speaking SFTP is more reliable than Samba/CIFS.

Switching to SFTP should be fairly easy. Create a new SFTP storage accessing the same directory on the Ubuntu server, and create a new backup with the same source directory and same backup id, but with the new SFTP storage as the destination. Then you should be able to run a backup right away (of course you first need to clean up the storage to get rid of the existing 0-size chunks).
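The SFTP storage URL would look something like sftp://tom@server//path/to/Backups/duplicacy (user, host, and path are placeholders for your setup; use a double slash if the storage path is absolute rather than relative to the SSH user’s home directory). When adding it in the web UI, enter the same storage password so the existing encrypted chunks remain readable.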