Over the past few weeks I’ve had a recurring issue that I attempt to correct, only to have it return. When performing a scheduled backup on one of my laptops, I receive the following error message:
Running backup command from /Users/tom/.duplicacy-web/repositories/localhost/0 to back up /Users/tom
Options: [-log backup -storage cifs -threads 4 -stats]
2020-05-26 23:00:01.777 INFO REPOSITORY_SET Repository set to /Users/tom
2020-05-26 23:00:01.777 INFO STORAGE_SET Storage set to /Volumes/storage/Backups/duplicacy
2020-05-26 23:00:03.001 WARN DOWNLOAD_RETRY Failed to decrypt the chunk 90e88c5673672b2648194a81be282d54a9bb95d882b6c4e6a4fe048f2b0f69c1: No enough encrypted data (0 bytes) provided; retrying
2020-05-26 23:00:03.008 WARN DOWNLOAD_RETRY Failed to decrypt the chunk 90e88c5673672b2648194a81be282d54a9bb95d882b6c4e6a4fe048f2b0f69c1: No enough encrypted data (0 bytes) provided; retrying
2020-05-26 23:00:03.014 WARN DOWNLOAD_RETRY Failed to decrypt the chunk 90e88c5673672b2648194a81be282d54a9bb95d882b6c4e6a4fe048f2b0f69c1: No enough encrypted data (0 bytes) provided; retrying
2020-05-26 23:00:03.021 ERROR DOWNLOAD_DECRYPT Failed to decrypt the chunk 90e88c5673672b2648194a81be282d54a9bb95d882b6c4e6a4fe048f2b0f69c1: No enough encrypted data (0 bytes) provided
Failed to decrypt the chunk 90e88c5673672b2648194a81be282d54a9bb95d882b6c4e6a4fe048f2b0f69c1: No enough encrypted data (0 bytes) provided
The previous backup succeeded and created revision 341:
Running backup command from /Users/tom/.duplicacy-web/repositories/localhost/0 to back up /Users/tom
Options: [-log backup -storage cifs -threads 4 -stats]
2020-05-26 18:51:30.898 INFO REPOSITORY_SET Repository set to /Users/tom
2020-05-26 18:51:30.898 INFO STORAGE_SET Storage set to /Volumes/storage/Backups/duplicacy
2020-05-26 18:51:33.714 INFO BACKUP_START Last backup at revision 340 found
2020-05-26 18:51:33.714 INFO BACKUP_INDEXING Indexing /Users/tom
2020-05-26 18:51:33.714 INFO SNAPSHOT_FILTER Parsing filter file /Users/tom/.duplicacy-web/repositories/localhost/0/.duplicacy/filters
2020-05-26 18:51:33.717 INFO SNAPSHOT_FILTER Loaded 19 include/exclude pattern(s)
[ SNIP BACKUP CONTENTS ]
2020-05-26 18:51:48.662 INFO BACKUP_END Backup for /Users/tom at revision 341 completed
2020-05-26 18:51:48.662 INFO BACKUP_STATS Files: 74939 total, 81,238M bytes; 198 new, 248,655K bytes
2020-05-26 18:51:48.662 INFO BACKUP_STATS File chunks: 22082 total, 81,481M bytes; 19 new, 107,717K bytes, 106,016K bytes uploaded
2020-05-26 18:51:48.665 INFO BACKUP_STATS Metadata chunks: 11 total, 31,433K bytes; 11 new, 31,433K bytes, 8,673K bytes uploaded
2020-05-26 18:51:48.665 INFO BACKUP_STATS All chunks: 22093 total, 81,511M bytes; 30 new, 139,151K bytes, 114,690K bytes uploaded
2020-05-26 18:51:48.665 INFO BACKUP_STATS Total running time: 00:00:18
2020-05-26 18:51:48.665 WARN BACKUP_SKIPPED 6 files were not included due to access errors
The 6 skipped files are socket files from gnupg running in the background, so they obviously aren’t normal files. At the next scheduled backup, 6 hours later, I receive the “Failed to decrypt the chunk” message. If I check the backups under this host’s ID, all previous revisions pass, and only the last one fails.
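For reference, this is roughly how I run the check from the laptop. It’s a sketch, not the exact web edition invocation; the storage name cifs and the repository path come from the logs above:

cd /Users/tom/.duplicacy-web/repositories/localhost/0
# List the revisions stored for this backup ID
duplicacy list -storage cifs
# Verify that every chunk referenced by each revision exists
duplicacy check -storage cifs -a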
To fix this issue, I have tried pruning revision 341, then pruning my storage with the “-exclusive” command line option to make sure the corrupt chunk file is removed. Sometimes I need to delete the file manually; other times prune removes it on its own. I suspect the difference may be down to a version mismatch, though, because all machines now appear to remove the chunk file on their own.
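The cleanup looks roughly like this. The rm path is an assumption on my part, based on the default two-character nesting under chunks/ and the chunk ID from the error above:

cd /Users/tom/.duplicacy-web/repositories/localhost/0
# Remove the failed revision, then drop chunks no longer referenced
duplicacy prune -storage cifs -r 341
duplicacy prune -storage cifs -exclusive -exhaustive
# When prune leaves the bad chunk behind, delete it by hand
rm /Volumes/storage/Backups/duplicacy/chunks/90/e88c5673672b2648194a81be282d54a9bb95d882b6c4e6a4fe048f2b0f69c1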
The backup is destined for a CIFS file share on an Ubuntu server, which I have mounted on my Mac laptop. Multiple machines back up to the Ubuntu server, which then copies the local storage to Backblaze. Because of the corrupt or empty chunk file, the storage checks on the Ubuntu server fail, as does the copy task to Backblaze.
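On the server side, the scheduled tasks boil down to something like the following; the storage names local and b2 are placeholders for my actual configuration:

# Verify the local storage integrity, then replicate to Backblaze
duplicacy check -storage local -a
duplicacy copy -from local -to b2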
I don’t know what’s causing this issue, but the fact that it keeps returning after I’ve attempted to clean it up is a concern. And because after each cleanup I’ve only been able to create manual backups, each followed by failed scheduled backups, my last good backup keeps getting older and older.
Any tips on how I can track down what’s going on here?
Laptop:
Duplicacy Web Edition 1.3.0
CLI duplicacy_osx_x64_2.5.2
Mac OS X 10.15.4
Server:
Duplicacy Web Edition 1.3.0
CLI duplicacy_linux_x64_2.5.2
Ubuntu 20.04 LTS