Snapshot error - chunk hash is not a valid hex string [Web Edition]

I’ve been using the paid Duplicacy Web Edition for several months without issue, but today I found that one of my three backup jobs has been failing for the last two weeks. The log shows the information below.

I found this bug report referencing something similar, but I’m not using the CLI or local storage (I’m using Dropbox).

I tried to prune the problem revision with -id image01 -r 2301 -exclusive, but that fails with the same error message (“ERROR SNAPSHOT_CHUNK Failed to load chunks for snapshot image01 at revision 2301…”). I’ve also tried removing all caches under ~/.duplicacy-web, to no avail.

How can I get this working again?

Running backup command from /home/user/.duplicacy-web/repositories/localhost/1 to back up /media/user/image01
Options: [-log backup -storage Documents -threads 1 -stats]
2023-09-26 21:44:08.023 INFO REPOSITORY_SET Repository set to /media/user/image01
2023-09-26 21:44:08.023 INFO STORAGE_SET Storage set to dropbox://Documents
2023-09-26 21:44:12.996 INFO BACKUP_START Last backup at revision 2301 found
2023-09-26 21:44:13.175 ERROR SNAPSHOT_PARSE Failed to load chunks specified in the snapshot image01 at revision 2301: The chunk hash c68825373047d5bbc05a6<783af100c0dca696eb6f2a37b9d7ef2e75b5e78917 is not a valid hex string
Failed to load chunks specified in the snapshot image01 at revision 2301: The chunk hash c68825373047d5bbc05a6<783af100c0dca696eb6f2a37b9d7ef2e75b5e78917 is not a valid hex string

Running duplicacy_web_linux_x64_1.7.2 on Linux Mint 21.2
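The error itself is easy to reproduce outside Duplicacy: a chunk hash must consist of lowercase hex digits only, and the stray `<` makes this one fail any hex check. A minimal shell sketch, using the hash copied from the log above:

```shell
# The chunk hash from the log, with the stray '<' in the middle.
hash='c68825373047d5bbc05a6<783af100c0dca696eb6f2a37b9d7ef2e75b5e78917'

# A valid chunk hash is lowercase hex only, so this check fails.
if printf '%s' "$hash" | grep -Eq '^[0-9a-f]+$'; then
  echo "valid hex"
else
  echo "not a valid hex string"
fi
```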

Remove the directory /home/user/.duplicacy-web/repositories/ and try again. This removes the local cache that stores metadata chunks; one of the chunk files there is obviously corrupted.
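In shell terms this is just the following (stop the web UI first; the path is the default Web Edition cache location mentioned above, and the cache is rebuilt automatically on the next operation):

```shell
# Stop the Duplicacy web UI before touching its cache, then remove
# the local cache of metadata chunks; it is rebuilt on the next run.
CACHE="$HOME/.duplicacy-web/repositories"
rm -rf "$CACHE" && echo "cache removed"
```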

Okay, I quit the app, removed /home/user/.duplicacy-web/repositories/, restarted the app, then ran both the prune (with -id image01 -r 2301 -exclusive) and the backup. Both still failed with the same error:

Running backup command from /home/user/.duplicacy-web/repositories/localhost/1 to back up /media/user/image01
Options: [-log backup -storage Documents -threads 1 -stats]
2023-09-28 15:14:05.460 INFO REPOSITORY_SET Repository set to /media/user/image01
2023-09-28 15:14:05.460 INFO STORAGE_SET Storage set to dropbox://Documents
2023-09-28 15:14:10.030 INFO BACKUP_START Last backup at revision 2301 found
2023-09-28 15:14:10.181 ERROR SNAPSHOT_PARSE Failed to load chunks specified in the snapshot image01 at revision 2301: The chunk hash c68825373047d5bbc05a6<783af100c0dca696eb6f2a37b9d7ef2e75b5e78917 is not a valid hex string
Failed to load chunks specified in the snapshot image01 at revision 2301: The chunk hash c68825373047d5bbc05a6<783af100c0dca696eb6f2a37b9d7ef2e75b5e78917 is not a valid hex string

I’ve had several problems with Dropbox in the past, and I don’t consider it ideal storage for backups with thousands of files (chunks) per folder. Like other consumer “drives” (Google Drive, OneDrive, etc.), it is built for synchronizing files with your computer or making them available via a web interface, not for this kind of workload.

That said, in your case it is clear that Duplicacy is choking on the < character in the middle of the hash, which genuinely makes it invalid hex. I can’t imagine how that character ended up there.

If you don’t mind losing revision 2301, go to the snapshots folder on the storage corresponding to that snapshot ID and delete the file 2301, then go to the chunks → c6 folder and delete the file 8825373047d5bbc05a6<783af100...

Then run a prune with -exclusive -exhaustive (making sure no backup or other operations are running at the same time), and that’s it.
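As a concrete sketch of those two deletions, here is the same layout mocked up in a temporary directory (the chunk file name is a stand-in; on the real storage you would delete the files through Dropbox and then run the prune from the CLI or web UI):

```shell
# Mock of the Dropbox storage layout (names are stand-ins).
STORAGE=$(mktemp -d)
mkdir -p "$STORAGE/snapshots/image01" "$STORAGE/chunks/c6"
: > "$STORAGE/snapshots/image01/2301"      # the bad revision file
: > "$STORAGE/chunks/c6/corrupted_chunk"   # stand-in for the bad chunk

# Step 1: delete the snapshot file for revision 2301.
rm "$STORAGE/snapshots/image01/2301"
# Step 2: delete the corrupted chunk under chunks/c6.
rm "$STORAGE/chunks/c6/corrupted_chunk"
# Step 3 (real storage only, no other operations running):
#   duplicacy prune -exclusive -exhaustive
echo "revision and chunk removed"
```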


The corruption must have happened before the chunk was uploaded to Dropbox: if Dropbox had caused it, Duplicacy would have caught it and complained about a mismatched chunk ID.

If revision 2301 is really important to you, there might be a way to ignore the chunk hash error (for instance by editing the CLI source code) and hope that this is the only error. Otherwise, as @towerbr said, simply remove that revision and clean up the storage.

Thanks @towerbr. File 8825373047d5bbc05a6<783... didn’t exist under chunks/c6 for whatever reason, but removing revision 2301 from the snapshots folder and running a prune job with -id image01 -exclusive -exhaustive did the trick. Backups have run successfully every hour for the past ~24 hours.

I know Dropbox isn’t as fast or flexible as dedicated storage services (S3, etc.), but I used it for many years with Duplicati without storage-side issues (though with plenty of other problems, such as PC-side database corruption), and as @gchen noted, it doesn’t seem to be the cause of the problem here either. I’m not sure what could have caused the corruption on my side; there were no actual source file additions, deletions, or modifications between revisions 2300 and 2301.

I’ll mark this as solved, but it would be nice if Duplicacy gave a warning notification when backups fail like this; I didn’t realise my backups had been failing for over two weeks, which isn’t ideal for a set-and-forget setup.


You might’ve got really, really unlucky and a memory cell got hit by a cosmic ray, which (luckily) showed up in the chunk ID or filename instead of the chunk content. Or you might have some iffy RAM, in which case you may want to test it with Memtest86+ just to be sure. Consider also running a check with -chunks.


This topic was automatically closed 10 days after the last reply. New replies are no longer allowed.