Decrypt Chunk Fail

Before you post: have you tried searching the forum? If you have found related posts that didn’t quite solve your issue, please mention (link) them in your post.

I found various posts, but I’m not too sure which one applies to me.

Hi All.

No special encryption, just the standard setup.

The restore command encountered an error:
Failed to decrypt the chunk 63ac3d1907ceb6b4ff480358d394fce4eaa75af3805f831a9b844bf7dc6be2b2: cipher: message authentication failed

Exit code: 100

I’ve tried 2 of 3 restore points. Unsure how to resolve this.

Thanks Kindly.

This message usually means the chunk is corrupted. Which storage backend are you using?

Oh sh*t… Not ideal at all, as this affects my restores from any of the time points I actually need…

I may need to purge the backend, set it up again, and re-upload 10TB :frowning:

That’s not a solution. If it happened once, it will happen again. Either the root cause of the failure needs to be understood and prevented from happening in the future, or that backend should not be used.

What backend is that?


Google Drive.

Unusual, as it’s the only one of my large number of machines to throw this error, and that’s over more than a year of use too…

Since this has affected ALL the backups I need to restore on that one machine, I don’t have a choice but to reconfigure. Unless removing that chunk might resolve the issue or something?

Either way, I would ideally like this restore to work.

Try moving the chunk to a different location instead of deleting it. Then run the check command, which will show how many revisions are affected by this chunk.


Good plan! May I ask how to find the file “63ac3d1907ceb6b4ff480358d394fce4eaa75af3805f831a9b844bf7dc6be2b2”? A search doesn’t yield any results, but maybe I’m doing it incorrectly. (Nothing gets deleted from the remote end, so this file should exist, especially since the check shows as complete every time it runs.)

You should be able to find that chunk under the subdirectory chunks/63, named ac3d1907ceb6b4ff480358d394fce4eaa75af3805f831a9b844bf7dc6be2b2.
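The layout above can be sketched in a few lines of bash: the first two hex characters of the chunk ID become the subdirectory under chunks/, and the rest is the file name. The mktemp directory below is a stand-in for your real storage root, and the touched file is a fake chunk created purely for the demo; only the path logic and the quarantine move reflect the procedure being discussed.

```shell
# Derive the on-storage path from the chunk ID: chunks/<first 2 chars>/<rest>
CHUNK_ID="63ac3d1907ceb6b4ff480358d394fce4eaa75af3805f831a9b844bf7dc6be2b2"
CHUNK_PATH="chunks/${CHUNK_ID:0:2}/${CHUNK_ID:2}"
echo "${CHUNK_PATH}"

# Quarantine the chunk rather than deleting it (stand-in storage root here):
STORAGE=$(mktemp -d)
mkdir -p "${STORAGE}/chunks/${CHUNK_ID:0:2}" "${STORAGE}/quarantine"
touch "${STORAGE}/${CHUNK_PATH}"                 # fake chunk file for the demo
mv "${STORAGE}/${CHUNK_PATH}" "${STORAGE}/quarantine/"
ls "${STORAGE}/quarantine"
```

After moving the chunk aside on the real storage, run the check command as suggested to see which revisions it affects.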

Thanks!

And sadly revisions 6-204 are nuked by the looks of things.

Revisions 205 and 206 appear to be OK; I think those revs should have what I need (I hope). [Checked: they do not have what I need… :frowning: ]

Thanks for the tip!

What would be the best way of resolving this? Repairing, or simply pruning that specific set of revisions?

Google handles data integrity well, and its API allows for upload atomicity: the probability of Google losing data is negligible, and I have never heard of it happening. (A counter-example: recently OneDrive for Business had a bug where a partially uploaded file would persist; it was fixed shortly after, but it did happen. Backblaze also had a bug where bad data was returned through the API.)

This leaves the chunk getting corrupted during the window after file creation but before upload, due to:

  1. filesystem corruption,
  2. bad drive media,
  3. or bad ram.

I would run a RAM test and a filesystem scan on the source system, just in case.

It is also possible the chunk is corrupted in the local cache for the same reasons. Nuking the cache, repairing the filesystem, healing the drive, and addressing any RAM issues should then suffice to recover.
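For the CLI, nuking the cache amounts to deleting the .duplicacy/cache directory inside the repository (confirm the path on your own setup; web/GUI editions keep it elsewhere). Duplicacy simply re-downloads anything it needs. A minimal sketch, with a mktemp directory standing in for the repository root and a fake cached chunk created just for the demo:

```shell
# Stand-in repository root with a fake cache entry for demonstration.
REPO=$(mktemp -d)
mkdir -p "${REPO}/.duplicacy/cache/default/chunks"
touch "${REPO}/.duplicacy/cache/default/chunks/deadbeef"   # fake cached chunk

# Nuke the local cache; duplicacy rebuilds it on the next operation.
rm -rf "${REPO}/.duplicacy/cache"

# Afterwards, re-run a verification pass from the repository, e.g.:
#   duplicacy check -chunks
```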


Hi, I have a similar issue, but I am not sure how to resolve it. My understanding of the technical workings of the Duplicacy system is not on a par with many members on these threads. Duplicacy has served me well for running backups and restoring files when needed.

Some background information on my suspected cause. I run two Unraid machines: Server A is always online; Server B is a datastore for Duplicacy only, communicates over the LAN, and is booted up every few weeks to receive Duplicacy backups. I have recently been replacing the hard drives in Server B with larger ones and then migrating the old drives to Server A, one at a time, rebuilding the data from the parity disk. I suspect this is where the problem with the chunk was caused. The drives from Server B were migrated to Server A and a similar rebuild-from-parity process conducted. During this, a hard drive failed and I lost the data on Server A (one drive). I am trying to restore and get the error:

"The restore command encountered an error:
Failed to decrypt the chunk b23f572b5db59750a9df9ed125d407d78d4d608f63c7619c2ae080da42a3ef95: cipher: message authentication failed

Exit code: 100"

I have managed to restore approx 4.8 of 8.6 before Duplicacy gives this error.
Per the above, I have located the chunk on Server B manually. When I run a check, it does not complete:

“Running check command from /cache/localhost/all
Options: [-log check -storage BLACK_BOX -chunks -a -tabular]
2024-08-10 22:06:26.314 INFO STORAGE_SET Storage set to /BLACK_BOX
2024-08-10 22:06:26.399 INFO SNAPSHOT_CHECK Listing all chunks
2024-08-11 01:45:34.554 INFO SNAPSHOT_CHECK 4 snapshots and 50 revisions
2024-08-11 01:45:34.679 INFO SNAPSHOT_CHECK Total chunk size is 26488G in 5701657 chunks”

…Many Many lines of checking …

“2024-08-11 04:06:27.514 INFO VERIFY_PROGRESS Verified chunk da02ac61eb7261f55b61a0af235d2e35619b172498630632abfc3ec3b67a7e67 (87137/5334396), 55.09MB/s 5 days 13:38:25 1.6%
2024-08-11 04:06:27.963 INFO SNAPSHOT_VERIFY Added 87137 chunks to the list of verified chunks
Duplicacy was aborted”

There is talk of corruption in many threads, but I’m too inexperienced to take action that might be to the detriment of the existing data I need to retrieve.

How would I assess whether chunk “b23f572b5db59750a9df9ed125d407d78d4d608f63c7619c2ae080da42a3ef95” is corrupt? How would I move past this chunk and continue to the next one in the restore? Simply put: how can I fix the matter so as to restore as much as I can, and then run the backup again to protect new files?
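One avenue worth checking for the "move past this chunk" part (an assumption to verify against your CLI version with `duplicacy restore -h`, not something confirmed in this thread): newer Duplicacy CLI releases have a -persist flag that lets restore and check continue past chunk errors and report the affected files at the end instead of aborting. The revision number below is hypothetical; substitute your own. The sketch only assembles the command line, since there is no real storage to run it against here:

```shell
# Hypothetical revision number for illustration; use one of your own revisions.
REVISION=50

# -persist (if your CLI version supports it) continues past bad chunks and
# reports affected files rather than stopping at the first decryption failure.
CMD="duplicacy restore -r ${REVISION} -persist -stats"
echo "${CMD}"
```

This would let you salvage every file that does not depend on the bad chunk, after which a fresh backup protects new files again.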