Failed to decrypt the chunk XXX corrupt input

Hi,

I’m back with the same task I described in this thread: Crash when copying unencrypted backup to encrypted GDrive (invalid memory address or nil pointer dereference)

I’m still trying to copy my SFTP unencrypted storage to GCP.

This time I’m using the new 2.4.1 version of duplicacy (and now I’m passing a private key to the copy command), but unfortunately it still fails (details below).

I’ve tried the same command with a second NAS and with an S3-compatible storage (MinIO-based) - but I get the same issue there as well.

Please describe what you are doing to trigger the bug:

duplicacy copy -from default -to gdrive

Please describe what you expect to happen (but doesn’t):

The storage is copied successfully.

Please describe what actually happens (the wrong behaviour):

I get

...
Chunk 3edd127338071aed68d0f2b1743b3ad1c4f4c2ac6c2d9f4bd9161015d2e25161 (3826/449488) copied to the destination
Chunk 648e9dc801a7560973b17bb80ac05070b92ab6ea172820a017f5b8d5e0b1b335 (3827/449488) copied to the destination
Chunk f7d2b45afee78f434debf05224d0a1b8df2d90e79ccdd55fa76862bff1d08fc8 (3830/449488) copied to the destination
Failed to decrypt the chunk 743b0748b5d9449d1952c29e36285d44ef6c90698cd5661192340d99c9b46d61: corrupt input; retrying
Failed to decrypt the chunk 743b0748b5d9449d1952c29e36285d44ef6c90698cd5661192340d99c9b46d61: corrupt input; retrying
Failed to decrypt the chunk 743b0748b5d9449d1952c29e36285d44ef6c90698cd5661192340d99c9b46d61: corrupt input; retrying
Failed to decrypt the chunk 743b0748b5d9449d1952c29e36285d44ef6c90698cd5661192340d99c9b46d61: corrupt input

error.

It fails on a different chunk every time (sometimes it’s around the 3000th one, sometimes around the 10,000th).

This corrupt input error comes from the lz4 decompressor; it means the data read from the source storage was corrupted. It can be caused by a disk error.

Did this chunk (743b…) get downloaded successfully when you tried again?

Hmmm, that would be strange - all of the drives in my array are fairly new, and they pass all surface tests without a problem.

How can I check whether it was downloaded successfully? Run it again and grep the output?

I’ve retried it a couple of times, and it looks like the chunks are processed in a rather random order - I’ve been grepping the logs for the 743b0748b5d9449d1952c29e36.... chunk, but it hasn’t appeared again.

Yes, the order can be random. You can copy one revision at a time, so that it will be more likely to hit the same chunk.
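For example, restricting the copy to a single revision might look like this (the revision number is illustrative, and the storage names are the ones used earlier in this thread; check duplicacy copy -help on your version to confirm the -r flag):

```shell
# Copy only revision 123 from the source storage to gdrive,
# so repeated runs are likely to hit the same chunks
duplicacy copy -from default -to gdrive -r 123
```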

Hmm… that’s a good idea.

I’ve triaged one of the revisions that is failing (a different chunk this time). After triggering the copy action over and over, it sometimes stops on the same chunk.

But what exactly does it mean for me?

I’m regularly checking my storage with duplicacy check -all, and everything seems fine.

Also, how exactly can I find the failing chunk on disk? I’ve tried find <path> -name <chunk_name>, but I can’t find it anywhere.

I have cron jobs with regular checks and prunes of my storage:

/usr/local/bin/duplicacy prune -all -keep 0:360 -keep 30:180 -keep 7:30 -keep 1:7
(run every day)
/usr/local/bin/duplicacy check -all -tabular
(run every hour)
/usr/local/bin/duplicacy check -files -all -tabular
(run every day)

Is it possible that they have damaged my storage?

PS. Of course I’ve disabled them for now in order to perform the copy tests mentioned in this thread.

Run duplicacy check --files -r revision for the revision that is failing to see if there is any corrupt chunk.

Ok, I get the same corrupt input error here. Interestingly, it looks like the duplicacy check --files run that I do every day was failing as well, but I missed that.

Is it possible that the prune command caused this? What are the possible causes of this issue?

Can I somehow trace which revisions use this chunk, and delete them?

Does duplicacy check fail? If not, then the corrupt chunk must be a file chunk. You can try to rename this chunk and then try to create a new one (by running duplicacy backup -hash). This will be possible if relevant files in the local repository haven’t changed since the revision was made.

What do you mean by Does duplicacy check fail? The “-files” variant or the normal one?

I’ve checked both and:
duplicacy check succeeds
duplicacy check -files fails

I ran duplicacy backup -hash, but unfortunately it didn’t help. Does this mean that the failing chunk corresponds to files that aren’t available on my computer anymore?

Also - you’ve said that I can rename the failing chunk: where are those chunks stored? I’ve tried the find <path> -name <corrupted_chunk_name> command, but it doesn’t find anything for me.

This shows the corrupt chunk is a file chunk, not a snapshot chunk.

Right.

The location of 743b0748b5d9449d1952c29e36285d44ef6c90698cd5661192340d99c9b46d61 is chunks/74/3b0748b5d9449d1952c29e36285d44ef6c90698cd5661192340d99c9b46d61.
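The mapping from chunk id to on-disk path can be sketched in shell - the first two hex characters become the subdirectory under chunks/, and the rest is the file name (the mv at the end is the rename suggested above; the .corrupt suffix is just an illustrative choice):

```shell
# Derive the on-disk path of a chunk from its id
CHUNK=743b0748b5d9449d1952c29e36285d44ef6c90698cd5661192340d99c9b46d61
DIR=${CHUNK:0:2}       # first two hex chars -> subdirectory ("74")
FILE=${CHUNK:2}        # remaining chars -> file name
echo "chunks/$DIR/$FILE"

# To set the corrupt chunk aside before recreating it, e.g.:
# mv "chunks/$DIR/$FILE" "chunks/$DIR/$FILE.corrupt"
```

Run this from the storage root (for SFTP storage, the directory you pointed duplicacy at).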

Can you run duplicacy check -files on the most recent revision? If that revision has a corrupt chunk, it may be easier to recreate.

You may also check the content of the corrupt chunk to see if it contains repeats of the same sequence of 512 bytes. See Some corrupt files in a large restore.
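One way to look for such repeats is to split the file into 512-byte blocks and count duplicate hashes. A minimal sketch, demonstrated on a synthetic file (to inspect the real chunk, point F at its path, e.g. chunks/74/3b07...):

```shell
# Demo input: the same 512-byte block repeated four times
F=$(mktemp)
head -c 512 /dev/urandom > "$F.block"
cat "$F.block" "$F.block" "$F.block" "$F.block" > "$F"

# Split into 512-byte blocks and count how often each block occurs
TMP=$(mktemp -d)
split -b 512 "$F" "$TMP/blk_"
md5sum "$TMP"/blk_* | awk '{print $1}' | sort | uniq -c | sort -rn
# A count greater than 1 on the top line means the same
# 512-byte sequence repeats within the file.
```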