DOWNLOAD_CORRUPTED error during restore

I made a test restore of a large file and encountered this error. I’ve tried restoring twice already via the Web GUI.

2021-04-13 12:06:45.884 INFO REPOSITORY_SET Repository set to F:/testRestoreImageBackup8M
2021-04-13 12:06:45.974 INFO STORAGE_SET Storage set to wasabi://us-west-1@s3.us-west-1.wasabisys.com/duplicacy/imageBackup8M
2021-04-13 12:06:45.974 INFO SNAPSHOT_FILTER Loaded 0 include/exclude pattern(s)
2021-04-13 12:06:47.499 INFO RESTORE_INPLACE Forcing in-place mode with a non-default preference path
2021-04-13 12:06:47.761 INFO SNAPSHOT_FILTER Parsing filter file \\?\C:\Users\User\.duplicacy-web\repositories\localhost\restore\.duplicacy\filters
2021-04-13 12:06:47.762 INFO SNAPSHOT_FILTER Loaded 0 include/exclude pattern(s)
2021-04-13 12:06:47.771 INFO RESTORE_START Restoring F:/testRestoreImageBackup8M to revision 2
2021-04-13 12:14:43.772 WARN DOWNLOAD_RETRY The chunk 00622a0bbd3b0e096ab590d2acbe9317a24c3323d4d14e21916ceff1fd5bcd64 has a hash id of 02be42964f832ae4a1121ed9f9ebfd0a10083a0e7871c48d7e198c73e3a575ef; retrying
2021-04-13 12:14:51.506 WARN DOWNLOAD_RETRY The chunk 00622a0bbd3b0e096ab590d2acbe9317a24c3323d4d14e21916ceff1fd5bcd64 has a hash id of 02be42964f832ae4a1121ed9f9ebfd0a10083a0e7871c48d7e198c73e3a575ef; retrying
2021-04-13 12:14:58.713 WARN DOWNLOAD_RETRY The chunk 00622a0bbd3b0e096ab590d2acbe9317a24c3323d4d14e21916ceff1fd5bcd64 has a hash id of 02be42964f832ae4a1121ed9f9ebfd0a10083a0e7871c48d7e198c73e3a575ef; retrying
2021-04-13 12:15:04.601 ERROR DOWNLOAD_CORRUPTED The chunk 00622a0bbd3b0e096ab590d2acbe9317a24c3323d4d14e21916ceff1fd5bcd64 has a hash id of 02be42964f832ae4a1121ed9f9ebfd0a10083a0e7871c48d7e198c73e3a575ef

There is only one other post on this forum with the same error, but it actually had a different problem (a corrupt cache issue): link.

How should I proceed in this case?

I read a bit about this. I'm a bit disappointed, but oh well. Based on those threads, I then proceeded as follows (a rough CLI equivalent is sketched after the list):

  1. Deleted that chunk file (chunks/00/622a0…) from the bucket.
  2. Ran a second backup via the GUI, which re-uploaded that one bad chunk.
  3. Initiated the restore again.
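For anyone following along with the CLI instead of the web GUI, I assume the equivalent would have been roughly this (the chunk path and revision come from my logs above; the commands reflect my understanding, not something I actually ran):

    # 1. Delete the bad chunk object from the bucket (via the Wasabi console
    #    or any S3 client), i.e. the object at:
    #    duplicacy/imageBackup8M/chunks/00/622a0bbd3b0e096ab590d2acbe9317a24c3323d4d14e21916ceff1fd5bcd64
    # 2. Run another backup; -hash forces all files to be re-read, so the
    #    now-missing chunk gets regenerated and re-uploaded:
    duplicacy backup -hash -stats
    # 3. Try the restore again (revision 2, as in the log above):
    duplicacy restore -r 2 -stats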

But the third step failed with the same error, this time on a different chunk.

Maybe my initial upload was really bad.

What I do know is that my Macrium Reflect image is validated daily (I don't want to use rsync/rclone; I want to have only one backup tool). Can't Duplicacy just check the hashes on my local computer, compare them to the cloud's hashes, and re-upload the bad ones? It would be lovely if the fixes could all be done via the GUI. Thank you.

Two possibilities:

  1. The chunks were corrupted (either a Duplicacy bug or a memory error) before they were uploaded.
  2. Wasabi returned a corrupted copy.

(2) is unlikely, but considering there was a precedent, as well as the more recent issue with B2, it can't simply be excluded.

Which region is your Wasabi bucket?

It is interesting that you regenerated the chunk. Is your storage encrypted? It would be useful to diff the corrupted copy with the original one.
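Something along these lines would do, assuming both copies had been saved locally before anything was overwritten (the file names here are just placeholders):

    # Compare the corrupted chunk downloaded from Wasabi against the
    # regenerated copy; names are placeholders.
    sha256sum corrupted_chunk regenerated_chunk
    # If the hashes differ, list the first differing bytes:
    cmp -l corrupted_chunk regenerated_chunk | head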

Hi gchen,

The region is us-west-1. I believe this storage wasn't encrypted, and its chunk size was 8M.

Lastly, I did what a dumb user would do.

IT Support: “Did you restart your backup?”
User: No

I deleted that storage, created a new storage with a fixed 1M chunk size, re-uploaded everything via the web GUI, and prayed hard. Finally, the restore worked, and a binary comparison with WinMerge also passed.
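For reference, I believe the CLI equivalent of what I did in the web GUI is roughly the following (the snapshot ID and bucket path are placeholders, and my understanding is that setting the average, minimum, and maximum chunk sizes to the same value gives fixed-size chunks):

    # New storage with a fixed 1M chunk size (placeholder names/paths):
    duplicacy init -c 1M -min 1M -max 1M imageBackup wasabi://us-west-1@s3.us-west-1.wasabisys.com/duplicacy/imageBackup1M
    # Fresh upload, then a test restore of the first revision:
    duplicacy backup -stats
    duplicacy restore -r 1 -stats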

So now that everything is dandy, I'm only curious about the first possibility of failure. Maybe it was a Duplicacy bug, a memory failure, or SSD bit rot. However, since the image files are re-created weekly and validated daily, can't Duplicacy just tell me that a particular chunk no longer matches, so that I can run a command to re-upload that chunk?
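From what I've read, the closest existing thing seems to be the CLI check command, which can download every chunk and verify its hash; I haven't tried it myself, so treat this as my assumption:

    # Verify that every referenced chunk exists in the storage:
    duplicacy check
    # Additionally download every chunk and verify its content hash
    # (egress charges apply):
    duplicacy check -chunks
    # Or restore files in memory and verify full file hashes for revision 2:
    duplicacy check -files -r 2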

Thanks gchen for your help.

Why would you rule out the second possibility? Now that you have started over, you no longer have a way to triage that issue, and we are back to square one: hoping the corruption occurs again so that it can be analyzed.

I personally had a lot of issues with Wasabi, specifically the us-west-1 endpoint (no issues with us-east-1); I'm not sure whether some of these issues could be explained by integrity problems on their end.

Any reason not to use Backblaze? It's cheaper, too.

Hi saspus, sorry, I'm not an advanced user. If this happens again, I shall keep that storage for debugging purposes.

After my many troubles with Duplicati, I intend to verify the backups (both data and software) on a quarterly basis. Hence, egress on Backblaze would be costlier than on Wasabi.

So we shall see whether this episode repeats in three months.

I would then at least suggest trying the us-east-1 endpoint instead.

You can get free egress with B2 via Cloudflare, and Duplicacy fully supports this use case:

Agree that Duplicati is a train wreck.
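For what it's worth, the storage URL formats would look roughly like this (bucket names and paths are placeholders, and this doesn't cover whatever extra Cloudflare configuration is involved):

    # B2 native backend:
    b2://my-bucket
    # Generic S3-compatible form (region@endpoint/bucket/directory), the same
    # shape as the Wasabi URL in the log above:
    s3://us-west-1@s3.us-west-1.wasabisys.com/duplicacy/imageBackup8M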

Thanks saspus for the hint. I have to say, I looked at some of the tutorials and it seems like there are quite a number of steps involved. Also, it's not supported by Duplicacy's web UI, is it? Or must I use the S3 form and fill in the endpoint correctly? Thanks.

I have to say you are right…

I have/had two storage buckets under Wasabi US-West, and they've both lost chunks. Do you know whether other Wasabi endpoints experience the same issue?

I spoke too soon. I noticed the same issue in another thread: Missing chunks after running prune

Then I ran a check with -fossils -resurrect, and everything is fine again. It must have been a bug within Duplicacy.
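For anyone finding this later, the command was along these lines (as I understand it, -fossils lets check look for missing chunks among the fossils left by prune, and -resurrect turns fossils that are still referenced back into normal chunks):

    duplicacy check -fossils -resurrect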

Missing chunks after a failed prune? Sure, that's popping up everywhere.
But this is a different issue from what you reported with Wasabi, where actual chunks were corrupted.

Lost or corrupted chunks?

I.e. are files missing, or are files corrupted?

Yeah, my bad. The first time this happened, the chunks were corrupted. The second time, it was about lost chunks.