Fix corrupted remote chunk when I have the chunk in local cache

Hi all,

I have a potentially interesting question/problem…

I just wiped a computer and set it up to back up using the same ID as before. Upon downloading the previous state, it’s throwing errors that a chunk is the wrong size.

2023-03-10 17:21:16.670 WARN DOWNLOAD_RETRY Failed to download the chunk 44d19d83388f41d675791e4cb280206b0c37452f833b600b6617b0560a6826b7: unexpected EOF; retrying
<repeats before erroring out>

I’m assuming something went wrong when this chunk was uploaded, so I went looking for it on a separate machine. I found it in that machine’s local cache and attempted to upload the good copy to the repository, replacing the bad file.

However, now I’m getting this error:

2023-03-10 21:40:14.044 WARN DOWNLOAD_RETRY Failed to decrypt the chunk 44d19d83388f41d675791e4cb280206b0c37452f833b600b6617b0560a6826b7: The storage doesn't seem to be encrypted; retrying
<repeats>

I can only conclude that the local cache holds a decrypted version of the remote chunk (which would make sense).

Now my question is: is there some way of getting Duplicacy to re-encrypt this chunk and upload it again?

Thanks,
Alex

Copying the cached copy of the corrupted chunk from the other machine into the local cache of the new machine allowed Duplicacy to continue and complete a backup.

However, a full check of the remote storage revealed more incomplete chunks.

While my original question may be academically interesting, my real issue seems to be that I’m using an unreliable backup destination.

(For anyone interested, it is pCloud accessed via rclone and served through a local WebDAV endpoint.)

Currently there is no facility in Duplicacy that can upload the decrypted chunk in the local cache to the storage. However, writing such a tool should be relatively easy.
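Roughly, such a tool would read the decrypted chunk from the local cache, re-encrypt it with the storage’s chunk key, and produce a file to upload in place of the bad one. Below is a minimal Go sketch of that shape; the AES-256-GCM mode, the raw 32-byte key file and the nonce-prepended layout are assumptions for illustration and not Duplicacy’s exact chunk format, so treat it as a starting point rather than a drop-in fix.

```go
// reencrypt_chunk: illustrative sketch only, NOT a supported Duplicacy tool.
// It re-encrypts a plaintext chunk taken from the local cache so the result
// can be uploaded in place of a corrupted remote chunk. Key handling, cipher
// mode and file layout are simplified assumptions.
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
	"os"
)

func main() {
	if len(os.Args) != 4 {
		fmt.Println("usage: reencrypt_chunk <plaintext-chunk> <key-file> <output-chunk>")
		os.Exit(1)
	}

	plaintext, err := os.ReadFile(os.Args[1]) // decrypted chunk from the local cache
	if err != nil {
		panic(err)
	}
	key, err := os.ReadFile(os.Args[2]) // 32-byte chunk key (assumed; would come from the storage config)
	if err != nil || len(key) < 32 {
		panic("need a readable 32-byte key file")
	}

	block, err := aes.NewCipher(key[:32])
	if err != nil {
		panic(err)
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		panic(err)
	}

	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		panic(err)
	}

	// Prepend the nonce so a decryptor can find it; the real chunk layout is an assumption here.
	sealed := gcm.Seal(nonce, nonce, plaintext, nil)

	if err := os.WriteFile(os.Args[3], sealed, 0o644); err != nil {
		panic(err)
	}
	fmt.Printf("wrote %d encrypted bytes to %s\n", len(sealed), os.Args[3])
}
```

In practice the key would have to be extracted from the storage’s config, and the re-uploaded chunk verified with a test download before relying on it.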


Thanks for confirming, @gchen.

I dug a bit deeper, and it turns out the data on the remote storage is actually fine. It just occasionally becomes unavailable for ~20 seconds, and Duplicacy only retries a couple of times in short succession. Restarting the backup resumes downloading the chunks, and they come down fine.

@gchen, would there be any room for more generous retry timers (perhaps exponential backoff with powers of two), or would that limit some other use cases? I feel this would help with high-latency storages.

Thanks,
Alex

Which storage is that? I can tell you that the OneDrive backend, for instance, has retry logic with incremental backoffs over 12 attempts for every API call. So chunk download indeed only does 3 retries without waiting, presumably because the lower-level errors are already taken care of by the storage backend (as I described for OneDrive). So this is definitely not a universal problem; it’s specific to the storage.
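For illustration, the kind of backoff being discussed is just a growing wait between attempts. The sketch below is not Duplicacy’s actual downloader; the attempt count, base delay and stand-in download function are made up for the example.

```go
// Illustrative only: a retry wrapper with exponential backoff, not Duplicacy's
// actual chunk downloader. The delay doubles between attempts (1s, 2s, 4s, ...),
// so a short storage outage is ridden out instead of exhausting the retries
// almost immediately.
package main

import (
	"errors"
	"fmt"
	"time"
)

func downloadWithBackoff(download func() error, maxAttempts int, baseDelay time.Duration) error {
	var err error
	for attempt := 0; attempt < maxAttempts; attempt++ {
		if err = download(); err == nil {
			return nil // success
		}
		delay := baseDelay << attempt // 1s, 2s, 4s, 8s, ...
		fmt.Printf("attempt %d failed (%v); retrying in %v\n", attempt+1, err, delay)
		time.Sleep(delay)
	}
	return fmt.Errorf("giving up after %d attempts: %w", maxAttempts, err)
}

func main() {
	// Stand-in for a real chunk download that fails a few times, then succeeds.
	failures := 3
	fakeDownload := func() error {
		if failures > 0 {
			failures--
			return errors.New("unexpected EOF")
		}
		return nil
	}
	if err := downloadWithBackoff(fakeDownload, 6, time.Second); err != nil {
		fmt.Println(err)
	}
}
```

With 6 attempts and a 1-second base delay, the waits add up to roughly a minute, which would cover the ~20-second dropouts described above.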

Hi @sevimo, thank you for letting me know.

This is a pCloud backend, accessed with rclone and served over WebDAV.

Would it be useful to add a globally configurable retry-strategy setting, or is this not something we’d want users to have to worry about?

Thanks