"Failed to upload chunk" errors - are these retried?

Hi,

I’m using Duplicacy CLI v2.0.10 on macOS talking to Azure, and I’m running into “Failed to upload chunk” errors. These errors cause the backup to abort.

Here is the first example:

Uploaded chunk 66171 size 5819743, 10.59MB/s 2 days 07:55:25 13.2%
Uploaded chunk 66176 size 1795927, 10.59MB/s 2 days 07:55:30 13.2%
Uploaded chunk 66177 size 1161357, 10.59MB/s 2 days 07:55:29 13.2%
Failed to upload the chunk d9feca43667190675efd46f514096b44738fc1789342253a524c43461e62d81a: Put https://XXXXXX.blob.core.windows.net/XXXXX/chunks/d9/feca43667190675efd46f514096b44738fc1789342253a524c43461e62d81a: read tcp 192.168.1.125:53161->52.191.176.36:443: read: operation timed out
Incomplete snapshot saved to /Volumes/Storage/.duplicacy/incomplete

Here is a second example, occurring just a little while later:

Uploaded chunk 68676 size 6800354, 48.53MB/s 12:07:55 13.7%
Uploaded chunk 68677 size 1308064, 48.53MB/s 12:07:55 13.7%
Uploaded chunk 68679 size 2850576, 48.52MB/s 12:08:01 13.7%
Failed to upload the chunk 587b8758c6bf41126defbea74dc2db0399050a14a537c9ee7d863f2e22ebed6f: Put https://XXXXX.blob.core.windows.net/XXXXX/chunks/58/7b8758c6bf41126defbea74dc2db0399050a14a537c9ee7d863f2e22ebed6f: read tcp 192.168.1.125:60393->52.191.176.36:443: read: operation timed out
Uploaded chunk 68678 size 1233573, 48.52MB/s 12:08:07 13.7%
Uploaded chunk 68680 size 6527776, 48.52MB/s 12:08:06 13.7%
Uploaded chunk 68683 size 1583051, 48.52MB/s 12:08:06 13.7%
Uploaded chunk 68682 size 3470900, 48.52MB/s 12:08:06 13.7%
Uploaded chunk 68681 size 2771453, 48.52MB/s 12:08:05 13.7%
Incomplete snapshot saved to /Volumes/Storage/.duplicacy/incomplete

So, I have a number of questions here:

  1. This appears to be a transient error. If I restart the backup, it continues for a while until the error comes up again. Why doesn’t Duplicacy retry in these situations?

  2. What does the GUI do in this case? What is the proper thing for me to do when this occurs? Try again later, or something else?

  3. What happens to the chunks that were already uploaded (this is failing during the initial backup)? Does the backup restart from the very beginning, or does it more or less continue where it left off? I’m backing up a large data set (1.6TB), which will take a couple of days, but several test uploads using other software completed fine, so I don’t think there’s a fundamental problem with my communications circuit or anything.

Thanks for any help or guidance that you can provide!

Hey, I just noticed an open issue on this. Sorry for not checking there first.

Since this is a bug, I’ll just follow up on that issue, thanks.

The Azure storage backend uses github.com/azure/azure-sdk-for-go, and I wasn’t aware that it doesn’t retry on connection timeouts.
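
For reference, here is a minimal sketch of what retrying transient timeouts with exponential backoff around the upload call could look like. The uploadChunk function, its signature, and the retry parameters are placeholders for illustration, not Duplicacy’s actual code:

package main

import (
	"fmt"
	"net"
	"time"
)

// uploadChunk is a stand-in for the real HTTP PUT to Azure blob
// storage (hypothetical; not Duplicacy's actual API).
func uploadChunk(chunkID string, data []byte) error {
	return nil
}

// uploadWithRetry retries transient network errors, such as the
// "read: operation timed out" seen in the logs above, with
// exponential backoff instead of aborting the whole backup.
func uploadWithRetry(chunkID string, data []byte, maxRetries int) error {
	backoff := time.Second
	for attempt := 0; ; attempt++ {
		err := uploadChunk(chunkID, data)
		if err == nil {
			return nil
		}
		// Only retry errors that look transient; net.Error exposes
		// Timeout() for read/connect timeouts.
		netErr, ok := err.(net.Error)
		if !ok || !netErr.Timeout() || attempt >= maxRetries {
			return fmt.Errorf("failed to upload the chunk %s: %v", chunkID, err)
		}
		time.Sleep(backoff)
		backoff *= 2
	}
}

func main() {
	if err := uploadWithRetry("example-chunk", []byte("chunk data"), 4); err != nil {
		fmt.Println(err)
	}
}

Checking for net.Error lets the wrapper distinguish timeouts like the ones in the logs from permanent failures such as authentication errors, which shouldn’t be retried.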

When an initial backup fails, Duplicacy will store the list of files already uploaded in a .duplicacy/incomplete file and load that file on the next run to skip those files.
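
To make that concrete, here is a minimal sketch of saving and loading such a progress file; the IncompleteSnapshot struct and the JSON layout are assumptions for illustration, not Duplicacy’s actual on-disk format:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// IncompleteSnapshot records which files were already uploaded
// before the backup aborted (hypothetical layout).
type IncompleteSnapshot struct {
	UploadedFiles []string `json:"uploaded_files"`
}

// saveIncomplete persists the progress so a later run can resume.
func saveIncomplete(path string, s *IncompleteSnapshot) error {
	data, err := json.MarshalIndent(s, "", "  ")
	if err != nil {
		return err
	}
	return os.WriteFile(path, data, 0644)
}

// loadIncomplete reads prior progress; a missing file simply means
// there is nothing to skip.
func loadIncomplete(path string) (*IncompleteSnapshot, error) {
	data, err := os.ReadFile(path)
	if os.IsNotExist(err) {
		return &IncompleteSnapshot{}, nil
	}
	if err != nil {
		return nil, err
	}
	var s IncompleteSnapshot
	if err := json.Unmarshal(data, &s); err != nil {
		return nil, err
	}
	return &s, nil
}

func main() {
	snap := &IncompleteSnapshot{UploadedFiles: []string{"docs/a.txt", "docs/b.txt"}}
	if err := saveIncomplete("incomplete", snap); err != nil {
		fmt.Println(err)
		return
	}
	loaded, _ := loadIncomplete("incomplete")
	fmt.Println("already uploaded:", loaded.UploadedFiles)
}

On the next run, any file present in the loaded list would be skipped rather than re-chunked and re-uploaded, which is why an interrupted initial backup continues more or less where it left off.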