Resuming failed backups re-uploads data

I’m using Backblaze B2 with:

$ duplicacy backup -stats -threads 16 -limit-rate 800

With a fresh B2 bucket, the very first run of the backup command failed after uploading about 3900 chunks:

Uploaded chunk 3920 size 11244935, 1.19MB/s 21 days 14:34:08 0.8%
Uploaded chunk 3909 size 15300223, 1.19MB/s 21 days 14:54:37 0.8%
Uploaded chunk 3923 size 12382213, 1.19MB/s 21 days 14:48:52 0.8%
Failed to find the path for the chunk <hex here>: Maximum backoff reached
Incomplete snapshot saved to /.../.duplicacy/incomplete

When I ran the same backup command again, it gave me this output:

Storage set to b2://...
No previous backup found
Indexing /.../
<some lines about skipping non-regular file that I don't think are relevant>
Incomplete snapshot loaded from /.../.duplicacy/incomplete
Listing all chunks
Skipped 1639 files from previous incomplete backup
Use 16 uploading threads
Skipped chunk 2 size 2298987, 2.19MB/s 11 days 18:04:14 0.0%
Skipped chunk 3 size 2028688, 4.13MB/s 6 days 05:50:38 0.0%
Skipped chunk 4 size 1427268, 5.49MB/s 4 days 16:40:54 0.0%
Skipped chunk 5 size 8982088, 14.05MB/s 1 day 20:00:11 0.0%
Uploaded chunk 12 size 1105465, 645KB/s 40 days 22:22:45 0.0%
Uploaded chunk 14 size 1153988, 664KB/s 39 days 17:50:00 0.0%
Uploaded chunk 7 size 1637334, 520KB/s 50 days 18:01:47 0.0%
...

It looks like it skipped only 4 chunks total (chunks 2-5) and essentially started over from scratch. I don't think anything in the backed-up directory actually changed between the two runs, though I'm not 100% sure.

Is this a known limitation? How can I get resume to avoid re-uploading data? The problem is that I expect my backups to be interrupted every once in a while (connectivity loss, power interruption, etc.), and if each run starts over from scratch, I'll never actually finish a backup.

Thanks for any help!

This line:

Skipped 1639 files from previous incomplete backup

means these 1639 files had been completely uploaded in the first backup, so they were skipped without needing to be broken into chunks.
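To make that concrete, here is a minimal Go sketch of how a resumed backup can drop fully uploaded files before any chunking happens. The file names and structure are made up for illustration; this is not Duplicacy's actual code.

package main

import "fmt"

func main() {
	// Assumed: the incomplete snapshot records which files finished uploading.
	completed := map[string]bool{
		"photos/a.jpg": true,
		"photos/b.jpg": true,
	}
	toBackup := []string{"photos/a.jpg", "photos/b.jpg", "photos/c.jpg"}

	var remaining []string
	skipped := 0
	for _, f := range toBackup {
		if completed[f] {
			skipped++ // fully uploaded last time: no re-chunking, no re-upload
			continue
		}
		remaining = append(remaining, f)
	}
	fmt.Printf("Skipped %d files from previous incomplete backup\n", skipped)
	fmt.Println("Still to chunk and upload:", remaining)
}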

Additionally, some files were only partially uploaded, which is why only 4 chunks were skipped in the second backup. The chunk index always starts from 0 in each run, so the same index may refer to different chunks in different backups.
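A toy illustration of that indexing behavior (again hypothetical, not Duplicacy's code): a chunk's real identity is its content hash, while the index printed in the log is just a per-run counter.

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// hashID stands in for a chunk's real identity: a hash of its content.
func hashID(data []byte) string {
	sum := sha256.Sum256(data)
	return hex.EncodeToString(sum[:4])
}

func main() {
	// The first run chunks the whole input; the resumed run starts partway
	// in, so its counter restarts at 0 on different content.
	run1 := [][]byte{[]byte("alpha"), []byte("beta"), []byte("gamma")}
	run2 := [][]byte{[]byte("beta"), []byte("gamma")}

	for i, c := range run1 {
		fmt.Printf("run 1: chunk %d -> %s\n", i, hashID(c))
	}
	for i, c := range run2 {
		fmt.Printf("run 2: chunk %d -> %s\n", i, hashID(c))
	}
	// "chunk 0" names different content in each run; the hash IDs line up
	// only where the content is actually the same.
}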

Ahhh, thank you for the explanation, Gilbert!

Perhaps it would be helpful to print the total number of bytes remaining to be uploaded somewhere? I assume that's known, since there's a time estimate. It would give confidence that progress is being made.
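For what it's worth, a rough sketch of how such a "bytes remaining" line could be derived, assuming the total size is known after the indexing pass (all names here are made up, not Duplicacy's internals):

package main

import "fmt"

func main() {
	totalBytes := int64(2_500_000_000_000) // assumed known after indexing
	doneBytes := int64(41_000_000_000)     // skipped + uploaded so far

	remaining := totalBytes - doneBytes
	percent := float64(doneBytes) / float64(totalBytes) * 100
	fmt.Printf("%.1f%% done, %.1f GB remaining\n",
		percent, float64(remaining)/1e9)
}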