Duplicacy copy seems much slower than it should be

My first backup went directly to Wasabi. Now I want to create a local backup with the same contents, and I realize I did this backwards. I used duplicacy add and duplicacy copy to start copying the Wasabi snapshots into a new local storage. Wasabi must be throttling downloads, though, because I'm not even getting 8 Mbps down, while I was saturating my 15 Mbps upload when I did the initial backup.
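
For context, this is roughly what I ran (the storage names, snapshot id, and paths here are placeholders, not my real ones):

    # add a second storage named "local" that is copy-compatible with the Wasabi one ("default")
    duplicacy add -copy default local my-repo /mnt/backup/duplicacy

    # copy the existing revisions from the Wasabi storage into the new local storage
    duplicacy copy -from default -to local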

I didn’t want to wait for this, so I wiped out my new local storage, created a new one, and copied the config file from the Wasabi storage into it. Then I ran a new backup into the new local storage. Because the directory I’m backing up didn’t change and the chunk parameters and encryption keys are the same, this produced all the same chunks that are on Wasabi, but instead of taking a week it only took about 6.5 hours.
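
In case it helps anyone else, the sequence was roughly this (paths, names, and the download location are placeholders; I'm assuming the config file sits at the top level of the storage, which is where I found it on Wasabi):

    # drop the storage config downloaded from Wasabi into the empty local storage,
    # so both storages share the same chunk parameters and encryption keys
    # (I pulled it down with an S3 client; any tool that can fetch the object works)
    cp ~/Downloads/config /mnt/backup/duplicacy/config

    # adding the storage now picks up the existing config instead of generating a new one
    duplicacy add local my-repo /mnt/backup/duplicacy

    # back up the same repository into the local storage; same source plus same chunk
    # and encryption settings means the same chunks get created
    duplicacy backup -storage local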

(In reality what I did was slightly more complicated than this, but the outcome is the same as if I had created a new local storage and populated it with the chunks from Wasabi, in the required unflattened form.)

Now I’m running duplicacy copy from Wasabi to local, and I expected it to fly because all the chunks it needs to create are already in the local storage. But it’s going very slowly. The network monitor shows it using all the bandwidth Wasabi will let me have, ~8 Mbps. So it must be downloading each chunk first, and only then realizing it doesn’t need to be copied because it’s already present in the destination.

I realize there’s no reason for you to have optimized for this case, but it sure would be nice if it did things in the reverse order: checking for the presence of a chunk at the destination before downloading it.

This is what I see (excerpted). These lines tick by pretty slowly.

Chunk b461991e04a5b7ce7c159f43b278e3f8d13a2ee344aa9501656466e02282f939 (1/99775) exists at the destination
Chunk 495fcf2e492b730a8a48d7e8c470455f4f777ef161308fd6bf25d612a206c435 (2/99775) exists at the destination
Chunk 36f251c1a264e46a9098a7bcf9f4e6f942eda5695141b52f712831b9dfe4cfc5 (3/99775) exists at the destination
Chunk d0fa2057cf94de04698f252bb283fb2fd67516829ce08c28acb66097f2d51717 (4/99775) exists at the destination

I figured this out: in addition to the config file, I also needed to copy the snapshot files from Wasabi to my local storage. Now when I run the copy command it finishes in seconds.
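
Concretely, it was just the contents of the snapshots/ directory, one small file per revision under snapshots/<snapshot id>/. The bucket name, paths, and endpoint below are placeholders; I'm showing the AWS CLI pointed at Wasabi as an example, but any S3 client works:

    # pull the snapshot files for this repository down from Wasabi into the local storage
    aws s3 sync s3://my-wasabi-bucket/snapshots/my-repo/ \
        /mnt/backup/duplicacy/snapshots/my-repo/ \
        --endpoint-url https://s3.wasabisys.com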

And I’m running duplicacy check on the local storage just in case any chunks are missing.
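
That’s just the plain check, pointed at the local storage (storage name as in the earlier sketches):

    # verify that every chunk referenced by the local revisions actually exists in the storage
    duplicacy check -storage local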

Great! I think the copy command is another differentiating feature compared with other backup tools (besides cross-client deduplication). jt70471@github recently made a few improvements to the copy command. I guess if you have a build with his commit (https://github.com/gilbertchen/duplicacy/commit/0bf66168fb7b53b72bf93afd14a90cc9508998bf) you won’t need to copy over the snapshots from Wasabi to local.

I actually did have several chunks missing; I think they were “metadata” chunks. I downloaded them and put them in the proper directories, and everything checks out now.
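
For anyone hitting the same thing, this is the shape of the manual fix (paths are placeholders, and I'm reusing the first chunk hash from the log above purely to illustrate the naming; my understanding is that the local file storage nests each chunk under a subdirectory named after the first two hex characters of its id, while on Wasabi the chunks sit flat under chunks/):

    # flat on Wasabi:
    #   chunks/b461991e04a5b7ce7c159f43b278e3f8d13a2ee344aa9501656466e02282f939
    # nested in the local storage:
    #   chunks/b4/61991e04a5b7ce7c159f43b278e3f8d13a2ee344aa9501656466e02282f939
    mkdir -p /mnt/backup/duplicacy/chunks/b4
    mv ~/Downloads/b461991e04a5b7ce7c159f43b278e3f8d13a2ee344aa9501656466e02282f939 \
       /mnt/backup/duplicacy/chunks/b4/61991e04a5b7ce7c159f43b278e3f8d13a2ee344aa9501656466e02282f939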

I briefly looked at that commit; it does seem like it would have helped. Thanks.