Is 2 hours normal for 2.6TB local to a fully seeded remote storage share?

I’m running Duplicacy (on unRAID) to back up about 2.6TB to a local drive. I have a remote (unRAID) share mapped to my host over a server-to-server WireGuard tunnel. My maximum local upload is 40Mbps; everything else in the chain is 1Gbps. With no changes between local and remote, it takes about 2 hours for a Copy to show as completed. I assume it is doing a check each time to validate the data.

Is this speed typical? There’s no feedback in the GUI to show what is happening. Here is the time jump in the logs. Everything listed before and after completed in less than a minute.

2022-04-09 08:24:34.019 INFO SNAPSHOT_EXIST Snapshot unRAID_Flash at revision 8 already exists at the destination storage
2022-04-09 10:33:11.388 INFO SNAPSHOT_COPY Chunks to copy: 17, to skip: 519006, total: 519023

No, Copy doesn’t validate skipped chunks.
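As a minimal sketch of the idea (assuming Copy decides what to transfer by comparing the chunk IDs listed on each storage, not by reading chunk contents; the IDs below are made up, not real Duplicacy chunk names):

```python
# Hypothetical chunk IDs on each storage; a copy job only needs the
# set difference, so chunks already at the destination are skipped
# without being downloaded or verified.
source = {"7a1f", "7b02", "7903", "78aa"}
destination = {"7a1f", "7b02", "7903"}

to_copy = sorted(source - destination)   # only missing chunks are transferred
to_skip = len(source & destination)      # already present, skipped untouched

print(f"Chunks to copy: {len(to_copy)}, to skip: {to_skip}, total: {len(source)}")
```

This is why the long step in your log is listing, not copying: building those two sets requires enumerating every chunk directory on the destination.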

You can add -d as a global option so the job prints more verbose messages to the log, which may show you which step is taking so long.
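In the Web GUI this goes in the job's global options field; the CLI equivalent looks like this (the storage names `local` and `remote` are placeholders for your own):

```shell
# -d is a global option, so it goes before the command name.
# With it set, TRACE-level messages such as LIST_FILES appear in the log.
duplicacy -d copy -from local -to remote
```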

I added -d and it does show me what is taking so long. I have over half a million chunks and it is working its way through them.

2022-04-10 08:04:12.062 TRACE LIST_FILES Listing chunks/7b/
2022-04-10 08:04:40.316 TRACE LIST_FILES Listing chunks/7a/
2022-04-10 08:04:46.156 TRACE LIST_FILES Listing chunks/79/
2022-04-10 08:05:14.373 TRACE LIST_FILES Listing chunks/78/
2022-04-10 08:05:42.477 TRACE LIST_FILES Listing chunks/77/
2022-04-10 08:06:12.064 TRACE LIST_FILES Listing chunks/76/

On average it took more than 20 seconds to list each directory. Try running ls on those directories on the server to see whether the number of files under each directory is what makes the listing so slow.
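Something like the loop below will print a file count per chunk subdirectory, largest first (the toy tree at the top is only so the example runs standalone; point `CHUNKS` at the real chunks directory on the remote storage instead):

```shell
# Toy stand-in for the storage's chunks directory -- replace with the real path.
CHUNKS=$(mktemp -d)/chunks
mkdir -p "$CHUNKS/7a" "$CHUNKS/7b"
touch "$CHUNKS/7a/c1" "$CHUNKS/7a/c2" "$CHUNKS/7b/c3"

# Count files in each two-hex-digit subdirectory, sorted largest first,
# to spot any directory big enough to make a single listing take ~20s.
for d in "$CHUNKS"/*/; do
  printf '%s %s\n' "$(ls -1 "$d" | wc -l)" "$d"
done | sort -rn
```

With roughly 519,000 chunks spread over 256 two-hex-digit directories, each holds on the order of 2,000 files, so the counts themselves may be unremarkable and the bottleneck could instead be per-request latency on the network share.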