UPDATE: Extremely slow restore of a 500GB file on Wasabi

I am currently testing the GUI version on Linux Mint.
Uploading 500GB to Wasabi took about 7 hours.
The restore has been running for 7 hours (only 2010 chunks loaded so far) and the estimated time to finish is 10 days.
Download speed is about 25 KB/s.
My internet connection is about 125 MB/s (~1 Gbit/s).

UPDATE:
After setting -threads 32 the download speed is now around 13 MB/s, with an estimated completion time of 10 hours.
It’s a major drawback that the chunk size can’t be set in the GUI — this would significantly speed up operations involving large files.
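For anyone doing this from the CLI instead of the GUI, the equivalent would look roughly like the following. The flag names are from memory and the revision number, snapshot id and storage URL are just placeholders, so check the -help output before copying. As far as I understand, the chunk size can only be chosen when the storage is initialized, not changed afterwards:

    # restore with 32 download threads (revision number is just an example)
    duplicacy restore -r 1 -threads 32

    # chunk size is fixed at init time; exact value format per "duplicacy init -help"
    duplicacy init -chunk-size 32M <snapshot id> <storage url>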

That’s the problem, and I doubt chunk size has anything to do with it.

Run duplicacy benchmark with varying chunk sizes to confirm. You should be able to saturate your gigabit connection even with the default chunking.
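Something along these lines, for example (flag names from memory, so double-check with duplicacy benchmark -help; the chunk size here is in MB and 16 is just an arbitrary value to compare against the 4 MB default):

    # throughput with the default 4 MB average chunk size
    duplicacy benchmark -upload-threads 4 -download-threads 4

    # repeat with larger chunks to see whether chunking is the bottleneck
    duplicacy benchmark -chunk-size 16 -upload-threads 4 -download-threads 4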

I would also check for bufferbloat on the upstream link.
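A simple way to check, for example: keep a ping running while the link is busy and watch the latency (1.1.1.1 is just an example of a stable public host):

    # baseline latency while the line is idle
    ping -c 30 1.1.1.1

    # start the restore (or anything that saturates the link), then ping again;
    # latency jumping from tens of ms to hundreds of ms points to bufferbloat
    ping -c 30 1.1.1.1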

I wasn’t referring to my overall internet connection being slow – I’m on a ~1 Gbit/s line (~125 MB/s).
My issue was specifically with Duplicacy restore speed, which was stuck at ~25 KB/s despite having plenty of bandwidth available.

Increasing -threads helped significantly. Chunk size likely plays a role in such large archives, too.

Thanks for the suggestions though!

I too was referring to your Duplicacy performance. You wrote in your original post that your available downstream bandwidth is gigabit, but did not mention upstream. If your upstream is saturated and suffering from bufferbloat, ACK packets will be delayed and your download speed will suffer greatly, which is consistent with what you observe.

Increasing the number of threads to an exorbitant number for what is still only a fraction of your available bandwidth is hardly a solution though: it attempts to hide latency, but at the expense of more transactions, more latency, and more upstream saturation. Right now 32 threads give you about 100 Mbps (13 MB/s), so only about 3 Mbps per thread. With Wasabi and the default average chunk size, you should be getting at least double that per thread.
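One way to see the per-thread ceiling, for example (again, flag names from memory; check duplicacy benchmark -help):

    # throughput of a single download thread with the default chunk size
    duplicacy benchmark -download-threads 1

    # compare with a moderate thread count; if a handful of threads already
    # saturates the link, 32 threads are only adding overhead
    duplicacy benchmark -download-threads 8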