I’m testing restoring a file for the first time with Duplicacy (Web GUI), from a backup on Google Drive. My test is restoring one 100 GB file. The backup storage uses all-default values with regard to chunk size etc. My restore target directory is on an SSD.
I’m surprised that it’s actually quite slow. I have a 500 MBit/s connection, but Duplicacy is only using about 70 MBit/s of download bandwidth.
I have tested with different -threads values:
1 thread: Slow and extremely inconsistent, jumping around between 0 and 30 MBit/s, with the average probably around 7 MBit/s or so.
5 threads: Faster, but still inconsistent. Jumping around between 0 MBit/s and 70 MBit/s, on average probably around 30 MBit/s.
10 threads: Still jumping around between 0 MBit/s and 70 MBit/s, on average probably around 50 MBit/s.
20 threads: Somewhat consistent 70 MBit/s.
50 threads: Somewhat consistent 70 MBit/s.
So it seems that no matter how many threads I throw at the problem, the speed won’t go above 70 MBit/s.
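Some back-of-the-envelope numbers for what that ceiling implies. This is just a rough sketch of mine: the 4 MiB average chunk size is my assumption about Duplicacy’s default (the real chunker is variable-size), not something I’ve verified.

```python
# Rough estimate: how many chunks a 100 GB file becomes at an assumed
# ~4 MiB average chunk size, and the per-chunk download time implied
# by the ~70 MBit/s ceiling I'm seeing.

FILE_SIZE = 100 * 1024**3       # 100 GiB in bytes
AVG_CHUNK = 4 * 1024**2         # assumed 4 MiB average chunk size
LINK_MBIT = 70                  # observed throughput in MBit/s

chunks = FILE_SIZE // AVG_CHUNK
bytes_per_sec = LINK_MBIT * 1_000_000 / 8
secs_per_chunk = AVG_CHUNK / bytes_per_sec

print(f"~{chunks} chunks")                                  # ~25600 chunks
print(f"~{secs_per_chunk:.2f} s per chunk")                 # ~0.48 s per chunk
print(f"~{FILE_SIZE / bytes_per_sec / 3600:.1f} h total")   # ~3.4 h total
```

So at 70 MBit/s the restore would take over three hours, and each chunk is small enough that per-chunk request latency to Google Drive could easily dominate over raw transfer speed, which would explain why more threads help up to a point.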
For comparison, if I manually download a file from my Google Drive, I get these speeds, depending on the software used (different software uses different numbers of connections etc.):
Firefox: Consistent 130 MBit/s.
Google Chrome: Consistent 300 MBit/s.
JDownloader, set to 10 connections per download: Consistent 500 MBit/s.
So it’s definitely not a limit of my internet connection: other software that’s designed for quickly downloading files, like JDownloader, can make use of my full 500 MBit/s connection. Can Duplicacy be made faster here? What exactly does the -threads option do? If I set it to 50 threads, does that mean it downloads 50 chunks in parallel? Or does it download chunks one after another, with 50 threads per chunk?
For reference, my hardware:
Ryzen 3950X (16 cores)
128 GB RAM @ 3600 MHz