Extremely slow restores

I’m testing the restore speed with the latest version of Duplicacy using the following command:

duplicacy restore -r 1 -stats -threads 32 -- "path/to/some/folder"

I deleted some files and wanted to see how long it took Duplicacy to restore them. I got the following results over a symmetric 1 Gbps Ethernet connection to Backblaze B2:

Restored /mnt/tank to revision 1                                                                                                                            
Files: 116 total, 6802.86M bytes
Downloaded 116 file, 6802.86M bytes, 1412 chunks
Skipped 0 file, 0 bytes
Total running time: 00:07:21

Is it supposed to be this slow, around 1 GB / minute? If not, is there anything that I can do to speed it up? Otherwise, it would take 16 hours just to restore 1 TB.
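To sanity-check the numbers (rough arithmetic, decimal units assumed): 6802.86 MB in 7m21s works out to about 15.4 MB/s; the "16 hours" figure comes from rounding that up to 1 GB/min, while the measured rate extrapolates to roughly 18 hours per TB.

```shell
# Rough arithmetic, decimal units assumed (1 TB = 1e6 MB).
awk 'BEGIN {
  secs = 7*60 + 21                      # total running time in seconds
  rate = 6802.86 / secs                 # MB/s achieved by the restore
  printf "%.1f MB/s\n", rate
  printf "%.0f hours per TB\n", 1e6 / rate / 3600
}'
```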

Based on the output, it seems to be downloading far fewer than 32 chunks at once. Looking at network stats, I don’t see the download speed go past 200 Mbps, and more often it stays below 100 Mbps.

AFAIK the parallelism applies within the download of each chunk, not to downloading multiple chunks concurrently. If chunks are small, the extra threads don’t help much.

You can improve it by increasing the default average chunk size from 4 MB to, e.g., 64 MB. But this will require re-initializing the target storage.
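If I remember the CLI correctly, the chunk size is set at init time with the `-chunk-size` option; something like the following (a sketch, where `my-backup` and the B2 URL are placeholders for your own snapshot ID and storage):

```shell
# Hypothetical example: create a storage with a 64 MB average chunk size.
# "my-backup" and "b2://my-bucket" are placeholders, not real identifiers.
duplicacy init -chunk-size 64M my-backup b2://my-bucket
```

Note this only affects newly initialized storages; existing backups keep the chunk size they were created with.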

You can use the benchmark command to experiment with different chunk sizes and thread counts and see how they affect throughput.
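If I recall the options correctly, a run along these lines would test 64 MB chunks with the same thread counts (a sketch; check `duplicacy benchmark -help` for the exact flags on your version):

```shell
# Hedged sketch: benchmark with a larger chunk size (in MB) and 32 threads each way.
duplicacy benchmark -chunk-size 64 -upload-threads 32 -download-threads 32
```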

I used the benchmark command with 32 upload and download threads.

Uploaded 256.00M bytes in 13.07s: 19.58M/s
Downloaded 256.00M bytes in 5.37s: 47.70M/s

It doesn’t seem to be a network issue based on these results, which are much higher than what I actually got during the restore.
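Converting both figures to line rate makes the gap concrete (decimal units assumed): the benchmark download is well below the 1 Gbps link, but still far above what the restore achieved.

```shell
# MB/s -> Mbps: benchmark download (47.70 MB/s) vs the restore (6802.86 MB / 441 s).
awk 'BEGIN { printf "benchmark: %.0f Mbps, restore: %.0f Mbps\n", 47.70*8, 6802.86/441*8 }'
```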