Hello!
Is it possible that “restore” is limited to a single download thread and the -threads parameter has no effect?
Here’s why I think this:
I have a repository with a fixed chunk size of 4 MB for better deduplication. (With dynamic chunk sizes, Duplicacy packs additional files into a chunk, so moving or renaming a file produces new chunks, which I want to avoid.)
The backed-up data includes my user profile, which contains a Thunderbird profile with over 500,000 EML files, most of them under 1 MB (just 15 GB on disk). Most files in my user profile are small: in total there are 1,067,860 files with a combined size of 374,494 MB, divided into 1,141,977 chunks.
I am using SFTP as the backend.
I am now trying to restore this backup. While the backup with -threads 100 completed in just under an hour, fully saturating my 1 Gbps uplink (which was not the case without -threads 100), the restore is taking about 16 hours, and the 1 Gbps connection is barely utilized. (And yes, the server can deliver 1 Gbps, as verified with ssh user@example.invalid 'dd if=/dev/urandom bs=1k count=1024000' | pv | dd of=/dev/null bs=1k count=1024000.)
Another indicator:
During the upload, the debug output showed that the chunks were not uploaded sequentially. The log looked like this, for example:
[…]
Uploaded chunk 64800
Uploaded chunk 64798
Uploaded chunk 64801
Uploaded chunk 64802
Uploaded chunk 64799
Uploaded chunk 64805
Uploaded chunk 64803
Uploaded chunk 64804
Uploaded chunk 64806
[…]
During the restore, however, the chunks are being downloaded strictly sequentially:
[…]
Downloaded chunk 64800
Downloaded chunk 64801
Downloaded chunk 64802
Downloaded chunk 64803
Downloaded chunk 64804
Downloaded chunk 64805
Downloaded chunk 64806
Downloaded chunk 64807
Downloaded chunk 64808
[…]
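To illustrate why the log order is telling: with concurrent workers, chunks of varying size finish out of order, while a single worker always completes them in submission order. The following is a toy Python sketch (not Duplicacy code; chunk IDs and durations are made up) that reproduces the pattern:

```python
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def download_chunk(chunk_id, duration):
    # Simulated download: the sleep stands in for network transfer time.
    time.sleep(duration)
    return chunk_id

def completion_order(num_threads, durations):
    """Return the order in which simulated chunk downloads complete."""
    order = []
    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        futures = [pool.submit(download_chunk, i, d)
                   for i, d in enumerate(durations)]
        for fut in as_completed(futures):
            order.append(fut.result())
    return order

# Chunks of different sizes take different amounts of time.
durations = [0.08, 0.02, 0.08, 0.02, 0.08, 0.02]

print(completion_order(1, durations))  # one worker: always [0, 1, 2, 3, 4, 5]
print(completion_order(4, durations))  # several workers: typically interleaved
```

With one worker the completion order is always sequential, exactly like the restore log above; the interleaved order during upload matches what multiple concurrent threads produce.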
The repository uses RSA encryption, and I am running Duplicacy version 3.2.3 (254953).
I suspect that the restore operation runs on a single thread despite the -threads parameter, which would explain the poor restore speed. Is that correct?