Can we cache chunks to be uploaded into memory or onto a different disk?

I have a bottleneck / upload / optimization question. I recently added a new destination where an absurd number of threads seems to be the key to upload throughput.

Due to some peering issues between the PC being backed up and the server, each upload thread is limited to 10 Mbps. The number of upload threads doesn’t seem to be capped, though, so I tried to saturate my gigabit connection by simply throwing more threads at the problem (100 threads × 10 Mbps ≈ 1 Gbps).

The Duplicacy benchmark confirmed that 100 upload threads seem to be the ticket for performance. However, the benchmark runs off an internal NVMe SSD.
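For reference, the benchmark run looked something like this; as I understand it, `benchmark` takes `-storage`, `-chunk-size` (in MB), and `-upload-threads` options, but treat this as a sketch and check against your CLI version (`cloud` is just a placeholder for my cloud storage name):

```
duplicacy benchmark -storage cloud -chunk-size 8 -upload-threads 100
```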

I’m running into a performance issue in practice, though, and I believe disk I/O is the culprit.

My backup set is mostly sparsebundles and RAW photos, i.e. 8 MB band files inside the sparsebundles and ~25 MB files for the images. The drive will read those at 160 MB/s sequentially, but when I hit it with 100 upload threads, Duplicacy reads them in parallel, which drops performance immensely because a spinning hard drive handles that much random I/O poorly. Backup sets on an SSD or a RAID 10 stripe suffer far less from this degradation.

What I’m wondering is whether there’s a way to have Duplicacy read the sparsebundles and files with a single thread (or as close to sequentially as possible), pack them into Duplicacy chunks, and then cache those chunks on SSD or in RAM for parallel uploading.

If I could cache, say, 4 GB worth of ready-to-upload chunks in memory, that should let the drive read at full speed while saturating my upload connection with 100 simultaneous 8 MB chunks going up.
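Conceptually it’s a single-producer / many-consumer pipeline. Here’s a minimal Go sketch of the idea (all names are made up; this isn’t Duplicacy’s actual code, just the shape of what I’m asking for):

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// A stand-in for a packed, ready-to-upload chunk. Hypothetical type,
// not Duplicacy's internal representation.
type chunk struct{ id int }

func main() {
	const (
		bufferChunks = 512  // 512 x 8 MB chunks ≈ the 4 GB RAM cache
		uploaders    = 100  // parallel upload workers
		totalChunks  = 2000 // pretend backup size
	)

	// The bounded channel *is* the cache: the single reader blocks once
	// 512 chunks are queued, so the HDD only ever sees one sequential reader.
	cache := make(chan chunk, bufferChunks)

	// One goroutine reads and packs chunks strictly in order.
	go func() {
		for i := 0; i < totalChunks; i++ {
			cache <- chunk{id: i} // sequential read + pack would happen here
		}
		close(cache)
	}()

	// 100 slow uploaders drain the cache concurrently.
	var wg sync.WaitGroup
	for w := 0; w < uploaders; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for c := range cache {
				// Simulate a throttled upload (a real 8 MB chunk at
				// 10 Mbps would take ~6.4 s).
				time.Sleep(10 * time.Millisecond)
				_ = c
			}
		}()
	}
	wg.Wait()
	fmt.Println("all chunks uploaded")
}
```

The bounded channel is what caps memory at the 4 GB budget while guaranteeing the disk only ever services one sequential reader.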

Is that possible, or are there any other suggestions that would let me run a perhaps silly number of threads while keeping the HDD reads as sequential as possible?

Duplicacy always uses a single thread to read the files to be backed up, no matter how many uploading threads you specify.

If your drive can only sustain a read speed of 160 MB/s, then that is the bottleneck, and the optimal number of uploading threads should be 16.


Hmmm, I’ll have to do some more benchmarking / testing on that then.

With the same files, I was seeing quite different performance depending on thread count and destination storage:

HDD to local SSD storage (1 thread): 1050 Mbps
SSD to local SSD storage (4 threads): 3000 Mbps
SSD to cloud storage with 10 Mbps cap (4 threads): 40 Mbps
HDD to cloud storage with 10 Mbps cap (4 threads): 40 Mbps
SSD to cloud storage with 10 Mbps cap (100 threads): 1000 Mbps
HDD to cloud storage with 10 Mbps cap (100 threads): 120 Mbps

So it seems the HDD can hit its max read speed with one thread to the SSD storage, but for whatever reason, when I try to use a bunch of threads to the cloud storage, it bottlenecks somewhere.

But for a local SSD to the cloud storage, it’ll easily saturate 1000 Mbps with 100 threads.

My workaround for now is to back up to the local SSD destination first, and then copy from the SSD to the cloud storage with 100 threads.
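In CLI terms the two-step workaround looks roughly like this, assuming `local-ssd` and `cloud` are storage names configured with `duplicacy add` (and that `cloud` was added as copy-compatible via the `-copy` option); exact flags may differ by version:

```
duplicacy backup -storage local-ssd -threads 1
duplicacy copy -from local-ssd -to cloud -threads 100
```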