Hello. First, I want to thank all the developers of Duplicacy and everyone who helps people on this forum.
I’m testing Duplicacy against 1 TB of MySQL database files. The first backup took 2-3 hours, which was fine for an initial backup. But incremental backups also take 2-3 hours, while restic, for example, takes 40-50 minutes.
Details: for the first ten minutes, files back up at 200-210 MB/s, but over the next hour the speed gradually drops to 130-140 MB/s. And I can’t see where the bottleneck is: it’s not the CPU (plenty of cores are available, and running the backup with multiple threads gives the same results), it’s not RAM (Duplicacy uses about 700 MB while hundreds of gigabytes are free), it’s not the disks (the server has 2 SATA SSDs in RAID-1, good for about 1 GB/s combined), and it’s not the network (Duplicacy never even approached 700 Mbps). It’s not the server running sftp either - it also has plenty of spare resources for Duplicacy.
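To try to narrow this down further, I’m planning to run Duplicacy’s built-in benchmark against the same SFTP storage. Something like the following, if I’m reading the benchmark docs right (the file size and chunk count here are just values I intend to try, matching my 8 backup threads):

```shell
# Run from inside the initialized repository directory.
# Measures local disk read/write speed, chunk split/encrypt speed,
# and upload/download throughput to the default storage.
duplicacy benchmark -file-size 256 -chunk-count 64 \
    -upload-threads 8 -download-threads 8
```

If the benchmark shows full speed to the storage, that would suggest the slowdown is on the chunking/hashing side rather than in the SFTP transfer.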
I’m not sure what the root cause is in my case, and I haven’t found anything quite like it on this forum (well, there are two posts and one issue describing similar symptoms, but those involve B2 and the numbers are quite different).
There’s also a post about SFTP issues, but again with different numbers.
I also found another issue about backup performance and will try those repository settings later.
Can you help me with this situation? I really want to see Duplicacy shine. Something is holding it back, and I don’t understand what exactly.
Duplicacy version: 2.4.0
Storage backend: sftp
Backup threads: 8
Repository settings: encrypted
Environment variables: DUPLICACY_PASSWORD and DUPLICACY_SSH_KEY_FILE
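For reference, this is roughly how I launch the backup (the password, key path, and repository directory below are placeholders, not my real values):

```shell
# Placeholders only - substitute real values.
export DUPLICACY_PASSWORD='***'              # storage encryption password
export DUPLICACY_SSH_KEY_FILE="$HOME/.ssh/id_rsa"  # key for the sftp backend

cd /var/lib/mysql          # the initialized repository directory
duplicacy backup -threads 8 -stats
```

Happy to post the `-stats` output from a full incremental run if that would help diagnose this.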