Duplicacy eating all RAM, then transfer to B2 slows to <1 Mbps

I’m running Duplicacy (Web UI) in Docker on an Ubuntu VM in Proxmox.

I’m transferring from an NFS share on the same physical box to Backblaze B2.

When the transfer starts, it runs fast at about 800 Mbps, consuming most of my 1 Gbps pipe as desired. RAM consumption on the VM at this point is about 1 GB.

Over the next several minutes, RAM usage increases to 4 GB (100% of the RAM allocated to the VM). When RAM maxes out, the transfer begins to slow down. After about 10 minutes it is down to <150 Mbps. On my last test this continued until, after a couple of hours, it slowed to a crawl of <1 Mbps.

If I cancel the job, RAM usage stays high at about 3.5 GB. top shows Docker/Duplicacy using almost no memory, but about 3.3 GB sitting in cache.

Any ideas?

There is a memory usage optimization on the way: Rewrite the backup procedure to reduce memory usage by gilbertchen · Pull Request #625 · gilbertchen/duplicacy · GitHub. 4 GB is not unexpected, depending on the number of files in the backup set.

You can split your files into several datasets to minimize peak memory usage.
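For example, with the CLI you could give each subset of your data its own repository and snapshot ID against the same storage, so each run only has to hold metadata for part of the tree. This is only a sketch; the paths, bucket name, and snapshot IDs are placeholders, and in the Web UI the equivalent is simply creating multiple backup entries.

```sh
# Hypothetical split into two smaller datasets instead of one large one.
# Each repository gets its own snapshot ID but points at the same B2 storage.
cd /mnt/nfs/photos
duplicacy init photos b2://my-bucket      # snapshot ID "photos"
duplicacy backup

cd /mnt/nfs/documents
duplicacy init documents b2://my-bucket   # snapshot ID "documents"
duplicacy backup
```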

You can also set the DUPLICACY_ATTRIBUTE_THRESHOLD=1 environment variable to make duplicacy discard extended attributes earlier; this may help slightly.
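Since you’re running the Web UI in Docker, one way to pass that variable is through the container’s environment. The image name, volumes, and port below are placeholders for whatever you already have configured:

```sh
# Placeholder image/volume names; substitute your existing setup.
docker run -d \
  -e DUPLICACY_ATTRIBUTE_THRESHOLD=1 \
  -v /mnt/nfs:/backuproot:ro \
  -v /srv/duplicacy/config:/config \
  -p 3875:3875 \
  your-duplicacy-web-image
```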

That’s how Linux behaves. The cache counts as available memory, and there is no reason to free it until something else needs it.
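You can see this with `free`: the buff/cache column is reclaimable, and the "available" column is what actually matters for new allocations.

```sh
# "buff/cache" is reclaimable page cache; the "available" column is what
# new allocations can actually use, so a large cache figure is harmless.
free -h

# If you want to rule the cache out entirely, you can drop it manually
# (safe, but reads will be slower until the cache warms back up):
sync && echo 3 | sudo tee /proc/sys/vm/drop_caches
```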

That makes sense, and I think my earlier theory that RAM/cache pressure was slowing the transfer may have been wrong.

Today I rebooted the VM and started another backup. The backup is running at only 40 Mbps even though there is plenty of free RAM. This is pretty odd.

What’s the best way for me to troubleshoot the speed issue?

Increase the number of threads. You can expect about 10 Mbps per connection from B2 endpoints; the service is intended to be used with multiple threads. IIRC the default is 4, which is consistent with the 40 Mbps you are seeing. Increase that number.
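With the CLI that is the `-threads` option on the backup command; in the Web UI you can add the same option in the backup job’s options field. The value of 8 below is just an example to start from:

```sh
# Run the backup with 8 upload threads instead of the default;
# B2 throughput scales roughly with the number of connections.
duplicacy backup -threads 8 -stats
```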