Best practice to back up from one Synology NAS to another

I have installed Duplicacy on a DS1513 NAS and want to back up to a DS411+ (same network).
What is the best way to do this? If I use SFTP I only get 10-15 MB/s (6 threads).

Maybe someone can help me.


Reduce the number of threads to 1. Disk arrays are bad at random IO.
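For the CLI, the thread count is just a flag on the backup command. A minimal sketch (the repository path is illustrative, and it assumes the repository was already initialized with `duplicacy init`):

```shell
# Enter the initialized repository (illustrative path).
cd /volume1/data

# Run the backup single-threaded to avoid random-IO thrashing on the array;
# -stats prints throughput so you can compare against multi-threaded runs.
duplicacy backup -threads 1 -stats
```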

Then, if ssh or sshd is a bottleneck (check which process is CPU-limited during the transfer, on both source and target), you can switch to a newer CLI engine and use an SMB endpoint.

On a tangent: if both Synology boxes use Btrfs, you can simply replicate snapshots between them. That will be much faster than any file-based backup tool.
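A minimal sketch of Btrfs snapshot replication over SSH, assuming read-only snapshots and illustrative paths and hostnames (DSM's Snapshot Replication package wraps the same mechanism with a GUI):

```shell
# On the source NAS: create a read-only snapshot (required for btrfs send).
btrfs subvolume snapshot -r /volume1/data /volume1/data@snap1

# Stream it to the target NAS; btrfs receive recreates the subvolume there.
btrfs send /volume1/data@snap1 | ssh admin@target-nas btrfs receive /volume1/backups

# Later snapshots can be sent incrementally against a common parent,
# so only changed blocks cross the wire:
btrfs send -p /volume1/data@snap1 /volume1/data@snap2 \
  | ssh admin@target-nas btrfs receive /volume1/backups
```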

Sorry for the stupid question, but how do I get the newer CLI engine?
I have 3.1.0 and the setting is set to “latest”.

You can edit duplicacy.json for the web UI to specify an arbitrary CLI version:
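A sketch of what that edit looks like, assuming the web UI keeps its settings in `~/.duplicacy-web/duplicacy.json` and reads the CLI version from a `cli_version` key (the version string below is just an example; restart the web UI afterwards):

```json
{
    "cli_version": "3.2.3"
}
```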

The SMB backend was introduced in

I suspect in your case the 411 is the culprit, so you may want a solution that does most of the work on the client. Even SMB may be too much for the 411 to handle. Confirm by looking at CPU utilization on the 411 during transfer (normalized per core).

Consider using NFS: mount a share from the 411 on your source NAS and back up to the mount. NFS does most of its work on the client, where you have a much more powerful CPU.
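A sketch of the client-side setup, assuming an NFS export named `/volume1/backup` has been enabled on the 411 (the IP address and paths are illustrative):

```shell
# On the source NAS (the DS1513): mount the 411's NFS export locally.
mkdir -p /mnt/ds411
mount -t nfs 192.168.1.20:/volume1/backup /mnt/ds411

# Then point the backup at /mnt/ds411 as a plain local directory.
# Chunking, hashing, and compression now all run on the faster source CPU;
# the 411 only has to serve raw file IO.
```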

I’ve now tried installing version 3.2.3, but it hasn’t made any difference. Everything seems quite odd. Regardless of the target I use, whether it’s SFTP, WebDAV, a USB 3.0 HD, or even Volume1 on the source DiskStation, I can’t get speeds above 20 MB/s. Normal copying within the DiskStation doesn’t pose any issues: I can copy to USBShare3 at 100 MB/s or within Volume1 at 90 MB/s.
Resource Monitor

You mean SMB did not make any difference vs SFTP? Or did you continue using SFTP? If so, then yes, there won’t be any difference; nothing changed in the SFTP remote.

Is this resource monitor from the source or the target NAS? Did you compare resource utilization on the source and target NAS?

It appears the machine is either CPU-limited (how many cores are there? It looks like one of the cores is fully saturated) and/or disk-performance-limited: the volume utilization is too high for such a low throughput, especially since you see:

Or likely both.

Does the NAS have enough free RAM to cache all the metadata? I see 75% of RAM is free; how much is that in GB?

What is the source data: a lot of small files? How many, ballpark?

Basically, the problem seems to be both seek latency and CPU performance. Switching to a less CPU-intensive target will help with the latter; adding RAM and/or a cache SSD will help with the former.

Of the same source dataset?

Did you compare duplicacy with 1 thread vs 4 threads?

I have solved the problem by moving Duplicacy to my new IPU613 Proxmox system :slight_smile:
Reading from one NAS and writing to the other, I get about 60 MB/s with -threads 4 -hash.


Bingo! That’s how NAS appliances are intended to be used. Synology tries to sell them as application servers, but that only works on paper, with the primary function of selling you a NAS. As you have seen, in real-world use they only half-work even as storage servers (Samba is permanently broken, among other things; they are OK only when used with Windows clients). Believing the marketing and attempting to use them as application servers only results in frustration.

The only thing I’ve noticed with my current setup is that the memory cache in the Docker container remains quite high even after all tasks have been completed. Is this behavior normal?
Apologies for the seemingly simple question, but Linux and Docker aren’t exactly my strong suits.

Yes. Unless some other process needs the RAM, Linux tries to keep recent data in various caches in case it can be reused.

Freeing the cache preemptively would be pointless: unused RAM is wasted RAM.

Since containers use the host’s kernel, they share access to all the kernel caches.

I’m not sure, but the value reported in the container likely includes kernel caches (including filesystem caches), which would be visible from all containers.
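You can see this distinction on any Linux host: `free` reports cache as its own column, and the kernel’s MemAvailable figure already counts reclaimable cache as usable memory.

```shell
# "buff/cache" is reclaimable: the kernel drops it as soon as a process needs RAM.
free -h

# MemAvailable = memory usable without swapping, including droppable cache.
grep -E '^(MemTotal|MemFree|MemAvailable|Cached):' /proc/meminfo
```

So a "high" cache number after a backup finishes is the kernel working as designed, not a leak.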

Thank you for your help and explanations. And thank you for your work on the Docker image.
