Thanks for the reply.
People use it in Docker simply to ease deployment.
Ditto. I love Docker. I use it for many things on both of my NASes.
I’d rather have extra RAM available for caching than consumed by the Docker engine.
Hmmm. Total RAM usage on my NAS rarely exceeds 15-20%, so I’m not too concerned about that.
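That said, if the Docker engine’s footprint ever did become a concern, my understanding is the container itself can be capped. A rough sketch, with a hypothetical image name and limits:

```
# Hedged sketch: bound the container's memory/CPU so Docker overhead stays predictable.
# Image name, paths, and limits are all hypothetical.
docker run -d --name duplicacy \
  --memory 512m --cpus 1 \
  -v /volume1/data:/data \
  some/duplicacy-image
```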
A NAS is best used as a NAS, not the do-it-all application server Synology marketing wants you to think it is.
I hear you, but using a NAS as a storage device that also runs a cloud backup process doesn’t sound like a do-it-all approach to me. The NAS I am running Duplicacy on has one function: to store data and occasionally serve files within my LAN. I’ve been monitoring resource usage since installing Duplicacy, and volume utilization is peaking. However, I’m still at the start of ingesting data into new cloud storage, so I would expect reads to max out while that is happening. I’ll watch it more closely once I’ve completed a full backup and see how things look.
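If the ingest keeps pegging the volume, I may try throttling it from the CLI. Something like this sketch, assuming Duplicacy’s `-threads` and `-limit-rate` backup options, with a hypothetical repository path and rate cap:

```
# Hedged sketch: slow the initial backup so reads/uploads don't saturate the volume.
# Repository path and rate cap are hypothetical.
cd /volume1/data                                        # repository root on the NAS
duplicacy backup -stats -threads 1 -limit-rate 10240    # cap upload at ~10 MB/s, single thread
```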
The best advice I could give is: don’t run it on the Synology in any shape or form, at all. Run it on another compute device. That way you won’t be killing your storage solution’s performance by evicting the filesystem cache on every Duplicacy invocation.
Are you suggesting something like a desktop with mounted shares? Isn’t that pretty close to containerization as far as disk cache goes, or is it the local system’s cache that’s now the one being abused? Would an NVMe cache drive on the NAS resolve the cache issues, or possibly an SSD added to the NAS as a separate volume dedicated solely to Docker?
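For context, here’s roughly what I picture that separate-compute setup looking like. A minimal sketch, assuming an NFS export from the NAS; the hostname, paths, snapshot ID, and storage URL are all hypothetical:

```
# Hedged sketch: mount the NAS share on another Linux box and run Duplicacy there,
# so cache pressure lands on that machine rather than the NAS.
sudo mount -t nfs nas.local:/volume1/data /mnt/nas-data
cd /mnt/nas-data
duplicacy init mybackup b2://my-bucket   # one-time init; snapshot ID and bucket are hypothetical
duplicacy backup -stats
```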