Duplicacy Web package for Synology DSM 7?

Repositories/schedules configuration is in duplicacy.json. Application settings are in settings.json. If you use access token files, those files will also need to be saved and restored to the same path as before, because the path to them is encoded in duplicacy.json. This is something that needs to be fixed eventually: instead of saving the path to a token file, the content of the token file should be saved.
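For what it's worth, a minimal sketch of saving that state, assuming the package keeps its home under /volume1/@appstore/duplicacy/.duplicacy-web (the path that comes up later in this thread); adjust paths to your install:

    # Sketch: archive duplicacy-web configuration
    WEB_HOME=/volume1/@appstore/duplicacy/.duplicacy-web
    tar -czf /volume1/backup/duplicacy-web-config.tgz \
        "$WEB_HOME/duplicacy.json" "$WEB_HOME/settings.json"
    # Token files must be restored to the exact paths recorded in
    # duplicacy.json, so archive those paths as well if you use any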

Personally, I don't back it up. It's trivial to set up from scratch. All I save is the storage access credentials in my password manager and the backup encryption password.

Hmmm… pre and post scripts don't seem to be running under the "duplicacy" user that the package creates, or at all. Any ideas? Permissions have been checked, and the created duplicacy user has full permissions to the scripts and the entire directory structure above them.

Where did you place the scripts?

In the /volume1/@appstore/duplicacy/.duplicacy-web/repositories/localhost/0/.duplicacy/scripts folder

Yes, that’s the correct location, unless you changed the Temporary directory location in the settings. Please verify that:

  1. the filename is pre-backup
  2. the execute attribute is set: chmod +x pre-backup
  3. if you specify an interpreter in your script, make sure it points to something that exists (e.g. bash on DSM 7 may be at /bin/bash or elsewhere; verify)
  4. add something obvious to your script, like echo hi > /tmp/hi, to make sure it runs at all before you debug more complex logic there (see the sketch after this list).
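A minimal sketch of step 4, assuming /bin/sh exists on your DSM install (verify the interpreter path per step 3):

    #!/bin/sh
    # Minimal pre-backup hook: just prove it fires before adding real logic
    echo "pre-backup ran at $(date)" > /tmp/hi
    exit 0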

The temporary directory is the default of "/tmp/duplicacy/repositories". The file structure in that location is missing the scripts folder. How does the temporary directory work in duplicacy-web? Does it copy things from the repositories folder inside the .duplicacy-web folder to the tmp location and then execute from /tmp…, or does it execute from the .duplicacy-web/… location? From the log it seems to be executing from the /tmp… location.

I went and manually copied the scripts folders to the appropriate repositories in the /tmp location and chown/chmod'd them as necessary. That doesn't seem right… shouldn't it do that automatically? I stopped and restarted the service a few times to see if it would copy them on startup, but no such thing happened. Does it only happen when the settings are modified, or what? I wish this were documented…

Duplicacy web does not support scripts. Copying them to the folder duplicacy cli is executing from is a hack, so nothing automatic happens. That location is temporary and can be deleted; duplicacy-web initializes a duplicacy repo there to perform the actions. It is persistent until you delete it, though, so your scripts folder will persist. Unless, of course, your OS purges the /tmp folder on boot, in which case it's better to change that location to somewhere more persistent.

The easiest approach would be to start a backup and open the log. This will tell you the exact path to the temporary repository where it starts the cli from. You need to put your script under /that/path/to/temp/repo/.duplicacy/scripts
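For example, using the repository path mentioned earlier in this thread (yours will differ; /path/to/your/pre-backup stands in for wherever you keep the script):

    # Place the hook in the temporary repository duplicacy cli runs from
    REPO=/tmp/duplicacy/repositories/localhost/0/.duplicacy
    mkdir -p "$REPO/scripts"
    cp /path/to/your/pre-backup "$REPO/scripts/pre-backup"
    chmod +x "$REPO/scripts/pre-backup"
    chown duplicacy "$REPO/scripts/pre-backup"   # the user the package runs as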


Just updated to DSM 7.1. All scripts now fail even though everything else is the same. I have verified the system internal user/group “duplicacy” has full permissions, including read and execute, on the entire directory tree. I have also done a ps aux and the duplicacy_web process is running under the user “duplicacy”. Here is the log output from a failed task:

2022-05-12 20:39:30.072 INFO SCRIPT_RUN Running script /tmp/duplicacy/repositories/localhost/all/.duplicacy/scripts/pre-check
2022-05-12 20:39:30.072 ERROR SCRIPT_ERROR Failed to run /tmp/duplicacy/repositories/localhost/all/.duplicacy/scripts/pre-check script: fork/exec /tmp/duplicacy/repositories/localhost/all/.duplicacy/scripts/pre-check: permission denied
Failed to run /tmp/duplicacy/repositories/localhost/all/.duplicacy/scripts/pre-check script: fork/exec /tmp/duplicacy/repositories/localhost/all/.duplicacy/scripts/pre-check: permission denied

Any ideas?

Are you using the most recent version?

Well, didn’t see that… I’ll try it out.

Just found this thread and the Syno packages release thread. I wasn't aware that there were native Duplicacy packages for Synology. I am currently using @saspus's Duplicacy-Web docker image on my Syno DS920+. What are the advantages/disadvantages of using one or the other?

I did upgrade to 1.6.2, and this issue still occurs: scripts will not run.

Duplicacy does not have dependencies. Therefore it does not benefit from containerization. People use it in docker simply to ease deployment.

Before gchen produced Synology packages, I myself switched to running it directly: Duplicacy Web on Synology Diskstation without Docker | Trinkets, Odds, and Ends. I'd rather have the extra ram available for caching than consumed by the docker engine.

Should you use the packages? In my opinion, the package system on Synology is horrific. It's very tricky to make a working package in the first place, and "backward compatibility" apparently is not in their vocabulary.

The best advice I could give is: don't run it on Synology in any shape or form, at all. Run it on another compute device. That way you will not be killing your storage solution's performance by evicting the filesystem cache on every duplicacy invocation, and you will have a much better behaved system; a nas is best used as a nas, not the do-it-all application server Synology marketing wants you to think it is.


Thanks for the reply.

People use it in docker simply to ease deployment.

Ditto. I love Docker. I use it for many things on both of my NASes.

I'd rather have the extra ram available for caching than consumed by the docker engine.

Hmmm. Total RAM usage on my NAS rarely exceeds 15-20%, so I’m not too concerned about that.

a nas is best used as a nas, not the do-it-all application server Synology marketing wants you to think it is.

I hear you, but using a nas as a storage device that also incorporates a cloud backup process doesn't sound like a do-it-all approach to me. The NAS I am running Duplicacy on has one function: to store data and occasionally serve files within my LAN. I've been monitoring resource usage since installing Duplicacy, and volume utilization is peaking. However, I'm still at the start of ingesting data into a new cloud storage, so I would expect it to max out reads while that is happening. I'll watch it a bit more closely once I've accomplished a complete backup and see how that's working out.

The best advice I could give is: don't run it on Synology in any shape or form, at all. Run it on another compute device. That way you will not be killing your storage solution's performance by evicting the filesystem cache on every duplicacy invocation

Are you suggesting something like a desktop with mounted shares? Isn't that pretty close to containerization as far as disk cache goes, or is it the local system's cache that's now the one being abused? Would an NVMe cache drive on the NAS resolve the cache issues, or possibly an SSD added to the NAS as a separate volume dedicated solely to Docker?

I'm not talking about ram usage. I'm talking about unused ram. The Linux filesystem cache resides in unused ram. When duplicacy runs and allocates a few GB of ram, that cache is evicted, and as a result array performance plummets (including for duplicacy itself! Instead of instantly fetching metadata from ram, it now has to touch the disks, a massive amount of unnecessary IO) until duplicacy frees the memory and the cache warms up again over the next day or so (whatever the usage period is). Unless there is another duplicacy invocation waiting to murder it all.
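You can watch this with plain Linux tools; a rough sketch:

    # 'buff/cache' is the filesystem cache living in otherwise-unused ram
    free -h
    # ...run a duplicacy backup, then look again: a large drop in buff/cache
    # means the cache was evicted and reads now have to touch the disks
    free -h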

Right. Synology provides HyperBackup, and while it is horrific, unstable, and unreliable garbage, it has one benefit: it uses almost no ram. If you were to use that, there would be no problem with cache eviction. gchen was working on improvements to memory utilization; perhaps when that is released the situation will change.

I'm not sure about this. Of course, once your initial backup is done, periodic duplicacy invocations should produce very little IO, since all metadata reads will come from ram. During the initial backup, it depends on how many small files you have and how fast your upstream connection is. Still, cache availability could allow prefetch and reduce io pressure just enough for disk utilization not to be a bottleneck.

Very likely DSM 7.1 doesn't allow anything under /tmp to be executable (if pre-check already has the executable permission). Try changing the Temporary Directory on the Settings page. /tmp usually gets cleared after a reboot.
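One quick way to check for that, assuming /tmp is a separate mount on your box (if it is mounted noexec, exec fails exactly like this even with the execute bit set):

    # Look for the 'noexec' option on the /tmp mount
    mount | grep ' /tmp '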

So the temporary directory on the settings page can be any directory? As in: is this a temporary directory for duplicacy to do work in, or should it point to the system temp directory?

That’s the directory where duplicacy-web creates local repositories for duplicacy-cli to work from.

They can be anywhere you want. There is a separate location setting for them because they are discardable and don't need to be backed up (unlike, say, the logs folder). Duplicacy web will just create a new folder and a .duplicacy/preferences file in there on the next backup run.

However, putting that directory in the system temp is a bad idea if the host is rebooted occasionally: these temporary repositories, among other things, contain the duplicacy cache. Since temp folders are cleared on reboot, this leads to worse performance and wasted bandwidth, as duplicacy now needs to download from the cloud things it could otherwise have fetched from the cache.
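As a sketch, a persistent location on the data volume could look like this (the folder name is arbitrary; point the Temporary Directory setting in the web UI at it afterwards):

    # Create a discardable-but-persistent work area outside /tmp
    mkdir -p /volume1/duplicacy-temp
    chown duplicacy /volume1/duplicacy-temp
    # Exclude this path from your own backups/snapshots; duplicacy-web
    # recreates the repositories and .duplicacy/preferences here as needed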

On macOS, for example, there is a special location for these types of caches: it's literally called Library/Caches. Whether DSM has a designated location for this kind of "discardable but worth keeping around" stuff, I don't know. If not, you can just make one in a location that is not being backed up, snapshotted, etc.

Note that the tmp directory on some Synology NAS models is located in memory.
At least on my NAS, I ended up in an out-of-memory/swapping state due to the growing duplicacy cache.
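You can check whether that is the case on your model with, e.g.:

    # A 'tmpfs' filesystem means /tmp lives in ram, so a growing duplicacy
    # cache there competes with applications for memory
    df -h /tmp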

–mb