Where did you place the scripts?
In the /volume1/@appstore/duplicacy/.duplicacy-web/repositories/localhost/0/.duplicacy/scripts folder
Yes, that's the correct location, unless you changed the Temporary directory
location in the settings. Please verify that:
- filename is pre-backup
- execute attribute is set: chmod +x pre-backup
- if you specify an interpreter in your script, make sure it points to something that exists (e.g. bash on DSM 7 may be at /bin/bash or elsewhere; verify)
- add something obvious to your script, like echo hi > /tmp/hi, to make sure that works before you debug more complex logic there.
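For example, a minimal sketch of such a test script (the shebang path is an assumption; verify where bash actually lives on your DSM first):

#!/bin/bash
# minimal pre-backup test: if /tmp/hi appears after a backup run,
# the script was found and executed
echo hi > /tmp/hi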
The temporary directory is the default of "/tmp/duplicacy/repositories". The file structure in that location is missing the scripts folder. How does the temporary directory work in duplicacy-web? Does it just copy the stuff from the repositories folder in the .duplicacy-web folder to the tmp location and then execute from /tmp… or does it execute from the .duplicacy-web/… location? I see from the log that it seems to be executing from the /tmp… location.
I went and manually copied the scripts folders to the appropriate repositories in the /tmp location and chown/chmod'd as necessary. That doesn't seem right… shouldn't it do that automatically? I did stop and restart the service a few times to see if it would copy on startup, but no such thing happened. Only on modifying the settings, or what? I wish this were documented…
Duplicacy web does not support scripts. Copying them to the folder duplicacy cli is executing from is a hack, so nothing is automatic. That location is temporary and can be deleted; duplicacy-web initializes a duplicacy repo there to perform the actions. It is persistent until you delete it, though, so your scripts folder will persist. Unless, of course, your OS purges the /tmp folder on boot, in which case it's better to change that location to somewhere more persistent.
The easiest would be to start a backup and open the log. This will tell you the exact path to the temporary repository where it starts the cli from. You need to put your script under /that/path/to/temp/repo/.duplicacy/scripts
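For example, using the paths mentioned earlier in this thread (yours may differ; take the exact temporary repository path from your own backup log):

# copy the script into the temporary repository duplicacy cli runs from
mkdir -p /tmp/duplicacy/repositories/localhost/0/.duplicacy/scripts
cp /volume1/@appstore/duplicacy/.duplicacy-web/repositories/localhost/0/.duplicacy/scripts/pre-backup \
   /tmp/duplicacy/repositories/localhost/0/.duplicacy/scripts/
chmod +x /tmp/duplicacy/repositories/localhost/0/.duplicacy/scripts/pre-backup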
Just updated to DSM 7.1. All scripts now fail even though everything else is the same. I have verified that the system internal user/group "duplicacy" has full permissions, including read and execute, on the entire directory tree. I have also done a ps aux, and the duplicacy_web process is running under the user "duplicacy". Here is the log output from a failed task:
2022-05-12 20:39:30.072 INFO SCRIPT_RUN Running script /tmp/duplicacy/repositories/localhost/all/.duplicacy/scripts/pre-check
2022-05-12 20:39:30.072 ERROR SCRIPT_ERROR Failed to run /tmp/duplicacy/repositories/localhost/all/.duplicacy/scripts/pre-check script: fork/exec /tmp/duplicacy/repositories/localhost/all/.duplicacy/scripts/pre-check: permission denied
Failed to run /tmp/duplicacy/repositories/localhost/all/.duplicacy/scripts/pre-check script: fork/exec /tmp/duplicacy/repositories/localhost/all/.duplicacy/scripts/pre-check: permission denied
Any ideas?
Are you using the most recent version?
Well, didn't see that… I'll try it out.
Just found this thread and the Syno packages release thread. I wasn't aware that there were native Duplicacy packages for Synology. I am currently using the @saspus Duplicacy-Web docker image on my Syno DS920+. What are the advantages/disadvantages of using one or the other?
I did upgrade to 1.6.2, and this issue still occurs: scripts will not run.
Duplicacy does not have dependencies. Therefore it does not benefit from containerization. People use it in docker simply to ease deployment.
Before gchen produced Synology packages I myself switched to running it directly: Duplicacy Web on Synology Diskstation without Docker | Trinkets, Odds, and Ends. I'd rather have the extra RAM available for caching than consumed by the docker engine.
Should you use packages? In my opinion, the package system on Synology is horrific. It's very tricky to make a working package in the first place, and "backward compatibility" apparently is not in their vocabulary.
The best advice I could give is don't run it on Synology in any shape or form, at all. Run it on another compute device. That way you will not be killing your storage solution's performance by evicting the filesystem cache on every duplicacy invocation, and you will have a much better behaved system; a NAS is best used as a NAS, not the do-it-all application server Synology marketing wants you to think it is.
Thanks for the reply.
People use it in docker simply to ease deployment.
Ditto. I love Docker. I use it for many things on both my NASes.
I'd rather have the extra RAM available for caching than consumed by the docker engine.
Hmmm. Total RAM usage on my NAS rarely exceeds 15-20%, so I'm not too concerned about that.
a NAS is best used as a NAS, not the do-it-all application server Synology marketing wants you to think it is.
I hear you, but using a NAS as a storage device that also incorporates a cloud backup process doesn't sound like a do-it-all approach to me. The NAS I am running Duplicacy on has one function: to store data and occasionally serve files within my LAN. I've been monitoring resource usage since installing Duplicacy, and it is maxing out volume utilization. However, I'm still at the start of ingesting data into a new cloud storage, so I would expect it to max out reads while that is happening. I'll watch it a bit more closely once I've completed a full backup and see how that's working out.
The best advice I could give is don't run it on Synology in any shape or form, at all. Run it on another compute device. That way you will not be killing your storage solution's performance by evicting the filesystem cache on every duplicacy invocation
Are you suggesting something like a desktop with mounted shares? Isn't that pretty close to containerization as far as disk cache goes, or is it the local system's cache that's now the one being abused? Would an NVMe cache drive on the NAS resolve the cache issues, or possibly an SSD added to the NAS as a separate volume dedicated solely to Docker?
I'm not talking about RAM usage. I'm talking about unused RAM. The Linux filesystem cache resides in unused RAM. When duplicacy runs and allocates a few GB of RAM, that cache is evicted and as a result array performance plummets (including for duplicacy itself: instead of instantly fetching metadata from RAM it now has to touch disks, a massive amount of unnecessary IO) until duplicacy frees the memory and the cache warms up again over the next day or so (whatever the usage period is). Unless there is another duplicacy run waiting to murder it all.
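You can actually watch this happen, assuming free and watch are present on your box (they may not be in every DSM toolset):

# the buff/cache figure shrinks as duplicacy's allocations evict the filesystem cache
watch -n 5 free -m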
Right. Synology provides HyperBackup, and while it is horrific, unstable, and unreliable garbage, it has one benefit: it uses almost no RAM. If you were to use that, there would be no problem with cache eviction. Gchen was working on improvements to memory utilization; perhaps when that is released the situation will change.
I'm not sure about this. Of course, once your initial backup is done, periodic duplicacy invocations should produce very little IO, since all metadata reads will come from RAM. During the initial backup, it depends on how many small files you have and how fast your upstream connection is. Still, cache availability could allow prefetch and reduce IO pressure just enough for disk utilization to not be a bottleneck.
Very likely DSM 7.1 doesn't allow anything under /tmp to be executable (given that pre-check already has the execute permission). Try changing the Temporary Directory on the Settings page. /tmp usually gets cleared after a reboot anyway.
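One way to check that theory (a noexec mount option on /tmp would be my guess for the cause; mount and grep are standard tools):

# if the /tmp mount line lists "noexec", nothing under /tmp can be executed
mount | grep -w /tmp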
So the temporary directory on the settings page can be any directory? As in: is this a temporary directory for duplicacy to do its work in, or is it supposed to point at the system temp directory?
That's the directory where duplicacy-web creates local repositories for duplicacy-cli to work from.
They can be anywhere you want. There is a separate location setting for them because they are discardable and don't need to be backed up (unlike, say, the logs folder). Duplicacy web will just create a new folder and a .duplicacy/preferences file in there on the next backup run.
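For illustration, the layout it recreates looks roughly like this (reconstructed from the paths earlier in this thread; details may vary on your system):

/tmp/duplicacy/repositories/localhost/0/
  .duplicacy/
    preferences   (recreated automatically on the next run)
    cache/        (local chunk/snapshot cache, discardable)
    scripts/      (the manual script hack from above goes here)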
However, pointing that directory at the system temp is a bad idea if the host is rebooted occasionally: among other things, these temporary repositories contain the duplicacy cache. Since temp folders are cleared on reboot, this will lead to worse performance and extra bandwidth waste, as duplicacy would now need to download stuff from the cloud that it could otherwise fetch from the cache.
On macOS, for example, there is a special location for these types of caches: it's literally called Library/Caches. Whether there is a designated location for this type of "discardable but worth keeping around" stuff, I don't know. If not, you can just make one in a location that is not being backed up, snapshotted, etc.
Note that the tmp directory on some Synology NAS models is located in memory.
At least on my NAS I ran into an out-of-memory/swapping state due to the growing duplicacy cache.
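A quick way to tell (standard tooling; a tmpfs filesystem means RAM-backed):

# if the filesystem shown is tmpfs, /tmp lives in RAM
df -h /tmp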
-mb