Automatically update duplicacy web

I’ve played around with the following script to automatically update the duplicacy web edition and I’m sharing it here for anyone who would like to try something similar. The idea is to run it once a month or so as a cronjob. Note, however, that this is one of my first bash scripts and that I have not extensively tested this. So use at your own risk and feel free to comment and improve it.

Needless to say, you will need to adapt the paths in the script to fit your installation.


    #!/bin/bash

    current=$(ls -t /usr/bin/ | grep -m 1 -E -o "duplicacy_web_linux_x64_[[:digit:]]+\.[[:digit:]]+\.[[:digit:]]+")
    echo "current version is $current"
    latest=$(curl -s "" | grep -E -o "https:\/\/acrosync\.com\/duplicacy-web\/duplicacy_web_linux_x64_[[:digit:]]+\.[[:digit:]]+\.[[:digit:]]+" | grep -E -o "duplicacy_web_linux_x64_[[:digit:]]+\.[[:digit:]]+\.[[:digit:]]+")
    echo "latest version is $latest"
    if [ "$current" != "$latest" ]; then
        echo "Update available"
        echo "Downloading new version..."
        wget -q "https://acrosync.com/duplicacy-web/$latest"
        # Optional: wait for running duplicacy jobs to finish before stopping the service
        # while [ "$(pgrep -cf '/.duplicacy-web/bin/duplicacy')" != "0" ]; do
        #     echo "waiting for duplicacy jobs to finish"
        #     sleep 3600
        # done
        echo "Stopping duplicacy-web"
        systemctl stop duplicacy
        echo "Installing new version..."
        mv "$latest" /usr/bin/
        chmod 700 /usr/bin/"$latest"
        ln -s -f /usr/bin/"$latest" /usr/bin/duplicacy-web
        echo "Restarting duplicacy-web"
        systemctl start duplicacy
        echo "Done."
    else
        echo "No update available"
    fi
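To run it monthly as mentioned above, a root crontab entry could look like the following (the script path /usr/local/bin/duplicacy-update.sh is a made-up example name, not anything the script requires):

```
# run at 03:00 on the first day of each month; root crontab, since the
# script writes to /usr/bin and calls systemctl
0 3 1 * * /usr/local/bin/duplicacy-update.sh
```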

Is it something in the air that makes people rush to write scripts? :slight_smile:

I made one a few weeks ago too, as part of making the TrueNAS Duplicacy plugin. This file is the updater:

It allows you to choose the update channel – Fixed, Stable, or Latest – or to use a local custom duplicacy executable.

It runs a loop, so no need for cron. It is launched as a daemon on FreeBSD like so: iocage-plugin-duplicacy/duplicacyupd at master · arrogantrabbit/iocage-plugin-duplicacy · GitHub

This begs the question: why doesn’t duplicacy_web update itself? How many man-hours were spent writing the exact same code? …

Edit: *a few months ago…


Ha, interesting! I didn’t know that the latest version was available in JSON format. That simplifies things. (How did you know?)

Yes… In this case I actually enjoyed the extra work. It was interesting to figure out what I could do with grep. I stopped short of digging into whether and how it can do capture groups and did a nice little hack instead (line 5), but I thought it wasn’t too bad of a hack, especially considering that an hour of research could have left me with the exact same solution.
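For the curious, the “hack” (extracting the filename with a second grep) and a sed capture group give the same result. A small side-by-side sketch, using a made-up version number:

```shell
#!/bin/sh
url="https://acrosync.com/duplicacy-web/duplicacy_web_linux_x64_1.5.0"

# the grep hack: grep -o cuts out just the matching filename
name=$(echo "$url" | grep -E -o "duplicacy_web_linux_x64_[0-9]+\.[0-9]+\.[0-9]+")

# the same extraction with a sed capture group
name2=$(echo "$url" | sed -E 's/.*(duplicacy_web_linux_x64_[0-9]+\.[0-9]+\.[0-9]+).*/\1/')

echo "$name"     # duplicacy_web_linux_x64_1.5.0
echo "$name2"    # duplicacy_web_linux_x64_1.5.0
```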

So in the spirit of learning even more:

I would have thought that a cronjob is preferable to a daemonized loop. To me, the loop just adds a possible break point. I also appreciate cron because it lets me see in one place what tasks are scheduled. So why is a loop better? (In any case, I would claim that checking every 15 minutes is total overkill. Maybe my choice of once a month is a bit too humble, but 15 minutes??)

Another difference I spotted is in line 61:

What does your ln -sF do as opposed to my ln -sf?

I also see that you simply restart the service, with no precautions regarding ongoing tasks. My impression was that this would kill tasks initiated by duplicacy-web, but when my intuitive check for that didn’t work, I gave up… So are you suggesting that it is safe?


Because it was me who asked for it :smile: How to find out the url for the latest version of Web GUI and/or check for update? There is a similar one for the CLI:

Oh, yes, it’s enjoyable and educational, so I guess there are benefits. Just not for people who want this done but don’t have the time or desire to tinker with unix internals.

I find having to use POSIX classes in regexes annoying, and the fact that I have to write [[:digit:]] instead of \d+ is ridiculous to the point that I never use them… So for these purposes I revert to sed, which, at least on macOS and BSD, has a compatibility mode that understands (perl?) regexes…
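As an aside (not something either script uses): GNU grep, when built with PCRE support, has a -P flag that accepts the Perl-style classes, so \d+ does work there:

```shell
#!/bin/sh
# GNU grep -P enables Perl-compatible regexes, where \d is valid
echo "duplicacy_web_linux_x64_1.5.0" | grep -P -o '\d+\.\d+\.\d+'
# prints: 1.5.0
```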

BTW, you are missing a check for wget download success. Given the current events, however, that would not have been enough anyway: Backups stopped working, segmentation fault in saspus docker image - #12 by saspus
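A hedged sketch of what such a check could look like, branching on wget’s exit status before touching anything; the URL here is made up and unreachable on purpose, so the failure branch is what runs:

```shell
#!/bin/sh
# check wget's exit status before installing anything
url="http://localhost:1/duplicacy_web_linux_x64_1.5.0"   # bogus URL, unreachable on purpose
if wget -q "$url"; then
    echo "download ok, proceeding with install"
else
    echo "download failed, skipping update"
fi
```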

It probably isn’t worse. There is a ridiculously minor overhead in starting a whole new shell process with cron every time, versus having the process sit in RAM and sleep; but the shell will likely be cached, so no disk overhead is involved, and the compute overhead of executing the shell startup code is also probably nothing to worry about. Still, it can be 100% avoided by re-using the one instance.

It’s not a single point of failure, because if it crashes (not sure why it would) it will get restarted by rc. But I feel it’s a more “steady-state” and safer solution for the above reasons. I can also easily start and stop the updater with service <name> start and service <name> stop.
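For readers following along, the daemonized-loop shape being described is roughly this (check_for_update is my placeholder name for the real version check, and the loop is bounded here only so the sketch terminates; the real daemon loops forever):

```shell
#!/bin/sh
# placeholder for the real work: compare versions, download, restart the service
check_for_update() {
    echo "checking for update..."
}

# the daemon loop: rc starts this once and it keeps itself scheduled
# (bounded to 3 iterations for this sketch; the real loop is `while true`)
i=0
while [ $i -lt 3 ]; do
    check_for_update
    sleep 1    # the real interval is 900 seconds (15 minutes)
    i=$((i + 1))
done
```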

Lol :slight_smile: Honestly, I adapted it from my GitHub - arrogantrabbit/freebsd_storj_installer: Installer script for Storj on FreeBSD, which I had to create because STORJ’s freebsd updater utility does not know how to restart the rc daemon on freebsd, so I wrote my own script to serve as a drop-in replacement for storagenode-updater. The 15-minute update interval, IIRC, was their default, so I did the same. And then, when adapting it for duplicacy, I did not change it, in the spirit of “not broken – don’t fix”. But yes, for duplicacy it’s waaaay too frequent! I’ll change that, thank you. It’s invaluable when another person reviews the code and asks questions, forcing the author to pause and think :slight_smile:

If the target exists and is a directory, it will remove it. It’s just a more thermonuclear way to create a link; I don’t want this to ever fail.

     -F    If the target file already exists and is a directory, then remove
           it so that the link may occur.  The -F option should be used with
           either -f or -i options.  If neither -f nor -i is specified, -f is
           implied.  The -F option is a no-op unless -s is specified.

     -f    If the target file already exists, then unlink it so that the link
           may occur.  (The -f option overrides any previous -i and -w
           options.)
First of all, it was also copied from my storage node updater, but this one actually did receive some thought :). Because backups and prunes can run for hours or days, I don’t want to postpone the update. What if there is some critical fix? And on the other hand, backups and prunes are fully interruptible. (Or at least they should be, when and if the prune bug is fixed by implementing fossilization for snapshots; I’m starting to lose hope.)

Yes, it is either safe, or it must be made safe, as with pretty much anything important. With duplicacy, as it should be with any backup tool, it’s safe by design – interruptions are a fact of life. Nothing bad should happen if someone just rebooted their machine or it ran out of battery.

Maybe an exception could be made if a restore operation is running. This is a good point. But since I never intend to restore anything, this wasn’t on my priority list at all. I would be restoring on a new machine (because why else would I need to restore from the cloud if my machine is alive? I would restore from local history instead, namely filesystem snapshots), and in that case I can first restore and then start the daemon. Or, actually, even easier: stop the updater for the duration of the restore :slight_smile: