Automatically update duplicacy web

I’ve played around with the following script to automatically update the Duplicacy web edition, and I’m sharing it here for anyone who would like to try something similar. The idea is to run it once a month or so as a cronjob. Note, however, that this is one of my first bash scripts and that I have not tested it extensively. So use at your own risk, and feel free to comment and improve it.

Needless to say, you will need to adapt the paths in the script to fit your installation.

    #!/bin/bash

    # Installed version: the newest matching binary in /usr/bin.
    current=$(ls -t /usr/bin/ | grep -m 1 -E -o "duplicacy_web_linux_x64_[0-9]+\.[0-9]+\.[0-9]+")
    echo "current version is $current"

    # Latest version: scraped from the download page.
    latest=$(curl -s "https://duplicacy.com/download.html" \
        | grep -E -o "https://acrosync\.com/duplicacy-web/duplicacy_web_linux_x64_[0-9]+\.[0-9]+\.[0-9]+" \
        | grep -E -o "duplicacy_web_linux_x64_[0-9]+\.[0-9]+\.[0-9]+")
    echo "latest version is $latest"

    if [ "$current" != "$latest" ]
    then
        echo "Update available"
        echo "Downloading new version..."
        wget "https://acrosync.com/duplicacy-web/$latest"
        # Optional: wait for running jobs to finish first.
        # The [.] keeps pgrep from matching its own command line.
        # while pgrep -f "/[.]duplicacy-web/bin/duplicacy" > /dev/null
        # do
        #     echo "waiting for duplicacy jobs to finish"
        #     sleep 3600
        # done
        echo "Stopping duplicacy-web"
        systemctl stop duplicacy
        echo "Installing new version..."
        mv "$latest" /usr/bin/
        chmod 700 "/usr/bin/$latest"
        ln -s -f "/usr/bin/$latest" /usr/bin/duplicacy-web
        echo "Restarting duplicacy-web"
        systemctl start duplicacy
        echo "Done."
        exit 0
    else
        echo "No update available"
    fi

Is it something in the air that makes people rush into writing scripts? :slight_smile:
This raises a question: why doesn’t duplicacy_web update itself? How many man-hours were spent writing the exact same code? …


Ha, interesting! I didn’t know that the latest version was available in json format. That simplifies things. (How did you know?)
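For anyone else who wants to go the JSON route: the exact endpoint and response shape aren’t shown in this thread, so the JSON below is purely hypothetical, but the parsing step would look something like this (using plain grep so there’s no jq dependency):

```shell
#!/bin/bash
# Hypothetical JSON response -- replace both the shape and the curl call
# with whatever the real endpoint returns, e.g.:
#   json=$(curl -s "https://example.com/duplicacy-web/version.json")
json='{"web": {"version": "1.5.0"}}'

# Pull out the "version" field, then strip everything but the number:
latest=$(echo "$json" \
    | grep -E -o '"version": *"[0-9]+(\.[0-9]+)+"' \
    | grep -E -o '[0-9]+(\.[0-9]+)+')
echo "latest version is $latest"   # latest version is 1.5.0
```

If jq is installed, `jq -r '.web.version'` would do the same extraction in one step.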

Yes… In this case I actually enjoyed the extra work. It was interesting to figure out what I could do with grep. I stopped short of digging into whether and how it can do capture groups and did a nice little hack instead (the `latest=` line), but I thought it wasn’t too bad of a hack, especially considering that an hour of research could have left me with the exact same solution.
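In case it helps anyone reading along: the double-grep hack works fine, but a sed capture group can do the same extraction in one step (and GNU grep built with PCRE support has `-P` with `\K`, though that isn’t available everywhere). A quick sketch:

```shell
#!/bin/bash
name="duplicacy_web_linux_x64_1.5.0"

# sed with an extended-regex capture group: keep only the version part.
version=$(echo "$name" | sed -E 's/.*duplicacy_web_linux_x64_([0-9.]+).*/\1/')
echo "$version"   # 1.5.0

# Where GNU grep has PCRE support, \K drops everything matched before it:
# echo "$name" | grep -P -o 'duplicacy_web_linux_x64_\K[0-9.]+'
```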

So in the spirit of learning even more:

I would have thought that a cronjob is preferable to a daemonized loop. To me, the loop just adds another possible point of failure. I also appreciate cron because it lets me easily see in one place what tasks are scheduled. So why is a loop better? (In any case, I would claim that checking every 15 minutes is total overkill. Maybe my choice of once a month is a bit too modest, but 15 minutes??)
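For reference, the monthly schedule I had in mind is a one-line crontab entry; the script path and log location here are just examples, point them at wherever you saved yours:

```shell
# m h dom mon dow   command
# Run on the 1st of every month at 03:00 (paths are examples):
0 3 1 * * /usr/local/bin/update-duplicacy-web.sh >> /var/log/duplicacy-update.log 2>&1
```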

Another difference I spotted is in line 61:

What does your ln -sF do as opposed to my ln -sf?
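From what I can tell, `-F` means different things depending on the ln implementation: in GNU coreutils it is a synonym for `--directory` (letting the superuser attempt hard links to directories, irrelevant with `-s`), while in BSD ln it removes an existing *directory* at the target. Either way, `-f` is what does the work of repointing an existing symlink, which a small demo shows:

```shell
#!/bin/bash
set -e
dir=$(mktemp -d)
cd "$dir"

touch duplicacy_web_linux_x64_1.4.0 duplicacy_web_linux_x64_1.5.0
ln -s duplicacy_web_linux_x64_1.4.0 duplicacy-web

# -f unlinks the existing symlink first, so repointing it succeeds
# instead of failing with "File exists":
ln -s -f duplicacy_web_linux_x64_1.5.0 duplicacy-web
readlink duplicacy-web   # duplicacy_web_linux_x64_1.5.0
```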

I also see that you are simply restarting the service, with no precautions regarding ongoing tasks. My impression was that this kills tasks that were initiated by duplicacy-web, but when my intuitive check for that didn’t work, I gave up… So are you suggesting that it is safe?
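In case someone wants the precaution anyway, here is a working version of the wait-for-jobs idea I had commented out in my script. The process pattern is an assumption based on where duplicacy-web unpacks its CLI binaries on my machine, so adjust it to your installation:

```shell
#!/bin/bash
# Wait until no CLI workers spawned by duplicacy-web remain before the
# restart. The [.] matches a literal dot but keeps pgrep from matching
# its own command line. Path pattern is an assumption -- adjust it.
while pgrep -f "/[.]duplicacy-web/bin/duplicacy" > /dev/null; do
    echo "waiting for duplicacy jobs to finish"
    sleep 600
done
echo "no duplicacy jobs running, safe to restart"
```

Using pgrep instead of `ps aux | grep -c` avoids the classic problem of grep counting itself.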
