A good way I have found to do locking is the handy flock
command, which attempts to take a lock on a file or directory and then executes another command.
Once that command finishes, the lock is released.
The lock can be taken on any file or directory with zero impact on most other operations…unless your application tries to take its own file lock along the way.
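For example, a quick way to see this behaviour from two terminals (using a made-up /tmp/demo.lock file):

# Terminal 1: take an exclusive lock and hold it for 30 seconds
flock -x /tmp/demo.lock sleep 30

# Terminal 2: -n means give up immediately rather than wait,
# so while terminal 1 still holds the lock this exits with status 1
flock -xn /tmp/demo.lock echo "got the lock"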
So reimplementing the backup script in the OP by @Christoph using flock
could be done as follows:
#!/bin/sh
# Attempt to change to the repository and bail out if it's not possible
cd /srv/NAS/ || exit 1
echo "$(date) Backing up $PWD ..."
# Run duplicacy under an exclusive (-e), non-blocking (-n) lock on /srv/NAS
flock -en /srv/NAS /usr/local/bin/duplicacy -log backup -stats
echo "$(date) Stopped backing up $PWD ..."
The above means that you can reuse the same locking functionality for your prune
process to ensure your backup and prune jobs never overlap.
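As a sketch, the matching prune script could look like this (the -keep retention options are only placeholders; substitute whatever policy you actually use):

#!/bin/sh
# Attempt to change to the repository and bail out if it's not possible
cd /srv/NAS/ || exit 1
echo "$(date) Pruning $PWD ..."
# Take the same lock as the backup script, so whichever job starts
# second gives up immediately instead of running concurrently
flock -en /srv/NAS /usr/local/bin/duplicacy -log prune -keep 0:360 -keep 30:30
echo "$(date) Finished pruning $PWD ..."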
The other difference is making the script bail out if the cd /srv/NAS/
command fails. In this case that is not a huge deal, as the duplicacy
job will simply fail…or maybe not, if the script happens to be executed from another repository location. For destructive scripts, though, missing little checks like these can be catastrophic. Take the following “junk cleanup” script as an example:
#!/bin/sh
# PLEASE DON'T RUN THIS!!!
cd /path/to/junk/folder
rm -rf *
If that cd fails, running the above will happily remove everything in the current directory that you have write access to (which is probably everything if you run it from your home directory), whereas adding || exit 1
after the cd
command prevents this.
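With that one-line change the script becomes:

#!/bin/sh
# Bail out if the cd fails, instead of deleting the wrong directory
cd /path/to/junk/folder || exit 1
rm -rf *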
Another option is to add set -e
near the top of your script, which makes the whole script exit as soon as any command returns a non-zero status, so the error checking comes for free:
#!/bin/sh
# This is now safe
set -e
cd /path/to/junk/folder
rm -rf *