Monitor backup status (Windows/CLI)

Healthchecks is what @TheBestPessimist is using to monitor backup status: see the Scripts and utilities index.

I thought I’d show some examples in case you want to do this in your own scripts. I use the CLI (command line) version of Duplicacy, with Windows 10 and Windows 7 as backup clients. Healthchecks is a service that lets you set up “checks”. If a check does not receive a signal within a reasonable time (the expected period plus a grace period), you can have an e-mail sent to you. You can also send an explicit failure signal, to avoid waiting for the grace period. Signals can be sent to the service by mail or by an HTTP request. The service is free for limited personal use: you can monitor 20 checks, with limited logging (the last 100 events).

The beauty of this kind of service is that although it might not be 100% foolproof, it is independent of your computer, which could crash or be switched off.

To avoid false positives you must select your period and grace settings wisely, especially if the computer is seldom used. (You can use the “start” feature for this, more on that later.)
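To make the period/grace interplay concrete, here is a small Python sketch of my understanding of the alerting rule (the timing values are made up, and this is a simplified model of the service, not its actual code):

```python
from datetime import datetime, timedelta

def check_is_late(last_ping: datetime, now: datetime,
                  period: timedelta, grace: timedelta) -> bool:
    # A check is considered overdue once period has passed, and an alert
    # fires only after period + grace has passed without a new signal.
    return now > last_ping + period + grace

last = datetime(2019, 6, 1, 8, 0)
period = timedelta(days=1)
grace = timedelta(hours=12)

# 30 hours after the last ping: still within period + grace, no alert yet.
print(check_is_late(last, last + timedelta(hours=30), period, grace))
# 40 hours after the last ping: past period + grace, an alert would fire.
print(check_is_late(last, last + timedelta(hours=40), period, grace))
```

The takeaway: a seldom-used machine needs a longer period or grace, or it will produce false alarms.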

To test these examples, first register on the service. Set up a check and copy its URL. Paste this URL into the command examples wherever a check URL is expected.

Since I am a Windows/CLI user, I will show some examples for the Windows environment. I use PowerShell and batch files for these examples, but the same approach can be used in other languages as well.

The first simple example requires Windows 10 or a newer version of PowerShell (v3 or later). (See below for Windows 7 and older PowerShell.)
The PowerShell command is:
Invoke-RestMethod ''

You can execute this from a bat-file like this:

powershell.exe -command "&{Invoke-RestMethod ''}"

This sends a signal to the service saying that this check is OK. You probably only want to do this when a backup has been run successfully!

I recommend a scheduled job which performs a backup, logs the result locally, and (if successful) reports status to Healthchecks.

Let’s say you have set up a Duplicacy repository at X:\YourRepositoryFolder, and installed the CLI files to C:\Program Files\Duplicacy. Create a batch file somewhere it cannot be modified without administrator rights (e.g. C:\Program Files\Duplicacy) and schedule a task to run it, for instance, once a day.

A batch file in Windows is a text file with the file extension .cmd or .bat.
(.cmd looks more modern)

CD /D X:\YourRepositoryFolder
"C:\Program Files\Duplicacy\duplicacy.exe" backup
If not Errorlevel 1 powershell.exe -command "&{Invoke-RestMethod ''}"

The /D option for the CD command selects the X: drive and then changes the current folder to the path specified. This allows Duplicacy to find the repository.
The If Errorlevel command is true if the exit code of the previous command is equal to or higher than the number given after “Errorlevel”.
“If not Errorlevel 1” is true if the exit code of the previous command is 0 (success).
( If Errorlevel 0 is always true so can not be used here. )
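For readers more comfortable outside batch, the same exit-code gate can be sketched in Python (the command below is a stand-in that simply exits with code 0, not the real Duplicacy call):

```python
import subprocess
import sys

# Stand-in for the real backup command; in practice you would run the
# actual duplicacy invocation here.
result = subprocess.run([sys.executable, "-c", "raise SystemExit(0)"])

# Mirrors "If not Errorlevel 1": signal success only when the exit code is 0.
if result.returncode == 0:
    print("success: would ping the check URL here")
else:
    print("failure: exit code", result.returncode)
```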

Remember, this is just an example. You can add options from the Duplicacy user guide to show more details, and pipe log results to a log file or other things that you want.

If you want you can report a fail immediately if Duplicacy reports an error, like this:

CD /D X:\YourRepositoryFolder
"C:\Program Files\Duplicacy\duplicacy.exe" backup
If not Errorlevel 1 (
  REM Exit code is not 1 or higher, we're good!
  powershell.exe -command "&{Invoke-RestMethod ''}"
) Else (
  REM If we got here something is wrong, send the fail signal
  REM (append /fail to your check URL)
  powershell.exe -command "&{Invoke-RestMethod ''}"
)

If your script performs more tasks, for instance a local backup first and then a copy to an external destination, you should only send a signal to the monitoring service when all operations have succeeded.

If you want to send more info, or customize the User-Agent string (since it is shown in the logs), you could do it like this:
powershell.exe -command "&{Invoke-RestMethod '' -headers @{'User-Agent'='%Computername%'} -body 'Documents OK' -method POST}"
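In Python, an equivalent request could be built with the standard library’s urllib (the URL below is a placeholder, not a real check, and the computer name is hard-coded for illustration):

```python
import urllib.request

# Placeholder URL; substitute your own check URL from the service.
check_url = "https://example.invalid/your-check-uuid"

req = urllib.request.Request(
    check_url,
    data="Documents OK".encode("utf-8"),   # request body, shown in the log
    headers={"User-Agent": "MyPCName"},    # also shown in the service's log
    method="POST",
)

# urllib.request.urlopen(req) would actually send the ping;
# here we only inspect the request we built.
print(req.get_method(), req.get_header("User-agent"))
```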

This way it is possible to collect results from the Duplicacy output to put in the log.
(You might want to be careful with what details you publish this way.)

This works in Windows 7 and uses PowerShell version 2:
powershell.exe -command "&{(New-Object System.Net.WebClient).DownloadString('')}"

Customizing the User-Agent and body with this method is a little more complex.


Very good tutorial!

I also use it, but I ping the URLs directly with curl.

I created a small html file that shows me the backups status (loading the healthchecks svg images), something like a simplified dashboard:


They are separated into groups: my computer, my wife’s computer, media files, etc.


Thank you, @towerbr!!
I also like Tags, a very elegant feature, easy to set up and very useful!

More importantly, you made me check on curl, which I knew about (as well as wget) but didn’t realize is actually included in Windows 10 since build 1803. (I knew SSH is included as a feature you can enable, but curl is available out of the box.)

So if the client computer is Windows 10 with at least build 1803 you can do it like @towerbr:

The simplest code sample might then read:

CD /D X:\YourRepositoryFolder
"C:\Program Files\Duplicacy\duplicacy.exe" backup
If not Errorlevel 1 curl

As @towerbr knows, curl has a ton of options and can easily do what we did with PowerShell in the first post. (I’m literally learning by doing here, please let me know if I make mistakes!)
curl --user-agent "%Computername%" --data "Documents OK"
(The main reason I set the user agent string is to keep the log lines shorter and readable.)

The Healthchecks log might then show this in the entries:
#13 Jun 16 23:07 OK HTTPS POST from - ThisPCName - Documents OK


Good idea! I hadn’t thought of that. I’m going to check these options again; I haven’t played with them for a long time.

A word on Laptops…
Healthchecks works really well for computers that are always on.

But if your computer is a laptop you might have a hard time selecting the optimal Period (time between backups). A higher Period setting lowers the chance of false positives but increases the time before flagging a real problem.

Your backup job schedule will also affect the likelihood of backup failure, since the laptop might enter sleep mode or run out of power before the scheduled backup time. You can set the task to run as soon as possible after a missed schedule, but generally, an early backup schedule increases the chance that a backup will succeed, at the cost of perhaps missing the latest document changes. A later schedule improves the chances of backing up the latest changes.

Either way, a laptop will probably not deliver very regular backups just because of their nature.

@Christoph showed me a feature that might help for laptops in How to avoid stupid mistakes in your powershell scripts (self-test your scripts)
You can send the /start signal every morning the computer is in use. (Use a scheduled task that runs early in the morning, with the setting to run if the schedule was missed.)

Before we move on:
Period (normally): Time between expected pings (backups).
Grace (normally): Time to allow the task to last or be delayed.

The /start part of the URL (called an endpoint) is intended to start a timer, but in our case we use it to tell Healthchecks that our computer is in use and that a backup confirmation should arrive within the grace time. This means you can set a very long period (30 days is the max) and still get a warning within a day or two if a backup fails. I find a grace time of 2 days should work, but experiment to find what works for you! This also allows a backup to fail once without alerting.

When used like this (with a start signal), our perception of time settings changes a little:
Period: the maximum time to wait for a successful signal; 30 days might be OK, though perhaps too short if the laptop is seldom used.
Grace: the time to wait after a start signal before expecting a backup.
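A tiny sketch of how I read the start-signal timing (this is my mental model, not the service’s documented algorithm; the dates are made up):

```python
from datetime import datetime, timedelta

def deadline_after_start(start_signal: datetime, grace: timedelta) -> datetime:
    # After a /start signal, expect the success ping within the grace time,
    # regardless of how long the period is.
    return start_signal + grace

start = datetime(2019, 6, 19, 7, 0)  # morning "computer is in use" signal
grace = timedelta(days=2)            # lets one backup fail without alerting

print(deadline_after_start(start, grace))
```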

Windows 10 build 1803 (and later) with curl:
curl
Windows 10 before build 1803:
powershell.exe -command "&{Invoke-RestMethod ''}"
Windows 7:
powershell.exe -command "&{(New-Object System.Net.WebClient).DownloadString('')}"

Laptop usage is pretty random compared to a server. This method will not 100% avoid false positives (or misses) but perhaps improve it a bit!


How to include details from Duplicacy

Let’s assume again your Duplicacy repository is at X:\YourRepositoryFolder and CLI files at “C:\Program Files\Duplicacy”

If we add the -stats option to the Duplicacy backup command, it will show some info about the backup.
If we add the -log global option, time and date will be added to the lines.

Typical result from a backup:

2019-06-17 01:24:14.700 INFO BACKUP_STATS Files: 23879 total, 40,574M bytes; 1 new, 42K bytes
2019-06-17 01:24:14.700 INFO BACKUP_STATS File chunks: 8259 total, 40,614M bytes; 1 new, 42K bytes, 31K bytes uploaded
2019-06-17 01:24:14.700 INFO BACKUP_STATS Metadata chunks: 5 total, 7,793K bytes; 3 new, 2,163K bytes, 975K bytes uploaded
2019-06-17 01:24:14.700 INFO BACKUP_STATS All chunks: 8264 total, 40,622M bytes; 4 new, 2,206K bytes, 1007K bytes uploaded
2019-06-17 01:24:14.700 INFO BACKUP_STATS Total running time: 00:00:24

Now we can pipe the log to a file,
extract some useful information into a variable, and then

  1. Store some in a local history log
  2. Send detail to Healthchecks along with the signal

Sample batch file:

REM Modify locations to your preference:
Set Log="%Temp%\Backuplog.txt"
Set Tempfile="%Temp%\TempStats.txt"
Set History="%Temp%\Backuplog_History.txt"

CD /D X:\YourRepositoryFolder
"C:\Program Files\Duplicacy\duplicacy.exe" -log backup -stats > %Log%
If Errorlevel 1 goto FAILED

REM (We get here if backup was successful)
REM One of the lines from the log (including –stats) might look like this:
REM 2019-05-17 23:53:10.418 INFO BACKUP_STATS Files: 23829 total, 40,534M bytes; 1 new, 45K bytes
REM (We don’t need the first 42 characters)
type %Log% | find "INFO BACKUP_STATS Files" > %Tempfile%

REM (Optional) Add to a history log:
type %Tempfile% >> %History%

REM Retrieve part of the line to send to the service (from character 42):
set /P Stats=<%Tempfile%
set Stats=%Stats:~42%
curl --user-agent "%Computername%" --data "%Stats%"

REM This skips the rest of the script:
goto :EOF

:FAILED
REM (we could add some more info about why it failed)
curl --user-agent "%Computername%" --data "Backup failed, exit code %errorlevel%"

Windows 10 before build 1803:

powershell.exe -command "&{Invoke-RestMethod '' -headers @{'User-Agent'='%Computername%'} -body '%Stats%' -method POST}"

[Edit:Added code for] Windows 7 with User-Agent and POST data (UploadString)

powershell.exe -command "&{($Web=New-Object System.Net.WebClient).Headers.Add('User-Agent','%Computername%');$Web.UploadString('', 'POST', '%Stats%')}"

Normally we don’t need to store much data in the log. Here’s an example using only the User-Agent:
[Edit:Added code for] Windows 7 using only User-Agent and GET (DownloadString)

powershell.exe -command "&{($Web=New-Object System.Net.WebClient).Headers.add('User-Agent','%Computername%  %Stats%');$Web.DownloadString('')}"

These Windows 7 samples also work on Windows 10.
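For clarity, the filter-and-substring steps from the batch file can be mirrored in Python (illustrative only, using the sample log lines from above):

```python
# Illustrative Python equivalent of the batch steps:
#   find "INFO BACKUP_STATS Files"   -> pick the stats line from the log
#   %Stats:~42%                      -> drop the 42-character prefix
#                                       (timestamp + "INFO BACKUP_STATS ")
log_lines = [
    "2019-06-17 01:24:14.700 INFO BACKUP_STATS Files: 23879 total, 40,574M bytes; 1 new, 42K bytes",
    "2019-06-17 01:24:14.700 INFO BACKUP_STATS Total running time: 00:00:24",
]

stats_line = next(line for line in log_lines if "INFO BACKUP_STATS Files" in line)
stats = stats_line[42:]  # same offset the batch file uses
print(stats)
```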

Please consider which details you want to send to the public server in case of data breach etc.!

A demo to show the result of the above code.

A Healthchecks report
with some details in “User Agent” and Request Body (=data), (redacted sample):

  Ping #46 

  Time Received                      Client IP
  2019-06-19T08:24:08.669066+00:00   [YourPublicIP.22.33.44]

  Protocol                           Method
  https                              POST

  User Agent

  Request Body
     Files: 23883 total, 40,652M bytes; 3 new, 79,807K bytes

The Healthchecks log might look something like this:

#46  Jun 19 10:24 OK  HTTPS POST from [YourPublicIP.22.33.44] - PCName - Files: 23883 total, 40,652M bytes; 3 new, 79,807K bytes   
#45  Jun 18 10:23 OK  HTTPS POST from [YourPublicIP.22.33.44] - PCName - Files: 23880 total, 40,574M bytes; 0 new, 0 bytes  
#44  Jun 17 10:24 OK  HTTPS POST from [YourPublicIP.22.33.44] - PCName - Files: 23880 total, 40,574M bytes; 2 new, 42K bytes   
#43  Jun 16 10:25 OK  HTTPS POST from [YourPublicIP.22.33.44] - PCName - Files: 23879 total, 40,574M bytes; 1 new, 42K bytes  

The local %history% log file:

2019-06-19 INFO BACKUP_STATS Files: 23883 total, 40,652M bytes; 3 new, 79,807K bytes
2019-06-18 INFO BACKUP_STATS Files: 23880 total, 40,574M bytes; 0 new, 0 bytes 
2019-06-17 INFO BACKUP_STATS Files: 23880 total, 40,574M bytes; 2 new, 42K bytes 
2019-06-17 INFO BACKUP_STATS Files: 23879 total, 40,574M bytes; 1 new, 42K bytes 

I’d highly recommend maintaining some kind of backup status overview such as the one described above.

I manage and monitor backups for a global network (some 800 individual backups spread across around 100 machines across the globe), and thought I would share a strategy I use for monitoring backups.

At the centre of the strategy is a backupStatus table in a database. Each time a backup completes (successfully or otherwise) it either directly updates this table (via a command line utility typically) if on the same internal network as the database, or by sending a status email to a special email address that monitors for backup status emails and updates the database.

This results in a record added to the backup status table for each backup (or not, if there was an issue), with the start/finish times of the backup, the backup status, the file count and size in MB, the live host name (the machine being backed up), the backup set name (some machines have multiple backups), and the backup machine host name.

This gives us a historical record of each backup. It also allows us to generate graphs that show disk usage and file count over time, as well as time taken to run the backup.

It also allows us to produce a current status page, which shows the last known state for each backup and shows the age of the last backup highlighting ones that are older than a specified period.

A report is emailed each morning to backup administrators covering just those backups whose age has become a concern or that are not showing a success status. We don’t report on successful backups (there are just too many), only the ones of concern.

We also developed a Nagios plugin to query backup status, and have some of our more critical backups monitored by Nagios along with our other critical systems.

The reason for recording file count and size is so that we can spot problems with a backup: for example, something was moved and the backup was not updated to pick up the new location, or something was mistakenly removed (we would see a sudden drop in file count or size), or a backup is growing excessively and may need review.
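A hypothetical sketch of the kind of sanity check described above: flag a backup whose file count or size dropped sharply versus the previous run (the threshold and numbers are invented for illustration):

```python
def looks_suspicious(prev_count: int, prev_mb: int,
                     count: int, mb: int, drop_ratio: float = 0.5) -> bool:
    # Flag the run if file count or size fell below half of the previous run.
    return count < prev_count * drop_ratio or mb < prev_mb * drop_ratio

print(looks_suspicious(23880, 40574, 23879, 40574))  # tiny change: fine
print(looks_suspicious(23880, 40574, 5000, 40574))   # big drop in file count
```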

We are in the process of migrating some of our backups, which are based on rdiff-backup, to Duplicacy, and we will continue to use the same strategy. I use the list option to capture information about the last snapshot’s size and file count, record the start and end time, and email the backup result to our backup monitoring email address.

We also mirror our backups and these are recorded too, so we know mirroring is working.


@austin.france, which “toolset” are you using to capture these emails and update the database?

Sendmail. In /etc/mail/alias.user we have:

alias.user:mailstatususer: "|/etc/mail/"

mailstatususer is the name of the user that will receive the email with the status; it can be anything you want. The script parses the incoming email and builds an INSERT statement that it passes to the mysql command line tool.

The parse stage looks for lines matching NAME=value, which are the individual bits of information we want to store in the DB, such as status, size, name of backup, etc.
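The NAME=value parse stage could be sketched like this in Python (the field names and mail body are invented, and the real script feeds mysql rather than printing):

```python
import re

# Invented sample of an incoming status mail.
mail_body = """\
Backup completed.
STATUS=success
HOST=server01
SIZE_MB=40574
FILECOUNT=23879
"""

# Collect NAME=value lines into a dict, ignoring everything else.
fields = dict(
    re.match(r"([A-Z_]+)=(.*)", line).groups()
    for line in mail_body.splitlines()
    if re.match(r"[A-Z_]+=", line)
)
print(fields["STATUS"], fields["FILECOUNT"])
```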