Pre Command and Post Command Scripts

How do you do this for backups in the WebUI?

I use this CLI wrapper script.
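The core of it is roughly this (a minimal sketch, assuming the duplicacy CLI is on the PATH and the repository has already been initialized; the paths and log file are placeholders):

#!/bin/bash
# Minimal wrapper: pre command, backup, post command.
set -euo pipefail

cd /path/to/repository                            # placeholder: initialized repository root

echo "pre: $(date)" >> /var/log/duplicacy.log     # pre command goes here

# -log produces timestamped output; -stats prints a summary at the end
duplicacy -log backup -stats >> /var/log/duplicacy.log 2>&1

echo "post: $(date)" >> /var/log/duplicacy.log    # post command; skipped on failure because of set -e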

Thanks. Is there any equivalent for vanilla Linux or even unRaid?

Hi, can you please add this feature to Duplicacy Web 1.8.0? It would be wonderful.

Thanks.

This would be great for me too as I have remote storage.

Can we get an idea when this feature is planned to be added?

Hi, this will send the ping when the backup is complete, regardless of whether it completed with warnings, correct?

Healthchecks.io is designed to notify you when it doesn’t get pinged (e.g. the computer is offline) or if the job fails.

Yes, my question was about when Duplicacy sees a job as failed. From other discussions, it looks like a backup with warnings is still flagged as successful with exit code 0. So we receive a notification that everything is fine until we take a closer look at the logs.

Yeah, to overcome this I had to use the job scheduler to send emails to Healthchecks and then parse the emails for failure or success.
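If you drive the CLI from a wrapper instead, you can skip the email parsing and signal failure to Healthchecks directly, since appending /fail to the ping URL reports a failed run. A sketch (the UUID and repository path are placeholders):

#!/bin/bash
# Ping Healthchecks with success or failure based on the backup's exit code.
URL="https://hc-ping.com/your-uuid-here"      # placeholder check UUID

cd /path/to/repository
if duplicacy -log backup -stats; then
    curl -fsS -m 10 --retry 5 "$URL" > /dev/null          # success ping
else
    curl -fsS -m 10 --retry 5 "$URL/fail" > /dev/null     # failure ping
fi

This still inherits the caveat above: a backup that finishes with warnings exits with 0 and pings success, so catching warnings would additionally require grepping the log output.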

Six years later and still no support for script configuration in the GUI.
I guess it’s safe to say it’s never coming.

Same issue here. I’ve tried various things, from placing them in /cache/localhost/all/scripts to placing them in every individual /cache/localhost/{0,4}/scripts folder. They are called “pre-backup” and “post-backup” with no extension and have chmod +x applied. Yet in the backup logs I only see this:

2025-02-07 11:36:29.567 INFO SNAPSHOT_FILTER Parsing filter file /cache/localhost/1/.duplicacy/filters
2025-02-07 11:36:29.567 INFO SNAPSHOT_FILTER Loaded 0 include/exclude pattern(s)

So it’s definitely looking in the correct directory, but it’s simply not looking for, nor executing, the scripts.
This is actually a pretty big dealbreaker. I’ve moved away from Duplicati, but there the pre- and post-scripts did work from the UI…

Edit: OK, it seems I have figured it out:

The exact path is cache/localhost/{id}/.duplicacy/scripts, where {id} is the ID of the backup: 0, 1, 2, …
The scripts are named pre-backup and post-backup, without an extension, and obviously should have chmod +x applied.

These scripts need to be in this directory for each and every backup. Putting them in cache/localhost/all/.duplicacy/scripts does NOT work (which I consider to be a bug). This approach works for the WebUI version.
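Put concretely, installing the scripts for a few backup IDs can look something like this (a sketch; adjust the cache root and the ID list to your own setup):

#!/bin/bash
# Copy the same pre/post scripts into each backup's scripts directory
# (per-ID copies are required, as noted above).
for id in 0 1 2; do                           # your backup IDs
    dir="/cache/localhost/$id/.duplicacy/scripts"
    mkdir -p "$dir"
    cp pre-backup post-backup "$dir/"         # no file extension on either script
    chmod +x "$dir/pre-backup" "$dir/post-backup"
done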

It’s not a bug: all is where non-backup operations (prune, check, etc.) run from; the numbered directories are where the backups run for each repository, and the index of each can be found in duplicacy.json.

Still no support for pre/post backup scripts in the web GUI? I almost purchased the license, but I guess I’ll just have to use the free CLI version, given how much gymnastics is required to set up the pre/post scripts when using the GUI.

It’s only free for personal use. See Duplicacy

Yeah, I understand. I was planning to use the Web GUI version for personal use (backing up my home NAS). I’ll be using the CLI version instead.

The way I perceive it, the web UI is a simplified front end that works for basic and straightforward scenarios. Attempts to do anything more complex invariably turn into an uphill battle with the web UI: getting it to swallow the input you want to pass to the CLI. At that point, the value of the web UI is reduced to that of a scheduler, and any modern OS already has one. For me, using the CLI is a no-brainer, especially for a backup tool that is supposed to be out of sight, out of mind, and especially in cases where some non-trivial configuration is needed.

But in that case, you don’t actually need pre/post backup scripts at all; you can do that work in the host script that launches the CLI. This further simplifies setup and decouples Duplicacy from the rest of the environment.

Yup, that was pretty much my conclusion. I sort of like the idea of having a nice web UI even for things like backups, where you can visualize storage usage, schedules, etc. But when it becomes a pain to configure, the CLI is the way to go, and I agree there’s no need for pre/post scripts. I’ll just set up an unRaid user script running on a cron that stops my containers, runs the backup, and starts them back up.
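Such a script can stay very small. A rough sketch, with placeholder container names and paths:

#!/bin/bash
# Stop containers, back up their data, then start them again.
CONTAINERS="postgres nextcloud"               # placeholder container names

docker stop $CONTAINERS
trap "docker start $CONTAINERS" EXIT          # containers come back up even if the backup fails

cd /mnt/user/appdata                          # placeholder: initialized repository root
duplicacy -log backup -stats

The trap is the one detail worth keeping: without it, a failed backup leaves the containers stopped.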

I think unRaid supports ZFS, so you should be able to back up filesystem snapshots and avoid stopping containers altogether. (Except databases, which need to be backed up in their own way (dump/export); and for non-critical services, stop/backup/start is not that bad.)
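A rough sketch of that approach, with a hypothetical dataset name. It assumes the repository was initialized once against the fixed snapshot path (the CLI’s init command takes a -repository flag for pointing the repository root somewhere other than the directory holding .duplicacy):

#!/bin/bash
# Back up a crash-consistent ZFS snapshot instead of stopping containers.
DATASET="tank/appdata"                        # placeholder dataset
SNAP="$DATASET@duplicacy"                     # fixed snapshot name, recreated each run

zfs destroy "$SNAP" 2>/dev/null || true       # drop the previous run's snapshot, if any
zfs snapshot "$SNAP"

# Snapshots are exposed read-only under the dataset's hidden .zfs directory.
cd /path/to/repo-config                       # placeholder: writable dir holding .duplicacy
duplicacy -log backup -stats                  # backs up /mnt/tank/appdata/.zfs/snapshot/duplicacy

zfs destroy "$SNAP"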

Ehh, I don’t see a problem with stopping the containers, given that those backups will run once or twice a week, very late at night. Re: databases: wouldn’t stopping the pgsql container and backing up its data folder do it?
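Otherwise, I guess the dump/export route mentioned above would be something like this (placeholder container and path names), with the container left running:

#!/bin/bash
# Dump all databases from the running container; the backup then picks up the dump file.
docker exec postgres pg_dumpall -U postgres > /mnt/user/backups/pg-dumpall.sql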
