Pre Command and Post Command Scripts

I had the same issue, and it turns out that the ID from the WebUI minus 1 is the ID to use in the file path. It would be nice if this showed up in the documentation somewhere.
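To make the off-by-one concrete, here is a tiny illustration (the path layout is taken from this thread, not from official documentation):

```python
# Hypothetical illustration of the off-by-one mapping; the path layout
# below comes from this thread, not from official documentation.
webui_id = 3                  # ID as displayed in the Web UI
path_id = webui_id - 1        # ID used on disk (zero-based)
print(f"~/.duplicacy-web/cache/localhost/{path_id}/.duplicacy/scripts/")
# -> ~/.duplicacy-web/cache/localhost/2/.duplicacy/scripts/
```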

I think that in an application like Duplicacy, people can deal with counting from 0.

Is there a way to use pre- or post-scripts for schedules?

I tried placing a post-backup script at ~/.duplicacy-web/cache/localhost/all/.duplicacy/scripts/post-backup with 777 permissions, but it does not seem to be invoked after a backup schedule runs. Other pre- or post-scripts for individual backup IDs, with the same settings and permissions, run without any problems.

I have many backups in one schedule and want to put the remote server to sleep only after the whole schedule has completed. A workaround would be to set the post-backup script only on the last scheduled backup, but then the backups could not run in parallel, which is IMHO not the best solution.
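For anyone with the same setup, here is a rough sketch of an alternative: a post-backup script attached to every backup in the schedule that suspends the server only once no other backup is still running. The process pattern, host name, and suspend command are all assumptions to adapt:

```python
#!/usr/bin/env python3
# Hypothetical shared post-backup script: suspend the remote server only
# when no other duplicacy backup in the schedule is still running.
# The process pattern, hostname, and suspend command are assumptions.
import subprocess
import sys

# Count duplicacy backup processes. The pattern deliberately requires a
# space before "backup" so it matches the CLI invocation but not this
# script's own path (.../scripts/post-backup). The duplicacy process
# that invoked this script still counts, so 1 means "only this backup".
result = subprocess.run(
    ["pgrep", "-c", "-f", "duplicacy.* backup"],
    capture_output=True, text=True,
)
count = int(result.stdout.strip() or 0)

if count > 1:
    sys.exit(0)  # another backup in the schedule is still running

# This was the last backup to finish: suspend the remote server.
subprocess.run(["ssh", "backup-server", "sudo", "systemctl", "suspend"])
```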

Hi, is there a date for supporting pre/post scripts in schedules? I searched but couldn't find one.

Hi, any date for pre/post script implementation in the Duplicacy Web Edition?

I would like a pre-backup script to fail if the backup is about to add more than X MB or Y thousand files in a new snapshot. This is something I have to do in order to tweak and optimize my backup filters over time, because space is not free on cloud storage. Has anyone had any success checking, in an automated fashion, what is about to be added to a backup snapshot, either in raw size or file count? Thanks

That sounds like a very counterproductive idea: discarding data to fit a specific storage cost.

Data either needs to be backed up or it does not. The cost of cloud storage is an external factor that has no bearing on the value of the data.

In the grand scheme of things, in the context of data backup, the cost of storage is always negligible compared to the value of the data it safeguards.

This is not true. Data that might be worth storing at $0.01/GB might not be worth storing at $100/GB. One size does not fit all.

That's a straw-man argument.

Both numbers you quoted are way too high for backup applications. In the real world, the cost of backup storage is somewhere between $1 and $4 per TB per month, so the amount of data the majority of users have will cost well under $10 a month. Splitting hairs to save a couple of bucks a month is outrageously counterproductive, even at minimum wage.

If you are saying “but there may be some users who can't decide whether they need to back up those 100 TB they have lying around”: those users are a minority, and they already know what to do; they definitely would not be asking for data-management advice on the Duplicacy forum.

And yes, one size definitely fits the vast majority. Outliers usually don't need forum advice: they store either very little or petabytes, and at that entirely different scale everything changes.

To answer the question (or at least point you in the right direction): you can't do this directly with Duplicacy; however, you could probably run a diff or backup -dry-run and parse the output.

You might not be able to cancel the job with a Duplicacy-managed pre-script, but you could probably write a standalone script around these commands and abort the real run if it exceeds a threshold.
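For example, a standalone wrapper along these lines might work (an untested sketch; the summary-line format is an assumption and may differ between CLI versions):

```python
#!/usr/bin/env python3
# Untested sketch of the dry-run idea: measure what the next backup
# would upload and abort before the real run if it is too large. The
# "... new, ... bytes" summary format is an assumption and may differ
# between CLI versions.
import re
import subprocess
import sys

MAX_NEW_BYTES = 500 * 1024 * 1024  # refuse to add more than 500 MB

out = subprocess.run(
    ["duplicacy", "backup", "-dry-run", "-stats"],
    capture_output=True, text=True,
).stdout

# Expected shape: "Files: 120 total, 3,390K bytes; 2 new, 1,130K bytes"
match = re.search(r"([\d,]+) new, ([\d,]+)([KMG]?) bytes", out)
if match is None:
    sys.exit("stats line not found; inspect the dry-run output format")

unit = {"": 1, "K": 1024, "M": 1024**2, "G": 1024**3}[match.group(3)]
new_files = int(match.group(1).replace(",", ""))
new_bytes = int(match.group(2).replace(",", "")) * unit
print(f"dry run: {new_files} new files, {new_bytes} new bytes")

sys.exit(1 if new_bytes > MAX_NEW_BYTES else 0)
```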

4 posts were split to a new topic: Low cost / Archival storage discussion

How do you do this for backups in the WebUI?

I use this CLI wrapper script.

Thanks. Is there any equivalent for vanilla Linux or even Unraid?

Hi, can you please add this feature to Duplicacy Web 1.8.0? It would be wonderful.

Thanks.

This would be great for me too, as I have remote storage.

Can we get an idea of when this feature is planned to be added?

Hi, this will send the ping when the backup completes, regardless of whether it completed with warnings, correct?

Healthchecks.io is designed to notify you when it doesn't get pinged (e.g. the computer is offline) or when the job fails.

Yes, my question was about when Duplicacy considers a job failed. From other discussions, it looks like a backup with warnings is still flagged as successful with exit code 0, so we receive a notification that everything is fine until we take a closer look at the logs.

Yeah, to overcome this I had to use the job scheduler to send emails to Healthchecks and then parse the emails for failure or success.
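If you'd rather avoid the email round-trip, a rough alternative is to scan the newest backup log for warnings yourself and ping the matching Healthchecks endpoint; the log location and the WARN marker below are assumptions about the Web Edition:

```python
#!/usr/bin/env python3
# Rough alternative: scan the newest backup log for warnings and ping
# the matching Healthchecks endpoint. The log directory and the "WARN"
# marker are assumptions about the Web Edition; replace CHECK_UUID.
import glob
import os
import urllib.request

CHECK_UUID = "your-uuid-here"  # hypothetical placeholder
LOG_GLOB = os.path.expanduser("~/.duplicacy-web/logs/backup-*.log")

logs = sorted(glob.glob(LOG_GLOB), key=os.path.getmtime)
if not logs:
    raise SystemExit("no backup logs found")

with open(logs[-1]) as f:
    has_warnings = any("WARN" in line for line in f)

# Healthchecks: the base ping URL signals success; appending /fail
# signals failure, which triggers the alert.
url = f"https://hc-ping.com/{CHECK_UUID}" + ("/fail" if has_warnings else "")
urllib.request.urlopen(url, timeout=10)
```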

Six years later and still no support for script configuration in the GUI. I guess it's safe to say it's never coming.
