Scheduling of daily tasks (workaround?)


I know that this topic has been discussed a few times in the past, but I would like to know if there is any new development and ask a couple of questions about a possible workaround.

To summarize, I am evaluating Duplicacy to back up my workstation. Until now I have been using Duplicati, but I have already had to recreate whole 1+ TB backup sets a few times because of database corruption (just recreating the database doesn’t work, but that is off-topic here). This is not confidence-inspiring. Duplicacy is definitely faster and, from what I read, more robust than Duplicati. On top of that, a very nice feature is the ability to copy a local backup to the cloud (with Duplicati one needs to perform two independent backups).

The only negative point I found is that the scheduler does not catch up on missed jobs if the computer was off at the scheduled start time. I would like to back up some large and not-very-dynamic datasets only once a day, typically when I start the computer. As far as I can see, this doesn’t work.

Unless something new has been implemented of which I am not aware, I see three possible solutions:

1 - I start the jobs manually
2 - I set the start time later in the day, when the computer is almost certainly running
3 - I run the jobs using Windows Task Scheduler

I can already say that I find the first two options suboptimal. My idea is to have a fire-and-forget backup system, so I do not like the idea of having to start the jobs manually every day. The second option is better, but I’d rather have these long, CPU- and I/O-intensive jobs run when I am not using the computer (I usually start it up, then go and get a coffee, and in the meantime the backup runs).

The third option seems the most attractive. I know that someone might think that this goes against the idea of having a GUI, and I tend to agree, but IMHO it’s a better solution than the other two. Using Process Hacker, I know which options the backup jobs use and in which directory they need to start, so I could easily write a script that is invoked at startup.
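As a sketch of what such a startup script could look like (all paths, the repository directory, the storage name, and the executable name below are placeholders — substitute whatever Process Hacker shows for your actual jobs):

```shell
:: startup-backup.cmd — hypothetical wrapper to be run by Windows Task Scheduler.
:: Change into the repository directory the Web UI uses for this backup job
:: (illustrative path; check the working directory of your real CLI process).
cd /d "C:\ProgramData\.duplicacy-web\repositories\localhost\0"

:: Invoke the CLI executable with the same options the Web UI passes.
:: Executable name/version and storage name here are assumptions.
"C:\ProgramData\.duplicacy-web\bin\duplicacy.exe" backup -storage local -stats >> "C:\Logs\duplicacy-startup.log" 2>&1
```

The script could then be registered to run at logon with the built-in `schtasks` tool, e.g. `schtasks /Create /TN "DuplicacyStartupBackup" /TR "C:\Scripts\startup-backup.cmd" /SC ONLOGON`, which fires whenever you log in rather than at a fixed time of day.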

If I did that, would I break the integration with the GUI? I assume that Duplicacy Web captures and processes the output of the CLI executable. That is pretty obvious in the case of the “check” operation (but that one can be scheduled at a later time: it doesn’t take long and it’s lightweight). What about the “backup” operation? Is it safe to run that outside the Duplicacy Web service?

Finally, I think it would be relatively simple to add a checkbox in the scheduler that says “Execute at Startup” and disables the “Starting Time” field. After the first run, the job could be repeated at the interval defined by “Frequency”. It’s not the same as catching up on missed daily jobs (e.g. the computer or the service might be restarted during the day), but it would help.

Thank you!



It is safe to run the CLI outside of the GUI; there is not a whole lot in the GUI that relies on the output of these commands. I think it is mostly the statistics, which might even be parsed from the generated log files rather than from the execution output itself (I am not sure about that). For the backup task itself, there is (almost?) nothing in the GUI that reads the output. So run check from the GUI once in a while; the rest you can run externally via the CLI.
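For reference, the external invocation the reply describes could be as minimal as the following (the repository path and storage name are illustrative — use the ones your Web UI jobs actually use):

```shell
# Run a backup directly with the CLI, outside the Web UI service.
# Must be executed from an initialized repository directory (path is a placeholder).
cd /path/to/repository
duplicacy backup -storage default -stats

# Optionally confirm the new revision was created.
duplicacy list -storage default
```

The occasional `check` can still be triggered from the GUI as the reply suggests, so its output keeps feeding whatever the Web UI parses.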

Hey I just wanted to share this is very similar to the issue I wrote here: Web-UI Daily Schedule doesn't trigger immediately if more than 1 day has elapsed without backup

Would definitely like to see a better solution to this. I’m on macOS, if that matters.