Feature Request: Limiting max run time

Could you add an option to limit the maximum time a backup command is allowed to run? Local backups run super fast, but uploading my initial data set to Wasabi is going to take weeks, I fear. Even running with 16 threads it will take days (and with everyone at home, it tends to slow down any uplink-intensive workloads like VPN and Zoom).

Based on this forum post I’ve been stopping and restarting the backup as needed when we need the upload bandwidth, but I’m still wondering how much time I’m wasting. It would be nice to run it overnight (for example) and have an actual listable/recoverable “snapshot” created (even if it’s marked as incomplete).

The web GUI 1.3.0 already supports this feature – there is an option for setting the max run time when configuring a schedule.

Ahhh, thanks. I knew I saw it somewhere and thought I must have seen it in borg. I didn’t want to schedule a job until I had run and tested the full pass (especially with the job taking a couple of weeks and the job frequency being a week at most).

If the job exceeds the maximum time, does it stop the client gracefully and write some sort of snapshot, or stop it abruptly (leaving the uploaded chunks in place)? From the other thread I see you write a partial snapshot file (locally?), but that’s only on the initial backup.

I guess I was just trying to figure out the right approach. I only have part of my files selected now (I’ll create the other directory links later). I figured “one good (if partial) backup” would be a good start, but I’m worried that the start/stop process wouldn’t work well after that partial backup is complete.

Yes, when an initial backup is interrupted, Duplicacy will save an incomplete snapshot locally, so on resume it will skip files that have already been processed. Even if this incomplete snapshot doesn’t work, resuming is still fast because chunks already in the storage won’t be re-uploaded.

If you need to add new files after the initial backup is complete, one trick to force fast resuming is to create a new backup with a different backup id, so the new backup will become an initial backup but it can still reuse all chunks already in the storage.
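A rough sketch of how that trick might look from the CLI (paths and the new backup id are made up for illustration; the usual way to change the id of an existing repository is to edit the `id` field in `.duplicacy/preferences`):

```shell
# Run the initial backup under the original backup id:
cd /path/to/repo
duplicacy backup -stats -threads 16

# After adding new files, switch to a new backup id
# (e.g. change "id" in .duplicacy/preferences to "repo-v2")
# so the next run is treated as an initial backup and can be
# interrupted and resumed quickly, while still reusing all
# chunks already uploaded to the storage:
duplicacy backup -stats -threads 16
```

The deduplication means the second “initial” backup only uploads chunks that aren’t already in the storage, so the different id costs almost nothing extra.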

Ok, great! Thanks for the idea. I wanted to be sure I was on the right track. I went ahead and just added everything to the backup and will restart if I need to interrupt it (I guess I’ll know in 9 days).

BTW, thanks for the excellent program! I really appreciate all of the design details and docs on your website.
