Backup behaviour of Duplicacy

Hi everybody,

I work with the web GUI. I'm not sure about the behaviour of Duplicacy.
When I set up a backup and, let's say, I need to back up roughly 3 TB to Google Drive, it will take a while to back up everything. So I have two options to start the backup:

  1. I can just go to Backup, choose my backup and click the play button. If I now upload something big, for example the initial backup, what happens if the backup is interrupted, for example when I shut down the computer? Is the backup corrupted? Does Duplicacy finish the backup job automatically after rebooting, or do I have to start it manually again? Does Duplicacy then create a new revision, or does it just finish the revision it was working on before I shut down? And what happens if the original data is changed while backing up? When I upload 3 TB with Duplicacy it will take several days to finish, and it's not really an option to not touch my computer during that time.

  2. The second option would be to go to the schedule and add this as a backup task. I've set my schedule to trigger a backup every 15 minutes, starting at 4 am with unlimited runtime, because later on I want to update my backup almost continuously. What happens if a new backup task is triggered by the schedule before the previous one has finished? Will it crash? Corrupt the backup? Does it trigger a new revision, or does it just finish the running job however long it takes and then accept new triggers from the schedule to create new revisions? The last one would be my preferred behaviour.

Thanks for any advice!
Cheers, Dag!

It won’t corrupt your backup. New revisions are only finalized when the whole process completes. If your backup was terminated, you will need to restart it, manually or on schedule. Duplicacy will try to create another revision (so new files will get pulled in), but because many of the chunks have already been uploaded, the process will effectively resume from the point of termination: existing chunks do not need to be uploaded and will be skipped. That is, as long as your repository is mostly unchanged.
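
Conceptually, this works because chunks are content-addressed: a chunk's identity is derived from a hash of its contents, so a restarted backup only uploads what isn't already in storage. Here's a minimal Go sketch of that idea; the `Storage` type and `uploadChunk` function are hypothetical stand-ins, not Duplicacy's actual code:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// Hypothetical in-memory stand-in for the storage backend,
// keyed by content hash.
type Storage map[string][]byte

// uploadChunk skips any chunk whose hash already exists in storage,
// which is why a restarted backup effectively resumes.
func uploadChunk(storage Storage, chunk []byte) bool {
	id := fmt.Sprintf("%x", sha256.Sum256(chunk))
	if _, exists := storage[id]; exists {
		return false // already uploaded in a previous (interrupted) run
	}
	storage[id] = chunk // new or changed data: upload it
	return true
}

func main() {
	storage := Storage{}
	chunks := [][]byte{[]byte("chunk A"), []byte("chunk B")}

	// First (interrupted) run uploads both chunks.
	for _, c := range chunks {
		uploadChunk(storage, c)
	}
	// The restarted run re-scans the same data but uploads nothing new.
	for _, c := range chunks {
		fmt.Println("uploaded:", uploadChunk(storage, c)) // false, false
	}
}
```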

What will happen in this scenario, though, is that after many stop-and-go attempts you will accumulate some unreferenced chunks (chunks that were uploaded during incomplete runs but later changed, and as such are not part of any revision). This doesn't affect the integrity of your storage, but they will take up some extra space that won't be released by a regular prune. You will need to run prune in -exhaustive mode if you want to clean these up (after your initial backup is done).
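
To illustrate what an exhaustive prune has to do conceptually: walk everything in chunk storage and flag whatever no revision references. A rough Go sketch, where the maps are hypothetical stand-ins for the real storage listing and revision metadata:

```go
package main

import "fmt"

// sweepUnreferenced reports chunks present in storage that no revision
// references, i.e. leftovers from interrupted runs. Illustrative only,
// not Duplicacy's internals.
func sweepUnreferenced(allChunks, referenced map[string]bool) []string {
	var orphans []string
	for id := range allChunks {
		if !referenced[id] {
			orphans = append(orphans, id)
		}
	}
	return orphans
}

func main() {
	all := map[string]bool{"c1": true, "c2": true, "c3": true}
	ref := map[string]bool{"c1": true, "c3": true} // chunks revisions still use
	fmt.Println("unreferenced:", sweepUnreferenced(all, ref)) // [c2]
}
```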

This is how it works: tasks do not trigger again while they are still running.
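
In other words, a tick that fires while the previous backup is still in progress is skipped rather than queued or run in parallel. A minimal Go sketch of that non-overlapping behaviour (illustrative only, not Duplicacy's scheduler code):

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	var running sync.Mutex
	ticker := time.NewTicker(200 * time.Millisecond) // stands in for "every 15 minutes"
	defer ticker.Stop()

	for i := 0; i < 5; i++ {
		<-ticker.C
		// If a backup is still in progress, skip this tick entirely.
		if !running.TryLock() {
			fmt.Println("tick: backup still running, skipping")
			continue
		}
		go func(n int) {
			defer running.Unlock()
			fmt.Println("starting backup run", n)
			time.Sleep(500 * time.Millisecond) // simulated long backup
		}(i)
	}
	time.Sleep(time.Second) // let the last run finish
}
```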

Hey sevimo!

Thank you for your quick response!
Best regards, Paul