Questions about B2 and multiple repositories, schedules

I am running Duplicacy in a saspus/Duplicacy-web Docker image on my Synology DS920+. Setup and installation was problem free and I am happily backing up data to my Backblaze B2 buckets. Still, I have questions…

Backup design

I want to maintain three separate repositories (shares on my nas) and back them up to individual B2 buckets. I have about 2+TB total to store from the three sources and my upload throttle limits me to about 80GB/day. If I’m understanding Duplicacy correctly, I need to create:

  1. Three separate backup routines.
  2. Three separate schedules (or run them all in parallel)
  3. Three separate check schedules.
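For scale, here is a back-of-the-envelope estimate of the initial upload time using the figures above (a rough sketch that ignores deduplication and compression, both of which would shorten it):

```python
# Rough initial-upload estimate: ~2 TB total at an ~80 GB/day upload throttle.
# Ignores Duplicacy's deduplication and compression, which reduce actual bytes sent.
import math

total_gb = 2000   # ~2 TB across the three repositories
daily_gb = 80     # upload throttle per day

days = math.ceil(total_gb / daily_gb)
print(days)  # 25 -> roughly 25 days of continuous uploading
```

So the initial ingest alone occupies the link for close to a month, which is why sequencing matters so much here.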

At first blush, this all seems rather awkward and uncoordinated. Just managing the sequencing is a bit confusing. I’ve read a little about parallel actions and that seems rather uncoordinated as well. I assume that I don’t want checks running at the same time a source is being backed up, so it feels like I have to create and coordinate at least six different schedules to accomplish this.

If I want to do more checks/prunes, I have to create a different schedule for each type and each repository, correct? That means I’ll have 3 backups, 3 backup schedules, 3 check -chunks schedules, 3 check -files schedules, and 3 prune schedules, all of which I have to coordinate so they don’t overlap or conflict with each other. WHEW!

Surely, I am missing something… Is there a better way to do this and still maintain three separate repositories in separate B2 buckets?


  1. When and how often does one need to run checks and prunes?
  2. Do I have to run a check in order to get any data on my B2 storage size?

Thanks in advance!

Did you mean that you don’t want any of these jobs to run in parallel? If so, you can just set up one schedule that runs these jobs in sequence.

Otherwise, I would suggest creating 3 schedules that run backup, prune, and check for each bucket.

But you probably don’t need both check -chunks and check -files. They will create lots of download traffic, which isn’t free for B2. check -chunks alone might be ok, since it will only download new chunks that have not been verified before, while check -files will download every chunk at least once.

Run check (without -chunks and -files) after each backup and prune. Run prune at most once a day.
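In CLI terms (the web GUI schedules map onto the same commands), one per-bucket maintenance pass could look like the dry-run sketch below. It assumes the repository is already initialized; the -keep retention values are purely illustrative, not a recommendation:

```shell
#!/bin/sh
# Sketch of one per-bucket schedule: backup, check, daily prune, check again.
# `run` only prints each command (a dry run); drop the echo to execute for real.
run() { echo "+ $*"; }

run duplicacy backup -stats        # upload new/changed chunks
run duplicacy check                # verify snapshots reference existing chunks (no downloads)
run duplicacy prune -keep 0:360 -keep 30:180 -keep 7:30 -keep 1:7  # illustrative retention only
run duplicacy check                # confirm the prune left all kept snapshots intact
```

Running the four steps in sequence per bucket avoids the overlap problem: a check never races a backup or prune on the same storage.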



Thanks for the reply. I’m probably getting ahead of myself here. At this point, I’m still focused on ingestion, and with my low-bandwidth connection there’s no sense running anything sequentially, because any one of my backups will run 24/7 for days until it all gets uploaded. Once I get everything uploaded, I can see how long each backup takes and then sequence them under a single schedule. Ditto with the checks/prunes.

With my limited bandwidth, I suspect running any backups in parallel is a lost cause, at least for now while I am still trying to get the initial upload accomplished. I’ll have gigabit fiber in a month or so, but for now, I’m living in the dark ages…

Still getting to grips with the GUI. It’s not the most intuitive or well-designed, imo. I only just figured out that I can add multiple actions to a single schedule, so that makes things a bit more manageable.

Thanks for the tips on the chunks & files options. Based on your suggestions, I will probably end up with one schedule for all three backups in sequence, then another schedule to run checks & prunes on each. I think checks and prunes every two days is probably enough, but I’ll trial-and-error my way to what works.