I am running Duplicacy via the saspus/duplicacy-web Docker image on my Synology DS920+. Setup and installation were problem-free, and I am happily backing up data to my Backblaze B2 buckets. Still, I have questions…
I want to maintain three separate repositories (shares on my NAS) and back them up to individual B2 buckets. I have a bit over 2 TB total to store across the three sources, and my upload throttle limits me to about 80 GB/day, so the initial upload alone will take roughly 2,000 GB ÷ 80 GB/day ≈ 25 days. If I'm understanding Duplicacy correctly, I need to create (see the CLI sketch after this list):
- Three separate backup routines.
- Three separate schedules (or run them all in parallel).
- Three separate check schedules.
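For reference, here is roughly what I understand the CLI equivalent of that setup to be. The share paths, snapshot IDs, and bucket names below are placeholders for my actual ones:

```sh
# One init per share: each share becomes its own repository pointing
# at its own B2 bucket. (B2 credentials are prompted for on init.)
cd /shares/photos    && duplicacy init photos    b2://my-photos-bucket
cd /shares/media     && duplicacy init media     b2://my-media-bucket
cd /shares/documents && duplicacy init documents b2://my-documents-bucket

# ...and then one backup routine per repository, e.g.:
cd /shares/photos    && duplicacy backup -stats
```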
At first blush, this all seems rather awkward and uncoordinated, and just managing the sequencing is confusing. I've read a bit about parallel execution, but that doesn't seem to help with coordination either. I assume I don't want checks running at the same time a source is being backed up, so it feels like I have to create and coordinate at least six different schedules to accomplish this.
If I want to run more kinds of checks and prunes, I have to create a different schedule for each type and each repository, correct? That means I'll have three backups, three backup schedules, three chunk-check schedules, three file-check schedules, and three prune schedules, all of which I have to coordinate so they don't overlap or conflict with each other. WHEW!
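Spelled out, absent a better approach, the coordination I'm imagining amounts to hand-sequencing something like the following. This is only a sketch: the paths are made up, and the prune retention numbers are just the example values from the docs.

```sh
#!/bin/sh
# Run everything serially so backups, checks, and prunes for the
# three repositories never overlap. Paths are placeholders.
REPOS="/shares/photos /shares/media /shares/documents"

for repo in $REPOS; do
    cd "$repo" && duplicacy backup -stats
done

for repo in $REPOS; do
    cd "$repo" && duplicacy check    # verify all referenced chunks exist
done

for repo in $REPOS; do
    # Example retention: nothing kept past a year, monthly snapshots
    # after 180 days, weekly after 30 days, daily after 7 days.
    cd "$repo" && duplicacy prune -keep 0:360 -keep 30:180 -keep 7:30 -keep 1:7
done
```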
Surely, I am missing something… Is there a better way to do this and still maintain three separate repositories in separate B2 buckets?
- When and how often does one need to run checks and prunes?
- Do I have to run a check in order to get any data on my B2 storage size? (See the sketch right after this list.)
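On that last question: from the docs I gather that `check` has reporting options, so I'm guessing the answer is something like the following, but please correct me if there's a lighter-weight way (the path is a placeholder):

```sh
# Print usage and deduplication statistics for one repository's
# storage in tabular form, per revision and in total.
cd /shares/photos && duplicacy check -tabular
```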
Thanks in advance!