New Duplicacy user here, running the Web-UI version in a Docker container on my Synology NAS.
I know there have been other threads about understanding the Prune command, but after searching through them I'm still confused - or rather, they don't explain things in a way I understand - so I would hugely appreciate some direct help!
This is the setup:
- I have Duplicacy Web-UI running on my Synology NAS.
- Duplicacy connects to a Backblaze B2 bucket I own. That bucket has a master retention setting of “Keep prior versions of a file for 30 days” active. I use one bucket for all cloud backups.
- The NAS contains folders of data that very rarely changes. These are things like audio and video files that I need cloud copies of, but only really need one or two revisions of. Let's call these Type 1 files.
- The NAS also contains folders that are the backup targets of system backups from machines on my network. These change daily, and I need to retain a longer history of cloud revisions for them. These are Type 2 files.
Currently - and I suspect foolishly - I have a Prune command set up after each of my backup jobs, each set to one of the following:
[-keep 0:14 -keep 7:14 -keep 1:7 -a] (for Type 1; the data that rarely changes)
[-keep 0:7 -keep 7:30 -keep 1:7 -a] (for Type 2; the data that changes often)
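For clarity, my understanding is that the Web-UI simply passes those options through to the CLI's prune command, so the two jobs amount to something like this (the exact command line the GUI builds is my guess):

```sh
# Type 1 (rarely-changing media), run after that backup job:
duplicacy prune -keep 0:14 -keep 7:14 -keep 1:7 -a

# Type 2 (daily system backups), run after that backup job:
duplicacy prune -keep 0:7 -keep 7:30 -keep 1:7 -a

# As I read the docs: -keep n:m means "for snapshots older than m days,
# keep one every n days" (n = 0 deletes them entirely), and -a applies
# the policy to every backup ID in the storage, not just the current job's.
```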
Reading these forums, I think this is incorrect and I should instead have just one master Prune command that handles everything. But I'm not sure exactly what to set it to in order to strike a happy medium between my two file types.
I therefore have three questions:

1. Am I correct in thinking I should remove all the per-job Prune commands and instead create a single Prune job with the [-keep 0:7 -keep 7:30 -keep 1:7 -a] arrangement? That way my Type 2 files - the daily backups from various systems - will keep a rolling set of revisions covering the last 30 days, and the Type 1 files - which almost never change - will do the same but will not overfill the bucket, since they generate no new revisions.

2. Looking at the logs, I can see that some Prune events on some jobs have affected chunks belonging to other jobs. Have I unwittingly screwed up my backups, some of which took almost a week to initially seed? (I have sketched below how I think I would verify this.)

3. Regarding Backblaze's 30-day version retention: will this interfere with Duplicacy? If it does, am I best off - as I suspect I am - telling Backblaze to never prune or remove anything and letting Duplicacy handle all of that? (My guess at the required lifecycle rule is also sketched below.)
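On question 2, my assumption - and please correct me if this is wrong - is that I could confirm every snapshot still has all of its chunks by running the CLI's check command across all backup IDs, roughly:

```sh
# Verify that every chunk referenced by every snapshot still exists in the storage.
# -a / -all covers the snapshots of every backup ID, not just one.
duplicacy check -a
```

If that passes, am I right to assume the cross-job chunk activity I saw in the prune logs is just deduplicated chunks being removed once no snapshot references them any more?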
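On question 3, if the answer is that Duplicacy should own retention entirely, my reading of the B2 lifecycle documentation is that the bucket rule should become the "keep only the last version of the file" setting rather than the current 30-day one. If I understand the rule format correctly, it would look roughly like this (the exact values are my assumption, so please correct me):

```json
[
  {
    "fileNamePrefix": "",
    "daysFromUploadingToHiding": null,
    "daysFromHidingToDeleting": 1
  }
]
```

That is, B2 never hides files on its own, and anything Duplicacy deletes is permanently removed a day later, so only Duplicacy's prune decides what goes away. Is that the right way to set it up?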
My apologies if these are simple questions, but I’m very new to Duplicacy and more than a little lost!