First time user - Duplicacy with Backblaze B2 best practice

Hi, I’ve been trying to decide between Duplicacy and Synology Hyper Backup to back up some of the data stored on my Synology to the Backblaze B2 service. I keep going back and forth between the two: one day I’ll decide on Hyper Backup, the next Duplicacy, but something just keeps drawing me back to Duplicacy.

I’ve never done cloud backups before, only local, so this is all completely new to me and I’m certainly not comfortable with the CLI. I’m so close to paying for a web GUI license but have a query.

My Synology has several shared folders, and I want to back up only 2 or 3 of them.
These are mapped as different drive letters on my Windows PC, which is where I’ve got Duplicacy running.

I tried running it in a Docker container on my Synology, but I wasn’t comfortable with that and have gone back to a nice and easy Windows installation.

So in my use case, I have my precious photos on my Z: drive, my parents’ photos on the Y: drive, and then a bunch of other, replaceable data on other network drives.

Using the trial web GUI I’ve initialised some storage on B2. When I go to create the backup it only lets me choose one drive letter, which is fine; thinking about it, it would probably be better to split the network drives into separate backup tasks anyway.

I’ve chosen a backup ID and the 1st network drive is uploading right now.

However, what I’m not sure about is what to do once the first backup of my photos has finished and I want to start on the next network drive’s data.

Is best practice to:

1. Create a second bucket for the next drive letter, and so on?
2. Use the same bucket but create a new backup ID?
3. Create a new backup task but use the same backup ID? (I’m not sure this is even possible yet, as the first backup task hasn’t finished.)

Any advice would be greatly appreciated. I have around 1 TB of data to back up and can only upload at 500 KB/s, so I need to get this right the first time!

Option 2: use the same bucket but create a new backup ID.

You don’t want to re-use the same backup ID, because each subsequent backup relies on the metadata in the previous snapshot and will think it needs to upload everything again, since you’d be switching between backup locations.

(It won’t re-upload most of the chunks, but it’ll re-hash the files each time, and you won’t have a clean history of incremental backups.)

But one of Duplicacy’s best features is de-duplication, and it’ll take full advantage of that if you back up to the same storage/bucket, so that’s what I’d recommend.
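In case it helps to see what that looks like outside the Web GUI, here’s a minimal CLI sketch of the same setup (the bucket name and backup IDs are just placeholders, and `duplicacy init` will prompt you for your B2 credentials):

```
:: Z: drive (my photos) - run from the root of the folder you're backing up
cd /d Z:\
duplicacy init my-photos b2://my-duplicacy-bucket
duplicacy backup

:: Y: drive (parents' photos) - same bucket, different backup ID
cd /d Y:\
duplicacy init parents-photos b2://my-duplicacy-bucket
duplicacy backup
```

Both repositories point at the same storage, so any chunks the two drives happen to share are only uploaded once, while each backup ID keeps its own clean history of incremental snapshots.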


Thank you very much. I’m uploading my next network drive now, using a different backup ID but the same bucket.
