Backup strategy advice

Can anyone give me some advice on the best backup strategy for my scenario?

  • I have 8 Windows 10 workstations located in several different countries.
  • Each workstation has a data set that I need to back up, located in c:\backup. It contains regular Office files, pictures, and some 30-40GB VMs.
  • A lot of this data (especially the VMs) is very similar, so I expect it's well suited to a deduplicated backup system.
  • Some of the workstations have fast 1 Gbps links; others have very slow ADSL links.
  • I’d like to configure Duplicacy to backup all of these machines to Wasabi.

My questions are:

  1. In order to benefit from the deduplication, can I back up all of these machines to the same storage pool?
  2. Can they back up simultaneously?
  3. Is this a sane idea?
  4. Have I fundamentally misunderstood the workings of Duplicacy?

My goal is to use as little backup space as possible. I'd seed the workstations with the fast links first, so that most of the data is already in the storage pool; backups from the slower-linked workstations then become more efficient, because the deduplicated data is already in the pool.
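For concreteness, here's roughly how I'm planning to set it up on each machine, if I've understood the docs correctly. The bucket name, region, and snapshot ID below are placeholders:

```shell
# Run from inside c:\backup on each workstation. The storage URL is
# identical everywhere so all machines share one deduplicated pool,
# but each machine gets its own snapshot ID (workstation-01, -02, ...).
duplicacy init -e workstation-01 wasabi://us-east-1@s3.wasabisys.com/my-backup-bucket

# Then, on whatever schedule suits the link speed:
duplicacy backup -stats
```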

In order to benefit from deduplication, you must back up all machines to the same storage.
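To illustrate why this works with your seeding plan: chunks are stored under their content hash, so a chunk already uploaded by one machine is never uploaded again by another. Here's a toy sketch of that idea (illustrative names only, not Duplicacy's actual implementation):

```python
# Toy content-addressed chunk store: chunks are keyed by their hash,
# shared by every machine backing up to the same storage.
import hashlib

CHUNK_SIZE = 4  # tiny fixed chunk size, just for the demo


def chunks(data: bytes):
    return [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]


class Storage:
    def __init__(self):
        self.store = {}   # hash -> chunk, shared by all machines
        self.uploads = 0  # chunks actually transferred

    def backup(self, data: bytes):
        snapshot = []
        for chunk in chunks(data):
            h = hashlib.sha256(chunk).hexdigest()
            if h not in self.store:  # only new content is uploaded
                self.store[h] = chunk
                self.uploads += 1
            snapshot.append(h)
        return snapshot


storage = Storage()
fast_machine = b"AAAABBBBCCCCDDDD"  # seeded first over the fast link
slow_machine = b"AAAABBBBCCCCEEEE"  # mostly identical VM data

storage.backup(fast_machine)  # uploads 4 chunks
storage.backup(slow_machine)  # uploads only the 1 new chunk
print(storage.uploads)        # -> 5, not 8
```

The slow-linked machine only transfers the chunks the pool hasn't seen, which is exactly the seeding effect you're after.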

Yes.

Seems like it to me.

Not that I can tell.

As an aside, if your VMs are a significant portion of your backup data, it might be worth considering using a fixed chunk size instead of the default variable size chunking. One main caveat (?) to using fixed size chunking is that this disables the bundling of small files into larger chunks — which might be a pro or a con depending on your data set.
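For example (if I remember the flags correctly), you get fixed-size chunking by setting the average, minimum, and maximum chunk sizes to the same value at init time; 4M here is just an example, and the bucket name is a placeholder:

```shell
# Initialize a separate storage with a fixed 4 MB chunk size.
# This must be chosen when the storage is created; it cannot be
# changed afterwards.
duplicacy init -c 4M -min 4M -max 4M vm-backups wasabi://us-east-1@s3.wasabisys.com/my-vm-bucket
```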

3 Likes

Thanks so much for your reply John.

Good to know that my strategy makes sense :slight_smile:

I’ll read up on fixed-size chunking before I initiate my test deployment.

1 Like

It’s not really possible to switch a storage’s chunk size configuration once it’s created, so just wanted to share some other threads on chunk size.

1 Like

Dunno if Wasabi supports multiple buckets(?) but, if it does, I'd create two Duplicacy 'storages' on Wasabi: one for normal files (variable-size chunking) and one for VMs (fixed-size). Make sure to exclude the vhdx/vmdk files / VM folders from the variable-size backup if they reside within the same c:\backup area.
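A filters file for the variable-size repository might look something like this (the folder name is an example; adjust to wherever your VMs actually live, and double-check the pattern syntax against the docs):

```
# .duplicacy/filters for the variable-size storage: exclude the VM
# folder and disk images so only the fixed-size storage backs them up.
-VMs/
e:\.(vhdx|vmdk)$
```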

4 Likes

That's a solid plan; Wasabi does indeed support multiple buckets.

Thanks for the great advice

If you're going to do regular backups, make sure they run at least once per day. That way, if anything goes wrong, you lose at most a day of data and still have time to get things back online before the problem grows.
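On Windows you could schedule this with Task Scheduler, e.g. (the task name, start time, and paths below are placeholders):

```shell
# Create a daily 02:00 task that runs a Duplicacy backup from the
# repository directory. Adjust the executable and repository paths.
schtasks /Create /TN "Duplicacy Backup" /SC DAILY /ST 02:00 ^
  /TR "cmd /c cd /d C:\backup && C:\tools\duplicacy.exe backup"
```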

Something to keep in mind: when several PCs back up to the same storage, any user on those PCs who can access the shared storage via Duplicacy will be able to restore, and therefore read, other users' files.

1 Like

This is another use case that would benefit from separating data chunks from metadata chunks: data chunks could then be made write-only, preventing data disclosure, while snapshot chunks remain read/write to facilitate enumeration.

1 Like