Plan for Multi-workstation, appliance, & NAS Share backup to local and offsite storage


I’m looking for a sanity check on my backup plan. It involves a number of workstations, appliances, an Unraid NAS, and Backblaze B2. Duplicacy runs via Docker on the Unraid server. Here is my rough plan at a high level:

  1. 12am: rsync copies workstation & appliance configs to a Backups folder on the Unraid NAS.
  2. 12am: a script shuts down the Docker containers (including Duplicacy) so rsync can copy all Docker configs and app data to the Backups folder. When complete, the script restarts the containers, including Duplicacy. At this point the Backups folder has the latest data from all workstations, appliances, and Docker containers.
  3. 9am: Duplicacy backs up the following repositories, which are shares on the Unraid NAS (Documents, Personal Photos, Professional Photos, Personal Videos, and the Backups folder), to local storage (a disk that is not part of the Unraid array).
  4. When the backups to local storage are complete, Duplicacy’s “copy” command replicates the local storage backup to my Backblaze B2 off-site storage.
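The schedule above could be sketched as a crontab on the Unraid box. This is only a rough sketch: the hostnames, paths, script names, and storage names are placeholders I’ve made up, not anything from the actual setup.

```shell
# /etc/crontab sketch on the Unraid server -- all names below are hypothetical.

# 12am: pull workstation/appliance configs into the Backups share
0 0 * * * root rsync -a ws1:/etc/configs/ /mnt/user/Backups/ws1/

# 12am: stop containers, rsync appdata, restart containers
# (the script body would wrap: docker stop ... ; rsync ... ; docker start ...)
0 0 * * * root /boot/scripts/backup-appdata.sh

# 9am: Duplicacy backs up the shares to the local (non-array) disk,
# then copies that storage to Backblaze B2
0 9 * * * root cd /mnt/user && duplicacy backup -stats && duplicacy copy -from default -to b2
```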

Here are my concerns/questions

  • I’m thinking rsync from the workstations is the best way to go. If there is a better or recommended route, please let me know.
  • I’m looking at a minimum of about 1 TB of data that I need to back up offsite right now, which will definitely take a few months with my connection. I can’t wrap my head around how steps 1–3 will work while step 4 takes forever to run. I know I will have to pause step 2, since Duplicacy will need to keep running while it backs up to B2, but what about the backups to local storage? Can Duplicacy continue to back up to my local storage WHILE it runs that initial copy to the off-site storage? Will it be an issue that the number of snapshots might be out of sync? I’m new to this, so I’d just like expert feedback before I dive in. Thanks for your eyes and input :slight_smile:

Best Regards,

How about this:

Each device you want to back up runs an instance of duplicacy and backs up to a single share on your NAS.
Another instance of duplicacy copies the repository from the NAS to the offsite location. It’s asynchronous, so no waiting involved.

No rsync, no worries.

This lets you take advantage of filesystem snapshotting on every device where it exists and reduces complexity (fewer tools to manage or worry about), which in turn improves reliability.
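As a rough sketch of what that looks like per device (snapshot IDs, storage URLs, bucket names, and paths below are placeholder assumptions, not verified settings):

```shell
# On each workstation, inside the directory to back up (IDs/paths hypothetical):
cd ~/Documents
duplicacy init ws1-documents smb://user@nas/backups/duplicacy   # one shared storage
duplicacy backup -stats                                          # run this on a schedule

# On the NAS (or any machine that sees both storages): add a bit-identical
# copy-compatible B2 target, then replicate asynchronously.
duplicacy add -copy default b2 nas-copy b2://my-bucket
duplicacy copy -from default -to b2
```

The `copy` step is independent of the per-device backups, which is why a months-long initial upload doesn’t block anything.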

For the docker containers that run on the NAS — does the filesystem on your NAS support snapshotting?

I generally don’t think restarting containers is worth it. Most containers tolerate a live data copy, and those that don’t (such as databases) provide data export functionality. The Duplicacy container itself does not even need to be backed up; there is no data worth saving.

I like the idea of simplifying things by only using duplicacy. Regarding having the workstations back up to the same NAS storage with duplicacy (I like that this also helps with deduplication), I’m running into the issue that the storage is SMB, which doesn’t seem to be directly supported unless I mount it with local paths. So would I need to use the same local mount path on all systems? Any suggestions for doing this when the OSes differ (Linux/macOS/Windows)? I’m currently trying to use a custom mount point on macOS, to no avail.

It is directly supported, although it never made it into the documentation: Direct Samba support in CLI

Or you could use other connectivity, like SFTP: duplicacy does not care how it accesses the storage, only that it is accessible.
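For example, initializing against an SFTP storage instead of a mounted share looks something like this (the host, user, and path are placeholders):

```shell
# Duplicacy connects over SFTP directly -- no local mount needed:
duplicacy init ws1-documents sftp://backup@nas.local/duplicacy-storage
duplicacy backup -stats
```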

I’m not sure what you mean. For deduplication purposes you would want to mount the same directory on the server, but mount points can be different. Also, see the SMB backend above.

Windows in particular cannot create multiple connections to the same SMB server with different credentials, even though it’s Microsoft who invented SMB. Hence, with Windows clients you can either use SFTP or that SMB client above to avoid mounting.

What do you mean by custom mount points? macOS has autofs built in, so you can auto-mount anything anywhere. But there too, consider either SFTP (if backing up from offsite) or the direct SMB client (if on the LAN).
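If you do want auto-mounting on macOS, an autofs sketch looks like this (the share name, server, and mount path are assumptions for illustration):

```shell
# /etc/auto_master -- add one line pointing at a custom direct map:
/-    auto_smb    -nosuid

# /etc/auto_smb -- mount the NAS share at a fixed local path:
/mnt/nas-backups    -fstype=smbfs    ://user:password@nas.local/backups

# Then reload the automounter:
sudo automount -vc
```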

Yes! The SMB storage backend is working and is going to work out perfectly for me. @saspus thank you for all of your insight, my backup plan is starting to come together because of it. Here is my updated diagram:

Each device you want to back up runs an instance of duplicacy and backs up to a single share on your NAS.
Another instance of duplicacy copies the repository from the NAS to the offsite location. It’s asynchronous, so no waiting involved.

When you say “duplicacy copies the repository from the NAS to offsite location”, am I just creating another repository from the NAS backup data, or can I tell duplicacy that this is a “duplicacy storage” location so I can then run the “copy” operation to another storage? If it’s the former, won’t this create issues, having duplicacy create a backup from another duplicacy backup, so to speak?

You definitely should not back up a duplicacy datastore with another instance of duplicacy!

This is what I would change:

  1. You don’t want to run duplicacy on a WiFi AP just to fetch config. Instead, let it scp its config on a schedule to a folder on the NAS.
  2. Laptops and desktops should back up to some folder on the NAS, with duplicacy running on each machine;
  3. Another instance of duplicacy running on the NAS will pick up the data folders “Documents, …, Personal Videos, and AppData” and back them up to the same folder.
  4. Yet another instance of duplicacy will copy the duplicacy datastore to the cloud.
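A rough cron sketch of that revised flow (the hosts, paths, storage names, and times are hypothetical, just to show how the pieces line up):

```shell
# On the appliance/AP: ship config via scp on a schedule -- no backup agent needed
0 0 * * * scp /etc/config.tgz backup@nas:/mnt/user/Backups/ap/

# On each laptop/desktop: duplicacy backs up straight to the NAS storage
0 1 * * * cd ~/Documents && duplicacy backup -stats

# On the NAS: back up the local shares into the same storage...
0 2 * * * cd /mnt/user && duplicacy backup -stats

# ...then copy the whole datastore to the cloud (asynchronous; fine if it runs long)
0 3 * * * cd /mnt/user && duplicacy copy -from default -to b2
```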

Something like this:

Thank you @saspus, especially for explaining and drawing that diagram. I have this working with a small set of data and feel good about finally executing this plan.