Backing up my backup

I’ve had a long-term plan of 1) backing up all the computers in our household to my home server and then 2) backing up that backup to the cloud.

In effect, I see the backup on the home server as covering accidental file deletion or a hard drive failure, whilst backing up the repo on the server to the cloud covers bigger disasters like theft or fire.

I finally seem to have part one sorted with Duplicacy (having had abortive attempts with urbackup, borgbackup, duplicati, partial success with duplicity…) but am now wondering about part two: how to back up the repo on my home server to the cloud?

I could add a second storage for each machine, but my main backup already takes around 30 minutes, and as far as I’m aware Duplicacy simply repeats that work for each storage. So I’m more inclined to copy the repo on the home server to the cloud - something that can be done at night when no one needs the bandwidth or CPU cycles.

The repo is on a ZFS pool, so ZFS send/receive to a cloud host would be cool, but that seems very expensive at the moment.

Using ‘duplicacy backup’ on the Duplicacy repo seems nonsensical… so I’m wondering about ‘duplicacy copy’. Would it be bad to repeatedly run copy to keep a cloud-based copy of my repo?

Other than that I may try a different tool, like Tarsnap or even simple rsync.

Interested to hear if anyone else is doing anything like this and whether ‘copy’ is suitable for the job.

“duplicacy copy” does exactly what you want. It won’t copy chunks that are already on the destination so performance is very good.

It reads the snapshot manifests to find out which chunks need to be copied, which is why its performance is so good. But this means it assumes the manifests are accurate. I would run “duplicacy check” occasionally (maybe after every backup, depending on how long it takes / how paranoid you are) to confirm that all of the chunks referenced in the snapshots exist in the cloud.

Don’t pass the “-files” option to “duplicacy check” unless you’re prepared to wait a long time. It has to do almost all the work a restore does, and that work is multiplied by the number of snapshots being checked.
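
For illustration, here’s a rough sketch of what that nightly routine could look like, assuming the local storage is the default one and the cloud storage has been added as a copy-compatible storage named “offsite” (the storage names, backup ID, and bucket URL below are placeholders, not anything from your setup):

```
# One-time setup (placeholder names/URL): add the cloud storage as
# copy-compatible with the existing local storage, encrypted.
duplicacy add -e -copy default offsite my-server b2://my-backup-bucket

# Nightly: copy any new snapshots/chunks from the local storage to the cloud.
duplicacy copy -from default -to offsite

# Occasionally: confirm every chunk referenced by the snapshots exists
# on the cloud storage (no -files, so it only checks chunk existence).
duplicacy check -storage offsite
```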

I use FreeFileSync to copy files from my various computers to a single storage location within my house, and then use Duplicacy to back that full set of content up to cloud storage.

That approach has a significant advantage over what you’re doing: the files on the drive in my house are available via a network share in a NAS-style access pattern, so I can either use the local copy on whatever computer I’m working on (if that computer has the file) or directly access the shared storage if there’s no local copy. I get speed when I need it and flexibility for occasional use when speed isn’t crucial.

One thing I don’t get is the ability to store multiple versions of content on my in-house shared storage, but I’m comfortable relying on the cloud providers (several of them) for that.

@tbain98: My home server has two ZFS mirrors: one hosts the files and apps I want to access in-house and the other hosts backups, but I get your point.

@Danny: Thanks for the tip about check. Reading more about copy, I notice it seems to work per backup ID rather than on the whole storage. So would I need to run copy on each computer I back up to my home server? I originally had in mind something that would run automatically from the server at night.

Adrian, you can specify the -id option when copying snapshots, so you can run the copy command on the same computer, just one repository ID at a time.
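
For example (with “laptop” and “desktop” as placeholder backup IDs and “offsite” as an assumed cloud storage name), something like this could run from the server:

```
# Copy snapshots one repository ID at a time, all from the server.
duplicacy copy -id laptop -from default -to offsite
duplicacy copy -id desktop -from default -to offsite
```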

Presumably then, since I have Duplicacy installed on the server with the repo, I can run the copy command on the server and copy snapshots to a remote storage? No problem with it being one ID at a time.

Yes, you can run the copy command on the server.

Working nicely… but so slow. I estimate it will take about six weeks to get my 1 TB storage copied over (just running overnight). Seriously considering whether it’s worth the effort.

If you’re copying from the local storage to a remote storage, you can set the -threads option to use multiple threads to speed up the upload.
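
Something along these lines, again with an assumed “offsite” storage name (adjust the thread count to your upstream bandwidth):

```
# Upload with 4 threads instead of 1 to speed up the copy.
duplicacy copy -from default -to offsite -threads 4
```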