Best Strategy to Back Up the Backup

Hello everyone,

I’m new to Duplicacy, but after a few test runs and comparisons I like it a lot - it’s so fast… :slight_smile:

My current setup is the following:

  • Different devices create their backups on the NAS
  • The NAS backs up its shared folders to the NAS itself
  • Duplicati backs up everything to B2

I want to switch almost everything to Duplicacy now. A few clients already back up into an archive on the NAS; others still use Veeam. So now I have a backup directory on the NAS with

  • two encrypted Duplicacy archives (one for servers and one for the PCs to make use of the deduplication)
  • one encrypted Veeam backup

What’s the best way to sync them to B2?

Currently, I’m thinking of starting another Duplicacy backup job that backs up the entire NAS backup folder to B2. But I guess that wouldn’t make much sense, because all the files are already encrypted, deduplicated and compressed? I imagine this method would create traffic overhead.

Another option would be to use the copy command, at least for the Duplicacy archives. Then I could copy them to different buckets. The Veeam backup could still be a separate Duplicacy backup to B2.

Or, as a third option, I could just sync the NAS backup folder with the b2sync script, rclone or something else. But as I understand it, that would be bad if a backup starts while files are still being copied?

So what strategy would you recommend? What’s the most cost-effective and secure way to duplicate my backups to B2?

Welcome to the forum!

Did you already look at the copy command?
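In rough terms: you add B2 as a copy-compatible storage in each repository, then copy the snapshots over. A minimal sketch (the storage name, snapshot ID and bucket are placeholders):

# add an encrypted, copy-compatible B2 storage named "b2"
duplicacy add -e -copy default -bit-identical b2 my-snapshot-id b2://my-backup-bucket

# copy all snapshots from the local storage to B2
duplicacy copy -from default -to b2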

This topic is probably relevant:


Thank you! :slight_smile:

Yes, I’m aware of the copy command, but I was wondering if it’s the best way, since I have the same “problem” as mentioned in your linked thread - good find :slight_smile:

I’d prefer to sync all storages to B2 from my NAS, so the backup clients don’t have to copy anything; for that, I have to know all the storage passwords and need empty local repositories.
Currently that would be possible, since I do know all the passwords :thinking:

I assume that when using rsync/rclone I would have to check whether a backup is running in one of the storages I want to sync? I guess the copy in B2 would be useless if it’s a copy of a running backup?

Does the copy command handle this? Or isn’t that even a problem?

Edit: Just found Back up to multiple storages
So I will use the copy command where possible. I can even sync every storage into its own bucket, which will make recovery of single storages easier/faster.
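Roughly what I have in mind, per repository (storage and bucket names are just examples):

# in the servers repository: its own encrypted, copy-compatible bucket
duplicacy add -e -copy default -bit-identical backblaze servers b2://backup-servers

# in the PCs repository: a separate bucket
duplicacy add -e -copy default -bit-identical backblaze pcs b2://backup-pcs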

But I’m still wondering what happens when a copy starts while a backup is still running. Is that a problem?


And one more question… :slight_smile:

I’ve added Backblaze B2 as a second storage to a repository with -e, -copy and -bit-identical. So far so good, and it works.

But I’m wondering what is happening here. If I don’t set the -e flag, will the copy command decrypt my backup and copy it unencrypted to Backblaze? Or, with the -e flag, is it now double-encrypted?

Just wondering whether Duplicacy decrypts chunks before copying them. In my opinion that’s unneeded CPU overhead. :thinking:
For now I’ve set the same encryption password for both storages (local and Backblaze).

If you didn’t specify the -e flag, the new storage becomes unencrypted and any backups or copies to it will be unencrypted. There’s no such thing as doubly-encrypted - it’s the storage that’s either encrypted or not.

Yes, it has to. If the source storage is encrypted, each chunk has to be decrypted before being re-encrypted with the new storage’s keys. It’s actually pretty fast and doesn’t hit the CPU much or slow things down, especially if you use more than one thread.
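To make that concrete (names are placeholders) - whether the target is encrypted is decided once, when you add it:

# with -e, copied chunks are re-encrypted with this storage's keys
duplicacy add -e -copy default backblaze my-snapshot-id b2://my-bucket

# without -e, the same command creates an UNENCRYPTED storage, and
# everything backed up or copied to it lands there unencrypted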


Good to know, thanks :slight_smile:

This could be documented as a hint on the copy command page, and especially in the article Back up to multiple storages, since the last thing people want is to upload unencrypted data to the cloud. It’s not self-explanatory, in my opinion, since it’s not a 1:1 copy :slight_smile: @TheBestPessimist

PS: My script (for each storage) currently looks like this. If anyone has tips or better ideas, feel free to let me know :blush:

#!/usr/bin/env bash
# abort on errors, including failures on the left side of a pipe (duplicacy | tee)
set -e
set -o pipefail

# run from the script's own directory
DIR=$(dirname "$0")
cd "$DIR"

BIN=../bin/duplicacy

DATETIME=$(date "+%Y-%m-%d-%H:%M:%S")

# make sure the log directory exists
mkdir -p logs

echo "# Starting Backup..."
$BIN -log backup -stats -threads 2 | tee "logs/$DATETIME-backup.log"
echo "# Done"

echo "# Copy to Backblaze..."
$BIN -log copy -from default -to backblaze -threads 20 | tee "logs/$DATETIME-copy-backblaze.log"
echo "# Done"

echo "# Prune Backups..."
$BIN -log prune -all -keep 0:360 -keep 30:90 -keep 7:14 -keep 1:7 | tee "logs/$DATETIME-prune.log"
$BIN -log prune -storage backblaze -all -keep 0:360 -keep 30:90 -keep 7:14 -keep 1:7 | tee "logs/$DATETIME-prune-backblaze.log"
echo "# Done"

echo "# Delete old logs..."
find ./logs -name "*.log" -type f -mtime +14 -delete
echo "# Done"

For the copy jobs on my NAS I leave the backup task out, since the client devices are responsible for their own backups, and use only copy and prune in the script to move the files to Backblaze and clean up the storage :slight_smile:
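A trimmed sketch of that NAS-side variant (same storage names as above, just without the backup step):

#!/usr/bin/env bash
set -e
set -o pipefail

cd "$(dirname "$0")"
BIN=../bin/duplicacy
DATETIME=$(date "+%Y-%m-%d-%H:%M:%S")
mkdir -p logs

# only copy and prune - the client devices do the actual backups
$BIN -log copy -from default -to backblaze -threads 20 | tee "logs/$DATETIME-copy-backblaze.log"
$BIN -log prune -all -keep 0:360 -keep 30:90 -keep 7:14 -keep 1:7 | tee "logs/$DATETIME-prune.log"
$BIN -log prune -storage backblaze -all -keep 0:360 -keep 30:90 -keep 7:14 -keep 1:7 | tee "logs/$DATETIME-prune-backblaze.log"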


Did this question get answered?

It is not addressed in this How-to either: Back up to multiple storages


For what it is worth… I do not think it is a good idea to back up the backup. I just choose to have multiple destinations.

Also for what it is worth… that was Crashplan’s position too.

I don’t think anyone has suggested backing up the backup [storage]… only copying the backup, which is an entirely different proposition and uniquely viable with Duplicacy - not something you can easily do with CrashPlan without stopping services (because its backup storage is basically a database).
