Restore question with multiple storages

I see a number of topics about restoring info to a different PC/location. This is a question about restoring FROM a different storage/location…

Hypothetically:
If I were to use duplicacy to back up multiple PCs (and servers) to one onsite storage location (“default”)…

Then I were to take consistent snapshots of that onsite storage for external, redundant backups.

If the default storage location for duplicacy no longer exists, future backups obviously fail. If I need to restore from the remote storage (and that onsite storage is gone – unreachable), can I simply “add” the remote storage on each client and then restore from there? Is the default storage required to be available when you “add” another storage? Presumably I’d use some of the flags to make it identical (because it would logically be a copy of the original storage, with identical chunk naming, encryption, etc.)?

Just trying to see if this strategy will work in theory. :slight_smile:

Thanks again…

The first point to evaluate is how this remote storage was created. It can be encrypted or not, a copy of the local storage, or configured (password, etc.) completely differently from the local one.

However, if I understand your idea correctly, this remote storage was not created/configured/maintained with Duplicacy, but with another tool (Rclone, rsync, etc.).

You’ll just have to configure/edit your preferences file with the information of the remote storage (URL, etc.). It’s not done with the add command.

I would do this:

  • create your local storage (init command)
  • add your remote storage with the add command and the -copy and -bit-identical options
  • and use Duplicacy itself to copy the local storage to the remote one (command sketch below).
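
A minimal sketch of those three steps, run in each repository (client); the storage names, snapshot id, path and bucket below are made up, so adjust them to your setup:

```
duplicacy init my-pc /mnt/onsite-storage                        # local "default" storage
duplicacy add -copy default -bit-identical remote my-pc b2://my-bucket
duplicacy copy -from default -to remote                         # replicate snapshots offsite
```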

The first two points will leave the settings of the storages ready if you need to do a restore (even of a single file) from any of them.

Have you seen this topic? :point_down:

Yes, I have read that recommendation, and I appreciate that duplicacy has a facility to do it.

What if it were seriously difficult to install new software on the local storage (server)? The instructions I saw, instead of having to run an add and copy from every duplicacy client, were to make a pseudo repository on the local, consolidated store and use duplicacy to copy from there to the cloud. But again, what if we’d have great difficulty getting new packages installed on that server? (Not to mention this feels like a kludge for an otherwise exceptional utility.)
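
For reference, that pseudo-repository workaround amounts to roughly the following, run on (or near) the storage server. Every name, path and bucket here is hypothetical, and you would add -e / passwords as your storages require:

```
# a throwaway repository whose only job is to drive the copy
mkdir /srv/copy-runner && cd /srv/copy-runner
duplicacy init copy-runner /mnt/onsite-storage
duplicacy add -copy default -bit-identical remote copy-runner b2://my-bucket
duplicacy copy -from default -to remote
```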

What if other synchronization utilities are faster or already have features designed and tested for thorough remote sync (and yes, like rclone)? Why not use duplicacy for the backup and deduplication features it is good at (and somewhat unique in), and then use rsync/rclone for the features they are good at?

I would really like to understand (seriously, so I can more thoroughly understand how duplicacy works) why an identical copy - made by anything, even scp - does not suffice and meet all needs for a restore using duplicacy.

For the purposes of simply restoring files from backup, you should be able to use add.

Or you can modify the preferences file to point to the alternative storage, but I’d personally use add, because you can write-protect this auxiliary storage with the set command and keep the details of your original storage for when you bring it back online. Just make sure to specify the same repository id, so the restore command can pick out the proper collection of files.
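
Something along these lines; the storage name, snapshot id, bucket and revision number are hypothetical, and -no-backup is one way to do the write-protection mentioned above. The snapshot id must match the one the client originally backed up under:

```
duplicacy add remote my-pc b2://my-bucket      # same snapshot id as the original backups
duplicacy set -storage remote -no-backup       # optional: forbid backups to this copy
duplicacy restore -storage remote -r 42        # restore revision 42 from the remote copy
```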

Doesn’t matter how that storage was initialised; the client just needs access to it, plus the storage password. Also remember, you can have multiple storages that are or aren’t copy-compatible (so long as you’re not doing copy between them :slight_smile: ). If they were rsync’d or rcloned, doesn’t matter either.

Note, from the top of the add command details:

If the add-ed storage has already been initialised before, then the command line options will be ignored and existing options are used.

All add does in this circumstance is add a second storage entry to the preferences file (take a look). Nothing on the storage(s) is altered yet.
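
For illustration, after an add the preferences file might contain two entries roughly like the ones below (values are made up, and the exact set of fields depends on your Duplicacy version):

```json
[
    {
        "name": "default",
        "id": "my-pc",
        "storage": "/mnt/onsite-storage",
        "encrypted": true,
        "no_backup": false,
        "no_restore": false,
        "no_save_password": false,
        "keys": null
    },
    {
        "name": "remote",
        "id": "my-pc",
        "storage": "b2://my-bucket",
        "encrypted": true,
        "no_backup": false,
        "no_restore": false,
        "no_save_password": false,
        "keys": null
    }
]
```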

Basically, using Duplicacy to copy from local to remote storage:

  • You can choose which snapshot id to copy;
  • You can choose which tag to copy; (see next post)
  • You can choose which revision to copy;

(You don’t have to push your entire backup to the cloud, only the pieces you want. It can save you money, or just allow you to keep a different frequency of updates offsite)

  • You don’t need to scan/sync your entire local storage to identify the files (chunks) that should be copied to the cloud (see the sketch below).
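
For example, picking what goes offsite might look like this; the storage names, snapshot id and revision number are hypothetical:

```
duplicacy copy -from default -to remote -id my-pc -r 120   # one snapshot id, one revision
duplicacy copy -from default -to remote -id my-pc          # all revisions of that id
```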

And another point: by using a second tool (Rclone, e.g.) to manipulate files generated by a first tool (Duplicacy), you may be introducing failure points.

I’m also an Rclone user, but I use it and Duplicacy on different sets of files.

You can see an interesting discussion here (not exactly the same subject, as it compares backup with copy, but has some interesting points):

Absolutely agreed. I also use Duplicacy to copy my entire storage to the cloud but only because I’m too lazy to script my way around this:

  • You can choose which Tag to copy;

…which isn’t actually true, yet anyway. :slight_smile:

Yup, the copy command doesn’t support tags for some reason.

Been meaning to have a go at creating a pull request, but I haven’t fully navigated the code as yet and I’m a total newbie to golang. I also wanted to make a PR for a -r last alias/whatever (which might serve as a good alternative), but alas, time and proficiency are lacking for me right now. And I really don’t want to bug Gilbert because he has his hands full with the new GUI, which I’m really looking forward to.


I didn’t know that :thinking:; I never had to use it. I use separate snapshot ids to set apart the backups that don’t need to be uploaded to the cloud.

This would really be an interesting option.

Me too! :yum:

I am learning a lot here (one of the main reasons for a forum)!

The additional options made available with the copy command are interesting…and powerful. But they’re not what I (personally) am trying to accomplish. Just make an identical copy of the storage/target in the cloud in case the local storage is rendered inaccessible.

The copy command can still do that, but the added complexity seems like overkill. For example, no one has mentioned that a periodic prune will also have to be run against the cloud storage. Actually, it has to be run twice to reclaim the space (space that is paid for, of course). (The -exclusive flag could be used if you run the second repository directly from the local storage, but that again makes things more complex and manually coordinated across clients…)
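
To make that concrete, with a cloud copy you would schedule something like the lines below against both storages (the retention policy and names are hypothetical). The first run only turns unreferenced chunks into fossils; a later run actually deletes them, unless -exclusive is used:

```
duplicacy prune -keep 0:180 -a                     # local "default" storage
duplicacy prune -keep 0:180 -a -storage remote     # and again for the cloud copy
```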

Or I could run “rclone sync source cloudTarget”. Handles creates, updates, deletes. Makes the cloud storage identical to the local storage. KISS principle.
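
That is, something as simple as the following, assuming an rclone remote named b2 is already configured (the paths here are made up):

```
rclone sync /mnt/onsite-storage b2:my-bucket/duplicacy   # mirror the storage directory as-is
```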

Also, with B2 specifically, duplicacy cannot use Application Keys.

Appreciate all the feedback. This is valuable info for me, and I hope for others in the future.

Another important reason why copy is preferred over rsync/rclone is that it allows different encryption settings between two storages. So you can leave the local storage unencrypted but have your cloud storage encrypted. Or encrypt them with different passwords.
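
A sketch of that setup, with hypothetical names and URLs: the local storage is created unencrypted, and the cloud storage is made copy-compatible but encrypted with its own password (omitting -bit-identical, so chunks are re-encrypted during the copy):

```
duplicacy init my-pc /mnt/onsite-storage          # local storage, unencrypted
duplicacy add -e -copy default remote my-pc b2://my-bucket
duplicacy copy -from default -to remote           # chunks are re-encrypted for the remote
```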
