Can you run a backup job separately after copy?

Hi,

I would like some clarification on this scenario using the Duplicacy GUI. Let’s say I have an external hard drive as storage that already has one backup job set up to run daily, and then I add a new cloud storage with “copy” enabled from the external hard drive.

After this, I would run a daily “copy” command so that the offsite cloud storage stays an identical copy of the external hard drive. Up to here, everything looks good.
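
In CLI terms, that daily copy boils down to one command (just a sketch, assuming the external drive storage is named default and the cloud storage was added as cloud_storage):

duplicacy copy -from default -to cloud_storage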

Let’s say that for a while I won’t have access to the external hard drive. Could I disable the external storage backup job and set up a new backup job that backs up directly to the cloud storage?

Then, when I have access to the external hard drive again, could I run a copy command in the other direction to bring the new revisions from the offsite cloud storage back to the external hard drive, resume the normal backup job to the external hard drive, and disable the backup job that was going directly to the offsite cloud storage?
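
Roughly this direction, I mean (same assumed storage names as above):

duplicacy copy -from cloud_storage -to default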

Another scenario: let’s say I made a lot of changes to my repository, and all the new changes are already in the cloud storage, but I currently have very slow Internet, so downloading the changes from the offsite cloud storage to the external hard drive is impractical. I could run a backup job directly to the external hard drive, and when I then run the copy command the data won’t be duplicated, but I would have different revisions linked to different backup jobs. Is that right?

To avoid revision number conflicts you should use different repository ids when initializing the storages:

duplicacy init id1 external_drive_url
duplicacy add cloud_storage id2 cloud_storage_url

So backups to your external drive will have the id id1, and backups to your cloud storage will have the id id2. When you do a copy from the external drive to the cloud, the cloud storage will have both id1 and id2 (and vice versa). Duplicacy is able to deduplicate between backups with different ids, so you won’t need to upload a lot of data when you run a backup after a copy.
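
For example (a minimal sketch using the names from the init/add commands above, where the first storage gets the default name):

duplicacy copy -from default -to cloud_storage
duplicacy backup -storage cloud_storage

The copy brings the id1 revisions over to the cloud, and the backup then creates a new id2 revision there; chunks that the copy already uploaded won’t be uploaded again.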

Is it possible to replicate this in the web GUI?

Or maybe do the setup through the command line and then edit the configuration via the JSON files?

What I currently did in the GUI is add both storages, choose the copy option on the cloud one, and add a backup job to the external drive.

If my logic is correct, I could add another backup job but with snapshot id “id2” (as in your example), and I think that would be equivalent.

Yes, in the web GUI you can just add another backup job with a different backup id, and with the cloud storage as the destination.

Thank you! So far I am liking the product 🙂

First of all, thanks for the explanation. I have been using it this way for a while. What confused me is that back-up-to-multiple-storages says a dummy repository is needed if I want to reduce the download cost incurred by the copy command when making a copy of an existing storage.

My solution:

duplicacy init -e -erasure-coding 8:3 -storage-name <storage name 1> <snapshot id 1> <storage url 1>
duplicacy add -e -erasure-coding 8:3 -copy <storage name 1> -bit-identical <storage name 2> <snapshot id 2> <storage url 2>
duplicacy -log backup -vss -vss-timeout 200 -threads 2 -stats -storage <storage name 1>
duplicacy -log backup -vss -vss-timeout 200 -threads 2 -stats -storage <storage name 2>
duplicacy -log copy -from <storage name 1> -to <storage name 2>

Solution from back-up-to-multiple-storages:

duplicacy init -e -erasure-coding 8:3 -storage-name <storage name 1> <snapshot id 1> <storage url 1>
duplicacy add -e -erasure-coding 8:3 -copy <storage name 1> -bit-identical <storage name 2> <snapshot id 2> <storage url 2>
duplicacy add -e -erasure-coding 8:3 -copy <storage name 1> -bit-identical <dummy storage name> <dummy snapshot id> <storage url 2>
duplicacy backup -storage <storage name 1>
duplicacy backup -storage <dummy storage name>
duplicacy copy -from <storage name 1> -to <storage name 2>

Could you please explain this to me a little bit? Thanks.