Configuring multiple storage backends in Web GUI

Hi,

I am trying to associate a second storage backend with an existing backup in the Web GUI (1.2.1). I have one backup configured going to one destination (SFTP), and I have configured a second storage backend (local mounted drive). How do I associate this second backend with the existing backup? I went to the Backups section in the UI but couldn’t see any way to associate the second backend with the existing backup. I created a new backup, selected the first storage (SFTP) and selected the existing Backup ID, then changed the storage to the second storage (local mounted drive). However, it seems that I still have to specify which directory to back up, which means I would actually have two separate backups, with separate includes/excludes? I want to use the same config from the existing backup; I don’t want to have to maintain two lists of includes/excludes.

Am I doing this correctly? Am I misunderstanding something? I tried searching the forums but couldn’t find anything that explained how to do this in the Web GUI.

Thank you very much!

A copy job is exactly what you need. When creating the second storage, check the Copy-compatible option to make it compatible with the first storage.

Then create a copy job to copy from the first storage to the second. Ideally, the copy job should be in the same schedule as the backup job that backs up to the first storage, but the copy job can also have its own schedule and run at any time.
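
For reference, the CLI equivalent looks roughly like this - a minimal sketch where the storage names (default for the existing SFTP storage, local for the second one) and the paths are placeholders, not anything from your actual setup:

```
# Add the second storage as copy-compatible with the first
# (placeholder names and path):
duplicacy add -e -copy default local my-backup-id /mnt/backup-drive

# Copy all snapshots from the first storage to the second:
duplicacy copy -from default -to local
```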

You can read this guide for more information. It mostly talks about the CLI, but it’s helpful for understanding how Duplicacy works in general.

Just beware that the Web GUI doesn’t add the -bit-identical parameter. I recently became a victim of that: my local backup HDD had corrupted blocks and I had some missing chunks, and my external Duplicacy backup copy (created via the Web GUI) had different chunk names - so I couldn’t just download the missing chunks from the remote copy… and I had to nuke the local backup 🙁

So I had to delete the local backup, download the remote copy and use it as the new local backup. Then, via the CLI, I created a new remote storage with the -bit-identical parameter and pointed the copy backup at it… all because of the missing -bit-identical parameter.
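
For anyone following along, that CLI step looks roughly like this - a sketch with made-up storage names and URL:

```
# -bit-identical makes the new storage reuse the source storage's
# chunk seed, so chunk file names match 1:1 between the two storages
# (placeholder names/URL below):
duplicacy add -e -copy default -bit-identical offsite my-backup-id sftp://user@host/backups
```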

You don’t need to use -bit-identical when using Duplicacy to copy between copy-compatible storages - only when doing a raw copy, which you normally don’t need to do (unless your config file gets corrupted or goes missing).

What you could have done in your situation is to copy the external storage back to the local backup - using duplicacy copy - which would replace any missing chunks.
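Something along these lines (a sketch, reusing the placeholder storage names from above):

```
# Copy from the healthy remote storage back into the damaged local one;
# duplicacy copy only transfers revisions/chunks missing at the destination.
duplicacy copy -from offsite -to default

# Then verify that every chunk referenced by the snapshots exists again:
duplicacy check -storage default
```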

Thanks, @Droolio!
Is reverse copy via Duplicacy really a thing? In fact, this is the first time I’ve heard about it, and it sounds like magic 🙂

The first thing I tried was to manually search for the file names of those missing chunks in the remote backup copy - but none were found, because they have different names…

I still have my backup with missing chunks around, so I’ll try creating a copy job and see what happens 🙃

Absolutely 🙂

Although one thing you have to take into account is that it’s not a way to mirror or sync backups - so if you have some mismatches (holes) in the snapshot revisions in your storages - due to different retention periods or pruning on different days - all it does is copy the missing ones over…

Depending on how your storage got corrupted, you may have to manually rename some numbered snapshot revisions and/or chunks out of the way, so they can be re-copied.
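
For example, roughly like this - purely a sketch, with a made-up storage path and revision number, assuming the standard snapshots/<id>/<revision> layout:

```
# Move a suspect revision file out of the storage so duplicacy copy
# no longer sees it as present and will re-copy it from the source:
mkdir -p /tmp/quarantine
mv /mnt/local-storage/snapshots/my-backup-id/42 /tmp/quarantine/
```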

But another thing you can do is copy the config file from your local storage to a new, temporary storage, and use the duplicacy copy command to copy a subset of revisions back from B2 to that temp storage.

The config file will have the same chunk hashes etc., so when the chunks are copied back, they get re-encrypted and their file names change back to the original. Then you can manually copy the bad chunks and restore them to your local storage.
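
Roughly, the whole procedure would look like this - a sketch with made-up names and paths, assuming a storage named b2 is already added and was originally made copy-compatible with the local one:

```
# 1. Create an empty temp storage seeded with the local storage's
#    config file, so it shares the same keys/chunk seed:
mkdir -p /tmp/dup-temp
cp /mnt/local-storage/config /tmp/dup-temp/config

# 2. Register it and copy a subset of revisions back from B2:
duplicacy add temp my-backup-id /tmp/dup-temp
duplicacy copy -from b2 -to temp -r 100-105

# 3. The chunks in /tmp/dup-temp now have their original names, so the
#    previously missing ones can be copied into the local storage by hand.
```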

Thanks 🙂

I did a reverse copy for test purposes.
The original state was: a backup with 398 revisions and missing chunks in every one of them.

The reverse copy job copied 21 chunks, and a check after that still reports missing chunks, but only in 73 revisions; the other 325 previously bad revisions are good 😀 so that’s pretty good 👍
(even though I didn’t use prune, so I’m not sure why those random 73 revisions are still bad)

The only thing I don’t get is why this “feature” isn’t documented somewhere on this forum - maybe in Copy command details or in Fix missing chunks.

Ah, I get it now! I was looking at it the wrong way. I added a copy job to the original schedule and it’s working like a charm. Thank you @gchen!

One follow-on question. After adding the copy job, it’s now set up to back up to the off-site storage first (then prune and check) and then copy from the off-site storage to the on-site storage (accessible via a mounted drive). Now that the copy has completed once, can I switch these? Can I change the schedule so it backs up to the on-site storage first and then copies to the off-site storage? (Does this even matter?)

Yes, I think you should back up to the on-site storage first and then copy to the off-site storage, assuming the on-site storage is faster than the off-site storage.
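
In CLI terms, the new order amounts to something like this (a sketch, reusing the placeholder storage names from earlier):

```
# Back up to the fast on-site storage first...
duplicacy backup -storage local

# ...then replicate the new snapshot(s) to the off-site storage:
duplicacy copy -from local -to default
```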

What’s the reason the Web GUI doesn’t use -bit-identical? Is there any disadvantage to always using -bit-identical when making storages copy-compatible?

You will find an interesting point of view in this post:
