Move chunks into existing storage

While I was testing duplicacy, I backed up only part of what is now my repository (and for that test I created a new folder in the backend storage). It occurs to me that I could probably move all the chunks from the test folder to the now-active storage to save some upload time. Both storages were initialized with -chunk-size 1M -e.

Are there any pitfalls to watch out for when I do this?

I’m afraid that won’t work. For an encrypted storage, encryption keys are generated randomly, so unless you used the -copy option to copy the config file, chunks are incompatible between these two storages.

Okay, at least I now understand the copy option…

I did a backup to a local storage and now I’m trying to copy it to a remote storage. I did an init for the first and an add for the second, both with the same chunk parameters, encrypted, but with different passwords.

When I try to copy I get the message:

ERROR CONFIG_INCOMPATIABLE Two storages are not compatiable for the copy operation

The passwords have to be the same?

When you add the storage using the add command, you’ll need to specify the -copy option, otherwise the two storages won’t be compatible with each other (i.e., they will have different seeds for the chunk splitters such that the same files will be split into different sets of chunks).
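For example, assuming the local storage is the default storage and the cloud storage should be named "remote" (the snapshot ID and storage URL below are just placeholders), something like this should do it:

# Make the new "remote" storage copy-compatible with the existing "default" storage
duplicacy add -e -copy default remote my-backups b2://my-bucket

# Copy snapshots from the local storage to the remote one
duplicacy copy -from default -to remote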


Sure, my mistake, RTFM …

One more parameter to my script …

Thanks!

An additional question: if I use the -bit-identical option and use Rclone to synchronize the local storage with the cloud storage, can I make backups to either of the two, and will everything be OK when they sync? (Of course, I always sync with Rclone after a set of backups.)

Ex: I make some backups to the local storage, then I run Rclone, and the remote storage is updated. Then I make some backups to the remote storage, run Rclone, and the local storage is updated.

In this case, since I won’t be selecting specific snapshots, revisions, etc., are there any advantages or disadvantages to using duplicacy copy instead of rclone sync?

BTW, why is -copy not the default for the add command? Seems like that would make sense.


Another question (sorry, I’m still evaluating the possibilities): suppose I make 3 backups (revisions) to the local storage and set up a script that always copies the last backup to the cloud storage (it would copy only revision “3”).

If all goes well, fine. But if the copy is interrupted, the revision “3” will be “broken” in the cloud storage.

If I then make a new backup to the local storage (“4”) and run the “copy the last revision” script again, revision “4” will be copied to the remote storage.

Any problem with that? Will the prune command delete the partial “3” in the cloud storage in the future?

Maybe I can also use a tag in backup …

BTW, why is -copy not the default for the add command? Seems like that would make sense.

I think it’s because you might want to specify one or more additional storages with different patterns of chunks, etc.

BTW, why is -copy not the default for the add command? Seems like that would make sense.
I think it’s because you might want to specify one or more additional storages with different patterns of chunks, etc.

Well, that would work just fine with -copy as the default. As soon as you specify a chunk parameter, that will obviously override the -copy option.

if I use the -bit-identical option and use Rclone to synchronize the local storage with the cloud storage, can I make backups to either of the two, and will everything be OK when they sync? (Of course, I always sync with Rclone after a set of backups.)

Yes, when the -bit-identical option is used, the two storage directories will become identical and rclone sync should work as expected.
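For example, the round trip described above could look roughly like this (the storage names, snapshot ID, bucket, rclone remote, and paths are placeholders):

# Add the cloud storage as bit-identical (and copy-compatible) to the default local storage
duplicacy add -e -copy default -bit-identical remote my-backups b2://my-bucket

# Back up to the local storage, then mirror the storage directory to the cloud
duplicacy backup -storage default
rclone sync /path/to/local-storage b2remote:my-bucket

# Back up directly to the cloud storage, then mirror it back to the local directory
duplicacy backup -storage remote
rclone sync b2remote:my-bucket /path/to/local-storage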

But if the copy is interrupted, the revision “3” will be “broken” in the cloud storage.

If I then make a new backup to the local storage (“4”) and run the “copy the last revision” script again, revision “4” will be copied to the remote storage.

Any problem with that? Will the prune command delete the partial “3” in the cloud storage in the future?

If the copy command is interrupted, the snapshot file for the revision “3” will not be copied, so you’ll just end up with some unreferenced chunks in the cloud storage.
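A script that copies only the latest revision could look something like this (the revision number would have to be determined by the script; 3 and the storage names are just examples):

# Copy only revision 3 from the local storage to the cloud storage
duplicacy copy -r 3 -from default -to remote

# If that run was interrupted, rerunning the copy (for revision 3 or a later revision)
# should skip chunks that already exist at the destination, so the partial upload is not wasted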

why is -copy not the default for the add command?

The -copy option takes a storage to copy the config file from, so it can’t be a default option.


The -copy option takes a storage to copy the config file from, so it can’t be a default option.

I thought about that too, but thought it would be obvious that the default option would be -copy default…


I think the user should be made aware if the new storage is copy-compatible with the default one. Sometimes you may want to back up to multiple storages directly instead of using the copy command.
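For instance (assuming a second storage was added under the name "remote"):

# Back up the same repository to both storages independently
duplicacy backup -storage default
duplicacy backup -storage remote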

I’m not sure I understand the argument. Let’s assume a user adds a storage to an existing repository and is not aware that the new storage is copy-compatible with the default storage. So what? That user obviously won’t do a duplicacy copy ... but what’s the problem with that? They can just back up to multiple storages as they can now. Or am I missing something?

so you’ll just end up with some unreferenced chunks in the cloud storage.

And these chunks will only be removed if I run the prune command with the -exhaustive option, right?

And these chunks will only be removed if I run the prune command with the -exhaustive option, right?

Right, this is correct.
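For example (assuming the cloud storage was added under the name "remote"):

# Remove chunks that are not referenced by any snapshot in the cloud storage
duplicacy prune -exhaustive -storage remote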

@gchen, did you see my post above?

When you make the new storage copy-compatible with an old one, you copy not just the chunk size parameters but also others like the chunk seed and the hash key, which may lead to some security risks if you choose to encrypt the storage.

Even without this argument I would still think it is not a good idea to make the -copy option a default, as the add command shouldn’t do too much on the user’s behalf. Sure, the incompatibility error message is annoying, but at least you’ll be greeted by it early and it is not hard to fix. On the other hand, if you copy from the default storage by default without noticing it, there might be corner cases where you’ll be caught by surprise much later, and it won’t be easy to change the config.
