Help: lost the config file in the storage

I was looking at my storage (B2) to fix a missing-chunks problem and noticed there were two config files:

b2://duplicacy/config
b2://duplicacy/config/config

Due to a slip of the fingers, I deleted b2://duplicacy/config instead of b2://duplicacy/config/config.

I don’t know where the other one came from. How do I restore the config file? I know the storage password, of course, and all the local caches and configs (and in fact everything else in the storage: chunks, snapshots, etc.) are intact.

The duplicacy CLI is, of course, now complaining that the storage is not initialized.

  1. You can copy the other config file and see if it just works.
  2. Check that you actually deleted it, as opposed to hiding it (B2 buckets keep version history).
  3. Initialize the storage again with the same parameters as the first time (password, iteration count, etc.). Then it should just work.

The other config seems to belong to some other old storage. I definitely deleted the file, using the B2 CLI’s delete-file-version command (and it was the only version); hiding requires a separate command.

I will try initializing the storage again, but I read here that chunks are encrypted with a randomly generated chunk key and AES-GCM. How would re-initializing the storage decrypt the existing chunks?

1 Like

Oh, you are right, this won’t work…
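To see why the same password doesn’t help, here is a toy sketch of the key layering (a simplification for illustration, not Duplicacy’s actual file format or key names): the config file holds randomly generated keys, and per-chunk encryption keys are derived from one of those random keys, not from the password itself.

```python
import hashlib
import hmac
import os

def make_config():
    # In Duplicacy the config file stores several randomly generated
    # keys (the file itself is encrypted with a key derived from the
    # storage password). We model just one "chunk key" here.
    return {"chunk_key": os.urandom(32)}

def chunk_encryption_key(config, chunk_hash):
    # Per-chunk keys are derived from the random chunk key, so they
    # cannot be reproduced without the original config file.
    return hmac.new(config["chunk_key"], chunk_hash, hashlib.sha256).digest()

original = make_config()        # the config uploaded at init time
reinitialized = make_config()   # a config created by a second init

chunk_hash = hashlib.sha256(b"some chunk").digest()
k1 = chunk_encryption_key(original, chunk_hash)
k2 = chunk_encryption_key(reinitialized, chunk_hash)
assert k1 != k2  # same password, same chunk, different keys
```

Re-running init with identical parameters reproduces the password-derived layer, but the inner chunk key is freshly random each time, so the keys protecting the existing chunks are gone with the old config.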

@gchen is there a way out here? i can’t seem to accept the fact that loosing a single config file can cost you your entire backup? if that is the case, duplicacy should probably make multiple copies of it or ask the user to keep it backed up elsewhere too.

Losing any file can potentially nuke your entire backup, not just the config file. This is the nature of deduplication. Therefore, it makes no sense to add extra redundancy just to protect this particular file. The only way to avoid this is to set up multiple storages using the copy command.
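The deduplication point above can be sketched in a few lines (a toy model with fixed-size chunks; real Duplicacy uses variable-size, content-defined chunking, but the failure mode is the same): when two files share a chunk, deleting that one stored chunk breaks the restore of both.

```python
import hashlib

def store(data, chunk_store, size=4):
    # Toy fixed-size chunking: split data, store each chunk once
    # under its hash (deduplication), return the list of chunk IDs.
    ids = []
    for i in range(0, len(data), size):
        chunk = data[i:i + size]
        cid = hashlib.sha256(chunk).hexdigest()
        chunk_store[cid] = chunk
        ids.append(cid)
    return ids

chunks = {}
file_a = store(b"sharedAAAA", chunks)
file_b = store(b"sharedBBBB", chunks)

shared = set(file_a) & set(file_b)   # the b"shar" chunk is stored once
del chunks[next(iter(shared))]       # lose that single shared chunk

restorable_a = all(cid in chunks for cid in file_a)
restorable_b = all(cid in chunks for cid in file_b)
assert not restorable_a and not restorable_b  # both files are now gone
```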

It looks like B2 doesn’t allow you to recover files recently deleted. I think they should have.

2 Likes

While I agree with the sentiment, this particular file is especially important: losing it makes the entire storage inaccessible. A non-bit-identical copy elsewhere won’t help you recover the storage either, which is very painful if you then have to re-upload TBs worth of chunks to the cloud.

Losing random chunks won’t necessarily nuke the entire backup, and if that does happen, it’s easy to recover from a storage copy, again without re-uploading TBs of chunks.

Also, this has been suggested before, but there are certain moments when that particular file can get nuked.

Does the config file have Erasure Coding, as chunks (and presumably snapshot files) do? If not, I don’t think it would be unreasonable for Duplicacy to simply keep duplicate copies of this tiny file, say, in the roots of the snapshots and chunks directories. (SnapRAID does something similar with its .content files, which are typically stored on as many drives as possible.)
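The duplicate-copy idea could look something like this sketch (the extra paths and the in-memory “bucket” are hypothetical, chosen only for illustration; Duplicacy does not do this today): write the same blob to several locations with a checksum, and on read, take whichever surviving copy still verifies.

```python
import hashlib

PATHS = ("config", "snapshots/config", "chunks/config")

def put_config(storage, config_bytes):
    # Hypothetical layout: one copy at the usual path plus duplicates
    # under the snapshots/ and chunks/ roots, each with a checksum.
    digest = hashlib.sha256(config_bytes).hexdigest()
    for path in PATHS:
        storage[path] = (config_bytes, digest)

def get_config(storage):
    # Return whichever copy survives and still matches its checksum.
    for path in PATHS:
        if path in storage:
            data, digest = storage[path]
            if hashlib.sha256(data).hexdigest() == digest:
                return data
    raise FileNotFoundError("no intact config copy")

bucket = {}
put_config(bucket, b"encrypted config blob")
del bucket["config"]   # the accidental delete from this thread
assert get_config(bucket) == b"encrypted config blob"
```

A corrupted duplicate is skipped too, since its checksum no longer matches, so recovery needs only one intact copy out of the three.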

2 Likes

Why do you say so? Losing chunks or even snapshots has a much smaller blast radius: as far as I understand, you only lose the files whose deduplicated parts were in those variable-sized, packed-and-split chunks. This config file being a single point of failure is definitely not good news. The design is not defensive against minor user error, let alone storage corruption or a malicious actor.

My thoughts exactly.

100% agree. Since it is already encrypted with the storage password, keeping copies of it does not decrease security. And for an initialized storage, isn’t the config file basically immutable? The cost implications are also negligible. At the very least, it wouldn’t hurt to cache a local copy of the config file in every repository, then checksum it and complain/restore during the check command.
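The local-cache idea above could be sketched like this (a minimal model with made-up function names, not Duplicacy’s actual cache layout): keep a copy of the encrypted config next to the repository, and during check, compare it against the remote copy, or offer it for restore if the remote copy is missing.

```python
import hashlib
import pathlib
import tempfile

def cache_config(cache_dir, remote_config):
    # Keep a local copy of the (already encrypted) config file;
    # since it is encrypted, caching it does not weaken security.
    (pathlib.Path(cache_dir) / "config").write_bytes(remote_config)

def check_config(cache_dir, remote_config):
    # During a check: compare the remote config against the cached
    # copy, and offer the cache for restore if the remote is gone.
    cached = (pathlib.Path(cache_dir) / "config").read_bytes()
    if remote_config is None:
        return ("restore-from-cache", cached)
    if hashlib.sha256(cached).digest() != hashlib.sha256(remote_config).digest():
        return ("mismatch", None)
    return ("ok", None)

with tempfile.TemporaryDirectory() as d:
    cache_config(d, b"config v1")
    assert check_config(d, b"config v1") == ("ok", None)
    assert check_config(d, None) == ("restore-from-cache", b"config v1")
```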

By way of precedent, all major filesystems do this. The equivalent in ext4 is the superblock. Quoting this article:

Since the Superblock contains important information, and losing it results in a complete failure of the file system (you cannot read the filesystem if you do not know the parameters I mentioned above), it has backups (or in other words there are redundant copies of it) in every Block Group.

In fact, btrfs seems to go one step further and keeps multiple copies of the metadata for each block group (plus checksums).

At the very least, there should be a prominent warning in the documentation of the init command telling users to keep a copy of this file locally or elsewhere, or to create a bit-identical, encrypted copy of the storage to hold a copy of the config file (although copying just the config file is much simpler). A user has already suggested making this part of the documentation: How secure is duplicacy? - #18 by towerbr

3 Likes