Why do you say so? Losing chunks or even snapshots has a much smaller blast radius. As far as I understand, you would only lose the files whose deduped parts live in those variable-sized packed-and-split chunks. The config file being a single point of failure is definitely not good news. This design decision is not defensive against even minor user error (let alone storage corruption or an actor with malicious intent).
My thoughts exactly.
100% agree. Since it is already encrypted with the storage password, keeping copies of it does not decrease security. Also, for an initialized storage, isn't the config file basically immutable? And the cost implications are negligible. At the very least, it wouldn't hurt to cache a local copy of the config file in every repository, checksum it, and complain or restore it during the check command.
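Something like this minimal sketch in Python (the file named config at the storage root matches where Duplicacy keeps it; the cache location under .duplicacy/ and the restore behavior are my assumptions, not anything the CLI does today):

```python
import hashlib
import shutil
from pathlib import Path

# Duplicacy keeps the encrypted config at the storage root;
# the cached copy under .duplicacy/ is a hypothetical location.
STORAGE_CONFIG = Path("/mnt/storage/config")
CACHED_CONFIG = Path(".duplicacy/config.cache")

def sha256(path: Path) -> str:
    """Checksum a file so the storage copy can be compared to the cache."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def check_config() -> None:
    """Sketch of a check-time verification: cache, compare, restore."""
    if not CACHED_CONFIG.exists():
        # First run: cache a local copy of the (already encrypted) config.
        CACHED_CONFIG.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(STORAGE_CONFIG, CACHED_CONFIG)
        return
    if not STORAGE_CONFIG.exists():
        # Config vanished from the storage: restore it from the cache.
        shutil.copy2(CACHED_CONFIG, STORAGE_CONFIG)
        print("config was missing; restored from local cache")
    elif sha256(STORAGE_CONFIG) != sha256(CACHED_CONFIG):
        # The config should be immutable after init, so any difference
        # means corruption on one side; complain and let the user decide.
        print("config checksum mismatch; storage copy may be corrupted")

if __name__ == "__main__":
    check_config()
```

Since the file is immutable after init, a plain byte-for-byte comparison is all the check needs; no versioning logic required.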
By way of analogy, all major filesystems guard against exactly this failure mode. The ext4 equivalent of the config file is the "superblock". Quoting this article:
Since the Superblock contains important information, and losing it results in a complete failure of the file system (you cannot read the filesystem if you do not know the parameters I mentioned above), it has backups (or in other words there are redundant copies of it) in every Block Group.
In fact, btrfs seems to go one step further and keeps multiple copies of the metadata for each block group (plus checksums).
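For the curious, here is a small sketch (assuming a Linux box with e2fsprogs installed; the device path is a placeholder) of how those redundant ext4 superblocks can actually be located and then used for recovery:

```python
import re
import subprocess

DEVICE = "/dev/sdb1"  # placeholder device; adjust for your system

def backup_superblocks(device: str) -> list[int]:
    """Parse dumpe2fs output for the locations of the backup superblocks."""
    out = subprocess.run(
        ["dumpe2fs", device], capture_output=True, text=True, check=True
    ).stdout
    return [int(n) for n in re.findall(r"Backup superblock at (\d+)", out)]

if __name__ == "__main__":
    blocks = backup_superblocks(DEVICE)
    print(f"{len(blocks)} backup superblocks at: {blocks}")
    # If the primary superblock is damaged, any backup can seed a repair:
    #   e2fsck -b <block> /dev/sdb1
```

The point being: the filesystem world treats "the one block you cannot read the rest without" as something to replicate everywhere, and even ships the tooling to recover from a lost primary.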
At the very least, there should be a huge banner note in the documentation of the init command telling users to keep a copy of this file locally or elsewhere (or to create a bit-identical, encrypted copy of the storage, although copying just the config file is much simpler). A user has already suggested making this part of the documentation: How secure is duplicacy? - #18 by towerbr