I understand that this lifecycle setup can be a problem in very specific cases, so everyone should assess whether their own use case (number of repositories backed up to the same storage, prune policy, etc.) is affected by it.
Specific case example:
@jt70471 that is a very rare case but it could happen to you too. Suppose that an old backup to be deleted by the prune command contains the only copy of a file. But before the prune command renames the chunks that compose the file, a new backup from another repository (still unknown to the prune command) happens to include the same file but doesn’t upload all the chunks since they are already in the storage. The prune command goes ahead to rename all the chunks (or hide the chunks using B2’s hide markers) without realizing that they are needed by another backup (which is still in progress at this time so the final snapshot file has not been uploaded).
(by gchen)
Ref: Restore is very slow · Issue #362 · gilbertchen/duplicacy · GitHub
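The race gchen describes above can be sketched as a toy interleaving: a concurrent backup's deduplication check sees a chunk, prune then fossilizes (renames/hides) it, and the backup's snapshot file is uploaded afterwards, referencing a chunk that is no longer visible. All names here are illustrative; this is a simulation of the interleaving, not Duplicacy's actual code.

```python
# Storage state: one chunk, referenced only by the old snapshot that
# prune is about to delete.
chunks = {"c1"}                      # live chunks
fossils = set()                      # chunks hidden/renamed by prune
snapshots = {"snap-old": {"c1"}}     # snapshot id -> chunk hashes

# --- concurrent backup, phase 1: deduplication check ---
# The new backup sees c1 already in storage, so it skips the upload.
new_backup_needs = {"c1"}
to_upload = {c for c in new_backup_needs if c not in chunks}
assert to_upload == set()            # nothing uploaded: dedup kicked in

# --- prune: delete snap-old, fossilize its now-unreferenced chunks ---
del snapshots["snap-old"]
referenced = set().union(*snapshots.values()) if snapshots else set()
for c in list(chunks):
    if c not in referenced:
        chunks.remove(c)
        fossils.add(c)               # the "rename" / B2 hide marker step

# --- concurrent backup, phase 2: snapshot file finally uploaded ---
# Prune never saw this snapshot, so it never knew c1 was still needed.
snapshots["snap-new"] = new_backup_needs

# The new snapshot now references a chunk that exists only as a fossil.
missing = snapshots["snap-new"] - chunks
print(missing)   # -> {'c1'}
```

In the real two-step fossil-collection algorithm a later prune can resurrect the fossil once it sees the new snapshot, but the window above is exactly why the snapshot completed during fossilization is at risk.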
My practice is to use simple, granular settings, which isolates possible points of failure (and in practice pretty much eliminates them). My view is that backups should be reliable; they don’t need to be complex.
Simple configuration example: I don’t use prune. It just doesn’t make sense in my use case, because the “savings” in storage cost don’t justify the time I would spend configuring it.
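Concretely, a backup-only setup like this is just a scheduled backup with no prune job at all. A crontab fragment might look like the following, assuming the CLI is on the PATH and the initialized repository lives at /data/repo (both paths are illustrative):

```shell
# Nightly backup at 02:00; no prune job is ever scheduled,
# so every snapshot is simply retained.
0 2 * * * cd /data/repo && duplicacy backup -stats >> /var/log/duplicacy.log 2>&1
```

With no prune running, the race above cannot occur, at the cost of storage growing with every snapshot.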
I understand your wariness about adopting a “not recommended” configuration… but look, the advice is not unanimous: