Feature Suggestion: Keep a specific number of backups

As the title says: if I specify a number of revisions to keep, prune should get rid of any older revisions.

--keep-revision 20 would keep only the last 20 revisions and delete the older ones (so creating backup 21 would wipe out revision 1).
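A minimal sketch of what the proposed flag might do (the function name and the in-memory revision list are made up for illustration; this is not Duplicacy code):

```python
# Hypothetical sketch of the proposed --keep-revision N behavior:
# keep only the N most recent revisions and prune the rest.

def revisions_to_delete(revisions, keep):
    """Return the revisions that would be pruned, oldest first."""
    newest_first = sorted(revisions, reverse=True)
    return sorted(newest_first[keep:])

# With --keep-revision 20, creating revision 21 would drop revision 1:
print(revisions_to_delete(range(1, 22), 20))  # [1]
```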


Alternate name: -max-revisions.

Does this look more self-explanatory?

cc @Christoph since i’m bad with naming :stuck_out_tongue:.


That sounds like a good idea, where we specify --max-revisions when backing up, and --keep-revisions when pruning. I’d be happy with either one implemented, though!


Hi! I think it would only apply to Prune; I feel Backup should not overwrite or replace revisions.
(The next revision is just the next revision, regardless of how many there are; Prune takes care of pruning.)


Is this still planned? I want my last 3 daily backups kept. I currently use -keep 0:3, but if a day is skipped for whatever reason, it could delete backups I wouldn't want deleted. I would like it to keep the last 3 until a newer successful backup is made.
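To illustrate the skipped-day problem, here is a small Python sketch (dates and function names are hypothetical, not Duplicacy's implementation) comparing an age-based cutoff with a keep-last-N rule:

```python
from datetime import date

# Illustration only: why an age-based rule like "delete snapshots older
# than 3 days" can remove backups you still want when daily backups are
# skipped, while a count-based rule keeps the last N regardless of age.

def age_based_keep(snapshots, today, max_age_days):
    return [d for d in snapshots if (today - d).days <= max_age_days]

def count_based_keep(snapshots, keep):
    return sorted(snapshots)[-keep:]

today = date(2024, 1, 10)
# Daily backups, but Jan 8 and Jan 9 were skipped:
snapshots = [date(2024, 1, 5), date(2024, 1, 6), date(2024, 1, 7), today]

print(age_based_keep(snapshots, today, 3))   # only Jan 7 and Jan 10 survive
print(count_based_keep(snapshots, 3))        # last 3 snapshots, any age
```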

Bump! :slight_smile:

I was also searching for the ability to do this. From the look of it, this feature is still awaiting consideration?

In case it is helpful I did find that Duplicati has this functionality. When you create a backup job you can set the retention to keep a specific number of backup versions [1].

The retention can be set in 3 ways:

  • Unlimited:
    Backups will never be deleted. This is the safest option, but remote storage capacity will keep increasing.
  • Until they are older than:
    Backups older than a specified number of days, weeks, months or years will be deleted.
  • A specific number:
    The specified number of backup versions will be kept; all older backups will be deleted.

[1] Using the Graphical User Interface - Duplicati 2 User's Manual


What is the realistic use case for this feature, where the number of revisions matters at all?

Well, I would say it’s not the max revisions that count, but the max storage quota.

The use case would be a target destination with a limited storage quota.

Say you have 2 TB of reserved storage; the backup operation would error out if it exceeds the quota.
So it would be great if we could instruct duplicacy to keep a maximum amount of data and prune as needed.
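A rough sketch of what such a quota-driven prune could look like (purely illustrative; note that with deduplicated chunks, deleting a revision rarely frees its full apparent size):

```python
# Hypothetical sketch of a storage-quota retention rule: prune the
# oldest revisions until total size fits under the quota, never touching
# the newest revision. Sizes are illustrative, in GB. Real deduplicated
# storage would free only the chunks unique to each deleted revision.

def prune_to_quota(revision_sizes, quota):
    """revision_sizes: {revision_number: size}. Returns revisions to
    delete, oldest first."""
    revisions = sorted(revision_sizes)
    total = sum(revision_sizes.values())
    deleted = []
    for rev in revisions[:-1]:          # always keep the newest revision
        if total <= quota:
            break
        total -= revision_sizes[rev]
        deleted.append(rev)
    return deleted

sizes = {1: 900, 2: 400, 3: 500, 4: 300}   # 2100 GB total
print(prune_to_quota(sizes, 2000))         # [1] -- frees 900 GB
```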

In this scenario, you can’t rely on prune alone, because it’s likely that a pruned revision could be nearly empty and free little space.


Then instead of asking for an unrelated feature that does not address the actual issue, we should ask for support to limit storage usage instead. Or, better yet, to actually limit cost – because that’s what people ultimately care about. (For example, ArqBackup already allows you to cap the monthly cost of storage on AWS.)

This is, however, likely very counterproductive: the backup size consists of the initial data pile (the large piece) + incremental changes over time (the small piece). Most people add a lot of data (to the big pile) and have very little data turnover (which would have gone to the small pile). Limiting version history therefore only limits that already-small piece, and does nothing (nor should it) to the large one. Instead, increase your quota with your storage provider. Storage is very cheap; it makes no sense to delete data and risk losing important versions to save a couple of cents a month.


Hi saspus,

I’m well aware that it would be a different FR and to be clear I’m not asking for anything myself. :innocent:

I was trying to expand on a realistic use case for a revision limit.
Revisions mean nothing and storage/data is king.
So yes, this would have to be a new feature: backup data retention quota

And although I agree that revisions generally would not add much storage, there are situations where they would, e.g. when backing up compressed or encrypted data.

Lastly, regarding cost: of course storage is cheap, so when someone requests a storage quota feature we can assume there are other, more pressing factors. For example, for compliance reasons one may not be able to use external storage.

If we limit it at the storage layer, duplicacy would simply error out.
Due to encryption, only duplicacy can maintain both backup integrity and a storage limit.

This makes sense – limiting storage/cost is what we’re really after.

The reason limiting the number of revisions worked for my scenario is due to the specific knowledge I have about my data and how it changes over time. As you’ve stated, in the general case it won’t work that well.

I mean, we can limit the number of backups already with --keep.

Say you’re doing daily backups: you can limit them in number by using --keep 1:1 --keep 0:20, which will thin them out to daily snapshots and delete them after 20 days (which, with strictly daily backups, amounts to 20 snapshots).

Are you referring to the --keep flag of the prune command? If I am not mistaken, prune works by looking at revision age.

This feature request was asking about keeping a specific number of revisions regardless of their age.
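The distinction can be seen in a small sketch of the documented -keep 0:m behavior (illustrative Python, not Duplicacy’s actual prune code): the cutoff is each snapshot’s age in days, not a count of snapshots.

```python
from datetime import date

# Illustrative sketch of the age-based -keep 0:m rule: delete every
# snapshot older than m days, keep the rest. How many snapshots exist
# is irrelevant -- only their age matters.

def keep_0_m(snapshots, today, m):
    return [d for d in snapshots if (today - d).days <= m]

today = date(2024, 3, 1)
snapshots = [date(2024, 1, 15), date(2024, 2, 20), date(2024, 2, 28)]

# -keep 0:20 keeps only the two snapshots from the last 20 days; the
# Jan 15 snapshot is deleted no matter how few snapshots remain.
print(keep_0_m(snapshots, today, 20))
```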