Automatic Deletion

Hi, I’ve already read quite a bit here and tried various things, but I just can’t get it to work.

First, my backup to Google Drive:
I’m running a prune with -keep 2:7 and a backup every week. However, the backup keeps growing and my storage eventually fills up, even though it should be pruning old backups regularly.

Second, I have a different backup that has also filled up its storage, so I deleted it under Schedule and Backup, and I’m now trying to remove its data with prune -delete-only -exhaustive (as a scheduled job), but nothing gets deleted.

Can anyone please help? I can’t just go in on the file level and delete the backup folder manually because everything is stored in chunks and I don’t know which ones belong to which backup.

Thanks in advance!

This means you are never telling duplicacy to delete very old snapshots: -keep 2:7 on its own thins your snapshots so that you only have one for every 2 days (for those more than a week old), but old snapshots will just keep piling up as you move into the future.

If you want to actually remove older snapshots, you need to add something like -keep 0:180. That will remove all snapshots older than 180 days (about 6 months).

(Also, if you only back up once per week, it doesn’t make much sense to run -keep 2:7: you’re telling duplicacy to keep one snapshot every 2 days, but you are only making one snapshot per week. You could add something like -keep 30:90, which keeps one snapshot per month for snapshots more than 3 months old.)
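
Putting those together: each retention rule needs its own -keep flag, listed from the largest age threshold down. A combined policy using the numbers above might look like this (just an illustration; adjust to your needs):

# delete everything older than 180 days, thin to one per 30 days after 90 days,
# and one per 2 days after 7 days
prune -a -keep 0:180 -keep 30:90 -keep 2:7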


-delete-only does not do what you think it does.

Is this backup to the same storage? Are you trying to delete all revisions of a specific snapshot ID?

If so, the best approach would be to delete the snapshot ID from under the snapshots folder on the storage, and then run prune -a -exhaustive to delete the orphaned chunks.
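
On a local disk storage, that could look like the following sketch (the path and the snapshot ID are placeholders; substitute your own):

# remove all revisions of the unwanted snapshot ID, e.g. "Ebooks"
rm -rf /path/to/storage/snapshots/Ebooks
# then delete the chunks that no longer belong to any snapshot
prune -a -exhaustive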

If not, describe exactly what you are trying to do.

Is this a Google shared drive by any chance? Don’t use shared drives with duplicacy.

@saspus @fronesis thank you for your responses!
I am confused to the max.
I just want some backups to keep only 2 versions, and others to keep only one.
I am on unRAID so I have shares and one of them is called DuplicacyShare.
When I go to Storage in the Duplicacy GUI, it says the size should be 473 GB.

But when I look in my unRAID, it is 5.83 TB.

I have already deleted the Ebooks/Kavita folder for that backup under snapshots:

 ls /mnt/user/backups/duplicacy/snapshots/
Nextcloud/  Paperless/  Vaultwarden/

and tried to remove all of its chunks under Schedule with these:

but nothing happens.
When I look at the log, it says the following:

Running prune command from /cache/localhost/all
Options: [-log prune -storage DuplicacyShare -keep 0:1]
2025-05-30 10:12:46.722 INFO STORAGE_SET Storage set to /backupZiel
2025-05-30 10:12:46.776 INFO RETENTION_POLICY Keep no snapshots older than 1 days
2025-05-30 10:12:47.105 INFO FOSSIL_COLLECT Fossil collection 1 found
2025-05-30 10:12:47.105 INFO FOSSIL_POSTPONE Fossils from collection 1 can't be deleted because deletion criteria aren't met
2025-05-30 10:12:47.177 INFO FOSSIL_COLLECT Fossil collection 2 found
2025-05-30 10:12:47.177 INFO FOSSIL_POSTPONE Fossils from collection 2 can't be deleted because deletion criteria aren't met
2025-05-30 10:12:47.259 INFO FOSSIL_COLLECT Fossil collection 3 found
2025-05-30 10:12:47.259 INFO FOSSIL_POSTPONE Fossils from collection 3 can't be deleted because deletion criteria aren't met
2025-05-30 10:12:47.332 INFO FOSSIL_COLLECT Fossil collection 4 found
2025-05-30 10:12:47.332 INFO FOSSIL_POSTPONE Fossils from collection 4 can't be deleted because deletion criteria aren't met
2025-05-30 10:12:47.412 INFO FOSSIL_COLLECT Fossil collection 5 found
2025-05-30 10:12:47.412 INFO FOSSIL_POSTPONE Fossils from collection 5 can't be deleted because deletion criteria aren't met
2025-05-30 10:12:48.322 INFO FOSSIL_COLLECT Fossil collection 6 found
2025-05-30 10:12:48.322 INFO FOSSIL_POSTPONE Fossils from collection 6 can't be deleted because deletion criteria aren't met
2025-05-30 10:12:48.740 INFO FOSSIL_COLLECT Fossil collection 7 found
2025-05-30 10:12:48.740 INFO FOSSIL_POSTPONE Fossils from collection 7 can't be deleted because deletion criteria aren't met
2025-05-30 10:12:49.232 INFO FOSSIL_COLLECT Fossil collection 8 found
2025-05-30 10:12:49.232 INFO FOSSIL_POSTPONE Fossils from collection 8 can't be deleted because deletion criteria aren't met
2025-05-30 10:12:49.652 INFO FOSSIL_COLLECT Fossil collection 9 found
2025-05-30 10:12:49.652 INFO FOSSIL_POSTPONE Fossils from collection 9 can't be deleted because deletion criteria aren't met
2025-05-30 10:12:49.652 INFO SNAPSHOT_NONE No snapshot to delete

DuplicacyShare is the folder where most of the backups go.


I removed the Ebooks backup.

The first one won’t do anything because its options don’t tell it which snapshots to act on. You either need -a or -id ...
The second one won’t do anything either, for the same reason, and on top of that -delete-only would only delete previously collected snapshots, of which you have none.
The third one would safely prune snapshots, eventually. If you really want this to happen right now, you can turn off all the safety: make sure that nothing else is touching the datastore (no backups, no prunes, nothing) and add the -exclusive flag. It will then prune right away with all safety checks disabled.

Yes, that’s the safety that prevents data loss in some weird scenarios: imagine you are pruning from this machine while someone else is backing up the same data from another machine. They would have uploaded chunks that don’t yet belong to any snapshot, which are exactly the chunks -exhaustive would remove. You don’t want to nuke that other client’s data.

So, if you can make 100% sure that no other process is touching the datastore, run prune -a -exclusive -exhaustive and it will purge the orphans right away.
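
If you want to be extra careful, you can preview the result first; -dry-run makes prune print what it would do without removing anything:

prune -a -exclusive -exhaustive -dry-run   # preview only, nothing is deleted
prune -a -exclusive -exhaustive            # the real run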

That’s not a backup then. What happens if you notice that data rot/corruption/etc. happened a week ago, five versions back? It sounds like you are looking for replication; in that case you can just replicate ZFS snapshots as-is.

Thanks, I have now freed up space with -a -exclusive -exhaustive.
Do you have an example of what a reasonable setup looks like?

e.g. I would like to make a backup every Monday. Only 4 backups should be kept.

Another backup should be made once a month.
Is it possible to activate incremental backups?

Does that mean I need one backup and one prune per schedule job, or an extra schedule with -a for all of them?

I don’t understand what you mean.

So, you want to keep one backup every week, and no backups older than 1 month?

prune -a -keep 0:30 -keep 7:7

(0:30 deletes everything older than 30 days; 7:7 thins snapshots older than 7 days to one per week.)

Please clarify. I thought you were making backups every Monday?

Duplicacy always deduplicates data. It does not differentiate between incremental and full backups: each backup is a full backup that behaves like an incremental one. I.e., if no data changed, no extra space is taken by the next backup. And that backup is also a full backup in the sense that it does not depend on any previous revisions.
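
For example, two consecutive backup runs of the same repository (-stats just prints upload statistics):

backup -stats   # revision 1: uploads all chunks
backup -stats   # revision 2: uploads only new or changed chunks, yet is a complete standalone revision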

If you are interested, you can read more here: duplicacy/duplicacy_paper.pdf at master · gilbertchen/duplicacy · GitHub

It’s up to you. I’m not sure why you want to prune so aggressively. Does your data change so dramatically between backups that space usage gets out of hand? Perhaps you are picking up a lot of transient/temporary data that could be excluded?

I don’t understand what you mean.

I cleaned up the backup that had grown to more than 3 TB, thanks to your -exclusive command.
Now I can set up the backups properly.
Sorry, I was confused and didn’t know how to structure the backups. I just tried things and am willing to re-plan everything to get it right.
I want the prune so that there are not too many backups. For example, I sometimes have a backup of 500 GB and thought that keeping 4 backups of it would take 2 TB.
Since each backup behaves like an incremental one, I now understand that I don’t have to make the backups monthly or weekly; daily would even be better, because only new or changed data increases the storage use, right?

Would you be so kind as to show me a few good examples?
E.g. a backup that runs daily and deletes backups older than 12 months. Does that make sense?
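
For reference, a setup matching that description could pair a daily backup job with a prune job like this (a sketch; the intermediate thinning steps are optional):

# nothing older than a year; one per 30 days after 180 days;
# one per week after 30 days; one per day after 7 days
prune -a -keep 0:365 -keep 30:180 -keep 7:30 -keep 1:7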