Prune command details

Does -exhaustive imply -all?

No, -exhaustive only means to find ‘orphan’ chunks that are not referenced by any snapshot. A plausible use case is to delete snapshots with a certain snapshot id and at the same time delete these ‘orphan’ chunks.

I think I understand: -all deletes snapshots (revisions) from all snapshot ids according to the specified retention policies; in doing so it deletes (or fossilizes) any chunks referenced by those revisions but not referenced anywhere else. And -exhaustive deletes chunks that aren’t referenced by any snapshot. Correct?

This is correct…
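
To make that concrete, a minimal sketch (the snapshot id office-pc is hypothetical):

duplicacy prune -id office-pc -keep 0:30

deletes revisions older than 30 days from the office-pc id only, while

duplicacy prune -all -keep 0:30

applies the same policy to every snapshot id in the storage. By contrast,

duplicacy prune -exhaustive

deletes no revisions at all; it only collects chunks that no snapshot references.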

Reading this:

The -exclusive option will assume that no other clients are accessing the storage, effectively disabling the two-step fossil collection algorithm.

Actually I am backing up some NAS dirs to a storage. No other client has access to that storage; it’s a bucket I created on B2 only for backup purposes.

So it’s fine to always use -exclusive?

To only keep a 7-day history, would it be -keep 0:7?

Yes.

However, it’s questionable why you would want such a short history. It may not accomplish what you hope to accomplish (i.e., purging data for various compliance reasons).
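
As a sketch, assuming the policy should apply to every snapshot id in the storage:

duplicacy prune -a -keep 0:7

-keep 0:7 reads as “keep no revisions older than 7 days”; revisions newer than that are left alone.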

Is the following correct?

If the keep option is not specified, no snapshots will be deleted and no new fossils created.

For example (assuming exclusive access to the storage), this command will only delete existing fossils, both referenced and unreferenced:

duplicacy prune -all -exhaustive -exclusive
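
To verify what such a run would remove before committing to it, the prune command also accepts a -dry-run flag, which prints the would-be deletions without modifying the storage:

duplicacy prune -all -exhaustive -exclusive -dry-run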

Feature Request: a -persist flag that deletes snapshots no matter whether there are missing chunks.
Feature Request: a -v verbose flag that prints the chunks that are being deleted.

I get the following error when trying to prune:

2024-10-04 16:56:28.216 ERROR DOWNLOAD_DECRYPT Failed to decrypt the file snapshots/install-ts639/34: No enough encrypted data (0 bytes) provided
Failed to decrypt the file snapshots/backup/34: No enough encrypted data (0 bytes) provided

Okay, I think that was a corrupt backup, so I deleted it.

Now when I run prune I get this error:

2024-10-04 17:13:51.990 WARN CHUNK_FOSSILIZE Chunk bae0d76f68c4ca6262d68ea006cad8ec62972da3850e30128bb416a2a42d18c3 is already a fossil
2024-10-04 17:13:54.868 WARN CHUNK_FOSSILIZE Chunk 53e33af467e1b772ba40c3e8e0ff5663f98267974639f871390554e410062339 is already a fossil
2024-10-04 17:13:57.087 ERROR CHUNK_DELETE Failed to fossilize the chunk b0e62a477173fdb2dc7a697dfd3e3585cb02539cff76e273278fe9cd540f1b3b: sftp: "Failure" (SSH_FX_FAILURE)
Failed to fossilize the chunk b0e62a477173fdb2dc7a697dfd3e3585cb02539cff76e273278fe9cd540f1b3b: sftp: "Failure" (SSH_FX_FAILURE)

I think this is because it’s trying to write and the storage has 0.0% free space.

Is there any way to manually create space to prevent this error?

So while other prune commands need to specify a backup or a snapshot, if I just want to get rid of unreferenced chunks taking up space, I can safely run prune -exhaustive without any keep flags?

Is my assumption correct that it will not prune backups newer than 7 days? E.g. if I made backups a couple of times per day and also ran this prune command again and again, it wouldn’t delete the multiple daily backups within that week? (I won’t configure it like that, just asking whether I understood it correctly.)

Yes. Each -keep statement affects revisions created <second number> days ago or older.
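
For a fuller picture, multiple -keep n:m options can be layered (each meaning “keep one revision every n days for revisions older than m days”, with n = 0 meaning delete), ordered by decreasing m. A sketch of a typical policy:

duplicacy prune -a -keep 0:360 -keep 30:180 -keep 7:30 -keep 1:7

This deletes everything older than 360 days, keeps one revision per 30 days for revisions older than 180 days, one per 7 days for those older than 30 days, and one per day for those older than 7 days; revisions newer than 7 days are never touched.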

I’m having difficulty pruning. What have I overlooked? (using Docker)

Running prune command from /cache/localhost/all
Options: [-log prune -storage adocs -r 1466-1486]
2025-04-04 10:17:38.420 INFO STORAGE_SET Storage set to b2://bkup/adocs/
2025-04-04 10:17:38.925 INFO BACKBLAZE_URL Download URL is: https://f002.backblazeb2.com
2025-04-04 10:19:47.207 INFO SNAPSHOT_NONE No snapshot to delete

Here’s the last “check run”

 adocs_b2 | 1466 | @ 2025-03-31 08:00       |  10846 |  17,693M |   3597 | 14,986M |    0 |       0 |    0 |        0 |
 adocs_b2 | 1467 | @ 2025-03-31 16:00       | 230083 | 416,051M |   6784 | 29,427M |    0 |       0 | 2524 |  11,424M |
 adocs_b2 | 1468 | @ 2025-03-31 20:00       | 230083 | 416,051M |   6784 | 29,427M |    0 |       0 |    0 |        0 |
 adocs_b2 | 1469 | @ 2025-04-01 00:00       | 244777 | 448,967M |   6830 | 29,583M |    1 |    317K |   51 | 160,510K |
 adocs_b2 | 1470 | @ 2025-04-01 04:00       | 230177 | 422,850M |   6812 | 29,555M |    0 |       0 |    4 |     686K |
 adocs_b2 | 1471 | @ 2025-04-01 08:00       | 230177 | 422,850M |   6812 | 29,555M |    0 |       0 |    5 |   1,487K |
 adocs_b2 | 1472 | @ 2025-04-01 12:00       | 230177 | 422,850M |   6812 | 29,555M |    0 |       0 |    0 |        0 |
 adocs_b2 | 1473 | @ 2025-04-01 16:00       | 230159 | 422,849M |   6812 | 29,555M |    0 |       0 |    4 |   1,478K |
 adocs_b2 | 1474 | @ 2025-04-01 20:00       | 230159 | 422,849M |   6812 | 29,555M |    0 |       0 |    0 |        0 |
 adocs_b2 | 1475 | @ 2025-04-02 00:00       | 244835 | 455,764M |   6823 | 29,592M |    1 |    317K |   15 |  38,835K |
 adocs_b2 | 1476 | @ 2025-04-02 04:00       | 230231 | 429,646M |   6808 | 29,569M |    0 |       0 |    4 |     739K |
 adocs_b2 | 1477 | @ 2025-04-02 08:00       | 230231 | 429,646M |   6808 | 29,569M |    0 |       0 |    0 |        0 |
 adocs_b2 | 1478 | @ 2025-04-02 12:00       | 230231 | 429,646M |   6808 | 29,569M |    0 |       0 |    0 |        0 |
 adocs_b2 | 1479 | @ 2025-04-02 16:00       | 230230 | 422,912M |   6809 | 29,570M |    0 |       0 |    4 |     975K |
 adocs_b2 | 1480 | @ 2025-04-02 20:00       | 230230 | 422,912M |   6809 | 29,570M |    0 |       0 |    0 |        0 |
 adocs_b2 | 1481 | @ 2025-04-03 00:00       | 244906 | 455,827M |   6814 | 29,587M |    2 |    811K |   10 |  19,238K |
 adocs_b2 | 1482 | @ 2025-04-03 04:00       | 230303 | 429,709M |   6788 | 29,527M |    0 |       0 |    5 |   1,224K |
 adocs_b2 | 1483 | @ 2025-04-03 08:00       | 230303 | 429,709M |   6788 | 29,527M |    0 |       0 |    0 |        0 |
 adocs_b2 | 1484 | @ 2025-04-03 12:00       | 230303 | 429,709M |   6788 | 29,527M |    0 |       0 |    0 |        0 |
 adocs_b2 | 1485 | @ 2025-04-03 16:00       | 230303 | 429,709M |   6788 | 29,527M |    0 |       0 |    0 |        0 |
 adocs_b2 | 1486 | @ 2025-04-03 20:00       | 230303 | 429,709M |   6788 | 29,527M |    0 |       0 |    0 |        0 |
 adocs_b2 |  all |                          |        |          |   8765 | 30,268M | 8765 | 30,268M |      |          |

For reference:

Specify from which snapshot to delete those revisions.
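
That is, -r alone does not select anything; either -id or -a must accompany it. A sketch using the names from the log above (assuming adocs_b2 is the snapshot id reported by check):

duplicacy prune -storage adocs -id adocs_b2 -r 1466-1486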

Thank you. That wasn’t obvious to me when using the Docker GUI for prune.

I agree. Perhaps duplicacy (at least the CLI, but also the GUI; otherwise what’s the point of it existing?) should provide better diagnostics in the logs: if the combination of command-line arguments the user specified results in some arguments having no effect, there should be a warning, because it’s definitely not what the user wanted.

A lot of people, including myself, have wasted a lot of time figuring out why prune refuses to act in similar scenarios, only to discover that either -a or -id has to be provided; otherwise it will happily and successfully do nothing… which is technically not incorrect (“Prune these revisions from no snapshots” → does nothing → “Done, exactly as requested!”); it’s just a poor user experience.