Remove a Folder from Backup

Hello,

How can I remove a folder from the backup to gain more available space?
I have added a new exclusion to the backup jobs, but I also want to remove the folder's existing data from the backup to free up space on the backup repository.

Many Thanks!

Greetings

Revan335

You cannot retroactively purge files selectively from the existing revisions. Snapshots are immutable by design.

To free the space occupied by a specific file, you need to delete every snapshot revision in which that file is present.


OK, how can I delete the snapshot and its data to get more free space?

Do I delete the snapshot file from the folder and then run a command? What is the command, or commands?

Here: prune · gilbertchen/duplicacy Wiki · GitHub. You can use this list · gilbertchen/duplicacy Wiki · GitHub to find out which revisions contain a specific file.
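For example, the `list` output can be filtered to see which revisions still reference the folder. A minimal sketch, assuming the folder is named `big-folder/` (a placeholder, not a name from this thread):

```shell
# From an initialized repository, list every file in every revision
# and keep only the revision headers plus hits for the unwanted folder
# ("big-folder/" is a placeholder name)
duplicacy list -files | grep -E "revision|big-folder/"
```

Every revision number that shows a hit would then go into the prune command.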

duplicacy list -files
duplicacy prune -r 344-350 (or other revisions that I got from the list command)

Is this right, or did I forget another prune command or some other command?

That should work.


Many Thanks!

Does this remove the revisions and their data, and free up more space?

How can I execute the commands?
I get "command not found" when I run them in the Docker container console.

I am using your Docker container on Unraid.

I would initialize a new, empty temporary repository (with the same snapshot ID) on your PC with the duplicacy CLI and do it from there.


And what does an independent temporary repository, including its own snapshot ID naming, have to do with the actual one that you want to edit?

Do you then have to integrate the actual one in order to edit it from there?

I can still follow the CLI part, because the environment is the same.

Duplicacy calls the local data the "repository", and the target data the "storage".

You have the web UI configured to back up your local repository on Unraid to your storage. You want to prune some snapshots on that storage. You could use the existing initialized repository that the web UI uses, but it's cumbersome: you need to SSH into Unraid, open a shell in the container, find the locations of the temporary repositories the web UI is using, and run duplicacy list and prune there.
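That cumbersome route would look roughly like this (the container name and the repository path are assumptions about a typical duplicacy-web setup, not details from this thread):

```shell
# Open a shell inside the running duplicacy-web container
# ("duplicacy-web" is an assumed container name)
docker exec -it duplicacy-web /bin/sh

# Inside the container, the web UI keeps its working repositories
# somewhere under its config directory (path is an assumption):
#   ~/.duplicacy-web/repositories/
# cd into the right one, then run duplicacy list / prune from there.
```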

Instead, you create a new empty temporary repository on your PC, connected to the same storage with the same snapshot ID, exactly as the Web UI would have done, and exactly how you would go about restoring data from that storage.

Now, instead of running a restore, you just run the list and prune jobs from it. Then delete this local temporary repository.
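The steps above can be sketched as shell commands; the snapshot ID, storage URL, and revision range below are placeholders that would need to match the actual web UI configuration:

```shell
# 1. Create a new, empty temporary repository on the PC
mkdir duplicacy-temp && cd duplicacy-temp

# 2. Connect it to the existing storage, reusing the snapshot ID the
#    web UI backs up under (both values here are placeholders)
duplicacy init my-snapshot-id sftp://user@server//backups/duplicacy

# 3. Find out which revisions contain the unwanted folder
duplicacy list -files

# 4. Delete those revisions (example range)
duplicacy prune -r 344-350

# 5. The temporary repository itself can now simply be deleted
cd .. && rm -rf duplicacy-temp
```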


Hmmm… I can't find the folder in list -files.
Maybe it only exists in the chunks and no snapshot was ever created, because the job is incomplete: the data in this folder is too big for the storage.

I get the message "Quota exceeded" in the backup job.

Can I delete the incomplete snapshot/chunks to get more free space, so the job can complete again with the folder excluded?

If there was never a completed backup with those extra unnecessarily large files, then you won't see them.

However, the storage will contain chunks for those files that have already been uploaded but are not referenced by any snapshot.

To clear them out of the storage, run prune -a -exhaustive. You can add the -exclusive flag, but then make sure no other job is running. This will remove all unreferenced chunks.
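As a sketch, the two variants differ in how the unreferenced chunks are removed:

```shell
# Safe two-step variant: unreferenced chunks are first renamed
# ("fossilized") and only deleted on a later prune run
duplicacy prune -a -exhaustive

# One-step variant: deletes unreferenced chunks immediately.
# Only safe while no backup or other job is running.
duplicacy prune -a -exhaustive -exclusive
```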

2024-06-12 23:41:11.187 ERROR CHUNK_DELETE Failed to fossilize the chunk 850d49b5beb333aae2bc5987bcb4462b44b1f217c9238a24c40f1c4c943c97a6: sftp: "Quota exceeded" (SSH_FX_FAILURE)
Failed to fossilize the chunk 850d49b5beb333aae2bc5987bcb4462b44b1f217c9238a24c40f1c4c943c97a6: sftp: "Quota exceeded" (SSH_FX_FAILURE)

Can I delete chunks manually?
But what are their names?

Is the -exclusive flag important?
I ran it without that.

You have run out of quota on your server.

Stop all schedules, and run "prune -exhaustive -exclusive". This will delete the orphaned chunks immediately instead of renaming them, thus avoiding your quota issue.


It works!
I have more free space.
I will check the next runs for success.


This topic was automatically closed 10 days after the last reply. New replies are no longer allowed.