Manually delete chunks?

What happens if you manually delete chunks from a repo? Will they be restored the next time a backup is run?

My server has become completely full and unresponsive, and I can’t figure out how to free up any space with a prune. I just need to free about 1 GB so I can access the server and then increase the amount of free space.

Any ideas?

By unresponsive, are you still able to delete files? If so, I’d hunt down any 0-byte chunk or snapshot files (find /storage -size 0 -type f -delete) then run a check -all. Manually delete any snapshots with missing chunks.

If you’re then able to run a prune -exclusive -exhaustive, that may get rid of enough fossilised chunks already queued for deletion. Just don’t combine that with any -keep flags - do that after.

Alternatively, you could download and then delete a handful of old chunk files, hoping none of them are metadata chunks that aren’t cached locally on your client. After a prune -exclusive -exhaustive, which should hopefully free up some space, you could re-upload the chunks you downloaded and deleted.
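
That juggle can be sketched locally, with temp directories standing in for the SFTP storage (the paths and the chunk name below are made up for illustration; in reality steps 1, 2 and 4 would be scp/rm over SSH):

```shell
STORE=$(mktemp -d)      # stands in for /storage on the server
STASH=$(mktemp -d)      # local scratch space on the client
mkdir -p "$STORE/chunks"
printf 'chunkdata' > "$STORE/chunks/aabbcc"   # a hypothetical chunk file

cp "$STORE/chunks/aabbcc" "$STASH/"   # 1. download a copy of the chunk
rm "$STORE/chunks/aabbcc"             # 2. delete it server-side to free space
# 3. run: duplicacy prune -exclusive -exhaustive
cp "$STASH/aabbcc" "$STORE/chunks/"   # 4. re-upload the stashed chunk
```

The chunk ends up back where it started, so later restores that reference it still work.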

If you don’t care about snapshot history, or have another storage you can copy from, you can rebuild those missing chunks, but you’d have to use a new ID, as an incremental backup on an existing ID assumes all the chunks listed in the latest revision are present.

I’ve run into disk-space issues numerous times over SFTP, and usually a quick find /storage -size 0 -type f -delete followed by a prune -exclusive -exhaustive gets everything back in a good place.

It’s unresponsive to anything but SSH.

It seems prune and similar commands don’t work at the moment, and I don’t currently have another workaround to free up space.

If I have to manually delete chunks that I can’t restore later, what’s the worst that will happen? Will it cause issues if Duplicacy can’t find chunks it expects to be there? Or will I just be unable to restore the corresponding data?

Absolutely do the find /storage -size 0 -type f -delete step first over SSH - you don’t want such files in there, because Duplicacy will claim nothing is wrong while they exist. Deleting them won’t free much space, but it shouldn’t fail due to a lack of it either.

Have you tried prune -exclusive -exhaustive? That should work, as it doesn’t create new data or rename fossils - it just deletes chunks. Make sure no other backups or other jobs are scheduled to run.

Yes, randomly deleting chunks isn’t a good idea - you might be able to regenerate some of them (with a fresh backup ID) but you risk destroying old snapshots.


Thanks. At this stage I’m resigned to breaking old snapshots. Perhaps I could manually delete all snapshot files except the latest in the snapshots folder to mitigate the damage?
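
Assuming the usual storage layout of snapshots/&lt;id&gt;/&lt;revision-number&gt;, a sketch like this would keep only the highest-numbered revision per backup ID (shown here against a throwaway demo directory - point it at the real snapshots folder only after verifying the layout matches):

```shell
SNAPDIR=$(mktemp -d)                 # demo stand-in for storage/snapshots
mkdir "$SNAPDIR/laptop"              # one hypothetical backup ID
for r in 1 2 3; do printf 'rev' > "$SNAPDIR/laptop/$r"; done

for id in "$SNAPDIR"/*/; do
  # Sort revision files numerically, drop the last (newest), delete the rest.
  ls "$id" | sort -n | sed '$d' | while read -r rev; do
    rm -f "$id/$rev"
  done
done
```

Only revision 3 of the demo ID survives; a prune -exclusive -exhaustive afterwards would then collect the chunks no longer referenced.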

Unfortunately, -size 0 isn’t supported by BusyBox v1.01 (2024.06.17-18:46+0000), but I think I’ve deleted all those files manually anyway.
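
For future reference, a POSIX-portable equivalent that avoids -size 0 entirely: [ -s FILE ] is true only for non-empty files, so you can delete everything it rejects. Sketched here against a scratch directory - point find at the real storage path instead:

```shell
STORAGE=$(mktemp -d)                     # demo stand-in for /storage
: > "$STORAGE/empty-chunk"               # 0-byte file: should be removed
printf 'data' > "$STORAGE/real-chunk"    # non-empty file: should survive

# [ -s FILE ] succeeds only if FILE exists and is larger than 0 bytes,
# so the rm fires exclusively on empty files.
find "$STORAGE" -type f -exec sh -c '[ -s "$1" ] || rm -f "$1"' sh {} \;
```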