How can I kill this Zombie snapshot?

I back up to an SFTP server on my LAN. Following the recent 2.6.2 release and bugfix, I ran the suggested duplicacy check on my repo. All commands here are run as root on the server itself.

(Have also read Invalid chunks in 1/4 snapshots and posted How long does check -chunks take? in relation to this.)

My initial check “duplicacy check -all -chunks -threads 4” turned up a corrupted chunk. I decided to take the path of removing it and any snapshots which contained it. (The most recent snapshot doesn’t contain it, and it’s limited to one snapshot ID.)

So I manually deleted the problem chunk. Reran the check and found which snapshots had missing chunks: revisions 7, 8, 9, 10, 11, and 12 out of 13 total.
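
(In case it helps anyone following along: on a disk or SFTP storage, deleting a chunk just means removing its file under the chunks/ directory. Assuming the default layout, where the first two hex characters of the hash become the subdirectory name, the command was roughly:

rm /backup/duplicacy/chroot/repo/chunks/02/80e56d315e1996687b000535b69ee0edbd2339d57fe9a21f70d3a21e423726

with the storage path matching the one shown by duplicacy list further down.)
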
I deleted the snapshots:

duplicacy prune -id oszen -r 7-9
duplicacy prune -id oszen -r 10-12

but the latter didn’t seem to work properly, so I did them separately:

duplicacy prune -id oszen -r 12
duplicacy prune -id oszen -r 10
duplicacy prune -id oszen -r 11
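
(Side note: I believe prune accepts the -r flag more than once, so the whole batch could presumably have been done in a single command:

duplicacy prune -id oszen -r 7-9 -r 10-12

though given the odd behaviour above, I stuck to one revision at a time.)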

Then I ran a prune to clean up (-exhaustive to scan the storage for unreferenced chunks, -exclusive since nothing else was accessing it):
duplicacy prune -id oszen -exhaustive -exclusive

All of which seemed to work normally, but a list:
duplicacy list -id oszen

shows revision 11 is still there!
Storage set to /backup/duplicacy/chroot/repo
Snapshot oszen revision 1 created at 2019-12-20 15:07 -hash
Snapshot oszen revision 2 created at 2020-01-07 14:10 
Snapshot oszen revision 3 created at 2020-01-23 09:16 
Snapshot oszen revision 4 created at 2020-01-30 15:31 
Snapshot oszen revision 5 created at 2020-03-10 23:23 
Snapshot oszen revision 6 created at 2020-03-19 10:16 
Snapshot oszen revision 11 created at 2020-06-10 19:22 
Snapshot oszen revision 13 created at 2020-08-03 22:5

I’ve actually been through the whole delete/prune/list cycle two or three times, but snapshot 11 refuses to die.

About to delete the metadata cache and try again, but otherwise I’m at a bit of a loss…

Actually, I think the cache is irrelevant, since I’m on the server and this snapshot refers to a different machine which doesn’t currently exist (an 11-year-old computer in its death throes). The metadata cache will presumably be in the .duplicacy/ folder on the machine where the backup was created.
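
(For the record, if clearing it were needed, it should just be a matter of removing the local cache directory on the client, something like:

rm -rf .duplicacy/cache

run from the repository root, since that’s where Duplicacy keeps its cached metadata chunks.)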

And here is what happens when I try to check the entire repo:

duplicacy check -all -chunks -threads 4

Part of the result:

 Chunk 0280e56d315e1996687b000535b69ee0edbd2339d57fe9a21f70d3a21e423726 referenced by snapshot oszen at revision 11 does not exist
Some chunks referenced by snapshot oszen at revision 11 are missing

And finally

Some chunks referenced by some snapshots do not exist in the storage

Thus it blocks my check of the rest of the repo, and I daren’t run the other machines’ backups till this is sorted.

When you ran duplicacy prune -id oszen -r 11, did it give you any error? You can try running the command again.

You can also delete this revision manually from the server (rm /path/to/storage/snapshots/oszen/11) and then run duplicacy prune -exhaustive -exclusive to clean up.

No errors; it appeared to work just as it did for the other five snapshots I deleted. And I’ve already rerun the commands about four times.

So I’ve deleted it manually as suggested. The cleanup prune is currently running.
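
(Concretely, filling in my actual storage path from the list output above, that was:

rm /backup/duplicacy/chroot/repo/snapshots/oszen/11
duplicacy prune -exhaustive -exclusive

so adjust the path to your own storage.)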

Prune completed without error.
check -chunks is now running again. It’s already confirmed all chunks exist and is moving on to verification.

Marking ‘delete manually’ as the solution.

Thanks for the help!

For everyone: Feel free to use the :heart: button on the posts that you found useful.

For the OP of any #support topic: you can mark the post that solved your issue by ticking the :checked: under the post. That of course may include your own post :slight_smile:

This is much better than renaming the issue and adding [RESOLVED] or SOLVED, as it allows us to find solved issues via search in a standard way.

This topic was automatically closed 10 days after the last reply. New replies are no longer allowed.