Prune "Failed to delete snapshot" .. Yet, file does exist?

I have read all the other posts on here with the same message, but none of them seem to describe my issue. I have a single prune job running, and it fails with the message below. Yet if I go into the storage where backups are sent, I can locate snapshot 65, manually delete it, and re-run the job, and it then completes… but this seems broken. What could be causing this behavior?

Running prune command from /cache/localhost/all
Options: [-log prune -storage MYSTORAGE -id Unraid-Containers -keep 0:2 -keep 1:1]
2021-08-21 08:27:48.234 INFO STORAGE_SET Storage set to sftp://user@domain.ssh.com:1234/DUPLICACY
2021-08-21 08:27:49.123 INFO RETENTION_POLICY Keep no snapshots older than 2 days
2021-08-21 08:27:49.123 INFO RETENTION_POLICY Keep 1 snapshot every 1 day(s) if older than 1 day(s)
2021-08-21 08:28:07.695 INFO SNAPSHOT_DELETE Deleting snapshot Unraid-Containers at revision 65
2021-08-21 08:28:15.735 ERROR SNAPSHOT_DELETE Failed to delete the snapshot Unraid-Containers at revision 65: file does not exist
Failed to delete the snapshot Unraid-Containers at revision 65: file does not exist

Which Duplicacy version, Web or CLI?

A small correction (yes, Duplicacy nomenclature is slippery…):

revision 65 of Unraid-Containers snapshot-id

Running from cache? :thinking:

Thanks for the reply!

I’m running Web 2.7.2.

Yes, you’re absolutely correct. I will catch on soon :slight_smile:

/cache/localhost/all - I am not sure? Is this the default? I have Duplicacy running on my Unraid server, which is my backup source. My destination is another NAS of mine off-site, using SFTP for the storage.

Running prune command from /cache/localhost/all
Options: [-log prune -storage STORAGE -id Unraid-Containers -keep 0:1 -keep 1:2]
2021-08-21 09:48:37.719 INFO STORAGE_SET Storage set to sftp://me@domain.com:1234/DUPLICACY
2021-08-21 09:48:38.668 INFO RETENTION_POLICY Keep no snapshots older than 1 days
2021-08-21 09:48:57.385 INFO SNAPSHOT_DELETE Deleting snapshot Unraid-Containers at revision 64
2021-08-21 09:49:05.633 WARN CHUNK_FOSSILIZE Chunk 9cc7c8713de9aa26b4c8bb3a4dcb884092987eee78ee5a01c82cff5800817bcd is already a fossil
2021-08-21 09:49:05.683 WARN CHUNK_FOSSILIZE Chunk 805c48214b19f089a7006ac6cdd1707bcb1e90c90498c97c4e8d4b943814e6f6 is already a fossil
2021-08-21 09:49:05.741 WARN CHUNK_FOSSILIZE Chunk 88e4f5129e5e60387898c4f611d77f1f47adec79b45e106eb34353dab7a26920 is already a fossil
2021-08-21 09:49:05.814 WARN CHUNK_FOSSILIZE Chunk 4447f5e7fe0a0be79d90890bfe8396dee719ea6ea0d12d3d2c7713f022dc6083 is already a fossil
2021-08-21 09:49:05.877 WARN CHUNK_FOSSILIZE Chunk cb260fc1bad7af58b169c89d4d070022cbc2ff125ca6e4f10416e9339c53f196 is already a fossil
2021-08-21 09:49:05.930 WARN CHUNK_FOSSILIZE Chunk ebdfe822663e53b9ae80ce2298b4aaac6d89fe75cd7abe1ad1eef16af9c2ca68 is already a fossil
2021-08-21 09:49:05.990 WARN CHUNK_FOSSILIZE Chunk 0bfdb6fd8fea5f9128b9595314ae451377294b83026da37c1ea0956eeeebc73f is already a fossil
2021-08-21 09:49:06.051 WARN CHUNK_FOSSILIZE Chunk a94530ed92195ee1e1c6a732d116c11fa3074a8451401dfb78049a0c7ca08cdb is already a fossil
2021-08-21 09:50:21.537 WARN CHUNK_FOSSILIZE Chunk d56c410c0da653f833827a70a64af7a91275fa6d9e2e961db9228ccfd3dfc427 is already a fossil
2021-08-21 09:50:21.629 INFO FOSSIL_COLLECT Fossil collection 1 saved
2021-08-21 09:50:21.672 ERROR SNAPSHOT_DELETE Failed to delete the snapshot Unraid-Containers at revision 64: file does not exist
Failed to delete the snapshot Unraid-Containers at revision 64: file does not exist

I ran it again and got this message. I manually deleted 64 from the destination storage location
/DUPLICACY/snapshots/Unraid-Containers,

then ran it again, and it completed.

Running prune command from /cache/localhost/all
Options: [-log prune -storage STORAGE -id Unraid-Containers -keep 0:1 -keep 1:2]
2021-08-21 10:34:48.935 INFO STORAGE_SET Storage set to sftp://me@domain.com:1234/DUPLICACY
2021-08-21 10:34:49.859 INFO RETENTION_POLICY Keep no snapshots older than 1 days
2021-08-21 10:35:08.157 INFO SNAPSHOT_NONE No snapshot to delete

I am not sure why. It looks as if it doesn’t have permission to delete, but that isn’t the error it reports?

Remember that prune is a two-step operation: the first step renames the no-longer-needed chunks to fossils, saves a fossil collection, and deletes the pruned snapshot files; only a later prune run, once the deletion criteria are met, removes the fossils for good.

Ref: Lock Free Deduplication · gilbertchen/duplicacy Wiki · GitHub
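
To make the two steps concrete, here is roughly what you would see on the storage side during a prune (a hedged sketch: the chunk hash is taken from your log, but the one-level chunks/xx/ nesting and the .fsl suffix are what I would expect on an SFTP backend, so verify against your own storage):

# step 1 (fossil collection): chunks referenced only by the pruned revisions are
# renamed to fossils, a fossil collection is saved, and the pruned snapshot files
# are deleted
$ sftp -P 1234 user@domain.ssh.com:/DUPLICACY
sftp> ls chunks/9c/c7c8713de9aa26*
chunks/9c/c7c8713de9aa26b4c8bb3a4dcb884092987eee78ee5a01c82cff5800817bcd.fsl

# step 2 (fossil deletion): a later prune run, once the deletion criteria are met
# (a new backup seen from every snapshot ID), deletes the fossils for good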

Seems to be something related to the local cache versus the revision in storage.

I think @gchen can help us better understand what’s going on.
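
To compare the local cache against the storage side by side, a rough check (a hedged sketch: the cache path follows the CLI’s usual .duplicacy/cache layout under the /cache mount shown in the log, and MYSTORAGE, host, and port are the placeholders from your options):

# revisions Duplicacy has cached locally for this snapshot ID
$ ls /cache/localhost/all/.duplicacy/cache/MYSTORAGE/snapshots/Unraid-Containers/

# revisions actually present on the SFTP storage
$ sftp -P 1234 user@domain.ssh.com:/DUPLICACY/snapshots/Unraid-Containers
sftp> ls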

Can you try to use the sftp command to test if deletion works?

$ sftp sftp://me@domain.com:1234
sftp> cd DUPLICACY/snapshots/Unraid-Containers
sftp> rm 64
sftp> ls
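
If your sftp client doesn’t accept the URL form, the same connection with an explicit port flag:

$ sftp -P 1234 me@domain.com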

Hi @gchen, sorry I was away for the last 4 days. I returned today and checked my backups, and they have been failing prune/check for the last few days with the same result: "Failed to delete the snapshot Unraid-Containers at revision 66: file does not exist"

For example the run below, and the rest of my backups as well… All the prune jobs are complaining about a snapshot revision with "file does not exist", yet for each one I have checked, the file is there.

Running prune command from /cache/localhost/all
Options: [-log prune -storage STORAGE -id Databases -keep 0:1800 -keep 30:180 -keep 7:30 -keep 1:7]
2021-08-23 13:07:41.932 INFO STORAGE_SET Storage set to sftp://user-domain@address.com
2021-08-23 13:07:42.879 INFO RETENTION_POLICY Keep no snapshots older than 1800 days
2021-08-23 13:07:42.879 INFO RETENTION_POLICY Keep 1 snapshot every 30 day(s) if older than 180 day(s)
2021-08-23 13:07:42.879 INFO RETENTION_POLICY Keep 1 snapshot every 7 day(s) if older than 30 day(s)
2021-08-23 13:07:42.879 INFO RETENTION_POLICY Keep 1 snapshot every 1 day(s) if older than 7 day(s)
2021-08-23 13:08:03.044 INFO FOSSIL_GHOSTSNAPSHOT Snapshot Unraid-Containers revision 66 should have been deleted already
2021-08-23 13:08:03.044 INFO FOSSIL_GHOSTSNAPSHOT Snapshot Unraid-Containers revision 67 should have been deleted already
2021-08-23 13:08:03.045 INFO FOSSIL_IGNORE The fossil collection file fossils/1 has been ignored due to ghost snapshots
2021-08-23 13:08:03.045 INFO SNAPSHOT_DELETE Deleting snapshot Databases at revision 26
2021-08-23 13:08:03.052 INFO SNAPSHOT_DELETE Deleting snapshot Databases at revision 27
2021-08-23 13:08:12.160 WARN CHUNK_FOSSILIZE Chunk 39a6b1be7f747097ec109d42a1208ea63575c34f8953e6aedbd8fbb690ecfea8 is already a fossil
2021-08-23 13:08:12.217 WARN CHUNK_FOSSILIZE Chunk 15108394bca8521219947eff9919a48a15421b48e72ae888e9a96680a3f6601d is already a fossil
2021-08-23 13:08:12.254 WARN CHUNK_FOSSILIZE Chunk ff923bdd302b88375c57bbb06b7cdfd84b64565845ea11dcda29b423795088fe is already a fossil
2021-08-23 13:08:12.306 WARN CHUNK_FOSSILIZE Chunk 1921a6f74f002aae63599831934dd80b89cdd9c148971906b8718a9057e23945 is already a fossil
2021-08-23 13:08:12.351 WARN CHUNK_FOSSILIZE Chunk 9426a0d1104494809d47ff5c136620f87c63184e70265b6a9fe3fc6769eda133 is already a fossil
2021-08-23 13:08:12.394 WARN CHUNK_FOSSILIZE Chunk 11e406b28f2e76e7f0d6b071fa19a8383cfb7c127aad0b754ddffcbe89b9f69c is already a fossil
2021-08-23 13:08:12.470 INFO FOSSIL_COLLECT Fossil collection 2 saved
2021-08-23 13:08:12.519 ERROR SNAPSHOT_DELETE Failed to delete the snapshot Databases at revision 26: file does not exist
Failed to delete the snapshot Databases at revision 26: file does not exist 

Yes, this works directly over sftp from a different Linux host on the same network as Duplicacy.

sftp>
sftp> pwd
Remote working directory: /DUPLICACY/snapshots/Unraid-Containers
sftp> ls
66  67  68  69  70
sftp> rm 66
Removing /DUPLICACY/snapshots/Unraid-Containers/66
sftp> ls
67  68  69  70
sftp>

Just ran this again,

Running prune command from /cache/localhost/all
Options: [-log prune -storage STORAGE -id Unraid-Containers -keep 0:1 -keep 1:2]
2021-08-25 08:50:30.860 INFO STORAGE_SET Storage set to sftp://user@domain
2021-08-25 08:50:31.858 INFO RETENTION_POLICY Keep no snapshots older than 1 days
2021-08-25 08:50:52.371 INFO SNAPSHOT_DELETE Deleting snapshot Unraid-Containers at revision 67
2021-08-25 08:50:52.381 INFO SNAPSHOT_DELETE Deleting snapshot Unraid-Containers at revision 68
2021-08-25 08:50:52.389 INFO SNAPSHOT_DELETE Deleting snapshot Unraid-Containers at revision 69

2021-08-25 08:51:02.755 WARN CHUNK_FOSSILIZE Chunk 70748c566548a3e8d3ba47abfc21fab03283b237799cf100c14ad15a25676a93 is already a fossil
2021-08-25 08:51:02.927 WARN CHUNK_FOSSILIZE Chunk 45404e2111537fff747ab82ed1e93c01e448ca14e3c50eb224ec9e543d2f07e1 is already a fossil

While it was fossilizing chunks, I ran:

sftp> cd Unraid-Containers/
sftp> ls
67  68  69  70
sftp> rm 67
Removing /DUPLICACY/snapshots/Unraid-Containers/67
sftp> rm 68
Removing /DUPLICACY/snapshots/Unraid-Containers/68
sftp> rm 69
Removing /DUPLICACY/snapshots/Unraid-Containers/69
sftp> ls
70
sftp>

And we have completed the job “successfully”

2021-08-25 08:56:32.318 INFO FOSSIL_COLLECT Fossil collection 1 saved
2021-08-25 08:56:32.347 INFO SNAPSHOT_DELETE The snapshot Unraid-Containers at revision 67 has been removed
2021-08-25 08:56:32.387 INFO SNAPSHOT_DELETE The snapshot Unraid-Containers at revision 68 has been removed
2021-08-25 08:56:32.418 INFO SNAPSHOT_DELETE The snapshot Unraid-Containers at revision 69 has been removed

The job right after, for Databases, failed stating that the file does not exist, yet this is not true, and I did not delete any files manually for this job.

Running prune command from /cache/localhost/all
Options: [-log prune -storage STORAGE -id Databases -keep 0:1800 -keep 30:180 -keep 7:30 -keep 1:7]
2021-08-25 08:56:32.778 INFO STORAGE_SET Storage set to sftp://user@domain
2021-08-25 08:56:34.187 INFO RETENTION_POLICY Keep no snapshots older than 1800 days
2021-08-25 08:56:34.187 INFO RETENTION_POLICY Keep 1 snapshot every 30 day(s) if older than 180 day(s)
2021-08-25 08:56:34.187 INFO RETENTION_POLICY Keep 1 snapshot every 7 day(s) if older than 30 day(s)
2021-08-25 08:56:34.187 INFO RETENTION_POLICY Keep 1 snapshot every 1 day(s) if older than 7 day(s)
2021-08-25 08:56:54.075 INFO FOSSIL_COLLECT Fossil collection 1 found
2021-08-25 08:56:54.075 INFO FOSSIL_POSTPONE Fossils from collection 1 can't be deleted because deletion criteria aren't met
2021-08-25 08:56:54.075 INFO SNAPSHOT_DELETE Deleting snapshot Databases at revision 27
2021-08-25 08:56:54.077 INFO SNAPSHOT_DELETE Deleting snapshot Databases at revision 28
2021-08-25 08:56:54.078 INFO SNAPSHOT_DELETE Deleting snapshot Databases at revision 29
2021-08-25 08:57:03.353 WARN CHUNK_FOSSILIZE Chunk 9426a0d1104494809d47ff5c136620f87c63184e70265b6a9fe3fc6769eda133 is already a fossil
2021-08-25 08:57:03.515 WARN CHUNK_FOSSILIZE Chunk 39a6b1be7f747097ec109d42a1208ea63575c34f8953e6aedbd8fbb690ecfea8 is already a fossil
2021-08-25 08:57:03.860 WARN CHUNK_FOSSILIZE Chunk 11e406b28f2e76e7f0d6b071fa19a8383cfb7c127aad0b754ddffcbe89b9f69c is already a fossil
2021-08-25 08:57:04.025 INFO FOSSIL_COLLECT Fossil collection 2 saved
2021-08-25 08:57:04.079 ERROR SNAPSHOT_DELETE Failed to delete the snapshot Databases at revision 27: file does not exist
Failed to delete the snapshot Databases at revision 27: file does not exist

sftp> cd Databases/
sftp> ls
1   22  27  28  29  30  31  32  33  34  35  36  37  38  39  40  41  42  43  44  45  46  47  48  49  50  51  52  53  54
55  56  57  58  59
sftp>

Any chance you’re using a Synology NAS as the storage? Their version of the sftp server may act up on long paths: Chunk uploading error with Synology


@gchen, yes. Awesome, this has been fixed; it seems that the / is the fix. Is this in any of the official documentation? Maybe a note on the SFTP page would be ideal, or perhaps Duplicacy could check whether it’s a Synology NAS and add that extra / automatically? I understand it’s not your issue, but thank you very much for the link to the fix.
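
For what it’s worth, my reading of the linked thread is that the extra / makes the storage path absolute instead of relative to the SFTP user’s home directory, i.e. something along these lines (host and path are placeholders, and this is my understanding rather than official guidance):

# before
sftp://user@nas.example.com:1234/DUPLICACY
# after, with the extra /
sftp://user@nas.example.com:1234//DUPLICACY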


Better yet, perhaps the WebUI could run a short test of the SFTP server that is about to be configured, to make sure it can read, write, and delete files the way Duplicacy would need to. That would also detect other problems – like bad permissions – at the stage of adding the storage.
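
Until something like that exists, a manual probe along the same lines (a hedged sketch: host, port, and path are placeholders, and sftp batch mode needs key-based authentication since it cannot prompt for a password):

$ echo probe > /tmp/duplicacy-probe
$ sftp -P 1234 -b - me@domain.com <<'EOF'
cd DUPLICACY
put /tmp/duplicacy-probe
ls duplicacy-probe
rm duplicacy-probe
EOF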

There is a “note for Synology users” here: Supported storage backends, but unless you have already had that issue you would not go reading it.

