Prune -all not working after changing odb-token

Strange issue on Windows using duplicacy-cli 2.7.2 (175ADB). I back up a few repository directories and use the same OneDrive directory as the storage for all of them. Every night a batch file changes into each repository directory in turn and runs the backup, then finally runs “prune -all -keep 0:14” so that only the last 14 days are kept. This was working fine until I enabled MFA on the Microsoft account and had to download a new odb-token.json.
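
For context, the batch file is essentially along these lines (the paths and the second repository name are placeholders; “scans” is the real snapshot ID from the output further down):

rem nightly-backup.bat - a rough sketch, not the exact script
cd /d C:\Repos\scans
duplicacy backup -vss
cd /d C:\Repos\documents
duplicacy backup -vss
rem one prune at the end; -all applies the retention policy to every snapshot ID in the storage
duplicacy prune -all -keep 0:14
duplicacy list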

The backups still seem to be running OK, and the output from the prune -all command shows it removing the snapshots and the chunks with no errors. But it is actually only removing the chunks, not the snapshots. The next night it tries to delete the same snapshots again and reports that the chunks are already fossils.

The check command complains that many chunks referenced by snapshots are missing, because the snapshots didn’t get deleted but the chunks did.

I have found that if I go into each repository and run duplicacy prune -keep 0:14 without the -all option, it does correctly delete the snapshots. I will find out over the next week whether running prune from each directory without -all has fixed the problem.
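
The per-repository version is just a loop that runs prune locally in each directory, something like this (directory names are again placeholders):

rem sketch of the workaround - prune each repository without -all
for %%D in (C:\Repos\scans C:\Repos\documents) do (
    cd /d %%D
    duplicacy prune -keep 0:14
)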

The new odb-token was downloaded on 19th Nov, so snapshots created from 5th Nov onwards have not been deleted. Some output from last night’s prune command is:

Deleting snapshot scans at revision 30
Deleting snapshot scans at revision 31
Deleting snapshot scans at revision 32
Deleting snapshot scans at revision 33
Deleting snapshot scans at revision 34
Deleting snapshot scans at revision 35
Deleting snapshot scans at revision 36
Deleting snapshot scans at revision 37
Deleting snapshot scans at revision 38
Deleting snapshot scans at revision 39
Deleting snapshot scans at revision 40
Deleting snapshot scans at revision 41
Deleting snapshot scans at revision 42
Deleting snapshot scans at revision 43
Deleting snapshot scans at revision 44
Deleting snapshot scans at revision 45
Deleting snapshot scans at revision 46
Deleting snapshot scans at revision 47
Deleting snapshot scans at revision 48

But the list command, which runs after prune, shows that everything created from the 5th onwards is still there:

Snapshot scans revision 30 created at 2020-11-05 00:49 -vss
Snapshot scans revision 31 created at 2020-11-06 00:37 -vss
Snapshot scans revision 32 created at 2020-11-07 00:31 -vss
Snapshot scans revision 33 created at 2020-11-08 00:15 -vss
Snapshot scans revision 34 created at 2020-11-09 00:17 -vss
Snapshot scans revision 35 created at 2020-11-10 00:34 -vss
Snapshot scans revision 36 created at 2020-11-11 00:33 -vss
Snapshot scans revision 37 created at 2020-11-12 00:41 -vss
Snapshot scans revision 38 created at 2020-11-13 00:35 -vss
Snapshot scans revision 39 created at 2020-11-14 00:28 -vss
Snapshot scans revision 40 created at 2020-11-15 00:17 -vss
Snapshot scans revision 41 created at 2020-11-16 00:17 -vss
Snapshot scans revision 42 created at 2020-11-17 00:41 -vss
Snapshot scans revision 43 created at 2020-11-18 00:43 -vss
Snapshot scans revision 44 created at 2020-11-19 01:02 -vss
Snapshot scans revision 45 created at 2020-11-20 15:16 -vss
Snapshot scans revision 46 created at 2020-11-21 00:29 -vss
Snapshot scans revision 47 created at 2020-11-22 00:29 -vss
Snapshot scans revision 48 created at 2020-11-23 00:27 -vss
Snapshot scans revision 49 created at 2020-11-24 00:51 -vss
Snapshot scans revision 50 created at 2020-11-25 00:48 -vss
Snapshot scans revision 51 created at 2020-11-26 01:18 -vss
Snapshot scans revision 52 created at 2020-11-27 00:55 -vss
Snapshot scans revision 53 created at 2020-11-28 00:44 -vss
Snapshot scans revision 54 created at 2020-11-29 00:15 -vss
Snapshot scans revision 55 created at 2020-11-30 00:16 -vss
Snapshot scans revision 56 created at 2020-12-01 00:55 -vss
Snapshot scans revision 57 created at 2020-12-02 00:43 -vss
Snapshot scans revision 58 created at 2020-12-03 00:46 -vss
Snapshot scans revision 59 created at 2020-12-04 00:41 -vss
Snapshot scans revision 60 created at 2020-12-05 00:48 -vss
Snapshot scans revision 61 created at 2020-12-06 00:18 -vss
Snapshot scans revision 62 created at 2020-12-07 00:18 -vss

I have now gone through the directories and pruned them one at a time. Most are OK, but one of them won’t prune because it fails with

Chunk 3124dc5a73faa2120e0e663a784d76a6ad895cc0d90dfe799c5ed37053fa21cc does not exist in the storage

and doesn’t continue. I am also unable to prune the revision referencing that chunk with “prune -r 110”, so I guess I need to delete it manually from OneDrive?

The thing that confused me is that the “Deleting snapshot xxxx at revision nn” message didn’t actually mean that the snapshot had been deleted; it had only been queued for deletion at the end of the current duplicacy run. When the run then hit an error because of the missing chunk, the snapshots were never removed, even though the unique chunks they referenced were already gone.

Since I was using -all to prune all the directories in one run, none of them ended up being pruned properly.

Can you follow this guide Fix missing chunks to see why that chunk is missing?

Right, Deleting snapshot ... doesn’t mean the snapshot has been deleted. There is another message, The snapshot xxx at revision xxx has been removed, when the snapshot is actually deleted.

I searched all the prune logs and found that chunk mentioned going back to September:
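
(For reference, these were pulled out with a plain findstr over the prune logs, which duplicacy keeps under the repository’s .duplicacy\logs folder; the repository path below is a placeholder.)

cd /d C:\Repos\scans\.duplicacy\logs
findstr /n 3124dc5a73faa2120e0e663a784d76a6ad895cc0d90dfe799c5ed37053fa21cc prune-log-*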

prune-log-20200916-002029:1877:Marked fossil 3124dc5a73faa2120e0e663a784d76a6ad895cc0d90dfe799c5ed37053fa21cc
prune-log-20200917-002555:1089:Deleted fossil 3124dc5a73faa2120e0e663a784d76a6ad895cc0d90dfe799c5ed37053fa21cc (collection 33)
prune-log-20200926-124459:949:Marked fossil 3124dc5a73faa2120e0e663a784d76a6ad895cc0d90dfe799c5ed37053fa21cc
prune-log-20200928-001511:981:Resurrected fossil 3124dc5a73faa2120e0e663a784d76a6ad895cc0d90dfe799c5ed37053fa21cc (collection 6)
prune-log-20201202-004319:3435:Marked fossil 3124dc5a73faa2120e0e663a784d76a6ad895cc0d90dfe799c5ed37053fa21cc
prune-log-20201204-004121:1689:Marked fossil 3124dc5a73faa2120e0e663a784d76a6ad895cc0d90dfe799c5ed37053fa21cc
prune-log-20201205-004830:563:Marked fossil 3124dc5a73faa2120e0e663a784d76a6ad895cc0d90dfe799c5ed37053fa21cc
prune-log-20201207-001827:156:Marked fossil 3124dc5a73faa2120e0e663a784d76a6ad895cc0d90dfe799c5ed37053fa21cc
prune-log-20201207-090055:1073:Deleted fossil 3124dc5a73faa2120e0e663a784d76a6ad895cc0d90dfe799c5ed37053fa21cc (collection 1)
prune-log-20201207-090055:6055:Deleted fossil 3124dc5a73faa2120e0e663a784d76a6ad895cc0d90dfe799c5ed37053fa21cc (collection 2)
prune-log-20201207-090055:15895:Marked fossil 3124dc5a73faa2120e0e663a784d76a6ad895cc0d90dfe799c5ed37053fa21cc
prune-log-20201207-095736:4415:Marked fossil 3124dc5a73faa2120e0e663a784d76a6ad895cc0d90dfe799c5ed37053fa21cc
prune-log-20201207-103552:357:Marked fossil 3124dc5a73faa2120e0e663a784d76a6ad895cc0d90dfe799c5ed37053fa21cc
prune-log-20201207-123948:8678:Marked fossil 3124dc5a73faa2120e0e663a784d76a6ad895cc0d90dfe799c5ed37053fa21cc
prune-log-20201207-144726:1281:Marked fossil 3124dc5a73faa2120e0e663a784d76a6ad895cc0d90dfe799c5ed37053fa21cc
prune-log-20201207-164606:512:Marked fossil 3124dc5a73faa2120e0e663a784d76a6ad895cc0d90dfe799c5ed37053fa21cc

There are several log files for 20201207 because that was when I was pruning revisions one at a time to find out which one was causing the problem. It looks like that chunk was deleted more than once.

I also looked back through the old email logs, and the last time that chunk was verified by check was 14-11-2020.

I also found in the email logs that another chunk had problems during check:
Failed to decrypt the chunk cd003631794ccdc19b6b4e7d6a5d192ee1f6bcc38a8bd505bbf96bc4d7e56f89: cipher: message authentication failed; retrying
Failed to decrypt the chunk cd003631794ccdc19b6b4e7d6a5d192ee1f6bcc38a8bd505bbf96bc4d7e56f89: cipher: message authentication failed; retrying
Failed to decrypt the chunk cd003631794ccdc19b6b4e7d6a5d192ee1f6bcc38a8bd505bbf96bc4d7e56f89: cipher: message authentication failed; retrying
Failed to decrypt the chunk cd003631794ccdc19b6b4e7d6a5d192ee1f6bcc38a8bd505bbf96bc4d7e56f89: cipher: message authentication failed

When this happens I usually delete the chunk from OneDrive and delete any snapshots that reference it, but it looks like I missed some snapshots, because for a while prune was failing on this other chunk not existing. That could be the cause of the problem.
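
In outline, the manual cleanup in that situation is roughly the following (the revision number is only an example; the corrupt chunk itself gets deleted through the OneDrive web interface):

rem find out which snapshots still reference the missing or corrupt chunk
duplicacy check -a
rem then remove each affected revision
duplicacy prune -r 110
rem a later prune -exhaustive can clean up any chunks left unreferenced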

The chunks probably get corrupted because of communication problems with OneDrive. I sometimes see errors like:
Response code: 504; retry after 748 milliseconds
Response code: 504; retry after 583 milliseconds

The corruption could be caused by this bug: OneDrive Business: ERROR DOWNLOAD_CHUNK, which has been fixed by the latest CLI version.

Thanks,
I am already on the latest CLI version, but the last time I saw “message authentication failed; retrying” was 7th November, before the 2.7.2 update. Maybe now that I have got it all stable again it will be OK.
It seems unlikely that it was actually caused by me adding MFA to the account - it was just a coincidence that the pruning stopped working the same day.