Check storage/configuration integrity after prune problems

After collecting diagnostic info while configuring HTTPS on a QNAP, I inadvertently ended up with two duplicacy_web processes running, due to a known bug in stopping the QNAP qpkg. As a result, duplicate backup/prune/copy/check schedules ran in parallel.

The backup, copy, and check jobs ran without warnings or errors. One prune job failed:

2021-01-17 00:32:16.255 WARN CHUNK_DELETE Failed to remove the file chunks/b7/e3146349276347704dc5ce02883cd72d5df059085d53647fb52aab2205433b.fsl: URL request 'https://api000.backblazeb2.com/b2api/v1/b2_delete_file_version' returned 400 File not present: chunks/b7/e3146349276347704dc5ce02883cd72d5df059085d53647fb52aab2205433b 4_zb3083158c24667c375580610_f113621029019f94b_d20210115_m092030_c000_v0001076_t0014
...
2021-01-17 00:32:17.072 ERROR FOSSIL_COLLECT Failed to read the fossil collection file fossils/47: open /share/CACHEDEV1_DATA/.qpkg/Duplicacy/.duplicacy-web/repositories/localhost/all/.duplicacy/cache/Backblaze-B2/fossils/47: no such file or directory

All other prune jobs logged WARN CHUNK_FOSSILIZE warnings of the form "Chunk ... is already a fossil".

Copy jobs showed discrepancies between the number of chunks reported as "to copy" and the number actually copied, e.g.:

2021-01-17 02:10:07.207 INFO SNAPSHOT_COPY Chunks to copy: 5, to skip: 111, total: 116
2021-01-17 02:10:07.299 INFO COPY_PROGRESS Skipped chunk 5744a3f6248a28123d20bb80e644fb1f6e7c302a0f1a46297a9df354b9e5951d (1/5) 80KB/s 00:00:00 20.0%
2021-01-17 02:10:08.531 INFO COPY_PROGRESS Copied chunk 1d5c621448a41476e46abb173f1fc1fc934d537076b58a5f3398cfe133d9184d (2/5) 4.02MB/s 00:00:01 40.0%
2021-01-17 02:10:09.350 INFO COPY_PROGRESS Copied chunk 4c08280f30cb685816c6fe1c490becf3b4d52567f81c5a39ee55418f493a0a97 (3/5) 3.51MB/s 00:00:01 60.0%
2021-01-17 02:10:10.038 INFO COPY_PROGRESS Copied chunk 86c98bb2296c9ea52765883dbdd072f1f5803db9f9a0864dfe5d4abb217226b5 (4/5) 2.66MB/s 00:00:00 80.0%
2021-01-17 02:10:10.060 INFO COPY_PROGRESS Skipped chunk e8877b3153eb26dbeadb10c6276849bb46b3b6468491991019ed8a7ca1638e88 (5/5) 2.76MB/s 00:00:00 100.0%
2021-01-17 02:10:10.151 INFO SNAPSHOT_COPY Copied 3 new chunks and skipped 113 existing chunks
2021-01-17 02:10:10.639 INFO SNAPSHOT_COPY Copied snapshot Duplicacy-config at revision 93

Subsequent check jobs on both the local and B2 storages were successful, as was a check -chunks on the local storage.

Should I do anything else to check the integrity of the storages? Could this have corrupted the web UI configuration/cache? If so, how can I check/recover?

The fossil collection files may be corrupted if you run multiple prune jobs, but this only happens if they finish at the same time, which is very unlikely. If you're concerned about this, you can run a prune job with -exhaustive -exclusive while no other backups are running to clean up the storage.
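For reference, a rough sketch of what that looks like when run from the CLI rather than the web UI (the repository path and storage name below are placeholders, not values from this thread):

# Run from the repository directory, with no other backup/copy/check/prune
# jobs active against this storage; -exclusive assumes exclusive access.
cd /path/to/repository
duplicacy prune -exhaustive -exclusive -storage B2

In the web UI, the equivalent should be adding -exhaustive -exclusive to the prune job's options, again making sure nothing else runs against that storage at the same time.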


Running prune -exhaustive -exclusive -dry-run on the local storage logged two CHUNK_UNREFERENCED messages and 856 FOSSIL_UNREFERENCED messages, e.g.:

prune-20210118-082618.log:2021-01-18 08:32:13.824 INFO FOSSIL_UNREFERENCED Found unreferenced fossil ff/4dac635e27a609e5b2ad3936a9a699e90be77c236c0e7002c263c11baf8dec.fsl
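For context, this is roughly how such a dry run can be invoked from the CLI (the repository path and storage name are placeholders; -dry-run only reports what would be deleted and removes nothing):

cd /path/to/repository
duplicacy prune -exhaustive -exclusive -dry-run -storage Local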

Is that to be expected after this incident?

You can manually delete the 2 chunks mentioned in the CHUNK_UNREFERENCED messages; for the fossils, I would wait until the next prune job runs, which may delete them.

But even if you run prune -exhaustive -exclusive now, it should not cause any problems.
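If you do delete them by hand, on a local disk storage the unreferenced chunks are plain files under the storage's chunks directory, named by the chunk ID split after the first two hex characters (as in the log lines above). A hedged sketch, with a placeholder storage path and chunk IDs (substitute the IDs from your own CHUNK_UNREFERENCED lines):

STORAGE=/path/to/local/storage
# A chunk with ID aabbcc...dd lives at $STORAGE/chunks/aa/bbcc...dd
rm "$STORAGE/chunks/aa/bbcc...dd"
rm "$STORAGE/chunks/ee/ff00...11"

Fossils carry a .fsl suffix, as in the FOSSIL_UNREFERENCED line above, but per the advice here they are best left for the next prune job to handle.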
