Duplicacy Web UI container resets fossil collection count

Hi,

I’m using Duplicacy Web UI on my NAS with this container: hotio/duplicacy - hotio.dev

I noticed that after updating Docker or just restarting my NAS, the fossil collection count resets to 1.

I don’t remember how many fossil collections I had before.

Now I have just:

INFO FOSSIL_COLLECT Fossil collection 1 saved

and no chunks were removed. Only snapshots.

Seems strange, what do you think?

I see that I have a fossils directory somewhere inside the Duplicacy cache. Deleting the cache should not be an issue, AFAIK. So I have no clue why this happened.

That is because the Temporary directory in the Settings page was reset. By default this directory is ~/.duplicacy-web/repositories, but I don’t know how it is mapped in your container.

Hi, I mapped the cache, config and logs volumes as stated on the container page I linked.
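
For reference, this is roughly what the run command looks like on my side (the host paths for config and logs follow the same pattern as my cache path below, and the port is just the duplicacy-web default, so treat it as a sketch of my setup rather than the container’s documented defaults):

docker run -d --name duplicacy \
  -v /srv/dev-disk-by-label-HC2/AppData/duplicacy/config:/config \
  -v /srv/dev-disk-by-label-HC2/AppData/duplicacy/cache:/cache \
  -v /srv/dev-disk-by-label-HC2/AppData/duplicacy/logs:/logs \
  -p 3875:3875 \
  hotio/duplicacy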

In Settings, the Temporary directory is set to /cache (it was the default). Here I have “0”, “1”, etc.:

root@DK:/srv/dev-disk-by-label-HC2/AppData/duplicacy/cache/localhost/all/.duplicacy/cache/storage_name# ll
total 12
drwxr-----+ 178 root root 4096 Nov 27 01:01 chunks
drwxr-----+   2 root root 4096 Nov 27 01:01 fossils
drwxr-----+   4 root root 4096 Apr 10  2021 snapshots

In logs there are only log files, and in config there are the bin directory, machine-id, the keyring and other configuration files.

So is this actually the issue I experienced?

Should I perform a prune with the -exhaustive flag?

Fossil collection files are stored under the fossils directory. If /cache is deleted then all these files will be gone.

You can run prune with -exhaustive to collect chunks that should have been deleted.
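
In the web UI you can simply add -exhaustive to the options of your existing prune job for one run; from the CLI, run inside the repository directory, it would look something like this (keep whatever -keep retention options you normally use):

duplicacy prune -a -exhaustive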

Thanks! Any idea why the cache was deleted? I have the volume mapped to the hard disk, so it should have stayed in place. Another issue I found after a reboot/update is that I have to unlock Duplicacy with the storage encryption password before it can start running the schedules.

So it actually seems that, for some strange reason, the mapped volumes were reset.

Maybe I can pass your advice to the container’s maintainer if this is an issue with the container itself. Or is it a Docker issue in general? I can’t figure out why the volumes would be emptied in the cases I described.

It also seems that the Docker Hub @saspus image mounts the same volumes, or at least the names are the same on the outside. Maybe the same behaviour happens with that image as well?

I edited my backup/check/prune/check schedule to let prune use -exhaustive once:


Running prune command from /cache/localhost/all
Options: [-log prune -storage storage_name -a -threads 10 -keep 0:7 -exhaustive]
2021-11-28 16:18:36.915 INFO STORAGE_SET Storage set to b2://xxx
2021-11-28 16:18:37.982 INFO BACKBLAZE_URL download URL is: https://xxx
2021-11-28 16:18:38.448 INFO RETENTION_POLICY Keep no snapshots older than 7 days
2021-11-28 16:18:39.092 INFO FOSSIL_COLLECT Fossil collection 3 found
2021-11-28 16:18:39.092 INFO FOSSIL_DELETABLE Fossils from collection 3 is eligible for deletion
2021-11-28 16:18:39.092 INFO PRUNE_NEWSNAPSHOT Snapshot DockerCompose revision 241 was created after collection 3
2021-11-28 16:18:39.100 INFO PRUNE_NEWSNAPSHOT Snapshot AppData revision 236 was created after collection 3
2021-11-28 16:18:39.487 INFO CHUNK_DELETE The chunk 75f6e844c64daec7114a6d0c1450ae08647406b32a941bba14b9020ed9325e34 has been permanently removed
2021-11-28 16:18:39.522 INFO CHUNK_DELETE The chunk 05fdcdc6677da342abc02de3351299e6a56ee93e010a14d2c6f38b9d96abc57c has been permanently removed
2021-11-28 16:18:39.530 INFO CHUNK_DELETE The chunk 95ffbe597f6fffa30d7e9f99bd9c37dee200e208dea983b788c9aa41da465ff3 has been permanently removed
2021-11-28 16:18:39.545 INFO CHUNK_DELETE The chunk 5c1c83724ced6928cf8a7df369c1cdd9a53181e74683ca7f056c3b9a37a32f86 has been permanently removed
2021-11-28 16:18:39.562 INFO CHUNK_DELETE The chunk eb00347e7feda0d585c18d66dd9db8e27f3159e230bf1e0c8b7927d8cde1281a has been permanently removed
2021-11-28 16:18:39.562 INFO CHUNK_DELETE The chunk 425851e6e5f2609cb005d99a21665f293874b2bf15c4ab75f2974f88589aeff1 has been permanently removed
2021-11-28 16:18:39.562 INFO CHUNK_DELETE The chunk 23437d2d59fd90043b70375e68611dcf9ff51f858a68779f4fa6e100bbcd4b8f has been permanently removed
2021-11-28 16:18:39.584 INFO CHUNK_DELETE The chunk 5f915b99fcc984dc5e4e6845c9c2610894ee4717a588a194e56216ae9d1f5d5e has been permanently removed
2021-11-28 16:18:39.599 INFO CHUNK_DELETE The chunk 75a200e773efbf1cf86bb5cd5756d959bc648df2dab2fa47bb77b21058b6c9ce has been permanently removed
2021-11-28 16:18:39.599 INFO CHUNK_DELETE The chunk d9147a33eb48a11a2794ef5da2887e8d82fbe09885e76e0091f63ef4b9d5121f has been permanently removed
2021-11-28 16:18:39.749 INFO CHUNK_DELETE The chunk b9c72eea7e947277aa78240568d0de38dee07de7f6fd05e3848cb4cc65c98d1c has been permanently removed
2021-11-28 16:18:39.764 INFO CHUNK_DELETE The chunk 870358f3fa4d5fbebd19459c376b2e88833f67a038eaf6594d2b73ddbde94427 has been permanently removed
2021-11-28 16:18:40.034 INFO CHUNK_DELETE The chunk 5f2ae116e25021599d5330daea063d97be6be2731ab8ebba5b91bbf39b168c20 has been permanently removed
2021-11-28 16:18:40.607 INFO CHUNK_DELETE The chunk 5ebc69b584abcb3744f790c0e5ab2482a2e4466d480a96fe23d9d6f03bb7cced has been permanently removed
2021-11-28 16:18:40.645 INFO CHUNK_DELETE The chunk 7dca4c448ee8d92964b01de0798527f5c23d97e7c853990db42cc1721ea11d9d has been permanently removed
2021-11-28 16:27:48.809 ERROR CHUNK_DELETE Failed to fossilize the chunk 5f2ae116e25021599d5330daea063d97be6be2731ab8ebba5b91bbf39b168c20: URL request 'https://api003.backblazeb2.com/b2api/v1/b2_hide_file' returned 400 File not present: chunks/5f/2ae116e25021599d5330daea063d97be6be2731ab8ebba5b91bbf39b168c20
Failed to fossilize the chunk 5f2ae116e25021599d5330daea063d97be6be2731ab8ebba5b91bbf39b168c20: URL request 'https://api003.backblazeb2.com/b2api/v1/b2_hide_file' returned 400 File not present: chunks/5f/2ae116e25021599d5330daea063d97be6be2731ab8ebba5b91bbf39b168c20

Probably because I did other backups after the issue arose, and so Duplicacy wanted to delete a chunk that was already deleted in a previous prune?

Bumping the thread.

Are you sure that the cache directory was emptied after a container reset? The fossil collection count can also go back to 1 if the previous collection 1 was processed and then deleted. This count doesn’t always go up – Duplicacy finds the first number not used starting from 1.

No, I’m not sure.

Well, I was at revision 20 or so, and I prune revisions older than 7 days, with one backup a day. So fossil collection 1 should have been deleted right after fossil collection 2 was saved. Or not?

The order doesn’t matter. Duplicacy will pick the first number that is free, so it is possible for the count to go back to 1 again.

Sorry, but I don’t get it. Could you clarify?

It starts at collection 1, so no chunks are removed because it’s the first one. Then I go to 2, and the first collection is erased along with its chunks.

Following this, it should go back to 1 again (but in that case no chunks would be deleted again, like in the first case? I was just wondering about this now).

What do you mean by “The order doesn’t matter”?

No it doesn’t work that way. Each collection file is evaluated and processed independently. If all fossils in collection 1 are no longer referenced by any backups, then these fossils are permanently removed from the storage, and collection 1 will be removed locally (it doesn’t have a remote copy). The next collection file created may reuse the number 1 again.

If any fossil in a collection file can’t be safely removed then this collection file will remain in the local cache and will be evaluated again on the next prune run.

OK, so I actually have:

root@DK:/srv/dev-disk-by-label-HC2/AppData/duplicacy/cache/localhost/all/.duplicacy/cache/storage_name/fossils# ll
total 4
-rw-r----- 1 root root 1905 Dec  3 01:01 4

With my latest nightly prune saying:

2021-12-03 01:01:14.964 INFO FOSSIL_COLLECT Fossil collection 3 found
2021-12-03 01:01:14.964 INFO FOSSIL_DELETABLE Fossils from collection 3 is eligible for deletion
2021-12-03 01:01:14.964 INFO PRUNE_NEWSNAPSHOT Snapshot AppData revision 242 was created after collection 3
2021-12-03 01:01:14.985 INFO PRUNE_NEWSNAPSHOT Snapshot DockerCompose revision 247 was created after collection 3
2021-12-03 01:01:14.987 INFO SNAPSHOT_DELETE Deleting snapshot DockerCompose at revision 238
2021-12-03 01:01:14.988 INFO SNAPSHOT_DELETE Deleting snapshot AppData at revision 233
2021-12-03 01:01:15.213 INFO CHUNK_DELETE The chunk 7dbbf543241afdffad898e2c8764e36cacfd8d6e1d7c9e4f7d36e311028596e3 has been permanently removed
2021-12-03 01:01:15.213 INFO CHUNK_DELETE The chunk 93c9aeddb680044962fd4e07fd3eff8de20897cc43183ac135a73c76bceedf9d has been permanently removed
2021-12-03 01:01:15.247 INFO CHUNK_DELETE The chunk 159c1296ef12f4cd52d2db14118240313a1b1e4365c92111db8daefdb2c51008 has been permanently removed
2021-12-03 01:01:15.247 INFO CHUNK_DELETE The chunk dce4c72e1ab21b055804798f129e477265de73deb6e594a00fd2dc8d43012ca0 has been permanently removed
2021-12-03 01:01:15.268 INFO CHUNK_DELETE The chunk b65c4b32dabf1aa409afba8ebbadf664108cca1921b70faa4e5bf794d1a08537 has been permanently removed
2021-12-03 01:01:15.317 INFO CHUNK_DELETE The chunk 30ef7a5f4c614f8841d0236f3d04af0c7563206b13a9725e1f6de0effcfd6543 has been permanently removed
2021-12-03 01:01:15.320 INFO CHUNK_DELETE The chunk 4dac44e73462fe5dd0b863943091d4df8d8b63a1a403fa706a72b289331493ad has been permanently removed
2021-12-03 01:01:15.326 INFO CHUNK_DELETE The chunk 441f30d7a7dc518f7bd9beba5b9aecda9bc3212d7b86d90d8bfbb350d14e5d89 has been permanently removed
2021-12-03 01:01:15.339 INFO CHUNK_DELETE The chunk 88b88a4a6093979d39ff70563edc25f7d3fbcf3ed5da3c5812c5d27f379a1ea6 has been permanently removed
2021-12-03 01:01:15.384 INFO CHUNK_DELETE The chunk ece05deb3950d19a1008cfce2bea2f9b06a074d88525c5b549841bd36c836b65 has been permanently removed
2021-12-03 01:01:15.647 INFO CHUNK_DELETE The chunk a65abc9b7d0bf388c12e83b97c2b3a4c9b94e48301d4afac78ec01e5daa4a566 has been permanently removed
2021-12-03 01:01:16.495 INFO CHUNK_DELETE The chunk 83faa74f07f65685d7ee679b7f345e4ca63b35b62bf8799f0ca117a30ec0a665 has been permanently removed
2021-12-03 01:01:16.504 INFO CHUNK_DELETE The chunk 373d1fe0c11eecd3249f0f4b346a7e8104eb1122fcb466c2720a4ebf0199af36 has been permanently removed
2021-12-03 01:01:16.505 INFO CHUNK_DELETE The chunk 0ffeed81b08b6568e6dafe2aba87a0b2b044ab6f6ccd43851d1039bca6c6c0f3 has been permanently removed
2021-12-03 01:01:16.508 INFO CHUNK_DELETE The chunk 4ae68a6f456581b399de44252f60b7a69691c7b97dd9698b2eefb08763443a43 has been permanently removed
2021-12-03 01:01:16.553 INFO CHUNK_DELETE The chunk cc7b392057f4c7f14fa40d7a58139d438812a0d0f94df4a8daa62a43dd8a969a has been permanently removed
2021-12-03 01:01:23.584 INFO FOSSIL_COLLECT Fossil collection 4 saved
2021-12-03 01:01:23.689 INFO SNAPSHOT_DELETE The snapshot AppData at revision 233 has been removed
2021-12-03 01:01:23.754 INFO SNAPSHOT_DELETE The snapshot DockerCompose at revision 238 has been removed

Only fossil collection 4 is present, so logically I would expect the next prune to use a number between 1 and 3.
You say “The next collection file created may reuse the number 1 again”, so I still don’t get the logic behind it.

I’ll see what happens with tonight’s prune.


Prune performed, now I have:

root@DK:/srv/dev-disk-by-label-HC2/AppData/duplicacy/cache/localhost/all/.duplicacy/cache/storage_name/fossils# ll
total 4
-rw-r----- 1 root root 2142 Dec  4 01:01 5

From the prune log:

2021-12-04 01:01:43.538 INFO FOSSIL_COLLECT Fossil collection 4 found
2021-12-04 01:01:43.538 INFO FOSSIL_DELETABLE Fossils from collection 4 is eligible for deletion
2021-12-04 01:01:43.538 INFO PRUNE_NEWSNAPSHOT Snapshot AppData revision 243 was created after collection 4
2021-12-04 01:01:43.596 INFO PRUNE_NEWSNAPSHOT Snapshot DockerCompose revision 248 was created after collection 4 

[...] 

2021-12-04 01:01:51.037 INFO FOSSIL_COLLECT Fossil collection 5 saved
2021-12-04 01:01:51.106 INFO SNAPSHOT_DELETE The snapshot DockerCompose at revision 239 has been removed
2021-12-04 01:01:51.168 INFO SNAPSHOT_DELETE The snapshot AppData at revision 234 has been removed

So why did it go to 5 if 4 was locally deleted?

Hi @gchen, could you provide any further info about this?

Thanks!

The new collection number will always be one more than the largest one among existing collections. So it will keep going up if there is at least one collection that can’t be processed and has to be deferred to the next run. But, if all collections can be processed and thus deleted, then in the next run the collection number will start from 1 again.
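
As a rough sketch of that rule (this isn’t the actual code, just an illustration based on the local fossils directory inside your container, where each pending collection is a file named by its number):

fossils=/cache/localhost/all/.duplicacy/cache/storage_name/fossils
largest=$(ls "$fossils" 2>/dev/null | sort -n | tail -n 1)
echo "next collection number: $(( ${largest:-0} + 1 ))"    # prints 1 when the directory is empty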

I see. So in my example, when collection 4 was deleted I would expect it to start from 1 again, because no other collections to be processed were present. Instead, collection 5 was created. That’s my point, and that is what I don’t understand.

I mean that collection 4 was not deferred but deleted, so it’s as if no collections were present; yet instead of starting from 1 again, collection 5 was created. I hope I’m describing this accurately, and sorry for reiterating on this. It’s a minor thing, but it’s worth properly understanding how it works.