Single missing chunk after first backup, every time

Alright all, I need some help here. I’m new to Duplicacy, and I’ve researched the missing-chunks error and read the advice on how to deal with it, but it’s not really helping me. Or else I’m doing something wrong without realizing it.

I’m running Duplicacy as a Docker container on my unRAID server, using the GUI. I have configured a Storage with my Backblaze B2 account, with a dedicated bucket for Duplicacy. I made sure to set the bucket’s File Lifecycle to Keep All Versions, as I read this is a must for B2 storage to work with Duplicacy. It connected with my API key just fine too.

Next I configured a handful of Backups for my various shares on unRAID. I believe these are all configured OK; they seem to work as expected.

I set up a Schedule for all these backups, followed by a Prune step to clean out some old revisions, and finally a Check step to verify everything and update the statistics. Here’s where I run into issues. I run my Schedule for the first time, and all the Backups succeed. But then the Prune fails, followed by the Check failing too. According to the logs, the failure is caused by a single chunk that can’t be found.
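For reference, in CLI terms I believe the schedule runs roughly this sequence (the repository path and the -keep retention policy below are just placeholders, not my actual settings):

    cd /path/to/repository                 # placeholder repository path
    duplicacy backup -stats                # one backup job per share
    duplicacy prune -keep 7:30 -keep 1:7   # example retention policy
    duplicacy check -stats                 # verify chunks, update statistics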

The first time, I thought maybe it was a fluke. So I completely wiped the B2 bucket, deleted all my Backups, Schedules, Storages, etc from Duplicacy and started over completely fresh. Waited hours and hours for the backups to complete. Same issue. One single missing chunk.

I have no clue what’s causing this. Does anyone have any idea what I’m doing wrong that is causing just one chunk to go missing during the very first backup? Any help would be hugely appreciated.

This can happen if prune was ever interrupted.

Prune deletes chunks first, and then deletes snapshots. If it is interrupted after it has deleted chunks but before it has deleted the snapshots, those snapshots, which are now missing chunks, remain in the storage and will keep failing checks.
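To make the failure window concrete, here is a rough sketch of the ordering, using Duplicacy’s storage layout of chunks/ and snapshots/<snapshot-id>/<revision>. This is illustrative shell with placeholder names, not the actual implementation:

    # 1) unreferenced chunks are deleted first
    rm chunks/abcdef0123
    #    <-- an interruption in this window is the dangerous case
    # 2) the pruned snapshot files are deleted last
    rm snapshots/my-share/42
    # If prune dies between 1) and 2), revision 42 survives in the
    # storage but references chunks that no longer exist, so every
    # later check fails on those missing chunks.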

To recover:

  1. Run check -persist
  2. Collect the failed snapshot IDs from the log
  3. Manually delete the corresponding snapshot files from the bucket, at snapshots/<snapshot-id>/<N>
  4. Run prune -exhaustive to clean up any orphaned chunks that still remain.
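In CLI terms the recovery looks roughly like this (the snapshot ID and revision number are placeholders; step 3 can just as well be done through the B2 web UI):

    # 1) check all revisions, continuing past the first missing-chunk error
    duplicacy check -persist
    # 2) note the snapshot IDs and revision numbers flagged in the log
    # 3) delete those revision files from the storage by hand, i.e. the
    #    objects at snapshots/<snapshot-id>/<N> in the B2 bucket
    # 4) remove any chunks that are no longer referenced by anything
    duplicacy prune -exhaustive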

Hmmm, wasn’t this changed because of exactly this scenario, so that prune first deletes the snapshot file and only afterwards the data chunks? I read it somewhere in the changelogs/forum…

Yours

Lopiuh

No, gchen explicitly did not want that, because it would leave trash in the datastore.

The alternative solution suggested was a two-step deletion. It sounded reasonable (the same approach is already used for fossils), but it wasn’t implemented.
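For context, the existing fossil mechanism already works this way for chunks. A rough sketch of the idea (placeholder chunk name, not the actual code):

    # step 1 (fossil collection): rename instead of delete
    mv chunks/abcdef0123 chunks/abcdef0123.fsl
    # step 2 (fossil deletion, in a later prune run, once every backup ID
    # has produced a new snapshot that doesn't reference the fossil):
    rm chunks/abcdef0123.fsl
    # An interruption at any point leaves either a live chunk or a
    # resurrectable fossil, never a snapshot pointing at missing data.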

My personal approach to this is to never prune. Storage is cheap, so why bother?