If you're running prune and backup in parallel, please upgrade to 2.1.1

There was a bug in the implementation of the lock-free deduplication algorithm: it failed to include some new snapshots when determining which fossils were still referenced by the snapshots to be kept. As a result, fossils still referenced by these new snapshots could be mistakenly deleted, leading to the ‘missing chunks’ problem.

If you’re running prune and backup jobs in parallel, this bug may affect you and you should upgrade to 2.1.1.

Also run a check command on all snapshots and all storages to make sure there aren’t any missing chunks.
duplicacy.exe check -all

This bug is fixed by this commit included in the 2.1.1 release:


I just want to clarify - by “backup jobs in parallel” do you mean multiple backup jobs going to the same storage bucket?
Also, I can’t run the “check -all” command because I get an error saying the repository is not initialised. I don’t use the CLI version, which I guess is why it’s saying this. Is there a way to run this check from the GUI rather than having to resort to the CLI?

I’m wondering the same thing. If we are running our backups exclusively to their own locations, does this affect us? @zigzak, make sure you run that command using the CLI version from the repository location and any auth files you have. I’ve made this mistake several times. If I remember correctly, you can’t, for instance, be in the folder C:\Users\[username]\ and run the command C:\Duplicacy\MyDuplicacyRepo\duplicacy.exe check -all
You have to cd to that folder and run it from that location… it’s been a while. I could be getting mixed up, but it’s something to try.

Okay. I was partially right. You just have to run the command from the location your repo is in, but the exe can be somewhere else. So what I just did was cd to my repo location, then typed the full path to my exe to run it from somewhere else.
So for me,
E:\Scripts\duplicacy.exe check -all >> E:\Scripts\check-all-log.txt

Turns out I have 493 chunks missing. How do I re-upload them?

Try this guide: Fix missing chunks

So I thought about it, and I guess the only way to get them back would be if the storage location still had them. The only revision that is missing chunks is revision 1… which seems strange, since I’ve been running this for a year and still have revision 1. I was going to just delete that snapshot and not worry about the missing chunks, but upon further investigation, I seem to have 382 snapshots without a single one missing.

Upon even further investigation, it seems my prune command tries to delete these and fails every time, sometimes before it even starts deleting chunks. Sometimes it gets far enough to report things like “The chunk [blah] referenced by snapshot [blah] revision 112 does not exist”, which is curious, since the check -all command only reported chunks missing for revision 1. Eventually it fails with “Failed to locate the path for the chunk […] net/http: TLS handshake timeout”

I’m going to upgrade to the latest version, run a backup, a prune, and a check -all once more, and report back my findings, either to say all is clear with the new version or that I need help. All in all, I’m fairly confident that the only missing data I have is from things that should have been pruned anyway, so fingers crossed for now.

If you still have problems, you should open a new thread in #support.


If you don’t back up multiple repositories to the same storage location, then this bug won’t affect you.

You’re likely experiencing this bug: Deleted chunk during backup. That is, an aborted prune command can leave some snapshots undeleted.

Thanks. I was looking for related issues in the forum. I was actually just reading this. Ha!

I ran a backup and a prune, and it’s still leaving all of the snapshots in there. It did, however, resurrect a bunch of fossils that I guess it decided it needed after all. I’ll open a support thread in a minute. I’m waiting for the prune command to finish a second time and will probably run a check as well (two backups and two prunes, alternating).