Was about to make a post about the exact same problem which I encountered a few days ago. In testing Vertical Backup and trying to establish a good retention period for prune, I ran out of disk space. Oops 
Running out of disk space for the VB/Duplicacy storage is a bad idea!
Firstly, it created a 0-byte snapshot file, which I had to delete manually before even a check could run (it definitely didn't like the 0-byte file).
Running prune, especially with the -exclusive option, on revisions which have missing chunks creates even more missing chunks, because Duplicacy stops with an error at the first problem it encounters and is unaware that it has already deleted a whole lot of chunks. The snapshot file remains.
Remedy: manually delete the affected snapshot files, then clean up with prune -exhaustive.
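For reference, the cleanup I ended up doing looked roughly like this. The storage layout and snapshot ID (`vm1`) are illustrative placeholders; here I simulate the storage in a temp directory so the commands are safe to try:

```shell
# Illustrative only: simulate a storage containing a zero-byte snapshot file
STORAGE=$(mktemp -d)
mkdir -p "$STORAGE/snapshots/vm1"
printf 'meta' > "$STORAGE/snapshots/vm1/1"   # healthy revision 1
: > "$STORAGE/snapshots/vm1/2"               # broken zero-byte revision 2

# Zero-byte snapshot files break even a plain check, so delete them first:
find "$STORAGE/snapshots" -type f -size 0 -print -delete

ls "$STORAGE/snapshots/vm1"   # revision 1 survives, revision 2 is gone

# Then, from a repository initialized against the real storage, reclaim
# the now-unreferenced chunks:
#   duplicacy prune -exhaustive
```

Point `STORAGE` at your actual storage path (and back it up first) if you do this for real.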
One other issue I encountered is that check also stops at the first error, such as a missing chunk, which makes it tiresome to get a complete list of broken snapshots. (In my case, I have a performance bottleneck I'm still trying to get to the bottom of, which means running check takes practically half a day just to list all chunks!)
Suggestion: Duplicacy should first rename the snapshot file to, say, <revision number>.del - just as it renames chunks into fossils (.fsl) - and only then delete the chunks. On subsequent runs, it can detect such unfinished snapshots, treat them specially, and not error out when it finds chunks that are already deleted.
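A minimal sketch of what that two-phase delete could look like (all names here - `delete_snapshot`, `resume_unfinished_deletions`, the `.del` suffix handling - are hypothetical, not Duplicacy's actual internals):

```python
import os

DEL_SUFFIX = ".del"  # analogous to the .fsl fossil suffix used for chunks

def delete_snapshot(snapshot_path, chunk_paths):
    """Two-phase delete: rename the snapshot first, then drop its chunks.

    If the process dies mid-way, the leftover '<revision>.del' file tells a
    later run that deletion was in progress, so already-missing chunks are
    expected rather than an error.
    """
    marker = snapshot_path + DEL_SUFFIX
    if not os.path.exists(marker):      # phase 1: mark the snapshot
        os.rename(snapshot_path, marker)
    for chunk in chunk_paths:           # phase 2: remove the chunks
        try:
            os.remove(chunk)
        except FileNotFoundError:
            pass                        # already gone: fine on a resumed run
    os.remove(marker)                   # phase 3: finish the deletion

def resume_unfinished_deletions(snapshot_dir, chunks_for):
    """On startup, finish any deletion a crashed run left behind."""
    for name in os.listdir(snapshot_dir):
        if name.endswith(DEL_SUFFIX):
            revision = name[: -len(DEL_SUFFIX)]
            for chunk in chunks_for(revision):
                try:
                    os.remove(chunk)
                except FileNotFoundError:
                    pass  # expected: the earlier run got partway through
            os.remove(os.path.join(snapshot_dir, name))
```

The point is that the rename is cheap and atomic, so there is never a window where a normal-looking snapshot file points at chunks that are half-deleted.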
Similarly, check is a read-only operation (unless using the -resurrect option). It would be nice if it didn't abort once it found the first missing chunk - it should carry on and give a complete report.
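The check change amounts to collecting errors instead of raising on the first one. Something like this sketch (hypothetical names, not the real codebase):

```python
def check_snapshots(snapshots, chunk_exists):
    """Verify every snapshot, reporting *all* missing chunks
    rather than aborting on the first one.

    snapshots:    mapping of revision -> list of chunk ids it references
    chunk_exists: callable that tests whether a chunk is present in storage
    """
    report = {}
    for revision, chunks in snapshots.items():
        missing = [c for c in chunks if not chunk_exists(c)]
        if missing:
            report[revision] = missing  # record the damage and keep going
    return report  # empty dict means the storage is consistent
```

One pass over the storage then yields the complete list of broken snapshots, instead of half a day per missing chunk.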
