Please describe what you are doing to trigger the bug:
This bug is readily reproducible, although it confused me at first. When restoring my large backup (80-100 TiB), ANY network issue (e.g. a read timeout) corrupts the restore process. The corruption manifests as subsequent chunks starting to error out, until the process crashes after roughly an hour.
Please describe what you expect to happen (but doesn’t):
When a network error occurs, my expectation is that any interrupted / in-progress work is discarded and retried without error, ideally with some kind of exponential backoff. At a minimum, the process shouldn't become corrupted.
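To illustrate the behavior I expect, here is a minimal Go sketch of retrying a chunk download with exponential backoff while throwing away any partial state from the failed attempt. This is not Duplicacy's actual code; `downloadChunk` and `fetchWithBackoff` are hypothetical names used only for illustration.

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// downloadChunk is a stand-in for whatever actually fetches and decrypts a
// chunk from the storage backend; it is hypothetical, not Duplicacy's API.
func downloadChunk(chunkID string) ([]byte, error) {
	// Simulate an occasional transient network failure.
	if rand.Intn(3) == 0 {
		return nil, errors.New("read timed out")
	}
	return []byte("chunk data for " + chunkID), nil
}

// fetchWithBackoff retries a chunk download with exponential backoff.
// Each attempt starts from a clean slate: any partially downloaded or
// partially decrypted buffer from the failed attempt is discarded
// rather than reused.
func fetchWithBackoff(chunkID string, maxAttempts int) ([]byte, error) {
	delay := time.Second
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		data, err := downloadChunk(chunkID)
		if err == nil {
			return data, nil
		}
		fmt.Printf("attempt %d for chunk %s failed: %v; retrying in %v\n",
			attempt, chunkID, err, delay)
		time.Sleep(delay)
		delay *= 2 // exponential backoff between attempts
	}
	return nil, fmt.Errorf("chunk %s failed after %d attempts", chunkID, maxAttempts)
}

func main() {
	data, err := fetchWithBackoff("dcdf8249...", 5)
	if err != nil {
		fmt.Println("restore aborted:", err)
		return
	}
	fmt.Printf("restored %d bytes\n", len(data))
}
```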
Please describe what actually happens (the wrong behaviour):
The corruption shows up as messages like “Failed to decrypt the chunk dcdf824930f8debed7e87dc828186eec1b789458e0c32006720692f0dfb9c02a: cipher: message authentication failed; retrying” or “Failed to decrypt the chunk ee94639e9d7048ef9d09c3ad5d440710f1bd4482132acfc317ea983ec7096d78: The storage doesn’t seem to be encrypted”.
The only way I have found to recover is to kill the process and restart the restore whenever a network issue occurs.
Here is an extended log as an example: Log Example - Duplicacy · GitHub
Version: 3.2.3 (254953)