Right now Duplicacy simply stops when an error occurs.
This is potentially a huge disadvantage when running large backups, since recovering from an interrupted backup can take hours (Duplicacy seems to re-process a lot of the chunks every time just to skip them).
Possible strategies could be:
- a retry flag that would try X times to re-upload a chunk before failing
- a fallback option defining whether a failing chunk interrupts the backup or is skipped
So the command could look like `duplicacy backup --retries 5 --on-fail skip`.
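Just to illustrate the idea, here is a rough Go sketch of the retry-then-fallback behaviour per chunk. Nothing here is actual Duplicacy code: `uploadChunk`, `uploadWithRetry`, and the simulated failure are made up purely to show what the two flags could mean.

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// uploadChunk stands in for the real chunk upload; it fails randomly
// so the retry/fallback behaviour is visible when running the sketch.
func uploadChunk(id int) error {
	if rand.Intn(3) == 0 {
		return errors.New("simulated upload error")
	}
	return nil
}

// uploadWithRetry tries to upload a chunk up to `retries` times.
// If all attempts fail, it either skips the chunk or aborts the backup,
// depending on onFail ("skip" or "interrupt").
func uploadWithRetry(id, retries int, onFail string) error {
	var err error
	for attempt := 1; attempt <= retries; attempt++ {
		if err = uploadChunk(id); err == nil {
			return nil
		}
		fmt.Printf("chunk %d: attempt %d failed: %v\n", id, attempt, err)
		time.Sleep(500 * time.Millisecond) // simple backoff between retries
	}
	if onFail == "skip" {
		fmt.Printf("chunk %d: giving up after %d attempts, skipping\n", id, retries)
		return nil
	}
	return fmt.Errorf("chunk %d: aborting backup: %w", id, err)
}

func main() {
	// Equivalent of a hypothetical `duplicacy backup --retries 5 --on-fail skip`.
	for id := 0; id < 10; id++ {
		if err := uploadWithRetry(id, 5, "skip"); err != nil {
			fmt.Println(err)
			return
		}
	}
	fmt.Println("backup finished (possibly with skipped chunks)")
}
```

With `--on-fail skip` the backup would complete and only the affected chunks would need to be retried on the next run, instead of the whole backup being interrupted.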