[Feature Request] better error handling strategy

Right now Duplicacy simply stops when an error occurs.

This is potentially a huge disadvantage when running large backups, since recovering from an interrupted backup can take hours (Duplicacy seems to re-scan a lot of the chunks every time just to skip them).

Possible strategies:

  • a retry flag that would try X times to re-upload before failing
  • a fallback strategy that either interrupts the backup or skips a failing chunk

So the command could look like duplicacy backup --retries 5 --on-fail skip

Duplicacy already retries on a number of recoverable errors, namely 4xx HTTP errors (though not all of them).

I was thinking more of high-level error handling rather than retries at the web API layer, including failures on the filesystem, so that those retries would also cover RMount. For example:

Failed to upload the chunk e085b606b5461e211645c8915f6392c98f195a95cbd50b8b089f060175d0807d: rename /mnt/dropbox/back_ser/chunks/e0/85b606b5461e211645c8915f6392c98f195a95cbd50b8b089f060175d0807d.wlymqoca.tmp /mnt/dropbox/back_ser/chunks/e0/85b606b5461e211645c8915f6392c98f195a95cbd50b8b089f060175d0807d: input/output error