I have been using Wasabi for 12 months, and on the 5th of October everything went to @#$% with errors similar to what you have seen, which seems to align with their known issue. So I would take Wasabi's claim that it is not them with a pinch of salt.
But I do wonder if Duplicacy could improve its chances of getting a successful backup if it did more retries, or if we could configure the number of retries, as I sometimes have a feeling that we get backup failures for minor or transient reasons that Duplicacy could work through if it kept retrying (a sketch of what I mean follows below). This is speculation on my part: maybe Duplicacy already does lots of retries and only fails once any more would be counter-productive, wasting resources on something with too high a failure rate.
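To make that concrete, here is a minimal Go sketch of the kind of configurable retry loop I have in mind, with exponential backoff so repeated attempts don't hammer a struggling backend. Everything in it is hypothetical: `uploadChunk`, `withRetries`, and the backoff schedule are my own illustration, not Duplicacy's actual code or API.

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// uploadChunk stands in for whatever call actually pushes data to the
// storage backend; the name is made up, not Duplicacy's real API.
func uploadChunk(id int) error {
	if rand.Intn(3) == 0 {
		return errors.New("transient storage error") // simulate a flaky backend
	}
	return nil
}

// withRetries runs op up to maxRetries+1 times with exponential backoff.
// maxRetries is the knob I'd like to be user-configurable.
func withRetries(maxRetries int, op func() error) error {
	var err error
	for attempt := 0; attempt <= maxRetries; attempt++ {
		if err = op(); err == nil {
			return nil
		}
		if attempt < maxRetries {
			// Back off 1s, 2s, 4s, ... before trying again.
			time.Sleep(time.Duration(1<<attempt) * time.Second)
		}
	}
	return fmt.Errorf("giving up after %d retries: %w", maxRetries, err)
}

func main() {
	if err := withRetries(4, func() error { return uploadChunk(42) }); err != nil {
		fmt.Println("backup failed:", err)
	} else {
		fmt.Println("chunk uploaded")
	}
}
```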
Anyway, I wonder if there are a few feature requests we should be adding to help improve things:
- The ability to specify the number of retries, or more valuably, to only fail after a certain number of consecutive failures (the `maxRetries` knob in the sketch above).
- Currently backups appear to either be successful (a backup revision is created) or fail (no backup is taken, despite the fact that most of the data might have been uploaded). I wonder if we really need a third status, WARNING: the backup completes and a backup revision is created, but the warning lists the files that were not backed up successfully, for whatever reason.
- Related to the above, it appears that Duplicacy is more of an “image” level backup of a subset of files than a “file” level backup program; it seems to want to back up everything or nothing, rather than at least successfully backing up the files it can. I understand it has historically been designed like this, but I wonder if that is holding it back a little, and whether we are often throwing the baby out with the bath water by not creating a backup revision just because one file can't be backed up. A more file-based approach might work better (see the sketch after this list).
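Putting the last two ideas together, here is a rough Go sketch of how a three-state outcome with per-file skipping could look. It is purely illustrative and assumes made-up names throughout: `BackupStatus`, `backupFile`, and `runBackup` are mine, not anything from Duplicacy's codebase.

```go
package main

import (
	"errors"
	"fmt"
)

// BackupStatus is the proposed third state; today it feels like only
// SUCCESS and FAILURE exist.
type BackupStatus string

const (
	Success BackupStatus = "SUCCESS" // every file backed up
	Warning BackupStatus = "WARNING" // revision created, but some files skipped
	Failure BackupStatus = "FAILURE" // no revision created at all
)

// BackupResult records the outcome; FailedFiles would feed the warning report.
type BackupResult struct {
	Status      BackupStatus
	FailedFiles map[string]error
}

// backupFile is a hypothetical stand-in for uploading one file.
func backupFile(path string) error {
	if path == "locked.db" {
		return errors.New("file is locked") // simulate one stubborn file
	}
	return nil
}

// runBackup backs up what it can, records what it can't, and still creates
// a revision as long as at least one file made it to storage.
func runBackup(files []string) BackupResult {
	res := BackupResult{Status: Success, FailedFiles: map[string]error{}}
	saved := 0
	for _, f := range files {
		if err := backupFile(f); err != nil {
			res.FailedFiles[f] = err // skip this file, keep going
			continue
		}
		saved++
	}
	switch {
	case saved == 0:
		res.Status = Failure // nothing saved: a genuinely failed backup
	case len(res.FailedFiles) > 0:
		res.Status = Warning // revision exists, but list the casualties
	}
	return res
}

func main() {
	res := runBackup([]string{"a.txt", "locked.db", "b.txt"})
	fmt.Println("status:", res.Status)
	for f, err := range res.FailedFiles {
		fmt.Printf("  skipped %s: %v\n", f, err)
	}
}
```

The design choice that matters is that FAILURE is reserved for the case where nothing at all could be saved; one stubborn file demotes the run to WARNING instead of throwing the whole revision away.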