This is a bug that needs to be addressed. I’ve reported it before, years ago.
Yes, because there is no error. Every file that could be backed up was backed up correctly. It is not possible to back up files that are unreadable. Kind of obvious.
What we are actually discussing here is whether there is some threshold of failures that should warrant notifying the user.
Not aborting the backup, just notifying the user. That courtesy note I mentioned above, which duplicacy could issue when the number of files in the dataset changes drastically. Or maybe even when at least one file was skipped: to prevent constant nagging, users would ultimately have to write an exclusion pattern for those files, which keeps every message duplicacy emits meaningful (a zero-warning policy).
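To make the idea concrete, here is a minimal sketch of the kind of check I have in mind. It is not Duplicacy's actual code; the names (backupAll, backupFile, the warning text) are purely illustrative. The point is only that skipped files are counted and surfaced at the end instead of being silently ignored:

```go
// Hypothetical sketch of the warning policy described above.
package main

import (
	"fmt"
	"os"
)

func backupAll(paths []string) {
	var skipped []string

	for _, path := range paths {
		if err := backupFile(path); err != nil {
			// Don't abort the whole backup; remember the failure instead.
			skipped = append(skipped, fmt.Sprintf("%s: %v", path, err))
		}
	}

	// Zero-warning policy: any skipped file produces a visible warning.
	// To silence it, the user has to add an exclusion pattern, so every
	// remaining warning stays meaningful.
	if len(skipped) > 0 {
		fmt.Fprintf(os.Stderr, "WARNING: %d file(s) could not be read and were skipped:\n", len(skipped))
		for _, s := range skipped {
			fmt.Fprintln(os.Stderr, "  "+s)
		}
	}
}

func backupFile(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	// ... read and chunk the file here ...
	return nil
}

func main() {
	backupAll(os.Args[1:])
}
```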
For most users, transferring data takes time and costs money; by comparison, local compute is effectively free and abundant.
Untouched data that was already in a previous backup now gets disregarded because of an intermittent filesystem hiccup. That's simply not correct behavior.
You are trading away the mild inconvenience of spending extra CPU cycles rehashing content for the risk of skipping important, readable files. That makes zero sense to me.
The correct behavior is to protect as much of the user's data as possible. That's the goal. If that means re-hashing everything next time around, so what? CPU time is worthless; the user's data is priceless.
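For illustration only, here is a hypothetical sketch of the fallback I'm arguing for; it is not Duplicacy's actual code, and fileEntry, planFile and the surrounding names are invented. The idea is simply that when a file's metadata can't be trusted (say, because of a transient filesystem error), the safe choice is to re-read and re-hash its content rather than drop a file that was in the previous snapshot:

```go
// Hypothetical illustration of "prefer re-hashing over silently dropping a file".
package main

import "fmt"

type fileEntry struct {
	path    string
	size    int64
	modTime int64
	// chunk references recorded in the previous snapshot would live here
}

// planFile decides how to handle one file for the next snapshot.
func planFile(previous *fileEntry, statOK bool, size, modTime int64) string {
	switch {
	case previous != nil && statOK && size == previous.size && modTime == previous.modTime:
		// Metadata matches the last snapshot: reuse its chunks, no rehash needed.
		return "reuse previous chunks"
	case previous != nil && !statOK:
		// Metadata unavailable because of a transient error: the expensive but
		// safe choice is to re-hash the content, not to drop the file.
		return "re-hash content"
	default:
		// New or changed file: hash it as usual.
		return "hash content"
	}
}

func main() {
	prev := &fileEntry{path: "docs/report.txt", size: 1024, modTime: 1700000000}
	fmt.Println(planFile(prev, false, 0, 0)) // prints "re-hash content"
}
```

The extra cost of that branch is one full read and hash of the affected files on the next run, which is exactly the CPU time I consider a fair price.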