Via the same mechanism used for notifications about a failed backup. Or maybe I don’t understand your question.
In the email sent, there would be three sections:
- Fatal errors: Could not connect to the target, could not transfer data, could not upload snapshot, etc.
- Warnings: Unexpected failures to read a few files: bad sectors, VSS crap, SIP, other stuff that should work but did not, etc.; list those files.
- Notes: Expected failures to read a few files: reparse points, symlinks, other stuff that is known to be unsupported but was included anyway, etc.; list them too.
A successful backup means no fatal errors. The user is supposed to review the warnings and notes and fix them. I would also not send an email at all if there are no warnings, no notes, and no failures. All is good – don’t say anything. There is enough spam as it is.
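To make it concrete, here’s a minimal sketch in Go (since that’s what Duplicacy is written in) of the report structure and the “stay silent when all is good” rule. All names are mine, not Duplicacy’s actual code:

```go
package main

import "fmt"

// Report buckets, matching the three email sections above.
// Hypothetical types and names, for illustration only.
type Report struct {
	FatalErrors []string // could not connect, transfer, or upload the snapshot
	Warnings    []string // unexpected read failures: bad sectors, VSS, SIP, ...
	Notes       []string // expected read failures: reparse points, symlinks, ...
}

// A successful backup means no fatal errors; warnings and notes are
// left for the user to review and fix.
func (r *Report) Succeeded() bool { return len(r.FatalErrors) == 0 }

// "Empty logs are happy logs": send nothing when there is nothing to say.
func (r *Report) ShouldEmail() bool {
	return len(r.FatalErrors)+len(r.Warnings)+len(r.Notes) > 0
}

func (r *Report) Render() string {
	out := ""
	for _, s := range []struct {
		title string
		items []string
	}{
		{"Fatal errors", r.FatalErrors},
		{"Warnings", r.Warnings},
		{"Notes", r.Notes},
	} {
		if len(s.items) == 0 {
			continue // skip empty sections entirely
		}
		out += s.title + ":\n"
		for _, item := range s.items {
			out += "  - " + item + "\n"
		}
	}
	return out
}

func main() {
	r := &Report{Warnings: []string{"/mnt/data/file.db: I/O error (bad sector?)"}}
	if r.ShouldEmail() {
		fmt.Print(r.Render())
	}
}
```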
(A separate discussion is whether backup tool developers should maintain exclusion lists for mainstream OSes instead of putting that burden on the user.)
Also, none of what’s listed in Warnings or Notes should preclude completing the rest of the backup. Only the user knows which failures and which files are important. Duplicacy cannot take it upon itself to abort backing up one set of files because another set of files failed. I keep repeating this, but it is critical: the safety of user data is paramount here. CPU usage is not.
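In code terms, the per-file policy would look something like this sketch (again, illustrative names only, not Duplicacy’s code): record the failure in the right bucket and move on; nothing short of a fatal storage error stops the run.

```go
package main

import (
	"errors"
	"fmt"
	"os"
)

// errUnsupported marks expected failures (reparse points, symlinks, ...).
// Hypothetical classification, for illustration only.
var errUnsupported = errors.New("unsupported file type")

// backupOne stands in for reading, chunking, and uploading a single file.
func backupOne(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err // unreadable: permissions, bad sector, VSS, SIP, ...
	}
	defer f.Close()
	// ... chunking and uploading would happen here ...
	return nil
}

func main() {
	var warnings, notes []string
	for _, path := range os.Args[1:] {
		if err := backupOne(path); err != nil {
			if errors.Is(err, errUnsupported) {
				notes = append(notes, path+": "+err.Error())
			} else {
				warnings = append(warnings, path+": "+err.Error())
			}
			continue // never abort the rest of the file set
		}
	}
	fmt.Printf("done: %d warnings, %d notes; every readable file was backed up\n",
		len(warnings), len(notes))
}
```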
Fix it. Add exclusions. See next comment.
It’s not. Because, just like you said, critical warnings will drown in the sea of unimportant ones. Empty logs are happy logs.
How will you know it failed the backup? By receiving a notification.
That same notification can just list the warnings instead, without failing the backup and skipping files that read just fine.
Why 100? Why not 10? Or 1? Any magic number here will be wrong. What if my external filesystem got mounted with the wrong permissions? All 30,000 files on it are now unreadable. Shall the backup halt and skip backing up the document I just edited in my home folder? Answer – heck no!
Oh, yes! Another related killer feature would be to continue taking local filesystem snapshots at the scheduled cadence, even if the backup cannot be started or completed for other reasons (such as no network, or running on battery), and then, when connectivity is restored, process and transfer the data for all of those accumulated snapshots at once. I don’t think anyone besides Time Machine does this today. This would be a huge selling point for folks like myself.
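A rough sketch of how that decoupling could work (entirely hypothetical, not an existing Duplicacy feature): snapshots are captured on schedule no matter what, queued locally, and a separate worker drains the backlog once the storage is reachable again.

```go
package main

import (
	"fmt"
	"time"
)

// Snapshot stands in for a local filesystem snapshot (VSS, APFS, ...).
type Snapshot struct{ TakenAt time.Time }

func takeLocalSnapshot() Snapshot { return Snapshot{TakenAt: time.Now()} }

// storageReachable is a placeholder for "network up, not on battery, ...".
func storageReachable() bool { return true }

func main() {
	// Pending snapshots accumulate here while the backup cannot run.
	queue := make(chan Snapshot, 64)

	// Capture at the scheduled cadence, even while offline.
	go func() {
		for range time.Tick(time.Hour) {
			queue <- takeLocalSnapshot()
		}
	}()

	// Transfer worker: once connectivity is restored, process the whole
	// backlog of snapshots in one go.
	for snap := range queue {
		for !storageReachable() {
			time.Sleep(time.Minute)
		}
		fmt.Println("processing and uploading snapshot taken at", snap.TakenAt)
	}
}
```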
That’s a copout. Offering a multitude of configuration options only moves the burden of making a choice (which the developer could not make!) onto the user. The developer needs to decide on the right approach and stick to it, and, optionally, teach the user the correct way.