I strongly disagree. The vast majority of end-users of any backup software will not be using anything like ZFS as a storage backend. They'll be using local (perhaps external) HDDs, or a cloud provider with unknown storage practices and resilience (Backblaze being a notable exception).
Now, Duplicacy isn't just simple backup software. In order to do what it does, it packs your data, compresses it, and encrypts it. Practically locks it away. If a user's external HDD just has a plain copy of their data, and any of that gets corrupted through 'bit-rot', it isn't the end of the world. They can at least still access it, even the corrupted file. If it's a media file, they may not even notice. The same goes for a text file.
Because Duplicacy packs your data in this way, a single bit error renders the whole chunk (and the rest of the restored file) inaccessible. IMO, that's Duplicacy's responsibility to safeguard against as much as possible. I'm not saying the user isn't responsible for keeping multiple backup copies of their data (Reed-Solomon wouldn't protect against disk failure anyway). I am saying Duplicacy's storage format is fragile enough to warrant extra protection. And, quite frankly, when dealing with important data, Erasure Coding is something many people would expect through familiarity with tools such as WinRAR's Recovery Record feature. I certainly would.
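To make the fragility argument concrete, here's a minimal sketch (Python with zlib standing in for Duplicacy's actual pack/compress/encrypt pipeline, which is written in Go; the data and names are purely illustrative):

```python
import zlib

# Illustrative stand-in for a packed chunk: real Duplicacy chunks are
# compressed *and* encrypted, but compression alone shows the point.
original = b"important user data " * 200
chunk = zlib.compress(original)

# A plain file copy with one flipped bit is still mostly readable.
# A packed chunk with one flipped bit is not:
corrupted = bytearray(chunk)
corrupted[len(corrupted) // 2] ^= 0x01  # flip a single bit mid-stream

try:
    restored = zlib.decompress(bytes(corrupted))
    intact = restored == original
except zlib.error:
    # The deflate stream is malformed or its checksum no longer matches.
    intact = False

print("chunk survived single bit flip:", intact)
```

With a plain copy, that same bit flip would corrupt 1 byte of one file; here it takes out the entire chunk, and everything in the restored file from that point on.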
Furthermore, I wouldn't call that feature-creep in the slightest. I don't care if it's optional, but the implementation would be straightforward, and it could be succinctly specified as an important, atomic feature. I don't understand why such a feature would be a bad thing?! Where is the downside?
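To show just how simple the underlying idea is, here's a sketch of the most basic erasure code: one XOR parity shard over N data shards, which tolerates the loss of any single shard (Reed-Solomon generalizes this to survive multiple losses). This is an illustration of the principle, not Duplicacy's format; all names are hypothetical:

```python
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def encode(chunk: bytes, n: int) -> list[bytes]:
    """Split chunk into n equal data shards plus one XOR parity shard."""
    size = -(-len(chunk) // n)                 # ceil division
    padded = chunk.ljust(n * size, b"\0")      # pad to a multiple of size
    shards = [padded[i * size:(i + 1) * size] for i in range(n)]
    parity = reduce(xor_bytes, shards)         # XOR of all data shards
    return shards + [parity]

def recover(shards: list[bytes], lost: int) -> bytes:
    """Rebuild the shard at index `lost` by XOR-ing all survivors."""
    survivors = [s for i, s in enumerate(shards) if i != lost]
    return reduce(xor_bytes, survivors)

shards = encode(b"some packed chunk contents", 4)
rebuilt = recover(shards, lost=2)              # pretend shard 2 rotted away
print("recovered intact:", rebuilt == shards[2])
```

A real implementation would use Reed-Solomon over GF(256) for multi-shard recovery, but the storage-format change is the same shape: append a small amount of parity to each chunk so a localized error no longer destroys it.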