Define “superior”. Yes, API-based bulk cloud storage services will work much better for backup than file-based ones: S3 will be significantly more robust than WebDAV, because the two are designed for different purposes. WebDAV was never meant to handle millions of small files; it’s an HTTP extension for sharing documents for collaboration. Still, this is probably not the main reason to pick one over the other.
Example: I use Google Drive as a backend. Zero issues (with an asterisk: meaning, zero issues I did not know how to handle). It’s built on the same Google Cloud infrastructure you can use directly yourself, but it is definitely not designed for bulk storage. It’s a collaboration service. I’m effectively abusing it by using it as a backup backend, and yet it was and is solid.
B2, on the other hand, has recently had issues with returning bad data, and performance is not uniform; see the recent post from someone unable to get any reasonable bandwidth out of it. Yes, it is 2x-4x cheaper. You get what you pay for.
It all boils down to company culture and how they handle development, testing, quality control, and maintenance. I would trust Google and Amazon to keep my data healthy, but not pCloud, hubiC, IDrive, Backblaze, or other small companies running little side projects with three overworked QA engineers and flaky processes (judging by the outcomes).
Reality is, if you want reliable backup you should not really be using *Drive services as backends, regardless of vendor. It’s a minefield of gotchas. For example, the eventual consistency of virtually all those services means you can end up with duplicate folder names; unless your backup tool is designed to handle that, you will see weird issues unless you are very deliberate in how you design your backup workflow (as a side note, rclone dedupe can fix those duplicates — guess why I know this). Then there is throttling, anti-abuse features, and various other performance limits that are fine for the intended use of sharing documents, but are there on purpose to discourage abuse or as a side effect of compromises taken. (For example, don’t even try to download with multiple threads from Dropbox or OneDrive.)
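For what it’s worth, here is a minimal sketch of how cleaning up those duplicates can look; the remote name `gdrive` and the folder `backups` are just placeholders for whatever your setup uses, and I’d always preview with a dry run first:

```
# Preview: show what rclone would merge or delete without changing anything
# (hypothetical remote "gdrive" and path "backups").
rclone dedupe --dedupe-mode newest --dry-run gdrive:backups

# Real pass: merge duplicate directories and keep the newest copy of any
# duplicate files. Other modes (rename, oldest, largest, ...) exist; pick
# whatever policy fits your data.
rclone dedupe --dedupe-mode newest gdrive:backups
```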
It just so happens that Google figured out how to build reliable web services, and Google Drive happens to be a robust enough, ultra-cheap solution that has historically worked well with minimal pitfalls. It’s a unicorn. Every other *Drive service happens to be unsuitable as a backup target for any reasonably large dataset.
So yes, you are right: for a set-it-and-forget-it, rock-solid approach you should go by market share and use Amazon S3, Google Cloud, or Microsoft Azure (in that order). That would be expensive. Very. But how much is your data worth to you?
If cost is not an issue (either because you have a small dataset or deep pockets), then that’s where the story ends.
Otherwise you can optimize further based on the type of data you store and your workflow. For example, my largest important dataset is a few TB of family photos and videos. They live in iCloud, meaning they are managed by Apple and stored on Google Cloud or Amazon S3. The chances of me losing data in iCloud are very slim: data there is backed up and replicated, and can be recovered if attacked by ransomware. I’m backing it up elsewhere anyway out of sheer paranoia, but I never had to restore from there and never expect to. For this case archival services are great, like Google Archive or Amazon Glacier. They are ultra cheap to store (under $0.001/GB/month), but quite expensive to restore. (Duplicacy does not work out of the box with Glacier due to thawing requirements; not sure about Google Archive. I use them with another backup tool for a selective subset of data.)
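As an illustration (not my actual setup), pushing data into an archival tier with rclone can look like this; the remote name `s3`, the bucket `photo-archive`, and the source path are all hypothetical, and the point is only that you choose the cheap storage class at upload time and pay for it later when you need to read the data back:

```
# Hypothetical remote "s3:" and bucket "photo-archive". DEEP_ARCHIVE is the
# ultra-cheap tier; objects written there cannot be read back until they are
# restored ("thawed"), which is where the extra cost and delay come from.
rclone copy ~/Photos s3:photo-archive/photos --s3-storage-class DEEP_ARCHIVE
```

At under $0.001/GB/month, a few TB of photos costs on the order of a couple of dollars a month to keep there; it’s the retrieval you pay for.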
My backup to Google Drive, in other words, is a toy. I know in the long run I’ll fully switch to an actual cloud storage service (and probably an archival one), but it’s fun to see how far it can be stretched. (And it’s cheap, so why not.)