Backblaze B2 Cloud Storage Now Has S3 Compatible APIs

Spotted this today and thought the forum would find this interesting:


Agreed! I almost switched back to B2 on impulse because of this (but stopped when I realized B2 is still considerably more expensive for me than Wasabi with legacy pricing).

Looks great for apps that support S3 but not B2. Though it doesn’t seem like there’s any reason to switch from the B2 API to the S3 API for apps that already support B2 (like :d:), since the S3 API doesn’t add any features and apparently costs Backblaze more to operate.

Gleb Budman (Mod), replying to nalvarez in the blog comments:

Yes - B2 is still a more cost efficient API as it allows customers to connect directly to the final storage location. However, we have built a highly cost efficient load balancing system as we always have - using software to optimize inexpensive hardware - and are swallowing the additional costs for our customers. Thanks for keeping up with our blog!

For anyone else interested, note that existing buckets have to be re-created in order to work with the new S3-compatible API.


Good explanation and sound reasons. I also have no intention of changing to the new API. It will probably help attract new customers who use S3-compatible applications.

I’m new to Duplicacy and will be using Backblaze.
Are there any advantages to using the S3 compatibility setting instead of B2?

No.
S3 is an extra layer that centralizes access through their S3 gateway. Its only purpose is to onboard applications that don’t support B2. Duplicacy does, so there is no point.

I would consider storj instead of B2. Cheaper, and can be faster due to decentralization.

Thanks.

I looked at storj a while back, but wasn’t sure about its longevity (what with all that crypto stuff). Will take another look.

If you are going to be using the service you don’t need to deal with any crypto. You can pay for storage with a credit card.

It’s distributed storage, and nodes are hosted all over the world. Node operators are paid with utility tokens, to avoid dealing with conventional international payments and the associated overhead of complying with hundreds of different requirements, currency conversions, etc. But since they pay with tokens, they also accept tokens as payment for services, so if you want to, you can pay for storage with tokens as well.

I use it, it has been great so far.

From my understanding, if used via their own CLI, Storj automatically encrypts data on the client machine before uploading? How can I trust/verify this? Though I’d be using Duplicacy, so everything would be encrypted as per usual.

My main issue is not being sure about pricing compared to Backblaze. I have a fair mixture of some largish files (up to 0.3 GB) and many files from 1-20 MB. I’d be doing incremental backups daily, and rarely restoring unless something went wrong.

This is by design. It is not possible to upload plaintext data to Storj; it’s always encrypted. But it doesn’t matter here, since Duplicacy will be encrypting everything anyway.

The sizes of your files are irrelevant. What matters is Duplicacy’s average chunk size. Depending on the nature of your data, you may benefit from increasing the default average chunk size to somewhere around 50-64 MB. 64 MB is Storj’s segment size, and since there is a per-segment fee, you want to reduce the number of segments. On the other hand, the per-segment fee is so tiny that it’s probably not worth paying attention to.
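As a sketch of how that tuning might look, assuming Duplicacy’s standard `init` flags (`-c` for average chunk size, `-min`/`-max` for the bounds; sizes must be powers of two) and with `mybackup` and the storage URL as placeholders:

```shell
# Hypothetical example: initialize an encrypted repository with a larger
# average chunk size (Duplicacy's default is 4M). -min/-max bound the
# variable chunk sizes around the -c average.
duplicacy init -e -c 64M -min 16M -max 256M mybackup <storage-url>
```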

Duplicacy supports Storj in two ways: via the native Storj endpoint (which behaves like the native Storj client, encrypting data on your machine and sending segments to nodes directly) and via their S3 gateway. In the latter case the gateway has credentials to decrypt data, but instead of sending segments to nodes yourself, you send the data to the S3 gateway. This saves you quite a bit of upstream bandwidth (2.7x), and with Duplicacy you don’t really care that the gateway has encryption keys, since you don’t rely on Storj’s encryption in the first place. It’s no different from uploading data to Backblaze or any other storage provider.

Let’s say you have 1TB of data in 10MB segments. This will cost you $4 + 100000*$0.0000088 = $4.88.

If you tweak Duplicacy to use chunks as close as possible to 64 MB, the cost becomes $4.14.
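The arithmetic above can be sketched as a small cost estimator (assuming the $4/TB-month storage rate and $0.0000088 per-segment monthly fee quoted above, with decimal units, i.e. 1 TB = 1,000,000 MB):

```python
# Rough Storj monthly cost estimate: storage fee plus per-segment fee.
# Assumed rates (from the discussion above): $4 per TB per month,
# $0.0000088 per segment per month.
STORAGE_PER_TB = 4.00
SEGMENT_FEE = 0.0000088

def monthly_cost(tb: float, segment_mb: float) -> float:
    """Monthly cost for `tb` terabytes stored in segments of `segment_mb` MB."""
    segments = tb * 1_000_000 / segment_mb
    return tb * STORAGE_PER_TB + segments * SEGMENT_FEE

print(round(monthly_cost(1, 10), 2))  # 10 MB segments -> 4.88
print(round(monthly_cost(1, 64), 2))  # 64 MB segments -> 4.14
```

This is why larger chunks help only marginally: the segment fee is under 20% of the bill even at 10 MB segments.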

It’s not even a matter of pricing, really; even if it was more expensive, I would have preferred it to Backblaze. For one, Storj is geo-redundant by default; Backblaze isn’t. And two, Backblaze allowed some mishaps to happen in the past that should not have been allowed at that stage of service maturity (like returning bad data to clients). I would not trust them with a backup. Hot data, maybe, but not a backup.