Google Cloud Storage options

I’ve happily been storing my backups on Backblaze B2 for years now. Back then, it was the cheapest option available. But now I pay ~$7.50 per month. Not a whole lot, but still.

Someone suggested Google Cloud Storage as a cheaper alternative. But it’s not entirely clear to me how this storage service works. I thought to post here, as I expect a lot of like-minded people frequent this forum. (mods: feel free to delete this post if it’s too far off-topic, and apologies)

Google Cloud Storage has 4 different storage classes. All I do is upload encrypted Duplicacy backups to it, and once per month I prune old versions. The marketing material talks of ‘accessing once per month’ or ‘per quarter’ and this confuses me. So I’m wondering: is the “Archive” class appropriate for my use case? If it is, I’ll reconfigure Duplicacy to save $72 per year!

Can someone share their thoughts?

The Archive class on GCS has a 365-day minimum retention charge. It is meant for archiving, for storing stuff you never expect to delete or download, and the pricing structure reflects that: doing anything with the data is very expensive (deleting early, uploading, egressing), while leaving it alone is cheap. That is ideal for archiving, which is what the name of the tier reflects. Backup is a bit different from archiving, though.
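
To put rough numbers on that, here is a quick Python sketch. The ~$0.0012/GB/month Archive price is from memory, so treat it as an assumption and check Google’s current pricing; operation and egress fees are not included.

```python
# Rough sketch: what the 365-day minimum on GCS Archive means for a monthly prune.
# ARCHIVE_PRICE_PER_GB_MONTH is an assumed list price (~$0.0012/GB/month); verify
# against Google's pricing page. Operation and egress charges are ignored here.

ARCHIVE_PRICE_PER_GB_MONTH = 0.0012   # assumption, USD
MIN_RETENTION_DAYS = 365

def archive_storage_cost(gb: float, deleted_after_days: int) -> float:
    """Storage cost for `gb` of data deleted after `deleted_after_days` days.

    Deleting early still bills the data as if it had been kept for the
    full minimum retention period.
    """
    billed_days = max(deleted_after_days, MIN_RETENTION_DAYS)
    return gb * ARCHIVE_PRICE_PER_GB_MONTH * (billed_days / 30)

# Pruning 100 GB of month-old chunks vs. keeping them a full year:
print(round(archive_storage_cost(100, 30), 2))    # ~1.46 -- billed for a year anyway
print(round(archive_storage_cost(100, 365), 2))   # ~1.46 -- same charge
```

In other words, a monthly prune on Archive pays for roughly a year of storage on every chunk it deletes, which erases most of the headline price advantage.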

That “once per quarter” access language is a ballpark for the usage pattern that makes the tier financially sensible.

I would not use the archival tier for primary backup, especially if you expect you might need to restore from it.

Coldline storage would be more appropriate; it’s $4/TB/month, but there are still API fees and egress fees.

At this point you may want to consider STORJ: it’s also $4/TB/month with a modest egress fee, but no API costs, and you get geo-redundancy for free.


Great answer. Thanks very much for pointing out the minimum retention charge, and the alternative option.

I also think STORJ is a fantastic service, but there is another cost that has bothered me in the past: they charge per “segment”.

If you, like me, have a very large number of small chunks, it really isn’t worth it.

I think I currently use the default chunk size, which I believe is 50MB. I’m guessing this cost could be reduced somewhat by changing the chunk size to 64MB or just below.

This would result in a total segment charge of less than $0.20 per month for me.
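
To put a rough number on that, here is a back-of-envelope Python sketch. The per-segment rate is my recollection of Storj’s published price (about $0.0000088 per segment per month), so treat it as an assumption and check their current pricing.

```python
# Back-of-envelope check of the "under $0.20/month" figure.
# SEGMENT_FEE is an assumed rate (~$0.0000088 per segment-month); verify
# against Storj's current pricing page.

SEGMENT_FEE = 0.0000088  # assumption, USD per segment per month
TB = 10**12
MB = 10**6

def monthly_segment_fee(stored_bytes: float, chunk_bytes: float) -> float:
    # Each uploaded chunk maps to a single segment as long as it stays
    # under Storj's 64 MB maximum segment size.
    return (stored_bytes / chunk_bytes) * SEGMENT_FEE

print(round(monthly_segment_fee(1 * TB, 50 * MB), 2))  # ~0.18 per TB with ~50 MB chunks
print(round(monthly_segment_fee(1 * TB, 60 * MB), 2))  # ~0.15 per TB with ~60 MB chunks
```

So for a backup in the low-terabyte range, the segment fee stays well under a dollar per month once the chunks are in that size range.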

I’ve never considered changing it to a smaller size. Do you do this because your dataset has small files that change often, so reducing the chunk size means you transfer less data on every increment and reduce the overall storage used?

The default is 4MiB. It cannot be 50; it must be a power of two.

But I agree, 32-64MiB is likely better for most users, even outside of Storj: fewer chunks, faster enumeration, smaller overhead. And over time the average file size has grown too (more pixels in photos, larger datasets).
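
For a sense of scale, here is a quick Python sketch of what the average chunk size does to the number of objects a 1 TiB backup produces (deduplication and compression ignored, this is only about object counts). If I remember right, Duplicacy takes the average chunk size as an option when the storage is first initialized, so changing it means re-initializing the storage.

```python
# Quick sketch: object counts for a 1 TiB backup at different average chunk sizes.
# Dedup/compression ignored; this only illustrates the per-object overhead argument.

TiB = 1024**4
MiB = 1024**2

def chunk_count(dataset_bytes: int, avg_chunk_bytes: int) -> int:
    # Variable-size chunking averages out to roughly dataset / average chunk size.
    return dataset_bytes // avg_chunk_bytes

for size_mib in (4, 16, 32, 64):
    n = chunk_count(1 * TiB, size_mib * MiB)
    print(f"{size_mib:>2} MiB chunks -> {n:>7,} objects to upload, list and prune")

# 4 MiB  -> 262,144 objects
# 64 MiB ->  16,384 objects: ~16x fewer uploads, listings and segments
```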

Perhaps the default needs to be updated as well, to keep up with the times.

This is a behavioral charge: they want people to increase object sizes to reduce the impact of fixed per-object overhead (latency, time to first byte, accounting, etc.) and improve performance, and financial incentives are the most effective kind.

All cloud storage providers do that. Sometimes it’s rolled into the cost of API calls (some calls are more expensive than others), and sometimes it’s punitive in some other way, like minimum retention charges on the colder tiers.

Storj does not charge any of those, and does not play games by burying those costs in API calls that don’t map 1:1 to the real overhead; instead they add a per-segment fee, which is insignificant in the optimal use case.

That optimal use case also happens to work better with other storage targets: local servers, Amazon AWS, Blu-ray archival discs, etc.