'Best' cloud storage backend in 2025

You will NOT encounter these fees with autoclass turned on, because autoclass does not even move the data into the colder tier until it has already met those minimums.

In other words: that data from my first backup went to regional (hot) storage, and didn’t move to nearline until day 31. And so on…

So on autoclass, all your data has already met the minimum for whatever tier it’s in. You shouldn’t be charged to delete it.

Maybe I’ll test this out by running a prune…


I would LOVE to be getting anything close to 100 Mbit. I’m in Poland with a 900/300 fiber connection, but Hetzner storage in Germany gives me 20-40 Mbit both for uploading and for downloading Duplicacy backups.

Wait…what’s wrong with iDrive? I used their personal service for close to 10 years, tested data recovery multiple times and never had a problem. It was slow way back then, but just as fast as Crashplan and Mozy were at the same time.

Since then I’ve switched entirely to Linux on all of my personal machines and of course servers so the personal tier isn’t officially supported.

I literally just used their migration tool to pull 75 GB from B2 over to E2. I didn’t watch the transfer rate; this is what the completion notification said:

Start Date: Aug 06, 2025, 09:39 AM
Last updated: Aug 06, 2025, 10:01 AM
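
For what it’s worth, assuming the transfer actually ran the whole window, that works out to roughly 75 GB in 22 minutes, i.e. 75 × 8 / (22 × 60) ≈ 0.45 Gbit/s, or around 450 Mbit/s on average – just a back-of-the-envelope figure, since I don’t know whether the job used the full interval.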

From my point of view I found two things wrong with iDrive:

  1. It doesn’t support incremental local backup on macOS. IMHO, that’s just crazy.

  2. Cloud backup from my location is substantially slower than backup to B2; the latter manages to saturate my 200MB line for both backup and restore. I know some people don’t care about backup speed, but I do.

You can search this forum.

This is a testament that it worked for you. But that doesn’t matter much; any service will work for someone. What matters is how a service provider handles failures.

iDrive does not handle them at all; for example, see this: IDrive e2 now integrated with Duplicacy - Fast S3 Compatible Cloud storage - #14 by warren1. Or this: Failed to add IDrive E2 storage - #5 by upssnowman. Or… Testimonials like these are all over the internet. And they are not surprising at all: to be able to sell storage at the price they do, they have to cut all possible and impossible corners. They want customers who pay for the service but don’t use it. You – someone who actually wants to use what you paid for – are not that customer. Therefore losing you as a customer is not a problem for them, and there is no incentive to fix anything.

You can use this service for unimportant transient data (I would not personally do that either), but not for your backup history. You really do not want to hinge your data on “maybe it won’t fail”; you want a “we guarantee data integrity” type of deal.

iDrive is in a race to the bottom. You don’t want to use them, or any similar shop, including Backblaze, for anything where long-term data integrity and durability is required.

Not even through Duplicacy? I thought it did. There is also Veeam, which they give users a discount for using, but I have zero experience with that solution. Fair point on 2. I suppose for my use case I run backups during oddball hours, not only to maximize my throughput on cable internet (since peak times tank your upload/download speeds), but also because no one else in the house is using the internet at, say, midnight.

What is a good service to use? You mention also Backblaze, which I understood to be rather reputable. Only reason I wanted to switch was for cost savings. B2 was more expensive than E2 by a significant margin. Despite having years of iDrive Personal, I’ve only just begun dabbling with their E2 service, so I’m genuinely asking. What are good alternatives? I have all clients back up to local server first, a copy of critical data is sent off to a server at my parent’s house, and I have been using Duplicacy to backup to Backblaze B2 for the last two years. Yet you call out both as not being a good option. What do you suggest as an alternative cloud backup solution that works with Duplicacy?

I was referring to the tool iDrive, not iDrive e2.

I probably should leave this forum, because I chose Arq for my Mac, and am still testing both B2 and GCS Archive storage as cloud backend (based on the discussion in this forum). B2 has very predictable cost, GCS Archive storage is cheap(er) but charges for both backup and restore actions. I like some stuff in Duplicacy but the UI is not finished and the CLI is too many steps back for me.

Depends on the amount and type of data you have.

Media (immutable, incompressible, non-deduplicable) does not benefit from compression or deduplication, and a changed version can only mean corruption – so you can copy it directly to archival storage with object lock enabled: Google Archive, or AWS Glacier Deep Archive.
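
Purely as an illustration of that direct-to-archive approach (not anyone’s actual setup), here is a minimal boto3 sketch that uploads one media file to S3 Glacier Deep Archive with Object Lock; the bucket name, key, and retention date are made-up placeholders, and the bucket would need to have been created with Object Lock enabled:

```python
# Hypothetical sketch: copy an immutable media file straight to S3 Glacier
# Deep Archive with Object Lock, so it cannot be deleted or overwritten
# before the retention date. All names below are placeholders.
import datetime
import boto3

s3 = boto3.client("s3")

with open("family-videos-2024.tar", "rb") as f:   # placeholder local archive
    s3.put_object(
        Bucket="my-archive-bucket",                # placeholder; created with Object Lock enabled
        Key="media/family-videos-2024.tar",
        Body=f,
        StorageClass="DEEP_ARCHIVE",               # Glacier Deep Archive tier
        ObjectLockMode="COMPLIANCE",               # immutable until the retention date below
        ObjectLockRetainUntilDate=datetime.datetime(2030, 1, 1, tzinfo=datetime.timezone.utc),
    )
```

For files larger than a few gigabytes you would use a multipart upload instead of a single put_object, but the storage class and Object Lock settings work the same way.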

Everything else depends on the amount. If you have less than 1 TB, cost does not really matter; it’s still a few bucks a month. You can use Google storage, AWS storage, or, if you want even cheaper, Storj is an interesting contender – I have had great success with it. By design, it cannot return bad data (unlike Backblaze did in the not-so-distant past).

They spend a lot of time and money on marketing, but they are not reputable by any means. Besides the glaring issues with the consistency of data returned through their API – which happened far too recently to be forgivable at this stage of project maturity, and which point at serious flaws on the whole systems-engineering side – their user-facing software quality is garbage, and there is no reason to believe the backend is any better; there is every reason to believe it’s just as bad. They lied to customers and contradicted their own SEC filings about outside investments, and in my interactions with them, both via support and in public venues (Reddit specifically), they generally behave in a rather slimy way. So no, they are anything but reputable. If you are looking at Backblaze, consider Wasabi instead; at least they are profitable and managed well. But neither would be my first choice for backup.

That should tell you something. Backblaze is still not profitable. How can E2 offer the same service cheaper?

For a cheap, good option (you get geo-redundancy as a bonus for free): STORJ. You would want to tweak the average chunk size from the default 4 MB to 16 or 32 MB to improve performance and reduce cost. But it’s E2E encrypted, erasure coded, and distributed, with the same durability as the major players. From anecdotal personal experience using them over the last few years: not a single hiccup. (You can save an additional 10% if you pay with the token instead of a credit card.)
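
Rough numbers on why the chunk size matters there, assuming something like 1 TB of backup data: at the default 4 MB average chunk size you end up with on the order of 1 TB / 4 MB ≈ 250,000 objects, whereas at 32 MB it’s closer to 31,000 – far fewer uploads and requests, and far fewer billable segments if the provider charges a per-segment fee.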

If you want a more classic vendor, Wasabi is not bad. Their “pay as you go” basic offering is well suited for backup: no egress fee, but there is a limit on the total amount of egress. Perfect for a backup job. It’s about 50% more expensive than Storj and you don’t get geo-redundancy.

But again, if you have less than 1 TB – just use google cloud storage.

The aforementioned Arq’s premium tier is using Google Cloud Storage on the backend.

Welcome to the club.

I use Glacier Deep Archive for that.

+1

I’m doing a very similar thing – ZFS snapshot replication between servers at different locations (or Duplicacy backup if the filesystem is not ZFS), and Glacier Deep Archive for archiving in case everything fails at once. I never expect to need to restore from Glacier, though, and therefore the high restore cost is irrelevant.

This should be fixable with SQM on your gateway. You probably have bufferbloat; saturating the upstream connection should not affect latency on properly configured equipment.

Backblaze is reputable; they had a small API hiccup over 4 years ago (which was fixed, no loss of data) and apparently the world’s fallen in… :man_shrugging: Bugs happen, bugs get fixed. Also, you do 3-2-1, right?

Don’t put too much stock in personal anecdote; form your own opinion – you already have 2 years of experience with B2, what do you think? (I see way too much of this on the internet nowadays.)

Sure, listen to opinions (my personal opinion is a self-hosted server or NAS is better than any cloud), but recognise they’re just that… opinions. If you see a pattern of reports, note it down. You’ll find no such pattern with B2, though.

Just south of 1TB of family pics, videos, and important docs. As far as cost, Backblaze is running me about $10/month. For 1TB, e2 is ~$4/mo after their first year promo of $20ish for the 1TB plan.

Funny you mention deduplication, because my photo library right now is a hot mess. I have a good chunk of it from old snapshots, and I recently did a Google Takeout of my Photos library and imported it all into Immich because I want to ditch Google as much as possible. So there are definitely some repeats, including old snapshots of Picasa and iPhoto libraries that had kind of a weird way of storing thumbnails. I still haven’t sat down to try to suss that out prior to backing it up. So for now, the whole mess is on B2, with some migrated to E2 as I try their service. Not only that, but I still don’t fully understand best practice for setting up repositories and buckets to take full advantage of Duplicacy’s selling points. But I needed some assurance that it’s safe off site in addition to the “local offsite” I have (parents’ house, one town over).
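
For the exact-duplicate part, a minimal content-hash sweep is probably the place to start – purely a sketch, the library path below is a placeholder, and it only catches byte-identical files, not re-encoded or resized copies:

```python
# Hypothetical sketch: group files by SHA-256 so byte-identical duplicates
# (e.g. the same photo present in both a Takeout export and an old Picasa
# snapshot) can be reviewed before backing everything up.
import hashlib
from collections import defaultdict
from pathlib import Path

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for block in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB blocks
            h.update(block)
    return h.hexdigest()

groups = defaultdict(list)
for p in Path("/srv/photos").rglob("*"):                   # placeholder library root
    if p.is_file():
        groups[sha256_of(p)].append(p)

for digest, paths in groups.items():
    if len(paths) > 1:
        print(f"duplicates ({digest[:12]}): {[str(p) for p in paths]}")
```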

Is Storj profitable? Are they going to be around long term? I have no skin in this game for any of these folks. You mentioned previously that iDrive has been known to astroturf here and other online outlets, so throwing it out there that I’m not affiliated with anybody.

Just a Linux newb trying to figure out how to keep his family memories safe long term. All the other stuff on my server can be re-obtained if lost.

Seems like everything is personal anecdote. It’s how I landed on Duplicacy as a solution almost two years ago now (I think). I wanted a GUI backup solution. CLI is fine and I’ve learned a lot more in the last couple of years, but I still like a visual on what has completed or failed. That said, Duplicacy’s GUI is…rough. And I know the new hotness is restic or borg, and there are several wrappers for those apps to make them a bit more newb-friendly, but who knows how long those wrappers will stick around? I’m sure the underlying apps will.

Honestly, I’m ambivalent towards B2. It’s my first experience with S3-compatible storage and I’m still trying to wrap my head around best practices for buckets, permissions for access keys/application IDs, and managing those so they’re not lost. All while balancing the desire to make restoring in the event of data loss easy for my family should something happen to me. It’s been a frustrating journey with backups. I’m about ready to just use rclone for everything. At least that CLI is easy for me to understand. :grinning_face_with_smiling_eyes:

Not all bugs are created equal. This type of mishap at that stage of their project maturity is indicative of glaring system design issues and strongly suggests their infrastructure is held together by sap and twigs. A properly designed system would either return correct data or none at all. Backblaze has proven to be incompetent, and should not be trusted.

And by the way, when they “deployed the fix” they only fixed the issue for the users who noticed and reported the corruption. Not across the board. Because they could not. Because they lack the infrastructure to detect it. Amateur hour.

Yes, the world has fallen; they are dead for all intents and purposes, bar very short-term scratch data storage. There is zero tolerance for this type of “hiccup”.

Indeed. Therefore you might want to ignore positive reports (a service working as expected for someone is not really useful), read negative reports, and understand the implications. For example: service offline for an hour last year – not a big deal. Service returning bad data once in that year – horrible. Service offering unlimited resources for a fixed price – misaligned incentives, will not end well. Service charging for some or all usage as you go – good; they are interested in making things work smoothly.

Also, general company reputation: AWS/GCS is used by banks and hospitals. B2/E2: hobbyists and small companies who don’t know better. Home users and small businesses who opt to use AWS/GCS benefit from all the reliability engineering those banks and hospitals enjoy.

That’s a bit of a stretch. I’ll grant you E2 since it’s still pretty young in that space. But a bit of searching shows that Backblaze has banks, Plex, and some universities in the mix, plus a couple of well-known TV shows/YouTube programs (Good Eats, Hot Ones), among several others I don’t know personally.

Your own argument suggests ignoring positive reports of AWS usage, so I’ll pretend I don’t know about any hospitals or banks that use the service and only go on negative reports? Gotta say, that doesn’t make sense to me.

The market is stuffed to the gills with cloud storage, so wading through and finding who is a flash in the pan and will be gone in a decade vs. who’s around for the long haul is a beast. Further complicating the problem are tempting offers of “lifetime” plans (Rsync.net being the one that comes to mind at the moment).

I’ll grant that I haven’t heard anything really negative about AWS, but I’ve also not looked too hard. I’m tired of giving my money to Google and Amazon, hence this whole journey to begin with. I started simply because I wanted to stop giving money to Google for additional photo storage, which led to Immich, which led to backup solutions, which led me here.

Well, they are the best at it, so from a purely technical perspective they are the de facto gold standard, and it would be counterproductive to buy storage from anyone else.

If you don’t want to give them money for ethical reasons – sure. But then, yes, you’ll have to wade through the mess yourself or trust others. Look into Storj – they are very green (in terms of energy footprint) and I like them, but they are still new, albeit fast growing.

That would be missing the point. The idea is that commercial and health institutions have a very low tolerance for failures, and therefore the service must be engineered to higher standards to support such customers or go bankrupt paying out damages on contracts. Small customers benefit from that reliability and engineering for free. Secondarily, Google and Amazon can afford to attract and retain better talent, which further contributes to the quality of the resulting service. S3 is Amazon’s spec and the de facto standard for cloud storage. Backblaze tried to push B2, explained in colorful detail how S3 was bullshit, then pulled that blog post and three years later deployed an S3 gateway.

If you dig deeper, you’ll find they are not actually using them for anything critical.

E2 may be new, but iDrive as a company isn’t. Nothing really changes by adding, or rather exposing, the interface to the public.

Anyway, if you want to justify to yourself using cheap small no-name services because saving $10/month is important – fine. Just fully understand what you are actually getting: very poor value. And in the case of backup storage specifically, you should avoid small players altogether. A backup is only as good as the underlying storage.

It’s easy to filter them from an incentive-alignment perspective. If your incentives as a customer align with those of the business – use them. Else – don’t.

IDrive gets your money whether you use the service or not. They have no incentive to fix anything. They have all incentives to slow you down.

AWS S3 only gets money if you use the service. They are interested in keeping it running and as fast as possible.

Services with lifetime plans eventually do one of two things:

  • discontinue lifetime plans
  • close the shop.

Lifetime plans are an arguably useful tool to raise capital at the beginning, but after that they are a drag and a liability. I would avoid such offers and companies. Rsync.net is overpriced for what is needed for a backup – it’s great if you want to replicate snapshots, but for this use case it’s still very expensive. Hetzner offers much better solutions in that realm.

I’d suggest focusing on distributed, redundant cloud storage – the aforementioned Storj is a good option – or going with a major player like Amazon or Google.

Apologies for the multiple posts – on the phone it’s quite unwieldy to type and reference a long post.

No apologies needed. Long-form on phones is an exercise in frustration.

I don’t like the concept of lifetime either, but I will admit sometimes it’s tempting.

You’re extrapolating systemic issues from one small incident and little first-hand knowledge of what otherwise goes on behind the scenes. Simply put, you’re massively exaggerating here.

It’s funny, coz you still recommend Storj despite their constantly moving goalposts (and prices) year after year. It’s the same spuriousness as when you claimed GCD was unsuitable.

Right…

As I said, mistakes happen. You’re pretending one cloud is somehow vastly better than another, based on unimportant metrics that you have no control over or knowledge of. It’s the cloud. The bare reality is that 3-2-1 and testing is so much more of an important factor.

I disagree. It’s quite different. Re: Storj – I’ve read their whitepaper, glanced through the code, and convinced myself that this is a model I can trust; it makes a lot of sense and is mathematically solid. You are right, I don’t know what Backblaze is actually doing outside of what they wrote in their blog posts (erasure coding across pods), so I have to extrapolate from what I do know – and that is mishaps, misleading and self-contradicting blog posts, and absolutely horrific software quality in their business/personal backup tooling, with data-affecting bugs that they were pointed to and repeatedly either ignored or presented as features. You might argue that the quality of their endpoint tools may not be indicative of their backend design – but I will argue that it might be; it’s quite a small company. And I don’t want to hinge my data safety on “maybe”. In this realm, one strike and you are out, especially when so many competitors exist. I guess that might explain why they are still unprofitable, while all of their competitors got to profitability much faster.

Yes, I believe some bugs are just bugs, and some are indicative of systemic design failures, borderline negligence, or aggressive corner cutting that cannot be patched easily and represents an actual threat. This belief is rooted in my experience in software and systems engineering. Under no circumstances should bad data be returned to customers; it’s a matter of implementing a series of checks and verifications. They lacked them. Moreover, they lacked the ability to detect such corruption, or even identify affected customers, after the fact. This is bad system design; whether it is negligence or incompetence does not matter. It’s not just “oopsie, we deployed a bad change”.

Well, if you read the article, a human messed up in that case. It wasn’t a system design flaw. You can rm -rf your data accidentally as well; it’s not really indicative of anything. The human factor can override any precautions and security measures.

It is, nobody argues otherwise. We are discussing specific storage providers.