Moving Away from Wasabi - GCA, Storj and IDrive e2?

Since Wasabi hiked its prices, I’m seeing an 80% increase on my bill, which I only realized last week as I’ve been quite occupied over the past few months. (It’s not a lot in the grand scheme of things, given the relatively small amount of data I have, but still.)

So far, after browsing this forum the whole morning, I’ve narrowed the alternatives down to the following (rough monthly cost math below):

  • Google Archival Storage ($0.0012 / GiB / mo)
  • Storj ($0.004 / GiB / mo)
  • IDrive e2 ($150 / yr / 5 TB = $0.0025 / GiB / mo)
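For a rough sense of scale, here’s a back-of-the-envelope comparison for my dataset (roughly 3.6 TiB). A minimal Python sketch using the per-GiB monthly rates above; egress, API/segment fees, and minimum-retention charges are ignored:

```python
# Quick monthly-cost comparison for ~3.6 TiB (3605 GiB) of stored data.
# Rates are the per-GiB monthly prices listed above.
data_gib = 3605

rates = {
    "Google Archival Storage": 0.0012,
    "Storj": 0.004,
    "IDrive e2": 0.0025,
}

for name, per_gib_month in rates.items():
    print(f"{name:25s} ~${data_gib * per_gib_month:6.2f} / month")
```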

It seems Google Archival Storage is probably both the safest and the cheapest option. Is there anything I should be aware of when moving from Wasabi?

Cheers.

It depends on your data turnover: if it’s high, Google may end up being very expensive due to its very long minimum retention period (1 year). If you are OK with that, then sure, use Google.

Otherwise, Storj. You might want to increase the average chunk size to reduce the number of segments.
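To make the turnover point concrete, here is a rough sketch. It assumes early-deleted data is simply billed for the full minimum storage duration (365 days for Google Archive; Storj has no minimum as far as I know):

```python
# Effective cost per GiB for a chunk that is pruned after `lifetime_days`,
# assuming the provider bills for at least `minimum_days` of storage.
def effective_cost_per_gib(rate_per_gib_month, lifetime_days, minimum_days):
    billed_days = max(lifetime_days, minimum_days)
    return rate_per_gib_month * billed_days / 30

# Example: chunks that only live ~30 days before being pruned
for name, rate, minimum in [
    ("Google Archive", 0.0012, 365),  # 1-year minimum storage duration
    ("Storj", 0.004, 0),              # no minimum, as far as I know
]:
    print(f"{name:15s} ${effective_cost_per_gib(rate, 30, minimum):.4f} per GiB")
```

With high churn, the nominally cheaper archive tier ends up costing several times more per GiB than Storj.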

Nobody shall be using iDrive for anything ever.

Thanks! I overlooked the minimum retention period of Google Archival Storage.

How about Google Coldline? It has the same pricing as Storj, and also a similar (?) retention policy to Wasabi (90 days), so there should be minimal adjustment needed for my setup?

  • Google Coldline Storage ($0.004 / GiB / mo)

Cheers.

Oops, I found this post:

It seems Google Archival is out of the question then.

Cheers.

BTW, I see that you mentioned the chunk size on Storj elsewhere.

I wonder: if there are lots of small files that are <64 MB, will that cause bloating?

Not sure if I have any, but I just want to know in advance.

Cheers.

Just checked the log.

Total chunks: 727508
Total Used: 3605G

Go with IDrive e2, performance will be good. I moved to them recently, also from Wasabi.

It should not. Duplicacy does not care about small files. Everything gets collected into a sausage and then shredded into chunks.

This is consistent with the default 4MiB average chunk size.
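A quick sanity check of those numbers, as a rough sketch using the totals from the log above:

```python
total_chunks = 727_508
total_used_gib = 3605

avg_chunk_mib = total_used_gib * 1024 / total_chunks
print(f"average chunk size ≈ {avg_chunk_mib:.2f} MiB")  # ≈ 5.1 MiB

# Slightly above the 4 MiB target, which is expected: with variable-size
# chunking, individual chunks vary between the configured min and max sizes.
```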

That’s not what matters in a backup solution. Please search this and other forums for the reasoning why these services should be avoided in general, and iDrive in particular. This has been discussed before.

The fact that you have uploaded some data to the service designed for this is hardly an endorsement.

Thanks a lot for your valuable advice.

Playing devil’s advocate here: I’ve had much better success with IDrive E2 than I ever had with Wasabi.

Every cloud backup storage provider has horror stories; IDrive had most of theirs when they were newer. Currently, I don’t see any higher issue rate reported online for IDrive E2 compared to Backblaze B2 or Wasabi.

I don’t have that much data; I’ve migrated to GCS. I’m trying out Autoclass, as I assume chunks will age out over time (presumably running periodic basic checks only accesses the metadata). I’m currently in the 3-month free trial period.

This one:

I would stay away from iDrive…


FWIW, I’ve had all sorts of issues with Storj not working correctly on my ~8 TB backup. I have a couple of threads about it, and despite some heroic efforts by forum members, we could never resolve them.

I’ve started testing IDrive e2 now, and it seems to be faster, more stable, and cheaper. I have several years with Storj and only a couple of weeks with IDrive, but it already feels significantly better. They even have an auto-migrate option to move your data from another provider, and they don’t charge for it (although you might have to pay egress from your other service).

If anything changes, I’ll update here. But I think a lot of the IDrive issues from 2022 have now been resolved in 2024.


In recent news:

  • Storj’s performance at smaller object sizes has improved drastically. You can now expect Storj to have round-the-clock average performance comparable to Wasabi. This has been tested directly with Veeam, and performance enhancements will continue with each release.
  • We have new mechanisms to minimize or eliminate segment fees.
  • Storj will support S3-enabled object locking in the coming months, but we will defer to our public roadmap for an ongoing ETA.

Yeah, but this still does not address the original concerns: how can they offer the prices they do when even Backblaze, at much higher prices, is not profitable? The implied corner-cutting is therefore still a concern. There is nothing wrong with such a provider for hot data, but backup requires stronger assurances.

Let’s hope they changed their development, testing, and QA processes, hired competent people, and this is a real trend and not a random period of lower bug count.

I went down a few threads from what you linked, and it seems like most tools have specific guides for optimizing for Storj, such as: Guides to Using Third-Party Tools - Storj Docs

I wonder if we could figure out and post ideal settings for Duplicacy, too?

All that being said, I still have the issues that you helped me with a while back, where any task that takes a long time starts failing on Storj (backup, check, restore, etc.): Check fails with multiple errors ("can't be found" and "doesn't seem to be encrypted")

Ideally, Duplicacy should figure out the right settings for the remote automatically. But I agree, publishing a step-by-step guide there might be useful. In this case, probably just increasing the average chunk size will suffice.
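For example, here is a rough estimate of how the average chunk size maps to Storj segment counts for a ~3.6 TiB backup like the one above, assuming each chunk is uploaded as a single object and anything under Storj’s 64 MiB segment size counts as one segment:

```python
total_gib = 3605  # from the log earlier in the thread

for avg_chunk_mib in (4, 16, 32, 64):
    segments = total_gib * 1024 / avg_chunk_mib
    print(f"average chunk {avg_chunk_mib:>2} MiB -> ~{segments:,.0f} segments")
```

So going from the default 4 MiB average up toward 64 MiB would cut the segment count, and hence any per-segment fees, by roughly a factor of 16.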

Wasn’t that due to network/modem stability and the sheer number of requests Storj sent? It would be interesting to retest now to see if anything has changed. I have a friend with fiber internet; I’ll try to re-run those tests sometime later this week.