Unraid architecture vs storage landscape today

Let me dive head-first into this juicy off-topic.

Yes, Unraid is a problem. It has run its course. There is no reason to use it today, when objectively superior technologies exist.

I’ll elaborate.

Unraid back in the day solved a very important problem for home users: making use of the zoo of disks of random sizes lying around that would otherwise end up in a landfill. They invented their own RAID layout (a massive feat, kudos to them), the UI was fairly decent too, and they gained users.

Today the landscape is completely different.

  • Because industry storage demands grow at an ever-increasing pace, the secondary market is flooded with slightly used enterprise disks at half to a quarter of the price. You can buy coherent, uniform sets of 10-18 TB drives for cheap. The “use what you have” argument is obsolete.
  • Electricity costs have soared, and unlike an Unraid array (which can keep idle disks spun down), ZFS spins all vdev members during use, so random consumer disks make even less sense. Coherent sets of enterprise drives are far more power-efficient per TB and perform better (rough numbers in the sketch below).
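
To put rough numbers on that last point, here is a back-of-the-envelope sketch. The wattages and capacities are assumed ballpark figures for 3.5" drives, not measurements of any specific model:

```python
# Back-of-the-envelope watts-per-TB comparison.
# All figures below are illustrative assumptions, not measurements.

def watts_per_tb(drive_count: int, tb_per_drive: float, watts_per_drive: float) -> float:
    """Average power draw per terabyte of raw capacity, ignoring parity overhead."""
    return (drive_count * watts_per_drive) / (drive_count * tb_per_drive)

# A pile of old mismatched consumer drives: eight drives averaging 4 TB, ~6 W each.
legacy = watts_per_tb(drive_count=8, tb_per_drive=4, watts_per_drive=6.0)

# A small uniform set of used enterprise drives: three 16 TB drives, ~8 W each.
enterprise = watts_per_tb(drive_count=3, tb_per_drive=16, watts_per_drive=8.0)

print(f"mismatched 4 TB drives : {legacy:.2f} W/TB")      # ~1.50 W/TB
print(f"uniform 16 TB drives   : {enterprise:.2f} W/TB")  # ~0.50 W/TB
```

Exact wattages vary by model, but the ratio is the point: fewer, larger drives win on watts per terabyte before spindown even enters the picture.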

Therefore there is no reason to cling to an old collection of random mismatched disks. You can always sell them on eBay and buy a coherent collection of equally sized, enterprise-grade ones.

Moreover, even if cost savings are not a priority, I would still strongly suggest not buying new drives. Buying slightly used and refurbished moves you off the left slope of the bathtub curve, past the infant-mortality phase.

  • But what about warranty? Nothing. A warranty is only worth roughly AFR × (cost of a disk) per year of coverage. With a realistic AFR, the expected value of a 5-year warranty is tiny, tens of dollars, while buying used saves you hundreds. Therefore the warranty can be ignored; it is not worth the retail markup. (And not only do you pay more: if the disk fails, you get a refurb as the replacement anyway, and it works fine. So why not get one from the get-go, at half the price?) See the worked example after this list.
  • And yet, a lot of vendors offer their own no-questions-asked warranty on used disks anyway.
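
To make that arithmetic concrete, here is a minimal worked example. The AFR and prices are assumptions for illustration, not quotes from any vendor:

```python
# Expected value of a warranty vs. the savings from buying used.
# AFR and prices are illustrative assumptions, not real quotes.

afr = 0.015            # assumed annualized failure rate (~1.5%)
warranty_years = 5
price_new = 300.0      # assumed retail price of a new enterprise drive
price_used = 150.0     # assumed price of the same capacity, slightly used

# Rough expected payout of the warranty: chance of failing during coverage
# times the value of the replacement (ignoring depreciation, which only
# makes the warranty worth even less).
expected_warranty_value = warranty_years * afr * price_new

savings_buying_used = price_new - price_used

print(f"expected warranty value : ${expected_warranty_value:.0f}")  # ~$22
print(f"savings buying used     : ${savings_buying_used:.0f}")      # $150
```

Even doubling the assumed AFR leaves the warranty worth far less than what buying used saves up front.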

This modern landscape completely removes the advantage Unraid historically provided. Now it is yet another NAS OS that can run containers, so they double down on that. But the design still suffers from architectural compromises that no longer have any reason to exist. You get no pros, just cons, and the sunk-cost fallacy.

Noteworthy: they have acknowledged this themselves by adding support for ZFS on Linux. ZFS is the correct filesystem to use in these scenarios today. Unfortunately, bolting another filesystem onto a legacy architecture does not magically undo the compromises.

What to use?

If you want just a rock-solid NAS, look at the FreeBSD-based TrueNAS Core 13.3. No new features, only critical bug fixes, but it is very fast, robust, and stable, and it will last another 10 years. You can still run software in FreeBSD jails and/or bhyve.

If you want containers and “apps”, TrueNAS Scale is where iXsystems is heading today. It is a similar platform, but built on Linux, because users asked for containers. Docker containers have much higher overhead than jails, but there are developments (like LXC) that narrow the gap.

Still, I’m building a third server now, and it is going to run Core. I won’t touch Scale for another 10 years at least, because I don’t believe they will manage to reach parity with Core; they are still fixing silly bugs and regressions that were long ago resolved in Core. Anyway, Core vs Scale and FreeBSD vs Linux is another rant for another time.

And yes, my Core box at home runs Plex in a jail just fine.

Oh, and TrueNAS is free for home users. Completely.

Another piece of good news is that only file enumeration and small-block access are affected. Large media files that are read sequentially should not experience noticeable degradation.

This is wrong on so many levels, but literally none of what you said applies outside of the U.S. anyway (and, I suspect, won’t for much longer given the economic situation there). Used, new, enterprise, whatever: large capacity still isn’t cheap, and HDD prices, rather than falling, are already creeping back up.

If you’re starting from scratch, same-size drives make sense, but that is the only point in the lifecycle of a server/NAS where that holds, and anyone, anywhere, is only there once. For all other times, such as when you add new capacity over the years, mismatched drives (for mass media) are not only far more space- and cost-effective, they provide the most flexibility. Even RAID-Z expansion still isn’t as flexible as JBOD, and you already explained how ZFS can’t spin drives down, which contradicts the cost argument for anyone living outside of France or Texas.

I can’t speak about Unraid too much, but I do know it offers flexibility for those who do need mismatched drives, who do want to spin drives down, and who do want an all-in-one NAS solution (with VMs and Docker) on less capable hardware. Plus it has an extremely good community to support less technically inclined users.

Personally, I’d probably pick TrueNAS over Unraid myself, but I find even that option restrictive compared to tailoring my storage under Proxmox: ZFS RAIDZ1 (with a mirrored special vdev), plus 4 mismatched drives ranging from 8-14 TB under single-parity SnapRAID pooled with mergerfs. I’m not restricted to just ZFS and can mount literally everything through an LXC, Docker in an LXC/VM, or even a VM (via VirtioFS). ZFS makes absolutely no sense for mass media.

Now, if you can’t imagine why Unraid might still be viable for anyone, I could say the same about TrueNAS. But I don’t, because people get to choose what works for them, based on how much they want their hand held or how much they’re willing to spend on storage in their country.

I’m all ears.

That’s a regional pricing issue, not a technical argument. Cloud offload cycles are global. The EU/UK/AU/SG all have a steady supply of 10–18 TB ex-hyperscale drives at commodity prices. If your country blocks imports or slaps punitive duties on them, that is customs policy, not evidence that mismatched disks are somehow technically advantageous.

“Flexible” is not synonymous with “architecturally sound.” JBOD + a scheduled-parity layer (SnapRAID) + mergerfs is exactly the same patchwork Unraid packaged up for amateurs fifteen years ago. It still works, but it is a workaround born from scarcity. Scarcity is gone. Uniform sets of used enterprise drives eliminate the original justification for the Frankenstein layout.

Unraid acknowledged that: they bolted on ZFS support, however poorly. You still seem to cling to it.

True but irrelevant. Flexibility for its own sake isn’t a metric — correctness and predictability are. JBOD’s “flexibility” is exactly the ability to glue arbitrary disks together with no invariants and no guarantees. ZFS’s constraints are deliberate: they preserve consistency, performance characteristics, and recovery semantics across the pool. That’s not “less flexible”; that’s less willing to accept garbage layouts.

If your workflow requires plugging in a random disk every six months, you don’t have a storage architecture — you have a junk drawer. ZFS explicitly avoids that class of failure mode.

Right — because its coherency model assumes devices are available. That is a design decision, not a defect. It gives you deterministic latency, robust self-healing, and correct scrubbing. Spindown breaks all of that.

If your top priority is kWh minimization, fine — but that is a power-grid constraint, not a filesystem argument. You’re optimizing around electricity costs, not around data integrity or architecture.

Local electricity pricing doesn’t invalidate ZFS; it invalidates running multi-drive arrays in your region. That’s an environmental cost structure, not a justification for JBOD or Unraid parity schemes.

If your grid forces you to prioritize sleep/spindown over architectural coherence, then yes — you end up in the Unraid/SnapRAID/JBOD bucket because that’s the only bucket compatible with your constraint. But that does not make that bucket technically superior. It makes it the cheapest to keep powered on.
The correct solution here is to minimize the number of drives, not to increase sleep times. No drive is optimized for sleep-wake cycling, and power management in general is an inherently difficult problem to get right. You want to avoid dealing with it as much as possible. Keeping disks spinning is a low price to pay for reliability.

False.
• recordsize=1M yields sequential throughput at device limits.
• special vdev removes metadata bottlenecks.
• scrub/repair correctness matters more as total capacity grows; bit rot doesn’t discriminate between movies and VM images.
The only reason ZFS “makes no sense” is if your top priority is disk spindown. And as discussed above, that is an electricity-pricing-driven requirement, not a filesystem argument. (A minimal sketch of that tuning follows.)
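
Here is what that tuning looks like in practice, as a dry-run sketch that only prints the commands. Pool, dataset, and device names are hypothetical placeholders; adapt them before pointing this at a real pool:

```python
# Dry-run sketch of ZFS tuning for a media pool: large recordsize for
# sequential media, plus a mirrored special vdev for metadata/small blocks.
# Pool, dataset, and device names are hypothetical placeholders.
import shlex
import subprocess

COMMANDS = [
    # Large records so sequential media reads run at device limits.
    "zfs create -o recordsize=1M -o compression=lz4 tank/media",
    # A mirrored special vdev takes metadata off the spinning rust.
    "zpool add tank special mirror /dev/nvme0n1 /dev/nvme1n1",
    # Optionally route small blocks (thumbnails, .nfo files) to the special vdev.
    "zfs set special_small_blocks=64K tank/media",
    # Periodic scrub keeps checksum verification and self-healing honest.
    "zpool scrub tank",
]

def run(dry_run: bool = True) -> None:
    for cmd in COMMANDS:
        print(cmd)
        if not dry_run:
            subprocess.run(shlex.split(cmd), check=True)

if __name__ == "__main__":
    run(dry_run=True)  # flip to False only on a throwaway test pool
```

The dry-run default is deliberate: these commands alter pool layout, and a special vdev generally cannot be removed again from a pool that contains raidz vdevs.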

Again, this is optimizing around constraints you’re choosing to keep. The desire itself is objectively wrong: spindown is directly at odds with reliability and predictability. If you want to maximize the reliability of your storage server (who doesn’t?), spinning down disks should not be on your list. If your power grid is expensive or unstable, then sure, spindown becomes a local requirement. That, however, doesn’t make Unraid technically superior. It just means you’re designing to a local minimum rather than for a robust architecture.

In 2025, adding “whatever random disk you found” is a self-inflicted problem. Adding a vdev to a pool is instant. RAID-Z expansion now exists as well. Used enterprise drives are cheap. Building a proper vdev is safer, cleaner, and performs better than a grab-bag array whose layout changes every time someone plugs in a garage-sale HDD.
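
For reference, both growth paths are one-liners. This is a dry-run sketch with hypothetical pool and device names; RAID-Z expansion needs OpenZFS 2.3 or newer:

```python
# Two ways to grow a ZFS pool, shown dry-run. Names are hypothetical placeholders.
GROW_COMMANDS = [
    # Add a whole new vdev: the capacity is usable immediately, with no
    # rebuild of existing data.
    "zpool add tank raidz1 /dev/sdd /dev/sde /dev/sdf",
    # RAID-Z expansion (OpenZFS 2.3+): widen an existing raidz vdev by one
    # disk; the reflow happens in the background.
    "zpool attach tank raidz1-0 /dev/sdg",
]

for cmd in GROW_COMMANDS:
    print(cmd)
```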

UI friendliness isn’t the argument here. Of course Unraid’s onboarding is easier. That was always its strength. My point is architectural: the advantages Unraid used to have (heterogeneous disks, power savings, parity tricks) were compensations for an era of expensive, tiny disks. That era is over for the majority of NAS-scale users.

That’s a cop-out. It’s a truism, not a technical counterpoint. Of course preferences differ; they are also irrelevant. The question was whether Unraid’s legacy architecture still makes objective sense given modern drive economics and modern failure modes, and it resoundingly doesn’t. You can still like it without pretending that regional energy pricing and incremental-disk habits are universal engineering principles.

Basically, I fail to see anything here addressing the technical claims. “Millions of lemmings cannot be wrong” and “people know what’s best for them” are not valid arguments. Because yes, they can be, and no, they don’t.

You made the claim that multi-TB enterprise drives were ‘cheap’ and that “use what you have” is obsolete. These are pricing arguments, not technical ones, and it was you who brought them up…

The U.S. is (was) the exception rather than the rule, so your claim is factually wrong. Even just a few years ago, refurbished drives weren’t much of a thing in the U.S.; it was shucking. (We still shuck in the rest of the world for similar reasons, just as, temporarily, enterprise refurbs are a thing for you now. They aren’t for us, and they haven’t always been for either of us.)

This isn’t a workaround; it’s an efficient use of storage that’s architecturally just as good as ZFS. Dollar for dollar, ZFS can’t compete with this level of flexibility, unless money isn’t a factor for you. For most people, it is.

Some of my drives are NAS-grade and have been running 24/7 for 10+ years, some for 8 years, some for 3, 2, etc. If I had had to rebuild a ZFS array at every stage of upgrade over the years, I’d have spent thousands more in £ to achieve the same level of storage and resiliency. These facts are irrefutable. You can choose to ignore these factors; most ordinary people cannot. Hence Unraid is still a thing.

I don’t know why you’re confusing JBOD with offline storage?

Again, you’re living in cloud cuckoo land - or are very rich - if you think everybody can downsize the number of drives at a whim.

Prior to the Ukraine war, when electricity prices were a third of what they are now, all my drives ran 24/7. As I said, I’ve got at least 3 working NAS-grade drives with ~11 years of power-on time (and they still run 24/7 coz they’re NAS drives). Nowadays, I power down the white-label shucked drives in the summer months and leave everything on in winter. It’s a happy medium for which I carefully evaluated the pros and cons. There is no right or wrong setup.

Completely unnecessary for my media files.

Completely unnecessary for my media files.

SnapRAID solves that completely for me.

This isn’t for you to decide.

Since warranties and 3-2-1 backups are a thing, I think most of us adults can make a sane cost-benefit analysis to decide how much in electricity costs we can save by spinning drives down versus leaving them on 24/7 and risking early failure. Some of us also like to factor in noise at night.

Also, you’re gonna need to provide some actual evidence (rather than theory) that shows spindown is directly at odds with reliability. These are outdated rote claims that require testing in the real world.

Adding a drive to a JBOD is instant; I can also remove that drive in an instant, and I can directly access all the drives individually on another system if I so choose.

Not cheap enough outside of the U.S. 80% of the price for 50% or less of the longevity isn’t a sensible option to me, but you do you.

First, I refuse to have this discussion curtailed to the merely ‘technical’: you can’t just choose to ignore all the other factors, of which there are many. (We all know it’s coz these inconvenient factors destroy your main claim that Unraid has no purpose in 2025, which is ofc absurd.)

Second, you haven’t even demonstrated that the many alternatives (mergerfs, SnapRAID, Unraid et al) are technically less capable than TrueNAS/ZFS anyway.

“Basically I fail to see anything addressing technical claims.”

A Formula 1 car has the best speed and is technically superior to all other cars. It’s also not very practical on a London street (and prohibitively expensive besides). Your argument is basically that everyone should be driving the same car (a Tesla, maybe), or running the same OS (TrueNAS).

Finally, telling people living in a completely different economy, that “they don’t know what’s best for them” is super fucking arrogant to say the least.

Oh my.

The claim that a JBOD + mergerfs + SnapRAID stack reaches the architectural level of ZFS collapses immediately once you look at how these systems behave under failure, how they maintain coherency, and how they guarantee correctness. ZFS provides continuous redundancy, atomic transaction groups, end-to-end checksums for every block, and an integrated recovery model. Your setup works through disconnected layers with independent failure semantics, intermittent parity coverage, and no unified view of on-disk state. That is not equivalence in any meaningful engineering sense.
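
To illustrate the end-to-end checksum point in the abstract: the checksum recorded at write time is verified on every read, and a failing copy is repaired from a redundant one. The toy sketch below is a conceptual illustration only, not ZFS’s actual code or on-disk format:

```python
# Toy illustration of end-to-end checksumming with self-healing reads.
# Conceptual sketch only; not ZFS's actual implementation or on-disk format.
import hashlib
from dataclasses import dataclass

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

@dataclass
class BlockPointer:
    expected: str            # checksum recorded when the block was written
    copies: list[bytearray]  # redundant copies (e.g. the two sides of a mirror)

    def read(self) -> bytes:
        """Return verified data, repairing any copy that fails its checksum."""
        good = None
        for copy in self.copies:
            if checksum(bytes(copy)) == self.expected:
                good = bytes(copy)
                break
        if good is None:
            raise IOError("all copies failed checksum: unrecoverable block")
        # Self-heal: rewrite any corrupted copy from the verified one.
        for copy in self.copies:
            if checksum(bytes(copy)) != self.expected:
                copy[:] = good
        return good

# Usage: write a block to a "mirror", flip a bit on one side, read it back.
payload = b"data block stored redundantly"
bp = BlockPointer(expected=checksum(payload),
                  copies=[bytearray(payload), bytearray(payload)])
bp.copies[1][0] ^= 0xFF                 # silent corruption on one copy
assert bp.read() == payload             # read still returns correct data...
assert bytes(bp.copies[1]) == payload   # ...and the bad copy was repaired
```

The contrast with the layered stack is the timing: there, detection happens at scrub or sync time, not on every read.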

The argument about drive prices outside the U.S. is also outdated. Hyperscale inventory offload is global. eBay’s GSP, AliExpress refurb vendors, UK/EU refurb aggregators, and standard cross-border logistics all deliver 10–18 TB enterprise drives everywhere broadband exists. Fees vary, but availability does not. Using local pricing irregularities to justify a 2010-era architecture doesn’t hold up.

Furthermore, local electricity costs, personal anecdotes about what survived 11 years, or regional quirks in the power grid have no bearing on the structure or correctness of a storage system. Those explain your personal constraints, nothing more. They don’t promote a layered, loosely-coordinated stack to the level of a coherent filesystem with well-defined invariants.

The unresolved question remains simple: once you accept that enterprise drives are globally accessible, what architectural advantage does the Unraid/SnapRAID/mergerfs lineage have over ZFS? If the honest answer is that the only advantages come from constraints you personally choose to optimize for—pricing, spindown habits, availability of second-hand disks—then that concedes the technical point entirely. The rest is personal economics, entirely irrelevant to the discussion.

I could address your other tangents, such as shucking (nobody in their right mind would do that, not then and not now), Formula 1 (a bad analogy and a distraction), and other goalpost moves, but I don’t find that a productive use of anyone’s time.

I know full well how these systems work - I’ve been using them successfully for over a decade.

I’ve even had a scenario where ZFS would have lost me more (i.e. all) data if I hadn’t had good backups: two drive failures on single parity. On my setup I was still able to recover data directly from all the remaining good drives, something ZFS simply can NOT do. As a result, recovery from backups is faster, because you don’t have to rebuild everything. (In a worst-case scenario, you can still recover some family photos, which aren’t spread across drives in blocks!)

Meaningless blah.

And? I run syncs daily. I lose, at most, the data added since the last sync. Just like any backup. I can mitigate entirely against that by using a mergerfs cache mirror in the pool. Next…
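
(For context, that nightly job is just a thin wrapper around the SnapRAID CLI, something like the sketch below. The scrub percentage and age threshold are my own illustrative choices, not a recommendation:)

```python
# Hedged sketch of a nightly SnapRAID job: report what changed, sync parity,
# then scrub a slice of the array. Percentages and scheduling are assumptions.
import subprocess

def run(cmd: list[str]) -> subprocess.CompletedProcess:
    print("+", " ".join(cmd))
    return subprocess.run(cmd, capture_output=True, text=True)

def nightly() -> None:
    # 'diff' reports adds/removes/moves since the last sync.
    diff = run(["snapraid", "diff"])
    print(diff.stdout)
    # Update parity; anything written since the previous sync is the
    # at-risk window referred to above.
    run(["snapraid", "sync"])
    # Scrub ~8% of the data older than 30 days, so the whole array gets
    # verified over time.
    run(["snapraid", "scrub", "-p", "8", "-o", "30"])

if __name__ == "__main__":
    nightly()
```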

Fake news.

The facts on the ground suggest this is more blah.

More blah. When you pay for my electricity bill, then you can decide what’s relevant to my situation.

Well, for a start: simplicity (it’s JBOD); recovery from more than one failure (see above); drive spindown, and thus lower noise and electricity costs; the ability to fill drives above 85% capacity without losing performance; the use of mismatched drives; swapping drives of different sizes in and out without rebuilding the array; and portability of data (e.g. in an emergency you can access the drives on a system that doesn’t have enough drive bays to hold the whole pool). All the while with full data integrity, at far less cost.

ZFS is expensive for storing mass media such as movies or shows. Few people in their right mind would choose that if there’s a better overall solution, and there is.

lol

“irrelevant”

Again, I welcome you to pay my electricity bill. lol

It’s really quite obvious that the only argument you can come close to making is the technical one (and even that demonstrably falls short), so you try to make all the other factors, the ones that put the nail in the coffin of your case, irrelevant to the discussion. Talk about moving goalposts!

Now there’s something we can agree on.