Let me dive head-first into this juicy off-topic.
Yes, unRAID is a problem. It has run its course. There is no reason to use it today, when objectively superior technologies exist.
I’ll elaborate.
unRAID back in the day solved a very important problem for home users - making use of the zoo of randomly sized disks lying around that would otherwise end up in a landfill. They invented their own RAID layout (a massive feat, kudos to them), the UI was fairly decent too, and they gained users.
Today the landscape is completely different.
- Because industry storage demand grows at an ever-increasing pace, the secondary market is flooded with slightly used enterprise disks at half to a quarter of the retail price. You can buy a coherent, uniform set of 10-18TB drives for cheap. The “use what you have” argument is obsolete.
- Electricity costs have soared, and unlike an unRAID array (which can keep idle disks spun down), ZFS spins up all vdev members during use, so random consumer disks make even less sense. Coherent sets of enterprise drives are far more power-efficient per TB and perform better (a back-of-the-envelope power sketch follows this list).
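To put a rough number on the power argument, here is a minimal back-of-the-envelope sketch. The drive count, idle wattage, and electricity price are all assumptions, so plug in your own figures:

```python
# Back-of-the-envelope: annual electricity cost of keeping a pool of disks spinning.
# Every figure below is an assumption - substitute your own drive count, idle wattage,
# and local price per kWh.

DRIVES = 8             # disks in the pool
IDLE_WATTS = 5.5       # idle draw per spinning 3.5" drive (assumed)
PRICE_PER_KWH = 0.30   # USD per kWh (assumed)
HOURS_PER_YEAR = 24 * 365

kwh_per_year = DRIVES * IDLE_WATTS * HOURS_PER_YEAR / 1000
cost_per_year = kwh_per_year * PRICE_PER_KWH
print(f"{kwh_per_year:.0f} kWh/year, ~${cost_per_year:.0f}/year just to keep the spindles turning")
```

With those assumed numbers, eight always-spinning drives burn roughly 385 kWh a year; fewer, larger drives mean fewer spindles and a smaller bill.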
Therefore there is no reason to cling to an old collection of random mismatched disks. You can always sell them on eBay and buy a coherent set of equally sized, enterprise-grade ones.
Moreover, even if cost savings are not a priority, I would still strongly suggest not buying new drives. Buying slightly used and refurbished drives moves you past the left slope of the bathtub curve, i.e. the infant-mortality region where early failures cluster.
- But what about warranty? Nothing. A warranty is only worth roughly AFR * (cost of a disk) per year of coverage. With a realistic AFR, the expected value of a 5-year warranty is tiny, tens of dollars, while buying used saves you hundreds. Therefore the warranty can be ignored; it is not worth the retail markup. (And not only do you pay more - if the disk fails, the replacement you get is a refurb anyway, which actually works. So why not get one from the get-go, at half the price?) A rough expected-value sketch follows this list.
- And yet, a lot of vendors offer their own no-questions-asked warranty on used disks anyway.
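Here is a rough sketch of the warranty expected-value argument above. The AFR and both prices are assumptions; substitute figures for the drives you are actually comparing:

```python
# Rough expected value of a manufacturer warranty vs. the savings from buying used.
# AFR and both prices are assumptions - plug in figures for the drives you are comparing.

AFR = 0.015            # annualized failure rate, ~1.5% (assumed)
WARRANTY_YEARS = 5
PRICE_NEW = 350.0      # new enterprise-class drive (assumed)
PRICE_USED = 160.0     # slightly used / refurbished equivalent (assumed)

# Probability the drive fails at least once within the warranty term,
# times what the replacement is actually worth (usually a refurb anyway).
p_fail = 1 - (1 - AFR) ** WARRANTY_YEARS
warranty_value = p_fail * PRICE_USED

print(f"chance of failure within {WARRANTY_YEARS} years: {p_fail:.1%}")
print(f"expected value of the warranty: ~${warranty_value:.0f}")
print(f"savings from buying used up front: ~${PRICE_NEW - PRICE_USED:.0f}")
```

With these assumed numbers the warranty is worth on the order of $12 in expectation, against roughly $190 saved by buying used up front.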
This modern landscape completely removes the advantage unRAID historically provided. Now it is yet another NAS OS that can run containers - so they double down on that. But the design still carries architectural compromises that no longer have any reason to exist. You get no pros, just cons, and the sunk-cost fallacy.
Noteworthy: they have acknowledged this by adding support for ZFS on Linux. ZFS is the correct filesystem to use in these scenarios today. Unfortunately, bolting another filesystem onto the legacy architecture does not magically undo the compromises.
What to use?
If you want just a rock-solid NAS, look at the FreeBSD-based TrueNAS Core 13.3. No new features, only critical bug fixes, but it is very fast, robust, stable, and will last another 10 years. You can still run software in FreeBSD jails and/or bhyve.
If you want containers and “apps”, TrueNAS Scale is where iXsystems is moving today. It is a similar platform, but built on Linux, because users asked for containers. Docker containers carry much higher overhead than jails, but there are developments (like LXC) that narrow the gap.
Still, I"m bulling a third server now, and it is going to run Core, I won’t touch Scale for another 10 years at least, because I don’t believe they will manage to get to parity with Core. They still keep fixing silly bugs and regression that have been long resolved in Core. Anyway, Core vs Scale and FreeBSD vs Linux is another rant for another time.
And yes, my Core box at home runs Plex in a jail just fine.
Oh, and TrueNAS is free for home users. Completely.
Another piece of good news is that only file enumeration and small-block access are affected. Large media files that are read sequentially should not experience noticeable degradation.
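If you want to sanity-check that on your own pool, here is a minimal sketch that compares sequential throughput against small random reads. The path is hypothetical, the file size is deliberately small, and page-cache effects will inflate the numbers - for honest results use a file larger than RAM or drop caches between runs:

```python
import os
import random
import time

PATH = "/mnt/tank/benchfile"   # hypothetical path on the pool under test
SIZE = 256 * 1024 * 1024       # 256 MiB toy file; use something larger than RAM for real numbers
BLOCK = 128 * 1024             # sequential read chunk size

# Create the test file once.
if not os.path.exists(PATH):
    with open(PATH, "wb") as f:
        f.write(os.urandom(SIZE))

def sequential_mbps():
    start = time.perf_counter()
    with open(PATH, "rb") as f:
        while f.read(BLOCK):
            pass
    return SIZE / (time.perf_counter() - start) / 1e6

def random_4k_mbps(iterations=2000, block=4096):
    start = time.perf_counter()
    with open(PATH, "rb") as f:
        for _ in range(iterations):
            f.seek(random.randrange(0, SIZE - block))
            f.read(block)
    return iterations * block / (time.perf_counter() - start) / 1e6

print(f"sequential : {sequential_mbps():.1f} MB/s")
print(f"random 4K  : {random_4k_mbps():.1f} MB/s")
```

The gap between the two numbers is the point: streaming a movie is the first pattern, enumerating a directory full of small files is closer to the second.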