-.Recycle.Bin/*
-Plex-Media-Server/Library/Application Support/Plex Media Server/Media/localhost/*.bif
-kiwix-serve/*
-Plex-tmp/*
-Plex-var-tmp/*
Duplicacy will spend 4 minutes at this stage:
Loaded 5 include/exclude pattern(s)
Is that normal? It's kind of weird because the backup itself only takes a couple of seconds. It's frustrating that the filtering stage takes up 90% of the total time.
I am not entirely sure I wrote the filters correctly, though. What I want is just to exclude the top-level folders .Recycle.Bin, kiwix-serve, Plex-tmp and Plex-var-tmp. The only exception is the Plex-Media-Server/Library/Application Support/Plex Media Server/Media/localhost/ folder, where I want to exclude all *.bif files recursively.
The command I am using is duplicacy -log -verbose -stack backup -stats -threads 12
I don’t think rules have anything to do with the slowdown; it’s filesystem traversal itself.
Questions:
What happens if you run it twice in a row, without changing anything? Is the second time faster?
How much free RAM do you have on the server?
What is the OS?
What is the filesystem?
How is it mounted?
That said, your filters are inefficient and can be improved.
Use a trailing / for directory exclusion; /* is unnecessary and slows matching.
Recursion is automatic with *, as it also matches /
So, use this:
-.Recycle.Bin/
-kiwix-serve/
-Plex-tmp/
-Plex-var-tmp/
-Plex-Media-Server/Library/Application Support/Plex Media Server/Media/localhost/*.bif
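As a quick sanity check you can run this from the repository root (where the .duplicacy/filters file normally lives). This uses plain shell tools rather than duplicacy's own matcher, and the repository path is a placeholder, so treat it only as an approximation of what the rules should cover:
cd /path/to/repository/root
# files the *.bif rule is supposed to catch
find "Plex-Media-Server/Library/Application Support/Plex Media Server/Media/localhost" -type f -name '*.bif' | head
# top-level folders the other rules should skip entirely
ls -d .Recycle.Bin kiwix-serve Plex-tmp Plex-var-tmp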
The next step would be getting a spindump or flamegraph (or whatever analogue Go or your target OS provides) and actually seeing what it is spending time on during those four minutes.
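On Linux, a rough way to do that – assuming perf is available in whatever environment duplicacy runs in (it usually isn't preinstalled on Unraid or inside containers) and that the process is actually named duplicacy – would be:
perf record -F 99 -g -p "$(pidof duplicacy)" -- sleep 60
perf report --stdio | head -50
That samples the process for a minute during the slow phase and shows where the time is going.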
Same thing, even if I drop the /* from the filters as you suggested.
A lot — 107 GB.
Unraid 7.1.4
I am backing up from a ZFS NVMe to a ZFS NVMe on the same server.
I have no clue. I am not that Linux savvy, but this is what ChatGPT says:
"In Unraid, the appdata share is not mounted like a normal partition. Instead, it is provided through Unraid’s User Share system, which uses the shfs FUSE filesystem.
I don't have much experience with Unraid, but I think your robot friend is right – all user mounts on Unraid go via FUSE, and there is no way around it.
You can confirm by looking at the output of the mount command – see if you see any mention of fuse on the user mounts. You would likely see /mnt/user mounted as such – and that's the root of your problem: it makes walking the filesystem very slow.
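Something along these lines (the exact output will differ per system):
mount | grep -i fuse
findmnt -T /mnt/user    # shows which filesystem actually backs that path
If the filesystem type comes back as something like fuse.shfs, the share goes through FUSE.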
To confirm, cd to the root folder of your duplicacy repository (if duplicacy runs in a container, open a shell into the container and do it there) and run
time find . -type f > /dev/null
This will measure the time it takes to enumerate all files starting from the current directory (and throw away the result), and then print the time. If you see the same 4 minutes – duplicacy cannot do anything about this.
I don't think there is a way around this on Unraid, other than moving Plex's app data folder outside of the user mount, to a location that can be mounted directly, thus avoiding FUSE. Or maybe it's already accessible directly from other mount points? Or can it be mounted directly somewhere outside of /mnt/user?
If the find . -type f is fast but duplicacy is slow – that would be very unexpected. We could collect an strace from the duplicacy process during that time and look at the system calls, to potentially uncover another bottleneck.
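A minimal sketch of that (again assuming the process is literally named duplicacy and strace is available where it runs):
strace -c -f -p "$(pidof duplicacy)" -o /tmp/duplicacy-syscalls.txt
# let it run for a minute during the slow phase, then Ctrl-C;
# the -c summary shows which syscalls eat the wall time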
What I'm sure it is not – is duplicacy's regex/matching engine. It's very fast. You should not notice any performance issues from enabling filters.
Seems like you were right. Unraid might be the problem.
If I swap /mnt/user/appdata for /mnt/cache/appdata/ (thereby bypassing the FUSE layer and the shfs overhead), the total backup time goes from 6 minutes to 1 minute and 10 seconds.
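For anyone who wants to reproduce the comparison, the find test from earlier can simply be pointed at both paths (timings will obviously vary per system):
time find /mnt/user/appdata -type f > /dev/null
time find /mnt/cache/appdata -type f > /dev/null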
This is good news and bad news.
The good news is that the appdata folder is almost always stored on a single disk, so this solution is effective and doesn't have any considerable drawback.
The bad news is that other shares, such as, say, a folder called TV SHOWS, can be split across multiple drives, in which case this solution is not really possible. Well, technically it is, but it will make the backup directories confusing. If the user at some point switches out the drives (the main selling point of Unraid), the backup repository will have to be updated accordingly.
Seems like the best way to go is to use the FUSE layer for most backups in Unraid to avoid complications down the road. But since the appdata folder specifically stays on the same drive, we can make an exception.
Yes, Unraid is a problem. It has run its course. There is no reason to use it today, when objectively superior technologies exist.
I’ll elaborate.
Unraid, back in the day, solved a very important problem for home users - making use of the zoo of random-sized disks lying around that would otherwise end up in a landfill. They invented their own RAID layout (a massive feat, kudos to them), the UI was fairly decent too, and they gained users.
Today the landscape is completely different.
Because the industry's storage demands grow at an ever-increasing pace, the secondary market is flooded with slightly used enterprise disks at half to a quarter of the price. You can buy a coherent, uniform set of 10–18 TB drives for cheap. The "use what you have" argument is obsolete.
Electricity costs have soared, and unlike an Unraid array (which can keep idle disks spun down), ZFS spins all vdev members during use, so random consumer disks make even less sense. Coherent sets of enterprise drives are far more power-efficient per TB and perform better.
Therefore there is no reason to cling to an old collection of random, mismatched disks. You can always sell them on eBay and buy a coherent collection of equally sized, enterprise-grade ones.
Moreover, even if cost savings are not a priority – I would still strongly suggest not buying new drives. Buying slightly used and refurbished moves you away from the left slope of the bathtub curve.
But what about the warranty? Nothing. A warranty is only worth roughly AFR * (cost of a disk) per year of coverage. With a realistic AFR, the expected value of a 5-year warranty is tiny - tens of dollars. But buying used saves you hundreds. Therefore the warranty can be ignored; it is not worth the retail markup. (And not only do you pay more – if a disk fails, you get a refurb anyway, which actually works. So why not get it from the get-go, at half the price?)
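Back-of-the-envelope, with made-up but plausible numbers (1.5% AFR, a $250 drive, 5-year term):
\mathrm{EV}(\text{warranty}) \approx \text{term} \times \mathrm{AFR} \times \text{price} = 5 \times 0.015 \times \$250 \approx \$19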
And yet, a lot of vendors offer their own no-questions-asked warranty on used disks anyway.
This modern landscape completely removes the advantage Unraid historically provided. Now it's yet another NAS OS that can run containers - so they double down on that. But the design still suffers from architectural compromises that now have no reason to exist. You get no pros, just cons, and the sunk cost fallacy.
Noteworthy: they have acknowledged that by adding support for ZFS on Linux. ZFS is the correct filesystem to use today in these scenarios. Unfortunately, bolting another filesystem onto a legacy architecture does not magically undo the compromises.
What to use?
If you want just a rock solid NAS – look at FreeBSD-based TrueNAS Core 13.3. No new features, only critical bug fixes, but it is very fast, robust, stable, and will last another 10 years. You can still run software in FreeBSD jails and/or bhyve.
If you want containers and "apps" – TrueNAS Scale is where iXsystems is heading today. It's a similar platform, but built on Linux, because users asked for containers. Docker containers have much higher overhead than jails, but there are some advancements (like LXC) to close the gap.
Still, I"m bulling a third server now, and it is going to run Core, I won’t touch Scale for another 10 years at least, because I don’t believe they will manage to get to parity with Core. They still keep fixing silly bugs and regression that have been long resolved in Core. Anyway, Core vs Scale and FreeBSD vs Linux is another rant for another time.
And yes, my Core box at home runs Plex in a jail just fine.
Oh, and TrueNAS is free for home users. Completely.
More good news is that only file enumeration and small-block access are affected. Large media files that are read sequentially should not see any noticeable degradation.
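If you want to sanity-check that on your own box, a crude comparison is timing a big sequential read through /mnt/user versus the direct disk path (file and disk names here are placeholders; drop the page cache between runs so the second read isn't served from RAM):
echo 3 > /proc/sys/vm/drop_caches
time dd if=/mnt/user/Media/some-large-file.mkv of=/dev/null bs=1M
echo 3 > /proc/sys/vm/drop_caches
time dd if=/mnt/disk1/Media/some-large-file.mkv of=/dev/null bs=1M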
This is wrong on so many levels, but literally none of what you said applies outside of the U.S. anyway (and, I suspect, won't for much longer with the economic situation there). Used, new, enterprise, whatever - large capacity still isn't cheap, and HDD prices, rather than falling, are already creeping back up.
If you're starting from scratch, same-size drives make sense - but that's the only point in the lifecycle of a server/NAS anyone, anywhere, is going to be at. For all other times, like when you add new capacity over time, mismatched drives - for mass media - are not only far more space- and cost-effective, they provide the most flexibility. Even RAID-Z expansion still isn't as flexible as JBOD, and you already explained how ZFS can't spin drives down, which contradicts the cost considerations for anyone living outside of France or Texas.
I can't speak about Unraid too much, but I do know it offers flexibility for those who do need mismatched drives, who do want to spin drives down, and who do want an all-in-one NAS solution (with VMs and Docker) with lower hardware requirements. Plus it has an extremely good community to support less technically inclined users.
Personally, I'd probably pick TrueNAS over Unraid myself, but I find even that option restrictive compared to tailoring my storage under Proxmox - using ZFS RAIDZ1 (with a mirrored special vdev), plus 4 mismatched drives ranging from 8–14 TB with single-parity SnapRAID pooled with mergerfs. I'm not restricted to just ZFS and can mount literally everything through an LXC, Docker in an LXC/VM, or even a VM (via VirtioFS). ZFS makes absolutely no sense for mass media.
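For context, a minimal sketch of what that kind of mergerfs + SnapRAID layout typically looks like (the mount points, disk names and parity locations below are illustrative, not my actual config):
# /etc/fstab – pool the data disks into one FUSE mount with mergerfs
/mnt/disk1:/mnt/disk2:/mnt/disk3:/mnt/disk4  /mnt/pool  fuse.mergerfs  defaults,allow_other,category.create=mfs  0 0
# /etc/snapraid.conf – single parity plus content files
parity  /mnt/parity1/snapraid.parity
content /var/snapraid/snapraid.content
content /mnt/disk1/snapraid.content
data d1 /mnt/disk1
data d2 /mnt/disk2
data d3 /mnt/disk3
data d4 /mnt/disk4
# run on a schedule: update parity, then scrub a slice of the array
snapraid sync
snapraid scrub -p 5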
Now, if you can't imagine why Unraid might still be viable for anyone, I could say the same about TrueNAS. But I don't, because people get to choose what works for them, based on how much they want their hand held, or how much they're willing to spend on storage in their country.
That's a regional pricing issue, not a technical argument. Cloud offload cycles are global. The EU/UK/AU/SG all have a steady supply of 10–18 TB ex-hyperscale drives at commodity prices. If your country blocks imports or slaps punitive duties on them, that's customs policy, not evidence that mismatched disks are somehow technically advantageous.
"Flexible" is not synonymous with "architecturally sound." JBOD + a snapshot-parity layer (SnapRAID) + mergerfs is exactly the same patchwork Unraid packaged for amateurs fifteen years ago. It still works — but it's a workaround born from scarcity. Scarcity is gone. Uniform sets of used enterprise drives eliminate the original justification for the Frankenstein layout.
Unraid acknowledged that. They bolted on ZFS support, however poorly. You seem to still cling to it.
True but irrelevant. Flexibility for its own sake isn’t a metric — correctness and predictability are. JBOD’s “flexibility” is exactly the ability to glue arbitrary disks together with no invariants and no guarantees. ZFS’s constraints are deliberate: they preserve consistency, performance characteristics, and recovery semantics across the pool. That’s not “less flexible”; that’s less willing to accept garbage layouts.
If your workflow requires plugging in a random disk every six months, you don’t have a storage architecture — you have a junk drawer. ZFS explicitly avoids that class of failure mode.
Right — because its coherency model assumes devices are available. That is a design decision, not a defect. It gives you deterministic latency, robust self-healing, and correct scrubbing. Spindown breaks all of that.
If your top priority is kWh minimization, fine — but that is a power-grid constraint, not a filesystem argument. You’re optimizing around electricity costs, not around data integrity or architecture.
Local electricity pricing doesn’t invalidate ZFS; it invalidates running multi-drive arrays in your region. That’s an environmental cost structure, not a justification for JBOD or Unraid parity schemes.
If your grid forces you to prioritize sleep/spindown over architectural coherence, then yes — you end up in the Unraid/SnapRAID/JBOD bucket because that’s the only bucket compatible with your constraint. But that does not make that bucket technically superior. It makes it the cheapest to keep powered on.
The correct solution here is to minimize the number of drives, not increase sleep times. No drive is optimized for sleep-wake scenarios; any power management mode is an inherently difficult problem to get right. You want to avoid dealing with it as much as possible. Keeping disks spinning is a low price to pay for reliability.
False.
• recordsize=1M yields sequential throughput at device limits.
• special vdev removes metadata bottlenecks (see the sketch below).
• scrub/repair correctness matters more as total capacity grows; bit rot doesn’t discriminate between movies and VM images.
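A minimal sketch of the first two points, assuming a hypothetical pool called tank with a tank/media dataset and a spare pair of NVMe devices for the special vdev:
# large records for big, sequentially-read media files
zfs set recordsize=1M tank/media
# keep metadata (and optionally small blocks) on fast mirrored devices
zpool add tank special mirror /dev/nvme0n1 /dev/nvme1n1
zfs set special_small_blocks=64K tank/media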
The only reason ZFS "makes no sense" is if your top priority is disk spindown. And as we discussed above, that is a bad, electricity-pricing-driven requirement, not a filesystem argument.
Again, this is optimizing around constraints you're choosing to keep. This desire is objectively wrong: spindown is directly at odds with reliability and predictability. If you want to maximize the reliability of your storage server (and who doesn't?), spinning down disks should not be on your laundry list. If your power grid is expensive or unstable, then sure: spindown becomes a local requirement. That, however, doesn't make Unraid technically superior. It just means you're designing to a local minimum rather than for a robust architecture.
In 2025, adding “whatever random disk you found” is a self-inflicted problem. Adding vdevs to a pool is instant. Even ZFS vdev expansion exists. Used enterprise drives are cheap. Rebuilding a proper vdev is safer, cleaner, and performs better than a grab-bag array whose layout changes every time someone plugs in a garage-sale HDD.
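Concretely (pool and device names are hypothetical; the second form needs a recent OpenZFS release with RAIDZ expansion):
# add another raidz1 vdev to an existing pool – the capacity is usable immediately
zpool add tank raidz1 /dev/sdd /dev/sde /dev/sdf
# or widen an existing raidz vdev by one disk
zpool attach tank raidz1-0 /dev/sdg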
UI friendliness isn’t the argument here. Of course Unraid’s onboarding is easier. That was always its strength. My point is architectural: the advantages Unraid used to have (heterogeneous disks, power savings, parity tricks) were compensations for an era of expensive, tiny disks. That era is over for the majority of NAS-scale users.
That's a cop-out. It's a truism, not a technical counterpoint. Of course preferences differ; they are also irrelevant. The question was whether Unraid's legacy architecture still makes objective sense given modern drive economics and modern failure modes, and it resoundingly doesn't. You can still like it without pretending that regional energy pricing and incremental-disk habits are universal engineering principles.
Basically, I fail to see anything addressing the technical claims. "Millions of lemmings cannot be wrong" and "people know what's best for them" are not valid arguments. Because yes, they can, and no, they don't.
You made a claim that multi-TB enterprise drives were "cheap" and that "use what you have" is obsolete - these are pricing arguments, not technical ones, and it was you who brought them up…
The U.S. is (was) an exception rather than the rule, so your claim is factually wrong. Even just a few years ago, refurbished drives weren't so much a thing in the U.S. It was shucking. (We still shuck drives in the rest of the world for similar reasons, just as, temporarily, enterprise refurbs are a thing for you now. They aren't for us, and they haven't always been for either of us.)
This isn’t a workaround - it’s an efficient use of storage that’s architecturally just as good as ZFS. $ for $ ZFS can’t compete with this level of flexibility, unless $ isn’t a factor for you. For most people, it is.
Some of my drives are NAS-grade and have been running 24/7 for 10+ years, some for 8 years, some for 3, 2, etc… If I had to rebuild a ZFS array at every stage of upgrade over the years, I'd have spent thousands more in £ to achieve the same level of storage and resiliency. These facts are irrefutable. You choose to ignore these factors; most ordinary people cannot. Hence Unraid is still a thing.
I don’t know why you’re confusing JBOD with offline storage?
Again, you’re living in cloud cuckoo land - or are very rich - if you think everybody can downsize the number of drives at a whim.
Prior to the Ukraine war, when electricity prices were a third of what they are now, all my drives were running 24/7. As I said, I've got at least 3 working NAS-grade drives with ~11 years of power-on time (and they still run 24/7 coz they're NAS drives). Nowadays, I power down the white-label shucked drives in the summer months and leave everything on in winter. It's a happy medium for which I carefully evaluated the pros and cons. There is no right or wrong setup.
Completely unnecessary for my media files.
Completely unnecessary for my media files.
SnapRAID solves that completely for me.
This isn’t for you to decide.
Since warranties and 3-2-1 backups are a thing, I think most of us adults can make a sane cost-benefit analysis to decide how much in electricity costs we can save by spinning drives down versus leaving them on 24/7 and risking early failure. Some of us also like to factor in noise at night.
Also, you’re gonna need to provide some actual evidence (rather than theory) that shows spindown is directly at odds with reliability. These are outdated rote claims that require testing in the real world.
Adding a drive to a JBOD is instant; I can also remove that drive in an instant, and directly access all the drives individually on another system if I so choose.
Not cheap enough outside of the U.S. 80% of the price for 50% or less of the longevity isn’t a sensible option to me, but you do you.
First, I refuse to be curtailed in this discussion to the merely 'technical' - you can't just choose to ignore all the other factors, of which there are many. (We all know it's coz these inconvenient factors destroy your main claim that Unraid has no purpose in 2025, which is ofc absurd.)
Second, you haven’t even demonstrated that the many alternatives (mergerfs, SnapRAID, Unraid et al) are technically less capable than TrueNAS/ZFS anyway.
“Basically I fail to see anything addressing technical claims.”
A Formula 1 car is faster and technically superior to all other cars. It's also not very practical on a London street (and prohibitively expensive besides). Your argument is basically that everyone should be driving the same car (a Tesla, maybe), or running the same OS (TrueNAS).
Finally, telling people living in a completely different economy that "they don't know what's best for them" is super fucking arrogant, to say the least.