A few thoughts, in no particular order.
-
$400 for 1TB of high-performance flash is not even close to outrageous; if anything, not upgrading is the worse deal (generally, getting anything but the highest-end Mac means getting worse value for the money). On the other hand, wasting all that fast storage on static, immutable media would indeed be a waste.
-
Photos and videos are incompressible, non-deduplicatable, immutable data that do not benefit from Duplicacy or any other versioned backup tool. What you want is to write the photo or video to storage once and prevent it from ever changing, because the only reason it would change is that it got corrupted or encrypted by ransomware.
-
Something tells me that paying for iCloud storage is not in your plans either, so let's assume you will have some cloud storage (including storage served from your NAS, for the purposes of this discussion, since you can always install ZeroTier and have your NAS accessible as if it were on the LAN).
-
Let's also assume you use the Photos library (the same logic applies to any other photo database – C1, Lightroom, or what have you).
-
Also worth mentioning, even though in my mind it's a given: no manual steps should be required; the tiered storage solution should just work.
So, this is what I’m suggesting:
- Storage
Configure an rclone remote for your cloud storage destination. Whether that's unlimited Google Drive, Backblaze, or your NAS over SFTP does not matter. That location will hold the full collection of your photos, and you can replicate it right from there if you want to. The files don't have to be stored in the clear; on the contrary, I suggest encrypting any data you upload to any location outside of your Mac, including your NAS. rclone can do this transparently via a "crypt" remote; see the sketch below.
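A minimal sketch of that remote setup, assuming Backblaze B2 as the backend (the remote names, bucket, and keys are placeholders; any backend rclone supports works the same way):

```
# Cloud backend (Backblaze B2 shown as an example)
rclone config create b2-photos b2 account YOUR_KEY_ID key YOUR_APP_KEY

# Encrypting wrapper: everything written through "photos-crypt:" is
# encrypted client-side before it leaves the Mac
rclone config create photos-crypt crypt \
  remote b2-photos:my-photos-bucket \
  password "$(rclone obscure 'your-passphrase')" \
  filename_encryption standard
```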
Configure your Mac to mount that location at ~/Pictures/Photos\ Library.photoslibrary/originals with the local cache enabled, using macFUSE. Unlike iCloud's "Optimize Mac Storage", where offloaded files are replaced by stubs and are missing from the filesystem, the virtual filesystem mounted by rclone behaves as if all data were present while using virtually no disk space. Only when you read a file are small portions of it cached locally, which means that if Photos needs to extract metadata from a specific file, it can do so without downloading the whole file, via the magic of sparse files. When you add a new file, it lands in the cache immediately and then slowly uploads to the cloud storage in the background.
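The mount itself is a single rclone command (the cache limits below are illustrative; tune them to your disk):

```
# Mount the encrypted remote over the Photos originals folder via macFUSE.
# --vfs-cache-mode full caches only the file ranges that are actually read.
rclone mount photos-crypt: \
  ~/Pictures/Photos\ Library.photoslibrary/originals \
  --vfs-cache-mode full \
  --vfs-cache-max-size 500G \
  --vfs-cache-max-age 720h \
  --daemon
```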
As a result, you now have a bottomless local drive with SSD-level performance, backed by cloud storage in an actual datacenter.
- Backup.
Since those are pictures and videos, you don't need Duplicacy. Just rclone copy them via another rclone remote to another cloud storage provider. For example, another B2 bucket, via special application keys that only allow upload, but not rename, delete, or overwrite. Or just configure replication on B2 itself, so you don't have to upload the same file twice. Or a million other options.
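For the copy itself, something like this run on a schedule would do (the second remote name is hypothetical; --immutable makes rclone refuse to modify anything that already exists at the destination):

```
# One-way copy of the originals to a second, independent bucket/provider.
# The destination credentials can be an application key without delete
# permissions, so malware on the Mac cannot destroy this copy.
rclone copy photos-crypt: backup-crypt: --immutable --transfers 4
```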
For other data – documents, projects, source code, etc. – absolutely do use Duplicacy, to a cloud destination from any of the big players, regardless of cost; the amount of data will be negligible and there is no meaningful difference between paying $0.001/month and $1/month.
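As a sketch, assuming the Duplicacy CLI and a B2 bucket (the snapshot ID and bucket name are placeholders):

```
# Initialize a Duplicacy repository in the folder you want versioned
# backups of, then run a backup (credentials are prompted or taken from env).
cd ~/Documents
duplicacy init my-documents b2://my-duplicacy-bucket
duplicacy backup -stats
```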
Anecdotally, this "mountable cloud storage with cache instead of a local drive or NAS" approach has worked well for me for quite a while now. Not only does it work, it is infinitely tunable and customizable. I don't own NAS(es) anymore and couldn't be happier with the result. Full disclosure: I do pay for 2TB of iCloud storage, because it's cheap and well integrated into the OS, and my Mac has a 4TB SSD, 2TB of which is allocated to the rclone cache… Regardless, this solution scales very well and, more importantly, "just works" without ever requiring you to babysit it; there are no manual steps after the initial configuration. I think it's a win.