You have clearly never tried to restore a filesystem whose metadata is corrupted. All filesystems have metadata, and if it is damaged you won't be able to recover much, even less if the filesystem is encrypted. For NTFS, try recovering anything when both the MFT and the mirror MFT are corrupted. From that perspective, backup storage is no different: you have file data (stored in chunks, where a local filesystem uses sectors), and you have metadata (stored in metadata chunks, where a local filesystem uses structures like the MFT). In either case, if your metadata is gone, your data is effectively gone too.
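To make the analogy concrete, here is a minimal sketch of a content-addressed chunk store (not any particular tool's format; the fixed 4 MiB chunk size and function names are just for illustration). The ordered hash list is the "metadata": lose it, and the chunk directory is just an unordered pile of anonymous blocks, exactly like a volume without its MFT.

```python
import hashlib
import os

CHUNK_SIZE = 4 * 1024 * 1024  # illustrative fixed-size chunking

def store_file(path, chunk_dir):
    """Split a file into chunks, store each under its SHA-256 hash,
    and return the ordered hash list (this list is the metadata)."""
    hashes = []
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK_SIZE):
            digest = hashlib.sha256(chunk).hexdigest()
            hashes.append(digest)
            with open(os.path.join(chunk_dir, digest), "wb") as out:
                out.write(chunk)
    return hashes

def restore_file(hashes, chunk_dir, dest):
    """Reassemble a file from its metadata. Without the hash list
    there is no way to know which chunks belong to which file,
    or in what order."""
    with open(dest, "wb") as out:
        for digest in hashes:
            with open(os.path.join(chunk_dir, digest), "rb") as f:
                out.write(f.read())
```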
The benefit of deduplication is not so much across files as across time (snapshots). This is not something you can get by “running separate deduplication software”. Most setups do not contain significant numbers of identical files within a single snapshot, but snapshots that are close in time usually have massive overlap in data (files, or parts of files, that didn't change). Unless your backup sets are trivial, you can't keep multiple snapshots without deduplication: the storage requirements quickly become ridiculous. With deduplication, you can keep terabytes' worth of daily snapshots as long as only small portions of the data change each day.
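A rough sketch of how cross-snapshot deduplication works (again illustrative, not any specific tool: real tools typically use content-defined rather than fixed-size chunking, and `files` here is assumed to be a simple path-to-bytes mapping):

```python
import hashlib

CHUNK_SIZE = 4 * 1024 * 1024

def take_snapshot(files, chunk_store):
    """Record a snapshot as {path: [chunk hashes]}, writing only the
    chunks that earlier snapshots have not already stored."""
    manifest = {}
    new_bytes = 0
    for path, data in files.items():
        hashes = []
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in chunk_store:   # deduplication: skip known chunks
                chunk_store[digest] = chunk
                new_bytes += len(chunk)
            hashes.append(digest)
        manifest[path] = hashes
    return manifest, new_bytes
```

Each snapshot is just a new manifest pointing mostly at chunks that already exist. As a ballpark example, a 1 TB dataset with roughly 1% daily churn costs on the order of 1 TB + 30 × 10 GB ≈ 1.3 TB for a month of daily snapshots, versus about 30 TB if each snapshot were a full copy.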
Not sure what you mean by your backup being in cold storage. If all you need is a single snapshot (e.g. some immutable documents) that you can put in a bank vault and not touch until a recovery scenario, then you don't really need deduplication; any copy will do. That is not a backup strategy for live data though; for live data, any solution needs to be able to keep history over time.