(Technically speaking there’s a difference between how drive manufacturers and programmers talk about units of storage, e.g., KB vs KiB. Drive manufacturers like to say that 1MB = 1 million bytes, while file systems measure in binary units (1MiB = 1,048,576 bytes), but to keep the example below simple I’m temporarily ignoring the distinction.)
At the most basic level, exFAT was designed (and optimized) for flash media, so performance on an HDD can be degraded.
Not knowing the make/model of your 2.5" HDD, I’m making some general assumptions based on details you mentioned earlier. Since your Duplicacy backup is ~700GB, the HDD has to be at least 750GB (to account for formatting overhead). And with a write throughput of 50-75MB/sec, the USB interface must be at least USB 3.0.
- This means that the HDD very likely has a 4,096 byte block size at the hardware layer instead of the older 512 byte block size.
- exFAT’s default block size varies with the volume size. If the disk partition on the HDD is up to 1TB, the default is a 256KB block (512KB for 1-2TB, and up to 32MB for larger volumes).
- A 256MB file copied to the exFAT volume on the HDD will be diced into 1,000 pieces to occupy 1,000 exFAT blocks.
So, here’s where it gets messy: if each exFAT block is 256KB while the HDD’s native block size is 4KB, then 64 disk blocks are required to hold a single exFAT block. A 256MB file is therefore chopped up twice (first by Microsoft Windows and then by the disk controller) before it’s written onto the drive platter(s).
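Here’s a minimal sketch of that block math, using the same sizes as the example above (illustrative only, not measured from your particular drive or volume):

```python
# Rough block math for a single file written to an exFAT volume on a 4K-native HDD.
# Sizes are the illustrative ones from the example above (decimal units, per the
# simplification mentioned at the top); adjust to match your actual volume.
EXFAT_BLOCK = 256 * 1000   # 256KB exFAT block
DISK_BLOCK = 4 * 1000      # 4KB native disk block

def blocks_for_file(file_size_bytes):
    exfat_blocks = -(-file_size_bytes // EXFAT_BLOCK)         # ceiling division
    disk_blocks = exfat_blocks * (EXFAT_BLOCK // DISK_BLOCK)  # 64 disk blocks per exFAT block
    return exfat_blocks, disk_blocks

# A 256MB file: 1,000 exFAT blocks, each split into 64 disk blocks.
print(blocks_for_file(256 * 1000**2))   # -> (1000, 64000)
```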
Because Duplicacy dices a snapshot into chunks, your ~700GB backup could easily consist of over 200,000 chunks if there’s very little deduplication. If every chunk were exactly 4MB (Duplicacy’s default average chunk size), there would be 175,000 chunks in the snapshot. 4MB / 256KB = 16 exFAT blocks = 1,024 disk blocks. That’s 1,024 write operations for each chunk, or just over 179 million write operations for the snapshot.
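Extending the same back-of-the-envelope math to the whole snapshot (again a sketch under the assumed sizes, not a measurement of your actual backup):

```python
# Rough write-operation estimate for a ~700GB Duplicacy snapshot stored as 4MB chunks
# on an exFAT volume with 256KB blocks sitting on a 4K-native HDD (illustrative numbers).
SNAPSHOT = 700 * 1000**3   # ~700GB of backup data
CHUNK = 4 * 1000**2        # 4MB average Duplicacy chunk
EXFAT = 256 * 1000         # 256KB exFAT block
DISK = 4 * 1000            # 4KB native disk block

chunks = SNAPSHOT // CHUNK                          # 175,000 chunks
exfat_per_chunk = -(-CHUNK // EXFAT)                # 16 exFAT blocks per chunk (ceiling)
disk_per_chunk = exfat_per_chunk * (EXFAT // DISK)  # 16 * 64 = 1,024 disk blocks per chunk
total_writes = chunks * disk_per_chunk              # ~179 million write operations

print(f"{chunks:,} chunks x {disk_per_chunk:,} disk blocks = {total_writes:,} writes")
# -> 175,000 chunks x 1,024 disk blocks = 179,200,000 writes
```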
In comparison, NTFS has a default block size of 4KB, which aligns with the native block size of modern HDDs (and SSDs). Having matching block sizes streamlines the read/write process. One of the original reasons manufacturers migrated HDDs from 512-byte to 4K blocks was faster read/write speeds thanks to better alignment with the default block sizes used by modern file systems.
Part of the reason for the 50-75MB/sec write speed when copying files directly to the HDD is that there were likely a smaller number of much larger files instead of a lot of small files, so fewer IOPS.
Another potential weak link is the USB-to-SATA bridge in the external drive enclosure. It might have been optimized for large sequential transfers rather than lots of small files.
I also wanted to quickly mention that bad sector reallocation on HDDs is done automatically, while bad block mapping at the file system level might or might not be automatic depending on the particular file system. When zeroing an HDD, bad sectors are handled automatically by the HDD, but there’s a fixed number of spare sectors set at the factory. At some point the pool of spare sectors will run out and the HDD will no longer be able to remap bad sectors. A similar process applies to SSDs, except that some SSDs support a variable bad block pool size, e.g., Samsung SSDs can be reconfigured to sacrifice storage capacity for data integrity.
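If you want to keep an eye on how much of that spare pool has been used up, SMART attribute 5 (Reallocated_Sector_Ct) tracks it. Below is a rough sketch that shells out to smartctl from smartmontools; the device path and the parsing of smartctl’s attribute table are assumptions and may need adjusting for your OS and drive:

```python
# Read the HDD's reallocated sector count via SMART (requires smartmontools installed).
# /dev/sda is only an example device path; it will differ on Windows/macOS.
import subprocess

def reallocated_sectors(device="/dev/sda"):
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True, check=False).stdout
    for line in out.splitlines():
        # SMART attribute 5 counts how many spare sectors have already been consumed.
        if "Reallocated_Sector_Ct" in line:
            return int(line.split()[-1])   # RAW_VALUE is the last column
    return None   # attribute not reported (e.g., NVMe drives use a different scheme)

print(reallocated_sectors())
```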