Can you manually copy the file somewhere, e.g. `cp /media/pi/BLABLA/BLABLABLA/Backupfile1.yyy /tmp/`? Nano may be showing you the beginning of the file, but if the bad sector is somewhere in the middle you may not necessarily stumble on it.
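Another way to check is to force a full read of the file and see whether the kernel reports an I/O error. A minimal sketch (using a throwaway file in `/tmp` as a stand-in for the real backup file on the USB drive):

```shell
# Stand-in file; replace with the real path on your USB drive,
# e.g. /media/pi/BLABLA/BLABLABLA/Backupfile1.yyy
FILE=/tmp/readtest_demo
printf 'demo data' > "$FILE"

# dd reads every block of the file; a bad sector shows up as an
# I/O error here even if it sits in the middle of the file.
if dd if="$FILE" of=/dev/null bs=1M 2>/dev/null; then
    echo "full read OK"
else
    echo "full read FAILED (likely a bad sector)"
fi
```

If the read fails partway through, you at least know the damage is in the file itself and roughly where.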
In my honest opinion, you should stop backing up to a single USB drive, much less one connected to a Raspberry Pi.
A backup should be more reliable than your source media: when the source fails, the backup is what saves the day.
The RPi is an unshielded prototyping device, very susceptible to ESD and power fluctuations. Its USB storage stack is a high-power, low-performance slowpoke, and USB drives in external enclosures are among the least reliable storage in the world: they use the lowest-binned devices, enclosure power delivery is crap, and power and thermal management ranges from flawed to nonexistent. And even if none of that were true, a single HDD will rot and degrade over time no matter what; you can't write data to the disk today and expect to read it back tomorrow. That's not how modern drives work. The industry decided it's cheaper to make shittier disks and regain reliability by using them in redundant clusters: overall cost is lower and reliability is higher. You can't trust a single hard drive anymore, and the higher the write density, the less you can trust it.
You need some sort of redundancy with healing (a RAID array comes to mind). That will save you from a bad sector on one drive ruining your day, but it does nothing to address bit rot: with no checksumming, it's not clear which copy of mismatched data to trust. To protect against that you need a BTRFS or ZFS array with checksumming enabled and a periodic scrub scheduled to read every sector, verify checksums, and repair inconsistencies. Then you may consider it a half-decent local backup (don't forget ECC RAM, too, for good measure). The point is that data must be actively maintained to remain viable; a periodic scrub is a must.
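As a sketch of what that looks like with BTRFS (device names are hypothetical, and `mkfs.btrfs` destroys existing data, so this is illustrative only, not something to paste blindly):

```shell
# Two-disk BTRFS mirror: both data and metadata stored redundantly,
# with checksums, so a scrub knows which copy is bad and can repair it.
mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc   # hypothetical devices
mount /dev/sdb /mnt/backup

# Scrub: reads every sector, verifies checksums, and rewrites
# corrupted copies from the good mirror.
btrfs scrub start /mnt/backup
btrfs scrub status /mnt/backup

# Schedule it periodically, e.g. monthly via cron:
# 0 3 1 * * /usr/bin/btrfs scrub start -B /mnt/backup
```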
What can you do short term? Format your drive as BTRFS with the single-disk DUP profile (see Manpage/mkfs.btrfs - btrfs Wiki). This halves the usable space on the drive, but data is written with redundancy, so bad and rotten sectors become detectable and correctable during a periodic scrub. It won't save you from outright HDD failure, though. You would also need to enable ERC/TLER on the disk so that a read of a bad sector fails quickly instead of timing out.
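Roughly, the commands involved look like this (device names are hypothetical, and the mkfs step destroys whatever is on the partition):

```shell
# Single-disk DUP profile: two copies of both data and metadata on
# the same drive. Halves capacity, but a scrub can repair a bad copy.
mkfs.btrfs -d dup -m dup /dev/sda1    # hypothetical partition; destroys data

# Check, then set ERC (a.k.a. TLER) so a failing read errors out in
# ~7 seconds instead of hanging the bus (values are in units of
# 100 ms; many consumer drives don't support this at all).
smartctl -l scterc /dev/sda
smartctl -l scterc,70,70 /dev/sda
```

Note that `scterc` settings typically reset on power cycle, so the `smartctl` call usually has to be reapplied at boot.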
Middle term? Get a NAS with BTRFS or ZFS and NAS- or enterprise-grade disks (which support TLER/ERC) to avoid reinventing the wheel.
Long term, just back up to a commercial cloud, where keeping your data viable and available is other people's full-time job. Keeping data alive is hard; outsource that menial task! It will also be significantly cheaper, because unlike your one-off contraption in the closet, they do it at scale.
What about Duplicacy's erasure coding, you say? It depends; it may save you in some cases. It reduces risk somewhat and will help if a bad sector happens to land inside a chunk file. But if the damage lands elsewhere, or corrupts the filesystem itself, it doesn't really help, and therefore IMO it isn't worth the effort.
Replace the drive. A Raspberry Pi is a solid device and external HDDs are perfectly fine for backup - *so long as* you copy it (with Duplicacy) to another, more reliable, destination (e.g. cloud storage) and monitor that process so you know when something goes wrong.
The process of copying data from the local backup storage validates the integrity of the data, since chunks have to be unpacked and repacked for the destination. You should still run periodic test restores as well (from all backup storages).
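For reference, the copy-and-verify workflow with the Duplicacy CLI looks roughly like this (the storage names `local` and `cloud` are hypothetical; check `duplicacy help copy` and `duplicacy help check` for the options available in your build):

```shell
# Copy backups from the local storage to a second, more reliable one.
# Chunks are unpacked and repacked, so corruption surfaces here.
duplicacy copy -from local -to cloud

# Verify that every chunk referenced by every snapshot exists...
duplicacy check -storage cloud

# ...and, more thoroughly, download and verify the chunk contents too.
duplicacy check -storage cloud -chunks
```

The `-chunks` check downloads data and is correspondingly slow and bandwidth-hungry, so it's something to run occasionally rather than after every backup.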