I have storages on a local RAID5 DAS device and on B2. As protection against ransomware, I plan to copy
the RAID5 storage to a removable USB drive infrequently, e.g. every 2-3 months. If the local storage becomes damaged, is it correct that it could be recreated by a copy
from the removable storage to copy most chunks, followed by a copy
from the B2 storage, which would download only the chunks added since the last copy
to the removable storage?
Yes, your understanding is correct.
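For concreteness, a rough sketch of that rebuild, assuming storage names like local, offline, and b2 and that the storages were added as copy-compatible (both assumptions are illustrative):

```
# step 1: seed the rebuilt local storage from the offline USB copy;
# duplicacy copy only transfers chunks missing from the destination
duplicacy copy -from offline -to local

# step 2: top it up from B2; only chunks added since the last offline
# copy actually get downloaded
duplicacy copy -from b2 -to local
```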
However, a few additional thoughts:
- I would not trust offline media that is not scrubbed periodically; it can develop bad sectors or suffer bit rot. Bad sectors are not a problem: you would simply fail to copy those chunk files and could download them from B2 later. Bit rot is a problem: you may end up with a silently corrupted chunk and will have to detect it, delete it, and fetch it from B2 manually (a sketch follows this list).
- Conventional RAID5 itself is susceptible to the same issue, as it cannot deal with bit rot unless it is based on Btrfs or ZFS.
- If the concern is ransomware, the easier solution is periodic filesystem snapshots locally and an immutable bucket configuration on B2, as discussed here earlier (How secure is duplicacy?).
- If the concern is egress cost from B2, the Bandwidth Alliance with Cloudflare addresses that too, and duplicacy supports an alternative download URL for the B2 endpoint.
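To make the first point concrete, repairing a corrupted chunk could look roughly like this (storage names and paths are illustrative):

```
# verify every chunk in the offline storage; corrupted chunks are reported
duplicacy check -storage offline -chunks

# delete the reported chunk file from the offline storage by hand, e.g.
rm /mnt/usb/duplicacy/chunks/ab/cdef0123...

# copying from B2 transfers only chunks missing from the destination,
# so the deleted chunk is fetched again
duplicacy copy -from b2 -to offline
```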
All good suggestions, thanks!
- I would not trust offline media that is not scrubbed periodically; it can develop bad sectors or suffer bit rot. Bad sectors are not a problem: you would simply fail to copy those chunk files and could download them from B2 later. Bit rot is a problem: you may end up with a silently corrupted chunk and will have to detect it, delete it, and fetch it from B2 manually.
I did configure the offline storage with erasure coding, which I presume helps?
- Conventional RAID5 itself is susceptible to the same issue, as it cannot deal with bit rot unless it is based on Btrfs or ZFS.
It’s a QNAP TR-004, so it probably doesn’t. I should run check -chunks periodically.
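For example, scheduled from cron or the QNAP task scheduler (repository path and storage name are placeholders):

```
# verify that every chunk in the local storage can be read back and decrypted
cd /path/to/backup/repository
duplicacy check -storage local -chunks
```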
- If the concern is ransomware, the easier solution is periodic filesystem snapshots locally and an immutable bucket configuration on B2, as discussed here earlier (How secure is duplicacy?).
It’s not clear to me from the linked post how to set this up with the Web UI so it would be automatic. It also sounds a bit experimental – maybe a feature request would be appropriate?
- If the concern is egress cost from B2, the Bandwidth Alliance with Cloudflare addresses that too, and duplicacy supports an alternative download URL for the B2 endpoint.
Thanks, I was not aware of Cloudflare – something to keep in mind. But this wouldn’t eliminate the cost of overrunning my ISP’s data cap, correct?
My main concern is speed of recovery from a ransomware attack. I’m taking extra precautions after having been hit by Qlocker last April. Fortunately, the local RAID5 storage wasn’t affected, and I was able to restore everything in less than a day.
I was using the CLI. I’m not quite sure how you’d set the environment variables when dealing with the Web UI. You could always fall back to pruning from the CLI, but I would agree that’s clunky.
As for experimental, I’ve been using it for 6+ months with no issue. Since it doesn’t rely on anything different from duplicacy’s perspective (it’s just B2 controlling what keys have what permissions; no code changes are required in duplicacy), I don’t see that anything would need to be developed. Unless you meant, perhaps, a way to pass in environment variables using the Web UI if that doesn’t exist today.
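For reference, creating such a restricted key looks roughly like this with the b2 command-line tool (exact command and capability names depend on the CLI version), and the duplicacy CLI can pick the credentials up from environment variables:

```
# an application key scoped to the backup bucket and without deleteFiles,
# so a compromised client cannot erase existing file versions
b2 create-key --bucket my-duplicacy-bucket duplicacy-backup \
    listBuckets,listFiles,readFiles,writeFiles

# hand the key to the duplicacy CLI; for non-default storages the variable
# names include the storage name
export DUPLICACY_B2_ID=<applicationKeyId>
export DUPLICACY_B2_KEY=<applicationKey>
```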
Environment variable support is what I had in mind. But “unsupported” would be a better term than “experimental”; for example, a future CLI version could conceivably require delete permission for operations other than prune.
I see a way to set the environment variables with the Web UI, but to make it automatic, wouldn’t a cleartext passphrase need to be stored somewhere where a malicious attacker with admin privileges could also obtain it?
And according to the Backup Immutability - Object Lock support? discussion, B2 storage could still be corrupted using only write access by overwriting chunks.
So I’m wondering if it’s worthwhile to implement separate backup/prune keys.
I store the environment variables encrypted in my script and they are decrypted only at runtime. That would certainly be a challenge with the Web UI, I’d imagine.
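Something along these lines, purely as an illustration (the openssl invocation and retention options are placeholders):

```
#!/bin/sh
# decrypt the credentials file at runtime; it contains lines such as
#   export DUPLICACY_B2_ID=...
#   export DUPLICACY_B2_KEY=...
# (openssl prompts for the passphrase, so this isn't fully unattended)
eval "$(openssl enc -d -aes-256-cbc -pbkdf2 -in /path/to/secrets.env.enc)"

# run prune with the delete-capable key available only to this process
cd /path/to/backup/repository
duplicacy prune -keep 0:360 -keep 30:90 -keep 7:30 -keep 1:7
```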
Regarding the corruption of chunks, even if they are overwritten, the file revision policy on B2 (which should be set to retain at least 7 days of revisions) will let you retrieve the original version. Of course, this isn’t automatic and would require a script to restore the previous versions using the B2 API (before duplicacy could do a restore). But the data are there.
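For what it’s worth, a rough sketch of that with the b2 command-line tool (command syntax varies between b2 CLI versions, and the chunk path is only an example):

```
b2 authorize-account <applicationKeyId> <applicationKey>

# list every version of the overwritten chunk; the newest one is the bad copy
b2 ls --versions --long my-duplicacy-bucket chunks/ab/

# deleting the newest (corrupted) version makes the previous good version current
b2 delete-file-version chunks/ab/cdef0123... <fileId-of-newest-version>
```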
It’s not a perfect solution, but pretty darn effective.