Start backup on volume mount

I want to have the option to start a schedule when a volume is mounted.

I’m currently looking into adding cold backups to my recovery strategy. The idea is that if the computer gets infected with ransomware, and the ransomware targets and corrupts my online backups, I would still have an external USB drive with somewhat recent backups. The plan is to only plug it in once or twice per month. To streamline the process, ideally I just want to plug in the drive to initiate the backup and get a message once it’s done, so that I can disconnect it and put it in a secure place.

On macOS you can use StartOnMount in your launchd service to run the backup:

 StartOnMount <boolean>
 This optional key causes the job to be started every time a filesystem is mounted.

Then you can unmount in the post-backup script.
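For example, a minimal launch agent sketch (untested; the volume name, repository path, and Duplicacy CLI location are assumptions, and note that StartOnMount fires for every mount, so the script has to check for the right volume itself). Save something like this as ~/Library/LaunchAgents/local.backup-on-mount.plist:

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
    <dict>
        <key>Label</key>
        <string>local.backup-on-mount</string>
        <key>StartOnMount</key>
        <true/>
        <key>ProgramArguments</key>
        <array>
            <string>/usr/local/bin/backup-on-mount.sh</string>
        </array>
    </dict>
    </plist>

And the script it points to, which also handles the notification and the unmount:

    #!/bin/bash
    # StartOnMount runs on every mount, so bail out unless our volume appeared.
    VOLUME="/Volumes/ColdBackup"    # hypothetical volume name
    [ -d "$VOLUME" ] || exit 0
    # Run the backup from the repository directory (paths are assumptions).
    cd "$HOME/repo" && /usr/local/bin/duplicacy backup
    # Tell the user it is safe to unplug, then eject the drive.
    osascript -e 'display notification "Cold backup finished" with title "Backup"'
    diskutil unmount "$VOLUME"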

On Windows, a similar mechanism likely exists.

However, this is very poor protection (ransomware will simply encrypt your data once you connect the drive, or just wipe the filesystem), and long-term storage on a single USB HDD is not sustainable: data will rot, and there is no data consistency guarantee.

Instead, configure backup to cloud storage, such as Backblaze B2, with access keys that only allow uploads, not modification or deletion. This provides much better, pretty much bulletproof protection.

Thank you for your feedback. I will look into StartOnMount and what is possible in Linux.

That isn’t an issue as long as the infected computer is formatted or replaced before the backup drive is connected. The only way that could happen is if the drive is connected while the computer is getting infected with the ransomware.

Data rot is a concern, especially since macOS does not support ZFS or other file systems with good data integrity. Could erasure coding maybe help to some extent?

Interesting concept, but it won’t work for me. In my opinion, an essential function of a backup system is the ability to delete old and unwanted data. I’m not interested in keeping all data forever, especially if I need to pay for data that’s worthless to me.

Ah, it’s Linux. I’m not a Linux guy, but IIRC you want some combination of the After and WantedBy options for systemd.
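Something along these lines, as a rough, untested sketch. It assumes the drive is in /etc/fstab with mount point /mnt/coldbackup, for which systemd auto-generates a mnt-coldbackup.mount unit (the unit name is the escaped mount path); all names and paths here are made up:

    # /etc/systemd/system/backup-on-mount.service
    [Unit]
    Description=Run backup when the cold-backup drive is mounted
    Requires=mnt-coldbackup.mount
    After=mnt-coldbackup.mount

    [Service]
    Type=oneshot
    WorkingDirectory=/home/user/repo
    ExecStart=/usr/local/bin/duplicacy backup

    [Install]
    # Tying WantedBy to the mount unit starts this service whenever the mount activates.
    WantedBy=mnt-coldbackup.mount

After systemctl enable backup-on-mount.service, mounting the drive should trigger the backup.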

Can you be sure you will never connect it while encryption is silently in progress, before you’ve noticed anything?

It may help reduce the impact somewhat, but it will not eliminate the risk. For example, if a single block where the config file resides rots, the entire backup is gone. And that file is never re-written, only read.
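To illustrate, one way to apply erasure coding yourself is a tool like par2, generating parity data for files on the drive (a sketch; the config file name is from Duplicacy’s storage layout, and the redundancy percentage is arbitrary):

    # Create parity blocks with 10% redundancy for the storage config file
    par2 create -r10 config.par2 config
    # Later: check integrity, and repair from parity if blocks have rotted
    par2 verify config.par2
    par2 repair config.par2

But that only protects the files you generate parity for, and the parity files themselves live on the same rot-prone drive.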

Then there is correlated risk – a power surge killing both drives during backup, flood, fire; so you still need an offsite backup. The local one just becomes a non-critical convenience option, to save egress and bandwidth when the source of data loss is not fire or a power surge.

There is no contradiction: you can run prune from another machine (perhaps from a cloud instance that only runs for 5 minutes a month to do the prune, thus minimizing the risk of compromise to obnoxiously low levels) with another set of keys that do allow deletion.
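For example (a sketch; the prune machine would be initialized against the same storage with its own, delete-capable key, and the retention schedule is arbitrary):

    # On the prune machine only, e.g. from a monthly cron job:
    duplicacy prune -keep 0:360 -keep 30:90 -keep 7:30 -keep 1:7

This keeps one snapshot per day beyond 7 days, one per week beyond 30 days, one per month beyond 90 days, and deletes everything older than 360 days.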

(Personally, I never delete anything; storing stuff is cheap, and the cost of mistakenly deleting something useful prematurely is non-zero, so I’d rather pay an extra $1/month for an extra terabyte of garbage and never have to think about it. But your dataset may be transient in nature, so pruning may be justified.)

I have both Linux and MacOS clients. :wink:

Yes and no. By following guidelines such as those from the UK’s National Cyber Security Centre, I can to a high extent be sure that I’m protecting myself to the best of my abilities.

That’s why the config file should always be manually backed up and stored securely offsite. :wink:

I agree both offsite and onsite are needed, but I currently do such backups for different reasons than yours. In short, there are risks with offsite too: you can lose access to your storage because of e.g. billing issues, your account getting blocked for some reason, long-term loss of internet (e.g. a fiber cut), etc. Also, data centers haven’t eliminated the risks of UPS power surges, floods, or burning down either, but these are less likely to happen to such facilities than to the average homeowner. And it’s even more unlikely that something happens to my house and to the data centers holding my offsite data at the exact same time.

In other words, by using both offsite and onsite I make my overall backup strategy better than each component by itself.

It seems not to be possible with B2 application keys. There are only three options (read, write, and read & write) according to this documentation: How to Use B2 Cloud Storage Application Keys

I did however try object lock in the past: https://help.backblaze.com/hc/en-us/articles/360052973274
But the last time I tried it I ran into a lot of issues with my backups, and likewise with the similar feature of other S3 bucket providers.


Agreed with the entire paragraph.

I’m not against local backups; I’m against using a USB HDD to host them. (I’m using a NAS that is on a separate UPS, with a filesystem that provides data consistency guarantees, and I snapshot the backup datastore daily with 20-day retention – in case ransomware gets to the LAN clients.)
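For the snapshot part, a daily cron job along these lines would do it (a sketch; the ZFS dataset name is made up, date -d is GNU date, and ISO dates compare correctly as strings):

    #!/bin/bash
    # Hypothetical ZFS dataset holding the backup datastore
    DATASET=tank/duplicacy
    # Take today's snapshot
    zfs snapshot "$DATASET@daily-$(date +%Y-%m-%d)"
    # Destroy daily snapshots older than 20 days
    cutoff=$(date -d '20 days ago' +%Y-%m-%d)
    zfs list -H -t snapshot -o name | grep "^$DATASET@daily-" | while read -r snap; do
        [ "${snap##*@daily-}" \< "$cutoff" ] && zfs destroy "$snap"
    done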

Not in the GUI; you’ll need to use the CLI: How secure is duplicacy? - #30 by tallgrass
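For reference, with the b2 command-line tool it looks something like this (the bucket and key names are made up, and the exact command name may differ between CLI versions):

    # Key scoped to one bucket; it can list, read, and upload, but not delete.
    b2 create-key --bucket my-backup-bucket duplicacy-append-only \
        listBuckets,listFiles,readFiles,writeFiles

Since B2 keeps old file versions, uploads don’t destroy existing data, and without the deleteFiles capability the key can’t remove the versions that are already there.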

Ah, I did not know they had object lock!

Not surprising – things behave differently, so unless the tool officially supports it, it likely won’t work (not tested == broken).