Protection against Ransomware by Pulling Backups

If I understand Duplicacy correctly, it pushes backups to a backup server (or destination).
This makes it vulnerable to ransomware, because all the data needed to reach the storage (password/certificate) is stored on the machine where the source files live. An attacker who compromises that machine can therefore delete all backups on the remote backup server.
Is there any possibility (which I may have overlooked) to pull backups instead of pushing them?
The only idea I have so far is to automatically mount a network share from the source system on the target system and then trigger the backup there, roughly as sketched below. But this is only a workaround.
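Just to make the workaround concrete, something like this on the target system (only a sketch; host, user, paths and the storage name "offsite" are placeholders, and the mounted directory is assumed to have been initialised as a Duplicacy repository whose preferences/cache live on the target rather than on the read-only mount):

```bash
#!/bin/sh
# Sketch of the "pull via network mount" workaround, run on the backup target.
set -e
sshfs -o ro backupuser@source-host:/data /mnt/source-data   # mount the source read-only
cd /mnt/source-data
duplicacy backup -storage offsite -stats                    # back it up from the target side
cd /
fusermount -u /mnt/source-data                              # unmount again
```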

Nobody? Just let me know if my thoughts were right or wrong :wink:

There are some points to evaluate:

  • If the storage is going to “pull” backups, then it has to be something with processing capabilities (to execute Duplicacy) and not just storage. That means it will itself also be vulnerable to attacks (including ransomware).

  • This “storage with processing” will have access (even if read only) to your original (unencrypted) files.

  • There are ways to set up storage so it only accepts new files and does not allow edits or deletions (see the policy sketch below).

Due to these and other reasons, I think the “push” setting is safer.
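To illustrate the third point: on S3-compatible storage, for example, you could give the backup credentials a policy that allows writing and listing but denies deletions. A sketch only (the bucket name is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "BackupCanWriteAndList",
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::my-backup-bucket",
        "arn:aws:s3:::my-backup-bucket/*"
      ]
    },
    {
      "Sid": "BackupCannotDelete",
      "Effect": "Deny",
      "Action": ["s3:DeleteObject", "s3:DeleteObjectVersion"],
      "Resource": "arn:aws:s3:::my-backup-bucket/*"
    }
  ]
}
```

Note that s3:PutObject can still overwrite an existing object, so for true immutability you would also want versioning and something like Object Lock; but this already blocks plain deletions.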


Hi!
Assuming the pulling server itself is safe, the pull idea is a pretty good one.

Similar topics (though perhaps not with this ransomware angle) have been discussed here before:

You could disable saving the storage password on the source machine by using the no_save_password option and run only manual backups. Malware would then not be able to access the storage unattended.
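If I remember the CLI correctly, that is just (the storage name "default" is a placeholder):

```bash
# Don't save the storage password/keys in the keyring/preferences;
# you'll be prompted for it on each (manual) backup run.
duplicacy set -storage default -no-save-password
```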

@towerbr, immutable storage (where available) is a good option!
Perhaps one could have one cloud account/key with add-only permissions (for the scheduled tasks) and another with full permissions for managing the same data, for whenever you actually need that (which one does, every so often!).
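With Backblaze B2, for instance, that could look roughly like this; treat it as a sketch, since the exact b2 CLI syntax and the capabilities Duplicacy really needs may differ from what I show here:

```bash
# Key used by the scheduled backups: can list, read and write, but not delete.
b2 create-key --bucket my-backup-bucket backup-append-only \
   listBuckets,listFiles,readFiles,writeFiles

# Separate key, kept offline, with delete rights for prune/maintenance runs.
b2 create-key --bucket my-backup-bucket backup-admin \
   listBuckets,listFiles,readFiles,writeFiles,deleteFiles
```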

Pull-based backups can be a pretty good strategy, but unfortunately Duplicacy’s current design doesn’t allow it and I don’t see that changing, or even needing to change, as there may be better ways to mitigate ransomware imo…

Before Duplicacy, I was using another backup tool for backing up clients’ data - dirvish.org, which is basically a Perl wrapper script around rsync that implements snapshots via *nix hardlinks. In fact I still use it for backing up cPanel websites over ssh. That tool is pull-based and works quite well.
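The core of what dirvish does is roughly this (simplified sketch; host and paths are placeholders):

```bash
#!/bin/sh
# Pull a new snapshot over ssh and hardlink unchanged files against
# yesterday's snapshot, so each day looks like a full copy but only
# changed files use extra space.
set -e
TODAY=$(date +%Y-%m-%d)
YESTERDAY=$(date -d yesterday +%Y-%m-%d)   # GNU date; adjust on BSD
rsync -a --delete \
      --link-dest="/backup/client1/$YESTERDAY" \
      backupuser@client1:/var/www/ \
      "/backup/client1/$TODAY/"
```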

But you have the security problem of exposing access - potentially over the internet - directly to the machine to be backed up. Now this is fine if you know how to set up ssh/sftp with public/private key authentication (and keep everything patched!), but if you have a lot of endpoints…
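(For what it’s worth, the exposure can be narrowed a lot with a dedicated key and a forced command in authorized_keys on the machine being backed up. A sketch, assuming your rsync package ships the rrsync helper script - its path varies by distro:)

```
# ~backupuser/.ssh/authorized_keys on the machine being backed up:
# only the pull server's IP may use this key, and it can only run
# read-only rsync against /var/www.
from="203.0.113.5",restrict,command="/usr/bin/rrsync -ro /var/www" ssh-ed25519 AAAA... pull-server-key
```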

Theoretically, you could combine such a system with Duplicacy: use Dirvish/rsync to pull the data and Duplicacy to back up that synced repository locally, as sketched below. Of course you’d need at least double the space. Or you could mount a remote repository with sshfs and back it up directly with Duplicacy.
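In script form, that combination is essentially this (placeholders again; /backup/client1/current is assumed to have been initialised as a Duplicacy repository):

```bash
#!/bin/sh
set -e
# Step 1: pull the client's data into a local mirror (the pull happens over ssh).
rsync -a --delete backupuser@client1:/var/www/ /backup/client1/current/
# Step 2: back up that mirror with Duplicacy to whatever storage you like.
cd /backup/client1/current
duplicacy backup -stats
```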

Personally, I think it would be feasible to lock down an sftp server such that it would only allow chunks and snapshot files to be written (once) and not deleted, i.e. WORM (Write Once, Read Many).

I haven’t tried this approach yet. For a start, I don’t think OpenSSH can do it. Also, I suspect Duplicacy uploads chunks and then renames them into place, so the permissions may need to be tweaked, and is it truly WORM if renames are allowed? Perhaps it would have to be combined with a special WORM-like filesystem and/or filesystem snapshots to protect the storage, so the worst an attacker could do is rename chunks and you’d still be able to recover thanks to periodic filesystem snapshots.
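The filesystem-snapshot part is easy enough to sketch on the storage box, e.g. with ZFS (the dataset name is a placeholder):

```bash
# Nightly cron job (root) on the storage server: take a dated snapshot
# of the dataset holding the Duplicacy storage. Even if an attacker with
# sftp access renames or overwrites chunks, older snapshots keep the
# originals, and sftp users can't touch the snapshots themselves.
zfs snapshot tank/duplicacy@daily-$(date +%Y-%m-%d)
# Prune old snapshots separately, e.g. with zfs destroy or a tool like
# zfs-auto-snapshot or sanoid.
```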

This “storage with processing” will have access (even if read only) to your original (unencrypted) files.

If you’re thinking of a naive approach where the “storage with processing” does the equivalent of an ssh to kick off the existing, unmodified duplicacy on some remote node, then yeah.

But that would be a silly way to implement it. To attain the security benefit, nodes being backed up would simply run a backup agent, which could offer up the files in encrypted form only. The “storage with processing” would talk to all the agents to pull backups and wouldn’t necessarily require any access to the plaintext data at all. It would mean modifying the existing code to run as a daemon, listening for such requests. It doesn’t seem all that hard to implement.

There is a workaround: if the backup destination server runs Linux or BSD, you can create a cron job that marks the files in the backup destination directories as immutable. On Linux this can be done with sudo chattr +i file; on FreeBSD it’s sudo chflags schg file.
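A sketch of such a cron job on a Linux destination (the storage path is a placeholder; files are only locked once they are at least a day old, so in-progress uploads aren’t affected, and an admin has to remove the flag before pruning):

```bash
#!/bin/sh
# Daily cron job on the backup destination: make day-old (i.e. fully
# uploaded) backup files immutable so a compromised client can no longer
# modify or delete them. Run `chattr -i` before prune/maintenance.
# FreeBSD equivalent of chattr +i is chflags schg.
find /srv/duplicacy-storage -type f -mtime +0 -exec chattr +i {} +
```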


My preference would be to have a trash bin on the storage server, where a deleted file is kept in the trash for, say, 30 days, with no way to remove it earlier. Then the worst a virus or attacker can do is put all your backup files in the trash, and as long as you notice within those 30 days, you’re good.
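The purge half of that is trivial on a server you control; the harder part is making the server redirect deletions into the trash in the first place (e.g. Samba’s recycle VFS module or rclone’s --backup-dir, depending on how the storage is exposed). A sketch of the purge job, with a placeholder trash path:

```bash
# Cron job on the storage server: permanently delete trashed backup files
# only after they have sat in the trash for more than 30 days.
find /srv/duplicacy-storage/.trash -type f -mtime +30 -delete
find /srv/duplicacy-storage/.trash -type d -empty -delete
```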