Ransomware attack via encryption of remote storage

Hello,

Obviously, Duplicacy can protect against a ransomware attack where local files are encrypted and then a scheduled backup is made (which would now contain all of the newly encrypted files in a new backup revision) - the simple answer being to restore a previous version from a backup destination.

But does Duplicacy offer any kind of protection against a ransomware attack that gets the app keys for, say, cloud storage from the local keyring or some other source and then encrypts or alters the cloud storage where the backup data is contained? My concern would be having my backup on the cloud encrypted as well and then having no access to perform the restore I previously mentioned. I haven't seen any relevant discussion about this topic specifically on this forum or elsewhere.

I understand this is more a question of securing the cloud app keys, but I think this is a valid question and I'm curious about 1) what measures, if any, Duplicacy has to deal with this or prevent it, and 2) what others do to prevent such a situation or recover from it. A cold-copy backup would work for this, but I have no desire to burn 800 DVDs, if you get what I mean.

Thank you

Duplicacy does not have control over what happens on the cloud or who else has access to it. So pretty much all you can do is give duplicacy the minimum permissions required to perform a backup, which do not include the ability to delete or modify files (see the similar thread How secure is duplicacy?).
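For example, on B2 you could create a bucket-scoped application key that can list, read, and write but not delete. This is only a sketch: the key and bucket names are placeholders, the subcommand differs between b2 CLI versions, and you should check the capability names against Backblaze's application-key documentation.

```shell
# Create an application key restricted to one bucket, WITHOUT the
# deleteFiles capability: backups can add chunks but never remove them.
# Recent b2 CLI versions use "b2 key create"; older ones use "b2 create-key".
b2 key create --bucket my-duplicacy-bucket duplicacy-backup-key \
    listBuckets,listFiles,readFiles,writeFiles
```

Note that without deleteFiles this key cannot prune; pruning would need a second, more privileged key kept off the backed-up machine.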

The simplest solution, however, is not prevention but recovery: simply snapshotting the target storage periodically eliminates the issue entirely.


Another complementary protection is to configure your cloud storage to keep files for a certain time after deletion. In my B2 buckets I configured a lifecycle rule to keep the files for 90 days.
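On B2 this is a bucket lifecycle rule. A sketch of the rule described above (field names per Backblaze's lifecycle-rules documentation; the 90-day value is just this poster's choice):

```json
{
  "fileNamePrefix": "",
  "daysFromUploadingToHiding": null,
  "daysFromHidingToDeleting": 90
}
```

With `daysFromUploadingToHiding` set to null, files are never hidden automatically; hidden (deleted) files are kept for 90 days before B2 removes them for good.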

@towerbr Is this 90-day retention on B2 a problem, given that :d: relies on hiding files for fossil collection? Don't hidden files on B2 get deleted once the retention-policy days pass? And doesn't this break the :d:/B2 backup model?

@saspus Understood. I have read the entirety of that thread and it is helpful for me to understand how :d: handles the fossil collection via hiding. Thank you for sharing. Doesn’t :d: require read/write (delete is just hiding, after all) to function correctly with its snapshot history?

Yes, prune requires deletions; but you can prune with a different set of credentials, from a different machine, or not prune at all…


Which is what the guy in your linked post was saying. So there is never any hiding or deleting unless a prune is performed, is that right? (It does make sense on the surface.)

Correct, backup does not delete anything; it only adds.

But it DOES modify chunks on the cloud storage, which, in the case of protecting against an encryption attack, doesn't help me much if I understand it right? (Functionally this is the same as deleting, since the data is overwritten.) And what does rolling back the whole contents of a B2 bucket to a certain date look like? Rolling back each individual file would be crazy.

Chunks are immutable. The filename is a hash of the content. Chunks are never modified; they are only created, renamed (on B2: hidden), or deleted during prune.
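A toy sketch of why content-addressing rules out in-place modification. This is a deliberate simplification: real Duplicacy chunk IDs are also derived from per-storage keys, and chunks may be encrypted, but the principle is the same.

```python
import hashlib

def chunk_name(data: bytes) -> str:
    """Content-addressed name: the hex digest of the chunk's bytes.
    (Simplified illustration; Duplicacy's actual chunk IDs also
    involve per-storage hash keys.)"""
    return hashlib.sha256(data).hexdigest()

# The same content always maps to the same name, so a chunk can never be
# "modified" in place: changed data would land under a brand-new filename.
a = chunk_name(b"original file contents")
b = chunk_name(b"original file contents")
c = chunk_name(b"contents after ransomware encryption")
print(a == b)   # True: identical data, identical chunk name
print(a == c)   # False: different data produces a different chunk
```

So a backup after an attack adds new chunks alongside the old ones; the old chunks remain untouched unless something actively deletes them.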


Hi, how did you configure this for your B2 bucket?

Do you refer to this?

No, I was referring to "configure your cloud storage to keep files for a certain time after deletion". I use exclusive mode, and to me this sounds like a sort of trash bin. So if, after a prune, the files I delete are kept for a period of time in a "locked" way, that could be a good thing. I'm actually keeping all file versions as the lifecycle setting for my bucket. Hope my message is clear.

So, you are circumventing the safety net, only to construct another safety net with sap and twigs to protect against issues caused by you circumventing the original one in the first place?

Stop using exclusive mode, and the issue will disappear. The whole point of fossilization is to ensure nothing useful is deleted and to take care of concurrency. Have a look at the Duplicacy IEEE paper; it explains the approach very well.

What is the reason you are using exclusive mode in the first place, by the way?

Don’t prune files you might need. Modify your pruning schedule to keep versions longer.
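For instance, retention tiers per the duplicacy prune documentation look like this (`-keep <n>:<m>` means: for snapshots older than m days, keep one every n days; the numbers below are only an illustration, not a recommendation):

```shell
# Illustrative retention policy:
#   -keep 0:360   delete snapshots older than 360 days
#   -keep 30:180  older than 180 days: keep one per 30 days
#   -keep 7:30    older than 30 days: keep one per 7 days
#   -keep 1:7     older than 7 days: keep one per day
duplicacy prune -keep 0:360 -keep 30:180 -keep 7:30 -keep 1:7
```

Stretching the `0:<m>` tier out (or dropping it entirely) keeps old revisions around longer, which is exactly the safety margin you want against a late-discovered compromise.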

This accomplishes nothing, because duplicacy never edits files, and the name of a file is unambiguously defined by its content (it's a hash of the content). Therefore there will be at most two versions: the live one, and one with the "hidden" attribute, which duplicacy uses to manage fossils and which is specific to Backblaze.

Unfortunately no, sorry. It's not clear why you are configuring B2 for unlimited versioning, why you are running duplicacy in exclusive mode all the time, and why you are pruning snapshots that may not yet need to be pruned.


I'm using "keep all versions" as the lifecycle rule, as per the Duplicacy documentation on B2. I'm using exclusive mode because only a single host backs up to the bucket on B2.

Right, to be clear, it does not help in the context in which it was presented in your comment, with regard to versioning. Sorry if I misunderstood. The requirement to keep all versions is there to support hiding files, so technically two versions should suffice.

This explains why you could do that; it does not explain why you actually do it…

Furthermore, how many hosts you access the datastore from is irrelevant to the safety of using that flag.

In exclusive mode many protections are bypassed, and prune assumes it is the only operation touching the storage at that moment. If another backup is running, even from the same host, you will corrupt the datastore if you prune in -exclusive mode.

So, the cons of pruning with -exclusive: it bypasses the protections afforded by two-step fossil collection, and corruption is guaranteed unless you are very careful about schedules and concurrency.

Pros: ???

Generally it's best to minimize the number of flags you pass to any software, including duplicacy. Don't change settings unless you have to and have a very good reason to. Defaults are supposed to provide a working solution and usually correspond to the most-tested use case.

As a side note, the -exclusive flag is helpful for immediate space recovery after some catastrophic event with the datastore that required manual intervention. It should not be used in day-to-day operation, if for no other reason than that it is not the default mode and has not received comparable test coverage.


I’ll add to this, @iocularis… using -exclusive with -keep is dangerous because, depending on your prune schedule, it can delete the last backup revision of any snapshot - if the schedule calls for them to be deleted - whereas not using -exclusive protects the last backup. (It can even delete all snapshots if they all fall outside of the retention period.)

Now you may think that as long as your backups and prunes run as scheduled, this will be fine. Well, you'd better have reliable monitoring set up, because if you go on holiday and backups fail due to, e.g., lack of disk space or a period of internet outage, you'll need to intervene manually before prunes start deleting your last good backup.

TL;DR - Don't use -exclusive if you don't have to. (And even if you have to, be careful using it with -keep.)


Thanks!

I will surely disable exclusive mode then.

Question: if I change my keep schedule to retain, say, at least one backup from each of the last 3 months, will that be enough to avoid ransomware damage, or will I also need B2 keys with limited capabilities for that to work?

I mean, if ransomware strikes and my keys also have the delete capability, could it destroy even backups that are months old?

Versioning only exists in the duplicacy world. Files in the bucket are just files; ransomware does not care what's inside, whether it's photos of cats, duplicacy chunks, or Linux ISO files. It does not understand the content. And if it did, why would anyone write extra code to parse a duplicacy datastore just to mangle the last backup? That makes no sense. They're just files. If anything, ransomware probably targets known file types (doc, xls, etc.) rather than binary blobs, to encrypt the most valuable stuff as quickly as possible. If it spends hours grinding the disk encrypting your windows.iso, that's neither stealthy nor productive.

Theoretically? Yes, anyone with the keys can do whatever the keys allow. If your keys have delete permissions, they can simply wipe your bucket.

Practically? Will there be ransomware targeted specifically at duplicacy users, that scans the disk for credentials (or that, rather than bothering to work around the keychain, simply runs the duplicacy binary itself to prune the bucket completely in exclusive mode :wink: )? I highly doubt it. People who bother to use duplicacy are not only a minority; they likely have multiple other backups, restrictive bucket access, bucket replication, and so on. They generally make poor (high-cost, low-reward) targets, and are less likely to get ransomware in the first place (if you know about backups, you probably also know not to click random links or run downloaded files indiscriminately, and to keep your machine patched and your firewall working). Unless you are targeted specifically, but then it's a different story entirely.

Sorry, I do not completely understand this.

Chunks are immutable, so if ransomware strikes, could it still encrypt my chunks/revisions on B2?
I mean, ransomware should not be able to prune my backups, so it would just try to encrypt everything. But if chunks are immutable, should I be OK? Is this correct?

This statement implies that you don’t have keys that disallow delete. Otherwise you would not be able to prune.

If you want "immutable" backups and still want to be able to prune, then you need another set of keys that does allow modifications, including delete, and you perform the prune from another, more secure machine. (There is no point in having both sets of keys on the same malware-infested machine, obviously.)

Chunks are immutable in the sense that Duplicacy does not change them. Ransomware might. To prevent that, you use those special access keys that don't allow changes. But with those keys duplicacy cannot delete chunks either, so it won't be able to prune. Hence you need a separate set of keys for prunes that does allow delete. And since those keys would allow delete for ransomware as well, this implies having a more secure environment, unaffected by the ransomware, from which to prune.

If ransomware is a concern all around, never prune. Depending on your dataset, pruning may not provide any measurable benefit anyway.