How secure is Duplicacy?

Over on the Duplicati forum they are discussing Duplicati’s security:

I’d like to point out one comment by the duplicati developer in particular:

After thinking about this for a while, I see that we need to consider the attacker scenario.

  1. User exposes password (like password re-use, etc)
  2. Machine/network is breached
  3. Destructive malware/ransomware

If we use the keyfile approach, as Duplicacy does, we can only really cover (1).

If the machine is breached, they can easily recover the real passphrase, and changing the keyfile passphrase is not going to prevent anything.
Malware/ransomware can effectively kill the keyfile and make backups useless.

What does this mean for Duplicacy? How secure is it? Or rather: what are Duplicacy’s current vulnerabilities?


Well, my opinion, in simplified form: Duplicacy has to ensure the security of the files stored as backups in the cloud / storage. The scenarios cited affect all my files, not just the Duplicacy keys, and are outside the scope of a backup tool.

If you have bad habits with passwords, there is nothing Duplicacy can do. Solution: use a password generator / manager.

Keep your most confidential files encrypted. I use Veracrypt and AxCrypt. There is also Cryptomator and others. I travel a lot for work with my notebook; it can be stolen, etc.

Remember the “2” in the backup rule 3-2-1: 3 copies, 2 different technologies, at least 1 off site. Previously, “2” meant 2 media (HDD and DVD, for example). Today I use cloud and NAS. And specifically the Duplicacy keys and some more sensitive files I use another “technology”: Rclone. Even in the Duplicati forum topic, a user suggests using GPG and backing up the keys.

Perhaps something can be added to Duplicacy in terms of security, such as two-factor authentication. I don’t know the difficulty of implementing this, whether it would require changing just one module or rewriting the whole software.

2FA sounds like an awful idea (implementation-wise): gchen would need a server that is always available (which means your Duplicacy needs internet access – some may not want that) to vouch for your code. If that dies by any chance (or someone poisons DNS or whatever else) – bye bye backups.

There’s also the problem of how to handle typing the 2FA code. I (you, we?) use shell scripts and task schedulers for running our backups. How do we automate those to use a 2FA program?

I completely agree. I also think 2FA does not apply well to Duplicacy.

Guys, what you’re saying is all very interesting, but so far you’re not really answering the question posed in the OP…

Agree…:roll_eyes:

Defined by how safe / good your encryption password is.

An example: you have to take care of your keyring files (if you don’t use environment variables). No one else should have access to them, and you should back them up by methods other than Duplicacy itself.

I mentioned this before, but if the password is compromised, changing it might not make a difference (depending on how much access an attacker has gained to snapshot and chunk files). I don’t think this is a problem with Duplicacy itself, or something that can be mitigated - choose a complex password from the outset and, aside from any coding vulnerabilities, it’s probably about as secure as it’s going to get? (This is of course assuming an attacker already has access to your local/SSH/cloud storage.)

Aren’t those partially encrypted by the OS’s keyring / credential management? And do you really need to back them up? They should be recreated when you enter the master password…

I’ll do some testing on a different machine …

This is true, but in this case the attacker would already have access to the original files, so I don’t see the point of protecting the backups.

As long as the config file is untouched (which is stored in the storage), you should be able to access the backups with the storage password.


Is my understanding correct, that without the config file, I wouldn’t be able to restore anything, even if I have my storage password?

If so, this config file becomes the most vital file in my eventually TB-sized Wasabi bucket? A missing chunk could render a single file useless, but a missing (or corrupt) config turns my whole backup into a brick?

If my assumptions hold true, what is the recommended way of dealing with this? Copy away the config file after every backup run?

Sorry, if this is answered elsewhere, but searching for “backup” and “config” in this forum yields quite a few hits :wink:


A write-only storage would be one way to protect it against malware. But not against hardware failure.

Edit: about immutable storage:


This is correct.

I believe Wasabi can be trusted for storing a small file that is usually not changed. If you’re really concerned, you should save this file to multiple places, like on your own computer or another cloud storage.

Wondering if we could use a config file from a bit-identical copy?
```
duplicacy add -copy (wasabistoragename) -bit-identical samename sameid C:\something\forconfigbackup
```

(It would not need the chunks for this purpose)

If this does not produce a reliable copy, maybe @gchen could add an option to Duplicacy for extracting a copy of the config for safekeeping. (If not, the user must use other tools to do this.)

I do exactly that. I have an empty extra copy (no chunks) of all the storages I use.

I set it up just at the time of the discussion above on this topic, over a year ago.


Could you elaborate? What exactly do you do? (Sorry if the answer seems obvious. It probably is for anyone using those commands on a regular basis.)

I simply created a storage copy (:d: add command), but I don’t create backups to that add-ed storage. So I have a copy of the config file, but no chunk or snapshot files.


Now I get it. That would indeed be something worth mentioning in the documentation. Not sure exactly where. One option is to mention it under the init command because people will see it there. But it wouldn’t be consistent as this is about the add command. So maybe put it there? Or create a new How to on “added security” or something.

Maybe mention in the init command page something like:

“It is important that you back up the config file that is generated in the storage. If it is lost or corrupted in the storage, you will lose access to your backups. You can do this by using the add command to create a copy of your storage, or by directly copying the config file and storing it somewhere off the storage (like on your own computer or another cloud storage).”
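For a locally reachable storage, the “directly copying the config file” part of the suggestion above could be a one-liner in a post-backup script. A minimal sketch, assuming the config file sits at the storage root (the demo uses throwaway directories as stand-ins; with a real cloud storage you would use the provider’s CLI instead of `cp`):

```shell
#!/bin/sh
# Hypothetical sketch: keep a dated off-storage copy of the config file.
# STORAGE here is a throwaway directory standing in for the storage root;
# point it at your real storage (e.g. a mounted or SFTP path) in practice.
STORAGE=$(mktemp -d)
printf 'fake-config-bytes' > "$STORAGE/config"   # simulate the config file

DEST=$(mktemp -d)   # stand-in for "elsewhere off the storage"
cp "$STORAGE/config" "$DEST/config-$(date +%Y-%m-%d)"
```

Dating each copy means an accidental overwrite of the config in the storage doesn’t silently propagate into the only backup of it.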

Wasn’t there supposed to be a new feature whereby Duplicacy would cache the config file locally on the client?

Also, wouldn’t it be a good idea for the init, add and password commands - anything that writes to the config file in the storage - to also create a config.bak in there as an extra precaution? And if amending an existing config, to put the previous copy as config.old?


Thank you for your insights, @akvarius, @gchen, @towerbr, @Christoph and @Droolio. Very helpful.

I have no reason to not trust Wasabi to take care of the config file. I am more concerned about myself making a mistake (the sort of “oh, I thought this was my test bucket in this window here” or inadvertently triggering the wrong command from shell history).

It’s more of an operational thing than security related, IMHO (my mistake of course, since I hijacked the thread).

I like Droolio’s idea of storing the config redundantly on the remote storage. Why not put a verbatim copy of the config file in each of the 256 top-level chunk folders? It would add a mere ~350 KB of data, but a lot of resiliency.
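Even without built-in support, the replication idea above can be scripted. A minimal sketch, assuming the “256 top-level chunk folders” layout described in this post (two hex digits each, `00` through `ff`); the demo simulates the storage with a throwaway directory, and the real chunk layout may differ per backend:

```shell
#!/bin/sh
# Hypothetical sketch: drop a verbatim copy of the config into every
# two-hex-digit chunk folder. STORAGE is a throwaway stand-in directory.
STORAGE=$(mktemp -d)
printf 'fake-config' > "$STORAGE/config"   # simulate the config file

i=0
while [ "$i" -lt 256 ]; do
    dir="$STORAGE/chunks/$(printf '%02x' "$i")"   # 00, 01, ... ff
    mkdir -p "$dir"
    cp "$STORAGE/config" "$dir/config"
    i=$((i + 1))
done
```

At ~1.4 KB per copy, 256 copies is roughly the ~350 KB mentioned above.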