Run duplicacy-web as root or as a dedicated user in Linux?

I am trying out duplicacy-web in Linux.
The files I want to back up mostly belong to my own user account, but some are owned by other users, e.g. root or www-data. These files should be included in the backup.

One obvious solution would be to run duplicacy-web as the root user. But is this a good idea?
What is the “best practice”?

Could I create a separate user “duplicacy” with read access to everything, but no write access?

I might be able to figure this out by myself, but I imagine I am not the first with this question.

A separate user/group will of course be better for many reasons, especially since duplicacy downloads and runs executables from the internet. I’m not sure whether it validates their signatures.

An exception would be if you run it in a container to back up read-only mounts. In that case, running it as root inside the container is easier and just as good.
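For illustration, a minimal sketch of that approach; the image name (example/duplicacy-web), the host paths, and port 3875 are placeholders, so adjust them to your setup:

```
# The source data is bind-mounted read-only (:ro), so even though the
# process runs as root inside the container, it cannot modify the
# original files. Image name, paths, and port are assumptions.
docker run -d \
  --name duplicacy-web \
  -v /srv/data:/backup/data:ro \
  -v /srv/duplicacy-web:/config \
  -p 127.0.0.1:3875:3875 \
  example/duplicacy-web:latest
```

Publishing the port on 127.0.0.1 keeps the web GUI off the LAN unless you deliberately expose it.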

A separate user/group will of course be better for many reasons

But how would I set this up so that the separate user gets read access to all the files? Or is there a trick?

An exception would be if you run it in a container to back up read-only mounts. In that case, running it as root inside the container is easier and just as good.

This sounds smart.

Either way, it would be cool to have at least one of these options documented somewhere. This should be relevant to everybody who uses duplicacy.

Regardless of whether it’s Duplicacy or another backup program, if it’s possible to provide read-only access to the files that need backing up, do so. The primary reason is to avoid potential issues such as a bug or other unintended process damaging the original files.

To be root or not to be root…

Short of coding everything from scratch and/or auditing every line of code used, there’s some level of trust involved with the software that we all use.

I don’t think there’s much of an advantage to running Duplicacy web edition as a non-root user if the web microservice is only listening on localhost/127.0.0.1 and is properly secured (e.g., authorized access using a password, limited to known users, etc.). If being able to centrally back up any file on the system is required, the additional hurdles of managing permissions increase the chances of a mistake. And if there’s a need to remotely access the web GUI:

  • Port forwarding via SSH, stunnel, or some other similar method (see the sketch after this list).
  • Peer-to-peer VPN.
  • A reverse web proxy on the host running Duplicacy web edition to provide HTTPS, additional access controls, deal with XSS, etc.
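For the SSH option, a minimal sketch, assuming the web GUI listens only on 127.0.0.1 with the default port 3875 and that “backup-host” is the machine running Duplicacy (both placeholders):

```
# Forward local port 3875 to the loopback interface on the backup host;
# "backup-host", "user", and port 3875 are assumptions/placeholders.
# Afterwards, browse to http://127.0.0.1:3875 on the local machine.
ssh -N -L 3875:127.0.0.1:3875 user@backup-host
```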

But if there’s an important need to run Duplicacy as a non-root user while still being able to back up files owned by individual users, most files and directories can usually be handled with basic group permissions. For example, create a “duplicacy” user, make it a member of all of the groups that own the files to be backed up, and add group read permission to the files and directories. Files and directories that have root:root ownership and are only readable by root can be backed up separately (i.e., “root” runs Duplicacy for its own files).
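A minimal sketch of that setup, assuming the files belong to the groups www-data and users and live under /var/www (group names and paths are placeholders for your own setup):

```
# Create a dedicated, non-login "duplicacy" user
# (the shell path may be /sbin/nologin on some distros).
sudo useradd --system --create-home --shell /usr/sbin/nologin duplicacy

# Add it to the groups that own the files to be backed up
# (www-data and users are assumptions; substitute your own groups).
sudo usermod -aG www-data,users duplicacy

# Make sure the group can read files and traverse directories.
# The capital X adds execute permission only to directories (and to
# files that are already executable), not to regular files.
sudo chmod -R g+rX /var/www
```

Keep in mind that files created later are only group-readable if the creating process’s umask allows it, so the chmod may need to be re-applied periodically.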

To container or not to container…

Containers have the advantage of being lightweight because they aren’t actually virtual machines (although they can seem very much like they are). The downside is that the separation/isolation between what’s running inside a container and the host isn’t as strong as for a true virtual machine. This means that if the sandbox around the container is breached, it’s up to the security of the host to limit the level of damage.

Docker uses a daemon (dockerd) to manage containers. Because dockerd defaults to running with root privileges and is responsible for launching the container runtime/engine for each container (a hub-and-spoke model), an intruder or malware that manages to break out of a container will have root privileges on the host system. So even inside a container it’s best to use a least-privilege approach whenever possible.
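A minimal sketch of that least-privilege approach, again assuming a hypothetical image name, host paths, and port; the UID/GID 1001:1001 is a placeholder, and whatever user you pick needs write access to the host-side config directory:

```
# Run the containerized process as an unprivileged UID/GID instead of
# root, and mount the source data read-only; everything here other than
# the docker flags themselves is an assumption/placeholder.
docker run -d \
  --name duplicacy-web \
  --user 1001:1001 \
  -v /srv/data:/backup/data:ro \
  -v /srv/duplicacy-web:/config \
  -p 127.0.0.1:3875:3875 \
  example/duplicacy-web:latest
```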

Self-contained apps like Duplicacy that have no direct dependencies on system libraries are less vulnerable to LD_LIBRARY_PATH manipulation and other MITM attacks, but running Duplicacy CLI edition with root privileges inside a container with Docker can be less secure than running it as a dedicated non-root user on the host system.

Although running Duplicacy web edition inside a container provides additional protection from attacks against its web microservice, the same sandbox issue above applies. And if the web microservice is accessible over a LAN and/or WAN, the risk is even greater than running the CLI edition with root privileges in a container or natively on the host system.

Where a container really shines is in setups such as bottling up Duplicacy web edition with a reverse web proxy when there’s a desire to avoid commingling unnecessary software with a host system that isn’t already designed for running web apps (e.g., NAS, desktop). But it’s still important to be aware that a container doesn’t automatically improve security.