Run Duplicacy Web-Edition binary in place of the container

Hi,

I currently run Duplicacy Web Edition in a container on my headless Linux NAS.

I tried to use the Web Edition’s binary on my Ubuntu WSL and was wondering:

1- Why should I use a container for Duplicacy if it’s so simple to just run it as a single executable?
2- How can I run it at boot in detached mode, so that I don’t always have this in my terminal:

Duplicacy Web Edition 1.5.0 (BAFF49)                                                               
Starting the web server at http://127.0.0.1:3875

3- If running Duplicacy this way on my headless NAS, how can I access the web interface from another device on the same LAN? It currently uses localhost on port 3875.

Thanks!

EDIT: Oh, I probably found the answer to my question 3 here: How to access the duplicacy web-interface remotely

About question 2, I instead found this in the same thread: How to access the duplicacy web-interface remotely - #7 by mjw. So the solution to avoid the Duplicacy “output” is to run it as a service on Linux?

No reason other than simplified deployment – for example, you would not need to worry about your next question and updates.

You mean to run it unattended on boot as a daemon? You need to make a service out of duplicacy_web and run it using your OS’s facilities. See the documentation for systemd or upstart, or whatever Ubuntu uses today.

You can do that, with the understanding that this is extremely insecure. If your server is on a trusted LAN – then sure. Otherwise – keep it listening on loopback and access it via SSH tunneling, as described, for example, here: Web-UI security: HTTPS, sessions, and logout button - #7 by saspus

You can improve the script posted there by actually checking for connectivity instead of waiting for one second (use nc -zv).

For updates, I think it’s basically a matter of replacing the executable with the new version?
I’m currently facing issues with my container, as described here: Duplicacy Web-ui container reset Fossil collection counts

The keyring is not working, so I have to set the password in my compose file with an environment variable, which I don’t like from a security standpoint. Also, when rebooting the system, updating Docker, or recreating the container, it seems that my fossil collections are erased for some reason.

That’s why I was looking for a more direct way to run it.

Exactly. I use OMV5 on top of Debian, so I suppose I should follow this? How to access the duplicacy web-interface remotely - #7 by mjw

Well, actually I could use my_nas_ip:3875? This way I would access it the same way I currently access the container. Would there be a difference?

There is no need to do that since version 1.4.1. You just need to log in to the web interface once.

This is a container configuration issue. Eventually, you will figure it out. It’s much easier to fix than to redo manually what the container is already doing for you (updating, starting, etc.).

I did not look into that thread, but this sounds like your mapped folders don’t persist. Nothing should be getting erased following a reboot, unless you placed your data into, e.g., /tmp/

Likely yes. Note, you may need to provide arguments to duplicacy-web (such as -no-tray-icon and -background). Not sure if the former is macOS-specific, though.

With the understanding that anyone on your network can do that and access all of your data – yes, no difference. If you want some security – use SSH tunneling, regardless of whether you use a container or not.

Binding to 0.0.0.0 allows connections from any interface. Hardly a good idea.
Binding to 127.0.0.1 only allows connections from local clients – which is what the SSH tunnel endpoint is.

It does, but after a restart of the system or a container recreation I have to enter the password again, otherwise the configuration will not be decrypted and the schedules won’t work.

My compose file is really simple, and the volumes are bound the same way as in all my other containers.
I don’t want to go off topic, but I would be glad if you could find the time to have a look at that topic.

Correct. Aren’t we talking about a NAS, which is supposed to be running 24/7? Unless your NAS supports secure boot and a TPM-like hardware key enclave for disk encryption, it’s an insecure device, and it’s best not to keep sensitive data on it in plaintext at all. Definitely not at rest. So it’s a good thing that Duplicacy requires the password on start.

So basically, even using the binary rather than the container, after a reboot the password is lost from the keyring and requested again?

It’s the same regardless of whether you use a container or not. Credentials are encrypted and protected by the application password. Even though it’s probably pointless anyway, because when the duplicacy CLI is running, the passwords and other secrets sit in the process environment in plaintext for everyone to see.

No, nothing is lost from a keyring. I don’t think there is one.

Do you have a keyring on your NAS? How do you unlock it following a reboot?

No, I never configured a keyring on the NAS.

I enter the password the first time and select the option to save it in the keyring/keychain. Then I don’t need to enter it again. Also, in the container a file called keyring is created (I’m not able to look up the complete name right now).

The need to enter the password again only arises when I update Docker, restart the NAS, or edit the compose file and recreate the container.

Are you talking about your browser? If so, it has nothing to do with the Duplicacy instance on your NAS.

No, it’s this: Duplicacy User Guide, in the “Setting up the Master Password” section.

I select the checkbox and then, until I reboot etc. as I was saying above, I’m OK.

This is the content of my container’s /config volume:

drwx------+ 2 root root 4096 Apr  9  2021 bin
-rw-------  1 root root 9771 Nov 30 01:02 duplicacy.json
drwx------+ 3 root root 4096 Aug  9 20:46 filters
-rw-------  1 root root  171 Nov 24 17:58 keyring
-rw-------  1 root root 1032 Nov 25 01:00 licenses.json
-rw-rw----  1 root root   33 Apr  9  2021 machine-id
-rw-rw----  1 root root  144 Apr  9  2021 settings.json
drwx------+ 4 root root 4096 Apr  9  2021 stats

And the keyring file has this content:

cat keyring
{
    "encryptionkey": "hidden" 
} 

Something is wrong with your container/permissions/what have you. Post the exact command line you use to launch the container.

This is what I just tried.

  1. Fetch and launch the container:
temp=/tmp
mkdir -p $temp/logs $temp/cache $temp/storage $temp/backuproot $temp/config

docker run --name duplicacy-web-container        \
    --hostname duplicacy-web-docker              \
    --publish 4875:3875/tcp                      \
    --env USR_ID=$(id -u)                        \
    --env GRP_ID=$(id -g)                        \
    --env TZ="America/Los_Angeles"               \
    --volume $temp/config:/config                \
    --volume $temp/logs:/logs                    \
    --volume $temp/cache:/cache                  \
    --volume $temp/backuproot:/backuproot:ro     \
    --volume $temp/storage:/storage              \
    saspus/duplicacy-web:mini
  2. Log in to http://localhost:4875.

→ You’ll see a password prompt.

  3. Create a password.

/tmp/config/keychain will be created.

  4. Stop the container.
  5. Delete the container.
  6. Execute step 1 again.
  7. Execute step 2 again.

→ No password prompt; you get directly to the UI.

  8. Delete the /tmp/config/keychain file.
  9. Go through steps 1–2.

→ Password is asked again.

So when removing and recreating the container you had no issue, because your volume was still there with the keyring in it.

This is my compose:

version: "3.7"
services:
  duplicacy:
    image: ghcr.io/hotio/duplicacy:testing
    container_name: duplicacy
    hostname: hidden
    ports:
      - 3875:3875
    environment:
      - PUID=0
      - PGID=0
      - UMASK=002
      - TZ=Europe/Rome
    volumes:
      - /srv/dev-disk-by-label-HC2/AppData/duplicacy/config:/config
      - /srv/dev-disk-by-label-HC2/AppData/duplicacy/cache:/cache
      - /srv/dev-disk-by-label-HC2/AppData/duplicacy/logs:/logs
      - /srv/dev-disk-by-label-HC2:/source:ro
    restart: unless-stopped
networks:
  default:
    external:
      name: my-net

I’m not sure if I have this issue when I update the container with this same compose file; that should basically be what you did above. If I’m not wrong, this only happens when I restart the system or after an update of Docker itself. But honestly I can’t be sure that I remember correctly.

Well, that’s the point. Does [access to] the /srv/dev-disk-by-label-HC2 volume disappear?

Is that a network mount? If so, have you ensured that the mount is available before starting the container? (Not sure if the container would actually be able to start in the first place if the mount was missing.)

Neither should matter.

Test it out :slight_smile:

What do you mean?

No, it’s just a shared folder on the hard disk. All the other containers have their configs under that /AppData dir.

Yes, actually Docker waits to start until the disk is fully mounted. I did that by editing a file; I should check the OMV forum to find which one it was again.

I will try right now to remove the DWE env variable that saves the password in plaintext, and rebuild the container.

OK, did it now: removed that env variable from the compose file and rebuilt the container.

No issue; I went directly to the prompt for the administration password, so no need to enter the “first” password.

You don’t need to rebuild the container, just restart it.

I’m wondering if you are violating some Docker requirements here: you are mounting a folder tree (as read-only), and then separately mounting one of its subfolders (as read-write).

This sounds suspicious.

You know, looking at my compose just now I was thinking the same thing. But why then would I have no issue when restarting/recreating the container, and only have it at reboot of the NAS and when Docker updates?

That’s what undefined behavior is :). It can do one thing, or another, or send you an email, or brew coffee… Who knows :slight_smile:

Coffee please :stuck_out_tongue:

I will try to do some research on this.

Also tried restarting the container: no issue.