Trying to get Duplicacy to recognize NFS share in docker on Unraid

I’ve installed Duplicacy on Unraid using the community docker install.
Everything is set up as follows.

Unraid server has an NFS share to a remote server via Tailscale for remote backups.
I can write to the share and use it fine from Unraid.
I’ve configured in the docker variables:
User Data: /mnt/remotes/ip_backups/
Container Path: /backuproot

When I go into Duplicacy to set up storage, I select /backuproot and get the following:
“Failed to check the storage at /backuproot: stat /backuproot/config: permission denied”
If I docker exec into the Duplicacy container I can read and write to /backuproot just fine: I can see the files on the shared drive and write to it.

What gives?

Depending on which container you are running, duplicacy itself may be running under a different user/group ID. The fact that a shell into the container lets you read/write to the NFS share reinforces that hypothesis.

Is this nfs3 or 4?
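
You can check what the client actually negotiated with something like the following (assuming the share is mounted at /mnt/remotes/ip_backups as above):

    # Look for vers=3 or vers=4.x in the mount options of the NFS share
    mount | grep /mnt/remotes/ip_backups
    # Alternatively, if nfsstat is installed, it prints the same per-mount options
    nfsstat -m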

The share on the unraid server (where the duplicacy docker is running) is nfs4.
I am running “saspus/duplicacy-web”

Hey wait a minute…that’s you!

lol :slight_smile:

             --env USR_ID=$(id -u)                        \
             --env GRP_ID=$(id -g)                        \

Do you have USR_ID and/or GRP_ID set for the container? Check what these variables end up as inside the container. As a test, you can set them to 0, and duplicacy will run as root in the container and should have the same access as you had when you opened a shell there.
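
For example, a quick way to verify from the Unraid host (the container name duplicacy here is an assumption; use whatever docker ps shows for yours):

    # What the container actually received for these variables
    docker exec duplicacy env | grep -E 'USR_ID|GRP_ID'
    # Which UID/GID the processes inside the container are actually running as
    docker top duplicacy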

So that worked. I previously had the UID of the backup user I created on the two servers, and a GID of 100.

Is there any way to get it working with UID 1044 for the backup user? I assumed this was correct.

That should have worked.

NFS does most of the work client side, and very little on the server; unless you have configured idmapper/Kerberos, for NFS to work the user with that ID on the client must have access to the folder on the server. Access is by ID, not by name.

For example, if you have a user with ID 1044 on the client, on the server you can literally chown -R 1044 /path/to/share, regardless of whether user 1044 exists on the server. It does not need to exist. (Perhaps also give list access to all parent directories, if any.)
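
Concretely, on the server that could look like this (/path/to/share stands in for the actual export path):

    # Hand the export to UID 1044, even though no such user exists on the server
    chown -R 1044 /path/to/share
    # Make sure the parent directories can at least be traversed
    chmod o+x /path /path/to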

If this does not work, you are likely hitting NFSv4 trickery, and that I can’t help you with; I gave up when I was trying to make NFSv4 + idmapper work a few years ago. Try downgrading to NFSv3. If that works, it would confirm it. And then it’s up to you whether you want to jump into the NFSv4 rabbit hole.

Ok, I’ll leave it at 0,0 I guess?
One question, how do I access my shares from unraid to backup from the docker?
I tried to make another path, adding /mnt/user, but all that did was seemingly frag my Unraid system. All my dockers went down, a container called suspicious_tu appeared, and after a reboot the server wouldn’t come back up. I have no idea how adding a variable to a docker could do that. Well, it eventually came back up, but the Duplicacy docker is gone, so I have to start this all over again. I think the last bit of info I need is how to add my local Unraid shares from the server the Duplicacy docker is running on.
I want to back up those shares, docker containers, etc.

You would need to mount the shares on the host and then bind them to the container.
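
In docker run terms that is just another bind mount. A sketch only, not the image’s full documented invocation (it also wants config/cache/log volumes), and /backup_pictures is a made-up container path:

    # The NFS share as the storage destination, plus a local share to back up,
    # bound read-only. See the image's README for the complete command.
    docker run -d --name duplicacy-web \
        --env USR_ID=$(id -u) \
        --env GRP_ID=$(id -g) \
        -v /mnt/remotes/ip_backups:/backuproot \
        -v /mnt/user/pictures:/backup_pictures:ro \
        saspus/duplicacy-web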

Containers don’t like it when a mount is not accessible and fail to start. That does not have to take the whole docker engine down with it, but that’s Unraid stability for you in action.

That would be the simplest, but I would not do that. You don’t want an app that fetches other apps from the internet to have root access, even if only inside the container.

This brings me to another thing to check: does Unraid have any AppArmor/SELinux active? That may affect who can access shares, and how.
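
A quick way to check on the host (either tool being absent is itself an answer):

    # SELinux status, if the tooling is present
    getenforce 2>/dev/null || echo "no selinux tooling"
    # AppArmor status, if the tooling or kernel module is present
    aa-status 2>/dev/null || cat /sys/module/apparmor/parameters/enabled 2>/dev/null || echo "no apparmor"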

If I were you I would not use containers at all. They are not needed for duplicacy, which is already a self-contained executable, and they add an extra layer that can fail.

You may want to look at this: Duplicacy Web on Synology Diskstation without Docker | Trinkets, Odds, and Ends

There are examples for both upstart and systemd.
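
As a rough sketch of the systemd route (not copied from the post; the binary path, unit name, and backup user are placeholder assumptions):

    # Minimal unit file for Duplicacy Web running as an unprivileged user
    cat > /etc/systemd/system/duplicacy-web.service <<'EOF'
    [Unit]
    Description=Duplicacy Web Edition
    After=network-online.target

    [Service]
    User=backup
    ExecStart=/usr/local/bin/duplicacy_web
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target
    EOF

    systemctl daemon-reload
    systemctl enable --now duplicacy-web.service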

So it’s not that the mount was inaccessible. I simply went into the docker settings and added a variable mapping the path /mnt/user to /backupnew. When I restarted, everything went to hell.
I’m definitely past the tinkering stage, I just want things to work, so I’ll have to stick with docker. I added a specific share, /mnt/user/pictures, and that worked; I’m doing a backup as we speak. Guess it didn’t like going to the root share.

While I’ve got you here: I’ve had multiple issues with this thing blowing up my Unraid server. Last time it was adding 3 backups and trying to run them at once. I had to reinstall and redo everything.
My question is this:
This is just my first full backup, locally on the LAN. When I move this thing offsite and it’s doing incremental backups, how should I configure this to run? Will it run each one at a time, then continue after each, etc.?

I don’t know what “blowing up” means in this context :), but none of that should happen. Are you sure you don’t have hardware issues there? I would run a couple of rounds of https://www.memtest86.com

You mean connectivity-wise? You can use port forwarding (just don’t use NFS in that case; use SSH or some other protected protocol), or you can choose some sort of VPN; a site-to-site VPN based on WireGuard is a popular option, and there are Tailscale and ZeroTier too.

What do you mean? You can set up a schedule, e.g. hourly or daily, depending on how much data you are adding and how fast.

So I’m doing the first backups locally on my 10G network. Total 32TB.
I’ve broken them up into separate backups. Pictures, work, video, etc.
I’m currently doing ONE backup of these groups at a time. As I said, I tried doing three at a time and the server lost all network connectivity; even at the terminal I couldn’t get a video signal. That’s how badly it bricked my system. When it came back online, the Duplicacy docker was deleted, AGAIN. I don’t have issues with my system, and I have never had any application do this to my server in over 5 years. Just Duplicacy.

So what I’m asking is, once all my initial backups are complete, if I schedule them for weekly backups, they will be incremental yes? Will each backup wait until the previous is done, and then continue on through all the backups?

Looks like it ran out of RAM. Since you are running duplicacy in a container, limit the amount of RAM the container can use.
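
With plain docker that is the --memory flag; in an Unraid template it would go into the container’s extra parameters. The 4g figure below is just an illustrative number:

    # Cap the container's RAM so a runaway backup cannot starve the host
    docker run -d --name duplicacy-web --memory=4g saspus/duplicacy-web
    # Or apply a limit to an already-created container
    docker update --memory=4g --memory-swap=4g duplicacy-web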

Or what else was the source of failure? What’s in the logs? Any panic logs?

I would also highly suggest increasing the chunk size if you mostly back up media files.
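
For reference, with the CLI the chunk size is fixed when the storage is first initialized. A sketch, assuming the init command’s -c/-chunk-size option, with 32M purely as an example value and my-backup-id as a placeholder:

    # Example only: create the storage with 32M average chunks for media-heavy data
    duplicacy init -c 32M my-backup-id /backuproot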

Yes, all subsequent backups are incremental.

So I keep having the same issue now:
Running backup command from /cache/localhost/2 to back up /backup_unraid
Options: [-log backup -storage server_backup -threads 4 -stats]
2024-04-10 20:21:54.086 INFO REPOSITORY_SET Repository set to /backup_unraid
2024-04-10 20:21:54.088 INFO STORAGE_SET Storage set to /backuproot
2024-04-10 20:21:54.091 ERROR STORAGE_CREATE Failed to load the file storage at /backuproot: stat /backuproot: stale NFS file handle
Failed to load the file storage at /backuproot: stat /backuproot: stale NFS file handle

I know this isn’t Duplicacy-specific, but it is happening across all my applications that require NFS shares. Is NFS really that big a pile of dung? Because all my NFS shares constantly go stale. Wondering what the best, recommended way to use network shares with Duplicacy is?

This can happen in two cases (either way, remounting the share on the client clears the stale handle; see the sketch after this list):

  • underlying file information has changed on the server
  • server rebooted while client had open handles.
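
A minimal recovery sketch, assuming the share is mounted at /mnt/remotes/ip_backups as earlier in the thread (server name and export path are placeholders):

    # Force-unmount the stale mount, then mount it again
    umount -f /mnt/remotes/ip_backups
    mount -t nfs4 backup-server:/export/backups /mnt/remotes/ip_backups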

No, NFS is ultra old, uber fast, crazy low overhead, and extremely stable technology. That is accomplished by specific design choices which make it prone to such failures if your machines are not stable.

If you consistently see these issues (I use NFS all the time and have never encountered it, by the way), perhaps look into resolving the root cause, which is likely your server’s stability. It should not be rebooting out of the blue, nor closing filesystem handles otherwise.

NFS is fine. If you want more robust error handling when the server goes away, and high performance on the LAN, use SMB. (Duplicacy has an SMB backend; you don’t need to manually mount the share.) If you want to back up remotely over a high-latency connection, consider SFTP instead.
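
If you go the SFTP route with the CLI, the storage is addressed by an sftp:// URL. A hypothetical example, with user, host, and path as placeholders:

    # Point a repository at an SFTP destination instead of a mounted NFS share
    duplicacy init my-backup-id sftp://backup@remote-server/duplicacy-storage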

So the network I’ll be using is Tailscale over fiber. Both source and destination are on 2Gb fiber with 6ms latency. In last night’s case, it was my fault. I set up an appdata backup and, like a moron, didn’t turn OFF Duplicacy. It shut Duplicacy down to back it up in the middle of my backup. >EEK<

Great. 6ms is still much higher than LAN latencies, but not by too much. I think either protocol will be fine, and NFS would have the least overhead compared to SMB, let alone SFTP.