Backup destination ends up on the source volume?

I just started using @saspus's docker container in Unraid.

The configuration seemed self-explanatory: I mapped backuproot to /mnt/user and left the rest of the paths at their defaults.

  • started the container, accessed the WebGUI,
  • set up the source: 3 drive unraid array (2 data + 1 parity)
  • the destination: separate ZFS pool, a 2-drive mirror (supported natively as of Unraid v6.12)
  • configured the job
  • started the first backup.

After the indexing was done I immediately noticed slow speeds: 20MB/s, with 18 hrs left for 1.25TB of data. Yes, technically it's an HDD-to-HDD copy, so I wasn't expecting hundreds of MB/s, but not 20MB/s either.

I go to the Unraid dashboard to look at the Reads/Writes and see this:
reads at 20MB/s and writes at 20MB/s, both on the same source array! The ZFS pool is not registering any writes!

So I open the console and browse to the ZFS pool and my backup destination subfolder: /mnt/recpool/duplicacy-backup

It is empty!

I start browsing around to locate the actual destination where the backup is being written. I find it on Unraid array disk 1, under a folder whose name is identical to the one I created on recpool:
duplicacy-backup.

In my next comment I will describe how I am able to reproduce the problem.

Show your full docker run command.

Did you bind the destination into the container? Have you configured the destination to be that mount point in the container?

I am a Linux beginner. A lot, including Linux file systems, is still unclear to me. Please don’t beat me too hard.

  1. I mapped backuproot to /mnt/user

  2. the /mnt/user location contains Unraid system shares, user shares and cache. As of v6.12 when ZFS support became native, the /mnt/user location also contains pointers (symlinks?) to the ZFS pool top level folders. My pool named recpool has 2 folders: “rec” and “duplicacy-backup”.
    As a result I can access my ZFS pool data via /mnt/user/rec and /mnt/user/duplicacy-backup.

  3. Since backuproot was mapped to /mnt/user, I could configure the backup destination in the Duplicacy WebGUI quite easily: Storage > Add > Directory > Browse > backuproot/duplicacy-backup.

And that's exactly what's causing the problem. Immediately after applying the change, somehow, for some reason, a folder with exactly the same name, duplicacy-backup, is created on my main Unraid array (clarification edit: and becomes the actual backup destination).

I suspect it's because /mnt/user is special in that it combines user shares from the main array, folders from the ZFS pools, and cache data from cache pools. I suspect that Duplicacy treats /mnt/user as exclusively the location of Unraid's main array. When I point to backuproot/duplicacy-backup, it doesn't find that folder on the main array, so it creates it there and sets it as the destination.
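A quick way to check whether two paths really live on the same volume is to compare their device IDs with stat. This is just a sketch: the helper name same_fs is mine, and the Unraid paths in the comment are the ones from my setup.

```shell
#!/bin/sh
# Minimal sketch: compare the device IDs of two paths. Different
# IDs mean the paths live on different filesystems. On my Unraid
# box I would run:
#   same_fs /mnt/user/duplicacy-backup /mnt/recpool/duplicacy-backup
same_fs() {
    [ "$(stat -c %d "$1")" = "$(stat -c %d "$2")" ]
}

a="${1:-/}"
b="${2:-/tmp}"
if same_fs "$a" "$b"; then
    echo "$a and $b are on the same filesystem"
else
    echo "$a and $b are on different filesystems"
fi
```

If the backup destination reports the same device ID as the array share, the writes are landing on the array no matter what the path looks like.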

======

To prove my assumptions, I changed the mapping of backuproot to one level higher: /mnt/
Unlike /mnt/user/, this location contains mounts of the actual physical disks and the actual ZFS pools (recpool in my case). To access my original destination I now need to navigate to /mnt/recpool/duplicacy-backup.

In the Duplicacy WebGUI the path would look like this: backuproot/recpool/duplicacy-backup.

I attempted another backup and boom, Duplicacy understands that the folder is on a different volume now. Everything works as expected, 60MB/s!
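To double-check from inside the container that the destination now resolves to the pool, something like this should work (the container name duplicacy is an assumption; use whatever name Unraid gave yours):

```shell
# Container name "duplicacy" is assumed; check `docker ps` for the
# real one. The path is the in-container view of the destination.
docker exec duplicacy df -P /backuproot/recpool/duplicacy-backup
# The last column ("Mounted on") should point at the bind mount of
# the pool, not at the array.
```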

I hope this helps someone and perhaps @saspus can look into it.

The end.

Hi saspus! I saw your reply only after posting the second part. I believe it answers your question.

I did bind the destination to the ZFS pool, but not directly (as I understand it; Linux newbie here). I used the /mnt/user/ path instead.

I am in the process of the first full backup. I'm not sure how to view the run command without restarting the container.

If it’s still relevant I will post it once I am able to restart it.
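As a side note, the run configuration of a running container can be inspected without restarting it; docker inspect works on live containers. A sketch, assuming the container is named duplicacy:

```shell
# List the bind mounts of a running container (name assumed;
# use `docker ps` to find the actual name).
docker inspect -f '{{range .Mounts}}{{.Source}} -> {{.Destination}}{{"\n"}}{{end}}' duplicacy

# Or dump the full configuration (mounts, env, entrypoint) as JSON:
docker inspect duplicacy
```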

Looks like a mixup between the permissions your Duplicacy instance is running with (the usr_id/grp_id parameters) and what it can access on the host.

It's always best to bind the target folders explicitly, like you did in your second attempt, and to avoid following mountpoints on the host: it's possible to make it work, but it's too much effort and maintenance.

Another option is not using containers in the first place. Duplicacy does not benefit from containerization: there are no external dependencies, and Docker only adds another level of indirection without any benefits in return. You can launch duplicacy with a service daemon on your OS (no idea what Unraid uses; systemd?) and avoid some overhead, both at runtime and in maintenance.
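For illustration only, on a systemd-based distro a minimal setup could look like the sketch below. Note that Unraid itself is Slackware-based and does not ship systemd, and the binary path, user, and unit name here are all assumptions, not anything from this thread:

```shell
# Hypothetical systemd unit for duplicacy_web; binary path, user,
# and unit name are assumptions -- adjust to your system.
sudo tee /etc/systemd/system/duplicacy-web.service >/dev/null <<'EOF'
[Unit]
Description=Duplicacy Web Edition
After=network-online.target

[Service]
ExecStart=/usr/local/bin/duplicacy_web
Restart=on-failure
User=root

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now duplicacy-web
```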

I’ve done it on Synology, you can adapt the same approach to unraid: Duplicacy Web on Synology Diskstation without Docker | Trinkets, Odds, and Ends

What is the advantage of this, versus installing and running the Synology package directly?

At the time of writing Synology packages did not exist.

Today on Synology — minimal to no advantage.


With backuproot being mapped to /mnt/, is it possible to configure the “Storage” (backup destination) one level up from /mnt/?

My other ZFS pool is mounted outside /mnt/. There is a separate folder /zfs/ for it.

If you ask me why: that's how some online guides recommended doing it back when ZFS was not natively supported in Unraid.

You can bind as many folders from your host to the container as you need.

Pardon my ignorance and let me know if the procedure is documented - I’ll look it up.

But if it’s easy to explain in here - how do I do it in the Container’s Edit page?

I can see a parameter named User Data that maps a host path (/mnt) to /backuproot.

Do I just create a new parameter of type “Path”, give it any name, any container path and bind a host path to it?

I don't have access to the Unraid container manager UI to check, but I'd think there would be a way to add another mapping to the container, the same way you added the mapping of /mnt to /backuproot. Ultimately this all gets passed to the Docker CLI as the command-line argument --volume /mnt:/backuproot. You can have multiple arguments like this, mapping multiple locations on the host into various folders in the container.
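Put together, a full invocation with several explicit bind mounts might look like this. The /mnt and /zfs mappings follow this thread; the container name, port, and the /zfsroot target are illustrative assumptions, so keep whatever your current template uses:

```shell
# Illustrative docker run with multiple explicit bind mounts.
# Container name, port, and the /zfsroot target are assumptions.
docker run -d --name duplicacy \
  --volume /mnt:/backuproot \
  --volume /zfs:/zfsroot \
  --publish 3875:3875 \
  saspus/duplicacy-web
```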

Sometimes it’s easier to do things in the CLI than battle the UX designer’s idea of how it should work…

I figured it out. Thank you!