Web UI Crash during backup

Hello guys, I have bought a license for my NAS and so far have not been able to complete a single backup using the web UI. :frowning:

Please describe what you are doing to trigger the bug:
Always towards the end, when I want to check on the current backup job, I get a 502 Bad Gateway because my reverse proxy can no longer reach the web UI. Checking systemctl --user status duplicacy-web confirms that the service is not active. duplicacy_web.log shows hundreds of POST /get_backup_status entries, but I guess that's normal. At the time of the 502 it contains:

2025/06/08 21:51:34 Failed to get the value from the keyring: Unable to open dbus session &{%!s(*dbus.Conn=&{0xc00003e560 0xc0004a7080 true ea6a2dce5914a03daf890e6b684605e6 [:1.1] {{0 0} 0 0 {{} 0} {{} 0}} {0 0} 3 map[0:true] map[] {{0 0} 0 0 {{} 0} {{} 0}} 0xc0004a6fc0 0xc00004e180 false {{0 0} 0 0 {{} 0} {{} 0}} 0xc00043e500 <nil> {0 0}}) org.freedesktop.secrets /org/freedesktop/secrets}: The name org.freedesktop.secrets was not provided by any .service files

2025/06/08 21:51:34 Temporary directory set to /tmp/Duplicacy-Repositories

2025/06/08 21:51:34 Schedule Test next run time: 2025-0615 02:00

2025/06/08 21:51:34 Duplicacy Web Edition 1.8.3 (5A554A)

2025/06/08 21:51:35 Duplicacy CLI 3.2.3

But now that I think about it, those log lines may also just be from after I restarted the service.
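For reference, these are the commands I use to check the service state and its recent journal entries (the unit name duplicacy-web is from my setup; yours may differ):

# Is the user service still active?
systemctl --user status duplicacy-web

# Most recent journal entries for the unit
journalctl --user -u duplicacy-web -e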

Here is the full log of the backup itself:

2025-06-08 12:32:42.150 INFO BACKUP_LIST Listing all chunks

2025-06-08 12:34:59.645 INFO BACKUP_INDEXING Indexing /mnt/nas/PVE_DOCKER

2025-06-08 12:34:59.646 INFO SNAPSHOT_FILTER Parsing filter file /tmp/Duplicacy-Repositories/localhost/2/.duplicacy/filters

2025-06-08 12:34:59.646 INFO SNAPSHOT_FILTER Loaded 4 include/exclude pattern(s)

2025-06-08 13:31:10.130 INFO UPLOAD_KEEPALIVE Skip chunk cache to keep connection alive

2025-06-08 14:01:11.006 INFO UPLOAD_KEEPALIVE Skip chunk cache to keep connection alive

2025-06-08 14:31:12.069 INFO UPLOAD_KEEPALIVE Skip chunk cache to keep connection alive

2025-06-08 15:01:13.168 INFO UPLOAD_KEEPALIVE Skip chunk cache to keep connection alive

2025-06-08 15:31:14.099 INFO UPLOAD_KEEPALIVE Skip chunk cache to keep connection alive

The backup is about 1 TB, and it crashed on both a OneDrive backup and a local-to-local backup.

Please describe what you expect to happen (but doesn’t):
A backup should be able to complete without the Duplicacy service crashing midway every time.

Please describe what actually happens (the wrong behaviour):
See above.

System: Ubuntu Server; Duplicacy installed as a binary download (v3.2.3)

Can anyone please help me?

When you say “crash”, what do you actually mean?

Which process crashed? Do you have a core file? Is there evidence in /var/log/messages or the system journal (grep for duplicacy)? Anything in duplicacy_web.log?

How much RAM is in the server? Roughly how many files are in the repository?

Which interface is duplicacy_web listening on? When you can’t access it, is the process still alive? (See the commands below for one way to check.)
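For example, something along these lines should show whether the process is still alive and listening (assuming ss from iproute2 and pgrep are available):

# List listening TCP sockets belonging to duplicacy processes
ss -tlnp | grep duplicacy

# Show any running duplicacy processes with their full command lines
pgrep -af duplicacy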

By “crash” I mean that the web UI is not running anymore and the backup failed.
/var/log/messages does not exist.

journalctl | grep duplicacy returns this:

Jun 08 10:59:51 hdocker duplicacy_web_linux_x64_1.8.3[290957]: Log directory set to /mnt/enc-nvme/store/Duplicacy
Jun 08 10:59:51 hdocker duplicacy_web_linux_x64_1.8.3[290957]: Duplicacy Web Edition 1.8.3 (5A554A)
Jun 08 10:59:51 hdocker duplicacy_web_linux_x64_1.8.3[290957]: Starting the web server at http://10.27.1.3:3875
Jun 08 17:01:12 hdocker systemd[290950]: duplicacy-web.service: Killing process 376957 (duplicacy_linux) with signal SIGKILL.
Jun 08 17:01:12 hdocker systemd[290950]: duplicacy-web.service: Consumed 1h 31min 59.818s CPU time.
Jun 08 21:51:34 hdocker duplicacy_web_linux_x64_1.8.3[753851]: Log directory set to /mnt/enc-nvme/store/Duplicacy
Jun 08 21:51:34 hdocker duplicacy_web_linux_x64_1.8.3[753851]: Duplicacy Web Edition 1.8.3 (5A554A)
Jun 08 21:51:34 hdocker duplicacy_web_linux_x64_1.8.3[753851]: Starting the web server at http://10.27.1.3:3875

I definitely did not kill that process at that time, so I don’t know what’s going on there…

I already posted the content of duplicacy_web.log above.

The server has 8 GB of RAM and 12 cores. I don’t know exactly how many files are backed up, but there are thousands of small files (about 1 TB in total).
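If it helps, I could count them with something like this (path taken from my backup job; counting may take a while):

# Count regular files under the backed-up directory
find /mnt/nas/PVE_DOCKER -type f | wc -l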

Ah, I think I know what killed it: the Duplicacy service runs under the systemd user instance, and I forgot to enable lingering for that user, so when my SSH session expired, systemd killed the process.
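For anyone finding this later, this is roughly the fix (replace <user> with the account that runs the service; loginctl is part of systemd):

# Keep the user's systemd instance (and its services) running
# after all of that user's login sessions have ended
sudo loginctl enable-linger <user>

# Verify that lingering is now enabled (should print Linger=yes)
loginctl show-user <user> -p Linger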

I will report back if this solution does not work; otherwise I think this can be closed.
