Unraid Docker Backblaze B2 sporadically "No route to host" error

I have had occasional backup failures for a few weeks now. They occur at varying times and are not tied to any particular backup source. The error message is as follows:

Running backup command from /cache/localhost/1 to back up /backuproot/mtex
Options: [-log backup -storage Backblaze-B2 -threads 4 -stats]
2025-05-01 06:30:01.733 INFO REPOSITORY_SET Repository set to /backuproot/mtex
2025-05-01 06:30:01.733 INFO STORAGE_SET Storage set to b2://Fileserver-B2-backup
2025-05-01 06:30:01.760 ERROR STORAGE_CREATE Failed to load the Backblaze B2 storage at b2://Fileserver-B2-backup: Post "https://api.backblazeb2.com/b2api/v1/b2_authorize_account": dial tcp: lookup api.backblazeb2.com on 1.0.0.1:53: read udp 172.17.0.2:46379->1.0.0.1:53: read: no route to host
Failed to load the Backblaze B2 storage at b2://Fileserver-B2-backup: Post "https://api.backblazeb2.com/b2api/v1/b2_authorize_account": dial tcp: lookup api.backblazeb2.com on 1.0.0.1:53: read udp 172.17.0.2:46379->1.0.0.1:53: read: no route to host

Here is the overview from this week:

[Screenshot: backup job overview for the week of 2025-05-02]

Duplicacy runs in a Docker container (saspus/duplicacy-web) under Unraid 7.0.1.

I have already tried the following:

DNS server of the host changed from Cloudflare to Google to Quad9 and back again
2nd and 3rd DNS servers specified
Docker container network types tried: Bridge, Host, Custom
GODEBUG variable switched between netdns=cgo and netdns=go (a rough docker run equivalent of these settings is sketched below)
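
In docker run terms, these settings correspond roughly to the flags below (a sketch only; the container name is a placeholder, and volumes, ports, and the rest of the Unraid template are omitted):

# Rough docker run equivalent of the settings above (sketch only;
# container name is a placeholder, volumes/ports/other options omitted).
# --dns sets the 1st/2nd/3rd resolvers, --network selects bridge/host/custom,
# and GODEBUG=netdns=go (or =cgo) switches the Go resolver implementation.
docker run -d --name duplicacy-web \
  --dns 1.1.1.1 --dns 8.8.8.8 --dns 9.9.9.9 \
  --network bridge \
  -e GODEBUG=netdns=go \
  saspus/duplicacy-web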

I can resolve the address api.backblazeb2.com within the container without any problems.
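
For example, a check along these lines works fine (the container name duplicacy-web is a placeholder; use whichever lookup tool the image actually ships):

# Resolve the B2 API host from inside the container:
docker exec duplicacy-web nslookup api.backblazeb2.com
# Or force a lookup with a single ping if nslookup is not available:
docker exec duplicacy-web ping -c 1 api.backblazeb2.com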

What else can I try?

Looks like intermittent DNS failures, or container networking mishaps.

Set your DNS resolver in the container to 1.1.1.1
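
You can double-check which resolver configuration the container actually sees with something like this (the container name duplicacy-web is a placeholder):

# Print the resolver configuration inside the container:
docker exec duplicacy-web cat /etc/resolv.conf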

Looks like it’s already set?!

# Generated by Docker Engine.
# This file can be edited; Docker Engine will not make further changes once it
# has been modified.

nameserver 1.1.1.1
nameserver 1.0.0.1

# Based on host file: '/etc/resolv.conf' (legacy)
# Overrides: []

Oh, I misread 1.0.0.1 as 10.0.0.1, which would be in your LAN. My bad.

Yes, it seems it’s configured correctly. Then it’s Docker bridge/networking shenanigans.

Is the issue frequency the same in bridge vs host networking?

Pretty much, yes. It really only ever varies in terms of time and Docker port.

I would start pinging 1.0.0.1 in the container for a few days (or until the first failure) and see if you can correlate the failures with anything happening on or around the machine.

Also start a ping from the host, to see whether this is a container connectivity issue or a host connectivity issue.

I.e., when pings from the container fail, do you also see pings from the host failing?

And also ping your gateway from both. This will rule out gateway issues.

Any suggestions on how best to implement this?

Assuming Unraid ships with tmux and an SSH server (I don’t use Unraid):

SSH into Unraid and run tmux. Create four panes; in two of them open a shell into the container. Then run ping in each pair of panes, to 1.0.0.1 and to your gateway respectively, perhaps via tee so you can review the files later. E.g.

ping 1.0.0.1 | tee /tmp/from-host-to-cloudflare.txt

You can also open four SSH sessions, but if the sessions disconnect the ping processes will die. With tmux they will keep running.
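
For concreteness, the four panes could look roughly like this (192.168.1.1 stands in for your gateway and duplicacy-web for the container name, both placeholders; docker exec is used here so that tee writes the logs to the host’s /tmp, and it assumes ping is available inside the container):

# Pane 1: host -> Cloudflare
ping 1.0.0.1 | tee /tmp/from-host-to-cloudflare.txt
# Pane 2: host -> gateway (replace with your gateway IP)
ping 192.168.1.1 | tee /tmp/from-host-to-gateway.txt
# Pane 3: container -> Cloudflare
docker exec duplicacy-web ping 1.0.0.1 | tee /tmp/from-container-to-cloudflare.txt
# Pane 4: container -> gateway
docker exec duplicacy-web ping 192.168.1.1 | tee /tmp/from-container-to-gateway.txt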

Alternatively, you can set up Duplicacy directly on Unraid and avoid dealing with Docker altogether. It’s probably not worth your time to debug this only to discover that Docker’s flakiness was indeed the culprit.
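
If you go that route, the CLI edition needs very little (a sketch only; the repository path and snapshot id are taken from the container log above and will map differently on the host, and scheduling via e.g. the User Scripts plugin is an assumption):

# One-time setup: point the repository at the existing B2 storage
# (it will prompt for the B2 credentials, and the storage password if encrypted):
cd /backuproot/mtex
duplicacy init mtex b2://Fileserver-B2-backup
# Recurring backup, scheduled e.g. via the User Scripts plugin:
duplicacy backup -stats -threads 4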

Many thanks for the quick and detailed answer. As you suggested, I will first monitor via ping when and where the connection drops; maybe it correlates with something. Let’s see. If none of this helps, I will have to run Duplicacy directly on Unraid, for better or worse.