Duplicacy Web Backblaze B2 backup causes thousands of DNS queries, resulting in rate limiting and failures with Cloudflare.

Please describe what you are doing to trigger the bug:
I am using the saspus/duplicacy-web container to back up some local files to Backblaze B2 storage. While a backup runs, DNS resolvers such as Cloudflare and my local router get thousands of requests from Duplicacy, resulting in rate limiting. To be clear, Cloudflare responds with the right IP, so the container does not fail or have any issues. When the rate limiting occurs, Duplicacy doesn't seem to care and just continues backing up.

When I look at the container logs, they are just spammed with [::1]:43040 POST /get_backup_status. A possible clue here is that my network does not support IPv6, so the container never gets an IPv6 record.

The DNS records it appears to be after are as follows:
backblazeb2.com,
backblaze.com

These have various sub-domains, such as API domains and file storage domains. Queries for them always get a response with the correct IPv4 address, but get a REFUSED response for IPv6 (AAAA) records.

Please describe what you expect to happen (but doesn’t):
I expect the container or app to cache DNS results instead of re-asking every second, which puts excessive load on Cloudflare or local DNS providers and eventually triggers rate limiting.

Please describe what actually happens (the wrong behaviour):
The app or container logs /get_backup_status excessively when backing up to a Backblaze storage target. This seems to happen at the same time as the DNS requests, so I suspect each time this is logged it causes a new DNS lookup.

A note on how I found this: initially I saw rate-limiting errors on my local DNS providers, then I switched that container over to public resolvers and observed strange failures with Cloudflare, realizing I must be getting rate limited by them as well. Either way, I swapped it back to local providers and set a lower client rate limit for this container in particular.

Thanks for your time.

Anything on this? This is a problem that can cause rate limiting for DNS servers; can it be fixed, please?

This is your browser polling the Duplicacy web UI for progress over loopback (::1).

It has nothing to do with DNS, Cloudflare, or Backblaze.

Duplicacy is written in Go. Go does not implement DNS caching. Each lookup is handed to the OS resolver. Whether results are cached depends entirely on whether there is a caching resolver in front (systemd-resolved, unbound, dnsmasq, etc.).

Many container setups have no DNS cache at all. In that case, repeated lookups are expected behavior.
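
If you want to check what your setup actually does, you can look at the resolver configured inside the container and measure query volume from the host. A rough sketch (the container name duplicacy, the interface eth0, and the 60-second window are just assumptions for illustration):

# which nameserver the container is configured to use
docker exec duplicacy cat /etc/resolv.conf

# count DNS queries leaving the host over one minute while a backup runs
sudo timeout 60 tcpdump -i eth0 -nn 'udp port 53' | wc -l

If /etc/resolv.conf points straight at an upstream resolver and the count is in the thousands, there is simply nothing caching in that path.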

This makes sense. If IPv6 is unavailable and the resolver retries AAAA lookups without negative caching, query volume will spike.
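
A quick way to see that pattern is to compare A and AAAA lookups from the host against whatever resolver the container uses; api.backblazeb2.com here is just an example name:

dig api.backblazeb2.com A       # should return an IPv4 address, status NOERROR
dig api.backblazeb2.com AAAA    # check the status line for REFUSED or an empty answer

If every AAAA query comes back REFUSED and nothing caches that negative result, each connection attempt repeats the same pair of lookups.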

B2 uploads require calling b2_get_upload_url, which returns an upload endpoint used for subsequent transfers. DNS resolution of that endpoint occurs according to the behavior of the resolver stack in use. Duplicacy does not implement or control DNS caching.

Adding a caching resolver in front of the duplicacy container would be the correct mitigation.

Hello,

Thanks for your reply! I presume, then, that the saspus/duplicacy-web Docker image does not include any local DNS caching inside the container? This is what I am using in combination with my license.

In that case I will have to set up a resolver cache for it. I did have Pi-hole in front of this container at one stage, but it sent so many requests that it actually caused performance issues on the Pi-hole, which is why I thought it was a bug.
Cheers.

No. The idea is one application – one container.

This is strange. Pi-hole is a caching resolver, and it should have no problem responding repeatedly to the same requests. That is literally its one job. I would look into why it struggled in the first place.

Since you use containers, you can run a caching resolver, like unbound, in front of duplicacy.

Below is an example compose file, but don't quote me on it; I cobbled it together from my other services and did not test it.

version: "3.8"

services:
  unbound:
    image: klutchell/unbound:latest
    container_name: unbound
    restart: unless-stopped
    networks:
      dnsnet:
        ipv4_address: 10.20.0.2
    volumes:
      - ./unbound.conf:/etc/unbound/unbound.conf:ro

  duplicacy:
    image: saspus/duplicacy-web
    container_name: duplicacy
    restart: unless-stopped
    hostname: duplicacy
    environment:
      - TZ=America/Los_Angeles
      - USR_ID=1000
      - GRP_ID=1000
    networks:
      - dnsnet
    dns:
      - 10.20.0.2
    depends_on:
      - unbound
    ports:
      - "3875:3875"
    volumes:
      - ./duplicacy/config:/config
      - ./duplicacy/cache:/cache
      - ./duplicacy/logs:/logs
      - /path/to/backup/source:/backuproot:ro

networks:
  dnsnet:
    driver: bridge
    ipam:
      config:
        - subnet: 10.20.0.0/24

and ./unbound.conf should contain at least

server:
  interface: 0.0.0.0
  access-control: 10.20.0.0/24 allow

Then you can tweak the rest of the caching parameters (memory usage, threading, etc.) per the unbound documentation.
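
For example, something like this added to the same server: block (untested, values are only illustrative):

  msg-cache-size: 16m
  rrset-cache-size: 32m
  cache-min-ttl: 60              # don't let very short TTLs defeat the cache
  cache-max-negative-ttl: 3600   # also keep "no AAAA record" answers around
  prefetch: yes                  # refresh popular entries before they expire
  num-threads: 1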

But before all that, try pointing DNS to the caching resolver hosted on your gateway. Likely that would be enough.
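
If you go that route in compose, it's just the dns: setting on the duplicacy service; the 192.168.1.1 address below is a placeholder for your gateway's resolver:

services:
  duplicacy:
    image: saspus/duplicacy-web
    dns:
      - 192.168.1.1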