Where is the exclude filter location on Docker?

I’m running the Duplicacy Docker container on Synology (DSM 6.2.4). I edited the “filters” file under the .duplicacy folder at the root of the shared folder “Photos” that I am backing up. I want to exclude:

e:@eaDir
e:#recycle

Yet the backup log shows:

2026-03-29 18:17:56.880 INFO UPLOAD_FILE Uploaded #recycle/happyanniversarwwwy.png (1390796)
2026-03-29 18:17:56.880 INFO UPLOAD_FILE Uploaded #recycle/@eaDir/happyanniversarwwwy.png/SYNOFILE_THUMB_M.png (227895)

If I understand correctly, this is the correct location for the filters file? Or is my filter format incorrect? I see both the +/- and the i:/e: styles.

Thanks!

Are you using the web UI? Then you should specify the filters in the web UI. Or specify the path to a filters file in the web UI, with

@/path/to/actual/filters/file

and put your filters into that file (the path is, of course, inside the Docker container).
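For example (the file name and path here are made up for illustration):

```
# Web UI “Filters” box for the backup (one line):
@/config/photos-filters.txt

# Contents of /config/photos-filters.txt:
-@eaDir/
-#recycle/
```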

For these filters you don’t need regular expressions; you can use the normal syntax, assuming these are folders in the root of the repository:

-@eaDir/
-#recycle/

Using the container WebUI, I discovered that the filters are stored under .../config/filters/localhost/1 for the first backup. This wasn’t clear to me initially as I have a filters file under a .duplicacy folder that resides at the root of the repo (presumably this was created when I initially used the CLI to set up the backup), which I thought was the correct location.

So far, so good? I manually added a comment to the 1 file so I can tell which backup job it is associated with. For now, I’ll leave the .duplicacy folder at the root of the repo alone, as it seems to be ignored by the container.

A follow-up question… where do I execute duplicacy -d -log backup -enum-only to test filters? I tried that from the root of the container, but got bash: duplicacy: command not found. FWIW, I’m trying to exclude all of Synology’s @eaDir directories within every subdirectory that is backed up.

Hope this makes sense. Thanks for your help.

No. Assuming you use the web UI, and assuming this is the location the Duplicacy CLI is launched from by the web UI (you did not specify which container you are using), the data in that location is generated by the web UI and will be overwritten. The correct approach is in my message above.

Are you using the web UI? Then create a dummy backup schedule in the web UI and edit its parameters.

If you are not using the web UI, then run it in the root of your repository.
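If you want to run the CLI by hand anyway, one option is to exec into the container from the host. Everything here is an assumption based on this thread, not a definitive recipe: the container name duplicacy-web, the CLI binary at /config/bin/duplicacy_linux_x64_3.2.5, and the data mounted at /backuproot with a CLI-initialized .duplicacy folder at its root.

```shell
# Sketch only: container name, binary version, and mount point are
# assumptions -- check what your own container actually uses.
docker exec duplicacy-web sh -c \
  'cd /backuproot/Photos && /config/bin/duplicacy_linux_x64_3.2.5 -d -log backup -enum-only'
```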

Then you need:

# exclude the one in the root
-@eaDir
# exclude every one in the folder hierarchy; * also matches /
-*/@eaDir
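As a plain-shell illustration of why the second pattern is needed (this uses find on a throwaway tree, not Duplicacy itself):

```shell
# Build a throwaway tree that mimics Synology's @eaDir clutter
demo=$(mktemp -d)
mkdir -p "$demo/@eaDir" "$demo/2021/@eaDir" "$demo/2021/trip/@eaDir"

# Root-level @eaDir: what plain -@eaDir covers
root_hits=$(find "$demo" -mindepth 1 -maxdepth 1 -type d -name '@eaDir' | wc -l)

# Nested @eaDir dirs at any depth: what -*/@eaDir covers (since * matches /)
nested_hits=$(find "$demo" -mindepth 2 -type d -name '@eaDir' | wc -l)

echo "root: $root_hits, nested: $nested_hits"
rm -rf "$demo"
```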

A better solution is to stop Synology from polluting your data folders in the first place. Unless you use their Photos and media server apps (or otherwise need these folders indexed), you can turn off the indexing service and delete those folders recursively.
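A hedged sketch of that recursive cleanup, demonstrated on a throwaway tree. Point it at the real share (e.g. /volume1/Photos, a typical but assumed path) only after the indexing service is off and you have double-checked the path:

```shell
# Throwaway tree standing in for a real share like /volume1/Photos
share=$(mktemp -d)
mkdir -p "$share/album/@eaDir" "$share/album/pics" "$share/@eaDir"

# Delete every @eaDir directory recursively; -prune stops find from
# descending into directories it is about to delete
find "$share" -type d -name '@eaDir' -prune -exec rm -rf {} +

left=$(find "$share" -type d -name '@eaDir' | wc -l)
pics_ok=$(test -d "$share/album/pics" && echo yes || echo no)
echo "@eaDir dirs remaining: $left (other data intact: $pics_ok)"
```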

Assuming you use the web UI, and assuming this is the location the Duplicacy CLI is launched from by the web UI (you did not specify which container you are using), the data in that location is generated by the web UI and will be overwritten. The correct approach is in my message above.

I thought I explained this, but apparently our terminology is mismatched.

I’m presently using the Duplicacy Docker container run on the Synology NAS, not the CLI (though I did use a script file with Task Scheduler some time ago). So, this:

  duplicacy-web:
    image: saspus/duplicacy-web:mini
    container_name: duplicacy-web

I have no idea where duplicacy is launched from by the Docker container, though I see duplicacy_linux_x64_3.2.5 under .../config/bin. It seems to me that the container doesn’t bother with the existing .duplicacy directory at the repo root; I renamed it to .duplicacySAVE and the container still does its thing.

OK, so you are using the web UI via the saspus/duplicacy-web container.

Duplicacy-web creates temporary scratch repositories, pointing to your actual data folders, under the /cache folder:

         --volume ~/Library/Caches/Duplicacy:/cache      \

and launches the CLI there.

There are a number of temporary repositories: some are used for backup, some for pruning, etc. But these are irrelevant implementation details.

The point is: you should not be messing with them. Data under /cache, including the .duplicacy folders, is disposable: it’s a scratch location, subject to deletion and overwriting at any time.

If you use the web UI, all configuration must be made in the web UI.

That’s all helpful. I believe I have everything sorted. Since I’m on DSM 6, I’m hoping your Docker image will continue to support my ancient Docker version, as the Synology package seems limited to DSM 7.

Thanks.

You don’t have to suffer through Synology incompetence: Duplicacy Web on Synology Diskstation without Docker | Trinkets, Odds, and Ends

Thanks for the link. It looks quite interesting, so I’ll need to mull it over. In the meantime, I have a 36-hour B2 upload I’m waiting on (my upstream is an amazing 1.5 MB/s).

Oh wow. I remember when I had to do a 1 TB upload over a 12-megabit-per-second connection :slight_smile:

Do you have SQM enabled on your gateway? If you don’t, consider it: it’s almost magic, removing the impact of a saturated upstream on other internet activities like browsing and video calls.