How to limit logs for Duplicacy web?

I have been using Duplicacy web in a Docker container (saspus/duplicacy-web) for a few years now and it has worked really well. But one thing I find quite weird is that I have a rather large log folder. It isn't a few large files but rather 47,455 smaller ones (max file size is 10 MB, oldest file from 2020-08-12).

Are log files saved indefinitely? Can I safely clean up (via a cron job) and remove older logs without causing issues for the web client?

Log files older than 30 days should be automatically removed. If you have jobs running in parallel, then those old log files were likely the result of this bug: Old log files which were produced by parallel jobs don't get cleaned out by Duplicacy-web (MacOS Big Sur)

Thank you for the reply.

Yes, I do run jobs in parallel, and I'm using version 1.5.0. So, in other words, it seems to be the exact same issue.

So I guess I need to wait until version 1.6.0 is available as a Docker image for saspus/duplicacy-web. Thanks for the information!

Version 1.6.0 does not seem to have solved my problem with the 5 GB+ log folder, even 10 days after upgrading the Docker container.

What can I do to fix this?

I’m curious, why are your logs so huge? Maybe the -d flag was left on?

Thank you for your reply.

As far as I'm aware, I haven't enabled debug mode or anything similar. I'm using your Docker image with an Unraid community Docker template (unRAID-CA-templates/duplicacy.xml at master · selfhosters/unRAID-CA-templates · GitHub).

Interesting. If so many log files are genuinely being generated, perhaps we should look into integrating logrotate into the container.

I'm wondering whether duplicacy_web is compatible with that; specifically, whether there is a way to tell it to let go of the file, or whether copytruncate mode will suffice.

This would be a universal solution. In fact, I think it's perhaps best implemented not in the container but system-wide, in one central place that manages system logs.

Well, I have no interest in storing almost 2 years of logs. The standard behavior of keeping 30 days of logs, if it worked correctly, would be enough for me. If needed, I could add a cron job to sort that out (e.g. find /path/to/logs/* -mtime +30 -exec rm {} \;), but that's unnecessary if the web client is supposed to do it anyway.
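For illustration, a crontab entry along these lines is what I had in mind; the schedule and /path/to/logs are just placeholders for wherever the container's log folder is mapped on the host:

# Placeholder crontab entry: every day at 03:30, delete job logs older than 30 days.
# /path/to/logs is not a real path; point it at the host directory mapped to the container's log folder.
30 3 * * * find /path/to/logs/ -type f -mtime +30 -delete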

This won’t work. You need logrotate, potentially with copytruncate mode, if duplicacy does not let go of a log file on HUP.
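To sketch what I mean (the /config/logs path is just a guess, adjust it to wherever the main log actually lives), a logrotate rule with copytruncate could look roughly like this:

# Hypothetical logrotate rule; the path is an assumption, not the container's actual layout.
/config/logs/duplicacy_web.log {
    daily
    rotate 30
    missingok
    notifempty
    compress
    copytruncate    # copy then truncate in place, so duplicacy_web can keep its file handle open
}

copytruncate avoids having to signal the process to reopen its log file, at the cost of possibly losing a few lines written during the copy.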

The main log file, duplicacy_web.log, rotates itself by default. What @36017c54763338ab10ea wanted to get rid of are the individual log files created by running jobs. The new 1.6.2 version doesn't check for log files that were already leaked in previous versions by parallel jobs, so you'll have to delete those manually.


After a week of testing, it seems to be working. The oldest log is from April 20, i.e. about a month old.

I can also confirm that find /path/to/logs/* -mtime +30 -exec rm {} \; worked to manually clean up the old logs.

Thank you for your help!

This topic was automatically closed 10 days after the last reply. New replies are no longer allowed.