Please describe what you are doing to trigger the bug:
Following the advice in Cannot "check -chunks" -- fatal error: concurrent map read and map write - #8 by sevimo, I added -persist to my check command. This gets me a little farther along, but after a couple of days the check eventually stops making progress as well.
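For reference, the command I'm running is roughly the following (assuming the same check -chunks invocation as in the linked topic):
duplicacy check -chunks -persist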
Please describe what you expect to happen (but doesn’t):
Duplicacy continues to check chunks, making progress towards the goal.
Please describe what actually happens (the wrong behaviour):
Duplicacy eventually gets stuck, and the output is entirely “socket: too many open files” like this:
:53: socket: too many open files; retrying after 28.83 seconds
:53: socket: too many open files; retrying after 28.83 seconds
:53: socket: too many open files; retrying after 28.83 seconds
:53: socket: too many open files; retrying after 28.83 seconds
:53: socket: too many open files; retrying after 28.83 seconds
:53: socket: too many open files; retrying after 43.25 seconds
:53: socket: too many open files; retrying after 43.25 seconds
Full output: Dropbox - too-many-open-files.txt
At that point, lsof | grep duplicacy | wc -l
reports 10271 entries.
It looks like Duplicacy is leaking sockets or file descriptors. I noticed that Dropbox sends back a lot of 429 (rate limit) responses, which cause Duplicacy to back off and retry. Perhaps sockets or file handles are not being closed properly in this or some other error path?
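To illustrate the kind of leak I have in mind, here is a minimal, generic Go sketch. This is not Duplicacy's actual code; the function name and URL are made up. It shows a retry loop that strands one socket on every 429 because the response body is never closed:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// Hypothetical retry loop, for illustration only. When the server replies
// 429, the response body is never closed, so each retry leaves a socket
// open until the process hits the "too many open files" limit.
func fetchWithRetry(client *http.Client, url string) ([]byte, error) {
	for attempt := 1; attempt <= 10; attempt++ {
		resp, err := client.Get(url)
		if err != nil {
			return nil, err
		}
		if resp.StatusCode == http.StatusTooManyRequests {
			// BUG: resp.Body.Close() is missing here, so the connection
			// backing this response is never released.
			time.Sleep(time.Duration(attempt) * time.Second)
			continue
		}
		defer resp.Body.Close()
		return io.ReadAll(resp.Body)
	}
	return nil, fmt.Errorf("still rate-limited after 10 attempts: %s", url)
}

func main() {
	client := &http.Client{Timeout: 30 * time.Second}
	if _, err := fetchWithRetry(client, "https://example.com/some-chunk"); err != nil {
		fmt.Println(err)
	}
}

If something similar happens on the 429 path in the Dropbox backend, that would match what I'm seeing: the file descriptor count climbing until every new connection fails.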