I am using OpenDrive with the WebDAV backend (DUPLICACY_STORAGE="webdav://***********@email@example.com/Backups").
The backup works well for a while, but it gets slower and slower (9 MiB/s -> 3 MiB/s) and eventually fails with several errors like these (after which the duplicacy process stops):
URL request 'PUT chunks/33/981a99d02d07b28002ba332e3ac0d82e8ee9cfc9ce1bbaaa65723d44286d24' returned status code 500
URL request 'PUT chunks/33/981a99d02d07b28002ba332e3ac0d82e8ee9cfc9ce1bbaaa65723d44286d24' returned status code 504
I would like to upload 3.9 TB of data (~400,000 files), using 10 threads, a 32 MB average chunk size (min/max at the defaults of /4 and *4), encrypted. Restarting the incomplete backup works (it takes a few minutes to resume), but after several hours of activity the process fails again with similar errors.
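Since restarting resumes the backup within minutes, for now I work around the failures with a small restart loop. A sketch of what I mean (the `retry` helper is my own, and the commented-out duplicacy invocation just reflects the parameters above; it is not a duplicacy feature):

```shell
#!/bin/sh
# retry: re-run a command until it exits 0, up to a maximum number of attempts,
# pausing briefly between attempts so the interrupted backup can resume.
retry() {
    max=$1; shift
    n=1
    until "$@"; do
        if [ "$n" -ge "$max" ]; then
            echo "giving up after $n attempts" >&2
            return 1
        fi
        n=$((n + 1))
        sleep 1   # back off before re-running the (incomplete) backup
    done
}

# Example usage with my actual backup command:
# retry 10 duplicacy backup -threads 10 -stats
```

This is obviously a workaround, not a fix; I would much rather tune the timeout so the uploads stop failing in the first place.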
I contacted OpenDrive support as well; they asked whether it is possible to fine-tune the WebDAV parameters, e.g. increase the timeout value.
Is it possible to set parameters for the duplicacy WebDAV backend? I checked the duplicacy docs and found nothing related.
I am also wondering how connections are handled with multiple threads. Is the connection recreated periodically? On each upload? Or created only once per thread?
I was also thinking of mounting the WebDAV share as a local FUSE filesystem and doing a local-to-local backup instead of local-to-WebDAV. Would that be more reliable? I might have more control over the WebDAV parameters that way.
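Roughly what I have in mind, assuming davfs2 can be installed on the QNAP (untested there; the mount point and WebDAV host below are placeholders, not my real credentials or endpoint):

```shell
# Mount the WebDAV share as a local filesystem via davfs2.
sudo mkdir -p /mnt/opendrive
sudo mount -t davfs https://webdav.example.com/ /mnt/opendrive

# davfs2 exposes timeout/retry tuning in /etc/davfs2/davfs2.conf,
# e.g. (option names per the davfs2.conf man page):
#   connect_timeout 30
#   read_timeout    600
#   retry           60

# Then duplicacy would treat it as plain local storage, e.g.:
# duplicacy init -e my-repo /mnt/opendrive/Backups
```

That davfs2.conf layer is exactly the kind of control over WebDAV behavior I seem to be missing with duplicacy's built-in backend, which is why I am considering this route.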
P.S. I am running duplicacy on a QNAP (Intel x64 architecture, Linux).