OpenDrive with WebDAV backend

Hi,

I am using OpenDrive with the WebDAV backend (DUPLICACY_STORAGE="webdav://***********@gmail.com@webdav.opendrive.com/Backups").

The backup works well for a while but gets slower and slower (9 MiB/sec -> 3 MiB/sec) and eventually fails with several errors like these (and the duplicacy process stops):

URL request 'PUT chunks/33/981a99d02d07b28002ba332e3ac0d82e8ee9cfc9ce1bbaaa65723d44286d24' returned status code 500
URL request 'PUT chunks/33/981a99d02d07b28002ba332e3ac0d82e8ee9cfc9ce1bbaaa65723d44286d24' returned status code 504

I would like to upload 3.9 TB of data (~400,000 files) with 10 threads, a 32 MB average chunk size (min/max at the defaults of avg/4 and avg*4), encrypted. Restarting an incomplete backup works (it takes a few minutes to continue), but after several hours of activity the process fails with similar errors.

I contacted OpenDrive support too; they asked whether there is a way to fine-tune the WebDAV parameters, e.g. increase the timeout value.

Is it possible to set parameters for the duplicacy WebDAV backend? I checked the duplicacy docs and found nothing related.

I am also wondering how connections are handled with multiple threads. Is the connection recreated periodically? On each upload? Or only once per thread?

I was also thinking of mounting the WebDAV share as a local FUSE filesystem and doing a local-to-local backup instead of local-to-WebDAV. Would that be more reliable? I might have more control over the WebDAV parameters that way.

P.S. I am running duplicacy on a QNAP NAS (Intel x64, Linux).

500 and 504 are "internal server error" and "gateway timeout". Those are server-side issues: the WebDAV interface on OpenDrive is in beta, and even when it isn't, it is not the most performant one. Plus, OpenDrive has an anti-incentive to make large transfers fast. You have already paid, so all their incentives are to keep you from using bandwidth and storage.

Fine-tuning and throttling the upload would be counterproductive: you would be backing off and throttling to match a dying service.

I don't know; I would not trust any important data to a WebDAV backend, much less one still in beta on an "unlimited" service.


OpenDrive's "unlimited" plan offers 10 TB of storage for personal use without any official limitations, and 20 TB for business users, at the same price as Google Drive's 2 TB plan. I checked other options too: I was able to reach 32 MiB/sec upload speed with duplicacy on Google Drive, but they have no reasonable plan for 4-10 TB of data. Previously I used HubiC (I still have a 10 TB account), but its up/down speed with duplicacy is about a tenth of OpenDrive's. I also tried https://dashboard.blomp.com/, but their Swift API is not compatible with the duplicacy backend (or duplicacy is not compatible with it; I was not able to get it working).

So I have tried many options offering 10 TB at a reasonable price. I have been using cloud backups for 4 years with no incidents so far. For me it is just an additional safety layer, and I don't want to spend a huge amount of money on it.

My experience is that during peak hours in the USA the upload speed drops to a third or worse. During late night/early morning/working hours the upload speed is better. The same is true for the probability of 500/504 errors. I restarted duplicacy around 3:30 MST and the speed is 6.5 MiB/sec so far; it was 1.7 MiB/sec around 20:00 MST, with many more 500/504 errors. So I suspect it is related to server load/user activity. I will clarify with OpenDrive support.

Because of the above, a configurable timeout value might still be useful during busy hours. Periodically recreating the WebDAV connection on the backup threads might also help. I suspect this is why they asked about the possibility of increasing the timeout or configuring reconnect parameters in duplicacy.
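To make that concrete, here is a minimal Go sketch (plain net/http, not duplicacy code) of what such knobs would map to in the language duplicacy is written in; the values are arbitrary assumptions:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// Illustrative only: duplicacy does not expose these knobs for WebDAV today.
// The "timeout" and "reconnect" behaviour OpenDrive asked about maps onto
// standard http.Client settings like these.
func newWebDAVClient() *http.Client {
	return &http.Client{
		// Overall deadline for a single request, including the response body.
		Timeout: 5 * time.Minute, // hypothetical value
		Transport: &http.Transport{
			// Close idle keep-alive connections after this long, forcing a
			// fresh connection on the next request (a cheap "reconnect").
			IdleConnTimeout: 90 * time.Second,
			// With 10 backup threads, allow one pooled connection per thread.
			MaxIdleConnsPerHost: 10,
		},
	}
}

func main() {
	fmt.Println("request timeout:", newWebDAVClient().Timeout)
}
```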

My question is still valid: is there any option/switch/parameter/environment variable for duplicacy to control the WebDAV connection parameters (timeout, retry, etc.)?

The number of tries is hardcoded to 8:
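(The snippet below is a simplified sketch of the pattern rather than the literal source; the actual retry loop lives in the WebDAV backend, duplicacy_webdavstorage.go, and its details differ.)

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// The retry limit is baked in as a literal, not configurable at runtime.
const maxTries = 8

func sendWithRetry(send func() error) error {
	for tries := 1; ; tries++ {
		err := send()
		if err == nil {
			return nil
		}
		if tries >= maxTries {
			return fmt.Errorf("giving up after %d tries: %w", maxTries, err)
		}
		// Exponential, randomized backoff before the next attempt.
		time.Sleep(time.Duration(rand.Intn(1<<tries)+1) * time.Second)
	}
}

func main() {
	// Demo: fail twice with a fake 500, then succeed on the third call.
	calls := 0
	err := sendWithRetry(func() error {
		calls++
		if calls < 3 {
			return fmt.Errorf("status code 500")
		}
		return nil
	})
	fmt.Println("calls:", calls, "err:", err)
}
```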

You can modify this number and build your own CLI version by following the instructions here: Installation · gilbertchen/duplicacy Wiki · GitHub

Thanks for pointing out the code.
Currently the backup is running at a stable 5.6-5.9 MiB/sec, no errors so far.

I suspect OpenDrive support adjusted some values to make this happen.

Anyway, if I experience issues again, I will replace the hardcoded 8 with an option to override it via a parameter (env/command line); this might be useful for others too in case of a less reliable network/backend server.
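For reference, a sketch of how small that override could be; DUPLICACY_WEBDAV_RETRIES is a name I made up here for illustration, not an option duplicacy actually has:

```go
package main

import (
	"fmt"
	"os"
	"strconv"
)

// Hypothetical patch: let an environment variable override the hardcoded 8.
func webDAVMaxTries() int {
	if s := os.Getenv("DUPLICACY_WEBDAV_RETRIES"); s != "" {
		if n, err := strconv.Atoi(s); err == nil && n > 0 {
			return n
		}
	}
	return 8 // current hardcoded default
}

func main() {
	fmt.Println("max tries:", webDAVMaxTries())
}
```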

Currently my workaround could be to enclose the duplicacy invocation in a loop and restart it when the exit code is >0 (assuming the exit code is >0 when the backup does not finish successfully).
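A minimal Go version of that wrapper loop might look like this; it assumes it is run from the repository directory so duplicacy finds its preferences, and the attempt cap and pause are arbitrary choices:

```go
package main

import (
	"log"
	"os"
	"os/exec"
	"time"
)

// Rerun "duplicacy backup" until it exits with code 0. -threads and -stats
// are real backup options; everything else here is an arbitrary choice.
func main() {
	for attempt := 1; attempt <= 20; attempt++ {
		cmd := exec.Command("duplicacy", "backup", "-threads", "10", "-stats")
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err == nil {
			log.Println("backup finished successfully")
			return
		}
		log.Printf("attempt %d failed; restarting in 1 minute", attempt)
		time.Sleep(time.Minute)
	}
	log.Fatal("giving up after 20 attempts")
}
```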

Update:

Several days ago I also managed to complete a full check of my remote backup (-chunks only, ~4 TB; all was OK). I had further discussions with OpenDrive support, and they finally fixed their multi-threaded WebDAV implementation. Now it is stable, with no more HTTP errors during uploads/downloads.

I also tested a partial file restore from the backup, and that was fast and worked too.

Since then I have been performing daily incremental backups (manually for now), and the speed is around 6-7 MiB/sec.

My next plan is to automate the process (based on the forum's recommendations on this topic).
