WebDAV: Storage path does not exist

I’m testing local network storage options (on an odroid HC2) with Duplicacy Web.

SFTP was slow (< 30 MB/s) because the CPU’s limited, so I’m trying WebDAV.

I can’t create the new storage through the browser (because it defaults to HTTPS, which requires a valid certificate), so I edited the existing SFTP storage manually in .duplicacy/preferences per this post, doing my best to infer the necessary changes:

From this:

{
    "name": "Test",
    "url": "sftp://duplicacy@10.0.1.100/duplicacy",
    "encrypted": false,
    "ras_encrypted": false,
    "erasure_coding": "5:2",
    "credentials": {
        "ssh_password": …

to this:

{
    "name": "Test",
    "url": "webdav-http://duplicacy@10.0.1.100/duplicacy/",
    "encrypted": false,
    "ras_encrypted": false,
    "erasure_coding": "5:2",
    "credentials": {
        "webdav_password": …

But it doesn’t seem to work. Here’s the log file printout:

Running copy command from /cache/localhost/all
Options: [-log -verbose -d copy -from Local -to Test -id core-backup]
2021-08-05 13:10:14.476 INFO STORAGE_SET Source storage set to /usb-hdd
2021-08-05 13:10:14.476 DEBUG STORAGE_NESTING Chunk read levels: [1], write level: 1
2021-08-05 13:10:14.477 INFO CONFIG_INFO Compression level: 100
2021-08-05 13:10:14.478 INFO CONFIG_INFO Average chunk size: 4194304
2021-08-05 13:10:14.478 INFO CONFIG_INFO Maximum chunk size: 16777216
2021-08-05 13:10:14.478 INFO CONFIG_INFO Minimum chunk size: 1048576
2021-08-05 13:10:14.478 INFO CONFIG_INFO Chunk seed: 6475…
2021-08-05 13:10:14.478 TRACE CONFIG_INFO Hash key: 6475…
2021-08-05 13:10:14.478 TRACE CONFIG_INFO ID key: 6475…
2021-08-05 13:10:14.478 TRACE CONFIG_INFO Data shards: 8, parity shards: 3
2021-08-05 13:10:14.478 INFO STORAGE_SET Destination storage set to webdav-http://duplicacy@10.0.1.100/duplicacy/
2021-08-05 13:10:14.478 DEBUG PASSWORD_ENV_VAR Reading the environment variable DUPLICACY_TEST_WEBDAV_PASSWORD
2021-08-05 13:10:14.481 ERROR STORAGE_CREATE Failed to load the WebDAV storage at webdav-http://duplicacy@10.0.1.100/duplicacy/: Storage path duplicacy/ does not exist

I tried adding/removing the trailing slash and adding/removing the directory; all produce the “Storage path does not exist” error.

I can connect and write to the WebDAV server with macOS Finder as the same user, so I don’t think the problem is on the server end.
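
In case it helps with debugging, this is the kind of command-line probe I can run against the share (a sketch; host, path and user are just my setup, and PROPFIND is the listing request WebDAV clients issue):

# List the top level of the share; -i shows the HTTP status line.
curl -i -u duplicacy -X PROPFIND -H "Depth: 1" http://10.0.1.100/duplicacy/
# A 207 Multi-Status reply with an XML listing means the path exists and is visible
# to this account; a 404 would point at the server configuration instead.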

Any suggestions?

Without answering your question directly: I don’t think it’s worth pursuing, because WebDAV has huge overhead and I don’t think it will be faster than SFTP.

Consider SMB instead. Or better yet, NFS. That would be as performant as it gets.

However, storage on an Odroid? That won’t end well: the reliability and performance will be horrendous, and I don’t think even erasure coding will help here.

I’d like to use SMB or NFS, but the web version doesn’t offer a mechanism to mount them. I could keep them permanently mounted on the server where Duplicacy runs, but that defeats the purpose of network isolation (e.g. if the server gets infected with an encrypting virus, it would encrypt the backups as well).

Re: performance, I can get > 80 MB/s (unencrypted) from the Odroid.

Re: reliability, I have 2 copy-compatible storages, local and remote. I figure if either fails I’ll replace it with the other. Or is your concern silent corruption?

You can use pre-backup and post-backup scripts to do that.
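
As a sketch (the exact scripts directory may differ between the CLI and the web edition, and the share details are placeholders), the idea is two tiny executables next to the repository:

# .duplicacy/scripts/pre-backup: hypothetical, mounts the share only for the backup
#!/bin/sh
mount -t nfs 10.0.1.100:/export/duplicacy /mnt/backup || exit 1

# .duplicacy/scripts/post-backup: unmounts it again afterwards
#!/bin/sh
umount /mnt/backup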

I don’t see a difference here between an SFTP connection and WebDAV. If Duplicacy can manipulate files, so can an intruder running on the same machine under the same user; yes, it’s slightly more involved with keychains and whatnot, but not impossible.

The correct solution is periodic filesystem snapshots on your target.

That is sustained data-transfer performance, which is pretty much irrelevant here. Check small-transaction performance instead; there is a huge fixed per-transaction overhead.

And then you worry about security, yet do a plaintext handshake with the server?

Mostly bad blocks developing, with or without CRC errors. You would need to run duplicacy check -chunks periodically (a huge performance hit), and when you do discover a bad chunk, what’s your plan? You can’t just copy the chunk from the other storage; it must be bit-identical for that to work. Essentially you are emulating BTRFS or ZFS RAID here using sap and twigs. Not worth your effort.
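
To make the cost concrete, that periodic verification would be something like this (schedule, repository path and storage name are placeholders):

# Hypothetical cron entry: full chunk verification every Sunday at 03:00.
# -chunks downloads and verifies every chunk, which is where the performance hit comes from.
0 3 * * 0  cd /path/to/repository && duplicacy check -chunks -storage Test >> /var/log/duplicacy-check.log 2>&1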

Either get proper network-attached storage with BTRFS or ZFS and periodic scrubs, or save the money and back up to the cloud.

For reference, I just ran a client-to-Odroid test over Ethernet:

SFTP: ~25 MB/s avg
WebDAV: ~50 MB/s avg

It seems the transfer is CPU-bound (restricting SSH to weaker ciphers didn’t buy much).

I appreciate the general advice; there are some issues, like the web version not supporting pre- and post-backup scripts AFAIK. But for now, do you see any obvious problem in my WebDAV block above?

You are testing sustained throughput. This is not what affects performance.

Do this: mount the SFTP storage as a virtual drive (e.g. via rclone mount), then mount the WebDAV one, and run a disk performance test with AmorphousDiskMark on each. (I did; WebDAV was not even in the same ballpark.)
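
Roughly like this, as a sketch (remote names, mount points and options are arbitrary; macOS needs macFUSE for rclone mount):

# Define the two remotes once.
rclone config create odroid-sftp sftp host 10.0.1.100 user duplicacy
rclone config create odroid-dav webdav url http://10.0.1.100/duplicacy vendor other

# Mount each one and point AmorphousDiskMark (or any small-file benchmark) at it.
mkdir -p /tmp/sftp-test /tmp/dav-test
rclone mount odroid-sftp:/duplicacy /tmp/sftp-test --vfs-cache-mode writes &
rclone mount odroid-dav: /tmp/dav-test --vfs-cache-mode writes &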

And even if the performance were OK, are you sure you want to connect without encryption if you are concerned about malware?

I’m not aware of any issues. Yes, you can’t specify them in the UI (like many other things), but that doesn’t mean you can’t use them (like many other things)…

However, note that limiting how long the connection stays up does absolutely nothing for security. You can keep the drive mounted all the time.

Can you mount that URL in Finder directly? That should separate server configuration issues from Duplicacy ones.

I should have mentioned I’m running the Docker version, and I don’t know of a way to mount them permanently from within it that would persist through upgrades.

I dug deeper into scripts for the web version. I see they go in subdirectories of ~/.duplicacy-web/repositories/localhost/. But with Docker my only localhost is under /cache/, which I thought wasn’t technically persistent (am I wrong?). If so, or if there’s another way to run scripts in the Docker version that would persist, that would solve my problem.

Yes, from both Mac and Linux, if I swap the Duplicacy-specific webdav-http: scheme for the standard one.

Another option would be to install MinIO and use the S3 protocol.
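
A rough sketch of what that could look like (credentials and paths are placeholders, and whether current MinIO builds still support 32-bit ARM is worth checking first):

# Run MinIO in Docker and expose its S3 API on port 9000.
docker run -d --name minio \
  -p 9000:9000 \
  -v /srv/minio-data:/data \
  -e MINIO_ROOT_USER=duplicacy \
  -e MINIO_ROOT_PASSWORD=change-me \
  minio/minio server /data
# Duplicacy would then be pointed at it with an S3-style storage URL.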

Thank you all for the ideas and assistance.

I got WebDAV working with Duplicacy by upgrading lighttpd mod-webdav from 1.4.53 to 1.4.59, which hasn’t made it into stable Buster.
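
For anyone replicating this, the lighttpd side looks roughly like the following (written from memory, so treat file names and paths as placeholders; authentication is omitted, and the /duplicacy URL still has to map to the storage directory via the document root or mod_alias):

# Enable mod_webdav for the backup path.
cat <<'EOF' | sudo tee /etc/lighttpd/conf-available/90-duplicacy-webdav.conf
server.modules += ( "mod_webdav" )
$HTTP["url"] =~ "^/duplicacy($|/)" {
    webdav.activate    = "enable"
    webdav.is-readonly = "disable"
}
EOF
sudo ln -s ../conf-available/90-duplicacy-webdav.conf /etc/lighttpd/conf-enabled/
sudo systemctl restart lighttpd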

For comparison, these are two checks run with -chunks on the same dataset, the first over SFTP and the second over WebDAV. Read is in MB/s. Note WebDAV gives about twice the throughput at a lower CPU temperature. The poor SFTP performance is due (I’m sure) to the absence of AES-NI, so don’t take it as representative.

To follow up for anyone curious (though I doubt my Odroid/32-bit ARM storage setup is all that common), I tested the new SMB option to see if its performance beat WebDAV.

WebDAV

# /config/bin/duplicacy_linux_x64_3.2.3 benchmark -storage Local
Storage set to webdav-http://duplicacy@odroid/duplicacy/
Enter the WebDAV password:xxxxxxxx
Generating 256.00M byte random data in memory
Writing random data to local disk
Wrote 256.00M bytes in 0.23s: 1120.00M/s
Reading the random data from local disk
Read 256.00M bytes in 0.06s: 4069.27M/s
Split 256.00M bytes into 52 chunks without compression/encryption in 3.43s: 74.56M/s
Split 256.00M bytes into 52 chunks with compression but without encryption in 4.70s: 54.46M/s
Split 256.00M bytes into 52 chunks with compression and encryption in 4.55s: 56.30M/s
Generating 64 chunks
Uploaded 256.00M bytes in 2.72s: 94.04M/s
Downloaded 256.00M bytes in 4.78s: 53.57M/s
Deleted 64 temporary files from the storage

SMB

# /config/bin/duplicacy_linux_x64_3.2.3 benchmark -storage Test
Storage set to smb://duplicacy@odroid/test/duplicacy
Enter the SAMBA password: xxxxxxxx
Generating 256.00M byte random data in memory
Writing random data to local disk
Wrote 256.00M bytes in 0.22s: 1147.72M/s
Reading the random data from local disk
Read 256.00M bytes in 0.07s: 3728.83M/s
Split 256.00M bytes into 50 chunks without compression/encryption in 3.27s: 78.20M/s
Split 256.00M bytes into 50 chunks with compression but without encryption in 4.84s: 52.90M/s
Split 256.00M bytes into 50 chunks with compression and encryption in 4.65s: 55.08M/s
Creating directory benchmark
Generating 64 chunks
Uploaded 256.00M bytes in 16.45s: 15.56M/s
Downloaded 256.00M bytes in 12.17s: 21.04M/s
Deleted 64 temporary files from the storage

Significantly worse.
Real-world results mirrored the relative difference between the benchmarks.

I suspect (as with SFTP) it’s down to the CPU demands of encrypted SMB vs. unencrypted WebDAV, given the absence of hardware AES.