So, why WebDAV, let alone unencrypted? macOS supports SFTP, NFS, SMB, etc. out of the box. Use any of those protocols instead. I would recommend SFTP, since you likely already use SSH, so no extra configuration is required.
WebDAV is way, way less secure than SFTP. So if you are planning to expose this to the internet: don't.
I don't know your setup well enough to give specific recommendations, but if you describe your network (how the client connects to the internet, where the server sits, firewalls or other restrictions, and how you are using WireGuard: hosting the endpoint on your server, or using it as a bridge on a VPS and routing packets with masquerading, etc.), we could suggest something.
OP is using WireGuard for security, so WebDAV's security is irrelevant here. In fact, running SFTP inside a WireGuard tunnel is massive overkill: double wrapping only makes things slower without gaining anything.
We don't know that; I asked for clarification. It is not obvious that WireGuard is end-to-end in this scenario. It can be used to connect to a jump box and expose a port there, for example as a workaround when the server is behind a restrictive firewall or a provider's NAT.
You gain performance, stability, and reliability by avoiding WebDAV.
But first I'd verify the WebDAV server is working correctly. On Linux you can use litmus to test it; it should pass all tests except propfind_invalid2, which you can ignore.
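A minimal sketch of running that check from a script, assuming litmus is installed and on PATH; the URL and credentials are placeholders for your own server:

```python
# Run the litmus WebDAV compliance suite against a server and print its report.
# The endpoint and credentials below are placeholders, not real values.
import subprocess

result = subprocess.run(
    ["litmus", "http://192.168.1.10:8080/dav/", "username", "password"],
    capture_output=True,
    text=True,
)
print(result.stdout)
# Every test group should pass; a lone failure in propfind_invalid2 is harmless.
```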
If a misconfiguration allows someone unauthorized to reach the other side of an encrypted WireGuard tunnel, a misconfiguration could just as well allow access to the other end of an encrypted SFTP connection. This has very little to do with OP's original question.
SFTP can be slow. There is a reason rclone can serve both SFTP and WebDAV; people who stream from rclone, for instance, tend to use WebDAV.
@sl474118737: I don't know if you can do it from the GUI, but you can edit the preferences file manually. Ideally you'd add some HTTPS WebDAV storage (some other server), then edit the "storage" key in the preferences file from "webdav://username@httpsserver:port" to "webdav-http://username@httpserver:port".
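A rough sketch of that manual edit, assuming the preferences file is the JSON list of storage entries duplicacy keeps under .duplicacy/ (adjust the path, and the host/port in the URL, to match your setup):

```python
# Rewrite the "storage" URL of each entry from webdav:// to webdav-http://.
# File path and structure are assumptions based on the description above.
import json

path = ".duplicacy/preferences"
with open(path) as f:
    prefs = json.load(f)

for entry in prefs:
    url = entry.get("storage", "")
    if url.startswith("webdav://"):
        # Switch the scheme; also change the host/port here if the HTTP
        # server differs from the HTTPS one you originally added.
        entry["storage"] = "webdav-http://" + url[len("webdav://"):]

with open(path, "w") as f:
    json.dump(prefs, f, indent=4)
```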
SFTP is rarely a bottleneck over the internet. And if it is, and WireGuard is indeed used end-to-end, there are other protocols available, like NFS. There is no place for WebDAV in this use case, and in my opinion duplicacy should drop it altogether: it's never suitable.
That reason does not apply here, and using WebDAV to serve bulk storage runs contrary to the protocol's purpose and design. Rclone provides a WebDAV adapter for WebDAV applications; streaming, which benefits from HTTP range requests (see the sketch below), is one of them. But that is an entirely different use case. I'd say a completely opposite one: a few large files fetched with range requests vs. many small files retrieved whole.
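For illustration, this is what the streaming use case relies on: an HTTP range request pulls only a slice of a large file, which is where a WebDAV/HTTP front end shines. The URL is a placeholder.

```python
# Fetch only the first 1 MiB of a large file via an HTTP Range request.
import requests

resp = requests.get(
    "http://example.com/dav/video.mkv",
    headers={"Range": "bytes=0-1048575"},
)
print(resp.status_code)   # 206 Partial Content if the server honors ranges
print(len(resp.content))  # roughly 1 MiB, not the whole file
```

A backup client does the opposite: it reads and writes thousands of small chunk files in full, so range support buys it nothing.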
It has its uses. Serving a backup target (thousands of small files) is not one of them. This is an overgeneralization.
I am so glad that you're not the one making these calls. I am using all the things that in your opinion should be dropped: GDrive, OneDrive, WebDAV, etc. Works quite well for me, but hey, none of these are ever suitable. /s
We had this conversation before. I don't accept half-measures and "wrong, but works quite well" solutions. I only accept well-motivated designs that always work and scale well, not ones that work only in specific circumstances, for specific users, for indeterminate periods of time, with poor alignment of incentives. Doing otherwise provides a poor user experience.
Even though a forum is not a reflection of real-world usage, you can count topics about issues with, say, S3 vs. gdrive to get some vague idea.
And besides, it did not work anywhere near "quite well" for one user: me. It did start out promising, on smallish datasets in simple use cases. I suffered through it for two years, far too long in hindsight. Those issues are not fixable, for the reasons discussed before. I have never heard of anyone who had issues with AWS S3 or GCS.
Yes, anything can be made to work, even "quite well", but I refuse to accept this cop-out.
Let's be honest, the only reason you use and tolerate high-latency Google Drive is that it's "free" hot storage and duplicacy does not support archival tiers. I seriously doubt it would even have been a contender had price not been a factor, or if archival storage were supported. That, or you have a tiny dataset, as these solutions don't scale.
So, instead of defending misguided compromises, how about advocating for a scalable design appropriate for the job and making the correct choices for the users?
Because a backup program that supports WebDAV among its main protocols screams "I don't know what I'm doing", or "I don't care about user experience", or "my users shall be able to shoot themselves in the foot in the worst way possible". The latter bit is important: those solutions work somewhat in the beginning but then invariably fall apart, by which point you are in too deep.
So yes, if duplicacy dropped WebDAV and Google Drive tomorrow, some people, including you, would be pissed; some would move to another solution; and some would trust the developer, play along, migrate to technology appropriate for the job, and be better off in the long run.
But if short-term profit is the driving force, then yes, we will keep seeing these posts from users about WebDAV, Google Drive, latency, and other completely avoidable hurdles.
I'm surprised I have to elaborate on this.
But just look at the modern software landscape: good software is few and far between. Why? Because people are content with "quite well".
Yes, this is an excellent reason to eliminate it as an option for everybody else. /s You know that yours is not the only use case out there, right? By the way, one of my datasets is about 20TB; how much have you tested with?
I don't even know if I can add anything to that. If price were not a factor, literally every single thing in the world would be different. But price is a factor in everything, at least for people in the real world.
I am advocating for giving users choices; some of them might not be just like you (gasp!) and might have different use cases and considerations as to what is or isn't important to them.
Under 3TB. Above two it started getting unbearable: prune would take weeks, and enumerating restore versions took tens of minutes. Please don't tell me I should have kept fewer versions; deleting data to make slow software happy is not a solution. On the contrary, it is an illustration that this backend does not work.
The problem here is that duplicacy forcing users onto hot storage makes them accept these compromises. Archival storage is cheap and best suited for backup.
And besides price there is "value". Optimizing for price is foolish: lowest-cost solutions are notorious for offering poor value.
That's where we disagree the most. I think users should have no say in the inner workings of the solution. Even if having choices is great on paper, supporting all those choices is often impossible. If I know nothing about backup and I pay money for a piece of software, I expect it to guide me to the "correct" solution. I don't want it to force me to make mundane choices, or choices in domains where I'm not an expert, or to "learn from my own mistakes".
The backup software developer is in a much better position to decide which storage providers can deliver the designed user experience.
Ideally, there would be just one backend (one of the big three), with all the optimizations made to take full advantage of that one backend's specifics. Supporting seventeen hundred backends dilutes the quality of each and increases support load non-linearly: crappy backends generate more support volume.
The software design is already fixed. Some storage providers will objectively work better than others. Hence, one provider can be chosen that offers the best experience. There is no room for user choice other than for the sake of having one. If only one provider were supported, we would not be having these chats.
The only reason we wouldn't be having these chats is that many current users (myself included) would be using some other software. I am done with this conversation.
Above I said that some users like yourself, who evidently prefer penny-pinching to quality, may end up leaving for the "competitors". And that's OK.
You are vastly overestimating the number of such users. Had this been my software, I would happily fire all my cost-preoccupied, cost-driven users. These users generate the least profit but require tons of support (both directly and indirectly, in the form of development and testing spent on workarounds for bad backends). Been there, done that.
Regardless of the nature of the product, with a fixed amount of available resources you can either make a quality product that is not cheap, or a half-hearted, half-baked contraption riddled with "user choices" at bottom-of-the-barrel prices. Not both. You are advocating for the latter; I'm for the former.
I'm still not sure why you think that shifting the burden of making critical, domain-specific choices from the vendor to the customer is acceptable, let alone desirable.
I can imagine SFTP outperforming WebDAV, but what's the concern with stability/reliability?
I've run 2 WebDAV storages for about 2 years (with check -chunks as part of the schedule), and the only issue was one corrupted chunk, which I suspect was caused by a power outage.
The WebDAV protocol's design goals (document exchange) are drastically different from what software like duplicacy requires (fast access to a massive array of small files). As a result, there are bottlenecks in "weird" places, slowing down access in the best case, or losing data in the extreme, e.g. when the server runs out of RAM, drops the connection, or writes corrupted data due to some bug: there is a whole web server driving this under the hood, as opposed to a purpose-built engine like SFTP.
Regardless of whether or not you think users should be given "choice" by any given software developer, this statement alone demonstrates why they absolutely should be given choices at all costs. You (in particular) are not qualified, nor do you have the experience of others, to judge what is a "wrong" solution.
Case in point: you claim GCD is inadequate for Duplicacy. My vast and flawless experience with it says you're 100% wrong.
How do we resolve this conflict?
Well, how about offering users some bloody "choice" and not treating everyone like children?
Let's be honest, the only reason you keep dissing Google as a provider is that you quite clearly have a bias against almost everything they do. They compete with Apple, and it pisses you off that their customers aren't as bled dry as you are.
Compromises, such as not verifying backups because of exorbitant egress costs, perhaps?
Advising people to switch from WebDAV to SFTP is one thing. (I wonder how you would have come to your conclusion had you not had the opportunity to test it for yourself.)
Advocating for the removal of all but the protocols and features you personally approve of is the height of arrogance. (I wonder how you'd feel if I said "archival tier" storage should never, ever get added to Duplicacy. And yet, even though I would never use it, I'd be very happy if it did get added.)
And yet despite all this, somehow, this is a wrong solution. Shame on gchen for steering you down this forbidden path.