Support for pCloud API

Currently, duplicacy supports pCloud storage only via WebDAV. Would it be conceivable to add native support for the pCloud API? Rclone already supports pCloud, and since it is also written in Go, it might be easy to reuse some of that code in duplicacy?

There also seems to be a Go library for the pCloud API.

Just wondering if you’ve ever tried Rclone’s serve feature with, say, sftp. It’s a bit like a mount, but interfaces between a remote and another protocol.

No, I haven’t. I looked at it last year, couldn’t quite wrap my head around how it works, and then forgot about it. I just looked at it again, and the part I understand is that I can set up my pCloud storage as an rclone remote (and thereby benefit from the rclone implementation of the pCloud API). And since it’s serve and not the ordinary rclone way of doing things, it appears it will give me access to pCloud via, say, SFTP, but without mirroring the entire backend locally, right? So how do I access it? Does it provide an SFTP port on localhost which is then “tunneled” to pCloud?

If my understanding so far is correct, this looks like a nice solution in theory, but how much friction am I introducing into my backup setup, both in terms of possible failures and in terms of resource use?
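To make the question concrete, here is the kind of setup I am imagining (a sketch only: the remote name `pcloud`, the path, the port, and the credentials are all placeholders I made up):

```shell
# Serve the rclone remote "pcloud" over SFTP, bound to localhost only.
# Nothing is mirrored locally; reads/writes are translated into pCloud API calls.
# --user/--pass protect the local SFTP endpoint; they are NOT pCloud credentials.
rclone serve sftp pcloud:backups --addr 127.0.0.1:2022 --user me --pass secret
```

A backup client would then be pointed at `127.0.0.1:2022` instead of at pCloud directly; the exact storage URL syntax depends on the client.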

I don’t use Rclone’s serve method with my own backups, so I’m not entirely sure it’ll be better in your case. But since Rclone seems to use checksums with pCloud, you might end up with more reliable storage, with maybe a little overhead.

I haven’t tried this with Duplicacy yet, but rclone has a mount command that lets you mount any storage system it supports as if it were a local directory, via FUSE. You could mount pCloud to /mnt/backup/ or something similar, do the backup to that directory, then unmount it.
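As a rough sketch (the remote name and mount point are assumptions, not tested with Duplicacy), the sequence would look like:

```shell
# Mount the rclone remote "pcloud" via FUSE, detaching into the background
rclone mount pcloud:backups /mnt/backup --daemon

# ... run the backup against /mnt/backup as if it were a local disk ...

# Unmount when the backup is done (Linux; on macOS use `umount /mnt/backup`)
fusermount -u /mnt/backup
```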

I managed to set up rclone with fuse and I can indeed see my cloud storage as a local folder. But something is still not configured properly as duplicacy has become unusably slow when I use that local storage as my backend:

A mount can have very high latency, and performance also depends on the caching configuration. How did you mount it?

In either case, I’d strongly suggest using rclone serve instead (SFTP or WebDAV). There is no reason to involve a virtual filesystem (although one may still be used if caching is enabled), or to have the backup target mounted locally (which invites accidental corruption or ransomware).

You can also start and kill the rclone serve instance from pre- and post-scripts, making it completely transparent.
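A minimal sketch of that idea, assuming the CLI’s `.duplicacy/scripts/pre-backup` and `post-backup` hooks (the remote name, path, port, and PID file location are placeholders):

```shell
# --- .duplicacy/scripts/pre-backup ------------------------------------
#!/bin/sh
# Start rclone serve in the background and remember its PID
rclone serve sftp pcloud:backups --addr 127.0.0.1:2022 &
echo $! > /tmp/rclone-serve.pid
sleep 2   # crude wait for the server to come up

# --- .duplicacy/scripts/post-backup -----------------------------------
#!/bin/sh
# Stop the server again once the backup has finished
kill "$(cat /tmp/rclone-serve.pid)"
rm -f /tmp/rclone-serve.pid
```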

Hi @gchen, it would be great if you could add support for the pCloud API; it’s very well documented and seems simple to do (I don’t have the programming skills to do it myself :confused: )

I asked Arq Backup support, and they added pCloud support in less than 10 days.

In the meantime, my second backup to pCloud is done via Arq Backup. It works very well, and I would have liked native support in duplicacy so I could make it my main backup again.

Many thanks

I would be very careful relying on pCloud for backup, let alone as the main one. “Works very well” means nothing if the data rots or disappears in 5 years.

Storage providers that offer fixed-price storage for unlimited time, or unlimited storage for a fixed price, are in a race to the bottom, and absolutely do cut corners wherever possible on the things that are least visible – like data durability. You can search this and other forums for pcloud for an overview of existing issues.

I’ve been using pCloud for quite a few years and have never encountered this type of problem. My number 1 backup is on RAID 5 hard drives, and the secondary is on pCloud. The only topic I found about data disappearing from pCloud involves users who had connected the client on several computers, after which someone went onto one of them and deleted everything. In those cases, nothing could be recovered via rewind. In other cases the data was in fact present online, but a corrupted local cache in the pCloud client made it appear missing; that was corrected by purging the cache and reinstalling the client.

Can you point me to any links or information on this subject? Because even if I haven’t had any problems, I would like to know what could potentially happen and what to expect.

Many thanks :slight_smile:

A post was split to a new topic: Choice of storage providers

Any news about supporting pCloud natively?
Meanwhile, many other apps support pCloud, including Duplicati; it’s no longer a no-name provider.

Arq Backup supports pCloud, and so does other backup software.

OK, it looks like the Duplicacy project is dead. The latest Duplicacy Web Edition release is almost 2 years old. That’s why they won’t be adding pCloud anymore. It’s such a shame that such a great project has come to an end.

Nothing really changed with pCloud in particular, and similar providers in general. Same business model, same corner-cutting incentives, everything I said about incentive alignment still stands.

A few apps adding their API doesn’t magically make them a good place to put long-term backups. “Lifetime” storage for a one-time fee only works if they keep costs down in places you never see — replication depth, integrity checks, version retention, repair traffic. That’s exactly where cheap providers like to shave.

“I’ve used it for years and had no issues” isn’t a durability guarantee, it is “nothing happened yet” or “I don’t know if data is ok”.

There’ve been cases where files were “still there” until someone actually tried to restore them, and then rewind showed a directory but half the files were missing or zero-byte. A client cache reset didn’t fix it. The server-side metadata was simply… wrong. Silent failures happen. Monitoring for them is expensive, and you are not paying them to do it. So they don’t. Use search. This is not a revelation, really.

Eh? CLI is maintained, licensing works, fixes land when they matter. The web UI isn’t supposed to churn every month. Backup software is supposed to be boring.

If someone wants to trust years of data to a provider whose entire model is “pay us once and we promise to store it forever,” that’s on them. pCloud suddenly becoming a solid backup target because Duplicati or Arq added a checkbox is wishful thinking. You can’t get anything forever for a fixed price, let alone anything reliable, let alone for hosting backups.

Do yourself a favor and pick a storage provider where you pay for what you use, one that has an incentive to keep offering you service and to maintain the data. pCloud is not such a provider. It’s fine for sharing a bunch of files with colleagues, not for backup.

You can think about it this way: If the price you pay can’t possibly fund the infrastructure required to keep your data alive for ten years, then your data won’t be alive in ten years.

Ironically, the lack of pCloud support in duplicacy is a silver lining protecting you from data loss.

Yes, everything you say may be true. In my case, I am now retired, unfortunately with a very small pension.

That’s why I have to make sure I don’t spend the same amount of money on backups as companies that have enough money for it. Although I strongly doubt that wealthy companies would necessarily choose Duplicacy. For them, there are a large number of well-functioning but expensive solutions that are selected by their IT departments.

Regarding pCloud: It’s not the only place I store my backups; I also have other secure alternative storage locations.

And as for the outdated software: it’s a sign that hardly anything is happening anymore. The CLI may be maintained, but an ancient GUI tells me that nothing is happening there anymore.

P.S.: I personally regret that Duplicacy is no longer being actively developed. It had so much potential. Unfortunately, maintaining a little bit of CLI is not the same as active development. Duplicacy had a lot of potential to become a serious backup solution. But unfortunately, that is not happening, possibly for financial reasons.

I understand. However, this does not make pCloud suitable. There are other ways to accomplish low-cost backup that don’t involve companies in a race to the bottom.

Especially since you have other backups, you can get away with archival storage, at about $1.5/TB/month. Very cheap to store, quite expensive to restore — but you are not planning to restore; it’s an insurance policy.

Duplicacy does not support Amazon Glacier ($1/TB/month), but it supports Google’s archive storage class. It’s a bit more expensive; you may search the forum — some people have had good experience with it. Or you can switch to other tools that do support Glacier, like Arq. I’m using it myself, paying about $3.6/month for about 3+ TB stored. I can look up the actual numbers if you are interested.
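The arithmetic behind those figures is just size times rate; a throwaway sketch (rates rounded to the approximate numbers mentioned above):

```shell
# Monthly archival cost: size in TB multiplied by rate in $/TB/month
monthly_cost() {
  awk -v tb="$1" -v rate="$2" 'BEGIN { printf "%.2f", tb * rate }'
}

monthly_cost 3.6 1.0   # ~3.6 TB at Glacier's ~$1/TB/month -> 3.60
echo
monthly_cost 3.0 1.5   # 3 TB at ~$1.5/TB/month archive    -> 4.50
echo
```

Restores are billed separately and cost considerably more, which is acceptable for an insurance-policy copy you don’t expect to touch.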

I never understood what the point of duplicacy’s web UI is, so I can’t answer that.

Lack of development is a good thing. It works. It’s a pretty simple piece of software: shred/encrypt/transfer data. I’d rather it not change at all, unless some weird bugs are discovered. I’m using a years-old version of the CLI. It works. I don’t need it to keep updating.

Most of the recent CLI updates were to accommodate various cloud services changes — which I don’t care about. I backup via SFTP.

You are right. Have you seen this thread? Is this project dead? - #9 by saspus
