Support for pCloud API

Currently, Duplicacy supports pCloud storage only via WebDAV. Would it be conceivable to add native support for the pCloud API? Rclone already supports pCloud, and since it is also written in Go, it might be possible to reuse some of that code in Duplicacy.

There also seems to be a Go library for the pCloud API:

Just wondering if you’ve ever tried rclone’s serve feature with, say, SFTP. It’s a bit like a mount, but it acts as a bridge between a remote and another protocol.


No, I haven’t. I looked at it last year, couldn’t quite wrap my head around how it works, and then forgot about it. I just looked at it again, and the part I understand is that I can set up my pCloud storage as an rclone remote (and thereby benefit from rclone’s implementation of the pCloud API). And since it’s serve and not the ordinary rclone way of doing things, it appears it will give me access to pCloud via, say, SFTP, but without mirroring the entire backend locally, right? So how do I access it? Does it provide an SFTP port on localhost which is then “tunneled” to pCloud?

If my understanding so far is correct, this looks like a nice solution in theory, but how much friction am I introducing into my backup setup, both in terms of possible failures and in terms of resource usage?

I don’t use rclone’s serve method with my own backups, so I’m not entirely sure it’ll be better in your case. But since rclone appears to use checksums with pCloud, you might end up with more reliable storage, at the cost of a little overhead.


I haven’t tried this with Duplicacy yet, but rclone has a mount command that lets you mount any storage system it supports as if it were a local directory, via FUSE. You could mount pCloud to /mnt/backup/ or something similar, run the backup against that directory, then unmount it.
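
A minimal sketch of that workflow, assuming a remote named `pcloud` already created with `rclone config` and a repository initialized against the mount point (the remote name and paths are placeholders):

```sh
# Mount the pCloud remote via FUSE; cache writes locally so uploads
# happen asynchronously, and detach into the background.
rclone mount pcloud:duplicacy-storage /mnt/backup --vfs-cache-mode writes --daemon

# Run the backup against the mounted directory.
cd /path/to/repository
duplicacy backup

# Unmount when done (Linux; on macOS use `umount /mnt/backup`).
fusermount -u /mnt/backup
```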


I managed to set up rclone with FUSE, and I can indeed see my cloud storage as a local folder. But something is still not configured properly, as Duplicacy has become unusably slow when I use that local storage as my backend:

A mount can have very high latency, and performance also depends on the caching configuration. How did you mount it?

In either case, I’d strongly suggest using rclone serve instead (SFTP or WebDAV). There is no reason to involve a virtual filesystem (although one may still be used if caching is enabled), nor to have the backup target mounted locally (so as not to invite corruption by accident or by ransomware). A sketch of that setup is below.
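
A minimal sketch, assuming the same `pcloud` remote; the user name, password, port, and storage path are placeholders you’d choose yourself:

```sh
# Serve the pCloud remote over SFTP, bound to localhost only.
rclone serve sftp pcloud: --addr 127.0.0.1:2022 --user backup --pass secret

# In the repository, point Duplicacy at the local SFTP endpoint
# (it will prompt for the password):
duplicacy init my-snapshots sftp://backup@127.0.0.1:2022/duplicacy-storage
duplicacy backup
```

One thing to watch: the path in the storage URL is resolved relative to the root that `rclone serve sftp` exposes, so adjust it depending on whether you serve `pcloud:` or a subdirectory of it.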

You can also start and kill the rclone serve instance from pre- and post-backup scripts, making it completely transparent; see the sketch below.
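
A rough sketch of that, assuming the scripts live in the repository’s `.duplicacy/scripts/` directory (Duplicacy runs `pre-backup` and `post-backup` from there); the remote name, port, and credentials are the same placeholders as above:

```sh
#!/bin/sh
# .duplicacy/scripts/pre-backup
# Start the SFTP bridge in the background and remember its PID.
rclone serve sftp pcloud: --addr 127.0.0.1:2022 --user backup --pass secret &
echo $! > /tmp/rclone-serve.pid
sleep 2   # give rclone a moment to start listening
```

```sh
#!/bin/sh
# .duplicacy/scripts/post-backup
# Tear the bridge down again after the backup finishes.
kill "$(cat /tmp/rclone-serve.pid)" && rm /tmp/rclone-serve.pid
```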