ERROR UPLOADING_CHUNK moved permanently

Hello all,

unfortunately I'm getting the above error message every time I try to run duplicacy with pcloud.
It worked fine for a while, but now I get the same error message every time.

I've also tried to reset everything, but the error still happens.

Can anyone help?
Best regards

Code 401 means “unauthorized” and I’d tend to think that this is just pcloud being pcloud.

Are you using it via WebDAV or via rclone serve as an adapter? You might want to try the latter, to see if it behaves any better.

You got two 401 errors (with different chunks) and then the “moved permanently” error.

Since you are using WebDAV + pcloud, I also think the root cause is on pcloud's side.


I'm using it with WebDAV. What I don't understand is that it worked for a period of weeks! :frowning:

I'll try it with rclone in the next few days and bring the results back to this topic.

Can you show me where to find instructions for using rclone with duplicacy?

thanks and regards

If it did not work at all, pcloud would have fixed it instantly. Fixing spurious, intermittent failures is much more difficult.

Here it is:

  1. Configure pcloud endpoint as described here: pCloud
  2. Start serving that storage over sftp as described here: rclone serve
  3. Configure duplicacy to use the sftp backend with the URL from step 2 (which would be on localhost), as described here: Supported storage backends
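The three steps above might look something like this on the command line. Everything here is a sketch: the remote name, port, credentials, storage path, and snapshot id are illustrative placeholders, not values from this thread.

```shell
# 1. Create a pcloud remote (opens a browser for OAuth authorization);
#    "pcloud" is just an illustrative remote name.
rclone config create pcloud pcloud

# 2. Serve that remote over sftp on localhost.
rclone serve sftp pcloud: \
    --addr localhost:2022 \
    --user duplicacy --pass some-secret

# 3. In another shell, initialize the duplicacy repository against the
#    local sftp endpoint ("mybackup" and the path are placeholders).
duplicacy init mybackup sftp://duplicacy@localhost:2022/duplicacy-storage
```

Because rclone serves on localhost, duplicacy talks plain sftp to a fast local endpoint while rclone handles the pcloud API behind the scenes.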

Hi Sapsus,

Backup is now running with pcloud-rclone-sftp-duplicacy

Currently everything looks fine. I'm curious whether it will stay stable :slight_smile:

Anyway thank you very very much!

Hey Sapsus,

I've got the following issue: rclone is serving over sftp, but since the upload is asynchronous, it seems to keep the chunks in a cache, and once the cache fills up to some level I don't currently understand, it stops talking to duplicacy (or to a manual sftp connection). Running rclone in verbose mode, I can see that it is uploading the chunks to pcloud, but a connection is still not possible. I think it first wants to drain the cache below a certain level before accepting new file transfers…

Do you have an idea how I can handle this? Can I enlarge the rclone cache or make it unlimited?

thanks a lot again

You mean duplicacy quickly uploads data over sftp until the rclone cache fills, and then the connection stalls until some data actually gets uploaded to pcloud? This is by design; how else could it possibly behave?

You can (should?, to make the backup deterministic) disable the cache with --vfs-cache-mode off, as described here. Duplicacy does not do any operations that would require it, e.g. concurrent read/write.
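In concrete terms, that just means adding the flag to the serve command. A sketch, with the remote name, port, and credentials as placeholders:

```shell
# Serve over sftp with the VFS cache disabled, so writes go straight
# through to pcloud instead of landing in a local cache first; duplicacy's
# apparent upload speed then reflects the real upload speed.
rclone serve sftp pcloud: \
    --addr localhost:2022 \
    --user duplicacy --pass some-secret \
    --vfs-cache-mode off
```

With the cache off, the backup only reports success once the chunks are actually on pcloud, which is what makes it deterministic.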

There is also a pcloud console client called ‘pcloudcc’, which can be found here:

With this tool you can access files and folders, but more importantly, you can mount your cloud storage with it. The command would be: ‘pcloudcc -u mail@domain.tld -m /your/mount/point’

For me this works very well. With it I can save directly to the mount ‘locally’, which is fast. One thing to note: it caches the files that need to be uploaded, so during a backup everything is written to your hdd and then uploaded in the background.
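A minimal sketch of that workflow, using the mount command from the post above (email, mount point, and storage path are placeholders):

```shell
# Mount pcloud locally via the official console client.
pcloudcc -u mail@domain.tld -m /mnt/pcloud &

# Back up to the mount as if it were plain local disk storage;
# pcloudcc caches the written chunks and uploads them in the background.
duplicacy init mybackup /mnt/pcloud/duplicacy-storage
duplicacy backup
```

The trade-off mentioned above applies: the backup finishes against the local cache quickly, but the actual upload to pcloud continues in the background afterwards.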


I have tried using rclone mount, which achieves the same thing, but it doesn't seem to work reliably for me. And it is a memory hog:


Is the console client different in any way?

The pcloudcc tool is made by the creators/hosters of pcloud, so I think it does things the right way. In my case I had a lot of trouble using pcloud via WebDAV (not only with duplicacy). So I switched to their own implementation and it seems to work very well.

What do you mean by “their implementation”?

As for my progress, I think I may finally have found a setup that works with pcloud: I'm running rclone serve sftp on my home server so that both the server itself and other devices on my local network can back up to pcloud via that server. It seems much more stable than going via pcloud's own WebDAV interface.
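Assuming the same rclone setup as earlier in the thread, the only change for a LAN-wide server is binding to an address other machines can reach. Addresses, port, credentials, and paths below are all illustrative:

```shell
# On the home server: expose the sftp endpoint on all interfaces,
# not just localhost.
rclone serve sftp pcloud: \
    --addr 0.0.0.0:2022 \
    --user duplicacy --pass some-secret

# On any other device on the LAN: point duplicacy at the server's IP.
duplicacy init laptop-backup sftp://duplicacy@192.168.1.10:2022/duplicacy-storage
```

This way only one machine holds the pcloud credentials and does the WebDAV/API talking; everything else sees a plain sftp server.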

I have also tried pCloud's macOS app (not for duplicacy, but for manually deleting some files) and it's completely hopeless. It hardly shows me any files and gives me beachballs all over the place when opening a folder. I think it simply can't handle thousands of files in a folder.