ERROR UPLOADING_CHUNK moved permanently

Hello all,

unfortunately I'm getting the above error message every time I try to run duplicacy with pCloud.
It worked fine for a while, but now it fails with the same error every time.

I've also tried resetting everything, but the error still happens.

Can anyone help?
Best regards
W

Code 401 means “unauthorized”, and I'd tend to think that this is just pCloud being pCloud.

Are you using it via WebDAV or via rclone serve as an adapter? You might want to try the latter, to see if it behaves any better.

You got two 401 errors (with different chunks) and then the “moved permanently” error.

Since you are using WebDAV + pCloud, I also think that combination is the root cause.

Hi,

I'm using it with WebDAV. What I don't understand is that it worked fine for weeks! :frowning:

I'll try rclone in the next few days and report the results back to this topic.

Can you show me where to find instructions for using rclone with duplicacy?

thanks and regards

If it did not work at all, pCloud would have fixed it instantly. Fixing spurious, intermittent failures is much more difficult.

Here it is:

  1. Configure a pCloud endpoint as described here: pCloud
  2. Start serving that storage over SFTP as described here: rclone serve
  3. Configure duplicacy to use the SFTP backend with the URL from step 2 (which will be on localhost), as described here: Supported storage backends
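The three steps above might look roughly like this on the command line. This is only a sketch: the remote name (`pcloud`), storage path, port, and SFTP credentials are placeholders I've made up, not values from this thread.

```shell
# 1. Configure a pCloud remote interactively (creates a remote, named "pcloud" here)
rclone config

# 2. Serve that remote over SFTP on localhost
#    (user/pass here only protect the local SFTP endpoint)
rclone serve sftp pcloud:duplicacy --addr 127.0.0.1:2022 \
    --user backup --pass secret

# 3. In another terminal, point duplicacy at the local SFTP endpoint
cd /path/to/repository
duplicacy init my-snapshots sftp://backup@127.0.0.1:2022//duplicacy
```

After this, `duplicacy backup` talks plain SFTP to rclone, and rclone handles the pCloud side.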

Hi Sapsus,

The backup is now running through the pCloud → rclone → SFTP → duplicacy chain.

Currently all looks fine. I'm curious whether it will run stably :slight_smile:

Anyway thank you very very much!
Greetings

Hey Sapsus,

I've got the following issue: rclone is serving over SFTP, but since the upload is asynchronous, it seems to keep the chunks in its cache, and once the cache fills up to some level I don't currently understand, it stops responding to duplicacy (and to a manual SFTP connection). Since I'm running rclone in verbose mode, I can see that it is still uploading the chunks to pCloud, but a new connection is not possible. I think it first wants to drain the cache below a certain level before accepting new file transfers…

Do you have an idea how I can handle this? Can I enlarge the rclone cache or make it unlimited?

thanks a lot again

You mean duplicacy quickly uploads data over SFTP until the rclone cache fills, and then the connection stalls until some data actually gets uploaded to pCloud? This is by design; how else could it possibly behave?

You can (and arguably should, to make the backup deterministic) disable the cache with --vfs-cache-mode off, as described here. Duplicacy does not perform any operations that would require it, e.g. concurrent reads and writes on the same file.
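A minimal sketch of the serve command with the cache disabled. The flag names are real rclone VFS options; the remote name, path, port, and credentials are placeholder assumptions.

```shell
# Serve with the VFS write cache off, so duplicacy's SFTP writes
# go straight through to pCloud instead of buffering on disk
rclone serve sftp pcloud:duplicacy --addr 127.0.0.1:2022 \
    --user backup --pass secret \
    --vfs-cache-mode off
```

On the earlier question about enlarging the cache: rclone exposes knobs like `--vfs-cache-max-size` and `--vfs-cache-max-age` for that, but for duplicacy's write-once chunk pattern simply turning the cache off is the cleaner fix.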