Errors during uploads

Hi All,

New user here. I've set up and tested Storj as a destination with no issues.
However, now that backups are actually running, I keep getting errors like the ones below, on both new and incremental backups, at random times, at which point the backup stops and fails (some values redacted):


2023-03-04 12:41:43.802 ERROR UPLOAD_CHUNK Failed to upload the chunk xxx: RequestError: send request failed
caused by: Put "https://duplicacy.gateway.storjshare.io/chunks/xx/xxx": read tcp 9.9.9.9:1234->185.244.226.3:443: read: connection reset by peer
Failed to upload the chunk xxx: RequestError: send request failed

Is it possible to find the cause?
Is there a way to force a retry when this happens?

Thanks in advance :grinning:

This looks like there was some issue with the Storj S3 gateway. Can you try using the new Storj backend supported by the latest CLI and web GUI?

Hi

Thanks for the response.

I did set up the Storj storage via the CLI (non-default chunk settings), and backups have worked on both the CLI and the web GUI previously.

I'm not sure which version of the GUI has the Storj backend option?

Ahhh, I see the forum post for 1.7 :slight_smile:
The website still has 1.6.3 on it :worried:

I shall try it and see if it's any better.

Thanks for your hard work

OK, I downloaded 1.7 and installed it over the older 1.6.3, started it up and navigated around: no issues.

Created a new Duplicacy storage using Storj as the type, pointing to the same bucket on Storj.

Then I shut down Duplicacy and edited the JSON file to rename the Storj S3 storage to an "old" name, and gave the new storage the old storage's name.

Tested a few smaller backups, ran checks, all good :grinning:

Then ran the backup that has been erroring… :worried:

2023-03-06 22:52:44.333 ERROR UPLOAD_CHUNK Failed to upload the chunk xyz: uplink: metaclient: rpc: tcp connector failed: rpc: dial tcp: lookup eu1.storj.io: no such host
Failed to upload the chunk xyz: uplink: metaclient: rpc: tcp connector failed: rpc: dial tcp: lookup eu1.storj.io: no such host

So I ran nslookup:

$ nslookup eu1.storj.io
Server:		a.b.c.d
Address:	a.b.c.d#53

Non-authoritative answer:
Name:	eu1.storj.io
Address: 104.199.30.73
Name:	eu1.storj.io
Address: 34.105.138.64
Name:	eu1.storj.io
Address: 34.159.134.91

So, a similar error, just presented differently, methinks.
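Since one successful nslookup doesn't rule out intermittent resolver failures, a quick probe loop can show whether resolution occasionally fails (just a sketch; the host, attempt count, and delay are examples):

```shell
# Probe DNS resolution repeatedly to catch intermittent
# "no such host" failures (host and attempt count are examples).
check() { nslookup eu1.storj.io >/dev/null 2>&1; }
fails=0
for i in 1 2 3 4 5; do
  check || fails=$((fails + 1))
  sleep 1
done
echo "failed lookups: $fails/5"
```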

For context, this is a large-ish folder: 89GB over 161k files. However, other folders of similar size and file counts have backed up without issue; in fact, my Media backup folder is nearly 600GB over 120k+ files. That said, it doesn't seem to be the size/volume, but more the communications?

And now I just keep getting these errors on most backups since the upgrade:

2023-03-07 01:12:36.546 ERROR UPLOAD_CHUNK Failed to upload the chunk xyz: uplink: stream: open /var/folders/m2/09d4dt055cq4dkvcmqys4ky40000gn/T/tee541498421: too many open files
Failed to upload the chunk xyz: uplink: stream: open /var/folders/m2/09d4dt055cq4dkvcmqys4ky40000gn/T/tee541498421: too many open files
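Presumably "too many open files" means the process hit its file-descriptor limit; macOS (which the /var/folders path suggests) defaults to a fairly low per-process soft limit, often 256. As a workaround sketch (the 4096 value is just an example), the limit could be raised in a shell and Duplicacy launched from that same shell so it inherits it:

```shell
# Show the current soft limit on open files for this shell
ulimit -n

# Try to raise it (capped by the hard limit, `ulimit -Hn`);
# child processes launched from this shell inherit the new limit.
ulimit -n 4096 2>/dev/null || echo "could not raise limit"
ulimit -n
```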

@gchen is there an issue with the native storj interface?

Using the new Storj backend, I constantly get the above errors.
Switching back to Storj S3: no issues (my original issue hasn't recurred since switching back).

2023-03-07 01:12:36.546 ERROR UPLOAD_CHUNK Failed to upload the chunk xyz: uplink: stream: open /var/folders/m2/09d4dt055cq4dkvcmqys4ky40000gn/T/tee541498421: too many open files
Failed to upload the chunk xyz: uplink: stream: open /var/folders/m2/09d4dt055cq4dkvcmqys4ky40000gn/T/tee541498421: too many open files

I’m not sure why storj needed to open that file. Is this your temporary directory?

It is, yes.

I did some open-file checking and don't see that temp folder in use with the S3 interface, only with the new Storj backend.
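For reference, this is the kind of check I mean: counting open descriptors (a sketch, shown here on the current shell via /dev/fd, which works on both macOS and Linux; for the Duplicacy process itself you'd point `lsof -p <pid>` at its PID and grep for the temp directory path):

```shell
# Count file descriptors open in the current process; /dev/fd
# exists on both macOS and Linux. For another process, use
# `lsof -p <pid>` and grep for /var/folders to spot temp files.
count=$(ls /dev/fd | wc -l | tr -d ' ')
echo "open fds: $count"
```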

@gchen any thoughts here?