StorJ connection reset by peer

Running a backup:

-log -d backup -storage storj-main -threads 8 -stats

using

storj://blah@us1.storj.io:7777/duplicacy/backups/

I want to upload around 4TB, but somewhere along the way (in the low GB range), this is happening:

:ERROR UPLOAD_CHUNK Failed to upload the chunk blah:: uplink: metaclient: manager closed: closed: read tcp 172.16.10.32:33586->34.150.199.48:7777: read: connection reset by peer

Which completely stops the scheduled backup process.

I have plenty of bandwidth and a pretty steady fiber connection, and I'm running OPNsense. For backups I have tried everything from 2 to 10 threads with the same results, and I have allocated plenty of memory and CPU to the Docker instance running the latest :mini version of Duplicacy Web.

  • Can I just restart the backup? (i.e. will it start from the last stopping point without consequence?)
  • Stopping every ~30GB with a peer reset is going to become bothersome. I’m using storj:// but would using S3 be any more stable?
  • Why doesn't Duplicacy pick up or re-establish the uplink and continue?

Thanks.

TLDR: Reduce the number of threads, or switch to using the S3 gateway.

The problem is somewhere between Duplicacy and your ISP. The native backend opens a huge number of connections very rapidly; they come from a pool, but some devices can't handle them anyway. Duplicacy retries on failure, and the uplink library retries internally as well; in fact, it uploads to more nodes than necessary and drops the long tail. By the time the failure bubbles up to the client, something is genuinely broken in the network stack, because all of those retries have already failed.
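If you want to see this happening, you can watch how many TCP connections are open while the backup runs (the native backend talks to many storage nodes in parallel). Just a quick sketch; run it wherever the uplink traffic actually originates, i.e. inside the container or on the Docker host, depending on your networking mode:

# Count established TCP connections every 5 seconds while the backup runs
watch -n 5 'ss -tan state established | wc -l'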

You mentioned you have tried 2 threads. Does it affect how soon the issue reproduces?

If you consistently hit this after some amount of data has been transferred, something is leaking somewhere. I would start with your gateway, or maybe remove Docker from the picture. It has its own network stack that I would not trust one bit.
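One quick way to take Docker's bridge/NAT out of the equation is to run the container with host networking. Sketch only; the image name, tag, and volume paths are placeholders for whatever you currently use:

# --network host shares the host's network stack, bypassing Docker's bridge/NAT
docker run -d --name duplicacy-web \
  --network host \
  -v /path/to/config:/config \
  -v /path/to/source:/source:ro \
  your-duplicacy-web-image:mini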

Yes, you can restart the backup. It will not re-upload the chunks that are already there, but it will go through the hashing process from scratch.
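Until the root cause is found, a crude workaround (not a Duplicacy feature, just a shell loop around the command from your post) is to rerun the backup until it finishes, since already-uploaded chunks are skipped each time:

# Rerun the backup until it exits successfully; existing chunks are not re-uploaded
until duplicacy -log backup -storage storj-main -threads 2 -stats; do
    echo "Backup interrupted, retrying in 60 seconds..." >&2
    sleep 60
done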

Using the S3 gateway will cut resource requirements by about 100x. If you don't need blazing-fast speeds (and for backups that's rarely needed), the S3 gateway is an adequate solution.
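For reference, switching mostly means pointing Duplicacy at an s3:// URL instead of storj://, roughly like the sketch below if you use the CLI (the Web UI storage dialog is the equivalent). The endpoint, region placeholder, bucket, and path here are assumptions; generate S3-compatible credentials from your Storj access grant first and enter them when prompted:

# Sketch: initialize against Storj's S3-compatible gateway instead of the native backend
cd /path/to/repository
duplicacy init -e my-backup-id s3://us1@gateway.storjshare.io/duplicacy/backups
duplicacy backup -threads 4 -stats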