Running a backup:
-log -d backup -storage storj-main -threads 8 -stats
using
storj://blah@us1.storj.io:7777/duplicacy/backups/
I want to upload around 4 TB, but somewhere along the way (in the low-GB range) this keeps happening:
:ERROR UPLOAD_CHUNK Failed to upload the chunk blah:: uplink: metaclient: manager closed: closed: read tcp 172.16.10.32:33586->34.150.199.48:7777: read: connection reset by peer
This completely stops the scheduled backup process.
I have plenty of bandwidth and a pretty steady fiber connection, with OPNsense as the router. For backups I have tried everything from 2 to 10 threads with the same result, and I have allocated plenty of memory and CPU to the Docker instance running the latest :mini version of Duplicacy Web.
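For reference, the container is started along these lines (the image name, paths, ports, and resource limits here are placeholders rather than my exact setup):

docker run -d --name duplicacy-web \
  --memory 8g --cpus 4 \
  -p 3875:3875 \
  -v /srv/duplicacy/config:/config \
  -v /srv/duplicacy/cache:/cache \
  -v /srv/duplicacy/logs:/logs \
  -v /mnt/data:/backuproot:ro \
  saspus/duplicacy-web:mini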
- Can I just restart the backup? (i.e. will it resume from the last stopping point without consequence?)
- Stopping every ~30GB with a peer reset is going to become bothersome. I’m using storj://, but would using S3 be any more stable?
- Why doesn’t Duplicacy pick up or re-establish the uplink connection and continue? (My interim workaround idea is sketched below.)
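Assuming restarting is safe, my interim plan is to wrap the CLI in a simple retry loop along these lines (just a sketch; the repository path and retry count are placeholders, and it assumes the CLI resumes from chunks that were already uploaded):

#!/bin/sh
# Re-run the backup until it exits cleanly; paths and counts are placeholders.
cd /path/to/repository || exit 1
for attempt in 1 2 3 4 5; do
    duplicacy -log -d backup -storage storj-main -threads 8 -stats && break
    echo "backup attempt $attempt failed, retrying in 60s" >&2
    sleep 60
done

I’d much rather Duplicacy retried the failed chunk itself, though.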
Thanks.