Best thread count for SFTP?

You don’t have to enable encryption in Duplicacy if you use the Storj native integration: it’s already end-to-end encrypted.

When using the S3 gateway, however, the gateway necessarily holds the encryption keys, so the transfer is no longer end-to-end encrypted. I would keep Duplicacy encryption enabled either way, for consistency, and in case you want to move the Duplicacy datastore elsewhere in the future.

There is no concept of regions in Storj, but Duplicacy’s S3 backend wants one. You can specify anything; I tend to put “us-east-1”. It gets ignored anyway.
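For example, the region goes into the first part of Duplicacy’s S3 storage URL when pointing at the Storj gateway. This is a hypothetical sketch: the storage name, snapshot ID, and bucket path are placeholders, and “us-east-1” is arbitrary because the gateway ignores it.

```shell
# Placeholder names throughout; only the URL shape matters here.
duplicacy add -e storj_storage my-snapshot-id \
    s3://us-east-1@gateway.storjshare.io/my-bucket/duplicacy
```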

It worked with “global” as the region. It’s now backing up without errors; let’s see how it goes. Thanks for all the info so far!


BTW I set the thread count to 20 and it’s backing up at 18 MB/s. How can I speed it up? What thread count should I use? I have 400 Mbps upload, and with both iDrive and Wasabi it was backing up at almost full speed, but not with Storj.

Did you increase the chunk size, or is it the default?

Storj will have more latency: each transfer has to be uploaded to the gateway, split into shards, erasure-coded, and distributed to geographically uncorrelated nodes before the confirmation is returned (i.e. everything the native integration would have done locally). Conventional providers don’t have that overhead; they can synchronously accept the whole file and write it to a single datacenter.
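As a rough back-of-envelope sketch (the model and all numbers are illustrative assumptions, not measurements), that per-chunk confirmation latency caps single-thread throughput, which is why more threads help:

```python
def effective_throughput_mbps(threads: int, chunk_mb: float,
                              latency_s: float, link_mbps: float) -> float:
    """Toy model: each chunk pays a fixed confirmation latency on top
    of its wire time; concurrent threads overlap those waits."""
    wire_s = chunk_mb * 8 / link_mbps                    # seconds on the wire per chunk
    per_thread_mbps = chunk_mb * 8 / (wire_s + latency_s)
    return min(link_mbps, threads * per_thread_mbps)

# Illustrative: a 400 Mbps link, 4 MB chunks, 1 s of gateway latency.
for t in (1, 8, 32):
    print(t, round(effective_throughput_mbps(t, 4, 1.0, 400), 1))
```

Under these made-up numbers, one thread is latency-bound at a small fraction of the link, and the link only saturates once enough chunks are in flight at once.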

You can keep increasing thread count until performance no longer increases.
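One hypothetical way to find the plateau (the loop is just a sketch; `-threads` is Duplicacy’s backup option):

```shell
# Rerun the same backup with increasing thread counts and watch the
# reported upload rate; stop raising it once the speed stops improving.
for t in 8 16 32 64; do
    echo "threads: $t"
    duplicacy backup -stats -threads "$t"
done
```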

It’s an interesting trade-off: with the native integration it’s either very fast or very horrible, and the S3 gateway sits somewhere in the middle.

I’ve actually experimented with running my own gateway on a cloud instance — I got excellent results, but the cloud instance needed to be quite beefy. I’ll try to find that thread.

I set -min-chunk-size 16 -max-chunk-size 64M - is that what you said?
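For reference, a hypothetical sketch of how those options are usually given (all names and values here are placeholders, not a recommendation): Duplicacy’s chunk parameters are set with `init` when the storage is first created, and the size suffix matters, so a bare `16` is not the same as `16M`.

```shell
# Hypothetical sketch: chunk parameters are fixed at storage init time.
duplicacy init -chunk-size 32M -min-chunk-size 16M -max-chunk-size 64M \
    my-snapshot-id s3://us-east-1@gateway.storjshare.io/my-bucket/duplicacy
```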


I am using a Mac mini M2 Pro, which is quite powerful, so it’s not a problem with my computer. 🙂

Found it. Completely new: where to start - #61 by saspus

I understand; what I mean is that when you use the S3 gateway, all that work is done on the gateway, not on your machine. I guess if their capacity is limited, maybe that contributes to the lower performance. To rule that out, I tried running the native integration on an Amazon instance directly (doing what the gateway would accomplish).

Plus, the latency if you are far from the gateway negates the benefits of the distributed nature of the network.

I’m still not sure what went wrong with your local integration: you have quite a beefy internet connection and hardware. I have a Mac M2, but my upstream is only 20 Mbps at home, so I might not be able to replicate your result. I’ll try tomorrow nevertheless. The running out of file handles should not have happened; but my zshrc is quite extensive, so I may have configured something and forgotten. I’ll check.

Revisiting the native integration: what did you set the ulimit -n value to? Try something large, like 65536.
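A quick sketch of checking and raising the per-process open-file limit in the shell that launches duplicacy (65536 assumes your hard limit allows it):

```shell
# Show the current soft limit on open file descriptors.
ulimit -n
# Raise it for this shell session (and its children, such as duplicacy).
# This only succeeds if the hard limit (ulimit -Hn) is at least this high.
ulimit -n 65536
ulimit -n
```

On macOS the default soft limit is quite low (256 in many shells), which fits the native integration running out of handles when opening many parallel connections to nodes.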