While this is likely a bug (ignoring throttling requests), I wanted to make a slightly tangential comment:
You should be able to let duplicacy fully saturate your upstream without affecting any other applications on the network. If everything else dies when you fully utilize your upstream (which I think is what you meant by “overloading”), you are likely experiencing bufferbloat – undesired latency spikes caused by network equipment buffering too much data – which effectively prevents you from fully utilizing the bandwidth you are paying for!
Managing the bandwidth of individual client devices and services to try to “fix” this is a losing battle, because ultimately bandwidth is not the problem; latency is.
To confirm that this is what you are actually experiencing, start pinging google.com in one window and then start a multi-threaded backup with duplicacy at full speed. Watch the ping: it should not change. If it does change (and it may increase drastically – 1000x is not unusual), you have just confirmed that your issue is indeed caused by bufferbloat.
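If you want to compare idle vs. under-load latency a bit more precisely, here is a small sketch; the `avg_rtt` helper is just an illustration that parses the summary line `ping` prints on Linux ("rtt min/avg/max/mdev") and macOS ("round-trip min/avg/max/stddev"):

```shell
# Extract the average RTT (ms) from ping's final summary line.
# Splitting on '/' puts the avg value in field 5 on both Linux and macOS.
avg_rtt() { awk -F'/' '/^(rtt|round-trip)/ {print $5}'; }

# Run once with the backup idle, then again while e.g.
# `duplicacy backup -threads 4` is saturating the upstream:
#   ping -c 20 google.com | avg_rtt
# Demo on a canned summary line, so the parsing is visible:
echo 'rtt min/avg/max/mdev = 8.1/9.4/12.0/1.1 ms' | avg_rtt   # prints 9.4
```

If the second number is tens or hundreds of times larger than the first, that is the bufferbloat signature.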
You need network equipment that can manage the queue to prevent buffers from filling up; this is usually achieved with SQM (Smart Queue Management) algorithms such as fq_codel. A number of devices, both commercial (Ubiquiti EdgeRouter and USG) and free (anything running OpenWRT), support this rather well.
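On OpenWRT specifically, the sqm-scripts package makes this nearly turnkey. A rough sketch, with assumptions labeled – `eth1` is an example WAN interface name, and the rates are placeholders you should set to roughly 90–95% of your measured down/up speeds, in kbit/s:

```shell
# On the router: install SQM plus its LuCI configuration page.
opkg update && opkg install sqm-scripts luci-app-sqm

# Configure the queue section; 'eth1' is an assumed WAN interface,
# and the rates below assume a ~12/1 Mbps link – substitute your own.
uci set sqm.eth1.interface='eth1'
uci set sqm.eth1.download='11000'    # kbit/s, a bit under downstream line rate
uci set sqm.eth1.upload='900'        # kbit/s, a bit under upstream line rate
uci set sqm.eth1.qdisc='fq_codel'
uci set sqm.eth1.script='simple.qos'
uci set sqm.eth1.enabled='1'
uci commit sqm && /etc/init.d/sqm restart
```

Shaping slightly below the line rate is the key design choice: it moves the queue out of the modem's oversized buffer and into the router, where fq_codel can keep it short and fair.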
How well? Anecdotally, I had a 12/1 Mbps connection (yes, 1 megabit per second upstream) for a very long time, and I was uploading non-stop with various tools (backup, sync, and other services) – of course, it used to take forever to transfer anything. My connection was essentially saturated at 100% all the time, yet ping never exceeded 10 ms and users saw no impact on their browsing or other activities.
What I’m trying to say is that instead of limiting your backup speed and effectively under-utilizing the connection you are paying for, it would be more productive to address the root cause of the issue, which has nothing to do with bandwidth utilization.