Rate-Limiting for Backup and Copy Commands


I’ve used the rate-limiting options with the ‘backup’ and ‘copy’ commands. In neither case has any rate limiting occurred. I verified this using the ‘iftop’ command to check the traffic on my network interface. I am using CLI version 2.1.2 on Ubuntu Linux 18.10 x64, on systems that are basically idle (very little, if any, network traffic or disk activity).

The command I used for backup:

sudo ~/duplicacy backup -limit-rate 20 (I would assume this means an upload rate of 20 kilobytes/second)

And for the copy command:

sudo duplicacy copy -upload-limit-rate 150 -from default -to offsite-wasabi

Any ideas as to why the rate limiting is not working in either command?

Thanks in advance for the help!


I see this too. I set a limit of 3000 and duplicacy consistently uploads at around 6 MB/s, which is overloading my internet connection.


While this is likely a bug (ignoring throttling requests), I wanted to make a slightly tangential comment:

You should be able to let duplicacy fully saturate your upstream without affecting any other applications on the network. If you do see that everything else dies when you fully utilize your upstream (what I think you meant by “overloading”), then you are likely experiencing bufferbloat – undesired latency spikes caused by network equipment buffering too much – which effectively prevents you from fully utilizing the bandwidth you are paying for!

Managing the bandwidth of client devices and services to try to “fix” this is a losing battle, because ultimately bandwidth is not the problem; latency is.

To confirm that this is what you are actually experiencing: start pinging google.com in one window, then start a multi-threaded backup with duplicacy at full speed. Watch the ping. It should not change. If it does change (and it may increase drastically; a 1000x increase is not unusual), you have just confirmed that your issue is indeed caused by bufferbloat.
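The test above can be sketched as follows (the -threads value is illustrative; duplicacy defaults to a single thread):

```shell
# Window 1: keep a baseline latency measurement running.
ping google.com

# Window 2: saturate the upstream with a multi-threaded backup
# (-threads 4 is an illustrative value).
duplicacy backup -threads 4

# If the round-trip times in window 1 jump from ~10 ms into the
# hundreds or thousands of ms while the backup runs, that points
# to bufferbloat on your uplink.
```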

You need network equipment that can manage the queue to prevent buffers from filling up (this is usually achieved with SQM algorithms such as fq_codel). A number of devices, both commercial (Ubiquiti EdgeRouters and the USG) and free (OpenWRT), support this rather well.
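As a rough illustration of what SQM does, here is a minimal sketch for a Linux-based router (assumptions: WAN on eth0, ~1 Mbit/s upstream; the 900kbit figure is an illustrative shaping rate just below the line rate. On OpenWRT you would normally configure this through the SQM scripts rather than raw tc commands):

```shell
# Shape egress slightly below the line rate so the queue builds here,
# where fq_codel can manage it, instead of in the modem's oversized buffer.
tc qdisc replace dev eth0 root handle 1: htb default 10
tc class add dev eth0 parent 1: classid 1:10 htb rate 900kbit
tc qdisc add dev eth0 parent 1:10 fq_codel

# Verify the queue disciplines are in place:
tc qdisc show dev eth0
```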

How well? Anecdotally, I had a 12/1 Mbps connection (yes, 1 megabit per second upstream) for a very long time, and I was uploading non-stop with various tools (backup, sync, and other services) – of course, it used to take forever to transfer anything. My connection was essentially saturated at 100% all the time, yet the ping never exceeded 10 ms and users saw no impact on their browsing or other activities.

What I’m trying to say is that instead of limiting your backup speed and effectively under-utilizing the connection you are paying for, it would be more productive to address the root cause of the issue, which has nothing to do with bandwidth utilization.


I’d just like to add that while SQM may fix this issue, I don’t think it would be accurate to describe it as bufferbloat. Bufferbloat is about latency-sensitive traffic (gaming, VoIP, etc.) being affected by long queues. (I suppose it depends on whether the poster above is doing any of that, but the fact is that all download traffic will be affected…)

Really, at its simplest level, this is most probably caused by an asymmetric connection (similar to what you describe, with a very small upstream) throttling itself – the upstream is so saturated that ACK packets can’t get out in a timely fashion.

The result is that downloads slow down: the sender at the other end of the TCP connection isn’t receiving those ACKs in time, so it deliberately slows down, thinking you can’t handle more packets, even when you’ve got plenty of downstream capacity to spare.
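A back-of-the-envelope calculation (all numbers are illustrative assumptions, not taken from this thread) shows how much upstream a fast download needs just for its ACKs – and why a saturated 1 Mbit upstream starves them:

```python
# Rough estimate of the upstream bandwidth consumed by ACKs while
# sustaining a fast download. All numbers are illustrative assumptions.
download_bps = 50_000_000            # 50 Mbit/s downstream
mss = 1460                           # typical TCP segment payload, bytes
segments_per_s = download_bps / 8 / mss
acks_per_s = segments_per_s / 2      # delayed ACKs: one ACK per two segments
ack_bits = 40 * 8                    # minimal IP + TCP headers, no payload
ack_bps = acks_per_s * ack_bits

print(f"ACK traffic: ~{ack_bps / 1e3:.0f} kbit/s of upstream")
```

With these numbers the ACK stream alone needs several hundred kbit/s; if a 1 Mbit/s upstream is already full of backup traffic, those ACKs sit in the bloated queue, arrive late, and the sender’s congestion control backs off.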

Personally, I haven’t got around to re-flashing my Archer C7 back to OpenWRT; I might do that this week. For the last few years I’ve been using the cFosSpeed traffic shaper for Windows, which does a wonderful job of prioritising ACKs and assigning applications appropriate priority levels. However, I highly recommend OpenWRT and getting some SQM and bufferbloat-busting action on the go. Being able to browse or game while streaming or otherwise heavily downloading on the rest of your network is really quite nice.


Are you using just the one thread?


I have tested with both one thread and multiple threads.


Thanks, this was helpful and after configuring my router I no longer need to rate limit.


Are you saying that most routers aren’t good at managing traffic or are you recommending OpenWRT in general, like you might also recommend that people flash a better OS onto their Android phone?


I think both examples are very similar, yes…

(Although, for me personally, I prefer untinkered pure stock Android on my Pixel 2 XL :slight_smile: …for the sake of sheer convenience and security – an update literally comes out every month – and I don’t need that level of customisation anyway. Plus I don’t think a better ROM exists for it atm, but when Google no longer supports it, it’ll be nice to be able to flash it and gain customisation options as well as a longer life.)

For routers, it’s a slightly different story. Vendors are generally quite lazy with implementing a full feature-set AND keeping up with security standards.

For example, the stock firmware on my TP-Link Archer C7 v2 is supposedly based on an old DD-WRT / OpenWRT firmware flavour, heavily modified in-house, but it lags way behind the latest official OpenWRT.

The stock firmware is updated occasionally – perhaps fixing a handful of things, but rarely adding major new features – and eventually the updates dry up. Meanwhile, LEDE added Smart Queue Management (SQM) relatively recently and remains open, more secure (imo), and more customisable than stock.


I will also say +1 for custom router firmware:

  • TP-Link is bad.

  • Huawei is meh as well.

  • Asus, on the other hand, seems to care about its users (their routers are expensive, but they are kept updated for many years afterwards!). Although Asus has had problems with security, their features are damn nice. For the average user there’s almost no reason to go the Merlin route.

  • Netgear also seems OK (though I only used a Netgear router for less than 6 months, so my opinion doesn’t have a solid base to build upon; the rest of the routers I’ve had for at least a year on stock firmware).