Diabolical download speeds off OneDrive using Duplicacy

I am a new licensee! Yay! Currently running Duplicacy in an Unraid docker.

My internet connection is nominally rated at 140Mb/s down and 28Mb/s up (megabits per second).

As a new user (to both), I wanted to test the backup and restore process using just one or two files, and compare it against uploading/downloading to OneDrive via their web UI.

Both Duplicacy and the web UI will upload an 842MB compressed file at ~2.7MB/s (MBytes per second), with Duplicacy being a tad faster than the browser upload.

However, on download, whereas the web UI downloads the file at ~13MB/s, Duplicacy downloads at less than its upload speed, roughly 2MB/s. That would make a lengthy restore unworkable.

Are there any other tests or settings I can try that would improve this? I'm currently running the default 4 threads.

Any help would be welcome.
Thanks

Don't use more than 2 threads with OneDrive. Ideally, stick to one.

I tried a single thread and it made no difference to the upload speed at 2.7MB/s, and only a marginal increase in restore speed at 2.3MB/s. That's just unusable for any plan that relies on restoring things quickly, especially given I have a 900Mb fibre upgrade coming in two weeks!

On one hand, 28Mb/s is about 3.5MB/s, and accounting for API latency overhead, you're getting roughly the expected speed.

On the other hand, OneDrive is not designed for bulk storage: it's a file sync and collaboration service, and as long as the OneDrive client works (and it does not need to be fast), Microsoft won't be fixing any issues that only help customers abuse the platform. You will never get good performance from it, and if you push for it (e.g. with multiple threads) they will throttle you, up to and including a ban. So to use OneDrive as a target in the first place, you should not be using more than 2 threads, ideally 1. Asking for good performance on top of that is too much.

To verify, connect to OneDrive with the same credentials using Cyberduck or Transmit and try transferring a collection of 1-10MB files. See what kind of performance you get.

If you want good restore performance, avoid file sharing services (Dropbox, OneDrive, etc.). Use B2, S3, or Azure, where you can crank up the thread count and get essentially unbounded throughput.

Relevant comment here: (Newbie) Backup entire machine? - #6 by saspus


With 28Mbit upload you'd be fine using 4 threads, likely better than 2 or 1 over the long term. You'd be throttled once in a while, but most of the time you'd be maxing out your upload bandwidth (I can certainly cap out a 20Mbit upload with OneDrive most of the time). This might be different for 900Mbit upload use cases; you may want to see how much you're being throttled and potentially limit your bandwidth usage and/or threads. Throttling will also be affected by any other OneDrive usage you have in parallel (e.g. syncing with the OD client, direct web client operations, rclone, etc.).

You may want to play with the duplicacy benchmark command to see where your bottlenecks are.
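
Something along these lines will show upload and download throughput separately (a rough sketch; the exact flags can vary by version, and "onedrive" is just whatever name you gave that storage):

    # run from the repository directory; measures disk, chunking and storage upload/download speeds
    duplicacy benchmark -storage onedrive -upload-threads 1 -download-threads 1
    duplicacy benchmark -storage onedrive -upload-threads 4 -download-threads 4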

Thanks for that. I am beginning to suspect throttling with OD. I don't have any client syncing with OD as I am using one of the 'extra' family subscriptions to try it.

As another test I used my Amazon AWS account to create a 'free' S3 repository and configured that. This connection maxed out my upload bandwidth and managed 8.5MB/s restore speed, so nearly half of my max download.

This was with 4 threads (forgot to change it), so I changed the restore options to -threads 2, and the restore was a bit slower at 6MB/s.
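
For reference, the restore runs were along these lines (the revision number is just a placeholder for my test snapshot):

    # restore a single revision with 2 download threads
    duplicacy restore -r 1 -threads 2 -stats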

But 8.5MB/s restore is far better than the 2.3MB/s OneDrive was struggling to deliver. I just wish it could max out my bandwidth!

Cheap, fast and high capacity - you usually can't have all three at the same time. OneDrive can hit cheap and high capacity, but it ain't fast. For my backup use case, this is perfect. If you want fast and high capacity, it ain't going to be cheap.


There is also the consideration of reliability. OneDrive uses Azure storage underneath, so why not eliminate another layer of complexity that can introduce new issues, and use Azure directly?

Same deal with Google Drive (albeit it's much more stable) → Google Cloud Storage, and Amazon Drive (when it still worked) → AWS S3.

Penny-pinching with the storage service that your disaster recovery plan relies on is hardly worth it. Storage is cheap these days; there's no reason to take on extra risk.

LOL, when Azure provides unlimited storage and unmetered bandwidth for $50/month, I'm sure I'll switch. Until then, I don't think so.

OK, but storage and bandwidth cost money to provide, and anything "unlimited for a fixed price" cannot be a sustainable business model. Look at what happened to those who tried (CrashPlan, Backblaze Personal for that matter, or even Amazon Drive).

Essentially you are looking for a way for others to subsidize your use. You may be able to keep finding that short-term, here and there, accepting throttling and other drawbacks, but ultimately it's not a reliable way forward.

I, on the contrary, prefer the "pay for what you use" model. I actually like that Amazon charges me per transaction and per byte uploaded, downloaded and stored, based on tiers: because I pay for it, I can rely on it. It's an honest and sustainable business model; I don't depend on other users and don't have to look for another loophole tomorrow.


You can have successful all-inclusive models and unsuccessful pay-as-you-go models; there is no direct dependency there. All-you-can-eat restaurants? Can be sustainable. Unlimited-bandwidth internet connections? Can be sustainable. Unlimited-minutes mobile plans? Can be sustainable.

My internet provider has offered unlimited bandwidth plans for many, many years, and there is no indication they are going out of business any time soon. In fact, unlimited bandwidth for end users is more the norm than the exception nowadays. Are heavy users being subsidized by light users? Sure, just like people buying things on sale are being subsidized by people who pay full price.

And if you think that just because you're paying a la carte your services cannot be terminated or substantially changed, well... let's just say there is no universal law that prevents it from happening. You may want to check out the Nirvanix story; it's dated but still quite relevant. Some clients had to extract petabyte+ datasets on two weeks' notice :wink:

I personally treat all storage as unreliable, and public cloud is no exception. As far as I am concerned, any single storage can disappear in its entirety at any particular moment for whatever reason, and my overall infrastructure should be resilient to such events.

I'm looking at this from a value-to-the-consumer perspective, not the service provider's ability to make a profit. And I'm not saying that the pay-as-you-go model guarantees service immortality. All the examples you mentioned above are models that are horrible for the consumer, for two main reasons:

  1. These models unfairly penalize many light users in order to subsidize a few heavy abusers.
  2. Service provider incentives are not aligned with those of the customer: the provider is incentivized to prevent you from using the resources you paid a fixed price for, because the less (or slower) you use, the more they earn. In contrast, in a pay-for-what-you-use approach, the more (or faster) you use, the more they earn.

You can wrap it any way you want, but this conflict is fundamental and unavoidable.

I'm beginning to think that Amazon S3 is probably the better option for me. I just want to store a weekly full system backup on a Sunday (~75GB compressed) and daily incremental backups (~2GB), and hold two weeks' worth. So about 175GB in total, which I hope never to have to download for a full system restore. Trouble is, the backup software I use doesn't appear to support S3. It supports OneDrive (Personal and Business), Dropbox and Google Drive, but that's it. Mmmmmm.

I may just think about backing up to Unraid and then letting Duplicacy back that up to S3. But this means that if my backup on the NAS were unavailable, I would potentially need to download a full week's worth of backups to something else to restore my system, instead of just restoring directly from the cloud. Never easy and straightforward, is it?!

I just wish my testing of the Duplicacy process, even with S3, had been faster for a potential restore than the 8.5MB/s I experienced. Perhaps a bit of throttling on a 'free' 5GB account?

In your experience, does the download speed look any faster on a paid-for S3 tier?

There must be a typo there somewhere, so I assume 200GB. Since you only want to store it for a very short time, the AWS "S3 Standard - Infrequent Access" tier seems the best fit, which works out to 200GB * $0.01/GB/month = $2/month, plus API costs (see Amazon S3 Simple Storage Service Pricing - Amazon Web Services).

This is so weird. What software is this?

You can back up with Duplicacy to Unraid and replicate with another instance of Duplicacy from Unraid to the cloud. Then, if you need to restore and Unraid is gone, you can initialize Duplicacy against that cloud destination locally and restore directly. No need to download anything in full first.
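
Roughly, the moving parts look like this (a sketch only; the storage names, URLs and revision numbers are placeholders, and you'd check the add/copy/restore command help for the exact options):

    # on the PC: back up to local storage on the NAS
    duplicacy init -e pc-backup sftp://user@unraid/duplicacy-storage
    duplicacy backup -threads 4

    # from any machine that can reach both storages: register the cloud storage
    # as copy-compatible with the local one, then replicate the revisions
    duplicacy add -copy default -bit-identical s3 pc-backup <s3-storage-url>
    duplicacy copy -from default -to s3 -threads 8

    # disaster recovery with Unraid gone: point a fresh repository at the cloud copy
    duplicacy init -e pc-backup <s3-storage-url>
    duplicacy restore -r <revision> -threads 16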

How many threads? With S3 you can use 10, 20, 40, etc. threads, and you are only limited by your ISP connection. Duplicacy has a built-in benchmark. What does it report?

Unless the daily changes are new compressed music files, images and/or videos, Duplicacy's deduplication will reduce the total storage requirement quite a bit.

A traditional 7-day round-robin schedule of 1 full + 6 incremental backups isn't needed with Duplicacy. Your very first full backup will likely be less than the estimated ~75GB. Each daily incremental afterwards will likely be less than the estimated ~2GB, and might even shrink over time depending on the type of data.

One of the huge advantages of a chunk-based deduplicating backup tool like Duplicacy is that only the very first run is a full backup. All successive uploads to the same storage destination, while technically "incremental" backups, are effectively full backups because duplicate chunks are reused. Any snapshot, including the first one, can be pruned at any time.
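
For the two-week retention mentioned earlier, that could be a prune policy roughly like the following (the numbers are purely illustrative):

    # illustrative retention: drop revisions older than 14 days,
    # and keep at most one revision per day for anything older than a day
    duplicacy prune -keep 0:14 -keep 1:1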

Nope, unfortunately reliable backups rarely are. :smirk:

I haven't compared Amazon S3, but it certainly is the case for Google Drive and Microsoft OneDrive (I have business accounts for both at work).

Given that your storage requirements are around 100GB (75GB + 13 * 2GB), as an alternative to OneDrive / Google Drive / S3, consider rsync.net.

Rsync.net offers a special pricing tier for advanced users willing to pay annually (https://rsync.net/products/borg.html): 200GB of data costs $36/yr with no ingress/egress charges and/or bandwidth caps (the minimum charge is for 100GB = $18/yr, $0.015/GB thereafter).

I helped a friend set up a backup to rsync.net. No "buckets" or other opaque storage format and no special API, just standard SSH + SFTP access to a Linux account that you can store data on however you want.
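
With Duplicacy that just means using the standard SFTP backend, something like the following (the user, host and path are placeholders for your own account details):

    # rsync.net presents a plain SFTP/SSH account; the path here is relative to the home directory
    duplicacy init -e my-pc sftp://user@yourhost.rsync.net/duplicacy-storage
    duplicacy backup -threads 2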

A good and reliable solution must be simple and straightforward; otherwise, you can't trust it. If at any point you feel the arrangement is becoming too cumbersome, it's time to stop, reassess and likely start over.

I've looked at it; rsync.net may still have its uses, but it's not a good fit for backup in general, and for the OP in particular, for several reasons:

  • You have to pay for storage upfront, regardless of whether you use it or not, and therefore waste the unused space.
  • There is a 680GB minimum order.
  • The cost of their geo-redundant tier is comparable to the most expensive AWS S3 tier.
  • No egress fees: with backup you rarely, if ever, have to egress, and yet you are indirectly paying for other people's egress.
  • No API fees: there is only a very small number of calls involved in uploading backup data, and yet you are indirectly paying for other people's use of the infrastructure.

They provide an interesting solution with interesting features, but those features are wasted if it is only used as a backup target. In fact, if you look closely, their main selling points ("What Makes rsync.net Special") are not special at all... Since they started in 2001 a lot has changed. It's hard to compete with Amazon, Google and Microsoft.

What? Why is that a downside?


I've just explained it right there. "Free" is an illusion. Traffic costs money. Not charging a specific customer for it means the aggregate cost is rolled into the storage price. For the backup use case there is very little egress. Hence, the customer ends up paying for other users' egress. Or, to put it another way, part of the payment goes to cover other users' egress, and as a result the customer receives less value for the services provided, i.e. overpays. The same goes for the infrastructure/API costs.

In other words, "free" things, as always, are the most expensive ones.

LOL, that's getting better and better. But I'll play ball and extend the argument. So both Google Cloud Storage and AWS charge for storage and bandwidth, and are right there in terms of how things should work, right? But wait, Google Cloud is losing a ton of money as a business unit; this means that if you use GC you're leeching off advertisers who have to pay higher ad rates to support your GC usage. Not fair.

AWS is the opposite; it is quite profitable. But wait, that means you're supporting other businesses that lose money, like Amazon's investment in electric trucks/vans at Rivian. Not fair again!

saspus:
> There must be a typo there somewhere, so I assume 200GB. Since you only want to store it for a very short time, the AWS "S3 Standard - Infrequent Access" tier seems the best fit, which works out to 200GB * $0.01/GB/month = $2/month, plus API costs (see Amazon S3 Simple Storage Service Pricing - Amazon Web Services).

I'll take a look. Cheers!

saspus:
> This is so weird. What software is this?

EaseUS Todo Backup, their 'Enterprise' version, even though I just use it for my personal PC.

saspus:
> You can back up with Duplicacy to Unraid and replicate with another instance of Duplicacy from Unraid to the cloud. Then, if you need to restore and Unraid is gone, you can initialize Duplicacy against that cloud destination locally and restore directly. No need to download anything in full first.

I'm afraid I got into the habit of using EaseUS to do a full system/drive image backup, including the UEFI partition. To 'restore' the entire system, EaseUS has a Pre-OS environment installed that can restore straight to the UEFI and boot partitions in their entirety. I'm not sure how I could achieve that with Duplicacy if there is a non-functioning system drive to sort out!

saspus:
> How many threads? With S3 you can use 10, 20, 40, etc. threads, and you are only limited by your ISP connection. Duplicacy has a built-in benchmark. What does it report?

It was 4 threads. Interesting, I'll test with more and see what happens!