OneDrive family

I'm trying to copy my local repository to remote OneDrive storage (family subscription) and got the following error message:
Response code: 413; retry after xxxx seconds

This message repeats with different numbers of seconds, but no chunks are copied after that. Sometimes several chunks are copied before the message appears, but not many, usually fewer than 10.

Can you tell me, please, what error code 413 means and how I can fix it?

It's OneDrive telling you that you are using too many resources and to back off. It's how it rate-limits.

To avoid this, reduce the number of threads to one.
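For example, assuming the OneDrive storage was added under the name `onedrive` (a placeholder; use your own storage name) and your CLI version supports the `-threads` option on `copy`:

```
# Copy snapshots to OneDrive with a single upload thread
# to stay under the service's rate limits.
duplicacy copy -from default -to onedrive -threads 1
```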

OneDrive and Dropbox don't tolerate abuse by tools like Duplicacy and others that try to use these services for bulk storage. They are designed for file sharing, not to serve as a backing store for another app. There are specialized storage services designed for that instead: Amazon S3, Backblaze B2, and Wasabi are the best-known examples.

You can get away with abusing the service to a degree, but you need to be careful and use as few resources as possible, to fly under the radar, so to speak.

Reducing the number of threads to a minimum and increasing the average chunk size are a few things you can do to minimize the impact on the service.

I would not recommend using any of the *Drive-type remotes as a backup destination to begin with; there are far more suitable alternatives readily available.

Well, I have already played with the threads parameter and found the following pattern:
Duplicacy copies as many chunks as the number of threads specified, i.e. only one chunk per thread. I reduced it to one and got the same result: only one chunk is copied before the error message appears.

I tried using another backup program with the same OneDrive account and didn't run into any problems. It works, and I get a good upload speed, by the way. But this program has different permissions to work with OneDrive:

  • Read your profile
  • Have full access to all files you have access to
  • Maintain access to data you have given GoodSync access to

Only the last one is used in Duplicacy. Maybe that is the problem?

P.S. My average chunk size is pretty big: 128M.

GoodSync is a file-copy program. OneDrive limits the API rate.

Copying one large file may take one API call.

Duplicacy shreds a file into small chunks. This results in proportionally more API calls: the smaller the chunk, the more calls.
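Rough numbers, assuming one upload call per chunk and ignoring listing and metadata calls:

```
100 GB at   4 MB average chunk size ≈ 25,600 chunks ≈ 25,600 upload calls
100 GB at 128 MB average chunk size ≈    800 chunks ≈    800 upload calls
```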

Did you specify it at init time? If not, the average chunk size would be 4MB.
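Setting it explicitly has to happen when the storage is initialized. A sketch, with a placeholder snapshot ID and path (`-c` sets the average chunk size; `one://` is the OneDrive backend prefix):

```
# Initialize with a 128M average chunk size instead of the 4M default.
duplicacy init -c 128M mybackup one://Backups/duplicacy
```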

Ultimately, the problem here is OneDrive (and Dropbox and other document-sharing services) not being suitable as a backup target. It may work at first on small datasets, but you will hit limits sooner or later.

It’s not a new issue. Here is an old thread:

I recommend switching to cloud object storage, like Storj, B2, or Wasabi.
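If you decide to migrate, a rough sketch of adding an object store as a second, copy-compatible storage (the storage name `b2storage`, bucket name, and snapshot ID are placeholders):

```
# Add a B2 bucket as a copy-compatible second storage,
# then copy the existing snapshots over to it.
duplicacy add -copy default b2storage mybackup b2://my-duplicacy-bucket
duplicacy copy -from default -to b2storage
```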


Yes, I specified the chunk size when initializing the storage.

I'm trying Google Drive right now. It looks like it works pretty well. The speed is not so great, but acceptable. If no problems arise, I think it will be my choice.

Google Drive has been working quite well for a very long time. This has changed recently: they are tightening security and have started enforcing quotas.

Make sure you don't use a Shared Drive as a destination: Google shared drives have a limit of 400k objects. With a decently sized backup it's easy to hit that limit very quickly.
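For scale, a back-of-the-envelope estimate (assuming roughly one object per chunk):

```
400,000 objects x   4 MB average chunk ≈  1.6 TB of backup data
400,000 objects x 128 MB average chunk ≈ 51.2 TB of backup data
```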

Any reason you can't use B2/Wasabi/STORJ/AWS/GCS? It will work dramatically better than any drive service (which is itself built on top of those services anyway).

Right now I'm playing with "My Drive". I am not planning to share my backups.

For long-term storage I use Google Cloud (archive class). GCloud buckets are very customizable, with tons of settings and permissions. Right now I am looking for remote storage for hot data, i.e. with no download fees.


Google Drive has a restriction on uploads: it allows uploading only 750GB per day for the same account. After hitting that limit I get an error message and no more chunks are uploaded until the next day.

I don't think that is a huge restriction, but it's there. For example, I can't copy a big storage to Google Drive all at once, but I can resume the copy routine the next day by simply repeating the same copy command.
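The resume itself is just re-running the same command, e.g. with a placeholder storage name `gdrive`; chunks that already exist on the destination are skipped, so the copy continues where it stopped:

```
# Re-run the next day; already-uploaded chunks are skipped.
duplicacy copy -from default -to gdrive
```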

You are on the forum of a backup program: it's literally all about storing, uploading once, and never downloading. Why are you trying to optimize for download?

For that exact reason, using hot storage for backup is a waste of money. You want storage as cold as it can get. The overall cost needs to be optimized: egress can be exorbitantly expensive, but multiplied by the probability of ever needing it, it's still negligible; you are planning to pay for storage forever and never restore.

Google will continue tightening the nuts and bolts to prevent abuse of their file-storage and collaboration service.

The only right approach long term is to use a service provider whose incentives are aligned with yours, as described in the linked post above.

Otherwise it will be a relentless game of cat and mouse. Nobody needs that in their backup plan.