Google Drive errors: User rate limit exceeded

When I upload to Google Drive I sometimes get the following error, which causes the whole backup to be cancelled:
User rate limit exceeded.

Any way to fix this?

Thanks!

How many threads are you using? Reducing the number of threads may help.
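
For example, something along these lines with the CLI version (the thread count here is just a starting point, not an exact recommendation):

duplicacy backup -threads 2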

There is a PR that hasn’t been merged yet, but I plan to merge it for version 2.0.10.

That worked! Thanks a lot! I first thought applying a “rate limit” in the app would help, because it looked like the Google error, but the thread limit did the trick.

However, I woke up this morning to find another issue, but I’ll just make a separate thread for that!

The issue is back and always pops up at about the same time. It keeps happening even though I lowered the thread count and the maximum speed.
https://puu.sh/y3pnX/39eaf53738.png

The errors all start with “ERROR Failed to find the path for the chunk”.

The chunk names are different every time.

If you build from the latest source on GitHub and run the CLI version, it should handle this situation better, since that PR has been merged.
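
In case it helps, building the CLI from source is roughly along these lines, assuming you have a Go toolchain installed (the repo’s README has the authoritative, up-to-date steps):

go get -u github.com/gilbertchen/duplicacy/...

That should put the duplicacy binary in your Go bin directory, which you can then run against the same repository.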

Is it possible to build the GUI version with that fix? Is there any guide on how? ^^

I can get you a pre-2.0.10 build tomorrow. Will a 64-bit Windows build work for you?

Not to resurrect an old thread, but I’m trying to perform an initial backup/snapshot to gcd and it hit this issue as well after running for a number of hours. So I don’t believe it’s an issue with the number of threads (I’m only using 4); maybe it’s a “daily limit” or something?

Any further workarounds since this thread was last discussed (v2.1.2)?

Does this message give any indication of which limit I exceeded?

[1] Maximum number of retries reached (backoff: 64, attempts: 15)
Failed to upload the chunk 7c68b274da5073552d7bce4816b3fa482eb36c3e46728ce918e0f3ddf7938518: googleapi: Error 403: User rate limit exceeded., userRateLimitExceeded

Thanks.

How many threads are you using? Try the same backup with no more than 4 threads; in that case you shouldn’t see the error again (my best guess).
You could also run the backup with debugging info to see more details: duplicacy -d -log backup -threads 3
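
If lowering the thread count alone doesn’t help, you could also try throttling the upload with the backup command’s rate limit option; if I remember it correctly, the value is in kB/s:

duplicacy backup -threads 2 -limit-rate 1024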

As mentioned, I typically use 4 threads; I can try reducing that. I restarted the initial backup, Duplicacy performed a re-scan of the files, and then it hit the same issue again relatively quickly this morning.