Google Drive storage times out

I recently started encountering time-out errors when backing up to Google Drive. What could be causing this, and how do I fix it?

A couple of relevant lines from the log are:

```
2024-08-10 04:14:19.425 INFO GCD_RETRY [0] Maximum number of retries reached (backoff: 64, attempts: 15)
2024-08-10 04:14:19.426 ERROR UPLOAD_CHUNK Failed to upload the chunk e3cb8d2f79f43220ccff0694363c1e30aaa9d077ad04cc36dca05b99ab513963: Post "https://www.googleapis.com/upload/drive/v3/files?alt=json&fields=id&prettyPrint=false&supportsAllDrives=true&uploadType=multipart": read tcp 10.0.0.10:52273->172.253.122.95:443: wsarecv: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
```

[I cannot post more lines because they contain URLs, and I cannot upload files because I am a “new user”.]

You seem to be throttled.

How many threads are you backing up with? You may want to reduce that to 4 or even 2.
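If you run the CLI directly, that's just the -threads option on the backup command. A minimal sketch (the storage name here is a placeholder; substitute your own):

```
# Same backup, with upload parallelism reduced to 2 threads.
# "my-gcd-storage" is a placeholder storage name; use yours.
duplicacy backup -storage my-gcd-storage -threads 2 -stats
```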

I don’t think that’s it. I’ve been using 4 threads all along, and the time-out issue cropped up recently.

I tried reducing the number of threads to 2 and running a backup manually. When I do, the progress bar doesn't advance, and the predicted completion time climbs to several hours. I'll let it run to completion and report back with what the log says.

Reducing the number of threads to 2 results in no change.

Interesting. So it's not throttling.

What is the very first error message? GCD_RETRY is not the first.
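If the log is too long to scan by hand, something like this should pull it out (the log path and filename are assumptions based on the default duplicacy-web layout; point it at your actual log file):

```
# A sketch: print only the first ERROR line from the backup log.
# Path/filename are assumptions; substitute your real log file.
# (PowerShell equivalent: Select-String -Path <log> -Pattern ERROR | Select-Object -First 1)
grep -m1 "ERROR" "C:/ProgramData/.duplicacy-web/logs/backup-20240810.log"
```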

I’m unable to post the complete log for one of the backup jobs that’s failing, since it gets flagged due to the URLs in the log, but here (hopefully) is the first part.

```
Running backup command from C:\ProgramData/.duplicacy-web/repositories/localhost/0 to back up C:/data/IT/WindowsImageBackup
Options: [-log backup -storage Google-drive-storage -threads 4 -stats]
2024-08-09 23:45:01.857 INFO REPOSITORY_SET Repository set to C:/data/IT/WindowsImageBackup
2024-08-09 23:45:01.858 INFO STORAGE_SET Storage set to gcd://Duplicacy backups
2024-08-09 23:45:06.786 INFO BACKUP_START Last backup at revision 641 found
2024-08-09 23:45:06.828 INFO BACKUP_INDEXING Indexing C:\data\IT\WindowsImageBackup
2024-08-09 23:45:06.829 INFO SNAPSHOT_FILTER Parsing filter file \\?\C:\ProgramData\.duplicacy-web\repositories\localhost\0\.duplicacy\filters
2024-08-09 23:45:06.829 INFO SNAPSHOT_FILTER Loaded 0 include/exclude pattern(s)
2024-08-10 04:14:19.425 INFO GCD_RETRY [0] Maximum number of retries reached (backoff: 64, attempts: 15)
```

Interestingly, on this weekend’s scheduled run, I’m getting a different error. Unfortunately I cannot post the complete log because the forum won’t let me. Instead of a “connection attempt failed” message, I get:

```
wsasend: An existing connection was forcibly closed by the remote host.
```

Looks like it took 4.5 hours until Google decided it had had enough.

You can add the global -d flag to the backup command to get more verbose logging and see what was actually going on.
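Mirroring the Options line from your log, it would look something like this from the CLI (global flags go before the command name; if you run through the web UI, -d would go in the job's global options instead):

```
# Same command as in your log's Options line, with the global -d
# (debug) flag added before the backup command.
duplicacy -d -log backup -storage Google-drive-storage -threads 4 -stats
```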

I think if you format your logs as code (using three backticks) they should not be detected as URLs and trigger the antispam filter.

```
Logs go here
One two three
```

Results in

Logs go here 
One two three

I’ve edited your comments to add the backticks.

More here: Formatting posts using markdown, BBCode, and HTML - Using Discourse - Discourse Meta

Or you can delete the URLs.

Thanks for the tip about formatting logs as code!

So it turns out the issue was not with Duplicacy at all. A couple of days before I was notified of the problem with backups, I had done an upgrade on our gateway. The upgrade seemed to go well, but apparently it caused multiple issues with uploads, which affected Duplicacy backups. Restarting the gateway resolved the issues, and I just completed a manually triggered backup.

Thank you for the help, and I will now go away and slap my forehead.


Very interesting! Did you find out what the actual issue was? Was it running out of memory from keeping old connections around? What gateway is this? The Google Drive API is quite light on resources (as opposed to, say, the STORJ native backend).

The gateway is a Unifi Dream Machine Pro, and memory use was fine. To be honest, I don’t know what the actual problem was or whether it will recur, but I do know that restarting the gateway resolved multiple issues, not just with Duplicacy. If it does recur, I know where to focus my troubleshooting!

Thanks again!

I’ve been using a UDMP since it was released too. I’ve never had to reboot it outside of automatic updates, in spite of abusing it 24/7 with about 200 Mbps of p2p traffic almost nonstop.

Maybe there is a more fundamental issue, like bad RAM or flash? I’m not sure how to validate Ubiquiti hardware, but it would be worth contacting their support; under normal operation reboots should not be required.

It could be other things, like bad SFP+ modules, if you use them, or something else in the network that gets reset when you reboot the UDMP: perhaps the modem or ONT or what have you. I do have to reboot the modem almost biweekly because it gets concussed by all those connections.