I’ve been running a copy of approximately 5 TB between Wasabi (S3-type storage) and Google Cloud Drive. Around 90% of the chunks have copied across okay. It had all been going well, but now it won’t progress any further and continually throws this error:
2019-04-29 22:26:15.649 ERROR UPLOAD_CHUNK Failed to upload the chunk 0479b7f173aa964ba35ac19d94311ead75c22511e8deb88c93a6698a1b6144ca: googleapi: Error 400: Bad Request, failedPrecondition
My question is: can I now start a backup directly from my machine to the Google Cloud Drive storage without re-uploading all the chunks that have already been copied? i.e. if I start the backup, will it recognise the chunks which are already in the storage?
Note that I did set up the GCD storage using ‘copy’ and ‘-bit-identical’ i.e.
duplicacy add -e -copy wasabi-backup -bit-identical gcd-backup home-server-backup gcd://duplicacy/home-server-backup
Also worth noting: I haven’t re-run the Wasabi backup while the GCD copy was running, since I didn’t want to risk messing anything up by adding or changing chunks in Wasabi while the copy was underway.
It may also be relevant that I ran the copy from a different system (a Google Compute Engine instance, as I hoped the speed would be better) than the one I’ll be starting the backup from, so the machine to be backed up doesn’t have the cache from the copy.
My thinking is that since I used the ‘-bit-identical’ flag when setting up the GCD storage, I should be able to start the backup and it won’t re-upload the already-copied chunks?
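Concretely, what I’m hoping is that something like the following would work from the machine to be backed up (a sketch only, assuming the storage name ‘gcd-backup’ from the add command above; I haven’t run this yet):

```shell
# From the repository on the machine to be backed up,
# back up directly to the GCD storage added earlier.
# With -bit-identical, chunks already copied to GCD should
# hash to the same names, so existing ones would be skipped.
duplicacy backup -storage gcd-backup -stats

# Afterwards, verify that all chunks referenced by the
# snapshots actually exist in the GCD storage.
duplicacy check -storage gcd-backup
```

(I’m assuming here that ‘duplicacy backup -storage …’ checks for each chunk’s existence on the remote rather than relying on the local cache, but that’s exactly the part I’m unsure about.)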
(But maybe it won’t know about the existing chunks, since the cache from the copy is on a different machine?)
Please let me know if there’s an existing answer for this. I’ve tried searching, so apologies if I missed it.