Running backup after failed copy

I’ve been doing a copy between Wasabi (S3-type storage) and Google Cloud Drive of approx 5 TB. Around 90% of the chunks have copied across okay. It was going well, but now it won’t progress any further and continually throws this error:
2019-04-29 22:26:15.649 ERROR UPLOAD_CHUNK Failed to upload the chunk 0479b7f173aa964ba35ac19d94311ead75c22511e8deb88c93a6698a1b6144ca: googleapi: Error 400: Bad Request, failedPrecondition

My question is: can I now start a backup directly from my machine to the Google Cloud Drive storage without re-uploading all the copied chunks? i.e. if I start the backup, will it recognise the chunks which are already in the storage?

Note that I did set up the GCD storage using ‘-copy’ and ‘-bit-identical’, i.e.
duplicacy add -e -copy wasabi-backup -bit-identical gcd-backup home-server-backup gcd://duplicacy/home-server-backup

Also worth noting that I haven’t re-run the Wasabi backup since the GCD copy started, as I didn’t want to potentially mess anything up by adding/changing chunks in Wasabi while the copy was underway.

It may also be relevant that I was doing the copy from a different system than the one I’ll be starting the backup from (I ran the copy from a Google Compute Engine instance, hoping it would be quicker), so the machine which will be backed up doesn’t have the cache from the copy.

My thinking is that since I used the ‘-bit-identical’ flag to set up the GCD storage, I’ll be able to start the backup and it won’t re-upload the already-copied chunks?
(But maybe it won’t know about the existing chunks since the cache from the copy is on a different machine?)

Please let me know if there is an existing answer for this. I’ve tried searching so apologies if I missed it.

That’s what deduplication is all about: when you do a backup, chunks that already exist in the storage are detected and reused instead of being re-uploaded (a chunk with the same name is simply skipped rather than causing a conflict).
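As a rough sketch (assuming the storage was added under the name gcd-backup, as in the add command quoted above), the backup from the original machine would just be:

duplicacy backup -storage gcd-backup -stats

and the chunks already present on GCD won’t be uploaded again.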

Since you used -bit-identical, you can retry the copy using rclone or something similar (that’s exactly what -bit-identical is for) to get all the chunks onto GCD. Even if that doesn’t work, you can continue backing up to GCD normally.
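Something along these lines, for example (a sketch only: the rclone remote names and the Wasabi bucket/path are placeholders to replace with your own; the GCD path follows the add command above):

rclone copy --progress wasabi:my-wasabi-bucket/home-server-backup/chunks gdrive:duplicacy/home-server-backup/chunks

Copying just the chunks folder covers the missing chunks, since -bit-identical keeps the chunk file names identical between the two storages, which is what makes a generic sync tool usable here.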

As a data-safety measure, I would suggest you run a
duplicacy -d -log check -all against the GCD storage after the copy and the first backup, to verify that all the chunks referenced by your backups were copied successfully.
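From the repository machine it would look something like this (assuming the storage name gcd-backup from the add command above):

duplicacy -d -log check -storage gcd-backup -all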

I don’t understand what you mean by this :-?.

Thanks for the clarification.

In reference to the cache comment…
when a backup or copy runs, a ‘cache’ folder is created in .duplicacy. I was just a bit unsure about how it all works together. I thought that maybe duplicacy referred to the local cache content to work out which chunks had already been uploaded. So if I was running the backup from a different system than the incomplete copy, it wouldn’t have the copy’s cache to refer to and therefore wouldn’t know which chunks already exist in the destination.

From your explanation my understanding is obviously incorrect.

Thanks for your help.

The cache is only used to speed Duplicacy up. You can just as well delete the cache each time you run any command, without any risk whatsoever (though of course that will make everything slower).
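For reference, the cache lives under .duplicacy/cache inside the repository, so wiping it is just (a sketch, assuming the default repository layout and running from the repository root):

rm -rf .duplicacy/cache

Duplicacy will simply re-download the metadata chunks it needs on the next run.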

For anyone: Feel free to use the :heart: button on the posts that you found useful.

For the OP of any #support topic: you can mark the post that solved your issue by ticking the solution checkbox under the post. That of course may include your own post :slight_smile:
