....
Copied chunk e8383d32e9baf341f2f4f997b47889dacb92d89bf4fed8769fcb62cbc36d5da5 (35/37) 52.87MB/s 00:00:00 94.6%
Copied chunk e67905ae91d5632b8e02fdcaed38c085f91e8b7315f2a5d31d8040b5770d4e51 (37/37) 51.89MB/s 00:00:00 100.0%
Failed to upload the chunk 212455ce1dbe8e940d787dc0c5543bdf26ebced7be898573bf00507860549c59: RequestError: send request failed
caused by: Put https://s3.eu-central-1.amazonaws.com/foo/bar: read tcp some-ip:some-port->some-ip:some-port: read: connection reset by peer
We also saw that the exit code of duplicacy was 0 in this case.
I had a look around the code where the log message is printed:
In case of an error the function returns false; however, the return value is not handled. Below is the only invocation of the method: the return value is simply discarded, which would explain why the exit code is still 0:
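(The actual snippets from the duplicacy source are omitted here. Purely as a stand-in, this is a minimal made-up sketch of the pattern I mean; none of these identifiers are duplicacy's.)

```go
package main

import (
	"errors"
	"fmt"
	"os"
)

// Hypothetical stand-ins, not duplicacy's types.
type uploader struct{}

func (u *uploader) put(chunkID string) error {
	// Simulated transport failure.
	return errors.New("read: connection reset by peer")
}

// Upload signals failure through its boolean return value.
func (u *uploader) Upload(chunkID string) bool {
	if err := u.put(chunkID); err != nil {
		fmt.Printf("Failed to upload the chunk %s: %v\n", chunkID, err)
		return false // the failure is reported here...
	}
	return true
}

func main() {
	u := &uploader{}

	// ...but if the only call site discards the result, nothing can ever
	// turn the failure into a non-zero exit code:
	u.Upload("some-chunk-id")

	// Propagating it instead would look like:
	//   if !u.Upload("some-chunk-id") { os.Exit(1) }
	os.Exit(0)
}
```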
Then, since the log messages above indicate that 100% of the chunks were uploaded, I thought a race condition might be hiding here: the connection to the S3 bucket may have been reset between putting the last chunk into the taskQueue
and the moment it is handled by
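To illustrate the kind of timing window I have in mind (again not duplicacy's code, just a minimal producer/consumer sketch with made-up names): the producer considers the last chunk handed off once it is in the queue, while the actual PUT happens later in a worker goroutine and can still fail there.

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

func putToS3(chunkID string) error {
	// Simulate the connection being reset before the PUT succeeds.
	return errors.New("read: connection reset by peer")
}

func main() {
	taskQueue := make(chan string, 1)
	var wg sync.WaitGroup

	// Consumer: the goroutine that performs the actual PUT. The connection
	// can be reset at any point before this runs, even though the chunk
	// has already been queued.
	wg.Add(1)
	go func() {
		defer wg.Done()
		for chunkID := range taskQueue {
			if err := putToS3(chunkID); err != nil {
				fmt.Printf("Failed to upload the chunk %s: %v\n", chunkID, err)
			}
		}
	}()

	// Producer: from its point of view the last chunk is done once it is
	// in the queue, even though the upload has not happened yet.
	taskQueue <- "last-chunk"
	close(taskQueue)

	wg.Wait()
}
```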
Maybe related, maybe not:
When browsing around the code I noticed that numberOfUploadingTasks is not decremented on every return path of the Upload method, which (as I understand the code) should actually cause duplicacy to hang forever when Stop is called. I think atomic.AddInt32(&uploader.numberOfUploadingTasks, -1) should be deferred at the beginning of the Upload method; see the sketch below. However, this would mean that the erroneous duplicacy run above shouldn't have exited…
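A sketch of what I mean by deferring the decrement (only the numberOfUploadingTasks field and the atomic.AddInt32 call mirror what I described above; the rest of the struct and methods are made up for illustration):

```go
package main

import (
	"errors"
	"sync/atomic"
)

// Heavily simplified stand-in for the real uploader type.
type ChunkUploader struct {
	numberOfUploadingTasks int32
}

func (uploader *ChunkUploader) put(threadIndex int, chunkID string) error {
	return errors.New("read: connection reset by peer")
}

// Deferring the decrement right at the top guarantees that it runs on every
// return path, including early error returns, so Stop cannot end up waiting
// on a counter that never reaches zero.
func (uploader *ChunkUploader) Upload(threadIndex int, chunkID string) bool {
	defer atomic.AddInt32(&uploader.numberOfUploadingTasks, -1)

	if err := uploader.put(threadIndex, chunkID); err != nil {
		return false // the deferred call still decrements the counter
	}
	return true
}

func main() {
	uploader := &ChunkUploader{numberOfUploadingTasks: 1}
	uploader.Upload(0, "some-chunk-id")
	// numberOfUploadingTasks is now 0 regardless of success or failure.
}
```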
To wrap up, we would have expected duplicacy to return an exit code other than 0.