Returned 503 no tomes available

I have a job that backs up a drive to local storage, checks the backup afterward, copies the backup to B2, and then checks the data on B2.

For the past few days, step 3 (copy to B2) has been failing, and the log shows the line below. Does anyone know whether this is an issue with Backblaze? Since the local copy of the backup is good, the copy should be a straightforward task. Or is there a bad chunk on B2 that I need to remove? Could this be related to the B2 announcement about their recent write improvements (How We Achieved Upload Speeds Faster Than AWS S3)?

2023-11-09 21:17:09.043 ERROR UPLOAD_CHUNK Failed to upload the chunk 4944f4da15e931cf812ec937aafdea466f750c308ea0d825be34a6b9aedd97ca: URL request ‘https://pod-040-2016-01.backblaze.com/b2api/v1/b2_upload_file/7729016381036c4488550a11/c004_v0402016_t0026’ returned 503 no tomes available
Failed to upload the chunk 4944f4da15e931cf812ec937aafdea466f750c308ea0d825be34a6b9aedd97ca: URL request ‘https://pod-040-2016-01.backblaze.com/b2api/v1/b2_upload_file/7729016381036c4488550a11/c004_v0402016_t0026’ returned 503 no tomes available

I would start with Backblaze support. 5xx errors are server errors.

I take that back.
It seems it’s a feature of their architecture, and client software is supposed to retry on 503 after waiting a bit, using a new upload URL that actually has space: Why You Are Getting a B2 503 or 500 Server Error and What to Do Next
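
For reference, here’s a minimal sketch of the pattern that blog post describes, not duplicacy’s actual code: on a 500/503, back off and request a fresh upload URL via b2_get_upload_url instead of reusing the one that just failed. `getUploadURL` and `uploadChunk` are hypothetical stand-ins for the corresponding API calls:

```go
// Hypothetical sketch of the retry pattern described in the B2 blog post; not
// duplicacy's actual implementation. getUploadURL stands in for a call to
// b2_get_upload_url, and uploadChunk for a POST to the returned uploadUrl.
package b2retry

import (
	"fmt"
	"math/rand"
	"net/http"
	"time"
)

// uploadTarget holds the uploadUrl and authorizationToken returned by
// b2_get_upload_url; each URL points at a specific storage pod.
type uploadTarget struct {
	URL       string
	AuthToken string
}

func uploadWithRetry(
	getUploadURL func() (uploadTarget, error),
	uploadChunk func(uploadTarget) (*http.Response, error),
) error {
	const maxAttempts = 5
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		// Ask for a (possibly different) upload URL on every attempt so that
		// a full or busy pod is not reused.
		target, err := getUploadURL()
		if err != nil {
			return err
		}
		resp, err := uploadChunk(target)
		if err != nil {
			return err
		}
		if resp.Body != nil {
			resp.Body.Close()
		}
		switch {
		case resp.StatusCode == http.StatusOK:
			return nil // chunk uploaded
		case resp.StatusCode == 500 || resp.StatusCode == 503:
			// "no tomes available": back off with a little jitter, then retry
			// with a freshly issued upload URL.
			time.Sleep(time.Duration(attempt*attempt)*time.Second +
				time.Duration(rand.Intn(1000))*time.Millisecond)
		default:
			return fmt.Errorf("upload failed: %s", resp.Status)
		}
	}
	return fmt.Errorf("giving up after %d attempts", maxAttempts)
}
```

The important part is asking for a new URL on every attempt; retrying the same pod-specific URL is likely to keep returning the same 503.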

@gchen?

@ExitCodeZero, you may want to switch to using their S3 gateway and make it their problem.

Ultimately, S3 won; even Storj runs an S3 gateway. There is no point in risking a very niche API for no benefit in return.

Thank you for the response. After reading it and the blog post, it’s useful to know that it’s just a matter of retrying. I ran the copy again on Saturday (hoping it would be less busy) and it was able to complete after uploading ~65,000 chunks.

Great, but I meant that the API client, i.e. duplicacy, should be doing the retrying, not the user.

Otherwise it will happen again.

That’s assuming it doesn’t retry today. Perhaps it retried a few times and gave up? Were there more attempts in your log after the initial failure?

If so, the fix would probably be to wait before retrying, or at least to retry only once a new upload URL has been received.

I understand. I was just eager (and happy) to get the copy to the cloud.

I looked at the COPY options and don’t see a retry option. Looking at the log, it encountered the error and the task just stopped, i.e. there was no retry attempt.

At this point, what are my options? I looked into the S3 gateway you mentioned and I’m not sure I understand the specific steps I need to take. It seems to involve getting a B2 API key and using it to create the “S3” storage.

I’m dealing with this same issue. Can anyone from Duplicacy help? This needs to be fixed and released ASAP.

To work around this immediately, switch to Backblaze’s S3 endpoint.

I thought about that, but I didn’t see any way to edit an existing storage’s type/credentials in the UI.

Remove it and add an S3 one with the same name.
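
When you add the new storage, the endpoint is the S3-compatible host for your bucket’s region (it looks like s3.us-west-002.backblazeb2.com; the exact host is shown on the bucket’s page), and the B2 key ID / application key go in as the S3 access key / secret key. The data already in the bucket is untouched; only how duplicacy talks to it changes. If you want to sanity-check the key against the S3 endpoint outside of duplicacy first, here’s a rough Go snippet using the minio-go client (the endpoint, region, bucket, and key values are placeholders):

```go
// Hypothetical sanity check of a B2 application key against Backblaze's
// S3-compatible endpoint. Endpoint, region, bucket, and key values below
// are placeholders; substitute your own.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
	// With B2's S3-compatible API, the key ID acts as the S3 access key
	// and the application key as the S3 secret key.
	client, err := minio.New("s3.us-west-002.backblazeb2.com", &minio.Options{
		Creds:  credentials.NewStaticV4("YOUR_KEY_ID", "YOUR_APPLICATION_KEY", ""),
		Secure: true,
		Region: "us-west-002",
	})
	if err != nil {
		log.Fatal(err)
	}

	// List a few objects to confirm the key can see the bucket duplicacy uses.
	for obj := range client.ListObjects(context.Background(), "your-bucket",
		minio.ListObjectsOptions{Recursive: false}) {
		if obj.Err != nil {
			log.Fatal(obj.Err)
		}
		fmt.Println(obj.Key)
	}
}
```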

Thanks, I’ll try that :pray: