Returned 503 no tomes available

I have a job that backs up a drive to local storage, performs a check on the backup, copies the backup to B2, and then checks the data on B2 afterward.

For the past few days, step 3 (copy to B2) has been failing and the log shows the lines below. Does anyone know whether this is an issue with Backblaze? Since the local copy of the backup is good, the copy should be a straightforward task. Or is there a bad chunk on B2 that I need to remove? Any possibility this is related to the B2 announcement about their recent write improvement (How We Achieved Upload Speeds Faster Than AWS S3)?

2023-11-09 21:17:09.043 ERROR UPLOAD_CHUNK Failed to upload the chunk 4944f4da15e931cf812ec937aafdea466f750c308ea0d825be34a6b9aedd97ca: URL request 'https://pod-040-2016-01.backblaze.com/b2api/v1/b2_upload_file/7729016381036c4488550a11/c004_v0402016_t0026' returned 503 no tomes available
Failed to upload the chunk 4944f4da15e931cf812ec937aafdea466f750c308ea0d825be34a6b9aedd97ca: URL request 'https://pod-040-2016-01.backblaze.com/b2api/v1/b2_upload_file/7729016381036c4488550a11/c004_v0402016_t0026' returned 503 no tomes available

I would start with Backblaze support. 5xx errors are server errors.

I take that back.
It seems it’s a feature of their architecture and client software is supposed to retry on 503, after waiting a bit, to get a new upload URL that actually has space: Why You Are Getting a B2 503 or 500 Server Error and What to Do Next
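In code terms, the pattern would be something like the sketch below (just a sketch; getUploadURL and upload are hypothetical callbacks standing in for b2_get_upload_url and b2_upload_file, not Duplicacy's actual code):

package b2retry

import "time"

// uploadWithRetry sketches the retry flow from the Backblaze post: on a 503,
// back off, request a fresh upload URL (b2_get_upload_url), and retry the
// upload (b2_upload_file) against the new URL. getUploadURL and upload are
// hypothetical callbacks; this is not Duplicacy's actual code.
func uploadWithRetry(
	getUploadURL func() (url, token string, err error),
	upload func(url, token string, chunk []byte) error,
	chunk []byte,
	maxAttempts int,
) error {
	var lastErr error
	for attempt := 0; attempt < maxAttempts; attempt++ {
		url, token, err := getUploadURL() // ask B2 for a pod/tome that has space
		if err == nil {
			if err = upload(url, token, chunk); err == nil {
				return nil
			}
		}
		lastErr = err
		// 503 "no tomes available" means this particular upload URL is full or
		// busy, not that the service is down, so wait a bit and ask again.
		time.Sleep(time.Duration(attempt+1) * time.Second)
	}
	return lastErr
}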

@gchen?

@ExitCodeZero, you may want to switch to using their S3 gateway and make it their problem.

Ultimately, S3 won; even Storj runs an S3 gateway. There is no point in risking a very niche API for no benefit in return.

Thank you for the response. After reading it and the blog post, it's good to know that it's just a matter of retrying. I ran the copy again on Saturday (hoping it would be less busy) and it was able to complete the task after uploading ~65,000 chunks.

Great, but I meant that the API client, i.e. duplicacy, should be the one retrying, not the user.

Otherwise it will happen again.

That's assuming it doesn't retry today. Perhaps it retried a few times and gave up? Were there more attempts in your log after the initial failure?

If so, the fix would probably be to wait before retrying, or at least to only retry once a new upload URL has been received.

I understand. I was just eager (and happy) to get the copy to the cloud.

I looked at the COPY options and don't see a retry option. Looking at the log, it encountered the error and the task just stopped, i.e. no retry attempt.

At this point, what are my options? I looked into the S3 gateway you mentioned and I'm not sure I understand the specifics of what I need to do. It seems to involve getting a B2 API key and using it to create the "S3" storage.

I’m dealing with this same issue. Can anyone from Duplicacy help? This needs to be fixed and released ASAP.

To work around this immediately, switch to Backblaze's S3 endpoint.

I thought about that but I didn’t see any way to edit an existing storage’s type/credentials in the UI?

Remove it, and add an S3 one with the same name.
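If memory serves, the CLI storage URL format for an S3-compatible backend is s3://region@endpoint/bucket/path, so for B2 it would look something like the line below (region, endpoint, and bucket are placeholders; the real endpoint is shown on your bucket's details page, and the B2 keyID/applicationKey go in as the S3 access key and secret key):

s3://us-west-004@s3.us-west-004.backblazeb2.com/my-bucket/backups

In the web UI the same pieces (endpoint, region, bucket, and the key pair) go into the storage form when you pick the S3-compatible type. I'm going from memory here, so double-check against the Duplicacy storage backends guide.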

Thanks, I’ll try that :pray:

@saspus switching to the s3 endpoint works great, thanks a lot!

@saspus

Looks like I spoke too soon; I am now getting this:

ERROR UPLOAD_CHUNK Failed to upload the chunk 71314f28af1e831640f7f08e594f6957c6157a9e63c0a8f9183498ef06bc2743: ServiceUnavailable: no tomes available status code: 503, request id: 88baafd66858c3db, host id: aY240wWZOZf0zfGNuY4tjnzdMZJAwyTPK

Oh my… so they are just passing the error down the stack?! What's the point of providing S3 support if the user still has to modify everything :man_shrugging:

@gchen, shouldn't Duplicacy retry on 5xx, including 503, on S3 by default?

It seems the S3 compatibility is limited to the API commands but the semantics are still B2’s.

The b2 backend retries on all 5xx errors. The s3 backend is based on aws/aws-sdk-go (the AWS SDK for Go), which I think also retries 3 times by default. @seidnerj can you check the log to see if there were retry warnings before the error?
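For reference, the retry count in that SDK is set on the session config; here is a minimal sketch (the endpoint and region are placeholders, and this shows the SDK option rather than anything Duplicacy necessarily exposes):

package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	// MaxRetries raises the SDK default (3) so transient 5xx responses such as
	// 503 "no tomes available" are retried more times before an error surfaces.
	// The endpoint and region below are placeholders for your bucket's values.
	sess, err := session.NewSession(&aws.Config{
		Region:     aws.String("us-west-004"),
		Endpoint:   aws.String("https://s3.us-west-004.backblazeb2.com"),
		MaxRetries: aws.Int(10),
	})
	if err != nil {
		panic(err)
	}
	svc := s3.New(sess)
	fmt.Println(svc.ClientInfo.ServiceName)
}

The default retryer should treat 5xx responses as retryable, which is why I'd expect retry warnings in the log if it did retry.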

This worked without issues until a few weeks ago, when I noticed an error in the log (I hadn't checked it in the few weeks prior, that I recall). Now it has happened several times.

Here's the tail of one of the logs; I shortened the chunk IDs and the Backblaze URL for posting.

(Multiple "already exists at the destination storage" lines here and above, then:)

2023-12-05 09:14:35.108 INFO SNAPSHOT_EXIST Snapshot 'name' at revision 478 already exists at the destination storage
2023-12-05 09:14:49.552 INFO SNAPSHOT_COPY Chunks to copy: 228, to skip: 4363, total: 4591
2023-12-05 09:14:51.449 INFO COPY_PROGRESS Copied chunk ChunkId2Here (2/228) 683KB/s 00:03:34 0.9%
2023-12-05 09:14:54.977 INFO COPY_PROGRESS Copied chunk ChunkId1Here (1/228) 930KB/s 00:20:31 0.4%
2023-12-05 09:19:26.343 INFO COPY_PROGRESS Copied chunk ChunkId4Here (4/228) 25KB/s 04:18:20 1.8%
2023-12-05 09:20:20.232 INFO COPY_PROGRESS Copied chunk ChunkId5Here (5/228) 25KB/s 04:05:48 2.2%
2023-12-05 09:20:24.985 INFO COPY_PROGRESS Copied chunk ChunkId6Here (6/228) 42KB/s 03:26:51 2.6%
2023-12-05 09:24:45.892 ERROR UPLOAD_CHUNK Failed to upload the chunk ChunkId7Here: URL request 'https://..backblazeUrlHere..' returned 503 no tomes available
Failed to upload the chunk ChunkId7Here: URL request 'https://..backblazeUrlHere..' returned 503 no tomes available

I’m not seeing any retry warnings.

I doubt this makes a difference, but I'm using the web interface and the options are set to "-threads 2".

Can you grep the log for one of the chunks that failed? Perhaps that chunk is being retried earlier in the log; I'd expect there to be some back-off interval.

Or is this the whole log, and they fail immediately out of the blue?
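If it helps, the web UI keeps the full logs on disk as well, I believe under ~/.duplicacy-web/logs, so grepping them for the first few characters of the failing chunk's hash should show every time that chunk was attempted.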

The log I'm looking at is from the web interface: schedule, then clicking "Failed" for the status. If there's a better log, let me know. It fails out of the blue; the failure is the first time the failing chunk is mentioned in the log. Here's the entire log; I put some ellipses in between lines that didn't change other than the chunk ID and the count. This log is only 93 lines long in total.

Running copy command from /root/.duplicacy-web/repositories/localhost/all
Options: [-log copy -from sg-offsitebackup -to b2-offsite -threads 2]
2023-12-05 09:14:28.926 INFO STORAGE_SET Source storage set to /media/usb/duplicacy/
2023-12-05 09:14:28.933 INFO STORAGE_SET Destination storage set to b2://backup-offsite
2023-12-05 09:14:29.103 INFO BACKBLAZE_URL download URL is: https://f004.backblazeb2.com
2023-12-05 09:14:30.660 INFO SNAPSHOT_EXIST Snapshot offsitebackup1 at revision 379 already exists at the destination storage
2023-12-05 09:14:30.708 INFO SNAPSHOT_EXIST Snapshot offsitebackup1 at revision 389 already exists at the destination storage

2023-12-05 09:14:32.614 INFO SNAPSHOT_EXIST Snapshot offsitebackup1 at revision 477 already exists at the destination storage
2023-12-05 09:14:32.660 INFO SNAPSHOT_EXIST Snapshot offsitebackup1 at revision 478 already exists at the destination storage
2023-12-05 09:14:33.176 INFO SNAPSHOT_EXIST Snapshot offsitebackup2 at revision 379 already exists at the destination storage
2023-12-05 09:14:33.233 INFO SNAPSHOT_EXIST Snapshot offsitebackup2 at revision 389 already exists at the destination storage

2023-12-05 09:14:35.108 INFO SNAPSHOT_EXIST Snapshot offsitebackup2 at revision 478 already exists at the destination storage
2023-12-05 09:14:49.552 INFO SNAPSHOT_COPY Chunks to copy: 228, to skip: 4363, total: 4591
2023-12-05 09:14:51.449 INFO COPY_PROGRESS Copied chunk 78c0b5f3c266025166bfacd7ed7d41562f95741223fa91a8c1ed1c40c4a2757d (2/228) 683KB/s 00:03:34 0.9%
2023-12-05 09:14:54.977 INFO COPY_PROGRESS Copied chunk 772aff92ed3aa92bdbe8dfb24f3de4ef5d31aed431c0f5ff7ce26855ced8077f (1/228) 930KB/s 00:20:31 0.4%
2023-12-05 09:19:26.343 INFO COPY_PROGRESS Copied chunk 7ec7caaa0a90593fef8ee829106405bc251d912440b633c998c35674fb9964ea (4/228) 25KB/s 04:18:20 1.8%
2023-12-05 09:20:20.232 INFO COPY_PROGRESS Copied chunk b0e0466370897322d95cc9982ec420b538d75f18ca04c19f7c012003859c2906 (5/228) 25KB/s 04:05:48 2.2%
2023-12-05 09:20:24.985 INFO COPY_PROGRESS Copied chunk 26f81ac20ad0c04523e768795729227d5624af997e99e9a5ce6cc149e8224141 (6/228) 42KB/s 03:26:51 2.6%

2023-12-05 09:24:45.892 ERROR UPLOAD_CHUNK Failed to upload the chunk 74d96b94abc4696049d7a429833ea57dce66216236cce676a182f7b39d8d9b2d: URL request 'https://pod-040-2000-07.backblaze.com/b2api/v1/b2_upload_file/etc' returned 503 no tomes available
Failed to upload the chunk 74d96b94abc4696049d7a429833ea57dce66216236cce676a182f7b39d8d9b2d: URL request 'https://pod-040-2000-07.backblaze.com/b2api/v1/b2_upload_file/etc' returned 503 no tomes available
This morning’s log had no errors in it.

I haven't had this error recur for the time being; the same pattern of intermittent errors also occurred when using the regular B2 endpoint. When this happens again I will update the thread with the full log; I can't locate the original one.