503 Service Unavailable from Wasabi when running a prune job

Hello everyone.
I ran into a problem running a prune job against Wasabi.
Here is part of the log:

2026-04-18 13:57:25.118 WARN CHUNK_FOSSILIZE Chunk a376b6624147da1d3a06804e77df345e3224b534a921a1463abc699db1903b2b is already a fossil
2026-04-18 13:57:25.163 WARN CHUNK_FOSSILIZE Chunk 0af50a962bfcb42f5d64d8f14e22b9cde682c19bc6e9eed823a712b611e5902a is already a fossil
2026-04-18 13:57:25.665 WARN CHUNK_FOSSILIZE Chunk 31e559755a209b04efdcc0d7e44a73b1264dc0d23012587517d8aac93449da57 is already a fossil
2026-04-18 13:57:27.494 WARN CHUNK_FOSSILIZE Chunk 060bff4d65ac77da6a3c335aa3ef6fa7b932566ef42203a9f3fc23bbd34cf66f is already a fossil
2026-04-18 13:57:27.537 WARN CHUNK_FOSSILIZE Chunk 52f57121ec9956e49f511d9adb025277c8881662726be99d5ce9e8d51f30a32b is already a fossil
2026-04-18 13:57:30.761 ERROR CHUNK_DELETE Failed to fossilize the chunk 2506f690ad0bc78a8d241bc9ce703f8c2b12fd35bb93305b6d969428c9f96fad: 503 Service Unavailable
Failed to fossilize the chunk 2506f690ad0bc78a8d241bc9ce703f8c2b12fd35bb93305b6d969428c9f96fad: 503 Service Unavailable

What could be the reason?

When I run a check job, some chunks are reported missing.

2026-04-18 14:00:37.883 INFO SNAPSHOT_CHECK All chunks referenced by snapshot 03_wassabi_backup at revision 601 exist
2026-04-18 14:00:38.338 INFO SNAPSHOT_CHECK All chunks referenced by snapshot 03_wassabi_backup at revision 602 exist
2026-04-18 14:00:38.338 ERROR SNAPSHOT_CHECK Some chunks referenced by some snapshots do not exist in the storage
Some chunks referenced by some snapshots do not exist in the storage

Wasabi being Wasabi. Nothing unusual. Open a support ticket with them. Not that it will help, but maybe if everyone files tickets they will do something about it after years of flakiness.

Same problem here. I opened a support ticket since they claim Duplicacy is validated for use.

It would be nice if Duplicacy could back off and retry, though. It is common for services to respond with status 429 or 503 to throttle clients that make too many requests.

Lol. Duplicacy just uses their API, which they claim is S3-compatible. They don’t get to “validate” anything.

It does handle both, just differently: on 429 it retries, on 503 it aborts.

Ultimately, if you want reliability, use Google or Amazon. If you want cheap, then the flakiness is not unexpected.

It’s their own validation. As such, they should be returning status 429 so Duplicacy retries.

It’d still be nice if Duplicacy retried on status 503. The S3 documentation says it can return 503 Slow Down as a response. And I’ve seen AWS return this under enough load.
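
To make the suggestion concrete, here is a minimal sketch in Go (the language Duplicacy is written in) of a backoff loop that treats both 429 and 503 as retryable. This is illustrative only, not Duplicacy’s actual code; the function name, retry cap, and delays are all made up:

package retry

import (
	"errors"
	"math/rand"
	"net/http"
	"time"
)

// doWithBackoff runs fn until it returns a non-throttling response,
// sleeping with exponential backoff plus jitter between attempts.
// Both 429 (Too Many Requests) and 503 (Service Unavailable / Slow Down)
// are treated as "please slow down" signals.
func doWithBackoff(fn func() (*http.Response, error), maxRetries int) (*http.Response, error) {
	delay := 500 * time.Millisecond
	for attempt := 0; ; attempt++ {
		resp, err := fn()
		if err != nil {
			return nil, err // network-level error: give up immediately
		}
		if resp.StatusCode != http.StatusTooManyRequests &&
			resp.StatusCode != http.StatusServiceUnavailable {
			return resp, nil // not a throttling status: done
		}
		resp.Body.Close()
		if attempt >= maxRetries {
			return nil, errors.New("giving up after repeated 429/503 responses")
		}
		// Jitter avoids many clients retrying in lockstep.
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
		delay *= 2
	}
}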


Good point. @gchen?

Wasabi support said:

Looking at the logs associated with this object key, we have an indication of why this occurred. Within the same second, your application sent a rapid burst of requests. Wasabi maintains a limit of 250 active TCP connections coming from a single IP address per minute per user-server per region to ensure fair resource distribution across our shared infrastructure. When an application like Duplicacy bursts a massive amount of requests simultaneously during an intensive job like a prune, it can momentarily exceed these active connection thresholds, resulting in an HTTP 503 error to prompt the client to slow down.
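
If the limit really is on concurrent connections, the client-side mitigation is to cap in-flight requests, not just back off after the fact. A minimal Go sketch using a buffered channel as a counting semaphore; the 250 figure comes from the support reply above, and everything else (names, structure) is illustrative:

package ratelimit

import "net/http"

// limitedClient wraps an http.Client so that at most cap(slots)
// requests are in flight at any moment.
type limitedClient struct {
	client *http.Client
	slots  chan struct{}
}

func newLimitedClient(limit int) *limitedClient {
	return &limitedClient{
		client: &http.Client{},
		slots:  make(chan struct{}, limit), // e.g. comfortably below 250
	}
}

// Do blocks until a slot is free, performs the request, then frees the slot.
func (c *limitedClient) Do(req *http.Request) (*http.Response, error) {
	c.slots <- struct{}{}        // acquire
	defer func() { <-c.slots }() // release
	return c.client.Do(req)
}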


This seems like a scenario where 429 (Too Many Requests) would be appropriate. The fact that they return 503 (Service Unavailable) seems like a bug that they try to masquerade as a feature, because the service is clearly available. It just does not like too many requests, as support confirmed.

But it is what it is, and retrying on 503 may be a good enough workaround for this broken provider.

This is not the first time well-behaved services and apps have been intentionally made noncompliant to accommodate a broken third-party service. Oh well.

I solved this problem.
But since I did two things at once, it’s unclear which one solved it.
First, I completely cleared the local cache.
Then, realizing that it simply couldn’t find some old file chains, I ran prune with the options -exhaustive -exclusive -keep 0:90 -keep 7:30 -a
I understand that I lost all the old archive chains, but at least it worked properly.
Maybe simply clearing the cache would have been enough, and I should have tried each step one at a time.
But it’s too late now.
Everything is working correctly now.

This is not a solution though. You just took a different path with different call cadences and avoided triggering the issue. The issue is still there, and will resurface in the future under the same conditions that caused it before…

The problem is a Wasabi bug: returning 503 instead of 429. The fact that some jumping through hoops happens to avoid hitting it does not make it acceptable.

For future reference, you should never use -keep at the same time as -exclusive: you run the risk of deleting your last, possibly your only, revision of a particular snapshot. A safer sequence is sketched below.
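
For example, keeping the retention policy from the post above, a safer approach would be to split the retention pass from the exhaustive cleanup. This is a sketch, not prescriptive; adjust it to your setup and check the prune documentation:

# Retention pass, safe to run while other clients are active:
duplicacy prune -a -keep 0:90 -keep 7:30
# Exhaustive cleanup; add -exclusive only when no other backup or
# prune can possibly be running against this storage:
duplicacy prune -exhaustive -exclusive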