500 Internal Server Error

Hi,

I’ve been getting random “500 Internal Server Error” messages while pruning for about two months. This is happening with Wasabi storage.
I encounter this error in at least one out of every 10 backup jobs I run across different servers.
These backups run automatically at 00:00 (GMT+2). If I manually rerun the backup at 9:00 (GMT+2) the day after an error, it works fine.

Example:

This is a schedule of 3 backups + prune + check.

Running prune command from C:\ProgramData/.duplicacy-web/repositories/localhost/all
Options: [-log prune -storage StorageServidorADH -keep 0:360 -keep 30:30 -keep 7:7 -a -threads 4]
2024-09-10 00:35:48.119 INFO STORAGE_SET Storage set to wasabi://eu-west-2@s3.eu-west-2.wasabisys.com/backupduplicacy/BackupServidorADH
2024-09-10 00:35:49.075 INFO RETENTION_POLICY Keep no snapshots older than 360 days
2024-09-10 00:35:49.075 INFO RETENTION_POLICY Keep 1 snapshot every 30 day(s) if older than 30 day(s)
2024-09-10 00:35:49.075 INFO RETENTION_POLICY Keep 1 snapshot every 7 day(s) if older than 7 day(s)
2024-09-10 00:36:08.237 INFO SNAPSHOT_DELETE Deleting snapshot BackupServidorADH_D at revision 480
2024-09-10 00:36:08.295 INFO SNAPSHOT_DELETE Deleting snapshot BackupServidorADH_D at revision 502
2024-09-10 00:36:08.364 INFO SNAPSHOT_DELETE Deleting snapshot BackupServidorADH_D at revision 503
2024-09-10 00:36:08.416 INFO SNAPSHOT_DELETE Deleting snapshot BackupServidorADH_D at revision 504
2024-09-10 00:36:08.489 INFO SNAPSHOT_DELETE Deleting snapshot BackupServidorADH_D at revision 505
2024-09-10 00:36:08.596 INFO SNAPSHOT_DELETE Deleting snapshot BackupServidorADH_D at revision 506
2024-09-10 00:36:08.714 INFO SNAPSHOT_DELETE Deleting snapshot BackupServidorADH_E at revision 477
2024-09-10 00:36:09.169 INFO SNAPSHOT_DELETE Deleting snapshot BackupServidorADH_E at revision 499
2024-09-10 00:36:09.592 INFO SNAPSHOT_DELETE Deleting snapshot BackupServidorADH_E at revision 500
2024-09-10 00:36:09.995 INFO SNAPSHOT_DELETE Deleting snapshot BackupServidorADH_E at revision 501
2024-09-10 00:36:10.389 INFO SNAPSHOT_DELETE Deleting snapshot BackupServidorADH_E at revision 502
2024-09-10 00:36:10.787 INFO SNAPSHOT_DELETE Deleting snapshot BackupServidorADH_E at revision 503
2024-09-10 00:36:11.204 INFO SNAPSHOT_DELETE Deleting snapshot BackupServidorADH_C at revision 482
2024-09-10 00:36:11.216 INFO SNAPSHOT_DELETE Deleting snapshot BackupServidorADH_C at revision 505
2024-09-10 00:36:11.226 INFO SNAPSHOT_DELETE Deleting snapshot BackupServidorADH_C at revision 506
2024-09-10 00:36:11.236 INFO SNAPSHOT_DELETE Deleting snapshot BackupServidorADH_C at revision 507
2024-09-10 00:36:11.245 INFO SNAPSHOT_DELETE Deleting snapshot BackupServidorADH_C at revision 508
2024-09-10 00:36:11.255 INFO SNAPSHOT_DELETE Deleting snapshot BackupServidorADH_C at revision 509
2024-09-10 00:36:28.070 WARN CHUNK_FOSSILIZE Chunk 0f56f25e480a2c3a39a92cb00c85d179a06286bd434e420b331476f0fe25b15b is already a fossil
2024-09-10 00:36:36.510 WARN CHUNK_FOSSILIZE Chunk da312bcd550c28f65b6d1c05315cedbe82591d48138148cad26d630e22dba845 is already a fossil
2024-09-10 00:36:38.878 WARN CHUNK_FOSSILIZE Chunk 43669240a5820609915d6bf0e19040d7d61e6bb93fc3834026cb27307b5070e4 is already a fossil
2024-09-10 00:36:42.187 ERROR CHUNK_DELETE Failed to fossilize the chunk f175a205ae6e4783f1b811110c974093c16d3dfd81a18bd1d9c90e92e0181d7f: 500 Internal Server Error
Failed to fossilize the chunk f175a205ae6e4783f1b811110c974093c16d3dfd81a18bd1d9c90e92e0181d7f: 500 Internal Server Error

This error is returned by Wasabi. Talk to Wasabi support.

OK, I’ll talk to Wasabi.
By the time this message is shown, has Duplicacy already retried the command?

This is the response I got from Wasabi:

Thanks for getting back.
I understand. However, since Sep 8 there are no errors in your logs, including during the timeframe of the issue, that we can correlate to the error. Everything appears to be functioning correctly on our end, and typically, we would expect to see a corresponding 500 Internal error in the PUT logs.
As this issue is happening intermittently with your backups, we recommend scheduling them at different times throughout the day to see if that helps, and rerunning them if they fail.

Hope this helps.

I suppose that when he says “there are no errors in your logs”, he means “there are no errors in our logs”.

Is there a retry system? It seems like retrying could solve the problem.

Duplicacy uses the AWS library, which retries several times on 500, 502, 503, and 504 errors before giving up.
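
Roughly speaking, such a retry policy looks like the Go sketch below. This illustrates the general technique only; it is not Duplicacy’s or the SDK’s actual code, and doWithRetry and retryableStatus are made-up names.

package main

import (
	"fmt"
	"math/rand"
	"net/http"
	"time"
)

// retryableStatus mirrors the set mentioned above: 500, 502, 503, 504.
func retryableStatus(code int) bool {
	switch code {
	case http.StatusInternalServerError, http.StatusBadGateway,
		http.StatusServiceUnavailable, http.StatusGatewayTimeout:
		return true
	}
	return false
}

// doWithRetry issues req up to maxRetries+1 times, sleeping with jittered
// exponential backoff between attempts. This works for bodiless requests
// (GET/HEAD/DELETE); requests with a body would need rewinding before a retry.
func doWithRetry(client *http.Client, req *http.Request, maxRetries int) (*http.Response, error) {
	for attempt := 0; ; attempt++ {
		resp, err := client.Do(req)
		if err == nil && !retryableStatus(resp.StatusCode) {
			return resp, nil // success, or a status not worth retrying
		}
		if attempt == maxRetries {
			if err == nil {
				resp.Body.Close()
				err = fmt.Errorf("server kept returning %s", resp.Status)
			}
			return nil, fmt.Errorf("giving up after %d attempts: %w", attempt+1, err)
		}
		if resp != nil {
			resp.Body.Close() // discard the failed response before retrying
		}
		// backoff: 1s, 2s, 4s, ... plus up to 500ms of jitter
		time.Sleep(time.Duration(1<<attempt)*time.Second +
			time.Duration(rand.Intn(500))*time.Millisecond)
	}
}

func main() {
	req, _ := http.NewRequest(http.MethodGet, "https://example.com/", nil)
	resp, err := doWithRetry(http.DefaultClient, req, 4)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}

Only after every attempt fails does the error surface to the caller, which is presumably what ends up as the CHUNK_DELETE error in your log.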

It does not reflect well on Wasabi’s competency if they have failures that are not tracked in their own logs.

The suggestion to try at a different time and pray is ridiculous.

I would push back: tell them again the time of the failure (with the time zone) and escalate to someone who can actually find the logs, or fix the issue that causes their logs to disappear.

If they keep denying it, change providers to one that can debug its own systems.

Btw, this is not a new issue. Over 5 years ago I ditched them for the exact same reason: slews of 500 failures they would not acknowledge or fix. us-west-1 was significantly worse than us-east-1, to the point of being unusable; us-east-1 was a bit better, but still annoying.

Ultimately, you get what you pay for; that was a consideration when they were significantly cheaper than anyone else (under $4/TB). But then they jacked up prices 6 times within 2 years. They also claim to be hot storage, yet charge a penalty for early deletion and cancel accounts for excess egress. I have no idea why people put up with them; there are no benefits, only drawbacks.

Which storage do you use/recommend?

For the past few years I have been using Storj, both with Duplicacy and for other personal projects. I have yet to experience any issues with it.

It does require some tweaking to get maximum performance on weak hardware (e.g. increasing the chunk size or using the S3 gateway), but for me it works fine as-is.
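
For example (a sketch; the snapshot id and storage URL are placeholders, and the sizes are just illustrative), the chunk size can only be chosen when the storage is initialized:

duplicacy init -c 16M -min 4M -max 64M <snapshot id> <storage url>

The default average chunk size is 4M; larger chunks mean fewer requests and less per-request overhead, at the cost of coarser deduplication, and they cannot be changed for an existing storage.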

As a secondary backup (disaster recovery) I have been using Glacier Deep Archive for the past decade. But Duplicacy does not support storages that need to be thawed, so I’m using another backup tool for that. An alternative would be Google’s archival storage: it has no thawing requirement, but it’s more expensive, with a much longer required minimum retention period.