ONEDRIVE RETRY Response code: 503

Since 30 April 2026 I have been getting this response from OneDrive on Purge or Check jobs.

After some research it seems like Microsoft made a change to their Graph API.

Anyone else experiencing this, or have a solution for it?

This is from my logs:
Running check command from C:\ProgramData/.duplicacy-web/repositories/localhost/all
Options: [-log check -storage OneDrive -a -stats -a -tabular]
2026-05-05 04:00:36.924 INFO STORAGE_SET Storage set to one://backup
2026-05-05 04:01:38.283 INFO ONEDRIVE_RETRY Response code: 503; retry after 524 milliseconds
2026-05-05 04:02:42.283 INFO ONEDRIVE_RETRY Response code: 503; retry after 1667 milliseconds
2026-05-05 04:03:46.286 INFO ONEDRIVE_RETRY Response code: 503; retry after 753 milliseconds
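For context, the log lines above show the client retrying a 503 after a short randomized delay. Here is a minimal sketch of that pattern, purely illustrative (the function and parameter names are hypothetical, not duplicacy's actual code):

```python
import random

def call_with_retry(do_request, max_attempts=8, base_ms=500):
    """Call do_request() until it returns a non-503 status code.

    do_request must return an integer HTTP status code.
    Returns (status, attempts_used); raises RuntimeError if every
    attempt came back 503.
    """
    for attempt in range(1, max_attempts + 1):
        status = do_request()
        if status != 503:
            return status, attempt
        # Randomized delay in milliseconds, in the same ballpark as the
        # 524/1667/753 ms values in the log above (jittered backoff).
        delay_ms = random.randint(base_ms, base_ms * 4)
        # time.sleep(delay_ms / 1000)  # real code would sleep here
    raise RuntimeError(f"gave up after {max_attempts} attempts, all 503")
```

A real client should also honor the `Retry-After` header when the server sends one; persistent 503s like the ones above suggest the server keeps rejecting the request no matter how long the client waits.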

Same here. I created an issue in the GitHub repo outlining the problem and a potential solution.

However, I suspect this software might have been abandoned. No releases since May of 2025.

This sadly looks to be the case. With no sign of active development, any change to other storage APIs is going to be a bad day for lots of us.

What a shame.

I’m not looking for change for change’s sake. But the absolute void in development against an ever-growing issues list isn’t a good sign for a tool we’re relying on to save the day when the proverbial hits the fan.

I’ve abandoned the use of this product and am now using duplicati.

Big mistake. Please do your research. The fact that it appears first in search results is not a vote of quality.

There are many much better alternatives.

The practical solution is to let duplicacy do the backup, a task that is polished and stable and just works.

And delegate connectivity to the goofy remotes to another piece of software, like rclone.
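One way this split can be wired up, assuming an rclone remote named `onedrive` is already configured and that duplicacy’s WebDAV backend is acceptable (the remote name, port, credentials, and paths below are all illustrative):

```shell
# Expose a OneDrive folder as a local WebDAV endpoint; rclone then owns
# all the Graph API quirks, throttling, and retries.
rclone serve webdav onedrive:backup --addr 127.0.0.1:8080 \
    --user backup --pass secret

# Point duplicacy at the local proxy instead of one://backup:
duplicacy init mybackup webdav://backup@127.0.0.1:8080/
```

With this layout, a provider-side API change is rclone’s problem to chase, not duplicacy’s.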

It baffles me why duplicacy even tries to keep up with 20 different API providers. It’s a fool’s errand.

Don’t misuse *drive remotes for backup and you won’t have any issues. The problem here is not duplicacy. It’s “storage providers” who keep breaking their API.

AWS S3 works and does not require babysitting, does it?

Gchen needs to make one change: drop Google Drive, Dropbox, and other BS as endpoints. And then freeze the code.

But this is not blocking anyone. Anyone can switch to storage that does not suck and stop chasing ghosts.

I have standardised on S3-compatible storage: I use Storj and Backblaze for remote storage, and I run Garage and a USB target locally in addition to snapshots on my NAS. I think I am covered, and those parts of duplicacy seem stable. However, I am still concerned about the lack of development or engagement from Gchen, which seems to have gone on for quite some time now.

@saspus, what would you replace duplicacy with if you had to?

@saspus lol. A lot of underlying assumptions in your reply. I’m not emotionally tied to any tool and don’t feel like I have to justify my decision to switch and the tool I chose.

I don’t think asking for more than zero releases a year is too much.

Strongly depends on the host OS. On macOS – Arq 7. No question about it. The only downside – the abhorrent size of the local caches it creates. Which could be an issue of my own making – I adopted a 3TB backup on a 512GB SSD Mac…

On any other OS, if duplicacy vanishes into the void – restic. Also pretty much no competition. But duplicacy exists – so I would not have switched to anything else had I still needed a backup tool.

For the last few (4? 5?) years I have used neither. My backup is ZFS snapshot replication between servers across a few states. Simple like an iceberg, and solid like… also an iceberg. The Mac gets backed up to the same NAS with Time Machine, and a Samba extension creates a snapshot upon successful backup completion, which gets promptly replicated to the remote server. I use zrepl. I guess this is my backup: GFS-style filesystem snapshots + replication.

I use neither Windows nor Linux – so no recommendation there. The servers run FreeBSD, where ZFS is a first-class citizen.

Nor should you be. I’m not expecting you to justify anything. I’m just seeing a fellow forum user heading straight into an open manhole, and I feel obligated to attempt a warning. But it’s also OK to learn from your own mistakes; I don’t have any vested interest in which backup solution you end up with.

It depends on why you are asking.

  • To keep chasing the ever-changing landscape of storage providers that break their APIs – yes, it’s too much to ask. The correct solution is to drop support for such providers. Rclone does a superb job of chasing them, and there is no need to duplicate the effort.
  • To fix longstanding bugs – such as an interrupted prune leaving the storage in an inconsistent state – there I’m with you; it’s long overdue and does not receive the attention it deserves.

But updates for updates’ sake – no.

2 posts were split to a new topic: Snapshot replication as a backup

Ever since I learned of Rclone, I concluded the same.

Although, I must stress that it was good Duplicacy natively supported the big providers/protocols out of the box. But I’m also not at all surprised Microsoft breaks stuff so easily. If OneDrive can be fixed, great. If not, it might be a good idea to drop broken providers – but only when they do break and can’t easily be fixed.

(I wonder why no one has actually pinged @gchen yet?)

The fact that restic has its own REST API integration through rclone serve shows the correct path Duplicacy should take. Perhaps it can adopt the same API and integrate better with Rclone? @gchen
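For anyone unfamiliar with how the restic/rclone integration looks in practice, here is a sketch. The remote name `onedrive` and repo path are illustrative; it assumes an rclone remote has already been set up with `rclone config`:

```shell
# restic speaks its REST backend protocol to rclone and lets rclone
# handle the actual storage provider.
restic -r rclone:onedrive:restic-repo init
restic -r rclone:onedrive:restic-repo backup ~/documents

# Under the hood, restic spawns roughly:
#   rclone serve restic --stdio onedrive:restic-repo
```

Because restic only needs the REST protocol, every backend rclone supports becomes available without restic carrying per-provider code.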
