OneDrive (odb) issue with jobs dying

I’ve created this forum post to bring https://github.com/gilbertchen/duplicacy/issues/611 to your attention. As it stands, OneDrive for Business is not usable for me, especially now that a second snapshot has left some ~tmp files on OneDrive with names matching a list of chunks that are missing from the storage.

Let me know if you would like any additional debug logs etc.

How often does this happen to you? Can you post all error-related logs?

I ran a test to upload a 100G file and had one 400 error.

Thanks for the response. The problem has been happening all week, but last night I was able to create a new backup with an additional 20GB and it succeeded without Duplicacy exiting. Maybe it was a transient issue on OneDrive’s side? I’ll keep an eye on it.

After the successful backup I ran a check, which reports missing chunks for the latest revision. If I look for those chunks in the storage they are indeed missing; however, there are ~tmp files with names that match the missing chunk names. I have set the maximum chunk size to 1MB (I’m backing up large files) and the ~tmp files range from 700KB to 998KB, so it looks like the upload stopped before Duplicacy could rename the ~tmp file?

It’s a bit worrying that I get a backup successful message when in reality there are issues. I’ll run my next backup with the debug switch and capture the output to a log file to see if it gives any clues. Anything else I can check?

Thanks

If 1MB is the fixed chunk size, it is normal to see 700KB-998KB chunks because of compression. Those ~tmp files were created by OneDrive itself; for the OneDrive backend, Duplicacy does not upload to a temporary file first and then rename it.

How many ~tmp files are there? Can you manually rename them and then run duplicacy check -chunks to see if these files are actually complete?
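
If it helps, here is a rough Go sketch of that rename step. It assumes the storage’s chunks directory is reachable as a local path (for example through the OneDrive sync client or an rclone mount) and that the partial files are simply the chunk name with a literal "~tmp" suffix appended; adjust the path and suffix to whatever you actually see.

```go
// rename_tmp.go - sketch only: strip a "~tmp" suffix from partially
// uploaded chunk files so that `duplicacy check -chunks` can verify them.
// Assumptions: the chunks directory is a local path and the partial files
// are named "<chunk name>~tmp".
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: rename_tmp <path to chunks directory>")
		os.Exit(1)
	}
	err := filepath.Walk(os.Args[1], func(path string, info os.FileInfo, err error) error {
		if err != nil || info.IsDir() {
			return err
		}
		if strings.HasSuffix(info.Name(), "~tmp") {
			target := strings.TrimSuffix(path, "~tmp")
			fmt.Printf("renaming %s -> %s\n", path, target)
			return os.Rename(path, target)
		}
		return nil
	})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```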

Thanks for the explanation, that makes sense. There are around 50 ~tmp files, and I’ve renamed the ones matching the chunks marked as missing for the latest revision. I’m running duplicacy check -chunks now, but some of the renamed chunks are already failing verification, so it looks like something has gone wrong. It has about a day left to run, so I won’t know the full extent of the problem for a while.

Do you have any further suggestions or is it going to be a case of starting again?

You can delete all the ~tmp files, then change the repository id and run a backup again to see if these files can be regenerated.
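
With the CLI, the repository id is the "id" field in the repository’s .duplicacy/preferences file, so changing it is a one-line edit. A minimal sketch of that entry (other fields omitted, values are placeholders for your own setup):

```json
[
    {
        "name": "default",
        "id": "my-backup-2",
        "storage": "odb://backups/path"
    }
]
```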

I’ll run some tests to reproduce the issue. In the past OneDrive had a bug that caused incomplete files to be saved (see the earlier topic OneDrive Business: ERROR DOWNLOAD_CHUNK), but that has been fixed and the behavior here is different. I wonder if in this case their servers somehow returned success responses without having saved the complete files.

Thanks again. I’ve deleted all the ~tmp files, deleted the corrupt chunks, and run another backup after incrementing the repository id. I backed up another 20GB and it bombed with one 400 error, but it ran to completion after I manually resumed it. The number of 400 errors seems to be related to how much data I’m backing up; whether that is just chance or something specific to OneDrive I’m not sure.
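
In case it helps anyone else, the manual resume step is easy to script. Below is a rough sketch of what I do (the exact backup command, retry count and delay are placeholders for my setup, nothing official); as far as I understand, Duplicacy skips chunks that already exist in the storage, so each retry effectively picks up where the previous run stopped.

```go
// retry_backup.go - sketch only: re-run `duplicacy backup` until it exits
// cleanly, up to a capped number of attempts. Command line, attempt limit
// and delay are placeholders; adjust them to your own setup.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	const maxAttempts = 10
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		cmd := exec.Command("duplicacy", "backup", "-stats")
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err == nil {
			fmt.Println("backup completed")
			return
		}
		fmt.Printf("attempt %d failed, retrying in 30s\n", attempt)
		time.Sleep(30 * time.Second)
	}
	fmt.Fprintln(os.Stderr, "giving up after", maxAttempts, "attempts")
	os.Exit(1)
}
```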

After a few more backups I’ve run another chunk check and everything verified successfully, so things are looking good for the moment. It still makes me a bit nervous about my backups, so I’ll leave it a few days and then run another check to see how things are going.

I have the same problem. I’ve been trying to upload an initial backup of about 400GB to OneDrive Business for days. After a few hours it always fails with 400 errors.

If it helps, here is the stack trace from the last crash:

```
2021-05-08 07:01:08.234 ERROR UPLOAD_CHUNK Failed to upload the chunk 6c34cc53a2511c9fc5979dcd915995c34f05a02694525e1265adc45ade83344d: 404 Item not found
goroutine 146 [running]:
runtime/debug.Stack(0xc0000740c0, 0x0, 0xc00044da00)
        /usr/local/go/src/runtime/debug/stack.go:24 +0x9d
runtime/debug.PrintStack()
        /usr/local/go/src/runtime/debug/stack.go:16 +0x22
github.com/gilbertchen/duplicacy/src.CatchLogException()
        /Users/chgang/zincbox/go/src/github.com/gilbertchen/duplicacy/src/duplicacy_log.go:227 +0x88
panic(0xf49d60, 0xc00b826330)
        /usr/local/go/src/runtime/panic.go:522 +0x1b5
github.com/gilbertchen/duplicacy/src.logf(0x2, 0x1026158, 0xc, 0x103dda7, 0x21, 0xc00044dea0, 0x2, 0x2)
        /Users/chgang/zincbox/go/src/github.com/gilbertchen/duplicacy/src/duplicacy_log.go:180 +0x8f9
github.com/gilbertchen/duplicacy/src.LOG_ERROR(...)
        /Users/chgang/zincbox/go/src/github.com/gilbertchen/duplicacy/src/duplicacy_log.go:107
github.com/gilbertchen/duplicacy/src.(*ChunkUploader).Upload(0xc0004fb900, 0x1, 0xc00de0ccc0, 0x1847f, 0x1)
        /Users/chgang/zincbox/go/src/github.com/gilbertchen/duplicacy/src/duplicacy_chunkuploader.go:140 +0x32d
github.com/gilbertchen/duplicacy/src.(*ChunkUploader).Start.func1(0xc0004fb900, 0x1)
        /Users/chgang/zincbox/go/src/github.com/gilbertchen/duplicacy/src/duplicacy_chunkuploader.go:60 +0x83
created by github.com/gilbertchen/duplicacy/src.(*ChunkUploader).Start
        /Users/chgang/zincbox/go/src/github.com/gilbertchen/duplicacy/src/duplicacy_chunkuploader.go:55 +0x48
```

I’ve been running into the same issue. I’ve restarted the job 3 times and am pulling my hair out trying to fix the problem. I’m backing up 5TB, but it crashes every 200GB or so.

What did you end up doing as a workaround?

Probably not what you want to hear, but I decided to use restic and rclone instead.