Uploaded file but failed to store EOF

Not sure what to do about this:

2018-04-27 22:15:30.068 INFO UPLOAD_PROGRESS Uploaded chunk 1002 size 3179504, 8.97MB/s 00:24:24 26.6%
2018-04-27 22:45:29.829 ERROR UPLOAD_CHUNK Failed to upload the chunk 4612a87462e2390ffe42c73571e34b29f921b58e66cacf0c67370bf2a39d2d98: Uploaded file but failed to store it at /share/homes/backups/KristinV/chunks/46/12a87462e2390ffe42c73571e34b29f921b58e66cacf0c67370bf2a39d2d98: EOF


I think this was caused by a closed connection due to temporary network issues. If you run it again does the same error happen again?

It seems to happen again… yes. I will try a third time tonight to see. Thanks sir!!

Well… third time seems to be a charm… so far it is going ok… up to 75%, where it used to fail around 30%; hoping for the best!

mmmm- but this time it failed at 93% :frowning:

so you think it is a bad network link?

I think it is a bad network link. Which backend is this? Maybe some retry logic is needed here.

this is a local sftp to a local server. Nothing fancy.

Retry logic looks like it would help… I finally got it to finish after the 5th try.

I have others going to the same server that don't seem to have this problem (that I know of yet) :slight_smile:

If there is anything you think I could do on my end- I’d appreciate the advice! Perhaps I need to somehow trap the error and then do my own retry?

If you want to catch the error in a script, run the backup command with the -log option (like duplicacy -log backup) and grep the output for both ERROR and EOF. If you get a hit, rerun the backup command.
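A minimal sketch of such a wrapper is below. The `retry_on_eof` function name and the retry count are my own inventions, not part of duplicacy; the only assumption taken from this thread is that a transient failure prints a line containing both `ERROR` and `EOF`, and that simply rerunning `duplicacy -log backup` is a safe retry.

```shell
#!/bin/sh
# Hypothetical retry wrapper: run a command up to a given number of times,
# retrying only when it fails AND its output contains both ERROR and EOF.
# Any other failure is treated as permanent and passed through unchanged.
retry_on_eof() {
    max_tries=$1
    shift
    try=1
    while [ "$try" -le "$max_tries" ]; do
        output=$("$@" 2>&1)
        status=$?
        if [ "$status" -eq 0 ]; then
            # Success: show the output and stop.
            printf '%s\n' "$output"
            return 0
        fi
        if printf '%s\n' "$output" | grep -q ERROR && \
           printf '%s\n' "$output" | grep -q EOF; then
            # Looks like the transient EOF failure from this thread: retry.
            echo "attempt $try failed with EOF, retrying..." >&2
            try=$((try + 1))
        else
            # Some other error: don't retry blindly.
            printf '%s\n' "$output"
            return "$status"
        fi
    done
    return 1
}

# Example (assumed invocation from the post above):
# retry_on_eof 5 duplicacy -log backup
```

This is only a stopgap until retry logic lands in duplicacy itself; note that rerunning the whole backup re-scans everything, even though already-uploaded chunks are skipped.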

Will the resume (or retry X times) backup/upload option for SFTP be added to the next version?
I noticed when the SFTP server goes down the backup just “hangs” and you need to restart it when you want to “continue” the backup (which “failed”, so it restarts from 0).

I’m thinking of adding the retry logic to the chunk uploader on an EOF error so it will work on all backends. However, I’m not sure if this helps in your case – if it just “hangs” as you said then the retry code will never get invoked.

I am still experiencing this problem periodically and can't really figure out why… so I'd love to see you get some retry logic in there!!

I still seem to be having this problem quite a bit but only with one computer in the building… so that is weird.

Any further thoughts on adding some retry logic for sftp?

Still getting these EOF errors… any chance you want to use me as a testcase for some beta code retry logic? :slight_smile:

I’ll try to get it done in about 2 weeks.

Any chance this is included in the latest code?

Could I ask about an update on this by chance? Thanks!!

If you reply to @gchen's post (instead of your own) or @-mention him, he will be notified about your post.

Thanks! @gchen, might I ask about an update to my question? Is there still a plan to implement SFTP retries?


Sorry for not doing what was promised. I might be able to get it done by tomorrow. If not, this will be the first thing next week since I’ll be traveling from this Thursday to next Monday.

I know you have another request (Adding a feature to help with server side data integrity verification · Issue #330 · gilbertchen/duplicacy · GitHub) which is a bit different. At first I thought it would be easy to upload an unencrypted chunk list at the end of every backup, but I have since concluded that it is more complicated. However, I do have a plan to support a new use case where each repository can have its own encryption password while the owner of the storage can still check the integrity of backups without knowing individual passwords. So what you requested will be part of a big change that will be implemented once the new web-based GUI is done.


many many many many thanks!!!