Duplicacy check too slow and the connection was forcibly closed by the remote host

Hi guys, I want to implement a verify procedure for my jobs, and based on what I read, the recommended command is “duplicacy check -all -files”. After running it, it took a really long time (not sure if that is normal), and then I got an error message:

That was only 4 revisions, and it took around 3-4 hours before I saw the error message.

The plan is to run this once a week, but honestly this is the first time I have tried it.

Any recommendations on how to make this work properly? Thanks in advance for your time.
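
For reference, the weekly run I have in mind would be scheduled with something like this crontab entry (the repository path and log file are just placeholders for my setup):

```
# Hypothetical crontab entry: run the check every Sunday at 03:00.
0 3 * * 0  cd /path/to/repo && duplicacy check -all -files > /var/log/duplicacy-check.log 2>&1
```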


You can try running the check with multiple threads:

duplicacy check -all -files -threads 4

However, the connection being closed may be the result of hitting a rate limit. Using multiple threads may not help at all and may even make it worse.

I would suggest the -chunks option instead:

duplicacy check -all -chunks -threads 4

This ensures that each chunk will be downloaded only once.

Hi gchen, thank you very much for your fast reply. I am currently running your last suggestion and will see how it goes. In the meantime, could you please confirm that the -all option checks all chunks in the target storage, for every job (from any server) that points to it?



OK, disregard my previous question; I just reconfirmed what I thought. Unfortunately, “duplicacy check -all -chunks -threads 4” failed again after less than an hour with a similar error:

Since -all checks every job within the storage, I am adjusting this to check only the default local job, and I can create independent verify jobs on each server (not a big deal; I actually thought it worked that way originally). This should definitely reduce the processing time of each job and hopefully avoid that error message.
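
In case it helps anyone, the per-server idea is basically a separate check run from each repository directory; a rough sketch (the repository paths are made up):

```shell
# Rough sketch: run an independent check per repository instead of one
# "-all" pass against the whole storage. Paths are placeholders.
for repo in /backups/server1 /backups/server2; do
  (cd "$repo" && duplicacy check -chunks) || echo "check failed in $repo" >&2
done
```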

I am currently running “duplicacy check -chunks”, which should finish in approximately 1 hour 15 minutes. I am no longer using -threads because of your earlier comment.

I will let you know once done.



OK, I can confirm that duplicacy check -chunks worked this time:

So at this point I have only one question: is -chunks enough to validate the integrity of a backup job?



-chunks should be enough. If you really want to make sure that every file is restorable, you can copy the Azure storage to a local one and then run check -files against the local storage.
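
A sketch of that copy-then-verify workflow, run from the repository directory; the storage name “local”, snapshot id “mybackup”, and the path are placeholders:

```shell
# 1. Add a local storage named "local" that is copy-compatible with
#    the default (Azure) storage:
duplicacy add -copy default local mybackup /mnt/verify-storage

# 2. Copy the revisions from the default storage to the local one:
duplicacy copy -from default -to local

# 3. Run the full file verification against the local copy:
duplicacy check -storage local -all -files
```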

Thanks gchen. However, bad news: the job failed again with the original error message “the connection was forcibly closed by…”. I have tested it on 2 different computers, with 2 different jobs pointing to the same target Azure storage. I can confirm it failed at different chunks and at different times; for example, one of the jobs was almost done (90%) and failed after more than 10-15 hours. I am currently using duplicacy check -threads 4 -chunks.

Based on the error message, I started digging a bit more on the Azure end and tested switching to “Internet Routing”:

Unfortunately, the result was almost the same; I am not sure whether there is another possible adjustment there.
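
In the meantime, one workaround I am considering is wrapping the check in a small retry loop so a dropped connection does not kill the whole run. This is just a generic POSIX sh sketch, not something Duplicacy provides:

```shell
#!/bin/sh
# Generic retry-with-backoff wrapper (a sketch, not a Duplicacy feature).
# Usage: retry <max_attempts> <command...>
retry() {
  max=$1; shift
  delay=1
  n=1
  while ! "$@"; do
    if [ "$n" -ge "$max" ]; then
      echo "giving up after $n attempts" >&2
      return 1
    fi
    echo "attempt $n failed; retrying in ${delay}s" >&2
    sleep "$delay"
    delay=$((delay * 2))
    n=$((n + 1))
  done
}

# Intended use (hypothetical):
# retry 5 duplicacy check -chunks
```

Note that a retry restarts the whole check, so this mainly helps when failures are rare; it does not resume from the chunk that failed.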

Do you have any other ideas on how to adjust/handle this?

Thanks again for your time,