speed seems to deteriorate over time - crash causes restart

Hello,

I notice the speed of the backup deteriorates over time. I am running a backup to our cloud repository, so I expect it to take a long time - however the backup starts at around 30 MB/s and by the next morning is down to about 758 KB/s. Stopping and restarting restores the 30 MB/s - BUT it doesn't find the existing backup files and continue from them.

Is there a way for it to recognise the existing files? I have seen the same behaviour a couple of times when the backup crashes (a chunk error or similar) and it starts again - despite there being 40-50 GB already up there.

Thanks
Brett

Which storage backend are you using and what is the error message? Some storage backends can retry on errors but not all of them.

When you restart the backup after a failed one, files that have already been uploaded will not be uploaded again, but Duplicacy still needs to read the entire content of those files to make sure they are unchanged. That means the 30 MB/s figure is the read speed, not the upload speed. The 758 KB/s figure is probably closer to your actual upload speed - of course, that depends on your upload bandwidth.
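
For what it's worth, here is a minimal shell sketch of retrying after a failed run, assuming the Duplicacy CLI and a repository already initialized against the cloud storage; the repository path and thread count are arbitrary examples, not a recommendation:

```
# Hypothetical repository path - adjust to your own setup.
cd /path/to/repository

# Re-run the backup after the failed attempt. Chunks already present in
# the storage are skipped, but every file is still read and hashed
# locally, so the speed reported early on mostly reflects disk read speed.
duplicacy backup -stats -threads 4

# -stats prints a summary of new vs. existing chunks at the end, which
# shows how much data actually had to be uploaded on the retry.
```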

If you have a large directory that takes many days to back up, you can use the include/exclude patterns to limit the first backup to certain subdirectories, then remove the patterns after a successful backup.
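
As an illustration only, here is a sketch of such a first-backup filter, assuming the patterns live in the repository's .duplicacy/filters file and using a hypothetical subdirectory named photos/; the include patterns must come before the catch-all exclude, since the first matching pattern decides:

```
# Hypothetical: restrict the first backup to photos/ only, then remove
# these patterns (or empty the filters file) once it completes.
cat > .duplicacy/filters << 'EOF'
+photos/*
+photos/
-*
EOF

duplicacy backup -stats
```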