Initial backup is very slow, is it going to be faster the next times?

I’m sorry, I’m sure this has been asked already but I can’t find answers:

I have around 2 TB on an HDD to back up to Google Drive through the Web Edition, and it’s taking days. To be clear: it took 3 days and ended with an error because a file wasn’t uploaded (from my iCloud photo library; probably a file changed or was deleted during the process, so I excluded the folder for next time).
So the backup was marked as failed. I launched a new backup and it’s again showing days remaining. I thought that at least 99% of the chunks were already uploaded and that the backup would just continue incrementally, but it looks like it’s starting all over again. Is this normal behavior? If the initial backup succeeds, will it take only minutes to incrementally back up a few MB next time? Should I add more threads to speed up the process?
Edit: my upload speed isn’t the issue here, I think.
Thanks a lot!!

Rest assured, it isn’t starting over, and it should skip chunks that are already uploaded…

However, there may be a little extra time overhead when restarting an initial backup, because Duplicacy will list the chunks on the destination before determining what to upload. Also, if Duplicacy didn’t save the incomplete snapshot file, it may have to rehash the content locally again.

Either way, it shouldn’t waste additional bandwidth re-uploading chunks. The progress indicator counts all chunks, but in the logs you’ll notice it quickly skips ones that have already been uploaded, and the reported throughput may even exceed your actual bandwidth.

Certainly, using multiple threads (e.g. -threads 8) will help speed things up, especially when it has to skip chunks. And subsequent backups will be fast, as Duplicacy only looks at files that have changed since the last snapshot.
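If you run the CLI directly (the Web Edition has an equivalent option in the backup schedule settings), the thread count is passed to the backup command; a minimal sketch, where the thread count of 8 is just an example value:

```shell
# From the repository directory: back up with 8 parallel upload threads.
# -stats prints a summary of uploaded vs. skipped chunks at the end,
# which makes it easy to confirm that existing chunks are being skipped.
duplicacy backup -threads 8 -stats
```

Watching the skipped-chunk count in the -stats summary is a good way to verify the restarted initial backup is resuming rather than re-uploading.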


OK, we’ll see. I tried setting up Google Drive File Stream, but I’m not sure it changed the settings, since the logs still show I’m uploading to my Google Drive storage and not the actual mounted drive. Thanks!