Incomplete First Backup doesn't resume from where it stopped


#1

Dear developer,
I’m evaluating the GUI version of duplicacy before buying it and I came across an issue.

When I was trying to back up a large dataset (about 1.5TB) to Google Drive, I hit the Google Drive daily upload limit.

It then told me that duplicacy had created a .incomplete file.
I was under the impression that it would continue from where it left off, but when I ran the backup again it seemed to be scanning through all the already backed-up chunks, which wastes a lot of time.
I read somewhere on the forum that it's supposed to know where it stopped based on the incomplete file.

Any ideas?
Is this a bug?

Thanks a bunch !!
-DM


#2

Fast-resume only works for files whose chunks have been completely uploaded. If the backup is aborted in the middle of scanning a file, then in the retry that file will need to be rescanned from the beginning, because technically it is possible that the content of the file may change.
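To make that rule concrete, here is a minimal sketch of the fast-resume decision as described above. This is illustrative pseudocode, not Duplicacy's actual implementation (Duplicacy is written in Go, and names like `IncompleteSnapshot` and `plan_resume` are invented for this example): files recorded as fully uploaded in the incomplete snapshot, and unchanged since, can be skipped; a file that was mid-scan when the backup aborted (or has changed) must be rescanned from the beginning.

```python
# Hedged sketch of the fast-resume rule; not Duplicacy's real code or API.
from dataclasses import dataclass


@dataclass
class IncompleteSnapshot:
    """Hypothetical stand-in for the on-disk .incomplete file.

    completed_files maps path -> (size, mtime) as recorded when that
    file's chunks finished uploading before the abort.
    """
    completed_files: dict


def plan_resume(snapshot, current_files):
    """Classify each file in the retry as 'skip' or 'rescan'.

    current_files: path -> (size, mtime) as seen by the retried backup.
    A file is skipped only if it finished uploading last time AND its
    size/mtime are unchanged; anything else (a file that was mid-scan,
    modified, or new) is rescanned from the start, since its content
    may differ from what was partially uploaded.
    """
    plan = {}
    for path, meta in current_files.items():
        if snapshot.completed_files.get(path) == meta:
            plan[path] = "skip"
        else:
            plan[path] = "rescan"
    return plan


# Example: a.txt finished uploading and is unchanged, b.txt was modified
# after the abort, c.txt is new -- only a.txt gets the fast path.
snap = IncompleteSnapshot(completed_files={"a.txt": (100, 1), "b.txt": (200, 2)})
current = {"a.txt": (100, 1), "b.txt": (200, 3), "c.txt": (50, 9)}
print(plan_resume(snap, current))
# -> {'a.txt': 'skip', 'b.txt': 'rescan', 'c.txt': 'rescan'}
```

Note that even a "skip" is not free: the file may still need to be read and chunked locally to confirm its chunks exist, which is one plausible reason skipping hundreds of files can still take hours.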


#3

The problem is that it's skipping chunks for hundreds of files that were already completely uploaded.
It took a few hours just to skip through all the files that were already uploaded :frowning:

I think there should be something in place to deal with this kind of issue.
Thanks!!
-DM


#4

Hm, in the OP you say

and my understanding from this discussion was that the existence of an .incomplete file makes the skipping of chunks unnecessary. So if you're seeing chunks being skipped for ages (like I did), that could indicate that your incomplete snapshot is from an earlier interrupted backup, and that when the process was interrupted again later on, no (new) incomplete snapshot was saved.

But that explanation would only work if duplicacy doesn't delete the incomplete snapshot once it has resumed the backup.

Otherwise the only explanation would be that the chunks have been uploaded from a different repository.