Dear developer,
I’m evaluating the GUI version of Duplicacy before buying it, and I came across an issue.
While trying to back up a large dataset (about 1.5TB) to Google Drive, I hit the Google Drive daily upload limit.
Duplicacy then told me it had created a .incomplete file.
I was under the impression that it would continue from where it left off, but when I ran the backup again it seemed to be scanning through all the already-backed-up chunks, which wastes a lot of time.
I read somewhere on the forum that it’s supposed to know where it stopped based on the incomplete file.
Any ideas? Is this a bug?
Thanks a bunch!!
-DM