If you only modify a few bytes at the start of the file (or anywhere else), then Duplicacy should be able to download only the chunks affected by the modification.
However, if you add or delete a few bytes, then Duplicacy will have trouble finding unmodified chunks in the existing file. This is not due to any inefficiency in the variable-size chunking algorithm, but rather to the way Duplicacy splits the existing file: it cuts the file at the same offsets as the chunks in the storage, so after an insertion or deletion every subsequent offset shifts and none of those chunks will match.
An obvious improvement would be to run the variable-size chunking algorithm on the existing file itself. But insertions and deletions into large files are rare in practice, and the current implementation is much faster: there is no need to compute the rolling hash one byte at a time over the entire file, only the hashes of chunks at known offsets.
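To see why content-defined boundaries survive an insertion while fixed offsets do not, here is a toy sketch. Everything in it — the window size, the mask, the polynomial rolling hash, the 64-byte average chunk size — is illustrative and much smaller than real settings; it is not Duplicacy's actual algorithm or parameters. A boundary is declared wherever the rolling hash of the last few bytes matches a pattern, so boundary positions depend only on nearby content and realign downstream of an insertion:

```python
import hashlib

WINDOW = 16      # rolling-hash window in bytes (illustrative)
MASK = 0x3F      # boundary when hash & MASK == 0 -> ~64-byte average chunks (tiny, for demo)
B, M = 263, 1 << 32
POW_W = pow(B, WINDOW, M)

def cdc_chunks(data: bytes) -> list[bytes]:
    """Content-defined chunking: cut wherever a rolling hash of the last
    WINDOW bytes hits a fixed pattern. Boundaries depend only on local
    content, so they resynchronize after an insertion or deletion."""
    chunks, start, h = [], 0, 0
    for i, b in enumerate(data):
        h = (h * B + b) % M                    # slide the new byte in
        if i >= WINDOW:
            h = (h - data[i - WINDOW] * POW_W) % M  # slide the old byte out
        if i >= WINDOW - 1 and (h & MASK) == 0:
            chunks.append(data[start:i + 1])
            start = i + 1
    if start < len(data):
        chunks.append(data[start:])
    return chunks

def fixed_chunks(data: bytes, size: int = 64) -> list[bytes]:
    """Cut at fixed offsets -- what re-cutting the file at the stored
    chunk offsets amounts to. An insertion shifts every later chunk."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def pseudo_bytes(n: int, seed: bytes = b"demo") -> bytes:
    """Deterministic pseudo-random test data."""
    out, h = b"", seed
    while len(out) < n:
        h = hashlib.sha256(h).digest()
        out += h
    return out[:n]

original = pseudo_bytes(4096)
modified = original[:1000] + b"abc" + original[1000:]   # insert 3 bytes mid-file

cdc_shared = set(cdc_chunks(original)) & set(cdc_chunks(modified))
fixed_shared = set(fixed_chunks(original)) & set(fixed_chunks(modified))
print(f"chunks still shared: content-defined={len(cdc_shared)}, fixed-offset={len(fixed_shared)}")
```

With fixed offsets, only the chunks entirely before the insertion point still match; with content-defined boundaries, the chunks after the insertion realign as well, because each boundary decision looks only at the preceding WINDOW bytes.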