I would further clarify that while this is correct, there are two modes:
- Mode 1, where Duplicacy traverses your whole file tree and uploads every "chunk" that doesn't already exist. If it's a repeat backup with only minor changes, only the new and different chunks get uploaded. However, it still takes time for Duplicacy to "go through the motions" of checking each individual chunk to see whether it already exists (see the sketch after this list)
- Mode 2, where Duplicacy keeps an internal record of all files previously backed up and only visits the files that have changed. It then breaks those into chunks and, again, uploads only the chunks that don't already exist. This is useful if, say, you have a movie file whose metadata you changed without touching the video data. Any backup program, including Duplicacy, will see the file as "changed." With Duplicacy, though, since the original movie file was already backed up, only the new/different chunk(s) containing the changed metadata get uploaded, potentially saving a lot of bandwidth.
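To make the chunk-existence check concrete, here's a minimal sketch of chunk-level deduplication. This is not Duplicacy's actual code: the fixed chunk size, the `seen` map, and the function names are illustrative assumptions (Duplicacy uses variable-size, content-defined chunking and a real storage backend, not an in-memory set).

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// Fixed chunk size for illustration only; Duplicacy actually uses
// variable-size, content-defined chunking.
const chunkSize = 4 * 1024 * 1024 // 4 MiB

// seen stands in for the destination's chunk index: the set of
// chunk IDs that already exist in the backup storage.
var seen = map[string]bool{}

// backupFile reads a file chunk by chunk, hashes each chunk, and
// "uploads" only chunks whose ID isn't already at the destination.
func backupFile(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	buf := make([]byte, chunkSize)
	for {
		n, err := io.ReadFull(f, buf)
		if n > 0 {
			sum := sha256.Sum256(buf[:n])
			id := hex.EncodeToString(sum[:])
			if seen[id] {
				fmt.Println("skip (already exists):", id[:12])
			} else {
				fmt.Println("upload new chunk:     ", id[:12])
				seen[id] = true // upload it, then record it
			}
		}
		if err == io.EOF || err == io.ErrUnexpectedEOF {
			return nil // reached end of file
		}
		if err != nil {
			return err
		}
	}
}

func main() {
	if err := backupFile("example.dat"); err != nil {
		fmt.Println("backup failed:", err)
	}
}
```

Note that even when every chunk is skipped, the whole file still has to be read and hashed; that per-chunk work is exactly the time cost Mode 1 pays on every run.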
In any case, Mode 2 is faster than Mode 1 because it doesn't have to do nearly as much work, but in the end both result in the same space used on the backup destination. The destination is the same; only the journey is expedited (in Mode 2). So, on the first pass, Duplicacy does #1 above, then on the following passes it does #2 (no, we're not referring to potty training).
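Mode 2's speedup comes from deciding which files to revisit at all. A common way to make that call, and what the sketch below assumes (not necessarily how Duplicacy itself decides), is to compare each file's size and modification time against the record kept by the last snapshot; unchanged files are never re-read or re-chunked:

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// FileRecord is the per-file metadata a previous snapshot might keep;
// the exact fields Duplicacy stores may differ.
type FileRecord struct {
	Size    int64
	ModTime time.Time
}

// needsRechunk decides, from size and modification time alone,
// whether a file must be re-read and re-chunked. Skipping the
// unchanged files entirely is where Mode 2 saves its time.
func needsRechunk(path string, prev map[string]FileRecord) (bool, error) {
	info, err := os.Stat(path)
	if err != nil {
		return false, err
	}
	rec, ok := prev[path]
	if !ok {
		return true, nil // not in the last snapshot: treat as new
	}
	return info.Size() != rec.Size || !info.ModTime().Equal(rec.ModTime), nil
}

func main() {
	prev := map[string]FileRecord{} // empty record: everything looks new
	changed, err := needsRechunk("example.dat", prev)
	fmt.Println("needs re-chunking:", changed, err)
}
```

Files that do need re-chunking then go through the same dedup check as in the first sketch, which is why both modes end up storing identical chunks.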
I have a RAID with 22TB of data on it, but with Duplicacy the backups come down to about 8TB, and incrementals add only a tiny bit to that.