There is one aspect that I still don’t understand, even after long use.
When only a few files change in the repository, the backup is extremely fast (seconds): it quickly identifies the modified files and updates the storage.
However, when many files are modified or added, it seems to redo a full repository scan. It obviously skips ~95% of the chunks, but it still takes a long time because it walks the entire repository; it doesn’t seem to identify the modified/added files as easily.
What determines whether a backup will “scan” the entire repository or quickly identify the modified/added files? The size of those files? The cache?
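For context, my mental model of the fast path is something like this (a hypothetical sketch, not the tool’s actual code): compare each file’s current metadata against a cache from the previous snapshot, and only re-read files whose metadata has changed. The function and cache layout below are my own assumptions for illustration.

```python
import os

def changed_files(root, cache):
    """Return paths that need re-reading/re-chunking, decided by comparing
    stat metadata (size + mtime) against a cache from the previous snapshot.
    `cache` maps path -> (size, mtime_ns); anything that differs is rescanned."""
    to_rescan = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            if cache.get(path) != (st.st_size, st.st_mtime_ns):
                to_rescan.append(path)  # metadata changed (or new file): re-chunk it
    return to_rescan
```

Note that even this “fast” path still stats every file in the tree; what it avoids is re-reading file contents. So my question is really whether the slow case means the metadata check is being skipped (e.g. a missing/invalidated cache), or whether re-reading many files is simply expected to dominate.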