I’ve just started using Duplicacy after relying on ‘ad hoc’ backups (i.e. just copying my important data onto other hard drives). I’m now trying to do proper backups, and all my data sits on an Unraid server, which I’m backing up with Duplicacy. I want to get rid of the old backup drives, but I’m reluctant to just delete them. Presumably, if I back up those hard drives with Duplicacy to the same storage as my main backup, the deduplication feature will mean that, as long as the files on those old drives are identical to the ones on my Unraid server, I’ll use almost no extra storage space? As a bonus, is there any way to identify the non-identical files?
Thanks
So long as those old backups aren’t packed in a different way - i.e. compressed or encrypted - and contain pretty much the same files, even if in different locations, then yes, the de-duplication should work out of the box.
As for identifying non-identical (unique, un-de-duplicated) files… first you have to remember that Duplicacy works at the chunk level, not the file level, so you can only get an indication by running a check after it’s all backed up. `duplicacy check -tabular` gives you a column for what’s unique per revision, but those numbers are about chunks, not files.
You could maybe run a backup job with the -dry-run flag first, but it may not tell you the whole story, since it only reports de-duplication against chunks that already exist in the storage - not chunks that would de-duplicate within the same source.