Long-time user, but I recently had to migrate to a new home system. I'm running Duplicacy in a Docker container, and the underlying system data was migrated (disk to disk via rsync) during the move. A couple of notes:
- Duplicacy Web Edition 1.7.2
- The Docker container volume mountpoint used for backups changed, but the underlying data is the same (see the sketch after this list).
- The Duplicacy app data, config, and cache were migrated; I did not rebuild the setup from scratch.
- Storage is in Wasabi and contains approximately 500k chunks, coming in at around 3 TB.
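To illustrate the mountpoint change, here's roughly what it looks like (paths and image name are placeholders, not my exact setup):

```sh
# Old host: data and Duplicacy config mounted into the container
# (illustrative paths/image, not my actual configuration)
docker run -d \
  -v /mnt/olddisk/data:/backuproot \
  -v /mnt/olddisk/duplicacy:/config \
  duplicacy-web-image

# New host: same data after the rsync migration, but both the host
# path and the container-side mountpoint are different
docker run -d \
  -v /tank/data:/data \
  -v /tank/duplicacy:/config \
  duplicacy-web-image
```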
Check and prune jobs work fine, but when backups start, the job wants to back up every file as if every file had changed. I had hoped it would recognize that the content is already backed up in Wasabi and just perform incrementals, backing up only changed files. I suspect something happened to make it think every file has changed (timestamps? ownership/permissions? the backup endpoint change?). I thought I was careful about that when using rsync, but it's not clear to me what kinds of changes would trigger Duplicacy to perform a full backup.
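For what it's worth, while the old disk is still reachable, I believe a metadata-only rsync dry run would flag any files whose attributes drifted during the copy (paths below are placeholders):

```sh
# -n: dry run, -i: itemize differences, -a: compare mtimes, perms,
# owner, and group in addition to size
rsync -ain /mnt/olddisk/data/ /tank/data/
# In the itemized output, 't', 'p', 'o', and 'g' flags mark files whose
# timestamps, permissions, owner, or group changed even though the
# contents are identical.
```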
Unfortunately, letting this run will double my storage usage. I would either need to make the backup look beyond whatever surface-level change is causing this (I am confident that no file content has changed), or do some heavy pruning to make room for a full backup (something I'd like to avoid if I can).
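Before committing to anything, my plan is to measure the actual upload with the CLI's dry-run mode from inside the container (the flags below reflect my understanding of the CLI; run from the repository directory). If chunk-level deduplication kicks in the way I hope, this should show whether storage would really double:

```sh
# From the repository directory inside the container:
duplicacy backup -dry-run -stats        # simulate and report what would upload
# If stale timestamps are the trigger, -hash makes Duplicacy detect
# changes by file content instead of timestamp/size:
duplicacy backup -dry-run -hash -stats
```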
Cheers.