I’ve seen this behavior previously, but I’m curious as to why it happens.
I relocated the local storage pool (a local USB drive) from the NAS (the repository host) to another server. Using the web GUI, I created a new storage pointing at the new location, then ran a Check operation to confirm that Duplicacy could read the existing snapshots. The first time through, Check did not list the snapshots; I had to adjust file permissions and rerun the Check job, and the second run succeeded.
Next, I started a backup job. Duplicacy appears to be reading/processing EVERY file in the repository, only to confirm that the vast majority are already chunked and present in the storage pool. The repository is 7 TB, so this complete rediscovery is taking over 24 hours. The log shows: “BACKUP_START No previous backup found”.
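For reference, this is roughly the CLI equivalent of what I did through the web GUI, as best I can reconstruct it (the storage name, snapshot ID, paths, and SFTP URL below are just placeholders, not my actual settings):

```
cd /path/to/repository

# Point the existing repository at the storage in its new location
duplicacy add usb-pool nas-docs sftp://user@newserver//mnt/usb-pool

# Verify the existing snapshots are readable (this failed until I fixed
# permissions; the second run succeeded)
duplicacy check -storage usb-pool

# The backup that is now re-reading every file
duplicacy backup -storage usb-pool -stats
```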
Did I miss a step? Or is there some way to avoid having Duplicacy start from scratch like this?