This is what I would do…
1) Create a new local storage using the `add -copy` command (from your cloud storage).
This effectively copies the storage parameters, such as chunk size and encryption keys, making it copy-compatible. (You can use a different master password with `-encrypt` if you want.)
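As a rough CLI sketch - the storage names `cloud` and `local`, the backup ID, and the local path are placeholders, so adjust them to your setup:

```
# Run inside a repository that already has the cloud storage configured.
# This adds a second storage, named "local", that is copy-compatible with "cloud".
# -e encrypts the new storage; you'll be prompted for its (possibly different) password.
duplicacy add -e -copy cloud local my-backup-id /mnt/backup/duplicacy
```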
2) Adjust your backup schedules to back up only to the local storage.
3) Run all your backup jobs to this new storage, using the same repository IDs.
This quickly populates the local storage with chunks that should be mostly identical to the ones in the cloud (minus historic revisions, and some minor differences due to chunk boundaries when rehashing).
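From the CLI, that's something like this in each repository (assuming the local storage was named `local` as in the step 1 sketch):

```
# Same repository and backup ID as before - just point the backup at the local storage.
duplicacy backup -storage local -stats
```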
4) Copy (with `-all`) the cloud storage to the local storage.
This fills in the rest of the missing chunks from previous revisions, and brings all those old snapshots back to your local storage. It should also overwrite snapshot revision #1 of each backup made in step 3 with the original one you uploaded to the cloud a while ago, unless those were pruned*.
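A minimal sketch, assuming the same placeholder storage names (check `duplicacy copy -help` for your version; without `-id`/`-r` restrictions the copy covers every revision):

```
# Pull everything from the cloud storage into the local storage.
duplicacy copy -from cloud -to local
```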
5) Make a new job to `copy -all` from the local storage to the cloud storage.
Perhaps add this after the backup jobs, or you can do it once a day or whenever.
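Roughly the reverse of the step 4 command, with the same placeholder names:

```
# Scheduled after the local backup jobs: push everything back up to the cloud.
duplicacy copy -from local -to cloud
```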
6) [Optional] Run a `prune -exclusive -exhaustive` on the local storage to tidy things up.
There'll probably be unreferenced chunks left over from step 3.
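For example (run it only while no backups or copies are touching the local storage, since `-exclusive` assumes exclusive access):

```
# -exhaustive scans the storage for chunks not referenced by any snapshot;
# -exclusive lets prune remove them immediately, assuming nothing else is using the storage.
duplicacy prune -storage local -exclusive -exhaustive
```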
Step 3 is basically a time-saving measure and should hopefully avoid a lot of downloading.
You can do all of this with the web GUI, but you'd need to use the CLI for the `add` command.
*IF the revision 1s in the cloud were pruned, the local, temporary ones you made in step 3 won't be overwritten in step 4, so you might want to manually delete all the `1` files in the local storage under `snapshots/` just to be consistent - I don't know if there'd be any adverse effects if you didn't, though.
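If you do decide to clear them out, they're the files named `1` under each backup ID in the storage's `snapshots/` folder - for example (path and backup ID are placeholders):

```
# Remove only the temporary revision 1 created in step 3;
# chunks and the other, copied, revisions are left untouched.
rm /mnt/backup/duplicacy/snapshots/my-backup-id/1
```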