Effectively yes, and the same concept was recently discussed here.
You don’t have to keep the snapshot IDs, as alluded to in that thread; in fact, it makes things much easier to simply use new ones from that point forward, since you can then follow precisely those steps.
However, if you did want to keep and reuse the IDs then some additional steps might be necessary:
```
duplicacy backup -storage my-b2-backup -hash
duplicacy copy -id <snapshot id> -r <last revision> -from my-b2-backup -to my-local-backup
```
- 2d. Optional: manually delete the revision 1 file created in step 2a.
The -hash option makes sure that every chunk a fresh local backup generates also exists on B2. That may not otherwise be the case if your repository has evolved quite a bit over time: a chunk from back at revision 1 may include parts of files that no longer exist, yet it won’t get pruned because it’s still referenced.
So when the chunking algorithm looks at the way your files are now organised, most chunks will likely be de-duplicated and deterministic, but some at the boundaries will be new.
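A toy illustration of why that happens, under the assumption of a generic content-defined chunker (this is not duplicacy’s actual rolling hash, just the same principle): after an edit, boundaries re-synchronise quickly, so most chunks are byte-identical and de-duplicate, while a few around the edited region are new.

```python
import random, hashlib

def chunk(data: bytes, window: int = 16, mask: int = 0x3F) -> list[bytes]:
    """Toy content-defined chunker: cut wherever a rolling sum over a
    small window satisfies a boundary condition. Illustrative only."""
    chunks, start = [], 0
    for i in range(window, len(data)):
        if sum(data[i - window:i]) & mask == 0:  # content-defined boundary
            chunks.append(data[start:i])
            start = i
    chunks.append(data[start:])
    return chunks

random.seed(42)
original = bytes(random.getrandbits(8) for _ in range(20000))
# Simulate the repository "evolving": delete a small slice in the middle.
edited = original[:5000] + original[5100:]

a = {hashlib.sha256(c).hexdigest() for c in chunk(original)}
b = {hashlib.sha256(c).hexdigest() for c in chunk(edited)}
shared = len(a & b)
print(f"{shared}/{len(b)} chunks unchanged; {len(b - a)} new chunks near the edit")
```

Because boundaries depend on content rather than fixed offsets, everything after the edit realigns with the old chunking, so only the handful of chunks spanning the edit itself need uploading.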
Personally, I’d use a unique set of snapshot IDs for the new setup and start from revision 1.
You wouldn’t have to worry about running a -hash backup, as those chunks get recomputed on the local side anyway and uploaded if missing. Plus you can still keep the old snapshots around until they get pruned out of the B2 storage.
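For reference, the fresh-start route might look something like the sketch below. All names and paths here are placeholders, not from the thread (new-id, the storage URLs, and the local paths are illustrative), and the -copy option assumes you want the B2 storage registered as copy-compatible:

```shell
cd /path/to/repo
# Re-initialise the repository under a brand-new snapshot ID,
# pointing at the local storage:
duplicacy init new-id /path/to/local/storage
# Register the existing B2 storage as a copy-compatible target:
duplicacy add -copy default my-b2-backup new-id b2://bucket-name
# Revision 1 under the new ID, then copy to B2; chunks already
# present on B2 are skipped, so only genuinely new chunks upload:
duplicacy backup
duplicacy copy -from default -to my-b2-backup
```

The old snapshot IDs remain untouched on B2 and can be pruned on whatever schedule you already use.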