Hi! I’ve been using duplicacy for some time now, the last 4 years actually. It’s been a great tool with marvelous backup speed. However, I never had a true disaster recovery use case until yesterday, and I wasn’t able to solve it to my liking. So I am wondering if there is a workflow that solves my use case elegantly.
My situation can be described as:
- My duplicacy repository is my home directory.
- Inside my home dir, there is a folder called `photos`. It holds about 100 GB of media files.
- I was about to start a bulk renaming job that changes the naming schema of all `.jpg` files in this directory.
My sequence of actions was this:
- I realized there was a lot of room for mistakes ahead, so I created a duplicacy revision before embarking on my journey (revision 100).
- I then started the renaming job, but due to an unhandled edge case, about 20% of the photos received a bogus file name.
- I only noticed the error after creating a new revision 101.
- So now I’d like to restore the `photos` directory to the clean revision 100 via `duplicacy restore -r 100 -delete -overwrite "photos/*"`.
- According to the documentation, the `restore` command ignores the `-delete` option when a filter pattern is given. That means duplicacy simply restored the clean state on top of the faulty state, and 20% of my photos now exist twice under different names.
What I was expecting is a workflow similar to `git reset --hard`: that my `photos` directory gets restored exactly to the clean state of revision 100. How could I achieve this? Of course I could have completely wiped the `photos` directory beforehand, but then I’d have to re-download a whopping 100 GB when restoring from revision 100.
Any guidance appreciated!