I’ve been testing multiple backup workflows and solutions over the last week and have to say I’m really impressed with Duplicacy so far. However, there is a specific workflow that I need to be able to accomplish in order to deploy this as a permanent solution.
I have a very specific use case where I need to perpetually back up multiple large shares (50 shares of 2–10 TB each, and growing) from NAS1 to NAS2 using a PC running Duplicacy in between. The reason for this is that NAS1 has a proprietary file system and a locked-down OS, so there is no way of running Duplicacy on NAS1 or making NAS2 see it directly.
With that in mind, I also need to be able to do one of the following:
A.) keep all these shares in separate, self-contained backups, or
B.) have a way of extracting a particular share from the backup, as well as shrinking the master backup after extraction.
The reason for this is that the retention policy requires anything older than 1 year to be sent off to AWS Glacier Deep Archive. I can perform that part of the process manually, but I need a way of extracting and removing a particular share from the backup when the time comes.
What is the best way of going about this?
With Duplicati it’s fairly easy, as I can set a different destination every time I set up a new job. With Duplicacy, though, destinations are set up separately with a unique ID, so having to create 50 of them, then wipe those 50 and add new ones every year, seems like a very tedious solution…
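To illustrate what I have in mind for option A, here’s a rough sketch of how I imagine scripting the per-share setup with the Duplicacy CLI (`duplicacy init` / `duplicacy backup`). This assumes the CLI is on PATH, each share is mounted on the PC under /mnt/nas1/<share>, and NAS2 is reachable over SFTP; all hostnames, paths and share names below are made up, so treat it as a sketch rather than a working solution:

```python
# Rough sketch only: assumes the Duplicacy CLI is on PATH, each NAS1 share is
# mounted on the PC under /mnt/nas1/<share>, and NAS2 is reachable over SFTP.
# All names, paths and the storage URL are hypothetical.
import subprocess
from pathlib import Path

NAS1_MOUNT = Path("/mnt/nas1")                       # hypothetical mount point for NAS1 shares
STORAGE_BASE = "sftp://backup@nas2.local/duplicacy"  # hypothetical base path on NAS2

def init_share(share: str) -> None:
    """Initialize a share as its own repository, with its own storage directory on NAS2."""
    subprocess.run(
        ["duplicacy", "init", share, f"{STORAGE_BASE}/{share}"],
        cwd=NAS1_MOUNT / share,
        check=True,
    )

def backup_share(share: str) -> None:
    """Back up one share to its own storage."""
    subprocess.run(
        ["duplicacy", "backup", "-stats"],
        cwd=NAS1_MOUNT / share,
        check=True,
    )

if __name__ == "__main__":
    shares = sorted(p.name for p in NAS1_MOUNT.iterdir() if p.is_dir())
    for share in shares:
        init_share(share)
        backup_share(share)
```

The idea is that each share would get its own snapshot ID and its own storage directory on NAS2, so a single share’s backup could later be shipped off to Glacier and removed on its own. Is something along these lines the intended way to handle this many shares, or is there a better pattern?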
Also, will I then be able to restore from the cloud (recalling to S3 Standard storage first) on another computer?