I have a use case with several VMs (new VMs coming and going over time), an SFTP-accessible server, and a number of disks used in a backup rotation.
I have a script that, from a central place, SSHes into each VM and runs duplicacy to trigger a backup over SFTP. The challenge is that I want to avoid having to pre-initialize storage for every new VM against each current backup disk before a backup can start. Instead, the backup script should initialize the backend on each run, to handle the case where I slot in a never-before-used disk.
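For context, the central driver looks roughly like this (VM names and the repository path are placeholders; the duplicacy invocation is echoed here rather than executed):

```shell
#!/bin/sh
# Hypothetical sketch of the central backup driver. In the real script
# the echo is dropped so that ssh actually runs duplicacy on each VM.
VMS="vm1 vm2 vm3"
for vm in $VMS; do
    echo ssh "$vm" "cd /srv/backup && duplicacy backup"
done
```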
I tried the obvious approach of blindly running "duplicacy init" at the start of the backup process, in the hope that it would initialize the backend if the backend didn't already know about the snapshot id in question. However, the init command immediately returns "The repository […] has already been initialized" based solely on the presence of the local .duplicacy/preferences file, not on the state of the backend. The storage backend isn't even consulted.
I can script my way around this, but I'm proposing a -force option for the init command, or something similar, that would actually check with the backend to see whether things truly need to be initialized (in which case it would probably also be prudent to invalidate the local cache – at least snapshots and fossils).
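For reference, the workaround I have in mind looks something like the following (all names, paths, and URLs are hypothetical). Wiping the local .duplicacy directory forces the next init to consult the backend, creating the storage config on a fresh disk and reusing an existing one otherwise. The duplicacy commands are echoed here as a dry run; drop the echos to run it for real:

```shell
#!/bin/sh
# Sketch of the scripted workaround, not a polished implementation.
reinit_and_backup() {
    repo=$1 snapshot_id=$2 storage_url=$3
    cd "$repo" || return 1
    # Discard local-only state (preferences and cache) so init cannot
    # short-circuit on .duplicacy/preferences and must talk to storage.
    rm -rf .duplicacy
    echo duplicacy init "$snapshot_id" "$storage_url"
    echo duplicacy backup
}
```

The obvious downside is that this throws away the local cache on every run even when the disk hasn't changed, which is exactly why a backend-aware -force option would be nicer.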
I could submit a PR if this sounds sane.
Thanks!