Not sure if this will help you at all - but I went a slightly tangential route:
- Duplicacy from clients to NAS via SFTP (rough sketch of this step right after the list)
- NAS takes volume snapshots once a day
- NAS then replicates those snapshots to a remote NAS using the NAS's native tools, which are supposed to just work; a few recent snapshots are kept, so I have several recent duplicacy datastore states to revert to if e.g. the NAS dies mid-replication.
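To illustrate the first step, this is roughly how I'd script the client side - a minimal sketch only, where the repo path is made up and I'm assuming the repository was already initialized against the SFTP storage on the NAS with `duplicacy init` (a cron job or Task Scheduler entry would just call this):

```python
import subprocess

# Made-up directory on the client that duplicacy was initialized in,
# i.e. where `duplicacy init` pointed it at the SFTP storage on the NAS.
REPO_DIR = "/home/me/documents"

# Run one backup to the NAS; -stats just prints a summary at the end.
subprocess.run(["duplicacy", "backup", "-stats"], cwd=REPO_DIR, check=True)
```

Steps 2 and 3 are whatever snapshot/replication tools your NAS ships with, so I won't pretend to script those here.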
Not sure how easy it would be to adapt this approach to wasabi though…
Edit - scratch that. You can just run `duplicacy copy` on the NAS in a loop all the time. Adding new, currently unreferenced chunks to the remote datastore via copy does not invalidate its current state, so at any point in time you have a consistent datastore in the cloud; and it does not matter how long it takes to transfer data to wasabi, or whether more chunks were added while you were copying.
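To make that concrete, here's a minimal sketch of the kind of loop I mean - assuming the wasabi destination was already added as a copy-compatible storage (via `duplicacy add -copy`), the storages are named `default` and `wasabi`, and the repo path and sleep interval are made-up placeholders:

```python
import subprocess
import time

REPO_DIR = "/volume1/duplicacy-repo"  # made-up path to the repository on the NAS
PAUSE = 15 * 60                       # seconds to wait between passes (arbitrary)

while True:
    # Copy whatever chunks/revisions the remote storage is still missing.
    # If this gets interrupted, the remote just ends up with some extra,
    # not-yet-referenced chunks; its last complete state stays usable.
    result = subprocess.run(
        ["duplicacy", "copy", "-from", "default", "-to", "wasabi"],
        cwd=REPO_DIR,
    )
    if result.returncode != 0:
        print("copy failed; will retry on the next pass")
    time.sleep(PAUSE)
```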
Unless I severely misunderstand how duplicacy works. Hopefully Gilbert can comment.
Edit 2. Now that I think about it, I'll probably start doing the same thing. My NAS has 8 GB of RAM…
Edit 3. Wow. I just decided to make a Docker container for duplicacy to avoid compiling for Synology, and then found this: GitHub - christophetd/duplicacy-autobackup: Painless automated backups to multiple storage providers with Docker and duplicacy. If your NAS can run Docker, this would be the way to go.