ZFS and Docker and KVM on Proxmox backup strategy

Hi, I’m moving from Unraid to Proxmox and ZFS, and I want to continue using Duplicacy and B2.

With ZFS I can back up snapshots, and I’d like to know what other ZFS or Proxmox users do for backup orchestration.

I don’t use ZFS or Proxmox myself, but I’ve heard that VM backups created by Proxmox aren’t deduplication-friendly, so backing up ZFS snapshots may be your only option.

The Proxmox backup creates dated, compressed archives, so it isn’t possible to dedupe the content inside them.

My intent is to back up the files, like I’ve been doing on Unraid, but I think there is an opportunity to use ZFS here, maybe even as a good future enhancement.

Today the backup takes time, and if files change while it runs, the backup as a set is not consistent.
Some apps keep their files in a state that can’t be backed up unless the app is stopped, so in my case I stop Docker, run the backup, then restart Docker. This leaves Docker down for an extended period of time.
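Roughly, today’s flow looks like this (just a sketch; the dataset and the CLI invocation stand in for whatever actually runs the backup):

```
running=$(docker ps -q)      # remember which containers were running
docker stop $running         # quiesce the apps so their files stop changing
duplicacy backup -stats      # long-running backup of the live files (CLI or the web GUI schedule)
docker start $running        # everything stays down for the whole backup
```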

If we can use snapshots, the backup can be done from the snapshot data, which is atomically captured and consistent. I can still stop Docker, but taking the snapshot is near-instant, so the downtime is very short.
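Something like this, where tank/appdata and the snapshot name are placeholders for my actual dataset:

```
running=$(docker ps -q)
docker stop $running                    # brief quiesce
zfs snapshot tank/appdata@duplicacy     # atomic and near-instant
docker start $running                   # containers are back within seconds

# The long-running backup then reads from the snapshot, and once it
# completes the snapshot can be destroyed:
#   zfs destroy tank/appdata@duplicacy
```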

Now this can all probably be done without Duplicacy needing to know anything about snapshots; I just need to figure out how to point Duplicacy at the snapshot view of the data. After the backup completes, the snapshot can be deleted.
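ZFS already exposes a read-only view of every snapshot under &lt;mountpoint&gt;/.zfs/snapshot/&lt;name&gt;, so one option might look like the sketch below. Dataset, snapshot ID and bucket names are made up, and I believe the -repository flag on init keeps the .duplicacy preferences out of the read-only snapshot path (worth double-checking against duplicacy help init):

```
# Optionally make the .zfs control directory browsable (it is accessible even when hidden):
zfs set snapdir=visible tank/appdata
ls /tank/appdata/.zfs/snapshot/duplicacy

# The snapshot view is read-only, so keep the .duplicacy preferences in a
# separate working directory and point the repository at the snapshot path:
mkdir -p /opt/duplicacy/appdata && cd /opt/duplicacy/appdata
duplicacy init -repository /tank/appdata/.zfs/snapshot/duplicacy appdata b2://my-bucket
duplicacy backup -stats
```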

I’d still be interested in hearing from anybody who backs up snapshots using Duplicacy.

Did you ever figure out how to use snapshots (ZFS or LVM) to get a consistent backup? I am also very interested in this, or in any Linux equivalent of Volume Shadow Copy on Windows.

It is still on my todo list.

I think the main challenge will be pointing the backup source at the snapshot, either by changing the backup config on every run or by reusing the same snapshot name every time.
Today I’m using the web-based Docker version, and I’d like to continue to do so, but I may have to switch to the CLI.

Is it possible to run a custom script before backup when using the scheduled web version?

I’m thinking the easiest way to back up a snapshot is to use a fixed snapshot directory as the source, and to delete and re-create the snapshot right before the backup runs.
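Roughly this, as a pre-backup script (dataset and snapshot names are placeholders for my setup):

```
#!/bin/sh
# pre-backup: refresh a fixed-name snapshot right before Duplicacy runs
DATASET=tank/appdata    # placeholder dataset name
SNAP=duplicacy          # fixed snapshot name, so the source path never changes

running=$(docker ps -q)                                # containers currently running
[ -n "$running" ] && docker stop $running              # brief quiesce
zfs destroy "${DATASET}@${SNAP}" 2>/dev/null || true   # ignore "does not exist" on the first run
zfs snapshot "${DATASET}@${SNAP}"
[ -n "$running" ] && docker start $running             # containers are back within seconds
exit 0
```

A matching post-backup script could destroy the snapshot after the run, although with a fixed name it simply gets replaced on the next run anyway.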

I just need a way to invoke my snapshot commands in the web version.

Yes, as described here:
Pre Command and Post Command Scripts. The scripts folder location is different, but otherwise it’s the exact same duplicacy executable.
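In other words, you drop executables named pre-&lt;command&gt; and post-&lt;command&gt; into the .duplicacy/scripts folder of the repository that the web GUI generates for each backup. A sketch, assuming the first backup in the GUI is index 0 (adjust the index to match your backup; when running the Docker image these paths are inside the container):

```
# per-backup repository generated by the web GUI:
SCRIPTS=~/.duplicacy-web/repositories/localhost/0/.duplicacy/scripts

mkdir -p "$SCRIPTS"
cp pre-backup post-backup "$SCRIPTS"/      # your own scripts, named pre-backup / post-backup
chmod +x "$SCRIPTS"/pre-backup "$SCRIPTS"/post-backup
```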

@saspus, where does ~/.duplicacy-web/repositories/localhost/n/.duplicacy/scripts map under the docker /config mount?

The /config is mapped to ~/.duplicacy-web (line 6 in launch.sh); however, in settings.json duplicacy configures a temporary_directory, which by default I think is ~/.duplicacy-web/repositories.

In my container that directory is configured to /cache instead, which is supposed to be mapped out to a Docker volume.
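For example, with the mounts spelled out (host-side paths, the port and the source mount are just examples; /config and /cache are the mount points discussed above):

```
docker run -d --name duplicacy-web \
  -p 3875:3875 \
  -v /opt/duplicacy-web/config:/config \
  -v /opt/duplicacy-web/cache:/cache \
  -v /tank/appdata:/backuproot/appdata \
  saspus/duplicacy-web
```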

This raises a question for @gchen: currently the only way to specify the pre/post scripts is to copy them to a location that is temporary by nature and not suitable for keeping user configuration. There should be a way to specify those scripts elsewhere in ~/.duplicacy-web, from where duplicacy would then copy them to its temporary_directory.