For some of my applications I have the option, when backing up locally, to
- Not compress the backup (backup is a tar file)
- Compress with gzip (backup is a tar.gz file)
- Compress with zstd (backup is a tar.zst file)
I store all of my backups in a specific folder on my server, and I then point Duplicacy to that folder. Duplicacy then backs up my backups to B2.
My understanding is that Duplicacy GUI defaults to backing up everything to B2 using gzip compression. This is great, but this is also where I have a question.
Knowing that Duplicacy is going to back up my files/folders using gzip, is there a preference between tar, tar.gz, and tar.zst for my local files? For example, can Duplicacy see the files within a tar archive and avoid storing duplicates of them on my B2 storage, thereby saving me storage costs?
My question is purely about which option uses the least space when backing up multiple revisions of a backup to B2. Locally, I save the most space with zst, then gzip, and then tar. zst is the fastest, followed closely by tar, and gzip is about 2x slower than both of those. So my preference would be to store zst rather than tar locally, since it’s slightly faster than tar while reducing the size of the backup by about 3x. But if that’s going to end up creating even more data in my B2 backup bucket compared to a plain tar file, I’d rather store a larger file locally in order to save on B2 storage costs in the long run. Hopefully that makes sense.
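For context on why I suspect compressed archives might dedup poorly: as I understand it, the gzip file format embeds a 4-byte modification timestamp in its header, so re-compressing the exact same data on a different day produces a byte-for-byte different .tar.gz, and a chunk-based deduplicator then sees "new" data. A quick Python sketch of just that header effect (illustrative only, nothing to do with Duplicacy's actual chunking):

```python
import gzip

# Stand-in for a tar archive; the repeated pattern compresses well.
data = b"example backup payload " * 4096

# The gzip header embeds a modification time, so two runs over
# identical input can produce byte-for-byte different .tar.gz files.
run1 = gzip.compress(data, mtime=0)
run2 = gzip.compress(data, mtime=1)
print(run1 == run2)  # False: same contents, different archives

# With the timestamp pinned, gzip output is deterministic, so an
# unchanged backup would at least produce the identical file again.
print(gzip.compress(data, mtime=0) == run1)  # True
```

Even setting the timestamp aside, a small change early in the input tends to perturb the rest of the compressed stream, which is another reason I'd expect less overlap between revisions of a .tar.gz than between revisions of a plain tar.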