Zstd compression algorithm

LZ4 is a great compression algorithm for large datasets and is extremely fast, but its compression ratio is limited compared with other algorithms.
While researching compression algorithms, I saw that @gchen has said in the past that Duplicacy doesn’t support zstd because it lacks a pure Go implementation.

Could this Go package be used to incorporate zstd into Duplicacy?

Or is it not a suitable package? (I’m by no means an expert.) I’d be interested to see what sort of Duplicacy backup size could be achieved with zstd compared with LZ4, and what the speed tradeoffs would be.
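For context, here is a minimal round-trip sketch of that API, assuming the package in question is github.com/klauspost/compress/zstd (the widely used pure-Go implementation); the helper names below are mine for illustration, not anything from Duplicacy:

```go
package main

import (
	"bytes"
	"fmt"

	"github.com/klauspost/compress/zstd"
)

// compressChunk and decompressChunk are hypothetical helpers; they only
// show the shape of the klauspost/compress buffer-oriented API.
func compressChunk(enc *zstd.Encoder, chunk []byte) []byte {
	// EncodeAll appends a complete zstd frame to the second argument.
	return enc.EncodeAll(chunk, nil)
}

func decompressChunk(dec *zstd.Decoder, frame []byte) ([]byte, error) {
	return dec.DecodeAll(frame, nil)
}

func main() {
	// A nil writer/reader is the documented pattern for reusable
	// EncodeAll/DecodeAll use; both are safe for concurrent calls.
	enc, _ := zstd.NewWriter(nil, zstd.WithEncoderLevel(zstd.SpeedDefault))
	dec, _ := zstd.NewReader(nil)
	defer enc.Close()
	defer dec.Close()

	data := bytes.Repeat([]byte("example chunk data "), 16)
	compressed := compressChunk(enc, data)
	restored, err := decompressChunk(dec, compressed)
	if err != nil {
		panic(err)
	}
	fmt.Printf("original %d bytes, compressed %d bytes, roundtrip ok: %v\n",
		len(data), len(compressed), bytes.Equal(restored, data))
}
```

The encoder levels (`SpeedFastest` through `SpeedBestCompression`) are where the size-versus-speed tradeoff against LZ4 would show up.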


This is exactly what we need. I’ll give it a try for the next CLI release.


Oh wow, great! Looking forward to hearing about the results.

@gchen now that 3.2.0 has been released (why is my web UI not picking it up?), can we change the compression on an existing storage? I assume it just changes the default compression for new chunks if nothing else is specified, and the existing chunks stay as they are (i.e. LZ4).

I know we can also specify the compression in the backup command, which I guess would have a similar effect?
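(From what I can tell in the release notes, and I may be misreading the exact flags, it would be something like `duplicacy backup -zstd` or `duplicacy backup -zstd-level <level>`; since uploaded chunks are immutable, only newly created chunks would use zstd.)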