This is the commit: Implement zstd compression (gilbertchen/duplicacy@53b0f3f)
Zstd compression can be enabled by passing `-zstd` or `-zstd-level <level>` to the `init` or `add` command. Using `-zstd` sets the compression level to `default`, while `-zstd-level` lets you choose a compression level from `fastest`, `default`, `better`, or `best`.
The `backup` command also accepts these two options, so you can switch to zstd without initializing a new storage.
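For example, enabling zstd might look like the commands below. The `-zstd` and `-zstd-level` flags are the ones described above; the snapshot IDs, storage names, and URLs are placeholders, and the exact argument order should be checked against `duplicacy -help`:

```shell
# Initialize a new storage with zstd at the default level
duplicacy init -zstd mysnapshots s3://bucket/path

# Or pick an explicit level when adding a storage:
# fastest, default, better, or best
duplicacy add -zstd-level best secondary mysnapshots b2://bucket

# Switch an existing storage to zstd at backup time
duplicacy backup -zstd-level better
```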
I ran a test on my own project code base. With the default LZ4 algorithm:
```
Files: 405702 total, 53,027M bytes; 403024 new, 53,027M bytes
File chunks: 10567 total, 53,027M bytes; 9636 new, 49,110M bytes, 32,570M bytes uploaded
Metadata chunks: 69 total, 87,238K bytes; 69 new, 87,238K bytes, 34,928K bytes uploaded
All chunks: 10636 total, 53,112M bytes; 9705 new, 49,195M bytes, 32,604M bytes uploaded
Total running time: 00:07:11
```
With `-zstd`:
```
Files: 405702 total, 53,027M bytes; 403024 new, 53,027M bytes
File chunks: 10567 total, 53,027M bytes; 9636 new, 49,110M bytes, 28,364M bytes uploaded
Metadata chunks: 69 total, 87,238K bytes; 69 new, 87,238K bytes, 23,309K bytes uploaded
All chunks: 10636 total, 53,112M bytes; 9705 new, 49,195M bytes, 28,386M bytes uploaded
Total running time: 00:06:35
```
So zstd runs slightly faster (00:06:35 vs. 00:07:11) while also uploading slightly less data (28,386M vs. 32,604M bytes).
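To put rough numbers on "slightly", here is a quick back-of-the-envelope calculation using the figures from the two runs above:

```python
# "All chunks" uploaded bytes (in M) and total running times, from the logs above
lz4_uploaded, zstd_uploaded = 32_604, 28_386
lz4_time, zstd_time = 7 * 60 + 11, 6 * 60 + 35  # seconds

space_saving = 1 - zstd_uploaded / lz4_uploaded
time_saving = 1 - zstd_time / lz4_time

print(f"zstd uploads {space_saving:.1%} less data")   # ~12.9% less
print(f"zstd finishes {time_saving:.1%} faster")      # ~8.4% faster
```

So for this particular data set the upload savings are more pronounced than the speedup, though both will vary with how compressible the data is.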
Would it be able to handle both on a case-by-case basis?