CLI release 3.2.0 is now available

Binaries can be downloaded from the GitHub release page for Duplicacy Command Line Version 3.2.0 (gilbertchen/duplicacy).

  • Added a Samba backend: 3a81c10
  • Implemented zstd compression: 53b0f3f
  • Added support for custom OneDrive credentials: #632 (thanks to @sevimo123)
  • Added support for SharePoint document libraries to the ODB backend: #633 (thanks to @sevimo123)
  • Fixed a connection leak in the Dropbox backend on some HTTP errors: gilbertchen/go-dropbox#5 (thanks to @northnose)
  • Fixed bugs that caused the B2 and Google Drive backends to fail to download metadata chunks that have been marked as fossils: ff207ba 1f9ad0e
  • Fixed a crash when some backends return with an empty entry path: cdf8f5a
  • Fixed a crash caused by concurrent access to a map during chunk verification: #649 (thanks to @northnose)
  • Fixed a chunk leak when listing files in a revision: 9be475f

(@gchen, I thought it was important to create the post here on the forum, in addition to GitHub, to communicate about the new version as always. Feel free to edit / take ownership.)

So is it possible to change the compression on an existing storage? Or is that meaningless when you set it on the backup command?

Or, to put it another way: what does enabling compression on a storage do compared to just setting it on the backup commands?

Specifying compression at storage creation time sets the default compression that will be used if you don't specify one in the backup command.

Retroactively? You can create a new storage with the desired compression and duplicacy copy all snapshots from the old storage to the new one, as sketched below.
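For instance, a minimal sketch of that migration. The storage names ("offsite", "my-backups") and the bucket URL are placeholders, and passing -zstd at storage creation time is an assumption based on the discussion above (it mirrors the flag the backup command takes):

```
# Add a second storage that is copy-compatible with the existing "default" one.
# "offsite", "my-backups" and the bucket URL are placeholders; -zstd at add
# time is assumed to work the same way it does for the backup command.
duplicacy add -e -copy default -zstd offsite my-backups b2://my-new-bucket

# Copy all snapshots from the old storage into the new one.
duplicacy copy -from default -to offsite
```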

Why would it be an option if it was meaningless? The backup command is one of the few commands that uses compression.

You can specify it in the backup command and all new chunks will be compressed with the specified compression. Or you could have specified it at storage creation time and saved yourself a command-line argument every time you back up.
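Concretely, the per-job form is just the flag mentioned later in this thread, run from an initialized repository:

```
# Compress all new chunks produced by this backup with zstd;
# existing chunks in the storage are left untouched.
duplicacy backup -zstd -stats
```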

Can I change the 'default' going forward for a storage while leaving existing chunks alone, just by editing the config? I did this on my local storage and it seemed to work fine (I could back up, browse, etc.), but then I got an 'incompatibility' error trying to copy it to the online storage.

I don't think you should modify the config file, and it's encrypted if encryption is enabled anyway.
I would have expected the configuration to be in .duplicacy/preferences.

To be able to copy, the target storage must be created as copy-compatible with the current one. Is it? Also, the version of duplicacy doing the copy must not be too old.

The config file is in the storage. It's only 'incompatible' if I change the default compression in the local config.

The web version isn't automatically installing this when selecting either latest or stable for some reason (on macOS).

Thanks to @towerbr for posting the release notes. There are some minor issues (an "Invalid compression level 201" error, and allowing two copy-compatible storages to have different compression levels: gilbertchen/duplicacy@4e9d2c4) which have been fixed in the main branch. I'll create a new release 3.2.1 later this week.

It usually takes a few weeks for a new CLI release to become the latest version that the web GUI installs automatically. If you want to try it out now, you can manually download the CLI executable to ~/.duplicacy-web/bin and restart the web GUI.
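For example, on an Intel Mac that would look roughly like the following. The release tag and asset name are assumptions based on duplicacy's usual release naming, so check the release page for the exact file for your platform and architecture:

```
# Place the 3.2.0 CLI build where the web GUI looks for executables.
# Tag "v3.2.0" and asset "duplicacy_osx_x64_3.2.0" are assumed names;
# verify them on the GitHub release page.
cd ~/.duplicacy-web/bin
curl -LO https://github.com/gilbertchen/duplicacy/releases/download/v3.2.0/duplicacy_osx_x64_3.2.0
chmod +x duplicacy_osx_x64_3.2.0
# Restart the web GUI afterwards so it picks up the new binary.
```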

Thanks for the update. So can we just edit the config file on local/remote storage to change the default for new chunks?

Only if the storage is not encrypted (in which case the config file is just a plain JSON file). You can't easily edit the config file if the storage is encrypted.

I would not suggest editing an unencrypted config file, though. A better option is to add -zstd to every backup job.

Will 3.2.1 be released soon?
