Version 2.1.1 has been released

Thank you @gchen!

I'm particularly interested in the new Wasabi backend, but I have one question:

Can I edit my preferences files and simply change s3:// to wasabi://? Or do I have to create new storages? If yes, can I point the new storages to the same Wasabi locations that are currently set with s3://?

This should work, but you also need to change s3_id to wasabi_key, and s3_secret to wasabi_secret.
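
For example, with credentials saved in the .duplicacy/preferences file (rather than the OS keychain), the edit to a storage entry might look like this; the region, endpoint, bucket, and credential placeholders here are illustrative only:

```
 {
     "name": "default",
-    "storage": "s3://us-east-1@s3.wasabisys.com/my-bucket/backups",
+    "storage": "wasabi://us-east-1@s3.wasabisys.com/my-bucket/backups",
     "keys": {
-        "s3_id": "<access key>",
-        "s3_secret": "<secret key>"
+        "wasabi_key": "<access key>",
+        "wasabi_secret": "<secret key>"
     }
 }
```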

However this build has a bug with snapshot deletion if you’re pointing duplicacy at the top level of the storage bucket. I submitted a pull request fixing this a couple weeks back but it didn’t make the 2.1.1 release. So you may want to hold off for the next release before using the wasabi backend or cut your own build from master.
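
If you do want a build from master in the meantime, something along these lines should work, assuming a working Go toolchain (this is the project's usual Go import path; double-check against the README):

```
# Fetch the latest master and build the CLI; the binary lands in $GOPATH/bin
go get -u github.com/gilbertchen/duplicacy/...
```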

Thanks for the catch. I’ve fixed the links…

Your fix is in the 2.1.1 release. I didn’t explicitly mention it because the wasabi backend was new to this release.

Oh, great to know, thanks. I assumed it wasn’t in there not because you didn’t mention it in the change log, but because v2.1.1 was tagged in git before my pull request was merged.

Thanks for the catch. It has been fixed (a typo in my release script).

Why is there now only an arm64 version and no arm version? I cannot run it on my NAS :frowning:

I uploaded the arm version.

Ok thanks! I just updated the wiki concerning the WebDAV path and keys (please review it; some values, like the environment variable, are assumed by me and not tested).
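
For reference, here is the gist of what I put on the wiki; the init command and URL shape follow the other backends, but the environment variable name below is exactly the untested assumption:

```
# Initialize a repository against a WebDAV storage (user, host, and path are placeholders)
duplicacy init mybackup webdav://user@server.example.com/duplicacy

# Assumed, untested: supply the WebDAV password via an environment variable
export DUPLICACY_WEBDAV_PASSWORD='...'
```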

Are these releases always going to be backward compatible, except maybe across a major release (3.0+, etc.)? I am on 2.0.7 and think it's about time I upgrade to get the latest fixes, but I want to make sure I won't have to change anything.
Edit:
I am using CLI version 2.0.7 with GCP and the following command:

```
duplicacy.exe -log backup -vss -threads 8 -stats >> %logDir%\duplicacy_backup_%filesafetimestamp%.log
```

And I am using CLI version 2.0.9 on a separate machine; each backs up into its own storage folder.

You can safely upgrade to 2.1.1!

It looks like a second version of v2.1.1 has been pushed to GitHub, silently overwriting the original v2.1.1 files. Was this intentional? I noticed because I maintain the AUR build for this package in Arch Linux, and the checksum for the tarball has started failing.
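
For illustration, this is how the breakage surfaces on the packaging side (the artifact name and pinned hash are placeholders):

```
# Recompute the hash of the freshly downloaded release artifact
sha256sum v2.1.1.tar.gz
# The result no longer matches the sha256sums entry pinned in the PKGBUILD,
# so makepkg now fails its validity check
```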

Yes, that was intentional, because of the issue described in "If you're running prune and backup in parallel, please upgrade to 2.1.1". @gchen added the fix in the second 2.1.1 release.

@gchen I disagree with this practice of silent rebuilds and find it a bit misleading: any new release containing even a single new commit should be a new release. (Is this somehow related: Print git commit number · gilbertchen/duplicacy@48cc5ea · GitHub?)

Sorry about that. The release was sort of broken (mostly Removed a redundant call to manager.chunkOperator.Resurrect · gilbertchen/duplicacy@f304b64 · GitHub and Remove extra newline in the PRUNE_NEWSNAPSHOT log message · gilbertchen/duplicacy@8ae7d2a · GitHub), and I took the opportunity to fix a few other things.

The other reason to redo a release is that I like to keep the CLI and GUI versions synchronized. I hope that once the GUI v3 is available there will be no need to keep them in lockstep, and therefore no more silent rebuilds of the CLI version.

@gchen I guess you must have pushed another overwrite just now? The checksums have changed yet again, and I am now seeing Continue to check other snapshots when one snapshot has missing chunks · gilbertchen/duplicacy@e8b8922 · GitHub in the new download. This makes packaging difficult and creates uncertainty for anyone installing the package by any method: a friend and I could both have machines with v2.1.1 installed yet be running different software. If you want to keep the version numbers in sync, could you consider tagging these bugfixes with an incidental version, like v2.1.1-2?

Sorry, this should be the last time it changes. In the future I'll make all releases immutable.

Thank you for maintaining the package by the way!

You’re welcome! Thanks for a very nice tool. I appreciate the assurances about future releases.

Quick question… does the latest version have either FTP retries or server-side checksum files implemented?

I think not, since I didn't see either listed, but I wanted to double check.