Can't edit or delete a storage location from the Web UI

Hi,

I have installed Duplicacy Web Edition on my Synology NAS using your build from Feb 2024.
The Web UI reports version 1.8.3, and the command line section in the Settings tab says 3.2.3.

I started by adding a storage location, an Azure Storage container. Since this was the first connection, Duplicacy asked me for details such as compression, encryption, etc. I left those empty since I just wanted to perform a quick test to learn the basics and check performance.

Once I had made my first test backup and was sure I understood the basics, I went back to the storage section to edit my Azure settings and enable encryption, etc.

That’s where I’m stuck: I can’t see where to edit it, so I decided to delete the storage and create it again.
But when I add the storage again, Duplicacy detects that it already exists and simply asks me for the name and the password. I don’t have another option, so that is what I do… and I’m back to where I was before deleting it.

Can anyone explain how I can truly delete that storage from Duplicacy’s database using the web UI? Or better: how to edit an existing storage?

Thank you.

You can’t edit a storage once it’s created; the configuration file contains encryption keys and other data, such as the chunking configuration, and is immutable once created. You can only delete the storage and recreate it, effectively deleting all existing data.
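For context, the web UI drives the CLI under the hood, and these parameters are baked into the storage’s config file at init time. A minimal sketch with the CLI (the repository ID, account, and container names are made-up examples):

```
# Encryption (-e) and the chunk size (-c) are written into the "config"
# file uploaded to the storage during init and cannot be changed afterwards.
duplicacy init -e -c 4M my-repo azure://myaccount/mycontainer
```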

Deleting a storage in the web UI just deletes the reference to the existing storage; it does not delete the data at that storage.

The next time you attempt to init the same storage, it just gets adopted, as you have observed.

To completely purge the data, you can delete the Duplicacy data from Azure using third-party tools. Or you can create a new storage with a different path at the same location; since the config file would not exist at the new path prefix, it will be created with the specified parameters.
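For example (account, container, and path are placeholders, and I’m assuming the Azure backend accepts a path after the container name), pointing the new storage at a different prefix means no config file is found there, so a fresh one gets created:

```
azure://myaccount/mycontainer        # original storage; config already exists here
azure://myaccount/mycontainer/fresh  # new path prefix; config created with new parameters
```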

I cleared my Azure storage container and as you said, this solved my problem. Thanks.

I was able to generate my RSA key and launch a new backup with parity bits and compression, and it’s working so fast! It’s 5 to 10 times faster than Hyper Backup. Very happy with the solution.
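In case it helps anyone else, I generated the key pair with openssl, roughly like this (file names are just what I used, and the init line is my understanding of the CLI equivalent of what the web UI set up):

```
# Generate an RSA key pair for Duplicacy's RSA encryption
openssl genrsa -aes256 -out private.pem 2048
openssl rsa -in private.pem -pubout -out public.pem

# Encrypted storage with the RSA public key and erasure coding
# (5 data + 2 parity shards as an example)
duplicacy init -e -key public.pem -erasure-coding 5:2 my-repo azure://myaccount/mycontainer
```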

Just a bit sad the web UI doesn’t allow me to select multiple folders to back up in a single job, though.

I thought Azure already guarantees data integrity? Erasure coding will just use more space.

Anything is faster than Hyper Backup :). But speed is not really important in a backup solution; resilience to corruption is, and I’m glad you moved away from Hyper Backup. Duplicacy takes this a step further and supports concurrent backups without locks or a central database, further eliminating points of failure.

Duplicacy follows first-level symlinks, so you can create a folder somewhere, symlink everything that needs to be backed up into it, and then configure a backup of that folder in Duplicacy. That would be the easiest approach; see the sketch below.
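A minimal sketch of that setup (paths are made up for a typical Synology volume):

```
# Create a staging root and symlink the folders to back up into it
mkdir /volume1/backup-root
ln -s /volume1/photos    /volume1/backup-root/photos
ln -s /volume1/documents /volume1/backup-root/documents
# Point the Duplicacy backup at /volume1/backup-root; the first-level
# symlinks are followed and their targets' contents are backed up.
```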

Another approach is to configure a backup of the common root folder and then use filters to selectively include or exclude data; but this may be overkill if the goal is just to pick up a handful of folders.
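If you do go the filters route, the patterns live in the repository’s .duplicacy/filters file; a sketch with made-up folder names (patterns are evaluated in order and the first match wins; a trailing slash marks a directory):

```
# Include only photos/ and documents/, exclude everything else
+photos/
+photos/*
+documents/
+documents/*
-*
```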