Duplicacy Web Edition 0.2.10 Beta is now available

4 posts were split to a new topic: Trouble pruning with duplicacy web

Hello, I've tested this web UI on my NAS, but HubiC storage is missing. Is that normal? The main reason I want to use Duplicacy is that it still supports HubiC; if that's no longer the case, I'll have to change my provider, and then I probably won't need Duplicacy anymore, since other providers are supported natively by my NAS backup app. I've tried the CLI, but it always fails at some point, and every command that touches the HubiC storage asks for the token location. It's quite cumbersome; it should be asked for only once and then stored for good.
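For reference, the CLI can cache credentials so they are not requested on every run. A minimal sketch, assuming the storage is named hubic in the repository's preferences and that the token is stored under the key hubic_token (both names are assumptions on my part; check duplicacy help set and the wording of the prompt for the exact key):

```
# Hypothetical storage name "hubic" and key name "hubic_token"; verify both against your setup.
# Saves the path of the downloaded HubiC token file in Duplicacy's preferences/keychain,
# so later backup/check/prune runs no longer prompt for it.
duplicacy set -storage hubic -key hubic_token -value /path/to/hubic-token.json
```

If the key name is right, the token location should only have to be entered once after this.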

One quick question if I may: is there a target date for a general release of the web GUI? Just curious when it is planned to come out of beta.

Thanks!

3 Likes

First of all, I love the Web Edition. I probably wouldn't still be using Duplicacy without it. So great job, and I'm looking forward to seeing it be improved further.

A couple suggestions:

Would it be possible to change it so the stats are updated even if the check command reports "Missing Chunks"? The problem I'm having is that my stats are rarely updated, because the last backup listed by the check command usually reports "Missing Chunks". I think that's because I'm running backups every 15 minutes while the check command takes much longer, so there is usually at least one backup that starts while the check is running, which seems to throw it off. It would be great if the stats were updated even when the check reports "Missing Chunks" for this reason. Right now my charts and summaries are fairly useless because they usually show 0.

My other suggestion is a simpler way to configure backups and schedules. It's pretty painful right now when you have multiple backups and multiple destinations: I have to create a separate backup for each combination, which means I have to schedule a separate backup job for each one too. I also prefer to prune each backup separately, so that the computer the backup belongs to is in control of it, which means a separate prune schedule for each one as well. The flexibility to configure it however you want is nice, but most of the time a more consolidated way to create a backup, specify which destinations it should go to, and set how often it should run and be pruned would be enough. It would also be great if the check commands ran automatically instead of having to be scheduled manually, since the stats are worthless otherwise. I realize the Web Edition is a wrapper around the CLI, but perhaps you could abstract it more and make setting up backups more convenient? It's just very tedious right now.

Thank you!

1 Like

HubiC is closing down, so we won't add HubiC to the web UI.

There is still one more update planned for the beta version before the final release. The target date is (hopefully) the end of March or the beginning of April.

3 Likes

I wonder if something else caused the "Missing Chunks" error. An incomplete backup won't, because the snapshot file is only uploaded as the last step of the backup, so the check command will always see the chunks before it sees the snapshot file.

Registration is closed, but the service remains open for current users. Synology has cut the cord; they are a commercial company, so I can understand that, but Duplicacy is one of the last hopes for HubiC users, and since it's open source I would hope it keeps offering all the options. If that's not the case, I'll stop wasting my time and pay more for another service.

If you just go with the defaults for the Prune command in the GUI, it does not appear to do anything. I think this is because it requires you to either pass -a or specify a snapshot ID. "Experts" will deal with this and add -a to the command line, but I assume the GUI is for non-experts, so I suspect they should be prompted to either select a snapshot ID or select "All".
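For context, this is roughly what the underlying CLI call needs to look like for the job to actually delete anything; a sketch with purely illustrative retention values:

```
# Prune revisions of all snapshot IDs on the storage. Without -a (or -id <snapshot id>)
# the command has nothing to act on, which is presumably why the default GUI job appears to do nothing.
# Example retention policy:
#   -keep 0:360   drop all revisions older than 360 days
#   -keep 30:180  keep one revision every 30 days for revisions older than 180 days
#   -keep 7:30    keep one revision every 7 days for revisions older than 30 days
#   -keep 1:7     keep one revision per day for revisions older than 7 days
duplicacy prune -a -keep 0:360 -keep 30:180 -keep 7:30 -keep 1:7
```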

Likewise, I want to remove some old, unused snapshot IDs from a shared storage. I am not a Duplicacy newbie, but it is not clear how to do that, or what impact it has on two-step fossil collection, etc. It might be nice to put an option in the GUI to prune and remove a complete snapshot ID. I assume this would stop old, unused snapshot IDs from holding up the deletion of unused chunks? But I am guessing there is a little more to it than that, and would I be right in guessing that if a snapshot ID has not produced a backup for a little while (maybe 7 days), it will be ignored and the fossils deleted?
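For what it's worth, a sketch of how this can be done from the CLI today, using a hypothetical snapshot ID old-laptop and an assumed revision range. As far as I understand two-step fossil collection, the affected chunks are only turned into fossils here and are removed for good by a later prune run, once the remaining snapshot IDs have produced new backups (IDs that have been inactive for about a week are ignored):

```
# Delete every revision of the hypothetical snapshot ID "old-laptop";
# adjust the revision range to cover the revisions that actually exist.
# Chunks referenced only by these revisions become fossils and are removed by a later prune run.
duplicacy prune -id old-laptop -r 1-9999

# Optional: add -exclusive only when no other client can be backing up to the storage;
# it skips fossil collection and deletes the unreferenced chunks immediately.
```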

I too have seen this, i.e. what I assume is the check complaining about missing chunks while a backup is running. This surprised me a little, for the reason you outline. Rerunning the check once the backup is complete fixes the issue. If it isn't the backup, I would be curious what the other cause might be.

2 posts were split to a new topic: Connection freeze with Wasabi

8 posts were split to a new topic: Trouble with the encryption password on a headless Linux box

I'm also seeing storages listed as "Some may not have been updated recently" on the dashboard and "Updated 5 days ago" on the storage page. Check runs every day on my box, but it also reports missing chunks every day. I think it's because I run prune just before the check runs.

It's getting pretty close to mid-April. Is there a new ETA for the next beta?

5 Likes

Sorry for not keeping my promise. I have been working on a new release of the CLI (presumably 2.2.0). After that I'll be working on the final release of the web GUI, which should happen by the end of April.

7 Likes

5 posts were split to a new topic: Migrate from CLI to Web-UI?

5 posts were split to a new topic: Memory usage problems

Closed in favor of the better and long-awaited topic: Duplicacy Web Edition 1.0.0 is now available!

1 Like