Duplicacy Web Edition 0.1 Beta

Thanks very much for the new web-based gui. I’ve been trying it out on two computers, mostly very successfully, and it is a much nicer experience than the previous gui. I just want to report a few issues, one seems significant, the others are minor. All testing has been on Win10 in Firefox.

First the significant bug. I have a directory, D:/user, that has 5 subdirectories (a/, b/, c/, d/, and e/) that I want to back up on different schedules. So I set up three backup jobs (numbered 0-2 by the web GUI):

  0. D:/user/ with filters -c/ -d/ -e/
  1. D:/user/c/ with filter -*.lrdata/
  2. D:/user/ with filters -a/ -b/ -c/

When I ran backup 2, it used the filters specified for backup 0, not 2, and when I ran backup 1 it seemed to ignore the filter entirely. When I looked into the files in ~/.duplicacy-web/, I saw that filters/localhost/0, filters/localhost/1, and filters/localhost/2 are different and correct. However, the files repositories/localhost/*/.duplicacy/filters are all the same, with the entries for backup 0. The result seems to be that the GUI shows the different sets of filters, but the CLI is called with the wrong filters for backups 1 and 2. Obviously I can fix it locally, but it seems like something to try to fix. On the other PC, I used the same filters for backups 0 and 1 and no filters for a third backup, and there were no problems.
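The mismatch described above can be double-checked from a shell. This is a hedged sketch for Linux/macOS (adjust the paths for Windows), assuming the default ~/.duplicacy-web layout named in the post; none of these paths are guaranteed by documentation.

```shell
# Compare the per-backup filters the GUI saved (filters/localhost/N)
# with the filters file the CLI actually reads
# (repositories/localhost/N/.duplicacy/filters).
web_dir="${HOME}/.duplicacy-web"
for i in 0 1 2; do
  gui_filters="$web_dir/filters/localhost/$i"
  cli_filters="$web_dir/repositories/localhost/$i/.duplicacy/filters"
  if [ -f "$gui_filters" ] && [ -f "$cli_filters" ]; then
    # diff -q prints a message only when the two files differ
    diff -q "$gui_filters" "$cli_filters" \
      || echo "backup $i: CLI filters differ from GUI filters"
  fi
done
```

With the bug present, backups 1 and 2 should be reported as differing while backup 0 matches.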

Setting the starting time for a schedule is a little quirky. I couldn’t use the arrows, and had to type in a time in the exact format expected. For example, 4:00pm (no leading zero) didn’t work, and 16:00 and 04:00 pm also failed. Maybe an off-the-shelf JavaScript time picker would help. Also, once I realized that the starting time sets the first time each day that the job will run, I thought it could also be useful to have an ending time, so that you could run a backup regularly only during work hours, for example.

My cloud backups go to a single B2 bucket to allow deduplication, which means I have 5 backups, plus “all”, displayed on the storage graph for that B2 bucket. It seems the color specification is not working for the 6th line on the graph. The sixth line is the same color as the first (All), and the sixth dot and label in the legend below the plot are black. The color spec for the sixth legend dot in the HTML is color:Zgotmp1Z, and for the legend text it is rgba(104, 179, 200, 0.8). In the plot, the sixth line and dot have that same rgba color spec, whose rgb values match the first dot and line, which are specified as #68b3c8.

Finally, in the longer term, I want to support the previous request to be able to restore by finding the file to be restored and then choosing the version to restore.

I have a question about pre/post-backup scripts.
They work, but I find the current location non-intuitive.

I have to put the post-backup script in the folder C:\Users\username\.duplicacy-web\repositories\localhost\1\.duplicacy\scripts.
But can I see the backup number (0, 1, …) somewhere in the GUI? After some time with many backups, it can get quite confusing.
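One way to answer this outside the GUI: each numbered repository folder should contain a .duplicacy/preferences file that records the backup ID it belongs to. The sketch below (Linux/macOS paths) assumes the default web-edition layout and the CLI's JSON preferences format; neither is documented for the web edition, so treat it as a guess.

```shell
# Map numbered repository folders (0, 1, ...) to backup IDs by reading
# each .duplicacy/preferences file.
repos_dir="${HOME}/.duplicacy-web/repositories/localhost"
for dir in "$repos_dir"/*/; do
  prefs="${dir}.duplicacy/preferences"
  if [ -f "$prefs" ]; then
    # crude extraction of the first "id" field from the JSON file
    id=$(grep -o '"id"[^,]*' "$prefs" | head -n 1)
    echo "$(basename "$dir") -> $id"
  fi
done
```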


Hi everyone,

The new web GUI is amazing. Thank you @gchen!

I just have some suggestions:

1-restores:

1.1-A new restore flow might be good.
Browse a repository at its latest revision and click on a file or folder.
Click on “show revisions”,
then a secondary field would show the available revisions.
To keep it fast, we only query when “show revisions” is pressed.

1.2-Designate a special directory for quick restores. Behind the scenes, we would initialize the repository with the correct repository ID, download the file, and clear the repository until the next use.

1.3-Allow restores that select multiple files and directories.

2-Security:

2.1 Regarding authentication: a full-fledged user system is a step in the right direction if we want to move towards multiple endpoints controlled by the web interface, but I understand that it would add complexity and is too much for now.
Maybe just have a user-defined timeout and ask for the master password on timeout.

2.2-I would also ask for the master password via the CLI. That way at no point would the user be at risk.

3-Question: how to back up the web GUI’s local data?

What happens after rm -rf ~/.duplicacy-web/?
What do we need to get things working again?

Ideally, we would back up the essential files to the storage itself.
Then, on a clean install, we would be asked to restore after entering the storage password and the correct endpoint.
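Until something like that exists, the local state can be archived by hand. A hedged sketch, assuming the default ~/.duplicacy-web layout; the specific file names (duplicacy.json, the filters/ directory) are guesses from poking around an install, not documented behaviour.

```shell
# Archive the web GUI's local configuration so a clean reinstall can be
# restored by hand.
src="${HOME}/.duplicacy-web"
dest="${HOME}/duplicacy-web-config-backup.tar.gz"
if [ -d "$src" ]; then
  # duplicacy.json holds storages/schedules; filters/ holds per-backup filters
  tar -czf "$dest" -C "$HOME" \
    .duplicacy-web/duplicacy.json \
    .duplicacy-web/filters 2>/dev/null || echo "some expected files were missing"
else
  echo "no $src directory found"
fi
```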

4-Small UI fixes:

4.1 I agree that destructive actions should be visually separated from normal actions.
4.2 A visual cue for drop-downs (a simple coloured border would work).
4.3 Possible bug: if you select and deselect the “parallel” checkbox, it can be duplicated (this goes away after a refresh).
4.4 Edit job options on the backup pane, or concentrate them in the schedules section (that would be my choice, due to flexibility).

5-Feature requests:

5.1: API endpoints via the web GUI.
That way we could control backup restore and monitoring of endpoints and we open up a lot of functionality. Like instant backup and sync.

5.2: Once we hit stable: a quick video tutorial and links to online help on each section.

That’s it for now. In the end, the web GUI is everything I hoped it would be, and its potential is unbelievable.

Thanks again!


Update:

4-Small UI fixes

4.5 Make the “selected item” shade a bit darker, or a light green, so it’s more apparent.

5-Feature requests

5.3 Instead of status, show the last successful run timestamp in green (or the error in red) for each job.
(This is because on a desktop the scheduler will not always be running.)

5.4 If a job in a schedule is selected, run just the selected job (or display a per-job run icon).


Feedback on the Dashboard:

  1. Enhancement: Make the icons at the top of the dashboard into “drilldown” links to their respective details pages.
  2. Issue: The “Activities” timeline exceeds its box depending on browser window size.
  3. Question on proposed licensing: Will the GUI be free for those entitled to use the CLI for free, or will it require a subscription?

Again - fantastic work. I believe this is the one missing piece that will make Duplicacy a de facto standard for folks I know (running a NAS, which is my use case). This could easily be made into a package for QNAP and Synology NAS app stores. A third-party app store has already bundled Duplicacy CLI for that purpose. However, I’m a big fan of docker for running web GUI interfaces so I don’t need to rely on the buggy or vulnerable Apache web server & PHP version that ships with the NAS.


If you are interested, one way around the default port is to use Docker.


A few more bits of feedback for consideration after more use:

  1. Log files are named with just a prefix of “copy” or “backup” etc. followed by the date and time, which can make it difficult to find the log file you want on a box with lots of different backup jobs. As a previous poster suggested, it might be good to be able to name backup jobs, or at least “schedules”, and use these names in the log file names so you can quickly jump to the log file you want when searching.

  2. A further reason to change the way restores are implemented is highlighted by a minor annoyance with my setup. I have a backup job to an onsite server, and then a copy job to an offsite server. Because I don’t have a “backup” job from the local machine to the offsite server, there is no way for me to do a restore from it in the GUI, and I have to fall back to the CLI. Based on that, and further to other comments above, I think it would be much more sensible to completely decouple backups and restores. Judging from other backup software, the most standard way to do this would be a completely separate “Restores” top-level option. If it were me, I would let people browse all the configured storages (I would also allow an additional storage to be configured at that point, or at least link back to the “Storages” top level if required), then browse all the backup IDs on that particular storage, irrespective of whether they came from that machine or not, and then browse into the revisions. Of course, the same thing could be implemented from the “Storages” top-level menu with identical functionality, but to me it seems counterintuitive to have to go to “Storages” to do a restore. Restores are a primary function of backup software, and backup is completely useless without restores, so I suspect “Restores” needs to be a primary item. Case in point: when I first started playing with the GUI, I thought restores had not been implemented yet, because I did not think to look under “Backups” and missed the significance of the tiny icon there that takes you to the restore option.

  3. And of course, bonus marks for restores if it is possible to implement revision browsing at the file level once you have drilled down to the file, rather than having to search every revision from the top down. An interim step might be to allow file history in the GUI, so you can at least see the dates on which the file changed.

  4. I can believe this might be something for a future version, but for GUI users who have used backup software where you choose what to back up and exclude by drilling into a directory tree with tick boxes, it is hard to go back to methods like the current Duplicacy implementation. I guess this is further complicated by Duplicacy’s CLI and “repository” based approach. But ideally, for simple non-technical GUI users, I think it would be simpler if they could create jobs based on frequency and destination, and place all the directories for that backup strategy in the one job.

Anyway, the work so far looks fantastic, so keep up the good work. And I appreciate there are lots of reasons you have done things the way you have that I have not even thought of. But hopefully these end user insights are useful.

Just tried it out for the first time, on a Mac. I had to “chmod +x” the executable, but then it ran without a hitch. It might be better to distribute it as a zip, which might preserve the x bit, too.

It would be nice if you’d (optionally) use the native file browser. The one that’s now used is not user friendly compared to what macOS offers.


Hi,

I have just started using Duplicacy and am using the Web Edition on Linux.

Some feedback I have:

  • When I click on “Status” for a job while a copy/backup etc. is in progress, the log does not show up in a new window; instead I get “Failed to open the log file /root/.duplicacy-web/logs/null”.
  • Check jobs shouldn’t be something that needs to be set up; they should just happen at certain intervals in the background.

Features that would be great:

  • It would be good to see some kind of throughput status/time remaining on a copy job.
  • It would be nice to be able to set schedule names, so you can see what each schedule represents in the dashboard view.
  • More granular timings on the graph view on the dashboard/storages, and the ability to change the timeframe. For example, being able to see hourly increments in the graph and also change the period you are viewing.
  • Tooltips when hovering over the icons (showing what they do) and also over the dashboard/storages graphs (showing GB/chunks/revisions).

That’s all my feedback/suggestions for now :>

Thank you very much for working on the Web based GUI.
It’s great so far!

Regards
pdaemon

This has been fixed in the latest beta version.

That would be better, but how do you set the times and the intervals?

I plan to fix this in a later release.

These have been all fixed in the current beta, which should be available sometime next week.


A general option in config would be nice.
But I also like the option to do it on-demand.

Hi gchen,

Thanks for fixing and including suggestions in the next version!

In regard to the check jobs, they could run in the background at certain intervals (global setting), but could be overridden with your own intervals if required (on-demand), as bkeeper suggests?

I look forward to the new version next week.

I also have another question…
How do I go about restarting the Duplicacy server service? I couldn’t find a CLI option.
Just a kill -9 and run the command again? An option in the GUI to restart the service would be good :smile:

Thanks a lot!

Regards
pdaemon

To restart the Duplicacy service you’ll need to run the Windows Service Manager (enter services in the Run dialog), then right-click the Duplicacy service to bring up its menu and select Restart.

Hi gchen,

I’m using Linux :> How is the service restarted on the backend when you make changes in the “Settings” in the GUI?

Sorry for getting off topic. If this is not simple, I’ll post something in a different section of the forum.

Thanks.

Regards
pdaemon

After you click the Save button in the Settings page, it should restart by itself. If it doesn’t then this is a bug.

Yeah, it does restart.
I’ll get back to you with any other feedback once you release the new version.

Thanks again.

Another question about the Web GUI.

I have set up a copy to Backblaze B2 and have read that having multiple backup threads running at once can give better data throughput to B2.

What is the default number of threads that are used when setting up a B2 backup in the GUI? Will there be a way to change the number of threads used per backup job?

Thanks.

Cheers pdaemon

I guess we can customize the CLI command at each step (also for restores?), but right now there is no reference info.
We should make it easier to customize the CLI command that gets run at each step:
with placeholder examples, or even a drop-down menu or control widgets for the usual command modifiers (such as threads).


The default number of threads is always 1. In the web GUI you can set the options for each job separately, so you can use -threads n if the command supports it.
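To illustrate (the exact command line the GUI builds is an assumption based on the reply above), adding "-threads 4" to a B2 backup job's options is roughly equivalent to running the CLI line echoed below.

```shell
# The web GUI appends per-job options to the CLI invocation, so a job
# with "-threads 4" in its options effectively runs the line echoed here.
job_options="-threads 4"
echo "duplicacy backup $job_options"
```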

Hi gchen,

OK thanks. I’ll give that a go.

Any update on when the next version is coming out? :slight_smile:

Cheers pdaemon