First off, I purchased a two-year license; I like the app, the direction it’s going, and the attentiveness of the support forum. Thanks for that.
Second, a few items that would make things even better. Sorry if any of this has already been suggested and discussed. I primarily use the Web UI (#web-ui) inside Docker on Linux, if that helps. (web v1.1.0, cli v2.3.0)
More Insight into processes
I had to use `lsof -p` to get an idea of where the backup is. It would be helpful to have a frequently updated UI component that showed the last completed filename, an aggregate upload total, or some other X-of-total-Y progress indicator.
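For reference, this is roughly what I had to do to see which file the backup was working on (the process name here is just an example; substitute whatever your backup process is actually called):

```shell
# Find the PID of the running backup process (process name is an assumption)
pid=$(pgrep -f backup_tool)

# List the files it currently has open; regular files show up as type REG
lsof -p "$pid" | grep REG
```

That works, but it is exactly the kind of thing the UI could surface directly.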
The UI progress status bar is a little misleading
Its time estimate is way off, and the bandwidth value seems to have nothing to do with how many MB/s are actually going over the network; I had to look at my router to see the real bandwidth being used. Maybe I’m misunderstanding what those values represent, but I expected them to be network bandwidth and actual time remaining in the backup (estimated from the remaining size and the current bandwidth).
No way to make processes verbose in the Web UI (global params)
While the `-v` parameter (and other global parameters) works from the CLI, adding it in the Web UI causes the job to fail. I think that’s because global options are meant to go before the command, while the UI puts the options after the command settings. It’s possible I’m just missing something.
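To illustrate what I mean (the command name here is a placeholder, not the app’s exact syntax):

```shell
# Works from the CLI: the global -v flag comes before the command
backup_tool -v backup -storage mystorage

# What the Web UI appears to generate: -v appended after the
# command options, which makes the job fail
backup_tool backup -storage mystorage -v
```

If that ordering assumption is right, the Web UI would need a separate field for global options, or to place them before the command itself.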
The log files only produce results after the jobs are complete
For example, the log file will start a backup with some basic header info, then be silent for the next 8 hours, only to produce the entire backup list at the end of the run. That behaviour is not very useful for monitoring a long-running job.
Create an official docker container
Just a suggestion. There are several good options out there already, but it would be nice to see an official container created and maintained by those in the know. An actively maintained, working container was a big selling point for me.
Make -threads 2 the default (or highlight it)
No joke, I almost did not buy the app (or believe all of the rave performance reviews) on my first run; it appeared to be running abysmally slow. As it turned out, just bumping `-threads 2` made a huge difference. IMHO that should be the default, or at least prominently highlighted.
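For anyone else hitting this, the change that fixed it for me was simply adding the thread option to the backup command (command name and storage name are placeholders; the exact syntax may differ for your setup):

```shell
# Single-threaded upload (the default) was abysmally slow for me
backup_tool backup -storage mystorage

# Bumping to two upload threads made a huge difference
backup_tool backup -storage mystorage -threads 2
```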
Better Process Error Recovery
Had some problems early on with Wasabi connections being killed (a problem on Wasabi’s end), but it felt like the app didn’t have much in the way of “will retry” or “re-establishing connection” error handling. It was very much a “stop everything, report very little, start over from the beginning” kind of experience.
Prune presets for common use cases
I have read the docs, looked at examples, and am using prune with values I think do what I want, but I still have not got my head around what will actually be kept in my backup set (and, more importantly, whether that is a good strategy for my needs). I’ll keep reading, but would it be possible to have some reasonable presets for a general-purpose “daily backups for a home network” type of operation? I know everyone has different needs, but somehow I am just not getting what `-keep` is doing and how it will affect my S3 usage.
Thanks for listening!