Web edition realtime log of successfully backed up files?

I discovered I can click the progress bar of a backup job to see the current log file with WARN entries of failures like file locks, but I don’t see a list of files that have been uploaded or are currently being uploaded. Opening log files of previous backup jobs in the logs folder does show these files. Is there a way to see successful and current uploads of the running job in the web edition?

1 Like

In my experience, it does sometimes work… indeed, I think it’s supposed to, and it’s supposed to periodically refresh the list (IIRC, the URL opens up with &tail=1 or something like that). But most often it just seems to get stuck and doesn’t show an updated view of the actual .log. Which is a shame.

Hopefully this can be fixed in future, because a nice real-time log - especially shown within a frame of the Web GUI itself - would be quite useful and would add polish.

1 Like

It is supposed to be real-time, but I think there is a buffer somewhere which causes the delay. I’ll look into it.

1 Like

I believe errors do appear in real time in the logs, but after the backup is complete, it’s populated with all the successfully uploaded files instantly. At least that’s the case with B2.

As an FYI, I have found that if I remove the &tail part of the log, I can at least load the log and refresh for updates. The tail seems to have some issues.

Separately, I do see when new files are uploaded, most of the time. It’s not the greatest, nor super readable. But it is there on some of my backups.

Hi,

I think the issue still exists. I am currently migrating to the WebEdition and have the issue that no logs are visible after the info message “INFO BACKUP_LIST Listing all chunks”. The job has been running for several hours and no further entries are shown (neither directly in the log file on the system, nor in the web with or without tail).

A job I ran yesterday (on another system) behaved the same way. The entries only appeared after the backup was finished. Looking at the log file (~50MB), one can see that all the lines were written at the end. Note the gap between 2020-01-15 16:25:59.387 and 2020-01-16 12:52:40.635.

Options: [-log backup -storage B2-duplicacy-europe -stats]
2020-01-15 16:12:51.004 INFO REPOSITORY_SET Repository set to backup
2020-01-15 16:12:51.252 INFO STORAGE_SET Storage set to b2://....
2020-01-15 16:12:53.369 INFO BACKUP_START No previous backup found
2020-01-15 16:12:53.369 INFO BACKUP_INDEXING Indexing 
2020-01-15 16:12:53.369 INFO SNAPSHOT_FILTER Parsing filter file
2020-01-15 16:12:53.369 INFO SNAPSHOT_FILTER Loaded 0 include/exclude pattern(s)
2020-01-15 16:25:59.387 INFO BACKUP_LIST Listing all chunks
2020-01-16 12:52:40.635 INFO UPLOAD_FILE Uploaded 00008020-00126DE10CD8002E/DDNABackup.plist (2167)
2020-01-16 12:52:40.635 INFO UPLOAD_FILE Uploaded 00008020-00126DE10CD8002E/Info.plist (10094657)
2020-01-16 12:52:40.635 INFO UPLOAD_FILE Uploaded 00008020-00126DE10CD8002E/Info.plist.backup (10094657)
2020-01-16 12:52:40.635 INFO UPLOAD_FILE Uploaded 00008020-00126DE10CD8002E/Manifest.db (138432528)
2020-01-16 12:52:40.635 INFO UPLOAD_FILE Uploaded 00008020-00126DE10CD8002E/Manifest.plist (213131)

2020-01-16 12:52:48.650 INFO UPLOAD_FILE Uploaded iMazing.Versions/Versions/d04882e457d43121443c9c1f95c1f53d330aa0a3/2020-01-14-23.11.16/fc/fc2d489e68ab7f6d3119fcf1e471d29dc73dd779 (7904)
2020-01-16 12:52:48.650 INFO UPLOAD_FILE Uploaded iMazing.Versions/Versions/d04882e457d43121443c9c1f95c1f53d330aa0a3/2020-01-14-23.11.16/fd/fd316d08914e562dca9a56bac683df575f42e41b (1392)
2020-01-16 12:52:48.650 INFO UPLOAD_FILE Uploaded iMazing.Versions/Versions/d04882e457d43121443c9c1f95c1f53d330aa0a3/2020-01-14-23.11.16/fe/fecb99e12684ae3212bc55ac9686d85633225af3 (2576)
2020-01-16 12:52:48.702 INFO BACKUP_END Backup for at revision 1 completed
2020-01-16 12:52:48.702 INFO BACKUP_STATS Files: 347173 total, 148,872M bytes; 347173 new, 148,872M bytes
2020-01-16 12:52:48.702 INFO BACKUP_STATS File chunks: 30320 total, 148,872M bytes; 26604 new, 132,223M bytes, 132,695M bytes uploaded
2020-01-16 12:52:48.702 INFO BACKUP_STATS Metadata chunks: 22 total, 111,647K bytes; 22 new, 111,647K bytes, 47,914K bytes uploaded
2020-01-16 12:52:48.702 INFO BACKUP_STATS All chunks: 30342 total, 148,981M bytes; 26626 new, 132,332M bytes, 132,742M bytes uploaded
2020-01-16 12:52:48.702 INFO BACKUP_STATS Total running time: 20:39:55

I am using the WebEdition 1.1.0 on Windows 64bit and on an arm Linux 64bit machine.

The UPLOAD_FILE logs are dumped all at once towards the end of a backup. There are PACK_END logs that are real-time along with each processed file, but this log is not enabled in the web GUI. I’ll fix this in the next update.

I decided not to enable the PACK_END log messages by default. With the new version, you can now specify -d or -v as a global option which will produce far more output so that the log will be updated more frequently.

1 Like

I don’t think it’s a good solution to flood the log with data so verbose it becomes useless (which, as I understand it, only updates the log indirectly by filling up the buffer and forcing a flush). Why not simply flush the output stream after each line written (the analogue of fflush in C)?

2 Likes

What’s the status on this one? This is still broken.

1 Like

I would love there to be an option to see within the WebUI on what file/folder is currently being process / uploaded live (not referring to the Log File) not very important granted, but would be nice to see?

1 Like

Before releasing 1.4.0 I tried the idea of calling Sync() after writing every log message to the log file and still got the same behavior. So it appears that the buffering happens on the browser side which can only be fixed by a customized log viewer.

Just thought I’d toss in my two cents, I’d love for something to be changed with the current behavior.

I’m currently fighting with a filter that I have messed up somehow and I’m uploading way more than I should. I’m completely blind as to what is actually being uploaded right now so I can’t get a feeling for where my filter configuration is wrong.

While a backup started from DWE is running, I’m tailing the logs in a terminal and nothing is updating. At the very least I’d like the logs to be updated when cancelling a currently running backup job.

If I let a backup task run to completion, it updates the log with uploaded files as the job finishes.
If I cancel a backup task, it just writes the cancellation to the logs and SKIPS writing the changes uploaded.

I have a task that should have taken 20 min, but it has now been uploading for 2 full days. If I had visibility into the currently uploading file, it would be extremely easy to see where I’m going wrong.

Thanks! If there is anything I should do to better troubleshoot, let me know… otherwise consider this my +1 for the feature request :grinning: I’m going mad with this.

1 Like

This is the same behaviour I observed here, so I wonder if this is a regression, given that it renders the log blank until the backup finishes?

Could be, I thought at some point I recalled seeing the log populated during the backup session.

I am now 10 days past my previous post and I’m still blindly uploading something. That’s a lot of wasted bandwidth.

I tried using the enum-only option, but this instance is actually a Docker container of DWE and I haven’t had enough time to figure out how to run CLI commands in the DWE Docker container. Unfortunately, running backup -enum-only via the web UI does not output the result to the log. :frowning:

-enum-only should work in the web GUI but you’ll need to enter -d as the global option to see the debug-level output in real time.

Global options apply to the schedule as well right? When I add -d as an option to my first backup, it shows loads of output when run manually but no extra verbosity when run as part of a schedule.

EDIT: Oh, a “global option” (that can be used globally) not a “global option” (which applies globally). That explains my schedule now :sweat_smile:.

Thank you!! I’ve finally managed to track down my misconfigured filter.

A typo led me to include an entire minio s3 folder. I back up all of my computers to a local minio instance, then back up my server and copy the minio backup to B2. The minio backups were getting double encrypted because the raw data was being backed up, using about 4x the expected space on B2 :sweat_smile:

Thanks very much for the global -d callout. It helped me solve this issue. I do hope in the future we’ll see some better feedback in the web UI, but I’m taken care of for now :grinning: