
Stuck on evaluate #3920

Open
agerewines opened this issue Jun 27, 2019 · 1 comment
Labels: enhancement (New feature or request), P2 (Priority of the issue for triage purpose: needs to be fixed at some point), usability (Smoothing user interaction or experience)

Comments


agerewines commented Jun 27, 2019

System information

  • OS version/distro: Windows 10
  • .NET Version (eg., dotnet --info): ML.NET 1.1.0

Issue

  • What did you do?
    Created an ML model with Model Builder; given 1000 seconds, it was able to evaluate 2 models and returned the best one.
    Afterwards I tried to rerun the Model Builder class to recreate the model.
    The CSV I take the data from is around 8 MB, with 70k rows.
  • What happened?
    It has been stuck on Evaluate for a long time: 48 minutes. EDIT: now 82 minutes.
  • What did you expect?
    Either some feedback on the evaluation progress, or the new model to be created.

My suggestion is to have some logs on the screen while it is evaluating.

Source code / logs

Proof

@justinormont (Contributor)

Related issue: dotnet/machinelearning-modelbuilder#126

Have some feedback on the evaluation process or create the new model.
My suggestion is to have some logs on the screen while it is evaluating.

Your suggestion sounds good to me, though ML.NET currently offers either a "firehose" trace log or nothing at all.

To quote an earlier issue comment:

As mentioned in #3235, MLContext.Log() doesn't have a verbosity selection, so it's more of a firehose.

If a verbosity argument is added to MLContext.Log(), the log output from there should be human readable to see general progress.

I believe this progress output is currently buried within the firehose; once the verbosity is scaled down, you should see messages like:

LightGBM objective=multiclassova
[7] 'Loading data for LightGBM' finished in 00:00:15.6600468.
[8] 'Training with LightGBM' started.
..................................................(00:30.58)	0/200 iterations
..................................................(01:00.9)	1/200 iterations
..................................................(01:31.2)	2/200 iterations
..................................................(02:01.4)	2/200 iterations
..................................................(02:31.9)	3/200 iterations
..................................................(03:02.5)	4/200 iterations
..................................................(03:32.9)	4/200 iterations
..................................................(04:03.6)	5/200 iterations
..................................................(04:34.4)	5/200 iterations
..................................................(05:04.8)	6/200 iterations

And naively extrapolating (6 of 200 iterations in about 5 minutes, so roughly 169 minutes for all 200), there's around 2.7 hours left in the LightGBM training.
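In the meantime, a workaround is to subscribe to the trace log yourself and filter it by hand. Below is a minimal sketch; it assumes a recent ML.NET where `LoggingEventArgs` exposes a `Kind` property (`ChannelMessageKind`) — on 1.1 only the raw `Message` string is available, so there you would filter on substrings instead:

```csharp
using System;
using Microsoft.ML;
using Microsoft.ML.Runtime;

class Program
{
    static void Main()
    {
        var mlContext = new MLContext(seed: 0);

        // MLContext.Log fires for every channel message ML.NET emits
        // (the "firehose"). Keep only informational messages so that
        // trainer progress lines are visible without the trace noise.
        mlContext.Log += (sender, e) =>
        {
            if (e.Kind == ChannelMessageKind.Info)
                Console.WriteLine(e.Message);
        };

        // ... build and fit your pipeline here; progress messages such as
        // "'Training with LightGBM' started." will now reach the console.
    }
}
```

This is a sketch, not the built-in verbosity selection this issue asks for; the filter predicate (and, on older versions, the substring matching) is up to the caller.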

@justinormont justinormont added enhancement New feature or request usability Smoothing user interaction or experience labels Jun 28, 2019
@frank-dong-ms-zz frank-dong-ms-zz added the P2 Priority of the issue for triage purpose: Needs to be fixed at some point. label Jan 9, 2020