
Adding getBestModel and getBestModelInfo to TuneHyperparametersModel #355

Merged

Conversation

imatiach-msft (Contributor)

Adding getBestModel and getBestModelInfo to TuneHyperparametersModel (python API)
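The two new methods can be sketched in Python roughly as follows. This is a hedged illustration, not the PR's actual code: the `_java_obj` attribute name and the plain delegation to the JVM model (via py4j in a real session) are assumptions about how MMLSpark's Python wrappers are typically shaped.

```python
# Hypothetical sketch of the two new Python methods on TuneHyperparametersModel.
# The attribute name `_java_obj` and the direct delegation are assumptions for
# illustration; the real autogenerated wrapper may differ.
class TuneHyperparametersModel:
    def __init__(self, java_model):
        # Handle to the underlying JVM model (py4j gateway object in practice).
        self._java_obj = java_model

    def getBestModel(self):
        # The best fitted model found by the hyperparameter search.
        return self._java_obj.getBestModel()

    def getBestModelInfo(self):
        # A human-readable summary of the winning model's hyperparameters.
        return self._java_obj.getBestModelInfo()
```

In practice these would be called on the model returned by fitting a TuneHyperparameters estimator.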

@mmlspark-bot (Contributor)
Pass! The build has succeeded. (a0f37312)

MMLSpark 0.13.dev21+1.ga0f3731

This is a build for Github PR #355, changes:


@imatiach-msft force-pushed the ilmat/tune-hyperparams-wrapper branch 2 times, most recently from b68dc3b to 7a04ead on August 14, 2018 00:23
@mmlspark-bot (Contributor)

Pass! The build has succeeded. (7a04ead6)

MMLSpark 0.13.dev21+1.g7a04ead

This is a build for Github PR #355, changes:

  • [7a04ead] Ilya Matiach Adding getBestModel and getBestModelInfo to TuneHyperparameters and fixed autogenerated code for model name

}
val (modelClass, modelQualifiedClass) = sc.getSuperclass.getSimpleName match {
case "Estimator" => getModelFromGenericType(typeArgs.head)
case "ProbabilisticClassifier" => getModelFromGenericType(typeArgs(2))
Collaborator:

Why do we need to code specifically against this API here? It seems to muddy the abstraction and shouldn't be needed, because a ProbabilisticClassifier is a Predictor.

Contributor (Author):

I needed ProbabilisticClassifier to get the model type; otherwise, in the autogenerated LightGBM wrappers, the model name came out as "M" instead of LightGBMClassificationModel.
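The shape of the fix can be illustrated with a small Python analogue (all class names here are illustrative stand-ins; the real generator does this with Java reflection in Scala). Spark's ProbabilisticClassifier has three type parameters, FeaturesType, Learner, and Model, so the concrete model type is the third type argument, while a plain Estimator[M] carries it as the first; reading the wrong position yields the unresolved type variable "M" rather than the concrete model class.

```python
from typing import Generic, TypeVar, get_args

F = TypeVar("F")  # features type
L = TypeVar("L")  # learner (self) type
M = TypeVar("M")  # model type

class Estimator(Generic[M]):
    pass

# Analogue of Spark's ProbabilisticClassifier[FeaturesType, Learner, Model]:
# the model type is the THIRD type parameter, not the first.
class ProbabilisticClassifier(Estimator[M], Generic[F, L, M]):
    pass

class LightGBMClassificationModel:  # illustrative stand-in
    pass

class LightGBMClassifier(
        ProbabilisticClassifier[str, object, LightGBMClassificationModel]):
    pass

def model_class(cls):
    """Pick the model type argument by the generic base's name,
    mirroring the Scala match statement in the generator."""
    base = cls.__orig_bases__[0]  # e.g. ProbabilisticClassifier[str, ...]
    args = get_args(base)         # the actual type arguments
    if base.__origin__.__name__ == "Estimator":
        return args[0]
    if base.__origin__.__name__ == "ProbabilisticClassifier":
        return args[2]
    raise TypeError(f"unhandled base: {base!r}")
```

Matching only at the Estimator level would hand back the first type argument, which is not the model for a ProbabilisticClassifier subclass.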

@@ -23,6 +23,8 @@ abstract class WrapperGenerator {

def wrapperName(myClass: Class[_]): String

def modelWrapperName(myClass: Class[_], modelName: String): String

Collaborator:

Does this also need to take in the modelName?

Contributor (Author):

The model wrapper needs to take in the model name because that is what it returns; it uses the class to determine whether to prepend an underscore.
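A minimal sketch of that contract (purely illustrative; the real generator is Scala, and the HAND_WRAPPED set stands in for however the generator actually inspects the class): the returned wrapper name is the model name itself, prefixed with an underscore when the autogenerated module is internal and re-exported by a hand-written public wrapper.

```python
# Hypothetical sketch of the modelWrapperName contract. HAND_WRAPPED is an
# illustrative stand-in for whatever check the generator performs on the class;
# the underscore-prefix rule matches internal names like _LightGBMRegressionModel.
HAND_WRAPPED = {"LightGBMRegressor", "LightGBMClassifier"}  # illustrative set

def model_wrapper_name(class_name: str, model_name: str) -> str:
    # The wrapper is named after the model it returns; the class decides
    # whether the name gets a leading underscore.
    if class_name in HAND_WRAPPED:
        return "_" + model_name
    return model_name
```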

@@ -9,7 +9,7 @@
basestring = str

from mmlspark._LightGBMRegressor import _LightGBMRegressor
-from mmlspark._LightGBMRegressor import M
+from mmlspark._LightGBMRegressor import _LightGBMRegressionModel
Collaborator:

This is nice

Contributor (Author):

Thanks!

mhamilton723 previously approved these changes on Aug 17, 2018
@mmlspark-bot (Contributor)

Pass! The build has succeeded. (c1ea31d1)

MMLSpark 0.13.dev23+1.gc1ea31d

This is a build for Github PR #355, changes:

  • [c1ea31d] Ilya Matiach Adding getBestModel and getBestModelInfo to TuneHyperparameters and fixed autogenerated code for model name

@mhamilton723 merged commit d287118 into microsoft:master on Aug 18, 2018
3 participants