docs: Add a sample to demonstrate the evaluation results #364
Conversation
Here is the summary of changes. You are about to add 2 region tags.
This comment is generated by snippet-bot.
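For context: region tags are the comment markers that snippet-bot counts; they delimit the code that gets extracted into the published documentation. A minimal sketch of the convention (the tag name below is hypothetical, not one of the two tags this PR adds):
# [START bigquery_dataframes_hypothetical_tag]
sample_code = "everything between START and END is published verbatim"
# [END bigquery_dataframes_hypothetical_tag]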
# Some models include a convenient .score(X, y) method for evaluation with a preset accuracy metric:
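For readers outside the diff context, a minimal sketch of what that API looks like with a bigframes logistic regression; the project, table, and column names here are placeholders, not the ones used in the sample:
import bigframes.pandas as bpd
from bigframes.ml.linear_model import LogisticRegression

# Hypothetical table and columns -- substitute the sample's actual data.
df = bpd.read_gbq("your-project.your_dataset.your_table")
X = df[["feature_1", "feature_2"]]
y = df[["label"]]

model = LogisticRegression()
model.fit(X, y)

# For a classifier, .score() returns the same metric columns as BigQuery's
# ML.EVALUATE (precision, recall, accuracy, f1_score, log_loss, roc_auc)
# as a one-row DataFrame.
print(model.score(X, y))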
Let's also mention that the results are in the same form as ML.EVALUATE here. This excerpt from the SQL description would be really important to include:
Because you performed a logistic regression, the results include the following columns:
precision — A metric for classification models. Precision identifies the frequency with which a model was correct when predicting the positive class.
recall — A metric for classification models that answers the following question: Out of all the possible positive labels, how many did the model correctly identify?
accuracy — Accuracy is the fraction of predictions that a classification model got right.
f1_score — A measure of the accuracy of the model. The f1 score is the harmonic average of the precision and recall. An f1 score's best value is 1. The worst value is 0.
log_loss — The loss function used in a logistic regression. This is the measure of how far the model's predictions are from the correct labels.
roc_auc — The area under the ROC curve. This is the probability that a classifier is more confident that a randomly chosen positive example is actually positive than that a randomly chosen negative example is positive. For more information, see Classification in the Machine Learning Crash Course.
https://cloud.google.com/bigquery/docs/create-machine-learning-model#evaluate_your_model
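A hedged sketch of how those columns might be read back in Python, reusing the placeholder model above; the column names are taken from the BigQuery docs quoted here:
# score() mirrors ML.EVALUATE: a one-row DataFrame of metric columns.
metrics = model.score(X, y).to_pandas()
precision = metrics["precision"].iloc[0]
recall = metrics["recall"].iloc[0]

# Per the quoted docs, f1_score is the harmonic mean of precision and recall;
# these two values should agree.
print(2 * precision * recall / (precision + recall), metrics["f1_score"].iloc[0])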
Yes, I agree. Will have those edits today.
# roc_auc — The area under the ROC curve. This is the probability that a classifier is more confident that
# a randomly chosen positive example
# is actually positive than that a randomly chosen negative example is positive. For more information,
# see Classification in the Machine Learning Crash Course.
Include the link to this course.
Yes, will do!
# precision — A metric for classification models. Precision identifies the frequency with
# which a model was correct when predicting the positive class.
# recall — A metric for classification models that answers the following question:
# Out of all the possible positive labels, how many did the model correctly identify?
# accuracy — Accuracy is the fraction of predictions that a classification model got right.
# f1_score — A measure of the accuracy of the model. The f1 score is the harmonic average of
# the precision and recall. An f1 score's best value is 1. The worst value is 0.
# log_loss — The loss function used in a logistic regression. This is the measure of how far the
# model's predictions are from the correct labels.
# roc_auc — The area under the ROC curve. This is the probability that a classifier is more confident that
# a randomly chosen positive example
# is actually positive than that a randomly chosen negative example is positive. For more information,
# see Classification in the Machine Learning Crash Course.
The formatting is a bit funky / hard to read. Please add some bullet points and indentation:
Suggested change:
# * precision -- A metric for classification models. Precision identifies the frequency
#   with which a model was correct when predicting the positive class.
# * recall -- A metric for classification models that answers the following question:
#   Out of all the possible positive labels, how many did the model correctly identify?
# * accuracy -- Accuracy is the fraction of predictions that a classification model got right.
# * f1_score -- A measure of the accuracy of the model. The f1 score is the harmonic average of
#   the precision and recall. An f1 score's best value is 1. The worst value is 0.
# * log_loss -- The loss function used in a logistic regression. This is the measure of how far the
#   model's predictions are from the correct labels.
# * roc_auc -- The area under the ROC curve. This is the probability that a classifier is more
#   confident that a randomly chosen positive example is actually positive than that a randomly
#   chosen negative example is positive. For more information, see "Classification" in the
#   Machine Learning Crash Course at
#   https://developers.google.com/machine-learning/crash-course/classification/video-lecture
Yes, that does make sense. I will do that now.
Thanks!
Merge-on-green attempted to merge your PR for 6 hours, but it was not mergeable because either one of your required status checks failed, one of your required reviews was not approved, or there is a do not merge label. Learn more about your required status checks here: https://help.github.com/en/github/administering-a-repository/enabling-required-status-checks. You can remove and reapply the label to re-run the bot.
Thank you for opening a Pull Request! Before submitting your PR, there are a few things you can do to make sure it goes smoothly:
Fixes #<issue_number_goes_here> 🦕