
docs: Add a sample to demonstrate the evaluation results #364

Merged: 9 commits into main from bqml_eval on Feb 6, 2024

Conversation

DevStephanie (Contributor)

Thank you for opening a Pull Request! Before submitting your PR, there are a few things you can do to make sure it goes smoothly:

  • Make sure to open an issue as a bug/issue before writing your code! That way we can discuss the change, evaluate designs, and agree on the general idea
  • Ensure the tests and linter pass
  • Code coverage does not decrease (if any source code was changed)
  • Appropriate docs were updated (if necessary)

Fixes #<issue_number_goes_here> 🦕

@DevStephanie DevStephanie requested review from a team as code owners January 31, 2024 21:27

snippet-bot bot commented Jan 31, 2024

Here is the summary of changes.

You are about to add 2 region tags.

This comment is generated by snippet-bot.
If you find problems with this result, please file an issue at:
https://github.com/googleapis/repo-automation-bots/issues.

@product-auto-label product-auto-label bot added the size: s, api: bigquery, and samples labels Jan 31, 2024
}
)

# Some models include a convenient .score(X, y) method for evaluation with a preset accuracy metric:
Collaborator

Let's also mention that the results are in the same form as ML.EVALUATE here. This part of the SQL description would be really important to include:

Because you performed a logistic regression, the results include the following columns:

  • precision — A metric for classification models. Precision identifies the frequency with which a model was correct when predicting the positive class.
  • recall — A metric for classification models that answers the following question: Out of all the possible positive labels, how many did the model correctly identify?
  • accuracy — Accuracy is the fraction of predictions that a classification model got right.
  • f1_score — A measure of the accuracy of the model. The f1 score is the harmonic average of the precision and recall. An f1 score's best value is 1. The worst value is 0.
  • log_loss — The loss function used in a logistic regression. This is the measure of how far the model's predictions are from the correct labels.
  • roc_auc — The area under the ROC curve. This is the probability that a classifier is more confident that a randomly chosen positive example is actually positive than that a randomly chosen negative example is positive. For more information, see Classification in the Machine Learning Crash Course.

https://cloud.google.com/bigquery/docs/create-machine-learning-model#evaluate_your_model

Contributor Author

Yes, I agree. Will have those edits today.
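
For context, here is a minimal sketch of the pattern the sample demonstrates, assuming the bigframes API: train a logistic regression and read the ML.EVALUATE-style metrics from .score(X, y). The project, table, and column names are hypothetical placeholders, not the sample's actual data.

import bigframes.pandas as bpd
from bigframes.ml.linear_model import LogisticRegression

# Hypothetical training table with two feature columns and a binary label.
df = bpd.read_gbq("my_project.my_dataset.training_data")
X = df[["feature_a", "feature_b"]]
y = df[["label"]]

model = LogisticRegression()
model.fit(X, y)

# .score(X, y) returns a one-row DataFrame with the same columns as
# ML.EVALUATE for a logistic regression: precision, recall, accuracy,
# f1_score, log_loss, and roc_auc.
metrics = model.score(X, y)
print(metrics)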

@product-auto-label product-auto-label bot added the size: m label and removed the size: s label Feb 1, 2024
# roc_auc — The area under the ROC curve. This is the probability that a classifier is more confident that
# a randomly chosen positive example
# is actually positive than that a randomly chosen negative example is positive. For more information,
# see Classification in the Machine Learning Crash Course.
Collaborator

Include the link to this course.

Contributor Author

Yes, will do!
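
For reference, the ML.EVALUATE results that the review above compares against can also be fetched directly with SQL through bigframes; the model and table names below are hypothetical placeholders.

import bigframes.pandas as bpd

# ML.EVALUATE returns the same metric columns that model.score() surfaces.
metrics = bpd.read_gbq(
    """
    SELECT *
    FROM ML.EVALUATE(
        MODEL `my_project.my_dataset.my_logistic_reg_model`,
        TABLE `my_project.my_dataset.eval_data`)
    """
)
print(metrics)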

Comment on lines 139 to 151
# precision — A metric for classification models. Precision identifies the frequency with
# which a model was correct when predicting the positive class.
# recall — A metric for classification models that answers the following question:
# Out of all the possible positive labels, how many did the model correctly identify?
# accuracy — Accuracy is the fraction of predictions that a classification model got right.
# f1_score — A measure of the accuracy of the model. The f1 score is the harmonic average of
# the precision and recall. An f1 score's best value is 1. The worst value is 0.
# log_loss — The loss function used in a logistic regression. This is the measure of how far the
# model's predictions are from the correct labels.
# roc_auc — The area under the ROC curve. This is the probability that a classifier is more confident that
# a randomly chosen positive example
# is actually positive than that a randomly chosen negative example is positive. For more information,
# see Classification in the Machine Learning Crash Course.
Collaborator

The formatting is a bit funky / hard to read. Please add some bullet points and indentation:

Suggested change
# * precision -- A metric for classification models. Precision identifies the frequency
# with which a model was correct when predicting the positive class.
# * recall -- A metric for classification models that answers the following question:
# Out of all the possible positive labels, how many did the model correctly identify?
# * accuracy -- Accuracy is the fraction of predictions that a classification model got right.
# * f1_score -- A measure of the accuracy of the model. The f1 score is the harmonic average of
# the precision and recall. An f1 score's best value is 1. The worst value is 0.
# * log_loss -- The loss function used in a logistic regression. This is the measure of how far the
# model's predictions are from the correct labels.
# * roc_auc -- The area under the ROC curve. This is the probability that a classifier is more
# confident that a randomly chosen positive example is actually positive than that a randomly
# chosen negative example is positive. For more information, see "Classification" in the
# Machine Learning Crash Course at
# https://developers.google.com/machine-learning/crash-course/classification/video-lecture

Contributor Author

Yes, that does make sense. I will do that now.
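
As a quick illustration of the f1_score definition quoted above, the harmonic average of precision and recall works out as follows (the numbers are made up):

# f1 is the harmonic mean of precision and recall:
#     f1 = 2 * (precision * recall) / (precision + recall)
precision, recall = 0.8, 0.6  # illustrative values only
f1 = 2 * (precision * recall) / (precision + recall)
print(round(f1, 4))  # 0.6857; the best possible value is 1.0, the worst is 0.0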

@tswast tswast (Collaborator) left a comment

Thanks!

@tswast tswast added the automerge label (merge the pull request once unit tests and other checks pass) Feb 5, 2024
gcf-merge-on-green bot

Merge-on-green attempted to merge your PR for 6 hours, but it was not mergeable because either one of your required status checks failed, one of your required reviews was not approved, or there is a do not merge label. Learn more about your required status checks here: https://help.github.com/en/github/administering-a-repository/enabling-required-status-checks. You can remove and reapply the label to re-run the bot.

@gcf-merge-on-green gcf-merge-on-green bot removed the automerge label Feb 6, 2024
@tswast tswast added the automerge label Feb 6, 2024
@tswast tswast merged commit cff0919 into main Feb 6, 2024
14 of 15 checks passed
@tswast tswast deleted the bqml_eval branch February 6, 2024 17:43
@gcf-merge-on-green gcf-merge-on-green bot removed the automerge label Feb 6, 2024