
Commit

docs: add code samples for metrics.{accuracy_score, confusion_matrix} (#478)

ashleyxuu authored Mar 21, 2024
1 parent 0bf1e91 commit 3e3329a
Showing 1 changed file with 42 additions and 0 deletions.
42 changes: 42 additions & 0 deletions third_party/bigframes_vendored/sklearn/metrics/_classification.py
@@ -26,6 +26,24 @@
def accuracy_score(y_true, y_pred, normalize=True) -> float:
    """Accuracy classification score.

    **Examples:**

        >>> import bigframes.pandas as bpd
        >>> import bigframes.ml.metrics
        >>> bpd.options.display.progress_bar = None

        >>> y_true = bpd.DataFrame([0, 2, 1, 3])
        >>> y_pred = bpd.DataFrame([0, 1, 2, 3])
        >>> accuracy_score = bigframes.ml.metrics.accuracy_score(y_true, y_pred)
        >>> accuracy_score
        0.5

    If ``normalize=False``, return the number of correctly classified samples:

        >>> accuracy_score = bigframes.ml.metrics.accuracy_score(y_true, y_pred, normalize=False)
        >>> accuracy_score
        2

    Args:
        y_true (Series or DataFrame of shape (n_samples,)):
            Ground truth (correct) labels.
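As a quick sanity check of the doctest outputs above, the same two numbers can be reproduced with plain pandas (a minimal sketch, not part of the committed docstring; the Series-based setup is assumed only for illustration):

import pandas as pd

y_true = pd.Series([0, 2, 1, 3])
y_pred = pd.Series([0, 1, 2, 3])

# Element-wise comparison of labels; True where the prediction is correct.
matches = y_true == y_pred

print(matches.sum())   # 2   -> count of correct predictions (normalize=False)
print(matches.mean())  # 0.5 -> fraction of correct predictions (normalize=True)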
@@ -58,6 +76,30 @@ def confusion_matrix(
    :math:`C_{0,0}`, false negatives is :math:`C_{1,0}`, true positives is
    :math:`C_{1,1}` and false positives is :math:`C_{0,1}`.

    **Examples:**

        >>> import bigframes.pandas as bpd
        >>> import bigframes.ml.metrics
        >>> bpd.options.display.progress_bar = None

        >>> y_true = bpd.DataFrame([2, 0, 2, 2, 0, 1])
        >>> y_pred = bpd.DataFrame([0, 0, 2, 2, 0, 2])
        >>> confusion_matrix = bigframes.ml.metrics.confusion_matrix(y_true, y_pred)
        >>> confusion_matrix
           0  1  2
        0  2  0  0
        1  0  0  1
        2  1  0  2

        >>> y_true = bpd.DataFrame(["cat", "ant", "cat", "cat", "ant", "bird"])
        >>> y_pred = bpd.DataFrame(["ant", "ant", "cat", "cat", "ant", "cat"])
        >>> confusion_matrix = bigframes.ml.metrics.confusion_matrix(y_true, y_pred)
        >>> confusion_matrix
              ant  bird  cat
        ant     2     0    0
        bird    0     0    1
        cat     1     0    2

    Args:
        y_true (Series or DataFrame of shape (n_samples,)):
            Ground truth (correct) target values.
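Similarly, the matrices shown above can be cross-checked with plain pandas (a minimal sketch, not part of the committed docstring; pd.crosstab plus a reindex over the full label set is assumed here, with rows as true labels and columns as predicted labels):

import pandas as pd

y_true = pd.Series([2, 0, 2, 2, 0, 1])
y_pred = pd.Series([0, 0, 2, 2, 0, 2])

# Tally (true, predicted) pairs; reindex so that a label never predicted
# (here, 1) still gets a zero column, matching the 3x3 matrix above.
labels = sorted(set(y_true) | set(y_pred))
cm = pd.crosstab(y_true, y_pred).reindex(index=labels, columns=labels, fill_value=0)

print(cm.to_numpy())  # [[2 0 0]
                      #  [0 0 1]
                      #  [1 0 2]]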

