[Outreachy applications] Learning from misclassifications #63
When training a classification model, it is common to look at accuracy and the confusion matrix, which give a summary view of misclassifications. By themselves, these metrics are informative but not very actionable.
Develop a metric or visualization that reveals something more about each misclassified point, beyond the mere fact that it was misclassified, and that can be used to improve the model.
For example, the classification probability scores across the different classes can indicate whether a misclassified point was close to the decision boundary, while the distance from its class mean in feature space can indicate whether it is an outlier.
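As a starting point, here is a minimal sketch of how those two diagnostics could be computed, assuming a scikit-learn-style classifier with a `predict_proba` method and NumPy arrays for features and labels. The function name and output structure are illustrative, not part of PRESC:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression


def misclassification_diagnostics(model, X, y):
    """Per-point diagnostics for misclassified samples (illustrative sketch)."""
    y_pred = model.predict(X)
    probs = model.predict_proba(X)
    mis = y_pred != y

    # Margin between the top two class probability scores: a small
    # margin means the point was close to the decision boundary.
    sorted_probs = np.sort(probs, axis=1)
    margin = sorted_probs[:, -1] - sorted_probs[:, -2]

    # Euclidean distance from the mean of the point's true class in
    # feature space: a large distance suggests the point is an outlier.
    class_means = {c: X[y == c].mean(axis=0) for c in np.unique(y)}
    dist_to_class_mean = np.array(
        [np.linalg.norm(x - class_means[c]) for x, c in zip(X, y)]
    )

    return {
        "indices": np.where(mis)[0],
        "margin": margin[mis],
        "dist_to_true_class_mean": dist_to_class_mean[mis],
    }


# Example usage on a toy dataset: misclassified points with small
# margins were near the decision boundary; points far from their true
# class mean may be outliers.
X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=1000).fit(X, y)
diag = misclassification_diagnostics(clf, X, y)
print(diag)
```

Either quantity could also be plotted (e.g. a histogram of margins for misclassified vs. correctly classified points) to turn it into a visualization.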
A good place to start is to study the misclassifications you got from your model for task #2. What do they tell you about how to improve your model?