[fixes #9] Comparing test sample classifications between models #82
Conversation
Thanks for this really nice PR. I really like the idea of looking at class probability distributions, and also the simple yet very informative figure breaking out class-specific misclassifications per classifier to gain insight into the nature of the decision threshold.
Please resolve a few outstanding issues before we merge this one in.
- Please use only relative (to the repo structure) path references when loading data. `"D:/PRESC/PRESC/datasets/defaults.csv"` doesn't exist on my machine, so it requires code changes to successfully run your notebook locally (see the sketch after this list).
- Please adhere to Python Black formatting in all .py files. Instructions are in the repo README file, with a link to the Black project.
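To illustrate the relative-path point, here is a minimal sketch of loading the dataset via a repo-relative path. The assumption that the notebook sits one level below the repo root is mine, not from the branch:

```python
# Minimal sketch: load the dataset via a repo-relative path instead of a
# machine-specific absolute path like "D:/PRESC/PRESC/datasets/defaults.csv".
from pathlib import Path

import pandas as pd

# Assumption: the notebook lives one level below the repo root, so the repo
# root is the parent of the current working directory.
REPO_ROOT = Path.cwd().parent
defaults = pd.read_csv(REPO_ROOT / "datasets" / "defaults.csv")
```

This keeps the notebook runnable on any clone of the repo, regardless of where the clone lives on disk.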
@Sidrah-Madiha please resolve merge conflicts by pulling down the latest master branch and re-basing these changes on that. |
@mlopatka fixed conflicts, please review
This branch fixes #9. In the folder `Comparing-test_sample_classifications_between_models`, the helper file `compare_test_sample_classifications.py` implements two plots: the first displays the spread of class probabilities for each model, and the second shows the misclassified points in each class (for binary classifiers) for each model. These graphs are shown in the test notebook `Test_for_compare_test_sample_classifications_across_models`; please see the attached image:
I have also added an interpretation of the graphs in the notebook; please see the attached image.
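Since the attached images aren't reproduced here, below is a rough sketch of what the two plot types described above could look like. This is not the code from `compare_test_sample_classifications.py`; the function names, the `models` dict of fitted scikit-learn classifiers, and the boxplot/bar-chart choices are illustrative assumptions:

```python
# Rough sketch (not the branch's actual helper) of the two plot types:
# (1) spread of class probabilities per model, and (2) misclassified points
# in each class, per model, for a binary classifier.
import matplotlib.pyplot as plt
import numpy as np


def plot_class_probability_spread(models, X_test, class_index=1):
    """One box per model showing the spread of predicted probabilities
    for the class at `class_index`."""
    probs = [m.predict_proba(X_test)[:, class_index] for m in models.values()]
    plt.boxplot(probs, labels=list(models.keys()))
    plt.ylabel(f"P(class {class_index})")
    plt.title("Spread of class probabilities per model")
    plt.show()


def plot_misclassified_per_class(models, X_test, y_test):
    """Grouped bars counting misclassified test points in each true class,
    with one bar group per class and one bar per model."""
    classes = np.unique(y_test)
    width = 0.8 / len(models)
    for i, (name, model) in enumerate(models.items()):
        y_pred = model.predict(X_test)
        counts = [np.sum((y_test == c) & (y_pred != c)) for c in classes]
        plt.bar(np.arange(len(classes)) + i * width, counts, width, label=name)
    plt.xticks(
        np.arange(len(classes)) + width * (len(models) - 1) / 2,
        [str(c) for c in classes],
    )
    plt.xlabel("True class")
    plt.ylabel("Misclassified test points")
    plt.legend()
    plt.title("Misclassifications per class, per model")
    plt.show()
```

Usage would be something like `plot_class_probability_spread({"SVC": svc, "RF": rf}, X_test)` for any set of fitted classifiers exposing `predict_proba`.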