Overview of Explainability (Draft) #510
Conversation
Codecov Report
@@           Coverage Diff           @@
##           master     #510   +/-   ##
=======================================
  Coverage   82.03%   82.03%
=======================================
  Files          77       77
  Lines       10513    10513
=======================================
  Hits         8624     8624
  Misses       1889     1889
Discussion on explainability insights and misuse (from discussion with @jklaise): The point I want to make is that you can use explainability for testing. Given a model, you can draw insights, and if they conform to your expectations, that is evidence that the model has been trained properly. However, this needs further discussion because:
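To make the testing idea concrete, here is a minimal sketch (not from the PR; the toy model and the `occlusion_attribution` helper are hypothetical, not alibi's API): compute per-feature attributions by occlusion, then assert that a feature known to matter in the domain actually dominates. If the assertion fails, that is evidence the model was not trained as intended.

```python
# Illustrative sketch: using attributions as a model sanity test.
# The model and helper below are toy stand-ins, not library code.

def model(x):
    # Toy "trained" model: prediction depends strongly on feature 0,
    # weakly on feature 1, and not at all on feature 2.
    return 3.0 * x[0] + 0.5 * x[1] + 0.0 * x[2]

def occlusion_attribution(predict, x, baseline=0.0):
    """Attribute by replacing each feature with a baseline value and
    measuring the resulting change in the prediction."""
    ref = predict(x)
    attributions = []
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] = baseline
        attributions.append(ref - predict(perturbed))
    return attributions

x = [1.0, 1.0, 1.0]
attrs = occlusion_attribution(model, x)

# Domain expectation: feature 0 should dominate. A failure here is
# evidence against the model having been trained properly.
assert attrs[0] > attrs[1] > attrs[2], attrs
```

The caveat raised above still applies: an attribution that matches expectations is only weak evidence, since a model can be right for the wrong reasons on the instances tested.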
… dependent Tree SHAP
A few more to keep track of:
…supported library model lists in summary table.
Left some very minor last comments. Otherwise good to go!
Progress:
Intro:
Types of insight
Global Feature Attribution
Local Necessary Features
Local Feature Attribution
Counterfactuals
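For the counterfactual insight type listed above, a minimal sketch may help (illustrative only; the toy classifier and greedy search below are hypothetical stand-ins, not alibi's counterfactual method): a counterfactual is a small perturbation of an instance that flips the model's prediction.

```python
# Illustrative sketch: a counterfactual as the result of a naive search
# that perturbs a feature until the predicted class flips.

def predict(x):
    # Toy classifier: class 1 iff 2*x0 + x1 > 3, else class 0.
    return 1 if 2.0 * x[0] + x[1] > 3.0 else 0

def counterfactual(x, predict, step=0.1, max_iter=1000):
    """Greedily nudge feature 0 toward the decision boundary until the
    predicted class flips; return the perturbed instance, or None."""
    target = 1 - predict(x)
    z = list(x)
    for _ in range(max_iter):
        if predict(z) == target:
            return z
        z[0] += step  # toy search direction; real methods optimise this
    return None

x = [1.0, 0.0]           # predicted class 0
cf = counterfactual(x, predict)
assert cf is not None and predict(cf) == 1
```

Real counterfactual methods additionally minimise the size of the perturbation and keep the result on the data manifold; this sketch only shows the class-flip requirement.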