An examination of the approximation quality of post-hoc explainability methods across subgroups. For more details, please see our FAccT 2022 paper.
Run the following commands to clone this repo and create the Conda environment:

```bash
git clone git@github.com:MLforHealth/ExplanationsSubpopulations.git
cd ExplanationsSubpopulations/
conda env create -f environment.yml
conda activate fair_exp
```
We provide the `adult`, `recidivism`, and `lsac` datasets as `.csv` files in this repository. For access to the `mimic` dataset, see ProcessMIMIC.md.
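As a quick sanity check, the provided CSVs can be inspected with pandas. The file name below is an assumption for illustration only; adjust it to wherever the CSV actually lives in this repository.

```python
# Hypothetical example of inspecting one of the provided datasets.
# The path "adult.csv" is an assumption; check the repository layout
# for the actual location and schema of each dataset.
import pandas as pd

df = pd.read_csv("adult.csv")
print(df.shape)              # number of rows and columns
print(df.columns.tolist())   # feature names, including any group/label columns
print(df.head())
```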
To reproduce the experiments in the paper, which involve training grids of models with different hyperparameters, use `sweep.py` as follows:
```bash
python sweep.py launch \
    --experiment {experiment_name} \
    --output_dir {output_root} \
    --command_launcher {launcher}
```
where:
- `experiment_name` corresponds to experiments defined as classes in `experiments.py` (a purely illustrative sketch of such a class appears below).
- `output_root` is a directory where experimental results will be stored.
- `launcher` is a string corresponding to a launcher defined in `launchers.py` (i.e. `slurm` or `local`).

Sample bash scripts showing the command can also be found in `bash_scripts/`.
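The exact interface of the experiment classes lives in `experiments.py` and is not reproduced here. The following is a purely illustrative sketch of how a hyperparameter grid could expand into individual `run.py` calls, reusing the argument names from the `run.py` example below; the class name and any grid values beyond those shown in that example are assumptions.

```python
# Purely illustrative: a hypothetical grid class whose Cartesian product of
# settings yields one run.py command per configuration. The real classes in
# experiments.py may use a different structure.
import itertools

class AdultLocalGrid:
    grid = {
        "dataset": ["adult"],
        "blackbox_model": ["lr"],      # "lr" appears in the run.py example
        "explanation_type": ["local"],
        "explanation_model": ["lime"],
        "n_features": [5, 10],         # assumed values
        "model_type": ["sklearn"],
        "seed": [1, 2, 3],
    }

    @classmethod
    def commands(cls):
        keys = list(cls.grid)
        for values in itertools.product(*(cls.grid[k] for k in keys)):
            args = " ".join(f"--{k} {v}" for k, v in zip(keys, values))
            yield f"python run.py {args}"
```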
Alternatively, a single model can be trained by calling `run.py` with the appropriate arguments, for example:
```bash
python run.py \
    --dataset adult \
    --blackbox_model lr \
    --explanation_type local \
    --explanation_model lime \
    --n_features 5 \
    --model_type sklearn \
    --evaluate_val \
    --seed 1
```
We aggregate results and generate the tables used in the paper as shown in the `notebooks/localmodels.ipynb` notebook. Another example is shown in the `notebooks/agg_results.ipynb` notebook.
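For reference, a minimal sketch of what result aggregation might look like is shown below. The output file name, directory structure, and column names are assumptions; the notebooks above contain the actual aggregation logic.

```python
# Hypothetical aggregation sketch: collect one JSON record per run under the
# sweep's output directory and summarise a metric per subgroup.
import json
from pathlib import Path

import pandas as pd

def load_results(output_root):
    """Read every results.json (assumed file name) into a single DataFrame."""
    records = []
    for path in Path(output_root).rglob("results.json"):
        with open(path) as f:
            records.append(json.load(f))
    return pd.DataFrame(records)

# Usage (column names "dataset", "group", and "fidelity_gap" are assumptions):
# df = load_results("output/")
# print(df.groupby(["dataset", "group"])["fidelity_gap"].mean())
```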
We provide the notebook used to simulate the real-world impact of biased explanations on law school admissions in `notebooks/simulate_explanation_impact.ipynb`.
To reproduce the experiment described in Section 6.2 of the paper, run the `DatasetDists` experiment, and aggregate results using the `notebooks/agg_dist_results.ipynb` notebook.
If you use this code in your research, please cite the following publication:
```bibtex
@article{balagopalan2022road,
  title={The Road to Explainability is Paved with Bias: Measuring the Fairness of Explanations},
  author={Balagopalan, Aparna and Zhang, Haoran and Hamidieh, Kimia and Hartvigsen, Thomas and Rudzicz, Frank and Ghassemi, Marzyeh},
  journal={arXiv preprint arXiv:2205.03295},
  year={2022}
}
```