After cloning, move inside this repository and:
- create a conda environment using `conda env create -f environment.yml`
- activate the environment using `conda activate audit_vis`
- run `pip install .`
Then, download the `results.tar.gz`, `checkpoints.tar.gz` and `images.tar.gz` archives from our Zenodo archive, untar them, and move the extracted folders inside the repository:
- `checkpoints` contains the weights of the null and anomalous models
- `images` contains the images we used to compute the explanations
- `results` contains precomputed visualizations, anomaly scores, and the final results of the paper
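For example, assuming the three archives were downloaded into the repository root, they can be extracted in place with Python's standard `tarfile` module (a plain `tar -xzf` works just as well):

```python
import tarfile

# Extract each Zenodo archive into the repository root
# (assumes the .tar.gz files were downloaded there).
for archive in ["checkpoints.tar.gz", "images.tar.gz", "results.tar.gz"]:
    with tarfile.open(archive, "r:gz") as tar:
        tar.extractall()
```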
The most important folders in this repository are:
- `visualize/`, which computes the model explanations and stores them in `results/visualizations/`
- `analysis/`, which uses the precomputed explanations to compute the anomaly scores and stores them in `results/anomaly_scores/`
In each of these folders, there is:
- a `detection_script.py` script, which runs the jobs relevant for the detection task
- a `localization_script.py` script, which runs the jobs relevant for the localization task
These four scripts use `submitit` to schedule SLURM jobs on a cluster with GPUs (a minimal sketch of the pattern follows the list below). Before running each script:
- set `partition_name` to the name of your SLURM partition
- for the `visualize` scripts, set the `list_anomalies` and `list_methods` variables
- for the `analysis` scripts, set the `list_anomalies`, `list_lpips_nets` and `list_methods` variables
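As a reference for what these settings control, here is a minimal sketch of the submitit pattern the scripts follow; the variable values and the `run_job` function below are placeholders, not the actual repository code:

```python
import submitit

# Placeholder configuration -- replace with your cluster's partition
# and the anomaly/method names used in the paper.
partition_name = "gpu"
list_anomalies = ["anomaly_a", "anomaly_b"]
list_methods = ["method_a"]

# One executor writes SLURM submission files and logs to this folder.
executor = submitit.AutoExecutor(folder="submitit_logs")
executor.update_parameters(
    slurm_partition=partition_name,
    gpus_per_node=1,
    timeout_min=120,
)

def run_job(anomaly, method):
    # Stand-in for the actual visualization/analysis computation.
    print(f"running {method} on {anomaly}")

# Schedule one SLURM job per (anomaly, method) pair.
jobs = [
    executor.submit(run_job, anomaly, method)
    for anomaly in list_anomalies
    for method in list_methods
]
for job in jobs:
    job.result()  # blocks until the corresponding job finishes
```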
Finally, `analysis/get_results.py` computes the final results for the detection and localization tasks from the anomaly scores.
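Once all the analysis jobs have finished, and assuming the default `results/` layout, a typical invocation would be `python analysis/get_results.py` from the repository root.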