This is the official implementation of our TMLR 2024 paper: "Concept-Driven Continual Learning".
- In this work, we bring interpretability into the continual learning process to mitigate catastrophic forgetting.
- We propose two novel methods, Interpretability-Guided Continual Learning (IG-CL) and Intrinsically-Interpretable Neural Network (IN2), that can systematically manage human-understandable concepts within DNNs throughout the training process.
- Our proposed approaches provide unprecedented transparency and control over the continual learning process, marking a promising new direction for designing continual learning algorithms.
Overview of our Method 1: IG-CL
Overview of our Method 2: IN2
Execute the following code to set up the environment.

```bash
bash setup/setup.sh
```

For methods related to MIR and DER, please follow `setup/new_env.md` to modify the environment and code.
Execute the following code to get the experiment results. Here we take CIFAR-10 as an example.

```bash
python continual_learning/train_all.py --result_dir results --strategy SRT --task_num 5 --dataset cifar10 --sol sol0
```

`sol0` means the freeze-all implementation, and `sol1` means the freeze-part implementation.
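To make the two options concrete, here is a framework-agnostic sketch (not the repo's actual code; names and the `trainable` flag are illustrative stand-ins for `requires_grad` in a real DNN framework) of the difference between the two solutions: freeze-all locks every parameter associated with previously learned concepts, while freeze-part keeps a chosen subset trainable.

```python
# Illustrative sketch only -- not the repo's implementation.
# `trainable` stands in for a parameter's requires_grad flag.

def freeze_all(param_names):
    """freeze-all (sol0): mark every listed parameter as frozen."""
    return {name: False for name in param_names}

def freeze_part(param_names, keep_trainable):
    """freeze-part (sol1): freeze everything except keep_trainable."""
    return {name: name in keep_trainable for name in param_names}

# Hypothetical parameter names for a small network.
params = ["layer1.weight", "layer2.weight", "classifier.weight"]
print(freeze_all(params))
print(freeze_part(params, keep_trainable={"classifier.weight"}))
```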
Execute the following code to get the experiment results.

```bash
python evaluate/metric.py --file_dir results --strategy SRT --task_num 5
```
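For intuition about what such an evaluation script reports, below is a minimal sketch (not the repo's `evaluate/metric.py`) of two standard continual-learning metrics computed from an accuracy matrix `acc[i][j]`, the accuracy on task `j` after training on tasks `0..i`; the matrix values are hypothetical.

```python
# Minimal sketch of standard continual-learning metrics; the repo's
# evaluate/metric.py may compute these differently.

def average_accuracy(acc):
    """Mean accuracy over all tasks after training on the final task."""
    final = acc[-1]
    return sum(final) / len(final)

def forgetting(acc):
    """Mean drop from each task's best accuracy to its final accuracy,
    averaged over all tasks except the last one."""
    T = len(acc)
    drops = []
    for j in range(T - 1):
        best = max(acc[i][j] for i in range(j, T))
        drops.append(best - acc[-1][j])
    return sum(drops) / len(drops)

# Hypothetical 3-task accuracy matrix.
acc = [
    [0.90, 0.10, 0.10],
    [0.70, 0.85, 0.10],
    [0.60, 0.75, 0.80],
]
print(average_accuracy(acc))  # mean of the last row
print(forgetting(acc))
```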
- We need to know the classes in each task to create the corresponding concept set.
- Follow the steps in `continual_learning/label_generation.ipynb` to get the classes for each task.
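As a rough illustration of what such a task split looks like, here is a sketch that divides the CIFAR-10 classes evenly into 5 tasks with a fixed seed; the actual assignment comes from `continual_learning/label_generation.ipynb`, and the function name here is hypothetical.

```python
# Illustrative sketch of a class-incremental task split;
# the real split is produced by label_generation.ipynb.
import random

def split_classes(classes, task_num, seed=0):
    """Shuffle the class list with a fixed seed and split it
    evenly into task_num disjoint groups."""
    rng = random.Random(seed)
    shuffled = classes[:]
    rng.shuffle(shuffled)
    per_task = len(shuffled) // task_num
    return [shuffled[i * per_task:(i + 1) * per_task]
            for i in range(task_num)]

cifar10 = ["airplane", "automobile", "bird", "cat", "deer",
           "dog", "frog", "horse", "ship", "truck"]
tasks = split_classes(cifar10, task_num=5, seed=0)
for t, cls in enumerate(tasks):
    print(f"task {t}: {cls}")
```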
Execute the following code to generate the concept set. Please put your OpenAI API key in `sandbox-lf-cbm/.openai_api_key` before running the code.

```bash
bash script_dir/CBM/exec_conceptset.sh
```
- For a specific scenario, please modify the variables `seed_list` and `task_num` in `sandbox-lf-cbm/GPT_conceptset_processor.py` and `sandbox-lf-cbm/GPT_init_concepts.py`.
- Please see Label-free Concept Bottleneck Models for more details.
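To give a flavor of concept-set post-processing, here is a hedged sketch of a Label-free-CBM-style cleanup pass; the actual filtering lives in `sandbox-lf-cbm/GPT_conceptset_processor.py` and may differ. The function name, filters, and sample data below are all illustrative.

```python
# Illustrative concept-set cleanup: dedupe, drop concepts identical to
# class names, and drop overly long entries. Not the repo's exact logic.

def clean_concepts(raw_concepts, class_names, max_len=30):
    classes = {c.lower() for c in class_names}
    seen, kept = set(), []
    for concept in raw_concepts:
        c = concept.strip().lower()
        if not c or c in classes or len(c) > max_len:
            continue  # skip empty, class-name, or overly long entries
        if c in seen:
            continue  # skip case-insensitive duplicates
        seen.add(c)
        kept.append(c)
    return kept

raw = ["whiskers", "Whiskers", "cat",
       "a long descriptive concept that is too verbose to keep", "fur"]
print(clean_concepts(raw, class_names=["cat", "dog"]))  # ['whiskers', 'fur']
```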
Execute the following code to train a model using the IN2 strategy.

```bash
bash script_dir/CBM/cc_cbm.sh
```
Execute the following code to get the experiment results.

```bash
python evaluate/metric.py --file_dir results/cc_cbm --strategy cc_cbm --task_num 5
```
- Please check the `experiments/` folder for the experiments in the paper.
- Forward Transfer Metric: use the `avalanche.evaluation.metrics.forward_transfer` metric to get the results.
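For reference, here is a sketch of the classic (GEM-style) forward-transfer computation; Avalanche's `forward_transfer` metric may differ in details, and the matrix values below are hypothetical. `R[i][j]` is accuracy on task `j` after training on task `i`, and `baseline[j]` is the accuracy of a randomly initialized model on task `j`.

```python
# Sketch of GEM-style forward transfer; Avalanche's implementation
# may differ in details.

def forward_transfer(R, baseline):
    """Mean of R[i-1][i] - baseline[i] over tasks i = 1..T-1: how much
    training on earlier tasks helps a task before it is seen."""
    T = len(R)
    gains = [R[i - 1][i] - baseline[i] for i in range(1, T)]
    return sum(gains) / len(gains)

# Hypothetical 3-task accuracy matrix and random-init baseline.
R = [
    [0.90, 0.20, 0.12],
    [0.70, 0.85, 0.18],
    [0.60, 0.75, 0.80],
]
baseline = [0.10, 0.10, 0.10]
print(forward_transfer(R, baseline))  # (0.10 + 0.08) / 2 = 0.09
```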
- CLIP-Dissect: https://github.com/Trustworthy-ML-Lab/CLIP-dissect
- Label-free CBM: https://github.com/Trustworthy-ML-Lab/Label-free-CBM
- Avalanche: https://avalanche.continualai.org
```bibtex
@article{yang2024conceptdriven,
  title={Concept-Driven Continual Learning},
  author={Yang, Sin-Han and Oikarinen, Tuomas and Weng, Tsui-Wei},
  journal={Transactions on Machine Learning Research},
  year={2024}
}
```