
Selective Synaptic Dampening (AAAI + ICLR TP code)


This is the code for the paper Fast Machine Unlearning Without Retraining Through Selective Synaptic Dampening (https://arxiv.org/abs/2308.07707), accepted at the 38th Annual AAAI Conference on Artificial Intelligence (Main Track).

Related research

| Paper | Code | Venue/Status |
| --- | --- | --- |
| Potion: Towards Poison Unlearning | GitHub | Journal of Data-Centric Machine Learning Research (DMLR) |
| Zero-Shot Machine Unlearning at Scale via Lipschitz Regularization | GitHub | Preprint |
| Parameter-Tuning-Free Data Entry Error Unlearning with Adaptive Selective Synaptic Dampening | GitHub | Preprint |
| Loss-Free Machine Unlearning (i.e. Label-Free) | LFSSD, see below | ICLR 2024 Tiny Paper |

Implementing LFSSD:

Replace the following lines in the compute_importances function(s):

# Vanilla SSD: cross-entropy loss, squared gradients (diagonal Fisher information)
criterion = nn.CrossEntropyLoss()
loss = criterion(out, y)
...
imp.data += p.grad.data.clone().pow(2)

# LFSSD: label-free loss on the output norm, absolute gradients
loss = torch.norm(out, p="fro", dim=1).pow(2).mean()
...
imp.data += p.grad.data.clone().abs()
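
For orientation, here is a minimal, self-contained sketch of an importance-accumulation loop with the LFSSD variant swapped in. The function name compute_importances comes from the repo, but the signature, dataloader handling, and variable names below are illustrative assumptions rather than the repo's exact code:

import torch
import torch.nn as nn

def compute_importances(model: nn.Module, dataloader, device) -> dict:
    # Illustrative sketch, not the repo's exact code.
    model.eval()
    # One zero-initialised importance tensor per parameter
    importances = {
        name: torch.zeros_like(p) for name, p in model.named_parameters()
    }
    for x, _ in dataloader:  # labels are ignored: LFSSD is label-free
        x = x.to(device)
        model.zero_grad()
        out = model(x)
        # LFSSD loss: mean squared L2 norm of the model outputs
        loss = torch.norm(out, p="fro", dim=1).pow(2).mean()
        loss.backward()
        for name, p in model.named_parameters():
            if p.grad is not None:
                # LFSSD accumulates absolute gradients
                # (vanilla SSD would accumulate squared gradients here)
                importances[name] += p.grad.data.clone().abs()
    return importances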

Usage

All experiments can be run via

./MAIN_run_experiments.sh 0 # to run experiments on GPU 0 (nvidia-smi)

You might encounter issues executing this file due to differing line endings between Windows and Unix. Run dos2unix MAIN_run_experiments.sh to fix this.

Setup

You will need to train ResNet18s and Vision Transformers. Use pretrain_model.py for this, then copy the paths of the trained models into the respective .sh files.

# fill in _ with your desired parameters as described in pretrain_model.py
python pretrain_model.py -net _ -dataset _ -classes _ -gpu _
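
For example, a hypothetical invocation (these option values are placeholders for illustration only; pretrain_model.py documents the names and values it actually accepts):

python pretrain_model.py -net ResNet18 -dataset Cifar100 -classes 100 -gpu 0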

We used https://hub.docker.com/layers/tensorflow/tensorflow/latest-gpu-py3-jupyter/images/sha256-901b827b19d14aa0dd79ebbd45f410ee9dbfa209f6a4db71041b5b8ae144fea5 as our base image and installed the following packages on top:

datetime (Python standard library)
wandb
sklearn (install via scikit-learn)
torch
copy (Python standard library)
tqdm
transformers
matplotlib
scipy
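
A minimal environment setup along these lines (illustrative commands; the image tag comes from the link above, and standard-library modules are omitted from the pip install):

# Start from the TensorFlow GPU base image, then add the packages
docker run --gpus all -it tensorflow/tensorflow:latest-gpu-py3-jupyter bash
pip install wandb scikit-learn torch tqdm transformers matplotlib scipy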

You will need a wandb.ai account to use the implemented logging. Feel free to replace it with any other logger of your choice.

Modifying SSD

SSD functions are in ssd.py. To change alpha and lambda, set them in the respective forget_..._main.py file per unlearning task.
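
For context, the step these two hyperparameters control can be sketched as follows (a minimal sketch of the dampening update described in the paper; variable names are illustrative and ssd.py may differ): parameters whose forget-set importance exceeds alpha times their full-dataset importance are multiplied by beta = min(lambda * imp_full / imp_forget, 1).

import torch

def dampen(model, imp_full, imp_forget, alpha, lam):
    # Illustrative sketch of the SSD dampening update;
    # see ssd.py for the actual implementation.
    with torch.no_grad():
        for name, p in model.named_parameters():
            full, forget = imp_full[name], imp_forget[name]
            # Select parameters disproportionately important to the forget set
            mask = forget > alpha * full
            # beta = min(lam * full / forget, 1); clamp avoids division by zero
            beta = (lam * full / forget.clamp(min=1e-12)).clamp(max=1.0)
            p[mask] *= beta[mask]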

Citing this work

@article{Foster_Schoepf_Brintrup_2024,
      title={Fast Machine Unlearning without Retraining through Selective Synaptic Dampening},
      volume={38},
      url={https://ojs.aaai.org/index.php/AAAI/article/view/29092},
      DOI={10.1609/aaai.v38i11.29092},
      number={11},
      journal={Proceedings of the AAAI Conference on Artificial Intelligence},
      author={Foster, Jack and Schoepf, Stefan and Brintrup, Alexandra},
      year={2024},
      month={Mar.},
      pages={12043-12051} }

Authors

For our newest research, feel free to follow our socials:

Jack Foster: LinkedIn, Twitter

Stefan Schoepf: LinkedIn, Twitter

Alexandra Brintrup: LinkedIn

Supply Chain AI Lab: LinkedIn
