Randomized adversarial training based on AWP-TRADES (CIFAR-10/100):
python AWP_first_second/train_first_second.py
Randomized adversarial training based on TRADES (CIFAR-10/100):
python TRADES_first_second/train_first_second.py
Note that, to further reduce complexity, we implement the first- and second-order Taylor terms with the approximation method described in Appendix D of the paper. When optimizing the second-order Taylor term in practice, we replace the Kronecker product with the Hadamard product.
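For intuition only, the sketch below shows one way such an approximation could look in PyTorch: the loss is expanded around the current weights under a small random perturbation, the first-order term is the inner product of the gradient with the perturbation, and the second-order term is approximated element-wise (a Hadamard-style surrogate for the full Kronecker-product structure). All names here (`taylor_terms`, `sigma`) are illustrative and not taken from this repository; see Appendix D of the paper for the actual formulation.

```python
import torch

def taylor_terms(model, loss, sigma=0.01):
    """Rough sketch (not the repo's implementation): approximate
    E[L(w + delta)] - L(w) for a small random Gaussian weight perturbation.

    first  ~ g . delta                    (g = dL/dw)
    second ~ 0.5 * sum((g * delta)**2)    element-wise (Hadamard) surrogate
                                          for the full second-order term
    """
    params = [p for p in model.parameters() if p.requires_grad]
    grads = torch.autograd.grad(loss, params, create_graph=True)
    first = loss.new_zeros(())
    second = loss.new_zeros(())
    for p, g in zip(params, grads):
        delta = sigma * torch.randn_like(p)                # random weight perturbation
        first = first + (g * delta).sum()                  # first-order Taylor term
        second = second + 0.5 * ((g * delta) ** 2).sum()   # diagonal second-order surrogate
    return first, second
```

In training, these two terms would be added (with suitable weights) to the clean/adversarial loss before back-propagation.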
PGD and CW evaluation with epsilon=0.031:
python eval_attack.py
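If you just want to sanity-check a checkpoint outside eval_attack.py, the following is a minimal, self-contained PGD evaluation sketch with the same epsilon=0.031 (20 steps and step size 0.003 are common defaults, not necessarily those used by the script); `model` is assumed to be a standard PyTorch classifier on inputs in [0, 1].

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.031, alpha=0.003, steps=20):
    # Random start inside the L_inf ball, then iterative sign-gradient ascent.
    x_adv = torch.clamp(x + torch.empty_like(x).uniform_(-eps, eps), 0.0, 1.0).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()              # ascent step
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)     # project to L_inf ball
        x_adv = torch.clamp(x_adv, 0.0, 1.0).detach()
    return x_adv

def robust_accuracy(model, loader, device="cuda"):
    model.eval()
    correct, total = 0, 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = pgd_attack(model, x, y)
        with torch.no_grad():
            correct += (model(x_adv).argmax(1) == y).sum().item()
        total += y.size(0)
    return correct / total
```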
AutoAttack evaluation uses the standard version with epsilon=8/255:
python eval_autoattack.py
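For reference, the standard AutoAttack evaluation is typically invoked through the public autoattack package (reference [3] below) roughly as follows; loading the model and test tensors is assumed to happen elsewhere, so this is a sketch rather than a drop-in replacement for eval_autoattack.py.

```python
import torch
from autoattack import AutoAttack  # pip install git+https://github.com/fra31/auto-attack

# `model` (in eval mode), `x_test`, `y_test` are assumed to be prepared elsewhere,
# with inputs in [0, 1] and labels as a LongTensor.
model.eval()
adversary = AutoAttack(model, norm='Linf', eps=8/255, version='standard')
x_adv = adversary.run_standard_evaluation(x_test, y_test, bs=128)
```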
We observed the best performance between epochs 100 and 200.
[1] AT: https://github.com/locuslab/robust_overfitting
[2] TRADES: https://github.com/yaodongyu/TRADES/
[3] AutoAttack: https://github.com/fra31/auto-attack
[4] MART: https://github.com/YisenWang/MART
[5] AWP: https://github.com/csdongxian/AWP
[6] AVMixup: https://github.com/hirokiadachi/Adversarial-vertex-mixup-pytorch
[7] S2O: https://github.com/Alexkael/S2O
If you find our paper and repo useful, please cite:
@inproceedings{jin2023randomized,
  title={Randomized Adversarial Training via Taylor Expansion},
  author={Jin, Gaojie and Yi, Xinping and Wu, Dengyu and Mu, Ronghui and Huang, Xiaowei},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2023}
}