A curated list of awesome adversarial machine learning resources, inspired by awesome-computer-vision.
## Blogs
- Breaking Linear Classifiers on ImageNet, A. Karpathy
- Breaking things is easy, N. Papernot & I. Goodfellow
- Attacking Machine Learning with Adversarial Examples, N. Papernot, I. Goodfellow, S. Huang, Y. Duan, P. Abbeel, J. Clark.
- Introduction to Adversarial Machine Learning, Sarah Jamie Lewis.
- Robust Adversarial Examples, Anish Athalye.
- Intriguing properties of neural networks, C. Szegedy et al., ICLR 2014
- Explaining and Harnessing Adversarial Examples, I. Goodfellow et al., ICLR 2015
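Goodfellow et al.'s paper introduces the fast gradient sign method (FGSM): perturb the input by ε in the direction of the sign of the loss gradient, x_adv = x + ε · sign(∇ₓ J(x, y)). A minimal PyTorch sketch of the idea; the toy linear `model`, random data, and ε value below are placeholders, not from the paper:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, epsilon):
    """Fast gradient sign method: x_adv = x + eps * sign(grad_x loss)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # One signed gradient step, then clamp back to the valid pixel range.
    return (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

# Toy usage with a stand-in linear classifier on fake "images" in [0, 1].
model = torch.nn.Linear(784, 10)
x = torch.rand(8, 784)
y = torch.randint(0, 10, (8,))
x_adv = fgsm(model, x, y, epsilon=0.1)
```

A single gradient computation per input is what makes the method fast; the same helper idea recurs in the sketches further down.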
## Image Classification
- DeepFool: a simple and accurate method to fool deep neural networks, S. Moosavi-Dezfooli et al., CVPR 2016
- The Limitations of Deep Learning in Adversarial Settings, N. Papernot et al., EuroS&P 2016
- Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples, N. Papernot et al., arXiv 2016
- Adversarial Examples in the Physical World, A. Kurakin et al., ICLR workshop 2017 (its iterative attack is sketched after this list)
- Delving into Transferable Adversarial Examples and Black-box Attacks, Y. Liu et al., ICLR 2017
- Towards Evaluating the Robustness of Neural Networks, N. Carlini et al., IEEE S&P 2017
- Practical Black-Box Attacks against Machine Learning, N. Papernot et al., AsiaCCS 2017
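Kurakin et al.'s physical-world paper above iterates on FGSM: take repeated small signed steps and project back into the L∞ ball of radius ε around the original input, the so-called basic iterative method. A hedged sketch reusing the toy setup from the FGSM block; the step size and step count are illustrative choices:

```python
import torch
import torch.nn.functional as F

def basic_iterative_method(model, x, y, epsilon, alpha, steps):
    """Repeated small FGSM steps, each projected back into the
    L-infinity ball of radius epsilon around the original input."""
    x_orig = x.clone().detach()
    x_adv = x_orig.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project onto the epsilon ball, then onto the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x_orig - epsilon), x_orig + epsilon)
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv

model = torch.nn.Linear(784, 10)
x, y = torch.rand(8, 784), torch.randint(0, 10, (8,))
x_adv = basic_iterative_method(model, x, y, epsilon=0.1, alpha=0.01, steps=10)
```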
## Reinforcement Learning
- Adversarial attacks on neural network policies, S. Huang et al., ICLR workshop 2017 (sketched after this list)
- Tactics of Adversarial Attacks on Deep Reinforcement Learning Agents, Y.-C. Lin et al., IJCAI 2017
- Delving into adversarial attacks on deep policies, J. Kos et al., ICLR workshop 2017
- Robust Deep Reinforcement Learning with Adversarial Attacks, A. Pattanaik et al., AAMAS 2018
- Vulnerability of Deep Reinforcement Learning to Policy Induction Attacks, V. Behzadan et al., MLDM 2017
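Huang et al. (above) attack trained agents by applying FGSM to the policy network's observations, increasing the loss against the action the clean policy would have taken. A minimal sketch under that reading; the toy `policy` mapping observations to action logits is a stand-in, not an agent from these papers:

```python
import torch
import torch.nn.functional as F

def attack_observation(policy, obs, epsilon):
    """FGSM on a policy's input: push the logits away from the
    action the unperturbed policy prefers."""
    obs = obs.clone().detach().requires_grad_(True)
    logits = policy(obs)
    preferred = logits.argmax(dim=-1)          # clean policy's action
    F.cross_entropy(logits, preferred).backward()
    return (obs + epsilon * obs.grad.sign()).detach()

policy = torch.nn.Linear(4, 2)   # stand-in for a CartPole-sized policy
adv_obs = attack_observation(policy, torch.rand(1, 4), epsilon=0.05)
```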
## Segmentation & Object Detection
- Adversarial Examples for Semantic Segmentation and Object Detection, C. Xie et al., ICCV 2017
## VAE-GAN
- Adversarial examples for generative models, J. Kos et al., arXiv 2017
## Adversarial Training
- Adversarial Machine Learning At Scale, A. Kurakin et al., ICLR 2017
- Ensemble Adversarial Training: Attacks and Defenses, F. Tramèr et al., arXiv 2017
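Adversarial training folds adversarial examples into each minibatch so the model is optimized on clean and perturbed inputs alike. A sketch of one such step using an FGSM perturbation crafted against the current weights; the 50/50 loss weighting is an illustrative choice, not a prescription from these papers:

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, epsilon):
    """One optimization step on a mix of clean and FGSM examples."""
    # Craft adversarial examples against the current parameters.
    x_pert = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_pert), y).backward()
    x_adv = (x_pert + epsilon * x_pert.grad.sign()).clamp(0.0, 1.0).detach()

    # Train on an even blend of clean and adversarial losses.
    optimizer.zero_grad()
    loss = 0.5 * (F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y))
    loss.backward()
    optimizer.step()
    return loss.item()

model = torch.nn.Linear(784, 10)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.rand(8, 784), torch.randint(0, 10, (8,))
adversarial_training_step(model, opt, x, y, epsilon=0.1)
```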
## Defensive Distillation
- Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks, N. Papernot et al., IEEE S&P 2016 (sketched after this list)
- Extending Defensive Distillation, N. Papernot et al., arXiv 2017
- Distributional Smoothing with Virtual Adversarial Training, T. Miyato et al., ICLR 2016
- Adversarial Training Methods for Semi-Supervised Text Classification, T. Miyato et al., ICLR 2017
- Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images, A. Nguyen et al., CVPR 2015
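Defensive distillation, per the Papernot et al. paper above, trains one network at temperature T, then trains a second "distilled" network on the first network's temperature-softened probabilities, deploying it at T = 1. A minimal sketch of the distilled network's loss; the single-layer networks and T = 20 below are stand-ins, not the paper's setup:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T):
    """Cross-entropy between student and teacher predictions,
    both softened with temperature T."""
    soft_targets = F.softmax(teacher_logits / T, dim=-1)
    log_probs = F.log_softmax(student_logits / T, dim=-1)
    return -(soft_targets * log_probs).sum(dim=-1).mean()

teacher = torch.nn.Linear(784, 10)   # stands in for a network trained at T
student = torch.nn.Linear(784, 10)
x = torch.rand(8, 784)
with torch.no_grad():
    teacher_logits = teacher(x)
loss = distillation_loss(student(x), teacher_logits, T=20.0)
loss.backward()   # gradients flow into the student only
```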
## Talks
- Do Statistical Models Understand the World?, I. Goodfellow, 2015
- Classifiers under Attack, David Evans, 2017
- Adversarial Examples in Machine Learning, Nicolas Papernot, 2017
## Data Poisoning
- Poisoning Behavioral Malware Clustering, B. Biggio et al., AISec 2014
- Is Data Clustering in Adversarial Settings Secure?, B. Biggio et al., AISec 2013
- Poisoning complete-linkage hierarchical clustering, B. Biggio et al., S+SSPR 2014
- Is Feature Selection Secure against Training Data Poisoning?, H. Xiao et al., ICML 2015
- Adversarial Feature Selection Against Evasion Attacks, F. Zhang et al., IEEE TCYB 2016
## License
To the extent possible under law, Yen-Chen Lin has waived all copyright and related or neighboring rights to this work.