PyTorch implementations of adversarial attacks: FGSM, PGD, and Carlini-Wagner.

EN 520.655 Project 1

Nisarg A. Shah¹ {[email protected]} and Yasiru Ranasinghe¹ {[email protected]}

¹ Johns Hopkins University

Prerequisites

  • Python 3.6+
  • PyTorch 1.0+

Training

# Start training with: 
python main.py

# You can manually resume the training with: 
python main.py --resume --lr=0.01
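
A common pattern for the --resume flag is to restore the most recent checkpoint before continuing to train. The sketch below illustrates that pattern only; the checkpoint path and dictionary keys ('./checkpoint/ckpt.pth', 'net', 'epoch') are assumptions for illustration and may not match what main.py actually saves.

import torch

def maybe_resume(net, resume):
    # Hypothetical checkpoint layout; main.py may use different paths and keys.
    start_epoch = 0
    if resume:
        checkpoint = torch.load('./checkpoint/ckpt.pth')
        net.load_state_dict(checkpoint['net'])
        start_epoch = checkpoint['epoch'] + 1
    return start_epoch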

Fast Gradient Sign Method (FGSM) attack

# Run the FGSM attack with:
python fgsm.py
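
At its core, FGSM perturbs the input by a single step of size epsilon in the direction of the sign of the loss gradient. The sketch below is a generic version of that update, not necessarily the code in fgsm.py; the function name, default epsilon, and the assumption of inputs in [0, 1] are illustrative.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    # One signed-gradient step that increases the cross-entropy loss,
    # followed by a clamp back to the valid pixel range [0, 1].
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    x_adv = x_adv.detach() + epsilon * grad.sign()
    return x_adv.clamp(0, 1)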

Projected Gradient Descent L-2 norm attack

# Run the PGD L-2 attack with:
python pgd_l2.py
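
PGD under an L2 constraint repeats normalized gradient-ascent steps on the loss and, after each step, projects the perturbation back onto an L2 ball of radius epsilon around the original input. A generic sketch follows; parameter names, defaults, and the NCHW image assumption are illustrative rather than pgd_l2.py's actual interface.

import torch
import torch.nn.functional as F

def pgd_l2_attack(model, x, y, epsilon=1.0, alpha=0.2, steps=10):
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Step along the gradient normalized to unit L2 norm per example
        # (assumes NCHW image batches).
        grad_norm = grad.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
        x_adv = x_adv.detach() + alpha * grad / grad_norm
        # Project the perturbation back onto the epsilon-radius L2 ball around x.
        delta = x_adv - x
        delta_norm = delta.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
        x_adv = (x + delta * (epsilon / delta_norm).clamp(max=1.0)).clamp(0, 1)
    return x_adv.detach()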

Projected Gradient Descent L-infinity norm attack

# Run the PGD L-infinity attack with:
python pgd_linf.py
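
The L-infinity variant takes repeated signed-gradient (FGSM-style) steps and clips the accumulated perturbation elementwise to [-epsilon, epsilon]. Again, this is a generic sketch rather than the repository's exact implementation; defaults are illustrative.

import torch
import torch.nn.functional as F

def pgd_linf_attack(model, x, y, epsilon=8 / 255, alpha=2 / 255, steps=10):
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Keep the perturbation inside the L-infinity ball and pixels in [0, 1].
        x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon).clamp(0, 1)
    return x_adv.detach()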

Carlini-Wagner attack

# Run the Carlini-Wagner attack with:
python cw.py
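
The Carlini-Wagner L2 attack optimizes an unconstrained variable in tanh space, trading off squared L2 distortion against a margin loss that pushes the logit of the true class below the best other class. The sketch below is simplified (untargeted, no binary search over the trade-off constant c, which the full attack normally performs); names and defaults are illustrative, not cw.py's actual interface.

import torch
import torch.nn.functional as F

def cw_l2_attack(model, x, y, c=1.0, kappa=0.0, steps=100, lr=0.01):
    # Change of variables: optimize w with x_adv = 0.5 * (tanh(w) + 1),
    # so the adversarial image always stays in [0, 1].
    z = (x * 2 - 1).clamp(-1 + 1e-6, 1 - 1e-6)
    w = (0.5 * torch.log((1 + z) / (1 - z))).detach().requires_grad_(True)  # atanh(z)
    optimizer = torch.optim.Adam([w], lr=lr)
    with torch.no_grad():
        num_classes = model(x).shape[1]
    one_hot = F.one_hot(y, num_classes).float()
    for _ in range(steps):
        x_adv = 0.5 * (torch.tanh(w) + 1)
        logits = model(x_adv)
        real = (logits * one_hot).sum(dim=1)            # logit of the true class
        other = (logits - 1e4 * one_hot).max(dim=1)[0]  # best logit among other classes
        margin = torch.clamp(real - other + kappa, min=0)
        l2 = (x_adv - x).flatten(1).pow(2).sum(dim=1)   # squared L2 distortion
        loss = (l2 + c * margin).sum()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return (0.5 * (torch.tanh(w) + 1)).detach()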

Adversarial training

# Start adversarial training with:
python adversarial_training.py
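
Adversarial training crafts adversarial examples from the current model weights at every step and minimizes the loss on them. A minimal single-epoch sketch using FGSM-generated examples is below; adversarial_training.py may instead use PGD, mix clean and adversarial batches, or differ in other details.

import torch
import torch.nn.functional as F

def train_one_adversarial_epoch(model, loader, optimizer, epsilon=8 / 255):
    device = next(model.parameters()).device
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        # Craft FGSM adversarial examples against the current weights.
        x_req = x.clone().detach().requires_grad_(True)
        grad = torch.autograd.grad(F.cross_entropy(model(x_req), y), x_req)[0]
        x_adv = (x + epsilon * grad.sign()).clamp(0, 1).detach()
        # Standard training step, but on the adversarial batch.
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()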
