SPAA: Stealthy Projector-based Adversarial Attacks on Deep Image Classifiers [IEEE VR'22]

Please watch the IEEE VR'22 presentation for a quick introduction to our work.


Introduction

PyTorch implementation of SPAA (paper). Please refer to the supplementary material (~66 MB) for more results.

Prerequisites

  • PyTorch-compatible GPU with CUDA 11.7 (see the quick check after this list)
  • Conda (Python 3.9)
  • Other packages are listed in requirements.txt.
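
Before installing anything else, you can optionally confirm that PyTorch sees your GPU and CUDA toolkit. A minimal sanity check, assuming PyTorch is already installed in the active environment:

    # Quick sanity check: verify that a CUDA-capable GPU is visible to PyTorch.
    import torch

    print('CUDA available:', torch.cuda.is_available())
    print('CUDA version  :', torch.version.cuda)  # should report 11.7
    if torch.cuda.is_available():
        print('GPU           :', torch.cuda.get_device_name(0))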

Usage

Reproduce paper results

  1. Create a new conda environment:
    conda create --name spaa python=3.9
    activate spaa       # Windows
    conda activate spaa # Linux
    
  2. Clone this repo:
    git clone https://github.com/BingyaoHuang/SPAA
    cd SPAA
    
  3. Install required packages by typing
    pip install -r requirements.txt
    
  4. Download the SPAA benchmark dataset (~3.25 GB) and extract it to data/; see data/README.md for more details.
  5. Start visdom by typing the following command in local or server command line: visdom -port 8097
  6. Once visdom is successfully started, visit http://localhost:8097 (train locally) or http://server:8097 (train remotely).
  7. Open reproduce_paper_results.py and set which GPUs to use, e.g., to use GPU 0 set os.environ['CUDA_VISIBLE_DEVICES'] = '0' (see the sketch after this list).
  8. Run reproduce_paper_results.py to reproduce benchmark results. To visualize the training process in visdom (slower), you need to set plot_on=True.
    cd src/python
    python reproduce_paper_results.py
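
Steps 5-7 boil down to starting visdom and pinning the GPU before the script imports torch. A minimal sketch of that setup (the port and GPU index follow the steps above; this snippet is not part of reproduce_paper_results.py itself):

    # Minimal sketch of steps 5-7: select GPU 0 and verify that the visdom
    # server (started with `visdom -port 8097`) is reachable before training.
    import os
    os.environ['CUDA_VISIBLE_DEVICES'] = '0'  # set before torch initializes CUDA

    import visdom
    vis = visdom.Visdom(port=8097)            # same port as in step 5
    assert vis.check_connection(), 'Start visdom first: visdom -port 8097'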
    

Compare the three projector-based attackers in your setups

  1. Finish the steps above.
  2. Open main.py and follow the instructions there. Execute each cell (starting with # %%) one by one (e.g., using PyCharm's Execute cell in console) to learn how to set up your projector-camera system, capture data, train PCNet/CompenNet++, perform the three projector-based attacks (i.e., SPAA, PerC-AL+CompenNet++, and One-pixel_DE), and generate the attack results.
  3. The results will be saved to data/setups/[your setup]/ret.
  4. Training results of PCNet/CompenNet++ will also be saved to log/%Y-%m-%d_%H_%M_%S.txt and log/%Y-%m-%d_%H_%M_%S.xls (see the sketch below for inspecting these logs).
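
If you want to inspect the saved training logs programmatically, a hypothetical sketch is shown below; the log/*.xls naming follows the paths above, but the sheet layout is an assumption, and reading .xls files may require the xlrd package:

    # Hypothetical helper: open the most recent PCNet/CompenNet++ training log.
    # The log/*.xls naming follows the README; the sheet layout is not documented here.
    import glob, os
    import pandas as pd

    latest_log = max(glob.glob('log/*.xls'), key=os.path.getmtime)  # newest log file
    print('Reading', latest_log)
    print(pd.read_excel(latest_log).head())  # may require `pip install xlrd` for .xls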

Citation

If you use the dataset or this code, please consider citing our work:

@inproceedings{huang2022spaa,
   title      = {SPAA: Stealthy Projector-based Adversarial Attacks on Deep Image Classifiers},
   booktitle  = {2022 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)},
   author     = {Huang, Bingyao and Ling, Haibin},
   year       = {2022},
   month      = mar,
   pages      = {534--542},
   publisher  = {IEEE},
   address    = {Christchurch, New Zealand},
   doi        = {10.1109/VR51125.2022.00073},
   isbn       = {978-1-66549-617-9}
}

Acknowledgments

License

This software is freely available for non-profit, non-commercial use and may be redistributed under the conditions in the license.
