November 2022 release
Co-Authored-By: Yihan Wang <[email protected]>
Co-Authored-By: Huan Zhang <[email protected]>
Co-Authored-By: Zhuolin Yang <[email protected]>
4 people committed Nov 30, 2022
1 parent 57a5a43 commit 8b3dc00
Showing 65 changed files with 2,029 additions and 886 deletions.
1 change: 0 additions & 1 deletion .gitignore
@@ -12,7 +12,6 @@ Verified-acc*.npy
vnn-comp_*.npz
*.tar.gz
verifier_log_*
*.pth
*.pt
.idea
*.so
20 changes: 0 additions & 20 deletions .travis.yml

This file was deleted.

12 changes: 9 additions & 3 deletions README.md
@@ -1,6 +1,5 @@
# auto_LiRPA: Automatic Linear Relaxation based Perturbation Analysis for Neural Networks

![](https://travis-ci.com/KaidiXu/auto_LiRPA.svg?token=HM3jb55xV1sMRsVKBr8b&branch=master&status=started)
[![Documentation Status](https://readthedocs.org/projects/auto-lirpa/badge/?version=latest)](https://auto-lirpa.readthedocs.io/en/latest/?badge=latest)
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](http://PaperCode.cc/AutoLiRPA-Demo)
[![Video Introduction](https://img.shields.io/badge/play-video-red.svg)](http://PaperCode.cc/AutoLiRPA-Video)
@@ -13,6 +12,7 @@

## What's New?

- Bound computation for higher-order computational graphs to support bounding Jacobian, Jacobian-vector products, and [local Lipschitz constants](https://arxiv.org/abs/2210.07394). (11/2022)
- Our neural network verification tool [α,β-CROWN](https://github.com/huanzhang12/alpha-beta-CROWN.git) ([alpha-beta-CROWN](https://github.com/huanzhang12/alpha-beta-CROWN.git)) (using `auto_LiRPA` as its core library) **won** [VNN-COMP 2022](https://sites.google.com/view/vnn2022). Our library supports the large CIFAR100, TinyImageNet and ImageNet models in VNN-COMP 2022. (09/2022)
- Implementation of **general cutting planes** ([GCP-CROWN](https://arxiv.org/pdf/2208.05740.pdf)), support of more activation functions and improved performance and scalability. (09/2022)
- Our neural network verification tool [α,β-CROWN](https://github.com/huanzhang12/alpha-beta-CROWN.git) ([alpha-beta-CROWN](https://github.com/huanzhang12/alpha-beta-CROWN.git)) **won** [VNN-COMP 2021](https://sites.google.com/view/vnn2021) **with the highest total score**, outperforming 11 SOTA verifiers. α,β-CROWN uses the `auto_LiRPA` library as its core bound computation library. (09/2021)
@@ -91,7 +91,7 @@ user-defined ranges. We get guaranteed output ranges (bounds):

## Installation

-Python 3.7+ and PyTorch 1.8+ are required.
+Python 3.7+ and PyTorch 1.11+ are required.
PyTorch 1.11 is recommended, although other recent versions might also work.
It is highly recommended to have a pre-installed PyTorch
that matches your system and our version requirement.
@@ -159,10 +159,12 @@ We provide [a wide range of examples](doc/src/examples.md) of using `auto_LiRPA`
* [Certified Adversarial Defense Training on Sequence Data with **LSTM**](doc/src/examples.md#certified-adversarial-defense-training-for-lstm-on-mnist)
* [Certifiably Robust Language Classifier using **Transformers**](doc/src/examples.md#certifiably-robust-language-classifier-with-transformer-and-lstm)
* [Certified Robustness against **Model Weight Perturbations**](doc/src/examples.md#certified-robustness-against-model-weight-perturbations-and-certified-defense)
* [Bounding **Jacobian** and **local Lipschitz constants**](examples/vision/jacobian.py)

`auto_LiRPA` has also been used in the following works:
* [**α,β-CROWN for complete neural network verification**](https://github.com/huanzhang12/alpha-beta-CROWN)
* [**Fast certified robust training**](https://github.com/shizhouxing/Fast-Certified-Robust-Training)
* [**Computing local Lipschitz constants**](https://github.com/shizhouxing/Local-Lipschitz-Constants)

## Full Documentation

@@ -183,7 +185,7 @@ The general LiRPA based bound propagation algorithm was originally proposed in o
NeurIPS 2020
Kaidi Xu\*, Zhouxing Shi\*, Huan Zhang\*, Yihan Wang, Kai-Wei Chang, Minlie Huang, Bhavya Kailkhura, Xue Lin, Cho-Jui Hsieh (\* Equal contribution)

-The `auto_LiRPA` library is further extended to allow optimized bound (α-CROWN), split constraints (β-CROWN) and general constraints (GCP-CROWN):
+The `auto_LiRPA` library is further extended to allow optimized bound (α-CROWN), split constraints (β-CROWN), general constraints (GCP-CROWN), and higher-order computational graphs:

* [Fast and Complete: Enabling Complete Neural Network Verification with Rapid and Massively Parallel Incomplete Verifiers](https://arxiv.org/pdf/2011.13824.pdf).
ICLR 2021.
@@ -196,6 +198,10 @@ Shiqi Wang\*, Huan Zhang\*, Kaidi Xu\*, Suman Jana, Xue Lin, Cho-Jui Hsieh and Z
* [GCP-CROWN: General Cutting Planes for Bound-Propagation-Based Neural Network Verification](https://arxiv.org/abs/2208.05740).
Huan Zhang\*, Shiqi Wang\*, Kaidi Xu\*, Linyi Li, Bo Li, Suman Jana, Cho-Jui Hsieh and Zico Kolter (\* Equal contribution).

* [Efficiently Computing Local Lipschitz Constants of Neural Networks via Bound Propagation](https://arxiv.org/abs/2210.07394).
NeurIPS 2022.
Zhouxing Shi, Yihan Wang, Huan Zhang, Zico Kolter, Cho-Jui Hsieh.

Certified robust training using `auto_LiRPA` is improved to allow much shorter warmup and faster training:
* [Fast Certified Robust Training with Short Warmup](https://arxiv.org/pdf/2103.17268.pdf).
NeurIPS 2021.
245 changes: 245 additions & 0 deletions README_abcrown.md
@@ -0,0 +1,245 @@
α,β-CROWN (alpha-beta-CROWN): A Fast and Scalable Neural Network Verifier with Efficient Bound Propagation
======================

<p align="center">
<a href="https://arxiv.org/pdf/2103.06624.pdf"><img src="https://www.huan-zhang.com/images/upload/alpha-beta-crown/logo_2022.png" width="36%"></a>
</p>

α,β-CROWN (alpha-beta-CROWN) is a neural network verifier based on an efficient
bound propagation algorithm ([CROWN](https://arxiv.org/pdf/1811.00866.pdf)) and
branch and bound. It can be accelerated efficiently on **GPUs** and can scale
to relatively large convolutional networks. It also supports a wide range of
neural network architectures (e.g., **CNN**, **ResNet**, and various activation
functions), thanks to our versatile
[auto\_LiRPA](http://github.com/KaidiXu/auto_LiRPA) library.
α,β-CROWN can provide **provable robustness guarantees against adversarial
attacks** and can also verify other general properties of neural networks.

α,β-CROWN is the **winning verifier** in [VNN-COMP
2021](https://sites.google.com/view/vnn2021) and [VNN-COMP
2022](https://sites.google.com/view/vnn2022) (International Verification of
Neural Networks Competition) with the highest total score, outperforming many
other neural network verifiers on a wide range of benchmarks over 2 years.
Details of competition results can be found in [VNN-COMP 2021
slides](https://docs.google.com/presentation/d/1oM3NqqU03EUqgQVc3bGK2ENgHa57u-W6Q63Vflkv000/edit#slide=id.ge4496ad360_14_21),
[report](https://arxiv.org/abs/2109.00498) and [VNN-COMP 2022 slides (page
73)](https://drive.google.com/file/d/1nnRWSq3plsPvOT3V-drAF5D8zWGu02VF/view?usp=sharing).

Supported Features
----------------------

<p align="center">
<a href="https://arxiv.org/pdf/2103.06624.pdf"><img src="https://www.huan-zhang.com/images/upload/alpha-beta-crown/banner.png" width="100%"></a>
</p>

Our verifier consists of the following core algorithms:

* **β-CROWN** ([Wang et al. 2021](https://arxiv.org/pdf/2103.06624.pdf)): complete verification with **CROWN** ([Zhang et al. 2018](https://arxiv.org/pdf/1811.00866.pdf)) and branch and bound.
* **α-CROWN** ([Xu et al., 2021](https://arxiv.org/pdf/2011.13824.pdf)): incomplete verification with optimized CROWN bounds.
* **GCP-CROWN** ([Zhang et al. 2022](https://arxiv.org/pdf/2208.05740.pdf)): CROWN-like bound propagation with general cutting plane constraints.
* **BaB-Attack** ([Zhang et al. 2022](https://proceedings.mlr.press/v162/zhang22ae/zhang22ae.pdf)): branch and bound based adversarial attack for tackling hard instances.
* **MIP** ([Tjeng et al., 2017](https://arxiv.org/pdf/1711.07356.pdf)): mixed integer programming (slow but can be useful on small models).

We support these neural network architectures:

* Layers: fully connected (FC), convolutional (CNN), pooling (average pool and max pool), transposed convolution
* Activation functions: ReLU (incomplete/complete verification); sigmoid, tanh, arctan, sin, cos, tan (incomplete verification)
* Residual connections and other irregular graphs

We support the following verification specifications:

* Lp norm perturbation (p=1,2,infinity, as often used in robustness verification)
* VNNLIB format input (at most two layers of AND/OR clause, as used in VNN-COMP 2021 and 2022)
* Any linear specifications on neural network output (which can be added as a linear layer)
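The last item above can be made concrete with a small sketch: to verify that class `target` is always predicted, one can append a final linear layer whose rows encode the margins `y[target] - y[j]`, so the specification reduces to every output of the extended network being positive. A minimal numpy illustration (the function name `margin_spec_layer` is hypothetical, not part of the verifier's API):

```python
import numpy as np

def margin_spec_layer(num_classes, target):
    """Build the weight matrix C of an appended linear layer encoding the
    robustness specification y[target] - y[j] > 0 for all j != target.
    Verifying 'the prediction never flips' then reduces to verifying that
    every output of the extended network C @ y is positive."""
    C = np.zeros((num_classes - 1, num_classes))
    row = 0
    for j in range(num_classes):
        if j == target:
            continue
        C[row, target] = 1.0   # +1 on the target logit
        C[row, j] = -1.0       # -1 on the competing logit
        row += 1
    return C

logits = np.array([2.0, -1.0, 0.5])   # network output y for one input
C = margin_spec_layer(3, target=0)
margins = C @ logits                  # [y0 - y1, y0 - y2] = [3.0, 1.5]
```

In practice such specifications arrive via VNNLIB files or the config, but as the list above notes, they can be folded into the network as exactly this kind of extra linear layer.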

We provide many example configurations in the
[`complete_verifier/exp_configs`](/complete_verifier/exp_configs) directory to
get you started:

* MNIST: MLP and CNN models
* CIFAR-10, CIFAR-100, TinyImageNet: CNN and ResNet models
* ACASXu, NN4sys and other low input-dimension models

See the [Guide on Algorithm
Selection](/complete_verifier/docs/abcrown_usage.md#guide-on-algorithm-selection)
to find the most suitable example to get started.

Installation and Setup
----------------------

α,β-CROWN is tested on Python 3.7+ and PyTorch 1.11. It can be installed
easily into a conda environment. If you don't have conda, you can install
[miniconda](https://docs.conda.io/en/latest/miniconda.html).

```bash
# Remove the old environment, if necessary.
conda deactivate; conda env remove --name alpha-beta-crown
# Install all dependencies into the alpha-beta-crown environment.
conda env create -f complete_verifier/environment.yml --name alpha-beta-crown
# Activate the environment.
conda activate alpha-beta-crown
```

If you use the α-CROWN and/or β-CROWN verifiers (which cover most use
cases), a Gurobi license is *not needed*. If you want to use MIP based
verification algorithms (feasible only for small MLP models), you need to
install a Gurobi license with the `grbgetkey` command. If you don't have
access to a license, the above installation procedure includes a free,
restricted license by default, which is sufficient for many relatively
small NNs. If you use the GCP-CROWN verifier, an installation of the IBM CPLEX
solver is required. Instructions to install the CPLEX solver can be found
in the [VNN-COMP benchmark instructions](/complete_verifier/docs/vnn_comp.md#installation)
or the [GCP-CROWN instructions](https://github.com/tcwangshiqi-columbia/GCP-CROWN).

If you prefer to install packages manually rather than using a prepared conda
environment, you can refer to this [installation
script](/vnncomp_scripts/install_tool_general.sh).

If you want to run the α,β-CROWN verifier on the VNN-COMP 2021 and 2022 benchmarks
(e.g., to make a comparison to a new verifier), you can follow [this
guide](/complete_verifier/docs/vnn_comp.md).

Instructions
----------------------

We provide a unified front-end for the verifier, `abcrown.py`. All parameters
for the verifier are defined in a `yaml` config file. For example, to run
robustness verification on a CIFAR-10 ResNet network, you just run:

```bash
conda activate alpha-beta-crown # activate the conda environment
cd complete_verifier
python abcrown.py --config exp_configs/cifar_resnet_2b.yaml
```

You can find explanations for the most useful parameters in [this example config
file](/complete_verifier/exp_configs/cifar_resnet_2b.yaml). For detailed usage
and tutorial examples, please see the [Usage
Documentation](/complete_verifier/docs/abcrown_usage.md). We also provide a
wide range of examples in the
[`complete_verifier/exp_configs`](/complete_verifier/exp_configs) folder.


Publications
----------------------

If you use our verifier in your work, **please kindly cite our CROWN** ([Zhang
et al., 2018](https://arxiv.org/pdf/1811.00866.pdf)), **α-CROWN** ([Xu et al.,
2021](https://arxiv.org/pdf/2011.13824.pdf)), **β-CROWN** ([Wang et al.,
2021](https://arxiv.org/pdf/2103.06624.pdf)) and **GCP-CROWN** ([Zhang et al.,
2022](https://arxiv.org/pdf/2208.05740.pdf)) papers. If your work involves the
convex relaxation of NN verification, please kindly cite [Salman et al.,
2019](https://arxiv.org/pdf/1902.08722). If your work deals with
ResNet/DenseNet, LSTM (recurrent networks), Transformer or other complex
architectures, or model weight perturbations, please kindly cite [Xu et al.,
2020](https://arxiv.org/pdf/2002.12920.pdf). If you use our branch and bound
based adversarial attack (falsifier), please cite [Zhang et al.,
2022](https://proceedings.mlr.press/v162/zhang22ae/zhang22ae.pdf).

α,β-CROWN combines our existing efforts on neural network verification:

* **CROWN** ([Zhang et al. NeurIPS 2018](https://arxiv.org/pdf/1811.00866.pdf)) is a very efficient bound propagation based verification algorithm. CROWN propagates a linear inequality backwards through the network and utilizes linear bounds to relax activation functions.

* The **"convex relaxation barrier"** ([Salman et al., NeurIPS 2019](https://arxiv.org/pdf/1902.08722)) paper concludes that optimizing the ReLU relaxation allows CROWN (referred to as a "greedy" primal space solver) to achieve the same solution as linear programming (LP) based verifiers.

* **LiRPA** ([Xu et al., NeurIPS 2020](https://arxiv.org/pdf/2002.12920.pdf)) is a generalization of CROWN on general computational graphs and we also provide an efficient GPU implementation, the [auto\_LiRPA](https://github.com/KaidiXu/auto_LiRPA) library.

* **α-CROWN** (sometimes referred to as optimized CROWN or optimized LiRPA) is used in the Fast-and-Complete verifier ([Xu et al., ICLR 2021](https://arxiv.org/pdf/2011.13824.pdf)), which jointly optimizes intermediate layer bounds and final layer bounds in CROWN via variable α. α-CROWN typically has greater power than LP since LP cannot cheaply tighten intermediate layer bounds.

* **β-CROWN** ([Wang et al., NeurIPS 2021](https://arxiv.org/pdf/2103.06624.pdf)) incorporates split constraints in branch and bound (BaB) into the CROWN bound propagation procedure via an additional optimizable parameter β. The combination of efficient and GPU accelerated bound propagation with branch and bound produces a powerful and scalable neural network verifier.

* **BaB-Attack** ([Zhang et al., ICML 2022](https://proceedings.mlr.press/v162/zhang22ae/zhang22ae.pdf)) is a strong falsifier (adversarial attack) based on branch and bound, which can find adversarial examples for hard instances where gradient or input-space-search based methods cannot succeed.

* **GCP-CROWN** ([Zhang et al., NeurIPS 2022](https://arxiv.org/pdf/2208.05740.pdf)) enables the use of general cutting planes methods for neural network verification in a GPU-accelerated and very efficient bound propagation framework.
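
The core CROWN step described above, propagating a linear function backward through per-neuron ReLU relaxations and then concretizing it over the input box, can be sketched in toy form. This is an illustrative numpy sketch for a one-hidden-layer network with a scalar output, assuming interval pre-activation bounds and a simple fixed lower-slope heuristic; the real CROWN/α-CROWN implementation generalizes this to arbitrary graphs, both bound directions, and optimized slopes:

```python
import numpy as np

def crown_lower_bound(W1, b1, w2, b2, x_l, x_u):
    """Toy CROWN-style lower bound for y = w2 @ relu(W1 @ x + b1) + b2
    over the box x_l <= x <= x_u. Illustrative only."""
    Wp, Wn = np.maximum(W1, 0), np.minimum(W1, 0)
    # 1) Interval bounds on the pre-activations z = W1 @ x + b1.
    l = Wp @ x_l + Wn @ x_u + b1
    u = Wp @ x_u + Wn @ x_l + b1
    # 2) Linear relaxation of each ReLU: aL*z + bL <= relu(z) <= aU*z + bU.
    #    Stable neurons are exact; unstable ones get the CROWN upper line
    #    through (l, 0) and (u, u), and a heuristic 0/1 lower slope.
    aL = np.where(l >= 0, 1.0, np.where(u <= 0, 0.0, (u > -l).astype(float)))
    bL = np.zeros_like(l)
    denom = np.where(u - l == 0, 1.0, u - l)  # guard division for stable neurons
    aU = np.where(l >= 0, 1.0, np.where(u <= 0, 0.0, u / denom))
    bU = np.where((l < 0) & (u > 0), -aU * l, 0.0)
    # 3) Backward pass: for a lower bound on y, pick the lower relaxation
    #    where the outgoing weight is positive, the upper one where negative.
    lam = np.where(w2 >= 0, aL, aU)
    mu = np.where(w2 >= 0, bL, bU)
    v = w2 * lam                  # coefficients on z
    c = v @ W1                    # coefficients on x after substituting z
    const = v @ b1 + w2 @ mu + b2
    # 4) Concretize the linear lower bound c @ x + const over the input box.
    return np.maximum(c, 0) @ x_l + np.minimum(c, 0) @ x_u + const

# Example: a 2-2-1 network over the box [-1, 1]^2.
W1 = np.array([[1.0, -1.0], [0.5, 2.0]])
b1 = np.array([0.1, -0.2])
w2 = np.array([1.0, -1.0])
x_l, x_u = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
lb = crown_lower_bound(W1, b1, w2, 0.3, x_l, x_u)
# lb is a sound lower bound: every input in the box yields y >= lb.
```

The sketch makes the "greedy primal space solver" view above concrete: each backward substitution is a closed-form choice of relaxation side, with no LP solve, which is what makes CROWN-style propagation so GPU-friendly.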

We provide bibtex entries below:

```bibtex
@article{zhang2018efficient,
  title={Efficient Neural Network Robustness Certification with General Activation Functions},
  author={Zhang, Huan and Weng, Tsui-Wei and Chen, Pin-Yu and Hsieh, Cho-Jui and Daniel, Luca},
  journal={Advances in Neural Information Processing Systems},
  volume={31},
  pages={4939--4948},
  year={2018},
  url={https://arxiv.org/pdf/1811.00866.pdf}
}
@article{xu2020automatic,
  title={Automatic perturbation analysis for scalable certified robustness and beyond},
  author={Xu, Kaidi and Shi, Zhouxing and Zhang, Huan and Wang, Yihan and Chang, Kai-Wei and Huang, Minlie and Kailkhura, Bhavya and Lin, Xue and Hsieh, Cho-Jui},
  journal={Advances in Neural Information Processing Systems},
  volume={33},
  year={2020}
}
@article{salman2019convex,
  title={A Convex Relaxation Barrier to Tight Robustness Verification of Neural Networks},
  author={Salman, Hadi and Yang, Greg and Zhang, Huan and Hsieh, Cho-Jui and Zhang, Pengchuan},
  journal={Advances in Neural Information Processing Systems},
  volume={32},
  pages={9835--9846},
  year={2019}
}
@inproceedings{xu2021fast,
  title={{Fast and Complete}: Enabling Complete Neural Network Verification with Rapid and Massively Parallel Incomplete Verifiers},
  author={Kaidi Xu and Huan Zhang and Shiqi Wang and Yihan Wang and Suman Jana and Xue Lin and Cho-Jui Hsieh},
  booktitle={International Conference on Learning Representations},
  year={2021},
  url={https://openreview.net/forum?id=nVZtXBI6LNn}
}
@article{wang2021beta,
  title={{Beta-CROWN}: Efficient bound propagation with per-neuron split constraints for complete and incomplete neural network verification},
  author={Wang, Shiqi and Zhang, Huan and Xu, Kaidi and Lin, Xue and Jana, Suman and Hsieh, Cho-Jui and Kolter, J Zico},
  journal={Advances in Neural Information Processing Systems},
  volume={34},
  year={2021}
}
@inproceedings{zhang22babattack,
  title={A Branch and Bound Framework for Stronger Adversarial Attacks of {R}e{LU} Networks},
  author={Zhang, Huan and Wang, Shiqi and Xu, Kaidi and Wang, Yihan and Jana, Suman and Hsieh, Cho-Jui and Kolter, Zico},
  booktitle={Proceedings of the 39th International Conference on Machine Learning},
  volume={162},
  pages={26591--26604},
  year={2022}
}
@article{zhang2022general,
  title={General Cutting Planes for Bound-Propagation-Based Neural Network Verification},
  author={Zhang, Huan and Wang, Shiqi and Xu, Kaidi and Li, Linyi and Li, Bo and Jana, Suman and Hsieh, Cho-Jui and Kolter, J Zico},
  journal={Advances in Neural Information Processing Systems},
  year={2022}
}
```

Developers and Copyright
----------------------

The α,β-CROWN verifier is developed by a team from CMU, UCLA, Drexel University, Columbia University and UIUC:

Team lead:
* Huan Zhang ([email protected]), CMU

Main developers:
* Kaidi Xu ([email protected]), Drexel University
* Zhouxing Shi ([email protected]), UCLA
* Shiqi Wang ([email protected]), Columbia University

Contributors:
* Linyi Li ([email protected]), UIUC
* Jinqi (Kathryn) Chen ([email protected]), CMU
* Zhuolin Yang ([email protected]), UIUC
* Yihan Wang ([email protected]), UCLA

Advisors:
* Zico Kolter ([email protected]), CMU
* Cho-Jui Hsieh ([email protected]), UCLA
* Suman Jana ([email protected]), Columbia University
* Bo Li ([email protected]), UIUC
* Xue Lin ([email protected]), Northeastern University

Our library is released under the BSD 3-Clause license. A copy of the license is included [here](LICENSE).

2 changes: 1 addition & 1 deletion auto_LiRPA/__init__.py
@@ -5,4 +5,4 @@
from .wrapper import CrossEntropyWrapper, CrossEntropyWrapperMultiInput
from .bound_op_map import register_custom_op, unregister_custom_op

-__version__ = '0.3'
+__version__ = '0.3.1'