From f5e5bb6780a8e4331b2ece72eb58ab566f59a9b1 Mon Sep 17 00:00:00 2001 From: "Documenter.jl" Date: Sat, 16 Dec 2023 02:47:01 +0000 Subject: [PATCH] build based on e5e908f --- .../batch_processing/index.html | 8 +++---- .../single_image/index.html | 4 ++-- dev/index.html | 2 +- dev/net_components/core_ops/index.html | 18 +++++++-------- dev/net_components/layers/index.html | 22 +++++++++---------- dev/net_components/nets/index.html | 2 +- dev/net_components/overview/index.html | 2 +- dev/search/index.html | 2 +- dev/tutorials/index.html | 2 +- dev/utils/import_datasets/index.html | 2 +- dev/utils/import_example_nets/index.html | 2 +- dev/utils/import_weights/index.html | 4 ++-- 12 files changed, 35 insertions(+), 35 deletions(-) diff --git a/dev/finding_adversarial_examples/batch_processing/index.html b/dev/finding_adversarial_examples/batch_processing/index.html index ef0b556..067200e 100644 --- a/dev/finding_adversarial_examples/batch_processing/index.html +++ b/dev/finding_adversarial_examples/batch_processing/index.html @@ -14,7 +14,7 @@ solve_if_predicted_in_targeted, adversarial_example_objective ) -

Runs find_adversarial_example for the specified neural network nn and dataset for samples identified by the target_indices, with the target labels for each sample set to the complement of the true label.

It creates a named directory in save_path, with the name summarizing

  1. the name of the network in nn,
  2. the perturbation family pp,
  3. the norm_order

Within this directory, a summary of all the results is stored in summary.csv, and results from individual runs are stored in the subfolder run_results.

This function is designed so that it can be interrupted and restarted cleanly; it relies on the summary.csv file to determine the results of previous runs (so modifying this file manually can lead to unexpected behavior).

If the summary file already contains a result for a given target index, the solve_rerun_option determines whether we rerun find_adversarial_example for this particular index.

optimizer specifies the optimizer used to solve the MIP problem once it has been built and main_solve_options specifies the options that will be passed to the optimizer for the main solve.

Named Arguments:

source

Internal

MIPVerify.batch_find_targeted_attackMethod
batch_find_targeted_attack(
+

Runs find_adversarial_example for the specified neural network nn and dataset for samples identified by the target_indices, with the target labels for each sample set to the complement of the true label.

It creates a named directory in save_path, with the name summarizing

  1. the name of the network in nn,
  2. the perturbation family pp,
  3. the norm_order

Within this directory, a summary of all the results is stored in summary.csv, and results from individual runs are stored in the subfolder run_results.

This function is designed so that it can be interrupted and restarted cleanly; it relies on the summary.csv file to determine the results of previous runs (so modifying this file manually can lead to unexpected behavior).

If the summary file already contains a result for a given target index, the solve_rerun_option determines whether we rerun find_adversarial_example for this particular index.

optimizer specifies the optimizer used to solve the MIP problem once it has been built and main_solve_options specifies the options that will be passed to the optimizer for the main solve.

Named Arguments:

  • save_path: Directory where results will be saved. Defaults to current directory.
  • pp, norm_order, tightening_algorithm, tightening_options, solve_if_predicted_in_targeted are passed through to find_adversarial_example and have the same default values; see documentation for that function for more details.
  • solve_rerun_option::MIPVerify.SolveRerunOption: Options are never, always, resolve_ambiguous_cases, and refine_insecure_cases. See run_on_sample_for_untargeted_attack for more details.
source
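
For instance, a minimal sketch of a batch run over the first 100 MNIST test samples, assuming Gurobi is installed and licensed (the optimizer option dictionary and MIPVerify.never follow the conventions used elsewhere in these docs; the save_path value is illustrative):

using MIPVerify, Gurobi

mnist = read_datasets("MNIST")
n1 = get_example_network_params("MNIST.n1")

# Results accumulate under save_path; an interrupted run can be restarted
# and will skip indices already recorded in summary.csv.
batch_find_untargeted_attack(
    n1,
    mnist.test,
    1:100,
    Gurobi.Optimizer,
    Dict("OutputFlag" => 0),
    save_path = "./batch_results",
    norm_order = Inf,
    solve_rerun_option = MIPVerify.never,
)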

Internal

MIPVerify.batch_find_targeted_attackMethod
batch_find_targeted_attack(
     nn,
     dataset,
     target_indices,
@@ -29,15 +29,15 @@
     tightening_options,
     solve_if_predicted_in_targeted
 )
-

Runs find_adversarial_example for the specified neural network nn and dataset for samples identified by the target_indices, with each of the target labels in target_labels individually targeted.

Otherwise same parameters as batch_find_untargeted_attack.

source
MIPVerify.run_on_sample_for_targeted_attackMethod
run_on_sample_for_targeted_attack(
+

Runs find_adversarial_example for the specified neural network nn and dataset for samples identified by the target_indices, with each of the target labels in target_labels individually targeted.

Otherwise same parameters as batch_find_untargeted_attack.

source
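
By way of illustration, a sketch reusing n1 and mnist from the example above; we assume target_labels is supplied as a named argument holding the (1-based) label indexes to target individually:

batch_find_targeted_attack(
    n1,
    mnist.test,
    1:10,
    Gurobi.Optimizer,
    Dict("OutputFlag" => 0),
    target_labels = [9, 10],   # attack each sample toward digits 8 and 9 in turn
    norm_order = Inf,
)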
MIPVerify.run_on_sample_for_targeted_attackMethod
run_on_sample_for_targeted_attack(
     sample_number,
     target_label,
     summary_dt,
     solve_rerun_option
 )
-

Determines whether to run a solve on a sample depending on the solve_rerun_option by looking up information on the most recent completed solve recorded in summary_dt matching sample_number.

summary_dt is expected to be a DataFrame with columns :SampleNumber, :TargetIndexes, :SolveStatus, and :ObjectiveValue.

source
MIPVerify.run_on_sample_for_untargeted_attackMethod
run_on_sample_for_untargeted_attack(
+

Determines whether to run a solve on a sample depending on the solve_rerun_option by looking up information on the most recent completed solve recorded in summary_dt matching sample_number.

summary_dt is expected to be a DataFrame with columns :SampleNumber, :TargetIndexes, :SolveStatus, and :ObjectiveValue.

source
MIPVerify.run_on_sample_for_untargeted_attackMethod
run_on_sample_for_untargeted_attack(
     sample_number,
     summary_dt,
     solve_rerun_option
 )
-

Determines whether to run a solve on a sample depending on the solve_rerun_option by looking up information on the most recent completed solve recorded in summary_dt matching sample_number.

summary_dt is expected to be a DataFrame with columns :SampleNumber, :SolveStatus, and :ObjectiveValue.

Behavior for different choices of solve_rerun_option:

  • never: true if and only if there is no previous completed solve.
  • always: true always.
  • resolve_ambiguous_cases: true if there is no previous completed solve, or if the most recent completed solve a) did not find a counter-example BUT b) the optimization was not demonstrated to be infeasible.
  • refine_insecure_cases: true if there is no previous completed solve, or if the most recent completed solve a) did find a counter-example BUT b) we did not reach a provably optimal solution.
source
+

Determines whether to run a solve on a sample depending on the solve_rerun_option by looking up information on the most recent completed solve recorded in summary_dt matching sample_number.

summary_dt is expected to be a DataFrame with columns :SampleNumber, :SolveStatus, and :ObjectiveValue.

Behavior for different choices of solve_rerun_option:

source diff --git a/dev/finding_adversarial_examples/single_image/index.html b/dev/finding_adversarial_examples/single_image/index.html index 908fddf..c78aa1e 100644 --- a/dev/finding_adversarial_examples/single_image/index.html +++ b/dev/finding_adversarial_examples/single_image/index.html @@ -13,5 +13,5 @@ tightening_options, solve_if_predicted_in_targeted ) -

Finds the perturbed image closest to input such that the network described by nn classifies the perturbed image in one of the categories identified by the indexes in target_selection.

optimizer specifies the optimizer used to solve the MIP problem once it has been built.

The output dictionary has keys :Model, :PerturbationFamily, :TargetIndexes, :SolveStatus, :Perturbation, :PerturbedInput, :Output. See the tutorial on what individual dictionary entries correspond to.

Formal Definition: If there are a total of n categories, the (perturbed) output vector y=d[:Output]=d[:PerturbedInput] |> nn has length n. We guarantee that y[j] - y[i] ≥ 0 for some j ∈ target_selection and for all i ∉ target_selection.

Named Arguments:

source
MIPVerify.frac_correctMethod
frac_correct(nn, dataset, num_samples)
-

Returns the fraction of the first num_samples items in the provided dataset that the neural network classifies correctly. If there are fewer than num_samples items, we use all of the available samples.

Named Arguments:

  • nn::NeuralNet: The parameters of the neural network.
  • dataset::LabelledDataset:
  • num_samples::Integer: Number of samples to use.
source
+

Finds the perturbed image closest to input such that the network described by nn classifies the perturbed image in one of the categories identified by the indexes in target_selection.

optimizer specifies the optimizer used to solve the MIP problem once it has been built.

The output dictionary has keys :Model, :PerturbationFamily, :TargetIndexes, :SolveStatus, :Perturbation, :PerturbedInput, :Output. See the tutorial on what individual dictionary entries correspond to.

Formal Definition: If there are a total of n categories, the (perturbed) output vector y=d[:Output]=d[:PerturbedInput] |> nn has length n. We guarantee that y[j] - y[i] ≥ 0 for some j ∈ target_selection and for all i ∉ target_selection.

Named Arguments:

source
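
A minimal sketch, following the quickstart tutorial (Gurobi and its license are assumed; any JuMP-compatible MIP optimizer can be substituted):

using MIPVerify, Gurobi

mnist = read_datasets("MNIST")
n1 = get_example_network_params("MNIST.n1")
sample_image = MIPVerify.get_image(mnist.test.images, 1)

# Find the closest perturbed image (in L-infinity norm) classified as
# digit 9; the target index is 10 because indexes are 1-based.
d = find_adversarial_example(
    n1,
    sample_image,
    10,
    Gurobi.Optimizer,
    Dict("OutputFlag" => 0),
    norm_order = Inf,
)

d[:SolveStatus]            # e.g. OPTIMAL
d[:PerturbedInput] |> n1   # output vector; index 10 is (weakly) maximal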
MIPVerify.frac_correctMethod
frac_correct(nn, dataset, num_samples)
+

Returns the fraction of the first num_samples items in the provided dataset that the neural network classifies correctly. If there are fewer than num_samples items, we use all of the available samples.

Named Arguments:

  • nn::NeuralNet: The parameters of the neural network.
  • dataset::LabelledDataset:
  • num_samples::Integer: Number of samples to use.
source
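
For example, reusing n1 and mnist from the sketch above:

frac_correct(n1, mnist.test, 10000)   # fraction of the 10,000 test samples classified correctly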
diff --git a/dev/index.html b/dev/index.html index 4c739b7..950da4d 100644 --- a/dev/index.html +++ b/dev/index.html @@ -28,4 +28,4 @@ No variables, no constraints: Error During Test Got an exception of type ErrorException outside of a @test Invalid Gurobi license - ...

FIX: The error message indicates that you have not installed your Gurobi license. Once installed, the license is saved as a file gurobi.lic, typically in either the /home/ubuntu or /opt/gurobi directory.

Getting Started

The best way to get started is to follow our quickstart tutorial, which demonstrates how to find adversarial examples for a pre-trained example network on the MNIST dataset. Once you're done with that, you can explore our other tutorials depending on your needs.

+ ...

FIX: The error message indicates that you have not installed your Gurobi license. Once installed, the license is saved as a file gurobi.lic, typically in either the /home/ubuntu or /opt/gurobi directory.

Getting Started

The best way to get started is to follow our quickstart tutorial, which demonstrates how to find adversarial examples for a pre-trained example network on the MNIST dataset. Once you're done with that, you can explore our other tutorials depending on your needs.

diff --git a/dev/net_components/core_ops/index.html b/dev/net_components/core_ops/index.html index 997f9a9..cd13bfc 100644 --- a/dev/net_components/core_ops/index.html +++ b/dev/net_components/core_ops/index.html @@ -1,18 +1,18 @@ Core Operations · MIPVerify

Core Operations

Our ability to cast the input-output constraints of a neural net to an efficient set of linear and integer constraints boils down to the following basic operations, over which the layers provide a convenient abstraction.

Index

Internal

MIPVerify.abs_geMethod
abs_ge(x)
-

Expresses a one-sided absolute-value constraint: output is constrained to be at least as large as |x|.

Only use when you are minimizing over the output in the objective.

source
MIPVerify.is_constantMethod
is_constant(x)
-

Checks whether a JuMPLinearType is constant (and thus has no model associated with it). This can only be true if it is an affine expression with no stored variables.

source
MIPVerify.masked_reluMethod
masked_relu(x, m; nta)
+

Expresses a one-sided absolute-value constraint: output is constrained to be at least as large as |x|.

Only use when you are minimizing over the output in the objective.

source
MIPVerify.is_constantMethod
is_constant(x)
+

Checks whether a JuMPLinearType is constant (and thus has no model associated with it). This can only be true if it is an affine expression with no stored variables.

source
MIPVerify.masked_reluMethod
masked_relu(x, m; nta)
 

Expresses a masked rectified-linearity constraint, with three possibilities depending on the value of the mask. Output is constrained to be:

  1) max(x, 0) if m = 0,
  2) 0 if m < 0,
  3) x if m > 0.
source
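
The same semantics apply directly to real-valued inputs; a quick sketch, assuming the method is also defined for scalar Real arguments (as when applied elementwise to arrays):

MIPVerify.masked_relu(-2.0, 0)    # 0.0   (m = 0: ordinary relu)
MIPVerify.masked_relu(3.0, -1)    # 0.0   (m < 0: output fixed at zero)
MIPVerify.masked_relu(-2.0, 1)    # -2.0  (m > 0: input passes through unchanged)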
MIPVerify.maximumMethod
maximum(xs)
-

Expresses a maximization constraint: output is constrained to be equal to max(xs).

source
MIPVerify.maximum_geMethod
maximum_ge(xs)
-

Expresses a one-sided maximization constraint: output is constrained to be at least max(xs).

Only use when you are minimizing over the output in the objective.

NB: If all of xs are constant, we simply return the largest of them.

source
MIPVerify.maximumMethod
maximum(xs)
+

Expresses a maximization constraint: output is constrained to be equal to max(xs).

source
MIPVerify.maximum_geMethod
maximum_ge(xs)
+

Expresses a one-sided maximization constraint: output is constrained to be at least max(xs).

Only use when you are minimizing over the output in the objective.

NB: If all of xs are constant, we simply return the largest of them.

source
MIPVerify.relax_integrality_contextMethod
relax_integrality_context(
     f,
     model,
     should_relax_integrality
 )
-

Context manager for running f on model. If should_relax_integrality is true, the integrality constraints are relaxed before f is run and re-imposed after.

source
MIPVerify.reluMethod
relu(x)
+

Context manager for running f on model. If should_relax_integrality is true, the integrality constraints are relaxed before f is run and re-imposed after.

source
MIPVerify.reluMethod
relu(x)
 relu(x; nta)
-

Expresses a rectified-linearity constraint: output is constrained to be equal to max(x, 0).

source
MIPVerify.set_max_indexesMethod
set_max_indexes(model, xs, target_indexes; margin)
-

Imposes constraints ensuring that one of the elements at the target_indexes is the largest element of the array x. More specifically, we require x[j] - x[i] ≥ margin for some j ∈ target_indexes and for all i ∉ target_indexes.

source
MIPVerify.tight_boundMethod

Calculates a tight bound of type bound_type on the variable x using the specified tightening algorithm nta.

If an upper bound is proven to be below the cutoff, or a lower bound is proven to be above the cutoff, the algorithm returns early with whatever value was found.

source
MIPVerify.tight_bound_helperMethod
tight_bound_helper(m, bound_type, objective, b_0)
-

Optimizes the value of objective based on bound_type, with b_0, computed via interval arithmetic, as a backup.

  • If an optimal solution is reached, we return the objective value. We also verify that the objective found is better than the bound b_0 provided; if this is not the case, we throw an error.
  • If we reach the user-defined time limit, we compute the best objective bound found. We compare this to b_0 and return the better result.
  • For all other solve statuses, we warn the user and report b_0.
source
+

Expresses a rectified-linearity constraint: output is constrained to be equal to max(x, 0).

source
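
For intuition, a standard mixed-integer encoding of this constraint (the big-M formulation used in the MIPVerify paper, stated here for reference): given bounds $l \le x \le u$ with $l < 0 < u$, introduce a binary indicator $a$ and express $y = \max(x, 0)$ via

$y \ge x$, $y \ge 0$, $y \le x - l(1 - a)$, $y \le u \cdot a$, with $a \in \{0, 1\}$.

When $a = 1$ these constraints force $y = x$ (and hence $x \ge 0$); when $a = 0$ they force $y = 0$.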
MIPVerify.set_max_indexesMethod
set_max_indexes(model, xs, target_indexes; margin)
+

Imposes constraints ensuring that one of the elements at the target_indexes is the largest element of the array x. More specifically, we require x[j] - x[i] ≥ margin for some j ∈ target_indexes and for all i ∉ target_indexes.

source
MIPVerify.tight_boundMethod

Calculates a tight bound of type bound_type on the variable x using the specified tightening algorithm nta.

If an upper bound is proven to be below the cutoff, or a lower bound is proven to be above the cutoff, the algorithm returns early with whatever value was found.

source
MIPVerify.tight_bound_helperMethod
tight_bound_helper(m, bound_type, objective, b_0)
+

Optimizes the value of objective based on bound_type, with b_0, computed via interval arithmetic, as a backup.

  • If an optimal solution is reached, we return the objective value. We also verify that the objective found is better than the bound b_0 provided; if this is not the case, we throw an error.
  • If we reach the user-defined time limit, we compute the best objective bound found. We compare this to b_0 and return the better result.
  • For all other solve statuses, we warn the user and report b_0.
source
diff --git a/dev/net_components/layers/index.html b/dev/net_components/layers/index.html index 6c2ab8c..b4c03ec 100644 --- a/dev/net_components/layers/index.html +++ b/dev/net_components/layers/index.html @@ -1,12 +1,12 @@ -Layers · MIPVerify

Layers

Each layer in the neural net corresponds to a struct that simultaneously specifies: 1) the operation being carried out in the layer (recorded in the type of the struct) and 2) the parameters for the operation (recorded in the values of the fields of the struct).

When we pass an input array of real numbers to a layer struct, we get an output array of real numbers that is the result of the layer operating on the input.

Conversely, when we pass an input array of JuMP variables, we get an output array of JuMP variables, with the appropriate mixed-integer constraints (as determined by the layer) imposed between the input and output.

Index

Public Interface

MIPVerify.Conv2dType
struct Conv2d{T<:Union{Real, JuMP.VariableRef, JuMP.AffExpr}, U<:Union{Real, JuMP.VariableRef, JuMP.AffExpr}, V<:Integer} <: Layer

Represents 2-D convolution operation.

p(x) is shorthand for conv2d(x, p) when p is an instance of Conv2d.

Fields:

  • filter

  • bias

  • stride

  • padding

source
MIPVerify.LinearType
struct Linear{T<:Real, U<:Real} <: Layer

Represents matrix multiplication.

p(x) is shorthand for matmul(x, p) when p is an instance of Linear.

Fields:

  • matrix

  • bias

source
MIPVerify.MaskedReLUType
struct MaskedReLU{T<:Real} <: Layer

Represents a masked ReLU activation, with mask controlling how the ReLU is applied to each output.

p(x) is shorthand for masked_relu(x, p.mask) when p is an instance of MaskedReLU.

Fields:

  • mask

  • tightening_algorithm

source
MIPVerify.PoolType
struct Pool{N} <: Layer

Represents a pooling operation.

p(x) is shorthand for pool(x, p) when p is an instance of Pool.

Fields:

  • strides

  • pooling_function

source
MIPVerify.ReLUType
struct ReLU <: Layer

Represents a ReLU operation.

p(x) is shorthand for relu(x) when p is an instance of ReLU.

source

Internal

MIPVerify.conv2dMethod
conv2d(input, params)
-

Computes the result of convolving input with the filter and bias stored in params.

Mirrors tf.nn.conv2d from the tensorflow package, with strides = [1, params.stride, params.stride, 1].

Supports three types of padding:

  • 'same': Specify via SamePadding(). Padding is added so that the output has the same size as the input.
  • 'valid': Specify via ValidPadding(). No padding is added.
  • 'fixed': Specify via:
    • A single integer, interpreted as padding for both axes
    • A tuple of two integers, interpreted as (ypadding, xpadding)
    • A tuple of four integers, interpreted as (top, bottom, left, right)

Throws

  • AssertionError if input and filter are not compatible.
source
MIPVerify.matmulMethod
matmul(x, params)
-

Computes the result of pre-multiplying x by the transpose of params.matrix and adding params.bias.

source
MIPVerify.matmulMethod
matmul(x, params)
-

Computes the result of pre-multiplying x by the transpose of params.matrix and adding params.bias. We write the computation out by hand when working with JuMPLinearType so that we are able to simplify the output as the computation is carried out.

source
MIPVerify.getoutputsizeMethod
getoutputsize(input_array, strides)
-

For pooling operations on an array, returns the expected size of the output array.

source
MIPVerify.getpoolviewMethod
getpoolview(input_array, strides, output_index)
-

For pooling operations on an array, returns a view of the parent array corresponding to the output_index in the output array.

source
MIPVerify.getsliceindexMethod
getsliceindex(input_array_size, stride, output_index)
-

For pooling operations on an array where a given element in the output array corresponds to equal-sized blocks in the input array, returns (for a given dimension) the index range in the input array corresponding to a particular index output_index in the output array.

Returns an empty array if the output_index does not correspond to any input indices.

Arguments

  • stride::Integer: the size of the operating blocks along the active dimension.
source
MIPVerify.poolMethod
pool(input, params)
-

Computes the result of applying the pooling function params.pooling_function to non-overlapping cells of input with sizes specified in params.strides.

source
MIPVerify.poolmapMethod
poolmap(f, input_array, strides)
-

Returns output from applying f to subarrays of input_array, with the windows determined by the strides.

source
+Layers · MIPVerify

Layers

Each layer in the neural net corresponds to a struct that simultaneously specifies: 1) the operation being carried out in the layer (recorded in the type of the struct) and 2) the parameters for the operation (recorded in the values of the fields of the struct).

When we pass an input array of real numbers to a layer struct, we get an output array of real numbers that is the result of the layer operating on the input.

Conversely, when we pass an input array of JuMP variables, we get an output array of JuMP variables, with the appropriate mixed-integer constraints (as determined by the layer) imposed between the input and output.
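
A short sketch of this duality on real-valued input (the parameter values here are arbitrary placeholders):

using MIPVerify

layer = Linear(rand(784, 40), zeros(40))   # struct records the operation and its parameters
x = rand(784)
y = layer(x)   # 40-element Vector{Float64}: transpose(layer.matrix) * x + layer.bias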

Index

Public Interface

MIPVerify.Conv2dType
struct Conv2d{T<:Union{Real, JuMP.VariableRef, JuMP.AffExpr}, U<:Union{Real, JuMP.VariableRef, JuMP.AffExpr}, V<:Integer} <: Layer

Represents 2-D convolution operation.

p(x) is shorthand for conv2d(x, p) when p is an instance of Conv2d.

Fields:

  • filter

  • bias

  • stride

  • padding

source
MIPVerify.LinearType
struct Linear{T<:Real, U<:Real} <: Layer

Represents matrix multiplication.

p(x) is shorthand for matmul(x, p) when p is an instance of Linear.

Fields:

  • matrix

  • bias

source
MIPVerify.MaskedReLUType
struct MaskedReLU{T<:Real} <: Layer

Represents a masked ReLU activation, with mask controlling how the ReLU is applied to each output.

p(x) is shorthand for masked_relu(x, p.mask) when p is an instance of MaskedReLU.

Fields:

  • mask

  • tightening_algorithm

source
MIPVerify.PoolType
struct Pool{N} <: Layer

Represents a pooling operation.

p(x) is shorthand for pool(x, p) when p is an instance of Pool.

Fields:

  • strides

  • pooling_function

source
MIPVerify.ReLUType
struct ReLU <: Layer

Represents a ReLU operation.

p(x) is shorthand for relu(x) when p is an instance of ReLU.

source

Internal

MIPVerify.conv2dMethod
conv2d(input, params)
+

Computes the result of convolving input with the filter and bias stored in params.

Mirrors tf.nn.conv2d from the tensorflow package, with strides = [1, params.stride, params.stride, 1].

Supports three types of padding:

  • 'same': Specify via SamePadding(). Padding is added so that the output has the same size as the input.
  • 'valid': Specify via ValidPadding(). No padding is added.
  • 'fixed': Specify via:
    • A single integer, interpreted as padding for both axes
    • A tuple of two integers, interpreted as (ypadding, xpadding)
    • A tuple of four integers, interpreted as (top, bottom, left, right)

Throws

  • AssertionError if input and filter are not compatible.
source
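
A sketch of constructing Conv2d layers with each padding style; we assume a four-argument constructor (filter, bias, stride, padding) matching the fields listed above:

using MIPVerify

filter = rand(5, 5, 1, 16)   # (filter_height, filter_width, in_channels, out_channels)
bias = zeros(16)

c_same  = Conv2d(filter, bias, 1, SamePadding())    # output size matches input
c_valid = Conv2d(filter, bias, 1, ValidPadding())   # no padding added
c_fixed = Conv2d(filter, bias, 1, (2, 2))           # fixed (y, x) padding of 2 on each axis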
MIPVerify.matmulMethod
matmul(x, params)
+

Computes the result of pre-multiplying x by the transpose of params.matrix and adding params.bias.

source
MIPVerify.matmulMethod
matmul(x, params)
+

Computes the result of pre-multiplying x by the transpose of params.matrix and adding params.bias. We write the computation out by hand when working with JuMPLinearType so that we are able to simplify the output as the computation is carried out.

source
MIPVerify.getoutputsizeMethod
getoutputsize(input_array, strides)
+

For pooling operations on an array, returns the expected size of the output array.

source
MIPVerify.getpoolviewMethod
getpoolview(input_array, strides, output_index)
+

For pooling operations on an array, returns a view of the parent array corresponding to the output_index in the output array.

source
MIPVerify.getsliceindexMethod
getsliceindex(input_array_size, stride, output_index)
+

For pooling operations on an array where a given element in the output array corresponds to equal-sized blocks in the input array, returns (for a given dimension) the index range in the input array corresponding to a particular index output_index in the output array.

Returns an empty array if the output_index does not correspond to any input indices.

Arguments

  • stride::Integer: the size of the operating blocks along the active dimension.
source
MIPVerify.poolMethod
pool(input, params)
+

Computes the result of applying the pooling function params.pooling_function to non-overlapping cells of input with sizes specified in params.strides.

source
MIPVerify.poolmapMethod
poolmap(f, input_array, strides)
+

Returns output from applying f to subarrays of input_array, with the windows determined by the strides.

source
diff --git a/dev/net_components/nets/index.html b/dev/net_components/nets/index.html index d8d73c5..fad7508 100644 --- a/dev/net_components/nets/index.html +++ b/dev/net_components/nets/index.html @@ -1,2 +1,2 @@ -Networks · MIPVerify

Networks

Each network corresponds to an array of layers associated with a unique string identifier. The string identifier of the network is used to store cached models, so it's important to ensure that you don't re-use names!

Public Interface

MIPVerify.SequentialType
struct Sequential <: NeuralNet

Represents a sequential (feed-forward) neural net, with layers ordered from input to output.

Fields:

  • layers

  • UUID

source
MIPVerify.SkipSequentialType
struct SkipSequential <: NeuralNet

Represents a feed-forward neural net that may include skip connections, with layers ordered from input to output.

Fields:

  • layers

  • UUID

source

Internal

+Networks · MIPVerify

Networks

Each network corresponds to an array of layers associated with a unique string identifier. The string identifier of the network is used to store cached models, so it's important to ensure that you don't re-use names!

Public Interface

MIPVerify.SequentialType
struct Sequential <: NeuralNet

Represents a sequential (feed-forward) neural net, with layers ordered from input to output.

Fields:

  • layers

  • UUID

source
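
For instance, a sketch of a small MNIST-sized network (Flatten converts the 4-D image input to a vector; the weights here are random placeholders):

using MIPVerify

nn = Sequential(
    [
        Flatten(4),
        Linear(rand(784, 40), zeros(40)),
        ReLU(),
        Linear(rand(40, 10), zeros(10)),
    ],
    "example_net.v1",   # unique identifier; re-used names clash with cached models
)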
MIPVerify.SkipSequentialType
struct SkipSequential <: NeuralNet

Represents a feed-forward neural net that may include skip connections, with layers ordered from input to output.

Fields:

  • layers

  • UUID

source

Internal

diff --git a/dev/net_components/overview/index.html b/dev/net_components/overview/index.html index 14ba382..7d72a47 100644 --- a/dev/net_components/overview/index.html +++ b/dev/net_components/overview/index.html @@ -1,2 +1,2 @@ -Overview · MIPVerify

Overview

A neural net consists of multiple layers, each of which (potentially) operates on input differently. We represent these objects with NeuralNet and Layer.

Index

Public Interface

MIPVerify.chainMethod

An array of Layers is interpreted as those layers being applied to the input sequentially, starting from the leftmost layer. (In functional programming terms, this can be thought of as a fold.)

source
MIPVerify.LayerType
abstract type Layer

Supertype for all types storing the parameters of each layer. Inherit from this to specify your own custom type of layer. Each implementation is expected to:

  1. Implement a callable specifying the output when any input of type JuMPReal is provided.
source
MIPVerify.NeuralNetType
abstract type NeuralNet

Supertype for all types storing the parameters of a neural net. Inherit from this to specify your own custom architecture. Each implementation is expected to:

  1. Implement a callable specifying the output when any input of type JuMPReal is provided
  2. Have a UUID field for the name of the neural network.
source
+Overview · MIPVerify

Overview

A neural net consists of multiple layers, each of which (potentially) operates on input differently. We represent these objects with NeuralNet and Layer.

Index

Public Interface

MIPVerify.chainMethod

An array of Layers is interpreted as those layers being applied to the input sequentially, starting from the leftmost layer. (In functional programming terms, this can be thought of as a fold.)

source
MIPVerify.LayerType
abstract type Layer

Supertype for all types storing the parameters of each layer. Inherit from this to specify your own custom type of layer. Each implementation is expected to:

  1. Implement a callable specifying the output when any input of type JuMPReal is provided.
source
MIPVerify.NeuralNetType
abstract type NeuralNet

Supertype for all types storing the parameters of a neural net. Inherit from this to specify your own custom architecture. Each implementation is expected to:

  1. Implement a callable specifying the output when any input of type JuMPReal is provided
  2. Have a UUID field for the name of the neural network.
source
diff --git a/dev/search/index.html b/dev/search/index.html index 4da66c1..cfc855f 100644 --- a/dev/search/index.html +++ b/dev/search/index.html @@ -1,2 +1,2 @@ -Search · MIPVerify

Loading search...

    +Search · MIPVerify

    Loading search...

      diff --git a/dev/tutorials/index.html b/dev/tutorials/index.html index f44724b..00887b0 100644 --- a/dev/tutorials/index.html +++ b/dev/tutorials/index.html @@ -1,2 +1,2 @@ -Tutorials · MIPVerify

      Tutorials

      We suggest getting started with the tutorials.

      Quickstart

      A basic demonstration on how to find adversarial examples for a pre-trained example network on the MNIST dataset.

      Importing your own neural net

      Explains how to import your own network for verification.

      Finding adversarial examples, in depth

      Discusses the various parameters you can select for find_adversarial_example. We explain how to

      • Better specify targeted labels for the perturbed image (including multiple targeted labels)
      • Have more precise control over the activations in the output layer
      • Restrict the family of perturbations (for example to the blurring perturbations discussed in our paper)
      • Select whether you want to minimize the $L_1$, $L_2$ or $L_\infty$ norm of the perturbation.
      • Modify the amount of time dedicated to building the model (by selecting the tightening_algorithm, and/or passing in custom tightening_options).

      For Gurobi, we show how to specify optimizer settings to:

      • Mute output
      • Terminate early if:
        • A time limit is reached
        • Lower bounds on robustness are proved (that is, we prove that no adversarial example can exist closer than some threshold)
        • An adversarial example is found that is closer to the input than expected
        • The gap between the upper and lower objective bounds falls below a selected threshold

      Interpreting the output of find_adversarial_example

      Walks you through the output dictionary produced by a call to find_adversarial_example.

      Managing log output

      Explains how to get more granular log settings and to write log output to file.

      +Tutorials · MIPVerify

      Tutorials

      We suggest getting started with the tutorials.

      Quickstart

      A basic demonstration on how to find adversarial examples for a pre-trained example network on the MNIST dataset.

      Importing your own neural net

      Explains how to import your own network for verification.

      Finding adversarial examples, in depth

      Discusses the various parameters you can select for find_adversarial_example. We explain how to

      • Better specify targeted labels for the perturbed image (including multiple targeted labels)
      • Have more precise control over the activations in the output layer
      • Restrict the family of perturbations (for example to the blurring perturbations discussed in our paper)
      • Select whether you want to minimize the $L_1$, $L_2$ or $L_\infty$ norm of the perturbation.
      • Modify the amount of time dedicated to building the model (by selecting the tightening_algorithm, and/or passing in custom tightening_options).

For Gurobi, we show how to specify optimizer settings (a sample option dictionary follows this list) to:

      • Mute output
      • Terminate early if:
        • A time limit is reached
        • Lower bounds on robustness are proved (that is, we prove that no adversarial example can exist closer than some threshold)
        • An adversarial example is found that is closer to the input than expected
        • The gap between the upper and lower objective bounds falls below a selected threshold
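
A sketch of a Gurobi option dictionary covering these settings (the parameter names are Gurobi's; the threshold values are illustrative); pass it as the main_solve_options argument to find_adversarial_example:

Dict(
    "OutputFlag" => 0,       # mute output
    "TimeLimit" => 300,      # terminate after 300 seconds
    "BestObjStop" => 0.05,   # stop once an adversarial example closer than 0.05 is found
    "BestBdStop" => 0.1,     # stop once we prove no adversarial example exists within 0.1
    "MIPGap" => 0.1,         # stop when the relative bound gap falls below 10%
)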

      Interpreting the output of find_adversarial_example

      Walks you through the output dictionary produced by a call to find_adversarial_example.

      Managing log output

      Explains how to get more granular log settings and to write log output to file.

      diff --git a/dev/utils/import_datasets/index.html b/dev/utils/import_datasets/index.html index a4c3fdf..d813c86 100644 --- a/dev/utils/import_datasets/index.html +++ b/dev/utils/import_datasets/index.html @@ -1,3 +1,3 @@ Datasets · MIPVerify

      Datasets

For your convenience, the MNIST and CIFAR10 datasets are available as part of our package.

      Index

      Public Interface

      MIPVerify.read_datasetsMethod
      read_datasets(name)
      -

      Makes popular machine learning datasets available as a NamedTrainTestDataset.

      Arguments

      source

      Internal

      MIPVerify.LabelledImageDatasetType
      struct LabelledImageDataset{T<:Real, U<:Integer} <: MIPVerify.LabelledDataset

      Dataset of images stored as a 4-dimensional array of size (num_samples, image_height, image_width, num_channels), with accompanying labels (sorted in the same order) of size num_samples.

      source
      MIPVerify.NamedTrainTestDatasetType
      struct NamedTrainTestDataset{T<:MIPVerify.Dataset, U<:MIPVerify.Dataset} <: MIPVerify.Dataset

      Named dataset containing a training set and a test set which are expected to contain the same kind of data.

      • name: Name of dataset.
      • train: Training set.
      • test: Test set.
      source
      +

      Makes popular machine learning datasets available as a NamedTrainTestDataset.

      Arguments

      source
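
For example:

mnist = read_datasets("MNIST")
size(mnist.test.images)   # (10000, 28, 28, 1): (num_samples, height, width, channels)
mnist.test.labels[1:5]    # labels, stored in the same order as the images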

      Internal

      MIPVerify.LabelledImageDatasetType
      struct LabelledImageDataset{T<:Real, U<:Integer} <: MIPVerify.LabelledDataset

      Dataset of images stored as a 4-dimensional array of size (num_samples, image_height, image_width, num_channels), with accompanying labels (sorted in the same order) of size num_samples.

      source
      MIPVerify.NamedTrainTestDatasetType
      struct NamedTrainTestDataset{T<:MIPVerify.Dataset, U<:MIPVerify.Dataset} <: MIPVerify.Dataset

      Named dataset containing a training set and a test set which are expected to contain the same kind of data.

      • name: Name of dataset.
      • train: Training set.
      • test: Test set.
      source
      diff --git a/dev/utils/import_example_nets/index.html b/dev/utils/import_example_nets/index.html index fde10fe..12fa182 100644 --- a/dev/utils/import_example_nets/index.html +++ b/dev/utils/import_example_nets/index.html @@ -1,3 +1,3 @@ Example Neural Networks · MIPVerify

      Example Neural Networks

      get_example_network_params imports the weights of networks verified in our paper, as well as other networks of interest, as NeuralNets that can immediately be verified via our tools.

      Index

      Public Interface

      MIPVerify.get_example_network_paramsMethod
      get_example_network_params(name)
      -

      Makes named example neural networks available as a NeuralNet object.

      Arguments

      • name::String: Name of example neural network. Options:
        • 'MNIST.n1':
          • Architecture: Two fully connected layers with 40 and 20 units.
          • Training: Trained regularly with no attempt to increase robustness.
        • 'MNIST.WK17a_linf0.1_authors'.
        • 'MNIST.RSL18a_linf0.1_authors'.
          • Architecture: One fully connected layer with 500 units.
    • Training: Network trained to be robust to attacks with $l_\infty$ norm at most 0.1 via the method in Certified Defenses against Adversarial Examples. This is the MNIST network for which results are reported in that paper.
      source

      Internal

      +

      Makes named example neural networks available as a NeuralNet object.

      Arguments

      source
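
For example:

n1 = get_example_network_params("MNIST.n1")   # two fully connected layers, 40 and 20 units
frac_correct(n1, read_datasets("MNIST").test, 10000)   # check its test-set accuracy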

      Internal

      diff --git a/dev/utils/import_weights/index.html b/dev/utils/import_weights/index.html index 6cd1d90..9351b8f 100644 --- a/dev/utils/import_weights/index.html +++ b/dev/utils/import_weights/index.html @@ -8,11 +8,11 @@ expected_stride, padding ) -

      Helper function to import the parameters for a convolution layer from param_dict as a Conv2d object.

By default, the expected keys are 'layer_name/weight' and 'layer_name/bias'; you can customize these by passing the named arguments matrix_name and bias_name respectively, in which case the expected keys become 'layer_name/matrix_name' and 'layer_name/bias_name'.

      Arguments

      source
      MIPVerify.get_matrix_paramsMethod
      get_matrix_params(
      +

      Helper function to import the parameters for a convolution layer from param_dict as a Conv2d object.

By default, the expected keys are 'layer_name/weight' and 'layer_name/bias'; you can customize these by passing the named arguments matrix_name and bias_name respectively, in which case the expected keys become 'layer_name/matrix_name' and 'layer_name/bias_name'.

      Arguments

      • param_dict::Dict{String}: Dictionary mapping parameter names to array of weights / biases.
      • layer_name::String: Identifies parameter in dictionary.
      • expected_size::NTuple{4, Int}: Tuple of length 4 corresponding to the expected size of the weights of the layer.
      source
      MIPVerify.get_matrix_paramsMethod
      get_matrix_params(
           param_dict,
           layer_name,
           expected_size;
           matrix_name,
           bias_name
       )
      -

      Helper function to import the parameters for a layer carrying out matrix multiplication (e.g. fully connected layer / softmax layer) from param_dict as a Linear object.

By default, the expected keys are 'layer_name/weight' and 'layer_name/bias'; you can customize these by passing the named arguments matrix_name and bias_name respectively, in which case the expected keys become 'layer_name/matrix_name' and 'layer_name/bias_name'.

      Arguments

      • param_dict::Dict{String}: Dictionary mapping parameter names to array of weights / biases.
      • layer_name::String: Identifies parameter in dictionary.
      • expected_size::NTuple{2, Int}: Tuple of length 2 corresponding to the expected size of the weights of the layer.
      source

      Internal

      +

      Helper function to import the parameters for a layer carrying out matrix multiplication (e.g. fully connected layer / softmax layer) from param_dict as a Linear object.

By default, the expected keys are 'layer_name/weight' and 'layer_name/bias'; you can customize these by passing the named arguments matrix_name and bias_name respectively, in which case the expected keys become 'layer_name/matrix_name' and 'layer_name/bias_name'.

      Arguments

      source
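
A sketch of the full import flow; the .mat file name and its keys ('fc1/weight', 'fc1/bias', ...) are hypothetical:

using MAT, MIPVerify

param_dict = matread("example_weights.mat")   # hypothetical weights file
fc1   = get_matrix_params(param_dict, "fc1", (784, 40))
conv1 = get_conv_params(param_dict, "conv1", (5, 5, 1, 16))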

      Internal