A Dual Approach to Scalable Verification of Deep Networks

03/17/2018 · by Krishnamurthy Dvijotham, et al.

This paper addresses the problem of formally verifying desirable properties of neural networks, i.e., obtaining provable guarantees that the outputs of the neural network will always behave in a certain way for a given class of inputs. Most previous work on this topic was limited in its applicability by the size of the network, the network architecture, and the complexity of properties to be verified. In contrast, our framework applies to a much more general class of activation functions and specifications on neural network inputs and outputs. We formulate verification as an optimization problem and solve a Lagrangian relaxation of the optimization problem to obtain an upper bound on the verification objective. Our approach is anytime, i.e., it can be stopped at any time and a valid bound on the objective can be obtained. We develop specialized verification algorithms with provable tightness guarantees under special assumptions and demonstrate the practical significance of our general verification approach on a variety of verification tasks.


1 Introduction

Deep learning has led to tremendous progress in machine learning in the last few years, achieving state-of-the-art performance on complex image classification and speech recognition tasks (krizhevsky2012imagenet). However, this progress has been tainted by disturbing revelations that state-of-the-art networks can easily be fooled by seemingly innocuous modifications to the input data that cause the network to change its prediction significantly (szegedy2013intriguing; kurakin2016adversarial). While modifications to neural network training algorithms have been proposed to mitigate these effects, a comprehensive solution has remained elusive.

Further, neural networks are gaining widespread adoption, including in domains with critical safety constraints (marston2015acas; DC; SD).

Given these factors, verification of neural networks has gained significant attention in recent research (kolter2017provable; bunel2017piecewise).

Most verification methods to date have been limited to piecewise-linear neural networks. However, practical state-of-the-art neural networks contain significant nonlinearities beyond piecewise-linear ones. In this report, we describe a general approach to verifying neural networks with arbitrary transfer functions.

2 Formulation

We will start with a layer-wise description of a neural network:

z_l = W_l x_l + b_l,    l = 0, 1, …, L − 1,    (1a)
x_{l+1} = h_l(z_l),     l = 0, 1, …, L − 1,    (1b)

where x_l is the vector of neural activations at layer l (with x_0 denoting the input), z_l is the vector of pre-nonlinearity activations, and h_l is a component-wise nonlinearity. We use the notation x_{l,i}, z_{l,i} to denote the i-th component of the vectors x_l, z_l and h_{l,i} to denote the i-th component of the function h_l, so that x_{l+1,i} = h_{l,i}(z_{l,i}).

Note that we do not necessarily need to assume that h_{l,i} is the same for each i (so we can have layers where some of the neurons have tanh transfer functions while others have ReLUs and yet others have sigmoids).
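To make the recursion concrete, here is a minimal sketch (in Python/NumPy, with hypothetical weights and a made-up mix of transfer functions) of the layer-wise description in (1a)-(1b); it only illustrates that the per-neuron nonlinearities h_{l,i} may differ within a layer.

```python
import numpy as np

def mixed_nonlinearity(z):
    """Component-wise h_l: the first two neurons use tanh, the rest ReLU
    (illustrating that h_{l,i} need not be the same for every i)."""
    out = np.maximum(z, 0.0)
    out[:2] = np.tanh(z[:2])
    return out

def forward(x0, weights, biases, nonlinearities):
    """Evaluate the network layer by layer: z_l = W_l x_l + b_l (1a),
    x_{l+1} = h_l(z_l) (1b).  Returns all post- and pre-activations."""
    xs, zs = [x0], []
    for W, b, h in zip(weights, biases, nonlinearities):
        z = W @ xs[-1] + b          # pre-nonlinearity activations z_l
        xs.append(h(z))             # post-nonlinearity activations x_{l+1}
        zs.append(z)
    return xs, zs

# Hypothetical two-layer example (L = 2), used only for illustration.
rng = np.random.default_rng(0)
W = [rng.standard_normal((4, 3)), rng.standard_normal((2, 4))]
b = [np.zeros(4), np.zeros(2)]
xs, zs = forward(np.array([0.1, -0.2, 0.3]), W, b,
                 [mixed_nonlinearity, lambda z: z])   # identity output layer
```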

Most verification problems can be posed as follows:

max_{x, z}  c^T x_L    (2a)
Subject to  x_0 ∈ S_in    (2b)
            z_l = W_l x_l + b_l,    l = 0, …, L − 1    (2c)
            x_{l+1} = h_l(z_l),    l = 0, …, L − 1    (2d)
            \underline{z}_l ≤ z_l ≤ \overline{z}_l,    l = 0, …, L − 1    (2e)
            \underline{x}_l ≤ x_l ≤ \overline{x}_l,    l = 1, …, L − 1    (2f)
            \underline{x}_L ≤ x_L ≤ \overline{x}_L    (2g)

where S_in is a set of constraints on the input (assumed to be convex) and \underline{z}_l, \overline{z}_l, \underline{x}_l, \overline{x}_l are bounds on the pre- and post-nonlinearity activations at each layer (inferred from the constraints on x_0). We assume for now that these bounds are given, but we later show how they can be inferred as well at a marginally small computational cost.

A concrete instance of a verification problem posed in this form arises when S_in = {x_0 : ‖x_0 − x_0^nom‖ ≤ ε} and c measures a deviation in the network output (for example, the difference between two output logits). This corresponds to the search for an adversarial example that causes the maximum deviation in the output of the network, subject to the constraint that the input to the network does not change from the nominal value x_0^nom by more than ε in some norm.
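As a small sketch of this instance (all names are hypothetical, and the particular c below is just one illustrative choice), the input set and objective could be set up as follows for an ℓ∞ ball of radius ε around a nominal input:

```python
import numpy as np

# Hypothetical nominal input and radius for S_in = {x0 : ||x0 - x0_nom||_inf <= eps}.
x0_nom = np.array([0.1, -0.2, 0.3])
eps = 0.05

def in_S_in(x0):
    """Membership test for the l_inf-ball input set."""
    return np.max(np.abs(x0 - x0_nom)) <= eps

# One common choice of objective: c^T x_L measures how much the logit of a
# target class j can rise above the logit of the nominal class i.
num_outputs, i, j = 2, 0, 1
c = np.zeros(num_outputs)
c[j], c[i] = 1.0, -1.0
```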

We can bound the optimal value of (2) using the dual program:

g(λ, μ) := max_{x, z}  c^T x_L + Σ_{l=0}^{L−1} μ_l^T (z_l − W_l x_l − b_l) + Σ_{l=0}^{L−1} λ_l^T (x_{l+1} − h_l(z_l))    (3a)
Subject to  x_0 ∈ S_in    (3b)
            \underline{z}_l ≤ z_l ≤ \overline{z}_l,    l = 0, …, L − 1    (3c)
            \underline{x}_l ≤ x_l ≤ \overline{x}_l,    l = 1, …, L − 1    (3d)
            \underline{x}_L ≤ x_L ≤ \overline{x}_L    (3e)

By weak duality, for any choice of λ, μ, the above optimization problem provides a valid upper bound on the optimal value of (2).

We now look at solving the above optimization problem. Since the objective and constraints are separable in the layers, the variables in each layer can be optimized independently. For x_l with l = 1, …, L − 1, we have

max_{\underline{x}_l ≤ x_l ≤ \overline{x}_l}  (λ_{l−1} − W_l^T μ_l)^T x_l − (b_l)^T μ_l,

which can be solved trivially by setting each component of x_l to its upper or lower bound, depending on the sign of the corresponding coefficient; the final-layer term max_{\underline{x}_L ≤ x_L ≤ \overline{x}_L} (c + λ_{L−1})^T x_L is handled the same way, as in the sketch below.
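A minimal sketch of this box-constrained linear subproblem (the helper name and the usage comment are illustrative):

```python
import numpy as np

def box_linear_max(coeff, lo, hi):
    """max_{lo <= x <= hi} coeff^T x, solved coordinate-wise: each
    component goes to its upper bound if its coefficient is non-negative,
    and to its lower bound otherwise."""
    x_opt = np.where(coeff >= 0.0, hi, lo)
    return coeff @ x_opt, x_opt

# For layer l = 1, ..., L-1 the coefficient is lam[l-1] - W[l].T @ mu[l],
# and the constant term -b[l] @ mu[l] is added afterwards:
# val, x_opt = box_linear_max(lam[l-1] - W[l].T @ mu[l], x_lo[l], x_hi[l])
# val -= b[l] @ mu[l]
```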

Next we have the terms involving the pre-nonlinearity activations,

f_l(λ_l, μ_l) = max_{\underline{z}_l ≤ z_l ≤ \overline{z}_l}  μ_l^T z_l − λ_l^T h_l(z_l),

where \underline{z}_l, \overline{z}_l are the bounds on z_l. Since this objective is separable, one can solve separately for each component i of z_l:

max_{\underline{z}_{l,i} ≤ z ≤ \overline{z}_{l,i}}  μ_{l,i} z − λ_{l,i} h_{l,i}(z).

This is a one-dimensional optimization problem and can be solved easily for most common transfer functions by simply looking at all the stationary points of the objective within the constraints plus the upper/lower bounds, and choosing among those the point at which the objective is largest. Most common transfer functions are convex below a point and concave above it (sigmoid and tanh both fall into this class), so there are at most two stationary points within the domain, and hence the number of possibilities that need to be considered for this optimization is at most 4.
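Below is a sketch for the tanh case (the helper name is hypothetical; sigmoid is handled analogously with its own stationary-point equation). It enumerates the at most four candidates: the two interval endpoints plus up to two stationary points, where μ_{l,i} = λ_{l,i} h'_{l,i}(z).

```python
import numpy as np

def z_subproblem_tanh(mu, lam, z_lo, z_hi):
    """max_{z_lo <= z <= z_hi} mu*z - lam*tanh(z) for a single neuron.
    Stationary points satisfy mu = lam*(1 - tanh(z)^2)."""
    candidates = [z_lo, z_hi]
    if lam != 0.0:
        r = 1.0 - mu / lam              # tanh(z)^2 at a stationary point
        if 0.0 <= r < 1.0:
            z_star = np.arctanh(np.sqrt(r))
            candidates += [z_star, -z_star]
    candidates = [z for z in candidates if z_lo <= z <= z_hi]
    values = [mu * z - lam * np.tanh(z) for z in candidates]
    best = int(np.argmax(values))
    return values[best], candidates[best]
```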

Finally, we need to solve the term involving the input layer,

max_{x_0 ∈ S_in}  (−W_0^T μ_0)^T x_0,

which can also typically be solved easily if S_in is simply a norm ball: the solution is of the form x_0 = x_0^nom + ε v, where v is chosen such that x_0 lies on the surface of the norm ball while maximizing the linear objective.
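As a sketch for the case where S_in is an ℓ∞ ball of radius ε around a nominal input x_0^nom (the helper name is illustrative), each coordinate is pushed to the surface of the ball in the direction of its coefficient:

```python
import numpy as np

def input_subproblem_linf(coeff, x0_nom, eps):
    """max over {x0 : ||x0 - x0_nom||_inf <= eps} of coeff^T x0,
    where coeff = -W_0^T mu_0 in the dual decomposition."""
    x0_opt = x0_nom + eps * np.sign(coeff)
    return coeff @ x0_opt, x0_opt
```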

Once these subproblems are solved, we can construct the dual optimization problem:

min_{λ, μ}  g(λ, μ).    (4a)

This optimization can be solved via sub-gradient descent on λ, μ. If the optimal λ, μ are such that the objective of (3) is concave, then it can be guaranteed that there is no duality gap and the dual bound exactly matches the optimal value of (2).
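To tie the pieces together, here is a self-contained sketch of the sub-gradient descent under simplifying assumptions (a single tanh hidden layer, an identity output layer, an ℓ∞ input ball, and box bounds assumed to be given); all names are illustrative. A sub-gradient of g with respect to each multiplier is the residual of the corresponding relaxed equality constraint evaluated at the inner maximizers, and the best value seen so far is a valid upper bound at every iteration (the anytime property).

```python
import numpy as np

def box_linear_max(coeff, lo, hi):
    """max_{lo <= x <= hi} coeff^T x, solved coordinate-wise."""
    x = np.where(coeff >= 0.0, hi, lo)
    return coeff @ x, x

def z_tanh_max(mu, lam, lo, hi):
    """Component-wise max of mu_i*z - lam_i*tanh(z) over [lo_i, hi_i]."""
    z_opt = np.empty_like(mu)
    for i in range(mu.size):
        cands = [lo[i], hi[i]]
        if lam[i] != 0.0:
            r = 1.0 - mu[i] / lam[i]
            if 0.0 <= r < 1.0:
                zs = np.arctanh(np.sqrt(r))
                cands += [zs, -zs]
        cands = [z for z in cands if lo[i] <= z <= hi[i]]
        vals = [mu[i] * z - lam[i] * np.tanh(z) for z in cands]
        z_opt[i] = cands[int(np.argmax(vals))]
    return float(np.sum(mu * z_opt - lam * np.tanh(z_opt))), z_opt

def dual_bound(lam0, lam1, mu0, mu1, W0, b0, W1, b1, c, x0_nom, eps, boxes):
    """Evaluate g(lambda, mu) for x2 = W1 tanh(W0 x0 + b0) + b1 (h_1 = identity)
    and return sub-gradients with respect to the multipliers."""
    z0_lo, z0_hi, x1_lo, x1_hi, z1_lo, z1_hi, x2_lo, x2_hi = boxes
    # input-layer term over the l_inf ball
    coeff0 = -W0.T @ mu0
    x0 = x0_nom + eps * np.sign(coeff0)
    val = coeff0 @ x0 - mu0 @ b0 - mu1 @ b1
    # box terms for the hidden activations x1 and the output x2
    v1, x1 = box_linear_max(lam0 - W1.T @ mu1, x1_lo, x1_hi)
    v2, x2 = box_linear_max(c + lam1, x2_lo, x2_hi)
    # pre-nonlinearity terms: tanh for z0, identity for z1
    vz0, z0 = z_tanh_max(mu0, lam0, z0_lo, z0_hi)
    vz1, z1 = box_linear_max(mu1 - lam1, z1_lo, z1_hi)
    val += v1 + v2 + vz0 + vz1
    # sub-gradients = residuals of the relaxed equality constraints
    grads = (x1 - np.tanh(z0),        # with respect to lam0
             x2 - z1,                 # with respect to lam1
             z0 - (W0 @ x0 + b0),     # with respect to mu0
             z1 - (W1 @ x1 + b1))     # with respect to mu1
    return val, grads

def minimize_dual(W0, b0, W1, b1, c, x0_nom, eps, boxes, steps=500, eta=1e-2):
    """Sub-gradient descent on (lambda, mu); every iterate yields a valid bound."""
    lam0, mu0 = np.zeros_like(b0), np.zeros_like(b0)
    lam1, mu1 = np.zeros_like(b1), np.zeros_like(b1)
    best = np.inf
    for _ in range(steps):
        val, (g_l0, g_l1, g_m0, g_m1) = dual_bound(
            lam0, lam1, mu0, mu1, W0, b0, W1, b1, c, x0_nom, eps, boxes)
        best = min(best, val)                     # anytime upper bound on (2)
        lam0 -= eta * g_l0; lam1 -= eta * g_l1
        mu0 -= eta * g_m0;  mu1 -= eta * g_m1
    return best
```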

References