Reachability Analysis of Deep Neural Networks with Provable Guarantees

05/06/2018 · by Wenjie Ruan et al., University of Liverpool

Verifying correctness of deep neural networks (DNNs) is challenging. We study a generic reachability problem for feed-forward DNNs which, for a given set of inputs to the network and a Lipschitz-continuous function over its outputs, computes the lower and upper bound on the function values. Because the network and the function are Lipschitz continuous, all values in the interval between the lower and upper bound are reachable. We show how to obtain the safety verification problem, the output range analysis problem and a robustness measure by instantiating the reachability problem. We present a novel algorithm based on adaptive nested optimisation to solve the reachability problem. The technique has been implemented and evaluated on a range of DNNs, demonstrating its efficiency, scalability and ability to handle a broader class of networks than state-of-the-art verification approaches.

1 Introduction

Concerns have been raised about the suitability of deep neural networks (DNNs), or systems with DNN components, for deployment in safety-critical applications, see e.g., [2, 3]. To ease these concerns and gain users’ trust, DNNs need to be certified similarly to systems such as airplanes and automobiles. In this paper, we propose to study a generic reachability problem which, for a given DNN, an input subspace and a function over the outputs of the network, computes the upper and lower bounds over the values of the function. The function is generic, with the only requirement that it is Lipschitz continuous. We argue that this problem is fundamental for the certification of DNNs, as it can be instantiated into several key correctness problems, including adversarial example generation [4, 5], safety verification [6, 7, 8], output range analysis [9, 10], and robustness comparison.

To certify a system, a certification approach needs to provide not only a result but also a guarantee over the result, such as error bounds. Existing approaches for analysing DNNs with a guarantee work by either reducing the problem to a constraint satisfaction problem that can be solved by MILP [9, 11, 12, 13], SAT [14] or SMT [7, 12] techniques, or applying search algorithms over discretised vector spaces [6, 15]. Even though they are able to achieve guarantees, they suffer from two major weaknesses. Firstly, their subjects of study are restricted: they can only work with layers conducting linear transformations (such as convolutional and fully-connected layers) and simple non-linear transformations (such as ReLU), and cannot work with other important layers, such as the sigmoid, max-pooling and softmax layers that are widely used in state-of-the-art networks. Secondly, the scalability of the constraint-based approaches is significantly limited by both the capability of the solvers and the size of the network, and they can only handle networks with a few hundred hidden neurons. However, state-of-the-art networks usually have millions, or even billions, of hidden neurons.

This paper proposes a novel approach to tackle the generic reachability problem, which does not suffer from the above weaknesses and provides provable guarantees in terms of the upper and lower bounds over the errors. The approach is inspired by recent advances made in the area of global optimisation [16, 17]. For the input subspace defined over a set of input dimensions, an adaptive nested optimisation algorithm is developed. The performance of our algorithm is not dependent on the size of the network and it can therefore scale to work with large networks.

Our algorithm assumes certain knowledge about the DNN. However, instead of directly translating the activation functions and their parameters (i.e., weights and biases) into linear constraints, it only needs a Lipschitz constant of the network. For this, we show that several layers that cannot be directly translated into linear constraints are nevertheless Lipschitz continuous, and we are able to compute a tight Lipschitz constant by analysing the activation functions and their parameters.
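As a point of reference, the following minimal NumPy sketch shows the coarse product-of-layers upper bound on a network's Lipschitz constant for a small fully-connected network. The layer shapes and random weights are hypothetical, and this is not the tighter estimate developed in this paper; it only illustrates why a Lipschitz constant is readily obtained from the weights and activation functions.

```python
import numpy as np

# Hypothetical fully-connected network: two sigmoid hidden layers, linear output.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(64, 784)), rng.normal(size=(64, 64)), rng.normal(size=(10, 64))]
# Lipschitz constant of the activation applied after each weight matrix:
# the sigmoid is 1/4-Lipschitz, the identity (linear output) is 1-Lipschitz.
act_lipschitz = [0.25, 0.25, 1.0]

def lipschitz_upper_bound(weights, act_lipschitz):
    """Coarse upper bound: product of spectral norms times activation constants."""
    bound = 1.0
    for W, k in zip(weights, act_lipschitz):
        bound *= np.linalg.norm(W, 2) * k   # spectral norm = largest singular value
    return bound

print(lipschitz_upper_bound(weights, act_lipschitz))
```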

We develop a software tool, DeepGO (available at https://github.com/trustAI/DeepGO), and evaluate its performance by comparing it with existing constraint-based approaches, namely SHERLOCK [10] and Reluplex [7]. We also demonstrate our tool on DNNs that are beyond the capability of existing tools.

2 Related Works

We discuss several threads of work concerning problems that can be obtained by instantiating our generic reachability problem. Their instantiations are explained in the paper. Due to space limitations, this review is by no means complete.

Safety Verification There are two ways of achieving safety verification for DNNs. The first is to reduce the problem to a constraint-solving problem. Notable works include, e.g., [18, 7]. However, they can only work with small networks with hundreds of hidden neurons. The second is to discretise the vector spaces of the input or hidden layers and then apply exhaustive search or Monte Carlo tree search algorithms on the discretised spaces. The guarantees are achieved by establishing local assumptions, such as minimality of manipulations in [6] and a minimum confidence gap for Lipschitz networks in [15].

Adversarial Example Generation Most existing works, e.g., [4, 5, 19, 20, 21], apply various heuristic algorithms, generally search algorithms based on gradient descent or evolutionary techniques. [22] construct a saliency map of the importance of the pixels based on gradient descent and then modify the pixels accordingly. In contrast with our approach, which is based on global optimisation, and with the work on safety verification, these methods may be able to find adversarial examples efficiently, but they are not able to conclude the nonexistence of adversarial examples when the algorithm fails to find one.

Output Range Analysis The safety verification approach can be adapted to work on this problem. Moreover, [9] consider determining whether an output value of a DNN is reachable from a given input subspace, and propose an MILP solution. [10] study the range of output values from a given input subspace. Their method interleaves local search (based on gradient descent) with global search (based on reduction to MILP). Both approaches can only work with small networks.

3 Lipschitz Continuity of DNNs

This section shows that feed-forward DNNs are Lipschitz continuous. Let f be a K-layer network such that, for a given input x, f(x) represents the confidence values for the classification labels. Specifically, we have f(x) = g_K(g_{K−1}(... g_1(x; W_1, b_1) ...; W_{K−1}, b_{K−1}); W_K, b_K), where W_i and b_i for i = 1, ..., K are learnable parameters and g_i is the function mapping the output of layer i−1 to the output of layer i. Without loss of generality, we normalise the input to lie in [0, 1]^n. The output is usually normalised to be in [0, 1]^m with a softmax layer.

Definition 1 (Lipschitz Continuity)

Given two metric spaces (X, d_X) and (Y, d_Y), where d_X and d_Y are the metrics on the sets X and Y respectively, a function f : X → Y is called Lipschitz continuous if there exists a real constant K ≥ 0 such that, for all x_1, x_2 ∈ X:

(1) d_Y(f(x_1), f(x_2)) ≤ K · d_X(x_1, x_2).

K is called a Lipschitz constant for the function f. The smallest such K is called the best Lipschitz constant, denoted K_best.

In [4], the authors show that deep neural networks with half-rectified layers (i.e., convolutional or fully-connected layers with ReLU activation functions), max-pooling and contrast-normalisation layers are Lipschitz continuous. They prove that an upper bound on the Lipschitz constant can be estimated via the operator norms of the learned parameters.

Next, we show that the softmax layer, and the sigmoid and hyperbolic tangent activation functions, also satisfy Lipschitz continuity. First we need the following lemma [23].

Lemma 1

Let f : X → R^m be differentiable on a convex set X ⊆ R^n. If ‖J_f(x)‖ ≤ K for all x ∈ X, where J_f denotes the Jacobian matrix of f, then f is Lipschitz continuous on X and K is a Lipschitz constant, where ‖·‖ represents a norm operator.

Based on this lemma, we have the following theorem.

Theorem 1

Convolutional or fully-connected layers with the sigmoid activation function σ(z) = 1/(1 + e^{−z}), the hyperbolic tangent activation function tanh(z), and the softmax function are Lipschitz continuous, with Lipschitz constants that follow from the bounds on their Jacobian norms established in the proof below.

Proof 1

First of all, we show that the operator norms of their Jacobian matrices are bounded.

(1) Layer with sigmoid activation σ(z) = 1/(1 + e^{−z}): since σ′(z) = σ(z)(1 − σ(z)), we have

(2) sup_z |σ′(z)| = 1/4.

(2) Layer with hyperbolic tangent activation tanh(z): since tanh′(z) = 1 − tanh²(z), we have

(3) sup_z |tanh′(z)| = 1.

(3) Layer with softmax function s_j(z) = e^{z_j} / Σ_k e^{z_k} for 1 ≤ j ≤ m (the dimensions of the input and output of softmax are the same): its Jacobian entries are

(4) ∂s_j/∂z_i = s_j(δ_{ij} − s_i), which are bounded since 0 < s_j < 1 for all j.

In each case, the Jacobian of the layer is the product of the (constant) weight matrix with the bounded activation Jacobian, so its norm is bounded. Since the softmax layer is the last layer of a deep neural network, we can estimate its supremum based on the Lipschitz constants of the previous layers and the box constraints on the DNN’s input.

The final conclusion follows by Lemma 1 and the fact that all the layer functions have bounded Jacobian norms.
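As a quick numerical sanity check of the derivative bounds above (our snippet, not part of the paper's tooling), one can evaluate the sigmoid and tanh derivatives on a dense grid:

```python
import numpy as np

z = np.linspace(-20.0, 20.0, 400001)

sigmoid = 1.0 / (1.0 + np.exp(-z))
d_sigmoid = sigmoid * (1.0 - sigmoid)   # sigma'(z) = sigma(z) * (1 - sigma(z))
d_tanh = 1.0 - np.tanh(z) ** 2          # tanh'(z) = 1 - tanh(z)^2

print(d_sigmoid.max())   # ~0.25, attained at z = 0
print(d_tanh.max())      # ~1.0,  attained at z = 0
```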

4 Problem Formulation

Let o be a Lipschitz-continuous function statistically evaluating the outputs of the network. Our problem is to find its upper and lower bounds given a set of inputs to the network. Because both the network and the function are Lipschitz continuous, all values between the upper and lower bounds have a corresponding input, i.e., are reachable.

Definition 2 (Reachability of Neural Network)

Let X′ ⊆ [0, 1]^n be an input subspace and f a network. The reachability of f over the function o under an error tolerance ε ≥ 0 is a set R(o, X′, ε) = [l, u] such that

(5) | l − min_{x ∈ X′} o(f(x)) | ≤ ε  and  | u − max_{x ∈ X′} o(f(x)) | ≤ ε.

We write u(o, X′, ε) and l(o, X′, ε) for the upper and lower bound, respectively. Then the reachability diameter is

(6) D(o, X′, ε) = u(o, X′, ε) − l(o, X′, ε).

Assuming these notations, we may write D(o, X′, ε; f) if we need to explicitly refer to the network f.

In the following, we instantiate o with a few concrete functions, and show that several key verification problems for DNNs can be reduced to our reachability problem.

Definition 3 (Output Range Analysis)

Given a class label j, we let o = Π_j be the projection such that Π_j(f(x)) = f_j(x). We write f_j(x) for the network’s confidence in classifying x as label j. Intuitively, output range analysis [10] quantifies how a certain output of a deep neural network (i.e., the classification probability of a certain label j) varies in response to a set of DNN inputs, up to an error tolerance ε. Output range analysis can be easily generalised to logit range analysis (the logit output is the output of the layer before the softmax layer; the study of logit outputs is conducted in, e.g., [22, 10]).

We show that the safety verification problem [6] can be reduced to solving the reachability problem.

Definition 4 (Safety)

A network f is safe with respect to an input x and an input subspace X′ ⊆ [0, 1]^n with x ∈ X′, written as S(f, x, X′), if

(7) ∀x′ ∈ X′ : arg max_j f_j(x′) = arg max_j f_j(x),

i.e., every input in X′ is assigned the same class as x.

We have the following reduction theorem.

Theorem 2

A network f is safe with respect to x and X′ s.t. x ∈ X′ if and only if the reachability upper bound u(o, X′, ε) is non-positive, where o is instantiated so that o(f(x′)) > 0 exactly when x′ is classified differently from x (for instance, as the maximal difference between the confidence of any other label and that of the label assigned to x). The error bound on the safety decision problem incurred by this reduction is determined by the tolerance ε.

It is not hard to see that adversarial example generation [4], which is to find an input x′ ∈ X′ such that x′ is classified differently from x, is the dual of the safety problem.

The following two problems define the robustness comparisons between the networks and/or the inputs.

Definition 5 (Robustness)

Given two homogeneous networks f_1 and f_2 (here, two networks are homogeneous if they are applied to the same classification task but may have different network architectures, i.e., numbers of layers, types of layers, etc., and/or parameters), we say that f_1 is strictly more robust than f_2 with respect to a function o, an input subspace X′ and an error bound ε, written as f_1 ⪰_{o, X′, ε} f_2, if D(o, X′, ε; f_1) < D(o, X′, ε; f_2).

Definition 6

Given two input subspaces X′_1 and X′_2 and a network f, we say that f is more robust on X′_1 than on X′_2 with respect to a statistical function o and an error bound ε, written as X′_1 ⪰_{o, f, ε} X′_2, if D(o, X′_1, ε) < D(o, X′_2, ε).

Thus, by instantiating the function o, we can quantify the output/logit range of a network, evaluate whether a network is safe, and compare the robustness of two homogeneous networks, or of two input subspaces for a given network; the sketch below illustrates these instantiations.
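The following minimal Python sketch (ours; the helper reach and all names are hypothetical) shows how the three instantiations reuse a single reachability routine:

```python
# `reach(o, X, eps)` is assumed to return the pair (l, u) of Definition 2
# for a fixed network f; everything below is a hypothetical illustration.

def output_range(reach, j, X, eps):
    # Definition 3: project the network output onto label j.
    return reach(lambda y: y[j], X, eps)

def diameter(reach, o, X, eps):
    # Reachability diameter D = u - l of Eq. (6).
    l, u = reach(o, X, eps)
    return u - l

def more_robust(reach_f1, reach_f2, o, X, eps):
    # Definition 5: f1 is strictly more robust than f2 if its diameter is smaller.
    return diameter(reach_f1, o, X, eps) < diameter(reach_f2, o, X, eps)
```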

5 Confidence Reachability with Guarantees

Section 3 shows that a trained deep neural network is Lipschitz continuous regardless of its depth, activation functions and number of neurons. Now, to solve the reachability problem, we need to find the global minimum and maximum values over a given input subspace, assuming that we have a Lipschitz constant K for the function w. In the following, we let w = o ∘ f be the concatenated function. Without loss of generality, we assume the input space X′ is box-constrained, which is clearly feasible since images are usually normalised into [0, 1]^n before being fed into a neural network.

The computation of the minimum value is reduced to solving the following optimisation problem with guaranteed convergence to the global minimum (the maximisation problem can be transformed into a minimisation problem):

(8) min_{x ∈ X′} w(x).

However, the above problem is very difficult since w is a highly non-convex function that cannot be guaranteed to reach the global minimum by regular optimisation schemes based on gradient descent. Inspired by an idea from optimisation, see e.g., [24, 25], we design another continuous function h(x; Y), parameterised by a finite set Y of evaluated points, which serves as a lower bound of the original function w. Specifically, we need

(9) ∀Y, ∀x ∈ X′ : h(x; Y) ≤ w(x).

Furthermore, for i ≥ 0, we let Y_i be a finite set containing i points from the input space X′, and let Y_i ⊆ Y_{i+1}; then we can define a function H(Y_i) = min_{x ∈ X′} h(x; Y_i) which satisfies the following relation:

(10) H(Y_{i−1}) ≤ H(Y_i) ≤ min_{x ∈ X′} w(x).

We use l_i = H(Y_i) to denote the minimum value of h(x; Y_i) for x ∈ X′. Then we have

(11) l_i ≤ min_{x ∈ X′} w(x).

Similarly, we need a sequence of upper bounds u_i = min_{y ∈ Y_i} w(y) to have

(12) l_i ≤ min_{x ∈ X′} w(x) ≤ u_i.

By Expression (12), if the gap u_i − l_i converges to zero we have the following:

(13) lim_{i → ∞} l_i = min_{x ∈ X′} w(x) = lim_{i → ∞} u_i.

Therefore, we can asymptotically approach the global minimum. Practically, we execute a finite number of iterations, using an error tolerance ε to control the termination. In the following sections, we present our approach, which constructs such a sequence of lower and upper bounds, and show that it converges with an error bound. To handle the high dimensionality of DNNs, our approach is inspired by the idea of adaptive nested optimisation in [16], with significant differences in the detailed algorithm and convergence proof.

Figure 1: A lower-bound function designed via Lipschitz constant

5.1 One-dimensional Case

We first introduce an algorithm that works over one dimension of the input and is therefore able to handle the one-dimensional case of Eqn. (8). The multi-dimensional optimisation algorithm will be discussed in Section 5.2, utilising the one-dimensional algorithm.

We define the following lower-bound function over an interval [a, b]:

(14) h(x; Y_i) = max_{y ∈ Y_i} ( w(y) − K |x − y| ),

where K is a Lipschitz constant of w; h intuitively represents the lower-bound sawtooth function shown in Figure 1. The set of evaluation points Y_i = {y_0, ..., y_i} is constructed recursively. Assume that, after the i-th iteration, we have Y_i, whose elements are in ascending order, together with the set of function evaluations, the set of per-interval lower-bound values, and the current bounds l_i and u_i. The per-interval set records the smallest value of h computed in each interval [y_{j−1}, y_j].

In the (i+1)-th iteration, we do the following sequentially:

  • Compute the new evaluation point y* as follows. Let z* be the smallest per-interval lower-bound value recorded so far and j be the index of the interval [y_{j−1}, y_j] in which it is attained. Then we let

    (15) y* = (y_{j−1} + y_j)/2 − (w(y_j) − w(y_{j−1}))/(2K),

    and have that y_{j−1} ≤ y* ≤ y_j.

  • Let Y_{i+1} = Y_i ∪ {y*}, then reorder Y_{i+1} in ascending order, and evaluate w(y*).

  • Calculate the lower-bound values of the two new sub-intervals,

    (16) z_{left} = (w(y_{j−1}) + w(y*))/2 − K (y* − y_{j−1})/2,
    (17) z_{right} = (w(y*) + w(y_j))/2 − K (y_j − y*)/2,

    and update the set of per-interval lower-bound values.

  • Calculate the new lower bound by letting l_{i+1} be the smallest per-interval lower-bound value, and update the lower-bound sequence.

  • Calculate the new upper bound by letting u_{i+1} = min{u_i, w(y*)}.

We terminate the iteration whenever u_i − l_i ≤ ε, report u_i as the (approximate) global minimum value, and return the evaluation point that attains it as the minimiser of the objective function.

Intuitively, as shown in Fig. 1, we iteratively generate lower bounds (by selecting in each iteration the lowest point of the sawtooth function in the figure) by continuously refining a piecewise-linear lower-bound function, which is guaranteed to lie below the original function due to Lipschitz continuity. The upper bound is the lowest value of the original function evaluated so far.
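To make the iteration concrete, below is a minimal Python sketch of a Shubert/Piyavskii-style one-dimensional optimiser in the spirit of this section. All names are ours, the bookkeeping is simplified (the whole sawtooth is recomputed every iteration and the Lipschitz constant is kept fixed), and this is not the paper's DeepGO implementation.

```python
import math

def lipschitz_minimise_1d(w, a, b, K, eps=1e-3, max_iter=100000):
    """Minimise a function w with Lipschitz constant K on [a, b].
    Returns (lower, upper) with upper - lower <= eps on success."""
    ys = [a, b]            # evaluation points, kept in ascending order
    ws = [w(a), w(b)]      # corresponding function values
    upper = min(ws)        # best evaluation so far (upper bound u_i)
    lower = upper - 2 * K * (b - a)   # trivially valid initial lower bound
    for _ in range(max_iter):
        # For every interval: its sawtooth minimum value and the point attaining it.
        cands = [((ws[j] + ws[j + 1]) / 2 - K * (ys[j + 1] - ys[j]) / 2,
                  (ys[j] + ys[j + 1]) / 2 - (ws[j + 1] - ws[j]) / (2 * K),
                  j)
                 for j in range(len(ys) - 1)]
        lower, y_new, j = min(cands)        # new lower bound l_i and next point y*
        if upper - lower <= eps:
            break
        ys.insert(j + 1, y_new)             # refine the chosen interval
        ws.insert(j + 1, w(y_new))
        upper = min(upper, ws[j + 1])       # u_{i+1} = min{u_i, w(y*)}
    return lower, upper

# Example: sin is 1-Lipschitz, so K = 1 is a valid Lipschitz constant on [0, 4].
print(lipschitz_minimise_1d(math.sin, 0.0, 4.0, K=1.0, eps=1e-4))
```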

5.1.1 Convergence Analysis

In the following, we show the convergence of this algorithm to the global minimum by proving the following conditions.

  • Convergence Condition 1: the lower-bound sequence {l_i} is monotonically increasing and converges.

  • Convergence Condition 2: the gap u_i − l_i between the upper and lower bounds converges to zero, so both sequences converge to the global minimum.

Proof 2 (Monotonicity of Lower/Upper Bound Sequences)

First, we prove that the lower-bound sequence is strictly monotonic, i.e., l_{i+1} > l_i. In the (i+1)-th iteration, the algorithm splits the interval [y_{j−1}, y_j] whose lower-bound value equals l_i into [y_{j−1}, y*] and [y*, y_j], with new lower-bound values z_{left} and z_{right} given by Eqs. (16) and (17), while every other interval keeps its value. To show that l_{i+1} > l_i, we need to prove z_{left} > l_i and z_{right} > l_i. By the algorithm, y* is computed from the interval [y_{j−1}, y_j], so we have

(19) y_{j−1} ≤ y* ≤ y_j.

We then have

(20) z_{left} − l_i = (w(y*) − w(y_j))/2 + K (y_j − y*)/2.

Since w(y*) lies above the sawtooth lower bound at y* and |w(y_j) − w(y*)| ≤ K (y_j − y*), by Lipschitz continuity we have z_{left} > l_i. Similarly, we can prove z_{right} > l_i. Thus l_{i+1} > l_i is guaranteed.

Second, the monotonicity of the upper bounds can be seen directly from the algorithm, since u_i is updated to min{u_i, w(y*)} in every iteration.

Proof 3 (Convergence Condition 1)

Since h(x; Y_i) ≤ w(x) for all x, we have l_i ≤ min_{x ∈ [a, b]} w(x). Based on Proof 2, we also have that the sequence {l_i} is strictly monotonically increasing. Then, since

(21) l_i ≤ min_{x ∈ [a, b]} w(x) ≤ u_i for every i,

the lower-bound sequence is strictly monotonically increasing and bounded from above by min_{x ∈ [a, b]} w(x). Thus Convergence Condition 1 holds.

Proof 4 (Convergence Condition 2)

Since l_i ≤ min_{x ∈ [a, b]} w(x) ≤ u_i, we show that Convergence Condition 2 holds by showing that u_i − l_i → 0. Since the new evaluation point always lies inside the selected interval and the selected interval is the one with the smallest lower-bound value, the same region cannot be refined indefinitely without its lower-bound value rising towards the evaluated function values, so the length of the refined intervals tends to zero. Since [a, b] is a closed interval, we can then prove that u_i − l_i → 0.

5.1.2 Dynamically Improving the Lipschitz Constant

A Lipschitz constant closer to the best Lipschitz constant can greatly improve the speed of convergence of the algorithm. We therefore design a practical approach to dynamically update the current Lipschitz constant according to the slope information obtained from the previous iteration (Eq. (22)). We emphasise that, because the dynamically updated constant remains an over-approximation of the slopes observed on the relevant intervals, this dynamic update does not compromise the convergence.
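One plausible concretisation of such an update (ours, not the paper's Eq. (22)) is a Strongin-style estimate that scales the largest observed slope by a safety factor:

```python
def update_lipschitz(ys, ws, gamma=1.5, k_min=1e-3):
    """Hypothetical dynamic estimate: gamma * (largest observed slope), floored at k_min.
    ys: evaluation points in ascending order, ws: corresponding function values."""
    slopes = [abs(ws[j + 1] - ws[j]) / (ys[j + 1] - ys[j])
              for j in range(len(ys) - 1) if ys[j + 1] > ys[j]]
    return max(gamma * max(slopes, default=k_min), k_min)
```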

5.2 Multi-dimensional Case

The basic idea is to decompose a multi-dimensional optimization problem into a sequence of nested one-dimensional subproblems. Then the minima of those one-dimensional minimization subproblems are back-propagated into the original dimension and the final global minimum is obtained.

(23) min_{x ∈ X′} w(x) = min_{x_1 ∈ [a_1, b_1]} min_{x_2 ∈ [a_2, b_2]} ⋯ min_{x_n ∈ [a_n, b_n]} w(x_1, x_2, ..., x_n).

We first introduce the definition of the k-th level subproblem.

Definition 7

The k-th level optimisation subproblem, written as φ_k(x_1, ..., x_k), is defined as follows: for k = n,

φ_n(x_1, ..., x_n) = w(x_1, ..., x_n),

and for 1 ≤ k < n,

φ_k(x_1, ..., x_k) = min_{x_{k+1} ∈ [a_{k+1}, b_{k+1}]} φ_{k+1}(x_1, ..., x_k, x_{k+1}).

Combining Expression (23) and Definition 7, we have that

min_{x ∈ X′} w(x) = min_{x_1 ∈ [a_1, b_1]} φ_1(x_1),

which is actually a one-dimensional optimisation problem and can therefore be solved by the method in Section 5.1.

However, when evaluating the objective function φ_1 at a point x_1, we need to project into the next one-dimensional subproblem

min_{x_2 ∈ [a_2, b_2]} φ_2(x_1, x_2).

We recursively perform this projection until we reach the n-th level one-dimensional subproblem

min_{x_n ∈ [a_n, b_n]} φ_n(x_1, ..., x_{n−1}, x_n).

Once solved, we back-propagate the objective function values to the first level and continue searching from that level until the error bound ε is reached.
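The sketch below (ours) illustrates this nested reduction, reusing the one-dimensional optimiser lipschitz_minimise_1d from the sketch in Section 5.1. It implements the plain nested scheme, not the adaptive variant discussed in Section 5.2.1, and the inner minima are only evaluated up to eps, which is exactly the inexactness analysed below.

```python
def nested_minimise(w, box, K, eps=1e-2):
    """Reduce min over a box [(a1,b1),...,(an,bn)] to nested 1-D subproblems.
    Returns a (lower, upper) bound pair on the global minimum of w."""
    (a, b), rest = box[0], box[1:]
    if not rest:
        # n-th level: a genuine one-dimensional problem.
        return lipschitz_minimise_1d(lambda x1: w([x1]), a, b, K, eps)

    def phi(x1):
        # Next-level subproblem with the first remaining coordinate fixed to x1.
        # Its value is only known up to eps (inexact evaluation, cf. Section 5.2.1).
        _, up = nested_minimise(lambda xs: w([x1] + xs), rest, K, eps)
        return up

    return lipschitz_minimise_1d(phi, a, b, K, eps)

# Example: w(x) = |x1 - 0.3| + |x2 - 0.7| on [0,1]^2 is 1-Lipschitz per coordinate.
print(nested_minimise(lambda xs: abs(xs[0] - 0.3) + abs(xs[1] - 0.7),
                      [(0.0, 1.0), (0.0, 1.0)], K=1.0, eps=1e-2))
# The true minimum is 0; the printed bounds should enclose it up to roughly eps.
```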

5.2.1 Convergence Analysis

We use mathematical induction to prove convergence for the multi-dimensional case.

  • Base case: for all n-th level (one-dimensional) subproblems, Convergence Conditions 1 and 2 hold.

  • Inductive step: if, for all (k+1)-th level subproblems, Convergence Conditions 1 and 2 are satisfied, then, for all k-th level subproblems, Convergence Conditions 1 and 2 hold.

The base case (i.e., the one-dimensional case) is already proved in Section 5.1. Now we prove the inductive step.

Proof 5

By the nested optimisation scheme, a k-th level subproblem is a one-dimensional problem whose objective value at a point is obtained by solving a (k+1)-th level subproblem. Since each (k+1)-th level subproblem is only solved up to an interval error ε, if φ*_{k+1} denotes its accurate global minimum, the value returned for it lies within ε of φ*_{k+1}.

So the k-dimensional problem is reduced to a one-dimensional problem in which the evaluation of the objective function is not accurate but bounded: the returned value Φ(x) satisfies |Φ(x) − φ_{k+1}(x)| ≤ ε, where φ_{k+1}(x) is the accurate function evaluation.

Assume that the minimal value obtained by our method under accurate function evaluation is y*, with corresponding lower- and upper-bound sequences {l_i} and {u_i}. For the inaccurate evaluation case, we assume the obtained minimal value is y′*, and its lower- and upper-bound sequences are, respectively, {l′_i} and {u′_i}. The termination criteria for the two cases are u_i − l_i ≤ ε and u′_i − l′_i ≤ ε, and y^{min} represents the ideal global minimum. Then we have l_i ≤ y^{min} ≤ u_i. Considering two adjacent evaluation points, due to the fact that the inaccurate evaluations differ from the accurate ones by at most ε, the sawtooth bounds built from them also differ by at most ε.

Based on the search scheme, we know that

(24) l′_i ≥ l_i − ε,

and thus the inaccurate lower bounds under-estimate the accurate ones by at most ε.

Similarly, we can get

(25) u′_i ≤ u_i + ε,

so the inaccurate upper bounds over-estimate the accurate ones by at most ε. By (24), (25) and the termination criterion u′_i − l′_i ≤ ε, the interval [l′_i − ε, u′_i + ε] contains y^{min}, i.e., the accurate global minimum is also bounded, with an error that grows by at most ε per nesting level.

The proof indicates that the overall error bound of the nested scheme only increases linearly w.r.t. the bounds in the one-dimensional case. Moreover, an adaptive approach can be applied to optimise its performance without compromising convergence. The key observation is to relax the strict subordination inherent in the nested scheme and simultaneously consider all the univariate subproblems arising in the course of multidimensional optimisation. A numerical measure is assigned to every generated subproblem that is still active. An iteration of the multidimensional optimisation then consists in choosing the subproblem with the maximal measure and carrying out a new trial within this subproblem. The measure is defined to be the maximal interval characteristic generated by the one-dimensional optimisation algorithm.

6 Proof of NP-completeness

In this section, we prove the NP-completeness of our generic reachability problem.

6.1 Upper Bound

First of all, we show that the one-dimensional optimisation case is linear with respect to the error bound ε. As shown in Figure 1, each iteration selects the interval whose lower-bound value equals the current lower bound l_i and splits it at the new evaluation point y*. Based on Equations (17) and (19), the lower-bound values of the two resulting sub-intervals exceed l_i by an amount bounded from below in terms of the gap between w(y*) and l_i. Now, since l_i ≤ min_{x ∈ [a, b]} w(x) ≤ u_i and u_i − l_i > ε before convergence, the improvement for each iteration is linear with respect to the error bound ε, which means that the optimisation procedure will converge in linear time with respect to the size of the region [a, b].

For the multi-dimensional case, we notice that, in Equation (23), to reach the global optimum, not all input dimensions need to be changed, and the ordering between dimensions matters. Therefore, we can have a non-deterministic algorithm which guesses a subset of dimensions together with their ordering; these are the dimensions that need to be changed to lead from the original input to the global optimum. This guess can be done in polynomial time.

Then, we can apply the one-dimensional optimisation algorithm backward from the last dimension to the first dimension. Because of the polynomial time convergence of the one-dimensional case, this procedure can be completed in polynomial time.

Therefore, the entire procedure can be completed in polynomial time with a nondeterministic algorithm, i.e., in NP.

6.2 Lower Bound

We give a reduction from the 3-SAT problem, which is known to be NP-complete. A 3-SAT Boolean formula φ is of the form C_1 ∧ C_2 ∧ ⋯ ∧ C_m, where each clause C_i is of the form l_{i,1} ∨ l_{i,2} ∨ l_{i,3}. Each literal l_{i,j}, for 1 ≤ i ≤ m and 1 ≤ j ≤ 3, is a variable x_k or its negation ¬x_k, such that 1 ≤ k ≤ n. The 3-SAT problem is to decide the existence of a truth-value assignment to the Boolean variables such that the formula φ evaluates to True.

6.2.1 Construction of DNN

We let x_{i,j} and s_{i,j} be the variable and the sign of the literal l_{i,j}, respectively. Given a formula φ, we construct a DNN N which implements a classification problem. The DNN has four layers L_1, L_2, L_3, L_4, within which L_2, L_3 are hidden layers, L_1 is the input layer, and L_4 is the output layer. It has n input neurons and m output neurons.

Input Layer

The input layer L_1 has n neurons, each of which represents a variable in φ.

First Hidden Layer – Fully Connected with ReLU

The hidden layer L_2 has 2n neurons, such that every two neurons correspond to a variable in φ. Given a variable x_k, we write n⁺_k and n⁻_k to denote the two neurons for x_k; Eq. (26) defines them, via ReLU over the input, so that n⁺_k encodes the value of the positive literal x_k and n⁻_k the value of the negative literal ¬x_k.

It is noted that the above functions can be implemented as a fully-connected layer, by letting the coefficients from variables other than x_k be 0.

Second Hidden Layer – Fully Connected with ReLU

The hidden layer L_3 has m neurons, each of which represents a clause. Let c_i be the neuron representing the clause C_i. Then for a clause C_i = l_{i,1} ∨ l_{i,2} ∨ l_{i,3}, Eq. (27) defines c_i as a ReLU over a weighted sum of the literal neurons of C_i, where for j = 1, 2, 3 the weight is determined by the sign s_{i,j}.

Intuitively, c_i takes either a positive value or zero; in the latter case, none of the three literals is satisfied.

Output Layer – Fully Connected Without ReLU

The output layer L_4 has m neurons, each of which represents a clause. Let d_i be the neuron representing the clause C_i. Then we have

(28) d_i = −c_i.

Intuitively, this layer simply negates all the values from the previous layer.

6.2.2 Statistical Evaluation Function

After the output, we let o be the following function:

(29) o(d_1, ..., d_m) = max_{1 ≤ i ≤ m} d_i.

That is, o takes the maximal value of all the outputs. Because c_i ≥ 0 and d_i = −c_i, we have that o(d_1, ..., d_m) ≤ 0.
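The following Python sketch gives one plausible concretisation of this construction for a tiny formula. The concrete weights are ours (the exact definitions in Eqs. (26)-(28) may differ), and the printout only illustrates the intended behaviour of the evaluation function o on assignment points.

```python
import numpy as np
from itertools import product

def relu(z):
    return np.maximum(z, 0.0)

def build_dnn(n_vars, clauses):
    """Hypothetical concretisation of the reduction.  `clauses` is a list of
    triples of signed literals, e.g. (1, -2, 3) means (x1 or not x2 or x3)."""
    # First hidden layer: two neurons per variable, p_k = ReLU(x_k), q_k = ReLU(1 - x_k).
    W1 = np.zeros((2 * n_vars, n_vars)); b1 = np.zeros(2 * n_vars)
    for k in range(n_vars):
        W1[2 * k, k] = 1.0                             # p_k (positive literal)
        W1[2 * k + 1, k] = -1.0; b1[2 * k + 1] = 1.0   # q_k (negative literal)
    # Second hidden layer: one ReLU neuron per clause, summing its literal neurons.
    W2 = np.zeros((len(clauses), 2 * n_vars))
    for i, clause in enumerate(clauses):
        for lit in clause:
            k = abs(lit) - 1
            W2[i, 2 * k if lit > 0 else 2 * k + 1] = 1.0
    # Output layer: negate each clause neuron.
    W3 = -np.eye(len(clauses))

    def network(x):
        h1 = relu(W1 @ x + b1)
        h2 = relu(W2 @ h1)
        return W3 @ h2

    return network

# Tiny example: (x1 or not x2 or x3) and (not x1 or x2 or x3).
clauses = [(1, -2, 3), (-1, 2, 3)]
net = build_dnn(3, clauses)
o = lambda y: y.max()                                  # evaluation function of Eq. (29)
for bits in product([0.0, 1.0], repeat=3):
    x = np.array(bits)
    sat = all(any((x[abs(l) - 1] > 0.5) == (l > 0) for l in c) for c in clauses)
    # o(net(x)) is strictly negative exactly for satisfying assignments, 0 otherwise.
    print(bits, o(net(x)), "satisfies" if sat else "violates")
```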

6.2.3 Reduction

First, we show that for any point x ∈ [0, 1]^n, there exists a point x′ ∈ {0, 1}^n such that w(x) < 0 if and only if w(x′) < 0. Recall that w is the concatenation of the network N with o, i.e., w = o ∘ N. For any input dimension k, we let x′_k be the element of {0, 1} determined by x_k according to the construction of the literal neurons. Therefore, by construction, the clause neurons computed from x′ have the same zero/positive pattern as those computed from x, and hence, after passing through the output layer and the function o, we have that w(x) < 0 if and only if w(x′) < 0.

Then, for every truth assignment v, we associate with it a point x_v ∈ {0, 1}^n by letting x_{v,k} = 1 if v(x_k) = True and x_{v,k} = 0 if v(x_k) = False.

Finally, we show that the formula φ is satisfiable if and only if the function w can reach a value smaller than 0. By the first part, it suffices to consider the points in {0, 1}^n associated with truth assignments.

If φ is satisfiable, then by construction, at the point x_v associated with a satisfying assignment v, we have c_i > 0 for all i in the second hidden layer. Therefore, in the output layer, we have d_i < 0 for all i. Finally, with the function o, we have w(x_v) < 0, i.e., the function reaches a value smaller than 0.

For the other direction, we prove by contradiction. Suppose w(x_v) < 0 for some assignment point x_v but φ is unsatisfiable. Then, under the assignment v, there must exist a clause C_i which is not satisfied. By construction, we have c_i = 0 for that i. This results in d_i = 0 and w(x_v) = 0, which contradicts the hypothesis that w(x_v) < 0.

Figure 2: Comparison with SHERLOCK and Reluplex
Figure 3: The four features and the architecture of DNN-1 and DNN-7
Figure 4: Left: boxplots of confidence reachability diameters for 7 DNNs, based on analyses of each DNN. Right: boxplot of confidence reachability diameters for 4 features, based on analyses of each feature. The red line represents the median value: a lower value indicates a more robust model or feature.

7 Experiments

7.1 Comparison with State-of-the-art Methods

Two methods are chosen as baseline methods in this paper:

  • Reluplex [7]: an SMT-based method for answering queries on DNNs with ReLU activations; we apply a bisection scheme on top of it to compute an interval until the error tolerance is reached (a bisection sketch is given after this list)

  • SHERLOCK [10]: a MILP-based method dedicated to output range analysis on DNNs with ReLU activations.
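The bisection scheme mentioned above can be sketched as follows (our illustration; exists_output_above stands for a hypothetical yes/no solver query, not Reluplex's actual API):

```python
def upper_bound_by_bisection(exists_output_above, lo, hi, eps=1e-3):
    """Shrink [lo, hi] around the maximal output value using a yes/no solver oracle.
    `exists_output_above(t)` must return True iff some input in the subspace
    produces an output greater than t; lo/hi must initially bracket the maximum."""
    while hi - lo > eps:
        mid = 0.5 * (lo + hi)
        if exists_output_above(mid):
            lo = mid     # the maximal output lies above mid
        else:
            hi = mid     # the maximal output is at most mid
    return lo, hi        # an eps-wide interval containing the maximal output
```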

Our software is implemented in Matlab 2018a, running on a notebook computer with an i7-7700HQ CPU and 16GB RAM. Since Reluplex and SHERLOCK (not open-sourced) are designed for different software platforms, we take their experimental results from [10], whose experimental environment is a Linux workstation with 63GB RAM and a 23-core CPU (more powerful than ours). Following the experimental setup in [10], we use their data (2-input and 1-output functions) to train six neural networks with various numbers and types of layers and neurons, and we use the same input subspace.

The comparison results are given in Fig. 2. They show that, while the performance of both Reluplex and SHERLOCK is considerably affected by the increase in the number of neurons and layers, our method is not. For the six benchmark neural networks, our average computation time represents roughly a 36-fold improvement over SHERLOCK and a nearly 100-fold improvement over Reluplex (excluding timeouts). We note that our method runs on a notebook PC, which is significantly less powerful than the 23-core CPU workstation used for SHERLOCK and Reluplex.

Figure 5: Left: an original image (logit 11.806, confidence of output being ‘0’ 99.95%), where the area marked by the dashed line is the feature. Middle: an image attaining the confidence lower bound. Right: an image attaining the confidence upper bound; the computed output range and logit reachability of the feature for the output label ‘0’ are discussed in Section 7.2
Figure 6: Ratios of safe images for 7 DNNs and 4 features
Figure 7: A detailed example comparing the safety and robustness of DNNs for image ’9’ and Feature-3: the top number in the caption of each figure is logit and the bottom one is confidence; the unsafe cases are all misclassified as ‘8’; the last bar chart shows their confidence reachability diameters.
Figure 8: A high-level comparison with state-of-the-art methods: SHERLOCK [10], Reluplex [7], Planet [26], MIP [11, 9] and BaB [12].

7.2 Safety and Robustness Verification by Reachability Analysis

We use our tool to conduct logit and output range analysis. Seven convolutional neural networks, denoted DNN-1, …, DNN-7, were trained on the MNIST dataset. Images are downscaled so that a DNN with deeper layers tends to over-fit. The networks have different layer types, including ReLU, dropout and normalisation layers, and differ in the number of layers and in testing accuracy; a fixed error tolerance ε is used in our experiments.

We randomly choose 20 images (2 images per label) and manually choose 4 features such that each feature contains 8 pixels, i.e., each analysed input subspace has 8 dimensions. Fig. 3 illustrates the four features and the architecture of the two DNNs with the shallowest and deepest layers, i.e., DNN-1 and DNN-7.

Safety Verification Fig. 5 shows an example: for DNN-1, Feature-4 is guaranteed to be safe with respect to the shown image and the input subspace spanned by the feature’s 8 pixels. Specifically, the computed reachability interval shows that the confidence of the output being ‘0’ stays above 74.36% for every input in the subspace, and hence, by Theorem 2, the safety property holds. Intuitively, no matter how we manipulate this feature, the worst case is to reduce the confidence of the output being ‘0’ from 99.95% (its original confidence) to 74.36%.

Statistical Comparison of Safety Fig. 6 compares the ratios of safe images for different DNNs and features. It shows that: i) no DNN is 100% safe on those features; DNN-6 is the safest one, while DNN-1, DNN-2 and DNN-3 are less safe, which suggests that a DNN with well-chosen layers is safer than DNNs that are very shallow or very deep; and ii) the safety performance of different DNNs is consistent for the same feature, which suggests that the feature matters: some features are more easily perturbed into adversarial examples, e.g., Feature-1 and Feature-2.

Statistical Comparison of Robustness Fig. 4 compares the robustness of networks and features with two boxplots over the reachability diameters, where the function is for a suitable