Verification of Binarized Neural Networks

10/09/2017 · Chih-Hong Cheng et al. · fortiss

We study the problem of formal verification of Binarized Neural Networks (BNN), which have recently been proposed as a power-efficient alternative to more traditional learning networks. More precisely, given a trained BNN and a relation between possible inputs and outputs of this BNN, we develop verification procedures for establishing that the BNN indeed meets this specification for all possible inputs. For solving the verification problem of BNNs we build on well-known methods for hardware verification. The BNN verification problem is first encoded as a combinational miter. In a second step this miter is then transformed into a corresponding propositional satisfiability (SAT) problem. The main contributions of this paper are a number of essential optimizations for making this approach to BNN verification scalable. First, we provide a transformation on fully connected BNNs for reducing the order of the number of bitwise operations in each layer of the BNN from quadratic to linear. Second, we identify redundant computations in a BNN based on optimal factoring techniques, and we provide transformations on BNNs for avoiding these repeated computations. We prove that the problem of optimal factoring is NP-hard, and we design efficient search procedures for generating approximate solutions of the optimal factoring problem. Third, we design a compositional verification procedure for analyzing each layer of a BNN separately, and for iteratively combining and refining local verification results. We experimentally demonstrate the scalability of our verification techniques to moderately-sized BNNs for embedded applications with thousands of neurons and inputs.


1 Introduction

Artificial neural networks have become essential building blocks in realizing many automated and even autonomous systems. They have successfully been deployed, for example, for perception and scene understanding [17, 26, 21], for control and decision making [7, 14, 29, 19], and also for end-to-end solutions of autonomous driving scenarios [5]. Implementations of artificial neural networks, however, need to be made much more power-efficient in order to deploy them on typical embedded devices with their characteristically limited resources and power constraints. Moreover, the use of neural networks in safety-critical systems poses severe verification and certification challenges [3].

Binarized Neural Networks (BNN) have recently been proposed [16, 9] as a potentially much more power-efficient alternative to more traditional feed-forward artificial neural networks. Their main characteristics are that trained weights, inputs, intermediate signals and outputs, and also activation constraints are binary-valued. Consequently, forward propagation only relies on bit-level arithmetic. Since BNNs have also demonstrated good performance on standard datasets in image recognition such as MNIST, CIFAR-10 and SVHN [9], they are an attractive and potentially power-efficient alternative to current floating-point based implementations of neural networks for embedded applications.

In this paper we study the verification problem for BNNs. Given a trained BNN and a specification of its intended input-output behavior, we develop verification procedures for establishing that the given BNN indeed meets its intended specification for all possible inputs. Notice that naively solving verification problems for BNNs with, say, n inputs requires investigating all 2^n different input configurations.

For solving the verification problem of BNNs we build on well-known methods and tools from the hardware verification domain. We first transform the BNN and its specification into a combinational miter [6], which is then transformed into a corresponding propositional satisfiability (SAT) problem. In this process we rely heavily on logic synthesis tools such as ABC [6] from the hardware verification domain. Using such a direct neuron-to-circuit encoding, however, we were not able to verify BNNs with thousands of inputs and hidden nodes, as encountered in some of our embedded systems case studies. The main challenge therefore is to make the basic verification procedure scale to BNNs as used on current embedded devices.

It turns out that one critical ingredient for efficient BNN verification is to factor out computations shared among neurons in the same layer, which is possible because the weights are binary. Such a technique is not applicable in recent work on the verification of floating-point neural networks [25, 15, 8, 10, 20]. Our hardness results for finding optimal factorings, including the hardness of approximation, lead to the design of polynomial-time search heuristics for generating factorings. These factorings substantially increase the scalability of formal verification via SAT solving.

The paper is structured as follows. Section 2 defines basic notions and concepts underlying BNNs. Section 3 presents our verification workflow including the factoring of counting units (Section 3.2). We summarize experimental results with our verification procedure in Section 4, compare our results with related work from the literature in Section 5, and we close with some final remarks and an outlook in Section 6. Proofs of theorems are listed in the appendix.

index j            | 0 (bias node) | 1  | 2  | 3  | 4
input x_j          | +1 (constant) | +1 | -1 | +1 | +1
weight w_j         | -1 (bias)     | +1 | -1 | -1 | +1
product w_j * x_j  | -1            | +1 | +1 | -1 | +1
output +1, as the weighted sum (+1) is >= 0

index j            | 0 (bias node) | 1  | 2  | 3  | 4
input x_j          | 1             | 1  | 0  | 1  | 1
weight w_j         | 0 (bias)      | 1  | 0  | 0  | 1
XNOR(w_j, x_j)     | 0             | 1  | 1  | 0  | 1
# of 1's is 3
output 1, as 3 >= ceil(5/2) = 3

Table 1: An example of computing the output of a BNN neuron, using the bipolar domain (top) and 0/1 Boolean variables (bottom).
Figure 1: Computation inside a neuron of a BNN, in the bipolar domain.

2 Preliminaries

Let B = {+1, -1} be the set of bipolar binaries, where +1 is interpreted as "true" and -1 as "false." A Binarized Neural Network (BNN) [16, 9] consists of a sequence of layers labeled 0, 1, ..., L, where 0 is the index of the input layer, L is the index of the output layer, and all other layers are so-called hidden layers. Superscripts (l) are used to index layer-specific variables. Elements of both the input and output vectors of a BNN are of the bipolar domain B.

Layers l >= 1 are comprised of nodes n_j^(l) (so-called neurons), for j = 0, 1, ..., d^(l), where d^(l) is the dimension of layer l. By convention, n_0^(l) is a bias node and has the constant bipolar output +1. Nodes n_i^(l-1) of layer l-1 can be connected with nodes n_j^(l) in layer l by a directed edge of weight w_ij^(l). A layer is fully connected if every node (apart from the bias node) in the layer is connected to all neurons in the previous layer. Let w_j^(l) denote the array of all weights associated with neuron n_j^(l). Notice that we consider all weights in a network to have fixed bipolar values.

Given an input to the network, computations are applied successively from the neurons in layer 1 to layer L for generating the outputs. Fig. 1 illustrates the computation inside a neuron in the bipolar domain. Overall, the activation function is applied to the intermediately computed weighted sum: it outputs +1 if the weighted sum is greater than or equal to 0; otherwise, it outputs -1. For the output layer, the activation function is omitted. For l = 0, ..., L, let x_j^(l) denote the output value of node n_j^(l), and let x^(l) denote the array of all outputs from layer l, including the constant bias node; x^(0) refers to the input layer.
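To make this forward computation concrete, the following Python sketch propagates a bipolar input through a small fully connected BNN; the layer sizes, weight values, and function names are illustrative assumptions and are not taken from the paper.

import numpy as np

def bnn_forward(layers, x):
    """Forward pass of a fully connected BNN in the bipolar domain {-1, +1}.
    `layers` is a list of weight matrices; row j of a matrix holds the weights
    feeding neuron j, and column 0 is reserved for the constant +1 bias node."""
    for k, W in enumerate(layers):
        x = np.concatenate(([+1], x))        # prepend the bias node output
        s = W @ x                            # weighted sums of all neurons in the layer
        if k < len(layers) - 1:
            x = np.where(s >= 0, +1, -1)     # activation: +1 iff the sum is >= 0
        else:
            x = s                            # activation omitted for the output layer
    return x

# Toy network: 3 bipolar inputs, one hidden layer with 2 neurons, 1 output.
W1 = np.array([[-1, +1, -1, +1],
               [+1, -1, +1, +1]])
W2 = np.array([[+1, -1, +1]])
print(bnn_forward([W1, W2], np.array([+1, -1, +1])))   # -> [1]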

For a given BNN and a relation φ specifying an undesired property between the bipolar input and output domains of the given BNN, the BNN safety verification problem asks if there exists an input a to the BNN such that the risk property φ(a, b) holds, where b is the output of the BNN for input a.

It turns out that safety verification of BNNs is no simpler than safety verification of floating-point neural networks with ReLU activation functions [15]. Nevertheless, compared to floating-point neural networks, the simplicity of binarized weights allows an efficient translation into SAT problems, as can be seen in later sections.

Theorem 1.

The problem of BNN safety verification is NP-complete.

3 Verification of BNNs via Hardware Verification

The BNN verification problem is encoded by means of a combinational miter [6], which is a hardware circuit with a single Boolean output for which one wants to establish that the output is always 0. The main step of this encoding is to replace the bipolar-domain operations in the definition of BNNs with corresponding operations in the 0/1 Boolean domain.

We recall the encoding of the update function of an individual neuron of a BNN by means of operations in the 0/1 Boolean domain [16, 9]: (1) perform a bitwise XNOR operation on the weights and the inputs, (2) count the number of 1s, and (3) check if the sum is greater than or equal to half of the number of inputs being connected. Table 1 illustrates the concept by providing the detailed computation for a neuron connected to five predecessor nodes. Therefore, the update function of a BNN neuron (in a fully connected layer) in the Boolean domain is as follows:

x_j^(l) = ( count1( XNOR( w_j^(l), x^(l-1) ) ) >= ceil( (d^(l-1) + 1) / 2 ) )     (1)

where count1 counts the number of 1s in an array of Boolean variables, and the comparison evaluates to 1 if it holds and to 0 otherwise. Notice that the threshold ceil( (d^(l-1) + 1) / 2 ) is constant for a given BNN.
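As a minimal sketch of the Boolean-domain update in Eq. (1), the following Python function reproduces the bottom half of Table 1; the helper name and data layout are our own and are not part of the paper's tooling.

def neuron_output_bool(w, x):
    """Boolean-domain BNN neuron update (Eq. 1): XNOR the weights with the
    inputs, count the 1s, and compare against the constant threshold
    ceil(number_of_connected_inputs / 2)."""
    xnor = [1 if wi == xi else 0 for wi, xi in zip(w, x)]
    threshold = (len(w) + 1) // 2            # equals ceil(len(w) / 2)
    return 1 if sum(xnor) >= threshold else 0

# The example of Table 1 (index 0 is the constant bias node, five inputs in total).
x = [1, 1, 0, 1, 1]    # Boolean inputs
w = [0, 1, 0, 0, 1]    # Boolean weights
print(neuron_output_bool(w, x))   # 3 ones after XNOR and 3 >= ceil(5/2) = 3, so output 1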

Specifications in the bipolar domain can also easily be re-encoded in the Boolean domain. Let x be a valuation in the bipolar domain and x̂ be the corresponding valuation in the Boolean domain; then the transformation from the bipolar to the Boolean domain is as follows:

x̂ = (x + 1) / 2     (2)

An illustrative example is provided in Table 1, where the Boolean computation (bottom) is obtained from the bipolar computation (top) exactly in this way. In the remainder of this paper we assume that properties are always provided in the Boolean domain.

3.1 From BNN to hardware verification

We are now ready to state the basic decision procedure for solving BNN verification problems. This procedure first constructs a combinational miter for a BNN verification problem, followed by an encoding of the combinational miter into a corresponding propositional SAT problem. Here we rely on standard transformation techniques as implemented in logic synthesis tools such as ABC [6] or Yosys [30] for constructing SAT problems from miters. The decision procedure takes as input a BNN network description and an input-output specification φ, and can be summarized by the following workflow:

  1. Transform all neurons of the given BNN into neuron-modules. All neuron-modules have an identical structure and differ only in the weights and biases associated with the corresponding neurons.

  2. Create a BNN-module by wiring the neuron-modules realizing the topological structure of the given BNN.

  3. Create a property-module for the property φ. Connect the inputs of this module with all the inputs and all the outputs of the BNN-module. The output of this module is true if the property is satisfied and false otherwise.

  4. The combination of the BNN-module and the property-module is the miter.

  5. Transform the miter into a propositional SAT formula.

  6. Solve the SAT formula. If it is unsatisfiable, then the BNN is safe w.r.t. φ; if it is satisfiable, then the BNN exhibits the risky behavior specified by φ.
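The following Python sketch illustrates the essence of steps 1-6 on a toy example: the BNN-module and the property-module are combined into a single miter function, and an exhaustive enumeration of the (here tiny) input space stands in for the SAT solver; the actual tool emits Verilog and dispatches to Yosys, ABC, and a SAT solver as described above. All weights, the property, and the function names are illustrative assumptions.

from itertools import product

def bnn_module(x):
    """Toy BNN-module in the Boolean domain: one layer with two neurons."""
    w_rows = [[1, 0, 1], [0, 0, 1]]          # one (illustrative) weight row per neuron
    return [1 if sum(1 for wi, xi in zip(w, x) if wi == xi) >= 2 else 0
            for w in w_rows]                 # XNOR, count the 1s, compare to ceil(3/2)

def property_module(x, y):
    """Toy risk property: both outputs are 1 at the same time."""
    return y[0] == 1 and y[1] == 1

def miter(x):
    """Combinational miter: true iff the BNN exhibits the risky behavior on input x."""
    return property_module(x, bnn_module(x))

# Exhaustive enumeration stands in for the SAT solver on this 3-input toy example.
witnesses = [x for x in product([0, 1], repeat=3) if miter(x)]
print("UNSAT (safe)" if not witnesses else "SAT, counterexample: %s" % (witnesses[0],))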

3.2 Counting optimization

The goal of the counting optimization is to speed up SAT-solving times by reusing redundant counting units in the circuit and, thus, reducing redundancies in the SAT formula. This method involves the identification and factoring of redundant counting units, illustrated in Figure 2, which highlights one possible factoring. The main idea is to exploit similarities among the weight vectors of neurons in the same layer, because the counting over a portion of the weight vector has the same result for all neurons that share it. The circuit size is reduced by using the factored counting unit in multiple neuron-modules. We define a factoring as follows:

Definition 1 (factoring and saving).

Consider the l-th layer of a BNN where l >= 1. A factoring f = (I, J) is a pair of two sets, where I ⊆ {1, ..., d^(l-1)} indexes inputs and J ⊆ {1, ..., d^(l)} indexes neurons, such that |J| >= 2 and, for all j1, j2 ∈ J and all i ∈ I, we have w_ij1^(l) = w_ij2^(l). Given a factoring f = (I, J), define its saving sav(f) to be |I| · (|J| - 1).

Definition 2 (non-overlapping factorings).

Two factorings f1 = (I1, J1) and f2 = (I2, J2) are non-overlapping when the following condition holds: if f1 ≠ f2, then either I1 ∩ I2 = ∅ or J1 ∩ J2 = ∅. In other words, the weights associated with f1 and f2 do not overlap.

Definition 3 (-factoring optimization problem).

The k-factoring optimization problem searches for a set F = {f1, ..., fk} of k factorings, such that any two factorings in F are non-overlapping and the total saving Σ_{f ∈ F} sav(f) is maximum.

Figure 2: One possible factoring to avoid redundant counting.

For the example in Fig. 2, the two highlighted factorings are non-overlapping, and together they form an optimal solution of the corresponding factoring optimization problem. Even finding a single factoring with the overall maximum saving (the case k = 1) is computationally hard. This NP-hardness result is established by a reduction from the NP-complete problem of finding a maximum edge biclique in bipartite graphs [24].
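The snippet below makes Definitions 1-3 concrete: it checks whether a pair (I, J) is a factoring for a given 0/1 weight matrix, computes its saving, and tests two factorings for non-overlap; the weight matrix and the two factorings are illustrative examples of ours, not taken from Fig. 2.

def is_factoring(W, I, J):
    """(I, J) is a factoring iff all neurons in J agree on every weight indexed by I."""
    return all(len({W[j][i] for j in J}) == 1 for i in I)

def saving(I, J):
    # Counting over the shared indices I is done once instead of |J| times.
    return len(I) * (len(J) - 1)

def non_overlapping(f1, f2):
    (I1, J1), (I2, J2) = f1, f2
    # The weight entries I x J covered by the two factorings must be disjoint.
    return not (set(I1) & set(I2) and set(J1) & set(J2))

# Rows are neurons, columns are inputs of one layer (illustrative values).
W = [[1, 1, 0, 0],
     [1, 1, 0, 1],
     [0, 1, 0, 1]]
f1 = ({0, 1}, {0, 1})   # inputs 0, 1 shared by neurons 0 and 1
f2 = ({2, 3}, {1, 2})   # inputs 2, 3 shared by neurons 1 and 2
print(is_factoring(W, *f1), saving(*f1))   # True 2
print(is_factoring(W, *f2), saving(*f2))   # True 2
print(non_overlapping(f1, f2))             # True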

Theorem 2 (Hardness of factoring optimization).

The k-factoring optimization problem, even when k = 1, is NP-hard.

Furthermore, even approximating the k-factoring optimization problem is hard: there is no polynomial-time approximation scheme (PTAS), unless NP-complete problems can be solved in randomized subexponential time. The proof follows the intuition that a PTAS for k-factoring could be used to build a PTAS for finding a maximum complete bipartite subgraph, a problem with known inapproximability results [1].

Theorem 3.

If there is a PTAS for the k-factoring optimization problem, even when k = 1, then there is a (probabilistic) algorithm that decides whether a given SAT instance of size n is satisfiable in randomized subexponential time.

Data: BNN network description (cf. Sec. 2)
Result: Set fact of factorings, where any two factorings of fact are non-overlapping.
1  function main():
2      let fact := ∅ and used := ∅;
3      foreach neuron n_j^(l), j = 1, ..., d^(l) do
4          let f_max := the empty factoring;
5          foreach weight w_ij^(l) where (i, j) ∉ used do
6              f := getFactoring(i, j, used);
7              if sav(f) > sav(f_max) then f_max := f;
8          fact := fact ∪ {f_max}; used := used ∪ {(i', j') | i' ∈ I_max, j' ∈ J_max}, where f_max = (I_max, J_max);
9      return fact;
10 function getFactoring(i, j, used):
11     build S := {S_i' | i' = 1, ..., d^(l-1)}, where S_i' := {j' | w_i'j'^(l) = w_i'j^(l) and (i', j') ∉ used};
12     foreach S_i' ∈ S do S_i' := S_i' ∩ S_i;
13     build T := {I_J | J ∈ S}, where I_J := {i' | S_i' ⊇ J};
14     return f = (I_J, J), where J ∈ S, I_J ∈ T, and sav((I_J, J)) is maximal;
Algorithm 1: Finding factoring possibilities for a BNN.

As finding an optimal factoring is computationally hard, we present a polynomial-time heuristic (Algorithm 1) that finds factoring possibilities among the neurons of layer l. The main function searches for a still-unused pair of neuron n_j^(l) and input i (lines 3 and 5), considers a set of candidate factorings determined by the subroutine getFactoring (line 6), in which the weight w_ij^(l) is guaranteed to be used (it is passed the parameters i and j), picks the factoring with the greatest saving (line 7), and then adds this factoring greedily to the result while updating the set used (line 8).

The subroutine getFactoring (lines 10-14) computes a factoring guaranteed to use the weight w_ij^(l). It starts by creating a family of sets S, where each element S_i' contains the indices of the neurons whose i'-th weight matches the i'-th weight of neuron n_j^(l) (the condition in line 11). For the example in Fig. 3a, this computation generates the sets shown in Fig. 3b. The intersection performed on line 12 guarantees that each set S_i' is a subset of S_i: as the weight w_ij^(l) must be included in the factoring, S_i already defines the maximum set of neurons over which factoring can happen. One of the sets therefore shrinks from Fig. 3b to Fig. 3c.

Figure 3: Executing getFactoring for the top-left weight entry of (a), i.e., we consider a factoring which includes the top-left corner of (a). The returned factoring is highlighted in thick lines.

The algorithm then builds a family T of all candidates for the index set I: each element I_J of T contains all the inputs that would benefit from J being the final result. Based on the observation mentioned above, T can be built through superset computation between elements of S (line 13, Fig. 3d). After S and T have been built, line 14 finally selects a pair (I_J, J) with the maximum saving sav((I_J, J)); for the example of Fig. 3, the returned factoring is the one highlighted in thick lines.

The algorithm only performs polynomial-time operations, such as nested loops, superset checks, and set intersections, which makes the heuristic polynomial. When a huge number of neurons and long weight vectors are encountered, we further partition the neurons and weights into smaller regions that serve as inputs to Algorithm 1. By doing so, we find factoring possibilities for each weight segment of a neuron, and the algorithm can be executed in parallel.
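A compact Python sketch of the greedy heuristic of Algorithm 1 is given below; the dense 0/1 weight-matrix layout and the helper names are our own assumptions, and tie-breaking as well as the partitioning into weight segments mentioned above are not handled.

def get_factoring(W, i, j, used):
    """Largest-saving factoring (I, J) that uses the weight entry (i, j)."""
    n_neurons, n_inputs = len(W), len(W[0])
    # S[i2]: neurons whose i2-th weight matches neuron j's i2-th weight,
    # restricted to weight entries not already consumed by earlier factorings.
    S = {i2: {j2 for j2 in range(n_neurons)
              if W[j2][i2] == W[j][i2] and (i2, j2) not in used}
         for i2 in range(n_inputs)}
    S = {i2: s & S[i] for i2, s in S.items()}      # every candidate J must lie within S[i]
    best, best_sav = (set(), set()), 0
    for J in {frozenset(s) for s in S.values() if len(s) >= 2}:
        I = {i2 for i2, s in S.items() if J <= s}  # inputs shared by all neurons in J
        sav = len(I) * (len(J) - 1)
        if sav > best_sav:
            best, best_sav = (set(I), set(J)), sav
    return best

def find_factorings(W):
    """Greedy main loop of Algorithm 1: per neuron, pick the best factoring
    seeded at a still-unused weight entry, then mark its entries as used."""
    factorings, used = [], set()
    for j in range(len(W)):
        best, best_sav = (set(), set()), 0
        for i in range(len(W[0])):
            if (i, j) in used:
                continue
            I, J = get_factoring(W, i, j, used)
            sav = len(I) * (len(J) - 1) if J else 0
            if sav > best_sav:
                best, best_sav = (I, J), sav
        I, J = best
        if best_sav > 0:
            factorings.append((I, J))
            used |= {(i2, j2) for i2 in I for j2 in J}
    return factorings

# Illustrative weight matrix: rows are neurons, columns are inputs of one layer.
W = [[1, 1, 0, 0],
     [1, 1, 0, 1],
     [1, 1, 1, 1]]
print(find_factorings(W))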

4 Implementation and Evaluation

We have implemented a verification tool which first reads a BNN description based on the Intel Nervana Neon framework (https://github.com/NervanaSystems/neon/tree/master/examples/binary), generates a combinational miter in Verilog, and calls Yosys [30] and ABC [6] for generating a CNF formula. No further optimization commands (e.g., refactor) are executed inside ABC to create smaller CNFs. Finally, Cryptominisat5 [27] is used for solving the SAT queries. The experiments were conducted on an Ubuntu 16.04 Google Cloud VM equipped with 18 cores and 250 GB RAM, with Cryptominisat5 running with 16 threads. We use two different datasets, namely the MNIST dataset for digit recognition [18] and the German traffic sign dataset [28]. We binarize the gray-scale data before the actual training. For the traffic sign dataset, every pixel is quantized to a fixed number of Boolean variables.

ID        | # inputs | # neurons in hidden layers | Property being investigated | SAT/UNSAT | SAT solving time (normal) | SAT solving time (factored)
MNIST 1   | 784      | 3x100                      | ()                          | SAT       | 2m16.336s                 | 0m53.545s
MNIST 1   | 784      | 3x100                      | ()                          | SAT       | 2m20.318s                 | 0m56.538s
MNIST 1   | 784      | 3x100                      | ()                          | SAT       | timeout                   | 10m50.157s
MNIST 1   | 784      | 3x100                      | ()                          | UNSAT     | 2m4.746s                  | 1m0.419s
Traffic 2 | 2352     | 3x500                      | ()                          | SAT       | 10m27.960s                | 4m9.363s
Traffic 2 | 2352     | 3x500                      | ()                          | SAT       | 10m46.648s                | 4m51.507s
Traffic 2 | 2352     | 3x500                      | ()                          | SAT       | 10m48.422s                | 4m19.296s
Traffic 2 | 2352     | 3x500                      | ()                          | unknown   | timeout                   | timeout
Traffic 2 | 2352     | 3x500                      | ()                          | UNSAT     | 31m24.842s                | 41m9.407s
Traffic 3 | 2352     | 3x1000                     | ()                          | SAT       | out-of-memory             | 9m40.77s
Traffic 3 | 2352     | 3x1000                     | ()                          | SAT       | out-of-memory             | 9m43.70s
Traffic 3 | 2352     | 3x1000                     | ()                          | SAT       | out-of-memory             | 9m28.40s
Traffic 3 | 2352     | 3x1000                     | ()                          | SAT       | out-of-memory             | 9m34.95s

Table 2: Verification results for each instance, comparing the execution times of the plain hardware verification approach and the optimized version using counting optimizations.

Table 2 summarizes the verification results in terms of SAT solving time, with a fixed timeout per instance. The properties that we use here are characteristics of a BNN given by numerical constraints over its outputs, such as "simultaneously classify an image as a priority road sign and as a stop sign with high confidence" (which clearly constitutes risky behavior). It turns out that the factoring techniques are essential for scalability: they roughly halve the verification times in most cases and enable us to solve instances where the plain approach ran out of memory or timed out. However, we also observe that solvers like Cryptominisat5 may get trapped in some very hard-to-prove properties. Regarding the instance in Table 2 where the result is unknown, we suspect that the required simultaneous confidence value for the two classes is close to the value where the property flips from satisfiable to unsatisfiable. This makes SAT solving extremely difficult, as such instances are close to the "border" between SAT and UNSAT instances.

Here we omit technical details, but the counting approach can also be replaced by techniques such as sorting networks [2]: intuitively, the counting plus activation function can be replaced by a sorting network followed by a check whether, in the sorted result, the element at the threshold position is true. The technique of factoring can still be integrated: the sorting network of [2] implements merge sort in hardware, building a sorted string by merging multiple sorted substrings; in the context of BNN verification, each factored result can first be sorted, and these sorted results can then be fed as inputs to the merger. However, our initial evaluation showed that using sorting networks does not bring any computational benefit.
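The equivalence underlying the sorting-network alternative is easy to check in a few lines of Python: counting 1s and comparing against a threshold gives the same answer as sorting the bit vector and inspecting the element at the threshold position (which a hardware sorting network would compute); this only illustrates the equivalence, not the circuit construction of [2].

from itertools import product

def count_then_threshold(bits, k):
    return sum(bits) >= k

def sort_then_inspect(bits, k):
    # After sorting in descending order, the k-th element is 1
    # exactly when at least k of the bits are 1.
    return sorted(bits, reverse=True)[k - 1] == 1

# Exhaustive check over all 5-bit vectors and all thresholds k = 1..5.
assert all(count_then_threshold(list(b), k) == sort_then_inspect(list(b), k)
           for b in product([0, 1], repeat=5) for k in range(1, 6))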

5 Related Work

There has been a flurry of recent results on the formal verification of neural networks (e.g., [25, 15, 8, 10, 20]). These approaches usually target the formal verification of floating-point arithmetic neural networks (FPA-NNs). Huang et al. propose an (incomplete) search-based technique based on satisfiability modulo theories (SMT) solvers [13]. For FPA-NNs with ReLU activation functions, Katz et al. propose a modification of the Simplex algorithm which prefers fixing binary variables [15]. This verification approach has been demonstrated on the verification of a collision avoidance system for UAVs. In our own previous work on neural network verification we establish maximum resilience bounds for FPA-NNs based on reductions to mixed-integer linear programming (MILP) problems [8]. The feasibility of this approach has been demonstrated, for example, by verifying a motion predictor in a highway overtaking scenario. The work of Ehlers [10] is based on sound abstractions and approximates the non-linear behavior of the activation functions. Scalability is the overarching challenge for these formal approaches to the verification of FPA-NNs: case studies and experiments reported in the literature are usually restricted to FPA-NNs with a couple of hundred neurons.

Around the time we first released our work on the formal verification of BNNs (Oct 9th, 2017), Narodytska et al. were also working on the same problem [23]. Their work focuses on efficient encodings within a single neuron, while we focus on computational savings among neurons within the same layer. One can view the two results as complementary.

Researchers from the machine learning domain (e.g., [11, 12, 22]) target the generation of adversarial examples for debugging and retraining purposes. Adversarial examples are slightly perturbed inputs (such as images) which may fool a neural network into generating undesirable results (such as "wrong" classifications). Using satisfying assignments from the SAT solving stage of our verification procedure, we are also able to generate counterexamples to the BNN verification problem. Our work, however, goes well beyond current approaches to generating adversarial examples in that it does not only support debugging and retraining purposes; instead, our verification algorithm establishes formal correctness results for neural network-like structures.

6 Conclusions

We solve the problem of verifying BNNs by reduction to the problem of verifying combinational circuits, which itself is reduced to solving SAT problems. Altogether, our experiments indicate that this hardware verification-centric approach, in combination with our BNN-specific transformations and optimizations, scales to BNNs with thousands of inputs and nodes. This kind of scalability makes our verification approach attractive for automatically establishing correctness results, at least for moderately-sized BNNs as used on current embedded devices.

Our developments for efficiently encoding BNN verification problems might also prove useful in optimizing the forward evaluation of BNNs. In addition, our verification framework may be used for debugging and retraining BNNs, for example, by automatically generating adversarial inputs from failed verification attempts.

In the future we also plan to directly synthesize propositional clauses without the support of third-party tools such as Yosys, in order to avoid extraneous transformations and repetitive work in the synthesis workflow. Such optimizations of the current verification tool chain should result in substantial performance improvements. It might also be interesting to investigate incremental verification techniques for BNNs, since the weights and structure of these learning networks might adapt and change continuously.

Finally, our proposed verification workflow might be extended to synthesis problems, such as synthesizing bias terms in BNNs without sacrificing performance, or synthesizing weight assignments in a property-driven manner. These kinds of synthesis problems for BNNs reduce to 2QBF problems, which are satisfiability problems with a top-level exists-forall quantification. The main challenge for solving such synthesis problems for the networks typically encountered in practice is, again, scalability.

Acknowledgments

We thank Dr. Ljubo Mercep from Mentor Graphics for pointing us to some recent results on quantized neural networks, and Dr. Alan Mishchenko from UC Berkeley for his kind support regarding ABC.

We additionally thank Dr. Leonid Ryzhyk from VMware for pointing us to their work on the efficient SAT encoding of individual neurons, and we further thank Dr. Alan Mishchenko for sharing his knowledge regarding sorting networks.

References

Appendix - Proofs of Theorems

Theorem 1.

The problem of BNN safety verification is NP-complete.

Proof.

Recall that, for a given BNN and a relation φ specifying an undesired property between the bipolar input and output domains of the given BNN, the BNN safety verification problem asks if there exists an input a to the BNN such that the risk property φ(a, b) holds, where b is the output of the BNN for input a.

(NP) Given an input, computing the output and checking whether the risk property holds can easily be done in time linear in the size of the BNN and the size of the property formula.

(NP-hardness) The NP-hardness proof is via a reduction from 3SAT to BNN safety verification. Consider variables x_1, ..., x_n and clauses c_1, ..., c_m, where each clause c_k consists of three literals l_k1, l_k2, l_k3. We build a single-layer BNN whose inputs are x_0 (the constant +1 for the bias) and x_1, ..., x_n (one input per CNF variable), connected to m neurons.

For neuron n_k, its weights and connections to the previous layer are determined by clause c_k:

  • If l_k1 is a positive literal x_i, then create in the BNN a link from x_i to neuron n_k with weight +1. If l_k1 is a negative literal ¬x_i, then create in the BNN a link from x_i to neuron n_k with weight -1. Proceed analogously for l_k2 and l_k3.

  • Add an edge from the bias input x_0 to n_k with weight +1.

  • Add a bias term of 0 to neuron n_k, so that its activation threshold remains 0.

For example, for a CNF over the variables x_1, x_2, x_3, the translation of the clause (x_1 ∨ ¬x_2 ∨ x_3) creates in the BNN the weighted-sum computation x_0 + x_1 - x_2 + x_3.

As x_0 is the constant +1, if there exists an assignment that makes the clause true then, by interpreting a true assignment in the CNF as +1 at the BNN input and a false assignment as -1, at least one literal contributes +1, so the weighted sum is at least 0, i.e., the output of the neuron is +1. Only when all three literals evaluate to false (i.e., the assignment falsifies the clause) is the weighted sum -2, thereby setting the output of the neuron to -1.

Following the above observation, it is easy to derive that the 3SAT formula is satisfiable iff there exists an input to the generated BNN such that every neuron outputs +1, which is exactly the risk property φ used in the reduction. The correspondence interprets a variable assigned true in the 3SAT instance as the input value +1 in the BNN, and a variable assigned false as the input value -1. ∎
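The reduction can be replayed on small instances with the following Python script, which builds one neuron per clause and checks, by brute force, that the formula is satisfiable exactly when some input makes every neuron output +1; the concrete weight and threshold choices mirror the construction sketched above and are one consistent instantiation, not necessarily the constants of the original proof.

from itertools import product

def clause_to_weights(clause, n_vars):
    """Weights of one BNN neuron for a 3SAT clause given as signed variable
    indices, e.g. (1, -2, 3) for (x1 or not x2 or x3); index 0 is the bias node."""
    w = [0] * (n_vars + 1)
    w[0] = +1                                 # bias edge, as in the construction above
    for lit in clause:
        w[abs(lit)] = +1 if lit > 0 else -1
    return w

def neuron(w, x):
    return +1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else -1

def sat_via_bnn(clauses, n_vars):
    """The 3SAT instance is satisfiable iff some input makes all neurons output +1."""
    W = [clause_to_weights(c, n_vars) for c in clauses]
    for assignment in product([-1, +1], repeat=n_vars):  # -1 = false, +1 = true
        x = (+1,) + assignment                            # x_0 is the constant bias input
        if all(neuron(w, x) == +1 for w in W):
            return True
    return False

# (x1 or not x2 or x3) and (not x1 or x2 or not x3) is satisfiable:
print(sat_via_bnn([(1, -2, 3), (-1, 2, -3)], n_vars=3))   # True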

Theorem 2 (Hardness of factoring optimization).

The k-factoring optimization problem, even when k = 1, is NP-hard.

Proof.

The proof proceeds by a polynomial reduction from the problem of finding a maximum edge biclique (MEB) in bipartite graphs [24]. (Let G = (A ∪ B, E) be a bipartite graph with vertex sets A and B and an edge set E connecting vertices in A to vertices in B. A pair of two disjoint subsets A' ⊆ A and B' ⊆ B is called a biclique if (a, b) ∈ E for all a ∈ A' and b ∈ B'. Thus, the edges between A' and B' form a complete bipartite subgraph of G; such a biclique clearly has |A'| · |B'| edges.) Given a bipartite graph G, this reduction is defined as follows.

  1. For b_i ∈ B, the i-th element of B, create a neuron n_i^(l) in layer l.

  2. Create an additional neuron n_0^(l) in layer l.

  3. For a_j ∈ A, the j-th element of A, create a neuron n_j^(l-1) in layer l-1.

    • Create the weight w_j0^(l) = +1, connecting n_j^(l-1) to the additional neuron n_0^(l).

    • If (a_j, b_i) ∈ E, then create the weight w_ji^(l) = +1, connecting n_j^(l-1) to n_i^(l).

This construction can clearly be performed in polynomial time; Figure 4 illustrates the construction process. It is not difficult to observe that G has a maximum edge biclique with k edges iff the neural network at layer l has a factoring whose saving equals k: a biclique (A', B') corresponds to the factoring consisting of the nodes created for A' together with the neurons created for B' and the additional neuron n_0^(l), and its saving is |A'| · |B'|. The gray area in Figure 4-a shows the structure of a maximum edge biclique, and the saving of the corresponding factoring in Figure 4-c is the same as the edge count of the biclique. ∎

Figure 4: From a bipartite graph (a) to a BNN layer where all weights have value +1 (b), to an optimal factoring (c).
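For a small bipartite graph, the construction of the reduction can be replayed directly: vertices of A become previous-layer nodes, vertices of B become neurons, one extra neuron is connected to all previous-layer nodes, and a biclique then corresponds to a factoring whose saving equals the biclique's edge count. In the sketch below, missing edges are encoded as weight -1 purely for illustration; this choice is ours and not part of the construction above.

def graph_to_layer(A, B, edges):
    """Rows are neurons (one per vertex in B, plus the extra neuron connected to
    everything); columns are previous-layer nodes (one per vertex in A).
    Edges get weight +1; missing edges get weight -1 purely for illustration."""
    W = [[+1 if (a, b) in edges else -1 for a in A] for b in B]
    W.append([+1] * len(A))                  # the additional neuron n_0
    return W

def saving_of_biclique(A, B, edges, A_sub, B_sub):
    W = graph_to_layer(A, B, edges)
    I = [A.index(a) for a in A_sub]
    J = [B.index(b) for b in B_sub] + [len(B)]             # include the extra neuron
    assert all(len({W[j][i] for j in J}) == 1 for i in I)  # (I, J) is a factoring
    return len(I) * (len(J) - 1)

A, B = ["a1", "a2", "a3"], ["b1", "b2"]
edges = {("a1", "b1"), ("a2", "b1"), ("a1", "b2"), ("a2", "b2"), ("a3", "b2")}
# The biclique ({a1, a2}, {b1, b2}) has 4 edges, and the corresponding factoring saves 4:
print(saving_of_biclique(A, B, edges, ["a1", "a2"], ["b1", "b2"]))   # 4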

The following inapproximability result shows that even approximating the k-factoring optimization problem is hard.

Theorem 3.

If there is a PTAS for the k-factoring optimization problem, even when k = 1, then there is a (probabilistic) algorithm that decides whether a given SAT instance of size n is satisfiable in randomized subexponential time.

Proof.

We prove the theorem by showing that a PTAS for the k-factoring optimization problem could be used to construct a PTAS for MEB. The result then follows from the inapproximability of MEB assuming the exponential time hypothesis [1].

Assume that Alg is a (1 - ε)-approximation algorithm for the k-factoring optimization problem. We formulate the following algorithm Alg' for MEB:

Input: an MEB instance (a bipartite graph G)

Output: a biclique in G

  1. Perform the reduction from the proof of Theorem 2 to obtain a 1-factoring instance.

  2. Run Alg on this instance to obtain a factoring f = (I, J).

  3. Return (I, J \ {n_0^(l)}).

Remark: step 3 is a small abuse of notation. It should return the original vertices of G corresponding to these nodes and neurons.

Now we prove that Alg' is a (1 - ε)-approximation algorithm for MEB. Note that, by our reduction, two corresponding MEB and 1-factoring instances have the same optimal value, i.e., OPT_MEB(G) = OPT_fact.

In step 3 the algorithm returns (I, J \ {n_0^(l)}). This is valid since we can assume w.l.o.g. that the factoring (I, J) returned by Alg contains n_0^(l): this neuron is connected to all nodes of the previous layer by construction, so it can be added to any factoring. The following relation holds for the number of edges in the biclique returned by Alg':

|edges of the returned biclique| = |I| · (|J| - 1) = sav(f)     (3a)
                                 >= (1 - ε) · OPT_fact          (3b)
                                  = (1 - ε) · OPT_MEB(G)        (3c)

The inequality in step (3b) holds by the assumption that Alg is a (1 - ε)-approximation algorithm for k-factoring, and (3c) follows from the construction of our reduction. Equations (3a)-(3c) and the result of [1] imply Theorem 3. ∎