1 Introduction
Artificial neural networks have become essential building blocks in realizing many automated and even autonomous systems. They have successfully been deployed, for example, for perception and scene understanding
[17, 26, 21], for control and decision making [7, 14, 29, 19], and also for end-to-end solutions of autonomous driving scenarios [5]. Implementations of artificial neural networks, however, need to be made much more power-efficient in order to deploy them on typical embedded devices with their characteristically limited resources and power constraints. Moreover, the use of neural networks in safety-critical systems poses severe verification and certification challenges [3].

Binarized Neural Networks (BNNs) have recently been proposed [16, 9] as a potentially much more power-efficient alternative to more traditional feedforward artificial neural networks. Their main characteristics are that trained weights, inputs, intermediate signals, outputs, and activation constraints are binary-valued. Consequently, forward propagation relies only on bit-level arithmetic. Since BNNs have also demonstrated good performance on standard image recognition datasets such as MNIST, CIFAR-10, and SVHN [9], they are an attractive and potentially power-efficient alternative to current floating-point-based implementations of neural networks for embedded applications.
In this paper we study the verification problem for BNNs. Given a trained BNN and a specification of its intended input-output behavior, we develop verification procedures for establishing that the given BNN indeed meets its intended specification for all possible inputs. Notice that naively solving verification problems for a BNN with, say, n inputs requires the investigation of all 2^n different input configurations.
For solving the verification problem of BNNs we build on well-known methods and tools from the hardware verification domain. We first transform the BNN and its specification into a combinational miter [6], which is then transformed into a corresponding propositional satisfiability (SAT) problem. In this process we rely heavily on logic synthesis tools such as ABC [6] from the hardware verification domain. Using such a direct neuron-to-circuit encoding, however, we were not able to verify BNNs with thousands of inputs and hidden nodes, as encountered in some of our embedded systems case studies. The main challenge therefore is to make the basic verification procedure scale to BNNs as used on current embedded devices.
It turns out that one critical ingredient for efficient BNN verification is to factor computations among neurons in the same layer, which is possible because the weights are binary. Such a technique is not applicable in recent work on the verification of floating-point neural networks [25, 15, 8, 10, 20]. Our key theorems on the hardness of finding an optimal factoring, and on the hardness of even approximating one, motivate the design of polynomial-time search heuristics for generating factorings. These factorings substantially increase the scalability of formal verification via SAT solving.
The paper is structured as follows. Section 2 defines basic notions and concepts underlying BNNs. Section 3 presents our verification workflow including the factoring of counting units (Section 3.2). We summarize experimental results with our verification procedure in Section 4, compare our results with related work from the literature in Section 5, and we close with some final remarks and an outlook in Section 6. Proofs of theorems are listed in the appendix.
[Table 1: Example computation of a neuron connected to five predecessor nodes (index j = 0, the bias node, through j = 4), shown in the bipolar domain (top) and in the equivalent 0/1 Boolean domain (bottom), where the number of 1s after the XNOR step is counted and compared against the threshold.]
2 Preliminaries
Let B = {+1, -1} be the set of bipolar binaries, where +1 is interpreted as "true" and -1 as "false". A Binarized Neural Network (BNN) [16, 9] consists of a sequence of layers labeled from 0 to L, where 0 is the index of the input layer, L is the output layer, and all other layers are so-called hidden layers. Superscripts (l) are used to index layer-specific variables. Elements of both the input and output vectors of a BNN are of the bipolar domain B.

Layer l is comprised of nodes n_j^(l) (so-called neurons), for j = 0, 1, ..., d^(l), where d^(l) is the dimension of layer l. By convention, n_0^(l) is a bias node with the constant bipolar output +1. A node n_j^(l-1) of layer l-1 can be connected with a node n_i^(l) in layer l by a directed edge of weight w_{ij}^(l) ∈ B. A layer is fully connected if every node (apart from the bias node) in the layer is connected to all neurons in the previous layer. Let w_i^(l) denote the array of all weights associated with neuron n_i^(l). Notice that we consider all weights in a network to have fixed bipolar values.
Given an input to the network, computations are applied successively from the neurons in layer 1 to those in layer L to generate the outputs. Fig. 1 illustrates the computation of a neuron in the bipolar domain. Overall, the activation function is applied to the intermediately computed weighted sum: it outputs +1 if the weighted sum is greater than or equal to 0, and -1 otherwise. For the output layer, the activation function is omitted. For l ≥ 1, let x_i^(l) denote the output value of node n_i^(l), and let x^(l) denote the array of all outputs of layer l, including the constant bias node; x^(0) refers to the input layer.

For a given BNN and a relation φ specifying an undesired property between the bipolar input and output domains of the BNN, the BNN safety verification problem asks if there exists an input a to the BNN such that the risk property φ(a, b) holds, where b is the output of the BNN for input a.
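The bipolar forward computation described above can be sketched in a few lines. This is a hedged illustration: the concrete weights, inputs, and layer sizes below are made up, not taken from the paper.

```python
# Minimal sketch: forward pass of a BNN neuron/layer in the bipolar domain
# {-1, +1}. All numeric values here are illustrative.

def neuron_output(weights, inputs):
    """Bipolar activation: +1 if the weighted sum is >= 0, else -1."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s >= 0 else -1

def layer_output(weight_matrix, inputs):
    """Outputs of one fully connected layer; a constant +1 bias node is prepended."""
    return [1] + [neuron_output(w, inputs) for w in weight_matrix]

# Example: one neuron with five predecessor nodes (inputs[0] is the bias output +1).
weights = [1, -1, 1, 1, -1]
inputs = [1, -1, -1, 1, 1]
print(neuron_output(weights, inputs))  # weighted sum = 1+1-1+1-1 = 1 -> prints 1
```

The same helper applied layer by layer gives the full forward propagation; the output layer would simply omit the thresholding, as stated above.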
It turns out that safety verification of BNNs is no simpler than safety verification of floating-point neural networks with ReLU activation functions [15]. Nevertheless, compared to floating-point neural networks, the simplicity of binarized weights allows an efficient translation into SAT problems, as can be seen in later sections.

Theorem 1.
The problem of BNN safety verification is NP-complete.
3 Verification of BNNs via Hardware Verification
The BNN verification problem is encoded by means of a combinational miter [6], i.e., a hardware circuit with a single Boolean output that should always be 0. The main step of this encoding is to replace the bipolar-domain operations in the definition of BNNs with corresponding operations in the 0/1 Boolean domain.
We recall the encoding of the update function of an individual BNN neuron in the bipolar domain by means of operations in the 0/1 Boolean domain [16, 9]: (1) perform a bitwise XNOR operation between the weight and input vectors, (2) count the number of 1s, and (3) check if the count is greater than or equal to half the number of connected inputs. Table 1 illustrates the concept by providing the detailed computation for a neuron connected to five predecessor nodes. The update function of a BNN neuron (in a fully connected layer) in the Boolean domain is therefore

    x_i^(l) = ( count1( xnor( w_i^(l), x^(l-1) ) ) ≥ ⌈ (d^(l-1) + 1) / 2 ⌉ ),    (1)

where count1 counts the number of 1s in an array of Boolean variables, and a comparison (e ≥ c) evaluates to 1 if it holds and to 0 otherwise. Notice that the threshold ⌈ (d^(l-1) + 1) / 2 ⌉ is a constant for a given BNN.
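As a hedged sketch of this Boolean-domain update, the following reimplements the XNOR-then-count rule. The threshold test 2c ≥ n used here is one consistent choice: it makes the Boolean rule agree with the bipolar rule "weighted sum ≥ 0", since the bipolar sum equals (#matches − #mismatches) = 2c − n; the constant may be written differently in the paper.

```python
# Minimal sketch of the 0/1 Boolean-domain neuron update of Eq. (1).

def xnor(a, b):
    return 1 if a == b else 0

def count1(bits):
    return sum(bits)

def neuron_bool(weights01, inputs01):
    n = len(weights01)
    c = count1([xnor(w, x) for w, x in zip(weights01, inputs01)])
    # bipolar weighted sum = (#matches) - (#mismatches) = 2*c - n,
    # so "weighted sum >= 0" becomes "2*c >= n"
    return 1 if 2 * c >= n else 0

# Example with five predecessor nodes (values illustrative):
print(neuron_bool([1, 0, 1, 1, 0], [1, 0, 0, 1, 1]))  # 3 matches, 2*3 >= 5 -> prints 1
```

Mapping a Boolean bit a back to the bipolar value 2a − 1 recovers the bipolar neuron's output exactly, which is the equivalence the miter construction relies on.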
Specifications in the bipolar domain can also easily be re-encoded in the Boolean domain. Let x be a valuation in the bipolar domain and a the corresponding valuation in the 0/1 Boolean domain; then the transformation from the bipolar to the Boolean domain is

    a = (x + 1) / 2.    (2)

An illustrative example is provided in Table 1. In the remainder of this paper we assume that properties are always provided in the Boolean domain.
3.1 From BNN to hardware verification
We are now ready to state the basic decision procedure for solving BNN verification problems. The procedure first constructs a combinational miter for a BNN verification problem, followed by an encoding of the combinational miter into a corresponding propositional SAT problem. Here we rely on standard transformation techniques, as implemented in logic synthesis tools such as ABC [6] or Yosys [30], for constructing SAT problems from miters. The decision procedure takes as input a BNN description and an input-output specification, and can be summarized by the following workflow:

Transform all neurons of the given BNN into neuron-modules. All neuron-modules have an identical structure; they differ only in the weights and biases associated with the corresponding neurons.

Create a BNN-module by wiring the neuron-modules according to the topological structure of the given BNN.

Create a property-module for the given property. Connect the inputs of this module with all the inputs and all the outputs of the BNN-module. The output of this module is true if the property is satisfied and false otherwise.

The combination of the BNN-module and the property-module is the miter.

Transform the miter into a propositional SAT formula.

Solve the SAT formula. If it is unsatisfiable, then the BNN is safe with respect to the specified property; if it is satisfiable, then the BNN exhibits the risk behavior specified by the property.
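On a toy scale, the question the miter hands to the SAT solver, "is there an input that raises the property output?", can be mimicked by exhaustive enumeration. The sketch below does exactly that; the network, weights, and properties are all made up for illustration, and (unlike in the paper) the output layer is also thresholded for simplicity.

```python
# Hedged sketch of the miter semantics via brute force (only feasible for tiny
# BNNs; the SAT encoding replaces this enumeration in practice).
import itertools

def bnn_forward(layers, x):
    """layers: list of 0/1 weight matrices; x: Boolean input vector.
    Each neuron fires iff at least half of its weight bits match the inputs."""
    for W in layers:
        x = [1 if 2 * sum(1 for w, v in zip(row, x) if w == v) >= len(row) else 0
             for row in W]
    return x

def miter_is_unsat(layers, n_inputs, risk_property):
    """True iff no input triggers the risk property, i.e. the BNN is safe."""
    return not any(risk_property(list(x), bnn_forward(layers, list(x)))
                   for x in itertools.product([0, 1], repeat=n_inputs))

# Toy check: a single-neuron "network" and the trivially unsatisfiable risk
# property "the output differs from itself".
layers = [[[1, 0, 1]]]
print(miter_is_unsat(layers, 3, lambda x, y: y[0] != y[0]))  # prints True: safe
```

A SAT solver performs the same search symbolically: an UNSAT answer corresponds to `miter_is_unsat` returning True, and a satisfying assignment is exactly a counterexample input.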
3.2 Counting optimization
The goal of the counting optimization is to speed up SAT-solving times by reusing redundant counting units in the circuit and thus reducing redundancies in the SAT formula. This method involves the identification and factoring of redundant counting units, illustrated in Figure 2, which highlights one possible factoring. The main idea is to exploit similarities among the weight vectors of neurons in the same layer: counting over a shared portion of the weight vector yields the same result for all neurons that share it. The circuit size is reduced by using the factored counting unit in multiple neuron-modules. We define a factoring as follows:
Definition 1 (factoring and saving).
Consider the l-th layer of a BNN, where l ≥ 1. A factoring f = (I, J) is a pair of two sets with I ⊆ {1, ..., d^(l)} and J ⊆ {0, ..., d^(l-1)}, such that |I| ≥ 2 and, for all i_1, i_2 ∈ I and all j ∈ J, we have w_{i_1 j}^(l) = w_{i_2 j}^(l). Given a factoring f = (I, J), define its saving sav(f) to be (|I| - 1) · |J|.
Definition 2 (non-overlapping factorings).
Two factorings f_1 = (I_1, J_1) and f_2 = (I_2, J_2) are non-overlapping when the following condition holds: if f_1 ≠ f_2, then either I_1 ∩ I_2 = ∅ or J_1 ∩ J_2 = ∅. In other words, the weight positions associated with f_1 and f_2 do not overlap.
Definition 3 (factoring optimization problem).
The factoring optimization problem searches for a set {f_1, ..., f_k} of k factorings, such that any two factorings in the set are non-overlapping and the total saving, i.e., the sum of sav(f_q) over q = 1, ..., k, is maximum.
For the example in Fig. 2, there are two non-overlapping factorings, which together also form an optimal solution for the factoring optimization problem with maximum total saving. Even finding a single factoring with the overall maximum saving, i.e., the case k = 1, is computationally hard. This NP-hardness result is established by a reduction from the NP-complete problem of finding a maximum edge biclique in bipartite graphs [24].
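Definitions 1–3 can be checked mechanically. The sketch below does so on a made-up weight matrix; the saving formula (|I| − 1) · |J| and the disjointness test are our reading of the definitions, written out as an assumption rather than the paper's exact formulation.

```python
# Hedged sketch of Definitions 1-3 on 0/1 weight matrices (rows = neurons,
# columns = inputs); all concrete values are illustrative.

def is_factoring(W, I, J):
    """All neurons in I agree on every weight column indexed by J."""
    ref = next(iter(I))
    return len(I) >= 2 and all(W[i][j] == W[ref][j] for i in I for j in J)

def saving(I, J):
    # one shared counting unit replaces |I| copies over the columns J
    return (len(I) - 1) * len(J)

def non_overlapping(f1, f2):
    (I1, J1), (I2, J2) = f1, f2
    return not (I1 & I2) or not (J1 & J2)

W = [[1, 0, 1, 1],
     [1, 1, 1, 0],
     [0, 1, 0, 0]]
f = ({0, 1}, {0, 2})  # neurons 0 and 1 share weights at columns 0 and 2
print(is_factoring(W, *f), saving(*f))  # prints: True 2
```

For instance, the factoring above and ({1, 2}, {1}) are non-overlapping because their column sets are disjoint, even though neuron 1 appears in both.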
Theorem 2 (Hardness of factoring optimization).
The factoring optimization problem, even when k = 1, is NP-hard.
Furthermore, even approximating the factoring optimization problem is hard: there is no polynomial-time approximation scheme (PTAS), unless NP-complete problems can be solved in randomized subexponential time. The proof follows the intuition that a PTAS for factoring could be used to build a PTAS for finding a maximum complete bipartite subgraph, for which inapproximability results are known [1].
Theorem 3.
Let ε > 0 be an arbitrarily small constant. If there is a PTAS for the factoring optimization problem, even when k = 1, then there is a (probabilistic) algorithm that decides whether a given SAT instance of size n is satisfiable in time 2^{n^ε}.
As finding an optimal factoring is computationally hard, we present a polynomial-time heuristic algorithm (Algorithm 1) that finds factoring possibilities among neurons in layer l. The main function searches for an unused pair of a neuron i and an input j (lines 3 and 5), considers a certain set of factorings determined by the subroutine getFactoring (line 6), in which the weight w_{ij} is guaranteed to be used (i and j are its input parameters), picks the factoring with the greatest saving (line 7), and then greedily adds this factoring and updates the set used (line 8).
The subroutine (lines 10–14) computes a factoring that is guaranteed to use weight w_{ij}. It starts by creating a set S, where each element S_{j'} contains the indices of those neurons whose j'-th weight matches the j'-th weight of neuron i (the condition in line 11); Fig. 3b shows the sets generated for the example of Fig. 3a. The intersection performed on line 12 guarantees that every S_{j'} is a subset of S_j: as weight w_{ij} must be included, S_j already defines the maximum set of neurons over which factoring can happen (Fig. 3c).

The algorithm then builds the set of candidates for J: each candidate contains all the inputs that would benefit from sharing a counting unit over the chosen neuron set. Based on the observation above, these candidates can be built through superset computation between elements of S (line 13, Fig. 3d). Finally, line 14 selects a pair (I, J) with i ∈ I and j ∈ J that attains the maximum saving (|I| - 1) · |J|.
The algorithm performs only polynomially many basic operations, such as nested loops, superset checks, and set intersections, which makes the heuristic polynomial. When one encounters a huge number of neurons and long weight vectors, we further partition neurons and weights into smaller regions as input to Algorithm 1. By doing so, we find factoring possibilities for each weight segment of a neuron, and the algorithm can be executed in parallel.
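The heuristic just described can be sketched as follows. This is a hedged, simplified reimplementation: it loosely follows the getFactoring subroutine plus the greedy main loop, but the tie-breaking and overlap handling (here, an accepted factoring blocks every weight position it covers) are our own simplifications, not line-for-line faithful to Algorithm 1.

```python
# Simplified sketch of the greedy factoring heuristic on a 0/1 weight matrix
# (rows = neurons, columns = inputs); the matrix below is made up.

def get_factoring(W, i, j):
    """Best factoring (I, J) that uses weight position (i, j): for each column
    j', collect the neurons agreeing with neuron i on j', restricted to those
    that also agree on column j; pair each neuron set with its superset-columns."""
    base = {r for r in range(len(W)) if W[r][j] == W[i][j]}
    col_sets = {jp: base & {r for r in range(len(W)) if W[r][jp] == W[i][jp]}
                for jp in range(len(W[0]))}
    best = (set(), set())
    for I in col_sets.values():
        if len(I) < 2:
            continue
        J = {jp for jp, S in col_sets.items() if S >= I}  # superset check
        if (len(I) - 1) * len(J) > (len(best[0]) - 1) * len(best[1]):
            best = (I, J)
    return best

def greedy_factorings(W):
    """Greedy main loop: scan unused (neuron, weight) positions, keep the best
    factoring through each, provided it reuses no already-covered position."""
    used, result = set(), []
    for i in range(len(W)):
        for j in range(len(W[0])):
            if (i, j) in used:
                continue
            I, J = get_factoring(W, i, j)
            cells = {(a, b) for a in I for b in J}
            if (len(I) - 1) * len(J) > 0 and not cells & used:
                result.append((I, J))
                used |= cells
    return result

# Illustrative matrix: neurons 0-2 agree on columns 0 and 2, giving saving 4.
W = [[1, 0, 1, 1],
     [1, 1, 1, 0],
     [1, 0, 1, 0]]
print(greedy_factorings(W))
```

On this example the first position (0, 0) already yields the factoring ({0, 1, 2}, {0, 2}) with saving (3 − 1) · 2 = 4, and every later candidate overlaps it, so the greedy loop returns just that one factoring.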
4 Implementation and Evaluation
We have created a verification tool that first reads a BNN description based on the Intel Nervana Neon framework (https://github.com/NervanaSystems/neon/tree/master/examples/binary), generates a combinational miter in Verilog, and calls Yosys [30] and ABC [6] to generate a CNF formula. No further optimization commands (e.g., refactor) are executed inside ABC to create smaller CNFs. Finally, Cryptominisat5 [27] is used for solving the SAT queries. The experiments were conducted on an Ubuntu 16.04 Google Cloud VM equipped with 18 cores and 250 GB RAM, with Cryptominisat5 running on 16 threads. We use two different datasets, namely the MNIST dataset for digit recognition [18] and the German traffic sign dataset [28]. We binarize the grayscale data before actual training. For the traffic sign dataset, every pixel is quantized into Boolean variables.
ID         # inputs   # neurons per hidden layer   Property   SAT/UNSAT   SAT solving time (normal)   SAT solving time (factored)
MNIST 1    784        3x100    ()   SAT        2m16.336s       0m53.545s
MNIST 1    784        3x100    ()   SAT        2m20.318s       0m56.538s
MNIST 1    784        3x100    ()   SAT        timeout         10m50.157s
MNIST 1    784        3x100    ()   UNSAT      2m4.746s        1m0.419s
Traffic 2  2352       3x500    ()   SAT        10m27.960s      4m9.363s
Traffic 2  2352       3x500    ()   SAT        10m46.648s      4m51.507s
Traffic 2  2352       3x500    ()   SAT        10m48.422s      4m19.296s
Traffic 2  2352       3x500    ()   unknown    timeout         timeout
Traffic 2  2352       3x500    ()   UNSAT      31m24.842s      41m9.407s
Traffic 3  2352       3x1000   ()   SAT        out of memory   9m40.77s
Traffic 3  2352       3x1000   ()   SAT        out of memory   9m43.70s
Traffic 3  2352       3x1000   ()   SAT        out of memory   9m28.40s
Traffic 3  2352       3x1000   ()   SAT        out of memory   9m34.95s
Table 2 summarizes the verification results in terms of SAT-solving time, with a fixed per-instance timeout. The properties that we use here are characteristics of a BNN given by numerical constraints over its outputs, such as "simultaneously classify an image as a priority road sign and as a stop sign with high confidence" (which clearly constitutes risk behavior). It turns out that the factoring technique is essential for scalability: it halves the verification times in most cases and enables us to solve some instances where the plain approach ran out of memory or timed out. However, we also observe that solvers like Cryptominisat5 may get trapped by some very hard-to-prove properties. Regarding the instance in Table 2 where the result is unknown, we suspect that the required simultaneous confidence value for the two classes is close to the value where the property flips from satisfiable to unsatisfiable. This makes SAT solving extremely difficult, as such instances are close to the "border" between SAT and UNSAT instances.

Here we omit technical details, but the counting approach can also be replaced by techniques such as sorting networks [2]: intuitively, counting plus the activation function can be implemented by sorting the bits in hardware and checking whether a fixed position of the sorted result is true. The factoring technique can still be integrated, since the sorting network of [2] implements merge sort in hardware, building a sorted string by merging multiple sorted substrings; under BNN verification, each factored result can first be sorted, and these sorted results can then be fed as inputs to the merger. However, our initial evaluation demonstrated that using sorting networks does not bring any computational benefit.
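The equivalence underlying the sorting-network variant is easy to state concretely. The sketch below is a hedged software illustration only: a real implementation would use a hardware merge-sort network as in [2]; Python's `sorted` stands in for it here.

```python
# Counting + thresholding vs. sorting + indexing: both decide whether at least
# `threshold` of the XNOR bits are 1. Values are illustrative.

def activation_by_count(bits, threshold):
    return 1 if sum(bits) >= threshold else 0

def activation_by_sorting(bits, threshold):
    ordered = sorted(bits, reverse=True)
    # with bits sorted in descending order, the threshold-th element (1-indexed)
    # is 1 exactly when at least `threshold` bits are 1
    return 1 if ordered[threshold - 1] == 1 else 0

bits = [1, 0, 1, 1, 0]
print(activation_by_count(bits, 3), activation_by_sorting(bits, 3))  # prints: 1 1
```

In the factored setting, each shared segment would be sorted once and the sorted substrings merged, which is where the counting units can still be reused.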
5 Related Work
There has been a flurry of recent results on the formal verification of neural networks (e.g., [25, 15, 8, 10, 20]). These approaches usually target the formal verification of floating-point arithmetic neural networks (FPA-NNs). Huang et al. propose an (incomplete) search-based technique based on satisfiability modulo theories (SMT) solvers [13]. For FPA-NNs with ReLU activation functions, Katz et al. propose a modification of the Simplex algorithm which prefers fixing of binary variables [15]. This verification approach has been demonstrated on the verification of a collision avoidance system for UAVs. In our own previous work on neural network verification we establish maximum resilience bounds for FPA-NNs based on reductions to mixed-integer linear programming (MILP) problems [8]. The feasibility of this approach has been demonstrated, for example, by verifying a motion predictor in a highway overtaking scenario. The work of Ehlers [10] is based on sound abstractions and approximates non-linear behavior in the activation functions. Scalability is the overarching challenge for these formal approaches to the verification of FPA-NNs; case studies and experiments reported in the literature are usually restricted to networks with a couple of hundred neurons.

Around the time we first released our work on the formal verification of BNNs (Oct 9, 2017), Narodytska et al. were also working on the same problem [23]. Their work focuses on efficient encodings within a single neuron, while we focus on computational savings among neurons within the same layer; the two results can be viewed as complementary.
Researchers from the machine learning domain (e.g., [11, 12, 22]) target the generation of adversarial examples for debugging and retraining purposes. Adversarial examples are slightly perturbed inputs (such as images) which may fool a neural network into generating undesirable results (such as "wrong" classifications). Using satisfying assignments from the SAT-solving stage of our verification procedure, we are also able to generate counterexamples to the BNN verification problem. Our work, however, goes well beyond current approaches to generating adversarial examples, in that it does not only support debugging and retraining purposes: our verification algorithm establishes formal correctness results for neural-network-like structures.

6 Conclusions
We solve the problem of verifying BNNs by reduction to the problem of verifying combinational circuits, which itself is reduced to solving SAT problems. Altogether, our experiments indicate that this hardware-verification-centric approach, in connection with our BNN-specific transformations and optimizations, scales to BNNs with thousands of inputs and nodes. This kind of scalability makes our verification approach attractive for automatically establishing correctness results, at least for moderately-sized BNNs as used on current embedded devices.
Our developments for efficiently encoding BNN verification problems, however, might also prove useful in optimizing the forward evaluation of BNNs. In addition, our verification framework may also be used for debugging and retraining purposes of BNNs; for example, for automatically generating adversarial inputs from failed verification attempts.
In the future we also plan to directly synthesize propositional clauses without the support of third-party tools such as Yosys, in order to avoid extraneous transformations and repetitive work in the synthesis workflow. Similar optimizations of the current verification tool chain should result in substantial performance improvements. It might also be interesting to investigate incremental verification techniques for BNNs, since the weights and structure of these learning networks might adapt and change continuously.
Finally, our proposed verification workflow might be extended to synthesis problems, such as synthesizing bias terms in BNNs without sacrificing performance, or synthesizing weight assignments in a property-driven manner. These kinds of synthesis problems for BNNs reduce to 2QBF problems, which are satisfiability problems with a top-level exists-forall quantification. The main challenge for solving these kinds of synthesis problems for the typical networks encountered in practice is, again, scalability.
Acknowledgments
 (Version 1)

We thank Dr. Ljubo Mercep from Mentor Graphics for pointing us to some recent results on quantized neural networks, and Dr. Alan Mishchenko from UC Berkeley for his kind support regarding ABC.
 (Version 2)

We additionally thank Dr. Leonid Ryzhyk from VMware for pointing us to their work on efficient SAT encodings of individual neurons. We further thank Dr. Alan Mishchenko from UC Berkeley for sharing his knowledge regarding sorting networks.
References
 [1] C. Ambühl, M. Mastrolilli, and O. Svensson. Inapproximability results for maximum edge biclique, minimum linear arrangement, and sparsest cut. SIAM Journal on Computing, 40(2):567–596, 2011.
 [2] K. E. Batcher. Sorting networks and their applications. In AFIPS. ACM, 1968.
 [3] S. Bhattacharyya, D. Cofer, D. Musliner, J. Mueller, and E. Engstrom. Certification considerations for adaptive systems. Technical report NASA/CR-2015-218702, 2015.
 [4] A. Biere. Picosat essentials. Journal on Satisfiability, Boolean Modeling and Computation, 4:75–97, 2008.
 [5] M. Bojarski, D. D. Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel, M. Monfort, U. Muller, J. Zhang, X. Zhang, J. Zhao, and K. Zieba. End to end learning for self-driving cars. CoRR, abs/1604.07316, 2016.
 [6] R. Brayton and A. Mishchenko. ABC: An academic industrialstrength verification tool. In CAV, pages 24–40. Springer, 2010.
 [7] C. Chen, A. Seff, A. Kornhauser, and J. Xiao. Deepdriving: Learning affordance for direct perception in autonomous driving. In ICCV, pages 2722–2730, 2015.
 [8] C.-H. Cheng, G. Nührenberg, and H. Ruess. Maximum resilience of artificial neural networks. In ATVA, pages 251–268. Springer, 2017.
 [9] M. Courbariaux, I. Hubara, D. Soudry, R. El-Yaniv, and Y. Bengio. Binarized neural networks: Training deep neural networks with weights and activations constrained to +1 or -1. arXiv preprint arXiv:1602.02830, 2016.

 [10] R. Ehlers. Formal verification of piecewise linear feed-forward neural networks. In ATVA, pages 289–306. Springer, 2017.
 [11] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In NIPS, pages 2672–2680, 2014.
 [12] I. J. Goodfellow, J. Shlens, and C. Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.
 [13] X. Huang, M. Kwiatkowska, S. Wang, and M. Wu. Safety verification of deep neural networks. In CAV, pages 3–29. Springer, 2017.
 [14] B. Huval, T. Wang, S. Tandon, J. Kiske, W. Song, J. Pazhayampallil, M. Andriluka, P. Rajpurkar, T. Migimatsu, R. ChengYue, et al. An empirical evaluation of deep learning on highway driving. arXiv preprint arXiv:1504.01716, 2015.
 [15] G. Katz, C. W. Barrett, D. L. Dill, K. Julian, and M. J. Kochenderfer. Reluplex: An efficient SMT solver for verifying deep neural networks. In CAV, pages 97–117. Springer, 2017.
 [16] M. Kim and P. Smaragdis. Bitwise neural networks. arXiv preprint arXiv:1601.06071, 2016.
 [17] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, pages 1097–1105, 2012.

 [18] Y. LeCun. The MNIST database of handwritten digits. http://yann.lecun.com/exdb/mnist/, 1998.
 [19] D. Lenz, F. Diehl, M. Troung Le, and A. Knoll. Deep neural networks for markovian interactive scene prediction in highway scenarios. In IV. IEEE, 2017.
 [20] A. Lomuscio and L. Maganti. An approach to reachability analysis for feedforward relu neural networks. arXiv preprint arXiv:1706.07351, 2017.
 [21] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In CVPR, pages 3431–3440. IEEE, 2015.
 [22] S.-M. Moosavi-Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard. Universal adversarial perturbations. arXiv preprint arXiv:1610.08401, 2016.
 [23] N. Narodytska, S. P. Kasiviswanathan, L. Ryzhyk, M. Sagiv, and T. Walsh. Verifying properties of binarized deep neural networks. arXiv preprint arXiv:1709.06662, 2017.
 [24] R. Peeters. The maximum edge biclique problem is NP-complete. Discrete Applied Mathematics, 131(3):651–654, 2003.
 [25] L. Pulina and A. Tacchella. An abstractionrefinement approach to verification of artificial neural networks. In CAV, pages 243–257. Springer, 2010.
 [26] P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun. Overfeat: Integrated recognition, localization and detection using convolutional networks. arXiv preprint arXiv:1312.6229, 2013.
 [27] M. Soos. The CryptoMiniSat 5 set of solvers at SAT Competition 2016. In Proceedings of SAT Competition 2016, page 28, 2016.
 [28] J. Stallkamp, M. Schlipsing, J. Salmen, and C. Igel. The German Traffic Sign Recognition Benchmark: A multiclass classification competition. In IEEE International Joint Conference on Neural Networks, pages 1453–1460, 2011.
 [29] L. Sun, C. Peng, W. Zhan, and M. Tomizuka. A fast integrated planning and control framework for autonomous driving via imitation learning. arXiv preprint arXiv:1707.02515, 2017.
 [30] C. Wolf, J. Glaser, and J. Kepler. Yosys: a free Verilog synthesis suite. In Proceedings of the 21st Austrian Workshop on Microelectronics (Austrochip), 2013.
Appendix: Proofs of Theorems
Theorem 1.
The problem of BNN safety verification is NP-complete.
Proof.
Recall that for a given BNN and a relation φ specifying an undesired property between the bipolar input and output domains of the BNN, the BNN safety verification problem asks if there exists an input a to the BNN such that the risk property φ(a, b) holds, where b is the output of the BNN for input a.
(NP) Given an input, computing the output and checking whether the risk property holds can easily be done in time linear in the size of the BNN and the size of the property formula.
(NP-hardness) The NP-hardness proof is via a reduction from 3SAT to BNN safety verification. Consider a 3-CNF formula over variables x_1, ..., x_n with clauses C_1, ..., C_m, where each clause C_k has three literals l_{k1}, l_{k2}, l_{k3}. We build a single-layer BNN whose inputs are x_0 (the constant +1 bias input) and x_1, ..., x_n (one input per CNF variable), connected to m neurons.

For neuron k, its weights and connections to the previous layer are determined by clause C_k.

If l_{k1} is a positive literal x_v, then in the BNN create a link from x_v to neuron k with weight -1. If l_{k1} is a negative literal ¬x_v, then in the BNN create a link from x_v to neuron k with weight +1. Proceed analogously for l_{k2} and l_{k3}.

Add an edge from x_0 to neuron k with weight -1.

Add a bias term -1 to neuron k.

For example, the translation of a clause (x_u ∨ ¬x_v ∨ x_w) will create in the BNN the weighted-sum computation -x_u + x_v - x_w - x_0 - 1.

Since x_0 is the constant +1, if there exists any assignment that makes the clause true, then, by interpreting a true assignment in the CNF as the value +1 of the corresponding BNN input and a false assignment as -1, the weighted sum is at most -1, i.e., the output of the neuron is -1. Only when l_{k1}, l_{k2}, and l_{k3} are all false (i.e., the assignment falsifies the clause) does the weighted sum equal +1, thereby setting the output of the neuron to +1.

Following this observation, it is easy to derive that the 3SAT formula is satisfiable iff, in the generated BNN, there exists an input for which the risk property "every neuron outputs -1" holds. The correspondence interprets a variable assigned true in 3SAT as the input value +1 in the BNN and a variable assigned false as -1. ∎
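This reduction can be spot-checked by brute force on small formulas. The sketch below is a hedged reading of the construction: the clause neuron uses weight −1 for positive literals, +1 for negative ones, and a total offset of −2 (the bias-node edge plus the bias term), so it outputs +1 exactly when the clause is falsified.

```python
# Hedged sketch of the 3SAT -> BNN safety reduction, checked exhaustively.
import itertools

def clause_neuron(clause, assignment):
    """clause: three nonzero ints, DIMACS-style (v or -v); assignment: v -> +/-1.
    Outputs +1 iff the clause is falsified by the assignment."""
    s = sum((-1 if lit > 0 else 1) * assignment[abs(lit)] for lit in clause)
    s += -2  # edge from the +1 bias node (weight -1) plus bias term (-1)
    return 1 if s >= 0 else -1

def reduction_agrees(cnf, n_vars):
    """Brute force: the CNF is satisfied by an assignment iff every clause
    neuron outputs -1 on the corresponding bipolar input."""
    for vals in itertools.product([1, -1], repeat=n_vars):
        a = {v + 1: vals[v] for v in range(n_vars)}
        sat = all(any((a[abs(l)] == 1) == (l > 0) for l in c) for c in cnf)
        safe = all(clause_neuron(c, a) == -1 for c in cnf)
        if sat != safe:
            return False
    return True

cnf = [(1, -2, 3), (-1, 2, 3), (1, 2, -3)]  # illustrative 3-CNF
print(reduction_agrees(cnf, 3))  # prints True
```

A satisfied literal contributes −1 and a falsified one +1, so a satisfied clause gives a weighted sum of at most 1 − 2 = −1, while a falsified clause gives 3 − 2 = +1, matching the case analysis in the proof.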
Theorem 2 (Hardness of factoring optimization).
The factoring optimization problem, even when k = 1, is NP-hard.
Proof.
The proof proceeds by a polynomial reduction from the problem of finding a maximum edge biclique (MEB) in a bipartite graph [24]. (Let G = (A ∪ B, E) be a bipartite graph with disjoint vertex sets A and B and an edge set E connecting vertices in A to vertices in B. A pair of disjoint subsets A' ⊆ A and B' ⊆ B is called a biclique if (a, b) ∈ E for all a ∈ A' and b ∈ B'; its edges thus form a complete bipartite subgraph of G with |A'| · |B'| edges.) Given a bipartite graph G, the reduction is defined as follows.

For a_i, the i-th element of A, create a neuron n_i in layer l.

Create an additional neuron n_{|A|+1} in layer l.

For b_j, the j-th element of B, create a neuron n_j in layer l-1.

Create weights w_{(|A|+1)j} = +1 for all j, i.e., the additional neuron is connected to every neuron of layer l-1.

If (a_i, b_j) ∈ E, then create w_{ij} = +1; otherwise, create w_{ij} = -1.

This construction can clearly be performed in polynomial time. Figure 4 illustrates the construction process. It is not difficult to observe that G has a maximum edge biclique with s edges iff the neural network at layer l has a factoring whose saving equals s. The gray area in Figure 4a shows the structure of the maximum edge biclique. For Figure 4c, the saving of the corresponding factoring is the same as the edge count of the biclique. ∎
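The correspondence between biclique edges and factoring savings can be illustrated concretely. The sketch below is our reading of the reduction (edges become +1 weights, non-edges −1, plus one extra all-(+1) neuron); the graph is made up.

```python
# Hedged sketch of the Theorem 2 reduction: a biclique (A', B') yields the
# factoring (A' plus the extra neuron, B') whose saving (|I| - 1) * |J|
# equals the biclique's edge count.

def graph_to_weights(n_a, n_b, edges):
    W = [[1 if (i, j) in edges else -1 for j in range(n_b)] for i in range(n_a)]
    W.append([1] * n_b)  # extra neuron, connected everywhere with weight +1
    return W

def factoring_saving(W, I, J):
    ref = min(I)
    assert all(W[i][j] == W[ref][j] for i in I for j in J), "not a factoring"
    return (len(I) - 1) * len(J)

# Illustrative bipartite graph: the biclique {0, 1} x {0, 1} has 4 edges.
edges = {(0, 0), (0, 1), (1, 0), (1, 1), (2, 2)}
W = graph_to_weights(3, 3, edges)
print(factoring_saving(W, {0, 1, 3}, {0, 1}))  # include extra neuron 3; prints 4
```

The extra neuron is what makes the factoring's saving equal the edge count rather than fall one row short, mirroring its role in the proof of Theorem 3 below.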
The following inapproximability result shows that even having an approximation algorithm for the factoring optimization problem is hard.
Theorem 3.
Let ε > 0 be an arbitrarily small constant. If there is a PTAS for the factoring optimization problem, even when k = 1, then there is a (probabilistic) algorithm that decides whether a given SAT instance of size n is satisfiable in time 2^{n^ε}.
Proof.
We will prove the theorem by showing that a PTAS for the factoring optimization problem can be used to construct a PTAS for MEB. The result then follows from the inapproximability of MEB assuming the exponential time hypothesis [1].
Assume that we have a (1 - ε)-approximation algorithm for the factoring optimization problem. We formulate the following algorithm for MEB:
Input: an MEB instance (a bipartite graph G)
Output: a biclique (A', B') in G

1. Perform the reduction from the proof of Theorem 2 to obtain a factoring instance.

2. Run the (1 - ε)-approximation algorithm on the factoring instance, obtaining a factoring (I, J).

3. Return the biclique corresponding to (I \ {additional neuron}, J).

Remark: step 3 is a small abuse of notation; strictly, it should return the original vertices corresponding to these neurons.
Now we prove that the resulting algorithm is a (1 - ε)-approximation algorithm for MEB. Note that, by our reduction, two corresponding MEB and factoring instances have the same optimal value, i.e., OPT_MEB = OPT_factoring.

In step 3 the algorithm removes the additional neuron from I. This is valid since we can assume w.l.o.g. that the factoring (I, J) returned in step 2 contains the additional neuron: this neuron is connected to all neurons from the previous layer by construction, so it can be added to any factoring. The following relation holds for the number of edges in the biclique (A', B') returned by the algorithm:
    |E(A', B')| = (|I| - 1) · |J|           (3a)
                ≥ (1 - ε) · OPT_factoring   (3b)
                = (1 - ε) · OPT_MEB         (3c)

The inequality in step (3b) holds by the assumption that we have a (1 - ε)-approximation algorithm for the factoring problem, and (3c) follows from the construction of our reduction. Equations (3a)-(3c) together with the result of [1] imply Theorem 3.
∎