Neural Networks (NNs) increasingly play vital roles within safety-critical cyber-physical systems (CPSs), where they either make safety-critical decisions directly (as in the case of low-level controllers) or influence high-level supervisory decision making (e.g. through vision networks). Ensuring the safety of such systems thus demands algorithms capable of formally verifying the safety of their NN components. However, since CPS safety is characterized by closed-loop behavior, it is not enough to pragmatically verify the input/output behavior of a NN component once. Such a verifier must additionally be as fast as possible, so that it can feasibly be invoked many times during the course of verifying a closed-loop property (as in (Tran et al., 2020b; Wang et al., 2020) for example).
In this paper, we propose Fast BATLLNN as an input/output verifier for Rectified Linear Unit (ReLU) NNs with a special emphasis on execution time. In particular, Fast BATLLNN takes a relatively uncommon approach among verifiers in that it explicitly trades off generality for execution time: whereas most NN verifiers are designed to work for arbitrary deep NNs and arbitrary half-space output properties (or the intersections thereof) (19), Fast BATLLNN instead forgoes this generality in networks and properties to reduce verification time. That is, Fast BATLLNN is only able to verify a very specific subset of deep NNs: those characterized by a particular architecture, the Two-Level Lattice (TLL) NN architecture introduced in (Ferlez and Shoukry, 2019); see also Figure 1 and Section 2.3. Similarly, Fast BATLLNN is restricted to verifying only “box”-like output constraints (formally, hyper-rectangles).
Through extensive experiments, we demonstrate that sacrificing generality in both of these senses can lead to dramatically faster verification times: compared to state-of-the-art general NN verifiers, Fast BATLLNN is 400-1900x faster at verifying the same TLL NNs and properties.
In this sense, Fast BATLLNN is primarily inspired by the recent result (Ferlez and Shoukry, 2020), which showed that verifying a Two-Level Lattice (TLL) NN is an “easier” problem than verifying a general deep NN. Specifically, (Ferlez and Shoukry, 2020) exhibits a polynomial time
algorithm to verify a TLL with respect to an arbitrary half-space output property (i.e. polynomial-time in the number of neurons). Indeed, the semantic structure of the TLL architecture is precisely what makes polynomial-time verification possible: in a TLL NN, the neuronal parameters provide direct (polynomial-time) access to each of the affine functions that appear in its response, viewed as a Continuous Piecewise Affine (CPWA) function (recall that any ReLU NN implements a CPWA: i.e., a function that continuously switches between finitely many affine functions) (Ferlez and Shoukry, 2020). Since the same cannot be said of the neuronal parameters in a general deep NN, this indicates that considering only TLL NNs can facilitate a much faster verifier.
Thus, the major contribution of Fast BATLLNN is to further leverage the semantics of the TLL architecture under the additional assumption of verifying box-type (or hyper-rectangle) output properties. In particular, a TLL NN implements (component-wise) min and max lattice operations to compute each of its real-valued output components (as illustrated in Figure 1; see also Section 2.3). This fact can be used to dramatically simplify the verification of box-like output properties, which are component-wise real-valued intervals – and hence mutually decoupled. Importantly, the algorithm proposed in (Ferlez and Shoukry, 2020) cannot take advantage of these lattice operations in the same way, since it considers only general half-space properties, which naturally couple the various output components of the TLL NN. As a result, we can show that Fast BATLLNN has a big-O verification complexity whose crucial exponent is half the size of the analogous exponent in (Ferlez and Shoukry, 2020). The performance consequences of this improvement are reflected in our experimental results.
Before we proceed further, it is appropriate to make a few remarks about the restrictions inherent to Fast BATLLNN. Between the two restrictions of significance – the restriction to TLL NNs and the restriction to box-like output properties – the former is apparently the more onerous: box-like properties can be used to adaptively assess more complicated properties whenever box-like properties are themselves inadequate. However, even the restriction to TLL NNs is less imposing than it may at first seem. On the one hand, it is known that TLL NNs are capable of representing any Continuous Piecewise-Affine (CPWA) function (Ferlez and Shoukry, 2019; Tarela and Martínez, 1999); i.e., any function that continuously switches between a finite set of affine functions. Since deep NNs themselves realize CPWA functions, the TLL NN architecture is able to instantiate any function that a generic deep NN can. We do not consider the problem of converting a deep NN to the TLL architecture (nor the possible loss in parametric efficiency that may result), but the extremely fast verification times achievable with Fast BATLLNN suggest that the trade-off is very likely worth the cost. On the other hand, there is a spate of results which suggest that the TLL architecture is a useful architecture within which to do closed-loop controller design in the first place (Ferlez and Shoukry, 2019; Cruz et al., 2021; Ferlez et al., 2020) – potentially obviating the need for such a conversion at all.
Related work: To the best of our knowledge, the work (Ferlez and Shoukry, 2020) is the only current example of an attempt to verify a specific NN architecture rather than a generic deep NN.
The literature on more general NN verifiers is far richer. These NN verifiers can generally be grouped into four categories: (i) SMT-based methods, which encode the problem into a Satisfiability Modulo Theory problem (Ehlers, 2017; Katz et al., 2017, 2019); (ii) MILP-based solvers, which directly encode the verification problem as a Mixed Integer Linear Program (Anderson et al., 2020; Bastani et al., 2016; Bunel et al., 2020; Cheng et al., 2017; Fischetti and Jo, 2018; Lomuscio and Maganti, 2017; Tjeng et al., 2017); (iii) reachability-based methods, which perform layer-by-layer reachability analysis to compute the reachable set (Bak et al., 2020; Fazlyab et al., 2019; Gehr et al., 2018; Ivanov et al., 2019; Tran et al., 2020a; Wang et al., 2018b; Xiang et al., 2017, 2018); and (iv) convex relaxation methods (Dvijotham et al., 2018; Wang et al., 2018a; Wong and Kolter, 2017; Khedr et al., 2020). Methods in categories (i)-(iii) tend to suffer from poor scalability, especially relative to convex relaxation methods. In this paper, we perform comparisons with state-of-the-art examples from category (iv) (Khedr et al., 2020; Wang et al., 2021) and category (iii) (Bak et al., 2020), as they perform well overall in the standard verifier competition (19).
2.1. Notation
We will denote the real numbers by ℝ. For an (n × m) matrix (or vector) A, we will use the notation [A]_{i,j} to denote the element in the i-th row and j-th column of A. Analogously, the notation [A]_{i,·} will denote the i-th row of A, and [A]_{·,j} will denote the j-th column of A; when A is a vector instead of a matrix, both notations return the corresponding element of the vector. We will use bold parenthesis to delineate the arguments to a function that returns a function. We use two special forms of this notation for an (m × n) matrix W and an (m × 1) vector b:
We also use the functions First and Last to return the first and last elements of an ordered list (or, by overloading, a vector in ℝⁿ). The function Concat concatenates two ordered lists or, by overloading, concatenates two vectors in ℝⁿ and ℝᵐ along their (common) nontrivial dimension to obtain a third vector in ℝ^{n+m}. Finally, an over-bar indicates (topological) closure of a set: i.e. P̄ is the closure of P.
2.2. Neural Networks
We will exclusively consider Rectified Linear Unit Neural Networks (ReLU NNs). A K-layer ReLU NN is specified by K layer functions, of which we allow two kinds: linear and nonlinear. A nonlinear layer is a function:

    z ↦ max(W z + b, 0)

where the max function is taken element-wise, and W and b are a matrix and a vector of appropriate dimensions. A linear layer is the same as a nonlinear layer, except it omits the max nonlinearity in its layer function; a linear layer will be indicated with a superscript lin. Thus, a K-layer ReLU NN function is specified by functionally composing K such layer functions whose input and output dimensions are compatible, i.e. each layer's input dimension matches the preceding layer's output dimension. We always consider the final layer to be a linear layer, and we define the overall NN function as the composition of these K layers.
To make the dependence on parameters explicit, we will index a ReLU function by a list of matrices Θ (note that Θ is not the concatenation of the layer matrices into a single large matrix: it preserves information about the sizes of the constituent layer matrices); in this respect, we will often use the notation NN_Θ for the function specified by the parameter list Θ.
2.3. Two-Level-Lattice (TLL) Neural Networks
In this paper, we are exclusively concerned with Two-Level Lattice (TLL) ReLU NNs, as noted above. In this subsection, we formally define NNs with the TLL architecture using the succinct method exhibited in (Ferlez and Shoukry, 2020); the material in this subsection is derived from (Ferlez and Shoukry, 2019, 2020).
The most efficient way to characterize a TLL NN is by way of three generic NN composition operators. Hence, we have the following three definitions, which serve as auxiliary results in order to eventually define a TLL NN in Definition 4.
Definition 1 (Sequential (Functional) Composition).
Let NN_Θ₁ and NN_Θ₂ be two NNs such that the output dimension of NN_Θ₁ matches the input dimension of NN_Θ₂. Then the sequential (or functional) composition of NN_Θ₁ and NN_Θ₂, i.e. NN_Θ₂ ∘ NN_Θ₁, is a well defined NN, and it can be represented by the parameter list obtained by concatenating the layer parameters of Θ₁ with those of Θ₂.
Definition 2 (Parallel Composition).
Let NN_Θ₁ and NN_Θ₂ be two J-layer NNs that accept inputs of the same dimension, with parameter lists Θ₁ and Θ₂, respectively.
Then the parallel composition of NN_Θ₁ and NN_Θ₂ is a NN given by the parameter list obtained by combining Θ₁ and Θ₂ layer-by-layer: the first layers are stacked on the common input, and each subsequent pair of layers is combined block-diagonally.
That is, the parallel composition accepts an input of the same size as (both) NN_Θ₁ and NN_Θ₂, but it has as many outputs as NN_Θ₁ and NN_Θ₂ combined.
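To make Definition 2 concrete, the layer-wise combination can be sketched in a few lines of NumPy; the parameter-list convention below (each network as a list of (W, b) layer pairs) is a hypothetical simplification, not the paper's exact notation:

```python
import numpy as np

def parallel_compose(paramsA, paramsB):
    """Parallel composition of two J-layer NNs (cf. Definition 2):
    the first layers are stacked on the shared input; every deeper
    pair of layers is combined block-diagonally.  params* are lists
    of (W, b) pairs; this is a sketch only."""
    (Wa, ba), (Wb, bb) = paramsA[0], paramsB[0]
    out = [(np.vstack([Wa, Wb]), np.concatenate([ba, bb]))]
    for (Wa, ba), (Wb, bb) in zip(paramsA[1:], paramsB[1:]):
        W = np.block([[Wa, np.zeros((Wa.shape[0], Wb.shape[1]))],
                      [np.zeros((Wb.shape[0], Wa.shape[1])), Wb]])
        out.append((W, np.concatenate([ba, bb])))
    return out
```

Evaluating the composed parameter list on a shared input reproduces the outputs of the two constituent networks, stacked.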
Definition 3 (N-element min / max NNs).
An N-element min network is denoted by the parameter list Θ_min, and is such that NN_Θmin(x) is the minimum from among the N components of its input x (i.e. minimum according to the usual order relation on ℝ). An N-element max network is denoted by Θ_max, and functions analogously. Constructions for these networks are described in (Ferlez and Shoukry, 2019).
With Definition 1 through Definition 3 in hand, it is now possible for us to define TLL NNs in just the same way as (Ferlez and Shoukry, 2020). We likewise proceed to first define a scalar (or real-valued) TLL NN; the structure of such a scalar TLL NN is illustrated in Figure 1. Then we extend this notion to a multi-output (or vector-valued) TLL NN.
Definition 4 (Scalar TLL NN (Ferlez and Shoukry, 2020)).
A NN that maps ℝⁿ to ℝ is said to be a TLL NN of size (N, M) if the size of its parameter list can be characterized entirely by the integers N and M as follows:
its parameter list composes an M-element max network with M parallel min networks, each of which operates on a subset of N common local linear functions chosen by a selector matrix S_j; and
each S_j has rows drawn from the rows of the N × N identity matrix I, for some length-M sequence of selector matrices S_1, …, S_M.
The N linear functions ℓ_1, …, ℓ_N implemented by the first (linear) layer of the network will be referred to as the local linear functions of the TLL NN; we assume for simplicity that these linear functions are unique. The matrices S_1, …, S_M will be referred to as the selector matrices. Each set s_j of row indices of I selected by S_j is said to be the selector set of S_j.
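For intuition, a scalar TLL NN of size (N, M) can be evaluated directly from its local linear functions and selector sets; the following is a minimal sketch with hypothetical names (the rows of W and entries of b hold the N local linear functions):

```python
import numpy as np

def scalar_tll(x, W, b, selector_sets):
    """Evaluate a scalar TLL NN: max over selector sets of the min of the
    selected local linear functions, i.e. max_j min_{i in s_j} ell_i(x).
    W is (N x n), b is (N,), selector_sets is a list of M index sets."""
    ell = W @ x + b  # values of all N local linear functions at x
    return max(min(ell[i] for i in s) for s in selector_sets)

# A 1-D example: f(x) = max(min(x, 1 - x), 0), a "bump" truncated at 0.
W = np.array([[1.0], [-1.0], [0.0]])
b = np.array([0.0, 1.0, 0.0])
sets = [{0, 1}, {2}]
```

The two-level structure is visible directly: the inner min ranges over each selector set, and the outer max ranges over the M resulting values.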
Definition 5 (Multi-output TLL NN (Ferlez and Shoukry, 2020)).
A NN that maps ℝⁿ to ℝᵐ is said to be a multi-output TLL NN of size (N, M) if its parameter list can be written as the parallel composition of m equally-sized scalar TLL NNs; these scalar TLLs will be referred to as the (output) components of the network.
2.4. Hyperplanes and Hyperplane Arrangements
Here we review notation for hyperplanes and hyperplane arrangements; these results will be important in the development of Fast BATLLNN. (Stanley, ) is the main reference for this section.
Definition 6 (Hyperplanes and Half-spaces).
Let ℓ : ℝⁿ → ℝ be an affine map. Then define:

    H_ℓ ≜ {x : ℓ(x) = 0},  H_ℓ⁻ ≜ {x : ℓ(x) < 0},  H_ℓ⁺ ≜ {x : ℓ(x) > 0}.

We say that H_ℓ is the hyperplane defined by ℓ in dimension n, and H_ℓ⁻ and H_ℓ⁺ are the negative and positive half-spaces defined by ℓ, respectively.
Definition 7 (Hyperplane Arrangement).
Let L ≜ {ℓ_1, …, ℓ_N} be a set of N affine functions, where each ℓ_i : ℝⁿ → ℝ. Then H_L ≜ {H_{ℓ_i} : i = 1, …, N} is an arrangement of N hyperplanes in dimension n.
Definition 8 (Region of a Hyperplane Arrangement).
Let H_L be an arrangement of N hyperplanes in dimension n defined by a set of affine functions L. Then a non-empty subset R ⊆ ℝⁿ is said to be a region of H_L if there is an indexing function 𝔦 : L → {−1, 0, +1} such that R = ⋂_{ℓ∈L} H_ℓ^{𝔦(ℓ)}, where 𝔦(ℓ) = −1 or +1 selects the corresponding half-space of ℓ and 𝔦(ℓ) = 0 selects H_ℓ itself; R is said to be n-dimensional or full-dimensional if it is non-empty and described by an indexing function with 𝔦(ℓ) ≠ 0 for all ℓ ∈ L.
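The indexing function of Definition 8 is just the sign pattern of a point with respect to the affine functions defining the arrangement; a minimal sketch (names hypothetical):

```python
def index_of(x, ells):
    """Indexing function of the region containing x: for each affine
    function ell in ells, record -1 (negative half-space), +1 (positive
    half-space), or 0 (on the hyperplane itself).  The region is
    full-dimensional exactly when no entry is 0."""
    sgn = lambda v: (v > 0) - (v < 0)
    return tuple(sgn(ell(x)) for ell in ells)
```

For instance, with two lines in the plane, the four full-dimensional regions correspond to the four sign patterns containing no zero entry.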
Theorem 9 ((Stanley, )).
Let H_L be an arrangement of N hyperplanes in dimension n. Then the number of regions of H_L is at most ∑_{k=0}^{n} (N choose k).
Remark 1.
Note that for a fixed dimension n, the bound in Theorem 9 grows like N^n, i.e. sub-exponentially in N.
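The bound of Theorem 9 and its polynomial growth for fixed dimension are easy to compute; a small sketch (the function name is hypothetical):

```python
from math import comb

def max_regions(N, n):
    """Upper bound of Theorem 9 on the number of regions of an
    arrangement of N hyperplanes in dimension n: sum_{k=0}^{n} C(N, k)."""
    return sum(comb(N, k) for k in range(n + 1))

# For fixed n the bound is polynomial in N (roughly N**n), far below the
# 2**N conceivable sign patterns: e.g. 10 lines in the plane yield at
# most 1 + 10 + 45 = 56 regions.
```

This gap between 2^N sign patterns and the polynomial region count is precisely what Fast BATLLNN exploits in Section 4.2.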
3. Problem Formulation
The essence of Fast BATLLNN is its focus on verifying TLL NNs with respect to box-like output constraints. Formally, Fast BATLLNN considers only verification problems of the following form (stated using notation from Section 2).
Problem 1.
Let 𝒩 be a multi-output TLL NN with m real-valued outputs. Also, let:
X ⊂ ℝⁿ be a closed, convex polytope specified by the intersection of finitely many half-spaces, i.e. X = {x : ℓ_k(x) ≤ 0, k = 1, …, K} where each ℓ_k is affine; and
Y ⊂ ℝᵐ be a closed hyper-rectangle, i.e. Y = [a_1, b_1] × ⋯ × [a_m, b_m] with a_k ≤ b_k for each k.
Then the verification problem is to decide whether the following formula is true:

    ∀ x ∈ X . 𝒩(x) ∈ Y.    (10)

If (10) is true, the problem is SAT; otherwise, it is UNSAT.
Note that the properties (and their interpretations) in Problem 1 are dual to the usual convention; it is more typical in the literature to associate “unsafe” outputs with a closed, convex polytope, and then the existence of such unsafe outputs is denoted by UNSAT (see (Tran et al., 2019) for example). However, we chose this formulation for Problem 1 because it is the one adopted by (Ferlez and Shoukry, 2020), and because it is more suited to NN reachability computations, one of the motivating applications of Fast BATLLNN. Indeed, to verify a property like (10), the typical dual formulation of Problem 1 would require a separate verifier call for each face of the output hyper-rectangle, assuming unbounded polytopes are verifiable (and then the verification would only be with respect to the interior of the hyper-rectangle). Of course this choice comes with a trade-off: Fast BATLLNN, which directly solves Problem 1, requires adaptation to verify the dual property of Problem 1; we return to this briefly at the end of this section, but it is ultimately left for future work.
In the case of Fast BATLLNN, there is another important reason to consider the stated formulation of Problem 1: both the output property and the NN have an essentially component-wise nature (see also Definition 5). As a result, a component-wise treatment of Problem 1 greatly facilitates the development and operation of Fast BATLLNN. To this end, we will find it convenient in the sequel to consider the following two verification problems; each is specified for a scalar TLL NN and a single real-valued output property. Moreover, we cast them in terms of the negation of the analogous formula derived from Problem 1; the reasons for this will become clear in Section 4.
Problem 1A (Scalar Upper Bound).
Let 𝒩 be a scalar TLL NN, let X ⊂ ℝⁿ be a closed convex polytope as in Problem 1, and let b ∈ ℝ.
Then the scalar upper bound verification problem for (𝒩, X, b) is to decide whether the following formula is true:

    ∃ x ∈ X . 𝒩(x) > b.    (11)

If (11) is true, the problem is UNSAT; otherwise, it is SAT.
Problem 1B (Scalar Lower Bound).
Let 𝒩 and X be as in Problem 1A, and let a ∈ ℝ.
Then the scalar lower bound verification problem for (𝒩, X, a) is to decide whether the following formula is true:

    ∃ x ∈ X . 𝒩(x) < a.    (12)

If (12) is true, the problem is UNSAT; otherwise, it is SAT.
Thus, note that the formulation of Problem 1 is such that it can be verified by evaluating a boolean formula that contains only instances of Problem 1A and Problem 1B. That is, the following formula has the same truth value as (10):

    ⋀_{k=1}^{m} ( ¬(∃ x ∈ X . 𝒩_k(x) > b_k) ∧ ¬(∃ x ∈ X . 𝒩_k(x) < a_k) )    (13)

where 𝒩_k denotes the k-th (scalar TLL) output component of 𝒩.
We reiterate, however, that the same is not true of the dual property to Problem 1. Consequently, Fast BATLLNN requires modification to verify such properties; this is a more or less straightforward procedure, but we defer it to future work, as noted above.
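The component-wise decomposition in (13) reduces the box property to a simple conjunction over scalar sub-problems; a sketch with hypothetical violation oracles (upper_violation(k, b) is assumed to return True iff some admissible input drives output k strictly above b, and lower_violation(k, a) analogously below a):

```python
def verify_box(upper_violation, lower_violation, bounds):
    """Combine 2*m scalar sub-problem results into the box verdict of
    formula (13): the box property is SAT iff no output component's
    upper or lower bound is violated.  `bounds` is a list of (a_k, b_k)
    intervals, one per output component."""
    return all(not upper_violation(k, b) and not lower_violation(k, a)
               for k, (a, b) in enumerate(bounds))
```

Each oracle call corresponds to one instance of Problem 1A or Problem 1B, which Fast BATLLNN solves as described in Section 4.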
4. Fast BATLLNN: Theory
In this section, we develop the theoretical underpinnings of Fast BATLLNN. As noted in Section 3, the essential insight of our algorithm is captured by our solutions to Problems 1A and 1B. Thus, this section is organized primarily around solving sub-problems of these forms; at the end of this section, we will show how to combine these results into a verification algorithm for Problem 1, and then we will analyze the overall computational complexity of Fast BATLLNN.
4.1. Verifying Problem 1A
Problem 1A, as stated above, regards the TLL NN to be verified merely as a map from inputs to outputs; this is the behavior that we wish to verify, after all. However, this point of view obscures the considerable semantic structure intrinsic to the neurons in a TLL NN. In particular, recall that a scalar TLL NN 𝒩 implements the following function, which was derived from the Two-Level Lattice representation of CPWAs – see Section 2.3 and (Ferlez and Shoukry, 2019; Tarela and Martínez, 1999):

    𝒩(x) = max_{j=1,…,M} min_{i ∈ s_j} ℓ_i(x).    (14)
In (14), the sets s_j are the selector sets of 𝒩 and the ℓ_i are its local linear functions; both terminologies are formally defined in Definition 4. Upon substituting (14) into (11), we obtain the following, far more helpful representation of the property expressed in Problem 1A:

    ∃ x ∈ X . max_{j=1,…,M} min_{i ∈ s_j} ℓ_i(x) > b.    (15)
Literally, (15) compares the output property of interest, b, with a combination of real-valued min and max operations applied to scalar affine functions. Crucially, that comparison is made using the usual order relation on ℝ, which is exactly the same order relation upon which the min and max operations are based.
In particular, after distributing the existential over the max in (15), each resulting term

    {x ∈ X : ℓ_i(x) > b for all i ∈ s_j}    (17)

is actually an intersection of half spaces, some open and some closed: the open half spaces come from local linear functions that violate the property, and the closed half-spaces come from the input property X (see Problem 1). Moreover, there are at most M such intersections of relevance to Problem 1A: one for each of the M min operations present in (15). Finally, note that linear feasibility problems consisting entirely of non-strict inequality constraints are easy to solve: this suggests that we should first amend the strict inequality > with its non-strict counterpart ≥ before proceeding.
Formally, these ideas are captured in the following proposition.
Proposition 1.
Consider an instance of Problem 1A. Then that instance is UNSAT if and only if the set

    ⋃_{j=1}^{M} {x ∈ X : ℓ_i(x) > b for all i ∈ s_j}

is non-empty. Or equivalently, if for at least one of the selector sets s_j, the linear feasibility problem specified by the constraints

    ℓ_i(x) ≥ b for all i ∈ s_j,  x ∈ X    (18)

is feasible, and one of the following conditions is true:
it has non-empty interior; or
there is a feasible point that lies only on some subset of the constraints defining X (i.e., a feasible point at which none of the constraints ℓ_i(x) ≥ b is active).
Remark 2.
The conclusion of Proposition 1 also has the following important interpretation: the output property can be seen to “distribute across” the lattice operations in (15), and upon doing so, it converts the max lattice operation into set union and the min lattice operation into set intersection. Furthermore, since a TLL NN is constructed from two levels of lattice operations applied to affine functions, the innermost min lattice operation of the network is converted into a set intersection of half-spaces, i.e., a linear feasibility problem.
Of course, Proposition 1 also suggests a natural and obvious algorithm to verify an instance of Problem 1A. The pseudocode for this algorithm appears as the function verifyScalarUB in Algorithm 1, and its correctness follows directly from Proposition 1. In particular, verifyScalarUB simply evaluates the feasibility of each set of constraints in turn, until either a feasible problem is found or the list is exhausted. Then for each such feasible problem, verifyScalarUB attempts to find an interior point of the feasible set to reconcile it with the original strict inequalities that (18) relaxed; failing that, it searches for a vertex of the feasible set where no output property constraints are active. In practice, these operations can be combined by operating on the feasible point returned by the original feasibility program: an LP can be used to maximize the value of each active constraint in order to explore adjacent vertices. Note further that verifyScalarUB may not need to execute all possible linear programs for properties that are UNSAT: it can terminate early on the first “satisfied” linear program found.
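The per-term feasibility check at the heart of verifyScalarUB can be sketched with an off-the-shelf LP solver (here SciPy's linprog; the function name and data layout are hypothetical, and the interior-point/vertex refinement for the strict inequalities is omitted):

```python
import numpy as np
from scipy.optimize import linprog

def term_feasible(W, bias, sel, b, A, c):
    """Feasibility of one (relaxed) intersection term, cf. (18):
    is there an x with W[i] @ x + bias[i] >= b for every i in the
    selector set `sel`, and A @ x <= c (the input polytope X)?
    linprog expects A_ub @ x <= b_ub, so the >= rows are negated."""
    idx = sorted(sel)
    A_ub = np.vstack([-W[idx], A])
    b_ub = np.concatenate([bias[idx] - b, c])
    res = linprog(np.zeros(W.shape[1]), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * W.shape[1], method="highs")
    return res.status == 0  # 0 = optimal, i.e. the constraints are feasible
```

Each selector set yields one such LP, so at most M of them are needed, with early termination on the first feasible (and strictly satisfiable) term.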
4.2. Verifying Problem 1B
Naturally, we start our consideration of Problem 1B in very much the same way as Problem 1A. However, given that Problem 1B and Problem 1A are in some sense dual, the result is not nearly as convenient. In particular, substituting (14) into (12), and attempting to carry out the same sequence of manipulations that led to Proposition 1, results in the following formula:

    ∃ x ∈ X . ⋀_{j=1}^{M} ⋁_{i ∈ s_j} ( ℓ_i(x) < a )    (20)
which has the same truth value as formula (12). Unfortunately, (20) is not nearly as useful as the result in Proposition 1: under the “dual” output constraint 𝒩(x) < a, set intersection and set union are switched relative to Proposition 1. As a consequence, (20) is not itself a direct formulation in terms of intersections of half-spaces, i.e., linear feasibility problems.
Nevertheless, rearranging (20) into the union-of-intersections form of Proposition 1 is possible and profitable. Using basic set operations, it is possible to rewrite (20) in a union-of-intersections form as follows (the set intersection with X is moved outside the outer union for convenience):

    ∃ x ∈ X . ⋁_{(i_1,…,i_M) ∈ s_1×⋯×s_M} ⋀_{j=1}^{M} ( ℓ_{i_j}(x) < a )    (21)
By construction, (21) again has the same truth value as (12), but it is now in the desired form. In particular, it is verifiable by evaluating a finite number of half-space intersections much like Proposition 1.
Unfortunately, as a result of this rearrangement, the total number of mutual half-space intersections – or intersection “terms” – has grown from M to ∏_{j=1}^{M} |s_j|, where |s_j| is the cardinality of the selector set s_j. In particular, this number can easily exceed M: for example, if each of the s_j has exactly two elements, then there are 2^M total mutual intersection terms. Thus, verifying (21) in its current form would appear to require (in the worst case) exponentially more linear feasibility programs than the verifier we proposed for Problem 1A. This situation is not only non-ideal in terms of run-time: it would also seem to contradict (Ferlez and Shoukry, 2020), which describes an algorithm with polynomial-time complexity in N and M – and that algorithm is, after all, applicable to more general output properties.
Fortunately, however, there is one aspect not emphasized in this analysis so far: these intersection terms consist of half-spaces, and moreover, each of the half-spaces therein is specified by a hyperplane chosen from among a single, common group of N hyperplanes. This will ultimately allow us to identify each non-empty intersection term in (21) with a full, n-dimensional region from this hyperplane arrangement, and by Theorem 9 in Section 2.4, there are only polynomially many such regions. Effectively, then, the geometry of this hyperplane arrangement (with N hyperplanes in dimension n) prevents exponential growth in the number of intersection terms relevant to the truth of (21): indeed, the polynomial growth, N^n, means that many of those intersection terms cannot correspond to valid regions in the arrangement (of course these results apply when n is fixed; see also (Ferlez and Shoukry, 2020), and the comments therein pertaining to NN verification encodings of 3-SAT problems (Katz et al., 2017)).
In particular, consider the following set of affine functions, which in turn defines an arrangement of N hyperplanes in ℝⁿ:

    L ≜ { ℓ_i − a : i ∈ {1, …, N} }.    (22)

Let H_L denote the corresponding arrangement. Now, consider any index (i_1, …, i_M) ∈ s_1 × ⋯ × s_M specifying an intersection term in (21), and suppose without loss of generality that i_1, …, i_K are the only unique indices therein (an assumption we carry forward). Then, using the notation introduced in Definition 6, it is straightforward to write the corresponding intersection term as

    ⋂_{k=1}^{K} H⁻_{ℓ_{i_k}−a},    (23)

where L is as defined in (22). As a consequence, we conclude that (23) is non-empty if and only if it contains a region of H_L described by (an extension of) the partial indexing function

    𝔦 : ℓ_{i_k} − a ↦ −1,  k = 1, …, K.    (24)

Although it seems unnecessary to introduce the indexing function 𝔦, this notation directly connects (23) to full-dimensional regions of the arrangement H_L. Indeed, it states that the intersection term of interest is non-empty if and only if there is a full-dimensional region in the hyperplane arrangement whose index function agrees with one of the partial indexing functions described in (24). More simply, said intersection term is non-empty if and only if it contains a full-dimensional region from the arrangement; such a region can be said to “witness” the non-emptiness of the intersection term. This idea is illustrated in Figure 2.
Formally, we have the following proposition.
Proposition 2.
Consider an instance of Problem 1B. Then that instance is UNSAT if and only if the union of intersection terms in (21) is non-empty.
And this is the case if and only if there exists an index (i_1, …, i_M) ∈ s_1 × ⋯ × s_M, with distinct elements denoted by i_1, …, i_K, such that the following holds:
there exists a region in H_L, specified by an indexing function 𝔦, such that 𝔦(ℓ_{i_k} − a) = −1 for each k = 1, …, K, and the region intersects X.
If such a region exists, then it is said to witness the non-emptiness of the corresponding intersection term with the index (i_1, …, i_M).
The proof follows from the above manipulations. ∎
Proposition 2 establishes a crucial identification between full-dimensional regions in a hyperplane arrangement and the non-empty intersection terms in (21), a verification formula equivalent to the satisfiability of Problem 1B. However, it is still framed in terms of individual indices of the form (i_1, …, i_M), which are too numerous to enumerate for reasons noted above. Thus, converting Proposition 2 into a practical and fast algorithm to solve Problem 1B entails one final step: efficiently evaluating a full-dimensional region of the arrangement to determine whether it matches any index of this form. This will finally lead to Fast BATLLNN's algorithm to verify an instance of Problem 1B by enumerating the regions of the arrangement instead of enumerating all of the indices.
Predictably, Fast BATLLNN essentially takes a greedy approach to this problem. In particular, consider a full-dimensional region R of the hyperplane arrangement, and suppose that R is specified by its index function (see Definition 8).
According to Proposition 2, R will be a witness to a violation of Problem 1B if each of its negative hyperplanes (those hyperplanes on whose negative side R lies) can be matched to an element of one of the selector sets, s_j. Thus, to establish whether R corresponds to a non-empty intersection term, we can iterate over its negative hyperplanes, checking each one for membership in any one of the s_j. This iteration proceeds as long as each negative hyperplane is found to be an element of some s_j. If all negative hyperplanes of R can be matched in this way, then the region is a witness to a violation of Problem 1B as per Proposition 2. If, however, a negative hyperplane of R is found to belong to no s_j, then the iteration terminates, since the region cannot be a witness to a violation of Problem 1B. This algorithm amounts to a greedy matching of the negative hyperplanes of R, and it works by effectively examining the smallest intersection term to which the region can be a witness. The pseudocode for this algorithm, with an outer loop to iterate over regions of the arrangement, appears as verifyScalarLB in Algorithm 2.
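The greedy scan just described can be transcribed almost literally; the following is only a sketch of that membership test, under the assumption that hyperplanes are identified with the indices of their local linear functions (data layout hypothetical):

```python
def region_is_witness(negative_hyperplanes, selector_sets):
    """Greedy membership scan from verifyScalarLB: walk the region's
    negative hyperplanes and stop at the first one contained in no
    selector set; if every negative hyperplane is matched to some
    selector set, the region is reported as a witness."""
    for h in negative_hyperplanes:
        if not any(h in s for s in selector_sets):
            return False  # early termination: region cannot be a witness
    return True
```

As in the text, the scan terminates early on the first unmatched negative hyperplane, so most non-witness regions are rejected cheaply.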
4.3. On the Complexity of Fast BATLLNN
Given the remarks prefacing equation (13), it suffices to consider the complexity of verifying Problem 1A and Problem 1B individually. To simplify the notation in this section, we denote the complexity of running a linear program with K constraints in n variables by LP(K, n). Note also: we consider complexities for a fixed input dimension n.
4.3.1. Complexity of Problem 1A
Analyzing the complexity of verifyScalarUB in Algorithm 1 is straightforward. There are M total min (or intersection) terms, and each of these requires: one LP to check for feasibility (line 5 of Algorithm 1); followed by at most N LPs to find an interior point (line 8 of Algorithm 1). Thus, the complexity of verifyScalarUB is bounded by O(M · (N + 1)) such LP calls.
4.3.2. Complexity of Problem 1B
Analyzing the runtime complexity of verifyScalarLB in Algorithm 2 is also more or less straightforward, given an algorithm that enumerates the regions of a hyperplane arrangement. Fast BATLLNN uses an algorithm very similar to the reverse search algorithm described in (Avis and Fukuda, 1996) and improved slightly in (Ferrez et al., 2001). For a hyperplane arrangement consisting of N hyperplanes in dimension n, that reverse search algorithm has a per-region complexity bounded by O(N · LP(N, n)).
Indeed, the per-region complexity of the loops in verifyScalarLB is easily seen to be bounded by O(N · M) operations per region. Thus, it remains to evaluate the complexity of checking the intersection with X (see line 4 of Algorithm 2); however, this only appears as a separate operation for pedagogical simplicity. Fast BATLLNN actually follows the technique in (Ferlez and Shoukry, 2020) to achieve the same assertion: the hyperplanes describing X are added to the arrangement, and any region lying on the infeasible side of one of those hyperplanes is ignored. This can be done with the additional per-region complexity associated with the size of the larger arrangement, but without increasing the number of regions evaluated beyond the bound of Theorem 9. Thus, we claim the complexity of Algorithm 2 is bounded by the per-region cost above multiplied by the region bound of Theorem 9.
4.3.3. Complexity of Fast BATLLNN Compared to (Ferlez and Shoukry, 2020)
We begin by adapting the TLL verification complexity reported in (Ferlez and Shoukry, 2020, Theorem 3) to the scalar TLLs and single output properties of Problem 1A and Problem 1B. In the notation of this paper, it is as follows:
It is immediately clear that Fast BATLLNN has a significant complexity advantage for either type of property. Even the more expensive verifyScalarLB has a runtime complexity whose crucial exponent is half the size of the analogous exponent in (Ferlez and Shoukry, 2020, Theorem 3), and that doesn't even count the larger LPs used in (Ferlez and Shoukry, 2020, Theorem 3).
5.1. General Implementation
The core algorithms of Fast BATLLNN, Algorithm 1 and Algorithm 2, are amenable to considerable parallelism. Thus, in order to make Fast BATLLNN as fast as possible, its implementation is focused on parallelism and concurrency as much as possible.
With this in mind, Fast BATLLNN is implemented using a high-performance concurrency abstraction library for Python called charm4py (6). charm4py uses an actor model to facilitate concurrent programming, and it provides a number of helpful features to achieve good performance with relatively little programming effort. For example, it employs a cooperative scheduler to eliminate race-conditions, and it transparently offers the standard Python pass-by-reference semantics for function calls on the same Processing Element (PE). Moreover, it can be compiled to run on top of Message Passing Interface (MPI), which allows a single code base to scale from an individual multi-core computer to a multi-computer cluster. Fast BATLLNN was written with the intention of being deployed this way: it offers flexibility in how its core algorithms are assigned to PEs, so as to better take advantage of both compute and memory resources that are spread across multiple computers.
5.2. Implementation Details for Algorithm 2
Between the two core algorithms of Fast BATLLNN, Algorithm 2 is the more challenging to parallelize. Indeed, Algorithm 1 has a trivial parallel implementation, since it consists primarily of a for loop over a known index set. In Algorithm 2, it is the for loop over the regions of a hyperplane arrangement that makes parallelization non-trivial. Hence, this section describes how Fast BATLLNN parallelizes the region enumeration of a hyperplane arrangement for Algorithm 2.
To describe the architecture of Fast BATLLNN's implementation of hyperplane region enumeration, we first briefly introduce the well-known reverse search algorithm for the same task (Ferrez et al., 2001), the algorithm on which Fast BATLLNN's implementation is loosely based. As its name suggests, it is a search algorithm: that is, it starts from a known region of the arrangement and searches for regions adjacent to it, then regions adjacent to those, and so on. In particular, though, the reverse search algorithm is fundamentally a depth-first search, and it uses a minimum-index rule to ensure that regions are not visited multiple times (i.e., functioning much as Bland's rule does in simplex solvers) (Ferrez et al., 2001; Avis and Fukuda, 1996). This type of search algorithm has the great benefit that it is memory efficient, since it tracks the current state of the search using only the memory required to track adjacency indices; even the information required to back-track over the current branch is computed rather than stored (Ferrez et al., 2001, pp. 10). However, the memory efficiency of (Ferrez et al., 2001) comes at the expense of parallelizability, precisely because the search state is stored using variables that are incremented with each descent down a branch.
The natural way to parallelize such a search is to allow multiple concurrent search workers, but have them enter their independent search results into a common, synchronized hash table. Assuming amortized constant-time hash operations, this solution does not affect the overall computational complexity; on the other hand, it comes with a steep memory penalty, since it could require storing all regions in the worst case. Nevertheless, there is a way to efficiently enable and coordinate multiple search processes while avoiding this excessive memory requirement.
To this end, Fast BATLLNN leverages a special property of the region adjacency structure in a hyperplane arrangement. In particular, the regions of a hyperplane arrangement can be organized into a leveled adjacency poset (Edelman, 1984). That is, relative to any initial base region, all of the regions in the arrangement can be grouped according to the number of hyperplanes that were "crossed" in the process of discovering them; the same idea is also implicit in (Ferrez et al., 2001; Avis and Fukuda, 1996). This leveled property of the adjacency poset is illustrated in Figure 3: the top pane shows a hyperplane arrangement with its regions labeled, and the bottom pane depicts the region adjacency poset for this arrangement, with levels indicated relative to a base region. For example, a search starting from the base region finds each first-level region by crossing a single hyperplane, and each second-level region by crossing two.
Thus, Fast BATLLNN still approaches the region enumeration problem as a search, but one that proceeds level-wise. All of the regions in the current level can easily be divided among the available processing elements, which then search in parallel for their immediately adjacent regions; the result of this search is a list of regions comprising the entire next level in the adjacency poset, which then becomes the current level, and the process repeats. From an implementation standpoint, searching the region adjacency structure level-wise offers a useful way of reducing Fast BATLLNN's memory footprint. In particular, once a level is fully explored, the regions it contains will never be seen again. Thus, Fast BATLLNN need only maintain a hash of the regions from one level at a time: the hash tables from previous levels can be safely discarded. In this way, Fast BATLLNN achieves a parallel region search without resorting to hashing the entire list of discovered regions.
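A minimal serial sketch of this level-wise search follows; the inner per-region loop is what Fast BATLLNN distributes across PEs. Here regions are encoded by sign vectors relative to the hyperplanes, region emptiness is checked with an LP over a bounding box, and all names, the margin `eps`, and the box size are our illustrative assumptions rather than Fast BATLLNN's actual implementation. Note how the Hamming distance to the base region's sign vector identifies a candidate's level, so hashes from earlier levels never need to be consulted:

```python
import numpy as np
from scipy.optimize import linprog

def region_nonempty(A, b, signs, box=10.0, eps=1e-6):
    # Region {x : s_i * (A_i @ x - b_i) > 0 for all i}; approximate the
    # strict inequalities with a small margin eps and check LP feasibility.
    S = np.asarray(signs, dtype=float)
    res = linprog(np.zeros(A.shape[1]),
                  A_ub=-(S[:, None] * A), b_ub=-(S * b) - eps,
                  bounds=[(-box, box)] * A.shape[1], method="highs")
    return res.status == 0

def enumerate_regions_levelwise(A, b, base_point):
    base = tuple(int(s) for s in np.sign(A @ base_point - b))
    levels, level, depth = [], {base}, 0
    while level:
        levels.append(sorted(level))
        depth += 1
        nxt = set()
        for r in level:                  # divided among PEs in Fast BATLLNN
            for i in range(len(b)):      # try crossing hyperplane i
                cand = list(r); cand[i] = -cand[i]; cand = tuple(cand)
                # A neighbor at Hamming distance `depth` from the base region
                # belongs to the next level; shallower candidates were found
                # in earlier (already-discarded) levels and can be skipped.
                if sum(c != s for c, s in zip(cand, base)) == depth \
                        and cand not in nxt and region_nonempty(A, b, cand):
                    nxt.add(cand)
        level = nxt                      # previous level's hash is discarded
    return levels

# Three lines in general position in the plane: x=0, y=0, x+y=1 (7 regions).
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([0.0, 0.0, 1.0])
levels = enumerate_regions_levelwise(A, b, np.array([2.0, 2.0]))
```

For this arrangement the search discovers levels of sizes 1, 3, 2, and 1, for the expected total of 7 regions.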
Finally, we note that a search-type algorithm for region enumeration has a further advantage for solving a problem like Problem 1B. In particular, a search algorithm reveals each new region at a relatively low computational cost (see (30)); this is in contrast to some other enumeration algorithms, which must run to completion before even one region is available. Since Algorithm 2 is structured so that it can terminate on the first violating region found, this is a considerable advantage for UNSAT problems, as a violating region may be found very early in the search.
We conducted a series of experiments to evaluate the performance and scalability of Fast BATLLNN as a TLL verifier, both in its own right and relative to general NN verifiers applied to TLL NNs. In particular, we conducted the following three experiments:
Exp. 1) Scalability of Fast BATLLNN as a function of TLL input dimension; the number of local linear functions and the number of selector sets remained fixed.
Exp. 2) Scalability of Fast BATLLNN as a function of the number of local linear functions, with a matching number of selector sets; the input dimension remained fixed.
Exp. 3) Comparison of Fast BATLLNN against state-of-the-art general NN verifiers on a common suite of TLL NNs and verification properties.
All experiments were run within a VMware Workstation Pro virtual machine (VM) running on a Linux host with 48 hyper-threaded cores and 256 GB of memory. All instances of the VM were given 64 GB of memory and a core count that is specified within each experiment. A timeout of 300 seconds was used in all cases.
6.1. Experimental Setup: Networks and Properties
6.1.1. TLL NNs Verified
Given that Problem 1 can be decomposed into instances of Problem 1A and Problem 1B, all of these experiments were conducted on scalar-output TLL NNs using real-valued properties of the form in either Problem 1A or Problem 1B.
In Experiments 1 and 2, TLL NNs of the desired dimensions were generated randomly according to the following procedure, which was designed to ensure that they are unlikely to be degenerate on (roughly) the input set of interest. The procedure is as follows:
Randomly generate the selector sets by drawing random integers from the appropriate range, and continue generating them by this mechanism until the required number of selector sets is obtained such that no two of them satisfy a containment relation (a form of degeneracy).
For each corresponding selector matrix, solve an instance of a least squares problem to obtain a collection of scaling vectors.
Then scale each row of the matrix generated in step (1) above by the corresponding entry of the resulting scaling vector. This has the (qualitative) effect of forcing the mutual intersections of the randomly generated local linear functions to be concentrated around the origin.
In Experiment 3, we obtained and used the scalar TLL NNs that were tested in (Ferlez and Shoukry, 2020). These networks all share a common input and output dimension; there are thirty examples of each of several sizes, with all networks of a given size sharing a common neuron count. We used these networks in particular so as to enable some basis of comparison with the experimental results in (Ferlez and Shoukry, 2020). This is relevant, since that tool is not publicly available, and hence is omitted from our direct comparison. Note: we considered these networks with different, albeit similar, properties to those used in (Ferlez and Shoukry, 2020); see Section 6.1.3 below.
6.1.2. Input Constraints
In all experiments, we considered verification problems whose input constraint set is a fixed hyper-rectangle. For the TLLs we generated, there is no great loss of generality in considering this fixed set, since we generated the networks to be "interesting" in its vicinity; see Section 6.1.1. However, using a hyper-rectangle was necessary for Experiment 3, since some of the general NN verifiers accept only hyper-rectangle input constraints. Thus, we made this universal choice for consistency between experiments.
Note, however, that (Ferlez and Shoukry, 2020) verified general polytopic input constraints on the networks we borrowed for Experiment 3. Nevertheless, we expect the results for Fast BATLLNN in Experiment 3 to be somewhat comparable to the results in (Ferlez and Shoukry, 2020), since all of those polytopic constraints are contained in the box we consider.
6.1.3. Output Properties Verified
For a scalar TLL, only two parameters are required to specify an output property: a real-valued scalar and the direction of the inequality. In all cases, the direction of the inequality was determined by the outcome of a Bernoulli random variable. And in all cases except one (noted in Experiment 2), the random real-valued property was generated by the following procedure. First, the TLL network was evaluated at 10,000 samples collected from the input constraint set; any property between the minimum and maximum of these output samples is guaranteed to be UNSAT. Then, to get a mixture of SAT/UNSAT properties, we select a random property from this interval symmetrically extended to twice its original size.
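This procedure can be sketched as follows. The function below is a minimal illustration of the sampling-and-extension scheme described above; the function name, the uniform input sampling, and the return convention are our assumptions, not the actual generator used in the experiments:

```python
import numpy as np

def random_property(net, lb, ub, n_samples=10_000, rng=None):
    """Draw a random scalar output property for `net` over the box [lb, ub].

    Thresholds inside [min, max] of the sampled outputs are guaranteed
    UNSAT; extending that interval symmetrically to twice its size yields
    a mixture of SAT and UNSAT properties."""
    rng = np.random.default_rng(rng)
    x = rng.uniform(lb, ub, size=(n_samples, len(lb)))
    y = np.apply_along_axis(net, 1, x)           # sampled network outputs
    lo, hi = float(y.min()), float(y.max())
    mid, half = 0.5 * (lo + hi), 0.5 * (hi - lo)
    # Symmetric extension of [lo, hi] to twice its original size.
    threshold = rng.uniform(mid - 2 * half, mid + 2 * half)
    upper = bool(rng.integers(2))                # Bernoulli coin: inequality direction
    return threshold, upper
```

For example, for a scalar network and a box over which its outputs span roughly [lo, hi], the returned threshold always lies in the doubled interval centered at (lo + hi)/2.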
6.2. Experiment 1: Input Dimension Scalability
In this experiment, we evaluated the scalability of Fast BATLLNN as a function of the input dimension of the TLL to be verified. To that end, we generated a suite of TLL NNs over a range of input sizes, using the procedure described in Section 6.1.1. We generated 20 instances for each input size, and for all TLLs, we kept the remaining network parameters constant. We then verified each of these TLLs with respect to its own, randomly generated property (as described in Section 6.1.3). For this experiment, Fast BATLLNN was run in a VM with 32 cores.
Figure 4 summarizes the results of this experiment with a box-and-whisker plot of verification times: each box-and-whisker summarizes the verification times for the twenty networks of the corresponding input dimension. (As usual, the boxes denote the first and third quartiles; the orange horizontal line denotes the median; and the whiskers show the maximum and minimum.) No properties/networks resulted in a timeout. The data in the figure shows a clear trend of increasing median verification time, as expected for progressively harder problems (recall the runtime complexities indicated in Section 4.3). By contrast, the minimum and maximum runtimes grow very slowly with dimension: given the complexity analysis of Section 4.3, we speculate that this is due to characteristics of the generated TLLs. That is, the generation procedure appears to "saturate," in the sense that it eventually produces networks which require, on average, a constant number of loop iterations to verify.
6.3. Experiment 2: Network Size Scalability
In this experiment, we evaluated the scalability of Fast BATLLNN as a function of the number of local linear functions in the TLL to be verified. To that end, we generated a suite of TLL NNs whose local linear functions range in number over several sizes, again using the procedure described in Section 6.1.1. We generated 20 instances for each network size, with a matching number of selector sets and a fixed input dimension. We then verified each of these networks with respect to its own, randomly generated property. The properties for all but the largest network size were generated as described in Section 6.1.3; however, our TensorFlow implementation occupied too much memory to generate output samples for the largest TLLs, so the properties for those networks were generated using bounds for the TLLs rather than samples. For this experiment, Fast BATLLNN was run in a VM with 32 cores.
Figure 5 summarizes the results of this experiment with a box-and-whisker plot of verification times: each box-and-whisker summarizes the verification times for the twenty test cases of the corresponding size, much as in Section 6.2. However, since some verification problems timed out in this experiment, those timeouts were excluded from the box-and-whisker plots; they are instead indicated by a superimposed bar graph, which displays a count of the number of timeouts obtained from each group of equally-sized TLLs. The data in this figure shows the expected trend of increasingly difficult verification as the number of local linear functions increases; this is especially captured by the trend of more timeouts for larger networks. The outlier to this trend is the largest size, but this is most likely due to the different method of generating properties for these networks (see above). Finally, note that the minimum verification times exhibit a much slower growth trend, as in Experiment 1.
6.4. Experiment 3: General NN Verifiers
In this experiment, we compared the verification performance of Fast BATLLNN with state-of-the-art (SOTA) NN verifiers designed to work on general deep NNs. In particular, we compared against the generic verifiers Alpha-Beta-Crown (Wang et al., 2021), nnenum (Bak et al., 2020), and PeregriNN (Khedr et al., 2020), as a representative sample of SOTA NN verifiers. Moreover, we conducted this experiment on the same 240 networks used in (Ferlez and Shoukry, 2020) and described in Section 6.1.1; this further facilitates a limited comparison with that algorithm, subject to the caveats described in Section 6.1.1 and Section 6.1.3.
In order to make this test suite of TLLs available to the generic verifiers, each network was first implemented as a TensorFlow model using a custom implementation tool. The intent was to export these TensorFlow models to the ONNX format, which each of the generic verifiers can read. However, most of the generic verifiers do not support implementing multiple feed-forward paths by tensor reshaping operations, as in the most straightforward implementation of a TLL; see Figure 1 and Section 2.3. Thus, we first had to "flatten" our TensorFlow implementation into an equivalent network in which each subnetwork accepts the outputs of all of the selector matrices, only to null the irrelevant ones with additional kernel zeros in the first layer. This is highly sub-optimal, since it results in neurons receiving many more inputs than are really required. However, we could not devise another method to circumvent this limitation of most of the tools.
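The flattening trick can be illustrated in miniature as follows. This is a hypothetical two-path example in NumPy, not our actual TensorFlow export code: each per-path first-layer kernel is zero-padded so that every path formally receives the full stacked selector output, without changing the computed values:

```python
import numpy as np

# A shared pre-activation vector z (e.g. the stacked selector-matrix outputs);
# path 1 should see z[:2] and path 2 should see z[2:].
z = np.array([1.0, -2.0, 3.0, 0.5])
W1 = np.array([[2.0, 1.0]])    # per-path first-layer kernels (illustrative)
W2 = np.array([[-1.0, 4.0]])

# Multi-path evaluation: each path consumes only its own slice of z.
path_outs = np.array([(W1 @ z[:2])[0], (W2 @ z[2:])[0]])

# "Flattened" evaluation: every path receives ALL of z, with zeros in the
# kernel nulling the inputs that don't belong to it.
W1_flat = np.hstack([W1, np.zeros((1, 2))])
W2_flat = np.hstack([np.zeros((1, 2)), W2])
flat_outs = np.vstack([W1_flat, W2_flat]) @ z
# path_outs and flat_outs agree, but the flattened layer has twice the fan-in.
```

The equivalence holds exactly, which is why the transformation is safe; the cost is that every neuron in the flattened first layer has the full fan-in, as noted above.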
With all of the tools able to read the same NNs in our (borrowed) test suite, we randomly generated verification properties for each of the networks, as in the previous experiments. However, recall that the generic NN verifiers have a slightly different interpretation of properties than Fast BATLLNN. For scalar-output networks, this amounts to verifying the properties in Problem 1A and Problem 1B with the same interpretations, but with non-strict inequalities instead of strict inequalities. Since this will only generate divergent results when a property happens to be exactly equal to the maximum or minimum value of the network on the input constraint set, we elide this issue.
Thus, we ran each of the tools (Fast BATLLNN, Alpha-Beta-Crown, nnenum, and PeregriNN) on this test suite of TLLs and properties. PeregriNN was configured with SPLIT_RES = 0.1; nnenum was configured with TRY_QUICK_APPROX = True and all other parameters set to default values; and Alpha-Beta-Crown was configured with input space splitting, share_slopes = True, no_solve_slopes = True, lr_alpha = 0.01, and branching_method = sb. All of the solvers used float64 computations. Furthermore, we ran two versions of this experiment: one where the VM had 4 cores and one where the VM had 24 cores.
Figure 6 summarizes the results of this experiment in the form of a cactus plot: a point on any one of the curves indicates the timeout that would be required to obtain the corresponding number of proved cases for that tool (from among the test suite described above). As noted, each tool was run separately in two VMs, one with 4 cores and one with 24 cores; thus, each tool has two curves in Figure 6. The data shows that Fast BATLLNN is on average 960x/435x faster than nnenum, 1800x/1370x faster than Alpha-Beta-Crown, and 1000x/500x faster than PeregriNN, using 4 and 24 cores respectively. Fast BATLLNN also proved all 240 properties in just 17 seconds (4 cores), whereas nnenum proved 193, Alpha-Beta-Crown proved 153, and PeregriNN proved 186. Note that, unlike the other tools, Fast BATLLNN does not exhibit exponential growth in execution time on this test suite, which is consistent with the complexity analysis in Section 4.3. Despite the caveats noted above, Fast BATLLNN also compares favorably with the execution times shown in (Ferlez and Shoukry, 2020, Figure 1(b)), which reach into the hundreds or thousands of seconds for the larger networks. Finally, note that although Fast BATLLNN exhibited slightly worse measured performance with 24 cores, the rate at which its timeouts increased was significantly slower than with 4 cores; this suggests the data reflects fixed overheads rather than inefficient use of parallelism. Of similar note, Alpha-Beta-Crown seems to suffer from the overhead of using more CPU cores: based on our understanding, its algorithm does not benefit from multiple cores except when solving MIP problems.
Acknowledgements. This work was supported by the National Science Foundation (https://www.nsf.gov).
- Strong mixed-integer programming formulations for trained neural networks. Mathematical Programming, pp. 1–37.
- Avis and Fukuda (1996). Reverse search for enumeration. Discrete Applied Mathematics 65(1), pp. 21–46.
- Bak et al. (2020). Improved Geometric Path Enumeration for Verifying ReLU Neural Networks. In Computer Aided Verification (CAV), Lecture Notes in Computer Science, Vol. 12224, pp. 66–96.
- Measuring neural net robustness with constraints. In Advances in Neural Information Processing Systems, pp. 2613–2621.
- Branch and bound for piecewise linear neural network verification. Journal of Machine Learning Research 21 (2020).
- charm4py (website).
- Maximum resilience of artificial neural networks. In International Symposium on Automated Technology for Verification and Analysis, pp. 251–268.
- A dual approach to scalable verification of deep networks. In UAI, Vol. 1, pp. 2.
- Edelman (1984). A Partial Order on the Regions of R^n Dissected by Hyperplanes. Transactions of the American Mathematical Society 283(2), pp. 617–631.
- Formal verification of piece-wise linear feed-forward neural networks. In International Symposium on Automated Technology for Verification and Analysis, pp. 269–286.
- Efficient and accurate estimation of Lipschitz constants for deep neural networks. In Advances in Neural Information Processing Systems, pp. 11423–11434.
- Ferlez and Shoukry (2019). AReN: Assured ReLU NN Architecture for Model Predictive Control of LTI Systems. In Hybrid Systems: Computation and Control 2020 (HSCC'20).
- Ferlez and Shoukry (2020).
- Ferrez et al. (2001). Infoscience, EPFL.
- Deep neural networks and mixed integer linear optimization. Constraints 23(3), pp. 296–309.
- AI2: Safety and robustness certification of neural networks with abstract interpretation. In 2018 IEEE Symposium on Security and Privacy (SP), pp. 3–18.
- International Verification of Neural Networks Competition 2020 (VNN-COMP'20). https://sites.google.com/view/vnn20
- Verisig: Verifying safety properties of hybrid systems with neural network controllers. In Proceedings of the 22nd ACM International Conference on Hybrid Systems: Computation and Control, pp. 169–178.
- Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks. In Computer Aided Verification, Lecture Notes in Computer Science, pp. 97–117.
- The Marabou framework for verification and analysis of deep neural networks. In International Conference on Computer Aided Verification, pp. 443–452.
- Khedr et al. (2020). PEREGRiNN: Penalized-Relaxation Greedy Neural Network Verifier.
- An approach to reachability analysis for feed-forward ReLU neural networks. arXiv preprint arXiv:1706.07351.
- An Introduction to Hyperplane Arrangements.
- Region configurations for realizability of lattice piecewise-linear models. Mathematical and Computer Modelling 30(11), pp. 17–27.
- Evaluating robustness of neural networks with mixed integer programming. arXiv preprint arXiv:1711.07356.
- Star-Based Reachability Analysis of Deep Neural Networks. In Formal Methods – The Next 30 Years, Lecture Notes in Computer Science.
- NNV: The Neural Network Verification Tool for Deep Neural Networks and Learning-Enabled Cyber-Physical Systems. arXiv preprint arXiv:2004.05519.
- NNV: The Neural Network Verification Tool for Deep Neural Networks and Learning-Enabled Cyber-Physical Systems. In Computer Aided Verification, Lecture Notes in Computer Science, pp. 3–17.
- Efficient formal safety analysis of neural networks. In Advances in Neural Information Processing Systems, pp. 6367–6377.
- Formal security analysis of neural networks using symbolic intervals. In 27th USENIX Security Symposium (USENIX Security 18), pp. 1599–1614.
- Wang et al. (2021). Beta-CROWN: Efficient bound propagation with per-neuron split constraints for complete and incomplete neural network verification.
- Neural Network Control Policy Verification With Persistent Adversarial Perturbation. In International Conference on Machine Learning, pp. 10050–10059.
- Provable defenses against adversarial examples via the convex outer adversarial polytope. arXiv preprint arXiv:1711.00851.
- Reachable set computation and safety verification for neural networks with ReLU activations. arXiv preprint arXiv:1712.08163.
- Output reachable set estimation and verification for multilayer neural networks. IEEE Transactions on Neural Networks and Learning Systems 29(11), pp. 5777–5783.