Reachability analysis of neural networks using mixed monotonicity

by Pierre-Jean Meyer et al.

This paper presents a new reachability analysis tool to compute an interval over-approximation of the output set of a feedforward neural network under given input uncertainty. The proposed approach adapts to neural networks an existing mixed-monotonicity method for the reachability analysis of dynamical systems and applies it to all possible partial networks within the given neural network. This ensures that the intersection of the obtained results is the tightest interval over-approximation of the output of each layer that can be obtained using mixed-monotonicity. Unlike other tools in the literature that focus on small classes of piecewise-affine or monotone activation functions, the main strength of our approach is its generality in the sense that it can handle neural networks with any Lipschitz-continuous activation function. In addition, the simplicity of the proposed framework allows users to very easily add unimplemented activation functions, by simply providing the function, its derivative and the global extrema and corresponding arguments of the derivative. Our algorithm is tested and compared to five other interval-based tools on 1000 randomly generated neural networks for four activation functions (ReLU, TanH, ELU, SiLU). We show that our tool always outperforms the Interval Bound Propagation method and that we obtain tighter output bounds than ReluVal, Neurify, VeriNet and CROWN (when they are applicable) in 15 to 60 percent of cases.








1. Introduction

With the fast spread of artificial intelligence, and particularly of neural networks, in various fields including safety-critical applications such as autonomous driving (Xiang et al., 2018), there is a growing need to replace verification methods based on statistical testing (Kim et al., 2020) with more formal methods able to provide guarantees on the satisfaction of desired properties, either on the output of the neural network (Liu et al., 2021) or on the global behavior of a closed-loop system with a neural network controller (Akintunde et al., 2018; Xiang et al., 2020; Saoud and Sanfelice, 2021).

When focusing on neural networks alone, the rapidly growing field of formal verification has been categorized into three main types of objectives (see survey paper (Liu et al., 2021)): counter-example result to find an input-output pair that violates the desired property; adversarial result to determine the maximum allowed disturbance that can be applied to a nominal input while preserving the properties of the nominal output; reachability result to evaluate (exactly or through over-approximations) the set of all output values reachable by the network when given a bounded input set. While the first two types of results often rely on solving primal or dual optimization problems (Liu et al., 2021; Salman et al., 2019), reachability results (which are considered in this paper) naturally lean toward re-using existing or developing new reachability analysis approaches to be applied layer by layer in the network. The current reachability methods in the literature consider various set representations to over-approximate the network’s output set with static intervals (Xiang et al., 2020), symbolic interval equations and linear relaxations of activation functions (ReluVal (Wang et al., 2018b), Neurify (Wang et al., 2018a), VeriNet (Henriksen and Lomuscio, 2020), CROWN (Zhang et al., 2018)) or polyhedra and zonotopes in the tool libraries NNV (Tran et al., 2020) and ERAN (4).

The main weakness of existing neural network verifiers is that most of them are limited to a very small class of activation functions. The large majority of verifiers only handle piecewise-affine activations, focusing on the most popular ReLU function (Wang et al., 2018b, a; Katz et al., 2019; Botoeva et al., 2020). A handful of tools also cover other popular activation functions, primarily S-shaped functions such as the sigmoid or hyperbolic tangent (Henriksen and Lomuscio, 2020; Tran et al., 2020; 4). Several other tools claiming to handle general activation functions are either implicitly restricted to monotone functions in their theory or implementation (Dvijotham et al., 2018; Raghunathan et al., 2018), or their claimed generality rather refers to a mere compatibility with general activation functions, in the sense that the user is assumed to provide their own implementation of how to handle these new functions (Zhang et al., 2018).


The goal of this tool paper is to present a novel method for the reachability analysis of feedforward neural networks with general activation functions. The proposed approach adapts the existing mixed-monotonicity reachability method for dynamical systems (Meyer et al., 2021) to be applicable to neural networks, so that we can obtain an interval over-approximation of the set of all the network’s outputs that can be reached from a given input set. Since our reachability method is also applicable to any partial network within the main neural network and always returns a sound over-approximation of the last layer’s output set, intersecting the interval over-approximations from several partial networks ending at the same layer can only reduce the conservativeness while preserving the soundness of the results. To take full advantage of this, we propose an algorithm that applies our new mixed-monotonicity reachability method to all partial networks contained within the considered network, to ensure that the resulting interval over-approximation gives the tightest output bounds of the network obtainable using mixed-monotonicity reachability analysis.

Mixed-monotonicity reachability analysis is applicable to any system whose Jacobian matrix is bounded on any bounded input set (Meyer et al., 2021). In the case of neural networks combining linear transformations and nonlinear activation functions, this requirement implies that our approach is applicable to any Lipschitz-continuous activation function. Since extracting the bounds of the Jacobian matrix of a system (or of the derivative of an activation function in our case) may not always be straightforward, for the sake of self-containment of the paper we provide a method to automatically obtain these bounds for a (still very general) sub-class of activation functions whose derivative can be defined as a 3-piece piecewise-monotone function. Apart from the binary step and Gaussian functions, all activation functions that the author could find in the literature (including the many non-monotone activation functions reviewed in (Zhu et al., 2021)) belong to this class.

Although most of the tools cited above provide a two-part verification framework for neural networks (one reachability part to compute output bounds of the network, and one part to iteratively refine the network’s domain until a definitive answer can be given to the verification problem), this paper focuses on providing a novel method for the first, reachability step only. The proposed approach can thus be seen either as a preliminary step towards the development of a larger verification framework for neural networks combining our new reachability method with an iterative splitting of the input domain, or as a stand-alone tool for the reachability analysis of neural networks, as used for example in the analysis of closed-loop systems with a sample-and-hold neural network controller (Xiang et al., 2020). In summary, the main contributions of this paper are:

  • a novel approach to obtain sound output bounds of a neural network using mixed-monotonicity reachability analysis;

  • a method that is compatible with any Lipschitz-continuous activation function;

  • a framework that allows users to easily add new activation functions (the user only needs to provide the definition of the activation function and its derivative, and the global extrema of the derivative with their corresponding arguments);

  • numerical comparisons on randomly generated networks showing that, on top of handling general activation functions, our approach outperforms other interval-based optimization-free tools (Xiang et al., 2020; Wang et al., 2018b, a; Henriksen and Lomuscio, 2020; Zhang et al., 2018) in 15 to 60 percent of cases.

The paper is organized as follows. Section 2 defines the considered problem and provides useful preliminaries for the main algorithm, such as the definition of mixed-monotonicity reachability for a neural network and how to automatically obtain local bounds on the derivative of the activation function. Section 3 presents the main algorithm, applying mixed-monotonicity reachability to all partial networks within the given network. Finally, Section 4 compares our novel reachability approach to the bounding methods of other reachability-based verifiers in the literature.

2. Preliminaries

Given $\underline{x}, \overline{x} \in \mathbb{R}^n$ with $\underline{x} \le \overline{x}$, the (multi-dimensional) interval $[\underline{x}, \overline{x}] \subseteq \mathbb{R}^n$ is defined as the set $\{x \in \mathbb{R}^n \mid \underline{x} \le x \le \overline{x}\}$, using componentwise inequalities.

2.1. Problem definition

Consider an $L$-layer feedforward neural network defined, for $k \in \{1, \dots, L\}$, as

(1)  $x^k = \Phi(W^k x^{k-1} + b^k),$

where $x^k \in \mathbb{R}^{n_k}$ is the output vector of layer $k$, with $x^0$ and $x^L$ being the input and output of the neural network, respectively. We assume that the network is pre-trained and all weight matrices $W^k \in \mathbb{R}^{n_k \times n_{k-1}}$ and bias vectors $b^k \in \mathbb{R}^{n_k}$ are known. The function $\Phi$ is defined as the componentwise application of a scalar and Lipschitz-continuous activation function $\phi : \mathbb{R} \to \mathbb{R}$.

Some verification problems on neural networks aim at measuring the robustness of the network with respect to input variations. Since the exact output set of the network cannot always be easily computed, we instead rely on approximating this set by a simpler set representation, such as a multi-dimensional interval. By ensuring that we compute an over-approximation interval containing the whole output set, we can then solve the verification problem on the interval: if the desired property is satisfied on the over-approximation, then it is also satisfied on the real output set of the network. In this paper, we focus on the problem of computing an interval over-approximation of the output set of the network when its input is taken in a known bounded set, as formalized below.

Problem 1.

Given the $L$-layer neural network (1) and the interval input set $[\underline{x}^0, \overline{x}^0] \subseteq \mathbb{R}^{n_0}$, find an interval $[\underline{x}^L, \overline{x}^L]$ over-approximating the output set of (1):

$\{x^L \in \mathbb{R}^{n_L} \mid x^0 \in [\underline{x}^0, \overline{x}^0]\} \subseteq [\underline{x}^L, \overline{x}^L].$

Naturally, our secondary objective is to ensure that the computed interval over-approximation is as small as possible in order to minimize the number of false negative results in the subsequent verification process.

2.2. Mixed-monotonicity reachability

In this paper, we solve Problem 1 by iteratively computing over-approximation intervals of the output of each layer, until the last layer of the network is reached. These over-approximations are obtained by adapting to neural networks existing methods for the reachability analysis of dynamical systems. More specifically, we rely on the mixed-monotonicity approach in (Meyer et al., 2021) to over-approximate the reachable set of any discrete-time system $x^+ = f(x)$ with Lipschitz-continuous vector field $f$. Since the neural network (1) is instead defined as a function with input and output of different dimensions, we present below the generalization of the mixed-monotonicity method from (Meyer et al., 2021) to any Lipschitz-continuous function.

Consider the system $y = f(x)$ with input $x \in [\underline{x}, \overline{x}] \subseteq \mathbb{R}^{n_x}$, output $y \in \mathbb{R}^{n_y}$ and Lipschitz-continuous function $f : \mathbb{R}^{n_x} \to \mathbb{R}^{n_y}$. The Lipschitz-continuity assumption ensures that the derivative $\frac{\partial f}{\partial x}$ (also called the Jacobian matrix in this paper) is bounded.

Proposition 2.1.

Given an interval $[\underline{J}, \overline{J}]$ bounding the derivative, $\frac{\partial f}{\partial x}(x) \in [\underline{J}, \overline{J}]$ for all $x \in [\underline{x}, \overline{x}]$, denote the center of the interval as $J^* = (\underline{J} + \overline{J})/2$. For each output dimension $i \in \{1, \dots, n_y\}$, define input vectors $\underline{\xi}^i, \overline{\xi}^i \in \mathbb{R}^{n_x}$ and row vector $\alpha^i \in \mathbb{R}^{1 \times n_x}$ such that for all $j \in \{1, \dots, n_x\}$,

$(\underline{\xi}^i_j,\ \overline{\xi}^i_j,\ \alpha^i_j) = \begin{cases} (\underline{x}_j,\ \overline{x}_j,\ \min(0, \underline{J}_{ij})) & \text{if } J^*_{ij} \ge 0, \\ (\overline{x}_j,\ \underline{x}_j,\ \max(0, \overline{J}_{ij})) & \text{if } J^*_{ij} < 0. \end{cases}$

Then for all $x \in [\underline{x}, \overline{x}]$ and output dimension $i$, we have:

$f_i(x) \in \left[\, f_i(\underline{\xi}^i) + \alpha^i (\overline{\xi}^i - \underline{\xi}^i),\ \ f_i(\overline{\xi}^i) + \alpha^i (\underline{\xi}^i - \overline{\xi}^i) \,\right].$

Proposition 2.1 can thus provide an interval over-approximation of the output set of any Lipschitz-continuous function, as long as bounds on its derivative are known. Obtaining such bounds for a neural network is made possible by computing local bounds on the derivatives of its activation functions, as detailed in Section 2.3.
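To make the computation concrete, the bounding scheme above can be sketched in a few lines of NumPy (the function and variable names are hypothetical, and the exact form of the correction term is a reconstruction of the mixed-monotonicity decomposition, not the paper's verbatim implementation):

```python
import numpy as np

def mixed_monotonicity_bounds(f, x_lo, x_hi, J_lo, J_hi):
    """Interval over-approximation of f([x_lo, x_hi]) from elementwise
    Jacobian bounds J_lo <= df/dx <= J_hi, in the spirit of Proposition 2.1."""
    ny, _ = J_lo.shape
    J_c = (J_lo + J_hi) / 2.0                  # center of the Jacobian interval
    y_lo, y_hi = np.empty(ny), np.empty(ny)
    for i in range(ny):
        sign = J_c[i] >= 0                     # dominant sign of each input's influence
        xi_lo = np.where(sign, x_lo, x_hi)     # input corner driving output i down
        xi_hi = np.where(sign, x_hi, x_lo)     # input corner driving output i up
        # correction row: the sign-unstable part of row i of the Jacobian bounds
        alpha = np.where(sign, np.minimum(0.0, J_lo[i]), np.maximum(0.0, J_hi[i]))
        y_lo[i] = f(xi_lo)[i] + alpha @ (xi_hi - xi_lo)
        y_hi[i] = f(xi_hi)[i] + alpha @ (xi_lo - xi_hi)
    return y_lo, y_hi
```

When the Jacobian row has a stable sign, the correction row is zero and the bounds reduce to evaluating $f$ at two corners of the input box; on sign-unstable entries the interval is enlarged to remain sound.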

2.3. Local bounds of activation functions

Proposition 2.1 can be applied to any neural network whose activation functions are Lipschitz continuous since such functions have a bounded derivative. However, to avoid requiring the users to manually provide these bounds for each different activation function they want to use, we instead provide a method to automatically and easily compute local bounds for a very general class of functions describing most popular activation functions and their derivatives.

Let $\varphi : \mathbb{R} \to \mathbb{R}$ be a scalar function. In this paper, we focus on 3-piece piecewise-monotone functions, for which there exist $\underline{z}, \overline{z} \in \mathbb{R} \cup \{-\infty, +\infty\}$ with $\underline{z} \le \overline{z}$ such that $\varphi$ is:

  • non-increasing on $(-\infty, \underline{z}]$ until reaching its global minimum $\varphi(\underline{z})$;

  • non-decreasing on $[\underline{z}, \overline{z}]$ until reaching its global maximum $\varphi(\overline{z})$;

  • and non-increasing on $[\overline{z}, +\infty)$.

When $\underline{z} = -\infty$ (resp. $\overline{z} = +\infty$), the first (resp. last) monotone segment is squeezed into a singleton at infinity and can thus be ignored.

Although this description may appear restrictive, we should note that the vast majority of activation functions, as well as their derivatives, belong to this class. In particular, this is the case for (but not restricted to) popular piecewise-affine activation functions (identity, ReLU, parameterized ReLU), monotone functions (hyperbolic tangent, arctangent, sigmoid, SoftPlus, ELU) as well as many non-monotone activation functions (SiLU, GELU, HardSwish, Mish, REU, PFLU, FPFLU, which are all reviewed or introduced in (Zhu et al., 2021)). Two examples of such functions are provided in Figure 1, representing the Sigmoid Linear Unit (SiLU, also called Swish) activation function and its derivative. The only exception that the author could find is the Gaussian activation function, which belongs to this class while its derivative does not (although the negation of its derivative does belong to this class, so a very similar approach can still be applied).

Proposition 2.2.

Given a scalar function $\varphi$ as defined above, with global argmin $\underline{z}$ and global argmax $\overline{z}$, and a bounded input domain $[a, b] \subseteq \mathbb{R}$, the local bounds of $\varphi$ on $[a, b]$ are given by:

$\min_{x \in [a, b]} \varphi(x) = \varphi(\underline{z})$ if $\underline{z} \in [a, b]$, and $\min(\varphi(a), \varphi(b))$ otherwise;

$\max_{x \in [a, b]} \varphi(x) = \varphi(\overline{z})$ if $\overline{z} \in [a, b]$, and $\max(\varphi(a), \varphi(b))$ otherwise.

In this paper, Proposition 2.2 is primarily used to obtain local bounds of the activation function derivatives in order to compute bounds on the network’s Jacobian matrix for Proposition 2.1. On the other hand, Proposition 2.2 can also be useful on activation functions themselves to compare (in Section 4) our method based on mixed-monotonicity reachability with the naive interval propagation through the layers.
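Proposition 2.2 amounts to a simple endpoint-and-extremum check, which can be sketched as follows (names hypothetical; an infinite argmin or argmax never falls inside a bounded domain, so the corresponding branch is simply never taken):

```python
import numpy as np

def local_bounds(phi, arg_min, arg_max, a, b):
    """Local min/max on [a, b] of a 3-piece piecewise-monotone scalar function
    phi with global argmin arg_min and global argmax arg_max (possibly +/-inf)."""
    lo = phi(arg_min) if a <= arg_min <= b else min(phi(a), phi(b))
    hi = phi(arg_max) if a <= arg_max <= b else max(phi(a), phi(b))
    return lo, hi
```

For instance, the derivative of tanh peaks at 1 in $x = 0$ and vanishes at $\pm\infty$, so on any bounded domain its local bounds come from the two endpoints and, if the domain contains 0, the value 1.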

Figure 1. SiLU activation function (black line) and its derivative (red dashed).

3. Reachability algorithm

Solving Problem 1 by applying the mixed-monotonicity approach from Proposition 2.1 to the neural network (1) can be done in many different ways. One possibility is to iteratively compute bounds on the Jacobian matrix of the whole network and then apply Proposition 2.1 once for the whole network. However, this may result in wide bounds for the Jacobian of the whole network, which in turn may result in a fairly conservative over-approximation of the network output due to the Jacobian-dependent correction term in Proposition 2.1. The dual approach is to iteratively apply Proposition 2.1 to each layer of the network, which would result in tighter Jacobian bounds (since one layer’s Jacobian requires fewer interval matrix products than the Jacobian of the whole network), and thus a tighter over-approximation of each layer’s output. However, the loss of the dependency on the network’s input may induce another source of accumulated conservativeness when the network has many layers. Many other intermediate approaches can also be considered, where we split the network into several consecutive partial networks and apply Proposition 2.1 iteratively to each of them.

Although all the above approaches result in over-approximations of the network output, the main challenge is that we cannot determine in advance which method would yield the tightest bounds, since this is highly dependent on the considered network and input interval. We thus devised an algorithm that encapsulates all possible choices by running Proposition 2.1 on all possible partial networks of (1), before taking the intersection of the obtained bounds to get the tightest bounds of each layer’s output that can be obtained with mixed monotonicity. The main steps of this approach are presented in Algorithm 1 and summarized below.

Given the $L$-layer network (1) with activation function $\phi$ and input interval $[\underline{x}^0, \overline{x}^0]$ as in Problem 1, our goal is to apply Proposition 2.1 to each partial network of (1), denoted as $N_{j,k}$ and containing only layers $j$ to $k$ (with $1 \le j \le k \le L$), with input $x^{j-1}$ and output $x^k$. We start by initializing the Jacobian bounds of each partial network to the identity matrix. Then, we iteratively explore the network going forward, where for each layer $k$ we first use interval arithmetic (Jaulin et al., 2001) to compute the pre-activation bounds based on the knowledge of the output bounds of the previous layer, and then apply Proposition 2.2 to obtain local bounds on the derivative of the activation function. Next, for each partial network $N_{j,k}$ (with $j \le k$) ending at the current layer $k$, we compute its Jacobian bounds from the Jacobian bounds of layer $k$ and those of the partial network $N_{j,k-1}$. We then apply Proposition 2.1 to the partial network $N_{j,k}$ with the Jacobian bounds we just computed and input bounds $[\underline{x}^{j-1}, \overline{x}^{j-1}]$. Finally, once we have computed the over-approximations of $x^k$ corresponding to each partial network ending at layer $k$, we take the intersection of all of them to obtain the final bounds for $x^k$.
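The step that chains a layer's Jacobian bounds with those of a shorter partial network is an elementwise interval matrix product, which can be sketched as follows (hypothetical names; this is a generic interval-arithmetic building block, not the paper's exact code):

```python
import numpy as np

def interval_matmul(A_lo, A_hi, B_lo, B_hi):
    """Elementwise bounds on A @ B when A in [A_lo, A_hi] and B in [B_lo, B_hi].
    Each scalar product takes its min/max over the four interval corners."""
    prods = np.stack([
        A_lo[:, :, None] * B_lo[None, :, :],
        A_lo[:, :, None] * B_hi[None, :, :],
        A_hi[:, :, None] * B_lo[None, :, :],
        A_hi[:, :, None] * B_hi[None, :, :],
    ])
    # min/max over the four corners, then sum over the shared dimension
    return prods.min(axis=0).sum(axis=1), prods.max(axis=0).sum(axis=1)
```

Repeating this product layer by layer yields the Jacobian bounds of every partial network ending at the current layer.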


Theorem 3.1.

The output interval returned by Algorithm 1 is a solution to Problem 1.

The proof of this result is straightforward since we compute several interval over-approximations of the output of each layer using Proposition 2.1, thus ensuring that their intersection is still an over-approximation of the layer’s output. Then, using these over-approximations as the input bounds of the next layers guarantees the soundness of the approach.

Note also that in Algorithm 1, Proposition 2.1 is applied to every possible partial network that exists within the main network (1). We are thus guaranteed that the resulting interval from Algorithm 1 is the least conservative solution to Problem 1 that could be obtained from applying the mixed-monotonicity reachability analysis of Proposition 2.1 to any decomposition of (1) into consecutive partial networks.

4. Numerical comparisons

The recent years have seen the development of a wide variety of new methods and tools for bounding and verifying properties on the outputs of neural networks, including optimization-based approaches (see e.g. the survey paper (Liu et al., 2021)) and methods based on reachability analysis with various set representations, such as polyhedra and zonotopes in the ERAN (4) and NNV libraries (Tran et al., 2020). Since the method presented in this paper is an optimization-free reachability analysis approach using (multi-dimensional) interval bounds, we focus our numerical comparisons on the five most closely related tools, which also rely on optimization-free interval reachability. The main ideas of these approaches and their limitations are summarized below.

  • Naive interval bound propagation (IBP), as presented e.g. in (Xiang et al., 2020), is the most straightforward approach, where interval arithmetic operations (Jaulin et al., 2001) are used to propagate the input bounds of the network through each affine transformation and activation function. Its implementation in the framework of (Xiang et al., 2020) is restricted to monotone activation functions, and the results are very conservative due to losing the dependency on the network’s input.

  • To preserve part of the input dependency and thus obtain tighter bounds, ReluVal (Wang et al., 2018b) propagates input-dependent symbolic intervals through the network. This tool is restricted to ReLU activations and the input dependency is lost (concretization of the symbolic bounds) whenever the pre-activation bounds span both activation modes of a ReLU node.

  • Neurify (Wang et al., 2018a) is an improved version of ReluVal where the concretization of the symbolic intervals is replaced by a linear relaxation of the ReLU activation, thus bounding the nonlinear function by two linear equations.

  • VeriNet (Henriksen and Lomuscio, 2020) is a generalization of the approach in Neurify (symbolic interval propagation and linear relaxation of activation functions) to S-shaped activation functions, such as sigmoid and hyperbolic tangent. In terms of implementation, VeriNet only propagates a single symbolic equation and an error matrix, which is more efficient than propagating two symbolic equations as done in Neurify and ReluVal.

  • Unlike all the above methods, CROWN (Zhang et al., 2018) relies on a backward propagation of the activations’ linear relaxations through the layers of the network. Although its authors claim that CROWN is compatible with any activation function (as long as the user provides their own methods to compute linear relaxations of the activation functions based on the pre-activation bounds), in practice its current implementation in auto-LiRPA (Xu et al., 2020) can only handle piecewise-affine and S-shaped activations, similarly to VeriNet.
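As a point of reference for the comparisons below, one IBP step through an affine layer followed by a monotone activation can be sketched as follows (hypothetical names; this is the naive propagation scheme, not any tool's actual implementation):

```python
import numpy as np

def ibp_layer(x_lo, x_hi, W, b, act):
    """One step of naive interval bound propagation.
    Positive weights combine with input lower bounds for the output lower
    bound and with upper bounds for the output upper bound (and conversely
    for negative weights); a monotone activation preserves the ordering."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    pre_lo = W_pos @ x_lo + W_neg @ x_hi + b
    pre_hi = W_pos @ x_hi + W_neg @ x_lo + b
    return act(pre_lo), act(pre_hi)
```

Because each step only keeps the concrete interval, all dependency on the network's input is lost, which is the source of IBP's conservativeness noted above.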

Since the tools to be compared in this section are written in various programming languages (Matlab, C, Python), for the convenience of comparison we have re-implemented the bounding method of each tool in Matlab. The implemented approaches are those described in each of the original papers cited above, and may thus slightly differ from the latest updates of the corresponding public toolboxes.

Most neural network verifiers have a two-part structure: one bounding the output of the network; and one iteratively refining the network’s domain until the union of the computed bounds satisfies or falsifies the desired property. Since the main contributions of this paper are related to the bounding part (and not to the full verification framework), restricting our numerical comparisons to a handful of pre-trained networks from the literature is not very meaningful to evaluate the relative qualities of the compared bounding methods for various activation functions. We thus rather focus the comparisons on the average results obtained from each approach over a large set of randomly generated neural networks of various sizes.

Each considered feedforward neural network has the structure defined in (1), with all its parameters randomly chosen as follows: a depth; a number of input and output variables; a number of neurons per hidden layer (each layer may have a different width); and input bounds defined as a hypercube around a randomly chosen center input. The comparison between our mixed-monotonicity method in Algorithm 1 and the five approaches from the literature cited above (when applicable) is done over a set of random networks for each of the following activation functions:

  • Rectified Linear Unit (ReLU) is the piecewise-affine function $\phi(x) = \max(0, x)$;

  • Hyperbolic tangent (TanH) is the S-shaped function $\phi(x) = \tanh(x)$;

  • Exponential Linear Unit (ELU) is the monotone function defined as $\phi(x) = x$ if $x \ge 0$ and $\phi(x) = e^x - 1$ if $x < 0$;

  • Sigmoid Linear Unit (SiLU) is the non-monotone function $\phi(x) = \frac{x}{1 + e^{-x}}$.

Note that all these activation functions are Lipschitz continuous and their derivatives satisfy the desired shape described in Section 2.3, meaning that Algorithm 1 can be applied to all of them.
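To illustrate the interface described in the contributions, a hypothetical registry of these four activations could pair each function with its derivative and the global argmin/argmax of the derivative; everything a user would need to supply. The names are illustrative, and the SiLU extrema locations are numerical approximations rather than values from the paper:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Each entry: (activation, derivative, argmin of derivative, argmax of derivative).
ACTIVATIONS = {
    "relu": (lambda x: np.maximum(x, 0.0),
             lambda x: np.where(np.asarray(x) > 0, 1.0, 0.0),
             -np.inf, np.inf),      # derivative steps from 0 up to 1
    "tanh": (np.tanh,
             lambda x: 1.0 - np.tanh(x) ** 2,
             -np.inf, 0.0),         # derivative peaks at 0, vanishes at +/-inf
    "elu":  (lambda x: np.where(np.asarray(x) < 0, np.expm1(x), x),
             lambda x: np.where(np.asarray(x) < 0, np.exp(x), 1.0),
             -np.inf, 0.0),         # derivative non-decreasing, max reached at 0
    "silu": (lambda x: x * sigmoid(x),
             lambda x: sigmoid(x) * (1.0 + x * (1.0 - sigmoid(x))),
             -2.3994, 2.3994),      # approximate argmin/argmax of the SiLU derivative
}
```

Combined with Proposition 2.2, each entry directly yields local derivative bounds on any pre-activation interval.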

For each of the four considered activation functions and six bounding methods, the average results over the randomly generated networks are given in Table 1 for the computation time (on a laptop running Matlab 2021a) and in Table 2 for the width of the output bounds. For the averages to be meaningful despite the varying depth and width of the random networks, the results of each network are scaled down with respect to its total number of neurons.

The first and main outcome of these comparisons concerns the much higher generality of our proposed mixed-monotonicity approach compared to all others. Indeed, with no additional effort it can natively handle piecewise-affine activation functions (as ReluVal and Neurify), S-shaped functions (as VeriNet and CROWN), other monotone functions (as the IBP implementation in (Xiang et al., 2020)), as well as general non-monotone activation functions currently not handled by any of the above five tools. Note that the results in parentheses in Tables 1-3 are those not natively supported by the original tools, but added in our own implementation of their bounding approaches. This is the case for the IBP approach, which is restricted to monotone activation functions in the implementation of (Xiang et al., 2020), but which we extended to non-monotone activations by using Proposition 2.2. Similarly, the ELU activation is not an S-shaped function and is not natively handled by the bounding methods of VeriNet and CROWN; we could still include it in our own implementation of these tools since ELU is a convex function, for which linear relaxations can be computed by taking the tangent at the center of the pre-activation bounds as the lower bound equation and the chord between the endpoints as the upper bound equation.

In addition to the generality of our approach, its simplicity in handling new user-provided activation functions shall be highlighted. Indeed, ReluVal (Wang et al., 2018b), Neurify (Wang et al., 2018a), VeriNet (Henriksen and Lomuscio, 2020) and CROWN (Zhang et al., 2018) all rely on linear relaxations of the nonlinear activation functions, which may require long and complex implementations to be provided by the user for each new activation function (e.g. finding the optimal linear relaxations of S-shaped activation functions takes several hundred lines of code in the implementation of VeriNet). In contrast, for a user to add a new activation type to be used within our mixed-monotonicity approach, all they need to provide is the definition of the activation function and its derivative, and the global extrema of the derivative with their corresponding arguments, as defined in Section 2.3. Everything else is automatically handled internally by Algorithm 1.

This generality and ease of use however come at the cost of a higher computational complexity, as can be seen in Table 1. This higher complexity primarily comes from the fact that most other methods propagate interval or symbolic bounds through the network only once (except for CROWN, which is also computationally expensive), while our approach in Algorithm 1 calls the mixed-monotonicity reachability method from Proposition 2.1 on all partial networks within the main network.

Method ReLU TanH ELU SiLU
IBP (Xiang et al., 2020) ()
ReluVal (Wang et al., 2018b) - - -
Neurify (Wang et al., 2018a) - - -
VeriNet (Henriksen and Lomuscio, 2020) () -
CROWN (Zhang et al., 2018) () -
Table 1. Average computation time (per neuron in the network).
Method ReLU TanH ELU SiLU
IBP (Xiang et al., 2020) ()
ReluVal (Wang et al., 2018b) - - -
Neurify (Wang et al., 2018a) - - -
VeriNet (Henriksen and Lomuscio, 2020) () -
CROWN (Zhang et al., 2018) () -
Table 2. Average width of the output bounds (per neuron in the network).
Method ReLU TanH ELU SiLU
IBP (Xiang et al., 2020) ()
ReluVal (Wang et al., 2018b) - - -
Neurify (Wang et al., 2018a) - - -
VeriNet (Henriksen and Lomuscio, 2020) () -
CROWN (Zhang et al., 2018) () -
Table 3. Proportion of the random networks for which the Mixed-Monotonicity approach results in tighter output bounds than other methods.

In terms of width of the output bounds, we can see in Table 2 that on average, our approach performs better than the IBP method and worse than ReluVal, Neurify, VeriNet and CROWN when they are applicable. On the other hand, Table 3 gives the proportion of the random networks, for each activation function, on which our mixed-monotonicity approach returns tighter (or at least as tight) bounds than the other five methods. In particular, we can see that Algorithm 1 always performs better than the IBP approach, and that it returns tighter bounds than the other four tools (ReluVal, Neurify, VeriNet, CROWN) in 15 to 60 percent of cases, depending on the tool and activation function. Note also that in some cases, CROWN was not able to return a valid output for the ELU activation, due to its formulation in (Zhang et al., 2018) being incompatible with horizontal relaxations (when the pre-activation bounds of the ELU are very negative).

5. Conclusions

This paper presents a new tool for the sound reachability analysis of feedforward neural networks using mixed monotonicity. The main strength of the proposed approach is that it can be applied to very general networks, since its only limitation is to have Lipschitz-continuous activation functions, which is the case for the vast majority of activation functions currently in use in the literature. In addition, on activation functions handled by other reachability tools, our algorithm outperforms them on about 15 to 60 percent of networks, depending on the tools and activation functions. Another significant strength of our framework is its simplicity for an external user to add new activation functions. Indeed, while tools based on symbolic intervals require user-provided linear relaxations of the new activation functions (which may need up to several hundred lines of code in some cases), handling a new activation function in our approach only requires the user to provide the definition of the function and its derivative, and the global extrema of the derivative. The generality and simplicity of use of our approach come at the cost of a higher computational complexity.

Although this tool can already be used on its own for the reachability analysis of neural networks, one of our next objectives is to include this new reachability method into a larger neural network verification framework with an iterative refinement of the input domain until a safety problem can be solved.


  • M. Akintunde, A. Lomuscio, L. Maganti, and E. Pirovano (2018) Reachability analysis for neural agent-environment systems. In Sixteenth International Conference on Principles of Knowledge Representation and Reasoning, Cited by: §1.
  • E. Botoeva, P. Kouvaros, J. Kronqvist, A. Lomuscio, and R. Misener (2020) Efficient verification of relu-based neural networks via dependency analysis. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 34, pp. 3291–3299. Cited by: §1.
  • K. Dvijotham, R. Stanforth, S. Gowal, T. A. Mann, and P. Kohli (2018) A dual approach to scalable verification of deep networks.. In UAI, Vol. 1, pp. 3. Cited by: §1.
  • [4] (2021) ERAN: ETH robustness analyzer for neural networks. Note: Available online. Cited by: §1, §1, §4.
  • P. Henriksen and A. Lomuscio (2020) Efficient neural network verification via adaptive refinement and adversarial search. In ECAI 2020, pp. 2513–2520. Cited by: 4th item, §1, §1, 4th item, Table 1, Table 2, Table 3, §4.
  • L. Jaulin, M. Kieffer, O. Didrit, and E. Walter (2001) Applied interval analysis: with examples in parameter and state estimation, robust control and robotics. Vol. 1, Springer Science & Business Media. Cited by: §3, 1st item.
  • G. Katz, D. A. Huang, D. Ibeling, K. Julian, C. Lazarus, R. Lim, P. Shah, S. Thakoor, H. Wu, A. Zeljić, et al. (2019) The Marabou framework for verification and analysis of deep neural networks. In International Conference on Computer Aided Verification, pp. 443–452. Cited by: §1.
  • E. Kim, D. Gopinath, C. Pasareanu, and S. A. Seshia (2020) A programmatic and semantic approach to explaining and debugging neural network based object detectors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11128–11137. Cited by: §1.
  • C. Liu, T. Arnon, C. Lazarus, C. Barrett, and M. J. Kochenderfer (2021) Algorithms for verifying deep neural networks. Foundations and Trends in Optimization 4 (3-4), pp. 244–404. Cited by: §1, §1, §4.
  • P. Meyer, A. Devonport, and M. Arcak (2021) Interval reachability analysis: bounding trajectories of uncertain systems with boxes for control and verification. Springer. Cited by: §1, §1, §2.2.
  • A. Raghunathan, J. Steinhardt, and P. Liang (2018) Certified defenses against adversarial examples. In International Conference on Learning Representations, Cited by: §1.
  • H. Salman, G. Yang, H. Zhang, C. Hsieh, and P. Zhang (2019) A convex relaxation barrier to tight robustness verification of neural networks. arXiv preprint arXiv:1902.08722. Cited by: §1.
  • A. Saoud and R. G. Sanfelice (2021) Computation of controlled invariants for nonlinear systems: application to safe neural networks approximation and control. IFAC-PapersOnLine 54 (5), pp. 91–96. Cited by: §1.
  • H. Tran, X. Yang, D. M. Lopez, P. Musau, L. V. Nguyen, W. Xiang, S. Bak, and T. T. Johnson (2020) NNV: the neural network verification tool for deep neural networks and learning-enabled cyber-physical systems. In International Conference on Computer Aided Verification, pp. 3–17. Cited by: §1, §1, §4.
  • S. Wang, K. Pei, J. Whitehouse, J. Yang, and S. Jana (2018a) Efficient formal safety analysis of neural networks. In 32nd Conference on Neural Information Processing Systems (NIPS), Montreal, Canada. Cited by: 4th item, §1, §1, 3rd item, Table 1, Table 2, Table 3, §4.
  • S. Wang, K. Pei, J. Whitehouse, J. Yang, and S. Jana (2018b) Formal security analysis of neural networks using symbolic intervals. In 27th USENIX Security Symposium (USENIX Security 18), Cited by: 4th item, §1, §1, 2nd item, Table 1, Table 2, Table 3, §4.
  • W. Xiang, P. Musau, A. A. Wild, D. M. Lopez, N. Hamilton, X. Yang, J. Rosenfeld, and T. T. Johnson (2018) Verification for machine learning, autonomy, and neural networks survey. arXiv preprint arXiv:1810.01989. Cited by: §1.
  • W. Xiang, H. Tran, X. Yang, and T. T. Johnson (2020) Reachable set estimation for neural network control systems: a simulation-guided approach. IEEE Transactions on Neural Networks and Learning Systems 32 (5), pp. 1821–1830. Cited by: 4th item, §1, §1, §1, 1st item, Table 1, Table 2, Table 3, §4.
  • K. Xu, Z. Shi, H. Zhang, Y. Wang, K. Chang, M. Huang, B. Kailkhura, X. Lin, and C. Hsieh (2020) Automatic perturbation analysis for scalable certified robustness and beyond. Advances in Neural Information Processing Systems 33. Cited by: 5th item.
  • H. Zhang, T. Weng, P. Chen, C. Hsieh, and L. Daniel (2018) Efficient neural network robustness certification with general activation functions. Advances in Neural Information Processing Systems 31, pp. 4939–4948. External Links: Link Cited by: 4th item, §1, §1, 5th item, Table 1, Table 2, Table 3, §4, §4.
  • M. Zhu, W. Min, Q. Wang, S. Zou, and X. Chen (2021) PFLU and FPFLU: two novel non-monotonic activation functions in convolutional neural networks. Neurocomputing 429, pp. 110–117. Cited by: §1, §2.3.