Bounding the Complexity of Formally Verifying Neural Networks: A Geometric Approach

12/22/2020
by James Ferlez, et al.

In this paper, we consider the computational complexity of formally verifying the input/output behavior of Rectified Linear Unit (ReLU) Neural Networks (NNs): that is, we consider the complexity of determining whether the output of a NN lies in a specific convex polytopic region (in its range) whenever its input lies in a specific polytopic region (in its domain). Specifically, we show that for two different NN architectures – shallow NNs and Two-Level Lattice (TLL) NNs – the verification problem with polytopic constraints is polynomial in the number of neurons in the NN to be verified, when all other aspects of the verification problem are held fixed. We achieve these complexity results by exhibiting an explicit verification algorithm for each type of architecture. Nevertheless, the two algorithms share a common structure: each efficiently translates the NN parameters into a partitioning of the NN's input space by means of hyperplanes, which in turn partitions the original verification problem into sub-verification problems derived from the geometry of the NN itself. These partitionings have two further important properties. First, the number of hyperplanes is polynomially related to the number of neurons, and hence so is the number of sub-verification problems. Second, each sub-problem is solvable in polynomial time by means of a Linear Program (LP). Thus, to obtain an overall polynomial-time algorithm for the original verification problem, it suffices to enumerate these sub-problems in polynomial time. To this end, we also contribute a novel algorithm that enumerates the regions of a hyperplane arrangement in polynomial time; it is based on a poset ordering of the regions for which the successors of a region can be computed in polynomial time.
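To make the structure of this recipe concrete, below is a minimal Python sketch for a shallow (one-hidden-layer) ReLU NN y = c @ relu(W x + b) + d: each region of the hyperplane arrangement {W_i x + b_i = 0} fixes an activation pattern on which the NN is affine, and the containment check on that region reduces to one LP per face of the output polytope. All names (verify_shallow_relu, W, b, c, d, A_in, b_in, A_out, b_out) and the use of scipy.optimize.linprog are illustrative assumptions, and the regions are enumerated naively over sign patterns, which is exponential; the paper's actual contribution is a polynomial-time, poset-based region enumeration that this sketch does not reproduce.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def verify_shallow_relu(W, b, c, d, A_in, b_in, A_out, b_out):
    """Check whether A_in x <= b_in implies A_out y <= b_out for the
    shallow ReLU NN y = c @ relu(W x + b) + d.

    Illustrative sketch only: regions are enumerated naively by sign
    pattern, and the input polytope {A_in x <= b_in} is assumed bounded.
    """
    n_neurons, n_in = W.shape
    for signs in itertools.product([1.0, 0.0], repeat=n_neurons):
        s = np.array(signs)                 # 1 = neuron active, 0 = inactive
        D = np.diag(s)
        # On this region the NN is affine: y = M x + v.
        M = c @ D @ W
        v = c @ (D @ b) + d
        # Region constraints, written uniformly as (1 - 2*s_i)(W_i x + b_i) <= 0.
        A_reg = (1.0 - 2.0 * s)[:, None] * W
        b_reg = -(1.0 - 2.0 * s) * b
        # One LP per output face: maximize A_out[j] @ y over the region
        # intersected with the input polytope (linprog minimizes, so negate).
        for j in range(A_out.shape[0]):
            res = linprog(
                c=-(A_out[j] @ M),
                A_ub=np.vstack([A_in, A_reg]),
                b_ub=np.concatenate([b_in, b_reg]),
                bounds=[(None, None)] * n_in,
                method="highs",
            )
            if res.status == 0 and -res.fun + A_out[j] @ v > b_out[j] + 1e-9:
                return False                # some input in this region escapes the output polytope
    return True                             # property holds on every region
```

Infeasible LPs (empty regions) are simply skipped, which mirrors the role of the enumeration step in the paper: only the nonempty regions of the arrangement generate sub-verification problems.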

Related research

09/20/2022 – Polynomial-Time Reachability for LTI Systems with Two-Level Lattice Neural Network Controllers
In this paper, we consider the computational complexity of bounding the ...

11/17/2021 – Fast BATLLNN: Fast Box Analysis of Two-Level Lattice Neural Networks
In this paper, we present the tool Fast Box Analysis of Two-Level Lattic...

03/29/2022 – NNLander-VeriF: A Neural Network Formal Verification Framework for Vision-Based Autonomous Aircraft Landing
In this paper, we consider the problem of formally verifying a Neural Ne...

10/31/2018 – Formal Verification of Neural Network Controlled Autonomous Systems
In this paper, we consider the problem of formally verifying the safety ...

11/17/2021 – Traversing the Local Polytopes of ReLU Neural Networks: A Unified Approach for Network Verification
Although neural networks (NNs) with ReLU activation functions have found...

09/01/2022 – Recurrent Convolutional Neural Networks Learn Succinct Learning Algorithms
Neural Networks (NNs) struggle to efficiently learn certain problems, su...

06/15/2018 – Polyhedra Circuits and Their Applications
We introduce polyhedra circuits. Each polyhedra circuit characterizes a ...
