Formal Verification of Neural Network Controlled Autonomous Systems

10/31/2018
by Xiaowu Sun, et al.

In this paper, we consider the problem of formally verifying the safety of an autonomous robot equipped with a Neural Network (NN) controller that processes LiDAR images to produce control actions. Given a workspace that is characterized by a set of polytopic obstacles, our objective is to compute the set of safe initial conditions such that a robot trajectory starting from these initial conditions is guaranteed to avoid the obstacles. Our approach is to construct a finite state abstraction of the system and use standard reachability analysis over the finite state abstraction to compute the set of safe initial states. The first technical problem in computing the finite state abstraction is to mathematically model the imaging function that maps the robot position to the LiDAR image. To that end, we introduce the notion of imaging-adapted sets as partitions of the workspace in which the imaging function is guaranteed to be affine. We develop a polynomial-time algorithm to partition the workspace into imaging-adapted sets along with computing the corresponding affine imaging functions. Given this workspace partitioning, the discrete-time linear dynamics of the robot, and a pre-trained NN controller with Rectified Linear Unit (ReLU) nonlinearity, the second technical challenge is to analyze the behavior of the neural network. To that end, we utilize a Satisfiability Modulo Convex (SMC) encoding to enumerate all the possible segments of the different ReLUs. SMC solvers combine a Boolean satisfiability solver with a convex programming solver to decompose the problem into smaller subproblems. To accelerate this process, we develop a pre-processing algorithm that rapidly prunes the space of feasible ReLU segments. Finally, we demonstrate the efficiency of the proposed algorithms using numerical simulations with increasing complexity of the neural network controller.
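
The ReLU-enumeration step described in the abstract can be pictured with a small, self-contained sketch; this is not the authors' implementation, only an illustration of the underlying idea. Assuming a single ReLU layer y = ReLU(Wx + b) restricted to a polytopic input set {x : Ax <= d}, the hypothetical helper feasible_relu_patterns below enumerates on/off assignments of the individual ReLUs and keeps only those whose induced linear constraints are feasible, using an LP feasibility check via scipy.optimize.linprog. Infeasible assignments are pruned, and each surviving assignment makes the layer affine on the corresponding region of the input set, which is the kind of case enumeration and pruning the SMC encoding performs. The numbers in the example at the bottom are illustrative.

```python
# Minimal sketch of ReLU activation-pattern enumeration with convex feasibility
# checks, under the assumptions stated above (single layer, polytopic input set).
import itertools
import numpy as np
from scipy.optimize import linprog

def feasible_relu_patterns(W, b, A, d):
    """Return the ReLU on/off patterns of y = ReLU(Wx + b) consistent with A x <= d."""
    n_neurons, n_inputs = W.shape
    patterns = []
    for pattern in itertools.product([0, 1], repeat=n_neurons):
        # Constraints implied by the pattern:
        #   neuron i "on"  (pattern[i] == 1)  =>  W_i x + b_i >= 0, i.e. -W_i x <= b_i
        #   neuron i "off" (pattern[i] == 0)  =>  W_i x + b_i <= 0, i.e.  W_i x <= -b_i
        rows, rhs = [A], [d]
        for i, on in enumerate(pattern):
            if on:
                rows.append(-W[i:i+1, :]); rhs.append(np.array([b[i]]))
            else:
                rows.append(W[i:i+1, :]);  rhs.append(np.array([-b[i]]))
        A_ub = np.vstack(rows)
        b_ub = np.concatenate(rhs)
        # Feasibility LP: minimize 0 subject to A_ub x <= b_ub, x unbounded.
        res = linprog(c=np.zeros(n_inputs), A_ub=A_ub, b_ub=b_ub,
                      bounds=[(None, None)] * n_inputs, method="highs")
        if res.status == 0:          # LP feasible: keep this activation pattern
            patterns.append(pattern)
    return patterns

# Example: 2 ReLUs over the box -1 <= x1, x2 <= 1 (hypothetical numbers).
W = np.array([[1.0, 0.0], [0.0, 1.0]])
b = np.array([0.0, -2.0])
A = np.vstack([np.eye(2), -np.eye(2)])
d = np.ones(4)
print(feasible_relu_patterns(W, b, A, d))   # patterns (., 1) are pruned: x2 - 2 < 0 on the box
```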
