Provably Correct Training of Neural Network Controllers Using Reachability Analysis

02/22/2021
by Xiaowu Sun, et al.

In this paper, we consider the problem of training neural network (NN) controllers for cyber-physical systems (CPS) that are guaranteed to satisfy safety and liveness properties. Our approach combines model-based design methodologies for dynamical systems with data-driven learning. Given a mathematical model of the dynamical system, we compute a finite-state abstract model that captures the closed-loop behavior under all possible NN controllers. Using this finite-state abstract model, our framework identifies the subset of NN weights that are guaranteed to satisfy the safety requirements. During training, we augment the learning algorithm with a NN weight projection operator that enforces the resulting NN to be provably safe. To account for the liveness properties, the proposed framework uses the finite-state abstract model to identify candidate NN weights that may satisfy the liveness properties, and biases the NN training toward these candidates to achieve the liveness specification. The guarantees above cannot be ensured without correctness guarantees on the NN architecture, which controls the NN's expressiveness. Therefore, a cornerstone of the proposed framework is the ability to select provably correct NN architectures automatically.
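The weight projection step can be pictured as projected gradient descent over a precomputed safe weight set. Below is a minimal, hypothetical sketch in Python/NumPy, assuming the abstraction step has already produced per-weight safe intervals `w_lo` and `w_hi` (a simplification; the paper's safe weight sets need not be boxes, and all names here are illustrative, not the authors' actual implementation):

```python
# Hypothetical sketch: projected-gradient training with a weight
# projection operator. We assume (as an illustration only) that the
# abstraction step yielded box constraints [w_lo, w_hi] on the weights.
import numpy as np

def project_weights(weights, w_lo, w_hi):
    """Project the weights back into the precomputed safe intervals."""
    return np.clip(weights, w_lo, w_hi)

def train_step(weights, grad, lr, w_lo, w_hi):
    """One projected-gradient update: descend, then re-enter the safe set."""
    weights = weights - lr * grad
    return project_weights(weights, w_lo, w_hi)

# Toy usage: a single weight vector with a safe box around the origin.
rng = np.random.default_rng(0)
w = rng.normal(size=4)
safe_lo, safe_hi = -0.5 * np.ones(4), 0.5 * np.ones(4)
for _ in range(100):
    g = 2.0 * w  # gradient of a toy quadratic loss ||w||^2
    w = train_step(w, g, lr=0.1, w_lo=safe_lo, w_hi=safe_hi)
print(w)  # every post-projection iterate lies inside the safe box
```

In this sketch, every post-projection iterate lies inside the safe set, so the learned weights remain inside it no matter when training stops; this mirrors the abstract's claim that the projection operator enforces provable safety of the resulting NN.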
