Safety Guarantees for Neural Network Dynamic Systems via Stochastic Barrier Functions

06/15/2022
by Rayan Mazouz, et al.

Neural Networks (NNs) have been successfully employed to represent the state evolution of complex dynamical systems. Such models, referred to as NN dynamic models (NNDMs), use iterative, noisy predictions of the NN to estimate a distribution of system trajectories over time. Despite their accuracy, safety analysis of NNDMs is known to be a challenging problem and remains largely unexplored. To address this issue, in this paper, we introduce a method of providing safety guarantees for NNDMs. Our approach is based on stochastic barrier functions, whose relation to safety is analogous to that of Lyapunov functions to stability. We first present a method of synthesizing stochastic barrier functions for NNDMs via a convex optimization problem, which in turn provides a lower bound on the system's safety probability. A key step in our method is the use of recent convex approximation results for NNs to obtain piecewise linear bounds, which allow the barrier function synthesis problem to be formulated as a sum-of-squares optimization program. If the obtained safety probability is above the desired threshold, the system is certified. Otherwise, we introduce a method of generating controls for the system that robustly maximize the safety probability in a minimally invasive manner. We exploit the convexity of the barrier function to formulate the optimal control synthesis problem as a linear program. Experimental results illustrate the efficacy of the method: they show that it scales to multi-dimensional NNDMs with multiple layers and hundreds of neurons per layer, and that the controller can significantly improve the safety probability.
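For intuition, the sketch below gives the standard discrete-time (c-martingale) form of a stochastic barrier certificate that this line of work builds on; the symbols X, X_s, X_u, c, and T are illustrative notation, not necessarily the paper's exact statement.

    % A minimal sketch, assuming a discrete-time NNDM x_{k+1} = f_{NN}(x_k) + v_k with
    % noise v_k, state space X, safe set X_s, and unsafe set X_u (illustrative notation).
    % B : X -> R_{\ge 0} is a candidate stochastic barrier function and c \ge 0 a constant.
    \begin{align*}
      B(x) &\ge 0                                        && \forall x \in X,   \\
      B(x) &\ge 1                                        && \forall x \in X_u, \\
      \mathbb{E}\big[\, B(x_{k+1}) \mid x_k = x \,\big] &\le B(x) + c && \forall x \in X_s.
    \end{align*}
    % Kushner-style supermartingale bound over a horizon of T steps:
    %   P( x_k \in X_u for some k \le T \mid x_0 ) \le B(x_0) + cT,
    % so the safety probability starting from x_0 is at least 1 - (B(x_0) + cT).

In the NNDM setting, the piecewise linear bounds on the NN are what make the expectation condition tractable, letting the search for B be posed as a sum-of-squares program and, when a controller is needed, the control synthesis as a linear program.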

