Safe Control with Neural Network Dynamic Models

10/03/2021
by Tianhao Wei et al.

Safety is critical in autonomous robotic systems. A safe control law ensures forward invariance of a safe set (a subset of the state space). How to derive a safe control law from a control-affine analytical dynamic model has been studied extensively. However, in complex environments and tasks, obtaining a principled analytical model of the system is challenging and time-consuming. In these situations, data-driven learning is widely used, and the learned models are encoded in neural networks. How to formally derive a safe control law with Neural Network Dynamic Models (NNDM) remains unclear because of the lack of computationally tractable methods for handling these black-box functions; in fact, even finding the control that minimizes an objective for an NNDM without any safety constraint is still challenging. In this work, we propose MIND-SIS (Mixed Integer for Neural network Dynamic model with Safety Index Synthesis), the first method to derive safe control laws for NNDM. The method has two parts: 1) SIS, an algorithm for offline synthesis of the safety index (also called a barrier function), which uses evolutionary methods; and 2) MIND, an algorithm for online computation of the optimal and safe control signal, which solves a constrained optimization problem using a computationally efficient encoding of the neural network. We prove theoretically that MIND-SIS guarantees forward invariance and finite convergence, and we validate numerically that MIND-SIS achieves safe and optimal control of NNDM. In our experiments, the optimality gap is less than 10^-8 and the safety constraint violation is 0.
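To give a concrete flavor of the MIND step, the sketch below encodes a toy one-step NNDM with a single hidden ReLU layer as a mixed-integer linear program and solves for a control that tracks a goal state subject to a safety bound. Everything specific here is an illustrative assumption rather than the paper's implementation: the model weights, the variable bounds, the big-M constant, the PuLP/CBC solver, and the simple linear bound used in place of a synthesized safety index.

import pulp

# Toy one-step NNDM (weights are made up for illustration):
#   h = relu(Wx*x + Wu*u + b),  x_next = V . h + c
Wx, Wu, b = [1.0, -0.5], [0.8, 1.2], [0.1, -0.2]
V, c = [0.9, -0.4], 0.05

x0, x_goal, x_max = 0.5, 1.0, 1.2   # current state, goal, assumed safe upper bound
M = 100.0                           # big-M constant; must bound the pre-activations

prob = pulp.LpProblem("mind_sketch", pulp.LpMinimize)
u = pulp.LpVariable("u", lowBound=-2.0, upBound=2.0)            # control input
h = [pulp.LpVariable(f"h{i}", lowBound=0.0) for i in range(2)]  # ReLU outputs
z = [pulp.LpVariable(f"z{i}", cat="Binary") for i in range(2)]  # ReLU on/off indicators
t = pulp.LpVariable("t", lowBound=0.0)                          # |x_next - x_goal|

prob += t  # objective: minimize the tracking error

# Exact big-M encoding of h[i] = max(0, a[i]); a[i] is affine in u because x0 is known
for i in range(2):
    a_i = Wx[i] * x0 + Wu[i] * u + b[i]
    prob += h[i] >= a_i
    prob += h[i] <= a_i + M * (1 - z[i])
    prob += h[i] <= M * z[i]

x_next = pulp.lpSum(V[i] * h[i] for i in range(2)) + c

# Epigraph form of |x_next - x_goal| <= t
prob += t >= x_next - x_goal
prob += t >= x_goal - x_next

# Safety constraint: a linear surrogate of phi(x_next) <= 0 for this sketch
prob += x_next <= x_max

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("u* =", pulp.value(u), "predicted x_next =", pulp.value(x_next))

The big-M constraints are the standard exact mixed-integer encoding of a ReLU unit, which is what makes the otherwise nonconvex, network-constrained problem solvable by an off-the-shelf MILP solver; MIND applies this kind of encoding to the learned NNDM together with the safety-index constraint synthesized offline by SIS.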

