Stabilizing Neural Control Using Self-Learned Almost Lyapunov Critics

07/11/2021
by Ya-Chien Chang, et al.

The lack of stability guarantees restricts the practical use of learning-based methods in core control problems in robotics. We develop new methods for learning neural control policies and neural Lyapunov critic functions in the model-free reinforcement learning (RL) setting. We use sample-based approaches and the Almost Lyapunov function conditions to estimate the region of attraction and invariance properties through the learned Lyapunov critic functions. The methods enhance the stability of neural controllers for various nonlinear systems, including automobile and quadrotor control.
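To make the sample-based idea concrete, the sketch below shows one plausible way a neural Lyapunov critic could be trained from closed-loop transition samples. It is a minimal illustration, not the authors' implementation: the network architecture, the function and parameter names (`LyapunovCritic`, `lyapunov_critic_loss`, `eps`), and the specific hinge-style penalties are assumptions, and the Almost Lyapunov relaxation (tolerating a small fraction of violating samples) is only indicated in the comments.

```python
import torch
import torch.nn as nn

class LyapunovCritic(nn.Module):
    """Small MLP V_theta(x); a hypothetical critic architecture."""
    def __init__(self, state_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        # Subtract V(0) so the critic vanishes at the equilibrium.
        return self.net(x) - self.net(torch.zeros_like(x))

def lyapunov_critic_loss(critic, x, x_next, eps=1e-3):
    """Sample-based penalty for violating the Lyapunov conditions on
    transitions (x -> x_next) collected under the current policy:
    V(x) should be positive away from the origin, and V should
    decrease along closed-loop trajectories."""
    v = critic(x)
    v_next = critic(x_next)
    positivity = torch.relu(eps - v)         # want V(x) >= eps
    decrease = torch.relu(v_next - v + eps)  # want V(x') - V(x) <= -eps
    return (positivity + decrease).mean()
```

In such a setup, this loss would be minimized alongside the RL objective for the policy, and the fraction of samples still violating the decrease condition would inform an Almost Lyapunov style estimate of the region of attraction.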
