Gradient-augmented Supervised Learning of Optimal Feedback Laws Using State-dependent Riccati Equations

03/06/2021
by Giacomo Albi et al.

A supervised learning approach for the solution of large-scale nonlinear stabilization problems is presented. A stabilizing feedback law is trained on a dataset generated by State-dependent Riccati Equation (SDRE) solves. The training phase is enriched by the use of gradient information in the loss function, weighted through hyperparameters. High-dimensional nonlinear stabilization tests demonstrate that real-time sequential large-scale Algebraic Riccati Equation solves can be substituted by a suitably trained feedforward neural network.
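The gradient-augmented loss described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the single weighting hyperparameter `mu`, and the mean-squared-error form are assumptions; the paper's actual loss and hyperparameter scheme may differ.

```python
import numpy as np


def gradient_augmented_loss(u_pred, u_true, grad_pred, grad_true, mu=0.1):
    """Hypothetical sketch of a gradient-augmented supervised loss.

    Combines a mean-squared error on the predicted feedback values with a
    mean-squared penalty on the mismatch of their gradients, weighted by
    the hyperparameter mu (an assumed weighting scheme).
    """
    value_term = np.mean((u_pred - u_true) ** 2)
    grad_term = np.mean((grad_pred - grad_true) ** 2)
    return value_term + mu * grad_term
```

In practice such a loss would be minimized over the weights of a feedforward network, with the target values and gradients supplied by the offline SDRE solves.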
