Computing Lyapunov functions using deep neural networks

05/18/2020
by Lars Grüne, et al.

We propose a deep neural network architecture and a training algorithm for computing approximate Lyapunov functions of systems of nonlinear ordinary differential equations. Under the assumption that the system admits a compositional Lyapunov function, we prove that the number of neurons needed for an approximation of a Lyapunov function with fixed accuracy grows only polynomially in the state dimension, i.e., the proposed approach is able to overcome the curse of dimensionality. We show that nonlinear systems satisfying a small-gain condition admit compositional Lyapunov functions. Numerical examples in up to ten space dimensions illustrate the performance of the training scheme.
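The idea can be illustrated with a short, self-contained sketch. The following PyTorch example is not the authors' implementation: it builds a candidate V(x) as a sum of small subnetworks, each seeing only one low-dimensional group of coordinates (the compositional structure behind the polynomial complexity bound), and trains it so that V(0) = 0, V(x) > 0, and the orbital derivative DV(x)·f(x) < 0 on sampled points. The test system f, the group structure, the margins, and all hyperparameters below are illustrative assumptions, not values from the paper.

```python
# Minimal sketch (assumed setup, not the paper's architecture or algorithm):
# a compositional candidate Lyapunov function trained by penalizing
# violations of the Lyapunov conditions on random samples.
import torch
import torch.nn as nn

torch.manual_seed(0)
DIM = 4
GROUPS = [(0, 1), (2, 3)]  # assumed low-dimensional compositional structure

def f(x):
    # Hypothetical weakly coupled system: stable diagonal part plus a
    # small nonlinear coupling, in the spirit of a small-gain condition.
    return -x + 0.1 * torch.roll(x, shifts=1, dims=1) ** 2

class CompositionalV(nn.Module):
    """V(x) = sum_i V_i(x_{G_i}), one small subnetwork per coordinate group."""
    def __init__(self, groups, hidden=32):
        super().__init__()
        self.groups = groups
        self.subnets = nn.ModuleList(
            nn.Sequential(nn.Linear(len(g), hidden), nn.Tanh(),
                          nn.Linear(hidden, 1))
            for g in groups
        )

    def forward(self, x):
        # Each subnetwork depends only on its own group of coordinates;
        # this compositional form is what keeps the total neuron count
        # polynomial in the state dimension.
        return sum(net(x[:, list(g)]) for net, g in
                   zip(self.subnets, self.groups)).squeeze(-1)

V = CompositionalV(GROUPS)
opt = torch.optim.Adam(V.parameters(), lr=1e-3)
zero = torch.zeros(1, DIM)

for step in range(2000):
    x = 2.0 * torch.rand(256, DIM) - 1.0       # samples in [-1, 1]^DIM
    x.requires_grad_(True)
    v = V(x)
    grad_v, = torch.autograd.grad(v.sum(), x, create_graph=True)
    lie = (grad_v * f(x)).sum(dim=1)            # orbital derivative DV . f
    r2 = (x ** 2).sum(dim=1)
    loss = (torch.relu(lie + 0.1 * r2).mean()   # enforce decrease along f
            + torch.relu(0.1 * r2 - v).mean()   # enforce positive definiteness
            + V(zero).pow(2).mean())            # pin V(0) = 0
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final loss:", loss.item())
```

If the penalty terms vanish on a sufficiently fine sample set, the trained network is a plausible Lyapunov function candidate on the sampled domain; the paper's actual architecture, training scheme, and accuracy guarantees differ in detail.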


Related research

01/23/2020: Overcoming the curse of dimensionality for approximating Lyapunov functions with deep neural networks under a small-gain condition
We propose a deep neural network architecture for storing approximate Ly...

10/24/2020: Deep neural network for solving differential equations motivated by Legendre-Galerkin approximation
Nonlinear differential equations are challenging to solve numerically an...

07/13/2022: Compositional Sparsity, Approximation Classes, and Parametric Transport Equations
Approximating functions of a large number of variables poses particular ...

09/27/2021: Lyapunov-Net: A Deep Neural Network Architecture for Lyapunov Function Approximation
We develop a versatile deep neural network architecture, called Lyapunov...

09/19/2022: Computing Anti-Derivatives using Deep Neural Networks
This paper presents a novel algorithm to obtain the closed-form anti-der...
