Neural Operators of Backstepping Controller and Observer Gain Functions for Reaction-Diffusion PDEs

03/18/2023
by Miroslav Krstic, et al.

Unlike ODEs, whose models involve system matrices and whose controllers involve vector or matrix gains, PDE models involve functions in those roles: functional coefficients, dependent on the spatial variables, and gain functions, likewise dependent on space. Gain designs for PDE controllers and observers, such as PDE backstepping, are mappings of system model functions into gain functions. These infinite-dimensional nonlinear operators are given implicitly through PDEs in the spatial variables, which must be solved to determine the gain function for each new functional coefficient of the plant. The need to solve such PDEs can be eliminated by learning and approximating the design mapping in the form of a neural operator. Learning the neural operator requires a sufficient number of solutions of the design PDEs, computed offline, as well as the training of the operator itself. In recent work, we developed neural operators for PDE backstepping designs for first-order hyperbolic PDEs. Here we extend this framework to the more complex class of parabolic PDEs. The key theoretical question is whether the controllers remain stabilizing, and the observers remain convergent, when they employ the approximate gain functions generated by the neural operator. We provide affirmative answers to these questions: we prove closed-loop stability under gains produced by neural operators. We illustrate the theoretical results with numerical tests and publish our code on GitHub. The neural operators are three orders of magnitude faster at generating gain functions than PDE solvers for those gains. This opens up the opportunity to use this neural operator methodology in adaptive control and in gain-scheduling control of nonlinear PDEs.

