Optimizing a Discrete Loss (ODIL) to solve forward and inverse problems for partial differential equations using machine learning tools

by Petr Karnakov et al.

We introduce the Optimizing a Discrete Loss (ODIL) framework for the numerical solution of partial differential equations (PDEs) using machine learning tools. The framework formulates numerical methods as the minimization of discrete residuals, solved using gradient descent and Newton's method. We demonstrate the value of this approach on equations that may have missing parameters or where insufficient data is available to form a well-posed initial-value problem. The framework is presented for mesh-based discretizations of PDEs and inherits their accuracy, convergence, and conservation properties. It preserves the sparsity of the solutions and is readily applicable to inverse and ill-posed problems. We apply it to PDE-constrained optimization, optical flow, system identification, and data assimilation using gradient descent algorithms, including those often deployed in machine learning. We compare ODIL with a related approach that represents the solution with neural networks, and demonstrate advantages of ODIL that include significantly higher convergence rates and several orders of magnitude lower computational cost. We evaluate the method on various linear and nonlinear PDEs, including the Navier-Stokes equations for flow reconstruction problems.
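To illustrate the core idea of minimizing discrete residuals, here is a minimal sketch (not the authors' implementation): a 1D Poisson problem u'' = f with homogeneous Dirichlet boundaries is discretized with second-order finite differences, and the sum of squared residuals is driven to zero by plain gradient descent. The grid size, step size, and iteration count are illustrative choices.

```python
import numpy as np

# Minimal sketch of the residual-minimization idea behind ODIL (not the
# authors' code): solve u'' = f on [0, 1] with u(0) = u(1) = 0 by minimizing
# L(u) = sum_i r_i^2 over the grid values of u via gradient descent.
n = 17
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = -np.pi**2 * np.sin(np.pi * x)  # chosen so the exact solution is sin(pi x)

u = np.zeros(n)  # initial guess; boundary values stay fixed at zero

def residual(u):
    # Discrete residual r_i = (u_{i-1} - 2 u_i + u_{i+1}) / h^2 - f_i
    # at the interior grid points.
    return (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2 - f[1:-1]

def grad_loss(u):
    # Gradient of L(u) = sum_i r_i^2: each residual r_i contributes to the
    # three grid values it depends on (u_{i-1}, u_i, u_{i+1}).
    r = residual(u)
    g = np.zeros_like(u)
    g[:-2] += 2.0 * r / h**2
    g[1:-1] += -4.0 * r / h**2
    g[2:] += 2.0 * r / h**2
    g[0] = g[-1] = 0.0  # Dirichlet boundary values are not optimized
    return g

# Step size bounded by the largest Hessian eigenvalue, which scales as h^-4;
# this stiffness is why Newton-type updates converge much faster in practice.
lr = 0.05 * h**4
for _ in range(200000):
    u -= lr * grad_loss(u)

err = np.max(np.abs(u - np.sin(np.pi * x)))
print(f"max error: {err:.3e}")
```

Because the loss is quadratic here, a single Newton step (solving the sparse linear system) would recover the finite-difference solution exactly; gradient descent is shown only to mirror the machine-learning-style optimization discussed in the abstract.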


