Sobolev Acceleration and Statistical Optimality for Learning Elliptic Equations via Gradient Descent

05/15/2022
by Yiping Lu, et al.

In this paper, we study the statistical limits, in terms of Sobolev norms, of gradient descent for solving inverse problems from randomly sampled noisy observations using a general class of objective functions. Our class of objective functions includes Sobolev training for kernel regression, Deep Ritz Methods (DRM), and Physics Informed Neural Networks (PINN) for solving elliptic partial differential equations (PDEs) as special cases. We consider a potentially infinite-dimensional parameterization of our model using a suitable Reproducing Kernel Hilbert Space, together with a continuous parameterization of problem hardness through the definition of kernel integral operators. We prove that gradient descent over this class of objective functions can also achieve statistical optimality, and that the optimal number of passes over the data increases with the sample size. Based on our theory, we explain the implicit acceleration obtained by using a Sobolev norm as the training objective: although both DRM and PINN can achieve statistical optimality, the optimal number of epochs for DRM grows larger than that for PINN as the data size and the task hardness increase.
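As a rough illustration (a sketch assuming the canonical model problem \(-\Delta u^{\star} = f\) on a domain \(\Omega\); the boundary terms and the noise model used in the paper are not specified in the abstract and are omitted here), the two PDE objectives covered by this framework take the standard forms

\[
\mathcal{L}_{\mathrm{DRM}}(u) \;=\; \mathbb{E}_{x}\!\left[\tfrac{1}{2}\,\lvert \nabla u(x) \rvert^{2} - f(x)\,u(x)\right],
\qquad
\mathcal{L}_{\mathrm{PINN}}(u) \;=\; \mathbb{E}_{x}\!\left[\bigl(\Delta u(x) + f(x)\bigr)^{2}\right].
\]

DRM minimizes the variational (Ritz) energy, while PINN minimizes the squared residual of the strong-form equation; the residual loss penalizes the error in a stronger (higher-order Sobolev) norm, which is consistent with the Sobolev acceleration effect referred to in the title.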

