Learning Partial Differential Equations in Reproducing Kernel Hilbert Spaces
We propose a new data-driven approach for learning the fundamental solutions (i.e., Green's functions) of various linear partial differential equations (PDEs) given sample pairs of input-output functions. Building on the theory of functional linear regression (FLR), we estimate the best-fit Green's function and bias term of the fundamental solution in a reproducing kernel Hilbert space (RKHS), which allows us to regularize their smoothness and impose various structural constraints. We derive a general representer theorem for operator RKHSs, which lets us approximate the original infinite-dimensional regression problem by a finite-dimensional one, reducing the search space to a parametric class of Green's functions. To study the prediction error of our Green's function estimator, we extend prior results on FLR with scalar outputs to the case with functional outputs. Furthermore, our rates of convergence hold even in the misspecified setting, where the data are generated by a nonlinear PDE satisfying certain constraints. Finally, we demonstrate applications of our method to several linear PDEs, including the Poisson, Helmholtz, Schrödinger, Fokker-Planck, and heat equations, and highlight its ability to extrapolate to more finely sampled meshes without any additional training.
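To make the setup concrete, the display below sketches one plausible formalization of the learning problem described above; the notation (inputs f_i, outputs u_i, Green's function G, bias term β, operator RKHS H, and the penalized least-squares objective) is our own illustration and may differ from the paper's exact formulation.

```latex
% Hypothetical formalization of the FLR model (notation assumed, not taken from the paper):
% given sample pairs (f_i, u_i), the output is modeled as an integral operator plus bias,
\[
  u_i(x) \;=\; \int_{\Omega} G(x, y)\, f_i(y)\, \mathrm{d}y \;+\; \beta(x) \;+\; \varepsilon_i(x),
  \qquad i = 1, \dots, n,
\]
% and G, beta are estimated by RKHS-regularized least squares over the operator RKHS H:
\[
  \widehat{G},\, \widehat{\beta}
  \;=\; \operatorname*{arg\,min}_{G \in \mathcal{H},\; \beta}
  \;\frac{1}{n} \sum_{i=1}^{n}
  \Bigl\| u_i - \int_{\Omega} G(\cdot, y)\, f_i(y)\, \mathrm{d}y - \beta \Bigr\|_{L^2(\Omega)}^{2}
  \;+\; \lambda \bigl( \|G\|_{\mathcal{H}}^{2} + \|\beta\|_{\mathcal{H}}^{2} \bigr).
\]
```

Under a representer theorem of the kind the abstract invokes, the minimizer of such an objective lies in a finite-dimensional span determined by the kernel and the training data, which is what reduces the infinite-dimensional regression to a parametric one.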