Kernel Methods are Competitive for Operator Learning

04/26/2023
by Pau Batlle, et al.

We present a general kernel-based framework for learning operators between Banach spaces, along with a priori error analysis and comprehensive numerical comparisons with popular neural network (NN) approaches such as the Deep Operator Network (DeepONet) [Lu et al.] and the Fourier Neural Operator (FNO) [Li et al.]. We consider the setting where the input/output spaces of the target operator 𝒢^† : 𝒰 → 𝒱 are reproducing kernel Hilbert spaces (RKHS), the data comes in the form of partial observations ϕ(u_i), φ(v_i) of input/output functions v_i = 𝒢^†(u_i) (i = 1, …, N), and the measurement operators ϕ : 𝒰 → ℝ^n and φ : 𝒱 → ℝ^m are linear. Writing ψ : ℝ^n → 𝒰 and χ : ℝ^m → 𝒱 for the optimal recovery maps associated with ϕ and φ, we approximate 𝒢^† with 𝒢̄ = χ ∘ f̄ ∘ ϕ, where f̄ is an optimal recovery approximation of f^† := φ ∘ 𝒢^† ∘ ψ : ℝ^n → ℝ^m. We show that, even when using vanilla kernels (e.g., linear or Matérn), our approach is competitive in terms of cost-accuracy trade-off and either matches or beats the performance of NN methods on a majority of benchmarks. Additionally, our framework offers several advantages inherited from kernel methods: simplicity, interpretability, convergence guarantees, a priori error estimates, and Bayesian uncertainty quantification. As such, it can serve as a natural benchmark for operator learning.
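To make the pipeline concrete, here is a minimal sketch (not the authors' code) of the central construction 𝒢̄ = χ ∘ f̄ ∘ ϕ in Python/NumPy. It assumes point evaluations on a shared grid as the measurement maps ϕ and φ, a Matérn-5/2 kernel for the optimal recovery of f^†, and a toy pointwise-square target operator; the helper names `matern52` and `fit_kernel_map` are illustrative, not from the paper.

```python
import numpy as np

def matern52(X, Y, ell=5.0):
    # Matern-5/2 kernel between the rows of X (N, n) and Y (M, n).
    d = np.sqrt(((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1))
    r = np.sqrt(5.0) * d / ell
    return (1.0 + r + r ** 2 / 3.0) * np.exp(-r)

def fit_kernel_map(PhiU, PhiV, reg=1e-8):
    # Optimal-recovery (kernel interpolation) approximation f_bar of
    # f† = φ ∘ 𝒢† ∘ ψ : R^n -> R^m from data (ϕ(u_i), φ(v_i)), i = 1..N.
    K = matern52(PhiU, PhiU)
    coeffs = np.linalg.solve(K + reg * np.eye(len(PhiU)), PhiV)
    return lambda x: matern52(np.atleast_2d(x), PhiU) @ coeffs

# Toy target operator: 𝒢†(u) = u^2 pointwise, observed through point
# evaluations on a shared grid (so ϕ and φ are both sampling maps).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 20)
a, b = rng.standard_normal((2, 50, 1))
U = a * np.sin(2 * np.pi * x) + b * np.cos(2 * np.pi * x)  # ϕ(u_i)
V = U ** 2                                                 # φ(v_i)
f_bar = fit_kernel_map(U, V)
u_new = np.sin(2 * np.pi * x)                              # unseen input
print(np.abs(f_bar(u_new) - u_new ** 2).max())             # max pointwise error
```

Note that fitting reduces to a single regularized linear solve against the kernel matrix, which is what makes the approach simple, interpretable, and amenable to the a priori error estimates the abstract mentions.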
