On efficient algorithms for computing near-best polynomial approximations to high-dimensional, Hilbert-valued functions from limited samples

03/25/2022
by   Ben Adcock, et al.

Sparse polynomial approximation has become indispensable for approximating smooth, high- or infinite-dimensional functions from limited samples. This is a key task in computational science and engineering, e.g., surrogate modelling in uncertainty quantification (UQ), where the function is the solution map of a parametric or stochastic PDE. Yet, sparse polynomial approximation lacks a complete theory. On the one hand, there is a well-developed theory of best s-term polynomial approximation, which asserts exponential or algebraic rates of convergence for holomorphic functions. On the other hand, there are increasingly mature methods, such as (weighted) ℓ^1-minimization, for computing such approximations. While the sample complexity of these methods has been analyzed in detail, whether they actually achieve best s-term rates is not well understood. Furthermore, these methods are not algorithms per se, since they assume access to exact minimizers of nonlinear optimization problems. This paper closes these gaps. Specifically, we pose and answer the following question: are there robust, efficient algorithms for computing approximations to finite- or infinite-dimensional, holomorphic and Hilbert-valued functions from limited samples that achieve best s-term rates? We answer this in the affirmative by introducing algorithms and theoretical guarantees that assert exponential or algebraic rates of convergence, along with robustness to sampling, algorithmic, and physical discretization errors. We treat both scalar- and Hilbert-valued functions, the latter being particularly relevant to parametric and stochastic PDEs. Our work involves several significant developments of existing techniques, including a novel restarted primal-dual iteration for solving weighted ℓ^1-minimization problems in Hilbert spaces. Our theory is supplemented by numerical experiments demonstrating the practical efficacy of these algorithms.
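To make the central computational ingredient concrete, below is a minimal sketch (Python/NumPy) of weighted ℓ^1-minimization solved by a primal-dual (Chambolle-Pock) iteration with warm restarts. This is a one-dimensional, scalar-valued toy under stated assumptions, not the paper's algorithm: the target function f, the sample count m, the dictionary size N, the weights w, the noise level eta, the step sizes, and the naive warm-restart loop are all illustrative choices. The paper's restarted iteration additionally rescales the problem between restarts (to obtain exponential decay in the iteration count) and handles Hilbert-valued coefficients.

import numpy as np
from numpy.polynomial import chebyshev as C

rng = np.random.default_rng(0)

# Illustrative smooth target on [-1, 1] (an assumption, not from the paper).
f = lambda t: 1.0 / (1.0 + 25.0 * t**2)

m, N = 100, 300                             # sample count and dictionary size (assumed)
t = rng.uniform(-1.0, 1.0, m)
A = C.chebvander(t, N - 1) / np.sqrt(m)     # Chebyshev measurement matrix
b = f(t) / np.sqrt(m)
w = np.ones(N)                              # unit weights; in high dimensions the
                                            # theory uses larger weights for
                                            # higher-degree terms

def pd_inner(A, b, w, eta, tau, sigma, iters, x0):
    """Chambolle-Pock for: min ||w * x||_1  subject to  ||A x - b||_2 <= eta."""
    x, xbar, q = x0.copy(), x0.copy(), np.zeros(A.shape[0])
    for _ in range(iters):
        # Dual step: prox of the Fenchel conjugate of the l2-ball indicator.
        u = q + sigma * (A @ xbar - b)
        q = u * max(0.0, 1.0 - sigma * eta / max(np.linalg.norm(u), 1e-30))
        # Primal step: weighted soft-thresholding of a gradient-like update.
        z = x - tau * (A.T @ q)
        x_new = np.sign(z) * np.maximum(np.abs(z) - tau * w, 0.0)
        xbar, x = 2.0 * x_new - x, x_new    # over-relaxation
    return x

Lnorm = np.linalg.norm(A, 2)                # spectral norm of A
tau = sigma = 0.9 / Lnorm                   # satisfies tau * sigma * Lnorm^2 < 1
eta = 1e-6                                  # assumed noise level (samples are exact)

x = np.zeros(N)
for _ in range(5):                          # naive warm-restarted outer loop
    x = pd_inner(A, b, w, eta, tau, sigma, 200, x)

tt = np.linspace(-1.0, 1.0, 1000)
print("max pointwise error:", np.max(np.abs(C.chebvander(tt, N - 1) @ x - f(tt))))

The step sizes obey the standard Chambolle-Pock convergence condition tau * sigma * ||A||_2^2 < 1; each restart simply warm-starts the primal variable at the previous iterate, whereas the paper's restart scheme is more involved.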

