An adaptive sampling and domain learning strategy for multivariate function approximation on unknown domains

01/31/2022
by   Ben Adcock, et al.

Many problems in computational science and engineering can be described in terms of approximating a smooth function of d variables, defined over an unknown domain of interest Ω⊂ℝ^d, from sample data. Here both the curse of dimensionality (d≫1) and the lack of domain knowledge — with Ω potentially irregular and/or disconnected — are confounding factors for sampling-based methods. Naïve approaches often lead to wasted samples and inefficient approximation schemes; for example, uniform sampling can result in upwards of 20% wasted samples in some problems. In surrogate model construction for computational uncertainty quantification (UQ), where each sample may be expensive to compute, more efficient sampling procedures are essential.

In recent years, methods for computing such approximations from sample data have been studied in the case of irregular domains, and the advantages of computing sampling measures tailored to an approximation space P of dimension dim(P)=N have been demonstrated. In particular, such methods confer stability and well-conditioning, with a sample complexity of 𝒪(N log(N)). The recently proposed adaptive sampling for general domains (ASGD) strategy is one method for constructing these sampling measures.

The main contribution of this paper is to improve ASGD by adaptively updating the sampling measures over unknown domains. We achieve this by first introducing a general domain adaptivity strategy (GDAS), which approximates both the function and the domain of interest from sample points. Second, we propose adaptive sampling for unknown domains (ASUD), which generates sampling measures over a domain that may not be known in advance. We then derive least-squares techniques for polynomial approximation on unknown domains. Numerical results show that the ASUD approach can reduce the computational cost by as much as 50% compared with uniform sampling.
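To make the general idea concrete, the following is a minimal sketch of the kind of measure-adapted weighted least-squares approximation the abstract refers to. It is not the authors' ASUD/GDAS algorithm: it assumes the domain indicator is already known, draws samples from a discrete Christoffel-type density built from an orthonormalized polynomial basis on candidate points inside an irregular (annular) domain, and solves a reweighted least-squares problem. All function names and parameters here are illustrative.

```python
import numpy as np
from numpy.polynomial.legendre import legval


def legendre_basis_2d(pts, deg):
    """Tensor Legendre basis of total degree <= deg, evaluated at pts (n, 2)."""
    cols = []
    for i in range(deg + 1):
        for j in range(deg + 1 - i):
            ci = np.zeros(i + 1); ci[i] = 1.0
            cj = np.zeros(j + 1); cj[j] = 1.0
            cols.append(legval(pts[:, 0], ci) * legval(pts[:, 1], cj))
    return np.column_stack(cols)


def christoffel_weighted_lsq(f, in_domain, deg=5, n_cand=20000, n_samples=400, seed=0):
    """Sample from a discrete Christoffel-type density on Ω and fit by weighted LSQ."""
    rng = np.random.default_rng(seed)
    # Candidate pool: uniform points on the bounding box, keep those inside Ω.
    cand = rng.uniform(-1, 1, size=(n_cand, 2))
    cand = cand[in_domain(cand)]
    # Orthonormalize the basis on the discrete domain; row norms of Q give
    # the (inverse) Christoffel-function values, our sampling density.
    A = legendre_basis_2d(cand, deg)
    Q, _ = np.linalg.qr(A)
    k = np.sum(Q**2, axis=1)
    idx = rng.choice(len(cand), size=n_samples, replace=True, p=k / k.sum())
    X = cand[idx]
    # Reweight so the least-squares problem targets the uniform measure on Ω.
    w = 1.0 / np.sqrt(k[idx])
    coef, *_ = np.linalg.lstsq(legendre_basis_2d(X, deg) * w[:, None], f(X) * w,
                               rcond=None)
    return X, coef
```

Usage: with `in_domain` an indicator for an annulus and `f` a smooth target, `legendre_basis_2d(T, deg) @ coef` evaluates the approximation at test points `T`. The annulus stands in for the irregular, non-tensor-product domains discussed above; ASUD additionally learns `in_domain` itself from the samples.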


