An Adaptive Sampling Approach for the Reduced Basis Method

10/01/2019
by Sridhar Chellappa, et al.

The offline time of the reduced basis method can be very long when the training set contains a large number of parameter samples, which typically occurs for systems with more than two independent parameters. On the other hand, if the training set contains too few parameter samples, the greedy algorithm may produce a reduced-order model (ROM) with large errors at parameters outside the training set. We introduce a method based on a surrogate error model to efficiently sample the parameter domain, so that the training set is adaptively updated starting from a coarse set with a small number of parameter samples. A sharp a posteriori error estimator is evaluated on the coarse training set, and radial basis functions are used to interpolate the error estimator over a separate, fine training set. At every iteration, points from the fine training set are added to the coarse training set according to a user-defined criterion; in parallel, parameter samples whose estimated error satisfies a prescribed tolerance are adaptively removed from the coarse training set. The approach is shown to avoid high computational costs by keeping the training set compact, while providing a reduced-order model with guaranteed accuracy over the entire parameter domain.
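
As a concrete illustration of one adaptation step, the sketch below interpolates coarse-set error estimates onto a fine candidate set with radial basis functions, promotes the worst candidates into the coarse set, and retires samples that already meet the tolerance. It is a minimal Python sketch built on SciPy's RBFInterpolator; the function name adapt_training_set, the toy estimator, the choice to interpolate log-errors, and the "n_add largest predicted errors" refinement criterion are illustrative assumptions, not the paper's exact algorithm or its sharp a posteriori error estimator.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def adapt_training_set(coarse_set, fine_set, estimate_error, tol=1e-4, n_add=5):
    """One adaptation step for the RBM training set (hypothetical sketch).

    coarse_set, fine_set : (n, p) arrays of parameter samples
    estimate_error       : callable returning a positive error estimate for
                           one parameter sample (stand-in for the paper's
                           a posteriori error estimator)
    """
    # Evaluate the error estimator only on the small coarse set.
    coarse_err = np.array([estimate_error(mu) for mu in coarse_set])

    # RBF surrogate of the error over the parameter domain; interpolating
    # log-errors keeps the prediction positive after exponentiating.
    surrogate = RBFInterpolator(coarse_set, np.log(coarse_err))
    fine_err = np.exp(surrogate(fine_set))

    # User-defined criterion (one possible choice): promote the n_add
    # fine-set samples with the largest predicted error.
    worst = np.argsort(fine_err)[-n_add:]
    promoted = fine_set[worst]
    fine_set = np.delete(fine_set, worst, axis=0)

    # Adaptively remove coarse samples whose error meets the tolerance.
    coarse_set = np.vstack([coarse_set[coarse_err >= tol], promoted])
    return coarse_set, fine_set

# Toy usage: 3 parameters, a placeholder estimator. In the full method,
# the greedy algorithm would rebuild the ROM between adaptation steps.
rng = np.random.default_rng(seed=0)
coarse = rng.uniform(size=(10, 3))
fine = rng.uniform(size=(500, 3))
coarse, fine = adapt_training_set(
    coarse, fine, estimate_error=lambda mu: 1e-3 * (1.0 + np.sum(mu**2))
)
```

In the complete offline loop, this step would alternate with greedy basis enrichment, so the coarse set stays compact while the RBF surrogate screens the much larger fine set at negligible cost.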


Related research

03/05/2020 · Adaptive Interpolatory MOR by Learning the Error Estimator in the Parameter Domain
Interpolatory methods offer a powerful framework for generating reduced-...

03/10/2021 · A Training Set Subsampling Strategy for the Reduced Basis Method
We present a subsampling strategy for the offline stage of the Reduced B...

11/06/2019 · Multi-level adaptive greedy algorithms for the reduced basis method
The reduced basis method (RBM) empowers repeated and rapid evaluation of...

09/07/2018 · Simple coarse graining and sampling strategies for image recognition
A conceptually simple way to recognize images is to directly compare tes...

03/20/2022 · Over-parameterization: A Necessary Condition for Models that Extrapolate
In this work, we study over-parameterization as a necessary condition fo...

12/24/2020 · Generalization in portfolio-based algorithm selection
Portfolio-based algorithm selection has seen tremendous practical succes...

10/10/2022 · DALE: Differential Accumulated Local Effects for efficient and accurate global explanations
Accumulated Local Effect (ALE) is a method for accurately estimating fea...
