
Randomized residual-based error estimators for the Proper Generalized Decomposition approximation of parametrized problems

by Kathrin Smetana et al.

This paper introduces a novel error estimator for the Proper Generalized Decomposition (PGD) approximation of parametrized equations. The estimator is intrinsically random: it builds on concentration inequalities for Gaussian maps and an adjoint problem with a random right-hand side, which we approximate using the PGD. The effectivity of this randomized error estimator can be made arbitrarily close to unity with high probability, allowing estimation of the error with respect to any user-defined norm as well as the error in a quantity of interest. The performance of the error estimator is demonstrated and compared with existing error estimators for the PGD on a parametrized time-harmonic elastodynamics problem and on the parametrized equations of linear elasticity with a high-dimensional parameter space.
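The core mechanism described in the abstract can be sketched for a plain linear system. The sketch below is an illustration under simplifying assumptions, not the authors' implementation: it drops the PGD and parameter dependence and works with a single dense system A u = f. It uses the duality identity z^T e = y^T r, where e is the error, r the residual, and y solves the adjoint problem A^T y = z with a Gaussian random right-hand side z; averaging (z^T e)^2 over independent Gaussian draws concentrates around the squared error norm, which is the concentration-of-Gaussian-maps idea the abstract refers to. The names `randomized_error_estimator`, `u_approx`, and `K` are all hypothetical.

```python
# Hedged sketch: randomized residual-based error estimation for A u = f.
# Identity used: z^T e = z^T A^{-1} r = y^T r, where A^T y = z.
# With z_i ~ N(0, I), the mean of (z_i^T e)^2 concentrates around ||e||_2^2.
import numpy as np

def randomized_error_estimator(A, u_approx, f, K=10, rng=None):
    """Estimate ||u_exact - u_approx||_2 from K Gaussian adjoint samples."""
    rng = np.random.default_rng(rng)
    r = f - A @ u_approx                      # computable residual
    n = len(f)
    samples = []
    for _ in range(K):
        z = rng.standard_normal(n)            # random right-hand side
        y = np.linalg.solve(A.T, z)           # adjoint solve A^T y = z
        samples.append((y @ r) ** 2)          # equals (z^T e)^2 by duality
    return np.sqrt(np.mean(samples))

# Usage on a small synthetic system: the estimate tracks the true error.
rng = np.random.default_rng(0)
n = 50
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
u_exact = rng.standard_normal(n)
f = A @ u_exact
u_approx = u_exact + 0.01 * rng.standard_normal(n)
true_err = np.linalg.norm(u_exact - u_approx)
est = randomized_error_estimator(A, u_approx, f, K=20, rng=1)
```

In the paper's setting the adjoint solves themselves are replaced by cheap PGD approximations, which is what makes the estimator affordable in the many-query parametrized context; here each adjoint solve is done exactly, so the sketch only illustrates the randomization, not the cost savings.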
