Principal Feature Detection via ϕ-Sobolev Inequalities

05/10/2023
by Matthew T. C. Li, et al.

We investigate the approximation of high-dimensional target measures as low-dimensional updates of a dominating reference measure. This approximation class replaces the associated density with the composition of: (i) a feature map that identifies the leading principal components, or features, of the target measure relative to the reference, and (ii) a low-dimensional profile function. When the reference measure satisfies a subspace ϕ-Sobolev inequality, we construct a computationally tractable approximation that yields certifiable error guarantees with respect to the Amari α-divergences. Our construction proceeds in two stages. First, for any feature map and any α-divergence, we obtain an analytical expression for the optimal profile function. Second, for linear feature maps, the principal features are obtained from eigenvectors of a matrix involving gradients of the log-density. Neither step requires explicit access to normalizing constants. Notably, by leveraging the ϕ-Sobolev inequalities, we demonstrate that these features universally certify approximation errors across the range of α-divergences α ∈ (0,1]. We then propose an application to Bayesian inverse problems and provide an analogous construction with approximation guarantees that hold in expectation over the data. We conclude with an extension of the proposed dimension reduction strategy to nonlinear feature maps.
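The second stage described above suggests a concrete numerical recipe: estimate a matrix of the form H = E[∇log(dπ/dμ)(X) ∇log(dπ/dμ)(X)ᵀ] by Monte Carlo, using only gradients of the unnormalized log-density ratio (any normalizing constant vanishes under the gradient), and take the leading eigenvectors of H as the linear feature directions. The following is a minimal sketch under those assumptions, not the authors' implementation; the function names and the choice of sampling measure in the demo are hypothetical.

```python
import numpy as np

def principal_features(grad_log_ratio, samples, r):
    """Estimate the leading r linear feature directions from samples.

    grad_log_ratio : callable x -> gradient of log(dπ/dμ) at x; an
        unnormalized density ratio suffices, since additive constants
        in the log-density have zero gradient.
    samples : (n, d) array of points; which measure they are drawn from
        (target or reference) depends on the variant of the construction.
    r : number of features to retain.
    """
    grads = np.stack([grad_log_ratio(x) for x in samples])   # (n, d)
    # Monte Carlo estimate of H = E[∇log(dπ/dμ) ∇log(dπ/dμ)^T].
    H = grads.T @ grads / len(samples)
    # Eigendecomposition of the symmetric matrix H, sorted by
    # decreasing eigenvalue; the top-r eigenvectors span the feature subspace.
    eigvals, eigvecs = np.linalg.eigh(H)
    order = np.argsort(eigvals)[::-1]
    return eigvecs[:, order[:r]], eigvals[order]

# Demo: a standard Gaussian reference perturbed along one direction.
# Here log(dπ/dμ)(x) = -(v·x)^2 + const, so its gradient is -2(v·x)v
# and the leading feature direction should recover v (up to sign).
rng = np.random.default_rng(0)
d, n = 10, 5000
v = np.zeros(d)
v[0] = 1.0
grad = lambda x: -2.0 * (v @ x) * v
xs = rng.standard_normal((n, d))
U, lam = principal_features(grad, xs, r=1)
print(U[:, 0])  # ≈ ±e_1
```

In this toy demo the perturbation depends on a single direction, so H has rank one and a single feature captures the target's deviation from the reference; in general, the decay of the eigenvalues indicates how many features are needed for a given certified error level.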


