From Smooth Wasserstein Distance to Dual Sobolev Norm: Empirical Approximation and Statistical Applications

01/11/2021 ∙ by Sloan Nietert, et al.
Statistical distances, i.e., discrepancy measures between probability distributions, are ubiquitous in probability theory, statistics, and machine learning. To combat the curse of dimensionality when estimating these distances from data, recent work has proposed smoothing out local irregularities in the measured distributions via convolution with a Gaussian kernel. Motivated by the scalability of the smooth framework to high dimensions, we conduct an in-depth study of the structural and statistical behavior of the Gaussian-smoothed p-Wasserstein distance W_p^(σ), for arbitrary p ≄ 1. We start by showing that W_p^(σ) admits a metric structure that is topologically equivalent to classic W_p and is stable with respect to perturbations in σ. Moving to statistical questions, we explore the asymptotic properties of W_p^(σ)(μ̂_n, μ), where μ̂_n is the empirical distribution of n i.i.d. samples from μ. To that end, we prove that W_p^(σ) is controlled by a pth-order smooth dual Sobolev norm d_p^(σ). Since d_p^(σ)(μ̂_n, μ) coincides with the supremum of an empirical process indexed by Gaussian-smoothed Sobolev functions, it lends itself well to analysis via empirical process theory. We derive the limit distribution of √n d_p^(σ)(μ̂_n, μ) in all dimensions d, when μ is sub-Gaussian. Through the aforementioned bound, this implies a parametric empirical convergence rate of n^{-1/2} for W_p^(σ), contrasting the n^{-1/d} rate for unsmoothed W_p when d ≄ 3. As applications, we provide asymptotic guarantees for two-sample testing and minimum distance estimation. When p = 2, we further show that d_2^(σ) can be expressed as a maximum mean discrepancy.
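The smoothed distance studied here is W_p^(σ)(μ, ν) = W_p(μ * N_σ, ν * N_σ), i.e., the classic p-Wasserstein distance after both measures are convolved with an isotropic Gaussian N(0, σ²I). A minimal Monte Carlo sketch of this idea in d = 1 (where W_1 has a closed form that SciPy computes): convolving an empirical measure with N(0, σ²) amounts to adding independent Gaussian noise to each sample point. The large reference sample standing in for μ below is an assumption of this illustration, not part of the paper.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def smooth_w1(x, y, sigma, rng):
    """Monte Carlo sketch of the Gaussian-smoothed 1-Wasserstein
    distance W_1^(sigma) between two 1-d empirical samples.

    Convolving an empirical measure with N(0, sigma^2) is equivalent
    in distribution to adding i.i.d. Gaussian noise to each sample,
    so we compare the noise-perturbed samples under classic W_1.
    """
    x_s = x + sigma * rng.standard_normal(x.shape)
    y_s = y + sigma * rng.standard_normal(y.shape)
    return wasserstein_distance(x_s, y_s)

rng = np.random.default_rng(0)
n = 2000
mu_hat = rng.standard_normal(n)        # empirical sample from mu = N(0, 1)
mu_ref = rng.standard_normal(200_000)  # large proxy sample for mu itself
d = smooth_w1(mu_hat, mu_ref, sigma=1.0, rng=rng)
print(f"W_1^(sigma) estimate: {d:.4f}")  # shrinks roughly like n^{-1/2}
```

Note this estimates the smoothed measures by sampling rather than computing the convolution exactly; the added noise is what regularizes the empirical measure and yields the parametric n^{-1/2} rate in high dimensions.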

Related research

∙ 02/03/2020: Limit Distribution Theory for Smooth Wasserstein Distance with Applications to Generative Modeling
∙ 03/01/2022: Limit distribution theory for smooth p-Wasserstein distances
∙ 02/16/2021: From Majorization to Interpolation: Distributionally Robust Learning using Kernel Smoothing
∙ 05/27/2019: Distributionally Robust Optimization and Generalization in Kernel Methods
∙ 07/28/2021: Limit Distribution Theory for the Smooth 1-Wasserstein Distance with Applications
∙ 02/03/2020: Limit Distribution for Smooth Total Variation and χ²-Divergence in High Dimensions
∙ 12/01/2021: Controlling Wasserstein distances by Kernel norms with application to Compressive Statistical Learning
