A Concentration Result of Estimating Phi-Divergence using Data Dependent Partition

01/02/2018
by Fengqiao Luo, et al.

Estimating the ϕ-divergence between two unknown probability distributions from empirical data is a fundamental problem in information theory and statistical learning. We consider a multivariate generalization of the data-dependent partitioning method for estimating the divergence between two unknown distributions. Under the assumption that the distributions satisfy a power-law decay condition, we establish a convergence-rate result: a bound on the number of samples and hyper-rectangles required to ensure that the estimation error stays below a given level with a given probability.
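To make the idea concrete, here is a minimal one-dimensional sketch of a data-dependent-partition divergence estimator. It is not the paper's multivariate hyper-rectangle scheme: the cells are chosen from empirical quantiles of the first sample so that each cell holds roughly the same number of points, and the plug-in estimate uses ϕ(t) = t log t, which recovers the KL divergence as a special case of the ϕ-divergence. The function name and bin count are illustrative choices, not from the paper.

```python
import numpy as np

def estimate_kl_partition(x, y, n_bins=10, eps=1e-12):
    """Plug-in KL(P || Q) estimate from 1-D samples x ~ P, y ~ Q.

    A data-dependent partition: interior cell boundaries are the
    empirical quantiles of x, so each cell holds roughly len(x)/n_bins
    of the x-samples. This is a hedged sketch, not the multivariate
    hyper-rectangle construction analyzed in the paper.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    # Interior edges from quantiles of x (n_bins cells overall).
    interior = np.quantile(x, np.linspace(0.0, 1.0, n_bins + 1))[1:-1]
    # Empirical cell probabilities for each sample.
    p = np.bincount(np.searchsorted(interior, x), minlength=n_bins) / len(x)
    q = np.bincount(np.searchsorted(interior, y), minlength=n_bins) / len(y)
    # Plug-in phi-divergence with phi(t) = t*log(t), i.e. KL divergence.
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / (q[mask] + eps))))
```

With identically distributed samples the estimate concentrates near zero, while shifted distributions yield a clearly positive value; the concentration result in the paper quantifies how many samples and cells are needed for such behavior at a prescribed error level and confidence.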


Related research:

- Multivariate f-Divergence Estimation With Confidence (11/07/2014). The problem of f-divergence estimation is important in the fields of mac...
- Convergence Rates for Empirical Estimation of Binary Classification Bounds (10/01/2018). Bounding the best achievable error probability for binary classification...
- Local Distribution Obfuscation via Probability Coupling (07/13/2019). We introduce a general model for the local obfuscation of probability di...
- Neural Estimation of Statistical Divergences (10/07/2021). Statistical divergences (SDs), which quantify the dissimilarity between ...
- Concentration of the multinomial in Kullback-Leibler divergence near the ratio of alphabet and sample sizes (04/04/2019). We bound the moment generating function of the Kullback-Leibler divergen...
- Non-Asymptotic Performance Guarantees for Neural Estimation of 𝖿-Divergences (03/11/2021). Statistical distances (SDs), which quantify the dissimilarity between pr...
- Scaling of Model Approximation Errors and Expected Entropy Distances (07/14/2012). We compute the expected value of the Kullback-Leibler divergence to vari...
