Minimax Optimal Estimation of KL Divergence for Continuous Distributions

by Puning Zhao, et al.

Estimating Kullback-Leibler divergence from independent and identically distributed (i.i.d.) samples is an important problem in various domains. One simple and effective estimator is based on the k nearest neighbor distances among these samples. In this paper, we analyze the convergence rates of the bias and variance of this estimator. Furthermore, we derive a lower bound on the minimax mean square error and show that the kNN method is asymptotically rate optimal.
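The estimator the abstract refers to can be sketched in a few lines. The following is a minimal illustration of the standard kNN-based KL divergence estimator (in the style of Wang, Kulkarni, and Verdú): for each sample from P, compare its k-th nearest neighbor distance among the P-samples with its k-th nearest neighbor distance among the Q-samples. This is a brute-force sketch for illustration, not the paper's exact construction; the function name and interface are our own.

```python
import numpy as np

def knn_kl_divergence(x, y, k=1):
    """Estimate D(P || Q) from samples x ~ P (n x d) and y ~ Q (m x d).

    Classical kNN estimator:
        D_hat = (d/n) * sum_i log(nu_k(i) / rho_k(i)) + log(m / (n - 1))
    where rho_k(i) is the distance from x_i to its k-th nearest neighbor
    in x \ {x_i}, and nu_k(i) is the distance to its k-th nearest
    neighbor in y.
    """
    x = np.atleast_2d(x)
    y = np.atleast_2d(y)
    n, d = x.shape
    m = y.shape[0]

    # rho_k(i): k-th NN distance within the P-sample, excluding the point itself
    dx = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    np.fill_diagonal(dx, np.inf)
    rho = np.sort(dx, axis=1)[:, k - 1]

    # nu_k(i): k-th NN distance from each P-sample to the Q-sample
    dy = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1)
    nu = np.sort(dy, axis=1)[:, k - 1]

    return d / n * np.sum(np.log(nu / rho)) + np.log(m / (n - 1))
```

For example, with x drawn from N(0, 1) and y from N(1, 1), the estimate should approach the true divergence of 1/2 as the sample size grows. Larger k trades bias for lower variance, which is the regime the paper's bias/variance analysis quantifies.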


The Nearest Neighbor Information Estimator is Adaptively Near Minimax Rate-Optimal

We analyze the Kozachenko--Leonenko (KL) nearest neighbor estimator for ...

A New Lower Bound for Kullback-Leibler Divergence Based on Hammersley-Chapman-Robbins Bound

In this paper, we derive a useful lower bound for the Kullback-Leibler d...

Tight MMSE Bounds for the AGN Channel Under KL Divergence Constraints on the Input Distribution

Tight bounds on the minimum mean square error for the additive Gaussian ...

Direct Estimation of Information Divergence Using Nearest Neighbor Ratios

We propose a direct estimation method for Rényi and f-divergence measure...

Minimax Optimal Additive Functional Estimation with Discrete Distribution: Slow Divergence Speed Case

This paper addresses an estimation problem of an additive functional of ...

Improving Bridge estimators via f-GAN

Bridge sampling is a powerful Monte Carlo method for estimating ratios o...

On Estimating L_2^2 Divergence

We give a comprehensive theoretical characterization of a nonparametric ...