
Formal Limitations on the Measurement of Mutual Information

by David McAllester, et al.

Motivated by applications to unsupervised learning, we consider the problem of measuring mutual information. Recent analysis has shown that naive kNN estimators of mutual information have serious statistical limitations, motivating more refined methods. In this paper we prove that serious statistical limitations are inherent to any measurement method. More specifically, we show that any distribution-free high-confidence lower bound on mutual information cannot be larger than O(ln N), where N is the size of the data sample. We also analyze the Donsker-Varadhan lower bound on KL divergence in particular and show that, when simple statistical considerations are taken into account, this bound can never produce a high-confidence value larger than ln N. While large high-confidence lower bounds are impossible, in practice one can use estimators without formal guarantees. We suggest expressing mutual information as a difference of entropies and using cross-entropy as an entropy estimator. We observe that, although cross-entropy is only an upper bound on entropy, cross-entropy estimates converge to the true cross-entropy at the rate of 1/√(N).
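The difference-of-entropies idea can be illustrated with a toy sketch: estimate I(X;Y) = H(Y) - H(Y|X), replacing each entropy with the cross-entropy of a model fit on one sample and scored on another. This is an illustrative reconstruction, not the paper's experimental setup; the smoothed count models, the smoothing constant `alpha`, and the synthetic noisy-bit data are all assumptions made for the example.

```python
import math
import random
from collections import Counter, defaultdict

def cross_entropy(prob_fn, samples):
    """Average negative log-likelihood of samples under a model, in nats."""
    return -sum(math.log(prob_fn(s)) for s in samples) / len(samples)

def mi_estimate(train, test, alpha=0.5):
    """Estimate I(X;Y) = H(Y) - H(Y|X), replacing each entropy with the
    cross-entropy of a smoothed empirical model fit on `train` and scored
    on `test`.  Cross-entropy only upper-bounds entropy, so this estimate
    carries no formal guarantee (it is a sketch, not the paper's method)."""
    ys = [y for _, y in train]
    y_vocab = set(ys) | {y for _, y in test}
    V = len(y_vocab)

    # Marginal model q(y): add-alpha smoothed counts from the training set.
    yc, n = Counter(ys), len(ys)
    q_y = lambda y: (yc[y] + alpha) / (n + alpha * V)

    # Conditional model q(y|x): one smoothed count table per observed x.
    cond, xc = defaultdict(Counter), Counter()
    for x, y in train:
        cond[x][y] += 1
        xc[x] += 1
    q_y_given_x = lambda xy: (cond[xy[0]][xy[1]] + alpha) / (xc[xy[0]] + alpha * V)

    ce_marginal = cross_entropy(q_y, [y for _, y in test])       # ≈ H(Y) + bias
    ce_conditional = cross_entropy(q_y_given_x, test)            # ≈ H(Y|X) + bias
    return ce_marginal - ce_conditional

# Synthetic data: Y copies a uniform bit X with probability 0.9, so the
# true mutual information is ln 2 - H_b(0.9) ≈ 0.37 nats.
rng = random.Random(0)
def sample(n):
    out = []
    for _ in range(n):
        x = rng.randint(0, 1)
        y = x if rng.random() < 0.9 else 1 - x
        out.append((x, y))
    return out

train, test = sample(20000), sample(20000)
print(mi_estimate(train, test))  # roughly 0.37 nats on this data
```

Because both entropy terms are replaced by cross-entropies, the two biases partially cancel but do not yield a one-sided bound, which is consistent with the paper's point that estimates without formal guarantees are what remain practical.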



