MINE: Mutual Information Neural Estimation

by Ishmael Belghazi et al.

We argue that the mutual information between high-dimensional continuous random variables can be estimated by gradient descent over neural networks. This paper presents a Mutual Information Neural Estimator (MINE) that scales linearly in both dimensionality and sample size. MINE is trainable through back-propagation, and we prove that it is strongly consistent. We illustrate a handful of applications in which MINE is successfully applied to improve the properties of generative models in both unsupervised and supervised settings. We also apply our framework to estimate the information bottleneck and use it in supervised classification tasks. Our results demonstrate substantial added flexibility and improvement in these settings.
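The core idea can be illustrated with a small sketch. MINE maximizes the Donsker-Varadhan lower bound I(X;Y) >= E_p[T] - log E_q[e^T] by gradient ascent over a parametric statistics network T. The sketch below is an assumption-laden toy version: instead of a deep network it uses a quadratic critic T(x, y) = a*xy + b*x^2 + c*y^2 (which happens to contain the optimal critic for correlated Gaussians), so the whole training loop fits in plain numpy; the paper's method uses the same objective with a neural network and back-propagation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, rho = 5000, 0.8

# Correlated standard Gaussians: the true MI is -0.5 * log(1 - rho^2) nats.
x = rng.standard_normal(n)
y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)
y_shuf = rng.permutation(y)  # shuffling y gives samples from the product of marginals

def feats(x, y):
    # Feature map of the toy quadratic critic T(x, y) = theta . feats(x, y)
    return np.stack([x * y, x**2, y**2], axis=1)

F_joint, F_marg = feats(x, y), feats(x, y_shuf)
theta = np.zeros(3)
mean_joint = F_joint.mean(axis=0)

for _ in range(3000):
    t_marg = F_marg @ theta
    w = np.exp(t_marg - t_marg.max())      # numerically stable softmax weights
    w /= w.sum()
    # Gradient of the DV bound: E_p[dT] - E_q[e^T dT] / E_q[e^T]
    grad = mean_joint - w @ F_marg
    theta += 0.02 * grad                   # gradient ascent on the lower bound

t_joint, t_marg = F_joint @ theta, F_marg @ theta
log_mean_exp = t_marg.max() + np.log(np.mean(np.exp(t_marg - t_marg.max())))
mi_est = t_joint.mean() - log_mean_exp     # DV estimate of I(X;Y)
true_mi = -0.5 * np.log(1 - rho**2)
```

The objective is concave in the critic parameters here (the critic is linear in theta), so plain gradient ascent converges; with a neural network critic the same loop is run with back-propagated gradients and minibatch estimates of the two expectations.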


Related papers:

- Renormalized Mutual Information for Extraction of Continuous Features: "We derive a well-defined renormalized version of mutual information that..."
- Data-Efficient Mutual Information Neural Estimator: "Measuring Mutual Information (MI) between high-dimensional, continuous, ..."
- Mutual Information Gradient Estimation for Representation Learning: "Mutual Information (MI) plays an important role in representation learni..."
- Regularized Mutual Information Neural Estimation: "With the variational lower bound of mutual information (MI), the estimat..."
- Estimating mutual information in high dimensions via classification error: "Multivariate pattern analyses approaches in neuroimaging are fundamental..."
- High-Dimensional Smoothed Entropy Estimation via Dimensionality Reduction: "We study the problem of overcoming exponential sample complexity in diff..."
