
A robust estimator of mutual information for deep learning interpretability

by Davide Piras et al.

We develop the use of mutual information (MI), a well-established metric in information theory, to interpret the inner workings of deep learning models. To accurately estimate MI from a finite number of samples, we present GMM-MI (pronounced “Jimmie”), an algorithm based on Gaussian mixture models that can be applied to both discrete and continuous settings. GMM-MI is computationally efficient and robust to the choice of hyperparameters, and it provides the uncertainty on the MI estimate due to the finite sample size. We extensively validate GMM-MI on toy data for which the ground-truth MI is known, comparing its performance against established mutual information estimators. We then demonstrate the use of our MI estimator in the context of representation learning, working with synthetic data and physical datasets describing highly non-linear processes. We train deep learning models to encode high-dimensional data within a meaningful compressed (latent) representation, and use GMM-MI to quantify both the level of disentanglement between the latent variables and their association with relevant physical quantities, thus unlocking the interpretability of the latent representation. We make GMM-MI publicly available.
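The core idea of a GMM-based MI estimator can be sketched with scikit-learn's `GaussianMixture`: fit a mixture to the joint samples, marginalize it analytically, and average the pointwise log-density ratio. This is an illustrative toy on Gaussian data with known ground truth, not the GMM-MI package itself (which additionally selects the number of components and bootstraps an uncertainty); all variable names here are the sketch's own.

```python
import numpy as np
from scipy.special import logsumexp
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

# Toy data: bivariate Gaussian with correlation rho, for which the
# ground-truth MI is known analytically: MI = -0.5 * ln(1 - rho^2) nats.
rng = np.random.default_rng(0)
rho = 0.8
data = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=5000)

# Fit a GMM to the joint samples (n_components is a hyperparameter;
# it is fixed here for brevity rather than chosen by validation).
gmm = GaussianMixture(n_components=3, n_init=3, random_state=0).fit(data)

def marginal_logpdf(gmm, points, dim):
    """Log-density of the GMM marginal along one dimension.

    Marginalizing a Gaussian mixture is exact: keep the component
    weights and slice out the relevant mean and variance.
    """
    comp_logps = np.stack([
        norm.logpdf(points,
                    loc=gmm.means_[k, dim],
                    scale=np.sqrt(gmm.covariances_[k, dim, dim]))
        for k in range(gmm.n_components)
    ])
    return logsumexp(comp_logps + np.log(gmm.weights_)[:, None], axis=0)

# Monte Carlo estimate of MI = E[log p(x, y) - log p(x) - log p(y)]
# under the fitted density, evaluated on fresh samples from the GMM.
samples, _ = gmm.sample(20000)
log_joint = gmm.score_samples(samples)
log_px = marginal_logpdf(gmm, samples[:, 0], 0)
log_py = marginal_logpdf(gmm, samples[:, 1], 1)
mi_nats = float(np.mean(log_joint - log_px - log_py))

true_mi = -0.5 * np.log(1.0 - rho ** 2)  # about 0.511 nats
print(f"GMM estimate: {mi_nats:.3f} nats, ground truth: {true_mi:.3f} nats")
```

Because a GMM marginal is itself a (one-dimensional) GMM, both the joint and the marginals are evaluated in closed form; only the expectation is approximated by Monte Carlo.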


Estimating Information-Theoretic Quantities with Random Forests

Information-theoretic quantities, such as mutual information and conditi...

MINE: Mutual Information Neural Estimation

We argue that the estimation of the mutual information between high dime...

Efficient Estimation of Mutual Information for Strongly Dependent Variables

We demonstrate that a popular class of nonparametric mutual information ...

Mutual Information Gradient Estimation for Representation Learning

Mutual Information (MI) plays an important role in representation learni...

Learning Bias-Invariant Representation by Cross-Sample Mutual Information Minimization

Deep learning algorithms mine knowledge from the training data and thus ...

Explaining Representation by Mutual Information

Science is used to discover the law of world. Machine learning can be us...

Representation Learning with Information Theory for COVID-19 Detection

Successful data representation is a fundamental factor in machine learni...

Code Repositories


Code to calculate mutual information (MI) distribution with Gaussian mixture models (GMMs)
