DiME: Maximizing Mutual Information by a Difference of Matrix-Based Entropies

01/19/2023
by Oscar Skean et al.

We introduce an information-theoretic quantity with properties similar to those of mutual information that can be estimated from data without making explicit assumptions about the underlying distribution. This quantity is based on a recently proposed matrix-based entropy that uses the eigenvalues of a normalized Gram matrix to estimate the eigenvalues of an uncentered covariance operator in a reproducing kernel Hilbert space. We show that a difference of matrix-based entropies (DiME) is well suited to problems involving the maximization of mutual information between random variables. While many methods for such tasks can collapse to trivial solutions, DiME naturally penalizes such outcomes. We provide several use cases for the proposed quantity, including a multi-view representation learning problem in which DiME encourages learning a shared representation among views with high mutual information. We also demonstrate the versatility of DiME by using it as the objective function for a variety of tasks.
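To make the abstract concrete, the following sketch shows how the matrix-based entropy is typically computed from data: build a Gram matrix with a kernel, normalize it to unit trace, and apply a Rényi α-entropy to its eigenvalues. A mutual-information-like quantity can then be formed from entropies of the individual and joint (Hadamard-product) Gram matrices, following the matrix-based entropy literature. This is a minimal illustration of the underlying quantities, not the paper's exact DiME objective; the kernel bandwidth, α value, and function names here are illustrative choices.

```python
import numpy as np

def gram_rbf(X, sigma=1.0):
    # Gram matrix with an RBF (Gaussian) kernel; bandwidth sigma is an assumption.
    sq = np.sum(X**2, axis=1)
    dists = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-dists / (2.0 * sigma**2))

def matrix_entropy(K, alpha=1.01):
    # Matrix-based Renyi alpha-entropy: normalize the Gram matrix to unit trace,
    # then apply the alpha-entropy to its eigenvalue spectrum.
    A = K / np.trace(K)
    eigvals = np.clip(np.linalg.eigvalsh(A), 0.0, None)  # guard tiny negatives
    return (1.0 / (1.0 - alpha)) * np.log2(np.sum(eigvals**alpha))

def matrix_mutual_information(X, Y, alpha=1.01, sigma=1.0):
    # Mutual-information-like quantity: S(A) + S(B) - S(A o B), where the
    # joint Gram matrix is the Hadamard (entrywise) product of the marginals.
    Kx, Ky = gram_rbf(X, sigma), gram_rbf(Y, sigma)
    return (matrix_entropy(Kx, alpha)
            + matrix_entropy(Ky, alpha)
            - matrix_entropy(Kx * Ky, alpha))
```

Because every term is a smooth function of Gram-matrix eigenvalues, quantities of this form can be used directly as differentiable training objectives, which is what makes them attractive for the representation learning tasks described above.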


Related research:

- Factorized Mutual Information Maximization (06/13/2019): We investigate the sets of joint probability distributions that maximize...
- Multivariate Extension of Matrix-based Renyi's α-order Entropy Functional (08/23/2018): The matrix-based Renyi's α-order entropy functional was recently introdu...
- Exact mutual information for lognormal random variables (06/23/2023): Stochastic correlated observables with lognormal distribution are ubiqui...
- Neural Bayes: A Generic Parameterization Method for Unsupervised Representation Learning (02/20/2020): We introduce a parameterization method called Neural Bayes which allows...
- On Mutual Information Maximization for Representation Learning (07/31/2019): Many recent methods for unsupervised or self-supervised representation l...
- Information Potential Auto-Encoders (06/14/2017): In this paper, we suggest a framework to make use of mutual information...
- Minimal Achievable Sufficient Statistic Learning (05/19/2019): We introduce Minimal Achievable Sufficient Statistic (MASS) Learning, a...
