Gács-Körner Common Information Variational Autoencoder

05/24/2022
by Michael Kleinman, et al.

We propose a notion of common information that allows one to quantify and separate the information that is shared between two random variables from the information that is unique to each. Our notion of common information is a variational relaxation of the Gács-Körner common information, which we recover as a special case, but is more amenable to optimization and can be approximated empirically using samples from the underlying distribution. We then provide a method to partition and quantify the common and unique information using a simple modification of a traditional variational autoencoder. Empirically, we demonstrate that our formulation allows us to learn semantically meaningful common and unique factors of variation even on high-dimensional data such as images and videos. Moreover, on datasets where ground-truth latent factors are known, we show that we can accurately quantify the common information between the random variables. Additionally, we show that the autoencoder we learn recovers semantically meaningful disentangled factors of variation, even though we do not explicitly optimize for disentanglement.
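As a rough illustration of the setup the abstract describes (two views whose latents are split into a shared "common" block and per-view "unique" blocks), one might sketch something like the following in PyTorch. Everything here is an assumption for illustration only: the class name PairedVAE, the layer sizes, the Gaussian reconstruction loss, and the simple agreement penalty on the common blocks are placeholders, not the paper's architecture or objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PairedVAE(nn.Module):
    """Toy sketch (assumed, not the paper's model): each view gets a
    'common' and a 'unique' latent block; the common blocks of the two
    views are pushed to agree via an extra consistency penalty."""

    def __init__(self, x_dim=784, z_common=8, z_unique=8, hidden=256):
        super().__init__()
        out = 2 * (z_common + z_unique)  # means and log-variances
        self.enc1 = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU(), nn.Linear(hidden, out))
        self.enc2 = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU(), nn.Linear(hidden, out))
        self.dec1 = nn.Sequential(nn.Linear(z_common + z_unique, hidden), nn.ReLU(), nn.Linear(hidden, x_dim))
        self.dec2 = nn.Sequential(nn.Linear(z_common + z_unique, hidden), nn.ReLU(), nn.Linear(hidden, x_dim))
        self.zc = z_common

    def _split(self, stats):
        # Split encoder output into (common mu, unique mu, common logvar, unique logvar).
        mu, logvar = stats.chunk(2, dim=-1)
        return mu[:, :self.zc], mu[:, self.zc:], logvar[:, :self.zc], logvar[:, self.zc:]

    @staticmethod
    def _reparam(mu, logvar):
        return mu + torch.randn_like(mu) * (0.5 * logvar).exp()

    def forward(self, x1, x2):
        c1_mu, u1_mu, c1_lv, u1_lv = self._split(self.enc1(x1))
        c2_mu, u2_mu, c2_lv, u2_lv = self._split(self.enc2(x2))
        z1 = torch.cat([self._reparam(c1_mu, c1_lv), self._reparam(u1_mu, u1_lv)], dim=-1)
        z2 = torch.cat([self._reparam(c2_mu, c2_lv), self._reparam(u2_mu, u2_lv)], dim=-1)
        rec = F.mse_loss(self.dec1(z1), x1) + F.mse_loss(self.dec2(z2), x2)
        kl = sum(
            -0.5 * (1 + lv - mu.pow(2) - lv.exp()).sum(-1).mean()
            for mu, lv in [(c1_mu, c1_lv), (u1_mu, u1_lv), (c2_mu, c2_lv), (u2_mu, u2_lv)]
        )
        agree = F.mse_loss(c1_mu, c2_mu)  # encourage the common blocks to match
        return rec + kl + 10.0 * agree    # penalty weight 10.0 is arbitrary

# Quick smoke test on random data
model = PairedVAE()
x1, x2 = torch.rand(16, 784), torch.rand(16, 784)
model(x1, x2).backward()
```

The common/unique split and the agreement term stand in, very loosely, for the variational relaxation described in the abstract; the actual objective and how the Gács-Körner quantity is recovered are given in the paper itself.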


