Learning Discrete Structured Representations by Adversarially Maximizing Mutual Information

04/08/2020
by Karl Stratos, et al.

We propose learning discrete structured representations from unlabeled data by maximizing the mutual information between a structured latent variable and a target variable. Calculating mutual information is intractable in this setting. Our key technical contribution is an adversarial objective that can be used to tractably estimate mutual information assuming only the feasibility of cross entropy calculation. We develop a concrete realization of this general formulation with Markov distributions over binary encodings. We report critical and unexpected findings on practical aspects of the objective such as the choice of variational priors. We apply our model on document hashing and show that it outperforms current best baselines based on discrete and vector quantized variational autoencoders. It also yields highly compressed interpretable representations.
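To make the max-min structure concrete, below is a minimal, hypothetical sketch (not the paper's implementation) of adversarial mutual information estimation for a binary code Z = enc(Y): it writes I(Z; Y) = H(Z) - H(Z|Y), computes H(Z|Y) exactly from the encoder's per-bit Bernoulli probabilities, and replaces the intractable H(Z) with the cross entropy against a trainable variational prior that an adversary minimizes to tighten the estimate. The fully factorized prior, network sizes, variable names, and training loop are illustrative simplifications; the paper instead uses Markov distributions over the bit sequence.

```python
import torch
import torch.nn as nn

NUM_BITS, DOC_DIM, EPS = 16, 300, 1e-8   # illustrative sizes, not from the paper

class Encoder(nn.Module):
    """Maps a document vector y to per-bit Bernoulli probabilities p(z_i = 1 | y)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(DOC_DIM, 128), nn.ReLU(),
                                 nn.Linear(128, NUM_BITS))
    def forward(self, y):
        return torch.sigmoid(self.net(y))

# Variational prior q(z); fully factorized here for brevity
# (the paper uses a Markov factorization over the bits).
prior_logits = nn.Parameter(torch.zeros(NUM_BITS))

def cross_entropy_to_prior(bit_marginals):
    """CE(p_Z, q) for a factorized q, which only needs the per-bit marginals of Z."""
    q = torch.sigmoid(prior_logits)
    return -(bit_marginals * torch.log(q + EPS)
             + (1 - bit_marginals) * torch.log(1 - q + EPS)).sum()

def mi_estimate(enc, y):
    """CE(p_Z, q) - H(Z|Y): the adversary pushes CE(p_Z, q) down toward H(Z)."""
    probs = enc(y)                                        # [batch, NUM_BITS]
    h_z_given_y = -(probs * torch.log(probs + EPS)
                    + (1 - probs) * torch.log(1 - probs + EPS)).sum(-1).mean()
    return cross_entropy_to_prior(probs.mean(0)) - h_z_given_y

enc = Encoder()
opt_enc = torch.optim.Adam(enc.parameters(), lr=1e-3)
opt_prior = torch.optim.Adam([prior_logits], lr=1e-2)

y = torch.randn(32, DOC_DIM)                              # stand-in documents
for _ in range(200):
    # Adversary step: minimize CE(p_Z, q) so the estimate approaches H(Z).
    opt_prior.zero_grad()
    cross_entropy_to_prior(enc(y).detach().mean(0)).backward()
    opt_prior.step()
    # Encoder step: maximize the adversarially estimated mutual information.
    opt_enc.zero_grad()
    (-mi_estimate(enc, y)).backward()
    opt_enc.step()
```

The only distribution-specific requirement in this sketch is that the cross entropy to the prior be computable, which is the assumption stated in the abstract; everything else is standard gradient-based max-min training.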


Related research

- Variational Mutual Information Maximization Framework for VAE Latent Codes with Continuous and Discrete Priors (06/02/2020)
- MIM: Mutual Information Machine (10/08/2019)
- High Mutual Information in Representation Learning with Symmetric Variational Inference (10/04/2019)
- Entropy-Constrained Maximizing Mutual Information Quantization (01/07/2020)
- Pareto-optimal data compression for binary classification tasks (08/23/2019)
- Hierarchical Soft Actor-Critic: Adversarial Exploration via Mutual Information Optimization (06/17/2019)
- APS: Active Pretraining with Successor Features (08/31/2021)
