Sparse Communication via Mixed Distributions

08/05/2021
by António Farinhas, et al.

Neural networks and other machine learning models compute continuous representations, while humans communicate mostly through discrete symbols. Reconciling these two forms of communication is desirable for generating human-readable interpretations or learning discrete latent variable models, while maintaining end-to-end differentiability. Some existing approaches (such as the Gumbel-Softmax transformation) build continuous relaxations that are discrete approximations in the zero-temperature limit, while others (such as sparsemax transformations and the Hard Concrete distribution) produce discrete/continuous hybrids. In this paper, we build rigorous theoretical foundations for these hybrids, which we call "mixed random variables." Our starting point is a new "direct sum" base measure defined on the face lattice of the probability simplex. From this measure, we introduce new entropy and Kullback-Leibler divergence functions that subsume the discrete and differential cases and have interpretations in terms of code optimality. Our framework suggests two strategies for representing and sampling mixed random variables, an extrinsic ("sample-and-project") and an intrinsic one (based on face stratification). We experiment with both approaches on an emergent communication benchmark and on modeling MNIST and Fashion-MNIST data with variational auto-encoders with mixed latent variables.
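To make the extrinsic "sample-and-project" strategy concrete, the sketch below draws a continuous sample and then projects it onto the probability simplex with sparsemax (the Euclidean projection of Martins & Astudillo, 2016), so the result can land exactly on a face of the simplex, i.e., a sparse, mixed discrete/continuous outcome. The Gaussian perturbation and its scale are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

def sparsemax(z):
    """Euclidean projection of z onto the probability simplex.
    The output may contain exact zeros, i.e. it can lie on a proper face."""
    z_sorted = np.sort(z)[::-1]                 # sort in decreasing order
    cumsum = np.cumsum(z_sorted)
    k = np.arange(1, len(z) + 1)
    support = 1 + k * z_sorted > cumsum         # indices kept in the support
    k_max = k[support][-1]
    tau = (cumsum[support][-1] - 1.0) / k_max   # threshold
    return np.maximum(z - tau, 0.0)

def sample_and_project(logits, rng, scale=1.0):
    """Extrinsic ('sample-and-project') sampling of a mixed random variable:
    perturb the logits with continuous noise, then project onto the simplex.
    Gaussian noise is an assumed choice for illustration only."""
    z = logits + scale * rng.standard_normal(logits.shape)
    return sparsemax(z)

rng = np.random.default_rng(0)
logits = np.array([2.0, 0.5, -1.0, 0.1])
sample = sample_and_project(logits, rng)
print(sample, "nonzeros:", np.count_nonzero(sample))
```

Unlike a Gumbel-Softmax sample, which is dense for any positive temperature, the projected sample typically assigns exact zeros to some coordinates, which is what makes the resulting random variable "mixed."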

