
Direct Optimization through arg max for Discrete Variational Auto-Encoder

06/07/2018
by Guy Lorberbom, et al.

Reparameterization of variational auto-encoders with continuous latent spaces is an effective method for reducing the variance of their gradient estimates. However, using the same approach when latent variables are discrete is problematic, due to the resulting non-differentiable objective. In this work, we present a direct optimization method that propagates gradients through a non-differentiable prediction operation. We apply this method to discrete variational auto-encoders, by modeling a discrete random variable by the arg max function of the Gumbel-Max perturbation model.
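The Gumbel-Max perturbation model mentioned in the abstract samples a categorical variable by adding independent Gumbel noise to each log-probability and taking the arg max. A minimal NumPy sketch (illustrative only, not the authors' implementation; the function name and setup are assumptions):

```python
import numpy as np

def gumbel_max_sample(logits, rng):
    """Draw one categorical sample via the Gumbel-Max trick:
    argmax_i (logits_i + g_i), where g_i ~ Gumbel(0, 1)."""
    # Gumbel(0, 1) noise via inverse transform of a uniform sample.
    u = rng.uniform(low=1e-12, high=1.0, size=logits.shape)
    gumbel_noise = -np.log(-np.log(u))
    return int(np.argmax(logits + gumbel_noise))

# Empirical check: sample frequencies should match the target distribution.
rng = np.random.default_rng(0)
probs = np.array([0.5, 0.3, 0.2])
samples = [gumbel_max_sample(np.log(probs), rng) for _ in range(100_000)]
freqs = np.bincount(samples, minlength=3) / len(samples)
```

The arg max here is the non-differentiable prediction operation the paper's direct optimization method propagates gradients through; prior relaxation-based approaches instead replace it with a softmax.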

