Categorical Reparameterization with Gumbel-Softmax

11/03/2016 · by Eric Jang, et al. · Google, University of Cambridge, Stanford University

Categorical variables are a natural choice for representing discrete structure in the world. However, stochastic neural networks rarely use categorical latent variables due to the inability to backpropagate through samples. In this work, we present an efficient gradient estimator that replaces the non-differentiable sample from a categorical distribution with a differentiable sample from a novel Gumbel-Softmax distribution. This distribution has the essential property that it can be smoothly annealed into a categorical distribution. We show that our Gumbel-Softmax estimator outperforms state-of-the-art gradient estimators on structured output prediction and unsupervised generative modeling tasks with categorical latent variables, and enables large speedups on semi-supervised classification.


1 Introduction

Stochastic neural networks with discrete random variables are a powerful technique for representing distributions encountered in unsupervised learning, language modeling, attention mechanisms, and reinforcement learning domains. For example, discrete variables have been used to learn probabilistic latent representations that correspond to distinct semantic classes

(Kingma et al., 2014), image regions (Xu et al., 2015), and memory locations (Graves et al., 2014, 2016). Discrete representations are often more interpretable (Chen et al., 2016) and more computationally efficient (Rae et al., 2016) than their continuous analogues.

However, stochastic networks with discrete variables are difficult to train because the backpropagation algorithm — while permitting efficient computation of parameter gradients — cannot be applied to non-differentiable layers. Prior work on stochastic gradient estimation has traditionally focused on either score function estimators augmented with Monte Carlo variance reduction techniques

(Paisley et al., 2012; Mnih & Gregor, 2014; Gu et al., 2016; Gregor et al., 2013), or biased path derivative estimators for Bernoulli variables (Bengio et al., 2013). However, no existing gradient estimator has been formulated specifically for categorical variables. The contributions of this work are threefold:

  1. We introduce Gumbel-Softmax, a continuous distribution on the simplex that can approximate categorical samples, and whose parameter gradients can be easily computed via the reparameterization trick.

  2. We show experimentally that Gumbel-Softmax outperforms all single-sample gradient estimators on both Bernoulli variables and categorical variables.

  3. We show that this estimator can be used to efficiently train semi-supervised models (e.g. Kingma et al. (2014)) without costly marginalization over unobserved categorical latent variables.

The practical outcome of this paper is a simple, differentiable approximate sampling mechanism for categorical variables that can be integrated into neural networks and trained using standard backpropagation.

2 The Gumbel-Softmax distribution

We begin by defining the Gumbel-Softmax distribution, a continuous distribution over the simplex that can approximate samples from a categorical distribution. Let $z$ be a categorical variable with class probabilities $\pi_1, \pi_2, \dots, \pi_k$. For the remainder of this paper we assume categorical samples are encoded as $k$-dimensional one-hot vectors lying on the corners of the $(k-1)$-dimensional simplex, $\Delta^{k-1}$. This allows us to define quantities such as the element-wise mean $\mathbb{E}_p[z] = [\pi_1, \dots, \pi_k]$ of these vectors.

The Gumbel-Max trick (Gumbel, 1954; Maddison et al., 2014) provides a simple and efficient way to draw samples $z$ from a categorical distribution with class probabilities $\pi$:

$z = \text{one\_hot}\left(\arg\max_i \left[g_i + \log \pi_i\right]\right)$    (1)

where $g_1, \dots, g_k$ are i.i.d. samples drawn from $\text{Gumbel}(0, 1)$ (the $\text{Gumbel}(0, 1)$ distribution can be sampled using inverse transform sampling by drawing $u \sim \text{Uniform}(0, 1)$ and computing $g = -\log(-\log(u))$). We use the softmax function as a continuous, differentiable approximation to $\arg\max$, and generate $k$-dimensional sample vectors $y \in \Delta^{k-1}$ where

$y_i = \dfrac{\exp\left((\log(\pi_i) + g_i)/\tau\right)}{\sum_{j=1}^{k}\exp\left((\log(\pi_j) + g_j)/\tau\right)} \qquad \text{for } i = 1, \dots, k.$    (2)
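As a concrete illustration, here is a minimal PyTorch sketch of the sampling procedure in Equations 1–2. The function names (sample_gumbel, gumbel_softmax_sample) are our own, not from the paper, and the class probabilities are arbitrary.

```python
import torch
import torch.nn.functional as F

def sample_gumbel(shape, eps=1e-20):
    """Draw Gumbel(0, 1) noise via inverse transform sampling: g = -log(-log(u))."""
    u = torch.rand(shape)
    return -torch.log(-torch.log(u + eps) + eps)

def gumbel_softmax_sample(logits, tau):
    """Draw a relaxed sample y (Eq. 2); logits = log(pi), unnormalized logits also work."""
    g = sample_gumbel(logits.shape)
    return F.softmax((logits + g) / tau, dim=-1)

logits = torch.log(torch.tensor([0.1, 0.2, 0.7]))
y = gumbel_softmax_sample(logits, tau=0.5)   # lies in the interior of the simplex
print(y, y.sum())                            # sums to 1; approaches one-hot as tau -> 0
```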

The density of the Gumbel-Softmax distribution (derived in Appendix B) is:

$p_{\pi,\tau}(y_1, \dots, y_k) = \Gamma(k)\,\tau^{k-1}\left(\sum_{i=1}^{k} \pi_i / y_i^{\tau}\right)^{-k}\prod_{i=1}^{k}\left(\pi_i / y_i^{\tau+1}\right)$    (3)

This distribution was independently discovered by Maddison et al. (2016), where it is referred to as the concrete distribution. As the softmax temperature $\tau$ approaches $0$, samples from the Gumbel-Softmax distribution become one-hot and the Gumbel-Softmax distribution becomes identical to the categorical distribution $p(z)$.

Figure 1: The Gumbel-Softmax distribution interpolates between discrete one-hot-encoded categorical distributions and continuous categorical densities. (a) For low temperatures, the expected value of a Gumbel-Softmax random variable approaches the expected value of a categorical random variable with the same logits. As the temperature $\tau$ increases, the expected value converges to a uniform distribution over the categories. (b) Samples from Gumbel-Softmax distributions are identical to samples from a categorical distribution as $\tau \to 0$. At higher temperatures, Gumbel-Softmax samples are no longer one-hot, and become uniform as $\tau \to \infty$.

2.1 Gumbel-Softmax Estimator

The Gumbel-Softmax distribution is smooth for $\tau > 0$, and therefore has a well-defined gradient $\partial y / \partial \pi$ with respect to the parameters $\pi$. Thus, by replacing categorical samples with Gumbel-Softmax samples we can use backpropagation to compute gradients (see Section 3.1). We denote this procedure of replacing non-differentiable categorical samples with a differentiable approximation during training as the Gumbel-Softmax estimator.

While Gumbel-Softmax samples are differentiable, they are not identical to samples from the corresponding categorical distribution for non-zero temperature. For learning, there is a tradeoff between small temperatures, where samples are close to one-hot but the variance of the gradients is large, and large temperatures, where samples are smooth but the variance of the gradients is small (Figure 1). In practice, we start at a high temperature and anneal to a small but non-zero temperature.

In our experiments, we find that the softmax temperature $\tau$ can be annealed according to a variety of schedules and still perform well. If $\tau$ is a learned parameter (rather than annealed via a fixed schedule), this scheme can be interpreted as entropy regularization (Szegedy et al., 2015; Pereyra et al., 2016), where the Gumbel-Softmax distribution can adaptively adjust the “confidence” of proposed samples during the training process.
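To make the estimator concrete, the sketch below backpropagates a toy quadratic loss through a single Gumbel-Softmax sample; the loss and the choice of target index are purely illustrative, not part of the paper.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
logits = torch.zeros(5, requires_grad=True)   # parameters of the categorical
tau = 1.0

g = -torch.log(-torch.log(torch.rand(5)))     # Gumbel(0, 1) noise
y = F.softmax((logits + g) / tau, dim=-1)     # relaxed sample, differentiable in logits

loss = ((y - F.one_hot(torch.tensor(2), 5).float()) ** 2).sum()  # arbitrary toy loss
loss.backward()
print(logits.grad)   # well-defined path derivative gradients w.r.t. the logits
```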

2.2 Straight-Through Gumbel-Softmax Estimator

Continuous relaxations of one-hot vectors are suitable for problems such as learning hidden representations and sequence modeling. For scenarios in which we are constrained to sampling discrete values (e.g. from a discrete action space for reinforcement learning, or quantized compression), we discretize $y$ using $\arg\max$ but use our continuous approximation in the backward pass by approximating $\nabla_\theta z \approx \nabla_\theta y$. We call this the Straight-Through (ST) Gumbel Estimator, as it is reminiscent of the biased path derivative estimator described in Bengio et al. (2013). ST Gumbel-Softmax allows samples to be sparse even when the temperature $\tau$ is high.
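One common way to implement this is the “hard sample plus soft sample minus detached soft sample” pattern, sketched below under our own naming (this is not the authors' released code). The forward pass emits a one-hot vector while gradients flow through the relaxed sample.

```python
import torch
import torch.nn.functional as F

def st_gumbel_softmax(logits, tau):
    """Forward pass uses one_hot(arg max); backward pass uses the soft sample."""
    g = -torch.log(-torch.log(torch.rand_like(logits)))
    y_soft = F.softmax((logits + g) / tau, dim=-1)               # Eq. 2
    index = y_soft.argmax(dim=-1, keepdim=True)
    y_hard = torch.zeros_like(y_soft).scatter_(-1, index, 1.0)   # one-hot
    # value of y_hard in the forward pass, gradients of y_soft in the backward pass
    return y_hard + y_soft - y_soft.detach()

logits = torch.randn(4, 10, requires_grad=True)   # batch of 4, k = 10 classes
z = st_gumbel_softmax(logits, tau=1.0)            # each row is exactly one-hot

target = F.one_hot(torch.tensor([3, 1, 4, 1]), 10).float()  # toy targets
loss = ((z - target) ** 2).sum()
loss.backward()
print(logits.grad.abs().sum())   # nonzero: gradients flowed through y_soft
```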

3 Related Work

In this section we review existing stochastic gradient estimation techniques for discrete variables (illustrated in Figure 2). Consider a stochastic computation graph (Schulman et al., 2015) with discrete random variable $z$ whose distribution depends on parameter $\theta$, and cost function $f(z)$. The objective is to minimize the expected cost $L(\theta) = \mathbb{E}_{z \sim p_\theta(z)}\left[f(z)\right]$ via gradient descent, which requires us to estimate $\nabla_\theta \mathbb{E}_{z \sim p_\theta(z)}\left[f(z)\right]$.

Figure 2: Gradient estimation in stochastic computation graphs. (1) $\nabla_\theta f(x)$ can be computed via backpropagation if $x(\theta)$ is deterministic and differentiable. (2) The presence of a stochastic node $z$ precludes backpropagation as the sampler function does not have a well-defined gradient. (3) The score function estimator and its variants (NVIL, DARN, MuProp, VIMCO) obtain an unbiased estimate of $\nabla_\theta f(x)$ by backpropagating along a surrogate loss $\hat{f}\log p_\theta(z)$, where $\hat{f} = f(x) - b$ and $b$ is a baseline for variance reduction. (4) The Straight-Through estimator, developed primarily for Bernoulli variables, approximates $\nabla_\theta z \approx 1$. (5) Gumbel-Softmax is a path derivative estimator for a continuous distribution $y$ that approximates $z$. Reparameterization allows gradients to flow from $f(y)$ to $\theta$. $y$ can be annealed to one-hot categorical variables over the course of training.

3.1 Path Derivative Gradient Estimators

For distributions that are reparameterizable, we can compute the sample $z$ as a deterministic function $g$ of the parameters $\theta$ and an independent random variable $\epsilon$, so that $z = g(\theta, \epsilon)$. The path-wise gradients from $f$ to $\theta$ can then be computed without encountering any stochastic nodes:

$\dfrac{\partial}{\partial\theta}\,\mathbb{E}_{z}\left[f(z)\right] = \dfrac{\partial}{\partial\theta}\,\mathbb{E}_{\epsilon}\left[f(g(\theta,\epsilon))\right] = \mathbb{E}_{\epsilon}\left[\dfrac{\partial f}{\partial g}\dfrac{\partial g}{\partial\theta}\right]$    (4)

For example, the normal distribution $z \sim \mathcal{N}(\mu, \sigma)$ can be re-written as $\mu + \sigma \cdot \mathcal{N}(0, 1)$, making it trivial to compute $\partial z / \partial \mu$ and $\partial z / \partial \sigma$. This reparameterization trick is commonly applied to training variational autoencoders with continuous latent variables using backpropagation (Kingma & Welling, 2013; Rezende et al., 2014b). As shown in Figure 2, we exploit such a trick in the construction of the Gumbel-Softmax estimator.
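For instance, a minimal sketch of the Gaussian location-scale reparameterization described above; the quadratic cost is our own toy choice (torch.distributions exposes the same idea through rsample).

```python
import torch

mu = torch.tensor(0.5, requires_grad=True)
sigma = torch.tensor(1.2, requires_grad=True)

eps = torch.randn(1000)            # sample-independent noise, eps ~ N(0, 1)
z = mu + sigma * eps               # z ~ N(mu, sigma), differentiable in mu and sigma

loss = (z ** 2).mean()             # toy cost f(z)
loss.backward()
print(mu.grad, sigma.grad)         # path derivative gradients, no score function needed
```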

Biased path derivative estimators can be utilized even when $z$ is not reparameterizable. In general, we can approximate $\nabla_\theta z \approx \nabla_\theta m(\theta)$, where $m$ is a differentiable proxy for the stochastic sample. For Bernoulli variables with mean parameter $\theta$, the Straight-Through (ST) estimator (Bengio et al., 2013) approximates $m = \mu_\theta(z)$, implying $\nabla_\theta m = 1$. For $k = 2$ (Bernoulli), ST Gumbel-Softmax is similar to the slope-annealed Straight-Through estimator proposed by Chung et al. (2016), but uses a softmax instead of a hard sigmoid to determine the slope. Rolfe (2016) considers an alternative approach where each binary latent variable parameterizes a continuous mixture model. Reparameterization gradients are obtained by backpropagating through the continuous variables and marginalizing out the binary variables.

One limitation of the ST estimator is that backpropagating with respect to the sample-independent mean may cause discrepancies between the forward and backward pass, leading to higher variance. Gumbel-Softmax avoids this problem because each sample $y$ is a differentiable proxy of the corresponding discrete sample $z$.

3.2 Score Function-Based Gradient Estimators

The score function estimator (SF, also referred to as REINFORCE (Williams, 1992) and the likelihood ratio estimator (Glynn, 1990)) uses the identity $\nabla_\theta p_\theta(z) = p_\theta(z)\nabla_\theta \log p_\theta(z)$ to derive the following unbiased estimator:

$\nabla_\theta\,\mathbb{E}_z\left[f(z)\right] = \mathbb{E}_z\left[f(z)\,\nabla_\theta \log p_\theta(z)\right]$    (5)

SF only requires that $p_\theta(z)$ is continuous in $\theta$, and does not require backpropagating through $f$ or the sample $z$. However, SF suffers from high variance and is consequently slow to converge. In particular, the variance of SF scales linearly with the number of dimensions of the sample vector (Rezende et al., 2014a), making it especially challenging to use for categorical distributions.

The variance of a score function estimator can be reduced by subtracting a control variate $b(z)$ from the learning signal $f$, and adding back its analytical expectation $\mu_b = \mathbb{E}_z\left[b(z)\,\nabla_\theta \log p_\theta(z)\right]$ to keep the estimator unbiased:

$\nabla_\theta\,\mathbb{E}_z\left[f(z)\right] = \mathbb{E}_z\left[f(z)\,\nabla_\theta\log p_\theta(z)\right] = \mathbb{E}_z\left[\left(f(z) - b(z)\right)\nabla_\theta\log p_\theta(z)\right] + \mathbb{E}_z\left[b(z)\,\nabla_\theta\log p_\theta(z)\right]$    (6)
$= \mathbb{E}_z\left[\left(f(z) - b(z)\right)\nabla_\theta\log p_\theta(z)\right] + \mu_b$    (7)
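The sketch below illustrates Eq. 5 and the variance reduction from a constant baseline for a single Bernoulli variable. The cost function and constants are our own toy choices; for a constant baseline the analytic correction $\mu_b$ is zero, so nothing needs to be added back.

```python
import torch

torch.manual_seed(0)
theta = torch.tensor(0.3)              # Bernoulli parameter
f = lambda z: z + 10.0                 # toy cost; d/dtheta E[f(z)] = 1 exactly

def sf_grad(n, baseline=0.0):
    """n single-sample score function estimates (Eq. 5), with an optional constant baseline."""
    z = torch.bernoulli(theta * torch.ones(n))
    score = z / theta - (1 - z) / (1 - theta)   # d/dtheta log p_theta(z)
    return (f(z) - baseline) * score

g_plain = sf_grad(100_000)
g_cv = sf_grad(100_000, baseline=10.3)  # baseline = E[f(z)]; correction term is zero
print(g_plain.mean(), g_cv.mean())      # both are close to the true gradient, 1.0
print(g_plain.var(), g_cv.var())        # the baseline cuts the variance dramatically
```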

We briefly summarize recent stochastic gradient estimators that utilize control variates. We direct the reader to Gu et al. (2016) for further detail on these techniques.

  • NVIL (Mnih & Gregor, 2014) uses two baselines: (1) a moving average $\bar{f}$ of $f$ to center the learning signal, and (2) an input-dependent baseline computed by a 1-layer neural network fitted to $f - \bar{f}$ (a control variate for the centered learning signal itself). Finally, variance normalization divides the learning signal by $\max(1, \sigma_f)$, where $\sigma_f^2$ is a moving average of $\mathrm{Var}[f]$.

  • DARN (Gregor et al., 2013) uses $b = f(\bar{z}) + f'(\bar{z})(z - \bar{z})$, where the baseline corresponds to the first-order Taylor approximation of $f(z)$ from $f(\bar{z})$. $\bar{z}$ is chosen to be $1/2$ for Bernoulli variables, which makes the estimator biased for non-quadratic $f$, since it ignores the correction term $\mu_b$ in the estimator expression.

  • MuProp (Gu et al., 2016) also models the baseline as a first-order Taylor expansion: $b = f(\bar{z}) + f'(\bar{z})(z - \bar{z})$ and $\mu_b = f'(\bar{z})\,\nabla_\theta\,\mathbb{E}_z\left[z\right]$. To overcome backpropagation through discrete sampling, a mean-field approximation $f_{MF}(\mu_\theta(z))$ is used in place of $f(z)$ to compute the baseline and derive the relevant gradients.

  • VIMCO (Mnih & Rezende, 2016) is a gradient estimator for multi-sample objectives that uses the mean of the other samples $b = \frac{1}{m}\sum_{j \neq i} f(z_j)$ to construct a baseline for each sample $z_i$. We exclude VIMCO from our experiments because we are comparing estimators for single-sample objectives, although Gumbel-Softmax can be easily extended to multi-sample objectives.

3.3 Semi-Supervised Generative Models

Semi-supervised learning considers the problem of learning from both labeled data $(x, y) \sim \mathcal{D}_L$ and unlabeled data $x \sim \mathcal{D}_U$, where $x$ are observations (i.e. images) and $y$ are corresponding labels (e.g. semantic class). For semi-supervised classification, Kingma et al. (2014) propose a variational autoencoder (VAE) whose latent state is the joint distribution over a Gaussian “style” variable $z$ and a categorical “semantic class” variable $y$ (Figure 6, Appendix). The VAE objective trains a discriminative network $q_\phi(y|x)$, inference network $q_\phi(z|x,y)$, and generative network $p_\theta(x|y,z)$ end-to-end by maximizing a variational lower bound on the log-likelihood of the observation under the generative model. For labeled data, the class $y$ is observed, so inference is only done on $z \sim q_\phi(z|x,y)$. The variational lower bound on labeled data is given by:

$\log p_\theta(x, y) \ge \mathbb{E}_{z \sim q_\phi(z|x,y)}\left[\log p_\theta(x|y,z)\right] - KL\left[q_\phi(z|x,y)\,\|\,p_\theta(y)p(z)\right] = -\mathcal{L}(x, y)$    (8)

For unlabeled data, difficulties arise because the categorical distribution is not reparameterizable. Kingma et al. (2014) approach this by marginalizing out $y$ over all classes, so that for unlabeled data, inference is still done on $q_\phi(z|x,y)$ for each $y$. The lower bound on unlabeled data is:

$\log p_\theta(x) \ge \mathbb{E}_{z \sim q_\phi(y,z|x)}\left[\log p_\theta(x|y,z) + \log p_\theta(y) + \log p(z) - \log q_\phi(y,z|x)\right]$    (9)
$= \sum_y q_\phi(y|x)\left(-\mathcal{L}(x, y)\right) + \mathcal{H}\left(q_\phi(y|x)\right) = -\mathcal{U}(x)$    (10)

The full maximization objective is:

$\mathcal{J} = \mathbb{E}_{(x,y)\sim\mathcal{D}_L}\left[-\mathcal{L}(x, y)\right] + \mathbb{E}_{x\sim\mathcal{D}_U}\left[-\mathcal{U}(x)\right] + \alpha \cdot \mathbb{E}_{(x,y)\sim\mathcal{D}_L}\left[\log q_\phi(y|x)\right]$    (11)

where $\alpha$ is the scalar trade-off between the generative and discriminative objectives.

One limitation of this approach is that marginalization over all $k$ class values becomes prohibitively expensive for models with a large number of classes. If $D$, $G$, $I$ are the computational cost of sampling from $q_\phi(y|x)$, $p_\theta(x|y,z)$, and $q_\phi(z|x,y)$ respectively, then training the unsupervised objective requires $\mathcal{O}(D + k(G + I))$ for each forward/backward step. In contrast, Gumbel-Softmax allows us to backpropagate through $y \sim q_\phi(y|x)$ for single sample gradient estimation, and achieves a cost of $\mathcal{O}(D + G + I)$ per training step. Experimental comparisons in training speed are shown in Figure 5.
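A schematic comparison of the two computations is sketched below. The network shapes, module names, and the simplification of the bound to its reconstruction term (the KL and entropy terms of Eqs. 8–10 are omitted) are placeholder assumptions of ours, not the paper's architecture; the point is only that marginalization requires $k$ inference/decoder passes while Gumbel-Softmax requires one.

```python
import torch
import torch.nn.functional as F

k = 100                                   # number of classes
enc_y = torch.nn.Linear(784, k)           # q(y|x): placeholder networks throughout
enc_z = torch.nn.Linear(784 + k, 2 * 32)  # q(z|x, y), Gaussian with 32 dims
dec = torch.nn.Linear(32 + k, 784)        # p(x|y, z)

def recon_term(x, y_onehot):
    """Reconstruction part of -L(x, y); KL terms omitted for brevity."""
    h = enc_z(torch.cat([x, y_onehot], dim=-1))
    mu, logvar = h.chunk(2, dim=-1)
    z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)   # reparameterized z
    logits_x = dec(torch.cat([z, y_onehot], dim=-1))
    return -F.binary_cross_entropy_with_logits(logits_x, x, reduction="none").sum(-1)

x = torch.rand(8, 784)
q_y = F.softmax(enc_y(x), dim=-1)

# Marginalization (Kingma et al., 2014): k inference/decoder passes per example.
bound_marg = sum(q_y[:, c] * recon_term(x, F.one_hot(torch.tensor(c), k).float().expand(8, k))
                 for c in range(k))

# Gumbel-Softmax: a single pass through one relaxed sample y ~ q(y|x).
g = -torch.log(-torch.log(torch.rand_like(q_y)))
y = F.softmax((torch.log(q_y + 1e-20) + g) / 1.0, dim=-1)
bound_gs = recon_term(x, y)
print(bound_marg.shape, bound_gs.shape)   # both (8,), but the former needed k passes
```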

4 Experimental Results

In our first set of experiments, we compare Gumbel-Softmax and ST Gumbel-Softmax to other stochastic gradient estimators: Score-Function (SF), DARN, MuProp, Straight-Through (ST), and Slope-Annealed ST. Each estimator is evaluated on two tasks: (1) structured output prediction and (2) variational training of generative models. We use the MNIST dataset with fixed binarization for training and evaluation, which is common practice for evaluating stochastic gradient estimators

(Salakhutdinov & Murray, 2008; Larochelle & Murray, 2011).

Learning rates are chosen from a fixed set of candidates; we select the best learning rate for each estimator using the MNIST validation set, and report performance on the test set. Samples drawn from the Gumbel-Softmax distribution are continuous during training, but are discretized to one-hot vectors during evaluation. We also found that variance normalization was necessary to obtain competitive performance for SF, DARN, and MuProp. We used sigmoid activation functions for binary (Bernoulli) neural networks and softmax activations for categorical variables. Models were trained using stochastic gradient descent with momentum.

4.1 Structured Output Prediction with Stochastic Binary Networks

The objective of structured output prediction is to predict the lower half of a $28\times 28$ MNIST digit given the top half of the image ($14\times 28$). This is a common benchmark for training stochastic binary networks (SBN) (Raiko et al., 2014; Gu et al., 2016; Mnih & Rezende, 2016). The minimization objective for this conditional generative model is an importance-sampled estimate of the likelihood objective, $\mathbb{E}_{h \sim p_\theta(h|x_{\text{upper}})}\left[\frac{1}{m}\sum_{i=1}^{m}\log p_\theta(x_{\text{lower}}|h_i)\right]$, where $m = 1$ is used for training and $m = 1000$ is used for evaluation.

We trained an SBN with two hidden layers of 200 units each. This corresponds to either 200 Bernoulli variables (denoted as 392-200-200-392) or 20 categorical variables, each with 10 classes, with binarized activations (denoted as 392-(20×10)-(20×10)-392).

As shown in Figure 3, ST Gumbel-Softmax is on par with the other estimators for Bernoulli variables and outperforms them on categorical variables. Meanwhile, Gumbel-Softmax outperforms other estimators on both Bernoulli and categorical variables. We found that it was not necessary to anneal the softmax temperature for this task, and used a fixed $\tau = 1$.

Figure 3: Test loss (negative log-likelihood) on the structured output prediction task with binarized MNIST using a stochastic binary network with (a) Bernoulli latent variables (392-200-200-392) and (b) categorical latent variables (392-(20×10)-(20×10)-392).

4.2 Generative Modeling with Variational Autoencoders

We train variational autoencoders (Kingma & Welling, 2013), where the objective is to learn a generative model of binary MNIST images. In our experiments, we modeled the latent variable as a single hidden layer with 200 Bernoulli variables or 20 categorical variables ($20\times 10$). We use a learned categorical prior rather than a Gumbel-Softmax prior in the training objective. Thus, the minimization objective during training is no longer a variational bound if the samples are not discrete. In practice, we find that optimizing this objective in combination with temperature annealing still minimizes actual variational bounds on validation and test sets. As in the structured output prediction task, we use a multi-sample bound for evaluation with $m = 1000$.
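A minimal sketch of how such an objective can pair a relaxed sample (fed to the decoder) with a KL term computed against a learned categorical prior; all names, shapes, and the random posterior logits here are our own placeholder assumptions.

```python
import torch
import torch.nn.functional as F

n_vars, n_classes = 20, 10
posterior_logits = torch.randn(8, n_vars, n_classes)               # stand-in encoder output
prior_logits = torch.nn.Parameter(torch.zeros(n_vars, n_classes))  # learned categorical prior

log_q = F.log_softmax(posterior_logits, dim=-1)
log_p = F.log_softmax(prior_logits, dim=-1)

# KL(q || p) summed over the 20 categorical variables, one value per example in the batch.
kl = (log_q.exp() * (log_q - log_p)).sum(dim=-1).sum(dim=-1)
print(kl.shape)   # torch.Size([8])

# The decoder consumes a relaxed sample, so the overall objective is only a true
# variational bound once the samples become effectively discrete (tau -> 0).
g = -torch.log(-torch.log(torch.rand_like(posterior_logits)))
y = F.softmax((posterior_logits + g) / 1.0, dim=-1)                # shape (8, 20, 10)
```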

The temperature is annealed using the schedule $\tau = \max(0.5, \exp(-rt))$ of the global training step $t$, where $\tau$ is updated every $N$ steps. $N$ and $r$ are hyperparameters; we select the best-performing values on the validation set and report test performance.
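A sketch of this style of schedule, with placeholder constants rather than the tuned values:

```python
import math

def anneal_tau(step, r=1e-4, tau_min=0.5, update_every=1000):
    """Exponentially decay tau toward tau_min, updating only every `update_every` steps."""
    t = (step // update_every) * update_every
    return max(tau_min, math.exp(-r * t))

for step in [0, 5000, 20000, 100000]:
    print(step, anneal_tau(step))
```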

As shown in Figure 4, ST Gumbel-Softmax outperforms other estimators for Categorical variables, and Gumbel-Softmax drastically outperforms other estimators in both Bernoulli and Categorical variables.

Figure 4: Test loss (negative variational lower bound) on binarized MNIST VAE with (a) 200 Bernoulli latent variables and (b) 20 categorical latent variables (each with 10 classes).
            | SF    | DARN  | MuProp | ST    | Annealed ST | Gumbel-S. | ST Gumbel-S.
SBN (Bern.) | 72.0  | 59.7  | 58.9   | 58.9  | 58.7        | 58.5      | 59.3
SBN (Cat.)  | 73.1  | 67.9  | 63.0   | 61.8  | 61.1        | 59.0      | 59.7
VAE (Bern.) | 112.2 | 110.9 | 109.7  | 116.0 | 111.5       | 105.0     | 111.5
VAE (Cat.)  | 110.6 | 128.8 | 107.0  | 110.9 | 107.8       | 101.5     | 107.8
Table 1: The Gumbel-Softmax estimator outperforms other estimators on Bernoulli and Categorical latent variables. For the structured output prediction (SBN) task, numbers correspond to negative log-likelihoods (nats) of input images (lower is better). For the VAE task, numbers correspond to negative variational lower bounds (nats) on the log-likelihood (lower is better).

4.3 Generative Semi-Supervised Classification

We apply the Gumbel-Softmax estimator to semi-supervised classification on the binary MNIST dataset. We compare the original marginalization-based inference approach (Kingma et al., 2014) to single-sample inference with Gumbel-Softmax and ST Gumbel-Softmax.

We trained on a dataset consisting of 100 labeled examples (distributed evenly among each of the 10 classes) and 50,000 unlabeled examples, with dynamic binarization of the unlabeled examples for each minibatch. The discriminative model $q_\phi(y|x)$ and inference model $q_\phi(z|x,y)$ are each implemented as 3-layer convolutional neural networks with ReLU activation functions. The generative model $p_\theta(x|y,z)$ is a 4-layer convolutional-transpose network with ReLU activations. Experimental details are provided in Appendix A.

Estimators were trained and evaluated against several values of the trade-off parameter $\alpha$, and the best unlabeled classification results on the test set were selected for each estimator and reported in Table 2. We used an exponential annealing schedule for $\tau$, updated every 2000 steps.

In Kingma et al. (2014), inference over the latent state is done by marginalizing out $y$ and using the reparameterization trick for sampling from $q_\phi(z|x,y)$. However, this approach has a computational cost that scales linearly with the number of classes. Gumbel-Softmax allows us to backpropagate directly through single samples from the joint $q_\phi(y,z|x)$, achieving drastic speedups in training without compromising generative or classification performance (Table 2, Figure 5).

                  | ELBO   | Accuracy
Marginalization   | -106.8 | 92.6%
Gumbel            | -109.6 | 92.4%
ST Gumbel-Softmax | -110.7 | 93.6%
Table 2: Marginalizing over $y$ and single-sample variational inference perform equally well when applied to image classification on the binarized MNIST dataset (Larochelle & Murray, 2011). We report variational lower bounds and image classification accuracy for unlabeled data in the test set.

In Figure 5, we show how Gumbel-Softmax versus marginalization scales with the number of categorical classes. For these experiments, we use MNIST images with randomly generated labels. Training the model with the Gumbel-Softmax estimator is 2× as fast for 10 classes and 9.9× as fast for 100 classes.

Figure 5: Gumbel-Softmax allows us to backpropagate through samples from the posterior $y \sim q_\phi(y|x)$, providing a scalable method for semi-supervised learning for tasks with a large number of classes. (a) Comparison of training speed (steps/sec) between Gumbel-Softmax and marginalization (Kingma et al., 2014) on a semi-supervised VAE. Evaluations were performed on a GTX Titan X GPU. (b) Visualization of MNIST analogies generated by varying the style variable $z$ across each row and the class variable $y$ across each column.

5 Discussion

The primary contribution of this work is the reparameterizable Gumbel-Softmax distribution, whose corresponding estimator affords low-variance path derivative gradients for the categorical distribution. We show that Gumbel-Softmax and Straight-Through Gumbel-Softmax are effective on structured output prediction and variational autoencoder tasks, outperforming existing stochastic gradient estimators for both Bernoulli and categorical latent variables. Finally, Gumbel-Softmax enables dramatic speedups in inference over discrete latent variables.

Acknowledgments

We sincerely thank Luke Vilnis, Vincent Vanhoucke, Luke Metz, David Ha, Laurent Dinh, George Tucker, and Subhaneil Lahiri for helpful discussions and feedback.

References

Appendix A Semi-Supervised Classification Model

Figures 6 and 7 describe the architecture used in our experiments for semi-supervised classification (Section 4.3).

Figure 6: Semi-supervised generative model proposed by Kingma et al. (2014). (a) The generative model $p_\theta(x|y,z)$ synthesizes images from the latent Gaussian “style” variable $z$ and categorical class variable $y$. (b) The inference model $q_\phi(y,z|x)$ samples the latent state given $x$. The Gaussian $z$ can be differentiated with respect to its parameters because it is reparameterizable. In previous work, when $y$ is not observed, training the VAE objective requires marginalizing over all values of $y$. (c) Gumbel-Softmax reparameterizes $y$ so that backpropagation is also possible through $y$ without encountering stochastic nodes.
Figure 7: Network architecture for the (a) classification $q_\phi(y|x)$, (b) inference $q_\phi(z|x,y)$, and (c) generative $p_\theta(x|y,z)$ models. The outputs of these networks parameterize Categorical, Gaussian, and Bernoulli distributions which we sample from.

Appendix B Deriving the density of the Gumbel-Softmax distribution

Here we derive the probability density function of the Gumbel-Softmax distribution with probabilities $\pi_1, \dots, \pi_k$ and temperature $\tau$. We first define the logits $x_i = \log \pi_i$, and Gumbel samples $g_1, \dots, g_k$, where $g_i \sim \text{Gumbel}(0, 1)$. A sample from the Gumbel-Softmax can then be computed as:

$y_i = \dfrac{\exp\left((x_i + g_i)/\tau\right)}{\sum_{j=1}^{k}\exp\left((x_j + g_j)/\tau\right)} \qquad \text{for } i = 1, \dots, k$    (12)

B.1 Centered Gumbel density

The mapping from the Gumbel samples $g$ to the Gumbel-Softmax sample $y$ is not invertible as the normalization of the softmax operation removes one degree of freedom. To compensate for this, we define an equivalent sampling process that subtracts off the last element, $(x_k + g_k)/\tau$, before the softmax:

$y_i = \dfrac{\exp\left((x_i + g_i - (x_k + g_k))/\tau\right)}{\sum_{j=1}^{k}\exp\left((x_j + g_j - (x_k + g_k))/\tau\right)}$    (13)

To derive the density of this equivalent sampling process, we first derive the density for the "centered" multivariate Gumbel density corresponding to:

$u_i = x_i + g_i - (x_k + g_k) \qquad \text{for } i = 1, \dots, k-1$    (14)

where $g_i \sim \text{Gumbel}(0, 1)$. Note the probability density of a Gumbel distribution with scale parameter $\beta = 1$ and mean $\mu$ at $z$ is $f(z, \mu) = e^{\mu - z - e^{\mu - z}}$. We can now compute the density of this distribution by marginalizing out the last Gumbel sample, $g_k$:

$p(u_1, \dots, u_{k-1}) = \int_{-\infty}^{\infty} dg_k\; f(g_k, 0)\prod_{i=1}^{k-1} f(x_k + g_k + u_i,\, x_i)$

We perform a change of variables with $v = e^{-g_k}$, so $dv = -e^{-g_k}\,dg_k$ and $dg_k = -\,dv/v$, and define $u_k = 0$ to simplify notation:

$p(u_1, \dots, u_{k-1}) = \int_{0}^{\infty} dv\; \frac{1}{v}\, v\, e^{-v}\prod_{i=1}^{k-1} v\, e^{x_i - x_k - u_i}\, e^{-v\, e^{x_i - x_k - u_i}}$    (15)
$= \left(\prod_{i=1}^{k-1} e^{x_i - x_k - u_i}\right)\int_{0}^{\infty} dv\; v^{k-1}\exp\left(-v\sum_{i=1}^{k} e^{x_i - x_k - u_i}\right)$    (16)
$= \left(\prod_{i=1}^{k-1} e^{x_i - x_k - u_i}\right)\Gamma(k)\left(\sum_{i=1}^{k} e^{x_i - x_k - u_i}\right)^{-k}$    (17)
$= \Gamma(k)\,\exp\left(\sum_{i=1}^{k-1}\left(x_i - x_k - u_i\right)\right)\left(\sum_{i=1}^{k} e^{x_i - x_k - u_i}\right)^{-k}$    (18)

B.2 Transforming to a Gumbel-Softmax

Given samples $u_1, \dots, u_{k-1}$ from the centered Gumbel distribution, we can apply a deterministic transformation $h$ to yield the first $k-1$ coordinates of the sample from the Gumbel-Softmax:

$y_{1:k-1} = h(u_{1:k-1}), \qquad h_i(u_{1:k-1}) = \dfrac{\exp(u_i/\tau)}{1 + \sum_{j=1}^{k-1}\exp(u_j/\tau)}$    (19)

Note that the final coordinate probability $y_k$ is fixed given the first $k-1$, as $\sum_{i=1}^{k} y_i = 1$:

$y_k = \dfrac{1}{1 + \sum_{j=1}^{k-1}\exp(u_j/\tau)} = 1 - \sum_{i=1}^{k-1} y_i$    (20)

We can thus compute the probability of a sample from the Gumbel-Softmax using the change of variables formula on only the first $k-1$ variables:

$p(y_{1:k}) = p\left(u_{1:k-1} = h^{-1}(y_{1:k-1})\right)\left|\det\left(\dfrac{\partial h^{-1}(y_{1:k-1})}{\partial y_{1:k-1}}\right)\right|$    (21)

Thus we need to compute two more pieces: the inverse of $h$ and its Jacobian determinant. The inverse of $h$ is:

$h^{-1}(y_{1:k-1})_i = u_i = \tau\left(\log y_i - \log\left(1 - \sum_{j=1}^{k-1} y_j\right)\right) = \tau\left(\log y_i - \log y_k\right)$    (22)

with Jacobian

$\dfrac{\partial h^{-1}(y_{1:k-1})_i}{\partial y_j} = \tau\left(\dfrac{\delta_{ij}}{y_i} + \dfrac{1}{y_k}\right), \qquad \text{i.e.} \quad \dfrac{\partial h^{-1}}{\partial y_{1:k-1}} = \tau\left(\mathrm{diag}\left(\dfrac{1}{y_{1:k-1}}\right) + \dfrac{1}{y_k}\,\mathbf{1}\mathbf{1}^T\right)$    (23)

Next, we compute the determinant of the Jacobian:

$\det\left(\dfrac{\partial h^{-1}}{\partial y_{1:k-1}}\right) = \det\left(\tau\,\mathrm{diag}\left(\dfrac{1}{y_{1:k-1}}\right)\left(I + \dfrac{1}{y_k}\, y_{1:k-1}\mathbf{1}^T\right)\right)$    (24)
$= \tau^{k-1}\left(\prod_{i=1}^{k-1} y_i\right)^{-1}\left(1 + \dfrac{\mathbf{1}^T y_{1:k-1}}{y_k}\right)$    (25)
$= \tau^{k-1}\left(\prod_{i=1}^{k} y_i\right)^{-1}$    (26)

where $\mathbf{1}$ is a $(k-1)$-dimensional vector of ones, and we've used the identities $\det(AB) = \det(A)\det(B)$, $\det(\mathrm{diag}(x)) = \prod_i x_i$, and $\det(I + uv^T) = 1 + u^T v$.
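As a sanity check, the determinant in Eq. 26 can be verified numerically with automatic differentiation; the values of $k$ and $\tau$ below are arbitrary.

```python
import torch

torch.manual_seed(0)
tau, k = 2.0, 4

def h_inv(y):                       # y holds the k-1 free coordinates
    y_k = 1.0 - y.sum()             # last coordinate is determined (Eq. 20)
    return tau * (torch.log(y) - torch.log(y_k))

y_full = torch.softmax(torch.randn(k), dim=0)   # a random point inside the simplex
y = y_full[:k - 1]

J = torch.autograd.functional.jacobian(h_inv, y)
lhs = torch.det(J)
rhs = tau ** (k - 1) / torch.prod(y_full)
print(lhs.item(), rhs.item())       # the two values should agree up to float error
```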

We can then plug into the change of variables formula (Eq. 21) using the density of the centered Gumbel (Eqs. 15–18), the inverse of $h$ (Eq. 22) and its Jacobian determinant (Eq. 26):

$p(y_{1:k}) = \Gamma(k)\left(\prod_{i=1}^{k-1} \dfrac{\pi_i}{\pi_k}\left(\dfrac{y_k}{y_i}\right)^{\tau}\right)\left(\sum_{i=1}^{k} \dfrac{\pi_i}{\pi_k}\left(\dfrac{y_k}{y_i}\right)^{\tau}\right)^{-k}\tau^{k-1}\left(\prod_{i=1}^{k} y_i\right)^{-1}$    (27)
$= \Gamma(k)\,\tau^{k-1}\left(\sum_{i=1}^{k} \pi_i / y_i^{\tau}\right)^{-k}\prod_{i=1}^{k}\left(\pi_i / y_i^{\tau+1}\right)$    (28)