Discriminative Regularization for Generative Models

by Alex Lamb, et al.

We explore the question of whether the representations learned by classifiers can be used to enhance the quality of generative models. Our conjecture is that labels correspond to characteristics of natural data which are most salient to humans: identity in faces, objects in images, and utterances in speech. We propose to take advantage of this by using the representations from discriminative classifiers to augment the objective function corresponding to a generative model. In particular we enhance the objective function of the variational autoencoder, a popular generative model, with a discriminative regularization term. We show that enhancing the objective function in this way leads to samples that are clearer and have higher visual quality than the samples from the standard variational autoencoders.









1 Introduction

Discriminative neural network models have had a tremendous impact in many traditional application areas of machine learning, such as object recognition and detection in images (Krizhevsky et al., 2012; Simonyan & Zisserman, 2014), speech recognition (Hinton et al., 2012), and a host of other application domains (Schmidhuber, 2014). While progress in the longstanding problem of learning generative models capable of producing novel and compelling examples of natural data has not quite kept pace with the advances in discriminative modeling, there have been a number of important developments.

Within the context of generative models that support tractable approximate inference, the variational autoencoder (VAE) (Kingma & Welling, 2013) has emerged as a popular framework. The VAE leverages deep neural networks both for the generative model (mapping from a set of latent random variables to a conditional distribution over the observed data) and for an approximate inference model (mapping from the observed data to a conditional distribution over the latent random variables).

Images generated from the VAE (and most other generative frameworks) diverge from natural images in two distinct ways:

  1. Missing high frequency information.

    Compared to natural data, generated samples often lack detail and appear blurry. Generative models of natural data such as images are largely limited to the maximum likelihood setting, where the data is modeled as Gaussian distributed (with diagonal covariance) given some setting of the latent variables. Under a Gaussian, the quality of reconstruction is essentially evaluated on the basis of a generalized L2 distance. As a measure of similarity between images, L2 distance does not closely match human perception. For instance, the same image translated by a few pixels could have a relatively high L2 distance from the original, yet humans may not even perceive the difference.

  2. Missing semantic information. Human perception is goal driven: we perceive our environment so that we can interact with it in meaningful ways. This implies that semantic information is going to be particularly salient to the human perceptual system. The current state-of-the-art in generative models, even when they capture high frequency information, produce samples which often lack semantically-relevant details. Generative models of natural images often lack a clear sense of “objectness”. It is not enough to capture the correct local statistics over the data. For example, generative models trained on faces often produce inconsistencies in gender and identity, which may be subtle in pixel space but immediately apparent to humans viewing the samples.

In this work we explore an alternative VAE training objective by augmenting the standard VAE lower bound on the likelihood objective with additional discriminative terms that encourage the model’s reconstructions to be close to the data example in a representation space defined by the hidden layers of highly-discriminative, neural network-based classifiers. We refer to this strategy as discriminative regularization of generative models.

In this effort we are heavily inspired by the recently introduced texture synthesis method of Gatys et al. (2015b) as well as the DeepStyle model of Gatys et al. (2015a). These works showed that surprisingly detailed and semantically-rich information regarding natural images is preserved in the hidden-layer representations of ImageNet-trained object recognition networks such as VGG (Simonyan & Zisserman, 2014). Our goal is to incorporate this insight into the VAE framework and to render the synthetic data perceptually closer to the real data.

In this paper we confine our discussion to learning generative models of images; however, the approach we propose here is readily applicable to other domains. We show how to learn discriminatively regularized generative models for three benchmark datasets: Street View House Numbers (SVHN) (Netzer et al., 2011), the CIFAR-10 object recognition dataset (Krizhevsky & Hinton, 2009), and the CelebA facial attribute recognition dataset (Ziwei Liu & Tang, 2015). In each case, the classifier we consider is a convolutional neural network (CNN).



Figure 1: The discriminative regularization model. The encoder and classifier layers are convolutional layers, whereas the decoder layers are fractionally strided convolutional layers.

2 VAEs as Generative models of images

In this section we lay out the variational autoencoder (VAE) framework (Kingma & Welling, 2013; Jimenez Rezende et al., 2014) on which we build. The VAE is a neural network-based approach to latent variable modeling where the natural, richly-structured dependencies found in the data are disentangled into the relatively simple dependencies between a set of latent variables. Formally, let x be a random real-valued vector representing the observed data and let z be a random real-valued vector representing the latent variables that reflect the principal directions of variation in the input data.

2.1 The generative model

We specify the generative model over the pair (x, z) as p_θ(x, z) = p_θ(x | z) p(z), where p(z) is the prior distribution over the latent variables, p_θ(x | z) is the conditional likelihood of the data given the latents, and θ represents the generative model parameters. As is typical in the VAE framework, we assume a standard Normal (Gaussian) prior distribution over z: p(z) = N(0, I).

For real-valued data such as natural images, by far the most common conditional likelihood is the Gaussian distribution: p_θ(x | z) = N(x; μ_θ(z), diag(σ²)), where the mean μ_θ(z) is a nonlinear function of the latent variables specified by a neural network which, following autoencoder terminology, we refer to as the decoder network. In the natural image setting, μ_θ(z) is parameterized by a CNN (see Figure 1) and σ² is a vector of independent variance parameters over the pixels.

2.2 The approximate inference model

Given the generative model described above, posterior inference is intractable, as are standard parameter learning paradigms such as maximizing the likelihood of the data. The VAE resolves these issues by introducing a learned approximate posterior distribution q_φ(z | x), specified by another neural network known as the encoder network and parametrized by φ.

Introducing the approximate posterior q_φ(z | x) allows us to decompose the marginal log-likelihood of the data under the generative model in terms of the variational free energy L(θ, φ; x) and the Kullback-Leibler divergence between the approximate and true posteriors:

log p_θ(x) = L(θ, φ; x) + KL(q_φ(z | x) || p_θ(z | x)),

where the Kullback-Leibler divergence is given by

KL(q_φ(z | x) || p_θ(z | x)) = E_{q_φ(z | x)}[log q_φ(z | x) − log p_θ(z | x)],

and the variational free energy is given by

L(θ, φ; x) = E_{q_φ(z | x)}[log p_θ(x, z) − log q_φ(z | x)].

Since KL(q_φ(z | x) || p_θ(z | x)) measures the divergence between q_φ(z | x) and p_θ(z | x), it is guaranteed to be non-negative. As a consequence, the variational free energy L(θ, φ; x) is always a lower bound on the log-likelihood. As such it is sometimes called the variational lower bound or the evidence lower bound (ELBO).

In the VAE framework, L(θ, φ; x) is often rearranged into two terms:

L(θ, φ; x) = E_{q_φ(z | x)}[log p_θ(x | z)] − KL(q_φ(z | x) || p(z)).

The first term can be interpreted as the (negative) expected reconstruction error of x under the conditional likelihood p_θ(x | z) with respect to q_φ(z | x). Maximizing this lower bound strikes a balance between minimizing reconstruction error and minimizing the KL divergence between the approximate posterior q_φ(z | x) and the prior p(z).
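For the common diagonal-Gaussian choice of posterior and likelihood, both terms of this rearranged bound are straightforward to compute. The following NumPy sketch is illustrative, not the authors' code: it shows the closed-form KL term against a standard Normal prior and a one-sample Monte Carlo estimate of the bound.

```python
import numpy as np

# Illustrative sketch (not the paper's code) of the two ELBO terms for a
# diagonal-Gaussian posterior q(z|x) = N(mu_z, diag(sigma_z^2)) and a
# standard Normal prior p(z) = N(0, I).

def kl_to_standard_normal(mu, log_var):
    # Closed form: KL( N(mu, diag(e^log_var)) || N(0, I) )
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

def gaussian_log_lik(x, mean, log_var):
    # log N(x; mean, diag(e^log_var)), summed over dimensions
    return -0.5 * np.sum(np.log(2.0 * np.pi) + log_var
                         + (x - mean)**2 / np.exp(log_var))

def elbo_one_sample(x, mu_z, log_var_z, decode, rng):
    # One-sample Monte Carlo estimate of
    # E_q[log p(x|z)] - KL(q(z|x) || p(z))
    eps = rng.standard_normal(mu_z.shape)
    z = mu_z + np.exp(0.5 * log_var_z) * eps
    x_mean, x_log_var = decode(z)
    return gaussian_log_lik(x, x_mean, x_log_var) - kl_to_standard_normal(mu_z, log_var_z)
```

In practice both terms are averaged over a minibatch and the decoder/encoder are neural networks; here `decode` is any function returning a mean and log-variance.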

2.3 Reparametrization Trick

The power of the VAE approach can be credited to how the model is trained. With real-valued z, we can exploit a reparametrization trick (Kingma & Welling, 2013; Bengio et al., 2013) to propagate the gradient from the decoder network to the encoder network. Instead of sampling z directly from q_φ(z | x), z is computed as a deterministic function of x and some noise term ε such that z has the desired distribution. For instance, if

q_φ(z | x) = N(z; μ_φ(x), diag(σ_φ²(x))),

then we would express z as

z = μ_φ(x) + σ_φ(x) ⊙ ε,  ε ∼ N(0, I),

to produce values with the desired distribution while permitting gradients to propagate through both μ_φ(x) and σ_φ(x).
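Concretely, the trick can be sketched as follows (an illustrative NumPy snippet, not the paper's code); the statistics of the reparametrized samples match the target Gaussian:

```python
import numpy as np

# Illustrative sketch of the reparametrization trick. Rather than sampling
# z ~ N(mu, diag(sigma^2)) directly, draw eps ~ N(0, I) and compute z as a
# deterministic function of (mu, sigma, eps), so gradients can flow through
# mu and sigma during training.

def reparametrize(mu, log_var, eps):
    # z = mu + sigma * eps, with sigma = exp(0.5 * log_var)
    return mu + np.exp(0.5 * log_var) * eps

rng = np.random.default_rng(0)
mu = np.array([1.0, -2.0])
log_var = np.array([0.0, 2.0 * np.log(0.5)])  # sigmas of 1.0 and 0.5
eps = rng.standard_normal((100_000, 2))
z = reparametrize(mu, log_var, eps)
# Empirically, z has mean close to mu and std close to (1.0, 0.5).
```

In a deep learning framework the same expression is written with tensors, and backpropagation differentiates through `mu` and `log_var` while treating `eps` as a constant.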

2.4 The problem with the Independent Gaussian Assumption

The derivation of the variational autoencoder allows for different choices of the reconstruction model p_θ(x | z). However, as previously mentioned, the majority of applications on real-valued data use a multivariate Gaussian with diagonal covariance matrix as the conditional likelihood of the data given the latent variables (Gregor et al., 2015; Mansimov et al., 2015). Maximizing the conditional likelihood of this distribution corresponds to minimizing an elementwise reconstruction penalty. One major weakness with this approach is that elementwise distance metrics are a poor fit for human notions of similarity. For example, shifting an image by only a few pixels will cause it to look very different under elementwise distance metrics but will not change its semantic properties or how it is perceived by humans (Theis et al., 2015b).

In addition to the issues surrounding elementwise independence, there is nothing in a Gaussian conditional likelihood that will cause the model to render semantically-salient perceptual features of the data to be captured by the model.

3 Discriminative Regularization

In this section we describe our modification to the VAE lower bound training objective. Our goal is to modify the VAE training objective to render generated images perceptually closer to natural images. As previously discussed, generated images from the VAE (or other generative frameworks) often diverge from natural images in two distinct directions: (1) by being excessively blurry and (2) by lacking semantically meaningful cues such as depictions of well-defined objects. We conjecture that both of these issues can be ameliorated by encouraging the generator to render reconstructions that match the original data example in a representation space defined by the hidden layers of a classifier trained on a discrimination task relevant to the input data.

Let d_l(x), l = 1, …, L, represent the hidden layer representations of a pre-trained classifier. The classifier could be trained on a task specifically relevant to the data we wish to model. For example, in learning to generate images of faces we may wish to leverage a classifier trained either to identify individuals (Huang et al., 2007) or to recognize certain facial characteristics (Ziwei Liu & Tang, 2015). On the other hand, we could also follow the example of (Gatys et al., 2015b) and use one of the high-performing ImageNet-trained models such as VGG (Simonyan & Zisserman, 2014) as a general-purpose classifier for natural images.

In the standard VAE variational lower bound objective, we include a term that aims to minimize the reconstruction error in the space of the observed data. To this we add additional terms aimed at minimizing the reconstruction error in the space defined by the hidden layer representations d_l(x) of the classifier:

L_aug(θ, φ; x) = L(θ, φ; x) + Σ_{l=1}^{L} α_l E_{q_φ(z | x)}[log p(d_l(x) | z)],

where the α_l weight the contribution of each layer. We take the conditional likelihood of each d_l(x) to be Gaussian, with its mean defined by forward propagating the conditional mean μ_θ(z) through the layers of the classifier from 1 to l:

p(d_l(x) | z) = N(d_l(x); d_l(μ_θ(z)), I).
The discriminative regularization approach can be considered a kind of multitask regularization of the standard VAE, where in addition to the standard VAE objective, we include the additional tasks of predicting each of the hidden layer representations of a classifier.

We can understand the impact that these additional terms would have on the VAE parameters by considering matching in the different layers of the classifier. Since the classifiers we consider are all convolutional neural networks, the different layers will tend to have different characteristics, especially with respect to spatial translations. Matching the lower-layer representations is going to encourage visual features such as edges to be well-defined and in the right location. The upper layers of a convolutional neural network classifier have been shown to be highly invariant to spatial transformations (particularly translation) while simultaneously showing high specificity to semantically-relevant stimuli. Matching in the upper layers will likely de-emphasize exact spatial alignment, but will pressure semantic elements apparent in the example, such as the identity of objects, to be well matched between the data example and the mean of the conditional likelihood μ_θ(z).
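The layer-matching penalty can be sketched in a toy form. The snippet below is illustrative, not the authors' code: two fixed random projections with a tanh nonlinearity stand in for the hidden layers of a pre-trained convolutional classifier, and the penalty is the feature-space squared error between a data example and its reconstruction, summed over layers.

```python
import numpy as np

# Toy sketch of the discriminative regularization penalty. Random
# projections with tanh stand in for pre-trained classifier layers;
# real models would use convolutional features instead.
rng = np.random.default_rng(1)
W1 = rng.standard_normal((16, 8))
W2 = rng.standard_normal((8, 4))

def classifier_features(x):
    h1 = np.tanh(x @ W1)   # lower layer: location-sensitive features
    h2 = np.tanh(h1 @ W2)  # upper layer: more invariant features
    return [h1, h2]

def discriminative_penalty(x, x_recon, weights=(1.0, 1.0)):
    # Extra terms added to the VAE objective: squared error between the
    # classifier features of the data and of the reconstruction,
    # with one weight per matched layer.
    return sum(w * np.sum((f - g) ** 2)
               for w, (f, g) in zip(weights,
                                    zip(classifier_features(x),
                                        classifier_features(x_recon))))

x = rng.standard_normal(16)
# A perfect reconstruction incurs no penalty; a perturbed one does.
```

The per-layer weights correspond to the α_l coefficients: weighting lower layers more emphasizes edge placement, while weighting upper layers more emphasizes semantic agreement.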

It is important to assess the impact that the addition of our discriminative regularization terms has on the VAE. By adding the discriminative regularization terms we are no longer directly optimizing the variational lower bound.

Furthermore, since we are backpropagating the gradient of the combined objective through the decoder network and into the encoder network (the network responsible for approximating the posterior distribution), we are no longer directly optimizing the encoder network to minimize KL(q_φ(z | x) || p_θ(z | x)). Doing so implies that we risk deteriorating our approximate posterior in favor of improving the example reconstructions (w.r.t. the combined objective). One consequence could be an overall deterioration of generated sample quality as the marginal q_φ(z) diverges from the prior p(z).

In our experiments we did not observe any negative impact on sample quality; however, if such an issue did arise, we could simply have elected not to propagate the gradient contribution of the discriminative regularization terms through the encoder network, and thus preserve direct minimization of KL(q_φ(z | x) || p_θ(z | x)) w.r.t. the parameters of the encoder network.

(a) Samples without discriminative regularization
(b) Samples with discriminative regularization
Figure 2: CIFAR samples generated from variational autoencoders trained with and without discriminative regularization. The architecture and the hyperparameters (except those directly related to discriminative regularization) are the same for both models. Our baseline VAE samples are similar in visual fidelity to other results in the literature (Mansimov et al., 2015). Discriminative regularization often does a good job of producing coherent objects, but the textures are usually muddled and the samples lack local detail.
Figure 3: SVHN samples with the standard variational autoencoders (left), real images (center), and samples using discriminative regularization (right). The discriminative regularizer improves the clarity and visual fidelity of the samples. SVHN is the only dataset where we did not observe unnatural patterning when using discriminative regularization.

4 Related Work

Recent work has used the structural similarity metric (Wang et al., 2004) as an auxiliary loss function for training variational autoencoders (Ridgeway et al., 2015). They showed that using this metric instead of pixel-wise squared loss dramatically improved human ratings of the generated images. Our approach differs from theirs in a few ways. First, we use the representations from a discriminatively trained classifier to augment our objective function, whereas they use a hand-crafted measure of image similarity. Second, discriminative regularization describes both local and global properties of the image (the local properties coming from lower layers and the global properties coming from higher layers), whereas their method only compares the true image and the reconstructed image around local 11x11 patches centered at each pixel. An interesting area for future work would be to study which method does a better job at improving the generation of local detail, or whether results can be improved by using both methods simultaneously.

Recently there has been a focus on alternative measures to be used during the training of generative models. Probably the most established of these is the generative adversarial network (GAN), which leverages discriminative machinery and applies it to a two-player game between a generator and a discriminator (Goodfellow et al., 2014). While the discriminator is trained to distinguish between true training samples and those produced by the generator, the generator is trained to try to fool the discriminator. While this joint optimization of the generator and discriminator is prone to instabilities, the end results are often generated images that capture realistic local texture. Recent applications of the GAN formalism have shown very impressive results (Denton et al., 2015; Radford et al., 2015).

Of all the proposed GAN-based methods, the one that most closely resembles the approach we propose here is the discriminative VAE (Larsen et al., 2015). In this work, the authors integrate the VAE within a GAN framework, in part, by maximizing a lower bound on a representation of the image defined by a given hidden layer of the GAN discriminator network. The authors show that their integration of the GAN and the VAE leads to impressive samples.

While generative adversarial networks have been a driving force in the relatively rapid improvement in the quality of image generation models, there are ways in which VAEs are preferable. GAN models do not optimize likelihood and are not trained directly for coverage of the training set, i.e. they use their capacity to convincingly mimic natural images. On the other hand, the VAE more explicitly encourages coverage by maximizing a lower bound on the log-likelihood. Another disadvantage of GANs is that in their original formulation there is no clear way to perform inference in the model, i.e. to recover the posterior distribution p(z | x). However, there have been a few very recent efforts to address this shortcoming of the GAN framework (Makhzani et al., 2015; Larsen et al., 2015).

(a) Samples without discriminative regularization
(b) Samples with discriminative regularization
Figure 4: Face samples generated with and without discriminative regularization. On balance, details of the face are better captured and more varied in the samples generated with discriminative regularization.
Figure 5: Face reconstructions with (top row) and without (bottom row) discriminative regularization. The face images used for the reconstructions (middle row) are from the held-out validation set and were not seen by the model during training. The architecture and the hyperparameters (except those directly related to discriminative regularization) are the same for both models. Discriminative regularization greatly enhances the model’s ability to preserve identity, ethnicity, gender, and expressions. Note that the model does not improve the visual quality of the image background, which likely reflects the fact that the classifier’s labels all describe facial attributes. Additional reconstructions can be seen in the appendix.

5 Experiments

We evaluated the impact of discriminative regularization on three datasets: CelebA (Liu et al., 2015), Street View House Numbers (Netzer et al., 2011), and CIFAR-10 (Krizhevsky & Hinton, 2009). The SVHN and CIFAR-10 datasets were used as is, whereas the aligned and cropped version of the CelebA dataset was downscaled and center cropped. For SVHN and CIFAR-10 we used the pre-trained VGG-19 model as the network for our discriminative regularization (Simonyan & Zisserman, 2014), and for CelebA we trained our own classifier to predict all of the labels.

5 o'Clock Shadow Arched Eyebrows Attractive
Bags Under Eyes Bald Blurry
Bangs Big Lips Brown Hair
Big Nose Black Hair Bushy Eyebrows
Blond Hair Goatee Gray Hair
Eyeglasses Double Chin Heavy Makeup
High Cheekbones Gender Mouth Open
Mustache Narrow Eyes No Beard
Oval Face Pale Skin Pointy Nose
Receding Hairline Rosy Cheeks Sideburns
Smiling Straight Hair Wavy Hair
Earrings Wearing Hat Lipstick
Necklace Necktie Young
Table 1: A list of the binary targets that we predict with our CelebA classifier.

All VAE models, regularized or not, as well as the CelebA classifier, were trained using Adam and batch normalization. Our architecture closely follows (Radford et al., 2015), with convolutional layers in the encoder and fractionally-strided convolutions in the decoder. In each convolutional layer in the encoder we double the number of filters present in the previous layer and use a convolutional stride of 2. In each convolutional layer in the decoder we use a fractional stride of 2 and halve the number of filters on each layer.
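The doubling/halving schedule can be sketched as follows; the input size, base filter count, and layer count here are assumptions chosen for illustration, not the paper's exact values:

```python
# Illustrative sketch of the filter/size schedule described above.
# The concrete numbers (64-pixel input, 32 base filters, 4 conv layers)
# are assumptions for the example, not the paper's configuration.

def encoder_plan(input_size, base_filters, n_convs):
    """Each encoder conv uses stride 2 (halving the spatial size) and
    doubles the filter count of the previous layer."""
    sizes, filters = [input_size], [base_filters]
    for _ in range(n_convs - 1):
        sizes.append(sizes[-1] // 2)
        filters.append(filters[-1] * 2)
    return sizes, filters

def decoder_plan(sizes, filters):
    """The decoder mirrors the encoder: fractional stride 2 doubles the
    spatial size and halves the filter count at each layer."""
    return sizes[::-1], filters[::-1]

sizes, filters = encoder_plan(64, 32, 4)
# sizes   -> [64, 32, 16, 8]
# filters -> [32, 64, 128, 256]
```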

Evaluating generative models quantitatively is a challenging task (Theis et al., 2015a). One common evaluation metric is the likelihood of held-out samples. However, the usefulness of this metric is limited. If we compare the log-likelihood using the independent Gaussian in the pixel space, then we suffer from the limitations of pixel-wise distance metrics for comparing images. On the other hand, if we compare using the log-likelihood over the hidden states of the discriminative classifier, then we bias our evaluation criteria towards the criteria that we trained on.

Figure 6: Latent space interpolations with discriminative regularization. On each row, the first and last image correspond to reconstructions of randomly selected examples.

5.1 Samples

Samples were drawn from trained models by sampling z ∼ p(z) and computing E[x | z] (in our case the decoder mean μ_θ(z)), which is standard practice in generative modeling work.

Using discriminative regularization during training has a noticeable impact on the quality of CIFAR-10 samples (Figure 2). In addition to being sharper, the samples also exhibit good global statistics, i.e. they look like objects. We observe a similar improvement in the quality of our SVHN samples.

Faces in CelebA samples (Figure 4) look more “in focus” when discriminative regularization is used during training.

5.2 Quantitative Results

In Table 2, we show NLL approximations of models trained on the CelebA dataset with and without discriminative regularization. We report per-unit averages. The approximation was obtained via importance sampling using 100 samples per data point.

Split Without disc. reg. With disc. reg.
Training -1.2092 -1.1147
Validation -1.1779 -1.0804
Test -1.1835 -1.0866
Table 2: NLL approximations for models trained on the CelebA dataset with and without discriminative regularization. We note that the discriminative regularizer makes the likelihood over the raw pixel space worse even though the visual quality of the samples is improved.
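The importance-sampling approximation behind these numbers can be sketched on a toy model. The snippet below is illustrative, not the authors' code: it uses a 1-D linear-Gaussian model whose true marginal likelihood is known in closed form, so the estimate can be checked, and it applies the usual log-sum-exp for numerical stability.

```python
import numpy as np

# Sketch of the importance-sampling estimate of log p(x):
#   log p(x) ~= log (1/K) sum_k p(x | z_k) p(z_k) / q(z_k | x),  z_k ~ q(z | x).
# Toy 1-D model: p(z) = N(0, 1), p(x | z) = N(x; z, var_x),
# so the true marginal is p(x) = N(x; 0, 1 + var_x).

def log_gauss(x, mean, var):
    return -0.5 * (np.log(2.0 * np.pi * var) + (x - mean) ** 2 / var)

def iw_log_lik(x, mu_q, var_q, var_x, K, rng):
    z = mu_q + np.sqrt(var_q) * rng.standard_normal(K)  # z_k ~ q(z | x)
    log_w = (log_gauss(x, z, var_x)        # log p(x | z): decoder mean is z
             + log_gauss(z, 0.0, 1.0)      # log p(z): standard Normal prior
             - log_gauss(z, mu_q, var_q))  # log q(z | x)
    m = log_w.max()                        # log-sum-exp for stability
    return m + np.log(np.mean(np.exp(log_w - m)))

rng = np.random.default_rng(0)
x, var_x = 0.7, 0.5
# With q(z | x) equal to the exact posterior N(x/(1+var_x), var_x/(1+var_x)),
# every importance weight equals the true marginal, so the estimate is exact
# for any K.
est = iw_log_lik(x, x / (1 + var_x), var_x / (1 + var_x), var_x, 100, rng)
```

In a real VAE, q is only an approximation to the posterior, so many samples per data point (100 in the paper) are needed to tighten the estimate.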

5.3 Reconstructions

Reconstructions were obtained by sampling z ∼ q_φ(z | x) and computing E[x | z] (in our case the decoder mean μ_θ(z)), which is also standard practice in generative modeling work.

Using discriminative regularization during training leads to improved reconstructions (Figure 5). In addition to producing sharper reconstructions, this approach helps maintain identity better. This is especially noticeable in the eye region: VAE reconstructions tend to produce stereotypical eyes, whereas our approach better captures the overall eye shape.

5.4 Interpolations in the Latent Space

To evaluate the quality of the learned latent representation, we visualize the result of linearly interpolating between latent configurations. We choose pairs of images whose latent representations we obtain by computing the posterior means μ_φ(x). We then compute intermediary latent representations by linearly interpolating between the latent representation pairs, and we display the corresponding conditional means μ_θ(z).

The resulting trajectory in pixel space (Figure 6) exhibits smooth and realistic transitions between face pose and orientation, hair color and gender.
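The interpolation procedure can be sketched as follows; the latent codes here are hypothetical stand-ins for the encoder means of two real images, and decoding is omitted:

```python
import numpy as np

# Illustrative sketch of latent-space interpolation. In the paper, z_a and
# z_b would be the encoder means mu_phi(x) of two images, and each point on
# the path would be decoded through mu_theta(z) to produce an image.

def interpolate_latents(z_a, z_b, steps):
    # Evenly spaced convex combinations from z_a (alpha=0) to z_b (alpha=1).
    alphas = np.linspace(0.0, 1.0, steps)
    return np.stack([(1.0 - a) * z_a + a * z_b for a in alphas])

z_a = np.zeros(4)
z_b = np.ones(4)
path = interpolate_latents(z_a, z_b, 5)
# path[0] is z_a, path[-1] is z_b, and consecutive points are evenly spaced.
```

Smooth, realistic decodings along such a straight line are evidence that the latent space varies gradually with semantic attributes.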

5.5 Explaining Visual Artifacts

In the samples generated from a model trained with discriminative regularization, we sometimes see unnatural patterns or texturing. In the face samples, we mostly observe these patterns in the background. In the CIFAR dataset, they occur to some extent in nearly all samples. These patterns are not seen in samples from the standard variational autoencoder.

One explanation for the visual artifacts is that the variational autoencoder with discriminative regularization produces unnaturally blurred activations in the classifier’s convolutional layers in the same way that the standard variational autoencoder outputs unnaturally blurred images.

To support this hypothesis, we visualize what happens when a convolutional autoencoder explicitly tries to generate a reconstruction which produces a blurred representation in the classifier. To do so, we train a convolutional autoencoder on a batch of 100 examples. The examples are reconstructed as usual, but we propagate both the input and the reconstruction through the first two layers of the classifier. The propagated input is then blurred with a Gaussian blur (applied separately to each filter), and the cost is computed as the squared error between the propagated reconstruction and the blurred propagated input.

Figure 7

provides a visual summary of the experiment. We see that when no blurring is applied to the hidden representation, the autoencoder does a perfect job of matching the hidden representations (middle left column), which is indicated by an excellent reconstruction at the input level. When blurring is applied, we see that the resulting reconstructions (right column) exhibit visual patterns resembling those of our model’s reconstructions (middle right column).

Figure 7: From left to right: input examples, convolutional (non-variational) autoencoder reconstructions (no blurring applied to the classifier’s hidden representations), model reconstructions (trained with discriminative regularization), convolutional autoencoder reconstructions (blurring applied to the classifier’s hidden representations).

6 Conclusion

A common view in cognitive science is that generative modeling will play a central role in the development of artificial intelligence by enabling feature learning where labeled data and reward signals are sparse. In this view generative models serve to assist other models by learning representations and discovering causal factors from the nearly unlimited supply of unlabeled data. Our paper shows that this interaction ought to be a two-way street, in which supervised learning contributes to generative modeling by determining which attributes of the data are worth learning to represent. We have demonstrated that discriminative information can be used to regularize generative models to improve the perceptual quality of their samples.