This report[1] summarizes the content of the NIPS 2016 tutorial on generative adversarial networks (GANs) (Goodfellow-et-al-NIPS2014-small). The tutorial was designed primarily to ensure that it answered most of the questions asked by audience members ahead of time, in order to make sure that the tutorial would be as useful as possible to the audience. This tutorial is not intended to be a comprehensive review of the field of GANs; many excellent papers are not described here, simply because they were not relevant to answering the most frequent questions, and because the tutorial was delivered as a two-hour oral presentation that did not have unlimited time to cover all subjects.

[1] This is the arxiv.org version of this tutorial. Some graphics have been compressed to respect arxiv.org's 10MB limit on paper size, and do not reflect the full image quality.
The tutorial describes: (1) Why generative modeling is a topic worth studying, (2) how generative models work, and how GANs compare to other generative models, (3) the details of how GANs work, (4) research frontiers in GANs, and (5) state-of-the-art image models that combine GANs with other methods. Finally, the tutorial contains three exercises for readers to complete, and the solutions to these exercises.
The slides for the tutorial are available in PDF and Keynote format at the following URLs:
The video was recorded by the NIPS foundation and should be made available at a later date.
Generative adversarial networks are an example of generative models. The term "generative model" is used in many different ways. In this tutorial, the term refers to any model that takes a training set, consisting of samples drawn from a distribution p_data, and learns to represent an estimate p_model of that distribution. In some cases, the model estimates p_model explicitly, as shown in figure 1. In other cases, the model is only able to generate samples from p_model, as shown in figure 2. Some models are able to do both. GANs focus primarily on sample generation, though it is possible to design GANs that can do both.
One might legitimately wonder why generative models are worth studying, especially generative models that are only capable of generating data rather than providing an estimate of the density function. After all, when applied to images, such models seem to merely provide more images, and the world has no shortage of images.
There are several reasons to study generative models, including:
Training and sampling from generative models is an excellent test of our ability to represent and manipulate high-dimensional probability distributions. High-dimensional probability distributions are important objects in a wide variety of applied math and engineering domains.
Generative models can be incorporated into reinforcement learning in several ways. Reinforcement learning algorithms can be divided into two categories; model-based and model-free, with model-based algorithms being those that contain a generative model. Generative models of time-series data can be used to simulate possible futures. Such models could be used for planning and for reinforcement learning in a variety of ways. A generative model used for planning can learn a conditional distribution over future states of the world, given the current state of the world and hypothetical actions an agent might take as input. The agent can query the model with different potential actions and choose actions that the model predicts are likely to yield a desired state of the world. For a recent example of such a model, see finn2016unsupervised, and for a recent example of the use of such a model for planning, see finn2016deep. Another way that generative models might be used for reinforcement learning is to enable learning in an imaginary environment, where mistaken actions do not cause real damage to the agent. Generative models can also be used to guide exploration by keeping track of how often different states have been visited or different actions have been attempted previously. Generative models, and especially GANs, can also be used for inverse reinforcement learning. Some of these connections to reinforcement learning are described further in section 5.6.
Generative models can be trained with missing data and can provide predictions on inputs that are missing data. One particularly interesting case of missing data is semi-supervised learning, in which the labels for many or even most training examples are missing. Modern deep learning algorithms typically require very many labeled examples to be able to generalize well. Semi-supervised learning is one strategy for reducing the number of labels. The learning algorithm can improve its generalization by studying a large number of unlabeled examples, which are usually easier to obtain. Generative models, and GANs in particular, are able to perform semi-supervised learning reasonably well. This is described further in section 5.4.
Generative models, and GANs in particular, enable machine learning to work with multi-modal outputs. For many tasks, a single input may correspond to many different correct answers, each of which is acceptable. Some traditional means of training machine learning models, such as minimizing the mean squared error between a desired output and the model's predicted output, are not able to train models that can produce multiple different correct answers. One example of such a scenario is predicting the next frame in a video, as shown in figure 3.
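To see concretely why mean squared error struggles with multi-modal outputs, consider a toy problem (invented for illustration) where a single input has two equally valid answers, -1 and +1. The MSE-optimal constant prediction is their mean, 0, which is not itself a valid answer:

```python
import numpy as np

# Hypothetical task: one input, two equally correct targets (two "modes").
targets = np.array([-1.0, 1.0])

# Candidate constant predictions, including both modes and their mean.
candidates = np.array([-1.0, 0.0, 1.0])

# Mean squared error of each candidate against the two targets.
mse = [np.mean((targets - c) ** 2) for c in candidates]

# MSE is minimized by the mean of the modes (0.0), which is not a valid
# answer itself; a sample-producing generative model can instead return
# either -1 or +1, each of which is correct.
best = candidates[np.argmin(mse)]
print(best)  # 0.0
```

This averaging effect is exactly what produces the blurry next-frame predictions of figure 3.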
Finally, many tasks intrinsically require realistic generation of samples from some distribution.
Examples of some of these tasks that intrinsically require the generation of good samples include:
Single image super-resolution
: In this task, the goal is to take a low-resolution image and synthesize a high-resolution equivalent. Generative modeling is required because this task requires the model to impute more information into the image than was originally there in the input. There are many possible high-resolution images corresponding to the low-resolution image. The model should choose an image that is a sample from the probability distribution over possible images. Choosing an image that is the average of all possible images would yield a result that is too blurry to be pleasing. See figure 4.
Tasks where the goal is to create art. Two recent projects have both demonstrated that generative models, and in particular, GANs, can be used to create interactive programs that assist the user in creating realistic images that correspond to rough scenes in the user’s imagination. See figure 5 and figure 6.
All of these and other applications of generative models provide compelling reasons to invest time and resources into improving generative models.
We now have some idea of what generative models can do and why it might be desirable to build one. Now we can ask: how does a generative model actually work? And in particular, how does a GAN work, in comparison to other generative models?
To simplify the discussion somewhat, we will focus on generative models that work via the principle of maximum likelihood. Not every generative model uses maximum likelihood. Some generative models do not use maximum likelihood by default, but can be made to do so (GANs fall into this category). By ignoring those models that do not use maximum likelihood, and by focusing on the maximum likelihood version of models that do not usually use maximum likelihood, we can eliminate some of the more distracting differences between different models.
The basic idea of maximum likelihood is to define a model that provides an estimate of a probability distribution, parameterized by parameters θ. We then refer to the likelihood as the probability that the model assigns to the training data: ∏_{i=1}^{m} p_model(x^(i); θ), for a dataset containing m training examples x^(i).
The principle of maximum likelihood simply says to choose the parameters for the model that maximize the likelihood of the training data:

θ* = argmax_θ ∏_{i=1}^{m} p_model(x^(i); θ).   (1)

This is easiest to do in log space, where we have a sum rather than a product over examples:

θ* = argmax_θ Σ_{i=1}^{m} log p_model(x^(i); θ).   (2)

This sum simplifies the algebraic expressions for the derivatives of the likelihood with respect to the model parameters, and, when implemented on a digital computer, is less prone to numerical problems, such as underflow resulting from multiplying together several very small probabilities.
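The numerical advantage of working in log space is easy to demonstrate; the per-example probabilities below are made up for illustration:

```python
import numpy as np

# Hypothetical per-example probabilities: 1000 examples, each assigned
# a small probability (around 1e-4) by some model.
rng = np.random.default_rng(0)
probs = rng.uniform(1e-5, 1e-3, size=1000)

# Naive likelihood: the running product underflows to exactly 0.0 in float64.
likelihood = np.prod(probs)

# Log-likelihood: the sum of logs stays in a comfortable numeric range.
log_likelihood = np.sum(np.log(probs))

print(likelihood)      # 0.0 (underflow)
print(log_likelihood)  # a finite negative number
```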
In equation 2, we have used the property that argmax_θ f(θ) = argmax_θ log f(θ) for positive f, because the logarithm is a function that increases everywhere and does not change the location of the maximum.
The maximum likelihood process is illustrated in figure 8.
We can also think of maximum likelihood estimation as minimizing the KL divergence between the data generating distribution and the model:

θ* = argmin_θ D_KL(p_data(x) ‖ p_model(x; θ)).   (4)

If we were able to do this precisely, then if p_data lies within the family of distributions p_model(x; θ), the model would recover p_data exactly. In practice, we do not have access to p_data itself, but only to a training set consisting of samples from p_data. We use these to define p̂_data, an empirical distribution that places mass only on exactly those points, approximating p_data. Minimizing the KL divergence between p̂_data and p_model is exactly equivalent to maximizing the log-likelihood of the training set.
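This equivalence can be checked numerically on a toy discrete family. The sketch below uses a made-up Bernoulli model and dataset: the parameter that maximizes the average log-likelihood is exactly the one that minimizes the KL divergence to the empirical distribution.

```python
import numpy as np

# Made-up training set of binary outcomes; the model family is
# Bernoulli(theta), so p_model(1) = theta and p_model(0) = 1 - theta.
data = np.array([1, 1, 1, 0, 1, 0, 1, 1])
p_hat = np.array([np.mean(data == 0), np.mean(data == 1)])  # empirical dist.

thetas = np.linspace(0.01, 0.99, 99)

# Average log-likelihood of the data under each candidate theta.
log_lik = np.array([
    np.mean(np.log(np.where(data == 1, t, 1 - t))) for t in thetas
])

# KL(p_hat || p_model) for each candidate theta.
kl = np.array([
    np.sum(p_hat * np.log(p_hat / np.array([1 - t, t]))) for t in thetas
])

# The theta maximizing log-likelihood also minimizes the KL divergence.
assert thetas[np.argmax(log_lik)] == thetas[np.argmin(kl)]
```

The two objectives differ only by the (constant) entropy of the empirical distribution, so their optima coincide.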
For more information on maximum likelihood and other statistical estimators, see chapter 5 of Goodfellow-et-al-2016.
If we restrict our attention to deep generative models that work by maximizing the likelihood, we can compare several models by contrasting the ways that they compute either the likelihood and its gradients, or approximations to these quantities. As mentioned earlier, many of these models are often used with principles other than maximum likelihood, but we can examine the maximum likelihood variant of each of them in order to reduce the amount of distracting differences between the methods. Following this approach, we construct the taxonomy shown in figure 9. Every leaf in this taxonomic tree has some advantages and disadvantages. GANs were designed to avoid many of the disadvantages present in pre-existing nodes of the tree, but also introduced some new disadvantages.
In the left branch of the taxonomy shown in figure 9 are models that define an explicit density function p_model(x; θ). For these models, maximization of the likelihood is straightforward; we simply plug the model's definition of the density function into the expression for the likelihood, and follow the gradient uphill.
The main difficulty present in explicit density models is designing a model that can capture all of the complexity of the data to be generated while still maintaining computational tractability. There are two different strategies used to confront this challenge: (1) careful construction of models whose structure guarantees their tractability, as described in section 2.3.1, and (2) models that admit tractable approximations to the likelihood and its gradients, as described in section 2.3.2.
In the leftmost leaf of the taxonomic tree of figure 9 are the models that define an explicit density function that is computationally tractable. There are currently two popular approaches to tractable explicit density models: fully visible belief networks and nonlinear independent components analysis.
Fully visible belief networks (Frey96; Frey98)
or FVBNs are models that use the chain rule of probability to decompose a probability distribution over an n-dimensional vector x into a product of one-dimensional probability distributions:

p_model(x) = ∏_{i=1}^{n} p_model(x_i | x_1, …, x_{i-1}).
FVBNs are, as of this writing, one of the three most popular approaches to generative modeling, alongside GANs and variational autoencoders. They form the basis for sophisticated generative models from DeepMind, such as WaveNet (aaron-wavenet-2016). WaveNet is able to generate realistic human speech. The main drawback of FVBNs is that samples must be generated one entry at a time: first x_1, then x_2, etc., so the cost of generating a sample is O(n). In modern FVBNs such as WaveNet, the distribution over each x_i is computed by a deep neural network, so each of these n steps involves a nontrivial amount of computation. Moreover, these steps cannot be parallelized. WaveNet thus requires two minutes of computation time to generate one second of audio, and cannot yet be used for interactive conversations. GANs were designed to be able to generate all of x in parallel, yielding greater generation speed.
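The sequential bottleneck of FVBN-style sampling can be sketched in a few lines. Here `conditional` is a hypothetical stand-in for the deep network that a real FVBN such as WaveNet would evaluate at every step; the point is only that step i cannot begin until x_1, …, x_{i-1} exist:

```python
import numpy as np

rng = np.random.default_rng(0)

def conditional(prefix):
    """Hypothetical p(x_i = 1 | x_1..x_{i-1}); a real FVBN would run a
    deep net here. This toy version just biases toward the last value."""
    if len(prefix) == 0:
        return 0.5
    return 0.25 + 0.5 * prefix[-1]

def fvbn_sample(n):
    # O(n) *sequential* steps: each entry depends on all previous ones,
    # so the loop cannot be parallelized across i.
    x = []
    for _ in range(n):
        p = conditional(x)
        x.append(int(rng.random() < p))
    return x

sample = fvbn_sample(16)
print(sample)  # a list of 16 binary values, generated one at a time
```

A GAN generator, by contrast, produces every entry of x in a single parallel forward pass.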
Another family of deep generative models with explicit density functions is based on defining continuous, nonlinear transformations between two different spaces. For example, if there is a vector of latent variables z and a continuous, differentiable, invertible transformation g such that g(z) yields a sample from the model in x space, then

p_x(x) = p_z(g^{-1}(x)) |det(∂g^{-1}(x)/∂x)|.

The density p_x is tractable if the density p_z is tractable and the determinant of the Jacobian of g^{-1} is tractable. In other words, a simple distribution over z combined with a transformation g that warps space in complicated ways can yield a complicated distribution over x, and if g is carefully designed, the density is tractable too. Models with nonlinear g functions date back at least to deco1995higher. The latest member of this family is real NVP (dinh2016density). See figure 10 for some visualizations of ImageNet samples generated by real NVP. The main drawback to nonlinear ICA models is that they impose restrictions on the choice of the function g. In particular, the invertibility requirement means that the latent variables z must have the same dimensionality as x. GANs were designed to impose very few requirements on g, and, in particular, admit the use of z with larger dimension than x.
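The change-of-variables rule underlying nonlinear ICA models can be verified numerically for a simple invertible map. This sketch uses a made-up 1-D affine transformation g(z) = 2z + 1 with z ∼ N(0, 1), for which x = g(z) is known to be N(1, 4):

```python
import numpy as np

# Toy invertible transformation g(z) = 2z + 1 with z ~ N(0, 1).
def g_inv(x):
    return (x - 1.0) / 2.0

def p_z(z):
    return np.exp(-0.5 * z**2) / np.sqrt(2 * np.pi)

# Change of variables: p_x(x) = p_z(g^{-1}(x)) * |d g^{-1}/dx|.
def p_x(x):
    return p_z(g_inv(x)) * 0.5  # |d g^{-1}/dx| = 1/2 for this g

# x = g(z) should be distributed as N(1, 2^2); check p_x against that.
def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

xs = np.linspace(-5, 7, 50)
assert np.allclose(p_x(xs), normal_pdf(xs, 1.0, 2.0))
```

Real NVP generalizes this idea to high-dimensional x with deep transformations whose Jacobian determinants remain cheap to compute.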
For more information about the chain rule of probability used to define FVBNs or about the effect of deterministic transformations on probability densities as used to define nonlinear ICA models, see chapter 3 of Goodfellow-et-al-2016.
In summary, models that define an explicit, tractable density are highly effective, because they permit the use of an optimization algorithm directly on the log-likelihood of the training data. However, the family of models that provide a tractable density is limited, with different families having different disadvantages.
To avoid some of the disadvantages imposed by the design requirements of models with tractable density functions, other models have been developed that still provide an explicit density function but use one that is intractable, requiring the use of approximations to maximize the likelihood. These fall roughly into two categories: those using deterministic approximations, which almost always means variational methods, and those using stochastic approximations, meaning Markov chain Monte Carlo methods.
Variational methods define a lower bound

L(x; θ) ≤ log p_model(x; θ).

A learning algorithm that maximizes L is guaranteed to obtain at least as high a value of the log-likelihood as it does of L. For many families of models, it is possible to define an L that is computationally tractable even when the log-likelihood is not. Currently, the most popular approach to variational learning in deep generative models is the variational autoencoder (Kingma-arxiv2013; Rezende-et-al-ICML2014) or VAE. Variational autoencoders are one of the three approaches to deep generative modeling that are the most popular as of this writing, along with FVBNs and GANs. The main drawback of variational methods is that, when too weak of an approximate posterior distribution or too weak of a prior distribution is used,[2] even with a perfect optimization algorithm and infinite training data, the gap between L and the true likelihood can result in p_model learning something other than the true p_data. GANs were designed to be unbiased, in the sense that with a large enough model and infinite data, the Nash equilibrium for a GAN game corresponds to recovering p_data exactly.

[2] Empirically, VAEs with highly flexible priors or highly flexible approximate posteriors can obtain values of L that are near their own log-likelihood (kingma2016improving; chen2016variational). Of course, this is testing the gap between the objective and the bound at the maximum of the bound; it would be better, but not feasible, to test the gap at the maximum of the objective. VAEs obtain likelihoods that are competitive with other methods, suggesting that they are also near the maximum of the objective. In personal conversation, L. Dinh and D. Kingma have conjectured that a family of models (Dinh-et-al-arxiv2014; rezende2015variational; kingma2016improving; dinh2016density) usable as VAE priors or approximate posteriors are universal approximators. If this could be proven, it would establish VAEs as being asymptotically consistent.
In practice, variational methods often obtain very good likelihood, but are regarded as producing lower quality samples. There is not a good method of quantitatively measuring sample quality, so this is a subjective opinion, not an empirical fact. See figure 11 for an example of some samples drawn from a VAE. While it is difficult to point to a single aspect of GAN design and say that it results in better sample quality, GANs are generally regarded as producing better samples. Compared to FVBNs, VAEs are regarded as more difficult to optimize, but GANs are not an improvement in this respect. For more information about variational approximations, see chapter 19 of Goodfellow-et-al-2016.
Most deep learning algorithms make use of some form of stochastic approximation, at the very least in the form of using a small number of randomly selected training examples to form a minibatch used to minimize the expected loss. Usually, sampling-based approximations work reasonably well as long as a fair sample can be generated quickly (e.g. selecting a single example from the training set is a cheap operation) and as long as the variance across samples is not too high. Some models require the generation of more expensive samples, using Markov chain techniques. A Markov chain is a process for generating samples by repeatedly drawing a sample x' ∼ q(x' | x). By repeatedly updating x according to the transition operator q, Markov chain methods can sometimes guarantee that x will eventually converge to a sample from p_model(x). Unfortunately, this convergence can be very slow, and there is no clear way to test whether the chain has converged, so in practice one often uses x too early, before it has truly converged to be a fair sample from p_model. In high-dimensional spaces, Markov chains become less efficient. Boltzmann machines (Fahlman83; Ackley85; Hinton-Boltzmann; Hinton86a) are a family of generative models that rely on Markov chains both to train the model and to generate a sample from the model. Boltzmann machines were an important part of the deep learning renaissance beginning in 2006 (Hinton06; hinton2007learning) but they are now used only very rarely, presumably mostly because the underlying Markov chain approximation techniques have not scaled to problems like ImageNet generation. Moreover, even if Markov chain methods scaled well enough to be used for training, the use of a Markov chain to generate samples from a trained model is undesirable compared to single-step generation methods because the multi-step Markov chain approach has higher computational cost. GANs were designed to avoid using Markov chains for these reasons. For more information about Markov chain Monte Carlo approximations, see chapter 18 of Goodfellow-et-al-2016. For more information about Boltzmann machines, see chapter 20 of the same book.
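The transition-operator view of sampling can be illustrated with a tiny Metropolis-style chain targeting a made-up 1-D distribution. Note that producing one sample requires many sequential updates, and that deciding when the chain has converged is itself unclear; this per-sample cost is exactly what GANs avoid:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_p(x):
    # Made-up target density (unnormalized): a standard Gaussian.
    return -0.5 * x**2

def transition(x):
    """One step of a Metropolis transition operator q(x' | x)."""
    proposal = x + rng.normal(scale=0.5)
    if np.log(rng.random()) < log_p(proposal) - log_p(x):
        return proposal
    return x

# Generating ONE sample requires many sequential transition steps, and
# knowing when the chain has "burned in" is itself hard to determine.
x = 10.0  # poor initialization, far from the target's mass
for _ in range(2000):
    x = transition(x)

print(x)  # after many steps, x should lie in the target's typical region
```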
Some models use both variational and Markov chain approximations. For example, deep Boltzmann machines make use of both types of approximation (SalHinton09).
Some models can be trained without even needing to explicitly define a density function. These models instead offer a way to train the model while interacting only indirectly with p_model, usually by sampling from it. These constitute the second branch, on the right side, of our taxonomy of generative models depicted in figure 9.
Some of these implicit models based on drawing samples from define a Markov chain transition operator that must be run several times to obtain a sample from the model. From this family, the primary example is the generative stochastic network (Bengio-et-al-ICML-2014). As discussed in section 2.3.2, Markov chains often fail to scale to high dimensional spaces, and impose increased computational costs for using the generative model. GANs were designed to avoid these problems.
Finally, the rightmost leaf of our taxonomic tree is the family of implicit models that can generate a sample in a single step. At the time of their introduction, GANs were the only notable member of this family, but since then they have been joined by additional models based on kernelized moment matching (Li-et-al-2015; dziugaite2015training).
In summary, GANs were designed to avoid many disadvantages associated with other generative models:
They can generate samples in parallel, instead of using runtime proportional to the dimensionality of x. This is an advantage relative to FVBNs.
The design of the generator function has very few restrictions. This is an advantage relative to Boltzmann machines, for which few probability distributions admit tractable Markov chain sampling, and relative to nonlinear ICA, for which the generator must be invertible and the latent code z must have the same dimension as the samples x.
No Markov chains are needed. This is an advantage relative to Boltzmann machines and GSNs.
No variational bound is needed, and specific model families usable within the GAN framework are already known to be universal approximators, so GANs are already known to be asymptotically consistent. Some VAEs are conjectured to be asymptotically consistent, but this is not yet proven.
GANs are subjectively regarded as producing better samples than other methods.
At the same time, GANs have taken on a new disadvantage: training them requires finding the Nash equilibrium of a game, which is a more difficult problem than optimizing an objective function.
We have now seen several other generative models and explained that GANs do not work in the same way that they do. But how do GANs themselves work?
The basic idea of GANs is to set up a game between two players. One of them is called the generator. The generator creates samples that are intended to come from the same distribution as the training data. The other player is the discriminator. The discriminator examines samples to determine whether they are real or fake. The discriminator learns using traditional supervised learning techniques, dividing inputs into two classes (real or fake). The generator is trained to fool the discriminator. We can think of the generator as being like a counterfeiter, trying to make fake money, and the discriminator as being like police, trying to allow legitimate money and catch counterfeit money. To succeed in this game, the counterfeiter must learn to make money that is indistinguishable from genuine money, and the generator network must learn to create samples that are drawn from the same distribution as the training data. The process is illustrated in figure 12.
Formally, GANs are a structured probabilistic model (see chapter 16 of Goodfellow-et-al-2016 for an introduction to structured probabilistic models) containing latent variables and observed variables . The graph structure is shown in figure 13.
The two players in the game are represented by two functions, each of which is differentiable both with respect to its inputs and with respect to its parameters. The discriminator is a function D that takes x as input and uses θ^(D) as parameters. The generator is defined by a function G that takes z as input and uses θ^(G) as parameters.
Both players have cost functions that are defined in terms of both players' parameters. The discriminator wishes to minimize J^(D)(θ^(D), θ^(G)) and must do so while controlling only θ^(D). The generator wishes to minimize J^(G)(θ^(D), θ^(G)) and must do so while controlling only θ^(G). Because each player's cost depends on the other player's parameters, but each player cannot control the other player's parameters, this scenario is most straightforward to describe as a game rather than as an optimization problem. The solution to an optimization problem is a (local) minimum, a point in parameter space where all neighboring points have greater or equal cost. The solution to a game is a Nash equilibrium. Here, we use the terminology of local differential Nash equilibria (ratliff2013characterization). In this context, a Nash equilibrium is a tuple (θ^(D), θ^(G)) that is a local minimum of J^(D) with respect to θ^(D) and a local minimum of J^(G) with respect to θ^(G).
The generator is simply a differentiable function G. When z is sampled from some simple prior distribution, G(z) yields a sample of x drawn from p_model. Typically, a deep neural network is used to represent G. Note that the inputs to the function G do not need to correspond to inputs to the first layer of the deep neural net; inputs may be provided at any point throughout the network. For example, we can partition z into two vectors z^(1) and z^(2), then feed z^(1) as input to the first layer of the neural net and add z^(2) to the last layer of the neural net. If z^(2) is Gaussian, this makes x conditionally Gaussian given z^(1). Another popular strategy is to apply additive or multiplicative noise to hidden layers or concatenate noise to hidden layers of the neural net. Overall, we see that there are very few restrictions on the design of the generator net. If we want p_model to have full support on x space we need the dimension of z to be at least as large as the dimension of x, and G must be differentiable, but those are the only requirements. In particular, note that any model that can be trained with the nonlinear ICA approach can be a GAN generator network. The relationship with variational autoencoders is more complicated; the GAN framework can train some models that the VAE framework cannot and vice versa, but the two frameworks also have a large intersection. The most salient difference is that, if relying on standard backprop, VAEs cannot have discrete variables at the input to the generator, while GANs cannot have discrete variables at the output of the generator.
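As a concrete, deliberately tiny illustration of the idea that the generator is just a differentiable function, here is a hypothetical two-layer generator in numpy. The layer sizes and the Gaussian prior are illustrative choices, not a prescription:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: latent code z in R^8, samples x in R^4.
z_dim, hidden, x_dim = 8, 16, 4

# Generator parameters theta_G (randomly initialized here).
W1 = rng.normal(scale=0.1, size=(z_dim, hidden))
b1 = np.zeros(hidden)
W2 = rng.normal(scale=0.1, size=(hidden, x_dim))
b2 = np.zeros(x_dim)

def generator(z):
    """G(z): a differentiable map from latent space to sample space."""
    h = np.tanh(z @ W1 + b1)   # every operation is differentiable,
    return h @ W2 + b2         # so gradients can flow back to theta_G

# Sampling from the model = sample z from a simple prior, then apply G.
z = rng.normal(size=(5, z_dim))   # minibatch of 5 latent codes
x = generator(z)
print(x.shape)  # (5, 4)
```

Note that all entries of each sample are produced in one parallel forward pass, in contrast to the sequential sampling of FVBNs.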
The training process consists of simultaneous SGD. On each step, two minibatches are sampled: a minibatch of x values from the dataset and a minibatch of z values drawn from the model's prior over latent variables. Then two gradient steps are made simultaneously: one updating θ^(D) to reduce J^(D) and one updating θ^(G) to reduce J^(G). In both cases, it is possible to use the gradient-based optimization algorithm of your choice. Adam (kingma2014adam) is usually a good choice. Many authors recommend running more steps of one player than the other, but as of late 2016, the author's opinion is that the protocol that works the best in practice is simultaneous gradient descent, with one step for each player.
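The simultaneous-SGD protocol can be sketched on a toy 1-D problem. Everything below is invented for illustration (the data distribution, the one-parameter generator, the logistic discriminator, plain SGD instead of Adam, and hand-derived gradients); the point is the structure of the loop: sample both minibatches, then take one gradient step for each player:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

# Toy 1-D GAN: real data ~ N(3, 1); generator G(z) = z + b with a single
# parameter b; discriminator D(x) = sigmoid(w*x + c).
b = 0.0          # generator parameter theta_G
w, c = 0.1, 0.0  # discriminator parameters theta_D
lr = 0.03

for step in range(5000):
    # Sample the two minibatches: x from the data, z from the prior.
    x_real = rng.normal(3.0, 1.0, size=64)
    z = rng.normal(size=64)
    x_fake = z + b

    s_real = sigmoid(w * x_real + c)
    s_fake = sigmoid(w * x_fake + c)

    # Discriminator gradient of the cross-entropy cost
    # J_D = -mean(log s_real) - mean(log(1 - s_fake)).
    gw = np.mean(-(1 - s_real) * x_real) + np.mean(s_fake * x_fake)
    gc = np.mean(-(1 - s_real)) + np.mean(s_fake)

    # Generator gradient of a cross-entropy cost that rewards fooling the
    # discriminator: J_G = -mean(log s_fake).
    gb = np.mean(-(1 - s_fake) * w)

    # Simultaneous gradient steps: one step for each player.
    w, c = w - lr * gw, c - lr * gc
    b = b - lr * gb

print(b)  # b should drift roughly toward 3, the mean of the data
```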
Several different cost functions may be used within the GANs framework.
All of the different games designed for GANs so far use the same cost for the discriminator, J^(D)(θ^(D), θ^(G)). They differ only in terms of the cost used for the generator, J^(G)(θ^(D), θ^(G)).
The cost used for the discriminator is:

J^(D)(θ^(D), θ^(G)) = -½ E_{x∼p_data} log D(x) - ½ E_{z} log(1 - D(G(z))).   (6)
This is just the standard cross-entropy cost that is minimized when training a standard binary classifier with a sigmoid output. The only difference is that the classifier is trained on two minibatches of data: one coming from the dataset, where the label is 1 for all examples, and one coming from the generator, where the label is 0 for all examples.
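To make the two-minibatch view concrete, the sketch below (with made-up numbers and a made-up 1-D discriminator) computes the discriminator cost both ways: directly from equation 6, and as an ordinary binary cross-entropy with label 1 on data examples and label 0 on generator samples:

```python
import numpy as np

sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

rng = np.random.default_rng(0)

# Made-up discriminator: D(x) = sigmoid(w * x + c) on 1-D inputs.
w, c = 2.0, -1.0
D = lambda x: sigmoid(w * x + c)

x_data = rng.normal(1.0, 0.5, size=8)   # minibatch from the dataset
x_gen = rng.normal(-1.0, 0.5, size=8)   # minibatch from the generator

# Equation 6: J_D = -1/2 E_data[log D(x)] - 1/2 E_z[log(1 - D(G(z)))].
j_d = -0.5 * np.mean(np.log(D(x_data))) - 0.5 * np.mean(np.log(1 - D(x_gen)))

# Same quantity as ordinary binary cross-entropy over the combined batch,
# with label 1 on data examples and label 0 on generator examples.
x = np.concatenate([x_data, x_gen])
y = np.concatenate([np.ones(8), np.zeros(8)])
bce = -np.mean(y * np.log(D(x)) + (1 - y) * np.log(1 - D(x)))

assert np.isclose(j_d, bce)
```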
All versions of the GAN game encourage the discriminator to minimize equation 6. In all cases, the discriminator has the same optimal strategy. The reader is now encouraged to complete the exercise in section 7.1 and review its solution given in section 8.1. This exercise shows how to derive the optimal discriminator strategy and discusses the importance of the form of this solution.
We see that by training the discriminator, we are able to obtain an estimate of the ratio

p_data(x) / p_model(x)

at every point x. Estimating this ratio enables us to compute a wide variety of divergences and their gradients. This is the key approximation technique that sets GANs apart from variational autoencoders and Boltzmann machines. Other deep generative models make approximations based on lower bounds or Markov chains; GANs make approximations based on using supervised learning to estimate a ratio of two densities. The GAN approximation is subject to the failures of supervised learning: overfitting and underfitting. In principle, with perfect optimization and enough training data, these failures can be overcome. Other models make other approximations that have other failures.
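The density-ratio view can be checked with known densities: if D*(x) = p_data(x) / (p_data(x) + p_model(x)) is the optimal discriminator, then D*/(1 - D*) recovers p_data/p_model exactly. A numeric sketch with two made-up Gaussians:

```python
import numpy as np

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Made-up "data" and "model" densities we happen to know in closed form.
p_data = lambda x: gauss(x, 0.0, 1.0)
p_model = lambda x: gauss(x, 1.0, 2.0)

# The optimal discriminator for these two distributions.
D_star = lambda x: p_data(x) / (p_data(x) + p_model(x))

# D*/(1 - D*) recovers the density ratio p_data/p_model at every point.
xs = np.linspace(-3, 3, 25)
ratio_from_D = D_star(xs) / (1 - D_star(xs))
assert np.allclose(ratio_from_D, p_data(xs) / p_model(xs))
```

In a real GAN neither density is known; the discriminator estimates this ratio purely from samples, using supervised learning.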
Because the GAN framework can naturally be analyzed with the tools of game theory, we call GANs “adversarial.” But we can also think of them as cooperative, in the sense that the discriminator estimates this ratio of densities and then freely shares this information with the generator. From this point of view, the discriminator is more like a teacher instructing the generator in how to improve than an adversary. So far, this cooperative view has not led to any particular change in the development of the mathematics.
So far we have specified the cost function for only the discriminator. A complete specification of the game requires that we specify a cost function also for the generator.
The simplest version of the game is a zero-sum game, in which the sum of all players' costs is always zero. In this version of the game,

J^(G) = -J^(D).   (7)

Because J^(G) is tied directly to J^(D), we can summarize the entire game with a value function specifying the discriminator's payoff:

V(θ^(D), θ^(G)) = -J^(D)(θ^(D), θ^(G)).
Zero-sum games are also called minimax games because their solution involves minimization in an outer loop and maximization in an inner loop:

θ^(G)* = argmin_{θ^(G)} max_{θ^(D)} V(θ^(D), θ^(G)).
The minimax game is mostly of interest because it is easily amenable to theoretical analysis. Goodfellow-et-al-NIPS2014-small used this variant of the GAN game to show that learning in this game resembles minimizing the Jensen-Shannon divergence between the data and the model distribution, and that the game converges to its equilibrium if both players’ policies can be updated directly in function space. In practice, the players are represented with deep neural nets and updates are made in parameter space, so these results, which depend on convexity, do not apply.
The cost used for the generator in the minimax game (equation 7) is useful for theoretical analysis, but does not perform especially well in practice.
Minimizing the cross-entropy between a target class and a classifier’s predicted distribution is highly effective because the cost never saturates when the classifier has the wrong output. The cost does eventually saturate, approaching zero, but only when the classifier has already chosen the correct class.
In the minimax game, the discriminator minimizes a cross-entropy, but the generator maximizes the same cross-entropy. This is unfortunate for the generator, because when the discriminator successfully rejects generator samples with high confidence, the generator’s gradient vanishes.
To solve this problem, one approach is to continue to use cross-entropy minimization for the generator. Instead of flipping the sign on the discriminator’s cost to obtain a cost for the generator, we flip the target used to construct the cross-entropy cost. The cost for the generator then becomes: J^(G) = −(1/2) E_z log D(G(z)).
In the minimax game, the generator minimizes the log-probability of the discriminator being correct. In this game, the generator maximizes the log-probability of the discriminator being mistaken.
This version of the game is heuristically motivated, rather than being motivated by a theoretical concern. The sole motivation for this version of the game is to ensure that each player has a strong gradient when that player is “losing” the game.
In this version of the game, the game is no longer zero-sum, and cannot be described with a single value function.
We might like to be able to do maximum likelihood learning with GANs, which would mean minimizing the KL divergence between the data and the model, as in equation 4. Indeed, in section 2, we said that GANs could optionally implement maximum likelihood, for the purpose of simplifying their comparison to other models.
There are a variety of methods of approximating equation 4 within the GAN framework. Goodfellow-ICLR2015 showed that using
J^(G) = −(1/2) E_z exp(σ⁻¹(D(G(z)))), where σ is the logistic sigmoid function, is equivalent to minimizing equation 4, under the assumption that the discriminator is optimal. This equivalence holds in expectation; in practice, both stochastic gradient descent on the KL divergence and the GAN training procedure will have some variance around the true expected gradient due to the use of sampling (of x for maximum likelihood and of z for GANs) to construct the estimated gradient. The demonstration of this equivalence is an exercise (section 7.3 with the solution in section 8.3).
Other methods of approximating maximum likelihood within the GANs framework are possible. See for example nowozin2016f.
As part of our investigation of how GANs work, we might wonder exactly what it is that makes them work well for generating samples.
Previously, many people (including the author) believed that GANs produced sharp, realistic samples because they minimize the Jensen-Shannon divergence while VAEs produce blurry samples because they minimize the KL divergence between the data and the model.
The KL divergence is not symmetric; minimizing D_KL(p_data ‖ p_model) is different from minimizing D_KL(p_model ‖ p_data). Maximum likelihood estimation performs the former; minimizing the Jensen-Shannon divergence is somewhat more similar to the latter. As shown in figure 14, the latter might be expected to yield better samples because a model trained with this divergence would prefer to generate samples that come only from modes in the training distribution even if that means ignoring some modes, rather than including all modes but generating some samples that do not come from any training set mode.
Some newer evidence suggests that the use of the Jensen-Shannon divergence does not explain why GANs make sharper samples:
GANs often choose to generate from very few modes; fewer than the limit imposed by the model capacity. The reverse KL prefers to generate from as many modes of the data distribution as the model is able to; it does not prefer fewer modes in general. This suggests that the mode collapse is driven by a factor other than the choice of divergence.
Altogether, this suggests that GANs choose to generate a small number of modes due to a defect in the training procedure, rather than due to the divergence they aim to minimize. This is discussed further in section 5.1.1. The reason that GANs produce sharp samples is not entirely clear. It may be that the family of models trained using GANs is different from the family of models trained using VAEs (for example, with GANs it is easy to make models where x has a more complicated distribution than just an isotropic Gaussian conditioned on the input to the generator). It may also be that the kind of approximations that GANs make have different effects than the kind of approximations that other frameworks make.
We can think of the generator network as learning by a strange kind of reinforcement learning. Rather than being told a specific output x it should associate with each z, the generator takes actions and receives rewards for them. In particular, note that J^(G) does not make reference to the training data directly at all; all information about the training data comes only through what the discriminator has learned. (Incidentally, this makes GANs resistant to overfitting, because the generator has no opportunity in practice to directly copy training examples.) The learning process differs somewhat from traditional reinforcement learning because
The generator is able to observe not just the output of the reward function but also its gradients.
The reward function is non-stationary; the reward is based on the discriminator which learns in response to changes in the generator’s policy.
In all cases, we can think of the sampling process that begins with the selection of a specific z value as an episode that receives a single reward, independent of the actions taken for all other z values. The reward given to the generator is a function of a single scalar value, D(G(z)). We usually think of this in terms of cost (negative reward). The cost for the generator is always monotonically decreasing in D(G(z)), but different games are designed to make this cost decrease faster along different parts of the curve.
Figure 16 shows the cost response curves as functions of D(G(z)) for three different variants of the GAN game. We see that the maximum likelihood game gives very high variance in the cost, with most of the cost gradient coming from the very few samples of z that correspond to the samples that are most likely to be real rather than fake. The heuristically designed non-saturating cost has lower sample variance, which may explain why it is more successful in practice. This suggests that variance reduction techniques could be an important research area for improving the performance of GANs, especially GANs based on maximum likelihood.
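The shapes of these three cost curves can be checked numerically. The sketch below evaluates each generator cost as a function of d = D(G(z)) (constants of 1/2 omitted; the maximum likelihood cost uses exp(σ⁻¹(d)) = d/(1−d)):

```python
import numpy as np

# The three generator cost variants discussed above, as functions of
# d = D(G(z)). Function names are illustrative.
def minimax_cost(d):
    return np.log(1.0 - d)          # saturates as d -> 0 (discriminator winning)

def non_saturating_cost(d):
    return -np.log(d)               # strong gradient as d -> 0

def maximum_likelihood_cost(d):
    return -d / (1.0 - d)           # -exp(sigma^{-1}(d)); explodes as d -> 1

d = np.linspace(0.01, 0.99, 99)
# Gradient magnitudes near d = 0.01, i.e. when the discriminator confidently
# rejects the generator's samples:
g_minimax = np.gradient(minimax_cost(d), d)[0]
g_nonsat = np.gradient(non_saturating_cost(d), d)[0]
print(abs(g_minimax) < abs(g_nonsat))  # near-zero vs. large gradient
```

The comparison illustrates the earlier point: the minimax cost's gradient vanishes exactly where the generator most needs a learning signal, while the non-saturating cost stays steep there.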
Most GANs today are at least loosely based on the DCGAN architecture (radford2015unsupervised). DCGAN stands for “deep convolutional GAN.” Though GANs were both deep and convolutional prior to DCGANs, the name DCGAN is useful to refer to this specific style of architecture. Some of the key insights of the DCGAN architecture were to:
The overall network structure is mostly borrowed from the all-convolutional net (Springenberg2015). This architecture contains neither pooling nor “unpooling” layers. When the generator needs to increase the spatial dimension of the representation it uses transposed convolution with a stride greater than 1.
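To make the upsampling mechanism concrete, here is a minimal 1-D transposed convolution with stride 2 (a simplification for illustration; real DCGANs use 2-D kernels, and the function name and fixed kernel here are made up):

```python
import numpy as np

# A minimal 1-D transposed convolution with stride 2, sketching how a
# generator can grow the spatial dimension without any unpooling layer.
def transposed_conv1d(x, kernel, stride=2):
    out = np.zeros(stride * (len(x) - 1) + len(kernel))
    for i, v in enumerate(x):
        # each input value "stamps" a scaled copy of the kernel, stride apart
        out[i * stride : i * stride + len(kernel)] += v * kernel
    return out

x = np.array([1.0, 2.0, 3.0, 4.0])       # low-resolution feature map
k = np.array([0.5, 1.0, 0.5])            # kernel (learned in a real model)
y = transposed_conv1d(x, k, stride=2)
print(len(x), "->", len(y))              # 4 -> 9: spatial size roughly doubles
```

Overlapping stamps sum where the strided copies of the kernel intersect, which is why stride-2 transposed convolution roughly doubles the spatial size rather than exactly doubling it.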
The use of the Adam optimizer rather than SGD with momentum.
Prior to DCGANs, LAPGANs (denton2015deep) were the only version of GAN that had been able to scale to high resolution images. LAPGANs require a multi-stage generation process in which multiple GANs generate different levels of detail in a Laplacian pyramid representation of an image. DCGANs were the first GAN model to learn to generate high resolution images in a single shot. As shown in figure 18, DCGANs are able to generate high quality images when trained on restricted domains of images, such as images of bedrooms. DCGANs also clearly demonstrated that GANs learn to use their latent code in meaningful ways, with simple arithmetic operations in latent space having clear interpretation as arithmetic operations on semantic attributes of the input, as demonstrated in figure 19.
While trying to understand how GANs work, one might naturally wonder about how they are connected to noise-contrastive estimation (NCE) (Gutmann+Hyvarinen-2010). Minimax GANs use the cost function from NCE as their value function, so the methods seem closely related at face value. It turns out that they actually learn very different things, because the two methods focus on different players within this game. Roughly speaking, the goal of NCE is to learn the density model within the discriminator, while the goal of GANs is to learn the sampler defining the generator. While these two tasks seem closely related at a qualitative level, the gradients for the tasks are actually quite different. Surprisingly, maximum likelihood turns out to be closely related to NCE, and corresponds to playing a minimax game with the same value function, but using a sort of heuristic update strategy rather than gradient descent for one of the two players. The connections are summarized in figure 20.
Practitioners use several tricks to improve the performance of GANs. It can be difficult to tell how effective some of these tricks are; many of them seem to help in some contexts and hurt in others. These should be regarded as techniques that are worth trying out, not as ironclad best practices.
NIPS 2016 also featured a workshop on adversarial training, with an invited talk by Soumith Chintala called “How to train a GAN.” This talk has more or less the same goal as this portion of the tutorial, with a different collection of advice. To learn about tips and tricks not included in this tutorial, check out the GitHub repository associated with Soumith’s talk:
Using labels in any way, shape or form almost always results in a dramatic improvement in the subjective quality of the samples generated by the model. This was first observed by denton2015deep, who built class-conditional GANs that generated much better samples than GANs that were free to generate from any class. Later, salimans2016improved found that sample quality improved even if the generator did not explicitly incorporate class information; training the discriminator to recognize specific classes of real objects is sufficient.
It is not entirely clear why this trick works. It may be that the incorporation of class information gives the training process useful clues that help with optimization. It may also be that this trick gives no objective improvement in sample quality, but instead biases the samples toward taking on properties that the human visual system focuses on. If the latter is the case, then this trick may not result in a better model of the true data-generating distribution, but it still helps to create media for a human audience to enjoy and may help an RL agent to carry out tasks that rely on knowledge of the same aspects of the environment that are relevant to human beings.
It is important to compare results obtained using this trick only to other results using the same trick; models trained with labels should be compared only to other models trained with labels, class-conditional models should be compared only to other class-conditional models. Comparing a model that uses labels to one that does not is unfair and an uninteresting benchmark, much as a convolutional model can usually be expected to outperform a non-convolutional model on image tasks.
GANs are intended to work when the discriminator estimates a ratio of two densities, but deep neural nets are prone to producing highly confident outputs that identify the correct class but with too extreme of a probability. This is especially the case when the input to the deep network is adversarially constructed; the classifier tends to linearly extrapolate and produce extremely confident predictions (Goodfellow-2015-adversarial).
To encourage the discriminator to estimate soft probabilities rather than to extrapolate to extremely confident classification, we can use a technique called one-sided label smoothing (salimans2016improved).
Usually we train the discriminator using equation 6. We can write this in TensorFlow (tensorflow) code as:
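The code block that accompanied this passage is missing from this version. A numpy sketch of the same discriminator loss (the quantity TensorFlow's `tf.nn.sigmoid_cross_entropy_with_logits` computes, with targets of 1 for data and 0 for generator samples; the logits below are made up for illustration) is:

```python
import numpy as np

# Numerically stable sigmoid cross-entropy from logits, matching the
# formulation used by tf.nn.sigmoid_cross_entropy_with_logits.
def sigmoid_cross_entropy(logits, targets):
    return np.maximum(logits, 0) - logits * targets + np.log1p(np.exp(-np.abs(logits)))

d_on_data = np.array([2.0, 3.0])      # discriminator logits on real data
d_on_samples = np.array([-1.0, 0.5])  # discriminator logits on samples

loss = (sigmoid_cross_entropy(d_on_data, np.ones_like(d_on_data)).mean()
        + sigmoid_cross_entropy(d_on_samples, np.zeros_like(d_on_samples)).mean())
print(loss)
```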
The idea of one-sided label smoothing is to replace the target for the real examples with a value slightly less than one, such as .9:
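The smoothed version differs only in the target for real examples (again a numpy sketch with made-up logits; in TensorFlow one would simply scale the tensor of ones by 0.9):

```python
import numpy as np

# One-sided label smoothing: the target for real data becomes 0.9, while
# the target for fake samples stays exactly 0.
def sigmoid_cross_entropy(logits, targets):
    return np.maximum(logits, 0) - logits * targets + np.log1p(np.exp(-np.abs(logits)))

d_on_data = np.array([2.0, 3.0])      # hypothetical logits on real data
d_on_samples = np.array([-1.0, 0.5])  # hypothetical logits on samples

loss = (sigmoid_cross_entropy(d_on_data, 0.9 * np.ones_like(d_on_data)).mean()
        + sigmoid_cross_entropy(d_on_samples, np.zeros_like(d_on_samples)).mean())

# With a 0.9 target, an extremely confident real logit is penalized
# relative to a moderate one:
print(sigmoid_cross_entropy(np.array([20.0]), np.array([0.9]))[0])  # ~2.0
print(sigmoid_cross_entropy(np.array([2.2]), np.array([0.9]))[0])   # ~0.33
```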
This prevents extreme extrapolation behavior in the discriminator; if it learns to predict extremely large logits corresponding to a probability approaching 1 for some input, it will be penalized and encouraged to bring the logits back down to a smaller value.
It is important to not smooth the labels for the fake samples. Suppose we use a target of 1 − α for the real data and a target of 0 + β for the fake samples. Then the optimal discriminator function is D*(x) = [(1 − α) p_data(x) + β p_model(x)] / [p_data(x) + p_model(x)].
When β is zero, then smoothing by α does nothing but scale down the optimal value of the discriminator. When β is nonzero, the shape of the optimal discriminator function changes. In particular, in a region where p_data(x) is very small and p_model(x) is larger, D*(x) will have a peak near the spurious mode of p_model(x). The discriminator will thus reinforce incorrect behavior in the generator; the generator will be trained either to produce samples that resemble the data or to produce samples that resemble the samples it already makes.
One-sided label smoothing is a simple modification of the much older label smoothing technique, which dates back to at least the 1980s. Szegedy-et-al-2015 demonstrated that label smoothing is an excellent regularizer in the context of convolutional networks for object recognition. One reason that label smoothing works so well as a regularizer is that it does not ever encourage the model to choose an incorrect class on the training set, but only to reduce the confidence in the correct class. Other regularizers such as weight decay often encourage some misclassification if the coefficient on the regularizer is set high enough. wardefarley2016 showed that label smoothing can help to reduce vulnerability to adversarial examples, which suggests that label smoothing should help the discriminator more efficiently learn to resist attack by the generator.
Since the introduction of DCGANs, most GAN architectures have involved some form of batch normalization. The main purpose of batch normalization is to improve the optimization of the model, by reparameterizing the model so that the mean and variance of each feature are determined by a single mean parameter and a single variance parameter associated with that feature, rather than by a complicated interaction between all of the weights of all of the layers used to extract the feature. This reparameterization is accomplished by subtracting the mean and dividing by the standard deviation of that feature on a minibatch of data. It is important that the normalization operation is part of the model, so that back-propagation computes the gradient of features that are defined to always be normalized. The method is much less effective if features are frequently renormalized after learning without the normalization defined as part of the model.
Batch normalization is very helpful, but for GANs has a few unfortunate side effects. The use of a different minibatch of data to compute the normalization statistics on each step of training results in fluctuation of these normalizing constants. When minibatch sizes are small (as is often the case when trying to fit a large generative model into limited GPU memory) these fluctuations can become large enough that they have more effect on the image generated by the GAN than the input has. See figure 21 for an example.
salimans2016improved introduced techniques to mitigate this problem. Reference batch normalization consists of running the network twice: once on a minibatch of reference examples that are sampled once at the start of training and never replaced, and once on the current minibatch of examples to train on. The mean and standard deviation of each feature are computed using the reference batch. The features for both batches are then normalized using these computed statistics. A drawback to reference batch normalization is that the model can overfit to the reference batch. To mitigate this problem slightly, one can instead use virtual batch normalization, in which the normalization statistics for each example are computed using the union of that example and the reference batch. Both reference batch normalization and virtual batch normalization have the property that all examples in the training minibatch are processed independently from each other, and all samples produced by the generator (except those defining the reference batch) are i.i.d.
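The two normalization schemes can be sketched in a few lines of numpy (feature sizes and distributions here are arbitrary; a real implementation operates inside the network and back-propagates through the statistics):

```python
import numpy as np

# Reference batch normalization: statistics come from a fixed reference batch
# sampled once at the start of training, so each example's normalization is
# independent of the rest of its minibatch.
rng = np.random.default_rng(0)
reference_batch = rng.normal(loc=5.0, scale=2.0, size=(64, 8))   # fixed forever
mu = reference_batch.mean(axis=0)
sigma = reference_batch.std(axis=0) + 1e-8

def reference_batch_norm(features):
    return (features - mu) / sigma

# Virtual batch normalization instead computes each example's statistics from
# the union of that example and the reference batch, reducing overfitting to
# the reference batch alone.
def virtual_batch_norm(example):
    union = np.vstack([reference_batch, example[None, :]])
    return (example - union.mean(axis=0)) / (union.std(axis=0) + 1e-8)

x = rng.normal(loc=5.0, scale=2.0, size=(16, 8))  # current minibatch
normed = reference_batch_norm(x)
print(normed.mean(), normed.std())  # roughly 0 and 1
```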
Many people have an intuition that it is necessary to somehow balance the two players to prevent one from overpowering the other. If such balance is desirable and feasible, it has not yet been demonstrated in any compelling fashion.
The author’s present belief is that GANs work by estimating the ratio of the data density and model density. This ratio is estimated correctly only when the discriminator is optimal, so it is fine for the discriminator to overpower the generator.
Sometimes the gradient for the generator can vanish when the discriminator becomes too accurate. The right way to solve this problem is not to limit the power of the discriminator, but to use a parameterization of the game where the gradient does not vanish (section 3.2.3).
Sometimes the gradient for the generator can become very large if the discriminator becomes too confident. Rather than making the discriminator less accurate, a better way to resolve this problem is to use one-sided label smoothing (section 4.2).
The idea that the discriminator should always be optimal in order to best estimate the ratio would suggest training the discriminator for k steps every time the generator is trained for one step. In practice, this does not usually result in a clear improvement.
One can also try to balance the generator and discriminator by choosing the model size. In practice, the discriminator is usually deeper and sometimes has more filters per layer than the generator. This may be because it is important for the discriminator to be able to correctly estimate the ratio between the data density and generator density, but it may also be an artifact of the mode collapse problem—since the generator tends not to use its full capacity with current training methods, practitioners presumably do not see much of a benefit from increasing the generator capacity. If the mode collapse problem can be overcome, generator sizes will presumably increase. It is not clear whether discriminator sizes will increase proportionally.
GANs are a relatively new method, with many research directions still remaining open.
The largest problem facing GANs that researchers should try to resolve is the issue of non-convergence.
Most deep models are trained using an optimization algorithm that seeks out a low value of a cost function. While many problems can interfere with optimization, optimization algorithms usually make reliable downhill progress. GANs require finding the equilibrium to a game with two players. Even if each player successfully moves downhill on that player’s update, the same update might move the other player uphill. Sometimes the two players eventually reach an equilibrium, but in other scenarios they repeatedly undo each others’ progress without arriving anywhere useful. This is a general problem with games not unique to GANs, so a general solution to this problem would have wide-reaching applications.
To gain some intuition for how gradient descent performs when applied to games rather than optimization, the reader is encouraged to solve the exercise in section 7.2 and review its solution in section 8.2 now.
Simultaneous gradient descent converges for some games but not all of them.
In the case of the minimax GAN game (section 3.2.2), Goodfellow-et-al-NIPS2014-small showed that simultaneous gradient descent converges if the updates are made in function space. In practice, the updates are made in parameter space, so the convexity properties that the proof relies on do not apply. Currently, there is neither a theoretical argument that GAN games should converge when the updates are made to parameters of deep neural networks, nor a theoretical argument that the games should not converge.
In practice, GANs often seem to oscillate, somewhat like what happens in the toy example in section 8.2, meaning that they progress from generating one kind of sample to generating another kind of sample without eventually reaching an equilibrium.
Probably the most common form of harmful non-convergence encountered in the GAN game is mode collapse.
Mode collapse, also known as the Helvetica scenario, is a problem that occurs when the generator learns to map several different input values to the same output point. In practice, complete mode collapse is rare, but partial mode collapse is common. Partial mode collapse refers to scenarios in which the generator makes multiple images that contain the same color or texture themes, or multiple images containing different views of the same dog. The mode collapse problem is illustrated in figure 22.
Mode collapse may arise because the maximin solution to the GAN game is different from the minimax solution. When we find the model via
G* = argmin over G of the max over D of V(G, D),
G* draws samples from the data distribution. When we exchange the order of the min and max and solve
max over D of the min over G of V(G, D),
the minimization with respect to the generator now lies in the inner loop of the optimization procedure. The generator is thus asked to map every z value to the single x coordinate that the discriminator believes is most likely to be real rather than fake. Simultaneous gradient descent does not clearly privilege min-max over max-min or vice versa. We use it in the hope that it will behave like min-max but it often behaves like max-min.
As discussed in section 3.2.5, mode collapse does not seem to be caused by any particular cost function. It is commonly asserted that mode collapse is caused by the use of Jensen-Shannon divergence, but this does not seem to be the case, because GANs that minimize approximations of D_KL(p_data ‖ p_model) face the same issues, and because the generator often collapses to even fewer modes than would be preferred by the Jensen-Shannon divergence.
Because of the mode collapse problem, applications of GANs are often limited to problems where it is acceptable for the model to produce a small number of distinct outputs, usually tasks where the goal is to map some input to one of many acceptable outputs. As long as the GAN is able to find a small number of these acceptable outputs, it is useful. One example is text-to-image synthesis, in which the input is a caption for an image, and the output is an image matching that description. See figure 23 for a demonstration of this task. In very recent work, reedgenerating have shown that other models have higher output diversity than GANs for such tasks (figure 24), but StackGANs (zhang2016stackgan) seem to have higher output diversity than previous GAN-based approaches (figure 25).
The mode collapse problem is probably the most important issue with GANs that researchers should attempt to address.
One attempt is minibatch features (salimans2016improved). The basic idea of minibatch features is to allow the discriminator to compare an example to a minibatch of generated samples and a minibatch of real samples. By measuring distances to these other samples in latent spaces, the discriminator can detect if a sample is unusually similar to other generated samples. Minibatch features work well. It is strongly recommended to directly copy the Theano/TensorFlow code released with the paper that introduced them, since small changes in the definition of the features result in large reductions in performance.
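The core idea can be sketched as follows (the feature dimensionality, L1 kernel, and function name are illustrative only; as noted above, the released implementation should be copied in practice):

```python
import numpy as np

# Sketch of minibatch features: augment each example's features with a
# "closeness" score to the other examples in the minibatch, so the
# discriminator can notice when generated samples are suspiciously similar
# to one another (a symptom of mode collapse).
def minibatch_features(features):
    # features: (batch, dim) activations from an intermediate layer
    diffs = features[:, None, :] - features[None, :, :]
    dists = np.abs(diffs).sum(axis=2)             # pairwise L1 distances
    closeness = np.exp(-dists).sum(axis=1) - 1.0  # exclude the self term
    return np.concatenate([features, closeness[:, None]], axis=1)

collapsed = np.ones((4, 3))   # a mode-collapsed batch: identical samples
diverse = np.random.default_rng(0).normal(size=(4, 3))
print(minibatch_features(collapsed)[0, -1])  # 3.0: maximal closeness
print(minibatch_features(diverse)[0, -1])    # smaller closeness
```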
Minibatch GANs trained on CIFAR-10 obtain excellent results, with most samples being recognizable as specific CIFAR-10 classes (figure 26). When trained on ImageNet, few images are recognizable as belonging to a specific ImageNet class (figure 27). Some of the better images are cherry-picked into figure 28.
Minibatch GANs have reduced the mode collapse problem enough that other problems, such as difficulties with counting, perspective, and global structure become the most obvious defects (figure 29, figure 30, and figure 31, respectively). Many of these problems could presumably be resolved by designing better model architectures.
Another approach to solving the mode collapse problem is unrolled GANs (metz2016unrolled). Ideally, we would like to find G* = argmin over G of the max over D of V(G, D). In practice, when we simultaneously follow the gradient of V(G, D) for both players, we essentially ignore the max operation when computing the gradient for G. Really, we should regard max over D of V(G, D) as the cost function for G, and we should back-propagate through the maximization operation. Various strategies exist for back-propagating through a maximization operation, but many, such as those based on implicit differentiation, are unstable. The idea of unrolled GANs is to build a computational graph describing
k steps of learning in the discriminator, then back-propagate through all of these steps of learning when computing the gradient on the generator. Fully maximizing the value function for the discriminator takes tens of thousands of steps, but metz2016unrolled found that unrolling for even small numbers of steps, like 10 or fewer, can noticeably reduce the mode dropping problem. This approach has not yet been scaled up to ImageNet. See figure 32 for a demonstration of unrolled GANs on a toy problem.
If our theory of how to understand whether a continuous, high-dimensional non-convex game will converge could be improved, or if we could develop algorithms that converge more reliably than simultaneous gradient descent, several application areas besides GANs would benefit. Even restricted to just AI research, we find games in many scenarios:
Agents that literally play games, such as AlphaGo (silver2016mastering).
Machine learning security, where models must resist adversarial examples (Szegedy-ICLR2014; Goodfellow-2015-adversarial).
Domain adaptation via domain-adversarial learning (ganin2015domain).
Adversarial mechanisms for preserving privacy (edwards2015censoring).
Adversarial mechanisms for cryptography (abadi2016learning).
This is by no means an exhaustive list.
Another highly important research area related to GANs is quantitative evaluation: it is not clear how to quantitatively evaluate generative models. Models that obtain good likelihood can generate bad samples, and models that generate good samples can have poor likelihood. There is no clearly justified way to quantitatively score samples. GANs are somewhat harder to evaluate than other generative models because it can be difficult to estimate the likelihood for GANs (but it is possible—see wu2016quantitative). Theis2015d describe many of the difficulties with evaluating generative models.
The only real requirement imposed on the design of the generator by the GAN framework is that the generator must be differentiable. Unfortunately, this means that the generator cannot produce discrete data, such as one-hot word or character representations. Removing this limitation is an important research direction that could unlock the potential of GANs for NLP. There are at least three obvious ways one could attack this problem:
Using the REINFORCE algorithm (Williams-1992).
Using the concrete distribution (maddison2016concrete) or Gumbel-softmax (jang2016categorical).
Training the generator to sample continuous values that can be decoded to discrete ones (e.g., sampling word embeddings directly).
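The second option in the list above can be sketched briefly. This is a hedged illustration of the Gumbel-softmax / concrete relaxation, which replaces a non-differentiable categorical sample with a temperature-controlled, nearly one-hot continuous vector (logits and temperature here are arbitrary):

```python
import numpy as np

# Gumbel-softmax relaxation: a differentiable approximation to sampling a
# one-hot vector from a categorical distribution defined by logits.
rng = np.random.default_rng(0)

def gumbel_softmax(logits, temperature):
    # Gumbel(0, 1) noise via the inverse-CDF trick
    gumbel_noise = -np.log(-np.log(rng.uniform(size=logits.shape)))
    y = (logits + gumbel_noise) / temperature
    e = np.exp(y - y.max())          # numerically stable softmax
    return e / e.sum()

logits = np.array([1.0, 2.0, 0.5])   # hypothetical generator outputs
soft_sample = gumbel_softmax(logits, temperature=0.5)
print(soft_sample)                   # nearly one-hot; entries sum to 1
```

Lowering the temperature pushes the output toward a true one-hot vector while keeping the sampling operation differentiable with respect to the logits.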
A research area where GANs are already highly successful is the use of generative models for semi-supervised learning, as proposed but not demonstrated in the original GAN paper (Goodfellow-et-al-NIPS2014-small).
GANs have been successfully applied to semi-supervised learning at least since the introduction of CatGANs (springenberg2015unsupervised). Currently, the state of the art in semi-supervised learning on MNIST, SVHN, and CIFAR-10 is obtained by feature matching GANs (salimans2016improved). Typically, models are trained on these datasets using 50,000 or more labels, but feature matching GANs are able to obtain good performance with very few labels. They obtain state of the art performance within several categories for different amounts of labels, ranging from 20 to 8,000.
The basic idea of how to do semi-supervised learning with feature matching GANs is to turn a classification problem with n classes into a classification problem with n + 1 classes, with the additional class corresponding to fake images. All of the real classes can be summed together to obtain the probability of the image being real, enabling the use of the classifier as a discriminator within the GAN game. The real-vs-fake discriminator can be trained even with unlabeled data, which is known to be real, and with samples from the generator, which are known to be fake. The classifier can also be trained to recognize individual real classes on the limited amount of real, labeled examples. This approach was simultaneously developed by salimans2016improved and odena2016semi. The earlier CatGAN used an n class discriminator rather than an n + 1 class discriminator.
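The "sum the real classes" step can be sketched in a few lines (class count and logits below are arbitrary illustrations):

```python
import numpy as np

# The n+1-way semi-supervised discriminator: the probability that an input
# is real is the sum of the n real-class probabilities, so the same
# classifier doubles as the GAN discriminator.
def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

n_classes = 10
logits = np.random.default_rng(0).normal(size=n_classes + 1)  # last = "fake"
p = softmax(logits)
p_real = p[:n_classes].sum()          # plays the role of D(x)
p_fake = p[n_classes]
print(p_real + p_fake)                # 1.0
```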
Future improvements to GANs can presumably be expected to yield further improvements to semi-supervised learning.
GANs learn a representation z of the image x. It is already known that this representation can capture useful high-level abstract semantic properties of x, but it can be somewhat difficult to make use of this information.
One obstacle to using z is that it can be difficult to obtain z given an input x. Goodfellow-et-al-NIPS2014-small proposed but did not demonstrate using a second network analogous to the generator to sample from p(z | x), much as the generator samples from p(x). So far the full version of this idea, using a fully general neural network as the encoder and sampling from an arbitrarily powerful approximation of p(z | x), has not been successfully demonstrated, but donahue2016adversarial demonstrated how to train a deterministic encoder, and dumoulin2016adversarially demonstrated how to train an encoder network that samples from a Gaussian approximation of the posterior. Further research will presumably develop more powerful stochastic encoders.
Another way to make better use of the code is to train the code to be more useful. InfoGANs (chen2016infogan) regularize some entries in the code vector with an extra objective function that encourages them to have high mutual information with x. Individual entries in the resulting code then correspond to specific semantic attributes of x, such as the direction of lighting on an image of a face.
Researchers have already identified connections between GANs and actor-critic methods (pfau2016connecting) and inverse reinforcement learning (finn2016connection), and have applied GANs to imitation learning (ho2016generative). These connections to RL will presumably continue to bear fruit, both for GANs and for RL.
Shortly before this tutorial was presented at NIPS, a new generative model was released. This model, plug and play generative networks (nguyen2016plug), has dramatically improved the diversity of samples of images of ImageNet classes that can be produced at high resolution.
PPGNs are new and not yet well understood. The model is complicated, and most of the recommendations about how to design the model are based on empirical observation rather than theoretical understanding. This tutorial will thus not say too much about exactly how PPGNs work, since this will presumably become more clear in the future.
As a brief summary, PPGNs are basically an approximate Langevin sampling approach to generating images with a Markov chain. The gradients for the Langevin sampler are estimated using a denoising autoencoder. The denoising autoencoder is trained with several losses, including a GAN loss.
This tutorial includes three exercises to check your understanding. The solutions are given in section 8.
As described in equation 6, the goal of the discriminator is to minimize J^(D)(θ^(D), θ^(G)) = −(1/2) E_{x∼p_data} log D(x) − (1/2) E_z log(1 − D(G(z))) with respect to θ^(D). Imagine that the discriminator can be optimized in function space, so the value of D(x) is specified independently for every value of x. What is the optimal strategy for D? What assumptions need to be made to obtain this result?
Consider a minimax game with two players that each control a single scalar value. The minimizing player controls scalar x and the maximizing player controls scalar y. The value function for this game is V(x, y) = xy.
Does this game have an equilibrium? If so, where is it?
Consider the learning dynamics of simultaneous gradient descent. To simplify the problem, treat gradient descent as a continuous time process. With an infinitesimal learning rate, gradient descent is described by a system of partial differential equations:
$$\frac{\partial x}{\partial t} = -\frac{\partial}{\partial x} V\left(x(t), y(t)\right), \qquad \frac{\partial y}{\partial t} = \frac{\partial}{\partial y} V\left(x(t), y(t)\right).$$
Solve for the trajectory followed by these dynamics.
In this exercise, we will derive a cost that yields (approximate) maximum likelihood learning within the GAN framework. Our goal is to design $J^{(G)}$ so that, if we assume the discriminator is optimal, the expected gradient of $J^{(G)}$ will match the expected gradient of $D_\text{KL}\left(p_\text{data} \| p_\text{model}\right)$.
The solution will take the form of:
$$J^{(G)} = \mathbb{E}_{x \sim p_g} f(x).$$
The exercise consists of determining the form of $f$.
Our goal is to minimize
$$J^{(D)}\left(\theta^{(D)}, \theta^{(G)}\right) = -\frac{1}{2} \mathbb{E}_{x \sim p_\text{data}} \log D(x) - \frac{1}{2} \mathbb{E}_z \log \left(1 - D\left(G(z)\right)\right)$$
in function space, specifying $D(x)$ directly.
We begin by assuming that both $p_\text{data}$ and $p_\text{model}$ are nonzero everywhere. If we do not make this assumption, then some points are never visited during training and have undefined behavior.
To minimize $J^{(D)}$ with respect to $D$, we can write down the functional derivative with respect to a single entry $D(x)$, and set it equal to zero:
$$\frac{\partial J^{(D)}}{\partial D(x)} = 0.$$
By solving this equation, we obtain
$$D^*(x) = \frac{p_\text{data}(x)}{p_\text{data}(x) + p_\text{model}(x)}.$$
Estimating this ratio is the key approximation mechanism used by GANs.
The process is illustrated in figure 35.
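As a quick numerical sanity check of this solution (illustrative, not from the tutorial), the snippet below grid-searches the pointwise discriminator cost at a single $x$, using hypothetical density values, and confirms that the minimizer matches $p_\text{data}(x) / \left(p_\text{data}(x) + p_\text{model}(x)\right)$.

```python
import math

p_data, p_model = 0.7, 0.2  # hypothetical density values at a single point x

def pointwise_cost(d):
    # Integrand of the discriminator's cost at one x (equation 6):
    # -p_data * log D(x) - p_model * log(1 - D(x))
    return -p_data * math.log(d) - p_model * math.log(1.0 - d)

# Brute-force search for the minimizing value of D(x) on a fine grid
grid = [i / 10000 for i in range(1, 10000)]
d_best = min(grid, key=pointwise_cost)

d_star = p_data / (p_data + p_model)  # the claimed optimum
print(round(d_best, 3), round(d_star, 3))  # both ≈ 0.778
```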
The value function
$$V(x, y) = xy$$
is the simplest possible example of a continuous function with a saddle point. It is easiest to understand this game by visualizing the value function in three dimensions, as shown in figure 36.
The three dimensional visualization shows us clearly that there is a saddle point at $x = y = 0$. This is an equilibrium of the game. We could also have found this point by solving for where the derivatives are zero.
Not every saddle point is an equilibrium; we require that an infinitesimal perturbation of one player’s parameters cannot reduce that player’s cost. The saddle point for this game satisfies that requirement. It is something of a pathological equilibrium because the value function is constant as a function of each player’s parameter when holding the other player’s parameter fixed.
To solve for the trajectory taken by gradient descent, we take the derivatives, and find that
$$\frac{\partial x}{\partial t} = -y(t), \qquad \frac{\partial y}{\partial t} = x(t). \tag{13}$$
Differentiating equation 13, we obtain
$$\frac{\partial^2 x}{\partial t^2} = -x(t), \qquad \frac{\partial^2 y}{\partial t^2} = -y(t).$$
Differential equations of this form have sinusoids as their set of basis functions of solutions. Solving for the coefficients that respect the boundary conditions, we obtain
$$x(t) = x(0) \cos(t) - y(0) \sin(t), \qquad y(t) = x(0) \sin(t) + y(0) \cos(t).$$
These dynamics form a circular orbit, as shown in figure 37. In other words, simultaneous gradient descent with an infinitesimal learning rate will orbit the equilibrium forever, at the same radius that it was initialized. With a larger learning rate, it is possible for simultaneous gradient descent to spiral outward forever. Simultaneous gradient descent will never approach the equilibrium.
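A few lines of simulation (illustrative, not from the tutorial) reproduce this behavior for discrete steps on $V(x, y) = xy$: with a small learning rate the iterates stay on a near-circular orbit, while a larger learning rate makes them spiral outward.

```python
import math

def simulate(lr, steps, x=1.0, y=0.0):
    # Simultaneous gradient descent on V(x, y) = x * y:
    # the minimizer updates x with -dV/dx = -y,
    # the maximizer updates y with +dV/dy = +x.
    for _ in range(steps):
        gx, gy = y, x                    # gradients at the *same* point
        x, y = x - lr * gx, y + lr * gy  # simultaneous update
    return math.hypot(x, y)              # distance from the equilibrium (0, 0)

r_small = simulate(lr=0.001, steps=1000)  # stays near the initial radius 1
r_large = simulate(lr=0.1, steps=1000)    # spirals far away from equilibrium
print(round(r_small, 3), round(r_large, 1))
```

Each discrete step multiplies the squared radius by exactly $1 + \text{lr}^2$, so the iterates never move toward the equilibrium, matching the continuous-time analysis.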
For some games, simultaneous gradient descent does converge, and for others, such as the one in this exercise, it does not. For GANs, there is no theoretical prediction as to whether simultaneous gradient descent should converge or not. Settling this theoretical question, and developing algorithms guaranteed to converge, remain important open research problems.
We wish to find a function $f$ such that the expected gradient of
$$J^{(G)} = \mathbb{E}_{x \sim p_g} f(x)$$
is equal to the expected gradient of $D_\text{KL}\left(p_\text{data} \| p_g\right)$.
First, we take the derivative of the KL divergence with respect to a parameter $\theta$:
$$\frac{\partial}{\partial \theta} D_\text{KL}\left(p_\text{data} \| p_g\right) = -\mathbb{E}_{x \sim p_\text{data}} \frac{\partial}{\partial \theta} \log p_g(x).$$
To obtain this result, we made two assumptions:
We assumed that $p_g(x) > 0$ everywhere so that we were able to use the identity
$$p_g(x) \frac{\partial}{\partial \theta} \log p_g(x) = \frac{\partial}{\partial \theta} p_g(x).$$
We assumed that we can use Leibniz’s rule to exchange the order of differentiation and integration (specifically, that both the function and its derivative are continuous, and that the function vanishes for infinite values of $x$).
We see that the derivatives of $J^{(G)}$ come very near to giving us what we want; the only problem is that the expectation is computed by drawing samples from $p_\text{data}$ when we would like it to be computed by drawing samples from $p_g$. We can fix this problem using an importance sampling trick: by setting
$$f(x) = -\frac{p_\text{data}(x)}{p_g(x)}$$
we can reweight the contribution to the gradient from each generator sample to compensate for it having been drawn from the generator rather than the data.
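The importance sampling trick can be illustrated numerically. The sketch below uses two hypothetical known densities standing in for $p_\text{data}$ and $p_g$, and recovers an expectation under $p_\text{data}$ from generator samples by reweighting each sample with the density ratio.

```python
import random, math

def normal_pdf(x, mu, sigma):
    # Density of a univariate normal distribution
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Hypothetical stand-ins: p_data = N(1, 1), p_g = N(0, 1.5)
p_data = lambda x: normal_pdf(x, 1.0, 1.0)
p_g = lambda x: normal_pdf(x, 0.0, 1.5)

rng = random.Random(0)
xs = [rng.gauss(0.0, 1.5) for _ in range(200000)]  # samples from the "generator"

# Importance-weighted estimate of E_{p_data}[x] using only generator samples:
# each sample is reweighted by p_data(x) / p_g(x).
est = sum(p_data(x) / p_g(x) * x for x in xs) / len(xs)
print(round(est, 2))  # should be close to the true mean 1.0 under p_data
```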
Note that when constructing $J^{(G)}$ we must copy $\frac{p_\text{data}(x)}{p_g(x)}$ into $f(x)$ so that $f(x)$ has a derivative of zero with respect to the parameters of $G$. Fortunately, this happens naturally if we obtain the value of the ratio from the discriminator.
From section 8.1, we already know that the discriminator estimates the desired ratio. Using some algebra, we can obtain a numerically stable implementation of $f(x)$. If the discriminator is defined to apply a logistic sigmoid function at the output layer, with $D(x) = \sigma\left(a(x)\right)$, then $f(x) = -\exp\left(a(x)\right)$.
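The numerical advantage of working directly with the logit $a(x)$ can be seen in a few lines (a sketch with a hypothetical logit value, not an actual discriminator): for a large logit, $1 - D(x)$ underflows to zero and the naive ratio $D / (1 - D)$ is lost, while $\exp(a)$ remains a finite, correct value.

```python
import math

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

a = 40.0  # hypothetical discriminator logit at some sample
d = sigmoid(a)

# Naive route: recover the density ratio from D(x) as D / (1 - D).
# For large logits, 1 - D underflows to 0.0 and the ratio degenerates to inf.
naive = d / (1.0 - d) if d < 1.0 else float('inf')

# Stable route: D / (1 - D) = exp(a), so f(x) = -exp(a(x)) works directly
# on the logit and never forms the ill-conditioned quotient.
stable = math.exp(a)

print(naive, stable)
```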
This exercise is taken from a result shown by Goodfellow-ICLR2015. From this exercise, we see that the discriminator estimates a ratio of densities that can be used to calculate a variety of divergences.
GANs are generative models that use supervised learning to approximate an intractable cost function, much as Boltzmann machines use Markov chains to approximate their cost and VAEs use the variational lower bound to approximate their cost. GANs can use this supervised ratio estimation technique to approximate many cost functions, including the KL divergence used for maximum likelihood estimation.
GANs are relatively new and still require some research to reach their full potential. In particular, training GANs requires finding Nash equilibria in high-dimensional, continuous, non-convex games. Researchers should strive to develop better theoretical understanding and better training algorithms for this scenario. Success on this front would improve many other applications, besides GANs.
GANs are crucial to many different state-of-the-art image generation and manipulation systems, and have the potential to enable many other applications in the future.
The author would like to thank the NIPS organizers for inviting him to present this tutorial. Many thanks also to those who commented on his Twitter and Facebook posts asking which topics would be of interest to the tutorial audience. Thanks also to D. Kingma for helpful discussions regarding the description of VAEs. Thanks to Zhu Xiaohu, Alex Kurakin and Ilya Edrenkin for spotting typographical errors in the manuscript.