First Order Generative Adversarial Networks

02/13/2018 ∙ by Calvin Seward et al.

GANs excel at learning high dimensional distributions, but they can update generator parameters in directions that do not correspond to the steepest descent direction of the objective. Prominent examples of problematic update directions include those used in both Goodfellow's original GAN and the WGAN-GP. To formally describe an optimal update direction, we introduce a theoretical framework which allows the derivation of requirements on both the divergence and corresponding method for determining an update direction. These requirements guarantee unbiased mini-batch updates in the direction of steepest descent. We propose a novel divergence which approximates the Wasserstein distance while regularizing the critic's first order information. Together with an accompanying update direction, this divergence fulfills the requirements for unbiased steepest descent updates. We verify our method, the First Order GAN, with CelebA image generation and set a new state of the art on the One Billion Word language generation task.



Code Repositories

first_order_gan — Create images and texts with the First Order Generative Adversarial Networks: arxiv.org/abs/1802.04591

1 Introduction

Generative adversarial networks (GANs) (Goodfellow et al., 2014) excel at learning generative models of complex distributions, such as images (Radford et al., 2015; Ledig et al., 2016), textures (Jetchev et al., 2016; Bergmann et al., 2017; Jetchev et al., 2017), and even texts (Gulrajani et al., 2017; Heusel et al., 2017).

GANs learn a generative model g_θ that maps samples from multivariate random noise into a high dimensional space. The goal of GAN training is to update the generator parameters θ such that the generative model approximates a target probability distribution. In order to determine how close the generated and target distributions are, a class of divergences, the so-called adversarial divergences, was defined and explored by (Liu et al., 2017). This class is broad enough to encompass most popular GAN methods such as the original GAN (Goodfellow et al., 2014), f-GANs (Nowozin et al., 2016), moment matching networks (Li et al., 2015), Wasserstein GANs (Arjovsky et al., 2017) and the tractable version thereof, the WGAN-GP (Gulrajani et al., 2017).

GANs learn a generative model with distribution ν_θ by minimizing an objective function measuring the similarity between the target distribution μ and the generated distribution ν_θ. In most GAN settings the objective function to be minimized is an adversarial divergence (Liu et al., 2017), where a critic function f is learned that distinguishes between target and generated data. For example, in the classic GAN (Goodfellow et al., 2014) the critic classifies data as real or generated, and the generator is encouraged to generate samples that the critic will classify as real.
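
To make this interplay concrete, here is a minimal PyTorch sketch (ours, not from the paper) of the classic GAN losses, assuming generic `critic` and `generator` modules; the critic outputs unnormalized logits and the generator uses the common non-saturating loss:

```python
import torch
import torch.nn.functional as F

def classic_gan_losses(critic, generator, real_batch, z):
    """Critic classifies real vs. generated; generator tries to be classified as real.

    `critic` is assumed to return unnormalized logits of shape (batch, 1).
    """
    fake_batch = generator(z)

    real_logits = critic(real_batch)
    fake_logits = critic(fake_batch.detach())  # no generator gradients during the critic step

    # Critic: maximize log D(x) + log(1 - D(g(z))), written here as a loss to minimize.
    critic_loss = (
        F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
        + F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits))
    )

    # Generator (non-saturating variant): maximize log D(g(z)).
    gen_logits = critic(fake_batch)
    generator_loss = F.binary_cross_entropy_with_logits(gen_logits, torch.ones_like(gen_logits))
    return critic_loss, generator_loss
```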

Unfortunately in GAN training, the generated distribution often fails to converge to the target distribution. Many popular GAN methods are unsuccessful with toy examples, for example failing to generate all modes of a mixture of Gaussians (Srivastava et al., 2017; Metz et al., 2017) or failing to learn the distribution of data on a one-dimensional line in a high dimensional space (Fedus et al., 2017). In these situations, updates to the generator don’t significantly reduce the divergence between generated and target distributions; if there always was a significant reduction in the divergence then the generated distribution would converge to the target distribution.

The key to successful neural network training lies in the ability to efficiently obtain unbiased estimates of the gradient of some loss with respect to the network's parameters. With GANs, this idea can be applied to the generative setting. There, the generator g_θ is parameterized by some values θ. If an unbiased estimate of the gradient of the divergence between target and generated distributions with respect to θ can be obtained during mini-batch learning, then SGD can be applied to learn θ just like in any other neural network setting.

In GAN learning, intuition would dictate updating the generated distribution by moving θ in the direction of steepest descent, −∇_θ τ(μ‖ν_θ). Unfortunately, the direction of steepest descent is generally intractable, therefore θ is updated according to a tractable method; in most cases a critic f is learned and the gradient of the expected critic value, ∇_θ E_{z∼ζ}[f(g_θ(z))], is used as the update direction for θ. Usually the update direction that's used and the direction of steepest descent don't coincide, and therefore learning isn't optimal. As we see later, popular methods such as the WGAN-GP (Gulrajani et al., 2017) are affected by this issue.
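
As a hedged illustration (ours), the tractable update most implementations perform is a step along the mini-batch estimate of ∇_θ E_z[f(g_θ(z))]; the sketch below assumes a trained `critic`, a `generator`, and an optimizer over θ, and, as discussed above, this direction need not coincide with the direction of steepest descent of the divergence:

```python
import torch

def generator_step(critic, generator, z, optimizer):
    """One generator step along the mini-batch estimate of grad_theta E_z[f(g_theta(z))].

    Note: this is the commonly used update direction; it coincides with the
    direction of steepest descent of the divergence only for special choices
    of divergence, which is the point of Requirement 4 below.
    """
    optimizer.zero_grad()
    # Ascend the critic value on generated samples, i.e. descend its negative.
    loss = -critic(generator(z)).mean()
    loss.backward()          # autograd yields the mini-batch gradient estimate
    optimizer.step()
```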

Therefore we set out to answer a simple but fundamental question: Is there an adversarial divergence and corresponding method that produces unbiased estimates of the direction of steepest descent in a mini-batch setting?

In this paper, under reasonable assumptions, we identify a path to such an adversarial divergence and accompanying update method. Similar to the WGAN-GP, this divergence also penalizes a critic’s gradients, and thereby ensures that the critic’s first order information can be used directly to obtain an update direction in the direction of steepest descent.

This program places four requirements on the adversarial divergence and the accompanying update rule for calculating the update direction, requirements that have not, to the best of our knowledge, been formulated together before. This paper will give rigorous definitions of these requirements, but for now intuitive and informal definitions suffice:

  1. the divergence used must decrease as target and generated distributions approach each other. For example, if we define the trivial distance between two probability distributions to be 0 if the distributions are equal, and 1 otherwise, i.e.

    τ_triv(μ‖ν) = 0 if μ = ν, and τ_triv(μ‖ν) = 1 otherwise,

    then even as ν gets close to μ, τ_triv(μ‖ν) doesn't change. Without this requirement, ∇_θ τ(μ‖ν_θ) = 0 and every direction is a “direction of steepest descent,”

  2. critic learning must be tractable,

  3. the gradient ∇_θ τ(μ‖ν_θ) and the result of an update rule must be well defined,

  4. the optimal critic enables an update which is an unbiased estimate of ∇_θ τ(μ‖ν_θ).

In order to formalize these requirements, we will devote Section 2 to defining the notion of adversarial divergences, optimal critics and "well behaved" families of generated distributions ν_θ. In Section 3 we will apply the adversarial divergence paradigm to begin formalizing the requirements above and to better understand existing GAN methods. The last requirement is defined precisely in Section 4, where we explore criteria for an update rule guaranteeing a low variance, unbiased estimate of the true gradient ∇_θ τ(μ‖ν_θ).

After stating these conditions, we devote Section 5 to defining a divergence, the Penalized Wasserstein Divergence, and an associated update rule that together fulfill the first two basic requirements. In this setting, a critic is learned that, similarly to the WGAN-GP critic, pushes real and generated data as far apart as possible while being penalized if it violates a Lipschitz condition.

As we will discover, an optimal critic for the Penalized Wasserstein Divergence between two distributions need not be unique. In fact, this divergence only specifies the values that the optimal critic assumes on the supports of generated and target distributions. Therefore, for many distributions multiple critics with different gradients on the support of the generated distribution can all be optimal.

We apply this insight in Section 6 and modify the Penalized Wasserstein Divergence by adding a gradient penalty to define the First Order Penalized Wasserstein Divergence. This divergence enforces not just correct values for the critic, but also ensures that the critic’s gradient, its first order information, assumes values that allow for an easy formulation of an update rule. Together, this divergence and update rule fulfill all four requirements.

It is our hope that this gradient penalty trick will be applied to other popular GAN methods and ensure that they too return optimal generator updates on mini-batches. Indeed, (Fedus et al., 2017) improves results for existing GAN methods by heuristically adding a gradient penalty to said methods.

Finally, in Section 7 the effectiveness of the First Order GAN will be demonstrated with the generation of both CelebA (Liu et al., 2015) images and One Billion Word Benchmark (Chelba et al., 2013) texts.

2 Notation, Definitions and Assumptions

In (Liu et al., 2017) an adversarial divergence is defined:

Definition 1 (Adversarial Divergence).

Let X be a topological space and let F ⊆ C(X × X, ℝ), F ≠ ∅. An adversarial divergence τ over X is a function

τ(μ‖ν) = sup_{f∈F} E_{(x,y)∼μ⊗ν}[f(x, y)].

Note that care must be taken when choosing the function class F if τ is to be a well defined divergence. For example, if F = C(X × X, ℝ) then the divergence between two distinct Dirac distributions is infinite, and if F = {0}, i.e. F contains only the constant function which assumes zero everywhere, then τ(μ‖ν) = 0 for all μ, ν.

Many existing GAN-style procedures can be formulated as an adversarial divergence. For example, setting F = { (x, y) ↦ log(u(x)) + log(1 − u(y)) : u ∈ C(X, (0, 1)) } results in τ_G, the divergence used in Goodfellow's original GAN (Goodfellow et al., 2014). See (Liu et al., 2017) for further examples.

For convenience, we’ll restrict ourselves to analyzing a special case of the adversarial divergence (similar to Theorem 4 of (Liu et al., 2017)), and use the notation:

Definition 2 (Critic Based Adversarial Divergence).

Let X be a topological space and let F ⊆ C(X, ℝ), F ≠ ∅. Further let m₁, m₂ : ℝ → ℝ be continuous and let r : F × X × X → ℝ be a penalty term which is continuous in its second and third arguments. Then for f ∈ F define

τ_f(μ‖ν) := E_{x∼μ}[m₁(f(x))] + E_{y∼ν}[m₂(f(y))] − E_{x∼μ, y∼ν}[r(f, x, y)]     (1)

and set τ(μ‖ν) := sup_{f∈F} τ_f(μ‖ν).

For example, the divergence τ_G from above can be equivalently defined in this form with a penalty term of zero by taking F = C(X, (0, 1)). Then

τ_G(μ‖ν) = sup_{f∈F} E_{x∼μ}[log f(x)] + E_{y∼ν}[log(1 − f(y))]     (2)

is a critic based adversarial divergence.

An example with a non-zero penalty term is the WGAN-GP (Gulrajani et al., 2017), which is a critic based adversarial divergence in which the critic is penalized for non-unit gradients on points sampled between real and generated data. The WGAN-GP divergence is:

τ_GP(μ‖ν) = sup_{f} E_{x∼μ}[f(x)] − E_{y∼ν}[f(y)] − λ E_{x∼μ, y∼ν, α∼U[0,1]}[(‖∇_x̂ f(x̂)‖ − 1)²],  where x̂ = α x + (1 − α) y.     (3)
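
For concreteness, here is a minimal PyTorch sketch (ours, not the authors' code) of the gradient penalty term in Eq. 3; `critic` is an assumed module and the real and generated batches are assumed to have matching shapes:

```python
import torch

def wgan_gp_penalty(critic, real_batch, fake_batch, lam=10.0):
    """Standard WGAN-GP penalty: lam * E[(||grad_xhat f(xhat)|| - 1)^2],
    with xhat sampled uniformly on lines between real and generated points."""
    eps = torch.rand(real_batch.size(0), *([1] * (real_batch.dim() - 1)),
                     device=real_batch.device)
    x_hat = (eps * real_batch + (1.0 - eps) * fake_batch).detach().requires_grad_(True)

    grads = torch.autograd.grad(
        outputs=critic(x_hat).sum(), inputs=x_hat, create_graph=True
    )[0]
    grad_norm = grads.flatten(start_dim=1).norm(2, dim=1)
    return lam * ((grad_norm - 1.0) ** 2).mean()
```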

While Definition 1 is more general, Definition 2 is more in line with most GAN models. In traditional GAN settings, a critic f in the simpler space C(X, ℝ) is learned that separates real and generated data while reducing some penalty term which depends on both real and generated data. For this reason, we use exclusively the notation from Definition 2.

One desirable property of an adversarial divergence τ is that τ(μ‖ν) obtains its infimum if and only if μ = ν, which leads to the following definition adapted from (Liu et al., 2017):

Definition 3 (Strict adversarial divergence).

Let τ be an adversarial divergence over a topological space X. τ is called a strict adversarial divergence if, for any probability measures μ, ν over X, τ(μ‖ν) attains its infimum only if μ = ν.

In order to analyze GANs that minimize a critic based adversarial divergence, we introduce the set of optimal critics.

Definition 4 (Optimal Critic, OC(μ, ν)).

Let τ be a critic based adversarial divergence over a topological space X and let μ, ν be probability measures over X. Define OC(μ, ν) to be the set of critics in F that maximize τ_f(μ‖ν). That is

OC(μ, ν) := { f ∈ F : τ_f(μ‖ν) = τ(μ‖ν) }.

Note that OC(μ, ν) = ∅ is possible. For example, (Arjovsky & Bottou, 2017) shows that for the original GAN divergence τ_G from Eq. 2 and certain μ, ν, a critic is in OC(μ, ν) only if it is a perfect discriminator. But such a perfect discriminator is not in C(X, (0, 1)), therefore not in F. Thus OC(μ, ν) = ∅.

Finally, we assume that generated data is distributed according to a probability distribution ν_θ parameterized by θ. We further assume that the mapping θ ↦ ν_θ satisfies the mild regularity Assumption 1 defined below. Furthermore, we assume that μ and ν_θ both have compact and disjoint support in Assumption 2 below. Although we conjecture that weaker assumptions can be made, we opt for the stronger assumptions to simplify proofs and focus on defining update rules.

Assumption 1 (Adapted from (Arjovsky et al., 2017)).

Let θ ∈ ℝ^d. We say ν_θ satisfies Assumption 1 if there is a locally Lipschitz function g : ℝ^d × Z → X which is differentiable in the first argument and a distribution ζ with finite support in Z such that for all θ the generated distribution ν_θ is the distribution of g(θ, z) = g_θ(z) where z is sampled from ζ.

Assumption 2 (Compact and Disjoint Distributions).

Using g from Assumption 1, we say that μ and ν_θ satisfy Assumption 2 if for all θ it holds that the supports of the target and generated distributions μ and ν_θ respectively are compact and disjoint.

3 Requirements Derived From Related Work

With the concept of an Adversarial Divergence now formally defined, we can investigate existing GAN methods from a standpoint of Adversarial Divergence minimization. During the last few years, weaknesses in existing GAN frameworks have been highlighted and new frameworks have been proposed to mitigate or eliminate these weaknesses. In this section we’ll trace this history and formalize requirements for adversarial divergences and optimal updates.

Although the idea of using two competing neural networks for unsupervised learning is not new (Schmidhuber, 1992), recent interest in the field started with (Goodfellow et al., 2014) using the divergence defined in Eq. 2 to generate images. However, in (Arjovsky & Bottou, 2017) it was shown that if the target and generated distributions have compact disjoint support, the gradient of the divergence is zero, i.e. ∇_θ τ_G(μ‖ν_θ) = 0, which is clearly an impediment to gradient based learning methods.

In response to this impediment, the Wasserstein GAN was proposed in (Arjovsky et al., 2017). Here the divergence is defined as

τ_W(μ‖ν) = sup_{f : ‖f‖_L ≤ 1} E_{x∼μ}[f(x)] − E_{y∼ν}[f(y)]

where ‖f‖_L is the Lipschitz constant of f. The following example shows the advantage of τ_W over τ_G. Consider the sequence of Dirac measures μ_n = δ_{1/n} and the target measure ν = δ_0. Then τ_W(μ_n‖ν) = 1/n while τ_G(μ_n‖ν) stays at a constant value. As μ_n approaches ν, the Wasserstein divergence decreases while τ_G remains constant.

This issue is formally explored in (Liu et al., 2017) by creating a weak ordering, the so-called strength, of divergences. A divergence τ₁ is said to be stronger than τ₂ if for any sequence of probability measures (μ_n) and any target probability measure ν, convergence of μ_n to ν under τ₁ implies convergence under τ₂. The divergences τ₁ and τ₂ are equivalent if τ₁ is stronger than τ₂ and τ₂ is stronger than τ₁. The Wasserstein distance is the weakest divergence in the class of strict adversarial divergences (Liu et al., 2017).

The strength of an adversarial divergence leads to the following requirement:

Requirement 1 (Equivalence to τ_W).

An adversarial divergence τ is said to fulfill Requirement 1 if τ is a strict adversarial divergence which is weaker than τ_W (and is therefore equivalent to τ_W, since τ_W is the weakest strict adversarial divergence).

The issue of zero gradients was side stepped in (Goodfellow et al., 2014) (and the option more rigorously explored in (Fedus et al., 2017)) by not updating θ with the gradient of the original generator objective E_{y∼ν_θ}[log(1 − f(y))] but instead using the gradient of the non-saturating objective −E_{y∼ν_θ}[log f(y)]. As will be shown in Section 4, this update direction doesn't generally move in the direction of steepest descent.

Although using the Wasserstein distance as a divergence between probability measures solves many theoretical problems, it requires that critics are Lipschitz continuous with Lipschitz constant at most 1. Unfortunately, no tractable algorithm has yet been found that is able to learn the optimal Lipschitz continuous critic (or a close approximation thereof).

This is due in part to the fact that if the critic is parameterized by a neural network f_w with parameters w ∈ ℝ^k, then the set of admissible parameters {w : ‖f_w‖_L ≤ 1} is highly non-convex. Thus critic learning is a non-convex optimization problem (as is generally the case in neural network learning) with non-convex constraints on the parameters. Since neural network learning is generally an unconstrained optimization problem, adding complex non-convex constraints makes learning intractable with current methods. Thus, finding an optimal Lipschitz continuous critic is a problem that can not yet be solved, leading to the second requirement:

Requirement 2 (Convex Admissible Critic Parameter Set).

Assume τ is a critic based adversarial divergence where critics are chosen from an underlying set F. Assume further that in training a critic, the critic isn't learned directly but rather a parameterization w of a function f_w is learned. The critic based adversarial divergence τ is said to fulfill Requirement 2 if the set of admissible parameters {w : f_w ∈ F} is convex.

It was reasoned in (Gulrajani et al., 2017) that since a Wasserstein critic must have gradients of norm at most 1 everywhere, a reasonable strategy would be to transform the constrained optimization into an unconstrained optimization problem by penalizing the divergence when a critic has non-unit gradients. With this strategy, the so-called Improved Wasserstein GAN or WGAN-GP divergence defined in Eq. 3 is obtained.

The generator parameters θ are updated by training an optimal critic f* and updating θ with E_{z∼ζ}[∇_θ f*(g_θ(z))]. Although this method has impressive experimental results, it is not yet ideal. (Petzka et al., 2017) showed that an optimal critic for τ_GP has undefined gradients on the support of the generated distribution ν_θ. Thus, the update direction is undefined. This naturally leads to the next requirement:

Requirement 3 (Well Defined Update Rule).

An update rule is said to fulfill Requirement 3 on a target distribution μ and a family of generated distributions (ν_θ)_θ if for every θ the update rule at μ and ν_θ is well defined.

Note that kernel methods such as (Dziugaite et al., 2015) and (Li et al., 2015) provide exciting theoretical guarantees and may well fulfill all four requirements. Since these guarantees come at a cost in scalability, we won’t analyze them further.

Method   | Req. 1 | Req. 2 | Req. 3 | Req. 4
GAN      | No     | Yes    | Yes    | No
f-GAN    | No     | Yes    | Yes    | No
WGAN     | Yes    | No     | Yes    | Yes
WGAN-GP  | Yes    | Yes    | No     | No
WGAN-LP  | Yes    | Yes    | No     | No
DRAGAN   | Yes    | Yes    | Yes    | No
PWGAN    | Yes    | Yes    | No     | No
FOGAN    | Yes    | Yes    | Yes    | Yes
Table 1: Comparing existing GAN methods with regard to the four Requirements formulated in this paper. The methods compared are the classic GAN (Goodfellow et al., 2014), f-GAN (Nowozin et al., 2016), WGAN (Arjovsky et al., 2017), WGAN-GP (Gulrajani et al., 2017), WGAN-LP (Petzka et al., 2017), DRAGAN (Kodali et al., 2017), PWGAN (our method) and FOGAN (our method).

4 Correct Update Rule Requirement

In the previous section, we stated a bare minimum requirement for an update rule (namely that it is well defined). In this section, we'll go further and explore criteria for a "good" update rule. For example, in Lemma 7 in Section A of the Appendix, it is shown that there exists a target distribution μ and a family of generated distributions ν_θ fulfilling Assumptions 1 and 2 such that for the optimal critic f* there is no γ > 0 with E_{z∼ζ}[∇_θ f*(g_θ(z))] = −γ ∇_θ τ_GP(μ‖ν_θ) for all θ, even when all terms are well defined. Thus, the update rule used in the WGAN-GP setting, although well defined for this specific μ and ν_θ, isn't guaranteed to move in the direction of steepest descent. Therefore, the question arises: what well defined update rule also moves in the direction of steepest descent?

The most obvious candidate for an update rule is to simply use the direction −∇_θ τ(μ‖ν_θ), but since in the adversarial divergence setting τ(μ‖ν_θ) is the supremum over a set of infinitely many possible critics, calculating ∇_θ τ(μ‖ν_θ) directly is generally intractable.

One strategy to address this issue is to use an envelope theorem (Milgrom & Segal, 2002). Assuming all terms are well defined, then for every optimal critic f* it holds that ∇_θ τ(μ‖ν_θ) = ∇_θ τ_{f*}(μ‖ν_θ), writing τ_f for the value of the objective in Definition 2 at a fixed critic f. This strategy is outlined in detail in (Arjovsky et al., 2017) when proving the Wasserstein GAN update rule.
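
Spelled out (our restatement, not a quote of the paper), the envelope theorem of Milgrom & Segal used here is the following; φ and V are our own symbols for the generic objective and value function:

```latex
% Envelope theorem (restated): let V(\theta) = \sup_{f \in F} \phi(f, \theta)
% and let f^* \in \arg\max_{f \in F} \phi(f, \theta_0). If V and \phi(f^*, \cdot)
% are differentiable at \theta_0, then
\nabla_\theta V(\theta_0) \;=\; \nabla_\theta \, \phi(f^*, \theta) \big|_{\theta = \theta_0}.
% Applied to an adversarial divergence, V(\theta) = \tau(\mu \| \nu_\theta) and
% \phi(f, \theta) = \tau_f(\mu \| \nu_\theta), giving
% \nabla_\theta \tau(\mu \| \nu_\theta) = \nabla_\theta \tau_{f^*}(\mu \| \nu_\theta).
```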

Yet in many GAN settings (Goodfellow et al., 2014; Arjovsky et al., 2017; Salimans et al., 2016; Petzka et al., 2017), the update rule is to train an optimal critic f* and then take a step in the direction of E_{z∼ζ}[∇_θ f*(g_θ(z))]. In the critic based adversarial divergence setting (Definition 2), a direct result of Eq. 1 together with Theorem 1 from (Milgrom & Segal, 2002) is that for every optimal critic f*

∇_θ τ(μ‖ν_θ) = ∇_θ τ_{f*}(μ‖ν_θ).     (4)

Thus, the update direction E_{z∼ζ}[∇_θ f*(g_θ(z))] only points in the direction of steepest descent for special choices of the functions defining the divergence. One such example is the Wasserstein GAN, where the critic enters the objective linearly and there is no penalty term.

Most popular GAN methods don't employ functions such that the update direction points in the direction of steepest descent. For example, with the classic GAN the critic does not enter the objective linearly, so the update direction E_{z∼ζ}[∇_θ f*(g_θ(z))] clearly is not oriented along the direction of steepest descent −∇_θ τ_G(μ‖ν_θ). The WGAN-GP is similar, since as we see in Lemma 7 in Appendix, Section A, the update direction is not generally a multiple of ∇_θ τ_GP(μ‖ν_θ).

The question arises: why is this direction used instead of directly calculating the direction of steepest descent? When using the correct update rule in Eq. 4 above, two issues, both of which involve estimating expectations over μ and ν_θ, are in the way. GAN learning happens in mini-batches, therefore ∇_θ τ_{f*}(μ‖ν_θ) isn't calculated directly, but estimated based on samples, which can lead to both variance and bias in the estimate.

To analyze these issues, we use the notation from (Bellemare et al., 2017), where x_1, …, x_m are samples from ν_θ and the empirical distribution is defined by ν̂_θ := (1/m) Σ_{i=1}^m δ_{x_i}. Further let V[·] denote the element-wise variance. Now with mini-batch learning, the expectation over ν_θ in Eq. 4 is replaced by an expectation over ν̂_θ.² Therefore, estimation on a finite mini-batch is an extra source of variance.

² The expectation over the target distribution μ does not depend on the mini-batch sampled from ν_θ; only the expectation over ν_θ depends on the sampled mini-batch.

An even more pressing issue is that of biased estimates. For many choices of the divergence there's no guarantee that the gradient estimated on a mini-batch is unbiased, i.e. that its expectation over mini-batches equals ∇_θ τ_{f*}(μ‖ν_θ). Therefore, even though Eq. 4 is correct, applying it to mini-batch learning won't generally result in correct estimates of the direction of steepest descent.
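
As a self-contained numerical illustration (ours, not from the paper) of why such estimates become biased: whenever a nonlinearity is applied after averaging over a mini-batch, the estimator's expectation shifts; squaring a mini-batch mean, for instance, overestimates (E[x])² by Var(x)/m.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=2.0, size=100_000)         # E[x] = 1, Var[x] = 4

m = 10                                                    # mini-batch size
batches = x[: (len(x) // m) * m].reshape(-1, m)

true_value = 1.0 ** 2                                     # (E[x])^2
minibatch_estimate = np.mean(batches.mean(axis=1) ** 2)   # E[(batch mean)^2]

print(true_value, minibatch_estimate)                     # ~1.0 vs ~1.4 = 1 + Var/m
```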

Our elegant solution to both these problems chooses the critic based adversarial divergence in such a way that there exists a constant γ > 0 so that for all optimal critics f* it holds

E_{z∼ζ}[∇_θ f*(g_θ(z))] = −γ ∇_θ τ(μ‖ν_θ).     (5)

Now using Eq. 5 we see that the mini-batch average (1/m) Σ_{i=1}^m ∇_θ f*(g_θ(z_i)) has expectation −γ ∇_θ τ(μ‖ν_θ), making it an unbiased (with respect to sampling from ζ), low variance update rule in the direction of steepest descent.

We’re then able to have the best of both worlds. On the one hand, since the Lipschitz condition enters only as a penalty term, training of a critic neural network can happen in an unconstrained optimization fashion like with the WGAN-GP. At the same time, assuming f* is an optimal critic, the direction of steepest descent can be obtained by calculating E_{z∼ζ}[∇_θ f*(g_θ(z))], and as in the Wasserstein GAN we get correct gradient update steps.

With this motivation, Eq. 5 forms the basis of our final requirement:

Requirement 4 (Unbiased Update Rule).

An adversarial divergence τ is said to fulfill Requirement 4 if τ is a critic based adversarial divergence and every optimal critic f* fulfills Eq. 5.

5 Penalized Wasserstein Divergence

We now attempt to find an adversarial divergence that fulfills all four requirements. We start by formulating an adversarial divergence τ_PW and a corresponding update rule that can be shown to comply with Requirements 1 and 2. Subsequently, in Section 6, τ_PW will be refined to make its update rule practical and conform to all four requirements.

The divergence is inspired by the Wasserstein distance, where for an optimal critic f* between two Dirac distributions δ_a and δ_b it holds that f*(a) − f*(b) = |a − b|. Now if we look at the penalized objective for this pair of Diracs

(6)

it's easy to calculate that the resulting optimal value coincides (in this simple setting) with the Wasserstein distance, without being a constrained optimization problem. See Figure 1 for an example.

This has another intuitive explanation: Eq. 6 can be reformulated as a tug of war between the critic objective and the squared Lipschitz penalty weighted by λ. This penalty term is important (and missing from (Gulrajani et al., 2017) and (Petzka et al., 2017)) because otherwise the slope of the optimal critic between real and generated points would depend on the distance between them.
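
As a rough sketch of this "tug of war" (ours; the precise objective is Definition 5 below), the snippet pairs real and generated samples and subtracts a squared difference-quotient penalty weighted by λ. The pairing scheme and the exact penalty form are assumptions for illustration, not the authors' definition:

```python
import torch

def penalized_critic_objective(critic, real_batch, fake_batch, lam=1.0, eps=1e-12):
    """Illustrative only: critic value gap minus a squared Lipschitz-quotient penalty.

    Mirrors the "tug of war" described in the text (objective vs. squared
    Lipschitz penalty weighted by lambda); see Definition 5 for the precise form.
    """
    f_real = critic(real_batch).squeeze()
    f_fake = critic(fake_batch).squeeze()
    dist = (real_batch - fake_batch).flatten(start_dim=1).norm(2, dim=1)

    gap = f_real - f_fake
    penalty = lam * gap.pow(2) / (dist + eps)   # squared difference quotient
    return (gap - penalty).mean()               # maximized over critics
```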

The Penalized Wasserstein Divergence is a straightforward adaptation of Eq. 6 to the multi-dimensional case.

Definition 5 (Penalized Wasserstein Divergence).

Assume μ and ν are probability measures over X and λ > 0. For a critic f ∈ C(X, ℝ), take as objective the expected critic gap between real and generated data minus λ times the squared Lipschitz penalty of Eq. 6. Define the Penalized Wasserstein Divergence τ_PW(μ‖ν) as the supremum of this objective over critics f ∈ C(X, ℝ).

This divergence is updated by picking an optimal critic f* and taking a step in the direction of E_{z∼ζ}[∇_θ f*(g_θ(z))].

This formulation is similar to the WGAN-GP (Gulrajani et al., 2017), restated here in Eq. 3.

Theorem 1.

Assume λ > 0 and that μ and ν_θ are probability measures over X fulfilling Assumptions 1 and 2. Then for every θ the Penalized Wasserstein Divergence with its corresponding update direction fulfills Requirements 1 and 2.

Further, there exists an optimal critic that fulfills Eq. 5 and thus Requirement 4.

Proof.

See Appendix, Section A. ∎

Note that this theorem isn't unique to τ_PW. For example, for the penalty in Eq. 8 of (Petzka et al., 2017) we conjecture that a similar result can be shown. The divergence τ_PW is still very useful because, as will be shown in the next section, it can be modified slightly to obtain a new divergence for which every optimal critic fulfills Eq. 5, and thus Requirements 1 to 4 hold.

Since τ_PW only constrains the value of a critic on the supports of μ and ν_θ, many different critics are optimal, and in general the update direction E_{z∼ζ}[∇_θ f*(g_θ(z))] depends on the choice of optimal critic and is thus not well defined. With this, Requirements 3 and 4 are not fulfilled. See Figure 1 for a simple example.

(a) First order critic
(b) Normal critic
Figure 1: Comparison of the update rule given different optimal critics. Consider the simple example of the divergence from Definition 5 between two Dirac measures, with the update rule from Lemma 6 in Appendix, Section A; our goal is to calculate the gradient of the divergence via this update rule. Since multiple critics are optimal for this divergence, we explore how the choice of optimal critic affects the update. In Subfigure 1(a), we chose the first order optimal critic and the update rule is correct (see how the red, black and green lines all intersect in one point). In Subfigure 1(b), the optimal critic is not a first order critic, resulting in the update rule calculating an incorrect update.

In theory, τ_PW's critic could be trained with a modified sampling procedure so that the update direction is well defined and Eq. 5 holds, as is done in both (Kodali et al., 2017) and (Unterthiner et al., 2018). By using a method similar to (Bishop et al., 1998), one can minimize the divergence to a perturbed distribution ν̃_θ, where samples from ν̃_θ are equal to g_θ(z) + ε with z sampled from ζ and ε some zero-mean uniform distributed noise. In this way the support of ν̃_θ lives in the full space X and not just the generated submanifold. Unfortunately, while this method works in theory, the number of samples required for accurate gradient estimates scales with the dimensionality of the underlying space X, not with the dimensionality of the data or generated submanifolds. In response, we propose the First Order Penalized Wasserstein Divergence.

6 First Order Penalized Wasserstein Divergence

The First Order Penalized Wasserstein Divergence is a modification of the Penalized Wasserstein Divergence where an extra gradient penalty term is added to enforce a constraint on the gradients of the optimal critic. Conveniently, as is shown in Lemma 5 in Appendix, Section A, any optimal critic for the First Order Penalized Wasserstein Divergence is also an optimal critic for the Penalized Wasserstein Divergence. The key advantage of the First Order Penalized Wasserstein Divergence is that for any μ and ν_θ fulfilling Assumptions 1 and 2, a slightly modified generated distribution ν̃_θ can be obtained on which the divergence, with its corresponding update rule, fulfills Requirements 3 and 4.

By penalizing the gradient of a critic so that the critic fulfills Eq. 5, the following definition emerges (see proof of Lemma 6 in Appendix, Section A for details).

Definition 6 (First Order Penalized Wasserstein Divergence (FOGAN)).

Assume μ and ν are probability measures over X and λ > 0. Take the Penalized Wasserstein objective from Definition 5 and add a penalty on the critic's gradients on generated data, chosen so that optimal critics fulfill Eq. 5. Define the First Order Penalized Wasserstein Divergence as the supremum of this objective over critics.

This divergence is updated by picking an optimal critic f* and taking a step in the direction of E_{z∼ζ}[∇_θ f*(g_θ(z))].

In order to define a GAN from the First Order Penalized Wasserstein Divergence, we must define a slight modification of the generated distribution ν_θ to obtain ν̃_θ. Similar to the WGAN-GP setting, samples from ν̃_θ are obtained as g_θ(z) + ε, where z ∼ ζ and ε is zero-mean noise. The difference is that the magnitude of ε is chosen quite small, making ν_θ and ν̃_θ quite similar. Therefore updates to θ that reduce the divergence to ν̃_θ also reduce the divergence to ν_θ. This modification is needed for the proof of Theorem 2.
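
A minimal sketch (ours) of this modification; the noise scale `delta` is an assumed hyperparameter, chosen small, and the zero-mean uniform noise follows the description above:

```python
import torch

def sample_q_tilde(generator, z, delta=1e-3):
    """Sample from the perturbed generated distribution: g_theta(z) plus small noise.

    `delta` is an assumed (hypothetical) noise scale; the construction only requires
    the perturbation to be small, so that updates reducing the divergence to the
    perturbed distribution also reduce it for the unperturbed one.
    """
    x = generator(z)
    noise = (torch.rand_like(x) - 0.5) * 2.0 * delta   # zero-mean uniform on [-delta, delta]
    return x + noise
```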

Theorem 2.

Assume λ > 0, that μ and ν_θ are probability measures over X fulfilling Assumptions 1 and 2, and that ν̃_θ is obtained using the modification above. Then for every θ there exists at least one optimal critic, and the First Order Penalized Wasserstein Divergence combined with its update direction fulfills Requirements 1 to 4.

Proof.

See Appendix, Section A. ∎

Note that adding a gradient penalty, other than being a necessary step for the WGAN-GP (Gulrajani et al., 2017), DRAGAN (Kodali et al., 2017) and Consensus Optimization GAN (Mescheder et al., 2017), has also been shown empirically to improve the performance of the original GAN method (Eq. 2), see (Fedus et al., 2017). In addition, using stricter assumptions on the critic, (Nagarajan & Kolter, 2017) provides a theoretical justification for the use of a gradient penalty in GAN learning. The analysis of Theorem 2 in Appendix, Section A provides a theoretical understanding of why, in the Penalized Wasserstein GAN setting, adding a gradient penalty causes E_{z∼ζ}[∇_θ f*(g_θ(z))] to be an update rule that points in the direction of steepest descent, and may well provide a pathway for other GAN methods to make similar assurances.

7 Experimental Results

7.1 CelebA

We begin by testing the FOGAN on the CelebA image generation task (Liu et al., 2015), training a generative model with the DCGAN architecture (Radford et al., 2015) and obtaining FID scores (Heusel et al., 2017) competitive with state of the art methods without doing a tuning parameter search. See Table 2 and Appendix B.1 (code: https://github.com/zalandoresearch/first_order_gan).
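
For reference, the FID score used here compares Gaussian fits to Inception activations of real and generated images; below is a minimal NumPy/SciPy sketch (ours) of the distance itself, with the Inception feature extraction omitted:

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_real, feats_fake):
    """Frechet Inception Distance between two sets of Inception features,
    each of shape (num_samples, feature_dim)."""
    mu_r, mu_f = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_f = np.cov(feats_fake, rowvar=False)

    covmean = linalg.sqrtm(cov_r @ cov_f)
    if np.iscomplexobj(covmean):          # numerical noise can produce tiny imaginary parts
        covmean = covmean.real

    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean))
```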

7.2 One Billion Word

Finally, we use the First Order Penalized Wasserstein Divergence to train a character level generative language model on the One Billion Word Benchmark (Chelba et al., 2013). In this setting, a 1D CNN deterministically transforms a latent vector into a 32 × C matrix, where C is the number of possible characters. A softmax nonlinearity is applied to this output, and the result is given to the critic. Real data is the one-hot encoding of 32 character texts sampled from the true data.
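
A short sketch (ours) of this data representation; the vocabulary size below is a placeholder: the generator emits a 32 × C matrix of logits, a softmax turns each position into a distribution over characters, and real texts are one-hot encoded before being passed to the critic:

```python
import torch
import torch.nn.functional as F

SEQ_LEN, VOCAB = 32, 96   # 32-character texts; VOCAB is a placeholder vocabulary size

def generator_output_to_critic_input(logits):
    """Generated text: softmax over characters at each of the 32 positions."""
    assert logits.shape[1:] == (SEQ_LEN, VOCAB)
    return F.softmax(logits, dim=-1)                         # shape (batch, 32, VOCAB)

def real_text_to_critic_input(char_ids):
    """Real text: one-hot encoding of 32 character indices."""
    return F.one_hot(char_ids, num_classes=VOCAB).float()    # shape (batch, 32, VOCAB)
```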

We conjecture this is an especially difficult task for GANs, since data in the target distribution lies in just a few corners of the (32 × C)-dimensional unit hypercube. As the generator is updated, it must push mass from one corner to another, passing through the interior of the hypercube far from any real data. Methods other than the Coulomb GAN (Unterthiner et al., 2018) and WGAN-GP (Gulrajani et al., 2017; Heusel et al., 2017) were not successful at this task.

We use the same setup as in both (Gulrajani et al., 2017; Heusel et al., 2017) with two differences. First, we train to minimize our divergence from Definition 6 instead of the WGAN-GP divergence. Second, we use batch normalization in the generator, both for training our FOGAN method and the benchmark WGAN-GP; we do this because batch normalization improved performance and stability of both models.

As with (Gulrajani et al., 2017; Heusel et al., 2017), we use the Jensen-Shannon divergence (JSD) between n-grams from the model and the real world distribution as an evaluation metric. The JSD is estimated by sampling a finite number of 32 character vectors and comparing the distributions of the n-grams from said samples and true data. This estimation is biased; smaller samples result in larger JSD estimates. A Bayes limit results from this bias: even when samples drawn from real world data are compared with real world data, small sample sizes result in large JSD estimates. In order to detect performance differences when training with the FOGAN and WGAN-GP, a low Bayes limit is necessary. Thus, to compare the methods, we sampled more 32 character vectors than were sampled in past works, and the JSD values reported in those papers are therefore higher than the results here.
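
A minimal sketch (ours) of this evaluation metric: collect n-gram frequencies from generated and real samples and compute the Jensen-Shannon divergence between the two empirical distributions. As noted above, the estimate is biased upward for small sample sizes:

```python
import math
from collections import Counter

def ngram_jsd(generated_texts, real_texts, n=4):
    """Jensen-Shannon divergence between empirical n-gram distributions."""
    def ngram_dist(texts):
        counts = Counter(t[i:i + n] for t in texts for i in range(len(t) - n + 1))
        total = sum(counts.values())
        return {g: c / total for g, c in counts.items()}

    p, q = ngram_dist(generated_texts), ngram_dist(real_texts)
    support = set(p) | set(q)
    m = {g: 0.5 * (p.get(g, 0.0) + q.get(g, 0.0)) for g in support}

    def kl(a, b):
        return sum(a[g] * math.log(a[g] / b[g]) for g in a if a[g] > 0)

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```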

For our experiments we trained both models in five independent runs, estimating the JSD between n-grams of generated and real world data at regular intervals during training, see Figure 2. The results are even more impressive when aligned with wall-clock time. Since in WGAN-GP training an extra point between real and generated distributions must be sampled, it is slower than FOGAN training; see Figure 2 and observe the significant drop in estimated JSD.

Figure 2: Five training runs of both WGAN-GP and FOGAN, with the average of all runs plotted in bold and the error margins denoted by shaded regions. For easy visualization, we plot the moving average of the last three n-gram JSD estimations. The first two plots both show training w.r.t. the number of training iterations; the second plot starts at iteration 50. The last plot shows training with respect to wall-clock time, starting after 6 hours of training.
Task          | BEGAN | DCGAN | Coulomb | WGAN-GP | ours
CelebA (FID)  | 28.5  | 12.5  | 9.3     | 4.2     | 6.0
4-gram (JSD)  | -     | -     | -       |         |
6-gram (JSD)  | -     | -     | -       |         |

Table 2: Evaluation of image and text generation. The CelebA row reports FID scores (lower is better); the 4-gram and 6-gram rows report n-gram JSD on the One Billion Word task (lower is better).

Acknowledgements

This work was supported by Zalando SE with Research Agreement 01/2016.

References

Appendix A Proof of Things

Proof of Theorem 1.

The proof of this theorem is split into smaller lemmas that are proven individually.

  • That τ_PW is a strict adversarial divergence which is equivalent to τ_W is proven in Lemma 4, thus showing that τ_PW fulfills Requirement 1.

  • τ_PW fulfills Requirement 2 by design.

  • The existence of an optimal critic follows directly from Lemma 3.

  • That there exists a critic that fulfills Eq. 5 follows because Lemma 3 ensures that a continuous, differentiable optimal critic exists which fulfills Eq. 9. Because Eq. 9 holds, the same reasoning as at the end of the proof of Lemma 6 can be used to show Requirement 4. ∎

We prepare by showing a few basic lemmas used in the remaining proofs.

Lemma 1 (concavity of ).

The mapping , is concave.

Proof.

The concavity of is trivial. Now consider , then

thus showing concavity of . ∎

Lemma 2 (necessary and sufficient condition for maximum).

Assume fulfill assumptions 1 and 2. Then for any it must hold that

(7)

and

(8)

Further, if and fulfills Eq. 7 and 8, then

Proof.

Since in Lemma 1 it was shown that the mapping is concave, a critic attains the supremum if and only if it is a local maximum of the objective. This is equivalent to saying that for all admissible perturbations it holds

which holds if and only if

and