Lipschitz Generative Adversarial Nets

02/15/2019 · by Zhiming Zhou, et al.

In this paper we study the convergence of generative adversarial networks (GANs) from the perspective of the informativeness of the gradient of the optimal discriminative function. We show that GANs without restriction on the discriminative function space commonly suffer from the problem that the gradient produced by the discriminator is uninformative to guide the generator. By contrast, Wasserstein GAN (WGAN), where the discriminative function is restricted to 1-Lipschitz, does not suffer from such a gradient uninformativeness problem. We further show in the paper that the model with a compact dual form of Wasserstein distance, where the Lipschitz condition is relaxed, also suffers from this issue. This implies the importance of Lipschitz condition and motivates us to study the general formulation of GANs with Lipschitz constraint, which leads to a new family of GANs that we call Lipschitz GANs (LGANs). We show that LGANs guarantee the existence and uniqueness of the optimal discriminative function as well as the existence of a unique Nash equilibrium. We prove that LGANs are generally capable of eliminating the gradient uninformativeness problem. According to our empirical analysis, LGANs are more stable and generate consistently higher quality samples compared with WGAN.


1 Introduction

Generative adversarial networks (GANs) (Goodfellow et al., 2014), as one of the most successful generative models, have shown promising results in various challenging tasks. GANs are popular and widely used, but they are notoriously hard to train (Goodfellow, 2016). The underlying obstacles, though heavily studied (Arjovsky & Bottou, 2017; Lucic et al., 2017; Heusel et al., 2017; Mescheder et al., 2017, 2018; Yadav et al., 2017), are still not fully understood.

The objective of a GAN is usually defined as, or proved equivalent to, a distance metric between the real distribution $P_r$ and the generative distribution $P_g$, which implies that $P_g = P_r$ is the unique global optimum. The nonconvergence of traditional GANs has been attributed to ill-behaving distance metrics (Arjovsky & Bottou, 2017), i.e., the distance between $P_r$ and $P_g$ remains constant when their supports are disjoint. Arjovsky et al. (2017) accordingly suggested using the Wasserstein distance, which can properly measure the distance between two distributions regardless of whether their supports are disjoint.

In this paper, we conduct a further study on the convergence of GANs from the perspective of the informativeness of the gradient of the optimal discriminative function $f^*$. We show that for GANs that place no restriction on the discriminative function space, e.g., the vanilla GAN and most of its variants, $f^*(x)$ depends only on the densities at the local point $x$ and does not reflect any information about other points in the distributions. We demonstrate that under these circumstances, the gradient of the optimal discriminative function with respect to its input, along which the generator updates generated samples, usually tells nothing about the real distribution. We refer to this phenomenon as the gradient uninformativeness; it is substantially different from the gradient vanishing and is a fundamental cause of nonconvergence in GANs.

According to the analysis of Gulrajani et al. (2017), Wasserstein GAN can avoid the gradient uninformativeness problem. Meanwhile, we show in the paper that the Lipschitz constraint in the Kantorovich-Rubinstein dual of the Wasserstein distance can be relaxed, leading to a new equivalent dual; with this new dual form, the gradient may again fail to reflect any information about how to refine $P_g$ towards $P_r$. This suggests that the Lipschitz condition is a vital element for resolving the gradient uninformativeness problem.

Motivated by the above analysis, we investigate the general formulation of GANs with Lipschitz constraint. We show that under a mild condition, penalizing the Lipschitz constant guarantees the existence and uniqueness of the optimal discriminative function as well as the existence of a unique Nash equilibrium, at which $P_g = P_r$. This leads to a new family of GANs that we call Lipschitz GANs (LGANs). We show that LGANs are generally capable of eliminating the gradient uninformativeness, in the sense that with the optimal discriminative function, the gradient for each generated sample, if nonzero, points towards some real sample. This process continues until the Nash equilibrium is reached.

The remainder of this paper is organized as follows. In Section 2, we provide some preliminaries that will be used in this paper. In Section 3, we study the gradient uninformativeness issue in detail. In Section 4, we present LGANs and their theoretical analysis. We conduct the empirical analysis in Section 5. Finally, we discuss related work in Section 6 and conclude the paper in Section 7.

2 Preliminaries

In this section we first give some notation and notions and then present a general formulation of generative adversarial networks.

2.1 Notation and Notions

Given two metric spaces $(\mathcal{X}, d_{\mathcal{X}})$ and $(\mathcal{Y}, d_{\mathcal{Y}})$, a function $f\colon \mathcal{X} \to \mathcal{Y}$ is said to be Lipschitz continuous if there exists a constant $k \geq 0$ such that

$$d_{\mathcal{Y}}\big(f(x_1), f(x_2)\big) \leq k \cdot d_{\mathcal{X}}(x_1, x_2), \quad \forall\, x_1, x_2 \in \mathcal{X}. \tag{1}$$

In this paper and in most existing GANs, the metrics $d_{\mathcal{X}}$ and $d_{\mathcal{Y}}$ are by default the Euclidean distance, which we also denote by $\|\cdot\|$. The smallest such constant $k$ is called the (best) Lipschitz constant of $f$, denoted by $\|f\|_{\mathrm{Lip}}$.

The first-order Wasserstein distance between two probability distributions $P_r$ and $P_g$ is defined as

$$W(P_r, P_g) = \inf_{\pi \in \Pi(P_r, P_g)} \mathbb{E}_{(x, y) \sim \pi}\big[\|x - y\|\big], \tag{2}$$

where $\Pi(P_r, P_g)$ denotes the set of all joint probability measures with marginals $P_r$ and $P_g$. It can be interpreted as the minimum cost of transporting the distribution $P_g$ to the distribution $P_r$. We use $\pi^*$ to denote the optimal transport plan, and let $\mathrm{supp}(P_r)$ and $\mathrm{supp}(P_g)$ denote the supports of $P_r$ and $P_g$, respectively. We say two distributions are disjoint if their supports are disjoint.
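As a concrete illustration of Eq. (2) (this example is ours, not part of the paper), the first-order Wasserstein distance between two one-dimensional empirical distributions can be computed exactly with `scipy.stats.wasserstein_distance`:

```python
# Minimal sketch (not from the paper): W(P_r, P_g) for two 1-D empirical distributions.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
x_real = rng.normal(loc=0.0, scale=1.0, size=10_000)   # samples from P_r
x_fake = rng.normal(loc=3.0, scale=1.0, size=10_000)   # samples from P_g

# In 1-D the optimal transport plan is obtained by sorting, which realizes the
# infimum in Eq. (2) for empirical measures; the result is close to |3.0 - 0.0|.
print(wasserstein_distance(x_real, x_fake))
```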

The Kantorovich-Rubinstein (KR) duality (Villani, 2008) provides a more efficient way of computing the Wasserstein distance. The duality states that

$$W(P_r, P_g) = \sup_{\|f\|_{\mathrm{Lip}} \leq 1} \; \mathbb{E}_{x \sim P_r}[f(x)] - \mathbb{E}_{x \sim P_g}[f(x)]. \tag{3}$$

The constraint in Eq. (3) implies that $f$ is Lipschitz continuous with $\|f\|_{\mathrm{Lip}} \leq 1$. Interestingly, there is a more compact dual form of the Wasserstein distance, in which the constraint is imposed only between points of the two supports:

$$W(P_r, P_g) = \sup_{f(x) - f(y) \leq \|x - y\|,\; \forall x \in \mathrm{supp}(P_r),\, y \in \mathrm{supp}(P_g)} \; \mathbb{E}_{x \sim P_r}[f(x)] - \mathbb{E}_{y \sim P_g}[f(y)]. \tag{4}$$

The proof of this dual form is given in Appendix A.5. We see that the new dual relaxes the Lipschitz continuity condition of the dual form in Eq. (3).

2.2 Generative Adversarial Networks (GANs)

Typically, GANs can be formulated as

$$\max_{f \in \mathcal{F}} \; \mathbb{E}_{x \sim P_r}\big[\phi\big(f(x)\big)\big] + \mathbb{E}_{z \sim P_z}\big[\varphi\big(f(g(z))\big)\big], \qquad \min_{g \in \mathcal{G}} \; \mathbb{E}_{z \sim P_z}\big[\psi\big(f(g(z))\big)\big], \tag{5}$$

where $P_z$ is the source distribution of the generator and $P_r$ is the target (real) distribution. The generative function $g$ learns to output samples that share the same dimension as samples from $P_r$, while the discriminative function $f$ learns to output a score indicating the authenticity of a given sample. Here $\mathcal{F}$ and $\mathcal{G}$ denote the discriminative and generative function spaces, respectively; and $\phi$, $\varphi$, $\psi\colon \mathbb{R} \to \mathbb{R}$ are loss metrics. We denote the implicit distribution of the generated samples by $P_g$.

We list the choices of $\phi$, $\varphi$ and $\psi$ for some representative GAN models in Table 1. In these GANs, the gradient that the generator receives from the discriminator with respect to (w.r.t.) a generated sample $x = g(z)$ is

$$\nabla_{x}\, \psi\big(f(x)\big) = \psi'\big(f(x)\big) \cdot \nabla_{x} f(x), \tag{6}$$

where the first term $\psi'(f(x))$ is a step-related scalar, and the second term $\nabla_{x} f(x)$ is a vector with the same dimension as $x$, which indicates the direction that the generator should follow when optimizing the generated sample $x$.

We use $f^*$ to denote the optimal discriminative function, i.e., the maximizer of the discriminator objective in Eq. (5) over $\mathcal{F}$. For further notation, we let $x \triangleq g(z)$ denote a generated sample; it then holds that $x \sim P_g$.
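To make the roles of the loss metrics concrete, the following PyTorch sketch (our illustration, not code from the paper; the networks and data are placeholders) assembles the discriminator and generator objectives of Eq. (5) for pluggable $\phi$, $\varphi$, $\psi$, and evaluates the gradient of Eq. (6), which factorizes into the scalar $\psi'(f(x))$ times the direction $\nabla_x f(x)$:

```python
# Illustrative sketch (not the paper's code): a generic GAN objective with
# pluggable loss metrics phi, varphi (discriminator) and psi (generator).
import torch
import torch.nn.functional as F

phi    = lambda t: F.logsigmoid(t)     # e.g. the vanilla GAN choice
varphi = lambda t: F.logsigmoid(-t)
psi    = lambda t: -F.logsigmoid(t)    # non-saturating generator loss

f = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1))
g = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.ReLU(), torch.nn.Linear(64, 2))

x_real = torch.randn(128, 2) + 3.0     # stand-in for samples from P_r
z = torch.randn(128, 2)                # z ~ P_z
x_fake = g(z)                          # x = g(z), so x ~ P_g

# Discriminator ascends E[phi(f(x_r))] + E[varphi(f(x_g))], cf. Eq. (5).
d_objective = phi(f(x_real)).mean() + varphi(f(x_fake.detach())).mean()

# Gradient received by a generated sample, cf. Eq. (6): psi'(f(x)) * grad_x f(x).
x = x_fake.detach().requires_grad_(True)
(grad_x,) = torch.autograd.grad(psi(f(x)).sum(), x)
print(d_objective.item(), grad_x.shape)  # the direction has the same shape as x
```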

2.3 The Gradient Vanishing

The gradient vanishing problem has typically been regarded as a key factor causing the nonconvergence of GANs: the gradient that the generator receives becomes zero when the discriminator is perfectly trained.

Goodfellow et al. (2014) addressed this problem by using an alternative objective for the generator; in effect, only the scalar term $\psi'(f(x))$ in Eq. (6) is changed. The Least-Squares GAN (Mao et al., 2016), which aims at addressing the gradient vanishing problem, also focuses on this scalar term.

Arjovsky & Bottou (2017) provided a new perspective for understanding the gradient vanishing. They argued that $P_r$ and $P_g$ are usually disjoint and that the gradient vanishing stems from the ill behavior of traditional distance metrics, i.e., the distance between $P_r$ and $P_g$ remains constant when they are disjoint. The Wasserstein distance was thus used (Arjovsky et al., 2017) as an alternative metric, which can properly measure the distance between two distributions regardless of whether they are disjoint.

3 The Gradient Uninformativeness

In this paper we focus on the gradient direction of the optimal discriminative function, i.e., $\nabla_x f^*(x)$, along which the generated sample $x$ is updated. We show that for many distance metrics, this gradient may fail to carry any useful information about $P_r$. Consequently, $P_g$ is not guaranteed to converge to $P_r$. We name this phenomenon the gradient uninformativeness and argue that it is a fundamental factor causing nonconvergence and instability in the training of traditional GANs.

The gradient uninformativeness is substantially different from the gradient vanishing. The gradient vanishing concerns the scalar term $\psi'(f^*(x))$ in Eq. (6) or the overall scale of the gradient, while the gradient uninformativeness concerns the direction of the gradient, which is determined by $\nabla_x f^*(x)$. The two issues are orthogonal, though they sometimes exist simultaneously. See Table 1 for a summary of the issues of representative GANs.

Next, we discuss the gradient uninformativeness in a taxonomy of restrictions on the discriminative function space $\mathcal{F}$. We will show that for unrestricted GANs, gradient uninformativeness commonly exists; for restricted GANs, the issue might still exist; and with the Lipschitz condition, it generally does not exist.

3.1 Unrestricted GANs

For many GAN models, there is no restriction on $\mathcal{F}$. Typical cases include the $f$-divergence-based GANs, such as the vanilla GAN (Goodfellow et al., 2014), the Least-Squares GAN (Mao et al., 2016) and $f$-GAN (Nowozin et al., 2016).

In these GANs, the value of the optimal discriminative function at each point is independent of other points and reflects only the local densities $p_r(x)$ and $p_g(x)$; for the vanilla GAN, for instance, the optimal discriminator is $\frac{p_r(x)}{p_r(x) + p_g(x)}$.

Hence, for each generated sample $x$ that is not surrounded by real samples (there exists $\epsilon > 0$ such that for all $x'$ with $\|x' - x\| < \epsilon$, it holds that $p_r(x') = 0$), $f^*$ in the surrounding of $x$ contains no information about $P_r$. Thus $\nabla_x f^*(x)$, the gradient that $x$ receives from the optimal discriminative function, does not reflect any information about $P_r$.

A typical situation is that $P_r$ and $P_g$ are disjoint, which is common in practice according to Arjovsky & Bottou (2017). To further distinguish the gradient uninformativeness from the gradient vanishing, we consider an ideal case: $\mathrm{supp}(P_r)$ and $\mathrm{supp}(P_g)$ totally overlap and both consist of discrete points, but the probability masses of the two distributions over these points differ. In this case, $\nabla_x f^*(x)$ for each generated sample is still uninformative, but the gradient does not vanish.
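The following NumPy sketch (our illustration; it assumes the classical result that the unrestricted optimal discriminator of the vanilla GAN is $p_r(x)/(p_r(x)+p_g(x))$) makes the point numerically: around a fake mode that is far from $\mathrm{supp}(P_r)$, the optimal discriminator is flat, so its gradient carries no information about where the real data lies.

```python
# Sketch (not from the paper): the pointwise optimum p_r(x) / (p_r(x) + p_g(x))
# for two far-apart 1-D Gaussians, and its numerical gradient near the fake mode.
import numpy as np
from scipy.stats import norm

xs = np.linspace(-10.0, 10.0, 4001)
p_r = norm.pdf(xs, loc=5.0, scale=0.1)     # real data concentrated near +5
p_g = norm.pdf(xs, loc=-5.0, scale=0.1)    # fake data concentrated near -5

d_star = p_r / (p_r + p_g + 1e-300)        # unrestricted optimal discriminator
grad = np.gradient(d_star, xs)             # numerical gradient w.r.t. x

near_fake = np.abs(xs + 5.0) < 0.3         # neighborhood of the fake mode
print(np.max(np.abs(grad[near_fake])))     # ~0: no hint about the direction of P_r
```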

3.2 Restricted GANs: Fisher GAN as an Instance

Some GANs impose restrictions on $\mathcal{F}$. Typical instances are the Integral Probability Metric (IPM) based GANs (Mroueh & Sercu, 2017; Mroueh et al., 2017; Bellemare et al., 2017) and the Wasserstein GAN (Arjovsky et al., 2017). We next show that GANs with restrictions on $\mathcal{F}$ might also suffer from the gradient uninformativeness.

The optimal discriminative function of the $\mu$-Fisher IPM, the generalized objective of the Fisher GAN (Mroueh et al., 2017), has the following form:

$$f^*(x) = \frac{1}{\lambda} \cdot \frac{p_r(x) - p_g(x)}{\mu(x)}, \tag{7}$$

where $\mu$ is a distribution whose support covers $\mathrm{supp}(P_r)$ and $\mathrm{supp}(P_g)$, and $\lambda$ is a constant. It can be observed that the $\mu$-Fisher IPM also defines $f^*$ at each point according to the local densities and does not reflect information about other locations. Similarly to the above, we can conclude that for each generated sample that is not surrounded by real samples, $\nabla_x f^*(x)$ is uninformative.

3.3 The Wasserstein GAN

As shown by Gulrajani et al. (2017), the gradient of the optimal discriminative function in the KR dual form of the Wasserstein distance has the following property:

Proposition 1.

Let $\pi^*$ be the optimal transport plan in Eq. (2), and let $x_t = t\, x_r + (1 - t)\, x_g$ with $0 \leq t \leq 1$ for $(x_r, x_g) \sim \pi^*$. If the optimal discriminative function $f^*$ in Eq. (3) is differentiable and $\pi^*(x_r = x_g) = 0$, then it holds that

$$\mathbb{P}_{(x_r, x_g) \sim \pi^*}\!\left[\nabla_{x_t} f^*(x_t) = \frac{x_r - x_t}{\|x_r - x_t\|}\right] = 1. \tag{8}$$

This proposition indicates: (i) for each generated sample $x_g$, there exists a real sample $x_r$ such that $\nabla_{x_t} f^*(x_t) = \frac{x_r - x_t}{\|x_r - x_t\|}$ for all linear interpolations $x_t$ between $x_g$ and $x_r$, i.e., the gradient at any $x_t$ points towards the real sample $x_r$; (ii) these pairs match the optimal coupling in the optimal transport perspective. It implies that WGAN is able to overcome the gradient uninformativeness as well as the gradient vanishing.

Our concern now turns to the reason why WGAN can avoid gradient uninformativeness. To address this question, we alternatively apply the compact dual of the Wasserstein distance in Eq. (4) and study its optimal discriminative function.

Since there is generally no closed-form solution for $f^*$ in Eq. (4), we take an illustrative example, though the conclusion is general. Let $z$ be a uniform random variable on the interval $[0, 1]$, let $P_r$ be the distribution of $(0, z)$ in $\mathbb{R}^2$, and let $P_g$ be the distribution of $(\theta, z)$ in $\mathbb{R}^2$ with $\theta > 0$. According to Eq. (4), an optimal $f^*$ is

$$f^*(x) = \begin{cases} \theta, & x \in \mathrm{supp}(P_r), \\ 0, & x \in \mathrm{supp}(P_g), \\ \text{arbitrary}, & \text{otherwise}. \end{cases} \tag{9}$$

Though having the constraint "$f(x) - f(y) \leq \|x - y\|$ for all $x \in \mathrm{supp}(P_r)$ and $y \in \mathrm{supp}(P_g)$", the Wasserstein distance in this dual form still only defines the values of $f^*$ on $\mathrm{supp}(P_r)$ and $\mathrm{supp}(P_g)$. For each generated sample $x$ that is isolated or lies at the boundary of the supports (there does not exist $\epsilon > 0$ such that $x' \in \mathrm{supp}(P_r) \cup \mathrm{supp}(P_g)$ holds for all $x'$ with $\|x' - x\| < \epsilon$), the gradient of $f^*$ is theoretically undefined and thus cannot provide useful information about $P_r$. More extremely, we can consider the case where the supports consist of isolated points.

These examples imply that the Lipschitz condition is critical for resolving the gradient uninformativeness problem. Motivated by this, we study the general formulation of GANs with Lipschitz constraint, which leads to a family of more general GANs that we call Lipschitz GANs. We will see that in Lipschitz GANs, the similarity measure between $P_r$ and $P_g$ might not be a Wasserstein distance, yet they still perform very well.

4 Lipschitz GANs

Lipschitz continuity has recently become popular in GANs. It has been observed that introducing Lipschitz continuity as a regularization of the discriminator leads to improved stability and sample quality (Arjovsky et al., 2017; Kodali et al., 2017; Fedus et al., 2017; Miyato et al., 2018; Qi, 2017).

In this paper, we investigate the general formulation of GANs with Lipschitz constraint, where the Lipschitz constant of the discriminative function is penalized via a quadratic loss, and theoretically analyze the properties of such GANs. In particular, we define the Lipschitz Generative Adversarial Nets (LGANs) as:

$$\max_{f \in \mathcal{F}} \; \mathbb{E}_{x \sim P_r}\big[\phi\big(f(x)\big)\big] + \mathbb{E}_{x \sim P_g}\big[\varphi\big(f(x)\big)\big] - \lambda\, \|f\|_{\mathrm{Lip}}^2, \qquad \min_{g \in \mathcal{G}} \; \mathbb{E}_{x \sim P_g}\big[\psi\big(f(x)\big)\big]. \tag{10}$$

In this work, we further assume that the loss functions $\phi$ and $\varphi$ satisfy the following conditions:

$$\phi'(t) > 0 \quad \text{and} \quad \varphi'(t) < 0, \quad \forall\, t, \tag{11}$$

i.e., $\phi$ is strictly increasing and $\varphi$ is strictly decreasing. These assumptions on the losses $\phi$ and $\varphi$ are very mild. Note that WGAN uses $\phi(t) = t$ and $\varphi(t) = -t$, which satisfies Eq. (11), and there are many other instances, such as the logistic losses of the vanilla GAN and exponential losses. Meanwhile, there also exist losses used in GANs that do not satisfy Eq. (11), e.g., the quadratic loss (Mao et al., 2016) and the hinge loss (Zhao et al., 2016; Lim & Ye, 2017; Miyato et al., 2018).

To devise a loss for LGANs, it is practical to let $\phi$ be an increasing function with non-decreasing derivative and set $\varphi(t) = \phi(-t)$. Moreover, linear combinations of such losses still satisfy Eq. (11). Figure 15 illustrates some of these loss metrics.
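For concreteness, the snippet below (our illustration; it is not the paper's exact list of instances) collects a few $(\phi, \varphi, \psi)$ triples that satisfy the strict monotonicity required by Eq. (11), and checks the sign of the derivatives numerically:

```python
# Sketch (illustrative choices, not the paper's exact list): loss triples with
# phi strictly increasing and varphi strictly decreasing, as in Eq. (11).
import torch
import torch.nn.functional as F

LOSSES = {
    # WGAN: identity losses with constant derivatives.
    "wgan":     (lambda t: t,               lambda t: -t,               lambda t: -t),
    # Logistic (vanilla GAN) losses with the non-saturating generator loss.
    "logistic": (lambda t: F.logsigmoid(t), lambda t: F.logsigmoid(-t), lambda t: -F.logsigmoid(t)),
    # Exponential losses, also strictly monotone.
    "exp":      (lambda t: -torch.exp(-t),  lambda t: -torch.exp(t),    lambda t: torch.exp(-t)),
}

t = torch.linspace(-3.0, 3.0, 7, requires_grad=True)
for name, (phi, varphi, _) in LOSSES.items():
    (d_phi,) = torch.autograd.grad(phi(t).sum(), t)
    (d_varphi,) = torch.autograd.grad(varphi(t).sum(), t)
    assert (d_phi > 0).all() and (d_varphi < 0).all(), name  # Eq. (11) holds on the grid
```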

Note that the choice $\phi(t) = \varphi(-t) = \log \sigma(t)$, with $\sigma$ the sigmoid function, corresponds to the objective of the vanilla GAN. As we have shown, the vanilla GAN suffers from the gradient uninformativeness problem. However, as we will show next, when the Lipschitz regularization is imposed, the resulting model, as a specific case of LGANs, behaves very well.

4.1 Theoretical Analysis

We now present the theoretical analysis of LGANs. First, we consider the existence and uniqueness of the optimal discriminative function.

Theorem 1.

Under Assumption (11), if $\phi$ or $\varphi$ is strictly convex, the optimal discriminative function of Eq. (10) exists and is unique.

Note that although WGAN does not satisfy the condition in Theorem 1, its solution still exists but is not unique. Specifically, if $f^*$ is an optimal solution, then $f^* + c$ for any constant $c$ is also an optimal solution. The following theorems can be regarded as a generalization of Proposition 1 to LGANs.

Theorem 2.

Assume Eq. (11) holds, $\lambda > 0$, and the optimal discriminator $f^*$ exists and is smooth. Let $k$ denote the Lipschitz constant of $f^*$. We have

  (a) For all $x$, if it holds that $\nabla_x f^*(x) \neq 0$, then there exists $y$ with $y \neq x$ such that $|f^*(x) - f^*(y)| = k \cdot \|x - y\|$;

  (b) For all $x \in \mathrm{supp}(P_g) \setminus \mathrm{supp}(P_r)$, there exists $y$ with $y \neq x$ such that $f^*(y) - f^*(x) = k \cdot \|x - y\|$;

  (c) If $\mathrm{supp}(P_r) = \mathrm{supp}(P_g)$ and $P_r \neq P_g$, then there exists a pair $(x, y)$ with both points in $\mathrm{supp}(P_r)$ and $\mathrm{supp}(P_g)$ such that $f^*(x) - f^*(y) = k \cdot \|x - y\|$ and $x \neq y$;

  (d) There is a unique Nash equilibrium between $f$ and $g$ under the objective of Eq. (10), where it holds that $P_g = P_r$ and $f^*$ is a constant function.

The proof is given in Appendix A.2. This theorem states the basic properties of LGANs, including the existence of a unique Nash equilibrium where $P_g = P_r$ and the existence of bounding relationships in the optimal discriminative function (i.e., pairs $(x, y)$ such that $|f^*(x) - f^*(y)| = k \cdot \|x - y\|$, where $k$ is the Lipschitz constant of $f^*$). The former ensures that the objective is a well-defined distance metric, and the latter, as we will show next, eliminates the gradient uninformativeness problem.

It is worth noticing that the penalty on the Lipschitz constant $k$ is in fact necessary for Property-(c) and Property-(d), because a nonconstant $f$ could otherwise remain optimal even when $P_g = P_r$. Minimizing $k$ guarantees that the only Nash equilibrium is achieved when $P_g = P_r$. In WGAN, minimizing $k$ is not necessary. However, if $k$ is not minimized towards zero, $\nabla_x f^*(x)$ is not guaranteed to be zero at the convergence state, where any function subject to the $1$-Lipschitz constraint is an optimal $f^*$ in WGAN. This implies that minimizing $k$ also benefits WGAN.

4.2 Refining the Bounding Relationship

From Theorem 2, we know that for any point $x$, as long as $f^*$ does not have a zero gradient at $x$, $x$ must be bounded by another point $y$ in the sense that $|f^*(x) - f^*(y)| = k \cdot \|x - y\|$. We further clarify that when there is a bounding relationship, it must involve both real sample(s) and fake sample(s). More formally, we have

Theorem 3.

Under the conditions in Theorem 2, we have

  1) For any $x \in \mathrm{supp}(P_g)$, if $\nabla_x f^*(x) \neq 0$, then there must exist some $y \in \mathrm{supp}(P_r)$ with $y \neq x$ such that $f^*(y) - f^*(x) = k \cdot \|x - y\|$ and $\nabla_y f^*(y) \neq 0$;

  2) For any $y \in \mathrm{supp}(P_r)$, if $\nabla_y f^*(y) \neq 0$, then there must exist some $x \in \mathrm{supp}(P_g)$ with $x \neq y$ such that $f^*(y) - f^*(x) = k \cdot \|x - y\|$ and $\nabla_x f^*(x) \neq 0$.

The intuition behind the above theorem is that samples from the same distribution (e.g., the fake samples) will not bound each other in a way that violates the optimality of $f^*$. So, when there is a strict bounding relationship (i.e., one that involves points holding nonzero gradients), it must involve both real and fake samples. It is worth noticing that, except in the overlapping case, all fake samples and all real samples hold nonzero gradients of $f^*$.

Note that several real and fake samples might bound each other. Under the Lipschitz continuity condition, the bounding relationship on the value surface of $f^*$ is the basic building block that connects $\mathrm{supp}(P_r)$ and $\mathrm{supp}(P_g)$, and each fake sample $x$ with $\nabla_x f^*(x) \neq 0$ lies in at least one of these bounding relationships. Next we will further interpret the implication of the bounding relationship and show that it guarantees a meaningful $\nabla_x f^*(x)$ for all involved points.

4.3 The Implication of Bounding Relationship

Recall that Proposition 1 states that the gradient of $f^*$ at interpolations between coupled samples points towards the real sample. We next show that this is actually a direct consequence of the bounding relationship between the two samples. We formally state it as follows:

Theorem 4.

Assume the function $f$ is differentiable and its Lipschitz constant is $k$. Then for all $x$ and $y$ which satisfy $f(x) - f(y) = k \cdot \|x - y\|$ and $x \neq y$, we have $\nabla_{x_t} f(x_t) = k \cdot \frac{x - x_t}{\|x - x_t\|}$ for all $x_t = t\, x + (1 - t)\, y$ with $0 < t < 1$.

In other words, if two points $x$ and $y$ bound each other in terms of $f$, there is a straight line between $x$ and $y$ on the value surface of $f$. Any point on this line holds the maximum gradient slope $k$, and the gradient direction at any point on this line points along the segment, towards the endpoint with the larger value of $f$. The proof is provided in Appendix A.4.
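Theorem 4 is easy to sanity-check numerically; the sketch below (our toy example, not from the paper) uses $f(x) = -\|x - a\|$, which is 1-Lipschitz and attains $f(a) - f(b) = \|a - b\|$ for any $b$, and verifies that along the segment the gradient has unit norm and points towards the higher-valued endpoint $a$:

```python
# Numerical check of Theorem 4 on a toy 1-Lipschitz function (our example).
import torch

a = torch.tensor([1.0, 2.0])
b = torch.tensor([-3.0, 0.5])

def f(x):                       # f(x) = -||x - a||, Lipschitz constant k = 1
    return -(x - a).norm()

for t in (0.25, 0.5, 0.75):
    x_t = (t * a + (1.0 - t) * b).requires_grad_(True)
    (grad,) = torch.autograd.grad(f(x_t), x_t)
    towards_a = (a - x_t) / (a - x_t).norm()           # unit vector pointing at a
    assert torch.allclose(grad, towards_a, atol=1e-5)  # slope k = 1, towards a
```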

Combining Theorems 2 and 3, we can conclude that when $P_r$ and $P_g$ are disjoint, the gradient $\nabla_x f^*(x)$ for each generated sample $x$ points towards some real sample $y$, which guarantees that gradient-based updating pulls $x$ towards $y$ at every step.

In fact, Theorem 2 provides a further guarantee on the convergence. Property-(b) implies that for any generated sample $x$ that does not lie in $\mathrm{supp}(P_r)$, its gradient must point towards some real sample. And in the fully overlapped case, according to Property-(c), unless $P_g = P_r$, there must exist at least one pair $(x, y)$ in a strict bounding relationship, and the gradient pulls $x$ towards $y$. Finally, Property-(d) guarantees that the only Nash equilibrium is where $P_g = P_r$ and the gradient vanishes for all generated samples.

5 Empirical Analysis

In this section, we empirically study the gradient uninformativeness problem and the performance of various objectives of Lipschitz GANs. The anonymous code is provided in the supplemental material.

Figure 1: Practical behaviors of gradient uninformativeness: noisy gradients; local greedy gradients lead to mode collapse. (a) Disjoint case; (b) overlapping case; (c) mode collapse.

5.1 Gradient Uninformativeness in Practice

According to our analysis, $\nabla_x f^*(x)$ is uninformative for most traditional GANs. Here we investigate the practical behaviors of the gradient uninformativeness. Note that the behaviors of GANs without restriction on $\mathcal{F}$ are essentially identical. We choose the Least-Squares GAN, whose $f^*$ is relatively simple, as the representative and study it with a set of synthetic experiments, which benefits visualization.

The results are shown in Figure 1. We find that the gradient is very random, which we believe is the typical practical behavior of the gradient uninformativeness. Given the nondeterministic property of $f^*$ for points outside the supports, the learned gradient is highly sensitive to the hyper-parameters. We conduct the same experiments with a set of different hyper-parameters; the remaining results are provided in Appendix B.6.

In Section 3, we discussed the gradient uninformativeness under the circumstance that a fake sample is not surrounded by real samples. Actually, the problem of $f^*$ in traditional GANs is more general and can also be regarded as gradient uninformativeness. For example, in the case of Figure 1(b), where the real and fake samples are both evenly distributed in two regions with different densities, $f^*$ is constant in each region and undefined outside. The gradient is theoretically zero at inner points and undefined at boundary points; in practice it also behaves as a noisy gradient. We note that in the totally overlapping and continuous case, $\nabla_x f^*(x)$ is also ill-behaved, which seems to be an intrinsic cause of mode collapse, as illustrated in Figure 1(c), where $P_r$ and $P_g$ are both devised to be Gaussian(s). See more details in Appendix B.4.

5.2 Verifying $\nabla_x f^*(x)$ of LGANs

Figure 2: $\nabla_x f^*(x)$ in LGANs points towards real samples (panels (a)–(d): the four tested objectives).
Figure 3: $\nabla_x f^*(x)$ gradation with CIFAR-10.

One important theoretical benefit of LGANs is that $\nabla_x f^*(x)$ for each generated sample is guaranteed to point towards some real sample. Here we verify the gradient direction of $f^*$ with a set of $\phi$ and $\varphi$ that satisfy Eq. (11).

The tested objectives include four instances of $(\phi, \varphi)$ satisfying Eq. (11), corresponding to panels (a)–(d) of Figure 2. They are tested in two scenarios: two-dimensional toy data and real-world high-dimensional data. In the two-dimensional case, $P_r$ consists of two Gaussians and $P_g$ is fixed as one Gaussian close to one of the two real Gaussians, as illustrated in Figure 2. For the latter case, we use the CIFAR-10 training set. To make solving $f^*$ feasible, we use ten CIFAR-10 images as $P_r$ and ten fixed noise images as $P_g$. Note that we fix $P_g$ on purpose: to verify the direction of $\nabla_x f^*(x)$, learning $g$ is not necessary.

The results are shown in Figures 2 and 3, respectively. In Figure 2, we can see that the gradient of each generated sample points towards some real sample. For the high-dimensional case, visualizing the gradient direction is nontrivial; hence, we plot the gradient and the corresponding increments. In Figure 3, the leftmost image in each row is a sample from $P_g$ and the second is its gradient $\nabla_x f^*(x)$. The interior images are the sample plus the gradient scaled by increasing step sizes, and the rightmost is the nearest real sample from $P_r$. This result visually demonstrates that the gradient of a generated sample points towards a real sample. Note that the final results in Figure 3 remain almost identical when varying the loss metrics $\phi$ and $\varphi$ within the family of LGANs.
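The gradient-direction check in this subsection only requires differentiating the (approximately optimal) discriminator with respect to its inputs. A minimal PyTorch sketch of that step (our illustration; `f` stands for any trained discriminator network) is:

```python
# Sketch (not the paper's code): grad_x f(x) for a batch of generated samples,
# e.g. to draw the arrows of Figure 2 or the gradient images of Figure 3.
import torch

def input_gradients(f: torch.nn.Module, x_fake: torch.Tensor) -> torch.Tensor:
    """Return grad_x f(x) for every sample in x_fake (same shape as x_fake)."""
    x = x_fake.detach().clone().requires_grad_(True)
    (grad,) = torch.autograd.grad(f(x).sum(), x)   # sum keeps per-sample gradients
    return grad

# Toy 2-D usage; normalized gradients can be plotted with matplotlib's quiver
# to reproduce a Figure-2-style visualization.
f = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1))
x_fake = torch.randn(256, 2)
grads = input_gradients(f, x_fake)
directions = grads / (grads.norm(dim=1, keepdim=True) + 1e-12)
```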

5.3 Stabilized Discriminative Functions

The Wasserstein distance is a very special case among objectives under the Lipschitz constraint: it is the only case where both $\phi$ and $\varphi$ have constant derivatives. As a result, $f^*$ under the Wasserstein distance has a free offset, i.e., given an optimal $f^*$, $f^* + c$ with any constant $c$ is also optimal. In practice, this behaves as oscillations in $f$ during training. The oscillations affect the practical performance of WGAN; Karras et al. (2017) and Adler & Lunz (2018) introduced regularization of the discriminative function to prevent drifting during training. By contrast, any other instance of LGANs does not have this problem. We illustrate the practical difference in Figure 5.

5.4 Max Gradient Penalty (MaxGP)

LGANs impose a penalty on the Lipschitz constant of the discriminative function. There are works that investigate different implementations of Lipschitz continuity in GANs, such as the gradient penalty (GP) (Gulrajani et al., 2017), the Lipschitz penalty (LP) (Petzka et al., 2017) and spectral normalization (SN) (Miyato et al., 2018). However, these existing regularization methods do not directly penalize the Lipschitz constant. According to Adler & Lunz (2018), the Lipschitz constant is equivalent to the maximum norm of the gradient $\nabla_x f(x)$. Both GP and LP penalize all gradients whose norms are larger than the given target Lipschitz constant. SN directly restricts the Lipschitz constant by normalizing the network weights with their largest singular values. However, it is currently unclear how to effectively penalize the Lipschitz constant with SN.

To directly penalize the Lipschitz constant, we approximate $\|f\|_{\mathrm{Lip}}$ in Eq. (10) with the maximum gradient norm over a batch of samples $\hat{x}$:

$$\|f\|_{\mathrm{Lip}} \approx \max_{\hat{x}} \|\nabla_{\hat{x}} f(\hat{x})\|. \tag{12}$$

Practically, we follow Gulrajani et al. (2017) and sample $\hat{x}$ as random interpolations of real and fake samples. We provide more details of this algorithm (MaxGP) in Appendix C.
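A minimal PyTorch sketch of the MaxGP penalty described above (our reading of Eq. (12); the interpolation scheme follows Gulrajani et al. (2017), but the exact implementation in the paper's Appendix C may differ):

```python
# Sketch of MaxGP (our reconstruction; details may differ from Appendix C):
# penalize the squared maximum gradient norm over interpolated samples as an
# estimate of the squared Lipschitz constant in Eq. (10).
import torch

def max_gradient_penalty(f, x_real, x_fake):
    """Return (max_i ||grad_{x_i} f(x_i)||)^2 over random interpolations x_i."""
    eps_shape = (x_real.size(0),) + (1,) * (x_real.dim() - 1)
    eps = torch.rand(eps_shape, device=x_real.device)
    x_hat = (eps * x_real + (1.0 - eps) * x_fake).detach().requires_grad_(True)
    (grad,) = torch.autograd.grad(f(x_hat).sum(), x_hat, create_graph=True)
    grad_norm = grad.flatten(start_dim=1).norm(dim=1)
    return grad_norm.max() ** 2      # only the largest sampled norm is penalized

# Usage inside the discriminator step (lambda_gp is the penalty weight):
#   d_loss = -(phi(f(x_real)).mean() + varphi(f(x_fake)).mean()) \
#            + lambda_gp * max_gradient_penalty(f, x_real, x_fake)
```

Unlike GP and LP, which touch every sampled gradient whose norm exceeds the target, this term penalizes only the largest sampled norm, making it a direct (if noisy) estimate of the Lipschitz constant.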

According to our experiments, MaxGP is in practice usually comparable with GP and LP. However, in some of our synthetic experiments, we find that MaxGP is able to achieve the optimal discriminative function while GP and LP fail, e.g., the problem of solving $f^*$ in Figure 3. Also, in some real-data experiments, we find that training with GP or LP diverges while it converges if we switch to MaxGP, e.g., the training with certain loss metrics.

5.5 Benchmark with Unsupervised Image Generation

Table 2: Quantitative comparisons on unsupervised image generation in terms of IS and FID on CIFAR-10 and Tiny ImageNet.

Figure 4: Training curves on CIFAR-10.
Figure 5: $f$ in LGANs is more stable. Left: WGAN. Right: LGANs.
Figure 6: Training curves on Tiny ImageNet.

To quantitatively compare the performance of different objectives under the Lipschitz constraint, we test them on unsupervised image generation tasks. In this part of the experiments, we also include the hinge loss and the quadratic loss (Mao et al., 2016), which do not fit the assumption of strict monotonicity. For the quadratic loss, we use a fixed setting of its target values. To make the comparison simple, we fix $\psi$ in the objective of the generator across all objectives, and we use a fixed $\lambda$ in the experiments.

The strict monotonicity assumption on $\phi$ and $\varphi$ is critical in Theorem 2 for theoretically guaranteeing the existence of bounding relationships for arbitrary data. But if we further assume that $\mathrm{supp}(P_r)$ and $\mathrm{supp}(P_g)$ are bounded, it is possible that there exists a suitable $\lambda$ such that all real and fake samples lie in a strictly monotone region of $\phi$ and $\varphi$; for the hinge loss, this would mean that $f(x) < 1$ for all real samples and $f(x) > -1$ for all fake samples.

The results in terms of Inception Score (IS) (Salimans et al., 2016) and Fréchet Inception Distance (FID) (Heusel et al., 2017) are presented in Table 2. For all experiments, we adopt the network structures and hyper-parameter settings from Gulrajani et al. (2017), with WGAN-GP in our implementation serving as the baseline on CIFAR-10. We use MaxGP for all experiments and search for the best $\lambda$. We train with a large number of iterations for better convergence and evaluate IS and FID with a large number of samples for preferable stability. We note that IS is remarkably unstable during training and across different initializations; by contrast, FID is fairly stable.

From Table 2, we can see that LGANs generally work better than WGAN. Different LGANs have relatively similar final results, with a few of the objectives achieving the best performance. The hinge loss and the quadratic loss with a suitable $\lambda$ also turn out to work quite well. We plot the training curves in terms of FID in Figures 4 and 6. Due to the page limitation, we leave more results and details in Appendix D.

6 Related Work

WGAN (Arjovsky et al., 2017) based on the KR dual does not suffer from the gradient uninformativeness problem. We have shown that the Lipschitz constraint in the KR dual of the Wasserstein distance can be relaxed. With the new dual form, the resulting model suffers from the gradient uninformativeness problem.

We have shown that the Lipschitz constraint is able to ensure convergence for a family of GAN objectives that is not limited to the Wasserstein distance. For example, Lipschitz continuity has also been introduced to the vanilla GAN (Miyato et al., 2018; Kodali et al., 2017; Fedus et al., 2017), achieving improvements in the quality of generated samples. As a matter of fact, the vanilla GAN objective is a special case of our LGANs; thus our analysis explains why and how it works. Farnia & Tse (2018) also provide some analysis of how $f$-divergences behave when combined with the Lipschitz constraint. However, their analysis is limited to symmetric $f$-divergences.

Fedus et al. (2017) also argued that divergence is not the primary guide of the training of GANs. However, they held that the vanilla GAN with a non-saturating generator objective somehow works. According to our analysis, given the optimal $f^*$, the vanilla GAN has no guarantee on its convergence. Unterthiner et al. (2017) provided some arguments on the unreliability of $\nabla_x f^*(x)$ in traditional GANs, which motivates their proposal of Coulomb GAN. However, their arguments are not thorough. By contrast, we identify the gradient uninformativeness problem and link it to the restrictions on $\mathcal{F}$. Moreover, we accordingly propose a new solution, i.e., Lipschitz GANs.

Some works study the convergence of GANs with a suboptimal discriminator (Mescheder et al., 2017, 2018; Arora et al., 2017; Liu et al., 2017; Farnia & Tse, 2018), which is another important direction for theoretically understanding GANs. Although the behaviors of a suboptimal $f$ can be different, we think the optimal $f^*$ should be well-behaved in the first place, e.g., providing informative gradients and a stable Nash equilibrium. Researchers have found that applying the Lipschitz continuity condition to the generator also benefits the quality of generated samples (Zhang et al., 2018; Odena et al., 2018). Qi (2017) provided a thorough study of GANs under a Lipschitz density assumption on the data distribution.

7 Conclusion

In this paper we have studied one fundamental cause of failure in the training of GANs, i.e., the gradient uninformativeness issue. In particular, for generated samples that are not surrounded by real samples, the gradients of the optimal discriminative function tell nothing about $P_r$. That is, in a sense, there is no guarantee that $P_g$ will converge to $P_r$. A typical case is that $P_r$ and $P_g$ are disjoint, which is common in practice. The gradient uninformativeness is common for unrestricted GANs and also appears in restricted GANs.

To address the nonconvergence problem caused by uninformative gradients, we have proposed LGANs and shown that they make $\nabla_x f^*(x)$ informative in the sense that the gradient for each generated sample points towards some real sample. We have also shown that in LGANs the optimal discriminative function exists and is unique, and the only Nash equilibrium is achieved when $P_g = P_r$. Our experiments show that LGANs lead to more stable discriminative functions and achieve higher sample quality.

References

Appendix A Proofs

A.1 Proof of Theorem 1

Let $x$ and $y$ be two random vectors such that $x \sim P_r$ and $y \sim P_g$. Let $k$ denote the Lipschitz constant of $f$. Let $\mathrm{supp}(P_r)$ and $\mathrm{supp}(P_g)$ denote the supports of $P_r$ and $P_g$, respectively. Let $W(P_r, P_g)$ denote the first-order Wasserstein distance between $P_r$ and $P_g$.

Lemma 1.

Let and be two convex functions, whose domains are both . Assume is subject to . If there is such that , then we have a lower bound for .

Proof.

Given that are convex functions, we have

(13)

Therefore, we get the lower bound. ∎

Lemma 2.

Let and be two convex functions, whose domains are both . Assume is subject to .

  • If there exists such that , then we have: if , then ;

  • If there exists such that , then we have: if , then .

Proof.

Since are convex functions, we have

(14)

Thus, if , then . And we can prove the other case symmetrically. ∎

Lemma 3.

Let and be two convex functions, whose domains are both . If and satisfy the following properties:

  • ;

  • There exist such that .

Then we have