Dualing GANs

Generative adversarial nets (GANs) are a promising technique for modeling a distribution from samples. It is however well known that GAN training suffers from instability due to the nature of its maximin formulation. In this paper, we explore ways to tackle the instability problem by dualizing the discriminator. We start from linear discriminators in which case conjugate duality provides a mechanism to reformulate the saddle point objective into a maximization problem, such that both the generator and the discriminator of this 'dualing GAN' act in concert. We then demonstrate how to extend this intuition to non-linear formulations. For GANs with linear discriminators our approach is able to remove the instability in training, while for GANs with nonlinear discriminators our approach provides an alternative to the commonly used GAN training algorithm.


1 Introduction

Generative adversarial nets (GANs) GoodfellowARXIV2014 are, alongside other approaches like variational auto-encoders KingmaARXIV2013 and auto-regressive models vandenOordARXIV2016 , a promising technique for modeling a distribution from samples. A lot of empirical evidence shows that GANs are able to learn to generate images of good visual quality at unprecedented resolution ZhangARXIV2016 ; RadfordARXIV2015 , and recently there has been a lot of research interest in GANs, to better understand their properties and the training process.

Training GANs can be viewed as a duel between a discriminator and a generator. Both players are instantiated as deep nets. The generator is required to produce realistic-looking samples that cannot be differentiated from real data by the discriminator. In turn, the discriminator does as good a job as possible to tell the samples apart from real data. Due to the complexity of the optimization problem, training GANs is notoriously hard, and usually suffers from problems such as mode collapse, vanishing gradient, and divergence. The training procedures are very unstable and sensitive to hyper-parameters. A number of techniques have been proposed to address these issues, some empirically justified RadfordARXIV2015 ; SalimansARXIV2016 , and some more theoretically motivated MetzARXIV2016 ; ArjovskyARXIV2017 ; NowozinNIPS2016 ; ZhaoICLR2017 .

This tremendous amount of recent work, together with the wide variety of heuristics applied by practitioners, indicates that many questions regarding the properties of GANs are still unanswered. In this work we provide another perspective on the properties of GANs, aiming toward better training algorithms in some cases. Our study in this paper is motivated by the alternating gradient update between discriminator and generator employed during training of GANs. This form of update is one source of instability, and it is known to diverge even for some simple problems SalimansARXIV2016 . Ideally, when the discriminator is optimized to optimality, the GAN objective is a deterministic function of the generator. In this case, the optimization problem would be much easier to solve. This motivates our idea to dualize parts of the GAN objective, offering a mechanism to better optimize the discriminator.

Interestingly, our dual formulation provides a direct relationship between the GAN objective and the maximum mean-discrepancy framework discussed in GrettonJMLR2012 . When restricted to linear discriminators, where we can find the optimal discriminator by solving the dual, this formulation permits the derivation of an optimization algorithm that monotonically increases the objective. Moreover, for non-linear discriminators we can apply trust-region type optimization techniques to obtain more accurate discriminators. Our work brings to the table some additional optimization techniques beyond stochastic gradient descent; we hope this encourages other researchers to pursue this direction.

2 Background

In generative training we are interested in modeling of and sampling from an unknown distribution p(x), given a set of datapoints, for example images. GANs use a generator network G_θ, parameterized by θ, that maps samples z drawn from a simple distribution p(z), e.g., Gaussian or uniform, to samples in the data space. A separate discriminator D_w, parameterized by w, maps a point x in the data space to the probability of it being a real sample.

The discriminator is trained to minimize a classification loss, typically the cross-entropy, and the generator is trained to maximize the same loss. On sets of real data samples x_1, …, x_n and noise samples z_1, …, z_n, using the (averaged) cross-entropy loss results in the following joint optimization problem:

max_θ min_w f(θ, w)   where   f(θ, w) = -1/(2n) ∑_i log D_w(x_i) - 1/(2n) ∑_i log(1 - D_w(G_θ(z_i))).

We adhere to the formulation of a fixed batch of samples for clarity of presentation, but also point out how this process is adapted to the stochastic optimization setting later in the paper, as well as in the supplementary material.
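To make this batch objective concrete, here is a minimal numpy sketch; the callables D and G and the array shapes are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def gan_objective(D, G, x, z):
    """Batch cross-entropy GAN objective f(theta, w) from eq:GanOrig.

    D: maps (n, d) data-space points to probabilities in (0, 1)
    G: maps (n, k) noise vectors to (n, d) data-space points
    """
    # -1/(2n) sum_i log D(x_i) - 1/(2n) sum_i log(1 - D(G(z_i)))
    return (-np.mean(np.log(D(x))) / 2.0
            - np.mean(np.log(1.0 - D(G(z)))) / 2.0)
```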

To solve this maximin optimization problem, ideally, we want to solve for the optimal discriminator parameters w*(θ) = argmin_w f(θ, w), in which case the GAN program given in eq:GanOrig can be reformulated as a maximization for θ using the objective f(θ, w*(θ)). However, typical GAN training only alternates two gradient updates, one for w and one for θ, and usually takes just one step for each in every round. In this case, the objective to be maximized by the generator is f(θ, w) at the current w instead. This objective is always an upper bound on the correct objective f(θ, w*(θ)), since w*(θ) is the minimizer of f(θ, ·). Maximizing an upper bound gives no guarantee of maximizing the correct objective, which leads to instability. Therefore, many practically useful techniques have been proposed to circumvent the difficulties of the original program definition presented in eq:GanOrig.

Another widely employed technique is a separate loss to update the generator, in order to avoid vanishing gradients during early stages of training when the discriminator can get too strong. This technique can be combined with our approach, but in what follows we keep the elegant formulation of the GAN program specified in eq:GanOrig.

3 Dualing GANs

The main idea of ‘Dualing GANs’ is to represent the discriminator program in eq:GanOrig, min_w f(θ, w), using its dual max_λ g(θ, λ). Hereby, g is the dual objective of f, and λ are the dual variables. Instead of gradient descent on f to update w, we solve the dual instead. This results in a maximization problem max_θ max_λ g(θ, λ).

Using the dual is beneficial for two reasons. First, note that for any λ, g(θ, λ) is a lower bound on the objective f(θ, w*(θ)) with optimal discriminator parameters w*(θ). Staying in the dual domain, we are then guaranteed that optimization of g makes progress in terms of the original program. Second, the dual problem usually involves a much smaller number of variables, and can therefore be solved much more easily than the primal formulation. This provides opportunities to obtain more accurate estimates for the discriminator parameters w, which is in turn beneficial for stabilizing the learning of the generator parameters θ. In the following, we start by studying linear discriminators, before extending our technique to training with non-linear discriminators. Also, we use cross-entropy as the classification loss, but emphasize that other convex loss functions, e.g., the hinge loss, can be applied equivalently.

3.1 Linear Discriminator

We start from linear discriminators that use a linear scoring function F(x; w) = w^⊤x, i.e., the discriminator D_w(x) = p(y = 1 | x) = 1/(1 + exp(-w^⊤x)). Here, y = 1 indicates real data, while y = 0 for a generated sample, and p(y | x) characterizes the probability of x being a generated versus real data sample.

We only require the scoring function to be linear in w, and any (nonlinear) differentiable features φ(x) can be used in place of x in this formulation. Substituting the linear scoring function into the objective given in eq:GanOrig results in the following program for w:

min_w  (C/2)∥w∥_2^2 + 1/(2n) ∑_i log(1 + exp(-w^⊤x_i)) + 1/(2n) ∑_i log(1 + exp(w^⊤G_θ(z_i))).

Here we also added an L2-norm regularizer on w with weight C. We note that the program presented in eq:LogLossPrimal is convex in the discriminator parameters w. Hence, we can equivalently solve it in the dual domain as discussed in the following claim, with proof provided in the supplementary material.

The dual program to the task given in eq:LogLossPrimal reads as follows:

max_{0 ≤ λ^x, λ^z ≤ 1/(2n)}  -1/(2C) ∥∑_i λ^x_i x_i - ∑_i λ^z_i G_θ(z_i)∥_2^2 + 1/(2n) ∑_i H(2nλ^x_i) + 1/(2n) ∑_i H(2nλ^z_i),     (1)

with binary entropy H(u) = -u log u - (1 - u) log(1 - u). The optimal solution w* to the original problem can be obtained from the optimal λ^x and λ^z via

w* = (1/C) (∑_i λ^x_i x_i - ∑_i λ^z_i G_θ(z_i)).

Remarks: Intuitively, considering the last two terms of the program given in clm:dualLogLoss as well as its constraints, we aim at assigning weights λ^x_i, λ^z_i close to half of the upper bound 1/(2n) to as many data points and to as many artificial samples as possible. More carefully investigating the first part, which can at most reach zero, reveals that we aim to match the weighted empirical data observation ∑_i λ^x_i x_i and the weighted generated artificial sample observation ∑_i λ^z_i G_θ(z_i). Note that this resembles the moment matching property obtained in other maximum likelihood models. Importantly, this objective also resembles the (kernel) maximum mean discrepancy (MMD) framework, where the empirical squared MMD is estimated via ∥(1/n) ∑_i φ(x_i) - (1/n) ∑_i φ(y_i)∥^2. Generative models that learn to minimize the MMD objective, like the generative moment matching networks LiARXIV2015 ; dziugaite2015training , can therefore be included in our framework, using fixed λ's and proper scaling of the first term.
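To make the dual concrete, the following numpy sketch evaluates the dual objective of eq. (1) and recovers w*; variable names and shapes are our own illustrative choices, with features φ(x) = x.

```python
import numpy as np

def binary_entropy(p):
    # H(p) = -p log p - (1 - p) log(1 - p), clipped to avoid log(0)
    p = np.clip(p, 1e-12, 1.0 - 1e-12)
    return -p * np.log(p) - (1.0 - p) * np.log(1.0 - p)

def dual_objective(lam_x, lam_z, x, gz, C):
    """Dual objective of eq. (1); lam_x, lam_z have shape (n,) in [0, 1/(2n)];
    x holds real samples and gz holds generated samples G(z), shape (n, d)."""
    n = x.shape[0]
    diff = lam_x @ x - lam_z @ gz      # sum_i lam_x_i x_i - sum_i lam_z_i G(z_i)
    moment = -diff @ diff / (2.0 * C)  # moment-matching term
    ent = (binary_entropy(2 * n * lam_x).sum()
           + binary_entropy(2 * n * lam_z).sum()) / (2.0 * n)
    return moment + ent

def recover_w(lam_x, lam_z, x, gz, C):
    # w* = (1/C) (sum_i lam_x_i x_i - sum_i lam_z_i G(z_i))
    return (lam_x @ x - lam_z @ gz) / C
```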

Combining the result obtained in clm:dualLogLoss with the training objective for the generator yields the task max_θ max_λ g(θ, λ) for training GANs with linear discriminators. Hence, instead of searching for a saddle point, we strive to find a maximizer, a task which is presumably easier. The price to pay is the restriction to linear discriminators and the fact that every randomly drawn artificial sample G_θ(z_i) has its own dual variable λ^z_i.

In the non-stochastic optimization setting, where we optimize over fixed sets of data samples x_i and noise samples z_i, it is easy to design a learning algorithm for GANs with linear discriminators that monotonically improves the objective based on line search. Although this approach is not practical for very large data sets, such a property is convenient for smaller scale data sets. In addition, linear models are favorable in scenarios in which we know informative features that we want the discriminator to pay attention to.

When optimizing with mini-batches, we introduce new data samples x_i and noise samples z_i in every iteration. In the supplementary material we show that this corresponds to maximizing a lower bound on the full expectation objective. Since the dual variables vary from one mini-batch to the next, we need to solve for the newly introduced dual variables to a reasonable accuracy. For small minibatch sizes commonly used in the deep learning literature, like 100, calling a constrained optimization solver to solve the dual problem is quite cheap. We used Ipopt WaechterMP2006 , which solves this dual problem to a very good accuracy in negligible time; other solvers can also be used and may lead to improved performance.
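As a self-contained illustration of this step, the box-constrained dual can also be handled by any bound-constrained solver; below is a sketch using scipy's L-BFGS-B in place of Ipopt, reusing the dual_objective sketch above and relying on finite-difference gradients for brevity.

```python
import numpy as np
from scipy.optimize import minimize

def solve_dual(x, gz, C):
    """Solve the dual of eq. (1) for one minibatch (a sketch, not the paper's Ipopt setup)."""
    n = x.shape[0]
    ub = 1.0 / (2 * n)             # box constraint 0 <= lambda <= 1/(2n)

    def neg_dual(v):
        return -dual_objective(v[:n], v[n:], x, gz, C)

    v0 = np.full(2 * n, 0.5 * ub)  # start at the entropy maximizer
    res = minimize(neg_dual, v0, method="L-BFGS-B",
                   bounds=[(0.0, ub)] * (2 * n))
    return res.x[:n], res.x[n:]
```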

Utilizing a log-linear discriminator reduces the model’s expressiveness and complexity. We therefore now propose methods to alleviate this restriction.

Initialize w_0, θ, Δ_0, and iterate:
1. One or a few gradient ascent steps on the generator parameters θ
2. Find a step Δw using min_{Δw} m_{k,θ}(Δw) s.t. (1/2)∥Δw∥_2^2 ≤ Δ_k
3. Update w_{k+1} = w_k + Δw

Figure 1: GAN optimization with model function.

3.2 Non-linear Discriminator

General non-linear discriminators use non-convex scoring functions F(x; w), parameterized by a deep net. The non-convexity of F makes it hard to directly convert the problem into its dual form. Therefore, our approach for training GANs with non-convex discriminators is based on repeatedly linearizing and dualizing the discriminator locally. At first sight this seems restrictive; however, we will show that a specific setup of this technique recovers the gradient direction employed in the regular GAN training mechanism while providing additional flexibility.

We consider locally approximating the primal objective f(θ, w) around a point w_k using a model function m_{k,θ}. We phrase the update of the discriminator parameters as a search for a step Δw, i.e., w_{k+1} = w_k + Δw, where k indicates the current iteration. In order to guarantee the quality of the approximation, we introduce a trust-region constraint, where Δ_k specifies the trust-region size. More concretely, we search for a step by solving

min_{Δw}  m_{k,θ}(Δw)   s.t.   (1/2)∥Δw∥_2^2 ≤ Δ_k,

given generator parameters θ. Rather than optimizing the GAN objective with stochastic gradient descent, we can instead employ this model function and use the algorithm outlined in fig:AlgoOutline. It proceeds by first performing a gradient ascent step on the generator parameters θ. Afterwards, we find a step Δw by solving the program given in eq:ModelPrimal. We then apply this step, and repeat.
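In Python-like pseudocode, one round of this loop might look as follows; grad_gen, solve_model, and the generator learning rate are illustrative placeholders, not the paper's implementation.

```python
def train_round(theta, w, delta_k, grad_gen, solve_model, lr=1e-3):
    """One round of the Figure 1 loop (a sketch).

    grad_gen(theta, w): ascent direction for the generator objective
    solve_model(theta, w, delta_k): step dw minimizing m_{k,theta}
        subject to 0.5 * ||dw||^2 <= delta_k
    """
    theta = theta + lr * grad_gen(theta, w)  # step 1: generator ascent
    dw = solve_model(theta, w, delta_k)      # step 2: trust-region model step
    return theta, w + dw                     # step 3: discriminator update
```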

Different model functions result in variants of the algorithm. If we choose m_{k,θ}(Δw) = f(θ, w_k + Δw), model and function are identical, but the program given in eq:ModelPrimal is hard to solve. Therefore, in the following, we propose two model functions that we have found to be useful. The first one is based on linearization of the cost function f and recovers the step employed by gradient-based discriminator updates in standard GAN training. The second one is based on linearization of the score function F while keeping the loss function intact; this second approximation is hence accurate in a larger region. Many more models exist and we leave further exploration of this space to future work.

(A). Cost function linearization: A local approximation to the cost function can be constructed by using the first order Taylor approximation

m_{k,θ}(Δw) = f(θ, w_k) + ∇_w f(θ, w_k)^⊤Δw.

Such a model function is appealing because step 2 of the algorithm outlined in fig:AlgoOutline, i.e., minimization of the model function subject to trust-region constraints as specified in eq:ModelPrimal, has the analytically computable solution

Δw = -√(2Δ_k) ∇_w f(θ, w_k) / ∥∇_w f(θ, w_k)∥_2.

Consequently, step 3 of the algorithm outlined in fig:AlgoOutline is a step of length √(2Δ_k) into the negative gradient direction of the cost function f. We can use the trust-region parameter Δ_k to tune the step size, just like it is common to specify the step size for standard GAN training. As mentioned before, using the first order Taylor approximation as our model recovers the same direction that is employed during standard GAN training. The value of the parameter Δ_k can be fixed or adapted; see the supplementary material for more details.
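A sketch of this closed-form trust-region step; the gradient is assumed to be given, e.g., computed by backpropagation.

```python
import numpy as np

def cost_lin_step(grad_f, delta_k):
    # Minimizer of the linear model over the trust region:
    # dw = -sqrt(2 delta_k) * g / ||g||, i.e., a step of length
    # sqrt(2 delta_k) along the negative gradient direction
    g = np.asarray(grad_f)
    return -np.sqrt(2.0 * delta_k) * g / np.linalg.norm(g)
```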

Importantly, using the first order Taylor approximation as a model is not the only choice. While some choices are fairly obvious, such as a quadratic approximation, we present another intriguing option in the following.

(B). Score function linearization: Instead of linearizing the entire cost function as demonstrated in the previous part, we can choose to only linearize the score function F(x; w), locally around w_k, via

F(x; w_k + Δw) ≈ F(x; w_k) + ∇_w F(x; w_k)^⊤Δw.

Note that the overall objective is itself a nonlinear function of F. Substituting the approximation for F into the overall objective results in the following model function:

m_{k,θ}(Δw) = 1/(2n) ∑_i log(1 + exp(-F(x_i; w_k) - ∇_w F(x_i; w_k)^⊤Δw)) + 1/(2n) ∑_i log(1 + exp(F(G_θ(z_i); w_k) + ∇_w F(G_θ(z_i); w_k)^⊤Δw)).     (2)

This approximation keeps the nonlinearities of the surrogate loss function intact; therefore we expect it to be more accurate than linearization of the whole cost function f. When F is already linear in w, linearization of the score function introduces no approximation error, and the formulation naturally reduces to the discussion presented in sec:LD; non-negligible errors are introduced when linearizing the whole cost function in this case.
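As an illustration, the model of eq. (2) can be built from the scores and score gradients at w_k; the code below is a sketch with assumed shapes, returning the model as a closure.

```python
import numpy as np

def score_lin_model(F_x, F_gz, grad_F_x, grad_F_gz):
    """Return m_{k,theta}(dw) from eq. (2) as a function of dw.

    F_x, F_gz: scores F(x_i; w_k) and F(G(z_i); w_k), shape (n,)
    grad_F_x, grad_F_gz: score gradients at w_k, shape (n, p)
    """
    def m(dw):
        s_real = F_x + grad_F_x @ dw    # linearized scores on real data
        s_fake = F_gz + grad_F_gz @ dw  # linearized scores on samples
        # 1/(2n) sum log(1 + e^{-s_real}) + 1/(2n) sum log(1 + e^{s_fake})
        return (np.log1p(np.exp(-s_real)).mean()
                + np.log1p(np.exp(s_fake)).mean()) / 2.0
    return m
```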

For general non-linear discriminators, however, no analytic solution can be computed for the program given in eq:ModelPrimal when using this model. Nonetheless, the model function fulfills m_{k,θ}(0) = f(θ, w_k) and it is convex in Δw. Exploiting this convexity, we can derive the dual for this trust-region optimization problem as presented in the following claim. The proof is included in the supplementary material. The dual program to min_{Δw} m_{k,θ}(Δw) s.t. (1/2)∥Δw∥_2^2 ≤ Δ_k with model function as in eq:ModelFunction is:

max_{η ≥ 0, 0 ≤ λ^x, λ^z ≤ 1/(2n)}  -1/(2η) ∥∑_i λ^x_i ∇_w F(x_i; w_k) - ∑_i λ^z_i ∇_w F(G_θ(z_i); w_k)∥_2^2 - ∑_i λ^x_i F(x_i; w_k) + ∑_i λ^z_i F(G_θ(z_i); w_k) + 1/(2n) ∑_i H(2nλ^x_i) + 1/(2n) ∑_i H(2nλ^z_i) - ηΔ_k.

The optimal Δw to the original problem can be expressed through the optimal λ and η as

Δw = (1/η) (∑_i λ^x_i ∇_w F(x_i; w_k) - ∑_i λ^z_i ∇_w F(G_θ(z_i); w_k)).

Combining the dual formulation with the maximization over the generator parameters θ results in a maximization, as opposed to a search for a saddle point. However, unlike the linear case, it is not possible to design an algorithm that is guaranteed to monotonically increase the cost function f. The culprit is step 3 of the algorithm outlined in fig:AlgoOutline, which adapts the model in every iteration.

Intuitively, the program illustrated in clm:dualModel aims at choosing dual variables λ^x, λ^z such that the weighted means of derivatives as well as scores match. Note that this program searches for a direction Δw, as opposed to searching for the weights w, hence the gradient term inside the squared norm.

In practice, we use Ipopt WaechterMP2006 to solve the dual problem. This dual is more ill-conditioned than the one in the linear case. The solution found by Ipopt sometimes contains errors; however, we found the errors to be generally tolerable and not to affect the performance of our models.

4 Experiments

Figure 2: We show the learning curves and samples from two models of the same architecture, one optimized in the dual space (left), and one in the primal space (i.e., a typical GAN) up to 5000 iterations. Samples are shown at different points during training, as well as at the very end (second row). Despite having similar sample quality in the end, they demonstrate drastically different training behavior. In the typical GAN setup, the loss oscillates and has no clear trend, whereas in the dual setup, the loss monotonically increases and shows much smaller oscillation. Sample quality is nicely correlated with the dual objective during training.
Figure 3: Training GANs with linear discriminators on the simple 5-Gaussians dataset. Here we are showing typical runs with the compared methods (not cherry-picked). Top: training curves and samples from a single experiment: left, dual with full batch; middle, dual with minibatch; right, standard GAN with minibatch. The real data from this dataset are drawn in blue, generated samples in green. Below: distribution of λ's during training for the two dual GAN experiments, shown as a histogram at each x-value (iteration), where intensity depicts frequency for values ranging from 0 to 1 (red are data, and green are samples).

In this section, we empirically study the proposed dual GAN algorithms. In particular, we show the stable and monotonic training for linear discriminators and study its properties. For nonlinear GANs we show good quality samples and compare them with standard GAN training methods. Experiments are done on three datasets: a 2D dataset composed of 5 2D Gaussians (5-Gaussians), MNIST LeCunIEEE1998 , and CIFAR-10 krizhevsky2009learning . Overall the results show that our proposed approaches work across a range of problems and provide good alternatives to the standard GAN training method.

4.1 Dual GAN with linear discriminator

We explore the dual GAN with linear discriminator on the synthetic 2D dataset generated by sampling points from a mixture of 5 2D Gaussians, as well as the MNIST dataset. Through these experiments we show that (1) with the proposed dual GAN algorithm, training is very stable; (2) the dual variables λ can be used as an extra informative signal for monitoring the training process; (3) features matter, and we can train good generative models even with linear discriminators when we have good features. In all experiments, we compare our proposed dual GAN with a standard GAN, training the same generator and discriminator models.

The discussion of linear discriminators presented in sec:LD works with any feature representation φ(x) in place of x, as long as φ is differentiable to allow gradients to flow through it. For the simple 5-Gaussians dataset, we use RBF features based on 100 sample training points. For the MNIST dataset, we use a convolutional neural net, and concatenate the hidden activations of all layers as the features.

The dual GAN formulation has a single hyper-parameter C, but we found the algorithm not to be sensitive to it, and set it to 0.0001 in all experiments. We used Adam KingmaICLR2015 with fixed learning rate and momentum to optimize the generator. Additional experimental details and results are included in the supplementary material.

Stable Training: The main results illustrating stable training are provided in fig:lin-mnist and 3, where we show the learning curves as well as model samples at different points during training. Both the dual GAN and the standard GAN use minibatches of the same size, and for the synthetic dataset we did an extra experiment doing full-batch training. From these curves we can see the stable monotonic increase of the dual objective, contrasted with standard GAN’s spiky training curves. On the synthetic data, we see that increasing the minibatch size leads to significantly improved stability. In the supplementary material we include an extra experiment to quantify the stability of the proposed method on the synthetic dataset.

Sensitivity to Hyperparameters: Sensitivity to hyperparameters is another important aspect of training stability. Successful GAN training typically requires carefully tuned hyperparameters, making it difficult for non-experts to adopt these generative models. In an attempt to quantify this sensitivity, we investigated the robustness of the proposed method to hyperparameter choice, and empirically show that the proposed method is less sensitive to the choice of hyperparameters. For both the 5-Gaussians and MNIST datasets, we randomly sampled 100 hyperparameter settings from the ranges specified in Table 1, and compared learning using both the proposed dual GAN and the standard GAN. On the 5-Gaussians dataset, we evaluated the performance of the models by how well the model samples covered the 5 modes. We defined successfully covering a mode as having a sufficient number of the generated samples fall within a distance of 3 standard deviations of the center of the Gaussian. Our dual linear GAN succeeded in 49% of the experiments, while the standard GAN succeeded in only 32%, demonstrating that our method is significantly easier to train and tune. On MNIST, the mean Inception scores were 2.83 and 1.99 for the proposed method and standard GAN training, respectively. A more detailed breakdown of mode coverage and Inception score can be found in Figure 4.
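A sketch of this mode-coverage check follows; the per-mode sample-count threshold is left as an argument since the exact count used is not reproduced here.

```python
import numpy as np

def modes_covered(samples, centers, sigma, min_count):
    """Count modes with at least min_count samples within 3 standard
    deviations of the mode center (thresholds are illustrative)."""
    covered = 0
    for c in centers:
        dist = np.linalg.norm(samples - c, axis=1)
        covered += int((dist < 3.0 * sigma).sum() >= min_count)
    return covered
```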

Dataset | mini-batch size | generator learnrate | generator momentum | discriminator learnrate* | generator architecture | max iterations
5-Gaussians | randint[20,200] | enr([0,10]) | rand[.1,.9] | enr([0,6]), enr([0,10]) | fc-small, fc-large | randint[400,2000]
MNIST | randint[20,200] | enr([0,10]) | rand[.1,.9] | enr([0,6]), enr([0,10]) | fc-small, fc-large, dcgan, dcgan-no-bn | 20000

Table 1: Ranges of hyperparameters for the sensitivity experiment. randint[a,b] means samples were drawn from uniformly distributed integers in the closed interval [a,b]; similarly rand[a,b] for real numbers. enr([a,b]) is shorthand for exp(-randint[a,b]), which was used for hyperparameters commonly explored in log-scale. For generator architectures, for the 5-Gaussians dataset we tried two 3-layer fully-connected networks, with 20 and 40 hidden units. For MNIST, we tried two 3-layer fully-connected networks, with 256 and 1024 hidden units, and a DCGAN-like architecture with and without batch normalization.

Figure 4: Results for the hyperparameter sensitivity experiment (left: 5-Gaussians; right: MNIST). For the 5-Gaussians dataset, the x-axis represents the number of modes covered. For MNIST, the x-axis represents the discretized Inception score. Overall, the proposed dual GAN results concentrate significantly more mass on the right side, demonstrating better robustness to hyperparameters than standard GANs.

Distribution of λ During Training: The dual formulation allows us to monitor the training process from a unique perspective by monitoring the dual variables λ. fig:lin-toy shows the evolution of the distribution of λ during training for the synthetic 2D dataset. At the beginning of training the λ's are on the low side, as the generator is not good and the λ's are encouraged to be small to minimize the moment matching cost. As the generator improves, more attention is devoted to the entropy term in the dual objective, and the λ's start to converge toward the entropy-maximizing value, half of the upper bound 1/(2n).

Comparison of Different Features: The qualitative differences of the learned models with different features can be observed in fig:lin-mnist-feature. In general, the more information the features carry about the data, the better the learned generative models are. On MNIST, even with random features and linear discriminators we can learn reasonably good generative models. On the other hand, these results also indicate that if the features are bad, then it is hard to learn good models. This leads us to the nonlinear discriminators presented below, where the discriminator features are learned together with the last layer, which may be necessary for more complicated problem domains where features are potentially difficult to engineer.

Figure 5: Samples from the dual linear GAN using pretrained (top row) and random (bottom row) features on MNIST. Each column shows a different feature set, utilizing all layers in a convnet (All) and then successive single layers in the network (Conv1, Conv2, Conv3, Fc4, Fc5).

4.2 Dual GAN with non-linear discriminator

Next we assess the applicability of our proposed technique for non-linear discriminators, and focus on training models on MNIST and CIFAR-10.

As discussed in sec:NLD, when the discriminator is non-linear, we can only approximate the discriminator locally. Therefore we do not have monotonic convergence guarantees. However, through better approximation and optimization of the discriminator we may expect the proposed dual GAN to work better than standard gradient based GAN training in some cases. Since GAN training is sensitive to hyperparameters, to make the comparison fair, we tuned the parameters for both the standard GANs and our approaches extensively and compare the best results for each.

fig:nonlin-mnist and 7 show the samples generated by models learned using the different approaches. Visually, samples from our proposed approaches are on par with those from standard GANs. As an extra quantitative metric of performance, we computed the Inception Score SalimansARXIV2016 for each of them on CIFAR-10 in Table 2. The Inception Score is a surrogate metric which highly depends on the network architecture. Therefore we computed the score using our own classifier as well as the one proposed in SalimansARXIV2016 . As can be seen in Table 2, both score and cost linearization are competitive with standard GANs. From the training curves we can also see that score linearization does best in terms of approximating the objective, and both score linearization and cost linearization oscillate less than standard GANs.

Score Type | GAN | Score Lin | Cost Lin | Real Data
Inception (end) | 5.61±0.09 | 5.40±0.12 | 5.43±0.10 | 10.72±0.38
Internal classifier (end) | 3.85±0.08 | 3.52±0.09 | 4.42±0.09 | 8.03±0.07
Inception (avg) | 5.59±0.38 | 5.44±0.08 | 5.16±0.37 | -
Internal classifier (avg) | 3.64±0.47 | 3.70±0.27 | 4.04±0.37 | -

Table 2: Inception Score SalimansARXIV2016 for different GAN training methods. Since the score depends on the classifier, we used code from SalimansARXIV2016 as well as our own small convnet CIFAR-10 classifier for evaluation (which achieves 83% accuracy). All scores are computed using 10,000 samples. The top pair are scores on the final models. GANs are known to be unstable, and results are sometimes cherry-picked; so the bottom pair are scores averaged across models sampled from different iterations of training after it stopped improving.
Figure 6: Nonlinear discriminator experiments on MNIST (panels, left to right: Score Linearization, Cost Linearization, GAN), and their training curves, showing the primal objective, the approximation, and the discriminator accuracy. Here we are showing typical runs with the compared methods (not cherry-picked).
Figure 7: Nonlinear discriminator experiments on CIFAR-10 (panels, left to right: Score Linearization, Cost Linearization, GAN); learning curves and samples organized by class are provided in the supplementary material.

5 Related Work

A thorough review of the research devoted to generative modeling is beyond the scope of this paper. In this section we focus on GANs GoodfellowARXIV2014 and review the most related work that has not been discussed throughout the paper.

Our dual formulation reveals a close connection to moment-matching objectives widely seen in many other models. MMD GrettonJMLR2012 is one such related objective, and has been used in deep generative models in LiARXIV2015 ; dziugaite2015training . SalimansARXIV2016 proposed a range of techniques to improve GAN training, including the usage of feature matching. Similar techniques are also common in style transfer gatys2015neural . In addition to these, moment-matching objectives are very common for exponential family models wainwright2008graphical . Common to all these works is the use of fixed moments. The Wasserstein objective proposed for GAN training in ArjovskyARXIV2017 can also be thought of as a form of moment matching, where the features are part of the discriminator and they are adaptive. The main difference between our dual GAN with linear discriminators and other forms of adaptive moment matching is that we adapt the weighting of features by optimizing non-parametric dual parameters, while other works mostly adopt a parametric model to adapt features.

Duality has also been studied to understand and improve GAN training. NowozinNIPS2016 pioneered work that uses duality to derive new GAN training objectives from other divergences. ArjovskyARXIV2017 also used duality to derive a practical objective for training GANs from other distance metrics. Compared to previous work, instead of coming up with new objectives, we instead used duality on the original GAN objective and aim to better optimize the discriminator.

Beyond what has already been discussed, there has been a range of other techniques developed to improve or extend GAN training, e.g., ImARXIV2016 ; HuangARXIV2016 ; ZhangARXIV2016 ; ChenARXIV2016 ; ZhaoICLR2017 ; LondonNIPSWS2016 , just to name a few.

6 Conclusion

To conclude, we introduced ‘Dualing GANs,’ a framework which considers duality based formulations for the duel between the discriminator and the generator. Using the dual formulation provides opportunities to train the discriminator better. This helps remove the instability in training for linear discriminators, and we also adapted this framework to non-linear discriminators. The dual formulation also provides connections to other techniques. In particular, we discussed a close link to moment matching techniques, and showed that the cost function linearization for non-linear discriminators recovers the original gradient direction in standard GANs. We hope that our results spur further research in this direction to obtain a better understanding of the GAN objective and its intricacies.

References

Appendix A Minibatch objective

Standard GAN training is motivated from a maximin formulation on an expectation objective

max_θ min_w  E_{x∼p(x), z∼p(z)} [-1/2 log D_w(x) - 1/2 log(1 - D_w(G_θ(z)))],     (3)

where p(x) is the true data distribution, and p(z) is the prior distribution on z.

In practice, however, a minibatch of data x_{1:n} and noise z_{1:n} is sampled each time, and one gradient update is made to each of θ and w.

In our formulation, in particular the dual GAN with linear discriminators, we can solve the inner optimization problem over w on minibatch samples x_{1:n} and z_{1:n} to optimality; θ is then updated with the optimal w. This effectively makes the optimization problem take the following form

max_θ  E_{x_{1:n}, z_{1:n}} [min_w  (1/n) ∑_{i=1}^n (ℓ_x(x_i; w) + ℓ_z(z_i; w, θ))],     (4)

where ℓ_x(x; w) = -1/2 log D_w(x) and ℓ_z(z; w, θ) = -1/2 log(1 - D_w(G_θ(z))) are the individual loss functions. Using this notation, the original GAN problem can be represented as

max_θ min_w  E_{x∼p(x), z∼p(z)} [ℓ_x(x; w) + ℓ_z(z; w, θ)],     (5)

since the x_i and z_i are drawn i.i.d. from the corresponding distributions.

Let f̂_θ(w) = (1/n) ∑_{i=1}^n (ℓ_x(x_i; w) + ℓ_z(z_i; w, θ)); since the inner minimum can adapt w to each minibatch, while the outer minimum must use a single w for all minibatches, we have

E_{x_{1:n}, z_{1:n}} [min_w f̂_θ(w)]  ≤  min_w E_{x_{1:n}, z_{1:n}} [f̂_θ(w)],     (6)

which means our minibatch algorithm is actually optimizing a lower bound on the theoretical GAN objective. This introduces a bias that decreases with minibatch size, but guarantees that the optimization is still valid.
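A toy numeric check of this inequality, using a quadratic stand-in for the per-minibatch loss (illustrative only, not the GAN objective):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=100_000)   # plays the role of minibatch randomness

# With fhat(w) = (w - a)^2, the inner minimum is 0 for every draw of a,
# so E[min_w fhat(w)] = 0.
e_of_min = 0.0

# min_w E[(w - a)^2] is attained near w = E[a] and equals Var(a) ~ 1.
grid = np.linspace(-2.0, 2.0, 401)
min_of_e = min(np.mean((w - a) ** 2) for w in grid)

print(e_of_min, "<=", min_of_e)   # 0.0 <= ~1.0
```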

On the other hand, interleaving minibatch training with partial optimization of w (not all the way to optimality) makes standard GAN training behave differently; however, the exact properties of this process are hard to characterize and beyond the scope of this paper.

Appendix B Proof of Claim 1

The dual program to the minimization task given in eq:LogLossPrimal reads as follows:

max_{0 ≤ λ^x, λ^z ≤ 1/(2n)}  -1/(2C) ∥∑_i λ^x_i x_i - ∑_i λ^z_i G_θ(z_i)∥_2^2 + 1/(2n) ∑_i H(2nλ^x_i) + 1/(2n) ∑_i H(2nλ^z_i),     (7)

with binary entropy H(u) = -u log u - (1 - u) log(1 - u), and the optimal solution w* to the original problem can be expressed with the optimal λ^x and λ^z as

w* = (1/C) (∑_i λ^x_i x_i - ∑_i λ^z_i G_θ(z_i)).

Proof.

We introduce auxiliary variables u_i = w^⊤x_i and v_i = w^⊤G_θ(z_i); the original minimization problem can then be transformed into the following equality constrained problem

min_{w, u, v}  (C/2)∥w∥_2^2 + 1/(2n) ∑_i log(1 + exp(-u_i)) + 1/(2n) ∑_i log(1 + exp(v_i))   s.t.  u_i = w^⊤x_i,  v_i = w^⊤G_θ(z_i).     (8)

The corresponding Lagrangian, with multipliers λ^x_i and λ^z_i for the equality constraints, has the following form

L = (C/2)∥w∥_2^2 + 1/(2n) ∑_i log(1 + exp(-u_i)) + 1/(2n) ∑_i log(1 + exp(v_i)) + ∑_i λ^x_i (u_i - w^⊤x_i) + ∑_i λ^z_i (w^⊤G_θ(z_i) - v_i).     (9)

Setting the derivatives with respect to the primal variables to 0, we get

∇_w L = Cw - ∑_i λ^x_i x_i + ∑_i λ^z_i G_θ(z_i) = 0     (10)
∂L/∂u_i = -1/(2n(1 + exp(u_i))) + λ^x_i = 0     (11)
∂L/∂v_i = exp(v_i)/(2n(1 + exp(v_i))) - λ^z_i = 0     (12)

We can then represent the primal variables using the λ's,

w = (1/C) (∑_i λ^x_i x_i - ∑_i λ^z_i G_θ(z_i))     (13)
u_i = log((1 - 2nλ^x_i)/(2nλ^x_i))     (14)
v_i = log(2nλ^z_i/(1 - 2nλ^z_i))     (15)

Eqs. (11) and (12) also introduce extra constraints on λ^x and λ^z, as follows

0 ≤ λ^x_i ≤ 1/(2n),   0 ≤ λ^z_i ≤ 1/(2n).     (16)

Substituting the primal variables back into the Lagrangian, we get the dual objective

g(θ, λ) = -1/(2C) ∥∑_i λ^x_i x_i - ∑_i λ^z_i G_θ(z_i)∥_2^2 + 1/(2n) ∑_i H(2nλ^x_i) + 1/(2n) ∑_i H(2nλ^z_i).     (17)

The overall dual problem is therefore

max_{0 ≤ λ^x, λ^z ≤ 1/(2n)}  g(θ, λ).     (18)

Once we have solved for the optimal λ, we can recover the optimal primal solution using (13). ∎
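As a sanity check of this derivation, the primal and dual optima should coincide; the sketch below verifies this numerically on random data, reusing the dual_objective and solve_dual sketches from Section 3.1 (features φ(x) = x).

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n, d, C = 8, 3, 0.1
x, gz = rng.normal(size=(n, d)), rng.normal(size=(n, d))

def primal(w):
    # eq:LogLossPrimal with phi(x) = x
    return (C / 2 * w @ w
            + np.log1p(np.exp(-(x @ w))).mean() / 2
            + np.log1p(np.exp(gz @ w)).mean() / 2)

p_star = minimize(primal, np.zeros(d)).fun
lam_x, lam_z = solve_dual(x, gz, C)
d_star = dual_objective(lam_x, lam_z, x, gz, C)
print(p_star, d_star)   # equal up to solver tolerance (strong duality)
```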

Appendix C Setting the step size in the trust-region method

Pursuing this trust-region intuition, we can alternatively choose Δ_k based on the accuracy of the model m_{k,θ}. To this end it is often convenient to introduce the acceptance ratio

ρ = (f(θ, w_k) - f(θ, w_k + Δw)) / (f(θ, w_k) - m_{k,θ}(Δw)),

which compares the real function value difference to the modeled one. If the acceptance ratio deviates significantly from 1 on either side, we may opt to decrease the trust region and re-solve the program given in Eq. (4) of the main paper, instead of accepting the step.

Intuitively, if ρ as specified in eq:AccRatio is far from 1, the model function does not fit the original objective well. To obtain a better fit, we re-solve the program using a smaller trust-region size Δ_k.
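A sketch of such an adaptive trust-region update; the thresholds and scaling factors are illustrative choices, not taken from the paper.

```python
def update_trust_region(f_old, f_new, m_new, delta_k,
                        low=0.25, high=0.75, shrink=0.5, grow=2.0):
    """Adapt delta_k from the acceptance ratio rho of eq:AccRatio."""
    rho = (f_old - f_new) / max(f_old - m_new, 1e-12)
    if rho < low:                  # model fit poor: shrink and reject step
        return delta_k * shrink, False
    if rho > high:                 # model accurate: allow a larger region
        return delta_k * grow, True
    return delta_k, True           # otherwise keep the current size
```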

Appendix D Proof of Claim 2

The dual program to min_{Δw} m_{k,θ}(Δw) s.t. (1/2)∥Δw∥_2^2 ≤ Δ_k, with the model function given as in eq. (2) of the main paper, is the following:

max_{η ≥ 0, 0 ≤ λ^x, λ^z ≤ 1/(2n)}  -1/(2η) ∥∑_i λ^x_i ∇_w F(x_i; w_k) - ∑_i λ^z_i ∇_w F(G_θ(z_i); w_k)∥_2^2 - ∑_i λ^x_i F(x_i; w_k) + ∑_i λ^z_i F(G_θ(z_i); w_k) + 1/(2n) ∑_i H(2nλ^x_i) + 1/(2n) ∑_i H(2nλ^z_i) - ηΔ_k.

The optimal Δw to the original problem can be expressed through the optimal λ and η as

Δw = (1/η) (∑_i λ^x_i ∇_w F(x_i; w_k) - ∑_i λ^z_i ∇_w F(G_θ(z_i); w_k)).

Proof.

In this optimization problem, the free variable is Δw. We introduce the shorthand notations g_i = ∇_w F(x_i; w_k) and h_i = ∇_w F(G_θ(z_i); w_k). With these extra notations we can simplify the primal problem as

min_{Δw}  1/(2n) ∑_i log(1 + exp(-F(x_i; w_k) - g_i^⊤Δw)) + 1/(2n) ∑_i log(1 + exp(F(G_θ(z_i); w_k) + h_i^⊤Δw))   s.t.  (1/2)∥Δw∥_2^2 ≤ Δ_k.     (19)

Again, we introduce auxiliary variables u_i = F(x_i; w_k) + g_i^⊤Δw and v_i = F(G_θ(z_i); w_k) + h_i^⊤Δw, and obtain the following constrained optimization problem

min_{Δw, u, v}  1/(2n) ∑_i log(1 + exp(-u_i)) + 1/(2n) ∑_i log(1 + exp(v_i))   s.t.  u_i = F(x_i; w_k) + g_i^⊤Δw,  v_i = F(G_θ(z_i); w_k) + h_i^⊤Δw,  (1/2)∥Δw∥_2^2 ≤ Δ_k.     (20)

The corresponding Lagrangian, with multipliers λ^x_i, λ^z_i for the equality constraints and η ≥ 0 for the trust-region constraint, is the following

L = 1/(2n) ∑_i log(1 + exp(-u_i)) + 1/(2n) ∑_i log(1 + exp(v_i)) + ∑_i λ^x_i (u_i - F(x_i; w_k) - g_i^⊤Δw) + ∑_i λ^z_i (F(G_θ(z_i); w_k) + h_i^⊤Δw - v_i) + η((1/2)∥Δw∥_2^2 - Δ_k).     (21)

Setting the derivatives of the Lagrangian with respect to the primal variables to 0, we get

∇_{Δw} L = ηΔw - ∑_i λ^x_i g_i + ∑_i λ^z_i h_i = 0     (22)
∂L/∂u_i = -1/(2n(1 + exp(u_i))) + λ^x_i = 0     (23)
∂L/∂v_i = exp(v_i)/(2n(1 + exp(v_i))) - λ^z_i = 0     (24)

Therefore

Δw = (1/η) (∑_i λ^x_i g_i - ∑_i λ^z_i h_i)     (25)
u_i = log((1 - 2nλ^x_i)/(2nλ^x_i))     (26)
v_i = log(2nλ^z_i/(1 - 2nλ^z_i))     (27)

which includes the equation for Δw in the claim.

Next we substitute these back into the Lagrangian to obtain the dual objective. We introduce another shorthand notation d = ∑_i λ^x_i g_i - ∑_i λ^z_i h_i; then Δw = d/η, and the dual objective can be written as

-1/(2η)∥d∥_2^2 - ∑_i λ^x_i F(x_i; w_k) + ∑_i λ^z_i F(G_θ(z_i); w_k) + 1/(2n) ∑_i H(2nλ^x_i) + 1/(2n) ∑_i H(2nλ^z_i) - ηΔ_k.