1 Introduction
To train Generative Adversarial Networks (Goodfellow et al., 2014), it is now well established that the generator should mimic the distribution of the real data, in the sense of a certain discrepancy measure. Discrepancies between distributions that measure the goodness of fit of the neural generator to the real data distribution have been the subject of many recent studies (Arjovsky & Bottou, 2017; Nowozin et al., 2016; Kaae Sønderby et al., 2017; Mao et al., 2017; Arjovsky et al., 2017; Gulrajani et al., 2017; Mroueh et al., 2017; Mroueh & Sercu, 2017; Li et al., 2017), most of which focus on training stability.
In terms of data modalities, most success was achieved in plausible natural image generation after the introduction of Deep Convolutional Generative Adversarial Networks (DCGAN) (Radford et al., 2015). This success is due not only to advances in training generative adversarial networks in terms of loss functions (Arjovsky et al., 2017) and stable algorithms, but also to the representation power of convolutional neural networks in modeling images and in finding sufficient statistics that capture the continuous density function of natural images. When moving to neural generators of discrete sequences, the theory and practice of generative adversarial networks are still not very well understood. Maximum likelihood pretraining or augmentation, in conjunction with reinforcement learning techniques, was proposed in many recent works for training GANs for discrete sequence generation (Yu et al., 2016; Che et al., 2017; Hjelm et al., 2017; Rajeswar et al., 2017). Other methods include the Gumbel-Softmax trick (Kusner & Hernández-Lobato, 2016) and the use of autoencoders to adversarially generate discrete sequences from a continuous space (Zhao et al., 2017). End-to-end training of GANs for discrete sequence generation is still an open problem (Press et al., 2017). Empirical successes of end-to-end training have been reported within the framework of WGAN-GP (Gulrajani et al., 2017), using a proxy for the Wasserstein distance via a pointwise gradient penalty on the critic. Inspired by this success, we propose in this paper a new Integral Probability Metric (IPM) between distributions that we coin Sobolev IPM. Intuitively, an IPM (Müller, 1997) between two probability distributions looks for a witness function, called the critic, that maximally discriminates between samples coming from the two distributions:

$$d_{\mathcal{F}}(\mathbb{P},\mathbb{Q}) = \sup_{f \in \mathcal{F}} \; \mathbb{E}_{x\sim\mathbb{P}} f(x) - \mathbb{E}_{x\sim\mathbb{Q}} f(x).$$

Traditionally, the function class $\mathcal{F}$ is defined independently of the distributions at hand (Sriperumbudur et al., 2012). The Wasserstein distance, for instance, corresponds to an IPM where the witness functions are defined over the space of Lipschitz functions; the MMD distance (Gretton et al., 2012) corresponds to witness functions defined over a ball in a Reproducing Kernel Hilbert Space (RKHS).
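As a rough numerical illustration (not from the paper), the supremum over witness functions can be mimicked with a tiny, fixed function class; the sample sizes and the three witness functions below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
xs_p = rng.normal(0.0, 1.0, size=1000)   # samples from P
xs_q = rng.normal(1.0, 1.0, size=1000)   # samples from Q

# A tiny stand-in "function class": a few fixed witness functions f: R -> R.
witnesses = [np.tanh, np.sin, lambda x: np.clip(x, -1.0, 1.0)]

# IPM-style discrepancy: sup over the class of |E_P[f] - E_Q[f]|
gaps = [abs(f(xs_p).mean() - f(xs_q).mean()) for f in witnesses]
ipm_estimate = max(gaps)
print(ipm_estimate)   # the witness that best separates the two sample sets
```

A richer function class (e.g. a neural network critic) would give a tighter supremum; the point here is only the "best witness" mechanic.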
We will revisit in this paper the Fisher IPM defined in (Mroueh & Sercu, 2017), which extends the IPM definition to function classes defined with norms that depend on the distributions. Fisher IPM can be seen as restricting the critic to a Lebesgue ball defined with respect to a dominant measure $\mu$. The Lebesgue norm is defined as follows:

$$\|f\|_{L_2(\mathcal{X},\mu)} = \Big(\int_{\mathcal{X}} f^2(x)\,\mu(x)\,dx\Big)^{1/2},$$

where $\mu$ is a dominant measure of $\mathbb{P}$ and $\mathbb{Q}$.
In this paper we extend the IPM framework to critics bounded in the Sobolev norm:

$$\|f\|_{W^{1,2}(\mathcal{X},\mu)} = \Big(\int_{\mathcal{X}} \|\nabla_x f(x)\|^2\,\mu(x)\,dx\Big)^{1/2}.$$
In contrast to Fisher IPM, which compares joint probability density functions of all coordinates between two distributions, we will show that Sobolev IPM compares weighted (coordinate-wise) conditional Cumulative Distribution Functions for all coordinates on a leave-one-out basis. Matching conditional dependencies between coordinates is crucial for sequence modeling.
Our analysis and empirical verification show that the modeling of conditional dependencies can be built into the metric used to learn GANs, as in Sobolev IPM. For instance, this gives Sobolev IPM an advantage over Fisher IPM in comparing sequences. Nevertheless, in sequence modeling, when we parametrize the critic and the generator with neural networks, we find an interesting tradeoff between the metric used and the architectures used to parametrize the critic and the generator, as well as the conditioning used in the generator. The burden of modeling the conditional long-term dependencies can be handled either by the IPM loss function, as in Sobolev IPM (more accurately, by the choice of the data-dependent function class of the critic), or by a simpler metric such as Fisher IPM together with a powerful critic architecture that models conditional long-term dependencies, such as LSTMs or GRUs, in conjunction with curriculum conditioning of the generator as done in (Press et al., 2017). Highlighting these tradeoffs between metrics, data-dependent function classes for the critic (Fisher or Sobolev), and architectures is crucial to advance sequence modeling and, more broadly, structured data generation using GANs.
On the other hand, Sobolev norms have been widely used in manifold regularization in the so-called Laplacian framework for semi-supervised learning (SSL) (Belkin et al., 2006). GANs have shown success in semi-supervised learning (Salimans et al., 2016; Dumoulin et al., 2017; Dai et al., 2017; Kumar et al., 2017). Nevertheless, many normalizations and additional tricks were needed. We show in this paper that a variant of Sobolev GAN achieves strong results in semi-supervised learning on CIFAR-10, without the need for any activation normalization in the critic.
The main contributions of this paper can be summarized as follows:

We introduce Sobolev IPM in Section 4 by restricting the critic of an IPM to a Sobolev ball defined with respect to a dominant measure $\mu$. We then show that Sobolev IPM defines a discrepancy between weighted (coordinate-wise) conditional CDFs of the distributions.

The intrinsic conditioning and the CDF matching make Sobolev IPM suitable for discrete sequence matching and explain the success of the gradient penalty in WGAN-GP and Sobolev GAN in discrete sequence generation.

We give in Section 5 an ALM (Augmented Lagrangian Multiplier) algorithm for training Sobolev GAN. Similar to Fisher GAN, this algorithm is stable and does not compromise the capacity of the critic.

We show in Appendix A that the critic of Sobolev IPM satisfies an elliptic Partial Differential Equation (PDE). We relate this diffusion to the Fokker-Planck equation and show that the gradient of the optimal Sobolev critic behaves as a transportation plan between distributions.

We empirically study Sobolev GAN in character-level text generation (Section 6.1). We validate that the conditioning implied by Sobolev GAN is crucial for the success and stability of GAN text generation. As a take-home message from this study, we see that text generation succeeds either by implicit conditioning, i.e. using Sobolev GAN (or WGAN-GP) together with convolutional critics and generators, or by explicit conditioning, i.e. using Fisher IPM together with a recurrent critic and generator and curriculum learning.

We finally show in Section 6.2 that a variant of Sobolev GAN achieves competitive semi-supervised learning results on CIFAR-10, thanks to the smoothness implied by the Sobolev regularizer.
2 Overview of Metrics between Distributions
In this section, we review different representations of probability distributions and metrics for comparing distributions that use those representations. These metrics are at the core of GAN training. In what follows, we consider probability measures with positive, weakly differentiable probability density functions (PDFs). Let $\mathbb{P}$ and $\mathbb{Q}$ be two probability measures with PDFs $p$ and $q$ defined on $\mathcal{X} \subset \mathbb{R}^d$. Let $F_{\mathbb{P}}$ and $F_{\mathbb{Q}}$ be the Cumulative Distribution Functions (CDFs) of $\mathbb{P}$ and $\mathbb{Q}$ respectively:

$$F_{\mathbb{P}}(x) = \int_{u \le x} p(u)\,du, \qquad F_{\mathbb{Q}}(x) = \int_{u \le x} q(u)\,du.$$
The score function of a density function $p$ is defined as:

$$s_{\mathbb{P}}(x) = \nabla_x \log p(x).$$
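As a concrete check (a sketch, not from the paper): for a univariate Gaussian $\mathcal{N}(m,\sigma^2)$ the score is $s(x) = -(x-m)/\sigma^2$, which we can verify against finite differences of the log-density; the values of $m$ and $\sigma$ below are arbitrary:

```python
import numpy as np

m, sigma = 1.0, 2.0

def log_pdf(x):
    # log-density of N(m, sigma^2)
    return -0.5 * ((x - m) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

def score(x):
    # score s(x) = d/dx log p(x) = -(x - m) / sigma^2
    return -(x - m) / sigma ** 2

# finite-difference check of the score at a few points
eps = 1e-5
for x in [-1.0, 0.5, 3.0]:
    fd = (log_pdf(x + eps) - log_pdf(x - eps)) / (2 * eps)
    assert abs(fd - score(x)) < 1e-6
print("score matches finite differences of log p")
```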
In this work, we are interested in metrics between distributions that have a variational form and can be written as a supremum of mean discrepancies of functions defined on a specific function class. This type of metric includes divergences as well as Integral Probability Metrics (Sriperumbudur et al., 2009), and has the following form:

$$d(\mathbb{P},\mathbb{Q}) = \sup_{f\in\mathcal{F}} \Delta(f;\mathbb{P},\mathbb{Q}),$$

where $\mathcal{F}$ is a function class defined on $\mathcal{X}$ and $\Delta$ is a mean discrepancy; in the IPM case, $\Delta(f;\mathbb{P},\mathbb{Q}) = \mathbb{E}_{x\sim\mathbb{P}} f(x) - \mathbb{E}_{x\sim\mathbb{Q}} f(x)$. The variational form given above leads in certain cases to closed-form expressions in terms of the PDFs $p, q$, the CDFs $F_{\mathbb{P}}, F_{\mathbb{Q}}$, or the score functions $s_{\mathbb{P}}, s_{\mathbb{Q}}$.
In Table 1, we give a comparison of different discrepancies and function spaces used in the literature for GAN training, together with our proposed Sobolev IPM. We see from Table 1 that Sobolev IPM, compared to the Wasserstein distance, imposes a tractable smoothness constraint on the critic on points sampled from a distribution $\mu$, rather than imposing a Lipschitz constraint on all points in the space $\mathcal{X}$. We also see that Sobolev IPM is the natural generalization of the Cramér-von Mises distance from one dimension to high dimensions. We note that the Energy Distance, a form of Maximum Mean Discrepancy for a special kernel, was used in (Bellemare et al., 2017b) as a generalization of the Cramér distance in GAN training, but still needed a gradient penalty in its algorithmic counterpart, leading to a misspecified distance between distributions. Finally, it is worth noting that when comparing Fisher IPM and Sobolev IPM, we see that while Fisher IPM compares the joint PDFs of the distributions, Sobolev IPM compares weighted (coordinate-wise) conditional CDFs. As we will see later, this conditioning nature of the metric makes Sobolev IPM suitable for comparing sequences. Note that the Stein metric (Liu et al., 2016; Liu, 2017) uses the score function to match distributions. We will show later how Sobolev IPM relates to the Stein discrepancy (Appendix A).
Metric  Function class  Closed form
Divergence (Goodfellow et al., 2014; Nowozin et al., 2016)  parametrized via the Fenchel conjugate  closed form in terms of the PDFs
Wasserstein-1 (Arjovsky et al., 2017; Gulrajani et al., 2017)  Lipschitz functions  NA
MMD (Li et al., 2017; Li et al., 2015; Dziugaite et al., 2015)  unit ball in an RKHS  closed form in terms of the kernel
Stein Distance (Wang & Liu, 2016)  smooth with zero boundary condition  NA in general; has a closed form in RKHS
Cramér, for d = 1 (Bellemare et al., 2017a)  smooth with zero boundary condition  closed form in terms of the CDFs
Fisher IPM (Mroueh & Sercu, 2017)  Lebesgue ball  closed form in terms of the PDFs, where μ is the dominant measure
Sobolev IPM (This work)  Sobolev ball with zero boundary condition  closed form in terms of weighted (coordinate-wise) conditional CDFs, where μ is the dominant measure
3 Generalizing Fisher IPM: PDF Comparison
Imposing data-independent constraints on the function class in the IPM framework, such as the Lipschitz constraint in the Wasserstein distance, is computationally challenging and intractable in the general case. In this section, we generalize the Fisher IPM introduced in (Mroueh & Sercu, 2017), where the function class is relaxed to a tractable data-dependent constraint on the second-order moment of the critic; in other words, the critic is constrained to be in a Lebesgue ball.
Fisher IPM. Let $\mathcal{X} \subset \mathbb{R}^d$, and let $\mathscr{P}(\mathcal{X})$ be the space of distributions defined on $\mathcal{X}$. Let $\mathbb{P}, \mathbb{Q} \in \mathscr{P}(\mathcal{X})$, and let $\mu$ be a dominant measure of $\mathbb{P}$ and $\mathbb{Q}$, in the sense that

$$\mu(x) = 0 \implies p(x) = 0 \text{ and } q(x) = 0.$$

We assume $\mu$ to be also a distribution in $\mathscr{P}(\mathcal{X})$, and assume $\frac{p}{\mu}, \frac{q}{\mu} \in L_2(\mathcal{X},\mu)$. Let $L_2(\mathcal{X},\mu)$ be the space of $\mu$-measurable functions. For $f, g \in L_2(\mathcal{X},\mu)$, we define the following dot product and its corresponding norm:

$$\langle f, g\rangle_{L_2(\mathcal{X},\mu)} = \int_{\mathcal{X}} f(x)\,g(x)\,\mu(x)\,dx, \qquad \|f\|_{L_2(\mathcal{X},\mu)} = \sqrt{\langle f, f\rangle_{L_2(\mathcal{X},\mu)}}.$$

Note that $\frac{p}{\mu}, \frac{q}{\mu}$ can be formally defined as the Radon-Nikodym derivatives $\frac{d\mathbb{P}}{d\mu}, \frac{d\mathbb{Q}}{d\mu}$. We define the unit Lebesgue ball as follows:

$$B_{L_2(\mathcal{X},\mu)} = \big\{ f : \|f\|_{L_2(\mathcal{X},\mu)} \le 1 \big\}.$$
Fisher IPM, defined in (Mroueh & Sercu, 2017), searches for the critic function in the Lebesgue ball that maximizes the mean discrepancy between $\mathbb{P}$ and $\mathbb{Q}$. Fisher GAN (Mroueh & Sercu, 2017) was originally formulated specifically for $\mu = \frac{\mathbb{P}+\mathbb{Q}}{2}$. We consider here a general $\mu$, as long as it dominates $\mathbb{P}$ and $\mathbb{Q}$. We define the Generalized Fisher IPM as follows:

$$\mathscr{F}_{\mu}(\mathbb{P},\mathbb{Q}) = \sup_{f \in B_{L_2(\mathcal{X},\mu)}} \mathbb{E}_{x\sim\mathbb{P}} f(x) - \mathbb{E}_{x\sim\mathbb{Q}} f(x). \tag{1}$$

Note that:

$$\mathbb{E}_{x\sim\mathbb{P}} f(x) - \mathbb{E}_{x\sim\mathbb{Q}} f(x) = \int_{\mathcal{X}} f(x)\,\frac{p(x)-q(x)}{\mu(x)}\,\mu(x)\,dx = \Big\langle f,\, \frac{p-q}{\mu}\Big\rangle_{L_2(\mathcal{X},\mu)}.$$

Hence Fisher IPM can be written as follows:

$$\mathscr{F}_{\mu}(\mathbb{P},\mathbb{Q}) = \sup_{f \in B_{L_2(\mathcal{X},\mu)}} \Big\langle f,\, \frac{p-q}{\mu}\Big\rangle_{L_2(\mathcal{X},\mu)}. \tag{2}$$
We have the following result:
Theorem 1 (Generalized Fisher IPM).
The Fisher distance and the optimal critic are as follows:

1. The Fisher distance is given by:

$$\mathscr{F}_{\mu}(\mathbb{P},\mathbb{Q}) = \Big\|\frac{p-q}{\mu}\Big\|_{L_2(\mathcal{X},\mu)} = \sqrt{\int_{\mathcal{X}} \Big(\frac{p(x)-q(x)}{\mu(x)}\Big)^2\,\mu(x)\,dx}.$$

2. The optimal critic $f^*$ achieving the Fisher distance is:

$$f^* = \frac{1}{\mathscr{F}_{\mu}(\mathbb{P},\mathbb{Q})}\,\frac{p-q}{\mu}.$$

Proof of Theorem 1. From Equation (2), the optimal $f^*$ belongs to the intersection of the hyperplane that has normal $\frac{p-q}{\mu}$ and the ball $B_{L_2(\mathcal{X},\mu)}$, hence $f^* = \frac{p-q}{\mu}\big/\big\|\frac{p-q}{\mu}\big\|_{L_2(\mathcal{X},\mu)}$. Hence $\mathscr{F}_{\mu}(\mathbb{P},\mathbb{Q}) = \big\|\frac{p-q}{\mu}\big\|_{L_2(\mathcal{X},\mu)}$. ∎

We see from Theorem 1 the role of the dominant measure $\mu$: the optimal critic is defined with respect to this measure, and the overall Fisher distance can be seen as an average weighted distance between the probability density functions, where the average is taken on points sampled from $\mu$. We give here some choices of $\mu$:

For $\mu = \frac{\mathbb{P}+\mathbb{Q}}{2}$, we obtain the symmetric chi-squared distance as defined in (Mroueh & Sercu, 2017).

$\mu_{GP}$, the implicit distribution defined by the interpolation lines between $\mathbb{P}$ and $\mathbb{Q}$, as in (Gulrajani et al., 2017).
When $\mu$ does not dominate $\mathbb{P}$ and $\mathbb{Q}$, we obtain a non-symmetric divergence. For example, for $\mu = \mathbb{Q}$, $\mathscr{F}^2_{\mathbb{Q}}(\mathbb{P},\mathbb{Q}) = \int_{\mathcal{X}} \frac{(p(x)-q(x))^2}{q(x)}\,dx$. We see that for this particular choice we obtain the Pearson divergence.
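The chi-squared expression of Theorem 1 can be checked numerically on a discretized grid; the grid and the two unit-variance Gaussians below are arbitrary stand-ins for continuous PDFs:

```python
import numpy as np

# densities on a discrete grid (a hedged stand-in for continuous PDFs)
x = np.linspace(-5, 5, 2001)
dx = x[1] - x[0]
p = np.exp(-0.5 * x**2);        p /= p.sum() * dx   # ~ N(0, 1)
q = np.exp(-0.5 * (x - 1)**2);  q /= q.sum() * dx   # ~ N(1, 1)
mu = 0.5 * (p + q)                                  # dominant measure (P+Q)/2

# Fisher distance (Theorem 1): || (p - q) / mu ||_{L2(X, mu)}
fisher = np.sqrt(np.sum(((p - q) / mu) ** 2 * mu * dx))
print(fisher)   # symmetric chi-squared-type distance between the two Gaussians
```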
4 Sobolev IPM
In this section, we introduce the Sobolev IPM. In a nutshell, the Sobolev IPM constrains the critic function to belong to a ball in a restricted Sobolev space; in other words, we constrain the norm of the gradient of the critic. We will show that by moving from a Lebesgue constraint, as in Fisher IPM, to a Sobolev constraint, as in Sobolev IPM, the metric changes from joint PDF matching to weighted (coordinate-wise) conditional CDF matching. The intrinsic conditioning built into the Sobolev IPM and the comparison of cumulative distributions make Sobolev IPM suitable for comparing discrete sequences.
4.1 Definition and Expression of Sobolev IPM in terms of Coordinate Conditional CDFs
We will start by recalling some definitions on Sobolev spaces. We assume in the following that $\mathcal{X}$ is compact and consider functions in the Sobolev space $W^{1,2}(\mathcal{X},\mu)$:

$$W^{1,2}(\mathcal{X},\mu) = \Big\{ f : \mathcal{X}\to\mathbb{R},\ \int_{\mathcal{X}} \|\nabla_x f(x)\|^2\,\mu(x)\,dx < \infty \Big\}.$$

We restrict ourselves to functions in $W^{1,2}(\mathcal{X},\mu)$ vanishing at the boundary, and denote this space $W^{1,2}_0(\mathcal{X},\mu)$. Note that in this case

$$\|f\|_{W^{1,2}_0(\mathcal{X},\mu)} = \sqrt{\int_{\mathcal{X}} \|\nabla_x f(x)\|^2\,\mu(x)\,dx}$$

defines a norm (a seminorm on the full space $W^{1,2}$). We can similarly define a dot product in $W^{1,2}_0(\mathcal{X},\mu)$, for $f, g \in W^{1,2}_0(\mathcal{X},\mu)$:

$$\langle f, g\rangle_{W^{1,2}_0(\mathcal{X},\mu)} = \int_{\mathcal{X}} \langle \nabla_x f(x), \nabla_x g(x)\rangle\,\mu(x)\,dx.$$
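The quantity $\mathbb{E}_{x\sim\mu}\|\nabla_x f(x)\|^2$ underlying this norm is straightforward to estimate by Monte Carlo; here is a hedged sketch with an arbitrary critic $f(x)=\tanh(w^\top x)$ (so the gradient is available in closed form) and $\mu$ a standard Gaussian:

```python
import numpy as np

rng = np.random.default_rng(0)

# arbitrary critic f(x) = tanh(w . x) with analytic gradient
w = np.array([1.0, -0.5])

def grad_f(x):
    # d/dx tanh(w.x) = (1 - tanh^2(w.x)) * w, for a batch of points x
    return (1.0 - np.tanh(x @ w) ** 2)[:, None] * w

x = rng.normal(size=(50_000, 2))     # samples from the dominant measure mu
sobolev_sq = np.mean(np.sum(grad_f(x) ** 2, axis=1))
print(sobolev_sq)                    # Monte Carlo estimate of E_mu ||grad f||^2
```

Since $(1-\tanh^2)^2 \le 1$, the estimate is bounded by $\|w\|^2 = 1.25$ here; a squared-norm constraint on the critic amounts to keeping this sample average at a fixed value.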
Hence we define the following Sobolev IPM, by restricting the critic of the mean discrepancy to the Sobolev unit ball $B_{W^{1,2}_0(\mathcal{X},\mu)} = \{f : \|f\|_{W^{1,2}_0(\mathcal{X},\mu)} \le 1\}$:

$$\mathscr{S}_{\mu}(\mathbb{P},\mathbb{Q}) = \sup_{f \in B_{W^{1,2}_0(\mathcal{X},\mu)}} \mathbb{E}_{x\sim\mathbb{P}} f(x) - \mathbb{E}_{x\sim\mathbb{Q}} f(x). \tag{3}$$

Let $F_{\mathbb{P}}$ and $F_{\mathbb{Q}}$ be the cumulative distribution functions of $\mathbb{P}$ and $\mathbb{Q}$ respectively. We have:

$$p(x) = \frac{\partial^d}{\partial x_1 \cdots \partial x_d}\,F_{\mathbb{P}}(x), \tag{4}$$

and we define

$$D^{-i} = \frac{\partial^{d-1}}{\partial x_1 \cdots \partial x_{i-1}\,\partial x_{i+1} \cdots \partial x_d},$$

which computes the high-order partial derivative excluding the variable $x_i$.
Our main result is presented in Theorem 2. Additional theoretical results are given in Appendix A. All proofs are given in Appendix B.
Theorem 2 (Sobolev IPM).
Assume that $F_{\mathbb{P}}$, $F_{\mathbb{Q}}$ and their high-order partial derivatives exist and are continuous. For $i = 1,\dots,d$, let $x^{-i} = (x_1,\dots,x_{i-1},x_{i+1},\dots,x_d)$.

The Sobolev IPM given in Equation (3) has the following equivalent forms:

1. Sobolev IPM as comparison of high-order partial derivatives of CDFs. The Sobolev IPM has the following form:

$$\mathscr{S}_{\mu}(\mathbb{P},\mathbb{Q}) = \frac{1}{d}\,\sqrt{\mathbb{E}_{x\sim\mu} \sum_{i=1}^d \Big(\frac{D^{-i}F_{\mathbb{P}}(x) - D^{-i}F_{\mathbb{Q}}(x)}{\mu(x)}\Big)^2}.$$

2. Sobolev IPM as comparison of weighted (coordinate-wise) conditional CDFs. The Sobolev IPM can be written in the following equivalent form:

$$\mathscr{S}_{\mu}(\mathbb{P},\mathbb{Q}) = \frac{1}{d}\,\sqrt{\mathbb{E}_{x\sim\mu} \sum_{i=1}^d \Big(\frac{p_{X^{-i}}(x^{-i})\,F_{\mathbb{P}}(x_i \mid x^{-i}) - q_{X^{-i}}(x^{-i})\,F_{\mathbb{Q}}(x_i \mid x^{-i})}{\mu(x)}\Big)^2}. \tag{5}$$

The optimal critic $f^*$ satisfies the following identity:

$$\frac{\partial f^*}{\partial x_i}(x) = \frac{1}{d\,\mathscr{S}_{\mu}(\mathbb{P},\mathbb{Q})}\,\frac{D^{-i}F_{\mathbb{Q}}(x) - D^{-i}F_{\mathbb{P}}(x)}{\mu(x)}, \qquad i = 1,\dots,d. \tag{6}$$
We show in Appendix A that the optimal Sobolev critic is the solution of the following elliptic PDE (with zero boundary conditions):

$$\mathrm{div}\big(\mu(x)\,\nabla_x f^*(x)\big) = \frac{q(x) - p(x)}{\mathscr{S}_{\mu}(\mathbb{P},\mathbb{Q})}. \tag{7}$$
Appendix A gives additional theoretical results on Sobolev IPM in terms of: 1) approximating the Sobolev critic in a function hypothesis class such as neural networks; 2) linking the elliptic PDE given in Equation (7) to the Fokker-Planck diffusion. As we illustrate in Figure 1(b), the gradient of the critic defines a transportation plan for moving the distribution mass from $\mathbb{Q}$ to $\mathbb{P}$.
Discussion of Theorem 2.
We make the following remarks on Theorem 2:


From Theorem 2, we see that the Sobolev IPM compares high-order partial derivatives of the cumulative distributions $F_{\mathbb{P}}$ and $F_{\mathbb{Q}}$, while Fisher IPM compares the probability density functions.

The dominant measure $\mu$ plays a role similar to the one it plays in Fisher IPM: the average distance is defined with respect to points sampled from $\mu$.

Comparison of coordinate-wise conditional CDFs. We note in the following $x^{-i} = (x_1,\dots,x_{i-1},x_{i+1},\dots,x_d)$. Note that we have (using Bayes' rule):

$$D^{-i}F_{\mathbb{P}}(x) = p_{X^{-i}}(x^{-i})\,F_{\mathbb{P}}(x_i \mid x^{-i}).$$

Note that for each $i$, $D^{-i}F_{\mathbb{P}}(x)$ is the cumulative distribution of the variable $x_i$ given the other variables $x^{-i}$, weighted by the density function of $x^{-i}$. This leads us to the form given in Equation (5).
We see that the Sobolev IPM compares, for each dimension, the conditional cumulative distribution of each variable given the other variables, weighted by their density function. We refer to this as a comparison of coordinate-wise CDFs on a leave-one-out basis. From this we see that we are comparing CDFs, which are better behaved for discrete distributions. Moreover, the conditioning built into this metric will play a crucial role in comparing sequences, as conditioning is important in this context (see Section 6.1).
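The Bayes-rule step above — that the coordinate-wise cumulative of the joint equals the marginal of the left-out coordinates times the conditional CDF — can be checked directly on a small discrete joint (an illustrative sketch; the pmf values are arbitrary):

```python
import numpy as np

# joint pmf of (x1, x2) on {0,1} x {0,1}; rows index x1, columns index x2
p = np.array([[0.4, 0.1],
              [0.1, 0.4]])

# D^{-i}-style quantity for i = 1: cumulative in x1, pointwise in x2,
# i.e. sum over x1' <= x1 of p(x1', x2)
cum = np.cumsum(p, axis=0)

# Bayes-rule factorization: marginal of x2 times conditional CDF of x1 | x2
p_x2 = p.sum(axis=0)               # marginal pmf of the left-out coordinate x2
cond = p / p_x2                    # conditional pmf p(x1 | x2)
F_cond = np.cumsum(cond, axis=0)   # conditional CDF F(x1 | x2)

assert np.allclose(cum, p_x2 * F_cond)
print("cumulative in x1 = marginal(x2) * conditional CDF(x1 | x2)")
```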
4.2 Illustrative Examples
Sobolev IPM / Cramér Distance and Wasserstein-1 in One Dimension.
In one dimension, Sobolev IPM (for $\mu$ uniform on $\mathcal{X}$) is the Cramér distance. While Sobolev IPM in one dimension measures the discrepancy between CDFs, the one-dimensional Wasserstein distance measures the discrepancy between inverse CDFs:

$$\mathscr{S}^2(\mathbb{P},\mathbb{Q}) \propto \int_{\mathcal{X}} \big(F_{\mathbb{P}}(x) - F_{\mathbb{Q}}(x)\big)^2\,dx, \qquad W_1(\mathbb{P},\mathbb{Q}) = \int_0^1 \big|F^{-1}_{\mathbb{P}}(t) - F^{-1}_{\mathbb{Q}}(t)\big|\,dt.$$

Recall also that the Fisher IPM for $\mu$ uniform is given by:

$$\mathscr{F}^2(\mathbb{P},\mathbb{Q}) \propto \int_{\mathcal{X}} \big(p(x) - q(x)\big)^2\,dx.$$
Consider for instance two point masses $\mathbb{P} = \delta_a$ and $\mathbb{Q} = \delta_b$ with $a \neq b$. The rationale behind using the Wasserstein distance for GAN training is that, since it is a weak metric, the Wasserstein distance provides some signal even for far-apart distributions (Arjovsky et al., 2017). In this case, it is easy to see that $W_1(\mathbb{P},\mathbb{Q}) = |a-b|$, which grows with the separation, while a PDF-based comparison saturates at a constant value as soon as the supports are disjoint, regardless of $|a-b|$; a CDF-based comparison such as the Cramér distance, in contrast, still grows with the separation. As we see from this simple example, CDF comparison is more suitable than PDF comparison for distributions on discrete spaces.
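The point-mass example can be checked numerically: with a Lebesgue base measure, the squared CDF discrepancy grows linearly with the separation $|a-b|$ (a sketch with an arbitrary grid):

```python
import numpy as np

def cramer_sq(a, b, grid):
    # squared Cramer-style distance between point masses at a and b:
    # integral of (F_P - F_Q)^2 over the grid (Lebesgue base measure)
    Fp = (grid >= a).astype(float)   # CDF of a point mass at a
    Fq = (grid >= b).astype(float)   # CDF of a point mass at b
    dx = grid[1] - grid[0]
    return np.sum((Fp - Fq) ** 2) * dx

grid = np.linspace(0, 100, 100_001)
near = cramer_sq(10, 11, grid)   # separation |a - b| = 1
far = cramer_sq(10, 90, grid)    # separation |a - b| = 80
print(near, far)                 # the CDF discrepancy keeps growing with |a - b|
```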
Sobolev IPM between two 2D Gaussians.
We consider $\mathbb{P}$ and $\mathbb{Q}$ to be two-dimensional Gaussians with means $m_1$ and $m_2$ and covariances $\Sigma_1$ and $\Sigma_2$. Let $(x, y)$ be the coordinates in 2D. We note $F_{\mathbb{P}}$ and $F_{\mathbb{Q}}$ the CDFs of $\mathbb{P}$ and $\mathbb{Q}$ respectively. We consider in this example $\mu = \frac{\mathbb{P}+\mathbb{Q}}{2}$. We know from Theorem 2 that the gradient of the Sobolev optimal critic is proportional to the following vector field:

$$\nabla f^*(x,y) \;\propto\; \frac{1}{\mu(x,y)} \begin{pmatrix} \frac{\partial}{\partial y}\big(F_{\mathbb{Q}} - F_{\mathbb{P}}\big)(x,y) \\ \frac{\partial}{\partial x}\big(F_{\mathbb{Q}} - F_{\mathbb{P}}\big)(x,y) \end{pmatrix}. \tag{8}$$
In Figure 1 we consider a fixed choice of the means and covariances. In Figure 1(a) we plot the numerical solution of the PDE satisfied by the optimal Sobolev critic, given in Equation (7), using a Matlab solver for elliptic PDEs (more accurately, we solve the PDE without the normalization constant $\mathscr{S}_{\mu}(\mathbb{P},\mathbb{Q})$, hence we obtain the solution of Equation (7) up to a scaling). We numerically solve the PDE on a rectangle with zero boundary conditions. We see that the optimal Sobolev critic separates the two distributions well. In Figure 1(b) we then numerically compute the gradient of the optimal Sobolev critic on a 2D grid, as given in Equation (8) (using numerical evaluation of the CDFs and finite differences for the partial derivatives). We plot in Figure 1(b) the density functions of $\mathbb{P}$ and $\mathbb{Q}$ as well as the vector field of the gradient of the optimal Sobolev critic. As discussed in Section A.2, we see that the gradient of the critic (with respect to the input) defines, on the support of $\mu$, a transportation plan for moving the distribution mass from $\mathbb{Q}$ to $\mathbb{P}$.
5 Sobolev GAN
Now we turn to the problem of learning GANs with the Sobolev IPM. Given the "real distribution" $\mathbb{P}_r$, our goal is to learn a generator $g_\theta$ such that for $z \sim p_z$, the distribution of $g_\theta(z)$ is close to the real data distribution $\mathbb{P}_r$, where $p_z$ is a fixed distribution (for instance a standard Gaussian). We note $\mathbb{Q}_\theta$ the "fake distribution" of $g_\theta(z)$. We consider a critic $f_p$ and a generator $g_\theta$ parametrized by neural networks. We consider these choices for the dominant measure $\mu$:

$\mu = \frac{\mathbb{P}_r + \mathbb{Q}_\theta}{2}$, i.e. $x \sim \mathbb{P}_r$ or $x \sim \mathbb{Q}_\theta$ with equal probability $\frac{1}{2}$.

$\mu_{GP}$ is the implicit distribution defined by the interpolation lines between $\mathbb{P}_r$ and $\mathbb{Q}_\theta$, as in (Gulrajani et al., 2017): $\tilde{x} = u x + (1-u)\tilde{y}$, where $x \sim \mathbb{P}_r$, $\tilde{y} \sim \mathbb{Q}_\theta$, and $u \sim \mathrm{Unif}[0,1]$.
Sobolev GAN can be written as follows:

$$\min_{g_\theta}\ \sup_{f_p,\ \hat\Omega_S(f_p) = 1}\ \mathbb{E}_{x\sim\mathbb{P}_r} f_p(x) - \mathbb{E}_{z\sim p_z} f_p(g_\theta(z)).$$

For any choice of the parametric function class, we note the Sobolev constraint $\hat\Omega_S(f_p) = \mathbb{E}_{x\sim\mu}\|\nabla_x f_p(x)\|^2$. For example, if $\mu = \frac{\mathbb{P}_r + \mathbb{Q}_\theta}{2}$, then $\hat\Omega_S(f_p) = \frac{1}{2}\,\mathbb{E}_{x\sim\mathbb{P}_r}\|\nabla_x f_p(x)\|^2 + \frac{1}{2}\,\mathbb{E}_{z\sim p_z}\|\nabla_x f_p(g_\theta(z))\|^2$. Note that, since the optimal theoretical critic is achieved on the sphere, we impose a sphere constraint rather than a ball constraint. Similar to (Mroueh & Sercu, 2017), we define the Augmented Lagrangian corresponding to the Sobolev GAN objective and constraint:

$$\mathscr{L}_S(p,\theta,\lambda) = \mathbb{E}_{x\sim\mathbb{P}_r} f_p(x) - \mathbb{E}_{z\sim p_z} f_p(g_\theta(z)) + \lambda\big(1 - \hat\Omega_S(f_p)\big) - \frac{\rho}{2}\big(\hat\Omega_S(f_p) - 1\big)^2, \tag{9}$$

where $\lambda$ is the Lagrange multiplier and $\rho > 0$ is the quadratic penalty weight. We alternate between optimizing the critic and the generator. We impose the constraint when training the critic only. Given $\theta$, we solve $\max_p \min_\lambda \mathscr{L}_S(p,\theta,\lambda)$ for training the critic. Then, given the critic parameters $p$, we optimize the generator weights $\theta$ to minimize the objective $\mathbb{E}_{x\sim\mathbb{P}_r} f_p(x) - \mathbb{E}_{z\sim p_z} f_p(g_\theta(z))$. See Algorithm 1.
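The ALM updates can be illustrated on a toy problem that mimics the sphere constraint on the critic — maximize a linear objective subject to a unit-norm constraint. The objective, step sizes, and iteration count below are arbitrary choices for a sketch, not Algorithm 1 itself:

```python
import numpy as np

# Toy ALM: maximize c . theta subject to ||theta||^2 = 1, alternating
# ascent on the "critic" parameters and descent on the multiplier.
c = np.array([1.0, 1.0])
theta = np.array([0.1, -0.2])
lam, rho, lr = 0.0, 1.0, 0.05

for _ in range(5000):
    omega = theta @ theta                        # constraint value Omega(theta)
    # gradient of  c.theta + lam*(1 - omega) - (rho/2)*(omega - 1)^2  wrt theta
    grad = c - 2.0 * lam * theta - 2.0 * rho * (omega - 1.0) * theta
    theta = theta + lr * grad                    # ascent step on the parameters
    lam = lam - rho * lr * (1.0 - omega)         # descent step on the multiplier

print(theta, theta @ theta)   # approaches c / ||c|| with unit norm
```

The quadratic penalty stabilizes the constraint early on, while the multiplier removes the residual bias; this is the same division of labor as in Equation (9).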
Relation to WGANGP.
WGAN-GP can be written as follows:

$$\min_{g_\theta}\ \sup_{f_p}\ \mathbb{E}_{x\sim\mathbb{P}_r} f_p(x) - \mathbb{E}_{z\sim p_z} f_p(g_\theta(z)) - \lambda\,\mathbb{E}_{\tilde{x}\sim\mu_{GP}} \big(\|\nabla_{\tilde{x}} f_p(\tilde{x})\| - 1\big)^2.$$
The main difference between WGAN-GP and our setting is that WGAN-GP enforces pointwise constraints on points drawn from $\mu_{GP}$ via a pointwise quadratic penalty, while we enforce the constraint on average as a Sobolev norm, which allows the coordinate-weighted conditional CDF interpretation of the IPM.
6 Applications of Sobolev GAN
Sobolev IPM has two important properties. The first stems from the conditioning built into the metric through the weighted conditional CDF interpretation. The second stems from the diffusion properties that the critic of Sobolev IPM satisfies (Appendix A), which have theoretical and practical ties to the Laplacian regularizer and diffusion on manifolds used in semi-supervised learning (Belkin et al., 2006).
In this section, we exploit these two properties in two applications of Sobolev GAN: text generation and semi-supervised learning. First, in text generation, which can be seen as discrete sequence generation, Sobolev GAN (and WGAN-GP) enables training GANs without the need for explicit brute-force conditioning. We attribute this to the built-in conditioning in Sobolev IPM (for the sequence aspect) and to the CDF matching (for the discrete aspect). Second, using GANs in semi-supervised learning is a promising avenue for learning with unlabeled data. We show that a variant of Sobolev GAN can achieve strong SSL results on the CIFAR-10 dataset, without any form of activation normalization in the networks or any extra ad hoc tricks.
6.1 Text Generation with Sobolev GAN
In this section, we present an empirical study of Sobolev GAN in character-level text generation. Our empirical study on end-to-end training of character-level GANs for text generation is articulated along four dimensions: (1) the loss used (GP: WGAN-GP (Gulrajani et al., 2017), S: Sobolev, or F: Fisher); (2) the architecture of the critic (Resnet or RNN); (3) the architecture of the generator (Resnet, RNN, or RNN with curriculum learning); (4) the sampling distribution $\mu$ in the constraint.
Text Generation Experiments. We train a character-level GAN on the Google Billion Word dataset and follow the same experimental setup used in (Gulrajani et al., 2017). The generated sequences have a fixed length, and the evaluation is based on the Jensen-Shannon divergence between empirical 4-gram probabilities (JS-4) of validation data and generated data. JS-4 may not be an ideal evaluation criterion, but it is a reasonable metric for current character-level GAN results, which are still far from generating meaningful sentences.
Annealed smoothing of the discrete $\mathbb{P}_r$ in the constraint $\hat\Omega_S$. Since the generator distribution is always defined on a continuous space, we can replace the discrete "real" distribution $\mathbb{P}_r$ with a smoothed version obtained by Gaussian kernel smoothing, which corresponds to adding Gaussian noise to the real samples appearing in the constraint. Note that we only inject noise into the "real" distribution, with the goal of smoothing the support of the discrete distribution, as opposed to instance noise on both "real" and "fake" samples to stabilize the training, as introduced in (Kaae Sønderby et al., 2017; Arjovsky & Bottou, 2017). As is common in optimization by continuation (Mobahi & Fisher III, 2015), we also anneal the noise level as the training progresses, on a linear schedule.
Sobolev GAN versus WGAN-GP with Resnets. In this setting, we compare WGAN-GP (G=Resnet, D=Resnet) to Sobolev GAN (G=Resnet, D=Resnet), where $\mu$ is one of: (1) $\mu = \frac{\mathbb{P}_r+\mathbb{Q}_\theta}{2}$, (2) the noise-smoothed $\mu$, or (3) the noise-smoothed $\mu$ with annealing of the initial noise level. We use the same Resnet architectures with 1D convolutions for the critic and the generator as in (Gulrajani et al., 2017) (4 resnet blocks). In order to implement the noise smoothing, we transform the data into one-hot vectors. Each one-hot vector is transformed into a probability vector, with a value close to one replacing the one and a small value replacing the zeros. We then add noise sampled from a Gaussian distribution and use a softmax to normalize. We use Algorithm 1 for Sobolev GAN with a fixed learning rate and penalty weight $\rho$. The noise level was annealed following a linear schedule starting from an initial noise level. For WGAN-GP, we used the open-source implementation with the penalty as in (Gulrajani et al., 2017). Results are given in Figure 2(a) for the JS-4 evaluation of both WGAN-GP and Sobolev GAN. In Figure 2(b), we show the JS-4 evaluation of Sobolev GAN with annealed noise smoothing for various values of the initial noise level. We see that the training succeeds in both cases. Sobolev GAN achieves slightly better results than WGAN-GP for annealing that starts with a high noise level. We note that without smoothing and annealing, i.e. using $\mu = \frac{\mathbb{P}_r+\mathbb{Q}_\theta}{2}$, Sobolev GAN lags behind. Annealed smoothing of $\mathbb{P}_r$ helps the training, as the real distribution slowly transitions from a continuous distribution to a discrete one. See Appendix C (Figure 5) for a comparison between annealed and non-annealed smoothing. We give in Appendix C a comparison of WGAN-GP and Sobolev GAN for a Resnet generator architecture and an RNN critic. The RNN has degraded performance due to optimization difficulties.
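The one-hot smoothing and linear annealing described above can be sketched as follows; the smoothing weight `gamma`, the initial noise level, and the schedule length are hypothetical placeholders, since the exact values were lost from this copy of the text:

```python
import numpy as np

rng = np.random.default_rng(0)

def smooth_onehot(tokens, vocab, sigma, gamma=0.9):
    """One-hot -> probability vector (gamma at the token, the remaining mass
    spread over the other symbols), plus Gaussian noise, renormalized by a
    softmax. gamma and sigma are illustrative placeholders."""
    soft = np.full((len(tokens), vocab), (1.0 - gamma) / (vocab - 1))
    soft[np.arange(len(tokens)), tokens] = gamma
    noisy = soft + sigma * rng.normal(size=soft.shape)
    z = np.exp(noisy - noisy.max(axis=1, keepdims=True))   # stable softmax
    return z / z.sum(axis=1, keepdims=True)

def sigma_at(t, sigma0, max_iter):
    # linear annealing schedule: from sigma0 down to 0 over training
    return sigma0 * max(0.0, 1.0 - t / max_iter)

x = smooth_onehot(np.array([2, 0, 1]), vocab=5, sigma=sigma_at(0, 1.5, 30_000))
print(x.shape, x.sum(axis=1))   # each row is a valid probability vector
```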
Fisher GAN Curriculum Conditioning versus Sobolev GAN: Explicit versus Implicit Conditioning. We analyze how Fisher GAN behaves under different architectures of generators and critics. We first fix the generator to be a Resnet. We study three different critic architectures: Resnet, GRU (we follow the experimental setup from (Press et al., 2017)), and hybrid Resnet+GRU (Reed et al., 2016). We notice that the RNN is unstable: we need to clip the gradient values of the critic as well as the gradient of the Lagrange multiplier. We fix the penalty weight $\rho$ and search over the learning rate. We see that for a Resnet generator and the various critic architectures, Fisher GAN fails at the task of text generation (Figure 3 a-c). Nevertheless, when using RNN critics (Figure 3 b, c), a marginal improvement happens over the fully collapsed state obtained with a Resnet critic (Figure 3 a). We hypothesize that RNN critics enable some conditioning and factoring of the distribution, which is lacking in Fisher IPM.
Finally, Figure 3(d) shows the result of training with a recurrent generator and critic. We follow (Press et al., 2017) in terms of GRU architecture, but differ by using Fisher GAN rather than WGAN-GP. We use $\mu = \frac{\mathbb{P}_r+\mathbb{Q}_\theta}{2}$, i.e. without annealed noise smoothing. We train (F, D=RNN, G=RNN) using curriculum conditioning of the generator for all lengths, as done in (Press et al., 2017): the generator is conditioned on a prefix of ground-truth characters and predicts the remaining characters, and the predicted length is incremented on a regular schedule (every 15k updates). JS-4 is only computed once the generator predicts whole sequences. We see in Figure 3 that, under curriculum conditioning with recurrent critics and generators, the training of Fisher GAN succeeds and reaches levels similar to Sobolev GAN (and WGAN-GP). Note that the need for this explicit brute-force conditioning in Fisher GAN highlights the implicit conditioning induced by Sobolev GAN via the gradient regularizer, without the need for curriculum conditioning.
6.2 SemiSupervised Learning with Sobolev GAN
A proper and promising framework for evaluating GANs consists of using them as regularizers in the semi-supervised learning setting (Salimans et al., 2016; Dumoulin et al., 2017; Kumar et al., 2017). As mentioned before, the Sobolev norm as a regularizer for the Sobolev IPM draws connections to Laplacian regularization in manifold learning (Belkin et al., 2006). In the Laplacian framework of semi-supervised learning, the classifier satisfies a smoothness constraint imposed by controlling its Sobolev norm (Alaoui et al., 2016). In this section, we present a variant of Sobolev GAN that achieves competitive performance in semi-supervised learning on the CIFAR-10 dataset (Krizhevsky & Hinton, 2009) without using any internal activation normalization in the critic, such as batch normalization (BN) (Ioffe & Szegedy, 2015), layer normalization (LN) (Ba et al., 2016), or weight normalization (Salimans & Kingma, 2016). In this setting, a convolutional neural network is shared between the cross-entropy (CE) training of a $K$-class classifier and the critic of the GAN (see Figure 4). We have the following training equations for the (critic + classifier) and the generator:
(10) 
(11) 
where the main IPM objective is estimated with samples: $\hat{\mathscr{E}}(f_p, g_\theta) = \frac{1}{N}\sum_{i=1}^N f_p(x_i) - \frac{1}{M}\sum_{j=1}^M f_p(g_\theta(z_j))$.
Following (Mroueh & Sercu, 2017), we use a "K+1 parametrization" for the critic (see Figure 4).
Note that the classifier appears both in the critic formulation and in the cross-entropy term in Equation (10). Intuitively, this critic uses the class directions of the classifier to define the "real" direction, which competes with another, (K+1)-th direction that indicates fake samples. This parametrization adapts the idea of (Salimans et al., 2016), which was formulated specifically for the classic KL/JSD-based GANs, to IPM-based GANs. We saw consistently better results with the K+1 formulation over the regular formulation, in which the classification layer does not interact with the critic direction. We also note that when applying a gradient-penalty-based constraint (either WGAN-GP or Sobolev) on the full critic, it is impossible for the network to fit even the small labeled training set (underfitting), causing bad SSL performance. This leads us to the formulation below, where we apply the Sobolev constraint only on the "fake" part of the critic. Throughout this section we fix the choice of $\mu$.
We propose the following two schemes for constraining the K+1 critic $f$:
1) Fisher constraint on the critic. We restrict the critic to the following set:

$$\Big\{ f : \hat\Omega_F(f) = \tfrac{1}{2}\,\hat{\mathbb{E}}_{x\sim\mathbb{P}_r} f^2(x) + \tfrac{1}{2}\,\hat{\mathbb{E}}_{z\sim p_z} f^2(g_\theta(z)) = 1 \Big\}.$$

This constraint translates to the following ALM objective in Equation (10):

$$\hat{\mathscr{E}}(f, g_\theta) + \lambda\big(1 - \hat\Omega_F(f)\big) - \frac{\rho}{2}\big(\hat\Omega_F(f) - 1\big)^2,$$

where the Fisher constraint ensures the stability of the training through an implicit whitened mean matching (Mroueh & Sercu, 2017).
2) Fisher+Sobolev constraint. We impose two constraints on the critic: a Fisher constraint on the full critic $f$ and a Sobolev constraint on the "fake" part of the critic. This translates to the following ALM in Equation (10):

$$\hat{\mathscr{E}}(f, g_\theta) + \lambda_F\big(1 - \hat\Omega_F(f)\big) - \frac{\rho_F}{2}\big(\hat\Omega_F(f) - 1\big)^2 + \lambda_S\big(1 - \hat\Omega_S(f)\big) - \frac{\rho_S}{2}\big(\hat\Omega_S(f) - 1\big)^2.$$

Note that the Fisher constraint on $f$ ensures the stability of the training, and the Sobolev constraint on the "fake" critic enforces smoothness of the "fake" critic and thus of the shared CNN. This is related to the classic Laplacian regularization in semi-supervised learning (Belkin et al., 2006).
Table 2 shows results of SSL on CIFAR-10 comparing the two proposed formulations. Similar to the standard procedure in other GAN papers, we do hyperparameter and model selection on the validation set. We present baselines with a similar model architecture and leave out results with significantly larger convnets. We indicate with * baselines that use either additional models like PixelCNN, data augmentation (translations and flips), or a much larger model, any of which gives an advantage over our plain, simple training method. G and D architectures and hyperparameters are given in Appendix D. Our architecture is similar to (Salimans et al., 2016; Dumoulin et al., 2017; Mroueh & Sercu, 2017), but note that we do not use any batch, layer, or weight normalization, yet obtain strong competitive accuracies. We hypothesize that we do not need any normalization in the critic because of the implicit whitening of the feature maps introduced by the Fisher and Sobolev constraints, as explained in (Mroueh & Sercu, 2017).

Number of labeled examples  1000  2000  4000  8000

Model  Misclassification rate  
CatGAN (Springenberg, 2015)  
FM (Salimans et al., 2016)  
ALI (Dumoulin et al., 2017)  
Tangents Reg (Kumar et al., 2017)  
model (Laine & Aila, 2016) *  
VAT (Miyato et al., 2017)  
Bad Gan (Dai et al., 2017) *  
VAT+EntMin+Large (Miyato et al., 2017) *  
Sajjadi (Sajjadi et al., 2016) *  
Fisher, layer norm (Mroueh & Sercu, 2017)  
Fisher, no norm (Mroueh & Sercu, 2017)  
Fisher+Sobolev, no norm (This Work) 
CIFAR-10 error rates for varying numbers of labeled samples in the training set. Mean and standard deviation computed over 5 runs. We only use the K+1 formulation of the critic. Note that we achieve strong SSL performance without any additional tricks, even though the critic does not have any batch, layer, or weight normalization.

7 Conclusion
We introduced the Sobolev IPM and showed that it amounts to a comparison between weighted (coordinate-wise) CDFs. We presented an ALM algorithm for training Sobolev GAN. The intrinsic conditioning implied by the Sobolev IPM explains the success of gradient regularization in Sobolev GAN and WGAN-GP on discrete sequence data, and particularly in text generation. We highlighted the important tradeoffs between the implicit conditioning introduced by the gradient regularizer in Sobolev IPM and the explicit conditioning of Fisher IPM via recurrent critics and generators in conjunction with curriculum conditioning. Both approaches succeed in text generation. We showed that Sobolev GAN achieves competitive semi-supervised learning results without the need for any normalization, thanks to the smoothness induced by the gradient regularizer. We think the Sobolev IPM point of view will open the door to designing new regularizers that induce different types of conditioning for general structured/discrete/graph data beyond sequences.
References
 Alaoui et al. (2016) Ahmed El Alaoui, Xiang Cheng, Aaditya Ramdas, Martin J. Wainwright, and Michael I. Jordan. Asymptotic behavior of ℓp-based Laplacian regularization in semi-supervised learning. CoRR, abs/1603.00564, 2016.
 Arjovsky & Bottou (2017) Martin Arjovsky and Léon Bottou. Towards principled methods for training generative adversarial networks. In ICLR, 2017.
 Arjovsky et al. (2017) Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein GAN. In ICML, 2017.
 Ba et al. (2016) Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv:1607.06450, 2016.
 Belkin et al. (2006) Mikhail Belkin, Partha Niyogi, and Vikas Sindhwani. Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. JMLR, 2006.
 Bellemare et al. (2017) Marc G. Bellemare, Ivo Danihelka, Will Dabney, Shakir Mohamed, Balaji Lakshminarayanan, Stephan Hoyer, and Rémi Munos. The Cramér distance as a solution to biased Wasserstein gradients. CoRR, abs/1705.10743, 2017.
 Che et al. (2017) Tong Che, Yanran Li, Ruixiang Zhang, Devon R Hjelm, Wenjie Li, Yangqiu Song, and Yoshua Bengio. Maximum-likelihood augmented discrete generative adversarial networks. arXiv:1702.07983, 2017.
 Dai et al. (2017) Zihang Dai, Zhilin Yang, Fan Yang, William W Cohen, and Ruslan Salakhutdinov. Good semi-supervised learning that requires a bad GAN. arXiv:1705.09783, 2017.
 Dumoulin et al. (2017) Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropietro, and Aaron Courville. Adversarially learned inference. ICLR, 2017.
 Dziugaite et al. (2015) Gintare Karolina Dziugaite, Daniel M. Roy, and Zoubin Ghahramani. Training generative neural networks via maximum mean discrepancy optimization. In UAI, 2015.
 Ekeland & Turnbull (1983) I. Ekeland and T. Turnbull. Infinite-dimensional Optimization and Convexity. The University of Chicago Press, 1983.
 Goodfellow et al. (2014) Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS, 2014.
 Gretton et al. (2012) Arthur Gretton, Karsten M Borgwardt, Malte J Rasch, Bernhard Schölkopf, and Alexander Smola. A kernel two-sample test. JMLR, 2012.
 Gulrajani et al. (2017) Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron Courville. Improved training of Wasserstein GANs. arXiv:1704.00028, 2017.
 Hjelm et al. (2017) R. Devon Hjelm, Athul Paul Jacob, Tong Che, Kyunghyun Cho, and Yoshua Bengio. Boundary-seeking generative adversarial networks. arXiv:1702.08431, 2017.
 Ioffe & Szegedy (2015) Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. Proc. ICML, 2015.

 Kaae Sønderby et al. (2017) Casper Kaae Sønderby, Jose Caballero, Lucas Theis, Wenzhe Shi, and Ferenc Huszár. Amortised MAP inference for image super-resolution. ICLR, 2017.
 Krizhevsky & Hinton (2009) A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. Master's thesis, 2009.
 Kumar et al. (2017) Abhishek Kumar, Prasanna Sattigeri, and P Thomas Fletcher. Improved semisupervised learning with gans using manifold invariances. NIPS, 2017.
 Kusner & Hernández-Lobato (2016) Matt J. Kusner and José Miguel Hernández-Lobato. GANs for sequences of discrete elements with the Gumbel-Softmax distribution. arXiv:1611.04051, 2016.
 Laine & Aila (2016) Samuli Laine and Timo Aila. Temporal ensembling for semi-supervised learning. arXiv:1610.02242, 2016.
 Li et al. (2017) Chun-Liang Li, Wei-Cheng Chang, Yu Cheng, Yiming Yang, and Barnabás Póczos. MMD GAN: Towards deeper understanding of moment matching network. In NIPS, 2017.
 Li et al. (2015) Yujia Li, Kevin Swersky, and Richard Zemel. Generative moment matching networks. In ICML, 2015.
 Liu (2017) Qiang Liu. Stein variational gradient descent as gradient flow. In NIPS, 2017.
 Liu & Wang (2016) Qiang Liu and Dilin Wang. Stein variational gradient descent: A general purpose Bayesian inference algorithm. In NIPS, 2016.
 Liu et al. (2016) Qiang Liu, Jason D. Lee, and Michael I. Jordan. A kernelized Stein discrepancy for goodness-of-fit tests. In ICML, 2016.
 Mao et al. (2017) Xudong Mao, Qing Li, Haoran Xie, Raymond Y. K. Lau, and Zhen Wang. Least squares generative adversarial networks. In ICCV, 2017.
 Miyato et al. (2017) Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, and Shin Ishii. Virtual adversarial training: a regularization method for supervised and semi-supervised learning. arXiv:1704.03976, 2017.

 Mobahi & Fisher (2015) Hossein Mobahi and John W. Fisher III. A theoretical analysis of optimization by Gaussian continuation. In AAAI, 2015.
 Mroueh & Sercu (2017) Youssef Mroueh and Tom Sercu. Fisher GAN. In NIPS, 2017.
 Mroueh et al. (2017) Youssef Mroueh, Tom Sercu, and Vaibhava Goel. McGan: Mean and covariance feature matching GAN. In ICML, 2017.
 Müller (1997) Alfred Müller. Integral probability metrics and their generating classes of functions. Advances in Applied Probability, 1997.
 Nowozin et al. (2016) Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. f-GAN: Training generative neural samplers using variational divergence minimization. In NIPS, 2016.
 Press et al. (2017) Ofir Press, Amir Bar, Ben Bogin, Jonathan Berant, and Lior Wolf. Language generation with recurrent generative adversarial networks without pretraining. arXiv:1706.01399, 2017.
 Radford et al. (2015) Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv:1511.06434, 2015.
 Rajeswar et al. (2017) Sai Rajeswar, Sandeep Subramanian, Francis Dutil, Christopher Pal, and Aaron Courville. Adversarial generation of natural language. arXiv:1705.10929, 2017.
 Reed et al. (2016) Scott E. Reed, Zeynep Akata, Santosh Mohan, Samuel Tenka, Bernt Schiele, and Honglak Lee. Learning what and where to draw. In NIPS, pp. 217–225, 2016.
 Sajjadi et al. (2016) Mehdi Sajjadi, Mehran Javanmardi, and Tolga Tasdizen. Regularization with stochastic transformations and perturbations for deep semi-supervised learning. In NIPS, pp. 1163–1171, 2016.
 Salimans & Kingma (2016) Tim Salimans and Diederik P. Kingma. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. In NIPS, pp. 901–909, 2016.
 Salimans et al. (2016) Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. In NIPS, 2016.
 Springenberg (2015) Jost Tobias Springenberg. Unsupervised and semi-supervised learning with categorical generative adversarial networks. arXiv:1511.06390, 2015.
 Sriperumbudur et al. (2009) Bharath K. Sriperumbudur, Kenji Fukumizu, Arthur Gretton, Bernhard Schölkopf, and Gert R. G. Lanckriet. On integral probability metrics, ϕ-divergences and binary classification. 2009.

 Sriperumbudur et al. (2012) Bharath K. Sriperumbudur, Kenji Fukumizu, Arthur Gretton, Bernhard Schölkopf, and Gert R. G. Lanckriet. On the empirical estimation of integral probability metrics. Electronic Journal of Statistics, 2012.
 Wang & Liu (2016) Dilin Wang and Qiang Liu. Learning to draw samples: With application to amortized MLE for generative adversarial learning. CoRR, abs/1611.01722, 2016.
 Yu et al. (2016) Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. SeqGAN: Sequence generative adversarial nets with policy gradient. CoRR, abs/1609.05473, 2016.

 Zhao et al. (2017) Junbo Jake Zhao, Yoon Kim, Kelly Zhang, Alexander M. Rush, and Yann LeCun. Adversarially regularized autoencoders for generating discrete structures. CoRR, 2017.
Appendix A Theory: Approximation and Transport Interpretation
In this Section we present the theoretical properties of Sobolev IPM and how it rela