McGan: Mean and Covariance Feature Matching GAN

02/27/2017, by Youssef Mroueh et al.

We introduce new families of Integral Probability Metrics (IPM) for training Generative Adversarial Networks (GAN). Our IPMs are based on matching statistics of distributions embedded in a finite dimensional feature space. Mean and covariance feature matching IPMs allow for stable training of GANs, which we will call McGan. McGan minimizes a meaningful loss between distributions.


1 Introduction

Unsupervised learning of distributions is an important problem, in which we aim to learn underlying features that unveil the hidden structure in the data. The classic approach to learning distributions is to explicitly parametrize the data likelihood and fit this model by maximizing the likelihood of the real data. An alternative recent approach is to learn a generative model of the data without explicit parametrization of the likelihood. Variational Auto-Encoders (VAE) (Kingma & Welling, 2013) and Generative Adversarial Networks (GAN) (Goodfellow et al., 2014) fall under this category.

We focus on the GAN approach. In a nutshell, GANs learn a generator of the data via a min-max game between the generator and a discriminator, which learns to distinguish between “real” and “fake” samples. In this work we focus on the objective function that is being minimized between the learned generator distribution P_θ and the real data distribution P_r.

The original work of (Goodfellow et al., 2014) showed that in GAN this objective is the Jensen-Shannon divergence. (Nowozin et al., 2016) showed that other f-divergences can be successfully used. The Maximum Mean Discrepancy (MMD) objective for GAN training was proposed in (Li et al., 2015; Dziugaite et al., 2015). As shown empirically in (Salimans et al., 2016), one can train the GAN discriminator using the objective of (Goodfellow et al., 2014) while training the generator using mean feature matching. An energy based objective for GANs was also developed recently (Zhao et al., 2017). Finally, closely related to our paper, the recent Wasserstein GAN (WGAN) of (Arjovsky et al., 2017) proposed to use the Earth Mover Distance (EM) as an objective for training GANs. Furthermore, (Arjovsky et al., 2017) show that the EM objective has many advantages: the loss function correlates with the quality of the generated samples, and the mode dropping problem is reduced in WGAN.

In this paper, inspired by the MMD distance and the kernel mean embedding of distributions (Muandet et al., 2016), we propose to embed distributions in a finite dimensional feature space and to match them based on their mean and covariance feature statistics. Incorporating first and second order statistics has a better chance to capture the various modes of the distribution. While mean feature matching was empirically used in (Salimans et al., 2016), we show in this work that it is theoretically grounded: similarly to the EM distance in (Arjovsky et al., 2017), mean and covariance feature matching of two distributions can be written as a distance in the framework of Integral Probability Metrics (IPM) (Müller, 1997). To match the means, we can use any ℓ_q norm; hence we refer to the mean matching IPM as IPM_μ. For matching covariances, in this paper we consider the Ky Fan k-norm (the nuclear norm of the truncated covariance difference), which can be computed cheaply without explicitly constructing the full covariance matrices, and refer to the corresponding IPM as IPM_Σ.

Our technical contributions can be summarized as follows:

a) We show in Section 3 that the mean feature matching IPM_μ has two equivalent primal and dual formulations, and can be used as an objective for GAN training in both formulations.

b) We show in Section 3.3 that the parametrization used in Wasserstein GAN corresponds to mean feature matching GAN (an IPM_μ GAN in our framework).

c) We show in Section 4.2 that the covariance feature matching IPM_Σ also admits equivalent primal and dual formulations, and can be used as an objective for GAN training.

d) Similar to Wasserstein GAN, we show that mean and covariance feature matching GANs (McGan) are stable to train, exhibit reduced mode dropping, and that the IPM loss correlates with the quality of the generated samples.

2 Integral Probability Metrics

We define in this Section IPMs as a distance between distributions. Intuitively, each IPM finds a “critic” (Arjovsky et al., 2017) which maximally discriminates between the two distributions.

2.1 IPM Definition

Consider a compact space X ⊂ ℝ^d. Let F be a set of measurable, bounded, real-valued functions on X. Let P(X) be the set of measurable probability distributions on X. Given two probability distributions P, Q ∈ P(X), the Integral Probability Metric (IPM) indexed by the function space F is defined as follows (Müller, 1997):

d_F(P, Q) = sup_{f ∈ F} | E_{x∼P}[f(x)] − E_{x∼Q}[f(x)] |.

In this paper we are interested in symmetric function spaces F, i.e. f ∈ F implies −f ∈ F, hence we can write the IPM in that case without the absolute value:

d_F(P, Q) = sup_{f ∈ F} { E_{x∼P}[f(x)] − E_{x∼Q}[f(x)] }      (1)

It is easy to see that d_F defines a pseudo-metric over P(X) (d_F is non-negative, symmetric and satisfies the triangle inequality; a pseudo-metric means that d_F(P, P) = 0, but d_F(P, Q) = 0 does not necessarily imply P = Q).

By choosing F appropriately (Sriperumbudur et al., 2012, 2009), various distances between probability measures can be defined. In the next subsection, following (Arjovsky et al., 2017; Li et al., 2015; Dziugaite et al., 2015), we show how to use IPMs to learn generative models of distributions; we then specify a special set of functions F that makes the learning tractable.

2.2 Learning Generative Models with IPM

In order to learn a generative model of a distribution P_r ∈ P(X), we learn a function g_θ : Z ⊂ ℝ^{n_z} → X such that for z ∼ p_z, the distribution of g_θ(z) is close to the real data distribution P_r, where p_z is a fixed distribution on Z (for instance z ∼ N(0, I_{n_z})). Let P_θ be the distribution of g_θ(z), z ∼ p_z. Using an IPM indexed by a function class F we shall therefore solve the following problem:

min_{g_θ} d_F(P_r, P_θ)      (2)

Hence this amounts to solving the following min-max problem:

min_{g_θ} sup_{f ∈ F} E_{x∼P_r}[f(x)] − E_{z∼p_z}[f(g_θ(z))]

Given samples {x_i, 1 ≤ i ≤ N} from P_r and samples {z_j, 1 ≤ j ≤ M} from p_z, we shall solve the following empirical problem:

min_{g_θ} sup_{f ∈ F} (1/N) Σ_{i=1}^{N} f(x_i) − (1/M) Σ_{j=1}^{M} f(g_θ(z_j));

in the following we consider for simplicity M = N.

Figure 1: Motivating example on synthetic data in 2D, showing how different components in covariance matching can target different regions of the input space. Mean matching (a) is not able to capture the two modes of the bimodal “real” distribution and assigns higher values to one of the modes. Covariance matching (b) is composed of the sum of three components (c)+(d)+(e), corresponding to the top three “critic directions”. Interestingly, the first direction (c) focuses on the “fake” data, the second direction (d) focuses on the “real” data, while the third direction (e) is mode selective. This suggests that using covariance matching would help reduce mode dropping in GAN. In this toy example Φ_w is a fixed random Fourier feature map (Rahimi & Recht, 2008) of a Gaussian kernel (i.e. a finite dimensional approximation).
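
For concreteness, a fixed feature map of this kind can be sketched as follows; the dimension m = 128 and bandwidth gamma below are illustrative choices, not values taken from the experiment.

    import numpy as np

    def random_fourier_features(X, m=128, gamma=1.0, seed=0):
        """Finite-dimensional feature map approximating a Gaussian kernel
        k(x, y) = exp(-gamma * ||x - y||^2), following Rahimi & Recht (2008)."""
        rng = np.random.default_rng(seed)
        d = X.shape[1]
        W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, m))  # random frequencies
        b = rng.uniform(0.0, 2.0 * np.pi, size=m)                # random phases
        return np.sqrt(2.0 / m) * np.cos(X @ W + b)

    # Inner products of these features approximate the kernel:
    # random_fourier_features(X)[i] @ random_fourier_features(Y)[j] ~= k(x_i, y_j).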

3 Mean Feature Matching GAN

In this Section we introduce a class of functions having the form f(x) = ⟨v, Φ_w(x)⟩, where v ∈ ℝ^m is a vector and Φ_w : X → ℝ^m is a non-linear feature map (typically parametrized by a neural network). We show that the IPM defined by this function class corresponds to the ℓ_q distance between the means of the distributions in the Φ_w space.

3.1 IPM_μ: Mean Matching IPM

More formally, consider the following function space:

F_{v,w,p} = { f(x) = ⟨v, Φ_w(x)⟩ | v ∈ ℝ^m, ‖v‖_p ≤ 1, Φ_w : X → ℝ^m, w ∈ Ω },

where ‖·‖_p is the ℓ_p norm. F_{v,w,p} is the space of bounded linear functions defined in the non-linear feature space induced by the parametric feature map Φ_w. Φ_w is typically a multi-layer neural network. The parameter space Ω is chosen so that the function space F_{v,w,p} is bounded. Note that for a given w, the functions ⟨v, Φ_w(·)⟩, v ∈ ℝ^m, span a finite dimensional Hilbert space.

We recall here simple definitions on dual norms that will be necessary for the analysis in this Section. Let p, q ∈ [1, ∞] such that 1/p + 1/q = 1. By duality of norms we have ‖x‖_q = max_{v, ‖v‖_p ≤ 1} ⟨v, x⟩, and the Hölder inequality: ⟨x, y⟩ ≤ ‖x‖_p ‖y‖_q.

From the Hölder inequality we obtain the following bound:

|f(x)| = |⟨v, Φ_w(x)⟩| ≤ ‖v‖_p ‖Φ_w(x)‖_q ≤ ‖Φ_w(x)‖_q.

To ensure that f is bounded, it is enough to consider Ω such that ‖Φ_w(x)‖_q is bounded for all x ∈ X. Given that the space X is bounded, it is sufficient to control the norms of the weights and biases of the neural network Φ_w, by regularizing the ℓ_∞ (clamping) or ℓ_2 norms (weight decay), to ensure the boundedness of F_{v,w,p}.
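
As an illustration of these two mechanisms (clamping the weights of Φ_w and constraining v), a minimal PyTorch-style sketch; the clip value c = 0.01 and the choice p ∈ {2, ∞} are assumed hyper-parameters, not prescriptions from this paper.

    import torch

    def enforce_boundedness(phi_net, v, p=2, clip=0.01):
        """Keep f(x) = <v, Phi_w(x)> bounded: clamp the feature-map weights w
        and keep v inside the unit l_p ball."""
        with torch.no_grad():
            for param in phi_net.parameters():
                param.clamp_(-clip, clip)                      # l_inf control of w (clamping)
            if p == float("inf"):
                v.clamp_(-1.0, 1.0)                            # v in the l_inf unit ball
            else:
                v.mul_(1.0 / max(1.0, v.norm(p=p).item()))     # rescale v into the l_p ball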

Now that we have ensured the boundedness of F_{v,w,p}, we look at its corresponding IPM:

d_{F_{v,w,p}}(P, Q) = sup_{f ∈ F_{v,w,p}} E_{x∼P}[f(x)] − E_{x∼Q}[f(x)]
                    = max_{w ∈ Ω, ‖v‖_p ≤ 1} ⟨v, E_{x∼P}[Φ_w(x)] − E_{x∼Q}[Φ_w(x)]⟩
                    = max_{w ∈ Ω} ‖ μ_w(P) − μ_w(Q) ‖_q,

where we used the linearity of the function class and expectation in the first equality, the definition of the dual norm ℓ_q in the last equality, and our definition of the mean feature embedding of a distribution P ∈ P(X):

μ_w(P) = E_{x∼P}[Φ_w(x)] ∈ ℝ^m.

We see that the IPM indexed by F_{v,w,p} corresponds to the Maximum mean feature Discrepancy between the two distributions, where the maximum is taken over the parameter set Ω, and the discrepancy is measured in the ℓ_q sense between the mean feature embeddings of P and Q. In other words this IPM is equal to the worst case ℓ_q distance between mean feature embeddings of distributions. We refer in what follows to d_{F_{v,w,p}} as IPM_μ.
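
For a fixed w, the inner quantity of this IPM can be estimated from two minibatches by plugging in empirical means; a minimal sketch, where phi stands for any feature-map callable and q selects the norm:

    import numpy as np

    def mean_feature_discrepancy(x_real, x_fake, phi, q=2):
        """Empirical || mu_w(P) - mu_w(Q) ||_q for a fixed feature map phi."""
        mu_real = phi(x_real).mean(axis=0)   # empirical mean feature embedding of P
        mu_fake = phi(x_fake).mean(axis=0)   # empirical mean feature embedding of Q
        return np.linalg.norm(mu_real - mu_fake, ord=q)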

3.2 Mean Feature Matching GAN

We turn now to the problem of learning generative models with IPM_μ. Setting F to F_{v,w,p} in Equation (2) yields the following min-max problem for learning generative models:

min_{g_θ} max_{w ∈ Ω, ‖v‖_p ≤ 1} L_μ(v, w, θ)      (3)

where L_μ(v, w, θ) = ⟨v, E_{x∼P_r}[Φ_w(x)] − E_{z∼p_z}[Φ_w(g_θ(z))]⟩,

or equivalently using the dual norm:

min_{g_θ} max_{w ∈ Ω} ‖ μ_w(P_r) − μ_w(P_θ) ‖_q      (4)

where μ_w(P_θ) = E_{z∼p_z}[Φ_w(g_θ(z))].

We refer to formulations (3) and (4) as the primal and dual formulation respectively.

The dual formulation in Equation (4) has a simple interpretation as an adversarial learning game: while the feature space Φ_w tries to map the mean feature embeddings of the real distribution P_r and the fake distribution P_θ to be far apart (maximize the ℓ_q distance between the mean embeddings), the generator g_θ tries to put them close to one another. Hence we refer to this IPM as mean matching IPM.

We devise empirical estimates of both formulations in Equations (3) and (4), given samples {x_i} from P_r and {z_j} from p_z. The primal formulation (3) is more amenable to stochastic gradient descent since the expectation operation appears in a linear way in the cost function of Equation (3), while it is non-linear in the cost function of the dual formulation (4) (inside the ℓ_q norm). We give here the empirical estimate of the primal formulation:

(P)  min_{g_θ} max_{w ∈ Ω, ‖v‖_p ≤ 1} L̂_μ(v, w, θ),  where  L̂_μ(v, w, θ) = ⟨v, (1/N) Σ_{i=1}^{N} Φ_w(x_i) − (1/N) Σ_{j=1}^{N} Φ_w(g_θ(z_j))⟩.

An empirical estimate of the dual formulation can also be given as follows:

(D)  min_{g_θ} max_{w ∈ Ω} ‖ (1/N) Σ_{i=1}^{N} Φ_w(x_i) − (1/N) Σ_{j=1}^{N} Φ_w(g_θ(z_j)) ‖_q.

In what follows we refer to the problems given in (P) and (D) as Mean Feature Matching GAN. Note that while (P) does not need real samples for optimizing the generator, (D) does need samples from both real and fake. Furthermore, we will need a large minibatch of real data in order to get a good estimate of the expectation. This makes the primal formulation more appealing computationally.
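
As a sanity check of the primal/dual relationship for q = 2 (a choice made here purely for illustration): the maximizing v in (P) is the normalized difference of empirical mean embeddings, at which point the primal and dual costs coincide.

    import numpy as np

    def primal_cost(feat_real, feat_fake, v):
        """<v, mean(Phi_w(x_i)) - mean(Phi_w(g_theta(z_j)))> for a fixed v."""
        return v @ (feat_real.mean(axis=0) - feat_fake.mean(axis=0))

    def dual_cost(feat_real, feat_fake, q=2):
        """l_q distance between the empirical mean feature embeddings."""
        return np.linalg.norm(feat_real.mean(axis=0) - feat_fake.mean(axis=0), ord=q)

    rng = np.random.default_rng(0)
    feat_r = rng.normal(size=(64, 16))              # stand-in features of a real batch
    feat_f = rng.normal(loc=0.5, size=(64, 16))     # stand-in features of a fake batch
    diff = feat_r.mean(axis=0) - feat_f.mean(axis=0)
    v_star = diff / np.linalg.norm(diff)            # optimal v on the l_2 unit ball
    assert np.isclose(primal_cost(feat_r, feat_f, v_star), dual_cost(feat_r, feat_f))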

3.3 Related Work

We show in this Section that several previous works on GANs can be written within the mean feature matching IPM (IPM_μ) minimization framework:

a) Wasserstein GAN (WGAN): (Arjovsky et al., 2017) recently introduced Wasserstein GAN. While the main motivation of that paper is to consider the IPM indexed by Lipschitz functions on X, we show that the particular parametrization considered in (Arjovsky et al., 2017) corresponds to a mean feature matching IPM. Indeed, (Arjovsky et al., 2017) consider the function set parametrized by a convolutional neural network with a linear output layer and weight clipping. Written in our notation, the last linear layer corresponds to v, and the convolutional neural network below corresponds to Φ_w. Since v and w are simultaneously clamped, this corresponds to restricting v to be in the ℓ_∞ unit ball, and to defining in Ω constraints on the ℓ_∞ norms of w. In other words, (Arjovsky et al., 2017) consider functions in F_{v,w,p} with p = ∞. Setting p = ∞ in Equation (3) and q = 1 in Equation (4), we see that in WGAN we are minimizing d_{F_{v,w,∞}}, which corresponds to a mean feature matching GAN.

b) MMD GAN: Let H be a Reproducing Kernel Hilbert Space (RKHS) with k its reproducing kernel. For any valid PSD kernel k there exists an infinite dimensional feature map Φ such that k(x, y) = ⟨Φ(x), Φ(y)⟩_H. For an RKHS, Φ is usually noted Φ(x) = k(x, ·) and satisfies the reproducing property: f(x) = ⟨f, Φ(x)⟩_H for all f ∈ H.

Setting F = { f | ‖f‖_H ≤ 1 } in Equation (1), the IPM has a simple expression:

d_F(P, Q) = ‖ μ(P) − μ(Q) ‖_H      (5)

where μ(P) = E_{x∼P}[Φ(x)] is the so-called kernel mean embedding (Muandet et al., 2016). d_F in this case is the so-called Maximum Mean Discrepancy (MMD) (Gretton et al., 2012). Using the reproducing property, MMD has a closed form in terms of the kernel k. Note that the mean matching IPM_μ is a special case of MMD when the feature map is finite dimensional, with the main difference that the feature map is fixed in the case of MMD and learned in the case of IPM_μ. (Li et al., 2015; Dziugaite et al., 2015) showed that GANs can be learned using MMD with a fixed Gaussian kernel.
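
For reference, the closed-form (biased) MMD estimate with a fixed Gaussian kernel, in the spirit of (Li et al., 2015; Dziugaite et al., 2015); the bandwidth gamma is an assumed hyper-parameter.

    import numpy as np

    def gaussian_kernel(X, Y, gamma=1.0):
        d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
        return np.exp(-gamma * d2)

    def mmd2_biased(X, Y, gamma=1.0):
        """Biased estimate of MMD^2 = ||mu(P) - mu(Q)||_H^2 via the kernel trick."""
        return (gaussian_kernel(X, X, gamma).mean()
                - 2.0 * gaussian_kernel(X, Y, gamma).mean()
                + gaussian_kernel(Y, Y, gamma).mean())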

c) Improved GAN: Building on the pioneering work of (Goodfellow et al., 2014), (Salimans et al., 2016) suggested to learn the discriminator with the binary cross-entropy criterion of GAN while learning the generator with mean feature matching. The main difference in our IPM_μ GAN is that both “discriminator” and “generator” are learned using the mean feature matching criterion, with additional constraints on Φ_w.

4 Covariance Feature Matching GAN

4.1 IPM_Σ: Covariance Matching IPM

As follows from our discussion of mean matching IPM, comparing two distributions amounts to comparing first order statistics, namely the means of their feature embeddings. Here we ask how to incorporate second order statistics, i.e. covariance information of the feature embeddings.

In this Section we will provide a function space such that the IPM in Equation (1) captures second order information. Intuitively a distribution of points represented in a feature space can be approximately captured by its mean and its covariance. Commonly in unsupervised learning, this covariance is approximated by its first k principal components (PCA directions), which capture the directions of maximal variance in the data. Similarly, the metric we define in this Section will find k directions that maximize the discrimination between the two covariances. Adding second order information would enrich the discrimination power of the feature space (see Figure 1).

This intuition motivates the following function space of bilinear functions in Φ_w:

F_{U,V,w} = { f(x) = Σ_{j=1}^{k} ⟨u_j, Φ_w(x)⟩⟨v_j, Φ_w(x)⟩ | {u_j}_{j=1}^{k}, {v_j}_{j=1}^{k} ∈ ℝ^m orthonormal, w ∈ Ω }.

Note that the set F_{U,V,w} is symmetric and hence the IPM indexed by this set (Equation (1)) is well defined. Writing U = [u_1, …, u_k] and V = [v_1, …, v_k], it is easy to see that f can be written as:

f(x) = ⟨U^⊤ Φ_w(x), V^⊤ Φ_w(x)⟩ = Trace( U^⊤ Φ_w(x) Φ_w(x)^⊤ V );

the parameter set Ω is such that the function space remains bounded. Let

Σ_w(P) = E_{x∼P}[ Φ_w(x) Φ_w(x)^⊤ ]

be the uncentered feature covariance embedding of P. It is easy to see that E_{x∼P}[f(x)] can be written in terms of U, V and Σ_w(P):

E_{x∼P}[f(x)] = Trace( U^⊤ Σ_w(P) V ).

For a matrix A ∈ ℝ^{m×m}, we note by σ_j(A) the singular values of A, in descending order. The 1-Schatten norm, or nuclear norm, is defined as the sum of the singular values, ‖A‖_* = Σ_j σ_j(A). We note by [A]_k the k-th rank approximation of A. We note O_{m,k} = { M ∈ ℝ^{m×k} | M^⊤ M = I_k }. Consider the IPM induced by this function set. Let P, Q ∈ P(X); we have:

d_{F_{U,V,w}}(P, Q) = max_{w ∈ Ω} max_{U, V ∈ O_{m,k}} Trace( U^⊤ (Σ_w(P) − Σ_w(Q)) V )
                    = max_{w ∈ Ω} Σ_{j=1}^{k} σ_j( Σ_w(P) − Σ_w(Q) )
                    = max_{w ∈ Ω} ‖ [Σ_w(P) − Σ_w(Q)]_k ‖_*,

where we used the variational definition of singular values and the definition of the nuclear norm. Note that {u_j} and {v_j} are the left and right singular vectors of Σ_w(P) − Σ_w(Q). Hence d_{F_{U,V,w}} measures the worst case distance between the covariance feature embeddings of the two distributions, where this distance is measured with the Ky Fan k-norm (nuclear norm of the truncated covariance difference). Hence we call this IPM the covariance matching IPM, IPM_Σ.
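
Given feature matrices of two sample sets, this Ky Fan k-norm reduces to the sum of the top-k singular values of the empirical covariance difference; a numpy sketch (here the m x m covariances are formed explicitly for clarity, which the primal formulation of the next section avoids):

    import numpy as np

    def ky_fan_k(feat_p, feat_q, k):
        """|| [Sigma_w(P) - Sigma_w(Q)]_k ||_* from empirical features (n x m arrays)."""
        cov_p = feat_p.T @ feat_p / feat_p.shape[0]   # uncentered covariance embedding of P
        cov_q = feat_q.T @ feat_q / feat_q.shape[0]   # uncentered covariance embedding of Q
        s = np.linalg.svd(cov_p - cov_q, compute_uv=False)  # singular values, descending
        return s[:k].sum()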

4.2 Covariance Matching GAN

Turning now to the problem of learning a generative model g_θ of P_r using IPM_Σ, we shall solve:

min_{g_θ} d_{F_{U,V,w}}(P_r, P_θ);

this has the following primal formulation:

min_{g_θ} max_{w ∈ Ω, U, V ∈ O_{m,k}} L_Σ(U, V, w, θ)      (6)

where L_Σ(U, V, w, θ) = E_{x∼P_r} ⟨U^⊤ Φ_w(x), V^⊤ Φ_w(x)⟩ − E_{z∼p_z} ⟨U^⊤ Φ_w(g_θ(z)), V^⊤ Φ_w(g_θ(z))⟩,

or equivalently the following dual formulation:

min_{g_θ} max_{w ∈ Ω} ‖ [Σ_w(P_r) − Σ_w(P_θ)]_k ‖_*      (7)

where Σ_w(P_θ) = E_{z∼p_z}[ Φ_w(g_θ(z)) Φ_w(g_θ(z))^⊤ ].

The dual formulation in Equation (7) shows that learning generative models with IPM_Σ consists in an adversarial game between the feature map and the generator: while the feature map tries to maximize the distance between the feature covariance embeddings of the two distributions, the generator tries to minimize this distance. Hence we call learning with IPM_Σ covariance matching GAN.

We give here an empirical estimate of the primal formulation in Equation (6), which is amenable to stochastic gradient descent; the dual requires nuclear norm minimization and is more involved. Given {x_i, x_i ∼ P_r} and {z_j, z_j ∼ p_z}, the covariance matching GAN can be written as follows:

min_{g_θ} max_{w ∈ Ω, U, V ∈ O_{m,k}} L̂_Σ(U, V, w, θ),  where  L̂_Σ(U, V, w, θ) = (1/N) Σ_{i=1}^{N} ⟨U^⊤ Φ_w(x_i), V^⊤ Φ_w(x_i)⟩ − (1/N) Σ_{j=1}^{N} ⟨U^⊤ Φ_w(g_θ(z_j)), V^⊤ Φ_w(g_θ(z_j))⟩.      (8)
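
A sketch of this empirical objective, given the feature matrices of a real and a fake minibatch and orthonormal U, V of shape m x k (all placeholders):

    import torch

    def covariance_critic_objective(phi_real, phi_fake, U, V):
        """Empirical L_Sigma: mean_i <U^T Phi(x_i), V^T Phi(x_i)>
        minus mean_j <U^T Phi(g(z_j)), V^T Phi(g(z_j))>."""
        real_term = ((phi_real @ U) * (phi_real @ V)).sum(dim=1).mean()
        fake_term = ((phi_fake @ U) * (phi_fake @ V)).sum(dim=1).mean()
        return real_term - fake_term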

4.3 Mean and Covariance Matching GAN

In order to match first and second order statistics we propose the following simple extension:

min_{g_θ} max_{w ∈ Ω, ‖v‖_p ≤ 1, U, V ∈ O_{m,k}} L_μ(v, w, θ) + L_Σ(U, V, w, θ),

that has a simple dual adversarial game interpretation:

min_{g_θ} max_{w ∈ Ω} ‖ μ_w(P_r) − μ_w(P_θ) ‖_q + ‖ [Σ_w(P_r) − Σ_w(P_θ)]_k ‖_*,

where the discriminator finds a feature space that discriminates between means and variances of real and fake, and the generator tries to match the real statistics. We can also give empirical estimates of the primal formulation similar to the expressions given above.
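
A sketch of the corresponding combined empirical critic objective on a minibatch, reusing the covariance term above and adding the mean term (all arguments are placeholders):

    import torch

    def mcgan_critic_objective(phi_real, phi_fake, v, U, V):
        """Mean matching term plus covariance matching term (to be maximized
        over v, U, V, w and minimized over the generator)."""
        mean_term = v @ (phi_real.mean(dim=0) - phi_fake.mean(dim=0))
        cov_term = (((phi_real @ U) * (phi_real @ V)).sum(dim=1).mean()
                    - ((phi_fake @ U) * (phi_fake @ V)).sum(dim=1).mean())
        return mean_term + cov_term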

5 Algorithms

We present in this Section our algorithms for mean and covariance feature matching GAN (McGan) with IPM_μ and IPM_Σ.

Mean Matching GAN. Primal (P): We give in Algorithm 1 an algorithm for solving the primal IPM_μ GAN (P). Algorithm 1 is adapted from (Arjovsky et al., 2017) and corresponds to their algorithm for p = ∞. The main difference is that we allow projection of v on different ℓ_p balls, and we maintain the clipping of w to ensure boundedness of Φ_w. For example, for p = 2, v is rescaled onto the ℓ_2 unit ball, v ← v / max(1, ‖v‖_2); for p = ∞ we obtain the same clipping of v as in (Arjovsky et al., 2017).

Dual (D): We give in Algorithm 2 an algorithm for solving the dual formulation IPM_μ GAN (D). As mentioned earlier, we need samples from both “real” and “fake” for training both the generator and the “critic” feature space.

Covariance Matching GAN. Primal (P): We give in Algorithm 3 an algorithm for solving the primal of IPM_Σ GAN (Equation (8)). The algorithm performs a stochastic gradient ascent on (w, U, V) and a descent on θ. We maintain clipping on w to ensure boundedness of Φ_w, and perform a QR retraction on the Stiefel manifold (Absil et al., 2007), maintaining orthonormality of U and V.

  Input: p to define the ℓ_p ball of v, η learning rate, n_c number of iterations for training the critic, c clipping or weight decay parameter, N batch size
  Initialize v, w, θ
  repeat
     for j = 1 to n_c do
        Sample a minibatch x_1, …, x_N ∼ P_r
        Sample a minibatch z_1, …, z_N ∼ p_z
        (g_v, g_w) ← (∇_v L̂_μ(v, w, θ), ∇_w L̂_μ(v, w, θ))
        (v, w) ← (v, w) + η RMSProp((v, w), (g_v, g_w))
        v ← min(1, 1/‖v‖_p) v   {Project v on the ℓ_p ball}
        w ← clip(w, −c, c)   {Ensure Φ_w is bounded}
     end for
     Sample z_1, …, z_N ∼ p_z
     d_θ ← ∇_θ L̂_μ(v, w, θ) = −∇_θ (1/N) Σ_{j=1}^{N} ⟨v, Φ_w(g_θ(z_j))⟩
     θ ← θ − η RMSProp(θ, d_θ)
  until θ converges
Algorithm 1 Mean Matching GAN - Primal (P)
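
A condensed PyTorch-style sketch of Algorithm 1 for p = 2; the networks phi and gen, the data loader and the hyper-parameter values are placeholders rather than the exact settings used in the paper.

    import torch

    def train_mean_matching_primal(phi, gen, v, real_loader, sample_z,
                                   n_critic=5, lr=5e-5, clip=0.01, n_iters=10000):
        """Sketch of Algorithm 1: ascend the critic (w, v), then descend the generator."""
        opt_critic = torch.optim.RMSprop(list(phi.parameters()) + [v], lr=lr)
        opt_gen = torch.optim.RMSprop(gen.parameters(), lr=lr)
        data = iter(real_loader)                # assumed to yield image batches indefinitely
        for _ in range(n_iters):
            for _ in range(n_critic):
                x = next(data)
                z = sample_z(x.size(0))
                cost = v @ (phi(x).mean(0) - phi(gen(z)).mean(0))   # empirical L_mu
                opt_critic.zero_grad()
                (-cost).backward()                                  # gradient ascent on (v, w)
                opt_critic.step()
                with torch.no_grad():
                    v.mul_(1.0 / max(1.0, v.norm().item()))         # project v on the l_2 ball
                    for prm in phi.parameters():
                        prm.clamp_(-clip, clip)                     # keep Phi_w bounded
            z = sample_z(64)
            gen_cost = -(v @ phi(gen(z)).mean(0))   # generator step only needs fake samples
            opt_gen.zero_grad()
            gen_cost.backward()
            opt_gen.step()
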
  Input: q the matching ℓ_q norm, η learning rate, n_c number of iterations for training the critic, c clipping or weight decay parameter, N batch size
  Initialize w, θ
  repeat
     for j = 1 to n_c do
        Sample a minibatch x_1, …, x_N ∼ P_r
        Sample a minibatch z_1, …, z_N ∼ p_z
        μ̂_w(P_r) ← (1/N) Σ_{i=1}^{N} Φ_w(x_i);  μ̂_w(P_θ) ← (1/N) Σ_{j=1}^{N} Φ_w(g_θ(z_j))
        g_w ← ∇_w ‖ μ̂_w(P_r) − μ̂_w(P_θ) ‖_q
        w ← w + η RMSProp(w, g_w)
        w ← clip(w, −c, c)   {Ensure Φ_w is bounded}
     end for
     Sample z_1, …, z_N ∼ p_z
     Sample a (larger) minibatch x_1, …, x_M ∼ P_r
     μ̂_w(P_r) ← (1/M) Σ_{i=1}^{M} Φ_w(x_i);  μ̂_w(P_θ) ← (1/N) Σ_{j=1}^{N} Φ_w(g_θ(z_j))
     d_θ ← ∇_θ ‖ μ̂_w(P_r) − μ̂_w(P_θ) ‖_q
     θ ← θ − η RMSProp(θ, d_θ)
  until θ converges
Algorithm 2 Mean Matching GAN - Dual (D)
  Input: k the number of components, η learning rate, n_c number of iterations for training the critic, c clipping or weight decay parameter, N batch size
  Initialize w, θ, U, V
  repeat
     for j = 1 to n_c do
        Sample a minibatch x_1, …, x_N ∼ P_r
        Sample a minibatch z_1, …, z_N ∼ p_z
        (g_U, g_V, g_w) ← ∇_{U,V,w} L̂_Σ(U, V, w, θ)
        (U, V, w) ← (U, V, w) + η RMSProp((U, V, w), (g_U, g_V, g_w))
        { Project U and V on the Stiefel manifold O_{m,k} }
        Q_U, R_U ← QR(U);  s_U ← sign(diag(R_U));  U ← Q_U Diag(s_U)
        Q_V, R_V ← QR(V);  s_V ← sign(diag(R_V));  V ← Q_V Diag(s_V)
        w ← clip(w, −c, c)   {Ensure Φ_w is bounded}
     end for
     Sample z_1, …, z_N ∼ p_z
     d_θ ← ∇_θ L̂_Σ(U, V, w, θ) = −∇_θ (1/N) Σ_{j=1}^{N} ⟨U^⊤ Φ_w(g_θ(z_j)), V^⊤ Φ_w(g_θ(z_j))⟩
     θ ← θ − η RMSProp(θ, d_θ)
  until θ converges
Algorithm 3 Covariance Matching GAN - Primal (P)
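
The Stiefel-manifold step of Algorithm 3 (a gradient step on U, V followed by a QR retraction with sign correction) can be sketched as follows:

    import torch

    def qr_retraction(M):
        """Retract an m x k matrix back onto the Stiefel manifold (M^T M = I_k)."""
        Q, R = torch.linalg.qr(M)
        signs = torch.sign(torch.diagonal(R))
        return Q * signs.unsqueeze(0)   # fix the sign ambiguity of the QR factorization

    # Usage inside the critic loop (sketch):
    #   U.data = qr_retraction(U.data + lr * U.grad)
    #   V.data = qr_retraction(V.data + lr * V.grad)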

6 Experiments

We train McGan for image generation with both Mean Matching and Covariance Matching objectives. We show generated images on the labeled faces in the wild (lfw) (Huang et al., 2007), LSUN bedrooms (Yu et al., 2015), and cifar-10 (Krizhevsky & Hinton, 2009) datasets.

It is well-established that evaluating generative models is hard (Theis et al., 2016). Many GAN papers rely on a combination of samples for quality evaluation, supplemented by a number of heuristic quantitative measures. We will mostly focus on training stability by showing plots of the loss function, and will provide generated samples to claim comparable sample quality between methods, but we will avoid claiming better sample quality. These samples are all generated at random and are not cherry-picked.

The design of Φ_w and g_θ follows DCGAN principles (Radford et al., 2015), with both Φ_w and g_θ being convolutional networks with batch normalization (Ioffe & Szegedy, 2015) and ReLU activations. Φ_w has output size batch_size × F × 4 × 4. The inner product ⟨v, Φ_w(x)⟩ can then equivalently be implemented as conv(4x4, F->1) or flatten + Linear(4*4*F -> 1). We generate 64×64 images for lfw and LSUN and 32×32 images on cifar, and train with minibatches of size 64. We follow the experimental framework and implementation of (Arjovsky et al., 2017), where we ensure the boundedness of Φ_w by clipping the weights pointwise to the range [−0.01, 0.01].
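
The equivalence between the two implementations of the inner product ⟨v, Φ_w(x)⟩ can be checked directly; F_MAPS = 512 below is an illustrative number of feature maps, not the value used in the experiments.

    import torch
    import torch.nn as nn

    F_MAPS = 512                                    # number of feature maps F (illustrative)
    phi_out = torch.randn(8, F_MAPS, 4, 4)          # Phi_w output: batch x F x 4 x 4

    conv = nn.Conv2d(F_MAPS, 1, kernel_size=4, bias=False)
    linear = nn.Linear(F_MAPS * 4 * 4, 1, bias=False)
    linear.weight.data = conv.weight.data.view(1, -1)   # same v, two parametrizations

    out_conv = conv(phi_out).view(8)
    out_linear = linear(phi_out.view(8, -1)).view(8)
    assert torch.allclose(out_conv, out_linear, atol=1e-5)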

Primal versus dual form of mean matching. To illustrate the validity of both the primal and dual formulation, we trained mean matching GANs in both the primal and the dual form, see respectively Algorithm 1 and 2. Samples are shown in Figure 2. Note that optimizing the dual form is less efficient and only feasible for mean matching, not for covariance matching. The primal formulation with p = ∞ corresponds to clipping v, i.e. the original WGAN, while for p = 2 we divide v by its ℓ_2 norm whenever it becomes larger than 1. In the dual, for q = 2 we noticed little difference between maximizing the ℓ_2 norm or its square.

Figure 2: Samples generated with the primal (left) and dual (right) formulations, for the two choices of matching norm (top and bottom). (A) lfw (B) LSUN.
Figure 3: Plot of the IPM_μ loss (as in WGAN), during training on lfw, as a function of the number of generator updates. Similar to the observation in (Arjovsky et al., 2017), training is stable and the loss is a useful metric of progress, across the different formulations.

We observed that the default learning rate from WGAN (5e-5) is optimal for both the primal and dual formulation. Figure 3 shows the loss (i.e. the IPM estimate) dropping steadily for both the primal and dual formulation, independently of the choice of the norm. We also observed that during the whole training process, samples generated from the same noise vector across iterations remain similar in nature (face identity, bedroom style), while details and background evolve. This qualitative observation indicates valuable stability of the training process.

For the dual formulation (Algorithm 2), we confirmed the hypothesis that we need a good estimate of μ_w(P_r) in order to compute the gradient of the generator: we needed to increase the minibatch size of real data threefold (to 3 × 64).

Figure 4: lfw samples generated with covariance matching, and plot of the loss function (IPM_Σ estimate).
Figure 5: LSUN samples generated with covariance matching, and plot of the loss function (IPM_Σ estimate).

Covariance GAN. We now experimentally investigate the IPM defined by covariance matching. For this section and the following, we use only the primal formulation, i.e. with explicit U and V orthonormal (Algorithm 3). Figures 4 and 5 show samples and loss from lfw and LSUN training respectively. We use Algorithm 3 with k components. We obtain samples of comparable quality to the mean matching formulations (Figure 2), and we found training to be stable independently of hyperparameters like the number of components k, which we varied between 4 and 64.

Figure 6: Cifar-10: Class-conditioned generated samples. Within each column, the random noise is shared, while within the rows the GAN is conditioned on the same class: from top to bottom airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck.

Covariance GAN with labels and conditioning.

                Cond (+L)      Uncond (+L)    Uncond (-L)
    L1+Sigma    7.11 ± 0.04    6.93 ± 0.07    6.42 ± 0.09
    L2+Sigma    7.27 ± 0.04    6.69 ± 0.08    6.35 ± 0.04
    Sigma       7.29 ± 0.06    6.97 ± 0.10    6.73 ± 0.04
    WGAN        3.24 ± 0.02    5.21 ± 0.07    6.39 ± 0.07

    Baselines (single reported scores): BEGAN (Berthelot et al., 2017): 5.62; Impr. GAN “-LS” (Salimans et al., 2016): 6.83 ± 0.06; Impr. GAN Best (Salimans et al., 2016): 8.09 ± 0.07.

Table 1: Cifar-10: inception score of our models and baselines.

Finally, we conduct experiments on the cifar-10 dataset, where we leverage the additional label information by training a GAN with a conditional generator g_θ(z, y), with the label y supplied as a one-hot vector concatenated with the noise z. Similar to Infogan (Chen et al., 2016) and AC-GAN (Odena et al., 2016), we add a new output layer S and write the logits S(Φ_w(x)) ∈ ℝ^K, with K the number of classes. We now optimize a combination of the IPM loss and the cross-entropy loss CE(x, y; S, Φ_w) = −log [Softmax(S(Φ_w(x)))_y]. The critic loss combines the IPM term with the cross-entropy term over the labeled minibatch, weighted by a hyper-parameter λ_D. We now sample three minibatches for each critic update: a labeled batch for the CE term, and for the IPM a real unlabeled batch plus a generated batch.

The generator loss (with hyper-parameter λ_G) similarly combines the IPM term with the cross-entropy on the labels the generator was conditioned on, and still only requires a single minibatch to compute.
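
A hedged sketch of one plausible way to combine the two terms (the critic performs gradient ascent on its loss, the generator gradient descent); the sign convention, the helper S and the weights lam_d, lam_g are assumptions made here for illustration, not the paper's exact definitions.

    import torch
    import torch.nn.functional as F

    def critic_loss(phi_lab, y_lab, phi_real, phi_fake, v, S, lam_d=1.0):
        """IPM_mu term on the unlabeled real / fake batches, plus a cross-entropy
        term on the labeled batch (assumed convention: ascend this quantity)."""
        ipm_term = v @ (phi_real.mean(0) - phi_fake.mean(0))
        ce_term = F.cross_entropy(S(phi_lab), y_lab)
        return ipm_term - lam_d * ce_term

    def generator_loss(phi_fake, y_cond, v, S, lam_g=1.0):
        """Generator descends the IPM term plus the CE of its conditioning labels."""
        ipm_term = -(v @ phi_fake.mean(0))          # only the fake part depends on theta
        ce_term = F.cross_entropy(S(phi_fake), y_cond)
        return ipm_term + lam_g * ce_term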

We confirm the improved stability and sample quality of objectives including covariance matching with inception scores (Salimans et al., 2016) in Table 1. Samples corresponding to the best inception score (Sigma) are given in Figure 6. Using the code released with WGAN (Arjovsky et al., 2017), these scores come from the DCGAN model with n_extra_layers=3 (deeper generator and discriminator). More samples are in the appendix, with combinations of mean and covariance matching. Notice the rows corresponding to recognizable classes, while the noise (shared within each column) clearly determines other elements of the visual style like dominant color, across label conditioning.

7 Discussion

We noticed the influence of clipping on the capacity of the critic: a higher number of feature maps was needed to compensate for clipping. The question remains which alternatives to clipping of Φ_w can ensure boundedness. For example, we successfully used an ℓ_2 penalty on the weights of Φ_w. Other directions are to explore geodesic distances between the covariances (Arsigny et al., 2006), and extensions of the IPM framework to the multimodal setting (Isola et al., 2017).

References

Appendix A Subspace Matching Interpretation of Covariance Matching GAN

Let Δ_w = Σ_w(P) − Σ_w(Q).

Δ_w is a symmetric matrix but not necessarily PSD; its eigenvalues λ_j are related to its singular values by σ_j(Δ_w) = |λ_j(Δ_w)|, and its left and right singular vectors coincide with its eigenvectors up to a sign. One can ask here if we can avoid having both U and V in the definition of IPM_Σ, since at the optimum the columns of U and V coincide up to a sign. One could consider d̃ defined as follows:

d̃(P, Q) = max_{w ∈ Ω} max_{U ∈ O_{m,k}} Trace( U^⊤ (Σ_w(P) − Σ_w(Q)) U ),

and then solve with U = V. Note that:

max_{U ∈ O_{m,k}} Trace( U^⊤ Δ_w U ) = Σ_{j=1}^{k} λ_j(Δ_w),

the sum of the top k eigenvalues, which is not symmetric in P and Q; furthermore the sum of those eigenvalues is not guaranteed to be positive, hence d̃ is not guaranteed to be non-negative and does not define an IPM. Noting that λ_j(Δ_w) ≤ σ_j(Δ_w), we have that:

Σ_{j=1}^{k} λ_j(Δ_w) ≤ Σ_{j=1}^{k} σ_j(Δ_w) = ‖ [Δ_w]_k ‖_*.

Hence d̃ is not an IPM, but it can be optimized as a lower bound of IPM_Σ. This would have an energy interpretation as in the energy based GAN introduced recently (Zhao et al., 2017): the discriminator defines a subspace that has higher energy on real data than on fake data, and the generator maximizes its energy in this subspace.

Appendix B Mean and Covariance Matching Loss Combinations

We report below samples for McGan, with different IPM_μ and IPM_Σ combinations. All results are reported for the same architecture choice for generator and discriminator, which produced qualitatively good samples with our IPM objectives (the same architecture reported in Section 6 of the main paper). Note that in Figure 7, with the same hyper-parameters and architecture choice, WGAN failed to produce good samples. In other configurations training converged.

Figure 7: Cifar-10: Class-conditioned generated samples with IPM(WGAN). Within each column, the random noise is shared, while within the rows the GAN is conditioned on the same class: from top to bottom airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck.
Figure 8: Cifar-10: Class-conditioned generated samples with IPM. Within each column, the random noise is shared, while within the rows the GAN is conditioned on the same class: from top to bottom airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck.
Figure 9: Cifar-10: Class-conditioned generated samples with IPM. Within each column, the random noise is shared, while within the rows the GAN is conditioned on the same class: from top to bottom airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck.
Figure 10: Cifar-10: Class-conditioned generated samples with IPM+ IPM. Within each column, the random noise is shared, while within the rows the GAN is conditioned on the same class: from top to bottom airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck.
Figure 11: Cifar-10: Class-conditioned generated samples with IPM+ IPM. Within each column, the random noise is shared, while within the rows the GAN is conditioned on the same class: from top to bottom airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck.