GAN and VAE from an Optimal Transport Point of View

06/06/2017 ∙ by Aude Genevay, et al. ∙ École Normale Supérieure ∙ Université Paris-Dauphine

This short article revisits some of the ideas introduced in arXiv:1701.07875 and arXiv:1705.07642 in a simple setup. This sheds some light on the connections between Variational Autoencoders (VAE), Generative Adversarial Networks (GAN) and Minimum Kantorovitch Estimators (MKE).


1 Minimum Kantorovitch Estimators

MKE.

Given some empirical distribution ν = (1/n) ∑_{j=1}^n δ_{y_j}, where y_j ∈ Y, and a parametric family of probability distributions (μ_θ)_θ on X, θ ∈ Θ, a Minimum Kantorovitch Estimator (MKE) [bassetti2006minimum, montavon2016wasserstein, bernton2017inference] for θ is defined as any solution of the problem

    min_θ W_c(μ_θ, ν),    (1)

where W_c is the Wasserstein cost on X × Y for some ground cost function c : X × Y → ℝ, defined as

    W_c(μ, ν) = min_{γ ∈ P(X×Y)} { ∫_{X×Y} c(x, y) dγ(x, y) : P_1♯γ = μ, P_2♯γ = ν },

where P_1(x, y) = x and P_2(x, y) = y, and P_1♯ and P_2♯ are marginalization operators that return, for a given coupling γ, its first and second marginal, respectively.
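When both μ and ν are discrete, the Kantorovich problem above is a finite linear program. The following sketch (toy measures and cost chosen purely for illustration) solves it directly with `scipy.optimize.linprog`:

```python
import numpy as np
from scipy.optimize import linprog

# Toy discrete measures (illustrative): mu on {0, 1}, nu on {0.5, 1.5, 2}.
x = np.array([0.0, 1.0])
y = np.array([0.5, 1.5, 2.0])
a = np.array([0.5, 0.5])            # weights of mu
b = np.array([1/3, 1/3, 1/3])       # weights of nu

C = (x[:, None] - y[None, :]) ** 2  # ground cost c(x, y) = |x - y|^2
m, n = C.shape

# Marginal constraints P1#gamma = mu (row sums) and P2#gamma = nu (column sums),
# with gamma flattened row-major.
A_eq = np.zeros((m + n, m * n))
for i in range(m):
    A_eq[i, i * n:(i + 1) * n] = 1.0
for j in range(n):
    A_eq[m + j, j::n] = 1.0
b_eq = np.concatenate([a, b])

res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print(res.fun)  # W_c(mu, nu); here 5/6, attained by the monotone coupling
```

On this 1-D example with a convex cost, the optimal coupling is the monotone one, which is how the value 5/6 can be verified by hand.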

The notations P_1♯ and P_2♯ above agree with the more general notion of pushforward measures: given a measurable map g : Z → X, which can be interpreted as a function "moving" points from one measurable space to another, one can naturally extend g to a more general map g♯ that can now "move" an entire probability measure on Z towards a new probability measure on X. The operator g♯ "pushes forward" each elementary mass of a measure ζ on Z by applying the map g to obtain a mass in X, building on aggregate a new measure on X written g♯ζ. More rigorously, the pushforward measure of a measure ζ by a map g is the measure denoted g♯ζ such that for any measurable set B ⊂ X, (g♯ζ)(B) = ζ(g⁻¹(B)).
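A minimal numerical illustration of this definition: applying a map g to samples of ζ produces samples of the pushforward g♯ζ (the Gaussian ζ and the affine map below are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.standard_normal(10_000)   # samples from zeta = N(0, 1) on Z
g = lambda t: 2.0 * t + 3.0       # measurable map g : Z -> X

x = g(z)                          # samples from the pushforward g#zeta = N(3, 4)
print(x.mean(), x.std())          # close to 3 and 2
```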

Figure 1: Left: illustration of density fitting using the Minimum Kantorovitch Estimator for a generative model. Middle and right: comparison of the GAN vs. VAE setups.

MKE-GM.

The MKE approach can be used directly in the case where (μ_θ)_θ is a statistical model, namely a parameterized family of probability distributions with a given density with respect to a dominating base measure, as considered for instance with exponential families on discrete spaces in [montavon2016wasserstein]. However, the MKE approach can also be used in a generative model setting, where μ_θ is defined instead as the pushforward of a fixed distribution ζ supported on a low-dimensional space Z, μ_θ = g_θ♯ζ, where the parameterization lies now in choosing a map g_θ : Z → X, resulting in the following special case of the original problem (1):

    min_θ E(θ) := W_c(g_θ♯ζ, ν).

The map g_θ should therefore be thought of as a "decoding" map from the low-dimensional space Z to the high-dimensional space X. In such a setting, the maximum likelihood estimator is in general undefined or difficult to compute (because the supports of the measures are singular), while MKEs are attractive because they are always well defined.
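As a toy illustration of such an estimator, the sketch below fits a one-parameter map g_θ(z) = θz by brute-force search over θ, using the fact that in 1-D the squared W₂ distance between two equal-size empirical measures reduces to pairing sorted samples (all numerical choices are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)
z = np.sort(rng.standard_normal(500))        # samples of zeta
y = np.sort(2.0 * rng.standard_normal(500))  # data nu, generated with "true" theta = 2

def E(theta):
    # W_2^2 between g_theta#zeta and nu via sorted-sample pairing
    # (valid in 1-D; theta > 0 keeps theta * z sorted).
    return np.mean((theta * z - y) ** 2)

thetas = np.linspace(0.5, 4.0, 200)
theta_hat = thetas[np.argmin([E(t) for t in thetas])]
print(theta_hat)  # close to 2
```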

2 Dual Formulation and GAN

Because (1) is a linear program, it has a dual formulation, known as the Kantorovich problem [villani2008optimal, Thm. 5.9]:

    E(θ) = max_{h, h̃} { ∫_Z h(g_θ(z)) dζ(z) + ∫_Y h̃(y) dν(y) : h(x) + h̃(y) ≤ c(x, y) },    (2)

where h and h̃ are continuous functions on X and Y, often called Kantorovich potentials in the literature.

In the dual formulation (2), θ does not appear in the constraints anymore. Therefore, the gradient of E can be computed as

    ∇E(θ) = ∫_Z [∂_θ g_θ(z)]^⊤ ∇h⋆(g_θ(z)) dζ(z),

where h⋆ is an optimal dual function solving (2). Here [∂_θ g_θ(z)]^⊤ is the adjoint of the Jacobian of θ ↦ g_θ(z), so that ∇E(θ) ∈ ℝ^q, where q is the dimension of the parameter space Θ.

A key remark in Kantorovich's formulation is that the objective value of any pair (h, h̃) can always be improved by replacing h̃ in (2) by the c-transform of h, defined as

    h^c(y) := min_x c(x, y) − h(x),

which is, given a candidate potential h for the first variable, the best possible potential that can be paired with h while satisfying the constraints of (2) (see [villani2008optimal, Thm. 5.9]). For this reason, one can parameterize problem (2) as depending on one potential function only.
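On a finite grid, the c-transform is a plain minimization over x; the sketch below (illustrative grids and candidate potential) computes h^c and verifies that the pair (h, h^c) satisfies the constraints of (2):

```python
import numpy as np

# Illustrative grids for x and y, squared-distance cost, arbitrary potential h.
x = np.linspace(0.0, 1.0, 50)
y = np.linspace(0.0, 1.0, 40)
C = (x[:, None] - y[None, :]) ** 2
h = np.sin(2 * np.pi * x)

# c-transform: h^c(y) = min_x c(x, y) - h(x).
h_c = (C - h[:, None]).min(axis=0)

# By construction h(x) + h^c(y) <= c(x, y) for every grid pair.
feasible = np.all(h[:, None] + h_c[None, :] <= C + 1e-9)
print(feasible)  # True
```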

A first approach to solve (2) is to remark that, since ν is discrete, one can replace the continuous potential h̃ by the discrete vector (h̃_j)_{j=1}^n and impose h = h̃^c. As shown in [2016-genevay-nips], the optimization over (h̃_j)_j can then be achieved using stochastic gradient descent.
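A rough sketch of this semi-discrete strategy (the step-size schedule, sample budget and the trivial choice g_θ = Id below are illustrative, not the tuned averaged-SGD procedure of the cited work): since h = h̃^c, each sampled z yields a stochastic gradient that raises every entry of (h̃_j)_j by 1/n except the index attaining the c-transform minimum.

```python
import numpy as np

rng = np.random.default_rng(1)
y = np.array([-1.0, 0.0, 1.0])   # support of the discrete measure nu
n = len(y)
h_t = np.zeros(n)                # dual vector (h~_j)_j

for t in range(1, 20_001):
    z = rng.standard_normal()            # z ~ zeta; here g_theta = Id, so x = z
    j = np.argmin((z - y) ** 2 - h_t)    # index attaining min_j c(x, y_j) - h~_j
    grad = np.full(n, 1.0 / n)           # ascent direction: +1/n on every entry...
    grad[j] -= 1.0                       # ...and -1 on the minimizing index
    h_t += (0.5 / np.sqrt(t)) * grad     # decaying-step gradient ascent

print(h_t)  # approximate dual potentials (defined up to an additive constant)
```

Since each stochastic gradient sums to zero, the iterates stay in the zero-mean slice of the (additively non-unique) dual solutions.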

Similarly to [WassersteinGAN], another approach is to approximate (2) by restricting the dual potential h to have a parametric form h_ξ, where h_ξ is a discriminative deep network (see Figure 1, center). This map is often referred to as an "adversarial" map. Plugging this ansatz into (2) leads to the Wasserstein-GAN problem

    min_θ max_ξ ∫_Z h_ξ(g_θ(z)) dζ(z) + (1/n) ∑_j h_ξ^c(y_j).

In the special case where c(x, y) = ‖x − y‖, one can prove that the mechanics of c-transforms result in the additional constraint h^c = −h, subject to h being a 1-Lipschitz function, see [villani2008optimal, Particular case 5.4]. This is used in [WassersteinGAN] to replace h^c by −h in (2) and to use a deep network made of ReLU units whose Lipschitz constant is upper-bounded by 1.

As a side-note, and as previously commented in the literature, there is at this point no empirical evidence that supports the idea that using discriminative deep networks that way can result in accurate approximations of Wasserstein distances. These alternative formulations provide instead a very useful proxy for a quantity directly related to the Wasserstein distance.

3 Primal Formulation and VAE

Following [Bousquet2017, 2017-Genevay-AutoDiff], in the special case of a generative model μ_θ = g_θ♯ζ, formula (1) can be conveniently re-written as

    E(θ) = min_{π ∈ P(Z×Y)} { ∫_{Z×Y} c(g_θ(z), y) dπ(z, y) : P_1♯π = ζ, P_2♯π = ν }.    (3)

This is advantageous because the coupling π is now defined over Z × Y, which is lower-dimensional than X × Y, and also because, as in formulation (2), θ does not appear in the constraints either. This provides an alternative formula for the gradient of E:

    ∇E(θ) = ∫_{Z×Y} [∂_θ g_θ(z)]^⊤ ∇_1 c(g_θ(z), y) dπ⋆(z, y),

where π⋆ is an optimal coupling solving (3). Here ∇_1 c denotes the gradient of c with respect to its first variable.

[Bousquet2017] suggests looking for couplings with a parametric form. A simple way to achieve this is to restrict couplings to those of the form

    π_ξ := (1/n) ∑_j δ_{(f_ξ(y_j), y_j)} ∈ P(Z × Y),

where f_ξ : Y → Z is a parametric "encoding" map (typically a deep network), see Figure 1, right. This satisfies by construction the marginal constraint P_2♯π_ξ = ν, but in general it cannot satisfy the other constraint P_1♯π_ξ = ζ (because π_ξ is discrete while ζ is not). So, following [Bousquet2017], it makes sense to consider a relaxed "unbalanced" formulation (in the sense of [2016-chizat-sinkhorn]) of the form

    E_λ(θ) = min_π { ∫_{Z×Y} c(g_θ(z), y) dπ(z, y) + λ D(P_1♯π | ζ) : P_2♯π = ν },

where D is some distance or divergence between positive measures on Z and λ > 0 a relaxation parameter.

Plugging the ansatz π = π_ξ into this relaxed formulation, one obtains the Wasserstein-VAE problem

    min_{θ, ξ} C_ν(g_θ ∘ f_ξ, Id) + λ D(f_ξ♯ν | ζ),

where C_ν is the cost measuring the deviation of a map φ from the identity,

    C_ν(φ, Id) := ∫_Y c(φ(y), y) dν(y) = (1/n) ∑_{j=1}^n c(φ(y_j), y_j).

Such a cost is usually associated with the Monge formulation of optimal transport [Monge1781], whose original motivation was to find an optimal map under that cost able to push forward a given measure onto another [santambrogio2015optimal, §1.1].
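Once the encoder f_ξ and decoder g_θ are fixed, the relaxed objective is cheap to evaluate. The sketch below uses toy linear maps and a crude moment-matching penalty as a stand-in for the divergence D (all names and numerical choices are hypothetical, not the actual training loss of the cited works):

```python
import numpy as np

rng = np.random.default_rng(2)
yj = rng.standard_normal((100, 2))              # data points y_j defining nu

f = lambda y: y @ np.array([[1.0], [0.5]])      # toy encoder f_xi : R^2 -> R
g = lambda z: z @ np.array([[1.0, 0.4]])        # toy decoder g_theta : R -> R^2
c = lambda u, v: np.sum((u - v) ** 2, axis=1)   # squared Euclidean ground cost

recon = c(g(f(yj)), yj).mean()                  # C_nu(g_theta o f_xi, Id)

# Crude stand-in for D(f_xi#nu | zeta) with zeta = N(0, 1): match first two moments.
z = f(yj).ravel()
penalty = z.mean() ** 2 + (z.std() - 1.0) ** 2

lam = 1.0                                       # relaxation parameter lambda
objective = recon + lam * penalty
print(objective)
```

In an actual VAE both terms would be minimized jointly over (θ, ξ) by stochastic gradient descent.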

4 Conclusions

The formulations of Sections 2 and 3 are very different, and are in some sense dual to each other. For GAN, the couple (g_θ, h_ξ) should be thought of as a (primal, dual) pair (often referred to as an adversarial pair, which is reminiscent of saddle points in game theory). For VAE, the couple (f_ξ, g_θ) is rather an (encoding, decoding) pair, and both maps have the flavour of transportation maps.

In sharp contrast to the primal gradient formula derived from (3), which only requires integrating against an optimal coupling π⋆, the dual gradient formula derived from (2) involves integrating the gradient of an optimal potential h⋆. The latter tends to be more unstable and thus necessitates accurate optimization sub-iterations to obtain an optimal dual potential [2016-genevay-nips] or an approximation within a restricted parametric class [WassersteinGAN]. This is somewhat in line with the empirical observation that training VAEs is more stable than training GANs. One should however bear in mind that, although both formulations can be motivated by the same minimum Kantorovitch estimation problem (1), they define quite different estimators. In particular, GANs are often credited with producing less blurry outputs when used for image generation.

Denoting θ_MKE, θ_WGAN and θ_WVAE the solutions of the corresponding estimation problems, one has, in the limit λ → +∞ (to cancel the bias due to the relaxation of the marginal constraint),

    E(θ_WGAN) ≤ E(θ_MKE) ≤ E(θ_WVAE).

[Bousquet2017] furthermore mentions that in the "non-parametric limit" (i.e. when the number of parameters appearing in (g_θ, h_ξ, f_ξ) tends to +∞, and also letting λ → +∞), the gap between the estimators should vanish. Indeed, h_ξ and f_ξ should capture the desired optimal potential and optimal map in the limit, and one thus recovers the true solution to (1). While it would be interesting from a theoretical perspective to prove and quantify such a claim, it is unclear whether it would be useful for the practitioner. Indeed, the convergence rate might be slow, so that in practice one can be quite far from this non-parametric limit. One could even argue that this limit may give poor estimators for complicated datasets, so that parameterizing the maps and using non-convex optimization solvers leads instead to a beneficial and implicit regularization of these estimators.

References