1 Introduction
A generative model is a model capable of generating observable samples that abide by a given data distribution, or that mimic data samples drawn from an unknown distribution. Generative models are worth studying for the following reasons: i) they increase our ability to represent and manipulate high-dimensional probability distributions; ii) they can be incorporated into reinforcement learning in several ways; and iii) they can be trained with missing data and can provide predictions on inputs that are missing data (Goodfellow, 2017). The works in generative modeling can be categorized according to the taxonomy shown in Figure 1 (Goodfellow, 2017). In the left branch of the taxonomic tree, the explicit density node specifies the models that come with an explicit model density function (i.e., $p_{\text{model}}(x)$). Maximum likelihood inference is then straightforward with an explicit objective function. The tractability and precision of inference depend entirely on the choice of the density family. This family must be chosen to represent the true data distribution well whilst keeping inference tractable. Under the explicit density node, at the leftmost, the tractable density node defines the models whose explicit density functions are computationally tractable. The well-known models under this umbrella include fully visible belief nets (Frey et al., 1995), PixelRNN (Oord et al., 2016), Nonlinear ICA (Deco and Brauer, 1995), and Real NVP (Dinh et al., 2016). In contrast to the tractable density node, the approximate density node points out the models that have an explicit density function but are computationally intractable. The remedy for this intractability is to approximate the true density function using either variational methods (Kingma and Welling, 2013; Rezende et al., 2014)
or Markov chain methods
(Fahlman et al., 1983; Hinton et al., 1984). Some generative models can be trained without any model assumption. These implicit models fall under the umbrella of the implicit density node. Some models under this umbrella draw samples by repeatedly applying a Markov chain transition operator until a sample from the model is obtained (Bengio et al., 2013). Another state-of-the-art model under this umbrella is the Generative Adversarial Network (GAN) (Goodfellow et al., 2014). GAN introduced a novel and powerful way of thinking wherein generative modeling is viewed as a minimax game between two players (i.e., a discriminator and a generator). The discriminator attempts to discriminate the true data samples from the generated samples, whilst the generator tries to generate samples that mimic the true data samples so as to maximally challenge the discriminator. The theory behind GAN shows that if the model converges to the Nash equilibrium point, the resulting generated distribution minimizes its Jensen-Shannon divergence to the true data distribution (Goodfellow et al., 2014). The seminal GAN has opened a new line of thinking that offers a foundation for a variety of works (Radford et al., 2015; Denton et al., 2015; Ledig et al., 2016; Zhu et al., 2016; Nowozin et al., 2016; Metz et al., 2016; Nguyen et al., 2017; Hoang et al., 2017). However, because of their minimax flavor, training GANs is really challenging. Besides, even if we can perfectly train GANs, due to the nature of the Jensen-Shannon divergence minimization, GANs still encounter the mode collapse issue (Theis et al., 2015).
In this paper, we first propose to view GANs from another viewpoint, which we term the minimizing general loss viewpoint. Intuitively, since we do not have the formulas of either the true or the generated data distribution, GANs elegantly invoke a strong discriminator (i.e., a classifier) to implicitly judge how far apart these two distributions are. Concretely, if the two distributions are far apart, the task of the discriminator is much easier, with a small resulting loss; in contrast, if they move closer, the task of the discriminator becomes harder, with an increasing resulting loss. Eventually, when the two distributions are completely mixed up, the resulting loss of the best discriminator is maximized; hence we arrive at a max-min problem, where the inner minimization finds the optimal discriminator given a generator and the outer maximization finds the optimal generator that maximally confuses the optimal discriminator. Mathematically, we prove that given a convex loss function $\ell$, the general loss of the classification problem of discriminating the true and fake data is a negative $f$-divergence between the true and fake data distributions for a certain convex function $f$. It follows that we maximize the general loss in order to minimize the divergence between the two involved distributions. This viewpoint further explains why, in practice, we can use many loss functions in training GAN while still obtaining good-quality generated samples. Furthermore, the proposed viewpoint also reveals that we can freely employ any family of sufficient capacity for the discriminators instead of limiting ourselves to the NN-based family. Bearing this observation in mind, we propose using kernel-based discriminators for classifying the real and fake data. This kernel-based family has a powerful capacity, while being linear in the feature space (Cortes and Vapnik, 1995). This allows us to apply Fenchel duality to equivalently transform the max-min problem into a max-max dual problem.
2 Related Background
In this section, we present the related background used in our work. We begin with the introduction of the Fenchel conjugate, a well-known notion in convex analysis, followed by the introduction of Fourier random features (Rahimi and Recht, 2007), which can be used to approximate a shift-invariant and positive semi-definite kernel.
2.1 Fenchel Conjugate
Given a convex function $f$, the Fenchel conjugate $f^{*}$ of this function is defined as
$$f^{*}(u) = \sup_{x}\left\{\left\langle u, x\right\rangle - f(x)\right\}.$$
Regarding the Fenchel conjugate, we have the following properties:

Argmax: If the function $f$ is strongly convex, the optimal argument of the supremum is exactly $\bar{x} = \nabla f^{*}(u)$.

Young's inequality: Given $x$ and $u$, we have the inequality $f(x) + f^{*}(u) \geq \left\langle u, x\right\rangle$. The equality occurs if $u \in \partial f(x)$.

Fenchel–Moreau theorem: If $f$ is convex and continuous, then the conjugate of the conjugate (known as the biconjugate) is the original function, which means that $f^{**} = f$.

The Legendre transform property: For a strictly convex differentiable function $f$, the gradient of the convex conjugate maps a point $u$ in the dual space to the point $x$ at which $u$ is the gradient of $f$: $\nabla f^{*}(u) = x$ where $u = \nabla f(x)$.
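As a concrete illustration of the properties above, consider the textbook example $f(x) = \frac{1}{2}x^{2}$ (our own illustrative choice, not taken from the paper):

```latex
% Fenchel conjugate of f(x) = (1/2) x^2:
f^{*}(u) = \sup_{x}\left\{ u x - \tfrac{1}{2}x^{2} \right\} = \tfrac{1}{2}u^{2},
\quad \text{with the supremum attained at } x = u = \nabla f^{*}(u).
% Young's inequality: f(x) + f^{*}(u) = \tfrac{1}{2}x^{2} + \tfrac{1}{2}u^{2} \ge ux,
% with equality iff u = x = \nabla f(x).
% Biconjugate (Fenchel--Moreau):
% (f^{*})^{*}(x) = \sup_{u}\left\{ x u - \tfrac{1}{2}u^{2} \right\} = \tfrac{1}{2}x^{2} = f(x).
```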
2.2 Fourier Random Feature Representation
In kernel methods, the feature map $\Phi$ is implicitly defined and the inner product $\left\langle \Phi(x), \Phi\left(x'\right)\right\rangle$ is evaluated through a kernel $K\left(x, x'\right)$. To construct an explicit representation of $\Phi$, the key idea is to approximate the symmetric and positive semi-definite (p.s.d.) kernel $K$ using a kernel induced by a random finite-dimensional feature map (Rahimi and Recht, 2007). The mathematical tool behind this approximation is Bochner's theorem (Bochner, 1959), which states that every shift-invariant, p.s.d. kernel $K\left(x, x'\right) = K\left(x - x'\right)$ can be represented as an inverse Fourier transform of a proper distribution $p(\omega)$ as below:

(1) $K\left(x, x'\right) = \int p(\omega)\, e^{i\omega^{\top}\left(x - x'\right)}\, d\omega$

where $\omega \in \mathbb{R}^{d}$ and $i$ represents the imaginary unit (i.e., $i^{2} = -1$). In addition, the corresponding proper distribution $p(\omega)$ can be recovered through the Fourier transform of the kernel function as:

(2) $p(\omega) = \left(\frac{1}{2\pi}\right)^{d} \int K(u)\, e^{-i\omega^{\top} u}\, du$

Popular shift-invariant kernels include the Gaussian, Laplacian, and Cauchy kernels. For our work, we employ the Gaussian kernel $K\left(x, x'\right) = \exp\left\{-\frac{1}{2}\left(x - x'\right)^{\top}\Sigma^{-1}\left(x - x'\right)\right\}$ parameterized by the covariance matrix $\Sigma$. With this choice, substituting into Eq. (2) yields a closed form for the probability distribution, which is $p(\omega) = \mathcal{N}\left(\omega \mid 0, \Sigma^{-1}\right)$.
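The approximation above can be sketched in a few lines; the following illustrative snippet (all names are ours) draws random frequencies from $\mathcal{N}\left(0, \Sigma^{-1}\right)$ with the isotropic choice $\Sigma = \sigma^{2} I$ and checks that the random feature inner product matches the Gaussian kernel:

```python
import numpy as np

rng = np.random.default_rng(0)
d, D = 3, 2000                       # input dimension, number of random features

# Gaussian kernel K(x, x') = exp(-0.5 (x-x')^T Sigma^{-1} (x-x')),
# with the isotropic simplification Sigma = sigma^2 * I.
sigma = 1.0

def kernel(x, y):
    diff = x - y
    return np.exp(-0.5 * np.dot(diff, diff) / sigma**2)

# Bochner: the spectral distribution of this kernel is N(0, Sigma^{-1}),
# i.e. N(0, I / sigma^2) in the isotropic case.
omega = rng.normal(0.0, 1.0 / sigma, size=(D, d))

def feature_map(x):
    # random feature map: sqrt(1/D) [cos(omega_i . x), sin(omega_i . x)]_{i=1..D}
    proj = omega @ x
    return np.concatenate([np.cos(proj), np.sin(proj)]) / np.sqrt(D)

x, y = rng.normal(size=d), rng.normal(size=d)
approx = feature_map(x) @ feature_map(y)   # Monte Carlo estimate of K(x, y)
exact = kernel(x, y)
```

With $D = 2000$ features, the Monte Carlo error of the inner product is typically on the order of $1/\sqrt{D}$.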
2.3 Generative Adversarial Network
Given a data distribution whose p.d.f. is $p_{d}(x)$ where $x \in \mathbb{R}^{d}$, the aim of Generative Adversarial Networks (GAN) (Goodfellow et al., 2014; Goodfellow, 2017) is to train a neural-network based generator $G$ such that $G(z)$, fed by $z \sim P_{z}$ (i.e., the noise distribution), induces the generated distribution $P_{g}$ with a p.d.f. $p_{g}(x)$ coinciding with the data distribution $p_{d}(x)$. This is realized by minimizing the Jensen-Shannon divergence between $P_{g}$ and $P_{d}$, which can be equivalently obtained via solving the following minimax optimization problem:

(5) $\min_{G}\max_{D}\ \mathbb{E}_{x \sim P_{d}}\left[\log D(x)\right] + \mathbb{E}_{z \sim P_{z}}\left[\log\left(1 - D\left(G(z)\right)\right)\right]$

where $D$ is a neural-network based discriminator and, for a given $x$, $D(x)$ specifies the probability that $x$ is drawn from $P_{d}$ rather than $P_{g}$.
From the game theory perspective, GAN can be viewed as a game of two players: the discriminator $D$ and the generator $G$. The discriminator tries to discriminate the generated (or fake) data from the real data, while the generator attempts to confuse the discriminator by gradually generating fake data that blend into the real data. The diagram of GAN is shown in Figure 2.
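The inner maximization of the GAN value function can be checked pointwise: at each $x$, the discriminator solves $\max_{d \in (0,1)}\left\{a\log d + b\log(1 - d)\right\}$ with $a = p_{d}(x)$ and $b = p_{g}(x)$, whose maximizer is $d^{*} = a/(a + b)$ (Goodfellow et al., 2014). A small grid search (our own sketch, with arbitrary values for $a$ and $b$) confirms this:

```python
import numpy as np

def pointwise_value(a, b, d):
    # contribution of a single point x to the GAN value function,
    # with a = p_data(x), b = p_g(x) and discriminator output d = D(x)
    return a * np.log(d) + b * np.log(1.0 - d)

a, b = 0.7, 0.2                                    # arbitrary positive densities
grid = np.linspace(1e-4, 1.0 - 1e-4, 100_000)
d_best = grid[np.argmax(pointwise_value(a, b, grid))]
d_star = a / (a + b)   # theoretical optimum D*(x) = p_data(x) / (p_data(x) + p_g(x))
```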
Since we never end up with any closed-form formulation for $p_{g}$, while still being able to generate data from this distribution, GANs are regarded as an implicit density estimation method. The key remedy of GANs is to employ a strong discriminator (i.e., a classifier) to implicitly judge the divergence between $P_{d}$ and $P_{g}$. To further clarify this point, we rewrite the optimization problem in Eq. (5) as follows:

(6) $\max_{G}\min_{D}\ \mathbb{E}_{x \sim P_{d}}\left[\ell\left(D(x)\right)\right] + \mathbb{E}_{z \sim P_{z}}\left[\ell\left(-D\left(G(z)\right)\right)\right]$

where $\ell(v) = \log\left(1 + e^{-v}\right)$ is the logistic loss and $D$ is now real-valued. According to the optimization problem in Eq. (6), given a generator $G$, we need to train the discriminator that minimizes the general logistic loss over the data domain, including the real and fake data. Using the above general loss, we can implicitly estimate how far $P_{d}$ and $P_{g}$ are from each other. In particular, if $P_{g}$ is far from $P_{d}$ then the general loss is very small, while if $P_{g}$ moves closer to $P_{d}$ then the general loss increases. In the following section, we strengthen this observation by proving that, in fact, we can substitute any decreasing and convex loss for the logistic loss, whereupon the optimization problem in Eq. (6) can be equivalently interpreted as minimizing a certain symmetric divergence between $P_{d}$ and $P_{g}$.
In addition, the most challenging obstacle in solving the optimization problem of GAN in Eq. (5) is to break its minimax flavor. Existing GANs address this problem by alternately updating the discriminator and generator, which cannot accurately solve the minimax problem, and the resulting solutions might cumulatively diverge from the optimal one.
3 Minimal General Loss Networks
In this section, we theoretically show the connection between the problem of discriminating the real and fake data and the problem of minimizing the distance between $P_{d}$ and $P_{g}$. We start this section by introducing the setting of the classification problem, followed by proving that the general loss of this classification problem with a certain loss function $\ell$ is the negative $f$-divergence between $P_{d}$ and $P_{g}$ for some convex function $f$. Finally, we close this section by indicating some common pairs of $\left(\ell, f\right)$.
3.1 The Setting of The Classification Problem
Given two distributions $P_{d}$ and $P_{g}$ with the p.d.f.s $p_{d}$ and $p_{g}$ respectively, we define the distribution for generating common data instances as the mixture of the two aforementioned distributions, whose p.d.f. is
$$p_{d,g}(x) = \frac{p_{d}(x) + p_{g}(x)}{2}.$$
When a data instance $x \sim p_{d,g}$, it is either drawn from $P_{d}$ or $P_{g}$ with probability $\frac{1}{2}$ each. We use the following machinery to generate data instance and label pairs $(x, y)$ where $y \in \left\{-1, 1\right\}$:

Randomly draw $x \sim p_{d,g}$.

If $x$ is really drawn from $P_{d}$, its label is set to $y = 1$. Otherwise, its label is set to $y = -1$.
Let us denote the joint distribution over $(x, y)$ by $P$, whose p.d.f. is $p(x, y)$. It is evident from our setting that:
$$p\left(x \mid y = 1\right) = p_{d}(x), \qquad p\left(x \mid y = -1\right) = p_{g}(x), \qquad p\left(y = 1\right) = p\left(y = -1\right) = \frac{1}{2}.$$
Let $\mathcal{H}$ be a family of functions with an infinite capacity that contains the discriminators $h$, wherein we seek the optimal discriminator $h^{*}$. To form the criterion for finding the optimal discriminator, we recruit a decreasing and convex loss function $\ell$. The general loss w.r.t. a specific discriminator $h$ and the general loss over the discriminator space $\mathcal{H}$ are defined, respectively, as
$$\mathcal{L}(h) = \mathbb{E}_{(x, y) \sim P}\left[\ell\left(y\, h(x)\right)\right], \qquad \mathcal{L} = \inf_{h \in \mathcal{H}} \mathcal{L}(h).$$
In addition, the optimal discriminator is defined as the discriminator that minimizes the general loss, i.e., $h^{*} = \operatorname{argmin}_{h \in \mathcal{H}} \mathcal{L}(h)$.
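The labeling machinery above can be simulated directly; the snippet below (a sketch with two hypothetical one-dimensional Gaussians standing in for $P_{d}$ and $P_{g}$) draws pairs $(x, y)$ and estimates the general loss of a fixed discriminator under the hinge loss:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_pair():
    # flip a fair coin: the instance comes from P_d (label +1) or P_g (label -1)
    if rng.random() < 0.5:
        return rng.normal(loc=2.0, scale=1.0), +1    # x ~ P_d (hypothetical)
    return rng.normal(loc=-2.0, scale=1.0), -1       # x ~ P_g (hypothetical)

pairs = [sample_pair() for _ in range(10_000)]
labels = np.array([y for _, y in pairs])

# empirical general loss of a fixed discriminator h under the hinge loss
def general_loss(h, loss=lambda v: np.maximum(0.0, 1.0 - v)):
    return np.mean([loss(y * h(x)) for x, y in pairs])
```

Here a discriminator that separates the two modes, such as `lambda x: np.sign(x)`, should achieve a far smaller empirical general loss than the trivial discriminator `lambda x: 0.0`.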
3.2 The Relationship between the General Loss and $f$-divergence
In our setting, we can further derive the general loss over the space $\mathcal{H}$ as:
$$\mathcal{L} = \inf_{h \in \mathcal{H}}\left\{\frac{1}{2}\mathbb{E}_{x \sim P_{d}}\left[\ell\left(h(x)\right)\right] + \frac{1}{2}\mathbb{E}_{x \sim P_{g}}\left[\ell\left(-h(x)\right)\right]\right\}$$
Since we assume that the discriminator family $\mathcal{H}$ has an infinite capacity, we can proceed with the above derivation pointwise as follows:

(7) $\mathcal{L} = \frac{1}{2}\int \inf_{\alpha \in \mathbb{R}}\left[\ell(\alpha)\, p_{d}(x) + \ell(-\alpha)\, p_{g}(x)\right] dx$

Let us now denote

(8) $f(t) = -\inf_{\alpha \in \mathbb{R}}\left[\ell(-\alpha) + t\,\ell(\alpha)\right]$

which is a decreasing and convex function. We now plug this function back into the above formulation to obtain:
$$\mathcal{L} = \frac{1}{2}\int p_{g}(x)\inf_{\alpha}\left[\ell(-\alpha) + \frac{p_{d}(x)}{p_{g}(x)}\ell(\alpha)\right] dx = -\frac{1}{2}\int p_{g}(x)\, f\!\left(\frac{p_{d}(x)}{p_{g}(x)}\right) dx = -\frac{1}{2} D_{f}\left(P_{d} \,\|\, P_{g}\right)$$
where $D_{f}\left(P_{d} \,\|\, P_{g}\right) = \int p_{g}(x)\, f\!\left(\frac{p_{d}(x)}{p_{g}(x)}\right) dx$ specifies the $f$-divergence between the two distributions.
It turns out that the general loss $\mathcal{L}$ is proportional to the negative $f$-divergence $-D_{f}\left(P_{d} \,\|\, P_{g}\right)$, where the convex function $f$ is defined as in Eq. (8). It also follows that to minimize $D_{f}\left(P_{d} \,\|\, P_{g}\right)$, we can equivalently maximize $\mathcal{L}$, and hence arrive at the following max-min problem:
$$\max_{G}\ \inf_{h \in \mathcal{H}}\left\{\frac{1}{2}\mathbb{E}_{x \sim P_{d}}\left[\ell\left(h(x)\right)\right] + \frac{1}{2}\mathbb{E}_{z \sim P_{z}}\left[\ell\left(-h\left(G(z)\right)\right)\right]\right\}$$
The above max-min problem keeps the spirit of GANs: the discriminator attempts to classify the real and fake data while the generator tries to confuse the discriminator. From now on, for the sake of simplicity, we replace sup and inf by max and min, respectively, though the mathematical soundness is slightly loosened. In particular, we need to tackle the max-min problem:
$$\max_{G}\ \min_{h \in \mathcal{H}}\left\{\frac{1}{2}\mathbb{E}_{x \sim P_{d}}\left[\ell\left(h(x)\right)\right] + \frac{1}{2}\mathbb{E}_{z \sim P_{z}}\left[\ell\left(-h\left(G(z)\right)\right)\right]\right\}$$
It is worth noting that if the loss function $\ell$ is the logistic loss $\ell(v) = \log\left(1 + e^{-v}\right)$, then the corresponding divergence is the Jensen-Shannon (JS) divergence. In Section 3.3, we will indicate other loss function and divergence pairs.
3.3 Loss Function and $f$-divergence Pairs
3.3.1 0-1 Loss
This loss has the form $\ell(v) = \mathbb{I}_{v \leq 0}$. From Eq. (7), the optimal discriminator takes the form of $h^{*}(x) = \operatorname{sign}\left(p_{d}(x) - p_{g}(x)\right)$ and the general loss takes the following form:
$$\mathcal{L} = \frac{1}{2}\int \min\left\{p_{d}(x), p_{g}(x)\right\} dx = \frac{1}{2} - \frac{1}{4}\int \left|p_{d}(x) - p_{g}(x)\right| dx$$
which is, up to constants, the negative total variation distance between $P_{d}$ and $P_{g}$.
3.3.2 Hinge Loss
This loss has the form $\ell(v) = \max\left\{0, 1 - v\right\}$. From Eq. (7), the optimal discriminator takes the form of $h^{*}(x) = \operatorname{sign}\left(p_{d}(x) - p_{g}(x)\right)$ and the general loss takes the following form:
$$\mathcal{L} = \int \min\left\{p_{d}(x), p_{g}(x)\right\} dx = 1 - \frac{1}{2}\int \left|p_{d}(x) - p_{g}(x)\right| dx$$
which is, up to constants, the negative total variation distance between $P_{d}$ and $P_{g}$.
3.3.3 Exponential Loss
This loss has the form $\ell(v) = e^{-v}$. From Eq. (7), the optimal discriminator takes the form of $h^{*}(x) = \frac{1}{2}\log\frac{p_{d}(x)}{p_{g}(x)}$ and the general loss takes the following form:
$$\mathcal{L} = \int \sqrt{p_{d}(x)\, p_{g}(x)}\, dx = 1 - \frac{1}{2}\int \left(\sqrt{p_{d}(x)} - \sqrt{p_{g}(x)}\right)^{2} dx$$
which is, up to constants, the negative squared Hellinger distance between $P_{d}$ and $P_{g}$.
3.3.4 Least Square Loss
This loss has the form $\ell(v) = \left(1 - v\right)^{2}$. From Eq. (7), the optimal discriminator takes the form of $h^{*}(x) = \frac{p_{d}(x) - p_{g}(x)}{p_{d}(x) + p_{g}(x)}$ and the general loss takes the following form:
$$\mathcal{L} = 2\int \frac{p_{d}(x)\, p_{g}(x)}{p_{d}(x) + p_{g}(x)}\, dx = 1 - \frac{1}{2}\Delta\left(P_{d}, P_{g}\right)$$
where $\Delta\left(P_{d}, P_{g}\right) = \int \frac{\left(p_{d}(x) - p_{g}(x)\right)^{2}}{p_{d}(x) + p_{g}(x)}\, dx$. In addition, this divergence is known as the triangular discrimination distance.
3.3.5 Logistic Loss
This loss has the form $\ell(v) = \log\left(1 + e^{-v}\right)$. From Eq. (7), the optimal discriminator takes the form of $h^{*}(x) = \log\frac{p_{d}(x)}{p_{g}(x)}$ and the general loss takes the following form:
$$\mathcal{L} = \frac{1}{2}\int \left[p_{d}(x)\log\frac{p_{d}(x) + p_{g}(x)}{p_{d}(x)} + p_{g}(x)\log\frac{p_{d}(x) + p_{g}(x)}{p_{g}(x)}\right] dx = \log 2 - \mathrm{JS}\left(P_{d} \,\|\, P_{g}\right)$$
where $\mathrm{JS}\left(P_{d} \,\|\, P_{g}\right) = \frac{1}{2}\mathrm{KL}\left(P_{d} \,\Big\|\, \frac{P_{d} + P_{g}}{2}\right) + \frac{1}{2}\mathrm{KL}\left(P_{g} \,\Big\|\, \frac{P_{d} + P_{g}}{2}\right)$ specifies the Jensen-Shannon divergence, which is an $f$-divergence.
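The pointwise infima behind Sections 3.3.2–3.3.5 can be verified numerically. The snippet below (our own check, with arbitrary fixed densities $a = p_{d}(x)$ and $b = p_{g}(x)$) compares a grid minimization of $\ell(\alpha)a + \ell(-\alpha)b$ with the closed forms stated above:

```python
import numpy as np

a, b = 0.6, 0.3                       # p_d(x), p_g(x) at a fixed point x
alpha = np.linspace(-10.0, 10.0, 200_001)

def pointwise_inf(loss):
    # grid approximation of inf_alpha [ loss(alpha) * a + loss(-alpha) * b ]
    return np.min(loss(alpha) * a + loss(-alpha) * b)

hinge = lambda v: np.maximum(0.0, 1.0 - v)
expo = lambda v: np.exp(-v)
lsq = lambda v: (1.0 - v) ** 2
logis = lambda v: np.log1p(np.exp(-v))

closed = {
    "hinge":       2.0 * min(a, b),                              # total variation
    "exponential": 2.0 * np.sqrt(a * b),                         # Hellinger
    "least sq":    4.0 * a * b / (a + b),                        # triangular
    "logistic":    a * np.log((a + b) / a) + b * np.log((a + b) / b),  # JS
}
```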
4 Kernelized Generative Adversarial Networks
4.1 The Main Idea of KGAN
Given a p.s.d., symmetric, and shift-invariant kernel $K$ with the feature map $\Phi$, we consider the Reproducing Kernel Hilbert Space (RKHS) $\mathcal{H}_{K}$ of this kernel as the discriminator family. Therefore, each discriminator parameterized by a vector $w$ (i.e., $h_{w}$) has the following formulation:
$$h_{w}(x) = \left\langle w, \Phi(x)\right\rangle$$
To speed up the computation and enable the use of backpropagation in training, we approximate $K$ using the random feature kernel $\tilde{K}$ whose random feature map is $\tilde{\Phi}$, and hence restrict the discriminator family to the RKHS of the approximate kernel. Each discriminator parameterized by a vector $w$ (i.e., $h_{w}$) has the following formulation:
$$h_{w}(x) = w^{\top}\tilde{\Phi}(x)$$
The max-min problem for minimizing the divergence between the two distributions $P_{d}$ and $P_{g}$ is as follows:
$$\max_{\psi}\ \min_{w}\left\{\frac{1}{2}\mathbb{E}_{x \sim P_{d}}\left[\ell\left(h_{w}(x)\right)\right] + \frac{1}{2}\mathbb{E}_{z \sim P_{z}}\left[\ell\left(-h_{w}\left(G_{\psi}(z)\right)\right)\right]\right\}$$
where we assume that the generator $G_{\psi}$ is an NN-based network parameterized by $\psi$. We can further rewrite the above max-min problem as:

(9) $\max_{\psi}\ \min_{w}\left\{\frac{1}{2}\mathbb{E}_{x \sim P_{d}}\left[\ell\left(w^{\top}\tilde{\Phi}(x)\right)\right] + \frac{1}{2}\mathbb{E}_{z \sim P_{z}}\left[\ell\left(-w^{\top}\tilde{\Phi}\left(G_{\psi}(z)\right)\right)\right]\right\}$

The advantage of the max-min problem in Eq. (9) is that we are employing a very powerful family of discriminators, yet each of them is linear in the RKHS, which opens a door for us to employ Fenchel duality to elegantly transform the max-min problem into a max-max problem, which is much easier to tame. Moreover, the max-min problem in Eq. (9) can be further explained as using the linear models in the RKHS to enforce the two push-forward distributions $\tilde{\Phi} \# P_{d}$ and $\tilde{\Phi} \# P_{g}$ of $P_{d}$ and $P_{g}$ via the transformation $\tilde{\Phi}$ to be equal. To further clarify this claim, it is always true that $P_{d} = P_{g}$ implies $\tilde{\Phi} \# P_{d} = \tilde{\Phi} \# P_{g}$, while the converse statement holds if $\tilde{\Phi}$ is a bijection. It is very well known in kernel methods that data become more compact in the feature space and linear models in this space are sufficient to classify data well, hence pushing $\tilde{\Phi} \# P_{g}$ toward $\tilde{\Phi} \# P_{d}$.
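To illustrate a kernel-based discriminator that is linear in the random feature space, the following sketch (all data and names are ours, not the paper's experimental setup) trains $h_{w}(x) = w^{\top}\tilde{\Phi}(x)$ with the logistic loss by plain gradient descent on toy real/fake samples:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, D = 500, 2, 300

# toy "real" and "fake" samples (hypothetical stand-ins for P_d and P_g)
real = rng.normal(loc=+1.5, scale=0.7, size=(n, d))
fake = rng.normal(loc=-1.5, scale=0.7, size=(n, d))
X = np.vstack([real, fake])
y = np.concatenate([np.ones(n), -np.ones(n)])

# random Fourier features for a Gaussian kernel with bandwidth sigma
sigma = 1.0
omega = rng.normal(0.0, 1.0 / sigma, size=(D, d))

def phi(X):
    P = X @ omega.T
    return np.hstack([np.cos(P), np.sin(P)]) / np.sqrt(D)

Z = phi(X)
w = np.zeros(2 * D)

# minimize the empirical logistic loss  mean_i log(1 + exp(-y_i <w, phi(x_i)>))
lr = 1.0
for _ in range(300):
    margins = y * (Z @ w)
    grad = -(Z * (y / (1.0 + np.exp(margins)))[:, None]).mean(axis=0)
    w -= lr * grad

accuracy = np.mean(np.sign(Z @ w) == y)   # linear in feature space, nonlinear in x
```

On such well-separated clusters, the linear-in-feature-space discriminator should classify nearly all samples correctly.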
4.2 The Fenchel Dual Optimization
Since in reality we often do not collect enough data, we usually employ a regularizer to avoid overfitting. We now define the following convex objective function with the regularizer $\Omega(w)$ as:
$$\mathcal{J}\left(w, \psi\right) = \frac{1}{2}\mathbb{E}_{x \sim P_{d}}\left[\ell\left(w^{\top}\tilde{\Phi}(x)\right)\right] + \frac{1}{2}\mathbb{E}_{z \sim P_{z}}\left[\ell\left(-w^{\top}\tilde{\Phi}\left(G_{\psi}(z)\right)\right)\right] + \lambda\,\Omega(w)$$
and propose solving the max-min problem $\max_{\psi}\min_{w}\mathcal{J}\left(w, \psi\right)$.
We first start with $\min_{w}\mathcal{J}\left(w, \psi\right)$. Writing the loss via its biconjugate, $\ell(v) = \max_{u}\left\{uv - \ell^{*}(u)\right\}$, we derive as follows:

(10) $\min_{w}\mathcal{J}\left(w, \psi\right) = \min_{w}\max_{u_{1}, u_{2}}\left\{\frac{1}{2}\mathbb{E}_{P_{d}}\left[u_{1}(x)\, w^{\top}\tilde{\Phi}(x) - \ell^{*}\left(u_{1}(x)\right)\right] + \frac{1}{2}\mathbb{E}_{P_{z}}\left[-u_{2}(z)\, w^{\top}\tilde{\Phi}\left(G_{\psi}(z)\right) - \ell^{*}\left(u_{2}(z)\right)\right] + \lambda\,\Omega(w)\right\} \overset{(1)}{\geq} \max_{u_{1}, u_{2}}\min_{w}\left\{\cdots\right\}$

where $u_{1}(\cdot)$ and $u_{2}(\cdot)$ are the dual variables associated with the real data $x \sim P_{d}$ and the noise $z \sim P_{z}$, respectively.
Therefore, we achieve the following inequality:

(11) $\min_{w}\mathcal{J}\left(w, \psi\right) \geq \max_{u_{1}, u_{2}}\mathcal{K}\left(u_{1}, u_{2}, \psi\right)$

where we have defined
$$\mathcal{K}\left(u_{1}, u_{2}, \psi\right) = -\frac{1}{2}\mathbb{E}_{P_{d}}\left[\ell^{*}\left(u_{1}(x)\right)\right] - \frac{1}{2}\mathbb{E}_{P_{z}}\left[\ell^{*}\left(u_{2}(z)\right)\right] - \lambda\,\Omega^{*}\!\left(-\frac{1}{2\lambda}\left(\mathbb{E}_{P_{d}}\left[u_{1}(x)\tilde{\Phi}(x)\right] - \mathbb{E}_{P_{z}}\left[u_{2}(z)\tilde{\Phi}\left(G_{\psi}(z)\right)\right]\right)\right)$$
The inequality in Eq. (11) reveals that instead of solving the max-min problem $\max_{\psi}\min_{w}\mathcal{J}\left(w, \psi\right)$, we can alternatively solve the max-max problem $\max_{\psi}\max_{u_{1}, u_{2}}\mathcal{K}\left(u_{1}, u_{2}, \psi\right)$, which allows us to update all variables simultaneously. The inequality in Eq. (11) becomes an equality if the inequality (1) in Eq. (10) is an equality. In Section 5, we point out some sufficient conditions for this equality.
4.3 Regularizers
We now introduce the regularizers that can be used in our KGAN. The first regularizer mainly consists of the empirical loss on the training set, like the optimization problem in GAN, whilst the second one really adds a regularization quantity to the empirical loss.
The first regularizer is of the following form:
The corresponding Fenchel duality has the following form:
where $\left\|\cdot\right\|_{*}$ denotes the dual norm of the norm $\left\|\cdot\right\|$.
The second regularizer is the norm:
The corresponding Fenchel duality has the following form:
4.4 The Fenchel Conjugate of Loss Function
4.4.1 Logistic Loss
The logistic loss has the following form:
$$\ell(v) = \log\left(1 + e^{-v}\right)$$
Its Fenchel conjugate is of the following form:
$$\ell^{*}(u) = \begin{cases} (-u)\log(-u) + (1 + u)\log(1 + u) & \text{if } -1 \leq u \leq 0 \\ +\infty & \text{otherwise} \end{cases}$$
where we use the convention $0\log 0 = 0$.
4.4.2 Hinge Loss
The hinge loss has the following form:
$$\ell(v) = \max\left\{0, 1 - v\right\}$$
Its Fenchel conjugate is of the following form:
$$\ell^{*}(u) = \begin{cases} u & \text{if } -1 \leq u \leq 0 \\ +\infty & \text{otherwise} \end{cases}$$
4.4.3 Exponential Loss
The exponential loss has the following form:
$$\ell(v) = e^{-v}$$
Its Fenchel conjugate is of the following form:
$$\ell^{*}(u) = \begin{cases} u - u\log(-u) & \text{if } u \leq 0 \\ +\infty & \text{otherwise} \end{cases}$$
with the convention $0\log 0 = 0$.
4.4.4 Least Square Loss
The least square loss has the following form:
$$\ell(v) = \left(1 - v\right)^{2}$$
Its Fenchel conjugate is of the following form:
$$\ell^{*}(u) = u + \frac{u^{2}}{4}$$
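The conjugate formulas above can be sanity-checked by a direct grid computation of $\sup_{v}\left\{uv - \ell(v)\right\}$; the snippet below (our own check) does so for the hinge and least square losses at a few points $u$ in their effective domains:

```python
import numpy as np

v = np.linspace(-30.0, 30.0, 600_001)

def conj(loss, u):
    # numerical Fenchel conjugate: sup_v { u * v - loss(v) } over a dense grid
    return np.max(u * v - loss(v))

hinge = lambda t: np.maximum(0.0, 1.0 - t)   # hinge loss
lsq = lambda t: (1.0 - t) ** 2               # least square loss

# closed forms predict: hinge* (u) = u on [-1, 0]; lsq*(u) = u + u^2 / 4 on R
hinge_vals = {u: conj(hinge, u) for u in (-0.8, -0.3, 0.0)}
lsq_vals = {u: conj(lsq, u) for u in (-2.0, 0.5, 3.0)}
```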
5 Theory Related to KGAN
We are further able to prove that $\tilde{\Phi}$ is a one-to-one feature map under conditions involving $\operatorname{diam}\left(\mathcal{X}\right)$ and $\left\|\Sigma\right\|_{F}$, where $\operatorname{diam}\left(\mathcal{X}\right)$ denotes the diameter of the set $\mathcal{X}$ and $\left\|\Sigma\right\|_{F}$ denotes the Frobenius norm of the matrix $\Sigma$. This is stated in the following theorem.
Theorem 1.
If $\Sigma$ is a nonsingular (i.e., positive definite) matrix and the aforementioned conditions on $\operatorname{diam}\left(\mathcal{X}\right)$ and $\left\|\Sigma\right\|_{F}$ hold, then $\tilde{\Phi}$ is a one-to-one feature map.
We now state the theorem that shows the relationship between the two equations $\tilde{\Phi} \# P_{g} = \tilde{\Phi} \# P_{d}$ and $P_{g} = P_{d}$. It is obvious that $P_{g} = P_{d}$ leads to $\tilde{\Phi} \# P_{g} = \tilde{\Phi} \# P_{d}$. We can then prove that the converse statement holds if $\tilde{\Phi}$ is a one-to-one map.
Proposition 2.
If the random feature map $\tilde{\Phi}$ is a one-to-one map from $\mathcal{X}$ to $\tilde{\Phi}\left(\mathcal{X}\right)$, then $\tilde{\Phi} \# P_{g} = \tilde{\Phi} \# P_{d}$ implies $P_{g} = P_{d}$.
We now present and prove some sufficient conditions under which the max-min problem is equivalent to the max-max problem. This equivalence holds when, in Eq. (10), the min-max and the max-min coincide, i.e., the inequality (1) holds with equality.
To achieve some sufficient conditions for this equivalence, we use the minimax theorems of (Sion, 1958), which we present here for completeness.
Theorem 3.
Let $\mathcal{X}, \mathcal{Y}$ be any spaces and let $f$ be a function over $\mathcal{X} \times \mathcal{Y}$ that is convex-concave-like, i.e., $f\left(\cdot, y\right)$ is a convex function over $\mathcal{X}$ for all $y \in \mathcal{Y}$ and $f\left(x, \cdot\right)$ is a concave function over $\mathcal{Y}$ for all $x \in \mathcal{X}$.
i) If $\mathcal{X}$ is compact and $f\left(\cdot, y\right)$ is continuous for all $y$, then $\min_{x}\sup_{y} f\left(x, y\right) = \sup_{y}\min_{x} f\left(x, y\right)$.
ii) If $\mathcal{Y}$ is compact and $f\left(x, \cdot\right)$ is continuous for all $x$, then $\inf_{x}\max_{y} f\left(x, y\right) = \max_{y}\inf_{x} f\left(x, y\right)$.
Using Theorem 3, we arrive at some sufficient conditions for the equivalence of the max-min and max-max problems, as stated in Theorem 4.
Theorem 4.
The max-min problem is equivalent to the max-max problem if one of the following statements holds:
i) We limit our discriminator family to $\left\{h_{w} : w \in \mathcal{W}\right\}$, where $\mathcal{W}$ is a compact set.
ii) $P_{z}$ is a discrete distribution, e.g., $P_{z} = \sum_{i=1}^{n}\pi_{i}\delta_{z_{i}}$, where $\delta_{z}$ is the atom measure.
6 Conclusion
In this paper, we have proposed a new viewpoint for GANs, termed the minimizing general loss viewpoint, which points out a connection between the general loss of a classification problem with respect to a convex loss function $\ell$ and a certain $f$-divergence between the true and fake data distributions. In particular, we have proposed a setting for the classification problem of the true and fake data, wherein we can prove that the general loss of this classification problem is exactly the negative $f$-divergence for a certain convex function $f$. This enables us to convert the problem of learning the generator by minimizing the $f$-divergence between the true and fake data distributions into that of maximizing the general loss. This viewpoint extends the loss functions usable in discriminators to any decreasing convex loss function and suggests the use of kernel-based discriminators. This family has two appealing features: i) a powerful capacity for classifying data of a nonlinear nature and ii) linearity in the feature space, which enables the application of Fenchel duality to equivalently transform the max-min problem into a max-max dual problem.
Appendix A All Proofs
In this appendix, we present all proofs stated in this manuscript.
Proof of Theorem 1
We need to verify that if $\tilde{\Phi}(x) = \tilde{\Phi}\left(x'\right)$ then $x = x'$. We start with the equality $\tilde{\Phi}(x)^{\top}\tilde{\Phi}\left(x'\right) = \left\|\tilde{\Phi}(x)\right\|^{2} = 1$.
It follows that

(12) $\frac{1}{D}\sum_{i=1}^{D}\cos\left(\omega_{i}^{\top}u\right) = 1$

where $u = x - x'$ and $\omega_{1}, \ldots, \omega_{D}$ are the random frequencies. Noting that $\cos\left(\omega_{i}^{\top}u\right) \leq 1$ for all $i$, from the equality in Eq. (12) we gain that $\cos\left(\omega_{i}^{\top}u\right) = 1$ for all $i$. In addition, under the assumed conditions, we have $\left|\omega_{i}^{\top}u\right| < 2\pi$. It follows that $\omega_{i}^{\top}u = 0$ for all $i$.
Since $D \geq d$, we can find $d$ linearly independent vectors inside the set $\left\{\omega_{1}, \ldots, \omega_{D}\right\}$. Without loss of generality, we assume that they are $\omega_{1}, \ldots, \omega_{d}$. Combining this with the fact that $\Sigma$ is not a singular matrix, we gain that $\left\{\omega_{1}, \ldots, \omega_{d}\right\}$ is a basis of $\mathbb{R}^{d}$. Hence, $u$ can be represented as a linear combination of this basis, which means
$$u = \sum_{i=1}^{d} a_{i}\,\omega_{i}$$
It follows that
$$\left\|u\right\|^{2} = \sum_{i=1}^{d} a_{i}\,\omega_{i}^{\top}u = 0$$
Therefore, we arrive at $x = x'$.
Proof of Proposition 2
This is trivial from the fact that $\tilde{\Phi} \# P_{g}$ and $\tilde{\Phi} \# P_{d}$ are the push-forward measures of $P_{g}$ and $P_{d}$ via the transformation $\tilde{\Phi}$.
Proof of Theorem 4
It is obvious that the objective is a convex-concave-like function, since given the dual variables $\left(u_{1}, u_{2}\right)$, it is a convex function w.r.t. $w$, and given $w$, it is a concave function w.r.t. $\left(u_{1}, u_{2}\right)$. Our task thus reduces to verifying that either the domain of $w$ or that of $\left(u_{1}, u_{2}\right)$ is compact.
i) The domain of $w$ is the compact set $\mathcal{W}$. This leads to the conclusion.
ii) Since $\ell^{*}$ is only finite on a bounded interval, the domain of each dual variable is a product of compact intervals, which is a compact set. We note that in this case, the expectations over the discrete distribution reduce to finite sums, so the dual variables become finite-dimensional vectors, which leads to the conclusion.