# KGAN: How to Break The Minimax Game in GAN

Generative Adversarial Networks (GANs) were intuitively and attractively explained from the perspective of game theory, wherein the two involved parties are a discriminator and a generator. In this game, the task of the discriminator is to discriminate the real and generated (i.e., fake) data, whilst the task of the generator is to generate fake data that maximally confuses the discriminator. In this paper, we propose a new viewpoint for GANs, which we term the minimizing general loss viewpoint. This viewpoint shows a connection between the general loss of a classification problem with respect to a convex loss function and an f-divergence between the true and fake data distributions. Mathematically, we propose a setting for the classification problem of the true and fake data, wherein we can prove that the general loss of this classification problem is exactly the negative f-divergence for a certain convex function f. This allows us to interpret the problem of learning the generator for minimizing the f-divergence between the true and fake data distributions as that of maximizing the general loss, which is equivalent to the min-max problem in GAN if the logistic loss is used in the classification problem. Moreover, this viewpoint strengthens GANs in two ways. First, it allows us to employ any convex loss function for the discriminator. Second, it suggests that rather than limiting ourselves to NN-based discriminators, we can alternatively utilize other powerful families. With this viewpoint, we then propose using the kernel-based family for discriminators. This family has two appealing features: i) a powerful capacity in classifying data of a non-linear nature and ii) being convex in the feature space. Using the convexity of this family, we can further apply Fenchel duality to equivalently transform the max-min problem into the max-max dual problem.


## 1 Introduction

A generative model is a model that is capable of generating observable samples abiding by a given data distribution or mimicking the data samples drawn from an unknown distribution. Generative models are worth studying for the following reasons: i) they help increase our ability to represent and manipulate high-dimensional probability distributions; ii) they can be incorporated into reinforcement learning in several ways; and iii) they can be trained with missing data and can provide predictions on inputs that are missing data (Goodfellow, 2017).

The works on generative models can be categorized according to the taxonomy shown in Figure 1 (Goodfellow, 2017). In the left branch of the taxonomic tree, the explicit density node specifies the models that come with an explicit model density function. Maximum likelihood inference is then straightforward with an explicit objective function. The tractability and precision of inference depend entirely on the choice of the density family. This family must be chosen to represent the true data distribution well whilst keeping inference tractable. Under the explicit density node, at the leftmost position, the tractable density node covers the models whose explicit density functions are computationally tractable. Well-known models under this umbrella include fully visible belief nets (Frey et al., 1995), PixelRNN (Oord et al., 2016), Nonlinear ICA (Deco and Brauer, 1995), and Real NVP (Dinh et al., 2016). In contrast to the tractable density node, the approximate density node points out the models that have an explicit density function but are computationally intractable. The remedy for this intractability is to approximate the true density function using either variational methods (Kingma and Welling, 2013; Rezende et al., 2014) or Markov chain approximations (Fahlman et al., 1983; Hinton et al., 1984).

Some generative models can be trained without any model assumption. These implicit models fall under the umbrella of the implicit density node. Some models under this umbrella draw samples by formulating a Markov chain transition operator that must be applied several times to obtain a sample from the model (Bengio et al., 2013). Another existing state-of-the-art model under this umbrella is the Generative Adversarial Network (GAN) (Goodfellow et al., 2014). GAN introduced a very novel and powerful way of thinking wherein the generative model is viewed as a mini-max game consisting of two players (i.e., a discriminator and a generator). The discriminator attempts to discriminate the true data samples from the generated samples, whilst the generator tries to generate samples that mimic the true data samples to maximally challenge the discriminator. The theory behind GAN shows that if the model converges to the Nash equilibrium point, the resulting generated distribution minimizes its Jensen-Shannon divergence to the true data distribution (Goodfellow et al., 2014). The seminal GAN opened a new line of thinking that offers a foundation for a variety of works (Radford et al., 2015; Denton et al., 2015; Ledig et al., 2016; Zhu et al., 2016; Nowozin et al., 2016; Metz et al., 2016; Nguyen et al., 2017; Hoang et al., 2017). However, because of their mini-max flavor, training GAN(s) is really challenging. Besides, even if we can perfectly train GAN(s), due to the nature of Jensen-Shannon divergence minimization, GAN(s) still encounter the mode collapse issue (Theis et al., 2015).

In this paper, we first propose to view GAN(s) under another viewpoint, which we term the minimizing general loss viewpoint. Intuitively, since we are not handed the formulas of either the true or generated data distribution, GAN(s) elegantly invoke a strong discriminator (i.e., classifier) to implicitly quantify how far apart these two distributions are. Concretely, if the two distributions are far away, the task of the discriminator is much easier, with a small resulting loss; in contrast, as they move closer, the task of the discriminator becomes harder, with an increasing resulting loss. Eventually, when the two distributions are completely mixed up, the resulting loss of the best discriminator is maximized; hence we arrive at the max-min problem, where the inner minimization finds the optimal discriminator given a generator and the outer maximization finds the optimal generator that maximally confuses the optimal discriminator. Mathematically, we prove that given a convex loss function $\ell$, the general loss of the classification problem for discriminating the true and fake data is a negative $f$-divergence between the true and fake data distributions for a certain convex function $f$. It follows that we maximize the general loss to minimize the $f$-divergence between the two involved distributions. The viewpoint further explains why in practice we can use many loss functions in training GANs while still obtaining good-quality generated samples. Furthermore, the proposed viewpoint also reveals that we can freely employ any sufficient-capacity family for discriminators instead of limiting ourselves to the NN-based family alone. Bearing this observation, we propose using kernel-based discriminators for classifying the real and fake data. This kernel-based family has a powerful capacity, while being linear (and hence convex) in the feature space (Cortes and Vapnik, 1995). This allows us to apply Fenchel duality to equivalently transform the max-min problem into the max-max dual problem.

## 2 Related Background

In this section, we present the related background used in our work. We depart with the introduction of the Fenchel conjugate, a well-known notion in convex analysis, followed by the introduction of Fourier random features (Rahimi and Recht, 2007), which can be used to approximate a shift-invariant and positive semi-definite kernel.

### 2.1 Fenchel Conjugate

Given a convex function $f$, the Fenchel conjugate $f^*$ of this function is defined as

$$f^*(t) = \max_{u \in \mathrm{dom}(f)} \big(u^\top t - f(u)\big)$$

Regarding the Fenchel conjugate, we have the following properties:

1. Argmax: If the function $f$ is strongly convex, the optimal argument $\bar{u} = \operatorname{argmax}_{u}\big(u^\top t - f(u)\big)$ is exactly $\nabla f^*(t)$.

2. Young's inequality: Given $u$ and $t$, we have the inequality $u^\top t \leq f(u) + f^*(t)$. The equality occurs if $u = \nabla f^*(t)$.

3. Fenchel–Moreau theorem: If $f$ is convex and continuous, then the conjugate of the conjugate (known as the biconjugate) is the original function, i.e., $f^{**} = f$, which means that

$$f(t) = \max_{u \in \mathrm{dom}(f^*)} \big(u^\top t - f^*(u)\big)$$

4. The Legendre transform property: For strictly convex differentiable functions, the gradient of the convex conjugate maps a point $t$ in the dual space into the point $u$ at which $t$ is the gradient of $f$: $\nabla f^*(t) = u$ where $t = \nabla f(u)$.
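These properties can be checked numerically. The sketch below (a numerical illustration, not part of the original derivation) evaluates the conjugate by brute-force maximization on a grid for the strongly convex function $f(u) = \frac{1}{2}u^2$, whose conjugate is known in closed form to be $f^*(t) = \frac{1}{2}t^2$, and verifies Young's inequality at an arbitrary point:

```python
import numpy as np

def conjugate(f, t, grid):
    """Approximate f*(t) = max_u (u*t - f(u)) by searching a dense grid."""
    return np.max(grid * t - f(grid))

f = lambda u: 0.5 * u ** 2            # strongly convex, self-conjugate
grid = np.linspace(-10.0, 10.0, 200001)

for t in [-2.0, 0.5, 3.0]:
    approx = conjugate(f, t, grid)
    exact = 0.5 * t ** 2              # closed-form conjugate of 0.5*u^2
    assert abs(approx - exact) < 1e-3
    # Young's inequality u*t <= f(u) + f*(t) holds for any u:
    u = 1.7
    assert u * t <= f(u) + exact + 1e-12
```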

### 2.2 Fourier Random Feature Representation

The feature map $\Phi$ is implicitly defined, and the inner product $\Phi(x)^\top \Phi(x')$ is evaluated through a kernel $K(x, x')$. To construct an explicit representation of $\Phi$, the key idea is to approximate the symmetric and positive semi-definite (p.s.d.) kernel $K$ with a kernel $\tilde{K}$ induced by a random finite-dimensional feature map (Rahimi and Recht, 2007). The mathematical tool behind this approximation is Bochner's theorem (Bochner, 1959), which states that every shift-invariant, p.s.d. kernel $K(x, x') = k(x - x')$ can be represented as an inverse Fourier transform of a proper distribution $p(\omega)$ as below:

$$K(x, x') = k(u) = \int p(\omega)\, e^{i\omega^\top u}\, d\omega \quad (1)$$

where $u = x - x'$ and $i$ represents the imaginary unit (i.e., $i^2 = -1$). In addition, the corresponding proper distribution $p(\omega)$ can be recovered through the Fourier transform of the kernel function $k(\cdot)$ as:

$$p(\omega) = \left(\frac{1}{2\pi}\right)^{d} \int k(u)\, e^{-i u^\top \omega}\, du \quad (2)$$

Popular shift-invariant kernels include the Gaussian, Laplacian, and Cauchy kernels. For our work, we employ the Gaussian kernel $K(x, x') = \exp\big(-\frac{1}{2}(x - x')^\top \Sigma (x - x')\big)$ parameterized by the covariance matrix $\Sigma$. With this choice, substituting into Eq. (2) yields a closed form for the probability distribution, which is $p(\omega) = \mathcal{N}(\omega \mid 0, \Sigma)$.

This suggests a Monte Carlo approximation to the kernel in Eq. (1):

$$K(x, x') = \mathbb{E}_{\omega \sim p(\omega)}\big[\cos\big(\omega^\top (x - x')\big)\big] \approx \frac{1}{D} \sum_{i=1}^{D} \cos\big(\omega_i^\top (x - x')\big) \quad (3)$$

where we have sampled $\omega_i \sim p(\omega)$ for $i = 1, \dots, D$.

Eq. (3) sheds light on the construction of a $2D$-dimensional random feature map $\tilde{\Phi}$:

$$\tilde{\Phi}(x) = \left[\frac{1}{\sqrt{D}}\cos(\omega_i^\top x),\; \frac{1}{\sqrt{D}}\sin(\omega_i^\top x)\right]_{i=1}^{D} \quad (4)$$

resulting in the approximate kernel $\tilde{K}(x, x') = \tilde{\Phi}(x)^\top \tilde{\Phi}(x')$, which can accurately and efficiently approximate the original kernel (Rahimi and Recht, 2007).
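The construction above can be sketched in a few lines. The following minimal illustration (dimensions, seed, and the identity covariance are assumptions for the example) checks the Monte Carlo estimate of Eq. (3) against the closed-form Gaussian kernel:

```python
import numpy as np

rng = np.random.default_rng(0)
d, D = 5, 20000                      # input dimension, number of random features
Sigma = np.eye(d)                    # Gaussian-kernel covariance (assumed identity)

# Sample omega_i ~ N(0, Sigma) and build the 2D-dimensional map of Eq. (4).
omega = rng.multivariate_normal(np.zeros(d), Sigma, size=D)   # shape (D, d)

def phi(x):
    u = omega @ x                                             # shape (D,)
    return np.concatenate([np.cos(u), np.sin(u)]) / np.sqrt(D)

x, xp = rng.standard_normal(d), rng.standard_normal(d)
diff = x - xp
exact = np.exp(-0.5 * diff @ Sigma @ diff)   # closed-form kernel value K(x, x')
approx = phi(x) @ phi(xp)                    # random feature estimate ~K(x, x')
assert abs(exact - approx) < 0.05
```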

### 2.3 Generative Adversarial Network

Given a data distribution $P_d$ whose p.d.f. is $p_d(x)$ with $x \in \mathbb{R}^d$, the aim of Generative Adversarial Networks (GAN) (Goodfellow et al., 2014; Goodfellow, 2017) is to train a neural-network based generator $G$ such that samples $G(z)$, fed by noise $z$ drawn from the noise distribution $P_z$, induce the generated distribution $P_g$ whose p.d.f. $p_g(x)$ coincides with the data distribution $p_d(x)$. This is realized by minimizing the Jensen-Shannon divergence between $P_g$ and $P_d$, which can be equivalently obtained by solving the following mini-max optimization problem:

$$\min_G \max_D \Big(\mathbb{E}_{P_d}\big[\log D(x)\big] + \mathbb{E}_{P_z}\big[\log\big(1 - D(G(z))\big)\big]\Big) \quad (5)$$

where $D$ is a neural-network based discriminator and, for a given $x$, $D(x)$ specifies the probability that $x$ is drawn from $P_d$ rather than $P_g$.

Under the game theory perspective, GAN can be viewed as a game of two players: the discriminator $D$ and the generator $G$. The discriminator tries to discriminate the generated (or fake) data from the real data, while the generator attempts to confuse the discriminator by gradually generating fake data that blend into the real data. The diagram of GAN is shown in Figure 2.

Since we do not end up with any closed-form expression for $p_g$, while still being able to generate data from this distribution, GAN(s) are regarded as an implicit density estimation method. The key remedy of GANs is to employ a strong discriminator (i.e., classifier) to implicitly quantify the divergence between $P_g$ and $P_d$. To further clarify this point, we rewrite the optimization problem in Eq. (5) as follows:

$$\max_G \min_D \left(\mathbb{E}_{P_d}\left[\log\frac{1}{D(x)}\right] + \mathbb{E}_{P_z}\left[\log\frac{1}{1 - D(G(z))}\right]\right) = \max_G \min_D \left(\mathbb{E}_{P_d}\left[\log\frac{1}{D(x)}\right] + \mathbb{E}_{P_g}\left[\log\frac{1}{1 - D(x)}\right]\right) \quad (6)$$

According to the optimization problem in Eq. (6), given a generator $G$, we need to train the discriminator $D$ that minimizes the general logistic loss over the data domain including the real and fake data. Using this general loss, we can implicitly estimate how far apart $P_g$ and $P_d$ are. In particular, if $P_g$ is far from $P_d$ then the general loss is very small, while if $P_g$ moves closer to $P_d$ then the general loss increases. In the following section, we strengthen this by proving that we can in fact substitute the logistic loss with any decreasing and convex loss, wherein the optimization problem in Eq. (6) can be equivalently interpreted as minimizing a certain symmetric $f$-divergence between $P_d$ and $P_g$.

In addition, the most challenging obstacle in solving the optimization problem of GAN in Eq. (5) is its mini-max flavor. Existing GAN(s) address this problem by alternately updating the discriminator and generator, which cannot accurately solve the mini-max problem, and the rendered solutions might cumulatively diverge from the optimal one.
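To make the divergence interpretation concrete, the following toy check (discrete distributions with made-up probability vectors, purely illustrative) evaluates the inner objective of Eq. (5) at the known optimal discriminator $D^*(x) = p_d(x)/(p_d(x) + p_g(x))$ and confirms that it equals $2\,I_{JS}(P_d \,\|\, P_g) - 2\log 2$:

```python
import numpy as np

# Toy discrete "densities" on a 3-point support (illustrative numbers).
pd = np.array([0.5, 0.3, 0.2])
pg = np.array([0.2, 0.3, 0.5])

# Optimal discriminator of the seminal GAN analysis.
d_star = pd / (pd + pg)
# Inner objective of Eq. (5): E_pd[log D*] + E_pg[log(1 - D*)].
value = np.sum(pd * np.log(d_star)) + np.sum(pg * np.log(1.0 - d_star))

# Jensen-Shannon divergence of the two distributions.
m = 0.5 * (pd + pg)
kl = lambda p, q: np.sum(p * np.log(p / q))
js = 0.5 * kl(pd, m) + 0.5 * kl(pg, m)

assert abs(value - (2.0 * js - 2.0 * np.log(2.0))) < 1e-12
```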

## 3 Minimal General Loss Networks

In this section, we theoretically show the connection between the problem of discriminating the real and fake data and the problem of minimizing the distance between $P_d$ and $P_g$. We start this section with the introduction of the setting for the classification problem, followed by proving that the general loss of this classification problem with a certain loss function $\ell$ is the negative $f$-divergence of $P_d$ and $P_g$ for some convex function $f$. Finally, we close this section by indicating some common pairs of loss function $\ell$ and convex function $f$.

### 3.1 The Setting of The Classification Problem

Given two distributions $P_d$ and $P_g$ with p.d.f.(s) $p_d$ and $p_g$ respectively, we define the distribution for generating common data instances as the mixture of the two aforementioned distributions:

$$p(x) = \frac{1}{2} p_d(x) + \frac{1}{2} p_g(x) \quad \text{or} \quad P(\cdot) = \frac{1}{2} P_d(\cdot) + \frac{1}{2} P_g(\cdot)$$

Since a data instance $x \sim P$ is drawn from either $P_d$ or $P_g$ with probability $\frac{1}{2}$ each, we use the following machinery to generate data instance and label pairs $(x, y)$ where $y \in \{-1, 1\}$:

• Randomly draw $x \sim P$.

• If $x$ is really drawn from $P_d$, its label $y$ is set to $1$. Otherwise, its label is set to $-1$.

Let us denote the joint distribution over $(x, y)$ by $P_{x,y}$, whose p.d.f. is $p(x, y)$. It is evident from our setting that:

$$p(x \mid y = 1) = p_d(x) \quad \text{and} \quad p(x \mid y = -1) = p_g(x), \qquad P(y = 1) = P(y = -1) = 0.5$$

Let $\mathcal{D}$ be a family of functions with an infinite capacity that contains the discriminators $D$, wherein we seek the optimal discriminator $D^*$. To form the criterion for finding the optimal discriminator, we recruit a decreasing and convex loss function $\ell(\cdot)$. The general loss w.r.t. a specific discriminator $D$ and the general loss over the discriminator space $\mathcal{D}$ are further defined as

$$\mathcal{R}_\ell(D) = \mathbb{E}_{P_{x,y}}\big[\ell\big(y D(x)\big)\big], \qquad \mathcal{R}_\ell(\mathcal{D}) = \inf_{D \in \mathcal{D}} \mathcal{R}_\ell(D)$$

In addition, the optimal discriminator is defined as the discriminator that minimizes the general loss, i.e., $D^* = \operatorname{argmin}_{D \in \mathcal{D}} \mathcal{R}_\ell(D)$.

### 3.2 The Relationship between the General Loss and f-divergence

In our setting, we can further derive the general loss over the space $\mathcal{D}$ as:

$$\begin{aligned} \mathcal{R}_\ell(\mathcal{D}) &= \inf_{D \in \mathcal{D}} \mathcal{R}_\ell(D) = \inf_{D \in \mathcal{D}} \mathbb{E}_{P_{x,y}}\big[\ell\big(yD(x)\big)\big] = \inf_{D \in \mathcal{D}} \sum_{y \in \{-1,1\}} \int \ell\big(yD(x)\big)\, p(x, y)\, dx \\ &= \inf_{D \in \mathcal{D}} \left\{\int \ell\big(D(x)\big)\, p(x, 1)\, dx + \int \ell\big(-D(x)\big)\, p(x, -1)\, dx\right\} \\ &= \frac{1}{2} \inf_{D \in \mathcal{D}} \left\{\int \ell\big(D(x)\big)\, p(x \mid y = 1)\, dx + \int \ell\big(-D(x)\big)\, p(x \mid y = -1)\, dx\right\} \\ &= \frac{1}{2} \inf_{D \in \mathcal{D}} \left\{\int \ell\big(D(x)\big)\, p_d(x)\, dx + \int \ell\big(-D(x)\big)\, p_g(x)\, dx\right\} \\ &= \frac{1}{2} \inf_{D \in \mathcal{D}} \int \Big[\ell\big(D(x)\big)\, p_d(x) + \ell\big(-D(x)\big)\, p_g(x)\Big]\, dx \\ &= \frac{1}{2} \inf_{D \in \mathcal{D}} \int \left[\ell\big(D(x)\big)\, \frac{p_d(x)}{p_g(x)} + \ell\big(-D(x)\big)\right] p_g(x)\, dx \end{aligned}$$

Since we assume that the discriminator family $\mathcal{D}$ has an infinite capacity, we can push the infimum inside the integral and proceed with the above derivation as follows:

$$\mathcal{R}_\ell(\mathcal{D}) = \frac{1}{2} \int \inf_\alpha \left[\ell(\alpha)\, \frac{p_d(x)}{p_g(x)} + \ell(-\alpha)\right] p_g(x)\, dx \quad (7)$$

Let us now denote

$$f(t) = -\inf_\alpha \big[\ell(\alpha)\, t + \ell(-\alpha)\big] \quad (8)$$

which is a convex function (as a supremum of affine functions of $t$). Plugging this function back into the above formulation, we obtain:

$$\mathcal{R}_\ell(\mathcal{D}) = -\frac{1}{2} \int f\!\left(\frac{p_d(x)}{p_g(x)}\right) p_g(x)\, dx = -\frac{1}{2}\, I_f(P_d \,\|\, P_g)$$

where $I_f(P_d \,\|\, P_g) = \int f\!\left(\frac{p_d(x)}{p_g(x)}\right) p_g(x)\, dx$ specifies the $f$-divergence between the two distributions.

It turns out that the general loss $\mathcal{R}_\ell(\mathcal{D})$ is proportional to the negative $f$-divergence $-I_f(P_d \,\|\, P_g)$, where the convex function $f$ is defined as in Eq. (8). It also follows that to minimize $I_f(P_d \,\|\, P_g)$, we can equivalently maximize the general loss, and hence arrive at the following max-min problem:

$$\sup_G \inf_D\; \mathbb{E}_{P_{x,y}}\big[\ell\big(yD(x)\big)\big]$$

The above max-min problem also keeps the spirit of GAN(s): the discriminator attempts to classify the real and fake data while the generator tries to confuse the discriminator. From now on, for the sake of simplicity, we replace sup and inf by max and min, respectively, though the mathematical soundness is slightly loosened. In particular, we need to tackle the max-min problem:

$$\max_G \min_D\; \mathbb{E}_{P_{x,y}}\big[\ell\big(yD(x)\big)\big]$$

It is worth noting that if the loss function is the logistic loss $\ell(\alpha) = \log\big(1 + e^{-\alpha}\big)$, then the corresponding $f$-divergence is the Jensen-Shannon (JS) divergence. In Section 3.3, we will indicate other loss function and $f$-divergence pairs.

### 3.3 Loss Function and f-divergence Pairs

#### 3.3.1 0-1 Loss

This loss has the form $\ell(\alpha) = \mathbb{I}_{\alpha \leq 0}$, where $\mathbb{I}$ is the indicator function. From Eq. (7), the optimal discriminator takes the form $D^*(x) = \operatorname{sign}\big(p_d(x) - p_g(x)\big)$ and the general loss takes the following form:

$$\mathcal{R}_{0\text{-}1}(\mathcal{D}) = \frac{1}{2} \int \min\{p_d(x), p_g(x)\}\, dx = \frac{1}{2} \int \left[\frac{p_d(x) + p_g(x)}{2} - \frac{\big|p_d(x) - p_g(x)\big|}{2}\right] dx = \frac{1}{2}\big(1 - I_{TV}(P_d \,\|\, P_g)\big)$$

where $I_{TV}(P_d \,\|\, P_g) = \frac{1}{2} \int \big|p_d(x) - p_g(x)\big|\, dx$ specifies the total variation distance between the two distributions.

#### 3.3.2 Hinge Loss

This loss has the form $\ell(\alpha) = \max\{0, 1 - \alpha\}$. From Eq. (7), the optimal discriminator takes the form $D^*(x) = \operatorname{sign}\big(p_d(x) - p_g(x)\big)$ and the general loss takes the following form:

$$\mathcal{R}_{\text{Hinge}}(\mathcal{D}) = \frac{1}{2} \int 2\min\{p_d(x), p_g(x)\}\, dx = \int \left[\frac{p_d(x) + p_g(x)}{2} - \frac{\big|p_d(x) - p_g(x)\big|}{2}\right] dx = 1 - I_{TV}(P_d \,\|\, P_g)$$

#### 3.3.3 Exponential Loss

This loss has the form $\ell(\alpha) = \exp(-\alpha)$. From Eq. (7), the optimal discriminator takes the form $D^*(x) = \frac{1}{2}\log\frac{p_d(x)}{p_g(x)}$ and the general loss takes the following form:

$$\mathcal{R}_{\exp}(\mathcal{D}) = \frac{1}{2} \int 2\sqrt{p_d(x)\, p_g(x)}\, dx = \frac{1}{2}\left[2 - \int \left(\sqrt{p_d(x)} - \sqrt{p_g(x)}\right)^2 dx\right]$$

where $\int \big(\sqrt{p_d(x)} - \sqrt{p_g(x)}\big)^2\, dx$ is the squared Hellinger distance (up to a constant factor).

#### 3.3.4 Least Square Loss

This loss has the form $\ell(\alpha) = (1 - \alpha)^2$. From Eq. (7), the optimal discriminator takes the form $D^*(x) = \frac{p_d(x) - p_g(x)}{p_d(x) + p_g(x)}$ and the general loss takes the following form:

$$\mathcal{R}_{\text{sqr}}(\mathcal{D}) = 1 - I_f(P_d \,\|\, P_g)$$

where $f(t) = \frac{(t - 1)^2}{2(t + 1)}$. In addition, this $f$-divergence is known as the triangular discrimination distance (up to a constant factor).

#### 3.3.5 Logistic Loss

This loss has the form $\ell(\alpha) = \log\big(1 + e^{-\alpha}\big)$. From Eq. (7), the optimal discriminator takes the form $D^*(x) = \log\frac{p_d(x)}{p_g(x)}$ and the general loss takes the following form:

$$\begin{aligned} \mathcal{R}_{\log}(\mathcal{D}) &= \frac{1}{2} \int \left[p_d(x) \log\frac{p_d(x) + p_g(x)}{p_d(x)} + p_g(x) \log\frac{p_d(x) + p_g(x)}{p_g(x)}\right] dx \\ &= \frac{1}{2}\left[2\log 2 - I_{KL}\!\left(P_d \,\Big\|\, \frac{P_d + P_g}{2}\right) - I_{KL}\!\left(P_g \,\Big\|\, \frac{P_d + P_g}{2}\right)\right] \\ &= \log 2 - I_{JS}(P_d \,\|\, P_g) \end{aligned}$$

where $I_{JS}$ specifies the Jensen-Shannon divergence, which is an $f$-divergence with $f(t) = \frac{1}{2}\left[t\log t - (t + 1)\log\frac{t + 1}{2}\right]$.
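The pairs above can be verified numerically. The sketch below (toy discrete distributions with illustrative numbers; the inner infimum of Eq. (7) is taken on a dense grid) checks the logistic-loss case against $\log 2 - I_{JS}(P_d \,\|\, P_g)$:

```python
import numpy as np

logistic = lambda a: np.log1p(np.exp(-a))   # l(alpha) = log(1 + e^{-alpha})
pd = np.array([0.5, 0.3, 0.2])              # toy discrete "densities"
pg = np.array([0.2, 0.3, 0.5])
alphas = np.linspace(-20.0, 20.0, 400001)   # dense grid for the inner infimum

# General loss of Eq. (7): (1/2) sum_x inf_a [l(a) pd/pg + l(-a)] pg.
risk = 0.0
for pdi, pgi in zip(pd, pg):
    inner = np.min(logistic(alphas) * pdi / pgi + logistic(-alphas))
    risk += 0.5 * inner * pgi

# Compare against log(2) - JS(P_d || P_g).
m = 0.5 * (pd + pg)
kl = lambda p, q: np.sum(p * np.log(p / q))
js = 0.5 * kl(pd, m) + 0.5 * kl(pg, m)
assert abs(risk - (np.log(2.0) - js)) < 1e-6
```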

## 4 Kernelized Generative Adversarial Networks

### 4.1 The Main Idea of KGAN

Given a p.s.d., symmetric, and shift-invariant kernel $K$ with the feature map $\Phi$, we consider the Reproducing Kernel Hilbert Space (RKHS) of this kernel as the discriminator family. Therefore, each discriminator, parameterized by a vector $w$ (i.e., $D_w$), has the following formulation:

$$D_w(x) = w^\top \Phi(x) = \sum_i \alpha_i\, K(z_i, x)$$

To speed up the computation and enable the use of backpropagation in training, we approximate $K$ using the random feature kernel $\tilde{K}$ whose random feature map is $\tilde{\Phi}$, and hence restrict the discriminator family to the RKHS of the approximate kernel. Each discriminator, parameterized by a vector $w$ (i.e., $D_w$), has the following formulation:

$$D_w(x) = w^\top \tilde{\Phi}(x) = \sum_i \alpha_i\, \tilde{K}(z_i, x)$$

The max-min problem for minimizing the $f$-divergence between the two distributions $P_d$ and $P_g$ is as follows:

$$\max_\psi \min_w\; \mathbb{E}_{P_{x,y}}\big[\ell\big(y\, w^\top \tilde{\Phi}(x)\big)\big]$$

where we assume that the generator $G_\psi$ is a NN-based network parameterized by $\psi$. We can further rewrite the above max-min problem as:

$$\max_\psi \min_w \Big(\mathbb{E}_{P_d}\big[\ell\big(w^\top \tilde{\Phi}(x)\big)\big] + \mathbb{E}_{P_z}\big[\ell\big(-w^\top \tilde{\Phi}(G_\psi(z))\big)\big]\Big) \quad (9)$$

The advantage of the max-min problem in Eq. (9) is that we are employing a very powerful family of discriminators, yet each of them is linear in the RKHS, which opens the door to employing Fenchel duality to elegantly transform the max-min problem into a max-max problem that is much easier to tame. Moreover, the max-min problem in Eq. (9) can be further explained as using the linear models in the RKHS to enforce the two push-forward distributions $\tilde{\Phi} \# P_d$ and $\tilde{\Phi} \# P_g$ of $P_d$ and $P_g$ via the transformation $\tilde{\Phi}$ to be equal. To further clarify this claim, it is always true that $P_d = P_g$ implies $\tilde{\Phi} \# P_d = \tilde{\Phi} \# P_g$, while the converse statement holds if $\tilde{\Phi}$ is a bijection. It is very well known in kernel methods that data become more compact in the feature space and linear models in this space are sufficient to classify data well, hence pushing $\tilde{\Phi} \# P_g$ toward $\tilde{\Phi} \# P_d$.
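As an illustration of a discriminator that is linear in the random feature space, the sketch below trains $D_w(x) = w^\top \tilde{\Phi}(x)$ with the logistic loss by plain gradient descent; the two one-dimensional Gaussians standing in for real and fake data, the bandwidth, the learning rate, and the iteration count are all assumptions made for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 100
omega = rng.normal(0.0, 1.0, size=D)       # p(omega) for a Gaussian kernel on R

def phi(x):                                # x: (n,) -> random features: (n, 2D)
    u = np.outer(x, omega)
    return np.hstack([np.cos(u), np.sin(u)]) / np.sqrt(D)

real = rng.normal(2.0, 1.0, 500)           # stand-in "real" samples
fake = rng.normal(-2.0, 1.0, 500)          # stand-in "fake" samples
X = phi(np.concatenate([real, fake]))
y = np.concatenate([np.ones(500), -np.ones(500)])

# Plain gradient descent on the empirical logistic risk E[l(y w^T phi(x))].
w = np.zeros(2 * D)
for _ in range(2000):
    margins = y * (X @ w)
    grad = -(X * (y / (1.0 + np.exp(margins)))[:, None]).mean(axis=0)
    w -= 1.0 * grad

acc = np.mean(np.sign(X @ w) == y)         # training accuracy of D_w
assert acc > 0.9
```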

### 4.2 The Fenchel Dual Optimization

Since in reality we often do not have enough data, we usually employ a regularizer to avoid overfitting. We now define the following convex objective function with the regularizer $\Omega(w)$ as:

$$g_\psi(w) = \Omega(w) + \mathbb{E}_{P_d}\big[\ell\big(w^\top \tilde{\Phi}(x)\big)\big] + \mathbb{E}_{P_z}\big[\ell\big(-w^\top \tilde{\Phi}(G_\psi(z))\big)\big]$$

and propose solving the max-min problem $\max_\psi \min_w g_\psi(w)$.

We first start with $\min_w g_\psi(w)$ and derive as follows:

$$\begin{aligned} \min_w g_\psi(w) &= \min_w \Big[\Omega(w) + \mathbb{E}_{P_d}\big[\ell\big(w^\top \tilde{\Phi}(x)\big)\big] + \mathbb{E}_{P_z}\big[\ell\big(-w^\top \tilde{\Phi}(G_\psi(z))\big)\big]\Big] \\ &= \min_w \max_{u,v} \Big[\Omega(w) + \mathbb{E}_{P_d}\big[u_x\, w^\top \tilde{\Phi}(x) - \ell^*(u_x)\big] + \mathbb{E}_{P_z}\big[-v_z\, w^\top \tilde{\Phi}(G_\psi(z)) - \ell^*(v_z)\big]\Big] \\ &\overset{(1)}{\geq} \max_{u,v} \min_w \Big[\Omega(w) + \mathbb{E}_{P_d}\big[u_x\, w^\top \tilde{\Phi}(x) - \ell^*(u_x)\big] + \mathbb{E}_{P_z}\big[-v_z\, w^\top \tilde{\Phi}(G_\psi(z)) - \ell^*(v_z)\big]\Big] \\ &= -\min_{u,v} \Big[\Omega^*\!\Big(-\mathbb{E}_{P_d}\big[u_x \tilde{\Phi}(x)\big] + \mathbb{E}_{P_z}\big[v_z \tilde{\Phi}(G_\psi(z))\big]\Big) + \mathbb{E}_{P_d}\big[\ell^*(u_x)\big] + \mathbb{E}_{P_z}\big[\ell^*(v_z)\big]\Big] \\ &= \max_{u,v} \Big[-\Omega^*\!\Big(-\mathbb{E}_{P_d}\big[u_x \tilde{\Phi}(x)\big] + \mathbb{E}_{P_z}\big[v_z \tilde{\Phi}(G_\psi(z))\big]\Big) - \mathbb{E}_{P_d}\big[\ell^*(u_x)\big] - \mathbb{E}_{P_z}\big[\ell^*(v_z)\big]\Big] \quad (10) \end{aligned}$$

where $u = \{u_x\}$ and $v = \{v_z\}$ with $u_x, v_z \in \mathrm{dom}(\ell^*)$.

Therefore, we achieve the following inequality:

$$\max_\psi \min_w g_\psi(w) \geq \max_\psi \max_{u,v} h_\psi(u, v) \quad (11)$$

where we have defined

$$h_\psi(u, v) = -\Omega^*\!\Big(-\mathbb{E}_{P_d}\big[u_x \tilde{\Phi}(x)\big] + \mathbb{E}_{P_z}\big[v_z \tilde{\Phi}(G_\psi(z))\big]\Big) - \mathbb{E}_{P_d}\big[\ell^*(u_x)\big] - \mathbb{E}_{P_z}\big[\ell^*(v_z)\big]$$

The inequality in Eq. (11) reveals that instead of solving the max-min problem $\max_\psi \min_w g_\psi(w)$, we can alternatively solve the max-max problem $\max_\psi \max_{u,v} h_\psi(u, v)$, which allows us to update all variables simultaneously. The inequality in Eq. (11) becomes an equality if the inequality (1) in Eq. (10) is an equality. In Section 5, we point out some sufficient conditions for this equality.

### 4.3 Regularizers

We now introduce the regularizers that can be used in our KGAN. The first regularizer mainly keeps the empirical loss on the training set like the optimization problem in GAN, whilst the second one really adds a regularization quantity to the empirical loss.

The first regularizer is of the following form:

$$\Omega(w) = \begin{cases} 0 & \text{if } \|w\| \leq C \\ +\infty & \text{otherwise} \end{cases}$$

The corresponding Fenchel conjugate has the following form:

$$\Omega^*(\theta) = \max_w \big(\theta^\top w - \Omega(w)\big) = \max_{\|w\| \leq C} \theta^\top w = C \max_{\|w\| \leq 1} \theta^\top w = C\, \|\theta\|_*$$

where $\|\cdot\|_*$ denotes the dual norm of the norm $\|\cdot\|$.

The second regularizer is the squared $\ell_2$ norm:

$$\Omega(w) = \frac{\lambda}{2} \|w\|_2^2$$

The corresponding Fenchel conjugate has the following form:

$$\Omega^*(\theta) = \frac{1}{2\lambda} \|\theta\|_2^2$$
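The $\ell_2$ case can be checked directly: the maximizer of $\theta^\top w - \frac{\lambda}{2}\|w\|_2^2$ is $w = \theta / \lambda$, and plugging it back in recovers $\frac{1}{2\lambda}\|\theta\|_2^2$. A minimal numeric sketch (the values of $\lambda$ and $\theta$ are arbitrary choices for the example):

```python
import numpy as np

lam = 0.7
theta = np.array([1.0, -2.0, 0.5])

# Stationary point of theta^T w - (lam/2)||w||^2 is w = theta / lam.
w_star = theta / lam
value = theta @ w_star - 0.5 * lam * (w_star @ w_star)

# The attained value equals the closed-form conjugate ||theta||^2 / (2*lam).
assert abs(value - (theta @ theta) / (2.0 * lam)) < 1e-12
```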

### 4.4 The Fenchel Conjugate of Loss Function

#### 4.4.1 Logistic Loss

The logistic loss has the following form:

$$\ell(\alpha) = \log\big(1 + \exp(-\alpha)\big)$$

Its Fenchel conjugate is of the following form:

$$\ell^*(\alpha) = \begin{cases} (-\alpha)\log(-\alpha) + (1 + \alpha)\log(1 + \alpha) & \text{if } -1 \leq \alpha \leq 0 \\ +\infty & \text{otherwise} \end{cases}$$

where we use the convention $0 \log 0 = 0$.

#### 4.4.2 Hinge Loss

The hinge loss has the following form:

$$\ell(\alpha) = \max\{0, 1 - \alpha\}$$

Its Fenchel conjugate is of the following form:

$$\ell^*(\alpha) = \begin{cases} \alpha & \text{if } -1 \leq \alpha \leq 0 \\ +\infty & \text{otherwise} \end{cases}$$

#### 4.4.3 Exponential Loss

The exponential loss has the following form:

$$\ell(\alpha) = \exp(-\alpha)$$

Its Fenchel conjugate is of the following form:

$$\ell^*(\alpha) = \begin{cases} -\alpha \log(-\alpha) + \alpha & \text{if } \alpha \leq 0 \\ +\infty & \text{otherwise} \end{cases}$$

where we again use the convention $0 \log 0 = 0$.

#### 4.4.4 Least Square Loss

The least square loss has the following form:

$$\ell(\alpha) = (1 - \alpha)^2$$

Its Fenchel conjugate is of the following form:

$$\ell^*(\alpha) = \frac{\alpha^2}{4} + \alpha$$
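These conjugates can be cross-checked by brute-force maximization of $t\alpha - \ell(\alpha)$ over a dense grid; the sketch below (grid range and test points are arbitrary choices) verifies the least-square and hinge cases:

```python
import numpy as np

grid = np.linspace(-50.0, 50.0, 2000001)

def conj(loss, t):
    """Approximate l*(t) = max_a (t*a - l(a)) on a dense grid."""
    return np.max(t * grid - loss(grid))

sq = lambda a: (1.0 - a) ** 2             # least square loss
hinge = lambda a: np.maximum(0.0, 1.0 - a)

# Least square: l*(t) = t + t^2/4 everywhere.
for t in [-3.0, -1.0, 0.5, 2.0]:
    assert abs(conj(sq, t) - (t + t ** 2 / 4.0)) < 1e-6

# Hinge: l*(t) = t on the interval [-1, 0] (the maximum sits at a = 1).
for t in [-1.0, -0.5, 0.0]:
    assert abs(conj(hinge, t) - t) < 1e-6
```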

## 5 Theory Related to KGAN

We are further able to prove that $\tilde{\Phi}$ is a one-to-one feature map if $D \geq d$ and the quantity $\operatorname{diam}(\mathcal{X})\, \big\|\Sigma^{1/2}\big\|_F$ is small enough, where $\operatorname{diam}(\mathcal{X})$ denotes the diameter of the data domain $\mathcal{X}$ and $\|\cdot\|_F$ denotes the Frobenius norm of a matrix. This is stated in the following theorem.

###### Theorem 1.

If $\Sigma$ is a non-singular (i.e., positive definite) matrix, $D \geq d$, and $\operatorname{diam}(\mathcal{X})\, \big\|\Sigma^{1/2}\big\|_F < 2\pi$, then $\tilde{\Phi}$ is a one-to-one feature map.

We now state the result that shows the relationship between the two equalities $P_d = P_g$ and $\tilde{\Phi} \# P_d = \tilde{\Phi} \# P_g$. It is very obvious that $P_d = P_g$ leads to $\tilde{\Phi} \# P_d = \tilde{\Phi} \# P_g$. We can then prove that the converse statement holds if $\tilde{\Phi}$ is a one-to-one map.

###### Proposition 2.

If the random feature map $\tilde{\Phi}$ is a one-to-one map from $\mathcal{X}$ to $\tilde{\Phi}(\mathcal{X})$, then $\tilde{\Phi} \# P_d = \tilde{\Phi} \# P_g$ implies $P_d = P_g$.

We now present and prove some sufficient conditions under which the max-min problem is equivalent to the max-max problem. This equivalence holds when, in Eq. (10), we obtain the equality:

$$\min_w \max_{u,v} \tau(w, u, v) = \max_{u,v} \min_w \tau(w, u, v)$$

where $\tau(w, u, v) = \Omega(w) + \mathbb{E}_{P_d}\big[u_x\, w^\top \tilde{\Phi}(x) - \ell^*(u_x)\big] + \mathbb{E}_{P_z}\big[-v_z\, w^\top \tilde{\Phi}(G_\psi(z)) - \ell^*(v_z)\big]$.

To achieve some sufficient conditions for the equivalence, we use the theorems in (Sion, 1958) which for completeness we present here.

###### Theorem 3.

Let $W$ and $V$ be any spaces, and let $\tau$ be a function over $W \times V$ that is a convex-concave-like function, i.e., $\tau(\cdot, v)$ is a convex function over $W$ for all $v \in V$ and $\tau(w, \cdot)$ is a concave function over $V$ for all $w \in W$.

i) If $W$ is compact and $\tau(\cdot, v)$ is continuous in $w$ for all $v \in V$, then $\min_w \sup_v \tau(w, v) = \sup_v \min_w \tau(w, v)$.

ii) If $V$ is compact and $\tau(w, \cdot)$ is continuous in $v$ for all $w \in W$, then $\inf_w \max_v \tau(w, v) = \max_v \inf_w \tau(w, v)$.

Using Theorem 3, we arrive at some sufficient conditions for the equivalence of the max-min and the max-max problems, as stated in Theorem 4.

###### Theorem 4.

The max-min problem is equivalent to the max-max problem if one of the following statements holds:

i) We limit our discriminator family to $\{D_w : \|w\| \leq C\}$, where the ball $\{w : \|w\| \leq C\}$ is a compact set (e.g., w.r.t. the $\ell_2$ or $\ell_\infty$ norm).

ii) The conjugate $\ell^*$ is finite only on a compact interval (e.g., the logistic or hinge loss) and $P$ is a discrete distribution, e.g., the empirical distribution $P = \frac{1}{N}\sum_{i=1}^{N} \delta_{x_i}$, where $\delta_x$ is the atom measure.

## 6 Conclusion

In this paper, we have proposed a new viewpoint for GANs, termed the minimizing general loss viewpoint, which points out a connection between the general loss of a classification problem with respect to a convex loss function and a certain $f$-divergence between the true and fake data distributions. In particular, we have proposed a setting for the classification problem of the true and fake data, wherein we can prove that the general loss of this classification problem is exactly the negative $f$-divergence for a certain convex function $f$. This enables us to convert the problem of learning the generator for minimizing the $f$-divergence between the true and fake data distributions into that of maximizing the general loss. This viewpoint extends the loss function used in discriminators to any convex loss function and suggests the use of kernel-based discriminators. This family has two appealing features: i) a powerful capacity in classifying data of a non-linear nature and ii) being convex in the feature space, which enables the application of Fenchel duality to equivalently transform the max-min problem into the max-max dual problem.

## Appendix A All Proofs

In this appendix, we present all proofs stated in this manuscript.

Proof of Theorem 1

We need to verify that if $\tilde{\Phi}(x) = \tilde{\Phi}(x')$ then $x = x'$. We start with

$$0 = \big\|\tilde{\Phi}(x) - \tilde{\Phi}(x')\big\|^2 = \tilde{K}(x, x) + \tilde{K}(x', x') - 2\tilde{K}(x, x') = 2 - 2\tilde{K}(x, x')$$

It follows that

$$1 = \tilde{K}(x, x') = \frac{1}{D} \sum_{i=1}^{D} \big(\cos(u_i)\cos(u_i') + \sin(u_i)\sin(u_i')\big) = \frac{1}{D} \sum_{i=1}^{D} \cos(u_i - u_i') = \frac{1}{D} \sum_{i=1}^{D} \cos\big(e_i^\top \Sigma^{1/2}(x - x')\big) \quad (12)$$

where $u_i = \omega_i^\top x$, $u_i' = \omega_i^\top x'$, and $\omega_i = \Sigma^{1/2} e_i$.

Noting that $\cos(\cdot) \leq 1$, the equality in Eq. (12) gives $\cos\big(e_i^\top \Sigma^{1/2}(x - x')\big) = 1$ for all $i$. In addition, we have $\big|e_i^\top \Sigma^{1/2}(x - x')\big| < 2\pi$. It follows that $e_i^\top \Sigma^{1/2}(x - x') = 0$ for all $i$.

Since $D \geq d$, we can find $d$ linearly independent vectors inside the set $\{e_1, \dots, e_D\}$. Without loss of generality, we assume that they are $e_1, \dots, e_d$. Combining this with the fact that $\Sigma$ is not a singular matrix, we gain that $\{\Sigma^{1/2} e_1, \dots, \Sigma^{1/2} e_d\}$ is also linearly independent. It implies that this set is a basis of $\mathbb{R}^d$. Hence, $x - x'$ can be represented as a linear combination over this basis, which means

$$x - x' = \sum_{i=1}^{d} \alpha_i\, \Sigma^{1/2} e_i$$

It follows that

$$\big\|x - x'\big\|^2 = \Big\langle x - x',\; \sum_{i=1}^{d} \alpha_i\, \Sigma^{1/2} e_i \Big\rangle = \sum_{i=1}^{d} \alpha_i\, e_i^\top \Sigma^{1/2}(x - x') = 0$$

Therefore, we arrive at $x = x'$.

Proof of Proposition 2

It follows directly from the fact that $\tilde{\Phi} \# P_d$ and $\tilde{\Phi} \# P_g$ are the push-forward measures of $P_d$ and $P_g$ via the transformation $\tilde{\Phi}$, which is one-to-one on its image.

Proof of Theorem 4

It is obvious that $\tau(w, u, v)$ is a convex-concave-like function, since for a given $(u, v)$, $\tau$ is a convex function w.r.t. $w$, and for a given $w$, $\tau$ is a concave function w.r.t. $(u, v)$. Our task therefore reduces to verifying that either the domain of $w$ or that of $(u, v)$ is compact.

i) The domain of $w$ is $\{w : \|w\| \leq C\}$, which is a compact set. This leads to the conclusion.

ii) Since $\ell^*$ is only finite on a compact interval, the domain of $(u, v)$ has the form of a product of compact intervals, which is a compact set.