1 Introduction
Generative Adversarial Networks (GANs) [1] have been relatively successful in learning the underlying distribution of data, especially in applications such as image generation. GANs aim to find a mapping that matches a known distribution to the unknown distribution of the data. They perform this task by projecting the inputs to a higher dimension using neural networks [2] and then minimizing the distance between the mapped distribution and the unknown distribution in the projected space. To find the optimal network, [1] proposed using the Jensen–Shannon divergence [3] for measuring the distance between the projected distribution and the data distribution. Later on, [4] generalized the idea by using the f-divergence as the measure, while [5] and [6] proposed using least squares and absolute deviation as the measure.
The most recent works proposed using the Wasserstein distance and Maximum Mean Discrepancy (MMD) as the distance measure [7, 8, 9]. Unlike the Jensen–Shannon divergence, these measures are continuous and almost everywhere differentiable. The common thread between all these approaches is that the problem is usually formulated as a game between two agents, i.e., a generator and a discriminator. The generator's role is to generate samples as close as possible to real data, and the discriminator is responsible for distinguishing between real data and the generated samples. The result is a non-convex min-max game which is difficult to solve. The learning process, which should solve this non-convex min-max game, is hard to tackle due to many factors, such as the use of a discontinuous [7] or non-smooth [2] measure. In addition, the fact that all of these models try to learn the mapping adversarially makes the training unstable. Adding regularization or starting from a good initial point is one approach to overcome these problems [2]. However, for most problems, finding a good initial point might be as hard as solving the problem itself.
Randomization has shown promising improvements in machine learning algorithms [10, 11]. As a result, to prevent the aforementioned issues, we propose learning the underlying distribution of the data not through an adversarial player but through a random projection. This random projection not only decreases the computation time by removing the optimization steps needed for the discriminator, but also leads to a more stable optimization problem. The proposed method achieves state-of-the-art performance on simple datasets such as MNIST and Fashion-MNIST.

2 Problem Formulation
Let $X$ be a random variable with distribution $P_X$ representing the real data, and let $Z$ be a random variable with a known distribution, such as a standard Gaussian. Our goal is to find a function, or a neural network, $G$ such that $G(Z)$ has a distribution similar to the real data distribution $P_X$. Therefore, our objective is to solve the following optimization problem
$$\min_{G} \; d\left(P_{G(Z)},\, P_X\right), \tag{1}$$
where $P_{G(Z)}$ is the distribution of $G(Z)$ and $d(\cdot,\cdot)$ is a distance measure between the two distributions.
A natural question is which distance measure to use. The original paper of Goodfellow [1] suggests the Jensen–Shannon divergence. However, as mentioned in [7], this divergence is not continuous. Therefore, [7, 2] suggest using the optimal transport distance instead. In what follows, we first review this distance and then discuss our methodology for solving (1).
3 Optimal Transport Distance
Let $p$ and $q$ be two discrete distributions taking $n$ different values/states. Thus the distributions $p$ and $q$ can be represented by $n$-dimensional vectors $(p_1,\ldots,p_n)$ and $(q_1,\ldots,q_n)$. The optimal transport distance is defined as the minimum amount of work that needs to be done to transport distribution $p$ to $q$ (and vice versa). Let $\pi_{ij}$ be the amount of mass moved from state $i$ to state $j$, and let $c_{ij}$ represent the per-unit cost of this move. Then the optimal transport distance between the two distributions $p$ and $q$ is defined as [12]:
$$d(p, q) = \min_{\pi \ge 0} \; \sum_{i,j} c_{ij}\, \pi_{ij} \quad \text{s.t.} \quad \sum_{j} \pi_{ij} = p_i \;\; \forall i, \qquad \sum_{i} \pi_{ij} = q_j \;\; \forall j, \tag{2}$$
where the constraints guarantee that the mapping $\pi$ is a valid transport plan. In practice, a popular approach is to solve the dual problem. It is not hard to see that the dual of the optimization problem (2) can be written as
$$d(p, q) = \max_{\lambda, \mu} \; \lambda^\top p + \mu^\top q \quad \text{s.t.} \quad \lambda_i + \mu_j \le c_{ij} \;\; \forall i, j. \tag{3}$$
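As a small sanity check on definition (2), the primal problem can be solved directly as a linear program. The sketch below (our own illustration, not part of the paper) uses scipy's `linprog` to compute the distance between two toy distributions with cost $c_{ij} = |i - j|$:

```python
import numpy as np
from scipy.optimize import linprog

def ot_distance(p, q, C):
    """Optimal transport distance between discrete distributions p (length n)
    and q (length m) with per-unit cost matrix C, via the LP in (2)."""
    n, m = len(p), len(q)
    # Decision variable: transport plan pi (n x m), flattened row-major.
    A_rows = np.zeros((n, n * m))      # sum_j pi_ij = p_i
    for i in range(n):
        A_rows[i, i * m:(i + 1) * m] = 1.0
    A_cols = np.zeros((m, n * m))      # sum_i pi_ij = q_j
    for j in range(m):
        A_cols[j, j::m] = 1.0
    res = linprog(C.reshape(-1),
                  A_eq=np.vstack([A_rows, A_cols]),
                  b_eq=np.concatenate([p, q]),
                  bounds=(0, None), method="highs")
    return res.fun

# Two toy distributions on 3 states with cost c_ij = |i - j|.
p = np.array([0.5, 0.5, 0.0])
q = np.array([0.0, 0.5, 0.5])
C = np.abs(np.arange(3)[:, None] - np.arange(3)[None, :]).astype(float)
print(ot_distance(p, q, C))  # 1.0: each half of the mass shifts one state right
```

For distributions on the line with absolute-value cost, this LP value coincides with the classical 1-Wasserstein distance computed from CDF differences, which makes small examples easy to verify by hand.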
When $c$ is a proper distance, the dual variables should satisfy $\mu = -\lambda$ [12]. In practice, since the dimension $n$ is large and estimating $p$ and $q$ accurately is not possible, we parameterize the dual variable with a neural network and solve the dual optimization problem by training two neural networks simultaneously [7]. However, this approach leads to a non-convex min-max optimization problem. Unlike special cases such as the convex-concave setup [13], there is no algorithm to date in the literature which can find even a stationary point in the general non-convex setting; see [14] and the references therein. Therefore, training generative adversarial networks (GANs) can become notoriously difficult in practice and may require significant tuning of training parameters. A natural solution is to not parameterize the dual function and instead solve (2) or (3) directly, which leads to a convex reformulation. However, as mentioned earlier, since the dimension is large, approximating $p$ and $q$ is statistically not possible. Moreover, the distance in the original feature domain may not reflect the actual distance between the distributions. Thus, we suggest an alternative formulation in the next section.

4 Training in a Different Feature Domain
In many applications, the closeness of samples in the original feature domain does not reflect the actual similarity between the samples. For example, two images of the same object may have a large difference when the distance is computed in the pixel domain. Therefore, other mappings of the features, such as features obtained by a Convolutional Neural Network (CNN), may be used to extract meaningful features from samples [15]. Let $\{f_1, \ldots, f_K\}$ be a collection of meaningful features we are interested in. In other words, each function $f_i$ is a mapping from our original feature domain to the domain of interest, i.e., $f_i : \mathbb{R}^d \mapsto \mathbb{R}^m$. Then, instead of solving (1), one might be interested in solving the following optimization problem
$$\min_{G} \; \sum_{i=1}^{K} w_i \, d\left(P_{f_i(G(Z))},\, P_{f_i(X)}\right), \tag{4}$$
where $P_{f_i(G(Z))}$ represents the distribution of the random variable $f_i(G(Z))$; $P_{f_i(X)}$ is the distribution of $f_i(X)$; and $w_i$ is a weight coefficient indicating the importance of the $i$-th feature $f_i$.
In the general setting, we may have an uncountable number of mappings $f$. Thus, by defining a measure $\mu$ on the set of mappings, we can generalize (4) to the following optimization problem
$$\min_{G} \; \mathbb{E}_{f \sim \mu}\left[ d\left(P_{f(G(Z))},\, P_{f(X)}\right) \right]. \tag{5}$$
Remark 1.
We use the notation $D$ since the function $D$ plays the role of a discriminator in the Generative Adversarial Networks (GANs) context.

With this notation, (5) can be written as
$$\min_{G} \; \mathbb{E}_{D \sim \mu}\left[ d\left(P_{D(G(Z))},\, P_{D(X)}\right) \right], \tag{6}$$
where $D$ is a discriminator drawn from the measure $\mu$.
Unfortunately, in practice, we do not have access to the actual values of the distributions $P_{D(G(Z))}$ and $P_{D(X)}$. However, we can estimate them using a batch of generated and real samples. The following simple lemma motivates the use of a natural surrogate function.
Lemma 1.
Let $p$ and $q$ be two discrete distributions over $n$ states with $X \sim p$ and $Y \sim q$. Let $\tilde{X}$ and $\tilde{Y}$ be the corresponding one-hot encoded random variables, i.e., $\tilde{X} = e_i$ when $X$ takes state $i$ and $\tilde{Y} = e_j$ when $Y$ takes state $j$, where $e_i$ is the $i$-th standard basis vector. Assume further that $d(p, q)$ is the optimal transport distance between $p$ and $q$ defined in (2). Let $\hat{p}_N$ and $\hat{q}_N$ be the natural unbiased estimators of $p$ and $q$ based on $N$ i.i.d. samples. In other words, $\hat{p}_N = \frac{1}{N}\sum_{\ell=1}^{N} \tilde{X}_\ell$ and $\hat{q}_N = \frac{1}{N}\sum_{\ell=1}^{N} \tilde{Y}_\ell$, where $\tilde{X}_\ell$ and $\tilde{Y}_\ell$ are i.i.d. samples obtained from the distributions of $\tilde{X}$ and $\tilde{Y}$, respectively. Then,
$$d(p, q) \le \mathbb{E}\left[ d\left(\hat{p}_N, \hat{q}_N\right) \right].$$
Moreover, $d(\hat{p}_N, \hat{q}_N) \to d(p, q)$ almost surely as $N \to \infty$.
Proof.
The proof is similar to the standard proof in the sample average approximation method; see [16, Proposition 5.6]. Notice that
$$d(p, q) = d\left(\mathbb{E}[\hat{p}_N],\, \mathbb{E}[\hat{q}_N]\right) \le \mathbb{E}\left[ d\left(\hat{p}_N, \hat{q}_N\right) \right],$$
where the inequality follows from the convexity of the optimal transport distance and Jensen's inequality. The proof of the almost sure convergence follows directly from the facts that $\hat{p}_N \to p$ and $\hat{q}_N \to q$ almost surely, and the continuity of the distance function. ∎
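The inequality in Lemma 1 can be observed numerically. The sketch below (our own illustration) uses the closed form of the optimal transport distance on the line with cost $c_{ij} = |i - j|$, i.e., the sum of absolute CDF differences, and compares $d(p, q)$ with a Monte Carlo estimate of $\mathbb{E}[d(\hat{p}_N, \hat{q}_N)]$:

```python
import numpy as np

rng = np.random.default_rng(0)

def w1_discrete(p, q):
    """Optimal transport distance on {0,...,K-1} with cost c_ij = |i - j|,
    computed in closed form as the sum of absolute CDF differences."""
    return np.abs(np.cumsum(p - q))[:-1].sum()

p = np.array([0.2, 0.5, 0.3])
q = np.array([0.4, 0.4, 0.2])
true_dist = w1_discrete(p, q)

# Monte Carlo estimate of E[d(p_hat_N, q_hat_N)] with batches of size N.
N, trials = 25, 2000
est = 0.0
for _ in range(trials):
    x = rng.choice(3, size=N, p=p)
    y = rng.choice(3, size=N, p=q)
    p_hat = np.bincount(x, minlength=3) / N   # average of one-hot samples
    q_hat = np.bincount(y, minlength=3) / N
    est += w1_discrete(p_hat, q_hat)
est /= trials

print(true_dist <= est)  # True: the surrogate upper-bounds the true distance
```

The gap between the two quantities shrinks as $N$ grows, consistent with the almost sure convergence claimed in the lemma.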
The above lemma suggests a natural upper bound for the objective function in (6). More precisely, instead of solving (6), we can solve
$$\min_{G} \; \mathbb{E}\left[ d\left(\hat{P}_{D(G(Z))},\, \hat{P}_{D(X)}\right) \right], \tag{7}$$
where $\hat{P}_{D(G(Z))}$ and $\hat{P}_{D(X)}$ are the unbiased estimators of $P_{D(G(Z))}$ and $P_{D(X)}$ based on our i.i.d. samples. Moreover, the expectation is taken with respect to both the function $D$ and the batch of samples drawn for estimating the distributions. As we will see later, in practice it is easier to use the primal form (2) for solving the inner problem in (7).
To show the dependence of $\hat{P}_{D(G(Z))}$ on $G$, let us assume that our generator generates the output $G_w(z)$ from the input $z$, where $w$ represents the weights of the network that need to be learned. Moreover, in practice, the value of the objective in (7) is estimated by averaging over a batch of data. Hence, by duplicating variables if necessary, we can rewrite the above optimization problem as
$$\min_{w} \; \mathbb{E}\left[ \min_{\pi \ge 0} \; \sum_{i,j=1}^{N} c_{ij}\, \pi_{ij} \quad \text{s.t.} \quad \sum_{j} \pi_{ij} = \frac{1}{N} \;\; \forall i, \quad \sum_{i} \pi_{ij} = \frac{1}{N} \;\; \forall j \right], \tag{8}$$
where $c_{ij}$ is the transport cost between the $i$-th generated sample $D(G_w(z_i))$ and the $j$-th real sample $D(x_j)$. Here, $N$ is the batch size and we ignored the entries of $\hat{p}$ and $\hat{q}$ that are zero. Notice that to obtain an algorithm with convergence guarantees for solving this optimization problem, one can properly regularize the inner optimization problem to obtain unbiased estimates of the gradient of the objective function [14, 2]. However, in this work, due to practical considerations, we suggest approximately solving the inner problem and using the approximate solution for solving (8).
Solving the inner problem approximately. In order to solve the inner problem in (8), we need to solve
$$\min_{\pi \ge 0} \; \sum_{i,j=1}^{N} c_{ij}\, \pi_{ij} \quad \text{s.t.} \quad \sum_{j} \pi_{ij} = \frac{1}{N} \;\; \forall i, \quad \sum_{i} \pi_{ij} = \frac{1}{N} \;\; \forall j. \tag{9}$$
Notice that this problem is the classical optimal assignment problem, which can be solved using the Hungarian method [17], the auction algorithm [18], or many other methods proposed in the literature. Based on our observations, even the greedy method of assigning each column to the lowest-cost unassigned row worked in our numerical experiments. The benefit of the greedy method is that it can be performed almost linearly in $N$ by the use of a proper hash function.
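For illustration, the sketch below (our own construction) compares an exact assignment, computed with scipy's Hungarian-style solver, against the greedy column-by-column heuristic described above on a random cost matrix:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)

def greedy_assignment(C):
    """Heuristic: assign each column to the cheapest row not yet used."""
    free_rows = set(range(C.shape[0]))
    total = 0.0
    for j in range(C.shape[1]):
        i = min(free_rows, key=lambda r: C[r, j])
        total += C[i, j]
        free_rows.remove(i)
    return total

C = rng.random((50, 50))
row, col = linear_sum_assignment(C)   # exact (Hungarian-style) solution
exact = C[row, col].sum()
approx = greedy_assignment(C)
print(exact <= approx + 1e-12)  # True: greedy never beats the exact optimum
```

The greedy pass costs one minimum per column, whereas the exact solver is polynomial but more expensive; the trade-off mirrors the discussion above.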
Algorithm 1 summarizes our proposed Generative Networks using Random Discriminator (GNRD) algorithm for solving (8).
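The overall structure of the method can be conveyed by a deliberately tiny sketch (entirely our own construction, not the paper's Algorithm 1 or implementation): a two-parameter generator learns a 1-D Gaussian, the "discriminator" is a freshly drawn random scalar projection at every iteration, and the inner assignment problem is solved exactly by sorting, which is optimal for a convex 1-D cost:

```python
import numpy as np

rng = np.random.default_rng(0)

# Generator G_w(z) = w0 + w1 * z; target distribution is N(3, 1).
w = np.array([0.0, 0.1])
N, lr = 256, 0.02

for it in range(2000):
    z = rng.standard_normal(N)
    x = 3.0 + rng.standard_normal(N)   # batch of real samples
    g = w[0] + w[1] * z                # batch of generated samples
    r = rng.standard_normal()          # fresh random "discriminator" weight
    # Inner problem: optimal assignment between {r*g_i} and {r*x_j} under
    # squared cost -- in one dimension, sorting both batches solves it exactly.
    og, ox = np.argsort(g), np.argsort(x)
    d = r * g[og] - r * x[ox]          # matched differences in feature space
    # Gradient of (1/N) * sum_i d_i^2 with respect to (w0, w1).
    grad = 2.0 * r * np.array([d.mean(), (d * z[og]).mean()])
    w -= lr * grad

print(w)  # w0 drifts toward 3.0 and w1 toward 1.0 (target mean and std)
```

Only the generator parameters are ever optimized; the random projection is resampled at each step, which is the essential point of the algorithm.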
Remark 3.
The recent works [19, 20] are similar in that they learn generative models through a min-min formulation instead of a min-max formulation. However, unlike their method: 1) our algorithm is based on mapping images via randomly generated discriminators; 2) in our analysis, we establish that this formulation leads to an upper bound of the distance measure; and 3) our algorithm is based on the optimal assignment, while the works [19, 20] suggest a greedy matching, which is more difficult to understand and analyze.
5 Numerical Experiments
In this section, we evaluate the performance of the proposed GNRD algorithm for learning generative networks that create samples from the MNIST [21] and Fashion-MNIST [22] datasets. As mentioned previously, the proposed algorithm does not require any optimization of the discriminator network; it only needs a randomly generated discriminator to learn the underlying distribution of the data. (All experiments were run on a machine with a single GeForce GTX 1050 Ti GPU.)
5.1 Learning handwritten digits and fashion products
In this section, we use GNRD to generate samples from the MNIST and Fashion-MNIST datasets. Each of these datasets contains 50K training samples.
Architecture of the Neural Networks:
The generator's neural network consists of two fully connected layers with 1024 and 6272 neurons. The output of the second fully connected layer is followed by two deconvolutional layers to generate the final image. The discriminator's neural network has two convolutional layers, each followed by a max pool. Both convolutional layers have 64 filters. The last layer is flattened to create the output. The design of the discriminator network is summarized below:

Discriminator: [CONV(64, filter size = 5, stride = 1), Leaky ReLU(alpha = 0.2), Max Pool (kernel size = 2, stride = 2), CONV(64, filter size = 5, stride = 1), Max Pool (kernel size = 2, stride = 2), Flatten].
As benchmarks, we use the originally proposed adversarial discriminators of Wasserstein GAN (WGAN) [7], Wasserstein GAN with gradient penalty (WGAN-GP) [8] (for the WGAN and WGAN-GP implementations, visit https://github.com/igul222/improved_wgan_training), and Cramér GAN [23] (for the Cramér GAN implementation, visit https://github.com/jiamings/cramergan).
As mentioned in Algorithm 1, it is important to notice that, unlike the benchmark methods, the proposed method only optimizes the generator's parameters. At each iteration, the weights in the convolutional layers of the discriminator are randomly generated from a normal distribution.
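This randomization step can be sketched as follows (our own simplified stand-in for the architecture above: fresh Gaussian filters, valid 2-D convolution, and a global mean pool, omitting the Leaky ReLU and max-pool layers):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_discriminator(images, n_filters=64, k=5):
    """Simplified random feature map: fresh Gaussian k x k filters drawn on
    every call, applied as a valid convolution followed by a mean pool."""
    W = rng.standard_normal((n_filters, k, k))   # redrawn at each iteration
    feats = []
    for img in images:
        # all k x k patches of the image (the valid-convolution support)
        patches = np.lib.stride_tricks.sliding_window_view(img, (k, k))
        feats.append([(patches * f).sum(axis=(-1, -2)).mean() for f in W])
    return np.array(feats)

batch = rng.random((4, 28, 28))      # a toy batch of 28 x 28 "images"
feats = random_discriminator(batch)
print(feats.shape)  # (4, 64)
```

Because the filters are redrawn on every call, no gradient ever flows into the discriminator; it acts purely as a random projection of the batch.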
Hyper-parameters: We used Adam as the optimizer for our generator. The batch size is set to 100.
Fig. 1 shows the generated digits and the corresponding inception scores [24] for the different benchmark methods. As seen from the figure, the proposed GNRD is able to quickly learn the underlying distribution of the data and generate promising samples.
Fig. 1: Generated digits for (a) WGAN, (b) WGAN-GP, (c) Cramér GAN, and (d) GNRD, together with (e) the inception score over time (in seconds).
Fig. 2 shows the result of using the proposed method to generate samples from the Fashion-MNIST dataset. The samples are generated after only 600 iterations (about 10 minutes) of the proposed method, which shows that GNRD quickly converges and generates promising samples.
Fig. 2: (a) Original data; (b) samples generated by GNRD.
6 Conclusion
Generative Adversarial Networks (GANs) are able to learn the underlying distribution of data and generate samples from it. However, training GANs is notoriously unstable due to their non-convex min-max formulation. In this work, we proposed the use of a randomized discriminator to avoid the complexity of solving non-convex min-max problems. Evaluating the performance of the proposed method on the real datasets MNIST and Fashion-MNIST shows its ability to generate promising samples without adversarial learning.
Acknowledgement
The authors would like to thank Mohammad Norouzi for his insightful feedback.
References
 [1] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in Neural Information Processing Systems, 2014, pp. 2672–2680.
 [2] M. Sanjabi, J. Ba, M. Razaviyayn, and J. D. Lee, “On the convergence and robustness of training gans with regularized optimal transport,” in Advances in Neural Information Processing Systems, 2018, pp. 7091–7101.
 [3] J. Lin, “Divergence measures based on the shannon entropy,” IEEE Transactions on Information theory, vol. 37, no. 1, pp. 145–151, 1991.
 [4] S. Nowozin, B. Cseke, and R. Tomioka, “f-gan: Training generative neural samplers using variational divergence minimization,” in Advances in Neural Information Processing Systems, 2016, pp. 271–279.

 [5] X. Mao, Q. Li, H. Xie, R. Y. Lau, Z. Wang, and S. Paul Smolley, “Least squares generative adversarial networks,” in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 2794–2802.
 [6] J. Zhao, M. Mathieu, and Y. LeCun, “Energy-based generative adversarial network,” arXiv preprint arXiv:1609.03126, 2016.
 [7] M. Arjovsky, S. Chintala, and L. Bottou, “Wasserstein gan,” arXiv preprint arXiv:1701.07875, 2017.
 [8] I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. C. Courville, “Improved training of wasserstein gans,” in Advances in Neural Information Processing Systems, 2017, pp. 5767–5777.
 [9] M. Bińkowski, D. J. Sutherland, M. Arbel, and A. Gretton, “Demystifying mmd gans,” arXiv preprint arXiv:1801.01401, 2018.

 [10] B. Barazandeh and M. Razaviyayn, “On the behavior of the expectation-maximization algorithm for mixture models,” in 2018 IEEE Global Conference on Signal and Information Processing (GlobalSIP). IEEE, 2018, pp. 61–65.
 [11] Y. Sun, A. Gilbert, and A. Tewari, “Random relu features: Universality, approximation, and composition,” arXiv preprint arXiv:1810.04374, 2018.
 [12] C. Villani, “Optimal transport–old and new, volume 338 of a series of comprehensive studies in mathematics,” 2009.
 [13] A. Juditsky and A. Nemirovski, “Solving variational inequalities with monotone operators on domains given by linear minimization oracles,” Mathematical Programming, vol. 156, no. 1–2, pp. 221–256, 2016.
 [14] M. Nouiehed, M. Sanjabi, J. D. Lee, and M. Razaviyayn, “Solving a class of nonconvex minmax games using iterative first order methods,” arXiv preprint arXiv:1902.08297, 2019.
 [15] K. O’Shea and R. Nash, “An introduction to convolutional neural networks,” arXiv preprint arXiv:1511.08458, 2015.
 [16] A. Shapiro, D. Dentcheva, and A. Ruszczyński, Lectures on stochastic programming: modeling and theory. SIAM, 2009.
 [17] H. W. Kuhn, “The hungarian method for the assignment problem,” Naval Research Logistics Quarterly, vol. 2, no. 1–2, pp. 83–97, 1955.
 [18] D. P. Bertsekas, “The auction algorithm: A distributed relaxation method for the assignment problem,” Annals of operations research, vol. 14, no. 1, pp. 105–123, 1988.
 [19] K. Li and J. Malik, “On the implicit assumptions of gans,” arXiv preprint arXiv:1811.12402, 2018.
 [20] K. Li and J. Malik, “Implicit maximum likelihood estimation,” arXiv preprint arXiv:1809.09087, 2018.
 [21] Y. LeCun, C. Cortes, and C. Burges, “Mnist handwritten digit database, 1998,” URL http://www.research.att.com/~yann/ocr/mnist, 1998.
 [22] H. Xiao, K. Rasul, and R. Vollgraf, “Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms,” arXiv preprint arXiv:1708.07747, 2017.
 [23] M. G. Bellemare, I. Danihelka, W. Dabney, S. Mohamed, B. Lakshminarayanan, S. Hoyer, and R. Munos, “The cramer distance as a solution to biased wasserstein gradients,” arXiv preprint arXiv:1705.10743, 2017.
 [24] S. Barratt and R. Sharma, “A note on the inception score,” arXiv preprint arXiv:1801.01973, 2018.