Training generative networks using random discriminators

04/22/2019 ∙ by Babak Barazandeh, et al. ∙ University of Southern California

In recent years, Generative Adversarial Networks (GANs) have attracted a lot of attention for learning the underlying distribution of data in various applications. Despite their wide applicability, training GANs is notoriously difficult. This difficulty is due to the min-max nature of the resulting optimization problem and the lack of proper tools for solving general (non-convex, non-concave) min-max optimization problems. In this paper, we try to alleviate this problem by proposing a new generative network that relies on random discriminators instead of an adversarial design. This design helps us avoid the min-max formulation and leads to an optimization problem that is stable and can be solved efficiently. The performance of the proposed method is evaluated on handwritten digits (MNIST) and fashion products (Fashion-MNIST) datasets. While the resulting images are not as sharp as those from adversarial training, the use of random discriminators leads to a much faster algorithm than its adversarial counterpart. This observation, at the minimum, illustrates the potential of the random discriminator approach as a warm-start for training GANs.




1 Introduction

Generative Adversarial Networks (GANs) [1] have been relatively successful in learning the underlying distribution of data, especially in applications such as image generation. GANs aim to find a mapping that matches a known distribution to the underlying distribution of the data. They perform this task by projecting the inputs to a higher dimension using neural networks [2] and then minimizing the distance between the mapped distribution and the unknown data distribution in the projected space. To find the optimal network, [1] proposed using the Jensen-Shannon divergence [3] for measuring the distance between the projected distribution and the data distribution. Later on, [4] generalized this idea by using the f-divergence as the measure, while [5] and [6] proposed using least squares and absolute deviation as measures.

The most recent works proposed using the Wasserstein distance and Maximum Mean Discrepancy (MMD) as the distance measure [7, 8, 9]. Unlike the Jensen-Shannon divergence, these recent measures are continuous and almost everywhere differentiable. The common thread between all these approaches is that the problem is formulated as a game between two agents, i.e., a generator and a discriminator. The generator's role is to generate samples as close as possible to real data, and the discriminator is responsible for distinguishing between real data and generated samples. The result is a non-convex min-max game that is difficult to solve, due to factors such as the use of a discontinuous [7] or non-smooth [2] measure. In addition to these factors, the fact that all of these models learn the mapping adversarially makes the training unstable. Adding regularization or starting from a good initial point is one approach to overcoming these problems [2]. However, for most problems, finding a good initial point might be as hard as solving the problem itself.

Randomization has shown promising improvements in machine learning algorithms [10, 11]. As a result, to prevent the aforementioned issues, we propose learning the underlying distribution of the data not through an adversarial player but through a random projection. This random projection not only decreases the computation time, by removing the optimization steps needed for the discriminator, but also leads to a more stable optimization problem. The proposed method achieves state-of-the-art performance on simple datasets such as MNIST and Fashion-MNIST.

2 Problem Formulation


Let X be a random variable with distribution P_r representing the real data, and let Z be a random variable with a known distribution P_z, such as a standard Gaussian. Our goal is to find a function, or a neural network, G such that G(Z) has a distribution similar to the real data distribution P_r. Therefore, our objective is to solve the following optimization problem

    min_G  d(P_g, P_r),     (1)

where P_g is the distribution of G(Z) and d(·, ·) is a distance measure between the two distributions.

A natural question is which distance measure to use. The original paper of Goodfellow [1] suggests the Jensen-Shannon divergence. However, as mentioned in [7], this divergence is not continuous. Therefore, [7, 2] suggest using the optimal transport distance. In what follows, we first review this distance and then discuss our methodology for solving (1).

3 Optimal Transport Distance

Let P and Q be two discrete distributions taking m different values/states. Thus the distributions P and Q can be represented by m-dimensional probability vectors p and q. The optimal transport distance is defined as the minimum amount of work that needs to be done to transport distribution p to q (and vice versa). Let π_{ij} be the amount of mass moved from state i to state j, and let c_{ij} represent the per-unit cost of this move. Then the optimal transport distance between the two distributions p and q is defined as [12]:

    d(P, Q) = min_{π ≥ 0}  Σ_{i,j} c_{ij} π_{ij}   s.t.  Σ_j π_{ij} = p_i ∀i,  Σ_i π_{ij} = q_j ∀j,     (2)

where the constraints guarantee that the mapping π is a valid transport plan. In practice, a popular approach is to solve the dual problem. It is not hard to see that the dual of the optimization problem (2) can be written as

    d(P, Q) = max_{λ, μ}  λᵀp + μᵀq   s.t.  λ_i + μ_j ≤ c_{ij} ∀i, j.     (3)
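As an illustration, the primal problem (2) can be solved directly as a small linear program. The sketch below uses SciPy; the function name and the toy cost matrix are ours, chosen only to make the example self-contained.

```python
import numpy as np
from scipy.optimize import linprog

def ot_distance(p, q, C):
    """Optimal transport distance between discrete distributions p and q
    with per-unit cost matrix C, solved as the linear program in (2)."""
    m, n = C.shape
    # Decision variable: the transport plan pi, flattened row-major.
    # Equality constraints: row sums equal p, column sums equal q.
    A_eq = np.zeros((m + n, m * n))
    for i in range(m):
        A_eq[i, i * n:(i + 1) * n] = 1.0   # sum_j pi_ij = p_i
    for j in range(n):
        A_eq[m + j, j::n] = 1.0            # sum_i pi_ij = q_j
    b_eq = np.concatenate([p, q])
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.fun

# Moving all mass one state over costs exactly c_01 = 1.
p = np.array([1.0, 0.0])
q = np.array([0.0, 1.0])
C = np.abs(np.subtract.outer(np.arange(2), np.arange(2))).astype(float)
print(ot_distance(p, q, C))  # 1.0
```

Note that the linear program has m·n variables, which is exactly why solving (2) directly becomes impractical when the number of states is large.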


When c is a proper distance, the dual simplifies to maximizing λᵀ(p − q) over a single dual variable λ, which should satisfy the Lipschitz condition |λ_i − λ_j| ≤ c_{ij} [12]. In practice, since the dimension m is large and estimating p and q accurately is not possible, we parameterize the dual variable λ with a neural network and solve the dual optimization problem by training two neural networks simultaneously [7]. However, this approach leads to a non-convex min-max optimization problem. Unlike special cases such as the convex-concave set-up [13], there is no algorithm to date in the literature that can find even an ε-stationary point in the general non-convex setting; see [14] and the references therein. Therefore, training generative adversarial networks (GANs) can become notoriously difficult in practice and may require significant tuning of training parameters. A natural solution is to not parameterize the dual function and instead solve (2) or (3) directly, which leads to a convex reformulation. However, as mentioned earlier, since the dimension m is large, approximating p and q is statistically not possible. Moreover, the distance in the original feature domain may not reflect the actual distance between the distributions. Thus, we suggest an alternative formulation in the next section.

4 Training in different feature domain

In many applications, the closeness of samples in the original feature domain does not reflect the actual similarity between the samples. For example, two images of the same object may have a large difference when the distance is computed in the pixel domain. Therefore, other mappings of the features, such as those obtained by a Convolutional Neural Network (CNN), may be used to extract meaningful features from the samples.


Let F = {f_1, …, f_K} be a collection of meaningful features we are interested in. In other words, each function f_k is a mapping from our original feature domain to the domain of interest, i.e., f_k : R^n → R^{m_k}. Then, instead of solving (1), one might be interested in solving the following optimization problem

    min_G  Σ_{k=1}^K w_k d(P_{f_k(G(Z))}, P_{f_k(X)}),     (4)

where P_{f_k(G(Z))} represents the distribution of the random variable f_k(G(Z)); P_{f_k(X)} is the distribution of f_k(X); and w_k is a weight coefficient indicating the importance of the k-th feature f_k.

In the general setting, we may have an uncountable number of mappings D. Thus, by defining a measure μ on the set D of such mappings, we can generalize (4) to the following optimization problem

    min_G  E_{D∼μ} [ d(P_{D(G(Z))}, P_{D(X)}) ].     (5)

Remark 1.

We use the notation D since the function D plays the role of a discriminator in the Generative Adversarial Networks (GANs) context.

Plugging (2) into equation (5) leads to the optimization problem

    min_G  E_{D∼μ} [ min_{π ≥ 0}  Σ_{i,j} c^D_{ij} π_{ij}   s.t.  Σ_j π_{ij} = p^D_i ∀i,  Σ_i π_{ij} = q^D_j ∀j ],     (6)

where c^D_{ij} denotes the per-unit transportation cost between states i and j in the feature domain defined by D, and p^D and q^D are the probability vectors of P_{D(X)} and P_{D(G(Z))}, respectively.

Unfortunately, in practice, we do not have access to the actual values of the distributions P_{D(G(Z))} and P_{D(X)}. However, we can estimate them using a batch of generated and real samples. The following simple lemma motivates the use of a natural surrogate function.

Lemma 1.

Let P and Q be two discrete distributions over m states with probability vectors p and q. Let X and Y be the corresponding one-hot encoded random variables, i.e., Pr(X = e_i) = p_i and Pr(Y = e_i) = q_i, where e_i is the i-th standard basis vector. Assume further that d(P, Q) is the optimal transport distance between P and Q defined in (2). Let P̂_N and Q̂_N be the natural unbiased estimators of P and Q based on N i.i.d. samples. In other words, p̂_N = (1/N) Σ_{k=1}^N X_k and q̂_N = (1/N) Σ_{k=1}^N Y_k, where X_1, …, X_N and Y_1, …, Y_N are i.i.d. samples obtained from the distributions P and Q, respectively. Then,

    E[ d(P̂_N, Q̂_N) ] ≥ d(P, Q),   and   d(P̂_N, Q̂_N) → d(P, Q) almost surely as N → ∞.

Proof. The proof of the inequality is similar to the standard proof in the sample average approximation method; see [16, Proposition 5.6]. Notice that, since the optimal transport distance d is jointly convex in its arguments, Jensen's inequality yields E[d(P̂_N, Q̂_N)] ≥ d(E[P̂_N], E[Q̂_N]) = d(P, Q). The proof of the almost sure convergence follows directly from the facts that P̂_N → P a.s., Q̂_N → Q a.s., and the continuity of the distance function. ∎
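The lemma's message can be checked with a quick Monte Carlo sketch. The setup below is ours: we take P = Q uniform over three states, so the true distance is zero, while the plug-in estimate d(P̂_N, Q̂_N) is nonnegative and typically positive, illustrating the upper bound; the 1-D Wasserstein distance from SciPy plays the role of the optimal transport distance.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
# P = Q = uniform over the states {0, 1, 2}, so d(P, Q) = 0, while the
# empirical estimate d(P_hat, Q_hat) >= 0 is typically strictly positive:
# the lemma's bound E[d(P_hat, Q_hat)] >= d(P, Q) in action.
ests = {}
for N in (10, 100, 1000):
    ests[N] = np.mean([
        wasserstein_distance(rng.integers(0, 3, N), rng.integers(0, 3, N))
        for _ in range(200)
    ])
    print(N, ests[N])  # the bias shrinks as N grows, per the a.s. convergence
```

The shrinking bias with growing N is exactly the almost sure convergence stated in the lemma.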

The above lemma suggests a natural upper bound for the objective function in (6). More precisely, instead of solving (6), we can solve

    min_G  E[ d(P̂_{D(G(Z))}, P̂_{D(X)}) ],     (7)

where P̂_{D(G(Z))} and P̂_{D(X)} are the unbiased estimators of P_{D(G(Z))} and P_{D(X)} based on our i.i.d. samples. Moreover, the expectation is taken with respect to both the function D and the batch of samples drawn for estimating the distributions. As we will see later, in practice it is easier to use the primal form (2) for solving the inner problem in (7).

To show the dependence of P̂_{D(G(Z))} on G, let us assume that our generator generates the output G(z; θ) from the input z. Here θ represents the weights of the network that need to be learned. Moreover, in practice, the value of (7) is estimated by taking the average over a batch of data. Hence, by duplicating variables if necessary, we can re-write the above optimization problem as

    min_θ  E_{D, {x_i}, {z_j}} [ min_{π ∈ Π}  (1/N) Σ_{i,j} π_{ij} ||D(x_i) − D(G(z_j; θ))|| ].     (8)

Here, N is the batch size, Π is the set of valid transport plans between the two empirical distributions, and we ignored the entries of p̂ and q̂ that are zero. Notice that, to obtain an algorithm with convergence guarantees for solving this optimization problem, one can properly regularize the inner optimization problem to obtain unbiased estimates of the gradient of the objective function [14, 2]. However, in this work, due to practical considerations, we suggest approximately solving the inner problem and using the approximate solution for solving (8).

Solving the inner problem approximately. In order to solve the inner problem in (8), we need to solve

    min_{σ ∈ S_N}  Σ_{i=1}^N ||D(x_i) − D(G(z_{σ(i)}; θ))||,     (9)

where S_N is the set of permutations of {1, …, N}. Notice that this problem is the classical optimal assignment problem, which can be solved using the Hungarian method [17], the auction algorithm [18], or many other methods proposed in the literature. Based on our observations, even the greedy method of assigning each column to the lowest-cost unassigned row worked in our numerical experiments. The benefit of the greedy method is that it can be performed almost linearly in N with the use of a proper hash function.
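The two options above can be contrasted on a random instance. The sketch below is illustrative (the batch size, feature dimension, and random features are ours): SciPy's Hungarian-style solver gives the exact optimum of (9), and the greedy heuristic gives a feasible, generally more expensive, matching.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)
# Cost c_ij = ||D(x_i) - D(G(z_j))|| for a batch of N real/generated features.
N, k = 64, 8
real_feat = rng.normal(size=(N, k))
gen_feat = rng.normal(size=(N, k))
cost = np.linalg.norm(real_feat[:, None, :] - gen_feat[None, :, :], axis=-1)

# Exact solution of (9): Hungarian-style assignment, O(N^3).
rows, cols = linear_sum_assignment(cost)
opt_val = cost[rows, cols].sum()

# Greedy alternative: assign each column to the cheapest unassigned row.
assigned = np.zeros(N, dtype=bool)
greedy_val = 0.0
for j in range(N):
    i = np.argmin(np.where(assigned, np.inf, cost[:, j]))
    assigned[i] = True
    greedy_val += cost[i, j]

print(opt_val <= greedy_val + 1e-9)  # True: greedy upper-bounds the optimum
```

Since the greedy matching is always a feasible permutation, its cost upper-bounds the optimal value, which is consistent with using it as a cheap drop-in during training.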

Algorithm 1 summarizes our proposed Generative Networks using Random Discriminator (GN-RD) algorithm for solving (8).

Input: θ_0: initialization of the generator's parameters; η: learning rate; N: batch size; T: maximum number of iterations
1 for t = 1, …, T do
2       Sample an i.i.d. batch of real data {x_1, …, x_N}
3       Sample an i.i.d. batch of noise {z_1, …, z_N}
4       Create a random discriminator neural network D_t with random weights
5       Solve (9) by finding the optimal assignment σ_t between the real data and the generated samples
6       Update the generator's parameters: θ_t = θ_{t−1} − η ∇_θ (1/N) Σ_i ||D_t(x_i) − D_t(G(z_{σ_t(i)}; θ_{t−1}))||
7 end for
Output: θ_T
Algorithm 1 Generative Networks using Random Discriminator (GN-RD)
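The control flow of Algorithm 1 can be sketched on a toy problem. Everything in this sketch is a simplification we chose, not the paper's setup: a linear generator G(z; b) = z + b, a fresh random linear map as the discriminator at each iteration, squared Euclidean cost, and plain gradient descent in place of Adam.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
d, k, N, T, lr = 2, 8, 64, 400, 0.05
true_mean = np.array([2.0, -1.0])   # "real data" distribution: N(true_mean, I)
b = np.zeros(d)                     # generator parameters: G(z; b) = z + b

for t in range(T):
    x = true_mean + rng.normal(size=(N, d))    # batch of real data
    z = rng.normal(size=(N, d))                # batch of noise
    g = z + b                                  # generated batch
    D = rng.normal(size=(k, d)) / np.sqrt(k)   # fresh random discriminator
    Dx, Dg = x @ D.T, g @ D.T
    cost = ((Dx[:, None, :] - Dg[None, :, :]) ** 2).sum(-1)
    rows, cols = linear_sum_assignment(cost)   # solve (9) exactly
    # Gradient of (1/N) sum_i ||D x_{rows[i]} - D (z_{cols[i]} + b)||^2 w.r.t. b
    grad = 2.0 * D.T @ D @ (b - (x[rows] - z[cols]).mean(axis=0))
    b = b - lr * grad

print(np.round(b, 2))  # close to the true data mean [2, -1]
```

With this linear generator the matched-pair gradient depends only on batch means, so the toy mainly illustrates the loop structure of Algorithm 1; the assignment step becomes essential once G is nonlinear.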
Remark 2.

The training approach in Algorithm 1 relies on two neural networks: the generator and the discriminator. Hence, Algorithm 1 can be viewed as a GAN training approach in which we use a random discriminator at each iteration of updating the generator.

Remark 3.

The recent works [19, 20] are similar in that they learn generative models through a min-min formulation instead of a min-max formulation. However, unlike their methods, 1) our algorithm maps images via randomly generated discriminators; 2) our analysis establishes that this formulation leads to an upper bound on the distance measure; and 3) our algorithm is based on the optimal assignment, while the works [19, 20] suggest a greedy matching, which is more difficult to understand and analyze.

5 Numerical Experiments

In this section, we evaluate the performance of the proposed GN-RD algorithm for learning generative networks that create samples from the MNIST [21] and Fashion-MNIST [22] datasets. As mentioned previously, the proposed algorithm does not require any optimization of the discriminator network and only needs a randomly generated discriminator to learn the underlying distribution of the data. (All experiments were run on a machine with a single GeForce GTX 1050 Ti GPU.)

5.1 Learning handwritten digits and fashion products

In this section, we use GN-RD for generating samples from handwritten digits and Fashion-MNIST datasets. Each of these datasets contains 50K training samples.
Architecture of the Neural Networks:

The generator's neural network consists of two fully connected layers with 1024 and 6272 neurons, respectively. The output of the second fully connected layer is reshaped and passed through two deconvolutional layers to generate the final 28×28 image.

The discriminator's neural network has two convolutional layers, each followed by a max pooling layer. Both convolutional layers have 64 filters. The last layer is flattened to create the output. The design of both neural networks is summarized below:

  • Generator: [FC(100, 1024), Leaky ReLU(alpha = 0.2), FC(1024, 6272), Leaky ReLU(alpha = 0.2), DECONV(64, kernel size = 4, stride = 2), Leaky ReLU(alpha = 0.2), DECONV(1, kernel size = 4, stride = 2), Sigmoid].

  • Discriminator: [CONV(64, filter size = 5, stride = 1), Leaky ReLU(alpha = 0.2), Max Pool(kernel size = 2, stride = 2), CONV(64, filter size = 5, stride = 1), Max Pool(kernel size = 2, stride = 2), Flatten].
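The stated generator sizes can be sanity-checked arithmetically. Two details below are our assumptions, not stated in the text: the 6272-unit output is reshaped to 128 channels of 7×7, and both deconvolutions use padding 1.

```python
# Sanity-check the generator's layer sizes, assuming the 6272-unit FC output
# is reshaped to 128 channels of 7x7 and both deconvolutions use padding 1
# (assumptions on our part; the text does not state them explicitly).
def deconv_out(n, kernel=4, stride=2, pad=1):
    """Output size of a transposed convolution along one spatial dimension."""
    return (n - 1) * stride - 2 * pad + kernel

assert 6272 == 128 * 7 * 7   # FC output reshapes to (128, 7, 7)
h = deconv_out(7)            # 7 -> 14
h = deconv_out(h)            # 14 -> 28
print(h)  # 28: matches a 28x28 MNIST / Fashion-MNIST image
```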

As benchmarks, we use the originally proposed adversarial discriminators of the Wasserstein GAN (WGAN) [7], the Wasserstein GAN with gradient penalty (WGAN-GP) [8], and the Cramér GAN [23].

As described in Algorithm 1, it is important to notice that, unlike the benchmark methods, the proposed method only optimizes the generator's parameters. At each iteration, the weights in the convolutional layers of the discriminator are randomly generated from a normal distribution.

Hyper-parameters: We used Adam as the optimizer for our generator. The batch size is set to 100.

Fig. 1 shows the generated digits and the corresponding inception score [24] for the different benchmark methods. As seen from the figure, the proposed GN-RD quickly learns the underlying distribution of the data and generates promising samples.

Figure 1: Generating hand-written digits using the MNIST dataset. (a) WGAN, (b) WGAN-GP, (c) Cramér GAN, (d) GN-RD, (e) inception score over time (in seconds).

Fig. 2 shows the result of using the proposed method to generate samples from the Fashion-MNIST dataset. The samples are generated after only 600 iterations (about 10 minutes) of the proposed method, which shows that GN-RD quickly converges and generates promising samples.

Figure 2: Generating fashion products using the Fashion-MNIST dataset. (a) Original data, (b) GN-RD.

6 Conclusion

Generative Adversarial Networks (GANs) are able to learn the underlying distribution of data and generate samples from it. However, training GANs is notoriously unstable due to their non-convex min-max formulation. In this work, we propose the use of random discriminators to avoid the complexity of solving non-convex min-max problems. Evaluating the proposed method on the real datasets MNIST and Fashion-MNIST shows its ability to generate promising samples without adversarial learning.


The authors would like to thank Mohammad Norouzi for his insightful feedback.