Hierarchical Mixtures of Generators for Adversarial Learning

11/05/2019, by Alper Ahmetoğlu et al.

Generative adversarial networks (GANs) are deep neural networks that allow us to sample from an arbitrary probability distribution without explicitly estimating the distribution. There is a generator that takes a latent vector as input and transforms it into a valid sample from the distribution. There is also a discriminator that is trained to discriminate such fake samples from true samples of the distribution; at the same time, the generator is trained to generate fakes that the discriminator cannot tell apart from the true samples. Instead of learning a global generator, a recent approach involves training multiple generators, each responsible for one part of the distribution. In this work, we review such approaches and propose the hierarchical mixture of generators, inspired by the hierarchical mixture of experts model, that learns a tree structure implementing a hierarchical clustering with soft splits in the decision nodes and local generators in the leaves. Since the generators are combined softly, the whole model is continuous and can be trained using gradient-based optimization, just like the original GAN model. Our experiments on five image data sets, namely MNIST, FashionMNIST, UTZap50K, Oxford Flowers, and CelebA, show that our proposed model generates samples of high quality and diversity in terms of popular GAN evaluation metrics. The learned hierarchical structure also enables knowledge extraction.


I Introduction

In generative modeling, we are given a data set sampled from some unknown probability distribution p(x) and we want to be able to generate new instances from p(x). This is an unsupervised learning problem and the usual approach is to first build an estimator for p(x) and then sample from that. The generative adversarial network (GAN) [5] is interesting in that it learns a generative model without explicitly modeling p(x), but by using an auxiliary discriminative model, thereby transforming an unsupervised learning problem into a supervised learning problem.

A GAN model is composed of two learners, a generator G and a discriminator D. G takes as input a random z drawn from some simple parametric distribution of relatively low dimensionality, e.g., a zero-mean Gaussian with unit covariance, and learns to transform it into a valid instance x from (the unknown) p(x). G is implemented as a deep neural network that takes z as input, generates x as output, and has as many layers in between as necessary for the transformation; the weights in G are denoted by θ_G. The x that are generated by G are called fake because they are synthetic. The discriminator D is a two-class classifier that learns to discriminate such fakes from true x sampled from the training set X. D is another deep neural network, with either a fake or a true x as input and 0 or 1 as the desired output, respectively, for its single sigmoid output. Again, D has as many hidden layers as necessary for the task; the weights in D are denoted by θ_D.

The objective function is

\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))] \quad (1)

We train the weights of both G and D using gradient-based optimization, alternating between the two. D wants to maximize the likelihood for true instances (drawn from the unknown p(x) as represented by the training set X) and minimize the likelihood for fake instances generated by G. At the same time, G wants to generate fakes to which D assigns as high a likelihood as possible. As G gets better at generating fakes to which D assigns high likelihood, D is forced to better separate them from true instances, which in turn forces G to generate even better fakes, and so on.
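To make the alternating updates concrete, the following is a minimal PyTorch sketch of one training step; the network sizes and hyperparameters are illustrative and not those of any model in this paper.

```python
# Minimal sketch of the alternating GAN updates described above (illustrative sizes).
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 2
G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(x_real):
    batch = x_real.size(0)
    # --- update D: maximize log D(x) + log(1 - D(G(z))) ---
    z = torch.randn(batch, latent_dim)
    x_fake = G(z).detach()                       # do not backprop into G here
    loss_D = bce(D(x_real), torch.ones(batch, 1)) + \
             bce(D(x_fake), torch.zeros(batch, 1))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()
    # --- update G: make D assign high likelihood to the fakes ---
    z = torch.randn(batch, latent_dim)
    loss_G = bce(D(G(z)), torch.ones(batch, 1))  # non-saturating form
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
    return loss_D.item(), loss_G.item()
```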

GANs are used successfully especially in image generation. A well-trained GAN can generate images that are almost indistinguishable from real ones by humans [12, 2, 13]; still, there are two main difficulties in training. Sometimes G learns only a part of the true p(x) and can generate only a subset of the possible x; this is called mode collapse because it is an indication that G does not cover all the modes of p(x). The second problem is that of vanishing gradients, which we always have in training deep neural networks; note that because both G and D are deep, G here is doubly deep because its gradient needs to be back-propagated through D.

There is recent work in the literature that focuses on these problems. To solve problems related to training, it has been proposed to use different objective functions, regularization methods, or architectures; see [3, 9, 15] for good surveys of the state of the art.

The direction we pursue in this study is to use multiple generators, each one responsible for generating a local region of p(x). Different local generators will learn to cover different modes, which helps alleviate the mode collapse problem. They also help with the problem of vanishing gradients because local generators are simpler, i.e., shallower, and hence the paths through which the gradient is back-propagated are shorter. We review three previously proposed approaches from the literature, namely the multi-agent diverse GAN (MADGAN) [4], the mixture GAN (MGAN) [8], and the mixtures of experts GAN (MEGAN) [20].

We propose the hierarchical mixture of generators, which has a tree structure with internal decision nodes that divide up the latent space and leaves that are local generators, each responsible for generating the subset of p(x) corresponding to its local region. Since the splits are soft, given the tree structure, the split parameters at the internal nodes as well as those of the generators in the leaves can be updated using gradient descent. Note that it is only the generator that is modeled this way and split locally; there is still a single discriminator D, implemented as a deep, fully connected neural network as usual.

The rest of this paper is organized as follows. In Section II, we discuss previously proposed models in the literature that also use multiple generators. We explain our proposed hierarchical mixture of generators in detail in Section III. Our experimental results on a toy two-dimensional data set and five real-world image data sets are given in Section IV. We conclude in Section V.

II Combining multiple generators in GANs

II-A Multi-agent diverse GAN

In the multi-agent diverse GAN (MADGAN) [4], there are k generators, each of which labels the fake data it generates with its index. D does not learn a two-class, true vs. fake classification problem, but a (k+1)-class problem where class 0 is for true instances and classes 1 to k are the different ways of generating a fake; in terms of implementation, D has k+1 softmax outputs instead of the one sigmoid output of the original GAN.

Fig. 1: Multi-agent diverse GAN (MADGAN) [4]. Given the latent z, the shared network (shown in yellow) produces an intermediate representation. From this representation, each generator (shown in blue) outputs a sample, one of which is randomly chosen and fed to the discriminator as a fake together with the index of its generator. D (shown in green) is a (k+1)-class classifier.

The model is shown in Figure 1. Given z, a shared neural network block first produces an intermediate representation: z is a low-dimensional unstructured vector, and the shared block contains successive deconvolution layers that turn it into a two-dimensional, multi-filter representation, which is given as input to a set of k generators, one of which we choose at random. The discriminator sees either a true x with class code 0 or one of the generated fakes with the index of its generator as the class code. The discriminator should push the different generators to different modes in order to solve the classification problem successfully: for correct classification, instances from the same class (i.e., fakes generated by the same generator) need to be more similar to each other than instances from different classes (i.e., fakes generated by different generators).

More formally, the objective for the discriminator is to maximize

\mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D_0(x)\big] + \mathbb{E}_{z \sim p_z(z)}\Big[\sum_{i=1}^{k} \delta_i \log D_i(G_i(z))\Big] \quad (2)

where \delta_i is 1 if the fake was produced by generator G_i and 0 otherwise, and D_i denotes the output of the discriminator for class i.

In updating the weights of the generators, the objective of each G_i is to minimize

\mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D_0(G_i(z))\big)\big] \quad (3)

i.e., each generator tries to have its fakes assigned to the true class 0.

Note that although there are multiple generators, their outputs are not combined in a cooperative manner. We do not partition the latent space and use each local partition for a different generator; for any z, any of the k generators can be used. It is more as if each generator produces its own interpretation of z; instead of partitioning the z-space, we learn alternative generator functions for the same region in it. A sketch of the corresponding losses is given below.
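A minimal sketch of these (k+1)-class losses, assuming class 0 denotes true samples as described above and a discriminator that outputs k+1 logits:

```python
# Sketch of (k+1)-class losses in a MADGAN-style setup (class 0 = true,
# class i = fake from generator i); D is assumed to output k+1 logits.
import torch
import torch.nn.functional as F

def madgan_d_loss(D, x_real, x_fake, gen_idx):
    """gen_idx: long tensor holding, for each fake, the index (1..k) of its generator."""
    target_real = torch.zeros(x_real.size(0), dtype=torch.long)   # class 0 = true
    return F.cross_entropy(D(x_real), target_real) + \
           F.cross_entropy(D(x_fake), gen_idx)

def madgan_g_loss(D, x_fake):
    # each generator tries to have its fakes assigned to the "true" class 0
    target = torch.zeros(x_fake.size(0), dtype=torch.long)
    return F.cross_entropy(D(x_fake), target)
```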

II-B Mixture GAN

The mixture GAN (MGAN) [8] has some similarities with MADGAN, the main difference being that the classifier and the discriminator are separated. The discriminator is a two-class classifier, as usual, differentiating between true and fake examples, and there is an additional k-class classifier, used only on the fake examples, that learns the index of the generator used.

Fig. 2: Mixture GAN [8]. This is similar to the MADGAN architecture, with one difference being that the multiple generators are used earlier: each generator creates an abstract representation which is fed to the shared block to generate the output. One of the generated fakes is selected at random and given to the discriminator; there is also an additional classifier (shown in purple) that learns the index of the generator of a fake.

The model is shown in Figure 2. There is also the difference that the split into generators comes earlier and the shared deconvolution block comes afterwards. The generators transform z into intermediate representations in parallel, and for each of these the shared block produces the final output. Training is formalized as a multi-task learning problem: the discriminator is trained to discriminate between fake and real data as usual and, at the same time, for a fake, the k-class classifier tries to predict the index of the generator that produced it.

The overall objective is defined as follows:

\min_{G_{1:k},\,C}\ \max_{D}\ \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)] + \mathbb{E}_{i,\, z \sim p_z(z)}\big[\log(1 - D(G_i(z))) - \log C_i(G_i(z))\big] \quad (4)

where the generator index i is drawn uniformly at random, and C, parameterized by θ_C, is the k-class classifier for the fakes, whose output for class i is denoted by C_i.
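This multi-task objective can be sketched as follows; the discriminator is assumed to output a probability, the classifier logits, and the weighting term `beta` is an illustrative assumption, not a value from the paper:

```python
# Sketch of the MGAN-style multi-task objective: two-class discriminator D plus
# a k-class classifier C over the fakes.
import torch
import torch.nn.functional as F

def mgan_d_loss(D, x_real, x_fake):
    ones = torch.ones(x_real.size(0), 1)
    zeros = torch.zeros(x_fake.size(0), 1)
    return F.binary_cross_entropy(D(x_real), ones) + \
           F.binary_cross_entropy(D(x_fake), zeros)

def mgan_g_and_c_loss(D, C, x_fake, gen_idx, beta=1.0):
    """gen_idx: long tensor with the index of the generator that made each fake."""
    ones = torch.ones(x_fake.size(0), 1)
    adv = F.binary_cross_entropy(D(x_fake), ones)   # fool the discriminator
    cls = F.cross_entropy(C(x_fake), gen_idx)       # C recovers the generator index
    return adv + beta * cls
```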

II-C Mixtures of experts GAN

In MEGAN [20], inspired by the mixtures of experts architecture [10], there is an additional gating model, which is also trained, that chooses among the different generators.

Fig. 3: Mixture of experts GAN [20]. Given z, each generator outputs a sample. The gating network (shown in red) selects one of the generated samples based on z and the first-layer activations of the generators. Unlike the original mixtures of experts [10], the selection is hard; only one generator is used.

The model is shown in Figure 3. In addition to the k generators, there is a gating network that takes z and the first-layer activations of the generators as its input; these activations are believed to provide additional information as to how best to choose the responsible generator. Then the Straight-Through Gumbel-Softmax is applied, which selects only one expert while preserving differentiability. The discriminator is still two-class. The gating model also has its own parameters, which are updated together with those of the generators. Although all generators generate an output, it is the gating model that decides which one is used. Except for the way the generator output is written as a weighted sum over the generators, the training objective is the same as Equation (1) used in the original GAN model.

Different from MADGAN and MGAN, here the latent space is partitioned into local regions which map to local regions of p(x). Thanks to the gating network outputs, each generator is responsible only for a local region of the latent space and generates the corresponding local region of p(x). However, this partitioning is hard, since only one generator is allowed to be used. Besides, the gating network takes processed features as extra inputs, and this may lead to a partitioning that is non-smooth in the z-space.
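A sketch of such hard gating with the Straight-Through Gumbel-Softmax, under a simplified gating architecture (a single linear layer over z concatenated with generator features, which is an assumption):

```python
# Sketch of MEGAN-style hard selection: exactly one generator is chosen per
# sample, yet gradients still flow through the gating logits.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HardGate(nn.Module):
    def __init__(self, in_dim, n_generators):
        super().__init__()
        self.logits = nn.Linear(in_dim, n_generators)

    def forward(self, z, feats):
        # feats: first-layer activations of the generators, concatenated with z
        logits = self.logits(torch.cat([z, feats], dim=1))
        return F.gumbel_softmax(logits, tau=1.0, hard=True)  # one-hot rows

# usage: with gate weights g of shape (batch, k), the selected fake is
#   x = sum_i g[:, i:i+1] * G_i(z)
```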

III Hierarchical mixtures of generators

All previous approaches use multiple generators, yet these generators do not work cooperatively, and they all train a flat set of generators. We propose the hierarchical mixture of generators, inspired by the hierarchical mixtures of experts [11], where the generators are organized at the leaves of a tree and cooperate as defined by the tree structure.

Let us think of a binary decision tree. The generators are at the leaves of this tree. At each internal node m of the tree, there is a gating function g_m(z) with parameters w_m that calculates the probability that we take the left child:

g_m(z) = \frac{1}{1 + \exp[-(w_m^T z + w_{m0})]} \quad (5)

and 1 - g_m(z) is the probability that we take the right child. If m is a leaf node, the response is given by the generator at that leaf, G_m(z). If m is an internal node, its response is a weighted sum of the responses of its left and right children, weighted by the gating values:

y_m(z) = g_m(z)\, y_{mL}(z) + (1 - g_m(z))\, y_{mR}(z) \quad (6)

where y_{mL}(z) and y_{mR}(z) are the responses of the left and right children, respectively, calculated recursively until we get to the leaf nodes; see Figure 4.

The generators at the leaves are simple linear models:

G_m(z) = V_m z + v_{m0} \quad (7)
Fig. 4: Hierarchical mixture of generators (HMoG) with depth two and four generators. Each generator creates an intermediate representation from z, and their average is calculated with the weights given by the gating values along each path. The resulting representation is passed through the shared deconvolutional block to generate the image.

Because the gating in Equation (5) is a sigmoid, we take a soft combination of the generators at the leaves. This has two uses: first, we have a smooth transition from one local generator to another, smoothly interpolating in between; second, the model is differentiable, and therefore, given a tree structure, we can use gradient-based optimization to learn all the gating parameters in the decision nodes and the parameters of the generators at the leaves.
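A minimal sketch of this tree, with sigmoid gates at the internal nodes and linear generators at the leaves; the layer sizes are placeholders and the shared deconvolutional block that maps the result to an image is omitted:

```python
# Sketch of a hierarchical mixture of generators: sigmoid gates (Eq. 5) split
# softly on z, linear generators (Eq. 7) sit at the leaves, and each internal
# node returns the gate-weighted average of its children (Eq. 6).
import torch
import torch.nn as nn

class HMoG(nn.Module):
    def __init__(self, latent_dim, out_dim, depth):
        super().__init__()
        n_leaves = 2 ** depth
        self.n_internal = n_leaves - 1
        self.gates = nn.ModuleList(nn.Linear(latent_dim, 1) for _ in range(self.n_internal))
        self.leaves = nn.ModuleList(nn.Linear(latent_dim, out_dim) for _ in range(n_leaves))

    def forward(self, z):
        return self._node(0, z)                     # recurse from the root

    def _node(self, m, z):
        if m >= self.n_internal:                    # leaf: linear generator
            return self.leaves[m - self.n_internal](z)
        g = torch.sigmoid(self.gates[m](z))         # P(go left | z)
        return g * self._node(2 * m + 1, z) + (1.0 - g) * self._node(2 * m + 2, z)

# h = HMoG(latent_dim=100, out_dim=1024, depth=3)(torch.randn(8, 100))
# h would then be fed to the shared deconvolutional block to produce images.
```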

In training, we use the Wasserstein loss [1], which has been shown to work better than the likelihood-based criterion of Equation (1):

\min_G \max_D\ \mathbb{E}_{x \sim p_{\text{data}}(x)}[D(x)] - \mathbb{E}_{z \sim p_z(z)}[D(G(z))] \quad (8)

Here, D(x) estimates a score of "trueness" for a sample, and the Wasserstein loss measures the difference between the average scores of true samples and of generated fake samples. D, which is a regressor and not a classifier, is trained to maximize this difference, and G is trained to minimize it.
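A sketch of these two losses as they would be minimized in practice; the gradient penalty used later in the experiments is omitted for brevity:

```python
# Sketch of Wasserstein critic and generator losses corresponding to Eq. (8);
# D is a regressor (critic) with no sigmoid output.
import torch

def critic_loss(D, x_real, x_fake):
    # negated so a standard optimizer can minimize it
    return -(D(x_real).mean() - D(x_fake).mean())

def generator_loss(D, x_fake):
    return -D(x_fake).mean()
```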

Note that Equation (5) defines a binary tree; we can also have a k-ary tree by using a softmax instead of the sigmoid in the gating nodes. At the extreme, as a special case, we can have a tree of depth one with k generator leaves; see Figure 5 and the sketch that follows it. This is the (flat) mixture of generators (MoG), which is similar to MEGAN with two differences: we keep the softmax gating, so the combination is soft just as in the original mixture of experts model [10], and the input to the gating uses only z, without any extra features extracted from the generators.

Fig. 5: Mixture of generators (MoG). Given z, each generator outputs an intermediate representation. The gating unit chooses among the k generators using a softmax that assigns weights between 0 and 1 that sum to 1. Hence, the output is a convex combination of the outputs of all the generators, which is then passed through the shared block to generate the image.
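A sketch of this flat special case, with softmax gating on z alone; layer sizes are illustrative and the shared block is again omitted:

```python
# Sketch of the flat mixture of generators (MoG): a softmax gate on z produces
# convex combination weights over k linear generators.
import torch
import torch.nn as nn

class MoG(nn.Module):
    def __init__(self, latent_dim, out_dim, k):
        super().__init__()
        self.gate = nn.Linear(latent_dim, k)
        self.gens = nn.ModuleList(nn.Linear(latent_dim, out_dim) for _ in range(k))

    def forward(self, z):
        w = torch.softmax(self.gate(z), dim=1)                 # (batch, k), rows sum to 1
        outs = torch.stack([g(z) for g in self.gens], dim=1)   # (batch, k, out_dim)
        return (w.unsqueeze(-1) * outs).sum(dim=1)             # convex combination
```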

IV Experiments

IV-A Results on toy data

(a) MADGAN
(b) MGAN
(c) MEGAN
Fig. 6: Results using the three competing methods that also use multiple generators. In all three, in the larger figure above, the toy data set is shown with black crosses and the different colors represent different parts of the latent space, which is similarly color coded on the lower left-hand side. The small figures below show the part of the data generated by each of the eight individual generators.
Fig. 7: Result using our proposed hierarchical mixture of generators (HMoG) with depth three and eight generators on the toy data. How the data is split over the tree at various levels and the responsibilities of the generators at the leaves are also shown.
Fig. 8: Result using the flat mixture of generators (MoG) on the toy data. The small figures below show the part of data generated by each generator.

We begin with experiments on a toy two-dimensional data set sampled from a mixture of five Gaussians. The latent z is drawn from a two-dimensional Gaussian distribution with zero mean and unit variance, and we use eight generators. The data and how the generators split it amongst themselves are shown in Figure 6 for MADGAN, MGAN, and MEGAN. We color the different regions of the latent space and the corresponding generated samples so as to show which parts of the latent space generate which parts of the data; in the smaller plots below, we also show samples generated by the individual generators, similarly color coded.

We see that, because MADGAN and MGAN do not use any gating function, the output regions of their generators overlap with each other: each color in the latent space corresponds to eight different points in the data space, one per generator. MEGAN does use a gating function, but because the gating also uses extra features, the output regions of its generators still overlap. Note that all three miss parts of the underlying distribution; MADGAN and MEGAN miss the top component, and MGAN misses the one at the bottom.

The tree learned by our proposed hierarchical mixture of generators, HMoG, is shown in Figure 7; this is a tree of depth three that also has eight generators at its leaves. We calculate each decision node's responsibility by accumulating the (soft) gating values, and an instance is drawn in the box of the expert having the highest responsibility. We see that the tree has learned a hierarchical soft clustering of the data, with the leaves learning parts of the latent space, each corresponding to a part of the data. We see that this model covers the data completely and has not missed any of the components.

The results using a flat mixture of generators, MoG, are shown in Figure 8. Because the combination is soft and depends only on the input z, here too each generator operates in a local region of the data. This model also learns the distribution without dropping any modes; note, however, that some generators do more of the work, with some not used at all. This, we believe, is the advantage of a hierarchical model, which dissects the problem into two at each level, easing it in a divide-and-conquer fashion. The hierarchical organization also lends itself to discovering structure in the data.

IV-B Results on image data sets

We test and compare our proposed mixture models, HMoG and MoG, with MADGAN, MGAN, and MEGAN on five image data sets that are widely used in the GAN literature: MNIST [16], FashionMNIST [24], UTZap50K [25], Oxford Flowers [19], and CelebA [17]. We resize the MNIST and FashionMNIST data sets to a smaller resolution and the other, more detailed data sets to a larger one. All image pixels are normalized to a fixed range.

It is known that using a convolutional architecture for tasks that involve images increases performance dramatically, so we incorporate transposed convolutional (also known as deconvolutional or fractionally strided convolutional) layers in each model. More specifically, we use the (transposed) convolutional part of DCGAN [21] as the shared part of the generators, referred to as the shared block above. Instead of generating samples directly in the data domain, each model generates an abstract representation which is given to the shared block that produces the output image. For any data set, the same shared block is used in all models.

All these variants combine multiple local models; we also define a fully connected (FC) model that uses a single fully connected layer in place of the mixture, which stands for the standard distributed alternative with one global generator and which we take as the baseline against which we compare all the localized variants.

In training HMoG, MoG, and the baseline FC model, the Wasserstein loss with gradient penalty [6] is used. For MADGAN and MGAN, we use the original likelihood-based loss; the Wasserstein loss is not applicable with these since they require the discriminator to be a classifier. MEGAN can be used with either the Wasserstein loss or the original loss; we use the original loss because it performed better in our preliminary experiments. For all methods, we used the Adam optimizer [14] with the AMSGrad option [22]; a configuration sketch is given below. The learning rate is set to with beta values of Adam set to . The batch size is set to 128.
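A sketch of this optimizer configuration; the learning rate and beta values shown are placeholders for illustration, not the exact settings used here:

```python
# Sketch of the optimizer setup described above: Adam with the AMSGrad option.
import torch

def make_optimizer(params):
    return torch.optim.Adam(params, lr=1e-4, betas=(0.5, 0.9), amsgrad=True)

# opt_G = make_optimizer(generator.parameters())
# opt_D = make_optimizer(discriminator.parameters())
# training then iterates over mini-batches of size 128
```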

For evaluating the performance of the variants, we use the Fréchet Inception distance (FID) [7] and the classifier two-sample test (C2ST) [18], here implemented as 5-nearest-neighbor (5-NN) leave-one-out accuracy. Both FID and 5-NN accuracy are calculated on the activations before the softmax layer (2048-dimensional) of InceptionV3 [23]. Lower FID scores are better, and 5-NN accuracies close to 50% are better. All models are run five times with different random seeds, and we report the means and standard deviations. A sketch of the 5-NN test follows.
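A sketch of the 5-NN leave-one-out test on precomputed feature vectors, assuming scikit-learn is available:

```python
# Sketch of the 5-NN leave-one-out classifier two-sample test: real and generated
# feature vectors (e.g., 2048-dim InceptionV3 activations) are pooled and each
# point is classified by its 5 nearest neighbors among the rest; accuracy near
# 50% means real and fake features are hard to tell apart.
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def knn_c2st(real_feats, fake_feats, k=5):
    X = np.concatenate([real_feats, fake_feats], axis=0)
    y = np.concatenate([np.ones(len(real_feats), dtype=int),
                        np.zeros(len(fake_feats), dtype=int)])
    scores = cross_val_score(KNeighborsClassifier(n_neighbors=k), X, y, cv=LeaveOneOut())
    return scores.mean()   # an ideal generator gives ~0.5
```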

For the flat models, we experiment with 4, 8, 16, and 32 generators, which for the hierarchical model translates to trees of depth 2, 3, 4, and 5. We also report the parameter count of each model; these counts do not include the shared deconvolution block used in all models.

(a) MNIST
(b) FashionMNIST
(c) UTZap50K
(d) Oxford Flowers
(e) CelebA
Fig. 9: FID scores and 5-NN accuracies of all tested models on the five data sets for different numbers of generators. These are averages with one-standard-deviation error bars over five runs. The x-axis is the parameter count (not including the shared block used in all models) and the y-axis shows the 5-NN real accuracy, 5-NN fake accuracy, or the FID score, respectively. Lower FID and 5-NN scores closer to 50% are better.

Our experimental results on the five data sets are shown in Figure 9. We see that in terms of FID score, both of our proposed MoG and HMoG outperform the other approaches. We also see that MADGAN and MGAN perform worse than the baseline FC; only on MNIST does MADGAN perform better than the baseline. This suggests that forcing the discriminator to classify generators, which is the idea behind MADGAN and MGAN, may not always work. On the other hand, MEGAN seems to perform on par with the baseline, sometimes even better. Note that, unlike MADGAN and MGAN, MEGAN uses a gating function to select among its generators. This hints at the importance of training different generators on different input regions and combining them based on the input, instead of relying on the discriminator to force multiple generators to different modes.

If we compare our mixture of experts formulation (MoG) with MEGAN, we see that our model obtains better results in terms of FID scores and 5-NN accuracies. As opposed to MEGAN, our mixture of generators is a soft, cooperative one. The input to the gating model is only the latent z, which also reduces the number of parameters significantly.

Some samples generated by HMoG with depth four are shown in Figure 10. For the sampling procedure, we randomly draw z and disregard the least likely percent of draws to get rid of possible outliers [2]. A visual inspection of these also shows that HMoG is able to generate realistic and diverse samples on all data sets.

Fig. 10: Samples generated by the hierarchical mixture of generators (HMoG) with depth four (and 16 generators) on the five data sets.

IV-C Interpretability

Because both MoG and HMoG use a soft combination, we can check whether there is any correlation between the usage of the local generators. For the flat MoG, the probability that a local model is used is given by the softmax gating; for HMoG, it is the product of all the (binary) gating values on the path to the root. We calculate the correlation between these probabilities for pairs of local models for the case of 16 generators (a tree of depth 4) on the CelebA data set, as sketched below. The correlation matrices for both models are shown color coded in Figure 11.
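A sketch of this computation on collected gating probabilities:

```python
# Sketch of the interpretability analysis: for many latent samples, collect the
# probability that each leaf generator is used (for HMoG, the product of the
# gate values along the root-to-leaf path), then correlate these usage
# probabilities across generators as in Figure 11.
import numpy as np

def gating_correlation(leaf_probs):
    """leaf_probs: (n_samples, n_generators) soft responsibility of each generator."""
    return np.corrcoef(leaf_probs, rowvar=False)   # (n_generators, n_generators)
```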

We see that with the flat MoG, the correlations are randomly scattered. In HMoG, however, the correlations gather around the diagonal; we can see square blocks of two sizes corresponding to subtrees, which indicates that generators sharing an ancestor at the second or the third level of the tree are frequently used together, suggesting that they learn semantically correlated samples.

(a) MoG
(b) HMoG
Fig. 11: Correlation matrices of the gating values of the generators for MoG and HMoG with 16 generators. With MoG, there is no apparent correlation; with HMoG, we see that generators that are closer in the tree (in the same subtree) are used together, which implies a semantic correlation.
Fig. 12: On MNIST using HMoG, the average response of each internal node in the tree hierarchy is shown. For each leaf, five random samples are shown that have the highest probability of being generated in there.
Fig. 13: On CelebA using HMoG, the average response of each internal node in the tree hierarchy is shown. For each leaf, five samples are shown that have the highest probability of being generated there.

In Figures 12 and 13, the average responses of the decision nodes in the tree are visualized by taking the weighted average of the generated samples, on MNIST and CelebA respectively. For a given node, the weights are found by multiplying the gating probabilities along the path to that node. At the bottom of the tree, under each leaf, we show five random samples generated by the corresponding generator. To find these, we sample 10,000 random z vectors and select the five most likely for each generator, where the most likely point for a generator is the one that maximizes the probability that the corresponding leaf is chosen. We see the data set mean at the root, and as we go down the tree, the blurriness decreases and each node becomes more specialized to a specific region of the data. We see in Figure 12 that digits that are similar in shape are generated by leaves that are nearby in the tree. For CelebA too, as we see in Figure 13, the examples are distributed over the leaves in terms of similarity in orientation, color, or background.

We believe that this interpretability is the advantage of the HMoG model over the MoG model, as well as over other approaches that train a flat set of generators. As in soft hierarchical clustering, the division at each level, which may be interpreted as an architectural inductive bias, lets us view the data at different levels of granularity and understand the decisive features of the data through a divide-and-conquer type of approach.

V Conclusions

We propose the hierarchical mixture of generators, HMoG, and a special case, MoG, which is a flat mixture of generators. There are GAN variants in the literature that also combine multiple generators, but they are limited in the way they force the generators to different modes. Our formulation is, to our knowledge, the first that learns a cooperative mixture of generators, organized either in a flat manner or hierarchically.

An important advantage of the hierarchical model is its interpretability. Since it is a tree architecture, we can make a post-hoc analysis of the learned tree to gain insight about the data. At each level of the tree, nodes can be seen as clusters, or modes, at different levels of granularity, where clusters get more local as we go down the tree. At the same time, splits are soft, and what the tree learns is a hierarchical soft clustering of the data. In the generative setting we have here, the leaves are generators, each responsible for generating one local cluster.

Our experimental results on five data sets show that the proposed models can generate samples that are realistic and diverse. Our proposed models achieve better FID scores and 5-NN accuracies with lower variance when compared with other methods that incorporate multiple generators, as well as with the fully connected standard GAN implementation.

Acknowledgements

This work is partially supported by Boğaziçi University Research Funds with Grant Number 18A01P7. The numerical calculations reported in this work were partially performed at TUBITAK ULAKBIM, High Performance and Grid Computing Center (TRUBA resources).

References

  • [1] M. Arjovsky, S. Chintala, and L. Bottou (2017) Wasserstein generative adversarial networks. In International Conference on Machine Learning 34, pp. 214–223. Cited by: §III.
  • [2] A. Brock, J. Donahue, and K. Simonyan (2018) Large scale GAN training for high fidelity natural image synthesis. arXiv preprint arXiv:1809.11096. Cited by: §I, §IV-B.
  • [3] A. Creswell, T. White, V. Dumoulin, K. Arulkumaran, B. Sengupta, and A. A. Bharath (2018) Generative adversarial networks: an overview. IEEE Signal Processing Magazine 35 (1), pp. 53–65. Cited by: §I.
  • [4] A. Ghosh, V. Kulharia, V. P. Namboodiri, P. H. Torr, and P. K. Dokania (2018) Multi-agent diverse generative adversarial networks. In IEEE Conference on Computer Vision and Pattern Recognition 31, pp. 8513–8521. Cited by: §I, Fig. 1, §II-A.
  • [5] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio (2014) Generative adversarial nets. In Neural Information Processing Systems 27, pp. 2672–2680. Cited by: §I.
  • [6] I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. C. Courville (2017) Improved training of Wasserstein GANs. In Neural Information Processing Systems 30, pp. 5767–5777. Cited by: §IV-B.
  • [7] M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, G. Klambauer, and S. Hochreiter (2017) GANs trained by a two time-scale update rule converge to a nash equilibrium. arXiv preprint arXiv:1706.08500. Cited by: §IV-B.
  • [8] Q. Hoang, T. D. Nguyen, T. Le, and D. Phung (2018) MGAN: training generative adversarial nets with multiple generators. In International Conference on Learning Representations 6, Cited by: §I, Fig. 2, §II-B.
  • [9] Y. Hong, U. Hwang, J. Yoo, and S. Yoon (2019) How generative adversarial networks and their variants work: an overview. ACM Computing Surveys 52 (1), pp. 10. Cited by: §I.
  • [10] R. A. Jacobs, M. I. Jordan, S. J. Nowlan, and G. E. Hinton (1991) Adaptive mixtures of local experts. Neural Computation 3 (1), pp. 79–87. Cited by: Fig. 3, §II-C, §III.
  • [11] M. I. Jordan and R. A. Jacobs (1994) Hierarchical mixtures of experts and the EM algorithm. Neural Computation 6 (2), pp. 181–214. Cited by: §III.
  • [12] T. Karras, T. Aila, S. Laine, and J. Lehtinen (2017) Progressive growing of GANs for improved quality, stability, and variation. arXiv preprint arXiv:1710.10196. Cited by: §I.
  • [13] T. Karras, S. Laine, and T. Aila (2019) A style-based generator architecture for generative adversarial networks. In IEEE Conference on Computer Vision and Pattern Recognition 32, pp. 4401–4410. Cited by: §I.
  • [14] D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §IV-B.
  • [15] K. Kurach, M. Lucic, X. Zhai, M. Michalski, and S. Gelly (2018) The GAN landscape: losses, architectures, regularization, and normalization. arXiv preprint arXiv:1807.04720. Cited by: §I.
  • [16] Y. LeCun (1998) The MNIST database of handwritten digits. http://yann.lecun.com/exdb/mnist/. Cited by: §IV-B.
  • [17] Z. Liu, P. Luo, X. Wang, and X. Tang (2015) Deep learning face attributes in the wild. In IEEE International Conference on Computer Vision 15, pp. 3730–3738. Cited by: §IV-B.
  • [18] D. Lopez-Paz and M. Oquab (2016) Revisiting classifier two-sample tests. arXiv preprint arXiv:1610.06545. Cited by: §IV-B.
  • [19] M. Nilsback and A. Zisserman (2008) Automated flower classification over a large number of classes. In Indian Conference on Computer Vision, Graphics & Image Processing 6, pp. 722–729. Cited by: §IV-B.
  • [20] D. K. Park, S. Yoo, H. Bahng, J. Choo, and N. Park (2018) MEGAN: mixture of experts of generative adversarial networks for multimodal image generation. arXiv preprint arXiv:1805.02481. Cited by: §I, Fig. 3, §II-C.
  • [21] A. Radford, L. Metz, and S. Chintala (2015) Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434. Cited by: §IV-B.
  • [22] S. J. Reddi, S. Kale, and S. Kumar (2019) On the convergence of Adam and beyond. arXiv preprint arXiv:1904.09237. Cited by: §IV-B.
  • [23] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna (2016) Rethinking the inception architecture for computer vision. In IEEE Conference on Computer Vision and Pattern Recognition 29, pp. 2818–2826. Cited by: §IV-B.
  • [24] H. Xiao, K. Rasul, and R. Vollgraf (2017) Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747. Cited by: §IV-B.
  • [25] A. Yu and K. Grauman (2014) Fine-grained visual comparisons with local learning. In IEEE Conference on Computer Vision and Pattern Recognition 27, pp. 192–199. Cited by: §IV-B.