Effective Data Augmentation with Multi-Domain Learning GANs

12/25/2019 ∙ by Shin'ya Yamaguchi, et al.

For deep learning applications, massive data development (e.g., collecting and labeling), an essential process in building practical applications, still incurs seriously high costs. In this work, we propose an effective data augmentation method based on generative adversarial networks (GANs), called Domain Fusion. Our key idea is to import the knowledge contained in an outer dataset into a target model by using a multi-domain learning GAN. The multi-domain learning GAN simultaneously learns the outer and target datasets and generates new samples for the target tasks. This simultaneous learning process lets the GAN generate target samples with high fidelity and variety. As a result, we can obtain accurate models for the target tasks by using these generated samples even if we only have an extremely low-volume target dataset. We experimentally evaluate the advantages of Domain Fusion in image classification tasks on 3 target datasets: CIFAR-100, FGVC-Aircraft, and Indoor Scene Recognition. When each target dataset is reduced to 5,000 images, Domain Fusion achieves better classification accuracy than data augmentation using fine-tuned GANs. Furthermore, we show that Domain Fusion improves the quality of generated samples, and that this improvement contributes to higher accuracy.




Introduction

Deep learning models have demonstrated state-of-the-art performance in various tasks using high-dimensional data such as computer vision [26], speech recognition [32], and natural language processing [30]. These models achieve high performance by optimizing their millions of parameters through training on labeled data. Since models with enormous numbers of parameters can easily overfit small datasets, generalization performance tends to be proportional to the size of the labeled data. In fact, Sun et al. experimentally showed that test performance on vision tasks improves logarithmically with the labeled data size. To obtain higher performance from deep models, we must develop as much labeled data as possible by collecting data and attaching labels. However, developing labeled data is one of the main obstacles to the deployment of deep models since it requires much time and high cost.

One of the most common techniques to alleviate the costs of labeled data development is data augmentation (DA). To improve the performance of a target task (e.g., classification or regression), DA amplifies the variation of existing labeled data (target data) by adding small transformations (e.g., random expansion, flip, and rotation). Since DA improves performance despite its simplicity and has no dependency on network architectures, it is widely applied in many applications [17, 14]. However, when we train target models on low-volume datasets, the improvements from DA are limited because DA is designed to transform an existing sample into a slightly modified sample. In other words, DA does not generate truly unseen data that carry information not included in the data being transformed. For example, in image recognition, DA cannot transform running-horse images into sitting-horse images. Therefore, the benefit of DA is limited when we only have low-volume datasets.
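As a concrete illustration of the conventional DA described above, the following sketch applies a random horizontal flip and a random crop after padding (a stand-in for the flip/expansion transforms; the function name and padding scheme are our own, not the paper's):

```python
import numpy as np

def augment(image, rng, max_pad=4):
    """Simple label-preserving transforms: random horizontal flip and
    random crop after zero-padding. Output keeps the input shape."""
    h, w, c = image.shape
    if rng.random() < 0.5:
        image = image[:, ::-1, :]              # horizontal flip
    pad = int(rng.integers(0, max_pad + 1))    # random expansion amount
    padded = np.pad(image, ((pad, pad), (pad, pad), (0, 0)))
    top = int(rng.integers(0, 2 * pad + 1))
    left = int(rng.integers(0, 2 * pad + 1))
    return padded[top:top + h, left:left + w, :]

rng = np.random.default_rng(0)
img = np.arange(32 * 32 * 3, dtype=np.float32).reshape(32, 32, 3)
out = augment(img, rng)
assert out.shape == img.shape  # same shape, slightly modified content
```

Note how every output is a small perturbation of the input, which is exactly why such transforms cannot produce truly unseen content.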

Several methods [29, 33, 5, 34, 1] have been presented to overcome this limitation of DA by applying generative adversarial networks (GANs) [9]. GANs generate varied and realistic data samples by learning data distributions; they can generate unseen samples from the learned distributions. The existing methods exploit this ability and use the generated samples as additional input for the target task. Although these GAN-based methods succeed in improving target performance, they assume a sufficient volume of data for training the GANs. In fact, with low-volume data, the generated samples have less fidelity and variety and can degrade target performance [31, 28]. This is because low-volume data contains insufficient knowledge, and thus we need to utilize supplementary knowledge for training GANs. To train GANs with low-volume target data, Wang et al. [31] proposed Transferring GANs (TGANs), which incorporate a fine-tuning technique into GANs. However, Wang et al. experimentally showed that TGANs do not improve generation performance very well when only a 1K-sample target dataset is available.

In this paper, we propose Domain Fusion (DF), an effective data augmentation technique exploiting GANs trained on a target dataset and another dataset. To generate helpful samples, DF incorporates knowledge from the outer domain, i.e., a domain other than the target, into a GAN. Specifically, unlike TGAN, we train GANs on the target and outer datasets simultaneously. After training the GANs, we use the generated samples in the target domain for the target tasks. In order to generate target samples explicitly, we adopt conditional GANs, which produce conditioned samples given assigned class labels. As a result, DF transfers helpful knowledge of the outer domain into the generated target samples via the shared parameters of the GAN. We call this training method multi-domain training, and the trained GANs multi-domain learning GANs.

Furthermore, to enhance the quality of the generated samples, we propose two improvement techniques for DF. First, we introduce a metric to select an outer dataset that includes knowledge for generating more helpful target samples. An appropriate outer dataset must be selected for the target domain since the performance of DF depends on this choice. To this end, we develop a new metric based on Fréchet inception distance (FID) [11] and multi-scale structural similarity (MS-SSIM), which focuses on the relevance between the target and outer domains and the diversity of the outer samples. Second, when generating samples from a GAN, we apply filtering to remove extremely broken samples that could negatively affect target models. For this purpose, we use discriminator rejection sampling (DRS; Azadi et al.), which uses information from the discriminator of a GAN to omit bad samples. We extend the DRS algorithm to conditional GANs to generate high-quality class-conditional samples. Applying these improvements, we can generate more helpful target samples.

Our experimental results demonstrate that the samples from our GANs in DF improve the accuracy in a low-data regime more than those from TGANs. Furthermore, we show that our GANs produce higher-quality samples than TGANs in terms of FID and Inception Score. We also experimentally confirm the correlation between the quality of generated samples and the classification accuracy. More importantly, we show that classifiers trained with a combination of DF and conventional DA outperform those trained with conventional DA alone.

Our main contributions are as follows:

  • We propose a new data augmentation method using GANs called Domain Fusion, which transfers knowledge of an outer dataset into target models by using a GAN trained on multiple domains via shared parameters. We also propose a metric for outer dataset selection, and a modified DRS for filtering generated samples.

  • We confirm correlations between the quality of generated samples and target-task performance in our experiments on CIFAR-100, FGVC-Aircraft, and Indoor Scene Recognition in a low-volume data regime. These results support that Domain Fusion improves the target models because of the high quality of the generated samples.

Background

Generative Adversarial Networks

A generative adversarial network (GAN) is composed of a generator network G and a discriminator network D [9]. The generator G produces fake samples G(z) from random noise z ∼ p_z(z), and the discriminator D distinguishes whether an observation comes from the generator or from the data distribution p_data(x). The objective functions for training the discriminator and the generator are respectively formalized as follows:

(1) L_D = −E_{x∼p_data(x)}[log D(x)] − E_{z∼p_z(z)}[log(1 − D(G(z)))]
(2) L_G = −E_{z∼p_z(z)}[log D(G(z))]

Through the tandem training of G and D, D learns to maximize the probability of assigning the "real" label to real examples, whereas G learns to maximize the probability that D fails to make the distinction. When G and D converge to an equilibrium point, the generator network produces realistic samples as a good representation of the data distribution p_data.
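For illustration, the two objectives above can be written directly in terms of discriminator outputs. This is a minimal numpy sketch (the function names are ours; a real implementation would backpropagate through networks):

```python
import numpy as np

def d_loss(d_real, d_fake, eps=1e-8):
    """Eq. (1): -E[log D(x)] - E[log(1 - D(G(z)))]."""
    return float(-np.mean(np.log(d_real + eps))
                 - np.mean(np.log(1.0 - d_fake + eps)))

def g_loss(d_fake, eps=1e-8):
    """Eq. (2), non-saturating form: -E[log D(G(z))]."""
    return float(-np.mean(np.log(d_fake + eps)))

# At the equilibrium D(x) = 1/2 the discriminator loss equals 2*log(2).
half = np.full(4, 0.5)
assert abs(d_loss(half, half) - 2 * np.log(2)) < 1e-5
```

The `eps` term is only a numerical guard against log(0); it is not part of the formal objectives.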

In Domain Fusion, we use conditional GANs (cGANs) [24, 21] that generate samples conditioned on class labels. The objective functions are given by rewriting Eqs. (1) and (2):

(3) L_D = −E_{(x,y)∼p_data(x,y)}[log D(x, y)] − E_{z∼p_z(z), y∼p(y)}[log(1 − D(G(z, y), y))]
(4) L_G = −E_{z∼p_z(z), y∼p(y)}[log D(G(z, y), y)]

While there are several formulations of cGANs, we adopt projection-based conditioning [21] as our implementation of the cGAN. This approach embeds the conditional label vector and incorporates it into the feature vectors of the generator and discriminator (in the discriminator, via an inner product with its feature vector) to learn the condition.

Data Augmentation with GANs

There are several studies applying GANs to data augmentation schemes. Calimeri et al. proposed an approach simply applying generated samples as additional datasets for medical imaging tasks. Zhu et al. showed an application using conditional GANs for augmenting plant images. For re-identification tasks in computer vision, the study of [33] presented a training method with unconditionally generated samples. Tran et al. presented a way to train classification models with GANs in a semi-supervised fashion. Similarly to our work, these studies leveraged generated samples from GANs as supplementary training data for target models. This is an intuitive and flexible strategy because we can easily use the generated samples as an augmented dataset, just like conventional DA. However, with low-volume data, these types of data augmentation suffer from the problem of insufficiently trained GANs, as described in the next section. In fact, Shmelkov et al. showed that generated samples from GANs trained on low-volume data degrade the accuracy of classifiers. Our approach can help these existing GAN-based methods reduce the negative effects of this problem since it improves the quality of the generated samples in the low-volume case.

Training GANs with Low Data Volume

For a low-volume training data regime, Wang et al. [31] proposed a fine-tuning technique for training GANs, called Transferring GANs. The authors initialize the weights of a GAN by leveraging generators and discriminators pretrained on larger outer datasets such as ImageNet. They investigated the effect of target data size with experiments in which GANs were pretrained on the outer dataset (ImageNet) and then fine-tuned on the target dataset (LSUN Bedrooms). Their results showed that fine-tuned GANs generate high-quality samples given large target data (FID of 18.5 with 1M samples), but relatively low-quality samples given less target data (FID of 93.4 with 1K samples). Since even 1K target samples require considerable dataset-development effort, training GANs with low data volume remains challenging.

Domain Fusion

In this section, we present Domain Fusion using multi-domain learning GANs. A multi-domain learning GAN is trained on the target dataset and an outer dataset simultaneously. The procedure of Domain Fusion consists of the following three steps: (a) selecting an outer dataset, (b) multi-domain training of a GAN, and (c) sampling labeled target examples from the trained GAN. In the rest of this section, we describe each step.

Selecting Outer Dataset

First, we select an outer dataset that has useful knowledge for the target domain. In this paper, we denote a dataset D composed of X and Y, where X is a set of data samples (e.g., images) and Y is a set of labels. If we have a target dataset D_t, the outer dataset D_o is selected from the candidates {D_1, …, D_n} according to S(D_i; D_t), which is our outer-dataset metric of D_i for the target D_t:

(5) D_o = argmin_{D_i ∈ {D_1, …, D_n}} S(D_i; D_t)

In fact, it is non-trivial which metric we should choose for outer dataset selection. We propose a metric that takes into account both the relevance between the target and outer dataset and the diversity of the outer samples (see the Improvements section).

Multi-Domain Training

Next, we train a conditional GAN: the discriminator D minimizes Eq. (3) and the generator G minimizes Eq. (4) on both D_t and D_o. The objective functions of the multi-domain training are defined as follows:

(6) L_D^DF = L_D^t + λ·L_D^o
(7) L_G^DF = L_G^t + λ·L_G^o

where,

(8) L_D^t = −E_{(x,y)∼D_t}[log D(x, y)] − E_{z∼p_z(z), y∼Y_t}[log(1 − D(G(z, y), y))]
(9) L_D^o = −E_{(x,y)∼D_o}[log D(x, y)] − E_{z∼p_z(z), y∼Y_o}[log(1 − D(G(z, y), y))]
(10) L_G^t = −E_{z∼p_z(z), y∼Y_t}[log D(G(z, y), y)]
(11) L_G^o = −E_{z∼p_z(z), y∼Y_o}[log D(G(z, y), y)]

and λ is a hyperparameter balancing the learning scale between the target and outer dataset. In each optimization step, we sample data from both the target and outer datasets and then compute the objective functions. For both the target and outer domains, we adopt conditional GANs (cGANs) because the labels allow GANs to generate target samples explicitly. Furthermore, GANs with labels can achieve higher generation performance than GANs without labels [18]. We assume that Y_t and Y_o are disjoint from each other. In training, we could collapse Y_o into one class since the target tasks do not use the labels of the outer dataset. However, we experimentally found that class-wise training with Y_o as well as Y_t contributes to higher-quality generated samples. We infer that this is because Y_o makes learning the outer domain easier, and such learned representations help generate target samples. The overall procedure of the multi-domain training is illustrated in Algorithm 1.

1: Input: set of target data X_t, set of outer data X_o, set of target labels Y_t, set of outer labels Y_o, batchsize m, learning rate α, scaling factor λ
2: Output: trained generator G
3: Randomly initialize parameters θ_D, θ_G
4: while not convergence do
5:      for n_D steps do
6:            x_t, y_t ← GetSample(X_t, Y_t, m)
7:            z ← GenNoise(m)
8:            L_D^t ← Eq. (8)
9:            x_o, y_o ← GetSample(X_o, Y_o, m)
10:           z ← GenNoise(m)
11:           L_D^o ← Eq. (9)
12:           θ_D ← θ_D − α ∇_{θ_D} L_D^DF   (Eq. (6))
13:     end for
14:     y_t ← GetLabel(Y_t, m); z ← GenNoise(m)
15:     L_G^t ← Eq. (10)
16:     y_o ← GetLabel(Y_o, m); z ← GenNoise(m)
17:     L_G^o ← Eq. (11)
18:     θ_G ← θ_G − α ∇_{θ_G} L_G^DF   (Eq. (7))
19: end while
Algorithm 1 Multi-Domain Training of Domain Fusion
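The skeleton of the multi-domain training loop can be sketched as follows. This is only a structural sketch under stated assumptions: the batch providers and per-domain loss functions are hypothetical callables standing in for the cGAN losses of Eqs. (8)–(11), and the parameter updates are elided:

```python
def multi_domain_loss(loss_target, loss_outer, lam):
    """Eqs. (6)/(7): combine per-domain losses with scaling factor lambda."""
    return loss_target + lam * loss_outer

def train(get_target_batch, get_outer_batch, d_loss, g_loss,
          lam, n_d=1, steps=10):
    """Skeleton of Algorithm 1: n_d discriminator passes per generator
    pass. Optimizer updates are omitted; we only record losses."""
    history = []
    for _ in range(steps):
        for _ in range(n_d):
            ld = multi_domain_loss(d_loss(get_target_batch()),
                                   d_loss(get_outer_batch()), lam)  # Eq. (6)
        lg = multi_domain_loss(g_loss(get_target_batch()),
                               g_loss(get_outer_batch()), lam)      # Eq. (7)
        history.append((ld, lg))
    return history

# Toy usage with hypothetical constant "batches" and linear "losses":
hist = train(lambda: 1.0, lambda: 2.0,
             d_loss=lambda x: x, g_loss=lambda x: 0.5 * x, lam=0.5)
assert hist[0] == (2.0, 1.0)  # d: 1 + 0.5*2; g: 0.5 + 0.5*1
```

The key structural point is that every discriminator and generator step sees both domains, so the shared parameters are always shaped by the outer knowledge.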

Sampling Target Examples

After training, we generate a set of new data samples X_g from the trained generator G as follows:

(12) X_g = { G(z, y) | z ∼ p_z(z), y ∈ Y_t }

Note that the input label y is an element of Y_t since the purpose of Domain Fusion is to augment the target dataset D_t. We generate an equal number of samples for each label.

In general, trained conditional GANs generate samples using only the generator G. However, the generated samples can include poor-quality samples that would have been rejected by the discriminator during training. To obtain higher-quality samples, we apply discriminator rejection sampling (DRS; Azadi et al.). In the next section, we show our modified DRS algorithm for conditional sampling.

Finally, the generated set X_g is integrated into the target dataset D_t:

(13) X'_t = X_t ∪ X_g
(14) Y'_t = Y_t
(15) D'_t = {X'_t, Y'_t}

We assume that the generated data in X_g are attribute-consistent with their specified labels y ∈ Y_t. Thus, the augmented dataset D'_t is used directly as the input for target model training in place of the original target dataset D_t.
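The per-class sampling and dataset integration above can be sketched as follows; the `generator` callable, `noise_dim`, and the toy stand-in generator are hypothetical, not the paper's implementation:

```python
import numpy as np

def sample_augmented_set(generator, target_labels, n_per_class,
                         noise_dim, rng):
    """Eq. (12): draw an equal number of samples per target label y."""
    images, labels = [], []
    for y in target_labels:
        z = rng.standard_normal((n_per_class, noise_dim))
        images.append(generator(z, y))          # G(z, y)
        labels.extend([y] * n_per_class)
    return np.concatenate(images), np.array(labels)

# Toy "generator" that returns constant rows labeled by the class id:
gen = lambda z, y: np.zeros_like(z) + y
imgs, labs = sample_augmented_set(gen, target_labels=[0, 1, 2],
                                  n_per_class=4, noise_dim=8,
                                  rng=np.random.default_rng(0))
assert imgs.shape == (12, 8)
assert np.bincount(labs).tolist() == [4, 4, 4]  # balanced per class
```

The generated pairs would then simply be concatenated with the real (X_t, Y_t) before classifier training.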

Improvements

Outer Dataset Selection Metric

In Domain Fusion, the choice of an outer dataset for the target is a dominant factor determining both the target model performance and the quality of generated samples. In order to select a proper outer dataset, we focus on the relevance between the target and outer dataset, and the diversity of an outer dataset.

Relevance Between the Target and Outer Dataset

In the context of transfer learning, measuring the relevance between the outer and target domains is widely used to avoid negative transfer, i.e., the target models performing worse than without transfer. For GANs, Wang et al. [31] select the outer dataset by measuring the Fréchet inception distance (FID) [11] to the target dataset. The FID between two datasets D_1 and D_2 is computed on features of an ImageNet-pretrained Inception Net:

(16) FID(D_1, D_2) = ‖μ_1 − μ_2‖² + Tr(Σ_1 + Σ_2 − 2(Σ_1 Σ_2)^{1/2})

where μ_i and Σ_i are the mean and covariance of the Inception Net feature vectors for input X_i. A lower FID means that D_1 and D_2 are highly related to each other. Following Wang et al., we adopt FID as part of our metric to measure the relevance of the target and outer dataset. For our use, FID is preferable to other relevance metrics (e.g., the general Wasserstein distance and maximum mean discrepancy) because there is no need to train additional feature extractors or kernel functions for each pair of datasets.
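As an illustration, Eq. (16) can be computed from pre-extracted feature vectors. The paper uses Inception Net features; plain arrays stand in for them here, and the matrix square root is implemented for symmetric PSD inputs via eigendecomposition (using the identity Tr((Σ1Σ2)^½) = Tr((Σ2^½ Σ1 Σ2^½)^½)):

```python
import numpy as np

def _sqrtm_psd(a):
    """Matrix square root of a symmetric PSD matrix via eigendecomposition."""
    w, v = np.linalg.eigh(a)
    return (v * np.sqrt(np.clip(w, 0, None))) @ v.T

def fid(feat1, feat2):
    """Eq. (16) on feature matrices whose rows are per-sample features."""
    mu1, mu2 = feat1.mean(0), feat2.mean(0)
    s1 = np.cov(feat1, rowvar=False)
    s2 = np.cov(feat2, rowvar=False)
    s2h = _sqrtm_psd(s2)                      # symmetric reformulation
    covmean = _sqrtm_psd(s2h @ s1 @ s2h)
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(s1) + np.trace(s2)
                 - 2 * np.trace(covmean))

rng = np.random.default_rng(0)
x = rng.standard_normal((500, 4))
assert fid(x, x) < 1e-6        # identical feature sets: near-zero FID
assert fid(x, x + 5.0) > 50.0  # mean shift of 5 per dim: ||diff||^2 = 100
```

In practice the feature matrices would come from the pool layer of a pretrained Inception network rather than raw pixels.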

Diversity of an Outer Dataset

Wang et al. [31] also reported a limitation of FID in predicting the actual quality of samples generated by fine-tuned GANs. This indicates that even if an outer dataset is highly relevant to the target, it does not necessarily improve the quality of the generated target samples. Thus, FID alone is insufficient for proper outer dataset selection.

In Domain Fusion, we propose a metric with the additional perspective of diversity for selecting an outer dataset. We assume that an outer dataset with diverse samples is preferable for target sample generation because more diverse samples can contain more useful and general information for generating target samples. In order to select the dataset containing the most diverse samples, we exploit multi-scale structural similarity (MS-SSIM). MS-SSIM assesses structural similarity at multiple scales and is well accepted as an evaluation method for image compression tasks. Recently, MS-SSIM has been used to evaluate the diversity of samples generated by GANs (Odena et al.; Miyato & Koyama). We apply MS-SSIM to assess the diversity of existing datasets in order to select more helpful outer datasets. The MS-SSIM of two data samples x_1 and x_2 is defined as follows:

(17) MS-SSIM(x_1, x_2) = [l_M(x_1, x_2)]^{α_M} ∏_{j=1}^{M} [c_j(x_1, x_2)]^{β_j} [s_j(x_1, x_2)]^{γ_j}

where l(x_1, x_2) = (2μ_1μ_2 + C_1)/(μ_1² + μ_2² + C_1), c(x_1, x_2) = (2σ_1σ_2 + C_2)/(σ_1² + σ_2² + C_2), s(x_1, x_2) = (σ_{12} + C_3)/(σ_1σ_2 + C_3), and j denotes a scale number. l_M is computed only once at the maximum scale M, while c_j and s_j are computed at all scales. μ_i and σ_i are the mean and standard deviation of x_i, and σ_{12} is the covariance of x_1 and x_2. α_M, β_j, and γ_j are hyperparameters, and C_1, C_2, and C_3 are small constants computed from the dynamic range of the pixel values and scalar constants. MS-SSIM ranges between 0 (high diversity) and 1 (low diversity), and MS-SSIM(x, x) = 1.

To evaluate the diversity of a dataset, we calculate the mean MS-SSIM over all pairs of samples in the dataset X:

(18) mMS-SSIM(X) = (2 / (N(N − 1))) Σ_{i<j} MS-SSIM(x_i, x_j)

where N denotes the size of X. We consider the mean MS-SSIM to indicate the (inverse) diversity of the dataset.
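The pairwise averaging in Eq. (18) can be sketched as follows. As a simplification, a single-scale SSIM on global image statistics stands in for the full multi-scale version of Eq. (17) (the `ssim_global` function and its constants are our own toy stand-ins):

```python
import numpy as np
from itertools import combinations

def ssim_global(x1, x2, c1=1e-4, c2=9e-4):
    """Single-scale SSIM on whole-image statistics -- a simplified
    stand-in for MS-SSIM in Eq. (17)."""
    mu1, mu2 = x1.mean(), x2.mean()
    s1, s2 = x1.std(), x2.std()
    s12 = ((x1 - mu1) * (x2 - mu2)).mean()       # covariance
    lum = (2 * mu1 * mu2 + c1) / (mu1 ** 2 + mu2 ** 2 + c1)
    cs = (2 * s12 + c2) / (s1 ** 2 + s2 ** 2 + c2)
    return lum * cs

def mean_pairwise_similarity(images, sim=ssim_global):
    """Eq. (18): average similarity over all unordered sample pairs."""
    pairs = list(combinations(images, 2))
    return sum(sim(a, b) for a, b in pairs) / len(pairs)

rng = np.random.default_rng(0)
diverse = [rng.random((8, 8)) for _ in range(5)]
uniform = [np.full((8, 8), 0.5) + rng.random((8, 8)) * 1e-3
           for _ in range(5)]
# Near-identical images score close to 1; diverse images score lower.
assert mean_pairwise_similarity(uniform) > mean_pairwise_similarity(diverse)
```

This directly mirrors the interpretation in the text: a lower mean over pairs signals a more diverse dataset.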

Outer Dataset Metric

By combining FID and mean MS-SSIM, we compute the outer dataset metric S for a target dataset D_t and an outer dataset D_o as follows:

(19) S(D_o; D_t) = FID(D_t, D_o) · mMS-SSIM(X_o)

A lower S indicates a more proper outer dataset. We aim to select an outer dataset with both high relevance to the target dataset and high diversity within its samples. This metric helps to pick such outer datasets through the multiplication of FID and MS-SSIM, which represent the relevance and diversity, respectively. The role of MS-SSIM (diversity), which lies in [0, 1], is to weight FID (relevance), which lies in [0, ∞). In the Experimental Results section, we show that FID and MS-SSIM complementarily contribute to choosing an appropriate outer dataset in practice.
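Putting Eqs. (5) and (19) together, selection reduces to an argmin over candidates. A minimal sketch with hypothetical precomputed scores:

```python
def select_outer_dataset(candidates, fid_to_target, mean_msssim):
    """Eq. (5)/(19): pick the candidate minimizing FID x mean MS-SSIM."""
    return min(candidates,
               key=lambda d: fid_to_target[d] * mean_msssim[d])

# Hypothetical scores for three candidate outer datasets:
fids = {"A": 80.0, "B": 60.0, "C": 100.0}   # relevance (lower = closer)
mss  = {"A": 0.20, "B": 0.50, "C": 0.10}    # diversity (lower = more diverse)
# Combined S: A = 16, B = 30, C = 10 -> C wins despite the worst FID,
# because its high diversity down-weights the distance.
assert select_outer_dataset(fids.keys(), fids, mss) == "C"
```

The example shows the complementarity claimed in the text: neither FID nor MS-SSIM alone picks C, but their product does.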

Filtering by Modified DRS

In general, after training a GAN, we obtain generated samples using only the generator. This is because we implicitly assume that a successfully trained generator can always generate samples that fool the discriminator with probability 1/2 [9]. However, since this assumption does not hold in the real world, the generator can produce broken samples that are easily detected as fake by the discriminator. For data augmentation, we must avoid such broken samples.

In order to filter out broken samples, we apply discriminator rejection sampling (DRS; Azadi et al.) in Domain Fusion. DRS is a rejection sampling method proposed for GANs that computes an acceptance probability for each sample using the density ratio derived from the discriminator. Since DRS cuts off broken samples according to the acceptance probability, sampling with DRS produces higher-quality samples than sampling with a generator alone.

Since the original DRS paper only presents an algorithm for unconditional sampling, we cannot directly apply it to Domain Fusion, which requires conditional sampling for data augmentation. Therefore, we modify the DRS algorithm for conditional sampling. The modification is to compute the density ratio for each class label. In the original DRS, a single density ratio is estimated for the GAN without considering classes. This may cause a loss of diversity in the samples of a specific class, because the sampling difficulty varies per class [4]. By estimating class-wise density ratios, we coordinate the acceptance probability for each class. With this modification, we can obtain class-conditional generated samples with high fidelity and variety. (Our modified algorithm is shown in the supplemental materials.)
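The class-wise idea can be sketched schematically as follows. This is a simplification, not the full DRS algorithm (which also applies a correction term and a tunable threshold γ); the key point shown is that the density-ratio normalizer is estimated per class rather than once globally, and the logits are hypothetical:

```python
import numpy as np

def classwise_drs(logits_by_class, rng):
    """Accept each sample with probability exp(logit - max_logit),
    where the normalizing max is taken PER CLASS. `logits_by_class[c]`
    holds discriminator logits for generated samples of class c."""
    accepted = {}
    for c, logits in logits_by_class.items():
        ratio = np.exp(logits - logits.max())   # r(x)/M_c in [0, 1]
        accept = rng.random(len(logits)) < ratio
        accepted[c] = np.flatnonzero(accept)
    return accepted

rng = np.random.default_rng(0)
# Class 1's logits are globally low; a single global max would reject
# nearly all of its samples, while per-class normalization keeps its best.
batches = {0: np.array([2.0, 2.0, -8.0]),
           1: np.array([0.0, -9.0, -9.0])}
kept = classwise_drs(batches, rng)
assert 0 in kept[0] and 0 in kept[1]  # each class's best sample survives
```

A real implementation would obtain the logits from the extra discriminator head described in the experiments and resample until enough accepted examples per class are collected.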

Dataset Classes Size
Oxford 102 Flowers [23] 102 8,189
Stanford Cars [15] 196 16,185
Food-101 [3] 101 101,000
Describable Textures (DTD) [6] 47 5,640
LFW [12] 1 13,000
SVHN [22] 10 99,289
Pascal-VOC 2012 Cls. [7] 20 5,717
Table 1: List of outer datasets. Each dataset size is the total of the train and test sizes, except for Pascal-VOC.
CIFAR-100 | FGVC-Aircraft | Indoor Scene Recognition
Top-1 Acc. Top-5 Acc. FID IS | Top-1 Acc. Top-5 Acc. FID IS | Top-1 Acc. Top-5 Acc. FID IS
Without DA 27.2±0.1 54.3±0.5 – – | 22.6±2.3 48.4±3.4 – – | 24.0±2.0 52.0±0.7 – –
CGAN 26.5±0.4 53.6±0.3 59.2±0.9 4.99±0.07 | 23.6±0.6 50.9±0.7 110.7±2.8 3.41±0.05 | 25.7±0.7 52.6±0.7 97.9±0.1 3.48±0.02
TGAN 25.9±0.5 52.1±0.6 60.9±5.9 5.20±0.20 | 24.1±0.4 51.0±0.7 109.0±3.6 3.45±0.03 | 24.0±0.2 51.0±1.4 104.1±5.3 3.49±0.07
DF (ours) 28.9±0.5 56.2±0.4 53.5±1.7 5.32±0.03 | 27.3±0.9 55.4±0.4 97.9±1.6 3.53±0.15 | 26.1±0.7 53.8±0.9 96.5±4.0 3.61±0.08
TGAN-Best 28.2±0.5 55.7±0.2 54.5±5.2 5.16±0.03 | 26.2±0.3 52.9±0.3 109.5±2.0 3.47±0.03 | 25.7±1.0 54.9±1.2 97.8±5.8 3.50±0.03
TGAN-AVG 26.7±1.4 53.6±1.9 60.5±3.5 4.98±0.22 | 23.8±3.4 49.8±4.6 113.4±8.5 3.42±0.04 | 23.4±1.5 51.0±2.3 111.8±12.8 3.38±0.18
DF-AVG 28.1±0.9 55.1±1.5 56.3±2.5 5.24±0.24 | 25.2±1.5 52.3±1.8 105.8±15.2 3.47±0.06 | 24.2±1.2 52.4±1.8 106.5±13.9 3.46±0.25
Table 2: Performance comparison among data augmentation methods using GANs (top-1 and top-5 classification accuracy (%), FID, and IS). TGAN and DF use Pascal-VOC as the outer dataset, which marks the best score on our metric for all targets. TGAN-Best denotes the best case of the TGAN approach when using the outer dataset that achieves the best accuracy. AVG rows report average scores over the 7 outer datasets. Note that when we used the 100% volume of CIFAR-100, FGVC-Aircraft, and ISR (without generated images or any other data augmentation), the classifiers achieved 61.71%, 30.25%, and 27.27% test accuracy, respectively.

Experimental Results

In this section, we show the evaluation of Domain Fusion (DF) on the image classification task using three datasets: CIFAR-100, FGVC-Aircraft, and Indoor Scene Recognition. We compare our proposed DF with the conditional GAN (CGAN) and Transferring GAN (TGAN).

Settings

Target Datasets

The target task was image classification on CIFAR-100 [16], FGVC-Aircraft [19], and Indoor Scene Recognition (ISR) [25]. We used CIFAR-100 instead of CIFAR-10 because CIFAR-100 contributes to a more realistic evaluation with a larger number of labels and fewer samples per class. These three datasets are characterized by samples with different features: CIFAR-100 is composed of classes with various modes (vegetables, cars, furniture, etc.); FGVC-Aircraft includes only one mode (airplanes) and has fine-grained classes that differ only slightly from each other; and ISR is also built from one mode (indoor scenes) but contains more diverse and coarser-grained information than FGVC-Aircraft. To evaluate performance in a low-volume data setting, we reduced each training set of CIFAR-100 (50,000 images), FGVC-Aircraft (6,667 images), and ISR (5,360 images) to 5,000 images, randomly sampled per class. Note that although the reductions for FGVC-Aircraft and ISR are relatively smaller than that for CIFAR-100, these datasets originally have a small absolute volume per class; training accurate models on them is difficult even with the full datasets. We trained conditional GANs and then trained the classification model using the generated samples as an additional dataset. At test time, we used the original test images (CIFAR-100: 10,000 images; FGVC-Aircraft: 3,333 images; ISR: 1,340 images) to accurately evaluate the trained models.

Outer Datasets

Table 1 describes the list of candidates for the outer dataset. These are image datasets from various domains that are often used in the evaluation of computer vision tasks. When training DF and TGAN, we used the train and test sets of these outer datasets, except for Pascal-VOC. We used only the train set of Pascal-VOC because Pascal-VOC is employed for the reverse-side evaluation, which flips the target and outer datasets (the reverse-side evaluation appears in the supplemental materials). For a fair evaluation of the outer datasets, we randomly sampled 5,000 images from each dataset and used them for training GANs, balancing the number of samples across classes. Since these datasets contain images of various resolutions, we resized all images to 32×32 by bilinear interpolation.

Implementation Details

GANs. We used ResNet-based SNGAN [20, 21] for 32×32-resolution images as the implementation of conditional GANs. The model architecture was the same as in [21]. We trained each GAN for 50k iterations with a batch size of 256 using Adam [13]. Following [11], we used separate learning rates for the generator and the discriminator, and linearly decayed both to 0. Moreover, to fairly evaluate the models for each outer dataset, we incorporated early stopping based on the Inception Score (IS) [27]. The early-stopping trigger was the IS estimated every 1,000 iterations over 12,800 generated samples; we stopped training when the count of consecutive IS drops reached 5. In multi-domain training, we used the same λ for all experiments. To enable filtering by DRS, we added extra sigmoid layers to the discriminator of the conditional SNGAN and trained these layers for 10,000 steps per class label. For TGAN, we trained the conditional GANs on an outer dataset for 50k iterations with early stopping, and then fine-tuned the pretrained GANs on a target dataset under the same settings.
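The IS-based early-stopping rule described above can be expressed as a small helper. A minimal sketch, assuming the rule is "stop after `patience` consecutive drops of the evaluated score" (the function name is ours):

```python
def should_stop(is_history, patience=5):
    """Return True once the score has dropped for `patience`
    consecutive evaluations; a non-drop resets the counter."""
    drops = 0
    for prev, cur in zip(is_history, is_history[1:]):
        drops = drops + 1 if cur < prev else 0
        if drops >= patience:
            return True
    return False

assert not should_stop([3.0, 3.2, 3.1, 3.3, 3.2])       # counter resets
assert should_stop([4.0, 3.9, 3.8, 3.7, 3.6, 3.5])      # 5 straight drops
```

In training, `is_history` would be appended to every 1,000 iterations from the 12,800-sample IS estimate.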

Classifiers. The target classifier was a ResNet-18 for 224×224 inputs [10], trained with the Adam optimizer for 100 epochs with a batch size of 512. We selected the batch size by grid search over {128, 256, 512, 1024} on all three target datasets to maximize the average accuracy across the datasets. We applied no conventional data augmentation (e.g., flip, rotation) to the input images unless otherwise noted. We used 50,000 samples (4,000 real images + 46,000 generated images) as the training set and 1,000 real images as the validation set. In all cases, we measured the mean accuracy on each target dataset's test set.

Evaluation Metrics

We evaluated DF on two aspects: the performance of target classification models and the quality of generated samples in the target domain. For the classifiers, we assessed performance by top-1 and top-5 accuracy. Sample quality was measured by Fréchet Inception Distance (FID) [11] and Inception Score (IS) [27]. For each target dataset, we computed FID and IS with 128 generated samples per class. FID was calculated between the generated samples and the real images in the 100%-volume train set. In all experiments, we trained GANs and classifiers three times and report the mean and standard deviation of accuracy, FID, and IS.

CIFAR-100 | FGVC-Aircraft | Indoor Scene Recognition
Top-1 Acc. Top-5 Acc. FID IS | Top-1 Acc. Top-5 Acc. FID IS | Top-1 Acc. Top-5 Acc. FID IS
CGAN with DRS 27.3±0.3 54.5±1.3 58.7±0.8 5.05±0.01 | 24.6±0.8 52.4±0.9 110.0±3.5 3.42±0.09 | 24.8±0.9 52.8±0.5 99.9±6.8 3.42±0.06
TGAN with DRS 26.6±1.5 53.5±1.2 59.9±5.9 5.22±0.03 | 24.4±1.2 52.2±0.6 107.4±3.0 3.49±0.05 | 24.9±0.8 53.2±1.4 103.9±2.5 3.44±0.09
DF w/o metric & DRS (Worst) 25.5±0.3 52.4±0.2 60.9±0.1 4.75±0.13 | 24.2±0.3 50.9±1.8 105.2±5.8 3.35±0.01 | 24.2±0.3 50.9±1.8 105.2±5.8 3.35±0.01
DF w/o DRS 28.3±0.7 55.7±0.5 54.9±2.4 5.16±0.04 | 27.0±0.5 54.0±0.3 98.4±2.6 3.50±0.05 | 25.4±0.1 53.4±1.7 99.0±1.3 3.57±0.06
DF 28.9±0.5 56.2±0.4 53.5±1.7 5.32±0.03 | 27.3±0.9 55.4±0.4 97.9±1.6 3.53±0.15 | 26.1±0.7 53.8±0.9 96.5±4.0 3.61±0.08
Table 3: Ablation study of Domain Fusion
CIFAR-100 | FGVC-Aircraft | Indoor Scene Recognition
Top-1 Acc. Top-5 Acc. | Top-1 Acc. Top-5 Acc. | Top-1 Acc. Top-5 Acc.
cDA 30.7±0.7 57.3±0.3 | 29.6±0.9 58.5±1.6 | 31.0±0.3 59.6±0.7
DF+cDA 32.1±0.7 59.2±0.4 | 31.2±0.7 60.2±1.0 | 32.4±1.7 61.6±1.1
Table 4: Performance comparison to conventional DA
Figure 1: Correlation between generated sample quality and top-1 accuracy on (a) CIFAR-100, (b) FGVC-Aircraft, and (c) Indoor Scene Recognition.
Figure 2: Comparison of metrics on (a) CIFAR-100, (b) FGVC-Aircraft, and (c) Indoor Scene Recognition.
Figure 3: Comparison of generated samples.

Evaluation of Classification Accuracy

Comparison to Other GAN-based Data Augmentations

First, we evaluated the efficacy of Domain Fusion (DF) in terms of the classification accuracy by comparing it to other GAN-based data augmentations. We compared the performance against two patterns of GAN-based data augmentation: generating target samples from (i) CGAN: conditional GANs trained on each target dataset only [34], and (ii) TGAN: conditional Transferring GANs pretrained on an outer dataset [31]. We also show the performance of classifiers trained on a target dataset without data augmentation (Without DA).

Table 2 lists the top-1 and top-5 accuracy on the classification task and summarizes the FID and IS of the samples generated by each GAN. For DF and TGAN, we report the accuracy with the outer dataset that has the best score on our metric (Pascal-VOC). Additionally, for TGAN, we show the best accuracy among the 7 outer datasets as TGAN-Best (CIFAR-100 and ISR: Food-101; FGVC-Aircraft: Stanford Cars). Our DF achieves the best classification accuracy among all patterns. As reported in [28], CGAN dropped the accuracy below Without DA in the case of CIFAR-100. In contrast, DF, which transfers outer knowledge to target models, outperforms Without DA. DF also generated target samples with better FID and IS than CGAN. These results suggest that the quality improvements of the generated samples contribute to the target accuracy. Compared to TGAN, DF enables more accurate classification and generates better samples. For all target datasets, we confirmed that the differences between DF and TGAN in top-1/top-5 accuracy, FID, and IS are statistically significant by paired t-tests at the 0.05 level. These differences may be caused by the different transfer strategies of DF and TGAN. Since TGAN transfers outer knowledge by fine-tuning, it suffers from forgetting the knowledge [8] in the pretrained GAN while retraining on the target dataset. Multi-domain training in DF appears to transfer the outer knowledge to the target samples more effectively than fine-tuning in TGAN, without forgetting.

In Domain Fusion, as shown in the Improvements Section, we apply the metric for outer dataset selection and DRS to improve the quality of generated samples and the performance of target classifiers. As an ablation study, we compare DF against DF without our metric and without DRS. Table 3 shows the results. Note that the row DF w/o metric and DRS denotes the worst case among the outer datasets with no filtering by DRS; the outer dataset was LFW for all target datasets. We see that applying our metric allows us to select an appropriate outer dataset for each target dataset, and that DRS boosts the performance of target classifiers and GANs. Furthermore, we tested CGAN with DRS and TGAN with DRS, but they underperformed DF in both accuracy and sample quality. This result indicates that DF improves classifiers and GANs by importing outer dataset knowledge, rather than by only filtering generated samples with DRS.

Combining to Conventional Data Augmentation

We also investigated the classification performance when combining conventional DA (cDA) and DF. For training the classifiers, we adopted multiple DA transformations: random flip (along the x-axis), random expand (expansion ratio from 100% to 400%), and random rotation (angle from 0 to 15.0 degrees). These transformations were applied to images when they were loaded into a batch. In Table 4, we show the top-1 and top-5 classification accuracies with cDA alone and with the combination of cDA and DF. The outer dataset of DF is Pascal-VOC, which has the best score on our metric for all target datasets. For all target datasets, combining DF with cDA outperforms using cDA alone. These results indicate that DF generates useful samples that cannot be obtained from cDA.
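The three transformations can be sketched in a few lines of NumPy. This is our illustration of the stated parameters, not the authors' implementation; the rotation uses simple nearest-neighbor resampling for self-containedness.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_flip(img):
    # Flip along the x-axis with probability 0.5.
    return img[:, ::-1] if rng.random() < 0.5 else img

def random_expand(img, max_ratio=4.0):
    # Expand the canvas by a factor in [1, 4] (100%-400%) and paste the
    # image at a random offset, filling the rest with zeros.
    h, w = img.shape[:2]
    r = rng.uniform(1.0, max_ratio)
    H, W = int(h * r), int(w * r)
    canvas = np.zeros((H, W) + img.shape[2:], dtype=img.dtype)
    top = rng.integers(0, H - h + 1)
    left = rng.integers(0, W - w + 1)
    canvas[top:top + h, left:left + w] = img
    return canvas

def random_rotate(img, max_deg=15.0):
    # Nearest-neighbor rotation about the image center by 0-15 degrees.
    theta = np.deg2rad(rng.uniform(0.0, max_deg))
    h, w = img.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    # Inverse mapping: sample source coordinates for each output pixel.
    sy = np.cos(theta) * (ys - cy) + np.sin(theta) * (xs - cx) + cy
    sx = -np.sin(theta) * (ys - cy) + np.cos(theta) * (xs - cx) + cx
    sy = np.clip(np.rint(sy), 0, h - 1).astype(int)
    sx = np.clip(np.rint(sx), 0, w - 1).astype(int)
    return img[sy, sx]

def augment(img):
    # Applied per image at batch-loading time, as described above.
    return random_rotate(random_expand(random_flip(img)))
```

In practice, a library such as torchvision would supply equivalent transforms; the point here is only the parameter ranges used for cDA.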

Effects of Generated Sample Quality

The results in Table 2 imply that there is a meaningful relation between the target accuracy and the quality of the generated samples. We analyzed this relation by testing DF on the 7 outer datasets. Figure 1 shows the relation between the quality (FID and IS) of samples generated by DF with each outer dataset (x-axis) and the test accuracy on a target dataset (y-axis). The dashed line in each panel represents a linear regression, and each panel reports the correlation coefficient. These plots indicate that the target accuracy depends on the quality of generated samples: DF produces strong or moderate correlations between the test accuracy and both FID and IS. Further, the visualization results in Figure 3 show that the samples from DF express clearer class features than those from CGAN and TGAN. We therefore conclude that DF improves the target performance because its GANs generate target samples of high quality.
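The per-panel analysis boils down to a Pearson correlation between a quality metric and test accuracy. A minimal sketch with illustrative numbers (not the paper's measurements):

```python
import numpy as np

# Illustrative quality/accuracy pairs for five hypothetical outer datasets;
# lower FID (better sample quality) is paired with higher test accuracy.
fid = np.array([50.8, 63.2, 75.0, 81.2, 120.1])
top1_acc = np.array([0.33, 0.32, 0.31, 0.30, 0.28])

# Pearson correlation coefficient, as reported in each panel of Figure 1.
r = np.corrcoef(fid, top1_acc)[0, 1]
```

A strongly negative `r` for FID (and a positive one for IS) is the pattern the figure describes.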

Evaluation of Metric

We turn to evaluating our metric for selecting an outer dataset. We computed the metric using 5,000 sampled images from each outer dataset and from the target datasets. Figure 2 (left column) shows the relation between our metric and the top-1 accuracy achieved by DF for each outer dataset. The calculation yields a ranking of preferable outer datasets for each target dataset. In this experiment, the ranking for CIFAR-100 is Pascal-VOC (1.5), Food-101 (2.6), DTD (4.1), Stanford Cars (5.1), Flowers (6.0), SVHN (10.5), LFW (14.2). For FGVC-Aircraft, the order is Pascal-VOC (3.5), Stanford Cars (5.6), Food-101 (5.9), DTD (7.7), Flowers (10.5), SVHN (17.5), LFW (20.7). For ISR, the order is Pascal-VOC (1.8), Food-101 (3.7), Stanford Cars (5.6), DTD (6.2), Flowers (9.1), SVHN (14.8), LFW (16.8). By our metric, Pascal-VOC is predicted as the best outer dataset for all target datasets. Since Pascal-VOC is a general image dataset composed of various classes (e.g., Aeroplane, Dogs, and Bottles), its samples are highly diverse (MS-SSIM of 0.029). Moreover, the relevance between each target dataset and Pascal-VOC is also relatively high because Pascal-VOC partially shares classes with the target datasets (CIFAR-100: FID of 50.79, FGVC-Aircraft: FID of 120.05, ISR: FID of 63.2). From these observations, general datasets such as Pascal-VOC tend to be selected by our metric and to contribute to target models successfully.
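As a sanity check, the reported scores are numerically consistent with reading the metric as the product of relevance (FID from the target to the outer dataset) and diversity (MS-SSIM of the outer dataset). This closed form is our inference from the numbers above, not a formula quoted in this section:

```python
def outer_score(fid_to_target, ms_ssim_outer):
    # Inferred form: lower is better; small FID (high relevance) and small
    # MS-SSIM (high diversity) both shrink the score.
    return fid_to_target * ms_ssim_outer

# Pascal-VOC (MS-SSIM 0.029) against the three target datasets, compared
# with the ranking scores reported above.
checks = [(50.79, 1.5),   # CIFAR-100
          (120.05, 3.5),  # FGVC-Aircraft
          (63.2, 1.8)]    # ISR
for fid, reported in checks:
    assert abs(outer_score(fid, 0.029) - reported) < 0.1
```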

A lower score of our metric tends to predict higher top-1 accuracy on the classification task for all three target datasets. We also compare it to other metrics: FID between the target and each outer dataset, and MS-SSIM of the samples of each outer dataset (the center and right columns of Figure 2, respectively). Although FID and MS-SSIM both correlate with the top-1 accuracy, our metric has an equal or stronger correlation. In particular, for FGVC-Aircraft and ISR, our metric succeeds in predicting better outer datasets by combining FID and MS-SSIM complementarily.

Conclusion

This paper presented Domain Fusion (DF), a generative data augmentation technique based on multi-domain learning GANs. To improve accuracy on a target task with a low-volume target dataset, DF exploits outer knowledge via samples from GANs trained on the target and outer datasets simultaneously. We also proposed a metric for selecting the outer dataset that combines two perspectives: relevance and diversity. In experiments on classification tasks with 3 target and 7 outer datasets, we found that DF improves both the target performance and the quality of generated samples.

Appendix A Appendix

Require: Generator G, discriminator D, and target label set Y
Ensure: Filtered class-conditional samples from G
 1: D ← KeepTraining(D)
 2: for y in Y do
 3:     M_y ← BurnIn(G, D, y)
 4:     samples ← ∅
 5:     while |samples| < N do
 6:         x ← GetSample(G, y)
 7:         ratio ← e^{D̃(x, y)}
 8:         M_y ← max(ratio, M_y)
 9:         acc_prob ← sigmoid(F̂(x, y))   (F̂ is the DRS acceptance logit of [2])
10:         p ← RandomUniform(0, 1)
11:         if p ≤ acc_prob then
12:             samples.append(x)
13:         end if
14:     end while
15: end for
Algorithm 1: Modified Discriminator Rejection Sampling for Conditional GANs

Modified Conditional DRS Algorithm

In this section, we describe the details of the modified discriminator rejection sampling (DRS) algorithm for Domain Fusion. The original DRS paper presents only an algorithm for unconditional sampling, which we cannot apply directly because Domain Fusion requires conditional sampling for data augmentation. We therefore modify the DRS algorithm for conditional sampling.

Algorithm 1 shows our modified DRS algorithm for conditional GANs. The main modification is to compute the initial maximum density ratio for each class label in the BurnIn function (line 3 of Algorithm 1); BurnIn searches for the maximum density ratio over a constant number of iterations. This is because the sampling difficulty differs for each class [4]: if we set the maximum density ratio across all classes as the initial value, the samples of a specific class may lose diversity. Additionally, the functions KeepTraining, which continues training the discriminator with early stopping, and GetSample, which generates samples from the generator, are modified from the original DRS algorithm [2] to be class-wise. The other parts of our algorithm work the same as the original DRS.
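A runnable sketch of the class-wise scheme (ours, not the authors' code; the acceptance probability follows the original DRS formulation of Azadi et al. [2], and the generator/discriminator stand-ins are toy assumptions):

```python
import math
import random

random.seed(0)

def conditional_drs(sample_fn, logit_fn, labels, n_per_class,
                    burn_in=1000, eps=1e-6, gamma=0.0):
    """Class-wise discriminator rejection sampling.

    sample_fn(y) draws one sample x ~ G(.|y); logit_fn(x, y) returns the
    discriminator logit D~(x, y), so the density ratio is e^{D~(x, y)}.
    The per-class burn-in estimates the maximum density ratio, mirroring
    the class-wise modification described above.
    """
    accepted = {}
    for y in labels:
        # BurnIn: per-class maximum logit (log of the max density ratio).
        log_m = max(logit_fn(sample_fn(y), y) for _ in range(burn_in))
        samples = []
        while len(samples) < n_per_class:
            x = sample_fn(y)
            logit = logit_fn(x, y)
            log_m = max(log_m, logit)  # keep tracking the maximum (line 8)
            # Acceptance logit F-hat from the DRS paper (Azadi et al. 2019).
            f_hat = (logit - log_m
                     - math.log(1 - math.exp(logit - log_m - eps)) - gamma)
            if random.random() <= 1.0 / (1.0 + math.exp(-f_hat)):
                samples.append(x)
        accepted[y] = samples
    return accepted
```

For a toy check, a generator drawing Gaussians with `logit_fn = lambda x, y: -abs(x)` yields five accepted samples per class within a few hundred draws.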

|               | Top-1 Acc. | Top-5 Acc. | FID      | IS        | Metric |
|---------------|------------|------------|----------|-----------|--------|
| Without DA    | 30.5±0.8   | 69.7±0.8   | –        | –         | –      |
| CGAN          | 31.0±0.9   | 71.6±1.3   | 75.0±2.2 | 3.87±0.14 | –      |
| DF_rev (C→P)  | 32.6±1.8   | 72.6±2.3   | 71.0±0.5 | 4.05±0.08 | 1.8    |
| DF_rev (F→P)  | 29.8±1.5   | 70.0±0.8   | 81.2±9.5 | 3.63±0.04 | 12.6   |
| DF_rev (I→P)  | 30.9±0.3   | 72.0±0.2   | 72.5±3.6 | 3.90±0.13 | 2.0    |

Table 1: Effects of learning by reverse-side Domain Fusion (DF_rev) from CIFAR-100 (C), FGVC-Aircraft (F), or ISR (I) to Pascal-VOC (P). The last column denotes scores of our proposed metric for the outer datasets (C, F, and I).

Reverse-side Evaluation

Multi-domain training in DF can be applied bidirectionally; that is, the target and outer datasets are reversible, since nothing in the model specializes it to fit only the target distribution. In this section, we run DF in the reverse direction to examine the positive or negative effects a target dataset has on models for the outer dataset. We call models with the target and outer datasets flipped reverse-side DF (DF_rev) models. We tested reverse-side DF using Pascal-VOC as the target, because Pascal-VOC is the best outer dataset for all target datasets in the experiments of the main paper. Conversely, we used CIFAR-100, FGVC-Aircraft, and ISR as outer datasets. All settings for training GANs and classifiers were carried over from the Experimental Results Section of the main paper. As with normal DF, the reverse-side DF models apply DRS when generating target samples. We tested the trained target models on the validation set of Pascal-VOC (5,823 images).

Table 1 summarizes the results of reverse-side Domain Fusion. The models from CIFAR-100 to Pascal-VOC (C→P) and from ISR to Pascal-VOC (I→P) outperform Without DA and CGAN in both classification accuracy and generated sample quality, as in the normal direction. Nevertheless, the samples generated by the model from FGVC-Aircraft to Pascal-VOC (F→P) are inferior in quality and cause negative transfer in the target classifier. This is because FGVC-Aircraft is a worse outer dataset for Pascal-VOC (metric score of 12.6) than CIFAR-100 (1.8) or ISR (2.0), since FGVC-Aircraft has a large MS-SSIM, meaning low diversity. In fact, our metric is asymmetric because MS-SSIM depends only on the outer dataset. Therefore, reverse-side DF can have negative effects even when normal-side DF brings substantial improvements. Thus, we should evaluate whether a combination of target and outer datasets is appropriate by calculating our metric for each target, rather than inferring it from the opposite direction or from other combinations of datasets.

Forgetting Knowledge on TGAN

In the main paper, we argued that TGAN can be inferior to DF in target performance and sample quality because GANs forget outer knowledge during fine-tuning. In this section, we investigate the sample quality of TGANs in the outer domain to show that TGANs forget the outer knowledge through the fine-tuning process. We tested outer-domain samples generated by TGANs fine-tuned on a target dataset in the following steps: (i) we trained CGANs on an outer dataset (for 50,000 iterations), (ii) fine-tuned the CGANs on a target dataset (for 50,000 iterations), and (iii) re-fine-tuned the CGANs on the outer dataset (for 1,000 iterations). Since steps (i) and (ii) are shared with the experiments in the main paper, we reused the trained models. We used Pascal-VOC as the outer dataset and CIFAR-100, FGVC-Aircraft, and ISR as target datasets.

Table 2 shows the sample quality (FID and IS) of CGAN(P), which learns only Pascal-VOC, and of the re-fine-tuned TGANs. To compare Domain Fusion with TGAN, we also reprint the results of the reverse-side Domain Fusion models from Table 1. All TGAN models degrade their sampling performance for the outer domain relative to CGAN(P), which was trained on the outer dataset originally. This indicates that the TGAN models forget their knowledge about the outer domain, obtained in the first training step, through the fine-tuning processes. In contrast, the reverse-side DF models, except for the F→P case, improve the performance. As discussed in the previous section, the degradation of the F→P model is caused by the mismatch between the outer (FGVC-Aircraft) and target (Pascal-VOC) datasets. Therefore, TGANs can forget outer knowledge and degrade performance through fine-tuning, whereas our Domain Fusion models keep that knowledge and improve performance through multi-domain training of GANs, as long as an appropriate outer dataset is selected for the target.

|               | FID       | IS        |
|---------------|-----------|-----------|
| CGAN(P)       | 75.0±2.2  | 3.87±0.14 |
| TGAN(P→C→P)   | 89.3±13.9 | 3.73±0.51 |
| TGAN(P→F→P)   | 81.4±4.9  | 3.79±0.05 |
| TGAN(P→I→P)   | 87.1±10.9 | 3.62±0.34 |
| DF_rev (C→P)  | 71.0±0.5  | 4.05±0.08 |
| DF_rev (F→P)  | 81.2±9.5  | 3.63±0.04 |
| DF_rev (I→P)  | 72.5±3.6  | 3.90±0.13 |

Table 2: Forgetting knowledge on TGAN with respect to sample quality (FID and IS). Each TGAN result was measured on re-fine-tuned TGAN models from Pascal-VOC (P) to CIFAR-100 (C), FGVC-Aircraft (F), or ISR (I) and back to P.

Visualization Study

Figures 1–6 each show four panels: (a) real images, (b) CGAN samples, (c) TGAN samples, and (d) Domain Fusion samples.

Figure 1: Comparison of CIFAR-100 real images and samples generated by CGAN, TGAN, and Domain Fusion.
Figure 2: Comparison of FGVC-Aircraft real images and samples generated by CGAN, TGAN, and Domain Fusion.
Figure 3: Comparison of Indoor Scene Recognition real images and samples generated by CGAN, TGAN, and Domain Fusion.
Figure 4: Comparison of real samples of a specific CIFAR-100 class (apples) with images generated by CGAN, TGAN, and Domain Fusion.
Figure 5: Comparison of real samples of a specific FGVC-Aircraft class (B-200) with images generated by CGAN, TGAN, and Domain Fusion.
Figure 6: Comparison of real samples of a specific Indoor Scene Recognition class (Bookstore) with images generated by CGAN, TGAN, and Domain Fusion.

In this section, we present a qualitative evaluation of the samples generated by DF. Since DF improves target performance and FID/IS, we can expect it to produce better visual images. To evaluate the generated images, we compare real images from the target datasets with images generated by CGAN, TGAN, and DF. For generation, we reused the GANs trained in the Experimental Results Section of the main paper, i.e., with CIFAR-100, FGVC-Aircraft, or ISR as the target dataset and Pascal-VOC as the outer dataset. Figures 1, 2, and 3 show images of all classes randomly sampled from the real datasets, CGAN, TGAN, and DF. DF generates images with clearer and more varied shapes than CGAN and TGAN, although the visual differences are subtle.

To examine the characteristics of images from DF in more detail, we compare generated images for specific classes. Figures 4, 5, and 6 illustrate the images generated by CGAN, TGAN, and DF for the classes apples from CIFAR-100, B-200 from FGVC-Aircraft, and Bookstore from ISR, respectively. In Figure 4, DF produces more diverse apple images containing various patterns of pose, background, and gloss, whereas the images generated by CGAN and TGAN lean toward a few patterns. For FGVC-Aircraft, Figure 5 shows images of the B-200 class, which is characterized by its signature tail unit and propellers. In this case, CGAN and TGAN fail to represent those unique characteristics (e.g., tail units, propellers), so there is no clear difference from other airplanes. In contrast, DF learns the fine-grained features of B-200 with more fidelity, since the characteristic tail units and propellers appear in the generated images. Interestingly, in the case of ISR, the generated images contribute to boosting model accuracy even though the images from DF differ only slightly from those of CGAN and TGAN. The difference may be hard to see because ISR was originally constructed from images with higher resolutions than those used in our experiments. From these visualization studies, we infer that the diversity and fidelity of the images generated by DF help target models obtain higher accuracy.

Analysis of Classifier

| Rank | CGAN | TGAN (Pascal-VOC) | DF (Pascal-VOC) | DF (Stanford Cars) | DF (LFW) |
|------|------|-------------------|-----------------|--------------------|----------|
| 1 | Food Containers (1.18) | Food Containers (1.23) | Food Containers (1.25) | Food Containers (1.24) | Food Containers (1.13) |
| 2 | Household Furnitures (1.09) | Household Furnitures (1.22) | Household Furnitures (1.21) | Household Furnitures (1.23) | Household Furnitures (1.09) |
| 3 | Vehicles2 (1.09) | Vehicles2 (1.10) | People (1.20) | Vehicles1 (1.20) | People (1.09) |
| 4 | Household Electrical Devices (1.00) | Vehicles1 (1.09) | Vehicles2 (1.15) | Large Natural Outdoor Scenes (1.13) | Vehicles2 (1.07) |
| 5 | Large Natural Outdoor Scenes (0.98) | Large Natural Outdoor Scenes (1.08) | Vehicles1 (1.12) | Vehicles2 (1.11) | Large Natural Outdoor Scenes (1.05) |

Table 3: Top-5 ranking of CIFAR-100 superclass improvement rates by GAN-based data augmentation techniques.

As shown in the experiments of the main paper, DF can boost the overall accuracy of target classifiers by using knowledge from an outer dataset. In this section, we further investigate which target classes are most likely to benefit from DF, and how the choice of outer dataset affects the improvements. We used the CIFAR-100 classifiers trained in the experiments of the Evaluation of Classification Accuracy Section. For simplicity of the analysis, we tested the CIFAR-100 classifiers on the 20 superclasses [16], not on the original 100 classes. Table 3 shows the top-5 ranking of classification improvements on CIFAR-100 superclasses for each GAN-based data augmentation (CGAN, TGAN, and DF). Each ranking is sorted by rates calculated as follows:

    rate = Accuracy(With DA) / Accuracy(Without DA)    (20)

where a With DA model represents one of CGAN, TGAN, or DF. Accuracy for each superclass was computed by gathering prediction results for the subclasses, and then calculating top-1 accuracy for the superclass.
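This evaluation can be sketched as follows. The aggregation rule is our reading of the description above (a prediction counts as correct at the superclass level when the predicted fine class maps to the true superclass), and all names are illustrative:

```python
import numpy as np

def superclass_accuracy(y_true_fine, y_pred_fine, fine_to_super):
    """Top-1 accuracy per superclass, gathering the fine-class predictions.

    fine_to_super maps each fine-class index to its superclass index
    (for CIFAR-100, 100 fine classes map onto 20 superclasses [16]).
    """
    sup_true = fine_to_super[y_true_fine]
    sup_pred = fine_to_super[y_pred_fine]
    return {int(s): float(np.mean(sup_pred[sup_true == s] == s))
            for s in np.unique(sup_true)}

def improvement_rate(acc_with_da, acc_without_da):
    # Eq. (20): ratio of a superclass accuracy with and without GAN-based DA.
    return acc_with_da / acc_without_da
```

Applying `improvement_rate` per superclass and sorting descending reproduces the ranking shown in Table 3.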

In Table 3, surprisingly, all GAN-based augmentations share the top two superclasses: Food Containers and Household Furnitures. This result implies that these classes are easily learned by any GAN-based approach, perhaps because Food Containers and Household Furnitures tend to be composed of primitive features (e.g., round shapes, lines). Since these primitive features can be obtained from other classes, GANs can create relatively high-quality samples for the two classes. Comparing DF (Pascal-VOC) with TGAN (Pascal-VOC), DF prominently raises the performance on the People class, which is related to the Person class contained in Pascal-VOC. In contrast, for TGAN, the improvement on the People class is out of the ranking: its improvement rate was 1.07, lower than that of DF (1.20). This indicates that DF can effectively transfer knowledge related to the target task from the outer dataset. Calculating FID/IS for People-class images, the average scores of DF (FID of 125.6 and IS of 3.85) are superior to those of TGAN (FID of 135.5 and IS of 3.29). This suggests that DF preserves the knowledge of the outer dataset through multi-domain learning, whereas TGAN may forget it during fine-tuning.

Lastly, we evaluate how the choice of outer dataset affects the classification performance per superclass. We chose Pascal-VOC, Stanford Cars, and LFW as the outer datasets because they have classes similar to superclasses of the target. For example, Pascal-VOC has a Person class related to the People superclass of CIFAR-100 and several classes related to the Vehicles1 and Vehicles2 superclasses (e.g., Aeroplane, Bus, Car, Train). In Table 3, each outer dataset produces a different ranking of improvement rates except for ranks 1 and 2. These differences follow from the classes of the outer datasets, since the superclasses that appear in each ranking are related to classes contained in the corresponding outer dataset. From these results, we conclude that DF enhances target classification performance because it transfers the knowledge contained in the outer dataset through the generated samples.

References

  • [1] A. Antoniou, A. Storkey, and H. Edwards (2018) Data augmentation generative adversarial networks.
  • [2] S. Azadi, C. Olsson, T. Darrell, I. Goodfellow, and A. Odena (2019) Discriminator rejection sampling. In International Conference on Learning Representations.
  • [3] L. Bossard, M. Guillaumin, and L. Van Gool (2014) Food-101 – mining discriminative components with random forests. In European Conference on Computer Vision.
  • [4] A. Brock, J. Donahue, and K. Simonyan (2019) Large scale GAN training for high fidelity natural image synthesis. In International Conference on Learning Representations.
  • [5] F. Calimeri, A. Marzullo, C. Stamile, and G. Terracina (2017) Biomedical data augmentation using generative adversarial neural networks. In International Conference on Artificial Neural Networks.
  • [6] M. Cimpoi, S. Maji, I. Kokkinos, S. Mohamed, and A. Vedaldi (2014) Describing textures in the wild. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
  • [7] M. Everingham, S. M. A. Eslami, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman (2015) The pascal visual object classes challenge: a retrospective. International Journal of Computer Vision 111.
  • [8] I. J. Goodfellow, M. Mirza, D. Xiao, A. Courville, and Y. Bengio (2013) An empirical investigation of catastrophic forgetting in gradient-based neural networks. arXiv preprint arXiv:1312.6211.
  • [9] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio (2014) Generative adversarial nets. In Advances in Neural Information Processing Systems 27.
  • [10] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
  • [11] M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter (2017) GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In Advances in Neural Information Processing Systems.
  • [12] G. B. Huang, M. Ramesh, T. Berg, and E. Learned-Miller (2007) Labeled faces in the wild: a database for studying face recognition in unconstrained environments. Technical report.
  • [13] D. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. International Conference on Learning Representations.
  • [14] T. Ko, V. Peddinti, D. Povey, and S. Khudanpur (2015) Audio augmentation for speech recognition. In 16th Annual Conference of the International Speech Communication Association.
  • [15] J. Krause, M. Stark, J. Deng, and L. Fei-Fei (2013) 3D object representations for fine-grained categorization. In 4th International IEEE Workshop on 3D Representation and Recognition, Sydney, Australia.
  • [16] A. Krizhevsky and G. Hinton (2009) Learning multiple layers of features from tiny images. Technical report.
  • [17] A. Krizhevsky, I. Sutskever, and G. E. Hinton (2012) ImageNet classification with deep convolutional neural networks. In Proceedings of the 25th International Conference on Neural Information Processing Systems.
  • [18] M. Lučić, M. Tschannen, M. Ritter, X. Zhai, O. Bachem, and S. Gelly (2019) High-fidelity image generation with fewer labels. In Proceedings of the 36th International Conference on Machine Learning.
  • [19] S. Maji, J. Kannala, E. Rahtu, M. Blaschko, and A. Vedaldi (2013) Fine-grained visual classification of aircraft. Technical report, arXiv:1306.5151.
  • [20] T. Miyato, T. Kataoka, M. Koyama, and Y. Yoshida (2018) Spectral normalization for generative adversarial networks. International Conference on Learning Representations.
  • [21] T. Miyato and M. Koyama (2018) cGANs with projection discriminator. International Conference on Learning Representations.
  • [22] Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Y. Ng (2011) Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning.
  • [23] M-E. Nilsback and A. Zisserman (2008) Automated flower classification over a large number of classes. In Proceedings of the Indian Conference on Computer Vision, Graphics and Image Processing.
  • [24] A. Odena, C. Olah, and J. Shlens (2017) Conditional image synthesis with auxiliary classifier GANs. Proceedings of the 34th International Conference on Machine Learning.
  • [25] A. Quattoni and A. Torralba (2009) Recognizing indoor scenes. In 2009 IEEE Conference on Computer Vision and Pattern Recognition.
  • [26] E. Real, A. Aggarwal, Y. Huang, and Q. V. Le (2019) Regularized evolution for image classifier architecture search. The Thirty-Third AAAI Conference on Artificial Intelligence.
  • [27] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen (2016) Improved techniques for training GANs. In Advances in Neural Information Processing Systems 29.
  • [28] K. Shmelkov, C. Schmid, and K. Alahari (2018) How good is my GAN?. In Proceedings of the European Conference on Computer Vision.
  • [29] T. Tran, T. Pham, G. Carneiro, L. Palmer, and I. Reid (2017) A Bayesian data augmentation approach for learning deep models. In Advances in Neural Information Processing Systems 30.
  • [30] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin (2017) Attention is all you need. In Advances in Neural Information Processing Systems 30.
  • [31] Y. Wang, C. Wu, L. Herranz, J. van de Weijer, A. Gonzalez-Garcia, and B. Raducanu (2018) Transferring GANs: generating images from limited data.
  • [32] A. Zeyer, K. Irie, R. Schlüter, and H. Ney (2018) Improved training of end-to-end attention models for speech recognition. In 19th Annual Conference of the International Speech Communication Association.
  • [33] Z. Zheng, L. Zheng, and Y. Yang (2017) Unlabeled samples generated by GAN improve the person re-identification baseline in vitro. In Proceedings of the IEEE International Conference on Computer Vision.
  • [34] Y. Zhu, M. Aoun, M. Krijn, and J. Vanschoren (2018) Data augmentation using conditional generative adversarial networks for leaf counting in Arabidopsis plants. In British Machine Vision Conference.