Semi-supervised Conditional GANs

08/19/2017 ∙ by Kumar Sricharan, et al. ∙ PARC

We introduce a new model for building conditional generative models in a semi-supervised setting by adapting the GAN framework to conditionally generate data given attributes. The proposed semi-supervised GAN (SS-GAN) model uses a pair of stacked discriminators to learn the marginal distribution of the data and the conditional distribution of the attributes given the data, respectively. In the semi-supervised setting, the marginal distribution (which is often harder to learn) is learned from both the labeled and unlabeled data, while the conditional distribution is learned purely from the labeled data. Our experimental results demonstrate that this model performs significantly better than existing semi-supervised conditional GAN models.


1 Introduction

Generative adversarial networks (GANs) goodfellow2014generative are a popular recent technique for learning generative models for high-dimensional unstructured data (typically images). GANs employ two networks: a generator G that is tasked with producing samples from the data distribution, and a discriminator D that aims to distinguish real samples from the samples produced by G. The two networks alternately try to best each other, ultimately resulting in the generator G converging to the true data distribution.
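To make the alternating game concrete, the following is a minimal, self-contained PyTorch sketch of one training step. The network sizes, optimizers, and flattened-image setup are illustrative assumptions, not the architecture used later in the paper:

```python
# Minimal sketch of the GAN alternating game. G maps noise to (flattened)
# images; D maps images to a real/fake logit. All sizes are illustrative.
import torch
import torch.nn as nn

z_dim, x_dim = 64, 784  # hypothetical noise and flattened-image dimensions

G = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(), nn.Linear(256, x_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(x_dim, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def gan_step(x_real):
    b = x_real.size(0)
    # Discriminator step: push D(x_real) toward "real" and D(G(z)) toward "fake".
    x_fake = G(torch.randn(b, z_dim)).detach()
    loss_d = bce(D(x_real), torch.ones(b, 1)) + bce(D(x_fake), torch.zeros(b, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator step: update G so that D labels its samples as real.
    loss_g = bce(D(G(torch.randn(b, z_dim))), torch.ones(b, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```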

While most of the research on GANs is focused on the unsupervised setting, where the data is comprised of unlabeled images, there has been research on conditional GANs gauthier2014conditional, where the goal is to learn a conditional model of the data, i.e., to build a model that can generate images given a particular attribute setting. In one approach gauthier2014conditional, both the generator and discriminator are fed attributes as side information so as to enable the generator to generate images conditioned on attributes. In an alternative approach proposed in odena2016conditional, the authors build auxiliary classifier GANs (AC-GANs), where the side information is instead reconstructed by the discriminator. Irrespective of the specific approach, this line of research focuses on the supervised setting, where it is assumed that all the images have attribute tags.

Given that labels are expensive, it is of interest to explore semi-supervised settings where only a small fraction of the images have attribute tags, while the majority are unlabeled. There has been some work on using GANs in the semi-supervised setting: salimans2017improved and springenberg2015unsupervised use GANs to perform semi-supervised classification by using a generator-discriminator pair to learn an unconditional model of the data and fine-tuning the discriminator using the small amount of labeled data for prediction. However, we are not aware of work on building conditional models in the semi-supervised setting (see Section 2.1 for details). The closest work we found was AC-GANs, which can be extended to the semi-supervised setting in a straightforward manner (as the authors briefly allude to in their paper).

In the proposed semi-supervised GAN (SS-GAN) approach, we take a different route. We supply the side attribute information to the discriminator, as is the case with supervised GANs, but we partition the discriminator's task of evaluating whether the joint samples of images and attributes are real or fake into two separate tasks: (i) evaluating whether the images are real or fake, and (ii) evaluating whether the attributes given an image are real or fake. We then use all the labeled and unlabeled data to assist the discriminator with the first task, and only the labeled images for the second task. The intuition behind this approach is that the marginal distribution of the images is much harder to model than the conditional distribution of the attributes given an image, and by separately evaluating the marginal and conditional samples, we can exploit the larger unlabeled pool to accurately estimate the marginal distribution.

Our main contributions in this work are as follows:

  1. We present the first extensive discussion of the semi-supervised conditional generation problem using GANs.

  2. Related to (1), we apply the AC-GAN approach to the semi-supervised setting and present experimental results.

  3. Finally, our main contribution is a new model called SS-GAN that effectively addresses the semi-supervised conditional generative modeling problem and outperforms existing approaches, including AC-GANs, for this problem.

(a) Unsupervised GAN
(b) Conditional GAN
(c) Auxiliary classifier GAN
(d) Semi-supervised stacked GAN
Figure 1: Illustration of 4 GAN models: (a) unsupervised GAN, (b) conditional GAN (for the supervised setting), (c) auxiliary classifier GAN (for the supervised and semi-supervised settings), and (d) the proposed semi-supervised GAN (SS-GAN) model. These models are elaborated on further in the sequel.

The rest of this paper is organized as follows: In Section 2, we describe existing work on GANs, including details about the unsupervised, supervised, and semi-supervised settings. Next, in Section 3, we describe the proposed SS-GAN model and contrast it against existing semi-supervised GAN solutions. We present experimental results in Section 4, and finally, we give our conclusions in Section 5.

2 Existing GANs

2.1 Framework

We assume that our data-set is comprised of $n$ images $X = \{x_1, \dots, x_n\}$, where the first $n_l$ images are accompanied by attributes $Y = \{y_1, \dots, y_{n_l}\}$. Each image $x_i$ is assumed to be of dimension $w \times h \times c$, where $c$ is the number of channels. The attribute tags are assumed to be discrete variables of dimension $d$; i.e., each attribute is $d$-dimensional, and each individual dimension of an attribute tag can belong to one of $k$ different classes. Observe that this accommodates class variables ($d = 1$) and binary attributes ($k = 2$). Finally, denote the joint distribution of images and attributes by $p(x, y)$, the marginal distribution of images by $p(x)$, and the conditional distribution of attributes given images by $p(y|x)$. Our goal is to learn a generative model that can sample from $p(x|y)$ for a given $y$ by exploiting information from both the labeled and unlabeled sets.
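As a concrete illustration of this attribute convention, the following sketch encodes a batch of $d$-dimensional attribute tags, each entry taking one of $k$ values, into flat one-hot conditioning vectors; this is one common encoding, and the one assumed in the code sketches that follow (the sizes are hypothetical):

```python
# Sketch of one way to encode the discrete attribute tags: each of the d
# entries takes one of k values; one-hot encode each entry and concatenate.
# With d = 1 this reduces to a class label, with k = 2 to binary attributes.
import torch
import torch.nn.functional as F

d, k = 3, 4                          # hypothetical: 3 attributes, 4 classes each
y = torch.tensor([[0, 2, 3],
                  [1, 1, 0]])        # a batch of 2 attribute tags, shape (2, d)
y_onehot = F.one_hot(y, num_classes=k).float().view(y.size(0), d * k)
print(y_onehot.shape)                # torch.Size([2, 12])
```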

2.2 Unsupervised GANs

Figure 2: Illustration of unsupervised GAN model.

In the unsupervised setting ($n_l = 0$), the goal is to learn a generative model $G_u(z; \theta_g)$ that samples from the marginal image distribution $p(x)$ by transforming vectors of noise $z$ as $x = G_u(z; \theta_g)$. In order for $G_u$ to learn this marginal distribution, a discriminator $D_u(x; \theta_d)$ is trained jointly goodfellow2014generative. The unsupervised loss functions for the generator and discriminator are as follows:

$$J_u(G) = -\mathbb{E}_{z \sim p_z(z)}\left[\log D_u(G_u(z))\right] \qquad (1)$$

and

$$J_u(D) = -\mathbb{E}_{x \sim p(x)}\left[\log D_u(x)\right] - \mathbb{E}_{z \sim p_z(z)}\left[\log\left(1 - D_u(G_u(z))\right)\right] \qquad (2)$$

The above equations are alternately optimized with respect to $\theta_g$ and $\theta_d$ respectively. The unsupervised GAN model is illustrated in Figure 2.
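Mapped to code, Eqs. (1) and (2) are standard binary cross-entropy losses with flipped targets. A hedged sketch, assuming $D_u$ outputs a logit (so the sigmoid gives the real probability) and $G_u$, $D_u$ are callables as in the sketch from Section 1:

```python
# Sketch of Eqs. (1)-(2). Targets: 1 = real, 0 = fake; D_u outputs logits.
import torch
import torch.nn.functional as F

def loss_g_unsup(D_u, G_u, z):
    # J_u(G) = -E_z[log D_u(G_u(z))]
    logit = D_u(G_u(z))
    return F.binary_cross_entropy_with_logits(logit, torch.ones_like(logit))

def loss_d_unsup(D_u, G_u, x_real, z):
    # J_u(D) = -E_x[log D_u(x)] - E_z[log(1 - D_u(G_u(z)))]
    lr = D_u(x_real)
    lf = D_u(G_u(z).detach())   # detach: the D step should not update G
    return (F.binary_cross_entropy_with_logits(lr, torch.ones_like(lr))
            + F.binary_cross_entropy_with_logits(lf, torch.zeros_like(lf)))
```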

2.3 Supervised GANs

In the supervised setting (i.e., $n_l = n$), the goal is to learn a generative model $G_s(z, y; \theta_g)$ that samples from the conditional image distribution $p(x|y)$ by transforming vectors of noise $z$ as $x = G_s(z, y; \theta_g)$. There are two proposed approaches for solving this problem:

2.3.1 Conditional GANs

In order for $G_s$ to learn this conditional distribution, a discriminator $D_s(x, y; \theta_d)$ is trained jointly. The goal of the discriminator is to distinguish whether the joint samples $(x, y)$ are samples from the data or from the generator. The supervised loss functions for the generator and discriminator for the conditional GAN (C-GAN) are as follows:

$$J_s(G) = -\mathbb{E}_{z \sim p_z(z),\, y \sim p(y)}\left[\log D_s(G_s(z, y), y)\right] \qquad (3)$$

and

$$J_s(D) = -\mathbb{E}_{(x, y) \sim p(x, y)}\left[\log D_s(x, y)\right] - \mathbb{E}_{z \sim p_z(z),\, y \sim p(y)}\left[\log\left(1 - D_s(G_s(z, y), y)\right)\right] \qquad (4)$$

The above equations are alternately optimized with respect to $\theta_g$ and $\theta_d$ respectively. The conditional GAN model is illustrated in Figure 3.

Figure 3: Illustration of Supervised Conditional GAN model.
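A sketch of how Eqs. (3) and (4) differ from the unsupervised losses: every discriminator call now receives the (image, attribute) pair and the generator receives $y$. Here D_s and G_s are hypothetical callables taking both arguments:

```python
# Sketch of Eqs. (3)-(4): identical in form to Eqs. (1)-(2), but every call
# to the discriminator scores the (x, y) pair rather than the image alone.
import torch
import torch.nn.functional as F

def loss_g_cgan(D_s, G_s, z, y):
    logit = D_s(G_s(z, y), y)               # Eq. (3)
    return F.binary_cross_entropy_with_logits(logit, torch.ones_like(logit))

def loss_d_cgan(D_s, G_s, x_real, y_real, z, y):
    lr = D_s(x_real, y_real)                # real joint samples
    lf = D_s(G_s(z, y).detach(), y)         # fake joint samples, Eq. (4)
    return (F.binary_cross_entropy_with_logits(lr, torch.ones_like(lr))
            + F.binary_cross_entropy_with_logits(lf, torch.zeros_like(lf)))
```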

2.3.2 Auxiliary-classifier GANs

An alternative approach odena2016conditional to supervised conditional generation is to supply only the images $x$ to the discriminator, and to ask the discriminator to additionally recover the true attribute information. In particular, the discriminator $D_a$ produces two outputs: (i) $D_a(\mathrm{rf} \mid x)$ and (ii) $D_a(y \mid x)$, where the first output is the probability of $x$ being real or fake, and the second output is the estimated conditional probability of $y$ given $x$. In addition to the unsupervised loss functions, the generator and discriminator are jointly trained to recover the true attributes for any given image $x$. In particular, define the attribute loss function as

$$J_a = -\mathbb{E}_{(x, y) \sim p(x, y)}\left[\log D_a(y \mid x)\right] - \mathbb{E}_{z \sim p_z(z),\, y \sim p(y)}\left[\log D_a(y \mid G_a(z, y))\right] \qquad (5)$$

The loss function for the discriminator is given by

$$J_{ac}(D) = J_u(D) + J_a \qquad (6)$$

and for the generator is given by

$$J_{ac}(G) = J_u(G) + J_a \qquad (7)$$
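For comparison, a sketch of Eqs. (5)-(7) for the class-label case ($d = 1$); the hypothetical discriminator D_a returns a (real/fake logit, class-logits) pair:

```python
# Sketch of the AC-GAN losses in Eqs. (5)-(7) for a single class attribute.
# D_a(x) -> (rf_logit, class_logits); y_real/y_gen hold integer class labels.
import torch
import torch.nn.functional as F
bce = F.binary_cross_entropy_with_logits

def loss_d_acgan(D_a, G_a, x_real, y_real, z, y_gen):
    rf_r, cls_r = D_a(x_real)
    rf_f, cls_f = D_a(G_a(z, y_gen).detach())
    adv = bce(rf_r, torch.ones_like(rf_r)) + bce(rf_f, torch.zeros_like(rf_f))
    attr = F.cross_entropy(cls_r, y_real) + F.cross_entropy(cls_f, y_gen)  # Eq. (5)
    return adv + attr                                                      # Eq. (6)

def loss_g_acgan(D_a, G_a, z, y_gen):
    rf_f, cls_f = D_a(G_a(z, y_gen))
    # Eq. (7): fool the real/fake head while keeping the attribute recoverable.
    return bce(rf_f, torch.ones_like(rf_f)) + F.cross_entropy(cls_f, y_gen)
```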

2.3.3 Comparison between C-GAN and AC-GAN

The key difference between C-GAN and AC-GAN is that instead of asking the discriminator to estimate the probability distribution of the attribute given the image, as is the case in AC-GAN, C-GAN instead supplies the discriminator with both $x$ and $y$, and asks it to estimate the probability that $(x, y)$ is consistent with the true joint distribution $p(x, y)$.

While both models are designed to learn a conditional generative model, we did not find extensive comparisons between the two approaches in the literature. To this end, we compared the performance of the two architectures using a suite of qualitative and quantitative experiments on a collection of data sets, and through our analysis (see Section 4), determined that C-GAN typically outperforms AC-GAN.

2.4 Semi-supervised GANs

We now consider the semi-supervised setting, where $0 < n_l < n$ and typically $n_l \ll n$. In this case, both C-GAN and AC-GAN can be applied to the problem. Because C-GAN requires the attribute information to be fed to the discriminator, it can only be applied trivially, by training it on the labeled data alone and throwing away the unlabeled data. We will call this model SC-GAN.

On the other hand, AC-GAN can be applied to this semi-supervised setting in a far more useful manner, as alluded to by the authors in 2017arXiv170403971X. In particular, the adversarial loss terms in (6) and (7) are evaluated over all the images in $X$, while the attribute estimation loss term (5) is evaluated over only the real images with attributes. We will call this model SA-GAN. This model is illustrated in Figure 4.

Figure 4: Illustration of the semi-supervised auxiliary classifier GAN (SA-GAN) model.

3 Proposed Semi-supervised GAN

We now propose a new model for learning conditional generator models in a semi-supervised setting. This model extends the C-GAN architecture to the semi-supervised setting so that, unlike SC-GAN, it can exploit the unlabeled data, by overcoming the difficulty of having to provide side information to the discriminator. By extending the C-GAN architecture, we aim to enjoy the same performance advantages over SA-GAN that C-GAN enjoys over AC-GAN.

In particular, we consider a stacked discriminator architecture comprising a pair of discriminators $D_u$ and $D_s$, with $D_u$ tasked with distinguishing real and fake images $x$, and $D_s$ tasked with distinguishing real and fake (image, attribute) pairs $(x, y)$. Unlike in C-GAN, $D_u$ estimates the probability that $x$ is real using both the labeled and unlabeled instances, while $D_s$ separately estimates the probability that $y$ given $x$ is real using only the labeled instances. The intuition behind this approach is that the marginal distribution $p(x)$ is much harder to model than the conditional distribution $p(y|x)$, and by separately evaluating the marginal and conditional samples, we can exploit the larger unlabeled pool to accurately estimate the marginal distribution.

3.1 Model description

Let $D$ denote the discriminator, which is comprised of two stacked discriminators: (i) $D_u(x)$, which outputs the probability that the marginal image $x$ is real or fake, and (ii) $D_s(y \mid x)$, which outputs the probability that the conditional attribute $y$ given the image $x$ is real or fake. The generator $G(z, y; \theta_g)$ is identical to the generator in C-GAN and AC-GAN. The loss functions for the generator and the pair of discriminators are defined below:

$$J_{ss}(D_u) = -\mathbb{E}_{x \sim p(x)}\left[\log D_u(x)\right] - \mathbb{E}_{z \sim p_z(z),\, y \sim p(y)}\left[\log\left(1 - D_u(G(z, y))\right)\right] \qquad (8)$$

$$J_{ss}(D_s) = -\mathbb{E}_{(x, y) \sim p(x, y)}\left[\log D_s(x, y)\right] - \mathbb{E}_{z \sim p_z(z),\, y \sim p(y)}\left[\log\left(1 - D_s(G(z, y), y)\right)\right] \qquad (9)$$

and

$$J_{ss}(G) = -\mathbb{E}_{z \sim p_z(z),\, y \sim p(y)}\left[\log D_u(G(z, y)) + \alpha \log D_s(G(z, y), y)\right] \qquad (10)$$

where $\alpha$ controls the effect of the conditional term relative to the unsupervised term.
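A sketch of Eqs. (8)-(10), assuming (as in the architecture described next) that $D_u$ returns both its real/fake logit and its penultimate features $h(x)$, and that $D_s$ scores $(h(x), y)$ pairs; all callables and names are illustrative:

```python
# Sketch of Eqs. (8)-(10). D_u(x) -> (rf_logit, h); D_s(h, y) -> rf_logit.
# Eq. (8) uses all images (labeled + unlabeled); Eq. (9) only labeled ones.
import torch
import torch.nn.functional as F
bce = F.binary_cross_entropy_with_logits

def loss_du(D_u, G, x_all, z, y_gen):                       # Eq. (8)
    rf_r, _ = D_u(x_all)
    rf_f, _ = D_u(G(z, y_gen).detach())
    return bce(rf_r, torch.ones_like(rf_r)) + bce(rf_f, torch.zeros_like(rf_f))

def loss_ds(D_u, D_s, G, x_lab, y_lab, z, y_gen):           # Eq. (9)
    _, h_r = D_u(x_lab)
    _, h_f = D_u(G(z, y_gen).detach())
    s_r, s_f = D_s(h_r, y_lab), D_s(h_f, y_gen)
    return bce(s_r, torch.ones_like(s_r)) + bce(s_f, torch.zeros_like(s_f))

def loss_g(D_u, D_s, G, z, y_gen, alpha=1.0):               # Eq. (10)
    x_f = G(z, y_gen)
    rf_f, h_f = D_u(x_f)
    s_f = D_s(h_f, y_gen)
    return bce(rf_f, torch.ones_like(rf_f)) + alpha * bce(s_f, torch.ones_like(s_f))
```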

Model architecture:

We design the model so that $D_u$ depends only on the $x$ argument and produces an intermediate output $h(x)$ (the last-but-one layer of the unsupervised discriminator), to which the $y$ argument is subsequently appended and fed to the supervised discriminator $D_s$ to produce the probability that the joint sample $(x, y)$ is real or fake. The specific architecture is shown in Figure 5.

The advantage of this proposed model, which supplies $x$ to $D_s$ via the features $h(x)$ learned by $D_u$ rather than directly providing the $x$ argument to $D_s$, is that $D_s$ cannot overfit to the few labeled examples, and instead must rely on the features general to the whole population in order to uncover the dependency between $x$ and $y$.
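A minimal module-level sketch of this stacking, with MLP stand-ins for the paper's DCGAN convolutional discriminator (all sizes are placeholders):

```python
# Sketch of the stacked discriminator: D_u sees only x and exposes its
# last-but-one layer h(x); D_s sees only (h(x), y), never raw pixels.
import torch
import torch.nn as nn

class UnsupervisedD(nn.Module):
    def __init__(self, x_dim=784, h_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Linear(x_dim, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, h_dim), nn.LeakyReLU(0.2))
        self.rf_head = nn.Linear(h_dim, 1)      # marginal real/fake logit

    def forward(self, x):
        h = self.features(x)
        return self.rf_head(h), h               # expose h(x) for D_s

class SupervisedD(nn.Module):
    def __init__(self, h_dim=128, y_dim=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(h_dim + y_dim, 128), nn.LeakyReLU(0.2),
            nn.Linear(128, 1))                  # conditional real/fake logit

    def forward(self, h, y_onehot):
        return self.net(torch.cat([h, y_onehot], dim=1))
```

Because $D_s$ never touches raw pixels, any attribute evidence it exploits must survive the feature bottleneck that $D_u$ learns from the full labeled and unlabeled pool.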

For illustration, consider the problem of conditional face generation where one of the attributes of interest is eyeglasses. Also, assume that in the limited set of labeled images, only one style of eyeglasses (e.g., glasses with thick rims) is encountered. If the entire image is available to the supervised discriminator, the conditional discriminator can learn features specific to rims to detect glasses. On the other hand, the features learned by the unsupervised discriminator have to generalize over all kinds of eyeglasses, not just rimmed eyeglasses. In our stacked model, by restricting the supervised discriminator's access to the image to the features learned by the unsupervised discriminator, we ensure that the supervised discriminator generalizes to all different types of eyeglasses when assessing the conditional fit of the glasses attribute.

Figure 5: Illustration of the proposed semi-supervised GAN model. Intermediate features $h(x)$ from the last-but-one layer of the unsupervised discriminator are concatenated with $y$ and fed to the supervised discriminator.

3.2 Convergence analysis of model

Denote the distribution of the samples provided by the generator as $q(x, y)$. Provided that the discriminator has sufficient modeling power, following Section 4.2 in goodfellow2014generative, it follows that if we have sufficient data $X$ and the discriminator $D_u$ is trained to convergence, $D_u$ will converge to the optimal discriminator between $p(x)$ and $q(x)$, and consequently the generator will adapt its output so that $q(x)$ converges to $p(x)$.

Because $n_l$ is finite and typically small, we are not similarly guaranteed that $D_s$ will converge to the optimal discriminator between $p(x, y)$ and $q(x, y)$, and consequently that the generator will adapt its output so that $q(y|x)$ converges to $p(y|x)$. However, we make the key observation that because $q(x)$ converges to $p(x)$ through the use of $D_u$, $D_s$ will equivalently look to make $q(y|x)$ converge to $p(y|x)$; given that these distributions are discrete, plus the fact that the supervised discriminator operates on $x$ via the low-dimensional embedding $h(x)$, we hypothesize that $D_s$ will successfully learn to closely approximate $p(y|x)$ even when $n_l$ is small. The joint use of $D_u$ and $D_s$ will therefore ensure that the joint distribution $q(x, y)$ of the samples produced by the generator converges to the true distribution $p(x, y)$.

4 Experimental results

We present a number of different experiments to illustrate the performance of the proposed SS-GAN relative to existing GAN approaches.

4.1 Models and datasets

We compare the results of the proposed SS-GAN model against four other models:

  1. Supervised conditional GAN model applied to the fully labeled data-set (called C-GAN)

  2. Conditional GAN model applied to only the labeled data-set (called SC-GAN)

  3. Supervised AC-GAN model applied to the full data-set (called AC-GAN)

  4. Semi-supervised AC-GAN model (called SA-GAN)

We illustrate our results on 3 different datasets: (i) MNIST, (ii) celebA, and (iii) CIFAR10.

In all our experiments, we use the DCGAN architecture proposed in radford2015unsupervised, with slight modifications to the generator and discriminator to accommodate the different variants described in the paper. These modifications primarily take the form of (i) concatenating the conditioning input $y$ with $z$ and $x$ for the supervised generator and discriminator respectively, (ii) adding an additional output layer to the discriminator in the case of AC-GAN, and (iii) connecting the last-but-one layer of $D_u$ to $D_s$ in the proposed SS-GAN. In particular, we use the same DCGAN architecture as in radford2015unsupervised for MNIST and celebA, and a slightly modified version of the celebA architecture to accommodate the smaller 32x32 resolution of the CIFAR10 dataset. The stacked DCGAN discriminator model for the celebA faces dataset is shown in Figure 6, and a toy sketch of the conditioning modification follows the figure caption.

Figure 6: Illustration of the SS-GAN discriminator for the celebA dataset. The different layer operations in the neural network are illustrated by the different colored arrows (Conv = convolution with stride 2, BN = batch normalization).
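The conditioning modification for the generator, sketched with placeholder MLP sizes (the paper's actual generators are DCGAN-style convolutional networks):

```python
# Sketch of input conditioning: the generator receives z concatenated with
# the one-hot attribute vector y. Layer sizes are placeholders only.
import torch
import torch.nn as nn

class ConditionalG(nn.Module):
    def __init__(self, z_dim=100, y_dim=10, x_dim=784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim + y_dim, 256), nn.ReLU(),
            nn.Linear(256, x_dim), nn.Tanh())

    def forward(self, z, y_onehot):
        return self.net(torch.cat([z, y_onehot], dim=1))
```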

4.2 Evaluation criteria

We use a variety of different evaluation criteria to contrast SS-GAN against the models C-GAN, AC-GAN, SC-GAN and SA-GAN listed earlier.

  1. Visual inspection of samples: We visually display a large collection of samples from each of the models and highlight differences in samples from the different models.

  2. Reconstruction error: We optimize the inputs to the generator to reconstruct the original samples in the dataset (see Section 5.2 in 2017arXiv170403971X) with respect to squared reconstruction error. Given the drawbacks of reconstruction loss, we also compute the structural similarity metric (SSIM) wang2004image.

  3. Attribute/class prediction from pre-trained classifier (for generator): We pre-train an attribute/class predictor from the entire training data set, and apply this predictor to the samples generated from the different models, and report the accuracy (RMSE for attribute prediction, 0-1 loss for class prediction).

  4. Supervised learning error (for discriminator): We use the features from the discriminator and build classifiers on these features to predict attributes, and report the accuracy.

  5. Sample diversity: To ensure that the samples being produced are representative of the entire population, and not just the labeled samples, we first train a classifier that can distinguish between the labeled samples (class label 0) and the unlabeled samples (class label 1). We then apply this classifier to the samples generated by each of the generators, and compute the mean probability of the samples belonging to class 0. The closer this number is to 0, the better the unlabeled samples are represented (a minimal sketch of this metric follows this list).
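A hedged sketch of the diversity metric from item 5; the classifier is any binary model with a single logit output, and the names are hypothetical:

```python
# Sketch of the sample-diversity metric: given a classifier trained to
# separate labeled (0) from unlabeled (1) training images, report the mean
# probability that generated samples look like the labeled set (class 0).
import torch

def sample_diversity(classifier, x_generated):
    with torch.no_grad():
        p_unlabeled = torch.sigmoid(classifier(x_generated))  # P(class 1)
    return (1.0 - p_unlabeled).mean().item()  # near 0 => little copying of labeled set
```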

4.3 MNIST

The MNIST dataset contains 60,000 labeled images of digits. We perform semi-supervised training with a small randomly picked fraction of these, considering setups with 10, 20, and 40 labeled examples. We ensure that each setup has a balanced number of examples from each class. The remaining training images are provided without labels.

4.3.1 Visual sample inspection

In Figure 7, we show representative samples from the 5 different models for the case with 20 labeled examples. In addition, in Figures 9, 10, 11, 12, and 13, we show more detailed results for this case with 20 labeled examples (two examples per digit). In these detailed results, each row corresponds to a particular digit. Both C-GAN and AC-GAN successfully learn to model both the digits and the association between the digits and their class labels. From the results, it is clear that SC-GAN learns to reproduce only the digit styles made available in the labeled set. While SA-GAN produces a greater diversity of samples, it suffers in producing the correct digits for each label. SS-GAN, on the other hand, both produces diverse digits and is accurate. In particular, its performance closely matches the performance of the fully supervised C-GAN and AC-GAN models. This is additionally borne out by the quantitative results shown in Tables 1, 2, and 3 for the cases of 10, 20, and 40 labeled examples respectively, as shown below.

(a) MNIST samples 1
(b) MNIST samples 2
Figure 7: 2 sets of representative samples from the 5 models (each row from top to bottom corresponds to samples from C-GAN, AC-GAN, SC-GAN, SA-GAN and SS-GAN). SS-GAN’s performance is close to the supervised models (C-GAN and AC-GAN). SA-GAN gets certain digit associations wrong, while SC-GAN generates copies of digits from the labeled set.
Samples source Class pred. error Recon. error Sample diversity Discrim. error
True samples 0.0327 N/A 0.992 N/A
Fake samples N/A N/A 1.14e-05 N/A
C-GAN 0.0153 0.0144 1.42e-06 0.1015
AC-GAN 0.0380 0.0149 1.49e-06 0.1140
SC-GAN 0.0001 0.1084 0.999 0.095
SA-GAN 0.3091 0.0308 8.62e-06 0.1062
SS-GAN 0.1084 0.0320 0.0833 0.1024
Table 1: Compilation of quantitative results for the MNIST dataset for $n_l = 10$.
Samples source Class pred. error Recon. error Sample diversity Discrim. error
True samples 0.0390 N/A 0.994 N/A
Fake samples N/A N/A 2.86e-05 N/A
C-GAN 0.0148 0.01289 8.74e-06 0.1031
AC-GAN 0.0189 0.01398 9.10e-06 0.1031
SC-GAN 0.0131 0.0889 0.998 0.1080
SA-GAN 0.2398 0.02487 2.18e-05 0.1010
SS-GAN 0.1044 0.0160 2.14e-05 0.1014
Table 2: Compilation of quantitative results for the MNIST dataset for $n_l = 20$.
Samples source Class pred. error Recon. error Sample diversity Discrim. error
True samples 0.0390 N/A 0.993 N/A
Fake samples N/A N/A 1.63e-05 N/A
C-GAN 0.0186 0.0131 1.36e-05 0.1023
AC-GAN 0.0141 0.0139 6.84e-06 0.1054
SC-GAN 0.0228 0.080 0.976 0.1100
SA-GAN 0.1141 0.00175 1.389e-05 0.1076
SS-GAN 0.0492 0.0135 3.54e-05 0.1054
Table 3: Compilation of quantitative results for the MNIST dataset for $n_l = 40$.

4.3.2 Discussion of quantitative results

The fraction of incorrectly classified points for each source, the reconstruction error, the sample diversity metric, and the discriminator error are shown in Tables 1, 2, and 3. SS-GAN comfortably outperforms SA-GAN with respect to classification accuracy, and comfortably beats SC-GAN with respect to reconstruction error (due to the limited sample diversity of SC-GAN). The sample diversity metric for SS-GAN is slightly worse than that of SA-GAN, but significantly better than that of SC-GAN. Taken together with the visual analysis of the samples, these results conclusively demonstrate that SS-GAN is superior to SA-GAN and SC-GAN in the semi-supervised setting.

From the three sets of results for the different labeled sample sizes (10, 20, and 40), we see that the performance of all the models improves smoothly with increasing sample size, with SS-GAN still outperforming the other two semi-supervised models for each setting of the number of labeled samples.

4.3.3 Semi-supervised learning error

For MNIST, we run an additional experiment where we draw samples from the various generators, train a classifier using each set of samples, and record the test-error performance of this classifier. With 20 labeled examples, the 10-fold 0-1 error of classifiers trained using samples generated from the different models is shown in Table 4.

Samples source 10-fold 0-1 error
C-GAN 5.1
AC-GAN 5.2
SC-GAN 12.9
SA-GAN 24.3
SS-GAN 5.4
Table 4: Classifier 10-fold 0-1 error using samples generated from different models for MNIST.

From the results in Table 4, we see that our SS-GAN model performs close to the supervised models. In particular, we note that these results are state-of-the-art for MNIST given just 20 labeled examples (see salimans2017improved for comparison). However, the performance remains fairly stationary as the number of labeled examples increases, and furthermore this approach is not very effective for more complex datasets such as CIFAR10 and celebA, indicating that using samples from GANs to train classifiers should be restricted to very low sample settings for simpler data sets like MNIST.

4.4 celebA dataset results

CelebFaces Attributes Dataset (CelebA) liu2015faceattributes is a large-scale face attributes dataset with more than 200K celebrity images, each with 40 attribute annotations. The images in this dataset cover large pose variations and background clutter. Of the 40 attributes, we sub-select the following 18 attributes: 0: ’Bald’, 1: ’Bangs’, 2: ’Black Hair’, 3: ’Blond Hair’, 4: ’Brown Hair’, 5: ’Bushy Eyebrows’, 6: ’Eyeglasses’, 7: ’Gray Hair’, 8: ’Heavy Makeup’, 9: ’Male’, 10: ’Mouth Slightly Open’, 11: ’Mustache’, 12: ’Pale Skin’, 13: ’Receding Hairline’, 14: ’Smiling’, 15: ’Straight Hair’, 16: ’Wavy Hair’, 17:’Wearing Hat’.

4.4.1 Visual sample inspection

In Figure 8, we show representative samples from the 5 different models for the semi-supervised case on the celebA dataset. Each row corresponds to an individual model, and each column corresponds to one of the 18 different attributes listed above. In addition, we show more detailed samples generated by the 5 different models in Figures 15, 14, 16, 17, and 18. In each of these figures, each row corresponds to a particular attribute type with all the other attributes set to 0. From the generated samples, we can once again see that the visual samples produced by SS-GAN are close in quality to the samples generated by the fully supervised models C-GAN and AC-GAN. SC-GAN, when applied to the labeled subset of the data, produces very poor results (significant mode collapse plus poor quality of the generated images), while SA-GAN is relatively worse when compared to SS-GAN. For instance, SA-GAN produces images with incorrect attributes for attributes 0 (faces turned to a side instead of bald), 7 (faces with hats instead of gray hair), and 12 (generic faces instead of faces with pale skin).

(a) CelebA samples 1
(b) CelebA samples 2
Figure 8: 2 sets of representative samples from the 5 models (each row from top to bottom corresponds to samples from C-GAN, AC-GAN, SC-GAN, SA-GAN and SS-GAN respectively). SS-GAN’s performance is close to the supervised models (C-GAN and AC-GAN). SA-GAN gets certain associations wrong (e.g., attributes 0, 7 and 12), while SC-GAN produces samples of poor visual quality.
Samples source Attribute RMSE Recon. error SSIM Sample diversity Disc. error
True samples 0.04 N/A N/A 0.99 N/A
Fake samples N/A N/A N/A 0.001 N/A
C-GAN 0.25 0.036 0.497 0.002 0.07
AC-GAN 0.29 0.047 0.076 0.005 0.06
SC-GAN 0.26 0.343 0.143 0.454 0.01
SA-GAN 0.36 0.042 0.167 0.006 0.07
SS-GAN 0.31 0.040 0.217 0.004 0.03
Table 5: Compilation of quantitative results for the celebA dataset. Across the joint set of metrics, SS-GAN achieves performance close to the supervised C-GAN and AC-GAN models, while performing much better than either of the semi-supervised models - SC-GAN and SA-GAN.

4.4.2 Discussion of quantitative results

The four quantitative metrics (the attribute prediction error, the reconstruction error together with SSIM, the sample diversity metric, and the discriminator error) are shown in Table 5.

SS-GAN comfortably outperforms SA-GAN and achieves results close to the fully supervised models for the attribute prediction error metric. It is interesting to note that SC-GAN produces better attribute prediction error numbers than SA-GAN while producing notably worse samples. We also find that with respect to reconstruction error and the SSIM metric, SS-GAN marginally outperforms SA-GAN while coming close to the performance of the supervised C-GAN and AC-GAN models. As expected, SC-GAN performs poorly in this case. We also find that SS-GAN has a fairly low sample diversity score, marginally higher than C-GAN, better than SA-GAN, and better even than the fully supervised AC-GAN. Finally, SS-GAN comfortably outperforms SA-GAN and achieves results close to the fully supervised models with respect to the discriminator feature error.

4.5 CIFAR10 dataset

The CIFAR-10 dataset krizhevsky2009learning consists of 60000 32x32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images. The following are the 10 classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck.

4.5.1 Visual sample inspection

From the generated samples in Figures 19, 20, 21, 22, and 23, we can see that the visual samples produced by SS-GAN are close in quality to the samples generated by C-GAN. The other three models, AC-GAN, SA-GAN, and SC-GAN, suffer from significant mode collapse. We found the poor results of AC-GAN in the fully supervised case particularly surprising, given the good performance of C-GAN on CIFAR10 and the good performance of AC-GAN on the MNIST and celebA datasets.

Samples source Class pred. error Recon. error SSIM Sample diversity Disc. error
True samples 0.098 N/A N/A 1.00 N/A
Fake samples N/A N/A N/A 1.21e-07 N/A
C-GAN 0.198 0.041 0.501 1.39e-07 0.874
AC-GAN 0.391 0.204 0.024 1.41e-06 0.872
SC-GAN 0.355 0.213 0.026 0.999 0.870
SA-GAN 0.468 0.173 0.021 2.30e-06 0.874
SS-GAN 0.299 0.061 0.042 6.54e-06 0.891
Table 6: Compilation of quantitative results for the cifar10 dataset. Across the joint set of metrics, SS-GAN achieves performance close to the supervised C-GAN and AC-GAN models, while performing much better than either of the semi-supervised models - SC-GAN and SA-GAN.

4.5.2 Discussion of quantitative results

The different quantitative metrics computed on the CIFAR10 dataset are shown in Table 6. In our experiments, we find that the samples generated by SS-GAN are correctly classified 70 percent of the time, which is second best after C-GAN and is off from the true samples by 15 percent. We also find that the reconstruction error for SS-GAN comes close to the performance of C-GAN and comfortably outperforms the other three models. This result is consistent with the visual inspection of the samples. The sample diversity metric for SS-GAN is significantly better than that of SC-GAN, and comparable to the other three models.

5 Conclusion and discussion

We proposed a new GAN-based framework for learning conditional models in a semi-supervised setting. Compared to the only existing semi-supervised GAN approaches (i.e., SC-GAN and SA-GAN), our approach shows a marked improvement in performance across several datasets, including MNIST, celebA, and CIFAR10, with respect to visual quality of the samples as well as several other quantitative metrics. In addition, the proposed technique comes with theoretical convergence properties, even in the semi-supervised case where the number of labeled samples is finite.

From our results on all three of these datasets, we can conclude that the proposed SS-GAN performs almost as well as the fully supervised C-GAN and AC-GAN models, even when provided with a very low number of labeled samples (down to the extreme limit of just one sample per class in the case of MNIST). In particular, it comfortably outperforms the semi-supervised variants of C-GAN and AC-GAN (SC-GAN and SA-GAN respectively). While the superior performance over SC-GAN is clearly explained by the fact that SC-GAN is trained only on the labeled data, the performance advantage of SS-GAN over SA-GAN is not as readily apparent. We discuss the reasons for this below:

5.1 Why does SS-GAN work better than SA-GAN?

  1. Unlike AC-GAN, where the discriminator is tasked with recovering the attributes, in C-GAN the discriminator is asked to estimate whether the pair $(x, y)$ is real or fake. This use of an adversarial loss that classifies pairs as real or fake, rather than a cross-entropy loss that asks the discriminator to recover $y$ from $x$, seems to work far better, as demonstrated by our experimental results. Our proposed SS-GAN model learns the association between $x$ and $y$ using an adversarial loss, as is the case with C-GAN, while SA-GAN uses the cross-entropy loss over the labeled samples.

  2. The stacked architecture in SS-GAN, where the intermediate features of $D_u$ are fed to $D_s$, ensures that $D_s$, and in turn the generator, does not over-fit to the labeled samples. In particular, $D_s$ is forced to learn discriminative features that characterize the association between $x$ and $y$ based on the features learned by $D_u$ over the entire unlabeled set, which ensures generalization to the complete set of images.
