Denoising Adversarial Autoencoders: Classifying Skin Lesions Using Limited Labelled Training Data

01/02/2018 ∙ by Antonia Creswell, et al. ∙ Imperial College London

We propose a novel deep learning model for classifying medical images in the setting where there is a large amount of unlabelled medical data available, but labelled data is in limited supply. We consider the specific case of classifying skin lesions as either malignant or benign. In this setting, the proposed approach -- the semi-supervised, denoising adversarial autoencoder -- is able to utilise vast amounts of unlabelled data to learn a representation for skin lesions, and small amounts of labelled data to assign class labels based on the learned representation. We analyse the contributions of both the adversarial and denoising components of the model and find that the combination yields superior classification performance in the setting of limited labelled training data.


1 Introduction

The problem of image classification is one of assigning one or more labels to a given image. Deep learning has been demonstrated to achieve both human and super-human levels of performance on classification tasks [8]. However, achieving competitive performance with deep learning often requires vast numbers of {image, label} pairs, typically in the millions.

In the medical image setting, it is unlikely that vast amounts of labelled images are available, particularly since medical experts are required to label the data, and this may be very costly and time consuming. Instead, it is often the case that there exists a large corpus of unlabelled data and a smaller dataset of labelled data.

We propose a model that is able to learn from both labelled and unlabelled data by building on previous work involving autoencoders [2, 12, 14, 20, 10]. Autoencoders are able to learn data representations from unlabelled data by jointly learning an encoder and a decoder. The encoder maps data samples – in this case images – to a low-dimensional encoding space, and the decoder maps the encoding back to image space. An autoencoder is trained to reconstruct its input. There are two key factors that enhance the performance of autoencoders:

  • Denoising: Before being encoded, an input image is corrupted, and the decoder is trained to recover the clean image. By making the decoding process more challenging, the autoencoder learns more robust representations [20, 21].

  • Regularisation: Rather than allowing encoded data samples to occupy an unconstrained space, the distribution of encoded samples may be shaped to match a desired prior distribution, for example a multivariate standard normal distribution. Regularisation reduces the amount of information that may be held in the encoding, forcing the model to learn an efficient representation for the data.
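As a concrete illustration, the denoising corruption can be as simple as adding zero-mean Gaussian noise to each pixel value before encoding. A minimal sketch in plain Python (the function name and the noise level `sigma` are illustrative choices, not the paper's):

```python
import random

def corrupt(x, sigma=0.25):
    """Corrupt a flattened image (list of pixel values in [0, 1])
    by adding zero-mean Gaussian noise with standard deviation sigma."""
    return [v + random.gauss(0.0, sigma) for v in x]

clean = [0.2, 0.5, 0.9]
noisy = corrupt(clean)
# A denoising autoencoder is trained to reconstruct `clean` from `noisy`.
```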

To implement a denoising process, an arbitrary corruption process may be used. For example, white Gaussian noise may be added to samples of the training data [2]. Corruption is often trivial to implement; regularising the distribution of encoded data samples is more challenging. There are two key approaches for shaping the distribution of encoded samples to match a desired distribution:

Variational

Minimising the KL divergence between the distribution of encoded samples and a chosen prior distribution [12]. For ease of implementation, the prior distribution is often a multivariate standard normal distribution, and the encoder is designed to learn the parameters of a Gaussian distribution.
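For intuition, when the encoder outputs the mean and variance of a diagonal Gaussian, the KL divergence to a standard normal prior has a well-known closed form; a small sketch (the function name is ours):

```python
import math

def kl_to_standard_normal(mu, var):
    """KL( N(mu, diag(var)) || N(0, I) ), summed over dimensions."""
    return 0.5 * sum(m * m + v - 1.0 - math.log(v) for m, v in zip(mu, var))

# The divergence is zero exactly when the encoder matches the prior:
kl_to_standard_normal([0.0, 0.0], [1.0, 1.0])  # 0.0
```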

Adversarial

Rather than using the encoder to parametrise a distribution and calculate the KL divergence, a third, discriminative model is trained to correctly distinguish encoded samples from samples drawn from a chosen prior distribution. The encoder is then updated to encode samples such that the discriminator cannot distinguish encoded data samples from samples drawn from the prior distribution [14]. We will more formally introduce adversarial training in Section 2.2.

The adversarial approach [14] allows the encoder to be more expressive than the variational approach [12], and has achieved superior semi-supervised classification performance on several benchmark datasets. While denoising and adversarial training have each been used to augment autoencoders in isolation, they have yet to be combined in one model. Here, we propose augmenting an autoencoder both with a denoising criterion and by using adversarial training to shape the distribution of encoded data samples. We augment this model further to make use of labelled data where it is available, while still learning from unlabelled data where label information is not available.

Our contributions are as follows:

  • We introduce the semi-supervised denoising adversarial autoencoder (ssDAAE) which is able to learn from a combination of labelled and unlabelled data (Section 2.3).

  • We apply our model, the ssDAAE, to the task of classifying skin-lesions as benign or malignant in the setting where the amount of labelled data is limited (Section 3).

  • We compare the performance of the ssDAAE with a semi-supervised adversarial autoencoder (ssAAE), a fully supervised AAE (sAAE), a fully supervised DAAE (sDAAE), and a CNN trained with and without corruption. For fair comparison, the CNNs had the same architecture as the encoder of the ssAAE and ssDAAE; that is, the portion of the ssAAE and ssDAAE architecture used to perform classification is the same as the CNN used for standard deep network classification. Additionally, we assessed the effect of additive noise during training of the otherwise standard CNN. Our results show that the ssDAAE consistently outperforms the others.

Although we demonstrate this approach on skin lesions, the semi-supervised approach explored in this paper is not specific to skin lesions, and could potentially be applied to other image datasets where labelled samples are in limited supply but there is a surplus of unlabelled images.

2 Method: Classifying Skin Lesions

In this section, we formulate the ssDAAE. First, we discuss the skin lesion classification problem. Second, we describe the adversarial autoencoder (AAE) and how it may be augmented to become the ssDAAE. Finally, we describe how the ssDAAE is trained.

2.1 Skin lesion classification

Skin lesion classification is a non-trivial problem. Even humans have to be specially trained to distinguish benign (not harmful) skin lesions from malignant (harmful) ones. Examples of benign and malignant skin lesions are shown in Figure 1. The high-level goal is to train a model to correctly predict whether a skin lesion is benign or malignant. Beyond this, we want to design models for which we can be confident that we correctly identify a specific proportion of malignant skin lesions as malignant, while still correctly identifying a large number of benign skin lesions as benign. To this end, in the following sections we describe the model that we propose for skin lesion classification in the setting of limited labelled data.

(a) Benign
(b) Malignant
Figure 1: Examples of Benign and Malignant skin-lesions. Classifying skin lesions as benign or malignant is non-trivial and requires expert knowledge.

2.2 Adversarial Autoencoders

An autoencoder consists of two models, an encoder and a decoder, each with its own set of learnable parameters. In our approach, deep convolutional neural networks embody the encoder and decoder. The encoder, E, with parameters Θ_E, is designed to map an image sample, x, to an encoding, z = E(x). The encoding vector, z, is of much lower dimension than the number of pixels in the image. The decoder, D, is designed to map an encoding back to an image, x̂ = D(z). The parameters, Θ_E and Θ_D, of the encoder and decoder respectively are learned such that the difference between the input to the encoder, x, and the output of the decoder, x̂, is minimised.

The adversarial autoencoder [14] incorporates adversarial training [9] to shape the distribution of encoded data samples to match some chosen prior distribution, p(z), such as a multivariate standard normal distribution. Note that we apply adversarial training to the encoded data samples, rather than to the data samples themselves, as is more commonly seen in the literature [9, 16]. Adversarial training requires the introduction of another model, a discriminator, for which we also use a deep neural network. The discriminator, C, maps encodings – either encoded data samples, E(x), or samples drawn from the prior, z ∼ p(z) – to a probability of whether that sample comes from the chosen prior distribution. The parameters, Θ_C, of the discriminator are learned such that high values are assigned to samples that come from the chosen prior distribution and low values are assigned to samples that come from the encoder. To encourage encoded samples to match the chosen prior distribution, the parameters of the encoder, Θ_E, are updated such that C(E(x)) is maximised.

Formally, the following objectives must be optimised during the training of an adversarial autoencoder:

L_rec = E_{x∼p(x)} ‖x − D(E(x))‖²   (1)

L_dis = −E_{z∼p(z)} [log C(z)] − E_{z̃∼q(z)} [log(1 − C(z̃))]   (2)

where q(z) is the distribution of encoded data samples, z̃ = E(x), and x ∼ p(x).

L_enc = −E_{z̃∼q(z)} [log C(z̃)]   (3)

where p(z) is some chosen prior distribution, for example a standard normal, and p(x) is the training data distribution.

Equation (1) is the reconstruction cost used to train the encoder and decoder. This cost should be minimised so that input images may be recovered after encoding and decoding. Equation (2) is the discriminator cost, which, when minimised, means that the discriminator can correctly distinguish between encoded data samples and samples from the chosen prior distribution. Equation (3) is a second, regularisation cost used to update the encoder. When minimised – simultaneously with Equation (2) [9] – this regularisation cost encourages the distribution of encoded samples to be similar to the chosen prior distribution.
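To make the three objectives concrete, the toy sketch below evaluates them for one-dimensional linear stand-ins for the encoder, decoder and discriminator (everything here is an illustrative stand-in, not the paper's architecture):

```python
import math
import random

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

# Toy models: encoder E, decoder D (scalar linear maps), discriminator C.
E = lambda x: 0.5 * x           # encoder
D = lambda z: 2.0 * z           # decoder (inverse of E, so reconstruction is exact)
C = lambda z: sigmoid(3.0 * z)  # logistic discriminator on the encoding space

random.seed(1)
batch = [random.uniform(-1, 1) for _ in range(8)]   # x ~ p(x)
prior = [random.gauss(0.0, 1.0) for _ in range(8)]  # z ~ p(z)

# Eq. (1): reconstruction cost.
L_rec = sum((x - D(E(x))) ** 2 for x in batch) / len(batch)

# Eq. (2): discriminator cost -- low when C separates prior samples from encodings.
L_dis = (-sum(math.log(C(z)) for z in prior) / len(prior)
         - sum(math.log(1.0 - C(E(x))) for x in batch) / len(batch))

# Eq. (3): encoder regularisation cost -- low when C is fooled by encodings.
L_enc = -sum(math.log(C(E(x))) for x in batch) / len(batch)
```

Because this toy decoder exactly inverts the encoder, the reconstruction cost is zero, while the two adversarial costs remain positive and are traded off during training.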

An adversarial autoencoder may be trained entirely with unlabelled data. It may be evaluated by measuring reconstruction error on a test dataset, or by synthesising novel samples: drawing samples, z, from the chosen prior distribution and passing them through the decoder to produce synthetic images, D(z). The process of encoding and decoding test samples often reveals whether or not the model has learned a sufficient representation for the data. A further test is to attempt to generate novel samples by passing random encodings – drawn from the chosen prior distribution – through the decoder. Since a regularised autoencoder is able to generate novel samples, we often refer to it as incorporating a generative model of the data.

In its current form, it is not immediately obvious how an adversarial autoencoder may be used to perform classification. In fact, it is necessary to augment the encoder to predict not only the encoding, but also the label.

2.3 Semi-Supervised Denoising Adversarial Autoencoder

Before learning to classify skin lesions as benign or malignant, we may first learn more about what a skin lesion looks like. This could involve learning the colour, general shape and texture of skin lesions. An ssDAAE allows us to do this by incorporating both a generative and a classification model in one. The ssDAAE differs from the AAE in two ways.

Firstly, the AAE is augmented by applying a corruption process. The corruption process is a stochastic process in which Gaussian noise with standard deviation σ is added to a data sample, x, to obtain a corrupted sample, x̃. This change results in a DAAE.

Secondly, the encoder of the DAAE is altered to define an ssDAAE by splitting it into three parts: an initial encoder, E, and two sub-encoders, E_z and E_y. The encoder is trained to predict not only an encoding, z, but also a label vector, y. Adversarial training is used (as in an AAE [14]) both to shape the distribution of encoded samples to match a chosen prior distribution and to shape the distribution of predicted class labels to match a categorical distribution [14].

However, since we pose skin lesion classification as a binary classification problem, we represent the labels benign and malignant using a single unit and apply a sigmoid function at the end of E_y. We therefore train a label discriminator to distinguish predicted labels from labels drawn from a binary distribution. This encourages the output of the classifier, y, to be either 0 or 1, rather than taking values in between.

For an input x, the output of the decoder, x̂, is given by:

z = E_z(E(x̃)),  y = E_y(E(x̃))   (4)

x̂ = D([z, y])   (5)

where [z, y] is a concatenation of z and y.
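The wiring of Equations (4) and (5) – corrupt, encode, split into two heads, concatenate, decode – can be sketched as follows; the individual networks are crude placeholders, and only the data flow follows the text:

```python
import math
import random

def corrupt(x, sigma=0.25):
    """Add zero-mean Gaussian noise to the input (the corruption step)."""
    return [v + random.gauss(0.0, sigma) for v in x]

def shared_encoder(x):            # E: image -> intermediate features (placeholder)
    return x[:4]

def latent_head(h):               # E_z: features -> encoding z (placeholder, 2-d)
    return [sum(h) / len(h)] * 2

def label_head(h):                # E_y: features -> single sigmoid label unit
    return [1.0 / (1.0 + math.exp(-sum(h)))]

def decoder(zy):                  # D: [z, y] -> reconstructed image (placeholder)
    return zy * 2                 # repeat the 3 inputs to image length 6

x = [0.1, 0.4, 0.7, 0.2, 0.9, 0.3]
x_tilde = corrupt(x)                    # corruption step
h = shared_encoder(x_tilde)
z, y = latent_head(h), label_head(h)    # Eq. (4): two sub-mappings
x_hat = decoder(z + y)                  # Eq. (5): decode the concatenation [z, y]
```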

The weights of the encoder, E and E_z, are updated via both adversarial training – matching the distribution of z's to the chosen prior – and a reconstruction error between x and x̂. This forms the generative part of the model and may be trained on entirely unlabelled data. This property means that we can learn the parameters of E and E_z using large amounts of unlabelled data, learning more about the structure of skin lesions. We can also visualise what this model has learned by generating novel images of skin lesions and evaluating them by eye to see whether the model has captured the basic concept of what a skin lesion is.

Following on from this, we may use the limited labelled training data to “fine tune” the generative model. We may use the labelled data to update the weights of E_y, or additionally to update those of E, by minimising the classification error between the predicted label and the true label. Experimentally, we found it beneficial to update both E_y and E, as this made training more stable.

For completeness, note that – as in an adversarial autoencoder (AAE) – the weights of the decoder, D, are learned as part of the minimisation of the reconstruction error between x and x̂. In Figure 2, we present a diagram of our proposed model.

Figure 2: ssDAAE model. Image data, x, is corrupted before being encoded. The encoding process consists of two sub-mappings of the corrupted image, x̃, yielding an encoding, z, of the image appearance and a label prediction, y. The decoder uses both of these to reconstruct a version of the uncorrupted image, x̂. The blue parts correspond to an AAE model, while the red parts are additions that make this model an ssDAAE.

2.4 Training Data

As described above, our ssDAAE may be trained using a mixture of both labelled and unlabelled data. The labelled data is obtained from the ISIC archive [1]. The archive consists of nearly k images, of which over k are benign skin lesions taken from children. The child skin lesion samples contain colour-coded identifier patches, rendering them unsuitable for training in their current state. The remaining images comprise examples of benign skin lesions and examples of malignant skin lesions.

To make the k skin lesions taken from children more appropriate for training and classification, we removed the identifier patches as shown in Figure 3. This processing step is not considered to be part of the classification framework, rather a means to increase the amount of available training data. These identifier patches are unlikely to be present in real world encounters. The processed child skin lesions are combined with the rest of the benign skin-lesions.

Figure 3: Pre-processing of child skin lesions. Images were pre-processed to crop out the area of the skin lesion. First, skin was detected using a colour profile that matches typical skin colours within a certain threshold, yielding a binary mask; within this mask, the rectangle of maximal area was found. The image was then cropped and centred to obtain square images of 64×64 pixels.

The ISIC dataset [1] does not specifically provide distinct labelled and unlabelled datasets. We partition the data into k unlabelled samples, k labelled samples used for training, and the rest, used for testing and validation. To expand each dataset we performed data augmentation by flipping the skin-lesion examples in both the x and y axes and rotating the samples up to degrees.
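The flip-based part of the augmentation described above can be sketched on an image stored as a nested list of rows; the 90-degree rotation shown is one illustrative rotation choice:

```python
def flip_horizontal(img):
    """Mirror each row (flip in the x axis)."""
    return [row[::-1] for row in img]

def flip_vertical(img):
    """Reverse the order of the rows (flip in the y axis)."""
    return img[::-1]

def rotate_90(img):
    """Rotate the image 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

img = [[1, 2],
       [3, 4]]
augmented = [img, flip_horizontal(img), flip_vertical(img), rotate_90(img)]
```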

3 Experiments and Results

In this section, we perform an exhaustive ablation study to isolate the effectiveness of (a) incorporating denoising, (b) using an adversarial autoencoder (a generative model) as opposed to a CNN (a discriminative model) and (c) utilising additional unlabelled data. From these experiments we are able to isolate exactly which components of the ssDAAE are necessary to achieve good performance. We start by explaining how the performance of our models is evaluated.

3.1 Evaluation

There is significant label imbalance in the ISIC dataset: the majority of the images (90%) are benign. Choosing a single classification accuracy as a performance metric would therefore be misleading, since even a system that always outputs the benign class would, on average, achieve a high score (90%). Instead, we prefer clinically insightful and interpretable metrics, namely the percentage of malignant skin lesions correctly classified as malignant (true positive rate, or sensitivity) and the percentage of benign skin lesions correctly classified as benign (true negative rate, or specificity). Furthermore, in the context of a medical application, and because of the label imbalance problem, we are particularly interested in comparing model performance in terms of specificity at high sensitivity values, to avoid misdiagnosing a malignant skin lesion as benign.
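The two metrics can be computed directly from predictions and ground truth labels (taking 1 = malignant as the positive class); a small sketch, with function names of our choosing:

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity: fraction of malignant (1) lesions predicted malignant.
    Specificity: fraction of benign (0) lesions predicted benign."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    positives = sum(y_true)
    negatives = len(y_true) - positives
    return tp / positives, tn / negatives

# An imbalanced toy batch: 2 malignant, 8 benign.
y_true = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
y_pred = [1, 0, 0, 0, 0, 0, 0, 0, 1, 0]
sens, spec = sensitivity_specificity(y_true, y_pred)  # 0.5, 0.875
```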

We used evaluation metrics similar to those used in an international skin lesion classification challenge hosted by the International Skin Imaging Collaboration (ISIC) at the International Symposium on Biomedical Imaging (ISBI). The challenge was composed of three tasks, of which the final task was skin lesion classification. For this last task, participants' models were ranked according to their specificity at a given sensitivity threshold. We use the same evaluation metrics in all our experiments.

3.2 Training and Architectural Details

3.2.1 Architectural Details

In this subsection we present the detailed architecture of both the CNN baseline as well as our semi-supervised denoising adversarial autoencoder.

CNN

The baseline CNN model consists of a sequence of 4 convolution layers; a ReLU non-linearity is applied to the output of each layer before it is fed to the next one. The output of the convolutional sequence is then flattened and fed to a linear layer containing 1000 neurons, followed by a final linear layer with one neuron and a sigmoid non-linearity that returns values between 0 and 1, where 0 is the label for benign and 1 is the label for malignant.

Semi-supervised Denoising Adversarial Autoencoder

The CNN described above, without the final linear layer, forms the encoder, E, of our adversarial autoencoder. The output of the 1000-neuron linear layer is split in two:

  1. Latent Encoder, E_z: consists of a linear layer with 200 neurons that returns the 200-dimensional encoding vector, z, representing an input image in the learned latent space.

  2. Classifier, E_y: consists of a linear layer with a single neuron followed by a sigmoid activation function. The output of this layer is the class prediction, y, for a given input image.

The label output as well as the encoded vector are then fed through three sub-networks (refer to Figure 2 for a visualisation).

  • Latent Space Discriminator, C_z: This model consists of a linear layer with 1000 neurons and a ReLU non-linearity, followed by a linear layer with a single neuron and a sigmoid non-linearity.

  • Binary Label Discriminator, C_y: This model has a similar architecture to the Latent Space Discriminator.

  • Decoder, D: Finally, this model consists of a linear layer followed by a sequence of 4 transposed convolution layers; a ReLU non-linearity is applied to the output of each transposed convolution layer before it is fed to the next one, and a sigmoid is applied to the final output of the sequence. The input to the decoder is the concatenation of the label and the encoded vector.

These models are summarised in the Appendix (Tables 3–6).
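As a sanity check on the layer sizes above, the spatial dimensions implied by four stride-2 convolutions with filter size 5 and padding 2 can be traced with the standard output-size formula; a quick sketch, assuming the 64×64 inputs described in Section 2.4:

```python
def conv_out_size(n, kernel=5, stride=2, padding=2):
    """Output side length of a square convolution layer."""
    return (n + 2 * padding - kernel) // stride + 1

size = 64
for layer in range(4):           # the four conv2D layers of the encoder
    size = conv_out_size(size)   # 64 -> 32 -> 16 -> 8 -> 4
# After four layers: a 4x4 map with 512 filters, which is flattened before
# the 1000-neuron linear layer (matching the 512x(4x4) size in the decoder).
```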

3.2.2 Preprocessing and Input Corruption

All images were scaled so that all values lie between 0 and 1. Furthermore, in order to allow for partial corruption of the input, a corruption layer was implemented that adds zero-mean Gaussian noise with standard deviation σ to the input of the system. Various values of σ were attempted; experimentally, the best results were obtained for a moderate noise level (see Section 4.1).

3.2.3 Loss functions and Class Imbalance

In this section we describe in detail how we balance the cost functions used to train our networks. For training the baseline CNN model, a single binary cross-entropy loss was used, as this is an appropriate loss function for classification tasks. The ssDAAE, on the other hand, consists of several modules, each with its own loss function, and these loss functions need to be combined with care. The loss functions include: a classification loss at the output of the classifier, E_y; a reconstruction loss at the output of the decoder; and both discriminator and regularisation losses at the outputs of the two discriminators. The latent space discriminator loss is described in Equation (2); this cost may be modified for the label discriminator by replacing samples from the prior with samples from a binary distribution, and encoded samples with the output of the classifier. We now describe the encoder loss function (designed to update the encoder weights), which is defined as a weighted combination of the following losses:

  • Classification Loss: The binary cross-entropy loss between the predicted class and the ground truth label.

  • Reconstruction Loss: The mean squared error between the decoded image and the input image.

  • Latent Regularisation Loss: The binary cross-entropy loss between the output of the latent discriminator and a target label of 1 (where 1 corresponds to the discriminator predicting that a sample is from the chosen prior distribution).

  • Label Regularisation Loss: The binary cross-entropy loss between the output of the binary label discriminator and a target label of 1 (where 1 corresponds to the discriminator predicting that a sample is from a binary distribution).

The weighting coefficients of these losses were chosen through experimentation.

Furthermore, due to the heavy class imbalance in the ISIC dataset (90% of the data is benign), it was also necessary to slightly modify the cross-entropy loss function for the classification loss by adding a weight, w_1, for label 1 and a different weight, w_0, for label 0, which leads to the following expression:

L_class = −w_1 y log ŷ − w_0 (1 − y) log(1 − ŷ)

where y is the ground truth label and ŷ is the predicted label.
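A class-weighted binary cross-entropy of this kind can be sketched as follows; the particular weight values `w1` and `w0` here are illustrative, not those used in the paper:

```python
import math

def weighted_bce(y_true, y_prob, w1=9.0, w0=1.0):
    """Binary cross-entropy with separate weights for positive (malignant)
    and negative (benign) examples, averaged over the batch."""
    eps = 1e-7  # guard against log(0)
    total = 0.0
    for t, p in zip(y_true, y_prob):
        p = min(max(p, eps), 1.0 - eps)
        total += -w1 * t * math.log(p) - w0 * (1 - t) * math.log(1.0 - p)
    return total / len(y_true)

# Up-weighting the rare malignant class penalises a missed malignant lesion
# more heavily than a misclassified benign one:
miss_malignant = weighted_bce([1], [0.1])
miss_benign = weighted_bce([0], [0.9])
```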

3.2.4 Hyperparameter Choices

For both the baseline model and our adversarial autoencoder model, we used the same weights for the weighted classification loss. The CNN was trained using an RMSProp optimizer with momentum and a fixed learning rate. The encoder and decoder of the ssDAAE were trained with the same optimizer, with the same learning rate and momentum as the CNN. The loss-weighting coefficients were set to values found to work well experimentally. When training the discriminators, the same optimizer and learning rate were used, but with a different momentum.

4 Ablation Study

To appreciate the contributions of our proposed model, we performed ablation studies, training the different models listed in Table 1. Each autoencoding model – consisting of an encoder and a decoder – had the same architecture, and each CNN had the same architecture as the encoder. The CNN and CNN+noise models act as simple baselines that do not incorporate a generative model and are trained in a fully supervised way, not making use of any unlabelled data. The sAAE and sDAAE are fully supervised models that do incorporate a generative model, in the form of an adversarial autoencoder. Finally, the ssAAE and ssDAAE are trained in a semi-supervised fashion to use both labelled and unlabelled data. All models were trained with the same amount of labelled data, and the semi-supervised models were trained with the same amount of unlabelled data. To make the comparisons as fair as possible, we used the same hyperparameters (learning rate, number of training epochs, amount of labelled and unlabelled data, loss function weightings, level of corruption, size of encoding) for all models in the study.

Model | (a) Denoising | (b) Autoencoder | (c) Unlabelled
CNN | – | – | –
CNN + noise | ✓ | – | –
sAAE | – | ✓ | –
sDAAE | ✓ | ✓ | –
ssAAE | – | ✓ | ✓
ssDAAE | ✓ | ✓ | ✓
Table 1: Models used for the ablation study. The semi-supervised DAAE (ssDAAE) has three core components: (a) denoising, (b) an adversarial autoencoder and (c) semi-supervised training with additional unlabelled data. The sAAE and sDAAE are fully supervised models.

The results of our ablation study are shown in Figure 4. At all sensitivity values, the ssDAAE outperformed the simple baselines of the CNN and the CNN with added noise (CNN+noise). At all sensitivity values, the ssDAAE also outperformed the ssAAE, suggesting that the corruption process is useful – but perhaps more so in the semi-supervised model, where there are more examples, since the sAAE outperformed the sDAAE only at lower sensitivities.

Additionally, the CNN outperformed the CNN+noise model at all sensitivity values, further suggesting that many more training examples are needed for denoising to be effective. The fact that the CNN+noise performed less well than a CNN for all sensitivities, in contrast to the sAAE and sDAAE, which do perform well at the lower sensitivities, may be because the CNN+noise network is never exposed to the uncorrupted images, while the autoencoder models are exposed to uncorrupted images when the reconstruction loss is computed.

It is at the higher sensitivities that we most clearly see that all semi-supervised variants outperformed their supervised variants. The benefits of semi-supervised models over fully supervised suggested by the results, supports our motivation to design models that incorporate unlabelled data with labelled. Further, the additional benefit of incorporating a denoising criterion into semi-supervised models has, as anticipated, also improved performance. Finally, our results suggest that the model that most consistently performs well is our proposed model, the ssDAAE.

Figure 4: Ablation study on the ISIC image database. The results of this study allow us to compare the effect of different model variants. The ssDAAE yields the best specificity at high sensitivity levels.

4.1 Effect of different levels of corruption

We also explored the effect of using different levels of corruption during training of ssDAAE models. We compared models trained using several noise levels; the results are shown in Figure 5. Each model had the same architecture and was trained with the same hyperparameters (learning rate, number of training epochs, amount of labelled and unlabelled data, loss function weightings, size of encoding) to make the comparison as fair as possible. Our results suggest that a moderate corruption level is optimal for most sensitivity values. We see that, for all sensitivity values, an ssDAAE trained with a moderate noise level outperformed an ssAAE (equivalent to a model trained with a noise level of 0). For ssDAAE models trained with larger noise levels, performance dropped significantly for all sensitivity values, suggesting that too much noise may have an adverse effect on training.

Figure 5: Effect of the level of corruption when training an ssDAAE. We compare models trained with different corruption levels. For lesion classification, moderate levels of noise yield the best results.

5 Related Work

Previous research has been conducted on the ISIC dataset [1], as part of a challenge hosted by ISBI [6], however there are two core differences between our work and the approaches currently taken for this and other medical image datasets.

Firstly, while our work focuses on a single end-to-end classification approach, previous work on skin lesion classification has tended to adopt a three-stage approach [5, 13, 17], splitting the task into (1) lesion segmentation to extract the relevant parts of the images [19, 7], followed by (2) dermoscopic feature classification [11, 18] to detect clinical patterns, and finally (3) a disease classification task aiming to identify “melanoma”, “seborrheic keratosis” and “benign nevi”. The initial stages, (1) and (2), require extensive pixel-wise labelling of images – such as for image segmentation – to provide ground truth examples from which to learn these tasks. In contrast, our approach requires only a small amount of labelled data, where the label is simply “benign” or “malignant”, and makes use of unlabelled data too.

The best performances recorded during this challenge were obtained using fully supervised deep learning architectures (AlexNet [3]) and transfer learning (VGG-16 nets [15], ResNet [4]) with networks previously trained on ImageNet. While the success of the approaches introduced by Menegola et al. [15] and Lei et al. [4] highlights the benefits of using additional data to improve performance, the additional data they use is different from the skin lesion data. This is because these approaches [15, 4] are fully supervised and can therefore only make use of labelled data; the authors were thus limited to datasets for which labels were available.

One of our main contributions is in proposing a specific architecture and approach to skin lesion classification which can make use of both labelled and unlabelled medical image data. This method allows classification models to make use of unlabelled data, when the amount of labelled data is in limited supply. This allows us to use additional data that is more similar to the skin lesion data when training our models, unlike Menegola et al. [15] and Lei et al. [4].

6 Conclusion

Despite the clear success of deep learning techniques on specific image datasets, wide adoption of the many available approaches to training deep networks is highly dependent on the availability of sufficient quantities of {image, label} pairs.

The solution that we propose in this work is a form of semi-supervised learning, in the sense that if ground truth labels are available for only a subset of the data, all the data can still be used to train a deep classification model. Our results show that the additional information that may be learned from the unlabelled data is useful for boosting classification performance.


Our solution also includes a denoising procedure. While an adversarial autoencoder [14] is trained simply to recover its input, our model is trained to recover clean data samples from corrupted ones. This results in our model learning a more robust data representation, which in turn boosts classification performance.

The approach we suggest is not limited to the form of image data explored in this paper. Here, we have applied it to dermatological images of skin lesions. Our model is flexible and may potentially be applied to other datasets where there is a large amount of image data but only a limited amount of it is labelled. The semi-supervised approach that we have taken holds significant relevance for developing high-specificity classification systems for other medical images, because it is often easy to collect many examples of unlabelled images, while the availability of experts to provide ground truth labels is limited.

Acknowledgment

We would like to acknowledge the Engineering and Physical Sciences Research Council for funding through a Doctoral Training studentship, as well as Nick Pawlowski and Martin Rajchl for help with providing access to cluster computers.


7 Appendices

Table 2 shows details of the architecture of the CNN baseline.

Input Real Image
Layers conv2D [ filterSize : 5, nFilters : 64, stride=2, padding=2]

Relu
conv2D [ filterSize : 5, nFilters : 128, stride=2, padding=2]
Relu
conv2D [ filterSize : 5, nFilters : 256, stride=2, padding=2]
Relu
conv2D [ filterSize : 5, nFilters : 512, stride=2, padding=2]
Relu
Linear [Size : 1000]
Linear [Size : 1]
Sigmoid
Output Probability of label = 1
Table 2: CNN architecture

The tables below show details of the architecture of the ssDAAE. Table 3 shows the encoder and Table 4 shows the decoder. In addition, the label discriminator is shown in Table 5 and the latent discriminator is shown in Table 6.

Input Real Image
Layers conv2D [ filterSize : 5, nFilters : 64, stride=2, padding=2]
Relu
conv2D [ filterSize : 5, nFilters : 128, stride=2, padding=2]
Relu
conv2D [ filterSize : 5, nFilters : 256, stride=2, padding=2]
Relu
conv2D [ filterSize : 5, nFilters : 512, stride=2, padding=2]
Relu
Linear [Size : 1000]
Linear [Size : 1] (label branch) | Linear [Size : 200] (encoding branch)
Sigmoid (label branch)
Encoder Output Label | Encoded Vector
Table 3: Encoder
Input Label Encoded Vector
Layers Concat
Linear [Size : 512x(4x4)]
Relu
Conv2D [filterSize = 3, nFilters : 256, stride=2, padding=1, output_padding=1]
Relu
Conv2D [filterSize = 3, nFilters : 128, stride=2, padding=1, output_padding=1]
Relu
Conv2D [filterSize = 3, nFilters : 64, stride=2, padding=1, output_padding=1]
Relu
Conv2D [filterSize = 3, nFilters : 3, stride=2, padding=1, output_padding=1]
Sigmoid
Decoder Output Reconstruction Image
Table 4: Decoder. Conv2D represents transposed 2D convolutions.
Input Label
Layers Linear [Size : 1000]
Relu
Linear [Size : 1]
Sigmoid
Discriminator Output Label Discriminator Probability
Table 5: Regularisation of the classifier
Input Encoded Vector
Layers Linear [Size : 1000]
Relu
Linear [Size : 1]
Sigmoid
Discriminator Output Latent Space Discriminator Probability
Table 6: Regularisation of the encoder