Melanoma Detection using Adversarial Training and Deep Transfer Learning

04/14/2020 ∙ by Hasib Zunair, et al.

Skin lesion datasets consist predominantly of normal samples with only a small percentage of abnormal ones, giving rise to the class imbalance problem. Also, skin lesion images are largely similar in overall appearance owing to the low inter-class variability. In this paper, we propose a two-stage framework for automatic classification of skin lesion images using adversarial training and transfer learning toward melanoma detection. In the first stage, we leverage the inter-class variation of the data distribution for the task of conditional image synthesis by learning the inter-class mapping and synthesizing under-represented class samples from the over-represented ones using unpaired image-to-image translation. In the second stage, we train a deep convolutional neural network for skin lesion classification using the original training set combined with the newly synthesized under-represented class samples. The training of this classifier is carried out by minimizing the focal loss function, which assists the model in learning from hard examples, while down-weighting the easy ones. Experiments conducted on a dermatology image benchmark demonstrate the superiority of our proposed approach over several standard baseline methods, achieving significant performance improvements. Interestingly, we show through feature visualization and analysis that our method leads to context based lesion assessment that can reach an expert dermatologist level.




1 Introduction

Melanoma is one of the most aggressive forms of skin cancer [36, 42]. It is diagnosed in more than 132,000 people worldwide each year, according to the World Health Organization. Hence, it is essential to detect melanoma early before it spreads to other organs in the body and becomes more difficult to treat.

While visual inspection of suspicious skin lesions by a dermatologist is normally the first step in melanoma diagnosis, it is generally followed by dermoscopy imaging for further analysis. Dermoscopy is a noninvasive imaging procedure that acquires a magnified, high-resolution image of a region of the skin to clearly identify the spots on it [6], and helps examine deeper levels of the skin, providing more details of the lesions. Moreover, dermoscopy provides detailed visual context of regions of the skin and has proven to enhance the diagnostic accuracy of naked-eye examination, but it is costly, error prone, and achieves only average sensitivity in detecting melanoma [17]. This has triggered the need for developing more precise computer-aided diagnosis systems that would assist in the early detection of melanoma from dermoscopy images. Despite significant strides in skin lesion recognition, melanoma detection remains a challenging task for various reasons, including the high degree of visual similarity (i.e. low inter-class variation) between malignant and benign lesions, making it difficult to distinguish between melanoma and non-melanoma skin lesions during diagnosis. Also, the contrast variability and ambiguous boundaries between skin regions owing to image acquisition make automated detection of melanoma an intricate task. In addition to the high intra-class variation of melanoma's color, texture, shape, size and location in dermoscopic images [9], there are also artifacts such as hair, veins, ruler marks, illumination variation, and color calibration charts that usually cause occlusions and blurriness, further complicating the situation [27].

Classification of skin lesion images is a central topic in medical imaging, with a relatively extensive literature. Some of the early methods for classifying melanoma and non-melanoma skin lesions focused mostly on low-level computer vision approaches, which involve hand-engineering features based on expert knowledge such as color [9], shape [30] and texture [4, 44]. By leveraging feature selection, approaches that use mid-level computer vision techniques have also been shown to achieve improved detection performance [8]. In addition to ensemble classification based techniques [37], other methods include two-stage approaches, which usually involve segmentation of skin lesions, followed by a classification stage to further improve detection performance [44, 17, 8]. However, hand-crafted features often lead to unsatisfactory results on unseen data due to high intra-class variation and visual similarity, as well as the presence of artifacts in dermoscopy images. Moreover, such features are usually designed for specific tasks and do not generalize across different tasks.

Deep learning has recently emerged as a very powerful way to hierarchically learn abstract patterns from large amounts of training data. The tremendous success of deep neural networks in image classification, for instance, is largely attributed to open source software, inexpensive computing hardware, and the availability of large-scale datasets [23]. Deep learning has proved valuable for various medical image analysis tasks such as classification and segmentation [33, 34, 2, 28, 11, 19]. In particular, significant performance gains in melanoma recognition have been achieved by leveraging deep convolutional neural networks in a two-stage framework [49], which uses a fully convolutional residual network for skin lesion segmentation and a very deep residual network for skin lesion classification. However, the issues of low inter-class variation and class imbalance in skin lesion image datasets severely undermine the applicability of deep learning to melanoma detection [40, 49], as they often hinder the model's ability to generalize, leading to over-fitting [41]. In this paper, we employ conditional image synthesis without paired images to tackle the class imbalance problem by generating synthetic images for the minority class. Built on top of generative adversarial networks (GANs) [18], several image synthesis approaches, both conditional [31] and unconditional [16], have recently been adopted for numerous medical imaging tasks, including melanoma detection [47, 13, 51]. Also, approaches that enable the training of diverse models based on distribution matching with both paired and unpaired data were introduced in [53, 20, 26, 24]. These approaches include image translation for CT-PET [5], CS-MRI [46], MR-CT [45], XCAT-CT [35] and H&E staining in histopathology [39, 14]. In [7, 1], image synthesis models that generate images from noise were developed in an effort to improve melanoma detection. However, Cohen et al. [12] showed that the training schemes used in several domain adaptation methods often lead to a high bias and may result in hallucinated features (e.g. adding or removing tumors, leading to a semantic change). This is due in large part to the source or target domains consisting of over- or under-represented samples during training (e.g. a source domain composed of 50% malignant and 50% benign images, or a target domain composed of 20% malignant and 80% benign images).

In this paper, we introduce MelaNet, a deep neural network based framework for melanoma detection, to overcome the aforementioned issues. Our approach mitigates the bias problem [12], while improving detection performance and reducing over-fitting. The proposed MelaNet framework consists of two integrated stages. In the first stage, we generate synthetic dermoscopic images for the minority class (i.e. malignant images) using unpaired image-to-image translation in a bid to balance the training set. These additional images are then used to boost training. In the second stage, we train a deep convolutional neural network classifier by minimizing the focal loss function, which assists the classification model in learning from hard examples, while down-weighting the easy ones. The main contributions of this paper can be summarized as follows:

  • We propose an integrated deep learning based framework, which couples adversarial training and transfer learning to jointly address inter-class variation and class imbalance for the task of skin lesion classification.

  • We train a deep convolutional network by iteratively minimizing the focal loss function, which assists the model in learning from hard examples, while down-weighting the easy ones.

  • We show experimentally on a dermatology image analysis benchmark significant improvements over several baseline methods for the important task of melanoma detection.

  • We show how our method enables visual discovery of high activations for the regions surrounding the skin lesion, leading to context based lesion assessment that can reach an expert dermatologist level.

The rest of this paper is organized as follows. In Section 2, we introduce a two-stage approach for melanoma detection that uses conditional image synthesis from benign to malignant lesions to mitigate the effect of class imbalance, followed by training a deep convolutional neural network via iterative minimization of the focal loss function in order to learn from hard examples. We also discuss in detail the major components of our approach and summarize its main algorithmic steps. In Section 3, experiments performed on a dermatology image analysis benchmark are presented to demonstrate the effectiveness of the proposed approach in comparison with baseline methods. Finally, we conclude in Section 4 and point out future work directions.

2 Method

In this section, we describe the main components and algorithmic steps of the proposed approach to melanoma detection.

2.1 Conditional Image Synthesis

In order to tackle the challenging issue of low inter-class variation in skin lesion datasets [49, 41], we partition the classes into two domains for conditional image synthesis, with the goal of generating malignant lesions from benign ones. This data generation process for the malignant minority class is performed to mitigate the class imbalance problem, as it is relatively easy to learn a transformation given prior knowledge or conditioning for a narrowly defined task [14, 45]. Also, using unconditional image synthesis to generate data of a target distribution from noise often leads to artifacts and may result in training instabilities [52]. In recent years, various methods based on generative adversarial networks (GANs) have been used to tackle the conditional image synthesis problem, but most of them require paired training data for image-to-image translation [21], where each generated image is a controlled modification of a given source image. Due to the unavailability of datasets consisting of paired examples for melanoma detection, we use cycle-consistent adversarial networks (CycleGAN), a technique for the automatic training of image-to-image translation models without paired examples [53]. These models are trained in an unsupervised fashion using a collection of images from the source and target domains. CycleGAN is a framework for training image-to-image translation models by learning mapping functions between two domains using the GAN model architecture in conjunction with cycle consistency. The idea behind cycle consistency is to prevent the learned mappings between these two domains from contradicting each other.

Given two image domains X and Y denoting benign and malignant, respectively, the CycleGAN framework aims to learn to translate images of one type into the other using two generators G: X → Y and F: Y → X, and two discriminators D_Y and D_X, as illustrated in Figure 1. The generator G (resp. F) translates images from benign to malignant (resp. malignant to benign), while the discriminator D_Y (resp. D_X) scores how real an image of Y (resp. X) looks. In other words, these discriminator models are used to determine how plausible the generated images are and update the generator models accordingly. The objective function of CycleGAN is defined as


L(G, F, D_X, D_Y) = L_GAN(G, D_Y, X, Y) + L_GAN(F, D_X, Y, X) + λ L_cyc(G, F),

which consists of two adversarial loss functions and a cycle consistency loss function regularized by a hyper-parameter λ that controls the relative importance of these loss functions [53]. The first adversarial loss is given by


L_GAN(G, D_Y, X, Y) = E_{y~p_data(y)}[log D_Y(y)] + E_{x~p_data(x)}[log(1 − D_Y(G(x)))],

where the generator G tries to generate images G(x) that look similar to malignant images, while D_Y aims to distinguish between generated samples G(x) and real samples y. During the training, as G generates a malignant lesion, D_Y verifies if the translated image is actually a real malignant lesion or a generated one. The data distributions of benign and malignant are p_data(x) and p_data(y), respectively. Similarly, the second adversarial loss is given by


L_GAN(F, D_X, Y, X) = E_{x~p_data(x)}[log D_X(x)] + E_{y~p_data(y)}[log(1 − D_X(F(y)))],

where F takes a malignant image y from Y as input and tries to generate a realistic image F(y) in X that tricks the discriminator D_X. Hence, the goal of F is to generate a benign lesion that fools the discriminator D_X into labeling it as a real benign lesion.

The third loss function is the cycle consistency loss given by

L_cyc(G, F) = E_{x~p_data(x)}[||F(G(x)) − x||_1] + E_{y~p_data(y)}[||G(F(y)) − y||_1],

which quantifies the difference between the input image and the reconstructed one using the L1-norm. The idea of the cycle consistency loss is to enforce F(G(x)) ≈ x and G(F(y)) ≈ y. In other words, the objective of CycleGAN is to learn two bijective generator mappings by solving the following optimization problem

G*, F* = arg min_{G, F} max_{D_X, D_Y} L(G, F, D_X, D_Y).
We adopt the U-Net architecture [33] for the generators and PatchGAN [20] for the discriminators. The U-Net architecture consists of an encoder subnetwork and decoder subnetwork that are connected by a bridge section, while PatchGAN is basically a convolutional neural network classifier that determines whether an image patch is real or fake.
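The interplay of these loss terms can be illustrated with a small NumPy sketch (not the authors' implementation): the adversarial term scores discriminator outputs on real and translated images, and the cycle term is the L1 penalty defined above. The function names are illustrative.

```python
import numpy as np

def adversarial_loss(d_real, d_fake):
    # E_y[log D(y)] + E_x[log(1 - D(G(x)))], averaged over a batch;
    # d_real and d_fake are discriminator outputs in (0, 1).
    eps = 1e-8
    return float(np.mean(np.log(d_real + eps)) +
                 np.mean(np.log(1.0 - d_fake + eps)))

def cycle_consistency_loss(x, x_cycled, y, y_cycled):
    # E_x[||F(G(x)) - x||_1] + E_y[||G(F(y)) - y||_1]
    return float(np.mean(np.abs(x_cycled - x)) +
                 np.mean(np.abs(y_cycled - y)))
```

A perfect cycle (x_cycled equal to x and y_cycled equal to y) drives the cycle term to zero, which is exactly what the λ-weighted regularizer encourages during training.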

Figure 1: Illustration of the generative adversarial training process for unpaired image-to-image translation. Lesions are translated from benign to malignant and then back to benign to ensure cycle consistency in the forward pass. The same procedure is applied in the backward pass from malignant to benign.

2.2 Pre-trained Model Architecture

Due to limited training data, it is standard practice to leverage deep learning models that were pre-trained on large datasets [48]. The proposed melanoma classification model uses the pre-trained VGG-16 convolutional neural network without the fully connected (FC) layers, as illustrated in Figure 2. The VGG-16 network consists of 16 layers with learnable weights: 13 convolutional layers and 3 fully connected layers [43]. As shown in Figure 2, the proposed architecture, dubbed VGG-GAP, consists of five blocks of convolutional layers, followed by a global average pooling (GAP) layer. Each of the first and second convolutional blocks is comprised of two convolutional layers with 64 and 128 filters, respectively. Similarly, each of the third, fourth and fifth convolutional blocks consists of three convolutional layers with 256, 512, and 512 filters, respectively. The GAP layer, which is widely used in classification tasks, computes the average output of each feature map in the previous layer and helps minimize overfitting by reducing the total number of parameters in the model. GAP turns a feature map into a single number by taking the average of the values in that feature map. Similar to max pooling layers, GAP layers have no trainable parameters and are used to reduce the spatial dimensions of a three-dimensional tensor. The GAP layer is followed by a single FC layer with a softmax function (i.e. a dense softmax layer of two units for the binary classification case) that yields the probabilities of the predicted classes.
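The GAP operation described above can be sketched in a few lines of NumPy (an illustration, not the paper's code): each feature map collapses to its mean, so an (H, W, C) tensor becomes a length-C vector with no trainable parameters.

```python
import numpy as np

def global_average_pooling(feature_maps):
    # feature_maps: array of shape (height, width, channels);
    # averaging over the two spatial axes yields one value per channel.
    return feature_maps.mean(axis=(0, 1))
```

For instance, if the last convolutional block outputs a tensor with 512 channels, GAP yields a 512-dimensional vector, which then feeds the final two-unit dense softmax layer.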

Figure 2: VGG-GAP architecture with a GAP layer, followed by an FC layer that in turn is fed into a softmax layer of two units.

Since we are addressing a binary classification problem with imbalanced data, we learn the weights of the VGG-GAP network by minimizing the focal loss function [25] defined as


where and are given by

with denoting the ground truth for negative and positive classes, and denoting the model’s predicted probability for the class with label . The weight parameter balances the importance of positive and negative labeled samples, while the nonnegative tunable focusing parameter smoothly adjusts the rate at which easy examples are down-weighted. Note that when , the focal loss function reduces to the cross-entropy loss. A positive value of the focusing parameter decreases the relative loss for well-classified examples, focusing more on hard, misclassified examples.

Intuitively, the focal loss function emphasizes hard-to-classify examples: it down-weights the loss for well-classified examples so that their contribution to the total loss is small even if their number is large.
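A minimal NumPy sketch of the focal loss follows. The default alpha = 0.25 and gamma = 2.0 are the common choices from the focal loss literature [25], not values confirmed by this paper.

```python
import numpy as np

def focal_loss(y_true, y_pred, alpha=0.25, gamma=2.0):
    # y_true in {0, 1}; y_pred is the predicted probability of class 1.
    # alpha/gamma defaults are assumed, not taken from the paper.
    eps = 1e-8
    p_t = np.where(y_true == 1, y_pred, 1.0 - y_pred)          # prob. of true class
    alpha_t = np.where(y_true == 1, alpha, 1.0 - alpha)        # class weighting
    return float(np.mean(-alpha_t * (1.0 - p_t) ** gamma * np.log(p_t + eps)))
```

With gamma = 0 the modulating factor disappears and the expression reduces to (alpha-weighted) cross-entropy; a well-classified example with p_t near 1 contributes almost nothing to the total loss.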

2.3 Data Preprocessing and Augmentation

In order to achieve faster convergence, feature standardization is usually performed, i.e. we rescale the images to have values between 0 and 1. Given a data matrix X, the standardized feature vector is given by

x̂_i = x_i / 255,

where x_i is the i-th input data point, denoting a row vector of X with 8-bit pixel intensities in [0, 255]. It is important to note that in our approach, no domain specific or application specific pre-processing or post-processing is employed.
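As a minimal sketch, assuming 8-bit inputs so that dividing by 255 maps pixel intensities into [0, 1]:

```python
import numpy as np

def rescale_images(images):
    # Map 8-bit pixel values from [0, 255] to the unit interval [0, 1].
    return images.astype(np.float64) / 255.0
```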

On the other hand, data augmentation is usually carried out on medical datasets to improve performance in classification tasks [16, 3]. This is often done by creating modified versions of the input images in a dataset through random transformations, including horizontal and vertical flip, Gaussian noise, brightness and zoom augmentation, horizontal and vertical shift, sampling noise once per pixel, color space conversion, and rotation.

We do not perform on-the-fly (random) data augmentation during training, as it may add an unnecessary layer of complexity to training and evaluation. Instead, we first augment the data offline and then train the classifier on the augmented data. Also, we do not apply data augmentation within the proposed two-stage approach, as doing so would obscure which component, data augmentation or image synthesis, contributes more to the performance. Hence, we keep these two configurations independent of each other.
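A toy sketch of the offline augmentation step, using only flips and 90-degree rotations (a small subset of the transforms listed above); the function name and the copies parameter are illustrative, not from the paper.

```python
import numpy as np

def augment_offline(images, copies=5, seed=0):
    # Produce `copies` randomly transformed versions of each image,
    # precomputed once (offline) rather than on the fly during training.
    rng = np.random.default_rng(seed)
    augmented = []
    for img in images:
        for _ in range(copies):
            aug = img
            if rng.random() < 0.5:
                aug = np.flip(aug, axis=1)   # horizontal flip
            if rng.random() < 0.5:
                aug = np.flip(aug, axis=0)   # vertical flip
            aug = np.rot90(aug, k=int(rng.integers(0, 4)))
            augmented.append(aug)
    return augmented
```

Applied once with copies=5 to 900 square images and concatenated with the originals, a scheme like this would yield a 5400-sample training set of the kind used in the Augment-5x configuration.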

2.4 Algorithm

The main algorithmic steps of our approach are summarized in Algorithm 1. The input is a training set consisting of skin lesion dermoscopic images, along with their associated class labels. In the first stage, the different classes are grouped together (e.g. for binary classification, we have two groups), and each image is resized to a fixed square size. Then, we balance the inter-class data samples by performing undersampling. We train CycleGAN to learn the inter-class mapping between the two groups, i.e. a transformation between melanoma and non-melanoma lesions. We apply CycleGAN to the over-represented class samples in order to synthesize samples of the target (i.e. under-represented) class. After this transformation is applied, we acquire a balanced dataset composed of the original training data and the generated data. In the second stage, we employ the VGG-GAP classifier with the focal loss function. Finally, we evaluate the trained model on the test set to generate the predicted class labels.

0:  Training set {(x_i, y_i)}_{i=1}^N of dermoscopic images, where y_i is the class label of the input image x_i.
0:  Vector containing predicted class labels.
1:  for i = 1 to N do
2:     Group each lesion image according to its class label.
3:     Resize each image to a fixed square size.
4:  end for
5:  Balance the inter-class data samples.
6:  Train CycleGAN on unpaired and balanced interclass data.
7:  for i = 1 to N do
8:     if the class label is benign then
9:        Translate the image to malignant using the generator network.
10:     else
11:        pass
12:     end if
13:  end for
14:  Merge synthetic under-represented class outputs and original training set.
15:  Shuffle.
16:  Train VGG-GAP on the balanced training set
17:  Evaluate the model on the test set and generate predicted class labels.
Algorithm 1 MelaNet classifier

3 Experiments

In this section, extensive experiments are conducted to evaluate the performance of the proposed two-stage approach on a standard benchmark dataset for skin lesion analysis.

Dataset.  The effectiveness of MelaNet is evaluated on the ISIC-2016 dataset, a publicly accessible dermatology image analysis benchmark for skin lesion analysis towards melanoma detection [10], which leverages annotated skin lesion images from the International Skin Imaging Collaboration (ISIC) archive. The dataset contains a representative mix of images of both malignant and benign skin lesions, randomly partitioned into a training set of 900 images and a test set of 379 images. These images exhibit different types of textures in both background and foreground, and also have poor contrast, making melanoma detection a challenging problem. It is also noteworthy that the training set contains 727 benign cases and only 173 malignant cases, an inter-class ratio of roughly 1:4. Sample benign and malignant images from the ISIC-2016 dataset are depicted in Figure 3, which shows that both categories have a high visual similarity, making the task of melanoma detection quite arduous. Note that there is a high intra-class variation among the malignant samples, including variations in color, texture and shape. On the other hand, benign samples are not visually very different, and hence exhibit low intra-class variation. Furthermore, there are artifacts present in the images, such as ruler markers and fine hair, which cause occlusions. Notice that most malignant images show more diffuse boundaries, likely because the patient was already diagnosed with melanoma before image acquisition and the medical personnel acquired the dermoscopic images at a deeper level in order to better differentiate between the benign and malignant classes.

Figure 3: Sample malignant and benign images from the ISIC-2016 dataset. Notice a high intra-class variation among the malignant samples (left), while benign samples (right) are not visually very different.

The histogram of the training data is displayed in Figure 4, showing the class imbalance problem: the number of images belonging to the minority class ("malignant") is far smaller than the number belonging to the majority class ("benign"). Also, the numbers of benign and malignant cases in the test set are 304 and 75, respectively, again an inter-class ratio of roughly 1:4.

Figure 4: Histogram of the ISIC-2016 training set, showing the class imbalance between malignant and benign cases.

Since the images in the ISIC-2016 dataset are of varying sizes, we first pad them to make them square, thereby retaining the original aspect ratio, and then resize them to a fixed resolution.

Training Procedure. Since we are tackling a binary classification problem with imbalanced data, we use the focal loss function for training the VGG-GAP model. The focal loss is designed to address the class imbalance problem by down-weighting easy examples and focusing training on the hard ones. Fine-tuning is performed by re-training the whole VGG-GAP network through iterative minimization of the focal loss function.

Baseline methods. We compare the proposed MelaNet approach against VGG-GAP, VGG-GAP + Augment-5x, and VGG-GAP + Augment-10x. The VGG-GAP network is trained on the original training set, which consists of 900 samples. The VGG-GAP + Augment-5x model uses the same VGG-GAP architecture, but is trained on an augmented dataset of 5400 samples, obtained by adding five augmented copies of each of the 900 original images. Similarly, the VGG-GAP + Augment-10x network is trained on an augmented set of 9900 training samples (the originals plus ten augmented copies of each). We also ran experiments with augmentation factors higher than 10x, but did not observe improved performance, as the network tends to learn redundant representations.

Implementation details.  All experiments are carried out on a Linux server with two Intel Xeon E5-2650 v4 (Broadwell) CPUs @ 2.2 GHz, 256 GB RAM, and four NVIDIA P100 Pascal GPU cards (12 GB HBM2 memory each). The algorithms are implemented in Keras with a TensorFlow backend.

We train CycleGAN for 500 epochs using the Adam optimizer [22] with a learning rate of 0.0002 and a batch size of 1. We set the regularization parameter λ to 10. The VGG-GAP classifier, on the other hand, is trained using the Adadelta optimizer [50] with a learning rate of 0.001 and a mini-batch size of 16. A factor of 0.1 is used to reduce the learning rate once the loss stagnates. For the VGG-GAP model, we set the focal loss parameters α and γ such that a weight of α is applied to positive labeled samples and 1 − α to negative labeled samples. Training of VGG-GAP is continued on all network layers until the focal loss stops improving, and then the best weights are retained. For a fair comparison, we used the same set of hyper-parameters for VGG-GAP and the baseline methods. We choose Adadelta as an optimizer due to its robustness to noisy gradient information and minimal computational overhead.
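The "reduce the learning rate once the loss stagnates" rule can be sketched in pure Python; the helper below mirrors the behavior of Keras' ReduceLROnPlateau callback and is illustrative, not the authors' code. The patience value is an assumption.

```python
def schedule_learning_rate(loss_history, initial_lr, factor=0.1, patience=3):
    # Multiply the learning rate by `factor` whenever the loss fails to
    # improve for `patience` consecutive epochs.
    lr, best, wait = initial_lr, float("inf"), 0
    for loss in loss_history:
        if loss < best:
            best, wait = loss, 0
        else:
            wait += 1
            if wait >= patience:
                lr *= factor
                wait = 0
    return lr
```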

3.1 Results

The effectiveness of the proposed classifier is assessed by conducting a comprehensive comparison with the baseline methods using several performance evaluation metrics [19, 49, 32], including the receiver operating characteristic (ROC) curve, sensitivity, and the area under the ROC curve (AUC). Sensitivity is defined as the percentage of positive instances correctly classified, i.e.

Sensitivity = TP / (TP + FN),

where TP, FP, TN and FN denote true positives, false positives, true negatives and false negatives, respectively. TP is the number of correctly predicted malignant lesions, while TN is the number of correctly predicted benign lesions. A classifier that reduces FN (ruling cancer out in cases that do have it) and FP (wrongly diagnosing cancer where there is none) indicates a better performance. Sensitivity, also known as recall or true positive rate (TPR), indicates how well a classifier identifies positive cases; a low sensitivity means many positive cases are missed. It is one of the most common measures to evaluate a classifier in medical image classification tasks [15]. We use a classification threshold of 0.5.
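For concreteness, the confusion counts and the sensitivity defined above can be computed as follows (an illustrative helper, not part of the original evaluation code):

```python
def confusion_counts(y_true, y_pred):
    # y_true, y_pred: iterables of 0/1 labels (1 = malignant).
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, fp, tn, fn

def sensitivity(tp, fn):
    # TP / (TP + FN): fraction of malignant lesions correctly detected.
    return tp / (tp + fn)
```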

Another common metric is the AUC, which summarizes the information contained in the ROC curve. The ROC curve plots the TPR versus the false positive rate, FPR = FP / (FP + TN), at various thresholds. Larger AUC values indicate better performance at distinguishing between melanoma and non-melanoma images. It is worth pointing out that the accuracy metric is not used in this study, as it provides little interpretable information for imbalanced data and may give a false sense of superior performance from merely classifying the majority class correctly.
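The AUC admits a simple rank-based sketch: it equals the probability that a randomly chosen positive example receives a higher score than a randomly chosen negative one, with ties counted as half. The helper below is illustrative (production code would use a library routine such as scikit-learn's roc_auc_score):

```python
def auc_from_scores(labels, scores):
    # labels: 0/1 ground truth; scores: predicted probabilities.
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```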

The performance comparison results of MelaNet and the baseline methods using AUC, FN and sensitivity are depicted in Figure 5. We observe that our approach outperforms the baselines, achieving an AUC of 81.18% and a sensitivity of 91.76%, with performance improvements of 2.1% and 7.3% over the VGG-GAP baseline, respectively. Interestingly, MelaNet yields the lowest number of false negatives, reduced by more than 50% compared to the baseline methods, meaning that it picked up malignant cases the baselines had missed. In other words, MelaNet caught instances of melanoma that would otherwise have gone undetected. This is a significant result for early melanoma detection, even though MelaNet was trained on only 1627 samples: 900 images from the original dataset plus 727 malignant images synthesized from benign ones via generative adversarial training.

Figure 5: Classification performance of MelaNet and the baseline methods using AUC, FN and Sensitivity as evaluation metrics on the ISIC-2016 test set.

Figure 6 displays the ROC curves, which show the better performance of our proposed MelaNet approach compared to the baseline methods. Each point on an ROC curve represents a different trade-off between false positives and false negatives. An ROC curve that is closer to the upper-left corner indicates better performance (high TPR at low FPR). Even though the ROC curve of MelaNet fluctuates at certain points during the early and late stages, the overall performance is much higher than the baselines, as indicated by the AUC value. This better performance demonstrates that the conditional image synthesis procedure plays a crucial role and enables our model to learn effective representations, while mitigating data scarcity and class imbalance.

Figure 6: ROC curves for MelaNet and baseline methods, along with the corresponding AUC values.

We also compare MelaNet to two other standard baseline methods [19, 49]. The top evaluation results on the ISIC-2016 dataset to classify images as either being benign or malignant are reported in [19]. The method presented in [49] is also a two-stage approach consisting of a fully convolutional residual network for skin lesion segmentation, followed by a very deep residual network for skin lesion classification. The classification results are displayed in Table 1, which shows that the proposed approach achieves significantly better results than the baseline methods.

Method                                  AUC (%)   Sensitivity (%)   FN
Gutman et al. [19]                       80.40        50.70          –
Yu et al. [49] (without segmentation)    78.20        42.70          –
Yu et al. [49] (with segmentation)       78.30        54.70          –
VGG-GAP                                  79.08        84.46          55
VGG-GAP + Augment-5x (ours)              78.81        85.34          51
VGG-GAP + Augment-10x (ours)             79.56        86.09          47
MelaNet (ours)                           81.18        91.76          22
Table 1: Classification evaluation results of MelaNet and baseline methods. Boldface numbers indicate the best performance.

Feature visualization and analysis. Understanding and interpreting the predictions made by a deep learning model provides valuable insights into the input data and the features learned by the model, so that the results can be easily understood by human experts. In order to visually explain the decisions made by the proposed classifier and the baseline methods, we use gradient-weighted class activation mapping (Grad-CAM) [38] to generate saliency maps that highlight the most influential features affecting the predictions. Since convolutional feature maps retain spatial information and each pixel of a feature map indicates whether the corresponding visual pattern exists in its receptive field, the output from the last convolutional layer of the VGG-16 network shows the discriminative regions of the image.

The class activation maps displayed in Figure 7 show that even though the baseline methods exhibit high activations over the region containing the lesion, they still fail to correctly classify the dermoscopic image. For our proposed MelaNet approach, we observe that the area surrounding the skin lesion is highly activated. In the misclassified cases, most of the borders of the whole input image are highlighted, largely because the classifiers are not attending to the regions of interest, which results in misclassification.

We can also see in Figure 8 that while the proposed approach shows visual patterns similar to the baselines when correctly classifying the input image, it outputs high activations for the regions surrounding the skin lesion in many cases. These regions consist of shapes and edges. Hence, our approach not only focuses on the skin lesion itself, but also captures its context, which helps in the final detection. This context-based assessment is commonly used by expert dermatologists [15]. This observation is of great significance, and further shows the effectiveness of our approach.

Figure 7: Grad-CAM heat maps for the misclassified malignant cases by MelaNet and baseline methods.
Figure 8: Grad-CAM heat maps for the correctly classified malignant cases by MelaNet and baseline methods.

In order to get a clear understanding of the data distribution, the learned features from both the original training set and the balanced dataset (i.e. with the additional synthesized data obtained via adversarial training) are visualized using Uniform Manifold Approximation and Projection (UMAP) [29], a dimensionality reduction technique that is particularly well-suited for embedding high-dimensional data into a two- or three-dimensional space. The UMAP embeddings shown in Figure 9 were generated by running the UMAP algorithm on the original training set with 900 samples (benign and malignant) and on the balanced dataset consisting of 1627 samples (benign and malignant).

From Figure 9 (left), it is evident that the inter-class variation is significantly small, due in large part to the very high visual similarity between malignant and benign skin lesions. Hence, the task of learning a decision boundary between the two categories is challenging. We can also see in Figure 9 (right) that the synthesized samples (malignant lesions shown in green) lie very close to the original data distribution. It is important to note that the outliers present in the dataset are not due to the image synthesis procedure, but are rather a characteristic of the original training set. Therefore, the synthetically generated data are representative of the original under-represented class (i.e., malignant skin lesions).

Figure 9: Two-dimensional UMAP embeddings of the original ISIC-2016 training set (left), consisting of 900 samples (benign shown in blue and malignant in orange), and with the additional synthesized malignant samples (shown in green), consisting of a total of 1627 samples (right).

Discussion.  With a training set consisting of only 1627 images, our proposed MelaNet approach is able to achieve improved performance. This better performance is largely due to the fact that, by leveraging the inter-class variation in medical images, the mapping between the source and target distributions for conditional image synthesis can be easily learned. Moreover, it is much easier to generate target images given prior information than to generate them from noise, which often results in training instability and artifacts [32]. It is important to note that even though image-to-image translation schemes are known to hallucinate images by adding or removing image features [12], we showed that in our scheme the inter-class partition does not introduce bias or unwanted feature hallucination. Figure 10 shows benign lesions sampled from the ISIC-2016 training set that are translated to malignant samples using MelaNet. As can be seen, the benign images and the corresponding synthesized malignant images have a high degree of visual similarity, largely due to the nature of the dataset, which is known to have low inter-class variation.
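The unpaired translation behind this synthesis is trained with, among other terms, a cycle-consistency loss that couples the two mappings between the benign and malignant domains. A minimal numpy sketch of that term is shown below; the "generators" here are toy invertible scalar functions chosen so the loss vanishes, not the paper's networks:

```python
import numpy as np

def cycle_consistency_loss(x, y, G, F, lam=10.0):
    """L1 cycle loss encouraging F(G(x)) ~ x and G(F(y)) ~ y,
    as used in unpaired (CycleGAN-style) image-to-image translation."""
    forward = np.abs(F(G(x)) - x).mean()   # x -> G(x) -> F(G(x)) back to x
    backward = np.abs(G(F(y)) - y).mean()  # y -> F(y) -> G(F(y)) back to y
    return lam * (forward + backward)

# Toy linear "generators" that are exact inverses, so the loss is zero.
G = lambda a: 2.0 * a + 1.0      # hypothetical benign -> malignant mapping
F = lambda a: (a - 1.0) / 2.0    # hypothetical malignant -> benign mapping
x = np.linspace(0.0, 1.0, 5)     # samples from the "benign" domain
y = np.linspace(1.0, 3.0, 5)     # samples from the "malignant" domain
print(cycle_consistency_loss(x, y, G, F))  # 0.0
```

In training, this term is added to the adversarial losses of the two discriminators; it is what keeps a synthesized malignant image anchored to the benign image it was translated from.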

In the synthetic minority over-sampling technique (SMOTE), when drawing random observations from the k-nearest neighbors of a minority sample, it is possible that a "border point", i.e., an observation very close to the decision boundary, is selected. The synthetically generated observations then lie too close to the decision boundary, which may degrade the performance of the classifier. The advantage of our approach over SMOTE is that we learn a transformation between a source and a target domain by solving an optimization problem that determines two bijective mappings. This enables the generator to synthesize observations that help improve the classification performance while learning the transformation/decision boundary.
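For comparison, the standard SMOTE step discussed above amounts to linear interpolation in feature space. A minimal numpy sketch (not the imbalanced-learn implementation) makes the border-point issue concrete: the synthetic sample always lies on the segment between two minority points, so if either endpoint sits near the decision boundary, so does the sample:

```python
import numpy as np

def smote_sample(X_minority, k=3, rng=None):
    """Draw one synthetic minority sample: pick a random minority point,
    pick one of its k nearest minority neighbors, and interpolate between
    them at a random fraction (the standard SMOTE step)."""
    rng = rng or np.random.default_rng()
    i = rng.integers(len(X_minority))
    x = X_minority[i]
    # Distances from x to every other minority point.
    d = np.linalg.norm(X_minority - x, axis=1)
    d[i] = np.inf                       # exclude the point itself
    neighbors = np.argsort(d)[:k]       # indices of the k nearest neighbors
    j = rng.choice(neighbors)
    t = rng.random()                    # interpolation fraction in [0, 1)
    return x + t * (X_minority[j] - x)

rng = np.random.default_rng(0)
X_min = rng.random((20, 2))             # toy minority-class feature vectors
synthetic = smote_sample(X_min, k=3, rng=rng)
print(synthetic.shape)  # (2,)
```

By contrast, the adversarially trained generator is free to place synthesized samples anywhere the learned target distribution supports, not only on segments between existing minority points.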

Figure 10: Sample benign images from the ISIC-2016 dataset that are translated to malignant images using the proposed approach. Notice that the synthesized images display a reasonably good visual quality.

In order to gain deeper insight into the performance of the proposed approach, we sample all the original benign lesions and a subset of the synthesized malignant lesions, consisting of 727 and 10 samples, respectively. For the benign group of images, the proposed MelaNet model yields a sensitivity score of 89%, with 77 misclassified images. By contrast, a 100% sensitivity score is obtained when performing predictions on the synthesized malignant group of images. In addition, the F-score values for MelaNet on the benign and synthesized malignant groups are 94% and 21%, respectively.
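These two metrics follow directly from the confusion-matrix counts. The sketch below recomputes them for the benign group, under the assumption (consistent with the numbers above) that 650 of the 727 benign images are classified correctly and that the 100% sensitivity on the malignant group implies no malignant image is predicted benign:

```python
def sensitivity_and_fscore(tp, fn, fp):
    """Sensitivity (recall) and F-score from confusion-matrix counts."""
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f_score = 2 * precision * recall / (precision + recall)
    return recall, f_score

# Benign taken as the positive class: 727 - 77 = 650 correct benign
# predictions, 77 missed, and (by assumption) 0 malignant images
# predicted as benign.
recall, f = sensitivity_and_fscore(tp=650, fn=77, fp=0)
print(round(recall, 2), round(f, 2))  # 0.89 0.94
```

The recovered 89% sensitivity and 94% F-score match the benign-group figures quoted above.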

4 Conclusion

In this paper, we proposed a two-stage framework for melanoma detection. The first stage addresses the problem of data scarcity and class imbalance by formulating inter-class variation as conditional image synthesis for over-sampling in order to synthesize under-represented class samples (e.g. melanoma from non-melanoma lesions). The newly synthesized samples are then used as additional data to train a deep convolutional neural network by minimizing the focal loss function, which assists the classifier in learning from hard examples. We demonstrate through extensive experiments that the proposed MelaNet approach improves sensitivity by a margin of 13.10% and the AUC by 0.78% from only 1627 dermoscopy images compared to the baseline methods on the ISIC-2016 dataset. For future work directions, we plan to address the multi-class classification problem, which requires an independent generative model for each domain, leading to prohibitive computational overhead for adversarial training. We also intend to apply our method to other medical imaging modalities.


  • [1] I. S. Ali, M. F. Mohamed, and Y. B. Mahdy (2019) Data augmentation for skin lesion using self-attention based progressive generative adversarial network. arXiv preprint arXiv:1910.11960. Cited by: §1.
  • [2] M. Anthimopoulos, S. Christodoulidis, L. Ebner, A. Christe, and S. Mougiakakou (2016) Lung pattern classification for interstitial lung diseases using a deep convolutional neural network. IEEE Transactions on Medical Imaging 35 (5), pp. 1207–1216. Cited by: §1.
  • [3] T. Araújo, G. Aresta, E. Castro, J. Rouco, P. Aguiar, C. Eloy, A. Polónia, and A. Campilho (2017) Classification of breast cancer histology images using convolutional neural networks. PloS one 12 (6), pp. e0177544. Cited by: §2.3.
  • [4] L. Ballerini, R. B. Fisher, B. Aldridge, and J. Rees (2013) A color and texture based hierarchical K-NN approach to the classification of non-melanoma skin lesions. In Color Medical Image Analysis, pp. 63–86. Cited by: §1.
  • [5] A. Ben-Cohen, E. Klang, S. P. Raskin, M. M. Amitai, and H. Greenspan (2017) Virtual PET images from CT data using deep convolutional networks: initial results. In Proc. International Workshop on Simulation and Synthesis in Medical Imaging, pp. 49–57. Cited by: §1.
  • [6] M. Binder, M. Schwarz, A. Winkler, A. Steiner, A. Kaider, K. Wolff, and H. Pehamberger (1995) Epiluminescence microscopy: a useful tool for the diagnosis of pigmented skin lesions for formally trained dermatologists. Archives of Dermatology 131 (3), pp. 286–291. Cited by: §1.
  • [7] A. Bissoto, F. Perez, E. Valle, and S. Avila (2018) Skin lesion synthesis with generative adversarial networks. In Proc. International Workshop on Computer-Assisted and Robotic Endoscopy, pp. 294–302. Cited by: §1.
  • [8] M. E. Celebi, H. A. Kingravi, B. Uddin, H. Iyatomi, Y. A. Aslandogan, W. V. Stoecker, and R. H. Moss (2007) A methodological approach to the classification of dermoscopy images. Computerized Medical Imaging and Graphics 31 (6), pp. 362–373. Cited by: §1.
  • [9] Y. Cheng, R. Swamisai, S. E. Umbaugh, R. H. Moss, W. V. Stoecker, S. Teegala, and S. K. Srinivasan (2008) Skin lesion classification using relative color features. Skin Research and Technology 14 (1), pp. 53–64. Cited by: §1, §1.
  • [10] N. C. Codella, D. Gutman, M. E. Celebi, B. Helba, M. A. Marchetti, S. W. Dusza, A. Kalloo, K. Liopyris, N. Mishra, H. Kittler, et al. (2018) Skin lesion analysis toward melanoma detection: a challenge at the 2017 international symposium on biomedical imaging (ISBI), hosted by the international skin imaging collaboration (ISIC). In Proc. IEEE International Symposium on Biomedical Imaging, pp. 168–172. Cited by: §3.
  • [11] N. C. Codella, Q. Nguyen, S. Pankanti, D. A. Gutman, B. Helba, A. C. Halpern, and J. R. Smith (2017) Deep learning ensembles for melanoma recognition in dermoscopy images. IBM Journal of Research and Development 61 (4/5), pp. 5–1. Cited by: §1.
  • [12] J. P. Cohen, M. Luck, and S. Honari (2018) Distribution matching losses can hallucinate features in medical image translation. In Proc. International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 529–536. Cited by: §1, §1, §3.1.
  • [13] P. Costa, A. Galdran, M. I. Meyer, M. Niemeijer, M. Abràmoff, A. M. Mendonça, and A. Campilho (2017) End-to-end adversarial retinal image synthesis. IEEE Transactions on Medical Imaging 37 (3), pp. 781–791. Cited by: §1.
  • [14] T. de Bel, M. Hermsen, J. Kers, J. van der Laak, and G. Litjens (2018) Stain-transforming cycle-consistent generative adversarial networks for improved segmentation of renal histopathology. In Proc. International Conference on Medical Imaging with Deep Learning, Vol. 102, pp. 151–163. Cited by: §1, §2.1.
  • [15] A. Esteva, B. Kuprel, R. A. Novoa, J. Ko, S. M. Swetter, H. M. Blau, and S. Thrun (2017) Dermatologist-level classification of skin cancer with deep neural networks. Nature 542 (7639), pp. 115. Cited by: §3.1, §3.1.
  • [16] M. Frid-Adar, E. Klang, M. Amitai, J. Goldberger, and H. Greenspan (2018) Synthetic data augmentation using GAN for improved liver lesion classification. In Proc. IEEE International Symposium on Biomedical Imaging, pp. 289–293. Cited by: §1, §2.3.
  • [17] H. Ganster, P. Pinz, R. Rohrer, E. Wildling, M. Binder, and H. Kittler (2001) Automated melanoma recognition. IEEE Transactions on Medical Imaging 20 (3), pp. 233–239. Cited by: §1, §1.
  • [18] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio (2014) Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672–2680. Cited by: §1.
  • [19] D. Gutman, N. C. Codella, E. Celebi, B. Helba, M. Marchetti, N. Mishra, and A. Halpern (2016) Skin lesion analysis toward melanoma detection: a challenge at the international symposium on biomedical imaging (ISBI) 2016, hosted by the international skin imaging collaboration (ISIC). arXiv preprint arXiv:1605.01397. Cited by: §1, §3.1, §3.1, Table 1.
  • [20] P. Isola, J. Zhu, T. Zhou, and A. A. Efros (2017) Image-to-image translation with conditional adversarial networks. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, pp. 1125–1134. Cited by: §1, §2.1.
  • [21] S. Kazeminia, C. Baur, A. Kuijper, B. van Ginneken, N. Navab, S. Albarqouni, and A. Mukhopadhyay (2018) GANs for medical image analysis. arXiv preprint arXiv:1809.06222. Cited by: §2.1.
  • [22] D. P. Kingma and J. Ba (2015) Adam: a method for stochastic optimization. In International Conference on Learning Representations, Cited by: §3.
  • [23] A. Krizhevsky, I. Sutskever, and G. E. Hinton (2012) ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097–1105. Cited by: §1.
  • [24] A. M. Lamb, D. Hjelm, Y. Ganin, J. P. Cohen, A. C. Courville, and Y. Bengio (2017) GibbsNet: iterative adversarial inference for deep graphical models. In Advances in Neural Information Processing Systems, pp. 5089–5098. Cited by: §1.
  • [25] T. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár (2017) Focal loss for dense object detection. In Proc. IEEE International Conference on Computer Vision, pp. 2980–2988. Cited by: §2.2.
  • [26] M. Liu, T. Breuel, and J. Kautz (2017) Unsupervised image-to-image translation networks. In Advances in Neural Information Processing Systems, pp. 700–708. Cited by: §1.
  • [27] Z. Liu and J. Zerubia (2015) Skin image illumination modeling and chromophore identification for melanoma diagnosis. Physics in Medicine & Biology 60, pp. 3415–3431. Cited by: §1.
  • [28] K. Matsunaga, A. Hamada, A. Minagawa, and H. Koga (2017) Image classification of melanoma, nevus and seborrheic keratosis by deep neural network ensemble. arXiv preprint arXiv:1703.03108. Cited by: §1.
  • [29] L. McInnes, J. Healy, and J. Melville (2018) UMAP: uniform manifold approximation and projection for dimension reduction. The Journal of Open Source Software. Cited by: §3.1.
  • [30] N. K. Mishra and M. E. Celebi (2016) An overview of melanoma detection in dermoscopy images using image processing and machine learning. arXiv preprint arXiv:1601.07843. Cited by: §1.
  • [31] D. Nie, R. Trullo, J. Lian, C. Petitjean, S. Ruan, Q. Wang, and D. Shen (2017) Medical image synthesis with context-aware generative adversarial networks. In Proc. International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 417–425. Cited by: §1.
  • [32] F. Perez, C. Vasconcelos, S. Avila, and E. Valle (2018) Data augmentation for skin lesion analysis. In Proc. International Workshop on Computer-Assisted and Robotic Endoscopy, pp. 303–311. Cited by: §3.1, §3.1.
  • [33] O. Ronneberger, P. Fischer, and T. Brox (2015) U-Net: convolutional networks for biomedical image segmentation. In Proc. International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 234–241. Cited by: §1, §2.1.
  • [34] H. R. Roth, L. Lu, A. Seff, K. M. Cherry, J. Hoffman, S. Wang, J. Liu, E. Turkbey, and R. M. Summers (2014) A new 2.5D representation for lymph node detection using random sets of deep convolutional neural network observations. In Proc. International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 520–527. Cited by: §1.
  • [35] T. Russ, S. Goerttler, A. Schnurr, D. F. Bauer, S. Hatamikia, L. R. Schad, F. G. Zöllner, and K. Chung (2019) Synthesis of CT images from digital body phantoms using CycleGAN. International Journal of Computer Assisted Radiology and Surgery 14 (10), pp. 1741–1750. Cited by: §1.
  • [36] E. Saito and M. Hori (2018) Melanoma skin cancer incidence rates in the world from the cancer incidence in five continents XI. Japanese Journal of Clinical Oncology 48 (12), pp. 1113–1114. Cited by: §1.
  • [37] G. Schaefer, B. Krawczyk, M. E. Celebi, and H. Iyatomi (2014) An ensemble classification approach for melanoma diagnosis. Memetic Computing 6 (4), pp. 233–240. Cited by: §1.
  • [38] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra (2017) Grad-CAM: visual explanations from deep networks via gradient-based localization. In Proc. IEEE International Conference on Computer Vision, pp. 618–626. Cited by: §3.1.
  • [39] M. T. Shaban, C. Baur, N. Navab, and S. Albarqouni (2019) StainGAN: stain style transfer for digital histological images. In Proc. IEEE International Symposium on Biomedical Imaging, pp. 953–956. Cited by: §1.
  • [40] C. Shie, C. Chuang, C. Chou, M. Wu, and E. Y. Chang (2015) Transfer representation learning for medical image analysis. In Proc. International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 711–714. Cited by: §1.
  • [41] H. Shin, H. R. Roth, M. Gao, L. Lu, Z. Xu, I. Nogues, J. Yao, D. Mollura, and R. M. Summers (2016) Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning. IEEE Transactions on Medical Imaging 35 (5), pp. 1285–1298. Cited by: §1, §2.1.
  • [42] R. L. Siegel, K. D. Miller, and A. Jemal (2019) Cancer statistics, 2019. CA: A Cancer Journal for Clinicians 69, pp. 7–34. Cited by: §1.
  • [43] K. Simonyan and A. Zisserman (2015) Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations, Cited by: §2.2.
  • [44] T. Tommasi, E. La Torre, and B. Caputo (2006) Melanoma recognition using representative and discriminative kernel classifiers. In Proc. International Workshop on Computer Vision Approaches to Medical Image Analysis, pp. 1–12. Cited by: §1.
  • [45] J. M. Wolterink, A. M. Dinkla, M. H. Savenije, P. R. Seevinck, C. A. van den Berg, and I. Išgum (2017) Deep MR to CT synthesis using unpaired data. In Proc. International Workshop on Simulation and Synthesis in Medical Imaging, pp. 14–23. Cited by: §1, §2.1.
  • [46] G. Yang, S. Yu, H. Dong, G. Slabaugh, P. L. Dragotti, X. Ye, F. Liu, S. Arridge, J. Keegan, Y. Guo, et al. (2017) DAGAN: deep de-aliasing generative adversarial networks for fast compressed sensing MRI reconstruction. IEEE Transactions on Medical Imaging 37 (6), pp. 1310–1321. Cited by: §1.
  • [47] X. Yi, E. Walia, and P. Babyn (2019) Generative adversarial network in medical imaging: a review. Medical Image Analysis. Cited by: §1.
  • [48] J. Yosinski, J. Clune, Y. Bengio, and H. Lipson (2014) How transferable are features in deep neural networks?. In Advances in Neural Information Processing Systems, pp. 3320–3328. Cited by: §2.2.
  • [49] L. Yu, H. Chen, Q. Dou, J. Qin, and P. Heng (2016) Automated melanoma recognition in dermoscopy images via very deep residual networks. IEEE Transactions on Medical Imaging 36 (4), pp. 994–1004. Cited by: §1, §2.1, §3.1, §3.1, Table 1.
  • [50] M. D. Zeiler (2012) ADADELTA: an adaptive learning rate method. arXiv preprint arXiv:1212.5701. Cited by: §3.
  • [51] T. Zhang, H. Fu, Y. Zhao, J. Cheng, M. Guo, Z. Gu, B. Yang, Y. Xiao, S. Gao, and J. Liu (2019) SkrGAN: sketching-rendering unconditional generative adversarial networks for medical image synthesis. In Proc. International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 777–785. Cited by: §1.
  • [52] Z. Zhao, Q. Sun, H. Yang, H. Qiao, Z. Wang, and D. O. Wu (2019) Compression artifacts reduction by improved generative adversarial networks. EURASIP Journal on Image and Video Processing 2019 (1), pp. 62. Cited by: §2.1.
  • [53] J. Zhu, T. Park, P. Isola, and A. A. Efros (2017) Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proc. IEEE International Conference on Computer Vision, pp. 2223–2232. Cited by: §1, §2.1, §2.1.