Illumination-based Transformations Improve Skin Lesion Segmentation in Dermoscopic Images

by Kumar Abhishek, et al.
Simon Fraser University

The semantic segmentation of skin lesions is an important and common initial task in the computer aided diagnosis of dermoscopic images. Although deep learning-based approaches have considerably improved the segmentation accuracy, there is still room for improvement by addressing the major challenges, such as variations in lesion shape, size, color, and varying levels of contrast. In this work, we propose the first deep semantic segmentation framework for dermoscopic images which incorporates, along with the original RGB images, information extracted using the physics of skin illumination and imaging. In particular, we incorporate information from specific color bands, illumination invariant grayscale images, and shading-attenuated images. We evaluate our method on three datasets: the ISBI ISIC 2017 Skin Lesion Segmentation Challenge dataset, the DermoFit Image Library, and the PH2 dataset, and observe an improvement of 12.02% in the mean Jaccard index over a model trained only with RGB images.





1 Introduction

Skin conditions are the most common reason for visits to general practitioners in studied populations [25], and the prevalence of skin cancer in the United States has been higher than all other cancers combined over the last three decades [23]. Melanoma, a type of skin cancer which represents only a small fraction of all skin cancers in the USA, is responsible for over 75% of skin cancer related fatalities and over 10,000 deaths annually in the USA alone [6]. However, studies have shown that early diagnosis can drastically improve patient survival rates. While skin cancers can be diagnosed by visual examination, it is often difficult to distinguish malignant lesions from healthy skin. As a result, computer aided diagnoses of skin lesions have been widely used to automate the assessment of dermoscopic images. The segmentation of skin lesion images is therefore a crucial step in the diagnosis and subsequent treatment. Segmentation refers to the process of delineating the lesion boundary by assigning pixel-wise labels to the dermoscopic images, so as to separate the lesion from the surrounding healthy skin. However, this is a complicated task, primarily because of the large variety in the shape, color, presentation, and contrast of skin lesions, originating from intra- and inter-class variations as well as image acquisition.

Recent years have witnessed the successful application of machine learning, particularly deep learning-based approaches, to the semantic segmentation of skin lesions. Numerous contributions have been made in terms of new architectures (such as fully convolutional network models [32], deep residual networks [31], deep auto-context architectures [20], etc.), shape [19] and texture [34] priors, input transformations [28], synthesis-based augmentations [21, 1], and loss functions [2].
Although deep learning-based approaches have made significant improvements to segmentation performance, they rely on a large amount of training data in order to yield acceptable results. They also tend to ignore knowledge about illumination in skin lesion images and other such physics-based properties, an area that has been explored in the past. Madooei et al. [16] proposed a new 2D log-chromaticity color space, showed that color intensity triplets in skin images lie on a plane, and used Otsu's algorithm to segment skin lesions, demonstrating superior performance even on low-contrast lesions. In another work [15], they presented pre-processing techniques for improved segmentation of skin lesions: they calculated an illumination invariant grayscale 'intrinsic' image and used it to attenuate shading and lighting intensity changes in dermoscopic images, and they also presented a novel RGB-to-grayscale conversion algorithm for dermoscopic images using principal component analysis in the optical density space. More recently, Guarracino et al. [10] proposed an unsupervised approach for skin lesion segmentation from dermoscopic images by choosing certain color bands, and Ng et al. [12] demonstrated an improvement in segmentation performance with the use of color constancy algorithms in a fully convolutional network-based segmentation model. However, very little research has been done on the applicability of color image theory and illumination information within a deep learning-based semantic segmentation framework.

We propose a novel deep semantic segmentation algorithm for dermoscopic images, which leverages prior illumination and color theory knowledge. In particular, we build upon previous works and leverage specific color bands, intrinsic images, and skin image information to yield improved segmentation results. To the best of our knowledge, this is the first work that incorporates such information in a deep learning-based framework.

The rest of the paper is structured as follows: Section 2 describes the proposed approach, Section 3 describes the datasets, the experiments, and the evaluation metrics, Section 4 contains quantitative and qualitative analyses of the proposed approach, and Section 5 concludes the paper.

2 Method

In this work, we extract color information from the RGB dermoscopic images and use it along with the original image to train a deep semantic segmentation model. In particular, we use (a) variations of certain color bands (Section 2.1), (b) a color-theory-based grayscale estimate (Section 2.3), (c) an illumination-invariant intrinsic grayscale image (Section 2.2), and (d) a shading-attenuated image obtained from the dermoscopic image (Section 2.4). Figure 1 shows an overview of the proposed approach. In the subsequent sections, we describe the methods for obtaining these images.

Figure 1: An overview of the proposed approach. Various color bands and transformations are computed as explained in Section 2 and concatenated channel-wise to the original RGB dermoscopic image in order to train the segmentation model.

2.1 Choosing color bands

We choose the two color bands which have been shown to be effective for skin lesion segmentation [10]: the red channel from the normalized RGB image and the complement of the Value channel from the HSV color space representation of the image (denoted by $R_{norm}$ and $\bar{V}$ respectively), and concatenate them to the original RGB dermoscopic image. They are defined as:

$$R_{norm} = \frac{R}{R + G + B}, \qquad \bar{V} = (2^n - 1) - V,$$

where $R$, $G$, $B$ denote the channels of the original image and $V$ denotes the Value channel from the HSV representation of the image. For computational efficiency, instead of converting the image from the RGB to the HSV color space, the $\bar{V}$ channel can be calculated directly as:

$$\bar{V} = (2^n - 1) - \max(R, G, B),$$

where $2^n$ denotes the number of gray-levels in an $n$-bit image ($n = 8$ for our 8-bit color images).
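As a minimal sketch (assuming 8-bit RGB inputs as NumPy arrays; the function name is ours, not from the paper), these two bands can be computed directly from the RGB array without an explicit HSV conversion:

```python
import numpy as np

def color_bands(rgb):
    """Compute the normalized red channel and the complement of the
    HSV Value channel directly from an 8-bit RGB image (H x W x 3)."""
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    r_norm = r / (r + g + b + 1e-6)    # red channel of the normalized RGB image
    v_bar = 255.0 - rgb.max(axis=-1)   # complement of V, since V = max(R, G, B)
    return r_norm, v_bar
```

Both outputs are single-channel maps that can be concatenated to the RGB image channel-wise.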

2.2 Intrinsic images

We follow the approach proposed by Finlayson et al. [9] to derive an illumination-invariant grayscale 'intrinsic' image using entropy minimization. Given an RGB image, let $\rho_k$, $k \in \{R, G, B\}$, denote the channel-wise intensities. The 3-vector chromaticities $c_k$ can be obtained by dividing each color channel by the geometric mean of the channels:

$$c_k = \frac{\rho_k}{\rho_M},$$

where $\rho_M = \left( \prod_k \rho_k \right)^{1/3}$. Finlayson et al. note that while it is possible to obtain 2-vector chromaticities by dividing by one of the color channels, the choice of dividing by the geometric mean ensures that there is no bias towards any particular channel.

From [9], assuming the light to be a Planckian radiator (using Wien's approximation [30]) and the camera sensors to be fairly narrow-band, the $k$-th channel intensity can be written as:

$$\rho_k = \sigma\, I\, c_1\, \lambda_k^{-5}\, e^{-\frac{c_2}{\lambda_k T}}\, S(\lambda_k)\, q_k,$$

where $\sigma$ is the Lambertian shading, $I$ is the overall light intensity, $T$ is the temperature of the lighting color, $S(\lambda)$ is the spectral reflectance of the surface (which is the skin in our case) as a function of the wavelength $\lambda$, $q_k$ are the camera sensor sensitivity functions, and $c_1$ and $c_2$ are constants. Therefore, the log-chromaticities (obtained by taking the logarithm of the expression above) can be written as:

$$\psi_k \equiv \log c_k = \log \frac{s_k}{s_M} + \frac{1}{T}\,(e_k - e_M),$$

where $s_k = c_1 \lambda_k^{-5} S(\lambda_k) q_k$ and $e_k = -c_2 / \lambda_k$ (with $s_M$ and $e_M$ the corresponding geometric-mean terms). Note that this expression contains neither the shading nor the intensity information. It is the parametric equation of a straight line with $1/T$ as the parameter, and although the surface information is present in the intercept of the line, the direction is given by $(e_k - e_M)$, which is independent of the surface.

With 2D log-chromaticities, it would be possible to obtain the intrinsic image by projecting in a direction orthogonal to the lighting direction $(e_k - e_M)$, followed by taking the exponential. However, dividing by the geometric mean yields 3D log-chromaticities, and therefore the task is to find a projector which maps them onto the 2D chromaticity space, which is a plane. The log-chromaticities $\psi$ are orthogonal to $\mathbf{u} = \frac{1}{\sqrt{3}}(1, 1, 1)^T$, and so the projector $P_{\mathbf{u}}^{\perp} = \mathbb{I} - \mathbf{u}\mathbf{u}^T$ can be used to characterize the plane. Since $P_{\mathbf{u}}^{\perp}$ has two non-zero eigenvalues, it can be decomposed as:

$$P_{\mathbf{u}}^{\perp} = U^T U,$$

where $\mathbb{I}$ is the $3 \times 3$ identity matrix and $U$ is a $2 \times 3$ orthogonal matrix, which projects the 3-vectors $\psi$ onto a coordinate system in the plane as 2-vectors $\chi$. It should be noted that straight lines in $\psi$ remain straight in $\chi$.


The next step is to find the optimal angle $\theta$ to project along in the $\chi$ plane, for which the entropy of the marginal distribution along a 1D line orthogonal to the lighting direction is minimized. The resulting projected log grayscale image is given by:

$$\mathcal{I}_\theta = \chi_1 \cos\theta + \chi_2 \sin\theta.$$

To compute the best projection angle, only the middle 90% of the data is used, which excludes outliers by retaining data between the 5th and the 95th percentiles. Then, Scott's rule [26] is used to estimate the bin width for constructing the histogram:

$$\text{bin width} = 3.5\, \mathrm{STD}(\mathcal{I}_\theta)\, N^{-1/3},$$

where $\mathrm{STD}(\cdot)$ denotes the standard deviation and $N$ is the size of the grayscale image data for a given angle $\theta$. Next, for each angle, the probability $p_i(\theta)$ for each bin $i$ is computed by dividing the bin count by the sum of the bin counts, and the entropy is calculated as:

$$\eta(\theta) = -\sum_i p_i(\theta) \log p_i(\theta).$$

The angle which yields the lowest entropy is chosen as the projection angle, and finally the projected log-image is exponentiated to yield the intrinsic image. The entire approach is shown in Algorithm 1.

Input: RGB image $\mathcal{I}$
Output: Grayscale intrinsic image $\mathcal{I}'$
construct the 2D log-chromaticity representation $\chi$ of $\mathcal{I}$;
for $\theta \leftarrow 1$ to $180$ do
       project $\chi$ along the angle $\theta$;
       calculate the histogram bin width;
       compute the histogram with the middle 90% of the data;
       compute $\eta(\theta)$, the entropy for the angle $\theta$;
end for
$\theta^* \leftarrow \arg\min_\theta \eta(\theta)$;
return $\mathcal{I}' \leftarrow \exp(\mathcal{I}_{\theta^*})$;
Algorithm 1 Intrinsic image by entropy minimization
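A compact NumPy sketch of Algorithm 1, under stated assumptions not fixed by the text (a particular orthonormal basis $U$ for the chromaticity plane, 180 candidate angles at 1° steps, and a small epsilon to avoid $\log 0$):

```python
import numpy as np

def intrinsic_image(rgb, n_angles=180):
    """Illumination-invariant grayscale image via entropy minimization."""
    eps = 1e-6
    rho = rgb.reshape(-1, 3).astype(np.float64) + eps
    # 3D log-chromaticities: divide by the geometric mean of the channels
    geo_mean = np.cbrt(rho.prod(axis=1, keepdims=True))
    psi = np.log(rho / geo_mean)
    # project onto the plane orthogonal to u = (1, 1, 1)/sqrt(3)
    U = np.array([[1 / np.sqrt(2), -1 / np.sqrt(2), 0],
                  [1 / np.sqrt(6), 1 / np.sqrt(6), -2 / np.sqrt(6)]])
    chi = psi @ U.T                                     # N x 2
    best_angle, best_entropy = 0.0, np.inf
    for theta in np.deg2rad(np.arange(n_angles)):
        gray = chi[:, 0] * np.cos(theta) + chi[:, 1] * np.sin(theta)
        lo, hi = np.percentile(gray, [5, 95])           # middle 90% of the data
        mid = gray[(gray >= lo) & (gray <= hi)]
        width = 3.5 * mid.std() * mid.size ** (-1 / 3)  # Scott's rule
        nbins = max(1, int(np.ceil((mid.max() - mid.min()) / max(width, 1e-12))))
        counts, _ = np.histogram(mid, bins=nbins)
        p = counts[counts > 0] / counts.sum()
        entropy = -(p * np.log(p)).sum()
        if entropy < best_entropy:
            best_entropy, best_angle = entropy, theta
    gray = chi[:, 0] * np.cos(best_angle) + chi[:, 1] * np.sin(best_angle)
    return np.exp(gray).reshape(rgb.shape[:2])
```

The exponentiated projection at the minimum-entropy angle is the intrinsic grayscale image used as an additional input channel.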

2.3 Grayscale images of skin lesions

Madooei et al. [15] proposed an RGB-to-grayscale conversion algorithm for dermoscopic images based on the optics and the reflectance properties of skin surfaces. Unlike a traditional grayscale representation calculated as a weighted sum of the red, the green, and the blue channels [22], this grayscale image preserves the lesion while suppressing the healthy skin, thereby increasing the contrast between the healthy and the affected regions. Based on the skin models proposed by Hiraoka et al. [11] and Tsumura et al. [29], the spectral reflection of skin at a pixel $(x, y)$ under polarized light can be written as:

$$r(x, y, \lambda) = \exp\!\left( - \rho_m(x, y)\, \sigma_m(\lambda)\, l_e(\lambda) - \rho_h(x, y)\, \sigma_h(\lambda)\, l_d(\lambda) \right),$$

where $\rho_m$ and $\rho_h$ denote the densities, $\sigma_m$ and $\sigma_h$ the cross-sectional areas for scattering absorption, and $l_e$ and $l_d$ the mean path lengths for photons in the epidermis and dermis layers of the human skin, with the subscripts $m$ and $h$ denoting melanin and hemoglobin respectively. Substituting this expression into the image formation model of Section 2.2 and taking logarithms on both sides shows that the pixels of a skin image lie on a plane in the optical density space. As such, Madooei et al. observe that in almost all the skin lesion images analyzed, the first eigenvector in the principal component analysis (PCA) explains a very high fraction of the total variance, and thus contains most of the information in the image. Therefore, the first principal component can be used to obtain a grayscale skin lesion image. The approach is described in Algorithm 2.


Input: RGB image $\mathcal{I}$
Output: Grayscale image $\mathcal{I}_{GRAY}$
$\mathcal{I}_{OD} \leftarrow -\log(\mathcal{I})$: image in the optical density space;
$\mathbf{v} \leftarrow$ first principal component of PCA($\mathcal{I}_{OD}$);
return $\mathcal{I}_{GRAY} \leftarrow$ reshape($\mathcal{I}_{OD} \cdot \mathbf{v}$);
Algorithm 2 Grayscale skin lesion image
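A short sketch of Algorithm 2 under stated assumptions (PCA is computed via an SVD of the mean-centered optical-density data, and the projection is rescaled to [0, 1] for use as an image channel; these details are not specified in the text):

```python
import numpy as np

def grayscale_od_pca(rgb):
    """Grayscale lesion image from the first principal component of the
    pixels in optical density (negative log) space."""
    od = -np.log(rgb.reshape(-1, 3).astype(np.float64) / 255.0 + 1e-6)
    od_centered = od - od.mean(axis=0)
    # first principal component = leading right-singular vector
    _, _, vt = np.linalg.svd(od_centered, full_matrices=False)
    proj = od_centered @ vt[0]
    # rescale to [0, 1] for use as an image channel
    proj = (proj - proj.min()) / (proj.max() - proj.min() + 1e-12)
    return proj.reshape(rgb.shape[:2])
```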

2.4 Shading-attenuated skin lesion images

The non-flat nature of skin surfaces, especially lesions, and the effect of light intensity falloff towards the edges of skin lesions can induce shading in dermoscopic images, which can degrade the segmentation (and classification) performance. Madooei et al. [15] proposed to use the intrinsic images generated by the method of Finlayson et al. [9] to perform illumination normalization, and thereby shading-attenuation, in dermoscopic images. Given a dermoscopic image, its intrinsic image is first calculated, and the RGB image is converted to the HSV color space. The Value (V) channel of the HSV image is used to normalize the intensities: both the intrinsic image and the Value channel are first normalized, and the intensity histogram of the Value channel is then mapped to that of the intrinsic image. Finally, this normalized and histogram-mapped Value channel replaces the original Value channel in the HSV image, and the resultant image is mapped back to the RGB color space. The authors demonstrated a significant attenuation of the shading and the intensity falloff using this approach. The entire approach is summarized in Algorithm 3.

Input: RGB image $\mathcal{I}$
Output: Shading-attenuated RGB image $\mathcal{I}_{SA}$
compute the intrinsic image $\mathcal{I}'$ from $\mathcal{I}$ using Algorithm 1;
$V \leftarrow$ Value channel of the HSV representation of $\mathcal{I}$;
normalize $\mathcal{I}'$ and $V$;
$V' \leftarrow V$ histogram-matched to $\mathcal{I}'$;
replace $V$ with $V'$ and convert back to RGB to obtain $\mathcal{I}_{SA}$;
return $\mathcal{I}_{SA}$;
Algorithm 3 Shading-attenuated skin lesion image
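The histogram-mapping step of Algorithm 3 can be sketched with a rank-based quantile mapping (a standard histogram-matching construction; the function name is illustrative, not from the paper):

```python
import numpy as np

def match_histogram(source, reference):
    """Remap `source` intensities so their histogram matches `reference`:
    the k-th smallest source pixel receives the value at the matching
    quantile of the sorted reference pixels."""
    s_flat = source.ravel()
    order = np.argsort(s_flat, kind="stable")
    ref_sorted = np.sort(reference.ravel())
    # sample the reference's sorted values at evenly spaced ranks
    idx = np.linspace(0, ref_sorted.size - 1, s_flat.size).astype(int)
    matched = np.empty_like(s_flat)
    matched[order] = ref_sorted[idx]
    return matched.reshape(source.shape)
```

In the context of Algorithm 3, `source` would be the normalized Value channel and `reference` the intrinsic image.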
Figure 2: Transformation results for 5 images from the ISIC 2017 training set.

3 Datasets and Experimental Details

3.1 Datasets

We evaluate our proposed approach on three skin lesion image datasets, namely the ISIC ISBI 2017 dataset, the DermoFit dataset, and the PH2 dataset.

3.1.1 ISBI ISIC 2017

The ISIC ISBI 2017: Skin Lesion Analysis Towards Melanoma Detection Challenge [7] Part 1 dataset contains skin lesion images and the corresponding manually annotated lesion delineations belonging to three diagnoses: melanoma, seborrheic keratosis, and benign nevi. The dataset is split into training, validation, and testing subsets containing 2000, 150, and 600 images respectively.

3.1.2 DermoFit

The DermoFit Image Library [3, 8] contains 1300 skin lesion images belonging to ten diagnosis classes, along with the corresponding binary segmentation masks. We divide the original dataset into training, validation, and testing splits in the ratio of 60 : 10 : 30 (780, 130, and 390 images respectively).

3.1.3 PH2

The PH2 Image Database [17] contains a total of 200 dermoscopic images of common nevi, atypical nevi, and melanomas, along with their lesion segmentations annotated by an expert dermatologist.

Table 1: Quantitative results (mean ± standard error) for the seven methods on 600 images from the ISIC 2017 test set, reporting accuracy, Dice coefficient, Jaccard index, sensitivity, and specificity for 'RGB Only', 'All Channels', and the five channel-ablation models (e.g., 'No Intrinsic').
Figure 3: Kernel density estimates for the five metrics for all the segmentation methods evaluated on the ISIC 2017 test set.

3.2 Experiments and Evaluation

Since the goal of this work is to demonstrate the effectiveness of the various color theory and illumination-based transformations for enhancing segmentation performance, we use U-Net [24] as the baseline segmentation network. The U-Net consists of a symmetric encoder-decoder architecture with skip connections between symmetrically corresponding layers in the encoder and the decoder, which help recover the full spatial resolution [14] and address the problem of vanishing gradients [27]. For evaluation on the ISIC 2017 dataset, we train seven segmentation models, where the inputs to the corresponding networks are the following:

  • RGB Only: The original 3-channel RGB dermoscopic image.

  • All Channels: The original RGB dermoscopic image channel-wise concatenated with the two color bands $R_{norm}$ and $\bar{V}$ (Section 2.1), the intrinsic image (denoted by Intrinsic; Section 2.2), the grayscale image (denoted by GRAY; Section 2.3), and the 3-channel shading-attenuated image (denoted by SA; Section 2.4). The result is a 10-channel image.

  • Next, to determine the contribution of each of the transformations described in Section 2, we drop one component at a time from the 10-channel image above, and denote the resulting model by No x, where x is the dropped channel. The models are:

    • No $R_{norm}$: 9-channel image.

    • No $\bar{V}$: 9-channel image.

    • No GRAY: 9-channel image.

    • No Intrinsic: 9-channel image.

    • No SA: 7-channel image.
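The channel compositions above amount to a simple stacking step, sketched here (the function and argument names, and the dict-based API, are ours for illustration):

```python
import numpy as np

def build_input(rgb, r_norm, v_bar, intrinsic, gray, sa, drop=None):
    """Channel-wise concatenation of the RGB image with the transformed
    channels; `drop` names one transformation to ablate (e.g. "SA")."""
    channels = {
        "Rnorm": r_norm[..., None],      # H x W x 1 each
        "Vbar": v_bar[..., None],
        "Intrinsic": intrinsic[..., None],
        "GRAY": gray[..., None],
        "SA": sa,                        # H x W x 3 (an RGB image)
    }
    if drop is not None:
        channels.pop(drop)
    return np.concatenate([rgb] + list(channels.values()), axis=-1)
```

With `drop=None` this yields the 10-channel 'All Channels' input; `drop="SA"` yields the 7-channel 'No SA' input, and dropping any single-channel transformation yields a 9-channel input.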

For each of these aforementioned models, the input layer of the segmentation network is modified to accept the corresponding number of channels, and the rest of the architecture remains the same. The models are trained to predict pixel-wise labels for the semantic segmentation task. All images and their corresponding ground truth segmentation masks are resized to a fixed spatial resolution using nearest-neighbor interpolation from Python's SciPy library. All networks are trained with the Dice loss [18] using mini-batch stochastic gradient descent with a small fixed batch size (a larger batch size exceeded our GPU memory) and a fixed learning rate. We apply real-time data augmentation strategies (random horizontal and vertical flips and rotations) during training. All the code was written in Python, and the PyTorch framework was used to implement the deep segmentation models.

For the evaluation of these models, we report the metrics used by the official challenge: pixel-wise accuracy, sensitivity, specificity, Dice similarity coefficient, and Jaccard index (also known as the intersection over union). They are given by:

$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}, \qquad \text{Sensitivity} = \frac{TP}{TP + FN}, \qquad \text{Specificity} = \frac{TN}{TN + FP},$$

$$\text{Dice}(A, B) = \frac{2\,|A \cap B|}{|A| + |B|}, \qquad \text{Jaccard}(A, B) = \frac{|A \cap B|}{|A \cup B|},$$

where TP, TN, FP, and FN denote true positives, true negatives, false positives, and false negatives respectively, and $A$ and $B$ denote two binary masks. As with the challenge, all metrics are reported with the predicted masks binarized at a confidence threshold of 128 (on the [0, 255] scale).
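These metrics all reduce to the four confusion counts of the binarized masks, computed here as a plain NumPy sketch:

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """The five challenge metrics from the confusion counts of two
    binary masks (pred and gt are arrays of the same shape)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt)
    tn = np.sum(~pred & ~gt)
    fp = np.sum(pred & ~gt)
    fn = np.sum(~pred & gt)
    eps = 1e-12  # guard against empty masks
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn + eps),
        "sensitivity": tp / (tp + fn + eps),
        "specificity": tn / (tn + fp + eps),
        "dice": 2 * tp / (2 * tp + fp + fn + eps),
        "jaccard": tp / (tp + fp + fn + eps),
    }
```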

Figure 4: Qualitative results for the segmentation performance of all the methods on 6 images from the ISIC 2017 test set. Incorporating the two color bands and illumination-based transformations improves the segmentation consistently, and the performance drop is the most significant when Intrinsic is not used.

4 Results and Discussion

Figure 2 shows the normalized red channel ($R_{norm}$), the complement of the Value channel ($\bar{V}$), the intrinsic image (Intrinsic) using the approach by Finlayson et al. [9], the grayscale converted image (GRAY), and the shading-attenuated image (SA) using the approach by Madooei et al. [15] for 5 dermoscopic images (with their ISIC image IDs) and their corresponding ground truth segmentation masks from the ISIC 2017 training set. We notice that the presence of artifacts such as markers (second row) and rulers (third and fourth rows) leads to poor results, particularly for the shading-attenuated images. While the shading-attenuation results are acceptable for some images, a large number of images yield poor results, such as the last row in Figure 2.

Table 1 shows the quantitative results for the 600 test images evaluated using the seven trained segmentation networks. We observe that 'All Channels' outperforms 'RGB Only' in all metrics except specificity, where the difference is quite small. Using all the transformations yields improvements of 12.02% and 7.76% over the baseline ('RGB Only') in the mean Jaccard index and the mean Dice similarity coefficient respectively. We also note that we are within 1% of the Jaccard index of the top 3 entries on the challenge leaderboard [13], without using any additional external data [5], post-processing, or an ensemble of models [33, 4], and without optimizing the network architecture or any other hyperparameters.

Table 2: Quantitative results (mean ± standard error) for the two methods ('RGB Only' and 'No SA') on the 390 test images from the DermoFit dataset and the 200 images from the PH2 dataset, reporting accuracy, Dice coefficient, Jaccard index, sensitivity, and specificity.

To further capture the improvement in segmentation performance, we plot the kernel density estimates of the metrics for all the methods (Figure 3). We use the Epanechnikov kernel to estimate the probability density functions, and the plots are clipped to the range of values of the respective metrics. The plots show higher peaks (indicating higher densities) at larger values of all the metrics for the proposed method(s).

Figure 4 shows six samples from the test dataset and their corresponding ground truth segmentation masks, along with the prediction outputs from the seven models. The samples have been chosen to cover almost all possible variations in the images, such as the size of the lesion, the contrast between the lesion and the surrounding skin, and the presence of artifacts (ruler, gel bubble, etc.). We note that apart from the improved segmentation performance, incorporating the proposed transformations into the input to the model also considerably improves the false positive and the false negative labels.

Next, we analyze the contribution of each of the color theory and illumination-based transformations towards improving the segmentation performance. From Table 1, we can see that dropping the normalized red channel ($R_{norm}$), the complement of the Value channel ($\bar{V}$), or the shading-attenuated image (SA) has the least impact on the Dice coefficient. Of these, the first two can possibly be explained by the fact that they are relatively simple transformations compared to the other three, and are therefore easier for the network to learn. As for the SA component, as noted previously and shown in the SA column of Figure 2, a large number of images yield very poor results. Since we use JPEG-compressed images, most of the high frequencies (in the Fourier domain representation) are discarded during JPEG compression, which leads the entropy minimization step to produce sub-optimal projection angles. We confirm this by plotting the projection angles calculated for the 2000 and the 780 images in the ISIC 2017 and the DermoFit training sets respectively (Figure 5). We observe that the projection angles are spread across the entire range, in contrast to Finlayson et al. [9], where the minimum-entropy angles for their HP912 camera fall within a narrow range. As such, we do not expect the SA images to provide a considerable improvement when used in a segmentation model, which is consistent with the quantitative results on the ISIC 2017 test set.

On the other hand, we observe that the intrinsic image (Intrinsic) and the grayscale converted image (GRAY) are crucial to the segmentation performance improvement. These transformations rely on the log-chromaticity and the optical density space representations respectively, and are therefore not easily learned by a deep semantic segmentation model. The dip in performance is the largest when the Intrinsic image is dropped, indicating that it is the most important illumination-based transformation for improving the segmentation. Figure 4 shows that 'No Intrinsic' also results in more false positives and false negatives (most clearly visible in the second and the third rows).

Finally, for the DermoFit dataset, we train two models: ‘RGB Only’ and ‘No SA’. As discussed, SA images do not contribute much to improving the segmentation performance (as shown for the ISIC 2017 dataset, Table 1), while also being computationally intensive (Algorithm 3). As such, we use ‘RGB Only’ as the baseline to evaluate the performance of ‘No SA’. As for the PH2 dataset, given the small number of images, we use the entire PH2 dataset as a test set for the two models trained on the DermoFit dataset to evaluate the generalizability of the trained models.

Table 2 shows the quantitative results for evaluating these two trained models on the DermoFit test set and the entire PH2 dataset. We observe that ‘No SA’ improves the mean Jaccard index for the DermoFit and the PH2 datasets by 4.30% and 8.86% respectively over the ‘RGB Only’ baseline.

Figure 5: Histogram of projection angles for the training images from the ISIC 2017 and the DermoFit datasets. The projection angles for these images are spread across the entire range, whereas they are restricted to a small range for Finlayson et al. [9].

5 Conclusion

Motivated by the potential value of leveraging information about the physics of skin illumination and imaging in a data hungry deep learning setting, in this work, we proposed a novel semantic segmentation framework for skin lesion images by augmenting the RGB dermoscopic images with additional color bands and intrinsic, grayscale, and shading-attenuated images. We demonstrated the efficacy of the proposed approach by evaluating on three datasets: the ISIC ISBI 2017 Challenge dataset, the DermoFit Image Library, and the PH2 database, and observed a considerable performance improvement over the baseline method. We also performed ablation studies to ascertain the contribution of each of the transformations to the segmentation performance improvement. We hypothesize that, despite their usefulness for improving prediction accuracy, deep networks do not happen to stumble upon these illumination-based channels given the large search space, the fixed architecture, and the local gradient-descent optimizer. Future work may explore architectures, losses, or training strategies that ensure such illumination information is encoded.


  • [1] K. Abhishek and G. Hamarneh (2019) Mask2Lesion: mask-constrained adversarial skin lesion image synthesis. In International Conference on Medical Image Computing and Computer-Assisted Intervention Workshop on Simulation and Synthesis in Medical Imaging (MICCAI SASHIMI), pp. 71–80. External Links: Document, Link Cited by: §1.
  • [2] N. Abraham and N. M. Khan (2019) A novel focal Tversky loss function with improved attention U-Net for lesion segmentation. In 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), pp. 683–687. Cited by: §1.
  • [3] L. Ballerini, R. B. Fisher, B. Aldridge, and J. Rees (2013) A color and texture based hierarchical K-NN approach to the classification of non-melanoma skin lesions. In Color Medical Image Analysis, pp. 63–86. Cited by: §3.1.2.
  • [4] M. Berseth (2017) ISIC 2017-skin lesion analysis towards melanoma detection. arXiv preprint arXiv:1703.00523. Cited by: §4.
  • [5] L. Bi, J. Kim, E. Ahn, and D. Feng (2017) Automatic skin lesion analysis using large-scale dermoscopy images and deep residual networks. arXiv preprint arXiv:1703.04197. Cited by: §4.
  • [6] Cancer Facts & Figures 2016. American Cancer Society. Cited by: §1.
  • [7] N. C. Codella, D. Gutman, M. E. Celebi, B. Helba, M. A. Marchetti, S. W. Dusza, A. Kalloo, K. Liopyris, N. Mishra, H. Kittler, et al. (2018) Skin lesion analysis toward melanoma detection: a challenge at the 2017 International Symposium on Biomedical Imaging (ISBI), hosted by the International Skin Imaging Collaboration (ISIC). In 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), pp. 168–172. Cited by: §3.1.1.
  • [8] DermoFit Image Library. Note: [Accessed: March 4, 2020] Cited by: §3.1.2.
  • [9] G. D. Finlayson, M. S. Drew, and C. Lu (2004) Intrinsic images by entropy minimization. In European Conference on Computer Vision, pp. 582–595. Cited by: §2.2, §2.4, Figure 5, §4.
  • [10] M. R. Guarracino and L. Maddalena (2018) SDI+: a novel algorithm for segmenting dermoscopic images. IEEE Journal of Biomedical and Health Informatics 23 (2), pp. 481–488. Cited by: §1, §2.1.
  • [11] M. Hiraoka, M. Firbank, M. Essenpreis, M. Cope, S. Arridge, P. Van Der Zee, and D. Delpy (1993) A Monte Carlo investigation of optical pathlength in inhomogeneous tissue and its application to near-infrared spectroscopy. Physics in Medicine & Biology 38 (12), pp. 1859. Cited by: §2.3.
  • [12] J. hua Ng, M. Goyal, B. Hewitt, and M. H. Yap (2019) The effect of color constancy algorithms on semantic segmentation of skin lesions. In Medical Imaging 2019: Biomedical Applications in Molecular, Structural, and Functional Imaging, Vol. 10953, pp. 109530R. Cited by: §1.
  • [13] ISIC 2017: skin lesion analysis towards melanoma detection part 1: lesion segmentation phase 3: final test submission leaderboard. Note: [Accessed: November 24, 2019] Cited by: §4.
  • [14] H. Li, Z. Xu, G. Taylor, C. Studer, and T. Goldstein (2018) Visualizing the loss landscape of neural nets. In Advances in Neural Information Processing Systems, pp. 6389–6399. Cited by: §3.2.
  • [15] A. Madooei, M. S. Drew, M. Sadeghi, and M. S. Atkins (2012) Automated pre–processing method for dermoscopic images and its application to pigmented skin lesion segmentation. In Color and Imaging Conference, Vol. 2012, pp. 158–163. Cited by: §1, §2.3, §2.4, §4.
  • [16] A. Madooei, M. S. Drew, M. Sadeghi, and M. S. Atkins (2012) Intrinsic melanin and hemoglobin colour components for skin lesion malignancy detection. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 315–322. Cited by: §1.
  • [17] T. Mendonça, P. M. Ferreira, J. S. Marques, A. R. Marcal, and J. Rozeira (2013) PH2 - a dermoscopic image database for research and benchmarking. In 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 5437–5440. Cited by: §3.1.3.
  • [18] F. Milletari, N. Navab, and S. Ahmadi (2016) V-Net: fully convolutional neural networks for volumetric medical image segmentation. In 2016 Fourth International Conference on 3D Vision (3DV), pp. 565–571. Cited by: §3.2.
  • [19] Z. Mirikharaji and G. Hamarneh (2018) Star shape prior in fully convolutional networks for skin lesion segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 737–745. Cited by: §1.
  • [20] Z. Mirikharaji, S. Izadi, J. Kawahara, and G. Hamarneh (2018) Deep auto-context fully convolutional neural network for skin lesion segmentation. In Biomedical Imaging (ISBI 2018), 2018 IEEE 15th International Symposium on, pp. 877–880. Cited by: §1.
  • [21] F. Pollastri, F. Bolelli, R. Paredes, and C. Grana (2019-05-18) Augmenting data with GANs to segment melanoma skin lesions. Multimedia Tools and Applications. External Links: ISSN 1573-7721 Cited by: §1.
  • [22] C. A. Poynton (1997-03) Frequently asked questions about color. Note: [Accessed: February 25, 2020] External Links: Link Cited by: §2.3.
  • [23] H. W. Rogers, M. A. Weinstock, S. R. Feldman, and B. M. Coldiron (2015) Incidence estimate of nonmelanoma skin cancer (keratinocyte carcinomas) in the US population, 2012. JAMA Dermatology 151 (10), pp. 1081–1086. Cited by: §1.
  • [24] O. Ronneberger, P. Fischer, and T. Brox (2015) U-Net: convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 234–241. Cited by: §3.2.
  • [25] J. Schofield, D. Fleming, D. Grindlay, and H. Williams (2011) Skin conditions are the commonest new reason people present to general practitioners in England and Wales. British Journal of Dermatology 165 (5), pp. 1044–1050. Cited by: §1.
  • [26] D. W. Scott (2012) Multivariate density estimation and visualization. In Handbook of Computational Statistics, pp. 549–569. Cited by: §2.2.
  • [27] S. A. Taghanaki, K. Abhishek, J. P. Cohen, J. Cohen-Adad, and G. Hamarneh (2019) Deep semantic segmentation of natural and medical images: a review. arXiv preprint arXiv:1910.07655. Cited by: §3.2.
  • [28] S. A. Taghanaki, K. Abhishek, and G. Hamarneh (2019) Improved inference via deep input transfer. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 819–827. Cited by: §1.
  • [29] N. Tsumura, H. Haneishi, and Y. Miyake (1999) Independent-component analysis of skin color image. JOSA A 16 (9), pp. 2169–2176. Cited by: §2.3.
  • [30] G. Wyszecki and W. Stiles (2000) Color science: concepts and methods, quantitative data and formulae. 2nd edition, Wiley-VCH, pp. 968. ISBN 0-471-39918-3. Cited by: §2.2.
  • [31] L. Yu, H. Chen, Q. Dou, J. Qin, and P. Heng (2017) Automated melanoma recognition in dermoscopy images via very deep residual networks. IEEE Transactions on Medical Imaging 36 (4), pp. 994–1004. Cited by: §1.
  • [32] Y. Yuan, M. Chao, and Y. Lo (2017-Sept) Automatic skin lesion segmentation using deep fully convolutional networks with Jaccard distance. IEEE Transactions on Medical Imaging 36 (9), pp. 1876–1886. External Links: Document, ISSN 0278-0062 Cited by: §1.
  • [33] Y. Yuan (2017) Automatic skin lesion segmentation with fully convolutional-deconvolutional networks. arXiv preprint arXiv:1703.05165. Cited by: §4.
  • [34] L. Zhang, G. Yang, and X. Ye (2019) Automatic skin lesion segmentation by coupling deep fully convolutional networks and shallow network with textons. Journal of Medical Imaging 6 (2), pp. 024001. Cited by: §1.