Learning to Detect Blue-white Structures in Dermoscopy Images with Weak Supervision

06/30/2015 · by Ali Madooei, et al.

We propose a novel approach to identify one of the most significant dermoscopic criteria in the diagnosis of Cutaneous Melanoma: the blue-whitish structure. In this paper, we achieve this goal in a Multiple Instance Learning framework, using only image-level labels indicating whether the feature is present or not. As output, we predict the image classification label and also localize the feature in the image. Experiments are conducted on a challenging dataset, with results outperforming the state of the art. This study broadens the scope of modelling for computerized image analysis of skin lesions, in particular by putting forward a framework for identification of dermoscopic local features from weakly-labelled data.


I Introduction

Dermatological practice relies heavily on visual examination of skin lesions. Thus, it is not surprising that interest in Computer Vision based diagnosis technology in this field is growing. The goal of automatically understanding dermatological images is tremendously challenging, however: much as with human vision itself, what is understood about how diagnostic expertise actually operates is subjective and limited.

Many of those who take up this challenge focus on detection of Cutaneous (skin) Melanoma through dermoscopy image analysis. Melanoma is the most life-threatening form of skin cancer. Dermoscopy is a non-invasive, in-vivo skin examination technique that uses optical magnification and cross-polarized lighting to allow enhanced visualization of lesion characteristics which are often not discernible by the naked eye. Early detection of melanoma is paramount to patients’ prognosis towards greater survival. The challenges involved in clinical diagnosis of early melanoma have provoked increased interest in computer-aided diagnosis systems through automatic analysis of digital dermoscopy images.

In this paper, we focus on the identification of blue-whitish structures (BWS), one of the most important findings in dermoscopic examination for making a diagnosis of invasive melanoma [1]. The term blue-white structures is a unified heading for features also known as blue-white veil and regression structures (this is discussed below in §II).

To this aim, a typical approach would be based on the classical paradigm of supervised learning, requiring extensive annotation of each training image with instances of BWS. This is difficult (or even impossible) to carry out accurately and consistently, due to the subjectivity of the feature definition and poor inter-observer agreement.

The dermoscopy data actually available to us motivated a different, more challenging, research problem. In this dataset [2], image-level labels encode only whether an image contains a dermoscopic feature or not; the features themselves are not locally annotated. In Computer Vision this situation is referred to as weakly-labelled data.

To approach this problem, we use the multiple instance learning (MIL) paradigm. MIL is a relatively new learning paradigm, and has broad applications in computer vision, text processing, etc. Unlike standard supervised learning, where each training instance is labeled, MIL is a type of weakly supervised learning, where the instance labels are ambiguous (more on this in §IV).

Our goal is to learn to identify and localize BWS using this weakly-labelled data (i.e. with minimal supervision). Learning to localize dermoscopic features with minimal supervision is an important class of problem: it broadens the scope of modelling for computerized image analysis of skin lesions, because the vast majority of available data are in fact weakly labelled. Surprisingly, this class of problem is the least studied in the relevant literature.

II Clinical background

Dermoscopy allows the identification of many different structures not seen by the unaided eye. As the field has evolved, terminology has been accumulated to describe structures seen via dermoscopy. This terminology can sometimes be confusing. For clarity, a brief description follows to illuminate the feature under study here.

In dermoscopic examination of pigmented skin lesions, accurate analysis of lesion colouration is essential to the diagnosis. Lesions with dark, bluish or variegated colours are deemed to be more likely to be malignant. Indeed, the crucial role of colour cues is evident as most clinical diagnosis guidelines (such as the “ABCD rule” [3] and the “7-point checklist” [4]) include colour for lesion scoring.

Among the common colours seen under dermoscopy, the presence of blue and white hues (together or separately) is a diagnostic clue. White is often the result of depigmentation, sclerosis of the dermis, or keratinization. Blue is the result of the Tyndall effect: longer-wavelength (red) light is mostly transmitted, while shorter-wavelength (blue) light is preferentially scattered back from the melanin pigment present deep within the lesion.

Identification of blue and white hues within a lesion is a good predictor of malignancy but not a specific one. Shades of blue are also observed in benign lesions such as blue nevi (moles) and haemangiomas, and white areas are also seen in e.g. benign halo nevi. As a general rule, colours in melanoma are focal, asymmetrical and irregular, whereas in benign lesions they are distributed uniformly. This generalization, however, lacks adequate specificity. To overcome these issues, two specific features have been defined, denoted blue-white veil and regression structure.

Blue-white veil is defined as an irregular, confluent, grey-blue to whitish-blue diffuse pigmentation with an overlying “ground-glass” white haze or “veil”, as if the image were out of focus there. For discriminability, the pigmentation cannot occupy the entire lesion and is found mainly in the papular part of the lesion. Regression, on the other hand, is defined as areas of white scar-like depigmentation and/or blue-white to grey pepper-like granules (a.k.a. peppering). A particular pitfall when assessing combinations of white and blue areas is that this combination is virtually indistinguishable between the blue-white veil and regression structures (some clinical references use “blue-white due to orthokeratosis” vs. “blue-white due to regression” to distinguish between the two features). To improve diagnostic efficacy and increase inter-observer reproducibility, the two terminologies were unified into the definition of blue-whitish structure during the Consensus Net Meeting on Dermoscopy [5], although it was demonstrated that they correspond to two different histopathologic substrates. An example of this feature is given in Fig. 1.

Fig. 1: Schematic representation of blue-whitish structures (left) and a dermoscopic image (right) of a melanoma with this feature (it is difficult to differentiate between ‘regression’ and ‘blue-white veil’ areas; both features are present).

III Related Work

Colour assessment, as discussed, is essential in the clinical diagnosis of skin cancers. Due to this diagnostic importance, many studies have either focused on or employed colour features as a constituent part of their skin lesion analysis systems. These studies range from employing low-level colour features, such as simple statistical measures of colours occurring in the lesion [6, 7], to availing themselves of high-level semantic features such as presence of blue-white veil [8, 9] or colour variegation [10, 11] in the lesion.

There exists extensive literature on skin lesion image analysis (see [12] for a survey). However, there are not many studies that report a method specifically pertaining to the detection of the feature under study here. A handful of studies aim to detect (and localize) blue-white veil in dermoscopy images. Prior work is briefly reviewed next.

Ogorzalek et al. [13] used thresholding in RGB space to detect white, black and blue-grey areas. Blue-grey areas were identified as pixels whose R, G and B values satisfied a set of fixed threshold inequalities. There is no indication of how these decision rules were generated. Also, the paper does not provide any experiment to evaluate the success of the colour-based detection. The detected colours were quantified by their area as part of a feature set for classification of skin lesions (for computer-aided diagnosis).

Sforza et al. [14] proposed an adaptive segmentation of grey (blue-grey) areas in dermoscopy images. They achieved this by thresholding on the B component of HSB colour space. The threshold values were induced “adaptively”, although the paper is unclear on how this adaptive process was carried out. The paper also lacks quantitative evaluation: results are shown qualitatively for only five dermoscopy images.

Celebi et al. [15, 8] detected blue-white veil areas through pixel classification using explicit thresholding, where the threshold values were induced by a trained decision tree. Various colour and texture features were used to characterize the image pixels. The method was tested (for localization accuracy) on 100 dermoscopy images and produced a sensitivity of 84% and a specificity of 96%. Celebi et al. further developed a second decision tree classifier to use the detected blue-white veil areas for discriminating between melanoma and benign lesions. The detected areas were characterized using simple numerical parameters such as region area, circularity and ellipticity measures. Experiments on a set of 545 dermoscopy images yielded a sensitivity of 69% and a specificity of 89%.

The findings of Celebi et al. indicate that blue-white veil colour data has a restricted range of values and does not fall randomly in a given colour feature space. This suggests that their method can benefit from the choice of colour representation. To investigate this, Madooei et al. [9] reproduced Celebi’s training experiment with each pixel represented by its corresponding coordinates in various colour spaces. Their investigation revealed that, by thresholding over the luminance channel and the normalized-blue channel, one can obtain equally good results with considerably less computation compared to [8].

Devita et al. [16, 17] detected image regions containing blue-white veil, irregular pigmentation or regression features. To this aim, first, the lesion was segmented into homogeneous colour regions. Next, simple statistical parameters such as mean and standard deviation were extracted, from the HSI colour components, for each region. Finally, a Logistic Model Tree (LMT) was trained to detect each colour feature. LMT is a supervised classification model that combines logistic regression and decision tree learning. Devita et al. detected these colour features as part of their system [17] for automatic diagnosis of melanoma based on the 7-Point checklist clinical guideline. They also evaluated the performance of their colour detection method over a set of 287 images (150 images were used for training and 137 for testing). It is not clear whether the test was aimed at identifying (presence/absence) or at localizing the colour features. Nevertheless, results are shown with average specificity and sensitivity of about 80%.

Madooei et al. [9] identified the blue-white veil feature in each dermoscopy image through nearest-neighbour matching of image regions to colour patches of a “blue-white veil colour palette”: a discrete set of Munsell colours best describing the feature. The palette was created by mapping instances of veil and non-veil data to Munsell colour patches, keeping those colours that exclusively described the feature with highest frequency. Madooei et al. claim their method mimics the colour assessment performed by human observers, pointing to the fact that, in identifying a colour, observers are influenced by the colours they saw previously. They tested their proposed method for localization of the blue-white veil feature on a set of 223 dermoscopy images and reported a sensitivity of 71% and a specificity of 97%. They also tested their method, in a different experiment, to identify only the presence (or absence) of this feature on a set of 300 images with two subsets of 200 ‘easy’ and 100 ‘challenging’ cases. An image was considered challenging if the blue-white veil area was too small, too pale, occluded, or had variegated colour. They reported accuracies of 87% and 67% on the easy and challenging sets respectively.
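The nearest-neighbour palette matching at the heart of this approach can be sketched as follows. This is an illustrative sketch only: the palette entries, query colour and distance threshold are hypothetical values, not those of [9].

```python
import numpy as np

def match_to_palette(region_lab, palette_lab, max_dist):
    """Return (is_match, index): whether the region's mean Lab colour is
    within max_dist of its nearest palette patch, and which patch that is."""
    dists = np.linalg.norm(palette_lab - region_lab, axis=1)
    i = int(np.argmin(dists))
    return bool(dists[i] <= max_dist), i

# Toy palette: two bluish-white patches (illustrative Lab values).
palette = np.array([[60.0, 0.0, -25.0], [75.0, -2.0, -15.0]])
region = np.array([62.0, 1.0, -23.0])   # hypothetical region mean colour
is_bws, idx = match_to_palette(region, palette, max_dist=10.0)
print(is_bws, idx)
```

A region is accepted as blue-white veil only when its best match falls within the threshold distance, mirroring the detection step of Alg. 2.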

Wadhawan et al. [18] detected blue-white veil areas through classification of image patches. The image patches were extracted over the lesion area using regular-grid sampling. For each image patch, a feature vector was computed by concatenating histogram representations of pixel values in various colour channels of different colour spaces. Wadhawan et al. evaluated their method by performing 10-fold cross-validation on a set of 489 dermoscopy images (163 containing the veil and the remaining 326 free of this feature). For training, images were manually segmented and annotated by one of the authors. A support vector machine (SVM) with linear kernel was used for classification. For testing, only presence/absence of the feature was considered. Results were reported with an average sensitivity of about 95% and an average specificity of about 70%.

Lingala et al. [19] detected blue areas in dermoscopy images and further classified them into three shades (lavender, dark blue and light blue) using fuzzy set membership functions. Their colour detection method builds on a simple thresholding approach similar to that of Ogorzalek et al. [13]. A pixel is considered ‘blue’ if its normalized RGB values are within a certain empirically determined range (the threshold values are not reported). These blue areas are further classified into lavender, light and dark blue by thresholding their intensity value (the luminance channel). This thresholding scheme is used to generate training data from 22 dermoscopy images. The training data is then used to determine the parameters of fuzzy set membership functions for the three shades of blue. The method was evaluated over a set of 866 images (173 melanoma and 693 benign). There is no indication of how successful the colour detection was: evaluation was conducted by classifying lesions as melanoma vs. benign using simple statistical features extracted over blue areas. Interestingly, using fuzzy set membership vs. simple thresholding was reported to improve classification performance by less than 0.5%, which calls into question the effectiveness of the proposed method.

It is to be noted that there are other studies aimed at colour classification [20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32], where the objective is to assign labels (such as colour names) to each region (or pixel) of the image using the colour information contained in that region. In some of these, the general colour classes of blue and white are considered. This is, however, different from identifying blue-white structures. We remind the reader that BWS is a specific dermoscopic feature and not necessarily a particular hue. In fact, what dermatologists annotate as BWS is a mixture of many different hues, including various shades of blue, white, grey and sometimes purple.

Also note that the use of the blue-white veil feature has been reported in some commercially available computer-aided diagnosis (CAD) systems that do use colour information (see e.g. [33]). These studies however often omit description of methods and techniques which are used for feature extraction (perhaps due to patent protection) and therefore were not considered here.

Summary – Most of the prior works are pixel-based classification approaches in which the classification method is often simply to partition a colour space by imposing a set of decision boundaries (thresholds), either found empirically or induced by e.g. training a decision tree (see Table I). Since dermoscopic features are in fact defined over a region of at least 10% of the size of the lesion, pixel-based classification would seem to be an inappropriate approach. Also, caution must be taken in defining the decision boundaries (threshold values): colour values are highly dependent on the acquisition technique and the imaging setup. For example, if the illumination changes, or in cases with shadows, shading, and other colour-degrading factors, thresholding methods might fail ungracefully.
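To make the thresholding paradigm concrete, here is a minimal sketch of a pixel-based colour-threshold classifier of the kind summarized in Table I. The normalized-blue threshold of 0.40 is a hypothetical value for illustration, not one taken from any of the cited works.

```python
def normalized_blue(rgb):
    """Normalized blue: B / (R + G + B). Invariant to uniform brightness
    scaling, but still sensitive to colour casts from the imaging setup."""
    r, g, b = rgb
    s = r + g + b
    return b / s if s > 0 else 0.0

def is_blue_grey(rgb, nb_thresh=0.40):  # threshold is hypothetical
    return normalized_blue(rgb) >= nb_thresh

bluish = (90, 110, 170)   # normalized blue ~0.46
skin = (200, 160, 140)    # normalized blue 0.28
print(is_blue_grey(bluish), is_blue_grey(skin))
```

Note that while normalized channels cancel uniform brightness changes, a shift in illuminant colour moves every pixel in this space and can push pixels across the fixed boundary, which is exactly the fragility noted above.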

Author Year Method Approach
Celebi et al. [15] 2006 Decision tree Pixel-based
Celebi et al. [8] 2008 Decision tree Pixel-based
Ogorzalek et al. [13] 2010 Thresholding Pixel-based
Sforza et al. [14] 2011 Thresholding Pixel-based
Devita et al. [16] 2012 LMT Region-based
Wadhawan et al. [18] 2012 SVM Region-based
Madooei et al. [9] 2013 Thresholding Pixel-based
Madooei et al. [34] 2013 Colour Palette Region-based
Fabbrocini et al. [17] 2014 LMT Region-based
Lingala et al. [19] 2014 Fuzzy sets Pixel-based
TABLE I: Summary of related works.

In all the studies reported here, the emphasis is on colour features; structural information such as texture is either ignored [13, 18, 16, 17, 19] or found not to be useful [8, 9]. This is problematic, since such detectors would potentially fail to distinguish between a BWS and a similar feature in a benign lesion (uniform blue-white structures may be observed in common moles such as blue nevi, but in melanoma they are diffuse, asymmetric and irregular). Moreover, these studies fall into the classical paradigm of supervised learning, which requires fully annotated data. This exhaustive labeling approach is costly and error-prone, especially since such annotations have always been made by a single expert rather than via a consensus of experts’ opinions. It is hard to make outright claims about the success of these algorithms, especially since these studies have often failed to provide comparisons to other algorithms.

Another caveat appears here: the BWS, if identified correctly, is highly specific to melanoma. However, it is not its mere presence that is the diagnostic indicator, but rather the extent to which it manifests in a lesion relative to the lesion’s other dermoscopic characteristics. Blue and whitish areas are also found in benign lesions; in combination with other dermoscopic features such as an atypical network or irregular globules, however, these colours are interpreted as a malignancy criterion. This imposes an extra challenge, since any computer program localizing this feature should somehow be aware of other local features.

One possible solution is to use a Structured Prediction paradigm, which allows training of a classifier for structured output labels. The output can be a set of dermoscopic criteria (including the BWS and other associated features) and e.g. a graphical model can be used to learn the relationship (structure) between the labels. We hypothesize, in the case of our alternative MIL-based approach, that since the BWS instances are not annotated in our image set, the detector would learn to recognize those salient regions that contain BWS in association with e.g. pigment network alterations, irregular globules, etc. Results from our experiments (§V) support this hypothesis.

IV Proposed Method

Fig. 2: Schematic representation of the data in our proposed model.

Our goal is to learn a detector for BWS from a set of images, each with a binary label indicating the presence or absence of the BWS feature. We model an image as a set of non-overlapping regions, each of which also has an unknown binary label (the region may or may not correspond to an instance of BWS). This reduces the problem of BWS localization to the problem of binary classification of image regions.

We used the well-known mean-shift segmentation tool EDISON [35] to extract image regions (EDISON parameters: SpatialBandWidth=7, RangeBandWidth=6.5, and MinimumRegionArea=0.01*nRows*nColumns). To further reduce the number of instances, we discard regions outside the lesion area. The lesion is detected by applying the grey-level thresholding method of Otsu [36] (classical foreground/background detection) as described in [37]. Note that neither region nor lesion segmentation is a main constituent of our approach; we could instead use e.g. a regular grid of windows over the image.
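The lesion-masking step can be sketched as follows: a self-contained NumPy implementation of Otsu's grey-level thresholding (scikit-image's `threshold_otsu` provides the same functionality), applied here to a synthetic dark-lesion-on-bright-skin image rather than a real dermoscopy image.

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method on an 8-bit grey image: choose the threshold that
    maximizes the between-class variance of the two resulting classes."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                  # class-0 (<= t) probability
    mu = np.cumsum(p * np.arange(256))    # class-0 cumulative mean
    mu_t = mu[-1]                         # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)      # zero out undefined endpoints
    return int(np.argmax(sigma_b))

# Synthetic image: dark "lesion" disc (value 60) on bright "skin" (200).
img = np.full((64, 64), 200, dtype=np.uint8)
yy, xx = np.mgrid[:64, :64]
img[(yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2] = 60
t = otsu_threshold(img)
lesion_mask = img <= t   # lesion pixels are darker than the threshold
print(60 <= t < 200, int(lesion_mask.sum()) > 600)
```

On a bimodal histogram like this one, the maximizing threshold falls between the two modes, so the mask isolates the dark disc.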

Each region is represented by a fixed-size feature vector. Our choice of feature was a concatenated (and normalized) histogram of colour and texture information extracted over each region. For texture, we used two popular descriptors from the texture classification literature: LBP [38] and MR8 [39].

Colour features are the most important information to be captured here. We used a uni-dimensional histogram of CIE Lab colour values. CIE Lab is a perceptual colour space based on opponent-process theory. A property of this space is that distances correspond (approximately) linearly to human perception of colour difference. This is desirable since we apply uniform binning (in constructing the colour histograms), which implicitly assumes a meaningful distance measure. The colour histogram is constructed using a bin size of 5 units; thus each bin has a radius of roughly one just-noticeable difference (Weber’s Law of Just Noticeable Difference (JND), see http://apps.usd.edu/coglab/WebersLaw.html), and so it subdivides colour space near the theoretical resolution of human colour differentiation.
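A minimal sketch of the uniformly-binned, per-channel Lab colour histogram described above. The per-channel ranges and the toy pixel values are assumptions for illustration; the paper's exact normalization and range conventions may differ.

```python
import numpy as np

def lab_histogram(lab_pixels, bin_size=5.0):
    """Concatenated 1-D histograms (one per CIE Lab channel) with uniform
    bins of width bin_size, L1-normalized into a single feature vector."""
    # Assumed per-channel ranges: L in [0, 100], a and b in [-128, 127].
    ranges = [(0.0, 100.0), (-128.0, 127.0), (-128.0, 127.0)]
    feats = []
    for ch, (lo, hi) in enumerate(ranges):
        nbins = int(np.ceil((hi - lo) / bin_size))
        h, _ = np.histogram(lab_pixels[:, ch], bins=nbins, range=(lo, hi))
        feats.append(h.astype(float))
    f = np.concatenate(feats)
    return f / max(f.sum(), 1.0)

# Toy region: 100 bluish-white pixels (illustrative Lab values).
rng = np.random.default_rng(0)
pix = np.column_stack([rng.uniform(60, 80, 100),
                       rng.uniform(-5, 5, 100),
                       rng.uniform(-30, -10, 100)])
f = lab_histogram(pix)
print(f.shape, round(float(f.sum()), 6))
```

With a bin size of 5 this gives 20 + 51 + 51 = 122 bins, and the normalization makes regions of different sizes comparable.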

Finally, the whole training set is represented as a collection of bags of region-level feature vectors (a schematic representation is given in Fig. 2). At training time we are only given the image-level labels, leading to the classic MIL problem.

In the MIL setting, training examples are presented as labeled bags (i.e. sets) of instances, and the instance labels are not given. According to the standard MIL assumption, a bag is positive if at least one of its instances is positive, while in a negative bag all the instances are negative. This assumption fits our problem well: we can think of each image as a “bag” of instances (image regions), where the binary image label $y = +1$ specifies that the bag contains at least one instance of the BWS feature, and the label $y = -1$ specifies that the image contains no instances of the feature.
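The standard MIL bag-labelling rule above can be stated directly in code (a minimal sketch, with labels encoded as +1/−1):

```python
def bag_label(instance_labels):
    """Standard MIL assumption: a bag is positive iff at least one
    instance is positive; otherwise it is negative."""
    return +1 if any(h == +1 for h in instance_labels) else -1

# One BWS-like region is enough to make the image a positive bag.
print(bag_label([-1, -1, +1]))
# A negative bag contains no positive instances at all.
print(bag_label([-1, -1, -1]))
```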

MIL problems are typically solved (locally) by methods, such as mi-SVM [40], that find a local minimum of a non-convex objective function. In this paper, we use a recent MIL algorithm, the multi-instance Markov network (MIMN) [41]. It is proved in [41] that MIMN is a generalized version of mi-SVM with guaranteed convergence to an optimum (unlike mi-SVM, which might get stuck in a loop and never converge).

This method introduces a probabilistic graphical model for multi-instance classification. Because of their multi-unit and structural nature, probabilistic graphical models are powerful tools for MIL. The algorithm works by parameterizing a cardinality potential on latent instance labels in a Markov network. Consequently, the model can deal with different levels of ambiguity in the instance labels and can express the standard MIL assumption as well as more generalized MIL assumptions. Furthermore, this graphical model leads to principled and efficient inference algorithms for predicting both the bag label and the instance labels.

The graphical representation of the MIMN model is shown in Fig. 3. Given a bag $X = \{x_1, \ldots, x_m\}$ with latent instance labels $\mathbf{h} = (h_1, \ldots, h_m)$ and bag label $y \in \{-1, +1\}$, a scoring function over tuples $(X, y, \mathbf{h})$ is defined as:

$$F(X, y, \mathbf{h}) = \sum_{i=1}^{m} \phi(x_i, h_i) + \phi_c(y, \mathbf{h}) \qquad (1)$$

Fig. 3: Graphical illustration of the MIMN model for MIL. Instance potential functions $\phi(x_i, h_i)$ relate instances $x_i$ to latent labels $h_i$. A clique potential $\phi_c$ relates all instance labels $\mathbf{h}$ to the bag label $y$.

This Markov network consists of the instance-label potentials $\phi(x_i, h_i)$ and a cardinality clique potential $\phi_c(y, \mathbf{h})$. The instance-label potentials are parameterized as:

$$\phi(x_i, h_i) = h_i \, \mathbf{w}^\top x_i \qquad (2)$$

and the cardinality clique potential is parameterized by two different cardinality functions, one for positive bags ($C^+$) and one for negative bags ($C^-$):

$$\phi_c(y, \mathbf{h}) = \begin{cases} C^+(m^+, m^-) & \text{if } y = +1 \\ C^-(m^+, m^-) & \text{if } y = -1 \end{cases} \qquad (3)$$

where $m^+$/$m^-$ denotes the number of instance labels in $\mathbf{h}$ which are inferred to be positive/negative. By appropriate parameterization of $C^+$ and $C^-$, the standard MIL assumption can be modeled:

$$C^+(m^+, m^-) = 0 \quad \text{if } m^+ \geq 1 \qquad (4)$$
$$C^+(m^+, m^-) = -\infty \quad \text{if } m^+ = 0 \qquad (5)$$
$$C^-(m^+, m^-) = 0 \quad \text{if } m^+ = 0 \qquad (6)$$
$$C^-(m^+, m^-) = -\infty \quad \text{if } m^+ \geq 1 \qquad (7)$$

This formulation encodes that there must be at least one positive instance in a positive bag (4)&(5), whereas there must not be any positive instances in a negative bag (6)&(7).

Inference – Given the MIMN model, the inference problem is to find the instance labels by solving the following optimization problem:

$$\mathbf{h}^* = \arg\max_{\mathbf{h}} F(X, y, \mathbf{h}) \qquad (8)$$

It was shown in [41] how to solve this inference problem efficiently. Next, the bag label can be predicted by simply running inference twice, trying $y = +1$ and $y = -1$, and taking the label which maximizes $F$.
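For concreteness, the following is a brute-force sketch of this two-pass inference with a toy linear instance potential and the standard-MIL cardinality potential described above. The weight vector and instances are hypothetical, and the sketch enumerates all instance labelings (exponential in the bag size, for illustration only; [41] gives an efficient algorithm).

```python
import itertools
import numpy as np

def score(X, y, h, w):
    """Scoring function: sum of linear instance potentials h_i * (w . x_i)
    plus a clique potential encoding the standard MIL assumption."""
    m_pos = sum(1 for hi in h if hi == +1)
    if y == +1:
        clique = 0.0 if m_pos >= 1 else -np.inf   # at least one positive
    else:
        clique = 0.0 if m_pos == 0 else -np.inf   # no positives allowed
    return sum(hi * float(w @ xi) for hi, xi in zip(h, X)) + clique

def predict_bag(X, w):
    """Run inference twice (y = +1 and y = -1), maximizing over all
    instance labelings h, and keep the bag label with the higher score."""
    best = {}
    for y in (+1, -1):
        best[y] = max(score(X, y, h, w)
                      for h in itertools.product([-1, +1], repeat=len(X)))
    return +1 if best[+1] >= best[-1] else -1

w = np.array([1.0, -0.5])                               # toy instance model
X_pos = [np.array([2.0, 0.0]), np.array([-1.0, 0.0])]   # one strong instance
X_neg = [np.array([-1.0, 0.0]), np.array([-2.0, 1.0])]  # no strong instance
print(predict_bag(X_pos, w), predict_bag(X_neg, w))
```

One high-scoring region suffices to flip the whole bag positive, which is exactly the behaviour the cardinality potential enforces.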

Learning –

Similar to latent SVM, the learning problem is formulated in a max-margin discriminative framework by minimizing a regularized hinge loss over the training bags $\{(X_n, y_n)\}$:

$$\min_{\mathbf{w}} \; \frac{1}{2}\|\mathbf{w}\|^2 + C \sum_{n} \max\!\Big(0,\; 1 + \max_{\mathbf{h}} F(X_n, -y_n, \mathbf{h}) - \max_{\mathbf{h}} F(X_n, y_n, \mathbf{h})\Big) \qquad (9)$$

This can be solved by using the non-convex cutting plane method in [42].

V Experiments and Results

We make use of a set of dermoscopy images from the CD-ROM Interactive Atlas of Dermoscopy [2] (denoted Atlas from now on, for brevity). This educational media contains a collection of about 1000 clinical cases acquired in three institutions in Europe. All cases are accompanied by clinical data including dermoscopic images, diagnosis (nearly all confirmed histopathologically), and consensus documentation of dermoscopic criteria. Thus, all images are weakly labeled: although there are image-level labels that encode whether an image contains a dermoscopic feature or not, the features are not locally annotated. The dataset is a well-known (de facto) benchmark, albeit most studies use only a small subset, since typical supervised learning methods require manual labelling.

The proposed method is tested on a set of 855 dermoscopy images selected from the Atlas. Images were excluded if the lesion was heavily occluded by hair or oil artifacts, or located on palms/soles, lips or genitalia. In our selected set, 155 images are documented to contain blue-white veil regions, 156 images contain blue (or combined white and blue) regression structures, and 43 images contain both of these dermoscopic criteria (thus a total of 354 positive BWS cases). The remaining 501 are free of these features. We consider this set challenging, since few images contain a sizeable BWS; most positive cases instead contain BWS instances that are too small, too pale, occluded, or of variegated colour. Also note that various other dermoscopic features are present in each image.

Table II reports the results over 3-fold cross validation for the main task of BWS identification (i.e. whether the image contains the feature or not). For comparison, we considered the most prominent studies among the prior art: Celebi et al. [8] and Madooei et al. [9]. These studies report a method, experimental procedure, and results specifically pertaining to the detection of the feature under study here. Compared to prior work, our method shows substantial improvements in specificity, precision, and accuracy. The f-score of our detector is comparable to that of the prior art, though our method’s recall lags behind that of [8, 9]. We would like to bring the reader’s attention to the gap between precision and recall for the baseline methods [8, 9]: their good recall is achieved at the expense of high false positives. Our method, on the other hand, maintains a steady performance level.

Note that both [8, 9] use only colour features; they found that texture information was not useful. Here, by contrast, our MIL-based method improves when texture is added, even though we used the same texture features as [8, 9]. This further demonstrates the capacity of MIL to make use of such information for the computational task at hand. Even using only colour, our proposed method still outperforms [8, 9] by a large margin; for comparison, we have added a row to Table II with performance measures using only colour features.
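The performance measures in Table II follow the standard definitions over a binary confusion matrix. The sketch below computes them; the counts are illustrative values chosen so the resulting measures approximate the proposed method's Atlas row, not the paper's actual confusion matrix.

```python
def metrics(tp, fp, tn, fn):
    """Accuracy, precision, recall (sensitivity), f-score and specificity
    from true/false positive/negative counts."""
    acc = (tp + tn) / (tp + fp + tn + fn)
    prec = tp / (tp + fp)
    rec = tp / (tp + fn)           # sensitivity / recall
    f1 = 2 * prec * rec / (prec + rec)
    spec = tn / (tn + fp)
    return acc, prec, rec, f1, spec

# Illustrative counts over 855 images (354 positive, 501 negative).
vals = [round(m, 3) for m in metrics(tp=226, fp=106, tn=395, fn=128)]
print(vals)  # → [0.726, 0.681, 0.638, 0.659, 0.788]
```

The gap discussed above is visible in these definitions: driving recall up by predicting positive liberally inflates fp, which depresses both precision and specificity.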

Dataset Method Accuracy Precision Recall f-score Specificity
Atlas [2] Proposed method 72.63 68.07 63.84 65.89 78.84
Proposed method (colour) 70.52 64.32 64.68 64.89 76.25
Celebi et al. [8] 59.88 50.89 88.42 64.60 39.72
Madooei et al. [9] 65.96 55.87 84.75 67.34 52.69
PH2 [43] Proposed method 84.50 61.54 74.42 67.37 87.90
Celebi et al. [8] 79.50 51.28 93.02 66.12 75.80
Madooei et al. [9] 76.50 47.67 95.35 63.57 71.43
Notes: Results are over 3-fold cross validation. “Proposed method (colour)” is trained using only colour features. For the PH2 rows, the method is trained on the Atlas and tested on the PH2 set. Please see §V for details and discussion.
TABLE II: BWS detection: Proposed method vs. [8, 9]

Moreover, both [8, 9] are supervised methods and require annotated training data (images with instances of BWS localized on them), whereas our data is only weakly labelled. Note that we used the detection methods originally produced by [8, 9] and did not retrain these systems. Please refer to Alg. 1 and Alg. 2 for a summary. We used the code and data of [9], and the implementation in [9] of [8]. Note that both [8, 9] were originally trained on a subset of the same dataset used here; to compare to [8, 9], we simply ran that code on the whole dataset. One might argue that this is unfair to the present paper, since the test data then contains their training data, whereas our result is obtained over cross validation with separated test and training sets. For further clarification, a short description of the (original) training process of the baseline methods follows.

1:  Load a dermoscopy image of skin lesion.
2:  Extract lesion border.
3:  Dilate the border by 10% of its area.
4:  Extract region outside the dilated border of size 20% of lesion area.
5:  for each pixel in extracted region do
6:     if the pixel’s R, G and B values satisfy the healthy-skin threshold rules then
7:        Mark the pixel as healthy skin.
8:     else
9:        Ignore the pixel and continue.
10:     end if
11:  end for
12:  Set the healthy-skin red mean as the mean of red channel values for pixels marked healthy skin.
13:  for each pixel in the image do
14:     Compute the pixel’s normalized-blue value.
15:     Compute the pixel’s relative-red value (using the healthy-skin red mean from step 12).
16:     if the normalized-blue and relative-red values satisfy the induced decision-tree thresholds then
17:        Classify pixel as BWS
18:     end if
19:  end for
Algorithm 1 – The method of Celebi et al. [8]
1:  PART1: Colour Palette
2:  for each image in database do
3:     Convert from sRGB to CIELAB
4:     Replace each pixel with superpixel representation
5:     for each pixel marked as veil do
6:        Compute the approximate Munsell specification
7:     end for
8:  end for
9:  Create frequency table from the computed Munsell colour patches, keep the most representative colours (in terms of highest frequency) and organize them in a palette.
10:  PART2: Detection
11:  Load a skin lesion image
12:  Convert from sRGB to CIELAB
13:  Segment using EDISON [35]
14:  for each segmented region do
15:     Find the best match from colour palette
16:     if The best match is within the threshold distance then
17:        Classify as BWS
18:     end if
19:  end for
Algorithm 2 – The method of Madooei et al. [9]

Celebi et al. [8] used a set of 105 dermoscopy images (selected from the Atlas) consisting of 43 images containing sizeable blue-white veil areas, with the remaining 62 free of this feature. For each image, a number of small circular regions that either contained the feature or were free of it were manually determined by a dermatologist and used for training. A decision tree classifier with the C4.5 [44] induction algorithm was employed to classify each pixel into two classes: blue-white veil and otherwise. Among the 18 different colour and texture features included (the description of the features, as well as the feature extraction process, is omitted for space considerations; the interested reader is referred to [8] for details), only two features appeared in the induced decision rules: classification was conducted by thresholding on a normalized-blue channel and a relative-red feature (defined with respect to the mean of red channel values over healthy skin areas only).

Madooei et al. [9] used the same 105 dermoscopy images employed by [8]. They mapped each colour of the blue-white veil data to its closest colour patch in the Munsell system (using nearest neighbour search). Interestingly, the 146,353 pixels under analysis mapped to only 116 of the 2352 Munsell colour patches available in their look-up table. Among these, 98% of the veil data was described by only 80 colour patches. These 80 colours were organized as a palette: a discrete set of Munsell colours best describing the feature. Madooei et al. also analyzed the non-veil data by the same principle. The 254,739 pixels from non-veil areas mapped to 129 Munsell colour patches, of which only 3 overlapped with the 116 veil patches. These 3 contribute (all together) less than 2% of the veil data and were not included among the 80 patches in the blue-white veil colour palette. For testing, the blue-white veil feature was segmented in each dermoscopy image through nearest neighbour matching of image regions to the colour patches of their “blue-white veil palette”. (For implementation details, e.g. colour transformation or segmentation parameters, please refer to [9].)
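The nearest-neighbour matching against the palette can be sketched as below. The three CIELAB coordinates are hypothetical stand-ins for the 80 Munsell patches of the actual palette in [9].

```python
import numpy as np

# Hypothetical CIELAB (L*, a*, b*) coordinates standing in for the real
# 80-patch blue-white veil palette of [9].
PALETTE = np.array([
    [35.0,  5.0, -30.0],   # bluish
    [60.0, -2.0, -15.0],   # blue-grey
    [85.0,  0.0,  -2.0],   # whitish
])

def nearest_patch(lab_pixel, palette=PALETTE):
    """Index of the palette colour closest to `lab_pixel` under
    Euclidean (Delta-E 1976) distance in CIELAB."""
    d = np.linalg.norm(palette - np.asarray(lab_pixel, dtype=float), axis=1)
    return int(np.argmin(d))
```

A region is then classified as veil when its best match belongs to the veil palette and lies within a threshold distance.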

[Fig. 4: an image grid comparing, column by column, the input image, the proposed method, Celebi et al. [8], and Madooei et al. [9] on five sample lesions.]

Fig. 4: Sample outputs: the first two rows are positive samples, with all detectors succeeding in identifying and localizing the feature correctly. In the 2nd row, our method localized the feature over the areas that contain both BWS and irregular globules. We believe this supports our hypothesis that our proposed method learns to detect salient regions for BWS identification. This is further demonstrated in the 3rd row: both competing methods falsely detected confluent blue areas in a benign blue nevus as BWS. Our method correctly classified it as negative because there are no salient features (blue/white colour plus other features such as globules), whereas [8, 9] detected it incorrectly as positive because they only look for colour features. Note that among all blue and combined nevi, 43% were falsely identified as positive by [8], 41% by [9], and 36% by our method, which further validates our approach. The 4th row is a challenging positive example (small BWS area) which only our proposed method succeeds in correctly identifying as positive. The sample shown in the 5th row is an extremely challenging negative case. It contains blue-grey areas, atypical streaks, a typical pigment network, and regular globules. Our method (and [9]) wrongly detected this case as positive; it was correctly labeled by [8] as negative. For further examination, the interested reader is referred to http://www.sfu.ca/~amadooei/research/publication/TMI2015_sup.html where we have provided sample outputs (all true-positive cases) of our proposed method.

The methods of [8, 9] are simple to use and easy to understand, yet they suffer from disadvantages and shortcomings. For instance, their good sensitivity (recall) comes at the expense of low specificity. Note that to propagate the labels of pixels [8] and regions [9] to the image level, we applied a post-processing step: an image is labelled positive only if the pixels labelled positive are contained within the lesion. Some images contain bluish artefacts, e.g. the ruler markings at the corner of Fig. 4-i; this post-processing step reduces such false positives.
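This label-propagation step can be sketched as follows, assuming binary pixel-detection and lesion-segmentation masks; the `min_pixels` knob is an illustrative parameter, not a value from the paper.

```python
import numpy as np

def image_label(pixel_mask, lesion_mask, min_pixels=1):
    """Propagate pixel-level BWS detections to an image-level label.

    The image is labelled positive only if detected pixels lie within
    the lesion mask, which suppresses bluish artefacts (e.g. ruler
    markings) outside the lesion.
    """
    inside = np.logical_and(pixel_mask, lesion_mask)
    return int(inside.sum() >= min_pixels)
```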

Figure 4 shows sample outputs comparing the localization of BWS in test images among the different methods. There are some interesting observations to be made here, particularly in support of our hypothesis that our proposed method learns to detect salient regions for BWS identification. Please refer to the figure’s caption for details. Note that the main limitation of the proposed method compared to [8, 9] (and any other supervised learning in general) is that a wrong localization might still lead to a correct image-level output. This is, however, a limitation of MIL in general and not specific to our case.
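The MIL assumption behind this behaviour can be stated in a few lines: the image (bag) label depends only on the best-scoring region (instance), so a detector can score the wrong region highest yet still output the right image-level label.

```python
def bag_label(instance_scores, threshold=0.0):
    """Standard MIL assumption: a bag (image) is positive iff at least
    one instance (region) scores above the threshold. Only the maximum
    instance score matters, so localization errors need not change the
    image-level prediction."""
    return int(max(instance_scores) > threshold)
```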

Another general limitation of machine learning techniques, in particular supervised learning, is the issue of “domain adaptation.” The vast majority of learning methods today are trained and evaluated on the same image distribution. A training dataset might be biased by the way in which it was collected, and a different dataset (visual domain) can differ in various factors including scene illumination, camera characteristics, etc. Recent studies (see e.g. [45]) have demonstrated a significant degradation in the performance of state-of-the-art image classifiers due to domain shift. To deal with this issue, a class of techniques called “Transfer Learning” has emerged that aims to develop domain adaptation algorithms.

Although Transfer Learning is beyond the scope of this study, we aim to examine and compare the domain adaptability of our proposed method. To this end, we tested our method and those of [8, 9] on a second database called PH2 [43]. (Transfer learning in general aims to transfer knowledge between related domains. In computer vision, examples of transfer learning, besides domain adaptation, include studies which try to “overcome the deficit of training samples for some categories by adapting classifiers trained for other categories [46].” A good review can be found in [47].)

The dermoscopic image database PH2 [43] contains a total of 200 melanocytic lesions, including 80 common nevi, 80 atypical nevi, and 40 melanomas. This small database was built through a joint research collaboration between the Universidade do Porto, Técnico Lisboa, and the Dermatology service of Hospital Pedro Hispano in Matosinhos, Portugal. It includes the clinical diagnosis and dermoscopic criteria, including the presence (or absence) of blue-white veil and regression structures. Among the 200 images, 43 contain these features (positive cases) and the remaining 157 are free of them. This dataset is considerably less challenging than the Atlas, since most positive cases contain a sizeable BWS structure. Results are included in Table II for comparison. Note that no training was involved: all methods (both the proposed one and the baselines) were trained on the Atlas but tested on the PH2 set. The test results are consistent with our earlier experiments on the Atlas.
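The cross-dataset comparison reduces to computing sensitivity and specificity from binary image-level labels, which can be sketched as:

```python
def sens_spec(y_true, y_pred):
    """Sensitivity (recall on positives) and specificity (recall on
    negatives) from binary image-level labels, as used to compare
    detectors across datasets without retraining."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)
```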

Vi Conclusion

We proposed a new approach for automatic identification of the BWS feature which needs considerably less supervision than previous methods. Our method employs the MIL framework to learn from image-level labels, without explicit annotation of the image regions containing the feature under study. Experiments show that this method can learn not only to label the image but also to localize salient BWS regions in images, with high specificity, which is of great importance in medical applications. Our results are very encouraging, since it is often the case that supervised learning with fully-labeled data outperforms learning with only weakly-labeled data, with the performance of the latter being at best comparable to that of the former. In future work, we plan to adapt the multi-label multi-instance learning (MLMIL) framework to simultaneously detect multiple dermoscopic features.

References

  • [1] H. P. Soyer, G. Argenziano, I. Zalaudek, R. Corona, F. Sera, R. Talamini, F. Barbato, A. Baroni, L. Cicale, A. Di Stefani, P. Farro, L. Rossiello, E. Ruocco, and S. Chimenti, “Three-point checklist of dermoscopy,” vol. 208, no. 1, pp. 27–31, 2004.
  • [2] G. Argenziano, H. P. Soyer, V. De Giorgio, D. Piccolo, P. Carli, M. Delfino, A. Ferrari, R. Hofmann-Wellenhof, D. Massi, G. Mazzocchetti, M. Scalvenzi, and I. H. Wolf, Interactive Atlas of Dermoscopy (Book and CD-ROM).   Edra Medical Publishing & New Media, 2000.
  • [3] W. Stolz, A. Riemann, A. Cognetta, W. Abmayr, L. Pillet, and D. Holzel, “ABCD rule of dermoscopy: a new practical method for early recognition of melanoma.” vol. 4, no. 7, pp. 521–527, 1994.
  • [4] Argenziano G, Fabbrocini G, Carli P, De Giorgi V, Sammarco E, and Delfino M, “Epiluminescence microscopy for the diagnosis of doubtful melanocytic skin lesions: Comparison of the abcd rule of dermatoscopy and a new 7-point checklist based on pattern analysis,” vol. 134, no. 12, pp. 1563–1570, 1998.
  • [5] G. Argenziano, H. Soyer, S. Chimenti, R. Talamini, R. Corona, F. Sera, M. Binder, L. Cerroni, G. De Rosa, G. Ferrara, R. Hofmann-Wellenhof, M. Landthaler, S. W. Menzies, H. Pehamberger, D. Piccolo, H. S. Rabinovitz, R. Schiffner, S. Staibano, W. Stolz, I. Bartenjev, A. Blum, R. Braun, H. Cabo, P. Carli, V. De Giorgi, M. G. Fleming, J. M. Grichnik, C. M. Grin, A. C. Halpern, R. Johr, B. Katz, R. O. Kenet, H. Kittler, J. Kreusch, J. Malvehy, G. Mazzocchetti, M. Oliviero, F. Özdemir, K. Peris, R. Perotti, A. Perusquia, M. A. Pizzichetta, S. Puig, B. Rao, P. Rubegni, T. Saida, M. Scalvenzi, S. Seidenari, I. Stanganelli, M. Tanaka, K. Westerhoff, I. H. Wolf, O. Braun-Falco, H. Kerl, T. Nishikawa, K. Wolff, and A. W. Kopf, “Dermoscopy of pigmented skin lesions: Results of a consensus meeting via the internet,” vol. 48, no. 5, pp. 679–693, 2003.
  • [6] A. Madooei, M. S. Drew, M. Sadeghi, and M. S. Atkins, Intrinsic Melanin and Hemoglobin Colour Components for Skin Lesion Malignancy Detection, ser. Lecture Notes in Computer Science.   Springer Berlin Heidelberg, 2012, no. 7510, pp. 315–322.
  • [7] A. Madooei and M. S. Drew, A Probabilistic Approach to Quantification of Melanin and Hemoglobin Content in Dermoscopy Images, ser. Lecture Notes in Computer Science.   Springer International Publishing, 2014, no. 8673, pp. 49–56.
  • [8] M. E. Celebi, H. Iyatomi, W. V. Stoecker, R. H. Moss, H. S. Rabinovitz, G. Argenziano, and H. P. Soyer, “Automatic detection of blue-white veil and related structures in dermoscopy images,” vol. 32, no. 8, pp. 670–677, 2008.
  • [9] A. Madooei, M. S. Drew, M. Sadeghi, and M. S. Atkins, Automatic Detection of Blue-White Veil by Discrete Colour Matching in Dermoscopy Images, ser. Lecture Notes in Computer Science.   Springer Berlin Heidelberg, 2013, no. 8151, pp. 453–460.
  • [10] S. Umbaugh, R. Moss, and W. Stoecker, “Automatic color segmentation of images with application to detection of variegated coloring in skin tumors,” vol. 8, no. 4, pp. 43–50, 1989.
  • [11] S. Seidenari, G. Pellacani, and G. Costantino, Automated Assessment of Pigment Distribution and Color Areas for Melanoma Diagnosis, 2nd ed.   Informa Healthcare, CRC/Taylor & Francis, 2006, pp. 135–146 [Chapter 18].
  • [12] K. Korotkov and R. Garcia, “Computerized analysis of pigmented skin lesions: A review,” vol. 56, no. 2, pp. 69–90, 2012.
  • [13] M. Ogorzałek, G. Surówka, L. Nowak, and C. Merkwirth, Computational Intelligence and Image Processing Methods for Applications in Skin Cancer Diagnosis, ser. Communications in Computer and Information Science.   Springer Berlin Heidelberg, 2010, no. 52, pp. 3–20.
  • [14] G. Sforza, G. Castellano, R. Stanley, W. Stoecker, and J. Hagerty, “Adaptive segmentation of gray areas in dermoscopy images,” in IEEE International Workshop on Medical Measurements and Applications Proceedings (MeMeA).   IEEE, 2011, pp. 628–631.
  • [15] M. E. Celebi, H. A. Kingravi, Y. A. Aslandogan, and W. V. Stoecker, “Detection of blue-white veil areas in dermoscopy images using machine learning techniques,” in Proceedings of SPIE Medical Imaging: Image Processing, vol. 6144.   International Society for Optics and Photonics, 2006, pp. 61 445T–61 445T–8.
  • [16] V. De Vita, G. D. L. Di Leo, G. Fabbrocini, C. Liguori, A. Paolillo, and P. Sommella, “Statistical techniques applied to the automatic diagnosis of dermoscopic images,” vol. 1, no. 1, pp. 7–18, 2012.
  • [17] G. Fabbrocini, V. D. Vita, S. Cacciapuoti, G. D. Leo, C. Liguori, A. Paolillo, A. Pietrosanto, and P. Sommella, Automatic Diagnosis of Melanoma Based on the 7-Point Checklist, ser. Series in BioEngineering.   Springer Berlin Heidelberg, 2014, pp. 71–107.
  • [18] T. Wadhawan, R. Hu, and G. Zouridakis, “Detection of blue-whitish veil in melanoma using color descriptors,” in IEEE EMBS International Conference on Biomedical and Health Informatics (BHI).   IEEE, 2012, pp. 503–506.
  • [19] M. Lingala, R. Joe Stanley, R. K. Rader, J. Hagerty, H. S. Rabinovitz, M. Oliviero, I. Choudhry, and W. V. Stoecker, “Fuzzy logic color detection: Blue areas in melanoma dermoscopy images,” vol. 38, no. 5, pp. 403–410, 2014.
  • [20] A. Sboner, E. Blanzieri, C. Eccher, P. Bauer, M. Cristofolini, G. Zumiani, and S. Forti, “A knowledge based system for early melanoma diagnosis support,” in Proceedings of the 6th IDAMAP workshop-Intelligent Data Analysis in Medicine and Pharmacology (IDAMAP), R. Bellazzi, B. Zupan, and X. Liu, Eds., 2001, pp. 30–35.
  • [21] J. Chen, R. J. Stanley, R. H. Moss, and W. Van Stoecker, “Colour analysis of skin lesion regions for melanoma discrimination in clinical images,” vol. 9, no. 2, pp. 94–104, 2003.
  • [22] A. Sboner, C. Eccher, E. Blanzieri, P. Bauer, M. Cristofolini, G. Zumiani, and S. Forti, “A multiple classifier system for early melanoma diagnosis,” vol. 27, no. 1, pp. 29–44, 2003.
  • [23] S. Seidenari, G. Pellacani, and C. Grana, “Computer description of colours in dermoscopic melanocytic lesion images reproducing clinical assessment,” vol. 149, no. 3, pp. 523–529, 2003.
  • [24] G. Pellacani, C. Grana, and S. Seidenari, “Automated description of colours in polarized-light surface microscopy images of melanocytic lesions,” vol. 14, no. 2, p. 125, 2004.
  • [25] A. Sboner, P. Bauer, G. Zumiani, C. Eccher, E. Blanzieri, S. Forti, and M. Cristofolini, “Clinical validation of an automated system for supporting the early diagnosis of melanoma,” vol. 10, no. 3, pp. 184–192, 2004.
  • [26] S. Seidenari, G. Pellacani, and C. Grana, “Colors in atypical nevi: a computer description reproducing clinical assessment,” vol. 11, no. 1, pp. 36–41, 2005.
  • [27] ——, Early Detection of Melanoma by Image Analysis.   Informa Healthcare, CRC/Taylor & Francis, 2007, pp. 305–311 [Chapter 22].
  • [28] J. Alcon, C. Ciuhu, W. ten Kate, A. Heinrich, N. Uzunbajakava, G. Krekels, D. Siem, and G. de Haan, “Automatic imaging system with decision support for inspection of pigmented skin lesions and melanoma diagnosis,” vol. 3, no. 1, pp. 14–25, 2009.
  • [29] P. G. Cavalcanti and J. Scharcanski, “Automated prescreening of pigmented skin lesions using standard cameras,” vol. 35, no. 6, pp. 481–491, 2011.
  • [30] C. S. P. Silva, A. R. S. Marcal, M. A. Pereira, T. Mendonça, and J. Rozeira, Separability Analysis of Color Classes on Dermoscopic Images, ser. Lecture Notes in Computer Science.   Springer Berlin Heidelberg, 2012, no. 7325, pp. 268–277.
  • [31] P. G. Cavalcanti, J. Scharcanski, and G. V. G. Baranoski, “A two-stage approach for discriminating melanocytic skin lesions using standard cameras,” vol. 40, no. 10, pp. 4054–4064, 2013.
  • [32] C. Barata, M. A. Figueiredo, M. Celebi, and J. S. Marques, “Color identification in dermoscopy images using gaussian mixture models,” in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).   IEEE, 2014, pp. 3611–3615.
  • [33] S. W. Menzies, L. Bischof, H. Talbot, A. Gutenev, M. Avramidis, L. Wong, S. K. Lo, G. Mackellar, V. Skladnev, W. McCarthy, J. Kelly, B. Cranney, P. Lye, H. Rabinovitz, M. Oliviero, A. Blum, A. Varol, A. Virol, B. De’Ambrosis, R. McCleod, H. Koga, C. Grin, R. Braun, and R. Johr, “The performance of SolarScan: an automated dermoscopy image analysis instrument for the diagnosis of primary melanoma,” vol. 141, no. 11, pp. 1388–1396, 2005.
  • [34] A. Madooei and M. Drew, “A colour palette for automatic detection of blue-white veil,” in 21st Color and Imaging Conference: Color Science and Engineering Systems, Technologies, and Applications, 2013, pp. 200–205.
  • [35] B. Georgescu and C. M. Christoudias, “The edge detection and image segmentation (EDISON) system,” Robust Image Understanding Laboratory, Rutgers University. Code available at http://coewww.rutgers.edu/riul/research/code/EDISON/, 2003.
  • [36] N. Otsu, “A threshold selection method from gray-level histograms,” vol. 9, no. 1, pp. 62–66, 1979.
  • [37] A. Madooei, M. Drew, M. Sadeghi, and S. Atkins, “Automated pre-processing method for dermoscopic images and its application to pigmented skin lesion segmentation,” in Twentieth Color and Imaging Conference: Color Science and Engineering Systems, Technologies, and Applications, 2012, pp. 158–163.
  • [38] T. Ojala, M. Pietikainen, and T. Maenpaa, “Multiresolution gray-scale and rotation invariant texture classification with local binary patterns,” vol. 24, no. 7, pp. 971–987, 2002.
  • [39] M. Varma and A. Zisserman, “A statistical approach to texture classification from single images,” vol. 62, no. 1, pp. 61–81, 2005.
  • [40] S. Andrews, I. Tsochantaridis, and T. Hofmann, “Support vector machines for multiple-instance learning,” in Proceedings of the 15th Advances in Neural Information Processing Systems.   MIT Press, 2002, pp. 561–568.
  • [41] H. Hajimirsadeghi, J. Li, G. Mori, M. Zaki, and T. Sayed, “Multiple instance learning by discriminative training of markov networks,” in Proceedings of the Twenty-Ninth Conference on Uncertainty in Artificial Intelligence (UAI2013), 2013, pp. 262–271.
  • [42] T.-M.-T. Do and T. Artières, “Large margin training for hidden markov models with partially observed states,” in Proceedings of the 26th Annual International Conference on Machine Learning, ser. ICML ’09.   ACM, 2009, pp. 265–272.
  • [43] T. Mendonca, P. Ferreira, J. Marques, A. Marcal, and J. Rozeira, “PH2 - a dermoscopic image database for research and benchmarking,” in Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS).   IEEE, pp. 5437–5440.
  • [44] J. R. Quinlan, C4.5: Programs for Machine Learning.   Morgan Kaufmann Publishers Inc., 1993.
  • [45] A. Torralba and A. Efros, “Unbiased look at dataset bias,” in 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).   IEEE, pp. 1521–1528.
  • [46] M. Oquab, L. Bottou, I. Laptev, and J. Sivic, “Learning and transferring mid-level image representations using convolutional neural networks,” in 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1717–1724.
  • [47] S. J. Pan and Q. Yang, “A survey on transfer learning,” vol. 22, no. 10, pp. 1345–1359.