Lesion segmentation using U-Net network

07/23/2018 · by Adrien Motsch, et al.

This paper explains the method used for the segmentation challenge (Task 1) of the International Skin Imaging Collaboration's (ISIC) Skin Lesion Analysis Towards Melanoma Detection challenge held in 2018. We trained a U-Net network to perform the segmentation. The key elements of the training were, first, adjusting the loss function to account for the unbalanced proportion of background and, second, applying post-processing operations to adjust the contour of the prediction.

1 Introduction

Segmentation is an important task to highlight zones of interest in medical images. In the ISIC 2018 challenge, the goal is to perform lesion boundary segmentation. To perform the segmentation, we chose a U-Net architecture [2], as it is known to perform well on medical images even without a large data-set. The details of the training are explained below, and an example of mask generation is given in figure 1. Our data was extracted from the ISIC 2018: Skin Lesion Analysis Towards Melanoma Detection grand challenge data-sets [4, 3].
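For illustration, a U-Net-style encoder-decoder can be sketched in a few dozen lines of PyTorch. The block below is only a minimal sketch with arbitrary channel widths and depth (and assumes input sizes divisible by 4); it is not the exact architecture trained for the challenge.

```python
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    """Two 3x3 convolutions with ReLU, as in the original U-Net block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1),
        nn.ReLU(inplace=True),
    )

class SmallUNet(nn.Module):
    """Minimal U-Net sketch: 3 encoder levels, 2 decoder levels, skip
    connections. Outputs two score maps per pixel (background, foreground)."""
    def __init__(self, in_ch=3, n_classes=2):
        super().__init__()
        self.enc1 = double_conv(in_ch, 32)
        self.enc2 = double_conv(32, 64)
        self.enc3 = double_conv(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = double_conv(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = double_conv(64, 32)
        self.head = nn.Conv2d(32, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)                      # full resolution
        e2 = self.enc2(self.pool(e1))          # 1/2 resolution
        e3 = self.enc3(self.pool(e2))          # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                   # raw scores; softmax applied later
```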

Figure 1: Example of segmentation (image ISIC_0000009.jpg from the 2018 training set): the original image is on the left, the manually generated mask ('ground truth') in the middle, and the prediction of our network based on the U-Net architecture on the right.

2 Methods

2.1 Pre-processing

Before training the U-Net network [2], we first pre-processed the data-set. The first statistic consisted in estimating the average pixel value, i.e. the proportion of each color channel (red is slightly more prominent) together with the standard deviation over the data-set. This information was used during training to re-center the training set. Another important piece of information was the average proportion of the mole in an image, estimated as:

$$\bar{r} \;=\; \frac{1}{N}\sum_{i=1}^{N} \frac{\#\{\text{mole pixels in image } i\}}{\#\{\text{pixels in image } i\}}, \qquad (2.1)$$

where $N$ is the number of training images. Masks represent on average only a small fraction of the image; the loss function was modified to compensate for this imbalance. Lastly, we also estimated the position of the mole inside the image: as expected, moles are more likely present in the center (see fig. 2-left). This information can be used as a prior for a Bayesian approach (not used here).
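As an illustration of this pre-processing step, the sketch below estimates the per-channel mean and standard deviation and the average mask proportion with NumPy and Pillow. The directory layout and file-name pattern are assumptions made for the example, not the challenge's official layout.

```python
import numpy as np
from PIL import Image
from pathlib import Path

def dataset_statistics(image_dir, mask_dir):
    """Estimate per-channel mean/std of the pixels and the average
    proportion of mole (mask) pixels per image.
    The mask file naming below is an assumption for illustration."""
    means, stds, ratios = [], [], []
    for img_path in sorted(Path(image_dir).glob("*.jpg")):
        img = np.asarray(Image.open(img_path), dtype=np.float32) / 255.0
        mask_path = Path(mask_dir) / (img_path.stem + "_segmentation.png")
        mask = np.asarray(Image.open(mask_path)) > 0
        means.append(img.reshape(-1, 3).mean(axis=0))   # per-channel mean
        stds.append(img.reshape(-1, 3).std(axis=0))     # per-channel std
        ratios.append(mask.mean())                      # fraction of mole pixels
    return np.mean(means, axis=0), np.mean(stds, axis=0), float(np.mean(ratios))

# The channel mean/std are used to re-center (normalize) the training images;
# the average mask ratio motivates the loss weighting of section 2.2.
```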

Figure 2: Left: Probability that a pixel is part of a mole over the whole training set. The probability is highest at the center and close to zero at the boundary. Right: Evolution of the loss function (blue) and Jaccard index (orange) per epoch.

2.2 Training

For the training of the U-Net network, we used the cross entropy as loss function and the Adam optimizer to update the parameters. The learning rate was reduced whenever the loss function stopped decreasing. The evolution per epoch of a typical training is given in figure 2-right; the orange curve shows the Jaccard index used to rank the methods. Standard data-augmentation methods were used (random flipping and image rotation). The code was implemented using PyTorch.
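A minimal sketch of such a training setup in PyTorch is given below. The class weight, initial learning rate, scheduler settings and number of epochs are placeholders (the paper's exact values are not restated here), and `train_loader` is assumed to be a DataLoader yielding image/mask batches.

```python
import torch
import torch.nn as nn

def train(model, train_loader, num_epochs=50, lr=1e-3, fg_weight=4.0):
    """Train a two-class segmentation network with weighted cross-entropy.
    `lr`, `fg_weight` and the scheduler settings are placeholders, not the
    paper's values; `train_loader` yields (images, masks) batches, with
    masks given as LongTensors of 0/1 class labels."""
    # Class weights compensate the background/foreground imbalance (section 2.1).
    criterion = nn.CrossEntropyLoss(weight=torch.tensor([1.0, fg_weight]))
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    # Reduce the learning rate when the loss stops decreasing.
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
        optimizer, mode="min", factor=0.5, patience=5)

    for epoch in range(num_epochs):
        epoch_loss = 0.0
        for images, masks in train_loader:
            optimizer.zero_grad()
            scores = model(images)           # (N, 2, H, W) raw scores
            loss = criterion(scores, masks)  # masks: (N, H, W) class indices
            loss.backward()
            optimizer.step()
            epoch_loss += loss.item()
        scheduler.step(epoch_loss)
```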

2.3 Post-processing

The U-Net gives two scores for each pixel, $s_{\mathrm{bg}}$ for the background and $s_{\mathrm{fg}}$ for the foreground. These scores are transformed into a probability via the softmax function:

$$p_{\mathrm{fg}} \;=\; \frac{e^{s_{\mathrm{fg}}}}{e^{s_{\mathrm{bg}}} + e^{s_{\mathrm{fg}}}}. \qquad (2.2)$$

The mask is then taken as the set of pixels with probability higher than 0.5, which is equivalent to a zero threshold on the score difference $d = s_{\mathrm{fg}} - s_{\mathrm{bg}}$. However, we found that the masks can be further improved by several post-processing operations on the scores. First, we apply a Gaussian filter to both score maps. Plotting the smoothed difference $d$ (see fig. 3-left), we observe three levels: one at the boundary of the image, one at the center where the mole is, and one on the normal skin. The histogram of all these values (regardless of their position in the image) is given in fig. 3-right. The threshold separating 'skin' from 'mole' lies between the corresponding modes; we use the Otsu algorithm [1] to estimate it automatically.

Figure 3: Distribution of the score difference $d = s_{\mathrm{fg}} - s_{\mathrm{bg}}$ (foreground minus background) returned by the U-Net network. Rather than taking a zero threshold to determine the mask, we first estimate the threshold separating the distributions of scores (Otsu method).
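A possible implementation of this post-processing with SciPy and scikit-image is sketched below; the Gaussian width `sigma` is a placeholder, as the value used in the paper is not restated here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.filters import threshold_otsu

def postprocess(scores_bg, scores_fg, sigma=2.0):
    """Turn the two U-Net score maps into a binary mask.
    `sigma` is a placeholder value, not the one used in the paper."""
    # Smooth both score maps with a Gaussian filter.
    s_bg = gaussian_filter(scores_bg, sigma)
    s_fg = gaussian_filter(scores_fg, sigma)
    # Score difference d; d > 0 is equivalent to a softmax
    # foreground probability above 0.5.
    d = s_fg - s_bg
    # Estimate the skin/mole threshold on d automatically (Otsu method)
    # instead of using the default zero threshold.
    t = threshold_otsu(d)
    return d > t
```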

2.4 Results

Our method scores a consistent Jaccard index over the various test sets. On the validation set, the network scores lower due to the metric used (per-image scores below a fixed threshold are counted as zero).
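For reference, the sketch below computes such a thresholded Jaccard score; the cutoff below which a per-image score is counted as zero is left as a parameter, since its value is not restated in the text.

```python
import numpy as np

def thresholded_jaccard(pred_masks, true_masks, cutoff):
    """Average per-image Jaccard index, where any image whose score falls
    below `cutoff` is counted as zero (the challenge-style ranking metric).
    `cutoff` is left as a parameter; its value is not restated in the text."""
    scores = []
    for pred, true in zip(pred_masks, true_masks):
        inter = np.logical_and(pred, true).sum()
        union = np.logical_or(pred, true).sum()
        j = inter / union if union > 0 else 1.0  # both masks empty -> perfect
        scores.append(j if j >= cutoff else 0.0)
    return float(np.mean(scores))
```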

Acknowledgments. The authors would like to thank Thomas Laurent and Xavier Bressan for fruitful suggestions. S. Motsch thanks Renate Mittelman, Salil Malik and the people at Research Computing (ASU) for their support during the computation.

References