1 Introduction
Segmentation is an important task for highlighting regions of interest in medical images. In the ISIC 2018 challenge, the goal is to perform lesion boundary segmentation. We chose a U-Net architecture [2] for this task, as it is known to perform well on medical images without requiring a large data-set. The details of the training are explained below, and an example of mask generation is given in figure 1. Our data was extracted from the ISIC 2018: Skin Lesion Analysis Towards Melanoma Detection grand challenge data-sets [4, 3].

2 Methods
2.1 Pre-processing
Before training the U-Net network [2], we first pre-processed the data-set. The first statistic consisted in estimating the average pixel value, i.e. the proportion of each color channel (red is slightly more prominent), together with the standard deviation over the data-set. This information was used during training to re-center the training set. Another important quantity was the average proportion of the mole in the image, estimated as:

$$\rho \;=\; \frac{1}{N}\sum_{k=1}^{N} \frac{|m_k|}{|I_k|}, \qquad (2.1)$$

where $|m_k|$ is the number of foreground pixels of the $k$-th mask and $|I_k|$ the total number of pixels of the $k$-th image. Masks cover on average only a small fraction of the image, and the loss function was modified to compensate for this imbalance. Lastly, we also estimated the position of the mole inside the image. As expected, moles are more likely to appear near the center (see fig. 2-left). This information could be used as a prior in a Bayesian approach (not used here).
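As an illustration, these statistics can be computed in one pass over the data-set. The sketch below assumes the images and masks are available as NumPy arrays; `images` and `masks` are hypothetical names standing in for the ISIC data loader.

```python
import numpy as np

def dataset_statistics(images, masks):
    """One-pass estimate of the channel means/stds and the mole ratio (2.1).

    `images`: iterable of HxWx3 uint8 arrays; `masks`: iterable of HxW
    binary arrays. Both names are placeholders for the ISIC data loader.
    """
    channel_sum = np.zeros(3)
    channel_sq = np.zeros(3)
    n_pixels = 0
    mole_ratios = []
    for img, mask in zip(images, masks):
        x = img.astype(np.float64) / 255.0
        channel_sum += x.sum(axis=(0, 1))
        channel_sq += (x ** 2).sum(axis=(0, 1))
        n_pixels += x.shape[0] * x.shape[1]
        # Equation (2.1): proportion of foreground pixels in this mask.
        mole_ratios.append(mask.astype(bool).mean())
    mean = channel_sum / n_pixels                     # used to re-center images
    std = np.sqrt(channel_sq / n_pixels - mean ** 2)  # per-channel deviation
    return mean, std, float(np.mean(mole_ratios))
```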
Figure 2: Left: spatial distribution of the mole positions in the image. Right: evolution of the loss function (blue) and Jaccard index (orange) per epoch.
2.2 Training
For the training of the U-Net network, we used the cross-entropy as loss function and the Adam optimizer to update the parameters. The learning rate was reduced whenever the loss function stopped decaying. The evolution per epoch of a typical training run is given in figure 2-right. The orange curve gives the evolution of the so-called Jaccard index used to rank the methods. Standard data-augmentation methods were used (random flipping and image rotation). The code was implemented using PyTorch.
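A minimal PyTorch sketch of this training setup is given below. The class weights, learning rate, and scheduler settings are illustrative assumptions (the paper does not list the exact values), and `model` and `train_loader` are hypothetical names.

```python
import torch
import torch.nn as nn

def jaccard_index(pred, target, eps=1e-7):
    """Intersection over union between two binary masks (the ranking metric)."""
    inter = (pred & target).sum().float()
    union = (pred | target).sum().float()
    return (inter / (union + eps)).item()

def train(model, train_loader, n_epochs=50, device="cuda"):
    # Weighted cross-entropy to compensate for the small foreground ratio;
    # the weights [1, 10] are illustrative, not the values used in the paper.
    criterion = nn.CrossEntropyLoss(weight=torch.tensor([1.0, 10.0]).to(device))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    # Reduce the learning rate when the loss stops decaying.
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
        optimizer, mode="min", factor=0.5, patience=5)
    for epoch in range(n_epochs):
        epoch_loss = 0.0
        for images, masks in train_loader:
            images, masks = images.to(device), masks.to(device)
            scores = model(images)           # (B, 2, H, W): background/foreground
            loss = criterion(scores, masks)  # masks: (B, H, W) long tensor
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            epoch_loss += loss.item()
        scheduler.step(epoch_loss)
```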
2.3 Post-processing
The U-Net outputs two scores for each pixel, $s_-$ for the background and $s_+$ for the foreground. These scores are transformed into probabilities via the softmax function:

$$p_+ \;=\; \frac{e^{s_+}}{e^{s_-} + e^{s_+}}, \qquad p_- = 1 - p_+. \qquad (2.2)$$

The mask is then taken as the set of pixels whose foreground probability exceeds a threshold. However, we found that the masks can be further improved by applying several post-processing methods to the scores. First, we apply a Gaussian filter to both score maps. Plotting the difference $s_+ - s_-$ (see fig. 3-left), we observe three levels: one along the boundary of the mole, one at its center, and one over the normal skin. The histogram of all these values (regardless of their position in the image) is given in fig. 3-right, and the threshold separating 'skin' from 'mole' lies between the two main modes. We use the Otsu algorithm [1] to estimate this threshold automatically.
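A sketch of this post-processing pipeline, using SciPy's Gaussian filter and scikit-image's Otsu implementation, is given below; the smoothing width `sigma` is an assumed value, not the one used in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.filters import threshold_otsu

def post_process(s_fg, s_bg, sigma=5.0):
    """Turn raw U-Net scores into a binary mask.

    `s_fg`, `s_bg`: HxW arrays of foreground/background scores.
    """
    # Smooth both score maps with a Gaussian filter.
    s_fg = gaussian_filter(s_fg, sigma)
    s_bg = gaussian_filter(s_bg, sigma)
    # Work on the difference of scores; Otsu's method picks the threshold
    # separating the 'skin' and 'mole' modes of its histogram (fig. 3-right).
    diff = s_fg - s_bg
    t = threshold_otsu(diff)
    return diff > t
```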
Figure 3: Left: difference of the smoothed scores $s_+ - s_-$. Right: histogram of these values.
2.4 Results
Our method achieves a consistent Jaccard index across the various test sets. On the official validation set, the network scores lower due to the metric used: per-image Jaccard scores below 0.65 are counted as zero.
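For reference, a sketch of the thresholded metric as we understand the challenge's ranking rule (per-image Jaccard, zeroed below a 0.65 cutoff) is given below.

```python
import numpy as np

def thresholded_jaccard(pred, target, cutoff=0.65):
    """Challenge-style metric: per-image Jaccard, counted as zero below cutoff."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    j = inter / union if union > 0 else 1.0  # both masks empty: perfect match
    return j if j >= cutoff else 0.0
```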
Acknowledgments. The authors would like to thank Thomas Laurent and Xavier Bressan for fruitful suggestions. S. Motsch thanks Renate Mittelman, Salil Malik, and the research computing staff at ASU for their support during the computation.
References
- [1] Nobuyuki Otsu. A threshold selection method from gray-level histograms. IEEE Transactions on Systems, Man, and Cybernetics, 9(1):62–66, 1979.
- [2] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 234–241. Springer, 2015.
- [3] Philipp Tschandl, Cliff Rosendahl, and Harald Kittler. The HAM10000 dataset: a large collection of multi-source dermatoscopic images of common pigmented skin lesions. arXiv preprint arXiv:1803.10417, 2018.
- [4] Eduardo Valle, Michel Fornaciali, Afonso Menegola, Julia Tavares, Flávia Vasques Bittencourt, Lin Tzy Li, and Sandra Avila. Data, depth, and design: learning reliable models for melanoma screening. arXiv preprint arXiv:1711.00441, 2017.