MobileGAN: Skin Lesion Segmentation Using a Lightweight Generative Adversarial Network

07/01/2019, by Md. Mostafa Kamal Sarker et al., Universitat Rovira i Virgili

Skin lesion segmentation in dermoscopic images is challenging due to the blurry and irregular boundaries of the lesions. Most segmentation approaches based on deep learning are time and memory consuming because of their hundreds of millions of parameters. Consequently, it is difficult to deploy them on real dermatoscope devices with limited GPU and memory resources. In this paper, we propose a lightweight and efficient Generative Adversarial Network (GAN) model, called MobileGAN, for skin lesion segmentation. More precisely, MobileGAN combines 1D non-bottleneck factorization networks with position and channel attention modules in a GAN model. The proposed model is evaluated on the test dataset of the ISBI 2017 challenge and the validation dataset of the ISIC 2018 challenge. Although the proposed network has only 2.35 million parameters, it is still comparable with the state-of-the-art. The experimental results show that our MobileGAN obtains comparable performance with an accuracy of 97.61%.


1 Introduction

Skin cancer is one of the most widespread cancers. According to the World Health Organization (WHO), there were 1.04 million cases in 2018 (https://www.who.int/news-room/fact-sheets/detail/cancer).

Over the last decades, the incidence of both melanoma and non-melanoma skin cancers has increased rapidly [3]. Melanoma is the most dangerous type of skin cancer, and 75% of skin cancer deaths are related to it [4]. Non-invasive, computerized image analysis of dermoscopic images is becoming very important for physicians to inspect pigmented skin lesions and detect malignant melanoma at an early stage [10], in order to improve the survival rate and reduce costs. Consequently, a Computer-Aided Diagnosis (CAD) system is essential to support dermatologists in investigating dermoscopic images and segmenting melanomas as precisely as possible.
Several melanoma segmentation methods have been proposed in the literature [2]. The main challenges in the segmentation of pigmented skin lesions include the huge diversity in color, shape, texture and size, as well as the low contrast between lesion and skin tissues, irregular and fuzzy boundaries, and the presence of blood vessels and hairs [2]. Several methods have been proposed to cope with these challenges using traditional image processing algorithms, such as histogram thresholding, unsupervised clustering, and supervised segmentation methods (see an overview in [7]). However, these approaches yield inaccurate segmentation results when the skin lesions have fuzzy boundaries [7]. In addition, the performance of these methods highly relies on pre-processing algorithms, such as hair removal and contrast enhancement.
With the rapid progress of deep learning models, many skin lesion segmentation approaches have been introduced that increase segmentation accuracy. For instance, the SLSDeep model was proposed in [17] to segment skin lesions using feature pyramid pooling. In [2], a full resolution convolutional network (FrCN) was introduced to directly learn the full-resolution features of each pixel of the input image without the need for pre- or post-processing operations. Besides, a GAN with a multi-scale loss function, called SegAN, has also been proposed for skin lesion segmentation in [18].
All of the methods mentioned above provide high precision; however, they have tens or hundreds of millions of parameters. In this paper, we propose a lightweight GAN model, named MobileGAN, for skin lesion segmentation in dermoscopic images. In the proposed model, we extract low-level features with multi-scale convolutional networks. To reduce the computational cost, the proposed model uses 1D non-bottleneck factorized networks. Moreover, position and channel attention modules are used to improve the feature representations along the spatial and channel dimensions.
The contributions are: 1) to cope with shadows by assuming that only true lesion regions appear consistently at multiple scales (consequently, a multi-scale block is introduced for aggregating the coarse-to-fine features of dermoscopic images); 2) to reduce the computational cost by using a 1D non-bottleneck factorized network [15]; 3) to enhance the discriminative ability of the feature representations in the spatial and channel dimensions by using both position and channel attention modules [11]; and 4) to use a combination of binary cross entropy, Jaccard and pixel-wise norm losses as the loss function for training the modified GAN model.

2 Proposed Model

2.1 Network architecture

Figure 1: The architecture of the proposed MobileGAN network: generator network (top) and discriminator network (bottom).

The generative adversarial network pix2pix [12] has been used in different tasks, such as synthetic image generation and medical image segmentation. It consists of two main networks: a generator G and a discriminator D. The generator is an encoder-decoder architecture that learns the mapping from an image in the source domain (the skin image) to the target domain (the segmented lesion). The discriminator compares the generated segmentation masks with the real segmented images. Figure 1 presents the architecture of the proposed model, which has the same G and D structure as the pix2pix model. We remind the reader that, to alleviate false detections due to shadows, a multi-scale block for aggregating the coarse-to-fine features of dermoscopic images is used. Below, we explain the encoder and decoder networks of the generator, and the discriminator network, in detail.

The encoder network: the input images for the encoder of the generator network are scaled to four resolutions (i.e., the original input size and three additional resolutions), as shown in Figure 1. The four resolutions are fed into four convolutional blocks to generate feature maps at each scale. The four convolutional blocks are then followed by four channel attention modules (CAM) to capture visual feature dependencies along the channel dimension (for more details, see supplementary A.6). Afterwards, we upsample the three scaled inputs to the size of the original input image using bilinear interpolation and then average the feature maps of the four scales to generate 16 feature maps. The encoder network can thus extract low-level features at different scales in order to cope with shadows. In addition, the resulting feature maps are created in both the spatial and frequency domains. The resulting 16 feature maps are fed into two Convolutional-Downsampling-Attention (CDA) layers. Each CDA layer comprises a convolutional block followed by a max pooling of 2, and then a Position Attention Module (PAM) to capture spatial features (for more details, see supplementary A.5). These two layers produce 64 feature maps that are fed into the next four Factorized-Attention (FCA) layers. Each FCA layer consists of a non-bottleneck factorized block followed by a CAM. The resulting feature maps are fed into a CDA layer to obtain 128 feature maps, which are fed into eight FCA layers. The output of the eighth FCA layer is fed into a non-bottleneck factorized block followed by two parallel attention blocks, one CAM and one PAM, whose outputs are summed to capture high-level visual features along both the position and channel dimensions. The final 128 feature maps are fed into the decoder to reconstruct the segmented image.
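To make the factorized block concrete, below is a minimal PyTorch sketch of a 1D non-bottleneck residual block in the spirit of ERFNet [15], on which the FCA layers build. The layer widths, dilation rate and dropout value shown are illustrative assumptions, not the exact MobileGAN configuration.

```python
import torch
import torch.nn as nn

class NonBottleneck1D(nn.Module):
    """Factorized residual block: each 3x3 convolution is replaced by a 3x1
    followed by a 1x3 convolution, reducing parameters while preserving the
    receptive field (in the spirit of ERFNet [15])."""
    def __init__(self, channels, dilation=1, dropout=0.1):
        super().__init__()
        self.conv3x1_1 = nn.Conv2d(channels, channels, (3, 1), padding=(1, 0))
        self.conv1x3_1 = nn.Conv2d(channels, channels, (1, 3), padding=(0, 1))
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv3x1_2 = nn.Conv2d(channels, channels, (3, 1),
                                   padding=(dilation, 0), dilation=(dilation, 1))
        self.conv1x3_2 = nn.Conv2d(channels, channels, (1, 3),
                                   padding=(0, dilation), dilation=(1, dilation))
        self.bn2 = nn.BatchNorm2d(channels)
        self.dropout = nn.Dropout2d(dropout)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.conv3x1_1(x))
        out = self.relu(self.bn1(self.conv1x3_1(out)))
        out = self.relu(self.conv3x1_2(out))
        out = self.bn2(self.conv1x3_2(out))
        out = self.dropout(out)
        return self.relu(out + x)  # residual connection

# Example: a 128-channel block, as used in the deeper FCA stages.
block = NonBottleneck1D(128)
print(block(torch.rand(1, 128, 32, 32)).shape)
```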


The decoder network: we upsample the final output of the encoder to feed two parallel streams. Each stream consists of one Deconvolutional-Upsampling-Attention (DUA) layer and two FCA layers. The final feature maps are upsampled to obtain the segmented image. In all layers of the encoder and decoder networks, we use convolutional and deconvolutional filters with a fixed kernel size, a stride of 2 and a padding of 1 (for more details, see supplementary A.4).


In the testing phase, the trained generator network is used to produce the segmentation mask for each test image.
The discriminator network: it comprises four convolutional and downsampling layers. The four convolutional layers use a fixed kernel size, a stride of 2, and a padding of 1. In the second layer, a PAM block is added after the convolutional block, while in the third layer, a CAM block is added.
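For reference, the following is a minimal PyTorch sketch of a position attention module following the dual attention network of [11], which the PAM blocks above are based on (see supplementary A.5 for the exact formulation used in MobileGAN). The channel reduction factor of 8 is the choice made in [11] and is an assumption here.

```python
import torch
import torch.nn as nn

class PositionAttentionModule(nn.Module):
    """Self-attention over spatial positions, in the style of the position
    attention module of the dual attention network [11]."""
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable residual weight
        self.softmax = nn.Softmax(dim=-1)

    def forward(self, x):
        b, c, h, w = x.size()
        q = self.query(x).view(b, -1, h * w).permute(0, 2, 1)   # B x HW x C'
        k = self.key(x).view(b, -1, h * w)                      # B x C' x HW
        attn = self.softmax(torch.bmm(q, k))                    # B x HW x HW
        v = self.value(x).view(b, -1, h * w)                    # B x C x HW
        out = torch.bmm(v, attn.permute(0, 2, 1)).view(b, c, h, w)
        return self.gamma * out + x                             # residual output

# Example usage on a 64-channel feature map.
pam = PositionAttentionModule(64)
print(pam(torch.rand(1, 64, 32, 32)).shape)
```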

2.2 Model training

The G and D networks are alternately trained by back-propagation in an adversarial fashion: we first fix G and train D for one step using gradients computed from the loss function, and then fix D and train G for another step using gradients computed from the same loss function, passed from D to G. Assume x is a skin lesion image containing a lesion, y is the ground-truth segmentation mask of that lesion, and G(x, z) and D(·) are the outputs of the generator and the discriminator, respectively. The generator loss function comprises three terms: a binary cross entropy loss, a pixel-wise norm loss that penalizes outliers, and a Jaccard loss that increases the intersection with the ground truth, where the last two terms are weighted by the empirical factors λ_norm and λ_JL. The variable z is a random variable introduced as dropout in the decoding layers at both training and testing phases, which helps to generalize the learning process and avoid overfitting. The norm loss is also necessary to boost the learning process, which may otherwise be too slow because the adversarial loss term alone may not properly direct the gradient towards the expected segmented lesion shape. In addition, we consider the optimization of the Jaccard loss (JL) for the lesion classes (for more details, see supplementary A.1).
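Putting these terms together, a sketch of the generator objective consistent with this description is shown below. Two elements are assumptions of the sketch rather than statements from the text: the pixel-wise term is written as an L1 norm (the choice used in pix2pix [12]), and the discriminator is taken to be conditioned on the input image x.

```latex
% Sketch of the generator objective; L1 pixel term and conditional D are assumed.
\mathcal{L}_{G}(x, y, z) =
      \mathcal{L}_{BCE}\big( D(x, G(x, z)),\, 1 \big)
    + \lambda_{norm} \, \big\lVert y - G(x, z) \big\rVert_{1}
    + \lambda_{JL} \, \mathcal{L}_{JL}\big( G(x, z),\, y \big)
```

Section 3 reports λ_norm = 0.5 and λ_JL = 0.1.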
If the generator network is optimized properly, the outputs of the discriminator for generated masks approach those for real masks, meaning that the discriminator cannot differentiate the generated segmentation mask from the ground truth, while the norm and Jaccard losses should approach zero, indicating that every generated mask matches the corresponding ground truth mask both in overall pixel-to-pixel distance (the norm term) and in the tight convex surrogate (JL) of the Intersection-over-Union (IoU).
The discriminator loss function combines two binary cross entropy (BCE) terms: the optimizer trains the discriminator to classify ground-truth masks as real and generated masks as fake, i.e., the expected class for ground truth and generated images is 1 and 0, respectively.
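Under the same assumptions as above (a conditional discriminator and per-image BCE terms), the discriminator objective can be sketched as:

```latex
% Sketch of the discriminator objective under the assumptions stated above.
\mathcal{L}_{D}(x, y, z) =
      \mathcal{L}_{BCE}\big( D(x, y),\, 1 \big)
    + \mathcal{L}_{BCE}\big( D(x, G(x, z)),\, 0 \big)
```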

3 Experiments

Datasets: The efficacy of the proposed model is assessed on two publicly available benchmark datasets of dermoscopic images for skin lesion analysis: ISIC 2018 (Skin Lesion Analysis Towards Melanoma Detection grand challenge) [8] and ISBI 2017 (IEEE International Symposium on Biomedical Imaging grand challenge) [9]. The ISIC 2018 dataset includes 2,594 training images with the corresponding ground-truth masks annotated by expert dermatologists. The validation and testing sets contain 100 and 1,000 images, respectively, without ground truth (evaluated only through the online submission system, https://submission.challenge.isic-archive.com/). In our experiments, we used 80% of the ISIC 2018 training set for training and 20% for validation, as proposed in [2]. In turn, the ISBI 2017 dataset is divided into training, validation and testing sets with 2,000, 150 and 600 images, respectively. Note that we trained our model on the ISIC 2018 training set and evaluated it on the ISBI 2017 test and ISIC 2018 validation sets.

Evaluation Metrics: Five evaluation metrics are used to assess the performance of our model on the ISBI 2017 test dataset: Jaccard index (JAC), Dice coefficient (DIC), Accuracy (ACC), Specificity (SPE) and Sensitivity (SEN) [9]. We used the threshold Jaccard index to evaluate our model on the ISIC 2018 validation dataset (for further details, see supplementary A.2).
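For reference, these per-image metrics follow from the pixel-wise confusion matrix, as in the sketch below (a hypothetical helper, not code from the paper). The ISIC 2018 threshold Jaccard additionally sets a per-image Jaccard below 0.65 to zero.

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Standard pixel-wise metrics for binary masks (values in {0, 1})."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    eps = 1e-8  # avoid division by zero on empty masks
    return {
        "JAC": tp / (tp + fp + fn + eps),          # Jaccard index (IoU)
        "DIC": 2 * tp / (2 * tp + fp + fn + eps),  # Dice coefficient
        "ACC": (tp + tn) / (tp + tn + fp + fn + eps),
        "SEN": tp / (tp + fn + eps),               # sensitivity (recall)
        "SPE": tn / (tn + fp + eps),               # specificity
    }

def threshold_jaccard(jac, threshold=0.65):
    """ISIC 2018-style score: Jaccard values below the threshold count as 0."""
    return jac if jac >= threshold else 0.0

# Example with random masks.
p = (np.random.rand(256, 256) > 0.5).astype(np.uint8)
g = (np.random.rand(256, 256) > 0.5).astype(np.uint8)
m = segmentation_metrics(p, g)
print(m, threshold_jaccard(m["JAC"]))
```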

Data augmentation: To achieve accurate segmentation results, we augment the two datasets by flipping the images horizontally and vertically, applying gamma correction, and changing the contrast using contrast-limited adaptive histogram equalization (CLAHE) with different parameter values on the original RGB images.
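A minimal sketch of such a pipeline is shown below, using the albumentations library as one possible choice (the paper does not name a library); all probability and limit values are illustrative assumptions.

```python
import numpy as np
import albumentations as A  # one possible augmentation library; not specified in the paper

# Illustrative pipeline: flips, gamma correction and CLAHE, applied jointly to
# the RGB image and its mask (CLAHE and gamma only affect the image).
transform = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.VerticalFlip(p=0.5),
    A.RandomGamma(gamma_limit=(80, 120), p=0.5),            # gamma correction
    A.CLAHE(clip_limit=4.0, tile_grid_size=(8, 8), p=0.5),  # contrast change
])

image = (np.random.rand(256, 256, 3) * 255).astype(np.uint8)  # stand-in RGB image
mask = (np.random.rand(256, 256) > 0.5).astype(np.uint8)      # stand-in binary mask
out = transform(image=image, mask=mask)
augmented_image, augmented_mask = out["image"], out["mask"]
```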

Implementation: We achieved the best results with the Adam optimizer [13]. The learning rate was set to 0.0002 with a batch size of 8. The weighting factors of the Jaccard loss and the norm loss (λ_JL and λ_norm) were set to 0.1 and 0.5, respectively. Our experiments were carried out on an NVIDIA 1080Ti GPU with 11 GB of memory, taking around 8 hours to train the network. The model is implemented with the PyTorch deep learning library (https://pytorch.org/).
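To show how these settings fit together, here is a minimal, self-contained PyTorch sketch of one alternating training step in the spirit of Section 2.2. The stand-in networks, the pix2pix-style conditional discriminator, the L1 pixel term, the soft Jaccard formulation and the Adam betas are assumptions of the sketch; only the learning rate (0.0002), batch size (8) and loss weights (0.5 and 0.1) come from the text.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-ins for the MobileGAN networks, only to make the sketch runnable;
# the real generator/discriminator are described in Section 2.1.
generator = nn.Sequential(nn.Conv2d(3, 1, 3, padding=1), nn.Sigmoid())
discriminator = nn.Sequential(nn.Conv2d(4, 1, 3, padding=1), nn.Sigmoid())

bce = nn.BCELoss()
lambda_norm, lambda_jl = 0.5, 0.1  # weighting factors reported in Section 3

def jaccard_loss(pred, target, eps=1e-7):
    """Soft Jaccard loss: 1 - |A∩B| / |A∪B| on probability maps."""
    inter = (pred * target).sum(dim=(1, 2, 3))
    union = (pred + target - pred * target).sum(dim=(1, 2, 3))
    return (1.0 - (inter + eps) / (union + eps)).mean()

# Learning rate and batch size from Section 3; the betas are an assumed,
# commonly used GAN setting, not taken from the paper.
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4, betas=(0.5, 0.999))
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4, betas=(0.5, 0.999))

def train_step(image, mask):
    """One alternating update: first the discriminator, then the generator."""
    real_lbl, fake_lbl = torch.ones_like(mask), torch.zeros_like(mask)

    # Discriminator step (generator fixed): real masks -> 1, generated -> 0.
    with torch.no_grad():
        fake_mask = generator(image)
    d_real = discriminator(torch.cat([image, mask], dim=1))
    d_fake = discriminator(torch.cat([image, fake_mask], dim=1))
    d_loss = bce(d_real, real_lbl) + bce(d_fake, fake_lbl)
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step (discriminator fixed): adversarial + L1 + Jaccard terms.
    fake_mask = generator(image)
    d_fake = discriminator(torch.cat([image, fake_mask], dim=1))
    g_loss = (bce(d_fake, real_lbl)
              + lambda_norm * F.l1_loss(fake_mask, mask)
              + lambda_jl * jaccard_loss(fake_mask, mask))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()

# Example call with a random batch of 8 RGB images and binary masks.
x = torch.rand(8, 3, 64, 64)
y = (torch.rand(8, 1, 64, 64) > 0.5).float()
print(train_step(x, y))
```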

Experimental results: The images in the datasets are very large, which makes them too costly to use directly for training the proposed model. Each input image was therefore resized to a fixed resolution to speed up the training process. We trained and tested our model with three different input sizes and selected the one giving the best segmentation results (for detailed results, see supplementary A.3).

Methods        ACC     DIC     JAC     SEN     SPE     Parameters (million)
FCN [14]       92.72   83.83   72.17   79.98   96.66   134.3
U-Net [16]     90.14   76.27   61.64   67.15   97.24   12.3
SegNet [5]     91.76   82.09   69.63   80.05   95.37   11.5
FrCN [2]       94.03   87.08   77.11   85.40   96.69   16.3
SLSDeep [17]   93.6    87.8    78.2    81.6    98.3    46.65
SegAN [18]     94.1    86.7    78.5    -       -       382.17
Proposed       97.61   87.63   77.98   78.50   99.92   2.35

Table 1: Evaluation of the proposed model on the ISBI 2017 test dataset (metrics in %).

Quantitative results of the proposed model on the ISBI 2017 test and ISIC 2018 validation sets are shown in Table 1 and Table 2, respectively. On the ISBI 2017 test dataset, we compared MobileGAN with five skin lesion segmentation methods (FCN [14], U-Net [16], SegNet [5], FrCN [2] and SLSDeep [17]) and an adversarial network (SegAN [18]). We took the test results of FCN, U-Net, SegNet and FrCN from the literature [2], which used the same dataset. As shown, the proposed MobileGAN model yields the best results in terms of ACC and SPE: it improves the ACC score by 3.51% over the SegAN model and the SPE score by 1.62% over the SLSDeep model. In turn, the SLSDeep model yields slightly better DIC and JAC scores than our model, and the SegAN model gives a better JAC score than our model with an improvement of 0.52%. Also, the FrCN model achieves a SEN score 6.90% higher than our model. Regarding the ISIC 2018 validation dataset, we compared MobileGAN to the FCN, U-Net, SegNet, FrCN and GAN-FCN models, as shown in Table 2. We used the validation results of FCN, U-Net, SegNet and FrCN from the literature [1]. Our model achieves the highest score, with an improvement of 0.6% over the GAN-FCN model and an increase of 24.0% over the U-Net model.

Methods        Threshold Jaccard (%)   Parameters (million)
FCN [14]       74.70                   134.3
U-Net [16]     54.4                    12.3
SegNet [5]     69.50                   11.5
FrCN [2]       74.60                   16.3
GAN-FCN [6]    77.80                   10.61
Proposed       78.4                    2.35

Table 2: Evaluation of the proposed model on the ISIC 2018 validation dataset.

In addition, we compared the MobileGAN model to the FCN, U-Net, SegNet, FrCN, SegAN and GAN-FCN models in terms of the number of parameters. MobileGAN has only 2.35 million parameters, while the closest competitor, the GAN-FCN model, has 10.61 million parameters. In turn, SegAN, which uses a traditional GAN architecture, is the largest model with 382.17 million parameters. It is evident that adding the non-bottleneck factorized blocks and the position and channel attention modules significantly reduces the number of parameters of the MobileGAN model. Besides, MobileGAN has 57x, 5x, 4x, 6x, and 19x fewer parameters than the FCN, U-Net, SegNet, FrCN, and SLSDeep models, respectively.

Fig. 2 shows qualitative segmentation results of the MobileGAN model on some examples from the ISBI 2017 test dataset. As shown in Fig. 2 (left), although the tested images exhibit high similarity between the color of the lesion and the surrounding skin, fuzzy boundaries, and even very small lesions, the MobileGAN model accurately segments the boundary of each skin lesion. In contrast, in the four images shown in Fig. 2 (right), the skin regions (the background) are very small compared to the lesion regions: the lesions occupy most of the image and intersect three of its margins. In these cases, our MobileGAN yields inaccurate segmentations, and it is difficult to segment the tumor boundaries accurately. This suggests that our model needs the complete shape of the lesion area to be visible in order to properly segment the boundaries of the lesion regions.

Figure 2: Segmentation results of our model: (a) input image, (b) ground truth, (c) predicted segmentation. Left: accurately segmented lesions; right: incorrectly segmented lesions.

4 Conclusions

In this paper, we have proposed a lightweight yet efficient GAN model (MobileGAN) for skin lesion segmentation. MobileGAN is built by adapting the GAN model with 1D non-bottleneck factorization networks and position and channel attention blocks. In comparison to state-of-the-art skin melanoma segmentation models, the number of parameters of MobileGAN is significantly reduced, with only 2.35 million parameters. The MobileGAN model has been evaluated on the ISBI 2017 test and ISIC 2018 validation datasets. On the ISBI 2017 test dataset, it yields accurate segmentation results with an accuracy of 97.61% and a specificity of 99.92%. The proposed model also provides a Jaccard index of 77.98% and a sensitivity of 78.50%, which are comparable to the state-of-the-art. On the ISIC 2018 validation dataset, the proposed model achieves a threshold Jaccard score of 78.4%. In future work, we aim to implement a mobile application based on the MobileGAN model to segment skin lesions in images captured by a low-resolution camera.

References

  • [1] Al-masni, M., Al-antari, M., Rivera, P., Valarezo, et al.: Automatic skin lesion boundary segmentation using deep learning convolutional networks with weighted cross entropy. ISIC2018: Skin Image Analysis Workshop and Challenge (2018)
  • [2] Al-Masni, M.A., Al-antari, M.A., Choi, M.T., Han, S.M., Kim, T.S.: Skin lesion segmentation in dermoscopy images via deep full resolution convolutional networks. Computer methods and programs in biomedicine 162, 221–231 (2018)
  • [3] Apalla, Z., Nashan, D., Weller, R.B., Castellsague, X.: Skin cancer: epidemiology, disease burden, pathophysiology, diagnosis, and therapeutic approaches. Dermatology and therapy 7(1), 5–19 (2017)
  • [4] American Cancer Society: Cancer Facts and Figures 2011. American Cancer Society, Atlanta, GA (2011)
  • [5] Badrinarayanan, V., Kendall, A., Cipolla, R.: Segnet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE transactions on pattern analysis and machine intelligence 39(12), 2481–2495 (2017)
  • [6] Bi, L., Feng, D., Kim, J.: Improving automatic skin lesion segmentation using adversarial learning based data augmentation (2018)
  • [7] Celebi, M.E., Wen, Q., Iyatomi, H., Shimizu, K., Zhou, H., Schaefer, G.: A state-of-the-art survey on lesion border detection in dermoscopy images. Dermoscopy image analysis pp. 97–129 (2015)
  • [8] Codella, N., Rotemberg, V., Tschandl, P., Celebi, M.E., et al.: Skin lesion analysis toward melanoma detection 2018: A challenge hosted by the International Skin Imaging Collaboration (ISIC) (2019)
  • [9] Codella, N.C., Gutman, D., Celebi, M.E., Helba, B., Marchetti, et al.: Skin lesion analysis toward melanoma detection: A challenge at the 2017 ISBI, hosted by the ISIC. In: 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI). pp. 168–172. IEEE (2018)
  • [10] Esteva, A., Kuprel, B., Novoa, R.A., Ko, et al.: Dermatologist-level classification of skin cancer with deep neural networks. Nature 542(7639), 115 (2017)
  • [11] Fu, J., Liu, J., Tian, H., Fang, Z., Lu, H.: Dual attention network for scene segmentation. arXiv preprint arXiv:1809.02983 (2018)
  • [12] Isola, P., Zhu, J.Y., Zhou, T., Efros, A.A.: Image-to-image translation with conditional adversarial networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 1125–1134 (2017)
  • [13] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
  • [14] Long, J., Shelhamer, E., Darrell, T.: Fully convolutional networks for semantic segmentation. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 3431–3440 (2015)
  • [15] Romera, E., Alvarez, J.M., Bergasa, L.M., Arroyo, R.: Erfnet: Efficient residual factorized convnet for real-time semantic segmentation. IEEE Transactions on Intelligent Transportation Systems 19(1), 263–272 (2018)
  • [16] Ronneberger, O., Fischer, P., Brox, T.: U-net: Convolutional networks for biomedical image segmentation. In: International Conference on Medical image computing and computer-assisted intervention. pp. 234–241. Springer (2015)
  • [17] Sarker, M.M.K., Rashwan, H.A., Akram, F., et al.: Slsdeep: Skin lesion segmentation based on dilated residual and pyramid pooling networks. In: International Conference on MICCAI. pp. 21–29. Springer (2018)
  • [18] Xue, Y., Xu, T., Huang, X.: Adversarial learning with multi-scale loss for skin lesion segmentation. In: 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018). pp. 859–863. IEEE (2018)