The classification of pigmented skin lesions with the unaided eye is challenging, even for highly experienced dermatologists. Dermoscopy therefore offers a better way to visually inspect skin lesions: the device magnifies the inspected region and eliminates surface reflection from the skin, which improves diagnostic accuracy. However, even with a dermatoscope, trained experts sometimes fail to make the correct prediction [1, 2]. Hence, several computer-assisted automated approaches have been proposed to analyze dermoscopy images [3, 4, 5].
The identification of skin disease from dermoscopy images is treated as an image classification problem. The traditional approach to image classification needs a robust feature representation, which is fed to a classifier for training. Inspired by the medical diagnostic procedure, several color, texture, and shape features have been used to characterize skin lesions. However, it is very difficult to design a feature representation that remains robust across dermoscopy images obtained from different acquisition devices and captured under diverse illumination conditions. This has drawn computer vision researchers toward deep convolutional neural networks.
A convolutional neural network (CNN) binds the feature extraction, feature selection, and classification modules into a single unit, and it automatically learns discriminative features from labelled images. As a result, CNNs achieve remarkable performance on many image classification problems. Their main limitation is that they are data hungry; transfer learning is commonly used to overcome this.
Transfer learning amounts to a careful initialization of the network weights: instead of random values, the weights are initialized with the learnt weights of a CNN trained on another dataset. Typically, a CNN trained on the ImageNet classification challenge is used for this purpose.
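The weight-initialization idea can be sketched in a framework-agnostic way. In the toy example below, the layer names, shapes, and the pretrained-weight dictionary are all hypothetical (not taken from this paper): the backbone weights are copied from the "pretrained" model, while the classification head is re-initialized to match the new seven-class task.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pretrained weights (e.g., from an ImageNet-trained CNN).
# Layer names and shapes are illustrative only.
pretrained = {
    "conv1": rng.standard_normal((3, 3, 3, 16)),
    "conv2": rng.standard_normal((3, 3, 16, 32)),
    "fc":    rng.standard_normal((32, 1000)),   # 1000-way ImageNet head
}

def transfer_init(pretrained, num_classes, feature_dim=32):
    """Copy all backbone weights; replace the classification head
    with a freshly initialized layer for the new task."""
    weights = {k: v.copy() for k, v in pretrained.items() if k != "fc"}
    weights["fc"] = rng.standard_normal((feature_dim, num_classes)) * 0.01
    return weights

weights = transfer_init(pretrained, num_classes=7)
print(weights["fc"].shape)                                    # (32, 7)
print(np.array_equal(weights["conv1"], pretrained["conv1"]))  # True
```

In a deep learning framework this corresponds to loading ImageNet weights for the backbone and attaching a new soft-max layer sized to the target classes.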
In this paper, an ensemble of deep convolutional neural networks is used to classify dermoscopy images into one of seven disease classes: Melanoma, Melanocytic nevus, Basal cell carcinoma, Actinic keratosis, Benign keratosis, Dermatofibroma, and Vascular lesion. We fine-tune three popular deep learning architectures, namely ResNet50, DenseNet-121, and MobileNet, to predict the disease class. Finally, a majority voting is applied on the basis of the predicted class probabilities obtained from the trained classification networks.
In this research, the challenge dataset produced for the workshop ISIC 2018: Skin Lesion Analysis Towards Melanoma Detection (https://challenge2018.isic-archive.com/task3/) is used [11, 12]. The training set contains skin lesion images from seven skin diseases: Melanoma, Melanocytic nevus, Basal cell carcinoma, Actinic keratosis, Benign keratosis, Dermatofibroma, and Vascular lesion. A separate validation dataset is also provided. Sample images from all seven lesion types are shown in Figure 1.
III Proposed Methodology
In this paper, we use an ensemble of three trained convolutional neural networks to identify the skin lesion. Ensemble learning aggregates the predicted scores obtained from different classifiers, combining multiple weak classifiers into a stronger one. The individual classifiers can be constructed in several ways, such as (a) using different classification algorithms, (b) training the same classifier with different hyperparameters, or (c) using different training sets.
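The aggregation step can be sketched as follows. The class names and the three toy soft-max outputs below are illustrative, not actual network outputs from the paper; the per-class probabilities from the networks are averaged and the class with the highest mean probability is chosen.

```python
import numpy as np

CLASSES = ["MEL", "NV", "BCC", "AKIEC", "BKL", "DF", "VASC"]

def ensemble_predict(prob_maps):
    """Average per-class probabilities from several networks and
    pick the class with the highest mean probability."""
    avg = np.mean(prob_maps, axis=0)
    return CLASSES[int(np.argmax(avg))], avg

# Toy soft-max outputs from three hypothetical networks for one image.
p_a = np.array([0.60, 0.20, 0.05, 0.05, 0.05, 0.03, 0.02])
p_b = np.array([0.30, 0.50, 0.05, 0.05, 0.05, 0.03, 0.02])
p_c = np.array([0.55, 0.25, 0.05, 0.05, 0.05, 0.03, 0.02])

label, avg = ensemble_predict([p_a, p_b, p_c])
print(label)  # "MEL": its mean probability (~0.483) is the largest
```

Note that averaging probabilities (soft voting) lets a confident network outvote two mildly uncertain ones, whereas hard majority voting over predicted labels would not.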
We use three state-of-the-art convolutional neural network models, namely ResNet50, DenseNet-121, and MobileNet; their success in the ImageNet classification challenge motivated us to choose them. The training dataset suffers from class imbalance. We tackle this problem by backpropagating a weighted loss from the loss layer. For classifier construction, we fine-tune the pre-trained weights of these models separately. Finally, the average of the predicted class probabilities obtained from these trained networks decides the class label of the test image. Thus an input image is classified into one of the specified lesion classes.
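The paper states that a weighted loss is used against class imbalance but does not specify the weighting scheme; a common choice, shown here as an assumption, is inverse class frequency normalized so that the sample-weighted average of the weights equals 1. The per-class counts below are toy numbers, not the actual dataset statistics.

```python
import numpy as np

def class_weights(counts):
    """Inverse-frequency class weights: w_i = N / (K * n_i),
    where N is the total sample count and K the number of classes.
    (This particular scheme is an assumption, not from the paper.)"""
    counts = np.asarray(counts, dtype=float)
    return counts.sum() / (len(counts) * counts)

def weighted_ce(probs, label, w):
    """Cross-entropy for one sample, scaled by its class weight."""
    return -w[label] * np.log(probs[label])

# Toy per-class training counts for the seven lesion types.
counts = [1000, 6000, 500, 300, 1000, 100, 100]
w = class_weights(counts)
print(np.round(w, 2))  # rare classes receive much larger weights
```

With this scheme, errors on a 100-image class are penalized roughly 60 times more heavily than errors on a 6000-image class, so the gradient is not dominated by the majority class.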
IV Implementation Details
We used 10% of the training images as validation images; these are used to decide the training hyperparameters. Before fine-tuning a pre-trained model, the last layer (the soft-max classification layer) is removed and replaced by a layer with seven nodes, as we are dealing with a seven-class classification problem. First, all layers except the last are frozen and the network is trained with a learning rate of 0.01 for 10 epochs, with early stopping having a patience of 5 (i.e., if there is no improvement in validation loss for 5 epochs, training terminates automatically). After that, all layers are unfrozen and fine-tuned with a learning rate of 0.001 for 100 epochs, this time with an early-stopping patience of 10. Horizontal and vertical flipping is used for data augmentation.
V Results and Discussion
The performance of the developed classifiers is scored using a normalized multi-class accuracy metric (balanced across categories), obtained from the online portal of the challenge. The scores obtained on the validation images are listed in Table I. According to Table I, the performance improves when ensembling is performed.
We are thankful to the organizers of the MICCAI 2018 challenge ISIC for providing the skin images.
-  H. Kittler, H. Pehamberger, K. Wolff, and M. Binder, “Diagnostic accuracy of dermoscopy,” The Lancet Oncology, vol. 3, no. 3, pp. 159–165, 2002.
-  M. E. Vestergaard, P. Macaskill, P. E. Holt, and S. W. Menzies, “Dermoscopy compared with naked eye examination for the diagnosis of primary melanoma: a meta-analysis of studies performed in a clinical setting,” Br J Dermatol, vol. 159, pp. 669–676, 2008.
-  C. Barata, M. E. Celebi, and J. S. Marques, “Improving dermoscopy image classification using color constancy,” IEEE journal of biomedical and health informatics, vol. 19, no. 3, pp. 1146–1152, 2015.
-  F. E. S. Alencar, D. C. Lopes, and F. M. M. Neto, “Development of a system classification of images dermoscopic for mobile devices,” IEEE Latin America Transactions, vol. 14, no. 1, pp. 325–330, Jan 2016.
-  N. C. F. Codella, Q. B. Nguyen, S. Pankanti, D. A. Gutman, B. Helba, A. C. Halpern, and J. R. Smith, “Deep learning ensembles for melanoma recognition in dermoscopy images,” IBM Journal of Research and Development, vol. 61, no. 4/5, pp. 5:1–5:15, July 2017.
-  A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Advances in neural information processing systems, 2012, pp. 1097–1105.
-  A. Pal, A. Chaturvedi, U. Garain, A. Chandra, and R. Chatterjee, “Severity grading of psoriatic plaques using deep cnn based multi-task learning,” in 2016 23rd International Conference on Pattern Recognition (ICPR), Dec 2016, pp. 1478–1483.
-  K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770–778.
-  G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, “Densely connected convolutional networks,” in CVPR, vol. 1, no. 2, 2017, p. 3.
-  A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam, “Mobilenets: Efficient convolutional neural networks for mobile vision applications,” arXiv preprint arXiv:1704.04861, 2017.
-  P. Tschandl, C. Rosendahl, and H. Kittler, “The HAM10000 dataset: A large collection of multi-source dermatoscopic images of common pigmented skin lesions,” arXiv preprint arXiv:1803.10417, 2018.
-  N. C. Codella, D. Gutman, M. E. Celebi, B. Helba, M. A. Marchetti, S. W. Dusza, A. Kalloo, K. Liopyris, N. Mishra, H. Kittler et al., “Skin lesion analysis toward melanoma detection: A challenge at the 2017 International Symposium on Biomedical Imaging (ISBI), hosted by the International Skin Imaging Collaboration (ISIC),” in Biomedical Imaging (ISBI 2018), 2018 IEEE 15th International Symposium on. IEEE, 2018, pp. 168–172.