Deep learning models such as convolutional neural networks (CNNs) have shown outstanding potential in dermatology for skin cancer classification [4, 20, 5]. However, the diversity of real-life skin diseases still hinders the application of automatic differential diagnosis in practice. For example, the well-known HAM10000 dataset contains eight different skin lesion classes in its training set. This is quite small compared to the actual number of known skin lesion types and subtypes, which can be in the thousands. Hence, it is important to have methods that can make use of the limited number of disease types in existing datasets to detect unseen diseases. This is the problem of Out-Of-Distribution (OOD) detection, or abnormality detection. Recent work by Lee et al. proposes a simple but effective OOD detection framework: they model a class-conditional Gaussian distribution on the final features of any pre-trained neural network and use a Mahalanobis-distance-based metric to compute an abnormality score. However, skin lesions, even within the same class, are known to have large intra-class differences. We therefore argue that a uni-modal Gaussian distribution might not be expressive enough to capture the distribution of representations, as shown in our paper.
To address this limitation, we propose to replace the simple Gaussian estimation with a powerful non-parametric method, Isolation Forest (IF). Unlike traditional anomaly detection techniques, IF requires neither profiling of normal samples nor assuming a distribution family for them. IF is built on the intuition that abnormal samples are few and different and, as a result, can be isolated by a decision tree with fewer splits. In this work, we propose to apply IF to the features computed by a pre-trained deep CNN to detect OOD images of skin lesions, hence the name DeepIF. Our contributions are as follows:
We propose DeepIF as a modification to the existing OOD framework of Lee et al. to take into account the large intra-class diversity of skin disease images.
We present a comprehensive analysis of hidden representations from different convolutional layers. Results show that the last convolutional layer provides the most expressive representations for most diseases.
2 Related Works
In recent years, a broad range of approaches based on deep learning have been proposed for the OOD detection problem. Liang et al. use softmax temperature scaling and adversarial input perturbations to better separate the softmax scores of in-distribution and out-of-distribution examples. Based on the assumption that features computed by a pre-trained network follow a class-conditional Gaussian distribution, Lee et al. use the Mahalanobis distance in the predicted class distribution to detect OOD and adversarial samples. Our method can be viewed as a non-parametric extension of this framework that takes into account the high complexity of medical images such as skin disease images.
DeVries and Taylor use an auxiliary loss function to generate a confidence score in a separate branch. The extra loss function encourages the network to identify examples for which its prediction is uncertain. Vyas et al. train an ensemble of classifiers in a self-supervised manner, considering a random subset of training examples as OOD data and the rest as in-distribution data. A margin-based loss imposes a given margin between the mean entropy of OOD and in-distribution samples. Masana et al. use metric learning to derive an embedding space where samples from the same in-distribution class form clusters that are separated from other in-distribution classes and from OOD samples.
One study proposes to use transfer learning as a general abnormality detector for medical images. Ren et al. propose using the likelihood ratio between the output probabilities of two deep networks, the first modeling in-distribution data and the second capturing background statistics, as a measure of normality.
While all these approaches require modifying the original training algorithm of the model, our method is more flexible as it only needs a pre-trained network and can use a black-box algorithm for training. In addition, these studies focus on natural images and, as shown in our experiments, do not work well on skin lesion images which have less inter-class variability. So far, only a few works have investigated OOD detection for this type of image. Pacheco et al. 
use the mean Shannon entropy of the softmax output for correctly classified and misclassified validation examples to detect outliers, yielding an 11.45% OOD detection rate on the ISIC 2019 dataset. In a different approach, Lu et al.
consider the likelihood of a variational autoencoder (VAE) to identify OOD skin lesion images. Different from these approaches, our method does not presume any distribution for the anomaly class. As we will empirically demonstrate, this makes our OOD method more robust.
Isolation Forest (IF) is an anomaly detection algorithm built on the idea of decision tree ensembling. Each decision tree is constructed from the data points in the training set. At each node of a tree, a random feature is selected from a subset of features (the proportion of features considered is a hyper-parameter ρ). A random value between the minimum and maximum values of that feature is then chosen to split the node. A total of T decision trees is constructed.
For a given isolation forest F trained on n samples and a test point x, we calculate the normality score as

s(x, F) = -2^(-E_t[h_t(x)] / c(n)),

where h_t(x) is the number of tree nodes (i.e., the path length) traversed by x from the root node to the terminal leaf node of the t-th decision tree, E_t[·] denotes the average across all T trees in F, and c(n) is the average path length for the training data. We refer to the original paper for detailed information. The intuition is that anomalous data points have extreme values on certain features, such that they can easily be isolated and have short paths. Thus s(x, F) will be small if x is an OOD sample.
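The path-length intuition can be illustrated with scikit-learn's IsolationForest, whose score_samples method returns a negated anomaly score (lower means more anomalous). This is only a sketch on synthetic features, not the paper's implementation:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.RandomState(0)
# Synthetic "normal" features form a tight cluster; OOD features lie far away.
in_dist = rng.normal(loc=0.0, scale=1.0, size=(500, 16))
ood = rng.normal(loc=6.0, scale=1.0, size=(50, 16))

# n_estimators plays the role of the number of trees T,
# max_features the proportion of features considered at each split.
forest = IsolationForest(n_estimators=200, max_features=1.0, random_state=0)
forest.fit(in_dist)

# OOD points are isolated with shorter paths, hence lower normality scores.
print(forest.score_samples(ood).mean() < forest.score_samples(in_dist).mean())  # True
```

The separation here is exaggerated for clarity; on real CNN features the score distributions overlap far more, which is what the AUROC-style metrics later quantify.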
OOD Detection Framework
An arbitrary CNN f is pre-trained to predict the normal classes of the training data, and its parameters are fixed once training finishes. Afterwards, training examples are fed into f to obtain their hidden representations z from the last convolutional layer. Lee et al. calculate the class means and a covariance matrix from these representations to define class-conditional Gaussian distributions. For OOD detection, they extract the representation z of a test example x, calculate the Mahalanobis distance to each class, and assign the shortest distance as the final anomaly score.
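The Mahalanobis-based scoring can be sketched as follows, using synthetic stand-ins for the CNN features and a tied covariance; all names are ours, and this is a simplified illustration rather than the reference implementation:

```python
import numpy as np

rng = np.random.RandomState(0)
# Synthetic stand-ins for last-layer CNN features of 3 in-distribution classes.
feats = {c: rng.normal(loc=3.0 * c, scale=1.0, size=(200, 8)) for c in range(3)}

# Class-wise means and a shared (tied) covariance estimated from training data.
means = {c: f.mean(axis=0) for c, f in feats.items()}
centered = np.vstack([f - means[c] for c, f in feats.items()])
cov_inv = np.linalg.inv(np.cov(centered, rowvar=False))

def anomaly_score(z):
    """Shortest class-wise squared Mahalanobis distance (small = in-distribution)."""
    return min(float((z - m) @ cov_inv @ (z - m)) for m in means.values())

# A typical class-0 feature vs. a point far from every class mean.
print(anomaly_score(feats[0][0]) < anomaly_score(np.full(8, 30.0)))  # True
```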
Deep Isolation Forest (DeepIF)
Our DeepIF shares the same idea of extracting representations z from a pre-trained CNN (see Fig. 1). Different from their distance-based approach, we construct an IF model F_c for each class c. Our final normality score is then computed as

s(x) = max_c s(x, F_c),

i.e., the largest class-wise IF normality score.
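A minimal sketch of this per-class scoring rule, again with scikit-learn's IsolationForest and synthetic stand-ins for the CNN features (variable names are illustrative):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.RandomState(0)
# Synthetic stand-ins for the per-class CNN features z.
class_feats = {c: rng.normal(loc=4.0 * c, scale=1.0, size=(300, 8))
               for c in range(3)}

# One Isolation Forest per in-distribution class, fit on that class only.
forests = {c: IsolationForest(n_estimators=200, max_features=1.0,
                              random_state=0).fit(f)
           for c, f in class_feats.items()}

def normality(z):
    """DeepIF-style score: the largest class-wise IF normality."""
    return max(f.score_samples(z.reshape(1, -1))[0] for f in forests.values())

# An in-distribution sample should score higher than a far-away OOD point.
print(normality(class_feats[1][0]) > normality(np.full(8, 40.0)))  # True
```

Taking the maximum mirrors the baseline's choice of the shortest class-wise distance: a sample is considered normal if at least one class-conditional model finds it typical.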
Data and setup
The data we use is the HAM10000 training set, which contains 25,331 images from 8 classes: Melanoma (MEL), Melanocytic nevus (NV), Basal cell carcinoma (BCC), Actinic keratosis (AK), Benign keratosis (BKL), Dermatofibroma (DF), Vascular lesion (VASC), and Squamous cell carcinoma (SCC). For each experiment, we hold out one class as the anomaly class, which we refer to as the OOD set. Each remaining class is split 90%/10% into training and validation sets, and we treat the validation set as the in-distribution set. Since the dataset contains 8 classes, we conduct 8 experiments, each treating a single class as the anomaly class and the remaining 7 as normal classes.
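The leave-one-class-out protocol can be sketched as follows; toy labels stand in for the real image annotations and the helper name is ours:

```python
import numpy as np

rng = np.random.RandomState(0)
classes = ["MEL", "NV", "BCC", "AK", "BKL", "DF", "VASC", "SCC"]
# Toy labels stand in for the real image annotations.
labels = rng.choice(len(classes), size=1000)

def leave_one_class_out(labels, ood_class, val_frac=0.1):
    """Split indices into train / in-distribution validation / OOD sets."""
    ood_idx = np.where(labels == ood_class)[0]
    train_idx, val_idx = [], []
    for c in range(len(classes)):
        if c == ood_class:
            continue
        idx = rng.permutation(np.where(labels == c)[0])
        n_val = int(len(idx) * val_frac)
        val_idx.extend(idx[:n_val])    # 10% in-distribution validation
        train_idx.extend(idx[n_val:])  # 90% training
    return np.array(train_idx), np.array(val_idx), ood_idx

train_idx, val_idx, ood_idx = leave_one_class_out(labels, ood_class=0)
print(len(train_idx) + len(val_idx) + len(ood_idx))  # 1000
```

Every index lands in exactly one of the three splits, and the OOD class never appears in training or validation.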
We train a skin lesion classification network with a standard approach: an image is fed into a ResNet152 to obtain predictions for each class. The cross-entropy loss is calculated and back-propagated through the network. SGD is used to optimize the network with a learning rate of 1e-4. We train the network for 200 epochs with a batch size of 32. During training, one class is held out and treated as the anomaly class. Once training finishes, the parameters of the network are fixed for all remaining procedures.
For constructing the IF models, we set the number of trees T to 200 and the feature-subset proportion ρ to 1.0. Final scores for the in-distribution and OOD sets are stored separately for evaluation.
Our first baseline is the original Mahalanobis-distance approach, using the authors' publicly available implementation. We also compare to other strong baselines that go beyond our framework. We compare to a Confidence Score baseline, which learns to predict a confidence score; we use the authors' implementation but with the same network architecture as our DeepIF. Finally, we compare with the VAE baseline by measuring the negated reconstruction error.
We adopt the same metrics as other studies on OOD detection [2, 8, 10]: area under the ROC curve (AUROC); area under the precision-recall curve with in-distribution as the positive class (AUPR in); area under the precision-recall curve with OOD as the positive class (AUPR out); and the true negative rate (TNR) when the true positive rate is at 95% (TNR95TPR). In the latter, the TNR is computed as TN/(TN+FP), where TN is the number of true negatives and FP the number of false positives. We also report the classification accuracy on the validation dataset.
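These metrics can be computed with scikit-learn plus a few lines for TNR at 95% TPR; the scores below are synthetic, purely to illustrate the computation:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.RandomState(0)
# Hypothetical normality scores: in-distribution should score higher than OOD.
scores_in = rng.normal(1.0, 0.5, 1000)
scores_ood = rng.normal(-1.0, 0.5, 1000)
scores = np.concatenate([scores_in, scores_ood])
labels = np.concatenate([np.ones_like(scores_in), np.zeros_like(scores_ood)])

auroc = roc_auc_score(labels, scores)
aupr_in = average_precision_score(labels, scores)        # in-distribution positive
aupr_out = average_precision_score(1 - labels, -scores)  # OOD as positive

# TNR at 95% TPR: threshold at the 5th percentile of in-distribution scores
# (so 95% of in-distribution samples are accepted), then count the fraction
# of OOD samples rejected below that threshold.
thresh = np.percentile(scores_in, 5)
tnr95 = np.mean(scores_ood < thresh)  # TN / (TN + FP)

print(round(auroc, 3), round(tnr95, 3))
```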
The results are shown in Table 1. We first observe that the confidence-based baseline decreases the classification performance on the validation data, with a mean accuracy 4% lower than the other methods. We believe that learning to predict confidence adds an extra requirement to the training process that can hurt the performance of the main task; an OOD framework like ours, which does not touch the training procedure, therefore has the advantage of preserving model performance.
DeepIF clearly beats the Mahalanobis baseline, which confirms our hypothesis that medical images such as skin lesions are too complex to be properly modelled by a uni-modal Gaussian, even in the representation space. Our method also beats the VAE baseline, even though VAEs are known to be powerful distribution models for high-dimensional data. We believe these results show the potential of non-parametric OOD detection that does not depend on normal profiling. The strongest baseline is the confidence score: DeepIF is better on all but one metric (AUPR in), while also preserving the model's accuracy.
Table 1: OOD class and method vs. AUROC, AUPR in, AUPR out, TNR at 95% TPR, and validation accuracy (%).
We plot in Fig. 2 the histograms of normality scores for in-distribution and OOD data points, comparing the Mahalanobis baseline and DeepIF with MEL as the OOD set. DeepIF scores lead to a better separation of in-distribution and OOD examples, which explains our method's better ability to differentiate the two sets. We also plot in Fig. 3 the ROC curves with BKL and DF as OOD sets.
We analyze the effect of using representations from different layers. Our default choice is the last convolutional layer; we also evaluate the performance of DeepIF with features from earlier layers. The results are shown in Table 2. We find that, with the exception of NV, the performance of DeepIF with shallower features is worse than with deep features. This highlights the importance of the semantic information captured in deeper layers for OOD detection.
6 Discussion and Conclusion
In this paper, we studied the problem of OOD detection with a non-parametric approach on the HAM10000 skin lesion dataset. We proposed a simple framework combining a pre-trained CNN with Isolation Forest models. Our experiments showed that our approach achieves state-of-the-art performance in differentiating in-distribution and OOD data.
We demonstrated the usefulness of our proposed DeepIF method on a skin lesion dataset. To further validate our method, we aim to cover a broader range of medical image datasets with large intra-class diversity, for instance diabetic retinopathy, CT, and MRI datasets. Moreover, while DeepIF focuses on image data, our method can easily be transferred to non-image data, such as electronic medical records or time-series data, including electroencephalogram (EEG) and electrocardiogram (ECG) signals. In future work, we would also like to compare our method with other non-parametric algorithms such as the Dirichlet Process Mixture Model (DPMM) or a self-organizing network.
-  (2006) Variational inference for Dirichlet process mixtures. Bayesian Analysis 1 (1), pp. 121–143. Cited by: §6.
-  (2018) Learning confidence for out-of-distribution detection in neural networks. arXiv preprint arXiv:1802.04865. Cited by: §2, §4, §4.
-  (2018) Learning confidence for out-of-distribution detection in neural networks. GitHub. Note: https://github.com/uoguelph-mlrg/confidence_estimation Cited by: §4.
-  (2017) Dermatologist-level classification of skin cancer with deep neural networks. Nature 542 (7639), pp. 115–118. Cited by: §1.
-  (2018) Deep neural networks show an equivalent and often superior performance to dermatologists in onychomycosis diagnosis: automatic construction of onychomycosis datasets by region-based convolutional deep neural network. PloS one 13 (1). Cited by: §1.
-  (2016) Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778. Cited by: §4.
-  (2016) A baseline for detecting misclassified and out-of-distribution examples in neural networks. arXiv preprint arXiv:1610.02136. Cited by: §2.
-  (2018) A simple unified framework for detecting out-of-distribution samples and adversarial attacks. In Advances in Neural Information Processing Systems, pp. 7167–7177. Cited by: 1st item, 2nd item, §1, §2, §3, §4.
-  (2019) A simple unified framework for detecting out-of-distribution samples and adversarial attacks. GitHub. Note: https://github.com/pokaxpoka/deep_Mahalanobis_detector Cited by: §4.
-  (2018) Enhancing the reliability of out-of-distribution image detection in neural networks. In 6th International Conference on Learning Representations, ICLR 2018, Cited by: §2, §4.
-  (2008) Isolation forest. In 2008 Eighth IEEE International Conference on Data Mining, pp. 413–422. Cited by: §1, §3, §5.
-  (2018) Anomaly detection for skin disease images using variational autoencoder. arXiv preprint arXiv:1807.01349. Cited by: §2, §4.
-  (2002) A self-organising network that grows when required. Neural networks 15 (8-9), pp. 1041–1058. Cited by: §6.
-  (2018) Metric learning for novelty and anomaly detection. arXiv preprint arXiv:1808.05492. Cited by: §2.
-  (2019) Towards practical unsupervised anomaly detection on retinal images. In Domain Adaptation and Representation Transfer and Medical Image Learning with Less Labels and Imperfect Data, pp. 225–234. Cited by: §2.
-  (2019) Skin cancer detection based on deep learning and entropy to detect outlier samples. arXiv preprint arXiv:1909.04525. Cited by: §2.
-  (2019) Likelihood ratios for out-of-distribution detection. In Advances in Neural Information Processing Systems, pp. 14680–14691. Cited by: §2.
-  (2018) The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions. Scientific Data 5, pp. 180161. Cited by: 2nd item, §1, §4, §6.
-  (2018) Out-of-distribution detection using an ensemble of self supervised leave-out classifiers. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 550–564. Cited by: §2.
-  (2018) Towards improving diagnosis of skin diseases by combining deep neural network and human knowledge. BMC medical informatics and decision making 18 (2), pp. 59. Cited by: §1.