We are amid a transition from traditional to digital pathology, where scanners are rapidly replacing microscopes. Capturing tissue characteristics in digital formats opens new horizons for diagnosis in medicine. On the one hand, we will no longer need to store thousands upon thousands of specimens in large physical archives of glass slides, which will be a relief for many hospitals with limited space. On the other hand, acquiring an image of the specimen enables more systematic analysis, new possibilities for collaboration and, last but not least, computer-aided diagnosis for pathology, arguably the final frontier of vision-based disease diagnosis. However, like any other technology, digital pathology comes with its own challenges: whole-scan imaging generally generates gigapixel files that also require (digital) storage and are not easy to analyze via computer algorithms. Detection, segmentation, and identification of tissue types in huge digital images, e.g., 50,000×70,000 pixels, is a daunting task for computer vision algorithms.
Looking at the computer vision community, the emergence of deep learning and its vast possibilities for recognition and classification seems to be a lucky coincidence as we attempt to address the above-mentioned obstacles of digital pathology. Diverse deep architectures have been trained with large sets of images, e.g., the ImageNet project or the Labeled Faces in the Wild database, to perform difficult tasks like object classification and face recognition. The results have been more than impressive; one may objectively speak of a computational revolution. Accuracy numbers in the mid and high 90s have become quite common when deep networks, trained with millions of images, are tested on unseen samples.
In spite of all this progress, one can observe that the application of deep learning in digital pathology has not fully started yet. The major obstacle appears to be the lack of large labelled datasets of histopathology scans needed to properly train certain types of multi-layer neural networks, a gap that may persist for some years to come. Hence, we have to start designing and training deep nets with the available datasets. Training from scratch while artificially increasing the number of images, i.e., data augmentation, is certainly the most obvious option. But we can also use nets that have been trained with millions of (non-medical) images to extract deep features. As a last possibility, we could slightly train (fine-tune) the pre-trained nets to adjust them to the nature of our data before we use them as feature extractors or classifiers.
In this paper, we investigate the usage of deep networks for the Kimia Path24 dataset via training from scratch, feature extraction, and fine-tuning. The results show that employing a pre-trained network (trained with non-medical images) may be the most viable option.
Over recent years, researchers have shown interest in leveraging machine-learning techniques for digital pathology images. These images pose unique issues due to their high variation, rich structures, and large dimensionality, which has led researchers to investigate various image analysis techniques and their application to digital pathology. To deal with the large, rich structures within a scan, researchers have attempted segmentation on both local and global scales. For example, various structures in breast histopathology images have been segmented using methods such as thresholding, fuzzy c-means clustering, and adaptive thresholding, with varying levels of success [1, 2, 3, 4].
When applying these methods to histopathological images, it is often desirable to adopt a computer-aided diagnosis (CAD) method for use in a content-based image retrieval (CBIR) system. Multiple groups have proposed various CBIR systems for CAD. Recently, hashing methods have been employed for large-scale image retrieval; among them, kernelized and supervised hashing are considered the most effective [5, 6]. More recently, Radon barcodes have been investigated as a potential method for creating a CBIR system [7, 8, 9].
Yi et al. [10] utilized CNNs on a relatively small mammography dataset and reported that the CNN-based classifier achieved a higher classification accuracy and ROC AUC than handcrafted features.
Currently, there is interest in using pre-trained networks to accomplish a variety of tasks outside of their original domain. This is of great interest for medical tasks, where there is often a lack of comprehensive labeled data to train a deep network. Thus, other groups have leveraged networks trained on the ImageNet database, which consists of more than 1.2 million categorized images of 1000+ classes [13, 12, 14]. These groups have reported general success when utilizing pre-trained networks for medical imaging tasks [12, 14, 15].
3 Data Set
The data used to train and test the CNNs is the Kimia Path24 dataset, consisting of 24 whole scan images (WSIs), manually selected from more than 350 scans, depicting diverse body parts with distinct texture patterns. The images were captured in bright field by a TissueScope LE 1.0 scanner (http://www.hurondigitalpathology.com) using a 0.75 NA lens. For each image, one can determine the resolution by checking the description tag in the header of the file. For instance, if the resolution is 0.5µm, then the magnification is 20x, and if the resolution is 0.25µm, then the magnification is 40x. The dataset offers 27,055 training patches and 1,325 (manually selected) test patches of size 1000×1000 pixels (0.5mm×0.5mm). The locations of the test patches in the scans have been removed (whitened) such that they cannot be mistakenly used for training. Color (staining) is neglected in the Kimia Path24 dataset; all patches are saved as grayscale images. The dataset is publicly available at http://kimia.uwaterloo.ca.
3.1 Patch Selection
To create the Kimia Path24 dataset, each scan is divided into non-overlapping patches of 1000×1000 pixels. Background pixels (i.e., very bright pixels) are set to white and ignored via a homogeneity measure computed for each patch; the selection criterion is that every patch with a homogeneity of less than 99% is ignored. This high threshold ascertains that no patch with a significant texture pattern is ignored. From the remaining set, 100 randomly sampled patches per scan are selected for the fine-tuning process (we do not use all of them, both to emulate cases where no large dataset is available and because more extensive training may destroy what a network has already learned). The values of each patch were subsequently normalized. The patches were finally downsized to 224×224 pixels to be fed into the CNN architecture.
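The patch-selection rule above can be sketched as follows. The concrete homogeneity measure (here, the fraction of non-background pixels) and the background brightness level of 230 are illustrative assumptions, not the authors' exact criterion:

```python
import numpy as np

def select_patches(scan, patch_size=1000, homogeneity_threshold=0.99,
                   background_level=230):
    """Divide a grayscale scan into non-overlapping patch_size x patch_size
    patches and keep those meeting the homogeneity criterion.

    'Homogeneity' is taken here as the fraction of non-background (tissue)
    pixels, with background defined as very bright pixels; both the measure
    and background_level are assumptions for illustration."""
    h, w = scan.shape
    kept = []
    for y in range(0, h - patch_size + 1, patch_size):
        for x in range(0, w - patch_size + 1, patch_size):
            patch = scan[y:y + patch_size, x:x + patch_size]
            homogeneity = np.mean(patch < background_level)
            if homogeneity >= homogeneity_threshold:
                kept.append((y, x))  # top-left corner of a retained patch
    return kept
```

In the actual pipeline, each retained patch would then be normalized and downsized to 224×224 before being fed to the network.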
Following the above steps, we first obtained 27,055 patches from the 24 scans based purely on the homogeneity threshold. Then, we randomly sampled 100 patches from each class, leading to a much smaller training set of 2,400 patches. A selection of patches from the training set can be viewed in Fig. 1. As Fig. 2 shows, the testing samples are relatively balanced in the Kimia Path24 dataset, whereas the training set is rather imbalanced; differences in the size and frequency of specimens are the main reasons for the imbalance.
3.2 Accuracy Calculation
The accuracy measures used for the experiments are adopted from the paper introducing the dataset [17]. These were chosen so that results between the papers could be compared. There are $n_{tot} = 1{,}325$ testing patches $P_s^i$ that belong to 24 sets $\Gamma_s = \{P_s^i \,|\, s \in S\}$ with $S = \{0, 1, \dots, 23\}$. Looking at the set of retrieved images $R$ for an experiment, the patch-to-scan accuracy $\eta_p$ can be defined as
$$\eta_p = \frac{1}{n_{tot}} \sum_{s \in S} |R \cap \Gamma_s|.$$
The whole-scan accuracy $\eta_W$ can be defined as
$$\eta_W = \frac{1}{24} \sum_{s \in S} \frac{|R \cap \Gamma_s|}{|\Gamma_s|}.$$
With these, the total accuracy is defined as $\eta_{total} = \eta_p \times \eta_W$.
By incorporating both accuracy measurements, the resulting problem becomes much more difficult when attempting to obtain acceptable results.
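As a concrete sketch, the three measures can be computed from per-patch scan labels as below. This is an illustrative re-implementation (taking the total accuracy as the product of the two measures, as in the Kimia Path24 benchmark), not the authors' evaluation code:

```python
import numpy as np

def kimia_path24_accuracies(true_scan, pred_scan, n_classes=24):
    """Compute patch-to-scan, whole-scan, and total accuracy from the true
    and predicted scan labels of the test patches."""
    true_scan = np.asarray(true_scan)
    pred_scan = np.asarray(pred_scan)
    hits = true_scan == pred_scan                 # correctly assigned patches
    eta_p = hits.sum() / len(true_scan)           # patch-to-scan accuracy
    per_scan = [hits[true_scan == s].mean()       # hit rate within each scan
                for s in range(n_classes) if np.any(true_scan == s)]
    eta_w = float(np.mean(per_scan))              # whole-scan accuracy
    return eta_p, eta_w, eta_p * eta_w            # total accuracy (product)
```

Because the total accuracy multiplies two numbers no larger than one, a method must do well on both measures to score well overall.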
Each experiment was run using the VGG16 and Inception-v3 architectures as provided in the Keras Python package [18, 19, 20]. Utilizing a pre-trained network, we then analyze its effectiveness when used purely as a feature extractor, and when transferring the network (some of its weights) to the medical imaging domain.
4.1 Fine-Tuning Protocols
When fine-tuning a deep network, the optimal setup varies between applications. However, using a pre-trained network and adapting it to other domains has yielded better-performing models. It was decided that only the final convolutional block (block 5) within VGG16 and the final two inception blocks within Inception-v3 would be re-trained [12, 21, 18, 19]. As in [17], a single fully connected layer of size 256 (followed by an output layer of size 24) was chosen to replace the default VGG16 fully connected layers when fine-tuning; this was found to give better results. The optimizer follows the logic from [12, 14]: the learning rate was chosen to be very small and the momentum large, both of which ensure no drastic changes within the weights of the network during training (which would destroy what had already been learned). The Keras data augmentation API was used to generate extra training samples, and the network was trained for a total of 200 epochs (after which the accuracy was no longer changing) with a batch size of 32.
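The setup described above can be sketched in Keras as follows. The learning rate (1e-4) and momentum (0.9) are assumed stand-ins for the paper's "very small"/"large" settings, and `weights=None` is used only to keep the sketch self-contained, whereas the paper starts from ImageNet weights (`weights="imagenet"`):

```python
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers

# VGG16 without its fully connected top (weights=None avoids a download;
# the paper uses the ImageNet-pretrained weights).
base = tf.keras.applications.VGG16(weights=None, include_top=False,
                                   input_shape=(224, 224, 3))

# Freeze everything except the final convolutional block (block 5).
for layer in base.layers:
    layer.trainable = layer.name.startswith("block5")

# Replace the default fully connected layers with a single 256-unit layer
# followed by a 24-way softmax output, as described above.
model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dense(24, activation="softmax"),
])

# Small learning rate, large momentum (assumed values, see lead-in).
model.compile(optimizer=optimizers.SGD(learning_rate=1e-4, momentum=0.9),
              loss="categorical_crossentropy", metrics=["accuracy"])
```

Training would then proceed with `model.fit(...)` on augmented patches; only block-5 weights and the new head are updated.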
4.2 Pre-Trained CNN as a Feature Extractor
Using the implementations of the specified architectures provided in Keras, the pre-trained networks were first used as feature extractors without any fine-tuning (Feature Extractor VGG16, or FE-VGG16, and FE-Inception-v3). The output of the last fully connected layer of the network, prior to classification, was extracted and used as a feature vector. As pre-trained networks were trained in other domains (on very different image categories) and hence cannot be used as classifiers directly, we used the deep features to train a linear Support Vector Machine (SVM) for classification. The Python packages scikit-learn and LIBSVM were used to train SVM classifiers with a linear kernel [22, 23]. Both NumPy and SciPy were leveraged to manipulate and store data during these experiments [24, 25].
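The deep-features-plus-SVM pipeline can be sketched with scikit-learn as follows. Random blob features stand in for the deep features (in the real setting, the activations of the network's last fully connected layer for each patch), so the data here is purely illustrative:

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

# Stand-in "deep features" for 24 classes; 64 dimensions keeps the sketch
# fast (VGG16's last fully connected layer would give 4096-D vectors).
X, y = make_blobs(n_samples=2400, centers=24, n_features=64, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A linear SVM trained on top of the (frozen) deep features.
clf = LinearSVC(random_state=0).fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
```

The same pattern applies unchanged when `X` is replaced by real network activations.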
4.3 Fine-Tuned CNN as a Classifier
The networks were then fine-tuned on the Kimia Path24 dataset. Using the Keras library, the convolutional layers were first separated from the top fully connected layers. The training patches were fed through the convolutional part of the model to create a set of bottleneck features with which to initially pre-train the new fully connected layers. These features were used to initialize the weights of a fully connected MLP consisting of one 256-unit dense ReLU layer and a softmax classification layer. Next, the fully connected model was attached to the convolutional layers; every convolutional block except the last was kept frozen, and training was performed to adjust the classification weights [12, 14]. Similarly, for the Inception-v3 network, the fully connected layers were replaced with one 1024-unit dense ReLU layer and a softmax classification layer; the fully connected layers were pre-trained on bottleneck features, then attached to the convolutional layers, and training of the final two inception blocks was performed. The resulting networks (Transfer-Learned VGG16, or TL-VGG16, and TL-Inception-v3) were then used to classify the test patches. The class activation mappings (CAMs) for the fine-tuned Inception-v3 network on randomly selected test patches can be viewed in Fig. 3.
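The bottleneck pre-training step can be sketched as follows, with random vectors standing in for the flattened output of VGG16's frozen convolutional base (7×7×512 for 224×224 inputs); the training data here is illustrative only:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# Stand-in bottleneck features: in the real pipeline, each training patch
# is pushed once through the frozen convolutional base and flattened.
rng = np.random.default_rng(0)
bottleneck = rng.normal(size=(100, 7 * 7 * 512)).astype("float32")
scan_labels = rng.integers(0, 24, size=100)

# The new fully connected head: one 256-unit ReLU layer and a 24-way softmax.
inputs = tf.keras.Input(shape=(7 * 7 * 512,))
x = layers.Dense(256, activation="relu")(inputs)
outputs = layers.Dense(24, activation="softmax")(x)
head = models.Model(inputs, outputs)

head.compile(optimizer="rmsprop", loss="sparse_categorical_crossentropy")
head.fit(bottleneck, scan_labels, epochs=1, batch_size=32, verbose=0)

# After pre-training, this head is attached on top of the convolutional
# base before fine-tuning the final block(s).
probs = head.predict(bottleneck, verbose=0)
```

Pre-computing the bottleneck features once makes this head training cheap, since the expensive convolutional forward pass is not repeated every epoch.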
The results of our experiments are summarized in Table 1. The results for VGG16 are quite similar across approaches: training from scratch, using the pre-trained network as a feature extractor, and fine-tuning the pre-trained network all deliver comparable results for Kimia Path24. For Inception-v3, the transfer-learned model outperforms the feature extractor. As TL-Inception-v3 produced the best results (a total accuracy of 56.98%), and minimally updating the weights of a pre-trained network is not a time-consuming task, one may prefer to utilize it. Alternatively, one may prefer using the pre-trained Inception-v3 as a feature extractor over training from scratch or fine-tuning, as it requires no extra effort and produces similar results with a linear SVM.
| Approach | Network | Patch-to-scan accuracy | Whole-scan accuracy | Total accuracy |
|---|---|---|---|---|
| Train from scratch | CNN | 64.98% | 64.75% | 41.80% |
| Fine-tuning the pre-trained net | TL-VGG16 | 63.85% | 66.23% | 42.29% |
| Fine-tuning the pre-trained net | TL-Inception-v3 | 74.87% | 76.10% | 56.98% |
It was surprising to find that simply using features from a pre-trained network (trained on non-medical images, see Fig. 4) can deliver results comparable with a network that, with considerable effort and resources, has been trained from scratch for the domain in focus (here histopathology). Such a simpler approach was even able to achieve a noticeable increase in overall performance on the Kimia Path24 dataset. Another surprising effect was that transfer learning via fine-tuning of VGG16 was not able to provide any improvement over extracting deep features from the pre-trained network without any change to its learned weights, whereas for Inception-v3 the improvement was immediate.
Perhaps the most obvious reaction to this finding is that if we had enough samples, i.e., millions of histopathological images, and if we used proper computational devices for efficient training, then a CNN trained from scratch would perhaps deliver the best results, clearly better than transfer learning. Although this statement is supported by comparable empirical evidence, it remains speculation for a sensitive field like medical imaging.
But why is it so difficult to train a CNN for this case? It is most likely due to a number of factors, such as a relative lack of image data, the effect of scaling down a patch for use within a deep network, an architecture not well suited to the problem, or an overly simplistic fully connected network. However, as previously discussed, the problem posed by the Kimia Path24 dataset is indeed a hard one, most likely due to the high variance between the different patches within a given scan (intra-class variability). This is further validated by the results in Fig. 3. The two columns contain patches that have distinct patterns with their own unique features. The CAMs in the first column show that the network responds strongly to the unique structures within label 4 (very strongly for the final patch), whereas when presented with the completely different patterns in the second column, the network responds strongly to other areas, typically ones that embody inner edges within the sample. This is evidence that the model has, at the very least, begun to learn higher-level structures within individual patches. Further investigation with different architectures would likely improve upon these results, as would more aggressive augmentation.
Retrieval and classification of histopathological images are useful but challenging tasks in diagnostic pathology. Whole scan imaging (WSI) generates gigapixel images that are immensely rich in detail and exhibit tremendous inter- and intra-class variance. Both the feature extractor and the transfer-learned network were able to offer increases in classification accuracy on the Kimia Path24 dataset compared to a CNN trained from scratch. The comparatively low performance of the latter could be due to the architecture not being well suited to the problem, the lack of a sufficient number of training images, and/or the inherent difficulty of the classification task for high-resolution and highly variable histopathology images. Further work would warrant using different architectures for comparison, more aggressive data augmentation, and potentially increasing the number of training samples used from the Kimia Path24 dataset. However, both the transfer-learned and feature extractor models were able to compete with the state-of-the-art methods reported in the literature, and therefore show potential for further improvement.
The authors would like to thank Huron Digital Pathology (Waterloo, ON, Canada) for its continuing support.
-  M. N. Gurcan, L. E. Boucheron, A. Can, A. Madabhushi, N. M. Rajpoot, and B. Yener, “Histopathological image analysis: A review,” IEEE reviews in biomedical engineering, vol. 2, pp. 147–171, 2009.
-  S. Naik, S. Doyle, M. Feldman, J. Tomaszewski, and A. Madabhushi, “Gland segmentation and computerized gleason grading of prostate histology by integrating low-, high-level and domain specific information,” in MIAAB workshop, 2007, pp. 1–8.
-  P. S. Karvelis, D. I. Fotiadis, I. Georgiou, and M. Syrrou, “A watershed based segmentation method for multispectral chromosome images classification,” in Engineering in Medicine and Biology Society, 2006. EMBS’06. 28th Annual International Conference of the IEEE. IEEE, 2006, pp. 3009–3012.
-  S. Petushi, F. U. Garcia, M. M. Haber, C. Katsinis, and A. Tozeren, “Large-scale computations on histology images reveal grade-differentiating parameters for breast cancer,” BMC medical imaging, vol. 6, no. 1, p. 14, 2006.
-  X. Zhang, W. Liu, M. Dundar, S. Badve, and S. Zhang, “Towards large-scale histopathological image analysis: Hashing-based image retrieval,” IEEE Transactions on Medical Imaging, vol. 34, no. 2, pp. 496–506, Feb 2015.
-  W. Liu, J. Wang, R. Ji, Y. G. Jiang, and S. F. Chang, “Supervised hashing with kernels,” in 2012 IEEE Conference on Computer Vision and Pattern Recognition, June 2012, pp. 2074–2081.
-  H. R. Tizhoosh, “Barcode annotations for medical image retrieval: A preliminary investigation,” in Image Processing (ICIP), 2015 IEEE International Conference on. IEEE, 2015, pp. 818–822.
-  H. R. Tizhoosh, S. Zhu, H. Lo, V. Chaudhari, and T. Mehdi, “Minmax radon barcodes for medical image retrieval,” in International Symposium on Visual Computing. Springer, 2016, pp. 617–627.
-  A. Khatami, M. Babaie, A. Khosravi, H. R. Tizhoosh, S. M. Salaken, and S. Nahavandi, “A deep-structural medical image classification for a radon-based image retrieval,” in 2017 IEEE 30th Canadian Conference on Electrical and Computer Engineering (CCECE), April 2017, pp. 1–4.
-  D. Yi, R. L. Sawyer, D. C. III, J. Dunnmon, C. Lam, X. Xiao, and D. Rubin, “Optimizing and visualizing deep learning for benign/malignant classification in breast tumors,” CoRR, vol. abs/1705.06362, 2017. [Online]. Available: http://arxiv.org/abs/1705.06362
-  S. J. Pan and Q. Yang, “A survey on transfer learning,” IEEE Transactions on Knowledge and Data Engineering, vol. 22, no. 10, pp. 1345–1359, Oct 2010.
-  H. C. Shin, H. R. Roth, M. Gao, L. Lu, Z. Xu, I. Nogues, J. Yao, D. Mollura, and R. M. Summers, “Deep convolutional neural networks for computer-aided detection: Cnn architectures, dataset characteristics and transfer learning,” IEEE Transactions on Medical Imaging, vol. 35, no. 5, pp. 1285–1298, May 2016.
-  J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “Imagenet: A large-scale hierarchical image database,” in Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on. IEEE, 2009, pp. 248–255.
-  R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Region-based convolutional networks for accurate object detection and segmentation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 1, pp. 142–158, Jan 2016.
-  Y. Bar, I. Diamant, L. Wolf, and H. Greenspan, “Deep learning with non-medical training used for chest pathology identification,” in Proc. SPIE, vol. 9414, 2015, p. 94140V.
-  Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, Nov 1998.
-  M. Babaie, S. Kalra, A. Sriram, C. Mitcheltree, S. Zhu, A. Khatami, S. Rahnamayan, and H. R. Tizhoosh, “Classification and Retrieval of Digital Pathology Scans: A New Dataset.” [Online]. Available: http://arxiv.org/abs/1705.07522
-  K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” CoRR, vol. abs/1409.1556, 2014.
-  C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, “Rethinking the inception architecture for computer vision,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 2818–2826.
-  F. Chollet et al., “Keras,” https://github.com/fchollet/keras, 2015.
-  N. Tajbakhsh, J. Y. Shin, S. R. Gurudu, R. T. Hurst, C. B. Kendall, M. B. Gotway, and J. Liang, “Convolutional neural networks for medical image analysis: Full training or fine tuning?” IEEE Transactions on Medical Imaging, vol. 35, no. 5, pp. 1299–1312, May 2016.
-  F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay, “Scikit-learn: Machine learning in Python,” Journal of Machine Learning Research, vol. 12, pp. 2825–2830, 2011.
-  C.-C. Chang and C.-J. Lin, “LIBSVM: A library for support vector machines,” ACM Transactions on Intelligent Systems and Technology, vol. 2, pp. 27:1–27:27, 2011, software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.
-  S. van der Walt, S. C. Colbert, and G. Varoquaux, “The numpy array: A structure for efficient numerical computation,” Computing in Science & Engineering, vol. 13, no. 2, pp. 22–30, 2011. [Online]. Available: http://aip.scitation.org/doi/abs/10.1109/MCSE.2011.37
-  E. Jones, T. Oliphant, P. Peterson et al., “SciPy: Open source scientific tools for Python,” 2001–. [Online]. Available: http://www.scipy.org/
-  D. Yu and M. L. Seltzer, “Improved bottleneck features using pretrained deep neural networks,” in Twelfth Annual Conference of the International Speech Communication Association, 2011.
-  R. Kotikalapudi and contributors, “keras-vis,” https://github.com/raghakot/keras-vis, 2017.
-  R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra, “Grad-CAM: Visual explanations from deep networks via gradient-based localization,” 2016. [Online]. Available: https://arxiv.org/abs/1610.02391
-  M. D. Zeiler and R. Fergus, Visualizing and Understanding Convolutional Networks. Cham: Springer International Publishing, 2014, pp. 818–833. [Online]. Available: http://dx.doi.org/10.1007/978-3-319-10590-1_53