Recognizing Magnification Levels in Microscopic Snapshots

by Manit Zaveri, et al.

Recent advances in digital imaging have turned computer vision and machine learning into new tools for analyzing pathology images. This trend could automate some of the tasks in diagnostic pathology and alleviate the pathologist's workload. The final step of any cancer diagnosis procedure is performed by an expert pathologist. These experts use microscopes with high levels of optical magnification to observe the minute characteristics of tissue acquired through biopsy and fixed on glass slides. Switching between different magnifications, and finding the magnification level at which the presence or absence of malignant tissue can be identified, is important. As the majority of pathologists still use light microscopy rather than digital scanners, in many instances a camera mounted on the microscope is used to capture snapshots of significant fields of view. Repositories of such snapshots usually do not contain the magnification information. In this paper, we extract deep features of images from the TCGA dataset with known magnification to train a classifier for magnification recognition. We compare the results with LBP, a well-known handcrafted feature extraction method. The proposed approach achieved a mean accuracy of 96% with a shallow neural network trained as the classifier.




I Introduction

A biopsy, followed by specimen preparation in the laboratory and a subsequent microscopic examination by a trained pathologist, is necessary for a definitive diagnosis of any type of cancer and many other diseases. Hematoxylin and eosin (H&E) staining is applied to thin cuts of the biopsy sample to visualize the structural patterns and any distortion of the tissue. To differentiate benign from malignant cells and to extract distinctive cell features, the pathologist generally observes the tissue at several magnification levels to gain a more comprehensive understanding of the specimen. The size of the area under observation decreases as magnification increases, allowing experts to view the enlarged tissue and observe the minute characteristics relevant for diagnosis. Some microscopes are equipped with a mounted camera that is used to capture snapshots from the glass slide [4, 21]. Pathologists create snapshots of the tissue and save them for future reference in reports or for research purposes. These snapshots usually lack crucial information such as the magnification level, which is required for many classification tasks. Therefore, the use of these snapshots for future research is quite limited, particularly for large and diverse repositories of such snapshots from different organs and diseases. This limitation is the main motivation for this work.

The trade-off between superior performance and time complexity in computer vision research is mainly noticeable when it comes to feature extraction. There are several algorithms for feature extraction, such as SIFT, HOG and LBP, which are based on handcrafted design. In contrast, there are also deep learning methods using convolutional neural networks (CNNs), such as VGG, ResNet and Inception, which are based on learning from data. Modern AI algorithms have been intensively investigated recently [19]. Deep networks can learn distinctive image features, while traditional algorithms implement a series of functions on the image to extract important characteristics. These features are deployed for various classification and content-based image retrieval tasks [11]. Both techniques, handcrafted and learning-based, have proven to perform well for different applications, although deep features are reported to generally be much more expressive [5, 16]. Recent publications have investigated this problem generically in the histopathology domain. Several approaches for the classification of malignancy in breast cancer images, for instance, have been proposed. Bayramoglu et al. [1] utilized deep features, while Gupta et al. [8] used color texture features and evaluated the influence of magnification on the classification model to identify malignancy. Otalora et al. [16] focused only on the magnification level and implemented a CNN-based regression to find the magnification level using multiple open-access datasets.

This paper is organized as follows: the feature extractors and classification algorithms are briefly discussed in Section II. The dataset is introduced in Section III, and Section IV explains the methodology used in the study. Experiments and results are discussed in Section V. Finally, Section VI concludes the paper.

II Background

II-A Feature Extraction

II-A1 Local Binary Pattern (LBP)

LBP is a conventional feature extraction algorithm. The multi-resolution approach of Ojala and Pietikainen [15] demonstrated LBP's usefulness for applications such as text identification [10] and histopathology classification [3, 6]. A combined approach of the LBP descriptor with the Histogram of Oriented Gradients (HOG) descriptor was used by Wang et al. [20] to improve detection performance.

There are many LBP variations available which address the effects of rotational invariance and uniformity on neighborhood pixels [14]. The most recent applications of LBP are briefly discussed in a survey [12].

II-A2 Pre-Trained Deep Nets

Pre-trained deep networks can be employed to transfer knowledge from one domain to another. For instance, DenseNet121 is a deep convolutional neural network proposed by Huang et al. [9] that has been applied to many different problems. Recent publications emphasize the capabilities of artificial neural networks and their performance in several classification and search-based tasks [18, 7].

II-B Classifiers

There is a large body of literature on different algorithms for data classification. We restrict ourselves to the following two approaches mainly because of our previous experience with these methods.

II-B1 Artificial Neural Networks (ANNs)

ANNs are classifiers with the capability to simulate the function of the human brain at a very small scale and for a given task. They are commonly used as robust classifiers in machine learning when a large body of labelled data is available for training. The network consists of three primary parts: the first layer represents the input neurons; the last layer represents the output neurons; and in between, a series of weighted hidden layers is adjusted to minimize the error between the actual and predicted output. It is difficult to interpret the trajectory of an ANN toward its output and thus to understand the rationale behind its decision.

When we talk about ANNs we generally mean shallow networks (fewer than 5 layers), in contrast to convolutional neural networks such as DenseNet, ResNet and VGG-19, which are deep networks with many hidden layers.

II-B2 Random Forests (RF)

Random Forest (RF) classifiers are ensembles of multi-way decision trees, with some randomization used in growing each tree as a potential solution. The leaf nodes represent the posterior distribution over the image classes. Internal nodes contain a test split based on the maximum information gain within the feature subspace. Bosch et al. [2] implemented image classification using random forests, beating state-of-the-art results.

III Dataset

We used publicly available data from The Cancer Genome Atlas (TCGA). An important characteristic of this dataset is that the objective power of the whole slide images, which represents the different magnification levels, is available. Other publicly available datasets, such as PMC, do not contain magnification information, making them unsuitable for this study. Each image is stored at various magnification levels in a pyramid structure. A subset of the original dataset was created using an indexing algorithm which randomly indexed one whole slide image (WSI) at a time. If an objective power (i.e., the magnification of the base layer) of 40x or 20x was absent, we discarded that WSI. In total, we gathered 29,596 WSIs for creating our magnification-specific dataset. For these WSI files we randomly selected the coordinates of 5 points, read the image at the respective magnifications at those coordinates, and took a snapshot at each point. This yielded a total of 693,518 patches, consisting of 147,477 patches at each of 2.5x, 5x, 10x and 20x, and 103,611 at 40x. A region of a sample snapshot at different magnification levels is represented in Figure 1.

Fig. 1: Illustration of patches from different magnification levels extracted from a WSI. All the patches are centered around the same coordinates (images re-scaled for convenient visualization).
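The patch-extraction logic described above can be illustrated with a synthetic image pyramid standing in for a real WSI reader (in practice a library such as OpenSlide would read the slide; the patch size, coordinate handling and all names below are assumptions for illustration only):

```python
import numpy as np

MAGNIFICATIONS = [2.5, 5, 10, 20, 40]

def build_pyramid(base: np.ndarray, base_mag: float = 40.0) -> dict:
    """Downsample a synthetic base image to mimic a WSI pyramid."""
    pyramid = {}
    for mag in MAGNIFICATIONS:
        step = int(base_mag / mag)           # e.g. 40x -> 2.5x is step 16
        pyramid[mag] = base[::step, ::step]  # crude nearest-neighbor downsampling
    return pyramid

def sample_patches(pyramid: dict, patch: int = 64, n_points: int = 5,
                   seed: int = 0) -> list:
    """Take one patch per magnification around each of n random points."""
    rng = np.random.default_rng(seed)
    patches = []
    h, w = pyramid[MAGNIFICATIONS[0]].shape
    for _ in range(n_points):
        # Random point in the coordinate system of the lowest-magnification level.
        y, x = rng.integers(0, h - patch), rng.integers(0, w - patch)
        for mag in MAGNIFICATIONS:
            scale = mag / MAGNIFICATIONS[0]
            yy, xx = int(y * scale), int(x * scale)
            patches.append((mag, pyramid[mag][yy:yy + patch, xx:xx + patch]))
    return patches

base = np.random.default_rng(1).integers(0, 256, size=(4096, 4096))
patches = sample_patches(build_pyramid(base))
print(len(patches))  # 5 points x 5 magnification levels = 25 patches
```

A real pipeline would anchor the patches on the same tissue coordinates via the slide's level downsample factors, as the figure above suggests.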

IV Methodology

For classifying the magnification, two models are trained based on different features: one CNN-based approach using DenseNet121 and one conventional algorithm, namely Local Binary Patterns (LBP). The vectors obtained by each feature extractor are used as input for the classification models. These features are independently fed into ANNs and RFs to train them for magnification recognition. The classification performance is evaluated using 5-fold cross-validation with an 80%-20% split for training and testing, respectively. The accuracy provides a performance index for correct classification, the kappa score measures the chance-corrected agreement associated with each label, and the F1-score provides the harmonic mean of precision and recall [16, 17].
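The evaluation protocol can be sketched with Scikit-Learn; synthetic features stand in for the extracted DenseNet121/LBP vectors, and 5 stratified folds give the 80%-20% split:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score, f1_score
from sklearn.model_selection import StratifiedKFold

# Synthetic stand-in for the extracted features and 5 magnification labels.
X, y = make_classification(n_samples=500, n_features=32, n_informative=16,
                           n_classes=5, random_state=0)

accs, kappas, f1s = [], [], []
for train_idx, test_idx in StratifiedKFold(n_splits=5, shuffle=True,
                                           random_state=0).split(X, y):
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X[train_idx], y[train_idx])
    pred = clf.predict(X[test_idx])
    accs.append(accuracy_score(y[test_idx], pred))
    kappas.append(cohen_kappa_score(y[test_idx], pred))
    f1s.append(f1_score(y[test_idx], pred, average="macro"))

# Per-fold and aggregate scores, as reported in Table I.
print(f"acc {np.mean(accs):.3f} ± {np.std(accs):.3f}")
print(f"kappa {np.mean(kappas):.3f}, f1 {np.mean(f1s):.3f}")
```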

IV-A Feature Extraction with LBP

The feature vectors obtained through the application of LBP are stored as histograms. We used rotation-invariant uniform LBP with three parameter settings for the radius and the number of neighbors. For a rotation-invariant uniform LBP with P neighbors, the histogram has P+2 bins; the smallest setting yielded features with 10 dimensions (bins), and the two larger settings yielded correspondingly longer feature vectors.
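This binning behavior can be sketched with scikit-image's LBP implementation; the radius value shown is an assumption, while P=8 is the neighbor count consistent with a 10-bin rotation-invariant uniform histogram:

```python
import numpy as np
from skimage.feature import local_binary_pattern

# Illustrative parameters: P=8 neighbors gives P+2 = 10 bins for the
# rotation-invariant uniform variant; radius R=1 is an assumption.
P, R = 8, 1

rng = np.random.default_rng(0)
patch = rng.integers(0, 256, size=(128, 128)).astype(np.uint8)

# 'uniform' in scikit-image is the rotation-invariant uniform LBP,
# with codes in {0, ..., P+1}.
codes = local_binary_pattern(patch, P, R, method="uniform")

# The normalized histogram is the feature vector fed to the classifier.
hist, _ = np.histogram(codes, bins=np.arange(P + 3), density=True)
print(hist.shape)  # (10,)
```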

IV-B Deep Features using DenseNet121

Using pre-trained features for histopathology images has been the focus of attention in recent literature [13]. Image features were extracted from the last average pooling layer of DenseNet121. Before passing the images through the feature extractor, we pre-processed them by setting the mean to zero. The input tensor flows through an initial convolutional layer that captures low-level information such as shapes in the histopathology images. This is followed by a series of 4 dense blocks containing 6, 12, 24 and 16 dense layers, respectively, each stacking batch normalization, ReLU and 2D convolutional layers. The feature vector from the last average pooling layer contains 1,024 dimensions, which are passed to the classifier to represent the corresponding image magnification.

In total, we had 4 individual feature sets for our dataset. One may argue that using ANNs amounts to a fine-tuning technique, but in all our experiments we treated the classification methods as separate from the feature extraction models.

V Experiments and Results

V-A ANNs

The shallow classifier that used deep features excelled in performance. An accuracy of 96.1% was achieved using 2 hidden layers between the input and output layers. The first hidden layer had 512 neurons with batch normalization and a dropout rate of 0.5, followed by another fully connected layer of 256 neurons with ReLU activation. This was finally connected to an output layer of 5 neurons activated using softmax. For every ANN we matched the number of input neurons to the dimensionality of the input feature vector. For LBP features, ANNs achieved an accuracy of 68.4%, a kappa score of 0.603 and an F1-score of 0.676 for one split. From Table I we can easily identify the higher performance of deep features compared to handcrafted LBP features. However, there is a trade-off between real-time high performance and the time complexity plus computational power required for extracting deep features. The confusion matrix in Figure 2 shows that images at a magnification level of 5x are mostly confused with images at 2.5x and 10x magnification; the individual class accuracy was 91% for 5x images, while images at 2.5x, 20x and 40x were separated excellently, with a misclassification rate of less than 1%.
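The described architecture can be sketched in PyTorch; this is a hypothetical reconstruction, since the optimizer and training settings are not specified here:

```python
import torch
import torch.nn as nn

def make_magnification_ann(in_dim: int) -> nn.Sequential:
    """Shallow classifier as described: a 512-unit hidden layer with batch
    normalization and 0.5 dropout, a 256-unit ReLU layer, and a 5-way
    softmax output (one neuron per magnification level)."""
    return nn.Sequential(
        nn.Linear(in_dim, 512),
        nn.BatchNorm1d(512),
        nn.Dropout(0.5),
        nn.Linear(512, 256),
        nn.ReLU(),
        nn.Linear(256, 5),
        nn.Softmax(dim=1),
    )

# The input size matches the feature vector, e.g. 1,024 for deep features.
model = make_magnification_ann(1024)
model.eval()
probs = model(torch.rand(2, 1024))
print(probs.shape)  # torch.Size([2, 5])
```

In practice one would train on raw logits with a cross-entropy loss and apply the softmax only at inference time; the explicit `Softmax` here simply mirrors the description.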

V-B Random Forests

We searched for the best parameters using RandomizedSearchCV and GridSearchCV from the Scikit-Learn library [17]. We used 1,000 estimators for training the classifier, with a maximum depth of 50 for the individual trees. The final label assignment was determined through majority voting over the trees. The reported accuracy for classification using deep features was 96%, with a kappa score of 0.95 and an F1-score of 0.96, while for LBP the best accuracy obtained by the RF classification model was 68.4%, with a kappa score of 0.60 and an F1-score of 0.67. All results using the RF classifier are listed in Table I. Comparing the confusion matrices in Figure 2 (a) and (b), we can see that RF classifiers using deep features are more prone to confusing 5x images than the shallow classifier, and the 20x and 40x classes are also misclassified more often when RFs are used.
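A minimal Scikit-Learn sketch of this setup, using synthetic features in place of the extracted DenseNet121/LBP vectors; the search grid shown is an assumption (only the final values, 1,000 estimators and depth 50, are reported above), and the grid is kept small here so the example runs quickly:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

# Synthetic stand-in for the extracted feature vectors (5 magnification classes).
X, y = make_classification(n_samples=500, n_features=32, n_informative=16,
                           n_classes=5, random_state=0)

# Hypothetical search space; the paper's final setting was
# n_estimators=1000, max_depth=50.
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions={"n_estimators": [50, 100, 200],
                         "max_depth": [10, 25, 50]},
    n_iter=3, cv=3, random_state=0,
)
search.fit(X, y)

# Predictions are a majority vote over the individual trees.
print(search.best_params_)
print(f"training accuracy: {search.score(X, y):.3f}")
```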

(a) ANN Classifier
(b) RF Classifier
Fig. 2: Confusion matrix of the two classifiers.
Features   Folds      Random Forests                        Shallow Classifier
                      Acc     Kappa    F-Score              Acc     Kappa    F-Score
DF         Fold1      0.881   0.849    0.881                0.935   0.917    0.935
           Fold2      0.879   0.849    0.879                0.960   0.950    0.960
           Fold3      0.916   0.849    0.915                0.971   0.963    0.971
           Fold4      0.924   0.905    0.924                0.972   0.966    0.973
           Fold5      0.921   0.901    0.921                0.969   0.962    0.969
           All Folds  0.904±0.01  0.879±0.025  0.904±0.01   —       —        —

LBP-1      Fold1      0.520   0.392    0.509                0.514   0.382    0.501
           Fold2      0.612   0.513    0.604                0.601   0.499    0.588
           Fold3      0.649   0.560    0.643                0.598   0.496    0.583
           Fold4      0.684   0.605    0.679                0.646   0.557    0.631
           Fold5      0.684   0.605    0.678                0.655   0.568    0.646
           All Folds  0.629±0.06  0.535±0.07  0.622±0.06    0.602±0.05  0.50±0.06  0.589±0.05

LBP-2      Fold1      0.516   0.389    0.508                0.471   0.339    0.458
           Fold2      0.582   0.476    0.571                0.565   0.453    0.556
           Fold3      0.626   0.532    0.618                0.587   0.480    0.578
           Fold4      0.656   0.569    0.649                0.613   0.516    0.607
           Fold5      0.654   0.567    0.647                0.628   0.534    0.618
           All Folds  0.606±0.05  0.506±0.06  0.598±0.05    0.572±0.05  0.464±0.06  0.563±0.05

LBP-3      Fold1      0.546   0.425    0.536                0.507   0.375    0.493
           Fold2      0.609   0.510    0.599                0.582   0.476    0.574
           Fold3      0.649   0.561    0.643                0.634   0.542    0.622
           Fold4      0.682   0.603    0.676                0.668   0.584    0.656
           Fold5      0.676   0.595    0.671                0.640   0.551    0.631
           All Folds  0.632±0.05  0.538±0.06  0.624±0.05    0.606±0.05  0.505±0.07  0.595±0.05
TABLE I: Results for deep features (DF) and LBP with 3 parameter settings. "All Folds" rows report mean ± standard deviation over the 5 folds.

VI Conclusions

In this paper, two feature extraction methods were used to train two classifiers for the discrete classification of magnification levels in microscopic images. We used the TCGA repository to construct a large dataset for training and testing. Features were extracted from the raw images without any segmentation. A shallow classifier with deep features obtained from DenseNet121 delivered the best performance in terms of accuracy (96%), kappa score (0.95) and F1-score (0.96). Our model outperformed the existing state-of-the-art regression method, improving the kappa score by 11%. Because the TCGA dataset comprises various categories of images from several organs, we expect the model to generalize well for magnification recognition, providing a basic pre-processing step that can improve the performance of digital pathology image analysis.

Microscopic snapshots may have a different image quality than patches extracted from whole slide images. Investigating this difference will be the subject of future work.


  • [1] N. Bayramoglu, J. Kannala, and J. Heikkilä (2016) Deep learning for magnification independent breast cancer histopathology image classification. In 2016 23rd International Conference on Pattern Recognition (ICPR), pp. 2440–2445. Cited by: §I.
  • [2] A. Bosch, A. Zisserman, and X. Munoz (2007) Image classification using random forests and ferns. In 2007 IEEE 11th international conference on computer vision, pp. 1–8. Cited by: §II-B2.
  • [3] J. C. Caicedo, A. Cruz, and F. A. Gonzalez (2009) Histopathology image classification using bag of features and kernel functions. In Conference on Artificial Intelligence in Medicine in Europe, pp. 126–135. Cited by: §II-A1.
  • [4] P. P. Clark, D. E. Miller, D. A. Mulford, and J. C. Ostrowski (1994-May 24) Microscope camera. Google Patents. Note: US Patent 5,315,344 Cited by: §I.
  • [5] M. Dinesh Kumar, M. Babaie, S. Zhu, S. Kalra, and H. R. Tizhoosh (2017-11) A comparative study of CNN, BoVW and LBP for classification of histopathological images. In 2017 IEEE Symposium Series on Computational Intelligence (SSCI), pp. 1–7. Cited by: §I.
  • [6] H. Erfankhah, M. Yazdi, M. Babaie, and H. R. Tizhoosh (2019) Heterogeneity-aware local binary patterns for retrieval of histopathology images. IEEE Access 7, pp. 18354–18367. Cited by: §II-A1.
  • [7] A. Gordo, J. Almazán, J. Revaud, and D. Larlus (2016) Deep image retrieval: learning global representations for image search. In European conference on computer vision, pp. 241–257. Cited by: §II-A2.
  • [8] V. Gupta and A. Bhavsar (2017) Breast cancer histopathological image classification: is magnification important?. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 17–24. Cited by: §I.
  • [9] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger (2017) Densely connected convolutional networks. In IEEE conference on computer vision and pattern recognition, pp. 4700–4708. Cited by: §II-A2.
  • [10] I. Jung and I. Oh (2010) Local binary pattern-based features for text identification of web images. In 2010 20th International Conference on Pattern Recognition, pp. 4320–4323. Cited by: §II-A1.
  • [11] S. Kalra, C. Choi, S. Shah, L. Pantanowitz, and H. Tizhoosh (2019) Yottixel–an image search engine for large archives of histopathology whole slide images. arXiv preprint arXiv:1911.08748. Cited by: §I.
  • [12] M. Kas, Y. El Merabet, Y. Ruichek, and R. Messoussi (2019) Survey on local binary pattern descriptors for face recognition. In Proceedings of the New Challenges in Data Sciences: Acts of the Second Conference of the Moroccan Classification Society, pp. 5. Cited by: §II-A1.
  • [13] B. Kieffer, M. Babaie, S. Kalra, and H. R. Tizhoosh (2017) Convolutional neural networks for histopathology image classification: training vs. using pre-trained networks. In 2017 Seventh International Conference on Image Processing Theory, Tools and Applications (IPTA), pp. 1–6. Cited by: §IV-B.
  • [14] S. Nigam, R. Singh, and A. Misra (2019) Local binary patterns based facial expression recognition for efficient smart applications. In Security in Smart Cities: Models, Applications, and Challenges, pp. 297–322. Cited by: §II-A1.
  • [15] T. Ojala, M. Pietikainen, and T. Maenpaa (2002) Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Transactions on pattern analysis and machine intelligence 24 (7), pp. 971–987. Cited by: §II-A1.
  • [16] S. Otálora, M. Atzori, V. Andrearczyk, and H. Müller (2018) Image magnification regression using densenet for exploiting histopathology open access content. In Computational pathology and ophthalmic medical image analysis, pp. 148–155. Cited by: §I, §IV.
  • [17] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay (2011) Scikit-learn: machine learning in Python. Journal of Machine Learning Research 12, pp. 2825–2830. Cited by: §IV, §V-B.
  • [18] C. L. Srinidhi, O. Ciga, and A. L. Martel (2019) Deep neural network models for computational histopathology: a survey. arXiv preprint arXiv:1912.12378. Cited by: §II-A2.
  • [19] H. R. Tizhoosh and L. Pantanowitz (2018) Artificial intelligence and digital pathology: challenges and opportunities. Journal of pathology informatics 9. Cited by: §I.
  • [20] X. Wang, T. X. Han, and S. Yan (2009) An HOG-LBP human detector with partial occlusion handling. In 2009 IEEE 12th International Conference on Computer Vision, pp. 32–39. Cited by: §II-A1.
  • [21] J. Winterot, J. Knoblich, T. Kaufhold, H. Tielebier, and G. Osten (2005-May 26) Microscope camera. Google Patents. Note: US Patent App. 10/993,660 Cited by: §I.