ConCORDe-Net: Cell Count Regularized Convolutional Neural Network for Cell Detection in Multiplex Immunohistochemistry Images

08/01/2019, by Yeman Brhane Hagos, et al.

In digital pathology, cell detection and classification are often prerequisites to quantify cell abundance and explore tissue spatial heterogeneity. However, these tasks are particularly challenging for multiplex immunohistochemistry (mIHC) images due to high levels of variability in staining, expression intensity, and inherent noise as a result of preprocessing artefacts. We propose a deep learning method to detect and classify cells in mIHC whole-tumour slide images of breast cancer. Inspired by inception-v3, we developed Cell COunt RegularizeD Convolutional neural Network (ConCORDe-Net), which integrates conventional Dice overlap and a new cell count loss function for optimizing cell detection, followed by a multi-stage convolutional neural network for cell classification. In total, 20447 cells, belonging to five cell classes, were annotated by experts from 175 patches extracted from 6 whole-tumour mIHC images. These patches were randomly split into training, validation, and testing sets. Using ConCORDe-Net, we obtained a cell detection F1 score of 0.873, the best score among three state-of-the-art methods compared. In particular, ConCORDe-Net excels at detecting closely located and weakly stained cells compared to other methods. Incorporating the cell count loss in the objective function regularizes the network to learn weak gradient boundaries and to separate weakly stained cells from background artefacts. Moreover, a cell classification accuracy of 96.5% supports that incorporating problem-specific knowledge such as cell count into deep learning-based cell detection architectures improves the robustness of the algorithm.


1 Introduction

Cell detection and classification are often the first key steps in a wide range of histology image analysis tasks, such as investigating the interplay of tumor and immune cells [1]. Multiplex immunohistochemistry (mIHC) is a multi-parametric protocol that allows simultaneous examination of the expression of multiple markers in a single section [2, 3]. Combined with robust cell detection and classification techniques, mIHC has the potential to allow detailed investigation of cells' spatial interactions and signalling for the study of tumor heterogeneity [2].

The field of digital pathology has recently witnessed a surge of interest in the application of deep learning for cell classification [4], cell detection [5, 6], and cell counting [7, 8, 9, 10]. However, automated cell detection and classification remain challenging due to variation in slide preparation and cell morphological diversity in shape and size. For example, closely located cells with weak boundaries are often difficult to discern [5, 6, 7, 8]. Moreover, a parameter such as kernel size often needs to be fixed in advance [5], which cannot cater for cells with a range of sizes and shapes. Furthermore, the need to differentiate cells with subtle differences in marker expression intensity, as exemplified in Fig. 1a, adds another layer of complexity in mIHC image analysis.

In this paper, to address the above stated challenges, we developed a new cell detection method followed by a multi-stage CNN to analyse mIHC images of breast cancer. Our work has the following main contributions: 1) We developed Cell Count RegularizeD Convolutional neural Network (ConCORDe-Net), inspired by inception-v3, which incorporates a cell counter and is designed for cell detection without the need to pre-specify parameters such as cell size. 2) The parameters of ConCORDe-Net were optimized using an objective function that combines conventional Dice overlap and a new cell count loss function, which regularizes the network parameters to detect closely located cells. 3) Our quantitative experiments support that ConCORDe-Net outperforms state-of-the-art methods at detecting closely located as well as weakly stained cells.

2 Materials

The dataset used in this paper consists of mIHC whole-tumor slide images from patients with breast cancer, scanned at 40X resolution. A total of 175 regions/patches were annotated by experts from different parts of 6 whole-tumor images; the patches were extracted from different regions of the slides to capture the variation in the data. The patches were then randomly split into training, validation, and testing sets. Within these patches, 20447 cells were annotated, belonging to the five cell types listed in Table 1. Illustrative examples of patches are shown in Fig. 1a. The distribution of cells across the splits is presented in Table 1.

Cell type Training Validation Test
CD8 2971 653 624
GAL8+ pSTAT- 4118 881 903
GAL8+ pSTAT+ strong 919 183 200
GAL8+ pSTAT+ moderate 1558 295 279
GAL8+ pSTAT+ weak 4770 1038 1102
Table 1: Distribution of the dataset across cell types and splits

3 Methodology

3.1 Dot Annotation to Cell Pseudo-segmentation

The reference ground truth was a dot annotation at the center of each cell rather than a segmentation of the cell's spatial extent, which is generally tedious to produce. However, to train the proposed cell detection pipeline, a cell mask (M) and the number of cells (Ct) were needed as targets. Ct is simply the number of annotated cells in the input patch. Cell pseudo-segmentation was generated from the dot annotations using Equation (1).

$$
M(i, j) =
\begin{cases}
1, & \text{if } d \le r \\
0, & \text{otherwise}
\end{cases}
\tag{1}
$$

where M(i, j) is the pixel intensity value at location (i, j) of the pseudo-segmentation image M, d is the Euclidean distance between pixel location (i, j) and the nearest cell dot annotation, and r is a distance threshold. r was empirically set to 4 pixels to guarantee that the pseudo-segmentations of neighbouring cells do not touch each other.
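As a concrete illustration, the following is a minimal NumPy/SciPy sketch of pseudo-segmentation generation; the function name and the (row, col) coordinate convention are our own assumptions, and only r = 4 comes from the text.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def pseudo_segmentation(dot_coords, shape, r=4):
    """Generate a binary pseudo-segmentation mask from dot annotations.

    Each pixel within Euclidean distance `r` of any annotated cell
    center is set to 1 (Equation 1); everything else stays 0.
    """
    dots = np.zeros(shape, dtype=bool)
    for row, col in dot_coords:  # dot annotations at cell centers
        dots[row, col] = True
    # Distance from every pixel to the nearest annotated center.
    dist = distance_transform_edt(~dots)
    return (dist <= r).astype(np.float32)
```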

3.2 Cell Counter

Our proposed cell counter network is shown in Fig. 1b. It is a mapping function from an input patch to a scalar estimate of the number of cells it contains. It consists of a feature extraction part and a regression part. The feature extraction part is composed of four consecutive convolutional layers with "same" padding, each followed by a max-pooling layer to reduce the dimensionality of the features from the previous layer. The regression part is a series of two dense layers, and the output dense layer has a single neuron which computes the estimated number of cells in the input tensor or image. The activation of all convolutional and dense layers was set to ReLU.
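A minimal Keras sketch of this architecture is given below. The input size, filter counts, kernel sizes, and dense widths are placeholders, since the published values did not survive extraction; only the overall structure (four conv/pool blocks, a two-layer regressor, a single ReLU count neuron) follows the description above.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_cell_counter(input_shape=(224, 224, 1)):
    """Cell counter CNN: four conv/max-pool blocks followed by a
    two-layer dense regressor ending in a single count neuron.
    All widths below are illustrative placeholders."""
    inputs = keras.Input(shape=input_shape)
    x = inputs
    for filters in (16, 32, 64, 128):  # placeholder filter counts
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.MaxPooling2D(pool_size=2)(x)
    x = layers.Flatten()(x)
    x = layers.Dense(64, activation="relu")(x)  # placeholder width
    outputs = layers.Dense(1, activation="relu")(x)  # estimated cell count
    return keras.Model(inputs, outputs, name="cell_counter")
```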

Parameters of all layers were randomly initialized using uniform Glorot initialization [11] and optimized using Adam [12]. Initially, we experimented with Euclidean loss [10] and exponential loss functions; however, these suffer from loss explosion during the initial epochs, which led us to the new cell count loss (Lcc) function in Equation (2).

$$
L_{cc} = \frac{1}{m}\sum_{i=1}^{m}\left(\frac{2}{1 + e^{-\lvert C_p^i - C_t^i \rvert}} - 1\right)
\tag{2}
$$

where the summation is over the m images of a mini-batch, and Cp and Ct are the predicted and true number of cells in an image, respectively. Fig. 2a shows the profile of Lcc as a function of the cell count difference (Cp − Ct); the loss is bounded between 0 and 1.
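A TensorFlow sketch of this loss, under the logistic reconstruction of Equation (2) given above, could look as follows:

```python
import tensorflow as tf

def cell_count_loss(c_true, c_pred):
    """Bounded cell count loss (Equation 2, as reconstructed above):
    a logistic function of the absolute count difference, averaged
    over the mini-batch. It stays within [0, 1), so it cannot explode
    in early epochs the way Euclidean or exponential losses can."""
    diff = tf.abs(c_pred - c_true)
    return tf.reduce_mean(2.0 / (1.0 + tf.exp(-diff)) - 1.0)
```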

Before integrating the cell counter model into the cell detection pipeline, it was trained and evaluated using pseudo-segmentations as input and the number of cells as output. To increase the amount of data, horizontal and vertical flipping were applied to all input training patches. The pseudo-segmentation is a binary image; however, once the counter is integrated into the cell detection model, it will be fed a tensor of floating-point values. Thus, morphological and intensity deformations were applied as follows: morphological erosion using a rectangular structuring element was performed on every patch with an empirically chosen probability and width, and the images were then multiplied, again with an empirically chosen probability, by a random matrix of the same size as the image, with all elements of the random matrix chosen so as to keep pixel values within the valid intensity range.
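The sketch below illustrates this deformation scheme; the probabilities, erosion width, and scaling range are placeholders, since the published values did not survive extraction.

```python
import numpy as np
from scipy.ndimage import grey_erosion

def deform_mask(mask, p_erode=0.5, p_intensity=0.5, rng=None):
    """Morphological/intensity deformation of a binary pseudo-
    segmentation, so the counter sees soft, probability-map-like
    inputs. Probabilities and erosion width are placeholders."""
    rng = rng or np.random.default_rng()
    out = mask.astype(np.float32)
    if rng.random() < p_erode:
        # Erode with a rectangular structuring element (placeholder 3x3).
        out = grey_erosion(out, size=(3, 3))
    if rng.random() < p_intensity:
        # Random per-pixel scaling in (0, 1] keeps values in [0, 1].
        out = out * rng.uniform(0.0, 1.0, size=out.shape)
    return out
```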

Figure 1: a) Sample patches representing different types of cells. b) Schematic of the ConCORDe-Net architecture; filter sizes of the convolutional layers are indicated in the diagram. TC = transposed convolution, MP = max-pooling, C = concatenate. The network has two outputs: a probability map and the predicted number of cells. The probability map was thresholded using an empirically optimized threshold to convert it to a binary image, and the center of every binary object represents the center of a cell. c) Schematic of the inception module.

3.3 Cell Detection

Fig. 1b shows the proposed ConCORDe-Net cell detection convolutional neural network. The input is an image patch, and the network has three parts: encoder, decoder, and cell counter. The encoder-decoder section is an extended version of U-Net [13]. The standard U-Net architecture [13] uses VGG-style blocks in its encoder and decoder sections; we propose instead to use the inception-v3 module shown in Fig. 1c. The parallel, varying-size filters in the inception block enable the network to extract multi-scale features in a given layer. The encoder contains three inception modules, the first two of which are followed by 2D max-pooling layers. The decoder is composed of transposed convolution, concatenation, and inception modules. A final convolutional layer at the end of the decoder reduces the depth of the tensor to a single channel. The output of the decoder was taken as the cell location prediction map (P) and connected to the pretrained cell counter model (explained in Section 3.2), which generates the predicted number of cells (Cp). The activation of all layers was set to ReLU, except for the last layer of the decoder, which uses a sigmoid. The cell detection architecture therefore has two outputs: the cell location prediction map and the predicted number of cells.
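A Keras sketch of the inception block described above follows; the branch layout (1×1, 3×3, stacked 3×3, and pooled branches) follows the usual inception-v3 pattern, and the per-branch filter counts are assumptions rather than the paper's exact values.

```python
from tensorflow.keras import layers

def inception_module(x, filters):
    """Inception-v3-style block (Fig. 1c): parallel convolutions of
    different receptive fields, concatenated along the channel axis,
    so each layer extracts multi-scale features. Branch widths are
    illustrative placeholders."""
    b1 = layers.Conv2D(filters, 1, padding="same", activation="relu")(x)
    b3 = layers.Conv2D(filters, 1, padding="same", activation="relu")(x)
    b3 = layers.Conv2D(filters, 3, padding="same", activation="relu")(b3)
    b5 = layers.Conv2D(filters, 1, padding="same", activation="relu")(x)
    b5 = layers.Conv2D(filters, 3, padding="same", activation="relu")(b5)
    b5 = layers.Conv2D(filters, 3, padding="same", activation="relu")(b5)  # 5x5 field via stacked 3x3s
    bp = layers.MaxPooling2D(3, strides=1, padding="same")(x)
    bp = layers.Conv2D(filters, 1, padding="same", activation="relu")(bp)
    return layers.Concatenate()([b1, b3, b5, bp])
```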

The parameters of the cell counter model were transfer-learned from cell pseudo-segmentation as explained in Section 3.2. Parameters of the other layers were randomly initialized using uniform Glorot initialization [11] and optimized using Adam [12] with the objective function shown in Equation (3). The cell detection loss (Ld) in Equation (3) has two parts: the first is a Dice overlap loss, and the second is the cell count loss.

$$
L_{d} = \frac{1}{m}\sum_{i=1}^{m}\left(1 - \frac{2\sum_{j} P_{ij}\, G_{ij}}{\sum_{j} P_{ij} + \sum_{j} G_{ij}}\right) + \lambda\, L_{cc}
\tag{3}
$$

where the summations in the first part are over the m images of a mini-batch and over the pixels j of the ground truth image G and the prediction map P. The second part is the same as Equation (2), but weighted by an empirically optimized constant λ.
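Combining this with the cell_count_loss sketch from Section 3.2, a TensorFlow version of the objective could look as follows; lam and the smoothing constant eps are placeholders (the published weighting did not survive extraction), and NHWC tensor layout is assumed.

```python
import tensorflow as tf

def detection_loss(g_true, p_pred, c_true, c_pred, lam=1.0, eps=1e-6):
    """Combined detection objective (Equation 3, as reconstructed):
    Dice overlap loss on the probability map plus the cell count
    loss, weighted by `lam`."""
    axes = (1, 2, 3)  # sum over the pixels of each image in the batch
    intersection = tf.reduce_sum(g_true * p_pred, axis=axes)
    denom = tf.reduce_sum(g_true, axis=axes) + tf.reduce_sum(p_pred, axis=axes)
    dice = tf.reduce_mean(1.0 - (2.0 * intersection + eps) / (denom + eps))
    return dice + lam * cell_count_loss(c_true, c_pred)
```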

Horizontal and vertical flipping was applied to training patches to increase the amount and diversity of our data.

3.4 Cell Classification

In our dataset, there were five types of cells: CD8, GAL8+ pSTAT-, GAL8+ pSTAT+ strong, GAL8+ pSTAT+ moderate, and GAL8+ pSTAT+ weak. GAL8+ pSTAT+ cells were divided based on their level of pSTAT expression into strong, moderate, and weak. However, discriminating among the GAL8+ pSTAT+ subtypes is challenging, even for experts. Inspired by the principle of divide-and-conquer algorithms, we converted the problem into a multi-stage classification. The first classifier (classifier1) differentiates between CD8, GAL8+ pSTAT-, and all GAL8+ pSTAT+ cells. A second classifier (classifier2) was then trained to further divide GAL8+ pSTAT+ cells into GAL8+ pSTAT+ strong, GAL8+ pSTAT+ moderate, and GAL8+ pSTAT+ weak.

Both classifiers were trained using patches large enough to cover the whole cell area for the majority of cells, and a similar network architecture was used for both. Each classifier has a feature extraction section and a classification section. The feature extraction part is a modified version of the VGG architecture [14], consisting of four convolutional layers with "same" padding, each followed by max-pooling. The classification part consists of two dense layers with a dropout layer in between. Softmax activation was applied to the last dense layer and ReLU to all other layers, and a categorical cross-entropy objective function was used. Parameters of the layers were initialized with uniform Glorot initialization [11] and optimized using Adam [12]. To handle class imbalance, in each mini-batch an equal number of patches from all cell types was fed to the network, and the number of iterations per epoch was determined by the number of patches in the most under-represented class, as sketched below. Moreover, runtime augmentation of flipping and zooming was applied with empirically optimized probabilities and scales.
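The following is a minimal sketch of the class-balanced mini-batch scheme described above; the generator name and sampling details are our own assumptions.

```python
import numpy as np

def balanced_batches(patches_by_class, batch_size, rng=None):
    """Class-balanced mini-batch generator: every batch draws an
    (approximately) equal number of patches from each cell type, so
    under-represented classes are not swamped by the majority ones."""
    rng = rng or np.random.default_rng()
    classes = sorted(patches_by_class)
    per_class = max(1, batch_size // len(classes))
    while True:
        xs, ys = [], []
        for label, cls in enumerate(classes):
            pool = patches_by_class[cls]
            idx = rng.choice(len(pool), per_class)  # with replacement
            xs.extend(pool[i] for i in idx)
            ys.extend([label] * per_class)
        yield np.stack(xs), np.array(ys)
```

Such an infinite generator can be passed directly to Keras' fit, with steps_per_epoch derived from the size of the smallest class.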

4 Results and Discussion

The proposed deep learning based unified cell detection and classification pipeline was evaluated on mIHC whole-tumor slide images. The approach was implemented in Python, using the Keras API [15] to develop the deep learning pipeline.

To investigate whether a CNN can regress the number of cells from an input image, the proposed cell counter model was trained and then evaluated on pseudo-segmentation images of the test patches before being integrated into ConCORDe-Net. A high Pearson correlation was obtained between the true and predicted numbers of cells, which supports using the proposed network as a cell count approximation function.

Quantitatively, we evaluated ConCORDe-Net using standard metrics: precision, recall, and F1-score. A detection was considered a true positive if it lay within a threshold Euclidean distance (defined as a multiple of the radius r in Equation (1)) of a ground truth annotation.
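For reference, this matching could be implemented as in the following sketch; the greedy one-to-one assignment and the max_dist parameter are our own choices, as the exact matching protocol and distance value did not survive extraction.

```python
import numpy as np
from scipy.spatial.distance import cdist

def detection_scores(pred_pts, gt_pts, max_dist):
    """Greedy one-to-one matching of predicted and ground-truth cell
    centers: a prediction is a true positive if it lies within
    `max_dist` of an as-yet-unmatched annotation. Returns
    (precision, recall, F1)."""
    if len(pred_pts) == 0 or len(gt_pts) == 0:
        return 0.0, 0.0, 0.0
    d = cdist(np.asarray(pred_pts), np.asarray(gt_pts))
    matched_gt, tp = set(), 0
    for i in np.argsort(d.min(axis=1)):  # closest predictions first
        j = int(np.argmin(d[i]))
        if d[i, j] <= max_dist and j not in matched_gt:
            matched_gt.add(j)
            tp += 1
    precision = tp / len(pred_pts)
    recall = tp / len(gt_pts)
    f1 = 2 * precision * recall / (precision + recall + 1e-12)
    return precision, recall, f1
```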

Moreover, we compared ConCORDe-Net with the state-of-the-art methods MapDe [5] and U-Net [13], as shown in Table 2. The same data augmentation as explained in Section 3.3 was applied to all models in the table. U-Net [13] was trained to regress the pseudo-segmentation explained in Section 3.1. The output of the CNN models in Table 2 is a probability map that approximates the pseudo-segmentation. Cell centers were then regressed from the probability map as follows. Firstly, a global threshold maximizing F1-score was applied for each model to generate a binary image. Secondly, a morphological hole-filling operation was applied to remove holes created by thresholding. Finally, the center of every connected component was computed, which corresponds to the center of a cell.
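This post-processing can be sketched with SciPy as follows; the function name and threshold argument are our own.

```python
from scipy import ndimage

def centers_from_probability_map(prob_map, threshold):
    """Post-processing described above: threshold the probability map
    (threshold chosen to maximize F1-score), fill holes, and take the
    centroid of each connected component as a cell center."""
    binary = prob_map >= threshold
    binary = ndimage.binary_fill_holes(binary)
    labels, n = ndimage.label(binary)
    # One (row, col) centroid per connected component.
    return ndimage.center_of_mass(binary, labels, range(1, n + 1))
```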

Method Precision Recall F1-score
ConCORDe-Net 0.854 0.892 0.873
U-Net [13] + cell counter 0.872 0.837 0.854
ConCORDe-Net without cell counter 0.908 0.800 0.845
U-Net [13] 0.908 0.785 0.841
MapDe [5] 0.804 0.876 0.838
Table 2: Cell detection performance comparison. "ConCORDe-Net without cell counter" denotes ConCORDe-Net after the cell counter is removed; "U-Net [13] + cell counter" denotes the original U-Net [13] architecture with the cell counter CNN integrated.

ConCORDe-Net achieved the highest recall and F1-score compared to the state-of-the-art methods MapDe [5] and U-Net [13]. Moreover, for both ConCORDe-Net and U-Net [13], integrating the cell counter CNN improved the cell detection F1-score. For MapDe [5], we used the parameters specified in the paper; tuning the dimensions of its "mapping filter" might improve its result.

The precision of ConCORDe-Net was lower than that of the three other methods for the following reasons: 1) ConCORDe-Net identifies weakly stained cells that were missed by the other methods, and that could be missed by experts too. 2) It over-detects large cells when there is more than one intensity peak within a cell. We believe these limitations could be mitigated by training and validating on a larger cohort.

Figure 2: a) Cell count loss profile. ROC and AUC evaluation of b) classifier1 and c) classifier2 on test data, where s = strong, m = moderate, w = weak.

Performance of the proposed classifier models was quantitatively evaluated using ROC curves, AUC, accuracy, precision, recall, and F1-score on the test data shown in Table 1. ROC curves and AUC values for classifier1 are presented in Fig. 2b: an AUC greater than 0.99 was achieved for all cell types, with high overall accuracy, precision, recall, and F1-score computed on the original distribution of the data. Fig. 2c shows the ROC curves and AUC values for classifier2; high AUC values and overall accuracy were likewise obtained for all cell types. After cascading the two classifiers, an overall accuracy of 96.5% was achieved.

Figure 3: Illustrative examples of the proposed unified cell detection and classification on test data, and comparison with the state-of-the-art methods MapDe [5] and U-Net [13]. White, red, yellow, cyan, and dark green points represent CD8, GAL8+ pSTAT-, GAL8+ pSTAT+ strong, GAL8+ pSTAT+ moderate, and GAL8+ pSTAT+ weak cells, respectively. The red circles on the top left input images highlight cells that were missed by MapDe [5] and U-Net [13] but detected by ConCORDe-Net.

Fig. 3 shows a visual output of ConCORDe-Net followed by cell classification, and a comparison with MapDe [5] and U-Net [13], which use Dice overlap loss as their objective function. ConCORDe-Net is better at discerning touching cells with weak boundary gradients and weakly stained GAL8+ pSTAT- cells than MapDe [5] and U-Net [13]. By regularizing the objective function with the cell count, the network was able to learn patterns that separate closely located cells and identify weakly stained cells.

5 Conclusions

In this paper, we proposed a deep learning based unified cell detection and classification method for mIHC whole-tumor slide images of breast cancer. A cell count regularized CNN was employed for cell detection, followed by a multi-stage CNN to classify cells. The parameters of the cell detection architecture were learnt using a new objective function which jointly optimizes Dice overlap and cell count. An F1-score of 0.873 was achieved on test data, outperforming the state-of-the-art methods MapDe [5] and U-Net [13]. Our proposed approach is better at detecting closely located and weakly stained cells than MapDe [5] and U-Net [13]. Moreover, a classification accuracy of 96.5% was achieved. Our experiments show that incorporating problem-specific knowledge such as cell count improves the robustness of cell detection algorithms.

Acknowledgement

This project was funded by the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 766030.

References

  • [1] Yinyin Yuan. Spatial Heterogeneity in the Tumor Microenvironment. Cold Spring Harbor Perspectives in Medicine, 6(8):a026583, aug 2016.
  • [2] Sami Blom, Lassi Paavolainen, Dmitrii Bychkov, Riku Turkki, Petra Mäki-Teeri, Annabrita Hemmes, Katja Välimäki, Johan Lundin, Olli Kallioniemi, and Teijo Pellinen. Systems pathology by multiplexed immunohistochemistry and whole-slide digital image analysis. Scientific Reports, 7(1):15580, dec 2017.
  • [3] Jessica Kalra and Jennifer Baker. Multiplex Immunohistochemistry for Mapping the Tumor Microenvironment. pages 237–251. 2017.
  • [4] Korsuk Sirinukunwattana, Shan E Ahmed Raza, Yee-Wah Tsang, David R. J. Snead, Ian A. Cree, and Nasir M. Rajpoot. Locality Sensitive Deep Learning for Detection and Classification of Nuclei in Routine Colon Cancer Histology Images. IEEE Transactions on Medical Imaging, 35(5):1196–1206, may 2016.
  • [5] Shan E Ahmed Raza, Khalid AbdulJabbar, Mariam Jamal-Hanjani, Selvaraju Veeriah, John Le Quesne, Charles Swanton, and Yinyin Yuan. Deconvolving convolution neural network for cell detection. jun 2018.
  • [6] Guang Yang, Cora Sau, Wan Lai, Joseph Cichon, and Wei Li. Efficient and Robust Cell Detection: A Structured Regression Approach. 344(6188):1173–1178, 2018.
  • [7] Weidi Xie, J Alison Noble, and Andrew Zisserman. Microscopy Cell Counting with Fully Convolutional Regression Networks. MICCAI 1st Workshop on Deep Learning in Medical Image Analysis, 2015.
  • [8] Reza Moradi Rad, Parvaneh Saeedi, Jason Au, and Jon Havelock. Blastomere cell counting and centroid localization in microscopic images of human embryo. 2018 IEEE 20th International Workshop on Multimedia Signal Processing, MMSP 2018, pages 1–6, 2018.
  • [9] Joseph Paul Cohen, Geneviève Boucher, Craig A. Glastonbury, Henry Z. Lo, and Yoshua Bengio. Count-ception: Counting by Fully Convolutional Redundant Counting. In Proceedings of the 2017 IEEE International Conference on Computer Vision Workshops (ICCVW), pages 18–26, 2017.
  • [10] Yao Xue, Nilanjan Ray, Judith Hugh, and Gilbert Bigras. Cell Counting by Regression Using Convolutional Neural Network. In ECCV 2016 Workshops, LNCS 9913, pages 274–290, 2016.
  • [11] Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics (AISTATS), 2010.
  • [12] Diederik P. Kingma and Jimmy Ba. Adam: A Method for Stochastic Optimization. arXiv preprint arXiv:1412.6980, 2014.
  • [13] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Medical Image Computing and Computer-Assisted Intervention (MICCAI), 2015.
  • [14] Karen Simonyan and Andrew Zisserman. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv preprint arXiv:1409.1556, 2014.
  • [15] Francois Chollet et al. Keras, 2015.