Virtual staining for mitosis detection in Breast Histopathology

03/17/2020 · by Caner Mercan, et al.

We propose a virtual staining methodology based on Generative Adversarial Networks to map histopathology images of breast cancer tissue from H&E stain to PHH3 and vice versa. We use the resulting synthetic images to build Convolutional Neural Networks (CNNs) for automatic detection of mitotic figures, a strong prognostic biomarker used in routine breast cancer diagnosis and grading. We propose several scenarios in which CNNs trained with synthetically generated histopathology images perform on par with, or even better than, the same baseline model trained with real images. We discuss the potential of this application to scale the number of training samples without the need for manual annotations.

1 Introduction

Mitosis is one of the phases of the cell cycle of proliferating tumor cells. The number of mitotic figures (i.e., the mitotic count) is a strong prognostic biomarker and part of breast cancer grading, scored during routine histopathology diagnostics.

Detection of mitotic figures is subject to inter-observer variability among pathologists, resulting in low recall in slides stained with the standard Haematoxylin and Eosin (H&E) dyes [5, 15]. As a consequence, the performance of automatic detection algorithms is limited by this variability when manual annotations are used as the reference standard to build detection models. Furthermore, mitotic figures vary largely in shape across the stages of the mitosis process, and the acquisition process of histopathology images may introduce artifacts that alter their appearance.

The immunohistochemical marker Phosphohistone-H3 (PHH3) relies on an antibody that targets cells undergoing mitosis by coloring them brown [12, 13, 10], thus easing the detection task. However, PHH3 is an expensive additional procedure that is not routinely foreseen by breast cancer grading guidelines, which were originally designed for H&E stained tissue.

Figure 1: Generation of synthetic images from H&E to PHH3 (top to bottom) and vice versa (bottom to top).

In recent years, a substantial amount of work on the generation of synthetic images and image transformation across different modalities has been presented in the medical image analysis community [9]. Most of these solutions are based on deep convolutional neural networks, in particular Generative Adversarial Networks (GANs).

In this paper, we propose a framework based on GANs to generate synthetic PHH3 images from H&E stained breast images and vice versa (see Figure 1). We use this framework to explore the feasibility of building synthetic datasets of (i) histopathology images and (ii) feature maps for automatic detection of mitotic figures in breast cancer.

2 Related Work

GANs can be divided into two sub-categories depending on whether they need to be trained with aligned pairs of images, as in the case of Conditional GAN [8], or can be trained with unaligned image sets, as in the case of Cycle GAN [19]. Both types of setup have been applied in histopathology, e.g., for stain normalization [18, 1] and for mapping H&E stain to immunohistochemistry [17], which has also been achieved with autoencoders in an unsupervised setting [4].


Image mapping has been proven successful between various modalities in radiology, e.g., MR to CT [11, 16] and CT to PET [2]. Methods used to quantify the quality of the synthetic images produced by GANs include visual inspection and scoring by radiologists or pathologists, and carrying out a downstream task, such as classification or segmentation, on synthetic images and comparing the result with the performance of the same algorithms applied on real images.

In a previous study involving mitosis detection [14], PHH3 images were used to generate mitosis annotations. A set of breast slides was first stained with H&E and then re-stained with PHH3 through a process called double staining [3]. This yielded pairs of slides containing the same cell structures but stained with different markers. A simple CNN model was trained to detect mitotic figures (regions with brown color) in the PHH3 images. However, additional input from pathologists was required to point out false positives in the brown staining. Finally, detections in PHH3-stained slides were mapped to the corresponding H&E stained slides by whole-slide image registration. The annotations of mitotic figures hereby created on H&E slides were used to train a CNN to detect mitoses on H&E slides. However, this work relies on a double-staining procedure to use (1) PHH3 to create a reference standard and (2) corresponding H&E slides for building CNN models for mitosis detection that can be applied to routine diagnostics.

In this paper, we propose to use Conditional GAN and Cycle GAN setups to learn a mapping between PHH3 and H&E and vice versa. We use this framework both to generate synthetic H&E images (see Section 4.1) and to extract features (see Section 4.2), with the aim of building models for mitosis detection in H&E slides. We show that our approach is competitive with the existing baseline work [14] without the need for real H&E images, or for double staining and registration, to train a classifier on synthetic images. We also show that this approach outperforms the baseline when a mitosis classifier is trained on GAN feature maps.

3 Data Set

The data sets used in this study were collected from whole slide images (WSIs) from triple negative breast cancer (TNBC) patients, referred to as the TNBC data set in [14]. The requirement for ethical approval was waived by the institutional review board (case number 2015-1711) of the Radboud University Medical Center (Radboudumc). All patient material and data were treated according to the Code of Conduct for the Use of Data in Health Research [6] and the Code of Conduct for dealing responsibly with human tissue in the context of health research [7].

Data preparation involved staining with H&E, scanning, and subsequently destaining, restaining with PHH3, and scanning again. As a result, 18 pairs of H&E and PHH3 stained WSIs were obtained. Similar to [14], ground truth mitosis annotations (positive samples) were generated from the PHH3 stained WSIs by (1) generating candidates based on color deconvolution and (2) selecting mitoses using pre-trained CNN models. In order to create a dataset of non-mitotic cells (negative samples), we detected the dark cell bodies with the Determinant of Hessian blob detection algorithm in H&E and discarded from this set the cells at locations of PHH3-based positive samples.
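The negative-sample mining step can be sketched with a minimal Determinant-of-Hessian blob response computed with finite differences; the paper does not specify the implementation, scale, or threshold, so the values below are purely illustrative:

```python
import numpy as np

def doh_response(gray):
    """Determinant-of-Hessian blob response for a grayscale image in [0, 1].

    Dark cell bodies are turned into bright blobs first, so that the DoH
    response peaks at their centers. Second derivatives are approximated
    with finite differences (np.gradient applied twice).
    """
    darkness = 1.0 - gray                     # dark nuclei -> bright blobs
    gy, gx = np.gradient(darkness)
    gyy, gyx = np.gradient(gy)
    gxy, gxx = np.gradient(gx)
    return gxx * gyy - gxy * gyx              # determinant of the Hessian

def detect_dark_blobs(gray, threshold=1e-4):
    """Return (row, col) coordinates whose DoH response exceeds a threshold."""
    det = doh_response(gray)
    rows, cols = np.nonzero(det > threshold)
    return list(zip(rows.tolist(), cols.tolist()))
```

In practice, a library implementation with proper Gaussian scale-space filtering (e.g., a `blob_doh`-style detector) would be used instead of this bare finite-difference version.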

4 Methodology

We used Conditional GAN [8] and Cycle GAN [19] architectures for the generation of synthetic images and compared their performance. In both GAN frameworks, the Generator is a ResNet-style sequence of two 2-strided convolution layers, nine residual blocks, and two fractionally-strided convolution layers. We investigate the use of the synthetic images learned by the GANs, and we also extract GAN feature maps from the output of the last residual block of the Generator, which we call the deep GAN feature layer (see Figure 2).
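As a concrete illustration of this generator layout, the sketch below traces feature-map sizes through the described sequence. The 7x7 boundary convolutions and the 256-pixel input are assumptions borrowed from the standard Cycle GAN generator, not values stated in the paper:

```python
def generator_shape_trace(h, w, n_res_blocks=9):
    """Trace spatial sizes through a ResNet-style GAN generator:
    two 2-strided downsampling convs, n residual blocks, and two
    fractionally-strided (transposed) upsampling convs."""
    shapes = [("input", h, w)]
    shapes.append(("conv7x7", h, w))          # padded boundary conv, size kept
    for i in range(2):                        # two 2-strided convolutions
        h, w = h // 2, w // 2
        shapes.append((f"down{i + 1}", h, w))
    for i in range(n_res_blocks):             # residual blocks preserve size
        shapes.append((f"res{i + 1}", h, w))
    deep_feature = (h, w)                     # "deep GAN feature layer" output
    for i in range(2):                        # two fractionally-strided convs
        h, w = h * 2, w * 2
        shapes.append((f"up{i + 1}", h, w))
    shapes.append(("conv7x7_out", h, w))      # padded output conv, size kept
    return shapes, deep_feature
```

Under these assumptions, a 256x256 input is encoded to a 64x64 deep feature map before being decoded back to full resolution.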

(a) Learning from synthetic GAN images. (b) Learning from GAN features.
Figure 2: We propose two approaches for mitosis detection; using (a) synthetically generated H&E images from PHH3 and (b) GAN feature maps of real H&E images.

4.1 Learning from Synthetic GAN Images

The GANs are used to learn a mapping from PHH3 to H&E in order to generate synthetic H&E images. A CNN for patch classification is then trained using synthetic H&E patches of mitosis. The architecture of the CNN follows [14]: a fully convolutional sequence of six 3x3 convolutions, mixed with two strided convolutions and one final convolution that produces a single output value per input patch. The number of filters is also kept the same (see [14] for details). Finally, the mitosis detection performance of the trained classifier is assessed on real H&E images.
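The spatial reduction in such a fully convolutional classifier can be checked with the standard valid-convolution size formula. The 36-pixel input, the stride values, and the 5x5 final kernel below are illustrative assumptions; the exact sizes are given in [14]:

```python
def conv_out(n, k, s):
    """Output size of a valid convolution: floor((n - k) / s) + 1."""
    return (n - k) // s + 1

def classifier_output_size(n, layers):
    """Fold a list of (kernel, stride) pairs over an input of size n."""
    for k, s in layers:
        n = conv_out(n, k, s)
    return n

# Six 3x3 convolutions, two of them 2-strided, plus one final convolution
# that collapses the remaining map to a single value (sizes illustrative).
LAYERS = [(3, 1), (3, 1), (3, 2), (3, 1), (3, 1), (3, 2), (5, 1)]
```

With these assumed sizes, a 36x36 patch yields exactly one output value, which is the defining property of the patch classifier.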

We train Conditional GAN and Cycle GAN separately to investigate their mapping capabilities from PHH3 to H&E. The main difference in training these GANs is that Conditional GAN is trained on aligned pairs of H&E and PHH3 images, which requires sets of slides that are double stained and registered. Cycle GAN, on the other hand, is trained on separate sets of H&E and PHH3 images without any alignment; compared with previous work, this approach has the advantage of not requiring double staining or registration of H&E and PHH3 slides during the training of the classifiers.
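The distinction can be made concrete with the loss terms each setup optimizes. The toy numpy sketch below contrasts the paired supervised term of Conditional GAN with the cycle-consistency term of Cycle GAN; the adversarial terms are omitted for brevity, and the stand-in generators are plain functions:

```python
import numpy as np

def paired_l1_loss(G, x, y):
    """Conditional-GAN-style supervised term: requires aligned pairs (x, y),
    i.e. double-stained and registered slides."""
    return np.mean(np.abs(G(x) - y))

def cycle_consistency_loss(G, F, x, y):
    """Cycle GAN term: only unpaired sets {x} and {y} are needed, since each
    sample is compared against its own reconstruction F(G(x)) or G(F(y))."""
    return np.mean(np.abs(F(G(x)) - x)) + np.mean(np.abs(G(F(y)) - y))
```

The paired term needs a ground-truth target for every input, whereas the cycle term only needs the input itself, which is why Cycle GAN dispenses with alignment.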

4.2 Learning from GAN features

We also investigate the feature representation capabilities of GANs. We train a GAN to learn a mapping from H&E to PHH3 and extract feature maps from the deep GAN feature layer. A set of real H&E stained images can be fed to the trained GAN to output their feature representations. These features are then used to train a mitosis classification network. The architecture of this classifier is a fully convolutional sequence of seven convolutions, mixed with strided convolutions and one final convolution that produces a single output value per input GAN feature map. The spatial reduction is achieved in three strided steps, with the remainder in the last layer. Note that, because the inputs are real H&E images, the mitosis annotations are obtained from their PHH3 counterparts; therefore, it is necessary to double stain and register the slides before training the classification network. We speculate that a GAN that learns a mapping from H&E to PHH3 images may learn higher-level features that combine information from both domains.
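Extracting the deep GAN feature layer amounts to truncating the trained Generator's forward pass after the last residual block. A minimal sketch of that pattern follows, with stand-in numpy "layers"; the layer names and functions are hypothetical placeholders for the trained convolutions:

```python
import numpy as np

def forward_until(layers, x, stop_at):
    """Run named layer functions in order and return the activation of
    `stop_at` (here, the output of the last residual block, i.e. the deep
    GAN feature layer), discarding the decoder part of the Generator."""
    for name, fn in layers:
        x = fn(x)
        if name == stop_at:
            return x
    raise KeyError(f"layer {stop_at!r} not found")

# Stand-in Generator: in the real model these would be trained convolutions.
generator = [
    ("down", lambda a: a * 2.0),
    ("res9", lambda a: a + 1.0),   # deep GAN feature layer
    ("up",   lambda a: a - 5.0),   # decoder, skipped during extraction
]
```

In a deep learning framework the same effect is typically obtained with a forward hook or by splitting the model at the chosen layer.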

5 Experiments

In this section, we compare the performance of our approaches, learning from synthetic GAN images and learning from GAN features, to the baseline classifier from the previous work [14]. We trained Conditional and Cycle GANs to learn mappings both from H&E to PHH3 and from PHH3 to H&E. In our experiments, we used five of the 18 WSI pairs for the training of the GANs. As stated previously, Conditional GAN required aligned PHH3-H&E pairs while Cycle GAN did not; we employed Conditional GAN for the feature extraction method, which also relies on aligned pairs of H&E-PHH3 images. Patches from nine WSIs were used to train the mitosis classification networks, while patches from the remaining four slides of the TNBC data set were used to evaluate the performance of the methods.

5.1 Experimental Setup

Because the data set is heavily imbalanced in the ratio of mitotic to non-mitotic cell bodies, non-mitotic samples were taken at random until a desired balance of mitosis and non-mitosis was reached. In our experiments, we found that keeping a surplus of non-mitotic samples performed better, because it adds different variations of non-mitotic figures to the training set. We then evened the ratio by oversampling the mitoses (weighting the mitosis samples higher). While the training procedure of the classifiers involved this procedure to cope with the high imbalance, the test setup was kept imbalanced to mimic the real-world scenario, with the full set of dark cell bodies present in the data set.
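The reweighting scheme can be sketched as a weighted resampling of patch indices. The target fraction below is illustrative, since the exact ratios are elided here:

```python
import numpy as np

def balanced_sample(labels, target_pos_frac=0.5, n=10000, seed=0):
    """Draw n training indices so that the expected fraction of positive
    (mitosis) samples matches target_pos_frac, by weighting each class
    inversely to its frequency (i.e. oversampling the rare class)."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    pos = labels == 1
    w = np.where(pos,
                 target_pos_frac / pos.mean(),          # upweight mitoses
                 (1 - target_pos_frac) / (1 - pos.mean()))
    w = w / w.sum()                                     # normalize to probs
    return rng.choice(len(labels), size=n, p=w)
```

Equivalently, the same balance can be achieved by passing per-sample weights to the loss function instead of resampling.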

We applied data augmentation on the images with random vertical and horizontal flipping and rotations. We skipped color augmentation because the baseline work [14] showed that it did not provide any improvement when only the TNBC data set was used for training and testing. The type of data that the classification step of each of the three approaches requires as input is as follows.

  • Baseline method: The training and test setups of the mitosis classifier involved real H&E images, following the baseline work [14].

  • Learning from synthetic images: The classifier input was synthetic H&E images for training and real H&E images for testing. Synthetic images were produced either by Conditional GAN or by Cycle GAN. The labels used during training belonged to the PHH3 stained images from which the synthetic H&E images were generated. Therefore, no double staining or registration was required for classifier training.

  • GAN features scenario: The classifier of this approach was trained and tested on GAN feature maps of real H&E images from the deep GAN feature layer of the Generator. The class labels for the classifier required the PHH3 counterparts of the real H&E images; therefore, double staining and registration were required.
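The geometric augmentation described in Section 5.1 (flips and rotations, with color augmentation skipped) can be sketched as the eight dihedral variants of a square patch; restricting rotations to right angles is an assumption here:

```python
import numpy as np

def dihedral_variants(patch):
    """Yield the 8 flip/rotation variants of a square patch:
    4 right-angle rotations, each with and without a horizontal flip."""
    for k in range(4):
        rotated = np.rot90(patch, k)
        yield rotated
        yield np.fliplr(rotated)
```

Right-angle rotations and axis flips are exact on the pixel grid, so they enlarge the training set eightfold without any interpolation artifacts.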

5.2 Evaluation Criteria

The aim of the classifiers is to predict an image as either mitosis or non-mitosis. The metric we use to evaluate the performance of the classifiers is the F1-score, which encapsulates the information of both the precision and recall metrics. The predicted label given to an image was mitosis if the probability predicted by the classifier was higher than a threshold value, and non-mitosis otherwise. We report the results for the range of threshold values used in our experiments.
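The threshold sweep can be sketched as follows, with a plain numpy implementation of precision, recall, and F1 over a range of thresholds:

```python
import numpy as np

def f1_curve(probs, labels, thresholds):
    """F1-score of thresholded predictions for each threshold value.

    probs: predicted mitosis probabilities; labels: 1 = mitosis, 0 = not.
    """
    scores = []
    for t in thresholds:
        pred = probs > t
        tp = np.sum(pred & (labels == 1))
        fp = np.sum(pred & (labels == 0))
        fn = np.sum(~pred & (labels == 1))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        scores.append(f1)
    return scores
```

Sweeping the threshold in this way produces the per-threshold curves reported in Figure 4.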

5.3 Qualitative Results

The quality of the synthetic images produced by the trained Conditional and Cycle GANs was inspected by comparing them to their aligned real counterparts, for both the H&E and PHH3 synthetic stains. Example outputs of this procedure are presented in Figure 3.

(a) PHH3 to H&E mapping. (b) H&E to PHH3 mapping.
Figure 3: Example synthetic images generated by Conditional GAN (center columns) and Cycle GAN (right columns) from reference images (left columns).

For the PHH3 to H&E mapping, both GANs were able to produce realistic looking synthetic H&E images. Mitotic cell bodies in synthetic H&E images produced by Cycle GAN generally appear larger and darker than in the real H&E images. Mitotic cell bodies in synthetic H&E images produced by Conditional GAN resemble their real counterparts and artifacts were generally replaced by realistic looking tissue structure.

For the H&E to PHH3 mapping, both GANs could produce realistic looking synthetic PHH3 images, but both frequently mispredicted the brown staining. Conditional GAN tended to overpredict brown staining, while Cycle GAN tended to underpredict it. If the GANs could predict the brown staining more reliably, then we would not need an additional classifier on top of the GANs and could use them directly for the classification task based on the presence of brown staining.

5.4 Quantitative Results

We compared the mitosis prediction performance of the baseline classifier, the classifiers learned from synthetic images, and the classifier trained on GAN feature maps over a number of training epochs. The F1-scores of the baseline method and of the proposed methods are presented in Figure 4. Note that the baseline score is not as high as the score reported in [14], because we wanted to keep the playing field level for all approaches and did not carry out additional training schemes such as hard negative mining. The F1-scores of the classifiers trained on the synthetic H&E images generated by Conditional GAN and Cycle GAN are in line with the qualitative results in Section 5.3. Finally, the F1-scores of the classifiers trained on the GAN features of real H&E images are also shown in Figure 4; this approach achieved the best results, a significant improvement over the baseline method. We conclude that the classifier trained on synthetic H&E images from the Conditional GAN setup achieved results competitive with the classifier trained on real H&E images, and that the classifier trained on the GAN features of real H&E images outperformed all other methods.

Figure 4: F1-scores on the test set for (a) the baseline classifier, the classifiers trained on synthetic H&E images from (b) Conditional GAN and (c) Cycle GAN, and (d) the classifier trained on GAN features. The scores are given as functions of the number of training epochs and of different threshold values on the classifier output.

6 Conclusion

In this paper, we presented several ways of performing mitosis classification using GANs as either synthetic image generators or feature extractors. We showed that when the synthetic H&E images from Conditional GAN (trained on the PHH3 to H&E transformation) were used as inputs to a CNN mitosis classifier, the network achieved the same performance as a classifier that requires real H&E images as inputs, which in turn requires the additional steps of double staining and registration. Additionally, we showed that a Conditional GAN (trained on the H&E to PHH3 transformation) encoded useful features that could be used to extract feature maps from real H&E images; this approach significantly outperformed the baseline method. The use of immunohistochemistry in this curiosity-driven proof-of-concept framework alleviates the need for manual annotations, as already shown in our previous work [14]. Furthermore, this work gives some indication of how a single staining could be used to build a detection model without the need for an expensive and time-consuming double-staining procedure. As a consequence, the number of training slides used to train CNNs could be increased efficiently, without additional costs other than staining.


  • [1] A. Bentaieb and G. Hamarneh (2018) Adversarial stain transfer for histopathology image analysis. IEEE Transactions on Medical Imaging 37(3), pp. 792–802. Cited by: §2.
  • [2] L. Bi, J. Kim, A. Kumar, D. Feng, and M. Fulham (2017) Synthesis of positron emission tomography (pet) images via multi-channel generative adversarial networks (gans). pp. 43–51. Cited by: §2.
  • [3] M. Brand, B. Hoevenaars, J. Sigmans, J. Meijer, P. Cleef, P. Groenen, K. Hebeda, and H. Krieken (2014-04) Sequential immunohistochemistry: a promising new tool for the pathology laboratory. Histopathology 65, pp. . External Links: Document Cited by: §2.
  • [4] W. Bulten and G. J. S. Litjens (2018) Unsupervised prostate cancer detection on h&e using convolutional adversarial autoencoders. ArXiv. External Links: arXiv:1808.05883 Cited by: §2.
  • [5] D. C Cireşan, A. Giusti, L. M Gambardella, and J. Schmidhuber (2013) Mitosis detection in breast cancer histology images with deep neural networks. Medical image computing and computer-assisted intervention 16(Pt 2), pp. 411–8. Cited by: §1.
  • [6] (2004) Foundation federation of dutch medical scientific societies (federa). code of conduct for medical research.. External Links: Link Cited by: §3.
  • [7] (2011) Foundation federation of dutch medical scientific societies (federa). human tissue and medical research: code of conduct for responsible use.. External Links: Link Cited by: §3.
  • [8] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros (2017) Image-to-image translation with conditional adversarial networks.

    International Conference on Computer Vision (ICCV)

    , pp. 5967–5976.
    Cited by: §2, §4.
  • [9] S. Kazeminia, C. Baur, A. Kuijper, B. van Ginneken, N. Navab, S. Albarqouni, and A. Mukhopadhyay (2018) GANs for medical image analysis. External Links: 1809.06222 Cited by: §1.
  • [10] C. M. Focke, K. Finsterbusch, T. Decker, and P. J. van Diest (2016) Performance of 4 immunohistochemical phosphohistone h3 antibodies for marking mitotic figures in breast cancer. Applied Immunohistochemistry & Molecular Morphology 26(1), pp. 20–26. Cited by: §1.
  • [11] D. Nie, R. Trullo, J. Lian, C. Petitjean, S. Ruan, Q. Wang, and D. Shen (2017) Medical image synthesis with context-aware generative adversarial networks. 65(12), pp. 417–425. Cited by: §2.
  • [12] T. Ribalta, I. E. McCutcheon, K. D. Aldape, J. M. Bruner, and G. N. Fuller (2004) The mitosis-specific antibody anti-phosphohistone-h3 (phh3) facilitates rapid reliable grading of meningiomas according to who 2000 criteria.. The American Journal of Surgical Pathology 28(11), pp. 1532–6. Cited by: §1.
  • [13] I. Skaland, E. Janssen, E. Gudlaugsson, J. Klos, K. H. Kjellevold, H. Søiland, and J. Baak (2009) Validating the prognostic value of proliferation measured by phosphohistone h3 (pph3) in invasive lymph node-negative breast cancer patients less than 71 years of age. Breast cancer research and treatment 114(1), pp. 39–45. Cited by: §1.
  • [14] D. Tellez, M. Balkenhol, I. Otte-Höller, R. van de Loo, R. Vogels, P. Bult, C. Wauters, W. Vreuls, S. Mol, N. Karssemeijer, G. Litjens, J. van der Laak, and F. Ciompi (2018) Whole-slide mitosis detection in h&e breast histology using phh3 as a reference to train distilled stain-invariant convolutional networks. IEEE Transactions on Medical Imaging 37, pp. 2126–2136. Cited by: §2, §2, §3, §3, §4.1, 1st item, §5.1, §5.4, §5, §6.
  • [15] M. Veta, P. Diest, M. Jiwa, S. Al-Janabi, and J. Pluim (2016) Mitosis counting in breast cancer: object-level interobserver agreement and comparison to an automatic method. PLOS ONE 11(8), pp. e0161286. Cited by: §1.
  • [16] J.M. Wolterink, A.M. Dinkla, M.H.F. Savenije, C.A.T. v. d. B. P.R. Seevinck, and I. Išgum (2017) Deep mr to ct synthesis using unpaired data. 10557 LNCS. Cited by: §2.
  • [17] Z. Xu, C.F. Moro, B. Bozóky, and Q. Zhang (2019) GAN-based virtual re-staining: a promising solution for whole slide image analysis. ArXiv. External Links: arXiv:1901.04059 Cited by: §2.
  • [18] F. G. Zanjani, S. Zinger, B. E. Bejnordi, J. A. W. M. van der Laak, and P. H. N. de With (2018) Stain normalization of histopathology images using generative adversarial networks. pp. 573–577. Cited by: §2.
  • [19] J. Zhu, T. Park, P. Isola, and A. A. Efros (2017) Unpaired image-to-image translation using cycle-consistent adversarial networks. International Conference on Computer Vision (ICCV). Cited by: §2, §4.