A Comprehensive Review for Breast Histopathology Image Analysis Using Classical and Deep Neural Networks

03/27/2020
by   Xiaomin Zhou, et al.

Breast cancer is one of the most common and deadliest cancers among women. Since histopathological images contain sufficient phenotypic information, they play an indispensable role in the diagnosis and treatment of breast cancers. To improve the accuracy and objectivity of Breast Histopathological Image Analysis (BHIA), Artificial Neural Network (ANN) approaches are widely used in the segmentation and classification tasks of breast histopathological images. In this review, we present a comprehensive overview of the BHIA techniques based on ANNs. First of all, we categorize the BHIA systems into classical and deep neural networks for in-depth investigation. Then, the relevant studies based on BHIA systems are presented. After that, we analyze the existing models to discover the most suitable algorithms. Finally, publicly accessible datasets, along with their download links, are provided for the convenience of future researchers.


I Introduction

Breast cancer is the most commonly diagnosed cancer and the leading cause of cancer deaths among women [1]. According to the World Health Organization (WHO), 2.1 million women are diagnosed with breast cancer worldwide every year. In 2018, an estimated 627,000 women died of the disease, representing about  of all cancer deaths among women [2]. In the United States, it ranks first among the cancers women are expected to be diagnosed with in 2019, at a rate of up to  [3].

There are four types of breast tissue, i.e., normal, benign, in-situ carcinoma, and invasive carcinoma. Benign tissue refers to a minor change in the structure of the breast; it is not classified as cancer and, in most cases, is not harmful to health. In-situ carcinoma remains within the mammary duct-lobule system and does not affect other organs. If diagnosed in time, in-situ carcinoma can be cured. Invasive carcinoma, however, is a malignant tumor that tends to spread to other organs. There are many techniques for breast cancer detection, such as X-ray mammography [4], 3-D Ultrasound (US) [5], Computed Tomography (CT), Positron Emission Tomography (PET) [6], Magnetic Resonance Imaging (MRI) [7], and breast temperature measurement [8]. However, pathological diagnosis is often regarded as the "gold standard" [9]. For better observation and analysis, the removed tissue usually needs to be stained, and the Hematoxylin and Eosin (H&E) staining approach is the most common method. Hematoxylin dyes the nuclei a dark purple color, and eosin dyes other structures (cytoplasm, stroma, etc.) a pink color. FIGURE 1 shows the different types of breast tissue images stained with H&E.

FIGURE 1: H&E stained images of the different tissue types: (a) normal tissue, (b) benign abnormality, (c) in-situ carcinoma, and (d) invasive carcinoma. These images are from the BACH dataset [10].
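Since hematoxylin and eosin absorb the RGB channels in characteristic proportions, the two stains can be computationally separated. As a hedged illustration (not a method from the reviewed papers), the sketch below applies classical Ruifrok-Johnston color deconvolution, using the commonly published H&E stain vectors, to a synthetic patch:

```python
import numpy as np

# Standard H&E stain vectors (Ruifrok & Johnston), rows = stains in optical-density space.
STAINS = np.array([
    [0.650, 0.704, 0.286],   # hematoxylin (nuclei, dark purple)
    [0.072, 0.990, 0.105],   # eosin (cytoplasm/stroma, pink)
    [0.268, 0.570, 0.776],   # residual channel
])
STAINS /= np.linalg.norm(STAINS, axis=1, keepdims=True)

def separate_he(rgb):
    """Map an RGB image in [0, 255] to per-stain optical-density images."""
    od = -np.log10(np.maximum(rgb.astype(float), 1.0) / 255.0)  # optical density
    # od = concentrations @ STAINS, so solve for the stain concentrations.
    conc = od.reshape(-1, 3) @ np.linalg.inv(STAINS)
    return conc.reshape(rgb.shape)  # channels: hematoxylin, eosin, residual

patch = np.random.randint(0, 256, size=(8, 8, 3))
hed = separate_he(patch)
print(hed.shape)  # (8, 8, 3)
```

The hematoxylin channel of the output is what nuclei segmentation pipelines typically operate on.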

In histopathological research, tissue sections are examined under a microscope by a histopathologist to analyze the characteristics and properties of the tissue [11]. Traditionally, the sections are observed directly with the naked eye, and the visual information is analyzed manually based on prior medical knowledge. However, due to the complexity and diversity of histopathological images, this manual analysis is very time-consuming. At the same time, the objectivity of this manual process is unstable, as it depends greatly on the experience, workload, and even the mood of the histopathologist.

In recent years, Artificial Intelligence (AI) technology has developed rapidly, making important achievements in computer vision and image processing and analysis in particular. AI also shows potential advantages in histopathological analysis. AI-assisted diagnosis can take over tedious screening work and quickly extract diagnostically valuable information from massive amounts of data. Meanwhile, AI offers strong objective analysis in histopathological detection and can avoid the subjective differences caused by manual analysis. To some extent, misjudgments by pathologists can be reduced and working efficiency improved.

I-A General Development of Existing AI Analysis in Histopathology

AI is an umbrella term encompassing the techniques for a machine to mimic or go beyond human intelligence, mainly in cognitive capabilities [12]. As shown in FIGURE 2, the main areas of AI research include Machine Learning (ML), pattern recognition, natural language processing, etc. In particular, ML has made a significant contribution to the development of medicine.

FIGURE 2: The structure of ANN technology in the AI knowledge system.

ML is used in the pathological diagnosis of several cancer fields, e.g., cervical cancer, gastric cancer, colon cancer, lung cancer, and breast cancer. Its applications focus on benign/malignant diagnosis, disease grading, staining analysis, and early tumor screening. For example, the work of [13] proposes a weakly supervised multi-layer hidden conditional random field model to classify cervical histopathological images into well, moderately, and poorly differentiated stages. In the experiment, the proposed method is tested on six cervical IHC datasets and obtains an overall classification accuracy of , with the highest of the six being , showing the effectiveness and potential of the method. In the field of gastric cancer, the work of [14] proposes a deep learning based framework, namely GastricNet, for automatic gastric cancer identification. The experimental results show that this framework performs better than state-of-the-art networks like DenseNet and ResNet, and achieves an accuracy of  for slice-based classification. Colorectal cancer is a malignant tumor that starts as growths known as polyps, mainly in the inner lining of the colon or rectum. In the work of [15], an automated supervised technique that keeps the original image size is proposed for five-grade cancer classification via a 31-layer Deep Convolutional Neural Network (DCNN). The proposed model achieves a classification accuracy of  for two-class grading and  for five-class cancer grading. In lung cancer pathology, adenocarcinoma (LUAD) and squamous cell carcinoma (LUSC) are the most prevalent subtypes. The study of [16] trains a DCNN (Inception-V3) on Whole Slide Images (WSIs) obtained from The Cancer Genome Atlas to accurately and automatically classify them as LUAD, LUSC, or normal lung tissue, with an average Area Under the Curve (AUC) of 0.9.

It is worth noting that ANNs, as a branch of machine learning, play an important role in pathological diagnosis. The ANN family, including classical and deep neural networks, is a kind of mathematical or computational model that imitates the structure and function of biological neural networks. In recent years, ANNs have been widely used in Breast Histopathological Image Analysis (BHIA) for image segmentation, feature extraction, and classification. The development trend of BHIA using ANNs is shown in FIGURE 3.

FIGURE 3: The development trend of ANN methods for BHIA tasks. The horizontal direction shows the time. The vertical direction shows the cumulative number of related works in each year.

I-B Motivation of our review paper

As far as we know, there exist some survey papers that summarize work related to BHIA (e.g., the reviews in [9, 12, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34]). In the following, we go through the survey papers that are related to the BHIA work.

The survey of [9] reviews machine learning methods that are usually employed in histopathological image processing, such as segmentation, feature extraction, unsupervised learning, and supervised learning. More than 130 papers about histopathological image analysis are summarized, but only five are about BHIA with ANNs.

The survey of [12] focuses on the use of AI and deep learning in the diagnosis of breast pathology images, and on other recent developments in digital image analysis. From it, we only summarize the development of deep learning in breast pathological diagnosis from the application perspective. However, the article does not discuss the results obtained by each research method.

The survey of [30] summarizes current deep learning techniques in mammography and breast histology. From this article, we focus only on the deep learning techniques for breast histopathology images. Organized by task, namely nuclei analysis, tubular analysis, epithelial and stromal region analysis, mitotic activity analysis, and other tasks in the breast digital histopathology pipeline, 16 papers based on BHIA are summarized.

The survey of [31] summarizes the deep learning applications in breast cancer image analysis in Screen-Film Mammography (SFM), Digital Mammography (DM), US, MRI, and Digital Breast Tomosynthesis (DBT) imaging modes, respectively. Within it, six papers are found on the topic of our interest.

In [32], an overview of "recent trends in computer assisted diagnosis system for breast cancer diagnosis using histopathological images" with 106 related works is presented. This review organizes those works into four technical steps: image pre-processing, segmentation, feature extraction and selection, and classification. However, only 20 of the related works concern BHIA with ANNs.

The survey of [33] reviews the classification of breast cancer with deep learning. The authors cover five aspects, namely the datasets used, the medical imaging modalities exploited, image pre-processing techniques, types of DNNs, and the performance metrics used to construct and evaluate breast cancer classification models. The paper cites 49 studies, of which 27 are about histopathological images and the rest about mammograms. In the empirical evaluation, the analyses based on histopathological images and mammograms are not distinguished, and the covered period is only 2014-2018.

In our previous work [34], we propose a brief survey of breast histopathology image analysis using classical and deep neural networks, with more than 60 related works covering classical ANNs, deep ANNs, and methodology analysis.

Apart from our previous brief review of BHIA with ANN techniques, there is no dedicated survey that focuses on ANN approaches in this field. Hence, in order to clarify the BHIA work using ANN approaches in recent years, as of early 2020 and building on our previous work [34], we summarize more than 150 related works in this comprehensive review. The paper is structured as follows: in Sec. II, the BHIA work using classical ANN methods is introduced; in Sec. III, the state-of-the-art deep ANN methods are summarized; Sec. IV presents the method analysis; Sec. V concludes this paper and discusses future work.

II BHIA Using Classical ANNs

An overview of the BHIA work using classical ANN methods is compiled in this section, followed by an analysis and summary.

II-A Related Works

In this section, we divide related work into classification tasks and segmentation tasks according to the motivation. Then, we summarize the contributions, methods, and results of each paper.

Classification Tasks:

In [35], in order to evaluate two proposed texture features, third-party software (the LNKnet package) containing a neural network classifier is used. In the experiment, 536 samples are used for classifier training and 526 samples for testing. Finally, an accuracy of  is achieved.

In [36], Support Vector Machine (SVM), K-Nearest Neighbor (KNN) and Probabilistic Neural Network (PNN) classifiers are combined with signal-to-noise-ratio feature ranking, sequential forward selection-based feature selection, and principal component analysis feature extraction to distinguish benign and malignant breast tumors. The best overall accuracies for breast cancer diagnosis are achieved with the SVM classifier: an accuracy of  on dataset 1 (692 specimens of fine-needle aspirates of breast lumps) and  on dataset 2 (295 microarrays). Similarly, PNN achieves  and  overall accuracy on dataset 1 and dataset 2, respectively.
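The ranking / selection / PCA pipeline described above can be approximated with off-the-shelf tools. The sketch below is an assumption-laden stand-in, not the original system: synthetic features replace the cytology data, and univariate F-score ranking stands in for the signal-to-noise-ratio ranking:

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for a benign/malignant feature table.
X, y = make_classification(n_samples=200, n_features=30, n_informative=8,
                           random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("rank", SelectKBest(f_classif, k=15)),  # stand-in for SNR feature ranking
    ("pca", PCA(n_components=5)),            # feature extraction step
    ("svm", SVC(kernel="rbf")),              # final classifier
])
acc = cross_val_score(pipe, X, y, cv=5).mean()
print(round(acc, 2))
```

Swapping `SVC` for `KNeighborsClassifier` reproduces the KNN variant of the comparison; scikit-learn has no PNN, so that branch would need a separate implementation.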

In [37], four types of H&E stained breast histopathology images are classified using eight features and a three-layer feed-forward/back-propagation ANN classifier. In the experiment, 1808 training samples, 387 validation samples, and 387 test samples are used, and an overall accuracy of around  is achieved.

In [38, 39, 40], an automatic breast cancer classification scheme based on histopathological images is proposed. First, edge, texture and intensity features are extracted. Then, based on each of the extracted features, an ANN classifier is designed, respectively. Thirdly, an ensemble learning approach, namely “random subspace ensemble”, is used to select and aggregate these classifiers for even better classification performance. Finally, a classification accuracy of is obtained on a public image dataset.
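For intuition, a random subspace ensemble like the one in [38, 39, 40] can be sketched by training several small ANN classifiers, each on a random subset of the features, and aggregating their votes. The data and hyperparameters below are placeholders, not those of the original papers:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for edge/texture/intensity feature vectors.
X, y = make_classification(n_samples=300, n_features=40, n_informative=10,
                           random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# Random subspace ensemble: each small ANN sees a random half of the
# features; the samples themselves are NOT resampled (bootstrap=False).
ensemble = BaggingClassifier(
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0),
    n_estimators=7, max_features=0.5, bootstrap=False, random_state=0,
)
ensemble.fit(Xtr, ytr)
print(round(ensemble.score(Xte, yte), 2))
```

The feature subsampling decorrelates the base classifiers, which is what makes aggregating them pay off.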

In [41], in order to classify low magnification () breast cancer histopathology images (H&E stained) into three malignancy grades, 30 texture features are extracted first. Then, feature selection is applied to find more effective information from the extracted features. Thirdly, a PNN classifier is built up based on the selected features. Lastly, 65 images are tested in the experiment, and an overall accuracy around is obtained.

In [42], morphological features are extracted to classify cancerous and non-cancerous cells in histopathological images, and 70 histopathological images are randomly selected as the dataset. In the experiment, a multi-layer perceptron based on a feed-forward artificial neural network model achieves  accuracy,  sensitivity and  AUC, respectively.

Segmentation Tasks:

In [43], a competitive neural network is applied as a clustering based method to segment breast cancer regions from needle biopsy microscopic images. In this work, 21 shape, texture and topological features are extracted. Then, the network is used to cluster the images into different regions based on these features. In the experiment, a dataset with over 500 images is tested, and an overall accuracy of around is achieved.

In [44], a supervised segmentation scheme using a multilayer neural network and a color active contour model to detect breast cancer nuclei is proposed. In this work, 24 images are used to test the method, and an average accuracy of  is finally achieved. The flow chart is shown in FIGURE 4.

FIGURE 4: Flow chart of the proposed segmentation scheme for cancer nuclei detection in [44]. The yellow contour represents the outline of the identified positive nucleus, while the white contour represents the outline of the identified negative nucleus. This figure corresponds to Fig.2 in the original paper.

II-B Summary

According to the review above, the ANNs used in the BHIA field up to around 2012 are classical neural networks. Classical neural networks perform remarkably in various fields, but they also have limitations: they overfit more easily, train slowly, and their parameters can only be set by experience. Due to the low computational speed of computers at the time and the lack of sufficient data for training, it was impossible to extract effective ANN features from raw data. Therefore, most classical neural networks in the BHIA field are used as classifiers. Regarding feature selection, most research works use texture and morphological features for segmentation and classification. TABLE I summarizes the work of different teams using classical neural networks to analyze histopathological images of breast cancer. Further details of the method analysis are discussed in Sec. IV-A.

Aim | Detail | Year | Reference | Team | Data Information | ANN type | Evaluation
Classification | 3 | 2006 | [35] | S. Petushi, et al. | 24 slide images, H&E staining | Neural network | Acc =
Classification | 2 | 2010 | [36] | A. Osareh, et al. | Dataset 1: 692 specimens of fine-needle aspirates of breast lumps; Dataset 2: 295 microarrays | PNN | Acc =
Classification | 4 | 2011 | [37] | S. Singh, et al. | 1080 images, H&E staining (1080 for training, 387 for validation, 387 for test) | Feed-forward back-propagation neural network | Acc =
Classification | 3 | 2011, 2013, 2013 | [38, 39, 40] | Y. Zhang, et al. | 361 images, H&E staining | MLP | Acc =
Classification | 3 | 2013 | [41] | C. Loukas, et al. | 65 regions of interest, H&E staining (20 grade I, 20 grade II, 25 grade III) | PNN | Acc =
Classification | 2 | 2017 | [42] | K. K. Shukla, et al. | 70 images, H&E staining (35 non-cancerous and 35 cancerous) | MLP | Acc = , Sn = , AUC =
Segmentation | Nuclei | 2013 | [43] | M. Kowal, et al. | 500 cytological samples, H&E staining | Competitive neural network | Acc =
Segmentation | Nuclei | 2013 | [44] | A. Mouelhi, et al. | 24 microscopic images, IHC staining | MLP | Acc =
TABLE I: Histopathology and classical ANN based breast cancer image analysis. Multi-Layer Perceptron (MLP), Probabilistic Neural Network (PNN), Multi-layer Neural Network (MNN), Accuracy (Acc), Sensitivity (Sn). The second column, "Detail", shows the number of classes or the segmentation regions.

III BHIA Using Deep Neural Networks

In the analysis of breast histopathology images based on deep neural networks, some publicly available datasets are frequently applied. As shown in TABLE II, we provide detailed information and download links for the datasets mentioned in our review.

Datasets | Year | Staining | Detail | Magnification | Dataset size | Website
ICPR 2012 | 2012 | H&E | \ | \ | 50 images corresponding to 50 high-power fields in 5 different biopsy slides | Closed
IDC | 2014 | H&E | \ | \ | 277,524 patches from 162 IDC breast cancer histopathological slides (198,738 IDC negative, 78,786 IDC positive) | [45]
BreaKHis | 2015 | H&E | 4 | 40X, 100X, 200X, 400X | 7,909 histopathology images | [46]
Bioimaging 2015 breast histology classification challenge | 2015 | H&E | 4 | \ | 249 images for training, 20 images for testing, and an extended test set of 16 images | [47]
TUPAC 2016 | 2016 | H&E | \ | \ | 500 training and 321 testing breast cancer histopathology WSIs | [48]
Camelyon 2016 | 2016 | H&E | 2 | \ | 400 WSIs of lymph nodes | [49]
Camelyon 2017 | 2017 | H&E | \ | \ | 200 WSIs of lymph nodes | [50]
BACH | 2018 | H&E | 4 | \ | Part A: 400 microscopy images; Part B: 30 whole-slide images | [51]
TABLE II: Popular publicly available breast histopathology image datasets. The fourth column, "Detail", shows the number of classes.

III-A Related Works

In this section, we group related work according to the applied datasets. Then, we summarize the motivation, contribution, methods, and results of each paper in chronological order.

“BreaKHis” Tasks:

In 2015, the BreaKHis dataset was released in [52]. This dataset is composed of 7,909 histopathological images from 82 clinical breast cancer patients. All the breast cancer histopathological images are three-channel RGB micrographs with a size of 700 x 460 pixels. Since objective lenses of different magnifications are used to collect these images, the entire dataset comprises four different sub-datasets, namely 40X, 100X, 200X, and 400X. All of these sub-datasets are grouped into benign and malignant tumors. Based on this dataset, many related works have been carried out.
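BreaKHis results are commonly reported at the patient level rather than the image level. A minimal sketch of the patient-level recognition rate (average, over patients, of each patient's fraction of correctly classified images), with hypothetical patient IDs:

```python
import numpy as np

def patient_level_score(patient_ids, correct):
    """Average, over patients, of the per-patient fraction of correctly
    classified images (image-level accuracy would pool all images instead)."""
    ids = np.asarray(patient_ids)
    ok = np.asarray(correct, dtype=float)
    return float(np.mean([ok[ids == p].mean() for p in np.unique(ids)]))

ids = ["a", "a", "a", "a", "b", "b"]   # hypothetical patient labels
correct = [1, 1, 1, 0, 0, 1]           # 1 = image classified correctly
print(patient_level_score(ids, correct))  # (0.75 + 0.5) / 2 = 0.625
```

This weighting prevents patients with many images from dominating the reported score.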

Related Works of BreaKHis in 2016:

In [53], magnification-independent classification of breast cancer histopathological images by a Convolutional Neural Network (CNN) is proposed. The paper uses two different architectures: a single-task CNN is used to predict malignancy, while a multi-task CNN predicts both malignancy and the image magnification level simultaneously. FIGURE 5 shows the overall process of this work. Finally, the average recognition rate of the single-task CNN model in the benign/malignant classification task is . The average recognition rate of the multi-task CNN model is  in the benign/malignant classification task and  in the magnification estimation task.

FIGURE 5: Schematic presentation for classifying breast histology images in  [53]. This figure corresponds to Fig.3 in the original paper.
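The single-task/multi-task distinction above amounts to attaching one or two classification heads to a shared trunk. The toy network below is only a structural sketch; the layer sizes are invented and are not those of [53]:

```python
import torch
import torch.nn as nn

class MultiTaskCNN(nn.Module):
    """Shared convolutional trunk with two classification heads:
    one for benign/malignant, one for the magnification level."""
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.malignancy_head = nn.Linear(32, 2)     # benign / malignant
        self.magnification_head = nn.Linear(32, 4)  # 40X / 100X / 200X / 400X

    def forward(self, x):
        z = self.trunk(x)
        return self.malignancy_head(z), self.magnification_head(z)

model = MultiTaskCNN()
mal, mag = model(torch.randn(2, 3, 64, 64))
# Multi-task training simply sums the two cross-entropy losses.
loss = nn.functional.cross_entropy(mal, torch.tensor([0, 1])) \
     + nn.functional.cross_entropy(mag, torch.tensor([2, 3]))
print(mal.shape, mag.shape)
```

Dropping the magnification head recovers the single-task variant.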

Related Works of BreaKHis in 2017:

In [54, 55, 56], based on LeNet and AlexNet, deep ANN methods are used to classify breast histopathology images in the BreaKHis dataset. In the experiment, the dataset is divided into training () and testing () sets, and an overall accuracy around is obtained.

In [57], a transfer learning work is carried out, where an image is first represented by Fisher Vector (FV) encoding of local features extracted using a CNN model pre-trained on ImageNet. Then, a new adaptation layer is designed to fine-tune the whole deep learning structure. Finally, an accuracy of around  is achieved on testing images. Similarly, in [58], another transfer learning strategy is applied to the same task and achieves an overall accuracy of around .

In [59], a deep learning structure with a single convolutional layer is proposed for the classification task, obtaining an accuracy of . In contrast, in [60], a deep learning model with multi-layer CNNs is built and obtains an accuracy of up to . Furthermore, in [61], a CNN model, namely the "Class Structure-based Deep CNN" (CSDCNN), is proposed to represent the spatial information within a deep CNN.

In [62], a DCNN based whole-slide histopathology classifier is presented. First, the posterior estimate of each view at a specific magnification is obtained from a CNN at that magnification. Then the posterior estimates across random multi-magnification views are vote-filtered to provide a slide-level diagnosis. Finally, the experiment uses patient-level 5-fold cross-validation and achieves an average accuracy of , sensitivity of , specificity of  and F-score of .
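The vote-filtering idea of [62], thresholding per-view posteriors and taking a majority, can be sketched in a few lines; the posterior values below are made up for illustration:

```python
import numpy as np

def slide_level_vote(view_posteriors, threshold=0.5):
    """Aggregate per-view malignancy posteriors into one slide-level label
    by majority vote over thresholded views (a sum rule would average the
    posteriors instead of voting)."""
    votes = (np.asarray(view_posteriors) >= threshold).astype(int)
    return int(votes.sum() > len(votes) / 2)

# Hypothetical posteriors from CNNs applied to random multi-magnification views.
views = [0.91, 0.75, 0.42, 0.88, 0.67]
print(slide_level_vote(views))  # 4 of 5 views vote malignant -> 1
```

Voting discards each view's confidence, which makes the slide-level decision more robust to a single overconfident view.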

In [63], a new DCNN-based method for breast cancer histopathological image classification, called the BiCNN model, is proposed for two-class classification of pathological images. The BiCNN has greater depth, greater width and a more complex architecture, yet has few parameters and reliable performance. In the experiment, the average patient-level recognition rate achieved is .

Related Works of BreaKHis in 2018:

In [64], different ResNet structures are tested and compared for this task. The ResNet-V1-152 model obtains the best performance, with an overall accuracy of  after 3000 epochs. Similarly, in [65], the effectiveness of three well-recognized pre-trained transfer learning models (VGG-16, VGG-19, and ResNet-50) is compared on this task. In the experiment, VGG-16 with a logistic regression classifier obtains the best performance of  accuracy. Furthermore, in [66], Inception-V1, Inception-V2, and ResNet-V1-50 based transfer learning methods are compared, and ResNet-V1-50 obtains the highest accuracy of .

In [67], two DCNN models based on restricted Boltzmann machines and back propagation are proposed; using these two models, a best average accuracy of  is obtained. Furthermore, in [68], a combined deep learning structure based on CNN and Recurrent Neural Network (RNN) algorithms is introduced. In this work, unsupervised learning algorithms are first used to segment different tissues into different regions; then, based on the segmentation result, the proposed deep learning approach is applied to the final classification task. Lastly, an accuracy of  is achieved. In addition, in the work of [69], five DCNN models are built, considering handcrafted features and deep learning features jointly. In the experiment, the second model obtains the best performance of  accuracy.

In [70], a classification approach via deep active learning and confidence boosting is introduced, achieving an overall accuracy of around . Similarly, in [71], an in-house CNN model is implemented, which combines the advantages of machine-learned features and classical color features.

In [72], a DenseNet based CNN model is proposed for classification, including four dense blocks and three transition layers. In the experiment, an accuracy of  is achieved. Similarly, in [73], a ResNet based 152-layer deep learning model is built and achieves a correct classification rate of .

In [74], CNNs are directly compared with classification based on hand-crafted features for binary classification (benign and malignant) and multi-class classification (benign and malignant sub-classes) of breast cancer histological images. The results show that CNNs outperform the hand-crafted feature based classifier, with accuracies between  and  for the binary classification and between  and  for the multi-class classification.

In [75], a multiple instance learning framework for CNNs is proposed. A new pooling layer is introduced to gather the most informative features from the patches that make up the whole slide, without inter-patch overlap or global slide coverage. In the experiment, at the , , and  magnifications, the accuracies are , , and , respectively.
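A pooling layer in a multiple instance learning framework reduces many patch-level scores to one slide-level score. The sketch below shows three generic MIL pooling rules (max, mean, noisy-OR); it is illustrative only and is not the pooling layer proposed in [75]:

```python
import numpy as np

def mil_pool(patch_scores, kind="max"):
    """Pool patch-level malignancy scores into one bag (slide) score.
    'max' keeps the most suspicious patch; 'mean' averages; 'noisy_or'
    flags the bag if any patch is likely positive."""
    p = np.asarray(patch_scores, dtype=float)
    if kind == "max":
        return p.max()
    if kind == "mean":
        return p.mean()
    if kind == "noisy_or":
        return 1.0 - np.prod(1.0 - p)
    raise ValueError(kind)

patches = [0.05, 0.10, 0.92, 0.08]   # one strongly suspicious patch
print(round(mil_pool(patches, "max"), 2))       # 0.92
print(round(mil_pool(patches, "noisy_or"), 2))  # 0.94
```

Note how mean pooling would dilute the single suspicious patch, which is why MIL work favors max-like rules for detection tasks.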

In [76], a new model for automatic classification of breast cancer tissue in histological images by DCNN is proposed, which does not consider the magnification factor of the image. The experimental results on BreaKHis achieved an average accuracy of .

In [77], three dimensionality reduction strategies, including PCA, Gaussian Random Projection (GRP) and Correlation based Feature Selection (CBFS), are applied to CNN-based features to classify histological images of breast cancer. In the experiment, the BreaKHis dataset, the Epistroma dataset, and the multi-class Kather's dataset are tested. On the BreaKHis dataset, the accuracies at the , , and  magnifications are , , and , respectively. On the Epistroma dataset, the accuracy is . On the multi-class Kather's dataset, an accuracy of  is obtained.

Related Works of BreaKHis in 2019:

In [78], a novel framework based on the hybrid attention mechanism is proposed to classify breast cancer histopathology images. This framework could automatically find useful regions from raw images, and thus does not have to resize the raw images for the network to prevent information loss. At four different magnifications, the average accuracy is about while only of raw pixels are used.

In [79], a transfer learning and supervised classifier based prediction model for breast cancer is proposed. As can be seen from FIGURE 6, four pre-trained ConvNets are used for transfer learning to extract image features, and then PCA is applied to the feature vectors to reduce feature dimension. Finally, SVM, KNN, and Logistic regression are respectively used to classify images. In the experiment, the best results are obtained on . The ResNet-50 with SVM classifier has the maximum accuracy of and the best recall value of . On the other hand, with Inception ResNet-V2, SVM gives the highest precision of .

FIGURE 6: Overall structure of the proposed model in  [79]. This figure corresponds to Fig.4.6 in the original paper.

In [80], Inception-V3 and Inception-ResNet-V2 are trained using transfer learning techniques for binary and multi-class classification of breast cancer histopathological images. The results show that the Inception-ResNet-V2 network achieves the best results at magnification: In the binary classification task, the image level accuracy is , and in the multi-classification task, the image level accuracy is .

In [81], a breast cancer histopathology image classification network (BHCNet) is designed. BHCNet includes one plain convolutional layer, three SE-ResNet [82] blocks, and one fully connected layer. Each SE-ResNet block is stacked by small SE-ResNet modules, which is denoted as BHCNet-. In the results, the BHCNet-3 achieves the accuracy between and for the binary classification and the BHCNet-6 achieves the accuracy between and for the multi-class classification.

In [83], deep learning, transfer learning and Generative Adversarial Networks (GANs) are combined to improve the accuracy of breast cancer classification on a limited training dataset. First, fine-tuned VGG-16 and VGG-19 networks are used to extract features, which are sent to a CNN for classification. In addition, two GAN models, StyleGAN [84] and Pix2Pix [85], are applied to generate 4,800 and 2,912 synthetic images, respectively. In the experiment, the proposed method is evaluated on the BreaKHis dataset and on two datasets generated from BreaKHis by the GANs. The experiments show that the GAN images introduce considerable noise and hurt classification accuracy. Finally, the best result is obtained on the original BreaKHis dataset, with an accuracy of  in the binary classification.

In [86], a method for classifying breast cancer histopathologic images based on double transfer learning is proposed. The method has two steps. In the first step, as shown in FIGURE 7, in order to improve the quality of the dataset before training the classifier on BreaKHis, an SVM is trained to separate relevant and irrelevant histopathological images; it is then used as a filter to eliminate irrelevant images from the BreaKHis dataset. In the second step, as shown in FIGURE 8, another SVM is trained to classify benign and malignant images. Both steps use transfer learning (an Inception-V3 CNN pre-trained on the ImageNet dataset). The best classification accuracies are obtained by the Inception-V3 + filter method (Inception-V3 extracts the features; filtering refers to the removal of irrelevant images), reaching  and  at the  and  magnifications, respectively.

FIGURE 7: The idea of building the filter in [86]. PFTAS - Parameter Free Threshold Adjacency Statistics (hand-crafted features), CRC - colorectal cancer dataset, TR - training set, VL - validation set. This figure corresponds to Fig.1 in the original paper.
FIGURE 8: An overview of building the classifier in [86]: Patching, feature extraction (PFTAS or Inception-v3 + PCA), filtering by SVM, patient-wise splitting of relevant patches into training (TR) and test (TS) using the pre-defined folds, patch classification and aggregation using majority vote or sum rule. This figure corresponds to Fig.5 in the original paper.

Related Works of BreaKHis in 2020:

In [87], a novel feature extraction method is proposed for the classification of breast histopathological images. First, the images are divided into small non-overlapping pieces. Then, pre-trained CNNs (10 models in total) are used for feature extraction. Finally, an SVM is applied as a classifier. In the experiment, the best patient-level accuracy is obtained by the AlexNet-SVM model; at the four magnifications, the accuracies are , , , and , respectively.

In [88], a ResHist model is designed, which is a residual learning-based 152-layer CNN for classifying breast cancer histopathological images. In the experiment, the histopathological images are first augmented, and the ResHist model is trained end-to-end on the augmented dataset in a supervised manner. The trained model then classifies images into benign and malignant categories. The results show that ResHist achieves a best accuracy of and an F1-score of . In addition, to study the discriminative ability of the deep features of the ResHist model, the extracted feature vectors are fed into KNN, random forest [89], quadratic discriminant analysis, and SVM classifiers. Among these, the best accuracy of is achieved when the deep features are fed into the SVM classifier.

“Camelyon” Tasks:

“Camelyon Grand Challenge” is a task to evaluate computational systems for the automated detection of metastatic breast cancer in WSIs of sentinel lymph node biopsies.

Related Works of Camelyon 2016:

In [90], a DCNN is built for this task and achieves an AUC of . In [91], a GoogLeNet-based deep learning method is introduced, in which 270 images are used for training and 130 for testing; an AUC of is obtained. With the same experimental setting, in [92], a recurrent visual attention model is proposed, which includes three primary components composed of dense or convolutional layers to describe the information flow between components within one time-step. Finally, an AUC of is achieved.
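Since AUC is the headline metric for the Camelyon 2016 systems above, it is worth recalling how it can be computed from raw classifier scores. A minimal sketch via the Mann-Whitney U statistic (pure Python; real evaluations use library implementations over slide-level scores):

```python
def auc_from_scores(pos_scores, neg_scores):
    """AUC as the probability that a randomly chosen positive example
    scores higher than a randomly chosen negative one (ties count 0.5).
    This is the Mann-Whitney U formulation of the ROC AUC."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Perfectly separated scores give an AUC of 1.0.
print(auc_from_scores([0.9, 0.8], [0.1, 0.2]))  # -> 1.0
```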

In [93], a fast and dense screening framework (ScanNet) for detecting metastatic breast cancer in WSIs is proposed. ScanNet is implemented on top of the VGG-16 network by replacing the last three fully connected layers with fully convolutional layers. As a result, a Free-Response Receiver Operating Characteristic (FROC) score of 0.8533 and an AUC of are obtained.

In [94], Multiple Magnification Feature Embedding (MMFE) is introduced, which is an approach using transfer learning to detect breast cancer from digital pathology images without network training. The main idea of the MMFE method is to simulate the daily diagnosis process of a medical doctor. First, a low-resolution image is observed to identify suspicious areas. Then, it is switched to a high-resolution image for further confirmation. Experiments show that this approach can greatly improve the training and prediction speed of the model without reducing the performance of the model.
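The coarse-to-fine idea behind MMFE can be sketched as a two-pass filter: a cheap low-magnification score screens all regions, and only the suspicious ones are re-examined with the expensive high-magnification score. The toy sketch below uses hypothetical stand-in scoring functions, not the actual MMFE models:

```python
def coarse_to_fine(regions, coarse_score, fine_score, threshold):
    """Two-pass screening in the spirit of a pathologist's workflow:
    a cheap low-magnification score filters regions, and only the
    suspicious ones are re-scored at high magnification."""
    suspicious = [r for r in regions if coarse_score(r) >= threshold]
    return {r: fine_score(r) for r in suspicious}

# Toy scores standing in for the low/high magnification models.
regions = ["a", "b", "c"]
coarse = {"a": 0.1, "b": 0.7, "c": 0.9}.get
fine = {"b": 0.4, "c": 0.95}.get
result = coarse_to_fine(regions, coarse, fine, threshold=0.5)
```

The speed-up reported in [94] comes precisely from never running the expensive pass on regions that the coarse pass discards.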

In [95], a summary of Camelyon 2016 shows that 25 of the 32 submitted algorithms are deep learning based methods, and the 19 top-performing algorithms are all DCNN approaches.

Related Works of Camelyon 2017:

In Camelyon 2017 [96], in order to detect four types of breast cancer in WSIs, a deep learning architecture with limited computational resources is proposed. In this work, two CNNs are applied in a cascade, followed by local maxima extraction and SVM classification of the local maxima regions. In the experiment, 300 images are used for training, 200 for validation, and 500 for testing, and an accuracy of is finally achieved.

“BACH” Tasks:

The Grand Challenge on BreAst Cancer Histology images (BACH) is co-organized with the 15th International Conference on Image Analysis and Recognition (ICIAR 2018) [10]. There are two goals in this challenge. Part A consists of automatically classifying H&E stained breast histology microscopy images into four classes: normal, benign, in-situ carcinoma, and invasive carcinoma. The data in Part A comprise 400 H&E stained breast histology images, all of equal dimensions ( pixels). Part B consists of performing pixel-wise labeling of WSIs into the same four classes as Part A. The data in Part B comprise 20 WSIs of very large size; each WSI can contain multiple normal, benign, in-situ carcinoma, and invasive carcinoma regions.

In order to classify four types of breast cancer histopathology images, an Inception-V3 based deep learning model is introduced in [97]. In the experiment, 300 images are used for training, and 100 are used for testing. Finally, an average accuracy of is achieved.

A two-stage CNN model is also proposed in [98], where the first stage performs pixel-level classification and the second stage performs image-level classification. In the experiment, an overall accuracy of around is obtained. Similarly, in [99], a two-stage classification approach is proposed: in the first stage, AlexNet-based feature extraction is applied; in the second stage, three different classifiers are compared. In the experiment, an SVM classifier achieves the best result ( accuracy). Similarly, in [100], AlexNet is also used as the basic model to build a hierarchical classification model, and an accuracy of is obtained.

In [101], a transfer learning-based approach for the classification of breast cancer histology images is presented. Inception-V3 and ResNet-50 CNNs, both pre-trained on the ImageNet database, are used. In the experiment, the Inception-V3 network achieves an average test accuracy of for four classes, marginally outperforming the ResNet-50 network, which achieves an average accuracy of .

In [102], an Inception ResNet-V2 is proposed to classify histological images of breast cancer through transfer learning, fine-tuning, and data augmentation. Out of 100 images in each class, 70, 20, and 10 images are randomly selected for training, testing, and validation. The final results show that the accuracy of the test set is and the loss is 0.59, while the accuracy of the validation set is and the loss is 0.23.

In [103], a deep learning approach for analyzing breast histology images at both the micro level (patch-based image classification) and the macro level (WSI segmentation) is proposed. The approach contains two networks, as shown in FIGURE 9, which share the architecture and weights, especially the encoder, to improve the utility of the trained network and the available dataset. In the results, accuracies of and are obtained for patch classification on the training and test datasets, respectively. For segmentation, overall scores of 0.7343 and 0.4945 are obtained on the training and test datasets, respectively.

FIGURE 9: Overall structure of the proposed model in  [103]. A classification network consists of an encoder and two processing layers. A segmentation network contains an encoder and decoder. This figure corresponds to Fig.1 in the original paper.

In [104], approaches for the classification of microscopy images as well as the segmentation of WSIs are presented. In both parts of the challenge, data preparation, scale selection, and augmentation are carried out first. In Part B of the challenge, additional data are added. Finally, network training is conducted: the DenseNet-161 architecture, pre-trained on ImageNet, is selected and repeatedly trained on the expanded training set. The results show that an accuracy of is achieved.

In [105], a method for the classification of breast cancer histopathology images based on deep learning is proposed. The effects of various preprocessing methods are compared, and the classification results of a CNN alone and a CNN combined with an SVM are also compared. Finally, an accuracy of is obtained on the Part A task.

In [106], a context-aware network for automated classification of breast cancer histopathological images is proposed. The method includes two main steps: first, the activation features of a trained ResNet are used to classify non-overlapping patches; then, an SVM classifier is trained to classify patches of overlapping blocks. Finally, majority voting is used for image-wise classification. As a result, an average accuracy of is obtained.

In [107], six different feature extractors are compared: Hand-crafted features, ResNet-18, ResNeXt, NASNet-A, ResNet-152 and VGG-16. The result shows that the pre-trained deep learning network on ImageNet has better performance than the popular hand-crafted features used for breast cancer histology images. Finally, the integration method based on random forest dissimilarity is used to combine hand-crafted features with five deep learning feature groups, and an average accuracy of is obtained.

In [108], a deep learning framework for multi-class breast cancer image classification is presented. The framework of the approach is illustrated in FIGURE 10. First, Inception-V3 is used for patch-wise classification. Then, the patch-wise predictions are passed through an ensemble fusion framework involving majority voting, a Gradient Boosting Machine (GBM), and logistic regression to obtain the image-wise prediction. Finally, an average accuracy of is obtained.

FIGURE 10: An overview of the proposed framework in  [108]. This figure corresponds to Fig.2 in the original paper.

In [109], a new hybrid convolutional and recurrent deep neural network for breast cancer histopathological image classification is proposed. First, a fine-tuned Inception-V3 is used to extract features from each image patch. Then, the feature vectors are fed into a 4-layer Bidirectional Long Short-Term Memory network (BLSTM) [110] for feature fusion. Finally, a complete image-wise classification is carried out. In the experiment, an average image-wise accuracy of is obtained. Notably, a new dataset containing 3,771 breast cancer histopathological images is published in this paper. It covers as many different subclasses, spanning different age groups, as possible. The dataset is publicly available at [111].

In [112], a patch-based classifier (PBC) using a CNN for automatic classification of breast histopathological images is proposed. The proposed classification system works in two different modes: one patch in one decision (OPOD) and all patches in one decision (APOD). The flowchart of the OPOD technique for patch classification is shown in FIGURE 11. OPOD predicts the class label of each patch extracted from the pre-processed histopathological images. As shown in FIGURE 12, APOD classifies an image by majority voting over the patch labels predicted by OPOD. On the test set, APOD achieves an accuracy of in 4-class classification and in 2-class classification.

FIGURE 11: Overall structure of the OPOD technique in [112]. It is worth noting that (g) shows patch label prediction by the trained patch-based classifier (PBC), where . This figure corresponds to Fig.7 in the original paper.
FIGURE 12: Overall structure of the APOD technique in [112]. (a) Patch labels of an image predicted by the OPOD technique. (b) Image label prediction based on patch-label majority voting by the proposed APOD technique. This figure corresponds to Fig.8 in the original paper.
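The APOD aggregation step described above is plain majority voting over OPOD patch predictions. A minimal sketch (the tie-breaking rule, first label to reach the top count, is our own assumption; the summary above does not specify one):

```python
from collections import Counter

def apod_majority_vote(patch_labels):
    """Image-level label as the most common patch-level (OPOD-style)
    prediction. Ties are broken by first occurrence, an arbitrary
    illustrative choice."""
    return Counter(patch_labels).most_common(1)[0][0]

labels = ["malignant", "benign", "malignant", "malignant"]
image_label = apod_majority_vote(labels)
```

The same aggregation appears in several other works in this section, e.g. the image-wise voting step of [106] and [108].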

In [113], a method for the diagnosis of breast cancer histopathology images based on transfer learning and global pooling is proposed. Five DCNN architectures are used as feature extractors, namely Inception-V3, InceptionResNet-V2, Xception, VGG-16, and VGG-19. The experimental results show that the network structure based on the pre-trained Xception model is better than all other DCNN structures in average classification accuracy, reaching .

“ICPR 2012” Tasks:

In the 2012 International Conference on Pattern Recognition (ICPR), a “mitotic figure recognition contest” is released. The dataset is made up of 50 High Power Fields (HPFs) coming from 5 different slides, scanned by three different types of equipment at magnification. An HPF has a size of . These 50 HPFs contain a total of 326 mitotic cells in the images from the two scanners and 322 mitotic cells in those from the multispectral microscope [114].

In [115, 116], manually designed color, texture, and shape features are used jointly with features learned by a multi-layer CNN. This method obtains F1-scores of up to on the color scanners and on the multi-spectral scanner. Similarly, in the work of [117], handcrafted features and DCNN features are combined in an ensemble learning process, and an F1-score of is obtained.

In [118], in order to detect mitoses in breast histology images, a deep max-pooling CNN is built, which is trained to classify each pixel of the image into a labeled region. In the experiment, 26 images are used for training, 9 for validation, and 15 for testing. Finally, an F1-score of is achieved. Furthermore, a similar method is used in the work of [119], and an F1-score of is obtained.

In [120], a novel deep cascaded convolutional neural network (CasCNN) is designed to detect mitoses. CasCNN consists of two parts. First, a coarse retrieval model based on a fully convolutional network identifies and locates mitosis candidates while maintaining high sensitivity. Then, a fine discrimination model based on cross-domain knowledge transfer further singles out true mitoses from the candidates. In the experiment, both the ICPR12 and ICPR14 datasets are used. On the ICPR12 dataset, a precision of , a recall of , and an F1-score of are obtained. On the ICPR14 dataset, a precision of , a recall of , and an F1-score of are obtained.

In [114], a summary of the ICPR 2012 contest shows that 17 teams submitted their results, with the IDSIA team achieving the best performance. In the work of the IDSIA team, a CNN is trained on the ground-truth mitoses provided in the training dataset, and the CNN is then used to compute a map of mitosis probabilities over the whole image, achieving a recall of , an accuracy of , and an F-measure of .

“TUPAC16” Tasks:

The Tumor Proliferation Assessment Challenge 2016 (TUPAC16) is the first challenge to predict tumor proliferation scores from WSIs. It is organized in the context of the MICCAI 2016 conference in Athens, Greece, and its goal is to assess algorithms that predict tumor proliferation scores from WSIs. There are two tasks: Task 1 predicts the proliferation score based on mitosis counting, and Task 2 predicts the proliferation score based on molecular data. Participants can submit results for either or both tasks.

In [121], a transfer-learning-based DCNN algorithm is proposed for the segmentation and detection of mitoses in breast cancer histopathological images. The system uses two CNNs: a pre-trained CNN for segmentation and a hybrid CNN for mitosis classification. Finally, in the mitosis detection task, an F-measure of and an area under the precision-recall curve of are achieved.

In [122], a novel technique for mitosis detection by means of semantic segmentation, called SegMitos, is presented. At the same time, a novel concentric label and concentric loss are proposed, which allow a dense prediction model to be trained with weak annotations. The experiment proceeds as follows: first, the data are prepared and the concentric labels are produced; then the SegMitos model is trained; finally, the trained model is deployed on the test images of the mitosis datasets. Four datasets (i.e., the 2012 ICPR MITOSIS, MITOS-ATYPIA-14, AMIDA13, and TUPAC16 datasets) are used to validate the proposed method. On the TUPAC16 dataset, an F-score of is obtained.

In [123], a summary of TUPAC16 shows that 12 teams submitted results for the first task, and 6 teams for the second task. With the exception of one team, all teams use DCNNs as part of their processing pipelines. For the first task, the best-performing method achieves a quadratic-weighted Cohen’s kappa of κ = 0.567, CI [0.464, 0.671], between the predicted scores and the ground truth. For the second task, the predictions of the top method have a Spearman’s correlation coefficient of ρ = 0.617, CI [0.581, 0.651], with the ground truth.
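The quadratic-weighted Cohen's kappa used to rank TUPAC16 submissions can be computed directly from two lists of integer scores. A minimal sketch (real evaluations typically call a library implementation such as scikit-learn's cohen_kappa_score with quadratic weights):

```python
def quadratic_weighted_kappa(a, b, n_classes):
    """Quadratic-weighted Cohen's kappa between two integer ratings:
    kappa = 1 - sum(w_ij * O_ij) / sum(w_ij * E_ij), where
    w_ij = (i - j)^2 / (n_classes - 1)^2, O is the observed joint
    distribution and E the product of the marginals."""
    n = len(a)
    observed = [[0.0] * n_classes for _ in range(n_classes)]
    for i, j in zip(a, b):
        observed[i][j] += 1.0 / n
    pa = [sum(row) for row in observed]                       # marginal of rater a
    pb = [sum(observed[i][j] for i in range(n_classes))       # marginal of rater b
          for j in range(n_classes)]
    num = den = 0.0
    for i in range(n_classes):
        for j in range(n_classes):
            w = ((i - j) ** 2) / ((n_classes - 1) ** 2)
            num += w * observed[i][j]
            den += w * pa[i] * pb[j]
    return 1.0 - num / den

print(quadratic_weighted_kappa([0, 1, 2], [0, 1, 2], 3))  # perfect agreement -> 1.0
```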

“Bioimaging 2015 Breast Histology Classification Challenge” Tasks:

In 2015, the Bioimaging 2015 Breast Histology Classification Challenge dataset was released in [124]. The dataset is composed of high-resolution ( pixels), H&E stained breast cancer histology images. All images are digitized under the same acquisition conditions, with a magnification of and a pixel size of . The dataset contains four different types of breast cancer histological images, namely normal tissue, benign lesion, in-situ carcinoma, and invasive carcinoma. A total of 285 images are included: 249 for the training set and 36 for the test set. The test images are divided into two groups, an initial group (20 images of lower classification difficulty) and an extended group (16 more difficult images).

In [124], a DCNN model is introduced to classify the four breast cancer histopathology types in whole-slide images. In the experiment, 249 images are used for training and 20 for testing, and an accuracy of is obtained.

In [125], in order to classify different breast cancer types in H&E stained histopathology images, pre-trained ResNet-50 and ResNet-101 networks are applied with fine-tuning and a fusion strategy. In the experiment, the Bioimaging 2015 Breast Histology Classification Challenge dataset and the BACH dataset are tested. Finally, accuracies of and are obtained on the Bioimaging 2015 Breast Histology Classification Challenge dataset and the BACH dataset, respectively.

In [126], an extended version of the Bioimaging 2015 Breast Histology Classification Challenge dataset is used. This dataset is the same as the original dataset in terms of image type, acquisition conditions, and size, and contains 400 images in total. Pre-trained ResNet-50, Inception-V3, and VGG-16 networks are fused into a deep learning structure, achieving a mean accuracy of .

In [127], a deep learning approach for multi-class classification of breast histological images is proposed. The framework of the approach is illustrated in FIGURE 13. First, patches of two different sizes, capturing cell-level and tissue-level features, are extracted from the breast cancer histological images by a sliding-window mechanism. To address insufficient diagnostic information or label errors in some sampled patches, a patch-screening method based on a CNN and k-means is proposed to select more discriminative patches. Then, ResNet-50 is used as the feature extractor, and P-norm pooling is applied to the patch features to obtain the final image features. Finally, an SVM performs the image classification. The results show that an accuracy of is achieved on the initial test set.

FIGURE 13: A schematic illustration of the proposed framework in  [127]. This figure corresponds to Fig.3 in the original paper.
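The P-norm pooling step in [127] aggregates per-patch feature vectors into a single image-level vector. A minimal sketch under the common formulation f_k = (mean_i x_ik^p)^(1/p), which interpolates between average pooling (p = 1) and max pooling (large p); the exact formulation in the paper may differ, and non-negative features are assumed:

```python
def p_norm_pooling(patch_features, p):
    """Pool per-patch feature vectors into one image-level vector:
    f_k = (mean_i x_ik ** p) ** (1 / p).  p = 1 gives average pooling;
    large p approaches max pooling.  Features are assumed non-negative."""
    n = len(patch_features)
    dim = len(patch_features[0])
    pooled = []
    for k in range(dim):
        m = sum(f[k] ** p for f in patch_features) / n
        pooled.append(m ** (1.0 / p))
    return pooled

# With p = 1 this reduces to the per-dimension mean.
feats = [[1.0, 3.0], [3.0, 5.0]]
print(p_norm_pooling(feats, p=1))  # -> [2.0, 4.0]
```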

In [128], a breast cancer classification method using the Inception Recurrent Residual Convolutional Neural Network (IRRCNN) model is proposed. The IRRCNN approach is applied to breast cancer classification on two publicly available datasets, the Bioimaging 2015 Breast Histology Classification Challenge dataset and BreaKHis. On the Bioimaging 2015 Breast Histology Classification Challenge dataset, test accuracies of and are obtained for the binary and multi-class classifications, respectively. On the BreaKHis dataset, accuracies of (image-level) and (patient-level) are obtained for the binary classification, and accuracies of (image-level) and (patient-level) for the multi-class classification.

In [129], transfer learning based on AlexNet, GoogLeNet, and ResNet is used to classify the histopathological images of breast cancer. The results show that ResNet achieves the highest accuracies, reaching and at the patch and image levels, respectively.

“IDC” Tasks:

The Invasive Ductal Carcinoma (IDC) dataset is a publicly available dataset first introduced by Cruz-Roa et al. [130]. It contains digital breast cancer histopathology slides from 162 women with IDC. All slides are digitized with a whole-slide scanner at magnification. The dataset contains 277,524 patches of size pixels (198,738 IDC-negative and 78,786 IDC-positive).

In [130], a deep learning approach for automatic detection and visual analysis of IDC tissue regions in breast cancer WSIs is presented. The framework of the approach is illustrated in FIGURE 14. First, image patches are grid-sampled from all tissue-containing regions of the WSI. Then, a CNN is trained on the sampled patches to predict the probability that a patch belongs to IDC. Finally, a probability map is built over the WSI, highlighting patches whose IDC probability exceeds 0.29. In the experiment, the 162 original slides are divided into 3 subsets: 84 for training, 29 for validation, and 49 for testing. As a result, an F-measure of and an accuracy of are obtained for the automatic detection of IDC regions in WSIs.

FIGURE 14: Overall framework for automated detection of IDC in WSI using CNN in  [130]. This figure corresponds to Fig.1 in the original paper.
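The final thresholding step in [130], flagging patches whose IDC probability exceeds 0.29, is straightforward to sketch over a patch-level probability map (the function name is our own illustrative choice):

```python
def idc_mask(prob_map, threshold=0.29):
    """Binary mask of patches whose predicted IDC probability exceeds
    the threshold (0.29, as reported in the paper)."""
    return [[1 if p > threshold else 0 for p in row] for row in prob_map]

# A toy 2x2 patch-probability map.
probs = [[0.05, 0.40], [0.30, 0.10]]
mask = idc_mask(probs)
```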

In [131], a new CNN-based model for identifying IDC cells in histopathological slides is proposed. The model, which is derived from the Inception architecture, introduces a multi-level batch normalization module between each convolution step. In the experiment, 94,543 patches are used for training, 31,514 for validation, and 151,465 for testing. Finally, a balanced accuracy of and an F1-score of are obtained.

Other Tasks:

III-A1 Classification

In [132], a Principal Component Analysis Network (PCANet) is introduced to classify Ductal Carcinoma In-Situ (DCIS) and Usual Ductal Hyperplasia (UDH) images. In this work, a dataset with 20 DCIS and 31 UDH images is tested, and 10,000 patches are randomly sampled from the training set to learn the models. Finally, an accuracy of around is achieved.

In [133], a CNN-based classification method for WSIs of breast tissue is proposed. In this work, two CNNs are trained: CNN-I classifies the WSI into epithelium, stroma, and fat; CNN-II operates on the stromal regions output by CNN-I and classifies them as normal stroma or cancer-associated stroma. The dataset contains 646 H&E stained sections of breast tissue. In the experiment, 270 images are used for training, 80 for validation, and 296 for testing. Finally, an area under the Receiver Operating Characteristic (ROC) curve of 0.921 is obtained.

In [134], to distinguish four breast cancer types in histopathological images, a deep learning method with hierarchical loss and global pooling is introduced. In this work, VGG-16 and VGG-19 networks are applied as the basic deep learning structures, and a dataset with 400 images is tested. In the experiment, 280 images are used for training, 60 for validation, and 60 for testing. Finally, an average accuracy of around is obtained.

In [135], a method is proposed to classify five diagnostic breast cancer types in whole histopathological images. First, a saliency detector performs multi-scale localization of diagnostically relevant regions of interest. Then, a CNN classifies image patches into the five carcinoma types. Lastly, the saliency and classification maps are fused for the final categorization. In the experiment, 240 images are used to examine the effectiveness of the proposed method, and an accuracy of is finally achieved. A highlight of this work is that 45 pathologists took part in the final evaluation of the test images, obtaining an average accuracy of around ; hence, the performance of the proposed method is comparable to that of pathologists who practice breast pathology in their daily routines.

In [136], an image analysis method is developed that uses deep learning to classify tumor grade, ER status, PAM50 intrinsic subtype, histological subtype, and recurrence risk score (ROR-PT). In the experiment, 571 breast tumor samples are used for training and 288 for testing. Finally, the method distinguishes low-intermediate vs. high tumor grade ( accuracy), ER status ( accuracy), basal-like vs. non-basal-like ( accuracy), ductal vs. lobular ( accuracy), and high vs. low-medium ROR-PT score ( accuracy).

In [137], pre-trained CNN architectures (GoogLeNet, VGGNet, and ResNet) are used to extract features from the images, and these features are fed to a fully connected layer; average-pooling-based classification is then used to distinguish malignant from benign cells. Two breast microscopic image datasets are used: the first is a standard benchmark dataset [52], and the other is developed locally at LRH hospital in Peshawar, Pakistan. In the experiment, 6,000 images are used to train the architecture and 2,000 for testing. Finally, an average classification accuracy of is achieved.

In [138], the Human Epidermal growth factor Receptor 2 (HER2) Scoring Contest is introduced. HER2 is an important prognostic factor for breast cancer, so automatic HER2 scoring has great clinical significance. The paper reports that a total of 18 submissions from 14 teams are received for evaluation. Among the results of all submitted automated methods, 8 of the top 10 teams use CNN-based learning methods, showing that such methods play an important role in automatic HER2 scoring.

III-A2 Segmentation

In [139], a CNN-based model with three hidden layers is built to segment breast cancer cell nuclei in histopathological images. In this work, 58 H&E stained images are tested, and an overall accuracy of around is achieved in both the RGB and Lab color spaces.

In [140], a fast-scanning deep convolutional neural network (fCNN) is proposed for pixel-wise region segmentation. In this work, 92 images selected from 20 patients in The Cancer Genome Atlas (TCGA) breast cancer dataset are used: 75 for training and 17 for testing. In the experiment, it takes only 2.3 seconds to segment an image of size pixels. A mean precision of , a mean recall of , and a mean F1-score of are also achieved.

In [141], DCNN-based feature learning is presented to automatically segment or classify epithelial and stromal regions in histopathological images. In this work, a colorectal cancer dataset and a breast cancer dataset are used separately. The breast cancer dataset consists of 157 H&E stained images acquired from two independent cohorts: the Netherlands Cancer Institute (NKI: 106) and Vancouver General Hospital (VGH: 51). In the experiment, a superpixel-based scheme is used to over-segment each image into atomic regions. The atomic regions are resized to square images of fixed size and then fed into the DCNN for feature learning. Finally, an F-score of and an accuracy of are obtained on NKI and VGH, respectively.

In [142], an automatic nuclei segmentation technique using a DCNN is introduced. FIGURE 15 depicts a flowchart of the proposed framework. In the training stage, a DCNN model is trained and a nuclear shape library is built using a selection-based dictionary learning algorithm. In the testing stage, the CNN model is applied to the images to generate probability maps, and iterative region merging is performed to initialize the shape of each nucleus. The proposed nuclei segmentation algorithm then uses a local repulsive deformation model for shape deformation, and uses the shape priors of the sparse shape model for shape inference. Finally, an accuracy of is achieved on the breast cancer dataset.

FIGURE 15: The architecture of segmentation framework in [142]. This figure corresponds to Fig.2 in the original paper.

In [143], an automated nuclei segmentation method is proposed. FIGURE 16 shows the overall nuclei segmentation process, which can be divided into three main stages. First, a Sparse Reconstruction (SR) method roughly removes the background and highlights the nuclei in the pathological image. Then, gradient descent is used to train a deep cascade of multi-layer convolutional networks that effectively segments the nuclei from the background; at this stage, patches and their corresponding labels are randomly extracted from the pathological images and input to the training network. Finally, morphological operations and prior knowledge are introduced to improve the segmentation performance and reduce errors. In this work, a pixel-wise segmentation accuracy of and an F1-measure of are obtained.

FIGURE 16: An overview of the method proposed in  [143]. This figure corresponds to Fig.1 in the original paper.

In [144], a method for nuclei segmentation in histopathological images based on deep learning and mathematical morphology is proposed. In addition, an image dataset containing 33 images with 2,754 annotated cells is provided; this dataset can be obtained at [145]. In this work, a deep neural network is trained on a set of manually annotated images, and the posterior probability map is processed to achieve joint segmentation of the nuclei. Finally, an accuracy of , a recall of , a precision of , and an F1-score of are achieved.

In [146], an advanced supervised fully convolutional network method for nuclei segmentation in histopathological images is proposed. First, a histopathological image is normalized to a common color space. Then the complete image is split into overlapping small patches. The proposed nuclear boundary model detects the nuclei and their boundaries on each patch, and all predictions are seamlessly recombined. Finally, fast and parameter-free post-processing generates the nuclei segmentation results. In the experiment, a multi-organ H&E stained image dataset (MOD) [147], a breast cancer histopathology image dataset (BCD), and a breast cancer image dataset (BNS) [144] are used. An image of size pixels can be segmented in less than 5 seconds.

In [148], an automatic end-to-end framework using deep neural networks for tissue-level segmentation is proposed. In this work, a new dataset of WSIs with different breast cancer subtypes, consisting of 11 fully annotated WSIs, is tested. Finally, the results of U-Net, SegNet, FCN, and DeepLab are evaluated using pixel-wise metrics, with Dice Coefficient (DC) values of 0.86, 0.87, 0.86, and 0.86, respectively.
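The Dice Coefficient reported above for U-Net, SegNet, FCN, and DeepLab is computed from the predicted and ground-truth masks. A minimal sketch over flat binary masks:

```python
def dice_coefficient(mask_a, mask_b):
    """Dice Coefficient between two flat binary masks:
    DC = 2 * |A intersect B| / (|A| + |B|).
    Two empty masks are scored 1.0, a common convention."""
    inter = sum(a * b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / total if total else 1.0

print(dice_coefficient([1, 1, 0, 0], [1, 0, 1, 0]))  # -> 0.5
```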

III-A3 Detection

In [149], a deep learning strategy named “Stacked Sparse Auto-Encoder” (SSAE) is presented to detect nuclei in high-resolution breast cancer images.

In [150], a DCNN model is proposed to detect breast cancer metastasis in sentinel lymph nodes. In the experiment, 100 examples are used for training, 50 for validation, and 75 for testing. Finally, a sensitivity of is achieved.

In [151], a novel accurate and high-throughput method (HASHI) for automatic invasive breast cancer detection in WSIs is presented. The method is tested on three different cohorts involving 500 cases. Finally, results comparing dense sampling (6 million samples in 24 hours) with far fewer samples (2,000 samples in 1 minute) are obtained, and an average DC of is reached on the independent test dataset.

In [152], handcrafted features are combined with deep learning-based high-level features, which are fed directly into the first fully connected layer for mitosis detection. In this work, three datasets are used: the MITOS-ATYPIA-14, ICPR-2012, and AMIDA-13 datasets. Finally, a precision of , a recall of , and an F-score of are obtained.
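Precision, recall, and F-score, the standard mitosis detection metrics used throughout this section, follow directly from the raw detection counts. A minimal sketch:

```python
def precision_recall_f1(tp, fp, fn):
    """Detection metrics from counts of true positives, false
    positives, and false negatives (detected mitoses are the
    positives here).  F1 is the harmonic mean of P and R."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Toy counts, purely illustrative.
p, r, f = precision_recall_f1(tp=8, fp=2, fn=8)
```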

In [153], a DCNN architecture is introduced to detect mitoses in histopathological images of breast cancer cells. The MITOS-ATYPIA dataset is tested. The results show that deeper CNN architectures perform better on breast cancer image detection. With the 17-layer CNN architecture, an accuracy of , a TPR of , an FNR of , and a loss of are achieved on average.

Iii-B Summary

From the survey above, we can see that deep ANNs have been used increasingly in the field of BHIA since 2012, with CNN-based methods being dominant. The main reasons for this trend are as follows: (a) The emergence of high-performance GPU computing makes it possible to train networks with more layers. (b) More and more institutions have released breast histopathological image datasets, alleviating to a certain extent the lack of labeled public datasets; the large increase in training data reduces the risk of over-fitting. (c) Compared with traditional image classification methods, deep learning can automatically learn features from data, avoiding the complexity and limitations of hand-crafted feature design and extraction in traditional algorithms. (d) CNNs have been widely applied in natural language processing, object recognition, and image classification and recognition, laying a foundation for their application to histopathological images of breast cancer. The work of different teams on the analysis of breast histopathological images using deep neural networks is summarized in TABLE III. Further details of the method analysis are discussed in Sec. IV-B and Sec. IV-C.

Task Aim Detail Year Reference Team ANN type Evaluation
BreaKHis C 2, mul 2016 [53] N. Bayramoglu, et al. CNN The single-task CNN model: A-Acc = ,
The multi-task CNN model: A-Acc =
C 2 2017 [54, 55, 56] F. Spanhol, et al. LeNet and AlexNet Overall Acc =
C 2 2017 [57] Y. Song, et al. Transfer Learning based VGG-VD Acc =
C 2 2017 [58] W. Zhi, et al. Transfer Learning based VGGNet Acc =
C 2 2017 [59] E. Nejad, et al. CNN Acc =
C 2 2017 [60] Q. Li, et al. CNN Acc =
C 2 2017 [62] K. Das, et al. DCNN A-Acc = , Sn = , Sp = , F-score =
C 2 2017 [63] B. Wei, et al. DCNN Acc =
C 2 2018 [64] NH. Motlagh, et al. ResNet Acc =
C 2 2018 [65] R. Mehra, et al. Transfer Learning based VGG-16 obtains the best Acc =
VGG-16, VGG-19, and ResNet-50
C mul 2018 [66] M. Nawaz, et al. Inception-V1, Inception-V2 ResNet-V1-50 obtains the highest Acc = .
and ResNet-V1-50
C 2 2018 [67] A. Nahid, et al. DCNN The best A-Acc =
2018 [68] CNN and RNN Acc =
2018 [69] DCNN Acc =
C 2 2018 [70] B. Du, et al. CNN Acc =
C mul 2018 [72] M. Nawaz, et al. DenseNet based CNN Acc =
C 2 2018 [73] Z. Gandomkar, et al. ResNet Acc =
C 2, mul 2018 [74] D. Bardou, et al. CNN Binary classification: Acc = between and ,
Multi-class classification: Acc = between and
C 2 2018 [75] K. Das, et al. CNN : Acc = , : Acc = ,
: Acc = , : Acc =
C 2 2018 [76] Shallu, et al. DCNN A-Acc =
C 2, mul 2018 [77] S. Cascianelli, et al. CNN : Acc = , : Acc = ,
: Acc = , : Acc =
Epistroma dataset: Acc =
Multi-class Kather’s dataset: Acc =
C 2 2019 [78] B. Xu, et al. SA-Net A-Acc =
C 2 2019 [79] M.N.Q. Bhuiyan, et al. Transfer Learning based Acc= , R = , P =
ResNet-50, Inception-V2,
Inception ResNet-V2 and Xception
C 2, mul 2019 [80] J. Xie, et al. DCNN Binary classification: Acc = ,
Multi-class classification: Acc =
C 2, mul 2019 [81] Y. Jiang, et al. BHCNet Binary classification: Acc = between and
Multi-class classification: Acc = between and
C 2 2019 [83] M.B.H. Thuy, et al. Transfer Learning based Acc =
VGG-16 and VGG-19
C 2 2019 [86] J. de Matos, et al. Transfer Learning based Inception-v3 : Acc = , : Acc =
C 2 2020 [87] S. Saxena, et al. CNN : Acc = , : Acc = ,
: Acc = , : Acc =
C 2 2020 [88] M. Gour, et al. ResHist Acc = , F1-score =
Camelyon 2016 D 2017 [90] Y. Liu, et al. CNN AUC =
D 2016 [91] D. Wang, et al. GoogLeNet AUC =
D 2018 [92] A. BenTaieb, et al. Recurrent Visual Attention Model AUC =
D 2018 [93] H. Lin, et al. ScanNet FROC score = 0.8533, AUC =
Camelyon 2017 C 4 2017 [96] L. Chervony, et al. CNNs Acc =
BACH C 4 2018 [97] A. Golatkar, et al. Inception-v3 Acc =
C 4 2018 [98] K. Nazeri, et al. CNN Acc =
C 4 2018 [99] K. Kiambe, et al. AlexNet Acc =
C 4 2018 [100] N. Ranjan, et al. AlexNet Acc =
C 4 2018 [101] S. Vesal, et al. Transfer Learning based Inception-V3: A-Acc = ,
Inception-V3 and ResNet-50 ResNet-50: A-Acc=
C 4 2018 [102] C.A. Ferreira, et al. Inception ResNet-V2 Test: Acc = , loss = 0.59,
Validation: Acc = , loss = 0.23
C&S 2018 [103] Q.D. Vu, et al. Encoder and decoder Patch classification: Acc = (train), Acc = (test),
Segmentation: overall score = 0.7343 (train), overall score = 0.4945 (test)
C&S 2018 [104] M. Kohl, et al. Densenet-161 Acc =
C 4 2018 [105] Y. Wang, et al. CNN Acc =
C 4 2018 [106] R. Awan, et al. ResNet A-Acc =
C 4 2018 [107] H. Cao, et al. Transfer Learning based ResNet-18, A-Acc =
ResNeXt, NASNet-A, ResNet-152 and VGG-16
C 4 2018 [108] Y. S. Vang, et al. Inception-V3 A-Acc =
C 4 2019 [109] R. Yan, et al. Transfer Learning based Inception-V3 A-Acc =
C 4, 2 2019 [112] K. Roy, et al. CNN 4-class classification: Acc = ,
2-class classification: Acc =
C 2019 [113] S.H. Kassani, et al. Transfer Learning based Xception A-Acc =
ICPR 2012 D 2013 [114] L. Roux, et al. CNN R = , Acc = , F-measure =
D&C 2013 [118] D. Ciresan, et al. DCNN F1-score =
S 2014 [119] M. Veta, et al. DCNN F1-score =
C 2 2012 [115] C. Malon, et al. CNN Color scanners: F1-score = ,
C 2013 [116] Multi-spectral scanners: F1-score =
D 2014 [117] H. Wang, et al. DCNN F1-score =
D 2016 [120] H. Chen, et al. CasCNN ICPR12: P = , R = , F1-score = ,
ICPR14: P = , R = , F1-score =
TCUG16 S&D 2019 [121] N. Wahab, et al. Transfer Learning based DCNN F-measure =
D 2019 [122] C. Li, et al. Deep cascade CNN F-score =
D 2019 [123] M. Veta, et al. DCNNs Task 1: = 0.567, CI [0.464, 0.671] between the predicted scores and the ground truth.
Task 2: = 0.617, CI [0.581 0.651] with the ground truth.
C 4 2017 [124] T. Araujo, et al. DCNN Acc =
C 4 2018 [125] A. Mahbod, et al. ResNet-50 and ResNet-101 Acc = ,
Bioimaging BACH dataset: Acc =
2015 C 4, 2 2018 [126] A. Rakhlin, et al. Transfer Learning based 4-class classification task: Acc = ,
Breast ResNet-50, Inception-V3 and VGG-16 2-class classification task: Acc = , AUC = , Sn = , Sp =
Histology C 4 2019 [127] Y. Li, et al. CNN, ResNet-50 Acc =
Classification C 2, mul 2019 [128] M. Z. Alom, et al. IRRCNN Binary classification: Acc = ,
Challenge Multi-class classification: Acc =
C 4 2019 [129] H.M. Ahmad, et al. Transfer Learning based Patch level: the best Acc = ,
AlexNet, GoogleNet, and ResNet Image level: the best Acc =
IDC D 2014 [130] A. Cruz-Roa, et al. CNN F-measure = , Acc =
D 2019 [131] F. P. Romero, et al. CNN F1-score = , Acc =
Others-Class C 2014 [132] J. Wu, et al. PCANet Acc =
C 2 2017 [133] B. E. Bejnordi, et al. CNN ROC = 0.921
C 4 2018 [134] Z. Wang, et al. VGG-16 and VGG-19 A-Acc =
C mul 2018 [135] B. Gecer, et al. CNN Acc =
C mul 2018 [136] H. D. Couture, et al. CNN A-Acc =
C 2 2019 [137] S. Khan, et al. Transfer Learning based A-Acc =
GoogLeNet, VGGNet, and ResNet
Others-Seg S 2010 [139] B. Pang, et al. CNN Acc =
S 2015 [140] H. Su, et al. fCNN One image can be segmented in 2.3 seconds,
Mean: P = , R = , F1-score =
S 2016 [141] J. Xu, et al. DCNN NKI dataset: F-score = , Acc = ,
VGH dataset: F-score = , Acc =
S 2016 [142] F. Xing, et al. DCNN Acc =
S 2017 [143] X. Pan, et al. DCN F1-measure = , Acc =
S 2017 [144] P. Naylor, et al. Deep Learning Accuracy = , R = , P = , F1-score =
S 2018 [146] Y. Cui, et al. CNN One image can be segmented in less than 5 seconds
S 2019 [148] S. Mejbri, et al. Deep neural networks U-Net: DC = 0.86, SegNet: DC = 0.87, FCN: DC = 0.86, DeepLab: DC = 0.86
Others-Det D 2016 [150] G. Litjens, et al. DCNN Sn =
D 2018 [151] A. Cruz-Roa, et al. CNN A-DC =
D 2018 [152] M. Saha, et al. Deep learning P = , R = , F-score =
D 2019 [153] Z. Zainudin, et al. DCNN A-Acc =
TABLE III (continued): Summary of reviewed works of deep neural network methods for BHIA tasks.

Iv Methodology Analysis

This section provides a deeper analysis of the classical ANN and deep ANN methods, and examines the outstanding methods in the different tasks.

Iv-a Analysis of classical ANN methods

According to the survey of classical ANNs, the Multi-Layer Perceptron (MLP) and the Probabilistic Neural Network (PNN) are the most frequently used in the analysis of breast histopathological images. Since the datasets employed in each work differ, the methods cannot be compared directly across studies; instead, they are analyzed from the perspective of the neural networks themselves. The MLP is a feed-forward neural network that can solve linearly inseparable problems and can be trained to generalize accurately to new, unseen data [154]. However, its hidden layers are fully connected, which leads to a large number of training parameters; this makes it difficult to stack many layers to solve more complex problems. In addition, the MLP learns slowly and easily falls into local extrema. The papers involved are [38], [39], [40], [42]. The PNN is also a feed-forward neural network. Compared with the MLP, the PNN trains faster and is usually more accurate; its disadvantages are that it is slower at classifying new cases and requires additional storage space for the model. The papers involved are [36], [41].
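
The parameter growth caused by fully connected layers, noted above, is easy to make concrete. Below is a minimal NumPy sketch of a one-hidden-layer MLP; the layer sizes (64 inputs, 32 hidden units) are illustrative only and not taken from the cited works:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, W1, b1, W2, b2):
    """One-hidden-layer MLP: every input feeds every hidden unit."""
    h = np.tanh(x @ W1 + b1)            # fully connected hidden layer
    z = h @ W2 + b2                     # fully connected output layer
    return 1.0 / (1.0 + np.exp(-z))     # sigmoid score for 2 classes

# Illustrative sizes: 64 texture/morphology features, 32 hidden units.
n_in, n_hidden, n_out = 64, 32, 1
W1 = rng.normal(0, 0.1, (n_in, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_hidden, n_out)); b2 = np.zeros(n_out)

# "Fully connected" means the parameter count multiplies layer widths:
# 64*32 + 32 + 32*1 + 1 = 2113 parameters for this tiny network alone.
n_params = W1.size + b1.size + W2.size + b2.size
```

Even this small network has over two thousand parameters, which illustrates why stacking many fully connected layers quickly becomes impractical.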

Iv-B Analysis of Deep ANN methods

In deep ANNs, transfer learning strategies have been applied more frequently to the classification of breast histopathological images in the past four years. The papers involved are [57], [58], [65], [66], [79], [83], [86], [87], [101], [107], [109], [113], [121], [125], [126], [129], [137]. Transfer learning is a method for transferring knowledge acquired from one task to solve another [155]. There are two main approaches: (1) fine-tuning the parameters of a pre-trained network for the required task (e.g., [58], [66], [101], [121], [125], [129]); (2) using a pre-trained network as a feature extractor, and then training a new classifier on these features (e.g., [57], [65], [79], [83], [86], [87], [107], [109], [113], [126], [137]). In transfer learning, VGG16 [156], VGG19, and ResNet50 [157] are very popular pre-trained CNN models due to their deeper architectures [65]. The main reasons for the popularity of transfer learning are as follows. First, due to the inherent complexity and diversity of breast histopathological images, labeling them is difficult, and annotation by medical experts is very expensive; therefore, there are few publicly available large-scale labeled image datasets. Transfer learning can effectively overcome the problem of small datasets [158]. Second, in the classification of breast histopathology images, most of the pre-trained models come from the ImageNet Large Scale Visual Recognition Challenge [159]; they achieve stable performance on specific tasks and can be safely used for transfer learning in breast cancer classification. Finally, transfer learning helps to improve accuracy or reduce training time [160], which is an important reason for its popularity.
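
Approach (2), using a frozen pre-trained network as a feature extractor and training only a new classifier, can be illustrated schematically. In this NumPy sketch the "backbone" is a fixed random projection standing in for a real pre-trained CNN, and the two-class data are synthetic; both are assumptions made purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a frozen pre-trained backbone (e.g. the convolutional
# part of VGG16): a fixed mapping whose weights are never updated.
W_frozen = rng.normal(0, 0.05, (256, 32))

def extract_features(x):
    """Use the frozen network purely as a feature extractor."""
    return np.maximum(x @ W_frozen, 0.0)        # frozen weights + ReLU

def train_head(X, y, lr=0.5, steps=300):
    """Train only the new classifier head (logistic regression)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        z = np.clip(X @ w + b, -30, 30)
        p = 1.0 / (1.0 + np.exp(-z))            # sigmoid
        g = p - y                               # gradient of log-loss
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

# Toy stand-in for benign vs malignant samples: two Gaussian clusters.
X_raw = np.vstack([rng.normal(-1.0, 1.0, (40, 256)),
                   rng.normal(+1.0, 1.0, (40, 256))])
y = np.array([0] * 40 + [1] * 40)

feats = extract_features(X_raw)                 # backbone stays frozen
w, b = train_head(feats, y)                     # only the head learns
pred = 1.0 / (1.0 + np.exp(-np.clip(feats @ w + b, -30, 30))) > 0.5
acc = (pred == y).mean()
```

Only the small classifier head is trained, which is exactly why this strategy works with the small labeled datasets typical of histopathology; approach (1) would additionally update some or all of the backbone weights.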

Iv-C Analysis of the outstanding methods in each reviewed task

Among the reviewed tasks, several outstanding methods have been proposed. In the BreaKHis dataset task, the best results are obtained in [81], where a small SE-ResNet model combining the residual module with the Squeeze-and-Excitation block is designed, which effectively reduces the number of model training parameters. Furthermore, a new learning rate scheduler named the Gaussian error scheduler is proposed, which achieves excellent performance without complicated fine-tuning of the learning rate. In the Camelyon 2016 dataset task, the best results are obtained in [93]. To detect metastatic breast cancer in WSIs, a fast and dense scanning framework, referred to as ScanNet, is proposed. ScanNet is implemented on top of the VGG-16 network by converting the last three fully connected layers into fully convolutional layers. In the end, it achieves faster tumor localization and even surpasses human performance on the WSI classification task. In the ICPR 2012 dataset task, the best results are obtained in [120], where a novel deep cascaded convolutional neural network (CasCNN) is designed to detect mitosis; its advantage is that it significantly reduces detection time while achieving satisfactory accuracy. In the Bioimaging 2015 Breast Histology Classification Challenge dataset task, the best results are obtained in [128], where breast cancer classification with the Inception Recurrent Residual Convolutional Neural Network (IRRCNN) model is proposed. The IRRCNN [161, 162] is an improved hybrid DCNN architecture based on Inception [163], residual networks [157], and RCNN architectures [164]. Its main advantage over these is that better recognition performance can be achieved with the same number of network parameters or fewer.
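
The Squeeze-and-Excitation block at the heart of the SE-ResNet model in [81] follows a simple squeeze/excite/scale pattern [82]. The following NumPy sketch shows the data flow on a single feature map; the channel count and reduction ratio are illustrative, and a real implementation would learn `W1` and `W2` by back-propagation:

```python
import numpy as np

rng = np.random.default_rng(2)

def se_block(x, W1, W2):
    """Squeeze-and-Excitation on a (C, H, W) feature map.

    Squeeze: global average pooling per channel.
    Excitation: FC reduce (ratio r) + ReLU, FC expand + sigmoid gate.
    Scale: reweight each channel by its learned importance.
    """
    s = x.mean(axis=(1, 2))                  # squeeze -> (C,)
    e = np.maximum(s @ W1, 0.0)              # FC reduce + ReLU -> (C/r,)
    g = 1.0 / (1.0 + np.exp(-(e @ W2)))      # FC expand + sigmoid -> (C,)
    return x * g[:, None, None]              # channel-wise rescaling

C, H, W, r = 16, 8, 8, 4                     # illustrative sizes
W1 = rng.normal(0, 0.1, (C, C // r))
W2 = rng.normal(0, 0.1, (C // r, C))
x = rng.normal(size=(C, H, W))
y = se_block(x, W1, W2)
assert y.shape == x.shape                    # output shape is preserved
```

Because the block only rescales channels, it adds very few parameters (2·C²/r here), which is consistent with the parameter-efficiency argument made for the SE-ResNet model above.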

Iv-D The potential of the methods mentioned in this review in other fields

In addition, the deep ANN methods discussed in this review can be applied not only to breast histopathological image analysis, but also to other closely related microscopic image analysis fields, such as cervical histopathological analysis [165], [166], [167], cervical cytopathological analysis [168], [169], [170], stem cell analysis [171], [172], microbiological image analysis [173], [174], [175], sperm quality analysis [176], [177], [178], and rock microstructural analysis [179], [180]. Whether in terms of image pre-processing, feature extraction and selection, segmentation, and classification, or in terms of deep ANN model design and framework ideas, the deep ANN methods summarized in this review can bring a new perspective to research in these fields.

V Conclusion and Future Work

In this review, the methods for breast cancer histopathological image analysis based on artificial neural networks are comprehensively summarized and grouped into classical artificial neural network and deep neural network methods. When summarizing the deep neural network methods, the related works are grouped according to the datasets they use, and within each dataset the works are arranged in ascending chronological order. From the classical works reviewed in Sec. II and the subsequent analysis in Sec. II-A, it is found that the ANNs used in the BHIA field around 2012 are classical neural networks. In the analysis of breast cancer histopathological images, MLP and PNN are the most frequently applied classical ANNs, although they are only used as classifiers; for feature extraction, most studies rely on texture and morphological features. Among deep learning based methods, whose related works and corresponding analysis are discussed in Sec. III-A and Sec. III-B, deep learning technology, especially deep convolutional neural networks, has achieved excellent results in the classification and segmentation of breast histopathological images, which will help with the early detection, diagnosis, and treatment of breast cancer. According to the survey, CNN-based transfer learning methods are the most frequently used. However, as the works reviewed in Sec. IV-C show, improved and novel network architectures tend to perform better across different datasets.

In the future, there is still room for improvement. First, researchers can combine the characteristics of pathological images to develop new network models for analyzing breast cancer histopathological images. Second, there is still a lack of large-scale, comprehensive, and fully labeled WSI datasets; therefore, the establishment of large public datasets is of great value for future research. Third, the classification system of breast cancer is complex, with many subtypes under each lesion type, and studying patterns correlated with molecular subtype, treatment response, and prognosis to refine diagnosis in precision medicine remains a significant challenge [12]. Finally, GANs are currently used to generate datasets, but their advantages for microscopic image analysis have not yet been fully explored; this will be a research direction with great potential and value in the future.

Acknowledgements

We thank Miss Zixian Li and Mr. Guoxian Li for their important discussion. We also thank M.E. Dan Xue for her contribution in our previous work.

References

  • [1] F. Bray, J. Ferlay, I. Soerjomataram, and et al. Global cancer statistics 2018: Globocan estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA: A Cancer Journal for Clinicians, 68(6):394–424, 2018.
  • [2] World Health Organization, (2020). [Online]. Available: www.who.int/cancer/prevention/diagnosis-screening/breast-cancer/en/.
  • [3] R. L. Siegel, K. D. Miller, and A. Jemal. Cancer statistics, 2019. CA: A Cancer Journal for Clinicians, 69(1):7–34, 2019.
  • [4] M. Moghbel, C. Y. Ooi, N. Ismail, and et al. A review of breast boundary and pectoral muscle segmentation methods in computer-aided detection/diagnosis of breast mammography. Artificial Intelligence Review, pages 1–46, 2019.
  • [5] E. Kozegar, M. Soryani, H. Behnam, and et al. Computer aided detection in automated 3-d breast ultrasound images: a survey. Artificial Intelligence Review, pages 1–23, 2019.
  • [6] I. Domingues, G. Pereira, P. Martins, and et al. Using deep learning techniques in medical imaging: a systematic review of applications on ct and pet. Artificial Intelligence Review, pages 1–68, 2019.
  • [7] G. Murtaza, L. Shuib, A. W. A. Wahab, and et al. Deep learning-based breast cancer classification through medical imaging modalities: state of the art and research challenges. Artificial Intelligence Review, pages 1–66, 2019.
  • [8] M. Moghbel and S. Mashohor. A review of computer assisted detection/diagnosis (cad) in breast thermography for breast cancer detection. Artificial Intelligence Review, 39(4):305–313, 2013.
  • [9] J. de Matos and et al. Histopathologic image processing: A review. arXiv preprint arXiv:1904.07900, 2019.
  • [10] G. Aresta, T. Araújo, S. Kwok, and et al. Bach: Grand challenge on breast cancer histology images. Medical Image Analysis, 2019.
  • [11] J. Ramos-Vara. Principles and Methods of Immunohistochemistry. In J. Gautier, editor, Drug Safety Evaluation. Methods in Molecular Biology (Methods and Protocols, vol 691), pages 83–96. Springer: Humana Press, Germany, 2011.
  • [12] S. Robertson, H. Azizpour, K. Smith, and J. Hartman. Digital image analysis in breast pathology–from image processing techniques to artificial intelligence. Translational Research, 194:19–35, 2018.
  • [13] C. Li, H. Chen, L. Zhang, and et al. Cervical histopathology image classification using multilayer hidden conditional random fields and weakly supervised learning. IEEE Access, 7:90378–90397, 2019.
  • [14] Y. Li, X. Li, X. Xie, and L. Shen. Deep learning based gastric cancer identification. In Proc. of ISBI 2018, pages 182–185. IEEE, 2018.
  • [15] M. Dabass, R. Vig, and S. Vashisth. Five-grade cancer classification of colon histology images via deep learning. In ICCCS 2018, Taylor and Francis 2nd International Conference on Commuincation and Computing System, 2018.
  • [16] N. Coudray, P. S. Ocampo, T. Sakellaropoulos, and et al. Classification and mutation prediction from non–small cell lung cancer histopathology images using deep learning. Nature Medicine, 24(10):1559–1567, 2018.
  • [17] J. Gil, H. Wu, and B. Y. Wang. Image Analysis and Morphometry in the Diagnosis of Breast Cancer. Microscopy Research and Technique, 59(2):109–118, 2002.
  • [18] C. Demir and B. Yener. Automated Cancer Diagnosis Based on Histopathological Images: A Systematic Survey. Technical Report: Rensselaer Polytechnic Institute, Department of Computer, TR-05-09., 2005.
  • [19] M. Gurcan, L. Boucheron, A. Can, and et al. Histopathological Image Analysis: A Review. IEEE Reviews in Biomedical Engineering, 2:147–171, 2009.
  • [20] L. He, L. Long, S. Antani, and G. Thoma. Computer Assisted Diagnosis in Histopathology. In Z. Zhao, editor, Sequence and Genome Analysis: Methods and Applications, pages 271–287. iConcept Press, Hong Kong, 2010.
  • [21] L. He, L. Long, S. Antani, and G. Thoma. Histology Image Analysis for Carcinoma Detection and Grading. Computer Methods and Programs in Biomedicine, 107(3):538–556, 2012.
  • [22] H. Irshad, A. Veillard, L. Roux, and D. Racoceanu. Methods for Nuclei Detection, Segmentation, and Classification in Digital Histopathology: A Review – Current Status and Future Potential. IEEE Reviews in Biomedical Engineering, 7:97–114, 2014.
  • [23] M. Veta, J. Pluim, P. Diest, and M. Viergever. Breast Cancer Histopathology Image Analysis: A Review. IEEE Transactions on Biomedical Engineering, 61(5):1400–1411, 2014.
  • [24] S. Bhattacharjee, J. Mukherjee, S. Nag, and et al. Review on Histopathological Slide Analysis using Digital Microscopy. International Journal of Advanced Science and Technology, 62:65–96, 2014.
  • [25] J. Arevalo, A. Cruz-Roa, and F. Gonzelez. Histopathology Image Representation for Automatic Analysis: A State-of-the-art Review. Revista Med, 22(2):79–91, 2014.
  • [26] M. Aswathy and M. Jagannath. Detection of Breast Cancer on Digital Histopathology Images: Present Status and Future Possibilities. Informatics in Medicine Unlocked, 8:74–79, 2017.
  • [27] J. Chen, Y. Li, J. Xu, and et al. Computer-aided Prognosis on Breast Cancer with Hematoxylin and Eosin Histopathology Images: A Review. Tumor Biology, 39(3):1–12, 2017.
  • [28] D. Steiner, R. MacDonald, Y. Liu, and et al. Impact of Deep Learning Assistance on the Histopathologic Review of Lymph Nodes for Metastatic Breast Cancer. The American Journal of Surgical Pathology, 42(12):1636–1646, 2018.
  • [29] B. Acs and D. Rimm. Not Just Digital Pathology, Intelligent Digital Pathology. Journal of American Medical Association, 4(3):403–404, 2018.
  • [30] A. Hamidinekoo, E. Denton, A. Rampun, and et al. Deep learning in mammography and breast histology, an overview and future trends. Medical Image Analysis, 47:45–67, 2018.
  • [31] T. G. Debelee, F. Schwenker, A. Ibenthal, and D. Yohannes. Survey of deep learning in breast cancer image analysis. Evolving Systems, pages 1–21, 2019.
  • [32] C. Kaushal, S. Bhat, D. Koundal, and A. Singla. Recent trends in computer assisted diagnosis (cad) system for breast cancer diagnosis using histopathological images. IRBM, 2019.
  • [33] G. Murtaza, L. Shuib, A. W. A. Wahab, G. Mujtaba, and et al. Deep learning-based breast cancer classification through medical imaging modalities: state of the art and research challenges. Artificial Intelligence Review, pages 1–66, 2019.
  • [34] C. Li, D. Xue, Z. Hu, and et al. A survey for breast histopathology image analysis using classical and deep neural networks. In International Conference on Information Technologies in Biomedicine, pages 222–233. Springer, 2019.
  • [35] S. Petushi, P. Garcia, M. Haber, and et al. Large-scale Computations on Histology Images Reveal Grade-differentiating Parameters for Breast Cancer. BMC Medical Imaging, 6(14):1–11, 2006.
  • [36] A. Osareh and B. Shadgar. Machine learning techniques to diagnose breast cancer. In 2010 5th International Symposium on Health Informatics and Bioinformatics, pages 114–120. IEEE, 2010.
  • [37] S. Singh, P. Gupta, and M. Sharma. Breast Cancer Detection and Classification of Histopathological Images. International Journal of Engineering Science and Technology, 3(5):4228–4332, 2011.
  • [38] Y. Zhang, B. Zhang, and W. Lu. Breast Cancer Classification from Histological Images with Multiple Features and Random Subspace Classifier Ensemble. In Proc. of AIP 1371(1), pages 19–28, 2011.
  • [39] Y. Zhang, B. Zhang, F. Coenen, and W. Lu. Breast Cancer Diagnosis from Biopsy Images with Highly Reliable Random Subspace Classifier Ensembles. Machine Vision and Applications, 24(7):1405–1420, 2013.
  • [40] Y. Zhang, B. Zhang, and W. Lu. Breast Cancer Histological Image Classification with Multiple Features and Random Subspace Classifier Ensemble. In T. D. Pham and L. C. Jain, editors, Knowledge-based Systems in Biomedicine, SCI 450, pages 27–42. Springer, Germany, 2013.
  • [41] C. Loukas, S. Kostopoulos, A. Tanoglidi, and et al. Breast Cancer Characterization based on Image Classification of Tissue Sections Visualized under Low Magnification. Computational and Mathematical Methods in Medicine, 2013:1–8, 2013.
  • [42] K. K. Shukla, A. Tiwari, S. Sharma, and et al. Classification of histopathological images of breast cancerous and non cancerous cells based on morphological features. Biomedical and Pharmacology Journal, 10(1):353–366, 2017.
  • [43] M. Kowal, P. Filipczuk, A. Obuchowicz, and J. Korbicz. Computer-aided Diagnosis of Breast Cancer Based on Fine Needle Biopsy Microscopic Images. Computers in Biology and Medicine, 43(10):1563–1572, 2013.
  • [44] A. Mouelhi, M. Sayadi, and F. Fnaiech. A Supervised Segmentation Scheme Based on Multilayer Neural Network and Color Active Contour Model for Breast Cancer Nuclei Detection. In Proc. of ICEESA, pages 1–6, 2013.
  • [45] IDC: Invasive Ductal Carcinoma, (2014). [Online]. Available: http://www.andrewjanowczyk.com/use-case-6-invasive-ductal-carcinoma-idc-segmentation/.
  • [46] BreakHis: Breast Cancer Histopathological Database BreakHis, (2015). [Online]. Available: http://web.inf.ufpr.br/vri/databases/breast-cancer-histopathological-database-breakhis/.
  • [47] Bioimaging 2015 Breast Histology Classification Challenge, (2015). [Online]. Available: https://rdm.inesctec.pt/dataset/nis-2017-003.
  • [48] TUPAC: The Tumor Proliferation Assessment Challenge 2016, (2016). [Online]. Available: http://tupac.tue-image.nl/.
  • [49] Camelyon 2016: Camelyon Grand Challenge 2016, (2016). [Online]. Available: https://camelyon16.grand-challenge.org/Data/.
  • [50] Camelyon 2017: Camelyon Grand Challenge 2017, (2017). [Online]. Available: https://camelyon17.grand-challenge.org/.
  • [51] BACH: The Grand Challenge on BreAst Cancer Histology images, (2018). [Online]. Available: https://iciar2018-challenge.grand-challenge.org/.
  • [52] F. Spanhol, L. Oliveira, C. Petitjean, and L. Heutte. A Dataset for Breast Cancer Histopathological Image Classification. IEEE Transactions on Biomedical Engineering, 63(7):1455–1462, 2015.
  • [53] Neslihan Bayramoglu, Juho Kannala, and Janne Heikkilä. Deep learning for magnification independent breast cancer histopathology image classification. In Proc. of ICPR, pages 2440–2445. IEEE, 2016.
  • [54] F. Spanhol, L. Oliveira, C. Petitjean, and L. Heutte. Breast Cancer Histopathological Image Classification Using Convolutional Neural Networks. In Proc. of IJCNN, page Online, 2016.
  • [55] F. Spanhol, P. Cavalin, L. Oliveira, and et al. Deep Features for Breast Cancer Histopathological Image Classification. In Proc. of SMC, pages 1868–1873, 2017.
  • [56] F. Spanhol. Automatic Breast Cancer Classification from Histopathological Images: A Hybrid Approach. PhD Thesis: Federal University of Parana, Brazil, 2018.
  • [57] Y. Song, J. Zou, H. Chang, and W. Cai. Adapting Fisher Vectors for Histopathology Image Classification. In Proc. of ISBI 2017, pages 600–603, 2017.
  • [58] W. Zhi, H. Yueng, Z. Chen, and et al. Using Transfer Learning with Convolutional Neural Networks to Diagnose Breast Cancer from Histopathological Images. In Proc. of ICONIP 2017, pages 669–676, 2017.
  • [59] E. Nejad, L. Affendey, R. Latip, and I. Ishak. Classification of Histopathology Images of Breast into Benign and Malignant using a Single-layer Convolutional Neural Network. In Proc. of ICISPC 2017, pages 50–53, 2017.
  • [60] Q. Li and W. Li. Using Deep Learning for Breast Cancer Diagnosis. Technical Report: Chinese University of Hong Kong, China, 2017.
  • [61] Z. Han, B. Wei, Y. Zheng, and et al. Breast Cancer Multi-classification from Histopathological Images with Structured Deep Learning Model. Scientific Reports, 7(4172):1–10, 2017.
  • [62] K. Das, S. P. K. Karri, A. G. Roy, and et al. Classifying histopathology whole-slides using fusion of decisions from deep convolutional network on a collection of random multi-views at multi-magnification. In Proc. of ISBI 2017, pages 1024–1027. IEEE, 2017.
  • [63] B. Wei, Z. Han, X. He, and Y. Yin. Deep learning model based breast cancer histopathological image classification. In Proc. of ICCCBDA, pages 348–353. IEEE, 2017.
  • [64] N. H. Motlagh, M. Jannesary, H. Aboulkheyr, and et al. Breast cancer histopathological image classification: A deep learning approach. bioRxiv, page 242818, 2018.
  • [65] R. Mehra and et al. Breast cancer histology images classification: Training from scratch or transfer learning? ICT Express, 4(4):247–254, 2018.
  • [66] M. Nawaz, A. Sewissy, and T. Soliman. Automated Classification of Breast Cancer Histology Images Using Deep Learning Based Convolutional Neural Networks. International Journal of Computer Science and Network Security, 18(4):152–160, 2018.
  • [67] A. Nahid, A. Mikaelian, and Y. Kong. Histopathological Breast-image Classification with Restricted Boltzmann Machine Along with Backpropagation. Biomedical Research, 29(10):2068–2077, 2018.
  • [68] A. Nahid, M. Mehrabi, and Y. Kong. Histopathological Breast Cancer Image Classification by Deep Neural Network Techniques Guided by Local Clustering. BioMed Research International, 2018:1–20, 2018.
  • [69] A. Nahid and Y. Kong. Histopathological Breast-image Classification Using Local and Frequency Domains by Convolutional Neural Network. Information, 9(19):1–26, 2018.
  • [70] B. Du, Q. Qi, H. Zheng, and et al. Breast Cancer Histopathological Image Classification via Deep Active Learning and Confidence Boosting. In Proc. of ICANN 2018, pages 109–116, 2018.
  • [71] G. Lee, M. Bajger, and K. Clark. Deep Learning and Color Variability in Breast Cancer Histopathological Images: A Preliminary Study. In Proc. of SPIE 10718, page Online, 2018.
  • [72] M. Nawaz, A. Sewissy, and T. Soliman. Multi-class Breast Cancer Classification using Deep Learning Convolutional Neural Network. International Journal of Advanced Computer Science and Applications, 9(6):316–332, 2018.
  • [73] Z. Gandomkar, P. Brennan, and C. Mello-Thoms. A Framework for Distinguishing Benign from Malignant Breast Histopathological Images Using Deep Residual Networks. In Proc. of SPIE 10718, page Online, 2018.
  • [74] D. Bardou, K. Zhang, and S. M. Ahmad. Classification of Breast Cancer Based on Histology Images Using Convolutional Neural Networks. IEEE Access, 6:24680–24693, 2018.
  • [75] K. Das, S. Conjeti, A. G. Roy, and et al. Multiple instance learning of deep convolutional neural networks for breast histopathology whole slide classification. In Proc. of ISBI 2018, pages 578–581. IEEE, 2018.
  • [76] R. Mehra and et al. Automatic Magnification Independent Classification of Breast Cancer Tissue in Histological Images Using Deep Convolutional Neural Network. In International Conference on Advanced Informatics for Computing Research, pages 772–781. Springer, 2018.
  • [77] S. Cascianelli, R. Bello-Cerezo, F. Bianconi, and et al. Dimensionality reduction strategies for cnn-based classification of histopathological images. In International Conference on Intelligent Interactive Multimedia Systems and Services, pages 21–30. Springer, 2018.
  • [78] B. Xu, J. Liu, X. Hou, and et al. Look, Investigate, and Classify: A Deep Hybrid Attention Method for Breast Cancer Classification. arXiv preprint arXiv:1902.10946, 2019.
  • [79] M. N. Q. Bhuiyan, M. Shamsujjoha, S. H. Ripon, and et al. Transfer Learning and Supervised Classifier Based Prediction Model for Breast Cancer. In Big Data Analytics for Intelligent Healthcare Management, pages 59–86. Elsevier, 2019.
  • [80] J. Xie, R. Liu, IV J. Luttrell, and C. Zhang. Deep learning based analysis of histopathological images of breast cancer. Frontiers in Genetics, 10, 2019.
  • [81] Y. Jiang, L. Chen, H. Zhang, and X. Xiao. Breast cancer histopathological image classification using convolutional neural networks with small SE-ResNet module. PLOS ONE, 14(3), 2019.
  • [82] J. Hu, L. Shen, and G. Sun. Squeeze-and-excitation networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 7132–7141, 2018.
  • [83] M. B. H. Thuy and V. T. Hoang. Fusing of deep learning, transfer learning and GAN for breast cancer histopathological image classification. In International Conference on Computer Science, Applied Mathematics and Applications, pages 255–266. Springer, 2019.
  • [84] T. Karras, S. Laine, and T. Aila. A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4401–4410, 2019.
  • [85] P. Isola, J. Y. Zhu, T. Zhou, and et al. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1125–1134, 2017.
  • [86] J. de Matos, A. S. Britto, L. E. S. Oliveira, and et al. Double transfer learning for breast cancer histopathologic image classification. arXiv preprint arXiv:1904.07834, 2019.
  • [87] S. Saxena, S. Shukla, and M. Gyanchandani. Pre-trained convolutional neural networks as feature extractors for diagnosis of breast cancer using histopathology. International Journal of Imaging Systems and Technology, 2020.
  • [88] M. Gour, S. Jain, and T. Sunil Kumar. Residual learning based cnn for breast cancer histopathological image classification. International Journal of Imaging Systems and Technology, 2020.
  • [89] L. Breiman. Random forests. Machine Learning, 45(1):5–32, 2001.
  • [90] Y. Liu, K. Gadepalli, M. Norouzi, and et al. Detecting Cancer Metastases on Gigapixel Pathology Images. arXiv: Camelyon Grand Challenge 2016, 2017.
  • [91] D. Wang, A. Khosla, R. Gargeya, and et al. Deep Learning for Identifying Metastatic Breast Cancer. arXiv: Camelyon Grand Challenge 2016, 2016.
  • [92] A. BenTaieb and G. Hamarneh. Predicting Cancer with a Recurrent Visual Attention Model for Histopathology Images. In Proc. of MICCAI 2018, pages 129–137, 2018.
  • [93] H. Lin, H. Chen, Q. Dou, and et al. ScanNet: A fast and dense scanning framework for metastatic breast cancer detection from whole-slide images. In Proc. of WACV, pages 539–546. IEEE, 2018.
  • [94] H. Pang, W. Lin, C. Wang, and C. Zhao. Using Transfer Learning to Detect Breast Cancer without Network Training. In Proc. of CCIS, pages 381–385. IEEE, 2018.
  • [95] B. E. Bejnordi, M. Veta, P. van Diest, and et al. Diagnostic Assessment of Deep Learning Algorithms for Detection of Lymph Node Metastases in Women With Breast Cancer. JAMA, 318(22):2199–2210, 2017.
  • [96] L. Chervony and S. Polak. Fast Classification of Whole Slide Histopathology Images for Breast Cancer Detection. Camelyon Grand Challenge 2017, 2017.
  • [97] A. Golatkar, D. Anand, and A. Sethi. Classification of Breast Cancer Histology Using Deep Learning. arXiv: Breast Cancer Histology Challenge 2018, 2018.
  • [98] K. Nazeri, A. Aminpour, and M. Ebrahimi. Two-stage Convolutional Neural Network for Breast Cancer Histology Image Classification. arXiv: Breast Cancer Histology Challenge 2018, 2018.
  • [99] K. Kiambe. Breast Histopathological Image Feature Extraction with Convolutional Neural Networks for Classification. ICSES Transactions on Image Processing and Pattern Recognition, 4(2):4–12, 2018.
  • [100] N. Ranjan, P. Machingal, S. Jammalmadka, and et al. Hierarchical Approach for Breast Cancer Histopathology Images Classification. In Proc. of MIDL 2018, pages 1–7, 2018.
  • [101] S. Vesal, N. Ravikumar, A. Davari, S. Ellmann, and A. Maier. Classification of Breast Cancer Histology Images Using Transfer Learning. In A. Campilho, F. Karray, and B. ter Haar Romeny, editors, International Conference Image Analysis and Recognition, pages 812–819. Springer, Cham, 2018.
  • [102] C. A. Ferreira, T. Melo, P. Sousa, and et al. Classification of Breast Cancer Histology Images Through Transfer Learning Using a Pre-trained Inception Resnet V2. In A. Campilho, F. Karray, and B. ter Haar Romeny, editors, International Conference Image Analysis and Recognition, pages 763–770. Springer, Cham, 2018.
  • [103] Q. D. Vu, M. N. N. To, E. Kim, and et al. Micro and Macro Breast Histology Image Analysis by Partial Network Re-use. In International Conference Image Analysis and Recognition, pages 895–902. Springer, 2018.
  • [104] M. Kohl, C. Walz, F. Ludwig, S. Braunewell, and M. Baust. Assessment of breast cancer histology using densely connected convolutional networks. In International Conference Image Analysis and Recognition, pages 903–913. Springer, 2018.
  • [105] Y. Wang, L. Sun, K. Ma, and J. Fang. Breast cancer microscope image classification based on cnn with image deformation. In International Conference Image Analysis and Recognition, pages 845–852. Springer, 2018.
  • [106] R. Awan, N. A. Koohbanani, M. Shaban, and et al. Context-aware learning using transferable features for classification of breast cancer histology images. In International Conference Image Analysis and Recognition, pages 788–795. Springer, 2018.
  • [107] H. Cao, S. Bernard, L. Heutte, and R. Sabourin. Improve the performance of transfer learning without fine-tuning using dissimilarity-based multi-view learning for breast cancer histology images. In International conference image analysis and recognition, pages 779–787. Springer, 2018.
  • [108] Y. S. Vang, Z. Chen, and X. Xie. Deep learning framework for multi-class breast cancer histology image classification. In International Conference Image Analysis and Recognition, pages 914–922. Springer, 2018.
  • [109] R. Yan, F. Ren, Z. Wang, and et al. Breast cancer histopathological image classification using a hybrid deep neural network. Methods, 2019.
  • [110] M. Schuster and K. K. Paliwal. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing, 45(11):2673–2681, 1997.
  • [111] Dataset proposed by Yan (2019). [Online]. Available: http://ear.ict.ac.cn/?page_id=1616.
  • [112] K. Roy, D. Banik, D. Bhattacharjee, and M. Nasipuri. Patch-based system for Classification of Breast Histology images using deep learning. Computerized Medical Imaging and Graphics, 71:90–103, 2019.
  • [113] S. H. Kassani, P. H. Kassani, M. J. Wesolowski, and et al. Breast cancer diagnosis with transfer learning and global pooling. arXiv preprint arXiv:1909.11839, 2019.
  • [114] L. Roux, D. Racoceanu, N. Loménie, and et al. Mitosis detection in breast cancer histological images: an ICPR 2012 contest. Journal of Pathology Informatics, 4, 2013.
  • [115] C. Malon, E. Brachtel, E. Cosatto, and et al. Mitotic figure recognition: Agreement among pathologists and computerized detector. Analytical Cellular Pathology, 35(2):97–100, 2012.
  • [116] C. Malon and E. Cosatto. Classification of Mitotic Figures with Convolutional Neural Networks and Seeded Blob Features. Journal of Pathology Informatics, 4(8):Online, 2013.
  • [117] H. Wang, A. Cruz-Roa, A. Basavahally, and et al. Cascaded Ensemble of Convolutional Neural Networks and Handcrafted Features for Mitosis Detection. In Proc. of SPIE 9041, page Online, 2014.
  • [118] D. Ciresan, A. Giusti, L. Gambardella, and J. Schmidhuber. Mitosis Detection in Breast Cancer Histology Images with Deep Neural Networks. In Proc. of MICCAI 2013, pages 411–418, 2013.
  • [119] M. Veta. Breast Cancer Histopathology Image Analysis. PhD Thesis in Utrecht University, Netherlands, 2014.
  • [120] H. Chen, Q. Dou, X. Wang, and et al. Mitosis detection in breast cancer histology images via deep cascaded networks. In Thirtieth AAAI Conference on Artificial Intelligence, 2016.
  • [121] N. Wahab, A. Khan, and Y. S. Lee. Transfer learning based deep CNN for segmentation and detection of mitoses in breast cancer histopathological images. Microscopy, 68:216–233, 2019.
  • [122] C. Li, X. Wang, W. Liu, and et al. Weakly supervised mitosis detection in breast histopathology images using concentric loss. Medical Image Analysis, 53:165–178, 2019.
  • [123] M. Veta, Y. J. Heng, N. Stathonikos, and et al. Predicting breast tumor proliferation from whole-slide images: the TUPAC16 challenge. Medical Image Analysis, 54:111–121, 2019.
  • [124] T. Araujo, G. Aresta, E. Castro, and et al. Classification of Breast Cancer Histology Images Using Convolutional Neural Networks. PLOS ONE, 12(6):1–14, 2017.
  • [125] A. Mahbod, I. Ellinger, R. Ecker, and et al. Breast Cancer Histological Image Classification Using Fine-tuned Deep Network Fusion. In Proc. of ICIAR 2018, pages 754–762, 2018.
  • [126] A. Rakhlin, A. Shvets, V. Iglovikov, and A. Kalinin. Deep Convolutional Neural Networks for Breast Cancer Histology Image Analysis. In Proc. of ICIAR 2018, pages 737–744, 2018.
  • [127] Y. Li, J. Wu, and Q. Wu. Classification of Breast Cancer Histology Images Using Multi-Size and Discriminative Patches Based on Deep Learning. IEEE Access, 7:21400–21408, 2019.
  • [128] M. Z. Alom, C. Yakopcic, M. S. Nasrin, and et al. Breast Cancer Classification from Histopathological Images with Inception Recurrent Residual Convolutional Neural Network. Journal of Digital Imaging, pages 1–13, 2019.
  • [129] H. M. Ahmad, S. Ghuffar, and K. Khurshid. Classification of Breast Cancer Histology Images Using Transfer Learning. In Proc. of IBCAST, pages 328–332, 2019.
  • [130] A. Cruz-Roa, A. Basavanhally, F. González, and et al. Automatic detection of invasive ductal carcinoma in whole slide images with convolutional neural networks. In Medical Imaging 2014: Digital Pathology, volume 9041, page 904103. International Society for Optics and Photonics, 2014.
  • [131] F. P. Romero, A. Tang, and S. Kadoury. Multi-Level Batch Normalization In Deep Networks For Invasive Ductal Carcinoma Cell Discrimination In Histopathology Images. arXiv preprint arXiv:1901.03684, 2019.
  • [132] J. Wu, J. Shi, Y. Li, and et al. Histopathological Image Classification Using Random Binary Hashing based PCANet and Bilinear Classifier. In Proc. of EUSIPCO, pages 2050–2054, 2016.
  • [133] B. E. Bejnordi, J. Lin, B. Glass, and et al. Deep learning-based assessment of tumor-associated stroma for diagnosing breast cancer in histopathology images. In Proc. of ISBI 2017, pages 929–932. IEEE, 2017.
  • [134] Z. Wang, N. Dong, W. Dai, and et al. Classification of Breast Cancer Histopathological Images Using Convolutional Neural Networks with Hierarchical Loss and Global Pooling. In Proc. of ICIAR 2018, pages 745–753, 2018.
  • [135] B. Gecer, S. Aksoy, E. Mercan, and et al. Detection and Classification of Cancer in Whole Slide Breast Histopathology Images Using Deep Convolutional Networks. Pattern Recognition, 84:345–356, 2018.
  • [136] H. D. Couture, L. A. Williams, J. Geradts, and et al. Image analysis with deep learning to predict breast cancer grade, ER status, histologic subtype, and intrinsic subtype. NPJ Breast Cancer, 4(1):30, 2018.
  • [137] S. Khan, N. Islam, Z. Jan, and et al. A Novel Deep Learning based Framework for the Detection and Classification of Breast Cancer Using Transfer Learning. Pattern Recognition Letters, 2019.
  • [138] T. Qaiser, A. Mukherjee, C. Reddy Pb, and et al. HER2 challenge contest: a detailed assessment of automated HER2 scoring algorithms in whole slide images of breast cancer tissues. Histopathology, 72(2):227–238, 2018.
  • [139] B. Pang, Y. Zhang, Q. Chen, and et al. Cell Nucleus Segmentation in Color Histopathological Imagery Using Convolutional Networks. In Proc. of CCPR, pages 1–5, 2010.
  • [140] H. Su, F. Liu, Y. Xie, and et al. Region segmentation in histopathological breast cancer images using deep convolutional neural network. In Proc. of ISBI, pages 55–58. IEEE, 2015.
  • [141] J. Xu, X. Luo, G. Wang, and et al. A deep convolutional neural network for segmenting and classifying epithelial and stromal regions in histopathological images. Neurocomputing, 191:214–223, 2016.
  • [142] F. Xing, Y. Xie, and L. Yang. An automatic learning-based framework for robust nucleus segmentation. IEEE Transactions on Medical Imaging, 35(2):550–566, 2016.
  • [143] X. Pan, L. Li, H. Yang, and et al. Accurate segmentation of nuclei in pathological images via sparse reconstruction and deep convolutional networks. Neurocomputing, 229:88–99, 2017.
  • [144] P. Naylor, M. Laé, F. Reyal, and T. Walter. Nuclei segmentation in histopathology images using deep neural networks. In Proc. of ISBI 2017, pages 933–936. IEEE, 2017.
  • [145] P. Naylor. Dataset (2017). [Online]. Available: http://cbio.mines-paristech.fr/~pnaylor/BNS.zip.
  • [146] Y. Cui, G. Zhang, Z. Liu, Z. Xiong, and J. Hu. A Deep Learning Algorithm for One-step Contour Aware Nuclei Segmentation of Histopathological Images. arXiv preprint arXiv:1803.02786, 2018.
  • [147] N. Kumar, R. Verma, S. Sharma, and et al. A dataset and a technique for generalized nuclear segmentation for computational pathology. IEEE Transactions on Medical Imaging, 36(7):1550–1560, 2017.
  • [148] S. Mejbri, C. Franchet, I. A. Reshma, and et al. Deep Analysis of CNN Settings for New Cancer whole-slide Histological Images Segmentation: the Case of Small Training Sets. In 6th International conference on BioImaging (BIOIMAGING 2019), pages 120–128, 2019.
  • [149] J. Xu, L. Xiang, Q. Liu, and et al. Stacked Sparse Autoencoder (SSAE) for Nuclei Detection on Breast Cancer Histopathology Images. IEEE Transactions on Medical Imaging, 35(1):119–130, 2016.
  • [150] G. Litjens, C. Sánchez, N. Timofeeva, and et al. Deep Learning as a Tool for Increased Accuracy and Efficiency of Histopathology Diagnosis. Scientific Reports, 6(26286):1–11, 2016.
  • [151] A. Cruz-Roa, H. Gilmore, A. Basavanhally, and et al. High-throughput adaptive sampling for whole-slide histopathology image analysis (HASHI) via convolutional neural networks: Application to invasive breast cancer detection. PLOS ONE, 13(5), 2018.
  • [152] M. Saha, C. Chakraborty, and D. Racoceanu. Efficient deep learning model for mitosis detection using breast histopathology images. Computerized Medical Imaging and Graphics, 64:29–40, 2018.
  • [153] Z. Zainudin, S. M. Shamsuddin, and S. Hasan. Deep Layer CNN Architecture for Breast Cancer Histopathology Image Detection. In International Conference on Advanced Machine Learning Technologies and Applications, pages 43–51. Springer, 2019.
  • [154] M. W. Gardner and S. R. Dorling. Artificial neural networks (the multilayer perceptron)—a review of applications in the atmospheric sciences. Atmospheric Environment, 32(14-15):2627–2636, 1998.
  • [155] R. Ribani and M. Marengoni. A survey of transfer learning for convolutional neural networks. In Proc. of SIBGRAPI-T, pages 47–57. IEEE, 2019.
  • [156] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
  • [157] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
  • [158] O. Hadad, R. Bakalo, R. Ben-Ari, and et al. Classification of breast lesions using cross-modal deep learning. In ISBI 2017, pages 109–112. IEEE, 2017.
  • [159] O. Russakovsky, J. Deng, H. Su, and et al. ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211–252, 2015.
  • [160] D. Sarkar, R. Bali, and T. Ghosh. Hands-On Transfer Learning with Python: Implement advanced deep learning and neural network models using TensorFlow and Keras. Packt Publishing Ltd, 2018.
  • [161] M. Z. Alom, M. Hasan, C. Yakopcic, and et al. Improved inception-residual convolutional neural network for object recognition. Neural Computing and Applications, pages 1–15, 2018.
  • [162] M. Z. Alom, M. Hasan, C. Yakopcic, and T. M. Taha. Inception recurrent convolutional neural network for object recognition. arXiv preprint arXiv:1704.07709, 2017.
  • [163] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. Alemi. Inception-v4, Inception-ResNet and the impact of residual connections on learning. In Thirty-first AAAI conference on artificial intelligence, 2017.
  • [164] M. Liang and X. Hu. Recurrent convolutional neural network for object recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3367–3375, 2015.
  • [165] Y. Chen, X. Qin, J. Xiong, and et al. Deep transfer learning for histopathological diagnosis of cervical cancer using convolutional neural networks with visualization schemes. Journal of Medical Imaging and Health Informatics, 10(2):391–400, 2020.
  • [166] S. Sornapudi, R. J. Stanley, W. V. Stoecker, and et al. Deep learning nuclei detection in digitized histology images by superpixels. Journal of Pathology Informatics, 9, 2018.
  • [167] F. Sheikhzadeh, R. K. Ward, D. van Niekerk, and M. Guillaud. Automatic labeling of molecular biomarkers of immunohistochemistry images using fully convolutional networks. PLOS ONE, 13(1), 2018.
  • [168] M. Wu, C. Yan, H. Liu, and et al. Automatic classification of cervical cancer from cytological images by using convolutional neural network. Bioscience Reports, 38(6), 2018.
  • [169] S. Gautam, N. Jith, A. K. Sao, and et al. Considerations for a pap smear image analysis system with cnn features. arXiv preprint arXiv:1806.09025, 2018.
  • [170] Y. Song, E. Tan, X. Jiang, and et al. Accurate cervical cell segmentation from overlapping clumps in pap smear images. IEEE Transactions on Medical Imaging, 36(1):288–300, 2016.
  • [171] S. Padi, P. Manescu, N. Schaub, and et al. Comparison of artificial intelligence based approaches to cell function prediction. Informatics in Medicine Unlocked, 18:100270, 2020.
  • [172] D. Kusumoto, M. Lachmann, T. Kunihiro, and et al. Automated deep learning-based system to identify endothelial cells derived from induced pluripotent stem cells. Stem Cell Reports, 10(6):1687–1695, 2018.
  • [173] E. Ito, T. Sato, D. Sano, E. Utagawa, and T. Kato. Virus particle detection by convolutional neural network in transmission electron microscopy images. Food and Environmental Virology, 10(2):201–208, 2018.
  • [174] D. J. Matuszewski and I. M. Sintorn. Minimal annotation training for segmentation of microscopy images. In ISBI 2018, pages 387–390. IEEE, 2018.
  • [175] S. Kosov, K. Shirahama, C. Li, and M. Grzegorzek. Environmental microorganism classification using conditional random fields and deep convolutional neural networks. Pattern Recognition, 77:248–261, 2018.
  • [176] S. Javadi and S. A. Mirroshandel. A novel deep learning method for automatic assessment of human sperm images. Computers in Biology and Medicine, 109:182–194, 2019.
  • [177] J. Riordon, C. McCallum, and D. Sinton. Deep learning for the classification of human sperm. Computers in Biology and Medicine, 111:103342, 2019.
  • [178] S. Javadi and S. A. Mirroshandel. A novel deep learning method for automatic assessment of human sperm images. Computers in Biology and Medicine, 109:182–194, 2019.
  • [179] N. Alqahtani, R. T. Armstrong, P. Mostaghimi, and et al. Deep learning convolutional neural networks to predict porous media properties. In SPE Asia Pacific oil and gas conference and exhibition. Society of Petroleum Engineers, 2018.
  • [180] S. Karimpouli and P. Tahmasebi. Segmentation of digital rock images using deep convolutional autoencoder networks. Computers & Geosciences, 126:142–150, 2019.