Look, Investigate, and Classify: A Deep Hybrid Attention Method for Breast Cancer Classification

02/28/2019
by   Bolei Xu, et al.

One issue with computer-based histopathology image analysis is that the raw images are usually very large. Feeding the raw image directly into a deep learning model is computationally expensive, while resizing it to low resolution incurs information loss. In this paper, we present a novel deep hybrid attention approach to breast cancer classification. It first adaptively selects a sequence of coarse regions from the raw image via a hard visual attention algorithm, and then investigates the abnormal parts within each region using a soft-attention mechanism. A recurrent network is built to classify each image region and to predict the location of the region to be investigated at the next time step. As the region selection process is non-differentiable, we optimize the whole network through a reinforcement learning approach to learn an optimal policy for classifying the regions. Based on this novel Look, Investigate, and Classify approach, we only need to process a fraction of the pixels in the raw image, resulting in significant savings in computational resources without sacrificing performance. Our approach is evaluated on a public breast cancer histopathology database, where it demonstrates superior performance to state-of-the-art deep learning approaches, achieving around 96% classification accuracy while using only 15% of the raw image pixels.


1 Introduction

Breast cancer is a major concern among women due to its higher mortality compared with other cancers [1]. Thus, early detection and accurate assessment are necessary to increase survival rates. In clinical breast examination, obtaining a diagnostic report from a pathologist is fatiguing and time-consuming. There is therefore a large demand for computer-aided diagnosis (CADx) systems to relieve the workload of pathologists.

In recent years, deep learning approaches have been widely applied to histopathology image analysis due to their significant performance on various medical imaging tasks. However, one issue with deep learning approaches is that the raw images are large. Directly inputting raw images into a deep neural network is computationally expensive and requires days of training on GPUs. Some previous approaches address this problem by either resizing raw images to low resolution [2, 3, 4] or randomly cropping patches [5] from the raw images. However, both approaches lead to information loss, and the detailed features of the abnormal parts could be missing, which might cause misdiagnosis. Another approach is to use a sliding window to crop image patches. However, this produces a large number of patches unrelated to the lesion, since the abnormal part often occupies only a small portion of the image.

One property of the human visual system is that it does not have to process the whole image at once. In clinical diagnosis, a pathologist first selectively pays attention to an abnormal region, and then investigates that region in detail. In this paper, we formulate the problem as a Partially Observable Markov Decision Process [6], and we propose a novel deep hybrid attention model to mimic the human perception system. We build a recurrent model that, at each time step, selects image patches highly related to the abnormal part from the raw image, which we call "hard attention". Instead of working directly on the raw image, we can thus learn image features from the cropped patch. We further investigate the cropped patch through a "soft-attention" mechanism that highlights the pixels most related to the lesion for classification. It should be noted that our approach never accesses the full raw image directly, so its computational cost is independent of the raw image size. Because the patch selection process is non-differentiable, we treat it as a control problem and optimize the network through a reinforcement learning approach.
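As a minimal sketch of the hard-attention step (the paper does not publish code, so function and variable names here are illustrative), a glimpse sensor can crop a fixed-size patch from the raw image around a normalized location:

```python
import numpy as np

def extract_glimpse(image, loc, size):
    """Crop a (size x size) patch centered at a normalized location.

    image: (H, W, C) array; loc: (y, x) in [-1, 1]. The image is
    zero-padded so crops near the border keep a fixed output shape.
    """
    h, w = image.shape[:2]
    # Map normalized coordinates to pixel coordinates of the patch center.
    cy = int((loc[0] + 1) / 2 * h)
    cx = int((loc[1] + 1) / 2 * w)
    half = size // 2
    # Pad so that crops near the border stay in range.
    padded = np.pad(image, ((half, half), (half, half), (0, 0)))
    return padded[cy:cy + size, cx:cx + size]

# A single 96x96 glimpse from a 460x700 image touches only ~3% of its pixels,
# so a short sequence of glimpses still covers a small fraction of the image.
image = np.zeros((460, 700, 3))
patch = extract_glimpse(image, loc=(0.0, 0.0), size=96)
```

Because the network only ever sees such fixed-size patches, its per-step computation is constant regardless of the raw image resolution.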

Figure 1: The overall framework of our deep hybrid attention network. "FC" denotes a fully-connected layer with ReLU activation. At each time step, the network classifies the image in three stages. In the "Look" stage, a patch is cropped by hard attention. In the "Investigate" stage, the abnormal features of the image patch are extracted by the SA-Net shown in Figure 2. Finally, in the "Classify" stage, an LSTM processes the image features, classifies the image, and predicts the region to examine at the next time step. For each raw image, the network crops five patches for classification.

The contributions of this paper are three-fold: (1) a novel framework based on a hybrid attention mechanism is introduced for the classification of breast cancer histopathology images; (2) the proposed approach automatically selects useful regions from the raw image, which prevents information loss and saves computational cost; (3) our approach demonstrates superior performance to previous state-of-the-art methods on a public dataset.

2 Methodology

2.1 Network Architecture

We formulate the histopathology image classification problem as a Partially Observable Markov Decision Process (POMDP), which means that at each time step the network does not have full access to the image and must make decisions based on the currently observed region. It operates in three stages, "Look", "Investigate", and "Classify", as shown in Figure 1.

Look Stage: At each time step t, a hard-attention sensor receives a partial image patch x_t based on the location information l_{t-1}; the patch is much smaller than the raw image x. It is a coarse region that might be related to the abnormal part.

Investigate Stage: The soft-attention mechanism, parameterized by θ_s, encodes the observed image region into a soft-attention map in which the valuable information is highlighted. This is achieved by a soft-attention network (SA-Net), shown in Figure 2, which contains a mask branch and a trunk branch. The soft mask branch learns a mask M(x_t) in the range [0, 1] via a symmetrical top-down architecture followed by a sigmoid layer that normalizes the output. The trunk branch outputs the feature map T(x_t), and the final attention map is computed by:

A(x_t) = M(x_t) ⊙ T(x_t)    (1)

The soft-attention features g_t are then obtained by global average pooling over the attention map A(x_t). To fuse the learned attention features with the location information l_{t-1}, we build a fusion network that produces the fused feature vector f_t through a fully-connected layer with ReLU activation.
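The mask-times-trunk computation of Equation 1 followed by global average pooling can be sketched in NumPy (shapes and names are illustrative, not the paper's code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def soft_attention_features(mask_logits, trunk):
    """Combine the mask and trunk branches as in Equation 1.

    mask_logits, trunk: (H, W, C) feature maps. The sigmoid keeps the
    mask in [0, 1]; global average pooling over the spatial dimensions
    yields a C-dimensional feature vector.
    """
    mask = sigmoid(mask_logits)             # soft mask in [0, 1]
    attention_map = mask * trunk            # elementwise highlighting
    return attention_map.mean(axis=(0, 1))  # global average pooling

rng = np.random.default_rng(0)
feats = soft_attention_features(rng.normal(size=(8, 8, 64)),
                                rng.normal(size=(8, 8, 64)))
```

Because the mask lies in [0, 1], it can only attenuate trunk activations, which matches the goal of suppressing regions unrelated to the lesion.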

Figure 2: The structure of SA-Net. Conv(k, s) denotes a convolutional layer with kernel size k and stride s. We use 64 convolutional filters for the last Conv layers. 'BN' denotes batch normalization. MP(3, 2) denotes max-pooling with size 3 and stride 2. 'PReLU' means the PReLU activation function is applied. 'Upsample' denotes upsampling by bilinear interpolation. The structure of the residual unit is shown in Figure 3.
Figure 3: The structure of the residual unit in SA-Net. We use 64 convolutional filters in each Conv layer.

Classify Stage: We further use an LSTM to process the learned fused feature f_t. The advantage of the LSTM is that it can summarize past information and learn an optimal classification policy π(u_t | s_{1:t}; θ), where u_t is the decision at time step t and s_{1:t} represents the past history s_1, ..., s_t. The internal state is formed and updated by the hidden unit of the LSTM [7]: h_t = f_h(h_{t-1}, f_t; θ_h). Based on this internal state, the recurrent LSTM network then chooses two actions: how to classify the image, and where to look at the next time step. In this work, both actions are drawn stochastically from two distributions. The classification action a_t is drawn from a classification network with a softmax output at step t: a_t ~ p(· | f_a(h_t; θ_a)). Similarly, the location l_t is drawn from a location network: l_t ~ p(· | f_l(h_t; θ_l)).

After executing the chosen actions, we receive a new image patch and a reward r_t indicating whether the image has been classified correctly. The total reward is R = Σ_{t=1}^{T} r_t. In this paper, the reward is set to 0 at all time steps except the last; at the last time step, the reward is 1 if the image is classified correctly and 0 otherwise.
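The stochastic action selection and the terminal-only reward can be sketched as follows (a toy illustration with assumed distributions, not the trained model; a Gaussian location policy is a common choice in recurrent attention models):

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(0)

# Classification action: sample a class label from the softmax output
# of the classification network (here, two classes: benign/malignant).
class_logits = np.array([2.0, 0.5])
action = rng.choice(2, p=softmax(class_logits))

# Location action: sample the next glimpse location from a Gaussian
# centered on the location network's output.
loc_mean = np.array([0.1, -0.3])
next_loc = rng.normal(loc_mean, scale=0.1)

# Terminal reward over T = 5 steps: zero everywhere except the last step.
T, correct = 5, True
rewards = np.zeros(T)
rewards[-1] = 1.0 if correct else 0.0
total_reward = rewards.sum()  # R = sum over t of r_t
```

Sampling (rather than taking the argmax) is what makes the policy-gradient optimization below applicable: the log-probability of each sampled action can be scored against the episode reward.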

2.2 Network Optimization

As the hard-attention mechanism is non-differentiable, we optimize the whole network through a policy gradient approach. We aim to maximize the expected reward:

J(θ) = E_{p(s_{1:T}; θ)}[ Σ_{t=1}^{T} r_t ] = E_{p(s_{1:T}; θ)}[R]    (2)

In order to maximize J(θ), its gradient can be approximated by:

∇_θ J ≈ (1/M) Σ_{i=1}^{M} Σ_{t=1}^{T} ∇_θ log π(u_t^i | s_{1:t}^i; θ) R^i    (3)

where M is the number of training episodes [8]. Equation 3 encourages the network to increase the probability of actions that lead to a high cumulative reward and to decrease the probability of actions that lower the reward. To achieve this, we update the network by:

θ ← θ + α ∇_θ J    (4)

Meanwhile, we can combine Equation 4 with a supervised classification training approach, i.e., also train the network with a cross-entropy loss against the ground-truth label. The network is thus learned by minimizing the total loss:

L = L_ce(y, ŷ) − J(θ)    (5)

where y is the ground-truth classification label, ŷ is the label predicted by the network, and L_ce is the cross-entropy classification loss.
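For a softmax policy, the per-episode REINFORCE term in Equation 3 has a simple closed form, since d log π(a) / d logits = onehot(a) − π. A numerical sketch (illustrative variable names, single episode):

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def reinforce_grad(logits, action, reward):
    """Gradient of reward * log pi(action) w.r.t. the policy logits.

    For a softmax policy, the gradient of log pi(a) with respect to
    the logits is onehot(a) - pi, which REINFORCE scales by the
    episode reward R.
    """
    probs = softmax(logits)
    onehot = np.zeros_like(probs)
    onehot[action] = 1.0
    return reward * (onehot - probs)

logits = np.array([0.2, -0.1, 0.4])
grad = reinforce_grad(logits, action=2, reward=1.0)

# Gradient ascent step (Equation 4) with learning rate alpha:
alpha = 0.1
logits = logits + alpha * grad
```

With a positive reward, the update raises the probability of the sampled action and lowers the others; with a zero reward (an incorrect final classification under the scheme above), the update vanishes.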

3 Experiment

3.1 Dataset and Parameter Settings

We evaluated our approach on the public BreakHis dataset [9]. The dataset contains 7,909 images collected from 82 patients, 58 with malignant and 24 with benign tumors. The tumor tissue images are captured at four optical magnifications: 40×, 100×, 200×, and 400×.

In the experiment, we randomly select 58 patients (70%) for training and 24 patients (30%) for testing. Before training, we augment the raw images by applying rotation and horizontal and vertical flips, which results in 3 times the original training data. The raw image size in the dataset is 700×460 pixels. The five cropped patches per image are sized so that we only have to process around 15% of the raw image pixels. We use the Adam optimizer with a learning rate that decays exponentially over epochs. Training usually takes around 200 epochs to converge. The experiment is conducted on a workstation with four Nvidia 1080 Ti GPUs.

The performance of our approach is evaluated by the patient recognition rate (PRR), in order to be comparable with previous work. PRR measures the ratio of correctly classified tissue images to the total number of tissue images, averaged over patients:

PRR = (1/N) Σ_{i=1}^{N} N_c^i / N_t^i    (6)

where N is the total number of patients in the testing data, N_c^i is the number of correctly classified tissue images of patient i, and N_t^i is the total number of tissue images from patient i.
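Equation 6 can be computed directly from per-patient counts (the counts below are hypothetical, for illustration only):

```python
def patient_recognition_rate(per_patient):
    """PRR = mean over patients of (correct tissues / total tissues).

    per_patient: list of (n_correct, n_total) pairs, one per patient.
    """
    scores = [nc / nt for nc, nt in per_patient]
    return sum(scores) / len(scores)

# Three hypothetical test patients with 10 tissue images each:
prr = patient_recognition_rate([(9, 10), (8, 10), (10, 10)])
```

Note that PRR weights every patient equally regardless of how many tissue images they contribute, unlike a plain image-level accuracy.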

3.2 Comparison with other approaches

Methods Magnification
Spanhol [9]
Spanhol [10]
Gupta [11]
Sequential [12]
FV+CNN [13]
MIL+CNN [14] n/a n/a n/a n/a
MIL [15] n/a n/a n/a n/a
S-CNN [3]
Ours w/o SA
Ours
Table 1: Performance comparison of magnification-specific systems (in %). "Ours w/o SA" denotes our approach with the SA-Net removed; n/a denotes that the authors did not report the corresponding data.
Figure 4: An example of how the hard-attention mechanism selects image patches.

To evaluate the performance of our approach on histopathology image classification, we compare our proposed deep learning framework with state-of-the-art approaches. The results, shown in Table 1, demonstrate that our approach outperforms all previous approaches. Notably, it achieves a much higher accuracy rate than most CNN approaches [13, 14, 15]. This is due to the well-designed attention mechanisms that select useful regions for the decision network (Figure 4): the hard-attention mechanism finds the regions most related to the abnormal part, and the soft-attention mechanism highlights the abnormal features. Beyond its superior performance, our approach avoids resizing the raw image, which can lead to information loss, and processes only small image patches, saving computational cost.

We also conducted an ablation study to evaluate the effectiveness of the soft attention: we removed the SA-Net and tested the remaining network. Classification accuracy dropped by around 10%. This decrease occurs because redundant features are then also processed by the network, which may contain noise leading to misclassification. It is therefore essential to apply the soft-attention mechanism to highlight useful features and to encourage the network to ignore unnecessary ones.

4 Conclusion

In this paper, we introduced a novel deep hybrid attention network for breast cancer histopathology image classification. The hard-attention mechanism automatically finds useful regions in the raw image, so the image does not have to be resized, preventing information loss. The built-in recurrent network makes decisions to classify the image and predicts the region for the next time step. We evaluated our approach on a public dataset, where it achieves around 96% accuracy across four different magnifications while using only 15% of the raw image pixels to classify each input image.

References

  • [1] American Cancer Society, Cancer facts & figures, The Society, 2008.
  • [2] Fabio A Spanhol, Luiz S Oliveira, Paulo R Cavalin, Caroline Petitjean, and Laurent Heutte, “Deep features for breast cancer histopathological image classification,” in Systems, Man, and Cybernetics (SMC), 2017 IEEE International Conference on. IEEE, 2017, pp. 1868–1873.
  • [3] Zhongyi Han, Benzheng Wei, Yuanjie Zheng, Yilong Yin, Kejian Li, and Shuo Li, “Breast cancer multi-classification from histopathological images with structured deep learning model,” Scientific reports, vol. 7, no. 1, pp. 4172, 2017.
  • [4] Nima Habibzadeh Motlagh, Mahboobeh Jannesary, HamidReza Aboulkheyr, Pegah Khosravi, Olivier Elemento, Mehdi Totonchi, and Iman Hajirasouliha, “Breast cancer histopathological image classification: A deep learning approach,” bioRxiv, p. 242818, 2018.
  • [5] Alexander Rakhlin, Alexey Shvets, Vladimir Iglovikov, and Alexandr A Kalinin, “Deep convolutional neural networks for breast cancer histology image analysis,” in International Conference Image Analysis and Recognition. Springer, 2018, pp. 737–744.
  • [6] Volodymyr Mnih, Nicolas Heess, Alex Graves, et al., “Recurrent models of visual attention,” in Advances in neural information processing systems, 2014, pp. 2204–2212.
  • [7] Sepp Hochreiter and Jürgen Schmidhuber, “Long short-term memory,” Neural computation, vol. 9, no. 8, pp. 1735–1780, 1997.
  • [8] Ronald J Williams, “Simple statistical gradient-following algorithms for connectionist reinforcement learning,” Machine learning, vol. 8, no. 3-4, pp. 229–256, 1992.
  • [9] Fabio A Spanhol, Luiz S Oliveira, Caroline Petitjean, and Laurent Heutte, “A dataset for breast cancer histopathological image classification,” IEEE Transactions on Biomedical Engineering, vol. 63, no. 7, pp. 1455–1462, 2016.
  • [10] Fabio Alexandre Spanhol, Luiz S Oliveira, Caroline Petitjean, and Laurent Heutte, “Breast cancer histopathological image classification using convolutional neural networks,” in Neural Networks (IJCNN), 2016 International Joint Conference on. IEEE, 2016, pp. 2560–2567.
  • [11] Vibha Gupta and Arnav Bhavsar, “Breast cancer histopathological image classification: is magnification important?,” in IEEE Conf. on Computer Vision and Pattern Recognition Workshops (CVPRW), 2017.
  • [12] Vibha Gupta and Arnav Bhavsar, “Sequential modeling of deep features for breast cancer histopathological image classification,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2018, pp. 2254–2261.
  • [13] Yang Song, Ju Jia Zou, Hang Chang, and Weidong Cai, “Adapting fisher vectors for histopathology image classification,” in Biomedical Imaging (ISBI 2017), 2017 IEEE 14th International Symposium on. IEEE, 2017, pp. 600–603.
  • [14] Jiajun Wu, Yinan Yu, Chang Huang, and Kai Yu, “Deep multiple instance learning for image classification and auto-annotation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 3460–3469.
  • [15] Kausik Das, Sailesh Conjeti, Abhijit Guha Roy, Jyotirmoy Chatterjee, and Debdoot Sheet, “Multiple instance learning of deep convolutional neural networks for breast histopathology whole slide classification,” in Biomedical Imaging (ISBI 2018), 2018 IEEE 15th International Symposium on. IEEE, 2018, pp. 578–581.