Breast cancer is a major concern among women due to its higher mortality compared with other cancers. Early detection and accurate assessment are therefore necessary to increase survival rates. In clinical breast examination, obtaining a diagnostic report from a pathologist is usually fatiguing and time-consuming. There is thus a large demand for computer-aided diagnosis (CADx) systems to relieve the workload of pathologists.
In recent years, deep learning approaches have been widely applied to histopathology image analysis owing to their strong performance on various medical imaging tasks. However, one issue with deep learning approaches is the large size of the raw images. Directly feeding raw images into a deep neural network is computationally expensive and can require days of training on GPUs. Some previous approaches address this problem by either resizing raw images to low resolution [2, 3, 4] or randomly cropping patches from raw images. Both strategies, however, lead to information loss: detailed features of the abnormal region could be missing, which might cause misdiagnosis. Another approach is to crop image patches with a sliding window; yet since the abnormal region usually occupies only a small portion of the image, a large number of patches would be unrelated to the lesion.
One property of the human visual system is that it does not have to process the whole image at once. In clinical diagnosis, a pathologist first selectively attends to the abnormal region and then investigates that region in detail. In this paper, we formulate the problem as a Partially Observable Markov Decision Process, and we propose a novel deep hybrid attention model to mimic the human perception system. We build a recurrent model that, at each time step, selects image patches from the raw image that are highly related to the abnormal part, a mechanism known as “hard attention”. Instead of working directly on the raw image, we can thus learn image features from the cropped patch. We further investigate the cropped patch through a “soft-attention” mechanism that highlights the pixels most related to the lesion for classification. Note that our approach never accesses the full raw image directly, so its computational cost is independent of the raw image size. Since the patch selection process is non-differentiable, we treat it as a control problem and optimize the network with a reinforcement learning approach.
The contributions of this paper are three-fold: (1) a novel framework based on a hybrid attention mechanism is introduced for the classification of breast cancer histopathology images; (2) the proposed approach automatically selects useful regions from the raw image, which prevents information loss and saves computational cost; (3) our approach demonstrates superior performance to previous state-of-the-art methods on a public dataset.
2.1 Network Architecture
We formulate the histopathology image classification problem as a Partially Observable Markov Decision Process (POMDP): at each time step, the network does not have full access to the image and must make decisions based on the currently observed region. The model proceeds in three stages, “Look”, “Investigate” and “Classify”, as shown in Figure 1.
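The three-stage loop can be sketched in miniature. Everything below is an illustrative stand-in, not the paper's actual architecture: the 32-pixel patch size, the sigmoid-weighted pooling in place of the real SA-Net, and the decayed running sum in place of the LSTM are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def look(image, loc, patch_size=32):
    """Hard attention: crop a patch located by `loc` (values in [0, 1])."""
    h, w = image.shape
    cy = int(loc[0] * (h - patch_size))
    cx = int(loc[1] * (w - patch_size))
    return image[cy:cy + patch_size, cx:cx + patch_size]

def investigate(patch):
    """Soft-attention stand-in: a sigmoid mask weights pixels, then pool."""
    mask = 1.0 / (1.0 + np.exp(-patch))
    return (mask * patch).mean()

def classify(state):
    """Policy stand-in: class probabilities plus a stochastic next location."""
    probs = np.exp(state) / np.exp(state).sum()
    return probs, rng.uniform(size=2)

image = rng.uniform(size=(460, 700))           # a BreakHis-sized grey image
loc, state = np.array([0.5, 0.5]), np.zeros(2)
for t in range(5):                             # five glimpses per image
    patch = look(image, loc)                   # Look
    state = 0.9 * state + investigate(patch)   # Investigate + recurrent summary
    probs, loc = classify(state)               # Classify / pick next location
```

The point of the loop is that only the 32x32 patches ever reach the feature extractor; the full 460x700 image is touched only by the cropping operation.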
Look Stage: At each time step t, a hard-attention sensor receives a partial image patch x_t based on the location information l_{t-1}; the patch is smaller than the raw image x. It is a coarse region that might be related to the abnormal part.
Investigate Stage: The soft-attention mechanism encodes the observed image region into a soft-attention map in which the valuable information is highlighted. This is achieved by a soft-attention network (SA-Net), shown in Figure 2, which contains a mask branch and a trunk branch. The soft mask branch learns a mask in the range [0, 1] through a symmetrical top-down architecture followed by a sigmoid layer that normalizes the output. The trunk branch outputs the feature map, and the final attention map is computed by:
The soft-attention features are then obtained by global average pooling over the attention map. In order to fuse the learned attention features with the location information, we build a fusion network that produces the fused feature vector through a fully-connected layer with ReLU activation.
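The Investigate stage can be sketched under assumed shapes: an 8x8x16 trunk feature map, a single-channel mask broadcast over channels, and a 32-unit fusion layer. The element-wise mask-times-trunk gating is also an assumption standing in for the paper's combination rule.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sa_net(trunk, mask_logits):
    """Mask branch output in [0, 1] gates the trunk feature map."""
    attention = sigmoid(mask_logits) * trunk   # assumed element-wise gating
    return attention.mean(axis=(0, 1))         # global average pooling

def fuse(attn_feat, loc, W, b):
    """Fusion network: one FC layer with ReLU over [features ; location]."""
    return np.maximum(W @ np.concatenate([attn_feat, loc]) + b, 0.0)

trunk = rng.normal(size=(8, 8, 16))        # trunk branch feature map
mask_logits = rng.normal(size=(8, 8, 1))   # mask branch, broadcast on channels
attn_feat = sa_net(trunk, mask_logits)     # 16-d pooled attention features
W, b = rng.normal(size=(32, 18)), np.zeros(32)
g = fuse(attn_feat, rng.uniform(size=2), W, b)   # 16 features + 2-d location
```

Concatenating the pooled features with the 2-d location before the FC layer is one simple way to realize the fusion described above.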
Classify Stage: We further use an LSTM to process the learned fused feature. The advantage of the LSTM is that it can summarize past information and learn an optimal classification policy π(a_t | s_{1:t}), where a_t is the decision to classify the image at time step t and s_{1:t} represents the past history. The internal state is formed and updated by the LSTM hidden unit h_t. Based on this internal state, the recurrent LSTM network chooses two actions: how to classify the image, and where to look at the next time step. In this work, both actions are drawn stochastically. The classification action a_t is drawn from the softmax output of a classification network at step t, and the location l_t is drawn from a location network.
When executing the chosen actions, the model receives a new image patch and a reward indicating whether the image has been classified correctly. The total reward is the sum of the per-step rewards, R = \sum_{t=1}^{T} r_t. In this paper, we set the reward to 0 at every time step except the last; at the last time step, the reward is 1 if the image is classified correctly and 0 otherwise.
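A minimal sketch of the stochastic action draws and the sparse terminal reward; the linear heads Wa and Wl and the Gaussian location policy are assumptions standing in for the paper's classification and location networks.

```python
import numpy as np

rng = np.random.default_rng(2)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def step_actions(hidden, Wa, Wl, loc_std=0.1):
    """Draw both actions stochastically from the current internal state."""
    class_probs = softmax(Wa @ hidden)               # classification head
    action = rng.choice(len(class_probs), p=class_probs)
    loc = rng.normal(np.tanh(Wl @ hidden), loc_std)  # stochastic location head
    return action, loc

def reward(pred, label, t, T):
    """Sparse reward: 1 only at the final step, only if prediction is right."""
    return float(t == T - 1 and pred == label)

hidden = rng.normal(size=16)
Wa, Wl = rng.normal(size=(2, 16)), rng.normal(size=(2, 16))
T = 5
total = sum(reward(step_actions(hidden, Wa, Wl)[0], 1, t, T) for t in range(T))
```

Because the reward is zero everywhere but the final step, the total reward over an episode is either 0 or 1, exactly matching the scheme above.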
2.2 Network Optimization
As the hard-attention mechanism is non-differentiable, we optimize the whole network through a policy gradient approach. We aim to maximize the expected reward J(\theta) = \mathbb{E}[R]. The gradient of J can be approximated by the REINFORCE rule:

\nabla_\theta J \approx \frac{1}{M} \sum_{i=1}^{M} \sum_{t=1}^{T} \nabla_\theta \log \pi(a_t^i \mid s_{1:t}^i; \theta)\, R^i,  (3)

where M is the number of sampled episodes. Equation 3 encourages the network to increase the probability of actions that lead to a high cumulative reward and to decrease the probability of actions that lower it. To achieve this, the network parameters are updated by:

\theta \leftarrow \theta + \alpha \nabla_\theta J.  (4)
Meanwhile, we can combine Equation 4 with a supervised classification training approach, i.e., also train the network with the cross-entropy loss against the ground-truth label. The network is then learned by minimizing the total loss:

where y is the ground-truth classification label, ŷ is the label predicted by the network, and L_c is the cross-entropy classification loss.
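The REINFORCE update of Equation 3 can be demonstrated on a toy softmax policy over three candidate actions, only one of which is ever rewarded. This is a sketch of the estimator alone, not the paper's full hybrid objective (which additionally includes the cross-entropy term).

```python
import numpy as np

rng = np.random.default_rng(3)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

theta = np.zeros(3)                        # policy parameters
true_reward = np.array([0.0, 1.0, 0.0])    # only action 1 is ever rewarded

for epoch in range(200):
    grad, M = np.zeros_like(theta), 20     # M sampled episodes per update
    for _ in range(M):
        probs = softmax(theta)
        a = rng.choice(3, p=probs)         # sample an action from the policy
        R = true_reward[a]
        # grad of log pi(a) for a softmax policy is one_hot(a) - probs
        grad += (np.eye(3)[a] - probs) * R
    theta += 0.1 * grad / M                # gradient ascent on J
```

After training, the policy concentrates its probability mass on the rewarded action, which is exactly the behavior Equation 3 encourages: probabilities of high-reward actions go up, the rest go down.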
3.1 Datasets and Parameters Setting
We evaluated our approach on the public BreakHis dataset. The dataset contains 7,909 images collected from 82 patients, 58 with malignant and 24 with benign tumors. The tumor tissue images are captured at four optical magnifications: 40x, 100x, 200x, and 400x.
In the experiment, we randomly select 58 patients (70%) for training and 24 patients (30%) for testing. Before training, we augment the raw images by applying rotation and horizontal and vertical flips, which yields 3 times the original training data. The raw images in the dataset are 700x460 pixels. The size of the five cropped patches is set such that only around 15% of the raw image pixels need to be processed. We use the Adam optimizer with a learning rate that decays exponentially over epochs. Training usually takes around 200 epochs to converge. The experiments are conducted on a workstation with four Nvidia 1080 Ti GPUs.
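The rotation-plus-flips augmentation that triples the training data might look as follows; the specific 90-degree rotation is an assumption, since the exact angles are not stated above.

```python
import numpy as np

def augment(image):
    """Return three extra copies per raw image: one rotation, two flips."""
    return [np.rot90(image), np.fliplr(image), np.flipud(image)]

img = np.arange(12).reshape(3, 4)   # stand-in for a 700x460 histology image
copies = augment(img)
print(len(copies))  # 3 augmented copies -> 3x more training data
```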
The performance of our approach is evaluated by the patient recognition rate (PRR) in order to be comparable with previous work. The PRR averages, over patients, the ratio of correctly classified tissue images to the total number of images for that patient. It can be formulated as:

PRR = \frac{1}{N} \sum_{p=1}^{N} \frac{N_{c}^{p}}{N_{t}^{p}},

where N is the total number of patients in the testing data, N_c^p is the number of correctly classified tissue images of patient p, and N_t^p is the total number of tissue images from patient p.
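The PRR computation amounts to averaging per-patient accuracies; a minimal implementation, where the dictionary layout is an assumption for illustration:

```python
def patient_recognition_rate(per_patient_results):
    """PRR: average over patients of (correct images / total images).

    `per_patient_results` maps patient id -> (n_correct, n_total)."""
    scores = [c / t for c, t in per_patient_results.values()]
    return sum(scores) / len(scores)

# Toy example: three patients with different per-patient accuracies.
results = {"p1": (9, 10), "p2": (7, 10), "p3": (10, 10)}
print(round(patient_recognition_rate(results), 3))  # 0.867
```

Note that the PRR weights every patient equally, so a patient with few slides counts as much as one with many; this is why it differs from plain image-level accuracy.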
3.2 Comparison with other approaches
[Table 1: Patient recognition rates of our approach compared with previous methods; the ablation row "Ours w/o SA" removes the soft-attention network.]
To evaluate the performance of our approach on histopathology image classification, we compare the proposed deep learning framework with state-of-the-art approaches. The results in Table 1 show that our approach outperforms all previous approaches. Notably, it achieves a much higher accuracy rate than most CNN approaches [13, 14, 15]. This is achieved by the well-designed attention mechanisms that select useful regions for the decision network (Figure 4): the hard-attention mechanism finds the regions most related to the abnormal part, and the soft-attention mechanism highlights the abnormal features. Besides its superior performance, our approach avoids resizing the raw image, which might lead to information loss, and enables the network to process small image patches to save computational cost.
We also conducted an ablation study to evaluate the effectiveness of the soft attention by removing the SA-Net and testing the remaining network. The classification accuracy dropped by around 10%. This performance decrease occurs because redundant features are also processed by the network, which may contain noisy features that lead to misclassification. It is therefore essential to apply the soft-attention mechanism to highlight useful features and encourage the network to neglect unnecessary image features.
In this paper, we introduced a novel deep hybrid attention network for breast cancer histopathology image classification. The hard-attention mechanism automatically finds useful regions in the raw image, so the raw image does not have to be resized and information loss is prevented. The built-in recurrent network makes decisions both to classify the image and to predict the region to attend to at the next time step. We evaluated our approach on a public dataset, achieving around 96% accuracy at four different magnifications while using only 15% of the raw image pixels to classify each input image.
-  American Cancer Society, Cancer facts & figures, The Society, 2008.
-  Fabio A Spanhol, Luiz S Oliveira, Paulo R Cavalin, Caroline Petitjean, and Laurent Heutte, “Deep features for breast cancer histopathological image classification,” in Systems, Man, and Cybernetics (SMC), 2017 IEEE International Conference on. IEEE, 2017, pp. 1868–1873.
-  Zhongyi Han, Benzheng Wei, Yuanjie Zheng, Yilong Yin, Kejian Li, and Shuo Li, “Breast cancer multi-classification from histopathological images with structured deep learning model,” Scientific reports, vol. 7, no. 1, pp. 4172, 2017.
-  Nima Habibzadeh Motlagh, Mahboobeh Jannesary, HamidReza Aboulkheyr, Pegah Khosravi, Olivier Elemento, Mehdi Totonchi, and Iman Hajirasouliha, “Breast cancer histopathological image classification: A deep learning approach,” bioRxiv, p. 242818, 2018.
-  Alexander Rakhlin, Alexey Shvets, Vladimir Iglovikov, and Alexandr A Kalinin, “Deep convolutional neural networks for breast cancer histology image analysis,” in International Conference Image Analysis and Recognition. Springer, 2018, pp. 737–744.
-  Volodymyr Mnih, Nicolas Heess, Alex Graves, et al., “Recurrent models of visual attention,” in Advances in neural information processing systems, 2014, pp. 2204–2212.
-  Sepp Hochreiter and Jürgen Schmidhuber, “Long short-term memory,” Neural computation, vol. 9, no. 8, pp. 1735–1780, 1997.
-  Ronald J Williams, “Simple statistical gradient-following algorithms for connectionist reinforcement learning,” Machine learning, vol. 8, no. 3-4, pp. 229–256, 1992.
-  Fabio A Spanhol, Luiz S Oliveira, Caroline Petitjean, and Laurent Heutte, “A dataset for breast cancer histopathological image classification,” IEEE Transactions on Biomedical Engineering, vol. 63, no. 7, pp. 1455–1462, 2016.
-  Fabio Alexandre Spanhol, Luiz S Oliveira, Caroline Petitjean, and Laurent Heutte, “Breast cancer histopathological image classification using convolutional neural networks,” in Neural Networks (IJCNN), 2016 International Joint Conference on. IEEE, 2016, pp. 2560–2567.
-  Vibha Gupta and Arnav Bhavsar, “Breast cancer histopathological image classification: is magnification important?,” 2017.
-  Vibha Gupta and Arnav Bhavsar, “Sequential modeling of deep features for breast cancer histopathological image classification,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2018, pp. 2254–2261.
-  Yang Song, Ju Jia Zou, Hang Chang, and Weidong Cai, “Adapting fisher vectors for histopathology image classification,” in Biomedical Imaging (ISBI 2017), 2017 IEEE 14th International Symposium on. IEEE, 2017, pp. 600–603.
-  Jiajun Wu, Yinan Yu, Chang Huang, and Kai Yu, “Deep multiple instance learning for image classification and auto-annotation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 3460–3469.
-  Kausik Das, Sailesh Conjeti, Abhijit Guha Roy, Jyotirmoy Chatterjee, and Debdoot Sheet, “Multiple instance learning of deep convolutional neural networks for breast histopathology whole slide classification,” in Biomedical Imaging (ISBI 2018), 2018 IEEE 15th International Symposium on. IEEE, 2018, pp. 578–581.