Chest X-rays (CXR) are the most commonly used diagnostic exams for chest-related diseases. They use a very small dose of ionizing radiation to produce pictures of the inside of the chest. CXR scans help radiologists diagnose or monitor treatment for conditions such as pneumonia, heart failure, emphysema, lung cancer, positioning of medical devices, as well as fluid and air collection around the lungs. An expert radiologist is typically able to detect radiological abnormalities by looking in the ’right places’ and making quick comparisons to normal standards. For example, to detect an enlarged heart, or cardiomegaly, the size of the heart is assessed in relation to the total thoracic width. Given that chest X-rays are routinely used to detect several abnormalities or diseases, a careful interpretation of a scan requires expertise and time resources that are not always available, especially since large numbers of CXR exams need to be reported daily. This leads to diagnostic errors whose rate, for some pathologies, has been estimated in Tudor et al. (1997).
Our ultimate objective is to develop a fully-automated system that learns to identify radiological abnormalities using only large volumes of labelled historical exams. We are motivated by recent work on attention-based models, which have been used for digit classification Mnih et al. (2014), sequential prediction of street view house numbers Ba et al. (2015) and a fine-grained categorization task Sermanet et al. (2015). However, we are not aware of applications of such attention models to the challenging task of chest X-ray interpretation. Here we report on the initial performance of a recurrent attention model (RAM), similar to the model originally presented in Mnih et al. (2014), and trained end-to-end on a very large number of historical X-ray exams.
For this study we collected and prepared a dataset consisting of X-ray plain films of the chest along with their corresponding radiological reports. All the historical exams were extracted from the historical archives of Guy’s and St Thomas’ Hospital in London (UK), and covered more than a decade. Each scan was labelled according to the clinical findings that were originally reported by the consultant radiologist and recorded in an electronic clinical report. The labelling task was automated using a natural language processing (NLP) system that implements a combination of machine learning and rule-based algorithms for clinical entity recognition, negation detection and entity classification. An early version of the system used a bidirectional long short-term memory (LSTM) model for modelling the radiological language and detecting clinical findings and their negations Cornegruta et al. (2016).
For the purpose of this study, we only used scans labelled as normal (i.e. those with no reported abnormalities), those reported as having an enlarged heart (i.e. a large cardiac silhouette), and those containing a medical device (e.g. a pacemaker). The number of scans within these three categories was , and , respectively. We were interested in the detection of enlarged hearts and medical devices; a separate model was trained for each task and tested on and randomly selected exams, respectively. All the remaining images were used for both training and validation. In all our experiments we scaled the images down to pixels.
3 Recurrent attention model (RAM)
The RAM model implemented here is similar to the one originally proposed in Mnih et al. (2014). Mimicking the human visual attention mechanism, this model learns to focus on and process only a certain region of an image that is relevant to the classification task. In this section we provide a brief overview of the model and describe how our implementation differs from the original architecture. We refer the reader to Mnih et al. (2014) for further details on the training algorithm.
Glimpse Layer: At each time step $t$, the model does not have full access to the input image but instead receives a partial observation, or “glimpse”, denoted by $\rho_t$. The glimpse consists of two image patches of different size centred at the same location $l_t$, each one capturing a different context around $l_t$. Both patches are matched in size and passed as input to an encoder, as illustrated in Figure 1.
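As an illustration, the two-patch glimpse can be sketched as follows. This is a minimal NumPy sketch: the patch size, the zero-padding at the borders and the 2x2 average-pooling used to match the two patches in size are our assumptions for illustration, not details taken from the model description.

```python
import numpy as np

def extract_glimpse(image, loc, size=16):
    """Extract a two-scale glimpse centred at `loc` (row, col).

    Returns two `size` x `size` patches: one at native resolution and one
    covering twice the spatial context, downsampled by 2x2 average pooling
    so that both patches match in size.
    """
    padded = np.pad(image, size, mode="constant")          # handle border locations
    r, c = loc[0] + size, loc[1] + size                    # shift indices for padding
    half, full = size // 2, size
    fine = padded[r - half:r + half, c - half:c + half]    # size x size crop
    coarse = padded[r - full:r + full, c - full:c + full]  # 2*size x 2*size crop
    # 2x2 average pooling brings the coarse crop down to size x size
    coarse = coarse.reshape(size, 2, size, 2).mean(axis=(1, 3))
    return fine, coarse
```

The coarse patch trades resolution for context, which is what lets the model reason about where to look next while only processing small inputs.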
Encoder: The encoder implemented here differs from the one used in Mnih et al. (2014). In our application we have a complex visual environment featuring high variability in both luminance and object complexity. This is due to the large variability in patients’ anatomy as well as in image acquisition, as the X-ray scans were acquired using more than different X-ray devices. The goal of the encoder is to compress the information of the glimpse by extracting a robust representation. To achieve this, each image of the glimpse is passed through a stack of two convolutional autoencoders with max-pooling (Masci et al., 2011). Each convolutional autoencoder in the stack is pre-trained separately from the RAM model. During training, at each time step $t$ the glimpse representation is concatenated with the location representation and passed as input to a fully connected (FC) layer. The output of the FC layer is denoted by $g_t$ and is passed as input to the core RAM model, as seen in Figure 1.
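A single convolution + max-pooling stage of such an encoder can be sketched in NumPy as follows. The kernel bank shape and the ReLU non-linearity are illustrative assumptions; the pre-trained autoencoder weights used in the actual model are not reproduced here.

```python
import numpy as np

def conv2d(x, kernels):
    """Valid 2-D convolution of a single-channel image with a bank of kernels."""
    kh, kw = kernels.shape[1:]
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((kernels.shape[0], oh, ow))
    for k, w in enumerate(kernels):
        for i in range(oh):
            for j in range(ow):
                out[k, i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

def max_pool(x, s=2):
    """Non-overlapping s x s max-pooling applied to each feature map."""
    c, h, w = x.shape
    h, w = h - h % s, w - w % s                 # trim to a multiple of the stride
    x = x[:, :h, :w]
    return x.reshape(c, h // s, s, w // s, s).max(axis=(2, 4))

def encode(patch, kernels):
    """One conv + max-pool encoding stage (ReLU activation assumed)."""
    return max_pool(np.maximum(conv2d(patch, kernels), 0.0))
```

Stacking two such stages, with each stage's weights pre-trained as an autoencoder, mirrors the compression role the encoder plays in the model above.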
Core RAM: At each time step $t$, the output vector $g_t$ and the previous hidden representation $h_{t-1}$ are passed as input to the LSTM layer. The locator receives the hidden representation $h_t$ from the LSTM unit and passes it on to a FC layer, resulting in a vector $\mu_t$ (see Figure 1). The locator then decides the position of the next glimpse by sampling $l_{t+1} \sim \mathcal{N}(\mu_t, \Sigma)$, i.e. from a normal distribution with mean $\mu_t$ and diagonal covariance matrix $\Sigma$. The location $l_t$ represents the x-y coordinates of the glimpse at time step $t$. At the very first step, we initiate the algorithm at the centre of the image, and always use a fixed variance.
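The locator's sampling step can be sketched as follows. This is a minimal NumPy sketch; the fixed standard deviation and the normalised [-1, 1] coordinate convention are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_next_location(mu, sigma=0.1):
    """Sample the next glimpse location from N(mu, sigma^2 I).

    `mu` is the 2-D mean emitted by the locator's FC layer; the variance is
    fixed, matching the fixed-variance choice described above. Coordinates
    are clipped to a normalised [-1, 1] range (an assumed convention).
    """
    loc = rng.normal(loc=mu, scale=sigma)
    return np.clip(loc, -1.0, 1.0)

# the very first glimpse starts at the centre of the image
first_loc = np.zeros(2)
```

Sampling (rather than taking $\mu_t$ directly) is what makes the location policy stochastic, so it can be trained with the reinforcement-learning procedure of the original RAM paper.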
Table 1 summarises the classification performance of the RAM model alongside the performance of state-of-the-art convolutional neural networks trained and tested on the same dataset. RAM, using million parameters, reaches and accuracy for the detection of medical devices and enlarged hearts, respectively. For the same tasks, Inception-v3 Szegedy et al. (2015) achieves the highest accuracy with and , but uses times more parameters than the RAM model.
| Model | Heart Enlarged | Medical Devices | Number of Parameters |
| --- | --- | --- | --- |
| VGG Simonyan and Zisserman (2015) | | | million |
| ResNet-18 He et al. (2015) | | | million |
| Inception-v3 Szegedy et al. (2015) | | | million |
| AlexNet Krizhevsky et al. (2012) | | | |
In Figure 3 we illustrate the performance on the validation set, and highlight the locations attended by the model when trying to detect medical devices. It can be noted how, initially, the model explores randomly selected portions of the image, and its classification performance remains low. After a certain number of epochs, the model discovers that the most informative parts of a chest X-ray are those containing the lungs and spine, and selectively prefers those regions in subsequent paths. This is a reasonable policy, since most of the medical devices to be found in chest X-rays, such as pacemakers and tubes, are located in those areas.
Figure 3 (A) shows the locations most attended by the RAM model when looking for medical devices. The figure shows that the learnt policy explores only the relevant areas where these devices can generally be found. Two examples of paths followed by the algorithm after learning the policy are illustrated in Figures 3 (B) and (C). In these examples, starting from the centre of the image, the algorithm moves closer to a region that is likely to contain a pacemaker, which is then correctly identified. The circle and triangle points (in red) indicate the coordinates of the first and last glimpse in the learnt policy, respectively.
Analogously, Figure 4 (A) highlights frequently explored locations when trying to discriminate between normal and enlarged hearts. Here it can be observed how the model learns to focus on the cardiac area. Two samples of the learned policy are illustrated in Figures 4 (B) and (C). The trajectories followed here demonstrate how the policy has learned that exploring the extremities of the heart is required in order to conclude whether the heart is enlarged or not.
5 Conclusion and Perspectives
In this work we have investigated whether a visual attention mechanism, the RAM model, is capable of learning how to interpret chest X-ray scans. Our experiments show that the model not only has the potential to achieve classification performance comparable to state-of-the-art convolutional architectures using far fewer parameters, but also learns to identify specific portions of the images that are likely to contain the anatomical information required to reach correct conclusions. The relevant areas are explored according to policies that seem appropriate for each task. Current work is being directed towards enabling the model to learn each policy as quickly and precisely as possible, using full-scale images and a much larger number of clinically important radiological classes.
- Ba et al. (2015) Ba, J., Mnih, V., Kavukcuoglu, K.: Multiple object recognition with visual attention. In: ICLR (2015)
- Cornegruta et al. (2016) Cornegruta, S., Bakewell, R., Withey, S., Montana, G.: Modelling radiological language with bidirectional long short-term memory networks. In: EMNLP (2016)
- He et al. (2015) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. arXiv:1512.03385v1 (2015)
- Krizhevsky et al. (2012) Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. In: NIPS. pp. 1106–1114 (2012)
- Masci et al. (2011) Masci, J., Meier, U., Ciresan, D., Schmidhuber, J.: Stacked convolutional auto-encoders for hierarchical feature extraction. In: ICANN (2011)
- Mnih et al. (2014) Mnih, V., Heess, N., Graves, A., Kavukcuoglu, K.: Recurrent models of visual attention. In: NIPS (2014)
- Sermanet et al. (2015) Sermanet, P., Frome, A., Real, E.: Attention for fine-grained categorization. arXiv:1412.7054v3 (2015)
- Simonyan and Zisserman (2015) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large scale image recognition. In: ICLR (2015)
- Szegedy et al. (2015) Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., Wojna, Z.: Rethinking the inception architecture for computer vision. arXiv:1512.00567v3 (2015)
- Tudor et al. (1997) Tudor, G., Finlay, D., Taub, N.: An assessment of inter-observer agreement and accuracy when reporting plain radiographs. Clinical Radiology 52(3), 235–238 (1997)