Learning what to look in chest X-rays with a recurrent visual attention model

01/23/2017 ∙ by Petros-Pavlos Ypsilantis, et al. ∙ King's College London

X-rays are commonly performed imaging tests that use small amounts of radiation to produce pictures of the organs, tissues, and bones of the body. X-rays of the chest are used to detect abnormalities or diseases of the airways, blood vessels, bones, heart, and lungs. In this work we present a stochastic attention-based model that is capable of learning what regions within a chest X-ray scan should be visually explored in order to conclude that the scan contains a specific radiological abnormality. The proposed model is a recurrent neural network (RNN) that learns to sequentially sample the entire X-ray and focus only on informative areas that are likely to contain the relevant information. We report on experiments carried out with more than 100,000 X-rays containing enlarged hearts or medical devices. The model has been trained using reinforcement learning methods to learn task-specific policies.


1 Introduction

Chest X-rays (CXR) are the most commonly performed diagnostic exams for chest-related diseases. They use a very small dose of ionizing radiation to produce pictures of the inside of the chest. CXR scans help radiologists diagnose or monitor treatment for conditions such as pneumonia, heart failure, emphysema, lung cancer, positioning of medical devices, as well as fluid and air collection around the lungs. An expert radiologist is typically able to detect radiological abnormalities by looking in the 'right places' and making quick comparisons to normal standards. For example, for the detection of an enlarged heart, or cardiomegaly, the size of the heart is assessed in relation to the total thoracic width. Given that chest X-rays are routinely used to detect several abnormalities or diseases, a careful interpretation of a scan requires expertise and time resources that are not always available, especially since large numbers of CXR exams need to be reported daily. This leads to diagnostic errors that, for some pathologies, have been estimated to be substantial Tudor et al. (1997).

Our ultimate objective is to develop a fully-automated system that learns to identify radiological abnormalities using only large volumes of labelled historical exams. We are motivated by recent work on attention-based models, which have been used for digit classification Mnih et al. (2014), sequential prediction of street view house numbers Ba et al. (2015) and a fine-grained categorization task Sermanet et al. (2015). However, we are not aware of applications of such attention models to the challenging task of chest X-ray interpretation. Here we report on the initial performance of a recurrent attention model (RAM), similar to the model originally presented in Mnih et al. (2014), and trained end-to-end on a very large number of historical X-ray exams.

2 Dataset

For this study we collected and prepared a dataset consisting of X-ray plain films of the chest along with their corresponding radiological reports. All the historical exams were extracted from the historical archives of Guy's and St Thomas' Hospital in London (UK), and covered more than a decade. Each scan was labelled according to the clinical findings that were originally reported by the consultant radiologist and recorded in an electronic clinical report. The labelling task was automated using a natural language processing (NLP) system that implements a combination of machine learning and rule-based algorithms for clinical entity recognition, negation detection and entity classification. An early version of the system used a bidirectional long short-term memory (LSTM) model for modelling the radiological language and detecting clinical findings and their negations Cornegruta et al. (2016).

For the purpose of this study, we only used scans labelled as normal (i.e. those with no reported abnormalities), those reported as having an enlarged heart (i.e. a large cardiac silhouette), and those containing a medical device (e.g. a pacemaker). We were interested in the detection of enlarged hearts and medical devices, and a separate model was trained for each task and tested on randomly selected held-out exams. All the remaining images were used for both training and validation. In all our experiments we scaled the images down to a fixed resolution.

3 Recurrent attention model (RAM)

The RAM model implemented here is similar to the one originally proposed in Mnih et al. (2014). Mimicking the human visual attention mechanism, this model learns to focus on and process only a certain region of an image that is relevant to the classification task. In this section we provide a brief overview of the model and describe how our implementation differs from the original architecture. We refer the reader to Mnih et al. (2014) for further details on the training algorithm.
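Since the location policy is not differentiable through the sampling step, it is trained with a score-function (REINFORCE-style) gradient. The following is a minimal sketch of that idea for a Gaussian location policy; the function name, the fixed sigma, and the correct/incorrect reward convention are illustrative assumptions, not the paper's exact training procedure.

```python
import numpy as np

def reinforce_location_grad(mu, sampled_loc, reward, baseline, sigma=0.1):
    """Score-function gradient for a Gaussian location policy (illustrative).

    For l ~ N(mu, sigma^2 I), d log p(l) / d mu = (l - mu) / sigma^2.
    The gradient is scaled by the baseline-subtracted reward, e.g.
    reward = 1 if the final classification was correct, 0 otherwise.
    """
    return (reward - baseline) * (sampled_loc - mu) / sigma**2

mu = np.array([0.10, -0.20])    # locator output: mean of the policy
loc = np.array([0.15, -0.25])   # location actually sampled for the glimpse
grad = reinforce_location_grad(mu, loc, reward=1.0, baseline=0.4)
```

With a baseline close to the expected reward, the update pushes the mean towards locations that led to a correct classification and away from those that did not.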

Glimpse Layer: At each time step t, the model does not have full access to the input image but instead receives a partial observation, or "glimpse", denoted g_t. The glimpse consists of two image patches of different size centred at the same location l_t, each one capturing a different amount of context around l_t. Both patches are matched in size and passed as input to an encoder, as illustrated in Figure 1.
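The two-scale glimpse extraction can be sketched as follows; patch sizes (16 and 32 pixels) and the 2x2 average-pooling used to match them are assumptions for illustration, not the paper's exact values.

```python
import numpy as np

def extract_patch(image, center, size):
    """Crop a size x size patch centred at `center` (row, col), zero-padding at borders."""
    half = size // 2
    padded = np.pad(image, half, mode="constant")
    r, c = center[0] + half, center[1] + half
    return padded[r - half:r + half, c - half:c + half]

def glimpse(image, center, small=16, large=32):
    """Two patches at different scales; the larger is down-scaled to match the smaller."""
    p1 = extract_patch(image, center, small)
    p2 = extract_patch(image, center, large)
    # 2x2 average-pool the large patch down to small x small
    p2 = p2.reshape(small, 2, small, 2).mean(axis=(1, 3))
    return np.stack([p1, p2])   # shape: (2, small, small)

img = np.random.rand(128, 128)
g = glimpse(img, (64, 64))
```

Both patches share the same centre, so the first channel gives fine detail and the second gives coarser surrounding context.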

Encoder: The encoder implemented here differs from the one used in Mnih et al. (2014). In our application we have a complex visual environment featuring high variability in both luminance and object complexity. This is due to the large variability in patients' anatomy as well as in image acquisition, as the X-ray scans were acquired using many different X-ray devices. The goal of the encoder is to compress the information of the glimpse by extracting a robust representation. To achieve this, each image patch of the glimpse is passed through a stack of two convolutional autoencoders with max-pooling (Masci et al., 2011). Each convolutional autoencoder in the stack is pre-trained separately from the RAM model. During training, at each time step t the glimpse representation is concatenated with the location representation and passed as input to a fully connected (FC) layer. The output of the FC layer is denoted o_t and is passed as input to the core RAM model, as seen in Figure 1.
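The concatenation step can be sketched as below. The pre-trained convolutional autoencoder is abstracted away as a stand-in feature extractor, and all layer dimensions here are hypothetical choices, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def fc(x, W, b):
    """Fully connected layer with ReLU activation."""
    return np.maximum(0.0, W @ x + b)

def encode_patches(glimpse_patches):
    """Stand-in for the pre-trained conv-autoencoder features (here: a flatten)."""
    return glimpse_patches.reshape(-1)

# Hypothetical dimensions: a (2, 16, 16) glimpse -> 512-dim code,
# a 2-d location -> 128-dim embedding, concatenated into a 256-dim o_t.
glimpse_feat = encode_patches(rng.random((2, 16, 16)))
loc_feat = fc(np.array([0.1, -0.2]),
              rng.standard_normal((128, 2)), np.zeros(128))
W = rng.standard_normal((256, glimpse_feat.size + loc_feat.size)) * 0.01
o_t = fc(np.concatenate([glimpse_feat, loc_feat]), W, np.zeros(256))
```

The key point is that o_t fuses *what* was seen (glimpse features) with *where* it was seen (location embedding) before entering the recurrent core.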

Core RAM: At each time step t, the output vector o_t and the previous hidden representation h_{t-1} are passed as input to the LSTM layer. The locator receives the hidden representation h_t from the LSTM unit and passes it on to a FC layer, resulting in a mean vector mu_t (see Figure 1). The locator then decides the position of the next glimpse by sampling l_{t+1} ~ N(mu_t, Sigma), i.e. from a normal distribution with mean mu_t and diagonal covariance matrix Sigma. The location l_t represents the x-y coordinates of the glimpse at time step t. At the very first step, we initiate the algorithm at the centre of the image, and always use a fixed variance.
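The locator's sampling step can be sketched as follows; the fixed sigma value and the clipping of sampled coordinates to a normalized [-1, 1] image frame are illustrative assumptions.

```python
import numpy as np

def sample_next_location(mu, sigma=0.1, rng=None):
    """Sample l_{t+1} ~ N(mu, sigma^2 I), clipped to the normalized image frame [-1, 1]."""
    if rng is None:
        rng = np.random.default_rng()
    loc = rng.normal(mu, sigma)
    return np.clip(loc, -1.0, 1.0)

rng = np.random.default_rng(0)
# First step: start at the image centre, then sample where to look next.
loc = sample_next_location(np.zeros(2), sigma=0.1, rng=rng)
```

Sampling (rather than taking mu_t directly) gives the exploration needed for the reinforcement learning signal to discover informative regions.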

Figure 1: RAM. At each time step the Core RAM samples a location of where to attend next. The location is used to extract the glimpse (red frames of different size). The image patches are down-scaled and passed through the encoder. The representation produced by the encoder and the previous hidden state of the Core RAM are passed as inputs to the LSTM at the current step. The locator receives as input the hidden state of the current LSTM and then samples the location coordinates for the glimpse in the next step. This process continues recursively until the final step, where the output of the LSTM is used to classify the input image.

4 Results

Table 1 summarizes the classification performance of the RAM model alongside the performance of state-of-the-art convolutional neural networks trained and tested on the same dataset. RAM reaches competitive accuracy for the detection of medical devices and enlarged hearts. For the same tasks, Inception-v3 Szegedy et al. (2015) achieves the highest accuracy, but uses many times more parameters compared to the RAM model.

Model / Enlarged Heart / Medical Devices / Number of Parameters
VGG Simonyan and Zisserman (2015), million
ResNet-18 He et al. (2015), million
Inception-v3 Szegedy et al. (2015), million
AlexNet Krizhevsky et al. (2012), million
RAM, million
baseline ResNet-18, million
baseline VGG, million
baseline AlexNet, million

Table 1: Accuracy (%) on the classification between images with normal radiological appearance and images with an enlarged heart or a medical device.

In Figure 2 we illustrate the performance on the validation set, and highlight the locations attended by the model when trying to detect medical devices. Here it can be noted how initially the model explores randomly selected portions of the image, and its classification performance remains low. After a certain number of epochs, the model discovers that the most informative parts of a chest X-ray are those containing the lungs and spine, and selectively prefers those regions in subsequent passes. This is a reasonable policy, since most of the medical devices to be found in chest X-rays, such as pacemakers and tubes, are located in those areas.

Figure 2: Top: Accuracy (%) on the validation set during the training of the model. Bottom: Image locations that the model attends to during validation. The grids correspond to fixed-size image areas in pixels. We split the total number of epochs into chunks; for each chunk the model is validated several times and the locations from each validation are summarized in the corresponding image. Image regions with low transparency correspond to locations that the model visits with high frequency.

Figure 3 (A) shows the locations most attended by the RAM model when looking for medical devices. From this figure it is clear that the learnt policy explores only the relevant areas where these devices can generally be found. Two examples of paths followed by the algorithm after learning the policy are illustrated in Figures 3 (B) and (C). In these examples, starting from the centre of the image, the algorithm moves closer to a region that is likely to contain a pacemaker, which is then correctly identified. The circle and triangle points (in red) indicate the coordinates of the first and last glimpse in the learnt policy, respectively.

Analogously, Figure 4 (A) highlights frequently explored locations when trying to discriminate between normal and enlarged hearts. Here it can be observed how the model learns to focus on the cardiac area. Two samples of the learned policy are illustrated in Figures 4 (B) and (C). The trajectories followed here demonstrate how the policy has learned that exploring the extremities of the heart is required in order to conclude whether the heart is enlarged or not.

Figure 3: (A) Image locations attended by the RAM model for the detection of medical devices. (B) and (C) are two different samples of the learnt policy on test images.
Figure 4: (A) Image locations attended by the RAM model for the detection of enlarged hearts. (B) and (C) are two different samples of the learnt policy on test images.

5 Conclusion and Perspectives

In this work we have investigated whether a visual attention mechanism, the RAM model, is capable of learning how to interpret chest X-ray scans. Our experiments show that the model not only has the potential to achieve classification performance comparable to state-of-the-art convolutional architectures using far fewer parameters, but also learns to identify specific portions of the images that are likely to contain the anatomical information required to reach correct conclusions. The relevant areas are explored according to policies that seem appropriate for each task. Current work is being directed towards enabling the model to learn each policy as quickly and precisely as possible using full-scale images and for a much larger number of clinically important radiological classes.

References