RTEX: A novel methodology for Ranking, Tagging, and Explanatory diagnostic captioning of radiography exams

06/11/2020 ∙ by Vasiliki Kougia, et al. ∙ Karolinska Institutet

This paper introduces RTEx, a novel methodology for a) ranking radiography exams based on their probability to contain an abnormality, b) generating abnormality tags for abnormal exams, and c) providing a diagnostic explanation in natural language for each abnormal exam. The task of ranking radiography exams is an important first step for practitioners who want to identify and prioritize those radiography exams that are more likely to contain abnormalities, for example, to avoid mistakes due to tiredness or to manage heavy workload (e.g., during a pandemic). We used two publicly available datasets to assess our methodology and demonstrate that for the task of ranking it outperforms its competitors in terms of NDCG@k. For each abnormal radiography exam RTEx generates a set of abnormality tags alongside an explanatory diagnostic text to explain the tags and guide the medical expert. Our tagging component outperforms two strong competitor methods in terms of F1. Moreover, the diagnostic captioning component of RTEx, which exploits the already extracted tags to constrain the captioning process, outperforms all competitors with respect to clinical precision and recall.




1. Introduction

Figure 1. A PA/lateral chest radiography exam along with the corresponding human-authored diagnostic text from IU X-Ray, and the abnormality tags. The ‘XXXX’ is due to the de-identification process.

Medical imaging is the method of forming visual representations of the anatomy or a function of the human body using a variety of imaging modalities (e.g., CR, CT, MRI) (Suetens, 2009; Aerts et al., 2014). In this paper, we particularly focus on chest radiography exams, which contain medical images produced by X-Rays. It is estimated that over three billion radiography exams are performed annually worldwide (Krupinski, 2010), making the daily need for processing and interpreting the produced radiographs paramount. An example of a radiography exam is provided in Fig. 1; it consists of two chest radiographs together with the diagnostic text, which describes the medical observations on the radiographs, and a list of abnormality tags indicating the most critical observations in the exam. In the diagnostic text, we observe that two findings are normal (i.e., cardiac contours and lungs), while three are abnormal (thoracic spondylosis, lower cervical arthritis, and basilar atelectasis). These abnormal findings are also consistent with the abnormality tags.

Our main objective in this paper is to introduce a novel methodology for automated and explainable Diagnostic Tagging of a collection of radiography exams, each comprising several radiographs, that can (1) accurately rank exams by the likelihood that their radiographs include abnormalities, (2) automatically provide tags corresponding to the medical findings of the abnormal exams, and (3) produce a diagnostic text describing the abnormal findings by exploiting both the radiographs and the generated tags. Despite the importance of this problem, existing solutions are hindered by three major challenges.

Challenge I - Screening and prioritization. The daily routine of diagnostic radiologists includes the examination of radiographs, i.e., medical images produced by X-Rays, for abnormalities or other findings, and an explanation of these findings in the form of a medical report per radiography exam (Monshi et al., 2020). This is a rather challenging and time-consuming task, imposing a high burden on both radiologists and patients. For example, approximately 230,000 patients in England wait for over a month for their imaging test results (of Radiologists, 2016), while 71% of the clinics in the U.K. report a lack of clinical radiologists (of Radiologists, 2019). While several methods have emerged that automatically detect abnormalities in radiographs (Pelka et al., 2019) or generate a diagnostic text (Jing et al., 2018; Li et al., 2019; Liu et al., 2019a), little emphasis has been placed on case prioritization and screening. There is hence a need for a new diagnostic approach that can automatically screen radiography exams for abnormalities and prioritize those with a higher probability of containing an abnormality.

Challenge II - Clinically correct diagnostic captioning. Methods that can automatically generate (or retrieve) diagnostic text can be used to assist inexperienced physicians, and they can also yield a draft to speed up the authoring process (Kougia et al., 2019a; Liu et al., 2019a; Li et al., 2019). However, to the best of our knowledge, none of the diagnostic captioning models suggested in the literature is optimized in terms of clinical correctness, mainly because they are trained on both normal and abnormal radiography exams. This makes them less effective than models trained only on abnormal exams, as we also demonstrate in Sec. 4.3.3. There is hence a need for a diagnostic captioning approach that is optimized for captioning abnormal radiographs.

Challenge III - Explainability and clinical relevance. Explainability (explicitly required by the EU's General Data Protection Regulation, GDPR, Art. 13§2.f: gdpr.eu/article-13-personal-data-collected/) and clinical relevance are often provided in the form of visual highlights (e.g., heatmaps) alongside diagnostic tags (Karim et al., 2020). Nonetheless, system-generated visual explanations only highlight image parts relevant to the diagnostic tags, without any textual explanation. Diagnostic Captioning methods, on the other hand, can provide both a diagnosis and an explanation for the problem at hand, since they produce a whole text instead of a tag or a label. However, the produced reports are typically of low clinical correctness, as they are not particularly optimized in terms of clinical relevance (Bluemke et al., 2020). These deficiencies could be addressed by a diagnostic tagging approach that first produces tags for abnormal radiographs, and then employs the generated tags to provide clinically relevant explanations in the form of diagnostic text.

Contributions. This paper addresses the aforementioned challenges with the main contributions summarized as follows:

  • Novelty. We introduce RTEx, a novel methodology for explainable diagnostic tagging of radiography exams, that addresses the aforementioned challenges with the help of three key functionalities: (1) Ranking of abnormal radiography exams: a ranking approach is employed for prioritizing exams likelier to include an abnormality from a large collection of normal and abnormal radiography exams; (2) Diagnostic tagging: a tag generator is employed for generating a set of abnormality tags for the highly ranked radiography exams, trained on an independent set of abnormal radiographs; (3) Diagnostic captioning: the extracted tags are finally used by RTEx to generate (or retrieve) a diagnostic text, in natural language, that provides a clinically relevant explanation of the detected abnormal findings.

  • Applicability and efficiency. We provide an empirical evaluation of the proposed methodology, using two publicly available datasets of radiography exams (Demner-Fushman et al., 2015; Johnson et al., 2019). Our experimental benchmarks assess the performance of RTEx on the ability to (a) rank abnormal radiography exams higher than normal ones, (b) produce the correct medical abnormality tags for abnormal radiography exams, and (c) explain the reasoning behind the selection of the detected tags in the form of diagnostic text. Moreover, a runtime experiment demonstrates the time efficiency of RTEx, showing that it requires only 19.78 seconds to rank 500 radiography exams, and 19.43 seconds for tagging and diagnostic captioning of the top-100 ranked exams.

  • Effectiveness and clinical accuracy. Our experiments demonstrate the effectiveness of RTEx against state-of-the-art competitors for the tasks of ranking and tagging. Our findings additionally suggest that diagnostic captioning using the tags produced by RTEx can provide more clinically accurate diagnostic text compared to not using the generated tags.

The remainder of this paper is organized as follows: in Sec. 2 we outline the related work, while in Sec. 3 we describe RTEx. In Sec. 4 we introduce the datasets used for our empirical evaluation, we provide the experimental setup and report our results. Finally, Sec. 5 concludes the paper and provides directions for future work.

2. Related work

In this section, we outline the main body of related work on medical image ranking, medical image tagging, and diagnostic captioning. To the best of our knowledge, while many earlier works have targeted these problems individually, there is yet no comprehensive methodology for combining these three tasks with focus on radiography exams that contain abnormalities.

Automated screening of radiography exams is not a novel idea (Taguchi, 2010; Jaeger et al., 2013; Oliveira et al., 2008). When the number of exams is overwhelming, as for example during a pandemic, employing an automated system to exclude normal cases can lead to faster treatment of abnormal cases. Recently, pre-trained deep learning models, such as DenseNet-121 (Huang et al., 2017) and VGG-19 (Simonyan and Zisserman, 2014), were found to discriminate well between normal cases and ones with pneumonia and COVID-19 (90% Precision and 83% Recall) (Karim et al., 2020). The authors noted that their approach aims to ease the work of radiologists, and a similar assistance scenario is suggested in this work. In our solution, we also employ the DenseNet-121 CNN for multi-label classification, which is considered the state of the art (Baltruschat et al., 2019).

Researchers have focused on labeling radiography exams that are associated with a single abnormality finding, e.g., lymph nodes (Roth et al., 2015) or end-diastole/systole frames in cine-MRI of the heart (Kong et al., 2016). This assumes that the problem is known a priori (e.g., an abnormality related to a lymph node), which is not always the case, for example when the radiographs of a new patient arrive at the clinic for the first time. Another line of research, that of exploring multiple abnormality types, has focused on associating medical tags (a.k.a. concepts) with radiographs, which is related to content-based image retrieval (CBIR).

Liu et al. (2016) trained a custom CNN to classify radiographs into 193 classes and obtained a descriptive feature vector to be further processed and used for image retrieval. Their approach was found to be more accurate than many submissions to an earlier CLEF medical image annotation challenge, but it was also inferior to the state of the art. A similar medical image annotation challenge still exists today (ICLEFcaption), with tens of submissions each year (Pelka et al., 2019). Participating systems were asked to tag medical images extracted from open-access biomedical journal articles of PubMed Central (https://www.ncbi.nlm.nih.gov/pmc/), where the tags were automatically extracted from each figure caption using QuickUMLS (Soldaini and Goharian, 2016). Systems using engineered visual features (Scale-Invariant Feature Transform) to encode the images were not very highly ranked (26th/49), while systems using CNNs to encode the images were better placed. The 4th best system was a ResNet-101 CNN followed by an attentional RNN multi-label image classifier. The 3rd best system was a DenseNet-121 CNN encoder followed by a k-NN image retrieval system, while the 1st place was awarded to a DenseNet-121 CNN followed by a Feed-Forward Neural Network classifier (the 2nd place went to an ensemble of the two best-performing systems). This work builds on top of the two best-performing systems.

Diagnostic Captioning has not yet been investigated in the literature as an explainability step of diagnostic tagging. While Gale et al. (2018) suggested the use of captioning as an explanation step, they manually assembled sentence templates for systems to learn to fill. A dataset comprising medical images and texts was introduced for a challenge (Eickhoff et al., 2017; de Herrera et al., 2018), but it was very noisy (images were figures extracted from scientific articles and the gold reports were their captions) (Kougia et al., 2019a). Diagnostic Captioning methods are usually Encoder-Decoders (Hossain et al., 2019; Bai and An, 2018; Liu et al., 2019b), which often originate from Generic Image Captioning. Although different variations have been suggested in the literature (Zhang et al., 2017; Li et al., 2018; Liu et al., 2019a; Jing et al., 2018), most of these methods extend the well-known Show & Tell (S&T) model (Vinyals et al., 2015) with hierarchical decoding (Jing et al., 2018), elaborate visual attention mechanisms (Wang et al., 2018), or reinforcement learning (Li et al., 2018). S&T comprises a CNN that encodes the image and uses the visual representation to initialize a decoding LSTM. We employed this model to generate diagnostic text, having extended it to also encode the tags along with the image. Li et al. (2018) employ an Encoder-Decoder approach to either generate or retrieve the diagnostic text from a medical image. Their hybrid approach initially uses a DenseNet (Huang et al., 2017) or a VGG-19 (Simonyan and Zisserman, 2014) CNN to encode the image. The encoded image is used, through an attention mechanism (Lu et al., 2017; Xu et al., 2015), in a stacked RNN that generates sentence embeddings, each of which is used along with the encoded image by another word-decoding RNN to generate the words of the sentence. Each sentence embedding is also provided as input to a Feed-Forward Neural Network (FFNN), which outputs a probability distribution over a number of fixed sentences and the word decoder. If a fixed sentence has the highest probability, then this sentence is retrieved as the next sentence instead of using the word-decoding RNN. For the explanation stage of our methodology, we also experiment with CNN-RNN Encoder-Decoder methods, but mainly to explain the extracted diagnostic tags; our Encoder-Decoders are trained only on abnormal studies, which makes sentence retrieval redundant.

3. The RTEx methodology

We present RTEx, a novel three-stage methodology for ranking and explainable diagnostic tagging of radiography exams; an overview of the whole pipeline is depicted in Fig. 2. First, we provide the problem formulation, presenting the three sub-problems addressed by the respective stages of RTEx.

3.1. Formulation

Let $E = \{e_1, \dots, e_n\}$ be a set of radiography exams, where each exam $e_i$ is a set of radiographs, i.e., $e_i = \{x_{i1}, \dots, x_{im}\}$. In our target application, we have $m = 2$, that is, each $e_i$ is a pair of radiographs (one frontal and one lateral). Our formulation and approach can, however, be generalized to an arbitrary number of radiographs $m$.

Assume an alphabet of abnormality tags $T$. Each radiography exam $e_i$ is assigned a set of labels $T_i \subseteq T$, either listing the abnormalities that are detected in the exam or being empty, indicating that the exam contains no abnormalities.

Based on the above, the first objective of RTEx can be formulated as follows:

Problem 1 (radiography exam ranking). Given a set of radiography exams $E$, a ranking function $f$, and an integer $k$, identify the set $E_k \subseteq E$ of the top-$k$ abnormal exams in $E$ such that $\sum_{e \in E_k} f(e)$ is maximized.

Next, given the retrieved set of the top-$k$ exams, our goal is to produce a set of abnormality tags. This brings us to the second objective of RTEx, which can be formulated as follows:

Problem 2 (abnormal radiography exam diagnostic tagging). Given a set of abnormal radiography exams $E_k$, produce for each exam $e_i \in E_k$ a set of abnormality tags $T_i$, with each tag originating from the alphabet $T$. Each set of tags $T_i$ describes the corresponding exam $e_i$.

In other words, all images contained in a single radiography exam ($e_i$) are described by a common set of abnormality tags ($T_i$).

Eventually, given the set of produced tags, our final goal is to obtain a diagnostic caption explaining the abnormalities shown in the images contained in the radiography exam, and referenced by the extracted tags. More formally, RTEx’s third objective is formulated as follows:

Problem 3 (abnormal radiography exam diagnostic captioning). Given a set of abnormal radiography exams $E_k$ and a set of tags $T_i$ describing the abnormalities in each exam $e_i \in E_k$, provide a set of captions $C_k$, where each caption $c_i \in C_k$ describes radiography exam $e_i$.

3.2. The three stages of RTEx

The three stages of RTEx are outlined in Alg. 1. Next, we provide more details for each stage.

Figure 2. A depiction of our RTEx methodology. First, it ranks the radiography exams based on their probability to include an abnormality (estimated from the radiographs of each exam). The highest ranked exams are tagged with abnormality terms, and an explanatory diagnostic text is automatically provided to assist the expert.
Data: a set of radiography exams $E$ and the number $k$ of exams to retrieve.
Result: a set of abnormality tags and a set of captions for the top-$k$ exams.
1  // define a list to maintain the score of each radiography exam
2  $S \leftarrow [\;]$;
3  // apply the RTEx ranking function
4  for $e_i \in E$ do
5        $S[i] \leftarrow \mathrm{RTEx@R}(e_i)$;
6  // sort the exams with respect to their scores in descending order
7  $E \leftarrow \mathrm{sort}(E, S)$;
8  // keep the top-$k$ abnormal exams
9  $E_k \leftarrow E[1..k]$;
10 for $e_i \in E_k$ do
11       // apply the RTEx tagging function
12       $T_i \leftarrow \mathrm{RTEx@T}(e_i)$;
13       // apply the RTEx captioning function
14       $c_i \leftarrow \mathrm{RTEx@X}(e_i, T_i)$;
Algorithm 1. Outline of the RTEx methodology
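The control flow of Algorithm 1 can be sketched in plain Python. The function names `rank_fn`, `tag_fn`, and `caption_fn` are hypothetical stand-ins for RTEx@R, RTEx@T, and RTEx@X, which are described below; this is a sketch of the pipeline's structure, not of the trained models themselves.

```python
def rtex_pipeline(exams, k, rank_fn, tag_fn, caption_fn):
    """Sketch of Algorithm 1: score all exams, keep the top-k most
    likely abnormal ones, then tag and caption each of them.
    rank_fn/tag_fn/caption_fn stand in for RTEx@R/@T/@X."""
    # score every exam by its probability of being abnormal
    scored = [(rank_fn(exam), exam) for exam in exams]
    # sort in descending order of abnormality score
    scored.sort(key=lambda pair: pair[0], reverse=True)
    # keep the k highest-ranked exams
    top_k = [exam for _, exam in scored[:k]]
    results = []
    for exam in top_k:
        tags = tag_fn(exam)               # diagnostic tagging stage
        caption = caption_fn(exam, tags)  # captioning explains the tags
        results.append((exam, tags, caption))
    return results
```

Note that the captioning stage receives the tags produced by the tagging stage, mirroring how RTEx@X constrains its search with the predicted tags.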

3.2.1. RTEx@R: Ranking

For the first stage of our methodology we implement an architecture that we refer to as RTEx@R, shown in Fig. 3. More concretely, we employ the same visual encoder as Rajpurkar et al. (2017), i.e., the DenseNet-121 CNN, followed by a Feed-Forward Neural Network (FFNN). The inputs of the network are the images of a radiography exam, while the output is a score representing the probability that the exam in question is abnormal. First, both images of the exam are fed to DenseNet-121 (depicted inside the box in the center of the figure) and an embedding for each image is extracted from its last average pooling layer. These embeddings are concatenated to yield a single embedding for the radiography exam. Then, the exam embedding is passed to a FFNN with a sigmoid output to return a score from 0 (normal) to 1 (abnormal).
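The scoring head described above can be sketched as follows. The embeddings would come from DenseNet-121 in the real system; here the weight vector, bias, and tiny embedding sizes are purely illustrative, and the FFNN is reduced to a single linear layer for brevity.

```python
import math

def rtex_r_score(frontal_emb, lateral_emb, weights, bias):
    """Sketch of the RTEx@R head: concatenate the two image
    embeddings into one exam embedding, then apply a linear layer
    followed by a sigmoid to obtain an abnormality probability
    in [0, 1]. Weights and bias are illustrative stand-ins for
    the learned FFNN parameters."""
    exam_emb = frontal_emb + lateral_emb  # list concatenation
    logit = sum(w * x for w, x in zip(weights, exam_emb)) + bias
    return 1.0 / (1.0 + math.exp(-logit))
```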

3.2.2. RTEx@T: Diagnostic tagging

The second stage of our methodology, referred to as RTEx@T, comprises the assignment of a set of tags $T_i$ to a radiography exam $e_i$; it is shown in Fig. 4. It is similar to RTEx@R in that it uses the DenseNet-121 CNN encoder and a FFNN, but it differs in that the FFNN has one output and one sigmoid activation per abnormality tag in the dataset, leading to $|T|$ different output nodes (the rightmost arrows in the figure). In effect, it returns a probability per abnormality tag, and if the probability of an abnormality tag (i.e., its respective node) exceeds a learned threshold, the tag is assigned to the radiography exam.
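The per-tag decision rule amounts to thresholding each sigmoid output independently. A minimal sketch, with illustrative tag names and threshold values (the real thresholds are learned):

```python
def assign_tags(probabilities, thresholds):
    """Sketch of RTEx@T's decision rule: the FFNN emits one sigmoid
    probability per abnormality tag; a tag is assigned whenever its
    probability exceeds its (learned) per-tag threshold. Tags
    without a stored threshold fall back to 0.5 here."""
    return [tag for tag, p in probabilities.items()
            if p > thresholds.get(tag, 0.5)]
```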

Figure 3. The architecture of RTEx@R. The input is a radiography exam and the output is the probability that the exam is abnormal.
Figure 4. The architecture of RTEx@T, which is similar to RTEx@R, but the input is an abnormal radiography exam and the output consists of $|T|$ binary nodes, where $|T|$ is the total number of tags in the dataset. The nodes that yield probabilities higher than a defined threshold indicate the presence of the respective medical abnormalities.

3.2.3. RTEx@X: Diagnostic captioning

For the last stage of our methodology (the rightmost part of Fig. 2), referred to as RTEx@X, we use a method that comprises a DenseNet-121 CNN encoder, calibrated for the task of diagnostic captioning. More specifically, each radiography exam in the database is encoded (offline) by our CNN to an embedding (i.e., the two image embeddings extracted from the last average pooling layer of the encoder, concatenated). Our CNN also encodes any new test exam. Then, the cosine similarity between the test embedding and all the training embeddings in the database is calculated, and the most similar exam is retrieved from the database. Its diagnostic text is then assigned to the test exam.

RTEx@X limits its search to training exams that have the exact same tags as the ones predicted (during the tagging stage) for the test exam. When no exams with the same tags exist, however, the whole database is searched. We note that all the embeddings are first L2-normalized, so that the cosine similarities between a test embedding and all the training embeddings in the database can be computed with a single matrix-vector multiplication. This reduces the search time from minutes to milliseconds, making this method in effect the most efficient compared to its competitors.
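The tag-constrained retrieval step can be sketched as follows. The database format (a list of embedding/tags/caption triples) and the helper names are illustrative; real embeddings would come from DenseNet-121 and the dot products would be batched into one matrix operation.

```python
import math

def l2_normalize(v):
    """Scale a vector to unit L2 norm (cosine then reduces to a dot product)."""
    norm = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / norm for x in v]

def retrieve_caption(test_emb, test_tags, database):
    """Sketch of RTEx@X retrieval. `database` is a list of
    (embedding, tags, caption) training entries. Search is
    restricted to exams with exactly the predicted tags; if none
    exist, the whole database is searched, and the caption of the
    most cosine-similar exam is returned."""
    q = l2_normalize(test_emb)
    candidates = [e for e in database if set(e[1]) == set(test_tags)]
    if not candidates:
        candidates = database  # fall back to the full database
    best = max(candidates,
               key=lambda e: sum(a * b
                                 for a, b in zip(q, l2_normalize(e[0]))))
    return best[2]
```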

4. Empirical evaluation

In this section, we describe the datasets used for our experiments, we provide details on the experimental setup, and present our results.

4.1. Datasets

Datasets that can be used for Diagnostic Captioning comprise medical images and associated diagnostic reports (we limit ourselves to datasets with reports in English). We are aware of four such publicly available datasets, namely IU X-Ray, MIMIC-CXR, PEIR Gross, and ICLEFcaption, but we employ only the former two, which are of high quality (Kougia et al., 2019a). PEIR Gross comprises medical images and photographs, mainly for educational purposes, while ICLEFcaption comprises images extracted from scientific articles and uses the caption of each such image as the respective report.

IU X-Ray: IU X-Ray (Demner-Fushman et al., 2015) is a collection of radiology exams, including chest radiographs, abnormality tags, and radiologist narrative reports, and is publicly available through the Open Access Biomedical Image Search Engine (OpenI, https://openi.nlm.nih.gov/). The dataset consists of 3,995 reports (one report per patient) and 7,470 frontal or lateral radiographs, with each radiology report consisting of an ‘Indication’ (e.g., symptoms), a ‘Comparison’ (e.g., previous information about the patient), a ‘Findings’, and an ‘Impression’ section. Each report contains two groups of tags. First, there are manual tags (a combination of MeSH, https://goo.gl/iDvwj2, and Radiology Lexicon codes, http://www.radlex.org/) assigned by two trained coders, each comprising a heading (disorder, anatomy, object, or sign) and subheadings (e.g., ‘Hiatal/large’, where ‘large’ indicates the anatomical site of the disease). Second, the ‘Findings’ and ‘Impression’ sections were used to associate each report with a number of automatically extracted tags produced by the Medical Text Indexer (Mork et al., 2013) (MTI tags). An example case is shown in Fig. 1, where it can be seen that the MTI tags are simple words or terms (e.g., ‘Hiatus’).

For the ranking stage of our methodology, each exam was labeled as abnormal if one or more manual abnormality tags were assigned to it, and as normal otherwise (i.e., if the tag ‘normal’ or ‘no indexing’ was assigned). For the tagging stage, we employed the MTI tags, because the manual tags do not explicitly describe the abnormality, but most often also include other information (e.g., the anatomical site). For the explanation stage, we employed the ‘Findings’ section. In our experiments we used only exams with two images, considering this the standard (one frontal and one lateral radiograph), and excluded the rest. We also discarded exams that did not have a ‘Findings’ section. This resulted in 2,790 exams, of which 1,952 are used for training, 276 for validation, and 562 for testing (we used the same split as Li et al. (2018, 2019)). The class ratio in the dataset is slightly imbalanced, with 39% normal radiology exams. Abnormal exams are assigned 3 tags on average, while the most frequent tag is ‘degenerative change’ (216 exams). The length of the diagnostic text in each report is 40 words on average. For normal exams, the diagnostic text can be exactly the same for many different patients; e.g., the finding ‘The heart is normal in size. The mediastinum is unremarkable. The lungs are clear.’ appears in 29 exams. By contrast, the most frequent abnormal report appeared verbatim in only 7 reports.

MIMIC-CXR: This dataset (MIMIC-CXR v2.0.0, https://mimic-cxr.mit.edu/) comprises 377,110 chest radiographs associated with 227,835 radiography exams from 64,588 patients of the Beth Israel Deaconess Medical Center, collected between 2011 and 2016. As in IU X-Ray, reports in MIMIC-CXR are organized in sections, but some reports include additional sections, such as ‘History’, ‘Examination’, or ‘Technique’, and not in a consistent manner, because the structure of the reports and the section names were not enforced by the hospital’s user interface (Johnson et al., 2019). The current version of the dataset does not contain the initial labels, so we re-produced them by applying the CheXpert disease mention labeler (Irvin et al., 2019) to the reports, as described in Johnson et al. (2019). CheXpert classifies texts into 14 labels (13 diagnoses and ‘No Finding’), marking each as ‘negative’, ‘positive’, or ‘uncertain’ for a specific text. We treated labels marked as uncertain as positive. For the ranking step, we labeled exams as normal when the ‘No Finding’ label was assigned. In total, there are 40,306 exams with two images, corresponding to 29,482 patients. After removing 11 exams that did not have a ‘Findings’ section (which we use for the explanation stage of RTEx), we split the dataset into 70% (training), 10% (validation), and 20% (test) with respect to patients. For our experiments we randomly kept one exam per patient and sampled 2,300 patients from the training set, 300 from the validation set, and 650 from the test set; 68% of this final dataset consists of normal exams. Each abnormal exam has 2 labels on average, while the most common label is ‘Pneumonia’. The average diagnostic text length is 55 words. In this dataset many normal cases have the same diagnostic text; e.g., the most common normal caption appears in 53 exams. Considering only the abnormal exams, the most frequent caption appears 4 times.

4.2. Experimental setup

For each of the three stages of RTEx we benchmark each technique against competitors. Next, we outline the competitor methods and the performance metrics used for each benchmark.

4.2.1. Ranking and Tagging

We investigated one baseline method, referred to as Random, and two competitor methods, referred to as CNN+NN and CNN+kNN, for both the ranking and tagging stages. These methods were benchmarked against RTEx@R and RTEx@T, respectively. For the two competitors, the ranking is determined based on the tags produced by these methods when trained on both normal and abnormal exams. At the tagging stage, the tags are obtained by retraining the same methods only on abnormal exams. Next, we describe the baseline as well as the two tagging methods.

Random. This is a baseline method used both for ranking and tagging and simulates the case where no screening is performed. For the ranking task it randomly returns a number serving as the abnormality probability. For tagging, it simply assigns a set of random tags from the training set. The number of tags assigned is the average number of tags per training exam.

CNN+NN. This method employs a DenseNet-121 CNN (Huang et al., 2017) encoder, pre-trained on ImageNet and fine-tuned on our datasets (IU X-Ray or MIMIC-CXR). CNN+NN encodes all images (from the training and test sets) and concatenates the obtained representations of the radiographs in each exam, to yield a single representation per exam. Then, for each test representation, the cosine similarity against all the training representations is computed and the nearest exam is returned. When generating tags, the abnormality tags of the nearest exam are returned and assigned to the test exam.

CNN+kNN. This method is an extension of CNN+NN that uses the k most similar training exams to compute the tags for a test exam. To constrain the number of returned tags, only the m most frequent tags of the retrieved exams are kept, where m is set to the average number of tags per exam among the k retrieved exams. CNN+kNN is considered a very strong baseline for tagging: it was ranked third in a recent medical tagging competition (Kougia et al., 2019b), where the first two methods were RTEx@T (see Section 3.2.2) and an ensemble of CNN+kNN and RTEx@T, respectively.
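The tag-pooling rule of CNN+kNN can be sketched directly from the description above; the input is simply the tag sets of the k retrieved neighbors (function name illustrative):

```python
from collections import Counter

def knn_tags(neighbor_tag_sets):
    """Sketch of the CNN+kNN tagging rule: pool the tags of the k
    retrieved training exams, set m to the average number of tags
    per retrieved exam (rounded), and keep the m most frequent
    pooled tags."""
    m = round(sum(len(tags) for tags in neighbor_tag_sets)
              / len(neighbor_tag_sets))
    counts = Counter(tag for tags in neighbor_tag_sets for tag in tags)
    return [tag for tag, _ in counts.most_common(m)]
```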

For solving the problem of ranking, we adapted CNN+NN and CNN+kNN as follows. The abnormality tags of the most similar radiography exam(s) in the training set are returned, and a probability score is computed using the following formula:

$$ s(e) = \frac{1}{|T|} \sum_{t \in T} \mathbb{1}[t \in T_e] $$

where $T$ is the set of all the ground truth tags of the dataset, $T_e$ is the set of generated tags for radiography exam $e$, and $\mathbb{1}[t \in T_e] = 1$ when tag $t$ is among the generated tags and zero otherwise. Since far fewer tags are assigned than exist in $T$, $s(e)$ will usually be close to zero. The main intuition is that the more tags are assigned, the higher the score and the likelier it is that this exam is abnormal.
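A minimal sketch of this tag-count score (function name illustrative): the score is the fraction of the dataset's tag vocabulary that was assigned to the exam.

```python
def abnormality_score(assigned_tags, all_tags):
    """Sketch of the tag-based ranking score used to adapt CNN+NN
    and CNN+kNN for ranking: the fraction of the dataset's tag
    vocabulary assigned to the exam. More assigned tags -> higher
    score -> likelier abnormal."""
    assigned = set(assigned_tags)
    return sum(1 for t in all_tags if t in assigned) / len(all_tags)
```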

Evaluation metrics. Ranking methods were evaluated in terms of NDCG@k, with a varying k. We also considered a second ranking measure, but preliminary experiments showed that it correlates highly with NDCG@k. Tagging methods were evaluated in terms of F1. We used the top-k abnormal cases (ranked by RTEx@R) to compute the F1 score between their predicted and their gold tags.
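For reference, NDCG@k over a binary-relevance ranking (1 = abnormal) can be computed with a short helper; this is the standard formulation of the measure, not code from the paper.

```python
import math

def ndcg_at_k(relevances, k):
    """NDCG@k for a ranked list. `relevances` lists the gold
    binary relevance (1 = abnormal) of the exams in ranked order.
    DCG discounts each gain by log2(rank + 1); the ideal DCG
    places all relevant items first."""
    def dcg(rels):
        return sum(r / math.log2(i + 2) for i, r in enumerate(rels[:k]))
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0
```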

4.2.2. Diagnostic captioning

We benchmarked three competitors for the task of diagnostic captioning, showing the benefits, in terms of clinical correctness, of using the generated tags.

S&T. This method was introduced by Vinyals et al. (2015) for image captioning and is only applicable to the stage of diagnostic captioning. As the encoder of the S&T architecture we employ the DenseNet-121 (Huang et al., 2017) CNN, which is used to initialize an LSTM-RNN decoder (Hochreiter and Schmidhuber, 1997). A dense layer on top outputs a probability distribution over the words of the vocabulary, so that the decoder generates one word at a time. The word generation process continues until a special ‘end’ token is produced or the maximum caption length is reached.

S&T+. This method extends S&T (also applicable solely to diagnostic captioning) so that the generated text explains the predicted tags. Hence, after the encoding phase and prior to the decoding phase (before the generation of the first word), the tags are provided to the decoder as if they were words of the diagnostic text, similarly to teacher forcing (Courville, 2016). Since the decoder is an RNN, this acts as a prior for the decoding that follows.

EtD. This method, which we call EtD, follows a tag- and image-constrained Encoder-Decoder architecture. A DenseNet-121 CNN (Huang et al., 2017) yields one visual embedding $v$ per exam. The decoder is an LSTM constrained by the visual embedding and the tags $T_e$ that were assigned to the exam during the previous step (see Section 3.2.2). More formally, at each time step $t$ the decoder learns a hidden state $h_t$ as a non-linear combination (with a learned weight matrix $W$) of the input word $w_t$ and the previous hidden state $h_{t-1}$:

$$ h_t = \mathrm{LSTM}\big(W[w_t; v; \tau],\; h_{t-1}\big) $$

where the LSTM's input and forget gates $i_t$, $f_t$ regulate the information from this and the previous cell to be forgotten, $v$ is the visual representation from the last average pooling layer of the DenseNet encoder, and $\tau$ is the centroid of the word embeddings of the tags in $T_e$:

$$ \tau = \frac{1}{|T_e|} \sum_{t' \in T_e} \mathrm{emb}(t') $$
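The tag centroid that conditions the decoder is simply an element-wise mean of the tag word embeddings. A minimal sketch, with an illustrative tag-to-vector mapping:

```python
def tag_centroid(tags, embeddings):
    """Sketch of the tag conditioning in EtD: the tag vector fed to
    the decoder is the centroid (element-wise mean) of the word
    embeddings of the predicted tags. `embeddings` maps each tag
    to a vector and is an illustrative stand-in for a learned
    embedding table."""
    vectors = [embeddings[t] for t in tags]
    dim = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]
```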

For all the text generation methods mentioned above, we preprocessed the text by tokenizing, lower-casing the words, and removing digits and words of length 1. We used the Adam optimizer (Kingma and Ba, 2014) everywhere, with an initial learning rate of $10^{-3}$. RTEx@T and RTEx@R used a learning rate reduction mechanism (Rajpurkar et al., 2017).

Evaluation metrics. We employed both word-overlap and clinical correctness measures to evaluate the system-produced diagnostic text. The most common word-overlap measures in diagnostic captioning are BLEU (Papineni et al., 2002) and ROUGE-L (Lin, 2004). BLEU is precision-based and measures word n-gram overlap between the produced and the ground truth texts. ROUGE-L measures the ratio of the length of the longest common n-gram shared by the produced and the ground truth texts to either the length of the ground truth text (ROUGE-L Recall) or the length of the generated text (ROUGE-L Precision); we employ the harmonic mean of the two (ROUGE-L F-measure). For the implementations of BLEU and ROUGE-L, we used sacrebleu¹⁰ and MSCOCO¹¹, respectively. To evaluate clinical correctness, following the work of Liu et al. (2019a), we used the CheXpert labeler (Irvin et al., 2019) to extract labels from both the ground truth and the system-generated diagnostic texts. Clinical precision (CP) is then the average ratio of the number of labels shared between the ground truth and system-generated texts to the number of labels of the latter. Similarly, clinical recall (CR) is the average ratio of the number of shared labels to the number of labels of the former.

¹⁰https://github.com/mjpost/sacrebleu/blob/master/sacrebleu/sacrebleu.py
¹¹https://github.com/salaniz/pycocoevalcap/tree/master/rouge
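The per-exam clinical precision and recall can be sketched as below; CP and CR as reported in the paper are then averaged over all exams.

```python
# Sketch of per-exam clinical precision (CP) and recall (CR), given the
# CheXpert labels extracted from the ground-truth and generated texts.
# The reported scores average these ratios over all exams.
def clinical_pr(true_labels, pred_labels):
    true_labels, pred_labels = set(true_labels), set(pred_labels)
    shared = true_labels & pred_labels
    cp = len(shared) / len(pred_labels) if pred_labels else 0.0
    cr = len(shared) / len(true_labels) if true_labels else 0.0
    return cp, cr
```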

4.3. Experimental results

Next, we present our results with regard to ranking, tagging, and diagnostic captioning. Finally, we provide a discussion of our findings and assess the overall performance of RTEx.

4.3.1. Ranking

Fig. 5 (a) and (b) depict the performance of the methods in terms of NDCG@k.¹² We used bootstrapping, sampling 100 exams at a time and varying k from 10 to 80 radiography exams. Random is outperformed by all competitors, while RTEx@R is the overall winner for both datasets; the second best is RTEx@T for MIMIC-CXR and CNN+kNN for IU X-Ray.

¹²Similar results were obtained in terms of other ranking measures.
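The ranking metric can be sketched as below. Binary relevance (1 = abnormal exam, 0 = normal) and the standard log2 discount are assumptions; the paper does not spell out its exact NDCG variant.

```python
import math

# Sketch of NDCG@k for the ranking step, assuming binary relevance
# (1 = abnormal, 0 = normal) and the standard log2 position discount.
def dcg_at_k(relevances, k):
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k):
    # Normalize by the DCG of the ideal (perfectly sorted) ranking.
    idcg = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / idcg if idcg > 0 else 0.0
```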

Figure 5. NDCG@K of all methods for the task of ranking radiography exams based on the probability of abnormality, for MIMIC-CXR (a) and IU X-Ray (b). We used bootstrapping (1000 samples of 100 exams each) and report the average value. K varies from 10 to 80, and a moving average with a window of 5 was applied. For MIMIC-CXR we observe that RTEx@R and RTEx@T consistently outperform the other methods, while for IU X-Ray the winners are RTEx@R and CNN+kNN.

4.3.2. Diagnostic tagging

Figure 6. F1 of diagnostic tagging methods on the top 100 ranked radiography exams. The exams were ranked by RTEx@R based on their abnormality probability, for MIMIC-CXR (a) and IU X-Ray (b). We observe that RTEx@T is the winner for both datasets, with CNN+kNN being the second best, trailing by up to a factor of two for MIMIC-CXR.

During this step we assume that the radiography exams are already ranked by abnormality probability. Thus, we evaluate the methods with respect to their ability to detect the correct abnormality tags. We report Macro F1 (macro-averaging across exams), which is also the standard measure of a recent competition on medical term tagging (Pelka et al., 2019). As can be seen in Fig. 6, RTEx@T outperforms the two competitors on both datasets, with CNN+kNN being the second best, trailing by up to a factor of two for MIMIC-CXR.
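Macro-averaging across exams, as used above, can be sketched as follows: compute F1 over the tag sets of each exam, then average the per-exam scores.

```python
# Sketch of F1 macro-averaged across exams: per-exam F1 over the
# gold and predicted tag sets, averaged over all exams.
def f1(true_tags, pred_tags):
    true_tags, pred_tags = set(true_tags), set(pred_tags)
    if not true_tags and not pred_tags:
        return 1.0  # both empty: perfect agreement
    tp = len(true_tags & pred_tags)
    denom = len(true_tags) + len(pred_tags)
    return 2 * tp / denom

def macro_f1(gold, predicted):
    return sum(f1(g, p) for g, p in zip(gold, predicted)) / len(gold)
```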

4.3.3. Diagnostic captioning

Dataset     Model     BLEU   ROU    CP      CR
MIMIC-CXR   S&T@all    7.8   25.7   0.080   0.118
            S&T        8.2   25.2   0.208   0.151
            S&T+       9.8   26.2   0.081   0.117
            EtD        6.9   25.5   0.171   0.144
            RTEx@X     5.9   20.5   0.229   0.284
IU X-Ray    S&T@all    6.9   23.6   0.118   0.088
            S&T        6.5   23.0   0.153   0.113
            S&T+       9.5   23.4   0.085   0.071
            EtD       10.0   26.7   0.131   0.124
            RTEx@X     5.5   20.2   0.193   0.222
Table 1. The results of our explanatory captioning phase, evaluated with BLEU, ROUGE-L (ROU), Clinical Precision (CP) and Clinical Recall (CR). Clinical correctness decreases when S&T is trained also on normal exams (S&T@all). Our RTEx@X outperforms all other methods in clinical precision and recall.

Table 1 provides the results of the methods for the task of diagnostic captioning. As ground truth we considered the correct reports, and as predicted captions the system-produced diagnostic texts. Our RTEx@X outperforms all methods in terms of clinical precision and recall. Generative models achieve higher word-overlap scores, mainly because they learn to repeat common phrases that exist in the reports. On the other hand, retrieval methods assign texts that were written by radiologists, so they have higher clinical value. When training S&T on all exams (S&T@all), i.e., using both normal and abnormal cases, clinical precision and recall decrease on both datasets. By contrast, performance in terms of word-overlap measures (BLEU and ROUGE-L) slightly improved overall, probably because the decoder becomes better at generating text present in normal reports, which, however, also appears in abnormal reports (see Fig. 1).

4.3.4. Runtime

As a final benchmark, we measured the runtime of RTEx for ranking, tagging, and captioning on 500 randomly selected radiography exams from our IU X-Ray test set. Ranking took 19.78 seconds. Producing tags and diagnostic texts for the top 100 ranked exams took 19.43 seconds; all 100 top-ranked exams in this experiment were abnormal. Note that an experienced radiologist needs 2 minutes on average to report a radiography exam (Royal College of Radiologists, 2019), hence 200 minutes for 100 exams. The experiment was performed on a 32-core server with 256GB RAM and 4 GPUs.

Repeatability. For repeatability purposes, the code for the best performing pipeline of RTEx is available on GitHub.¹³

¹³https://github.com/ipavlopoulos/rtex.git

5. Conclusions

We introduced a new methodology that can be used for (1) ranking radiography exams based on the probability that they contain an abnormality, (2) producing diagnostic tags, using only abnormal exams for training, and (3) providing diagnostic text based on both the radiographs and the tags, as a means of explaining the predicted tags. This is an important step for practitioners who need to prioritize cases with abnormalities. Our methodology can further be used to predict abnormality tags and complement them with an automatically suggested explanatory diagnostic text to guide the medical expert. We experimented with two publicly available datasets, showing that our ranking and tagging components outperform two strong competitors and a baseline. Our diagnostic captioning component demonstrates the benefit of employing tags for generating text of higher clinical correctness. We also demonstrated that limiting our training data to only abnormal exams improves the clinical correctness of the automatically provided text. Future directions include further experimentation with data of a larger scale and deployment to hospitals.


  • H. J. W. L. Aerts, E. R. Velazquez, R. T. H. Leijenaar, et al. (2014) Decoding tumour phenotype by noninvasive imaging using a quantitative radiomics approach. Nature Communications 5, pp. 4006. Cited by: §1.
  • S. Bai and S. An (2018) A survey on automatic image caption generation. Neurocomputing 311, pp. 291–304. Cited by: §2.
  • I. M. Baltruschat, H. Nickisch, M. Grass, T. Knopp, and A. Saalbach (2019) Comparison of deep learning approaches for multi-label chest x-ray classification. Scientific reports 9 (1), pp. 1–10. Cited by: §2.
  • D. Bluemke, L. Moy, M. A. Bredella, B. B. Ertl-Wagner, K. J. Fowler, V. J. Goh, E. F. Halpern, C. P. Hess, M. L. Schiebler, and C. R. Weiss (2020) Assessing radiology research on artificial intelligence: a brief guide for authors, reviewers, and readers—from the radiology editorial board. Radiological Society of North America. Cited by: §1.
  • I. Goodfellow, Y. Bengio, and A. Courville (2016) Deep learning. MIT Press. Cited by: §4.2.2.
  • A. G. S. de Herrera, C. Eickhoff, V. Andrearczyk, and H. Müller (2018) Overview of the imageclef 2018 caption prediction tasks. In CLEF CEUR Workshop, Avignon, France. Cited by: §2.
  • D. Demner-Fushman, M. D. Kohli, M. B. Rosenman, S. E. Shooshan, L. Rodriguez, S. Antani, G. R. Thoma, and C. J. McDonald (2015) Preparing a collection of radiology examinations for distribution and retrieval. Journal of the American Medical Informatics Association 23 (2), pp. 304–310. Cited by: 2nd item, §4.1.
  • C. Eickhoff, I. Schwall, A. G. S. de Herrera, and H. Müller (2017) Overview of imageclefcaption 2017 - the image caption prediction and concept extraction tasks to understand biomedical images. In CLEF CEUR Workshop, Dublin, Ireland. Cited by: §2.
  • W. Gale, L. Oakden-Rayner, G. Carneiro, A. P. Bradley, and L. J. Palmer (2018) Producing radiologist-quality reports for interpretable artificial intelligence. CoRR abs/1806.00340. External Links: arxiv1806.00340 Cited by: §2.
  • S. Hochreiter and J. Schmidhuber (1997) Long short-term memory. Neural Computation 9 (8), pp. 1735–1780. Cited by: §4.2.2.
  • M. D. Hossain, F. Sohel, M. F. Shiratuddin, and H. Laga (2019) A comprehensive survey of deep learning for image captioning. CSUR 51 (6), pp. 118. Cited by: §2.
  • G. Huang, Z. Liu, L. V. D. Maaten, and K. Q. Weinberger (2017) Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, Hawaii, HI, USA, pp. 4700–4708. Cited by: §2, §2, §4.2.1, §4.2.2, §4.2.2.
  • J. Irvin, P. Rajpurkar, M. Ko, Y. Yu, S. Ciurea-Ilcus, C. Chute, H. Marklund, B. Haghgoo, R. Ball, K. Shpanskaya, et al. (2019) Chexpert: a large chest radiograph dataset with uncertainty labels and expert comparison. CoRR abs/1901.07031. External Links: arXiv:1901.07031 Cited by: §4.1, §4.2.2.
  • S. Jaeger, A. Karargyris, S. Candemir, J. Siegelman, L. Folio, S. Antani, and G. Thoma (2013) Automatic screening for tuberculosis in chest radiographs: a survey. Quantitative imaging in medicine and surgery 3 (2), pp. 89. Cited by: §2.
  • B. Jing, P. Xie, and E. Xing (2018) On the automatic generation of medical imaging reports. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Melbourne, Australia, pp. 2577–2586. Cited by: §1, §2.
  • A. E. W. Johnson, T. J. Pollard, S. Berkowitz, N. R. Greenbaum, M. P. Lungren, C.-Y. Deng, R. G. Mark, and S. Horng (2019) MIMIC-cxr: a large publicly available database of labeled chest radiographs. arXiv preprint arXiv:1901.07042. Cited by: 2nd item, §4.1.
  • Md. Karim, T. Döhmen, D. Rebholz-Schuhmann, S. Decker, M. Cochez, and O. Beyan (2020) Deepcovidexplainer: explainable covid-19 predictions based on chest x-ray images. arXiv preprint arXiv:2004.04582. Cited by: §1, §2.
  • D. P. Kingma and J. Ba (2014) Adam: A Method for Stochastic Optimization. arXiv:1412.6980. Cited by: §4.2.2.
  • B. Kong, Y. Zhan, M. Shin, T. Denny, and S. Zhang (2016) Recognizing end-diastole and end-systole frames via deep temporal regression network. In International Conference on Medical Image Computing and Computer-Assisted Intervention, Athens, Greece, pp. 264–272. Cited by: §2.
  • V. Kougia, J. Pavlopoulos, and I. Androutsopoulos (2019a) A survey on biomedical image captioning. In Workshop on Shortcomings in Vision and Language of the Annual Conference of the North American Chapter, Minneapolis, MN, USA, pp. 26–36. Cited by: §1, §2, §4.1.
  • V. Kougia, J. Pavlopoulos, and I. Androutsopoulos (2019b) Aueb nlp group at imageclefmed caption 2019. In CLEF2019 Working Notes. CEUR Workshop Proceedings, Lugano, Switzerland, pp. 09–12. Cited by: §4.2.1.
  • E. A. Krupinski (2010) Current perspectives in medical image perception. Attention, Perception, & Psychophysics 72 (5), pp. 1205–1217. Cited by: §1.
  • Y. Li, X. Liang, Z. Hu, and E. Xing (2018) Hybrid retrieval-generation reinforced agent for medical image report generation. In Advances in neural information processing systems, Montreal, Canada, pp. 1530–1540. Cited by: §2, footnote 8.
  • Y. Li, X. Liang, Z. Hu, and E. Xing (2019) Knowledge-driven encode, retrieve, paraphrase for medical image report generation. In Proceedings of the AAAI Conference on Artificial Intelligence, Hawaii, U.S.A., pp. 6666–6673. Cited by: §1, §1, footnote 8.
  • C.-Y. Lin (2004) ROUGE: a package for automatic evaluation of summaries. In Workshop on Text Summarization Branches Out of the Annual Conference of the Association for Computational Linguistics, Barcelona, Spain, pp. 74–81. Cited by: §4.2.2.
  • G. Liu, T.-M. H. Hsu, M. McDermott, W. Boag, W.-H. Weng, P. Szolovits, and M. Ghassemi (2019a) Clinically accurate chest x-ray report generation. CoRR abs/1904.02633. External Links: arXiv:1904.02633 Cited by: §1, §1, §2, §4.2.2.
  • X. Liu, H. R. Tizhoosh, and J. Kofman (2016) Generating binary tags for fast medical image retrieval based on convolutional nets and radon transform. In International Joint Conference on Neural Networks, Vancouver, British Columbia, Canada, pp. 2872–2878. Cited by: §2.
  • X. Liu, Q. Xu, and N. Wang (2019b) A survey on deep neural network-based image captioning. The Visual Computer 35 (3), pp. 445–470. Cited by: §2.
  • J. Lu, C. Xiong, D. Parikh, and R. Socher (2017) Knowing when to look: adaptive attention via a visual sentinel for image captioning. In CVPR, Honolulu, HI, USA, pp. 375–383. Cited by: §2.
  • M. M. A. Monshi, J. Poon, and V. Chung (2020) Deep learning in generating radiology reports: a survey. Artificial Intelligence in Medicine, pp. 101878. Cited by: §1.
  • J. G. Mork, A. Jimeno-Yepes, and A. R. Aronson (2013) The nlm medical text indexer system for indexing biomedical literature. In BioASQ Workshop, Valencia, Spain. Cited by: §4.1.
  • Royal College of Radiologists (2016) Clinical radiology UK workforce census 2015 report. The Royal College of Radiologists, London. Cited by: §1.
  • Royal College of Radiologists (2019) Clinical radiology UK workforce census 2019 report. The Royal College of Radiologists, London. Cited by: §1, §4.3.4.
  • L. G. Oliveira, S. A. e Silva, L. H. V. Ribeiro, M. R. de Oliveira, C. J. Coelho, and A. L. Andrade (2008) Computer-aided diagnosis in chest radiography for detection of childhood pneumonia. International journal of medical informatics 77 (8), pp. 555–564. Cited by: §2.
  • K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu (2002) BLEU: a method for automatic evaluation of machine translation. In ACL, Philadelphia, PA, USA, pp. 311–318. Cited by: §4.2.2.
  • O. Pelka, C. M. Friedrich, A. G. S. de Herrera, and H. Müller (2019) Overview of the imageclefmed 2019 concept detection task. CLEF working notes, CEUR. Cited by: §1, §2, §4.3.2.
  • P. Rajpurkar, J. Irvin, K. Zhu, B. Yang, H. Mehta, et al. (2017) Chexnet: radiologist-level pneumonia detection on chest x-rays with deep learning. arXiv. Cited by: §3.2.1, §4.2.2.
  • H. R. Roth, L. Lu, J. Liu, J. Yao, A. Seff, K. Cherry, L. Kim, and R. M. Summers (2015) Improving computer-aided detection using convolutional neural networks and random view aggregation. IEEE Transactions on Medical Imaging 35 (5), pp. 1170–1181. Cited by: §2.
  • K. Simonyan and A. Zisserman (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. Cited by: §2, §2.
  • L. Soldaini and N. Goharian (2016) Quickumls: a fast, unsupervised approach for medical concept extraction. In Medical Information Retrieval (MedIR) Workshop, Pisa, Italy. Cited by: §2.
  • P. Suetens (2009) Fundamentals of medical imaging. Cambridge medicine, Cambridge University Press. External Links: ISBN 9780521519151, LCCN 2010483551, Link Cited by: §1.
  • A. Taguchi (2010) Triage screening for osteoporosis in dental clinics using panoramic radiographs. Oral diseases 16 (4), pp. 316–327. Cited by: §2.
  • O. Vinyals, A. Toshev, S. Bengio, and D. Erhan (2015) Show and tell: a neural image caption generator. In Proceedings of the IEEE conference on computer vision and pattern recognition, Boston, MA, USA, pp. 3156–3164. Cited by: §2, §4.2.2.
  • X. Wang, Y. Peng, L. Lu, Z. Lu, and R. M. Summers (2018) Tienet: text-image embedding network for common thorax disease classification and reporting in chest x-rays. In CVPR, Quebec City, Canada, pp. 9049–9058. Cited by: §2.
  • K. Xu, J. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhudinov, R. Zemel, and Y. Bengio (2015) Show, attend and tell: neural image caption generation with visual attention. In ICML, Lille, France, pp. 2048–2057. Cited by: §2.
  • Z. Zhang, Y. Xie, F. Xing, M. McGough, and L. Yang (2017) MDNet: a semantically and visually interpretable medical image diagnosis network. In Proceedings of the IEEE conference on computer vision and pattern recognition, Honolulu, HI, USA, pp. 6428–6436. Cited by: §2.