The chest X-ray (CXR) is one of the most common clinical examinations used to diagnose thoracic diseases and abnormalities. The volume of CXR scans generated daily in hospitals is huge; therefore, an automated diagnosis system that saves the effort of doctors is of great value. At present, applications of artificial intelligence in CXR diagnosis usually use pattern recognition to classify the scans. However, such methods rely on labeled databases, which are costly to build and usually have large error rates. In this work, we built a database containing more than 12,000 CXR scans and radiological reports, and developed a model based on a deep convolutional neural network and a recurrent network with an attention mechanism. The model learns features directly from the CXR scans and the associated raw radiological reports; no additional labeling of the scans is needed. The model provides automated recognition of given scans and generation of reports. The quality of the generated reports was evaluated both with CIDEr scores and by radiologists. The CIDEr scores were found to be around 5.8 on average for the testing dataset. A further blind evaluation suggested performance comparable to that of human radiologists.
The chest X-ray (CXR) is one of the most common diagnostic techniques for the respiratory system. It is quick, inexpensive, and involves a low radiation dose. The volume of daily CXR scans in hospitals is huge, and their examination and interpretation consume much of radiologists' time and effort. Therefore, it is desirable to develop a system that is able to examine and interpret CXR radiographs automatically. Moreover, an automated system may help reduce inter-observer variations caused by factors including individual experience, quality of the radiograph, time, and personality type Tudor et al. (1997). The adoption of an automated system will lead to more standardized terminology and treatment, and will benefit collaborations between different parties. The system may further enable new applications such as remote diagnosis, self-service diagnosis, and so on.
Previously, many works focused on automated classification of CXR scans. These works are usually based on variants of Convolutional Neural Networks (CNN) and supervised learning Rajpurkar et al. (2017); Ypsilantis and Montana (2017); Pesce et al. (2017); Islam et al. (2017); Yao et al. (2017); Yan et al. (2018); Guan et al. (2018); Rubin et al. (2018). However, at least two problems hinder the practical application of automated systems in hospitals. First, the sensitivity and false positive rates of these classification approaches seem to have saturated. Further improvement may require significantly increasing the amount and quality of labeled samples, which are very expensive in the medical field. Second, the decision strategy underlying these systems is not well understood, making it difficult to track errors and gain the trust of doctors and patients.
In this work, we developed a model based on deep recurrent neural networks with an attention mechanism that learns from CXR images and the raw radiological reports simultaneously. Deep neural networks have shown great potential in characterizing and classifying complex data in a broad range of fields Goodfellow et al. (2016); Silver et al. (2016); Hui Y et al. (2015). After training with our database, the neural network is able to automatically generate radiological reports for given scans. Our work has the following novel features. First, the training of our model is only weakly supervised; the model learns directly from the images and the raw radiological reports stored in the hospital database, and no further classification or labeling of the images by humans is required. This is in contrast to most machine-learning models, and it greatly facilitates the acquisition of data and the training of large-scale models. Second, instead of a simple classification of the case into one or several disease categories, the output of our model is a descriptive report covering different conditions of the chest; the output is directly readable by patients. Third, the implementation of the attention mechanism adds another level of understanding of how the model works, facilitating debugging and optimization of the model. In the following sections, we describe the model architecture, the training and testing procedures, and the performance evaluated with CIDEr scores and by human radiologists.
0.1 The architecture of the network
Figure 1 shows the overall architecture of the neural network. During training, the neural network reads in both CXR images and the raw radiological reports, and outputs human-readable text. The output is then compared with the ground truth to calculate the loss function, which is minimized with the gradient back-propagation technique. After training, the model is able to automatically generate reports for given CXR images.
The design of the model architecture was inspired by the pioneering work of Xu et al. Xu et al. (2015), who developed an RNN to generate captions for everyday images, such as those in the Flickr and MS COCO databases. The model is also similar to those of Zhang et al. Zhang et al. (2017) and Wang et al. Wang et al. (2018) in its purpose of automatically generating medical reports. However, the architecture of our model was redesigned to better fit the organization of our database.
The model contains a 121-layer Densely Connected Convolutional Network (DenseNet) Huang et al. (2017), which is used as a visual-information encoder to extract features from the input images. The encoder is composed of four blocks; each block contains several convolutional layers, each of which takes all preceding feature maps as input. The blocks are connected by transition layers. According to Huang et al., DenseNets alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters Huang et al. (2017). Compared with many other CNNs, they converge faster and are appropriate for smaller datasets. Therefore, DenseNets are suitable for medical images. The output of the last layer of the DenseNet block is fed to a Long Short-Term Memory (LSTM) network to generate descriptions for the given CXR image.
The LSTM network Hochreiter and Schmidhuber (1997) is adopted to generate text for the given CXR image word by word. At each step, it reads the output of the last layer of the DenseNet block and the previously generated word, and outputs the next word. In detail, our LSTM implementation follows that of Zaremba et al. (2014), i.e.,

$$
\begin{aligned}
i_t &= \sigma\left(W_i\,[E y_{t-1},\, h_{t-1},\, \bar{v}]\right),\\
f_t &= \sigma\left(W_f\,[E y_{t-1},\, h_{t-1},\, \bar{v}]\right),\\
o_t &= \sigma\left(W_o\,[E y_{t-1},\, h_{t-1},\, \bar{v}]\right),\\
c_t &= f_t \odot c_{t-1} + i_t \odot \tanh\left(W_c\,[E y_{t-1},\, h_{t-1},\, \bar{v}]\right),\\
h_t &= o_t \odot \tanh(c_t),
\end{aligned}
$$

where $i_t$, $f_t$, $c_t$, $o_t$, $h_t$ are the input gate, forget gate, memory cell, output gate and hidden state of the LSTM, respectively; $W$ is the weight matrix, $\sigma$ the logistic sigmoid function, $\bar{v}$ the global average of the DenseNet output, $y_{t-1}$ the previously generated word, and $E$ is an embedding matrix. The symbol $\odot$ denotes element-wise multiplication. The subscript $t$ denotes the time step; $n$ is the size of the hidden states, $m$ the vocabulary size, and $D$ the channel number of the output of the DenseNet.
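The gate updates above can be sketched in plain NumPy. This is an illustrative toy implementation with random weights and toy dimensions, not the model's actual code; the concatenated input vector stands in for the embedded previous word together with the image context:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM step. x: input vector (embedded word + image context),
    W: (4*n, len(x)+n) stacked gate weights, b: (4*n,) bias."""
    n = h_prev.shape[0]
    z = W @ np.concatenate([x, h_prev]) + b
    i = sigmoid(z[0:n])            # input gate
    f = sigmoid(z[n:2*n])          # forget gate
    o = sigmoid(z[2*n:3*n])        # output gate
    g = np.tanh(z[3*n:4*n])        # candidate memory
    c = f * c_prev + i * g         # new cell state
    h = o * np.tanh(c)             # new hidden state
    return h, c

rng = np.random.default_rng(0)
n, d = 8, 12                       # toy hidden size and input size
W = rng.normal(scale=0.1, size=(4 * n, d + n))
b = np.zeros(4 * n)
h, c = lstm_step(rng.normal(size=d), np.zeros(n), np.zeros(n), W, b)
```

In the actual model the hidden size is 512 and the input combines a 256-dimensional word embedding with the DenseNet features; the toy sizes here are only for readability.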
The attention mechanism has been widely adopted in visual image processing, since it improves model performance and adds a level of understanding of how the model works. It mimics the human visual attention mechanism by learning to focus on certain image regions. Specifically, a soft attention mechanism is implemented in the model, which calculates a set of weights conditioned on the image representation and on the hidden state. These weights are multiplied with the output vectors of the DenseNet to obtain a weighted representation of the image, which is then utilized by the recurrent neural network to generate descriptions. The corresponding equations are
$$
\begin{aligned}
e_{ti} &= f_{\mathrm{att}}(a_i,\, h_{t-1}),\\
\alpha_{ti} &= \frac{\exp(e_{ti})}{\sum_{k} \exp(e_{tk})},\\
\hat{z}_t &= \sum_{i} \alpha_{ti}\, a_i,\\
y_t &\sim p_t,
\end{aligned}
$$

where $\alpha_{ti}$ is the attention weight, $a_i$ the output of the DenseNet, $\hat{z}_t$ the weighted context vector, $p_t$ the probability distribution of the words, and $y_t$ is the predicted word sampled from $p_t$.
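The soft attention step can be sketched in NumPy. The additive score function and the toy dimensions below are illustrative assumptions, not the paper's exact parameterization:

```python
import numpy as np

def soft_attention(a, h_prev, W_a, W_h):
    """Soft attention over DenseNet features.
    a: (D, L) feature vectors at L spatial locations,
    h_prev: (n,) previous LSTM hidden state.
    Returns the weighted context vector and the attention weights."""
    # unnormalized alignment scores e_i = f_att(a_i, h_prev);
    # a single-layer additive score is assumed here for illustration
    e = np.tanh(W_a.T @ a + (W_h @ h_prev)[:, None]).sum(axis=0)   # (L,)
    alpha = np.exp(e - e.max())
    alpha /= alpha.sum()                                           # softmax weights
    z = a @ alpha                                                  # weighted context (D,)
    return z, alpha

rng = np.random.default_rng(1)
D, L, n, k = 6, 4, 5, 3            # toy sizes
z, alpha = soft_attention(rng.normal(size=(D, L)), rng.normal(size=n),
                          rng.normal(size=(D, k)), rng.normal(size=(k, n)))
```

Because the weights are an explicit softmax over image locations, they can be visualized as a heat map over the CXR image, which is what enables the word-region alignments shown later.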
The loss function is the cross entropy between the ground-truth and predicted distributions of the text,

$$ L = -\sum_{t=1}^{T} \log p_t(s_t), $$

where $p_t$ is the probability distribution predicted at the $t$-th step, $s_t$ the index of the $t$-th word in the ground-truth text, and $T$ the length of the text.
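The loss is straightforward to compute given the per-step word distributions; a minimal NumPy sketch (toy numbers, three-word vocabulary):

```python
import numpy as np

def report_loss(probs, target_ids):
    """Cross entropy L = -sum_t log p_t(s_t).
    probs: (T, V) predicted distribution over the V-word vocabulary at
    each of the T steps; target_ids: (T,) ground-truth word indices."""
    T = len(target_ids)
    return -np.sum(np.log(probs[np.arange(T), target_ids]))

# toy example: two steps, the true words get probability 0.7 and 0.8
probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1]])
loss = report_loss(probs, [0, 1])   # -(log 0.7 + log 0.8)
```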
All chest X-ray scans and the associated radiological reports were provided by Nanjing Drum Tower Hospital. In total, the dataset contains 12,219 images and the same number of reports, written in Chinese. All reports were investigated by one expert (of attending-doctor level or above) and double-checked by another expert (of associate-chief-physician level or above). The dataset was randomly split into three sets: 80% of the samples for training, 10% for validation, and 10% for testing. Figure 2 shows some examples randomly taken from the dataset.
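The random 80/10/10 split described above can be sketched as follows (the identifier list and seed are illustrative; the paper does not specify the exact splitting code):

```python
import random

def split_dataset(sample_ids, seed=0):
    """Shuffle the sample identifiers and split them 80/10/10 into
    training, validation, and testing sets."""
    ids = list(sample_ids)
    random.Random(seed).shuffle(ids)       # deterministic shuffle
    n = len(ids)
    n_train, n_val = int(0.8 * n), int(0.1 * n)
    return ids[:n_train], ids[n_train:n_train + n_val], ids[n_train + n_val:]

train, val, test = split_dataset(range(12219))
```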
Since there are no spaces between Chinese words, the Python module jieba Sun (2012) was used for text segmentation. After processing all the radiological reports in the dataset, a vocabulary of size 424 was obtained. The words were represented with one-hot vectors. Words that appeared fewer than three times were represented with a special token, nou. Two other special tokens were start and end, indicating the beginning and end of a report, respectively.
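The vocabulary construction described above can be sketched as follows. The token strings and helper names are illustrative (the text names the tokens nou, start, and end, but the original code is not shown); the input is assumed to already be word-segmented, e.g. by jieba:

```python
from collections import Counter

def build_vocab(segmented_reports, min_count=3):
    """Build the vocabulary from word-segmented reports. Words seen fewer
    than `min_count` times map to the rare-word token 'nou'; 'start' and
    'end' delimit each encoded report."""
    counts = Counter(w for report in segmented_reports for w in report)
    vocab = ["start", "end", "nou"] + sorted(
        w for w, c in counts.items() if c >= min_count)
    index = {w: i for i, w in enumerate(vocab)}
    def encode(report):
        body = [index.get(w, index["nou"]) for w in report]
        return [index["start"]] + body + [index["end"]]
    return vocab, encode

# toy segmented reports; only "两肺" occurs three times and stays in the vocabulary
reports = [["两肺", "纹理", "增多"],
           ["两肺", "纹理", "清晰"],
           ["两肺", "未见", "异常"]]
vocab, encode = build_vocab(reports)
```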
0.3 Training Procedures
The transfer learning technique was employed to speed up the convergence of training. Specifically, the 121-layer DenseNet was pre-trained in a supervised way on a classification task with the ChestX-ray8 dataset released in September 2017 Wang et al. (2017). The training procedure was similar to that in Rajpurkar et al. (2017). The ChestX-ray8 dataset contains about 110k chest X-ray images with labels for 14 types of diseases. The resulting weights were then transferred to the encoder module of the model. In the subsequent training process, the parameters in the first two dense blocks were fixed, while those in the others were fine-tuned by the gradient back-propagation algorithm.
During training, the original X-ray images were resized to 256×256 pixels and then processed with histogram equalization to increase the contrast. The size of the hidden unit of the LSTM was set to 512, and the embedding size was 256. The Adam optimizer was used for optimization. The learning rate of the DenseNet was set to and that of the LSTM was set to .
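The image preprocessing (resize plus histogram equalization) can be sketched in pure NumPy. The nearest-neighbour resize is a simplification for illustration; the production pipeline presumably used a proper image library:

```python
import numpy as np

def preprocess(img, size=256):
    """Resize an 8-bit grayscale image to size x size (nearest-neighbour)
    and apply histogram equalization to increase contrast."""
    # nearest-neighbour resize via index sampling
    rows = np.arange(size) * img.shape[0] // size
    cols = np.arange(size) * img.shape[1] // size
    img = img[rows][:, cols]
    # histogram equalization: map each intensity to its normalized CDF
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())
    return (cdf[img] * 255).astype(np.uint8)

out = preprocess(np.random.default_rng(0).integers(0, 256, (512, 480),
                                                   dtype=np.uint8))
```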
0.4 Evaluation metrics
To evaluate how well a sentence $c_i$ generated for image $I_i$ matches the consensus of a set of ground-truth descriptions $S_i = \{s_{i1}, \dots, s_{im}\}$, the Consensus-based Image Description Evaluation (CIDEr) score Vedantam et al. (2015) was used.
To calculate the CIDEr score, the Term Frequency Inverse Document Frequency (TF-IDF) weighting $g_k(s_{ij})$ for each $n$-gram $\omega_k$ in the sentence $s_{ij}$ was first calculated as

$$ g_k(s_{ij}) = \frac{h_k(s_{ij})}{\sum_{\omega_l \in \Omega} h_l(s_{ij})} \, \log\!\left( \frac{|I|}{\sum_{I_p \in I} \min\!\left(1, \sum_q h_k(s_{pq})\right)} \right), $$

where $h_k(s_{ij})$ is the number of times the $n$-gram $\omega_k$ occurs in the sentence $s_{ij}$, $\Omega$ the vocabulary of all $n$-grams, and $I$ the set of all images in the dataset.
Then the CIDEr score for $n$-grams of length $n$ was calculated as

$$ \mathrm{CIDEr}_n(c_i, S_i) = \frac{1}{m} \sum_{j=1}^{m} \frac{\mathbf{g}^n(c_i) \cdot \mathbf{g}^n(s_{ij})}{\lVert \mathbf{g}^n(c_i) \rVert \, \lVert \mathbf{g}^n(s_{ij}) \rVert}, $$

where $\mathbf{g}^n(s_{ij})$ is a vector formed by the $g_k(s_{ij})$ corresponding to all $n$-grams of length $n$, and $\mathbf{g}^n(c_i)$ is similarly defined for the generated sentence $c_i$.
At last, the CIDEr score was calculated as the average over all $n$-gram lengths,

$$ \mathrm{CIDEr}(c_i, S_i) = \sum_{n=1}^{N} \frac{1}{N} \, \mathrm{CIDEr}_n(c_i, S_i). $$
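As an illustration, the TF-IDF weighting and cosine-similarity averaging can be sketched in pure Python. This is a toy version; the reference CIDEr implementation includes additional details (such as a scaling factor), which is presumably why reported scores like the 5.8 above can exceed 1:

```python
import math
from collections import Counter

def ngrams(words, n):
    return [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]

def cider(candidate, references, corpus, N=4):
    """Toy CIDEr: TF-IDF-weighted n-gram vectors compared by cosine
    similarity and averaged over n = 1..N. `corpus` is the list of
    reference-sentence sets for all images (used for the IDF term)."""
    num_images = len(corpus)
    def tfidf(words, n):
        counts = Counter(ngrams(words, n))
        total = sum(counts.values()) or 1
        vec = {}
        for g, c in counts.items():
            # document frequency: number of images whose references contain g
            df = sum(any(g in ngrams(r, n) for r in refs) for refs in corpus)
            vec[g] = (c / total) * math.log(num_images / max(df, 1))
        return vec
    def cosine(u, v):
        dot = sum(w * v.get(g, 0.0) for g, w in u.items())
        nu = math.sqrt(sum(w * w for w in u.values()))
        nv = math.sqrt(sum(w * w for w in v.values()))
        return dot / (nu * nv) if nu and nv else 0.0
    score = 0.0
    for n in range(1, N + 1):
        score += sum(cosine(tfidf(candidate, n), tfidf(ref, n))
                     for ref in references) / len(references) / N
    return score

# a candidate identical to its reference scores 1.0 in this toy version
corpus = [[["a", "b", "c", "d", "e"]], [["f", "g", "h", "i", "j"]]]
perfect = cider(["a", "b", "c", "d", "e"], corpus[0], corpus)
```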
Figure 3(a) shows the losses of the training and validation datasets as a function of the training epoch. It can be seen that the training loss keeps decreasing, while the validation loss saturates at about the 10th epoch, indicating that the generalization ability of the model reaches its maximum. Therefore, the parameters obtained at the 10th epoch are used by the model to generate the results in the following sections.
Figure 3(b) shows the calculated CIDEr values for the testing dataset as a function of epoch. Note that for the testing set, the ground truth sentences were not used when generating the descriptions; they were only used to evaluate the descriptions after their generation. The Beam Search technique Sutskever et al. (2014) was used to generate multiple sentences for a given CXR image, and each sentence was assigned a preference probability. The top three sentences with the highest probabilities were recorded. Their CIDEr values were calculated against the ground truth and the highest one was used to calculate the curve shown in Fig. 3(b). According to the figure, the average CIDEr value of the testing set increases as a function of the epoch, and saturates around 5.8 at the 10th epoch.
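The beam-search decoding used to produce the top three sentences can be sketched generically. The toy transition table and function names below are illustrative, not the model's actual decoder; `step(prefix)` stands in for one LSTM step returning a next-word distribution:

```python
import math

def beam_search(step, start, end, beam_width=3, max_len=10):
    """Generic beam search. `step(prefix)` returns a dict mapping next
    tokens to probabilities; returns up to `beam_width` finished
    sequences with their log-probabilities, best first."""
    beams = [([start], 0.0)]
    done = []
    for _ in range(max_len):
        candidates = []
        for seq, logp in beams:
            for tok, p in step(seq).items():
                candidates.append((seq + [tok], logp + math.log(p)))
        candidates.sort(key=lambda x: x[1], reverse=True)
        beams = []
        for seq, logp in candidates:
            # finished sequences are set aside; others stay in the beam
            (done if seq[-1] == end else beams).append((seq, logp))
            if len(beams) == beam_width:
                break
        if not beams:
            break
    done.sort(key=lambda x: x[1], reverse=True)
    return done[:beam_width]

# toy language: "s" -> {"a": 0.6, "b": 0.4} -> "e" (end token)
table = {
    ("s",): {"a": 0.6, "b": 0.4},
    ("s", "a"): {"e": 1.0},
    ("s", "b"): {"e": 1.0},
}
best = beam_search(lambda seq: table[tuple(seq)], "s", "e", beam_width=2)
```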
Fig. 4 shows several examples of generated descriptions. For each scan, the top three predictions given by the model are shown, labeled Pd1, Pd2, and Pd3, in decreasing order of preference probability. More examples are given in the supplementary materials. Fig. 4(a) shows a normal case with increased lung markings in both lungs. The model correctly recognizes the situation and generates descriptions with "increased lung markings in both lungs" in Pd1 and Pd3, while in Pd2 the model says "no obvious abnormalities". Fig. 4(b) shows the scan of a patient with chronic bronchitis and inflammation, a conclusion drawn from the image as well as from the medical history. The model reports "increased lung markings" in Pd1 and Pd2, and directly gives "bronchitis" in Pd3, which is remarkable since the model has no information about the medical history. Fig. 4(c) is a case with pleural effusion on the right side. The model identifies the symptom and correctly generates a report for it. For the case in Fig. 4(d), the model correctly recognizes the increased size of the heart and also identifies the image as a postoperative view. It is not known whether the model draws this conclusion from the thin, bright strips near the heart region.
Figure 5 shows the alignment of generated words with the relevant parts of the CXR images. In general, the alignments are consistent with human intuition. The alignments are enabled by the attention mechanism and provide another level of understanding of how the network works. They also facilitate debugging of the results.
The quality of the automatically generated reports was also evaluated by human experts, as follows. 100 CXR scans were randomly extracted from the testing dataset and fed one by one to the neural network to generate reports. Another 100 CXR scans and the corresponding raw reports, written by human radiologists, were randomly extracted from the same dataset. These 200 scans and the associated reports were put together, shuffled, and sent to experts for inspection. Two radiologists (of associate-chief-physician level) were invited to examine the images and assess the quality of the associated reports without knowing the origin of each report, human or machine; this was to prevent possible bias toward either. The radiologists gave each report a score from 1 to 5 according to the following standard. A report with all conditions identified and accurately described was scored 5. A report with major conditions identified correctly but minor problems outside the chest missed (e.g., scoliosis, foreign matter in vitro) was scored 4. A report with major conditions identified correctly but minor problems inside the chest missed (e.g., old lesions, fibrous stripes, post thoracic surgery, aortic calcification) was scored 3. If major conditions were identified but described inaccurately, a score of 2 was given. If major conditions were missed or identified incorrectly, the score was 1.
Figure 6 shows the score distributions for two groups of reports. It can be seen that for both groups, the majority are scored 5. For the group of reports given by human, 77% are scored 5; while for the reports from the model, 72% are scored 5. At the level of score 4, these two numbers are 9% and 14%, respectively. If we assume the scores of 4 or above are acceptable, then both groups have 86% falling in this range, suggesting that the neural network is able to generate reports with quality comparable with that of human experts.
Discussion and Conclusion
In summary, we developed a scheme that automatically generates radiological reports/descriptions for given CXR images, based on a deep convolutional neural network and a recurrent neural network with an attention mechanism. We built a database containing more than 12,000 CXR scans, trained the model, and then evaluated the quality of the generated descriptions. The comparison between the generated descriptions and the ground truth gave a CIDEr value of 5.8. We also blended the generated descriptions with those written by radiologists and invited other radiologists to score them. Among the reports given by radiologists, 77% received the highest score of 5, while for the reports generated by the model, 72% were scored 5. For reports with a score of 4, the percentages were 9% and 14%, respectively. Therefore, the model is able to generate reports of high quality, comparable to that of radiologists, and has the potential to improve significantly as more training data become available.
The model developed here has some particular features. First, it learns from the raw radiological reports and is able to directly utilize the huge volume of CXR data generated in hospitals; no additional labeling work is required. This feature is particularly useful since the acquisition of relevant annotations/labels is very expensive in the medical field. Second, the model outputs a description for a given CXR image instead of classifying it into a disease category. The rationale for this design is as follows. In clinical practice, it is not always feasible to draw solid conclusions about underlying diseases solely from CXR images. For example, prominent/increased lung markings may indicate an infection, chronic bronchitis, interstitial lung disease, heart failure, or normal aging. When such a symptom is observed, it is more appropriate to simply describe it rather than give a disease classification. The model is designed to follow this strategy. Moreover, this behavior is similar to what radiologists usually do in their daily practice.
However, the model still requires improvement. Since the model is an end-to-end architecture that directly learns from reports and also generates reports, it does not explicitly give classification results. This makes it difficult to quantitatively evaluate the model performance. Currently we rely on human inspection for this purpose. We are dealing with this problem by adding a classification module in the neural network.
We believe the automated AI system developed in this work is useful and will greatly reduce the labor of doctors in the near future.
Ethics Committee Approval
The usage of the above-described chest X-ray data for this study was approved by the Ethics Committee of Drum Tower Hospital, Nanjing University, under document number 2019-100-01.
Role of the Funding Source
The funding source had no influence on the scientific program and no role in the writing of this report or in the decision to submit it for publication.
Conflict of interests
The authors declare no conflict of interest.
- Tudor et al. (1997) G. Tudor, D. Finlay, N. Taub, An assessment of inter-observer agreement and accuracy when reporting plain radiographs, Clinical radiology 52 (1997) 235–238. https://doi.org/10.1016/S0009-9260(97)80280-2.
- Rajpurkar et al. (2017) P. Rajpurkar, J. Irvin, K. Zhu, B. Yang, H. Mehta, T. Duan, D. Ding, A. Bagul, C. Langlotz, K. Shpanskaya, Chexnet: Radiologist-level pneumonia detection on chest x-rays with deep learning, arXiv preprint arXiv:1711.05225 (2017). https://arxiv.org/abs/1711.05225.
- Ypsilantis and Montana (2017) P. P. Ypsilantis, G. Montana, Learning what to look in chest x-rays with a recurrent visual attention model, arXiv preprint arXiv:1701.06452 (2017). https://arxiv.org/abs/1701.06452.
- Pesce et al. (2017) E. Pesce, P. P. Ypsilantis, S. Withey, R. Bakewell, V. Goh, G. Montana, Learning to detect chest radiographs containing lung nodules using visual attention networks, arXiv preprint arXiv:1712.00996 (2017). https://arxiv.org/abs/1712.00996.
- Islam et al. (2017) M. T. Islam, M. A. Aowal, A. T. Minhaz, K. Ashraf, Abnormality detection and localization in chest x-rays using deep convolutional neural networks, arXiv preprint arXiv:1705.09850 (2017). https://arxiv.org/abs/1705.09850.
- Yao et al. (2017) L. Yao, E. Poblenz, D. Dagunts, B. Covington, D. Bernard, K. Lyman, Learning to diagnose from scratch by exploiting dependencies among labels, arXiv preprint arXiv:1710.10501 (2017). https://arxiv.org/abs/1710.10501.
- Yan et al. (2018) C. Yan, J. Yao, R. Li, Z. Xu, J. Huang, Weakly supervised deep learning for thoracic disease classification and localization on chest x-rays, in: Proceedings of the 2018 ACM International Conference on Bioinformatics, Computational Biology, and Health Informatics, ACM, 2018, pp. 103–110. https://doi.org/10.1145/3233547.3233573.
- Guan et al. (2018) Q. Guan, Y. Huang, Z. Zhong, Z. Zheng, L. Zheng, Y. Yang, Diagnose like a radiologist: Attention guided convolutional neural network for thorax disease classification, arXiv preprint arXiv:1801.09927 (2018). https://arxiv.org/abs/1801.09927.
- Rubin et al. (2018) J. Rubin, D. Sanghavi, C. Zhao, K. Lee, A. Qadir, M. Xuwilson, Large scale automated reading of frontal and lateral chest x-rays using dual convolutional neural networks, arXiv preprint arXiv:1804.07839 (2018). https://arxiv.org/abs/1804.07839.
- Goodfellow et al. (2016) I. Goodfellow, Y. Bengio, A. Courville, Deep Learning, Adaptive Computation and Machine Learning series, MIT Press, 2016.
- Silver et al. (2016) D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, Mastering the game of go with deep neural networks and tree search, Nature 529 (2016) 484–489. https://doi.org/10.1038/nature16961.
- Hui Y et al. (2015) H. Y. Xiong, B. Alipanahi, L. J. Lee, H. Bretschneider, D. Merico, R. K. C. Yuen, Y. Hua, S. Gueroussov, H. S. Najafabadi, T. R. Hughes, The human splicing code reveals new insights into the genetic determinants of disease, Science 347 (2015) 1254806. https://doi.org/10.1126/science.1254806.
- Xu et al. (2015) K. Xu, J. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhudinov, R. Zemel, Y. Bengio, Show, attend and tell: Neural image caption generation with visual attention, in: International conference on machine learning, 2015, pp. 2048–2057.
- Zhang et al. (2017) Z. Zhang, Y. Xie, F. Xing, M. McGough, Mdnet: A semantically and visually interpretable medical image diagnosis network, in: Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 6428–6436. https://doi.org/10.1109/CVPR.2017.378.
- Wang et al. (2018) X. Wang, Y. Peng, L. Lu, Z. Lu, R. M. Summers, Tienet: Text-image embedding network for common thorax disease classification and reporting in chest x-rays, in: Computer Vision and Pattern Recognition, 2018, pp. 9049–9058.
- Huang et al. (2017) G. Huang, Z. Liu, L. V. D. Maaten, K. Q. Weinberger, Densely connected convolutional networks, in: Computer Vision and Pattern Recognition, 2017, pp. 2261–2269. https://doi.org/10.1109/CVPR.2017.243.
- Hochreiter and Schmidhuber (1997) S. Hochreiter, J. Schmidhuber, Long short-term memory, Neural Computation 9 (1997) 1735–1780. https://doi.org/10.1162/neco.1997.9.8.1735.
- Zaremba et al. (2014) W. Zaremba, I. Sutskever, O. Vinyals, Recurrent neural network regularization, arXiv preprint arXiv:1409.2329 (2014). https://arxiv.org/abs/1409.2329.
- Sun (2012) J. Sun, Chinese word segmentation module, https://github.com/fxsjy/jieba, 2012. (accessed 7 October 2012).
- Wang et al. (2017) X. Wang, Y. Peng, L. Lu, Z. Lu, M. Bagheri, R. M. Summers, Chestx-ray8: Hospital-scale chest x-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases, in: Computer Vision and Pattern Recognition, 2017, pp. 3462–3471. https://doi.org/10.1109/CVPR.2017.369.
- Vedantam et al. (2015) R. Vedantam, C. L. Zitnick, D. Parikh, Cider: Consensus-based image description evaluation, in: Computer Vision and Pattern Recognition, 2015, pp. 4566–4575. https://doi.org/10.1109/CVPR.2015.7299087.
- Sutskever et al. (2014) I. Sutskever, O. Vinyals, Q. V. Le, Sequence to sequence learning with neural networks, in: Advances in neural information processing systems, 2014, pp. 3104–3112.