Chest X-rays (CXR) are among the most commonly performed medical imaging exams, forming part of the initial diagnostic workup and screening processes in clinical settings ranging from the primary care office to the emergency department. Given the widespread use of this modality, automation of the CXR read has become an important goal of the medical imaging community [1, 2]. These efforts have been aided by the publication of two fairly large public datasets [3, 4].
The use of these image datasets for classifier training required natural language processing (NLP) to extract training labels from the CXR radiology reports and build labeled datasets at scale. Viewing any CXR radiology report, one soon notices three main categories for each finding: 1) the finding label is affirmed (positive), 2) the finding label is negated, and 3) the finding label is not mentioned in the report. In fact, because CXR is often used as a screening exam to rule out abnormal findings, a large number of sentences in most reports specifically state that certain findings are not present (negated). An example would be "no pneumothorax, pleural effusion or consolidation". Directly predicting a negated finding can therefore be just as useful clinically as a positive finding prediction, because the information still helps guide subsequent patient management.
It is less certain, however, how one should deal with the "no mention" category across the different finding types, which represents an even larger proportion of the whole label space. Given the wide clinical applications of CXRs and over a hundred different types of findings of varying prevalence, there are multiple reasons why a finding might not be mentioned in the radiology report. The "no mention" cases could be due to: 1) true negation: the finding is not present and also not clinically important enough to specifically negate in the report, or 2) false negative: the finding is present but the radiologist missed it or did not think it was clinically relevant enough to mention in that particular setting (e.g. an irrelevant chronic finding like shoulder arthritis in an acute trauma case).
Therefore, adjusting the training of classifiers built on NLP-labeled image collections to deal with the "no mention" cases is a key problem: without it, a large number of CXR examples that are essentially only "partially" labeled must be discarded. To inform our experiments, we performed a manual exploratory analysis of three open-source CXR datasets [5, 6, 4] and a radiology literature review of missed findings in CXRs. The exploratory analysis of 150 random CXR images whose NLP labels were "no findings" or "normal" showed that 6% of the images had an abnormal anatomical finding. The literature search suggested that lung nodules and fractures have a missed-finding rate of 20-40% amongst patients who eventually had the finding confirmed [7, 8, 9].
This paper explores a way to optimize the training of a disease finding classifier when both positive and negated labels are present, by addressing the uncertainty of the "no mention" cases for each label. We propose a scheme that applies clinically reasonable class weight modifiers to the loss function for the "no mention" cases during training. Our clinically guided hypothesis is that if a positive finding is not mentioned in the report, we are more certain it is a true negative; if a negated finding is not mentioned, it has a higher chance of being a false negative. We therefore propose a higher weight for the "no mention" case of the positive label than for the corresponding negated label, to reduce the influence of the ambiguous "no mention" cases.
We train two different deep neural network architectures and show that in both cases the use of these class weight modifiers with the loss function results in improved accuracy in the detection of negated labels. The studied architectures are DenseNet and a custom network architecture reported in this paper for the first time.
To build a deep neural network that produces the findings necessary to compose a CXR report, we needed a very large number of labeled images. We achieved this by automatic text analysis of the reports accompanying the MIMIC-CXR dataset [3]. In this paper, we mostly discuss the process of building the finding classifier and the novel loss function and architecture proposed here, with only a brief description of the text analysis methodology that produced the labels from the text reports.
2.1 Label extraction
A top-down knowledge-driven plus a bottom-up text curation process was used to identify a set of unique finding concepts relevant to CXRs. We also used an NLP concept expansion engine [10] to semantically map the different ways a finding can be described in reports to a discrete finding label set validated by radiologists. We then applied context recognition NLP to differentiate between negated and affirmed instances of each finding mention [11]. Where a CXR report did not mention a finding, it was flagged as a "no mention" case. We picked the three most frequently occurring finding labels, and their negated versions, to conduct the experiments in this study.
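The mapping from extracted report mentions to the three label categories can be sketched as follows. This is a minimal illustration only: the finding vocabulary and the `(finding, polarity)` mention tuples are assumed to come from the concept-expansion and context-recognition NLP engines described above, and the names used here are hypothetical.

```python
# Illustrative three-way labeling of one report, given NLP-extracted mentions.
FINDINGS = ["consolidation", "pneumothorax", "pulmonary edema"]

def label_report(mentions):
    """mentions: list of (finding, polarity), polarity in {"affirmed", "negated"}.
    Returns {finding: "positive" | "negated" | "no mention"}."""
    # Every finding starts as "no mention"; explicit mentions override it.
    labels = {f: "no mention" for f in FINDINGS}
    for finding, polarity in mentions:
        if finding in labels:
            labels[finding] = "positive" if polarity == "affirmed" else "negated"
    return labels

# e.g. the report sentence "no pneumothorax" yields a negated mention:
print(label_report([("pneumothorax", "negated")]))
```

Note that "no mention" is the default, which is why it dominates the label space for rare findings.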
2.2 Class weights and loss function
For each semantic label, the numbers of positive and negated samples can be highly unbalanced, and the class with the higher frequency can dominate the loss function and lead to suboptimal classifiers. Class weights are therefore commonly used to alleviate this issue. We compute the class weights as:

$$w_{pos} = \frac{N_{pos} + N_{neg}}{2 N_{pos}}, \qquad w_{neg} = \frac{N_{pos} + N_{neg}}{2 N_{neg}} \quad (1)$$

with $w_{pos}$ and $w_{neg}$ the weights for the positive and negated classes, and $N_{pos}$ and $N_{neg}$ the numbers of positive and negated samples, respectively. The loss of each semantic label can then be computed as the weighted binary cross-entropy:

$$L = -\, w_{pos}\, y \log(p) \;-\; w_{neg}\, (1 - y) \log(1 - p) \quad (2)$$

where $y$ = 1 for positive samples and 0 otherwise, and $p$ is the sigmoid output from the network prediction. The average loss over all semantic labels is used for backpropagation.
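A minimal numpy sketch of the balanced class weights and the weighted binary cross-entropy described here. The exact normalization of the weights is an assumption (balanced inverse-frequency weighting, so that equal class counts give both classes weight 1):

```python
import numpy as np

def class_weights(n_pos, n_neg):
    # Balanced inverse-frequency weights: the rarer class receives the
    # larger weight so it is not dominated in the loss.
    n = n_pos + n_neg
    return n / (2.0 * n_pos), n / (2.0 * n_neg)

def weighted_bce(y, p, w_pos, w_neg, eps=1e-7):
    # Weighted binary cross-entropy, averaged over samples.
    # y: 0/1 targets; p: sigmoid outputs of the network.
    p = np.clip(p, eps, 1.0 - eps)  # numerical stability
    return float(np.mean(-w_pos * y * np.log(p) - w_neg * (1 - y) * np.log(1 - p)))
```

With equal class counts both weights are 1 and the loss reduces to plain binary cross-entropy.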
2.3 Class weight modifiers
With the introduction of negations in the semantic labels, the interpretation of a sample with both negatives (0, 0) for a pair (a semantic label and its negation, e.g. "consolidation" and "no consolidation") can be ambiguous. Table 1 shows the possible combinations of a negated pair. For a semantic label, the positives (1's) are explicitly mentioned by radiologists, so they are certain findings. The negatives (0's), on the other hand, are not mentioned and can be ambiguous: apart from the negative meaning of the semantic label, a 0 can also mean the finding was missed or not considered. For example, for the negated label "no consolidation", a 0 can mean there is consolidation, or that "no consolidation" was not considered at all. Therefore, the (1, 1) pair is contradictory and should not exist, the (1, 0) and (0, 1) pairs follow the meanings of the 1's as these are conscious annotations, and the (0, 0) pair is ambiguous.
To handle this ambiguity in training, we propose weight modifiers that adjust the class weights of each sample with the (0, 0) negated pair when computing the loss function. Although 0's are ambiguous in general, the level of ambiguity differs between a semantic label and its negation. For findings such as "consolidation", the chance of being missed or not considered should be low, as radiologists are trained to report anomalies. For negations such as "no consolidation", the chance of being not considered is high, as radiologists are usually not required to explicitly mention the non-existence of all findings. Therefore, the weight modifiers for a semantic label ($m_{pos}$) and its negation ($m_{neg}$) are different, and are given as:

$$m_{pos} \sim \mathcal{N}(\mu, \sigma), \qquad m_{neg} \sim \mathcal{N}(1 - \mu, \sigma) \quad (3)$$

with $\mathcal{N}$ the normal or Gaussian distribution with the given mean and standard deviation. $m_{pos}$ and $m_{neg}$ are multiplied by $w_{pos}$ and $w_{neg}$ in (1) during training. A larger $\mu$ means we trust a semantic label more than its negation, and vice versa. Instead of a constant modifier, a normal distribution is used to model the uncertainties caused by ambiguity. $\sigma$ was fixed at 0.05 in the experiments, while different values of $\mu$ were investigated.
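Reading the modifiers as per-sample draws from normal distributions centered at $\mu$ and $1 - \mu$ (our reconstruction of (3)), a sketch of how they could be generated for a (0, 0) sample:

```python
import numpy as np

def weight_modifiers(mu, sigma=0.05, rng=None):
    # For a sample whose (label, negation) pair is (0, 0), draw per-sample
    # modifiers m_pos ~ N(mu, sigma) and m_neg ~ N(1 - mu, sigma); these
    # multiply the class weights w_pos and w_neg of eq. (1) during training.
    # Non-(0, 0) samples would keep their class weights unmodified.
    rng = rng or np.random.default_rng()
    m_pos = rng.normal(mu, sigma)
    m_neg = rng.normal(1.0 - mu, sigma)
    return m_pos, m_neg
```

With $\mu$ = 0.9, the positive label's "no mention" case keeps nearly full weight (its absence is trusted as a true negative), while the negated label's weight is scaled down sharply.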
2.4 Network architectures
To show that the proposed weight modifiers are generally applicable, we performed experiments on a custom architecture and also on the widely used DenseNet architecture [12].
The custom architecture comprises the proposed Dilated Bottleneck (DB) blocks (Fig. 1). In each block, the efficient bottleneck architecture of ResNet is used so that a deeper network can be trained [13]. Dilated convolutions with dilation rates of 1 and 2 are also used to aggregate multi-scale context [14]. Identity mappings and pre-activations are used for better information propagation [15], and spatial dropouts with a dropout probability of 0.2 are used to alleviate overfitting [16]. Each block thus allows efficient learning of multi-scale information. To further alleviate overfitting, a Gaussian noise layer, global average pooling, and dropout with probability 0.5 are used with the cascaded DB blocks to form the network architecture. Images are resized to 128×128 for this architecture.
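The dilated convolutions are the key ingredient for multi-scale context: increasing the dilation rate widens the receptive field with the same number of weights. A minimal 1-D numpy illustration (not the actual 2-D DB block implementation, which uses learned kernels inside the bottleneck):

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation=1):
    # 'Valid' 1-D convolution with gaps of (dilation - 1) between kernel taps.
    k = len(kernel)
    span = (k - 1) * dilation + 1          # effective receptive field
    out = np.empty(len(x) - span + 1)
    for i in range(len(out)):
        out[i] = sum(kernel[j] * x[i + j * dilation] for j in range(k))
    return out

x = np.arange(8, dtype=float)
# dilation 1 sees 3 consecutive samples; dilation 2 spans 5 samples
# with the same 3 weights, aggregating wider context at no extra cost.
print(dilated_conv1d(x, [1.0, 1.0, 1.0], dilation=1))
print(dilated_conv1d(x, [1.0, 1.0, 1.0], dilation=2))
```

Running both rates in parallel inside a block, as the DB block does, lets each block combine fine and coarse context.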
We also applied DenseNet [12] to the same problem to show that the improvements from the modifiers can be reproduced on other networks. DenseNet utilizes skip connections to feed information to later layers. We used DenseNet with 201 layers and 18,319,554 trainable parameters.
2.5 Training strategy
Image augmentation with rigid transformations is used to avoid overfitting. As most of an image should be included, we limit the augmentation to rotation (10 degrees), shifting (10%), and scaling ([0.95, 1.05]). The probability of an image being transformed is 80%. The Adam optimizer is used with a learning rate of $10^{-3}$, a batch size of 64, and 20 epochs.
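The augmentation policy above can be sketched as a parameter sampler (function and key names are illustrative; applying the transform to pixels is omitted):

```python
import numpy as np

def sample_augmentation(rng):
    # Rigid-transform parameters matching the stated ranges: rotation within
    # 10 degrees, shift within 10%, scale in [0.95, 1.05], applied with
    # probability 0.8 (otherwise the image is left unchanged).
    if rng.random() >= 0.8:
        return {"rotate_deg": 0.0, "shift_frac": 0.0, "scale": 1.0}
    return {
        "rotate_deg": rng.uniform(-10.0, 10.0),
        "shift_frac": rng.uniform(-0.10, 0.10),
        "scale": rng.uniform(0.95, 1.05),
    }
```

Keeping the ranges small ensures most of the lung fields stay inside the frame after transformation.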
We used an IBM POWER9 Accelerated Computing Server (AC922), designed to accommodate the data-intensive characteristics of modern analytics and AI workloads by fully exploiting its GPU capabilities, eliminating I/O bottlenecks, and sharing memory across GPUs and CPUs. The machine is equipped with four NVIDIA V100 GPUs in the air-cooled configuration, which we used.
As a proof of concept, six semantic labels from three negated pairs ("consolidation", "no consolidation"), ("pneumothorax", "no pneumothorax"), and ("pulmonary edema", "no pulmonary edema") were used, resulting in 204k frontal chest X-ray images. These pairs were chosen intentionally because they occur with high frequency in the MIMIC-CXR dataset, making our experiments statistically robust. The breakdown of samples is listed in Table 2.
The dataset was divided into 70% for training, 10% for validation, and 20% for testing, and the testing results are reported. Different values of $\mu$ in (3) were investigated. A value of 0.9 means we trust a semantic label more than its negation, and a value of 0.1 means the opposite. Note that while all possible sample combinations are included in the training phase, at testing time we only evaluate on samples that are not ambiguously labeled, so we can measure performance changes without ambiguity.
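The 70/10/20 split can be sketched as a single shuffled partition of sample indices. This is a simplified sketch; the actual protocol (e.g. whether images from one patient are kept in the same split) is not specified here:

```python
import numpy as np

def split_indices(n, rng, fracs=(0.7, 0.1, 0.2)):
    # Shuffle all sample indices once, then cut into
    # train / validation / test by the given fractions.
    idx = rng.permutation(n)
    n_train = int(fracs[0] * n)
    n_val = int(fracs[1] * n)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]
```

Reporting only test-set results on the held-out 20% avoids any tuning leakage from the validation set.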
Our first observation is that a large number of cases in MIMIC-CXR radiology reports contained ambiguous disease findings (e.g. 50% ambiguous consolidation cases, 23% ambiguous pneumothorax cases, 66% ambiguous pulmonary edema cases). This shows the importance of modeling the ambiguity of labels during training.
Table 3 (excerpt): area under the ROC curve per label.
| Model | Consolidation | No consolidation | Pneumothorax | No pneumothorax | Pulmonary edema | No pulmonary edema |
| Dilated Block, baseline | 0.83 | 0.82 | 0.81 | 0.69 | 0.87 | 0.80 |
Dilated block network: The baseline performance of the Dilated Block network on the six labels, along with the performance at the best weight combination under the proposed method, are reported in Table 3. Fig. 2 depicts the ROC per label for all weight values from $\mu$ = 0.1 to $\mu$ = 0.9. The optimal weight was chosen based on the average area under the ROC curve over all six findings. The improvement is primarily on the negated labels, as expected. The area under the ROC curve for no pneumothorax increases from 0.69 to 0.80, and from 0.80 to 0.87 for no pulmonary edema. The performance change for no consolidation is smaller.
DenseNet results: DenseNet results are shown in Fig. 3 and the second half of Table 3. The optimal weight was likewise chosen based on the average area under the ROC curve. Again the improvement is primarily on the negated labels, with the area under the ROC curve for no pneumothorax increasing from 0.72 to 0.80, and from 0.81 to 0.87 for no pulmonary edema. Performance stays similar to baseline for the positive findings.
Examples of corrections: Since the test set consists only of non-ambiguous labels, the performance improvement translates into objectively more accurate findings. For illustration, Fig. 4 shows three test-set examples where the use of the proposed weight modifiers at training time changed the prediction from false positive to true negative. The text under each image is the original report written by the radiologist, with the negated finding of interest highlighted.
In this paper we presented a methodology to deal with the ambiguity of disease findings in radiology reports. Our approach models this ambiguity by adding a class weight modifier and evaluating a range of weights from 0.1 to 0.9 for their impact on classification accuracy in non-ambiguous test cases. The optimal balance of probabilities is that 80-90% of the ambiguous cases are negated disease findings. This was verified with two independent state-of-the-art neural networks. We observed a large improvement in the classification of negated disease findings on a very large dataset while maintaining similar levels of accuracy on positive disease findings. This work builds toward our effort to create a radiologist assistant application that performs similarly to an entry-level radiologist and prepares the first version of a radiology read for CXR.
-  X. Wang, Y. Peng, L. Lu, Z. Lu, M. Bagheri, and R. M. Summers, “ChestX-ray8: Hospital-scale chest X-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 2097–2106.
-  J. Laserson, C. D. Lantsman, M. Cohen-Sfady, I. Tamir, E. Goz, C. Brestel, S. Bar, M. Atar, and E. Elnekave, “Textray: Mining clinical reports to gain a broad understanding of chest x-rays,” in MICCAI, 2018, pp. 553–561.
-  A. E. W. Johnson, T. J. Pollard, S. J. Berkowitz, N. R. Greenbaum, M. P. Lungren, C.-y. Deng, R. G. Mark, and S. Horng, “MIMIC-CXR: A large publicly available database of labeled chest radiographs,” arXiv:1901.07042 [cs.CV], 2019.
-  X. Wang, Y. Peng, L. Lu, Z. Lu, M. Bagheri, and R. M. Summers, “Chestx-ray8: Hospital-scale chest x-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases,” in Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on. IEEE, 2017, pp. 3462–3471.
-  D. Demner-Fushman, M. D. Kohli, M. B. Rosenman, S. E. Shooshan, L. Rodriguez, S. Antani, G. R. Thoma, and C. J. McDonald, “Preparing a collection of radiology examinations for distribution and retrieval,” Journal of the American Medical Informatics Association, vol. 23, no. 2, pp. 304–310, 2015.
-  J. K. Gohagan, P. C. Prorok, R. B. Hayes, and B.-S. Kramer, “The prostate, lung, colorectal and ovarian (plco) cancer screening trial of the national cancer institute: history, organization, and status,” Controlled clinical trials, vol. 21, no. 6, pp. 251S–272S, 2000.
-  L. G. Quekel, A. G. Kessels, R. Goei, and J. M. van Engelshoven, “Miss rate of lung cancer on the chest radiograph in clinical practice,” Chest, vol. 115, no. 3, pp. 720–724, 1999.
-  P. M. de Groot, B. W. Carter, G. F. Abbott, and C. C. Wu, “Pitfalls in chest radiographic interpretation: blind spots,” in Seminars in roentgenology. Elsevier, 2015, vol. 50, pp. 197–209.
-  R. Harris and J. Harris Jr, “The prevalence and significance of missed scapular fractures in blunt chest trauma,” American Journal of Roentgenology, vol. 151, no. 4, pp. 747–750, 1988.
-  A. Coden, D. Gruhl, N. Lewis, M. Tanenblatt, and J. Terdiman, “Spot the drug! An unsupervised pattern matching method to extract drug names from very large clinical corpora,” in Healthcare Informatics, Imaging and Systems Biology (HISB), 2012 IEEE Second International Conference on. IEEE, 2012, pp. 33–39.
-  T. Syeda-Mahmood, R. Kumar, and C. Compas, “Learning the correlation between images and disease labels using ambiguous learning,” in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2015, pp. 185–193.
-  G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, “Densely connected convolutional networks.,” in CVPR, 2017, vol. 1, p. 3.
-  K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770–778.
-  F. Yu and V. Koltun, “Multi-scale context aggregation by dilated convolutions,” arXiv:1511.07122 [cs.CV], 2015.
-  K. He, X. Zhang, S. Ren, and J. Sun, “Identity mappings in deep residual networks,” in European Conference on Computer Vision, 2016, vol. 9908 of LNCS, pp. 630–645.
-  J. Tompson, R. Goroshin, A. Jain, Y. LeCun, and C. Bregler, “Efficient object localization using convolutional networks,” in IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 648–656.