Learning Invariant Feature Representation to Improve Generalization across Chest X-ray Datasets

by Sandesh Ghimire, et al.

Chest radiography is the most common medical imaging examination for screening and diagnosis in hospitals. Automatic interpretation of chest X-rays at the level of an entry-level radiologist can greatly benefit work prioritization and assist in analyzing a larger population. Consequently, several datasets and deep learning-based solutions have been proposed to identify diseases from chest X-ray images. However, these methods are vulnerable to shifts in the data source: a deep learning model that performs well when tested on the same dataset as its training data starts to perform poorly when tested on a dataset from a different source. In this work, we address this challenge of generalization to a new source by forcing the network to learn a source-invariant representation. Using an adversarial training strategy, we show that a network can be forced to learn such a representation. Through pneumonia-classification experiments on multi-source chest X-ray datasets, we show that this algorithm improves classification accuracy on a new source of X-ray data.




1 Introduction

Automatic interpretation and disease detection in chest X-ray images is a potential use case for artificial intelligence in reducing costs and improving access to healthcare. It is one of the most commonly requested imaging procedures, not only in the context of clinical examination but also for routine screening and even legal procedures such as health surveys for immigration purposes. Analysis of X-ray images through computer vision algorithms has therefore long been an important topic of research. Recently, with the release of several large open-source public datasets, deep learning-based image classification [21, 10] has found important applications in this area. The recent outbreak of the COVID-19 pandemic and the need for widespread screening have further amplified the need to identify pneumonia and consolidation findings on X-ray radiographs, as opposed to computed tomography (CT).

Most reported deep learning approaches are trained and tested on the same dataset and/or a single source. This is an unrealistic assumption for medical image analysis with widespread screening applications. In radiology, images inevitably come from different scanners, populations, or imaging settings, so test images can be expected to differ from those used in training. In non-quantitative imaging modalities such as X-ray, this inconsistency across datasets is even more drastic, and it is a significant hurdle to the adoption of automated disease classification algorithms in radiology practice. Generalization across X-ray sources is therefore necessary to make deep learning algorithms viable in clinical practice. This has recently been recognized by the Radiology editorial board, which encourages testing on external test sets [2]. Some works have addressed generalization through intensity normalization and adding Gaussian noise layers to neural networks [14], while others use a simple ensemble strategy as in [20].

Drawing ideas from causality and invariant risk minimization [1], we propose that the key to resolving this issue is to learn features that are invariant across several X-ray datasets and would remain valid even for new test cases. The main contribution of our work is enforcing feature invariance to the data source through an adversarial penalization strategy. We demonstrate this with different X-ray datasets that exhibit similar diseases but come from different sources/institutions. We use four chest X-ray datasets (three public and one internal) and validate our method through leave-one-dataset-out experiments of training and testing [21, 25, 12]. Given the recent interest in pneumonia-like conditions, we chose to target pneumonia and consolidation. We show that the out-of-source test error can be reduced with our proposed adversarial penalization method. We also perform experiments using Grad-CAM [22] to create activation maps and qualitatively evaluate and compare the behavior of the baseline and proposed methods in terms of focus on relevant areas of the image.

2 Related Work

Earlier works on generalization concentrated on statistical learning theory [24, 3], studying worst-case generalization bounds based on the capacity of the classifier. Later, differing viewpoints emerged, such as PAC-Bayes [19], information-theoretic [26], and stability-based methods [4]. Modern works on generalization, however, find statistical learning theory insufficient [27] and propose other theories from an analytical perspective [13]. Our work is quite different from these: most concern in-source generalization and assume that data are independent and identically distributed (i.i.d.) in both training and testing. We instead start with the assumption that training and testing data may come from different distributions but share some common, causal features. Based on the principle of Invariant Risk Minimization [1], we propose that learning invariant features from multiple sources could lead to learning causal features that help generalization to new sources.

Another closely related area is domain adaptation [23, 7] and its application in medical imaging [5]. In a domain adaptation setting, data are available from both source and target domains, but labels are available only for the source domain; the objective is to adapt knowledge from the source to predict labels on the target. Although similar in spirit, our work differs from domain adaptation in that we have no target data to adapt to during training. Distribution-matching ideas such as Maximum Mean Discrepancy (MMD) [16, 15] are also related to our work. In comparison, the adversarial approach is powerful and extends easily to more than two sources, which is cumbersome to realize with MMD.

3 Method

Causation as Invariance: Following reasoning similar to [1], we argue that extracting features that are invariant across many different sources helps the network focus on causal features. This in turn should help the network generalize to new sources, under the assumption that the same causal features are present in X-ray images obtained in the future.

To force the network to learn invariant features, we propose the architecture shown in Fig. 1, based on an adversarial penalization strategy. It has three major components: a feature extractor, a discriminator, and a classifier. Drawing ideas from unsupervised domain adaptation [7], we train the discriminator to classify, from the latent features extracted by the feature extractor alone, which source an image came from. The discriminator is trained to identify the source well from the features. The feature extractor, however, is trained adversarially to make this classification as difficult as possible. In this way, the feature extractor is forced to extract features that are invariant across sources: if any element of the latent feature were indicative of the source, the discriminator could exploit it. Eventually, we expect the feature extractor and discriminator to reach an equilibrium where the feature extractor generates features that are invariant to the sources. Meanwhile, the same features are fed to the disease classifier, which is trained to identify disease. Hence the features must be source-invariant and, at the same time, discriminative of the disease. Next, we describe the three main components of our network.

1. Feature extractor: The feature extractor is the first component; it takes the input X-ray image and produces a latent representation. In Fig. 1, the feature extractor consists of a ResNet-34 [9] architecture up to layer 4, followed by a global average pooling layer.
2. Discriminator: The discriminator consists of fully connected layers that take the features after the global average pooling layer and classify which source the image came from. If adversarial training reaches equilibrium, feature representations from different sources are indistinguishable (source-invariant).
3. Classifier: The output of the feature extractor must not only be source-invariant but also discriminative enough to classify X-ray images by the presence or absence of disease. In our simple model, the classifier is a fully connected layer followed by a sigmoid.
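As an illustration, the three components can be sketched in PyTorch as follows. This is a minimal sketch: the tiny convolutional stack stands in for the ResNet-34 truncated at layer 4, and `feat_dim` and `n_sources` are assumed sizes chosen for brevity, not the paper's values.

```python
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    """Toy stand-in for ResNet-34 up to layer 4 + global average pooling."""
    def __init__(self, feat_dim=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, feat_dim, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global average pooling
        )

    def forward(self, x):
        return self.conv(x).flatten(1)  # B x feat_dim

class Classifier(nn.Module):
    """Fully connected layer + sigmoid for disease presence/absence."""
    def __init__(self, feat_dim=32):
        super().__init__()
        self.fc = nn.Linear(feat_dim, 1)

    def forward(self, z):
        return torch.sigmoid(self.fc(z))

class Discriminator(nn.Module):
    """Fully connected layers predicting which source dataset the image came from."""
    def __init__(self, feat_dim=32, n_sources=3):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(feat_dim, 16), nn.ReLU(), nn.Linear(16, n_sources))

    def forward(self, z):
        return self.fc(z)  # source logits
```

Both the classifier and the discriminator consume the same pooled feature vector, which is what lets the adversarial signal and the disease-classification signal shape a single shared representation.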

3.1 Training

Figure 1: Proposed architecture to learn source invariant representation while simultaneously classifying disease labels

From Fig. 1, writing $f_\theta$ for the feature extractor, $c_\psi$ for the classifier, and $d_\phi$ for the discriminator, the disease classification loss and source classification (discrimination) loss are respectively defined as

$$L_{cls}(\theta, \psi) = \ell_{bce}(c_\psi(f_\theta(x)), y), \qquad (1)$$
$$L_{disc}(\theta, \phi) = \ell_{ce}(d_\phi(f_\theta(x)), s), \qquad (2)$$

where $\ell_{bce}$ is the binary cross-entropy loss on the disease label $y$ and $\ell_{ce}$ is the cross-entropy loss on the source label $s$. We train the extractor, classifier, and discriminator by solving the following min-max problem:

$$\min_{\theta, \psi} \max_{\phi} \; L_{cls}(\theta, \psi) - \lambda L_{disc}(\theta, \phi). \qquad (3)$$

It is easy to see that this is a two-player min-max game in which the two players optimize the objective in opposite directions: note the negative sign in front of the discrimination loss and the positive sign in front of the classification loss in eq. (3). Such min-max games are notorious in the GAN literature for being difficult to optimize. In our case, however, optimization was smooth and we encountered no stability issues.

Two methods for adversarial optimization are prevalent in the literature. The first, originally proposed in [8], trains the discriminator while freezing the feature extractor, then freezes the discriminator and trains the feature extractor with the sign of the loss inverted. The second, proposed in [7], uses a gradient reversal layer to train both the discriminator and the feature extractor in a single pass. Note that the former allows multiple discriminator updates per feature extractor update while the latter does not, and many works in the GAN literature report that multiple discriminator updates help learn better discriminators. In our experiments, we tried both and found no significant difference between the two methods in stability or results; hence, we used gradient reversal because it is time-efficient. Optimizing the discriminator benefits from a balanced dataset from each source. To account for imbalance, we resample data from the smaller sources until the largest source is exhausted, ensuring a balanced stream of data from each source for discriminator training.
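A gradient reversal layer of the kind described in [7] can be sketched in a few lines of PyTorch: an identity in the forward pass whose backward pass negates (and scales) the gradient. The scaling factor `lamb` is an assumed hyperparameter name.

```python
import torch
from torch.autograd import Function

class GradReverse(Function):
    """Identity in the forward pass; multiplies gradients by -lamb in the backward pass."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # The reversed, scaled gradient flows back into the feature extractor;
        # no gradient is returned for the constant lamb.
        return -ctx.lamb * grad_output, None

def grad_reverse(x, lamb=1.0):
    return GradReverse.apply(x, lamb)
```

Feeding the discriminator `grad_reverse(features)` realizes the min-max game in a single pass: the discriminator's own weights are updated to minimize the source-classification loss, while the feature extractor receives the negated gradient and is pushed to make source classification harder.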

3.2 Grad-CAM Visualization

Grad-CAM [22] identifies the locations in an image that are important for downstream tasks such as classification. It visualizes the last feature-extraction layer of a neural network, weighted by the backpropagated gradients and interpolated to the input image size.

In this paper, we use Grad-CAM to visualize which locations in the X-ray the neural network attends to when trained with and without adversarial penalization. We hypothesize that a method extracting source-invariant features should extract features more relevant to the disease being identified, whereas a network trained without specific guidance toward source invariance would be less focused on the specific disease and may attend to irrelevant features in the input X-ray image. Using Grad-CAM, we qualitatively verify this hypothesis.
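The Grad-CAM computation just described (gradient-weighted channel pooling of the last feature map, followed by ReLU and upsampling) can be sketched as follows. The `TinyCNN` model and its sizes are illustrative assumptions standing in for the ResNet-34 classifier.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyCNN(nn.Module):
    """Toy stand-in classifier; returns logits and the last feature map."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 8, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(8, 1)

    def forward(self, x):
        fmap = self.features(x)          # B x C x H x W
        pooled = fmap.mean(dim=(2, 3))   # global average pooling
        return self.head(pooled), fmap

def grad_cam(model, x):
    logits, fmap = model(x)
    fmap.retain_grad()
    logits.sum().backward()                               # gradient of the class score
    weights = fmap.grad.mean(dim=(2, 3), keepdim=True)    # per-channel importance
    cam = F.relu((weights * fmap).sum(dim=1))             # weighted sum over channels
    cam = F.interpolate(cam.unsqueeze(1), size=x.shape[-2:],
                        mode="bilinear", align_corners=False).squeeze(1)
    return cam / (cam.amax(dim=(1, 2), keepdim=True) + 1e-8)  # normalize to [0, 1]
```

The resulting map has one value per pixel, highlighting the regions most responsible for the predicted score.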

4 Datasets and Pneumonia/Consolidation Labeling Scheme

We used three large-scale publicly available datasets for our study: the NIH ChestX-ray14 dataset [25], the MIMIC-CXR dataset [12], and the Stanford CheXpert dataset [10]. In addition, a smaller internally curated dataset of images originating from Deccan Hospital in India was used.

We are interested in the classification task of detecting signs of pneumonia and consolidation in chest X-ray images. Consolidation is a sign of disease (occurring when alveoli are filled with something other than air, such as blood), whereas pneumonia is a disease that often causes consolidation. Radiologists use consolidation, potentially with other signs and symptoms, to diagnose pneumonia, and a radiology report may mention either. We therefore used both to build a pneumonia/consolidation dataset.

We used all four datasets listed above. The Stanford CheXpert dataset [10] is released with images and labels but without accompanying reports. The NIH dataset is also publicly available with images only; a subset of 16,000 images from this dataset was reported de novo by crowd-sourced radiologists. For the MIMIC dataset, we have full-fledged de-identified reports, provided to us under a consortium agreement for the recently released MIMIC-4 collection [11]. For the Deccan collection, we have de-identified reports along with images. For the NIH, MIMIC, and Deccan datasets, we used our natural language processing (NLP) labeling pipeline, described below, to find positive and negative examples in the reports; for the Stanford dataset, we used the labels provided.

The pipeline utilizes a CXR ontology curated by our clinicians from a large corpus of CXR reports using a concept expansion tool [6]. Abnormal terminologies from reports are lexically and semantically grouped into radiology finding concepts. Each concept is then ontologically categorized under major anatomical structures in the chest (lungs, pleura, mediastinum, bones, major airways, and other soft tissues) or medical devices (including various prostheses, post-surgical material, support tubes, and lines). Given a CXR report, the text pipeline 1) tokenizes the sentences with NLTK [17], 2) excludes any sentence from the history and indication sections of the report via key section phrases, so only the main body of the text is considered, 3) extracts finding mentions from the remaining sentences, and 4) finally performs negation and hypothetical context detection on the last relevant sentence for each finding label. Finally, clinician-driven filtering rules are applied to some finding labels to increase specificity (e.g. "collapse" means "fracture" if mentioned with bones, but "lobar/segmental collapse" if mentioned with lungs).
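As a toy illustration of steps 1-4, the following sketch labels a report for two findings. The finding terms, section prefixes, and negation cues here are simplified assumptions; they stand in for the clinician-curated ontology and the NLTK-based tokenization, not reproduce them.

```python
import re

FINDING_TERMS = {"pneumonia", "consolidation"}  # simplified concept list (assumed)
NEGATION_CUES = re.compile(r"\b(no|without|negative for|free of|clear of)\b")

def label_report(report: str) -> dict:
    """Toy pipeline: split sentences, skip history/indication sections,
    find mentions, and apply negation to the last relevant sentence per finding."""
    last_mention = {}
    for sentence in re.split(r"(?<=[.!?])\s+", report.lower()):   # step 1: tokenize
        if sentence.startswith(("history:", "indication:")):
            continue                                              # step 2: skip sections
        for term in FINDING_TERMS:
            if term in sentence:                                  # step 3: find mentions
                # step 4: negation on the last relevant sentence wins
                last_mention[term] = not NEGATION_CUES.search(sentence)
    return last_mention
```

For example, `label_report("Indication: cough. There is focal consolidation. No pneumonia.")` yields a positive consolidation label and a negative pneumonia label.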

Using the NLP-generated labels and the provided labels (for CheXpert), we created a training dataset by including images with a positive indication of pneumonia or consolidation in our positive set and those with no indication of pneumonia or consolidation in the negative set. Table 1 lists the number of images from each class for each dataset. These new labels/images will be open-sourced to encourage further research before MICCAI 2020.

Leave-out Dataset | Train Positive | Train Negative | Test Positive | Test Negative
Stanford | 15183 | 123493 | 1686 | 13720
MIMIC | 83288 | 49335 | 23478 | 13704
NIH | 1588 | 6374 | 363 | 1868
Deccan Hospital | 50 | 1306 | 12 | 379
Total | 100109 | 180508 | 25539 | 29671
Table 1: The distribution of the datasets used in the paper, broken down into Positive (pneumonia/consolidation) and Negative (no pneumonia/consolidation) cases.

5 Experiments and Results

We use the four datasets shown in Table 1. We use a simple ResNet-34 architecture with a classifier as our baseline, so that enforcement of invariance through the discriminator is the only difference between the baseline and the proposed method. Both architectures are evaluated with a leave-one-dataset-out strategy: we train on three of the four datasets and leave one out. Each experiment has two test sets: 1) an in-source test set, drawn only from unseen samples of the datasets used in training, and 2) an out-of-source test set, containing only samples from the fourth dataset, which is not used in training. Note that all images from all sources are resized to 512x512.

Leave-out Dataset | Baseline (in-source) | Baseline (out-of-source) | Proposed (in-source) | Proposed (out-of-source)
Stanford | 0.74 | 0.65 | 0.74 | 0.70
MIMIC | 0.80 | 0.64 | 0.80 | 0.64
NIH | 0.82 | 0.73 | 0.71 | 0.76
Deccan Hospital | 0.73 | 0.67 | 0.75 | 0.70
Table 2: Classification results in terms of area under the ROC curve for the baseline ResNet-34 model and our proposed architecture. Each row lists one leave-one-dataset-out experiment.
Figure 2: The qualitative comparison of the activation maps of the proposed and the baseline models with the annotation of an expert radiologist. The first column shows the region marked by the expert as the area of the lung affected by pneumonia. The second column shows the original image for reference. The third and fourth columns are the Grad-CAM activation of the proposed and baseline models respectively.

The results of the classification experiments are listed in Table 2. We chose the area under the ROC curve (AUC-ROC) as the classification metric, since this is the standard metric in computer-aided diagnosis. The first observation is that in all experiments, for both the baseline and our proposed architecture, AUC-ROC decreases from the in-source test set to the out-of-source test set, as expected. However, this drop is generally smaller for our proposed architecture. For example, when the Stanford dataset is left out of training, the baseline's difference between the in-source and out-of-source tests is 0.09 (from 0.74 to 0.65), whereas for our proposed architecture the drop is only 0.05 (from 0.74 to 0.70). While performance on the in-source test stays flat, we gain 0.05 in AUC-ROC (from 0.65 to 0.70) on the out-of-source test.
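The AUC-ROC values compared above have a simple rank interpretation: the probability that a randomly chosen positive case is scored above a randomly chosen negative one (ties counted half). The sketch below is a pure-Python illustration of that definition, not the evaluation code used in the paper.

```python
def auc_roc(scores, labels):
    """AUC-ROC as the probability that a random positive outranks a random negative."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    # Count pairwise wins; ties contribute half a win.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Under this reading, the 0.70 out-of-source score means the proposed model ranks a random positive above a random negative 70% of the time on unseen-source data.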

A similar pattern holds for the NIH and Deccan datasets: in both cases, the drop in performance due to out-of-source testing is smaller for the proposed architecture than for the baseline classifier. Surprisingly, for the NIH dataset, out-of-source testing of the proposed method results in higher accuracy than in-source testing, which we attribute to heavy regularization during training. For the MIMIC dataset, performance is the same for the baseline and the proposed method.

Fig. 2 shows Grad-CAM visualizations that qualitatively differentiate the regions and features attended to by the baseline and proposed models while classifying X-ray images. Three positive examples and their activation maps are shown. The interpretation of activation maps in chest X-ray images is generally challenging; however, the evident pattern is that the heatmaps from the proposed method (third column) tend to agree with the clinician's markings (first column) more than those of the baseline (fourth column). Furthermore, the proposed method shows fewer spurious activations. This is especially true in row 2, where the baseline falsely highlights the opacity from the shoulder blades as lung pneumonia.

To compare our algorithm with a domain generalization approach, we tested the method of [18], which uses pseudo-clusters and has state-of-the-art performance on natural images. On the Stanford leave-out set, the AUC-ROC for the in-source and out-of-source tests was 0.74 and 0.68 respectively, slightly below the performance reported in row 1 of Table 2.

6 Conclusion and future work

We tackled the problem of out-of-source generalization in the context of chest X-ray image classification by proposing an adversarial penalization strategy to obtain a source-invariant representation. In experiments, we show that the proposed algorithm provides improved generalization compared to the baseline. In the course of this work, we developed labeling methods and applied them to the text reports accompanying four datasets to find positive samples for pneumonia/consolidation. These pneumonia/consolidation label lists constitute a new resource for the community and will be released publicly.

It is important to note that performance on the in-source test set does not necessarily increase with our method. It mostly stays flat, except in one case, the NIH set, where the baseline beats the proposed method on the in-source test. This can be understood as a trade-off between in-source and out-of-source performance induced by the strategy of learning an invariant representation: by learning invariant features, our objective is to improve on out-of-source test cases even if in-source performance degrades. A possible route for further examination is the impact of the sizes of the training datasets and the left-out set on the behavior of the model. It is noteworthy that we kept the feature extractor and classifier components of our architecture fairly simple to avoid the excessive computational cost of adversarial training with large datasets and image sizes. A more sophisticated architecture might enhance disease classification performance and is left as future work.


  • [1] M. Arjovsky, L. Bottou, I. Gulrajani, and D. Lopez-Paz (2019) Invariant risk minimization. arXiv preprint arXiv:1907.02893. Cited by: §1, §2, §3.
  • [2] D. A. Bluemke, L. Moy, M. A. Bredella, B. B. Ertl-Wagner, K. J. Fowler, V. J. Goh, E. F. Halpern, C. P. Hess, M. L. Schiebler, and C. R. Weiss (2020) Assessing radiology research on artificial intelligence: a brief guide for authors, reviewers, and readers—from the radiology editorial board. Radiological Society of North America. Cited by: §1.
  • [3] O. Bousquet, S. Boucheron, and G. Lugosi (2003) Introduction to statistical learning theory. In Summer School on Machine Learning, pp. 169–207. Cited by: §2.
  • [4] O. Bousquet and A. Elisseeff (2002) Stability and generalization. Journal of machine learning research 2 (Mar), pp. 499–526. Cited by: §2.
  • [5] C. Chen, Q. Dou, H. Chen, and P. Heng (2018) Semantic-aware generative adversarial nets for unsupervised domain adaptation in chest x-ray segmentation. In International Workshop on Machine Learning in Medical Imaging, pp. 143–151. Cited by: §2.
  • [6] A. Coden, D. Gruhl, N. Lewis, M. Tanenblatt, and J. Terdiman (2012) SPOT the drug! An unsupervised pattern matching method to extract drug names from very large clinical corpora. In 2012 IEEE Second International Conference on Healthcare Informatics, Imaging and Systems Biology, pp. 33–39. Cited by: §4.
  • [7] Y. Ganin and V. Lempitsky (2015) Unsupervised domain adaptation by backpropagation. In Proceedings of the 32nd International Conference on International Conference on Machine Learning-Volume 37, pp. 1180–1189. Cited by: §2, §3.1, §3.
  • [8] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio (2014) Generative adversarial nets. In Advances in neural information processing systems, pp. 2672–2680. Cited by: §3.1.
  • [9] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778. Cited by: §3.
  • [10] J. Irvin, P. Rajpurkar, M. Ko, Y. Yu, S. Ciurea-Ilcus, C. Chute, H. Marklund, B. Haghgoo, R. L. Ball, K. S. Shpanskaya, J. Seekins, D. A. Mong, S. S. Halabi, J. K. Sandberg, R. Jones, D. B. Larson, C. P. Langlotz, B. N. Patel, M. P. Lungren, and A. Y. Ng (2019) CheXpert: A large chest radiograph dataset with uncertainty labels and expert comparison. CoRR abs/1901.07031. External Links: Link, 1901.07031 Cited by: §1, §4, §4.
  • [11] A. E. W. Johnson, T. J. Pollard, S. J. Berkowitz, N. R. Greenbaum, M. P. Lungren, C. Deng, R. G. Mark, and S. Horng (2019) MIMIC-CXR: A large publicly available database of labeled chest radiographs. arXiv:1901.07042 [cs.CV]. Cited by: §4.
  • [12] A. E. Johnson, T. J. Pollard, L. Shen, H. L. Li-wei, M. Feng, M. Ghassemi, B. Moody, P. Szolovits, L. A. Celi, and R. G. Mark (2016) MIMIC-III, a freely accessible critical care database. Scientific Data 3, pp. 160035. Cited by: §1, §4.
  • [13] K. Kawaguchi, Y. Bengio, V. Verma, and L. P. Kaelbling (2018) Towards understanding generalization via analytical learning theory. arXiv preprint arXiv:1802.07426. Cited by: §2.
  • [14] G. Klambauer, T. Unterthiner, A. Mayr, and S. Hochreiter (2017) Self-normalizing neural networks. CoRR abs/1706.02515. External Links: Link, 1706.02515 Cited by: §1.
  • [15] C. Li, W. Chang, Y. Cheng, Y. Yang, and B. Póczos (2017) MMD GAN: towards deeper understanding of moment matching network. In Advances in Neural Information Processing Systems, pp. 2203–2213. Cited by: §2.
  • [16] Y. Li, K. Swersky, and R. Zemel (2015) Generative moment matching networks. In International Conference on Machine Learning, pp. 1718–1727. Cited by: §2.
  • [17] E. Loper and S. Bird (2002) NLTK: the natural language toolkit. arXiv:cs/0205028 [cs.CL]. Cited by: §4.
  • [18] T. Matsuura and T. Harada (2019) Domain generalization using a mixture of multiple latent domains. arXiv preprint arXiv:1911.07661. Cited by: §5.
  • [19] D. A. McAllester (1999) Some pac-bayesian theorems. Machine Learning 37 (3), pp. 355–363. Cited by: §2.
  • [20] S. M. McKinney, M. Sieniek, V. Godbole, J. Godwin, N. Antropova, H. Ashrafian, T. Back, M. Chesus, G. C. Corrado, A. Darzi, et al. (2020) International evaluation of an ai system for breast cancer screening. Nature 577 (7788), pp. 89–94. Cited by: §1.
  • [21] P. Rajpurkar, J. Irvin, K. Zhu, B. Yang, H. Mehta, T. Duan, D. Ding, A. Bagul, C. Langlotz, K. Shpanskaya, et al. (2017) Chexnet: radiologist-level pneumonia detection on chest x-rays with deep learning. arXiv preprint arXiv:1711.05225. Cited by: §1, §1.
  • [22] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra (2017) Grad-cam: visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision, pp. 618–626. Cited by: §1, §3.2.
  • [23] O. Sener, H. O. Song, A. Saxena, and S. Savarese (2016) Learning transferrable representations for unsupervised domain adaptation. In Advances in Neural Information Processing Systems, pp. 2110–2118. Cited by: §2.
  • [24] V. Vapnik (2013) The nature of statistical learning theory. Springer science & business media. Cited by: §2.
  • [25] X. Wang, Y. Peng, L. Lu, Z. Lu, M. Bagheri, and R. M. Summers (2017) ChestX-ray8: hospital-scale chest X-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 2097–2106. Cited by: §1, §4.
  • [26] A. Xu and M. Raginsky (2017) Information-theoretic analysis of generalization capability of learning algorithms. In Advances in Neural Information Processing Systems, pp. 2524–2533. Cited by: §2.
  • [27] C. Zhang, S. Bengio, M. Hardt, B. Recht, and O. Vinyals (2017) Understanding deep learning requires rethinking generalization. International Conference on Learning Representations. Cited by: §2.