The chest X-ray (radiograph) is a fast and painless screening test commonly performed to diagnose various thoracic abnormalities, such as pneumonias, pneumothoraces and lung nodules. It is one of the most cost-effective imaging examinations and imparts minimal radiation exposure to the patient while displaying a wide range of visual diagnostic information. Identifying and distinguishing the various chest abnormalities in chest X-rays is a challenging task even for the human observer, so interpretation has been performed mostly by board-certified radiologists or other physicians. There is therefore strong demand for computer-aided detection (CADe) methods to assist radiologists and other physicians in reading and comprehending chest X-ray images.
Currently, deep learning methods, especially convolutional neural networks (CNNs) [4, 9], have become ubiquitous. They have achieved compelling performance across a number of tasks in the medical imaging domain [13, 10]. Most of these applications typically involve only one particular type of disease or lesion, such as automated classification of pulmonary tuberculosis, pneumonia detection, and lung nodule segmentation. Wang et al.
recently introduced a hospital-scale chest X-ray (ChestX-ray14) dataset containing 112,120 frontal-view X-ray images, with 14 thoracic disease labels text-mined from associated radiology reports using natural language processing (NLP) techniques. Furthermore, a weakly-supervised CNN-based multi-label thoracic disease classification and localization framework was proposed using only image-level labels. Li et al. presented a unified network that simultaneously improves classification and localization with the help of extra bounding boxes indicating disease locations.
In addition to the disease labels that represent the presence or absence of a certain disease, we also want to utilize the attributes of those diseases contained in the radiology reports. Disease severity level (DSL) is one of the most critical attributes, since different severity levels correlate with highly different visual appearances in chest X-rays (see examples in Fig. 1). Radiologists tend to state such disease severity levels (e.g., [minimal, tiny, small, mild], [middle-size, moderate], [remarkable, large, severe]) when describing the findings in chest X-rays. This type of disease attribute information can be exploited to enhance and enrich the accuracy of NLP-mined disease labels, which consequently may facilitate building more accurate and robust disease classification and localization frameworks. More recently, Wang et al. proposed TieNet (Text-Image Embedding Network), an end-to-end CNN-RNN architecture for learning to embed visual images and text reports for image classification and report generation. However, disease attributes were not explicitly modeled in the TieNet framework.
In this paper, we propose an attention-guided curriculum learning (AGCL) framework for the task of joint thoracic disease classification and weakly supervised localization, where only image-level disease labels and the severity-level information of a subset are available. Note that we do not use bounding boxes for training. In AGCL, we use the disease severity level to group the data samples as a means of building the curriculum for curriculum learning. For each disease category, we begin by learning from severe samples and progressively add moderate and mild samples as the CNN matures, converging gradually by seeing samples from "easy" to "hard". The intuition behind curriculum learning is to mimic the common human process of gradual learning, starting from the easiest or most obvious samples and moving to harder or more ambiguous ones, which is notably the case for medical students learning to read radiographs. Furthermore, we use the CNN-generated disease heatmaps (visual attention) of "confident" seed images to guide the CNN in an iterative training process. The initial seeds are composed of: (1) images of severe and moderate disease level, and (2) images with high classification probability scores from the current CNN classifier. A two-path multi-task learning network architecture is designed to regress the heatmaps from selected seed samples in addition to the original classification task. In each iteration, the joint learning scheme harvests more high-quality seeds as the network fine-tuning process iterates, resulting in increased guidance and a more discriminative CNN for better classification and localization.
We test our proposed method on the public ChestX-ray14 dataset to evaluate multi-label disease classification and localization performance. Comprehensive experimental results demonstrate the effectiveness of our framework in acquiring high-quality seeds, and the visual attention generated from seed images is evidently beneficial in guiding the learning procedure to improve both classification and localization accuracy.
2.1 A CNN Based Classification and Localization Framework
The proposed AGCL approach starts by initializing a CNN pre-trained on ImageNet and then fine-tuning it on the ChestX-ray14 dataset on all 14 disease categories. This is similar to the benchmark approach except that the transition layer is discarded. It serves as the baseline of our method for multi-label classification and localization. The flowchart of the baseline framework is shown in Fig. 2(a).
2.2 Disease Severity-Level Based Curriculum Learning
Generally, the knowledge to be acquired by students is meticulously designed in a curriculum, so that “easier” concepts are introduced first, and more in-depth knowledge is systematically acquired by mastering concepts with increasing difficulty. The “easy-to-hard” principle of curriculum learning  has been helpful for both image classification and weakly supervised object detection 
in computer vision. The curriculum that controls what training data should be fed to the model is usually built based on prior information, such as object size (the larger, the easier) or other more sophisticated human supervision.
We mine the disease severity level (DSL) attributes from radiology reports using NLP techniques similar to those used to mine the disease labels. Severity descriptions correlated to the disease keyword are extracted using a dependency graph built on each sentence in the report. DSL attributes are then collected whenever available from the whole training set and are grouped into three clusters, namely mild, moderate and severe. We treat the severity attributes as prior knowledge to build the curriculum. The prediction layers of the baseline model are replaced with a randomly initialized 2-way fully connected (FC) layer and a softmax cross-entropy loss, and we fine-tune the baseline model into a binary classification network for each disease category. The training samples are presented to the network in order of decreasing severity level (increasing difficulty) for a given disease, i.e., from severe to moderate to mild samples gradually, as the CNN becomes more adept at later iterations during training. The negative samples come from normal cases (without any diseases mentioned in the radiology report) in the dataset, and the number of negative samples for each category is balanced with the number of positive samples of that category. Note that we fine-tune from the weights learned by the baseline model in Sec. 2.1 because (1) the images annotated with severity levels account for only about 25% of the training samples, which is not sufficient for training a deep CNN with millions of parameters, and (2) the baseline model is expected to have captured the overall concept distribution of the target dataset, which is a useful starting point for curriculum learning.
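As a concrete illustration, the severity-ordered curriculum with balanced negatives described above can be sketched as follows; the sample representation and field names are hypothetical, not from the paper's code:

```python
import random

# Assumed severity ranking: lower rank = "easier", presented to the CNN first.
SEVERITY_ORDER = {"severe": 0, "moderate": 1, "mild": 2}

def build_curriculum(positives, normals, seed=0):
    """Order positive samples from severe to mild and pair them with an
    equal number of negatives sampled from normal (disease-free) cases.

    positives: list of (image_id, severity) tuples
    normals:   list of image_ids of normal cases
    """
    rng = random.Random(seed)
    ordered = sorted(positives, key=lambda s: SEVERITY_ORDER[s[1]])
    negatives = rng.sample(normals, k=min(len(ordered), len(normals)))
    return ordered, negatives

positives = [("img_a", "mild"), ("img_b", "severe"), ("img_c", "moderate")]
normals = ["img_n1", "img_n2", "img_n3", "img_n4"]
ordered, negatives = build_curriculum(positives, normals)
```

In practice the ordered positives would be fed to the binary classifier in stages (severe first), with the balanced negatives mixed into every stage.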
The disease-specific class activation map (CAM, or heatmap) of a chest X-ray image for a positive disease class is:

M(x, y) = \sum_{k=1}^{K} w_k f_k(x, y),    (1)

where w_k is the weight of the FC layer connecting the k-th feature map to the positive disease category in each binary classification network, f_k(x, y) is the activation of the k-th (k \in \{1, \dots, K\}) neuron of the last convolutional layer at spatial coordinate (x, y), and K is the number of feature maps.
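In code, this CAM computation is just a channel-weighted sum of the last convolutional layer's feature maps; a minimal NumPy sketch (the array shapes are illustrative):

```python
import numpy as np

def class_activation_map(features, fc_weights):
    """Weight each of the K feature maps (features: K x H x W) by the
    FC weight of the positive disease class (fc_weights: shape (K,))
    and sum over channels, yielding an H x W heatmap."""
    return np.tensordot(fc_weights, features, axes=([0], [0]))

# Toy example: K = 2 feature maps on a 2x2 spatial grid.
feats = np.array([[[1.0, 0.0], [0.0, 0.0]],
                  [[0.0, 2.0], [0.0, 0.0]]])
w = np.array([0.5, 1.0])
cam = class_activation_map(feats, w)
```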
The heatmaps can be further employed as visual attention guidance for the CNN in the succeeding iterative refinement steps described in Sec. 2.3. The reason to split the network into individual binary models per disease instead of fine-tuning as a whole is that the severity levels tend to be inconsistent among different diseases in a multi-label situation. Moreover, the binary models are empirically found to be more discriminative and spatially accurate on generating disease-specific heatmaps.
2.3 Attention Guided Iterative Refinement
In this section, we explore and harvest highly "confident" seed images. We assume their computed disease-specific heatmaps highlight the regions that potentially contribute more to the final disease recognition than those of non-seed images. The highlighted regions represent the region of interest (ROI), or in other words, the visual attention of disease patterns. In addition to the curriculum learning for each disease category shown in Fig. 2(b), we introduce a heatmap regression path (shown in Fig. 2(c)) to enforce attention-guided learning of better convolutional features, which in turn generate more meaningful heatmaps. Using such an iterative refinement loop, we demonstrate that both classification and localization results can be simultaneously and significantly improved over the baseline.
Harvesting Seeds: Ideally, image samples with severe and moderate disease severity levels can be selected as seeds since their visual appearances are relatively easier to recognize than mild ones. An additional selection criterion requires that an image is labeled with a certain disease and is correctly classified by the corresponding binary classifier introduced in Sec. 2.2 with a probability score larger than a threshold. We believe the disease-specific heatmaps (seed attention maps) generated by Eq. (1) from these seed image samples exhibit higher precision in localizing disease patterns than those of other samples.
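The two seed-selection criteria above can be sketched as a simple filter; the probability threshold of 0.9 and the per-sample fields are illustrative assumptions, since the paper's exact threshold is not reproduced here:

```python
def harvest_seeds(samples, prob_threshold=0.9):
    """Select seed image ids by either criterion:
    (1) severe or moderate severity level, or
    (2) a positive label with a confident, correct classification.

    samples: list of dicts with keys 'id', 'severity' (str or None),
             'label' (1 = image has the disease), 'prob' (classifier score).
    """
    by_severity = {s["id"] for s in samples
                   if s.get("severity") in ("severe", "moderate")}
    by_score = {s["id"] for s in samples
                if s["label"] == 1 and s["prob"] >= prob_threshold}
    return by_severity | by_score

samples = [
    {"id": 1, "severity": "severe", "label": 1, "prob": 0.55},
    {"id": 2, "severity": None,     "label": 1, "prob": 0.95},
    {"id": 3, "severity": "mild",   "label": 1, "prob": 0.30},
]
seeds = harvest_seeds(samples)
```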
Attention Guidance from Seeds: We create a branch in the original classification network to guide the learning of better convolutional features using the seed attention maps. This branch shares all convolutional blocks with the baseline model in Sec. 2.1 and includes an additional heatmap regression path. The regression loss is modeled as the sum of the channel-wise smooth L1 losses over feature channels between the heatmap generated by the current network and the seed attention map from the last iteration.
The final objective function to be optimized is a weighted sum of the sigmoid cross-entropy loss for multi-label classification (L_cls) and the heatmap regression loss (L_reg) for localization:

L = L_cls + \lambda \sum_{c} \alpha_c L_reg^{c},    (2)

where \alpha_c = 1 if an image is a seed for disease category c, and \alpha_c = 0 otherwise. \lambda balances the classification and regression losses so that they have roughly equal contributions; its value is set empirically.
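A minimal NumPy sketch of this weighted objective; the balancing weight `lam` is a placeholder rather than the paper's value, and the per-category heatmaps are toy arrays:

```python
import numpy as np

def sigmoid_ce(logits, labels):
    """Numerically stable sigmoid cross-entropy, averaged over classes."""
    return np.mean(np.maximum(logits, 0) - logits * labels
                   + np.log1p(np.exp(-np.abs(logits))))

def smooth_l1(x, y):
    """Smooth L1 (Huber-like) loss, averaged over elements."""
    d = np.abs(x - y)
    return np.mean(np.where(d < 1.0, 0.5 * d * d, d - 0.5))

def joint_loss(logits, labels, heatmaps, seed_maps, is_seed, lam=0.05):
    """Weighted sum of classification and regression losses.
    is_seed[c] = 1 if the image is a seed for disease c, else 0;
    lam is an assumed balancing weight (the paper's value is not
    reproduced here)."""
    reg = sum(is_seed[c] * smooth_l1(heatmaps[c], seed_maps[c])
              for c in range(len(is_seed)))
    return sigmoid_ce(logits, labels) + lam * reg
```

Note that a non-seed image (all indicators zero) reduces the objective to plain multi-label classification, matching the description above.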
Harvesting Additional Seeds: Once the network is retrained with attention guidance, the curriculum learning procedure is conducted again to harvest new confident seeds. All non-seed positive training images in each disease category are fed to their corresponding binary classifiers, and highly scored images are harvested as additional seeds. Together with the initial seeds, their heatmaps are then fed into the refinement framework as additional visual attention to guide the CNN to focus on disease-specific attended regions in the chest X-rays. Consequently, more and more confident seeds can be harvested as the accuracy of classification and localization gradually improves.
We extensively evaluate the proposed AGCL approach on the ChestX-ray14 dataset, which contains 112,120 frontal-view chest X-ray images of 30,805 unique patients, with 14 thoracic disease labels extracted from associated radiology reports using NLP techniques. A small subset of 880 images within the dataset was annotated by board-certified radiologists, resulting in 984 bounding box locations covering 8 disease types. We further extracted severity attributes along with the disease keywords. For classification, we use the same patient-level data splits provided with the dataset: roughly 70% of the images for training, 10% for validation and 20% for testing. Disease localization is evaluated on all 984 bounding boxes (which are not used for training).
We resize the original 3-channel images to a reduced fixed resolution as a trade-off between higher resolution and affordable computational load. ResNet-50 is employed as the backbone of the proposed CNN architectures. For the baseline method and all the AGCL steps, we optimize the network using SGD with momentum and stop training after the validation loss reaches a plateau. The learning rate is set to 0.001 and divided by 10 every 10 epochs. AGCL is implemented using the Caffe framework.
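The learning-rate schedule described above (start at 0.001, divide by 10 every 10 epochs) corresponds to a standard step decay:

```python
def step_lr(epoch, base_lr=0.001, step=10, gamma=0.1):
    """Step decay: multiply the base learning rate by gamma (0.1)
    once every `step` (10) epochs, matching the schedule above."""
    return base_lr * (gamma ** (epoch // step))
```

In Caffe this is the `lr_policy: "step"` solver setting with `gamma: 0.1` and a step size expressed in iterations rather than epochs.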
We quantitatively evaluate disease classification performance using the AUC (area under the ROC curve) score for each category. To assess the effect of the curriculum learning step, we ablate it from the AGCL framework; the resulting variant is denoted AGL (attention-guided learning), where seeds are initialized only with highly scored images from the baseline model. The per-disease AUC comparisons of the benchmark, our baseline method, AGL, and AGCL with one refinement step (AGCL-1) and two refinement steps (AGCL-2) are shown in Table 1. A higher AUC score implies a better classifier.
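For reference, the per-category AUC can be computed directly from binary labels and classifier scores via the Mann-Whitney U statistic, which equals the area under the ROC curve; a self-contained sketch:

```python
def auc_score(labels, scores):
    """AUC as the probability that a randomly chosen positive is scored
    above a randomly chosen negative (ties count as half a win)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

In practice a library routine such as scikit-learn's `roc_auc_score` would be used; the pairwise formulation above is O(P*N) but makes the definition explicit.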
Compared with the benchmark results, our baseline model achieves higher AUC scores for all categories except Hernia, which contains a very limited number of samples. AGL consistently outperforms the baseline model, demonstrating the effectiveness of attention-guided learning in our framework. Furthermore, the proposed AGCL improves upon AGL by introducing more confident heatmaps from seed images via curriculum learning. The iterative refinement process is shown to be effective by the fact that AGCL-2 achieves better classification results than AGCL-1. We experimentally find that AGCL-3 yields results similar to AGCL-2, suggesting that the iterative refinement process has converged.
(Table 2 columns: Disease, GT, Detected Box, True Positive, Recall, Precision.)
Weakly Supervised Disease Localization: We generate bounding boxes from the disease-specific heatmap of each image with the corresponding disease, following the benchmark method, and evaluate their quality against ground-truth (GT) bounding boxes annotated by radiologists. A box is considered a true positive (TP) if the Intersection of the GT over the detected Bounding Box area ratio (IoBB, similar to Area of Precision or Purity) is larger than a threshold. Table 2 shows the comparison of the baseline model, AGL and AGCL-2 (denoted as AGCL in the table).
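The IoBB criterion differs from the usual IoU in that the intersection is normalized by the detected box's area only; a small sketch with boxes given as (x1, y1, x2, y2):

```python
def iobb(gt, det):
    """Intersection of the GT box over the detected box's area.
    Boxes are (x1, y1, x2, y2) with x2 > x1 and y2 > y1."""
    ix1, iy1 = max(gt[0], det[0]), max(gt[1], det[1])
    ix2, iy2 = min(gt[2], det[2]), min(gt[3], det[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    det_area = (det[2] - det[0]) * (det[3] - det[1])
    return inter / det_area if det_area > 0 else 0.0
```

A small detected box lying entirely inside the GT region thus scores IoBB = 1.0 even when its IoU with the GT box is low, which is why IoBB is described as a precision- or purity-like measure.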
Overall, AGCL achieves the best localization results, generating the fewest bounding boxes (3498) while containing the most true positives (1518), i.e., the highest precision (0.44). It recalls 73% of the ground-truth boxes by proposing an average of 3.5 boxes per image. AGCL employs more seed images than AGL by incorporating disease severity level (DSL) based curriculum learning, which improves upon AGL as shown in Table 2. The relative performance improvements of AGCL are more pronounced for Effusion, Mass and Infiltration, for which more moderate and severe samples are labeled than for other disease types. Nodules are usually described as small, so curriculum learning barely helps for this category. However, visual attention based iterative learning (AGL) outperforms the baseline model even for very difficult diseases such as Nodule and Pneumothorax. We show qualitative localization heatmap examples in Fig. 3.
In this paper, we exploit NLP-mined disease severity level information from radiology reports to facilitate curriculum learning for more accurate thoracic disease classification and localization. In addition, an iterative attention-guided refinement framework is developed to further improve classification and weakly-supervised localization performance. Extensive experimental evaluations on the ChestX-ray14 database validate the significant performance improvements delivered by the overall framework and by each of its components individually. Future work includes formulating structured reports, extracting richer information from the reports, such as the coarse location of lesions, using follow-up studies, and mining common disease patterns to help develop more precise predictive models.
This research was supported by the Intramural Research Program of the National Institutes of Health Clinical Center and by the Ping An Technology Co., Ltd. through a Cooperative Research and Development Agreement. The authors thank NVIDIA for GPU donation.
-  Bengio, Y., Louradour, J., et al.: Curriculum learning. In: ICML (2009)
-  He, K., et al.: Deep residual learning for image recognition. In: IEEE CVPR (2016)
-  Jin, D., Xu, Z., et al.: CT-realistic lung nodule simulation from 3D conditional generative adversarial networks for robust lung segmentation. In: MICCAI (2018)
-  Krizhevsky, A., Sutskever, I., Hinton, G.E.: Imagenet classification with deep convolutional neural networks. In: NIPS (2012)
-  Lakhani, P., Sundaram, B.: Deep learning at chest radiography: Automated classification of pulmonary tuberculosis by using convolutional neural networks. Radiology 284(2) (2017)
-  Li, Z., Wang, C., Han, M., Xue, Y., Wei, W., Li, L.J., Fei-Fei, L.: Thoracic disease identification and localization with limited supervision. In: IEEE CVPR (2018)
-  Rajpurkar, P., Irvin, J., Zhu, K., Yang, B., et al.: Chexnet: Radiologist-level pneumonia detection on chest x-rays with deep learning. arXiv: 1711.05225 (2017)
-  Shi, M., Ferrari, V.: Weakly supervised object localization using size estimates. In: ECCV (2016)
-  Tang, Y., Wang, J., Gao, B., et al.: Large scale semi-supervised object detection using visual and semantic knowledge transfer. In: IEEE CVPR (2016)
-  Tang, Y., et al.: Semi-automatic RECIST labeling on CT scans with cascaded convolutional neural networks. In: MICCAI (2018)
-  Wang, X., Peng, Y., Lu, L., Lu, Z., Bagheri, M., Summers, R.M.: Chestx-ray8: Hospital-scale chest x-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases. In: IEEE CVPR (2017)
-  Wang, X., Peng, Y., et al.: Tienet: Text-image embedding network for common thorax disease classification and reporting in chest x-rays. In: IEEE CVPR (2018)
-  Yan, K., et al.: Deeplesion: automated mining of large-scale lesion annotations and universal lesion detection with deep learning. J. Med. Imag. 5(3) (2018)
-  Zhou, B., Khosla, A., Lapedriza, A., Oliva, A., Torralba, A.: Learning deep features for discriminative localization. In: IEEE CVPR (2016)