GREN: Graph-Regularized Embedding Network for Weakly-Supervised Disease Localization in X-ray images

07/14/2021, by Baolian Qi et al.

Locating diseases in chest X-ray images with few precise annotations saves substantial human effort. Recent works have approached this task with innovative weakly-supervised algorithms such as multi-instance learning (MIL) and class activation maps (CAM); however, these methods often yield inaccurate or incomplete regions. One reason is the neglect of the pathological implications hidden in the relationships across anatomical regions within each image and across images. In this paper, we argue that cross-region and cross-image relationships, as contextual and compensating information, are vital for obtaining more consistent and integral regions. To model these relationships, we propose the Graph Regularized Embedding Network (GREN), which leverages intra-image and inter-image information to locate diseases in chest X-ray images. GREN uses a pre-trained U-Net to segment the lung lobes, and then models the intra-image relationship between the lung lobes using an intra-image graph to compare different regions. Meanwhile, the relationship between in-batch images is modeled by an inter-image graph to compare multiple images. This process mimics the training and decision-making of a radiologist: comparing multiple regions and images for diagnosis. So that the deep embedding layers of the network retain structural information (important for the localization task), we use Hash coding and the Hamming distance to compute the graphs, which are used as regularizers to facilitate training. By this means, our approach achieves the state-of-the-art result on the NIH chest X-ray dataset for weakly-supervised disease localization. Our code is available online.


1 Introduction

Chest X-ray images can be conveniently acquired for the diagnosis of multiple diseases. Automatic disease localization in chest X-ray images has become an increasingly important technique to support clinical diagnosis and treatment planning. In the past decade, convolutional neural networks (CNNs) have been widely applied in medical image analysis, including disease classification [25, 34, 9, 13, 3, 24, 1, 18], segmentation [5, 8, 31, 17, 7], detection [38, 26, 4, 28], and automatic report generation [15, 19, 33]. CNNs typically require a large amount of finely annotated data to acquire the ability to localize diseases. However, in the medical field, annotation is time-consuming, tedious, and laborious for radiologists. In reality, X-ray datasets often have only image-level labels and few location labels, which leads to poor model performance and inaccurate localization results. Therefore, achieving more accurate localization with limited location annotation, i.e., weakly-supervised localization, is urgent and important.

Existing weakly-supervised disease localization methods are mainly based on multi-instance learning (MIL) [2, 20] and class activation maps (CAM) [27, 32, 36, 29, 26, 11, 14]. However, these methods often generate inaccurate or incomplete regions. For example, CAM-based methods only activate the most discriminative regions and thus produce incomplete targets. We argue that complementary information can be obtained by considering the relationships between anatomical regions and between different images, leading to more accurate localization of diseases. This insight coincides with prior domain knowledge in the medical field: radiologists are trained by reading a large number of X-ray images and analysing them through recognizing and comparing differences (e.g., shapes, textures, contrast), both across multiple images and across different regions of a single image. They usually make decisions based on what they have learned and observed through comparison.

Recently, a series of attention-based works [32, 22, 37] have incorporated the knowledge of comparing abnormal and normal X-ray images into CNNs. The results show that attention is beneficial to disease localization in weakly-supervised scenarios. The main shortcoming of attention-based methods is that they have no direct supervision (e.g., position labels) to conduct error feedback, and it is difficult to interpret what the model has learned. To overcome these insufficiencies, we propose the Graph Regularized Embedding Network (GREN), which uses relational graphs to regularize the training of deep neural networks. As shown in Figure 1, the similarities between graph nodes are used to model the intra-image and inter-image relationships in X-ray images: Hash coding encodes the lung regions at the macro level, and the Hamming distances between paired regions are calculated. The method needs neither the attention operation nor the selection of normal controls, and so avoids their disadvantages. Specifically, the phenomena of "different X-ray images with similar diseases" and "similar X-ray images with different diseases" exist, so adding the structural relationship information between different X-ray images into the deep learning model helps it distinguish these phenomena. Furthermore, we compute the relationship between the two lung lobes, since most chest diseases, such as pneumonia, infiltration, and consolidation, rarely appear on both sides symmetrically [37]. Therefore, the features of these abnormal regions can be identified by comparing the lung lobes. Inspired by this, we compare the left lung and the right lung via structural relationship calculation. Our main contributions are highlighted as follows.

Figure 1: The general design of GREN. The inter-image and intra-image relations in chest X-ray images modeled by structural graphs are used to regularize CNNs to obtain accurate and complete regions. GREN mimics a radiologist by leveraging cross-image and cross-region information during training and decision-making.
  • We propose GREN, the first method to explore relational graphs for improving disease localization in weakly-supervised scenarios. Compared with attention-based methods, the global structure is considered without the normal-control sampling problem. Compared with graph-based methods such as Graph-RISE [16], GREN needs no additional manual annotation and computes its graphs in an unsupervised manner.

  • We propose intra-image and inter-image knowledge learning modules to learn the intra-image and inter-image structural information of X-ray images. This information is used to regularize the training of deep neural networks, so as to achieve more accurate and integral localization results with few precise annotations.

  • We achieve the state-of-the-art result for disease localization on the NIH Chest X-ray dataset. Ablation experiments demonstrate the effectiveness of each component. We make our code publicly available for subsequent studies.

2 Related work

Automatic disease diagnosis in chest X-ray images is an important part of computer-aided diagnosis (CAD). Recently, deep learning has been widely developed and applied in X-ray image analysis, including disease classification [32, 29, 11] and detection [26, 4, 20, 22, 28]. However, locating diseases in chest X-ray images with few precise annotations remains a challenging problem. Wang et al. [32] first released a carefully annotated chest X-ray dataset and localized diseases with a multi-label disease classification model and thresholding of the disease heatmaps. Li et al. [20] proposed a framework that jointly models disease identification and localization with a multi-instance level loss and a binary cross-entropy loss, which performs well for disease localization with limited localization annotations. Tang et al. [29] proposed an iterative attention-guided refinement framework to improve classification and weakly-supervised localization performance using CAM [27]. Sedai et al. [26] proposed a weakly-supervised method based on class-aware multiscale convolutional features to localize chest pathologies in X-ray images. Cai et al. [4] proposed an attention mining strategy to improve the sensitivity and saliency of the model to disease patterns. All the above methods process individual images with deep learning alone, ignoring prior knowledge in medical images such as explicitly modelling the image-to-image and region-to-region relationships.

In the last two years, researchers have devoted much effort to exploiting medical experts' experience for disease diagnosis, mining medical knowledge and embedding it into deep convolutional neural networks (DCNNs). Several methods integrate image relationship information into DCNNs. Zhao et al. [35] proposed contralateral context information for enhancing feature representations of disease proposals using a spatial transformer network. Lian et al. [21] leveraged constant structure and disease relations extracted from domain knowledge using a structure-aware relation extraction network. However, these methods use only the region-to-region relationship within an individual image and overlook the image-to-image relationship. Recently, some works have used inter-image relationships for weakly-supervised medical image analysis. Liu et al. [22] utilized contrast-induced attention acquired on paired healthy and unhealthy images to provide more information for localization. Zhou et al. [37] exploited two contrastive abnormal attention models and a dual-weighting graph convolution to improve thoracic multi-disease recognition: a left-right lung contrastive network learns intra-attentive abnormal features, and an inter-contrastive abnormal attention model compares healthy samples with multiple unhealthy samples to compute the abnormal attention map. These methods rely on normal controls and attention. However, the uncertainty of the normal-control operation may lead to uncertainty in model performance; furthermore, attention-based methods have no direct supervision to conduct error feedback, making it difficult to explain what the model has learned. Juan et al. [16] proposed a neural graph learning framework that leverages graph structure to regularize the training of deep neural networks; the similarity between images measures the relationship between graph nodes, but this relationship information requires much manual annotation. Different from all these works, we propose GREN, which is based on structural relationships without manual annotations. Compared with attention-based methods, GREN retains more macro and global structural information without normal-control sampling problems. Compared with methods such as Graph-RISE [16], GREN needs no additional manual annotation and computes its graphs in an unsupervised manner.

Figure 2: Flowchart of the proposed GREN. The main branch conducts weakly-supervised localization using a CNN-based feature extractor and a task head optimized via MIL. The graph-induced regularization module (blue background) models the inter-image and intra-image similarities, which regularize the representations of the main task for better structural and contextual awareness. All CNN (and FC) blocks refer to the same shared model.

3 Proposed methods

The architecture of GREN is presented in Figure 2. The input is a mini-batch of chest X-ray images. They first undergo a pre-processing step of lung parenchyma segmentation [12], yielding the left and right lung regions of each image, denoted $X_L$ and $X_R$. Secondly, the input images and the lung-region images are fed to the network to produce their feature maps $F$, $F_L$, and $F_R$, from which the intra-image and inter-image distances are computed. At the same time, Hash coding is used to represent each region, and the Hamming distances between paired codes yield the intra-image and inter-image similarities. Finally, the intra-image and inter-image similarities are used to weight the minimization of the intra-image and inter-image distances, respectively. A task head is attached to the embedding space to complete the weakly-supervised localization task. Please note that the feature extractor and the task head share weights across branches. GREN aims to utilize the intra- and inter-image structural relationships of images to formulate a more informative embedding space and improve weakly-supervised localization.

3.1 Framework

A ResNet-50 pre-trained on the ImageNet dataset is used as the feature extractor of GREN. After removing the final classification and global pooling layers of ResNet-50, the feature map $F$ is extracted, which is 32 times down-sampled compared with the input. Meanwhile, the input image is divided into $P \times P$ patch grids, and for each grid the network predicts the probability that a disease is present. Note that $P$ is an adjustable hyperparameter ($P = 16$ in our experiments). Then, we pass $F$ through two convolutional layers and a sigmoid layer to obtain the final predictions with $C$ channels, where $C$ is the number of possible disease types. Finally, we compute losses and make predictions in each channel for the corresponding class, following the paradigm used in [22] and [20].

For images with box-level annotations, if a grid in the feature map is even partially covered by the ground-truth box, we assign label 1 to the grid; otherwise, we assign 0. We then use the binary cross-entropy (BCE) loss over the grids:

$$\mathcal{L}_{bce} = -\sum_{c}\sum_{i}\sum_{j}\Big[y^{c}_{ij}\log p^{c}_{ij} + \big(1-y^{c}_{ij}\big)\log\big(1-p^{c}_{ij}\big)\Big] \qquad (1)$$

where $c$, $i$, and $j$ are the indices of classes, samples, and grids, respectively; $y^{c}_{ij}$ denotes the target label of the $j$-th grid in the $i$-th image for class $c$, and $p^{c}_{ij}$ denotes the corresponding predicted probability.
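The box-to-grid labeling rule above can be sketched in a few lines of Python. This is a hedged illustration, not the paper's code: the function name `grid_labels`, the box format (x, y, w, h in pixels), and the 512-pixel input size are our assumptions.

```python
import math

def grid_labels(box, img_size=512, P=16):
    """Assign binary labels to a P x P grid: a cell gets label 1 if the
    ground-truth box (x, y, w, h in pixels) covers it even partially."""
    labels = [[0] * P for _ in range(P)]
    cell = img_size / P  # side length of one grid cell in pixels
    x, y, w, h = box
    # index range of grid cells touched by the box
    c0, c1 = int(x // cell), math.ceil((x + w) / cell) - 1
    r0, r1 = int(y // cell), math.ceil((y + h) / cell) - 1
    for r in range(max(r0, 0), min(r1, P - 1) + 1):
        for c in range(max(c0, 0), min(c1, P - 1) + 1):
            labels[r][c] = 1
    return labels
```

For instance, a 64x64 box at the origin touches exactly the top-left 2x2 block of cells when the cell size is 32 pixels.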

For images with only disease-level annotations, we use the MIL loss:

$$\mathcal{L}_{mil} = -\Big[y^{c}_{i}\log\Big(1-\prod_{j}\big(1-p^{c}_{ij}\big)\Big) + \big(1-y^{c}_{i}\big)\log\prod_{j}\big(1-p^{c}_{ij}\big)\Big] \qquad (2)$$

where $y^{c}_{i}$ denotes the image-level target label of the $i$-th image for class $c$. The whole localization loss of the network is formulated as follows:

$$\mathcal{L}_{loc}(\theta) = \sum_{i}\sum_{c}\Big[\lambda\,\eta^{c}_{i}\,\mathcal{L}_{bce} + \big(1-\eta^{c}_{i}\big)\,\mathcal{L}_{mil}\Big] \qquad (3)$$

where $\theta$ denotes the network parameters, $\eta^{c}_{i} \in \{0, 1\}$ denotes whether class $c$ in sample $i$ has a box annotation, and $\lambda$ is the balance weight of the two losses and is set to 4.
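The two losses for a single image and class can be sketched in pure Python. This is a minimal illustration of the standard per-grid BCE and MIL formulations (the image is positive if any grid is positive); the function names and the epsilon for numerical stability are our additions.

```python
import math

def bce_grid_loss(p, y):
    """Per-grid BCE for one image and one class.
    p: list of grid probabilities; y: list of binary grid labels."""
    eps = 1e-7  # guard against log(0)
    return -sum(yi * math.log(pi + eps) + (1 - yi) * math.log(1 - pi + eps)
                for pi, yi in zip(p, y))

def mil_loss(p, y_img):
    """MIL loss for one image and one class: the image is positive if any
    grid is positive, so P(image positive) = 1 - prod_j (1 - p_j)."""
    eps = 1e-7
    neg = 1.0
    for pi in p:
        neg *= (1 - pi)  # probability that no grid fires
    return -(y_img * math.log(1 - neg + eps) + (1 - y_img) * math.log(neg + eps))
```

A positive image with at least one confident grid incurs a much smaller MIL loss than one where every grid is near zero, which is exactly the signal that drives localization from image-level labels.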

3.2 Graph Regularization

While existing weakly-supervised localization methods pave the way to locating and classifying diseases in chest X-ray images with limited box-level labels, additional information about the inter- and intra-image structural relationships (similarities) can be exploited to compensate for the lack of labels. This relationship information can be represented as a graph, where the nodes denote images (or regions) and the edges denote their relationships. Therefore, we propose to train the network with graph regularization to exploit the relationships among chest X-ray images: the more similar two images are, the closer their representations should be in the embedding space. To achieve this goal, we design two graph regularization terms that encourage similar images (within a batch) to move closer in the embedding space.

In an X-ray image, the areas with common lesions are the left and right lung regions, whereas the clavicle area and the blank area outside the body contain little information. Therefore, the left and right lung regions of all samples are first extracted by the segmentation algorithm [12]. Hash coding produces a snippet, or fingerprint, of an image by analysing its structure in terms of luminance, and shows whether two images look nearly identical. Since Hash coding [30] allows fast calculation of the Hamming distance to obtain the similarity of two images without any additional manual annotation, we use Hash coding and the Hamming distance to measure the similarity of X-ray images.
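A toy version of this fingerprinting idea can be sketched as an average hash: threshold each pixel of a small grayscale patch at the patch mean to get a binary code, then convert the Hamming distance between two codes into a similarity in [0, 1]. This is a simplification of the perceptual hashing the paper refers to; the function names are ours.

```python
def average_hash(region):
    """Binary fingerprint of a small grayscale patch (nested list of
    pixel intensities): 1 where the pixel exceeds the patch mean."""
    pixels = [p for row in region for p in row]
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming_similarity(h1, h2):
    """S = 1 - Hamming(h1, h2) / Z, with Z the code length, so S in [0, 1]."""
    dist = sum(a != b for a, b in zip(h1, h2))
    return 1 - dist / len(h1)
```

Two identical patches yield similarity 1.0; two patches with complementary luminance structure yield similarity 0.0, which is the normalized edge weight used in the graphs below.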

3.2.1 Intra-image Knowledge Learning

The left and right lung regions in a chest X-ray image show a roughly symmetric structure, and the lesions of most diseases, such as pneumonia, infiltration, and consolidation, rarely appear on both sides symmetrically [37]. Therefore, comparing and distinguishing the differences (e.g., textures and shadows) between the left and right lung regions is very useful for radiologists in clinical practice. To imitate this diagnosis strategy, an intra-image knowledge learning module is proposed to learn the differences between the left and right lung regions.

The structural relationship between regions of an X-ray image is modeled as a graph $G_{intra}$, whose nodes denote the left and right lung regions of an image and whose edges denote the similarity between nodes:

$$S^{(i)} = 1 - \frac{H\big(h^{(i)}_{L}, h^{(i)}_{R}\big)}{Z} \qquad (4)$$

where $h^{(i)}_{L}$ and $h^{(i)}_{R}$ are the Hash codes of the left and right lung regions of the $i$-th image, respectively, and $H(\cdot, \cdot)$ calculates their Hamming distance. $Z$ is a normalization constant that keeps $S^{(i)} \in [0, 1]$; its value is the length of the Hash code ($Z = 64$ in our experiments). The intra-image graph regularization term is then

$$\mathcal{L}_{intra} = \sum_{i} S^{(i)}\, d\big(F^{(i)}_{L}, F^{(i)}_{R}\big) \qquad (5)$$

where $S^{(i)}$ denotes the similarity of the left lung region and the right lung region, $d(\cdot, \cdot)$ is the distance metric function (the Euclidean distance), and $F^{(i)}_{L}$ and $F^{(i)}_{R}$ are the feature maps of the left and right lung regions, respectively. Please note that $F^{(i)}_{L}$ and $F^{(i)}_{R}$ are obtained by applying the masks of the left and right lung regions to the feature map $F$.
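The per-image term of Eq. (5) is simply the hash similarity weighting the Euclidean distance between the two lung embeddings, so similar (likely healthy, symmetric) lungs are pulled together in feature space while dissimilar lungs are left apart. A minimal sketch, treating the embeddings as flat vectors (the function name is ours):

```python
import math

def intra_reg(feat_left, feat_right, similarity):
    """One image's contribution to L_intra: the hash similarity S in [0, 1]
    weights the Euclidean distance between left- and right-lung embeddings."""
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(feat_left, feat_right)))
    return similarity * dist
```

When the similarity is 0 (strongly asymmetric lungs, e.g. a one-sided lesion), the term vanishes and the network is free to keep the two embeddings apart.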

3.2.2 Inter-image Knowledge Learning

Different X-ray images may exhibit the phenomena of "different X-ray images with similar diseases" and "similar X-ray images with different diseases", which makes it difficult for a deep learning model to locate and classify diseases with limited box-level labels. Therefore, an inter-image knowledge learning module is proposed to learn the structural relationships between different X-ray images, which provides a discriminative signal for similar diseases and similar X-ray images.

A pair of images $X_m$ and $X_n$ is randomly selected from the input batch $X$. The structural relationship between different X-ray images is modeled as a graph $G_{inter}$, whose nodes denote the images in a mini-batch of the network ($N = 4$ in our experiments) and whose edges denote the similarity between nodes. Since each image contains a left and a right lung region, the Hamming distances of both lung regions are accumulated:

$$S_{mn} = 1 - \frac{H\big(h^{(m)}_{L}, h^{(n)}_{L}\big) + H\big(h^{(m)}_{R}, h^{(n)}_{R}\big)}{Z'} \qquad (6)$$

where $h^{(m)}_{L}$, $h^{(m)}_{R}$, $h^{(n)}_{L}$, and $h^{(n)}_{R}$ are the Hash codes of the left and right lung regions of images $X_m$ and $X_n$, respectively, and $H(\cdot, \cdot)$ calculates their Hamming distance. $Z'$ is a normalization constant that keeps $S_{mn} \in [0, 1]$; its value is the total length of the two Hash codes (each of length 64 in our experiments). The inter-image graph regularization term is then

$$\mathcal{L}_{inter} = \sum_{m,n} S_{mn}\, d\big(F^{(m)}, F^{(n)}\big) \qquad (7)$$

where $S_{mn}$ denotes the similarity of images $X_m$ and $X_n$, $d(\cdot, \cdot)$ is the distance metric function (the Euclidean distance), and $F^{(m)}$ and $F^{(n)}$ are the feature maps of $X_m$ and $X_n$, respectively.

The final objective is the sum of the localization loss and the graph regularization terms $\mathcal{L}_{intra}$ and $\mathcal{L}_{inter}$:

$$\mathcal{L} = \mathcal{L}_{loc} + \alpha\,\mathcal{L}_{intra} + \beta\,\mathcal{L}_{inter} \qquad (8)$$

where the parameters $\alpha$ and $\beta$ control the balance between the three losses and are set to 0.11 and 0.15, respectively, by searching from 0 to 1 with a step of 0.05. When $\alpha = 0$ and $\beta = 0$, the network reduces to a localization model without graph regularization.
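The composition of the final objective is a plain weighted sum; a one-line sketch (the function name is ours) makes the baseline-recovery property explicit:

```python
def total_loss(loc, intra, inter, alpha=0.11, beta=0.15):
    """Weighted sum of the localization loss and the two graph
    regularizers; alpha = beta = 0 recovers the unregularized baseline."""
    return loc + alpha * intra + beta * inter
```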

4 Experimental results

T (IoU) Models Atelectasis Cardiomegaly Effusion Infiltration Mass Nodule Pneumonia Pneumothorax Mean
0.3 CAM [32] 0.24 0.46 0.30 0.28 0.15 0.04 0.17 0.13 0.22
MIL [20] 0.36 0.94 0.56 0.66 0.45 0.17 0.39 0.44 0.49
CIA-Net [22] 0.53 0.88 0.57 0.73 0.48 0.10 0.49 0.40 0.53
Contrast-Attention [37] 0.62 0.95 0.63 0.79 0.52 0.22 0.56 0.48 0.60
Baseline (B) 0.46 1.00 0.45 0.79 0.51 0.11 0.72 0.24 0.54
B+Intra 0.51 0.89 0.68 0.80 0.58 0.14 0.67 0.40 0.58
B+Inter 0.46 0.93 0.68 0.88 0.58 0.29 0.54 0.50 0.61
Ours (B+Intra+Inter) 0.46 0.93 0.72 0.88 0.47 0.29 0.67 0.55 0.62
0.5 CAM [32] 0.05 0.18 0.11 0.07 0.01 0.01 0.03 0.03 0.06
MIL [20] 0.14 0.84 0.22 0.30 0.22 0.07 0.17 0.19 0.27
CIA-Net [22] 0.32 0.78 0.40 0.61 0.33 0.05 0.37 0.23 0.39
Contrast-Attention [37] 0.39 0.86 0.46 0.65 0.39 0.13 0.43 0.27 0.45
Baseline (B) 0.34 0.95 0.37 0.62 0.40 0.07 0.61 0.09 0.43
B+Intra 0.37 0.86 0.48 0.80 0.37 0.14 0.58 0.30 0.49
B+Inter 0.32 0.89 0.52 0.80 0.42 0.21 0.50 0.35 0.50
Ours (B+Intra+Inter) 0.37 0.86 0.52 0.84 0.42 0.29 0.54 0.45 0.54
0.7 CAM [32] 0.01 0.03 0.02 0.00 0.00 0.00 0.01 0.02 0.01
MIL [20] 0.04 0.52 0.07 0.09 0.11 0.01 0.05 0.05 0.12
CIA-Net[22] 0.18 0.70 0.28 0.41 0.27 0.04 0.25 0.18 0.29
Contrast-Attention [37] 0.24 0.75 0.33 0.45 0.33 0.09 0.35 0.23 0.35
Baseline (B) 0.25 0.93 0.25 0.50 0.27 0.04 0.54 0.09 0.36
B+Intra 0.27 0.86 0.44 0.72 0.32 0.14 0.58 0.20 0.44
B+Inter 0.29 0.86 0.48 0.68 0.37 0.21 0.46 0.35 0.46
Ours (B+Intra+Inter) 0.34 0.86 0.48 0.60 0.37 0.21 0.46 0.35 0.46
Table 1: Performance comparison of disease localization using 50% unannotated images and 80% annotated images. Note that the data partition settings are inherited from existing studies for fair comparisons. For each column, the bold or red values denote the best results.

4.1 Datasets and Evaluation Metrics

The NIH chest X-ray dataset [32] consists of 112,120 frontal-view X-ray images with 14 disease classes. Furthermore, the dataset contains 880 images with 984 labeled bounding boxes, and the provided boxes cover only 8 disease types. Since we focus on the task of locating diseases, we follow the terms in [20] and [22], calling the 880 images with labeled bounding boxes 'annotated' and the remaining 111,240 images 'unannotated'. For fast processing, we resize the original 3-channel 1024×1024 images to a lower resolution without any data augmentation techniques.

We follow the metrics used in [20]. For localization, the intersection over union (IoU) between predictions and ground truths is used to evaluate the models. A localization result is regarded as correct when IoU > T(IoU), where T(IoU) is the threshold, taking values of 0.3, 0.5, and 0.7. Since the model is trained with limited location annotations, the localization predictions are discrete small rectangles (positive grids). The performance on the eight diseases with ground-truth boxes is reported in this paper.
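The IoU criterion above can be sketched directly; the box format (x1, y1, x2, y2) and the function names are our choices for illustration:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # overlap rectangle (clamped to zero if the boxes are disjoint)
    inter = max(0, min(ax2, bx2) - max(ax1, bx1)) * \
            max(0, min(ay2, by2) - max(ay1, by1))
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

def correct_at(pred, gt, t):
    """A localization counts as correct when IoU(pred, gt) > T(IoU)."""
    return iou(pred, gt) > t
```

For example, two unit-offset 2x2 boxes overlap in a single cell, giving IoU = 1/7: correct at T(IoU) = 0.1 but not at 0.3.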

4.2 Experimental Setting

All models are trained on the NIH chest X-ray dataset using the stochastic gradient descent (SGD) algorithm with Nesterov momentum. The learning rate starts from 0.001 and is divided by 10 every 4 epochs, for a total of 9 epochs. Additionally, the weight decay is 0.0001 and the momentum is 0.9. All weights are initialized from a ResNet-50 [10] pre-trained on ImageNet [6]. The mini-batch size is set to 4 on an NVIDIA 2080Ti GPU. All the methods proposed in this paper are implemented in PyTorch [23]. A threshold of 0.5 is used to distinguish positive grids from negative grids in the class-wise feature map, as adopted in previous studies [20] and [22]. Please note that the feature maps are up-sampled before the last two fully convolutional layers to gain a more accurate lesion localization. Furthermore, paired images are randomly sampled during the training phase; the test phase does not require paired images.
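The step learning-rate schedule described above (start at 0.001, divide by 10 every 4 epochs) can be written as a small helper; the function name is ours, and in PyTorch the same effect is typically obtained with `torch.optim.lr_scheduler.StepLR(optimizer, step_size=4, gamma=0.1)`.

```python
def learning_rate(epoch, base_lr=1e-3, decay=0.1, step=4):
    """Step schedule: base_lr divided by 10 every `step` epochs
    (9 epochs total in the paper's setting)."""
    return base_lr * decay ** (epoch // step)
```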

T(IoU) Models Atelectasis Cardiomegaly Effusion Infiltration Mass Nodule Pneumonia Pneumothorax Mean
0.1 CAM [32] 0.18 0.59 0.23 0.27 0.39 0.23 0.44 0.06 0.30
MIL [20] 0.59 0.81 0.72 0.84 0.68 0.28 0.22 0.37 0.57
CIA-Net [22] 0.39 0.90 0.65 0.85 0.69 0.38 0.30 0.39 0.60
CIAN-CEN [35] 0.71 0.91 0.74 0.85 0.79 0.47 0.40 0.50 0.67
Baseline (B) 0.68 0.84 0.80 0.81 0.68 0.29 0.07 0.43 0.57
B+Intra 0.71 0.88 0.80 0.89 0.76 0.49 0.42 0.58 0.69
B+Inter 0.67 0.86 0.84 0.86 0.72 0.48 0.18 0.50 0.64
Ours (B+Intra+Inter) 0.69 0.93 0.80 0.85 0.75 0.49 0.23 0.50 0.66
0.3 CAM [32] 0.01 0.54 0.03 0.13 0.16 0.20 0.18 0.00 0.16
MIL [20] 0.34 0.26 0.52 0.72 0.40 0.09 0.00 0.23 0.32
CIA-Net [22] 0.34 0.71 0.39 0.65 0.48 0.09 0.16 0.20 0.38
CIAN-CEN [35] 0.47 0.75 0.55 0.76 0.62 0.21 0.19 0.29 0.48
Baseline (B) 0.49 0.75 0.51 0.75 0.53 0.13 0.04 0.28 0.43
B+Intra 0.48 0.82 0.57 0.84 0.55 0.14 0.28 0.37 0.51
B+Inter 0.45 0.83 0.58 0.80 0.53 0.16 0.10 0.33 0.47
Ours (B+Intra+Inter) 0.49 0.90 0.57 0.79 0.58 0.20 0.15 0.28 0.49
0.5 CAM [32] 0.00 0.25 0.01 0.06 0.09 0.05 0.06 0.00 0.07
MIL [20] 0.18 0.10 0.27 0.46 0.18 0.03 0.00 0.11 0.17
CIA-Net [22] 0.19 0.53 0.19 0.47 0.33 0.03 0.08 0.11 0.24
CIAN-CEN [35] 0.32 0.68 0.39 0.61 0.49 0.07 0.15 0.21 0.36
Baseline (B) 0.36 0.66 0.37 0.69 0.35 0.05 0.03 0.21 0.34
B+Intra 0.30 0.79 0.38 0.72 0.42 0.05 0.25 0.27 0.40
B+Inter 0.31 0.79 0.37 0.70 0.41 0.08 0.08 0.24 0.37
Ours (B+Intra+Inter) 0.36 0.86 0.41 0.70 0.49 0.10 0.11 0.22 0.41
0.7 CAM [32] 0.00 0.01 0.00 0.03 0.04 0.04 0.01 0.00 0.01
MIL [20] 0.09 0.01 0.07 0.28 0.08 0.01 0.00 0.05 0.07
CIA-Net [22] 0.08 0.30 0.09 0.25 0.19 0.01 0.04 0.07 0.13
CIAN-CEN [35] 0.13 0.54 0.19 0.27 0.27 0.04 0.05 0.17 0.21
Baseline (B) 0.25 0.55 0.25 0.62 0.26 0.03 0.03 0.14 0.27
B+Intra 0.21 0.74 0.22 0.62 0.35 0.03 0.18 0.18 0.32
B+Inter 0.20 0.75 0.25 0.55 0.38 0.04 0.06 0.21 0.30
Ours (B+Intra+Inter) 0.28 0.77 0.22 0.61 0.44 0.06 0.09 0.16 0.33
Table 2: Performance comparison of disease localization using unannotated images only. For each column, the bold or red values denote the best results. Note that the accuracy of high T(IoU) is more valuable in practice.
T(IoU) Models Atelectasis Cardiomegaly Effusion Infiltration Mass Nodule Pneumonia Pneumothorax Mean
0.3 MIL [20] 0.39 0.80 0.61 0.78 0.39 0.08 0.48 0.35 0.48
CIA-Net [22] 0.55 0.73 0.55 0.76 0.48 0.22 0.39 0.30 0.50
Baseline (B) 0.63 0.89 0.74 0.89 0.44 0.08 0.72 0.42 0.60
B+Intra 0.61 0.94 0.77 0.88 0.63 0.23 0.64 0.47 0.65
B+Inter 0.57 0.78 0.78 0.89 0.67 0.19 0.74 0.45 0.63
Ours (B+Intra+Inter) 0.53 0.87 0.71 0.90 0.67 0.19 0.81 0.42 0.64
0.5 MIL [20] 0.23 0.72 0.30 0.60 0.22 0.02 0.32 0.20 0.32
CIA-Net [22] 0.36 0.57 0.37 0.62 0.34 0.13 0.23 0.17 0.35
Baseline (B) 0.46 0.83 0.63 0.82 0.31 0.04 0.59 0.29 0.50
B+Intra 0.44 0.91 0.60 0.82 0.52 0.13 0.58 0.33 0.54
B+Inter 0.45 0.74 0.62 0.85 0.43 0.19 0.70 0.33 0.54
Ours (B+Intra+Inter) 0.42 0.83 0.60 0.86 0.54 0.10 0.71 0.36 0.55
0.7 MIL [20] 0.07 0.64 0.17 0.38 0.17 0.00 0.20 0.17 0.21
CIA-Net [22] 0.19 0.47 0.20 0.41 0.22 0.06 0.12 0.11 0.22
Baseline (B) 0.36 0.82 0.48 0.71 0.26 0.04 0.48 0.26 0.43
B+Intra 0.35 0.89 0.49 0.76 0.37 0.06 0.51 0.26 0.46
B+Inter 0.38 0.71 0.49 0.72 0.43 0.13 0.58 0.24 0.46
Ours (B+Intra+Inter) 0.32 0.78 0.52 0.74 0.43 0.06 0.65 0.26 0.47
Table 3: Performance comparison of disease localization using 100% unannotated images and 40% annotated images. For each column, the bold or red values denote the best results.

4.3 Comparison With State-of-the-Arts

To effectively evaluate the proposed methods for weakly-supervised disease localization, we follow [22] in using three data partitions and computing average values with five-fold cross validation for each partition. In the first partition, we use 50% of the unannotated images and 80% of the annotated images for training, and the remaining 20% of the annotated images for testing. In the second partition, we use 100% of the unannotated images without annotated images for training and all annotated images for testing. In the third partition, we use 100% of the unannotated images and 40% of the annotated images for training, and the remaining 60% of the annotated images for testing. Additionally, our experimental results are mainly compared with five methods. The first is CAM [32], which locates diseases using a multi-label disease classification model and thresholding of disease heatmaps. The second is MIL [20], which integrates a multi-instance level loss and a binary cross-entropy loss into one framework for disease localization. The third is CIA-Net [22], which utilizes contrast-induced attention acquired on paired healthy and unhealthy images to provide more information for localization. The fourth is Contrast-Attention [37], which exploits two contrastive abnormal attention models and a dual-weighting graph convolution to improve thoracic multi-disease recognition. The fifth is CIAN-CEN [35], which proposes contralateral context information for enhancing feature representations of disease proposals using a spatial transformer network.

In the first partition, we compare the localization results of our model with [32], [20], [22], and [37]. As shown in Table 1, our model outperforms the existing methods in most cases. In particular, as T(IoU) increases, our model gains greater advantages over the reference models. For example, when T(IoU) = 0.3, the mean accuracy of our model is 0.62, outperforming [32], [20], [22], and [37] by 0.40, 0.13, 0.09, and 0.02, respectively; when T(IoU) = 0.7, the mean accuracy of our model is 0.46, outperforming them by 0.45, 0.34, 0.17, and 0.11, respectively. In the second partition, we train our model without any annotated images. Since [20, 22, 35] provide results at T(IoU) = 0.1, we add the evaluation at T(IoU) = 0.1 and reproduce the other results, which are marked in the table. For each disease, the bold and red values denote the best results of the eight disease types and the mean accuracy, respectively. We compare the localization results of our model with [32], [20], [22], and [35], and again observe that our model outperforms the existing methods in most cases. In practice, we usually pay most attention to the accuracy at high T(IoU) thresholds, where our model has greater advantages over the reference models, as shown in Table 2. In the third partition, we use more annotated images compared with the second partition. We compare the localization results of our model with [20] and [22] in the same data setting. As shown in Table 3, our model still holds the advantage over the existing methods. Overall, the experimental results demonstrate that our method localizes diseases more accurately. Even without annotated data for training, our method achieves decent localization results.

4.4 Ablation Studies

We compare the localization results of the baseline model (B) with three variants: the model with the intra-image knowledge learning module (B+Intra), the model with the inter-image knowledge learning module (B+Inter), and the model with both modules (B+Intra+Inter).

The experimental results in Tables 1, 2, and 3 show that the models (B+Intra) and (B+Inter) outperform the baseline, and the model (B+Intra+Inter) achieves the best localization results. For example, in Table 1, when T(IoU) = 0.5, the mean accuracies of (B+Intra) and (B+Inter) are 0.49 and 0.50, outperforming the baseline (B) by 0.06 and 0.07, respectively. The model (B+Intra+Inter) achieves the best mean accuracy of 0.54, outperforming the baseline (B) by 0.11. In practice, we usually pay most attention to the accuracy at high T(IoU) thresholds. Thus, in Table 2, the localization results of (B+Intra+Inter) have more practical value than those of (B+Intra), since (B+Intra+Inter) outperforms (B+Intra) by 0.01 at T(IoU) = 0.5 and 0.7. Additionally, the results in Tables 1, 2, and 3 indicate that with more annotated data in training, the inter-image knowledge learning module is more advantageous, whereas without annotated data, the intra-image knowledge learning module is more advantageous. For example, (B+Inter) performs better than (B+Intra) in most cases in Table 1, while (B+Intra) performs better than (B+Inter) in most cases in Table 2. Overall, the experimental results demonstrate that using the intra-image and inter-image structural relationship information improves disease localization.

We also compare our model with an attention-based variant (B+Attention), which integrates an attention mechanism into the baseline model to learn the differences between lung regions. As shown in Table 4, B+Attention outperforms the baseline by 0.06, 0.06, and 0.07 at T(IoU) = 0.3, 0.5, and 0.7, respectively, but our model still outperforms B+Attention by 0.02, 0.05, and 0.03. Overall, while attention improves the baseline, our model, which exploits the intra-image and inter-image structural relationships, is more effective for weakly-supervised disease localization.

For the ablation studies, we explore the influence of the number of graph nodes, which equals the batch size (2, 4, 8, or 16), as shown in Figure 3. We use 100% of the unannotated images and 40% of the annotated images to evaluate our model. The mean accuracy improves as the batch size increases, but the growth is not unbounded. For example, at T(IoU) = 0.7, the mean accuracy with a batch size of 16 is 0.47, outperforming a batch size of 2 by 0.03; however, at T(IoU) = 0.5, the mean accuracy with a batch size of 16 is 0.54, the same as with a batch size of 4. Overall, including more images in each graph yields better regularization, but the gains diminish beyond a batch size of 4. We therefore set the batch size to 4, balancing computational cost against accuracy.

T (IoU) | Models       | Atelectasis | Cardiomegaly | Effusion | Infiltration | Mass | Nodule | Pneumonia | Pneumothorax | Mean
0.3     | Baseline (B) | 0.46 | 1.00 | 0.45 | 0.79 | 0.51 | 0.11 | 0.72 | 0.24 | 0.54
0.3     | B+Attention  | 0.49 | 0.93 | 0.76 | 0.92 | 0.47 | 0.21 | 0.75 | 0.25 | 0.60
0.3     | Ours         | 0.46 | 0.93 | 0.72 | 0.88 | 0.47 | 0.29 | 0.67 | 0.55 | 0.62
0.5     | Baseline (B) | 0.34 | 0.95 | 0.37 | 0.62 | 0.40 | 0.07 | 0.61 | 0.09 | 0.43
0.5     | B+Attention  | 0.34 | 0.89 | 0.52 | 0.72 | 0.37 | 0.21 | 0.71 | 0.15 | 0.49
0.5     | Ours         | 0.37 | 0.86 | 0.52 | 0.84 | 0.42 | 0.29 | 0.54 | 0.45 | 0.54
0.7     | Baseline (B) | 0.25 | 0.93 | 0.25 | 0.50 | 0.27 | 0.04 | 0.54 | 0.09 | 0.36
0.7     | B+Attention  | 0.29 | 0.89 | 0.44 | 0.64 | 0.32 | 0.14 | 0.58 | 0.15 | 0.43
0.7     | Ours         | 0.34 | 0.86 | 0.48 | 0.60 | 0.37 | 0.21 | 0.46 | 0.35 | 0.46
Table 4: Performance comparison of disease localization using 50% unannotated images and 80% annotated images. For each column, the bold or red values denote the best results.
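The Mean column in Table 4 appears to be the unweighted average of the eight per-class accuracies, rounded to two decimals (an assumption, since the paper does not state the aggregation explicitly). This can be checked directly, here for our model at T(IoU) = 0.5:

```python
# Per-class accuracies of "Ours" at T(IoU) = 0.5, read from Table 4:
# Atelectasis, Cardiomegaly, Effusion, Infiltration,
# Mass, Nodule, Pneumonia, Pneumothorax.
ours_iou05 = [0.37, 0.86, 0.52, 0.84, 0.42, 0.29, 0.54, 0.45]

# Unweighted mean over the eight classes, rounded to two decimals.
mean_acc = round(sum(ours_iou05) / len(ours_iou05), 2)  # 0.54
```

The result matches the reported Mean of 0.54, consistent with the unweighted-average assumption.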

Figure 3: Performance comparison of different batch sizes (2, 4, 8, and 16). The mean accuracy improves as the batch size increases, but the growth is not unbounded.

We also examine the inter-image and intra-image similarities, as shown in Figure 4, for 10 images randomly chosen from the dataset. In Figure 4 (a), the horizontal and vertical axes index the same set of lung-region images, arranged so that each pair of adjacent images shares the same class. We compute the similarity between the lung regions of these images using Hash coding and the Hamming distance. Adjacent images exhibit higher similarity than other pairs, with darker colors denoting higher similarity; in particular, each diagonal entry compares an image with itself, so its value is 1. In Figure 4 (b), the horizontal and vertical axes correspond to the left and right lung regions of each image, so only the diagonal entries are meaningful and the remaining cells are filled with a uniform color. A darker diagonal entry indicates higher similarity between the left and right lungs of that image. In particular, the sixth and ninth dark entries (from upper left to lower right) show that the sixth and ninth images have highly similar left and right lungs; their category is indeed "No Finding" (the second, sixth, and ninth images are all labeled "No Finding"). Conversely, the light entries for the first, fourth, and eighth images indicate low similarity between their left and right lung regions; their categories are "Cardiomegaly", "Cardiomegaly", and "Pneumonia", respectively. Overall, the inter-image similarity can, to some degree, indicate the categories of images without any annotation, while the intra-image similarity captures the difference between the left and right lung regions, which helps distinguish unilateral diseases in X-ray images.
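The hash-based similarity described above can be sketched as follows. This is an illustrative average-hash variant applied to synthetic arrays, not necessarily the exact robust image hash used in the paper: each region is downsampled to an 8x8 grid of block means, binarized at the mean intensity, and pairs of hashes are compared via the normalized Hamming distance.

```python
import numpy as np

def average_hash(region, hash_size=8):
    """Downsample a lung-region image to hash_size x hash_size block
    means and binarize at the mean intensity, yielding a 64-bit hash.
    Assumes the region's height and width are divisible by hash_size."""
    h, w = region.shape
    blocks = region.reshape(hash_size, h // hash_size,
                            hash_size, w // hash_size).mean(axis=(1, 3))
    return (blocks > blocks.mean()).astype(np.uint8).ravel()

def hamming_similarity(hash_a, hash_b):
    """Similarity in [0, 1]: 1 minus the normalized Hamming distance."""
    distance = np.count_nonzero(hash_a != hash_b)
    return 1.0 - distance / hash_a.size

# Synthetic demo: a near-duplicate region scores higher than an
# unrelated one (hypothetical data, fixed seed for reproducibility).
rng = np.random.default_rng(0)
left = rng.random((64, 64))
noisy = left + rng.normal(scale=0.01, size=(64, 64))  # near-duplicate
other = rng.random((64, 64))                          # unrelated region

s_same = hamming_similarity(average_hash(left), average_hash(noisy))
s_diff = hamming_similarity(average_hash(left), average_hash(other))
```

An identical pair scores exactly 1, matching the diagonal entries of Figure 4 (a); dissimilar regions drift toward a similarity of about 0.5, since unrelated hashes agree on roughly half their bits by chance.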

Figure 4: The inter-image and intra-image similarities. Each color denotes a similarity value; darker colors denote higher similarity. (a) The inter-image similarity. The horizontal and vertical axes index the same lung-region images; adjacent images share the same class and show higher similarity than other pairs. (b) The intra-image similarity. The horizontal and vertical axes correspond to the left and right lung regions of each image. Boxes of the same color denote images of the same class, and a grey box denotes an image with only one class.

Figure 5: Visualization of the predicted results of the baseline model (MIL) and our method. The first column shows the original images; the second and third columns show the predictions of MIL and GREN. The green bounding box and red area denote the ground truth and prediction, respectively. MIL often suffers from inaccurate and incomplete localization of smaller targets (e.g., Atelectasis, Effusion, Nodule); GREN alleviates this problem, consistent with the quantitative results.

Figure 6: Visualization of the predicted results of the CAM-based method and our method. The first column shows the original images; the second and third columns show the predictions of the CAM-based method and our method. The green bounding box and red area denote the ground truth and prediction, respectively. CAM highlights only the most discriminative areas, whereas GREN makes more integral predictions.

To better demonstrate the final effect of our method on disease localization, we visualize typical predictions of the baseline model and our method in Figure 5. The first column shows the original images; the second and third columns show the localization results of the baseline model and our model, with the green bounding box and red area denoting the ground truth and prediction. Our model localizes more accurately than the baseline in most cases. For example, for "Atelectasis", "Effusion", and "Nodule", the baseline's localizations are completely inconsistent with the ground truth, whereas ours are consistent with it. We also visualize typical predictions of the CAM-based method and our method in Figure 6, with the same layout and color scheme. Our model shows clear advantages over the CAM-based method. Overall, these visualizations confirm that exploiting the intra-image and inter-image structural information improves automatic lesion localization.

5 Conclusion

In this paper, we propose GREN, which leverages intra-image and inter-image information as a regularizer to preserve the structural similarity between lung regions and between image pairs in the embedding space, thereby improving disease localization under limited supervision. Experimental results on the NIH ChestX-ray14 dataset demonstrate that the proposed method achieves state-of-the-art performance across different settings, and that it is of practical value for the weakly-supervised localization task. For future work, we will investigate algorithms for localizing small targets, especially pulmonary nodules.
