Bias Field Poses a Threat to DNN-based X-Ray Recognition

09/19/2020 ∙ by Binyu Tian, et al. ∙ Tianjin University

The chest X-ray plays a key role in the screening and diagnosis of many lung diseases, including COVID-19. More recently, many works construct deep neural networks (DNNs) for chest X-ray images to realize automated and efficient diagnosis of lung diseases. However, the bias field caused by improper medical image acquisition widely exists in chest X-ray images, while the robustness of DNNs to the bias field is rarely explored, which poses a threat to X-ray-based automated diagnosis systems. In this paper, we study this problem based on recent adversarial attacks and propose a brand new attack, i.e., the adversarial bias field attack, where the bias field instead of additive noise works as the adversarial perturbation for fooling DNNs. This novel attack poses a key problem: how to locally tune the bias field to realize a high attack success rate while maintaining its spatial smoothness to guarantee high realisticity. These two goals contradict each other and thus make the attack significantly challenging. To overcome this challenge, we propose the adversarial-smooth bias field attack that can locally tune the bias field with joint smooth and adversarial constraints. As a result, the adversarial X-ray images can not only fool the DNNs effectively but also retain a very high level of realisticity. We validate our method on real chest X-ray datasets with powerful DNNs, e.g., ResNet50, DenseNet121, and MobileNet, and show properties distinct from state-of-the-art attacks in both image realisticity and attack transferability. Our method reveals the potential threat to DNN-based X-ray automated diagnosis and can benefit the development of bias-field-robust automated diagnosis systems.


1 Introduction

Figure 1: Two cases of our adversarial bias field examples. Our proposed adversarial-smooth bias field attack can adversarially but imperceptibly alter the bias field, misleading advanced DNN models, e.g., ResNet50, into diagnosing a normal X-ray image as a pneumonia one. More troubling, the DNN can be fooled into diagnosing a pneumonia X-ray image as a normal one, posing a higher risk of delaying patients' treatment.

Medical image diagnosis and recognition is starting to be automated by DNNs, with the clear advantage of being very efficient in diagnosing disease outcomes. However, unlike human experts, such DNN-based automated methods still have some caveats. For example, in the presence of image-level degradations introduced during the image acquisition process, the recognition accuracy can drop dramatically. Sometimes, such a DNN-based medical image recognition system can even become entirely vulnerable when maliciously attacked by a financially incentivized adversary or abuser.

There are mainly two types of image perturbations or degradations in medical imagery: (1) image noise and (2) the image bias field. Image noise is primarily caused by the image sensor, while the bias field is caused by spatial variations of radiation Vovk et al. (2007), which are very common in medical imaging, ranging from magnetic resonance imaging (MRI) Ahmed et al. (2002) and computed tomography (CT) Li et al. (2008); Guo et al. (2017c, 2018) to X-ray imaging. The bias field appears as intensity inhomogeneity in MRI, CT, or X-ray images. In consumer digital imaging, the bias field shows up as illumination changes or a vignetting effect.

In this work, we want to reveal the vulnerability caused by the image bias field. To the best of our knowledge, this is the very first attempt to adversarially perturb the bias field in order to attack DNN-based X-ray recognition. In contrast to additive noise-perturbation attacks on DNN-based recognition systems, the attack on the bias field is multiplicative in nature Zheng and Gee (2010), which is fundamentally different from noise attacks. More importantly, to make the bias field attack realistic and imperceptible, a successful attack needs to maintain the smoothness property of the bias field, which is genuinely more challenging because local smoothness usually contradicts high attack success rates.

To overcome this challenge, we capitalize on this degradation characteristic of X-ray imagery and initiate adversarial attacks based on imperceptible modifications of the bias field itself. Specifically, we propose the adversarial-smooth bias field generator, which can locally tune the bias field with joint smooth and adversarial constraints by tapping into the bias field generation process based on a multivariate polynomial model. As a result, the adversarially perturbed bias field applied to the X-ray image can not only fool the DNN-based recognition system effectively but also retain a high level of realisticity. We validate our proposed method on chest X-ray classification datasets with state-of-the-art DNNs such as ResNet, DenseNet, and MobileNet, showing its advantages in terms of both image realisticity and attack transferability. A careful investigation into which bias field regions contribute most significantly to the adversarial nature of the attack can lead to a better interpretation and understanding of the DNN-based recognition system and its vulnerability, which, we believe, is of utmost importance. The ultimate goal of this work is to reveal that the bias field does pose a potential threat to DNN-based automated recognition systems and to benefit the development of bias-field-robust automated diagnosis systems in the future.

2 Related Work

In this section, we summarize the related work, including X-ray imagery recognition, noise-based adversarial attacks, and adversarial attacks on medical imagery.

2.1 X-Ray Imagery Recognition

X-ray radiography is widely used in the medical field for the diagnosis or treatment of diseases. In recent years, many public X-ray image datasets have been made available, leading to a wide literature examining data mining and deep learning techniques on such datasets.

One of the largest datasets is the ChestX-ray14 dataset from the National Institutes of Health (NIH) Clinical Center, which contains 108,948 frontal-view X-ray images labeled with 14 thoracic diseases, along with other non-image features. Together with the dataset, Wang et al. (2017) evaluates the performance of four classic convolutional neural networks (CNNs), namely AlexNet, ResNet-50, VGGNet, and GoogLeNet, on the multi-label classification of these diseases, creating an initial baseline with an average area under the ROC curve (AUC) of 0.745 over all 14 diseases.

Inspired by this work, many researchers started utilising the power of deep neural networks for chest X-ray (CXR) classification. Li et al. (2017) presents a framework to jointly perform disease classification and localisation. With the use of bounding boxes to predict lesion locations, the classification performance is improved to an average AUC of 0.755. Yao et al. (2017) proposes the use of a CNN backbone based on a DenseNet variant, combined with a long short-term memory network (LSTM) to exploit statistical dependencies between labels, achieving an average AUC of 0.761.

Guan and Huang (2020) explores a category-wise residual attention learning (CRAL) framework, which is made up of feature embedding and attention learning modules. Different attention weights are assigned to enhance or suppress different spatial feature regions, yielding an average AUC score of 0.816. Rajpurkar et al. (2017) proposes the use of transfer learning by fine-tuning a modified DenseNet, resulting in an algorithm called CheXNet, a 121-layer CNN. It further raises the average AUC to 0.842.

Guan et al. (2018) presents a three-branch attention-guided CNN, which focuses on local cues from (small) localized lesion areas. The combination of local cues and global features achieves an average AUC of 0.871. The state-of-the-art results using the official split released by Wang et al. (2017) are held by Gündel et al. (2018) with an average AUC of 0.817. The paper argues that when a random split is used, the same patient is likely to appear in both the train and test sets, and this overlap affects performance comparison. The proposed method is a location-aware DenseNet-121, trained on the ChestX-ray14 and PLCO datasets, which incorporates the use of spatial information in high-resolution images.

The power of deep learning on CXR has also been explored for the detection of COVID-19, motivated by the need for quick, effective, and convenient screening. Studies showed that some COVID-19 patients display abnormalities in their CXR images. Wang and Wong (2020) releases an open-access benchmark dataset, COVIDx, along with COVID-Net, a deep CNN designed for the detection of COVID-19 from CXR images, achieving a sensitivity of 97.3% and a positive predictive value of 99.7%. Many studies have also leveraged variants of the dataset and network for the prediction of COVID-19 Afshar et al. (2020); Li and Zhu (2020); Tartaglione et al. (2020).

Despite the strong performance of DNNs and the considerations made to address data irregularities such as class imbalance, the effect of medical image degradation on disease identification has rarely been investigated and addressed. For example, the bias field, also referred to as intensity inhomogeneity, is a low-frequency, smoothly varying intensity signal across images caused by imperfections in the image acquisition process. The bias field can adversely affect quantitative image analysis Juntu et al. (2005). Many inhomogeneity correction strategies have hence been proposed in the literature Brey and Narayana (1988); Fan et al. (2003); Thomas et al. (2005).

However, the possible detrimental effects of the bias field on disease identification, localisation, or segmentation are rarely explored in the literature, hence the proposed DNNs may not be robust to this inherent degradation. To the best of the authors' knowledge, this paper is the very first work that looks at the effect of the bias field from the view of adversarial attack.

2.2 General Adversarial Attack

Despite the success of various DNNs deployed to solve different recognition problems in image, speech, or natural language processing applications, many studies have shown that DNNs are susceptible to adversarial attacks Szegedy et al. (2013); Goodfellow et al. (2014). Many works propose different adversarial attacks Guo et al. (2020b, c); Wang et al. (2020); Cheng et al. (2020a), and they can be generally classified into attacks at the training and testing stages.

At the training stage, attackers can carry out data poisoning, which involves inserting adversarial examples into the training dataset to degrade the model's performance. For example, the poison frog attack inserts images into the dataset to ensure that a wrong classification will be given to a target test sample Shafahi et al. (2018). The use of direct gradient methods to generate adversarial images against neural networks has also been explored Yang et al. (2017).

At the testing stage, attackers can carry out either white-box or black-box attacks. In white-box attacks, attackers are assumed to have access to the target classifier. Biggio et al. (2013) focuses on optimising a discriminant function to mislead a three-layer fully connected neural network. Shafahi et al. (2018) suggests that a certain imperceptible perturbation can be applied to cause the misclassification of an image, and that this effect can transfer to different networks trained on similar data, causing them to misclassify the same input. The fast gradient sign method (FGSM) is proposed by Goodfellow et al. (2014); it involves only one back-propagation step to calculate the gradient of the cost function, hence allowing fast adversarial example generation. Kurakin et al. (2016) proposes the iterative version of FGSM, known as the basic iterative method (BIM), which heuristically searches for examples that are most likely to fool the classifier. Given the presence of literature that defends against FGSM-style methods, Carlini and Wagner (2017) proposes the use of a margin loss instead of the entropy loss during attacks. Cisse et al. (2017) proposes an approach named HOUDINI, which generates adversarial examples for fooling visual and speech recognition models. Instead of altering pixel values, spatially transformed attacks have also been proposed to perform spatial transformations such as translation or rotation on images Xiao et al. (2018).
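To make the one-step nature of FGSM concrete, the following is a minimal PyTorch-style sketch (the `model`, `image`, and `label` tensors are hypothetical placeholders, and `eps` is an assumed perturbation budget); it illustrates the general technique rather than the exact setup of any cited work.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, eps=0.03):
    """One-step FGSM: perturb the input along the sign of the loss gradient."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()                        # a single back-propagation step
    adv = image + eps * image.grad.sign()  # move in the direction that increases the loss
    return adv.clamp(0, 1).detach()        # keep pixel values in a valid range
```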

In black-box attacks, attackers have no access to the classifier's parameters or training set. Papernot et al. (2017) proposes exploiting the transferability of adversarial examples: a model similar to the target classifier is first trained, and then adversarial examples generated against the trained model are used to attack the actual classifier. Fredrikson et al. (2015) explores the exploitation of the confidence values exposed by the target classifier as predictions are made.

2.3 Adversarial Attack (and Defense) on Medical Imagery

There is existing literature that looks into adversarial attacks against deep learning systems for medical imagery. Finlayson et al. (2018) shows that both black-box and white-box PGD attacks and adversarial patch attacks can affect the performance of classifiers modelled after state-of-the-art systems on fundoscopy, chest X-ray, and dermoscopy, respectively. Similarly, Paschali et al. (2018) shows that small perturbations can cause classification performance drops across state-of-the-art networks such as Inception and UNet, for which accuracy drops from above 87% on normal medical images to almost 0% on adversarial examples. By producing a crafted mask, an adaptive segmentation mask attack (ASMA) is proposed to fool DNN models for segmenting medical imagery Ozbulak et al. (2019).

In medical adversarial defence, Li et al. (2020) proposes an unsupervised detection of adversarial samples in which unsupervised adversarial detection (UAD) is complemented with semi-supervised adversarial training (SSAT); the proposed model is claimed to demonstrate superior performance in medical defence compared with other techniques. Ma et al. (2020) further argues that medical DNNs are more vulnerable to attacks due to specific characteristics of medical images, such as high-gradient regions sensitive to perturbations, and the over-parameterization of state-of-the-art DNNs. That work then proposes an adversarial detector specifically designed for medical image attacks, achieving over 98% detection AUC.

However, very little literature has conducted adversarial attacks based on the inherent characteristics of the targeted medical imagery. For example, the common noise degradations used for general adversarial attacks are rarely found in X-ray imagery. Hence, in this work, we capitalize on the degradation characteristic of X-ray imagery, the bias field, and initiate adversarial attacks based on imperceptible modifications of the bias field itself.

3 Methodology

3.1 Adversarial Bias Field Attack and Challenges

Given an X-ray image $\mathbf{X}$, we can assume it is generated by applying a bias field $\mathbf{B}$ to a clean version $\hat{\mathbf{X}}$ with the widely used imaging model

$$\mathbf{X} = \mathbf{B} \odot \hat{\mathbf{X}}, \quad (1)$$

where $\odot$ denotes element-wise multiplication. Under the automated diagnosis task, where a DNN $\phi(\cdot)$ is used to recognize the category (i.e., normal or abnormal) of $\mathbf{X}$, it is necessary to explore a totally new task, i.e., the adversarial bias field attack, which aims to fool the DNN by calculating an adversarial bias field $\mathbf{B}^{\mathrm{adv}}$. With it, we can study the influence of the bias field as well as the potential threat of utilizing it to fool automated diagnosis.

A simple way is to take the logarithm of Eq. (1) and transform the multiplication into an additive operation:

$$\tilde{\mathbf{X}} = \tilde{\mathbf{B}} + \tilde{\hat{\mathbf{X}}}, \quad (2)$$

where we use $\tilde{\cdot}$ to represent the logarithm of a variable. With Eq. (2), it seems that all existing additive adversarial attacks, i.e., FGSM, BIM, MIFGSM, DIM, and TIMIFGSM, could be used for the new attack. For example, we can calculate $\tilde{\mathbf{B}}^{\mathrm{adv}}$ to realize the attack by solving

$$\tilde{\mathbf{B}}^{\mathrm{adv}} = \arg\max_{\tilde{\mathbf{B}}}\; J\big(\phi(\exp(\tilde{\hat{\mathbf{X}}} + \tilde{\mathbf{B}})),\, y\big), \quad (3)$$

where $J(\cdot,\cdot)$ is the loss function for classification, e.g., the cross-entropy loss, and $y$ denotes the ground truth label of $\mathbf{X}$. Nevertheless, we argue that such a solution cannot generate a real 'bias field', since the optimized $\tilde{\mathbf{B}}^{\mathrm{adv}}$ violates the basic property of a bias field, i.e., spatially smooth changes resulting in intensity inhomogeneity. For example, as shown in Fig. 2, when we optimize Eq. (3) to produce a bias field, we can attack ResNet50 successfully, but the bias field is noise-like and far from its appearance in the real world.
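To make this observation concrete, the following is a minimal sketch of the naive log-domain attack under our reconstruction of Eq. (3) (the `model`, `x_clean`, and `label` tensors are hypothetical placeholders, and the dataset image is treated as the clean version): every pixel of the log bias field is updated independently by sign gradient ascent, which is exactly why the result is noise-like rather than smooth.

```python
import torch
import torch.nn.functional as F

def naive_bias_field_attack(model, x_clean, label, step=0.01, iters=10):
    """Per-pixel log-domain bias field attack (cf. Eq. 3): effective but non-smooth."""
    log_x = torch.log(x_clean.clamp_min(1e-6))           # work in the log domain (Eq. 2)
    log_b = torch.zeros_like(log_x, requires_grad=True)  # log bias field, one free value per pixel
    for _ in range(iters):
        x_adv = torch.exp(log_x + log_b).clamp(0, 1)     # back to the intensity domain
        loss = F.cross_entropy(model(x_adv), label)      # classification loss J
        loss.backward()
        with torch.no_grad():
            log_b += step * log_b.grad.sign()            # unconstrained per-pixel sign update
        log_b.grad.zero_()
    return torch.exp(log_x + log_b).clamp(0, 1).detach() # noise-like, non-smooth result
```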

Figure 2: An example of using Eq. (3) to generate a non-smooth adversarial bias field.

As a result, due to the requirement of spatial smoothness, the adversarial bias field attack poses a totally new challenge to the field of adversarial attack: how to generate an adversarial perturbation that not only achieves a high attack success rate but also maintains spatial smoothness for the realisticity of the bias field. Since a high attack success rate relies on pixel-wise tunable perturbations and thus violates the smoothness requirement of the bias field, the two constraints contradict each other and make the adversarial bias field attack significantly challenging.

3.2 Adversarial-Smooth Bias Field Attack

To overcome the above challenge, we propose a distortion-aware multivariate polynomial model to represent the bias field, whose inherent property guarantees the spatial smoothness of the bias field while the distortion helps achieve an effective attack. Then, we define a new objective function for an effective attack by combining a spatial-smoothness constraint and a sparsity constraint on the bias field parameters with the adversarial loss. Finally, we introduce the optimization method and the attack algorithm.

3.2.1 Distortion-aware multivariate polynomial model.

We model the bias field as

$$\mathbf{B}(\mathbf{p}_i) = \sum_{j + k \le K} a_{jk}\, (x_i')^{j}\, (y_i')^{k}, \quad \text{with } (x_i', y_i') = \mathrm{TPS}_{\theta}(\mathbf{p}_i), \quad (4)$$

where $\mathrm{TPS}_{\theta}(\cdot)$ represents the distortion transformation, for which we use the thin-plate spline (TPS) transformation with $\theta$ being its control points. We denote $\mathbf{p}_i = (x_i, y_i)$ as the $i$-th pixel with its coordinates, while $(x_i', y_i')$ means the pixel has been distorted by the TPS. In addition, $\{a_{jk}\}$ and $K$ are the parameters and degree of the multivariate polynomial model, respectively, and the number of parameters is determined by $K$. For a convenient representation, we concatenate all $a_{jk}$ into a vector $\mathbf{a}$.
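The sketch below illustrates one way to realize this model under our reconstructed notation: a degree-limited bivariate polynomial is evaluated on distorted pixel coordinates, so smoothness comes from the low-degree polynomial while displacing the control points adds local flexibility. Note that the grid-based bilinear displacement used here is a simplified stand-in for a full thin-plate-spline warp, and all names and shapes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def polynomial_bias_field(a, offsets, size=(224, 224), degree=10):
    """Evaluate a degree-limited bivariate polynomial on distorted coordinates (cf. Eq. 4)."""
    h, w = size
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    coords = torch.stack([xs, ys])                                # 2 x H x W identity grid
    # Simplified distortion: upsample control-point offsets to a dense displacement field.
    disp = F.interpolate(offsets.unsqueeze(0), size=(h, w),
                         mode="bilinear", align_corners=True)[0]  # 2 x H x W
    x, y = coords + disp                                          # distorted coordinates
    field, idx = torch.zeros(h, w), 0
    for j in range(degree + 1):                                   # all terms x^j * y^k with j + k <= degree
        for k in range(degree + 1 - j):
            field = field + a[idx] * (x ** j) * (y ** k)
            idx += 1
    return field                                                  # smooth bias field B

# Example: one coefficient per polynomial term and a 16x16 grid of 2-D control-point offsets.
n_terms = sum(11 - j for j in range(11))          # 66 terms for degree 10
a = torch.zeros(n_terms); a[0] = 1.0              # start from a constant bias field of 1
offsets = torch.zeros(2, 16, 16)
bias = polynomial_bias_field(a, offsets)          # shape (224, 224)
```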

3.2.2 Adversarial-smooth objective function.

With Eq. (4), we can tune $\mathbf{a}$ and $\theta$ for the adversarial attack, while the multivariate polynomial model helps preserve the smoothness of the bias field. Intuitively, on the one hand, a lower degree $K$ leads to fewer model parameters and a smoother bias field. On the other hand, the distortion can be locally tuned via $\theta$ and helps achieve an effective attack. The key problem is how to calculate $\mathbf{a}$ and $\theta$ to balance spatial smoothness and adversarial strength. To this end, we define a new objective function to realize the attack:

$$\arg\max_{\mathbf{a},\, \theta}\; J\big(\phi(\mathbf{B}(\mathbf{a}, \theta) \odot \hat{\mathbf{X}}),\, y\big) \;-\; \lambda_1 \|\mathbf{a}\|_1 \;-\; \lambda_2 \|\theta - \theta_0\|_2^2, \quad (5)$$

where $\theta_0$ represents the parameters of the identity TPS transformation, i.e., $\mathrm{TPS}_{\theta_0}(\mathbf{p}_i) = \mathbf{p}_i$. The first term tunes $\mathbf{a}$ and $\theta$ to fool a DNN for X-ray recognition. The second term encourages the sparsity of $\mathbf{a}$ and keeps the bias field smooth. The final term keeps the TPS transformation from moving far away from the identity version. The two hyper-parameters $\lambda_1$ and $\lambda_2$ control the balance between smoothness and adversarial strength.
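A minimal sketch of this objective under our reconstructed notation is given below; it reuses the hypothetical `polynomial_bias_field` helper from the previous sketch, and `lambda1`/`lambda2` stand in for the two balancing hyper-parameters (their values here are arbitrary).

```python
import torch
import torch.nn.functional as F

def adv_smooth_objective(model, x_clean, label, a, offsets, lambda1=0.01, lambda2=0.1):
    """Adversarial-smooth objective (cf. Eq. 5): adversarial loss minus smoothness penalties."""
    bias = polynomial_bias_field(a, offsets, size=x_clean.shape[-2:])  # smooth bias field B
    x_adv = (bias * x_clean).clamp(0, 1)                               # multiplicative model of Eq. (1)
    adv_loss = F.cross_entropy(model(x_adv), label)                    # first term: fool the classifier
    sparsity = a.abs().sum()                                           # second term: L1 sparsity of a
    tps_dev = (offsets ** 2).sum()                                     # third term: stay near the identity TPS
    return adv_loss - lambda1 * sparsity - lambda2 * tps_dev           # objective to be maximized
```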

3.3 Optimization

Like the optimization methods used in general adversarial noise attacks, we solve Eq. (3) and Eq. (5) via sign gradient descent, where $\mathbf{a}$ and $\theta$ are updated with a fixed step size $\epsilon$:

$$\mathbf{a}_{t+1} = \mathbf{a}_t + \epsilon\, \mathrm{sign}(\nabla_{\mathbf{a}} J_t), \quad (6)$$
$$\theta_{t+1} = \theta_t + \epsilon\, \mathrm{sign}(\nabla_{\theta} J_t), \quad (7)$$

where $\nabla_{\mathbf{a}} J_t$ and $\nabla_{\theta} J_t$ denote the gradients of the objective function in Eq. (5) with respect to $\mathbf{a}$ and $\theta$ at iteration $t$, respectively. For Eq. (3), we use the same rule to update $\tilde{\mathbf{B}}$ directly. We fix $\epsilon$ and set the iteration number to 10.
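Putting the pieces together, a minimal sketch of the resulting optimization loop is shown below, assuming the hypothetical `adv_smooth_objective` and `polynomial_bias_field` helpers from the previous sketches; both parameter groups share a fixed step size and the ten iterations mentioned above, and the concrete shapes (66 polynomial terms, a 16×16 control grid) are illustrative choices.

```python
import torch

def adversarial_smooth_bias_field_attack(model, x_clean, label, step=0.05, iters=10):
    """Sign-gradient updates of the polynomial parameters a and the TPS offsets (cf. Eqs. 6-7)."""
    a = torch.zeros(66, requires_grad=True)          # 66 terms for a degree-10 polynomial
    with torch.no_grad():
        a[0] = 1.0                                   # start from the identity bias field B = 1
    offsets = torch.zeros(2, 16, 16, requires_grad=True)
    for _ in range(iters):
        obj = adv_smooth_objective(model, x_clean, label, a, offsets)
        grad_a, grad_t = torch.autograd.grad(obj, [a, offsets])
        with torch.no_grad():
            a += step * grad_a.sign()                # Eq. (6): ascend the objective w.r.t. a
            offsets += step * grad_t.sign()          # Eq. (7): ascend the objective w.r.t. the offsets
    bias = polynomial_bias_field(a.detach(), offsets.detach(), size=x_clean.shape[-2:])
    return (bias * x_clean).clamp(0, 1)              # adversarial bias field example
```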

4 Experiments

In this section, we conduct comprehensive experiments on a real chest X-ray dataset to validate the effectiveness of our method and discuss how the bias field affects X-ray recognition. We want to answer the following questions: ❶ What are the differences and advantages of the adversarial bias field attack over existing adversarial noise attacks? ❷ How and why can bias fields affect X-ray recognition? ❸ How do the hyper-parameters affect the attack results?

4.1 Setup and Dataset

4.1.1 Dataset.

We carry out our experiments on a chest X-ray dataset about pneumonia, which contains 5,863 X-ray images (please find more details about the dataset at https://www.kaggle.com/paultimothymooney/chest-xray-pneumonia). These images were selected from retrospective cohorts of pediatric patients. The dataset is divided into two categories, i.e., pneumonia and normal.

4.1.2 Models.

In order to show the effect of the attack on different neural network models, we finetune three pre-trained models on the chest X-ray dataset: ResNet50, MobileNet, and DenseNet121 (Dense121). The accuracies of ResNet50, MobileNet, and DenseNet121 are 88.62%, 88.94%, and 87.82%, respectively.
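For reference, a typical fine-tuning setup of this kind is sketched below; it is an illustrative example rather than our exact training configuration, and the dataset path, batch size, learning rate, and number of epochs are placeholder assumptions.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Load an ImageNet-pretrained ResNet50 and replace its head with a 2-class layer.
model = models.resnet50(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 2)        # normal vs. pneumonia

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("chest_xray/train", transform=preprocess)  # placeholder path
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
model.train()
for images, labels in loader:                        # one epoch of fine-tuning
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```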

4.1.3 Metrics.

We choose the attack success rate and image quality to evaluate the effectiveness of the bias field attack. The image quality measurement metric is BRISQUE Mittal et al. (2012). BRISQUE is an unsupervised image quality assessment method. A high score for BRISQUE indicates poor image quality.

4.1.4 Baselines.

We select five adversarial attack methods as our baselines, which include basic iterative method (BIM) Kurakin et al. (2016), Carlini & Wagner L2 method (C&W) Carlini and Wagner (2017), saliency map method (SaliencyMap) Papernot et al. (2016), fast gradient sign method (FGSM) Goodfellow et al. (2014) and momentum iterative fast gradient sign method (MIFGSM) Dong et al. (2018).

For the hyper-parameters of these baselines, we use the default setup of foolbox Rauber et al. (2017). We set the max perturbation relative to the [0,1] range in the basic experiments. Besides, we set the number of iterations to 10 for MIFGSM and BIM.

Figure 3: Adversarial examples generated with different techniques.
Figure 4: Pipeline and examples of exploring bias-field-sensitive regions. A subject model, i.e., ResNet50, is employed to generate adversarial bias field examples for 240 X-ray images, and we then use Eq. (8) to produce the interpretable map for each image (i.e., the images in the second row, where the maps are blended with the raw X-ray images for better understanding). Finally, we calculate an average map over all interpretable maps and blend it with the raw images (i.e., the images in the third row).
Figure 5: Effects of the multivariate polynomial model with different numbers of control points. The first column shows the size of the control-point grid. The following columns show the bias fields generated by iteratively changing the positions of the control points.
Figure 6: Effects of the multivariate polynomial model with different degree settings in Eq. (4).
Crafted from                 ResNet50                                     Dense121                                     MobileNet
Attacked model & BRISQUE     MobileNet   Dense121   ResNet50   BRISQUE    ResNet50   MobileNet   Dense121   BRISQUE    ResNet50   Dense121   MobileNet   BRISQUE
BIM 0.36 0 100 30.0249 0.54 0.36 100 29.6599 0 0 100 29.9947
C&W 0.36 0 100 30.1128 1.08 0.72 100 29.6455 0 0 100 30.051
SaliencyMap 1.08 0.18 100 28.7108 2.53 1.26 100 28.4046 0.72 0.18 100 30.8351
FGSM 0 0.18 67.8 67.0028 0.72 0.72 29.38 28.5753 0 0 30.09 28.5404
MIFGSM 0.36 0 100 30.0578 0.54 0.36 94.34 29.6094 0 0 100 30.0134
AdvSBF (Ours) 7.57 14.05 38.69 28.5703 7.78 5.95 34.49 28.9535 15.19 19.53 38.92 33.2062
Table 1: Adversarial comparison results on the chest X-ray dataset with five attack baselines and our method. It contains the success rates (%) of transfer & whitebox adversarial attacks on three normally trained models: ResNet50, Dense121, and MobileNet. For each group of four columns, the whitebox attack results are shown in the third column, the first two columns display the transfer attack results, and the last column shows the BRISQUE score.

4.2 Comparison with Baseline Methods

For our method, we set the size of the control-point grid and the other hyper-parameters to 16×16, 10, and 1, respectively. Table 1 shows the quantitative results of our method and the baseline methods under different settings. Specifically, we conduct two different attacks, i.e., the white-box attack and the transfer attack. The white-box attack attacks the target DNN directly, while the transfer attack attacks the target DNN with adversarial examples generated from other models. For example, for the transfer attack in Table 1, the attack is performed on the DNNs listed in the 'Crafted from' row, and the generated adversarial examples are used to attack the DNNs listed in the first two columns of each group.

As we can see, for the white-box attack (i.e., the third column for each model), the success rate of our method is lower than that of the existing baselines. For example, on ResNet50, our method achieves a 38.69% success rate while most of the baselines achieve a 100% success rate. The main reason is that the existing attack techniques can add arbitrary noise to the image, which is not realistic, whereas our method has a strict smoothness constraint such that the generated adversarial examples look more realistic. Fig. 3 shows some examples generated by different attacks. The first row shows the original images while the following rows list the corresponding adversarial examples. It is clear that our method generates high-quality adversarial examples that are smooth and realistic. In most cases, the change between the original image and the generated image is imperceptible. In contrast, we can find obvious noise in the adversarial examples generated by the baseline methods; such noise rarely appears in real-world X-rays.

For the transfer attack (i.e., the first two columns), we find that our method achieves a much higher success rate than the others. For example, the attack on ResNet50 achieves 7.57% and 14.05% transfer success rates on MobileNet and DenseNet121, respectively, whereas the best results of the baselines are only 1.08% and 0.18%. This is because existing techniques calculate ad-hoc noise, which may only be effective on the target DNN but not on other models. In contrast, our attack enforces smoothness such that the generated adversarial examples are more realistic. Such adversarial examples are more robust and can reveal the common weaknesses of different DNNs (i.e., a higher success rate of the transfer attack). The results indicate that our method can generate high-quality adversarial examples. We also compare the image quality with the BRISQUE score (i.e., the fourth column); the results show that our method achieves results competitive with the state of the art.

In summary, our method aims to generate high-quality and realistic adversarial examples. To generate such adversarial examples, the attack success rate is naturally lower than that of noise-based adversarial attack techniques.

4.3 Understanding Effects of Bias Field

In this subsection, we aim to explore how the bias field affects DNN-based X-ray recognition. Fong and Vedaldi (2017) proposes a method for understanding DNNs with the adversarial noise attack and generates an interpretable map indicating the classification-sensitive regions of a DNN. Inspired by this idea, we can study which regions in chest X-ray images are sensitive to the bias field and affect X-ray recognition. Specifically, given an adversarial bias field example $\mathbf{X}^{\mathrm{adv}}$ generated by our method and the original image $\mathbf{X}$, we can calculate an interpretable map $\mathbf{m}$ for a DNN $\phi$ by optimizing

$$\arg\min_{\mathbf{m}}\; \phi_{y}\big((1 - \mathbf{m}) \odot \mathbf{X} + \mathbf{m} \odot \mathbf{X}^{\mathrm{adv}}\big) + \lambda_{\mathrm{tv}} \|\mathbf{m}\|_{\mathrm{TV}}, \quad (8)$$

where $\phi_{y}(\cdot)$ denotes the score at the label $y$, i.e., the ground truth label of $\mathbf{X}$, and $\|\cdot\|_{\mathrm{TV}}$ is the total-variation norm. Intuitively, optimizing Eq. (8) finds the region that causes the misclassification. We optimize Eq. (8) via gradient descent for 150 iterations with the remaining hyper-parameters fixed.
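A minimal sketch of this map optimization, under our reconstruction of Eq. (8) and with `model`, `x_orig`, `x_adv`, and the integer class index `label` as hypothetical inputs, is given below; the sigmoid parameterization and Adam optimizer are implementation choices rather than details taken from the original formulation.

```python
import torch
import torch.nn.functional as F

def interpretable_map(model, x_orig, x_adv, label, lam_tv=0.1, iters=150, lr=0.05):
    """Find the mask whose region, when replaced by the adversarial example, drops the true-class score."""
    m = torch.zeros(x_orig.shape[-2:], requires_grad=True)      # unconstrained mask logits
    opt = torch.optim.Adam([m], lr=lr)
    for _ in range(iters):
        mask = torch.sigmoid(m)                                  # per-pixel mask in (0, 1)
        blended = (1 - mask) * x_orig + mask * x_adv             # apply the attack only inside the mask
        score = F.softmax(model(blended), dim=1)[:, label]       # score of the ground-truth class
        tv = (mask[1:, :] - mask[:-1, :]).abs().sum() + \
             (mask[:, 1:] - mask[:, :-1]).abs().sum()            # total-variation regularizer
        loss = score.sum() + lam_tv * tv                         # cf. Eq. (8)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.sigmoid(m).detach()                             # bias-field-sensitive regions
```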

With Eq. (8), given a pre-trained model $\phi$ and a dataset containing successfully attacked X-ray images, we can calculate a map for each X-ray image and then average all interpretable maps to show the statistical regions that are sensitive to the bias field. For example, we adopt ResNet50 as the subject model and construct a dataset of 240 attacked X-ray images that fool ResNet50 successfully. Then, we calculate the interpretable maps for all images (e.g., the second row in Fig. 4) and average them, obtaining a statistical mean map (e.g., the left image shown in Fig. 4). According to the visualization results, we observe the following: ❶ Our method helps identify the bias-field-sensitive regions in each attacked X-ray image, and we observe that these regions are related to the organ positions. This demonstrates that the effect of the bias field on the DNN stems from intensity variations around organs. ❷ According to the statistical mean map, the bias-field-sensitive regions are mainly located at the top and bottom positions across all attacked images, suggesting that future DNN designs should consider the spatial variations within X-ray images. We observe similar results on other DNNs (please find more results in the supplementary material), hinting that these are common phenomena in DNN-based X-ray recognition and demonstrating the potential applications of this work.

4.4 Effects of Hyper-parameters

Crafted from                                 ResNet50                                     Dense121                                     MobileNet
(Control points, ignored lowest degrees)     MobileNet   Dense121   ResNet50   BRISQUE    ResNet50   MobileNet   Dense121   BRISQUE    ResNet50   Dense121   MobileNet   BRISQUE
(4,4), 0 10.84 15.33 37.97 32.4873 14.65 8.29 31.39 31.331 21.52 20.44 35.68 34.9368
(8,8), 0 9.91 14.05 37.79 32.5778 13.2 6.49 31.57 31.3609 21.7 20.26 35.68 34.0957
(12,12), 0 9.73 14.23 37.61 32.097 12.84 6.49 31.2 31.9176 21.7 20.44 35.86 34.3194
(16,16), 0 10.81 14.42 38.34 32.3661 13.56 6.85 31.02 31.4455 21.34 20.26 36.04 34.0944
(16,16), 1 11.35 13.5 36.89 31.3312 14.65 9.37 32.12 30.6853 17 19.34 32.79 31.7842
(16,16), 2 8.11 7.85 29.48 29.0977 12.84 8.83 26.09 30.0223 12.12 11.86 26.85 29.5885
(16,16), 3 4.15 2.19 18.81 28.606 4.7 4.68 16.24 29.0152 4.7 3.47 15.32 29.2909
Table 2: Adversarial comparison results on the chest X-ray dataset with different hyper-parameter setups of our method. It contains the success rates (%) of transfer & whitebox adversarial attacks. For each model, the first two columns display the transfer (blackbox) attack results, the third column shows the whitebox attack results, and the last column shows the BRISQUE score.

We also evaluate the effects of the hyper-parameters in our attack, i.e., the TPS control points $\theta$ and the degree setting in Eq. (4). Specifically, we change the TPS transformation by changing the number of control points, and we select different control-point grid sizes to conduct the attack. For the degree setting, we fix the degree $K$ as 10 and change the number of ignored lowest degrees, i.e., we drop the lowest-degree terms of the multivariate polynomial model when generating the bias field.

Table 2 shows the results with different configurations. In the first four rows of Table 2, we fix the number of ignored lowest degrees as 0 and change the control-point grid size to 4×4, 8×8, 12×12, and 16×16, respectively. As we can see, there is no clear difference in the attack success rate when the grid size varies. We conjecture that the attack can easily reach its upper bound in terms of the success rate under different grid sizes. Figure 5 shows how the bias field changes with different grid sizes over multiple iterations. Intuitively, when the grid is coarser, larger parts of the image are adjusted in each iteration and the image may become less smooth. When the grid becomes finer, there are more cells, which provide more fine-grained changes, so the generated image can be smoother.

Then we fix the grid size as 16×16 and change the number of ignored lowest degrees to 0, 1, 2, and 3 (the last four rows of Table 2). As we can see, as more lowest degrees are ignored, the success rate of our method decreases and the BRISQUE score also decreases. This is reasonable, as ignoring more low-degree terms in Eq. (4) reduces the space of the manipulation, resulting in higher image quality and a lower attack success rate. The visualization results are shown in Fig. 6: when more lowest degrees are ignored, the bias field samples tend to be less smooth.

5 Conclusions

Deep learning has been used in chest X-ray image recognition for the diagnosis of lung diseases (e.g., COVID-19), so it is especially important to ensure the robustness of the DNNs in this scenario. To tackle this problem, this paper proposed a new adversarial bias field attack, which aims to generate more realistic adversarial examples by applying smooth bias field perturbations instead of additive noise. We demonstrated the effectiveness of our attack on widely used DNNs. The results show that our method can generate high-quality adversarial examples that achieve a high transfer attack success rate. The generated realistic images can reveal issues of the DNNs, which calls for attention to the robustness enhancement of deep learning-based healthcare systems.

In the future, we will extend the adversarial bias field attack to other computer vision tasks, e.g., natural image classification Guo et al. (2020b), face recognition Wang et al. (2020), visual object tracking Guo et al. (2020c, a, 2017a, 2017b); Zhou et al. (2017), etc., and also use it in tandem with other attack modalities that are not based on additive noise, such as Gao et al. (2020); Cheng et al. (2020b); Zhai et al. (2020). In addition, we can regard our adversarial bias field as a new kind of mutation for DNN testing Xie et al. (2019a); Ma et al. (2018b); Du et al. (2019); Xie et al. (2019b); Ma et al. (2018a, 2019).

References

  • P. Afshar, S. Heidarian, F. Naderkhani, A. Oikonomou, K. N. Plataniotis, and A. Mohammadi (2020) Covid-caps: a capsule network-based framework for identification of covid-19 cases from x-ray images. arXiv preprint arXiv:2004.02696. Cited by: §2.1.
  • M. N. Ahmed, S. M. Yamany, N. Mohamed, A. A. Farag, and T. Moriarty (2002) A modified fuzzy c-means algorithm for bias field estimation and segmentation of mri data. IEEE transactions on medical imaging 21 (3), pp. 193–199. Cited by: §1.
  • B. Biggio, I. Corona, D. Maiorca, B. Nelson, N. Šrndić, P. Laskov, G. Giacinto, and F. Roli (2013) Evasion attacks against machine learning at test time. In Joint European conference on machine learning and knowledge discovery in databases, pp. 387–402. Cited by: §2.2.
  • W. W. Brey and P. A. Narayana (1988) Correction for intensity falloff in surface coil magnetic resonance imaging. Medical Physics 15 (2), pp. 241–245. Cited by: §2.1.
  • N. Carlini and D. Wagner (2017) Towards evaluating the robustness of neural networks. In 2017 ieee symposium on security and privacy (sp), pp. 39–57. Cited by: §2.2, §4.1.4.
  • Y. Cheng, Q. Guo, F. Juefei-Xu, X. Xie, S. Lin, W. Lin, W. Feng, and Y. Liu (2020a) Pasadena: perceptually aware and stealthy adversarial denoise attack. arXiv preprint arXiv:2007.07097. Cited by: §2.2.
  • Y. Cheng, F. Juefei-Xu, Q. Guo, H. Fu, X. Xie, S. Lin, W. Lin, and Y. Liu (2020b) Adversarial Exposure Attack on Diabetic Retinopathy Imagery. arXiv preprint arXiv. Cited by: §5.
  • M. M. Cisse, Y. Adi, N. Neverova, and J. Keshet (2017) Houdini: fooling deep structured visual and speech recognition models with adversarial examples. In Advances in neural information processing systems, pp. 6977–6987. Cited by: §2.2.
  • Y. Dong, F. Liao, T. Pang, H. Su, J. Zhu, X. Hu, and J. Li (2018) Boosting adversarial attacks with momentum. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 9185–9193. Cited by: §4.1.4.
  • X. Du, X. Xie, Y. Li, L. Ma, Y. Liu, and J. Zhao (2019) Deepstellar: model-based quantitative analysis of stateful deep learning systems. In Proceedings of the 2019 27th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, pp. 477–487. Cited by: §5.
  • A. Fan, W. M. Wells, J. W. Fisher, M. Cetin, S. Haker, R. Mulkern, C. Tempany, and A. S. Willsky (2003) A unified variational approach to denoising and bias correction in mr. In Biennial international conference on information processing in medical imaging, pp. 148–159. Cited by: §2.1.
  • S. G. Finlayson, H. W. Chung, I. S. Kohane, and A. L. Beam (2018) Adversarial attacks against medical deep learning systems. arXiv preprint arXiv:1804.05296. Cited by: §2.3.
  • R. C. Fong and A. Vedaldi (2017) Interpretable explanations of black boxes by meaningful perturbation. In ICCV, Vol. , pp. 3449–3457. Cited by: §4.3.
  • M. Fredrikson, S. Jha, and T. Ristenpart (2015) Model inversion attacks that exploit confidence information and basic countermeasures. In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, pp. 1322–1333. Cited by: §2.2.
  • R. Gao, Q. Guo, F. Juefei-Xu, H. Yu, X. Ren, W. Feng, and S. Wang (2020) Making Images Undiscoverable from Co-Saliency Detection. arXiv preprint arXiv. Cited by: §5.
  • I. J. Goodfellow, J. Shlens, and C. Szegedy (2014) Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572. Cited by: §2.2, §2.2, §4.1.4.
  • Q. Guan, Y. Huang, Z. Zhong, Z. Zheng, L. Zheng, and Y. Yang (2018) Diagnose like a radiologist: attention guided convolutional neural network for thorax disease classification. CoRR abs/1801.09927. Cited by: §2.1.
  • Q. Guan and Y. Huang (2020) Multi-label chest x-ray image classification via category-wise residual attention learning. Pattern Recognition Letters 130, pp. 259 – 266. Note: Image/Video Understanding and Analysis (IUVA) External Links: ISSN 0167-8655 Cited by: §2.1.
  • S. Gündel, S. Grbic, B. Georgescu, S. K. Zhou, L. Ritschl, A. Meier, and D. Comaniciu (2018) Learning to recognize abnormalities in chest x-rays with location-aware dense networks. CoRR abs/1803.04565. Cited by: §2.1.
  • Q. Guo, W. Feng, C. Zhou, R. Huang, L. Wan, and S. Wang (2017a) Learning dynamic siamese network for visual object tracking. In Proceedings of the IEEE international conference on computer vision, pp. 1763–1771. Cited by: §5.
  • Q. Guo, W. Feng, C. Zhou, C. Pun, and B. Wu (2017b) Structure-regularized compressive tracking with online data-driven sampling. IEEE Transactions on Image Processing 26 (12), pp. 5692–5705. Cited by: §5.
  • Q. Guo, R. Han, W. Feng, Z. Chen, and L. Wan (2020a) Selective spatial regularization by reinforcement learned decision making for object tracking. IEEE Transactions on Image Processing 29, pp. 2999–3013. Cited by: §5.
  • Q. Guo, F. Juefei-Xu, X. Xie, L. Ma, J. Wang, W. Feng, and Y. Liu (2020b) ABBA: saliency-regularized motion-based adversarial blur attack. arXiv preprint arXiv:2002.03500. Cited by: §2.2, §5.
  • Q. Guo, S. Sun, F. Dong, W. Feng, B. Z. Gao, and S. Ma (2017c) Frequency-tuned acm for biomedical image segmentation. In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 821–825. Cited by: §1.
  • Q. Guo, S. Sun, X. Ren, F. Dong, B. Z. Gao, and W. Feng (2018) Frequency-tuned active contour model. Neurocomputing 275, pp. 2307–2316. Cited by: §1.
  • Q. Guo, X. Xie, F. Juefei-Xu, L. Ma, Z. Li, W. Xue, W. Feng, and Y. Liu (2020c) SPARK: spatial-aware online incremental attack against visual tracking. In Proceedings of the European Conference on Computer Vision (ECCV), Cited by: §2.2, §5.
  • J. Juntu, J. Sijbers, D. Van Dyck, and J. Gielen (2005) Bias field correction for mri images. In Computer Recognition Systems, pp. 543–551. Cited by: §2.1.
  • A. Kurakin, I. Goodfellow, and S. Bengio (2016) Adversarial machine learning at scale. arXiv preprint arXiv:1611.01236. Cited by: §2.2, §4.1.4.
  • C. Li, R. Huang, Z. Ding, C. Gatenby, D. Metaxas, and J. Gore (2008) A variational level set approach to segmentation and bias correction of images with intensity inhomogeneity. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 1083–1091. Cited by: §1.
  • X. Li, D. Pan, and D. Zhu (2020) Defending against adversarial attacks on medical imaging ai system, classification or detection?. arXiv preprint arXiv:2006.13555. Cited by: §2.3.
  • X. Li and D. Zhu (2020) Covid-xpert: an ai powered population screening of covid-19 cases using chest radiography images. arXiv preprint arXiv:2004.03042. Cited by: §2.1.
  • Z. Li, C. Wang, M. Han, Y. Xue, W. Wei, L. Li, and F. Li (2017) Thoracic disease identification and localization with limited supervision. CoRR abs/1711.06373. Cited by: §2.1.
  • L. Ma, F. Juefei-Xu, J. Sun, C. Chen, T. Su, F. Zhang, M. Xue, B. Li, L. Li, Y. Liu, J. Zhao, and Y. Wang (2018a) DeepGauge: Multi-Granularity Testing Criteria for Deep Learning Systems. In The 33rd IEEE/ACM International Conference on Automated Software Engineering (ASE), Cited by: §5.
  • L. Ma, F. Juefei-Xu, M. Xue, B. Li, L. Li, Y. Liu, and J. Zhao (2019) DeepCT: Tomographic Combinatorial Testing for Deep Learning Systems. Proceedings of the IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER). Cited by: §5.
  • L. Ma, F. Zhang, J. Sun, M. Xue, B. Li, F. Juefei-Xu, C. Xie, L. Li, Y. Liu, J. Zhao, and Y. Wang (2018b) DeepMutation: Mutation Testing of Deep Learning Systems. In The 29th IEEE International Symposium on Software Reliability Engineering (ISSRE), Cited by: §5.
  • X. Ma, Y. Niu, L. Gu, Y. Wang, Y. Zhao, J. Bailey, and F. Lu (2020) Understanding adversarial attacks on deep learning based medical image analysis systems. Pattern Recognition, pp. 107332. Cited by: §2.3.
  • A. Mittal, A. K. Moorthy, and A. C. Bovik (2012) No-reference image quality assessment in the spatial domain. IEEE Transactions on image processing 21 (12), pp. 4695–4708. Cited by: §4.1.3.
  • U. Ozbulak, A. Van Messem, and W. De Neve (2019) Impact of adversarial examples on deep learning models for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 300–308. Cited by: §2.3.
  • N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. B. Celik, and A. Swami (2017) Practical black-box attacks against machine learning. In Proceedings of the 2017 ACM on Asia conference on computer and communications security, pp. 506–519. Cited by: §2.2.
  • N. Papernot, P. McDaniel, S. Jha, M. Fredrikson, Z. B. Celik, and A. Swami (2016) The limitations of deep learning in adversarial settings. In 2016 IEEE European symposium on security and privacy (EuroS&P), pp. 372–387. Cited by: §4.1.4.
  • M. Paschali, S. Conjeti, F. Navarro, and N. Navab (2018) Generalizability vs. robustness: investigating medical imaging networks using adversarial examples. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 493–501. Cited by: §2.3.
  • P. Rajpurkar, J. Irvin, K. Zhu, B. Yang, H. Mehta, T. Duan, D. Y. Ding, A. Bagul, C. Langlotz, K. S. Shpanskaya, M. P. Lungren, and A. Y. Ng (2017) CheXNet: radiologist-level pneumonia detection on chest x-rays with deep learning. CoRR abs/1711.05225. Cited by: §2.1.
  • J. Rauber, W. Brendel, and M. Bethge (2017) Foolbox: a python toolbox to benchmark the robustness of machine learning models. arXiv preprint arXiv:1707.04131. Cited by: §4.1.4.
  • A. Shafahi, W. R. Huang, M. Najibi, O. Suciu, C. Studer, T. Dumitras, and T. Goldstein (2018) Poison frogs! targeted clean-label poisoning attacks on neural networks. In Advances in Neural Information Processing Systems, pp. 6103–6113. Cited by: §2.2, §2.2.
  • C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus (2013) Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199. Cited by: §2.2.
  • E. Tartaglione, C. A. Barbano, C. Berzovini, M. Calandri, and M. Grangetto (2020) Unveiling covid-19 from chest x-ray with deep learning: a hurdles race with small data. arXiv preprint arXiv:2004.05405. Cited by: §2.1.
  • D. L. Thomas, E. De Vita, R. Deichmann, R. Turner, and R. J. Ordidge (2005) 3D mdeft imaging of the human brain at 4.7 t with reduced sensitivity to radiofrequency inhomogeneity. Magnetic Resonance in Medicine: An Official Journal of the International Society for Magnetic Resonance in Medicine 53 (6), pp. 1452–1458. Cited by: §2.1.
  • U. Vovk, F. Pernus, and B. Likar (2007) A review of methods for correction of intensity inhomogeneity in mri. IEEE transactions on medical imaging 26 (3), pp. 405–421. Cited by: §1.
  • L. Wang and A. Wong (2020) COVID-net: a tailored deep convolutional neural network design for detection of covid-19 cases from chest x-ray images. arXiv preprint arXiv:2003.09871. Cited by: §2.1.
  • R. Wang, F. Juefei-Xu, X. Xie, L. Ma, Y. Huang, and Y. Liu (2020) Amora: black-box adversarial morphing attack. In ACM Multimedia Conference (ACMMM), Cited by: §2.2, §5.
  • X. Wang, Y. Peng, L. Lu, Z. Lu, M. Bagheri, and R. M. Summers (2017) ChestX-ray8: hospital-scale chest x-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases. CoRR abs/1705.02315. Cited by: §2.1, §2.1.
  • C. Xiao, J. Zhu, B. Li, W. He, M. Liu, and D. Song (2018) Spatially transformed adversarial examples. arXiv preprint arXiv:1801.02612. Cited by: §2.2.
  • X. Xie, L. Ma, F. Juefei-Xu, M. Xue, H. Chen, Y. Liu, J. Zhao, B. Li, J. Yin, and S. See (2019a) DeepHunter: A Coverage-Guided Fuzz Testing Framework for Deep Neural Networks. In ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA), Cited by: §5.
  • X. Xie, L. Ma, H. Wang, Y. Li, Y. Liu, and X. Li (2019b) DiffChaser: detecting disagreements for deep neural networks.. In IJCAI, pp. 5772–5778. Cited by: §5.
  • C. Yang, Q. Wu, H. Li, and Y. Chen (2017) Generative poisoning attack method against neural networks. arXiv preprint arXiv:1703.01340. Cited by: §2.2.
  • L. Yao, E. Poblenz, D. Dagunts, B. Covington, D. Bernard, and K. Lyman (2017) Learning to diagnose from scratch by exploiting dependencies among labels. CoRR abs/1710.10501. Cited by: §2.1.
  • L. Zhai, F. Juefei-Xu, Q. Guo, X. Xie, L. Ma, W. Feng, S. Qin, and Y. Liu (2020) It’s Raining Cats or Dogs? Adversarial Rain Attack on DNN Perception. arXiv preprint arXiv. Cited by: §5.
  • Y. Zheng and J. C. Gee (2010) Estimation of image bias field with sparsity constraints. In 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 255–262. Cited by: §1.
  • C. Zhou, Q. Guo, L. Wan, and W. Feng (2017) Selective object and context tracking. In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1947–1951. Cited by: §5.