Medical image diagnosis and recognition are beginning to be automated by DNNs, with the clear advantage of highly efficient diagnosis of disease outcomes. However, unlike human experts, such DNN-based automated methods still have caveats. For example, in the presence of image-level degradations introduced during the image acquisition process, recognition accuracy can drop dramatically. Such DNN-based medical image recognition systems can even become entirely vulnerable when maliciously attacked by a financially incentivized adversary or abuser.
There are mainly two types of image perturbations or degradations in medical imagery: (1) image noise and (2) the image bias field. Image noise is primarily caused by the image sensor, while the bias field is caused by spatial variations of radiation Vovk et al. (2007), which are common across medical imaging modalities, from magnetic resonance imaging (MRI) Ahmed et al. (2002) and computed tomography (CT) Li et al. (2008); Guo et al. (2017c, 2018) to X-ray imaging. The bias field appears as intensity inhomogeneity in MRI, CT, or X-ray images. In consumer digital imaging, the bias field shows up as illumination changes or a vignetting effect.
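To make the degradation concrete, the following sketch (our own illustration, not the exact model used later in the paper) corrupts a synthetic image with a smooth multiplicative field built from a low-degree polynomial over pixel coordinates:

```python
import numpy as np

def polynomial_bias_field(h, w, coeffs):
    """Evaluate a low-degree 2-D polynomial over normalized pixel
    coordinates in [-1, 1], producing a spatially smooth field.
    `coeffs` maps (i, j) exponent pairs to coefficients a_ij."""
    ys, xs = np.meshgrid(np.linspace(-1, 1, h), np.linspace(-1, 1, w),
                         indexing="ij")
    field = np.zeros((h, w))
    for (i, j), a in coeffs.items():   # term: a * x^i * y^j
        field += a * xs**i * ys**j
    return field

# A flat "image" corrupted multiplicatively by a smooth bias field:
clean = np.full((64, 64), 0.5)
bias = 1.0 + 0.3 * polynomial_bias_field(64, 64, {(1, 0): 0.5, (0, 1): 0.5})
observed = clean * bias                # intensity inhomogeneity
```

The key property is smoothness: neighboring pixels of `observed` differ only slightly, unlike additive sensor noise.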
In this work, we reveal the vulnerability caused by the image bias field. To the best of our knowledge, this is the first attempt to adversarially perturb the bias field in order to attack DNN-based X-ray recognition. In contrast to additive noise-perturbation attacks on DNN-based recognition systems, an attack on the bias field is multiplicative in nature Zheng and Gee (2010), which is fundamentally different from a noise attack. More importantly, to make the bias field attack realistic and imperceptible, a successful attack must maintain the smoothness property of the bias field. This is genuinely more challenging, because local smoothness usually conflicts with high attack success rates.
To overcome this challenge, we capitalize on this degradation specific to X-ray imagery and initiate adversarial attacks based on imperceptible modifications of the bias field itself. Specifically, we propose an adversarial-smooth bias field generator that can locally tune the bias field under joint smoothness and adversarial constraints by tapping into the bias field generation process through a multivariate polynomial model. As a result, the adversarially perturbed bias field applied to an X-ray image can not only fool a DNN-based recognition system effectively, but also retain a high level of realism. We validate our method on several chest X-ray classification datasets with state-of-the-art DNNs such as ResNet, DenseNet, and MobileNet, showing superior performance in terms of both image realism and attack success rates. A careful investigation into which bias field regions contribute most to the adversarial nature of the attack can lead to a better interpretation and understanding of DNN-based recognition systems and their vulnerability, which we believe is of utmost importance. The ultimate goal of this work is to reveal that the bias field does pose a potential threat to DNN-based automated recognition systems, which can benefit the development of bias-field-robust automated diagnosis systems in the future.
2 Related Work
In this section, we summarize related work on X-ray image recognition, noise-based adversarial attacks, and adversarial attacks on medical imagery.
2.1 X-Ray Imagery Recognition
X-ray radiography is widely used in the medical field for the diagnosis and treatment of diseases. In recent years, many public X-ray image datasets have been made available, leading to a wide literature examining data mining and deep learning techniques on such datasets.
One of the largest datasets is the ChestX-ray14 dataset from the National Institutes of Health (NIH) Clinical Center, which contains 108,948 frontal-view X-ray images covering 14 thoracic diseases, along with other non-image features. Together with the dataset, Wang et al. (2017) evaluates the performance of four classic convolutional neural networks (CNNs), namely AlexNet, ResNet-50, VGGNet, and GoogLeNet, on multi-label disease classification, establishing an initial baseline with an average area under the ROC curve (AUC) of 0.745 over all 14 diseases.
Inspired by this work, many researchers started utilising the power of deep neural networks for chest X-ray (CXR) classification. Li et al. (2017) presents a framework that jointly performs disease classification and localisation; using bounding boxes to predict lesion locations, classification performance improves to an average AUC of 0.755. Yao et al. (2017) proposes a CNN backbone based on a DenseNet variant, combined with a long short-term memory network (LSTM) to exploit statistical dependencies between labels, achieving an average AUC of 0.761. Guan and Huang (2020) explores a category-wise residual attention learning (CRAL) framework, composed of feature embedding and attention learning modules; different attention weights enhance or restrain different spatial feature regions, yielding an average AUC of 0.816. Rajpurkar et al. (2017) proposes transfer learning by fine-tuning a modified DenseNet, resulting in CheXNet, a 121-layer CNN, which further raises the average AUC to 0.842. Guan et al. (2018) presents a three-branch attention-guided CNN that focuses on local cues from (small) localized lesion areas; the combination of local cues and global features achieves an average AUC of 0.871. The state-of-the-art results using the official split released by Wang et al. (2017) are held by Gündel et al. (2018), with an average AUC of 0.817. That paper argues that when a random split is used, the same patient is likely to appear in both the train and test sets, and this overlap affects performance comparisons. The proposed method is a location-aware DenseNet-121, trained on the ChestX-ray14 and PLCO datasets, which incorporates spatial information from high-resolution images.
The power of deep learning on CXR has also been explored for the detection of COVID-19, motivated by the need for quick, effective, and convenient screening. Studies showed that some COVID-19 patients display abnormalities in their CXR images. Wang and Wong (2020) releases an open-access benchmark dataset, COVIDx, along with COVID-Net, a deep CNN designed to detect COVID-19 from CXR images, achieving a sensitivity of 97.3% and a positive predictive value of 99.7%. Many studies have also leveraged variants of this dataset and network for COVID-19 prediction Afshar et al. (2020); Li and Zhu (2020); Tartaglione et al. (2020).
Despite the strong performance of DNNs, and the care taken to address data irregularities such as class imbalance, the effect of medical image degradation on disease identification has rarely been investigated. For example, the bias field, also referred to as intensity inhomogeneity, is a low-frequency, smooth intensity signal across images caused by imperfections in image acquisition. The bias field can adversely affect quantitative image analysis Juntu et al. (2005). Many inhomogeneity correction strategies have hence been proposed in the literature Brey and Narayana (1988); Fan et al. (2003); Thomas et al. (2005).
However, the possible detrimental effect of the bias field on disease identification, localization, or segmentation is rarely explored in the literature, so the proposed DNNs may not be robust to this inherent degradation. To the best of the authors' knowledge, this paper is the first work that looks at the effect of the bias field from the viewpoint of adversarial attack.
2.2 General Adversarial Attack
Despite the strong performance of various DNNs deployed to solve recognition problems in image, speech, and natural language processing applications, many studies have shown that DNNs are susceptible to adversarial attacks Szegedy et al. (2013); Goodfellow et al. (2014). A large body of literature proposes different adversarial attacks Guo et al. (2020b, c); Wang et al. (2020); Cheng et al. (2020a), and they can generally be classified into training-stage and testing-stage attacks.
In the training stage, attackers can carry out data poisoning, which inserts adversarial examples into the training dataset to degrade the model's performance. For example, the poison frog attack inserts crafted images into the dataset to ensure that a wrong classification is given to a target test sample Shafahi et al. (2018). The use of direct gradient methods to generate adversarial images against neural networks has also been explored Yang et al. (2017).
In the testing stage, attackers can carry out either white-box or black-box attacks. In white-box attacks, attackers are assumed to have access to the target classifier. Biggio et al. (2013) focuses on optimising a discriminant function to mislead a three-layer fully connected neural network. Shafahi et al. (2018) suggests that certain imperceptible perturbations can cause misclassification of an image, and that this effect transfers to other networks trained on similar data, which then misclassify the same input. The fast gradient sign method (FGSM) is proposed by Goodfellow et al. (2014); it involves only one back-propagation step when calculating the gradient of the cost function, allowing fast adversarial example generation. Kurakin et al. (2016) proposes an iterative version of FGSM, known as the basic iterative method (BIM), which heuristically searches for examples that are most likely to fool the classifier. Given the literature defending against FGSM-style methods, Carlini and Wagner (2017) proposes using a margin loss instead of an entropy loss during attacks. Cisse et al. (2017) proposes an approach named HOUDINI, which generates adversarial examples to fool visual and speech recognition models. Instead of altering pixel values, spatially transformed attacks apply spatial transformations such as translation or rotation to images Xiao et al. (2018).
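As a minimal illustration of the one-step FGSM idea described above, the sketch below attacks a logistic-regression "classifier" (a stand-in for a DNN, chosen so the input gradient has a closed form; the weights and example are our own assumptions):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """One-step FGSM: move x by eps in the sign direction of the input
    gradient of the cross-entropy loss, which for logistic regression
    is (p - y) * w, then clip back to the valid [0, 1] range."""
    p = sigmoid(w @ x + b)
    grad = (p - y) * w
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

w = np.array([2.0, -1.5, 0.5])
b = 0.0
x = np.array([0.8, 0.2, 0.5])          # model predicts class 1 (p > 0.5)
x_adv = fgsm(x, y=1.0, w=w, b=b, eps=0.5)
```

A single gradient computation suffices, which is exactly what makes FGSM fast compared to iterative methods such as BIM.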
In black-box attacks, attackers have no access to the classifier's parameters or training set. Papernot et al. (2017) proposes exploiting the transferability of adversarial examples: a model similar to the target classifier is first trained, and adversarial examples generated to attack the trained model are then used against the actual classifier. Fredrikson et al. (2015) explores exploiting knowledge of the target classifier's confidence values as predictions are made.
2.3 Adversarial Attack (and Defense) on Medical Imagery
Existing literature has looked into adversarial attacks against deep learning systems for medical imagery. Finlayson et al. (2018) shows that both black-box and white-box PGD attacks and adversarial patch attacks can degrade the performance of classifiers modelled after state-of-the-art systems on fundoscopy, chest X-ray, and dermoscopy images, respectively. Similarly, Paschali et al. (2018) shows that small perturbations can cause classification performance drops across state-of-the-art networks such as Inception and UNet, with accuracy dropping from above 87% on normal medical images to almost 0% on adversarial examples. By producing a crafted mask, an adaptive segmentation mask attack (ASMA) is proposed to fool DNN models for medical image segmentation Ozbulak et al. (2019).
In medical adversarial defence, Li et al. (2020) proposes unsupervised detection of adversarial samples, in which unsupervised adversarial detection (UAD) is complemented with semi-supervised adversarial training (SSAT); the proposed model demonstrates superior performance in medical defence compared with other techniques. Ma et al. (2020) further argues that medical DNNs are more vulnerable to attacks because medical images have high-gradient regions sensitive to perturbations and because state-of-the-art DNNs are over-parameterized. That work then proposes an adversarial detector specifically designed for medical image attacks, achieving over 98% detection AUC.
However, very little work has leveraged the inherent characteristics of the targeted medical imagery to conduct adversarial attacks. For example, the common noise degradations used for general adversarial attacks are rarely found in X-ray imagery. Hence, in this work, we capitalize on the degradation specific to X-ray imagery, the bias field, and initiate adversarial attacks based on imperceptible modifications of the bias field itself.
3.1 Adversarial Bias Field Attack and Challenges
Given an X-ray image $\mathbf{I}$, we can assume it is generated by applying a bias field $\mathbf{B}$ to a clean version $\mathbf{I}_c$, with the widely used imaging model
$$\mathbf{I} = \mathbf{I}_c \odot \mathbf{B}, \qquad (1)$$
where $\odot$ denotes element-wise multiplication.
Under the automated diagnosis task, where a DNN $\phi$ is used to recognize the category (i.e., normal or abnormal) of $\mathbf{I}$, it is necessary to explore a totally new task, i.e., the adversarial bias field attack, which aims to fool the DNN by calculating an adversarial bias field $\mathbf{B}$. With it, we can study the influence of the bias field as well as the potential threat of utilizing it to fool automated diagnosis.
A simple way is to take the logarithm of Eq. 1 and transform the multiplication into an additive operation
$$\tilde{\mathbf{I}} = \tilde{\mathbf{I}}_c + \tilde{\mathbf{B}}, \qquad (2)$$
where we use '$\sim$' to represent the logarithm of a variable. With Eq. 2, it seems that all existing additive adversarial attacks, i.e., FGSM, BIM, MIFGSM, DIM, and TIMIFGSM, could be used for the new attack. For example, we can calculate $\tilde{\mathbf{B}}$ to realize the attack by solving
$$\arg\max_{\tilde{\mathbf{B}}}\; J(\phi(\tilde{\mathbf{I}}_c + \tilde{\mathbf{B}}), y), \qquad (3)$$
where $J(\cdot)$ is the loss function for classification, e.g., the cross-entropy loss, and $y$ denotes the ground-truth label of $\mathbf{I}$. Nevertheless, we argue that such a solution cannot generate a real 'bias field', since the optimized $\tilde{\mathbf{B}}$ violates the basic property of a bias field, i.e., spatially smooth variation resulting in intensity inhomogeneity. For example, as shown in Fig. 2, when we optimize Eq. 3 to produce a bias field, we can attack ResNet50 successfully, but the resulting bias field is noise-like and far from its real-world appearance.
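The multiplicative-to-additive equivalence behind the log transform can be checked numerically with a toy example (ours, not from the paper); note that an unconstrained per-pixel field like `B` below is exactly the noise-like "bias field" an additive attack would produce:

```python
import numpy as np

rng = np.random.default_rng(0)
I_c = rng.uniform(0.2, 1.0, size=(8, 8))            # clean image (positive)
B = np.exp(rng.normal(0.0, 0.05, size=(8, 8)))      # per-pixel multiplicative field

observed = I_c * B
# In the log domain the multiplicative field becomes additive, so
# additive attacks (FGSM, BIM, ...) apply directly -- but log(B) here
# is independent per pixel, i.e., noise-like rather than smooth.
log_identity_holds = np.allclose(np.log(observed), np.log(I_c) + np.log(B))
```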
As a result, due to the spatial smoothness requirement of the bias field, the adversarial bias field attack poses a totally new challenge to the field of adversarial attack: how to generate an adversarial perturbation that not only achieves a high attack success rate but also maintains spatial smoothness for the realism of the bias field. Since a high attack success rate relies on pixel-wise tunable perturbations, which violate the smoothness requirement, the two constraints contradict each other and make the adversarial bias field attack significantly challenging.
3.2 Adversarial-Smooth Bias Field Attack
To overcome the above challenge, we propose a distortion-aware multivariate polynomial model to represent the bias field, whose inherent properties guarantee the spatial smoothness of the bias field while the distortion helps achieve an effective attack. Then, we define a new objective function for an effective attack by combining the constraints of spatial smoothness and sparsity with the adversarial loss. Finally, we introduce the optimization method and attack algorithm.
3.2.1 Distortion-aware multivariate polynomial model.
We model the bias field as
$$\tilde{\mathbf{B}}(\mathbf{p}_i) = \sum_{m+n \le K} a_{mn}\, (x_i')^{m} (y_i')^{n}, \quad \mathbf{p}_i' = (x_i', y_i') = T_{\theta}(\mathbf{p}_i), \qquad (4)$$
where $T_{\theta}(\cdot)$ represents the distortion transformation and we use the thin plate spline (TPS) transformation with $\theta$ being the control points. We denote $\mathbf{p}_i = (x_i, y_i)$ as the $i$-th pixel with its coordinates, while $\mathbf{p}_i'$ means the pixel has been distorted by the TPS. In addition, $\{a_{mn}\}$ and $K$ are the parameters and degree of the multivariate polynomial model, respectively, and the number of parameters is $(K+1)(K+2)/2$. For convenient representation, we concatenate all $a_{mn}$ and obtain a vector $\mathbf{a}$.
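A minimal sketch of the polynomial bias field model follows. The coefficient ordering, grid size, and the sinusoidal warp (a simple smooth stand-in for the TPS transformation used in the paper) are our own assumptions:

```python
import numpy as np

def bias_field(coeffs, k, h, w, warp=None):
    """Degree-k bivariate polynomial bias field evaluated at (optionally
    warped) pixel coordinates. `coeffs` holds one coefficient per term
    x^i * y^j with i + j <= k."""
    ys, xs = np.meshgrid(np.linspace(0, 1, h), np.linspace(0, 1, w),
                         indexing="ij")
    if warp is not None:                 # stand-in for the TPS distortion
        xs, ys = warp(xs, ys)
    field = np.zeros((h, w))
    idx = 0
    for i in range(k + 1):
        for j in range(k + 1 - i):       # all terms with i + j <= k
            field += coeffs[idx] * xs**i * ys**j
            idx += 1
    return field

k = 3
n_params = (k + 1) * (k + 2) // 2        # (K+1)(K+2)/2 coefficients
coeffs = np.zeros(n_params)
coeffs[0] = 1.0                          # constant term
coeffs[1] = 0.2                          # linear y term
coeffs[4] = 0.1                          # linear x term
warp = lambda x, y: (x + 0.02 * np.sin(2 * np.pi * y), y)
field = bias_field(coeffs, k, 32, 32, warp)
```

Because only low-degree terms are active, the field varies slowly across the image regardless of how the coefficients are tuned, which is what makes the parameterization attractive for a smooth attack.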
3.2.2 Adversarial-smooth objective function.
With Eq. 4, we can tune $\mathbf{a}$ and $\theta$ for the adversarial attack, and the multivariate polynomial model helps preserve the smoothness of the bias field. Intuitively, on the one hand, a lower degree $K$ leads to fewer model parameters and a smoother bias field. On the other hand, the distortion can be locally tuned through $\theta$ and helps achieve an effective attack. The key problem is how to calculate $\mathbf{a}$ and $\theta$ to balance spatial smoothness and adversarial strength. To this end, we define a new objective function to realize the attack
$$\arg\max_{\mathbf{a},\,\theta}\; J(\phi(\tilde{\mathbf{I}}_c + \tilde{\mathbf{B}}(\mathbf{a}, \theta)), y) - \lambda_1 \|\mathbf{a}\|_1 - \lambda_2 \|\theta - \theta_0\|_2^2, \qquad (5)$$
where $\theta_0$ represents the parameters of the identity TPS transformation, i.e., $T_{\theta_0}(\mathbf{p}_i) = \mathbf{p}_i$. The first term tunes $\mathbf{a}$ and $\theta$ to fool a DNN for X-ray recognition. The second term encourages sparsity of $\mathbf{a}$, which keeps the bias field smooth. The final term keeps the TPS transformation from straying far from the identity. Two hyper-parameters, $\lambda_1$ and $\lambda_2$, control the balance between smoothness and adversarial attack.
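The three-term objective can be sketched as a single scalar function (a simplified illustration with our own argument names; the actual classification loss would come from the attacked DNN):

```python
import numpy as np

def adversarial_smooth_loss(cls_loss, a, theta, theta0, lam1, lam2):
    """Value to maximize: the classification loss of the attacked image,
    minus an L1 penalty keeping the polynomial coefficients `a` sparse
    (hence the bias field smooth) and an L2 penalty keeping the TPS
    control points `theta` close to the identity warp `theta0`."""
    return (cls_loss
            - lam1 * np.abs(a).sum()
            - lam2 * np.sum((theta - theta0) ** 2))

# e.g. classification loss 2.0, two active coefficients, identity warp:
val = adversarial_smooth_loss(2.0, np.array([1.0, -1.0]),
                              np.zeros(4), np.zeros(4), 0.1, 1.0)
```

Raising `lam1` or `lam2` trades attack strength for smoothness and warp fidelity, which is the balance the text describes.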
In this section, we conduct comprehensive experiments on a real chest X-ray dataset to validate the effectiveness of our method and discuss how the bias field affects X-ray recognition. We want to answer the following questions: ❶ What are the differences and advantages of the adversarial bias field attack over existing adversarial noise attacks? ❷ How and why can bias fields affect X-ray recognition? ❸ How do the hyper-parameters affect the attack results?
4.1 Setup and Dataset
We carry out our experiments on a chest X-ray dataset about pneumonia, which contains 5,863 X-ray images (see https://www.kaggle.com/paultimothymooney/chest-xray-pneumonia for more details about the dataset). These images were selected from retrospective cohorts of pediatric patients. The dataset is divided into two categories, i.e., pneumonia and normal.
To show the effect of the attack on different neural network models, we fine-tune three pre-trained models on the chest X-ray dataset: ResNet50, MobileNet, and DenseNet121 (Dense121). The accuracy of ResNet50, MobileNet, and DenseNet121 is 88.62%, 88.94%, and 87.82%, respectively.
We use the attack success rate and image quality to evaluate the effectiveness of the bias field attack. The image quality metric is BRISQUE Mittal et al. (2012), an unsupervised image quality assessment method; a higher BRISQUE score indicates poorer image quality.
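The attack success rate is typically computed as the fraction of adversarial examples whose prediction no longer matches the ground truth; a minimal sketch (our own helper, with hypothetical predictions) is:

```python
def attack_success_rate(adv_preds, labels):
    """Fraction of adversarial examples whose predicted class no longer
    matches the ground-truth label (i.e., the attack succeeded)."""
    flipped = sum(1 for p, y in zip(adv_preds, labels) if p != y)
    return flipped / len(labels)

# hypothetical predictions on 4 adversarial examples vs. their labels:
rate = attack_success_rate([1, 0, 1, 1], [1, 1, 0, 0])
```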
We select five adversarial attack methods as our baselines, which include basic iterative method (BIM) Kurakin et al. (2016), Carlini & Wagner L2 method (C&W) Carlini and Wagner (2017), saliency map method (SaliencyMap) Papernot et al. (2016), fast gradient sign method (FGSM) Goodfellow et al. (2014) and momentum iterative fast gradient sign method (MIFGSM) Dong et al. (2018).
For the hyper-parameters of these baselines, we use the default setup of foolbox Rauber et al. (2017). We set the maximum perturbation relative to the [0,1] intensity range in the basic experiments. Besides, we set the number of iterations to 10 for MIFGSM and BIM.
4.2 Comparison with Baseline Methods
For our method, we set the size of the control points and the two other hyper-parameters to 16×16, 10, and 1, respectively. Table 1 shows the quantitative results of our method and the baseline methods under different settings. Specifically, we conduct two different attacks, i.e., the white-box attack and the transfer attack. The white-box attack targets the DNN directly, while the transfer attack attacks the target DNN with adversarial examples generated from other models. For the transfer attack in Table 1, the attack is performed on the DNNs in the first row, and the generated adversarial examples are used to attack the DNNs in the first two columns of the second row.
For the white-box attack (i.e., the third column for each model), the success rate of our method is lower than that of the existing baselines. For example, on ResNet50, our method achieves a 38.69% success rate while most of the baselines achieve 100%. The main reason is that existing attack techniques can add arbitrary noise to the image, which is not realistic, whereas our method enforces a strict smoothness constraint so that the generated adversarial examples look more realistic. Fig. 3 shows examples generated by the different attacks: the first row shows the original images and the following rows list the corresponding adversarial examples. Our method generates high-quality adversarial examples that are smooth and realistic; in most cases, the change from the original image is imperceptible. In contrast, obvious noise is visible in the adversarial examples generated by the baselines. Such noise patterns rarely appear in real-world X-rays.
For the transfer attack (i.e., the first two columns), our method achieves a much higher success rate than the others. For example, the attack on ResNet50 achieves 7.57% and 14.05% transfer success rates on MobileNet and DenseNet121, respectively, while the best baseline results are only 1.08% and 0.18%. This is because existing techniques calculate ad-hoc noise, which may be effective only on the target DNN and not on other models. Our attack considers smoothness, so the generated adversarial examples are more realistic; such examples are more robust and can reveal the common weaknesses of different DNNs (i.e., a higher transfer success rate). The results indicate that our method can generate high-quality adversarial examples. We also compare image quality with the BRISQUE score (i.e., the fourth column); the results show that our method achieves results competitive with the state of the art.
In summary, our method aims to generate high-quality, realistic adversarial examples. To generate such examples, the attack success rate is naturally lower than that of noise-based adversarial attack techniques.
4.3 Understanding Effects of Bias Field
In this subsection, we explore how the bias field affects DNN-based X-ray recognition. Fong and Vedaldi (2017) proposes a method for understanding DNNs via adversarial noise attacks, generating an interpretable map that indicates the classification-sensitive regions of a DNN. Inspired by this idea, we can study which regions in chest X-ray images are sensitive to the bias field and affect X-ray recognition. Specifically, given an adversarial bias field example $\mathbf{I}^{a}$ generated by our method and the original image $\mathbf{I}$, we can calculate an interpretable map $\mathbf{m}$ for a DNN $\phi$ by optimizing
$$\arg\min_{\mathbf{m}}\; \phi_y(\mathbf{m} \odot \mathbf{I} + (\mathbf{1}-\mathbf{m}) \odot \mathbf{I}^{a}) + \lambda_1 \|\mathbf{1}-\mathbf{m}\|_1 + \lambda_2\, \mathrm{TV}(\mathbf{m}), \qquad (8)$$
where $\phi_y(\cdot)$ denotes the score at label $y$, the ground-truth label of $\mathbf{I}$, and $\mathrm{TV}(\cdot)$ is the total-variation norm. Intuitively, optimizing Eq. (8) finds the region that causes the misclassification. We optimize Eq. (8) via gradient descent for 150 iterations with fixed $\lambda_1$ and $\lambda_2$.
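The total-variation regularizer and the mask-weighted blend of the original and attacked images can be sketched as follows (our own helper names, not the paper's implementation):

```python
import numpy as np

def tv_norm(m):
    """Anisotropic total-variation norm of a 2-D mask: the sum of
    absolute differences between neighboring entries, which penalizes
    non-smooth, fragmented masks."""
    return np.abs(np.diff(m, axis=0)).sum() + np.abs(np.diff(m, axis=1)).sum()

def blend(I, I_adv, m):
    """Mask-weighted mix of the original and attacked image: pixels
    with m=1 keep the original, pixels with m=0 take the adversarial
    version. The optimized mask then highlights the regions whose
    bias-field change flips the prediction."""
    return m * I + (1.0 - m) * I_adv
```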
With Eq. 8, given a pre-trained model $\phi$ and a dataset $\mathcal{D}$ containing the successfully attacked X-ray images, we can calculate a map $\mathbf{m}$ for each X-ray image and then average all interpretable maps to show the statistical regions that are sensitive to the bias field. For example, we adopt ResNet50 as the subject model and construct $\mathcal{D}$ with 240 attacked X-ray images that fool ResNet50 successfully. Then, we calculate the interpretable maps for all images in $\mathcal{D}$ (e.g., the second row in Fig. 4) and average them, obtaining a statistical mean map (e.g., the left image in Fig. 4). According to the visualization results, we observe the following: ❶ Our method helps identify the bias-field-sensitive regions in each attacked X-ray image, and these regions are related to organ positions, demonstrating that the effect of the bias field on the DNN stems from intensity variation around organs. ❷ According to the statistical mean map, the bias-field-sensitive regions are mainly located at the top and bottom across all attacked images, suggesting that future DNN designs should consider spatial variations within X-ray images. We observe similar results on other DNNs (please find more results in the supplementary material), hinting that these are common phenomena in DNN-based X-ray recognition and demonstrating the potential applications of this work.
4.4 Effects of Hyper-parameters
We also evaluate the effects of the hyper-parameters in our attack, i.e., the TPS control points and the polynomial degrees used in Eq. 4. Specifically, we change the TPS transformation by changing the number of control points, denoted as a $c \times c$ grid, and select different $c$ to conduct the attack. For the polynomial, we fix the degree $K$ as 10 and change the number $k$ of lowest-degree terms that are ignored in the multivariate polynomial model.
Table 2 shows the results under different configurations. In the second row, we fix $k$ as 0 and change $c$ to 4, 8, 12, and 16, respectively. There is no clear difference in the attack success rate as $c$ varies; we conjecture that the attack easily reaches the upper bound of the success rate for different $c$. Figure 5 shows how the bias field changes with different $c$ over multiple iterations. Intuitively, when $c$ is smaller, larger parts of the image are adjusted in each iteration and the image may become less smooth. As $c$ becomes larger, there are more grid cells, which provide more fine-grained changes, so the generated image can be smoother.
Then we fix $c$ as 16 and change $k$ to 0, 1, 2, and 3 (the third row). As $k$ increases (i.e., more low-degree terms are ignored), the success rate of our method decreases and the BRISQUE score decreases. This is reasonable, as ignoring more low-degree terms in Eq. 4 reduces the space of manipulation, resulting in higher image quality and a lower attack success rate. The visualization results are shown in Fig. 6: when more low-degree terms are ignored (i.e., larger $k$), the bias field samples tend to be less smooth.
Deep learning has been used in chest X-ray image recognition for the diagnosis of lung diseases (e.g., COVID-19), where ensuring the robustness of the DNN is especially important. To tackle this problem, this paper proposed a new adversarial bias field attack, which generates more realistic adversarial examples by adding smooth perturbations instead of noise. We demonstrated the effectiveness of our attack on widely used DNNs. The results show that our method can generate high-quality adversarial examples that achieve a high transfer attack success rate. The generated realistic images can reveal issues in the DNNs, which calls for attention to robustness enhancement of deep learning-based healthcare systems.
In the future, we will extend the adversarial bias field attack to other computer vision tasks, e.g., natural image classification Guo et al. (2020b), visual object tracking Guo et al. (2020c, a, 2017a, 2017b); Zhou et al. (2017), etc., and also study it in tandem with other attack modalities that are not additive in nature, such as Gao et al. (2020); Cheng et al. (2020b); Zhai et al. (2020). In addition, we can regard the adversarial bias field as a new kind of mutation for DNN testing Xie et al. (2019a); Ma et al. (2018b); Du et al. (2019); Xie et al. (2019b); Ma et al. (2018a, 2019).
- Covid-caps: a capsule network-based framework for identification of covid-19 cases from x-ray images. arXiv preprint arXiv:2004.02696. Cited by: §2.1.
- A modified fuzzy c-means algorithm for bias field estimation and segmentation of mri data. IEEE transactions on medical imaging 21 (3), pp. 193–199. Cited by: §1.
- Evasion attacks against machine learning at test time. In Joint European conference on machine learning and knowledge discovery in databases, pp. 387–402. Cited by: §2.2.
- Correction for intensity falloff in surface coil magnetic resonance imaging. Medical Physics 15 (2), pp. 241–245. Cited by: §2.1.
- Towards evaluating the robustness of neural networks. In 2017 ieee symposium on security and privacy (sp), pp. 39–57. Cited by: §2.2, §4.1.4.
- Pasadena: perceptually aware and stealthy adversarial denoise attack. arXiv preprint arXiv:2007.07097. Cited by: §2.2.
- Adversarial Exposure Attack on Diabetic Retinopathy Imagery. arXiv preprint arXiv. Cited by: §5.
- Houdini: fooling deep structured visual and speech recognition models with adversarial examples. In Advances in neural information processing systems, pp. 6977–6987. Cited by: §2.2.
- Boosting adversarial attacks with momentum. Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 9185–9193. Cited by: §4.1.4.
- Deepstellar: model-based quantitative analysis of stateful deep learning systems. In Proceedings of the 2019 27th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, pp. 477–487. Cited by: §5.
- A unified variational approach to denoising and bias correction in mr. In Biennial international conference on information processing in medical imaging, pp. 148–159. Cited by: §2.1.
- Adversarial attacks against medical deep learning systems. arXiv preprint arXiv:1804.05296. Cited by: §2.3.
- Interpretable explanations of black boxes by meaningful perturbation. In ICCV, Vol. , pp. 3449–3457. Cited by: §4.3.
- Model inversion attacks that exploit confidence information and basic countermeasures. In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, pp. 1322–1333. Cited by: §2.2.
- Making Images Undiscoverable from Co-Saliency Detection. arXiv preprint arXiv. Cited by: §5.
- Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572. Cited by: §2.2, §2.2, §4.1.4.
- Diagnose like a radiologist: attention guided convolutional neural network for thorax disease classification. CoRR abs/1801.09927. Cited by: §2.1.
- Multi-label chest x-ray image classification via category-wise residual attention learning. Pattern Recognition Letters 130, pp. 259 – 266. Note: Image/Video Understanding and Analysis (IUVA) External Links: Cited by: §2.1.
- Learning to recognize abnormalities in chest x-rays with location-aware dense networks. CoRR abs/1803.04565. Cited by: §2.1.
- Learning dynamic siamese network for visual object tracking. In Proceedings of the IEEE international conference on computer vision, pp. 1763–1771. Cited by: §5.
- Structure-regularized compressive tracking with online data-driven sampling. IEEE Transactions on Image Processing 26 (12), pp. 5692–5705. Cited by: §5.
- Selective spatial regularization by reinforcement learned decision making for object tracking. IEEE Transactions on Image Processing 29, pp. 2999–3013. Cited by: §5.
- ABBA: saliency-regularized motion-based adversarial blur attack. arXiv preprint arXiv:2002.03500. Cited by: §2.2, §5.
- Frequency-tuned acm for biomedical image segmentation. In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 821–825. Cited by: §1.
- Frequency-tuned active contour model. Neurocomputing 275, pp. 2307–2316. Cited by: §1.
- SPARK: spatial-aware online incremental attack against visual tracking. In Proceedings of the European Conference on Computer Vision (ECCV), Cited by: §2.2, §5.
- Bias field correction for mri images. In Computer Recognition Systems, pp. 543–551. Cited by: §2.1.
- Adversarial machine learning at scale. arXiv preprint arXiv:1611.01236. Cited by: §2.2, §4.1.4.
- A variational level set approach to segmentation and bias correction of images with intensity inhomogeneity. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 1083–1091. Cited by: §1.
- Defending against adversarial attacks on medical imaging ai system, classification or detection?. arXiv preprint arXiv:2006.13555. Cited by: §2.3.
- Covid-xpert: an ai powered population screening of covid-19 cases using chest radiography images. arXiv preprint arXiv:2004.03042. Cited by: §2.1.
- Thoracic disease identification and localization with limited supervision. CoRR abs/1711.06373. Cited by: §2.1.
- DeepGauge: multi-granularity testing criteria for deep learning systems. In The 33rd IEEE/ACM International Conference on Automated Software Engineering (ASE). Cited by: §5.
- DeepCT: tomographic combinatorial testing for deep learning systems. In Proceedings of the IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER). Cited by: §5.
- DeepMutation: mutation testing of deep learning systems. In The 29th IEEE International Symposium on Software Reliability Engineering (ISSRE). Cited by: §5.
- Understanding adversarial attacks on deep learning based medical image analysis systems. Pattern Recognition, pp. 107332. Cited by: §2.3.
- No-reference image quality assessment in the spatial domain. IEEE Transactions on Image Processing 21 (12), pp. 4695–4708. Cited by: §4.1.3.
- Impact of adversarial examples on deep learning models for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 300–308. Cited by: §2.3.
- Practical black-box attacks against machine learning. In Proceedings of the 2017 ACM on Asia conference on computer and communications security, pp. 506–519. Cited by: §2.2.
- The limitations of deep learning in adversarial settings. In 2016 IEEE European symposium on security and privacy (EuroS&P), pp. 372–387. Cited by: §4.1.4.
- Generalizability vs. robustness: investigating medical imaging networks using adversarial examples. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 493–501. Cited by: §2.3.
- CheXNet: radiologist-level pneumonia detection on chest X-rays with deep learning. CoRR abs/1711.05225. Cited by: §2.1.
- Foolbox: a python toolbox to benchmark the robustness of machine learning models. arXiv preprint arXiv:1707.04131. Cited by: §4.1.4.
- Poison frogs! targeted clean-label poisoning attacks on neural networks. In Advances in Neural Information Processing Systems, pp. 6103–6113. Cited by: §2.2, §2.2.
- Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199. Cited by: §2.2.
- Unveiling COVID-19 from chest X-ray with deep learning: a hurdles race with small data. arXiv preprint arXiv:2004.05405. Cited by: §2.1.
- 3D MDEFT imaging of the human brain at 4.7 T with reduced sensitivity to radiofrequency inhomogeneity. Magnetic Resonance in Medicine: An Official Journal of the International Society for Magnetic Resonance in Medicine 53 (6), pp. 1452–1458. Cited by: §2.1.
- A review of methods for correction of intensity inhomogeneity in MRI. IEEE Transactions on Medical Imaging 26 (3), pp. 405–421. Cited by: §1.
- COVID-Net: a tailored deep convolutional neural network design for detection of COVID-19 cases from chest X-ray images. arXiv preprint arXiv:2003.09871. Cited by: §2.1.
- Amora: black-box adversarial morphing attack. In ACM Multimedia Conference (ACMMM). Cited by: §2.2, §5.
- ChestX-ray8: hospital-scale chest X-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases. CoRR abs/1705.02315. Cited by: §2.1.
- Spatially transformed adversarial examples. arXiv preprint arXiv:1801.02612. Cited by: §2.2.
- DeepHunter: a coverage-guided fuzz testing framework for deep neural networks. In ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA). Cited by: §5.
- DiffChaser: detecting disagreements for deep neural networks. In IJCAI, pp. 5772–5778. Cited by: §5.
- Generative poisoning attack method against neural networks. arXiv preprint arXiv:1703.01340. Cited by: §2.2.
- Learning to diagnose from scratch by exploiting dependencies among labels. CoRR abs/1710.10501. Cited by: §2.1.
- It’s raining cats or dogs? Adversarial rain attack on DNN perception. arXiv preprint arXiv. Cited by: §5.
- Estimation of image bias field with sparsity constraints. In 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 255–262. Cited by: §1.
- Selective object and context tracking. In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1947–1951. Cited by: §5.