A Hierarchical Feature Constraint to Camouflage Medical Adversarial Attacks

12/17/2020 · Qingsong Yao, et al. · Princeton University; Institute of Computing Technology, Chinese Academy of Sciences; Tencent

Deep neural networks (DNNs) for medical images are extremely vulnerable to adversarial examples (AEs), which poses security concerns on clinical decision making. Luckily, medical AEs are also easy to detect in hierarchical feature space per our study herein. To better understand this phenomenon, we thoroughly investigate the intrinsic characteristic of medical AEs in feature space, providing both empirical evidence and theoretical explanations for the question: why are medical adversarial attacks easy to detect? We first perform a stress test to reveal the vulnerability of deep representations of medical images, in contrast to natural images. We then theoretically prove that typical adversarial attacks to binary disease diagnosis network manipulate the prediction by continuously optimizing the vulnerable representations in a fixed direction, resulting in outlier features that make medical AEs easy to detect. However, this vulnerability can also be exploited to hide the AEs in the feature space. We propose a novel hierarchical feature constraint (HFC) as an add-on to existing adversarial attacks, which encourages the hiding of the adversarial representation within the normal feature distribution. We evaluate the proposed method on two public medical image datasets, namely Fundoscopy and Chest X-Ray. Experimental results demonstrate the superiority of our adversarial attack method as it bypasses an array of state-of-the-art adversarial detectors more easily than competing attack methods, supporting that the great vulnerability of medical features allows an attacker more room to manipulate the adversarial representations.


1 Introduction

Deep neural networks (DNNs) are known to be highly vulnerable to adversarial examples (AEs) [szegedy2013intriguing]. AEs are maliciously generated by adding human-imperceptible perturbations to clean examples, compromising a network into producing the attacker-desired incorrect predictions [dong2018boosting]. This weakness challenges the deployment of DNNs in security-critical applications such as face recognition [parkhi2015deep] and autonomous driving [autodrive]. Adversarial attacks in medical image analysis are especially disastrous, as they can manipulate a patient's disease diagnosis and cause serious downstream harm. More disturbingly, recent studies have shown that DNNs for medical image analysis [zhou2015medical, zhou2017deep, zhou2019handbook], including disease diagnosis [paschali2018generalizability, finlayson2018adversarial, ma2020understanding], organ segmentation [ozbulak2019impact], and landmark detection [yao2020miss], are more vulnerable to AEs than those for natural images.

Figure 1: We craft adversarial examples with the state-of-the-art basic iterative method (BIM) [bim] under a small perturbation constraint to manipulate the medical diagnosis result. We then visualize the penultimate layer's deep representations of the adversarial and clean examples with 2D t-SNE. The adversarial features lie at outlier positions that can be easily detected by adversarial detectors.

On the other hand, recent works [ma2020understanding] have shown that, unlike adversarial examples of natural images, medical adversarial examples can be easily detected in hierarchical feature space. To illustrate the distribution differences between clean and adversarial examples in the feature space, we plot the 2D t-SNE [t-SNE] of their features from the penultimate layer of a well-trained pneumonia classifier in Fig. 1. It reveals that adversarial attacks move the deep representations from the original distribution to extreme outlier positions in order to compromise the classifier. As a result, a defender can easily exploit this intrinsic characteristic of adversarial examples by learning a decision boundary between clean and adversarial examples, or by distinguishing them directly with anomaly-detection based methods. Given this phenomenon, two key questions are investigated in this paper. The first one is:

What causes medical adversarial examples to be more easily detected than natural adversarial examples? To better understand the problem, we conduct both empirical and theoretical analyses of medical adversarial examples and compare them with those of natural images. First, we demonstrate in a stress test, which aims to distort the features by adversarial attack, that medical features are more vulnerable than natural ones. Then, we theoretically prove that adversarial attacks optimize the vulnerable representations in a nearly consistent direction. The consequence of such consistent guidance is that the vulnerable representations are pushed to outlier regions where clean example features rarely reside. The second question is: If possible, how can a medical adversarial example be hidden from being spotted in the feature space?

Intuitively, if the attacker could imitate the feature distribution of clean examples while manipulating the final logits, the attack would not only deceive anomaly-detection based detectors but also bypass decision boundaries trained to spot the extreme outlier AE features. A straightforward idea is to select a guide example and force the representations of the adversarial example to be close to those of the guide image in the hierarchical feature space [feature_iclr]. However, different medical images have different backgrounds and lesions, so it is difficult to make the adversarial representation match the guide one in all layers within the limit of a small perturbation. To find where to hide the adversarial representation within the normal feature distribution, we propose a novel hierarchical feature constraint (HFC), an add-on term that can be plugged into existing attacks. HFC first models the normal feature distribution of each activation layer with a Gaussian mixture model and then promotes hiding the adversarial sample where the corresponding log-likelihood is maximized. We perform extensive experiments on two public medical diagnosis datasets to validate the effectiveness of HFC. HFC helps an attacker bypass several state-of-the-art adversarial detectors while keeping the perturbation under a strict constraint, and it greatly outperforms other methods at manipulating adversarial representations. Furthermore, HFC bypasses the detectors in the gray-box setting, where only the backbone network is known, and it extends to adversarial attacks on natural images. Our experiments support that the great vulnerability of medical features allows an attacker more room to manipulate the adversarial representations. Overall, we highlight the following contributions:

  • We investigate the intrinsic characteristics of medical images and shed light on why medical adversarial examples can be more easily detected, when compared with adversarial examples of natural images.

  • We propose a hierarchical feature constraint (HFC), a novel plug-in that can be applied to all existing attacks to lower their chance of being detected.

  • Extensive experiments validate that our HFC bypasses several state-of-the-art adversarial detectors with small perturbations in both white- and gray-box settings.

2 Related Work

Given a clean image $x$ with its ground-truth label $y$ and a DNN classifier $f$ with pretrained parameters $\theta$, the classifier predicts the class of the input example via:

$\hat{y} = \arg\max_{k} \; p(k \mid x)$   (1)

where the logit output with respect to class $k$ is given as $Z_k(x) = \boldsymbol{w}_k^{\top} F(x) + b_k$, in which $F(x)$ is the activation of the penultimate layer that has $D$ dimensions; $\boldsymbol{w}_k$ and $b_k$ are the weights and the bias of the final dense layer, respectively; and $p(k \mid x) = \mathrm{softmax}(Z(x))_k$ is the probability of $x$ belonging to class $k$. A common way of crafting an adversarial attack is to manipulate the classifier's prediction by minimizing the classification error between the prediction and a target class $t$ (in this work, we focus on targeted adversarial attacks), while keeping the adversarial example $x_{adv}$ within a small $\epsilon$-ball of the $\ell_\infty$-norm [PGD] centered at the original sample $x$, i.e., $\|x_{adv} - x\|_\infty \leq \epsilon$, where $\epsilon$ is the perturbation budget.
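For concreteness, the prediction rule and the $\ell_\infty$ constraint above can be sketched in a few lines of NumPy. This is a minimal illustrative sketch under our own assumptions (array shapes, function names), not the authors' implementation.

import numpy as np

def predict(logits):
    # Eq. (1): softmax probabilities and the predicted class (argmax over k)
    z = logits - logits.max()            # shift for numerical stability
    probs = np.exp(z) / np.exp(z).sum()
    return int(np.argmax(probs)), probs

def within_eps_ball(x_adv, x, eps):
    # the l_inf constraint ||x_adv - x||_inf <= eps
    return np.abs(x_adv - x).max() <= eps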

2.1 Adversarial attacks

A wide range of gradient-based or optimization-based attacks have been proposed to generate AEs under different types of norms. The Jacobian-based saliency map attack (JSMA) [JSMA] crafts AEs under the $\ell_0$-norm, modifying a few pixels to change the loss as much as possible. DeepFool [deepfool] is an $\ell_2$-norm attack that applies smaller perturbations by moving the attacked input toward its closest decision boundary. Another effective $\ell_2$-norm attack proposed by Carlini and Wagner (CW attack) [cwattack] takes a Lagrangian form and adopts Adam [adam] for optimization. The elastic-net attack (EAD) [EAD] extends the CW attack to the $\ell_1$-norm by including an $\ell_1$ regularization term. In this paper, we focus on state-of-the-art $\ell_\infty$-norm adversarial attacks, which are most commonly used due to their consistency with human perception [PGD]. Existing approaches fall into three categories. The first is one-step gradient-based approaches, such as the fast gradient sign method (FGSM) [goodfellow2014explaining], which generates an adversarial example by minimizing the loss $L(x + \delta, y_t)$, where $L$ is often chosen as the cross-entropy loss and $\epsilon$ is the norm bound:

$x_{adv} = x - \epsilon \cdot \mathrm{sign}\big(\nabla_x L(x, y_t)\big)$   (2)

The second category is iterative methods. The basic iterative method (BIM) [bim] is an iterative version of FGSM, which updates the perturbation with a smaller step size $\alpha$ and keeps it within the $\epsilon$ norm bound by a projection function $\Pi_\epsilon$:

$x_{adv}^{(n+1)} = \Pi_\epsilon\Big(x_{adv}^{(n)} - \alpha \cdot \mathrm{sign}\big(\nabla_x L(x_{adv}^{(n)}, y_t)\big)\Big)$   (3)

Different from BIM, another iterative method named projected gradient descent (PGD) [PGD] uses a random start $x_{adv}^{(0)} = x + \eta$, where $\eta$ is uniform noise between $-\epsilon$ and $\epsilon$, and then perturbs the input by Eq. (3) iteratively. Furthermore, the momentum iterative method (MIM) [dong2018boosting] improves transferability by integrating a momentum term into the iterative process. The last category is optimization-based methods, among which a representative approach is the Carlini and Wagner (CW) attack [cwattack]. According to [PGD], the $\ell_\infty$ version of the CW attack can be solved by the PGD algorithm using the following objective function:

$L_{CW} = \max\big(\max_{k \neq t} Z_k(x_{adv}) - Z_t(x_{adv}), \, -\kappa\big)$   (4)

where $Z_t$ is the logit of the target class, $\max_{k \neq t} Z_k$ is the maximum logit of the remaining classes, and $\kappa$ is a parameter managing the confidence (we set $\kappa$ to the average difference between the largest and the second-largest logits for each dataset).
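The iterative attacks above can be sketched as follows. This is a minimal PyTorch sketch (not the authors' code), assuming inputs normalized to [-1, 1] as in Sec. 5.1; cw_loss implements Eq. (4) and bim_targeted implements the update of Eq. (3) (set random_start=True for PGD).

import torch

def cw_loss(logits, target, kappa=0.0):
    # Eq. (4): max(max_{k != t} Z_k - Z_t, -kappa), averaged over the batch
    z_t = logits.gather(1, target.view(-1, 1)).squeeze(1)
    z_other = logits.clone()
    z_other.scatter_(1, target.view(-1, 1), float('-inf'))
    return torch.clamp(z_other.max(dim=1).values - z_t, min=-kappa).mean()

def bim_targeted(model, x, target, eps, alpha, n_iter, random_start=False):
    x_adv = x.clone()
    if random_start:                      # PGD: start from a random point in the eps-ball
        x_adv = x_adv + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(n_iter):
        x_adv.requires_grad_(True)
        loss = cw_loss(model(x_adv), target)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()            # descend: move toward the target class
            x_adv = x + torch.clamp(x_adv - x, -eps, eps)  # project back into the eps-ball (Eq. (3))
            x_adv = torch.clamp(x_adv, -1, 1)              # keep a valid pixel range
    return x_adv.detach()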

Figure 2: (a) The similarity between the value changes of the penultimate-layer activations (under adversarial attack) and the weight differences $w_{i,1} - w_{i,0}$. The activation value of a component with a greater weight difference increases more after the attack. (b) The similarity of the changes of the penultimate-layer activation values between different attacks and between different iterations of the basic iterative method (BIM).

2.2 Adversarial defenses

Plenty of proactive defense approaches have been proposed to defend against adversarial attacks, such as feature squeezing [xu2017feature], distillation networks [papernot2016distillation], input transformation (e.g., JPEG compression [jpeg], autoencoder-based denoising [liao2018defense], and regularization [ross2017improving]), Parseval networks [cisse2017parseval], gradient masking [masking], randomization [liu2018towards, dhillon2018stochastic], radial basis mapping kernels [taghanaki2019kernelized], and non-local context encoders [he2019non]. Per [dong2019benchmarking], PGD-based adversarial (re)training [goodfellow2014explaining, tramer2017ensemble, PGD] is the most robust defense strategy; it augments the training set with adversarial examples but consumes much training time. However, these defenses can be bypassed either completely or partially by adaptive attacks [CW_bpda, CW_ten, tramer2020adaptive]. Different from the challenging proactive defense, recent work has focused on reactive defense, which aims at detecting AEs among clean examples with high accuracy [meng2017magnet, miller2020adversarial, zheng2018robust]. In particular, several emerging works shed light on the intrinsic characteristics of the high-dimensional feature subspace [zheng2018robust, li2017adversarial]. Some of them use learning-based methods (e.g., RBF-SVM [SaftyNet], DNN [metzen2017detecting]) to train a decision boundary between the clean and adversarial distributions in the feature space. Another line of research is k-nearest-neighbor (kNN) based methods [dubey2019defense, papernot2018deep, cohen2020detecting], which make predictions according to the logits (or classes) of the kNNs in the feature space. Furthermore, anomaly-detection based methods are also suitable for detecting AEs: Feinman et al. [kde] and Li et al. [li2020robust] model the normal distribution with kernel density estimation (KDE) and a multivariate Gaussian model (MGM), respectively. Ma et al. [ma2018characterizing] characterize the dimensional properties of the adversarial subspaces by local intrinsic dimensionality (LID). Lee et al. [MAHA] measure the degree of outlierness by a Mahalanobis distance (MAHA) based confidence score. Notably, Ma et al. [ma2020understanding] show that medical AEs are much easier to detect than those of natural images (with 100% accuracy). A similar conclusion comes from [li2020robust], which motivates us to explore the reason behind this phenomenon and to evaluate the robustness of these detectors.

3 Why are Medical AEs Easy to Detect?

3.1 Vulnerability of representations

To fully understand the intrinsic characteristics of medical adversarial examples, we first perform a stress test to evaluate the robustness of the deep representations. Specifically, we aim to manipulate the activation values as much as possible by adversarial attacks. In implementation, we try to decrease ($\downarrow$) and increase ($\uparrow$) the activation values by replacing the loss function $L$ in BIM with a term that directly penalizes or rewards the activation magnitude of the $l$-th activation layer $F^l(x)$, respectively. We execute the stress attack on a medical dataset (Fundoscopy [aptos]) and a natural-image dataset (CIFAR-10 [cifar]). The comparison results shown in Table 1 demonstrate that the changes caused by the attacks on medical images are larger than those on natural images, indicating that the representations of medical images are easier to attack; in other words, medical image representations are much more vulnerable.

Dataset Fundoscopy CIFAR-10
Layer index 36 45 48 36 45 48
Normal .0475 .1910 .3750 .0366 .1660 .1900
Adversarial (↓) .0322 .0839 .0980 .0312 .1360 .1480
Adversarial (↑) .0842 .7306 2.0480 .0432 .2030 .2640
Difference (↓) .0153 .1071 .2770 .0054 .0300 .0420
Difference (↑) .0367 .5396 1.6730 .0066 .0370 .0740
Table 1: Comparison of robustness between medical and natural images. We report the mean values of the activation layers of ResNet-50 before and after the adversarial attacks, where ↓ and ↑ denote the attacks that decrease and increase the activations, respectively. The features of medical images are decreased or increased far more drastically than those of natural images. The stress test uses perturbations under an $\ell_\infty$ constraint.
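The stress test can be sketched as a BIM-style loop that directly maximizes or minimizes the mean activation of a chosen layer. The sketch below is illustrative and rests on our own assumptions (a forward hook on a named layer, inputs in [-1, 1]); it is not necessarily the exact protocol behind Table 1.

import torch

def stress_test(model, layer, x, eps, alpha, n_iter, direction=+1):
    # direction=+1 pushes the mean activation of `layer` up; -1 pushes it down
    feats = {}
    handle = layer.register_forward_hook(lambda m, i, o: feats.update(out=o))
    x_adv = x.clone()
    for _ in range(n_iter):
        x_adv.requires_grad_(True)
        model(x_adv)
        loss = direction * feats['out'].mean()      # surrogate objective on the activations
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()     # ascend on the surrogate objective
            x_adv = x + torch.clamp(x_adv - x, -eps, eps)
            x_adv = torch.clamp(x_adv, -1, 1)
    handle.remove()
    return x_adv.detach()

# e.g., layer = dict(model.named_modules())['layer3'] for a torchvision ResNet-50 (illustrative choice)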

3.2 Consistency of gradient direction

We then investigate the loss function $L$ and the corresponding gradient on the final logit output $Z$. In each iteration of the approaches introduced above, minimizing either the cross-entropy loss or the CW loss increases the logit of the target class and decreases the other logits at the same time. Therefore, gradients pointing in similar directions occur across different iterations of the various attacks, and they are back-propagated according to the chain rule.

Theorem 1. Consider a binary disease diagnosis network and its representations from the penultimate layer; the directions of the corresponding gradients are fixed during each iteration of an adversarial attack. (We provide the theoretical analysis and empirical results for multi-class classification, as well as the proof, in the supplementary material. As a representative case, we use the adversarial attack to convert the prediction of the diagnosis network from 0 to 1.)

Implication. The partial derivative of the cross-entropy loss with respect to the activation value $a_i$ of the $i$-th node in the penultimate layer is computed as:

$\frac{\partial L}{\partial a_i} = (P_1 - 1)(w_{i,1} - w_{i,0})$   (5)

where $P_1$ denotes the prediction confidence of class 1 and $w_{i,j}$ denotes the weight between the $i$-th node in the penultimate layer and the $j$-th node in the last layer. Accordingly, the component with a bigger difference $w_{i,1} - w_{i,0}$ will increase more (guided by the gradient) under adversarial attack. We plot the similarity between the value changes and $w_{i,1} - w_{i,0}$ in Fig. 2(a). Similar conclusions can be derived when the attacker chooses a different approach to increase the targeted logit and suppress the remaining one, e.g., the CW attack. Hence, we calculate the similarity of the value changes among different adversarial approaches and different iterations; the results are shown in Fig. 2(b). These consistent changes in the feature space explain why an adversarial detector such as RBF-SVM [SaftyNet] (trained on a single attack) transfers well [ma2020understanding] to different attacks.
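The fixed gradient direction of Theorem 1 can be checked numerically on a toy binary softmax head. Everything in the sketch below (weights, activations, step size) is synthetic and illustrative, not taken from the paper's models.

import torch

torch.manual_seed(0)
D = 16                                   # penultimate-layer width (toy size)
W = torch.randn(D, 2)                    # weights w_{i,0}, w_{i,1} of the final dense layer
a = torch.randn(D, requires_grad=True)   # penultimate activations of one example

directions = []
for _ in range(5):                       # a few "attack iterations" on the activations
    logits = a @ W
    loss = torch.nn.functional.cross_entropy(logits.unsqueeze(0),
                                             torch.tensor([1]))   # target class 1
    grad = torch.autograd.grad(loss, a)[0]
    directions.append(torch.sign(grad))
    with torch.no_grad():
        a -= 0.1 * grad                  # move activations along the negative gradient
# the gradient sign pattern is identical across iterations and equals -sign(w_{i,1} - w_{i,0}),
# matching Eq. (5)
print(all(torch.equal(d, directions[0]) for d in directions))
print(torch.equal(directions[0], -torch.sign(W[:, 1] - W[:, 0])))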

3.3 Extremely OOD activation values

Since the activation values are vulnerable and iteratively updated in a consistent direction, it is likely that a few of them increase to extremely large values that clean activation values are unlikely to reach. Suppose that we have $N$ clean samples with their AEs, and the penultimate layer of the network has $D$ activation values. To quantify the degree of abnormality of the outliers, we first collect the penultimate-layer activation values of the clean examples and the AEs and store them as matrices $A^{clean}$ and $A^{adv}$, respectively. Next, we perform a column-wise normalization: each entry of a column of $A^{clean}$ or $A^{adv}$ is divided by the corresponding column-wise maximum calculated from $A^{clean}$ only. This yields the normalized matrices $\hat{A}^{clean}$ and $\hat{A}^{adv}$. Finally, we compute the standard deviation and the maximum value for each row vector of $\hat{A}^{clean}$ and $\hat{A}^{adv}$ (the code can be found in the supplementary material).

Figure 3: The distribution of standard deviations and the maximum values. The perturbation budgets are multiplied by 256. The standard deviations and maximum values of AEs are much bigger than those of clean examples.

We illustrate the distributions of the maximum values and standard deviations in Fig. 3. The maximum values and standard deviations of adversarial examples are much greater than those of clean images, which means that several activation values become extremely larger than the normal ones. As the perturbation budget rises, the degree of outlierness increases accordingly. It is worth noting that when the perturbation budget grows large, the adversarial activation values of BIM keep increasing while those of CW stop, because the gradient of the CW loss becomes zero once the logit margin exceeds the confidence parameter $\kappa$. This intrinsic characteristic of the outliers in the feature space explains why many out-of-distribution (OOD) detection methods can detect AEs in the feature space with high accuracy, especially for medical images [ma2020understanding]. However, we show in Sec. 4 that the attacker is able to take advantage of the fragility of medical AE features and hide them from being spotted.

4 Adversarial attack with a hierarchical feature constraint

Here, we demonstrate how to hide the adversarial representation within the normal feature distribution. Our intuition is to derive a term that measures the distance from the adversarial representation to the normal feature distribution, so that the adversarial representation can be pushed toward the normal distribution along the shortest path by directly minimizing this term in each gradient-descent iteration of the adversarial attack.

Modeling the normal feature distribution: We model the normal feature distribution using a Gaussian mixture model (GMM) as follows:

$p\big(F^l(x) \mid t\big) = \sum_{k=1}^{K} \pi_k \, \mathcal{N}\big(F^l(x) \mid \mu_k^l, \Sigma_k^l\big)$   (6)

where $p(F^l(x) \mid t)$ is the probability density of sample $x$ in the target class $t$; $F^l(\cdot)$ denotes the mapping function, i.e., the deep representation of the $l$-th activation layer with parameters $\theta$; $\pi_k$ is the mixture coefficient subject to $\sum_{k=1}^{K}\pi_k = 1$; and $\mu_k^l$ and $\Sigma_k^l$ are the mean and covariance matrix of the $k$-th Gaussian component in the mixture model. These parameters are trained by the expectation-maximization (EM) algorithm [EM] on the data belonging to the target class $t$. For a given input $x_{adv}$, we separately compute the log-likelihood of the adversarial feature relative to each component and find the most probable Gaussian component:

$k^{*} = \arg\max_{k} \; \log\Big(\pi_k \, \mathcal{N}\big(F^l(x_{adv}) \mid \mu_k^l, \Sigma_k^l\big)\Big)$   (7)

Then we focus on maximizing the log-likelihood of this chosen component to hide the adversarial representation.

Hierarchical feature constraint: To avoid being detected by outlier detectors, we add the constraint of Eq. (7), ignoring the constant terms, to all DNN layers. The hierarchical feature constraint induces a loss ($L_{HFC}$) that is formulated as:

$L_{HFC} = \sum_{l} \lambda^l \cdot \tfrac{1}{2}\big(F^l(x_{adv}) - \mu_{k^*}^l\big)^{\top} \big(\Sigma_{k^*}^l\big)^{-1} \big(F^l(x_{adv}) - \mu_{k^*}^l\big)$   (8)

where $\lambda^l$ is a weighting factor that controls the contribution of the constraint in layer $l$. Algorithm 1 shows the pseudo-code for the adversarial attack with the hierarchical feature constraint. Given an input image $x$, the goal is to find an adversarial example $x_{adv}$ that is misclassified to the target class $t$ while keeping its deep representations close to the normal feature distribution. Here, we focus on AEs under the $\ell_\infty$ constraint. We first model the normal hierarchical features of the training data with GMMs. Then, we extend the attacking process of BIM by replacing the original loss function in Eq. (3) with:

$L = L_{cls} + L_{HFC}$   (9)

where $L_{cls}$ is the same classification loss as Eq. (4) of the CW attack and $L_{HFC}$ is the HFC loss term.
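Before the pseudo-code, the per-layer GMM modeling of Eqs. (6)-(7) can be fitted with scikit-learn. In this sketch, the channel-wise mean features, diagonal covariances, and component count are our own illustrative choices, not necessarily the paper's exact configuration.

import numpy as np
from sklearn.mixture import GaussianMixture

def fit_layer_gmms(clean_feats_per_layer, n_components=8):
    # clean_feats_per_layer: list of arrays, one per activation layer,
    # each of shape [n_clean_samples, n_channels] (e.g., channel-wise means),
    # computed from training images of the target class t
    gmms = []
    for feats in clean_feats_per_layer:
        gmm = GaussianMixture(n_components=n_components,
                              covariance_type='diag', reg_covar=1e-4)
        gmm.fit(feats)                      # EM training, Eq. (6)
        gmms.append(gmm)
    return gmms

def most_probable_component(gmm, feat):
    # Eq. (7): the component with the highest posterior, which shares its
    # argmax with the weighted log-likelihood pi_k * N(feat | mu_k, Sigma_k)
    return int(gmm.predict(feat[None, :])[0])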

0:    Training set $X_t$ of the target class, input image $x$, target class $t$, DNN model $f$, number of iterations $N$, step size $\alpha$
0:    Adversarial image $x_{adv}$ with $\|x_{adv} - x\|_\infty \leq \epsilon$
1:  for each DNN layer $l = 1$ to $L$ do
2:     Initialize the mean $\mu_k^l$ and covariance $\Sigma_k^l$ of each component $k$, where $k = 1, \dots, K$
3:     Train the GMM using the EM algorithm with $\{F^l(x_i)\}$, where $x_i \in X_t$
4:  end for
5:  $x_{adv}^{(0)} \leftarrow x$
6:  for $n = 0$ to $N-1$ do
7:     Compute $L_{cls}$ (Eq. (4)) and $L_{HFC}$ (Eq. (8))
8:     $x_{adv}^{(n+1)} \leftarrow \Pi_\epsilon\big(x_{adv}^{(n)} - \alpha \cdot \mathrm{sign}(\nabla_x (L_{cls} + L_{HFC}))\big)$
9:  end for
10:  return  $x_{adv}^{(N)}$
Algorithm 1 Adversarial attack with hierarchical feature constraint
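Putting Algorithm 1 together, a hedged PyTorch sketch is given below. It reuses cw_loss from the earlier sketch and the per-layer GMMs fitted above; the hook-based feature collection (get_layer_feats) and the diagonal covariances are our own illustrative assumptions rather than the authors' exact implementation.

import torch

def hfc_loss(layer_feats, gmms, lambdas):
    # layer_feats: list of [n_channels] tensors (e.g., channel-wise means of each
    # activation layer of x_adv); gmms: fitted per-layer GaussianMixture (diag covariance)
    loss = 0.0
    for feat, gmm, lam in zip(layer_feats, gmms, lambdas):
        k = int(gmm.predict(feat.detach().cpu().numpy()[None, :])[0])   # Eq. (7)
        mu = torch.as_tensor(gmm.means_[k], dtype=feat.dtype, device=feat.device)
        var = torch.as_tensor(gmm.covariances_[k], dtype=feat.dtype, device=feat.device)
        # negative log-likelihood of the chosen component, constants dropped (Eq. (8))
        loss = loss + lam * 0.5 * (((feat - mu) ** 2) / var).sum()
    return loss

def hfc_attack(model, get_layer_feats, x, target, gmms, lambdas, eps, alpha, n_iter):
    x_adv = x.clone()
    for _ in range(n_iter):                                   # Algorithm 1, lines 6-9
        x_adv.requires_grad_(True)
        logits = model(x_adv)
        feats = get_layer_feats()          # hooks return the current activations as tensors
        loss = cw_loss(logits, target) + hfc_loss(feats, gmms, lambdas)   # Eq. (9)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()               # minimize L_cls + L_HFC
            x_adv = x + torch.clamp(x_adv - x, -eps, eps)     # project into the eps-ball
            x_adv = torch.clamp(x_adv, -1, 1)
    return x_adv.detach()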

5 Experiments

Fundoscopy MIM (Adv. Acc=99.5) BIM (Adv. Acc=99.5) PGD (Adv. Acc=99.5) CW (Adv. Acc=99.5)
AUC TPR@90 AUC TPR@90 AUC TPR@90 AUC TPR@90
KD 98.8 / 72.0 96.3 / 10.0 99.0 / 74.2 96.8 / 20.5 99.4 / 73.4 98.6 / 13.2 99.5 / 74.7 99.1 / 19.6
MAHA 100 /  7.8 100 /  0.0 99.6 /  6.4 99.5 /  0.0 100 /  4.2 100 /  0.0 99.8 / 33.0 99.5 /  0.0
LID 98.8 / 67.1 99.1 / 31.5 99.8 / 78.3 100 / 40.6 99.6 / 73.2 98.6 / 35.5 98.8 / 73.4 97.7 / 33.3
SVM 96.9 / 27.3 99.5 / 27.3 99.5 / 28.6 99.1 /  0.0 99.8 / 23.1 99.5 /  0.0 99.8 / 27.0 99.5 /  0.0
DNN 100 / 31.5 100 /  0.5 100 / 60.0 100 / 12.8 100 / 58.6 100 /  8.2 100 / 62.6 100 / 15.1
BU 89.9 / 33.5 60.7 /  0.0 58.9 / 37.4 9.1 /  0.0 61.9 / 35.9 9.1 /  0.0 93.0 / 32.8 73.1 /  5.0
Chest X-Ray MIM (Adv. Acc=98.1) BIM (Adv. Acc=90.9) PGD (Adv. Acc=90.9) CW (Adv. Acc=98.9)
AUC TPR@90 AUC TPR@90 AUC TPR@90 AUC TPR@90
KD 100 / 67.9 100 /  7.9 100 / 73.1 100 /  6.8 100 / 82.3 100 / 50.5 99.2 / 71.5 98.4 / 15.7
MAHA 100 /  0.0 100 /  0.0 100 /  0.0 100 /  0.0 100 /  0.0 100 /  0.0 100 / 22.4 100 /  0.0
LID 100 / 47.5 100 /  2.3 100 / 48.6 100 /  1.8 100 / 49.1 100 /  1.5 99.2 / 64.5 98.4 / 14.4
SVM 100 / 8.9 100 / 46.7 100 / 16.7 100 /  6.9 100 /  5.8 100 /  0.0 100 / 21.2 100 /  0.0
DNN 100 / 35.5 100 /  1.0 100 / 31.8 100 /  0.7 100 / 33.7 100 /  0.0 100 / 61.6 100 /  5.2
BU 100 / 15.2 100 /  0.0 49.9 / 26.1 19.2 /  0.0 49.2 / 26.2 22.7 /  0.0 98.3 / 26.2 94.8 /  0.0
Table 2: The point-wise results of the proposed method. The scores on the left of each slash are the performances (%) of the adversarial detectors, trained on adversarial samples generated by the corresponding attacks under the $\ell_\infty$ constraint. The scores on the right of each slash are the detection performances under our attack, which satisfies the same constraint. All attacks are evaluated on ResNet-50. Adv. Acc is the success rate of HFC in manipulating the prediction of the disease diagnosis network.

5.1 Setup

Datasets. We use two public datasets for typical medical classification tasks. The first is the Kaggle Fundoscopy dataset [aptos] for the diabetic retinopathy (DR) classification task, which consists of 3,663 high-resolution fundus images. Each image is labeled with one of five levels from 'No DR' to 'mild/moderate/severe/proliferative DR'. Following [ma2020understanding, finlayson2018adversarial], we conduct a binary classification experiment that treats all fundoscopies with DR as one class. The other is the Kaggle Chest X-Ray [CXR] dataset for the pneumonia classification task, which consists of 5,863 X-ray images labeled 'Pneumonia' or 'Normal'. Following the literature [ma2020understanding, ma2018characterizing], we split both datasets into three subsets: Train, AdvTrain, and AdvTest. For each dataset, we randomly select 80% of the samples as the Train set to train the DNN classifier and treat the remaining samples as the Test set. Test samples misclassified by the diagnosis network are discarded. We then use 70% of the Test samples (AdvTrain) to train the adversarial detectors and evaluate their effectiveness with the remaining ones (AdvTest).

DNN models. We choose ResNet-50 [resnet] and VGG-16 [VGG] models pretrained on ImageNet. All images are resized to 299x299x3 and normalized to [-1, 1]. The models are trained with augmented data using random crop and horizontal flip. Both models achieve high area under the curve (AUC) scores on the Fundoscopy and Chest X-Ray datasets: ResNet-50 reaches 99.5% and 97.0%, while VGG-16 reaches 99.3% and 96.5%, respectively.

Adversarial attacks and detectors. Following [ma2020understanding], we choose MIM, BIM, PGD, and CW to attack our models. For the adversarial detectors, we use kernel density (KD) [kde], Bayesian uncertainty (BU) [kde], local intrinsic dimensionality (LID) [ma2018characterizing], Mahalanobis distance (MAHA) [MAHA], RBF-SVM [SaftyNet], and a deep neural network (DNN) [metzen2017detecting]. The parameters for KD, LID, BU, and MAHA are set per the original papers. We compute the scores of LID and MAHA for all activation layers and train a logistic regression classifier [MAHA, ma2018characterizing]. For KD, BU, and RBF-SVM, we extract features from the penultimate layer. For DNN, we train a classifier for each activation layer and ensemble these networks by summing up their logits.

Metrics. We choose three metrics to evaluate the effectiveness of the adversarial detectors and the proposed method: 1) true positive rate at 90% true negative rate (TPR@90), for which the detector drops 10% of the normal samples in order to reject more adversarial examples; 2) area under the curve (AUC) score; 3) adversarial accuracy (Adv. Acc), the success rate of the targeted adversarial attack.

Hyperparameters. We set the perturbation budget $\epsilon$ separately for the Fundoscopy and Chest X-Ray datasets. For each activation layer, we take the mean value of each channel separately and scale the weighting factor $\lambda^l$ according to the number of channels $C^l$. As a tiny perturbation in medical images causes a drastic increase in loss [ma2020understanding], we use a small step size $\alpha$ and a fixed number of iterations $N$.
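The TPR@90 metric can be computed as in the small helper below, a hedged sketch of our own assuming detector scores for which higher means "more likely adversarial"; the function names are ours.

import numpy as np
from sklearn.metrics import roc_auc_score

def auc_and_tpr_at_90(scores_clean, scores_adv):
    # detector scores: higher means "more likely adversarial"
    y_true = np.concatenate([np.zeros_like(scores_clean), np.ones_like(scores_adv)])
    y_score = np.concatenate([scores_clean, scores_adv])
    auc = roc_auc_score(y_true, y_score)
    # threshold that keeps 90% of the clean samples (i.e., 90% true negative rate)
    thresh = np.percentile(scores_clean, 90)
    tpr_at_90 = float((scores_adv > thresh).mean())
    return auc, tpr_at_90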

5.2 Bypassing adversarial detectors

Figure 4: The AUC scores shown as solid lines are the performances of the adversarial detectors trained on AEs generated by BIM, while the dotted lines show the detection performance under our proposed attack. The performance drop of each adversarial detector can be read from the pair of solid and dotted lines of the corresponding color. Our method causes a drastic performance drop (from solid to dotted line) for various adversarial detectors across different backbones and tasks.
Figure 5: The AUC scores shown as solid lines are the performances of the adversarial detectors for each activation layer (ResNet-50 on Fundoscopy). Our HFC compromises the detection accuracy (the gaps between the solid lines and the corresponding dotted lines) at each layer. We extend KD to all layers.
Figure 6: Visualization of the 2D t-SNE of clean features and of adversarial features generated by BIM and HFC. We extract features from ResNet-50 on Chest X-Ray. The features produced by our method are surrounded by the features of clean data.

We first train adversarial detectors on different DNN classifiers, datasets, and perturbation constraints, and then evaluate their performance under the proposed attack. As shown in Fig. 5, most of the detectors achieve high AUC scores in the deep layers (solid lines). When we use HFC to strengthen the attack, the t-SNE visualization in Fig. 6 shows that HFC moves the adversarial representations (orange) from outlier positions to locations (cyan) surrounded by normal features (purple), thereby bypassing the detectors at all layers. Furthermore, as reported in Table 2, the proposed HFC term strengthens all the adversarial attacks so that the corresponding detectors are bypassed. As discussed in Sec. 3.3, when the perturbation constraint weakens, the BIM features move further away from the normal feature distribution. Consequently, Fig. 4 shows that the detectors become better at detecting BIM examples (the solid lines increase). However, a bigger perturbation budget also gives our method more room to manipulate the representations: the attacker can move the features closer to the normal feature distribution, which compromises the detectors more drastically (the dotted lines decrease).

5.3 Comparison with other sneak attacks

We compare different sneak-attack methods that manipulate the deep representations to bypass the detectors: 1) generate AEs whose internal representations are similar to those of a randomly chosen guide image [feature_iclr]; 2) instead of random sampling, choose the guide image whose representation is closest to the input [feature_iclr]; 3) minimize the loss terms of KDE and cross-entropy at the same time [CW_ten]; 4) minimize the loss terms of LID and cross-entropy at the same time [CW_bpda] (the KDE and LID terms can be found in the supplementary material). As shown in Table 3, all attacks that mimic the normal feature distribution can bypass KD, MAHA, and SVM. Under a strict $\ell_\infty$ constraint, our method breaks the five detectors at the same time and greatly outperforms the other methods.

Fundoscopy KD MAHA LID SVM DNN Adv. Acc
Random 75.1 86.1 91.7 48.2 93.7 100.0
Closest 77.0 64.0 91.0 13.0 79.3 81.5
KDE 51.6 86.5 90.9 45.3 95.0 100.0
LID 87.6 85.4 93.4 61.2 96.2 95.9
HFC 74.2 6.4 78.3 28.6 60.0 99.5
Chest X-Ray KD MAHA LID SVM DNN Adv. Acc
Random 77.0 64.0 91.0 13.0 79.3 94.8
Closest 80.1 38.3 71.3 9.9 87.7 53.1
KDE 58.2 66.9 71.7 15.3 95.6 100.0
LID 84.0 66.6 77.1 28.6 96.6 70.9
HFC 70.8 0.0 53.6 16.7 32.6 95.8
Table 3: Comparison of various attacks that aim at manipulating representations. The AEs are generated on ResNet-50 under the same $\ell_\infty$ constraint. We choose AUC scores (%) as metrics for the adversarial detectors.
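For reference, the KDE-bypass baseline (item 3 in the list above) can be sketched as adding a kernel-density term to the classification loss. The bandwidth, weighting, and single-example feature handling below are illustrative assumptions of ours, not the exact setup of [CW_ten].

import torch

def kde_term(feat, train_feats, bandwidth=1.0):
    # differentiable kernel density of the penultimate-layer feature `feat`
    # w.r.t. the clean training features `train_feats` of the target class
    d2 = ((train_feats - feat) ** 2).sum(dim=1)
    return torch.exp(-d2 / bandwidth ** 2).mean()

def kde_bypass_loss(logits, target, feat, train_feats, c=1.0):
    # classification loss plus a term that rewards high kernel density
    ce = torch.nn.functional.cross_entropy(logits, target)
    return ce - c * torch.log(kde_term(feat, train_feats) + 1e-12)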

5.4 Hyperparameter analysis

We model the normal feature distributions of ResNet-50 on Fundoscopy with GMMs using different numbers of components $K$, and evaluate the corresponding attack performance. As in Fig. 4 (ResNet-50, Fundoscopy), only KD and LID keep AUC scores around 80%, so we report their performance in Table 4 (more experiments can be found in the supplementary material). The attack stably compromises the detectors for all settings, while a suitable choice of $K$ can slightly improve the performance.

Components (K) 1 2 4 8 16 32 64 128
KD 75.8 78.5 77.9 77.8 73.6 73.3 74.2 74.4
LID 81.7 82.4 83.0 84.2 83.2 80.7 78.3 78.7
Table 4: The AUC scores (%) under different numbers of GMM components.

5.5 Gray-box attack

We also consider a more difficult scenario of gray-box attack: the attacker who knows only the backbone tries to confuse the victim model and bypass its adversarial detectors at the same time. As illustrated in [inkawhich2019feature], different models trained on the same dataset have similar decision boundaries and class orientations in the feature space. We explore the potential of adversarial examples generated from a substitute model to bypass the victim model’s detectors. As in Table 5, our adversarial examples can bypass most of the detectors from the victim model with high adversarial accuracy. It is worth noting that, for VGG-16, the OOD-based detectors have limited ability to detect BIM examples transferred from a substitute model with the same architecture.

Fundoscopy KD MAHA LID SVM DNN Adv ACC
ResNet50-BIM 98.7 100.0 99.5 92.1 100.0 83.2
ResNet50-Ours 78.0 9.1 68.0 16.9 43.8 68.2
VGG16-BIM 72.3 89.6 81.5 48.1 95.2 88.6
VGG16-Ours 50.9 18.2 64.6 28.8 16.7 73.2
Chest X-Ray KD MAHA LID SVM DNN Adv ACC
ResNet50-BIM 92.6 95.4 79.5 24.1 99.5 85.8
ResNet50-Ours 89.4 85.9 71.5 11.4 79.5 76.4
VGG16-BIM 46.4 89.2 81.3 73.6 98.5 98.3
VGG16-Ours 34.6 8.6 49.4 41.3 69.5 88.9
Table 5: The performance of the proposed attack and BIM under the gray-box setting. All attacks are under the same $\ell_\infty$ constraint, and AUC scores (%) are used as metrics for the adversarial detectors.

5.6 Comparison with natural image attack

We also extend HFC to natural images (CIFAR-10) and evaluate the performance of the adversarial detectors and HFC under different perturbation constraints (the Adv. Acc falls below 95% for medical and natural images when the perturbation budget is smaller than the respective values used here). As shown in Table 6, the detection rate for BIM examples increases with a larger perturbation budget, i.e., the adversarial features are moved further away from the normal ones; meanwhile, HFC gains a greater ability to manipulate the deep representations, which weakens the detectors. On the other hand, as shown in Sec. 3.1, medical features are much more vulnerable than natural ones, which makes medical attacks easy to detect; at the same time, HFC enjoys more success in hiding the adversarial features within the normal feature distribution, even with a small perturbation.

CIFAR-10 KD MAHA LID SVM DNN Adv ACC
BIM (ε=8/256) 76.4 91.4 80.8 96.9 99.8 97.6
Ours (ε=8/256) 30.1 82.7 58.7 87.9 87.3 99.3
CIFAR-10 KD MAHA LID SVM DNN Adv ACC
BIM (ε=16/256) 87.5 99.0 90.3 99.0 100.0 99.7
Ours (ε=16/256) 24.0 76.4 54.1 85.8 80.7 99.7
Fundoscopy KD MAHA LID SVM DNN Adv ACC
BIM (ε=0.5/256) 97.1 99.0 98.8 98.6 100.0 99.7
Ours (ε=0.5/256) 81.2 7.6 81.7 23.3 58.3 95.5
Table 6: Comparison between attacking medical images and natural images. The adversarial attacks are deployed on ResNet-50, and AUC scores (%) are used as metrics for the adversarial detectors.

6 Conclusion

In this paper, we attempt to understand the intrinsic characteristics of medical adversarial examples. A key difference between medical and natural images lies in the vulnerability of their deep representations. Existing adversarial attacks distort the prediction by optimizing the feature representation in a consistent direction, which pushes the vulnerable medical image features to out-of-distribution positions. Although existing adversarial attacks on medical images are easy to detect, this advantage is not reliable once the attacker alters the attacking strategy and hides the representations; on the contrary, the higher vulnerability gives the attacker more power to manipulate the representations. We propose a novel hierarchical feature constraint that finds the closest direction in which to hide the adversarial feature within the normal feature distribution, represented by a Gaussian mixture model. Extensive experiments validate the effectiveness of the proposed attack in bypassing adversarial detectors under both white-box and gray-box settings.

References

7 The Consistency of Gradient Direction

Figure 7: Detailed diagram of a normal binary classification neural network, where the red arrows indicate the gradient in back propagation.

7.1 Binary classification

Theorem 1. Consider a binary disease diagnosis network and its representations from the penultimate layer; the directions of the corresponding gradients are fixed during each iteration of an adversarial attack (as a representative case, we use the adversarial attack to convert the prediction of the diagnosis network from 0 to 1).

Proof. As shown in Fig. 7, let $a_i$ denote the activation output of the $i$-th node (neuron) in the penultimate layer of the neural network; $w_{i,j}$ denotes the weight from the $i$-th node to the $j$-th node of the next layer; $Z_j$ is the output of the neural network in the $j$-th channel; and $P_j$ is the probability of the input belonging to the $j$-th class. For simplicity, we ignore the bias parameters and the activation functions in the middle layers, which does not affect the conclusion. Formally, the cross-entropy loss can be defined as:

$L = -\sum_{j} y_j \log P_j$   (10)

and the softmax function is:

$P_j = \frac{e^{Z_j}}{\sum_{k} e^{Z_k}}$   (11)

According to the chain rule, we can compute the partial derivative of $L$ with respect to $Z_j$:

$\frac{\partial L}{\partial Z_j} = P_j - y_j$   (12)

Hence,

$\frac{\partial L}{\partial a_i} = \sum_{j} \frac{\partial L}{\partial Z_j}\,\frac{\partial Z_j}{\partial a_i} = \sum_{j} (P_j - y_j)\, w_{i,j}$   (13)

As in Sec. 3.2, the goal of the adversarial attack is to invert the prediction from 0 to 1, i.e., $y_1 = 1$ and $y_0 = 0$, so we can derive the partial derivative on $a_i$:

$\frac{\partial L}{\partial a_i} = (P_1 - 1)\, w_{i,1} + P_0\, w_{i,0} = (P_1 - 1)(w_{i,1} - w_{i,0})$   (14)

where $(P_1 - 1) < 0$ and $(w_{i,1} - w_{i,0})$ is constant, so the partial derivative on $a_i$ keeps the same direction throughout the attack, as claimed in Sec. 3.2.

Figure 8: The cosine similarities between different iterations of BIM [bim] for the 'ALL' and 'Biggest_0' channels. Similar to the binary classification results (Sec. 3.3), the activation values are updated in similar directions across iterations. In particular, the similarities of the 'Biggest_0' channels are larger than those of 'ALL', since these channels are guided by negative gradients all the time.

7.2 Multi-class classification.

Similar to binary classification, some of the activation values are increased to outlier values, guided by similar gradients in each iteration. The difference is that the derivation shows that only the channels $i$ whose weight $w_{i,t}$ is greater than all the other weights $w_{i,j}$ ($j \neq t$) receive negative gradients all the time. Our experiments show that these values are increased to out-of-distribution positions in a similar direction iteratively.

Proof. The goal is to make the prediction classified to a particular erroneous class $t$. The cross-entropy loss with respect to the target label is:

$L = -\sum_{j} y_j \log P_j$   (15)

where $y_t = 1$. According to Eq. 12, we have:

$\frac{\partial L}{\partial a_i} = \sum_{j} (P_j - y_j)\, w_{i,j}$   (16)

For the targeted attack, we know that:

$y_t = 1, \quad y_j = 0 \;\; (j \neq t)$   (17)

Hence, Eq. 16 can be rewritten as:

$\frac{\partial L}{\partial a_i} = (P_t - 1)\, w_{i,t} + \sum_{j \neq t} P_j\, w_{i,j}$   (18)

where $(P_t - 1) < 0$ and the weights are constant. Focusing on a channel $i$ whose $w_{i,t}$ is greater than all the other weights $w_{i,j}$ ($j \neq t$), its gradient keeps negative all the time:

$\frac{\partial L}{\partial a_i} < (P_t - 1)\, w_{i,t} + \sum_{j \neq t} P_j\, w_{i,t} = \Big(P_t - 1 + \sum_{j \neq t} P_j\Big) w_{i,t} = 0$   (19)

Implication. Under the direction of the negative gradient, the activation value increases to an out-of-distribution position iteratively. We therefore conduct a 10-class classification experiment with a ResNet-50 [resnet] network on the CIFAR-10 dataset to verify this conclusion and to explore the degree of outlierness (we select 1,000 correctly classified images from the test set and set the target class to 0). As in Sec. 3.3, we plot the cosine similarities and the distributions of normalized standard deviations and maximum values for all channels ('ALL', 2,048 channels in total) and for the channels with the largest weight with respect to class 0 (159 channels in total, marked as 'Biggest_0'). As shown in Fig. 8 and Fig. 9, the activation values are updated in similar directions in each iteration. In particular, the cosine similarities of the 'Biggest_0' channels are as great as those in the binary classification task, which moves the features to out-of-distribution positions.

Figure 9: The distributions of the normalized standard deviations and maximum values for the 'ALL' and 'Biggest_0' channels. Similar to the binary classification results (Sec. 3.3), some of the channels (in both 'ALL' and 'Biggest_0') increase rapidly to out-of-distribution positions.

8 Visualization Code for Figure 3

The Python code for computing the distributions of standard deviations and maximum values shown in Fig. 3 (and Fig. 9).

import numpy as np

def normalize(A_clean, A_adv):
    # A_clean, A_adv: activation matrices, shape == [num_examples, channels]
    # Column-wise maxima computed from the clean activations only, shape == [channels]
    max_values = A_clean.transpose().max(-1)
    # Normalize both matrices by the clean per-channel maxima
    A_clean = A_clean / max_values
    A_adv = A_adv / max_values
    # Per-example statistics (each of shape == [num_examples]):
    # maximum value and standard deviation of every row vector
    max_clean = A_clean.max(-1)
    max_adv = A_adv.max(-1)
    std_clean = A_clean.std(-1)
    std_adv = A_adv.std(-1)
    return max_clean, max_adv, std_clean, std_adv
Listing 1: Python code.
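A hypothetical usage example of Listing 1 (our own, with synthetic activations) that plots the resulting distributions with matplotlib:

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
A_clean = np.abs(rng.normal(size=(500, 2048)))          # synthetic clean activations
A_adv = np.abs(rng.normal(loc=0.5, size=(500, 2048)))   # synthetic adversarial activations

max_clean, max_adv, std_clean, std_adv = normalize(A_clean, A_adv)
plt.hist(max_clean, bins=50, alpha=0.5, label='clean (max)')
plt.hist(max_adv, bins=50, alpha=0.5, label='adversarial (max)')
plt.legend()
plt.show()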

9 Adversarial Detection Methods

Here we give a brief introduction to the state-of-the-art adversarial attack detection methods, including KD [kde], BU [kde], LID [ma2018characterizing], MAHA [MAHA], SVM [SaftyNet], and DNN [metzen2017detecting]. We try our best to use the official code where released (KD and BU: https://github.com/rfeinman/detecting-adversarial-samples; LID and MAHA: https://github.com/pokaxpoka/deep_Mahalanobis_detector). In the following, we detail each method.

Kernel density (KD). KD is calculated with the training set in the feature space of the last hidden layer and is meant to detect points that lie far from the data manifold. Specifically, given a sample $x$ of class $t$ and a set of training samples $X_t$ from the same class, the KD of $x$ can be estimated by:

$KD(x) = \frac{1}{|X_t|}\sum_{x_i \in X_t} k\big(F(x_i), F(x)\big)$   (20)

where $F(\cdot)$ is the last-hidden-layer activation vector and $k(\cdot,\cdot)$ is the kernel function, often chosen as a Gaussian kernel.

Bayesian uncertainty estimates (BU). BU estimates are available in dropout neural networks (we add a dropout layer before the last layer of ResNet-50 [resnet]). They are meant to detect points that lie in low-confidence regions of the input space, and can detect adversarial samples in situations where density estimates cannot. Specifically, the authors sample $T$ times from the distribution of network configurations; for a sample $x$ with stochastic predictions $\{f_1(x),\dots,f_T(x)\}$, the BU can be computed as:

$BU(x) = \frac{1}{T}\sum_{i=1}^{T} f_i(x)^{\top} f_i(x) - \Big(\frac{1}{T}\sum_{i=1}^{T} f_i(x)\Big)^{\top}\Big(\frac{1}{T}\sum_{i=1}^{T} f_i(x)\Big)$   (21)

Local intrinsic dimensionality (LID). LID describes the rate of expansion in the number of data objects as the distance from a reference sample increases. Specifically, given a sample $x$, LID makes use of its distances to its first $k$ nearest neighbors:

$\widehat{LID}(x) = -\Big(\frac{1}{k}\sum_{i=1}^{k}\log\frac{r_i(F^l(x))}{r_k(F^l(x))}\Big)^{-1}$   (22)

where $F^l(x)$ is the activation from intermediate layer $l$ and $r_i(F^l(x))$ is the Euclidean distance between $F^l(x)$ and its $i$-th nearest neighbor. LID is computed on each layer of the DNN.

Mahalanobis distance-based confidence score (MAHA). MAHA utilizes a Mahalanobis distance-based metric instead of the Euclidean distance, and also processes the DNN features to detect adversarial samples. Specifically, the authors first compute the empirical class means and covariance of the activations of each layer over the training samples. Then, they compute the Mahalanobis distance score as:

$M(x) = \max_{c} \; -\big(F^l(x) - \hat{\mu}_c\big)^{\top} \hat{\Sigma}^{-1} \big(F^l(x) - \hat{\mu}_c\big)$   (23)

where $\hat{\mu}_c$ and $\hat{\Sigma}$ are the class mean and covariance estimated from the training samples and $F^l(x)$ is the activation of intermediate layer $l$ of the DNN. Like LID, MAHA is computed on each layer of the DNN.

SVM and DNN simply train a classifier (i.e., an SVM or a DNN) on the adversarial examples. We follow the literature [SaftyNet, metzen2017detecting] to implement them.
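To make the scores above concrete, a hedged NumPy sketch of KD (Eq. (20)) and the Mahalanobis confidence score (Eq. (23)) follows; the bandwidth and the tied covariance are assumptions consistent with the descriptions above, not the official implementations.

import numpy as np

def kernel_density(feat, train_feats, bandwidth=1.0):
    # Eq. (20): mean Gaussian-kernel similarity between F(x) and the
    # last-hidden-layer features of the training samples of the predicted class
    d2 = ((train_feats - feat) ** 2).sum(axis=1)
    return np.exp(-d2 / bandwidth ** 2).mean()

def mahalanobis_score(feat, class_means, shared_cov):
    # Eq. (23): maximum over classes of the negative Mahalanobis distance
    # between the layer feature and the class mean (tied covariance)
    inv_cov = np.linalg.inv(shared_cov)
    return max(-(feat - mu) @ inv_cov @ (feat - mu) for mu in class_means)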

Figure 10: Visualization of the clean and adversarial examples. The numbers above the pictures represent the $\ell_\infty$-norm of the perturbations.

10 More Hyperparameter Analysis

We report the performance of HFC under different numbers of GMM components on both Fundoscopy and Chest X-Ray. The results in Table 7 and Table 8 show that HFC stably bypasses all the detectors at the same time across the different numbers of components.

Components (K) 1 2 4 8 16 32 64 128
KD 75.8 78.5 77.9 77.8 73.6 73.3 74.2 74.4
MAHA 5.9 5.6 6.2 7.1 8.2 8.6 7.6 5.9
LID 81.7 82.4 83.0 84.2 83.2 80.7 78.3 78.7
SVM 52.3 45.7 40.4 34.0 31.4 34.5 35.5 40.3
DNN 61.4 60.7 62.6 64.3 64.4 63.5 63.4 64.9
BU 55.5 52.2 49.7 45.2 43.3 42.6 43.8 42.6
Table 7: The AUC scores (%) under different numbers of GMM components on Fundoscopy.
Components (K) 1 2 4 8 16 32 64 128
KD 70.3 71.0 66.5 64.9 65.6 63.7 61.7 62.8
MAHA 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
LID 48.3 52.3 55.5 58.8 49.8 49.0 48.5 49.4
SVM 20.6 18.7 17.8 14.4 12.3 14.3 13.6 13.5
DNN 31.8 32.7 38.6 41.9 40.2 39.5 38.9 37.9
BU 44.2 35.6 27.8 22.2 25.8 26.2 25.6 26.3
Table 8: The AUC scores (%) under different numbers of GMM components on Chest X-Ray.