Efficient detection of adversarial images

07/09/2020 ∙ by Darpan Kumar Yadav, et al. ∙ Indian Institute of Technology Delhi

In this paper, detection of deception attacks on deep neural network (DNN) based image classification in autonomous and cyber-physical systems is considered. Several studies have shown the vulnerability of DNNs to malicious deception attacks. In such attacks, some or all pixel values of an image are modified by an external attacker, so that the change is almost invisible to the human eye but significant enough for a DNN-based classifier to misclassify the image. This paper first proposes a novel pre-processing technique that facilitates the detection of such modified images irrespective of the DNN-based image classifier and the attacker model. The proposed pre-processing algorithm combines principal component analysis (PCA)-based decomposition of the image with random-perturbation-based detection to reduce computational complexity. Next, an adaptive version of this algorithm is proposed, where a random number of perturbations is chosen adaptively using a doubly-threshold policy, and the threshold values are learnt via stochastic approximation in order to minimize the expected number of perturbations subject to constraints on the false alarm and missed detection probabilities. Numerical experiments show that the proposed detection scheme outperforms a competing algorithm while achieving reasonably low computational complexity.




I Introduction

Recently there has been significant research interest in cyber-physical systems (CPS) that connect the cyber world and the physical world via the integration of sensing, control, communication, computation and learning. Popular CPS applications include networked monitoring of industry, disaster management, smart grids, intelligent transportation systems, networked surveillance, etc. One important component of future intelligent transportation systems is the autonomous vehicle. It is envisioned that future autonomous vehicles will be equipped with high-quality cameras, whose images will be classified by a DNN-based classifier for object detection and recognition, in order to facilitate an informed maneuvering decision by the controller or autopilot. Clearly, vehicular safety in such cases is highly sensitive to image classification; any mistake in object detection or classification can lead to accidents. In the context of surveillance or security systems, adversarial images can greatly endanger human and system security.

Over the last few years, several studies have suggested that the DNN-based image classifier is highly vulnerable to deception attack [akhtar2018threat, eykholt2017robust]. In fact, with the emergence of internet-of-things (IoT) providing an IP address to all gadgets including cameras, the autonomous vehicles will become more vulnerable to such attacks [chernikova2019self]. Hackers can easily tamper with the pixel values (see Figure 1) or the image data sent by the camera to the classifier. In a similar way, networked surveillance cameras will also become vulnerable to such malicious attacks.

In order to address the above challenge, we propose a new class of algorithms for adversarial image detection. Our first perturbation-based algorithm PERT performs PCA (Principal Component Analysis) on a clean image data set, and detects an adversary by perturbing a test image in the spectral domain along certain carefully chosen coordinates obtained from PCA. Next, its adaptive version APERT chooses the number of perturbations adaptively in order to minimize the expected number of perturbations subject to constraints on the false alarm and missed detection probabilities. Numerical results demonstrate the efficacy of these two algorithms.

Figure 1: Example of an adversarial image. The original image is classified as a cat. Addition of a carefully designed noise changes the same classifier’s output to ostrich, while the visual change in the image is not significant.

I-A Related work

The existing research on adversarial images can be divided into two categories: attack design and attack mitigation.

I-A1 Attack design

While there have been numerous attempts to tackle deception attacks in sensor-based remote estimation systems [chattopadhyay2019security, chattopadhyay2018secure, chattopadhyay2018attack], the problem of the design and mitigation of adversarial attacks on images to cause misclassification is relatively new. The first paper on adversarial image generation was reported in [szegedy2013intriguing]. Since then, there has been significant research on attack design in this setting. All these attack schemes can be divided into two categories:

  1. White box attack: Here the attacker knows the architecture, parameters, cost functions, etc., of the classifier; hence, it is easier to design such attacks. Examples of such attacks are given in [goodfellow2014explaining], [szegedy2013intriguing], [carlini2017towards], [madry2017towards], [papernot2016limitations], [kurakin2016adversarial].

  2. Black box attack: Here the adversary has access only to the output (e.g., logits or probabilities) of the classifier against a test input. Hence, the attacker has to probe the classifier with many test input images in order to estimate the sensitivity of the output with respect to the input. One such black box attack is the boundary attack of [brendel2017decision], described below.


On the other hand, depending on attack goals, the attack schemes can be divided into two categories:

  1. Targeted attack: Such attacks seek to misclassify a particular class as another pre-defined class. For example, a fruit classifier is made to classify all apple images as bananas. Such attacks are reported in [carlini2018audio] and [brendel2017decision].

  2. Reliability attack: Such attacks only seek to increase the classification error. Such attacks have been reported in [yuan2019adversarial], [brendel2017decision], [goodfellow2014explaining], [madry2017towards], [szegedy2013intriguing].

Some popular adversarial attacks are summarized below:

  • L-BFGS Attack [szegedy2013intriguing]: This white box attack tries to find a perturbation r to an image x such that the perturbed image x + r minimizes a cost function c·||r|| + J(x + r, l) of the classifier (where c is a cost parameter and l is the target label), while x + r remains within some small set to ensure a small perturbation. The constraint on r is relaxed via the Lagrange-multiplier-like weight c, which is found via line search.

  • Fast Gradient Sign Method (FGSM) [goodfellow2014explaining]: Here the perturbation is computed as η = ε · sign(∇_x J(θ, x, y)), where J(θ, x, y) is the classifier's cost function and ε is the magnitude of the perturbation. This perturbation can be computed via backpropagation.

  • Basic Iterative Method (BIM) [kurakin2016adversarial]: This is an iterative variant of FGSM.

  • C&W Attack [carlini2017towards]: This is similar to [szegedy2013intriguing], except that: (i) [carlini2017towards] uses a cost function that is different from the classifier's cost function J, and (ii) the optimal Lagrange multiplier is found via binary search.

  • Projected Gradient Descent (PGD) [madry2017towards]: This involves applying FGSM iteratively and clipping the iterate images to ensure that they remain close to the original image.

  • Jacobian Saliency Map Attack (JSMA) [papernot2016limitations]: It is a greedy attack algorithm which selects the most important pixels by calculating Jacobian based saliency map, and modifies those pixels iteratively.

  • Boundary Attack [brendel2017decision]: This is a black box attack which starts from an adversarial point and then performs a random walk along the decision boundary between the adversarial and the non-adversarial regions, such that the iterate image stays in the adversarial region but the distance between the iterate image and the target image is progressively minimized. This is done via rejection sampling using a suitable proposal distribution, in order to find progressively smaller adversarial perturbations.
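The one-step FGSM update above is easy to sketch. The toy logistic model, the value of ε, and the closed-form input gradient below are our own illustrative assumptions, not the models used in this paper:

```python
import numpy as np

# Toy differentiable classifier: logistic regression p(y=1 | x) = sigmoid(w.x).
w = np.array([1.0, -2.0, 0.5])
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, eps=0.1):
    """One-step FGSM: x_adv = x + eps * sign(grad_x J(x, y)), where J is the
    cross-entropy loss. For logistic regression, grad_x J = (p - y) * w."""
    p = sigmoid(w @ x)
    grad = (p - y) * w          # closed-form input gradient of the loss
    return x + eps * np.sign(grad)

x = np.array([0.2, 0.1, -0.3])
x_adv = fgsm(x, y=1)
print(sigmoid(w @ x), sigmoid(w @ x_adv))  # confidence in the true class drops
```

Each pixel moves by at most ε, which is what keeps the perturbation visually small while still increasing the loss.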

I-A2 Attack mitigation

There are two possible approaches for defense against adversarial attacks:

  1. Robustness based defense: These methods seek to classify adversarial images correctly, e.g., [xie2019feature], [papernot2016distillation].

  2. Detection based defense: These methods seek to just distinguish between adversarial and clean images, e.g., [feinman2017detecting], [song2017pixeldefend].

Here we describe some popular attack mitigation schemes. The authors of [xie2019feature] proposed feature denoising to improve the robustness of CNNs against adversarial images. They found that certain architectures promote robustness even though they are not sufficient for accuracy improvements; however, when combined with adversarial training, these designs could be made more robust. The authors of [feinman2017detecting] put forth a Bayesian view of detecting adversarial samples, claiming that the uncertainty associated with adversarial examples is higher than that of clean ones. They used a Bayesian neural network to distinguish between adversarial and clean images on the basis of uncertainty estimation.

The authors of [song2017pixeldefend] trained a PixelCNN network [salimans2017pixelcnn++] to differentiate between clean and adversarial examples. They rejected adversarial samples using p-value based ranking of PixelCNN. This scheme was able to detect several attacks such as Deepfool and BIM. The paper [wang2018detecting] observed that there is a significant difference between the percentage of label change due to perturbation in adversarial samples as compared to clean ones. They designed a statistical adversary detection algorithm called nMutant, inspired by mutation testing from the software engineering community.

The authors of [papernot2016distillation] designed a method called network distillation to defend DNNs against adversarial examples. The original purpose of network distillation was to reduce the size of DNNs by transferring knowledge from a bigger network to a smaller one [ba2014deep], [hinton2015distilling]. The authors discovered that using a high-temperature softmax reduces the model's sensitivity to small perturbations. This defense was tested on the MNIST and CIFAR-10 data sets, and it was observed that network distillation substantially reduces the success rate of the JSMA attack [papernot2016limitations] on both data sets. However, a number of new attacks proposed since then defeat defensive distillation (e.g., [carlini2016defensive]). The paper [goodfellow2014explaining] tried training an MNIST classifier with adversarial examples (the adversarial retraining approach). A comprehensive analysis of this method on the ImageNet data set found it to be effective against one-step attacks (e.g., FGSM), but ineffective against iterative attacks (e.g., BIM [kurakin2016adversarial]). After evaluating network distillation with adversarially trained networks on MNIST and ImageNet, [tramer2017ensemble] found it to be robust against white box attacks but not against black box ones.

I-B Our Contributions

In this paper, we make the following contributions:

  1. We propose a novel detection algorithm PERT for adversarial attack detection. The algorithm performs PCA on a clean image data set to obtain a set of orthonormal bases. The projection of a test image along the least significant principal components is randomly perturbed to detect proximity to a decision boundary, which is used for detection. This combination of PCA and image perturbation in the spectral domain, motivated by the empirical findings in [hendrycks2016early], is new to the literature. (The paper [liang2017deep] uses PCA but throws away the least significant components, thereby removing useful information along those components, possibly leading to a high false alarm rate. The paper [carlini2017adversarial] showed that their attack can break a simple PCA-based defense, while our algorithm performs well against the attack of [carlini2017adversarial], as seen later in the numerical results.)

  2. PERT has low computational complexity; PCA is performed only once off-line.

  3. We also propose an adaptive version of PERT called APERT. The APERT algorithm declares an image to be adversarial by checking whether a specific sequential probability ratio crosses an upper or a lower threshold. The problem of minimizing the expected number of perturbations per test image, subject to constraints on the false alarm and missed detection probabilities, is relaxed via a pair of Lagrange multipliers. The relaxed problem is solved via simultaneous perturbation stochastic approximation (SPSA; see [spall1992multivariate]) to obtain the optimal threshold values, and the optimal Lagrange multipliers are learnt via two-timescale stochastic approximation [borkar2009stochastic] in order to meet the constraints. The use of stochastic approximation and SPSA to optimize the threshold values is new to the signal processing literature, to the best of our knowledge. Also, the APERT algorithm has a sound theoretical motivation, which is rare among papers on adversarial image detection.

  4. PERT and APERT are agnostic to attacker and classifier models, which makes them attractive to many practical applications.

  5. Numerical results demonstrate a high attack detection probability and a low false alarm probability for PERT and APERT compared to a competing algorithm, and reasonably low computational complexity for APERT.

I-C Organization

The rest of the paper is organized as follows. The PERT algorithm is described in Section II. The APERT algorithm is described in Section III. Numerical exploration of the proposed algorithm is summarized in Section IV, followed by the conclusion in Section V.

II Static perturbation based algorithm

In this section, we propose an adversarial image detection algorithm based on random perturbation of an image in the spectral domain; the algorithm is called PERT. This algorithm is motivated by the two key observations:

  1. The authors of [hendrycks2016early] found that the injected adversarial noise mainly resides in the least significant principal components. Intuitively this makes sense, since injecting noise into the most significant principal components would lead to detection by the human eye. We applied PCA on the CIFAR-10 training data set to learn its principal components, sorted by decreasing eigenvalues; the ones with higher eigenvalues are the most significant principal components. The CIFAR-10 data set consists of 3072-dimensional images, so applying PCA on the entire data set yields 3072 principal components. The cumulative explained variance ratio as a function of the number of components (in decreasing order of the eigenvalues) is shown in Figure 2; this figure shows that most of the variance is concentrated along the first few principal components. Hence, the least significant components do not provide much additional information, and adversarial perturbation of these components should not change the image significantly.

  2. Several attackers intend to push the image close to the decision boundary to fool a classifier [brendel2017decision]. Thus it is possible to detect an adversarial image if we can check whether it is close to a decision boundary. Hence, we propose a new scheme for exploring the neighborhood of a given image in the spectral domain.

Figure 2: Cumulative explained variance versus components of PCA.
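The curve in Figure 2 can be reproduced with scikit-learn (the library used in Section IV). The random data below merely stands in for the vectorized CIFAR-10 images, so the exact curve will differ, but the construction is the same:

```python
import numpy as np
from sklearn.decomposition import PCA

# Stand-in for the vectorized CIFAR-10 training images (3072 = 32 x 32 x 3);
# random data is used here only to illustrate the construction.
rng = np.random.default_rng(0)
images = rng.normal(size=(400, 3072))

pca = PCA()        # keep all principal components
pca.fit(images)

# Cumulative explained variance ratio, components sorted by decreasing eigenvalue.
cumvar = np.cumsum(pca.explained_variance_ratio_)
print(cumvar[:3])  # share of variance captured by the first few components
```

On real CIFAR-10 data this cumulative curve rises steeply at first, which is exactly the concentration of variance in the leading components that PERT exploits.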

Hence, our algorithm performs PCA on a training data set and finds the principal components. When a new test image (potentially adversarial) arrives, the algorithm projects that image along these principal components, randomly perturbs the projection along a given number of least significant components, and then obtains another image from this perturbed spectrum. If the classifier yields the same label for this new image and the original test image, then it is concluded that the original image is most likely not near a decision boundary and hence not adversarial; otherwise, an alarm is raised for an adversarial attack. In fact, multiple perturbed images can be generated by this process, and if the label of the original test image differs from that of at least one perturbed image, an alarm is raised. The intuition behind this is that if an image is adversarial, it will lie close to a decision boundary, and perturbation should push it into another region, thus changing the label generated by the classifier.

Training Phase (PCA):
Input: Training image set
Output: Principal components of the data set
  1. Vectorize the pixel values of all images.
  2. Find the sample covariance matrix of these vectors.
  3. Perform singular value decomposition (SVD) of the sample covariance matrix.
  4. Obtain the eigenvectors, arranged from most significant to least significant components.

Test Phase (Perturbation based attack detection):
Initialization: Boolean result = False
Input: Input image x (vectorized), no. of perturbed image samples to generate S, no. of coefficients to perturb K
Output: True, if input is adversarial
    False, if input is not adversarial
  1. Get prediction c for input image x through the classifier.
  2. Compute the projections (dot products) of x onto the principal components and vectorize these values as v.
  3. for i = 1 to S do
  4.     Add realizations of K i.i.d. zero-mean Gaussian random variables to the K least significant coefficients of v. This converts v to v'.
  5.     Get the inverse transform of v' to get a new image x'.
  6.     Get prediction c' for image x' through the classifier.
  7.     if c' is not equal to c then
  8.         result = True; break
  9.     end if
 10. end for
 11. return result
Algorithm 1 The PERT algorithm
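A minimal sketch of Algorithm 1, with a toy logistic-regression classifier standing in for the DNN and Gaussian vectors standing in for images. The names `n_perturb` and `n_coeff` (playing the roles of S and K) and the noise scale `sigma` are our illustrative choices:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Toy stand-ins for the image data set and the DNN classifier.
X = rng.normal(size=(400, 64))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
clf = LogisticRegression().fit(X, y)

pca = PCA().fit(X)      # off-line training phase: PCA on clean data

def pert_detect(x, n_perturb=10, n_coeff=16, sigma=0.5):
    """Flag x as adversarial if its label flips under random perturbation of
    the least significant spectral coefficients (Algorithm 1, test phase)."""
    label = clf.predict(x.reshape(1, -1))[0]
    v = pca.transform(x.reshape(1, -1))[0]                  # spectral projection
    for _ in range(n_perturb):
        v_pert = v.copy()
        v_pert[-n_coeff:] += sigma * rng.normal(size=n_coeff)  # least significant
        x_pert = pca.inverse_transform(v_pert.reshape(1, -1))
        if clf.predict(x_pert)[0] != label:
            return True        # decision boundary crossed: raise an alarm
    return False

print(pert_detect(X[0]))
```

Note that the expensive PCA fit happens once, while the per-image work is only dot products, noise addition, and classifier calls, matching the complexity claim below.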

Discussion: PERT has several advantages over most algorithms in the literature:

  1. PERT is basically a pre-processing algorithm for the test image, and hence it is agnostic to the attacker and classifier models.

  2. The on-line part of PERT involves computing simple dot products and perturbations, which have very low complexity. PCA can be performed once off-line and reused thereafter.

However, one should remember that PERT perturbs the least significant components randomly, and hence there is no guarantee that a perturbation will be in the right direction to ensure a crossover of the decision boundary. This issue can be resolved by developing more sophisticated perturbation methods using direction search, specifically when some knowledge of the decision boundaries is available to the detector. Another option is to create many perturbations of a test image, at the expense of more computational complexity. In the next section, we formulate a sequential version of PERT, which minimizes the mean number of perturbations per test image under a budget on the missed detection and false alarm probabilities.

III Adaptive perturbation based algorithm

In Section II, the PERT algorithm used up to a constant number of perturbations of the test image in the spectral domain. However, the major drawback of PERT is that it might be wasteful in terms of computation. If an adversarial image is very close to the decision boundary, then a small number of perturbations might be sufficient for detection. On the other hand, if the adversarial image is far away from a decision boundary, then more perturbations will be required to cross the decision boundary with high probability. Also, the PERT algorithm only checks for a decision boundary crossover (hard decision), while many DNNs yield a belief probability vector for the class of a test image (soft output); this soft output of DNNs can be used to improve detector performance and reduce its complexity.

In this section, we propose an adaptive version of PERT called APERT. The APERT algorithm sequentially perturbs the test image in spectral domain. A stopping rule is used by the pre-processing unit to decide when to stop perturbing a test image and declare a decision (adversarial or non-adversarial); this stopping rule is a two-threshold rule motivated by the sequential probability ratio test (SPRT [poor2013introduction]), on top of the decision boundary crossover checking. The threshold values are optimized using the theory of stochastic approximation [borkar2009stochastic] and SPSA [spall1992multivariate].

III-A Mathematical formulation

Let N denote the random number of perturbations used in any adaptive technique based on random perturbation, and let the probabilities of false alarm and missed detection of any randomly chosen test image under this technique be denoted by P_F and P_M respectively. We seek to solve the following constrained problem:

(CP): min E(N) subject to P_F ≤ α, P_M ≤ β,

where α and β are two constraint values. However, (CP) can be relaxed by using two Lagrange multipliers λ_F ≥ 0 and λ_M ≥ 0 to obtain the following unconstrained problem:

(UP): min E(N) + λ_F · P_F + λ_M · P_M.

Let the optimal decision rule for (UP) under (λ_F, λ_M) be denoted by δ(λ_F, λ_M). It is well known that, if there exist λ_F* and λ_M* such that the constraints of (CP) are met with equality under δ(λ_F*, λ_M*), then δ(λ_F*, λ_M*) is an optimal solution for (CP) as well.

Finding δ(λ_F, λ_M) for a pair (λ_F, λ_M) is very challenging. Hence, we focus on the class of SPRT-type algorithms instead. Let us assume that the DNN-based classifier generates a probability value against an input image; this probability is the belief of the classifier that the image under consideration is adversarial. Now, suppose that we sequentially perturb an image in the spectral domain as in PERT, and feed these perturbed images one by one to the DNN, which acts as our classifier. Let the DNN return the category-wise probability distribution of the image in the form of a vector. We use these vectors to determine a quantity p_k which indicates the likelihood (not necessarily a probability) of the k-th perturbed image being adversarial. Motivated by SPRT, the proposed APERT algorithm checks whether a ratio Λ_k built from p_1, …, p_k crosses an upper threshold B or a lower threshold A after the k-th perturbation; an adversarial image is declared if Λ_k ≥ B, a non-adversarial image is declared if Λ_k ≤ A, and the algorithm continues perturbing the image if A < Λ_k < B. In case k exceeds a pre-determined maximum number of perturbations N_max without any threshold crossing, the image is declared to be non-adversarial.

Initialization: τ = 1, Boolean result = False
Input: Threshold pair (A, B), number of coefficients to perturb K, maximum number of perturbations N_max, input image x (vectorized), orthonormal basis vectors (typically obtained from PCA), switch for category change detection I_c ∈ {0, 1}
Output: True, if input image is adversarial
    False, if input image is not adversarial
  1. Get the category-wise probability classification vector q_0 for the input image x through the classifier. Compute the projections (dot products) and vectorize these values as v.
  2. while τ ≤ N_max do
  3.     Add realizations of K i.i.d. zero-mean Gaussian random variables to v. This converts v to v'. Get the inverse transform of v' to get a new image x_τ. Get the category-wise probability classification vector q_τ for image x_τ through the classifier. Get p_τ by normalizing the norm of q_τ − q_0, and update the ratio Λ_τ.
  4.     if the predicted category changed in the perturbed image and I_c = 1 then
  5.         result = True; break
  6.     else if Λ_τ ≤ A then
  7.         result = False; break
  8.     else if Λ_τ ≥ B then
  9.         result = True; break
 10.     else
 11.         τ = τ + 1; continue
 12.     end if
 13. end while
 14. return result
Algorithm 2 The SRT(A, B, K, N_max, x, I_c) algorithm

Clearly, for given (λ_F, λ_M), the algorithm needs to compute the optimal threshold values A and B to minimize the cost in (UP). Also, λ_F and λ_M need to be computed to meet the constraints in (CP) with equality. APERT uses two-timescale stochastic approximation and SPSA for updating the Lagrange multipliers and the threshold values in the training phase, learns the optimal parameter values, and uses these parameter values in the test phase.

III-B The SRT algorithm for image classification

Here we describe an SPRT-based algorithm called sequential ratio test, or SRT, for classifying an image x. The algorithm takes the thresholds (A, B), the PCA eigenvectors, and a binary variable I_c as input, and classifies x as adversarial or non-adversarial. This algorithm is used as one important component of the APERT algorithm described later. SRT blends ideas from PERT and the standard SPRT algorithm. However, as seen in the pseudocode of SRT, we use a quantity p_τ in the threshold testing, where p_τ cannot be interpreted as a probability. Instead, p_τ is the normalized value of the norm of the difference between the outputs q_0 and q_τ of the DNN against the input x and its τ-th perturbation x_τ. The binary variable I_c is used as a switch; if I_c = 1 and the belief probability vectors q_0 and q_τ lead to two different predicted categories, then SRT directly declares x to be adversarial. It has been observed numerically that this results in a better adversarial image detection probability, and hence any test image in the proposed APERT scheme later is classified via SRT with I_c = 1.
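The double-threshold logic of SRT can be sketched as follows. The particular normalization of p_τ and the way the ratio accumulates are assumptions on our part (the paper only specifies that p_τ is a normalized norm difference of classifier outputs), so this illustrates the stopping rule rather than the exact statistic:

```python
import numpy as np

def srt(softmax_fn, perturb_fn, x, A=0.5, B=2.0, n_max=25, use_switch=True):
    """Double-threshold (SPRT-style) test: accumulate a ratio built from the
    normalized distance between the classifier's outputs on the original and
    on each perturbed image; stop early at A (clean) or B (adversarial)."""
    q0 = softmax_fn(x)
    ratio = 1.0
    for _ in range(n_max):
        q = softmax_fn(perturb_fn(x))
        if use_switch and np.argmax(q) != np.argmax(q0):
            return True                       # predicted category changed
        p = np.linalg.norm(q - q0, 1) / 2.0   # in [0, 1]; assumed statistic
        ratio *= (p + 1e-6) / (1 - p + 1e-6)  # SPRT-style likelihood ratio
        if ratio <= A:
            return False                      # declare clean
        if ratio >= B:
            return True                       # declare adversarial
    return False                              # budget exhausted: declare clean

# A classifier whose output does not move under perturbation ("clean" image):
steady = lambda x: np.array([0.9, 0.1])
print(srt(steady, lambda x: x, np.zeros(4)))      # stops after one perturbation

# A classifier whose predicted category flips under perturbation:
flip = lambda x: np.array([0.1, 0.9]) if x.sum() else np.array([0.9, 0.1])
print(srt(flip, lambda x: x + 1.0, np.zeros(4)))  # category-change switch fires
```

The first example shows why SRT can be cheap: a stable output drives the ratio below A immediately, so the loop stops after a single perturbation instead of exhausting n_max.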

III-C The APERT algorithm

III-C1 The training phases

The APERT algorithm, designed for (CP), consists of two training phases and a testing phase. The first training phase simply runs the PCA algorithm. The second training phase runs stochastic approximation iterations to find the thresholds (A, B) and the multipliers (λ_F, λ_M) so that the false alarm and missed detection probability constraints are satisfied with equality.

The second training phase of APERT requires three non-negative sequences {a(n)}, {b(n)} and {c(n)} chosen such that: (i) Σ_n a(n) = Σ_n b(n) = ∞, (ii) Σ_n a(n)² < ∞ and Σ_n b(n)² < ∞, (iii) lim_{n→∞} c(n) = 0, (iv) Σ_n (a(n)/c(n))² < ∞, (v) lim_{n→∞} b(n)/a(n) = 0. The first two conditions are standard requirements for stochastic approximation. The third and fourth conditions are required for convergence of SPSA, and the fifth condition maintains the necessary timescale separation explained later.
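One concrete choice satisfying (i)–(v) is the standard SPSA-style polynomial decay below; the exponents are our assumption, as the paper does not report its exact sequences. Condition (v) holds since b(n)/a(n) = n^(−0.298) → 0, and (iv) holds since (a(n)/c(n))² = n^(−1.002) is summable:

```python
import numpy as np

# Hypothetical step-size sequences: exponents chosen so that
#   sum a(n) = sum b(n) = inf, sum a(n)^2 < inf, sum b(n)^2 < inf,
#   c(n) -> 0, sum (a(n)/c(n))^2 < inf, and b(n)/a(n) -> 0.
a = lambda n: n ** -0.602   # faster timescale (threshold / SPSA updates)
b = lambda n: n ** -0.9     # slower timescale (Lagrange multipliers)
c = lambda n: n ** -0.101   # SPSA perturbation size

n = np.arange(1.0, 10.0**6)
print(b(n[-1]) / a(n[-1]))         # timescale separation ratio, tending to 0
print(np.sum((a(n) / c(n)) ** 2))  # partial sum of a (slowly) convergent series
```

The 0.602 and 0.101 exponents are the classical SPSA gain recommendations; the 0.9 exponent for b(n) is simply any choice strictly between them and 1 that preserves Σb(n) = ∞.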

The APERT algorithm also requires p_adv, the percentage of adversarial images among all image samples used in training phase II. It also maintains two iterates n_c(n) and n_a(n) to represent the number of clean and adversarial images encountered up to the n-th training image; i.e., n_c(n) + n_a(n) = n.


The threshold update steps of APERT correspond to SPSA, which is basically a stochastic gradient descent scheme with a noisy estimate of the gradient, used for minimizing the objective of (UP) over (A, B) for the current Lagrange multiplier iterates. SPSA allows us to compute a noisy gradient of the objective of (UP) by randomly and simultaneously perturbing (A, B) in two opposite directions and obtaining the noisy estimate of the gradient from the difference in the objective function evaluated at these two perturbed values; this allows us to avoid coordinate-wise perturbation in gradient estimation. It has to be noted that the cost to be optimized by SPSA has to be obtained from SRT. The A and B iterates are projected onto non-overlapping compact intervals [A_min, A_max] and [B_min, B_max] (with A_max < B_min) to ensure boundedness.

The Lagrange multiplier update steps are used to find λ_F and λ_M via stochastic approximation on a slower timescale. It has to be noted that, since lim_{n→∞} b(n)/a(n) = 0, we have a two-timescale stochastic approximation [borkar2009stochastic] where the Lagrange multipliers are updated on a slower timescale and the threshold values are updated via SPSA on a faster timescale. The faster timescale iterates view the slower timescale iterates as quasi-static, while the slower timescale iterates view the faster timescale iterates as almost equilibrated; it is as if the slower timescale iterates vary in an outer loop and the faster timescale iterates vary in an inner loop. It has to be noted that, though standard two-timescale stochastic approximation theory guarantees convergence under suitable conditions [borkar2009stochastic], here we cannot provide any convergence guarantee for the iterates due to the lack of established statistical properties of the images. It is also noted that λ_F and λ_M are updated at different time instants; this corresponds to asynchronous stochastic approximation [borkar2009stochastic]. The λ_F and λ_M iterates are projected onto [0, ∞) to ensure non-negativity. Intuitively, if a false alarm is observed, the cost of false alarm λ_F is increased. Similarly, if a missed detection is observed, then the cost of missed detection λ_M is increased, else it is decreased. Ideally, the goal is to reach a pair (λ_F*, λ_M*) so that the constraints in (CP) are met with equality, though we do not have any formal convergence proof.
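The faster-timescale SPSA update can be sketched on a synthetic cost. The quadratic below stands in for the SRT-derived cost of (UP) with the Lagrange multipliers held fixed (quasi-static, as the two-timescale argument assumes); the gains, projection intervals, and minimizer location are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for the SRT-derived cost at thresholds (A, B);
# its minimum over the boxes below is at A = 0.4, B = 2.0.
def cost(A, B):
    return (A - 0.4) ** 2 + (B - 2.0) ** 2

A, B = 0.2, 3.0                            # initial thresholds
box_A, box_B = (0.05, 0.95), (1.05, 4.0)   # non-overlapping compact intervals

for n in range(1, 5001):
    a_n = 0.5 * n ** -0.602                # faster-timescale gain
    c_n = 0.5 * n ** -0.101                # SPSA perturbation size
    # Simultaneous random perturbation: one +/-1 direction per coordinate.
    dA, dB = rng.choice([-1.0, 1.0], size=2)
    g = cost(A + c_n * dA, B + c_n * dB) - cost(A - c_n * dA, B - c_n * dB)
    A -= a_n * g / (2 * c_n * dA)          # noisy gradient step in A
    B -= a_n * g / (2 * c_n * dB)          # noisy gradient step in B
    A = np.clip(A, *box_A)                 # project back onto the intervals
    B = np.clip(B, *box_B)

print(round(A, 1), round(B, 1))            # prints: 0.4 2.0
```

Note that only two cost evaluations are needed per iteration regardless of the number of parameters, which is the appeal of SPSA over coordinate-wise finite differences.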

III-C2 Testing phase

The testing phase just uses SRT with I_c = 1 for any test image. Since I_c = 1, a test image bypasses the threshold testing and is declared adversarial in case the random perturbation results in a predicted category change of the test image. It has been numerically observed (see Section IV) that this results in a small increase in the false alarm rate but a large increase in the adversarial image detection rate compared to I_c = 0. However, one has the liberty to avoid this and only use the threshold test in SRT by setting I_c = 0. Alternatively, one can set a slightly smaller false alarm constraint in APERT with I_c = 1 in order to compensate for the increase in false alarm.

Training Phase I (PCA):
Input: Training image set
Output: Principal components of the data set
  1. Vectorize the pixel values of all images.
  2. Find the sample covariance matrix of these vectors.
  3. Perform singular value decomposition (SVD) of the sample covariance matrix.
  4. Obtain the eigenvectors, arranged from most significant to least significant components.

Training Phase II (Determining A and B):
Initialization: initial iterates A(0) and B(0), non-negative λ_F(0) and λ_M(0)
Input: p_adv, training image set with each image in vectorized form, number of training images T, no. of coefficients to perturb K, constraint values α and β, sequences {a(n)}, {b(n)} and {c(n)}, maximum number of perturbations N_max, range of accepted values of the thresholds [A_min, A_max] and [B_min, B_max] such that A_max < B_min
Output: Final values of A and B (and also λ_F and λ_M, which are not used in the test phase)
  1. for n = 1 to T do
  2.     Randomly generate perturbation directions Δ_A(n) and Δ_B(n), each ±1 with equal probability. Compute the perturbed threshold pairs (A(n) + c(n)Δ_A(n), B(n) + c(n)Δ_B(n)) and (A(n) − c(n)Δ_A(n), B(n) − c(n)Δ_B(n)).
  3.     Randomly pick the n-th training image (adversarial with probability p_adv) and update the counts n_a(n) or n_c(n) accordingly.
  4.     Run SRT at the two perturbed threshold pairs; record the indicators of missed detection and false alarm, and compute the corresponding costs of (UP).
  5.     Update A(n) and B(n) via the SPSA gradient estimate with step size a(n), and project them onto [A_min, A_max] and [B_min, B_max] respectively.
  6.     Again determine the missed detection and false alarm indicators from SRT, and update λ_F(n) and λ_M(n) with step size b(n), projecting them onto [0, ∞).
  7. end for
  8. return (A, B)

Testing Phase:
Initialization: Boolean result = False
Input: Input image (vectorized), maximum number of perturbed image samples to generate N_max, no. of coefficients to perturb K, lower threshold A, upper threshold B
Output: True, if input image is adversarial
    False, if input image is not adversarial
return SRT(A, B, K, N_max, x, I_c = 1)
Algorithm 3 The APERT algorithm

IV Experiments

IV-A Performance of PERT

We evaluated our proposed algorithm on the CIFAR-10 data set and the classifier of [madry2017towards], implemented in a challenge to explore the adversarial robustness of neural networks (see [MadryLabCifar10]). Codes for our numerical experiments are available in [PCA_detection] and [SPRT_detection]. We used the Foolbox library [rauber2017foolbox] for generating adversarial images. PCA was performed using the Scikit-learn [scikit-learn] library in Python; this library allows us to customize the computational complexity and accuracy of PCA. Each image in CIFAR-10 has 32 × 32 pixels, where each pixel has three channels: red, green and blue. Hence, PCA provides 3072 orthonormal basis vectors. CIFAR-10 has 60,000 images, a subset of which was used for PCA-based training while the rest were used for evaluating the performance of the algorithm.

Table I shows the variation of the detection probability (percentage of detected adversarial images) for adversarial images generated using various attacks, for a fixed number of perturbed components K and various values of the maximum possible number of samples S (number of perturbations for a given image). Due to the huge computational requirement of generating adversarial images via black box attacks, we have considered only four white box attacks. It is evident that the attack detection probability (percentage) increases with S; this is intuitive, since a larger S results in a higher probability of a decision boundary crossover if an adversarial image is perturbed. The second column of Table I denotes the percentage of clean images that were declared adversarial by our algorithm, i.e., it contains the false alarm probabilities, which also increase with S. However, we observe that our pre-processing algorithm achieves a very low false alarm probability and a high attack detection probability under these four popular white box attacks. This conclusion is further reinforced in Table II, which shows the variation in detection performance with varying K, for two values of S. It is to be noted that our detection algorithm outperforms the detection algorithm of [wang2018detecting] on the attacks we tested, while having low computation. The last column of Table II suggests that there is an optimal value of K, since perturbation along more principal components may increase the decision boundary crossover probability but at the same time can modify the information along some of the most significant components as well.

No. of        Percentage Detection (%)
Samples (T)   Clean*   FGSM    L-BFGS   PGD     CW(L2)
 5            1.2      50.02   89.16    55.03    96.47
10            1.5      63.53   92.50    65.08    98.23
15            1.7      69.41   93.33    67.45    99.41
20            1.9      73.53   95.03    71.01    99.41
25            1.9      75.29   95.03    75.14   100.00
*Clean images that are detected as adversarial
Table I: Detection and false alarm performance of the PERT algorithm for various values of T.
No. of             Percentage Detection (%)
Coefficients (C)   Clean*   FGSM    L-BFGS   PGD     CW(L2)
No. of Samples (T): 10
0500               1.20     58.23   90.83    57.40   95.90
1000               1.50     69.41   93.33    60.95   95.45
1500               2.10     64.11   91.67    61.53   95.00
No. of Samples (T): 20
0500               1.20     68.23   93.33    68.05   95.90
1000               1.90     74.11   94.16    70.41   95.90
1500               2.50     71.18   95.00    71.00   95.00
*Clean images that are detected as adversarial
Table II: Detection and false alarm performance of the PERT algorithm for various values of C.

IV-B Performance of APERT

For APERT, we initialize the threshold values and choose suitable step sizes for the stochastic approximation iterations. The Foolbox library was used to craft adversarial examples. The classification neural network is taken from [MadryLabCifar10].

The l2-norm is used to obtain the values, since it was observed to outperform the alternative norm considered. In the training process, a mix of clean and adversarial training images was used.

Though there is no theoretical convergence guarantee for APERT, we have numerically observed convergence of the threshold and parameter iterates.
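The SPSA gradient estimate driving such stochastic approximation updates can be sketched as follows. This is an illustrative form only; the actual objective (expected number of perturbations subject to false alarm and missed detection constraints) and the step-size sequences used in APERT are not reproduced here.

```python
import numpy as np

def spsa_gradient(f, theta, c, rng):
    """Two-measurement SPSA estimate of the gradient of f at theta:
    perturb all coordinates simultaneously along a random Rademacher
    direction and difference the two resulting function values."""
    delta = rng.choice([-1.0, 1.0], size=theta.shape)
    return (f(theta + c * delta) - f(theta - c * delta)) / (2.0 * c * delta)
```

The appeal of SPSA is that only two (possibly noisy) evaluations of the objective are needed per iteration regardless of dimension, which suits objectives that can only be observed through simulation, as with the detection probabilities here.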

IV-B1 Computational complexity of PERT and APERT

We note that a major source of computational complexity in PERT and APERT is perturbing an image and passing it through a classifier. In Table III and Table IV, we numerically compare the mean number of perturbations required by PERT and APERT under Q = 1 and Q = 0, respectively. The classification neural network was taken from [MadryLabCifar10].

Table III and Table IV show that the APERT algorithm requires far fewer perturbations than PERT for almost identical detection performance, across various attack algorithms and across the test images that result in false alarm, adversarial image detection, missed detection and (correct) clean image detection. It is also noted that, for the images resulting in missed detection and clean image detection, PERT has to exhaust all perturbation options before stopping. As a result, the mean number of perturbations in APERT becomes significantly smaller than in PERT; see Table V. The key reason behind the smaller number of perturbations in APERT is that APERT uses a doubly-threshold stopping rule motivated by the popular SPRT algorithm in detection theory. It is also observed that APERT with Q = 1 in the testing phase has slightly lower computational complexity than APERT with Q = 0, since APERT with Q = 1 has the additional flexibility of stopping the perturbation if there is a change in predicted category.
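The doubly-threshold stopping idea can be sketched as follows. This is a toy illustration: the flip-fraction statistic, the threshold values, and the minimum-sample guard are our own simplifications for exposition, not the exact APERT test statistic or learnt thresholds.

```python
def doubly_threshold_stop(image, classify, perturb, T=25, lower=0.2, upper=0.8):
    """SPRT-style sequential test: after each perturbation, update a running
    statistic (here, the fraction of label flips so far) and stop as soon as
    it crosses either threshold; otherwise exhaust the budget of T samples."""
    base_label = classify(image)
    flips = 0
    for t in range(1, T + 1):
        flips += int(classify(perturb(image)) != base_label)
        stat = flips / t
        if stat >= upper:
            return True, t    # declare adversarial, stop early
        if stat <= lower and t >= 3:
            return False, t   # declare clean, stop early
    return False, T           # budget exhausted without an early decision
```

Because a strongly adversarial image crosses the upper threshold after very few perturbations, and a clean image drops below the lower threshold quickly, the expected number of perturbations is far below the worst-case budget T, mirroring the PERT-vs-APERT gap in Table V.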

Attack   Mean Number of Samples Generated
Type     False Alarm       Detected Adversarial   Missed Detection   Detected Clean
         PERT     APERT    PERT     APERT         PERT     APERT     PERT     APERT
CW(L2)   9.76     1.17     1.19     1.02          25       4.09      25       2.37
LBFGS    11.86    1.42     1.87     1.07          25       4.97      25       3.41
FGSM     1.68     1.08     4.97     1.07          25       5.08      25       2.97
PGD      14.12    1.15     4.87     1.41          25       5.47      25       3.03

Attack   Corresponding Detection Performance (%)
Type     False Alarm       Detected Adversarial   Missed Detection   Detected Clean
         Probability       Probability            Probability        Probability
         PERT     APERT    PERT     APERT         PERT     APERT     PERT     APERT
CW(L2)   4.56     5.12     97.10    98.10         2.90     1.90      95.44    94.88
LBFGS    4.85     5.24     96.3     94.35         3.7      5.65      95.15    94.76
FGSM     5.41     5.88     79.31    87.64         20.69    12.36     94.59    94.12
PGD      4.01     4.51     83.99    84.45         16.01    15.55     95.99    95.49

Table III: Mean number of samples generated by the PERT and APERT algorithms. The parameters of PERT and APERT were set so as to bring their false alarm performances closest to each other, with Q = 1 in the testing phase of APERT.

Attack   Mean Number of Samples Generated
Type     False Alarm       Detected Adversarial   Missed Detection   Detected Clean
         PERT     APERT    PERT     APERT         PERT     APERT     PERT     APERT
CW(L2)   9.76     1.0      1.19     1.0           25       6.11      25       2.96
LBFGS    11.86    1.02     1.87     1.0           25       6.07      25       3.28
FGSM     1.68     1.05     4.97     1.02          25       6.11      25       3.10
PGD      14.12    1.07     4.87     1.21          25       6.037     25       3.05

Attack   Corresponding Detection Performance (%)
Type     False Alarm       Detected Adversarial   Missed Detection   Detected Clean
         Probability       Probability            Probability        Probability
         PERT     APERT    PERT     APERT         PERT     APERT     PERT     APERT
CW(L2)   4.56     5.45     97.10    84.09         2.90     15.91     95.44    94.55
LBFGS    4.85     5.97     96.3     80.57         3.7      19.43     95.15    94.03
FGSM     5.41     6.47     79.31    65.29         20.69    34.71     94.59    93.53
PGD      4.01     5.12     83.99    65.49         16.01    34.51     95.99    94.88

Table IV: Mean number of samples generated by the PERT and APERT algorithms. Here APERT's parameters were set so as to bring its false alarm performance closest to the corresponding PERT false alarm performance in Table III, with Q = 0 in the testing phase of APERT.
Attack   Mean Number of Samples Generated
Type     PERT     APERT (Q = 1)   APERT (Q = 0)
CW(L2)   13.3     1.85            1.92
LBFGS    13.5     2.19            2.19
FGSM     15.5     2.14            2.56
PGD      14.48    2.20            2.57

Table V: Mean number of samples generated for T = 25 and C = 1000 by the APERT and PERT algorithms, over a data set with adversarial and clean images.
Figure 3: ROC plot comparison of PERT, APERT with Q = 0, APERT with Q = 1 and GPRBD detection algorithms for various attack schemes. Top left: CW attack. Top right: LBFGS attack. Bottom left: FGSM attack. Bottom right: PGD attack.

We also implemented the Gaussian process regression based detector (GPRBD) of [lee2019adversarial] (not sequential in nature), which uses the neural network classifier of [MadryLabCifar10], tested it against our adversarial examples, and compared its runtime against that of PERT and APERT equipped with the same classifier. These experiments were run under the same Colab runtime environment, in a single session. The runtime specifications are: CPU model name: Intel(R) Xeon(R) CPU @ 2.30GHz; socket(s): 1; core(s) per socket: 1; thread(s) per core: 2; L3 cache: 46080K; CPU MHz: 2300.000; RAM available: 12.4 GB; disk space available: 71 GB. Table VI shows that APERT has a significantly smaller runtime than PERT, as expected, and a slightly larger runtime than GPRBD. Also, APERT with Q = 1 has a smaller runtime than APERT with Q = 0.

Attack   Average Time Taken per Image (seconds)
Type     GPRBD    APERT (Q = 1)   APERT (Q = 0)   PERT
CW(L2)   0.2829   0.6074          0.6398          4.1257
LBFGS    0.2560   0.6982          0.7059          4.7895
FGSM     0.2728   0.6372          0.7801          4.6421
PGD      0.2694   0.6475          0.7789          4.4216

Table VI: Runtime of our implementation of the Gaussian process regression based detector (GPRBD) vs. our APERT and PERT algorithms, for T = 25 and C = 1000.

IV-B2 Performance of PERT and APERT

In Figure 3, we compare the ROC (receiver operating characteristic) plots of the PERT, APERT and GPRBD algorithms, all implemented with the same neural network classifier of [MadryLabCifar10]. The Gaussian model used for GPRBD was implemented using [gpy2014], with the kernel parameters set as follows: input dimensions = 10, variance = 1 and length scale = 0.01, as in [lee2019adversarial]. The Gaussian model parameters were optimized using L-BFGS with a maximum of 1000 iterations. Figure 3 shows that, for the same false alarm probability, APERT achieves a higher or almost equal attack detection rate compared to PERT. Also, APERT and PERT significantly outperform GPRBD. Hence, APERT yields a good compromise between ROC performance and computational complexity. It is also observed that APERT with Q = 1 always has a better ROC curve than APERT with Q = 0 in the testing phase.
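The ROC curves in Figure 3 are obtained in the standard way, by sweeping a decision threshold over per-image detector scores; a generic sketch (the function name and score convention are ours, assuming higher scores indicate "adversarial"):

```python
import numpy as np

def roc_points(scores_clean, scores_adv):
    """Sweep a decision threshold over all observed detector scores; each
    threshold yields one (false alarm rate, detection rate) ROC point."""
    scores_clean = np.asarray(scores_clean)
    scores_adv = np.asarray(scores_adv)
    thresholds = np.sort(np.concatenate([scores_clean, scores_adv]))
    points = []
    for th in thresholds:
        false_alarm = np.mean(scores_clean >= th)  # clean flagged adversarial
        detection = np.mean(scores_adv >= th)      # adversarial correctly flagged
        points.append((false_alarm, detection))
    return np.array(points)
```

A detector whose curve lies above another's achieves a higher detection rate at every false alarm level, which is the sense in which APERT and PERT dominate GPRBD in Figure 3.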

Table VII and Table VIII show that the false alarm probability and the attack detection probability of APERT increase with C for a fixed T, for both Q = 1 and Q = 0. As C increases, more least significant components are perturbed in the spectral domain, resulting in a higher probability of decision boundary crossover.

Attack   Percentage Detection (%)
Type     C = 500               C = 1000              C = 1500
         False    Detection    False    Detection    False    Detection
         Alarm    Probability  Alarm    Probability  Alarm    Probability
No. of Samples (T): 10
CW(L2)   5.0      96.81        8.18     98.18        11.36    99.09
LBFGS    4.16     94.16        10.0     93.33        12.5     97.5
FGSM     2.94     62.35        4.70     80.0         5.88     94.70
PGD      2.36     63.9         5.91     77.54        7.10     88.16
No. of Samples (T): 20
CW(L2)   1.01     95.46        5.45     96.81        6.36     98.18
LBFGS    2.5      90.83        9.16     94.16        15.0     96.66
FGSM     2.94     57.64        4.70     79.41        7.64     90.58
PGD      1.77     60.9         4.14     79.88        9.46     88.757

Table VII: Variation in performance of APERT with the value of C, using the l2-norm, for T = 10 and T = 20, with Q = 1 in the testing phase.

Attack   Percentage Detection (%)
Type     C = 500               C = 1000              C = 1500
         False    Detection    False    Detection    False    Detection
         Alarm    Probability  Alarm    Probability  Alarm    Probability
No. of Samples (T): 10
CW(L2)   3.18     89.09        8.18     91.81        12.72    93.63
LBFGS    6.66     81.66        15.0     85.83        19.16    96.66
FGSM     2.35     50.0         7.05     67.65        8.23     85.88
PGD      2.36     39.05        6.5      68.63        9.46     82.84
No. of Samples (T): 20
CW(L2)   5.45     84.09        8.18     91.81        11.81    93.18
LBFGS    7.5      85.0         12.5     93.33        17.5     94.166
FGSM     3.529    44.70        6.47     65.29        7.05     83.53
PGD      2.95     40.82        7.10     67.45        10.0     85.21

Table VIII: Variation in performance of APERT with the value of C, using the l2-norm, for T = 10 and T = 20, with Q = 0 in the testing phase.

V Conclusion

In this paper, we have proposed two novel pre-processing schemes for the detection of adversarial images, via a combination of PCA-based spectral decomposition, random perturbation, SPSA and two-timescale stochastic approximation. The proposed schemes have reasonably low computational complexity and are agnostic to the attacker and classifier models. Numerical results on detection and false alarm probabilities demonstrate the efficacy of the proposed algorithms, despite their low computational complexity. We will extend this work to the detection of black box attacks in future research.