Adversarially Robust One-class Novelty Detection

08/25/2021
by   Shao-Yuan Lo, et al.
Johns Hopkins University

One-class novelty detectors are trained with examples of a particular class and are tasked with identifying whether a query example belongs to the same known class. Most recent advances adopt a deep auto-encoder style architecture to compute novelty scores for detecting novel class data. Deep networks have been shown to be vulnerable to adversarial attacks, yet little attention has been devoted to the adversarial robustness of deep novelty detectors. In this paper, we first show that existing novelty detectors are susceptible to adversarial examples. We further demonstrate that commonly-used defense approaches for classification tasks have limited effectiveness in one-class novelty detection, so a defense specifically designed for novelty detection is needed. To this end, we propose a defense strategy that manipulates the latent space of novelty detectors to improve robustness against adversarial examples. The proposed method, referred to as Principal Latent Space (PLS), learns incrementally-trained cascade principal components in the latent space to robustify novelty detectors. PLS can purify the latent space against adversarial examples and constrain the latent space to exclusively model the known class distribution. We conduct extensive experiments on multiple attacks, datasets and novelty detectors, showing that PLS consistently enhances the adversarial robustness of novelty detection models.


1 Introduction

One-class novelty detection refers to the problem of determining whether a test sample is normal (known class) or anomalous (novel class). In real-world applications, novel data are difficult to collect since they are often rare or unsafe. Hence, one-class novelty detection considers training data from only a single known class. Most recent advances in one-class novelty detection are based on deep Auto-Encoder (AE) style architectures, such as the Denoising Auto-Encoder (DAE) [salehi2020arae, vincent2008extracting], Variational Auto-Encoder (VAE) [kingma2013auto], Adversarial Auto-Encoder (AAE) [makhzani2015adversarial, pidhorskyi2018generative], Generative Adversarial Network (GAN) [goodfellow2014generative, perera2019ocgan, sabokrou2018adversarially, zhangp], etc. Given an AE that learns the distribution of the known class, normal data are expected to be reconstructed accurately, while anomalous data are not. The reconstruction error of the AE is then used as the score of a test example for novelty detection. Although deep novelty detection methods achieve impressive performance, their robustness against adversarial attacks [goodfellow2015explaining, Szegedy2014Intriguing] remains largely unexplored.

Figure 1: Overview of the proposed adversarially robust one-class novelty detection idea (PLS). The vanilla Auto-Encoder (AE) and AE+PLS are trained with the known class defined as digit 8. AE+PLS reconstructs every adversarial example into the known class (digit 8) and thus produces preferred reconstruction errors for novelty detection, even under attacks.

Adversarial examples pose serious security threats to deep networks as they can fool them with carefully crafted perturbations. Over the past few years, many adversarial attack and defense approaches have been proposed for tasks such as image classification [guo2017countering, raff2019barrage, Xie_2019_CVPR, xu2017feature], video recognition [lo2020defending, wei2019sparse], optical flow estimation [ranjan2019attacking] and open-set recognition [shao2020open]. However, adversarial attacks and defenses have not been thoroughly investigated in the context of one-class novelty detection. We first show that existing novelty detectors are vulnerable to adversarial attacks. Subsequently, we demonstrate that many state-of-the-art defenses [hendrycks2019selfsupervised, shi2021online, xie2020smooth, Xie_2019_CVPR] are sub-optimal for protecting novelty detectors against adversarial examples. This motivates us to design an effective defense strategy specifically for one-class novelty detection.

To this end, we propose to leverage task-specific knowledge to protect novelty detectors. A novelty detector only needs to retain information about normal data; producing poor reconstructions for anomalous data is in fact favorable to the novelty detection problem. This can be achieved by constraining the latent space so that the features stay close to a prior distribution [perera2019ocgan, park2020learning]. It has also been shown that adversarial perturbations can be removed in the feature space [Xie_2019_CVPR]. Therefore, one can manipulate the latent space of a novelty detector to a large extent to rid it of the feature corruption introduced by adversaries, while maintaining performance on clean input data. This property is unique to the novelty detection task: most deep learning applications (e.g., image classification) require a model to retain sophisticated semantic information, and a large manipulation of the latent space may limit the model's capability, resulting in performance degradation.

In this paper, we propose a defense strategy, referred to as Principal Latent Space (PLS), to defend novelty detectors against adversarial examples. Specifically, PLS learns incrementally-trained [ross2008incremental] cascade principal components in the latent space. It comprises a cascade principal component analysis (PCA), consisting of one PCA operating on the vector dimension (i.e., channel) of the latent space [van2017neural] and another PCA operating on the spatial dimension. We name these two PCAs Vector-PCA and Spatial-PCA, respectively. First, Vector-PCA uses a learned principal latent vector to represent the latent space as a single-channel Vector-PCA map. Since the principal latent vector is a pre-trained component that is not affected by adversarial perturbations, most adversaries are removed at this step, and the remaining adversaries are enclosed within the small Vector-PCA space. Subsequently, Spatial-PCA uses learned principal Vector-PCA maps to represent the Vector-PCA space as the Spatial-PCA space and expel the remaining adversaries. Finally, the corresponding cascade inverse PCA transforms the Spatial-PCA space back to the original dimensionality, resulting in the principal latent space.

With PLS, the decoder can compute reconstruction errors that serve as reliable novelty scores, even under adversarial attacks (see Fig. 1). Additionally, we incorporate adversarial training (AT) [madry2018towards] with PLS to further strengthen it. In contrast to typical defenses, which often sacrifice performance on clean data [tsipras2018robustness, xie2020adversarial], the proposed defense strategy does not hurt clean performance but rather improves it. The PLS module can be attached to any AE-style architecture (VAE, GAN, etc.), so it is applicable to a wide variety of existing novelty detection approaches, such as [kingma2013auto, makhzani2015adversarial, sabokrou2018adversarially, pidhorskyi2018generative, salehi2020arae]. We extensively evaluate PLS on eight adversarial attacks, three datasets and six different novelty detectors. We further compare PLS with commonly-used defense methods and show that it consistently enhances the adversarial robustness of novelty detectors by significant margins. To the best of our knowledge, this is one of the first adversarially robust novelty detection methods.

2 Related work

One-class novelty detection.

One-class novelty detection is of great interest to the computer vision community. Earlier algorithms mainly rely on Support Vector Machine (SVM) formulations [scholkopf1999support, tax2004support]. With the advent of deep learning, AE-based approaches have come to dominate this area and achieve state-of-the-art performance [gong2019memorizing, park2020learning, perera2019ocgan, pidhorskyi2018generative, sabokrou2018adversarially, sakurada2014anomaly, salehi2020arae, xia2015learning, zhou2017anomaly]. ALOCC [sabokrou2018adversarially] treats a DAE [vincent2008extracting] as a generator and appends a discriminator to train the entire network with the generative adversarial framework [goodfellow2014generative]. GPND [pidhorskyi2018generative] is based on AAE [makhzani2015adversarial]; it applies one discriminator to the latent space and another to the output. OCGAN [perera2019ocgan] includes two discriminators and a classifier to train a DAE with the generative adversarial framework. ARAE [salehi2020arae] crafts adversarial examples from the latent space to adversarially train a DAE. Different from our work, ARAE's adversarial examples aim to improve performance, and its adversarial robustness is not thoroughly evaluated (see Supplementary).

Adversarial attacks. Szegedy et al. [Szegedy2014Intriguing] showed that carefully crafted perturbations can fool deep networks. Goodfellow et al. [goodfellow2015explaining] introduced the Fast Gradient Sign Method (FGSM), which leverages the sign of gradients to produce adversarial examples. Projected Gradient Descent (PGD) [madry2018towards] extends FGSM from a single gradient step to an iterative version. MI-FGSM [dong2018boosting] generates more transferable adversarial attacks via a momentum mechanism. MultAdv [lo2020multav] produces adversarial examples via a multiplicative operation instead of the additive operation. Physically realizable attacks, which can be implemented in physical scenarios, have also been developed [Sharif16AdvML, zajac2019adversarial]. For example, Adversarial Framing (AF) [zajac2019adversarial] adds perturbations on the border of an image, while the remaining pixels are unchanged.

Adversarial defenses. Early efforts aimed to detect adversarial examples [hendrycks2016early, jere2020principal, li2017adversarial]. However, it is well known that detection is inherently weaker than defense in terms of resisting adversarial attacks. Although several defense approaches based on image transformation were proposed afterward [guo2017countering, xu2017feature, bhagoji2017dimensionality], they fail to defend against white-box attacks [carlini2017adversarial, obfuscated]. Recently, Adversarial Training (AT) has been considered one of the most effective defenses, especially in the white-box setting. Madry et al. [madry2018towards] formulated AT in a min-max optimization framework (PGD-AT), which has been widely used as a benchmark. Xie et al. [Xie_2019_CVPR] include feature denoising (FD) blocks in networks to remove adversarial perturbations in the feature domain. SAT [xie2020smooth] uses smooth approximations of the ReLU activation to enhance PGD-AT. Hendrycks et al. [hendrycks2019selfsupervised] added an auxiliary rotation prediction task [gidaris2018unsupervised] to improve PGD-AT (RotNet-AT). SOAP [shi2021online] uses self-supervised signals to purify adversarial examples during inference.

To the best of our knowledge, APAE [goodge2020robustness] might be the only existing defense designed for anomaly detection. It uses approximate projection and feature weighting to reduce adversarial effects. However, its robustness is not fully tested, and only anomalous data are perturbed in its evaluation (see Supplementary). Instead, we provide a generic framework for evaluating the adversarial robustness of novelty detectors and of our proposed defense method.

3 Attacking novelty detection models

We consider several popular adversarial attacks [dong2018boosting, goodfellow2015explaining, lo2020multav, madry2018towards, papernot2017practical, zajac2019adversarial] and modify their loss objectives to suit the novelty detection problem setup. Here, we take PGD [madry2018towards] as an example to illustrate our attack formulation; the other gradient-based attacks can be formulated similarly (see Supplementary).

Consider an AE-based target model with an encoder $\mathcal{E}$ and a decoder $\mathcal{D}$, and an input image $x$ with the ground-truth label $y \in \{0, 1\}$, where "0" denotes the known class and "1" denotes the novel classes. We generate the adversarial example $x^{adv}$ as follows:

$x^{adv}_{t+1} = \Pi_{\epsilon}\big( x^{adv}_{t} + (-1)^{y}\,\alpha \cdot \mathrm{sign}(\nabla_{x}\mathcal{L}(x^{adv}_{t})) \big)$,   (1)

where $\alpha$ denotes a step size, $t \in \{0, \dots, T-1\}$ indexes the $T$ attack iterations, and $x^{adv}_{0} = x$. $\Pi_{\epsilon}$ projects each element into an $\ell_{\infty}$-norm bound with perturbation size $\epsilon$ such that $\|x^{adv} - x\|_{\infty} \leq \epsilon$. $\mathcal{L}$ corresponds to the mean square error (MSE) loss defined as follows:

$\mathcal{L}(x^{adv}) = \big\| \mathcal{D}(\mathcal{E}(x^{adv})) - x^{adv} \big\|_{2}^{2}$.   (2)

Given a test example, if it belongs to the known class, we maximize its reconstruction error (i.e., novelty score) by gradient ascent; while if it belongs to novel classes, we minimize its reconstruction error by gradient descent.
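The following is a minimal PyTorch-style sketch of this reconstruction-error PGD attack. The encoder/decoder interfaces, the label convention (0 for the known class, 1 for novel classes) and the hyperparameter names are illustrative assumptions, not the authors' released implementation.

```python
import torch

def pgd_attack_ae(encoder, decoder, x, y, eps=0.1, alpha=0.02, steps=10):
    """Reconstruction-error PGD for an auto-encoder novelty detector (sketch).

    x: input batch in [0, 1]; y: 0 for the known class, 1 for novel classes.
    Known-class inputs ascend the MSE loss (enlarge the novelty score);
    novel-class inputs descend it (shrink the novelty score).
    """
    x_adv = x.clone().detach()
    # +1 for gradient ascent (known class), -1 for gradient descent (novel classes)
    direction = (1.0 - 2.0 * y.float()).view(-1, *([1] * (x.dim() - 1)))

    for _ in range(steps):
        x_adv.requires_grad_(True)
        recon = decoder(encoder(x_adv))
        loss = ((recon - x_adv) ** 2).flatten(1).mean(dim=1)       # per-sample MSE, Eq. (2)
        grad = torch.autograd.grad(loss.sum(), x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + direction * alpha * grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # l_inf projection
            x_adv = x_adv.clamp(0.0, 1.0)
        x_adv = x_adv.detach()
    return x_adv
```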

Present novelty detection methods are vulnerable to this attack (see Sec. 5.2); that is, normal data would be misclassified into novel classes, and anomalous data would be misclassified into the known class. Moreover, this attacking strategy is much stronger than the attacks introduced by [salehi2020arae], which perturbs only normal data, and by [goodge2020robustness], which perturbs only anomalous data (see Supplementary).

4 Adversarially robust novelty detection

The proposed defense strategy exploits the task-specific knowledge of one-class novelty detection. Specifically, we leverage the fact that a novelty detector's latent space can be manipulated to a large extent as long as it retains the known class information. This property is especially useful for removing adversarial perturbations in the latent space. Therefore, we propose to train a novelty detector by manipulating its latent space such that it improves adversarial robustness while maintaining performance on clean data. Note that these characteristics are specific to the novelty detection problem. The majority of visual recognition problems, such as image classification, require a model to retain information about multiple categories. Hence, a large manipulation of the latent space may hinder the model capability and thus degrade performance.

In the following subsections, we first briefly review PCA to define the notations used in this paper, then discuss the proposed PLS in detail.

4.1 Preliminary

PCA computes the principal components of a collection of data and uses them to conduct a change of basis on the data through a linear transformation. Consider a data matrix $X \in \mathbb{R}^{n \times d}$, its mean $\mu \in \mathbb{R}^{d}$ and its covariance $\Sigma$. $\Sigma$ can be written as $\Sigma = U \Lambda U^{\top}$ via Singular Value Decomposition (SVD), where $U$ is an orthogonal matrix containing the principal components of $X$. Here, we define a mapping $\phi_{k}$ which computes the mean vector and the first $k$ principal components of the given $X$:

$\phi_{k}(X) = (\mu, U_{k})$,   (3)

where $U_{k}$ keeps only the first $k$ columns of $U$. Now we define the forward and the inverse PCA transformations as a pair of mappings $f$, $f^{-1}$; $f$ performs the forward PCA:

$Y = f(X) = (X - \mu)\, U_{k}$,   (4)

and $f^{-1}$ performs the inverse PCA:

$\hat{X} = f^{-1}(Y) = Y U_{k}^{\top} + \mu$,   (5)

where $Y \in \mathbb{R}^{n \times k}$. Finally, we can write the PCA reconstruction of $X$ as $\hat{X} = f^{-1}(f(X))$.
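For reference, a small NumPy sketch of these mappings, using the notation introduced above to repair the extracted equations (the function names are our own):

```python
import numpy as np

def fit_pca(X, k):
    """phi_k: return the mean and the first k principal components of X (n x d)."""
    mu = X.mean(axis=0)
    # principal components = right singular vectors of the centered data
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    U_k = Vt[:k].T                      # d x k
    return mu, U_k

def pca_forward(X, mu, U_k):
    """f: project onto the first k principal components."""
    return (X - mu) @ U_k               # n x k

def pca_inverse(Y, mu, U_k):
    """f^{-1}: map back to the original d-dimensional space."""
    return Y @ U_k.T + mu               # n x d

# PCA reconstruction of X:
# X_hat = pca_inverse(pca_forward(X, mu, U_k), mu, U_k)
```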

4.2 Principal Latent Space (PLS)

The proposed PLS contains two major components: (1) Vector-PCA and (2) Spatial-PCA. Vector-PCA performs PCA on the vector dimension (i.e., channel) of the latent space, and Spatial-PCA performs PCA on the spatial dimension. Let $\mathcal{E}$ be the encoder and $\mathcal{D}$ be the decoder of a novelty detection model. Denote an adversarial image as $x^{adv}$; its latent space is $z = \mathcal{E}(x^{adv}) \in \mathbb{R}^{m \times c}$, where $m$ is the spatial dimensionality obtained by the product of height and width, and $c$ is the vector dimensionality (i.e., the number of channels). Under adversarial attacks, $z$ would be corrupted by adversarial perturbations such that the decoder cannot compute reconstruction errors favorable to novelty detection. We define the proposed PLS as a transformation $P: z \mapsto z_{pls}$, which removes adversaries from $z$, where $z_{pls}$ is referred to as the principal latent space.

$P$ is implemented by our incrementally-trained cascade PCA. In the beginning, a sigmoid function replaces the encoder's last activation function to bound the values of $z$ between 0 and 1. The subsequent procedure is described below.

First, Vector-PCA computes the mean latent vector $\mu_v \in \mathbb{R}^{c}$ and the principal latent vector $u_v \in \mathbb{R}^{c}$ of $z$:

$(\mu_v, u_v) = \phi_{1}(z)$,   (6)

where for Vector-PCA we always set $k$ to 1, so $u_v$ is the first principal latent vector of $z$. Second, Vector-PCA transforms $z$ to its Vector-PCA space $z_v \in \mathbb{R}^{m \times 1}$:

$z_v = f_v(z) = (z - \mu_v)\, u_v$.   (7)

Next, Spatial-PCA computes the mean Vector-PCA map (we use the word "map" to indicate that these components lie on the spatial dimension) $\mu_s \in \mathbb{R}^{m}$ and the principal Vector-PCA maps $U_s \in \mathbb{R}^{m \times k}$ of $z_v$:

$(\mu_s, U_s) = \phi_{k}(z_v^{\top})$,   (8)

where $k$ is a hyperparameter. Then, Spatial-PCA transforms $z_v$ to its Spatial-PCA space $z_s \in \mathbb{R}^{1 \times k}$:

$z_s = f_s(z_v^{\top}) = (z_v^{\top} - \mu_s)\, U_s$.   (9)

Finally, the inverse Spatial-PCA and the inverse Vector-PCA transform $z_s$ back to the original dimensionality:

$\hat{z}_v = f_s^{-1}(z_s) = z_s U_s^{\top} + \mu_s$,   (10)
$z_{pls} = f_v^{-1}(\hat{z}_v) = \hat{z}_v^{\top} u_v^{\top} + \mu_v$,   (11)

where $\hat{z}_v$ is the Spatial-PCA reconstruction of $z_v$, and $z_{pls}$ is the resulting principal latent space. Fig. 2 gives an overview of this procedure. The decoder then uses $z_{pls}$ to reconstruct the input adversarial example as $\hat{x}^{adv} = \mathcal{D}(z_{pls})$ for computing the novelty score.

Figure 2: Overview of the proposed PLS. $f_v$: forward Vector-PCA, $f_s$: forward Spatial-PCA, $f_s^{-1}$: inverse Spatial-PCA, $f_v^{-1}$: inverse Vector-PCA; $\phi_{1}$ and $\phi_{k}$ are the mappings for computing the principal components.
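Building on the notation above, the following is a minimal inference-time sketch of the cascade transform $P$. The tensor shapes and function names are our assumptions for illustration; the components are assumed to be pre-trained.

```python
import torch

def pls_transform(z, mu_v, u_v, mu_s, U_s):
    """Inference-time PLS sketch (our notation; components are pre-trained).

    z:    latent space of one example, shape (m, c)  (m = H*W spatial positions)
    mu_v: (c,)  mean latent vector        u_v: (c,)   principal latent vector
    mu_s: (m,)  mean Vector-PCA map       U_s: (m, k) principal Vector-PCA maps
    """
    # Vector-PCA: each latent vector -> a scaling factor of u_v (single-channel map)
    z_v = (z - mu_v) @ u_v                           # (m,)
    # Spatial-PCA: represent the map by k principal Vector-PCA maps
    z_s = (z_v - mu_s) @ U_s                         # (k,)
    # Inverse Spatial-PCA, then inverse Vector-PCA -> principal latent space
    z_v_hat = z_s @ U_s.T + mu_s                     # (m,)
    z_pls = z_v_hat[:, None] * u_v[None, :] + mu_v   # (m, c)
    return z_pls
```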

4.3 Incremental training

The principal latent components are incrementally trained along with the network weights by an exponential moving average (EMA) during training, so we call this process incrementally-trained cascade PCA. Specifically, at training iteration $i$, these components are updated with the following equations:

$\mu_v^{(i)} = (1-\eta_v)\,\mu_v^{(i-1)} + \eta_v\,\hat{\mu}_v^{(i)}, \quad u_v^{(i)} = (1-\eta_v)\,u_v^{(i-1)} + \eta_v\,\hat{u}_v^{(i)}$,   (12)
$\mu_s^{(i)} = (1-\eta_s)\,\mu_s^{(i-1)} + \eta_s\,\hat{\mu}_s^{(i)}, \quad U_s^{(i)} = (1-\eta_s)\,U_s^{(i-1)} + \eta_s\,\hat{U}_s^{(i)}$,   (13)

where $(\hat{\mu}_v^{(i)}, \hat{u}_v^{(i)}) = \phi_{1}(z^{(i)})$ and $(\hat{\mu}_s^{(i)}, \hat{U}_s^{(i)}) = \phi_{k}(z_v^{(i)\top})$ are the components estimated from the current mini-batch, and $\eta_v$ and $\eta_s$ are the EMA learning rates.

Consider model weights trained by mini-batch gradient descent with a batch size $B$: the latent space of a mini-batch is shaped to $\mathbb{R}^{Bm \times c}$ for Vector-PCA, the resulting Vector-PCA space is reshaped to $\mathbb{R}^{B \times m}$ after the Vector-PCA $f_v$, and is reshaped back after the inverse Spatial-PCA $f_s^{-1}$. Hence, in a mini-batch, both $\phi_{1}$ and $\phi_{k}$ have $B$ times more data points to acquire better principal latent components at each training iteration. At iteration $i$, $f_v$ performs with the components $(\mu_v^{(i-1)}, u_v^{(i-1)})$, and $f_s$ performs with the components $(\mu_s^{(i-1)}, U_s^{(i-1)})$. When the training process ends, the well-trained components are denoted as $(\mu_v^{*}, u_v^{*})$ and $(\mu_s^{*}, U_s^{*})$. During inference, $f_v$ performs with $(\mu_v^{*}, u_v^{*})$, and $f_s$ performs with $(\mu_s^{*}, U_s^{*})$, while $\phi_{1}$ and $\phi_{k}$ do not operate (see Fig. 2). The entire process is differentiable during inference and thus does not cause obfuscated gradients [obfuscated]. This incremental training keeps the cascade PCA aware of the network weight updates at each training step, encouraging mutual learning between the network weights and the principal latent components. The entire model, and thus $P$, can be trained end-to-end.
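A sketch of one incremental-training step of the components is given below. The exact EMA form, the state layout and the handling of the singular-vector sign ambiguity are our assumptions, not the authors' implementation.

```python
import torch

def ema_update_components(state, z_batch, eta_v, eta_s, k):
    """One incremental-training step for the PLS components (sketch, our notation).

    state:   dict holding mu_v, u_v (Vector-PCA) and mu_s, U_s (Spatial-PCA)
    z_batch: latent spaces of a mini-batch, shape (B, m, c)
    """
    B, m, c = z_batch.shape
    # Vector-PCA statistics over all B*m latent vectors
    vecs = z_batch.reshape(B * m, c)
    mu_v = vecs.mean(dim=0)
    _, _, Vt = torch.linalg.svd(vecs - mu_v, full_matrices=False)
    u_v = Vt[0]                                        # first principal latent vector
    # (in practice the sign of u_v may need to be aligned across iterations)

    # Spatial-PCA statistics over the B single-channel Vector-PCA maps,
    # computed with the previous iteration's Vector-PCA components
    maps = (z_batch - state["mu_v"]) @ state["u_v"]    # (B, m)
    mu_s = maps.mean(dim=0)
    _, _, Vt_s = torch.linalg.svd(maps - mu_s, full_matrices=False)
    U_s = Vt_s[:k].T                                   # (m, k)

    # Exponential moving average toward the batch estimates (Eqs. 12-13)
    state["mu_v"] = (1.0 - eta_v) * state["mu_v"] + eta_v * mu_v
    state["u_v"]  = (1.0 - eta_v) * state["u_v"]  + eta_v * u_v
    state["mu_s"] = (1.0 - eta_s) * state["mu_s"] + eta_s * mu_s
    state["U_s"]  = (1.0 - eta_s) * state["U_s"]  + eta_s * U_s
    return state
```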

4.4 Defense mechanism

We further elaborate on how the proposed PLS defends against adversarial attacks. Given an adversarial example $x^{adv}$, its latent space $z$ is adversarially perturbed. After Vector-PCA, each latent vector of $z$ is represented by a scaling factor of the learned principal latent vector $u_v^{*}$ (with a bias term $\mu_v^{*}$). The Vector-PCA space stores these scaling factors on a single-channel map (i.e., on the spatial domain only). Since all the principal latent components are pre-trained parameters, they are not affected by adversarial perturbations. Replacing the perturbed latent vectors by $u_v^{*}$ therefore removes the majority of the adversaries. The only place where the remaining adversaries can appear is the scaling factors of $u_v^{*}$ on the single-channel map. In other words, these adversaries are enclosed within a small subspace, making them easier to expel.

Subsequently, Spatial-PCA reconstructs this small subspace by a set of principal Vector-PCA maps $U_s^{*}$ (with a bias term $\mu_s^{*}$). Since $U_s^{*}$ and $\mu_s^{*}$ are adversary-free, the remaining adversaries are further removed. From another perspective, this step can be viewed as PCA-based denoising performed in the spatial domain of the features. With the robust principal latent space $z_{pls}$, the decoder can obtain a reconstruction error favorable to novelty detection, even in the presence of an adversarial example. Additionally, we perform AT [madry2018towards] to train the model, further improving the robustness.
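A sketch of how adversarial training could be combined with a PLS-equipped auto-encoder is given below. It reuses the hypothetical helpers sketched earlier (pgd_attack_ae, pls_transform, ema_update_components); the tensor layout and training details are assumptions, and for simplicity the attack here does not pass through PLS, whereas a full white-box evaluation would include it.

```python
import torch

def train_step(encoder, decoder, pls_state, x, optimizer, eta_v, eta_s, k):
    """One adversarial-training step for a PLS-equipped auto-encoder (sketch)."""
    # Craft adversarial examples from the (known-class) training batch, Eq. (1)
    y = torch.zeros(x.size(0), device=x.device)          # training data are known-class
    x_adv = pgd_attack_ae(encoder, decoder, x, y)

    z = encoder(x_adv)                                    # (B, C, H, W)
    B, C, H, W = z.shape
    z = z.permute(0, 2, 3, 1).reshape(B, H * W, C)        # (B, m, c)

    # Update the principal latent components by EMA, then apply PLS
    pls_state = ema_update_components(pls_state, z.detach(), eta_v, eta_s, k)
    z_pls = torch.stack([pls_transform(zi, pls_state["mu_v"], pls_state["u_v"],
                                       pls_state["mu_s"], pls_state["U_s"]) for zi in z])

    recon = decoder(z_pls.reshape(B, H, W, C).permute(0, 3, 1, 2))
    loss = torch.nn.functional.mse_loss(recon, x_adv)     # minimize Eq. (2) on x_adv
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item(), pls_state
```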

5 Experiments

We evaluate PLS on eight adversarial attacks, three datasets and six existing novelty detection methods. We further compare PLS with state-of-the-art defense approaches. An extensive ablation study is also presented.

Defense Clean FGSM [goodfellow2015explaining] PGD [madry2018towards] MI-FGSM [dong2018boosting] MultAdv [lo2020multav] AF [zajac2019adversarial] Black-box [papernot2017practical]

MNIST [lecun2010mnist]:
No Defense 0.964 0.350 0.051 0.022 0.170 0.014 0.790
PGD-AT [madry2018towards] 0.961 0.604 0.357 0.369 0.444 0.155 0.691
FD [Xie_2019_CVPR] 0.963 0.612 0.366 0.379 0.453 0.142 0.700
SAT [xie2020smooth] 0.947 0.527 0.295 0.306 0.370 0.142 0.652
RotNet-AT [hendrycks2019selfsupervised] 0.967 0.598 0.333 0.333 0.424 0.101 0.695
SOAP [shi2021online] 0.940 0.686 0.504 0.506 0.433 0.088 0.863
APAE [goodge2020robustness] 0.925 0.428 0.104 0.105 0.251 0.022 0.730
PLS (ours) 0.967 0.786 0.678 0.679 0.701 0.599 0.840

F-MNIST [xiao2017fashion]:
No Defense 0.892 0.469 0.088 0.047 0.148 0.112 0.562
PGD-AT [madry2018towards] 0.890 0.518 0.368 0.348 0.327 0.253 0.540
FD [Xie_2019_CVPR] 0.886 0.524 0.379 0.359 0.335 0.252 0.535
SAT [xie2020smooth] 0.878 0.444 0.306 0.285 0.273 0.231 0.492
RotNet-AT [hendrycks2019selfsupervised] 0.891 0.527 0.375 0.351 0.312 0.240 0.541
SOAP [shi2021online] 0.876 0.639 0.475 0.475 0.327 0.274 0.611
APAE [goodge2020robustness] 0.861 0.510 0.174 0.174 0.220 0.135 0.513
PLS (ours) 0.909 0.677 0.600 0.585 0.573 0.591 0.696

CIFAR-10 [krizhevsky2009learning]:
No Defense 0.550 0.186 0.034 0.018 0.025 0.035 0.227
PGD-AT [madry2018towards] 0.546 0.236 0.145 0.139 0.107 0.096 0.223
FD [Xie_2019_CVPR] 0.546 0.237 0.147 0.141 0.109 0.103 0.222
SAT [xie2020smooth] 0.537 0.223 0.141 0.135 0.101 0.079 0.219
RotNet-AT [hendrycks2019selfsupervised] 0.547 0.236 0.139 0.107 0.075 0.092 0.224
SOAP [shi2021online] 0.546 0.270 0.131 0.141 0.096 0.070 0.231
APAE [goodge2020robustness] 0.552 0.259 0.097 0.097 0.077 0.112 0.255
PLS (ours) 0.578 0.320 0.245 0.242 0.201 0.243 0.331

Table I: The mAUROC of models under various adversarial attacks.

5.1 Experimental setup

Datasets. We use three datasets for evaluation: MNIST [lecun2010mnist], Fashion-MNIST (F-MNIST) [xiao2017fashion] and CIFAR-10 [krizhevsky2009learning]. MNIST consists of 28×28 grayscale handwritten digits from 0 to 9, with 60,000 training and 10,000 test images. F-MNIST is composed of 28×28 grayscale images from 10 fashion product categories, also with 60,000 training and 10,000 test images. CIFAR-10 consists of 32×32 color images from 10 classes, with 50,000 training and 10,000 test images.

Evaluation protocol.

We simulate a one-class novelty detection scenario by the following protocol. Given a dataset, each class is defined as the known class at a time, and a model is trained with the training data of this known class. During inference, the test data of the known class are considered normal, and the test data of the other classes (i.e., novel classes) are considered anomalous. We select the anomalous data from each novel class equally to constitute half of the test set, where the anomalous data within a novel class are selected randomly. Hence, our test set contains 50% anomalous data, where each novel class accounts for the same proportion. The area under the Receiver Operating Characteristic curve (AUROC) value is used as the evaluation metric, where the ROC curve is obtained by varying the threshold of the novelty score. For each dataset, we report the mean AUROC (mAUROC) across its 10 classes.
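A minimal sketch of the scoring step of this protocol is shown below, assuming per-sample reconstruction errors have already been computed; the helper name and the use of scikit-learn are our own choices.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def auroc_from_recon_errors(errors_normal, errors_anomalous):
    """AUROC where the novelty score is the reconstruction error.

    errors_normal:    reconstruction errors of known-class test data (label 0)
    errors_anomalous: reconstruction errors of novel-class test data (label 1)
    """
    scores = np.concatenate([errors_normal, errors_anomalous])
    labels = np.concatenate([np.zeros(len(errors_normal)),
                             np.ones(len(errors_anomalous))])
    return roc_auc_score(labels, scores)

# mAUROC for a dataset = mean of the 10 per-class AUROC values, e.g.
# mauroc = np.mean([auroc_from_recon_errors(e_norm[c], e_anom[c]) for c in range(10)])
```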

Attack setting. We test adversarial robustness against five white-box attacks, including FGSM [goodfellow2015explaining], PGD [madry2018towards], MI-FGSM [dong2018boosting], MultAdv [lo2020multav] and AF [zajac2019adversarial], where PGD is the default attack if not otherwise specified. A black-box attack and two adaptive attacks [papernot2017practical, tramer2020adaptive] are also considered. All the attacks are implemented based on the formulation in Sec. 3.

For FGSM, PGD and MI-FGSM, we set to for MNIST, for F-MNIST, and for CIFAR-10. For MultAdv, we set to for MNIST, for F-MNIST, and for CIFAR-10. For AF, we set to , and for MNIST, F-MNIST and CIFAR-10, respectively. The framing width is set to . The number of attack iterations is set to for FGSM and for the other attacks.

Baseline defenses. To the best of our knowledge, APAE [goodge2020robustness] might be the only present defense designed for anomaly detection. In addition to APAE, we implement five commonly-used defenses, which are originally designed for classification tasks, in the context of novelty detection. They are PGD-AT [madry2018towards], FD [Xie_2019_CVPR], SAT [xie2020smooth], RotNet-AT [hendrycks2019selfsupervised] and SOAP [shi2021online], where FD, SAT and RotNet-AT incorporate PGD-AT. We use Gaussian non-local means [buades2005non] for FD, Swish [hendrycks2016gaussian] for SAT, and RotNet [gidaris2018unsupervised] for SOAP. These are their well-performing versions.

Benchmark novelty detectors. We apply PLS to six novelty detection methods, including a vanilla AE, VAE [kingma2013auto], AAE [makhzani2015adversarial], ALOCC [sabokrou2018adversarially], GPND [pidhorskyi2018generative] and ARAE [salehi2020arae], where the vanilla AE is the default novelty detector if not otherwise specified. PLS is added after the last layer of the novelty detection models’ encoder.

In order to evenly evaluate the adversarial robustness of these approaches, we unify their AE backbones into the following architecture. The encoder consists of four 3×3 convolutional layers, where each of the first three layers is followed by a 2×2 max-pooling with stride 2. We use a base channel size of 64 and increase the number of channels by a factor of 2. The decoder mirrors the encoder but replaces every max-pooling by a bilinear interpolation with a factor of 2. All the convolutional layers are followed by a batch normalization layer [ioffe2015batch] and ReLU.

Implementation details. All the models are trained by the Adam optimizer [kingma2014adam], with the learning rate decreased by a factor of 10 at the 20th and 40th epochs. The batch size is 128. For PLS, we keep one principal latent vector in Vector-PCA and set the number of principal Vector-PCA maps $k$ to 8; the EMA learning rates $\eta_v$ and $\eta_s$ are also decreased by a factor of 10 at the 20th and 40th epochs.

Defense Test type AE VAE [kingma2013auto] AAE [makhzani2015adversarial] ALOCC [sabokrou2018adversarially] GPND [pidhorskyi2018generative] ARAE [salehi2020arae]

MNIST [lecun2010mnist]:
No Defense Clean 0.964 0.979 0.973 0.961 0.946 0.965
No Defense PGD 0.051 0.087 0.056 0.141 0.128 0.133
PGD-AT [madry2018towards] PGD 0.357 0.521 0.427 0.312 0.582 0.341
FD [Xie_2019_CVPR] PGD 0.366 0.525 0.419 0.319 0.551 0.350
SAT [xie2020smooth] PGD 0.295 0.485 0.470 0.330 0.527 0.254
RotNet-AT [hendrycks2019selfsupervised] PGD 0.333 0.501 0.507 0.361 0.551 0.314
SOAP [shi2021online] PGD 0.504 0.608 0.398 0.606 0.425 0.522
APAE [goodge2020robustness] PGD 0.104 0.155 0.240 0.202 0.229 0.191
PLS (ours) PGD 0.678 0.739 0.608 0.693 0.741 0.695

F-MNIST [xiao2017fashion]:
No Defense Clean 0.892 0.914 0.912 0.901 0.915 0.901
No Defense PGD 0.088 0.223 0.152 0.177 0.177 0.262
PGD-AT [madry2018towards] PGD 0.368 0.538 0.512 0.367 0.539 0.420
FD [Xie_2019_CVPR] PGD 0.379 0.533 0.513 0.370 0.542 0.428
SAT [xie2020smooth] PGD 0.306 0.504 0.499 0.332 0.530 0.351
RotNet-AT [hendrycks2019selfsupervised] PGD 0.375 0.542 0.509 0.365 0.524 0.396
SOAP [shi2021online] PGD 0.475 0.509 0.313 0.477 0.386 0.548
APAE [goodge2020robustness] PGD 0.174 0.366 0.300 0.246 0.398 0.310
PLS (ours) PGD 0.600 0.604 0.599 0.612 0.626 0.599

CIFAR-10 [krizhevsky2009learning]:
No Defense Clean 0.550 0.552 0.555 0.551 0.559 0.578
No Defense PGD 0.034 0.073 0.051 0.037 0.027 0.087
PGD-AT [madry2018towards] PGD 0.145 0.177 0.195 0.146 0.182 0.157
FD [Xie_2019_CVPR] PGD 0.147 0.180 0.206 0.150 0.187 0.152
SAT [xie2020smooth] PGD 0.141 0.170 0.186 0.141 0.181 0.107
RotNet-AT [hendrycks2019selfsupervised] PGD 0.139 0.163 0.161 0.105 0.147 0.101
SOAP [shi2021online] PGD 0.131 0.094 0.043 0.172 0.075 0.117
APAE [goodge2020robustness] PGD 0.097 0.179 0.171 0.095 0.062 0.154
PLS (ours) PGD 0.245 0.247 0.252 0.244 0.242 0.245

Table II: The mAUROC of models under PGD attack. Various novelty detectors are used.

5.2 Robustness

5.2.1 White-box attacks

The robustness of one-class novelty detection against various white-box attacks is reported in Table I, where the vanilla AE is used. Without a defense, mAUROC drops significantly under all the white-box attacks, which shows the vulnerability of novelty detectors to adversarial examples. PGD-AT improves adversarial robustness to a great extent. FD makes a slight improvement over PGD-AT in most cases. SAT and RotNet-AT do not appear to improve over PGD-AT in the context of novelty detection. SOAP performs well in some cases but not uniformly. Compared to the other methods, APAE generally shows less robustness. The proposed method, PLS, significantly increases mAUROC when combined with PGD-AT, leading the other defenses by a clear margin. Moreover, PLS is consistently better across all five white-box attacks on all three datasets.

Figure 3: The mAUROC of PLS under PLS-knowledgeable attacks with varied trade-off parameters. (a) Knowledgeable A. (b) Knowledgeable B.

PLS-knowledgeable attacks. As discussed above, in a white-box attack, attackers are aware of the presence of the defense mechanism, i.e., PLS (which is differentiable at inference time, see Sec. 4). However, they rely only on the novelty detection objective (i.e., the MSE loss, see Eq. (2)) to generate adversarial examples. In this subsection, we follow the practice of recent adversarial defense studies such as [shi2021online] to thoroughly evaluate the proposed defense mechanism. More precisely, we try to find an adaptive attack [papernot2017practical, tramer2020adaptive] by giving the attacker full knowledge of the PLS defense mechanism. We refer to this type of attack as a PLS-knowledgeable attack.

We construct two PLS-knowledgeable attacks, Knowledgeable A and Knowledgeable B. They jointly optimize Eq. (2) and an auxiliary loss developed with the knowledge of PLS. Knowledgeable A attempts to minimize the $\ell_2$-norm between the latent space before and after the PLS transformation. The intuition is to void PLS such that the input and the output latent space of PLS become closer. In other words, Knowledgeable A replaces Eq. (2) with the following objective:

$\mathcal{L}_{A}(x^{adv}) = \mathcal{L}(x^{adv}) - (-1)^{y}\,\lambda_{A}\,\big\| P(\mathcal{E}(x^{adv})) - \mathcal{E}(x^{adv}) \big\|_{2}$,   (14)

where $\lambda_{A}$ is a trade-off parameter. Knowledgeable B attempts to maximize the $\ell_2$-norm between the latent space of the current adversarial example and that of its clean counterpart after the PLS transformation. The intuition is to keep the adversarial latent space away from the clean one. In other words, Knowledgeable B replaces Eq. (2) with the following objective:

$\mathcal{L}_{B}(x^{adv}) = \mathcal{L}(x^{adv}) + (-1)^{y}\,\lambda_{B}\,\big\| P(\mathcal{E}(x^{adv})) - P(\mathcal{E}(x)) \big\|_{2}$,   (15)

where $\lambda_{B}$ is a trade-off parameter. When $\lambda_{A} = 0$ or $\lambda_{B} = 0$, the PLS-knowledgeable attacks reduce to the conventional white-box attacks.
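A short sketch of the Knowledgeable A objective is given below, following the reconstructed notation above; the sign handling (shown for the known-class case) and the parameter name lam_a are assumptions.

```python
import torch

def knowledgeable_a_loss(encoder, decoder, pls, x_adv, lam_a):
    """Adaptive-attack objective of Knowledgeable A (sketch, our notation).

    The MSE term is the usual attack objective (Eq. 2); the auxiliary term is the
    l2 distance between the latent space before and after the PLS transformation,
    which the attacker tries to shrink in order to 'void' PLS.
    """
    z = encoder(x_adv)
    z_p = pls(z)
    recon = decoder(z_p)
    mse = ((recon - x_adv) ** 2).flatten(1).mean(dim=1)   # per-sample MSE
    aux = (z_p - z).flatten(1).norm(p=2, dim=1)
    # Known-class inputs ascend `mse` while descending `aux`, so the auxiliary
    # term enters with a negative sign; the sign flips for novel-class inputs.
    return mse - lam_a * aux
```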

In Fig. 3, we can observe that mAUROC monotonically increases as $\lambda_A$ or $\lambda_B$ increases. That is, these PLS-knowledgeable attacks cannot further reduce PLS's mAUROC, since the additional auxiliary loss terms attenuate the MSE loss gradients. This indicates that attackers cannot straightforwardly benefit from the knowledge of PLS. Hence, the conventional white-box attack still has the greatest attacking strength. This result shows that it is not easy to find a stronger attack to break PLS, even with full knowledge of the PLS mechanism.

5.2.2 Black-box attacks

The robustness against black-box attacks [papernot2017practical] is shown in the last column of Table I. Here, we consider a naturally trained (i.e., trained with only clean data) GPND as the substitute model and apply MI-FGSM, which has better transferability, to generate black-box adversarial examples for the target models. As we can see, the defenses that use PGD-AT degrade black-box robustness, which matches the observation in classification tasks [tramer2018ensemble]. SOAP, which does not use AT, shows better black-box robustness. PLS greatly improves black-box robustness even with PGD-AT, and it is consistently better across all datasets. Naturally trained PLS achieves 0.907, 0.742 and 0.332 mAUROC on MNIST, F-MNIST and CIFAR-10, respectively, under the black-box attack.

5.2.3 Generalizability

Table II shows the adversarial robustness of various state-of-the-art novelty detection models. All of them are susceptible to adversarial attacks. We attach the PLS module to these models to protect them. We can see that PLS uniformly robustifies all of these novelty detectors and significantly outperforms the other defense approaches. This confirms that PLS is applicable to a wide variety of the present novelty detection methods, demonstrating its excellent generalizability.

Defense MNIST F-MNIST CIFAR-10
No Defense 0.964 0.892 0.550
FD [Xie_2019_CVPR] 0.965 0.892 0.551
SAT [xie2020smooth] 0.949 0.883 0.543
RotNet-AT [hendrycks2019selfsupervised] 0.963 0.897 0.554
SOAP [shi2021online] 0.940 0.876 0.546
APAE [goodge2020robustness] 0.925 0.861 0.552
PLS (ours) 0.973 0.922 0.578
Table III: The mAUROC of models under clean data.

5.3 Performance on clean data

We also evaluate the performance of PLS on clean data. In this experiment, all the models are naturally trained. As shown in Table III, PLS improves the performance over the original network architecture (No Defense), while the other defenses do not make obvious improvements. This shows that PLS generalizes better to both clean data and adversarial examples. PLS enjoys this benefit because the principal latent components are learned from only the latent space of the known class. Consequently, when transforming the latent space of any novel class image, PLS projects it into the known class space defined by the principal latent components. This brings the transformed latent space closer to the latent space of the known class, so the decoder tries to reconstruct the input into a known class image. This produces high reconstruction errors for novel class images while barely affecting the reconstruction of known class images.

5.4 Ablation study

PLS components. Table IV reports the results of different PLS variants. First, Vector-PCA alone significantly improves the robustness over PGD-AT. This shows that the mechanism of replacing perturbed latent vectors by the incrementally-trained principal latent vector is effective. As discussed earlier, in PLS the adversaries can only remain in the scaling factors of the principal latent vector. Next, we further remove the adversaries with a denoising operation on the spatial dimension. We first deploy a feature denoising block [Xie_2019_CVPR] after the forward Vector-PCA; this baseline, denoted as Vector-PCA+FD, makes a slight improvement over the Vector-PCA baseline. Finally, the complete PLS uses Spatial-PCA for this purpose instead, achieving a clearly larger mAUROC increase. This shows Spatial-PCA's advantage over FD in our case.

Defense MNIST F-MNIST CIFAR-10
PGD-AT [madry2018towards] 0.357 0.368 0.145
Vector-PCA 0.566 0.499 0.215
Vector-PCA+FD 0.582 0.505 0.215
PLS (ours) 0.678 0.600 0.245
Table IV: The mAUROC of PLS variants under PGD attack.
Figure 4: Mean $\ell_2$-norm between the latent space of PGD adversarial examples and that of their clean counterparts for different defenses. The values are averaged over an entire dataset.
Figure 5: Histograms of reconstruction errors. (a) No Defense under clean data. (b) No Defense under PGD attack. (c) PGD-AT under PGD attack. (d) PLS under PGD attack. Digit 0 of MNIST is set to normal data, and the other digits are anomalous.

Stability of latent space. We compute the mean $\ell_2$-norm between the latent space of adversarial examples and that of their clean counterparts. As can be seen in Fig. 4, PLS's mean $\ell_2$-norm is three orders of magnitude smaller than that of the other defenses. This indicates that PLS's latent space is barely affected by adversaries, showing PLS's effectiveness at adversary removal.

Reconstruction errors. For an AE-style novelty detection model, normal data and anomalous data are expected to obtain low and high reconstruction errors, respectively. The model follows this behavior on clean data, as shown in Fig. 5(a). When an attacker maximizes the reconstruction errors of normal data and minimizes those of anomalous data, the model makes wrong predictions, as shown in Fig. 5(b). Fig. 5(c) shows that PGD-AT pulls back the enlarged reconstruction errors of normal data, but they still overlap with those of the anomalous data. In Fig. 5(d), it can be observed that PLS pushes away the reconstruction errors of anomalous data with a better margin. Although the reconstruction errors of normal data also increase, the gap between normal and anomalous data is retained, resulting in PLS performing better under attacks.

Reconstructed images. Fig. 6 compares the reconstructed images of the No Defense model and PLS under PGD attack. Digit 2 of MNIST is used as the known class. We can see that the No Defense model captures the shape of the adversarial anomalous data and thus produces fair reconstructions. In other words, the reconstruction error gap between normal data and anomalous data is insufficiently large. This observation is consistent with the quantitative results showing that it is not adversarially robust. In contrast, PLS reconstructs every input into the known class of digit 2. Hence, even under attacks, PLS can obtain very high reconstruction errors from anomalous data and low errors from normal data.

Figure 6: Reconstructions under PGD attack. Digit 2 is set to normal data, and the other digits are anomalous.

6 Conclusion

In this paper, we study adversarial robustness in the context of the one-class novelty detection problem. We show that existing novelty detection models are vulnerable to adversarial perturbations and propose a defense method referred to as Principal Latent Space (PLS). Specifically, PLS purifies the latent space through an incrementally-trained cascade PCA process. Moreover, we construct a generic evaluation framework to fully test the effectiveness of the proposed PLS. We perform extensive experiments on multiple datasets with multiple existing novelty detection models, considering various attacks, and show that PLS improves robustness consistently across different attacks and datasets.

Acknowledgments

This work was supported by the DARPA GARD Program HR001119S0026-GARD-FP-052.

References

A1 Basic sanity checks to evaluation

To further verify that the proposed PLS's robustness is not due to obfuscated gradients, we report our results on the basic sanity checks introduced by Athalye et al. [obfuscated].

  • Table I shows that iterative attacks (PGD [madry2018towards] and MI-FGSM [dong2018boosting]) are stronger than one-step attacks (FGSM [goodfellow2015explaining]).

  • Table I shows that white-box attacks are stronger than black-box attacks (by MI-FGSM).

  • Unbounded attacks reach a 100% attack success rate (AUROC drops to 0.000) on all three datasets.

  • Fig. 9 shows that increasing the distortion bound $\epsilon$ increases attack success (decreases AUROC).

A2 More on attack formulation

In Sec. 3, we take PGD [madry2018towards] as an example to illustrate the proposed attacking method against novelty detection models. Here, we elaborate on the formulation of the other attacks we used in this paper, including MI-FGSM [dong2018boosting], AF [zajac2019adversarial] and MultAdv [lo2020multav].

Consider an AE-based target model with an encoder $\mathcal{E}$ and a decoder $\mathcal{D}$, and an input image $x$ with the ground-truth label $y \in \{0, 1\}$, where "0" denotes the known class and "1" denotes the novel classes. MI-FGSM generates the adversarial example as follows:

$g_{t+1} = \omega \cdot g_{t} + \dfrac{\nabla_{x}\mathcal{L}(x^{adv}_{t})}{\big\|\nabla_{x}\mathcal{L}(x^{adv}_{t})\big\|_{1}}$,   (16)

where $g_{t}$ gathers the gradients of the first $t$ iterations with a decay factor $\omega$, and $\mathcal{L}$ corresponds to the MSE loss defined in Eq. 2. Then,

$x^{adv}_{t+1} = \Pi_{\epsilon}\big( x^{adv}_{t} + (-1)^{y}\,\alpha \cdot \mathrm{sign}(g_{t+1}) \big)$,   (17)

where $\alpha$ denotes a step size, $t \in \{0, \dots, T-1\}$ indexes the $T$ attack iterations, and $x^{adv}_{0} = x$. $\Pi_{\epsilon}$ projects each element into an $\ell_{\infty}$-norm bound with perturbation size $\epsilon$ such that $\|x^{adv} - x\|_{\infty} \leq \epsilon$.

AF adds adversarial perturbations on the border of an image, while the remaining pixels are kept unchanged. We generate the AF example as follows:

$x^{adv}_{t+1} = x^{adv}_{t} + (-1)^{y}\,\alpha \cdot M \odot \mathrm{sign}\big(\nabla_{x}\mathcal{L}(x^{adv}_{t})\big)$,   (18)

where $M$ is the AF mask and $\odot$ denotes element-wise multiplication. Let $j$ be a pixel index of $M$. If $j$ is on the border of the image within a framing width $w$, $M_{j} = 1$; otherwise, $M_{j} = 0$.
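A small sketch of constructing such a framing mask is shown below; the square-image assumption and the function name are ours.

```python
import torch

def adversarial_framing_mask(height, width, w):
    """Binary AF mask: 1 on a border frame of width w, 0 elsewhere (sketch)."""
    mask = torch.zeros(height, width)
    mask[:w, :] = 1.0       # top frame
    mask[-w:, :] = 1.0      # bottom frame
    mask[:, :w] = 1.0       # left frame
    mask[:, -w:] = 1.0      # right frame
    return mask             # broadcast over the channel dimension when applied

# Example: only the border pixels of a 28x28 image receive perturbations
# x_adv = x + adversarial_framing_mask(28, 28, w=2) * delta
```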

MultAdv produces adversarial examples via a multiplicative operation, formulated as follows:

$x^{adv}_{t+1} = \Pi^{\times}_{\epsilon_m}\Big( x^{adv}_{t} \odot \alpha_{m}^{\,(-1)^{y}\,\mathrm{sign}(\nabla_{x}\mathcal{L}(x^{adv}_{t}))} \Big)$,   (19)

where $\alpha_{m}$ is the multiplicative step size, and $\Pi^{\times}_{\epsilon_m}$ performs projection with a ratio bound $\epsilon_{m}$ such that the element-wise ratio between $x^{adv}$ and $x$ stays within $[1/\epsilon_{m}, \epsilon_{m}]$. Eq. 2 is used as the loss objective for AF and MultAdv as well to suit the novelty detection problem setup.

Figure 7: Reconstructions under AF attack. Digit 2 is set to normal data, and the other digits are anomalous.

A3 More on reconstructed images

In Sec. 5.4, we compare the reconstructed images under PGD attack [madry2018towards] in Fig. 6. In this section, Fig. 7 presents the reconstructed images under AF attack [zajac2019adversarial]. It can be observed that No Defense captures the shape of the adversarial anomalous data and thus produces fair reconstructions; nevertheless, it fails to reconstruct recognizable patterns under AF. Hence, the resulting reconstruction errors would lead the novelty detector to wrong predictions. In contrast, PLS reconstructs every input into the known class of digit 2. Therefore, even under AF, PLS can obtain very high reconstruction errors from anomalous data and low errors from normal data. These observations are consistent with the quantitative results in Table I.

A4 Trade-off of the numbers of principal components

This section looks into the trade-off induced by the numbers of principal components kept by Vector-PCA and Spatial-PCA in the proposed PLS. Table V reports our results on the MNIST dataset [lecun2010mnist]. In both settings (varying one count while fixing the other), we observe that keeping more components leads to lower PGD mAUROC but higher clean mAUROC: more components retain more semantic information of the feature maps but also keep more adversaries. Keeping a single component is an exception in one setting, where it yields lower PGD mAUROC because it loses too much information. According to this trade-off analysis, we keep 1 principal component in Vector-PCA and 8 in Spatial-PCA, as discussed in Sec. 5.1.

Input Original AE
Clean 0.964 0.967 0.975 0.971 0.971
PGD 0.051 0.678 0.621 0.581 0.557
Input Vec-PCA only
Clean 0.968 0.937 0.951 0.967 0.973
PGD 0.566 0.549 0.681 0.678 0.667
Table V: The mAUROC of PLS under clean data and PGD attack with varied numbers of principal components (MNIST).
Figure 8: The mAUROC of models under PGD attack with varied numbers of attack iterations $T$.
Figure 9: The mAUROC of models under PGD attack with varied perturbation sizes $\epsilon$.

A5 Attack budgets

To fully evaluate the effectiveness of the proposed PLS, we test its scalability to different attack budgets. We vary the attack budgets along two axes: the number of attack iterations $T$ and the perturbation size $\epsilon$. The results are presented in Fig. 8 and Fig. 9, respectively.

First, we can see that the attack strength does not increase noticeably with larger $T$. This observation is consistent with those of Madry et al. [madry2018towards] and Xie et al. [Xie_2019_CVPR]. The proposed PLS shows consistent adversarial robustness and performs better than No Defense and PGD-AT [madry2018towards] under all tested $T$.

On the other hand, the attack strength increases significantly with larger $\epsilon$. It can be observed that PLS consistently demonstrates better robustness under different $\epsilon$. Evidently, PLS is scalable to different attack budgets.

Defense Test type MNIST F-MNIST CIFAR-10

No Defense:
Clean 0.964 0.892 0.550
PGD 0.051 0.088 0.034
PGD-normal 0.167 0.284 0.111
PGD-latent 0.773 0.715 0.433
PGD-clean 0.106 0.180 0.070
PGD-anomalous 0.939 0.788 0.332

PGD-AT [madry2018towards]:
PGD 0.357 0.368 0.145
PGD-normal 0.745 0.656 0.309
PGD-latent 0.914 0.784 0.448
PGD-clean 0.863 0.802 0.403
PGD-anomalous 0.753 0.677 0.328

FD [Xie_2019_CVPR]:
PGD 0.366 0.379 0.147
PGD-normal 0.750 0.654 0.309
PGD-latent 0.906 0.762 0.447
PGD-clean 0.871 0.794 0.401
PGD-anomalous 0.761 0.673 0.331

PLS (ours):
PGD 0.678 0.600 0.245
PGD-normal 0.885 0.779 0.399
PGD-latent 0.953 0.883 0.547
PGD-clean 0.923 0.867 0.521
PGD-anomalous 0.860 0.775 0.407
Table VI: The mAUROC of models under PGD, PGD-normal, PGD-latent, PGD-clean and PGD-anomalous attacks. Underlines denote the lowest mAUROC, which indicates the strongest attack method.

A6 Further comparison with ARAE

ARAE [salehi2020arae] touches on the adversarial robustness of novelty detection, though its main purpose is improving performance. As mentioned in Sec. 3, ARAE's adversarial robustness is not thoroughly evaluated. In this section, we make a comprehensive comparison with ARAE.

First, ARAE evaluates adversarial robustness by crafting adversarial examples from only the normal test data (the known class). We name this attack PGD-normal. Instead, our attack method crafts adversarial examples from every test sample regardless of its class (see Sec. 3). We reproduce PGD-normal with the same setting as in Sec. 5.1. As shown in Table VI, the proposed attack (denoted as PGD) is stronger than PGD-normal: PGD obtains lower mAUROC across all the considered defense methods and datasets. It is intuitive that perturbing every input sample poses a stronger attack.

Second, ARAE performs AT on latent space-based adversarial examples. We name this attack PGD-latent. Instead, in this paper, we perform AT on reconstruction error-based adversarial examples (see Sec. 3). We reproduce PGD-latent with the same setting as in Sec. 5.1. As can be seen in Table VI, PGD is much stronger than PGD-latent: PGD obtains lower mAUROC across all the considered defense methods and datasets.

Third, since a novelty detector does not know whether an input image is adversarial during inference, it should compute the novelty score from the reconstruction error between the reconstructed image and the input image, rather than between the reconstructed image and the clean image. For instance, if a given test image is an adversarial example $x^{adv}$, a novelty detector should compute $\|\mathcal{D}(\mathcal{E}(x^{adv})) - x^{adv}\|_{2}^{2}$ instead of $\|\mathcal{D}(\mathcal{E}(x^{adv})) - x\|_{2}^{2}$ as the novelty score, where $x$ is the clean image. Therefore, to craft a strong adversarial example, one should maximize the reconstruction error between the reconstructed image and the input image. The proposed attack is based on this property; that is, at each attack iteration, we maximize the reconstruction error between the current adversarial example and the reconstruction of that current adversarial example (see Eq. 2). We construct an attack variant, PGD-clean, which maximizes the reconstruction error between the clean image and the reconstruction of the current adversarial example. Specifically, PGD-clean replaces the loss objective of Eq. 2 with the following:

$\mathcal{L}_{clean}(x^{adv}) = \big\| \mathcal{D}(\mathcal{E}(x^{adv})) - x \big\|_{2}^{2}$.   (20)

ARAE uses this form. As shown in Table VI, PGD is much stronger than PGD-clean: PGD obtains lower mAUROC across all the considered defense methods and datasets. Therefore, we perform AT by minimizing Eq. 2 to build a stronger defense, while ARAE minimizes Eq. 20.

In summary, the proposed attack is stronger than PGD-normal, PGD-latent and PGD-clean. Hence, we are able to rigorously evaluate the adversarial robustness of a novelty detector. Moreover, conducting AT against a stronger attack enhances robustness to a greater extent. We hope to provide researchers with a benchmark for future work on the adversarial robustness of one-class novelty detection.

A7 Further comparison with APAE

To the best of our knowledge, APAE [goodge2020robustness] might be the only present defense designed for anomaly detection. However, as mentioned in Sec. 3, APAE’s adversarial robustness is not thoroughly evaluated. In this section, we make more comparisons with APAE.

First, APAE evaluates adversarial robustness by crafting adversarial examples from only the anomalous test data (the unknown classes). We name this attack PGD-anomalous. Instead, our attack method crafts adversarial examples from every test sample regardless of its class (see Sec. 3). We reproduce PGD-anomalous with the same setting as in Sec. 5.1. As shown in Table VI, the proposed attack (denoted as PGD) is stronger than PGD-anomalous: PGD obtains lower mAUROC across all the considered defense methods and datasets. It is intuitive that perturbing every input sample poses a stronger attack. On the other hand, No Defense attains the best mAUROC against PGD-anomalous compared with the other defenses. The reason is that these defenses use only normal data for AT, so they overfit to the adversarial normal data and show less robustness against PGD-anomalous.

Second, APAE claims that AT is inapplicable to the novelty detection problem. In contrast, we show that AT is in fact applicable to novelty detection: we can craft adversarial examples from the normal training data to train the target model. Indeed, as can be seen in Table VI, models using AT are less robust to PGD-anomalous. However, for the stronger attacks that contain adversarial normal data, AT can significantly improve the robustness.

As discussed in Sec. A6, we construct a proper evaluation protocol to fully test the adversarial robustness of novelty detectors. With a good evaluation protocol, we are able to design a better defense method accordingly.

A8 Comparison with vector quantization

The proposed PLS learns a principal latent vector, which is adversary-free, to replace perturbed latent vectors and enhance adversarial robustness. An alternative way of learning the adversary-free latent vectors is using vector quantization. VQ-VAE [van2017neural] is an AE variant that uses the vector quantization technique to improve generation ability. To the best of our knowledge, VQ-VAE has not been adopted in the context of novelty detection. In this section, we implement VQ-VAE for one-class novelty detection and evaluate its adversarial robustness. We set the number of embeddings to 4 for MNIST, 8 for F-MNIST and 256 for CIFAR-10. These numbers achieve the best robustness according to our experiments.

Because the quantization step is non-differentiable, it causes obfuscated gradients [obfuscated]. Hence, we build a neural network, which consists of four fully connected layers, to learn the mapping from the latent vectors (the output of the encoder) to the quantized latent vectors (the corresponding embedding vectors). Since this neural network is differentiable, we use it to approximate the gradients of the non-differentiable part to perform the PGD attack [madry2018towards]. For comparison, we train another neural network with the same architecture to learn the mapping from the latent space to the principal latent space of PLS.
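A sketch of such a four-fully-connected-layer surrogate is given below; the hidden width, class name and training target shown in the comment are assumptions for illustration.

```python
import torch
import torch.nn as nn

class LatentMapper(nn.Module):
    """Four-FC-layer surrogate that maps latent vectors to their quantized
    (or PLS-transformed) counterparts, giving a differentiable path for PGD."""

    def __init__(self, dim, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, dim),
        )

    def forward(self, z):
        return self.net(z)

# Training target: regress the surrogate onto the non-differentiable mapping, e.g.
#   loss = nn.functional.mse_loss(mapper(z), vector_quantize(z).detach())
```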

Table VII reports the experimental results. Comparing PLS (PGD examples generated from the entire differentiable network) and PLS* (PGD examples generated from the neural network gradient approximator), we can see that the neural network still cannot perfectly approximate the gradients, so the produced attack is weaker. However, even under this weaker attack, VQ-VAE achieves lower mAUROC than PLS on MNIST and F-MNIST, and much lower mAUROC than PLS* on all the datasets. This shows that PLS has better robustness than VQ-VAE.

The explanations are as follows. First, PLS's principal latent vector is learned by the incrementally-trained cascade PCA process; it is not only adversary-free but also contains important features that can properly substitute for the original latent vectors. In contrast, VQ-VAE's embedding vectors are randomly initialized, and even with the training strategy in [van2017neural], they remain far from the original latent vectors. Therefore, PLS's principal latent vector is a better adversary-free substitute. Second, after Vector-PCA, PLS's Vector-PCA map stores the scaling factors of the principal latent vector with spatial information, so we can perform Spatial-PCA on it to further remove the remaining adversaries. In contrast, the vector quantization map stores the indices of the embedding vectors, and no further operation can be performed on these indices. These differences demonstrate the advantages of the proposed PLS.

Defense MNIST F-MNIST CIFAR-10
VQ-VAE [van2017neural]* 0.542 0.588 0.248
PLS (ours) 0.678 0.600 0.245
PLS (ours)* 0.816 0.755 0.325
Table VII: The mAUROC of VQ-VAE and PLS under PGD attack. “*” denotes that PGD examples are generated from a neural network gradient approximator.
Defense Clean PGD
PGD-AT [madry2018towards] 0.229 0.912
FD [Xie_2019_CVPR] 0.243 0.914
SAT [xie2020smooth] 0.360 0.916
RotNet-AT [hendrycks2019selfsupervised] 0.252 0.909
PLS (ours) 0.170 0.849
Table VIII: The mean of FPR at 95% TPR of models under PGD attack.

A9 Comparison with the defenses that use dimensionality reduction techniques

A few studies employ vanilla PCA to counter adversarial attacks for the image classification problem. Hendrycks & Gimpel [hendrycks2016early] and Jere et al. [jere2020principal] utilized PCA to detect adversarial examples. Li & Li [li2017adversarial] performed PCA in the feature domain and used a cascade classifier to detect adversarial examples. However, detection is inherently weaker than defense in terms of resisting adversarial attacks. Bhagoji et al. [bhagoji2017dimensionality] mapped each input image into a dimensionality-reduced PCA space to defend against adversarial attacks, but this fails to resist white-box attacks [carlini2017adversarial]. As discussed in Sec. 1, doing image classification requires a model containing sophisticated semantic information, and large manipulation such as dimensionality reduction would hurt the model capability. Hence, it is counterintuitive to use dimensionality reduction for robustifying image classification models.

In contrast, we target a different downstream application, one-class novelty detection. As discussed in Sec. 1, novelty detection has the peculiar property that a novelty detector's latent space can be manipulated to a large extent as long as it retains the known class information. This naturally suits dimensionality reduction techniques, which can remove adversaries while maintaining the model capability. Furthermore, we propose a novel training scheme that learns the incrementally-trained cascade principal components in the latent space. The proposed defense method is fully differentiable at inference time, and it is highly robust to white-box attacks, as shown in Sec. 5.2.

A10 Evaluation with FPR at 95% TPR

In addition to the AUROC metric, Table VIII provides a comparison of the mean FPR at 95% TPR for different defenses on the MNIST dataset [lecun2010mnist]. We observe a trend similar to that of mAUROC (see Table I). The proposed PLS outperforms all the other defense approaches.

A11 AUROC of each class

In this section, we provide more detailed quantitative results. Tables IX to XIV expand Table II by reporting the AUROC score of each class.

Dataset Defense Test type 0 1 2 3 4 5 6 7 8 9 Mean
MNIST No Defense Clean 0.994 0.999 0.943 0.965 0.956 0.967 0.971 0.973 0.913 0.971 0.964
MNIST No Defense PGD 0.000 0.463 0.000 0.000 0.020 0.000 0.001 0.026 0.000 0.002 0.051
MNIST PGD-AT [madry2018towards] PGD 0.371 0.959 0.174 0.098 0.404 0.162 0.490 0.538 0.033 0.340 0.357
MNIST FD [Xie_2019_CVPR] PGD 0.412 0.956 0.177 0.117 0.419 0.146 0.489 0.548 0.070 0.322 0.366
MNIST PLS (ours) PGD 0.876 0.991 0.430 0.479 0.723 0.531 0.797 0.758 0.468 0.724 0.678
F-MNIST No Defense Clean 0.885 0.988 0.857 0.914 0.886 0.845 0.782 0.978 0.818 0.968 0.892
F-MNIST No Defense PGD 0.022 0.202 0.014 0.088 0.012 0.071 0.022 0.374 0.008 0.068 0.088
F-MNIST PGD-AT [madry2018towards] PGD 0.277 0.773 0.226 0.408 0.249 0.325 0.136 0.746 0.102 0.439 0.368
F-MNIST FD [Xie_2019_CVPR] PGD 0.289 0.790 0.253 0.412 0.320 0.328 0.148 0.770 0.095 0.387 0.379
F-MNIST PLS (ours) PGD 0.540 0.862 0.486 0.603 0.547 0.663 0.370 0.857 0.301 0.771 0.600
CIFAR-10 No Defense Clean 0.628 0.311 0.667 0.539 0.728 0.533 0.633 0.445 0.665 0.351 0.550
CIFAR-10 No Defense PGD 0.052 0.003 0.053 0.022 0.087 0.020 0.031 0.013 0.051 0.006 0.034
CIFAR-10 PGD-AT [madry2018towards] PGD 0.213 0.029 0.222 0.122 0.267 0.104 0.163 0.075 0.217 0.040 0.145
CIFAR-10 FD [Xie_2019_CVPR] PGD 0.211 0.034 0.225 0.125 0.269 0.104 0.171 0.075 0.212 0.042 0.147
CIFAR-10 PLS (ours) PGD 0.325 0.096 0.317 0.194 0.392 0.184 0.334 0.164 0.347 0.101 0.245
Table IX: The AE novelty detector’s AUROC of each class with different defense approaches (expansion of Table II).
Dataset Defense Test type 0 1 2 3 4 5 6 7 8 9 Mean
MNIST No Defense Clean 0.998 0.999 0.971 0.976 0.976 0.978 0.993 0.984 0.934 0.985 0.979
MNIST No Defense PGD 0.003 0.673 0.001 0.002 0.023 0.002 0.028 0.128 0.000 0.015 0.087
MNIST PGD-AT [madry2018towards] PGD 0.679 0.979 0.382 0.258 0.598 0.330 0.687 0.654 0.159 0.481 0.521
MNIST FD [Xie_2019_CVPR] PGD 0.705 0.984 0.380 0.282 0.577 0.311 0.677 0.662 0.178 0.498 0.525
MNIST PLS (ours) PGD 0.921 0.992 0.622 0.539 0.760 0.622 0.845 0.811 0.501 0.780 0.739
F-MNIST No Defense Clean 0.908 0.990 0.882 0.934 0.901 0.888 0.813 0.983 0.859 0.979 0.914
F-MNIST No Defense PGD 0.067 0.435 0.059 0.208 0.083 0.282 0.049 0.680 0.028 0.337 0.223
F-MNIST PGD-AT [madry2018towards] PGD 0.460 0.839 0.392 0.573 0.475 0.581 0.305 0.834 0.193 0.732 0.538
F-MNIST FD [Xie_2019_CVPR] PGD 0.445 0.826 0.407 0.562 0.452 0.597 0.298 0.834 0.194 0.718 0.533
F-MNIST PLS (ours) PGD 0.536 0.845 0.477 0.637 0.524 0.698 0.381 0.849 0.315 0.782 0.604
CIFAR-10 No Defense Clean 0.635 0.336 0.661 0.522 0.725 0.512 0.639 0.463 0.672 0.354 0.552
CIFAR-10 No Defense PGD 0.100 0.016 0.133 0.055 0.124 0.049 0.079 0.032 0.123 0.018 0.073
CIFAR-10 PGD-AT [madry2018towards] PGD 0.255 0.050 0.256 0.146 0.328 0.123 0.219 0.092 0.251 0.052 0.177
CIFAR-10 FD [Xie_2019_CVPR] PGD 0.274 0.059 0.269 0.156 0.305 0.124 0.204 0.096 0.267 0.051 0.180
CIFAR-10 PLS (ours) PGD 0.325 0.097 0.324 0.198 0.386 0.197 0.322 0.171 0.352 0.101 0.247
Table X: The VAE [kingma2013auto] novelty detector’s AUROC of each class with different defense approaches (expansion of Table II).
Dataset Defense Test type 0 1 2 3 4 5 6 7 8 9 Mean
MNIST No Defense Clean 0.998 0.999 0.951 0.974 0.973 0.962 0.993 0.976 0.920 0.983 0.973
MNIST No Defense PGD 0.001 0.492 0.000 0.001 0.022 0.003 0.002 0.033 0.001 0.004 0.056
MNIST PGD-AT [madry2018towards] PGD 0.509 0.965 0.167 0.185 0.537 0.208 0.590 0.569 0.143 0.401 0.427
MNIST FD [Xie_2019_CVPR] PGD 0.592 0.968 0.138 0.145 0.474 0.209 0.556 0.593 0.134 0.386 0.419
MNIST PLS (ours) PGD 0.727 0.985 0.432 0.410 0.659 0.419 0.763 0.723 0.311 0.649 0.608
F-MNIST No Defense Clean 0.908 0.988 0.875 0.930 0.900 0.887 0.819 0.986 0.852 0.975 0.912
F-MNIST No Defense PGD 0.094 0.091 0.011 0.196 0.053 0.303 0.071 0.501 0.016 0.189 0.152
F-MNIST PGD-AT [madry2018towards] PGD 0.428 0.761 0.398 0.551 0.443 0.546 0.308 0.817 0.155 0.711 0.512
F-MNIST FD [Xie_2019_CVPR] PGD 0.408 0.739 0.453 0.534 0.450 0.557 0.326 0.804 0.188 0.672 0.513
F-MNIST PLS (ours) PGD 0.537 0.795 0.532 0.609 0.516 0.660 0.451 0.832 0.304 0.748 0.599
CIFAR-10 No Defense Clean 0.634 0.336 0.661 0.524 0.732 0.499 0.662 0.465 0.671 0.366 0.555
CIFAR-10 No Defense PGD 0.074 0.011 0.041 0.055 0.097 0.0333 0.080 0.032 0.071 0.012 0.051
CIFAR-10 PGD-AT [madry2018towards] PGD 0.274 0.058 0.284 0.168 0.320 0.143 0.265 0.109 0.264 0.063 0.195
CIFAR-10 FD [Xie_2019_CVPR] PGD 0.306 0.065 0.276 0.161 0.374 0.139 0.284 0.123 0.267 0.068 0.206
CIFAR-10 PLS (ours) PGD 0.324 0.088 0.333 0.205 0.410 0.191 0.352 0.172 0.340 0.110 0.252
Table XI: The AAE [makhzani2015adversarial] novelty detector’s AUROC of each class with different defense approaches (expansion of Table II).
Dataset Defense Test type 0 1 2 3 4 5 6 7 8 9 Mean
MNIST No Defense Clean 0.990 0.998 0.913 0.974 0.961 0.955 0.986 0.957 0.918 0.965 0.961
MNIST No Defense PGD 0.019 0.742 0.007 0.018 0.270 0.008 0.081 0.194 0.000 0.071 0.141
MNIST PGD-AT [madry2018towards] PGD 0.262 0.962 0.141 0.071 0.438 0.098 0.372 0.464 0.009 0.298 0.312
MNIST FD [Xie_2019_CVPR] PGD 0.306 0.977 0.127 0.127 0.352 0.091 0.402 0.539 0.023 0.247 0.319
MNIST PLS (ours) PGD 0.911 0.990 0.594 0.487 0.609 0.554 0.849 0.752 0.422 0.764 0.693
F-MNIST No Defense Clean 0.912 0.988 0.879 0.926 0.899 0.851 0.823 0.980 0.794 0.958 0.901
F-MNIST No Defense PGD 0.094 0.525 0.062 0.113 0.102 0.161 0.052 0.538 0.023 0.099 0.177
F-MNIST PGD-AT [madry2018towards] PGD 0.297 0.769 0.266 0.406 0.300 0.284 0.146 0.733 0.092 0.375 0.367
F-MNIST FD [Xie_2019_CVPR] PGD 0.316 0.790 0.230 0.415 0.316 0.287 0.149 0.733 0.087 0.377 0.370
F-MNIST PLS (ours) PGD 0.554 0.870 0.565 0.610 0.531 0.708 0.370 0.839 0.265 0.810 0.612
CIFAR-10 No Defense Clean 0.617 0.324 0.664 0.538 0.731 0.530 0.630 0.450 0.671 0.358 0.551
CIFAR-10 No Defense PGD 0.061 0.005 0.068 0.023 0.081 0.019 0.035 0.012 0.063 0.006 0.037
CIFAR-10 PGD-AT [madry2018towards] PGD 0.217 0.032 0.216 0.123 0.271 0.111 0.165 0.073 0.210 0.040 0.146
CIFAR-10 FD [Xie_2019_CVPR] PGD 0.208 0.035 0.217 0.126 0.291 0.106 0.183 0.072 0.220 0.045 0.150
CIFAR-10 PLS (ours) PGD 0.322 0.097 0.304 0.205 0.386 0.188 0.331 0.171 0.346 0.095 0.244
Table XII: The ALOCC [sabokrou2018adversarially] novelty detector’s AUROC of each class with different defense approaches (expansion of Table II).
Dataset Defense Test type 0 1 2 3 4 5 6 7 8 9 Mean
MNIST No Defense Clean 0.998 0.998 0.911 0.921 0.894 0.937 0.984 0.965 0.914 0.936 0.946
MNIST No Defense PGD 0.029 0.757 0.002 0.050 0.142 0.004 0.031 0.049 0.010 0.206 0.128
MNIST PGD-AT [madry2018towards] PGD 0.706 0.987 0.507 0.361 0.670 0.350 0.716 0.739 0.207 0.570 0.582
MNIST FD [Xie_2019_CVPR] PGD 0.689 0.987 0.463 0.413 0.407 0.472 0.762 0.528 0.241 0.547 0.551
MNIST PLS (ours) PGD 0.880 0.991 0.557 0.543 0.766 0.613 0.821 0.745 0.443 0.786 0.741
F-MNIST No Defense Clean 0.907 0.981 0.877 0.911 0.916 0.900 0.831 0.983 0.878 0.963 0.915
F-MNIST No Defense PGD 0.076 0.328 0.016 0.227 0.008 0.407 0.014 0.581 0.006 0.103 0.177
F-MNIST PGD-AT [madry2018towards] PGD 0.487 0.821 0.409 0.585 0.452 0.570 0.340 0.833 0.198 0.690 0.539
F-MNIST FD [Xie_2019_CVPR] PGD 0.526 0.821 0.411 0.578 0.484 0.546 0.321 0.832 0.195 0.705 0.542
F-MNIST PLS (ours) PGD 0.586 0.840 0.530 0.665 0.547 0.694 0.424 0.860 0.331 0.785 0.626
CIFAR-10 No Defense Clean 0.659 0.344 0.659 0.520 0.735 0.507 0.674 0.464 0.647 0.358 0.559
CIFAR-10 No Defense PGD 0.024 0.002 0.076 0.019 0.055 0.022 0.026 0.014 0.027 0.006 0.027
CIFAR-10 PGD-AT [madry2018towards] PGD 0.260 0.052 0.266 0.139 0.322 0.127 0.247 0.107 0.249 0.055 0.182
CIFAR-10 FD [Xie_2019_CVPR] PGD 0.258 0.057 0.269 0.154 0.330 0.141 0.254 0.099 0.252 0.058 0.187
CIFAR-10 PLS (ours) PGD 0.314 0.095 0.330 0.195 0.374 0.183 0.335 0.167 0.325 0.098 0.242
Table XIII: The GPND [pidhorskyi2018generative] novelty detector’s AUROC of each class with different defense approaches (expansion of Table II).
Dataset Defense Test type 0 1 2 3 4 5 6 7 8 9 Mean
MNIST No Defense Clean 0.993 0.999 0.951 0.958 0.965 0.953 0.981 0.972 0.917 0.965 0.965
MNIST No Defense PGD 0.022 0.754 0.004 0.001 0.013 0.004 0.050 0.239 0.000 0.144 0.133
MNIST PGD-AT [madry2018towards] PGD 0.341 0.965 0.152 0.098 0.391 0.144 0.461 0.522 0.047 0.349 0.341
MNIST FD [Xie_2019_CVPR] PGD 0.382 0.950 0.170 0.106 0.402 0.108 0.465 0.545 0.039 0.336 0.350
MNIST PLS (ours) PGD 0.891 0.989 0.477 0.472 0.723 0.564 0.858 0.755 0.498 0.723 0.695
F-MNIST No Defense Clean 0.876 0.975 0.852 0.937 0.913 0.847 0.772 0.984 0.883 0.982 0.901
F-MNIST No Defense PGD 0.103 0.487 0.057 0.415 0.077 0.272 0.040 0.803 0.023 0.345 0.262
F-MNIST PGD-AT [madry2018towards] PGD 0.302 0.766 0.280 0.491 0.351 0.381 0.182 0.818 0.104 0.524 0.420
F-MNIST FD [Xie_2019_CVPR] PGD 0.344 0.784 0.264 0.510 0.364 0.404 0.175 0.798 0.113 0.525 0.428
F-MNIST PLS (ours) PGD 0.511 0.842 0.459 0.614 0.531 0.673 0.395 0.848 0.353 0.765 0.599
CIFAR-10 No Defense Clean 0.670 0.389 0.626 0.518 0.686 0.526 0.571 0.490 0.738 0.568 0.578
CIFAR-10 No Defense PGD 0.136 0.014 0.109 0.070 0.144 0.088 0.074 0.060 0.144 0.028 0.087
CIFAR-10 PGD-AT [madry2018towards] PGD 0.217 0.039 0.235 0.125 0.278 0.108 0.208 0.086 0.230 0.043 0.157
CIFAR-10 FD [Xie_2019_CVPR] PGD 0.223 0.028 0.240 0.122 0.275 0.106 0.189 0.079 0.216 0.039 0.152
CIFAR-10 PLS (ours) PGD 0.329 0.099 0.314 0.193 0.388 0.189 0.334 0.168 0.344 0.098 0.245
Table XIV: The ARAE [salehi2020arae] novelty detector’s AUROC of each class with different defense approaches (expansion of Table II).