CADet: Fully Self-Supervised Anomaly Detection With Contrastive Learning

Handling out-of-distribution (OOD) samples has become a major challenge in the real-world deployment of machine learning systems. This work explores the application of self-supervised contrastive learning to the simultaneous detection of two types of OOD samples: unseen classes and adversarial perturbations. Since in practice the distribution of such samples is not known in advance, we do not assume access to OOD examples. We show that similarity functions trained with contrastive learning can be leveraged with the maximum mean discrepancy (MMD) two-sample test to verify whether two independent sets of samples are drawn from the same distribution. Inspired by this approach, we introduce CADet (Contrastive Anomaly Detection), a method based on image augmentations to perform anomaly detection on single samples. CADet compares favorably to adversarial detection methods in detecting adversarially perturbed samples on ImageNet. Simultaneously, it achieves comparable performance to unseen label detection methods on two challenging benchmarks: ImageNet-O and iNaturalist. CADet is fully self-supervised and requires neither labels for in-distribution samples nor access to OOD examples.


1 Introduction

While modern machine learning systems have achieved countless successful real-world applications, handling out-of-distribution (OOD) inputs remains a tough challenge of significant importance. The problem is especially acute for high-dimensional problems like image classification. Models are typically trained in a closed-world setting but are inevitably faced with novel input classes when deployed in the real world. The impact can range from a displeasing customer experience to dire consequences in the case of safety-critical applications such as autonomous driving (Kitt et al., 2010) or medical analysis (Schlegl et al., 2017b). Although achieving high accuracy against all meaningful distributional shifts is the most desirable solution, it is particularly challenging. An efficient way to mitigate the consequences of unexpected inputs is to perform anomaly detection, which allows the system to anticipate its inability to process unusual inputs and react adequately.

Anomaly detection methods generally rely on one of three types of statistics: features, logits, and softmax probabilities, with some systems leveraging a mix of these (Wang et al., 2022). An anomaly score $s(x)$ is computed, and detection with a threshold $\tau$ is performed based on whether $s(x) > \tau$. The goal of a detection system is to find an anomaly score that efficiently discriminates between in-distribution and out-of-distribution samples. However, the common problem of these systems is that different distributional shifts affect these statistics unpredictably. Accordingly, detection systems either achieve good performance on specific types of distributions or require tuning on OOD samples. In both cases, their practical use is severely limited. Motivated by these issues, recent work has tackled the challenge of designing detection systems for unseen classes without prior knowledge of the unseen label set or access to OOD samples (Winkens et al., 2020; Tack et al., 2020; Wang et al., 2022).
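For illustration only, a minimal sketch of this score-and-threshold scheme (the score function and the threshold below are hypothetical, not those of any cited method):

import numpy as np

def detect(score_fn, x, threshold):
    # Flag x as out-of-distribution when its anomaly score exceeds the threshold.
    return score_fn(x) > threshold

# Toy illustration: an anomaly score based on the feature norm (purely hypothetical).
score = lambda features: np.linalg.norm(features)
print(detect(score, np.ones(8), threshold=2.0))  # True, since ||(1,...,1)|| ~ 2.83 > 2.0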

We first investigate the use of the maximum mean discrepancy (MMD) two-sample test (Gretton et al., 2012) in conjunction with self-supervised contrastive learning to assess whether two sets of samples have been drawn from the same distribution. Motivated by the strong testing power of this method, we then introduce a statistic inspired by MMD that leverages contrastive transformations. Based on this statistic, we propose CADet (Contrastive Anomaly Detection), which is able to detect OOD samples from single inputs and performs well on both label-based and adversarial detection benchmarks, without requiring access to any OOD samples to train or tune the method.

Only a few works have addressed these tasks simultaneously. These works either focus on particular in-distribution data such as medical imaging for specific diseases (Uwimana and Senanayake, 2021) or evaluate their performance on datasets with very distant classes such as CIFAR10 (Krizhevsky, 2009), SVHN (Netzer et al., 2011), and LSUN (Yu et al., 2015), resulting in simple benchmarks that do not translate to general real-world applications (Lee et al., 2018).

Contributions: Our main contributions are as follows:

  • We use similarity functions learned by self-supervised contrastive learning with MMD to show that the test sets of CIFAR10 and CIFAR10.1 (Recht et al., 2019) have different distributions.

  • We propose a novel improvement to MMD and show it can also be used to confidently detect distributional shifts when given a small number of samples.

  • We introduce CADet, a fully self-supervised method for anomaly detection, and show it outperforms current methods in adversarial detection tasks while performing well on class-based OOD detection.

The outline is as follows: in Section 2, we discuss relevant previous work. Section 3 describes the self-supervised contrastive method based on SimCLRv2 (Chen et al., 2020b) used in this work. Section 4 explores the application of learned similarity functions in conjunction with MMD to verify whether two independent sets of samples are drawn from the same distribution. Section 5 presents CADet and evaluates its empirical performance. Finally, we discuss results and limitations in Section 6.

2 Related work

We propose a self-supervised contrastive method for anomaly detection (both unknown classes and adversarial attacks) inspired by MMD. Thus, our work intersects with the MMD, label-based OOD detection, adversarial detection, and self-supervised contrastive learning literature.

MMD two-sample test has been extensively studied (Gretton et al., 2012; Wenliang et al., 2019; Gretton et al., 2009; Sutherland et al., 2016; Chwialkowski et al., 2015; Jitkrittum et al., 2016), though this is, to the best of our knowledge, the first time a similarity function trained via contrastive learning is used in conjunction with MMD. Liu et al. (2020a) use MMD with a deep kernel trained on a fraction of the samples to argue that CIFAR10 and CIFAR10.1 have different test distributions. We build upon that work by confirming their finding with higher confidence levels, using fewer samples.

Label-based OOD detection methods discriminate samples that differ from those in the training distribution. We focus on unsupervised OOD detection in this work, i.e., we do not assume access to data labeled as OOD. Unsupervised OOD detection methods include density-based (Zhai et al., 2016; Nalisnick et al., 2018, 2019; Choi et al., 2018; Du and Mordatch, 2019; Ren et al., 2019; Serrà et al., 2019; Grathwohl et al., 2019; Liu et al., 2020c; Dinh et al., 2016), reconstruction-based (Schlegl et al., 2017a; Zong et al., 2018; Deecke et al., 2018; Pidhorskyi et al., 2018; Perera et al., 2019; Choi et al., 2018), one-class classifiers (Schölkopf et al., 1999; Ruff et al., 2018), self-supervised (Golan and El-Yaniv, 2018; Hendrycks et al., 2019b; Bergman and Hoshen, 2020; Tack et al., 2020), and supervised approaches (Liang et al., 2017; Hendrycks and Gimpel, 2016), though some works do not fall into any of these categories (Wang et al., 2022).

Adversarial detection discriminates adversarial samples from the original data. Adversarial samples are generated by minimally perturbing actual samples to produce a change in the model’s output, such as a misclassification. Most works rely on the knowledge of some attacks for training (Abusnaina et al., 2021; Metzen et al., 2017; Feinman et al., 2017; Lust and Condurache, 2020; Zuo and Zeng, 2021; Papernot and McDaniel, 2018; Ma et al., 2018), with the exception of Hu et al. (2019).

Self-supervised contrastive learning methods (Wu et al., 2018; He et al., 2020; Chen et al., 2020a, b) are commonly used to pre-train a model from unlabeled data to solve a downstream task such as image classification. Contrastive learning relies on instance discrimination trained with a contrastive loss (Hadsell et al., 2006) such as infoNCE (Gutmann and Hyvärinen, 2010).

Contrastive learning for OOD detection aims to find good representations for detecting OOD samples in a supervised (Liu and Abbeel, 2020; Khalid et al., 2022) or unsupervised (Winkens et al., 2020; Mohseni et al., 2020; Sehwag et al., 2021) setting. Perhaps the closest work in the literature is CSI (Tack et al., 2020), which found SimCLR features to have good discriminative power for unknown class detection and leveraged similarities between transformed samples in its score. However, this method is not well-suited for adversarial detection. CSI ignores the similarities between different transformations of the same sample, an essential component for adversarial detection (see Section 6.2). In addition, CSI scales its score with the norm of input representations. While efficient on samples with unknown classes, this is unreliable on adversarial perturbations, which typically increase representation norms.

3 Contrastive model

We build our model on top of SimCLRv2 (Chen et al., 2020b) for its simplicity and efficiency. It is composed of an encoder backbone network $f$ as well as a three-layer contrastive head $h$. Given an in-distribution sample $x$, a similarity function $\mathrm{sim}$, and a distribution of training transformations $\mathcal{T}$, the goal is to simultaneously maximize $\mathrm{sim}\big(h(f(t_1(x))), h(f(t_2(x)))\big)$ and minimize $\mathrm{sim}\big(h(f(t_1(x))), h(f(t_2(x')))\big)$ for transformations $t_1, t_2 \sim \mathcal{T}$ and a different sample $x'$, i.e., we want to learn representations in which random transformations of the same example are close while random transformations of different examples are distant.

To achieve this, given an input batch $\{x_i\}_{i=1}^{N}$, we compute the set $\{\tilde{x}_i\}_{i=1}^{2N}$ by applying two transformations independently sampled from $\mathcal{T}$ to each $x_i$. We then compute the embeddings $z_i = h(f(\tilde{x}_i))$ and apply the following contrastive loss:

$$\mathcal{L} = -\frac{1}{2N} \sum_{i=1}^{2N} \log \frac{\exp\!\big(\mathrm{sim}(z_i, z_{p(i)}) / \tau\big)}{\sum_{j \neq i} \exp\!\big(\mathrm{sim}(z_i, z_j) / \tau\big)}, \tag{1}$$

where $p(i)$ denotes the index of the other transformation of the same input as $\tilde{x}_i$, $\tau$ is the temperature hyperparameter, and $\mathrm{sim}(u, v) = \frac{u^\top v}{\|u\|\,\|v\|}$ is the cosine similarity.
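As an illustration, a minimal PyTorch sketch of a SimCLR-style loss of this form (a generic NT-Xent implementation under the pairing convention described above, not necessarily the authors' exact code; the temperature value is a placeholder):

import torch
import torch.nn.functional as F

def nt_xent_loss(z, temperature=0.1):
    # z: (2N, d) contrastive-head outputs; rows 2i and 2i+1 hold the two views of sample i.
    z = F.normalize(z, dim=1)                               # cosine similarity becomes a dot product
    sim = z @ z.t() / temperature                           # (2N, 2N) similarity matrix
    mask = torch.eye(z.shape[0], dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float('-inf'))              # exclude each view's similarity with itself
    positives = torch.arange(z.shape[0], device=z.device) ^ 1  # index of the other view of the same input
    return F.cross_entropy(sim, positives)                  # Equation (1), averaged over the 2N views

# Usage sketch: z = head(encoder(batch_of_2N_transformed_images)); loss = nt_xent_loss(z)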

Hyperparameters: We follow as closely as possible the setting from SimCLRv2 with a few modifications to adapt to hardware limitations. In particular, we use the LARS optimizer (You et al., 2017) with learning rate , momentum , and weight decay . Iteration-wise, we scale up the learning rate for the first epochs linearly, then use an iteration-wise cosine decaying schedule until epoch , with temperature . We train on GPUs with an accumulated batch size of . We compute the contrastive loss on all batch samples by aggregating the embeddings computed by each GPU. We use synchronized BatchNorm and fp32 precision and do not use a memory buffer. We use the same set of transformations, i.e., Gaussian blur and horizontal flip with probability , color jittering with probability , random crop with scale uniformly sampled in , and grayscale with probability .

For computational simplicity and comparison with previous work, we use a ResNet50 encoder architecture with final features of size 2048. Following SimCLRv2, we use a three-layer fully connected contrastive head with ReLU activations and BatchNorm in its hidden layers and a lower-dimensional projection in its last layer. For evaluation, we use the features produced by the encoder without the contrastive head. We do not, at any point, use supervised fine-tuning.
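A sketch of this architecture in PyTorch (the hidden and projection widths below are placeholders, since the exact values are not reproduced above):

import torch.nn as nn
from torchvision.models import resnet50

class ContrastiveModel(nn.Module):
    def __init__(self, hidden_dim=2048, proj_dim=128):     # placeholder widths, not the paper's values
        super().__init__()
        backbone = resnet50()
        backbone.fc = nn.Identity()                         # keep the 2048-d penultimate features
        self.encoder = backbone
        # Three-layer fully connected head with BatchNorm and ReLU, as described above.
        self.head = nn.Sequential(
            nn.Linear(2048, hidden_dim), nn.BatchNorm1d(hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.BatchNorm1d(hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, proj_dim),
        )

    def forward(self, x):
        features = self.encoder(x)                          # used at evaluation time, without the head
        return features, self.head(features)                # head output feeds the contrastive loss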

4 MMD two-sample test

The Maximum Mean Discrepancy (MMD) is a statistic used in the MMD two-sample test to assess whether two sets of samples $S_X$ and $S_Y$ are drawn from the same distribution. It estimates the expected difference between the intra-set distances and the across-sets distances.

Definition 4.1 (Gretton et al. (2012)).

Let $k$ be the kernel of a reproducing kernel Hilbert space $\mathcal{H}_k$, with feature map $\phi$. Let $X \sim p$ and $Y \sim q$. Under mild integrability conditions,

$$\mathrm{MMD}(p, q; \mathcal{H}_k) := \sup_{f \in \mathcal{H}_k,\, \|f\|_{\mathcal{H}_k} \le 1} \big| \mathbb{E}[f(X)] - \mathbb{E}[f(Y)] \big| \tag{2}$$
$$= \big\| \mathbb{E}[\phi(X)] - \mathbb{E}[\phi(Y)] \big\|_{\mathcal{H}_k}. \tag{3}$$

Given two sets of samples $S_X = \{x_i\}_{i=1}^{n}$ and $S_Y = \{y_i\}_{i=1}^{n}$, respectively drawn from $p$ and $q$, we can compute the following unbiased estimator (Liu et al., 2020a):

$$\widehat{\mathrm{MMD}}^2_u(S_X, S_Y; k) = \frac{1}{n(n-1)} \sum_{i \neq j} \Big[ k(x_i, x_j) + k(y_i, y_j) - k(x_i, y_j) - k(x_j, y_i) \Big]. \tag{4}$$
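For illustration, a minimal NumPy sketch of this estimator, assuming a vectorized kernel k that returns the full kernel matrix between two sets of feature vectors (variable names are ours):

import numpy as np

def mmd2_unbiased(X, Y, k):
    # X, Y: (n, d) arrays of features; k(A, B) returns the (len(A), len(B)) kernel matrix.
    n = len(X)
    Kxy = k(X, Y)
    H = k(X, X) + k(Y, Y) - Kxy - Kxy.T        # H[i, j] = k(xi, xj) + k(yi, yj) - k(xi, yj) - k(xj, yi)
    np.fill_diagonal(H, 0.0)                   # keep only the i != j terms of Equation (4)
    return H.sum() / (n * (n - 1))

# Example kernel: cosine similarity between already-extracted feature vectors.
def cosine_kernel(A, B):
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    return A @ B.T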

Under the null hypothesis $H_0: p = q$, this estimator asymptotically follows a normal distribution of mean $0$ (Gretton et al., 2012). Its variance can be directly estimated (Gretton et al., 2009), but it is simpler to perform a permutation test as suggested in Sutherland et al. (2016), which directly yields a $p$-value for $H_0$. The idea is to use random splits of the input sample sets to obtain different (though not independent) samplings of $\widehat{\mathrm{MMD}}^2_u$, which approximate the distribution of the estimator under the null hypothesis.

Liu et al. (2020a) train a deep kernel to maximize the test power of the MMD two-sample test on a training split of the sets of samples to test. We propose instead to use our learned similarity function without any fine-tuning. Note that, with $k$ permutation statistics $m_1, \dots, m_k$ and an observed statistic $m_0$, we return the $p$-value $\frac{r+1}{k+1}$ instead of $\frac{r}{k}$, where $r$ is the number of permutation statistics larger than $m_0$. Indeed, under the null hypothesis $H_0$, $S_X$ and $S_Y$ are drawn from the same distribution, so for any $i \in \{0, \dots, k\}$, the probability for $m_0$ to be smaller than exactly $i$ elements of $\{m_1, \dots, m_k\}$ is $\frac{1}{k+1}$. Therefore, the probability that $r$ elements or fewer of $\{m_1, \dots, m_k\}$ are larger than $m_0$ is $\frac{r+1}{k+1}$. While this change has a small impact for large values of $k$, it is essential to guarantee that we indeed return a correct $p$-value. Notably, the algorithm of Liu et al. (2020a) has a probability $\frac{1}{k+1}$ to return an output of $0$ even under the null hypothesis.

Additionally, we propose an improvement of MMD called MMD-CC (MMD with Clean Calibration). Instead of computing the permutation statistics based on random splits of $S_X \cup S_Y$, we require as input two disjoint sets of samples $S_X$ and $S_X'$ drawn from $p$ and compute the permutation statistics based on random splits of $S_X \cup S_X'$ (see Algorithm 1). This change requires using twice as many samples from $p$, but reduces the variance induced by the random splits, which is significant when the number of samples is small.

Input: disjoint in-distribution sample sets $S_X$, $S_X'$, test sample set $S_Y$, kernel $k$, number of permutations $N_{\mathrm{perm}}$

for $i = 1, \dots, N_{\mathrm{perm}}$ do
     Randomly split $S_X \cup S_X'$ into two disjoint sets $A_i$, $B_i$ of equal size
     $m_i \leftarrow \widehat{\mathrm{MMD}}^2_u(A_i, B_i; k)$
end for
$m_0 \leftarrow \widehat{\mathrm{MMD}}^2_u(S_X, S_Y; k)$
$r \leftarrow \#\{i : m_i \ge m_0\}$

Output: $p$-value $\frac{r + 1}{N_{\mathrm{perm}} + 1}$

Algorithm 1 MMD-CC two-sample test
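A sketch of this test in Python, reusing the mmd2_unbiased function from the estimator sketch above (the number of permutations and the variable names are ours, not the authors'):

import numpy as np

def mmd_cc_test(S_x, S_x_prime, S_y, k, n_perm=200, rng=None):
    # S_x, S_x_prime: two disjoint in-distribution sample sets; S_y: the set to test.
    rng = np.random.default_rng() if rng is None else rng
    observed = mmd2_unbiased(S_x, S_y, k)                  # statistic under test
    pool = np.concatenate([S_x, S_x_prime])                # clean calibration pool
    half = len(pool) // 2
    null_stats = np.empty(n_perm)
    for i in range(n_perm):
        perm = rng.permutation(len(pool))
        null_stats[i] = mmd2_unbiased(pool[perm[:half]], pool[perm[half:2 * half]], k)
    r = int(np.sum(null_stats >= observed))                # calibration statistics at least as large
    return (r + 1) / (n_perm + 1)                          # p-value with the +1 correction discussed above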

4.1 Distribution shift between CIFAR-10 and CIFAR-10.1 test sets

After years of evaluation of popular supervised architectures on the test set of CIFAR-10 (Krizhevsky, 2009), modern models may overfit it through their hyperparameter tuning and structural choices. CIFAR-10.1 (Recht et al., 2019) was collected to verify the performance of these models on a truly independent sample from the training distribution. The authors note a consistent drop in accuracy across models and suggest it could be due to a distributional shift, though they could not demonstrate it. Recent work (Liu et al., 2020a) leveraged the two-sample test to provide strong evidence of distributional shifts between the test sets of CIFAR-10 and CIFAR-10.1. We run MMD-CC and MMD two-sample tests for different samplings of the two test sets, rejecting $H_0$ when the obtained $p$-value is below the significance threshold. We also report results using cosine similarity applied to the features of supervised models as a comparative baseline. We report the results in Table 1 for a range of sample sizes. We compare the results to three competitive methods reported in Liu et al. (2020a): Mean embedding (ME) (Chwialkowski et al., 2015; Jitkrittum et al., 2016), MMD-D (Liu et al., 2020a), and C2ST-L (Cheng and Cloninger, 2019). Finally, we show in Figure 1(a) the ROC curves of the proposed model for different sample sizes.

Figure 1: ROC curves for the MMD-CC two-sample test on CIFAR10 vs CIFAR10.1 using different sample sizes (left) and for CADet anomaly detection on different out-distributions (right).

Other methods in the literature do not use external data for pre-training, as we do with ImageNet, which makes a fair comparison difficult. However, it is noteworthy that our learned similarity can very confidently distinguish samples from the two datasets, even in settings with few samples available. Furthermore, while we achieve excellent results even with a supervised network, our model trained with contrastive learning significantly outperforms the supervised alternative. We note, however, that with such a high number of samples available, MMD-CC performs slightly worse than MMD. Finally, we believe the confidence obtained with our method decisively establishes that CIFAR10 and CIFAR10.1 have different distributions, which is likely the primary explanation for the significant drop in performance across models on CIFAR10.1, as conjectured by Recht et al. (2019). The difference in distribution between CIFAR10 and CIFAR10.1 is based neither on label set nor on adversarial perturbations, making it an interesting task.

Method                                 n=2000  n=1000  n=500  n=200  n=100  n=50
ME (Chwialkowski et al., 2015)         0.588   -       -      -      -      -
C2ST-L (Cheng and Cloninger, 2019)     0.529   -       -      -      -      -
MMD-D (Liu et al., 2020a)              0.744   -       -      -      -      -
MMD + SimCLRv2 (ours)                  1.00    1.00    0.997  0.702  0.325  0.154
MMD-CC + SimCLRv2 (ours)               1.00    1.00    0.997  0.686  0.304  0.150
MMD + Supervised (ours)                1.00    1.00    0.884  0.305  0.135  0.103
MMD-CC + Supervised (ours)             1.00    1.00    0.870  0.298  0.131  0.096
Table 1: Average rejection rates of $H_0$ on CIFAR-10 vs CIFAR-10.1 across different sample sizes $n$, using a ResNet50 backbone.

4.2 Detection of distributional shifts from small number of samples

Given a small set of samples with potential unknown classes or adversarial attacks, we can similarly use the two-sample test with our similarity function to verify whether these samples are in-distribution (Gao et al., 2021). In particular, we test for samples drawn from ImageNet-O, iNaturalist, and PGD perturbations, with sample sizes ranging from 3 to 20. For these experiments, we repeatedly sample reference sets across all of ImageNet's validation set and compare the MMD and MMD-CC estimators obtained when the test set is drawn from ImageNet to those obtained when it is drawn from the OOD source. We report in Table 2 the AUROC of the resulting detection and compare it to the one obtained with a supervised ResNet50 as the baseline.

                        ImageNet-O              iNaturalist             PGD
n_samples               3     5     10    20    3     5     10    20    3     5     10    20
MMD + SimCLRv2          64.3  72.4  86.9  97.6  88.3  97.6  99.5  99.5  35.2  53.8  86.6  98.8
MMD-CC + SimCLRv2       65.3  73.2  88.0  97.7  95.4  99.2  99.5  99.5  70.5  84.0  96.6  99.5
MMD + Supervised        62.7  69.7  83.2  96.4  91.8  98.7  99.5  99.5  20.0  22.5  33.0  57.5
MMD-CC + Supervised     62.6  71.0  85.5  97.2  98.0  99.5  99.5  99.5  57.4  61.3  70.5  85.8
Table 2: AUROC for detection using the two-sample test on 3 to 20 samples drawn from ImageNet and from ImageNet-O, iNaturalist or PGD perturbations, with a ResNet50 backbone.

Such a setting, where we use several samples assumed to be drawn from the same distribution to perform detection, is uncommon, and we are not aware of prior baselines in the literature. Despite using very few samples, our method can detect OOD samples with high confidence. We observe particularly strong performance on iNaturalist, which is easily explained by the fact that the subset we are using (cf. Section 5.1) only contains plant species, logically inducing an abnormally high similarity within its samples. Furthermore, we observe that MMD-CC performs significantly better than MMD, especially on detecting samples perturbed by PGD.
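For completeness, a sketch of how such an AUROC can be computed once the MMD statistics of many small in-distribution and OOD sample sets are available (hypothetical variable names; scikit-learn's roc_auc_score):

import numpy as np
from sklearn.metrics import roc_auc_score

def detection_auroc(mmd_in, mmd_out):
    # mmd_in: statistics of small sets drawn from ImageNet (negatives).
    # mmd_out: statistics of small sets drawn from the OOD source (positives).
    scores = np.concatenate([mmd_in, mmd_out])
    labels = np.concatenate([np.zeros(len(mmd_in)), np.ones(len(mmd_out))])
    return roc_auc_score(labels, scores)       # larger MMD values should flag OOD sets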

Although our method attains excellent detection rates given a sufficient number of samples, the requirement to have a set of samples all drawn from the same distribution makes it impractical for real-world applications. In the following section, we present CADet, a detection method inspired by MMD but applicable to anomaly detection on single inputs.

5 CADet: Contrastive Anomaly Detection

While the results in Section 4 demonstrate the reliability of the two-sample test coupled with contrastive learning for identifying distributional shifts, the approach requires several samples from the same distribution, which is generally unrealistic for practical detection purposes. This section presents CADet, a method to leverage contrastive learning for anomaly detection on single samples from OOD distributions.

Self-supervised contrastive learning trains a similarity function to maximize the similarity between augmentations of the same sample, and minimize the similarity between augmentations of different samples. Given an input sample $x$, we propose to leverage this property to perform anomaly detection on $x$, taking inspiration from the MMD two-sample test. More precisely, given a transformation distribution $\mathcal{T}$, we compute $N$ random transformations $x^{(1)}, \dots, x^{(N)}$ of $x$, as well as $N$ random transformations $v_k^{(1)}, \dots, v_k^{(N)}$ of each sample $v_k$ of a held-out validation dataset $X_{\mathrm{val}}^{(1)} = \{v_1, \dots, v_M\}$. We then compute the intra-similarity $m_{\mathrm{in}}$ and out-similarity $m_{\mathrm{out}}$:

$$m_{\mathrm{in}}(x) = \frac{1}{N(N-1)} \sum_{i \neq j} \mathrm{sim}\!\left(x^{(i)}, x^{(j)}\right), \qquad m_{\mathrm{out}}(x) = \frac{1}{M N^2} \sum_{k=1}^{M} \sum_{i=1}^{N} \sum_{j=1}^{N} \mathrm{sim}\!\left(x^{(i)}, v_k^{(j)}\right), \tag{5}$$

where $\mathrm{sim}$ is applied to the encoder features of the transformed samples. We finally define the following statistic to perform detection:

$$s(x) = m_{\mathrm{in}}(x) + \lambda\, m_{\mathrm{out}}(x). \tag{6}$$
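Under the reconstruction of Equation (5) above, a minimal NumPy sketch of these two statistics on already-normalized encoder features (variable names are ours):

import numpy as np

def cadet_similarities(z_test, z_val):
    # z_test: (N, d) L2-normalized features of the N transformations of the test sample.
    # z_val: (M, N, d) L2-normalized features of N transformations of M held-out validation samples.
    n = len(z_test)
    sims = z_test @ z_test.T
    m_in = (sims.sum() - np.trace(sims)) / (n * (n - 1))                    # intra-similarity
    m_out = float(np.mean(z_test @ z_val.reshape(-1, z_val.shape[-1]).T))   # out-similarity
    return m_in, m_out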

Calibration: since we do not assume knowledge of OOD samples, it is difficult to tune $\lambda$ a priori, although it is crucial to balance information between intra-sample similarity and cross-sample similarity. As a workaround, we calibrate $\lambda$ by equalizing the variance between $m_{\mathrm{in}}$ and $\lambda\, m_{\mathrm{out}}$ on a second set of validation samples $X_{\mathrm{val}}^{(2)}$:

$$\lambda = \sqrt{\frac{\mathrm{Var}_{x \in X_{\mathrm{val}}^{(2)}}\left[m_{\mathrm{in}}(x)\right]}{\mathrm{Var}_{x \in X_{\mathrm{val}}^{(2)}}\left[m_{\mathrm{out}}(x)\right]}}. \tag{7}$$

Rather than evaluating the false positive rate (FPR) for a range of possible thresholds on $s(x)$, we use the hypothesis testing approach to compute the p-value:

$$p(x) = \frac{1 + \left|\left\{x' \in X_{\mathrm{val}}^{(2)} : s(x') \le s(x)\right\}\right|}{1 + \left|X_{\mathrm{val}}^{(2)}\right|}. \tag{8}$$

Algorithm 2 and Algorithm 3 detail the calibration and the testing steps, respectively. Setting a threshold $\alpha$ on the p-value will result in an FPR of mean $\alpha$, with a variance dependent on the number of calibration samples.

Section 5.1 further describes our experimental setting.

Input: validation sets $X_{\mathrm{val}}^{(1)}$ and $X_{\mathrm{val}}^{(2)}$, transformation distribution $\mathcal{T}$, learned similarity function $\mathrm{sim}$, number of transformations $N$;

1:for each $v_k \in X_{\mathrm{val}}^{(1)}$ do
2:     Sample $t_1, \dots, t_N$ from $\mathcal{T}$
3:     Compute the transformed samples $v_k^{(1)}, \dots, v_k^{(N)}$
4:end for
5:for each $x \in X_{\mathrm{val}}^{(2)}$ do
6:     Sample $t_1, \dots, t_N$ from $\mathcal{T}$ and compute $x^{(1)}, \dots, x^{(N)}$
7:     Compute $m_{\mathrm{in}}(x)$ and $m_{\mathrm{out}}(x)$ following Equation (5)
8:end for
9:$\lambda \leftarrow \sqrt{\mathrm{Var}[m_{\mathrm{in}}] / \mathrm{Var}[m_{\mathrm{out}}]}$ over $X_{\mathrm{val}}^{(2)}$ (Equation (7))
10:for each $x \in X_{\mathrm{val}}^{(2)}$ do
11:     $s(x) \leftarrow m_{\mathrm{in}}(x) + \lambda\, m_{\mathrm{out}}(x)$
12:end for

Output: coefficient: $\lambda$, scores: $\{s(x)\}_{x \in X_{\mathrm{val}}^{(2)}}$, transformed samples: $\{v_k^{(j)}\}$

Algorithm 2 CADet calibration step

Input: transformed samples: $\{v_k^{(j)}\}$, scores: $\{s(x)\}_{x \in X_{\mathrm{val}}^{(2)}}$, test sample: $x_{\mathrm{test}}$, coefficient: $\lambda$, transformation distribution: $\mathcal{T}$;

1:Sample $t_1, \dots, t_N$ from $\mathcal{T}$
2:Compute the transformed samples $x_{\mathrm{test}}^{(1)}, \dots, x_{\mathrm{test}}^{(N)}$
3:Compute $m_{\mathrm{in}}(x_{\mathrm{test}})$ and $m_{\mathrm{out}}(x_{\mathrm{test}})$ following Equation (5)
4:$s(x_{\mathrm{test}}) \leftarrow m_{\mathrm{in}}(x_{\mathrm{test}}) + \lambda\, m_{\mathrm{out}}(x_{\mathrm{test}})$
5:$r \leftarrow 0$
6:for each $x \in X_{\mathrm{val}}^{(2)}$ do
7:     if $s(x) \le s(x_{\mathrm{test}})$ then
8:         $r \leftarrow r + 1$
9:     end if
10:end for
11:$p \leftarrow (r + 1) / (|X_{\mathrm{val}}^{(2)}| + 1)$

Output: p-value: $p$

Algorithm 3 CADet testing step
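Putting the two algorithms together, a compact Python sketch of the calibration and testing steps as reconstructed above (Equations (6)-(8); variable names are ours, and the form of the score follows our reconstruction rather than a verified reference implementation):

import numpy as np

def cadet_calibrate(m_in_val, m_out_val):
    # m_in_val, m_out_val: intra-/out-similarities computed on the second validation set.
    lam = np.sqrt(np.var(m_in_val) / np.var(m_out_val))    # Equation (7): equalize the two variances
    scores = m_in_val + lam * m_out_val                    # calibration scores s(x)
    return lam, scores

def cadet_p_value(m_in_test, m_out_test, lam, scores):
    s_test = m_in_test + lam * m_out_test                  # Equation (6)
    r = int(np.sum(scores <= s_test))                      # anomalies are expected to score low
    return (r + 1) / (len(scores) + 1)                     # Equation (8)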

5.1 Experiments

For all evaluations, we use the same transformations as SimCLRv2 except color jittering, Gaussian blur and grayscaling. We fix the random crop scale to . We use in-distribution samples, separate samples to compute cross-similarities, and transformations per sample. We pre-train a ResNet50 with ImageNet as in-distribution.

Unknown class detection: we use two challenging benchmarks for the detection of unknown classes. The first is iNaturalist, using the subset from Huang and Li (2021) made of plant species whose classes do not intersect with ImageNet. Wang et al. (2022) noted that this dataset is particularly challenging due to the proximity of its classes. We also evaluate on ImageNet-O (Hendrycks et al., 2021), which is explicitly designed to be challenging for OOD detection with ImageNet as the in-distribution. We compare to recent works and report the AUROC scores in Table 3.

Adversarial detection: for adversarial detection, we generate adversarial attacks on the validation partition of ImageNet against a pre-trained ResNet50 using three popular attacks: PGD (Madry et al., 2017), CW (Carlini and Wagner, 2017), and FGSM (Goodfellow et al., 2014). We follow the tuning suggested by Abusnaina et al. (2021) for each attack; in particular, PGD uses 50 iterations. We compare our results with ODIN (Liang et al., 2017), which achieves good performance in Lee et al. (2018) despite not being designed for adversarial detection, and with Hu et al. (2019). Most other existing adversarial detection methods assume access to adversarial samples during training (see Section 2). While both of these works use adversarial samples to tune hyperparameters, we additionally propose a modification to Hu et al. (2019) to perform auto-calibration based on the mean and variance of its criteria on clean data, similarly to CADet's calibration step. We report the AUROC scores in Table 4 and illustrate them with ROC curves against each anomaly type in Figure 1(b).
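For reference, a minimal PGD sketch under an L-infinity constraint (generic; only the 50-iteration count comes from the text above, while eps and the step size are placeholders):

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=4 / 255, step=1 / 255, n_iter=50):
    # Untargeted PGD in the L-infinity ball of radius eps around x.
    x_adv = x.clone().detach()
    for _ in range(n_iter):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + step * grad.sign()        # ascend the classification loss
        x_adv = x + (x_adv - x).clamp(-eps, eps)           # project back into the eps-ball
        x_adv = x_adv.clamp(0.0, 1.0).detach()             # keep valid image values
    return x_adv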

6 Discussion

6.1 Results

Method                                  Training                        iNaturalist  ImageNet-O  Average
MSP (Hendrycks and Gimpel, 2016)        Supervised                      88.58        56.13       72.36
Energy (Liu et al., 2020b)              Supervised                      80.50        53.95       67.23
ODIN (Liang et al., 2017)               Supervised                      86.48        52.87       69.68
MaxLogit (Hendrycks et al., 2019a)      Supervised                      86.42        54.39       70.41
KL Matching (Hendrycks et al., 2019a)   Supervised                      90.48        67.00       78.74
ReAct (Sun et al., 2021)                Supervised                      87.27        68.02       77.65
Mahalanobis (Lee et al., 2018)          Supervised                      89.48        80.15       84.82
Residual (Wang et al., 2022)            Supervised                      84.63        81.15       82.89
ViM (Wang et al., 2022)                 Supervised                      89.26        81.02       85.14
CADet (ours)                            Supervised                      95.28        70.73       83.01
CADet (ours)                            Self-supervised (contrastive)   83.42        82.29       82.86
Table 3: AUROC for OOD detection on ImageNet-O and iNaturalist with a ResNet50 backbone.
Method                                    Tuned on Adv  Training     PGD    CW     FGSM   Average
ODIN (Liang et al., 2017)                 Yes           Supervised   62.30  60.29  68.10  63.56
                                                        Contrastive  59.91  60.23  64.99  61.71
Hu (Hu et al., 2019)                      Yes           Supervised   84.31  84.29  77.95  82.18
                                                        Contrastive  94.80  95.19  78.18  89.39
Hu (Hu et al., 2019) + self-calibration   No            Supervised   66.40  59.58  71.02  65.67
                                                        Contrastive  75.69  75.74  69.20  73.54
CADet (ours)                              No            Supervised   75.25  71.02  83.45  76.57
                                                        Contrastive  94.88  95.93  97.56  96.12
Table 4: AUROC for adversarial detection on ImageNet against PGD, CW and FGSM attacks, with a ResNet50 backbone.

CADet performs particularly well on adversarial detection, surpassing alternatives by a wide margin. We argue that self-supervised contrastive learning is a suitable mechanism for detecting classification attacks due to its inherent label-agnostic nature. Interestingly, Hu et al. (2019) also benefits from contrastive pre-training, achieving much higher performance than with a supervised backbone. However, it is very reliant on calibration on adversarial samples, since we observe a significant drop in performance with auto-calibration. Simultaneously, CADet performs well on detecting unknown classes, although it does not beat the best existing methods on iNaturalist.

Notably, applying CADet to a supervised network achieves state-of-the-art performance on iNaturalist with a ResNet50 architecture, suggesting CADet can be a reasonable standalone detection method on some benchmarks, independently of contrastive learning. In addition, the poor performance of the supervised network on ImageNet-O and adversarial attacks shows that contrastive learning is essential to address the trade-off between different types of anomalies.

Overall, our results show CADet achieves an excellent trade-off when considering both adversarial and label-based OOD samples.

6.2 The predictive power of in-similarities and out-similarities

Table 5 reports the mean and variance of $m_{\mathrm{in}}$ and $m_{\mathrm{out}}$, as well as the rescaled mean of $\lambda\, m_{\mathrm{out}}$, across all distributions. Interestingly, we see that out-similarities better discriminate label-based OOD samples, while in-similarities better discriminate adversarial perturbations. Combining in-similarities and out-similarities is thus an essential component to simultaneously detect adversarial perturbations and unknown classes.

                                      IN-1K     iNat      IN-O      PGD       CW        FGSM
Mean  $m_{\mathrm{in}}$               0.972     0.967     0.969     0.954     0.954     0.948
      $m_{\mathrm{out}}$              0.321     0.296     0.275     0.306     0.302     0.311
      $\lambda\, m_{\mathrm{out}}$    0.071     0.066     0.061     0.068     0.067     0.069
Var   $m_{\mathrm{in}}$               8.3e-05   7.8e-05   1.0e-04   2.1e-04   2.0e-04   2.1e-04
      $m_{\mathrm{out}}$              1.7e-03   7.0e-04   2.3e-03   7.0e-04   1.7e-03   1.1e-03
Table 5: Mean and variance of $m_{\mathrm{in}}$ and $m_{\mathrm{out}}$.

6.3 Limitations

Computational cost: To perform detection with CADet, we need to compute the features for a certain number of transformations of the test sample, incurring significant overhead. Figure 2 shows that reducing the number of transformations to minimize computational cost may not significantly affect performance. While the calibration step can be expensive, we note that it only needs to be run once for a given in-distribution. The coefficient and scores are all one-dimensional values that can be easily stored, and we purposely use a small number of validation samples so that their embeddings are easy to keep in memory.

Figure 2: AUROC score of CADet against the number of transformations.

Architecture scale: as self-supervised contrastive learning is computationally expensive, we only evaluated our method on a ResNet50 architecture. In Wang et al. (2022), the authors achieve significantly superior performances when using larger, recent architectures. The performances achieved with a ResNet50 are insufficient for real-world usage, and the question of how our method would scale to larger architectures remains open.

6.4 Future directions

While spurious correlations with background features are a problem in supervised learning, the issue is aggravated in self-supervised contrastive learning, where background features are highly relevant to the training task. We conjecture that the poor performance of CADet on iNaturalist OOD detection is explained by background similarities with ImageNet images, obfuscating the differences in relevant features. A natural way to alleviate this issue is to incorporate background transformations into the training pipeline, as was successfully applied in Ma et al. (2018). This process would come at the cost of being unable to detect shifts in background distributions, but such a case is generally less relevant to deployed systems. We leave to future work the exploration of how background transformations could affect the capabilities of CADet.

6.5 Conclusion

We have presented CADet, a method for both OOD and adversarial detection based on self-supervised contrastive learning. CADet achieves an excellent trade-off in detection power across different anomaly types. Additionally, we discussed how MMD could be leveraged with contrastive learning to assess distributional discrepancies between two sets of samples.

References

  • A. Abusnaina, Y. Wu, S. Arora, Y. Wang, F. Wang, H. Yang, and D. Mohaisen (2021) Adversarial example detection using latent neighborhood graph. In International Conference on Computer Vision, Cited by: §2, §5.1.
  • L. Bergman and Y. Hoshen (2020) Classification-based anomaly detection for general data. arXiv preprint arXiv:2005.02359. Cited by: §2.
  • N. Carlini and D. Wagner (2017) Towards evaluating the robustness of neural networks. In Symposium on Security and Privacy, Cited by: §5.1.
  • T. Chen, S. Kornblith, M. Norouzi, and G. Hinton (2020a) A simple framework for contrastive learning of visual representations. In International Conference on Machine Learning, Cited by: §2.
  • T. Chen, S. Kornblith, K. Swersky, M. Norouzi, and G. Hinton (2020b) Big self-supervised models are strong semi-supervised learners. arXiv preprint arXiv:2006.10029. Cited by: §1, §2, §3.
  • X. Cheng and A. Cloninger (2019) Classification logit two-sample testing by neural networks. arXiv preprint arXiv:1909.11298. Cited by: §4.1, Table 1.
  • H. Choi, E. Jang, and A. A. Alemi (2018) Waic, but why? generative ensembles for robust anomaly detection. arXiv preprint arXiv:1810.01392. Cited by: §2.
  • K. Chwialkowski, A. Ramdas, D. Sejdinovic, and A. Gretton (2015) Fast two-sample testing with analytic representations of probability measures. arXiv preprint arXiv:1506.04725. Cited by: §2, §4.1, Table 1.
  • L. Deecke, R. Vandermeulen, L. Ruff, S. Mandt, and M. Kloft (2018) Image anomaly detection with generative adversarial networks. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, Cited by: §2.
  • L. Dinh, J. Sohl-Dickstein, and S. Bengio (2016) Density estimation using real nvp. arXiv preprint arXiv:1605.08803. Cited by: §2.
  • Y. Du and I. Mordatch (2019) Implicit generation and modeling with energy based models. In Advances in Neural Information Processing Systems, Cited by: §2.
  • R. Feinman, R. R. Curtin, S. Shintre, and A. B. Gardner (2017) Detecting adversarial samples from artifacts. arXiv preprint arXiv:1703.00410. Cited by: §2.
  • R. Gao, F. Liu, J. Zhang, B. Han, T. Liu, G. Niu, and M. Sugiyama (2021) Maximum mean discrepancy test is aware of adversarial attacks. In Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, M. Meila and T. Zhang (Eds.), Proceedings of Machine Learning Research, Vol. 139, pp. 3564–3575. External Links: Link Cited by: §4.2.
  • I. Golan and R. El-Yaniv (2018) Deep anomaly detection using geometric transformations. In Advances in Neural Information Processing Systems, Cited by: §2.
  • I. J. Goodfellow, J. Shlens, and C. Szegedy (2014) Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572. Cited by: §5.1.
  • W. Grathwohl, K. Wang, J. Jacobsen, D. Duvenaud, M. Norouzi, and K. Swersky (2019) Your classifier is secretly an energy based model and you should treat it like one. arXiv preprint arXiv:1912.03263. Cited by: §2.
  • A. Gretton, K. M. Borgwardt, M. J. Rasch, B. Schölkopf, and A. Smola (2012) A kernel two-sample test. Journal of Machine Learning Research 13 (1), pp. 723–773. Cited by: §1, §2, Definition 4.1, §4.
  • A. Gretton, K. Fukumizu, Z. Harchaoui, and B. K. Sriperumbudur (2009) A fast, consistent kernel two-sample test. In Advances in Neural Information Processing Systems, Y. Bengio, D. Schuurmans, J. Lafferty, C. Williams, and A. Culotta (Eds.), Cited by: §2, §4.
  • M. Gutmann and A. Hyvärinen (2010) Noise-contrastive estimation: a new estimation principle for unnormalized statistical models. In International Conference on Artificial Intelligence and Statistics, Cited by: §2.
  • R. Hadsell, S. Chopra, and Y. LeCun (2006) Dimensionality reduction by learning an invariant mapping. In Computer Vision and Pattern Recognition, Cited by: §2.
  • K. He, H. Fan, Y. Wu, S. Xie, and R. Girshick (2020) Momentum contrast for unsupervised visual representation learning. In Conference on Computer Vision and Pattern Recognition, Cited by: §2.
  • D. Hendrycks, S. Basart, M. Mazeika, A. Zou, J. Kwon, M. Mostajabi, J. Steinhardt, and D. Song (2019a) Scaling out-of-distribution detection for real-world settings. arXiv preprint arXiv:1911.11132. Cited by: Table 3.
  • D. Hendrycks and K. Gimpel (2016) A baseline for detecting misclassified and out-of-distribution examples in neural networks. arXiv preprint arXiv:1610.02136. Cited by: §2, Table 3.
  • D. Hendrycks, M. Mazeika, S. Kadavath, and D. Song (2019b) Using self-supervised learning can improve model robustness and uncertainty. In Advances in Neural Information Processing Systems, Cited by: §2.
  • D. Hendrycks, K. Zhao, S. Basart, J. Steinhardt, and D. Song (2021) Natural adversarial examples. arXiv preprint arXiv:1907.07174. Cited by: §5.1.
  • S. Hu, T. Yu, C. Guo, W. Chao, and K. Q. Weinberger (2019) A new defense against adversarial images: turning a weakness into a strength. In Advances in Neural Information Processing Systems, Cited by: §2, §5.1, §6.1, Table 4.
  • R. Huang and Y. Li (2021) MOS: towards scaling out-of-distribution detection for large semantic space. arXiv preprint arXiv:2105.01879. Cited by: §5.1.
  • W. Jitkrittum, Z. Szabo, K. Chwialkowski, and A. Gretton (2016) Interpretable distribution features with maximum testing power. arXiv preprint arXiv:1605.06796. Cited by: §2, §4.1.
  • U. Khalid, A. Esmaeili, N. Karim, and N. Rahnavard (2022) Rodd: a self-supervised approach for robust out-of-distribution detection. arXiv preprint arXiv:2204.02553. Cited by: §2.
  • B. Kitt, A. Geiger, and H. Lategahn (2010) Visual odometry based on stereo image sequences with ransac-based outlier rejection scheme. In 2010 IEEE Intelligent Vehicles Symposium, pp. 486–492. External Links: Document Cited by: §1.
  • A. Krizhevsky (2009) Learning multiple layers of features from tiny images. Technical report University of Toronto. Cited by: §1, §4.1.
  • K. Lee, K. Lee, H. Lee, and J. Shin (2018) A simple unified framework for detecting out-of-distribution samples and adversarial attacks. In Advances in Neural Information Processing Systems, Cited by: §1, §5.1, Table 3.
  • S. Liang, Y. Li, and R. Srikant (2017) Enhancing the reliability of out-of-distribution image detection in neural networks. arXiv preprint arXiv:1706.02690. Cited by: §2, §5.1, Table 3, Table 4.
  • F. Liu, W. Xu, J. Lu, G. Zhang, A. Gretton, and D. J. Sutherland (2020a) Learning deep kernels for non-parametric two-sample tests. In International Conference on Machine Learning, Cited by: §2, §4.1, Table 1, §4, §4.
  • H. Liu and P. Abbeel (2020) Hybrid discriminative-generative training via contrastive learning. arXiv preprint arXiv:2007.09070. Cited by: §2.
  • W. Liu, X. Wang, J. D. Owens, and Y. Li (2020b) Energy-based out-of-distribution detection. arXiv preprint arXiv:2010.03759. Cited by: Table 3.
  • W. Liu, X. Wang, J. Owens, and Y. Li (2020c) Energy-based out-of-distribution detection. In Advances in Neural Information Processing Systems, Cited by: §2.
  • J. Lust and A. P. Condurache (2020) GraN: an efficient gradient-norm based detector for adversarial and misclassified examples. arXiv preprint arXiv:2004.09179. Cited by: §2.
  • X. Ma, B. Li, Y. Wang, S. M. Erfani, S. Wijewickrema, G. Schoenebeck, D. Song, M. E. Houle, and J. Bailey (2018) Characterizing adversarial subspaces using local intrinsic dimensionality. arXiv preprint arXiv:1801.02613. Cited by: §2, §6.4.
  • A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu (2017) Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083. Cited by: §5.1.
  • J. H. Metzen, T. Genewein, V. Fischer, and B. Bischoff (2017) On detecting adversarial perturbations. arXiv preprint arXiv:1702.04267. Cited by: §2.
  • S. Mohseni, M. Pitale, J. Yadawa, and Z. Wang (2020) Self-supervised learning for generalizable out-of-distribution detection. In AAAI Conference on Artificial Intelligence, Cited by: §2.
  • E. Nalisnick, A. Matsukawa, Y. W. Teh, D. Gorur, and B. Lakshminarayanan (2018) Do deep generative models know what they don’t know?. arXiv preprint arXiv:1810.09136. Cited by: §2.
  • E. T. Nalisnick, A. Matsukawa, Y. W. Teh, and B. Lakshminarayanan (2019) Detecting out-of-distribution inputs to deep generative models using a test for typicality. arXiv preprint arXiv:1906.02994. Cited by: §2.
  • Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Y. Ng (2011) Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning 2011, Cited by: §1.
  • N. Papernot and P. McDaniel (2018) Deep k-nearest neighbors: towards confident, interpretable and robust deep learning. arXiv preprint arXiv:1803.04765. Cited by: §2.
  • P. Perera, R. Nallapati, and B. Xiang (2019) OCGAN: one-class novelty detection using GANs with constrained latent representations. In Conference on Computer Vision and Pattern Recognition, Cited by: §2.
  • S. Pidhorskyi, R. Almohsen, and G. Doretto (2018) Generative probabilistic novelty detection with adversarial autoencoders. In Advances in Neural Information Processing Systems, Cited by: §2.
  • B. Recht, R. Roelofs, L. Schmidt, and V. Shankar (2019) Do imagenet classifiers generalize to imagenet?. arXiv preprint arXiv:1902.10811. Cited by: 1st item, §4.1, §4.1.
  • J. Ren, P. J. Liu, E. Fertig, J. Snoek, R. Poplin, M. Depristo, J. Dillon, and B. Lakshminarayanan (2019) Likelihood ratios for out-of-distribution detection. In Advances in Neural Information Processing Systems, Cited by: §2.
  • L. Ruff, R. Vandermeulen, N. Goernitz, L. Deecke, S. A. Siddiqui, A. Binder, E. Müller, and M. Kloft (2018) Deep one-class classification. In International Conference on Machine Learning, Cited by: §2.
  • T. Schlegl, P. Seeböck, S. M. Waldstein, U. Schmidt-Erfurth, and G. Langs (2017a) Unsupervised anomaly detection with generative adversarial networks to guide marker discovery. In International conference on information processing in medical imaging, Cited by: §2.
  • T. Schlegl, P. Seeböck, S. M. Waldstein, U. Schmidt-Erfurth, and G. Langs (2017b) Unsupervised anomaly detection with generative adversarial networks to guide marker discovery. arXiv preprint arXiv:1703.05921. Cited by: §1.
  • B. Schölkopf, R. C. Williamson, A. Smola, J. Shawe-Taylor, and J. Platt (1999) Support vector method for novelty detection. In Advances in Neural Information Processing Systems, Cited by: §2.
  • V. Sehwag, M. Chiang, and P. Mittal (2021) SSD: a unified framework for self-supervised outlier detection. In International Conference on Learning Representations, External Links: Link Cited by: §2.
  • J. Serrà, D. Álvarez, V. Gómez, O. Slizovskaia, J. F. Núñez, and J. Luque (2019) Input complexity and out-of-distribution detection with likelihood-based generative models. arXiv preprint arXiv:1909.11480. Cited by: §2.
  • Y. Sun, C. Guo, and Y. Li (2021) ReAct: out-of-distribution detection with rectified activations. arXiv preprint arXiv:2111.12797. Cited by: Table 3.
  • D. J. Sutherland, H. Tung, H. Strathmann, S. De, A. Ramdas, A. Smola, and A. Gretton (2016) Generative models and model criticism via optimized maximum mean discrepancy. arXiv preprint arXiv:1611.04488. Cited by: §2, §4.
  • J. Tack, S. Mo, J. Jeong, and J. Shin (2020) Csi: novelty detection via contrastive learning on distributionally shifted instances. In Advances in Neural Information Processing Systems, Cited by: §1, §2, §2.
  • A. Uwimana and R. Senanayake (2021) Out of distribution detection and adversarial attacks on deep neural networks for robust medical image analysis. arXiv preprint arXiv:2107.04882. Cited by: §1.
  • H. Wang, Z. Li, L. Feng, and W. Zhang (2022) ViM: out-of-distribution with virtual-logit matching. In Conference on Computer Vision and Pattern Recognition, Cited by: §1, §2, §5.1, §6.3, Table 3.
  • L. Wenliang, D. Sutherland, H. Strathmann, and A. Gretton (2019) Learning deep kernels for exponential family densities. In International Conference on Machine Learning, Cited by: §2.
  • J. Winkens, R. Bunel, A. G. Roy, R. Stanforth, V. Natarajan, J. R. Ledsam, P. MacWilliams, P. Kohli, A. Karthikesalingam, S. Kohl, et al. (2020) Contrastive training for improved out-of-distribution detection. arXiv preprint arXiv:2007.05566. Cited by: §1, §2.
  • Z. Wu, Y. Xiong, S. X. Yu, and D. Lin (2018) Unsupervised feature learning via non-parametric instance discrimination. In Conference on Computer Vision and Pattern Recognition, Cited by: §2.
  • Y. You, I. Gitman, and B. Ginsburg (2017) Large batch training of convolutional networks. arXiv preprint arXiv:1708.03888. Cited by: §3.
  • F. Yu, A. Seff, Y. Zhang, S. Song, T. Funkhouser, and J. Xiao (2015) LSUN: construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365. Cited by: §1.
  • S. Zhai, Y. Cheng, W. Lu, and Z. Zhang (2016) Deep structured energy based models for anomaly detection. In International Conference on Machine Learning, Cited by: §2.
  • B. Zong, Q. Song, M. R. Min, W. Cheng, C. Lumezanu, D. Cho, and H. Chen (2018) Deep autoencoding gaussian mixture model for unsupervised anomaly detection. In International Conference on Learning Representations, Cited by: §2.
  • F. Zuo and Q. Zeng (2021) Exploiting the sensitivity of l2 adversarial examples to erase-and-restore. In Asia Conference on Computer and Communications Security, Cited by: §2.