Deep Anomaly Detection by Residual Adaptation

Deep anomaly detection is a difficult task since, in high dimensions, it is hard to completely characterize a notion of "differentness" when given only examples of normality. In this paper we propose a novel approach to deep anomaly detection based on augmenting large pretrained networks with residual corrections that adjust them to the task of anomaly detection. Our method gives rise to a highly parameter-efficient learning mechanism, enhances disentanglement of representations in the pretrained model, and outperforms all existing anomaly detection methods, including other baselines utilizing pretrained networks. On the CIFAR-10 one-versus-rest benchmark, for example, our technique raises the state of the art from 96.1 to 99.0 mean AUC.




1 Introduction

The core goal of anomaly detection is the identification of unusual samples within data (Edgeworth, 1887; Grubbs, 1969; Schölkopf et al., 1999; Chandola et al., 2009). What complicates matters is that unusualness can be caused by a variety of factors, especially for data types that are semantically rich. For these settings, there has been continued interest in developing new deep anomaly detectors (Zhai et al., 2016; Schlegl et al., 2017; Sabokrou et al., 2018; Deecke et al., 2018; Ruff et al., 2018; Golan & El-Yaniv, 2018; Pidhorskyi et al., 2018; Hendrycks et al., 2018, 2019b) that utilize end-to-end learning, a defining property amongst deep learning approaches (Krizhevsky et al., 2012; He et al., 2016).

For deep anomaly detection there is no natural learning objective, and thus several methods have been proposed. One emerging trend is to utilize self-supervision (Golan & El-Yaniv, 2018; Hendrycks et al., 2019b; Bergman & Hoshen, 2020). In these approaches one creates an auxiliary task from a nominal dataset by transforming its samples, and then utilizes these transformations in a fashion that resembles supervision. A different approach uses large unstructured collections of data, which serve a purpose similar to outliers (Hendrycks et al., 2018), to train models akin to classifiers (Ruff et al., 2020b), or to enhance self-supervised learning criteria (Hendrycks et al., 2019b). Considering the rather simplistic objective of these approaches, especially compared to the richness of images, one may wonder whether they learn particularly meaningful features from such training procedures. This is potentially problematic since anomalies can manifest themselves in subtle ways that require a good semantic understanding: for example, anomalous objects may appear in crowded scenes (Mahadevan et al., 2010), or be subject to transitions between day and night in video footage (Sultani et al., 2018).

Recently there has been a large increase in the availability and utilization of pretrained networks. Ideally, these will enhance the representations learned by task-specific downstream models, as they already incorporate different variations commonly seen in data (edges, color, semantic parts, etc.). While He et al. (2019) fundamentally questioned whether actual benefits are brought about by pretrained models, Hendrycks et al. (2019a) recently painted them in a more positive light, showing they boost performance in robustness and uncertainty tasks.

Pretrained representations are relied upon in many areas of machine learning, for example in object detection (Girshick et al., 2014; Girshick, 2015), in transfer learning (Guo et al., 2019), when looking to transfer between large numbers of tasks (Zamir et al., 2018), or from one domain to another (Rebuffi et al., 2017, 2018). Other examples can be found in natural language processing, where a surge of papers has recently elevated the role of pretrained models (Mikolov et al., 2018; Devlin et al., 2018; Howard & Ruder, 2018; Adhikari et al., 2019; Beltagy et al., 2019; Hendrycks et al., 2020).

While anomaly detection fundamentally differs from common pretraining tasks (e.g. ImageNet classification (Deng et al., 2009)) since it is more abstract and less well-defined, it may still benefit from adapting rich, pretrained representations. In doing so, one should however ensure that the change in representation is not excessive (Li et al., 2018), as this risks catastrophic forgetting (Kirkpatrick et al., 2017). For anomaly detection in particular, it is crucial to preserve variations incorporated during pretraining that, even though they potentially do not appear in the training set, can nonetheless be meaningful for inferring anomalous characteristics at test time. On the other hand, it is important to give the network some flexibility to learn new variations which are important for the new task.

In this paper we introduce an anomaly detection method that goes beyond the traditional finetuning paradigm by using lightweight dynamic enhancements (Deecke et al., 2020), which serve as modifications to the pretrained network at every layer; for a visualization, see Figure 1. We call this anomaly detection with residual adaptation (ADRA). This introduces a simple enhanced objective that combines outlier exposure (Hendrycks et al., 2018) and deep one-class classification (Ruff et al., 2020a), two powerful learning techniques for anomaly detection. ADRA is straightforward to train and deploy, highly parameter-efficient, and consolidates pretrained networks and anomaly detection much better than mere feature extraction (Bergman et al., 2020). In extensive experiments we show that ADRA outperforms all previous approaches in the deep anomaly detection literature on a set of common benchmarks. On the CIFAR-10 one-versus-rest benchmark, for example, our technique raises the state of the art from 96.1 to 99.0 mean AUC, reducing the gap to perfect performance by 75%. Besides the strong performance of ADRA, we use images from a new disentanglement dataset (Gondal et al., 2019) to show that ADRA naturally disentangles meaningful variations in the data into its representations.

Figure 1: Instead of finetuning parameters of pretrained models, ADRA keeps them fixed and injects new learnable connections into the network (symbolized in blue). Extending the model with new parameters lets ADRA incorporate representations suitable for deep anomaly detection, while holding on to information incorporated in the pretraining task. Best viewed in color.

2 Related work

Anomaly detection has a long history, with early work going back to Edgeworth (1887), and has been extensively studied in the classical machine learning literature, e.g. through generative models for intrusion detection (Yeung & Chow, 2002), or hidden Markov models for registering network attacks (Ourston et al., 2003). Other examples include active learning of anomalies (Pelleg & Moore, 2005), or dynamic Bayesian networks for traffic incident detection (Singliar & Hauskrecht, 2006). An overview of traditional anomaly detection methods can be found in Chandola et al. (2009), an empirical evaluation in Emmott et al. (2013).

Previous deep anomaly detection methods utilized autoencoders (Zhou & Paffenroth, 2017; Zong et al., 2018), hybrid methods (Erfani et al., 2016), or generative adversarial networks (Schlegl et al., 2017; Akcay et al., 2018; Deecke et al., 2018; Perera et al., 2019). A recent focus is on repurposing auxiliary tasks for anomaly detection, often following the paradigm of self-supervision: Golan & El-Yaniv (2018) propose learning features by predicting geometric transformations of the nominal data, which was extended to other data types by Bergman & Hoshen (2020). In a separate line of work, Hendrycks et al. (2018) propose carrying out anomaly detection through a paradigm they call outlier exposure, in which large unstructured sets of data, assumed to not belong to the normal class, are utilized to improve the performance of deep anomaly detection. Our approach also leverages learning from such corpora, but bypasses all self-supervision steps entirely.

The technique of adding residual connections to adapt networks to new tasks, also known as residual adaptation, was introduced in Rebuffi et al. (2017, 2018). While originally developed for multi-task learning, residual adaptation was recently extended to other problem settings, such as latent domain learning (Deecke et al., 2020). In the realm of language modeling, Stickland & Murray (2019) applied residual adaptations to pretrained BERT networks (Devlin et al., 2018) to improve performance there. Our method further demonstrates the usefulness of residual adaptation outside of multi-task learning by extending it to the task of anomaly detection.

A number of recent publications proposed unsupervised mechanisms to learn disentangled representations (Kulkarni et al., 2015; Higgins et al., 2017; Bouchacourt et al., 2018; Burgess et al., 2018; Chen et al., 2018; Kim & Mnih, 2018; Kumar et al., 2018). Locatello et al. (2019) outlined the incompatibility of unsupervised learning and representation disentanglement, and follow-up work established the need for some form of weak supervision to give rise to disentanglement (Locatello et al., 2020). Considerable hopes have been placed on the usefulness of such disentangled representations (Bengio, 2017; van Steenkiste et al., 2019). We investigate connections to anomaly detection in Section 4.4.

3 Method

We review the individual components of our proposed approach in Sections 3.1 and 3.2, then introduce ADRA in Section 3.3.

3.1 Deep one-class classification

When learning from data, a semantic understanding of normality is typically extracted from a set of data X assumed to have been sampled i.i.d. from the nominal distribution P over some sample space. The different approaches in the anomaly detection literature can be categorized by how this data is then incorporated, e.g. in an unsupervised way (Ruff et al., 2018), or through self-supervision (Golan & El-Yaniv, 2018), c.f. Section 2. The ansatz of outlier exposure (Hendrycks et al., 2018) revolves around the utilization of a large number of unlabeled images from some unstructured corpus of data C (potentially much larger than X), for example 80 Million Tiny Images (Torralba et al., 2008), on which models are trained to identify whether samples belong to the corpus or to the nominal data. Importantly, this is a form of weak supervision via existing resources (Zhou, 2018), and not equivalent to binary classification: images from the corpus are not necessarily outliers (and may even contain samples from P). Nonetheless, this procedure can help models incorporate richer representations of the data in X.
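Concretely, a training batch under outlier exposure mixes nominal and corpus samples, with pseudo-labels marking their origin. A minimal sketch of this batch construction (function and variable names are our own, not from the paper):

```python
import numpy as np

def oe_batch(nominal, corpus, batch_size, rng=None):
    """Draw a training batch for outlier exposure: half of the batch
    comes from the nominal data X (pseudo-label 0) and half from the
    unstructured corpus C (pseudo-label 1)."""
    rng = np.random.default_rng(rng)
    half = batch_size // 2
    xn = nominal[rng.integers(0, len(nominal), half)]
    xc = corpus[rng.integers(0, len(corpus), half)]
    x = np.concatenate([xn, xc])
    y = np.concatenate([np.zeros(half, int), np.ones(half, int)])
    return x, y
```

Note that the pseudo-labels encode only a sample's origin, not ground-truth anomaly status; corpus samples are treated as outlier-like without any guarantee that they are true outliers.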

Initially observed in Ruff et al. (2020b), the structure of anomaly detection tasks benefits from encapsulating the normal class through radial functions, in line with the so-called concentration assumption fundamental to anomaly detection (Schölkopf & Smola, 2002; Steinwart et al., 2005). Ruff et al. (2020a) showed that outlier exposure also benefits from a reformulation via a class of spherical learning objectives, which the authors use to set the current state of the art in anomaly detection performance. Given access to data {(x_i, y_i)}_{i=1}^n, this learning criterion can be expressed as a functional of some model f as

L(f) = (1/n) Σ_{i=1}^n (1 − y_i) h(f(x_i)) − y_i log(1 − exp(−h(f(x_i)))),     (1)

where pseudo-labels are determined by a sample’s origin, i.e. y_i = 0 for x_i ∈ X and y_i = 1 for x_i ∈ C. This loss can be coupled with different radial functions h, which Ruff et al. (2020a) recommend setting to the pseudo-Huber function h(z) = (‖z‖² + 1)^{1/2} − 1. We follow their recommendation in this work, and found it to be stable across experiments.
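The criterion above amounts to a few lines of code. The following is a hedged reimplementation of the spherical objective with the pseudo-Huber radial function; all names are illustrative, and the numerical clipping is our own addition for stability:

```python
import numpy as np

def radial(z):
    # pseudo-Huber radial function h(z) = sqrt(||z||^2 + 1) - 1
    return np.sqrt(np.sum(z * z, axis=-1) + 1.0) - 1.0

def hsc_loss(features, labels):
    """Spherical outlier-exposure loss: nominal samples (y=0) are pulled
    toward the origin via h, corpus samples (y=1) are pushed away via
    the -log(1 - exp(-h)) term."""
    h = radial(features)
    # clip the argument of the log away from zero for numerical stability
    out_term = -np.log(np.clip(1.0 - np.exp(-h), 1e-12, None))
    return float(np.mean(np.where(labels == 1, out_term, h)))
```

A model that maps nominal samples near the origin and corpus samples far from it attains a small loss under this criterion.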

Note that previous works (to which we compare in our experiments, see Section 4) would use some randomly initialized neural network f_θ and obtain its parameterization via minimization of some criterion, in the above case for example θ* ∈ argmin_θ L(f_θ). In ADRA, we constrain the optimization to a more suitably regularized class of functions, see Section 3.3.

3.2 Residual adaptation

In conventional residual networks (He et al., 2016), the information from the l-th layer is passed on via x_{l+1} = x_l + f_l(x_l), where each f_l is typically parameterized as a 3x3 convolution (this omits normalization and activation to declutter notation). Working in the context of multi-domain learning, Rebuffi et al. (2018) proposed adding a small linear correction α_l to every layer, such that

x_{l+1} = x_l + f_l(x_l) + α_l(x_l),     (2)

with each α_l parameterized by a smaller 1x1 convolution. Deecke et al. (2020) recently generalized this concept through a mixture-of-experts approach. For this, a set of linear corrections {α_{l,1}, …, α_{l,K}} is introduced at every layer, which are targeted via a self-attention (Lin et al., 2017) mechanism that adaptively combines the available corrections. This yields

x_{l+1} = x_l + f_l(x_l) + Σ_{k=1}^K a_{l,k}(x_l) α_{l,k}(x_l),     (3)

where the attention weights a_{l,k}(x_l) are nonnegative and sum to one.
As the authors show, residual adaptation gives rise to an efficient plug-in module that encourages parameter sharing between similar modes in data. Crucially, the module also increases the robustness of models in regions where the density associated with the data-generating distribution has little mass — regions of particular importance to anomaly detection.
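To make the mechanism concrete, the following is a rough PyTorch sketch of a single adapted layer: a frozen pretrained 3x3 convolution f_l plus K learnable 1x1 corrections, combined by a gating vector computed from pooled features. The gating here is a simplification of the self-attention mechanism of Deecke et al. (2020), and all module names are hypothetical:

```python
import torch
import torch.nn as nn

class ResidualAdapter(nn.Module):
    """Sketch of one dynamically adapted layer: returns f_l(x) plus the
    attention-weighted sum of K learnable 1x1 corrections. The identity
    skip connection x_l is assumed to be added by the enclosing
    residual block."""

    def __init__(self, pretrained_conv: nn.Conv2d, num_corrections: int = 4):
        super().__init__()
        self.base = pretrained_conv
        for p in self.base.parameters():  # keep pretrained weights fixed
            p.requires_grad = False
        c_in, c_out = pretrained_conv.in_channels, pretrained_conv.out_channels
        self.corrections = nn.ModuleList(
            nn.Conv2d(c_in, c_out, kernel_size=1, bias=False)
            for _ in range(num_corrections)
        )
        # simplified gate over corrections, computed from pooled features
        self.gate = nn.Linear(c_in, num_corrections)

    def forward(self, x):
        a = torch.softmax(self.gate(x.mean(dim=(2, 3))), dim=-1)  # (B, K)
        corr = sum(
            a[:, k, None, None, None] * conv(x)
            for k, conv in enumerate(self.corrections)
        )
        return self.base(x) + corr
```

Because the corrections are 1x1 convolutions, each adapter adds only a small fraction of the parameters of the 3x3 base layer it modulates.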

The motivation behind residual adaptation was developed in earlier work on universal representations (Bilen & Vedaldi, 2017). At their core, universal representations build on the idea that general-purpose parameters obtained through some large pretraining task require only small modifications for them to be adapted to a wide range of tasks. Even though universal representations were originally conceived with the objective of training compact models over sets of very different tasks, our experiments suggest that its insights hold promise for a wider range of learning problems, anomaly detection included.

3.3 Anomaly detection with residual adaptation

Following work that investigated the prospects of large pretrained networks (Devlin et al., 2018; Howard & Ruder, 2018; Adhikari et al., 2019; Beltagy et al., 2019; Hendrycks et al., 2020), a recent study proposed carrying out anomaly detection through a nearest neighbor search on top of features extracted from a large pretrained residual network (Bergman et al., 2020). We include this approach in our experiments (see Table 1), but as its performance shows, simply transferring over fixed representations to an unrelated task does not sufficiently incorporate abstract information, limiting its usefulness for complex, high-dimensional anomaly detection. Next, we outline how parameters obtained from pretraining can be more adequately repurposed for a new task.

The initial pretraining itself follows a simple protocol: a model’s parameters are randomly initialized from some distribution over parameters (for example Xavier initialization (Glorot & Bengio, 2010)). Minimization of a suitable pretraining task T_0 (say, object classification on ImageNet (Deng et al., 2009)) then yields a set of general-purpose parameters θ_0. The so-obtained model is now fully “pretrained”, ready to be used in a downstream task T.

The traditional ansatz for leveraging pretrained models is to “finetune”, i.e. to continue optimizing the model parameters (or a subset thereof) on T. Finetuning can therefore be seen as a type of weight initialization, using weights from a pretrained network. One crucial limitation of this learning protocol is that, unless learning on T is carried out very carefully through the introduction of some explicit inductive bias (Li et al., 2018), it risks catastrophic forgetting of the information previously extracted from T_0. A common approach to alleviate this is to develop adequate forms of regularization, which were used successfully e.g. in continual learning (Kirkpatrick et al., 2017; Lopez-Paz & Ranzato, 2017). The explicit bias we introduce in ADRA is different, and instead obtained by sidestepping the standard finetuning protocol and directly modifying the model’s structure.

To modulate the base network, ADRA introduces a fresh set of model parameters θ_A. These have not previously been exposed to the pretraining task T_0, and serve to incorporate the task-specific variations crucial to inferring anomaly. At the same time, ADRA fixes the pretrained parameters θ_0 and never changes them, which ensures they remain linked to the information obtained in T_0. In doing so, our methodology constrains the learning criterion, recasting the optimization of the radial loss functional introduced in eq. (1) as

θ_A* ∈ argmin_{θ_A} L(f_{θ_0, θ_A}).     (4)

Because the pretrained parameters are fully determined via T_0 and fixed thereafter, only the new set of residual connections θ_A is task-specific. For every nominal class, ADRA therefore only requires the parameters specific to it to be inserted back into the model. This requires a much smaller number of overall model parameters (i.e. |θ_A| ≪ |θ_0|), giving rise to a highly efficient, adaptive architecture f_{θ_0, θ_A}.
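In practice, this constrained optimization amounts to freezing the backbone and handing only the adapter parameters to the optimizer. A sketch, assuming adapter modules can be recognized by (hypothetical) name markers:

```python
import torch
import torch.nn as nn

def split_parameters(model: nn.Module, adapter_markers=("corrections", "gate")):
    """Freeze the pretrained backbone parameters (theta_0) and return
    only the adapter parameters (theta_A) for the optimizer. The name
    markers are hypothetical and depend on how the adapter modules were
    registered in the model."""
    adapter_params = []
    for name, p in model.named_parameters():
        is_adapter = any(m in name for m in adapter_markers)
        p.requires_grad = is_adapter
        if is_adapter:
            adapter_params.append(p)
    return adapter_params
```

Only the returned parameters receive gradient updates, so the pretrained representation is preserved exactly, per-task storage reduces to θ_A, and swapping tasks means swapping only the adapter weights.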

4 Experiments

We consider three settings to evaluate our method: anomaly detection on the (i.) one-versus-rest and (ii.) hold-one-out benchmarks, as well as (iii.) measuring the disentanglement of learned representations. All experiments use code implemented through standard routines in PyTorch (Paszke et al., 2017).

4.1 One-versus-rest anomaly detection

We evaluate the performance of ADRA on the CIFAR-10 (Krizhevsky & Hinton, 2009) one-versus-rest anomaly detection benchmark, which is reported across large parts of the literature (Deecke et al., 2018; Golan & El-Yaniv, 2018; Hendrycks et al., 2018; Ruff et al., 2018; Abati et al., 2019; Hendrycks et al., 2019b; Perera et al., 2019; Bergman & Hoshen, 2020; Ruff et al., 2020b, a). This benchmark is not equivalent to CIFAR-10 classification, and instead consists of ten individual tasks: in each, a single class is fixed as the normal class (say, dogs). All dogs in the CIFAR-10 training split are collected into the nominal set, from which models then learn about the nominal distribution. Finally, models are evaluated against the entire CIFAR-10 test split, and performance is recorded by checking whether anomaly scores assigned to dogs are lower than scores assigned to the remaining non-dog classes. To express performance through a single number, authors usually report the AUC under the receiver operating characteristic; we follow this practice here.
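The AUC in this protocol is simply the probability that a randomly drawn anomaly receives a higher anomaly score than a randomly drawn nominal sample. A small self-contained scorer (an O(n·m) pairwise comparison, adequate for illustration; names are our own):

```python
import numpy as np

def auc_score(scores_nominal, scores_anomalous):
    """Rank-based AUC: fraction of (nominal, anomalous) pairs in which
    the anomalous sample receives the higher anomaly score, counting
    ties as half."""
    neg = np.asarray(scores_nominal, dtype=float)[:, None]
    pos = np.asarray(scores_anomalous, dtype=float)[None, :]
    return float(np.mean((pos > neg) + 0.5 * (pos == neg)))
```

An AUC of 1.0 means every anomaly is scored above every nominal sample; 0.5 corresponds to random ranking.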

In ADRA, we additionally contrast against images from an unstructured corpus. Guided by previous work (Hendrycks et al., 2018; Ruff et al., 2020a), we fix this corpus to contain all samples from the CIFAR-100 training split. Access to the corpus is best thought of as a weak supervisory signal: CIFAR-100 does not contain any of the classes in CIFAR-10, so 9 out of 10 CIFAR-10 classes are only seen during evaluation, making them true outliers.


Training is carried out using stochastic gradient descent (with a momentum parameter of 0.9 and weight decay) for a total of 120 epochs, with learning rate reductions by 1/10 after 80 and 100 epochs. The batch size is fixed to 128, with each batch containing an equivalent number of samples from the nominal data and the corpus. All experiments use a residual network with 26 layers, and unless otherwise noted, its initial parameters were obtained from pretraining on a downsized ImageNet variant (at 72x72 resolution), for a final top-1 accuracy of 60.32%. ADRA parameters are always initialized randomly, and then trained on the constrained objective in eq. (4). We vary the number K of linear corrections in ADRA. Performances for ADRA are recorded by averaging over five random initializations.
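The optimization schedule can be set up as follows. The learning-rate and weight-decay values are not recoverable from the text, so both are left as arguments with placeholder defaults rather than presented as the paper's settings:

```python
import torch

def make_optimizer(trainable_params, lr=0.1, weight_decay=0.0):
    """SGD with momentum 0.9 and 1/10 learning-rate drops after epochs
    80 and 100, as described in the text. The default lr and weight
    decay here are placeholders, not values from the paper."""
    opt = torch.optim.SGD(trainable_params, lr=lr, momentum=0.9,
                          weight_decay=weight_decay)
    sched = torch.optim.lr_scheduler.MultiStepLR(
        opt, milestones=[80, 100], gamma=0.1)
    return opt, sched
```

The scheduler is stepped once per epoch, so over the 120 epochs the learning rate is reduced twice.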


0 74.7 78.5 93.9 90.4 96.4 98.8 99.0 99.1
1 95.7 89.8 97.7 99.3 98.8 99.7 99.8 99.8
2 78.1 86.1 85.5 93.7 93.0 96.3 98.1 98.6
3 72.4 77.4 85.5 88.1 90.0 95.7 96.3 97.3
4 87.8 90.5 93.6 97.4 97.1 98.2 99.1 99.1
5 87.8 84.5 91.3 94.3 94.2 97.4 98.1 98.2
6 83.4 89.2 94.3 97.1 98.0 99.4 99.6 99.6
7 95.5 92.9 93.6 98.8 97.6 99.1 99.5 99.5
8 93.3 92.0 95.1 98.7 98.1 99.4 99.4 99.5
9 91.3 85.5 95.3 98.5 97.7 99.1 99.4 99.3
Mean AUC 86.0 86.6 92.5 95.6 96.1 98.3 98.8 99.0
Table 1: AUCs for different methods on the CIFAR-10 one-versus-rest anomaly detection benchmark; rows correspond to the normal class, and the final row reports the mean. For ADRA we vary the number of linear corrections K.

As shown in Table 1, ADRA raises the state of the art to 99.0 mean AUC, closing the gap between the previous best method of 96.1 mean AUC (SAD (Ruff et al., 2020a)) and perfect classification by roughly 75%. As demonstrated by the performance of kNN-AD (Bergman et al., 2020), a different work also focusing on the utilization of pretrained networks for anomaly detection, simply using features from a large pretrained network alone does not solve the problem of detecting anomalies. Our results suggest including adaptation in a proper way is critical for utilizing these networks to their full potential.

Dynamic residual adaptation with self-attention as proposed by Deecke et al. (2020) improves performance across the benchmark. For larger K, the improvement in effectiveness is reduced, as sufficient adaptivity is likely already obtained with a smaller number of experts. Unless otherwise noted, in subsequent trials we therefore fix K.

Existing methods require that all parameters of each model are stored. This scales their memory requirements as O(T), where T denotes the number of subtasks (e.g. ten for our benchmarks). For ADRA, a small set of task-specific corrections augments the base model, reducing its footprint to O(1) in the size of the base network. On our benchmarks, for example, ADRA requires only a fraction of the parameters needed to parameterize ten individual ResNet26 models. We visualize these savings in Figure 3 (left).
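The footprint argument is simple arithmetic: storing T independent models costs T·|θ_0| parameters, while ADRA stores one frozen base plus T small adapter sets. A sketch (any concrete parameter counts plugged in are illustrative, not the paper's figures):

```python
def footprint(base_params, adapter_params, num_tasks):
    """Compare storing num_tasks full models, which is O(T) in the base
    model size, against one frozen base plus per-task adapters, which is
    O(1) in the base model size."""
    full_models = num_tasks * base_params
    adra = base_params + num_tasks * adapter_params
    return full_models, adra
```

As long as |θ_A| is small relative to |θ_0|, the savings grow linearly with the number of tasks.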

4.2 Robustness to small modes

Ideally, models have the ability to incorporate information from nominal samples even if they form only a minor mode of the nominal distribution, such that only a few samples from this mode are contained in the training set. Deecke et al. (2020) suggest that dynamic residual adaptation improves robustness to small latent domains, i.e. regions where the density associated with the data-generating distribution has little mass. Next, we evaluate this property in the context of anomaly detection.

For this experiment, we let the normal class be constituted by samples associated with a pair of classes (c_1, c_2), with a mixing ratio p controlling the presence of samples from the secondary class c_2. For a robust model, even as p is relaxed toward zero, its ability to detect normality amongst the secondary class remains intact.
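One plausible construction of such a mixed nominal set is to subsample the secondary class by the ratio p; the exact mixing rule is not recoverable from the text, so the following is a sketch under that assumption:

```python
import numpy as np

def mixed_nominal_set(primary, secondary, p, rng=None):
    """Build a nominal training set from all samples of a primary class
    plus a fraction p of a secondary class. p=1 keeps the secondary
    class whole, while p near zero shrinks it to a small nominal mode."""
    rng = np.random.default_rng(rng)
    k = int(round(p * len(secondary)))
    picked = rng.choice(len(secondary), size=k, replace=False)
    return np.concatenate([primary, secondary[picked]], axis=0)
```

Evaluation then asks whether held-out samples of the secondary class are still scored as normal as p shrinks.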

Figure 2: Relative AUCs for the secondary class for pairs of classes from CIFAR-10. Shown are performances for a random initialization, traditional finetuning, and ADRA. Dashed curves indicate the relative change in performance for the primary class.

We pair up classes from CIFAR-10, and report AUCs for the primary and secondary class in Figure 2, relative to the setting in which both classes are fully present. When models are trained from a random initialization, their performance falls off faster than for approaches that start from pretrained networks. This trend is consistent across class pairings. While traditional finetuning shows a modest increase in performance, ADRA offers the highest robustness at incorporating small nominal modes.

4.3 Hold-one-out anomaly detection

An alternative benchmark, implemented in Perera et al. (2019) and Ahmed & Courville (2019), is to collect multiple classes into the nominal data, while only a single class is declared anomalous and set aside during training. At test time, the task is then to identify samples from the held-out class. It has been argued that this is a more difficult benchmark than one-versus-rest, since it requires the learning of multiple nominal modes (Ahmed & Courville, 2019; Bergman et al., 2020).

On the hold-one-out benchmark, Ahmed & Courville (2019) evaluate the performance of ranking anomaly via maximum softmax probability (Hendrycks & Gimpel, 2017) and ODIN (Liang et al., 2018), in combination with an auxiliary self-supervised criterion inspired by RotNet (Gidaris et al., 2018). In particular, the authors propose evaluating models on STL-10 (Coates et al., 2011) for a novel, more difficult benchmark: its images have a higher resolution of 96x96, while only containing 500 samples for each object class. The dataset also contains a large unlabeled split, which we collect into our corpus for outlier exposure.

For STL-10, we pretrained a ResNet26 on ImageNet at this resolution, reaching a final top-1 classification accuracy of 63.74%. All other optimization settings remain unchanged from those outlined in Section 4.1.

0 49.8 43.0 50.8 42.9 23.4 23.1 44.5 43.1
1 17.4 76.7 77.4 88.4 40.1 13.8 14.8 23.7
2 54.6 61.1 68.0 74.4 16.9 39.9 82.2 56.0
3 55.8 65.8 72.7 72.5 31.4 18.9 27.4 37.5
4 52.8 60.6 65.3 73.3 29.7 25.3 17.0 28.8
5 32.5 64.2 65.1 63.3 26.1 17.3 12.3 18.5
6 54.4 84.0 89.9 90.7 23.6 30.1 22.1 33.5
7 39.7 52.9 46.8 53.2 28.3 18.4 14.9 44.2
8 28.8 70.8 64.2 74.4 15.4 49.2 70.3 59.4
9 29.9 87.7 84.5 94.1 16.6 40.7 44.4 55.7
Mean AP 41.2 66.7 68.5 72.7 25.1 27.7 35.0 40.0
Table 2: Average precisions on the CIFAR-10 and STL-10 hold-one-out benchmarks; rows correspond to the held-out class. ADRA uses the number of corrections K fixed in Section 4.1.


We follow Ahmed & Courville (2019) and report performance on the hold-one-out benchmark in terms of average precision. We include their best results, rotation-augmented ODIN (RA-ODIN), in Table 2. We do not include the results for OCGAN from Perera et al. (2019) in our tabulation, as the authors report AUC instead. Note, however, that ADRA outperforms OCGAN on mean AUC by a wide margin: 94.7 versus 65.7.

As our results confirm, inferring anomaly on STL-10 is significantly harder. In particular, traditional finetuning (TF) does not successfully address the task of anomaly detection, likely because variations that are important for determining anomaly at test time are overwritten in the finetuning process, yielding poor performance across classes (mean AP of 27.7). To counteract this, we follow the recommendation of Li et al. (2018) and add a penalty toward the initial pretrained parameters via L2-SP regularization, scaled with a fixed regularization strength. While this boosts performance somewhat (mean AP of 35.0), simply regularizing the otherwise unchanged finetuning process does not fully address the issue of forgetting information extracted during pretraining.
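For reference, the L2-SP penalty of Li et al. (2018) replaces the usual weight decay toward zero with a decay toward the pretrained starting point. A sketch (the regularization strength used above was lost in extraction, so it is left as an argument):

```python
import torch

def l2_sp_penalty(model, pretrained_state, strength):
    """L2-SP regularizer: penalize the squared distance of the current
    weights from their pretrained starting point, rather than from
    zero, to discourage forgetting the pretrained representation."""
    penalty = 0.0
    for name, p in model.named_parameters():
        if name in pretrained_state:
            penalty = penalty + ((p - pretrained_state[name]) ** 2).sum()
    return strength * penalty
```

The penalty is added to the task loss during finetuning; it is zero at initialization and grows as the weights drift from the pretrained solution.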

By modifying the structure of the base model, ADRA introduces a different explicit bias toward the initial pretrained representation. Our results indicate that preserving the initial pretrained parameters and adapting the network to the task through the introduction of new network components allows one to keep both the benefits of pretraining and learning for a specific task.

4.4 Disentanglement of representations

Figure 3: Left: memory requirements for incorporating ten tasks, as in our benchmarks. ADRA combines the parameters of a single pretrained base model with task-specific corrections, giving rise to a lean overall model that incurs large savings compared to SAD and other previous methods. Right: DCI disentanglement for different methods, both unsupervised (SVDD, UADRA) and using weak supervision through outlier exposure (TF, L2-SP, SAD, ADRA).

Next, we take a closer look at the representations learned by different anomaly detection methods. For this, we examine models on MPI3D (Gondal et al., 2019), a recently released dataset to facilitate research on disentangled representations. It contains joint pairs of ground-truth factors (color, shape, angle, etc.), and corresponding images of a robot arm mounted with an object. The original dataset comes in three different styles (photo-realistic, simple or detailed animation); we only make use of the more complex, photo-realistic images here.

We compare ADRA to two fundamental approaches to deep anomaly detection: deep SVDD, a fully unsupervised one-class model proposed by Ruff et al. (2018), and SAD (Ruff et al., 2020b;a), which can be viewed as a direct extension of SVDD that additionally incorporates outlier exposure into the learning criterion. Moreover, we include traditional finetuning (TF) and finetuning with regularization (L2-SP (Li et al., 2018)), both trained with outlier exposure. To ablate against weak supervision, we also train our method in an unsupervised fashion (UADRA), i.e. with an empty corpus.

For our experiments on MPI3D, we arbitrarily fix a red cone as the normal object, and then train models on all available views. For methods that use outlier exposure, all images that do not constitute the normal class appear in the corpus. For ADRA, we can simply reuse the pretrained network also used in our CIFAR-10 benchmarks, and otherwise leave the optimization protocol unchanged from that in Section 4.1. To measure disentanglement, we follow Locatello et al. (2020) and evaluate models in terms of DCI disentanglement (Eastwood & Williams, 2018).


Disentanglement is often a desirable property (Bengio, 2017; van Steenkiste et al., 2019). However, representations learned via SVDD, i.e. fully unsupervised anomaly detection, exhibit relatively little of it (see Figure 3, right). Outlier exposure as in SAD raises disentanglement. This is in line with observations in Locatello et al. (2020), which showed that some weak supervision is required for learning disentangled representations.

ADRA exhibits the highest amount of disentanglement in its representations. To ensure that this actually stems from our model, we ablate it against UADRA. Removing access to the weak supervision of the corpus causes another anticipated loss in disentanglement, but UADRA still disentangles better than SVDD. So while both our model and outlier exposure increase DCI disentanglement, their benefits should be viewed as distinct.

Interestingly, TF and L2-SP perform much worse than SAD, even though they differ only in their parameter initialization. This is potentially because the pretrained network weights are not particularly well suited for the task, and the dataset is too small in proportion to the number of free parameters to adapt them. ADRA, on the other hand, has a much smaller number of free parameters, thus allowing for sample-efficient utilization of those features from the pretrained network which are useful.

5 Conclusion

Detecting anomalies is a difficult task, especially when carried out in high-dimensional spaces. In this paper, we introduced a powerful and simple method for deep anomaly detection. By incorporating dynamic residual adaptation to leverage pretrained models, ADRA constitutes a parameter-efficient learning protocol.

Our method exhibits strong performance across common benchmarks for deep anomaly detection, and can robustly incorporate small modes of nominal data. Moreover, we established a positive link between anomaly detection performance and the disentanglement of learned representations, indicating that deep anomaly detection can directly benefit from the ongoing development of disentangled representations.


  • Abati et al. (2019) Davide Abati, Angelo Porrello, Simone Calderara, and Rita Cucchiara. Latent space autoregression for novelty detection. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 481–490, 2019.
  • Adhikari et al. (2019) Ashutosh Adhikari, Achyudh Ram, Raphael Tang, and Jimmy Lin. Docbert: Bert for document classification. arXiv preprint arXiv:1904.08398, 2019.
  • Ahmed & Courville (2019) Faruk Ahmed and Aaron Courville. Detecting semantic anomalies. arXiv preprint arXiv:1908.04388, 2019.
  • Akcay et al. (2018) Samet Akcay, Amir Atapour-Abarghouei, and Toby P Breckon. GANomaly: Semi-supervised anomaly detection via adversarial training. In Asian Conference on Computer Vision, pp. 622–637. Springer, 2018.
  • Beltagy et al. (2019) Iz Beltagy, Kyle Lo, and Arman Cohan. SciBERT: A pretrained language model for scientific text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, pp. 3606–3611, 2019.
  • Bengio (2017) Yoshua Bengio. The consciousness prior. arXiv preprint arXiv:1709.08568, 2017.
  • Bergman & Hoshen (2020) Liron Bergman and Yedid Hoshen. Classification-based anomaly detection for general data. In International Conference on Learning Representations, 2020.
  • Bergman et al. (2020) Liron Bergman, Niv Cohen, and Yedid Hoshen. Deep nearest neighbor anomaly detection. arXiv preprint arXiv:2002.10445, 2020.
  • Bilen & Vedaldi (2017) Hakan Bilen and Andrea Vedaldi. Universal representations: The missing link between faces, text, planktons, and cat breeds. arXiv preprint arXiv:1701.07275, 2017.
  • Bouchacourt et al. (2018) Diane Bouchacourt, Ryota Tomioka, and Sebastian Nowozin. Multi-level variational autoencoder: Learning disentangled representations from grouped observations. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
  • Burgess et al. (2018) Christopher P Burgess, Irina Higgins, Arka Pal, Loic Matthey, Nick Watters, Guillaume Desjardins, and Alexander Lerchner. Understanding disentangling in β-VAE. arXiv preprint arXiv:1804.03599, 2018.
  • Chandola et al. (2009) Varun Chandola, Arindam Banerjee, and Vipin Kumar. Anomaly detection: A survey. ACM Computing Surveys (CSUR), 41(3):15, 2009.
  • Chen et al. (2018) Tian Qi Chen, Xuechen Li, Roger B Grosse, and David K Duvenaud. Isolating sources of disentanglement in variational autoencoders. In Advances in Neural Information Processing Systems, pp. 2610–2620, 2018.
  • Coates et al. (2011) Adam Coates, Andrew Ng, and Honglak Lee. An analysis of single-layer networks in unsupervised feature learning. In International Conference on Artificial Intelligence and Statistics, pp. 215–223, 2011.
  • Deecke et al. (2018) Lucas Deecke, Robert Vandermeulen, Lukas Ruff, Stephan Mandt, and Marius Kloft. Image anomaly detection with generative adversarial networks. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 3–17. Springer, 2018.
  • Deecke et al. (2020) Lucas Deecke, Timothy Hospedales, and Hakan Bilen. Latent domain learning with dynamic residual adapters. arXiv preprint arXiv:2006.00996, 2020.
  • Deng et al. (2009) Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255, 2009.
  • Devlin et al. (2018) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
  • Eastwood & Williams (2018) Cian Eastwood and Christopher KI Williams. A framework for the quantitative evaluation of disentangled representations. In International Conference on Learning Representations, 2018.
  • Edgeworth (1887) FY Edgeworth. XLI. on discordant observations. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 23(143):364–375, 1887.
  • Emmott et al. (2013) Andrew F Emmott, Shubhomoy Das, Thomas Dietterich, Alan Fern, and Weng-Keen Wong. Systematic construction of anomaly detection benchmarks from real data. In ACM SIGKDD Workshop on Outlier Detection and Description, pp. 16–21. ACM, 2013.
  • Erfani et al. (2016) Sarah M Erfani, Sutharshan Rajasegarar, Shanika Karunasekera, and Christopher Leckie. High-dimensional and large-scale anomaly detection using a linear one-class SVM with deep learning. Pattern Recognition, 58:121–134, 2016.
  • Gidaris et al. (2018) Spyros Gidaris, Praveer Singh, and Nikos Komodakis. Unsupervised representation learning by predicting image rotations. In International Conference on Learning Representations, 2018.
  • Girshick (2015) Ross Girshick. Fast R-CNN. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 1440–1448, 2015.
  • Girshick et al. (2014) Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 580–587, 2014.
  • Glorot & Bengio (2010) Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In International Conference on Artificial Intelligence and Statistics, pp. 249–256, 2010.
  • Golan & El-Yaniv (2018) Izhak Golan and Ran El-Yaniv. Deep anomaly detection using geometric transformations. In Advances in Neural Information Processing Systems, pp. 9758–9769, 2018.
  • Gondal et al. (2019) Muhammad Waleed Gondal, Manuel Wuthrich, Djordje Miladinovic, Francesco Locatello, Martin Breidt, Valentin Volchkov, Joel Akpo, Olivier Bachem, Bernhard Schölkopf, and Stefan Bauer. On the transfer of inductive bias from simulation to the real world: a new disentanglement dataset. In Advances in Neural Information Processing Systems, pp. 15714–15725, 2019.
  • Grubbs (1969) Frank E Grubbs. Procedures for detecting outlying observations in samples. Technometrics, 11(1):1–21, 1969.
  • Guo et al. (2019) Yunhui Guo, Honghui Shi, Abhishek Kumar, Kristen Grauman, Tajana Rosing, and Rogerio Feris. Spottune: transfer learning through adaptive fine-tuning. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 4805–4814, 2019.
  • He et al. (2016) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, 2016.
  • He et al. (2019) Kaiming He, Ross Girshick, and Piotr Dollár. Rethinking imagenet pre-training. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 4918–4927, 2019.
  • Hendrycks & Gimpel (2017) Dan Hendrycks and Kevin Gimpel. A baseline for detecting misclassified and out-of-distribution examples in neural networks. In International Conference on Learning Representations, 2017.
  • Hendrycks et al. (2018) Dan Hendrycks, Mantas Mazeika, and Thomas Dietterich. Deep anomaly detection with outlier exposure. arXiv preprint arXiv:1812.04606, 2018.
  • Hendrycks et al. (2019a) Dan Hendrycks, Kimin Lee, and Mantas Mazeika. Using pre-training can improve model robustness and uncertainty. arXiv preprint arXiv:1901.09960, 2019a.
  • Hendrycks et al. (2019b) Dan Hendrycks, Mantas Mazeika, Saurav Kadavath, and Dawn Song. Using self-supervised learning can improve model robustness and uncertainty. In Advances in Neural Information Processing Systems, pp. 15637–15648, 2019b.
  • Hendrycks et al. (2020) Dan Hendrycks, Xiaoyuan Liu, Eric Wallace, Adam Dziedzic, Rishabh Krishnan, and Dawn Song. Pretrained transformers improve out-of-distribution robustness. arXiv preprint arXiv:2004.06100, 2020.
  • Higgins et al. (2017) Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. beta-VAE: Learning basic visual concepts with a constrained variational framework. In International Conference on Learning Representations, 2017.
  • Howard & Ruder (2018) Jeremy Howard and Sebastian Ruder. Universal language model fine-tuning for text classification. arXiv preprint arXiv:1801.06146, 2018.
  • Kim & Mnih (2018) Hyunjik Kim and Andriy Mnih. Disentangling by factorising. In International Conference on Learning Representations, 2018.
  • Kirkpatrick et al. (2017) James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 114(13):3521–3526, 2017.
  • Krizhevsky & Hinton (2009) Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009.
  • Krizhevsky et al. (2012) Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097–1105, 2012.
  • Kulkarni et al. (2015) Tejas D Kulkarni, William F Whitney, Pushmeet Kohli, and Josh Tenenbaum. Deep convolutional inverse graphics network. In Advances in Neural Information Processing Systems, pp. 2539–2547, 2015.
  • Kumar et al. (2018) Abhishek Kumar, Prasanna Sattigeri, and Avinash Balakrishnan. Variational inference of disentangled latent concepts from unlabeled observations. In International Conference on Learning Representations, 2018.
  • Li et al. (2018) Xuhong Li, Yves Grandvalet, and Franck Davoine. Explicit inductive bias for transfer learning with convolutional networks. In International Conference on Machine Learning, pp. 2825–2834, 2018.
  • Liang et al. (2018) Shiyu Liang, Yixuan Li, and Rayadurgam Srikant. Enhancing the reliability of out-of-distribution image detection in neural networks. In International Conference on Learning Representations, 2018.
  • Lin et al. (2017) Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. A structured self-attentive sentence embedding. In International Conference on Learning Representations, 2017.
  • Locatello et al. (2019) Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf, and Olivier Bachem. Challenging common assumptions in the unsupervised learning of disentangled representations. In International Conference on Machine Learning, 2019.
  • Locatello et al. (2020) Francesco Locatello, Ben Poole, Gunnar Rätsch, Bernhard Schölkopf, Olivier Bachem, and Michael Tschannen. Weakly-supervised disentanglement without compromises. arXiv preprint arXiv:2002.02886, 2020.
  • Lopez-Paz & Ranzato (2017) David Lopez-Paz and Marc’Aurelio Ranzato. Gradient episodic memory for continual learning. In Advances in Neural Information Processing Systems, pp. 6467–6476, 2017.
  • Mahadevan et al. (2010) Vijay Mahadevan, Weixin Li, Viral Bhalodia, and Nuno Vasconcelos. Anomaly detection in crowded scenes. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 1975–1981, 2010.
  • Mikolov et al. (2018) Tomas Mikolov, Edouard Grave, Piotr Bojanowski, Christian Puhrsch, and Armand Joulin. Advances in pre-training distributed word representations. In International Conference on Language Resources and Evaluation, 2018.
  • Ourston et al. (2003) Dirk Ourston, Sara Matzner, William Stump, and Bryan Hopkins. Applications of hidden Markov models to detecting multi-stage network attacks. In Proceedings of the 36th Annual Hawaii International Conference on System Sciences. IEEE, 2003.
  • Paszke et al. (2017) Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in PyTorch. 2017.
  • Pelleg & Moore (2005) Dan Pelleg and Andrew W Moore. Active learning for anomaly and rare-category detection. In Advances in Neural Information Processing Systems, pp. 1073–1080, 2005.
  • Perera et al. (2019) Pramuditha Perera, Ramesh Nallapati, and Bing Xiang. OCGAN: One-class novelty detection using gans with constrained latent representations. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 2898–2906, 2019.
  • Pidhorskyi et al. (2018) Stanislav Pidhorskyi, Ranya Almohsen, and Gianfranco Doretto. Generative probabilistic novelty detection with adversarial autoencoders. In Advances in Neural Information Processing Systems, pp. 6822–6833, 2018.
  • Rebuffi et al. (2018) S-A. Rebuffi, H. Bilen, and A. Vedaldi. Efficient parametrization of multi-domain deep neural networks. In IEEE Conference on Computer Vision and Pattern Recognition, 2018.
  • Rebuffi et al. (2017) Sylvestre-Alvise Rebuffi, Hakan Bilen, and Andrea Vedaldi. Learning multiple visual domains with residual adapters. In Advances in Neural Information Processing Systems, pp. 506–516, 2017.
  • Ruff et al. (2018) Lukas Ruff, Robert Vandermeulen, Nico Görnitz, Lucas Deecke, Shoaib Ahmed Siddiqui, Alexander Binder, Emmanuel Müller, and Marius Kloft. Deep one-class classification. In International Conference on Machine Learning, pp. 4393–4402, 2018.
  • Ruff et al. (2020a) Lukas Ruff, Robert A Vandermeulen, Billy Joe Franks, Klaus-Robert Müller, and Marius Kloft. Rethinking assumptions in deep anomaly detection. arXiv preprint arXiv:2006.00339, 2020a.
  • Ruff et al. (2020b) Lukas Ruff, Robert A Vandermeulen, Nico Görnitz, Alexander Binder, Emmanuel Müller, Klaus-Robert Müller, and Marius Kloft. Deep semi-supervised anomaly detection. In International Conference on Learning Representations, 2020b.
  • Sabokrou et al. (2018) Mohammad Sabokrou, Mohammad Khalooei, Mahmood Fathy, and Ehsan Adeli. Adversarially learned one-class classifier for novelty detection. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 3379–3388, 2018.
  • Schlegl et al. (2017) Thomas Schlegl, Philipp Seeböck, Sebastian M Waldstein, Ursula Schmidt-Erfurth, and Georg Langs. Unsupervised anomaly detection with generative adversarial networks to guide marker discovery. In International Conference on Information Processing in Medical Imaging, pp. 146–157. Springer, 2017.
  • Schölkopf & Smola (2002) Bernhard Schölkopf and Alex J Smola. Learning with Kernels. MIT press, 2002.
  • Schölkopf et al. (1999) Bernhard Schölkopf, John C Platt, John Shawe-Taylor, Alex J Smola, and Robert C Williamson. Estimating the support of a high-dimensional distribution. Technical Report MSR-TR-99-87, Microsoft Research, 1999.
  • Singliar & Hauskrecht (2006) Tomas Singliar and Milos Hauskrecht. Towards a learning traffic incident detection system. In Workshop on Machine Learning Algorithms for Surveillance and Event Detection, International Conference on Machine Learning, 2006.
  • Steinwart et al. (2005) Ingo Steinwart, Don Hush, and Clint Scovel. A classification framework for anomaly detection. Journal of Machine Learning Research, 6(Feb):211–232, 2005.
  • Stickland & Murray (2019) Asa Cooper Stickland and Iain Murray. BERT and PALs: Projected attention layers for efficient adaptation in multi-task learning. In International Conference on Machine Learning, 2019.
  • Sultani et al. (2018) Waqas Sultani, Chen Chen, and Mubarak Shah. Real-world anomaly detection in surveillance videos. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 6479–6488, 2018.
  • Torralba et al. (2008) Antonio Torralba, Rob Fergus, and William T Freeman. 80 million tiny images: A large data set for nonparametric object and scene recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(11):1958–1970, 2008.
  • van Steenkiste et al. (2019) Sjoerd van Steenkiste, Francesco Locatello, Jürgen Schmidhuber, and Olivier Bachem. Are disentangled representations helpful for abstract visual reasoning? In Advances in Neural Information Processing Systems, pp. 14222–14235, 2019.
  • Yeung & Chow (2002) Dit-Yan Yeung and Calvin Chow. Parzen-window network intrusion detectors. In International Conference on Pattern Recognition, volume 4, pp. 385–388. IEEE, 2002.
  • Zamir et al. (2018) Amir R Zamir, Alexander Sax, William Shen, Leonidas J Guibas, Jitendra Malik, and Silvio Savarese. Taskonomy: Disentangling task transfer learning. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 3712–3722, 2018.
  • Zhai et al. (2016) Shuangfei Zhai, Yu Cheng, Weining Lu, and Zhongfei Zhang. Deep structured energy based models for anomaly detection. In International Conference on Machine Learning, volume 48, pp. 1100–1109, 2016.
  • Zhou & Paffenroth (2017) Chong Zhou and Randy C Paffenroth. Anomaly detection with robust deep autoencoders. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 665–674, 2017.
  • Zhou (2018) Zhi-Hua Zhou. A brief introduction to weakly supervised learning. National Science Review, 5(1):44–53, 2018.
  • Zong et al. (2018) Bo Zong, Qi Song, Martin Renqiang Min, Wei Cheng, Cristian Lumezanu, Daeki Cho, and Haifeng Chen. Deep autoencoding Gaussian mixture model for unsupervised anomaly detection. In International Conference on Learning Representations, 2018.