An anomaly (or outlier, novelty, out-of-distribution sample) is an observation that differs significantly from the vast majority of the data. Anomaly detection (AD) tries to distinguish anomalous samples from the samples that are deemed ‘normal’ in the data. Detecting these anomalies has become increasingly relevant for making machine learning methods more reliable and for improving their applicability in real-world scenarios, such as automated industrial inspection and medical diagnosis (Ruff et al., 2021). Typically, anomaly detection is treated as an unsupervised learning problem, since labelled data is generally unavailable and to allow for the development of methods that can detect previously unseen anomalies.
One promising direction involves the adaptation of contrastive learning approaches (Hjelm et al., 2019; Oord et al., 2019; Chen et al., 2020; He et al., 2020) to the anomaly detection setting (Tack et al., 2020; Winkens et al., 2020; Kopuklu et al., 2021; Qiu et al., 2021; Sohn et al., 2021). However, even though most of these approaches have been applied to image data, none of them uses the contrastive loss directly for both anomaly detection and segmentation.
In this paper, we demonstrate that Contrastive Predictive Coding (CPC) (Oord et al., 2019; Hénaff et al., 2020) can be applied to detect and segment anomalies in images. We show that the InfoNCE loss introduced by Oord et al. (2019) can be directly interpreted as an anomaly score. Since this loss contrasts patches from within an image against one another, we can further use it to create accurate anomaly segmentation masks. This results in a compact and straightforward approach to anomaly detection and segmentation.
To improve the performance of the CPC model for anomaly detection, we introduce two adjustments. First, we adapt the setup of negative samples during testing such that anomalous patches can only appear within the positive sample. Second, we omit the autoregressive part of the CPC model. With these adjustments, our proposed method achieves promising performance on real-world data, such as the challenging MVTec-AD dataset (Bergmann et al., 2019).
2 Related Work
In this section, we will give an overview of contrastive learning approaches and different methods for anomaly detection.
2.1 Contrastive Learning
Lately, impressive results have been achieved with self-supervised methods based on contrastive learning (Wu et al., 2018; Oord et al., 2019; Hjelm et al., 2019; He et al., 2020; Chen et al., 2020; Li et al., 2021b). Overall, these methods work by making a model decide whether two (randomly) transformed inputs originated from the same input sample, or from two samples that have been randomly drawn from across the dataset. Different transformations can be chosen depending on the domain and downstream task. For example, on image data, random data augmentation such as random cropping and color jittering has proven useful (Chen et al., 2020; He et al., 2020). In this paper, we use the Contrastive Predictive Coding model (Oord et al., 2019; Hénaff et al., 2020), which makes use of temporal transformations. Generally, these approaches are evaluated by training a linear classifier on top of the created representations and by measuring the performance that this linear classifier can achieve on downstream tasks.
2.2 Anomaly Detection
Anomaly detection methods can roughly be divided into three categories: density-based, reconstruction-based and discriminative-based methods (Ruff et al., 2021). Density-based methods fit a density model to the normal data and flag low-likelihood samples as anomalous (Winkens et al., 2020; Liu et al., 2020); reconstruction-based methods are based on models that are trained with a reconstruction objective (e.g. autoencoders) (Zhou & Paffenroth, 2017; Bergmann et al., 2018; Luo et al., 2020); discriminative-based methods learn a decision boundary between anomalous and normal data (e.g. SVM, one-class classification) (Ruff et al., 2020; Tack et al., 2020; Liznerski et al., 2021; Li et al., 2021a). The method proposed in this paper can be seen as a density-based method with a discriminative one-class objective.
Several of these contrastive AD methods (e.g. Tack et al., 2020; Sohn et al., 2021) first use contrastive learning to learn representations of the data. Then, they calculate a separate anomaly score by using these representations for density estimation, one-class classification, or by applying metric measures like the cosine similarity and the norm of the representations. The downsides of this approach are that it requires extensive data augmentations and multiple different measures, or multiple models. Another comparable contrastive learning AD method (Kopuklu et al., 2021) uses noise contrastive estimation for training, similar to our method. Unlike our method, they map the samples to multiple latent spaces and use anomalous samples as negatives during training. This results in a more complex model with a supervised training phase. NeuTraL AD (Qiu et al., 2021) makes use of a contrastive loss with learnable transformations, and reuses this loss as an anomaly score. In contrast to our method, their approach has been evaluated on time-series and tabular data.
3 Contrastive Predictive Coding
Contrastive Predictive Coding (Oord et al., 2019) is a self-supervised representation learning approach that leverages the structure of the data and enforces temporally nearby inputs to be encoded similarly in latent space. It achieves this by making the model decide whether a pair of samples is made up of temporally nearby samples or randomly assigned samples. This approach can also be applied to static image data by splitting the images up into patches, and interpreting each row of patches as a separate time-step.
The CPC model makes use of a contrastive loss function, coined InfoNCE, that is based on Noise-Contrastive Estimation (Gutmann & Hyvärinen, 2010) and is designed to optimize the mutual information between the latent representations of patches ($z_t$) and their surrounding patches ($c_t$):

$$\mathcal{L}_{t,k} = -\log \frac{\exp\left(z_{t+k}^{\top} W_k \, c_t\right)}{\sum_{z_j \in Z} \exp\left(z_j^{\top} W_k \, c_t\right)} \qquad (1)$$

where $z_t = g_{\text{enc}}(x_t)$ and $g_{\text{enc}}$ represents a non-linear encoder, and $c_t = g_{\text{ar}}(z_{\leq t})$ represents an autoregressive model. Furthermore, $W_k$ describes a linear transformation used for predicting $k$ time-steps ahead. The set $Z$ of samples consists of one positive sample $z_{t+k}$ and $N$ negative samples $z_j$, for which $z_j$ is randomly sampled from across the current batch.
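To make the computation concrete, the InfoNCE loss for a single prediction can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation; the function name `info_nce` and the plain dot-product scoring are assumptions for illustration.

```python
import numpy as np

def info_nce(pred, z_pos, z_negs):
    """InfoNCE loss for a single prediction (cf. Eq. 1).

    pred   : (d,) prediction W_k c_t for the patch k steps ahead
    z_pos  : (d,) latent of the true (positive) patch z_{t+k}
    z_negs : (N, d) latents of N negative patches
    Returns the negative log-softmax score of the positive sample.
    """
    # Positive score first, then the N negative scores.
    scores = np.concatenate([[z_pos @ pred], z_negs @ pred])
    scores -= scores.max()  # subtract max for numerical stability
    log_softmax = scores - np.log(np.exp(scores).sum())
    return -log_softmax[0]  # positive sample sits at index 0
```

When the positive latent aligns with the prediction, the loss is close to zero; a poorly predicted positive yields a large loss, which is what later allows this loss to double as an anomaly score.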
4 CPC for Anomaly Detection
We propose to apply the CPC model for anomaly detection and segmentation (Fig. 1). In order to improve the performance of the CPC model in this setting, we introduce two adjustments to its architecture: (1) We omit the autoregressive model $g_{\text{ar}}$. As a result, our loss function changes to:

$$\mathcal{L}_{t,k} = -\log \frac{\exp\left(z_{t+k}^{\top} W_k \, z_t\right)}{\sum_{z_j \in Z} \exp\left(z_j^{\top} W_k \, z_t\right)} \qquad (2)$$
This formulation is equivalent to the loss used in the Greedy InfoMax model (Löwe et al., 2019). This adjustment results in a simpler model, which is still able to learn useful latent representations – according to preliminary results. (2) We change the setup of the negative samples during testing. Previous implementations of the CPC model use random patches from within the same test-batch (Hénaff et al., 2020) as negative samples. However, this may result in negative samples containing anomalous patches, which could make it harder for the model to detect anomalous patches in the positive sample based on the contrastive loss. To avoid this, during testing, we draw negative samples from the (non-anomalous) training data.
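A minimal sketch of the second adjustment, assuming pre-computed latents and a bank of training-set latents from which negatives are drawn; the function name `anomaly_scores` and the restriction to a single prediction step are illustrative assumptions:

```python
import numpy as np

def anomaly_scores(test_latents, train_latent_bank, W, num_neg=16, rng=None):
    """Score each test patch with the InfoNCE loss, drawing negatives
    from a bank of (non-anomalous) training-set latents rather than from
    the test batch, so anomalous patches can only appear in the positive
    sample.

    test_latents      : (T, d) latents z_t of consecutive test patches
    train_latent_bank : (M, d) latents collected from the training data
    W                 : (d, d) linear transformation predicting one step ahead
    """
    if rng is None:
        rng = np.random.default_rng()
    scores = []
    for t in range(len(test_latents) - 1):
        pred = W @ test_latents[t]  # prediction W z_t
        # Negatives come from the training data, not the test batch.
        idx = rng.choice(len(train_latent_bank), size=num_neg, replace=False)
        logits = np.concatenate([[test_latents[t + 1] @ pred],
                                 train_latent_bank[idx] @ pred])
        logits -= logits.max()  # numerical stability
        scores.append(-(logits[0] - np.log(np.exp(logits).sum())))
    return np.array(scores)
```

A test patch whose latent is well predicted from its neighbour (like the training data) receives a low score, while a patch that is out of place receives a higher one.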
In the test-phase, we use the loss function in Eq. 2 to decide whether an image patch can be classified as anomalous:

$$x_{t+k} \text{ is anomalous} \iff \mathcal{L}_{t,k} > \gamma \qquad (3)$$

The threshold value $\gamma$ remains implicit, since we use the area under the receiver operating characteristic curve (AUROC) as performance measure. While we can create anomaly segmentation masks by making use of the anomaly scores per patch, we can also apply our approach to decide whether a sample is anomalous – either by averaging over the scores of all patches within an image, or by examining the patch with the highest score.
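The threshold-free evaluation can be sketched as follows; `image_score` and `auroc` are hypothetical helper names, and the pairwise AUROC computation is a small illustrative implementation rather than the paper's evaluation code:

```python
import numpy as np

def image_score(patch_losses, mode="mean"):
    """Aggregate per-patch InfoNCE losses into an image-level anomaly
    score: either average all patch scores or take the maximum."""
    patch_losses = np.asarray(patch_losses)
    return patch_losses.mean() if mode == "mean" else patch_losses.max()

def auroc(scores, labels):
    """Threshold-free AUROC: the probability that a randomly chosen
    anomalous sample scores higher than a randomly chosen normal one
    (ties count half), so no explicit threshold is needed."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

Because AUROC sweeps over all possible thresholds, it measures how well the anomaly scores rank anomalies above normal samples without committing to one cut-off.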
5 Experiments
We evaluate the proposed Contrastive Predictive Coding model for anomaly detection and segmentation on the MVTec-AD dataset (Bergmann et al., 2019). This dataset contains high-resolution images of ten objects and five textures with pixel-accurate annotations and provides between 60 and 391 training images per class. During training, we randomly crop every image to a fixed fraction of its original dimensions. Then, both train and test images are resized to 768×768 pixels. The resulting image is split into patches of size 256×256, where each patch has 50% overlap with its neighbouring patches. These patches are further divided into sub-patches of size 64×64, also with 50% overlap. These sub-patches are used in the InfoNCE loss (Fig. 1) to detect anomalies. The cropped and resized images are horizontally flipped with a probability of 50% during training.
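The patch extraction with 50% overlap can be sketched as follows; this is a minimal NumPy version for single-channel images, and the helper name `split_overlapping` is an assumption:

```python
import numpy as np

def split_overlapping(img, patch, stride):
    """Split a (H, W) image into (patch x patch) tiles with the given
    stride; stride = patch // 2 yields the 50% overlap used here."""
    H, W = img.shape
    rows = (H - patch) // stride + 1
    cols = (W - patch) // stride + 1
    out = np.empty((rows, cols, patch, patch), dtype=img.dtype)
    for r in range(rows):
        for c in range(cols):
            out[r, c] = img[r * stride:r * stride + patch,
                            c * stride:c * stride + patch]
    return out
```

With the sizes above, a 768×768 image yields a 5×5 grid of 256×256 patches (stride 128), and each patch yields a 7×7 grid of 64×64 sub-patches (stride 32).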
We use a ResNet-18 v2 (He et al., 2016) up until the third residual block as encoder $g_{\text{enc}}$. We train a separate model from scratch for each class with a batch size of 16 for 150 epochs using the Adam optimizer (Kingma & Ba, 2015) with a learning rate of . As proposed by Oord et al. (2019), we train and evaluate the model on grayscale images. For both training and evaluation, we use 16 negative samples in the InfoNCE loss. To increase the accuracy of the InfoNCE loss as an indicator for anomalous patches, we apply four separate models in four different directions – predicting patches using context from above, below, left and right – and combine their losses in the test-phase.
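Combining the four directional models can be sketched as below. The excerpt does not specify the combination rule, so averaging the four per-sub-patch losses is an assumption, as is the function name:

```python
import numpy as np

def combined_loss(loss_up, loss_down, loss_left, loss_right):
    """Combine the per-sub-patch InfoNCE losses of four directional
    models (context from above, below, left, right) by averaging, so a
    sub-patch only scores low if it is predictable from every direction."""
    return np.mean([loss_up, loss_down, loss_left, loss_right], axis=0)
```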
5.1 Anomaly Detection
To evaluate our model’s performance for detecting anomalies, we average the top-$k$ InfoNCE loss values across all sub-patches within an image and use this value to calculate the AUROC score. In Table 1, we compare against previously published works from peer-reviewed venues that do not make use of pre-trained feature extractors. We find that our proposed CPC-AD model substantially improves upon a kernel density estimation model (KDE) and an autoencoding model (Auto) as presented in Kauffmann et al. (2020). We also improve upon the contrastive learning approach combined with a KDE model (avg. AUROC: 0.865) as proposed by Sohn et al. (2021). The performance of our model lags behind the CutPaste model (Li et al., 2021a). However, we argue that CPC-AD provides a more generally applicable approach for anomaly detection. The CutPaste model relies heavily on randomly sampled artificial anomalies that are designed to resemble the anomalies encountered in the dataset. As a result, it is not applicable to an $n$-classes-out task, where anomalies differ semantically from the normal data. For comparison, the current state-of-the-art model on this dataset, which makes use of a pre-trained feature extractor, achieves 0.979 AUROC averaged across all classes (Defard et al., 2020).
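The top-k detection score can be sketched as follows, writing `k` for the number of sub-patch losses averaged (the excerpt does not state its value, so it is left as a parameter):

```python
import numpy as np

def top_k_score(patch_losses, k):
    """Image-level anomaly score: the mean of the k largest sub-patch
    InfoNCE losses, so a few strongly anomalous sub-patches dominate
    without a single noisy sub-patch deciding the score alone."""
    losses = np.sort(np.asarray(patch_losses).ravel())
    return losses[-k:].mean()
```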
5.2 Anomaly Segmentation
For the evaluation of the proposed CPC-AD model’s anomaly segmentation performance, we up-sample the sub-patch-wise InfoNCE loss values to match the pixel-wise ground truth annotations. To do so, we average the InfoNCE losses of overlapping sub-patches and assign the resulting values to all affected pixels. This allows us to create anomaly segmentation masks at the resolution of half a sub-patch (32×32 pixels) that are of the same dimensions as the resized images (768×768).
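The up-sampling step can be sketched as follows: each sub-patch spreads its loss over the pixels it covers, and overlapping contributions are averaged. The function name and accumulator scheme are illustrative assumptions:

```python
import numpy as np

def segmentation_mask(losses, patch=64, stride=32, size=768):
    """Turn a (rows, cols) grid of sub-patch InfoNCE losses into a
    pixel-level anomaly map by accumulating each sub-patch's loss over
    the pixels it covers and dividing by the coverage count."""
    losses = np.asarray(losses)
    acc = np.zeros((size, size))
    cnt = np.zeros((size, size))
    rows, cols = losses.shape
    for r in range(rows):
        for c in range(cols):
            acc[r * stride:r * stride + patch,
                c * stride:c * stride + patch] += losses[r, c]
            cnt[r * stride:r * stride + patch,
                c * stride:c * stride + patch] += 1
    return acc / np.maximum(cnt, 1)  # average overlapping contributions
```

Because neighbouring sub-patches overlap by half, the resulting mask varies at the resolution of half a sub-patch, matching the 32×32-pixel granularity described above.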
In Table 2 in the Appendix, we compare the anomaly segmentation performance of the proposed CPC-AD method against previously published works from peer-reviewed venues. The best results on the MVTec-AD dataset are achieved with extensive models that are pre-trained on ImageNet, such as FCDD and PaDiM (Liznerski et al., 2021; Defard et al., 2020), or make use of additional artificial anomalies and ensemble methods, such as CutPaste (Li et al., 2021a). Our model is trained from scratch and uses merely the provided training data, making for a less complex and more general method. The proposed CPC-AD approach is further outperformed by one autoencoding approach (AE-SSIM) and a partially contrastive approach (DistAug), but is on par with another autoencoding approach (AE-L2). Our proposed method outperforms the GAN-based approach (AnoGAN) (Bergmann et al., 2019; Schlegl et al., 2017). Interestingly, the CPC-AD model scores relatively well on textures, compared to similar models.
Nonetheless, although the quantitative results achieved with CPC-AD are not state-of-the-art, the model succeeds in generating accurate segmentation masks for most classes (Fig. 2). Even for classes with a low pixelwise AUROC score, such as pill, it can be seen that the created segmentation masks correctly highlight anomalous input regions, although there is some background noise. This corresponds with the comparatively high detection performance that the CPC-AD method achieves for this class (Table 1). These results indicate that part of the low segmentation scores (compared to the detection scores) could be due to small spatial deviations from the ground truth. This effect might be exacerbated by the relatively low resolution of the segmentation masks that our patch-wise approach creates. Nonetheless, we argue that this resolution would be sufficient in practice to provide interpretable results for human inspection. Overall, CPC-AD provides a promising first step towards anomaly segmentation methods that are based on contrastive learning.
6 Conclusion
Overall, the CPC-AD model shows that contrastive learning can be used not just for anomaly detection, but also for anomaly segmentation. The proposed method performs well on the anomaly detection task, with competitive results for a majority of the data. Additionally, the generated segmentation masks provide a promising first step towards anomaly segmentation methods that are based on contrastive losses.
- Bergmann et al. (2018) Bergmann, P., Löwe, S., Fauser, M., Sattlegger, D., and Steger, C. Improving unsupervised defect segmentation by applying structural similarity to autoencoders. arXiv preprint arXiv:1807.02011, 2018.
- Bergmann et al. (2019) Bergmann, P., Fauser, M., Sattlegger, D., and Steger, C. MVTec AD – a comprehensive real-world dataset for unsupervised anomaly detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
- Bergmann et al. (2020) Bergmann, P., Fauser, M., Sattlegger, D., and Steger, C. Uninformed students: Student-teacher anomaly detection with discriminative latent embeddings. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4183–4192, 2020.
- Chen et al. (2020) Chen, T., Kornblith, S., Norouzi, M., and Hinton, G. A simple framework for contrastive learning of visual representations. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pp. 1597–1607. PMLR, 13–18 Jul 2020.
- Defard et al. (2020) Defard, T., Setkov, A., Loesch, A., and Audigier, R. Padim: a patch distribution modeling framework for anomaly detection and localization. arXiv preprint arXiv:2011.08785, 2020.
- Gutmann & Hyvärinen (2010) Gutmann, M. and Hyvärinen, A. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In Teh, Y. W. and Titterington, M. (eds.), Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, volume 9 of Proceedings of Machine Learning Research, pp. 297–304, 13–15 May 2010.
- He et al. (2016) He, K., Zhang, X., Ren, S., and Sun, J. Identity mappings in deep residual networks. In European conference on computer vision, pp. 630–645. Springer, 2016.
- He et al. (2020) He, K., Fan, H., Wu, Y., Xie, S., and Girshick, R. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
- Hénaff et al. (2020) Hénaff, O. J., Srinivas, A., Fauw, J. D., Razavi, A., Doersch, C., Eslami, S. M. A., and van den Oord, A. Data-efficient image recognition with contrastive predictive coding. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pp. 4182–4192, 13–18 Jul 2020.
- Hjelm et al. (2019) Hjelm, R. D., Fedorov, A., Lavoie-Marchildon, S., Grewal, K., Trischler, A., and Bengio, Y. Learning deep representations by mutual information estimation and maximization. Proceedings of the 7th International Conference on Learning Representations, 2019.
- Kauffmann et al. (2020) Kauffmann, J., Ruff, L., Montavon, G., and Müller, K.-R. The clever hans effect in anomaly detection. arXiv preprint arXiv:2006.10609, 2020.
- Kingma & Ba (2015) Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization. Proceedings of the 3rd International Conference on Learning Representations, 2015.
- Kopuklu et al. (2021) Kopuklu, O., Zheng, J., Xu, H., and Rigoll, G. Driver anomaly detection: A dataset and contrastive learning approach. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pp. 91–100, January 2021.
- Li et al. (2021a) Li, C.-L., Sohn, K., Yoon, J., and Pfister, T. Cutpaste: Self-supervised learning for anomaly detection and localization. arXiv preprint arXiv:2104.04015, 2021a.
- Li et al. (2021b) Li, J., Zhou, P., Xiong, C., and Hoi, S. C. H. Prototypical contrastive learning of unsupervised representations. arXiv preprint arXiv:2005.04966, 2021b.
- Liu et al. (2020) Liu, W., Li, R., Zheng, M., Karanam, S., Wu, Z., Bhanu, B., Radke, R. J., and Camps, O. Towards visually explaining variational autoencoders. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
- Liznerski et al. (2021) Liznerski, P., Ruff, L., Vandermeulen, R. A., Franks, B. J., Kloft, M., and Müller, K.-R. Explainable deep one-class classification. Proceedings of the 9th International Conference on Learning Representations, 2021.
- Löwe et al. (2019) Löwe, S., O’Connor, P., and Veeling, B. Putting an end to end-to-end: Gradient-isolated learning of representations. In Advances in Neural Information Processing Systems, pp. 3039–3051, 2019.
- Luo et al. (2020) Luo, W., Gu, Z., Liu, J., and Gao, S. Encoding structure-texture relation with p-net for anomaly detection in retinal images. In European conference on computer vision, pp. 360–377. Springer, 2020.
- Oord et al. (2019) Oord, A. v. d., Li, Y., and Vinyals, O. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2019.
- Qiu et al. (2021) Qiu, C., Pfrommer, T., Kloft, M., Mandt, S., and Rudolph, M. Neural transformation learning for deep anomaly detection beyond images. arXiv preprint arXiv:2103.16440, 2021.
- Ruff et al. (2020) Ruff, L., Vandermeulen, R. A., Görnitz, N., Binder, A., Müller, E., Müller, K.-R., and Kloft, M. Deep semi-supervised anomaly detection. arXiv preprint arXiv:1906.02694, 2020.
- Ruff et al. (2021) Ruff, L., Kauffmann, J. R., Vandermeulen, R. A., Montavon, G., Samek, W., Kloft, M., Dietterich, T. G., and Müller, K.-R. A unifying review of deep and shallow anomaly detection. Proceedings of the IEEE, 109(5):756–795, 2021.
- Schlegl et al. (2017) Schlegl, T., Seeböck, P., Waldstein, S. M., Schmidt-Erfurth, U., and Langs, G. Unsupervised anomaly detection with generative adversarial networks to guide marker discovery. In International conference on information processing in medical imaging, pp. 146–157. Springer, 2017.
- Sohn et al. (2021) Sohn, K., Li, C.-L., Yoon, J., Jin, M., and Pfister, T. Learning and evaluating representations for deep one-class classification. In International Conference on Learning Representations, 2021.
- Tack et al. (2020) Tack, J., Mo, S., Jeong, J., and Shin, J. CSI: Novelty detection via contrastive learning on distributionally shifted instances. In 34th Conference on Neural Information Processing Systems (NeurIPS), 2020.
- Winkens et al. (2020) Winkens, J., Bunel, R., Roy, A. G., Stanforth, R., Natarajan, V., Ledsam, J. R., MacWilliams, P., Kohli, P., Karthikesalingam, A., Kohl, S., Cemgil, T., Eslami, S. M. A., and Ronneberger, O. Contrastive training for improved out-of-distribution detection. arXiv preprint arXiv:2007.05566, 2020.
- Wu et al. (2018) Wu, Z., Xiong, Y., Yu, S. X., and Lin, D. Unsupervised feature learning via non-parametric instance discrimination. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
- Zhou & Paffenroth (2017) Zhou, C. and Paffenroth, R. C. Anomaly detection with robust deep autoencoders. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 665–674, 2017.
Appendix A Additional Results
A.1 Anomaly Segmentation
In Table 2, we compare the anomaly segmentation performance of the proposed CPC-AD method against previously published works from peer-reviewed venues.