No Shifted Augmentations (NSA): compact distributions for robust self-supervised Anomaly Detection

03/19/2022
by   Mohamed Yousef, et al.

Unsupervised anomaly detection (AD) requires building a notion of normalcy that distinguishes in-distribution (ID) from out-of-distribution (OOD) data, using only available ID samples. Recently, large gains were made on this task in the domain of natural images by using self-supervised contrastive feature learning as a first step, followed by kNN or traditional one-class classifiers for feature scoring. Learned representations that are non-uniformly distributed on the unit hypersphere have been shown to be beneficial for this task. We go a step further and investigate how the geometrical compactness of the ID feature distribution makes isolating and detecting outliers easier, especially in the realistic situation where the ID training data is polluted (i.e., the ID data contains some OOD data that is used to learn the feature extractor's parameters). We propose novel architectural modifications to the self-supervised feature learning step that enable such compact distributions for ID data to be learned. We show that the proposed modifications can be effectively applied to most existing self-supervised objectives, with large gains in performance. Furthermore, this improved OOD performance is obtained without resorting to tricks such as using strongly augmented ID images (e.g., by 90-degree rotations) as proxies for the unseen OOD data, as these impose overly prescriptive assumptions about ID data and its invariances. We perform extensive studies on benchmark datasets for one-class OOD detection and show state-of-the-art performance in the presence of pollution in the ID data, and comparable performance otherwise. We also propose and extensively evaluate a novel feature scoring technique based on the angular Mahalanobis distance, and propose a simple, novel technique for feature ensembling during evaluation that yields a substantial boost in performance at nearly zero run-time cost.
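The angular Mahalanobis scoring mentioned above can be illustrated with a minimal sketch. This is not the paper's exact formulation, only a plausible reading of the term: features are L2-normalized onto the unit hypersphere (the "angular" part), a Gaussian is fitted to the normalized ID training features, and each test feature is scored by its Mahalanobis distance to that distribution. The function name and the `eps` regularizer are illustrative choices, not from the paper.

```python
import numpy as np

def angular_mahalanobis_scores(train_feats, test_feats, eps=1e-6):
    """Score test features by Mahalanobis distance on the unit hypersphere.

    Higher scores indicate more anomalous (OOD-like) samples.
    This is an illustrative sketch, not the paper's exact method.
    """
    # L2-normalize features onto the unit hypersphere ("angular")
    train = train_feats / np.linalg.norm(train_feats, axis=1, keepdims=True)
    test = test_feats / np.linalg.norm(test_feats, axis=1, keepdims=True)

    # Fit a Gaussian to the normalized ID features
    mu = train.mean(axis=0)
    cov = np.cov(train, rowvar=False) + eps * np.eye(train.shape[1])
    prec = np.linalg.inv(cov)

    # Squared Mahalanobis distance of each test point to the ID distribution
    diff = test - mu
    return np.einsum('ij,jk,ik->i', diff, prec, diff)
```

A compact ID distribution on the sphere makes this kind of score discriminative: when normalized ID features cluster tightly around a mean direction, points far from that cluster receive much larger distances.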


research
04/06/2022

RODD: A Self-Supervised Approach for Robust Out-of-Distribution Detection

Recent studies have addressed the concern of detecting and rejecting the...
research
04/07/2023

Anomalous Sound Detection using Audio Representation with Machine ID based Contrastive Learning Pretraining

Existing contrastive learning methods for anomalous sound detection refi...
research
06/08/2023

On the Effectiveness of Out-of-Distribution Data in Self-Supervised Long-Tail Learning

Though Self-supervised learning (SSL) has been widely studied as a promi...
research
08/30/2022

Anomaly Detection using Contrastive Normalizing Flows

Detecting test data deviating from training data is a central problem fo...
research
03/24/2020

Attention-Based Self-Supervised Feature Learning for Security Data

While applications of machine learning in cyber-security have grown rapi...
research
04/13/2023

In-Distribution and Out-of-Distribution Self-supervised ECG Representation Learning for Arrhythmia Detection

This paper presents a systematic investigation into the effectiveness of...
research
07/02/2023

End-to-End Out-of-distribution Detection with Self-supervised Sampling

Out-of-distribution (OOD) detection empowers the model trained on the cl...
