Contrastive Training for Improved Out-of-Distribution Detection

by Jim Winkens et al.

Reliable detection of out-of-distribution (OOD) inputs is increasingly understood to be a precondition for the deployment of machine learning systems. This paper proposes and investigates the use of contrastive training to boost OOD detection performance. Unlike leading methods for OOD detection, our approach does not require access to examples labeled explicitly as OOD, which can be difficult to collect in practice. We show in extensive experiments that contrastive training significantly improves OOD detection performance on a number of common benchmarks. By introducing and employing the Confusion Log Probability (CLP) score, which quantifies the difficulty of the OOD detection task by capturing the similarity of inlier and outlier datasets, we show that our method especially improves performance on "near OOD" classes, a particularly challenging setting for previous methods.
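Contrastive training of the kind described above is typically built on a SimCLR-style NT-Xent objective, which pulls two augmented views of the same input together and pushes all other examples in the batch apart. A minimal NumPy sketch of that loss follows; it is an illustrative implementation under assumed conventions (function name, temperature, batch layout), not the paper's exact training code.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """SimCLR-style NT-Xent contrastive loss.

    z1, z2: (N, D) embedding batches where z1[i] and z2[i] are two
    augmented views of the same input (a positive pair); every other
    pair in the combined batch acts as a negative.
    """
    z = np.concatenate([z1, z2], axis=0)              # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # unit-normalize
    sim = (z @ z.T) / temperature                     # scaled cosine similarities
    n = z1.shape[0]
    # exclude self-similarity: exp(-inf) contributes 0 to the softmax
    np.fill_diagonal(sim, -np.inf)
    # the positive for row i is row i+n, and vice versa
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # cross-entropy of each row's positive under a softmax over the row
    # (naive log-sum-exp; fine for moderate logits)
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    loss = -(sim[np.arange(2 * n), pos] - logsumexp)
    return loss.mean()
```

The loss is minimized when each embedding is closest to its own augmented view, which is what encourages the tight, class-aware feature clusters that later make OOD scoring easier.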


Unsupervised Outlier Detection using Memory and Contrastive Learning

Outlier detection is one of the most important processes taken to create...

Using contrastive learning to improve the performance of steganalysis schemes

To improve the detection accuracy and generalization of steganalysis, th...

Contrastive Domain Adaptation

Recently, contrastive self-supervised learning has become a key componen...

Contrastive Predictive Coding for Anomaly Detection

Reliable detection of anomalies is crucial when deploying machine learni...

CSI: Novelty Detection via Contrastive Learning on Distributionally Shifted Instances

Novelty detection, i.e., identifying whether a given sample is drawn fro...

SAINT: Improved Neural Networks for Tabular Data via Row Attention and Contrastive Pre-Training

Tabular data underpins numerous high-impact applications of machine lear...

A Simple Fix to Mahalanobis Distance for Improving Near-OOD Detection

Mahalanobis distance (MD) is a simple and popular post-processing method...
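The Mahalanobis-distance approach referenced above scores a test feature by its distance to class-conditional Gaussians fit on in-distribution features with a shared covariance. A minimal sketch of that baseline follows; the function names are illustrative assumptions and this is the standard MD baseline, not the specific fix the paper proposes.

```python
import numpy as np

def fit_mahalanobis(features, labels):
    """Fit per-class means and a shared (tied) covariance to
    penultimate-layer features, as in Mahalanobis-distance OOD detection."""
    classes = np.unique(labels)
    means = {c: features[labels == c].mean(axis=0) for c in classes}
    # pool class-centered features to estimate one shared covariance
    centered = np.vstack([features[labels == c] - means[c] for c in classes])
    cov = centered.T @ centered / len(features)
    precision = np.linalg.pinv(cov)  # pseudo-inverse for numerical safety
    return means, precision

def mahalanobis_score(x, means, precision):
    """OOD score: minimum squared Mahalanobis distance to any class mean.
    Larger values indicate the input is more likely out-of-distribution."""
    dists = [float((x - mu) @ precision @ (x - mu)) for mu in means.values()]
    return min(dists)
```

A test point near one of the fitted class means receives a low score, while a point far from all classes receives a high one; thresholding this score yields the OOD decision.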