Large Class Separation is not what you need for Relational Reasoning-based OOD Detection

07/12/2023
by Lorenzo Li Lu, et al.

Standard recognition approaches are unable to deal with novel categories at test time. Their overconfidence on the known classes makes the predictions unreliable for safety-critical applications such as healthcare or autonomous driving. Out-Of-Distribution (OOD) detection methods provide a solution by identifying semantic novelty. Most of these methods leverage a learning stage on the known data, which means training (or fine-tuning) a model to capture the concept of normality. This process is clearly sensitive to the number of available samples and might be computationally expensive for on-board systems. A viable alternative is to evaluate similarities in the embedding space produced by large pre-trained models, without any further learning effort. We focus exactly on such a fine-tuning-free OOD detection setting. This work presents an in-depth analysis of the recently introduced relational reasoning pre-training and investigates the properties of the learned embedding, highlighting the existence of a correlation between the inter-class feature distance and the OOD detection accuracy. As the class separation depends on the chosen pre-training objective, we propose an alternative loss function to control the inter-class margin, and we show its advantage with thorough experiments.
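To make the fine-tuning-free setting concrete, the sketch below scores test samples by their cosine similarity to known-class prototypes computed from a frozen pre-trained encoder, and also measures the average inter-class prototype distance that the abstract links to OOD detection accuracy. This is a minimal illustration under assumed names (class_prototypes, ood_score, mean_interclass_distance), not the paper's exact relational reasoning procedure.

```python
import torch
import torch.nn.functional as F

def class_prototypes(features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    # One L2-normalized mean embedding per known class (rows follow sorted label order).
    classes = labels.unique(sorted=True)
    protos = torch.stack([features[labels == c].mean(dim=0) for c in classes])
    return F.normalize(protos, dim=1)

def ood_score(test_features: torch.Tensor, prototypes: torch.Tensor) -> torch.Tensor:
    # Higher score = more likely out-of-distribution:
    # 1 minus the max cosine similarity to any known-class prototype.
    sims = F.normalize(test_features, dim=1) @ prototypes.T
    return 1.0 - sims.max(dim=1).values

def mean_interclass_distance(prototypes: torch.Tensor) -> float:
    # Average pairwise cosine distance between class prototypes:
    # a simple proxy for the inter-class separation discussed in the abstract.
    sims = prototypes @ prototypes.T
    mask = ~torch.eye(prototypes.size(0), dtype=torch.bool)
    return (1.0 - sims[mask]).mean().item()

# Hypothetical usage, assuming features come from any frozen pre-trained encoder:
# protos = class_prototypes(train_feats, train_labels)
# scores = ood_score(test_feats, protos)  # threshold the scores to flag OOD samples
```

Under this kind of scoring rule, larger mean inter-class distance leaves more room for in-distribution samples to sit close to their own prototype while novel samples fall in between, which is one intuitive reading of the correlation the paper investigates.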


