FROB: Few-shot ROBust Model for Classification and Out-of-Distribution Detection

11/30/2021
by Nikolaos Dionelis, et al.

Classification and Out-of-Distribution (OoD) detection in the few-shot setting remain challenging because anomalies are rare, only limited samples are available, and models are exposed to adversarial attacks. Accomplishing these aims is important for critical systems in safety, security, and defence. OoD detection is further complicated by the fact that deep neural network classifiers assign high confidence to OoD samples that lie far from the training data. To address these limitations, we propose the Few-shot ROBust (FROB) model for classification and few-shot OoD detection. We design FROB for improved robustness and reliable confidence prediction in few-shot OoD detection. We generate the support boundary of the normal class distribution and combine it with few-shot Outlier Exposure (OE). We propose a self-supervised few-shot confidence-boundary methodology based on generative and discriminative models. The contribution of FROB is the combination of a boundary generated in a self-supervised manner with the imposition of low confidence at this learned boundary. FROB implicitly generates strong adversarial samples on the boundary and forces the classifier to be less confident on OoD samples, including those on our boundary. FROB generalizes to unseen OoD data and is applicable to unknown, in-the-wild test sets that are not correlated with the training datasets. To improve robustness, FROB redesigns OE so that it works even in the zero-shot setting. By including our boundary, FROB lowers the threshold linked to the model's few-shot robustness and keeps OoD detection performance approximately independent of the number of few-shot samples. Our few-shot robustness analysis of FROB on different datasets and on One-Class Classification (OCC) data shows that FROB achieves competitive performance and outperforms benchmarks in robustness to the outlier few-shot sample population and its variability.
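As a rough illustration of the low-confidence objective described in the abstract, the following PyTorch sketch combines standard cross-entropy on the normal few-shot samples with an Outlier-Exposure-style term that drives the classifier's predictions on generated boundary or OoD samples towards the uniform distribution over classes. This is not the paper's exact formulation: the function name oe_low_confidence_loss, the weighting lam, and the use of random tensors as stand-ins for a classifier's outputs and for generated boundary samples are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def oe_low_confidence_loss(logits_in, labels_in, logits_boundary, lam=0.5):
    # Standard cross-entropy on the normal (in-distribution) few-shot samples.
    loss_in = F.cross_entropy(logits_in, labels_in)

    # Low-confidence term on generated boundary / Outlier Exposure samples:
    # cross-entropy between the predicted class distribution and the uniform
    # distribution, which penalizes confident predictions on OoD inputs.
    log_probs = F.log_softmax(logits_boundary, dim=1)
    loss_boundary = -log_probs.mean(dim=1).mean()

    return loss_in + lam * loss_boundary


# Toy usage with random tensors standing in for a classifier's outputs.
logits_in = torch.randn(8, 10)            # 8 normal samples, 10 classes
labels_in = torch.randint(0, 10, (8,))
logits_boundary = torch.randn(4, 10)      # 4 generated boundary / OE samples
loss = oe_low_confidence_loss(logits_in, labels_in, logits_boundary)
print(loss.item())
```

In this kind of objective, minimizing the boundary term flattens the predicted distribution on samples outside the normal class support, which is one way to realize the low confidence at the learned boundary that the abstract describes.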

Related research

12/09/2020
Detection of Adversarial Supports in Few-shot Classifiers Using Feature Preserving Autoencoders and Self-Similarity
Few-shot classifiers excel under limited training samples, making it use...

10/27/2022
Towards Reliable Zero Shot Classification in Self-Supervised Models with Conformal Prediction
Self-supervised models trained with a contrastive loss such as CLIP have...

10/28/2021
OMASGAN: Out-of-Distribution Minimum Anomaly Score GAN for Sample Generation on the Boundary
Generative models trained in an unsupervised manner may set high likelih...

11/26/2020
Evaluation of Out-of-Distribution Detection Performance of Self-Supervised Learning in a Controllable Environment
We evaluate the out-of-distribution (OOD) detection performance of self-...

10/19/2022
Few-shot Transferable Robust Representation Learning via Bilevel Attacks
Existing adversarial learning methods for enhancing the robustness of de...

10/24/2021
Towards A Conceptually Simple Defensive Approach for Few-shot classifiers Against Adversarial Support Samples
Few-shot classifiers have been shown to exhibit promising results in use...

04/28/2021
Shot Contrastive Self-Supervised Learning for Scene Boundary Detection
Scenes play a crucial role in breaking the storyline of movies and TV ep...
