Detection of Adversarial Supports in Few-shot Classifiers Using Feature Preserving Autoencoders and Self-Similarity

12/09/2020
by Yi Xiang Marcus Tan, et al.

Few-shot classifiers excel under limited training samples, making them useful in many real-world applications. However, the advent of adversarial samples threatens the efficacy of such classifiers, and defences against such attacks must be explored for them to remain reliable. A closer examination of the prior literature reveals a significant gap in this domain. Hence, in this work, we propose a strategy for detecting adversarial support sets, which aim to destroy a few-shot classifier's understanding of a certain class of objects. Our detection combines feature-preserving autoencoder filtering with the concept of self-similarity of a support set. As such, our method is attack-agnostic and, to the best of our knowledge, the first to explore detection for few-shot classifiers. Our evaluation on the miniImagenet and CUB datasets shows the promise of the proposed approach, which attains high AUROC scores for detection in general.
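To illustrate the self-similarity idea the abstract mentions, here is a minimal, hypothetical sketch (not the authors' actual method): a clean support set drawn from one class should have mutually similar feature embeddings, so a low mean pairwise cosine similarity can flag a potentially adversarial support set. The function name and the synthetic embeddings are assumptions for illustration only.

```python
import numpy as np

def self_similarity_score(features):
    """Mean pairwise cosine similarity of a support set's feature embeddings.

    A clean support set (all samples from one class) should be self-similar;
    an adversarially perturbed set tends to score lower. This is an
    illustrative proxy, not the paper's exact detection statistic.
    """
    # L2-normalise each feature vector (rows)
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    normed = features / np.clip(norms, 1e-12, None)
    # Pairwise cosine-similarity matrix
    sim = normed @ normed.T
    # Average over off-diagonal entries only (ignore self-similarity of 1)
    n = sim.shape[0]
    off_diag = sim[~np.eye(n, dtype=bool)]
    return float(off_diag.mean())

# Hypothetical embeddings: a tight cluster vs. a scattered set
rng = np.random.default_rng(0)
clean = rng.normal(loc=1.0, scale=0.1, size=(5, 64))
scattered = rng.normal(loc=0.0, scale=1.0, size=(5, 64))
assert self_similarity_score(clean) > self_similarity_score(scattered)
```

In a full detector, such a score would be thresholded (or fed into the AUROC evaluation the abstract reports) after first passing the supports through the feature-preserving autoencoder filter.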


