Adversarial Attacks are a Surprisingly Strong Baseline for Poisoning Few-Shot Meta-Learners

11/23/2022
by Elre T. Oldewage, et al.

This paper examines the robustness of deployed few-shot meta-learning systems when they are fed an imperceptibly perturbed few-shot dataset. We attack amortized meta-learners, which allows us to craft colluding sets of inputs that are tailored to fool the system's learning algorithm when used as training data. Jointly crafted adversarial inputs might be expected to synergistically manipulate a classifier, allowing for very strong data-poisoning attacks that would be hard to detect. We show that in a white-box setting, these attacks are very successful and can cause the target model's predictions to become worse than chance. However, in contrast to the well-known transferability of adversarial examples in general, the colluding sets do not transfer well to different classifiers. We explore two hypotheses to explain this: 'overfitting' by the attack, and a mismatch between the model on which the attack is generated and that to which it is transferred. Regardless of the mitigation strategies suggested by these hypotheses, the colluding inputs transfer no better than adversarial inputs generated independently in the usual way.
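The abstract does not spell out the attack procedure, but the core idea of a "colluding" support-set poison can be sketched. Below is a minimal PyTorch illustration under assumptions of ours, not the authors' implementation: an amortized meta-learner exposed as `meta_model(support_x, support_y, query_x) -> logits`, images scaled to [0, 1], and standard PGD-style hyperparameters. All names and values are hypothetical.

```python
import torch
import torch.nn.functional as F

def craft_colluding_support_set(meta_model, support_x, support_y,
                                query_x, query_y,
                                epsilon=8 / 255, step_size=1 / 255,
                                n_steps=100):
    """Jointly perturb an entire few-shot support set (PGD-style sketch).

    All perturbations are optimized together, differentiating through
    the meta-learner's amortized adaptation, so that the adapted model
    misclassifies clean query points.
    """
    delta = torch.zeros_like(support_x, requires_grad=True)
    for _ in range(n_steps):
        # The meta-learner adapts on the poisoned support set and
        # predicts labels for the clean query set in one forward pass.
        logits = meta_model(support_x + delta, support_y, query_x)
        loss = F.cross_entropy(logits, query_y)
        # Gradient of the query loss w.r.t. the shared perturbation.
        (grad,) = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            # Ascend the query loss, then project back into the L-inf
            # ball and the valid pixel range to keep the poison subtle.
            delta += step_size * grad.sign()
            delta.clamp_(-epsilon, epsilon)
            delta.copy_((support_x + delta).clamp(0, 1) - support_x)
    return (support_x + delta).detach()
```

Because the whole perturbation tensor `delta` is optimized against a single query loss, the support examples can collude; crafting each perturbation independently would correspond to the "usual" per-input adversarial baseline that, per the title, transfers just as well.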

