Adversarial Vulnerability of Active Transfer Learning

01/26/2021
by Nicolas M. Müller, et al.

Two widely used techniques for training supervised machine learning models on small datasets are Active Learning and Transfer Learning. The former helps to use a limited labeling budget optimally; the latter uses large pre-trained models as feature extractors and enables complex, non-linear models even on tiny datasets. Combining these two approaches is an effective, state-of-the-art method when dealing with small datasets. In this paper, we share an intriguing observation: the combination of these techniques is particularly susceptible to a new kind of data poisoning attack. By adding small adversarial noise to the input, it is possible to create a collision in the output space of the transfer learner. As a result, active learning algorithms no longer select the optimal instances, but almost exclusively the ones injected by the attacker. This allows an attacker to manipulate the active learner into selecting and including arbitrary images in the dataset, even against an overwhelming majority of unpoisoned samples. We show that a model trained on such a poisoned dataset performs significantly worse, with test accuracy dropping from 86% to 34%. We evaluate this attack on both audio and image datasets and support our findings empirically. To the best of our knowledge, this weakness has not been described before in the literature.
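The core mechanism can be illustrated with a short sketch. The snippet below is not the authors' implementation; it is a minimal, hypothetical reconstruction assuming a frozen ImageNet ResNet-18 as the transfer learner's feature extractor, an L-infinity perturbation budget, and a PGD-style optimization that pulls the attacker's image onto the feature vector of a chosen target, i.e. a collision in the extractor's output space. All names and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Hypothetical sketch of the feature-collision poisoning step described in
# the abstract: craft a small, bounded perturbation so that the frozen
# feature extractor maps the attacker's image onto (almost) the same point
# as a benign target image. Hyperparameters are illustrative, not the
# authors' reference values.

def feature_collision(extractor, x_attack, x_target,
                      eps=8 / 255, steps=200, lr=1e-2):
    """Optimize delta to minimize the feature-space distance between
    x_attack + delta and x_target, subject to |delta| <= eps."""
    extractor.eval()
    with torch.no_grad():
        target_feat = extractor(x_target)  # fixed collision point

    delta = torch.zeros_like(x_attack, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        feat = extractor(torch.clamp(x_attack + delta, 0.0, 1.0))
        loss = F.mse_loss(feat, target_feat)  # pull features together
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # project back into the budget

    return torch.clamp(x_attack + delta.detach(), 0.0, 1.0)

# Frozen pre-trained backbone with its classification head removed
# (torchvision >= 0.13 weights API).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
for p in backbone.parameters():
    p.requires_grad_(False)

x_attack = torch.rand(1, 3, 224, 224)  # image the attacker wants selected
x_target = torch.rand(1, 3, 224, 224)  # benign image whose features are copied
x_poison = feature_collision(backbone, x_attack, x_target)
```

If the active learner then scores unlabeled samples on these extracted features (e.g., by uncertainty or diversity), many poisoned inputs collapsing onto the same feature vector can dominate the acquisition ranking, which is consistent with the paper's observation that the learner selects almost exclusively the injected instances.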

Related research

04/08/2019 · A Target-Agnostic Attack on Deep Models: Exploiting Security Vulnerabilities of Transfer Learning
Due to the lack of enough training data and high computational cost to t...

04/20/2020 · Headless Horseman: Adversarial Attacks on Transfer Learning Models
Transfer learning facilitates the training of task-specific classifiers ...

02/01/2022 · Minority Class Oriented Active Learning for Imbalanced Datasets
Active learning aims to optimize the dataset annotation process when res...

09/07/2019 · Active learning to optimise time-expensive algorithm selection
Hard optimisation problems such as Boolean Satisfiability typically have...

01/18/2022 · Optimizing Active Learning for Low Annotation Budgets
When we cannot assume a large amount of annotated data, active learnin...

07/26/2022 · Generative Extraction of Audio Classifiers for Speaker Identification
It is perhaps no longer surprising that machine learning models, especia...

05/18/2023 · Attacks on Online Learners: a Teacher-Student Analysis
Machine learning models are famously vulnerable to adversarial attacks: ...
