Defending Against Misclassification Attacks in Transfer Learning

08/29/2019
by Bang Wu et al.

Transfer learning accelerates the development of new models (Student Models) by applying relevant knowledge from a pre-trained model (Teacher Model), so that only a small amount of training data is needed without sacrificing model accuracy. However, Teacher Models are normally published openly to facilitate sharing and reuse, which creates an attack surface in transfer learning systems. In particular, recently demonstrated attacks show that adversarial inputs can be crafted by adding negligible perturbations to normal inputs. Using knowledge of the Teacher Model, such inputs can directly mimic the internal features of the Student Models and cause misclassification in the final predictions. In this paper, we propose an effective defence against these misclassification attacks in transfer learning. First, we propose a distilled differentiator to counter targeted attacks, in which adversarial inputs are misclassified as a specific class. This dedicated differentiator is built with network activation pruning and fine-tuned retraining so as to achieve both a high defence rate and high model accuracy. To address non-targeted attacks, which misclassify adversarial inputs as randomly selected classes, we further employ an ensemble of differentiators that covers all possible misclassifications. Our evaluations on common image recognition tasks confirm that Student Models applying our defence reject most adversarial inputs with only a marginal loss of accuracy. We also show that our defence outperforms prior approaches against both targeted and non-targeted attacks.
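The attack the abstract describes can be illustrated with a toy sketch: because the adversary knows the Teacher Model's feature extractor, it can perturb a normal input so that the extracted features mimic those of a target-class example, within a small perturbation budget. The linear "feature extractor", the dimensions, and the function names below are illustrative assumptions for exposition, not the paper's actual models:

```python
# Hypothetical sketch of a feature-mimicry attack: projected gradient
# descent finds a small perturbation delta so that the (toy, linear)
# Teacher feature extractor maps x + delta close to a target's features.

def matvec(W, x):
    """Multiply matrix W (a list of rows) by vector x."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def mimic_attack(W, x, target_feat, eps=0.5, lr=0.1, steps=200):
    """Minimise ||W(x + delta) - target_feat||^2 subject to
    |delta_i| <= eps, via projected gradient descent."""
    n = len(x)
    delta = [0.0] * n
    for _ in range(steps):
        feat = matvec(W, [xi + di for xi, di in zip(x, delta)])
        resid = [f - t for f, t in zip(feat, target_feat)]
        # Gradient of the squared error w.r.t. delta is 2 * W^T * resid.
        grad = [2 * sum(W[k][i] * resid[k] for k in range(len(W)))
                for i in range(n)]
        # Gradient step, then projection onto the L-infinity ball.
        delta = [max(-eps, min(eps, di - lr * gi))
                 for di, gi in zip(delta, grad)]
    return delta

# Toy usage: with an identity feature extractor, the optimal in-budget
# perturbation simply moves the features onto the target's features.
W = [[1.0, 0.0], [0.0, 1.0]]
delta = mimic_attack(W, [0.0, 0.0], [0.3, -0.2])
```

Defences such as the paper's distilled differentiators aim to make this optimisation unprofitable: after activation pruning and retraining, the Student Model's internal features no longer align with the Teacher's, so a perturbation computed against the Teacher stops transferring.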

