To Transfer or Not to Transfer: Misclassification Attacks Against Transfer Learned Text Classifiers

01/08/2020
by Bijeeta Pal, et al.

Transfer learning — transferring learned knowledge — has brought a paradigm shift in the way models are trained. The lucrative benefits of improved accuracy and reduced training time have shown promise in training models with constrained computational resources and fewer training samples. Specifically, publicly available text-based models such as GloVe and BERT that are trained on large text corpora have seen ubiquitous adoption in practice. In this paper, we ask, "can transfer learning in text prediction models be exploited to perform misclassification attacks?" As our main contribution, we present novel attack techniques that utilize unintended features learnt in the teacher (public) model to generate adversarial examples for student (downstream) models. To the best of our knowledge, ours is the first work to show that transfer learning from state-of-the-art word-based and sentence-based teacher models increases the susceptibility of student models to misclassification attacks. First, we propose a novel word-score based attack algorithm for generating adversarial examples against student models trained using a context-free word-level embedding model. On binary classification tasks trained using the GloVe teacher model, we achieve an average attack accuracy of 97%. For multi-class tasks, we divide the Newsgroup dataset into 6 and 20 classes and achieve an average attack accuracy of 75%. Next, we present length-based and sentence-based misclassification attacks for the Fake News Detection task trained using a context-aware BERT model and achieve misclassification rates of 78% and 39%, respectively. Our results motivate the need for designing training techniques that are robust to unintended feature learning, specifically for transfer learned models.
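The word-score attack is only summarized here, but the general recipe it points to (score each word in the input by its influence on the student's prediction, then substitute high-scoring words with their nearest neighbors in the public teacher's embedding space) can be illustrated with a small, self-contained example. The snippet below is a minimal sketch under stated assumptions: a toy GloVe-style embedding table stands in for the teacher, `toy_student` stands in for a downstream classifier the attacker can query, and all names and scoring choices are hypothetical rather than the paper's implementation.

```python
# Minimal, illustrative sketch of a word-score based substitution attack.
# Assumptions: `teacher_emb` is a toy stand-in for public GloVe vectors and
# `toy_student` is a stand-in for a student classifier with prediction access.
import numpy as np

# Toy "teacher" embeddings standing in for GloVe vectors (word -> dense vector).
teacher_emb = {
    "good":  np.array([0.90, 0.10]),
    "great": np.array([0.85, 0.15]),
    "fine":  np.array([0.70, 0.20]),
    "bad":   np.array([-0.90, 0.10]),
    "awful": np.array([-0.85, 0.20]),
}

def nearest_neighbors(word, k=2):
    """Candidate substitutes: words closest to `word` in the teacher's embedding space."""
    v = teacher_emb[word]
    sims = [(np.dot(v, u) / (np.linalg.norm(v) * np.linalg.norm(u)), w)
            for w, u in teacher_emb.items() if w != word]
    return [w for _, w in sorted(sims, reverse=True)[:k]]

def word_scores(tokens, label, predict_proba):
    """Leave-one-out word importance: how much deleting each word lowers the
    student's confidence in the currently correct label."""
    base = predict_proba(tokens)[label]
    return [base - predict_proba(tokens[:i] + tokens[i + 1:])[label]
            for i in range(len(tokens))]

def attack(tokens, label, predict_proba, max_edits=2):
    """Greedily replace the most important words with nearest teacher-space
    neighbors, keeping the substitution that hurts the student most, until the
    prediction flips or the edit budget is exhausted."""
    tokens = list(tokens)
    order = np.argsort(word_scores(tokens, label, predict_proba))[::-1]
    for i in order[:max_edits]:
        if tokens[i] not in teacher_emb:
            continue
        candidates = [tokens[:i] + [c] + tokens[i + 1:]
                      for c in nearest_neighbors(tokens[i])]
        tokens = min(candidates, key=lambda t: predict_proba(t)[label])
        if np.argmax(predict_proba(tokens)) != label:
            break  # misclassification achieved
    return tokens

def toy_student(tokens):
    """Stand-in student: sums the first embedding coordinate of known words and
    squashes it through a sigmoid around 0.75 (threshold chosen so the demo flips)."""
    s = sum(teacher_emb.get(t, np.zeros(2))[0] for t in tokens)
    p_pos = 1.0 / (1.0 + np.exp(-(s - 0.75)))
    return np.array([1.0 - p_pos, p_pos])

print(attack(["the", "movie", "was", "good"], label=1, predict_proba=toy_student))
# -> ['the', 'movie', 'was', 'fine']  (toy student's prediction flips to negative)
```

The appeal of this style of attack, and the point the abstract makes, is that the candidate substitutions come from the publicly available teacher model, so features the student unintentionally inherits from the teacher can be exploited with little knowledge of the student's internals.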

