Adversarial Reprogramming of Sequence Classification Neural Networks

09/06/2018
by Paarth Neekhara et al.

Adversarial Reprogramming has demonstrated success in utilizing pre-trained neural network classifiers for alternative classification tasks without modification to the original network. An adversary in such an attack scenario trains an additive contribution to the inputs to repurpose the neural network for the new classification task. While this reprogramming approach works for neural networks with a continuous input space such as that of images, it is not directly applicable to neural networks trained for tasks such as text classification, where the input space is discrete. Repurposing such classification networks would require the attacker to learn an adversarial program that maps inputs from one discrete space to the other. In this work, we introduce a context-based vocabulary remapping model to reprogram neural networks trained on a specific sequence classification task, for a new sequence classification task desired by the adversary. We propose training procedures for this adversarial program in both white-box and black-box settings. We demonstrate the application of our model by adversarially repurposing various text-classification models including LSTM, bi-directional LSTM and CNN for alternate classification tasks.
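The core idea above can be sketched in a few lines of NumPy. Everything in this sketch is an illustrative assumption rather than the paper's actual model: the victim is a toy frozen classifier (mean of token embeddings followed by a linear head, not an LSTM or CNN), the adversarial task is a synthetic labeling rule, and the remapping is a context-free relaxation (one trainable matrix mapping each adversarial token to a distribution over the victim vocabulary) rather than the paper's context-based remapping model. It only illustrates the white-box setting: gradients flow through the frozen victim network to train the adversarial program alone.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (assumptions, not from the paper).
V_VICTIM, V_ADV, EMB, N_CLASSES = 50, 20, 8, 2

# Frozen "victim" classifier: mean token embedding -> linear head.
W_emb = rng.normal(size=(V_VICTIM, EMB))
W_out = rng.normal(size=(EMB, N_CLASSES))

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Adversarial program: row i of theta is trained so that softmax(theta[i])
# is the distribution over victim-vocabulary tokens that token i maps to.
theta = np.zeros((V_ADV, V_VICTIM))

def forward(adv_seq):
    S = softmax(theta[adv_seq])           # (T, V_VICTIM) soft victim tokens
    h = (S @ W_emb).mean(axis=0)          # victim's soft embedding lookup + pooling
    return S, softmax(h @ W_out)          # class probabilities from the frozen victim

# Synthetic adversarial task: label 1 iff the mean token id exceeds the midpoint.
seqs = [rng.integers(0, V_ADV, size=6) for _ in range(64)]
labels = [int(s.mean() > (V_ADV - 1) / 2) for s in seqs]

def mean_loss():
    return np.mean([-np.log(forward(s)[1][y] + 1e-12)
                    for s, y in zip(seqs, labels)])

loss_before = mean_loss()
lr = 0.5
for _ in range(200):                      # white-box training: only theta updates
    grad = np.zeros_like(theta)
    for s, y in zip(seqs, labels):
        S, p = forward(s)
        dz = p.copy()
        dz[y] -= 1.0                      # cross-entropy gradient w.r.t. victim logits
        dh = W_out @ dz                   # back through the frozen linear head
        dS = np.tile(W_emb @ dh / len(s), (len(s), 1))
        dU = S * (dS - (S * dS).sum(axis=1, keepdims=True))  # softmax backward
        np.add.at(grad, s, dU)            # accumulate per-token gradients into theta
    theta -= lr * grad / len(seqs)

loss_after = mean_loss()
print(f"loss before: {loss_before:.3f}, after: {loss_after:.3f}")
```

In the black-box setting described in the abstract, the gradient computation through the victim network would not be available, so the program would instead have to be trained from the victim's output scores alone (e.g., with a gradient-free or score-based estimator).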

Related research:

- Cross-modal Adversarial Reprogramming (02/15/2021)
- Learning Neural Networks on SVD Boosted Latent Spaces for Semantic Classification (01/03/2021)
- Interpretable Sequence Classification via Discrete Optimization (10/06/2020)
- Opening the black box of neural nets: case studies in stop/top discrimination (04/24/2018)
- Attacks against Ranking Algorithms with Text Embeddings: a Case Study on Recruitment Algorithms (08/12/2021)
- A backdoor attack against LSTM-based text classification systems (05/29/2019)
- Outside the Box: Abstraction-Based Monitoring of Neural Networks (11/20/2019)
