Don't Retrain, Just Rewrite: Countering Adversarial Perturbations by Rewriting Text

05/25/2023
by Ashim Gupta, et al.

Can language models transform inputs to protect text classifiers against adversarial attacks? In this work, we present ATINTER, a model that intercepts and learns to rewrite adversarial inputs to make them non-adversarial for a downstream text classifier. Our experiments on four datasets and five attack mechanisms reveal that ATINTER provides better adversarial robustness than existing defense approaches, without compromising task accuracy. For example, on sentiment classification using the SST-2 dataset, our method improves the adversarial accuracy over the best existing defense approach by more than 4%. Moreover, ATINTER generalizes to other tasks and classifiers without having to explicitly retrain it for those settings. Specifically, we find that when ATINTER is trained to remove adversarial perturbations for the sentiment classification task on the SST-2 dataset, it even transfers to the semantically different task of news classification (on AGNews) and improves the adversarial robustness by more than 10%.
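The abstract describes a defense that sits in front of an unmodified classifier: adversarial text is intercepted and rewritten before classification, so the downstream model never needs retraining. The sketch below illustrates that pipeline with toy stand-ins; in the paper the rewriter is a trained text-to-text model and the classifier a fine-tuned neural model, so every function and word list here is an illustrative assumption, not the authors' code.

```python
# Minimal sketch of the defense-by-rewriting pipeline: intercept the input,
# rewrite it to undo adversarial perturbations, then pass it to the
# (unchanged) downstream classifier. All components are toy stand-ins.

def rewrite(text: str) -> str:
    """Toy rewriter: maps known adversarial substitutions back to their
    originals. An ATINTER-style rewriter is instead a trained
    text-to-text model that learns such repairs from data."""
    repairs = {"teriffic": "terrific", "awfull": "awful"}
    return " ".join(repairs.get(tok, tok) for tok in text.split())

def classify(text: str) -> str:
    """Toy sentiment classifier over a tiny keyword lexicon."""
    positive = {"terrific", "great", "good"}
    negative = {"awful", "bad", "terrible"}
    score = sum((tok in positive) - (tok in negative) for tok in text.split())
    return "positive" if score > 0 else "negative"

def defended_classify(text: str) -> str:
    """Full pipeline: rewrite first, then classify.
    The classifier itself is untouched -- no retraining."""
    return classify(rewrite(text))

# A character-level perturbation ("teriffic") hides the positive cue:
adversarial = "a teriffic movie"
print(classify(adversarial))           # undefended: mislabeled "negative"
print(defended_classify(adversarial))  # defended: rewrite restores "positive"
```

Because the rewriter is a separate front-end module, the same trained rewriter can be placed in front of a different classifier or task, which is the transfer property the abstract highlights.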
