A Differentiable Language Model Adversarial Attack on Text Classifiers

07/23/2021
by Ivan Fursov, et al.

The robustness of large Transformer-based models for natural language processing is an important issue given their capabilities and wide adoption. One way to understand and improve the robustness of these models is to explore an adversarial attack scenario: check whether a small perturbation of an input can fool a model. Because of the discrete nature of textual data, gradient-based adversarial methods, widely used in computer vision, are not applicable per se. The standard strategy for overcoming this issue is to develop token-level transformations, which do not take the whole sentence into account. In this paper, we propose a new black-box sentence-level attack. Our method fine-tunes a pre-trained language model to generate adversarial examples. The proposed differentiable loss function depends on a substitute classifier score and an approximate edit distance computed by a deep learning model. We show that the proposed attack outperforms competitors on a diverse set of NLP problems, both on computed metrics and in human evaluation. Moreover, because it uses a fine-tuned language model, the attack generates adversarial examples that are hard to detect, showing that current models are not robust. It is therefore difficult to defend against the proposed attack, which is not the case for other attacks.
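The abstract only sketches the objective, but the idea of a differentiable surrogate loss is concrete enough to illustrate. Below is a minimal PyTorch-style sketch under stated assumptions: the generator's output is kept as a soft token distribution, and `substitute_clf`, `edit_distance_net`, and all parameter names are hypothetical stand-ins, not the authors' actual implementation.

```python
import torch
import torch.nn as nn

# Illustrative sketch only: the paper's exact architectures and weighting
# are not given in the abstract. `substitute_clf` stands in for the
# substitute classifier and `edit_distance_net` for the deep model that
# approximates edit distance; both names are hypothetical.

def adversarial_loss(gen_logits, orig_embeddings, target_label,
                     substitute_clf, edit_distance_net, alpha=1.0):
    """Differentiable surrogate loss for the generator (language model).

    Combines the two terms described in the abstract:
      1. the substitute classifier's score on the generated sequence, and
      2. an approximate edit distance to the original input, computed by
         a learned model so the whole objective stays differentiable.
    """
    # Soft token distributions keep the pipeline differentiable end-to-end;
    # hard argmax sampling would cut gradient flow back to the generator.
    soft_tokens = torch.softmax(gen_logits, dim=-1)

    # Term 1: push the substitute classifier toward an adversarial label.
    clf_logits = substitute_clf(soft_tokens)
    attack_term = nn.functional.cross_entropy(clf_logits, target_label)

    # Term 2: stay close to the original text via the learned
    # edit-distance proxy (averaged over the batch).
    distance_term = edit_distance_net(soft_tokens, orig_embeddings).mean()

    # alpha trades attack success against similarity to the input.
    return attack_term + alpha * distance_term
```

Keeping the generator's output as a soft distribution rather than sampled token ids is what makes both the classifier score and the edit-distance proxy differentiable with respect to the language model's parameters, which is the point of the proposed attack.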


