An Attention Score Based Attacker for Black-box NLP Classifier

12/22/2021
by Yueyang Liu, et al.

Deep neural networks are applied to a wide range of real-world tasks and have achieved satisfactory results in domains such as computer vision, image classification, and natural language processing. Meanwhile, the security and robustness of neural networks have become pressing concerns, as numerous studies have exposed their vulnerabilities. In natural language processing tasks, for instance, a neural network can be fooled by a carefully modified text that remains highly similar to the original. Most previous studies focus on the image domain; unlike images, text is represented as a discrete sequence, so traditional image attack methods are not applicable in the NLP field. In this paper, we propose a word-level attack model against NLP sentiment classifiers, which combines a self-attention-based word selection method with a greedy search algorithm for word substitution. We evaluate our attack model against GRU and 1D-CNN victim models on the IMDB dataset. Experimental results demonstrate that our model achieves a higher attack success rate and is more efficient than previous methods, owing to its efficient word selection algorithm and the minimized number of word substitutions. Moreover, our model is transferable and can be applied to the image domain with several modifications.
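
The abstract describes the method only at a high level: a self-attention score is used to rank words by importance, and a greedy search substitutes the highest-ranked words until the black-box classifier's prediction flips. The following is a minimal sketch of that idea; the helper names synonyms_fn and victim_fn, the top-k cutoff, and the confidence-based acceptance rule are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def select_words_by_attention(tokens, attention_scores, top_k=5):
    """Rank tokens by self-attention score and return the indices of the
    top-k candidates for substitution (assumed selection heuristic)."""
    order = np.argsort(attention_scores)[::-1]    # highest attention first
    return [int(i) for i in order[:top_k]]

def greedy_substitute(tokens, candidate_indices, synonyms_fn, victim_fn, orig_label):
    """Greedily replace high-attention words with candidate synonyms,
    keeping a swap only if it lowers the victim model's confidence in the
    original label; stop as soon as the predicted label flips."""
    adv = list(tokens)
    best_conf = victim_fn(adv)[orig_label]
    for idx in candidate_indices:
        for candidate in synonyms_fn(adv[idx]):   # e.g. embedding-space neighbours
            trial = list(adv)
            trial[idx] = candidate
            probs = victim_fn(trial)              # black-box query: probabilities only
            if probs[orig_label] < best_conf:     # keep the most damaging substitution
                best_conf = probs[orig_label]
                adv = trial
                if int(np.argmax(probs)) != orig_label:
                    return adv, True              # attack succeeded
    return adv, False                             # label did not flip
```

A full attack would additionally constrain the substitutions (for example, with a synonym dictionary or sentence-embedding similarity) so that the adversarial text stays highly similar to the original, which is the property the abstract emphasizes.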

