Token-Modification Adversarial Attacks for Natural Language Processing: A Survey

03/01/2021
by Tom Roth et al.

There are now many adversarial attacks for natural language processing systems. Of these, the vast majority achieve success by modifying individual document tokens, which we call here token-modification attacks. Each token-modification attack is defined by a specific combination of fundamental components, such as a constraint on the adversary or a particular search algorithm. Motivated by this observation, we survey existing token-modification attacks and extract the components of each. We use an attack-independent framework to structure our survey, which results in an effective categorisation of the field and enables easy comparison of components. We hope this survey will guide new researchers to this field and spark further research into the individual attack components.
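To make the component view concrete, below is a minimal, hypothetical sketch (not the survey's exact formalism) of a token-modification attack decomposed into its fundamental parts: a goal function (cause misclassification), a transformation (candidate token substitutions), a constraint on the adversary (a bound on how many tokens may change), and a search algorithm (greedy word swapping). The names `original_label_prob` and `synonyms` are illustrative stand-ins, not components defined in the paper.

```python
from typing import Callable, Dict, List

def greedy_token_swap_attack(
    tokens: List[str],
    original_label_prob: Callable[[List[str]], float],  # victim model: P(original label | tokens)
    synonyms: Dict[str, List[str]],                      # transformation: candidate substitutions per token
    max_fraction_changed: float = 0.2,                   # constraint: fraction of tokens allowed to change
) -> List[str]:
    """Greedily swap tokens to lower the model's confidence in the original label."""
    adv = list(tokens)
    budget = max(1, int(len(tokens) * max_fraction_changed))
    changed = 0

    for i, tok in enumerate(tokens):
        if changed >= budget or original_label_prob(adv) < 0.5:
            break  # constraint exhausted, or goal (misclassification) already reached
        best_prob, best_tok = original_label_prob(adv), None
        for candidate in synonyms.get(tok, []):          # search step: try each allowed substitution
            trial = adv[:i] + [candidate] + adv[i + 1:]
            prob = original_label_prob(trial)
            if prob < best_prob:                          # keep the most damaging swap at this position
                best_prob, best_tok = prob, candidate
        if best_tok is not None:
            adv[i] = best_tok
            changed += 1
    return adv
```

Swapping any one of these components, for example replacing the greedy search with beam search, or the token-count constraint with a semantic-similarity constraint, yields a different attack from the same template, which is the decomposition the survey uses to organise the literature.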
