Fraud's Bargain Attack: Generating Adversarial Text Samples via Word Manipulation Process

03/01/2023
by Mingze Ni, et al.

Recent studies on adversarial examples expose vulnerabilities of natural language processing (NLP) models. Existing techniques for generating adversarial examples are typically driven by deterministic heuristic rules that are agnostic to the optimal adversarial examples, a strategy that often results in attack failures. To this end, this research proposes the Fraud's Bargain Attack (FBA), which utilizes a novel randomization mechanism to enlarge the search space and enable high-quality adversarial examples to be generated with high probability. FBA applies the Metropolis-Hastings sampler, a member of the Markov chain Monte Carlo family, to enhance the selection of adversarial examples from all candidates proposed by a customized stochastic process that we call the Word Manipulation Process (WMP). WMP perturbs one word at a time via insertion, removal, or substitution in a context-aware manner. Extensive experiments demonstrate that FBA outperforms state-of-the-art methods in terms of both attack success rate and imperceptibility.
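To make the sampling idea concrete, the following is a minimal sketch of how a Metropolis-Hastings loop can be wired to WMP-style word edits. The function names (mh_attack, toy_wmp), the toy vocabulary, and the symmetric-proposal simplification are illustrative assumptions rather than the paper's implementation; in the actual attack the proposal is context-aware and the target distribution scores both attack success and imperceptibility.

```python
import math
import random

# Toy vocabulary for the illustrative proposal below (an assumption; the
# actual WMP draws context-aware candidates from a language model).
VOCAB = ["good", "great", "fine", "bad", "poor", "awful"]


def toy_wmp(x):
    """Toy stand-in for the Word Manipulation Process: insert, remove, or
    substitute a single word. Treated as a symmetric proposal here, so the
    forward/backward log-proposal terms are both returned as 0.0."""
    x = list(x)
    op = random.choice(["insert", "remove", "substitute"]) if len(x) > 1 else "insert"
    if op == "insert":
        x.insert(random.randrange(len(x) + 1), random.choice(VOCAB))
    elif op == "remove":
        del x[random.randrange(len(x))]
    else:
        x[random.randrange(len(x))] = random.choice(VOCAB)
    return x, 0.0, 0.0


def mh_attack(x0, score, propose=toy_wmp, steps=500, seed=0):
    """Metropolis-Hastings loop over word-level edits.

    x0      : original sentence as a list of tokens
    score   : unnormalised log-density of the target distribution, e.g. a
              caller-supplied mix of victim-model misclassification
              confidence and semantic similarity to x0
    propose : proposal returning (x_new, log_q_forward, log_q_backward)
    """
    random.seed(seed)
    x, best = list(x0), list(x0)
    for _ in range(steps):
        x_new, log_q_fwd, log_q_bwd = propose(x)
        # Acceptance: log alpha = log pi(x') - log pi(x) + log q(x|x') - log q(x'|x)
        log_alpha = (score(x_new) - score(x)) + (log_q_bwd - log_q_fwd)
        if math.log(random.random() + 1e-300) < min(0.0, log_alpha):
            x = x_new
            if score(x) > score(best):
                best = list(x)
    return best
```

A real attack would supply a score function built from the victim classifier's loss plus a sentence-similarity penalty; the loop then keeps the highest-scoring accepted state as the adversarial example.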


Related research:

03/09/2023  BeamAttack: Generating High-quality Textual Adversarial Examples through Beam Search and Mixed Semantic Spaces
Natural language processing models based on neural networks are vulnerab...

09/16/2020  Contextualized Perturbation for Textual Adversarial Attack
Adversarial examples expose the vulnerabilities of natural language proc...

10/15/2021  Generating Natural Language Adversarial Examples through An Improved Beam Search Algorithm
The research of adversarial attacks in the text domain attracts many int...

12/24/2020  A Context Aware Approach for Generating Natural Language Attacks
We study an important task of attacking natural language processing mode...

01/26/2019  Towards Weighted-Sampling Audio Adversarial Example Attack
Recent studies have highlighted audio adversarial examples as a ubiquito...

12/29/2020  Generating Natural Language Attacks in a Hard Label Black Box Setting
We study an important and challenging task of attacking natural language...

06/29/2020  Natural Backdoor Attack on Text Data
Deep learning has been widely adopted in natural language processing app...
