How Far Are We from Real Synonym Substitution Attacks?

10/06/2022
by Cheng-Han Chiang, et al.

In this paper, we explore the following question: how far are we from real synonym substitution attacks (SSAs)? We approach this question by examining how SSAs replace words in the original sentence, and show that unresolved obstacles still cause current SSAs to generate invalid adversarial samples. We reveal that four widely used word substitution methods generate a large fraction of invalid substitution words that are ungrammatical or that fail to preserve the original sentence's semantics. Next, we show that the semantic and grammatical constraints used in SSAs to detect invalid word replacements are highly insufficient for detecting invalid adversarial samples. Our work is an important stepping stone toward constructing better SSAs in the future.
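To make the attack setting concrete, the sketch below shows a minimal greedy substitution loop of the kind SSAs build on. The synonym table and the classifier are toy stand-ins invented for illustration; real attacks draw candidates from WordNet, counter-fitted embeddings, or a masked language model, and the paper's point is that many such candidates (e.g. "sound" for "good") are ungrammatical or change the sentence's meaning.

```python
# Hypothetical synonym candidates; real SSAs draw these from WordNet,
# embedding neighbors, or a masked language model. Note that "sound" is
# a WordNet-style "synonym" of "good" that would not fit this context,
# illustrating the invalid-substitution problem the paper studies.
SYNONYMS = {
    "good": ["great", "sound", "dependable"],
    "movie": ["film", "picture"],
}

def toy_classifier(sentence):
    """Stand-in victim model: 'positive' iff the word 'good' appears."""
    return "positive" if "good" in sentence.split() else "negative"

def substitute(sentence, classifier):
    """Greedily replace one word at a time until the label flips."""
    words = sentence.split()
    original_label = classifier(sentence)
    for i, word in enumerate(words):
        for candidate in SYNONYMS.get(word, []):
            perturbed = " ".join(words[:i] + [candidate] + words[i + 1:])
            if classifier(perturbed) != original_label:
                return perturbed  # adversarial sample found
    return None  # attack failed

adv = substitute("a good movie", toy_classifier)
# adv == "a great movie": the first candidate already flips the toy label
```

A real SSA would additionally filter `perturbed` through semantic-similarity and grammar checks; the paper argues those filters, as currently used, are too weak to reject invalid samples.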


Related research

01/01/2022 · Generating Adversarial Samples For Training Wake-up Word Detection Systems Against Confusing Words
Wake-up word detection models are widely used in real life, but suffer f...

04/25/2020 · Reevaluating Adversarial Examples in Natural Language
State-of-the-art attacks on NLP models have different definitions of wha...

12/29/2020 · Generating Adversarial Examples in Chinese Texts Using Sentence-Pieces
Adversarial attacks in texts are mostly substitution-based methods that ...

06/02/2023 · VoteTRANS: Detecting Adversarial Text without Training by Voting on Hard Labels of Transformations
Adversarial attacks reveal serious flaws in deep learning models. More d...

10/22/2020 · Rewriting Meaningful Sentences via Conditional BERT Sampling and an application on fooling text classifiers
Most adversarial attack methods that are designed to deceive a text clas...

07/19/2017 · Expect the unexpected: Harnessing Sentence Completion for Sarcasm Detection
The trigram `I love being' is expected to be followed by positive words ...

08/23/2021 · Semantic-Preserving Adversarial Text Attacks
Deep neural networks (DNNs) are known to be vulnerable to adversarial im...
