Frequency-Guided Word Substitutions for Detecting Textual Adversarial Examples

04/13/2020
by Maximilian Mozes, et al.

While recent efforts have shown that neural text processing models are vulnerable to adversarial examples, comparatively little attention has been paid to explicitly characterizing their effectiveness. To address this, we present analytical insights into the word frequency characteristics of word-level adversarial examples for neural text classification models. We show that adversarial attacks against CNN-, LSTM-, and Transformer-based classification models perform token substitutions that are identifiable through word frequency differences between replaced words and their substitutions. Based on these findings, we propose frequency-guided word substitutions (FGWS), a simple algorithm for the automatic detection of adversarially perturbed textual sequences. FGWS exploits the word frequency properties of adversarial word substitutions, and we assess its suitability for the automatic detection of adversarial examples generated from the SST-2 and IMDb sentiment datasets. Our method provides promising results, accurately detecting adversarial examples with F_1 detection scores of up to 93.7% against BERT-based classification models. We compare our approach against baseline detection approaches as well as a recently proposed perturbation discrimination framework, and show that we outperform existing approaches by up to 15.1%.
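The abstract describes FGWS only at a high level: replace low-frequency words in an input with higher-frequency synonyms, and flag the input as adversarial when the model's confidence in its original prediction drops noticeably afterwards. The Python sketch below illustrates that idea under stated assumptions; the function name fgws_detect, the predict_proba callable, the thresholds delta and gamma, and the toy frequency table and dummy model are all illustrative choices, not taken from the paper.

    def fgws_detect(tokens, freq, synonyms, predict_proba, delta, gamma):
        """Sketch of the FGWS idea: undo suspiciously rare word choices and
        flag the input if the model's confidence shifts by more than gamma.

        tokens        -- list of words in the input sequence
        freq          -- word -> corpus frequency (e.g. training-set counts)
        synonyms      -- word -> iterable of candidate replacements
        predict_proba -- callable mapping a token list to class probabilities
        delta         -- frequency threshold below which a word counts as rare
        gamma         -- confidence-drop threshold for flagging (assumed names)
        """
        probs = predict_proba(tokens)
        y = max(range(len(probs)), key=probs.__getitem__)  # predicted class

        restored = []
        for w in tokens:
            if freq.get(w, 0) < delta:
                # Candidate replacements that occur more often than the word itself
                cands = [s for s in synonyms.get(w, ())
                         if freq.get(s, 0) > freq.get(w, 0)]
                if cands:
                    w = max(cands, key=lambda s: freq.get(s, 0))
            restored.append(w)

        # Confidence drop on the originally predicted class after substitution
        drop = probs[y] - predict_proba(restored)[y]
        return drop > gamma, restored

    if __name__ == "__main__":
        # Toy setup: an attacker swapped the frequent word "good" for the
        # rare "commendable" to flip a sentiment model's prediction.
        freq = {"good": 120, "film": 80, "commendable": 2}
        synonyms = {"commendable": {"good"}}

        def predict_proba(tokens):
            # Dummy binary sentiment model: [negative, positive]
            pos = 0.9 if "good" in tokens else 0.35
            return [1 - pos, pos]

        flagged, restored = fgws_detect(
            "commendable film".split(), freq, synonyms, predict_proba,
            delta=10, gamma=0.2,
        )
        print(flagged, restored)  # True ['good', 'film']

In this sketch, detection requires no retraining: the frequency-guided substitutions act as a test-time probe, and the confidence difference on the originally predicted class serves as the detection score.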
