Masked Language Model Based Textual Adversarial Example Detection

04/18/2023
by Xiaomei Zhang, et al.

Adversarial attacks are a serious threat to the reliable deployment of machine learning models in safety-critical applications. They can misguide current models into incorrect predictions by slightly modifying the inputs. Recently, substantial work has shown that adversarial examples tend to deviate from the underlying data manifold of normal examples, whereas pre-trained masked language models fit the manifold of normal NLP data well. To explore how masked language models can be used for adversarial detection, we propose a novel textual adversarial example detection method, Masked Language Model-based Detection (MLMD), which produces clearly distinguishable signals between normal and adversarial examples by examining the changes in the manifold induced by the masked language model. MLMD is plug and play (i.e., it requires no retraining of the victim model) and is agnostic to the classification task, the victim model's architecture, and the attack method to be defended against. We evaluate MLMD on several benchmark textual datasets, widely studied machine learning models, and state-of-the-art (SOTA) adversarial attacks (3 x 4 x 4 = 48 settings in total). Experimental results show that MLMD achieves strong performance, with detection accuracy up to 0.984, 0.967, and 0.901 on the AG-NEWS, IMDB, and SST-2 datasets, respectively. Additionally, MLMD is superior, or at least comparable, to SOTA detection defenses in detection accuracy and F1 score. Among the many defenses based on the off-manifold assumption about adversarial examples, this work offers a new angle for capturing the manifold change. The code for this work is openly accessible at <https://github.com/mlmddetection/MLMDdetection>.
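To make the mask-and-reconstruct intuition concrete, below is a minimal sketch: mask tokens of an input one at a time, let a pre-trained masked language model reconstruct them, and measure how often the victim classifier's prediction changes. The model names (bert-base-uncased, textattack/bert-base-uncased-imdb), the flip_rate statistic, and the single-token masking loop are illustrative assumptions for this sketch, not the paper's exact MLMD scoring procedure.

```python
# Hypothetical sketch of mask-and-reconstruct detection; not the official MLMD code.
import torch
from transformers import (
    AutoModelForMaskedLM,
    AutoModelForSequenceClassification,
    AutoTokenizer,
)

mlm_name = "bert-base-uncased"                      # masked LM approximating the normal-data manifold
victim_name = "textattack/bert-base-uncased-imdb"   # example victim classifier (assumption)

tokenizer = AutoTokenizer.from_pretrained(mlm_name)
mlm = AutoModelForMaskedLM.from_pretrained(mlm_name).eval()
victim_tokenizer = AutoTokenizer.from_pretrained(victim_name)
victim = AutoModelForSequenceClassification.from_pretrained(victim_name).eval()


@torch.no_grad()
def victim_label(text: str) -> int:
    """Victim model's predicted class for a piece of text."""
    inputs = victim_tokenizer(text, return_tensors="pt", truncation=True)
    return victim(**inputs).logits.argmax(dim=-1).item()


@torch.no_grad()
def flip_rate(text: str) -> float:
    """Fraction of single-token mask-and-reconstruct edits that change the
    victim model's prediction; adversarial inputs tend to score higher."""
    original_label = victim_label(text)
    input_ids = tokenizer(text, return_tensors="pt", truncation=True)["input_ids"][0]
    flips, trials = 0, 0
    for pos in range(1, input_ids.size(0) - 1):      # skip [CLS] and [SEP]
        masked = input_ids.clone()
        masked[pos] = tokenizer.mask_token_id
        logits = mlm(input_ids=masked.unsqueeze(0)).logits
        masked[pos] = logits[0, pos].argmax().item()  # masked LM's reconstruction
        reconstructed = tokenizer.decode(masked[1:-1], skip_special_tokens=True)
        flips += int(victim_label(reconstructed) != original_label)
        trials += 1
    return flips / max(trials, 1)


# Usage: flag inputs whose flip rate exceeds a threshold tuned on held-out data.
print(flip_rate("The movie was surprisingly good and I enjoyed every minute."))
```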


Related research

- Textual Manifold-based Defense Against Natural Language Adversarial Examples (11/05/2022)
- Defending Adversarial Attacks via Semantic Feature Manipulation (02/03/2020)
- Geometric Adversarial Attacks and Defenses on 3D Point Clouds (12/10/2020)
- On Need for Topology-Aware Generative Models for Manifold-Based Defenses (09/07/2019)
- Adversarial Gain (11/04/2018)
- Manifold Preserving Adversarial Learning (03/10/2019)
- A Manifold View of Adversarial Risk (03/24/2022)
