Searching for an Effective Defender: Benchmarking Defense against Adversarial Word Substitution

08/29/2021
by   Zongyi Li, et al.

Recent studies have shown that deep neural networks are vulnerable to intentionally crafted adversarial examples, and various methods have been proposed to defend neural NLP models against adversarial word-substitution attacks. However, there has been no systematic study comparing different defense approaches under the same attack setting. In this paper, we fill this gap with a comprehensive study of how neural text classifiers trained with various defense methods behave under representative adversarial attacks. In addition, we propose an effective method that further improves the robustness of neural text classifiers against such attacks, achieving the highest accuracy on both clean and adversarial examples on the AGNEWS and IMDB datasets by a significant margin.
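The benchmark's core operation, evaluating a trained text classifier under a word-substitution attack, can be sketched with the open-source TextAttack library. The checkpoint name, dataset split, and attack recipe below are illustrative assumptions for this sketch, not necessarily the paper's exact setup.

    import transformers
    import textattack

    # Load a fine-tuned AG News classifier (illustrative checkpoint).
    model = transformers.AutoModelForSequenceClassification.from_pretrained(
        "textattack/bert-base-uncased-ag-news")
    tokenizer = transformers.AutoTokenizer.from_pretrained(
        "textattack/bert-base-uncased-ag-news")
    model_wrapper = textattack.models.wrappers.HuggingFaceModelWrapper(model, tokenizer)

    # A representative word-substitution attack (TextFooler) run on a fixed number
    # of test examples, so different defenses can be compared under the same attack setting.
    dataset = textattack.datasets.HuggingFaceDataset("ag_news", split="test")
    attack = textattack.attack_recipes.TextFoolerJin2019.build(model_wrapper)
    attack_args = textattack.AttackArgs(num_examples=100)
    attacker = textattack.Attacker(attack, dataset, attack_args)
    results = attacker.attack_dataset()  # prints original accuracy and accuracy under attack

Swapping in a model trained with a different defense method while holding the dataset and attack recipe fixed gives the kind of like-for-like comparison described in the abstract.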
