Generating Textual Adversarial Examples for Deep Learning Models: A Survey

01/21/2019
by   Wei Emma Zhang, et al.

With the development of high-performance computing devices, deep neural networks (DNNs) have gained significant popularity in many Artificial Intelligence (AI) applications in recent years. However, previous work has shown that DNNs are vulnerable to strategically modified samples, named adversarial examples. These samples are generated with imperceptible perturbations but can fool DNNs into giving false predictions. Inspired by the popularity of generating adversarial examples against image DNNs, research efforts on attacking DNNs for textual applications have emerged in recent years. However, existing perturbation methods for images cannot be directly applied to text, as text data is discrete. In this article, we review research works that address this difference and generate textual adversarial examples against DNNs. We collect, select, summarize, discuss, and analyze these works comprehensively, covering all related information to make the article self-contained. Finally, drawing on the reviewed literature, we provide further discussion and suggestions on this topic.
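To make the discreteness point concrete: in image space, an attacker can nudge pixel values by a small epsilon, but for text the smallest possible change is swapping, inserting, or deleting a whole character or word. The sketch below is a minimal, hypothetical illustration of a word-level substitution perturbation; the function and synonym table are illustrative only and are not taken from any specific attack covered by the survey.

```python
# Minimal sketch of a discrete, word-level text perturbation -- the kind
# of operation this survey covers. All names here are illustrative.

def perturb(tokens, substitutions):
    """Replace each token that has a listed substitute, producing a
    candidate adversarial example. Unlike pixel-space attacks, the
    change is discrete: a word is swapped, not nudged by epsilon."""
    return [substitutions.get(t, t) for t in tokens]

sentence = "the movie was a great success".split()

# Hypothetical synonym table an attacker might search over; real attacks
# pick substitutes that preserve semantics but flip the model's prediction.
synonyms = {"movie": "film", "great": "remarkable"}

adversarial = perturb(sentence, synonyms)
print(" ".join(adversarial))  # the film was a remarkable success
```

In practice, the surveyed attacks differ mainly in how they choose which tokens to change and which substitutes to use (e.g., guided by gradients, word importance scores, or language models), while keeping the perturbation imperceptible to a human reader.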

