Adversarial Attacks and Defense on Texts: A Survey

05/28/2020
by Aminul Huq, et al.

Deep learning models have been widely used in recent years for tasks such as object recognition, self-driving cars, face recognition, speech recognition, and sentiment analysis. However, it has been shown that these models are vulnerable to small, carefully crafted perturbations that force them to misclassify. This issue has been studied extensively in the image and audio domains, but far less so for textual data, and even fewer surveys exist that organize the different types of attacks and defense techniques for text. In this manuscript we collect and analyze the various attacking techniques and defense models proposed for this problem, in order to provide a more comprehensive picture. We then point out notable findings across the surveyed papers and the challenges that must be overcome to move the field forward.
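To make the idea of a textual adversarial attack concrete, the following is a minimal illustrative sketch (not a method from the survey): a toy keyword-based sentiment "classifier" and a character-swap attack that leaves the text readable to a human while flipping the model's prediction. All names and the classifier itself are hypothetical simplifications; real attacks covered by the survey target neural models.

```python
# Hypothetical toy example: a keyword-lookup sentiment classifier and a
# character-level adversarial perturbation. For illustration only.

POSITIVE = {"good", "great", "excellent"}
NEGATIVE = {"bad", "awful", "terrible"}

def classify(text):
    """Toy classifier: score = (# positive keywords) - (# negative keywords)."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative"

def char_swap_attack(text):
    """Swap two adjacent characters inside each sentiment keyword, so a human
    still reads the word but the exact-match lookup no longer fires."""
    out = []
    for w in text.split():
        if w.lower() in POSITIVE | NEGATIVE and len(w) > 3:
            w = w[0] + w[2] + w[1] + w[3:]  # e.g. "great" -> "gerat"
        out.append(w)
    return " ".join(out)

original = "the movie was great"
adversarial = char_swap_attack(original)
print(classify(original))     # positive
print(classify(adversarial))  # negative: the tiny edit flips the prediction
```

The same principle scales up to the attacks the survey covers: character-, word-, and sentence-level perturbations chosen to preserve human meaning while changing the model's output.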


