Universal Rules for Fooling Deep Neural Networks based Text Classification

01/22/2019
by Di Li, et al.

Deep-learning-based natural language processing techniques are now extensively used for tasks such as spam filtering and censorship evaluation in social networks. However, only a handful of works have evaluated the vulnerabilities of such deep neural networks. Here, we go beyond per-sample attacks to investigate, for the first time, universal rules, i.e., rules that are sample agnostic and can therefore turn any text sample into an adversarial one. In fact, the universal rules use no information from the attacked method itself (no model internals, gradient information, or training dataset information), making them black-box universal attacks. In other words, the universal rules are both sample and method agnostic. By proposing a coevolutionary optimization algorithm, we show that it is possible to create universal rules that automatically craft nearly imperceptible adversarial samples (fewer than five perturbations, each close to a misspelling, are inserted into the text sample). A comparison with a random search algorithm further demonstrates the strength of the method. Thus, universal rules for fooling networks are shown to exist. We hope these results will spur the development of further sample- and model-agnostic attacks, as well as defenses against them.
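To make the idea concrete, below is a minimal sketch of what sample-agnostic perturbation rules and a black-box fitness evaluation could look like. Everything here is an assumption for illustration: the names (`Rule`, `apply_rules`, `fool_rate`, `random_search`), the (relative position, edit type) rule encoding, and the misspelling-like edits are hypothetical, since the abstract does not specify the paper's rule representation or its coevolutionary optimizer. Only the random-search baseline the paper compares against is sketched as the search loop.

```python
"""Sketch of sample-agnostic ("universal") text perturbation rules.

Assumptions (not taken from the paper): a rule is encoded as
(relative_position, edit_type) and realized as a misspelling-like
character edit applied to any input text.
"""
import random
from typing import Callable, List, Tuple

# Hypothetical rule encoding: (relative position in [0, 1), edit type).
Rule = Tuple[float, str]

def apply_rule(text: str, rule: Rule) -> str:
    """Apply one rule to any input text; no per-sample information is used."""
    rel_pos, edit = rule
    if len(text) < 3:
        return text
    i = min(int(rel_pos * len(text)), len(text) - 2)
    if edit == "swap":   # transpose two adjacent characters (typo-like)
        return text[:i] + text[i + 1] + text[i] + text[i + 2:]
    if edit == "drop":   # delete one character (typo-like)
        return text[:i] + text[i + 1:]
    return text

def apply_rules(text: str, rules: List[Rule]) -> str:
    """Craft an adversarial candidate with at most len(rules) (< 5) edits."""
    for rule in rules:
        text = apply_rule(text, rule)
    return text

def fool_rate(rules: List[Rule],
              classifier: Callable[[str], int],
              samples: List[Tuple[str, int]]) -> float:
    """Fitness of a rule set: fraction of samples whose predicted label
    flips. Only the classifier's output labels are queried (black-box)."""
    flips = sum(classifier(apply_rules(x, rules)) != y for x, y in samples)
    return flips / len(samples)

def random_search(classifier: Callable[[str], int],
                  samples: List[Tuple[str, int]],
                  n_rules: int = 4, iters: int = 200, seed: int = 0):
    """Random-search baseline over rule sets; the paper's coevolutionary
    optimizer would replace this loop while the fitness stays the same."""
    rng = random.Random(seed)
    best, best_fit = None, -1.0
    for _ in range(iters):
        rules = [(rng.random(), rng.choice(["swap", "drop"]))
                 for _ in range(n_rules)]
        fit = fool_rate(rules, classifier, samples)
        if fit > best_fit:
            best, best_fit = rules, fit
    return best, best_fit
```

Given any black-box classifier `f(text) -> label` and a small labeled set, `random_search(f, samples)` returns the rule set with the highest label-flip rate; because a rule only encodes a relative position and an edit type, the same rule set can then be applied to unseen samples, which is what makes the attack universal.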

Related research

11/17/2020 - Generating universal language adversarial examples by understanding and enhancing the transferability across neural models
11/24/2020 - Towards Imperceptible Universal Attacks on Texture Recognition
05/01/2020 - Universal Adversarial Attacks with Natural Triggers for Text Classification
06/19/2022 - A Universal Adversarial Policy for Text Classifiers
09/25/2021 - MINIMAL: Mining Models for Data Free Universal Adversarial Triggers
09/16/2021 - Don't Search for a Search Method – Simple Heuristics Suffice for Adversarial Text Attacks
08/20/2019 - Universal Adversarial Triggers for Attacking and Analyzing NLP
