Explain2Attack: Text Adversarial Attacks via Cross-Domain Interpretability

10/14/2020
by   Mahmoud Hossam, et al.

Training robust deep learning models for downstream tasks is a critical challenge. Research has shown that downstream models can be easily fooled by adversarial inputs that resemble the training data but are slightly perturbed in ways imperceptible to humans. Understanding how natural language models behave under these attacks is crucial to defending them better. In the black-box attack setting, where no access to model parameters is available, the attacker can only query the target model's outputs to craft a successful attack. Current state-of-the-art black-box attacks are costly in both computational complexity and the number of queries needed to craft successful adversarial examples. The query count matters in real-world scenarios, where fewer queries are desired to avoid drawing suspicion to an attacking agent. In this paper, we propose Explain2Attack, a black-box adversarial attack on the text classification task. Instead of searching for important words to perturb by querying the target model, Explain2Attack employs an interpretable substitute model from a similar domain to learn word importance scores. We show that our framework matches or outperforms the attack rates of state-of-the-art models, yet with lower query cost and higher efficiency.
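To make the core idea concrete, here is a minimal Python sketch (not the paper's actual implementation) of learning word-importance scores from an interpretable substitute model trained on a similar-domain corpus, so that word selection requires no queries to the target model. The toy corpus, the choice of a bag-of-words logistic regression as the substitute, and all function names are illustrative assumptions.

```python
# Sketch of the cross-domain interpretability idea behind Explain2Attack:
# train an interpretable substitute on a *similar-domain* dataset and read
# word-importance scores off it, instead of probing the target model.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy "similar domain" sentiment corpus standing in for the cross-domain
# training data (hypothetical examples).
substitute_texts = [
    "a wonderful and moving film", "great acting and a brilliant plot",
    "terrible pacing and a boring script", "an awful, dull experience",
]
substitute_labels = [1, 1, 0, 0]

# Interpretable substitute: bag-of-words + logistic regression, whose
# coefficients directly expose a per-word importance score.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(substitute_texts)
substitute = LogisticRegression().fit(X, substitute_labels)

vocab = vectorizer.get_feature_names_out()
importance = {w: abs(c) for w, c in zip(vocab, substitute.coef_[0])}

def rank_words(sentence):
    """Rank a sentence's words by substitute-learned importance,
    without sending a single query to the target model."""
    words = sentence.lower().split()
    return sorted(words, key=lambda w: importance.get(w, 0.0), reverse=True)

# An attack loop would then perturb words in this order (e.g., via synonym
# swaps) and query the target model only to verify attack success.
print(rank_words("a boring film with great acting"))
```

Because the ranking step is handled entirely by the substitute, target-model queries are spent only on verifying candidate perturbations, which is where the query savings over search-based black-box attacks would come from.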


