
Explain2Attack: Text Adversarial Attacks via Cross-Domain Interpretability

by Mahmoud Hossam, et al.

Training robust deep learning models for downstream tasks is a critical challenge. Research has shown that downstream models can be easily fooled by adversarial inputs that resemble the training data but are slightly perturbed in a way imperceptible to humans. Understanding how natural language models behave under such attacks is crucial to defending them. In the black-box attack setting, where no access to model parameters is available, the attacker can only query the target model's outputs to craft a successful attack. Current state-of-the-art black-box attacks are costly in both computational complexity and the number of queries needed to craft successful adversarial examples. The query count is critical in real-world scenarios, where fewer queries are desired to avoid drawing suspicion to the attacking agent. In this paper, we propose Explain2Attack, a black-box adversarial attack on the text classification task. Instead of searching for important words to perturb by querying the target model, Explain2Attack employs an interpretable substitute model from a similar domain to learn word importance scores. We show that our framework matches or outperforms the attack rates of state-of-the-art models, yet with a lower query cost and higher efficiency.
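The core idea — rank words by importance on a cheap substitute model, then spend target-model queries only on verifying candidate perturbations — can be sketched as follows. This is a hypothetical toy illustration, not the authors' implementation: `substitute_importance` uses simple leave-one-out scoring on a stand-in substitute model, and the synonym table and scoring functions are invented for the example.

```python
# Hypothetical sketch of interpretability-guided word substitution.
# A substitute model supplies word importance scores; the attacker
# perturbs the highest-scoring words first, querying the target model
# only to check whether the prediction has flipped.

def substitute_importance(words, score_fn):
    """Leave-one-out importance: the drop in the substitute model's score
    when a word is removed. No target-model queries are spent here."""
    base = score_fn(words)
    return [base - score_fn(words[:i] + words[i + 1:])
            for i in range(len(words))]

def attack(words, target_label, substitute_score, target_predict, synonyms):
    """Replace words in order of substitute-model importance until the
    target model's prediction flips. Each candidate costs one query."""
    scores = substitute_importance(words, substitute_score)
    order = sorted(range(len(words)), key=scores.__getitem__, reverse=True)
    adv, queries = list(words), 0
    for i in order:
        for cand in synonyms.get(adv[i], []):
            trial = adv[:i] + [cand] + adv[i + 1:]
            queries += 1                      # one target-model query
            if target_predict(trial) != target_label:
                return trial, queries         # attack succeeded
    return adv, queries                       # no flip found

# Toy example (all functions invented for illustration):
substitute = lambda ws: sum(w == "terrible" for w in ws) / max(len(ws), 1)
target = lambda ws: 1 if "terrible" in ws else 0
adv, q = attack(["the", "movie", "was", "terrible"], 1,
                substitute, target, {"terrible": ["mediocre"]})
# → (["the", "movie", "was", "mediocre"], 1)
```

Because the importance ranking comes from the substitute model, the only queries charged against the target are the candidate checks — here a single query — which is the source of the query savings the abstract describes.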



