Estimating the Adversarial Robustness of Attributions in Text with Transformers

12/18/2022
by Adam Ivankay et al.

Explanations are a crucial component of deep neural network (DNN) classifiers in practice. In high-stakes applications, faithful and robust explanations are essential for understanding a model's decisions and building trust in them. However, recent work has shown that state-of-the-art attribution methods for text classifiers are susceptible to imperceptible adversarial perturbations that alter explanations significantly while leaving the prediction outcome unchanged. If undetected, this can critically mislead the users of DNNs. It is therefore important to understand how such adversarial perturbations affect a network's explanations and how perceptible the perturbations themselves are. In this work, we establish a novel definition of attribution robustness (AR) in text classification, based on Lipschitz continuity. Crucially, it reflects both the attribution change induced by adversarial input alterations and the perceptibility of those alterations. Moreover, we introduce a wide set of text similarity measures to effectively capture locality between two text samples and the imperceptibility of adversarial perturbations in text. We then propose our novel TransformerExplanationAttack (TEA), a strong adversary that provides a tight estimate of attribution robustness in text classification. TEA uses state-of-the-art language models to extract word substitutions that result in fluent, contextual adversarial samples. Finally, with experiments on several text classification architectures, we show that TEA consistently outperforms current state-of-the-art AR estimators, yielding perturbations that alter explanations to a greater extent while being more fluent and less perceptible.
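
The abstract only sketches the approach at a high level. The snippet below is a minimal, hypothetical illustration of the two ideas it mentions: estimating attribution robustness as a Lipschitz-style ratio between attribution change and the perceptibility of the input edit, and using a masked language model to propose fluent word substitutions. The model names, the gradient-based attribution, and the word-overlap distance are illustrative assumptions, not the paper's actual method or similarity measures.

```python
# Hedged sketch: lower-bound the Lipschitz-style attribution robustness (AR)
# constant by searching for label-preserving word substitutions that change
# the attribution map as much as possible per unit of perceptibility.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

clf_name = "textattack/bert-base-uncased-imdb"  # illustrative classifier, not the paper's
tok = AutoTokenizer.from_pretrained(clf_name)
clf = AutoModelForSequenceClassification.from_pretrained(clf_name)
mlm = pipeline("fill-mask", model="bert-base-uncased")  # proposes contextual substitutions


def attribution(text):
    """Simple gradient attribution: norm of the gradient of the predicted-class
    logit w.r.t. each input token embedding (a stand-in for any attribution method)."""
    enc = tok(text, return_tensors="pt", truncation=True)
    embeds = clf.get_input_embeddings()(enc["input_ids"]).detach().requires_grad_(True)
    logits = clf(inputs_embeds=embeds, attention_mask=enc["attention_mask"]).logits
    pred = logits[0].argmax()
    logits[0, pred].backward()
    return pred.item(), embeds.grad.norm(dim=-1).squeeze(0)


def attribution_distance(a, b):
    """Cosine distance between two attribution maps, padded to equal length."""
    n = max(len(a), len(b))
    a = torch.nn.functional.pad(a, (0, n - len(a)))
    b = torch.nn.functional.pad(b, (0, n - len(b)))
    return 1 - torch.nn.functional.cosine_similarity(a, b, dim=0).item()


def text_distance(x, y):
    """Crude perceptibility proxy: fraction of words changed. The paper instead
    uses a richer set of text similarity measures."""
    wx, wy = x.split(), y.split()
    changed = sum(a != b for a, b in zip(wx, wy)) + abs(len(wx) - len(wy))
    return max(changed / max(len(wx), 1), 1e-6)


def estimate_ar(text, top_k=5):
    """Maximise attribution change / perceptibility over substitutions that
    keep the prediction unchanged; the maximum ratio estimates AR."""
    pred, attr = attribution(text)
    words, best = text.split(), 0.0
    for i in range(len(words)):
        masked = " ".join(words[:i] + [mlm.tokenizer.mask_token] + words[i + 1:])
        for cand in mlm(masked, top_k=top_k):
            if cand["token_str"] == words[i]:
                continue
            adv = " ".join(words[:i] + [cand["token_str"]] + words[i + 1:])
            adv_pred, adv_attr = attribution(adv)
            if adv_pred != pred:  # prediction must stay unchanged
                continue
            best = max(best, attribution_distance(attr, adv_attr) / text_distance(text, adv))
    return best
```

The paper's TEA adversary searches this space far more carefully and scores perceptibility with several text similarity measures rather than word overlap; the sketch only conveys the ratio being estimated.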


Related research

06/07/2022 - Fooling Explanations in Text Classifiers
State-of-the-art text classification models are becoming increasingly re...

09/14/2021 - Improving Gradient-based Adversarial Training for Text Classification by Contrastive Learning and Auto-Encoder
Recent work has proposed several efficient approaches for generating gra...

07/05/2023 - DARE: Towards Robust Text Explanations in Biomedical and Healthcare Applications
Along with the successful deployment of deep neural networks in several ...

12/20/2021 - Unifying Model Explainability and Robustness for Joint Text Classification and Rationale Extraction
Recent works have shown explainability and robustness are two crucial in...

10/24/2022 - Generating Hierarchical Explanations on Text Classification Without Connecting Rules
The opaqueness of deep NLP models has motivated the development of metho...

05/28/2021 - SafeAMC: Adversarial training for robust modulation recognition models
In communication systems, there are many tasks, like modulation recognit...

10/14/2020 - FAR: A General Framework for Attributional Robustness
Attribution maps have gained popularity as tools for explaining neural n...
