DARE: Towards Robust Text Explanations in Biomedical and Healthcare Applications

07/05/2023
by Adam Ivankay, et al.

Along with the successful deployment of deep neural networks in several application domains, the need to unravel the black-box nature of these networks has increased significantly in recent years. Several methods have been introduced to provide insight into the inference process of deep neural networks. However, most of these explainability methods have been shown to be brittle in the face of adversarial perturbations of their inputs in the image and generic text domains. In this work, we show that this phenomenon extends to specific and important high-stakes domains like biomedical datasets. In particular, we observe that the robustness of explanations should be characterized in terms of the accuracy of the explanation in linking a model's inputs and its decisions - faithfulness - and its relevance from the perspective of domain experts - plausibility. This is crucial to prevent explanations that are inaccurate but still look convincing in the context of the domain at hand. To this end, we show how to adapt current attribution robustness estimation methods to a given domain, so as to take into account domain-specific plausibility. This results in our Domain Adaptive Attribution Robustness Estimator (DARE), which allows us to properly characterize the domain-specific robustness of faithful explanations. Next, we provide two methods, adversarial training and FAR training, to mitigate the brittleness characterized by DARE, allowing us to train networks that display robust attributions. Finally, we empirically validate our methods with extensive experiments on three established biomedical benchmarks.
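
The core quantity such an estimator measures - the worst-case change in a model's attribution map under label-preserving, domain-plausible text perturbations - can be illustrated with a minimal sketch. Everything below (the toy bag-of-embeddings classifier, the gradient-times-input attributions, the hand-picked synonym table standing in for a domain-specific plausibility constraint, and cosine similarity as the attribution distance) is an illustrative assumption, not the authors' exact DARE formulation.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy vocabulary with random fixed embeddings; a real estimator would use
    # a trained transformer and curated biomedical synonym resources.
    VOCAB = ["patient", "subject", "tumor", "mass", "shows", "exhibits", "benign"]
    EMB = {w: rng.normal(size=8) for w in VOCAB}
    W = rng.normal(size=8)  # weights of a toy linear classifier

    # Hypothetical plausibility constraint: only domain-approved substitutions
    # are considered, so every candidate stays meaningful to a domain expert.
    SYNONYMS = {"patient": ["subject"], "tumor": ["mass"], "shows": ["exhibits"]}

    def predict(tokens):
        """Sigmoid score of a mean-pooled bag-of-embeddings linear model."""
        x = np.mean([EMB[t] for t in tokens], axis=0)
        return 1.0 / (1.0 + np.exp(-W @ x))

    def attributions(tokens):
        """Gradient-times-input saliency: one relevance scalar per token."""
        p = predict(tokens)
        grad = p * (1.0 - p) * W  # gradient of the score w.r.t. the pooled input
        return np.array([grad @ EMB[t] / len(tokens) for t in tokens])

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    def attribution_robustness(tokens):
        """Worst-case attribution similarity over single-word substitutions
        that keep the predicted label and pass the plausibility constraint."""
        base_attr = attributions(tokens)
        base_label = predict(tokens) > 0.5
        worst = 1.0
        for i, tok in enumerate(tokens):
            for syn in SYNONYMS.get(tok, []):
                cand = tokens[:i] + [syn] + tokens[i + 1:]
                if (predict(cand) > 0.5) != base_label:
                    continue  # prediction must be preserved
                worst = min(worst, cosine(base_attr, attributions(cand)))
        return worst

    sentence = ["patient", "shows", "benign", "tumor"]
    print("worst-case attribution similarity:", attribution_robustness(sentence))

A low score indicates a brittle explanation: a small, expert-plausible rewording leaves the prediction unchanged but redistributes the attribution mass across tokens. The mitigation methods the abstract mentions (adversarial training and FAR training) would, roughly speaking, penalize such worst-case attribution mismatches during training.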

Related research

06/07/2022 - Fooling Explanations in Text Classifiers
State-of-the-art text classification models are becoming increasingly re...

12/18/2022 - Estimating the Adversarial Robustness of Attributions in Text with Transformers
Explanations are crucial parts of deep neural network (DNN) classifiers....

06/11/2020 - Smoothed Geometry for Robust Attribution
Feature attributions are a popular tool for explaining the behavior of D...

10/14/2020 - FAR: A General Framework for Attributional Robustness
Attribution maps have gained popularity as tools for explaining neural n...

05/24/2023 - Scale Matters: Attribution Meets the Wavelet Domain to Explain Model Sensitivity to Image Corruptions
Neural networks have shown remarkable performance in computer vision, bu...

12/18/2020 - Towards Robust Explanations for Deep Neural Networks
Explanation methods shed light on the decision process of black-box clas...

03/29/2021 - Efficient Explanations from Empirical Explainers
Amid a discussion about Green AI in which we see explainability neglecte...
