Formalising the Robustness of Counterfactual Explanations for Neural Networks

08/31/2022
by Junqi Jiang, et al.

The use of counterfactual explanations (CFXs) is an increasingly popular explanation strategy for machine learning models. However, recent studies have shown that these explanations may not be robust to changes in the underlying model (e.g., following retraining), which raises questions about their reliability in real-world applications. Existing attempts to solve this problem are heuristic, and the robustness of the resulting CFXs to model changes is evaluated with only a small number of retrained models, failing to provide exhaustive guarantees. To remedy this, we propose the first notion to formally and deterministically assess the robustness (to model changes) of CFXs for neural networks, which we call Δ-robustness. We introduce an abstraction framework based on interval neural networks to verify the Δ-robustness of CFXs against a possibly infinite set of changes to the model parameters, i.e., weights and biases. We then demonstrate the utility of this approach in two distinct ways. First, we analyse the Δ-robustness of a number of CFX generation methods from the literature and show that they all exhibit significant deficiencies in this regard. Second, we demonstrate how embedding Δ-robustness within existing methods can provide CFXs that are provably robust.
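To make the interval-based verification concrete: relax every weight and bias to an interval of radius δ around its trained value, propagate the resulting bounds through the network with interval arithmetic, and certify the CFX only if its worst-case output still lies on the desired side of the decision boundary. The snippet below is a minimal NumPy sketch of this check for a single-logit, fully-connected ReLU classifier; the function names (`interval_affine`, `delta_robust`), the binary-threshold setup, and the example data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def interval_affine(l, u, W, b, delta):
    """Propagate an input interval [l, u] through an affine layer whose
    weights/biases each lie in [W - delta, W + delta] / [b - delta, b + delta]."""
    Wl, Wu = W - delta, W + delta
    bl, bu = b - delta, b + delta
    # Each product of a weight interval and an input interval attains its
    # min/max at one of the four corner combinations.
    cands = np.stack([
        Wl * l[None, :], Wl * u[None, :],
        Wu * l[None, :], Wu * u[None, :],
    ])
    low = cands.min(axis=0).sum(axis=1) + bl
    up = cands.max(axis=0).sum(axis=1) + bu
    return low, up

def delta_robust(x_cfx, layers, delta, threshold=0.0):
    """Return True iff the CFX's classification provably survives *every*
    model whose parameters deviate from the originals by at most delta.
    Sound but not complete: False means 'could not certify'."""
    l = u = np.asarray(x_cfx, dtype=float)
    for i, (W, b) in enumerate(layers):
        l, u = interval_affine(l, u, W, b, delta)
        if i < len(layers) - 1:  # ReLU on hidden layers only
            l, u = np.maximum(l, 0.0), np.maximum(u, 0.0)
    # If even the worst-case logit clears the decision threshold,
    # no admissible parameter shift can flip the counterfactual.
    return bool(l[0] > threshold)

# Illustrative use on a random two-layer network (hypothetical data).
rng = np.random.default_rng(0)
layers = [(rng.normal(size=(8, 4)), rng.normal(size=8)),
          (rng.normal(size=(1, 8)), rng.normal(size=1))]
x_cfx = np.array([0.2, -0.1, 0.5, 0.3])
print(delta_robust(x_cfx, layers, delta=0.05))  # True => provably Δ-robust
```

Because the interval bounds are sound over-approximations, a True result certifies robustness against the entire (infinite) set of bounded parameter changes at once, which is exactly what evaluation against a handful of retrained models cannot provide.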


