Consistent Counterfactuals for Deep Models

10/06/2021
by Emily Black, et al.

Counterfactual examples are one of the most commonly cited methods for explaining the predictions of machine learning models in key areas such as finance and medical diagnosis. Counterfactuals are often discussed under the assumption that the model on which they will be used is static, but in deployment models may be periodically retrained or fine-tuned. This paper studies the consistency of model predictions on counterfactual examples in deep networks under small changes to initial training conditions, such as weight initialization and leave-one-out variations in data, as often occur during model deployment. We demonstrate experimentally that counterfactual examples for deep models are often inconsistent across such small changes, and that increasing the cost of the counterfactual, a stability-enhancing mitigation suggested by prior work in the context of simpler models, is not a reliable heuristic in deep networks. Rather, our analysis shows that a model's local Lipschitz continuity around the counterfactual is key to its consistency across related models. To this end, we propose Stable Neighbor Search as a way to generate more consistent counterfactual explanations, and illustrate the effectiveness of this approach on several benchmark datasets.
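The abstract's core claim, that a counterfactual's consistency under retraining depends on the model's behavior near that point, can be illustrated with a minimal sketch. Everything below is a toy assumption for illustration: a logistic model stands in for a deep network, counterfactuals are found with a Wachter-style gradient objective (not the paper's Stable Neighbor Search), and "retrained" models are simulated by small weight perturbations.

```python
import numpy as np

# Toy logistic "model" standing in for a deep network (illustrative only).
w = np.array([2.0, -1.0])
b = -0.5

def predict_proba(x, w=w, b=b):
    """Probability of class 1 under the logistic model."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def find_counterfactual(x, target=1.0, lam=0.1, lr=0.1, steps=1000):
    """Gradient-based counterfactual search (Wachter-style objective):
    minimize (f(x') - target)^2 + lam * ||x' - x||^2."""
    xp = x.copy()
    for _ in range(steps):
        p = predict_proba(xp)
        # d/dx sigmoid(w.x + b) = p * (1 - p) * w
        grad = 2 * (p - target) * p * (1 - p) * w + 2 * lam * (xp - x)
        xp = xp - lr * grad
    return xp

x = np.array([-1.0, 1.0])        # original input, predicted class 0
cf = find_counterfactual(x)      # counterfactual, predicted class 1

# Simulate "retrained" models as small random weight perturbations and
# count how often the counterfactual keeps its flipped label.
rng = np.random.default_rng(0)
consistent = sum(
    predict_proba(cf, w + 0.05 * rng.standard_normal(2), b) > 0.5
    for _ in range(100)
)
print(predict_proba(x), predict_proba(cf), consistent)
```

A larger `lam` pulls the counterfactual closer to the original input and hence closer to the decision boundary, where small weight changes are more likely to flip its label; this is the intuition behind the paper's finding that raw counterfactual cost alone is not a reliable stability heuristic.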

