CARE: Coherent Actionable Recourse based on Sound Counterfactual Explanations

08/18/2021
by Peyman Rasouli, et al.

Counterfactual explanation methods interpret the outputs of a machine learning model in the form of "what-if scenarios" without compromising the fidelity-interpretability trade-off. They explain how to obtain a desired prediction from the model by recommending small changes to the input features, also known as recourse. We believe an actionable recourse should be based on sound counterfactual explanations that originate from the distribution of the ground-truth data and are linked to domain knowledge. Moreover, it needs to preserve coherency between the changed and unchanged features while satisfying user- and domain-specified constraints. This paper introduces CARE, a modular explanation framework that addresses model- and user-level desiderata in a consecutive and structured manner. We tackle the existing requirements by proposing novel and efficient solutions formulated in a multi-objective optimization framework. The designed framework makes it possible to include arbitrary requirements and to generate counterfactual explanations and actionable recourse by choice. As a model-agnostic approach, CARE generates multiple, diverse explanations for any black-box model in tabular classification and regression settings. Several experiments on standard data sets and black-box models demonstrate the effectiveness of our modular framework and its superior performance compared to the baselines.
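To make the core idea concrete, the following is a minimal, hypothetical sketch of counterfactual search against a black-box model: sample perturbations of an input, keep those that reach the desired prediction, and return the closest one. The `predict` function, the feature ranges, and the random-search strategy are all illustrative assumptions; CARE itself optimizes several objectives (soundness, coherency, actionability) simultaneously, whereas this sketch scores proximity only.

```python
import random

# Hypothetical black-box classifier (an assumption for illustration):
# approves a loan (class 1) when a score over (income, debt) crosses a
# threshold. Stands in for any model queried only through predict().
def predict(x):
    income, debt = x
    return 1 if income - 0.5 * debt > 50 else 0

def counterfactual(x, predict_fn, target=1, n_samples=5000, seed=0):
    """Random-search sketch: sample perturbations of x, keep candidates
    that obtain the target class, and return the one closest to x in L1
    distance (a proxy for "small changes to the input features")."""
    rng = random.Random(seed)
    best, best_dist = None, float("inf")
    for _ in range(n_samples):
        cand = [xi + rng.uniform(-30, 30) for xi in x]  # assumed ranges
        if predict_fn(cand) == target:
            dist = sum(abs(c - xi) for c, xi in zip(cand, x))
            if dist < best_dist:
                best, best_dist = cand, dist
    return best

x = [40.0, 20.0]                 # rejected applicant: predict(x) == 0
cf = counterfactual(x, predict)  # nearby input with predict(cf) == 1
```

In a full multi-objective treatment, the single distance score above would be replaced by a vector of objectives searched with an evolutionary algorithm, yielding a set of diverse trade-off explanations rather than one nearest point.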
