Explaining Black-Box Algorithms Using Probabilistic Contrastive Counterfactuals

by Sainyam Galhotra, et al.

There has been a recent resurgence of interest in explainable artificial intelligence (XAI) that aims to reduce the opaqueness of AI-based decision-making systems, allowing humans to scrutinize and trust them. Prior work in this context has focused on the attribution of responsibility for an algorithm's decisions to its inputs, where responsibility is typically treated as a purely associational concept. In this paper, we propose a principled causality-based approach for explaining black-box decision-making systems that addresses limitations of existing methods in XAI. At the core of our framework lie probabilistic contrastive counterfactuals, a concept that can be traced back to the philosophical, cognitive, and social foundations of theories on how humans generate and select explanations. We show how such counterfactuals can quantify the direct and indirect influence of a variable on decisions made by an algorithm, and provide actionable recourse for individuals negatively affected by the algorithm's decision. Unlike prior work, our system, LEWIS: (1) can compute provably effective explanations and recourse at local, global, and contextual levels; (2) is designed to work with users with varying levels of background knowledge of the underlying causal model; and (3) makes no assumptions about the internals of an algorithmic system except for the availability of its input-output data. We empirically evaluate LEWIS on three real-world datasets and show that it generates human-understandable explanations that improve upon state-of-the-art approaches in XAI, including the popular LIME and SHAP. Experiments on synthetic data further demonstrate the correctness of LEWIS's explanations and the scalability of its recourse algorithm.
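To make the idea of probabilistic contrastive counterfactuals concrete, the sketch below shows a necessity-style score computed from a black box's input-output behavior alone: among individuals with a given attribute value who receive a favorable decision, the fraction whose decision would flip under a contrastive value of that attribute. This is a minimal illustration, not the paper's algorithm; the function `necessity_score` and the toy predictor are hypothetical, and the sketch assumes intervening on one feature leaves the others unchanged (i.e., it ignores downstream causal effects, which LEWIS's causal framework is designed to account for).

```python
import numpy as np

def necessity_score(predict, X, feature, x, x_cf):
    """Estimate a necessity-style contrastive counterfactual score:
    P(decision flips to 0 | feature == x, decision == 1) when the
    feature is set to the contrastive value x_cf.
    Simplifying assumption: the intervention does not propagate to
    other features (no downstream causal effects)."""
    # Select individuals with the factual attribute value and a
    # favorable (positive) decision from the black box.
    mask = (X[:, feature] == x) & (predict(X) == 1)
    group = X[mask].copy()
    if len(group) == 0:
        return 0.0
    # Apply the contrastive intervention and measure how often the
    # favorable decision is withdrawn.
    group[:, feature] = x_cf
    return float(np.mean(predict(group) == 0))

# Toy black box: favorable decision iff feature 0 equals 1
# or feature 1 exceeds 0.5 (purely illustrative).
predict = lambda X: ((X[:, 0] == 1) | (X[:, 1] > 0.5)).astype(int)
X = np.array([[1, 0.2], [1, 0.9], [0, 0.7], [1, 0.1]])

# Of the three favorably-decided rows with feature 0 == 1,
# two lose the favorable decision when it is set to 0.
print(necessity_score(predict, X, feature=0, x=1, x_cf=0))  # 0.666...
```

A global explanation could aggregate such scores over the whole population, while a contextual one would restrict the selection mask to a subgroup of interest.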

