Adequate and fair explanations

01/21/2020
by Nicholas Asher, et al.

Explaining sophisticated machine-learning based systems is an important issue at the foundations of AI. Recent efforts have produced various methods for providing explanations. These approaches can be broadly divided into two schools: those that provide a local, human-interpretable approximation of a machine learning algorithm, and logical approaches that exactly characterise one aspect of the decision. In this paper we focus on the second school of exact explanations with a rigorous logical foundation. These exact methods face an epistemological problem: while they can furnish complete explanations, such explanations may be too complex for humans to understand or even to write down in human-readable form. Interpretability requires epistemically accessible explanations, explanations humans can grasp, yet what counts as a sufficiently complete epistemically accessible explanation still needs clarification. We address this here in terms of counterfactuals, following [Wachter et al., 2017]. Counterfactual explanations leave implicit many of the assumptions needed to provide a complete explanation; they exploit the properties of a particular data point or sample, and so are local as well as partial explanations. We explore how to move from local partial explanations to what we call complete local explanations and then to global ones. To preserve accessibility, however, we argue for the need for partiality. This partiality makes it possible to hide explicit biases present in the algorithm that may be injurious or unfair. We investigate how easy it is to uncover these biases in providing complete and fair explanations, by exploiting the structure of the set of counterfactuals that provides a complete local explanation.
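To make the counterfactual framing concrete, the sketch below (ours, not the paper's code) implements a Wachter-style counterfactual search for a linear classifier: starting from an instance x, it descends the objective (logit(x_cf) - target)^2 + lam * ||x_cf - x||^2 to find a nearby point whose predicted class flips. The dataset, model, and all names (find_counterfactual, target_logit, lam) are illustrative assumptions.

```python
# A minimal sketch, assuming a linear scorer; not the paper's own method.
# We look for the closest point x_cf to x whose logit reaches a target,
# by gradient descent on (logit(x_cf) - target)^2 + lam * ||x_cf - x||^2.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

def find_counterfactual(w, b, x, target_logit=2.0, lam=0.1, steps=1000):
    """Gradient descent on the counterfactual objective for a linear scorer."""
    lr = 0.5 / (w @ w + lam)  # step size small enough to guarantee convergence
    x_cf = x.copy()
    for _ in range(steps):
        z = w @ x_cf + b                                   # current logit
        grad = 2 * (z - target_logit) * w + 2 * lam * (x_cf - x)
        x_cf -= lr * grad
    return x_cf

X, y = make_classification(n_samples=200, n_features=4, random_state=0)
model = LogisticRegression().fit(X, y)
x = X[y == 0][0]                                           # an instance labelled 0
x_cf = find_counterfactual(model.coef_[0], model.intercept_[0], x)

print("original prediction:      ", model.predict(x.reshape(1, -1))[0])
print("counterfactual prediction:", model.predict(x_cf.reshape(1, -1))[0])
print("feature changes:", np.round(x_cf - x, 3))           # the partial explanation
```

The vector of feature changes x_cf - x is a local, partial explanation in the abstract's sense; collecting the counterfactuals obtained from different starting points or targets builds toward a complete local explanation, and the structure of that set is where hidden biases can surface.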


Related research

The privacy issue of counterfactual explanations: explanation linkage attacks (10/21/2022)
Black-box machine learning models are being used in more and more high-s...

Global Counterfactual Explanations: Investigations, Implementations and Improvements (04/14/2022)
Counterfactual explanations have been widely studied in explainability, ...

`Why not give this work to them?' Explaining AI-Moderated Task-Allocation Outcomes using Negotiation Trees (02/05/2020)
The problem of multi-agent task allocation arises in a variety of scenar...

A Learning Theoretic Perspective on Local Explainability (11/02/2020)
In this paper, we explore connections between interpretable machine lear...

Iterative Partial Fulfillment of Counterfactual Explanations: Benefits and Risks (03/17/2023)
Counterfactual (CF) explanations, also known as contrastive explanations...

ExSum: From Local Explanations to Model Understanding (04/30/2022)
Interpretability methods are developed to understand the working mechani...

Adding Why to What? Analyses of an Everyday Explanation (08/08/2023)
In XAI it is important to consider that, in contrast to explanations for...
