Explanatory Pluralism in Explainable AI

06/26/2021
by Yiheng Yao

The increasingly widespread application of AI models has produced a growing demand for explanations from a variety of stakeholders. However, this demand is ambiguous because there are many types of 'explanation', each with its own evaluative criteria. In the spirit of pluralism, I chart a taxonomy of types of explanation and the XAI methods that can address each. When we look to expose the inner mechanisms of AI models, we develop Diagnostic-explanations. When we seek to render model output understandable, we produce Explication-explanations. When we wish to form stable generalizations about our models, we produce Expectation-explanations. Finally, when we want to justify the use of a model, we produce Role-explanations that situate the model within its social context. The motivation for this pluralistic view stems from treating causes as manipulable relationships: each type of explanation identifies the points in an AI system where we can intervene to effect our desired changes. This paper reduces the ambiguity in the use of the word 'explanation' in the field of XAI, giving practitioners and stakeholders a useful template for avoiding equivocation and for evaluating XAI methods and putative explanations.
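To make the four-way taxonomy concrete, here is a minimal Python sketch of how it could be encoded as a lookup from stakeholder goals to explanation types and intervention points. The ExplanationType class, the TAXONOMY table, and the match_explanation helper are hypothetical illustrations written for this summary; they are not constructs from the paper itself.

```python
from dataclasses import dataclass

# Illustrative encoding of the paper's four explanation types.
# All names and descriptions below are paraphrases of the abstract.
@dataclass(frozen=True)
class ExplanationType:
    name: str
    stakeholder_goal: str    # what the explanation-seeker wants
    intervention_point: str  # where in the AI system one can intervene

TAXONOMY = [
    ExplanationType(
        "Diagnostic",
        "expose the model's inner mechanisms",
        "model internals (weights, activations)",
    ),
    ExplanationType(
        "Explication",
        "render a particular model output understandable",
        "the input-output mapping for a given prediction",
    ),
    ExplanationType(
        "Expectation",
        "form stable generalizations about the model",
        "the model's behaviour across a range of inputs",
    ),
    ExplanationType(
        "Role",
        "justify the model's use in its social context",
        "the deployment context surrounding the model",
    ),
]

def match_explanation(goal_keywords: set[str]) -> list[ExplanationType]:
    """Return taxonomy entries whose stated goal mentions any keyword."""
    return [
        t for t in TAXONOMY
        if any(k in t.stakeholder_goal for k in goal_keywords)
    ]

if __name__ == "__main__":
    for t in match_explanation({"understandable", "justify"}):
        print(f"{t.name}-explanation -> intervene on {t.intervention_point}")
```

Read as a whole, the sketch reflects the paper's central move: asking which change a stakeholder wants to effect selects the type of explanation, and hence the XAI methods, that are appropriate.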


Related research

04/15/2021 · LEx: A Framework for Operationalising Layers of Machine Learning Explanations
Several social factors impact how people respond to AI explanations used...

03/20/2018 · Explanation Methods in Deep Learning: Users, Values, Concerns and Challenges
Issues regarding explainable AI involve four components: users, laws & r...

05/17/2022 · Is explainable AI a race against model complexity?
Explaining the behaviour of intelligent systems will get increasingly an...

11/04/2018 · Explaining Explanations in AI
Recent work on interpretability in machine learning and AI has focused o...

08/14/2023 · Can we Agree? On the Rashōmon Effect and the Reliability of Post-Hoc Explainable AI
The Rashōmon effect poses challenges for deriving reliable knowledge fro...

09/11/2023 · A Co-design Study for Multi-Stakeholder Job Recommender System Explanations
Recent legislation proposals have significantly increased the demand for...

09/09/2021 · Modelling GDPR-Compliant Explanations for Trustworthy AI
Through the General Data Protection Regulation (GDPR), the European Unio...
