Identifying Explanation Needs of End-users: Applying and Extending the XAI Question Bank

07/18/2023
by Lars Sipos, et al.

Explanations in XAI are typically developed by AI experts and focus on algorithmic transparency and the inner workings of AI systems. Research has shown that such explanations do not meet the needs of users who do not have AI expertise. As a result, explanations are often ineffective in making system decisions interpretable and understandable. We aim to strengthen a socio-technical view of AI by following a Human-Centered Explainable Artificial Intelligence (HC-XAI) approach, which investigates the explanation needs of end-users (i.e., subject matter experts and lay users) in specific usage contexts. One of the most influential works in this area is the XAI Question Bank (XAIQB) by Liao et al. The authors propose a set of questions that end-users might ask when using an AI system, which in turn is intended to help developers and designers identify and address explanation needs. Although the XAIQB is widely referenced, there are few reports of its use in practice. In particular, it is unclear to what extent the XAIQB sufficiently captures the explanation needs of end-users and what potential problems exist in the practical application of the XAIQB. To explore these open questions, we used the XAIQB as the basis for analyzing 12 think-aloud software explorations with subject matter experts. We investigated the suitability of the XAIQB as a tool for identifying explanation needs in a specific usage context. Our analysis revealed a number of explanation needs that were missing from the question bank, but that emerged repeatedly as our study participants interacted with an AI system. We also found that some of the XAIQB questions were difficult to distinguish and required interpretation during use. Our contribution is an extension of the XAIQB with 11 new questions. In addition, we have expanded the descriptions of all new and existing questions to facilitate their use.


Related research

- Transcending XAI Algorithm Boundaries through End-User-Inspired Design (08/18/2022)
- Explainable AI, but explainable to whom? (06/10/2021)
- Evaluation of Human-Understandability of Global Model Explanations using Decision Tree (09/18/2023)
- What Would You Ask the Machine Learning Model? Identification of User Needs for Model Explanations Based on Human-Model Conversations (02/07/2020)
- Behaviour Trees for Conversational Explanation Experiences (11/11/2022)
- Non-Asimov Explanations Regulating AI through Transparency (11/25/2021)
- Behave-XAI: Deep Explainable Learning of Behavioral Representational Data (12/30/2022)
