Transcending XAI Algorithm Boundaries through End-User-Inspired Design

08/18/2022
by Weina Jin, et al.

Existing explainable artificial intelligence (XAI) algorithms are confined to problems grounded in technical users' demands for explainability. This research paradigm disproportionately ignores the larger group of non-technical end users of XAI, who lack technical knowledge but need explanations to support their AI-assisted critical decisions. The lack of explainability-focused functional support for end users may hinder the safe and responsible use of AI in high-stakes domains such as healthcare, criminal justice, finance, and autonomous driving. In this work, we explore how designing XAI tailored to end users' critical tasks inspires the framing of new technical problems. To elicit users' interpretations and requirements for XAI algorithms, we first identify eight explanation forms as the communication tool between AI researchers and end users, such as explaining with features, examples, or rules. Using these explanation forms, we then conduct a user study with 32 layperson participants in the context of achieving different explanation goals (such as verifying AI decisions and improving users' predicted outcomes) across four critical tasks. Based on the user study findings, we identify and formulate novel XAI technical problems, and we propose an evaluation metric, verifiability, based on users' explanation goal of verifying AI decisions. Our work shows that grounding technical problems in end users' use of XAI can inspire new research questions. Such end-user-inspired research questions have the potential to promote social good by democratizing AI and ensuring the responsible use of AI in critical domains.
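
The abstract introduces verifiability as an evaluation metric grounded in end users' goal of verifying AI decisions, but it does not define the metric here. As a minimal illustrative sketch (an assumption, not the paper's formal definition), one could score an explanation by the fraction of user-study trials in which it lets a participant verify the AI decision correctly, i.e., accept correct decisions and reject incorrect ones. The Trial record and verifiability function below are hypothetical names introduced only for this sketch.

# Hypothetical sketch, not the paper's method: one way a verifiability-style
# score could be computed from user-study trials, assuming each trial records
# whether the AI decision was correct and whether the participant accepted it
# after inspecting the explanation.

from dataclasses import dataclass
from typing import Iterable


@dataclass
class Trial:
    ai_correct: bool     # ground truth: was the AI decision correct?
    user_accepted: bool  # participant's accept/reject judgment after seeing the explanation


def verifiability(trials: Iterable[Trial]) -> float:
    """Fraction of trials in which the participant verified the AI decision
    correctly: accepting correct decisions and rejecting incorrect ones.
    Illustrative assumption only; the paper's formal metric may differ."""
    trials = list(trials)
    if not trials:
        raise ValueError("verifiability is undefined for an empty trial set")
    correct_verifications = sum(t.user_accepted == t.ai_correct for t in trials)
    return correct_verifications / len(trials)


if __name__ == "__main__":
    # Toy usage: 3 of 4 decisions are verified correctly -> 0.75
    demo = [
        Trial(ai_correct=True, user_accepted=True),
        Trial(ai_correct=True, user_accepted=False),
        Trial(ai_correct=False, user_accepted=False),
        Trial(ai_correct=False, user_accepted=False),
    ]
    print(verifiability(demo))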

Related research

Invisible Users: Uncovering End-Users' Requirements for Explainable AI via Explanation Forms and Goals (02/10/2023)
Non-technical end-users are silent and invisible users of the state-of-t...

Identifying Explanation Needs of End-users: Applying and Extending the XAI Question Bank (07/18/2023)
Explanations in XAI are typically developed by AI experts and focus on a...

EUCA: A Practical Prototyping Framework towards End-User-Centered Explainable Artificial Intelligence (02/04/2021)
The ability to explain decisions to its end-users is a necessity to depl...

Explaining decisions made with AI: A workbook (Use case 1: AI-assisted recruitment tool) (03/20/2021)
Over the last two years, The Alan Turing Institute and the Information C...

Explainability Requires Interactivity (09/16/2021)
When explaining the decisions of deep neural networks, simple stories ar...

The HCI Aspects of Public Deployment of Research Chatbots: A User Study, Design Recommendations, and Open Challenges (06/07/2023)
Publicly deploying research chatbots is a nuanced topic involving necess...

Towards the Use of Saliency Maps for Explaining Low-Quality Electrocardiograms to End Users (07/06/2022)
When using medical images for diagnosis, either by clinicians or artific...
