Selective Explanations: Leveraging Human Input to Align Explainable AI

01/23/2023
by Vivian Lai, et al.

While a vast collection of explainable AI (XAI) algorithms has been developed in recent years, these algorithms are often criticized for significant gaps with how humans produce and consume explanations. As a result, current XAI techniques are often found hard to use and lacking in effectiveness. In this work, we attempt to close these gaps by making AI explanations selective, a fundamental property of human explanations, by selectively presenting a subset from a large set of model reasons based on what aligns with the recipient's preferences. We propose a general framework for generating selective explanations by leveraging human input on a small sample. This framework opens up a rich design space that accounts for different selectivity goals, types of input, and more. As a showcase, we use a decision-support task to explore selective explanations based on what the decision-maker would consider relevant to the decision task. We conducted two experimental studies to examine three out of a broader possible set of paradigms based on our proposed framework: in Study 1, we ask participants to provide their own input to generate selective explanations, with either open-ended or critique-based input. In Study 2, we show participants selective explanations based on input from a panel of similar users (annotators). Our experiments demonstrate the promise of selective explanations in reducing over-reliance on AI and improving decision outcomes and subjective perceptions of the AI, but also paint a nuanced picture that attributes some of these positive effects to the opportunity to provide one's own input to augment AI explanations. Overall, our work proposes a novel XAI framework inspired by human communication behaviors and demonstrates its potential to encourage future work to better align AI explanations with human production and consumption of explanations.
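To make the core idea concrete, the following is a minimal, hypothetical sketch (not the paper's implementation) of selectivity over feature-attribution explanations: given attribution scores from any XAI method and a set of features the recipient has indicated as relevant, only the aligned, highest-magnitude reasons are surfaced. The function and data below are illustrative assumptions.

```python
# Hypothetical sketch of a selective explanation: filter a large set of model
# reasons (feature attributions) down to those aligned with the recipient's
# stated preferences, then surface only the top-k by magnitude.

from typing import Dict, List, Set, Tuple


def selective_explanation(
    attributions: Dict[str, float],   # feature -> attribution score from any XAI method
    relevant_features: Set[str],      # features the recipient marked as relevant
    k: int = 3,                       # how many reasons to present
) -> List[Tuple[str, float]]:
    """Return the top-k model reasons restricted to what the recipient cares about."""
    aligned = {f: s for f, s in attributions.items() if f in relevant_features}
    # Fall back to the full set if the recipient's preferences filter out everything.
    pool = aligned if aligned else attributions
    return sorted(pool.items(), key=lambda fs: abs(fs[1]), reverse=True)[:k]


# Example: a decision-support task where the decision-maker flags relevant features.
attrs = {"education": 0.42, "age": -0.10, "hours_per_week": 0.31,
         "marital_status": 0.25, "occupation": 0.05}
print(selective_explanation(attrs, relevant_features={"education", "hours_per_week", "age"}))
# -> [('education', 0.42), ('hours_per_week', 0.31), ('age', -0.1)]
```

In the framework's terms, the `relevant_features` input could come from the decision-maker themselves (as in Study 1) or be aggregated from a panel of similar users (as in Study 2).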


