Machine Explanations and Human Understanding

02/08/2022
by Chacha Chen, et al.

Explanations are hypothesized to improve human understanding of machine learning models and to achieve a variety of desirable outcomes, ranging from model debugging to enhancing human decision making. However, empirical studies have found mixed and even negative results. An open question, therefore, is under what conditions, and in what way, explanations can improve human understanding. Using adapted causal diagrams, we provide a formal characterization of the interplay between machine explanations and human understanding, and show how human intuitions play a central role in enabling human understanding. Specifically, we identify three core concepts of interest that cover all existing quantitative measures of understanding in the context of human-AI decision making: task decision boundary, model decision boundary, and model error. Our key result is that without assumptions about task-specific intuitions, explanations may potentially improve human understanding of the model decision boundary, but they cannot improve human understanding of the task decision boundary or model error. To achieve complementary human-AI performance, we articulate ways in which explanations must work with human intuitions. For instance, human intuitions about the relevance of features (e.g., education is more important than age in predicting a person's income) can be critical in detecting model error. We validate the importance of human intuitions in shaping the outcome of machine explanations with empirical human-subject studies. Overall, our work provides a general framework along with actionable implications for future algorithmic development and empirical experiments on machine explanations.


Related research

01/18/2023
Understanding the Role of Human Intuition on Reliance in Human-AI Decision-Making with Explanations
AI explanations are often mentioned as a way to improve human-AI decisio...

02/16/2023
Assisting Human Decisions in Document Matching
Many practical applications, ranging from paper-reviewer assignment in p...

09/09/2020
Beneficial and Harmful Explanatory Machine Learning
Given the recent successes of Deep Learning in AI there has been increas...

01/23/2023
Selective Explanations: Leveraging Human Input to Align Explainable AI
While a vast collection of explainable AI (XAI) algorithms have been dev...

01/06/2021
Predicting Illness for a Sustainable Dairy Agriculture: Predicting and Explaining the Onset of Mastitis in Dairy Cows
Mastitis is a billion dollar health problem for the modern dairy industr...

02/23/2022
Margin-distancing for safe model explanation
The growing use of machine learning models in consequential settings has...

08/08/2023
Understanding the Effect of Counterfactual Explanations on Trust and Reliance on AI for Human-AI Collaborative Clinical Decision Making
Artificial intelligence (AI) is increasingly being considered to assist ...
