"If it didn't happen, why would I change my decision?": How Judges Respond to Counterfactual Explanations for the Public Safety Assessment

05/11/2022
by Yaniv Yacoby, et al.

Many researchers and policymakers have expressed excitement about how algorithmic explanations may enable fairer and more responsible decision-making. However, recent experimental studies have found that explanations do not always improve human use of algorithmic advice. In this study, we shed light on how people interpret and respond to counterfactual explanations (CFEs) – explanations that show how a model's output changes with marginal changes to an input – in the context of pretrial risk assessment instruments (PRAIs). We ran think-aloud trials with eight sitting US state court judges, providing them with recommendations from the PRAI as well as CFEs. At first, judges misinterpreted the counterfactuals as real – rather than hypothetical – changes to defendants. Once judges understood what the counterfactuals meant, they ignored them, stating that they must make decisions based only on the actual defendant in question. They also expressed a mix of reasons for ignoring or following the advice of the PRAI. These results add to the literature on how people use algorithms and explanations in unexpected ways, and on the challenges of creating effective human-algorithm collaboration.
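To make the idea concrete, a counterfactual explanation of the kind studied here can be sketched as a search for the smallest change to one input that flips a model's recommendation. The sketch below uses a toy linear risk model with hypothetical feature names, weights, and threshold; it is purely illustrative and is not the actual Public Safety Assessment.

```python
# Illustrative sketch only: a toy risk model with hypothetical weights,
# NOT the actual Public Safety Assessment used in the study.

def risk_score(age, prior_failures, pending_charges):
    """Toy linear risk score; higher means higher assessed risk."""
    return 0.5 * prior_failures + 0.8 * pending_charges - 0.02 * age

def counterfactual_on_priors(age, prior_failures, pending_charges, threshold=1.0):
    """Find the largest count of prior failures-to-appear at which the score
    would fall below the (hypothetical) release threshold, i.e. the marginal
    change to one input that would change the recommendation."""
    for fewer in range(prior_failures + 1):
        candidate = prior_failures - fewer
        if risk_score(age, candidate, pending_charges) < threshold:
            return candidate
    return None  # no change to this one feature flips the recommendation

# A CFE would be phrased as: "If the defendant had had this many prior
# failures instead, the instrument's recommendation would have changed."
cf = counterfactual_on_priors(age=30, prior_failures=4, pending_charges=1)
print(cf)  # → 1
```

A CFE presents the hypothetical change only; the defendant's record itself is unchanged, which is precisely the distinction the judges in this study initially misread.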


Related research

- 08/19/2020 · DECE: Decision Explorer with Counterfactual Explanations for Machine Learning Models
  With machine learning models being increasingly applied to various decis...
- 11/27/2022 · "Explain it in the Same Way!" – Model-Agnostic Group Fairness of Counterfactual Explanations
  Counterfactual explanations are a popular type of explanation for making...
- 06/04/2021 · Counterfactual Explanations Can Be Manipulated
  Counterfactual explanations are emerging as an attractive option for pro...
- 01/05/2021 · GeCo: Quality Counterfactual Explanations in Real Time
  Machine learning is increasingly applied in high-stakes decision making ...
- 05/25/2022 · Logical Satisfiability of Counterfactuals for Faithful Explanations in NLI
  Evaluating an explanation's faithfulness is desired for many reasons suc...
- 10/01/2021 · Multi-Agent Algorithmic Recourse
  The recent adoption of machine learning as a tool in real world decision...
- 09/04/2023 · Why Change My Design: Explaining Poorly Constructed Visualization Designs with Explorable Explanations
  Although visualization tools are widely available and accessible, not ev...
