
"If it didn't happen, why would I change my decision?": How Judges Respond to Counterfactual Explanations for the Public Safety Assessment

by Yaniv Yacoby, et al.
University of Michigan · The University of Arizona · Harvard University

Many researchers and policymakers have expressed excitement about how algorithmic explanations may enable fairer and more responsible decision-making. However, recent experimental studies have found that explanations do not always improve human use of algorithmic advice. In this study, we shed light on how people interpret and respond to counterfactual explanations (CFEs) – explanations that show how a model's output changes with marginal changes to its input – in the context of pretrial risk assessment instruments (PRAIs). We ran think-aloud trials with eight sitting US state court judges, providing them with recommendations from the PRAI as well as CFEs. At first, judges misinterpreted the counterfactuals as real – rather than hypothetical – changes to defendants. Once judges understood what the counterfactuals meant, they ignored them, stating that they must make decisions based only on the actual defendant in question. They also expressed a mix of reasons for ignoring or following the advice of the PRAI. These results add to the literature on how people use algorithms and explanations in unexpected ways and on the challenges of creating effective human-algorithm collaboration.



