Ground(less) Truth: A Causal Framework for Proxy Labels in Human-Algorithm Decision-Making

02/13/2023
by   Luke Guerdan, et al.

A growing literature on human-AI decision-making investigates strategies for combining human judgment with statistical models to improve decision-making. Research in this area often evaluates proposed improvements to models, interfaces, or workflows by demonstrating improved predictive performance on "ground truth" labels. However, this practice overlooks a key difference between human judgments and model predictions. Whereas humans reason about broader phenomena of interest in a decision – including latent constructs that are not directly observable, such as disease status, the "toxicity" of online comments, or future "job performance" – predictive models target proxy labels that are readily available in existing datasets. Predictive models' reliance on simplistic proxies makes them vulnerable to various sources of statistical bias. In this paper, we identify five sources of target variable bias that can impact the validity of proxy labels in human-AI decision-making tasks. We develop a causal framework that disentangles the relationships among these biases and clarifies which are of concern in specific human-AI decision-making tasks. We demonstrate how our framework can be used to articulate implicit assumptions made in prior modeling work, and we recommend evaluation strategies for verifying whether these assumptions hold in practice. We then leverage our framework to re-examine the designs of prior human subjects experiments that investigate human-AI decision-making, finding that only a small fraction of studies examine factors related to target variable bias. We conclude by discussing opportunities to better address target variable bias in future research.

