Human-Aligned Calibration for AI-Assisted Decision Making

05/31/2023
by Nina L. Corvelo Benz, et al.

Whenever a binary classifier is used to provide decision support, it typically provides both a label prediction and a confidence value. The decision maker is then supposed to use the confidence value to calibrate how much to trust the prediction. In this context, it has often been argued that the confidence value should correspond to a well-calibrated estimate of the probability that the predicted label matches the ground-truth label. However, multiple lines of empirical evidence suggest that decision makers have difficulty developing a good sense of when to trust a prediction using these confidence values. In this paper, our goal is first to understand why and then to investigate how to construct more useful confidence values. We first argue that, for a broad class of utility functions, there exist data distributions for which a rational decision maker is, in general, unlikely to discover the optimal decision policy using the above confidence values: an optimal decision maker would sometimes need to place more (less) trust on predictions with lower (higher) confidence values. We then show that, if the confidence values satisfy a natural alignment property with respect to the decision maker's confidence in her own predictions, there always exists an optimal decision policy under which the level of trust the decision maker needs to place on predictions is monotone in the confidence values, facilitating its discoverability. Further, we show that multicalibration with respect to the decision maker's confidence in her own predictions is a sufficient condition for alignment. Experiments on four different AI-assisted decision-making tasks, where a classifier provides decision support to real human experts, validate our theoretical results and suggest that alignment may lead to better decisions.
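To make these calibration notions concrete, the sketch below illustrates, in Python, the difference between standard calibration of a classifier's confidence values and the multicalibration condition the abstract refers to: calibration within groups defined by the decision maker's confidence in her own prediction. This is a minimal illustration on synthetic data, not the paper's implementation; the function names, the equal-width binning scheme, and the uniform toy distributions are assumptions made for the example.

```python
import numpy as np

def calibration_error(conf, correct, n_bins=10):
    """Expected calibration error: bin predictions by confidence and
    average |empirical accuracy - mean confidence| across bins."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    err, n = 0.0, len(conf)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf >= lo) & (conf < hi)
        if mask.any():
            err += (mask.sum() / n) * abs(correct[mask].mean() - conf[mask].mean())
    return err

def multicalibration_error(conf, human_conf, correct, n_bins=10):
    """Worst-case calibration error of the model's confidence within groups
    defined by the human decision maker's confidence in her own prediction
    (an illustrative discretization of the paper's condition)."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    worst = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        group = (human_conf >= lo) & (human_conf < hi)
        if group.any():
            worst = max(worst, calibration_error(conf[group], correct[group], n_bins))
    return worst

# Toy data: a classifier whose confidence is calibrated by construction.
rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 1.0, size=20000)        # model confidence values
human_conf = rng.uniform(0.0, 1.0, size=20000)  # human self-confidence
correct = rng.uniform(size=20000) < conf        # correct with probability conf

print(f"calibration error:      {calibration_error(conf, correct):.3f}")
print(f"multicalibration error: {multicalibration_error(conf, human_conf, correct):.3f}")
```

On this toy data both errors are small because the model's confidence is calibrated independently of the human's. On real data, a small multicalibration error would indicate that the confidence values remain trustworthy even after conditioning on how confident the human herself is, which is the alignment ingredient the paper argues makes a monotone trust policy discoverable.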

Related research

01/14/2023 · Who Should I Trust: AI or Myself? Leveraging Human and AI Correctness Likelihood to Promote Appropriate Trust in AI-Assisted Decision-Making
In AI-assisted decision-making, it is critical for human decision-makers...

01/07/2020 · Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making
Today, AI is being increasingly used to help human experts make decision...

01/28/2022 · Provably Improving Expert Predictions with Conformal Prediction
Automated decision support systems promise to help human experts solve t...

06/06/2023 · Designing Decision Support Systems Using Counterfactual Prediction Sets
Decision support systems for classification tasks are predominantly desi...

11/15/2020 · Right Decisions from Wrong Predictions: A Mechanism Design Alternative to Individual Calibration
Decision makers often need to rely on imperfect probabilistic forecasts....

03/22/2022 · A Factor-Based Framework for Decision-Making Competency Self-Assessment
We summarize our efforts to date in developing a framework for generatin...

03/16/2019 · Deciding with Judgment
A decision maker starts from a judgmental decision and moves to the clos...
