Evaluating Superhuman Models with Consistency Checks

06/16/2023
by Lukas Fluri, et al.

If machine learning models were to achieve superhuman abilities at various reasoning or decision-making tasks, how would we go about evaluating such models, given that humans would necessarily be poor proxies for ground truth? In this paper, we propose a framework for evaluating superhuman models via consistency checks. Our premise is that while the correctness of superhuman decisions may be impossible to evaluate, we can still surface mistakes if the model's decisions fail to satisfy certain logical, human-interpretable rules. We instantiate our framework on three tasks where the correctness of decisions is hard to evaluate, due either to superhuman model abilities or to otherwise missing ground truth: evaluating chess positions, forecasting future events, and making legal judgments. We show that regardless of a model's (possibly superhuman) performance on these tasks, we can discover logical inconsistencies in its decision-making. For example: a chess engine assigning opposing valuations to semantically identical boards; GPT-4 forecasting that sports records will evolve non-monotonically over time; or an AI judge assigning bail to a defendant only after we add a felony to their criminal record.
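To make the idea concrete, below is a minimal sketch of one such consistency check in the chess setting. It assumes the python-chess library and a UCI engine binary at "./stockfish" (both illustrative choices rather than the paper's exact setup) and tests a simple invariance: a position and its color-mirrored twin should receive negated evaluations, so a large gap between the two flags an inconsistency without requiring any ground-truth label.

```python
# Sketch of a consistency check on a chess engine, assuming python-chess and a
# UCI engine binary at "./stockfish" (illustrative assumptions, not the paper's
# exact experimental setup).
import chess
import chess.engine

def evaluate_cp(engine, board, depth=12):
    """Engine evaluation of a position, in centipawns from White's perspective."""
    info = engine.analyse(board, chess.engine.Limit(depth=depth))
    return info["score"].white().score(mate_score=100_000)

def mirror_gap(engine, fen):
    """|eval(board) + eval(color-mirrored board)|; ideally close to zero."""
    board = chess.Board(fen)
    mirrored = board.mirror()  # swap colors and flip the board vertically
    return abs(evaluate_cp(engine, board) + evaluate_cp(engine, mirrored))

if __name__ == "__main__":
    engine = chess.engine.SimpleEngine.popen_uci("./stockfish")
    try:
        # Position after 1.e4 e5 2.Nf3 Nc6; any legal position works here.
        fen = "r1bqkbnr/pppp1ppp/2n5/4p3/4P3/5N2/PPPP1PPP/RNBQKB1R w KQkq - 2 3"
        print(f"consistency gap: {mirror_gap(engine, fen)} centipawns")
    finally:
        engine.quit()
```

The same template extends to the paper's other settings, for instance checking that forecasted sports records are monotone over time, or that an AI judge's bail decisions respond monotonically to additions to a defendant's criminal record.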
