Alignment Problems With Current Forecasting Platforms

06/21/2021
by Nuño Sempere, et al.

We present alignment problems in current forecasting platforms, such as Good Judgment Open, CSET-Foretell, or Metaculus. We classify these problems as either reward specification problems or principal-agent problems, and we propose solutions. For instance, the scoring rule used by Good Judgment Open is not proper, and Metaculus tournaments disincentivize sharing information and incentivize distorting one's true probabilities to maximize the chances of placing in the top few positions, which earn a monetary reward. We also point out some partial similarities between the problem of aligning forecasters and the problem of aligning artificial intelligence systems.
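Both problems the abstract names can be illustrated with the Brier score. The sketch below (not from the paper; the numbers 0.7, 0.6, and 0.9 are hypothetical) first checks that the Brier score is proper, i.e. reporting one's true belief minimizes the expected penalty, and then shows how a winner-take-all prize undoes this: a forecaster who extremizes a shared belief of 0.6 up to 0.9 beats an honest rival whenever the event occurs.

```python
import numpy as np

def expected_brier_penalty(report, belief):
    """Expected Brier penalty (lower is better) for reporting
    `report` when the event's true probability is `belief`."""
    return belief * (report - 1) ** 2 + (1 - belief) * report ** 2

# 1. Propriety: honest reporting minimizes the expected penalty.
belief = 0.7
grid = np.linspace(0, 1, 1001)
best_report = grid[np.argmin(expected_brier_penalty(grid, belief))]
print(best_report)  # 0.7 — the optimal report equals the true belief

def brier(report, outcome):
    # Realized Brier penalty for a single binary question.
    return (report - outcome) ** 2

# 2. Top-position prizes break propriety: in a single-question,
#    winner-take-all contest, an extremizer reporting 0.9 beats an
#    honest forecaster reporting the shared belief of 0.6 exactly
#    when the event occurs, i.e. with probability 0.6 > 0.5.
q, honest, extreme = 0.6, 0.6, 0.9
p_extremizer_wins = (q * (brier(extreme, 1) < brier(honest, 1))
                     + (1 - q) * (brier(extreme, 0) < brier(honest, 0)))
print(p_extremizer_wins)  # 0.6
```

The second calculation is the core of the tournament problem: maximizing the probability of placing first is not the same objective as minimizing expected penalty, so a proper scoring rule stops eliciting honest probabilities once only the top few scores are rewarded.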


