On Comparison Of Experts

10/02/2017
by Itay Kavaler, et al.

A policy maker faces a sequence of unknown outcomes. At each stage, two (self-proclaimed) experts provide probabilistic forecasts of the next stage's outcome. A comparison test is a protocol by which the policy maker (eventually) decides which of the two experts is better informed. The protocol takes as input the sequence of forecast pairs together with the actual realizations, and (weakly) ranks the two experts. We propose two natural properties that such a comparison test must satisfy and show that they essentially determine the test uniquely. The resulting test is a function of the derivative of the induced pair of measures at the realization.
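For a finite prefix of binary outcomes, the derivative of one expert's induced measure with respect to the other's reduces to the likelihood ratio of the realized sequence under the two forecast streams. The following sketch (our own illustration, not code from the paper) ranks two experts by the sign of that log-likelihood ratio; the function name and the binary-outcome setting are assumptions made for concreteness.

```python
import math

def compare_experts(forecasts_a, forecasts_b, outcomes):
    """Illustrative likelihood-ratio comparison (hypothetical sketch).

    forecasts_a, forecasts_b: per-stage probabilities each expert
        assigned to the outcome being 1.
    outcomes: the realized binary outcomes (0 or 1).
    Returns +1 if expert A is ranked higher on this realization,
    -1 if expert B is, and 0 if they are tied.
    """
    log_ratio = 0.0
    for p, q, x in zip(forecasts_a, forecasts_b, outcomes):
        # Probability each expert assigned to the realized outcome.
        prob_a = p if x == 1 else 1.0 - p
        prob_b = q if x == 1 else 1.0 - q
        # Accumulate the log of the ratio of induced measures.
        log_ratio += math.log(prob_a) - math.log(prob_b)
    if log_ratio > 0:
        return 1
    if log_ratio < 0:
        return -1
    return 0

# Expert A's forecasts track the realization more closely than B's,
# so A is ranked higher on this sequence.
print(compare_experts([0.9, 0.8, 0.7], [0.5, 0.5, 0.5], [1, 1, 1]))  # → 1
```

Note that this only illustrates the ratio computation at a finite horizon; the paper's test is defined over the infinite sequence of forecasts and realizations.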


