Measuring the Completeness of Theories

10/15/2019
by Drew Fudenberg et al.

We use machine learning to provide a tractable measure of the amount of predictable variation in the data that a theory captures, which we call its "completeness." We apply this measure to three problems: assigning certainty equivalents to lotteries, initial play in games, and human generation of random sequences. We find considerable variation in the completeness of existing models, which sheds light on whether to focus on developing better models with the same features or instead to look for new features that will improve predictions. We also illustrate how and why completeness varies with the experiments considered, which highlights the role played by the choice of which experiments to run.
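The abstract does not spell out the formula, but one plausible way to operationalize such a completeness score (an assumption for illustration, not necessarily the paper's exact definition) is as the fraction of the achievable error reduction that a theory captures, using a machine-learning benchmark to approximate the best attainable prediction:

```python
def completeness(baseline_error: float, theory_error: float, ml_error: float) -> float:
    """Fraction of predictable variation captured by a theory (hedged sketch).

    Assumed definition (not taken verbatim from the abstract):
        (baseline_error - theory_error) / (baseline_error - ml_error)

    0.0 -> the theory predicts no better than the naive baseline
    1.0 -> the theory matches the machine-learning benchmark
    """
    achievable = baseline_error - ml_error
    if achievable <= 0:
        raise ValueError("ML benchmark must predict better than the naive baseline")
    return (baseline_error - theory_error) / achievable

# Hypothetical error numbers, for illustration only:
score = completeness(baseline_error=1.00, theory_error=0.40, ml_error=0.20)
print(score)  # roughly 0.75: the theory captures most of the predictable variation
```

On this reading, a low score suggests room for new predictive features, while a score near 1 suggests refining the existing model form, which matches the trade-off the abstract describes.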


research
06/21/2017

The Theory is Predictive, but is it Complete? An Application to Human Perception of Randomness

When we test a theory using data, it is common to focus on correctness: ...
research
07/31/2022

Left computably enumerable reals and initial segment complexity

We are interested in the computability between left c.e. reals α and the...
research
10/02/2022

Beyond the Existential Theory of the Reals

We show that completeness at higher levels of the theory of the reals is...
research
07/18/2019

Imperfect Gaps in Gap-ETH and PCPs

We study the role of perfect completeness in probabilistically checkable...
research
08/02/2023

On Bounded Completeness and the L_1-Denseness of Likelihood Ratios

The classical concept of bounded completeness and its relation to suffic...
research
10/02/2022

Relational Models for the Lambek Calculus with Intersection and Constants

We consider relational semantics (R-models) for the Lambek calculus exte...
research
09/14/2019

Propagation complete encodings of smooth DNNF theories

We investigate conjunctive normal form (CNF) encodings of a function rep...
