
Measuring the Completeness of Theories

10/15/2019
by Drew Fudenberg, et al.

We use machine learning to provide a tractable measure of the amount of predictable variation in the data that a theory captures, which we call its "completeness." We apply this measure to three problems: assigning certainty equivalents to lotteries, initial play in games, and human generation of random sequences. We discover considerable variation in the completeness of existing models, which sheds light on whether to focus on developing better models with the same features or instead to look for new features that will improve predictions. We also illustrate how and why completeness varies with the experiments considered, which highlights the role played by the choice of which experiments to run.
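Concretely, the completeness measure can be read as the fraction of the achievable reduction in prediction error that a theory delivers, with a naive baseline as the floor and a flexible machine-learned predictor as an estimate of the best achievable performance. The following is a minimal sketch of that ratio, not the paper's implementation: the mean-squared-error loss, the function name completeness, and the synthetic predictions are all illustrative assumptions.

```python
import numpy as np

def completeness(y, theory_pred, naive_pred, best_pred):
    """Fraction of the achievable error reduction that a theory captures.

    Computes (loss(naive) - loss(theory)) / (loss(naive) - loss(best)),
    where naive_pred is a simple baseline, best_pred approximates the
    best achievable predictor (e.g., a flexible ML model), and the loss
    here is mean squared error. All names are illustrative assumptions.
    """
    mse = lambda p: np.mean((y - p) ** 2)
    return (mse(naive_pred) - mse(theory_pred)) / (mse(naive_pred) - mse(best_pred))

# Toy usage with synthetic data (hypothetical, for illustration only):
rng = np.random.default_rng(0)
y = rng.normal(size=1_000)                      # observed outcomes
naive = np.zeros_like(y)                        # baseline: predict the unconditional mean
best = y + rng.normal(scale=0.1, size=y.shape)  # near-ideal ML benchmark
theory = 0.5 * y                                # a model capturing part of the variation
print(f"completeness = {completeness(y, theory, naive, best):.2f}")
```

On this reading, a completeness near 1 means the theory captures almost all of the variation that any predictor could, while a value near 0 means it does little better than the naive baseline.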

Related research

06/21/2017 · The Theory is Predictive, but is it Complete? An Application to Human Perception of Randomness
When we test a theory using data, it is common to focus on correctness: ...

07/31/2022 · Left computably enumerable reals and initial segment complexity
We are interested in the computability between left c.e. reals α and the...

10/02/2022 · Beyond the Existential Theory of the Reals
We show that completeness at higher levels of the theory of the reals is...

07/18/2019 · Imperfect Gaps in Gap-ETH and PCPs
We study the role of perfect completeness in probabilistically checkable...

09/10/2017 · A Straightforward Method to Judge the Completeness of a Polymorphic Gate Set
Polymorphic circuits are a special kind of circuits which possess some d...

02/17/2022 · A Completeness Result for Inequational Reasoning in a Full Higher-Order Setting
This paper obtains a completeness result for inequational reasoning with...