The unstable formula theorem revisited

12/09/2022
by Maryanthe Malliaris, et al.

We first prove that Littlestone classes, those which model theorists call stable, characterize learnability in a new statistical model: a learner in this new setting outputs the same hypothesis, up to measure zero, with probability one, after a uniformly bounded number of revisions. This fills a certain gap in the literature, and sets the stage for an approximation theorem characterizing Littlestone classes in terms of a range of learning models, by analogy to definability of types in model theory. We then give a complete analogue of Shelah's celebrated (and perhaps a priori untranslatable) Unstable Formula Theorem in the learning setting, with algorithmic arguments taking the place of the infinite.
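The paper itself contains no code; as a rough illustration of the objects in play, the Python sketch below computes the Littlestone dimension of a finite hypothesis class (Littlestone classes are exactly those of finite Littlestone dimension) and runs Littlestone's Standard Optimal Algorithm (SOA), whose mistake count, and hence number of hypothesis revisions, is bounded by that dimension. The representation of hypotheses as tuples of labels and the helper names littlestone_dim and soa_predict are illustrative choices, not anything taken from the paper.

```python
def littlestone_dim(hypotheses):
    """Largest depth of a mistake tree shattered by a finite class of
    {0,1}-valued hypotheses, each given as a tuple of labels over a fixed
    finite domain.  Recursion: Ldim(H) = max over domain points i of
    1 + min(Ldim(H_{i,0}), Ldim(H_{i,1})), taken over points i where both
    label restrictions are nonempty; a singleton class has Ldim 0."""
    if not hypotheses:
        return -1  # convention for the empty class
    if len(hypotheses) == 1:
        return 0
    num_points = len(next(iter(hypotheses)))
    best = 0
    for i in range(num_points):
        h0 = frozenset(h for h in hypotheses if h[i] == 0)
        h1 = frozenset(h for h in hypotheses if h[i] == 1)
        if h0 and h1:
            best = max(best, 1 + min(littlestone_dim(h0), littlestone_dim(h1)))
    return best


def soa_predict(version_space, i):
    """Standard Optimal Algorithm: predict the label whose restriction of the
    current version space has the larger Littlestone dimension."""
    h0 = frozenset(h for h in version_space if h[i] == 0)
    h1 = frozenset(h for h in version_space if h[i] == 1)
    return 1 if littlestone_dim(h1) > littlestone_dim(h0) else 0


# The full cube {0,1}^2 has Littlestone dimension 2; the class of singletons
# over three points has Littlestone dimension 1.
cube = frozenset([(0, 0), (0, 1), (1, 0), (1, 1)])
singletons = frozenset([(1, 0, 0), (0, 1, 0), (0, 0, 1)])
assert littlestone_dim(cube) == 2
assert littlestone_dim(singletons) == 1

# Running SOA online against any target in `cube`, the number of mistakes
# (and hence of hypothesis revisions) is at most littlestone_dim(cube) = 2.
target = (1, 0)
version_space = cube
mistakes = 0
for i, true_label in enumerate(target):
    if soa_predict(version_space, i) != true_label:
        mistakes += 1
    version_space = frozenset(h for h in version_space if h[i] == true_label)
assert mistakes <= littlestone_dim(cube)
```

The recursion terminates because each branch keeps only the hypotheses consistent with one label, so both restrictions are strictly smaller than the class being split; after any SOA mistake the Littlestone dimension of the version space drops by at least one, which is what gives the uniform bound on revisions.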

Related research

10/12/2020  Some classical model theoretic aspects of bounded shrub-depth classes
We consider classes of arbitrary (finite or infinite) graphs of bounded ...

08/12/2021  Agnostic Online Learning and Excellent Sets
We use algorithmic methods from online learning to revisit a key idea fr...

05/06/2021  De Finetti's Theorem in Categorical Probability
We present a novel proof of de Finetti's Theorem characterizing permutat...

07/26/2022  A formalization of the change of variables formula for integrals in mathlib
We report on a formalization of the change of variables formula in integ...

04/20/2023  Engel's theorem in Mathlib
We discuss the theory of Lie algebras in Lean's Mathlib library. Using n...

03/02/2023  Canonical decompositions in monadically stable and bounded shrubdepth graph classes
We use model-theoretic tools originating from stability theory to derive...

03/11/2020  Stable variation in multidimensional competition
The Fundamental Theorem of Language Change (Yang, 2000) implies the impo...
