Falsification and future performance

11/23/2011
by David Balduzzi, et al.

We information-theoretically reformulate two measures of capacity from statistical learning theory: empirical VC-entropy and empirical Rademacher complexity. We show that these capacity measures count the number of hypotheses about a dataset that a learning algorithm falsifies when it finds the classifier in its repertoire that minimizes empirical risk. It then follows that the future performance of predictors on unseen data is controlled in part by how many hypotheses the learner falsifies. As a corollary, we show that empirical VC-entropy quantifies the message length of the true hypothesis in the optimal code of a particular probability distribution, the so-called actual repertoire.
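For context, the two capacity measures named above have standard definitions in statistical learning theory. The sketch below states them in common textbook notation (a sample S, a hypothesis class of binary classifiers, Rademacher signs sigma_i); this notation is an assumption of the sketch, not the paper's own.

% Standard definitions, stated for a sample S = (x_1, ..., x_n) and a
% class \mathcal{H} of binary classifiers h : X -> {-1, +1}.
% Notation is assumed here, not quoted from the paper.

% Empirical VC-entropy: the log of the number of distinct labelings
% that \mathcal{H} realizes on the observed sample S.
\[
  H_{\mathcal{H}}(S)
    = \log_2 \Bigl| \bigl\{ \bigl(h(x_1), \dots, h(x_n)\bigr) : h \in \mathcal{H} \bigr\} \Bigr|
\]

% Empirical Rademacher complexity: the expected best correlation of
% \mathcal{H} with uniformly random sign patterns on the sample.
\[
  \widehat{\mathfrak{R}}_S(\mathcal{H})
    = \mathbb{E}_{\sigma}\!\left[ \sup_{h \in \mathcal{H}}
        \frac{1}{n} \sum_{i=1}^{n} \sigma_i \, h(x_i) \right],
  \qquad \sigma_i \sim \mathrm{Uniform}\{-1, +1\}.
\]

% Shannon code-length identity behind the corollary: in the optimal
% code for a distribution p, the message length of outcome h is
\[
  \ell(h) = -\log_2 p(h).
\]

The first quantity grows with the expressive power of the class on the data actually seen, which is what lets it double as a (log) count of the hypotheses available to be falsified; the code-length identity is the standard fact linking probabilities to message lengths in an optimal code.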

Related research

Information, learning and falsification (10/17/2011)
There are (at least) three approaches to quantifying information. The fi...

Divergence measures estimation and its asymptotic normality theory: Discrete case (12/12/2018)
In this paper we provide the asymptotic theory of the general phi-diverg...

A Comparison of Empirical Tree Entropies (06/01/2020)
Whereas for strings, higher-order empirical entropy is the standard entr...

The Topology of Statistical Verifiability (07/27/2017)
Topological models of empirical and formal inquiry are increasingly prev...

Learning opacity in Stratal Maximum Entropy Grammar (03/07/2017)
Opaque phonological patterns are sometimes claimed to be difficult to le...

Learning Theory Approach to Minimum Error Entropy Criterion (08/03/2012)
We consider the minimum error entropy (MEE) criterion and an empirical r...

A Methodology for Empirical Analysis of LOD Datasets (06/04/2014)
CoCoE stands for Complexity, Coherence and Entropy, and presents an exte...