Finite Littlestone Dimension Implies Finite Information Complexity

06/27/2022
by Aditya Pradeep, et al.

We prove that every online learnable class of functions with Littlestone dimension d admits a learning algorithm with finite information complexity. Towards this end, we use the notion of a globally stable algorithm. In general, the information complexity of such a globally stable algorithm is large yet finite, roughly exponential in d. We also show there is room for improvement: for a canonical online learnable class, the indicator functions of affine subspaces of dimension d, the information complexity can be upper bounded logarithmically in d.
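For intuition about the parameter d: the Littlestone dimension of a finite class can be computed by the standard mistake-tree recursion, Ldim(H) = max over points x that split H of 1 + min(Ldim(H restricted to label 0 at x), Ldim(H restricted to label 1 at x)), with Ldim = 0 for a singleton class. Below is a minimal Python sketch of that recursion (an illustration, not from the paper; the label-tuple encoding of hypotheses is our own choice).

```python
from functools import lru_cache

def littlestone_dim(hypotheses):
    """Littlestone dimension of a finite class over a finite domain.

    Each hypothesis is a tuple of 0/1 labels, one per domain point.
    Naive mistake-tree recursion: exponential time, tiny examples only.
    """
    n = len(next(iter(hypotheses)))  # number of domain points

    @lru_cache(maxsize=None)
    def ldim(H):
        if len(H) <= 1:
            return 0  # a single hypothesis forces no mistakes
        best = 0
        for x in range(n):
            # Split the class by the label it assigns to point x.
            H0 = frozenset(h for h in H if h[x] == 0)
            H1 = frozenset(h for h in H if h[x] == 1)
            if H0 and H1:  # x can be forced to a mistake either way
                best = max(best, 1 + min(ldim(H0), ldim(H1)))
        return best

    return ldim(frozenset(hypotheses))

# Thresholds on 3 points, h_t(x) = 1 iff x >= t.
thresholds = [(1, 1, 1), (0, 1, 1), (0, 0, 1), (0, 0, 0)]
print(littlestone_dim(thresholds))  # -> 2
```

A class is online learnable exactly when this quantity is finite; the result above then guarantees a learner for it whose information complexity, in the usual mutual-information formulation (between the training sample and the algorithm's output), is also finite.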

Related research

08/12/2023 · Multiclass Learnability Does Not Imply Sample Compression
A hypothesis class admits a sample compression scheme, if for every samp...

06/12/2021 · Affine OneMax
A new class of test functions for black box optimization is introduced. ...

04/16/2018 · A Direct Sum Result for the Information Complexity of Learning
How many bits of information are required to PAC learn a class of hypoth...

08/21/2023 · Fat Shattering, Joint Measurability, and PAC Learnability of POVM Hypothesis Classes
We characterize learnability for quantum measurement classes by establis...

02/06/2023 · Find a witness or shatter: the landscape of computable PAC learning
This paper contributes to the study of CPAC learnability – a computable ...

09/08/2015 · On the complexity of piecewise affine system identification
The paper provides results regarding the computational complexity of hyb...

11/25/2018 · Average-Case Information Complexity of Learning
How many bits of information are revealed by a learning algorithm for a ...
