
Average-Case Information Complexity of Learning

by   Ido Nachum, et al.

How many bits of information are revealed by a learning algorithm for a concept class of VC-dimension d? Previous works have shown that even for d=1 the amount of information may be unbounded, tending to ∞ with the size of the universe. Can it be that all concepts in the class require leaking a large amount of information? We show that typical concepts do not: there exists a proper learning algorithm that reveals O(d) bits of information for most concepts in the class. This result is a special case of a more general phenomenon we explore: if there is a low-information learner when the algorithm knows the underlying distribution on inputs, then there is a learner that reveals little information on an average concept without knowing the distribution on inputs.
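As a toy illustration of what "bits revealed" can mean here, the sketch below runs a proper learner for the threshold class on {1, …, n} (VC-dimension 1) and empirically estimates the mutual information between a random concept and the learner's output hypothesis. This is only a rough proxy for the paper's formal information measure (which concerns the input sample, not the concept), and the helper names `learn_threshold` and `mutual_information` are our own, not from the paper:

```python
import math
import random
from collections import Counter

def learn_threshold(sample):
    # Proper ERM for thresholds: return the smallest observed positive
    # point as the threshold, or None if the sample is all-negative.
    pos = [x for x, y in sample if y == 1]
    return min(pos) if pos else None

def mutual_information(pairs):
    # Plug-in empirical estimate of I(C; H) in bits from (concept,
    # hypothesis) pairs: sum over the joint distribution of
    # p(c,h) * log2( p(c,h) / (p(c) p(h)) ).
    n = len(pairs)
    pc = Counter(c for c, _ in pairs)
    ph = Counter(h for _, h in pairs)
    pch = Counter(pairs)
    return sum(
        (k / n) * math.log2((k / n) / ((pc[c] / n) * (ph[h] / n)))
        for (c, h), k in pch.items()
    )

if __name__ == "__main__":
    random.seed(0)
    UNIVERSE, M, TRIALS = 64, 8, 20000
    pairs = []
    for _ in range(TRIALS):
        t = random.randint(1, UNIVERSE)                # random threshold concept
        xs = [random.randint(1, UNIVERSE) for _ in range(M)]
        sample = [(x, int(x >= t)) for x in xs]        # labels under concept t
        pairs.append((t, learn_threshold(sample)))
    print(f"estimated I(concept; output) = {mutual_information(pairs):.2f} bits")
```

The estimate depends on the sample size M and universe size; the point is only to make the quantity concrete, not to reproduce the paper's bounds.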




A Direct Sum Result for the Information Complexity of Learning

How many bits of information are required to PAC learn a class of hypoth...

Quantum Boosting

Suppose we have a weak learning algorithm A for a Boolean-valued problem...

Simple Algorithms for Learning from Random Counterexamples

This work describes two simple and efficient algorithms for exactly lear...

Extractor-Based Time-Space Lower Bounds for Learning

A matrix M: A × X → {-1,1} corresponds to the following learning problem:...

Multi-Instance Learning with Any Hypothesis Class

In the supervised learning setting termed Multiple-Instance Learning (MI...

One-Shot Induction of Generalized Logical Concepts via Human Guidance

We consider the problem of learning generalized first-order representati...

Linear Guardedness and its Implications

Previous work on concept identification in neural representations has fo...