A New Lower Bound for Agnostic Learning with Sample Compression Schemes

05/21/2018
by Steve Hanneke, et al.

We establish a tight characterization of the worst-case rates for the excess risk of agnostic learning with sample compression schemes and for uniform convergence of agnostic sample compression schemes. In particular, we find that the optimal rates of convergence for size-k agnostic sample compression schemes are of the form √(k log(n/k)/n), which contrasts with agnostic learning with classes of VC dimension k, where the optimal rates are of the form √(k/n).
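
To make the contrast concrete, here is a minimal numerical sketch (not from the paper; the function names and the choice k = 10 are illustrative assumptions) comparing the two rates as the sample size n grows:

    import math

    def compression_rate(k, n):
        # Worst-case excess-risk rate for a size-k agnostic sample
        # compression scheme: sqrt(k * log(n/k) / n).
        # Natural log; the base only affects constant factors.
        return math.sqrt(k * math.log(n / k) / n)

    def vc_rate(k, n):
        # Optimal agnostic rate for a class of VC dimension k: sqrt(k / n).
        return math.sqrt(k / n)

    # The compression rate exceeds the VC rate by a sqrt(log(n/k)) factor,
    # so the gap widens (slowly) as n grows.
    for n in (10**3, 10**4, 10**5):
        print(n, compression_rate(10, n), vc_rate(10, n))

For k = 10 and n = 1000, for example, the compression rate is about 0.21 versus 0.10 for the VC rate, reflecting the extra √(log(n/k)) ≈ 2.1 factor.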
