Agnostic Sample Compression for Linear Regression

10/03/2018
by Steve Hanneke et al.

We obtain the first positive results for bounded sample compression in the agnostic regression setting. We show that for p ∈ {1, ∞}, agnostic linear regression with ℓ_p loss admits a bounded sample compression scheme. Specifically, we exhibit efficient sample compression schemes for agnostic linear regression in R^d of size d+1 under the ℓ_1 loss and size d+2 under the ℓ_∞ loss. We further show that for every other ℓ_p loss (1 < p < ∞), there does not exist an agnostic compression scheme of bounded size. This refines and generalizes a negative result of David, Moran, and Yehudayoff (2016) for the ℓ_2 loss. We close by posing a general open question: for agnostic regression with ℓ_1 loss, does every function class admit a compression scheme of size equal to its pseudo-dimension? This question generalizes Warmuth's classic sample compression conjecture for realizable-case classification (Warmuth, 2003).
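The classical structural fact behind compression schemes of this flavor is that ℓ_1 (least absolute deviations) linear regression can be written as a linear program, and an optimal basic (vertex) solution of that LP interpolates at least d of the sample points, so the fit is recoverable from a small subsample. The sketch below is not from the paper; it is a minimal numerical illustration of that interpolation property, assuming SciPy's `linprog` with the dual-simplex backend:

```python
import numpy as np
from scipy.optimize import linprog

# L1 linear regression in R^d as an LP:
#   minimize sum_i (u_i + v_i)
#   s.t.     X w + u - v = y,   u >= 0, v >= 0
# where u_i - v_i is the (signed) residual on point i. At an optimal
# vertex of this LP, at least d residuals are exactly zero, i.e. the
# fit interpolates at least d sample points. This is the structural
# property that small-subsample reconstructions rely on.

rng = np.random.default_rng(0)
n, d = 30, 3
X = rng.standard_normal((n, d))
y = X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

# Decision variables, in order: [w (d, free), u (n, >= 0), v (n, >= 0)]
c = np.concatenate([np.zeros(d), np.ones(2 * n)])
A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
bounds = [(None, None)] * d + [(0, None)] * (2 * n)

# Dual simplex returns a basic (vertex) optimal solution.
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs-ds")
w = res.x[:d]

exact = int(np.sum(np.abs(X @ w - y) < 1e-6))
print(f"{exact} of {n} points fit exactly (d = {d})")
```

This only demonstrates the ℓ_1 interpolation phenomenon on synthetic data; the paper's actual scheme additionally specifies how the d+1 retained points (plus side information, in the ℓ_∞ case) reconstruct the hypothesis in the agnostic setting.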


Related research:

- A New Lower Bound for Agnostic Learning with Sample Compression Schemes (05/21/2018): We establish a tight characterization of the worst-case rates for the ex...
- Optimally compressing VC classes (01/11/2022): Resolving a conjecture of Littlestone and Warmuth, we show that any conc...
- Unlabeled Compression Schemes Exceeding the VC-dimension (11/29/2018): In this note we disprove a conjecture of Kuzmin and Warmuth claiming tha...
- Multiclass Learnability Does Not Imply Sample Compression (08/12/2023): A hypothesis class admits a sample compression scheme, if for every samp...
- Sample Compression for Real-Valued Learners (05/21/2018): We give an algorithmically efficient version of the learner-to-compressi...
- Sample compression schemes for balls in graphs (06/27/2022): One of the open problems in machine learning is whether any set-family o...
- Agnostic Distribution Learning via Compression (10/14/2017): We study sample-efficient distribution learning, where a learner is give...
