Comparative Learning: A Sample Complexity Theory for Two Hypothesis Classes

11/16/2022
by Lunjia Hu, et al.

In many learning theory problems, a central role is played by a hypothesis class: we might assume that the data is labeled according to a hypothesis in the class (usually referred to as the realizable setting), or we might evaluate the learned model by comparing it with the best hypothesis in the class (the agnostic setting). Taking a step beyond these classic setups, which involve only a single hypothesis class, we introduce comparative learning as a combination of the realizable and agnostic settings in PAC learning: given two binary hypothesis classes S and B, we assume that the data is labeled according to a hypothesis in the source class S and require the learned model to achieve an accuracy comparable to that of the best hypothesis in the benchmark class B. Even when both S and B have infinite VC dimension, comparative learning can still have a small sample complexity. We show that the sample complexity of comparative learning is characterized by the mutual VC dimension VC(S,B), which we define to be the maximum size of a subset shattered by both S and B. We also show a similar result in the online setting, where we give a regret characterization in terms of the mutual Littlestone dimension Ldim(S,B). These results also hold for partial hypotheses. We additionally show that the insights needed to characterize the sample complexity of comparative learning can be used to characterize the sample complexity of realizable multiaccuracy and multicalibration via the mutual fat-shattering dimension, an analogue of the mutual VC dimension for real-valued hypotheses. This not only solves an open problem posed by Hu, Peale, and Reingold (2022), but also leads to independently interesting results extending classic ones about regression, boosting, and covering numbers to our two-hypothesis-class setting.
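To make the central definition concrete, here is a minimal, illustrative Python sketch (not from the paper) that computes the mutual VC dimension VC(S,B) by brute force for finite, total hypothesis classes over a finite domain. The names `shatters` and `mutual_vc_dimension` and the toy classes are hypothetical, and the exhaustive search is only feasible at toy sizes; the paper's notion also covers partial hypotheses, which this sketch ignores.

```python
from itertools import combinations, product

def shatters(hypotheses, points):
    """A class shatters `points` if every {0,1} labeling of them
    is realized by some hypothesis (here, a dict point -> label)."""
    realized = {tuple(h[x] for x in points) for h in hypotheses}
    return len(realized) == 2 ** len(points)

def mutual_vc_dimension(S, B, domain):
    """Brute-force VC(S,B): the maximum size of a subset of `domain`
    shattered by BOTH S and B (exponential search; toy sizes only)."""
    best = 0
    for k in range(1, len(domain) + 1):
        for subset in combinations(domain, k):
            if shatters(S, subset) and shatters(B, subset):
                best = k
    return best

# Toy example over the 3-point domain {0, 1, 2}.
domain = [0, 1, 2]
# S: all 2^3 labelings, so S shatters every subset (VC(S) = 3).
S = [dict(zip(domain, bits)) for bits in product([0, 1], repeat=3)]
# B: threshold hypotheses x -> [x >= t], which shatter only singletons here.
B = [{x: int(x >= t) for x in domain} for t in range(4)]

print(mutual_vc_dimension(S, B, domain))  # prints 1, i.e., VC(S,B) = 1
```

Since any set shattered by both classes is in particular shattered by each, VC(S,B) is at most min(VC(S), VC(B)); in the toy example above it equals 1 even though VC(S) = 3.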


