Proper Learning, Helly Number, and an Optimal SVM Bound

05/24/2020
by Olivier Bousquet, et al.

The classical PAC sample complexity bounds are stated for any Empirical Risk Minimizer (ERM) and contain an extra log(1/ϵ) factor, which is known to be necessary for ERM in general. Hanneke (2016) recently showed that the optimal sample complexity of PAC learning for any VC class C is achieved by a particular improper learning algorithm, which outputs a specific majority vote of hypotheses in C. This leaves open the question of when this bound can be achieved by proper learning algorithms, which are restricted to always output a hypothesis from C. In this paper we aim to characterize the classes for which the optimal sample complexity can be achieved by a proper learning algorithm. We show that these classes are characterized by the dual Helly number, a combinatorial parameter that arises in discrete geometry and abstract convexity. In particular, under general conditions on C, we show that the dual Helly number is bounded if and only if there is a proper learner that obtains the optimal joint dependence on ϵ and δ. As a further implication of our techniques, we resolve a long-standing open problem posed by Vapnik and Chervonenkis (1974) on the performance of the Support Vector Machine (SVM) by proving that the sample complexity of SVM in the realizable case is Θ((n/ϵ) + (1/ϵ)log(1/δ)), where n is the dimension. This gives the first optimal PAC bound for halfspaces achieved by a proper learning algorithm, which is moreover computationally efficient.
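To put the SVM result in context, the classical ERM analysis and the optimal rate differ exactly by the log(1/ϵ) factor mentioned above. Written in the abstract's notation for halfspaces in dimension n, with constants suppressed (these are the standard forms from the PAC literature, not quoted verbatim from the paper):

    Classical ERM bound:  O((n/ϵ)log(1/ϵ) + (1/ϵ)log(1/δ))
    Optimal rate:         Θ((n/ϵ) + (1/ϵ)log(1/δ))

The paper's point is that for halfspaces the optimal rate is attained by a proper and computationally efficient learner, the hard-margin SVM. The following is a minimal sketch of such a learner on a toy realizable instance; the data, the large-C approximation of the hard margin, and all names below are illustrative assumptions, not the paper's construction:

    import numpy as np
    from sklearn.svm import SVC

    # Toy realizable instance: points in dimension n = 2 labeled by a
    # target halfspace that is unknown to the learner.
    rng = np.random.default_rng(0)
    w_true = np.array([1.0, -2.0])
    X = rng.normal(size=(200, 2))
    y = np.where(X @ w_true > 0, 1, -1)

    # A very large C approximates the hard-margin SVM; with a linear
    # kernel the learned hypothesis is itself a halfspace, so the
    # learner is proper for the class of halfspaces.
    clf = SVC(kernel="linear", C=1e10)
    clf.fit(X, y)
    print("learned halfspace:", clf.coef_[0], "bias:", clf.intercept_[0])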


Related research

07/02/2015  The Optimal Sample Complexity of PAC Learning
This work establishes a new upper bound on the number of samples suffici...

12/05/2022  Bagging is an Optimal PAC Learner
Determining the optimal sample complexity of PAC learning in the realiza...

11/10/2022  Probabilistically Robust PAC Learning
Recently, Robey et al. propose a notion of probabilistic robustness, whi...

09/20/2019  Do Compressed Representations Generalize Better?
One of the most studied problems in machine learning is finding reasonab...

03/01/2021  Robust learning under clean-label attack
We study the problem of robust learning under clean-label data-poisoning...

04/18/2023  Optimal PAC Bounds Without Uniform Convergence
In statistical learning theory, determining the sample complexity of rea...

11/09/2021  Towards a Unified Information-Theoretic Framework for Generalization
In this work, we investigate the expressiveness of the "conditional mutu...
