Estimated VC dimension for risk bounds

11/15/2011
by Daniel J. McDonald, et al.

Vapnik-Chervonenkis (VC) dimension is a fundamental measure of the generalization capacity of learning algorithms. However, apart from a few special cases, it is hard or impossible to calculate analytically. Vapnik et al. [10] proposed a technique for estimating the VC dimension empirically. While their approach behaves well in simulations, it could not be used to bound the generalization risk of classifiers, because there were no bounds on the estimation error of the VC dimension itself. We rectify this omission, providing high-probability concentration results for the proposed estimator and deriving corresponding generalization bounds.
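
To make the setup concrete, here is a minimal sketch of the kind of empirical VC-dimension estimator the abstract refers to: measure the maximal deviation xi(n_i) between the error rates of a classifier fit to disagree on two independent half-samples at several design sizes n_i, then fit the effective dimension h by least squares against a theoretical envelope Phi(n/h). The envelope's functional form and the constants a, b, k below are the ones reported by Vapnik, Levin, and Le Cun (1994), and all function names are illustrative assumptions rather than code from this paper; the closing vc_risk_bound is a textbook Vapnik-style bound, shown only to suggest how an estimated h would be plugged into a generalization guarantee.

    import numpy as np
    from scipy.optimize import minimize_scalar

    # Envelope constants as reported in Vapnik et al. (1994); treated here
    # as assumptions, not values taken from the present paper.
    A, B, K = 0.16, 1.2, 0.14928

    def phi(tau):
        """Theoretical envelope Phi(tau) for the maximal deviation, tau = n / h."""
        tau = np.atleast_1d(np.asarray(tau, dtype=float))
        out = np.ones_like(tau)              # Phi(tau) = 1 for tau < 0.5
        big = tau >= 0.5
        t = tau[big]
        num = np.log(2 * t) + 1
        out[big] = A * num / (t - K) * (np.sqrt(1 + B * (t - K) / num) + 1)
        return out

    def estimate_vc_dimension(ns, xis):
        """Fit h by least squares: minimize sum_i (xi_i - Phi(n_i / h))^2.

        ns  : design sample sizes n_1, ..., n_m
        xis : observed maximal deviations xi(n_i) between the error rates
              of a classifier trained to disagree on two half-samples
        """
        ns, xis = np.asarray(ns, float), np.asarray(xis, float)
        loss = lambda h: np.sum((xis - phi(ns / h)) ** 2)
        res = minimize_scalar(loss, bounds=(1.0, ns.max()), method="bounded")
        return res.x

    def vc_risk_bound(emp_risk, n, h, delta=0.05):
        # Classical Vapnik-style bound: with probability >= 1 - delta,
        # R <= R_emp + sqrt((h (ln(2n/h) + 1) + ln(4/delta)) / n).
        eps = np.sqrt((h * (np.log(2 * n / h) + 1) + np.log(4 / delta)) / n)
        return emp_risk + eps

Under these assumptions, a call like estimate_vc_dimension([50, 100, 200, 400], observed_deviations) returns an estimated h, which could then be fed to vc_risk_bound; the paper's contribution is precisely the concentration results that make such a plug-in step rigorous.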

