Risk Bounds for Learning via Hilbert Coresets

03/29/2021
by Spencer Douglas, et al.

We develop a formalism for constructing stochastic upper bounds on the expected full sample risk for supervised classification tasks via the Hilbert coresets approach within a transductive framework. We explicitly compute tight and meaningful bounds for complex datasets and complex hypothesis classes such as state-of-the-art deep neural network architectures. The bounds we develop exhibit several desirable properties: i) they are non-uniform over the hypothesis space; ii) in many practical examples they become effectively deterministic through an appropriate choice of prior and training-data-dependent posterior distributions on the hypothesis space; and iii) they improve significantly as the size of the training set grows. We also lay out some ideas to explore in future research.
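The abstract does not detail the coreset construction itself. For orientation only, below is a minimal sketch of a standard Frank-Wolfe-style Hilbert coreset construction (in the spirit of Campbell and Broderick's Hilbert coresets), assuming per-example losses have already been embedded as finite-dimensional vectors; the function name, arguments, and embedding matrix `G` are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def hilbert_coreset_fw(G, m):
    """Sketch of a Frank-Wolfe Hilbert coreset construction (illustrative).

    G : (N, D) array; row n is a finite-dimensional embedding of the
        n-th per-example loss function in the Hilbert space.
    m : number of Frank-Wolfe iterations (upper-bounds the coreset size).

    Returns a sparse nonnegative weight vector w such that
    sum_n w[n] * G[n] approximates the full-sample vector sum_n G[n].
    """
    N, _ = G.shape
    sigmas = np.linalg.norm(G, axis=1)              # per-example norms
    safe = np.maximum(sigmas, 1e-12)                # guard zero rows
    sigma = sigmas.sum()
    L_full = G.sum(axis=0)                          # full-sample vector
    w = np.zeros(N)

    # Initialize at the vertex best aligned with the full-sample vector.
    f = np.argmax(G @ L_full / safe)
    w[f] = sigma / safe[f]

    for _ in range(m - 1):
        Lw = G.T @ w                                # current weighted sum
        resid = L_full - Lw
        # Greedy vertex: embedding most aligned with the residual.
        f = np.argmax(G @ resid / safe)
        vertex = np.zeros(N)
        vertex[f] = sigma / safe[f]
        d = G.T @ vertex - Lw                       # search direction
        denom = d @ d
        if denom <= 1e-12:
            break
        gamma = np.clip((d @ resid) / denom, 0.0, 1.0)  # exact line search
        w = (1.0 - gamma) * w + gamma * vertex
    return w
```

Under this (assumed) setup, the returned weights define a sparse weighted empirical risk whose deviation from the full-sample risk is controlled by the Hilbert-norm approximation error, which is the kind of quantity the paper's stochastic upper bounds would need to account for.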


Related research

02/05/2019 · Optimal Nonparametric Inference via Deep Neural Network
Deep neural network is a state-of-art method in modern science and techn...

01/26/2022 · Self-Certifying Classification by Linearized Deep Assignment
We propose a novel class of deep stochastic predictors for classifying m...

07/21/2020 · On the Rademacher Complexity of Linear Hypothesis Sets
Linear predictors form a rich class of hypotheses used in a variety of l...

10/29/2021 · Improving Generalization Bounds for VC Classes Using the Hypergeometric Tail Inversion
We significantly improve the generalization bounds for VC classes by usi...

10/22/2020 · Nonvacuous Loss Bounds with Fast Rates for Neural Networks via Conditional Information Measures
We present a framework to derive bounds on the test loss of randomized l...

09/04/2019 · Empirical Hypothesis Space Reduction
Selecting appropriate regularization coefficients is critical to perform...

05/30/2014 · Generalization Bounds for Learning with Linear, Polygonal, Quadratic and Conic Side Knowledge
In this paper, we consider a supervised learning setting where side know...
