Bennett-type Generalization Bounds: Large-deviation Case and Faster Rate of Convergence

09/26/2013
by Chao Zhang, et al.

In this paper, we present Bennett-type generalization bounds for the learning process with i.i.d. samples and show that they converge faster than the traditional results. In particular, we first develop two Bennett-type deviation inequalities for the i.i.d. learning process: one yields generalization bounds based on the uniform entropy number, and the other leads to bounds based on the Rademacher complexity. We then adopt a new method to obtain alternative expressions of the Bennett-type generalization bounds, which imply that these bounds achieve a faster rate of convergence, o(N^(-1/2)), than the traditional O(N^(-1/2)) results. Additionally, we find that the rate becomes even faster in the large-deviation case, that is, when the empirical risk is far from (or at least not close to) the expected risk. Finally, we analyze the asymptotic convergence of the learning process and compare our analysis with existing results.
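For background, the display below is the classical scalar Bennett inequality, given here only as a reference point and not as the paper's learning-process result: for independent zero-mean random variables X_1, ..., X_N with |X_i| <= a almost surely and Var(X_i) <= sigma^2,

P\left(\sum_{i=1}^{N} X_i > t\right) \le \exp\left(-\frac{N\sigma^2}{a^2}\, h\!\left(\frac{at}{N\sigma^2}\right)\right), \qquad h(u) = (1+u)\ln(1+u) - u.

Since h(u) behaves like u^2/2 for small u but grows like u ln(u) for large u, a Bennett-type exponent decays strictly faster than a quadratic (Hoeffding-type) exponent once the deviation is large; this is the intuition behind the sharper rate claimed above for the large-deviation case.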
