Empirical Hypothesis Space Reduction

09/04/2019
by Akihiro Yabe, et al.

Selecting appropriate regularization coefficients is critical to performance in regularized empirical risk minimization problems. Existing theoretical approaches attempt to determine coefficients such that regularized empirical objectives are upper bounds of true objectives, uniformly over a hypothesis space. Such an approach is, however, known to be over-conservative, especially in high-dimensional settings with a large hypothesis space. In fact, an existing generalization error bound for variance-based regularization is O(√(d log n / n)), where d is the dimension of the hypothesis space, and thus the number of samples required for convergence increases linearly with d. This paper proposes an algorithm that calculates regularization coefficients which result in faster convergence of generalization error, O(√(log n / n)), whose leading term is independent of the dimension d. This faster convergence without dependence on the size of the hypothesis space is achieved by means of empirical hypothesis space reduction, which, with high probability, successfully reduces the hypothesis space without losing the true optimum solution. Calculating uniform upper bounds over the reduced space then accelerates the convergence of generalization error.
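To make the setting concrete, the following is a minimal sketch of variance-based regularized empirical risk minimization over a finite hypothesis space. The objective form (empirical risk plus a coefficient times a sample-variance penalty) follows the standard variance-based regularization template; the function name, the toy loss, the candidate hypotheses, and the choice of coefficient are all illustrative assumptions, not the paper's actual construction or its coefficient-selection algorithm.

```python
import math
import random

def variance_regularized_erm(hypotheses, loss, data, coeff):
    """Pick the hypothesis minimizing a variance-regularized empirical
    objective: R_hat(h) + coeff * sqrt(Var_hat(h) / n).

    Illustrative sketch only: in the paper's setting the coefficient is
    computed by the proposed algorithm; here `coeff` is a fixed input."""
    n = len(data)
    best_h, best_obj = None, float("inf")
    for h in hypotheses:
        losses = [loss(h, z) for z in data]
        mean = sum(losses) / n                           # empirical risk
        var = sum((l - mean) ** 2 for l in losses) / n   # empirical variance
        obj = mean + coeff * math.sqrt(var / n)          # regularized objective
        if obj < best_obj:
            best_h, best_obj = h, obj
    return best_h, best_obj

# Toy example: fit a constant predictor to noisy observations near 1.0.
random.seed(0)
data = [1.0 + random.gauss(0, 0.1) for _ in range(200)]
hypotheses = [i / 10 for i in range(0, 21)]   # candidate constants in [0, 2]
loss = lambda h, z: (h - z) ** 2              # squared loss
h_star, obj = variance_regularized_erm(hypotheses, loss, data, coeff=1.0)
print(h_star)
```

The empirical hypothesis space reduction the abstract describes would, in this picture, discard candidates whose lower confidence bound already exceeds the best upper bound, so that the uniform bound only needs to hold over the surviving (much smaller) set.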


