
Adaptive Stopping Rule for Kernel-based Gradient Descent Algorithms

01/09/2020
by Xiangyu Chang, et al.

In this paper, we propose an adaptive stopping rule for kernel-based gradient descent (KGD) algorithms. We introduce the empirical effective dimension to quantify the increments across iterations of KGD and derive an implementable early stopping strategy. We analyze the performance of the adaptive stopping rule in the framework of learning theory. Using the recently developed integral operator approach, we rigorously prove its optimality by establishing optimal learning rates for KGD equipped with this rule. Furthermore, we give a sharp bound on the number of iterations required by KGD under the proposed early stopping rule, demonstrating its computational advantage.
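As a rough illustration of the kind of procedure described above, the sketch below runs kernel gradient descent in coefficient space and stops once a residual check tied to the empirical effective dimension is satisfied. This is a minimal sketch under stated assumptions, not the paper's rule: the Gaussian kernel, step size, noise-level parameter, and the particular threshold are all illustrative choices.

```python
# Illustrative sketch only: kernel gradient descent (KGD) with a heuristic
# early stopping check based on the empirical effective dimension. The kernel,
# step size, noise level, and threshold below are assumptions, not the paper's rule.
import numpy as np

def gaussian_kernel(X, Z, bandwidth=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X and Z."""
    sq_dists = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2.0 * bandwidth ** 2))

def effective_dimension(K, lam):
    """Empirical effective dimension N(lam) = trace((K/n + lam I)^{-1} K/n)."""
    n = K.shape[0]
    return np.trace(np.linalg.solve(K / n + lam * np.eye(n), K / n))

def kgd_early_stopping(X, y, bandwidth=1.0, step=0.5, max_iter=1000, noise_level=1.0):
    """Run KGD on (X, y); stop via a heuristic effective-dimension criterion."""
    n = len(y)
    K = gaussian_kernel(X, X, bandwidth)
    alpha = np.zeros(n)                      # f_t(.) = sum_i alpha_i k(x_i, .)
    for t in range(1, max_iter + 1):
        residual = K @ alpha - y             # training residual at iteration t
        alpha -= (step / n) * residual       # gradient step in coefficient space
        # Heuristic check: after t steps, 1/(step * t) plays the role of a
        # regularization level; stop once the mean squared residual falls below
        # the noise level scaled by the effective dimension at that level.
        lam_t = 1.0 / (step * t)
        if np.mean(residual ** 2) <= noise_level ** 2 * effective_dimension(K, lam_t) / n:
            break
    return alpha, t

# Example usage on synthetic data.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-1.0, 1.0, size=(100, 1))
    y = np.sin(3.0 * X[:, 0]) + 0.1 * rng.standard_normal(100)
    alpha, t_stop = kgd_early_stopping(X, y, bandwidth=0.3, noise_level=0.1)
    print("stopped after", t_stop, "iterations")
```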

