Exponential convergence of testing error for stochastic gradient methods

12/13/2017
by Loucas Pillaud-Vivien, et al.

We consider binary classification problems with positive definite kernels and square loss, and study the convergence rates of stochastic gradient methods. We show that while the excess testing loss (squared loss) converges slowly to zero as the number of observations (and thus iterations) goes to infinity, the testing error (classification error) converges exponentially fast if low-noise conditions are assumed.
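To make the contrast concrete, here is a minimal, hypothetical sketch (not the authors' code): single-pass stochastic gradient descent on the square loss, using random Fourier features to approximate a Gaussian kernel, on a synthetic problem whose labels are a deterministic function of the input, an extreme form of the low-noise condition. The data model, feature dimension, step-size schedule, and all variable names are illustrative assumptions; tracking both the test squared loss and the 0-1 classification error is what lets one observe the behaviour described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic low-noise problem (illustrative assumption, not from the paper):
# labels are a deterministic function of the input, so the margin condition holds trivially.
def make_data(n):
    x = rng.uniform(-1.0, 1.0, size=(n, 1))
    y = np.where(x[:, 0] > 0.0, 1.0, -1.0)
    return x, y

# Random Fourier features approximating a Gaussian (RBF) kernel.
D, gamma = 200, 5.0
W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(1, D))
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

def features(x):
    return np.sqrt(2.0 / D) * np.cos(x @ W + b)

x_test, y_test = make_data(5000)
phi_test = features(x_test)

# Single-pass SGD on the square loss with Polyak-Ruppert averaging
# (the step-size schedule is an illustrative choice, not the paper's exact scheme).
n_iter, step0 = 20000, 0.5
theta = np.zeros(D)
theta_avg = np.zeros(D)
for t in range(1, n_iter + 1):
    x_t, y_t = make_data(1)
    phi_t = features(x_t)[0]
    grad = (phi_t @ theta - y_t[0]) * phi_t        # gradient of 0.5 * (<theta, phi> - y)^2
    theta -= step0 / np.sqrt(t) * grad
    theta_avg += (theta - theta_avg) / t           # running average of the iterates
    if t in (100, 1000, 5000, 20000):
        pred = phi_test @ theta_avg
        sq_loss = np.mean((pred - y_test) ** 2)
        cls_err = np.mean(np.sign(pred) != y_test)
        print(f"t={t:6d}  test squared loss={sq_loss:.4f}  test 0-1 error={cls_err:.4f}")
```

On a problem with such a hard margin one would expect the printed 0-1 error to reach zero well before the squared loss has settled at its minimum, which is the qualitative gap between the two convergence rates described above.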

research
06/14/2018
Stochastic Gradient Descent with Exponential Convergence Rates of Expected Classification Errors
We consider stochastic gradient descent for binary classification proble...

research
11/13/2019
Exponential Convergence Rates of Classification Errors on Learning with SGD and Random Features
Although kernel methods are widely used in many learning problems, they ...

research
05/04/2021
On the stability of the stochastic gradient Langevin algorithm with dependent data stream
We prove, under mild conditions, that the stochastic gradient Langevin d...

research
02/25/2020
Can speed up the convergence rate of stochastic gradient methods to O(1/k^2) by a gradient averaging strategy?
In this paper we consider the question of whether it is possible to appl...

research
10/21/2016
On the Convergence of Stochastic Gradient MCMC Algorithms with High-Order Integrators
Recent advances in Bayesian learning with large-scale data have witnesse...

research
07/22/2022
Statistical Hypothesis Testing Based on Machine Learning: Large Deviations Analysis
We study the performance – and specifically the rate at which the error ...

research
04/03/2019
Exponentially convergent stochastic k-PCA without variance reduction
We present Matrix Krasulina, an algorithm for online k-PCA, by generaliz...
