Convergence of online k-means

02/22/2022
by Sanjoy Dasgupta, et al.

We prove asymptotic convergence for a general class of k-means algorithms performed over streaming data from a distribution: the centers asymptotically converge to the set of stationary points of the k-means cost function. To do so, we show that online k-means over a distribution can be interpreted as stochastic gradient descent with a stochastic learning rate schedule. We then prove convergence by extending techniques from the optimization literature to handle settings in which center-specific learning rates may depend on the past trajectory of the centers.
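A concrete member of this class is the classic MacQueen-style online update: each arriving point pulls its nearest center toward itself with step size 1/n_i, where n_i counts the points that center has absorbed so far, so the learning rate is determined by the (random) past trajectory rather than fixed in advance. Below is a minimal Python sketch of that scheme; the function name, the seeding rule, and the toy Gaussian stream are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def online_kmeans(stream, k):
    """One-pass, MacQueen-style online k-means (illustrative sketch).

    Each arriving point x pulls its nearest center c_i toward itself:
        c_i <- c_i + (x - c_i) / n_i,
    an SGD step on the k-means cost whose step size 1/n_i depends on
    n_i, the number of points center i has absorbed so far. Because
    the n_i are driven by past (random) assignments, the learning
    rate schedule is itself stochastic.
    """
    centers, counts = [], []
    for x in stream:
        x = np.asarray(x, dtype=float)
        if len(centers) < k:                 # seed with the first k points
            centers.append(x.copy())
            counts.append(1)
            continue
        C = np.stack(centers)
        i = int(np.argmin(((C - x) ** 2).sum(axis=1)))  # nearest center
        counts[i] += 1
        centers[i] += (x - centers[i]) / counts[i]      # step size 1/n_i
    return np.stack(centers)

# Toy usage: a stream drawn from three well-separated Gaussians.
rng = np.random.default_rng(0)
means = np.array([[0.0, 0.0], [6.0, 6.0], [-6.0, 6.0]])
stream = (means[rng.integers(3)] + rng.standard_normal(2) for _ in range(20_000))
print(online_kmeans(stream, k=3))
```

The 1/n_i schedule is only the simplest instance; per the abstract, the paper's analysis covers more general center-specific learning rates, provided they depend on the data only through the past trajectory of the centers.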

Related research

Stochastic Learning Rate Optimization in the Stochastic Approximation and Online Learning Settings (10/20/2021)
In this work, multiplicative stochasticity is applied to the learning ra...

On the Convergence of mSGD and AdaGrad for Stochastic Optimization (01/26/2022)
As one of the most fundamental stochastic optimization algorithms, stoch...

Stochastic gradient descent on Riemannian manifolds (11/22/2011)
Stochastic gradient descent is a simple approach to find the local minim...

Learning from time-dependent streaming data with online stochastic algorithms (05/25/2022)
We study stochastic algorithms in a streaming framework, trained on samp...

Debiasing Stochastic Gradient Descent to handle missing values (02/21/2020)
A major caveat of large scale data is their incompleteness. We propose ...

Understanding and Detecting Convergence for Stochastic Gradient Descent with Momentum (08/27/2020)
Convergence detection of iterative stochastic optimization methods is of...

Backtracking Gradient Descent allowing unbounded learning rates (01/07/2020)
In unconstrained optimisation on an Euclidean space, to prove convergenc...