Slow Learners are Fast

11/03/2009
by John Langford, et al.

Online learning algorithms have impressive convergence properties when it comes to risk minimization and convex games on very large problems. However, they are inherently sequential in their design, which prevents them from taking advantage of modern multi-core architectures. In this paper we prove that online learning with delayed updates converges well, thereby facilitating parallel online learning.
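The core idea can be illustrated with a small simulation. The sketch below (names, loss, and step-size schedule are illustrative assumptions, not the paper's code) runs online gradient descent where each applied gradient was computed from parameters that are several steps stale, mimicking parallel workers that read parameters, compute a gradient, and apply it after a lag:

```python
import numpy as np

def delayed_ogd(data, labels, delay=3, lr=0.1, dim=2):
    """Online gradient descent with delayed updates (illustrative sketch).

    Each gradient is computed at parameters `delay` steps old and applied
    only after `delay` further steps, as if produced by a lagging worker.
    """
    w = np.zeros(dim)
    history = [w.copy()]   # past parameter vectors
    pending = []           # gradients waiting to be applied
    for t, (x, y) in enumerate(zip(data, labels)):
        stale_w = history[max(0, len(history) - 1 - delay)]
        grad = (stale_w @ x - y) * x          # squared-loss gradient
        pending.append(grad)
        if len(pending) > delay:              # apply the oldest gradient
            w = w - lr / np.sqrt(t + 1) * pending.pop(0)
        history.append(w.copy())
    return w

# Toy linear-regression stream (synthetic data, for illustration only).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
true_w = np.array([1.0, -2.0])
y = X @ true_w
w_hat = delayed_ogd(X, y, delay=3)
```

With a decaying step size the iterates still approach `true_w` despite the stale gradients, which is the behavior the paper's analysis guarantees for delayed updates.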


