Accelerated Randomized Coordinate Descent Methods for Stochastic Optimization and Online Learning

06/05/2018
by Akshita Bhandari et al.

We propose accelerated randomized coordinate descent algorithms for stochastic optimization and online learning. Our algorithms have significantly lower per-iteration complexity than the known accelerated gradient algorithms. The proposed online learning algorithms achieve better regret bounds than the known randomized online coordinate descent algorithms, while the proposed stochastic optimization algorithms match the convergence rates of the best known randomized coordinate descent algorithms. We also present simulation results demonstrating the performance of the proposed algorithms.
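To make the setting concrete, here is a minimal sketch of one standard accelerated randomized coordinate descent scheme (Nesterov-style, with uniform coordinate sampling), applied to a toy least-squares problem. This is a generic textbook variant, not the paper's own algorithm; the names accelerated_rcd and grad_i and all parameter choices below are illustrative assumptions.

```python
import numpy as np

def accelerated_rcd(grad_i, L, n, x0, iters=5000, seed=None):
    """Sketch of accelerated randomized coordinate descent.

    grad_i(y, i) -> i-th partial derivative of the objective at y
    L            -> coordinate-wise Lipschitz constants L_i
    """
    rng = np.random.default_rng(seed)
    x, z = x0.copy(), x0.copy()
    theta = 1.0 / n
    for _ in range(iters):
        y = (1 - theta) * x + theta * z      # momentum interpolation
        i = rng.integers(n)                  # sample one coordinate uniformly
        g = grad_i(y, i)                     # single partial derivative
        x = y.copy()
        x[i] -= g / L[i]                     # cheap coordinate step
        z[i] -= g / (n * theta * L[i])       # more aggressive "z" step
        # theta_{k+1} solves t^2 = (1 - t) * theta_k^2
        theta = 0.5 * (np.sqrt(theta**4 + 4 * theta**2) - theta**2)
    return x

# Toy demo: minimize f(x) = 0.5 * ||A x - b||^2
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
b = rng.standard_normal(50)
L = (A ** 2).sum(axis=0)                     # L_i = ||A[:, i]||^2
# Recomputing the full residual A @ y - b keeps this demo short but
# costs O(mn); an efficient implementation maintains A @ x and A @ z
# incrementally so each iteration costs only O(m).
grad_i = lambda y, i: A[:, i] @ (A @ y - b)
x_hat = accelerated_rcd(grad_i, L, n=20, x0=np.zeros(20), iters=20000, seed=1)
print(np.linalg.norm(A.T @ (A @ x_hat - b)))  # ~0 at the least-squares solution
```

Each iteration touches a single coordinate and evaluates a single partial derivative, which is where the low per-iteration cost mentioned in the abstract comes from; the accelerated θ-sequence is what recovers the faster O(1/k²)-type convergence of accelerated gradient methods.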

Related research

03/22/2011
Randomized Smoothing for Stochastic Optimization
We analyze convergence rates of stochastic optimization procedures for n...

03/30/2020
Explicit Regularization of Stochastic Gradient Methods through Duality
We consider stochastic gradient methods under the interpolation regime w...

10/21/2018
Dynamic Average Diffusion with Randomized Coordinate Updates
This work derives and analyzes an online learning strategy for tracking ...

06/13/2011
Efficient Transductive Online Learning via Randomized Rounding
Most traditional online learning algorithms are based on variants of mir...

04/26/2019
Online Learning Algorithms for Quaternion ARMA Model
In this paper, we address the problem of adaptive learning for autoregre...

04/25/2017
Stochastic Optimization from Distributed, Streaming Data in Rate-limited Networks
Motivated by machine learning applications in networks of sensors, inter...

06/06/2023
Buying Information for Stochastic Optimization
Stochastic optimization is one of the central problems in Machine Learni...
