A Distributed Online Convex Optimization Algorithm with Improved Dynamic Regret

11/12/2019
by Yan Zhang, et al.

In this paper, we consider the problem of distributed online convex optimization, in which a network of local agents aims to jointly minimize a convex function over a sequence of time steps, with no information about future objectives. Existing algorithms establish dynamic regret bounds that depend explicitly on the number of time steps. In this work, we show that this dependence can be removed when the local objective functions are strongly convex. Specifically, we propose a gradient tracking algorithm in which the agents communicate and descend along corrected gradient directions, so that each agent's descent direction tracks the average gradient across the network. We verify our theoretical results through numerical experiments.
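To make the "corrected gradient steps" concrete, here is a minimal numpy sketch of online gradient tracking on a hypothetical problem instance: four agents on a ring with a doubly stochastic mixing matrix, each holding a time-varying strongly convex quadratic loss with a slowly drifting target. The mixing matrix, step size, and loss family are all illustrative assumptions, not the paper's exact setup; the update rule is the standard gradient-tracking recursion, where the tracker variable is updated with the difference of consecutive local gradients.

```python
import numpy as np

n, d, T = 4, 2, 200          # agents, dimension, time steps (assumed values)
alpha = 0.1                  # constant step size (assumption)

# Doubly stochastic mixing matrix for a 4-agent ring (assumed topology).
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

# Hypothetical local losses f_{i,t}(x) = 0.5 * ||x - b_{i,t}||^2,
# strongly convex, with slowly drifting per-agent targets b_{i,t}.
def targets(t):
    base = np.array([np.sin(0.05 * t), np.cos(0.05 * t)])
    return base + 0.1 * np.arange(n)[:, None]    # shape (n, d)

def grads(X, t):
    return X - targets(t)                        # gradient of each local loss

X = np.zeros((n, d))         # local iterates x_{i,t}
Y = grads(X, 0)              # trackers y_{i,t}, initialized to local gradients

for t in range(T):
    # Consensus step plus descent along the tracked (corrected) gradient.
    X_next = W @ X - alpha * Y
    # Tracker update: mix, then correct with the change in local gradients,
    # which preserves sum_i y_{i,t} = sum_i grad f_{i,t}(x_{i,t}).
    Y = W @ Y + grads(X_next, t + 1) - grads(X, t)
    X = X_next

# Distance of each agent's iterate from the current minimizer of the
# average loss (the mean of the current targets); it stays bounded by the
# drift rather than growing with T.
errors = np.linalg.norm(X - targets(T).mean(axis=0), axis=1)
print(errors)
```

Because each local loss is strongly convex and the targets drift slowly, the iterates track the moving minimizer with a steady-state error proportional to the per-step drift, rather than an error that accumulates over the horizon.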


Related research

03/06/2019 · Distributed Online Convex Optimization with Time-Varying Coupled Inequality Constraints
This paper considers distributed online optimization with time-varying c...

09/25/2021 · Distributed Online Optimization with Byzantine Adversarial Agents
We study the problem of non-constrained, discrete-time, online distribut...

09/09/2016 · Distributed Online Optimization in Dynamic Environments Using Mirror Descent
This work addresses decentralized online optimization in non-stationary ...

02/23/2023 · Dynamic Regret Analysis of Safe Distributed Online Optimization for Convex and Non-convex Problems
This paper addresses safe distributed online optimization over an unknow...

09/03/2020 · Distributed Online Optimization via Gradient Tracking with Adaptive Momentum
This paper deals with a network of computing agents aiming to solve an o...

05/01/2022 · Adaptive Online Optimization with Predictions: Static and Dynamic Environments
In the past few years, Online Convex Optimization (OCO) has received not...

11/30/2022 · Swarm-Based Gradient Descent Method for Non-Convex Optimization
We introduce a new Swarm-Based Gradient Descent (SBGD) method for non-co...
