Distributed Online Optimization via Gradient Tracking with Adaptive Momentum

09/03/2020
by Guido Carnevale, et al.

This paper deals with a network of computing agents that aim to solve an online optimization problem in a distributed fashion, i.e., by means of local computation and communication, without any central coordinator. We propose GTAdam (gradient tracking with adaptive momentum estimation), a distributed algorithm that combines a gradient tracking mechanism with first- and second-order momentum estimates of the gradient. The algorithm is analyzed in the online setting for strongly convex and smooth cost functions. We prove that the average dynamic regret is bounded and that the convergence rate is linear. The algorithm is tested on a time-varying classification problem, on a (moving) target localization problem, and on a stochastic optimization setup from image classification. In these multi-agent learning experiments, GTAdam outperforms state-of-the-art distributed optimization methods.
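The abstract describes combining gradient tracking with Adam-style moment estimates. The sketch below illustrates one plausible reading of such a scheme on a toy quadratic problem: each agent mixes its iterate with its neighbors', tracks the network-average gradient, and takes an Adam-like step on the tracked gradient. The quadratic costs, ring topology, decaying step size, and all constants are illustrative assumptions, not the paper's exact update (which, e.g., may use different safeguards on the second-moment estimate).

```python
import numpy as np

# Hedged sketch of a GTAdam-style iteration (assumptions throughout):
# gradient tracking combined with Adam-like first/second moment estimates.

rng = np.random.default_rng(0)
N, d = 4, 3                                  # agents, variable dimension

# Local strongly convex costs f_i(x) = 0.5 * (x - b_i)^T A_i (x - b_i)
A = [(1.0 + i) * np.eye(d) for i in range(N)]
b = [rng.standard_normal(d) for _ in range(N)]
grad = lambda i, x: A[i] @ (x - b[i])

# Doubly stochastic mixing matrix for a ring graph (assumed topology)
W = np.zeros((N, N))
for i in range(N):
    W[i, i] = 0.5
    W[i, (i - 1) % N] = 0.25
    W[i, (i + 1) % N] = 0.25

x = np.zeros((N, d))
g_old = np.array([grad(i, x[i]) for i in range(N)])
y = g_old.copy()                             # tracker of the average gradient
m = np.zeros((N, d))                         # first-moment estimate
v = np.zeros((N, d))                         # second-moment estimate
beta1, beta2, eps = 0.9, 0.999, 1e-8

for k in range(5000):
    # Adam-style moments of the tracked gradient, with bias correction
    m = beta1 * m + (1 - beta1) * y
    v = beta2 * v + (1 - beta2) * y**2
    m_hat = m / (1 - beta1 ** (k + 1))
    v_hat = v / (1 - beta2 ** (k + 1))
    step = m_hat / (np.sqrt(v_hat) + eps)
    # Consensus on the decision variables plus the adaptive momentum step
    alpha = 0.1 / np.sqrt(k + 1)             # decaying step size (assumption)
    x = W @ x - alpha * step
    # Gradient tracking: y_i keeps estimating the network-average gradient
    g_new = np.array([grad(i, x[i]) for i in range(N)])
    y = W @ y + g_new - g_old
    g_old = g_new

# Minimizer of the aggregate cost sum_i f_i, for comparison
x_star = np.linalg.solve(sum(A), sum(Ai @ bi for Ai, bi in zip(A, b)))
print(np.max(np.abs(x - x_star)))            # all agents end up near x_star
```

Because the mixing matrix is doubly stochastic, the trackers satisfy the usual invariant that their average equals the average of the current local gradients, so the Adam-style step is taken on an estimate of the global gradient rather than the local one.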


Related research:

- 10/11/2022: Zero-Order One-Point Estimate with Distributed Stochastic Gradient-Tracking Technique
- 10/03/2020: Expectigrad: Fast Stochastic Optimization with Robust Convergence Properties
- 05/26/2021: Distributed Zeroth-Order Stochastic Optimization in Time-varying Networks
- 03/25/2021: Compressed Gradient Tracking Methods for Decentralized Optimization with Linear Convergence
- 05/13/2018: Dyna: A Method of Momentum for Stochastic Optimization
- 04/19/2021: Distributed Derivative-free Learning Method for Stochastic Optimization over a Network with Sparse Activity
- 11/12/2019: A Distributed Online Convex Optimization Algorithm with Improved Dynamic Regret
