Zero-Order One-Point Estimate with Distributed Stochastic Gradient-Tracking Technique

10/11/2022
by   Elissa Mhanna, et al.

In this work, we consider a distributed multi-agent stochastic optimization problem, where each agent holds a local objective function that is smooth and convex and is subject to a stochastic process. The goal is for all agents to collaborate in finding a common solution that optimizes the sum of these local functions. Under the practical assumption that agents can only obtain noisy numerical function queries at exactly one point at a time, we extend the distributed stochastic gradient-tracking method to the bandit setting, where no gradient estimate is available, and we introduce a zero-order (ZO) one-point estimate (1P-DSGT). We analyze the convergence of this novel technique for smooth and convex objectives using stochastic approximation tools, and we prove that it converges almost surely to the optimum. We then study the convergence rate when the objectives are additionally strongly convex. We obtain a rate of O(1/√(k)) after a sufficient number of iterations k > K_2, which is typically optimal for techniques utilizing one-point estimators. We also provide a regret bound of O(√(k)), which compares favorably with the aforementioned techniques. We further illustrate the usefulness of the proposed technique using numerical experiments.
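To make the setting concrete, below is a minimal sketch (not the paper's exact algorithm) of distributed stochastic gradient tracking driven by a zero-order one-point gradient estimate. The local objectives, the noise model, the ring-graph mixing matrix W, the classical one-point estimator g = (d/γ) f(x + γu) u with u uniform on the unit sphere, and the step-size and exploration schedules are all illustrative assumptions, not taken from the paper.

```python
# Sketch of 1P-style distributed stochastic gradient tracking (illustrative only).
# Assumptions (not from the paper): quadratic local objectives f_i(x) = 0.5*||A_i x - b_i||^2
# observed through a single noisy scalar query per agent per iteration, a ring-graph
# doubly stochastic mixing matrix, and simple diminishing step sizes.
import numpy as np

rng = np.random.default_rng(0)
n_agents, d = 5, 3

# Private local data: each agent holds its own least-squares objective.
A = [rng.normal(size=(10, d)) for _ in range(n_agents)]
b = [rng.normal(size=10) for _ in range(n_agents)]

def noisy_value(i, x):
    """One noisy numerical query of f_i at a single point x (the only oracle allowed)."""
    return 0.5 * np.sum((A[i] @ x - b[i]) ** 2) + rng.normal(scale=0.01)

def one_point_estimate(i, x, gamma):
    """Classical one-point ZO estimate: (d/gamma) * f_i(x + gamma*u) * u, u uniform on the sphere."""
    u = rng.normal(size=d)
    u /= np.linalg.norm(u)
    return (d / gamma) * noisy_value(i, x + gamma * u) * u

# Doubly stochastic mixing matrix for a ring graph (self weight 0.5, neighbors 0.25).
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

x = rng.normal(size=(n_agents, d))                              # local iterates
g = np.array([one_point_estimate(i, x[i], 1.0) for i in range(n_agents)])
y = g.copy()                                                     # gradient trackers

for k in range(1, 5001):
    alpha = 0.5 / k ** 0.75                                      # step size (illustrative schedule)
    gamma = 1.0 / k ** 0.25                                      # exploration radius (illustrative schedule)
    x = W @ (x - alpha * y)                                      # consensus + descent step
    g_new = np.array([one_point_estimate(i, x[i], gamma) for i in range(n_agents)])
    y = W @ y + g_new - g                                        # track the network-average gradient estimate
    g = g_new

# Compare the average iterate with the minimizer of the sum of the local objectives.
x_star = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)[0]
print("distance to optimum:", np.linalg.norm(x.mean(axis=0) - x_star))
```

The key restriction reflected here is that each agent makes exactly one noisy function query per iteration; the tracking variable y then propagates these one-point estimates through the network so that all agents descend along an estimate of the global gradient.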


