
Accelerated Multi-Agent Optimization Method over Stochastic Networks

09/08/2020
by Wicak Ananduta, et al.

We propose a distributed method for solving multi-agent optimization problems with strongly convex cost functions and equality coupling constraints. The method builds on Nesterov's accelerated gradient approach and operates over stochastically time-varying communication networks. Under the standard assumptions of Nesterov's method, we show that the sequence of expected dual values converges to the optimal value at a rate of 𝒪(1/k^2). Furthermore, we provide a simulation study in which the method solves an optimal power flow problem on a well-known benchmark case.
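The abstract describes applying Nesterov's accelerated gradient method to the dual of an equality-coupled, strongly convex problem. The paper's actual algorithm is distributed and runs over stochastically time-varying networks; the sketch below is only a minimal, centralized illustration of the accelerated dual-ascent idea it builds on. The quadratic cost, the random problem data, and all names (x_of_lam, dual_grad, and so on) are assumptions for illustration, not taken from the paper.

```python
import numpy as np

# Toy coupled problem: minimize 0.5*||x - c||^2  subject to  A x = b
# (a strongly convex cost with equality coupling constraints).
rng = np.random.default_rng(0)
n, m = 6, 3                      # 6 decision variables, 3 coupling constraints
A = rng.standard_normal((m, n))
c = rng.standard_normal(n)
b = A @ rng.standard_normal(n)   # b in the range of A, so the problem is feasible

def x_of_lam(lam):
    # Minimizer of the Lagrangian: argmin_x 0.5*||x - c||^2 + lam^T (A x - b)
    return c - A.T @ lam

def dual_grad(lam):
    # Gradient of the (concave) dual function q(lam) = min_x L(x, lam)
    return A @ x_of_lam(lam) - b

L = np.linalg.norm(A @ A.T, 2)   # Lipschitz constant of the dual gradient

lam = lam_prev = np.zeros(m)
for k in range(1, 500):
    # Nesterov momentum step on the dual variable
    y = lam + (k - 1) / (k + 2) * (lam - lam_prev)
    lam_prev = lam
    lam = y + (1.0 / L) * dual_grad(y)   # gradient *ascent* on the dual

x = x_of_lam(lam)
print("constraint violation:", np.linalg.norm(A @ x - b))
```

With a concave dual whose gradient is Lipschitz continuous, this momentum scheme attains the 𝒪(1/k^2) rate on dual suboptimality cited in the abstract; the paper's contribution lies in retaining an accelerated rate in expectation when the communication links are stochastic, which this centralized toy does not model.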


Related research:

12/03/2021 · Push-sum Distributed Dual Averaging for Convex Optimization in Multi-agent Systems with Communication Delays
The distributed convex optimization problem over the multi-agent system ...

05/26/2021 · Distributed Zeroth-Order Stochastic Optimization in Time-varying Networks
We consider a distributed convex optimization problem in a network which...

06/09/2018 · An Optimization Approach to the Langberg-Médard Multiple Unicast Conjecture
The Langberg-Médard multiple unicast conjecture claims that for any stro...

12/01/2017 · Optimal Algorithms for Distributed Optimization
In this paper, we study the optimal convergence rate for distributed con...

02/18/2021 · ADOM: Accelerated Decentralized Optimization Method for Time-Varying Networks
We propose ADOM - an accelerated method for smooth and strongly convex d...

07/17/2019 · Fast Distributed Coordination of Distributed Energy Resources Over Time-Varying Communication Networks
In this paper, we consider the problem of optimally coordinating the res...

05/21/2018 · Adaptive Neighborhood Resizing for Stochastic Reachability in Multi-Agent Systems
We present DAMPC, a distributed, adaptive-horizon and adaptive-neighborh...