Distributed Optimization over Directed Graphs with Row Stochasticity and Constraint Regularity

06/19/2018
by   Van Sy Mai, et al.

This paper deals with an optimization problem over a network of agents, where the cost function is the sum of the individual objectives of the agents and the constraint set is the intersection of local constraints. Most existing methods employing subgradient and consensus steps for solving this problem require the weight matrix associated with the network to be column stochastic or even doubly stochastic, conditions that can be hard to arrange in directed networks. Moreover, known convergence analyses for distributed subgradient methods vary depending on whether the problem is unconstrained or constrained, and whether the local constraint sets are identical or nonidentical and compact. The main goals of this paper are: (i) removing the common column stochasticity requirement; (ii) relaxing the compactness assumption; and (iii) providing a unified convergence analysis. Specifically, assuming the communication graph to be fixed and strongly connected and the weight matrix to (only) be row stochastic, a distributed projected subgradient algorithm and a variant of it are presented to solve the problem for cost functions that are convex and Lipschitz continuous. Based on a regularity assumption on the local constraint sets, a unified convergence analysis is given that applies to both unconstrained and constrained problems, without assuming compactness of the constraint sets or an interior point in their intersection. Further, we also establish an upper bound on the absolute objective error evaluated at each agent's available local estimate under a nonincreasing step size sequence. This bound allows us to analyze the convergence rate of both algorithms.
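To make the setting concrete, the sketch below illustrates a consensus-plus-projected-subgradient iteration with a weight matrix that is only row stochastic. It is a minimal toy example, not the paper's exact algorithm: all numbers (the four agents, the absolute-value objectives, the box constraint) are invented for illustration, the local constraint sets are taken identical for brevity, and the left Perron eigenvector used to rebalance the subgradients is computed centrally by power iteration, whereas a fully distributed method would have to estimate this correction online. Without such a correction, plain subgradient-consensus under merely row-stochastic weights minimizes a Perron-weighted sum of the local objectives rather than the unweighted sum.

```python
import numpy as np

# Toy problem (all values hypothetical): n agents minimize
# (1/n) * sum_i |x - b_i| over identical box constraints X_i = [lo, hi],
# communicating over a fixed, strongly connected directed graph.
n = 4
b = np.array([1.0, 2.0, 3.0, 4.0])   # local objective parameters
lo, hi = 0.0, 3.0                    # local constraint sets X_i = [0, 3]

# Row-stochastic (NOT column-stochastic) weight matrix: each row sums to 1,
# which each agent can arrange locally by normalizing its in-neighbor weights.
A = np.array([
    [0.50, 0.50, 0.00, 0.00],
    [0.00, 0.50, 0.50, 0.00],
    [0.00, 0.00, 0.50, 0.50],
    [0.25, 0.25, 0.25, 0.25],
])
assert np.allclose(A.sum(axis=1), 1.0)        # row stochastic
assert not np.allclose(A.sum(axis=0), 1.0)    # columns do not sum to 1

# Left Perron eigenvector pi (pi @ A = pi, pi > 0, sum(pi) = 1) via power
# iteration; here computed centrally purely to keep the sketch short.
pi = np.ones(n) / n
for _ in range(200):
    pi = pi @ A

x = np.zeros(n)                               # local estimates x_i(0)
for k in range(1, 5001):
    alpha = 1.0 / np.sqrt(k)                  # nonincreasing step sizes
    mix = A @ x                               # consensus step (row-stochastic mixing)
    g = np.sign(mix - b)                      # subgradient of |x - b_i| at mixed point
    g_corr = g / (n * pi)                     # rebalance for nonuniform Perron weights
    x = np.clip(mix - alpha * g_corr, lo, hi) # projection onto X_i = [lo, hi]

# Agents reach (approximate) consensus on a constrained minimizer of the
# average objective; for this data the minimizing set is the interval [2, 3].
print(x)
```

The rebalancing step `g / (n * pi)` is the key consequence of dropping column stochasticity: the mixing matrix's stationary distribution is no longer uniform, so uncorrected agents would implicitly weight each other's objectives by their Perron entries.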


