
A Multi-Agent Primal-Dual Strategy for Composite Optimization over Distributed Features

by Sulaiman A. Alghunaim, et al.

This work studies multi-agent sharing optimization problems in which the objective is the sum of smooth local functions plus a convex (possibly non-smooth) function coupling all agents. This scenario arises in many machine learning and engineering applications, such as regression over distributed features and resource allocation. We reformulate this problem into an equivalent saddle-point problem, which is amenable to decentralized solutions. We then propose a proximal primal-dual algorithm and establish its linear convergence to the optimal solution when the local functions are strongly convex. To our knowledge, this is the first linearly convergent decentralized algorithm for multi-agent sharing problems with a general convex (possibly non-smooth) coupling function.
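The abstract does not spell out the update equations, but the structure it describes — local smooth costs coupled through a non-smooth convex function — admits a standard proximal primal-dual (Chambolle–Pock-style) iteration on the saddle-point reformulation min_x max_y Σ_k f_k(x_k) + yᵀ Σ_k A_k x_k − g*(y). The sketch below is illustrative only: the quadratic local costs f_k, the ℓ₁ coupling g(z) = λ‖z‖₁, the matrices A_k, and the step sizes are all assumptions for the demo, not the authors' algorithm.

```python
import numpy as np

# --- illustrative problem data (assumed, not from the paper) ---
np.random.seed(0)
K, d, m = 3, 4, 3          # agents, local dimension, coupling dimension
A = [np.random.randn(m, d) for _ in range(K)]      # per-agent coupling maps
Q = [np.eye(d) * (1 + k) for k in range(K)]        # f_k(x) = 0.5 x'Q_k x + c_k'x (strongly convex)
c = [np.random.randn(d) for _ in range(K)]
lam = 0.5                                          # g(z) = lam * ||z||_1

# --- primal-dual iteration on the saddle-point reformulation ---
tau, sigma = 0.05, 0.05    # step sizes; need tau*sigma*||A||^2 < 1
x = [np.zeros(d) for _ in range(K)]
y = np.zeros(m)            # dual variable shared through the coupling

for t in range(20000):
    # each agent takes a local gradient step using the current dual variable
    x_new = [x[k] - tau * (Q[k] @ x[k] + c[k] + A[k].T @ y) for k in range(K)]
    # dual ascent step on the extrapolated coupling output,
    # then prox of sigma*g*: for g = lam*||.||_1, g* is the box
    # indicator on [-lam, lam], so the prox is a clip
    z = sum(A[k] @ (2 * x_new[k] - x[k]) for k in range(K))
    y = np.clip(y + sigma * z, -lam, lam)
    x = x_new

# at a fixed point, each local gradient residual vanishes
res = max(np.linalg.norm(Q[k] @ x[k] + c[k] + A[k].T @ y) for k in range(K))
print(f"max stationarity residual: {res:.2e}")
```

In a decentralized implementation each agent would hold only its own (Q_k, c_k, A_k) block and the dual update would be carried out over the network; this centralized loop just illustrates the primal-dual structure.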



