Distributed Stochastic Nonconvex Optimization and Learning based on Successive Convex Approximation

04/30/2020
by Paolo Di Lorenzo, et al.

We study distributed stochastic nonconvex optimization in multi-agent networks. We introduce a novel algorithmic framework for the distributed minimization of the sum of the expected value of a smooth (possibly nonconvex) function (the agents' sum-utility) plus a convex (possibly nonsmooth) regularizer. The proposed method hinges on successive convex approximation (SCA) techniques, leveraging dynamic consensus as a mechanism to track the average gradient among the agents, and recursive averaging to recover the expected gradient of the sum-utility function. Almost sure convergence to (stationary) solutions of the nonconvex problem is established. Finally, the method is applied to distributed stochastic training of neural networks. Numerical results confirm the theoretical claims, and illustrate the advantages of the proposed method with respect to other methods available in the literature.
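To make the mechanism concrete, here is a minimal NumPy sketch of the kind of loop the abstract describes: each agent linearizes its local loss using a tracked gradient, solves a strongly convex proximal surrogate, averages with neighbors, and uses dynamic consensus plus recursive averaging to track the network-wide expected gradient. The toy problem (local stochastic least squares with an l1 regularizer), the mixing matrix, and all step-size choices are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim = 4, 5

# Doubly stochastic mixing matrix for a ring network (assumption).
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i + 1) % n_agents] = 0.25
    W[i, (i - 1) % n_agents] = 0.25

# Toy local losses: agent i holds a least-squares term ||A_i x - b_i||^2.
A = rng.standard_normal((n_agents, 8, dim))
b = rng.standard_normal((n_agents, 8))

def stoch_grad(i, x):
    """Noisy (stochastic) gradient of agent i's local loss (illustrative)."""
    noise = 0.01 * rng.standard_normal(dim)
    return 2 * A[i].T @ (A[i] @ x - b[i]) / 8 + noise

def prox_l1(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

lam, tau = 0.01, 2.0                   # regularizer weight, surrogate curvature
x = np.zeros((n_agents, dim))          # local iterates
d = np.array([stoch_grad(i, x[i]) for i in range(n_agents)])  # recursive grad averages
y = d.copy()                           # gradient-tracking variables

for k in range(200):
    rho = 1.0 / (k + 2)                # diminishing recursive-averaging weight
    alpha = 0.5 / (k + 2) ** 0.6       # diminishing SCA step size
    # Strongly convex surrogate: linearize the smooth loss via the tracked
    # gradient y_i and add a quadratic proximal term; its minimizer is a
    # prox-gradient step.
    x_hat = np.array([prox_l1(x[i] - y[i] / tau, lam / tau)
                      for i in range(n_agents)])
    x = x + alpha * (x_hat - x)        # convex combination (SCA update)
    x = W @ x                          # consensus averaging with neighbors
    # Recursive averaging of stochastic gradients, then dynamic consensus
    # so that y_i tracks the average gradient across the network.
    d_new = np.array([(1 - rho) * d[i] + rho * stoch_grad(i, x[i])
                      for i in range(n_agents)])
    y = W @ y + (d_new - d)
    d = d_new

# Consensus disagreement across agents (should be small after mixing).
print(np.max(np.abs(x - x.mean(axis=0))))
```

The diminishing weight rho implements the recursive averaging that filters gradient noise, while the tracking variable y mimics the dynamic-consensus mechanism; both choices here are plausible defaults rather than the tuned schedules from the paper.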


Related research

- 09/04/2018: Distributed Nonconvex Constrained Optimization over Time-Varying Digraphs
- 08/22/2018: Distributed Big-Data Optimization via Block-Iterative Gradient Tracking
- 05/02/2018: Distributed Big-Data Optimization via Block-Iterative Convexification and Averaging
- 06/26/2022: Spectrum Sharing Among Multiple-Seller and Multiple-Buyer Operators of A Mobile Network: A Stochastic Geometry Approach
- 08/22/2018: Distributed Big-Data Optimization via Block-wise Gradient Tracking
- 05/27/2018: Distributed Big-Data Optimization via Block Communications
- 10/24/2016: A Framework for Parallel and Distributed Training of Neural Networks
