Linear Speedup in Saddle-Point Escape for Decentralized Non-Convex Optimization

10/30/2019
by Stefan Vlaski, et al.

Under appropriate cooperation protocols and parameter choices, fully decentralized solutions for stochastic optimization have been shown to match the performance of centralized solutions and to achieve a linear speedup (in the number of agents) relative to non-cooperative approaches in the strongly convex setting. More recently, these results have been extended to the pursuit of first-order stationary points in non-convex environments. In this work, we examine in detail how second-order convergence guarantees depend on the spectral properties of the combination policy in non-convex multi-agent optimization. We establish a linear speedup in saddle-point escape time in the number of agents for symmetric combination policies, and study the potential for further improvement by employing asymmetric combination weights. The results imply that a linear speedup can be expected in the pursuit of second-order stationary points, which exclude local maxima as well as strict saddle points and correspond to local or even global minima in many important learning settings.
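To make the setting concrete, the following is a minimal sketch of decentralized stochastic optimization with a symmetric combination policy: each agent runs an adapt-then-combine (diffusion) recursion, taking a noisy gradient step and then averaging with its neighbors through a doubly-stochastic combination matrix. The toy objective f(x, y) = x^2 - y^2 + 0.5 y^4 (a strict saddle at the origin), the ring topology, the step size, and the noise level are all hypothetical choices for illustration, not the paper's experimental setup; gradient noise is what lets the agents escape the saddle.

```python
import numpy as np

def noisy_grad(w, rng, sigma=0.1):
    # Stochastic gradient of the toy strict-saddle objective
    # f(x, y) = x^2 - y^2 + 0.5 * y^4 (saddle at the origin, minima at y = +/-1).
    x, y = w
    g = np.array([2.0 * x, -2.0 * y + 2.0 * y**3])
    return g + sigma * rng.standard_normal(2)

def ring_combination_matrix(K):
    # Symmetric, doubly-stochastic combination matrix for a ring of K >= 3
    # agents: equal weight 1/3 on self and on each of the two neighbors.
    A = np.zeros((K, K))
    for k in range(K):
        A[k, k] = A[k, (k - 1) % K] = A[k, (k + 1) % K] = 1.0 / 3.0
    return A

def diffusion(K=8, mu=0.01, T=2000, seed=0):
    # Adapt-then-combine diffusion: every agent starts near the saddle point.
    rng = np.random.default_rng(seed)
    A = ring_combination_matrix(K)
    W = np.full((K, 2), 1e-3)
    for _ in range(T):
        # Adapt: local stochastic-gradient step at each agent.
        Psi = W - mu * np.array([noisy_grad(W[k], rng) for k in range(K)])
        # Combine: average with neighbors via the symmetric policy A.
        W = A @ Psi
    return W

if __name__ == "__main__":
    W = diffusion()
    # After enough iterations the agents have left the saddle (|y| near 1)
    # while the stable x-direction has contracted toward zero.
    print(np.round(W, 3))
```

The combine step uses `A @ Psi` with a symmetric A, which is the regime in which the abstract establishes the linear speedup; asymmetric combination weights would replace A with a non-symmetric (left-stochastic) matrix.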


