Performance Limits of Stochastic Sub-Gradient Learning, Part II: Multi-Agent Case

04/20/2017
by Bicheng Ying, et al.

The analysis in Part I revealed interesting properties of subgradient learning algorithms in the context of stochastic optimization in the presence of gradient noise. These algorithms are used when the risk functions are non-smooth and involve non-differentiable components, and they have long been recognized as slowly converging methods. However, Part I showed that the rate of convergence becomes linear for stochastic optimization problems, with the error iterate converging at an exponential rate α^i to within an O(μ)-neighborhood of the optimizer, for some α ∈ (0,1) and small step-size μ. This conclusion was established under weaker assumptions than those in the prior literature; moreover, several important problems (such as LASSO, SVM, and Total Variation) were shown to satisfy these weaker assumptions automatically, even though they do not satisfy the conditions previously used in the literature. These results revealed that subgradient learning methods have more favorable behavior than originally thought when used to enable continuous adaptation and learning. The results of Part I were exclusive to single-agent adaptation. The purpose of the current Part II is to examine the implications of these discoveries when a collection of networked agents employs subgradient learning as its cooperative mechanism. The analysis will show that, despite the coupled dynamics that arise in a networked scenario, the agents are still able to attain linear convergence in the stochastic case; they are also able to reach agreement within O(μ) of the optimizer.
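For concreteness, the following is a minimal sketch of what such a cooperative stochastic subgradient recursion can look like in an adapt-then-combine (diffusion) form with a small constant step-size μ. The ring topology, the combination weights, the data model, and the regularized hinge (SVM-type) risk below are illustrative assumptions for the sketch and are not taken from the paper.

```python
# Sketch of multi-agent stochastic subgradient learning (adapt-then-combine form).
# All modeling choices here (topology, weights, data, risk) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

K, M = 10, 5          # number of agents, feature dimension
mu = 0.01             # small constant step-size (the mu in the O(mu) statement)
rho = 0.1             # l2-regularization weight (assumed)
w_true = rng.standard_normal(M)   # proxy model used only to generate data

# Doubly-stochastic combination matrix over a ring topology (assumption).
A = np.zeros((K, K))
for k in range(K):
    for l in (k - 1, k, (k + 1) % K):
        A[k, l % K] = 1.0 / 3.0

W = np.zeros((K, M))  # one iterate per agent


def subgradient(w, h, gamma):
    """Subgradient of rho/2*||w||^2 + max(0, 1 - gamma * h'w) at w."""
    g = rho * w
    if gamma * h @ w < 1.0:
        g -= gamma * h
    return g


for i in range(20000):
    # Adaptation: each agent takes a subgradient step using one streaming sample.
    H = rng.standard_normal((K, M))
    gamma = np.sign(H @ w_true + 0.1 * rng.standard_normal(K))
    psi = np.array([W[k] - mu * subgradient(W[k], H[k], gamma[k]) for k in range(K)])
    # Combination: each agent averages the intermediate iterates of its neighbors.
    W = A @ psi

# The agents should roughly agree with each other and hover in a small
# neighborhood of the risk minimizer (w_true is only a rough proxy for it).
print("disagreement across agents:", np.linalg.norm(W - W.mean(axis=0)))
print("distance of network average to proxy model:", np.linalg.norm(W.mean(axis=0) - w_true))
```

Shrinking μ in this sketch trades a smaller steady-state neighborhood for slower transient convergence, which is the trade-off quantified by the O(μ) result in the abstract.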

research · 11/24/2015
Performance Limits of Stochastic Sub-Gradient Learning, Part I: Single Agent Case
In this work and the supporting Part II, we examine the performance of s...

research · 03/14/2016
On the Influence of Momentum Acceleration on Online Learning
The article examines in some detail the convergence rate and mean-square...

research · 06/08/2020
Stochastic Optimization with Non-stationary Noise
We investigate stochastic optimization problems under relaxed assumption...

research · 08/10/2023
Byzantine-Robust Decentralized Stochastic Optimization with Stochastic Gradient Noise-Independent Learning Error
This paper studies Byzantine-robust stochastic optimization over a decen...

research · 05/30/2019
Convergence Analysis of Gradient-Based Learning with Non-Uniform Learning Rates in Non-Cooperative Multi-Agent Settings
Considering a class of gradient-based multi-agent learning algorithms in...

research · 09/20/2019
Regularized Diffusion Adaptation via Conjugate Smoothing
The purpose of this work is to develop and study a distributed strategy ...
