- Decentralized Riemannian Gradient Descent on the Stiefel Manifold: We consider a distributed non-convex optimization where a network of age...
- A Decentralized Approach to Bayesian Learning: Motivated by decentralized approaches to machine learning, we propose a ...
- An improved convergence analysis for decentralized online stochastic non-convex optimization: In this paper, we study decentralized online stochastic non-convex optim...
- A fast randomized incremental gradient method for decentralized non-convex optimization: We study decentralized non-convex finite-sum minimization problems descr...
- Consensus-based Optimization on the Sphere II: Convergence to Global Minimizers and Machine Learning: We present the implementation of a new stochastic Kuramoto-Vicsek-type m...
- On linear convergence of two decentralized algorithms: Decentralized algorithms solve multi-agent problems over a connected net...
- Finite-Sample Analyses for Fully Decentralized Multi-Agent Reinforcement Learning: Despite the increasing interest in multi-agent reinforcement learning (M...
On the Convergence of Consensus Algorithms with Markovian Noise and Gradient Bias
This paper presents a finite-time convergence analysis for a decentralized stochastic approximation (SA) scheme that generalizes several algorithms for decentralized machine learning and multi-agent reinforcement learning. Our proof technique separates the iterates into their consensual parts and the consensus error: the consensus error is bounded in terms of the stationarity of the consensual part, while the updates of the consensual part are analyzed as a perturbed SA scheme. Under Markovian noise and time-varying communication graphs, the decentralized SA scheme attains an expected convergence rate of O(log T / √T), where T is the iteration count, measured in squared gradient norms for nonlinear SA with a smooth but non-convex cost function. This rate is comparable to the best known performance of SA in the centralized setting with a non-convex potential function.
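To make the scheme concrete, here is a minimal sketch of a decentralized SA update of the kind the abstract describes: each agent mixes its iterate with its neighbors' iterates through a gossip matrix and then takes a local stochastic-gradient step. All specifics below are illustrative assumptions, not the paper's setup: a fixed ring graph with doubly stochastic weights (the paper allows time-varying graphs), i.i.d. noise (the paper allows Markovian noise), and hypothetical quadratic local costs f_i(x) = ½‖x − b_i‖².

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim, T = 4, 3, 2000

# Hypothetical local costs f_i(x) = 0.5 * ||x - b_i||^2, so the
# global minimizer of (1/n) * sum_i f_i is the mean of the b_i.
b = rng.normal(size=(n_agents, dim))
x = np.zeros((n_agents, dim))

# Fixed doubly stochastic mixing matrix for a 4-agent ring
# (equal weight on self and the two neighbors).
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 1 / 3
    W[i, (i - 1) % n_agents] = 1 / 3
    W[i, (i + 1) % n_agents] = 1 / 3

for t in range(1, T + 1):
    noise = rng.normal(scale=0.1, size=x.shape)  # i.i.d. here, not Markovian
    grads = (x - b) + noise                      # noisy local gradients
    # Gossip averaging followed by a diminishing SA step:
    x = W @ x - (0.5 / np.sqrt(t)) * grads

# The two quantities tracked in the analysis: how far agents are
# from consensus, and how far the consensual part is from optimal.
consensus_error = np.linalg.norm(x - x.mean(axis=0))
opt_gap = np.linalg.norm(x.mean(axis=0) - b.mean(axis=0))
```

The separation in the proof mirrors the last two lines: `consensus_error` contracts through the spectral gap of `W`, while `x.mean(axis=0)` evolves as a centralized SA iterate perturbed by the consensus error.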