Markov Chain Lifting and Distributed ADMM

03/10/2017
by Guilherme França, et al.

The time needed for a finite Markov chain to converge to its steady state can be greatly reduced by a lifting operation, which creates a new Markov chain on an expanded state space. For a class of quadratic objectives, we show analogous behavior: a distributed ADMM algorithm can be seen as a lifting of the gradient descent algorithm. This provides deep insight into its faster convergence rate under optimal parameter tuning. We conjecture that this gain is always present, in contrast to the lifting of a Markov chain, which sometimes yields only a marginal speedup.
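To make the comparison concrete, here is a minimal numerical sketch (not the paper's lifting construction, and single-machine rather than distributed) contrasting the two algorithms the abstract relates: plain gradient descent and two-block ADMM, both minimizing a toy quadratic. All names, the splitting `f(x) + g(z)`, and the parameter choices are illustrative assumptions.

```python
import numpy as np

# Toy quadratic: minimize (1/2) x^T A x, whose unique minimizer is x* = 0.
# For ADMM we split A = P + Q and solve
#   minimize (1/2) x^T P x + (1/2) z^T Q z  subject to  x = z,
# which is an assumed splitting chosen only so both subproblems are linear solves.
rng = np.random.default_rng(0)
n = 5
M = rng.standard_normal((n, n))
A = M @ M.T + np.eye(n)          # symmetric positive definite
P = M @ M.T                      # Hessian of f (positive semidefinite)
Q = np.eye(n)                    # Hessian of g

def grad_descent(x0, steps, lr):
    """Plain gradient descent on (1/2) x^T A x; the gradient is A x."""
    x = x0.copy()
    for _ in range(steps):
        x -= lr * (A @ x)
    return x

def admm(x0, steps, rho):
    """Scaled-form two-block ADMM for f(x) + g(z) with constraint x = z."""
    x, z, u = x0.copy(), x0.copy(), np.zeros(n)
    for _ in range(steps):
        # x-update: argmin_x (1/2) x^T P x + (rho/2) ||x - z + u||^2
        x = np.linalg.solve(P + rho * np.eye(n), rho * (z - u))
        # z-update: argmin_z (1/2) z^T Q z + (rho/2) ||x - z + u||^2
        z = np.linalg.solve(Q + rho * np.eye(n), rho * (x + u))
        # dual (scaled multiplier) update
        u = u + x - z
    return x

lr = 1.0 / np.linalg.eigvalsh(A).max()   # safe step size for gradient descent
x_gd = grad_descent(np.ones(n), 300, lr)
x_admm = admm(np.ones(n), 300, rho=1.0)

print("||x_gd||   =", np.linalg.norm(x_gd))
print("||x_admm|| =", np.linalg.norm(x_admm))
```

Both iterates contract toward the minimizer `x* = 0`; the paper's point is that, analogously to Markov chain lifting, the ADMM iteration runs on an expanded state space (`x`, `z`, `u`) and, under optimal parameter tuning, converges faster than the gradient descent iteration it lifts.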


