Decentralized Stochastic Gradient Langevin Dynamics and Hamiltonian Monte Carlo

07/01/2020
by Mert Gurbuzbalaban, et al.

Stochastic gradient Langevin dynamics (SGLD) and stochastic gradient Hamiltonian Monte Carlo (SGHMC) are two popular Markov chain Monte Carlo (MCMC) algorithms for Bayesian inference that can scale to large datasets, allowing one to sample from the posterior distribution of a machine learning (ML) model given the input data and the prior distribution over the model parameters. However, these algorithms do not apply to the decentralized learning setting, in which a network of agents works collaboratively to learn the parameters of an ML model without sharing their individual data, due to privacy reasons or communication constraints. We study two algorithms, Decentralized SGLD (DE-SGLD) and Decentralized SGHMC (DE-SGHMC), which are adaptations of SGLD and SGHMC that allow scalable Bayesian inference in the decentralized setting. We show that when the posterior distribution is strongly log-concave, the iterates of these algorithms converge linearly to a neighborhood of the target distribution in the 2-Wasserstein metric. We illustrate the results on decentralized Bayesian linear regression and Bayesian logistic regression problems.
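The abstract describes per-agent updates that combine gossip averaging over a network with noisy local gradient steps. Below is a minimal sketch of such a DE-SGLD iteration for the decentralized Bayesian linear regression example, written under stated assumptions: a ring communication topology with a doubly stochastic mixing matrix W, a standard Gaussian prior split evenly across agents, full local gradients in place of minibatch estimates, and illustrative values for the step size and problem sizes. It is a sketch consistent with the abstract's description, not the authors' reference implementation.

import numpy as np

rng = np.random.default_rng(0)
n_agents, d, n_local = 5, 3, 50   # illustrative sizes, not from the paper
eta = 1e-3                        # step size (assumed)

# Synthetic local data for decentralized Bayesian linear regression:
# agent i observes y_i = A_i x* + noise and keeps (A_i, y_i) private.
x_true = rng.normal(size=d)
A = [rng.normal(size=(n_local, d)) for _ in range(n_agents)]
y = [A_i @ x_true + 0.1 * rng.normal(size=n_local) for A_i in A]

def grad_local(i, x):
    # Gradient of agent i's component f_i of the negative log-posterior:
    # local least-squares term plus an even 1/n share of a standard
    # Gaussian prior. A true SGLD step would use a minibatch estimate.
    return A[i].T @ (A[i] @ x - y[i]) + x / n_agents

# Doubly stochastic mixing matrix W for a ring topology: each agent
# averages its iterate with its two immediate neighbors.
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    for j in (i - 1, i, i + 1):
        W[i, j % n_agents] = 1.0 / 3.0

X = np.zeros((n_agents, d))  # row i holds agent i's current iterate
for k in range(5000):
    grads = np.stack([grad_local(i, X[i]) for i in range(n_agents)])
    noise = rng.normal(size=(n_agents, d))
    # DE-SGLD iteration: gossip averaging with neighbors, a local
    # gradient step, and injected Gaussian noise of scale sqrt(2*eta).
    X = W @ X - eta * grads + np.sqrt(2.0 * eta) * noise

print("final iterates (approximate posterior samples), one row per agent:")
print(X)

After burn-in, each agent's iterates are approximate posterior samples; averaging them yields a posterior-mean estimate. The abstract's linear convergence guarantee is in the 2-Wasserstein metric, up to a bias neighborhood whose size is controlled by the step size.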


