Stochastic gradient method with accelerated stochastic dynamics

11/19/2015
by Masayuki Ohzeki, et al.

In this paper, we propose a novel technique for implementing stochastic gradient methods, which are well suited to learning from large datasets, through accelerated stochastic dynamics. A stochastic gradient method relies on mini-batch learning to reduce the computational cost when the amount of data is large. The stochasticity of the gradient can be mitigated by injecting Gaussian noise, which yields the stochastic gradient Langevin method; this method can be used for Bayesian posterior sampling. However, the performance of the stochastic gradient Langevin method depends on the mixing rate of the underlying stochastic dynamics. In this study, we propose violating the detailed balance condition to enhance the mixing rate. Recent studies have revealed that violating the detailed balance condition accelerates convergence to the stationary state and reduces the correlation time between successive samples. We implement this violation of the detailed balance condition in the stochastic gradient Langevin method and test our method on a simple model to demonstrate its performance.
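As a concrete illustration of the kind of update the abstract describes, below is a minimal Python sketch: stochastic gradient Langevin sampling with an extra irreversible drift built from a constant skew-symmetric matrix S, which breaks detailed balance while leaving the Gibbs distribution exp(-U) stationary (for constant skew-symmetric S, the divergence of S @ grad U vanishes). The toy 2-D Gaussian model and the values of gamma, eps, and batch_size are illustrative assumptions, not the paper's exact algorithm or benchmark.

import numpy as np

rng = np.random.default_rng(0)

# Toy Bayesian problem (an illustrative assumption, not the paper's
# benchmark): data drawn from a 2-D Gaussian with unknown mean theta,
# unit covariance, and a flat prior.
N = 1000
true_theta = np.array([1.0, -0.5])
data = true_theta + rng.normal(size=(N, 2))

def minibatch_grad(theta, batch):
    """Mini-batch estimate of grad U(theta), the gradient of the
    negative log posterior, rescaled by N / batch size."""
    return len(data) / len(batch) * np.sum(theta - batch, axis=0)

# Constant skew-symmetric matrix: the extra drift gamma * S @ grad
# violates detailed balance but preserves the stationary distribution.
S = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
gamma = 1.0          # strength of the irreversible drift (tunable)
eps = 1e-4           # step size
batch_size = 32
theta = np.zeros(2)
samples = []

for t in range(20000):
    batch = data[rng.choice(N, batch_size, replace=False)]
    g = minibatch_grad(theta, batch)
    drift = -(np.eye(2) + gamma * S) @ g
    noise = np.sqrt(2.0 * eps) * rng.normal(size=2)  # injected Gaussian noise
    theta = theta + eps * drift + noise
    samples.append(theta.copy())

print("posterior mean estimate:", np.mean(samples[5000:], axis=0))

Setting gamma = 0 recovers plain stochastic gradient Langevin dynamics, so the effect of breaking detailed balance can be gauged by comparing the autocorrelation times of the two chains.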


Related research

06/07/2018 · Scalable Natural Gradient Langevin Dynamics in Practice
Stochastic Gradient Langevin Dynamics (SGLD) is a sampling scheme for Ba...

03/09/2015 · Mathematical understanding of detailed balance condition violation and its application to Langevin dynamics
We develop an efficient sampling method by simulating Langevin dynamics ...

07/14/2018 · Generalization in quasi-periodic environments
By and large the behavior of stochastic gradient is regarded as a challe...

06/27/2012 · Bayesian Posterior Sampling via Stochastic Gradient Fisher Scoring
In this paper we address the following question: Can we approximately sa...

11/20/2021 · Bayesian Learning via Neural Schrödinger-Föllmer Flows
In this work we explore a new framework for approximate Bayesian inferen...

08/21/2021 · Incrementally Stochastic and Accelerated Gradient Information mixed Optimization for Manipulator Motion Planning
This paper introduces a novel motion planning algorithm, incrementally s...

05/29/2019 · Replica-exchange Nosé-Hoover dynamics for Bayesian learning on large datasets
In this paper, we propose a new sampler for Bayesian learning that can e...
