
Stochastic Gradient Hamiltonian Monte Carlo for Non-Convex Learning in the Big Data Regime
Stochastic Gradient Hamiltonian Monte Carlo (SGHMC) is a momentum versio...

Parallel Stochastic Gradient Markov Chain Monte Carlo for Matrix Factorisation Models
For large matrix factorisation problems, we develop a distributed Markov...

Decentralized Stochastic Gradient Langevin Dynamics and Hamiltonian Monte Carlo
Stochastic gradient Langevin dynamics (SGLD) and stochastic gradient Ham...

Stochastic Gradient Langevin Dynamics Algorithms with Adaptive Drifts
Bayesian deep learning offers a principled way to address many issues co...

Quantum-Inspired Hamiltonian Monte Carlo for Bayesian Sampling
Hamiltonian Monte Carlo (HMC) is an efficient Bayesian sampling method t...

Stochastic Gradient Hamiltonian Monte Carlo
Hamiltonian Monte Carlo (HMC) sampling methods provide a mechanism for d...

Faster Hamiltonian Monte Carlo by Learning Leapfrog Scale
Hamiltonian Monte Carlo samplers have become standard algorithms for MCM...
Relativistic Monte Carlo
Hamiltonian Monte Carlo (HMC) is a popular Markov chain Monte Carlo (MCMC) algorithm that generates proposals for a Metropolis-Hastings algorithm by simulating the dynamics of a Hamiltonian system. However, HMC is sensitive to large time discretizations and performs poorly if there is a mismatch between the spatial geometry of the target distribution and the scales of the momentum distribution. In particular, the mass matrix of HMC is hard to tune well. To alleviate these problems we propose relativistic Hamiltonian Monte Carlo, a version of HMC based on relativistic dynamics that impose a maximum velocity on particles. We also derive stochastic gradient versions of the algorithm and show that the resulting algorithms bear interesting relationships to gradient clipping, RMSprop, Adagrad and Adam, popular optimisation methods in deep learning. Based on this, we develop relativistic stochastic gradient descent by taking the zero-temperature limit of relativistic stochastic gradient Hamiltonian Monte Carlo. In experiments we show that the relativistic algorithms perform better than classical Newtonian variants and Adam.
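The core idea above is that replacing the Newtonian kinetic energy with a relativistic one, K(p) = m c^2 sqrt(||p||^2 / (m^2 c^2) + 1), caps the particle speed at c, which tempers large gradients in the way gradient clipping does. The leapfrog trajectory for these dynamics can be sketched as follows; this is a minimal NumPy illustration under stated assumptions (the function name, step sizes, and the standard-Gaussian example target are my own, not the authors' implementation, and the Metropolis-Hastings accept step is omitted):

```python
import numpy as np

def relativistic_leapfrog(grad_U, q0, p0, eps=0.1, n_steps=20, m=1.0, c=1.0):
    """Leapfrog integrator for relativistic Hamiltonian dynamics.

    Kinetic energy K(p) = m c^2 sqrt(||p||^2 / (m^2 c^2) + 1) gives the
    velocity dq/dt = dK/dp = p / (m sqrt(||p||^2 / (m^2 c^2) + 1)),
    whose norm is strictly below c: the particle can never move faster
    than the chosen "speed of light", however large the gradient.
    """
    q, p = q0.astype(float), p0.astype(float)
    p = p - 0.5 * eps * grad_U(q)                       # half momentum step
    for step in range(n_steps):
        v = p / (m * np.sqrt(p @ p / (m * c) ** 2 + 1.0))
        q = q + eps * v                                 # full position step
        if step < n_steps - 1:
            p = p - eps * grad_U(q)                     # full momentum step
    p = p - 0.5 * eps * grad_U(q)                       # final half step
    return q, p

# Example target: standard Gaussian, U(q) = ||q||^2 / 2, grad_U(q) = q.
q_new, p_new = relativistic_leapfrog(lambda q: q,
                                     np.zeros(2), np.array([5.0, -3.0]))
```

Because the speed is bounded by c, the position can move at most eps * c per step regardless of how extreme the initial momentum or the gradients are; in the zero-temperature limit this velocity term is what yields the relativistic stochastic gradient descent mentioned in the abstract.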