Bayesian Learning via Neural Schrödinger-Föllmer Flows

11/20/2021
by Francisco Vargas et al.

In this work we explore a new framework for approximate Bayesian inference on large datasets based on stochastic control. We advocate stochastic control as a finite-time, low-variance alternative to popular steady-state methods such as stochastic gradient Langevin dynamics (SGLD). Furthermore, we discuss and adapt the existing theoretical guarantees of this framework and establish connections to existing VI routines in SDE-based models.
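To make the contrast drawn in the abstract concrete, here is a minimal sketch (not the authors' implementation) of the two sampling strategies: an SGLD update, whose iterates approximate the posterior only in the long-time, steady-state limit, versus a finite-time controlled diffusion in the Schrödinger-Föllmer spirit, simulated over a fixed horizon t in [0, 1]. The names `sgld_step`, `finite_time_controlled_sample`, `grad_log_post`, and `drift` are illustrative placeholders; in the paper the drift would be a neural network learned via stochastic control.

```python
import numpy as np

def sgld_step(theta, grad_log_post, step_size, rng):
    """One SGLD update: a noisy gradient step on the log posterior.
    Iterates approximate the posterior only in the long-time (steady-state)
    limit, so the chain needs burn-in and many iterations."""
    noise = rng.normal(size=theta.shape)
    return theta + step_size * grad_log_post(theta) + np.sqrt(2.0 * step_size) * noise

def finite_time_controlled_sample(drift, dim, n_steps, rng):
    """Euler-Maruyama simulation of a controlled diffusion on t in [0, 1],
        dX_t = drift(X_t, t) dt + dB_t,   X_0 = 0,
    whose terminal state X_1 is treated as an approximate posterior sample.
    Here `drift` is a placeholder for the learned control."""
    dt = 1.0 / n_steps
    x = np.zeros(dim)
    for k in range(n_steps):
        t = k * dt
        x = x + drift(x, t) * dt + np.sqrt(dt) * rng.normal(size=dim)
    return x  # sample obtained after a fixed, finite number of steps
```

The point of the second routine is the "finite time" property the abstract emphasizes: the controlled diffusion terminates at t = 1 and needs no burn-in or convergence diagnostics, whereas SGLD must be run until it is close to stationarity.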

Related research:
05/05/2019  A Bayesian Variational Framework for Stochastic Optimization
08/16/2020  Variance reduction for dependent sequences with applications to Stochastic Gradient MCMC
11/19/2015  Stochastic gradient method with accelerated stochastic dynamics
08/15/2023  Natural Evolution Strategies as a Black Box Estimator for Stochastic Variational Inference
10/22/2018  Stochastic Gradient MCMC for State Space Models
11/30/2021  Optimal friction matrix for underdamped Langevin sampling
05/17/2021  Stochastic Control through Approximate Bayesian Input Inference
