Neural Particle Smoothing for Sampling from Conditional Sequence Models

04/28/2018
by Chu-Cheng Lin, et al.

We introduce neural particle smoothing, a sequential Monte Carlo method for sampling annotations of an input string from a given probability model. In contrast to conventional particle filtering algorithms, we train a proposal distribution that looks ahead to the end of the input string by means of a right-to-left LSTM. We demonstrate that this innovation can improve the quality of the sample. To motivate our formal choices, we explain how our neural model and neural sampler can be viewed as low-dimensional but nonlinear approximations to working with HMMs over very large state spaces.
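To make the idea of a lookahead proposal concrete, here is a minimal sketch of sequential importance sampling with a right-to-left lookahead term, written against a toy tagging model rather than the paper's actual neural model. The local potentials, the `lookahead` function (which stands in for the learned right-to-left LSTM), and all parameter values are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: particle smoothing with a lookahead proposal on a toy model.
# The model, the lookahead() stand-in for the right-to-left LSTM, and all
# constants below are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)

K = 3                               # number of latent tags
T = 6                               # input length
x = rng.integers(0, K, size=T)      # toy "input string"

# Toy conditional model p(y | x) given by local potentials phi_t(y_{t-1}, y_t, x_t).
trans = rng.random((K, K)) + 0.1    # transition potentials
emit  = rng.random((K, K)) + 0.1    # emission potentials emit[y, x]

def local_potential(y_prev, y, t):
    """Unnormalized score of extending a particle with tag y at position t."""
    tr = trans[y_prev, y] if t > 0 else 1.0
    return tr * emit[y, x[t]]

def lookahead(y, t):
    """Score of the unread suffix x[t+1:] given current tag y.
    The paper learns this with a right-to-left LSTM; here we use exact
    backward sums of the toy model as a stand-in."""
    beta = np.ones(K)
    for s in range(T - 1, t, -1):
        beta = trans @ (emit[:, x[s]] * beta)
    return beta[y]

def particle_smooth(n_particles=100):
    particles = np.zeros((n_particles, T), dtype=int)
    log_w = np.zeros(n_particles)
    for t in range(T):
        for i in range(n_particles):
            y_prev = particles[i, t - 1] if t > 0 else 0
            # Proposal q(y_t | ...) proportional to local potential * lookahead.
            scores = np.array([local_potential(y_prev, y, t) * lookahead(y, t)
                               for y in range(K)])
            q = scores / scores.sum()
            y = rng.choice(K, p=q)
            particles[i, t] = y
            # Importance weight: model potential over proposal probability.
            log_w[i] += np.log(local_potential(y_prev, y, t)) - np.log(q[y])
    w = np.exp(log_w - log_w.max())
    return particles, w / w.sum()

samples, weights = particle_smooth()
print(samples[np.argmax(weights)], weights.max())
```

Because the toy lookahead is exact, the proposal is close to the true posterior increments and the importance weights stay nearly flat; a plain filtering proposal (dropping the `lookahead` factor) would put most of the weight on a few particles, which is the degeneracy the learned lookahead is meant to mitigate.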


