
Adaptive Correlated Monte Carlo for Contextual Categorical Sequence Generation

by Xinjie Fan, et al.

Sequence generation models are commonly refined with reinforcement learning over user-defined metrics, but high gradient variance hinders the practical use of this approach. To stabilize it, we adapt a policy gradient estimator that evaluates a set of correlated Monte Carlo (MC) rollouts for variance control to the contextual generation of categorical sequences. Because of the correlation, the number of unique rollouts is random and adapts to model uncertainty; the rollouts naturally serve as baselines for one another and are therefore combined to effectively reduce gradient variance. We also demonstrate the use of correlated MC rollouts for binary-tree softmax models, which reduce the high generation cost in large-vocabulary settings by decomposing each categorical action into a sequence of binary actions. We evaluate our methods on neural program synthesis and image captioning, where they yield lower gradient variance and consistent improvements over related baselines.
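The idea that multiple rollouts "become baselines for each other" can be illustrated with a minimal sketch. The code below is not the paper's adaptive correlated estimator; it is a simpler leave-one-out variant of the same principle, where each rollout's reward is centered by the mean reward of the remaining rollouts. All function and variable names (`leave_one_out_pg`, `reward_fn`, `num_rollouts`) are hypothetical, introduced here for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def leave_one_out_pg(logits, reward_fn, num_rollouts=4):
    """Sketch: multi-rollout policy gradient with leave-one-out baselines.

    Each rollout's reward is baselined by the mean reward of the *other*
    rollouts, so the samples act as control variates for one another.
    This keeps the estimator unbiased while reducing its variance.
    """
    # Categorical policy over actions, parameterized by logits (softmax).
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()

    # Draw several MC rollouts (here: single-step actions for simplicity).
    actions = rng.choice(len(probs), size=num_rollouts, p=probs)
    rewards = np.array([reward_fn(a) for a in actions], dtype=float)

    grad = np.zeros_like(logits, dtype=float)
    for a, r in zip(actions, rewards):
        # Baseline from the remaining rollouts; excluding the current one
        # preserves unbiasedness of the REINFORCE estimator.
        baseline = (rewards.sum() - r) / (num_rollouts - 1)
        # Gradient of log softmax w.r.t. logits: one_hot(a) - probs.
        score = -probs.copy()
        score[a] += 1.0
        grad += (r - baseline) * score
    return grad / num_rollouts

# Toy reward that prefers action 2; the estimated gradient pushes
# probability mass toward that action on average.
grad = leave_one_out_pg(np.zeros(4), lambda a: 1.0 if a == 2 else 0.0)
```

Because each per-sample score vector `one_hot(a) - probs` sums to zero, the gradient estimate always sums to zero across logits, as a softmax gradient must. The paper's contribution goes further: its correlated rollouts make the number of *unique* samples adapt to model uncertainty, which this independent-sampling sketch does not capture.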



