
ARSM: Augment-REINFORCE-Swap-Merge Estimator for Gradient Backpropagation Through Categorical Variables

05/04/2019
by Mingzhang Yin et al.

To address the challenge of backpropagating gradients through categorical variables, we propose the augment-REINFORCE-swap-merge (ARSM) gradient estimator, which is unbiased and has low variance. ARSM first uses variable augmentation, REINFORCE, and Rao-Blackwellization to re-express the gradient as an expectation under the Dirichlet distribution; it then uses variable swapping to construct differently expressed but equivalent expectations, and finally shares common random numbers between these expectations to achieve significant variance reduction. Experimental results show that ARSM closely matches the performance of the true gradient for optimization in univariate settings, outperforms existing estimators by a large margin when applied to categorical variational auto-encoders, and provides a "try-and-see self-critic" variance-reduction method for discrete-action policy gradients that removes the need to estimate baselines by generating a random number of pseudo actions and estimating their action-value functions.
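The augmentation step the abstract describes rests on a known identity: if pi ~ Dirichlet(1, ..., 1), then argmin_c pi_c * exp(-phi_c) is distributed as Categorical(softmax(phi)). The sketch below (function names are ours, not from the paper) checks this identity empirically with NumPy:

```python
import numpy as np

def softmax(phi):
    """Numerically stable softmax over a 1-D logit vector."""
    e = np.exp(phi - phi.max())
    return e / e.sum()

def sample_augmented(phi, n, rng):
    """Draw n samples z ~ Categorical(softmax(phi)) via the Dirichlet
    augmentation: pi ~ Dir(1,...,1), z = argmin_c pi_c * exp(-phi_c)."""
    C = len(phi)
    pis = rng.dirichlet(np.ones(C), size=n)          # shape (n, C)
    return np.argmin(pis * np.exp(-phi), axis=1)     # shape (n,)

rng = np.random.default_rng(0)
phi = np.array([0.5, -1.0, 2.0, 0.0])
n = 200_000
z = sample_augmented(phi, n, rng)
emp = np.bincount(z, minlength=len(phi)) / n

# The empirical frequencies agree with softmax(phi) up to Monte Carlo error.
print(np.round(emp, 3))
print(np.round(softmax(phi), 3))
```

The identity follows from the exponential-race construction of the Dirichlet: pi_c is proportional to an Exp(1) draw, so pi_c * exp(-phi_c) is a race among exponentials with rates exp(phi_c), and the winner's index is categorical with softmax probabilities.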
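The swap step can be illustrated by building every pseudo action z^{c<->m} from one shared Dirichlet draw, which is the common-random-numbers coupling the abstract refers to. A minimal sketch, with our own naming rather than code from the paper:

```python
import numpy as np

def pseudo_actions(phi, pi):
    """All swapped pseudo actions: Z[c, m] = argmin_k pi^{c<->m}_k * exp(-phi_k),
    where pi^{c<->m} is pi with coordinates c and m exchanged."""
    C = len(phi)
    Z = np.empty((C, C), dtype=int)
    for c in range(C):
        for m in range(C):
            p = pi.copy()
            p[c], p[m] = p[m], p[c]          # swap coordinates c and m
            Z[c, m] = np.argmin(p * np.exp(-phi))
    return Z

rng = np.random.default_rng(1)
phi = np.array([0.5, -1.0, 2.0, 0.0])
pi = rng.dirichlet(np.ones(len(phi)))        # one shared random draw
Z = pseudo_actions(phi, pi)
z_true = int(np.argmin(pi * np.exp(-phi)))   # the true sampled action

# By construction, Z is symmetric and its diagonal is the true action.
print(Z)
```

Only the distinct entries of Z need the downstream function evaluated, and how many distinct entries appear varies from draw to draw — the "random number of pseudo actions" mentioned in the abstract.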


Related research

03/13/2019 · Augment-Reinforce-Merge Policy Gradient for Binary Stochastic Policy
Due to the high variance of policy gradients, on-policy optimization alg...

02/14/2020 · Estimating Gradients for Discrete Random Variables by Sampling without Replacement
We derive an unbiased estimator for expectations over discrete random va...

07/30/2018 · ARM: Augment-REINFORCE-Merge Gradient for Discrete Latent Variable Models
To backpropagate the gradients through discrete stochastic layers, we en...

06/18/2020 · DisARM: An Antithetic Gradient Estimator for Binary Latent Variables
Training models with discrete latent variables is challenging due to the...

10/26/2021 · CARMS: Categorical-Antithetic-REINFORCE Multi-Sample Gradient Estimator
Accurately backpropagating the gradient through categorical variables is...

03/08/2022 · An Analysis of Measure-Valued Derivatives for Policy Gradients
Reinforcement learning methods for robotics are increasingly successful ...

03/10/2018 · Variance Networks: When Expectation Does Not Meet Your Expectations
In this paper, we propose variance networks, a new model that stores the...