Estimating Gradients for Discrete Random Variables by Sampling without Replacement

by Wouter Kool et al.

We derive an unbiased estimator for expectations over discrete random variables based on sampling without replacement, which reduces variance by avoiding duplicate samples. We show that our estimator can be derived as the Rao-Blackwellization of three different estimators. Combining our estimator with REINFORCE, we obtain a policy gradient estimator whose variance we reduce using a built-in control variate that requires no additional model evaluations. The resulting estimator is closely related to other gradient estimators. Experiments with a toy problem, a categorical Variational Auto-Encoder, and a structured prediction problem show that ours is the only estimator that is consistently among the best in both high- and low-entropy settings.
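To illustrate the core idea of unbiased estimation from samples drawn without replacement, here is a minimal sketch (not the paper's exact estimator) of a Horvitz-Thompson-style estimator for E[f(X)] over a small categorical distribution: each of k = 2 distinct sampled outcomes is reweighted by its exact inclusion probability, which makes the estimate unbiased while never evaluating f on a duplicate sample. All names here (`inclusion_prob`, `ht_estimate`) are illustrative, not from the paper.

```python
import random

def inclusion_prob(p, i):
    """Exact probability that item i appears when drawing k=2 items
    sequentially without replacement from categorical distribution p."""
    pi = p[i]  # case: i is drawn first
    for j in range(len(p)):
        if j != i:
            # case: j is drawn first, then i from the renormalized rest
            pi += p[j] * p[i] / (1.0 - p[j])
    return pi

def sample_wor2(p, rng):
    """Draw 2 distinct indices sequentially without replacement."""
    n = len(p)
    i = rng.choices(range(n), weights=p)[0]
    rest = [j for j in range(n) if j != i]
    j = rng.choices(rest, weights=[p[j] for j in rest])[0]
    return i, j

def ht_estimate(p, f, rng):
    """One Horvitz-Thompson estimate of E[f(X)]: sum over the sampled
    set, each term reweighted by its inclusion probability."""
    i, j = sample_wor2(p, rng)
    return (f(i) * p[i] / inclusion_prob(p, i)
            + f(j) * p[j] / inclusion_prob(p, j))

rng = random.Random(0)
p = [0.5, 0.3, 0.15, 0.05]
f = lambda i: float(i * i)

true_val = sum(pi * f(i) for i, pi in enumerate(p))  # exact E[f(X)] = 1.35
trials = 200_000
est = sum(ht_estimate(p, f, rng) for _ in range(trials)) / trials
```

Averaged over many trials, `est` converges to the exact expectation, since E[estimate] = Σ_i π_i · f(i) p(i) / π_i = Σ_i f(i) p(i). The paper's estimator builds on the same reweighting principle but handles general sample sizes efficiently and serves as a REINFORCE control variate.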

