A unified view of likelihood ratio and reparameterization gradients and an optimal importance sampling scheme

by Paavo Parmas et al.

Reparameterization (RP) and likelihood ratio (LR) gradient estimators are used throughout machine and reinforcement learning; however, they are usually explained as simple mathematical tricks, without any insight into their nature. We use a first-principles approach to explain LR and RP, and show a connection between the two via the divergence theorem. The theory motivated us to derive optimal importance sampling schemes to reduce LR gradient variance. Our newly derived distributions have analytic probability densities and can be directly sampled from. The improvement for Gaussian target distributions was modest, but for other distributions, such as a Beta distribution, our method could lead to arbitrarily large improvements, and it was crucial for obtaining competitive performance in evolution strategies experiments.
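To make the two estimators concrete, here is a minimal sketch (my own illustration, not the paper's code) comparing LR and RP estimates of the gradient d/dθ E[f(x)] for x ~ N(θ, σ²), using the simple test function f(x) = x², whose true gradient is 2θ:

```python
import numpy as np

rng = np.random.default_rng(0)
theta, sigma, n = 1.5, 1.0, 200_000

eps = rng.standard_normal(n)
x = theta + sigma * eps   # reparameterized samples x = theta + sigma * eps

# LR (score-function) estimator: f(x) * d/dtheta log N(x; theta, sigma^2),
# where the score is (x - theta) / sigma^2 for a Gaussian mean parameter.
lr_grad = np.mean(x**2 * (x - theta) / sigma**2)

# RP (pathwise) estimator: differentiate through the sample path,
# f'(x) * dx/dtheta with dx/dtheta = 1.
rp_grad = np.mean(2 * x)

print(lr_grad, rp_grad)   # both approach the true gradient 2 * theta = 3.0
```

Note that the RP estimate here has much lower variance than the LR estimate; reducing LR variance via importance sampling is exactly the gap the paper's scheme targets.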


