A unified view of likelihood ratio and reparameterization gradients and an optimal importance sampling scheme

10/14/2019
by Paavo Parmas, et al.

Reparameterization (RP) and likelihood ratio (LR) gradient estimators are used throughout machine and reinforcement learning; however, they are usually explained as simple mathematical tricks without providing any insight into their nature. We use a first principles approach to explain LR and RP, and show a connection between the two via the divergence theorem. The theory motivated us to derive optimal importance sampling schemes to reduce LR gradient variance. Our newly derived distributions have analytic probability densities and can be directly sampled from. The improvement for Gaussian target distributions was modest, but for other distributions such as a Beta distribution, our method could lead to arbitrarily large improvements, and was crucial to obtain competitive performance in evolution strategies experiments.
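To make the two estimators concrete, here is a minimal sketch (not from the paper; the function `f` and the Gaussian target are illustrative choices) of the LR (score-function) and RP (pathwise) estimators of the gradient of E[f(x)] with respect to the mean of a Gaussian. For f(x) = x², E[f] = μ² + σ², so the true gradient with respect to μ is 2μ, and both estimators should converge to it.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n = 1.5, 1.0, 200_000   # illustrative target N(mu, sigma^2)

f = lambda x: x ** 2    # test function; E[f] = mu^2 + sigma^2, so d/dmu E[f] = 2*mu
df = lambda x: 2 * x    # derivative of f, needed only by the RP estimator

# Likelihood ratio (score function) estimator:
# grad = E[ f(x) * d/dmu log p(x; mu, sigma) ], with score (x - mu) / sigma^2
x = rng.normal(mu, sigma, n)
lr_grad = np.mean(f(x) * (x - mu) / sigma ** 2)

# Reparameterization (pathwise) estimator:
# write x = mu + sigma * eps with eps ~ N(0, 1), then grad = E[ f'(mu + sigma*eps) ]
eps = rng.standard_normal(n)
rp_grad = np.mean(df(mu + sigma * eps))

print(lr_grad, rp_grad)  # both should be close to 2*mu = 3.0
```

Note the trade-off the abstract alludes to: LR only needs evaluations of f and the density's score, while RP needs f to be differentiable but typically has lower variance; the paper's importance sampling schemes target the variance of the LR estimator.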
