A unified view of likelihood ratio and reparameterization gradients

05/31/2021
by   Paavo Parmas, et al.

Reparameterization (RP) and likelihood ratio (LR) gradient estimators are used to estimate gradients of expectations throughout machine learning and reinforcement learning; however, they are usually explained as simple mathematical tricks, with no insight into their nature. We use a first-principles approach to explain that LR and RP are alternative methods of keeping track of the movement of probability mass, and that the two are connected via the divergence theorem. Moreover, we show that the space of all possible estimators combining LR and RP can be completely parameterized by a flow field u(x) and an importance sampling distribution q(x). We prove that there cannot exist a single-sample estimator of this type outside our characterized space, thus clarifying where we should be searching for better Monte Carlo gradient estimators.
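The two estimators the abstract contrasts can be illustrated on a toy problem. The sketch below (not from the paper; a minimal illustration, assuming a Gaussian x ~ N(theta, 1) and objective f(x) = x²) computes the gradient of E[f(x)] with respect to theta both ways: LR weights the function value by the score ∂log p(x; θ)/∂θ, while RP differentiates through the sampling path x = θ + ε.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 1.5           # distribution parameter: x ~ N(theta, 1)
f = lambda x: x**2    # objective; true gradient of E[f(x)] w.r.t. theta is 2*theta

n = 200_000
eps = rng.standard_normal(n)
x = theta + eps       # reparameterization: x = theta + eps, eps ~ N(0, 1)

# Likelihood ratio (score function) estimator:
# grad ~ mean( f(x) * d/dtheta log p(x; theta) ) = mean( f(x) * (x - theta) )
lr_grad = np.mean(f(x) * (x - theta))

# Reparameterization estimator: differentiate through the sample path,
# grad ~ mean( df/dx at x = theta + eps ) = mean( 2 * x )
rp_grad = np.mean(2 * x)

print(lr_grad, rp_grad)  # both estimates concentrate around 2*theta = 3.0
```

Both estimators are unbiased for the same gradient, but with very different variances here; the paper's contribution is characterizing the full space of such estimators via a flow field u(x) and an importance sampling distribution q(x).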


Related research

- 10/14/2019 — A unified view of likelihood ratio and reparameterization gradients and an optimal importance sampling scheme
- 02/22/2016 — Variational inference for Monte Carlo objectives
- 06/25/2019 — Monte Carlo Gradient Estimation in Machine Learning
- 01/31/2019 — New Tricks for Estimating Gradients of Expectations
- 04/03/2018 — Renewal Monte Carlo: Renewal theory based reinforcement learning
- 02/14/2018 — DiCE: The Infinitely Differentiable Monte-Carlo Estimator
- 11/05/2020 — Harnessing Distribution Ratio Estimators for Learning Agents with Quality and Diversity