GFlowNets and variational inference

10/02/2022
by Nikolay Malkin et al.

This paper builds bridges between two families of probabilistic algorithms: (hierarchical) variational inference (VI), which is typically used to model distributions over continuous spaces, and generative flow networks (GFlowNets), which have been used for distributions over discrete structures such as graphs. We demonstrate that, in certain cases, VI algorithms are equivalent to special cases of GFlowNets in the sense of equality of expected gradients of their learning objectives. We then point out the differences between the two families and show how these differences emerge experimentally. Notably, GFlowNets, which borrow ideas from reinforcement learning, are more amenable than VI to off-policy training without the cost of high gradient variance induced by importance sampling. We argue that this property of GFlowNets can provide advantages for capturing diversity in multimodal target distributions.
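To make the off-policy claim concrete, below is a minimal, hypothetical sketch (not the paper's code) of training a GFlowNet with a trajectory-balance-style loss on a toy problem, assuming PyTorch. The binary-string environment, the reward function, the epsilon-mixed behavior policy, the network sizes, and all hyperparameters are assumptions chosen for illustration. The key point it illustrates is that the loss is computed from the model's own log-probabilities on whatever trajectories were sampled, with no importance-sampling correction.

import torch

K = 4  # length of the binary string / number of construction steps

# Forward policy P_F(a | s): input is the partial string plus a one-hot step
# index; log_Z is a learned scalar, as in the trajectory-balance parameterization.
policy = torch.nn.Sequential(
    torch.nn.Linear(2 * K, 32), torch.nn.ReLU(), torch.nn.Linear(32, 2)
)
log_Z = torch.nn.Parameter(torch.zeros(()))
opt = torch.optim.Adam(list(policy.parameters()) + [log_Z], lr=1e-2)

def log_reward(x):
    # Hypothetical multimodal log-reward: two modes, the all-zeros and all-ones strings.
    n_ones = x.sum()
    return torch.where((n_ones == 0) | (n_ones == K), torch.tensor(2.0), torch.tensor(-2.0))

def encode(bits, t):
    pos = torch.zeros(K)
    pos[t] = 1.0
    return torch.cat([bits, pos])

def sample_trajectory(eps=0.0):
    # Roll out a behavior policy: a mixture of the forward policy and a uniform
    # policy (eps > 0 gives off-policy exploration). The loss below still uses
    # the model's own log-probabilities, with no importance weights.
    bits = torch.zeros(K)
    log_pf = torch.zeros(())
    for t in range(K):
        logits = policy(encode(bits, t))
        probs = torch.softmax(logits, -1)
        behavior = (1 - eps) * probs + eps * 0.5
        a = torch.multinomial(behavior, 1).item()
        log_pf = log_pf + torch.log_softmax(logits, -1)[a]
        bits = bits.clone()
        bits[t] = float(a)
    return bits, log_pf

for step in range(2000):
    x, log_pf = sample_trajectory(eps=0.1)
    # Trajectory balance: (log Z + log P_F(tau) - log P_B(tau | x) - log R(x))^2.
    # Each string here has a unique construction order, so log P_B = 0.
    loss = (log_Z + log_pf - log_reward(x)) ** 2
    opt.zero_grad()
    loss.backward()
    opt.step()

In a comparable importance-weighted VI setup, sampling from a policy other than the model would require reweighting the objective, which is the source of the gradient-variance issue the abstract contrasts with GFlowNet training.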
