Bayesian Policy Search for Stochastic Domains
AI planning can be cast as inference in probabilistic models, and probabilistic programming has been shown to be capable of policy search in partially observable domains. Prior work has introduced policy search through Markov chain Monte Carlo in deterministic domains and has adapted black-box variational inference to stochastic domains, though not in a strictly Bayesian sense. In this work, we cast policy search in stochastic domains as a Bayesian inference problem and provide a scheme for encoding such problems as nested probabilistic programs. We argue that probabilistic programs for policy search in stochastic domains should involve nested conditioning, and provide an adaptation of Lightweight Metropolis-Hastings (LMH) for robust inference in such programs. We apply the proposed scheme to stochastic domains and show that policies of comparable quality are learned, despite a simpler and more general inference algorithm. We believe that the proposed variant of LMH is novel and applicable to a wider class of probabilistic programs with nested conditioning.
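To make the encoding concrete, the following is a minimal, self-contained Python sketch of the general idea, not the paper's implementation: an outer Metropolis-Hastings chain over a policy parameter (standing in for the paper's LMH variant) whose acceptance ratio uses an inner, nested estimate of how well the candidate policy fares in a stochastic domain. The toy chain-world domain, the sigmoid policy, and all constants are illustrative assumptions rather than details taken from the paper.

import math
import random

random.seed(0)

def simulate_episode(theta):
    """Toy stochastic domain (assumed for illustration): a 1-D chain of 5 states.

    The policy chooses 'right' with probability sigmoid(theta); the move only
    succeeds with probability 0.8, so transitions are stochastic.
    Returns True if the goal state is reached within the horizon.
    """
    p_right = 1.0 / (1.0 + math.exp(-theta))
    state = 0
    for _ in range(10):  # horizon of 10 steps
        if random.random() < p_right and random.random() < 0.8:
            state += 1
        if state >= 4:
            return True
    return False

def inner_log_evidence(theta, n=50):
    """Nested query: estimate log p(success | theta) by simulation.

    In a nested probabilistic program this would itself be an inference
    (conditioning episodes on success); here it is approximated by a plain
    Monte Carlo estimate of the success probability.
    """
    successes = sum(simulate_episode(theta) for _ in range(n))
    return math.log(max(successes, 1) / n)  # guard against log(0)

def log_prior(theta):
    """Standard normal prior over the policy parameter."""
    return -0.5 * theta * theta

def lmh_policy_search(n_iters=2000, step=0.5):
    """Outer chain: single-site Metropolis-Hastings over theta.

    The acceptance ratio uses the estimated inner log-evidence; this is the
    point where the paper's LMH adaptation for nested conditioning would act.
    """
    theta = 0.0
    log_post = log_prior(theta) + inner_log_evidence(theta)
    samples = []
    for _ in range(n_iters):
        proposal = theta + random.gauss(0.0, step)
        log_post_prop = log_prior(proposal) + inner_log_evidence(proposal)
        if math.log(random.random()) < log_post_prop - log_post:
            theta, log_post = proposal, log_post_prop
        samples.append(theta)
    return samples

if __name__ == "__main__":
    samples = lmh_policy_search()
    burn = samples[len(samples) // 2:]
    print("posterior mean of theta:", sum(burn) / len(burn))

Because the inner evidence is only estimated, the outer chain here behaves like a pseudo-marginal sampler; the paper's contribution, as described in the abstract, is an LMH variant designed to handle exactly this kind of nested conditioning robustly.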