Stochastic Logic Programs: Sampling, Inference and Applications

01/16/2013
by James Cussens, et al.

Algorithms for exact and approximate inference in stochastic logic programs (SLPs) are presented, based, respectively, on variable elimination and importance sampling. We then show how SLPs can be used to represent prior distributions for machine learning, using (i) logic programs and (ii) Bayes net structures as examples. Drawing on existing work in statistics, we apply the Metropolis-Hastings algorithm to construct a Markov chain that samples from the posterior distribution. A Prolog implementation of this is described. We also discuss the possibility of constructing explicit representations of the posterior.
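To make the Metropolis-Hastings idea in the abstract concrete, here is a minimal sketch in Python (not the authors' Prolog implementation). It assumes a toy one-parameter SLP whose single recursive clause fires with probability p, so a derivation with L recursive choices has likelihood p^L (1 - p); the observed derivation lengths, the flat prior, and the random-walk proposal width are all illustrative assumptions, not details from the paper.

```python
import math
import random

# Toy SLP with one labelled clause pair and parameter p:
#   p     : s([a|T]) :- s(T).
#   1 - p : s([]).
# A derivation making L recursive choices has likelihood p**L * (1 - p),
# i.e. the derivation lengths are geometrically distributed.

def loglik(p, lengths):
    """Log-likelihood of observed derivation lengths under parameter p."""
    return sum(L * math.log(p) + math.log(1 - p) for L in lengths)

def metropolis_hastings(lengths, steps=20000, seed=0):
    """Random-walk Metropolis-Hastings targeting the posterior over p,
    assuming a flat prior on (0, 1)."""
    rng = random.Random(seed)
    p = 0.5  # initial state of the chain
    samples = []
    for _ in range(steps):
        q = p + rng.gauss(0, 0.1)  # symmetric proposal, so no correction term
        if 0 < q < 1:
            # Flat prior: the acceptance ratio reduces to a likelihood ratio.
            if math.log(rng.random()) < loglik(q, lengths) - loglik(p, lengths):
                p = q
        samples.append(p)
    return samples

# Hypothetical observed derivation lengths (illustrative data only).
data = [3, 1, 4, 2, 0, 5, 2, 3]
samples = metropolis_hastings(data)
posterior_mean = sum(samples[5000:]) / len(samples[5000:])  # discard burn-in
```

With a flat prior this posterior is Beta(21, 9), whose mean is 0.7, so the chain's post-burn-in average should settle near that value; the paper's actual sampler operates over much richer spaces such as logic programs and Bayes net structures.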


research
03/12/2019

Computational Bayes-Predictive Stochastic Programming: Finite Sample Bounds

We study stochastic programming models where the stochastic variable is ...
research
05/23/2021

PASOCS: A Parallel Approximate Solver for Probabilistic Logic Programs under the Credal Semantics

The Credal semantics is a probabilistic extension of the answer set sema...
research
05/18/2018

Markov Chain Importance Sampling - a highly efficient estimator for MCMC

Markov chain algorithms are ubiquitous in machine learning and statistic...
research
10/06/2019

FIS-GAN: GAN with Flow-based Importance Sampling

Generative Adversarial Networks (GAN) training process, in most cases, a...
research
11/24/2016

Quantum Enhanced Inference in Markov Logic Networks

Markov logic networks (MLNs) reconcile two opposing schools in machine l...
research
06/04/2010

Variational Program Inference

We introduce a framework for representing a variety of interesting probl...
