Multi-Context Models for Reasoning under Partial Knowledge: Generative Process and Inference Grammar

Arriving at complete probabilistic knowledge of a domain, i.e., learning how all of its variables interact, is a demanding task. In practice, an agent often possesses only partial knowledge of the domain and is nevertheless expected to give adequate answers to a variety of posed queries. That is, although precise answers to some queries cannot, in principle, be achieved, a range of plausible answers is attainable for each query given the available partial knowledge. In this paper, we propose the Multi-Context Model (MCM), a new graphical model for representing a state of partial knowledge about a domain. MCM occupies a middle ground between Probabilistic Logic, Bayesian Logic, and Probabilistic Graphical Models. For this model we discuss: (i) the dynamics of constructing a contradiction-free MCM, i.e., how to form partial beliefs about a domain in a gradual and probabilistically consistent way, and (ii) how to perform inference, i.e., how to evaluate a probability of interest involving some variables of the domain.
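
To make concrete the idea that partial knowledge yields a range of plausible answers rather than a point value, the sketch below gives a minimal, generic illustration: assuming only the two marginals P(A) and P(B) are known (the joint distribution is unspecified), it computes the classical Frechet bounds on P(A and B) and P(A or B). This is not the paper's MCM inference procedure, and the helper names (conjunction_bounds, disjunction_bounds) are illustrative, not from the paper.

# Minimal illustration of query answering under partial knowledge.
# Assumption: only the marginals P(A) and P(B) are known; the joint
# distribution is left unspecified. A query about the joint then
# admits an interval of plausible answers instead of a point value.
# (Generic Frechet bounds; not the MCM inference algorithm itself.)

def conjunction_bounds(p_a: float, p_b: float) -> tuple[float, float]:
    """Frechet bounds on P(A and B) given only P(A) and P(B)."""
    lower = max(0.0, p_a + p_b - 1.0)
    upper = min(p_a, p_b)
    return lower, upper

def disjunction_bounds(p_a: float, p_b: float) -> tuple[float, float]:
    """Frechet bounds on P(A or B) given only P(A) and P(B)."""
    lower = max(p_a, p_b)
    upper = min(1.0, p_a + p_b)
    return lower, upper

if __name__ == "__main__":
    p_a, p_b = 0.7, 0.4  # the agent's partial knowledge
    print("P(A and B) lies in", conjunction_bounds(p_a, p_b))  # (0.1, 0.4)
    print("P(A or B)  lies in", disjunction_bounds(p_a, p_b))  # (0.7, 1.0)

Any joint distribution consistent with the two given marginals produces query answers inside these intervals; narrowing the intervals requires adding knowledge, which is the role the constructed model plays.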
