Encoding Markov Logic Networks in Possibilistic Logic

by Ondrej Kuzelka, et al.

Markov logic uses weighted formulas to compactly encode a probability distribution over possible worlds. Despite the use of logical formulas, Markov logic networks (MLNs) can be difficult to interpret, due to the often counter-intuitive meaning of their weights. To address this issue, we propose a method to construct a possibilistic logic theory that exactly captures what can be derived from a given MLN using maximum a posteriori (MAP) inference. Unfortunately, the size of this theory is exponential in general. We therefore also propose two methods which can derive compact theories that still capture MAP inference, but only for specific types of evidence. These theories can be used, among other purposes, to make explicit the hidden assumptions underlying an MLN or to explain the predictions it makes.
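To make the setup concrete, here is a minimal sketch of MAP inference in a toy MLN: a world's log-potential is the sum of the weights of the ground formulas it satisfies, and MAP inference returns the highest-scoring world consistent with the evidence. The atoms, formulas, and weights below are purely illustrative assumptions, not taken from the paper, and the brute-force enumeration is only feasible for tiny examples.

```python
from itertools import product

# Hypothetical ground atoms for a one-person domain (illustrative only).
atoms = ["Smokes(A)", "Cancer(A)"]

# Each weighted formula is (weight, satisfaction test on a world dict).
formulas = [
    (1.5, lambda w: (not w["Smokes(A)"]) or w["Cancer(A)"]),  # Smokes(A) => Cancer(A)
    (0.8, lambda w: not w["Cancer(A)"]),                      # soft prior against Cancer(A)
]

def score(world):
    """Sum of weights of satisfied formulas: the world's log-potential."""
    return sum(wt for wt, sat in formulas if sat(world))

def map_world(evidence):
    """Brute-force MAP inference: best world consistent with the evidence."""
    best, best_score = None, float("-inf")
    for values in product([False, True], repeat=len(atoms)):
        world = dict(zip(atoms, values))
        if any(world[a] != v for a, v in evidence.items()):
            continue  # inconsistent with the evidence
        s = score(world)
        if s > best_score:
            best, best_score = world, s
    return best

print(map_world({"Smokes(A)": True}))
```

With evidence Smokes(A)=True, the implication's weight 1.5 outweighs the 0.8 prior against cancer, so the MAP world sets Cancer(A)=True; with Smokes(A)=False, both formulas can be satisfied at once and the MAP world sets Cancer(A)=False. A possibilistic logic encoding of the kind the paper constructs would reproduce exactly these MAP conclusions via a stratified theory.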




Related papers:

- Induction of Interpretable Possibilistic Logic Theories from Relational Data
- Logical Credal Networks
- A Delayed Column Generation Strategy for Exact k-Bounded MAP Inference in Markov Logic Networks
- Improving the Accuracy and Efficiency of MAP Inference for Markov Logic
- Stratified Knowledge Bases as Interpretable Probabilistic Models (Extended Abstract)
- Markov Logic Networks with Statistical Quantifiers
- Probabilistic Approximate Logic and its Implementation in the Logical Imagination Engine
