Induction of Interpretable Possibilistic Logic Theories from Relational Data

by Ondrej Kuzelka, et al.

The field of Statistical Relational Learning (SRL) is concerned with learning probabilistic models from relational data. Learned SRL models are typically represented as sets of weighted logical formulas, which makes them considerably more interpretable than models obtained with, e.g., neural networks. In practice, however, these models are often still difficult to interpret correctly, as they can contain many formulas that interact in non-trivial ways, and the weights do not always have an intuitive meaning. To address this, we propose a new SRL method which uses possibilistic logic to encode relational models. Learned models are then essentially stratified classical theories, which explicitly encode what can be derived with a given level of certainty. Compared to Markov Logic Networks (MLNs), our method is faster and produces considerably more interpretable models.
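To make the idea of a stratified theory concrete, the following is a minimal propositional sketch (an illustration, not the paper's relational method): a possibilistic theory is a set of (clause, certainty) pairs, and the necessity degree N(Q) of a query is the highest certainty level α at which the clauses of weight ≥ α classically entail Q. All names and the brute-force entailment check are illustrative assumptions.

```python
from itertools import product

# Illustrative sketch of possibilistic-logic inference (not the paper's code).
# A clause is a set of literals, e.g. {"-bird", "flies"} meaning ¬bird ∨ flies.
# A theory is a list of (clause, certainty) pairs with certainty in (0, 1].

def lit_true(assignment, lit):
    """Evaluate a literal under a truth assignment (dict atom -> bool)."""
    if lit.startswith("-"):
        return not assignment[lit[1:]]
    return assignment[lit]

def entails(clauses, query_lit):
    """Brute-force classical entailment: no model of the clauses falsifies the query."""
    atoms = sorted({l.lstrip("-") for c in clauses for l in c} | {query_lit.lstrip("-")})
    for values in product([False, True], repeat=len(atoms)):
        a = dict(zip(atoms, values))
        if all(any(lit_true(a, l) for l in c) for c in clauses):
            if not lit_true(a, query_lit):
                return False
    return True

def necessity(theory, query_lit):
    """N(query) = highest certainty level alpha whose alpha-cut entails the query.

    Since lower cuts contain more clauses, entailment is monotone in alpha,
    so we scan the certainty levels from highest to lowest.
    """
    for alpha in sorted({w for _, w in theory}, reverse=True):
        cut = [c for c, w in theory if w >= alpha]
        if entails(cut, query_lit):
            return alpha
    return 0.0

theory = [
    ({"bird"}, 1.0),            # certainly a bird
    ({"-bird", "flies"}, 0.8),  # birds fly, with certainty 0.8
]
print(necessity(theory, "flies"))  # → 0.8: "flies" is derivable only below level 1.0
```

This stratification is what makes such models readable: each conclusion comes tagged with the weakest certainty level needed to derive it, rather than an unnormalized MLN weight.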

Related research:

- Encoding Markov Logic Networks in Possibilistic Logic — Markov logic uses weighted formulas to compactly encode a probability di...
- Stratified Knowledge Bases as Interpretable Probabilistic Models (Extended Abstract) — In this paper, we advocate the use of stratified logical theories for re...
- Scalable Structure Learning for Probabilistic Soft Logic — Statistical relational frameworks such as Markov logic networks and prob...
- Relational Theories with Null Values and Non-Herbrand Stable Models — Generalized relational theories with null values in the sense of Reiter ...
- On the Semantic Relationship between Probabilistic Soft Logic and Markov Logic — Markov Logic Networks (MLN) and Probabilistic Soft Logic (PSL) are widel...
- Implicitly Learning to Reason in First-Order Logic — We consider the problem of answering queries about formulas of first-ord...