Stratified Knowledge Bases as Interpretable Probabilistic Models (Extended Abstract)

11/18/2016
by Ondrej Kuzelka, et al.

In this paper, we advocate the use of stratified logical theories for representing probabilistic models. We argue that such encodings can be more interpretable than those obtained in existing frameworks such as Markov logic networks. Among other benefits, this allows domain experts to improve learned models by directly removing, adding, or modifying logical formulas.
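
To make the idea concrete, the toy sketch below (Python, with hypothetical names; a possibilistic-style preference over strata is assumed purely for illustration and is not the encoding or semantics defined in the paper) shows a stratified knowledge base as a priority-ordered list of formula strata, ranks possible worlds by the highest-priority stratum they violate, and hints at how an expert could edit the model by adding, removing, or reordering formulas directly.

```python
# Toy illustration (not the paper's formal semantics): a stratified knowledge
# base as an ordered list of strata, highest-priority stratum first. Formulas
# are plain Python predicates over a dict of propositional atoms; the
# preference criterion (avoid violating high-priority strata) is an assumption
# made for this sketch.
from itertools import product

ATOMS = ["bird", "penguin", "flies"]

stratified_kb = [
    # Highest priority: exceptions override defaults.
    [("penguins do not fly", lambda w: (not w["penguin"]) or (not w["flies"]))],
    # Lower priority: the generic default.
    [("birds fly", lambda w: (not w["bird"]) or w["flies"])],
]

def violation_rank(world):
    """Index of the highest-priority stratum the world violates;
    len(stratified_kb) means no stratum is violated (larger is better)."""
    for i, stratum in enumerate(stratified_kb):
        if any(not formula(world) for _, formula in stratum):
            return i
    return len(stratified_kb)

def most_plausible(evidence):
    """Among worlds consistent with the evidence, keep those whose worst
    violation sits in the lowest-priority stratum possible."""
    worlds = [dict(zip(ATOMS, values))
              for values in product([True, False], repeat=len(ATOMS))]
    worlds = [w for w in worlds if all(w[a] == v for a, v in evidence.items())]
    best = max(violation_rank(w) for w in worlds)
    return [w for w in worlds if violation_rank(w) == best]

# A domain expert can "edit the model" simply by adding, removing, or
# reordering formulas in stratified_kb.
print(most_plausible({"bird": True, "penguin": True}))
# -> [{'bird': True, 'penguin': True, 'flies': False}]
```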
