
Taming Reasoning in Temporal Probabilistic Relational Models

11/16/2019 · by Marcel Gehrke, et al.

Evidence often grounds temporal probabilistic relational models over time, which makes reasoning infeasible. To counteract groundings over time and to keep reasoning polynomial by restoring a lifted representation, we present temporal approximate merging (TAMe), which incorporates (i) clustering for grouping submodels as well as (ii) statistical significance checks to test the fitness of the clustering outcome. In exchange for faster runtimes, TAMe introduces a bounded error that becomes negligible over time. Empirical results show that TAMe significantly improves the runtime performance of inference, while keeping errors small.
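The abstract names the two ingredients of TAMe but does not spell out concrete choices, so the following is only a rough sketch of how such a merge step could look: submodel parameter vectors are clustered, a significance check accepts or rejects the grouping, and each accepted group is merged into one representative submodel. The use of k-means, a per-dimension one-way ANOVA as the fitness check, mean-based merging, and the `merge_submodels` helper are all illustrative assumptions, not the algorithm from the paper.

```python
# Illustrative sketch of (i) clustering submodels and (ii) checking the
# clustering outcome before merging.  All concrete choices here are
# assumptions made for the example, not TAMe's actual implementation.
import numpy as np
from scipy.cluster.vq import kmeans2
from scipy.stats import f_oneway


def merge_submodels(params, k=2, alpha=0.05):
    """Group submodel parameter vectors and merge each accepted group.

    params : (n_submodels, n_params) array, one row of potentials per submodel.
    Returns a list of (member_indices, merged_parameters) pairs, where the
    merged parameters are simply the group mean (an assumption).
    """
    params = np.asarray(params, dtype=float)
    np.random.seed(0)                      # deterministic clustering for the demo
    _, labels = kmeans2(params, k, minit="++")
    groups = [np.where(labels == c)[0] for c in range(k)]
    groups = [g for g in groups if len(g) > 0]

    # Fitness check (assumption): accept the split only if some parameter
    # dimension separates the clusters significantly; otherwise fall back to
    # a single group, i.e. merge all submodels together.
    accept = False
    if len(groups) > 1 and all(len(g) > 1 for g in groups):
        for d in range(params.shape[1]):
            _, p = f_oneway(*[params[g, d] for g in groups])
            if p < alpha:
                accept = True
                break
    if not accept:
        groups = [np.arange(len(params))]

    return [(g, params[g].mean(axis=0)) for g in groups]


if __name__ == "__main__":
    # Six submodels whose potentials fall into two recognisable groups.
    potentials = np.array([
        [0.10, 0.90], [0.12, 0.88], [0.11, 0.89],   # group A
        [0.70, 0.30], [0.72, 0.28], [0.69, 0.31],   # group B
    ])
    for members, merged in merge_submodels(potentials, k=2):
        print("submodels", members.tolist(), "-> merged parameters", merged)
```

Merging a group into its mean is only one plausible way to restore a lifted representation; the bounded-error guarantee mentioned in the abstract would depend on the actual merging rule used in the paper.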


Related research

03/15/2012 · Probabilistic Similarity Logic
Many machine learning applications require the ability to learn from and...

01/16/2013 · Probabilistic Arc Consistency: A Connection between Constraint Reasoning and Probabilistic Reasoning
We document a connection between constraint reasoning and probabilistic ...

10/11/2019 · Rk-means: Fast Clustering for Relational Data
Conventional machine learning algorithms cannot be applied until a data ...

01/16/2014 · Planning with Noisy Probabilistic Relational Rules
Noisy probabilistic relational rules are a promising world model represe...

07/02/2018 · Answering Hindsight Queries with Lifted Dynamic Junction Trees
The lifted dynamic junction tree algorithm (LDJT) efficiently answers fi...

02/22/2022 · Relational Causal Models with Cycles: Representation and Reasoning
Causal reasoning in relational domains is fundamental to studying real-w...

07/20/2011 · Towards Completely Lifted Search-based Probabilistic Inference
The promise of lifted probabilistic inference is to carry out probabilis...