
Lifted Relax, Compensate and then Recover: From Approximate to Exact Lifted Probabilistic Inference

by Guy Van den Broeck, et al.

We propose an approach to lifted approximate inference for first-order probabilistic models, such as Markov logic networks. It is based on performing exact lifted inference in a simplified first-order model, which is found by relaxing first-order constraints, and then compensating for the relaxation. These simplified models can be incrementally improved by carefully recovering constraints that have been relaxed, also at the first-order level. This leads to a spectrum of approximations, with lifted belief propagation on one end, and exact lifted inference on the other. We discuss how relaxation, compensation, and recovery can be performed, all at the first-order level, and show empirically that our approach substantially improves on the approximations of both propositional solvers and lifted belief propagation.
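To make the relax-compensate-recover idea concrete, here is a minimal propositional sketch on a toy chain model A - X - B. It relaxes a single equivalence constraint (splitting X into two copies) and compensates with unary potentials updated to a fixed point, in the spirit of the RCR framework of Choi and Darwiche that the abstract builds on. The factor values, variable names, and update loop are illustrative assumptions; the paper's contribution, performing these steps at the first-order (lifted) level, is not shown here.

```python
import numpy as np

# Toy chain model A - X - B over binary variables (illustrative numbers).
f = np.array([[2.0, 1.0], [1.0, 3.0]])  # pairwise factor f[a, x]
g = np.array([[1.0, 4.0], [2.0, 1.0]])  # pairwise factor g[x, b]

# Exact marginal of X: P(x) is proportional to (sum_a f[a,x]) * (sum_b g[x,b]).
exact = f.sum(axis=0) * g.sum(axis=1)
exact /= exact.sum()

# Relax: split X into copies X1 (used by f) and X2 (used by g) and drop
# the equivalence constraint X1 = X2, so the relaxed model decomposes
# into two independent pieces {A, X1} and {X2, B}.
# Compensate: add unary potentials t1 on X1 and t2 on X2, iterated until
# the two copies agree on their marginals.
t1 = np.ones(2)
t2 = np.ones(2)
for _ in range(50):
    p1 = t1 * f.sum(axis=0)
    p1 /= p1.sum()            # marginal of X1 in the relaxed model
    p2 = t2 * g.sum(axis=1)
    p2 /= p2.sum()            # marginal of X2 in the relaxed model
    t1, t2 = p2 / t2, p1 / t1  # compensation update

approx = t1 * f.sum(axis=0)
approx /= approx.sum()
print(approx, exact)  # on this tree-structured example they coincide
```

Because this toy model is a tree, the compensated relaxation already matches the exact marginal, mirroring the fact that relaxing every equivalence constraint yields a belief-propagation-quality approximation. The "recover" step, not needed here, would reinstate X1 = X2 to move along the spectrum toward exact inference.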
