Learning to Reason: Leveraging Neural Networks for Approximate DNF Counting

by Ralph Abboud et al.

Weighted model counting has emerged as a prevalent approach for probabilistic inference. In this paper, we are interested in weighted DNF counting, or briefly, weighted #DNF, which admits a fully polynomial randomized approximation scheme (FPRAS), as shown by Karp and Luby. To date, the best algorithm for approximating #DNF is due to Karp, Luby and Madras. The drawback of this algorithm is that it runs in quadratic time and hence is not suitable for fast online reasoning. To overcome this, we propose a novel approach that combines approximate model counting with deep learning. We conduct detailed experiments to validate our approach, and show that our model learns and generalizes from #DNF instances with very high accuracy.
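For context, the Karp-Luby scheme referenced in the abstract estimates the number of satisfying assignments of a DNF formula by sampling from the multiset union of each clause's solutions and counting only "canonical" samples (those whose sampled clause is the first clause they satisfy). Below is a minimal sketch of the unweighted variant; the clause representation (a dict mapping variable index to required truth value) and all function names are illustrative, not from the paper.

```python
import random

def karp_luby_count(clauses, n_vars, n_samples=100_000, seed=0):
    """Monte Carlo estimate of the number of satisfying assignments
    of a DNF formula (unweighted Karp-Luby sketch).

    clauses: list of clauses, each a dict {var_index: bool} giving the
             literal polarities (a partial assignment).
    n_vars:  total number of Boolean variables.
    """
    rng = random.Random(seed)
    # Each clause C_i is satisfied by 2^(n - |C_i|) assignments:
    # the variables it does not mention are free.
    sizes = [2 ** (n_vars - len(c)) for c in clauses]
    total = sum(sizes)  # size of the multiset union of clause solutions
    hits = 0
    for _ in range(n_samples):
        # Sample a clause proportionally to its solution count, then a
        # uniform satisfying assignment of that clause.
        i = rng.choices(range(len(clauses)), weights=sizes)[0]
        assignment = dict(clauses[i])
        for v in range(n_vars):
            if v not in assignment:
                assignment[v] = rng.random() < 0.5
        # The sample is canonical iff i is the *first* clause satisfied.
        first = next(j for j, c in enumerate(clauses)
                     if all(assignment[v] == b for v, b in c.items()))
        if first == i:
            hits += 1
    # E[hits / n_samples] = |sat(F)| / total, so rescale by total.
    return total * hits / n_samples
```

For example, the formula x0 OR x1 over two variables has exactly 3 satisfying assignments, and `karp_luby_count([{0: True}, {1: True}], 2)` converges to 3. The quadratic cost the abstract refers to arises because achieving a fixed relative error requires a number of samples growing with the clause count, while each sample's canonicity check scans the clause list.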


Weighted First-Order Model Counting in the Two-Variable Fragment With Counting Quantifiers

It is known due to the work of Van den Broeck et al. [KR, 2014] that weig...

On the Approximability of Weighted Model Integration on DNF Structures

Weighted model counting admits an FPRAS on DNF structures. We study weig...

Approximate Weighted First-Order Model Counting: Exploiting Fast Approximate Model Counters and Symmetry

We study the symmetric weighted first-order model counting task and pres...

Fast Converging Anytime Model Counting

Model counting is a fundamental problem which has been influential in ma...

Parallel Weighted Model Counting with Tensor Networks

A promising new algebraic approach to weighted model counting makes use ...

On Hashing-Based Approaches to Approximate DNF-Counting

Propositional model counting is a fundamental problem in artificial inte...

Scaling up Probabilistic Inference in Linear and Non-Linear Hybrid Domains by Leveraging Knowledge Compilation

Weighted model integration (WMI) extends weighted model counting (WMC) i...
