Learning Branching Heuristics for Propositional Model Counting
Propositional model counting, or #SAT, is the problem of computing the number of satisfying assignments of a Boolean formula; many discrete probabilistic inference problems can be translated into model counting problems and solved by #SAT solvers. Generic "exact" #SAT solvers, however, often do not scale to industrial-level instances. In this paper, we present Neuro#, an approach for learning branching heuristics for exact #SAT solvers via evolution strategies (ES) to reduce the number of branching steps the solver takes to solve an instance. We show experimentally that our approach not only reduces the step count on similarly distributed held-out instances but also generalizes to much larger instances from the same problem family. The gap between the learned and the vanilla solver on larger instances is sometimes so wide that the learned solver overcomes the runtime overhead of querying the model and beats the vanilla solver in wall-clock time by orders of magnitude.
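The ES training loop the abstract alludes to can be sketched at a high level: perturb the heuristic's parameters with Gaussian noise, score each perturbation by the (negated) number of branching steps the solver takes, and update the parameters along the reward-weighted noise directions. Below is a minimal, hedged sketch of one such update; the function names (`es_update`, `toy_step_count`) and all hyperparameters are illustrative assumptions, not the paper's actual implementation, and a cheap quadratic stands in for the true solver step count.

```python
import numpy as np

def es_update(theta, step_count_fn, rng, sigma=0.1, lr=0.01, pop=50):
    """One evolution-strategies update (OpenAI-style ES sketch):
    sample `pop` Gaussian perturbations of `theta`, score each by the
    negated step count (fewer branching steps = higher reward), and
    move `theta` along the noise-weighted gradient estimate."""
    noise = rng.standard_normal((pop, theta.size))
    rewards = np.array([-step_count_fn(theta + sigma * n) for n in noise])
    # Normalize rewards for scale-invariant updates.
    rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    return theta + lr / (pop * sigma) * noise.T @ rewards

def toy_step_count(theta):
    """Toy stand-in for 'branching steps taken by the solver under
    heuristic parameters theta': a quadratic minimized at theta = 3."""
    return float(np.sum((theta - 3.0) ** 2))

rng = np.random.default_rng(0)
theta = np.zeros(4)
for _ in range(200):
    theta = es_update(theta, toy_step_count, rng)
```

In the real setting, `step_count_fn` would run the instrumented #SAT solver with the parameterized branching heuristic and return its branching-step count, which is why ES is attractive here: it needs only this black-box scalar, not gradients through the solver.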