
Limits of CDCL Learning via Merge Resolution

by Marc Vinyals et al.

In their seminal work, Pipatsrisawat and Darwiche (2009) and, independently, Atserias et al. showed that CDCL solvers can simulate resolution proofs with polynomial overhead. However, previous work does not address the tightness of this simulation, i.e., how large the overhead needs to be. In this paper, we address this question by focusing on an important property of proofs generated by CDCL solvers that employ standard learning schemes: the derivation of a learned clause has at least one inference in which a literal appears in both premises (a merge literal). Specifically, we show that proofs of this kind can simulate resolution proofs with at most a linear overhead, and that this overhead is sometimes necessary: there exist formulas with resolution proofs of linear length that require quadratic CDCL proofs.
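To make the notion of a merge literal concrete, here is a minimal sketch (not from the paper) of a single resolution step over clauses represented as sets of signed integers, DIMACS-style, where negation is arithmetic negation. The function name `resolve` and the representation are illustrative assumptions.

```python
def resolve(c, d, pivot):
    """Resolve clauses c and d on pivot (pivot in c, -pivot in d).

    Literals are nonzero ints; -x denotes the negation of x.
    Returns (resolvent, merge_literals), where merge_literals are the
    literals occurring in both premises besides the pivot pair.
    A step with a nonempty merge set is a "merge" inference.
    """
    assert pivot in c and -pivot in d
    rest_c = c - {pivot}
    rest_d = d - {-pivot}
    merge_literals = rest_c & rest_d  # literals shared by both premises
    return rest_c | rest_d, merge_literals

# (x ∨ y) and (¬x ∨ y ∨ z) resolved on x share the literal y,
# so this inference is a merge.
resolvent, merges = resolve(frozenset({1, 2}), frozenset({-1, 2, 3}), 1)
```

Here `resolvent` is `{2, 3}` and `merges` is `{2}`; a derivation counts as the kind studied in the paper when at least one of its inferences has a nonempty merge set.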



