Compactly Representing Uniform Interpolants for EUF using (conditional) DAGS

02/22/2020 ∙ by Silvio Ghilardi, et al. ∙ Free University of Bozen-Bolzano, University of New Mexico, Università degli Studi di Milano

The concept of a uniform interpolant of a quantifier-free formula with respect to a given list of symbols, while well-known in the logic literature, has so far gone largely unnoticed in the formal methods and automated reasoning community. This concept is precisely defined. Two algorithms for computing the uniform interpolant of a quantifier-free formula in EUF endowed with a list of symbols to be eliminated are proposed. The first algorithm is non-deterministic and generates a uniform interpolant expressed as a disjunction of conjunctions of literals, whereas the second algorithm gives a compact representation of a uniform interpolant as a conjunction of Horn clauses. Both algorithms exploit efficient dedicated DAG representations of terms. Correctness and completeness proofs are supplied, using arguments combining rewrite techniques with model-theoretic tools.


0.1 Introduction

The theory of equality over uninterpreted symbols, henceforth denoted by EUF, is one of the simplest theories and has found numerous applications in computer science, formal methods and logic. Starting with the works of Shostak [26] and Nelson and Oppen [23] in the early 1980s, some of the first algorithms were proposed in the context of developing approaches for combining decision procedures for quantifier-free theories, including freely constructed data structures and linear arithmetic over the rationals. EUF was first exploited for hardware verification of pipelined processors by Dill [5], and more widely thereafter in formal methods and verification using the model checking framework. With the popularity of SMT solvers, where EUF serves as a glue for combining solvers for different theories, numerous new graph-based algorithms have been proposed in the literature over the last two decades for checking unsatisfiability of a conjunction of equalities and disequalities of terms built using function symbols and constants.

In [22], the use of interpolants for automatic invariant generation was proposed, leading to a flurry of research activity on algorithms for generating interpolants for specific theories as well as their combinations. This new application is different from the role of interpolants in analyzing the proof theories of various logics, starting with the pioneering work of [11, 15, 25] (for a recent survey in the SMT area, see [3, 2]). Approaches like [22, 15, 25], however, assume access to a proof of the formula for which an interpolant is being generated. Given that there can in general be many interpolants, including infinitely many for some theories, little is known about which kinds of interpolants are effective for different applications, even though some research has been reported on the strength and quality of interpolants.

In this paper, a different approach is taken, motivated by the insight connecting interpolating theories with those admitting quantifier elimination, as advocated in [19]. In particular, the concept of a uniform interpolant (UI) defined by a formula is proposed, in the context of formal methods and verification, for EUF, which is well-known not to admit quantifier elimination. A uniform interpolant acts as a classical interpolant for any such that (as well as a reverse interpolant for an unsatisfiable pair ).111The third author recently learned from the first author that this concept has been used extensively in logic for decades [13, 24], to his surprise, since he had the erroneous impression that he had come up with the concept in 2012, which he presented in a series of talks [17, 18]. A uniform interpolant is defined for theories irrespective of whether they admit quantifier elimination; for theories admitting quantifier elimination, a uniform interpolant can be obtained by quantifier elimination, which can however be prohibitively expensive. A UI is shown to exist for EUF and to be unique. A related concept, that of a cover, is proposed in [14] (see also [8]).

Two different algorithms, with different characteristics, are proposed for generating UIs from a formula in EUF (with a list of symbols to be eliminated). They share a common subpart based on concepts used in the ground congruence closure procedure proposed in [16], which flattens the input and generates a canonical rewrite system on constants along with unique rules of the form , where is an uninterpreted symbol and the arguments are canonical forms of constants. Further, eliminated symbols are represented as a DAG to avoid any exponential blow-up. The first algorithm is non-deterministic: undecided equalities on constants are hypothesized to be true or false, generating a branch in each case, and the algorithm is applied recursively. It could also easily be formulated as an algorithm similar in spirit to the use of equality interpolants in the Nelson-Oppen combination framework, where different partitions of the constants are tried, each leading to a branch of the algorithm. New symbols are introduced along each branch to avoid exponential blow-up.

The second algorithm generalizes the concept of a DAG to that of a conditional DAG, in which subterms are replaced by new symbols under a conjunction of equality atoms, resulting in a compact and efficient representation. A fully or partially expanded form of a UI can be derived depending on its use in applications. Because of this compact representation, UIs can be generated in polynomial time for a large class of formulas.

The termination, correctness and completeness of both algorithms are proved using results in model theory about model completions; this relies on a basic result (Lemma 0.5.1 below) taken from [8].

Both our algorithms are simple, intuitive and easy to understand, in contrast to related algorithms in the literature. In fact, the algorithm from [8] requires full saturation in a version of the superposition calculus equipped with ad hoc settings, whereas the main merit of our second algorithm is to show that a very light form of completion is sufficient, thus simplifying the whole procedure and yielding better complexity results.222Although we feel that some improvement is possible, the termination argument in [8] gives a double exponential bound, whereas we have a simple exponential bound for both algorithms (with good chances of keeping the output polynomial in the case of the second algorithm). The algorithm from [14] requires some bug fixes (as pointed out in [8]) and the related completeness proof is still missing.

The use of uniform interpolants in model-checking safety problems for infinite state systems was already mentioned in [14] and further exploited in a recent research line on the verification of data-aware processes [7, 6, 9]. Model checkers need to explore the space of all reachable states of a system; a precise exploration (either forward, starting from a description of the initial states, or backward, starting from a description of the unsafe states) requires quantifier elimination. The latter is not always available or might have prohibitive complexity; in addition, it is usually preferable to overapproximate reachable states, both to avoid divergence and to speed up convergence. One well-established technique for computing overapproximations consists in extracting interpolants from spurious traces, see e.g. [22]; interpolants are used for various symbol elimination tasks in first-order settings [20, 21]. One possible advantage of uniform interpolants over ordinary interpolants is that they do not introduce overapproximations, so abstraction/refinement cycles are not needed when they are employed (the precise reason for this goes through the connection between uniform interpolants, model completeness and existentially closed structures; see [9] for a full account). In this sense, computing uniform interpolants has the same advantages and disadvantages as quantifier elimination, with two remarkable differences. The first difference is that uniform interpolants may be available also in theories not admitting quantifier elimination (EUF being the typical example); the second is that computing uniform interpolants may be tractable when the language is suitably restricted, e.g. to unary function symbols (this was already mentioned in [14]; see also Remark 0.3.2 below). The restriction to unary function symbols is sufficient in database-driven verification to encode primary and foreign keys [9]. It is also worth noticing that, precisely by using uniform interpolants for this restricted language, new decidability results have been achieved in [9] for interesting classes of infinite state systems. Notably, such results are also operationally mirrored in the MCMT [12] implementation since version 2.8.

The paper is structured as follows: in Section 0.2 we state the main problem, fix some notation, and discuss DAG representations and congruence closure. In Sections 0.3 and 0.4, we give two algorithms for computing uniform interpolants in EUF (correctness and completeness of these algorithms are proved in Section 0.5). The former algorithm is tableaux-shaped and produces the output in disjunctive normal form, whereas the latter is based on the manipulation of Horn clauses and gives the output in (compressed) conjunctive normal form. We believe that the two algorithms are in a sense complementary to each other, especially from the point of view of applications. Model checkers typically synthesize safety invariants as conjunctions of clauses, and in this sense they might best profit from the second algorithm; however, model checkers that dually represent sets of backward reachable states as disjunctions of cubes would better adopt the first algorithm. Non-deterministic manipulation of cubes is also required to match certain PSPACE lower bounds, as in the case of the SAS systems mentioned in [9]. On the other hand, regarding overall complexity, it seems easier to avoid exponential blow-ups in concrete examples by adopting the second algorithm.

0.2 Preliminaries

We adopt the usual first-order syntactic notions of signature, term, atom, (ground) formula, and so on; our signatures are always finite or countable and include equality. For simplicity, we only consider functional signatures, i.e. signatures whose only predicate symbol is equality. We compactly represent a tuple of variables as . The notation means that the term or the formula has free variables included in the tuple . Such a tuple is assumed to be formed by distinct variables; thus we underline that, when we write e.g. , we mean that the tuples are made of distinct variables that are also disjoint from each other. A formula is said to be universal (resp., existential) if it has the form (resp., ), where is quantifier-free. Formulae with no free variables are called sentences.

From the semantic side, we use the standard notion of a -structure : this is a pair formed by a set (the 'support set', indicated as ) and an interpretation function. The interpretation function maps -ary function symbols to -ary operations on (in particular, constant symbols are mapped to elements of ). A free-variable assignment on extends the interpretation function by mapping also variables to elements of ; the notion of truth of a formula in a -structure under a free-variable assignment is the standard one.

It may happen that we need to expand a signature with a fresh name for every : such an expanded signature is named and , by abuse of notation, is seen as a -structure itself by interpreting the name of as (the name of is directly indicated as for simplicity).

A -theory is a set of -sentences; a model of is a -structure where all sentences in are true. We use the standard notation to say that is true in all models of for every assignment to the variables occurring free in . We say that is -satisfiable iff there is a model of and an assignment to the variables occurring free in making true in .

0.2.1 Uniform Interpolants

Fix a theory and an existential formula ; call a residue of any quantifier-free formula belonging to the set of quantifier-free formulae

A quantifier-free formula is said to be a -uniform interpolant333In some literature [14, 8], uniform interpolants are called covers. (or, simply, a uniform interpolant, abbreviated UI) of iff and implies (modulo ) all the other formulae in . It is immediately seen that UIs are unique (modulo -equivalence). We say that a theory has uniform quantifier-free interpolation iff every existential formula has a UI.

It is clear that if has uniform quantifier-free interpolation, then it has ordinary quantifier-free interpolation [4], in the sense that if we have (for quantifier-free formulae ), then there is a quantifier-free formula such that and . In fact, if has uniform quantifier-free interpolation, then the interpolant is independent of (the same can be used as an interpolant for all entailments , varying ). Uniform quantifier-free interpolation has a direct connection to an important notion from classical model theory, namely model completeness (see [8] for more information).

0.2.2 Problem Statement

In this paper we address the problem of computing UIs for the case in which is the pure identity theory in a functional signature ; this theory is called EUF (or just UF in the SMT-LIB2 terminology). We shall provide two different algorithms for that (while proving correctness and completeness of these algorithms, we simultaneously also show that UIs exist in EUF). The first algorithm computes a UI in disjunctive normal form, whereas the second supplies a UI in conjunctive normal form. Both algorithms use a suitable DAG-compressed representation of formulae.

We use the following notation throughout the paper. Since it is easily seen that UIs commute with disjunctions, it is sufficient to compute UIs for primitive formulae, i.e. for formulae of the kind , where is a constraint, i.e. a conjunction of literals. We partition all the constant symbols from the input, as well as newly introduced symbols, into disjoint sets. We use the following conventions:

-

are the symbols to be eliminated, called variables,

-

are the symbols not to be eliminated, called parameters,

-

letters stand for both variables and parameters.

Variables are usually skolemized during the manipulations of our algorithms and proofs below, so that they have to be considered as fresh individual constants.

Remark 0.2.1.

UI computations eliminate symbols which are existentially quantified variables (or Skolemized constants). The elimination of function symbols can be reduced to the elimination of variables in the following way. Consider a formula , where is quantifier-free. Successively abstracting out functional terms, we get that is equivalent to a formula of the kind , where the are fresh variables, does not occur in , and is quantifier-free. The latter is semantically equivalent to , where is the conjunction of the component-wise equalities of the tuples and .

0.2.3 Flat Literals, DAGs and Congruence Closure

A flat literal is a literal of one of the following kinds

(1)

where are (not necessarily distinct) variables or constants. A formula is flat iff all literals occurring in it are flat; flat terms are terms that may occur in a flat literal (i.e. terms like those appearing in (1)).

We call a DAG-definition (or simply a DAG) any formula of the following form (let )

Thus, in fact provides an explicit definition of the in terms of the parameters . To such a DAG there is in fact associated the substitution recursively defined by the mapping

We may sometimes identify a DAG like the one above with its associated substitution . DAGs are commonly used to represent formulae and substitutions in compressed form: in fact, a formula like

(2)

is equivalent to ; however, the full unravelling of this equivalence causes an exponential blow-up. This is why we shall systematically prefer DAG representations like (2) to their uncompressed forms.
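Since the terms in this section are schematic, a small self-contained sketch can make the blow-up concrete. The following Python fragment (with illustrative names `unravel` and `size`, not taken from the paper) builds a chain of definitions d_i := f(d_{i-1}, d_{i-1}): the DAG has n definitions of constant size, while the fully unravelled term has size 2^{n+1} - 1.

```python
# A hypothetical illustration of why DAG-definitions are preferred: each
# defined symbol d_i is bound to a term over earlier symbols, and full
# unravelling doubles the term size at every level.

def unravel(defs, term):
    """Recursively expand defined symbols in `term` (a nested tuple)."""
    if isinstance(term, str):
        return unravel(defs, defs[term]) if term in defs else term
    return (term[0],) + tuple(unravel(defs, a) for a in term[1:])

def size(term):
    """Number of symbol occurrences in a fully expanded term."""
    if isinstance(term, str):
        return 1
    return 1 + sum(size(a) for a in term[1:])

# d_i := f(d_{i-1}, d_{i-1}); the DAG itself has only n + 1 entries.
n = 10
defs = {"d0": "y"}
for i in range(1, n + 1):
    defs[f"d{i}"] = ("f", f"d{i-1}", f"d{i-1}")

print(size(unravel(defs, f"d{n}")))  # 2^(n+1) - 1 = 2047
```

The DAG stores 11 constant-size definitions, while the uncompressed term has 2047 symbol occurrences; the gap widens exponentially with n.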

As stated above, our main aim is to compute the UI of a primitive formula ; using trivial logical manipulations (with just linear complexity cost), it can easily be seen that the constraint can be assumed to be flat. To achieve this, it is sufficient to apply well-known congruence closure transformations: the reader is referred to [16] for a full account.
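As an illustration only (the exact congruence closure transformations are those of [16]), here is a minimal Python sketch of the flattening idea: nested ground terms are abstracted bottom-up with fresh constants so that every resulting equation has the flat shape f(c1, ..., cn) = c, with repeated subterms shared. All names here are our own illustrative choices.

```python
# A small sketch (not the paper's exact procedure) of the flattening step.
import itertools

fresh = (f"c{i}" for i in itertools.count())

def flatten(term, eqs, cache):
    """Return a constant naming `term`, adding flat defining equations
    (fun, args, const) to `eqs`; `cache` shares repeated subterms."""
    if isinstance(term, str):          # already a constant
        return term
    key = (term[0], tuple(flatten(a, eqs, cache) for a in term[1:]))
    if key not in cache:
        cache[key] = next(fresh)
        eqs.append((key[0], key[1], cache[key]))
    return cache[key]

eqs, cache = [], {}
# flatten the nested term g(f(a), f(a))
lhs = flatten(("g", ("f", "a"), ("f", "a")), eqs, cache)
print(eqs)   # [('f', ('a',), 'c0'), ('g', ('c0', 'c0'), 'c1')]
print(lhs)   # 'c1'
```

Note that the shared subterm f(a) is abstracted only once, which is what keeps the flattening linear in the size of the input.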

0.3 The Tableaux Algorithm

The algorithm proposed in this section is tableaux-like. It manipulates formulae in the following DAG-primitive format

(3)

where is a DAG and are flat constraints (notice that the do not occur in ). To make reading easier, we shall omit the existential quantifiers in (3), so that (3) will be written simply as

(4)

Initially the DAG and the constraint are the empty conjunction. In the DAG-primitive formula (4), the variables are called parameter variables, the variables are called (explicitly) defined variables, and the variables are called (truly) quantified variables. Variables are never modified; in contrast, during the execution of the algorithm some quantified variables may disappear or become defined variables (in the latter case they are renamed: a quantified variable becoming defined is renamed as , for a fresh ). Below, letters range over .

Definition 0.3.1.

A term (resp. a literal ) is -free when there is no occurrence of any of the variables in (resp. in ). Two flat terms of the kinds

(5)

are said to be compatible iff for every , either is identical to or both and are -free. The difference set of two compatible terms like above is the set of disequalities such that is not identical to .
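Definition 0.3.1 is purely combinatorial and easy to operationalize. The sketch below uses an encoding of our own choosing (a flat term is a pair of a function symbol and an argument tuple, and `evars` is the set of variables to be eliminated) to check compatibility and compute the difference set.

```python
# Compatibility and difference sets for flat terms f(u1,...,un), f(v1,...,vn),
# following the definition: positions are identical or both arguments are
# free of the variables to be eliminated.

def is_free(arg, evars):
    """An argument is e-free if it is not a variable to be eliminated."""
    return arg not in evars

def compatible(t1, t2, evars):
    (f, args1), (g, args2) = t1, t2
    if f != g or len(args1) != len(args2):
        return False
    return all(a == b or (is_free(a, evars) and is_free(b, evars))
               for a, b in zip(args1, args2))

def difference_set(t1, t2, evars):
    """Disequalities u_i != v_i at positions where the terms differ."""
    assert compatible(t1, t2, evars)
    return {(a, b) for a, b in zip(t1[1], t2[1]) if a != b}

t1 = ("f", ("e1", "a", "b"))
t2 = ("f", ("e1", "a", "c"))
print(compatible(t1, t2, {"e1", "e2"}))      # True
print(difference_set(t1, t2, {"e1", "e2"}))  # {('b', 'c')}
```

The first positions are identical (even though e1 is to be eliminated), and the differing position involves only parameters, so the terms are compatible with difference set {b != c}; this is exactly the data the Splitting Rule below branches on.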

0.3.1 The Algorithm

Our algorithm applies the transformations below (except the last one) in a "don't care" non-deterministic way. The last transformation has lower priority and splits the execution of the algorithm into several branches: each branch will produce a different disjunct of the output formula. Each state of the algorithm is a DAG-primitive formula like (4). We now provide the rules that constitute our 'tableaux-like' algorithm.

(1)

Simplification Rules:

(1.0)

if an atom like belongs to , just remove it; if a literal like occurs somewhere, delete , replace with and stop;

(1.i)

If is not a variable and contains both and , remove the latter and replace it with .

(1.ii)

If contains with , remove it and replace everywhere by .

(2)

DAG Update Rule: if contains , remove it, rename everywhere as (for fresh ) and add to . More formally:

(3)

-Free Literal Rule: if contains a literal , move it to . More formally:

(4)

Splitting Rule: If contains a pair of atoms and , where and are compatible flat terms like in (5), and no disequality from the difference set of belongs to , then non-deterministically apply one of the following alternatives:

(4.0)

remove from the atom , add to it the atom , and add to all equalities such that is in the difference set of ;

(4.1)

add to one of the disequalities from the difference set of (notice that the difference set cannot be empty, otherwise Rule (1.i) applies).

When no rule is applicable any longer, delete from the resulting formula

so as to obtain for any branch an output formula in DAG-representation of the kind

The following proposition states that, by applying the previous rules, termination is always guaranteed.

Proposition 0.3.1 ().

The non-deterministic procedure presented above always terminates.

Proof.

It is sufficient to show that every branch of the algorithm terminates. To prove this, first observe that the total number of variables involved never increases, and it decreases whenever (1.ii) is applied (it might decrease also by effect of (1.0)). When this number does not decrease, there is a bound on the number of disequalities that can occur in . Now, transformation (4.1) decreases the number of disequalities that are still missing; the other transformations do not increase this number. Finally, all transformations except (4.1) reduce the length of . ∎

The following remark will be useful for proving the correctness of our algorithm, since it describes the kinds of literals contained in a state triple that is terminal (i.e., one to which no rule applies).

Remark 0.3.1.

Notice that if no transformation applies to (3), the set can only contain disequalities of the kind , together with equalities of the kind . However, when it contains , one of the must belong to (otherwise (2) or (3) applies). Moreover, if and are both in , then either they are not compatible, or belongs to for some and some variables not in (otherwise (4) or (1.i) applies).

Remark 0.3.2.

The complexity of the above algorithm is exponential; however, the complexity of producing a single branch is quadratic. Notice that if function symbols are all unary, there is no need to apply Rule (4); hence, in this restricted case, computing UIs is a tractable problem. The case of unary functions has relevant applications in database-driven verification [9, 7, 6] (where unary function symbols are used to encode primary and foreign keys).

Example 0.3.1.

Let us compute the UI of the formula . Flattening gives the set of literals

(6)

where the newly introduced variables need to be eliminated too. Applying (4.0) removes and introduces the new equalities , . This causes to be renamed as by (2). Applying (4.0) again removes and adds the equalities , ; moreover, is renamed as . To the literal we can apply (3). The branch terminates with . This produces the first disjunct of the uniform interpolant. The other branches produce , and as further disjuncts, so that the UI turns out to be equivalent to .

0.4 The Conditional Algorithm

This section discusses a new algorithm whose objective is to generate a compact representation of the UI in EUF, avoiding splitting by relying instead on conditions in Horn clauses generated from literals whose left sides have the same function symbol. A by-product of this approach is that the UI can often be computed in polynomial time for a large class of formulas. Further, this algorithm generates the UI of (where is a conjunction of literals and , , as usual) in conjunctive normal form, with literals and conditional Horn equations. Toward this goal, a new data structure, the conditional DAG, a generalization of the DAG, is introduced so as to maximize the sharing of sub-formulas.

Using the core preprocessing procedure explained in Subsection 0.2.3, it is assumed that is the conjunction of a set of literals containing only literals of the following two kinds:

(7)
(8)

(recall that we use letters for elements of ). In addition, we can assume that the variables in must occur in (8) and in the left member of (7). We do not include equalities like because they can be eliminated by replacement.

0.4.1 The Algorithm

The algorithm requires two steps to obtain a set of clauses representing the output in a suitably compressed format.

Step 1. Out of every pair of literals and of the kind (7), we produce the Horn clause

(9)

Let us call the set of clauses obtained from by adding these new Horn clauses to it.

Step 2. We saturate with respect to the following rewriting rule

where means the result of replacing by at position of the clause , and is the clause obtained by merging with the antecedent of the clause .

Notice that we apply the rewriting rule only to conditional equalities of the kind : this is because clauses like are considered 'conditional definitions' (and clauses like are considered 'conditional facts').

We let be the set of clauses obtained from by saturating it with respect to the above rewriting rule, removing from antecedents identical literals of the kind , and removing subsumed clauses.
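Under the assumption that literals of kind (7) are encoded as triples (f, args, rhs) meaning f(args) = rhs (an encoding of our own choosing), Step 1 can be sketched as follows; the clause shape matches (9), with trivial antecedent atoms t = t dropped and pairs whose conclusion is trivially true skipped.

```python
# A sketch of Step 1: for each pair of flat literals with the same function
# symbol, emit the Horn clause (args1 = args2 componentwise) -> rhs1 = rhs2.
from itertools import combinations

def step1(lits):
    """lits: list of (fun, args, rhs) triples meaning fun(args) = rhs.
    Returns Horn clauses as (body, head) pairs, where body is a list of
    equality atoms and head is a single equality atom."""
    clauses = []
    for (f, args1, r1), (g, args2, r2) in combinations(lits, 2):
        if f != g or r1 == r2:         # different symbol, or trivial head
            continue
        body = [(a, b) for a, b in zip(args1, args2) if a != b]
        clauses.append((body, (r1, r2)))
    return clauses

lits = [("f", ("y1",), "e1"), ("f", ("y2",), "e2")]
print(step1(lits))   # [([('y1', 'y2')], ('e1', 'e2'))]
```

Here the two literals f(y1) = e1 and f(y2) = e2 yield the single Horn clause y1 = y2 -> e1 = e2, which is the congruence condition that the tableaux algorithm would instead resolve by branching.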

Example 0.4.1.

Let be the set of the following literals

Step 1 produces the following set of Horn clauses

Since there are no Horn clauses whose consequent is an equality of the kind , Step 2 does not produce further clauses and we have .

0.4.2 Conditional DAGs

In order to extract the output UI in a plain (uncompressed) format from the above set of clauses , we must identify all the 'implicit conditional definitions' it contains.

Let be an ordered subset of the : that is, in order to specify we must take a subset of the and an ordering of this subset. Intuitively, these will play the role of placeholders inside a conditional definition.

If we let be (where, say, is some with ), we let be the language restricted to and (for ): in other words, an -term or an -clause may contain only terms built up from by applying function symbols to them. In particular, (also called ) is the language restricted to . We let be the language restricted to .

Given a set of clauses and as above, a -conditional DAG (or simply a conditional DAG ) built out of is a set of Horn clauses from

(10)

where is a finite tuple of -atoms and is a -term. Given a -conditional DAG we can define the formulae (for ) as follows:

-

is the conjunction of all -clauses belonging to ;

-

for , the formula is .

It is clear that is equivalent to a quantifier-free formula,444It can easily be seen that such a formula can be turned, again up to equivalence, into a conjunction of Horn clauses. in particular (abbreviated as ) is equivalent to an -quantifier-free formula. The explicit computation of such quantifier-free formulae may, however, produce an exponential blow-up.

Example 0.4.2.

Consider the set of Horn clauses mentioned in Example 0.4.1. We can obtain formulae for and that are not logically equivalent by considering with and conditional definitions , or with and conditional definitions . In fact, is logically equivalent to

(11)

whereas is logically equivalent to

(12)

where we used the notation to mean the result of substituting, in the conjunction of the -clauses not involving , for and for (a similar notation is used for ). A third possibility is to use the conditional definitions and with (equivalently) either or , resulting in a conditional DAG with logically equivalent to

(13)

The next lemma shows the relevant property of the formula :

Lemma 0.4.1 ().

For every set of clauses and for every -conditional DAG built out of , the formula

is logically valid.

Proof.

We prove that is valid by induction on . The base case is clear. For the case , proceed e.g. in natural deduction as follows: assume and , in order to prove . Since , by implication elimination we get , and also , by transitivity of equality. Now the claim follows from the induction hypothesis and equality replacement. ∎

Notice that it is not true that the conjunction of all possible (varying and ) implies : in fact, such a conjunction can be empty (hence equal to ) in case no conditional DAG at all can be built from (this happens, for instance, if is just ).

0.4.3 Extraction of UI’s

We shall prove below that, in order to get a UI of , one can take the conjunction of all possible , with varying among the conditional DAGs that can be built out of the set of clauses produced by Step 2 of the above algorithm.

Example 0.4.3.

If is the conjunction of the literals of Example 0.4.1, then the conjunction of , and is a UI of ; in fact, no further non-trivial conditional DAG can be extracted (if we take or or to extract , then turns out to be the empty conjunction ).

Example 0.4.4.

Let us turn to the literals (6) of Example 0.3.1. Step 1 produces out of them the conditional clauses

(14)

Step 2 produces by rewriting the further clauses and . We can extract two conditional DAGs (using both conditional definitions (14), or just the first one); in both cases is , which is the UI.

As should be evident from the two examples above, the conditional-DAG representation of the output considerably reduces computational complexity in many cases; this is a clear advantage of the present algorithm over the algorithm from Section 0.3 and over other approaches like e.g. [8]. Still, the next example shows that in some cases the overall complexity remains exponential.

Example 0.4.5.

Let be and let be Let be the conjunction of the identities , and the set of identities varying such that . After applying Step 1 of the algorithm presented in Subsection 0.4.1, we get the Horn clauses as well as the clause . If we now apply Step 2, it is clear that we can never produce a conditional clause of the kind with -free (because we can only rewrite some into some ). Thus no sequence of clauses like (10) can be extracted from : notice, in fact, that the term in such a sequence must not contain the . In other words, the only -conditional DAG that can be extracted is based on the empty and is itself empty. However, such a produces a formula that is in fact quite big: it is the conjunction of the exponentially many clauses from in which the do not occur.

0.5 Correctness and Completeness Proofs

In this section we prove correctness and completeness of our two algorithms. To this aim, we need some elementary background, both from model theory and from term rewriting.

For model theory, we refer to [10]. We just recall a few definitions. A -embedding (or, simply, an embedding) between two -structures and is a map between the support sets of and satisfying the condition for all -literals ( is regarded as a -structure by interpreting each additional constant as itself, and is regarded as a -structure by interpreting each additional constant as ). If is an embedding which is just the identity inclusion , we say that is a substructure of or that is an extension of .

Extensions and UIs are related by the following result, which we take from [8]:

Lemma 0.5.1 (Cover-by-Extensions).

A formula is a UI in of iff it satisfies the following two conditions:

(i)

;

(ii)

for every model of and every tuple of elements from the support of such that , it is possible to find another model of such that embeds into and .

To conveniently handle extensions, we need diagrams. Let be a -structure. The diagram of [10], written (or just ), is the set of ground -literals that are true in . An easy but important result, called Robinson's Diagram Lemma [10], says that, given any -structure , the embeddings are in bijective correspondence with the expansions of to -structures that are models of . The expansions and the embeddings are related in the obvious way: the name of is interpreted as . It is convenient to see as a set of flat literals, as follows: the positive part of contains the -equalities which are true in , and the negative part of contains the -disequalities , with varying among the pairs of distinct elements of .

For term rewriting we refer to a textbook like [1]; we only recall the following classical result:

Lemma 0.5.2 ().

Let be a canonical ground rewrite system over a signature . Then there is a -structure such that, for every pair of ground terms, we have iff the -normal form of is the same as the -normal form of . Consequently, is consistent with a set of negative literals iff, for every , the -normal forms of and are different.
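For concreteness, the normal-form computation underlying Lemma 0.5.2 can be sketched as follows, assuming, as in the lemma, that the ground rewrite system is canonical (terminating and confluent), so that repeated rewriting reaches a unique normal form; the encoding of rules as a Python dict is our own illustrative choice.

```python
# Normal forms w.r.t. a canonical ground rewrite system R, given as a dict
# mapping left-hand sides (constants or flat terms) to right-hand sides.

def normal_form(term, rules):
    """Rewrite `term` innermost-first until no rule applies."""
    if isinstance(term, tuple):
        term = (term[0],) + tuple(normal_form(a, rules) for a in term[1:])
    while term in rules:               # rewrite at the root
        term = rules[term]
        if isinstance(term, tuple):    # re-normalize any new arguments
            term = (term[0],) + tuple(normal_form(a, rules) for a in term[1:])
    return term

# R = { a -> b,  f(b) -> b }
rules = {"a": "b", ("f", "b"): "b"}
print(normal_form(("f", ("f", "a")), rules))   # 'b'
```

In the structure of the lemma, f(f(a)) = b holds because both sides normalize to b, while a negative literal such as b != c is consistent with R since b and c have distinct normal forms.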

We are now ready to prove the correctness and completeness of our algorithms. We first give the relevant intuitions behind the proof technique, which is the same in both cases. By Lemma 0.5.1 above, what we need to show is that if a model satisfies the output formula of the algorithm, then it can be extended to a superstructure satisfying the input formula of the algorithm. By the Diagram Lemma, this is achieved if we show that is consistent with the output formula of the algorithm. The output formula is equivalent to a disjunction of constraints, and the diagram is also a constraint (albeit an infinitary one). The positive part of is a canonical rewrite system (equalities like are obviously oriented left-to-right) and every term occurring in is in normal form. If an algorithm does a good job, it will be easy to see that the completion of the union of with the relevant disjunct constraint is trivial and does not produce inconsistencies.

0.5.1 Correctness and Completeness of the Tableaux Algorithm

Theorem 0.5.1 ().

Suppose that we apply the algorithm of Subsection 0.3.1 to the primitive formula and that the algorithm terminates with its branches in the states

then the UI of in is the DAG-unravelling of the formula

(15)
Proof.

Since is logically equivalent to , it is sufficient to check that, if a formula like (3) is terminal (i.e. no rule applies to it), then its UI is . To this end, we apply Lemma 0.5.1: we pick a model satisfying via an assignment to the variables 555Actually, the values assigned to the uniquely determine the values assigned to the . and we show that can be embedded into a model such that, for a suitable extension of to the variables