 # Justifications in Constraint Handling Rules for Logical Retraction in Dynamic Algorithms

We present a straightforward source-to-source transformation that introduces justifications for user-defined constraints into the CHR programming language. Then a scheme of two rules suffices to allow for logical retraction (deletion, removal) of constraints during computation. Without the need to recompute from scratch, these rules remove not only the constraint but also undo all consequences of the rule applications that involved the constraint. We prove a confluence result concerning the rule scheme and show its correctness. When algorithms are written in CHR, constraints represent both data and operations. CHR is already incremental by nature, i.e. constraints can be added at runtime. Logical retraction adds decrementality. Hence any algorithm written in CHR with justifications will become fully dynamic. Operations can be undone and data can be removed at any point in the computation without compromising the correctness of the result. We present two classical examples of dynamic algorithms, written in our prototype implementation of CHR with justifications that is available online: maintaining the minimum of a changing set of numbers and shortest paths in a graph whose edges change.

## 1 Introduction

Justifications have their origin in truth maintenance systems (TMS) [McA90] for automated reasoning. In this knowledge representation method, derived information (a formula) is explicitly stored and associated with the information it originates from by means of justifications. This dependency can be used to explain the reason for a conclusion (consequence) by its initial premises. With the help of justifications, conclusions can be withdrawn by retracting their premises. By this logical retraction, e.g. default reasoning can be supported and inconsistencies can be repaired by retracting one of the reasons for the inconsistency. An obvious application of justifications are dynamic constraint satisfaction problems (DCSP), in particular over-constrained ones [BM06].

In this work, we extend the applicability of logical retraction to arbitrary algorithms that are expressed in the programming language Constraint Handling Rules (CHR) [Frü09, Frü15]. To accomplish logical retraction, we have to be aware that CHR constraints can also be deleted by rule applications. These constraints may have to be restored when a premise is retracted. With logical retraction, any algorithm written in CHR will become fully dynamic. (Dynamic algorithms for dynamic problems should not be confused with dynamic programming.)

Minimum Example. Given a multiset of numbers as constraints min(n_1), min(n_2), …, min(n_k). The constraint (predicate) min(n_i) means that the number n_i is a candidate for the minimum value. The following CHR rule filters the candidates.

min(N) \ min(M) <=> N=<M | true.


The rule consists of a left-hand side, on which a pair of constraints has to be matched, a guard check N=<M that has to be satisfied, and an empty right-hand side denoted by true. In effect, the rule takes two min candidates and removes the one with the larger value (constraints after the \ symbol are deleted). Note that the min constraints behave both as operations (removing other constraints) and as data (being removed).

CHR rules are applied exhaustively. Here the rule keeps firing until a single min constraint with the smallest value remains, denoting the current minimum. If another min constraint is added during the computation, it will eventually react with a previous min constraint, and the correct current minimum will be computed in the end. Thus the algorithm as implemented in CHR is incremental. It is not decremental, though: we cannot logically retract a min candidate. While removing a candidate that is larger than the minimum would be trivial, retracting the minimum itself requires remembering all deleted candidates and finding their minimum. With the help of justifications, this logical retraction will be possible automatically.
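The difficulty can be sketched in a few lines of Python (a simulation for illustration only, not the CHR mechanism; the function names are ours):

```python
# Simulation of the minimum example (illustration only, not CHR).
# filter_min mimics exhaustive application of the min rule: it keeps
# the smallest candidate and remembers the removed ones.
def filter_min(candidates):
    kept = min(candidates)
    removed = list(candidates)
    removed.remove(kept)  # all other candidates are deleted by the rule
    return kept, removed

# retract mimics logical retraction of a candidate: retracting a loser
# is trivial, but retracting the current minimum needs the remembered
# candidates to find the new minimum without recomputing from scratch.
def retract(value, kept, removed):
    if value == kept:
        return filter_min(removed)
    removed.remove(value)
    return kept, removed

kept, removed = filter_min([1, 0, 2])   # kept == 0
kept, removed = retract(0, kept, removed)
print(kept)                              # prints 1, the new minimum
```

Justifications automate exactly this bookkeeping: the remembered candidates correspond to the rem constraints introduced in Section 3.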

Contributions and Overview of the Paper. In the next section we recall syntax and operational semantics for CHR. Our contributions are as follows:

• We introduce CHR with justifications (CHRᴶ) in Section 3. We enhance standard CHR programs with justifications by a source-to-source program transformation. We show the operational equivalence of rule applications in both settings. Thus CHRᴶ is a conservative extension of standard CHR.

• We define a scheme of two rules to enable logical retraction of constraints based on justifications in Section 4. We show that the rule scheme is confluent with each rule in any given program, independent of the confluence of that program. We prove correctness of logical retraction: the result of a computation with retraction is the same as if the constraint would never have been introduced in the computation.

• We present a proof-of-concept implementation of CHRᴶ in CHR and Prolog (available online) in Section 5. We discuss two classical examples for dynamic algorithms, maintaining the minimum of a changing set of numbers and maintaining shortest paths in a graph whose edges change.

The paper ends with discussion of related work in Section 6 and with conclusions and directions for future work.

## 2 Preliminaries

We recall the abstract syntax and the equivalence-based abstract operational semantics of CHR in this section. Upper-case letters stand for (possibly empty) conjunctions of constraints in this paper.

### 2.1 Abstract Syntax of CHR

Constraints are relations, distinguished predicates of first-order predicate logic. We differentiate between two kinds of constraints: built-in (pre-defined) constraints and user-defined (CHR) constraints which are defined by the rules in a CHR program.

###### Definition 1

A CHR program is a finite set of rules. A (generalized) simpagation rule is of the form

 r : H_1 ∖ H_2 ⇔ C | B

where r is an optional name (a unique identifier) of a rule. In the rule head (left-hand side), H_1 and H_2 are conjunctions of user-defined constraints, the optional guard C is a conjunction of built-in constraints, and the body (right-hand side) B is a goal. A goal is a conjunction of built-in and user-defined constraints. A state is a goal. Conjunctions are understood as multisets of their conjuncts.

In the rule, H_1 are called the kept constraints, while H_2 are called the removed constraints. At least one of H_1 and H_2 must be non-empty. If H_1 is empty, the rule corresponds to a simplification rule, also written

 s : H_2 ⇔ C | B.

If H_2 is empty, the rule corresponds to a propagation rule, also written

 p : H_1 ⇒ C | B.

In this work, we restrict given CHR programs to rules without built-in constraints in the body except true and false. This restriction is necessary as long as built-in constraint solvers do not support the removal of built-in constraints.

### 2.2 Abstract Operational Semantics of CHR

Computations in CHR are sequences of rule applications. The operational semantics of CHR is given by a state transition system. It relies on a structural equivalence between states that abstracts away from technical details in a transition [RBF09, Bet14].

State equivalence treats built-in constraints semantically and user-defined constraints syntactically. Basically, two states are equivalent if their built-in constraints are logically equivalent (imply each other) and their user-defined constraints form syntactically equivalent multisets. For example,

 X=0 ∧ min(X) ≡ X=0 ∧ min(0).

For a state S, the notation S_bi denotes the built-in constraints of S and S_ud denotes the user-defined constraints of S.

###### Definition 2 (State Equivalence)

Two states S1 and S2 are equivalent, written S1 ≡ S2, if and only if

 ⊨ ∀(S1_bi → ∃ȳ((S1_ud = S2_ud) ∧ S2_bi)) ∧ ∀(S2_bi → ∃x̄((S1_ud = S2_ud) ∧ S1_bi))

with ȳ those variables that only occur in S2 and x̄ those variables that only occur in S1.

Using this state equivalence, the abstract CHR semantics is defined by a single transition (computation step). It defines the application of a rule. Note that CHR is a committed-choice language, i.e. there is no backtracking in the rule applications.

###### Definition 3 (Transition)

Let the rule r : H_1 ∖ H_2 ⇔ C | B be a variant of a rule from a given program P. (A variant (renaming) of an expression is obtained by uniformly replacing its variables by fresh variables.) The transition (computation step) is defined as follows, where S is called the source state and T is called the target state:

 S ≡ (H_1 ∧ H_2 ∧ C ∧ G) ↦_r (H_1 ∧ C ∧ B ∧ G) ≡ T

The goal G is called the context of the rule application. It is left unchanged.

A computation (derivation) of a goal G in a program P is a connected sequence of computation steps S_0 ↦ S_1 ↦ …, beginning with the initial state (query) S_0 that is G, and either ending in a final state (answer, result) or non-terminating (diverging). We may drop the reference to the rules to simplify the presentation. The notation ↦* denotes the reflexive and transitive closure of ↦.

If the source state can be made equivalent to a state that contains the head constraints and the guard built-in constraints of a variant of a rule, then we delete the removed head constraints from the state and add the rule body constraints to it. Any state that is equivalent to this target state is in the transition relation.

The abstract semantics does not account for termination of inconsistent states and propagation rules. From a state with inconsistent built-in constraints, any transition is possible. If a state can fire a propagation rule once, it can do so again and again. This is called trivial non-termination of propagation rules.

Minimum Example, contd. Here is a possible transition from a state S to a state T, applying the min rule to min(0) and min(2) (with N=0 and M=2, so that the guard 0=<2 holds):

 S = (min(0) ∧ min(2)) ↦ min(0) = T

## 3 CHR with Justifications (CHRᴶ)

We present a conservative extension of CHR by justifications. If they are not used, programs behave as without them. Justifications annotate atomic CHR constraints. A simple source-to-source transformation extends the rules with justifications.

###### Definition 4 (CHR Constraints and Initial States with Justifications)

A justification is a unique identifier. Given an atomic CHR constraint c, a CHR constraint with justifications is of the form c^F, where F is a set of justifications. An initial state with justifications is of the form ⋀_{i=1}^{n} c_i^{{f_i}}, where the f_i are distinct justifications.

We now define a source-to-source translation from rules to rules with justifications. Let kill/1 and rem/1 (remember removed) be two unary reserved CHR constraint symbols. This means they are only allowed to occur in rules as specified in the following.

###### Definition 5 (Translation to Rules with Justifications)

Given a generalized simpagation rule

 r : ⋀_{i=1}^{l} K_i ∖ ⋀_{j=1}^{m} R_j ⇔ C | ⋀_{k=1}^{n} B_k

Its translation to a simpagation rule with justifications is of the form

 r^f : ⋀_{i=1}^{l} K_i^{F_i} ∖ ⋀_{j=1}^{m} R_j^{F_j} ⇔ C | ⋀_{j=1}^{m} rem(R_j^{F_j})^F ∧ ⋀_{k=1}^{n} B_k^F  where F = ⋃_{i=1}^{l} F_i ∪ ⋃_{j=1}^{m} F_j.

The translation ensures that the head and the body of a rule mention exactly the same justifications. More precisely, each CHR constraint in the body is annotated with the union of all justifications in the head of the rule, because its creation is caused by the head constraints. The reserved CHR constraint rem/1 (remember removed) stores the constraints removed by the rule together with their justifications.
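As a worked instance of the scheme, the minimum rule from the introduction translates to the following rule with justifications, where F_1 and F_2 are the justification sets of the two matched head constraints (the empty body true contributes no annotated constraints):

 min(N)^{F_1} ∖ min(M)^{F_2} ⇔ N=<M | rem(min(M)^{F_2})^{F_1 ∪ F_2}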

### 3.1 Operational Equivalence of Rule Applications

Let S and T be states. For convenience, we will often consider them as multisets of atomic constraints. Then the notation S − T denotes multiset difference. By abuse of notation, let A and A^J be a conjunction and the corresponding conjunction whose atomic CHR constraints are annotated with justifications according to the above definition of the rule scheme. Similarly, let rem(R)^J denote the conjunction ⋀_{j=1}^{m} rem(R_j^{F_j})^F.

We show that rule applications correspond to each other in standard CHR and in CHRᴶ.

###### Lemma 1 (Equivalence of Program Rules)

There is a computation step with simpagation rule

 r : H_1 ∖ H_2 ⇔ C | B

if and only if there is a computation step with justifications with the corresponding simpagation rule with justifications

 r^f : H_1^J ∖ H_2^J ⇔ C | rem(H_2)^J ∧ B^J.

Proof. We compare the two transitions involving the rules r and r^f, respectively:

 (H_1 ∧ H_2 ∧ C ∧ G) ↦_r (H_1 ∧ C ∧ B ∧ G)
 (H_1^J ∧ H_2^J ∧ C ∧ G′) ↦_{r^f} (H_1^J ∧ rem(H_2)^J ∧ B^J ∧ C ∧ G′)

Given the standard transition with rule r, the transition with justifications with rule r^f is always possible: the rule r^f by definition does not impose any conditions on its justifications. The justifications in the rule body are computed as the union of the justifications in the rule head, which is always possible. Furthermore, the reserved rem constraints always belong to the context of the transition, since by definition there is no rule in the translated program that can match any of them.

Conversely, given the transition with justifications with rule r^f, by the same arguments, we can strip away all justifications from it and remove rem(H_2)^J from the rule and the target state to arrive at the standard transition with rule r. ∎

Since computations are sequences of connected computation steps, this lemma implies that computations in a standard CHR program and in the corresponding CHRᴶ program correspond to each other. Thus CHR with justifications is a conservative extension of CHR.

## 4 Logical Retraction Using Justifications

We use justifications to remove a CHR constraint from a computation without the need to recompute from scratch. This means that all its consequences due to rule applications it was involved in are undone. CHR constraints added by those rules are removed and CHR constraints removed by the rules are re-added. To specify and implement this behavior, we give a scheme of two rules, one for retraction and one for re-adding of constraints. The reserved CHR constraint kill(f) undoes all consequences of the constraint with justification f.

###### Definition 6 (Rules for CHR Logical Retraction)

For each n-ary CHR constraint symbol c (except the reserved kill and rem), we add a kill rule and a revive rule of the form:

 kill : kill(f) ∖ G^F ⇔ f ∈ F | true
 revive : kill(f) ∖ rem(G_c^{F_c})^F ⇔ f ∈ F | G_c^{F_c},

where G and G_c are of the form c(X_1, …, X_n), where X_1, …, X_n are different variables.

Note that a constraint may be revived and subsequently killed. This is the case when both F and F_c contain the justification f.
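The behavior of the two rules can be sketched by a small Python simulation (our own encoding, for illustration only; it replays the two rules to exhaustion but does not re-run program rules on revived constraints):

```python
# Simulation of the kill/revive rule scheme (illustration only, not CHR).
# A live constraint is a pair (term, F) with its justification set F;
# a rem entry is a triple (term, Fc, F): the removed constraint with its
# own justifications Fc, remembered under the removing rule's set F.
def kill(f, live, rems):
    # revive: re-add every remembered constraint whose removal involved f
    for entry in [r for r in rems if f in r[2]]:
        rems.remove(entry)
        live.append((entry[0], entry[1]))
    # kill: remove every live constraint carrying justification f
    # (this also catches constraints just revived with f in Fc)
    live[:] = [(c, fs) for (c, fs) in live if f not in fs]
    return live, rems

# State after the minimum example: min(0)^{B} is live; min(1) and min(2)
# were removed by rule applications involving justification B.
live = [("min(0)", {"B"})]
rems = [("min(1)", {"A"}, {"A", "B"}), ("min(2)", {"C"}, {"B", "C"})]
kill("B", live, rems)  # retract min(0): both candidates are revived
```

After kill("B", …) the store holds min(1) and min(2) again and min(0) is gone; in the actual CHRᴶ program the min rule would now fire once more and remove min(2), yielding the updated minimum.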

### 4.1 Confluence of Logical Retraction

Confluence of a program guarantees that any computation starting from a given initial state can always reach equivalent states, no matter which of the applicable rules are applied. There is a decidable, sufficient and necessary syntactic condition to check confluence of terminating programs and to detect rule pairs that lead to non-confluence when applied.

###### Definition 7 (Confluence)

A CHR program is confluent if the following holds for all states S, S1, S2: If S ↦* S1 and S ↦* S2, then there exist states T1 and T2 such that S1 ↦* T1 and S2 ↦* T2, where T1 ≡ T2.

###### Theorem 4.1

[Abd97, AFM99] A terminating CHR program is confluent if and only if all its critical pairs are joinable.

Decidability comes from the fact that there is only a finite number of critical pairs to consider.

###### Definition 8 (Overlap, Critical Pair)

Given two (not necessarily different) simpagation rules whose variables have been renamed apart, r1 : K1 ∖ R1 ⇔ C1 | B1 and r2 : K2 ∖ R2 ⇔ C2 | B2. Let A1 and A2 be non-empty conjunctions of constraints taken from K1 ∧ R1 and K2 ∧ R2, respectively. An overlap of the two rules is the state consisting of the rules' heads and guards:

 ((K1 ∧ R1) − A1) ∧ K2 ∧ R2 ∧ A1=A2 ∧ C1 ∧ C2.

The critical pair are the two states that come from applying the two rules to the overlap, where E = (A1=A2 ∧ C1 ∧ C2):

 (((K1 ∧ K2 ∧ R2) − A2) ∧ B1 ∧ E <> ((K1 ∧ R1 ∧ K2) − A1) ∧ B2 ∧ E).

Note that the two states in the critical pair differ by the rule bodies B1 and B2 and by the head constraints removed by the respective rule.

A critical pair is trivially joinable if its built-in constraints are inconsistent or if both A1 and A2 do not contain removed constraints [AFM99].

We are ready to show the confluence of the kill and revive rules with each other and with each rule in any given program. It is not necessary that the given program is confluent. This means for any given program, the order between applying applicable rules from the program and retracting constraints can be freely interchanged. It does not matter for the result, if we kill a constraint first or if we apply a rule to it and kill it and its consequences later.

###### Theorem 4.2 (Confluence of Logical Retraction)

Given a CHR program whose rules are translated to rules with justifications together with the kill and revive rules. We assume there is at most one kill(f) constraint for each justification f in any state. Then all critical pairs between the kill and revive rules and any rule from the program with justifications are joinable.

Proof. There is only one overlap between the kill and revive rules

 kill : kill(f) ∖ G^F ⇔ f ∈ F | true
 revive : kill(f) ∖ rem(G_c^{F_c})^F ⇔ f ∈ F | G_c^{F_c},

since G cannot have the reserved constraint symbol rem. The overlap is in the kill(f) constraint. But since it is not removed by any rule, the resulting critical pair is trivially joinable.

By our assumption, the only overlap between two instances of the kill rule is in a single kill(f) constraint. Again, since it is not removed, the resulting critical pair is trivially joinable. The same argument applies to the only overlap between two instances of the revive rule.

Since the head of a simpagation rule with justifications from the given program

 r^f : K^J ∖ R^J ⇔ C | rem(R)^J ∧ B^J

cannot contain the reserved kill and rem constraints, these program rules cannot have an overlap with the revive rule.

But there are overlaps between program rules, say a rule r^f, and the kill rule. They take the general form:

 kill(f) ∧ K^J ∧ R^J ∧ G^F=A^F ∧ f ∈ F ∧ C,

where A^F occurs in K^J ∧ R^J. This leads to the critical pair

 (kill(f) ∧ ((K^J ∧ R^J) − G^F) ∧ E <> kill(f) ∧ K^J ∧ rem(R)^J ∧ B^J ∧ E),

where E = (G^F=A^F ∧ f ∈ F ∧ C). In the first state of the critical pair, the kill rule has been applied, and in the second state the rule r^f. Note that A is atomic since it is equated to G in E. Since G^F has been removed in the first state and f ∈ F, rule r^f is no longer applicable in that state.

We would like to join these two states. The states of the critical pair differ as follows: in the first state, G^F has been removed from the head constraints K^J ∧ R^J, while in the second state we have the constraints rem(R)^J and the body constraints B^J of rule r^f instead. Any constraint in rem(R)^J ∧ B^J must include f as a justification by definition of the translation, because f occurred in the justifications F of the head constraint A^F and F is contained in J.

The goal rem(R)^J contains a rem constraint for each removed constraint from R^J. But then we can use kill(f) with the revive rule to replace all these rem constraints by the removed constraints, thus adding R^J back again. Furthermore, we can use kill(f) with the kill rule to remove each constraint in B^J, as each constraint in B^J contains the justification f. So B^J has been removed completely and R^J has been re-added.

The two states may still differ in the occurrence of G^F (which is A^F). In the first state, G^F was removed by the kill rule. Now if G^F (that is, A^F) was in R^J, it has been revived with rem(R)^J. But then the kill rule is applicable and we can remove G^F again. In the second state, if A^F was in R^J, it has been removed together with R^J by application of rule r^f. Otherwise, A^F is still contained in K^J. But then the kill rule is applicable to A^F and removes it from K^J. Now G^F (that is, A^F) does not occur in the second state either.

We thus have arrived at the first state of the critical pair. Therefore the critical pair is joinable. ∎

This means that given a state, if there is a constraint to be retracted, we can either kill it immediately or still apply a rule to it and use the kill and revive rules afterwards to arrive at the same resulting state.

Note that the confluence between the kill and revive rules and any rule from the program is independent of the confluence of the rules in the given program.

### 4.2 Correctness of Logical Retraction

We prove correctness of logical retraction: the result of a computation with retraction is the same as if the constraint would never have been introduced in the computation. We show that given a computation starting from an initial state with a kill(f) constraint that ends in a state where the kill and revive rules are not applicable, i.e. these rules have been applied to exhaustion, then there is a corresponding computation without constraints that contain the justification f.

###### Theorem 4.3 (Correctness of Logical Retraction)

Given a computation

 A^J ∧ G^{{f}} ∧ kill(f) ↦* B^J ∧ rem(R)^J ∧ kill(f) ↦̸_{kill, revive},

where f does not occur in J. Then there is a computation without G^{{f}} and kill(f):

 A^J ↦* B^J ∧ rem(R)^J.

Proof. We distinguish between transitions that involve the justification f and those that do not. A rule that applies to constraints that do not contain the justification f will produce constraints that do not contain the justification f. A rule application that involves at least one constraint with the justification f will only produce constraints that contain the justification f.

We now define a mapping strip(f, ·) from a computation with f to a corresponding computation without f. The mapping essentially strips away constraints that contain the justification f except those that are remembered by rem constraints. In this way, the exhaustive application of the revive and kill rules is mimicked. For atomic constraints the mapping is defined as follows and extended to conjunctions (states) conjunct-wise:

 strip(f, kill(f)) = true

 strip(f, rem(C^{F_c})^F) = strip(f, C^{F_c}) if f ∈ F

 strip(f, C^F) = true if C is an atomic constraint except rem/1 and f ∈ F

 strip(f, C^F) = C^F otherwise.
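Under our reading of this stripping mapping, it can be prototyped in a few lines of Python (our own encoding, for illustration: a state is a list of constraints, each either a pair (term, F) for term^F or a triple ("rem", inner, F) for rem(inner)^F):

```python
# Prototype of strip(f, state) (our own encoding, illustration only).
def strip(f, state):
    out = []
    for c in state:
        if c[0] == "rem" and f in c[2]:
            out.extend(strip(f, [c[1]]))  # mimic the revive rule
        elif c[0] == "rem":
            out.append(c)                 # remembered, f not involved
        elif f in c[1]:
            pass                          # mimic the kill rule: drop it
        else:
            out.append(c)
    return out
```

For example, a remembered constraint rem(c^{g})^{f} is replaced by c^{g}, a remembered rem(c^{f})^{f} is revived and then dropped, and a plain constraint carrying f is dropped.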

We extend the mapping from states to transitions. We keep the transitions except where the mapped source and target state are equivalent; in that case we replace the transition by a state equivalence ≡. This happens when a rule is applied that involves the justification f. The mapping is defined in such a way that in this case the source and target state are equivalent. Otherwise a rule that does not involve f has been applied. The mapping ensures in this case that all necessary constraints are in the source and target state, since it keeps all constraints that do not mention the justification f. For a computation step S ↦_r T we define the mapping as:

 strip(f, S ↦_r T) = (strip(f, S) ≡ strip(f, T)) if rule r involves f

 strip(f, S ↦_r T) = (strip(f, S) ↦_r strip(f, T)) otherwise.

We next have to show that the mapping results in correct state equivalences and transitions. If a rule is applied that does not involve the justification f, then it is easy to see that the mapping strip(f, ·) leaves states and transitions unchanged.

Otherwise the transition is the application of a rule r^f from the program, the rule kill, or the rule revive, where f is contained in the justifications involved. Let the context E^J be an arbitrary goal. Then we have to compute

 strip(f, kill(f) ∧ G^F ∧ f∈F ∧ E^J ↦_{kill} kill(f) ∧ E^J)
 strip(f, kill(f) ∧ rem(G_c^{F_c})^F ∧ f∈F ∧ E^J ↦_{revive} kill(f) ∧ G_c^{F_c} ∧ E^J)
 strip(f, K^J ∧ R^J ∧ C ∧ E^J ↦_{r^f} K^J ∧ rem(R)^J ∧ B^J ∧ C ∧ E^J)

and to show that equivalent states are produced in each case. The resulting states are

 true ∧ true ∧ true ∧ E^{J′} ≡ true ∧ E^{J′}
 true ∧ G_c^{F_c} ∧ true ∧ E^{J′} ≡ true ∧ G_c^{F_c} ∧ E^{J′} if f ∉ F_c
 true ∧ true ∧ true ∧ E^{J′} ≡ true ∧ true ∧ E^{J′} if f ∈ F_c

where, given a goal E^J, the expression E^{J′} contains all constraints from E^J that do not contain the justification f.

In the end state of the given computation we know that the revive and kill rules have been applied to exhaustion. Therefore all rem(G_c^{F_c})^F where F contains f have been replaced by G_c^{F_c} by the revive rule. Therefore all standard constraints with justification f have been removed by the kill rule (including those revived), just as is done in the mapping strip(f, ·).

Therefore the end states are indeed equivalent except for the remaining kill constraint. ∎

## 5 Implementation

As a proof-of-concept, we implement CHR with justifications (CHRᴶ) in SWI-Prolog using its CHR library. This prototype source-to-source transformation is available online at http://pmx.informatik.uni-ulm.de/chr/translator/. The translated programs can be run in Prolog or online systems like WebCHR.

Constraints with Justifications. CHR constraints annotated by a set of justifications are realized by a binary infix operator ##, where the second argument is a list of justifications:

a constraint C^{{F1,F2,…}} is realized as C ## [F1,F2,...].

For convenience, we add rules that add a new justification to a given constraint C. For each constraint symbol c with arity n there is a rule of the form

addjust @ c(X1,X2,...Xn) <=> c(X1,X2,...Xn) ## [_F].

where the arguments X1,X2,...Xn are different variables.

Rules with Justifications. A CHR simpagation rule with justifications is realized as follows:

 r^f : ⋀_{i=1}^{l} K_i^{F_i} ∖ ⋀_{j=1}^{m} R_j^{F_j} ⇔ C | ⋀_{j=1}^{m} rem(R_j^{F_j})^F ∧ ⋀_{k=1}^{n} B_k^F  where F = ⋃_{i=1}^{l} F_i ∪ ⋃_{j=1}^{m} F_j
rf @ K1 ## FK1,... \ R1 ## FR1,... <=> C |
union([FK1,...FR1,...],Fs), rem(R1##FR1) ## Fs,...B1 ## Fs,...


where the auxiliary predicate union/2 computes the ordered duplicate-free union of a list of lists. (More precisely, a simplification rule is generated if there are no kept constraints and a propagation rule is generated if there are no removed constraints.)

Rules remove and revive. Justifications are realized as flags that are initially unbound logical variables. This eases the generation of new unique justifications and their use in killing. Concretely, the reserved constraint kill(f) is realized as the built-in equality F=r, i.e. the justification variable gets bound. Where kill(f) occurred in the head of a kill or revive rule, it is moved to the guard as the equality test F==r. Note that we rename rule kill to remove in the implementation.

revive @ rem(C##FC) ## Fs <=> member(F,Fs),F==r | C ## FC.
remove @ C ## Fs <=> member(F,Fs),F==r | true.


Since rules are tried in program order in the CHR implementation, the constraint C in the second rule is not a reserved rem/1 constraint when the rule is applicable. The check for set membership in the guards is expressed using the standard nondeterministic Prolog built-in predicate member/2.

Logical Retraction with killc/1. We extend the translation to allow for retraction of derived constraints. The constraint killc(C) logically retracts constraint C. The two rules killc and killr try to find the constraint C, also when it has been removed and is now present in a rem constraint. The associated justifications point to all initial constraints that were involved in producing the constraint C. For retracting the constraint, it is sufficient to remove one of its producers. This introduces a choice implemented by the member predicate.

killr @  killc(C), rem(C ## FC) ## _Fs <=> member(F,FC),F=r.
killc @  killc(C), C ## Fs <=> member(F,Fs),F=r.

Note that in the first rule, we bind a justification F from FC, because FC contains the justifications of the producers of constraint C, while Fs also contains those that removed it by a rule application.

### 5.1 Examples

We discuss two classical examples for dynamic algorithms, maintaining the minimum of a changing set of numbers and shortest paths when edges change.

Dynamic Minimum. Translating the minimum rule to one with justifications results in:

min(A)##B \ min(C)##D <=> A<C | union([B,D],E), rem(min(C)##D)##E.

The following shows an example query and the resulting answer in SWI-Prolog:
?- min(1)##[A], min(0)##[B], min(2)##[C].
rem(min(1)##[A])##[A,B], rem(min(2)##[C])##[B,C],
min(0)##[B].

The constraint min(0) remained. This means that 0 is the minimum. The constraints min(1) and min(2) have been removed and are now remembered. Both have been removed by the constraint with justification B, i.e. min(0).

We now logically retract with killc the constraint min(1) at the end of the query. The killr rule applies and removes rem(min(1)##[A])##[A,B]. (In the rule body, the justification A is bound to r - to no effect, since there are no other constraints with this justification.)

?- min(1)##[A], min(0)##[B], min(2)##[C], killc(min(1)).
rem(min(2)##[C])##[B,C],
min(0)##[B].


What happens if we remove the current minimum min(0)? The constraint min(0) is removed by binding the justification B. The two rem constraints for min(1) and min(2) involve B as well, so these two constraints are re-introduced and react with each other. Note that min(2) is now removed by min(1) (before it was min(0)). The result is the updated minimum, which is 1.

?- min(1)##[A], min(0)##[B], min(2)##[C], killc(min(0)).
rem(min(2)##[C])##[A,C],
min(1)##[A].


Dynamic Shortest Path. Given a graph with directed arcs e(X,Y), we compute the lengths of the shortest paths between all pairs of reachable nodes:

 % keep shorter of two paths from X to Y
pp @ p(X,Y,L1) \ p(X,Y,L2) <=> L1=<L2 | true.
% edges have a path of unit length
e  @ e(X,Y) ==> p(X,Y,1).
% extend path in front by an edge
ep @ e(X,Y), p(Y,Z,L) ==> L1 is L+1 | p(X,Z,L1).

The corresponding rules in the translated program are:
pp@p(A,B,C)##D \ p(A,B,E)##F <=> C=<E |
union([D,F],G), rem(p(A,B,E)##F)##G.
e @e(A,B)##C ==> true | union([C],D), p(A,B,1)##D.
ep@e(A,B)##C,p(B,D,E)##F ==> G is E+1 | union([C,F],H),p(A,D,G)##H.


We now use constraints without justifications in queries. Justifications will be added by the addjust rules.

?- e(a,b), e(b,c), e(a,c).
rem(p(a, c, 2)##[A, B])##[A,B,C],
p(a, b, 1)##[A], e(a, b)##[A],
p(b, c, 1)##[B], e(b, c)##[B],
p(a, c, 1)##[C], e(a, c)##[C].

We see that a path of length 2 has been removed by the constraint e(a,c)##[C], which produced a shorter path of length one. We next kill this constraint e(a,c).
?- e(a,b), e(b,c), e(a,c), killc(e(a,c)).
p(a, b, 1)##[A], e(a, b)##[A],
p(b, c, 1)##[B], e(b, c)##[B],
p(a, c, 2)##[A,B].

Its path p(a,c,1) disappears and the removed remembered path p(a,c,2) is re-added. We can see that the justifications of a path are those of the edges in that path. The same happens if we logically retract p(a,c,1) instead of e(a,c).

What happens if we remove p(a,c,2) from the initial query? The killr rule applies. Since the path has two justifications, there are two computations generated by the member predicate. In the first answer, the constraint e(a,b) disappeared; in the second, it is e(b,c). In both cases, the path cannot be computed anymore, i.e. it has been logically retracted.

?- e(a,b), e(b,c), e(a,c), killc(p(a,c,2)).
p(b, c, 1)##[B], e(b, c)##[B],
p(a, c, 1)##[C], e(a, c)##[C]
;
p(a, b, 1)##[A], e(a, b)##[A],
p(a, c, 1)##[C], e(a, c)##[C].
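The answers in this section can be cross-checked by recomputing shortest paths from scratch in Python (a simulation of the three rules under our own encoding; such recomputation is exactly what logical retraction avoids):

```python
from itertools import product

# Recompute all-pairs shortest path lengths by iterating the rules to a
# fixpoint: e (every edge is a path of length 1), ep (prefix a path with
# an edge), pp (keep the shorter of two paths between the same nodes).
def shortest_paths(edges):
    p = {(x, y): 1 for (x, y) in edges}             # rule e
    changed = True
    while changed:
        changed = False
        for (x, y), (y2, z) in product(edges, list(p)):
            if y == y2:
                length = p[(y2, z)] + 1             # rule ep
                if p.get((x, z), length + 1) > length:  # rule pp
                    p[(x, z)] = length
                    changed = True
    return p

# Retracting e(a,c) yields the paths as if the edge had never been added:
full = {("a", "b"), ("b", "c"), ("a", "c")}
assert shortest_paths(full - {("a", "c")}) == \
       {("a", "b"): 1, ("b", "c"): 1, ("a", "c"): 2}
```

This matches the answer above: without e(a,c), the only path from a to c has length 2, which is what the killc query re-adds from the rem constraint.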


## 6 Related Work

The idea of introducing justifications into CHR is not new. The thorough work of Armin Wolf on Adaptive CHR [WGG00] was the first to do so. In contrast to our work, this technically involved approach requires storing detailed information about the rule instances that have been applied in a derivation in order to undo them. Adaptive CHR had a low-level implementation in Java [Wol01], while we give an implementation in CHR itself by a straightforward source-to-source transformation that we prove confluent and correct. Moreover, we prove confluence of the rule scheme for logical retraction with the rules of the given program. The application of adaptive CHR considered dynamic constraint satisfaction problems (DCSP) only, in particular for the implementation of search strategies [Wol05], while we apply our approach to arbitrary algorithms in order to make them fully dynamic.

The issue of search strategies was further investigated by Leslie De Koninck et al. [DKSD08]. They introduce a flexible search framework in CHR (CHR with disjunction) extended with rule and search branch priorities. In their work, justifications are introduced into the semantics of CHR to enable dependency-directed backtracking in the form of conflict-directed backjumping. Our work does not need a new semantics for CHR, nor its extension with disjunction; it rather relies on a source-to-source transformation within the standard semantics.

Our work does not have a particular application of justifications in mind, but rather provides the basis for any type of application that requires dynamic algorithms. There are, however, CHR applications that use justifications.

The work of Jeremy Wazny et al. [SSW03] introduced informally a particular kind of justifications into CHR for the specific application of type debugging and reasoning in Haskell. Justifications correspond to program locations in the given Haskell program. Unlike other work, the constraints in the body of CHR rules have given justifications to which justifications from the rule applications are added at runtime.

The more recent work of Gregory Duck [Duc12] introduces SMCHR, a tight integration of CHR with a Boolean Satisfiability (SAT) solver for quantifier-free formulae including disjunction and negation as logical connectives. It is mentioned that for clause generation, SMCHR supports justifications for constraints that include syntactic equality constraints between variables. A dynamic unification algorithm using justifications has been considered in [Wol98].

## 7 Conclusions

In this paper, the basic framework for CHR with justifications (CHRᴶ) has been established and formally analyzed. We defined a straightforward source-to-source program transformation that introduces justifications into CHR as a conservative extension. Justifications enable logical retraction of constraints. If a constraint is retracted, the computation continues as if the constraint never was introduced. We proved confluence and correctness of the two-rule scheme that encodes the logical retraction. We presented a prototype implementation that is available online together with two classical examples.

Future work could proceed along three equally important lines: investigating dynamic algorithms, the implementation, and application domains of CHR with justifications. First, we would like to research how logical as well as classical algorithms implemented in CHR behave when they become dynamic. While our approach does not require confluence of the given program, the arbitrary re-introduction of removed constraints may lead to unwanted orders of rule applications in non-confluent programs. Second, we would like to improve, optimize and benchmark the implementation [Fru17]. Currently, the entire history of removed constraints is stored. It could suffice to remember only a partial history if only certain constraints can be retracted, or if partial recomputation proves more efficient for some constraints. A lower-level implementation could benefit from the propagation history that comes with the application of propagation rules in most CHR implementations. Third, we would like to extend the rule scheme to support typical application domains of justifications: explanation of derived constraints (for debugging), detection and repair of inconsistencies (for error diagnosis), and implementation of nonmonotonic logical behaviors (e.g. default logic, abduction, defeasible reasoning). Logical retraction of constraints can also lead to the reversal of computations; as a first step, the related literature on reversibility should be explored.

Acknowledgements. We thank Daniel Gall for implementing the online transformation tool for CHR. We also thank the anonymous reviewers for their helpful suggestions on how to improve the paper.

## References

• [Abd97] Slim Abdennadher. Operational semantics and confluence of constraint propagation rules. In G. Smolka, editor, CP ’97: Proc. Third Intl. Conf. Principles and Practice of Constraint Programming, volume 1330 of LNCS, pages 252–266. Springer, 1997.
• [AFM99] Slim Abdennadher, Thom Frühwirth, and Holger Meuss. Confluence and semantics of constraint simplification rules. Constraints, 4(2):133–165, 1999.
• [Bet14] Hariolf Betz. A unified analytical foundation for constraint handling rules. BoD, 2014.
• [BM06] Kenneth N. Brown and Ian Miguel. Uncertainty and change. In Handbook of Constraint Programming, chapter 21, pages 731–760. Elsevier, 2006.
• [DKSD08] Leslie De Koninck, Tom Schrijvers, and Bart Demoen. A flexible search framework for CHR. In Constraint Handling Rules — Current Research Topics, volume 5388 of LNAI, pages 16–47. Springer, 2008.
• [Duc12] Gregory J. Duck. SMCHR: Satisfiability modulo constraint handling rules. Theory and Practice of Logic Programming, 12(4-5):601–618, 2012.
• [Frü09] Thom Frühwirth. Constraint Handling Rules. Cambridge University Press, 2009.
• [Frü15] Thom Frühwirth. Constraint handling rules – what else? In Rule Technologies: Foundations, Tools, and Applications, pages 13–34. Springer International Publishing, 2015.
• [Fru17] Thom Frühwirth. Implementation of logical retraction in constraint handling rules with justifications. In 21st International Conference on Applications of Declarative Programming and Knowledge Management (INAP 2017), September 2017.
• [McA90] David A. McAllester. Truth maintenance. In Proc. AAAI-90, pages 1109–1116, 1990.
• [RBF09] Frank Raiser, Hariolf Betz, and Thom Frühwirth. Equivalence of CHR states revisited. In F. Raiser and J. Sneyers, editors, CHR ’09, pages 33–48. K.U.Leuven, Dept. Comp. Sc., Technical report CW 555, July 2009.
• [SSW03] Peter J. Stuckey, Martin Sulzmann, and Jeremy Wazny. Interactive type debugging in Haskell. In Proceedings of the 2003 ACM SIGPLAN Workshop on Haskell, pages 72–83. ACM, 2003.
• [WGG00] Armin Wolf, Thomas Gruenhagen, and Ulrich Geske. On the incremental adaptation of CHR derivations. Applied Artificial Intelligence, 14(4):389–416, 2000.
• [Wol98] Armin Wolf. Adaptive solving of equations over rational trees. In Principles and Practice of Constraint Programming, page 475. Springer, Berlin, Heidelberg, 1998.
• [Wol01] Armin Wolf. Adaptive constraint handling with CHR in Java. In International Conference on Principles and Practice of Constraint Programming, pages 256–270. Springer, 2001.
• [Wol05] Armin Wolf. Intelligent search strategies based on adaptive constraint handling rules. Theory and Practice of Logic Programming, 5(4-5):567–594, 2005.