# The unified higher-order dependency pair framework

In recent years, two higher-order extensions of the powerful dependency pair approach for termination analysis of first-order term rewriting have been defined: the static and the dynamic approach. Both approaches offer distinct advantages and disadvantages. However, a grand unifying theory is thus far missing, and both approaches lack the modularity present in the dependency pair framework commonly used in first-order rewriting. Moreover, neither approach can be used to prove non-termination. In this paper, we aim to address this gap by defining a higher-order dependency pair framework, integrating both approaches into a shared formal setup. The framework has been implemented in the (fully automatic) higher-order termination tool WANDA.


## 1. Introduction

Term rewriting [5, 45] is an important area of logic, with applications in many different areas of computer science [6, 13, 17, 21, 24, 33, 38]. Higher-order term rewriting – which extends traditional first-order term rewriting with higher-order types and binders as in the λ-calculus – offers a formal foundation of functional programming and a tool for equational reasoning in higher-order logic. A key question in the analysis of both first- and higher-order term rewriting is termination, or strong normalisation – both for its own sake, and as part of confluence and equivalence analysis.

In first-order term rewriting, a highly effective method to prove termination (both manually and automatically) is the dependency pair (DP) approach [4]. This approach has been extended to the DP framework [18, 20], a highly modular methodology which new techniques for proving termination and non-termination can easily be plugged into in the form of processors.

In higher-order term rewriting, two adaptations of the DP approach have been defined: dynamic [42, 30] and static [8, 41, 32, 43]. Each approach has distinct costs and benefits; while dynamic DPs are more broadly applicable, analysis of static DPs is often easier.

This difference can be problematic for defining new techniques based on the DP approach, as they must be proved correct for both dynamic and static DPs. This problem is exacerbated by the existence of multiple styles of higher-order rewriting, such as Algebraic Functional Systems (AFSs) [25] (used in the annual Termination Competition [47]) and Higher-order Rewrite Systems (HRSs) [36, 34] (used in the annual Confluence Competition [12]), which have similar but not fully compatible syntax and semantics. What is more, neither approach offers the modularity and extendability of the DP framework, nor can they be used to prove non-termination. Both approaches are less general than they could be. For example, most versions of the static approach use a restriction which does not consider strictly positive inductive types [10]. The dynamic approach is sound for all systems, but only complete for left-linear ones – that is, a non-left-linear AFS may have an infinite dependency chain following [29, 30] even if it is terminating; the static approach is incomplete in this sense for even more systems.

In this paper, we define a higher-order dependency pair framework, which combines the dynamic and static styles, is fully modular, and can be used for both termination and non-termination without restrictions. For broad applicability, we use a new rewriting formalism, AFSMs, designed to capture several flavours of higher-order rewriting, including AFSs and HRSs with a pattern restriction. We have dropped the restriction to left-linear systems for completeness of dynamic DPs and liberalised both the restrictions to use static DPs and to obtain a complete analysis if we do. In addition, we introduce a series of new techniques (“processors”) to provide key termination techniques within this framework.

This is a foundational paper, focused on defining a general theoretical framework for higher-order termination analysis rather than implementation concerns. We have, however, implemented most results in the fully automatic termination tool WANDA [27].

#### Related Work.

There is a vast body of work in the first-order setting regarding the DP approach [4] and framework [18, 20, 22]. The approach for context-sensitive rewriting [2] is somewhat relevant, as it also admits collapsing DPs (in their case, a DP with a variable as its right-hand side) and therefore requires some similar adaptations of common techniques. However, beyond this, the two different settings are not really comparable.

The static DP approach is discussed in, e.g., [32, 41, 43, 31]. This approach can be used only for plain function passing (PFP) systems. The definition of PFP is not fixed, as later papers sometimes weaken earlier restrictions or transpose them to a different rewriting formalism, but always concerns the position of higher-order variables in the left-hand sides of rules. These works include non-pattern HRSs [32, 43] and polymorphic rewriting [31], which we do not consider, but do not employ formative rules or meta-variable conditions, which we do. Importantly, these methods do not consider strictly positive inductive types, which could be used to significantly broaden the PFP restriction. Such types are considered in an early paper which defines a variation of static higher-order dependency pairs [8] based on a computability closure [10, 9]. However, this work carries different restrictions (e.g., DPs must be type-preserving and not introduce fresh variables) and provides only one analysis technique (reduction pairs) on these DPs. Moreover, although the proof method is based on Tait and Girard’s notion of computability [44], the approach thus far does not exploit this beyond the way the initial set of DPs is obtained. We will present a variation of PFP for the AFSM formalism that is strictly more permissive than earlier definitions as applied to AFSMs, and our framework exploits the inherent computability by introducing a flag that can be used by the static subterm criterion processor (Thm. E.6). In addition, we allow static DPs to also be used for non-termination and add features such as formative rules.

Unlike the static approach, the dynamic approach [3, 29, 30] is not restricted, but it allows for collapsing DPs of the form with a variable, which can be difficult to handle. Thus far, this approach has been incomplete for non-left-linear systems due to bound variables that become free in a dependency pair. Here, we repair that problem by using a rewriting formalism that separates variables used for matching from those used as binders.

Both static and dynamic approaches actually lie halfway between the original “DP approach” of first-order rewriting and a full DP framework as in [18, 20] and the present work. Most of these works [29, 30, 31, 32, 43] prove “non-loopingness” or “chain-freeness” of a set of DPs through a number of theorems. However, there is no concept of DP problems, and the set of rules cannot be altered. They also fix assumptions on dependency chains – such as minimality [32] or being “tagged” [30] – which frustrate extendability and are more naturally dealt with in a DP framework using flags.

The clear precursor of the present work is [30], which provides such a halfway framework for dynamic DPs, introduces a notion of formative rules, and briefly translates a basic form of static DPs to the same setting. Our formative reductions consider the shape of reductions rather than the rules they use, and they can be used as a flag in the framework to gain additional power in other processors. Our integration of the two styles also goes deeper, allowing for static and dynamic DPs to be used in the same proof and giving a complete method using static DPs for a larger group of systems.

In addition, we have several completely new features, including meta-variable conditions (an essential ingredient for a complete method), new flags to DP problems, and various processors including ones that modify collapsing DPs.

For a more elaborate discussion of the static and dynamic DP approaches, we refer to [30, 28].

The paper is organised as follows: Sec. 2 introduces higher-order rewriting using AFSMs and recapitulates computability. In Sec. 3 we state dynamic and static DPs for AFSMs. Sec. 4 formulates the DP framework and a number of DP processors for existing and new termination proving techniques. Sec. 5 concludes. A discussion of the translation of existing static DP approaches to the AFSM formalism, as well as detailed proofs for all results in this paper, are available in the appendix. In addition, many of the results have been informally published in the second author’s PhD thesis [28].

## 2. Preliminaries

In this section, we first fix our notation by introducing the AFSM formalism. Although AFSMs are not one of the standard formalisms of higher-order rewriting, they combine features from various forms of higher-order rewriting and can be seen as a form of IDTSs [7] which includes application. Then we present a definition of computability, a technique often used in higher-order termination proofs.

### 2.1. Higher-order term rewriting using AFSMs

Unlike first-order term rewriting, there is no single, unified approach to higher-order term rewriting, but rather a number of similar but not fully compatible systems aiming to combine term rewriting and typed λ-calculi. For generality, we will use Algebraic Functional Systems with Meta-variables: a formalism which admits translations from the main formats of higher-order term rewriting.

[Simple types] We fix a set S of sorts. All sorts are simple types, and if σ and τ are simple types, then so is σ → τ.

We let → be right-associative. All types have a unique form σ1 → ⋯ → σm → ι with ι ∈ S.

[Terms] We fix disjoint sets F of function symbols and V of variables, each symbol equipped with a type. We assume that both F and V contain infinitely many symbols of all types. Terms are expressions s for which a judgement s : σ can be derived for some type σ by:

 (V) x : σ if x : σ ∈ V
 (F) f : σ if f : σ ∈ F
 (@) s t : τ if s : σ → τ and t : σ
 (Λ) λx.s : σ → τ if x : σ ∈ V and s : τ

The λ binds variables as in the λ-calculus; unbound variables are called free, and FV(s) is the set of free variables of s. A term s is closed if FV(s) = ∅. Terms are considered modulo α-conversion. Application (@) is left-associative; abstractions (Λ) extend as far to the right as possible. A term s has type σ if s : σ; it has base type if σ ∈ S. A term s has a subterm t, notation s ⊵ t, if (a) s = t, (b) s = λx.u and u ⊵ t, or (c) s = u v and u ⊵ t or v ⊵ t. Finally, we define head(s) = head(s1) if s = s1 s2, and head(s) = s otherwise.
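As a concrete illustration, the term formers and the free-variable computation can be rendered in code. This is a hypothetical Python encoding, not part of the formalism: terms are tagged tuples, and types are omitted.

```python
# Illustrative sketch (assumed encoding): syntax trees for the four term
# formers (V), (F), (@), (Λ), with free-variable computation.
def fv(t):
    tag = t[0]
    if tag == "var":                       # (V) a variable occurrence
        return {t[1]}
    if tag == "fun":                       # (F) a function symbol
        return set()
    if tag == "app":                       # (@) application s t
        return fv(t[1]) | fv(t[2])
    if tag == "lam":                       # (Λ) λx.s binds x
        return fv(t[2]) - {t[1]}
    raise ValueError(tag)

# λx. f x y: x is bound by the abstraction, y stays free
term = ("lam", "x", ("app", ("app", ("fun", "f"), ("var", "x")), ("var", "y")))
assert fv(term) == {"y"}
```

A term is closed exactly when `fv` returns the empty set, matching the definition above.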

Note that any term has a form t s1 ⋯ sn with n ≥ 0 and t a variable, function symbol, or abstraction. Separate from terms, we use special expressions for matching and rewrite rules:

[Meta-terms and patterns] We fix a set M, disjoint from F and V, of meta-variables; each meta-variable is equipped with a type declaration [σ1 × ⋯ × σk] → τ (where k ≥ 0 and all σi and τ are simple types). Meta-terms are expressions s such that s : σ can be derived for some type σ using (V), (F), (@), (Λ), and (M) below:

(M) Z[s1, …, sk] : τ if Z : [σ1 × ⋯ × σk] → τ ∈ M
and s1 : σ1, …, sk : σk

We call k the minimal arity of Z and write ar(Z) = k. A meta-term is a pattern if it has one of the forms Z[x1, …, xk] with all xi distinct variables; λx.ℓ with x ∈ V and ℓ a pattern; or a ℓ1 ⋯ ℓn with a ∈ F ∪ V and all ℓi patterns (n ≥ 0). FMV(s) is the set of meta-variables occurring in a meta-term s. A pattern ℓ is fully extended if, for all occurrences of an abstraction λx.u in ℓ, the bound variable x is an argument to all meta-variables in u. It is linear if each meta-variable in ℓ occurs exactly once.

Meta-variables are used in early forms of higher-order rewriting (e.g., [1, 26]) and strike a balance between matching modulo β and syntactic matching. Note that earlier definitions do not permit giving a meta-variable more arguments than its minimal arity. We allow this because of applications in the DP framework. However, in all our examples, meta-variable applications in the unmodified rules take the expected number of arguments.

Notationally, we will use x, y, z for variables, Z, F, G, H for meta-variables, a, b for symbols that could be variables or meta-variables, f, g, h or more suggestive notation for function symbols, and s, t, u, v, ℓ, r for (meta-)terms. Types are denoted σ, τ, and sorts ι, κ. We will regularly overload notation and write x ∈ V, f ∈ F or Z ∈ M without stating a type. For meta-terms Z[] we will often omit the brackets, writing just Z. In addition, notational conventions and definitions like FV and closed carry over from terms to meta-terms; a meta-term s is closed if FV(s) = ∅, even if FMV(s) ≠ ∅.

[Substitution] A meta-substitution is a type-preserving function from variables and meta-variables to meta-terms; if then has the form . Let (the domain of ). For meta-variables with and for with , we write if either , or there is such that with not an abstraction and . We let be the meta-substitution with , for , and for . We will also consider meta-substitutions with infinite domain. Even if the domain is infinite, for all in we assume infinitely many variables of all types with .

A substitution is a meta-substitution mapping everything in its domain to terms. The result of applying a meta-substitution to a term is obtained recursively:

 xγ = γ(x) if x ∈ V
 fγ = f if f ∈ F
 (s t)γ = (sγ) (tγ)
 (λx.s)γ = λx.(sγ) if γ(x) = x ∧ x ∉ FV(sγ)

For meta-terms, the result is obtained by the clauses above and:

 (Z[s1, …, sk])γ = t[x1 := s1γ, …, xk := skγ] if γ(Z) = λx1⋯xk.t

Note that for with and , there is always exactly one such that . The result of applying a meta-substitution is well-defined by induction on the multiset , with meta-terms compared by their sizes.

Essentially, applying a meta-substitution with meta-variables in its domain combines a substitution with a β-development. So equals , and equals .
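Restricted to plain variables, the substitution clauses above amount to ordinary capture-avoiding substitution plus β-steps. A minimal sketch, with an assumed tuple encoding of terms and a hypothetical renaming scheme for fresh variables:

```python
# A sketch (hypothetical helper names) of capture-avoiding substitution and
# a root β-step (λx.s) t ⇒ s[x := t], on "var"/"fun"/"app"/"lam" tuples.
import itertools

_fresh = itertools.count()

def fv(t):
    if t[0] == "var": return {t[1]}
    if t[0] == "fun": return set()
    if t[0] == "app": return fv(t[1]) | fv(t[2])
    return fv(t[2]) - {t[1]}          # "lam"

def subst(t, x, s):
    """t[x := s], renaming bound variables to avoid capture."""
    if t[0] == "var":
        return s if t[1] == x else t
    if t[0] == "fun":
        return t
    if t[0] == "app":
        return ("app", subst(t[1], x, s), subst(t[2], x, s))
    y, body = t[1], t[2]
    if y == x:
        return t                      # x is shadowed; nothing to substitute
    if y in fv(s):                    # rename y so it cannot capture s
        z = f"{y}_{next(_fresh)}"
        body = subst(body, y, ("var", z))
        y = z
    return ("lam", y, subst(body, x, s))

def beta(t):
    """One β-step at the root, if the term is a redex (λx.s) u."""
    if t[0] == "app" and t[1][0] == "lam":
        return subst(t[1][2], t[1][1], t[2])
    return t

# (λx. f x) a  reduces to  f a
redex = ("app", ("lam", "x", ("app", ("fun", "f"), ("var", "x"))), ("var", "a"))
assert beta(redex) == ("app", ("fun", "f"), ("var", "a"))
```

The renaming branch is what makes the abstraction clause of the definition side-condition-free in practice: a bound variable that would capture a free variable of the substituted term is first renamed apart.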

[Rules and rewriting] A rule is a pair of closed meta-terms of the same type such that is a pattern of the form with and . A set of rules defines a rewrite relation as the smallest monotonic relation on terms which includes:

 (Rule) ℓδ ⇒R rδ if ℓ ⇒ r ∈ R and δ a substitution on domain FMV(ℓ)
 (Beta) (λx.s) t ⇒R s[x := t]

We say s ⇒β t if s ⇒R t is derived using a (Beta) step. A term s is terminating under ⇒R if there is no infinite reduction s ⇒R s1 ⇒R ⋯, and s is ⇒R-normal if there is no t with s ⇒R t. Note that it is allowed to reduce at any position of a term, even below a λ. A relation is terminating if all terms are terminating under it. A set of rules R is terminating if ⇒R is terminating. The set D of defined symbols consists of those f ∈ F such that a rule f ℓ1 ⋯ ℓk ⇒ r exists; all other symbols are called constructors.
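When every meta-variable has minimal arity 0, the (Rule) case reduces to first-order-style matching followed by instantiation of the right-hand side. A sketch under that simplification (tagged tuples for terms, strings for pattern variables; all names are hypothetical):

```python
# A first-order sketch (assumed encoding) of the (Rule) step: match a
# left-hand side pattern against a term, then instantiate the right-hand side.
def match(pat, term, subst):
    """Extend subst so that pat instantiated by subst equals term, or None."""
    if isinstance(pat, str):                  # a pattern variable
        if pat in subst:
            return subst if subst[pat] == term else None
        return {**subst, pat: term}
    if not isinstance(term, tuple) or pat[0] != term[0] or len(pat) != len(term):
        return None
    for p, t in zip(pat[1:], term[1:]):
        subst = match(p, t, subst)
        if subst is None:
            return None
    return subst

def instantiate(t, subst):
    if isinstance(t, str):
        return subst[t]
    return (t[0],) + tuple(instantiate(a, subst) for a in t[1:])

def root_step(term, rules):
    """Apply the first rule whose left-hand side matches at the root, if any."""
    for lhs, rhs in rules:
        s = match(lhs, term, {})
        if s is not None:
            return instantiate(rhs, s)
    return None

rules = [(("plus", ("zero",), "Y"), "Y"),
         (("plus", ("suc", "X"), "Y"), ("suc", ("plus", "X", "Y")))]
one = ("suc", ("zero",))
assert root_step(("plus", one, one), rules) == ("suc", ("plus", ("zero",), one))
```

The higher-order (Rule) case additionally instantiates meta-variables with abstractions, which the β-development of Def. 2.1 then contracts; this sketch covers only the syntactic-matching core.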

Note that R is allowed to be infinite – which is useful for instance to model polymorphic systems. Also, right-hand sides of rules do not have to be in β-normal form. While this is rarely used in practical examples, non-β-normal rules may arise through transformations, such as the one used in our Def. 3.2.

Consider a signature with symbols map, cons and nil, and the following rules R:

 map (λx.Z[x]) nil ⇒ nil
 map (λx.Z[x]) (cons H T) ⇒ cons Z[H] (map (λx.Z[x]) T)

Then . Note that the bound variable does not need to occur in the body of to match . However, note also that a term like cannot be reduced, because does not instantiate . We could alternatively consider the rules:

 map Z nil ⇒ nil
 map Z (cons H T) ⇒ cons (Z H) (map Z T)

Here, Z has a functional type instead of a type declaration taking an argument, and we use explicit application. Then . However, we will often need explicit β-reductions; e.g., .
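To see the computational content of the map rules, one can mimic them in a host language, letting host-level function application play the role of the meta-application Z[H]. This is an illustrative encoding only, not the paper's machinery:

```python
# A toy evaluator (assumed encoding) for the two map rules; the functional
# argument is a host-language function, so Z[H] becomes ordinary application.
def rewrite_map(f, lst):
    if lst == ("nil",):                       # map (λx.Z[x]) nil ⇒ nil
        return ("nil",)
    _, h, t = lst                             # lst = ("cons", H, T)
    # map (λx.Z[x]) (cons H T) ⇒ cons Z[H] (map (λx.Z[x]) T)
    return ("cons", f(h), rewrite_map(f, t))

xs = ("cons", 1, ("cons", 2, ("nil",)))
assert rewrite_map(lambda x: x + 1, xs) == ("cons", 2, ("cons", 3, ("nil",)))
```

In the "alternative" applicative rules, the step cons (Z H) … would leave an explicit β-redex when Z is instantiated by an abstraction, which is exactly why explicit β-reductions are needed there.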

For the set of terms to analyse for (non-)termination, it suffices to consider a minimal number of arguments for each function symbol, induced by the rewrite rules of the given AFSM. To capture this minimal number of arguments, we introduce arity functions.

[Arity] An arity function is a function with for all . A meta-term respects if any occurring in is applied to at least arguments. respects if and respect for all .

For a fixed set of function symbols and arity function , we say that the minimal arity of in is , and the maximal arity of is the unique number such that . The set of arity-respecting terms is denoted .

An AFSM is a triple (F, R, ar); types of (meta-)variables can be derived from context. However, if R respects an arity function ar, then whenever there is any non-terminating term, there is one that respects ar (see Appendix A). So for fixed R we can set the arity function to give the greatest possible values that R respects, and we do not need to give ar explicitly (choosing the greatest possible minimal arities is always useful, as it requires a termination proof for fewer terms). Thus, we will typically speak of an AFSM (F, R).
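Computing those greatest minimal arities from a concrete rule set can be sketched as follows: for each function symbol, take the fewest arguments it is applied to anywhere in the rules (a hypothetical encoding; terms are tuples tagged "fun", "var", "app" for binary application, or "lam"):

```python
# A sketch (assumed encoding) of choosing the greatest minimal arities that
# a rule set respects.
def occurrences(t, acc):
    """Record the argument count of every function-symbol occurrence in t."""
    spine, nargs = t, 0
    while spine[0] == "app":           # walk down the application spine
        occurrences(spine[2], acc)     # arguments are scanned on their own
        spine, nargs = spine[1], nargs + 1
    if spine[0] == "fun":
        acc.setdefault(spine[1], []).append(nargs)
    elif spine[0] == "lam":
        occurrences(spine[2], acc)

def minimal_arities(rules):
    acc = {}
    for lhs, rhs in rules:
        occurrences(lhs, acc)
        occurrences(rhs, acc)
    return {f: min(ns) for f, ns in acc.items()}

def app(s, t): return ("app", s, t)

# the applicative map rules: map Z nil ⇒ nil and
# map Z (cons H T) ⇒ cons (Z H) (map Z T)
MAP, CONS, NIL = ("fun", "map"), ("fun", "cons"), ("fun", "nil")
Z, H, T = ("var", "Z"), ("var", "H"), ("var", "T")
rules = [
    (app(app(MAP, Z), NIL), NIL),
    (app(app(MAP, Z), app(app(CONS, H), T)),
     app(app(CONS, app(Z, H)), app(app(MAP, Z), T))),
]
assert minimal_arities(rules) == {"map": 2, "cons": 2, "nil": 0}
```

Here nil gets minimal arity 0 because it occurs unapplied, while map and cons are always applied to two arguments, so terms where they are partially applied need not be analysed.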

Note that, while we have suggestively used the same notation for the minimal arity of function symbols and meta-variables, the minimal arity of meta-variables is fixed by their declaration.

[Ordinal recursion] Let F ⊇ {0, s, lim, rec} and R be given by:

 rec 0 K F G ⇒ K
 rec (s X) K F G ⇒ F X (rec X K F G)
 rec (lim H) K F G ⇒ G H (λm.rec (H m) K F G)

Then we can assume that ar(rec) = 4 without explicitly giving ar.

Observant readers may also notice that not every type is inhabited by the given constructors. However, following Def. 2.1, F contains infinitely many symbols of all types; thus, constructors of all sorts (with minimal arity 0) are implicitly present.

The two most common formalisms in the context of termination analysis of higher-order rewriting are algebraic functional systems (AFSs) and higher-order rewriting systems [36, 34] (HRSs), often used with a pattern restriction. AFSs are very similar to our AFSMs, but they use variables for matching rather than meta-variables; this is trivially translated to the AFSM format, giving rules where all meta-variables have minimal arity 0, like the “alternative” rules in Ex. 2.1. HRSs use matching modulo βη, but the common restriction to pattern HRSs can be directly translated into AFSMs, provided terms are β-normalised after every reduction step. Even without strategy restrictions, termination of the obtained AFSM still implies termination of the original HRS; for second-order systems, termination is equivalent. AFSMs can also naturally encode CRSs [26] and several applicative systems (cf. [28, Chapter 3]).

### 2.2. Computability

A common technique in higher-order termination is Tait and Girard’s computability notion [44]. There are several ways to define computability predicates; here we follow, e.g., [7, 10, 11, 9] in considering accessible meta-variables using strictly positive inductive types. The definition presented below is adapted from these works, both to account for the altered formalism and to introduce (and obtain termination of) a relation that we will use in Thm. E.6. This allows for a minimal presentation that avoids the use of ordinals that would otherwise be needed to obtain .

To define computability, we use the notion of an RC-set:

A set of reducibility candidates, or RC-set, for a rewrite relation of an AFSM is a set of base-type terms such that: every term in is terminating under ; is closed under (so if and then ); if with or with , and for all with we have , then .

We define -computability for an RC-set by induction on types: is -computable if (); is -computable if for all that are -computable, is -computable.

The traditional notion of computability is obtained by taking for the set of all terminating base-type terms. However, we can do better, using the notion of accessible arguments, applied to termination analysis also in the General Schema [10], the Computability Path Ordering [11], and the Computability Closure [9].

[Accessible arguments] We fix a quasi-ordering on with well-founded strict part . For (with ) and sort , let if and for all , and let if and for all . (Here corresponds to “ occurs only positively in ” in [7, 10, 11].)

For , let . For , let has the form with. We write if either , or and , or with and for some .

Consider a quasi-ordering such that . In Ex. 2.1, we then have . Therefore, , which gives .

Let if both sides have base type, , and all are -computable. There is an RC-set such that is terminating under if then is -computable for all .

###### Proof sketch.

This follows the proof in, e.g., [10, 11], defining as the fixpoint of a monotone function operating on RC-sets.

The full proof is available in Appendix B. ∎

## 3. Higher-order dependency pairs

In this section we transpose the definitions of dynamic and static dependency pairs [42, 30, 8, 41, 32, 43] to AFSMs and thus formulate them in a single unified language. We add the new features of meta-variable conditions, formative reductions, and computable chains.

### 3.1. Common definitions

Although we keep the first-order terminology of dependency pairs, the presence of meta-variables makes it more natural to use triples.

[Dependency Pair] A dependency pair (DP) is a triple , where is a closed pattern with , is a meta-term, and is a set of meta-variable conditions: pairs indicating that regards its argument. A substitution respects a set of meta-variable conditions if for all in we have with . DPs will be used only with substitutions that respect their meta-variable conditions. We call the DP collapsing if has the form with . A set of DPs is collapsing if it contains a collapsing DP.

There are two approaches to generate DPs, originating from distinct lines of work [30, 32]. As in the first-order setting, both approaches employ marked symbols:

[Marked symbols] Define , and . For a meta-term , let if with and ; otherwise.

Note that is simply if .

Moreover, we will consider candidates. In the first-order setting, these are subterms of the right-hand sides of rules whose root symbol is a defined symbol. In the current setting, we must also consider meta-variables, as well as rules whose right-hand side is not β-normal.

[-reduced-sub-meta-term, , ] A meta-term has a -reduced-sub-meta-term (shortly, BRSMT), notation , if there exists a set of meta-variable conditions such that . Here holds if at least one of the following holds:

• for some

• and

• and

• and for some , with an abstraction, meta-variable application, or element of

• and for some such that

Essentially, means that can be reached from by taking -reductions at the root and “subterm”-steps, and must be in whenever we pass into argument of a meta-variable . We also do not include subterms of in . We use -reduced-sub-meta-terms in the following definition of candidates:

[Candidates] For a meta-term , the set of candidates of consists of those pairs with (a) , (b) has either the form with and , or the form , (c) does not have the form with all distinct variables, and (d) there is no with .

In AFSMs where all meta-variables have minimal arity 0, the set for a meta-term consists of the pairs where is a BRSMT of that has either the form with a meta-variable and , or with and .

In the AFSM of Ex. 2.1, the set therefore consists of and and as well as .

If some meta-variables do take arguments, the forms considered in Ex. 3.1 do not suffice: we must also consider candidates such as . In addition, the meta-variable conditions matter: candidates are pairs where contains exactly those pairs where we pass through the argument of to reach .

Consider an AFSM with the signature from Ex. 2.1 but a rule using meta-variables with larger minimal arities:

 rec (lim (λn.H[n])) K (λxn.F[x,n]) (λfg.G[f,g]) ⇒G[λn.H[n], λm.rec H[m] K (λxn.F[x,n]) (λfg.G[f,g])]

The candidates of the right-hand side are:

• and

Note that for instance is not the source of a candidate, as is a variable and has minimal arity (as the left-hand side of the rule shows). Note also that cannot be partially applied, so there is no counterpart to the candidate in Ex. 3.1.

Dynamic DPs also include collapsing DPs. This makes the notion of chains somewhat more complicated than its first-order analogue.

[Dependency chain] Let be a set of DPs and a set of rules. An infinite -dependency chain (or just -chain) is a sequence where each and all are terms, such that for all :

1. if , then and either (a) and , or (b) and for some with and but .

2. if then there exists a substitution on domain such that maps all variables in to fresh variables, , and:

1. if is an application or symbol , then

2. if and , then for some non-variable subterm of such that

3. for all : if then

3. or we can write with , and each

Both cases 1 and 2b essentially perform a β-step and then mark a specific subterm of the result: this subterm previously occurred between a λ-abstraction and an occurrence of its bound variable. This often makes it possible to use reduction triples that do not satisfy the subterm property, as observed in [30] and Thm. E.2.

Dependency chains can exhibit some particular properties:

[Minimal chain, formative chain, formative reduction] A -chain is minimal if the strict subterms of all are terminating under . It is formative if for all with having the form , the reduction is -formative.

Here, for a pattern , substitution and term , a reduction is -formative if one of the following statements holds:

• is not a fully extended linear pattern

• is a meta-variable application and

• and with and each by an -formative reduction

• and and by an -formative reduction

• and by an -formative reduction

• is not a meta-variable application, and there exist and and meta-variables () such that by an -formative reduction, and by an -formative reduction.

Formative reductions are used as a proof technique in [30] and are formally introduced for the first-order DP framework in [16]. The property will be essential in our Theorems E.2 and E.5.

### 3.2. Dynamic higher-order dependency pairs

With these preparations, we move on to dynamic DPs. Since rules of functional type sometimes cause non-termination only in a certain applicative context (e.g., if , then is terminating, but is not), we use an extended set of rules that includes applicative contexts, using a variant of η-saturation [23].

[] Let . Now .

###### Remark .

The corresponding definition in [28] also excludes all DPs if already contains a DP – so most of those DPs generated from rules in . For a simpler definition, we have chosen not to do so. Instead of excluding these DPs immediately, we can remove them afterwards using a processor (see Thm. C in the appendix).

Consider an AFSM with and . Then . Then consists of:

 deriv♯ (λx.sin F[x]) ⇛ deriv (λx.F[x]) y (∅)
 deriv♯ (λx.sin F[x]) ⇛ deriv♯ (λx.F[x]) (∅)
 deriv (λx.sin F[x]) Y ⇛ deriv (λx.F[x]) Y (∅)
 deriv (λx.sin F[x]) Y ⇛ deriv♯ (λx.F[x]) (∅)
 deriv (λx.sin F[x]) Y ⇛ F[Y] (∅)

The first two DPs come from , the last three from .

As observed before, the set of dynamic DPs may contain collapsing dependency pairs, in contrast to the first-order DP framework. This is somewhat problematic for extending techniques like the subterm criterion or the dependency graph processor, which rely on the shape of the right-hand sides of DPs.

For the first two rules in Ex. 2.1,

 { map♯ (λx.Z[x]) (cons H T) ⇛ map♯ (λx.Z[x]) T (∅),
   map♯ (λx.Z[x]) (cons H T) ⇛ Z[H] (∅) }
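For intuition, the first-order core of DP generation – mark the root of the left-hand side and of every subterm of the right-hand side headed by a defined symbol – can be sketched as follows. The collapsing DP with right-hand side Z[H] arises from the meta-variable case, which this first-order simplification omits (all encodings here are hypothetical):

```python
# First-order intuition (simplified from the higher-order definition): the
# DPs of a rule f(ℓ…) ⇒ r are f♯(ℓ…) ⇛ g♯(t…) for every subterm g(t…)
# of r whose root g is a defined symbol. Terms are (symbol, arg1, ..., argn)
# tuples; plain strings are rule variables.
def subterms(t):
    yield t
    if isinstance(t, tuple):
        for arg in t[1:]:
            yield from subterms(arg)

def dependency_pairs(rules):
    defined = {lhs[0] for lhs, _ in rules}
    dps = []
    for lhs, rhs in rules:
        for s in subterms(rhs):
            if isinstance(s, tuple) and s[0] in defined:
                dps.append(((lhs[0] + "#",) + lhs[1:], (s[0] + "#",) + s[1:]))
    return dps

# map(Z, cons(H, T)) ⇒ cons(ap(Z, H), map(Z, T)) yields one non-collapsing DP
rules = [(("map", "Z", ("cons", "H", "T")),
          ("cons", ("ap", "Z", "H"), ("map", "Z", "T")))]
assert dependency_pairs(rules) == [
    (("map#", "Z", ("cons", "H", "T")), ("map#", "Z", "T"))
]
```

The `#` suffix plays the role of the marking ♯; in the higher-order setting the meta-application Z[H] additionally contributes the collapsing DP shown above.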

Key to the DP framework is the relationship between dependency chains and termination: an AFSM with rules R is terminating if and only if there is no infinite dependency chain. Indeed, we can restrict attention to specific kinds of chains, following Def. 3.1.

[Thm. 6.44 in [28]] If is non-terminating, then there is an infinite minimal formative -chain. If there is an infinite -chain, then is non-terminating.

###### Proof sketch.

The proof of the first claim follows the proof of [30, Thm. 5.7]: we select a minimal non-terminating term (MNT) (one all of whose strict subterms terminate) and an infinite reduction starting in it. Then we stepwise build an infinite minimal dependency chain as follows. If with , then also is non-terminating; we continue with an MNT subterm of . Otherwise and there is such that by reductions in the , and is still non-terminating. We can identify a candidate of such that respects and is an MNT subterm of ; we continue with . For the formative property, we note that if and terminates, then by an -formative reduction for some where each ; this follows by induction first on using , second on the reduction length.

For the second claim, we show by induction on the definition of that implies for all substitutions which respect ; thus, any infinite -chain induces an infinite -reduction, which contradicts termination of .

The full proof is available in Appendix C. ∎

Thm. C is similar to [30, Thm. 5.7], but provides progress by considering AFSMs and meta-variable conditions, and by regarding formative chains; also, in [30] the second statement holds only if all left-hand sides in are linear, as their definition of replaces fresh variables in the right-hand sides of DPs by constants. Here, this is not needed due to the distinction between variables and meta-variables.

[Encoding the untyped λ-calculus] Consider an AFSM with