A static higher-order dependency pair framework

02/15/2019, by Carsten Fuhs, et al.

We revisit the static dependency pair method for proving termination of higher-order term rewriting and extend it in a number of ways: (1) We introduce a new rewrite formalism designed for general applicability in termination proving of higher-order rewriting, Algebraic Functional Systems with Meta-variables. (2) We provide a syntactically checkable soundness criterion to make the method applicable to a large class of rewrite systems. (3) We propose a modular dependency pair framework for this higher-order setting. (4) We introduce a fine-grained notion of formative and computable chains to render the framework more powerful. (5) We formulate several existing and new termination proving techniques in the form of processors within our framework. The framework has been implemented in the (fully automatic) higher-order termination tool WANDA.


1 Introduction

Term rewriting [4, 47] is an important area of logic, with applications in many different areas of computer science [5, 12, 17, 22, 24, 35, 40]. Higher-order term rewriting – which extends traditional first-order term rewriting with higher-order types and binders as in the λ-calculus – offers a formal foundation of functional programming and a tool for equational reasoning in higher-order logic. A key question in the analysis of both first- and higher-order term rewriting is termination; both for its own sake, and as part of confluence and equivalence analysis.

In first-order term rewriting, a hugely effective method for proving termination (both manually and automatically) is the dependency pair (DP) approach [3]. This approach has been extended to the DP framework [19, 21], a highly modular methodology into which new techniques for proving termination and non-termination can easily be plugged in the form of processors.

In higher-order term rewriting, two adaptations of the DP approach have been defined: dynamic [44, 30] and static [7, 43, 33, 45, 31, 32] dependency pairs. Each approach has distinct costs and benefits; while dynamic DPs are more broadly applicable, static DPs often allow for more powerful analysis techniques. However, neither approach offers the modularity and extendability of the DP framework. They also cannot be used to prove non-termination. Another problem is that these approaches are defined on different formalisms of higher-order rewriting, which means that for all results, certain language features are not available.

In this paper, we will address these issues for the static DP approach by extending this approach to a full higher-order dependency pair framework for both termination and non-termination analysis. For broad applicability, we will introduce a new rewriting formalism, AFSMs, designed to capture several flavours of higher-order rewriting, including AFSs [25] (used in the annual Termination Competition [49]) and pattern HRSs [38, 36] (used in the annual Confluence Competition [11]). To demonstrate the versatility and power of this methodology, we will also define various processors within the framework – both adaptations of existing processors from the literature and entirely new ones.

Detailed contributions. We will reformulate the results of [7, 43, 33, 45, 31] into a dependency pair framework for AFSMs. In doing so, we will instantiate the applicability restriction of [31] to a very liberal syntactic condition, and add two new flags to track properties of DP problems: one completely new, one proposed in earlier work by the authors for the first-order DP framework [16]. We will also provide eight processors for reasoning within this framework: four translations of techniques previously defined for static DP approaches, three adaptations of techniques for first-order or dynamic DPs, and one completely new.

This is a foundational paper, focused on defining a general theoretical framework for higher-order termination analysis using dependency pairs rather than questions of implementation. We have, however, implemented most of these results in the fully automatic termination analysis tool WANDA [27].

Related Work. There is a vast body of work in the first-order setting regarding the DP approach [3] and framework [19, 21, 23]. We have drawn from the ideas in these works for the core structure of the higher-order framework, but have added some new features of our own and adapted results to the higher-order setting.

There is no true higher-order DP framework yet: both static and dynamic approaches actually lie halfway between the original “DP approach” of first-order rewriting and a full DP framework as in [19, 21]. Most of these works [29, 30, 31, 33, 45] prove “non-loopingness” or “chain-freeness” of a set of DPs through a number of theorems. Yet, there is no concept of DP problems, and the set of rules cannot be altered. They also fix assumptions on dependency chains – such as minimality [33] or being “tagged” [30] – which frustrate extendability and are more naturally dealt with in a DP framework using flags.

The static DP approach for higher-order term rewriting is discussed in, e.g., [33, 43, 45]. The approach is limited to plain function passing (PFP) systems. The definition of PFP has been made more liberal in later papers, but always concerns the position of higher-order variables in the left-hand sides of rules. These works include non-pattern HRSs [33, 45], which we do not consider, but do not employ formative rules or meta-variable conditions, or consider non-termination, which we do. Importantly, they do not consider strictly positive inductive types, which could be used to significantly broaden the PFP restriction. Such types are considered in an early paper which defines a variation of static higher-order dependency pairs [7] based on a computability closure [9, 8]. However, this work carries different restrictions (e.g., DPs must be type-preserving and not introduce fresh variables) and considers only one analysis technique (reduction pairs).

Definitions of DP approaches for functional programming also exist [31, 32], which consider applicative systems with ML-style polymorphism. These works also employ a much broader, semantic definition than PFP, which is actually more general than the syntactic restriction we propose here. However, like the static approaches for term rewriting, they do not truly exploit the computability [46] properties inherent in this restriction: it is only used for the initial generation of dependency pairs. In the present work, we will take advantage of our exact computability notion by introducing a flag that can be used by the computable subterm criterion processor (Thm. 0.C.7) to handle benchmark systems that would otherwise be beyond the reach of static DPs. Also in these works, formative rules, meta-variable conditions and non-termination are not considered.

Regarding dynamic DP approaches, a precursor of the present work is [30], which provides a halfway framework (methodology to prove “chain-freeness”) for dynamic DPs, introduces a notion of formative rules, and briefly translates a basic form of static DPs to the same setting. Our formative reductions consider the shape of reductions rather than the rules they use, and they can be used as a flag in the framework to gain additional power in other processors. The adaptation of static DPs in [30] was very limited, and did not for instance consider strictly positive inductive types or rules of functional type.

For a more elaborate discussion of both static and dynamic DP approaches in the literature, we refer to [30] and the second author’s PhD thesis [28].

The paper is organised as follows: Sec. 2 introduces higher-order rewriting using AFSMs and recapitulates computability. In Sec. 3 we impose restrictions on the input AFSMs for which our framework is soundly applicable. In Sec. 4 we define static DPs for AFSMs, and derive the key results on them. Sec. 5 formulates the DP framework and a number of DP processors for existing and new termination proving techniques. Sec. 7 concludes. Detailed proofs for all results in this paper are available in the appendix. In addition, many of the results have been informally published in the second author’s PhD thesis [28].

2 Preliminaries

In this section, we first define our notation by introducing the AFSM formalism. Although AFSMs are not themselves one of the standard formalisms of higher-order rewriting, they combine features from various forms of higher-order rewriting and can be seen as a form of IDTSs [6] which includes application. We will finish with a definition of computability, a technique often used in higher-order termination methods.

2.1 Higher-order term rewriting using AFSMs

Unlike first-order term rewriting, there is no single, unified approach to higher-order term rewriting, but rather a number of similar but not fully compatible systems aiming to combine term rewriting and typed λ-calculi. For generality, we will use Algebraic Functional Systems with Meta-variables: a formalism which admits translations from the main formats of higher-order term rewriting.

Definition 1 (Simple types)

We fix a set $\mathcal{S}$ of sorts. All sorts are simple types, and if $\sigma$ and $\tau$ are simple types, then so is $\sigma \to \tau$.

We let $\to$ be right-associative. Note that all types have a unique representation in the form $\sigma_1 \to \dots \to \sigma_m \to \iota$ with $\iota \in \mathcal{S}$.
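For instance, using a hypothetical sort $\mathsf{nat}$, the type $(\mathsf{nat} \to \mathsf{nat}) \to \mathsf{nat} \to \mathsf{nat}$ has the unique representation $\sigma_1 \to \sigma_2 \to \iota$ with $\sigma_1 = \mathsf{nat} \to \mathsf{nat}$, $\sigma_2 = \mathsf{nat}$ and $\iota = \mathsf{nat}$.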

Definition 2 (Terms and meta-terms)

We fix disjoint sets $\mathcal{F}$ of function symbols, $\mathcal{V}$ of variables and $\mathcal{M}$ of meta-variables, each symbol equipped with a type. Each meta-variable is additionally equipped with a natural number, its arity. We assume that both $\mathcal{V}$ and $\mathcal{M}$ contain infinitely many symbols of all types. The set $\mathcal{T}(\mathcal{F},\mathcal{V})$ of terms over $\mathcal{F},\mathcal{V}$ consists of those expressions $s$ for which $s : \sigma$ can be derived for some type $\sigma$ by the following clauses:

(V) $x : \sigma$ if $x : \sigma \in \mathcal{V}$   (@) $s\,t : \tau$ if $s : \sigma \to \tau$ and $t : \sigma$
(F) $\mathsf{f} : \sigma$ if $\mathsf{f} : \sigma \in \mathcal{F}$   ($\Lambda$) $\lambda x.s : \sigma \to \tau$ if $x : \sigma \in \mathcal{V}$ and $s : \tau$

Meta-terms are expressions whose type can be derived by those clauses and:

(M) $Z\langle s_1,\dots,s_k\rangle : \sigma_{k+1} \to \dots \to \sigma_m \to \iota$
if $Z \in \mathcal{M}$ has type $\sigma_1 \to \dots \to \sigma_m \to \iota$ and arity $k$ (with $k \leq m$), and $s_1 : \sigma_1, \dots, s_k : \sigma_k$

The $\lambda$ binds variables as in the λ-calculus; unbound variables are called free, and $FV(s)$ is the set of free variables in $s$. Meta-variables cannot be bound; we write $FMV(s)$ for the set of meta-variables occurring in $s$. A meta-term $s$ is called closed if $FV(s) = \emptyset$ (even if $FMV(s) \neq \emptyset$). Meta-terms are considered modulo $\alpha$-conversion. Application (@) is left-associative; abstractions ($\lambda$) extend as far to the right as possible. A meta-term $s$ has type $\sigma$ if $s : \sigma$; it has base type if $\sigma \in \mathcal{S}$. We define $head(s) = head(s_1)$ if $s = s_1\,s_2$, and $head(s) = s$ otherwise.

A (meta-)term $s$ has a sub-(meta-)term $t$, notation $s \unrhd t$, if either $s = t$ or $s \rhd t$, where $s \rhd t$ if (a) $s = \lambda x.s'$ and $s' \unrhd t$, (b) $s = s_1\,s_2$ and $s_2 \unrhd t$ or (c) $s = s_1\,s_2$ and $s_1 \unrhd t$. A (meta-)term $s$ has a fully applied sub-(meta-)term $t$, notation $s \sqsupseteq t$, if either $s = t$ or $s \sqsupset t$, where $s \sqsupset t$ if (a) $s = \lambda x.s'$ and $s' \sqsupseteq t$, (b) $s = s_1\,s_2$ and $s_2 \sqsupseteq t$ or (c) $s = s_1\,s_2$ and $s_1 \sqsupset t$ (so if $s = x\,s_1\,s_2$, then $x$ and $x\,s_1$ are not fully applied subterms of $s$, but $s_1$ and both $s_2$ and $x\,s_1\,s_2$ are).

For $Z \in \mathcal{M}$ equipped with the natural number $k$, we call $k$ the arity of $Z$, notation $arity(Z)$.

Clearly, all fully applied subterms are subterms, but not all subterms are fully applied. Every term $s$ has a form $t\,s_1 \cdots s_n$ with $n \geq 0$ and $t = head(s)$ a variable, function symbol, or abstraction; in meta-terms $head(s)$ may also be a meta-variable application $Z\langle t_1,\dots,t_k\rangle$. Terms are the objects that we will rewrite; meta-terms are used to define rewrite rules. Note that all our terms (and meta-terms) are, by definition, well-typed. For rewriting, we will employ patterns:
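To make the term structure concrete, the following is a minimal sketch of one possible representation in Haskell; it is our own illustration (not WANDA's data structures), and the constructor names and the headOf helper are assumptions rather than notation from the paper.

    -- A minimal sketch of simple types and meta-terms of an AFSM.
    -- A plain term is a meta-term without Meta nodes.
    data Type
      = Sort String                  -- a sort (base type)
      | Arrow Type Type              -- sigma -> tau
      deriving (Eq, Show)

    data MetaTerm
      = Var String                   -- x                 (clause V)
      | Fun String                   -- f                 (clause F)
      | App MetaTerm MetaTerm        -- s t               (clause @)
      | Lam String Type MetaTerm     -- \x : sigma. s     (abstraction clause)
      | Meta String [MetaTerm]       -- Z<s1,...,sk>      (clause M)
      deriving (Eq, Show)

    -- The head of a (meta-)term: strip applications from the left, so that
    -- every (meta-)term is  headOf s  applied to zero or more arguments.
    headOf :: MetaTerm -> MetaTerm
    headOf (App s _) = headOf s
    headOf s         = s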

Definition 3 (Patterns)

A meta-term is a pattern if it has one of the forms $Z\langle x_1,\dots,x_k\rangle$ with all $x_i$ distinct variables; $\lambda x.\ell$ with $x \in \mathcal{V}$ and $\ell$ a pattern; or $a\,\ell_1 \cdots \ell_n$ with $a \in \mathcal{F} \cup \mathcal{V}$ and all $\ell_i$ patterns ($n \geq 0$).

In rewrite rules, we will use meta-variables for matching and variables only with binders. In terms, variables can occur both free and bound, and meta-variables cannot occur. Meta-variables originate in very early forms of higher-order rewriting (e.g., [1, 26]), but have also been used in later formalisms (e.g., [9]). They strike a balance between matching modulo $\beta$ and syntactic matching. By using meta-variables, we obtain the same expressive power as with Miller patterns [36], but do so without including a reversed $\beta$-reduction as part of matching.
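For example, with a hypothetical symbol $\mathsf{f} \in \mathcal{F}$, a meta-variable $Z$ of arity $1$ and a meta-variable $Y$ of arity $2$: the meta-term $\lambda x.\,\mathsf{f}\,Z\langle x\rangle$ is a pattern, whereas $Z\langle \mathsf{f}\,x\rangle$ (the meta-variable argument is not a variable) and $\lambda x.\,Y\langle x, x\rangle$ (the variables are not distinct) are not.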

Notational conventions: We will use $x, y, z$ for variables, $X, Y, Z, F, G, H$ for meta-variables, $a, b$ for symbols that may be either variables or meta-variables, $\mathsf{f}, \mathsf{g}, \mathsf{h}$ or more suggestive notation for function symbols, and $s, t, u, v, \ell, r$ for (meta-)terms. Types are denoted $\sigma, \tau$, and $\iota, \kappa$ are sorts. We will regularly overload notation and write $x \in \mathcal{V}$, $\mathsf{f} \in \mathcal{F}$ or $Z \in \mathcal{M}$ without stating a type (or arity). For meta-variable applications $Z\langle\rangle$ we will usually omit the brackets, writing just $Z$.

Definition 4 (Substitution)

A meta-substitution is a type-preserving function $\gamma$ from variables and meta-variables to meta-terms. The domain $dom(\gamma)$ of $\gamma$ consists of those variables and meta-variables that $\gamma$ does not map to themselves; this domain is allowed to be infinite. We let $[b_1:=s_1,\dots,b_n:=s_n]$ denote the meta-substitution that maps each $b_i$ to $s_i$ and leaves all other variables and meta-variables unchanged. We assume there are infinitely many variables $x$ of all types such that (a) $x \notin dom(\gamma)$ and (b) for all $b \in dom(\gamma)$: $x \notin FV(\gamma(b))$.

A substitution is a meta-substitution mapping everything in its domain to terms. The result of applying a meta-substitution to a term is obtained by:

if
if if

For meta-terms, the result is obtained by the clauses above and:

 if
 if
 if
xand is not an abstraction

Note that for fixed , any term has exactly one of the two forms above ( with and not an abstraction, or ).

Essentially, applying a meta-substitution whose domain contains meta-variables combines a substitution with (possibly several) $\beta$-steps.
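As an illustration (our own, with hypothetical symbols $\mathsf{f}$ and $\mathsf{a}$): if $Z$ has arity $1$ and $\gamma = [Z := \lambda x.\,\mathsf{f}\,x\,x]$, then $(\mathsf{f}\,Z\langle \mathsf{a}\rangle)\gamma = \mathsf{f}\,(\mathsf{f}\,\mathsf{a}\,\mathsf{a})$: the argument $\mathsf{a}$ is substituted directly for the bound variable $x$, an implicit $\beta$-step. By contrast, if $Z$ has arity $0$, then $(\mathsf{f}\,(Z\,\mathsf{a}))\gamma = \mathsf{f}\,((\lambda x.\,\mathsf{f}\,x\,x)\,\mathsf{a})$, and an explicit $\beta$-step is still needed to reach $\mathsf{f}\,(\mathsf{f}\,\mathsf{a}\,\mathsf{a})$.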

Definition 5 (Rules and rewriting)

Let $\mathcal{F}, \mathcal{V}, \mathcal{M}$ be fixed sets of function symbols, variables and meta-variables respectively. A rule is a pair $\ell \Rightarrow r$ of closed meta-terms of the same type such that $\ell$ is a pattern of the form $\mathsf{f}\,\ell_1 \cdots \ell_n$ with $\mathsf{f} \in \mathcal{F}$ and $FMV(r) \subseteq FMV(\ell)$. A set of rules $\mathcal{R}$ defines a rewrite relation $\Rightarrow_{\mathcal{R}}$ as the smallest monotonic relation on terms which includes:

(Rule) $\ell\delta \Rightarrow_{\mathcal{R}} r\delta$ if $\ell \Rightarrow r \in \mathcal{R}$ and $\delta$ is a substitution with domain $FMV(\ell)$
(Beta) $(\lambda x.s)\,t \Rightarrow_{\mathcal{R}} s[x := t]$

We say $s \Rightarrow_\beta t$ if $s \Rightarrow_{\mathcal{R}} t$ is derived using a (Beta) step. A term $s$ is terminating under $\Rightarrow_{\mathcal{R}}$ if there is no infinite reduction $s = s_0 \Rightarrow_{\mathcal{R}} s_1 \Rightarrow_{\mathcal{R}} \dots$, is in normal form if there is no $t$ such that $s \Rightarrow_{\mathcal{R}} t$, and is $\beta$-normal if there is no $t$ with $s \Rightarrow_\beta t$. Note that we are allowed to reduce at any position of a term, even below a $\lambda$. The relation $\Rightarrow_{\mathcal{R}}$ is terminating if all terms over $\mathcal{F}, \mathcal{V}$ are terminating. The set $\mathcal{D} \subseteq \mathcal{F}$ of defined symbols consists of those $\mathsf{f}$ such that a rule $\mathsf{f}\,\ell_1 \cdots \ell_n \Rightarrow r$ exists; all other symbols are called constructors.

Note that $\mathcal{R}$ is allowed to be infinite, which is useful for instance to model polymorphic systems. Also, right-hand sides of rules do not have to be in $\beta$-normal form. While this is rarely used in practical examples, non-$\beta$-normal rules may arise through transformations, and we lose nothing by allowing them.

Example 1

Let and consider the following rules :

Then . Note that the bound variable does not need to occur in the body of to match . However, a term like cannot be reduced, because does not instantiate . We could alternatively consider the rules:

Where the system before had , here we assume . Thus, rather than meta-variable application we use explicit application . Then . However, we will often need explicit $\beta$-reductions; e.g., .
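As a concrete illustration of the difference (our own example, with hypothetical symbols $\mathsf{map}$, $\mathsf{nil}$, $\mathsf{cons}$; not necessarily the rules used in Ex. 1), consider:

    $\mathsf{map}\,(\lambda x.\,F\langle x\rangle)\,\mathsf{nil} \Rightarrow \mathsf{nil}$
    $\mathsf{map}\,(\lambda x.\,F\langle x\rangle)\,(\mathsf{cons}\,H\,T) \Rightarrow \mathsf{cons}\,F\langle H\rangle\,(\mathsf{map}\,(\lambda x.\,F\langle x\rangle)\,T)$

Here $F$ has arity $1$, so the contraction of $F\langle H\rangle$ happens as part of the rewrite step itself. In the variant where $F$ has arity $0$ and the rules use $\mathsf{map}\,F\,\dots$ with the explicit application $F\,H$ on the right, instances of the right-hand side generally still need explicit $\beta$-steps to evaluate the applied function.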

Definition 6 (AFSM)

An AFSM is a tuple $(\mathcal{F}, \mathcal{V}, \mathcal{M}, \mathcal{R})$ of a signature and a set of rules built from meta-terms over $\mathcal{F}, \mathcal{V}, \mathcal{M}$; as types of relevant variables and meta-variables can always be derived from context, we will typically just refer to the AFSM $(\mathcal{F}, \mathcal{R})$. An AFSM implicitly defines the abstract reduction system $(\mathcal{T}(\mathcal{F},\mathcal{V}), \Rightarrow_{\mathcal{R}})$: a set of terms and a rewrite relation on this set. An AFSM is terminating if $\Rightarrow_{\mathcal{R}}$ is terminating (on all terms in $\mathcal{T}(\mathcal{F},\mathcal{V})$).

Discussion: The two most common formalisms in termination analysis of higher-order rewriting are algebraic functional systems [25] (AFSs) and higher-order rewriting systems [38, 36] (HRSs). AFSs are very similar to our AFSMs, but use variables for matching rather than meta-variables; this is trivially translated to the AFSM format, giving rules where all meta-variables have arity $0$, like the “alternative” rules in Ex. 1. HRSs use matching modulo $\beta$, but the common restriction of pattern HRSs can be directly translated into AFSMs, provided terms are $\beta$-normalised after every reduction step. Even without this $\beta$-normalisation step, termination of the obtained AFSM implies termination of the original HRS; for second-order systems, termination is equivalent. AFSMs can also naturally encode CRSs [26] and several applicative systems (cf. [28, Chapter 3]).

Example 2 (Ordinal recursion)

A running example is the AFSM $(\mathcal{F}, \mathcal{R})$ with $\mathcal{F}$ and $\mathcal{R}$ given below. As all meta-variables have arity $0$, this can be seen as an AFS.
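For illustration, one standard presentation of ordinal recursion (our own reconstruction; the concrete names and argument order in the paper's version of Ex. 2 may differ) uses $\mathsf{0} : \mathsf{ord}$, $\mathsf{s} : \mathsf{ord} \to \mathsf{ord}$, $\mathsf{lim} : (\mathsf{nat} \to \mathsf{ord}) \to \mathsf{ord}$ and $\mathsf{rec} : \mathsf{ord} \to \mathsf{nat} \to (\mathsf{ord} \to \mathsf{nat} \to \mathsf{nat}) \to ((\mathsf{nat} \to \mathsf{ord}) \to (\mathsf{nat} \to \mathsf{nat}) \to \mathsf{nat}) \to \mathsf{nat}$, with rules:

    $\mathsf{rec}\,\mathsf{0}\,K\,F\,G \Rightarrow K$
    $\mathsf{rec}\,(\mathsf{s}\,X)\,K\,F\,G \Rightarrow F\,X\,(\mathsf{rec}\,X\,K\,F\,G)$
    $\mathsf{rec}\,(\mathsf{lim}\,H)\,K\,F\,G \Rightarrow G\,H\,(\lambda m.\,\mathsf{rec}\,(H\,m)\,K\,F\,G)$

All meta-variables ($X$, $K$, $F$, $G$, $H$) indeed have arity $0$.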

Observant readers may notice that, with the given constructors, the type $\mathsf{nat}$ in Ex. 2 is not inhabited. However, as the given symbols are only a subset of $\mathcal{F}$, additional symbols (such as constructors for the type $\mathsf{nat}$) may be included. The presence of additional function symbols does not affect termination of AFSMs:

Theorem 2.1 (Invariance of termination under signature extensions)

For an AFSM $(\mathcal{F}, \mathcal{R})$ with $\mathcal{F}$ at most countably infinite, let $\mathcal{G} \subseteq \mathcal{F}$ be the set of function symbols occurring in some rule of $\mathcal{R}$. Then $(\mathcal{F}, \mathcal{R})$ is terminating if and only if $(\mathcal{G}, \mathcal{R})$ is terminating.

Proof

Trivial by replacing all function symbols in $\mathcal{F} \setminus \mathcal{G}$ by corresponding variables of the same type. ∎

Therefore, we will typically only state the types of symbols occurring in the rules, but may safely assume that infinitely many symbols of all types are present (which for instance allows us to select unused constructors in some proofs).

2.2 Computability

A common technique in higher-order termination is Tait and Girard’s computability notion [46]. There are several ways to define computability predicates; here we follow, e.g., [6, 9, 10, 8] in considering accessible meta-terms using strictly positive inductive types. The definition presented below is adapted from these works, both to account for the altered formalism and to introduce (and obtain termination of) a relation that we will use in the “computable subterm criterion processor” of Thm. 0.C.7 (a termination criterion that allows us to handle systems that would otherwise be beyond the reach of static DPs). This allows for a minimal presentation that avoids the use of ordinals that would otherwise be needed to obtain (see, e.g., [10, 8]).

To define computability, we use the notion of an RC-set:

Definition 7

A set of reducibility candidates, or RC-set, for a rewrite relation $\Rightarrow_{\mathcal{R}}$ of an AFSM is a set $I$ of base-type terms such that: every term in $I$ is terminating under $\Rightarrow_{\mathcal{R}}$; $I$ is closed under $\Rightarrow_{\mathcal{R}}$ (so if $s \in I$ and $s \Rightarrow_{\mathcal{R}} t$ then $t \in I$); if $s = x\,s_1 \cdots s_n$ with $x \in \mathcal{V}$ or $s = (\lambda x.u)\,s_0 \cdots s_n$ with $n \geq 0$, and for all $t$ with $s \Rightarrow_{\mathcal{R}} t$ we have $t \in I$, then $s \in I$ (for any such base-type $s$).

We define $I$-computability for an RC-set $I$ by induction on types. For $s : \sigma$, we say that $s$ is $I$-computable if either $\sigma$ is of base type and $s \in I$; or $\sigma = \tau_1 \to \tau_2$ and for all $t : \tau_1$ that are $I$-computable, $s\,t$ is $I$-computable.

The traditional notion of computability is obtained by taking for $I$ the set of all terminating base-type terms. Then, a term $s$ is computable if and only if (a) $s$ has base type and $s$ is terminating; or (b) $s : \sigma \to \tau$ and for all computable $t : \sigma$ the term $s\,t$ is computable. This choice is simple but, for reasoning, not ideal: we do not have a property like: “if $\mathsf{f}\,s_1 \cdots s_n$ is computable then so is each $s_i$”. Such a property would be valuable to have for generalising termination proofs from first-order to higher-order rewriting, as it allows us to use computability where the first-order proof uses termination. While it is not possible to define a computability notion with this property alongside case (b) (as such a notion would not be well-founded), we can come close to this property by choosing a different set for $I$. To define this set, we will use the notion of accessible arguments, which is used for the same purpose also in the General Schema [9], the Computability Path Ordering [10], and the Computability Closure [8].
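To see how this subterm-style property can fail for the traditional notion, consider (as an assumed illustration, anticipating Ex. 7) symbols $\mathsf{ap} : \mathsf{o} \to \mathsf{o} \to \mathsf{o}$ and $\mathsf{lm} : (\mathsf{o} \to \mathsf{o}) \to \mathsf{o}$ with the rule $\mathsf{ap}\,(\mathsf{lm}\,F)\,X \Rightarrow F\,X$. The base-type term $\omega := \mathsf{lm}\,(\lambda x.\,\mathsf{ap}\,x\,x)$ is a normal form, hence terminating and computable in the traditional sense, but its argument $\lambda x.\,\mathsf{ap}\,x\,x$ is not computable: applying it to the computable term $\omega$ yields $\mathsf{ap}\,\omega\,\omega$, which is non-terminating.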

Definition 8 (Accessible arguments)

We fix a quasi-ordering $\succeq$ on $\mathcal{S}$ with well-founded strict part $\succ$. (Well-foundedness is immediate if $\mathcal{S}$ is finite, but we have not imposed that requirement.) For a type $\sigma = \sigma_1 \to \dots \to \sigma_m \to \kappa$ (with $\kappa \in \mathcal{S}$) and sort $\iota$, let $\iota \succeq^{+} \sigma$ if $\iota \succeq \kappa$ and $\iota \succ^{-} \sigma_i$ for all $i$, and let $\iota \succ^{-} \sigma$ if $\iota \succ \kappa$ and $\iota \succeq^{+} \sigma_i$ for all $i$. (Here $\iota \succeq^{+} \sigma$ roughly corresponds to “$\iota$ occurs only positively in $\sigma$” in [6, 9, 10].)

For , let . For , let has the form with . We write if either , or and , or with and for some with .

With this definition, we will be able to define a set $C$ such that, roughly, $s$ is $C$-computable if and only if (a) $s$ has functional type and $s\,t$ is $C$-computable for all $C$-computable $t$, or (b) $s$ has base type, $s$ is terminating, and if $s = \mathsf{f}\,s_1 \cdots s_m$ with $\mathsf{f} \in \mathcal{F}$ then $s_i$ is $C$-computable for all accessible $i$ (see Thm. 2.2 below). The reason that the case for $s = x\,s_1 \cdots s_m$ with $x \in \mathcal{V}$ is different is proof-technical: computability of $x\,s_1 \cdots s_m$ implies the computability of more arguments than computability of $\mathsf{f}\,s_1 \cdots s_m$ does, since $x$ can be instantiated by anything.

Example 3

Consider a quasi-ordering such that . In Ex. 2, we then have . Thus, , which gives .

Theorem 2.2

Let be an AFSM. Let if both sides have base type, , and all are -computable. There is an RC-set such that has base type is terminating under if then is -computable for all .

Proof (sketch)

Note that we cannot define $C$ as this set, as the set relies on the notion of $C$-computability. However, we can define $C$ as the fixpoint of a monotone function operating on RC-sets. This follows the proof in, e.g., [9, 10]. ∎

The full proof (for the definitions in this paper) is available in Appendix 0.A.

3 Restrictions

The termination methodology in this paper is restricted to AFSMs that satisfy certain limitations: they must be properly applied (a restriction on the number of terms each function symbol is applied to) and accessible function passing (a restriction on the positions of variables of a functional type in the left-hand sides of rules). Both are syntactic restrictions that are easily checked by a computer.

3.1 Properly applied AFSMs

In properly applied AFSMs, function symbols are assigned a certain, minimal number of arguments that they must always be applied to.

Definition 9

An AFSM $(\mathcal{F},\mathcal{R})$ is properly applied if for every $\mathsf{f} \in \mathcal{F}$ there exists an integer $k$ such that for all rules $\ell \Rightarrow r \in \mathcal{R}$: (1) if $\ell$ has a fully applied sub-meta-term $\mathsf{f}\,\ell_1 \cdots \ell_n$ then $n = k$; and (2) if $r$ has a fully applied sub-meta-term $\mathsf{f}\,r_1 \cdots r_n$ then $n \geq k$. We denote $ar(\mathsf{f}) = k$.

That is, every occurrence of a function symbol in the right-hand side of a rule has at least as many arguments as the occurrences in the left-hand sides of rules. This means that partially applied functions are often not allowed: an AFSM is not properly applied if some symbol is applied to one argument in the left-hand side of one rule, but occurs with zero arguments in the right-hand side of another.
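The following Haskell sketch (our own illustration, not WANDA's code, and deliberately simplified with respect to Def. 9) checks this condition by collecting, for every function symbol, the number of arguments it receives at each occurrence:

    import qualified Data.Map.Strict as M

    -- Minimal meta-term representation (as in the earlier sketch, types omitted).
    data MetaTerm
      = Var String | Fun String | App MetaTerm MetaTerm
      | Lam String MetaTerm | Meta String [MetaTerm]

    type Rule = (MetaTerm, MetaTerm)      -- (left-hand side, right-hand side)

    -- All pairs (f, n): symbol f occurs applied to exactly n arguments
    -- (as a maximal application) somewhere in the given meta-term.
    occurrences :: MetaTerm -> [(String, Int)]
    occurrences = go 0
      where
        go n (App s t)   = go (n + 1) s ++ go 0 t
        go n (Fun f)     = [(f, n)]
        go _ (Var _)     = []
        go _ (Lam _ s)   = go 0 s
        go _ (Meta _ ts) = concatMap (go 0) ts

    -- Simplified reading of Def. 9: taking ar(f) to be the number of
    -- arguments f receives in left-hand sides (required to be the same at
    -- every left-hand side occurrence), every right-hand side occurrence
    -- of f must carry at least ar(f) arguments.
    properlyApplied :: [Rule] -> Bool
    properlyApplied rules = all ok (M.toList lhsArities)
      where
        lhsArities = M.fromListWith (++)
                       [ (f, [n]) | (l, _) <- rules, (f, n) <- occurrences l ]
        rhsOccs    = [ occ | (_, r) <- rules, occ <- occurrences r ]
        ok (f, ns) = all (== head ns) ns
                     && and [ n >= head ns | (g, n) <- rhsOccs, g == f ]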

This restriction is not as severe as it may initially seem, since partial applications can be replaced by $\lambda$-abstractions; e.g., the offending right-hand side occurrence can be replaced by a $\lambda$-abstraction that supplies the missing argument. By using $\eta$-expansion, we can transform any AFSM to satisfy this restriction:

Definition 10 ($\eta$-expansion)

Given a set of rules , let their -expansion be given by with , , and fresh meta-variables, where

  • if is an application or element of , and otherwise;

  • for and for , while and and .

Note that the $\eta$-expanded left-hand sides are patterns if the original ones are. By [28, Thm. 2.16], $\Rightarrow_{\mathcal{R}}$ is terminating if the rewrite relation induced by the $\eta$-expanded rules is terminating, which allows us to transpose any methods to prove termination of properly applied AFSMs to all AFSMs.

However, there is a caveat: this transformation can introduce non-termination in some special cases, as there exist terminating rules whose $\eta$-expansion is non-terminating. Thus, for a properly applied AFSM the methods in this paper apply directly. For an AFSM that is not properly applied, we can use the methods to prove termination (but not non-termination) by first $\eta$-expanding the rules. Of course, if this analysis leads to a counterexample for termination, we may still be able to verify whether this counterexample applies in the original, untransformed AFSM.
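As a hedged illustration with hypothetical symbols: take $\mathsf{plus} : \mathsf{nat} \to \mathsf{nat} \to \mathsf{nat}$ with the usual rules $\mathsf{plus}\,\mathsf{0}\,Y \Rightarrow Y$ and $\mathsf{plus}\,(\mathsf{s}\,X)\,Y \Rightarrow \mathsf{s}\,(\mathsf{plus}\,X\,Y)$, together with $\mathsf{inc} : \mathsf{nat} \to \mathsf{nat}$ and the rule $\mathsf{inc} \Rightarrow \mathsf{plus}\,(\mathsf{s}\,\mathsf{0})$. This AFSM is not properly applied: $\mathsf{plus}$ has two arguments in its own left-hand sides but only one in the right-hand side of the $\mathsf{inc}$ rule. Replacing that rule by its $\eta$-expanded version $\mathsf{inc}\,Z \Rightarrow \mathsf{plus}\,(\mathsf{s}\,\mathsf{0})\,Z$ (with $Z$ a fresh meta-variable) yields a properly applied AFSM.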

Example 4

Both AFSMs in Ex. 1 and the AFSM in Ex. 2 are properly applied.

Example 5

Consider an AFSM with and . Although the one rule has a functional output type, this AFSM is properly applied, with its defined symbol always applied to at least one argument. Therefore, we do not need to use $\eta$-expansion. However, if $\mathcal{R}$ were to additionally include some rules that did not satisfy the restriction (such as the rules above), then $\eta$-expanding all rules, including this one, would be necessary. We have: . Note that the right-hand side of the $\eta$-expanded rule is not $\beta$-normal.

3.2 Accessible Function Passing AFSMs

In accessible function passing AFSMs, variables of functional type may not occur at arbitrary places in the left-hand sides of rules: their positions are restricted using the sort ordering and accessibility relation from Def. 8.

Definition 11 (Accessible function passing)

An AFSM $(\mathcal{F},\mathcal{R})$ is accessible function passing (AFP) if there exists a sort ordering $\succeq$ following Def. 8 such that: for all rules $\mathsf{f}\,\ell_1 \cdots \ell_n \Rightarrow r \in \mathcal{R}$ and all $Z \in FMV(\mathsf{f}\,\ell_1 \cdots \ell_n)$: there are variables $x_1,\dots,x_k$ and some $i$ such that $\ell_i \unrhd_{acc} Z\langle x_1,\dots,x_k\rangle$ (with $\unrhd_{acc}$ the accessibility relation of Def. 8).

The key idea of this definition is that computability of the (instances of the) arguments $\ell_1,\dots,\ell_n$ implies computability of the (instances of the) meta-variables occurring in the rule. This excludes cases like Example 7 below. Many common examples satisfy this restriction, including those we saw before:

Example 6

Both systems from Ex. 1 are AFP: choosing the sort ordering that equates and , we indeed have and (as ) and both and . The AFSM from Ex. 2 is AFP because we can choose and have following Ex. 3 (and also and ). The AFSM from Ex. 5 is AFP, because for any : because because .

In fact, all first-order AFSMs (where all fully applied sub-meta-terms of the left-hand side of a rule have base type) are AFP via the sort ordering that equates all sorts. Also (with the same sort ordering), an AFSM is AFP if, for all rules and all , we can write: where and all fully applied sub-meta-terms of have base type.

This covers many practical systems, although for Ex. 2 we need a non-trivial sort ordering. Also, there are AFSMs that cannot be handled with any sort ordering $\succeq$.

Example 7 (Encoding the untyped λ-calculus)

Consider an AFSM with $\mathcal{F} \supseteq \{\mathsf{ap} : \mathsf{o} \to \mathsf{o} \to \mathsf{o},\ \mathsf{lm} : (\mathsf{o} \to \mathsf{o}) \to \mathsf{o}\}$ and $\mathcal{R} = \{\mathsf{ap}\,(\mathsf{lm}\,F)\,X \Rightarrow F\,X\}$ (note that the only rule has type $\mathsf{o}$). This AFSM is not accessible function passing, because $\mathsf{o} \succeq^{+} \mathsf{o} \to \mathsf{o}$ cannot hold for any sort ordering $\succeq$ (as this would require $\mathsf{o} \succ \mathsf{o}$).

Note that this example is also not terminating. With $\omega := \mathsf{lm}\,(\lambda x.\,\mathsf{ap}\,x\,x)$, we get this self-loop as evidence: $\mathsf{ap}\,\omega\,\omega \Rightarrow_{\mathcal{R}} (\lambda x.\,\mathsf{ap}\,x\,x)\,\omega \Rightarrow_\beta \mathsf{ap}\,\omega\,\omega$.

Intuitively: in an accessible function passing AFSM, meta-variables of a higher type may occur only in “safe” places in the left-hand sides of rules. Rules like the ones in Ex. 7, where a higher-order meta-variable is lifted out of a base-type term, are not admitted (unless the base type is greater than the higher type).

In the remainder of this paper, we will refer to a properly applied, accessible function passing AFSM as a PA-AFP AFSM.

Discussion: This definition is strictly more liberal than the notions of “plain function passing” in both [33] and [45] as adapted to AFSMs. The notion in [45] largely corresponds to AFP if $\succeq$ equates all sorts, and the HRS formalism guarantees that rules are properly applied (in fact, all fully applied sub-meta-terms of both left- and right-hand sides of rules have base type). The notion in [33] is more restrictive. The current restriction of PA-AFP AFSMs lets us handle examples like ordinal recursion (Ex. 2) which are not covered by [33, 45]. However, note that [33, 45] consider a different formalism, which does take rules whose left-hand side is not a pattern into account (which we do not consider). Our restriction also quite resembles the “admissible” rules in [7], which are defined using a pattern computability closure [6], but that work carries additional restrictions.

In later work [31, 32], K. Kusakari extends the static DP approach to forms of polymorphic functional programming, with a very liberal restriction: the definition is parametrised with an arbitrary RC-set and corresponding accessibility (“safety”) notion. Our AFP restriction is actually an instance of this condition (although a more liberal one than the example RC-set used in [31, 32]). We have chosen a specific instance because it allows us to use dedicated techniques for the RC-set; for example, our computable subterm criterion processor (Thm. 0.C.7).

4 Static higher-order dependency pairs

To obtain sufficient criteria for both termination and non-termination of AFSMs, we will now transpose the definition of static dependency pairs [7, 33, 45, 32] to AFSMs. In addition, we will add the new features of meta-variable conditions, formative reductions, and computable chains.

Although we retain the first-order terminology of dependency pairs, the setting with meta-variables makes it more suitable to define DPs as triples.

Definition 12 ((Static) Dependency Pair)

A dependency pair (DP) is a triple $\ell \Rrightarrow p\ (A)$, where $\ell$ is a closed pattern $\mathsf{f}^{\sharp}\,\ell_1 \cdots \ell_n$, $p$ is a closed meta-term $\mathsf{g}^{\sharp}\,p_1 \cdots p_m$, and $A$ is a set of meta-variable conditions: pairs $Z : i$ indicating that $Z$ regards its $i$-th argument. A DP is conservative if $FMV(p) \subseteq FMV(\ell)$.

A substitution respects a set of meta-variable conditions if for all in we have with either , or and . DPs will be used only with substitutions that respect their meta-variable conditions.

For a DP $\ell \Rrightarrow p\ (\emptyset)$ (so a DP whose set of meta-variable conditions is empty), we often omit the third component and just write $\ell \Rrightarrow p$.

As in the first-order setting, the static DP approach employs marked function symbols to obtain meta-terms whose instances cannot be reduced at the root.

Definition 13 (Marked symbols)

Let $(\mathcal{F},\mathcal{R})$ be an AFSM. Define $\mathcal{F}^{\sharp} = \mathcal{F} \cup \{\mathsf{f}^{\sharp} : \sigma \mid \mathsf{f} : \sigma \in \mathcal{D}\}$ (a fresh marked symbol for every defined symbol). For a meta-term $s = \mathsf{f}\,s_1 \cdots s_k$ with $\mathsf{f} \in \mathcal{D}$, we let $s^{\sharp} = \mathsf{f}^{\sharp}\,s_1 \cdots s_k$; for $s$ of other forms, $s^{\sharp}$ is not defined.

Moreover, we will consider candidates. In the first-order setting, candidate terms are subterms of the right-hand sides of rules whose root symbol is a defined symbol. Intuitively, these subterms correspond to function calls. In the current setting, we also have to consider meta-variables, as well as rules whose right-hand side is not $\beta$-normal (which might arise for instance due to $\eta$-expansion).

Definition 14 ($\beta$-reduced-sub-meta-term)

A meta-term $s$ has a fully applied $\beta$-reduced-sub-meta-term $t$ (shortly, BRSMT), notation , if there exists a set of meta-variable conditions with . Here holds if:

  • , or

  • and , or

  • and some , or , or

  • with and some , or

  • and some , or

  • and for some with .

Essentially, a BRSMT of $s$ is a meta-term $t$ that can be reached from $s$ by taking $\beta$-reductions at the root and “subterm”-steps, where a pair $Z : i$ is added to the set of meta-variable conditions whenever we pass into argument $i$ of a meta-variable application $Z\langle\dots\rangle$. BRSMTs are used to generate candidates:

Definition 15 (Candidates)

For a meta-term , the set of candidates of consists of those pairs such that (a) has the form with and , and (b) there are (with ) such that , and (c) is minimal: there is no subset with .

Example 8

In AFSMs where all meta-variables have arity $0$ and the right-hand sides of rules are $\beta$-normal, the set of candidates of a meta-term $s$ consists exactly of the pairs $t\ (\emptyset)$ where $t$ has the form $\mathsf{f}\,t_1 \cdots t_{ar(\mathsf{f})}$ with $\mathsf{f} \in \mathcal{D}$ and $t$ occurs as part of $s$. In Ex. 2, we thus have .

If some of the meta-variables do take arguments, then the meta-variable conditions matter: candidates of $s$ are pairs $t\ (A)$ where $A$ contains exactly those pairs $Z : i$ for which we pass through the $i$-th argument of a meta-variable application $Z\langle\dots\rangle$ to reach $t$ in $s$.

Example 9

Consider an AFSM with the signature from Ex. 2 but a rule using meta-variables with larger arities:

The right-hand side has one candidate:

The original static approaches define DPs as pairs $\ell^{\sharp} \Rrightarrow p^{\sharp}$ where $\ell \Rightarrow r$ is a rule and $p$ a subterm of $r$ whose head is a defined symbol – as their rules are built using terms, not meta-terms. This can set variables bound in $r$ free in $p$. In the current setting, we use candidates with their meta-variable conditions and implicit $\beta$-steps rather than subterms, and we replace such variables by meta-variables.

Definition 16 ($\mathit{SDP}(\mathcal{R})$)

Let $r$ be a meta-term and $(\mathcal{F},\mathcal{R})$ be an AFSM. Let $\overline{r}$ denote $r$ with all free variables replaced by corresponding fresh meta-variables (of arity $0$). Now $\mathit{SDP}(\mathcal{R}) = \{\ell^{\sharp} \Rrightarrow \overline{p}^{\sharp}\ (A) \mid \ell \Rightarrow r \in \mathcal{R}$ and $p\ (A)$ is a candidate of $r\}$.

Although static DPs always have a pleasant form (as opposed to the dynamic DPs of, e.g., [30], whose right-hand sides can have a meta-variable at the head, which complicates various techniques in the framework), they have two important complications not present in first-order DPs: the right-hand side of a DP may contain meta-variables that do not occur in the left-hand side – traditional analysis techniques are not really equipped for this – and the left- and right-hand sides may have different types. In Sec. 5 we will explore some methods to deal with these features.
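As a hedged illustration using the map rules sketched after Ex. 1 (our own example): the only candidate of the right-hand side of the second map rule whose head is a defined symbol is $\mathsf{map}\,(\lambda x.\,F\langle x\rangle)\,T$, with an empty set of meta-variable conditions, so those rules generate the single dependency pair

    $\mathsf{map}^{\sharp}\,(\lambda x.\,F\langle x\rangle)\,(\mathsf{cons}\,H\,T) \Rrightarrow \mathsf{map}^{\sharp}\,(\lambda x.\,F\langle x\rangle)\,T$

This DP is conservative, and here the left- and right-hand sides even have the same type; in general, as noted above, neither needs to hold.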

Example 10

For the non--expanded rules of Ex. 5, the set has one element: . (As and are not defined symbols, they do not generate dependency pairs.) The set for the -expanded rules is . To obtain the relevant candidate, we used the -reduction step of BRSMTs.

Example 11

The AFSM from Ex. 2 is AFP following Ex. 6; here is: