# Implicit complexity via structure transformation

Implicit computational complexity, which aims at characterizing complexity classes by machine-independent means, has traditionally been based, on the one hand, on programs and deductive formalisms for free algebras, and on the other hand on descriptive tools for finite structures. We consider here "uninterpreted" programs for the transformation of finite structures, which define functions over a free algebra A once the elements of A are themselves considered as finite structures. We thus bridge the gap between the two approaches above to implicit complexity, with the potential of streamlining and clarifying important tools and techniques, such as set-existence and ramification. We illustrate this potential by delineating a broad class of programs, based on the notion of loop variant familiar from imperative program construction, that characterizes a generic notion of primitive-recursive complexity, without reference to any data-driven recurrence.


## 1 Introduction

Implicit computational complexity (ICC) strives to characterize complexity classes by resource-independent methods, thereby elucidating the nature of those classes and relating them to more abstract complexity measures, such as levels of descriptive or deductive abstractions. The various approaches to ICC fall, by and large, into two broad classes. One is descriptive complexity, which focuses on finite structures, and as such forms a branch of Finite Model Theory [18]. Its historical roots go back at least to the characterization of Log-Space queries by recurrence [16], and of NP by existential set-quantification [10].

The other broad class in ICC focuses on computing over infinite structures, such as the natural numbers, strings, or lists, and uses programming and proof-theoretic methods to articulate resource-independent characterizations of complexity classes.

We argue here that computing over finite structures is, in fact, appropriate for implicit complexity over infinite structures as well. Our point of departure is the observation that inductive data-objects, such as natural numbers, strings and lists, are themselves finite structures, and that their computational behavior is determined by their internal makeup rather than by their membership in this or that infinite structure. For example, the natural number three is the structure (or more precisely partial-structure, see below) consisting of four nodes linked by a successor pointer, with a token denoting the first node.

Lifting this representation, a function is perceived as a mapping over finite second-order objects, namely the natural numbers construed as structures. This view of inductive objects as finite structures is implicit already in long-standing representations, such as the Church-Berarducci-Böhm lambda-coding of inductive data [7, 5].

As a programming language of reference we propose a Turing-complete imperative language ST for structure transformation, in the spirit of Gurevich’s ASMs [6, 14, 15]. We regard such programs as operating over classes of finite structures.

We illustrate the naturalness and effectiveness of our approach by delineating a variant STV of ST, based on the notion of loop variants familiar from program development and verification [12, 8, 26], and proving that it captures exactly primitive recursion, in the strongest possible sense: all functions defined by recurrence over free algebras are computable directly by STV programs, and all STV programs run in time and space that are primitive-recursive in the size of the input.

We caution against confounding our approach with unrelated prior research addressing somewhat similar themes. Recurrence and recursion over finite structures have been shown to characterize logarithmic space and polynomial time queries, respectively [16, 23], but the programs in question do not allow inception of new structure elements, and so remain confined to linear space complexity, and are inadequate for the kind of characterizations we seek. On the other hand, unbounded recurrence over arbitrary structures has been considered by a number of authors [1, 2, 25], but always in the traditional sense of computing within an infinite structure. Also, while the meta-finite structures of [11] merge finite and infinite components, both of those are considered in the traditional framework, whereas we deal with purely finite structures, and the infinite appears via the consideration of collections of such structures. Finally, the functions we consider are from structures to structures (as in [23]), and are thus unrelated to the global functions of [13, 9], which are (isomorphism-invariant) mappings that assign to each structure a function over it.

## 2 General setting

### 2.1 Partial structures

We use the phrase vocabulary for a finite set of function-identifiers and relation-identifiers, with each identifier g assigned an arity denoted r(g). We refer to nullary function-identifiers as tokens, and to ones of arity 1 as pointers.

By V-structure we'll mean here a finite partial-structure over the vocabulary V; that is, a V-structure S consists of a finite non-empty universe |S|; for each function-identifier f of V a partial-function fS from |S|^k to |S|, where k = r(f); and for each relation-identifier Q of V, a relation QS ⊆ |S|^k, where k = r(Q). We refer to the elements of |S| as S's nodes.

We insist on referring to partial-structures since we consider partiality to be a core component of our approach. For example, we shall identify each string in {0,1}* with a structure over the vocabulary with a token e and pointers 0 and 1. So 011 is identified with the four-element structure

 e∘ —0→ ∘ —1→ ∘ —1→ ∘

Here 0 is interpreted as the partial-function defined only for the leftmost element, and 1 as the partial-function defined only for the second and third elements.
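As a concrete illustration, such a partial structure can be rendered directly as a universe together with partial functions stored as finite maps (the dictionary encoding below is ours, not the paper's notation):

```python
def make_string_structure(w):
    """Encode a binary string w as a finite partial structure: the token
    'e' denotes the leftmost node, and the pointers '0' and '1' are
    partial functions, each defined only at positions carrying that digit."""
    universe = list(range(len(w) + 1))            # |w|+1 nodes
    funcs = {"e": {(): 0}, "0": {}, "1": {}}      # partial functions as dicts
    for i, ch in enumerate(w):
        funcs[ch][(i,)] = i + 1                   # digit ch links node i to i+1
    return universe, funcs

U, F = make_string_structure("011")
assert len(U) == 4                     # a four-element structure
assert F["0"] == {(0,): 1}             # '0' defined only at the leftmost node
assert F["1"] == {(1,): 2, (2,): 3}    # '1' defined at the second and third nodes
```

The absent entries of the dictionaries are precisely the points where the partial functions are undefined.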

We might, in fact, limit attention to vocabularies without relation identifiers, since a k-ary relation Q (k ⩾ 1) can be represented by its support, that is the k-ary partial-function

 σQ(x1, …, xk) =df if (x1, …, xk) ∈ Q then x1 else undefined

Thus, for instance, Q is empty iff σQ is empty (which is not the case if relations are represented by their characteristic functions). Note that by using the support rather than the characteristic function we bypass the traditional representation of truth values by elements, and obtain a uniform treatment of functional and relational structure revisions (defined below), as well as initiality conditions.
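A minimal sketch of the support representation (Python, with illustrative names): the support of a relation is a partial function that is empty exactly when the relation is, unlike a characteristic function, which is total.

```python
def support(Q):
    """Support of a k-ary relation Q (a set of k-tuples): the partial
    function defined exactly on the tuples of Q, returning each tuple's
    first component."""
    return {t: t[0] for t in Q}

Q = {(1, 2), (3, 4)}
assert support(Q) == {(1, 2): 1, (3, 4): 3}
assert support(set()) == {}    # Q is empty iff its support is empty
```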

A tuple of structures is easily presentable as a single structure. Given structures S1, …, Sn, where Si is a Vi-structure and the Vi are pairwise disjoint, let V be the union of the Vi, and let S be the V-structure whose universe is the disjoint union of the |Si| (i = 1..n), and where the interpretation of an identifier of Vi is the same as it is in Si, i.e. is empty/undefined on |Sj| for every j ≠ i.

### 2.2 Accessible structures and free structures

The terms over V, or V-terms, are generated by the closure condition: if f is a function-identifier of arity k and t1, …, tk are terms, then so is f t1 ⋯ tk. (We use parentheses and commas for function application only at the discourse level.) Note that we do not use variables, so our “terms” are all closed. The height of a term t is the height of its syntax-tree: ht(f t1 ⋯ tk) = 1 + maxi ht(ti), with ht(c) = 0 for a token c. Given a V-structure S the value of a V-term t in S, tS, is defined as usual by recurrence on t: if t = f t1 ⋯ tk, then tS = fS(t1S, …, tkS), undefined if some tiS, or fS at those values, is undefined. We say that a term t denotes its value tS, and also that it is an address for tS.

A node of a V-structure S is accessible if it is the value in S of a V-term. The height of an accessible node a is the minimum of the heights of addresses of a. A structure S is accessible when all its nodes are accessible. If, moreover, every node has a unique address we say that S is free.

A V-structure S is a term-structure if

1. its universe consists of V-terms; and

2. if f t1 ⋯ tk is a node of S then so are t1, …, tk, and fS(t1, …, tk) = f t1 ⋯ tk.

From the definitions we have

###### Proposition 1

A V-structure S is free iff it is isomorphic to a term V-structure.

Note that if V is functional (no relation identifiers), then for each V-term q we have a free term-structure consisting of the sub-terms of q (q included). Each such structure can be represented as a dag of terms, whose terminal nodes are tokens. It will be convenient to fix a reserved token that will denote in each such structure the term q as a whole.
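Term evaluation and accessibility in a partial structure can be sketched as follows (a hypothetical encoding: terms are nested tuples, partial functions are dicts):

```python
def evaluate(term, funcs):
    """Value of a closed term (f, t1, ..., tk) in a partial structure;
    None stands for 'undefined', and propagates through applications."""
    f, *args = term
    vals = tuple(evaluate(t, funcs) for t in args)
    if any(v is None for v in vals):
        return None
    return funcs[f].get(vals)

def accessible_nodes(funcs):
    """Nodes denoted by some closed term: least fixpoint of closure
    under the structure's (partial) functions."""
    nodes, changed = set(), True
    while changed:
        changed = False
        for graph in funcs.values():
            for args, val in graph.items():
                if val not in nodes and all(a in nodes for a in args):
                    nodes.add(val)
                    changed = True
    return nodes

# The structure for the string 011 (token e, pointers 0 and 1):
F = {"e": {(): 0}, "0": {(0,): 1}, "1": {(1,): 2, (2,): 3}}
assert evaluate(("1", ("1", ("0", ("e",)))), F) == 3   # value of the address 1(1(0(e)))
assert accessible_nodes(F) == {0, 1, 2, 3}             # the structure is accessible
```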

## 3 Structure-transformation programs

Programs operating on structures and transforming them are well known, for example from Gurevich’s Abstract State Machines [14, 15, 6]. We define a version of such programs, giving special attention to basic execution steps (structure revisions).

### 3.1 Structure revisions

We consider the following basic operations on V-structures, each transforming a V-structure S to a V-structure S′ which, aside from the changes indicated below, is identical to S.

• Function-revisions

1. A function-extension is an expression f t1 ⋯ tk ↓ q, where the ti and q are addresses. The intent is that if t1, …, tk and q are all defined, but fS(t1S, …, tkS) is undefined, then fS′(t1S, …, tkS) = qS. f is the eigen-function of the extension.

2. A function-contraction is an expression f t1 ⋯ tk ↑. The intent is that fS′ is undefined at (t1S, …, tkS).

• Relation-revisions

Relation revisions may be viewed as a special case of function-revisions, given the functional representation of relations described above. We mention them explicitly since they are used routinely.

1. A relation-extension is an expression R t1 ⋯ tk ↓, where R is a k-ary relation identifier. The intent is that if each tiS is defined, then RS′ is RS augmented with the tuple (t1S, …, tkS) (if not already there). R is the eigen-relation of the extension.

2. A relation-contraction is an expression R t1 ⋯ tk ↑. The intent is that if each tiS is defined, then RS′ is RS with the tuple (t1S, …, tkS) removed (if there).

• Node-revisions

1. A node-inception is an expression of the form c ⇓, where c is a token. The intent is that, if cS is undefined, then |S′| is |S| augmented with a new node denoted by c (i.e. cS′ is that node). A traditional alternative notation is c := new. Assigning a new node to a compound address f t1 ⋯ tk can be viewed as an abbreviation for c ⇓ ; f t1 ⋯ tk ↓ c ; c ↑, where c is a fresh token.

2. A node-deletion is an expression of the form c ⇑, where c is a token. The intent is that S′ is obtained from S by removing the node cS (if defined), and removing all tuples containing it from each RS (R a relation-identifier) and from the graph of each fS (f a function-identifier). Again, a more general form of node-deletion, for a compound address f t1 ⋯ tk, can be implemented as the composition of a function-extension c ↓ f t1 ⋯ tk and c ⇑, c a fresh token.

Deletions are needed, for example, when the desired output structure has fewer nodes than the input structure (“garbage collection”).

We refer to the operations above collectively as revisions. Revisions cannot be split into smaller actions. On the other hand, a function-extension and a function-contraction can be combined into an assignment, i.e. a phrase of the form f α⃗ := β. This can be viewed as an abbreviation, with b a fresh token, for the composition of four revisions:

 b ↓ β ; f α⃗ ↑ ; f α⃗ ↓ b ; b ↑
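The four-revision decomposition can be sketched operationally (Python; the dict encoding and names are ours):

```python
def extend(funcs, f, args, val):
    """Function-extension f a ↓ v: define f at a only if currently undefined."""
    if args not in funcs[f]:
        funcs[f][args] = val

def contract(funcs, f, args):
    """Function-contraction f a ↑: make f undefined at a."""
    funcs[f].pop(args, None)

def assign(funcs, f, args, val):
    """Assignment f a := v as the composition of four revisions,
    via a fresh token 'tmp'."""
    funcs.setdefault("tmp", {})
    extend(funcs, "tmp", (), val)              # tmp ↓ v
    contract(funcs, f, args)                   # f a ↑
    extend(funcs, f, args, funcs["tmp"][()])   # f a ↓ tmp
    contract(funcs, "tmp", ())                 # tmp ↑

F = {"succ": {(0,): 1}}
assign(F, "succ", (0,), 2)        # overwrite: succ(0) := 2
assert F["succ"][(0,)] == 2
extend(F, "succ", (0,), 9)        # a plain extension does not overwrite
assert F["succ"][(0,)] == 2
```

The contrast between `extend` and `assign` mirrors the paper's point: an extension only ever defines a previously undefined value, so overwriting genuinely requires the intermediate contraction.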

### 3.2 ST Programs

Our programming language ST consists of guarded iterative programs built from structure revisions. Uninterpreted programs over a vocabulary V normally refer to an expansion of V, as needed to implement algorithms and to generate output. We refer from now on to such an expansion W.

• A test is one of the following types of phrases.

1. A convergence-expression !t, where t is an address. This is intended to state that the address t is defined for the current values of the function-identifiers. Thus ¬!t states that t is undefined in the current structure.

2. An equation t = s, where t and s are addresses. This is intended to state that both addresses are defined and evaluate to the same node.

3. A relational-expression R t1 ⋯ tk, where R is a k-ary relation-identifier and each ti is an address. By the convention above, this may be construed as a special case of the equation σR t1 ⋯ tk = t1.

• A guard is a boolean combination of tests.

Given a vocabulary W, the W-programs of ST are generated inductively as follows (we omit the reference to W when un-needed).

1. A structure-revision is a program.

2. If P and Q are programs then so is P;Q.

3. If G is a guard and P, Q are programs, then if[G]{P}{Q} and do[G]{P} are programs.

### 3.3 Program semantics

Given a vocabulary W, a W-configuration (cfg) is a W-structure. Given a V-structure S and W ⊇ V, we write SW for the expansion of S to W with all identifiers in W∖V interpreted as empty (everywhere-undefined functions and empty relations). For a program P over W we define the binary yield relation ⇒P between W-configurations by recurrence on P. For a structure-revision the definition follows the intended semantics described informally above. The cases for composition, branching, and iteration are straightforward as usual.

Let Φ be a partial-mapping from a class C of V-structures to a class of V′-structures. A W-program P computes Φ if for every S ∈ C we have SW ⇒P S′ for some W-expansion S′ of Φ(S).

The vocabulary V′ of the output structure need not be related to the input vocabulary V.¹

¹ Of course, if C is a proper class (in the sense of Gödel-Bernays set theory), then the mapping defined by P is a proper class.

We shall focus mostly on programs as transducers. Note that all structure revisions refer only to accessible structure nodes. It follows that non-accessible nodes play no role in the computational behavior of ST programs. We shall therefore focus from now on accessible structures only.

### 3.4 Examples

1. Concatenation by splicing. The following program computes concatenation over {0,1}*. It takes as input a pair of string structures, where the token and two successor identifiers are e, 0, 1 for the first string and ^e, ^0, ^1 for the second. The output is the concatenation, with vocabulary e, 0, 1.

 a ↓ e ; % move a to the end of input 1
 do [!0a ∨ !1a] { if [!0a] {a := 0a} {a := 1a} } ;
 b ↓ ^e ; % splice input 2's arcs onto the plain 0/1 pointers
 do [!^0b ∨ !^1b] {
     if [!^0b] { 0a ↓ ^0b ; a := ^0b ; b := ^0b }
               { 1a ↓ ^1b ; a := ^1b ; b := ^1b }
 }

2. Concatenation by copying. The previous program uses no inception, as it splices the second argument over the first. The following program copies the second argument over the first, thereby enabling a repeated and modular use of concatenation, as in the multiplication example below.

 a ↓ e ; % move a to the end of input 1
 do [!0a ∨ !1a] { if [!0a] {a := 0a} {a := 1a} } ;
 b ↓ ^e ; % copy of input 2 incepted after input 1
 do [!^0b ∨ !^1b] {
     c ⇓ ;
     if [!^0b] { 0a ↓ c ; a := c ; b := ^0b }
               { 1a ↓ c ; a := c ; b := ^1b } ;
     c ↑
 }
3. String multiplication is the function that for inputs n and w returns the result of concatenating n copies of w. This is computed by the following program, which takes as input a pair of structures, with vocabularies {z, s} (for the number) and {^e, ^0, ^1} (for the string) respectively, and output vocabulary {e, 0, 1}.

 i ↓ z ; a ↓ e ;
 do [!si] { % iterate over the numerical input
     i := si ; b := ^e ; % a copy of the string input is concatenated onto the output
     do [!^0b ∨ !^1b] {
         c ⇓ ;
         if [!^0b] { 0a ↓ c ; b := ^0b }
                   { 1a ↓ c ; b := ^1b } ;
         a := c ; c ↑
     }
 }
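The splicing idea of example 1 can be checked against a direct Python simulation (our own dictionary encoding; primed identifiers play the role of the second input's vocabulary):

```python
def string_structure(w, tok="e", zero="0", one="1", base=0):
    """A binary string as a partial structure over {tok, zero, one}."""
    funcs = {tok: {(): base}, zero: {}, one: {}}
    for i, ch in enumerate(w):
        funcs[zero if ch == "0" else one][(base + i,)] = base + i + 1
    return funcs

def splice_concat(w1, w2):
    """Concatenation by splicing: reuse the second structure's nodes,
    re-hanging its primed arcs as plain 0/1 arcs off the first string."""
    f1 = string_structure(w1)                                 # vocabulary e,0,1
    f2 = string_structure(w2, "e'", "0'", "1'", base=len(w1) + 1)
    a = f1["e"][()]
    while (a,) in f1["0"] or (a,) in f1["1"]:                 # walk a to the end
        a = f1["0"][(a,)] if (a,) in f1["0"] else f1["1"][(a,)]
    b = f2["e'"][()]
    while (b,) in f2["0'"] or (b,) in f2["1'"]:               # splice primed arcs
        if (b,) in f2["0'"]:
            nxt, lab = f2["0'"][(b,)], "0"
        else:
            nxt, lab = f2["1'"][(b,)], "1"
        f1[lab][(a,)] = nxt
        a = b = nxt
    return f1

def decode(funcs):
    """Read the string back off a structure over {e, 0, 1}."""
    out, a = "", funcs["e"][()]
    while (a,) in funcs["0"] or (a,) in funcs["1"]:
        if (a,) in funcs["0"]:
            out, a = out + "0", funcs["0"][(a,)]
        else:
            out, a = out + "1", funcs["1"][(a,)]
    return out

assert decode(splice_concat("011", "10")) == "01110"
```

Note that no new nodes are created: as in the ST program, splicing merely redirects pointers, which is why a second, copying version is needed for modular reuse.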

### 3.5 Computability

Since guarded iterative programs are well known to be sound and complete for Turing computability, the issue of interest here is articulating Turing computability in the ST setting. Consider a Turing transducer M over an I/O alphabet Σ, with full alphabet Γ ⊇ Σ, set of states Q, start state q0, print state qp, and transition function δ. The input is taken to be the string structure for an input word w ∈ Σ*.

Define VM to be the vocabulary with e, a cursor token c, and each state in Q as tokens; and with a backward pointer b and each symbol in Γ as pointers. Thus the program vocabulary is broader than the input vocabulary, both in representing M's machinery, and with auxiliary components. The intent is that a configuration (i.e. a tape content σ1 ⋯ σk with position i cursored, in state q) be represented by the VM-structure

 e∘ —σ1→ ∘ ⋯ ∘ —σi→ ∘ ⋯ ∘ —σk→ ∘

in which the leftmost node is denoted by e, and the node at the cursor position is denoted by both c and the token for the current state q. All remaining tokens are undefined.

The program simulating M implements the following phases:

1. Convert the input structure into the structure for the initial configuration, initialize the cursor token to the initial input element, and set the pointer b to be the destructor (predecessor) function for the input string.

2. Main loop: configurations are revised as called for by δ. The pointer b is used to implement backwards cursor movements. The loop's guard is the token for the “print” state being undefined.

3. Convert the final configuration into the output.

## 4 STV: programs with variants

### 4.1 Loop variants

A variant ν is a finite set of function- and relation-identifiers of positive arity, to which we refer as ν's components.

Given a vocabulary W, the W-programs of STV are generated inductively as follows, in tandem with the notion of a variant ν being terminating in an STV-program P. Again, we omit the reference to W when it is clear or irrelevant.

1. A structure-revision over W is a program. A variant ν is terminating in any revision except for a function- or relation-extension whose eigen-function or eigen-relation is a component of ν.

2. If P and Q are STV-programs with ν terminating in both, then P;Q is an STV-program, with ν terminating in it.

3. If G is a guard and P, Q are STV-programs with ν terminating in both, then if[G]{P}{Q} is an STV-program, with ν terminating in it.

4. If G is a guard, and Q is an STV-program in which the variants ν and μ are both terminating, then do[G][ν]{Q} is an STV-program, with μ terminating in it.

We write STV(W) for the programming language consisting of STV-programs over vocabulary W, omitting W when there is no loss of clarity.

### 4.2 Semantics of STV-programs

The semantics of STV-programs is defined as for programs of ST, with the exception of the looping construct do. A loop do[G][ν]{Q} is entered if G is true in the current state, and is re-entered if G is true in the current state and the previous pass executed at least one contraction for some component of the variant ν. Thus, as the loop is executed, no component of ν is extended within Q (by the syntactic condition that ν is terminating in Q), and ν is contracted at least once in each iteration, save the last (by the semantic condition on loop execution).
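This loop semantics can be sketched as a tiny interpreter (names hypothetical): re-entry requires both the guard and an actual shrinking of the variant on the previous pass.

```python
def run_do(guard, body, variant_size, state):
    """STV do-loop semantics (sketch): execute the body while the guard
    holds, stopping as soon as a pass fails to contract the variant."""
    while guard(state):
        before = variant_size(state)
        state = body(state)
        if variant_size(state) >= before:   # no contraction: the loop exits
            break
    return state

# A body that consumes its variant terminates normally:
assert run_do(lambda s: len(s) > 0, lambda s: s[1:], len, [1, 2, 3]) == []
# A body that never contracts runs one pass at most, so it cannot diverge:
assert run_do(lambda s: True, lambda s: s, len, [1]) == [1]
```

Termination is thus guaranteed by the semantics alone: the number of iterations is bounded by the initial size of the variant.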

### 4.3 String duplication

The following program duplicates a string given as a structure: the output structure has the same nodes as the input, but with the 0 and 1 functions appearing in duplicate. The algorithm has two phases: a first loop, with the variant consisting collectively of the 0 and 1 functions, creates two new copies of the string (while depleting the input functions in the process). A second loop restores one of the two copies to the original identifiers, thereby allowing the duplication to be useful within a larger program that refers to the original identifiers. Function duplication in arbitrary structures is more complicated, and will be discussed below.

 a := e ;
 do [!0a ∨ !1a] [0,1] { % 0/1 copied to ¯0/¯1 and ^0/^1, while being consumed as variant
     b := a ;
     if [!0a] { ¯0a ↓ 0a ; ^0a ↓ 0a ; a := 0a ; 0b ↑ }
              { ¯1a ↓ 1a ; ^1a ↓ 1a ; a := 1a ; 1b ↑ }
 } ;
 a := e ;
 do [!^0a ∨ !^1a] [^0,^1] { % ^0/^1 restored to 0/1
     if [!^0a] { 0a ↓ ^0a ; ^0a ↑ ; a := 0a }
               { 1a ↓ ^1a ; ^1a ↑ ; a := 1a }
 }

The ability of STV programs to duplicate structures (for now only string structures) is at the core of their ability to implement recurrence, as will be discussed below.

### 4.4 Further examples

1. Concatenation. Using string duplication, we can easily convert the concatenation examples of §3.4 to STV. The changes are similar for the splicing and for the copying programs. The programs are preceded by the duplication of each of the two inputs. The copy of the first input is then used as variant for the first loop, and is depleted by a contraction in each cycle. The copy of the second input is used as variant for the second loop, and is similarly depleted.

2. Multiplication. The program of §3.4 is preceded by a duplication of the string input. The outer loop has the numerical input's pointer s as a variant, which is depleted by a contraction in each cycle. The inner loop has the copy of the string input as variant.

3. Exponentiation. A program transforming the structure for n to the structure for 2^n is obtained by combining the programs for duplication and concatenation. Using for the input vocabulary a token z and a pointer s, and for the output a token e and a single pointer, the program first initializes the output to the structure for 1. The main loop has !si as guard and s as variant. The body triplicates the output computed so far, and uses one copy as variant for an inner loop that concatenates the other two copies.

## 5 Programs for structure expansions

In this section we describe programs that expand arbitrary (finite) structures in important ways.

### 5.1 Enumerators

Given a V-structure S we say that a pair (a, e), with a ∈ |S| and e a unary partial-function over |S|, is an enumerator for S if for some n ⩾ 0 the sequence

 a, e(a), …, e[n](a)

consists of all accessible nodes of S, without repetitions, and e[n+1](a) is undefined. An enumerator is monotone if the value of a term never precedes the values of its sub-terms. This is guaranteed if the value of a term of height h never precedes the value of terms of height less than h.

###### Theorem 2

For each vocabulary V there is a program that for V-structures S as input yields an expansion of S with a monotone enumerator (a, e).

Proof. The program maintains, in addition to the identifiers in V, the following auxiliary identifiers.

• A token a, intended to denote the head of the enumerator.

• A pointer e, intended to denote a (repeatedly growing) initial segment of the intended enumerator;

• A set identifier E, intended to denote the set of nodes enumerated by e so far.

• A pointer u, intended to list, starting from a token t, some accessible nodes not yet listed in e; these are to be appended to e at the end of each loop-cycle.

• A token d, intended to serve as a flag indicating that the last completed cycle has added some elements to e.

A preliminary program-segment sets a and t to the node denoted by one of the V-tokens (there must be one, or else there would be no accessible nodes), and defines u to list any additional nodes denoted by tokens. (The value of t is immaterial, only t being defined matters.) Note that e, E and u are initially empty by default.

The main loop starts by re-initializing u to empty, resetting the flag d to undefined (i.e. false), and duplicating e, using the string duplication described above, as needed for the following construction. Each pass then adds to e all nodes that are obtained from nodes already in e by applications of the structure's functions, and that are not already in E. That is, for each unary function-id f of V a secondary loop travels through a copy of e, using an auxiliary token b. When f applied to an entry is defined but its value is not in E, that value is appended to both u and E. The guard of that loop is the convergence of the copy at b, and the copy is the variant.

For function identifiers f of arity greater than 1 the process is similar, except that nested loops are required, with additional duplications of e ahead of each loop. Whenever a new node is appended to u, the token d is set to be defined (say, as the current value of a).

When every non-nullary function-identifier of V has been treated, the list u is appended to e, leaving u empty.

In §4.3 we gave a program for duplicating a string. Using an enumerator, a program using the same method would duplicate, for the accessible nodes, each structure function. Namely, to duplicate a k-ary function denoted by f to one denoted by f′, the program traverses k copies of the enumerator with tokens b1, …, bk, and whenever f(b1, …, bk) is defined, the program defines f′(b1, …, bk) to be that value.

Observe that an enumerator for a structure usually ceases to be one with the execution of a structure revision; for example, a function contraction may turn an accessible node into an inaccessible one. This can be repaired by accompanying each revision by an auxiliary program tailored to it, or simply by redefining an enumerator whenever one is needed.
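The construction in the proof of Theorem 2 is, in effect, a saturation under the structure's functions; a compact Python rendition (our encoding, not the ST program itself) is:

```python
def monotone_enumerator(funcs):
    """List all accessible nodes so that no node precedes the arguments
    of the term reaching it: repeatedly append any function value whose
    arguments have already been listed (a sketch of Theorem 2's idea)."""
    order, seen = [], set()
    changed = True
    while changed:
        changed = False
        for graph in funcs.values():
            for args, val in graph.items():
                if val not in seen and all(a in seen for a in args):
                    seen.add(val)
                    order.append(val)
                    changed = True
    return order

# The structure for the string 011:
F = {"e": {(): 0}, "0": {(0,): 1}, "1": {(1,): 2, (2,): 3}}
assert monotone_enumerator(F) == [0, 1, 2, 3]
```

Non-accessible nodes never satisfy the argument condition, so only accessible nodes are listed, exactly as required of an enumerator.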

### 5.2 Quasi-inverses

We shall need to refer below to decomposition of inductive data, i.e. inverses of constructors. While in general structure functions need not be injective, we can still have programs for quasi-inverses, which we define as follows.²

² A common equivalent definition: F is a quasi-inverse of f iff f ∘ F ∘ f = f.

For a relation R ⊆ A × B and x ∈ A, define xR = {y : x R y}.³ We call a partial-function F a choice-function for R if F(x) ∈ xR whenever F(x) is defined, and F(x) is defined whenever xR is non-empty. A partial-function F is a quasi-inverse of f if it is a choice function for the relation {(x, y) : f(y) = x}. When f is k-ary, i.e. y ranges over k-tuples, F can be construed as a k-tuple of functions F1, …, Fk. We write f⁻i for Fi. If f is injective then its unique quasi-inverse is its inverse f⁻¹.

³ We use infix notation for binary relations.

###### Theorem 3

For each vocabulary V there is a program that for each V-structure S as input yields an expansion of S with quasi-inverses for each non-nullary V-function.

Proof. The proof of Theorem 2 can be easily modified to generate quasi-inverses for each structure function, either in tandem with the construction of an enumerator, or independently. Namely, whenever the program in the proof of Theorem 2 adds to the enumeration a node obtained as f(a1, …, ak), our enhanced program defines f⁻i at that node to be ai (i = 1..k).

Note that, contrary to enumerators, quasi-inverses are easy to maintain through structure revisions. An extension of a function  can be augmented with appropriate extensions of ’s quasi-inverses, and a contraction of  with appropriate contractions of those quasi-inverses.
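A quasi-inverse of a finite (possibly non-injective) function can be sketched directly (Python, illustrative names):

```python
def quasi_inverse(f):
    """A choice of preimages for a finite function f (given as a dict):
    the result F satisfies f[F[x]] == x for every x in f's range, which
    is the defining property of a quasi-inverse."""
    F = {}
    for y, x in f.items():
        F.setdefault(x, y)        # keep one arbitrary preimage per value
    return F

f = {1: "a", 2: "a", 3: "b"}      # not injective: 1 and 2 both map to "a"
F = quasi_inverse(f)
assert all(f[F[x]] == x for x in f.values())
assert F["b"] == 3                # where f is injective, F is the actual inverse
```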

## 6 A generic delineation of primitive recursion

### 6.1 Recurrence over inductive data

Recall that the schema of recurrence over ℕ consists of the two equations

 f(0, x⃗) = g0(x⃗)
 f(s n, x⃗) = gs(n, x⃗, f(n, x⃗))   (1)

More generally, given a free algebra A generated from a finite set of constructors, recurrence over A has one equation per constructor c:

 f(c(z1, …, zk), x⃗) = gc(x⃗, z⃗, y1, …, yk)   where yj = f(zj, x⃗)   (j = 1..k, k = r(c))   (2)

The set PR(A) of primitive recursive functions over A is generated from the constructors of A (for example zero and successor for ℕ), by recurrence over A and explicit definitions.⁴ Using standard codings, it is easy to see that any non-trivial (i.e. infinite) algebra can be embedded in any other. Consequently, the classes PR(A) are essentially the same for all non-trivial A, and we refer to them jointly as PR.⁵ A natural question is whether there is a generic approach, unrelated to free algebras, that delineates the class PR.

⁴ The phrase “primitive recursive” was coined by Rózsa Péter [21], triggered by the discoveries by Ackermann and Sudan of computable (“recursive”) functions that are not primitive recursive. Given the present-day use of “recursion” for recursive procedures, “recurrence” seems all the more appropriate.

⁵ Note that we are not dealing with generalizations of recurrence to well-orderings (“Noetherian induction”).
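Schema (2) is exactly a fold over terms of the free algebra; a minimal sketch (terms as nested tuples, our own encoding):

```python
def free_recursion(step, term):
    """Recurrence over a free algebra: a term is (constructor, subterms...);
    step receives the constructor and the recursive results y_1..y_k."""
    c, *subs = term
    return step(c, [free_recursion(step, t) for t in subs])

def to_num(n):
    """Unary numeral over constructors 'z' (zero) and 's' (successor)."""
    return ("z",) if n == 0 else ("s", to_num(n - 1))

# Addition of 3, defined by recurrence: f(0) = 3, f(s n) = f(n) + 1.
plus3 = lambda c, ys: 3 if c == "z" else ys[0] + 1
assert free_recursion(plus3, to_num(4)) == 7
```

Since the algebra is free, each term has a unique constructor decomposition, so the recursion is well defined; this is the uniqueness that Lemma 6 below exploits.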

The recurrence schema (for ℕ) was seemingly initiated by the interest of Dedekind in formalizing arithmetic, and articulated by Skolem [24]. It was studied extensively (e.g. [21]), and generalized to all admissible structures [3]. Our aim here is to characterize the underlying notion of primitive recursion generically, via uninterpreted programs. We delineate a natural variant STV of ST, which is sound and complete for PR. That is, on the one hand every STV program terminates in time primitive-recursive in the size of the input structure. On the other hand, STV captures PR in two ways: any instance of recurrence over a free algebra can be implemented directly by an STV program; and every ST program that runs in PR resources in the size of the input structure can be transformed into an extensionally equivalent STV program.

Recurrence is guaranteed to terminate because it consumes its recurrence argument. The very same consumption phenomenon is used, in a broad and generic sense, in the Dijkstra-Hoare program verification style, in the notion of a variant [12, 8, 26]. Our core idea is to use a generic notion of program variants in lieu of recurrence arguments taken from free algebras.

### 6.2 Resource measures

We first identify appropriate notions of size measures for structures. We focus on accessible structures, since non-accessible nodes remain non-accessible through revisions and are inert through the execution of any program. Consequently they do not affect the time or space consumption of computations.

We take the size of an accessible -structure  to be the count of tuples of nodes that occur in the structure’s relations and (graphs of) functions. Note that this is in tune with our use of variants, which are consumed not by the elimination of nodes, but by the contraction of functions and relations. Moreover, we believe that the size of functions and relations is an appropriate measure in general, since they convey more accurately than the number of nodes the information contents of a structure.

Note that for word-structures, i.e. the structures for words w ∈ Σ* (Σ an alphabet), the total size of the structure's functions is precisely the length of w, so in this important case our measure is essentially the count of nodes.

Suppose V is a vocabulary with all identifiers of arity ⩽ r. If S is a V-structure of size n, then the number of accessible nodes is O(n). Conversely, if the number of accessible nodes is m, then the size is O(m^r). It follows that the distinction between our measure and node-count does not matter for super-polynomial complexity.

We say that a program P runs within time t : ℕ → ℕ if for all structures S, the number of configurations in a complete trace of P on input S is ⩽ t(#S); it runs within space s : ℕ → ℕ if for all S, all configurations in an execution trace of P on input S are of size ⩽ s(#S).

We say that P runs in PR if it runs within time t, for some PR function t, or, equivalently, within space s, for some PR function s.

### 6.3 PR-soundness of STV-programs

We assign to each STV-program P a primitive-recursive function bP : ℕ → ℕ as follows. The aim is to satisfy Theorem 4 below. Note that each bP is, by construction, non-decreasing and inflationary (bP(n) ⩾ n).

• If P is an extension or an inception revision, then bP(n) = n + 1; if P is any other revision then bP(n) = n.

• If P is Q;R then bP(n) = bR(bQ(n)).

• If P is if[G]{Q}{R} then bP(n) = max(bQ(n), bR(n)).

• If P is do[G][ν]{Q} then bP(n) = bQ[n](n).
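The size-bound clauses can be transcribed directly (Python; the tagged-tuple program syntax is our own):

```python
def bound(P):
    """The primitive-recursive bound b_P: for a loop, the body's bound
    is iterated n times on input n, mirroring the bound on variants."""
    kind = P[0]
    if kind == "rev":                    # ("rev", grows), grows in {0, 1}
        return lambda n: n + P[1]
    if kind == "seq":                    # ("seq", Q, R) for Q;R
        bQ, bR = bound(P[1]), bound(P[2])
        return lambda n: bR(bQ(n))
    if kind == "if":                     # ("if", Q, R); the guard is irrelevant
        bQ, bR = bound(P[1]), bound(P[2])
        return lambda n: max(bQ(n), bR(n))
    if kind == "do":                     # ("do", Q): iterate b_Q n times on n
        bQ = bound(P[1])
        def b(n):
            m = n
            for _ in range(n):
                m = bQ(m)
            return m
        return b

# A loop whose body performs one extension grows by at most n: b(n) = 2n.
assert bound(("do", ("rev", 1)))(5) == 10
# Nesting loops iterates the growth, just as nested recurrences do.
assert bound(("do", ("do", ("rev", 1))))(3) == 24
```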

###### Theorem 4

If P is an STV-program computing a mapping ΦP between structures, and S is an input structure, then

 #ΦP(S) ⩽ bP(#S)

Proof. Structural induction on P.

• If P is a revision, then the claim is immediate by the definition of bP.

• If P is Q;R then

 #ΦP(S) = #ΦR(ΦQ(S))
        ⩽ bR(#ΦQ(S))   (IH for R)
        ⩽ bR(bQ(#S))   (IH for Q, since bR is non-decreasing)
        = bP(#S)

• The case for P of the form if[G]{Q}{R} is immediate.

• If P is do[G][ν]{Q} then ΦP(S) is ΦQ[m](S) for some m. By the definition of variants, and the semantics of looping, m is bounded by the number of tuples in ν's components in S, which is bounded by the size of S. So

 #ΦP(S) = #ΦQ[m](S)   for some m ⩽ #S
        ⩽ bQ[m](#S)    (IH, bQ is non-decreasing)
        ⩽ bQ[n](#S)    where n = #S, since bQ is inflationary
        = bP(#S)

From Theorem 4 we obtain the soundness of STV-programs for PR:

###### Theorem 5

Every STV-program runs in PR space, and therefore in PR time.

### 6.4 Completeness of STV-programs for PR

We finally turn to the completeness of STV for PR. The easiest approach would be to prove that STV is complete for PR(ℕ), and then invoke the coding of primitive recurrence over any free algebra in PR(ℕ). This, however, would fail to establish a direct representation of generic recurrence by STV-programs, which is one of the raisons d’être of STV. We therefore follow a more general approach.

###### Lemma 6

For each free algebra A and each instance of recurrence over A as in (2) above (defining f from given functions gc), the following holds. Given STV-programs computing the functions gc, there is an STV-program that maps the structure for a tuple (t, x⃗) to the structure for f(t, x⃗).

Proof. The program gradually constructs a pointer that maps each node a of the recurrence argument to the root of the structure for f(ta, x⃗), where ta is the sub-term denoting a (ta is uniquely determined since the algebra is free).

The program starts by constructing a monotone enumerator for the input structure, as well as inverses for all constructors, by Theorems 2 and 3. (Since the recurrence argument is a term of a free algebra, a quasi-inverse of a constructor is an inverse.) The main loop then scans that enumerator, using a token; reaching the end of the enumerator is the guard, and the enumerator itself is the variant.

For each node  encountered on the enumerator, first identifies the constructor  defining , which is unique since . This identification is possible by testing for equality with the tokens, and — that failing — testing, for non-nullary constructor , the definability of the first inverse . Since the enumerator is monotone,  is already defined for the values (). can thus invoke the program for the function , adapted to the disjoint union of

1. The structures pointed to by F at the arguments of a;

2. The structures spanned by the arguments themselves, i.e. for each argument the substructure of the input consisting of its sub-terms;

F(a) is then set to be the root of the result.

The program’s final output is then the structure pointed to by F at the root of the input; that is the structure yielded for the program’s given recurrence argument.
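The proof’s main loop can be sketched in ordinary code. The following Python fragment is a hypothetical illustration (the names `nodes`, `step`, and the dict representation are ours, not the paper’s): it collapses the structure manipulations into a bottom-up evaluation driven by a monotone enumerator.

```python
# A hypothetical sketch of the Lemma 6 construction.  A term of a free
# algebra is given as a finite structure: a dict mapping each node to
# its constructor and argument nodes.  A monotone enumerator (arguments
# before parents, as in Theorem 2) drives the main loop, which fills in
# the pointer F bottom-up; `step` plays the role of the given
# recurrence functions.

def eval_recurrence(nodes, enumerator, step):
    """nodes: node -> (constructor, list of argument nodes)
    enumerator: the nodes, each preceded by its arguments (monotone)
    step: (constructor, values at the arguments) -> value
    """
    F = {}                                    # the pointer F of the proof
    for a in enumerator:                      # the enumerator is the variant
        c, args = nodes[a]                    # identify the constructor of a
        F[a] = step(c, [F[b] for b in args])  # F already defined on args
    return F[enumerator[-1]]                  # the value at the root

# Example: the numeral s(s(s(0))) under the recurrence
# f(0) = 1, f(s(x)) = 2 * f(x).
nodes = {0: ('0', []), 1: ('s', [0]), 2: ('s', [1]), 3: ('s', [2])}
assert eval_recurrence(nodes, [0, 1, 2, 3],
                       lambda c, vals: 1 if c == '0' else 2 * vals[0]) == 8
```

The monotonicity of the enumerator is exactly what guarantees that `F[b]` is defined when a parent node is reached, mirroring the argument in the proof.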

###### Theorem 7

For each free algebra 𝔸, the collection of STV-programs is complete for PR over 𝔸.

Proof. The proof proceeds by induction on the PR definition of the given function. The cases where the function is a constructor are trivial. For explicit definitions, and more particularly composition, we need to address the need to duplicate substructures, for which we have programs, as explained in §5.1.

Finally, the case of recurrence is treated in Lemma 6.
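For illustration, the induction of the proof can be mirrored by a small evaluator of PR definitions over ℕ. This Python sketch (the tagged-tuple syntax is a hypothetical encoding of ours) has one case per clause of the induction: constructors, explicit definition (projection, which covers selection and duplication of arguments), composition, and recurrence.

```python
# A hypothetical evaluator mirroring the induction of Theorem 7, over
# the free algebra of unary numerals.  Each PR definition is a tagged
# tuple; `ev` follows the structure of the definition, just as the
# proof builds an STV-program by induction on it.

def ev(d, args):
    kind = d[0]
    if kind == 'zero':                       # constructor: trivial case
        return 0
    if kind == 'succ':                       # constructor: trivial case
        return args[0] + 1
    if kind == 'proj':                       # explicit definition:
        return args[d[1]]                    # selection / duplication
    if kind == 'comp':                       # f(g1(x..), ..., gk(x..))
        f, gs = d[1], d[2]
        return ev(f, [ev(g, args) for g in gs])
    if kind == 'rec':                        # f(0,y)=g(y); f(x+1,y)=h(f(x,y),x,y)
        g, h = d[1], d[2]
        x, rest = args[0], args[1:]
        acc = ev(g, rest)
        for i in range(x):
            acc = ev(h, [acc, i] + rest)
        return acc
    raise ValueError(kind)

# Addition: rec with g = proj_0 and h = succ(proj_0).
add = ('rec', ('proj', 0), ('comp', ('succ',), [('proj', 0)]))
assert ev(add, [3, 4]) == 7
```

The `rec` case is the one delegated to Lemma 6 in the proof; the loop over `range(x)` corresponds there to the scan of the monotone enumerator.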

Theorem 7 establishes a simple and direct mapping of PR function definitions, over any free algebra, to STV-programs. Another angle on the completeness of STV for PR refers directly to ST-programs (i.e. to programs without variants):

###### Corollary 8

For every ST-program P running in PR resources and defining a structure transformation T, there is an STV-program that computes T.

Proof. Recall from §6.2 that the size of a structure, measured as the total size of its functions and relations, is polynomial in its number of nodes. It follows that P runs in time PR in the input’s number of nodes.

Suppose now that P’s input is a structure over some vocabulary, and that P operates within time t(n), where t is a PR function over ℕ and n is the number of input nodes.

Let the required STV-program be the composition of the following STV-programs:

1. A program that expands each input structure S with an enumerator, as in Theorem 2. The constructed enumerator is a list without repetition of the nodes of S; i.e., it is essentially the numeral n, where n is the number of nodes in S.

2. A program that takes as input the structure constructed in (1), and outputs the structure representing t(n), with, say, the enumerator’s successor as the output’s successor function. Such a program exists by Theorem 7 applied to the free algebra ℕ.

3. The given ST-program P, with each loop assigned as variant a copy of the structure representing t(n), and each loop-body preceded by a function-contraction of that variant.

Then the composed program computes the same structure-transformation as P.
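The three-stage pipeline can be illustrated by a hypothetical Python sketch, where the precomputed PR time bound plays the role of the variant copied into each loop (`fuel`, `guard`, and `loop_body` are our names, not the paper’s).

```python
# A hypothetical rendering of the Corollary 8 pipeline.  Step (2)
# precomputes the PR time bound t(n); step (3) runs the original loop,
# contracting a copy of that bound (the variant) once per iteration.
# If the program really runs within time t(n), the variant is never
# exhausted and the semantics is unchanged.

def with_variant(loop_body, guard, state, t, n):
    fuel = t(n)                  # step (2): evaluate the PR bound
    while guard(state) and fuel > 0:
        fuel -= 1                # function-contraction of the variant
        state = loop_body(state)
    return state

# Example: repeated halving reaches 0 well within t(n) = n steps.
assert with_variant(lambda s: s // 2, lambda s: s > 0, 12, lambda n: n, 12) == 0
```

Because the variant is computed before the loop starts, the transformed program is an STV-program even though the original guard may depend on the evolving state.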


## References

• [1] Philippe Andary, Bruno Patrou, and Pierre Valarcher. About implementation of primitive recursive algorithms. In Beauquier et al. [4], pages 77–90.
• [2] Philippe Andary, Bruno Patrou, and Pierre Valarcher. A representation theorem for primitive recursive algorithms. Fundam. Inform., 107(4):313–330, 2011.
• [3] Jon Barwise. Admissible Sets and Structures, volume 7 of Perspectives in Mathematical Logic. Springer-Verlag, Berlin, 1975.
• [4] Danièle Beauquier, Egon Börger, and Anatol Slissenko, editors. Proceedings of the 12th International Workshop on Abstract State Machines, 2005.
• [5] Corrado Böhm and Alessandro Berarducci. Automatic synthesis of typed lambda-programs on term algebras. Theor. Comput. Sci., 39:135–154, 1985.
• [6] Egon Börger. The origins and the development of the ASM method for high level system design and analysis. J. UCS, 8(1):2–74, 2002.
• [7] Alonzo Church. The Calculi of Lambda-Conversion. Annals of Mathematics Studies. Princeton University Press, 1941.
• [8] Edsger W. Dijkstra. A Discipline of Programming. Prentice-Hall, 1976.
• [9] H.-D. Ebbinghaus and J. Flum. Finite Model Theory. Springer-Verlag, Berlin, 1995.
• [10] Ronald Fagin. Generalized first order spectra and polynomial time recognizable sets. In Karp [19], pages 43–73, 1974.
• [11] Erich Grädel and Yuri Gurevich. Metafinite model theory. In Leivant [20], pages 313–366.
• [12] David Gries. The Science of Programming. Texts and Monographs in Computer Science. Springer, 1981.
• [13] Yuri Gurevich. Logic in computer science column. Bulletin of the EATCS, 35:71–81, 1988.
• [14] Yuri Gurevich. Evolving algebras: an attempt to discover semantics. In Rozenberg and Salomaa [22], pages 266–292.
• [15] Yuri Gurevich. The sequential ASM thesis. In Current Trends in Theoretical Computer Science, pages 363–392. World Scientific, 2001.
• [16] Juris Hartmanis. On non-determinancy in simple computing devices. Acta Inf., 1:336–344, 1972.
• [17] Jean van Heijenoort. From Frege to Gödel, A Source Book in Mathematical Logic, 1879–1931. Harvard University Press, Cambridge, MA, 1967.
• [18] Neil Immerman. Descriptive complexity. Graduate texts in computer science. Springer, 1999.
• [19] Richard Karp, editor. Complexity of Computation. AMS, Providence, R.I, 1974.
• [20] Daniel Leivant, editor. Logic and Computational Complexity, volume 960 of Lecture Notes in Computer Science. Springer, 1995.