1 Introduction
Implicit computational complexity (ICC) strives to characterize complexity classes by resource-independent methods, thereby elucidating the nature of those classes and relating them to more abstract complexity measures, such as levels of descriptive or deductive abstraction. The various approaches to ICC fall, by and large, into two broad classes. One is descriptive complexity, which focuses on finite structures, and as such forms a branch of Finite Model Theory [18]. Its historical roots go back at least to the characterization of LogSpace queries by recurrence [16], and of NP by existential set quantification [10].
The other broad class in ICC focuses on computing over infinite structures, such as the natural numbers, strings, or lists, and uses programming and proof-theoretic methods to articulate resource-independent characterizations of complexity classes.
We argue here that computing over finite structures is, in fact, appropriate for implicit complexity over infinite structures as well. Our point of departure is the observation that inductive data objects, such as natural numbers, strings and lists, are themselves finite structures, and that their computational behavior is determined by their internal makeup rather than by their membership in this or that infinite structure. For example, the natural number three is itself a finite structure (or, more precisely, a partial structure; see below).
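The view of a number as a finite structure can be made concrete. The following sketch uses our own dictionary encoding and our own names (`zero`, `suc`), which are illustrative assumptions, not the paper's notation: the number three is four nodes, a zero token, and a partial successor pointer.

```python
# Hypothetical encoding (names are ours): the number 3 as a finite partial
# structure with a token `zero` and a pointer `suc`.
three = {
    "universe": {0, 1, 2, 3},                 # four nodes
    "tokens":   {"zero": 0},                  # nullary function: denotes a node
    "pointers": {"suc": {0: 1, 1: 2, 2: 3}},  # partial: undefined on node 3
}

def as_int(s):
    """Read the number back by following `suc` from `zero`."""
    n, node = 0, s["tokens"]["zero"]
    while node in s["pointers"]["suc"]:
        node = s["pointers"]["suc"][node]
        n += 1
    return n

assert as_int(three) == 3
```

The computational behavior of the number is determined entirely by this internal makeup, with no reference to an ambient infinite structure.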
Lifting this representation, a function is perceived as a mapping over finite second-order objects, namely the natural numbers construed as structures. This view of inductive objects as finite structures is implicit already in long-standing representations, such as the Church-Berarducci-Böhm lambda-coding of inductive data [7, 5].
As a programming language of reference we propose a Turing-complete imperative language ST for structure transformation, in the spirit of Gurevich's ASMs [6, 14, 15]. We regard such programs as operating over classes of finite structures.
We illustrate the naturalness and effectiveness of our approach by delineating a variant STV of ST, based on the notion of loop variants familiar from program development and verification [12, 8, 26], and proving that it captures exactly primitive recursion, in the strongest possible sense: all functions defined by recurrence over free algebras are computable directly by STV programs, and all STV programs run in time and space that are primitive-recursive in the size of the input.
We caution against confounding our approach with unrelated prior research addressing somewhat similar themes. Recurrence and recursion over finite structures have been shown to characterize logarithmic space and polynomial time queries, respectively [16, 23], but the programs in question do not allow inception of new structure elements, and so remain confined to linear space complexity, and are inadequate for the kind of characterizations we seek. On the other hand, unbounded recurrence over arbitrary structures has been considered by a number of authors [1, 2, 25], but always in the traditional sense of computing within an infinite structure. Also, while the metafinite structures of [11] merge finite and infinite components, both of those are considered in the traditional framework, whereas we deal with purely finite structures, and the infinite appears via the consideration of collections of such structures. Finally, the functions we consider are from structures to structures (as in [23]), and are thus unrelated to the global functions of [13, 9], which are (isomorphism-invariant) mappings that assign to each structure a function over it.
2 General setting
2.1 Partial structures
We use the phrase vocabulary for a finite set of function identifiers and relation identifiers, each identifier assigned an arity. We refer to nullary function identifiers as tokens, and to identifiers of arity 1 as pointers.
By structure we'll mean here a finite partial structure S over the vocabulary V; that is, S consists of a finite nonempty universe |S|; for each function identifier f of arity r, a partial function from |S|^r to |S|; and for each relation identifier Q of arity r, a relation over |S|^r. We refer to the elements of |S| as S's nodes.
We insist on referring to partial structures since we consider partiality to be a core component of our approach. For example, we shall identify each string in {0,1}* with a structure over the vocabulary with a token e and pointers s0 and s1. So the string 011 is identified with the four-element structure in which s0 is interpreted as the partial function defined only for the leftmost element, and s1 as the partial function defined only for the second and third elements.
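In the same dictionary encoding as before (our names, not the paper's), the string 011 looks as follows; note that each letter pointer is genuinely partial.

```python
# Sketch (our encoding): the string 011 over {0,1} as a partial structure with
# token `e` and letter pointers `s0`, `s1`.
w011 = {
    "universe": {0, 1, 2, 3},
    "tokens":   {"e": 0},
    "pointers": {"s0": {0: 1},           # defined only on the leftmost node
                 "s1": {1: 2, 2: 3}},    # defined on the second and third nodes
}

def read_string(s):
    """Recover the string by following whichever letter pointer is defined."""
    out, node = "", s["tokens"]["e"]
    while True:
        for letter, ptr in s["pointers"].items():
            if node in ptr:
                out += letter[-1]        # 's0' -> '0', 's1' -> '1'
                node = ptr[node]
                break
        else:
            return out

assert read_string(w011) == "011"
```

Since at most one letter pointer is defined at each node, the traversal is deterministic.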
We might, in fact, limit attention to vocabularies without relation identifiers, since an r-ary relation Q (r ≥ 1) can be represented by its support, that is, the r-ary partial function defined (say, with its first argument as value) exactly on the tuples in Q. Thus, for instance, Q is empty iff its support is empty (which is not the case if relations are represented by their characteristic functions). Note that by using the support rather than the characteristic function we bypass the traditional representation of truth values by structure elements, and obtain a uniform treatment of functional and relational structure revisions (defined below), as well as of initiality conditions.
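The contrast between support and characteristic function can be seen in a few lines; the sample tuples below are our own illustration.

```python
# Sketch: representing a binary relation by its support partial function.
# The support maps each tuple in Q to (say) its first component, and is
# undefined (absent) elsewhere.
Q = {(1, 2), (2, 3)}
support = {t: t[0] for t in Q}

# Emptiness of Q coincides with emptiness of the support...
assert (len(Q) == 0) == (len(support) == 0)

# ...whereas a characteristic function over some universe of tuples is
# nonempty even when the relation is empty.
char = {t: (t in Q) for t in [(1, 2), (2, 3), (3, 1)]}
assert len(char) > 0
```

This is why the support representation lets relation revisions be treated exactly like partial-function revisions.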
A tuple of structures is easily presentable as a single structure. Given structures S_1, ..., S_k, where S_i is a structure over the vocabulary V_i, let V be the disjoint union of V_1, ..., V_k, and let S be the V-structure whose universe is the disjoint union of the |S_i| (i = 1, ..., k), and where the interpretation of an identifier of V_i is the same as it is in S_i, i.e. is empty/undefined on |S_j| for every j ≠ i.
2.2 Accessible structures and free structures
The terms over V, or V-terms, are generated by the closure condition: if f is an r-ary function identifier and t_1, ..., t_r are terms, then so is f(t_1, ..., t_r). (We use parentheses and commas for function application only at the discourse level.) Note that we do not use variables, so our "terms" are all closed. The height of a term t is the height of its syntax tree: ht(f(t_1, ..., t_r)) = 1 + max_i ht(t_i), with ht(c) = 0 for a token c. Given a structure S, the value of a term t in S, val_S(t), is defined as usual by recurrence on t: if val_S(t_i) = a_i for i = 1, ..., r, then val_S(f(t_1, ..., t_r)) = f_S(a_1, ..., a_r) (undefined when some a_i, or f_S on them, is undefined). We say that a term t denotes its value val_S(t), and also that it is an address for it.
A node a of a structure S is accessible if it is the value in S of some term. The height of an accessible node a is the minimum of the heights of the addresses of a. A structure is accessible when all its nodes are accessible. If, moreover, every node has a unique address, we say that S is free.
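Partial evaluation of closed terms can be sketched directly; the tuple encoding of terms and the flat map encoding of structures below are our own illustrative choices.

```python
# Sketch (our encoding): closed terms as nested tuples ('f', t1, ..., tr); a
# structure maps (identifier, argument-tuple) pairs to nodes, with undefined
# entries simply absent, so evaluation is genuinely partial.
def height(t):
    _, *args = t
    return 0 if not args else 1 + max(height(a) for a in args)

def val(struct, t):
    f, *args = t
    vals = tuple(val(struct, a) for a in args)
    if any(v is None for v in vals):
        return None                       # some argument diverges
    return struct.get((f, vals))          # None if f undefined on these nodes

# The structure for the string 01: token e, pointers s0, s1.
S = {('e', ()): 0, ('s0', (0,)): 1, ('s1', (1,)): 2}
assert val(S, ('s1', ('s0', ('e',)))) == 2      # s1(s0(e)) is an address for node 2
assert val(S, ('s0', ('s0', ('e',)))) is None   # s0 is undefined on node 1
assert height(('s1', ('s0', ('e',)))) == 2
```

Here every node of S is the value of some term, so S is accessible; since each node has a unique address, it is in fact free.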
A structure is a term-structure if
- its universe consists of terms; and
- whenever a term f(t_1, ..., t_r) is in the universe, so is each t_i, and the interpretation of f maps t_1, ..., t_r to f(t_1, ..., t_r).
From the definitions we have
Proposition 1
A structure is free iff it is isomorphic to a term structure.
Note that if V is functional (no relation identifiers), then for each term q we have a free term-structure S_q consisting of the subterms of q (q included). Each S_q can be represented as a dag of terms, whose terminal nodes are tokens. It will be convenient to fix a reserved token that will denote, in each structure S_q, the term q as a whole.
3 Structure-transformation programs
Programs operating on structures and transforming them are well known, for example from Gurevich’s Abstract State Machines [14, 15, 6]. We define a version of such programs, giving special attention to basic execution steps (structure revisions).
3.1 Structure revisions
We consider the following basic operations on structures, each transforming a structure S into a structure S' which, aside from the changes indicated below, is identical to S.

Function-revisions

- A function-extension is an expression specifying a function identifier f and addresses t_0, t_1, ..., t_r. The intent is that if the values of t_0, t_1, ..., t_r are all defined, but f is undefined on the values of t_1, ..., t_r, then f is extended to map them to the value of t_0. f is the eigen-function of the extension.
- A function-contraction is an expression specifying f and addresses t_1, ..., t_r. The intent is that f becomes undefined on the values of t_1, ..., t_r.


Relation-revisions

Relation revisions may be viewed as a special case of function-revisions, given the functional representation of relations described above. We mention them explicitly since they are used routinely.

- A relation-extension is an expression specifying an r-ary relation identifier R and addresses t_1, ..., t_r. The intent is that if each t_i is defined, then R is augmented with the tuple of their values (if not already there). R is the eigen-relation of the extension.
- A relation-contraction is the dual expression. The intent is that if each t_i is defined, then the tuple of their values is removed from R (if there).

Node-revisions

- A node-inception is an expression specifying a token c. The intent is that, if c is undefined, then the universe is augmented with a new node denoted by c. An inception for a compound address can be viewed as an abbreviation for an inception of a fresh token c followed by a function-extension.
- A node-deletion is an expression specifying a token c. The intent is that S' is obtained from S by removing the node denoted by c (if defined), and removing all tuples containing that node from each relation and from the graph of each function. Again, a more general form of node-deletion, for a compound address, can be implemented as the composition of a function-extension and the deletion of a fresh token c.

Deletions are needed, for example, when the desired output structure has fewer nodes than the input structure ("garbage collection").

We refer to the operations above collectively as revisions. Revisions cannot be split into smaller actions. On the other hand, a function-extension and a function-contraction can be combined into an assignment, i.e. a phrase of the form f(t_1, ..., t_r) := t_0. This can be viewed as an abbreviation, with c a fresh token, for a composition of four revisions.
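The revisions above can be sketched as operations on our dictionary encoding (all names and the encoding itself are our assumptions, not the paper's syntax).

```python
# Sketch of the kinds of revision on a flat (identifier, args) -> node map.
def extend(struct, f, args, value):
    """Function-extension: define f on args only if currently undefined."""
    if (f, args) not in struct:
        struct[(f, args)] = value

def contract(struct, f, args):
    """Function-contraction: make f undefined on args."""
    struct.pop((f, args), None)

def incept(struct, universe, c):
    """Node-inception: add a fresh node, denoted by token c."""
    if (c, ()) not in struct:
        node = max(universe, default=-1) + 1
        universe.add(node)
        struct[(c, ())] = node

def delete(struct, universe, c):
    """Node-deletion: drop the node denoted by c and every entry mentioning it."""
    node = struct.get((c, ()))
    if node is None:
        return
    universe.discard(node)
    for key in [k for k, v in struct.items() if v == node or node in k[1]]:
        del struct[key]

U, S = set(), {}
incept(S, U, 'e')
incept(S, U, 'c')
extend(S, 's0', (S[('e', ())],), S[('c', ())])   # s0(e) := c
assert ('s0', (0,)) in S and U == {0, 1}
delete(S, U, 'c')
assert ('s0', (0,)) not in S and U == {0}
```

An assignment then composes a contraction with an extension (via a fresh token holding the value), matching the abbreviation described above.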
3.2 ST programs
Our programming language ST consists of guarded iterative programs built from structure revisions. Uninterpreted programs over a vocabulary V normally refer to an expansion of V, as needed to implement algorithms and to generate output. We refer from now on to such an expansion.

A test is one of the following three types of phrases.

A convergence-expression t↓, where t is an address. This is intended to state that the address t is defined for the current values of the function identifiers. Thus the negation of t↓ states that t is undefined in the current structure.

An equation t = t', where t and t' are addresses. This is intended to state that both addresses are defined and evaluate to the same node.

A relational-expression R(t_1, ..., t_r), where R is an r-ary relation identifier and each t_i is an address. By the convention above, this may be construed as the equation stating that R's support, applied to t_1, ..., t_r, equals the value of t_1.


A guard is a boolean combination of tests.
Given a vocabulary V, the programs of ST are generated inductively as follows (we omit the reference to V when unneeded).

- A structure-revision is a program.
- If P and Q are programs then so is their composition P; Q.
- If G is a guard and P, Q are programs, then the branch if G then P else Q and the iteration do G P are programs.
3.3 Program semantics
Given a vocabulary V, a configuration (cfg) is a structure. Given a V-structure S and a vocabulary W extending V, we write S_W for the expansion of S to W with all identifiers in W outside V interpreted as empty (everywhere-undefined functions and empty relations). For a program P over W we define the binary yield relation between configurations by recurrence on P. For a structure-revision the definition follows the intended semantics described informally above. The cases for composition, branching, and iteration are straightforward, as usual.
Let F be a partial mapping from a class of structures to a class of structures. A program P computes F if for every S in F's domain, P transforms some expansion of S into F(S).
The vocabulary of the output structure need not be related to the input vocabulary.¹

¹Of course, if the domain of F is a proper class (in the sense of Gödel-Bernays set theory), then the mapping defined by P is a proper class.
We shall focus mostly on programs as transducers. Note that all structure revisions refer only to accessible structure nodes. It follows that non-accessible nodes play no role in the computational behavior of ST programs. We shall therefore focus from now on accessible structures only.
3.4 Examples

Concatenation by splicing. The following program computes concatenation over {0,1}*. It takes as input a pair of string structures, with a separate copy of the string vocabulary (a nil token and two successor pointers) for each of the two arguments. The output is the concatenated string, over the first copy of the vocabulary.

Concatenation by copying. The previous program uses no inception, as it splices the second argument over the first. The following program copies the second argument over the first, thereby enabling a repeated and modular use of concatenation, as in the multiplication example below.

String multiplication is the function that for inputs u and n returns the result of concatenating n copies of u. This is computed by the following program, which takes as input a pair of structures, one for the string and one for the number, over disjoint vocabularies, with the output over a third vocabulary.
3.5 Computability
Since guarded iterative programs are well known to be sound and complete for Turing computability, the issue of interest here is articulating Turing computability in the ST setting. Consider a Turing transducer M over an I/O alphabet A, with full alphabet B extending A, set of states Q, start state, print state, and transition function d. The input is taken to be the string structure described earlier.
Define the program vocabulary to have, as tokens, a cursor identifier and each state in Q; and, as pointers, successor and predecessor identifiers and each symbol in B. Thus the program vocabulary is broader than the input vocabulary, both in representing M's machinery and in auxiliary components. The intent is that a configuration of M (tape contents with cursored state) be represented by a structure in which the tape cells form a chain, each cell is marked by the pointer for the symbol it holds, the cursor token denotes the scanned cell, and the token for the current state is defined. All remaining tokens are undefined.
The program simulating implements the following phases:

Convert the input structure into the structure for the initial configuration, initializing the cursor to the initial input element, and the predecessor pointer to be the destructor function for the input string.

Main loop: configurations are revised as called for by d. The predecessor pointer is used to implement backward cursor movements. The loop's guard is the token for the "print" state being undefined.

Convert the final configuration into the output.
4 STV: programs with variants
4.1 Loop variants
A variant is a finite set V of function and relation identifiers of positive arity, to which we refer as V's components.
Given a vocabulary, the programs of STV are generated inductively as follows, in tandem with the notion of a variant being terminating in an STV-program. Again, we omit the reference to the vocabulary when it is clear or irrelevant.

A structure-revision is a program. A variant is terminating in any revision except a function- or relation-extension whose eigen-function or eigen-relation is a component of the variant.

If P and Q are STV-programs, then so is the composition P; Q; a variant terminating in both P and Q is terminating in P; Q.

If G is a guard and P, Q are STV-programs, then so is the branch if G then P else Q; a variant terminating in both branches is terminating in it.

If G is a guard and P is an STV-program in which the variants V and W are terminating, then the loop do G, V P is an STV-program, in which W is terminating.
We write STV for the programming language consisting of STV-programs over a given vocabulary, omitting the vocabulary when there is no loss of clarity.
4.2 Semantics of STV-programs
The semantics of STV-programs is defined as for programs of ST, with the exception of the looping construct do. A loop do G, V P is entered if G is true in the current state, and is re-entered if G is true in the current state and the previous pass executed at least one contraction of some component of the variant V. Thus, as the loop is executed, no component of V is extended within P (by the syntactic condition that V is terminating in P), and V is contracted at least once in each iteration, save the last (by the semantic condition on loop execution).
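The do-loop semantics described above can be sketched operationally; the helper names and the set-valued variant below are our own illustrative assumptions.

```python
# Sketch of the do-loop semantics: the loop re-enters only while the guard
# holds AND the previous pass contracted some component of the variant
# (tracked here by the variant's total size).
def do_loop(guard, body, state, variant_size):
    """Run `body` while `guard` holds; stop after a pass that contracts nothing."""
    steps = 0
    while guard(state):
        before = variant_size(state)
        body(state)
        steps += 1
        if variant_size(state) >= before:   # no contraction: loop exits
            break
    return steps

# Depleting a set one element per pass terminates after |set| passes.
state = {'V': {1, 2, 3}, 'out': 0}

def body(s):
    s['V'].pop()
    s['out'] += 1

steps = do_loop(lambda s: bool(s['V']), body, state, lambda s: len(s['V']))
assert steps == 3 and state['out'] == 3
```

Since the body may never extend the variant, its initial size bounds the number of iterations, which is the source of the termination guarantee.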
4.3 String duplication
The following program duplicates a string given as a structure: the output structure has the same nodes as the input, but with the functions appearing in duplicate. The algorithm has two phases: a first loop, with a variant consisting collectively of the string's functions, creates two new copies of the string (while depleting the input functions in the process). A second loop restores one of the two copies to the original identifiers, thereby allowing the duplication to be useful within a larger program that refers to the original identifiers. Function duplication in arbitrary structures is more complicated, and will be discussed below.
The ability of STV programs to duplicate structures (for now only string structures) is at the core of their ability to implement recurrence, to be discussed below.
4.4 Further examples

Concatenation. Using string duplication, we can easily convert the concatenation examples of §3.4 to STV. The changes are similar for the splicing and the copying programs. The programs are preceded by the duplication of each of the two inputs. The copy of the first argument is then used as guard for the first loop, and is depleted by an entry in each cycle. The copy of the second argument is used as guard for the second loop, and is similarly depleted.

Multiplication. The program of §3.4 is preceded by a duplication of the string input. The outer loop has the number as a variant, which is depleted by a contraction in each cycle. The inner loop has the copy of the string as variant.

Exponentiation. A program transforming the structure for n to the structure for 2^n is obtained by combining the programs for duplication and concatenation. Using for the input vocabulary a token and a pointer, and for the output another token and pointer, the program first initializes the output to the structure for 1. The main loop has the input as guard and variant. The body triplicates its initial output, and uses one copy as variant for an inner loop that concatenates the other two copies.
5 Programs for structure expansions
In this section we describe programs that expand arbitrary (finite) structures in important ways.
5.1 Enumerators
Given a structure S we say that a pair (hd, nxt), with hd a token and nxt a pointer, is an enumerator for S if for some k ≥ 0 the sequence
hd, nxt(hd), nxt^2(hd), ..., nxt^k(hd)
consists of all accessible nodes of S, without repetitions, and nxt^{k+1}(hd) is undefined. An enumerator is monotone if the value of a term never precedes the value of its subterms. This is guaranteed if the value of a term of height h never precedes the values of terms of height less than h.
Theorem 2
For each vocabulary V there is a program that for each V-structure S as input yields an expansion of S with a monotone enumerator.
Proof. The program maintains, in addition to the identifiers in V, the following auxiliary identifiers.

- A token, intended to serve as the head of the enumerator.
- A pointer, intended to denote a (repeatedly growing) initial segment of the intended enumerator.
- A set identifier, intended to denote the set of nodes enumerated so far.
- A pointer intended to list, starting from an auxiliary token, some accessible nodes not yet listed; these are to be appended to the enumerator at the end of each loop cycle.
- A token, intended to serve as a flag indicating that the last completed cycle has added some elements to the enumerator.
A preliminary program segment sets the head of the enumerator to the node denoted by one of the tokens of V (there must be one, or else there would be no accessible nodes), and defines the pending list to contain any additional nodes denoted by tokens. (The value of the flag is immaterial; only its being defined matters.) Note that the remaining auxiliary identifiers are initially empty by default.
The main loop starts by reinitializing the pending list to empty, using the string duplication described above, resetting the flag to undefined (i.e. false), and duplicating the enumerator as needed for the following construction. Each pass then adds to the enumerator all nodes that are obtained from the nodes enumerated so far by applications of S's functions, and that are not already enumerated. That is, for each unary function identifier f, a secondary loop travels through a copy of the enumerator, using an auxiliary token; when f applied to an entry yields a value not in the set of enumerated nodes, that value is appended to both the pending list and the set. The copy of the enumerator serves as both guard and variant of that loop. For function identifiers of arity greater than 1 the process is similar, except that nested loops are required, with additional duplications of the enumerator ahead of each loop. Whenever a new node is appended, the flag token is set to be defined (say, as the current value of the head). When every non-nullary function identifier of V has been treated, the pending list is appended to the enumerator, leaving the list empty.
In §4.3 we gave a program for duplicating a string. Using an enumerator, a program based on the same method can duplicate, for the accessible nodes, each structure function. Namely, to duplicate an r-ary function denoted by f to one denoted by f', the program traverses r copies of the enumerator, and whenever f is defined on the resulting tuple of entries, the program defines f' accordingly.
Observe that an enumerator for a structure usually ceases to be one upon the execution of a structure revision; for example, a function contraction may turn an accessible node into an inaccessible one. This can be repaired by accompanying each revision with an auxiliary program tailored to it, or simply by redefining an enumerator whenever one is needed.
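The construction in the proof of Theorem 2 amounts to a breadth-first closure of the accessible nodes under the structure's functions; the following compact sketch (our encoding and names) produces the enumerator's order directly.

```python
# Sketch: computing a monotone enumeration of the accessible nodes by repeated
# closure under the structure's functions. A node is appended only once all of
# its arguments have been appended, so the value of a term never precedes the
# values of its subterms.
def monotone_enumerator(tokens, functions):
    """tokens: {name: node}; functions: {name: {arg-tuple: node}}."""
    order, seen = [], set()
    for node in tokens.values():              # height-0 nodes first
        if node not in seen:
            order.append(node)
            seen.add(node)
    added = True
    while added:                              # repeat until no new nodes appear
        added = False
        for graph in functions.values():
            for args, out in graph.items():
                if all(a in seen for a in args) and out not in seen:
                    order.append(out)
                    seen.add(out)
                    added = True
    return order                              # hd, nxt(hd), nxt^2(hd), ...

# Enumerating the string structure for 01:
order = monotone_enumerator({'e': 0}, {'s0': {(0,): 1}, 's1': {(1,): 2}})
assert order == [0, 1, 2]
```

Nodes that are never produced this way are exactly the inaccessible ones, which the enumerator correctly omits.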
5.2 Quasi-inverses
We shall need to refer below to the decomposition of inductive data, i.e. to inverses of constructors. While in general structure functions need not be injective, we can still have programs for quasi-inverses, which we define as follows. (A common equivalent definition is that g is a quasi-inverse of f iff f ∘ g ∘ f = f.)
For a relation R ⊆ A × B and b ∈ B, define R⁻(b) = {a ∈ A : a R b} (we use infix notation for binary relations). We call a partial function g a choice function for R if g(b) ∈ R⁻(b) whenever R⁻(b) is nonempty, and g(b) is defined exactly then. A partial function g is a quasi-inverse of a function f if it is a choice function for the converse of the graph of f. When f is r-ary, g can be construed as an r-tuple of functions g_1, ..., g_r. If f is injective then its unique quasi-inverse is its inverse f⁻¹.
Theorem 3
For each vocabulary V there is a program that for each V-structure S as input yields an expansion of S with quasi-inverses for each non-nullary function.
Proof. The proof of Theorem 2 can be easily modified to generate quasi-inverses for each structure function, either in tandem with the construction of an enumerator, or independently. Namely, whenever the program in the proof of Theorem 2 adds a node b to the enumerator and the set of enumerated nodes, where b is obtained as f(a_1, ..., a_r), our enhanced program defines g_i(b) = a_i (i = 1, ..., r), for g_1, ..., g_r the quasi-inverses of f.
Note that, in contrast to enumerators, quasi-inverses are easy to maintain through structure revisions. An extension of a function f can be augmented with appropriate extensions of f's quasi-inverses, and a contraction of f with appropriate contractions of those quasi-inverses.
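For a unary function the construction of a quasi-inverse reduces to choosing one preimage per value in the range; the sketch below (our encoding) makes the choice by first occurrence.

```python
# Sketch: a quasi-inverse of a (possibly non-injective) unary function picks,
# for each value in the range, one preimage; for injective functions this is
# exactly the inverse.
def quasi_inverse(graph):
    g = {}
    for a, b in graph.items():
        g.setdefault(b, a)        # keep the first preimage chosen for b
    return g

f = {0: 5, 1: 5, 2: 7}            # non-injective: 5 has two preimages
g = quasi_inverse(f)
assert all(f[g[b]] == b for b in g)        # g chooses genuine preimages
assert all(f[g[f[a]]] == f[a] for a in f)  # hence f(g(f(a))) = f(a) throughout
```

The last assertion is the f ∘ g ∘ f = f characterization mentioned above.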
6 A generic delineation of primitive recursion
6.1 Recurrence over inductive data
Recall that the schema of recurrence over N consists of the two equations (with y a tuple of parameters)

f(0, y) = g(y)
f(s(x), y) = h(x, f(x, y), y)    (1)

More generally, given a free algebra A generated from a finite set of constructors, recurrence over A has one equation per constructor c:

f(c(x_1, ..., x_r), y) = g_c(x_1, ..., x_r, f(x_1, y), ..., f(x_r, y), y)    (2)
The set PR(A) of primitive recursive functions over a free algebra A is generated from the constructors of A (for example zero and successor for N), by recurrence over A and explicit definitions. (The phrase "primitive recursive" was coined by Rózsa Péter [21], triggered by the discoveries by Ackermann and Sudan of computable ("recursive") functions that are not primitive recursive. Given the present-day use of "recursion" for recursive procedures, "recurrence" seems all the more appropriate.) Using standard codings, it is easy to see that any nontrivial (i.e. infinite) free algebra can be embedded in any other. Consequently, the classes PR(A) are essentially the same for all nontrivial A, and we refer to them jointly as PR. (Note that we are not dealing in generalizations of recurrence to well-orderings ("Noetherian induction").) A natural question is whether there is a generic approach, unrelated to free algebras, that delineates the class PR.
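The one-equation-per-constructor shape of schema (2) can be illustrated concretely; the sketch below uses binary trees as the free algebra, with our own names (`leaf`, `node`, `recurse`) as illustrative assumptions.

```python
# Sketch: the recurrence schema for the free algebra of binary trees, with
# constructors leaf (nullary) and node (binary): one defining clause per
# constructor, with recursive calls only on immediate constituents.
def leaf():
    return ('leaf',)

def node(l, r):
    return ('node', l, r)

def recurse(t, g_leaf, g_node):
    """f(leaf) = g_leaf; f(node(l, r)) = g_node(l, r, f(l), f(r))."""
    if t[0] == 'leaf':
        return g_leaf
    _, l, r = t
    return g_node(l, r, recurse(l, g_leaf, g_node), recurse(r, g_leaf, g_node))

# Tree size as an instance of the schema:
size = lambda t: recurse(t, 1, lambda l, r, fl, fr: 1 + fl + fr)
assert size(node(leaf(), node(leaf(), leaf()))) == 5
```

Termination is automatic because the recursion consumes its argument, constituent by constituent; this consumption is precisely what loop variants generalize.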
The recurrence schema (for N) was seemingly initiated by the interest of Dedekind in formalizing arithmetic, and articulated by Skolem [24]. It was studied extensively (e.g. [21]), and generalized to all admissible structures [3]. Our aim here is to characterize the underlying notion of primitive recursion generically, via uninterpreted programs. We delineate a natural variant of ST, namely STV, which is sound and complete for PR. That is, on the one hand every STV program terminates in time primitive-recursive in the size of the input structure. On the other hand, STV captures PR in two ways: any instance of recurrence over a free algebra can be implemented directly by an STV program; and every ST program that runs in PR resources in the size of the input structure can be transformed into an extensionally equivalent STV program.
Recurrence is guaranteed to terminate because it consumes its recurrence argument. The very same consumption phenomenon is used, in a broad and generic sense, in the DijkstraHoare program verification style, in the notion of a variant [12, 8, 26]. Our core idea is to use a generic notion of program variants in lieu of recurrence arguments taken from free algebras.
6.2 Resource measures
We first identify appropriate notions of size measures for structures. We focus on accessible structures, since non-accessible nodes remain non-accessible through revisions and are inert through the execution of any program. Consequently they do not affect the time or space consumption of computations.
We take the size of an accessible structure to be the count of tuples of nodes that occur in the structure’s relations and (graphs of) functions. Note that this is in tune with our use of variants, which are consumed not by the elimination of nodes, but by the contraction of functions and relations. Moreover, we believe that the size of functions and relations is an appropriate measure in general, since they convey more accurately than the number of nodes the information contents of a structure.
Note that for word structures, i.e. structures for words w over an alphabet, the total size of the structure's functions is precisely the length of w plus one (for the nil token), so in this important case our measure is identical to the count of nodes.
Suppose V is a vocabulary with all identifiers of arity at most r. If S is an accessible structure of size s, then the number of accessible nodes is at most linear in s. Conversely, if the number of accessible nodes is n, then the size is at most polynomial in n, of degree r. It follows that the distinction between our measure and node count does not matter for super-polynomial complexity.
We say that a program P runs within time t: N → N if for all structures S, the number of configurations in a complete trace of P on input S is at most t(|S|), where |S| is the size of S; it runs within space s: N → N if for all S, all configurations in an execution trace of P on input S are of size at most s(|S|).
We say that P runs in PR if it runs within time t, for some PR function t, or, equivalently, within space s, for some PR function s.
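The size measure itself is a one-liner over our dictionary encoding; whether the token's single entry is counted shifts the measure by one and is immaterial.

```python
# Sketch: the size of a structure as the number of tuples in the graphs of its
# functions (relations, via their supports, would be counted the same way).
def size(functions):
    return sum(len(graph) for graph in functions.values())

# For a word structure this matches the node count: the string 011 has one
# s0-entry, two s1-entries, and the nil token's entry.
w011 = {'e': {(): 0}, 's0': {(0,): 1}, 's1': {(1,): 2, (2,): 3}}
assert size(w011) == 4                                          # 4 nodes
assert size({k: v for k, v in w011.items() if k != 'e'}) == 3   # = |011|
```

This is the quantity that loop variants deplete: contractions shrink it, and only extensions and inceptions can grow it.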
6.3 PR-soundness of STV-programs
We assign to each STV-program P a primitive-recursive function mu_P as follows. The aim is to satisfy Theorem 4 below.

- If P is an extension or an inception revision, then mu_P(n) = n + 1; if P is any other revision then mu_P(n) = n.
- If P is a composition Q; R, then mu_P = mu_R ∘ mu_Q.
- If P is a branch over programs Q and R, then mu_P = max(mu_Q, mu_R).
- If P is a loop do G, V Q, then mu_P(n) = mu_Q^n(n), the n-th iterate of mu_Q applied to n.
Theorem 4
If P is an STV-program computing a mapping between structures, and S is a structure, then every configuration in a complete trace of P on input S is of size at most mu_P(|S|).
Proof. Structural induction on P.

- If P is a revision, then the claim is immediate by the definition of mu_P.
- If P is a composition Q; R, then the claim follows from the induction hypotheses for Q and R.
- The case of a branch is immediate.
- If P is a loop do G, V Q, then a complete trace of P is a concatenation of complete traces of Q, whose number is bounded by the size of the variant V, which is bounded by the size of S. So the claim follows from the induction hypothesis for Q.
From Theorem 4 we obtain the soundness of STV-programs for PR:
Theorem 5
Every STV-program runs in PR space, and therefore in PR time.
6.4 Completeness of STV-programs for PR
We finally turn to the completeness of STV for PR. The easiest approach would be to prove that STV is complete for PR(N), and then invoke the coding of primitive recurrence over any free algebra in N. This, however, would fail to establish a direct representation of generic recurrence by STV-programs, which is one of the raisons d'être of STV. We therefore follow a more general approach.
Lemma 6
For each free algebra A and each instance of recurrence over A, as in (2) above, the following holds. Given STV-programs computing the functions g_c (one per constructor c), there is an STV-program that maps the structure for the recurrence argument (joined with the parameters) to the structure for the corresponding value of f.
Proof. The program gradually constructs a pointer that maps each node a of the input to the root of the structure for f applied to the subterm denoting a (uniquely defined since A is a free algebra).
The program starts by constructing a monotone enumerator for the input structure, as well as inverses for all constructors, by Theorems 2 and 3. (Since the input is a term of a free algebra, a quasi-inverse of a constructor is an inverse.) The main loop then scans that enumerator, using a token; reaching the end of the enumerator is the guard, and the enumerator itself is the variant.
For each node a encountered on the enumerator, the program first identifies the constructor defining a, which is unique since the algebra is free. This identification is possible by testing for equality with the tokens, and, that failing, by testing, for each non-nullary constructor c, the definability of the first inverse of c at a. Since the enumerator is monotone, the pointer under construction is already defined for the constituents of a. The program can thus invoke the program for the function g_c, adapted to the disjoint union of

- the structures for the parameter arguments;
- the structures spanned by the constituents of a, i.e. for each constituent the substructure of the input consisting of its subterms;
- the structures already obtained for the constituents.
The pointer at a is then set to be the root of the result. The program's final output is the structure yielded for the program's given recurrence argument as a whole.
Theorem 7
For each free algebra A, the collection of STV-programs is complete for PR(A).
Proof. The proof proceeds by induction on the PR definition of the given function. The cases of constructors are trivial. For explicit definitions, and more particularly composition, we need to address the duplication of substructures, for which we have programs, as explained in §5.1.
Finally, the case of recurrence is treated in Lemma 6.
Theorem 7 establishes a simple and direct mapping of PR function definitions, over any free algebra, to STV programs. Another angle on the completeness of STV for PR refers directly to ST-programs (i.e. to programs without variants):
Corollary 8
For every ST-program P running in PR resources and defining a structure transformation F, there is an STV-program that computes F.
Proof. Recall from §6.2 that the size of a structure, measured in the size of its functions and relations, is polynomial in the number of nodes. It follows that P runs in time PR in the input's number of nodes.
Suppose now that P's input is a structure with n nodes, and that P operates within time t(n), where t is a PR function over N.
Let be the composition of the following STVprograms:

A program that expands each input structure S with an enumerator, as in Theorem 2. The constructed enumerator is a list, without repetition, of the nodes of S; i.e., it is essentially the structure for the number n, where n is the number of nodes in S.

A program that takes as input the structure constructed in (1), and outputs the structure for t(n) with, say, a designated pointer as the output's successor function. Such a program exists by Theorem 7 applied to the free algebra N.

The given ST-program P, with each loop assigned as variant a copy of the structure for t(n), and each loop body preceded by a function contraction of that copy.
Then the resulting composition computes the same structure transformation as P.
References
 [1] Philippe Andary, Bruno Patrou, and Pierre Valarcher. About implementation of primitive recursive algorithms. In Beauquier et al. [4], pages 77–90.
 [2] Philippe Andary, Bruno Patrou, and Pierre Valarcher. A representation theorem for primitive recursive algorithms. Fundam. Inform., 107(4):313–330, 2011.
 [3] Jon Barwise. Admissible Sets and Structures, volume 7 of Perspectives in Mathematical Logic. SpringerVerlag, Berlin, 1975.
 [4] Danièle Beauquier, Egon Börger, and Anatol Slissenko, editors. Proceedings of the 12th International Workshop on Abstract State Machines, 2005.
 [5] Corrado Böhm and Alessandro Berarducci. Automatic synthesis of typed lambdaprograms on term algebras. Theor. Comput. Sci., 39:135–154, 1985.
 [6] Egon Börger. The origins and the development of the ASM method for high level system design and analysis. J. UCS, 8(1):2–74, 2002.
 [7] Alonzo Church. The Calculi of LambdaConversion. Annals of Mathematics Studies. Princeton University Press, 1941.
 [8] Edsger W. Dijkstra. A Discipline of Programming. PrenticeHall, 1976.
 [9] H.D. Ebbinghaus and J. Flum. Finite Model Theory. SpringerVerlag, Berlin, 1995.
 [10] Ronald Fagin. Generalized first order spectra and polynomial time recognizable sets. In Karp [19], pages 43–73, 1974.
 [11] Erich Grädel and Yuri Gurevich. Metafinite model theory. In Leivant [20], pages 313–366.
 [12] David Gries. The Science of Programming. Texts and Monographs in Computer Science. Springer, 1981.
 [13] Yuri Gurevich. Logic in computer science column. Bulletin of the EATCS, 35:71–81, 1988.
 [14] Yuri Gurevich. Evolving algebras: an attempt to discover semantics. In Rozenberg and Salomaa [22], pages 266–292.
 [15] Yuri Gurevich. The sequential ASM thesis. In Current Trends in Theoretical Computer Science, pages 363–392. World Scientific, 2001.
 [16] Juris Hartmanis. On nondeterminancy in simple computing devices. Acta Inf., 1:336–344, 1972.
 [17] Jean van Heijenoort. From Frege to Gödel, A Source Book in Mathematical Logic, 1879–1931. Harvard University Press, Cambridge, MA, 1967.
 [18] Neil Immerman. Descriptive complexity. Graduate texts in computer science. Springer, 1999.
 [19] Richard Karp, editor. Complexity of Computation. AMS, Providence, R.I, 1974.
 [20] Daniel Leivant, editor. Logic and Computational Complexity, volume 960 of Lecture Notes in Computer Science. Springer, 1995.
 [21] Rózsa Péter. Rekursive Funktionen. Akadémiai Kiadó, Budapest, 1951.
 [22] Grzegorz Rozenberg and Arto Salomaa, editors. Current Trends in Theoretical Computer Science, volume 40. World Scientific, 1993.
 [23] Vladimir Yu. Sazonov. Polynomial computability and recursivity in finite domains. Elektronische Informationsverarbeitung und Kybernetik, 16(7):319–323, 1980.
 [24] Thoralf Skolem. Einige Bemerkungen zur axiomatischen Begründung der Mengenlehre. In Den femte skandinaviske matematikerkongressen, Helsingfors, 1922, pages 217–232. English translation in [17].
 [25] Thomas Strahm and Jeffery I. Zucker. Primitive recursive selection functions for existential assertions over abstract algebras. J. Log. Algebr. Program., 76(2):175–197, 2008.
 [26] Glynn Winskel. The Formal Semantics of Programming Languages: An Introduction. MIT Press, Cambridge, MA, USA, 1993.