1 Introduction
The analysis and certification of resource requirements of computer programs is of obvious practical as well as foundational importance. Of particular interest is the certification of feasibility, commonly identified with polynomial time (PTime) execution, i.e. algorithms that terminate in a number of steps polynomial in the size of the input. Unfortunately, no certification method can recognize all PTime algorithms: (Footnote 1: The undecidability of PTime seems to be folklore, but non-semidecidability seems to be new, even though semidecidability is a natural property here.)
Theorem 1
Let
be a Turing-complete programming language, whose programs simulate Turing machines within PTime overhead. (Footnote 2: As do all programming languages in use.) Let consist of the PTime programs. Then is not semidecidable.
Proof. The decision problem that asks whether a Turing acceptor fails to accept the empty string is well-known to be non-semidecidable.
We reduce it to , thereby showing that is not semidecidable either. Fix a Turing acceptor running in time ,
and a Turing acceptor running in time .
Our reduction maps a given Turing machine to the machine that
on input simulates the computation of on input and, in lockstep,
the computation of on input .
If the former terminates first,
accepts if accepts .
If the latter computation terminates first, then
switches to simulating on input . Thus, if fails to
accept then runs in time , and is thus in ;
whereas if accepts , say in steps,
then, since runs in time , also runs in time
, with the possible exception of a finite number of inputs
(accepted by within fewer than steps). Thus
runs in time , and is thus not in , completing
the reduction.
The challenge is thus to design programming languages that accommodate PTime algorithmic methods as comprehensively and flexibly as possible. Given that PTime is often related to microcode and memory management, a PTime certification framework that applies to imperative programming, and encompasses both inductive types and micro-level data, should be particularly desirable. We propose here just such a framework.
Two leading approaches to resource certification have been Static Analysis (SA) and Implicit Computational Complexity (ICC). SA is algorithmic in nature: it focuses on a broad programming language of choice, and seeks to determine by syntactic means whether given programs in that language are feasible. In contrast, ICC attempts to create from the outset specialized programming languages or methods that delineate a complexity class. Thus, SA’s focus is on compile time, making no demand on the programmer; whereas ICC is a language-design discipline that seeks to confine programming to a safe regime. The distinction between SA and ICC is not clear cut, however: the syntactic restrictions embedded in a programming language designed by ICC might be derived by a smart compiler; conversely, program properties sought by an SA algorithm might be broad enough to be rephrased as delineating a programming language. An example of the SA approach is the line of research that refers to the Meyer-Ritchie characterization of primitive recursion by imperative “loop” programs over [34], seeking algorithms for ascertaining the PTime termination of such programs [20, 19, 6, 7, 5].
The main ICC approach to PTime originates with Cobham’s characterization of PTime in terms of bounded recurrence [12]. Advances in this area since the 1990’s have focused on mechanisms that limit data-duplication (linearity), data-growth (non-size-increase), and nesting of iteration (predicativity) (see [42] for a survey). In its basic form, predicative recurrence, also known as ramified recurrence, refers to computational ranks, and requires that every iteration be paced by data-objects of higher rank than the output produced. This prevents the use of nontrivial computed functions as the step-functions of recurrence. Caseiro observed [10] that important algorithms, notably for sorting, do use recursively defined step functions, and yet are in PTime, because those step functions do not increase the size of their principal argument. Hofmann built on that observation [17, 1], and developed a type system for non-size-increasing PTime functions, based on linearity and an explicit account of information units. Unfortunately, the functions obtained are all non-size-increasing, leaving open the meshing of these results with full PTime.
Another line of ICC research has considered imperative, rather than declarative, programming. Stacks are taken as basic data in [19, 20], words in [32, 31], and finite graphs in [24].
One novelty of our approach is the use of finite partial-functions as fundamental data-objects. That choice leads to using data-consumption as a generic form of recurrence, capturing primitive recursive complexity [25]. However, a simple-minded ramification of data-depletion is fruitless, because it blocks all forms of duplication, resulting in linear-time programs. (Footnote 3: This issue does not come up with traditional ramified recurrence, due to the free repetition of variables in function composition.) We resolve this snag by ramifying all data, and trading off size-reduction of depleted data with size-increase of non-depleted data within the same rank.
The rest of the paper is organized as follows. Section 2 introduces the use of finite partial (fp) functions as basic data, and describes an imperative programming language of primitive-recursive complexity, based on fp-function depletion [30, 25]. Section 3 introduces the ramified programming language STR, shows that it is sound for PTime, and presents examples that illustrate the methods and scope of the language. Some of those examples are used in §4 to prove that STR is extensionally complete for PTime, i.e. has a program for every PTime mapping between finite partial-structures. The conclusion (§5) argues that the method is particularly amenable to serve as the synthesis of an ICC core language, whose implementation can be refined using SA methods.
2 Programs for transformation of structured data
2.1 Finite functions as data objects
Basic data objects come in two forms: structureless “points”, such as the nodes of graphs, versus elements of inductive data, such as natural numbers and strings over an alphabet. The former have no independent computational content, whereas the computational nature of the latter is conveyed by the recursive definition of the entire data type to which they belong, via the corresponding recurrence operators. This dichotomy is antithetical, however, to an ancient alternative approach that takes individual inductive data objects, such as natural numbers, to be finite structures on their own, whose computational behavior is governed by their internal makings [15, 33]. Under this approach, computing over inductive data is reduced to operating over finite structures, and functions over inductive data are construed as mappings between finite structures.
Embracing this approach yields a number of benefits. First, we obtain a common “hardware-based” platform for programming not only within finite structures, but also for the transformation of inductive data. Conjoining these two provides a common platform for microcode and high-level programming constructs. In particular, the depletion of natural numbers that drives the schema of recurrence over , is generalized here to the depletion of finite functions as loop variants.
Yet another benefit of our approach is the generalization of the ramification method of implicit computational complexity to imperative programs over finite structures. The step of ramifying recurrence over (or ) to obtain a PTime form of recurrence is reformulated here in an imperative context, which provides greater algorithmic flexibility and deals with algorithms that are difficult to express using traditional ramified recurrence.
Focusing on finite structures may seem akin to finite model theory, with finite structures taken to be particular Tarskian structures. But once we construe finite structures as data objects, we take an infinite data type, such as , to be a collection of particular finite structures, and computing over as a process of transforming those structures. For instance, a program for string reversal takes as input a string-as-finite-structure and yields another string-as-finite-structure.
2.2 Finite partial-structures
We take our basic data-objects to be finite partial-functions (fp-functions) in the following sense. We posit a denumerable set of atoms, i.e. unspecified and unstructured objects. To accommodate in due course non-denoting terms we extend to a set , where is a fresh object intended to denote “undefined.” The elements of are the standard elements of . A -ary fp-function is a function which satisfies:

for all except for a in a finite set , dubbed here the domain of .

is strict: whenever is one of the arguments .
An entry of is a tuple where . The image of is the set .
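As a concrete illustration, here is a minimal sketch of fp-functions in Python (the representation and all names are ours, not the paper's formalism): an fp-function of arity n is a finite dict from n-tuples of atoms to atoms, with `None` standing in for the undefined element, so strictness and finiteness of the domain are automatic.

```python
class FPFunction:
    """A finite partial-function: a finite map from argument tuples to atoms."""

    def __init__(self, arity, entries=None):
        self.arity = arity
        self.entries = dict(entries or {})  # finite domain: the stored entries

    def __call__(self, *args):
        if len(args) != self.arity:
            raise ValueError("arity mismatch")
        if None in args:                    # strict: undefined argument, undefined result
            return None
        return self.entries.get(tuple(args))  # None encodes "undefined"

    def domain(self):
        return set(self.entries)

    def image(self):
        return set(self.entries.values())

    def size(self):                         # number of entries, as in §2.2
        return len(self.entries)
```

For example, a fragment of a successor function over atoms 0, 1, 2 is `FPFunction(1, {(0,): 1, (1,): 2})`; it is undefined everywhere else.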
Function partiality provides a natural representation of finite relations over by partial functions, avoiding ad hoc constants. Namely, a finite -ary relation over () is represented by the fp-functions
Conversely, any partial -ary function over determines the -ary relation
A vocabulary is a finite set of function-identifiers, referred to as ids, where each is assigned an arity . We optionally right-superscript an identifier by its arity, when convenient. We refer to nullary ids as tokens and to identifiers of positive arities as pointers. Our default is to use typewriter symbols for identifiers: for tokens and for pointers. The distinction between tokens and pointers is computationally significant, because (non-nullary) functions can serve as unbounded memory, whereas atoms cannot. For a vocabulary , we write for the set of tokens, and for the set of pointers.
An fp-structure over , or briefly a structure, is a mapping that to each assigns a -ary fp-function , said to be a component of . The intention is to identify isomorphic fp-structures, but that intention may be left implicit without complicating matters with perpetual references to equivalence classes. Note that a tuple of fp-structures is representable as a single structure, defined as the union over the disjoint union of the vocabularies .
The domain (respectively, range) of an fp-structure is the union of the domains (ranges) of its components, and its scope is the union of its domain and its range. If is a structure, and a structure, , then is an expansion of (to ), and a reduct of (to ), if the two structures have identical interpretations for each identifier in . For structures and
Given and a structure , the size of f in , denoted , is the number of entries in . For the size of in , denoted , is . We refer to as the size of .
2.3 Terms
Given a vocabulary , the set of terms is generated inductively, as usual: ; and if (), and then . A term t is standard if does not occur in it. Note that tokens assume here the traditional role of program variables. In other words, we do not distinguish between an underlying structure and a store.
We write function application in formal terms without parentheses and commas, as in or . Also, we implicitly posit that the arity of a function matches the number of arguments displayed; thus writing assumes that is a vector of length , and (with no superscript) that the vector is as long as ’s arity.
Given a structure , the value of a term t in , denoted , is obtained by recurrence on t: ; and for , . An atom is accessible in if it is the value in of some term. A structure is accessible if every atom in the domain of is accessible (and therefore every atom in the range of is also accessible).
If every atom in the range of an accessible structure is the value of a unique term then is free. It is not hard to see that an accessible structure is free iff there is a finite set of terms, closed under taking subterms, such that the valuation function is injective.
If q is a standard term, and consists of the subterms of q, then we write for the resulting free fp-structure, with a token designating the term as a whole. Here are the free structures for the natural number 3 (i.e. the term ), the binary string 110 (the term ), and the binary trees for the terms and . They use 4, 4, 3 and 3 atoms, respectively. (The vocabulary identifiers are in green, the atoms are indicated by bullets, and the formal terms they represent are in smaller font.)
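The first of these free structures, the numeral 3, can be sketched concretely as follows (the dict-of-dicts representation and the atom names `a0`..`a3` are ours): a structure maps each identifier to a finite map from argument tuples to atoms, and term evaluation is the recurrence just described.

```python
# Free structure for the unary numeral s(s(s(z))), i.e. the number 3,
# over a vocabulary with one token "z" and one unary pointer "s".
# It uses 4 atoms, one per subterm z, sz, ssz, sssz.
numeral3 = {
    "z": {(): "a0"},                                     # token: denotes the term z
    "s": {("a0",): "a1", ("a1",): "a2", ("a2",): "a3"},  # successor pointer
}

def value(structure, term):
    """Evaluate a term, given as a nested tuple such as ('s', ('z',))."""
    f, *args = term
    vals = tuple(value(structure, t) for t in args)
    if None in vals:                 # strictness: undefined propagates
        return None
    return structure[f].get(vals)    # None means "undefined"

three = ("s", ("s", ("s", ("z",))))  # the term denoting 3
```

Every atom here is the value of exactly one term, so the structure is free in the sense above.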
2.4 Structure updates
Fix a vocabulary . We consider the following three basic operations on structures. In each case we indicate how an input structure (with ) is transformed by the operation into a structure that differs from only as indicated.

1. An extension is a phrase where and each are standard terms. The intent is that if , then . Thus, is identical to if is defined.

2. A contraction, the dual of an extension, is a phrase of the form The intent is that . Note that this removes the entry (if defined) from , but not from for other identifiers g.

3. An inception is a phrase of the form , where c is a token. A common alternative notation is . The intent is that is identical to , except that if , then is an atom not in the scope of .

In all cases we omit the reference to the vocabulary if there is no danger of confusion.
We refer to extensions and contractions as revisions, and to revisions and inceptions as updates. The identifier f in the revision templates above is the revision’s eigenid. An extension [contraction] is active (in ) if, when triggered in , it adds [respectively, removes] an entry from its eigenid.
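The three update operations can be sketched as follows, under the same dict-of-dicts reading of structures as above (all names are ours): an extension writes an entry only when it is currently undefined, a contraction deletes it, and an inception binds an undefined token to a brand-new atom outside the structure's scope.

```python
import itertools

_fresh = itertools.count()  # supply of atoms outside any structure's scope

def extend(structure, f, args, v):
    """Extension: set f(args) := v, but only if f(args) is undefined."""
    structure[f].setdefault(tuple(args), v)   # no effect if already defined

def contract(structure, f, args):
    """Contraction: remove the entry f(args), if present (for f only)."""
    structure[f].pop(tuple(args), None)

def incept(structure, c):
    """Inception: bind token c to a fresh atom, if c is undefined."""
    if () not in structure[c]:
        structure[c][()] = "atom%d" % next(_fresh)
```

An assignment, as in the first remark below, can then be simulated by a contraction followed by an extension, buffering the target atom in a fresh token first.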
Remarks.

An assignment can be programmed by composing extensions and contractions:
(1) where b is a fresh token which memorizes the atom denoted by q, in case the contraction renders it inaccessible.

Inception does not have a dual operation, since atoms can be released from a structure by repeated contractions.

A more general form of inception, with a fresh atom assigned to an arbitrary term t, may be defined by
2.5 Programs for transformation of fp-structures
Fix a vocabulary . A guard is a boolean combination of equations. A variant is a set of pointers. The imperative programming language STV consists of the programs generated inductively as follows [30, 25]. (We omit the reference to if there is no danger of confusion.)

[Update] An update is a program.
If and are programs, then so are the following.

[Composition]

[Branching] ( a guard)

[Iteration] ( a guard, a variant)
The denotational semantics of the Iteration template above calls for the loop’s body to be entered initially if is true in the initial structure , and re-entered if

is true for the current structure, and

The size of the variant is reduced, that is: the execution of the latest pass through executes more active contractions than active extensions of the variant.
In particular, the loop is exited if the variant is depleted. A formal definition of this semantics in terms of configurations and execution traces is routine.
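The iteration rule just described can be sketched as a small interpreter (a sketch under our assumptions: we approximate the tally of active contractions versus active extensions by the total entry count of the variant's pointers, and represent guard and body as Python callables).

```python
def run_loop(structure, guard, body, variant):
    """STV iteration sketch: enter if the guard holds; re-enter only while
    the guard still holds and the latest pass shrank the variant; exit
    in particular once the variant is depleted."""
    size = lambda: sum(len(structure[f]) for f in variant)
    if not guard(structure):
        return
    while True:
        before = size()
        body(structure)
        # not shrunk, depleted, or guard now false: exit the loop
        if size() >= before or size() == 0 or not guard(structure):
            break
```

A body that contracts one variant entry per pass thus runs exactly as many passes as the variant has entries.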
From the vantage point of language design, termination by depletion is a common practice. However, keeping track of the balance of active extensions and active contractions requires an unbounded counter. If this, for some reason, is to be avoided, one can resort to more local forms of control. Here are two such options.

Syntactically, require that loops have no extension of in . Semantically, scale down the depletion condition of STV to just one active contraction. This reduces the unbounded counters needed in an implementation of STV to just one flag per loop. The resulting variation of STV still yields full primitive recursion [25].

Define a pod to be the composition of updates (). Programs are then generated from pods as basic building blocks.
The semantics of iteration is defined in terms of pods, as follows. Say that the execution of a pod is positive [respectively, negative] in if its updates, starting with structure , execute more active extensions than active contractions [respectively, more contractions than extensions].
The semantics of is defined to exit the loop if the latest pass has no positive execution of any pod, and has at least one negative one. This reduces the unbounded counter of STV to local counters for each pod.
The resources of a program are defined in terms of the size of an input fp-structure, i.e. the total number of entries (not of atoms). For a function we say that program is in if there is a constant such that for all fp-structures the size of structures in the execution trace of for input is . is in PSpace if it is in for some . is in if there is a such that for all fp-structures as input, terminates using an execution trace of length . is in PTime if it runs in time for some . We focus here on programs as transducers: a partial mapping from a class of structures to a class of structures is computed by a program over vocabulary if for every , for some expansion of , where is the trivial expansion of to , with every interpreted as empty (i.e. undefined for all arguments).
Theorem 2
[25] Every STV-program runs in time and space primitive-recursive in the size of the input.
3 Feasible termination
3.1 The ramification method
One main strand of implicit computational complexity (ICC) has been ramification, also known as “ranked”, “stratified”, “predicative”, and “normal/safe” programming. Ramification has been associated primarily with recurrence (primitive recursion) over free algebras, raising the hope of a practical delineation of feasible computing within the primitive recursive functions, which arguably include all functions of interest. This idea goes back to Ritchie and Cobham [40, 12], who introduced recurrence restricted explicitly by bounding conditions. Although the characterizations they obtained use one form of bounded resources (i.e. output size) to delineate another form (i.e. time/space resources), they proved useful, for example in suggesting complexity measures for higher-order functionals [13].
A more foundational approach was initiated by a proof-theoretic characterization of FPTime based on a distinction between two second-order definitions of the natural numbers [27]. This triggered (Footnote 4: Personal communication with Steve Bellantoni.) the “safe recurrence” characterization of PTime by Bellantoni and Cook [2], as well as the formally more general approach of [27].
In fact, the ramification of programs can be traced to the type theory of Fundamenta Mathematicae [44], whose simplest form is conveyed in Schütte’s ramified second-order logic [43]. The idea is to prevent impredicative set quantification. A formula implies in second-order logic (Footnote 5: We write for the set consisting of those elements for which is true under the valuation .), for any formula , even if it is more complex than . Thus, the truth of depends on the truth of , which may itself have as a subformula. While this form of circularity is generally admitted as sound, it does raise ontological and epistemological questions [18], and implies a dramatic increase in the definitional, computational, and proof-theoretic complexities of second-order logic over first-order logic. Schütte’s ramified second-order logic blocks impredicative inferences of the kind above by assigning a rank to each set definition, starting with set variables. In particular, the rank of is larger than the rank of , so cannot be instantiated to (or any formula having as a subformula).
Schütte’s ramification of sets yields a separate definition of for each rank : where is . Analogously, ramified recurrence allows the definition of a function with output of rank only if the recurrence argument is of rank .
Ramified recurrence has been used to obtain machine-independent characterizations of several major complexity classes, such as polynomial time [2, 27] and polynomial space [22, 37], as well as alternating log time [8, 23], alternating polylog time [8], NC [28, 36], logarithmic space [35], monotonic PTime [14], linear space [26, 16, 27], NP [3, 38], the polytime hierarchy [4], exponential time [11], Kalmar-elementary resources [29], and probabilistic polynomial time [21]. The method is all the more of interest given the roots of ramification in the foundations of mathematics [44, 43], thus bridging abstraction levels in set theory and type theory to computational complexity classes.
Notwithstanding its strengths, the ramification method has unfortunately failed to date to evolve into an effective practical method for static certification of computational resources. Indeed, the implicit characterizations obtained for the various complexity classes, such as PTime (i.e. FPTime), are extensional: every function computable in PTime is computable by a ramified program, but not every PTime algorithm is captured by the method. This limitation is unavoidable by Theorem 1 above. Unfortunately, among the algorithms that elude the ramification method are numerous common algorithms. The limited success of ramification so far is plainly related to deliberate and avoidable constraints. For one, confining ramification to recurrence ties it to inductive data, thereby dissociating it from finite data-structures. More generally, the focus on declarative programming complicates direct access to memory, which lies at the heart of many feasible algorithms. To overcome these limitations one needs a germane application of the ramification method to imperative programming, which is what we are proposing.
3.2 Programs for generic PTime
We define the programming language STR, which modifies STV by the ramification of variants. We depart from traditional ramification, which assigns ranks to first-order inductive data-objects, and ramify instead the loop variants, i.e. second-order objects. This is in direct agreement with the ramification of second-order logic [43] and, more broadly, of type theory [44, 39].
Ramification is a classification of the computational powers of objects that drive iteration. We use natural numbers for ranks.
A ramified vocabulary is a pair where is a vocabulary and . We refer to as the rank of f, and let . A variant (of rank ), for a ramified vocabulary , is a set .
The syntax of STR is identical to STV, except for the iteration clause, which is replaced by

[Ramified Iteration] If is a guard, a variant of rank , is a program.
The semantics of this iterative program has the loop’s body re-entered when in configuration (i.e. fp-structure) if the two conditions of STV above are supplemented by a third:

is true in .

is shrunk in the previous pass, i.e. the number of active contractions of components of in exceeds the number of active extensions.

For , does not grow; that is, the total number of active extensions of pointers of rank does not exceed the total number of active contractions.
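The three conditions above can be sketched as an interpreter, under our reading of the elided ranks (a sketch, not the paper's definition: entry counts per rank stand in for the active extension/contraction tallies, the variant has rank k, the non-growth condition is applied to ranks at or above k, and lower ranks may grow freely).

```python
def run_ramified_loop(structure, ranks, guard, body, variant, k):
    """STR ramified-iteration sketch: re-enter only while (i) the guard
    holds, (ii) the rank-k variant shrank in the latest pass, and
    (iii) no rank at or above k grew overall."""
    top = max(ranks.values())
    def rank_size(j):
        return sum(len(structure[f]) for f, r in ranks.items() if r == j)
    var_size = lambda: sum(len(structure[f]) for f in variant)
    if not guard(structure):
        return
    while True:
        v0 = var_size()
        high0 = {j: rank_size(j) for j in range(k, top + 1)}
        body(structure)
        shrunk = var_size() < v0
        no_growth = all(rank_size(j) <= high0[j] for j in range(k, top + 1))
        if var_size() == 0 or not (shrunk and no_growth and guard(structure)):
            break
```

For instance, a rank-1 variant may be consumed while rank-0 copies grow without bound, exactly as in the duplication example of §3.4.1.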
Remarks.

A loop with a variant of rank will be re-entered after decreasing, even if that decrease is offset by extensions of pointers in . Allowing such extensions is essential: programs in which loops of rank cannot extend execute in linear time, as can easily be seen by structural induction.

We caution the reader familiar with existing approaches to ramified recurrence that our ranks are properties of function-identifiers, and not of atoms, fp-functions, or terms. Moreover, no ranking of atoms or functions is inherited from the ranking of function-ids: an fp-function may be the value of distinct identifiers, possibly of different ranks. In particular, there is no rank-driven restriction on inceptions or extensions: the function-entries created have no rank; e.g. an extension may have f of rank 0 whereas q refers to arbitrarily large ranks.

The condition on non-increase of for has no parallel in ramified recurrence, but is needed for imperative programs, in which every variable may be considered an output-variable.

If unbounded counters for the size of ranks are to be avoided, they can be replaced by local counters for pods, as in §2.5. The simpler approach described there, of disallowing extensions altogether, is not available for STR, because (as observed) extensions within an iteration’s rank are essential to permit data-transfers within that rank.
3.3 PTime soundness of STR
Theorem 3
For each STR-program with loop ranks , there is a positive (Footnote 6: I.e. defined without subtraction.) polynomial such that for all structures
Moreover, for each there is a positive polynomial such that
Proof. Parts 1 and 2 are proved by a simultaneous induction on . Nontrivial case: , where (by the definition of programs) is decreasing in , and are each non-increasing in . Suppose , where
Since is decreasing in , we have . For each , is non-increasing in , so we take .
For , , we proceed by a secondary induction on . The induction base , i.e. , is already proven. For the step, we have
So it suffices to take
where stands for .
This concludes the inductive step for (2).
For the inductive step for (1), we have
So it suffices to take
where the ’s are as above.
From Theorem 3 we conclude:
Theorem 4
Every program of STR runs in time polynomial in the size of the input structure.
3.4 Examples of STR-programs
Aside from illustrating our ramification mechanism, the following examples will establish structural expansions (§§5.1–5.3, to be used in §§4.1 and 4.2), consider arithmetic operations (§5.4, used in §4.2), and code several sorting algorithms (§5.5) which are problematic under the traditional ramification regime.
3.4.1 String duplication
The following program has as input a token and unary pointers and of rank 1. The intended output consists of and the unary pointers and of rank 0. Termination is triggered solely by depletion of the variant , whence an empty guard (i.e. true). (Footnote 7: Termination by depletion is indeed frequent in imperative programming!) Note that the loop’s body executes a contraction of the variant, unless the variant is empty.
The program consumes the variant while creating two copies at a lower rank, but in fact a rank-1 copy of the variant may be constructed as well:
and similarly for . The variant still decreases with each pass, while is non-increasing. Of course, no more than a single rank-1 copy can be created, lest would increase.
The copy created must be syntactically different from the variant , but the loop above can be followed by a loop that similarly renames to . We shall refer to this sort of variant recreation as spawning. In particular, given a chain and a guard , spawning in rank 0 allows a scan of for an atom that satisfies , while consuming as a variant and recreating it at the same time.
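The duplication loop of this section can be sketched as follows (the dict representation and names are ours): each pass contracts one entry of the rank-1 variant and extends the two rank-0 copies plus the single permitted same-rank copy, so the variant strictly shrinks per pass while rank 1 as a whole does not grow (one contraction against one extension).

```python
def duplicate(S):
    """Consume the rank-1 variant S, producing two rank-0 copies S0, S1
    and one same-rank copy Sr; the loop exits by depletion of S."""
    S0, S1, Sr = {}, {}, {}
    while S:
        key, v = S.popitem()   # active contraction of the variant
        S0[key] = v            # rank-0 extension: grows freely
        S1[key] = v            # rank-0 extension: grows freely
        Sr[key] = v            # the one allowed rank-1 extension
    return S0, S1, Sr
```

A second copy at rank 1 would make the rank-1 tally grow, which the semantics forbids; that is why spawning recreates the variant at most once per rank.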
3.4.2 Enumerators
We refer to an implementation of lists that we dub a chain, consisting of a token and a unary injective pointer . The intent is to represent a list of atoms as the denotations of , where is injective. Since is an fp-structure and is injective, the chain must be finite.
A chain is an enumerator for an fpstructure if for some
is a listing (possibly with repetitions) of the accessible atoms of , and .
Let be a ramified vocabulary. We may assume that uses only ranks , since raising all ranks by 2 results in a ranking function equivalent to (i.e. yielding the same domination relation on ).
We outline an STR-program that for a structure as input yields an expansion of with an enumerator in rank 0 for .
initializes to a listing of for ’s tokens .
Let . then iterates its main cycle , which collects accessible elements that are not yet listed in into copies of a unary cache of rank 0, with, for each of arity , a block of copies of dedicated to f. Using the entire vocabulary as variant, takes each in turn, cycles through all tuples in , and for each tuple appends to all copies of if it is not already in . That cycling through takes as variant in rank 1, using spawning to preserve as needed. When this process is completed for all , concatenates (any one of the copies of) to .
The loop is exited by variantdepletion, when the cache remains empty at the end of (no new atom found).
If the input is free, then the enumerator is monotone: for each term , the enumerator lists before q.
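The enumerator construction can be sketched as follows (a sketch under our assumptions: the structure is the dict-of-dicts representation used earlier, and the cumulative-counter bookkeeping of the actual STR program is replaced by a plain "repeat until nothing new is found" loop, which mirrors its variant-depletion exit).

```python
def enumerate_accessible(structure, tokens, pointers):
    """List the accessible atoms: start from the atoms named by tokens,
    then repeatedly scan every pointer for entries whose arguments are
    already listed and append the unlisted values; stop when a full
    cycle adds nothing (the empty-cache exit of the STR program)."""
    listing = [structure[c][()] for c in tokens if () in structure[c]]
    seen = set(listing)
    changed = True
    while changed:
        changed = False
        for f in pointers:
            for args, v in structure[f].items():
                if set(args) <= seen and v not in seen:
                    listing.append(v)
                    seen.add(v)
                    changed = True
    return listing
```

On a free structure such as a numeral, entries are discovered in subterm order, which is the monotonicity property just noted.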
3.4.3 Arithmetic functions
Natural numbers are taken to be the free structures for the unary numerals , for a vocabulary with one token (the “zero”) and one unary pointer (the “successor”).
Addition can be computed in rank 0, which should not be surprising since it does not increase the (combined) size of the inputs. Splicing one input onto the other is not quite acceptable syntactically, since the two inputs are given with different successor identifiers. But the sum of natural numbers and , both of rank 0, can be computed by a loop that uses as variant, and appends the first input to the second, starting from .
Note though that the first input is depleted in the process, and that spawning (in the sense of §3.4.1) is disallowed since the first input is in rank 0. Positing that the first input is in rank 1 enables spawning, whence a reuse of the first input.
That is precisely what we need for a program for multiplication. We take both inputs to be in rank 1. The second input is a variant for an outer loop, which sets the output to , and then iterates an inner loop, driven by the first input as variant, that adds the first input to the output while spawning it as well.
It is worthwhile to observe how ramification blocks exponentiation, as predicted by Theorem 4. A simple program for exponentiation iterates the doubling operation starting with 1. We have seen that any in rank can be duplicated into any number of copies, but at most one of these can be in rank , and all others in ranks .
As for an iteration of multiplication, our program above for the product function takes two inputs in rank , yielding an output in rank , a process that can be repeated only many times for any fixed rank .
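Abstracting chains to their sizes, the addition and multiplication programs can be sketched as follows (a sketch, names ours: each unary numeral stands for its entry count; the decrements model depletion of the variant, and the reuse of `m` across inner loops models spawning at rank 1).

```python
def add(m, n):
    """Unary addition: consume the first input as variant, appending
    one successor to the output per contraction."""
    total = n
    while m > 0:          # deplete the first input
        m -= 1
        total += 1        # one extension of the output per pass
    return total

def mul(m, n):
    """Unary multiplication: the second input drives an outer loop;
    each pass runs the addition loop, re-spawning the first input."""
    out = 0
    while n > 0:          # outer variant: second input
        n -= 1
        out = add(m, out) # inner loop reuses m, via rank-1 spawning
    return out
```

Iterating doubling to get exponentiation would require re-spawning more than one same-rank copy per pass, which the ramified semantics forbids, as Theorem 4 predicts.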
3.4.4 Insertion Sort
Insertion sort is a non-size-increasing algorithm, and consequently has an unramified (i.e. single-ranked) STR-program. In general, we construe sorting algorithms as taking a chain and a partial order relation , and returning a chain listing the same atoms as without repetition, and consistent with , i.e.
Our program for Insertion Sort is:
Note that the bundle executes two extensions and two contractions, while executing just one contraction on .
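The algorithm can be sketched as follows on the list reading of chains (a sketch, names ours: the outer `while` consumes the input chain as variant, and each insertion into the output chain is the size-balanced bundle of extensions and contractions just noted).

```python
def insertion_sort(chain, leq):
    """Consume the input chain as variant; insert each removed head
    into the sorted output chain, skipping repeated atoms."""
    out = []
    while chain:                  # loop exits by depletion of the variant
        x = chain.pop(0)          # contraction of the variant
        i = 0
        while i < len(out) and leq(out[i], x):
            i += 1                # walk the output chain to the insertion point
        if x not in out:          # output lists atoms without repetition
            out.insert(i, x)
    return out
```

Since the insertion step never increases the combined size of the chains, no rank distinction is needed, which is why a single rank suffices.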
4 Completeness of STR for PTime
4.1 Closure of STR under composition
Theorem 5
If partial-mappings between fp-structures are defined by STR-programs, then so is their composition .
Proof. Given a transducer-program that uses ranks for the input vocabulary, we can modify to a program that takes inputs that are all of a rank , copies the input into ranks , and then invokes . Dually, if the outputs of use ranks , we can modify to that invokes and then copies the outputs into a rank .
Let transducer-programs of STR compute , respectively. Suppose that the outputs of are the inputs of (so that the composition is defined). As observed above, we may assume that ’s inputs have a common rank , and the outputs have a common rank (). We wish to obtain an STR transducer-program for .
If where , let be with all ranks incremented by . is trivially a correct program of STR, with input rank equal to the output rank of . So is a correct STR-program for .
Otherwise, , where . Let be with all ranks incremented by . Then is a correct STR-program for .
4.2 Extensional completeness of STR for PTime
As noted in Theorem 1, no programming language can be sound and complete for PTime algorithms. STR is, however, extensionally complete for PTime. This statement is best interpreted in relation to the programming language ST of [30]. ST is simply STV without the variants, and it is easily seen to be Turing-complete.
Theorem 6
Every ST-program running in PTime is extensionally equivalent to some STR-program ; i.e. computes the same mapping between fp-structures as .
Proof. Let be an ST-program over vocabulary , running within time . For simplicity, we’ll use as a common bound on the iteration of every loop in . is defined by recurrence on the loop-nesting depth of : is if is loop-free; is ; and is .
If is , let be the ranks in . Note that we can define a “clock” program that yields for an input structure a listing of size . Indeed, by §3.4.2 there is an STR-program that augments any structure with an enumerator . By §3.4.3 there is a program that for a listing as input outputs a listing of length . By §4.1 we can compose and to obtain our STR-program , generating a listing of length . Choose with fresh identifiers, and with dominating all ranked identifiers in .
Now define to be
Since dominates all ranked identifiers in , the operation of is the same in as in . Also, since the size of exceeds the number of iterations of in , and the variant is contracted in each pass, the operation in STR of the loop above remains the same as in ST.
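The key mechanism in this simulation, a loop guarded by a variant listing that is contracted on every pass, can be sketched as follows. The names and the representation of listings are hypothetical, not the paper's formal definitions; the point is only that a loop whose variant is a listing of length n^d can make at most n^d passes, so a polynomial iteration bound is enforced by the loop semantics rather than checked syntactically.

```python
def run_loop_with_variant(body, state, variant):
    """Minimal sketch of STR-style loop semantics (illustrative only):
    the variant is a finite listing, one entry of which is consumed on
    each pass, so the loop makes at most len(variant) passes."""
    variant = list(variant)      # the "clock": a listing of length ~ n**d
    passes = 0
    while variant:
        variant.pop()            # the variant is contracted in each pass
        state = body(state)
        passes += 1
    return state, passes

def make_clock(n, d):
    """Hypothetical stand-in for the clock program: from an input of
    size n, produce a listing of length n**d."""
    return list(range(n ** d))
```

For instance, running a body that increments a counter against the clock for n = 3 and d = 2 makes exactly 9 passes, mirroring how the size of the generated listing bounds the number of iterations of the original ST loop.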
5 Conclusion
The quest for a programming language for PTime has no final destination, because no language can be both sound and complete for PTime algorithms. Over the decades a good number of methods have been proposed that are sound and extensionally complete for PTime, i.e. complete for PTime computability. But the existence of such methods is trivial, and the methods proposed so far all miss important classes of PTime algorithms. We propose here a novel approach, which yields a natural programming language for PTime, which is generic for both inductive data and classes of finite structures, and which accommodates a substantially broader class of algorithms than previous approaches.
We built on [30, 25], where finite partial functions form the basic data, and are used as loop-variants whose depletion is an abstract form of recurrence. We consider here a ramification of data that applies simultaneously to each variant and to its entire rank. This leads to a programming language STR for PTime, which is more inclusive than previously proposed frameworks.
While the purely functional approach of ramified recurrence does not require a change in the semantics of the underlying recurrence operation, this is no longer the case for the permissive imperative programming that we consider. The semantics of loops is modified here to ensure the necessary forms of data depletion, which in the functional realm are guaranteed by the simplicity of the syntax. This trade-off seems to be necessary if we strive for more algorithmically inclusive programming languages. The static analysis method, mentioned in the Introduction, can be called upon to complement the ICC framework, to demonstrate that certain STR programs satisfy the depletion conditions under the standard semantics of looping, following the line of research of [20, 19, 6, 7, 5] for Meyer-Ritchie’s loop programs, but here with far greater generality.
References
 [1] (2004) An arithmetic for non-size-increasing polynomial-time computation. Theor. Comput. Sci. 318 (1–2), pp. 3–27. Cited by: §1.
 [2] (1992) A new recursion-theoretic characterization of the polytime functions. Computational Complexity 2, pp. 97–110. Cited by: §3.1, §3.1.
 [3] (1992) Predicative recursion and computational complexity. Ph.D. Thesis, University of Toronto. Cited by: §3.1.
 [4] (1994) Predicative recursion and the polytime hierarchy. In Feasible Mathematics II, P. Clote and J. Remmel (Eds.), Perspectives in Computer Science, pp. 15–29. Cited by: §3.1.
 [5] (2019) Tight worst-case bounds for polynomial loop programs. See Foundations of software science and computation structures (FoSSaCS) – 22nd international conference, Bojańczyk and Simpson, pp. 80–97. Cited by: §1, §5.
 [6] (2008) Polynomial or exponential? complexity inference in polynomial time. In Computability in Europe 2008: Logic and Theory of Algorithms, LNCS, Vol. 5028, pp. 67–76. Cited by: §1, §5.
 [7] (2010) On decidable growthrate properties of imperative programs. In International Workshop on Developments in Implicit Computational Complexity, P. Baillot (Ed.), EPTCS, Vol. 23, pp. 1–14. Cited by: §1, §5.
 [8] (1992) Functional characterizations of uniform log-depth and polylog-depth circuit families. In Proceedings of the Seventh Annual Structure in Complexity Theory Conference, pp. 193–206. Cited by: §3.1.
 [9] M. Bojańczyk and A. Simpson (Eds.) (2019) Foundations of software science and computation structures (FoSSaCS) – 22nd international conference. Lecture Notes in Computer Science, Vol. 11425, Springer. Cited by: 5.
 [10] (1997) Equations for defining polytime functions. Ph.D. Thesis, University of Oslo. Cited by: §1.
 [11] (1997) A safe recursion scheme for exponential time. In Logical Foundations of Computer Science 4th International Symposium, pp. 44–52. Cited by: §3.1.
 [12] (1962) The intrinsic computational difficulty of functions. In Proceedings of the International Conference on Logic, Methodology, and Philosophy of Science, Y. Bar-Hillel (Ed.), pp. 24–30. Cited by: §1, §3.1.
 [13] (1973) Type two computational complexity. In Proceedings of the Fifth ACM Symposium on Theory of Computing, pp. 108–121. Cited by: §3.1.
 [14] (2018) A recursion-theoretic characterisation of the positive polynomial-time functions. In 27th Computer Science Logic, D. Ghica and A. Jung (Eds.), Leibniz International Proceedings in Informatics, Vol. 119, Dagstuhl, Germany, pp. 18:1–18:17. Cited by: §3.1.
 [15] (1956) Elements. Dover, New York. Note: Translated to English by Thomas L. Heath Cited by: §2.1.
 [16] (1992) Bellantoni and Cook’s characterization of polynomial time functions. Department of Pure Mathematics, University of Leeds. Note: Typescript Cited by: §3.1.
 [17] (2003) Linear types and non-size-increasing polynomial time computation. Inf. Comput. 183 (1), pp. 57–85. External Links: Link Cited by: §1.
 [18] (1960) La prédicativité. Bulletin de la Société Mathématique de France 88, pp. 371–391. Cited by: §3.1.
 [19] (2004) On the computational complexity of imperative programming languages. Theor. Comput. Sci. 318 (1–2), pp. 139–161. External Links: Link Cited by: §1, §1, §5.
 [20] (2001) The implicit computational complexity of imperative programming languages. Technical report Report RS0146, BRICS. Cited by: §1, §1, §5.
 [21] (2015) A higher-order characterization of probabilistic polynomial time. Inf. Comput. 241, pp. 114–141. Cited by: §3.1.
 [22] (1995) Ramified recurrence and computational complexity II: substitution and polyspace. In Proceedings of CSL 94, L. Pacholski and J. Tiuryn (Eds.), Lecture Notes in Computer Science, Vol. 933, Berlin and New York, pp. 486–500. Cited by: §3.1.
 [23] (2000) A characterization of alternating log time by ramified recurrence. Theor. Comput. Sci. 236 (1–2), pp. 193–208. Cited by: §3.1.
 [24] (2013) Evolving graph-structures and their implicit computational complexity. In Automata, Languages, and Programming – 40th International Colloquium, pp. 349–360. Cited by: §1.
 [25] (2019) Primitive recursion in the abstract. Mathematical structures in computer science. Note: To appear. Preliminary version under the title Implicit complexity via structure transformation, in arXiv:1802.03115 Cited by: Document, §1, §1, item 1, §2.5, §5, Theorem 2.
 [26] (1993) Stratified functional programs and computational complexity. In Twentieth Annual ACM Symposium on Principles of Programming Languages, New York, pp. 325–333. Cited by: §3.1.
 [27] (1994) Predicative recurrence in finite types. In Logical Foundations of Computer Science 3rd International Symposium, pp. 227–239. Cited by: §3.1, §3.1.
 [28] (1998) A characterization of NC by tree recurrence. In Thirty Ninth Annual Symposium on Foundations of Computer Science (FOCS), Los Alamitos, CA, pp. 716–724. Cited by: §3.1.
 [29] (1998) Ramified recurrence and computational complexity III: higher type recurrence and elementary complexity. Annals of Pure and Applied Logic. Note: Special issue in honor of Rohit Parikh’s 60th Birthday; editors: M. Fitting, R. Ramanujam and K. Georgatos Cited by: §3.1.
 [30] (2019) A theory of finite structures. Logical methods in computer science. Note: To appear. Preliminary version as arXiv.org:1808.04949 Cited by: §1, §2.5, §4.2, §5.
 [31] (2014) Complexity information flow in a multithreaded imperative language. In Theory and Applications of Models of Computation – 11th Annual Conference, pp. 124–140. Cited by: §1.
 [32] (2011) A type system for complexity flow analysis. In Proceedings of the 26th Annual IEEE Symposium on Logic in Computer Science, pp. 123–132. Cited by: §1.
 [33] (2000) The foundations of mathematics in the theory of sets. Encyclopedia of Mathematics, Vol. 82, Cambridge University Press. Cited by: §2.1.
 [34] (1967) The complexity of loop programs. In Proceedings of the 1967 22nd National Conference, New York, NY, USA, pp. 465–469. Cited by: §1.
 [35] (1993) Logspace without bounds. See Current trends in theoretical computer science – essays and tutorials, Rozenberg and Salomaa, Computer Science, Vol. 40, pp. 355–362. Cited by: §3.1.
 [36] (2004) Characterizing NC with tier 0 pointers. Mathematical Logic Quarterly 50(1), pp. 9–17. Cited by: §3.1.
 [37] (2008) Characterizing PSPACE with pointers. Mathematical Logic Quarterly 54(3), pp. 323–329. Cited by: §3.1.
 [38] (2011) A recursion-theoretic approach to NP. Ann. Pure Appl. Logic 162 (8), pp. 661–666. Cited by: §3.1.
 [39] (2018) A constructive examination of a Russell-style ramified type theory. Bulletin of Symbolic Logic 24 (1), pp. 90–106. Cited by: §3.2.
 [40] (1963) Classes of predictably computable functions. Trans. A.M.S. 106, pp. 139–173. Cited by: §3.1.
 [41] G. Rozenberg and A. Salomaa (Eds.) (1993) Current trends in theoretical computer science – essays and tutorials. World Scientific Series in Computer Science, Vol. 40, World Scientific. Cited by: 35.
 [42] (2008) Polynomial time calculi. Ph.D. Thesis, Ludwig-Maximilians-Universität München. Note: Published as a monograph by Lulu.com, 2009 Cited by: §1.
 [43] (1977) Proof theory. SpringerVerlag, Berlin. Cited by: §3.1, §3.1, §3.2.
 [44] (1912) Principia mathematica. Vol. II, Cambridge University Press. Cited by: §3.1, §3.1, §3.2.