1 Introduction
Recursion on notation is a fundamental tool for syntactic characterizations of feasible computation, in particular capturing the notion of bounding the number of steps of a computation in terms of input size. However, as a constraint, it is too weak on its own to capture feasibility as characterized by polynomial-time computability. A well-known example is the following: consider the function f, mapping binary strings to binary strings, which for any input string a returns the string concatenated with itself: f(a) = aa. The function f should clearly be accepted as feasible, but recursion on notation allows the definition of a new function which, on input a of length n, returns f^n(a), which is a string of length n·2^n. To capture feasibility through a recursion scheme, further restrictions are required to prevent this kind of exponential blowup. Indeed, Cobham, in perhaps one of the earliest works mentioning polynomial-time computability, gives a characterization using a scheme of limited recursion on notation [4]. Here, the definition of a new function through recursion on functions already known to be in the class is allowed only in case the length of the resulting function may be a priori bounded by the length of a function already known to be in the class. Cobham’s approach is a canonical example of explicit bounding. It is also possible to formulate forms of limited recursion with implicit bounding and recover the same class of polynomial-time functions [14, 2].
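The blowup is easy to observe concretely. The following minimal Python sketch (ours, purely illustrative) iterates the doubling function once per digit of its input, as unrestricted recursion on notation permits:

```python
def double(a: str) -> str:
    """The doubling function f(a) = aa: clearly feasible on its own."""
    return a + a

def iterate_on_notation(f, a: str) -> str:
    """Unrestricted recursion on notation: apply f once per symbol of a."""
    result = a
    for _ in a:          # |a| steps, one per digit of the input
        result = f(result)
    return result

# Starting from a string of length n, n doublings yield length n * 2**n.
out = iterate_on_notation(double, "1011")
assert len(out) == 4 * 2**4   # 64: exponential in the input length
```

The sketch makes plain that each individual step is feasible; it is only the unrestrained accumulation over |a| steps that produces exponential output.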
It is possible to consider feasibility in the type-two setting, which allows computation with respect to an arbitrary function oracle. The original definition of type-two polynomial time was given by Mehlhorn using a straightforward generalization of Cobham’s scheme [15]. Just like the polynomial-time functions, this class of functionals allows for a number of different characterizations and is accepted as capturing feasibility at type level two appropriately: Cook and Urquhart gave a formulation of Mehlhorn’s class, and in fact generalized it to all finite types, by use of an applied typed lambda calculus with constant symbols for a collection of basic type-one polytime functions, as well as a recursor R, which captures Mehlhorn’s scheme as a type-two functional [7]
. Kapron and Cook showed that Mehlhorn’s class may be characterized in terms of oracle Turing machines (OTMs) whose runtime is bounded by a second-order polynomial [10]. Both of these characterizations have led to a multitude of applications and further characterizations. The content of this paper is inspired by a recent description of Mehlhorn’s class given by Kapron and Steinberg [11]. For this it is instructive to think of an analogue of unrestricted recursion on notation in the OTM setting. Informally, this corresponds to Cook’s notion of oracle polynomial time (OPT) [5], which bounds the running time of OTMs by a polynomial in the size of the input and the largest answer returned by any call to the oracle. Here, a higher time consumption can be justified by an increasing chain of oracle return values and in particular it is possible to recover the example above within OPT. To force feasibility, Kapron and Steinberg use restrictions of OPT based on query-size revisions. They considered two forms of revision: a length revision occurs when a query to the oracle returns an answer whose size is larger than the size of the input or of the answer to any previous query; a lookahead revision occurs when the size of a query provided to the oracle is larger than the size of any previous such query. Strong polynomial time (SPT) allows only a constant number of length revisions, while moderate polynomial time (MPT) allows only a constant number of lookahead revisions. Kapron and Steinberg prove that both SPT and MPT give proper subsets of Mehlhorn’s class even when restricted to the functionals of the type that they are meant to capture, but that Mehlhorn’s class can be recovered from each of the classes by closing under abstraction and application. It should be noted that length revisions made an earlier appearance in a somewhat different setting in work of Kawamura and Steinberg [13].
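To make the two forms of revision concrete, the following Python sketch (our illustration; the function name and trace format are assumptions, not part of the cited work) counts length and lookahead revisions in a recorded trace of oracle queries and answers:

```python
def count_revisions(queries, answers, input_size):
    """Count revisions in a trace of oracle interactions.

    A length revision: an answer longer than the input and all previous
    answers.  A lookahead revision: a query longer than all previous
    queries (here the first query counts; whether it should is a
    convention that does not affect the results discussed later)."""
    length_rev = lookahead_rev = 0
    longest_answer = input_size
    longest_query = -1
    for q, a in zip(queries, answers):
        if len(q) > longest_query:
            lookahead_rev += 1
            longest_query = len(q)
        if len(a) > longest_answer:
            length_rev += 1
            longest_answer = len(a)
    return length_rev, lookahead_rev

# Two length revisions (answers of length 3 and 6 exceed everything
# before them) and two lookahead revisions (queries of length 1 and 4):
assert count_revisions(["0", "0101", "01"], ["111", "1", "111111"], 2) == (2, 2)
```

SPT restricts the first counter to a constant, MPT the second; a computation in OPT may let either grow with the oracle's answers.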
The outline of this paper is as follows: In the first section we describe the setting. Namely, we work in a simply typed lambda calculus with constant symbols for all type-1 polynomial-time computable functions. This is identical to the setting Cook and Urquhart chose for their characterization of higher-order polynomial time through the recursor R and means that we reason about higher-order complexity modulo the availability of the full strength of a first-order bounded recursion scheme. The paper starts from the observation that the bounded recursor is meant to model Mehlhorn’s scheme, which is strictly more expressive than the first-order scheme that is already available through the constants. Clearly R adds something, as the class of functionals expressible without its presence has been classified by Seth and is considerably restricted in its access of the oracle [17]. Thus, one may ask for functionals that are less expressive and still generate the same class given the context. Section 2 weakens R in two steps: first by simplifying the way in which the bounding is done and afterwards by restricting the data that is available to the step function. This leaves us with a functional that no longer captures bounded recursion but is better understood as performing bounded iteration. Section 3 starts involving the ideas of length revisions: Inspired by the definitions of the classes SPT and MPT we change the way in which iteration is bounded. The new conditions intuitively provide more freedom than the direct bounding the iterator uses and do so in a way that is somewhat orthogonal to how Cook and Urquhart’s original recursor did more complicated bounding. We are led to consider a family of operators indexed by a parameter k, where the condition that is imposed becomes less restrictive as k grows. Over the chosen background theory, all of these operators, as well as R and the bounded iterator, are of equal expressive power. However, the parameter k is tightly connected to runtime bounds in the OTM setting, and the use of higher values should allow expressing some functionals that feature more complicated interaction with the oracle more concisely. The proof that all considered operators are equivalent additionally covers a similarly defined family of iterators based on the idea of lookahead revisions that is introduced in Section 4. The final section specifies an efficient generation scheme for the values of the new iterators.
Kapron and Steinberg define the classes SPT and MPT using the OTM framework, which is bound to a specific machine model. This paper transfers the notions of length and lookahead revisions to the machine-independent setting of iteration schemes, where the number of iterations is determined by the length of a specified input parameter (which is a string over some finite alphabet). Our proofs introduce some interesting and useful idioms for programming in this setting.
1.1 Preliminaries
Let Σ denote a finite alphabet that contains the symbols 0 and 1, and Σ* the set of finite strings over Σ. The empty string is denoted ε, and arbitrary elements of Σ* are denoted by lowercase Latin letters. We attempt to bind names of string variables to their meanings as far as possible: a is associated with initial values, b with size bounds, w with values that a recursion or iteration is carried out over, and v with the previous values in a recursion or iteration. For a ∈ Σ*, let |a| denote the length of a and a_1, …, a_{|a|} its digits, i.e., a = a_1 ⋯ a_{|a|}. We write b ⊑ a to indicate that b is an initial segment of a. We assume that we have symbols for all type-1 polytime functions, for instance:

Truncation: The 2-ary function sending a and b to a if |a| ≤ |b|, and to the initial segment a_1 ⋯ a_{|b|} otherwise. Note that the result is always an initial segment of a and its length never exceeds min{|a|, |b|}. For a numerical second argument we use the corresponding shorthand for truncation to that length.

Tupling and projection functions ⟨·, …, ·⟩ and π_i, such that tupling is monotone with respect to length in each argument. Namely, if |a_j| = |b_j| for all j apart from i, then |⟨a_1, …, a_n⟩| ≤ |⟨b_1, …, b_n⟩| if and only if |a_i| ≤ |b_i|.

Length minimum: We adopt the convention used by Cook and Urquhart, i.e., the length minimum of a and b is a if |a| < |b| and b otherwise.
We also use definition by cases extensively, relying on the fact that there is a polynomial-time conditional, and avoid overuse of abstractions via explicit function definition. Tupling functions that satisfy the demands above exist and are 1-1, but not bijective. In spite of this we still use tupling notation as a shorthand where convenient. This is all done for the sake of readability.
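Assuming the conventions just stated, truncation and the length minimum can be sketched in Python as follows (function names are ours):

```python
def trunc(a: str, b: str) -> str:
    """Truncation: a itself if |a| <= |b|, else the initial segment
    of a of length |b|; the result never exceeds min(|a|, |b|)."""
    return a if len(a) <= len(b) else a[:len(b)]

def length_min(a: str, b: str) -> str:
    """Length minimum: the left argument if it is strictly shorter,
    the right argument otherwise (the Cook-Urquhart convention)."""
    return a if len(a) < len(b) else b

assert trunc("10110", "11") == "10"     # cut down to length 2
assert trunc("1", "000") == "1"         # already short enough
assert length_min("11", "00") == "00"   # equal lengths: right argument
```

Note the tie-breaking in the length minimum: on arguments of equal length the right argument is returned, which is the detail relied on later when nested minima are analyzed.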
1.2 Definability
The treatment of the typed lambda calculus here follows that used by Cook and Urquhart for their characterization of Mehlhorn’s class [7]. The set of types is defined inductively as follows:

0 is a type

σ → τ is a type, if σ and τ are types.
The set of functionals of type τ is defined by induction on τ:


The functionals of type 0 are the natural numbers; the functionals of type σ → τ are all functions from the functionals of type σ to the functionals of type τ.
It is not hard to show that each type τ has a unique normal form τ = τ_1 → τ_2 → ⋯ → τ_n → 0, where the missing parentheses are put in with association to the right. Hence a functional of type τ is considered in a natural way as a function of n variables x_1, …, x_n, with each x_i ranging over the functionals of type τ_i, and returning a natural number value.
The level of a type is defined inductively: the level of type 0 is 0, and the level of a type written in the above normal form is 1 + the maximum of the levels of τ_1, …, τ_n. This paper is mostly concerned with functionals of type level at most two.
Let X be a class of functionals. The set of terms over X is defined as follows:

For each type τ there are infinitely many variables of type τ, and each such variable is a term of type τ.

For each functional of type τ in the given class there is a constant symbol, which is a term of type τ.

If t is a term of type τ and x is a variable of type σ, then λx.t is a term of type σ → τ (an abstraction).

If s is a term of type σ → τ and t is a term of type σ, then s t is a term of type τ (an application).
For readability, we write s t_1 ⋯ t_n for (⋯(s t_1) ⋯ t_n); we also write λx_1 ⋯ x_n. t for λx_1.(⋯(λx_n. t)⋯).
The set of free variables of a lambda term can be defined inductively; they are those variables that are not bound by a lambda abstraction. A term is called closed if it has no free variables. In a natural way each closed term of type τ represents a functional of type τ. This correspondence is demonstrated in the standard way, by showing that a mapping of variables to functionals of corresponding type can be extended to a mapping of terms to functionals of corresponding type.
An assignment is a mapping taking variables to functionals of corresponding type. Suppose ρ is an assignment and t a term. The value of t with respect to ρ is defined by induction on t as follows.
When t is a variable, its value is the functional the assignment provides for it. If t is a constant symbol for some functional, then its value is that functional.
Suppose that t has type σ → τ. When t has the form λx.s, where x is a type-σ variable and s a type-τ term, then the value of t is the functional mapping any functional a of type σ to the value of s with respect to the assignment that maps x to a but is otherwise identical to the given one. When t has the form of an application, the value of t is the value of the operator applied to the value of the argument.
It is not hard to show that if s and t are terms such that s is a β- or η-redex and t is its contractum, then s and t have the same value with respect to every assignment. A functional is represented by a term relative to an assignment if it is the value of that term with respect to the assignment.
Our goal in this paper is to prove the equivalence, with respect to representability in the presence of polytime type-1 functions, of type-2 functionals capturing different forms of recursion on notation. To this end we have the following definitions.
Definition 1.1.
Let P be the class of (type-1) polytime functions, and let F and G be functionals. We say that F is reducible to G, denoted F ≤ G, if F is representable by a term over P ∪ {G}, and that F is equivalent to G, denoted F ≡ G, if F ≤ G and G ≤ F.
We regularly use that reducibility is a transitive relation, which is easily verified. We refer to the class of functionals representable by a term over the polytime functions extended by F as the class of functionals generated by F. Clearly two functionals are equivalent if and only if they generate the same classes of functionals.
2 The Cook–Urquhart recursor and bounded iteration
Our starting point is the recursor that Cook and Urquhart use to characterize a class of higher-order polynomial-time functionals [7]. This recursor is patterned on the scheme of limited recursion on notation introduced by Cobham [4] and its second-order variant, introduced by Mehlhorn [15]. In [12] it is proved that the type-two functionals definable in the Cook–Urquhart system coincide with Mehlhorn’s class. The recursor is defined as follows:
Here, the length minimum returns its left argument if it has strictly smaller length and the right argument otherwise, as defined in the preliminaries. The schemes used by Cobham and Mehlhorn feature explicit external bounding that captures almost directly the notion of bounding by a polynomial (in the first-order setting we could easily use a scheme with explicit bounding by polynomials in the argument size, and as shown in [9] this may be extended to the second-order setting as well). In the Cook–Urquhart recursor, this limiting is realized via an additional type-1 input. Our first observation is that the limiting may instead be realized through a type-0 input.
Lemma 2.1.
The Cook–Urquhart recursor and its restriction to constant bounding functions generate the same class of functionals. More specifically, the recursor is equivalent to the functional
Proof.
From the definition it is immediate that the restricted version is reducible to the recursor. For the converse, argue that it suffices to show that the length maximization functional is reducible to the restricted version, where this is the functional that maximizes return values of a function over the initial segments of a string, i.e., it is recursively defined via the equations
Indeed, once this is proven, the claim follows from the equality
where the second argument is the maximum with an additional digit added to make sure it is always strictly bigger than any value of the function on an initial segment of the string. This equality can be proven by an induction where the crucial point in the induction step is that the outer of the nested minima always chooses its left argument as value.
To see that the length maximization functional is definable using the restricted recursor, note that the functional which returns the smallest initial segment where a given input function assumes its maximum can be defined via
where if and otherwise. Since , it follows that can be expressed and thus that . ∎
As a further simplification, it is possible to eliminate the reference to the current value of the recursion parameter at each step, that is, to replace a functional capturing a form of primitive recursion on notation with one that captures functional iteration. This is a folklore result, but to the best of our knowledge it does not appear explicitly in any previous work in this setting. The most similar characterization we are aware of is one based on typed loop programs and appeared in [6]. In the case of primitive recursion, the equivalence with iteration was first explicitly proved in [16].
For a function f, let the n-fold iteration f^n be inductively defined by f^0(a) = a and f^{n+1}(a) = f(f^n(a)). An unbounded iterator would be a functional that takes f, w and a as inputs and returns f^{|w|}(a). Recall from the introduction that there are polynomial-time computable f such that the function a ↦ f^{|a|}(a) exhibits exponential growth and is in particular not polynomial-time computable. Thus, to capture the class of feasible functionals, the considered iterator needs to be bounded. We define the bounded iterator by
That is, the bounded iterator performs |w| iterations of the input function f on starting value a, truncating the resulting value after each iteration so that it is no longer than the bound b.
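Under the naming conventions of the preliminaries (a for the initial value, b for the size bound, w for the iteration parameter), a direct operational reading of the bounded iterator might look as follows; this is our sketch, not the formal definition:

```python
def bounded_iter(f, w: str, b: str, a: str) -> str:
    """Output-bounded iteration: perform |w| applications of f to the
    starting value a, truncating after each application so that the
    intermediate values never exceed |b| in length."""
    v = a
    for _ in w:                 # one application of f per digit of w
        v = f(v)[:len(b)]       # truncate the output to at most |b| symbols
    return v

# With the doubling function, the bound b caps the exponential growth:
assert bounded_iter(lambda a: a + a, "1111", "00000", "1") == "11111"
```

The doubling function from the introduction is thus tamed: however long w is, the output length never exceeds |b|.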
Before we go on to prove the bounded iterator equivalent to the Cook–Urquhart recursor, let us briefly discuss the choices we have made in bounding. First off, it is easy to see that whether or not the starting value is truncated is irrelevant up to equivalence. Furthermore, the definition is such that the bounding is done after f is applied. Another possibility would be to consider an iterator where the bounding is done on the argument side of f, i.e., before its application. We give a short proof that the resulting iterator is equivalent.
Lemma 2.2.
Output-bounded iteration generates the same class of functionals as argument-bounded iteration. More specifically, the output-bounded iterator is equivalent to the functional
Proof.
We prove that for all n,
(*)  
(**) 
We prove (*) and (**) simultaneously by induction. The base case is clear, so suppose (*) and (**) hold for all shorter strings, and consider a nonempty string. Then
(By IH (**)) 
and
(By IH (*)) 
∎
We end the section with the proof that the bounded iterator, and its modification from the preceding lemma, generate the basic feasible functionals. That is, they generate the same class of functionals as the Cook–Urquhart recursor.
Lemma 2.3.
The Cook–Urquhart recursor and the bounded iterator are equivalent.
Proof.
The first implication, namely that the bounded iterator is reducible to the recursor, follows from the equality
that can be proven through a simple induction.
For the converse note that, by Lemma 2.1, the recursor is equivalent to its version where the bounding is done via a constant instead of a function. Thus it suffices to prove that this version is reducible to the bounded iterator. Furthermore, note how close the expanded definition of the iterator is to the definition of the restricted recursor:
The main difference is that for the recursor the step function is additionally given access to the value of the recursion parameter. We postpone the discussion of how to accommodate the additional bounding of the initial value to the end of the proof and show that the operator defined by
can be expressed by using the bounded iterator. To achieve this define
In the above, the former has the type of a functional input of the recursor and the latter has the type of a functional input for the bounded iterator for fixed remaining arguments. We claim that for any choice of arguments,
(*) 
and so, in particular
which proves the reducibility of the restricted recursor to the bounded iterator. The equality (*) can be verified by fixing the other arguments and proving the following statement by induction on n: if n does not exceed the number of iterations, then (*) holds for n. The base case of this induction follows from the properties we demanded the pairing functions to have. Next suppose that the assertion is true for n. If n + 1 still does not exceed the number of iterations, then the induction hypothesis implies that (*) holds for n. But then
where the last equality uses the properties of the tupling functions again and the fact that the relevant value is either strictly shorter than or equal to the bound.
Finally, to change the initial value, define as follows:
then
and thus we obtain the claimed reduction of the restricted recursor, and with the fact from Lemma 2.1 also the desired reducibility of the recursor to the bounded iterator. ∎
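The pairing trick in the proof above can be rendered concretely. In the following sketch (ours; Python tuples stand in for the monotone tupling functions, and we iterate over the initial segments of w), bounded recursion on notation is recovered from bounded iteration by carrying the recursion parameter along in the iterated state:

```python
def recursion_via_iteration(step, w: str, a: str, b: str) -> str:
    """Simulate bounded recursion on notation by bounded iteration:
    the iterated state pairs the current initial segment of w with the
    current (truncated) value, so the step function regains access to
    the recursion parameter."""
    def iter_step(state):
        prefix, value = state
        if len(prefix) < len(w):
            nxt = w[:len(prefix) + 1]              # extend by one digit
            return (nxt, step(nxt, value)[:len(b)])
        return state
    state = ("", a[:len(b)])
    for _ in w:                                    # |w| bounded iterations
        state = iter_step(state)
    return state[1]

def direct_recursion(step, w, a, b):
    """Reference: bounded recursion over the initial segments of w."""
    v = a[:len(b)]
    for i in range(1, len(w) + 1):
        v = step(w[:i], v)[:len(b)]
    return v

step = lambda p, v: v + p[-1]          # hypothetical step: append last digit
assert recursion_via_iteration(step, "101", "", "0" * 8) == direct_recursion(step, "101", "", "0" * 8)
```

The monotonicity demanded of the tupling functions is what makes the paired state compatible with the length bound in the formal setting; Python tuples sidestep this, which is why the sketch is only an illustration.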
3 Iteration with Constant Length Revision
Both the Cook–Urquhart recursor and the bounded iterator require an absolute bound on the size of intermediate values encountered during a recursion. Specifying such a bound a priori can be cumbersome, and this section provides an alternative way of bounding an iteration that is inspired by the classes SPT and MPT considered in earlier work [13, 11]. The elementary notion used in the definition of SPT is that of a length revision. In an OPT computation a length revision is encountered whenever the answer to an oracle query is longer than any previous response. This notion of a length revision can easily be translated to the realm of recursion schemes: in a recursive definition a length revision happens when the return value of the step function is bigger than any of the values returned earlier. In particular, define
where i is maximal such that the sequence of applications contains no more than k length revisions, that is, applications where the length of the return value exceeds the length of the initial value and of the value returned by any previous call. In particular, when k = 0, this means that no call returns a value longer than the initial value. For k ∈ ℕ, the revision iterator is the functional defined by
Superficially, this definition is similar to that of the bounded iterator from the last section. The functional iterates a function, and the iteration is bounded, just like for the bounded iterator with a fixed bounding argument. The essential difference is that k is a statically fixed parameter, i.e., we obtain a family of iterators, one for each k. Our goal is to show that for each fixed k the revision iterator is equivalent to the bounded iterator (Theorem 4.3 below).
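Operationally, the intended behaviour can be sketched as follows (our reading of the definition; the names and the exact stopping convention are assumptions):

```python
def revision_iter(f, w: str, a: str, k: int) -> str:
    """Length-revision iterator (sketch): iterate f on a for at most |w|
    steps, stopping before any step that would cause more than k length
    revisions.  A length revision occurs when a step returns a value
    strictly longer than the initial value and all earlier return values."""
    v, longest, revisions = a, len(a), 0
    for _ in w:
        nxt = f(v)
        if len(nxt) > longest:
            if revisions == k:   # the (k+1)-st revision is not permitted
                return v
            revisions += 1
            longest = len(nxt)
        v = nxt
    return v

# With k = 0 the doubling function cannot grow past its input at all;
# each extra unit of revision budget buys one further doubling:
assert revision_iter(lambda a: a + a, "11111", "1", 0) == "1"
assert revision_iter(lambda a: a + a, "11111", "1", 2) == "1111"
```

No absolute size bound appears anywhere: growth is limited purely by how often the step function is allowed to set a new length record.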
Without restrictions on k, neither of the reducibilities required to prove this equivalence is obvious. However, the claim should appear reasonable given the OTM-based characterization of the basic feasible functionals [10]. As proven in the last section, the bounded iterator is equivalent to the recursor, and thus it is enough to check that the revision iterator is a basic feasible functional, i.e., that it is computable by an OTM whose runtime is bounded by a second-order polynomial. This may be done in a straightforward way, but it is important to note that the complexity of the bounding polynomial (in terms of the depth of calls to the function input, rather than the degree) increases with k. In particular, while the revision iterator is equivalent to the bounded iterator for every k, the revision parameter provides a finer delineation of expressive power.
Without an appeal to the OTM-based characterization, showing the equivalence of the revision iterator and the bounded iterator becomes more of a challenge, although the case k = 0 is relatively straightforward:
Lemma 3.1.
For k = 0, the revision iterator is reducible to the bounded iterator.
Proof.
The main hurdle is to account for the difference in how a violation of the bound is handled: the revision iterator defaults to the previous value in the iteration, while the bounded iterator defaults to the value it is given as bound. Set
Then the revision iterator can be obtained from this definition by simply dropping the last bit. Since the definition only uses type-1 polynomial-time operations and application, the reducibility follows. ∎
Note that for unrestricted iteration it holds that f^{n+m} = f^m ∘ f^n. The following observation points out a similar additivity property for revision-bounded iteration and is the starting point for recursively constructing reductions of the revision iterators to the bounded iterator:
Lemma 3.2.
For given f, a and numbers k and l, set
then, with these definitions, it holds that
Proof.
Since the value in question always fulfills the condition in the minimization, one inequality is immediate. To prove the other, we first note that the minimization condition may be satisfied in two different ways. It may be the case that the step function gives the same return value on all of the strings in question. In this case there will be no further length revisions, and the claim follows. Thus suppose that the step function is not constant on these strings, and consider the first position where two of the iterated values differ. By definition, the values up to that position are still all equal, and the two sides can only differ if the next call to the step function triggers an additional length revision. Thus, in this case the claimed equality must hold as well. ∎
In fact, the above proof establishes the following slightly stronger statement.
Corollary 3.3.
The equality in the last Lemma may be replaced by .
This allows us to establish the following.
Lemma 3.4.
For every k, the revision iterator is reducible to the bounded iterator.
Proof.
We proceed by induction on k. The case k = 0 has been taken care of in Lemma 3.1. Suppose that the lemma holds for k; we must now define the next revision iterator using the bounded iterator. By Lemma 3.2 it is sufficient to show that there exists a function that returns the relevant stopping value and is reducible to the bounded iterator. First note that the condition in question can be checked by a function that is reducible to the bounded iterator, where this follows by the induction hypothesis. Now all that remains is to characterize the bounded quantification and search used to define the stopping value in Corollary 3.3. Define the following functionals:
We first show that the first of these functionals is reducible to the recursor and appeal to Lemma 2.3 to see that it is reducible to the bounded iterator. Define
Then the claimed equality holds. Since this definition only uses polynomial-time computable type-1 functions and application, we conclude the first reducibility. To show the second, first define
and define
Then . In particular, if ends in , then and if it ends in then . ∎
4 Iteration with Constant Lookahead Revision
Moving to lookahead revision, the definition is similar. Consider the following variant of function iteration
where i is maximal such that the sequence of applications contains no more than k lookahead revisions, that is, applications where the length of the argument exceeds that of the argument of any previous call. Note that we have not included the initial call as a lookahead revision (choosing to do so would not change any of the results below). Then, for k ∈ ℕ, the lookahead revision iterator is the functional such that
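Under the same caveats as the sketch for the length-revision iterator, the lookahead variant can be read operationally as follows; matching the remark above, the initial call is not counted:

```python
def lookahead_iter(f, w: str, a: str, k: int) -> str:
    """Lookahead-revision iterator (sketch): iterate f on a for at most
    |w| steps, stopping before any step whose argument is longer than
    all previous arguments would cause more than k lookahead revisions.
    The initial call f(a) is not counted as a revision."""
    v, longest_arg, revisions = a, len(a), 0
    for _ in w:
        if len(v) > longest_arg:
            if revisions == k:   # the (k+1)-st revision is not permitted
                return v
            revisions += 1
            longest_arg = len(v)
        v = f(v)
    return v

# Revisions are now triggered by the arguments handed to f, not by its
# return values, so the budget is consumed one step later than before:
assert lookahead_iter(lambda a: a + a, "111111", "1", 0) == "11"
assert lookahead_iter(lambda a: a + a, "111111", "1", 1) == "1111"
```

Comparing with the length-revision sketch on the doubling function, a budget of k lookahead revisions reaches exactly as far as k + 1 length revisions, which is the kind of off-by-one interplay the next lemma deals with.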
We now consider the relative power of .
Lemma 4.1.
For any , .
Proof.
We claim that . This is clear in the case that there are no more than length revisions in the evaluation of , as any lookahead revision corresponds exactly to a preceding length revision, and so . Otherwise suppose that , which means in particular that is the minimum value less than such that evaluating results in length revisions. But then is the minimum value less than such that results in length revisions. But this means . ∎
Lemma 4.2.
For any , .
Proof.
Unfortunately, the situation is a little less straightforward than we might hope, as the two iterators differ slightly in the way they do bounding: the lookahead iterator expects queries to be bounded in length by previous queries, while the bounded iterator uses an explicit bound. Define the following auxiliary function:
Claim. For all ,
To prove this claim, first note that in the iteration on the left, the first call is the longest: all subsequent calls are clearly bounded by it. So there will be no lookahead revisions in this iteration and it remains to prove equality without consideration of the lookahead bound. We use induction. In the base case,
Now assume that the claim holds for . Then
Now define as follows:
First note that this function is admissible, as definition by cases is a polytime operation. The base case of the claimed equality is immediate. Otherwise, by the claim,
so that
∎
Putting everything together, we have a characterization which is the main result of this paper.
Theorem 4.3.
For every k, the revision iterator and the lookahead revision iterator are equivalent to the bounded iterator, and hence to the Cook–Urquhart recursor.
5 More efficient approaches
The implementation of the revision iterator by the bounded iterator given in Lemma 3.4 requires considerable overhead, involving a bounded quantification and a bounded search at each step. An implementation which directly follows this definition is polytime, but needlessly complex. The following observation (which in this setting corresponds to tail-recursion elimination) will simplify things considerably. In particular, we note an alternate characterization of the revision iterator in terms of the number of length revisions still available. This leads to the following characterization.
Lemma 5.1.
For all we have
Proof.
We prove by induction on k that the claim holds for all arguments. When k = 0, iteration stops (absolutely) as soon as a length revision would occur; otherwise it proceeds to the next step. Now assume the claim holds for k. We show that it holds for k + 1, by a side induction on the iteration parameter. The base case is immediate. Assume the claim for a given parameter and consider its extension by one digit. Clearly, if no length revision occurs on the first call, then all k + 1 length revisions are still available for the remaining calls. Otherwise, only k length revisions are available for the remaining calls. ∎
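The budget-passing view used in this proof can be written as a tail-recursive sketch (ours): instead of re-examining the whole history, the remaining number of permitted length revisions is threaded through the iteration.

```python
def revision_iter_budget(f, n: int, v: str, longest: int, k: int) -> str:
    """Carry the remaining revision budget k through the iteration:
    a step that does not set a new length record leaves k unchanged,
    a record-setting step consumes one unit, and with an empty budget
    the iteration stops before the offending step."""
    if n == 0:
        return v
    nxt = f(v)
    if len(nxt) <= longest:                 # no revision: budget unchanged
        return revision_iter_budget(f, n - 1, nxt, longest, k)
    if k == 0:                              # revision not permitted: stop
        return v
    return revision_iter_budget(f, n - 1, nxt, len(nxt), k - 1)

# Same outcome as checking the full history of returned values:
assert revision_iter_budget(lambda a: a + a, 5, "1", 1, 2) == "1111"
```

All recursive calls are in tail position, so the definition unwinds into a simple loop; this is the sense in which the observation corresponds to tail-recursion elimination.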
We also note that, implicit in the proof of Lemma 3.4, is an implementation which is also efficient; in particular, if we “unwind” the induction, we are eventually left relying only on the case k = 0. As described in [11], §4.3, we can implement the resulting definition using a form of “reentrant” recursion. We may view the violation of the length-revision bound as triggering an exception, which may then be caught by an exception handler which restarts the recursion at the point after the offending oracle call has taken place.
6 Conclusions and Future Work
We have provided a new linguistic characterization of higher-order polynomial time via iteration schemes that restrict the number of times a step function, presented as an oracle, may return an answer or be presented an input whose length exceeds all previous answers (resp. queries). The characterization and the methods used to prove it lead to a number of questions and potential directions for future research.
The characterization provided in this paper could be termed intrinsic, in that no external bounding is present in the iteration schemes. The condition itself, however, appears to depend on the dynamics of a particular computation. On its face it is not a structural/syntactic restriction, as is usual in implicit computational complexity. This suggests two directions for further research. The first is to investigate the possibility of statically deriving bounds on query revision. The second is to investigate distinctions in how computational resources are bounded as suggested by this and related work, for example intrinsic versus extrinsic, dynamic versus static, and feasibly constructive versus non-feasibly constructive (an example of non-feasibly constructive bounding would be the second-order polynomials of [10]). A related observation is that iteration with bounded query revision appears to be a generalization of non-size-increasing computation [8]. This apparent connection merits further investigation.
In §5 above, we begin to explore the interplay between familiar programming techniques from the implementation of functional programming languages (e.g., tail-recursion elimination) and the efficient implementation of our iteration schemes. We have also noted that the introduction of control primitives (e.g., catch and throw) may be relevant to the characterization of complexity classes in this setting. We note that such control operators have been shown in [3] to be relevant to the general characterization of sequential higher-order computation. Here we only scratch the surface. Further investigation of these and related techniques in the context of linguistic characterizations of computational complexity could prove fruitful.
As noted at several points in our development, there are issues of finer-grained complexity that arise from our translations. This gives rise to natural questions about the efficiency, or syntactic complexity, of translations, which bear further investigation.
Finally, while we have drawn an analogy between OTMs with bounded query revision (as introduced in [11]) and certain recursion schemes, we have not investigated just how closely related they are. While the equivalences proved in [11] and in this paper imply an equivalence of all the models, a direct proof would be very interesting in furthering our understanding of polytime OTMs. It would be very rewarding if a simplified proof of the equivalence of [10] could be obtained in this setting.
References
 [2] Stephen Bellantoni & Stephen A. Cook (1992): A New RecursionTheoretic Characterization of the Polytime Functions. Computational Complexity 2, pp. 97–110, doi:10.1007/BF01201998.
 [3] R. Cartwright, P.L. Curien & M. Felleisen (1994): Fully Abstract Semantics for Observably Sequential Languages. Information and Computation 111(2), pp. 297 – 401, doi:10.1006/inco.1994.1047.
 [4] A. Cobham (1965): The intrinsic computational difficulty of functions. In Yehoshua BarHillel, editor: Logic, Methodology and Philosophy of Science: Proc. 1964 Intl. Congress (Studies in Logic and the Foundations of Mathematics), NorthHolland Publishing, pp. 24–30.
 [5] S.A. Cook (1992): Computability and complexity of higher type functions. In: Logic from computer science (Berkeley, CA, 1989), Math. Sci. Res. Inst. Publ. 21, Springer, New York, pp. 51–72, doi:10.1007/9781461228226_3.
 [6] S.A. Cook & B.M. Kapron (1990): Characterizations of the basic feasible functionals of finite type. In: Feasible mathematics (Ithaca, NY, 1989), Progr. Comput. Sci. Appl. Logic 9, Birkhäuser, pp. 71–96, doi:10.1007/9781461234661_5.
 [7] S.A. Cook & A. Urquhart (1993): Functional interpretations of feasibly constructive arithmetic. Ann. Pure Appl. Logic 63(2), pp. 103–200, doi:10.1016/01680072(93)90044E.
 [8] Martin Hofmann (2003): Linear types and nonsizeincreasing polynomial time computation. Inf. Comput. 183(1), pp. 57–85, doi:10.1016/S08905401(03)000099.
 [9] A. Ignjatovic & A. Sharma (2004): Some applications of logic to feasibility in higher types. ACM TOCL 5(2), pp. 332–350, doi:10.1145/976706.976713.
 [10] B.M. Kapron & S.A. Cook (1996): A new characterization of type-2 feasibility. SIAM J. Comput. 25(1), pp. 117–132, doi:10.1137/S0097539794263452.
 [11] B.M. Kapron & F. Steinberg (2018): Typetwo polynomialtime and restricted lookahead. In: Proceedings of the 33rd Annual ACM/IEEE Symposium on Logic in Computer Science (Oxford, UK), 2018, ACM, New York, pp. 579–598, doi:10.1145/3209108.3209124.
 [12] Bruce M. Kapron (1991): Feasible Computation in Higher Types. Technical Report 249/91, Computer Science Department, University of Toronto.
 [13] Akitoshi Kawamura & Florian Steinberg (2017): Polynomial Running Times for PolynomialTime Oracle Machines. In: 2nd International Conference on Formal Structures for Computation and Deduction, FSCD 2017, September 39, 2017, Oxford, UK, pp. 23:1–23:18, doi:10.4230/LIPIcs.FSCD.2017.23.
 [14] Daniel Leivant (1991): A Foundational Delineation of Computational Feasiblity. In: Proceedings of the Sixth Annual IEEE Symposium on Logic in Computer Science (Amsterdam, The Netherlands), 1991, IEEE Computer Society, pp. 2–11, doi:10.1109/LICS.1991.151625.
 [15] K. Mehlhorn (1976): Polynomial and abstract subrecursive classes. J. Comp. Sys. Sci. 12(2), pp. 147–178, doi:10.1016/S00220000(76)800359.
 [16] Raphael M. Robinson (1947): Primitive recursive functions. Bull. Amer. Math. Soc. 53(10), pp. 925–942, doi:10.1090/S000299041947089114.
 [17] Anil Seth (1993): Some desirable conditions for feasible functionals of type 2. In: Eighth Annual IEEE Symposium on Logic in Computer Science (Montreal, PQ, 1993), IEEE Comput. Soc. Press, Los Alamitos, CA, pp. 320–331, doi:10.1109/LICS.1993.287576.