The notion of an algorithm is fundamental for computing, so it may seem surprising that there is still no commonly accepted definition. This is different for the notion of computable function, which is captured by several equivalent formalisms such as Turing machines, random access machines, partial-recursive functions, $\lambda$-definable functions and many more. However, as there is typically a huge gap between the abstraction level of an algorithm and that of Turing machines, Gurevich concludes that the latter cannot serve as a definition for the notion of an algorithm . He formulated a new thesis based on the observation that “if an abstraction level is fixed (disregarding low-level details and a possible higher-level picture) and the states of an algorithm reflect all the relevant information, then a particular small instruction set suffices to model any algorithm, never mind how abstract, by a generalised machine very closely and faithfully”.
Still it took many years from the formulation of this new thesis to the publication of the behavioural theory of sequential algorithms in . In this seminal work—also known as the “sequential ASM thesis”—a sequential algorithm is defined by three postulates (a mathematically precise formulation of these postulates requires more care, but the rough summary here will be sufficient for now):
- Sequential Time.
A sequential algorithm proceeds in sequential time using states, initial states and transitions from states to successor states.
- Abstract State.
States are universal algebras (aka Tarski structures), i.e. functions resulting from the interpretation of a signature, i.e. a set of function symbols, over a base set.
- Bounded Exploration.
There exists a finite set of ground terms such that the difference between a state and its successor state is uniquely determined by the values of these terms in the state (this set of terms is usually called a bounded exploration witness, while the difference between a state and its successor is formally given by an update set; informally, bounded exploration states that there can be only finitely many terms whose interpretation determines how a state will be updated by the algorithm to produce the successor state).
The behavioural theory further comprises the definition of sequential Abstract State Machines (ASMs) and the proof that sequential ASMs capture sequential algorithms, i.e. they satisfy the postulates, and every sequential algorithm can be simulated step by step by a sequential ASM. It is rather straightforward to extend the theory to also cover bounded non-determinism (for this, the sequential time postulate requires a successor relation instead of a function, and ASM rules must permit the choice between finitely many rules).
It should be noted that the definition of a sequential algorithm given by Gurevich does not require a particular formalism for the specification. Sequential ASMs capture sequential algorithms, so they are a suitable candidate for specification (in particular, as pointed out in , rules in an ASM look very much like pseudo-code, so the appearance of ASM specifications is often close to the style in which algorithms are described in history and in textbooks; the difference is of course that the semantics of ASMs is precisely defined), but they are not the only possible choice. For instance, in the light of the proofs in  it is not an overly difficult exercise to show that deterministic Event-B  or B  also capture sequential algorithms.
We believe that in order to obtain a commonly acceptable definition of the notion of algorithm, this distinction between the axiomatic definition, without reference to a particular language or programming style (as by the postulates for sequential algorithms), and the capture by an abstract machine model (such as sequential ASMs, deterministic Event-B or others) is essential.
In  Moschovakis raised the question of how recursive algorithms, e.g. the well-known mergesort, are covered. He questions whether algorithms can be adequately defined by machines, and provides recursors as an alternative. While it is obvious that mergesort, like any other recursive algorithm, is not sequential and thus not covered by Gurevich’s thesis, his perception that Gurevich used sequential ASMs as a definition for the notion of algorithm is a misinterpretation.
Unfortunately, the response by Blass and Gurevich  to Moschovakis’s criticism is not convincing. Instead of admitting that an extended behavioural theory for recursive algorithms still needs to be developed, distributed ASMs with a semantics defined through partial-order runs  are claimed to be sufficient to capture recursive algorithms (the definition of recursive ASMs in  uses a special case of this translation of recursive into distributed computations). As Börger and Bolognesi point out in their contribution to the debate , a much simpler extension of ASMs suffices for the specification of recursive algorithms. Furthermore, the response blurs the subtle distinction between the language-independent axiomatic definition and the possibility to express any algorithm on an arbitrary level of abstraction by an abstract machine. This also led to Vardi’s almost cynical comment that the debate is merely about the preferred specification style (functional or imperative), which is as old as the field of programming (this debate, however, is still much younger than the use of the notion of algorithm).
While the difficult epistemological issue concerning the definition of the notion of algorithm has been convincingly addressed for sequential algorithms by Gurevich’s behavioural theory, no such theory for recursive algorithms or distributed algorithms was available at the time of the debate between Moschovakis, Blass and Gurevich, Börger and Bolognesi, and Vardi. In the meantime a behavioural theory for concurrent algorithms has been developed . It comprises a language-independent axiomatic definition of the notion of concurrent algorithm as a family of sequential algorithms indexed by agents that is subject to an additional concurrency postulate, by means of which Lamport’s sequential consistency requirement is covered and generalised . In a nutshell, the concurrency postulate requires that a successor state of the global state of the concurrent algorithm results from simultaneously applying update sets of finitely many agents that have been built on some previous (not necessarily the latest) states.
Using this theory of concurrency it is possible to reformulate the answer given by Blass and Gurevich to Moschovakis’s question: every recursive algorithm is a concurrent algorithm with partial-order runs. As concurrent ASMs capture concurrent algorithms (as shown in ), they provide a natural candidate for the specification of all concurrent algorithms, thus in particular recursive algorithms. However, the “overkill” argument will remain, as the class of concurrent algorithms is much larger than the class of recursive algorithms.
For example, take the mergesort algorithm. Every call to (a copy of) itself and every call to (a copy of) the merge algorithm could give rise to a new agent. However, these agents only interact by passing input parameters and return values, but otherwise operate on disjoint sets of locations. In addition, a calling agent always waits to receive return values, which implies that only one or (in case of parallel calls) two agents are active in any state. In contrast, in a concurrent algorithm all agents may be active, and they can interact in many different ways on shared locations as well as on different clocks. As a consequence, concurrent runs are highly non-deterministic and not linearisable, whereas a recursive algorithm is deterministic or permits at most bounded non-determinism; several simultaneous calls can always be sequentialised.
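The call structure described above can be sketched in ordinary code. The sketch below is only an illustration of the parent-child discipline, not the ASM formulation from Section 4: each recursive call corresponds to activating a fresh instance operating on its own data, and the caller resumes only after its callees have returned.

```python
# Illustrative sketch of the call discipline in mergesort: each call
# activates a fresh instance; instances interact only via parameters
# and return values, and the caller waits for its callees.
def merge(left, right):
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i]); i += 1
        else:
            result.append(right[j]); j += 1
    return result + left[i:] + right[j:]

def mergesort(xs):
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    # Two simultaneous calls: the instances operate on disjoint data and
    # could run in either order, in parallel, or even asynchronously.
    left = mergesort(xs[:mid])
    right = mergesort(xs[mid:])
    # The calling instance waits here until both callees have terminated.
    return merge(left, right)
```

Note that at most the two sibling calls (and, transitively, their own callees) are ever simultaneously active, which is exactly the contrast with general concurrent algorithms drawn above.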
This motivates the research we report in this article. Our objective is to develop a behavioural theory of recursive algorithms (as we do not consider unbounded parallelism here, it would be more accurate to speak about recursive sequential algorithms, but due to the relationship between recursive and unbounded parallel algorithms mentioned below we dispense with this subtlety). For this we propose an axiomatic definition of recursive algorithms which enriches sequential algorithms by call steps, such that the parent-child relationship between caller and callee determines well-defined shared locations representing input and return parameters. We will present and motivate our axiomatisation in Section 2.
In Section 3 we define recursive ASMs by an appropriate extension of sequential ASMs with a call rule and show that recursive algorithms are captured by them (it is possible to conduct a similar proof for Moschovakis’s recursors, showing that recursors also capture recursive algorithms). Section 4 is dedicated to an illustration of our theory by examples. We concentrate on mergesort, quicksort and the sieve of Eratosthenes, for which we present recursive ASMs. We show by the examples that the unbounded parallelism of ASMs is stronger than recursion, so that there is no need to investigate recursive parallel algorithms separately from sequential recursive algorithms.
In Section 5 we return to the claim by Blass and Gurevich—though not explicitly stated in —that recursive algorithms are linked to concurrent algorithms with partial-order runs. We first show that indeed the runs of a recursive algorithm are definable by partial-order runs, which comes as no surprise, but the striking second discovery is that a converse relation also holds: if all runs of a finitely composed concurrent algorithm, i.e. an algorithm which consists only of instances of a bounded number of sequential algorithms, are definable by partial-order runs, then the algorithm is behaviourally equivalent to a recursive algorithm. This puts the overkill argument into perspective (in fact, it shows that, roughly speaking, finitely composed concurrent algorithms with partial-order runs are indeed the recursive algorithms, and the response given in  may be seen as the result of ingenious serendipity; however, arbitrary concurrent algorithms as discussed in  are a much wider class of algorithms). As a corollary one obtains that Petri nets can be simulated by runs of a non-deterministic sequential ASM. Finally, in Section 6 we embed our work into a larger picture of related work on behavioural theories, and in Section 7 we present a brief summary and outlook on further research.
2 Axiomatisation of Recursive Algorithms
A decisive feature of a recursive algorithm is that it calls itself, or more precisely a copy (we also say an instance) of itself. If we consider mutual recursion, then this becomes slightly more general, as there is a finite family of algorithms calling (copies of) each other. Therefore, providing copies of algorithms and enabling calls will be essential for the intended definition of the notion of recursive algorithm, whereas otherwise we can rely on Gurevich’s axiomatic definition of sequential algorithms. Furthermore, there may be several simultaneous calls, which give rise to non-determinism (the presence of this non-determinism in recursive algorithms has also been observed in Moschovakis’s criticism ; e.g. mergesort calls two copies of itself, each sorting one half of the list of given elements), as these simultaneously called copies may run sequentially in one order or the other, in parallel or even asynchronously. However, there is no interaction between simultaneously called algorithms, which implies that the mentioned execution latitude already covers all choices.
2.1 Non-deterministic Sequential Algorithms
Therefore, we first recall the axiomatic definition of non-deterministic sequential algorithms, which only slightly generalises Gurevich’s definition for sequential algorithms by weakening the sequential time postulate.
Postulate 1 (Branching Time Postulate)
An nd-seq algorithm $\mathcal{A}$ comprises a set $\mathcal{S}$, elements of which are called states, a subset $\mathcal{I} \subseteq \mathcal{S}$, elements of which are called initial states, and a one-step transition relation $\tau \subseteq \mathcal{S} \times \mathcal{S}$. Whenever $\tau(S, S')$ holds, the state $S'$ is called a successor state of the state $S$ and we say that the algorithm performs a step in $S$ to yield $S'$.
Note that the only difference to the sequential time postulate in  is that $\tau$ is defined as a relation rather than a function, i.e. for a state $S$ there may be more than one successor state.
Though Postulate 1 only gives a necessary condition for nd-seq algorithms and in particular leaves open what states are, we can already derive some consequences from it such as the notions of run, final state and behavioural equivalence.
Let $\mathcal{A}$ be an nd-seq algorithm with states $\mathcal{S}$, initial states $\mathcal{I}$ and transition relation $\tau$.
A run of $\mathcal{A}$ is a sequence $S_0, S_1, S_2, \dots$ with $S_i \in \mathcal{S}$ for all $i$ and $S_0 \in \mathcal{I}$, such that $\tau(S_i, S_{i+1})$ holds for all $i$.
If an nd-seq algorithm $\mathcal{B}$ has exactly the same runs as $\mathcal{A}$, then $\mathcal{A}$ and $\mathcal{B}$ are called behaviourally equivalent.
Note that we obtain behavioural equivalence if the sets of states and initial states and the one-step transition relation of two nd-seq algorithms coincide (the converse does not hold, as an algorithm may have redundant states that do not appear in any run).
Often $S_n$ is called a final state of a run of $\mathcal{A}$ (and the run is called terminated in this state) if $S_m = S_n$ holds for all $m \geq n$. But sometimes it is more convenient to use a dynamic termination predicate whose negation guards the execution of the algorithm and which is set to true by $\mathcal{A}$ when it reaches a state one wants to consider as final.
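These notions of run and final state can be illustrated by a toy driver. Everything below is a made-up example: the transition relation is represented as a successor function on integer states, and a run simply stops when no proper successor exists.

```python
import random

def run_nd(initial, successors, max_steps=100):
    """Compute one run of an nd-seq 'algorithm': repeatedly pick a
    successor state until a final state (no successor, or only the
    state itself) is reached, or the step bound is hit."""
    state = initial
    trace = [state]
    for _ in range(max_steps):
        succs = successors(state)
        if not succs or succs == {state}:
            break  # final state: the run has terminated
        state = random.choice(sorted(succs))  # non-deterministic step
        trace.append(state)
    return trace

# A made-up nd-seq 'algorithm' on integer states: from n > 0 move
# non-deterministically to n - 1 or n - 2 (never below 0), stop at 0.
trace = run_nd(5, lambda n: {m for m in (n - 1, n - 2) if m >= 0} if n > 0 else set())
```

Different invocations may yield different traces, which is exactly the branching-time non-determinism of Postulate 1.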
Next we clarify what states are. As argued in , the notion of universal algebra (aka Tarski structure) captures all desirable structures that appear in mathematics, so it is adequate to choose this highly expressive concept for the definition of the notion of state.
A signature $\Sigma$ is a finite set of function symbols, and each $f \in \Sigma$ is associated with an arity $ar(f) \in \mathbb{N}$. A structure over $\Sigma$ comprises a base set $B$ (for convenience, to capture partial functions we tacitly assume that base sets contain a constant undef and that each isomorphism maps undef to itself) and an interpretation of the function symbols $f \in \Sigma$ by functions $f_S \colon B^{ar(f)} \to B$. An isomorphism $\sigma$ between two structures $S$ and $S'$ is given by a bijective mapping between the base sets that is extended to the functions by $\sigma(f_S(b_1, \dots, b_n)) = f_{S'}(\sigma(b_1), \dots, \sigma(b_n))$ for all $f \in \Sigma$ and $b_1, \dots, b_n \in B$.
Postulate 2 (Abstract State Postulate)
Each nd-seq algorithm $\mathcal{A}$ comprises a signature $\Sigma$ such that
Each state of $\mathcal{A}$ is a structure over $\Sigma$.
The sets $\mathcal{S}$ and $\mathcal{I}$ of states and initial states, respectively, are both closed under isomorphisms.
Whenever $\tau(S, S')$ holds, the states $S$ and $S'$ have the same base set.
Whenever $\tau(S, S')$ holds and $\sigma$ is an isomorphism defined on $S$, then also $\tau(\sigma(S), \sigma(S'))$ holds.
In the following we write $f_S$ to denote the interpretation of the function symbol $f$ in the state $S$. Though we still have only necessary conditions for nd-seq algorithms, we can define further notions that are important for the development of our theory.
A location of the nd-seq algorithm $\mathcal{A}$ is a pair $\ell = (f, (b_1, \dots, b_n))$ with a function symbol $f \in \Sigma$ of arity $n$ and all $b_i \in B$. If $B$ is the base set of state $S$ and $f_S(b_1, \dots, b_n) = b$ holds, then $b$ is called the value of the location $\ell$ in state $S$.
We write $val_S(\ell)$ for the value of the location $\ell$ in state $S$. The evaluation function $val$ can be extended to ground terms in a straightforward way.
The set $T_\Sigma$ of ground terms over the signature $\Sigma$ is the smallest set such that $f(t_1, \dots, t_n) \in T_\Sigma$ holds for all $f \in \Sigma$ with $ar(f) = n$ and $t_1, \dots, t_n \in T_\Sigma$ (clearly, for the special case $n = 0$ we get $f() \in T_\Sigma$; instead of $f()$ we usually write simply $f$). The value of a term $t = f(t_1, \dots, t_n)$ in a state $S$ is defined by $val_S(t) = f_S(val_S(t_1), \dots, val_S(t_n))$.
With the notions of location and value we can further define updates and their result on states (note that update sets as we use them are merely differences of states).
An update of the nd-seq algorithm $\mathcal{A}$ in state $S$ is a pair $(\ell, v)$ with a location $\ell$ and a value $v$, where $B$ is the base set of $S$ and $v \in B$. An update $(\ell, v)$ is trivial iff $val_S(\ell) = v$ holds. An update set is a set of updates. An update set $\Delta$ in state $S$ is consistent iff $(\ell, v_1) \in \Delta$ and $(\ell, v_2) \in \Delta$ implies $v_1 = v_2$, i.e. there can be at most one non-trivial update of a location in a consistent update set. If $\Delta$ is a consistent update set in state $S$ (otherwise, the term $S + \Delta$ used to define the successor state is usually considered as not defined; an alternative is to extend the definition by letting $S + \Delta = S$ if $\Delta$ is inconsistent), then $S + \Delta$ denotes the unique state $S'$ with $val_{S'}(\ell) = v$ for all $(\ell, v) \in \Delta$ and $val_{S'}(\ell) = val_S(\ell)$ for all other locations $\ell$.
Considering the locations where a state and a successor state differ gives us the following well-known fact (see ).
If $\tau(S, S')$ holds, then there exists a unique minimal consistent update set $\Delta$ with $S' = S + \Delta$ (the conclusion is true for any given pair of states, independently of the relation $\tau$).
We use the notation $\Delta(S, S')$ for the consistent update set that is defined by $S' = S + \Delta(S, S')$. We further write $\Delta(S)$ for the set of all such update sets defined in state $S$, i.e. $\Delta(S) = \{ \Delta(S, S') \mid \tau(S, S') \}$.
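These notions are directly executable. In the sketch below a state is represented as a finite mapping from locations to values (our own simplification, since real states interpret full signatures), and, following the alternative convention mentioned above, applying an inconsistent update set leaves the state unchanged.

```python
def is_consistent(updates):
    """Consistent iff no location is assigned two different values."""
    seen = {}
    for loc, val in updates:
        if loc in seen and seen[loc] != val:
            return False
        seen[loc] = val
    return True

def apply_update_set(state, updates):
    """The state S + Delta: agrees with Delta on the updated locations
    and with S everywhere else; an inconsistent Delta yields S itself."""
    if not is_consistent(updates):
        return dict(state)
    new_state = dict(state)
    for loc, val in updates:
        new_state[loc] = val
    return new_state

def delta(state, successor):
    """The minimal consistent update set with successor = state + delta:
    exactly the locations where the two states differ."""
    return {(loc, val) for loc, val in successor.items()
            if state.get(loc) != val}
```

Here locations are encoded as (symbol, argument-tuple) pairs, e.g. `('f', ())` for a nullary symbol f.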
Our third postulate concerns bounded exploration. It is motivated by the simple observation that any algorithm requires a finite representation, which implies that only finitely many ground terms may appear in the representation, and these must then already determine the successor state—for a more detailed discussion refer to —or the successor states in the case of non-determinism. Formally, this requires a notion of coincidence for a set of ground terms in different states.
Let $W$ be a set of ground terms for an nd-seq algorithm $\mathcal{A}$. Two states $S_1$ and $S_2$ with the same base set coincide on $W$ iff $val_{S_1}(t) = val_{S_2}(t)$ holds for all terms $t \in W$.
Postulate 3 (Bounded Exploration Postulate)
Each nd-seq algorithm $\mathcal{A}$ comprises a finite set $W$ of ground terms such that whenever two states $S_1$ and $S_2$ with the same base set coincide on $W$, the corresponding sets of update sets for $S_1$ and $S_2$ are equal, i.e. we have $\Delta(S_1) = \Delta(S_2)$. The set $W$ is called a bounded exploration witness.
Bounded exploration witnesses are not unique. In particular, the defining property remains valid if $W$ is extended by finitely many terms. Therefore, without loss of generality we may tacitly assume that a bounded exploration witness $W$ is always closed under subterms. We then call the elements of $W$ critical terms. If $t$ is a critical term, then its value $val_S(t)$ in a state $S$ is called a critical value. This gives rise to the following well-known fact.
The set $\Delta(S)$ of update sets of an nd-seq algorithm in a state $S$ is finite, and every update set $\Delta \in \Delta(S)$ is also finite.
For a proof we first need to show that in every update $(\ell, v)$ in an update set $\Delta \in \Delta(S)$ the values are critical. As $W$ is finite, there are only finitely many critical values, and we can only build finite update sets and only finitely many sets of update sets with these. We will use such arguments later in Section 3 to show that recursive algorithms are captured by recursive ASMs, and dispense with giving more details here.
2.2 Recursion Postulate
As remarked initially, an essential property of any recursive algorithm is the ability to perform call steps, i.e. to trigger an instance of a given algorithm (maybe of itself) and remain waiting until the callee has computed an output for the given input. We make this explicit by extending the postulate on the one-step transition relation of nd-seq algorithms by characteristic conditions for a call step (see Postulate 4 below).
Furthermore, it seems to be characteristic for runs of recursive algorithms that in a given state, the caller may issue in one step more than one call, though only finitely many, of callees which perform their subcomputations independently of each other. For an example see the rule in the mergesort algorithm in Section 4. The resulting ‘asynchronous parallelism’ implies that the states in runs of a recursive algorithm are built over the union of the signatures of the calling and the called algorithms.
The independence condition for parallel computations of different instances of the given algorithms requires that for different calls, in particular for different calls of the same algorithm, the state spaces of the triggered subcomputations are separated from each other. Below we make the term instance of an algorithm more precise to capture the needed encapsulation of subcomputations. This must be coupled with an appropriate input/output relation between the input provided by the caller and the output computed by the callee for this input, which will be captured by a call relationship in Definition 9.
This explains the following definition of an i/o-algorithm as nd-seq algorithm with call steps and distinguished function symbols for input and output.
An algorithm with input and output (for short: i/o-algorithm) is an nd-seq algorithm whose one-step transition relation may comprise call steps satisfying the Call Step Postulate 4 formulated below and whose signature $\Sigma$ is the disjoint union of three subsets
$\Sigma = \Sigma_{in} \mathbin{\dot\cup} \Sigma_{loc} \mathbin{\dot\cup} \Sigma_{out}$ containing respectively input, local and output functions that satisfy the input/output assumption defined below.
Function symbols in $\Sigma_{in}$, $\Sigma_{out}$ and $\Sigma_{loc}$, respectively, are called input, output and local function symbols. Correspondingly, locations with function symbol in $\Sigma_{in}$, $\Sigma_{out}$ and $\Sigma_{loc}$, respectively, are called input, output and local locations. We also include among the input resp. output locations the variables which appear as input resp. output parameters of calls, although they are not function symbols.
The assumption on input/output locations of i/o-algorithms is not strictly needed, but it can always be arranged and it eases the development of the theory.
Input/Output Assumption for an i/o-algorithm $\mathcal{A}$:
Input locations of $\mathcal{A}$ are only read by $\mathcal{A}$, but never updated by $\mathcal{A}$. Formally, this implies that if $(\ell, v)$ is an update in an update set of $\mathcal{A}$ in any state $S$, then the function symbol of $\ell$ is not in $\Sigma_{in}$.
Output locations of $\mathcal{A}$ are never read by $\mathcal{A}$, but can be written by $\mathcal{A}$. This can be formalised by requiring that if $W$ is a bounded exploration witness, then no term $t \in W$ contains a function symbol from $\Sigma_{out}$.
Any initial state of $\mathcal{A}$ only depends on its input locations, so we may assume that $val_{S_0}(\ell) = undef$ holds in every initial state $S_0$ of $\mathcal{A}$ for all output and local locations $\ell$. This assumption guarantees that when an i/o-algorithm is called, its run is initialised by the given input, which reflects the common intuition about input and output.
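The first condition of this assumption can be phrased as a simple executable check. The location encoding (symbol, argument-tuple) and the symbol names are illustrative only, not part of the formal definition.

```python
def violates_io_assumption(update_set, input_symbols):
    """An i/o-algorithm must never update its own input locations:
    flag any update whose location's function symbol is an input symbol."""
    return any(symbol in input_symbols
               for (symbol, args), _value in update_set)

# With input symbol 'in', updating the output location ('out', ()) is
# allowed, while updating ('in', ()) would violate the assumption.
ok = violates_io_assumption({(('out', ()), 5)}, {'in'})
bad = violates_io_assumption({(('in', ()), 5)}, {'in'})
```

A symmetric check on bounded exploration witnesses (no output symbol occurring in a witness term) would capture the second condition.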
In a call relationship we call the caller the parent and the callee the child algorithm. Intuitively, a) the parent algorithm is able to update input locations of the child algorithm, which determines the child’s initial state; b) when the child algorithm is called, control is handed over to it until it reaches a final state, in which state the parent takes back control and is able to read the output locations of the child; c) the two algorithms have no other common locations. Therefore we define:
A call relationship holds for (instances of) two i/o-algorithms $p$ (parent) and $c$ (child) if and only if they satisfy the following:
$\Sigma^c_{in} \subseteq \Sigma^p$. Furthermore, $p$ may update input locations of $c$, but never reads these locations. Formally this implies that for a bounded exploration witness $W_p$ of $p$, no term $t \in W_p$ contains a function symbol from $\Sigma^c_{in}$.
$\Sigma^c_{out} \subseteq \Sigma^p$. Furthermore, $p$ may read but never updates output locations of $c$, so we have that for any update $(\ell, v)$ in an update set in any state of $p$, the function symbol of $\ell$ is not in $\Sigma^c_{out}$.
$\Sigma^p \cap \Sigma^c = \Sigma^c_{in} \cup \Sigma^c_{out}$ (no other common locations).
Postulate 4 (Call Step Postulate)
When an i/o-algorithm $p$—the caller, viewed as parent algorithm—calls a finite number of i/o-algorithms $c_1, \dots, c_n$—the callees, viewed as child algorithms—a call relationship holds between the caller and each callee. The caller activates a fresh instance of each callee so that they can start their computations. These computations are independent of each other and the caller remains waiting—i.e. performs no step—until every callee has terminated its computation (read: has reached a final state). For each callee, the initial state of its computation is determined only by the input passed by the caller; the only other interaction of the callee with the caller is to return in its final state an output to $p$.
Differently from runs of an nd-seq algorithm as defined by Definition 2, where in each state at most one step of the nd-seq algorithm is performed, in a recursive run a recursive algorithm can perform in one step simultaneously one step of each of finitely many non-terminated and non-waiting called instances of its i/o-algorithms. This is expressed by the Recursive Run Postulate 5 below. In this postulate we refer to active and non-waiting instances of components, which are defined as follows:
What it means for an instance to be active resp. waiting in a state is defined as follows:
Called collects the instances of algorithms that are called during the run, and Called(q) denotes the subset of Called which contains all the children called by q. An instance is active in a state if it has been called and has not yet reached a final state, and it is waiting if it has performed a call step and not every callee has yet terminated; thus active(q) and not waiting(q) are true in the initial state $S_0$ of the computation of each i/o-algorithm q. In particular, the original main component is considered not to be in Called(q), for any q.
Postulate 5 (Recursive Run Postulate)
For a recursive algorithm $\mathcal{A}$ with main component main, a recursive run is a sequence $S_0, S_1, \dots$ of states together with a sequence $A_0, A_1, \dots$ of sets of instances of components of $\mathcal{A}$ which satisfy the following constraints concerning recursive runs and bounded call tree branching:
- Recursive run constraint.
$A_0$ is the singleton set $\{main\}$, i.e. every run starts with main,
every $A_i$ is a finite set of instances of components of $\mathcal{A}$ that are active and not waiting in $S_i$,
every $S_{i+1}$ is obtained in one $\mathcal{A}$-step by performing in $S_i$ simultaneously one step of each i/o-algorithm in $A_i$. Such an $\mathcal{A}$-step is also called a recursive step of $\mathcal{A}$.
- Bounded call tree branching.
There is a fixed natural number $n$, depending only on $\mathcal{A}$, which in every $\mathcal{A}$-run bounds the number of callees which can be called by a call step.
To capture the required independence of callee computations we now describe a way to make the concept of an instance of an algorithm and its computation more precise. The idea is to use for each callee a different state space, with the required connection between caller and callee through input and output terms. One can define an instance of an algorithm $\mathcal{A}$ by adding a label $a$, which we invite the reader to view as an agent executing the instance of $\mathcal{A}$. The label can be used as environment parameter for the evaluation $val_{S,a}(t)$ of a term $t$ in state $S$ with the given environment $a$. This yields different functions $f_a$ as interpretation of the same function symbol $f$ for different agents $a$, so that the run-time interpretations of a common signature element can be made to differ for different agents, due to different inputs which determine their initial states.
This idea underlies the definition of ambient ASMs we will use in the following. It allows one to classify certain functions as ambient-dependent functions, whereby the algorithm instances become context-aware. For the technical details we refer to the definition in the textbook [9, Ch.4.1].
This allows us to make the meaning of ‘activating a fresh instance of a callee’ in the Call Step Postulate more precise: as fresh instance of a child algorithm $c$ called by an instance with label $a$ we use a copy of $c$ with a new label $b$, where the interpretation of each input or output function symbol $f$ of $c$ satisfies $f_b = f_a$ during the run of the instance $b$. Note that by the call relationship constraint in the Call Step Postulate, input/output function symbols are in the signature of both the parent and the child algorithm. This provides the ground for the ‘asynchronous parallelism’ of independent subcomputations in the run constraint of the recursive run postulate. In fact, when a state $S'$ is obtained from state $S$ by one step of each of finitely many active and non-waiting i/o-algorithms $q_1, \dots, q_k$, this means that for each $q_j$ the one-step transition relation holds for the corresponding state restrictions, namely $\tau_{q_j}(S|_{\Sigma_{q_j}}, S'|_{\Sigma_{q_j}})$, where $S|_{\Sigma_q}$ denotes the restriction of state $S$ to the signature $\Sigma_q$.
With the above definitions one can make the Call Step Postulate more explicit by saying that if $p$ calls $c_1, \dots, c_n$ in a state $S$ so that as a result waiting(p) holds (to simplify the presentation we adopt a slight abuse of notation, writing the one-step transition relation with the global states even where it really holds for their restriction to the sub-signature of the concrete algorithm), then for fresh instances $b_i$ of the $c_i$ with input locations passed by $p$ ($1 \leq i \leq n$) the following holds:
This predicate expresses that the restriction of the resulting state to the signature of $c_i$ is an initial state of $c_i$ determined by the input, so that the fresh instance $b_i$ is ready to start its computation.
Remark on Call Trees.
If in a recursive $\mathcal{A}$-run the main algorithm calls some i/o-algorithms, this call creates a finitely branched call tree whose nodes are labeled by the instances of the i/o-algorithms involved, with active and not waiting algorithms labeling the leaves and with the main (the parent) algorithm labeling the root of the tree and becoming waiting. When the algorithm at a leaf makes a call, this extends the tree correspondingly. When the algorithm at a child of a node has terminated its computation, we delete the child from the tree. The leaves of this (dynamic) call tree are labeled by the active, not waiting algorithms in the run. When the main algorithm terminates, the call tree is reduced again to the root labeled by the initially called main algorithm.
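The bookkeeping described in this remark can be mirrored by a small dynamic tree structure. This is a sketch with made-up method names, and instances are represented by plain strings.

```python
class CallTree:
    """Dynamic call tree: the leaves are the active, not-waiting instances."""
    def __init__(self, main):
        self.children = {main: []}
        self.parent = {main: None}
        self.root = main

    def call(self, caller, callees):
        # The caller becomes waiting; the fresh instances become leaves.
        assert self.children[caller] == [], "only a leaf may perform a call"
        for c in callees:
            self.children[c] = []
            self.parent[c] = caller
        self.children[caller] = list(callees)

    def terminate(self, instance):
        # A terminated child is deleted; once the last child of a node is
        # gone, that node is a leaf again (it is no longer waiting).
        p = self.parent.pop(instance)
        del self.children[instance]
        if p is not None:
            self.children[p].remove(instance)

    def leaves(self):
        return [n for n, cs in self.children.items() if cs == []]
```

For example, after `main` calls two children, the children are the leaves; after both terminate, `main` is the only leaf again, matching the reduction of the tree to its root.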
Usually it is expected that in recursive $\mathcal{A}$-runs each called i/o-algorithm reaches a final state, but in general it is not excluded that this is not the case. An example of the former case is given by mergesort, whereas an example of the latter case is given by the recursive sieve of Eratosthenes algorithm discussed in  and used in Section 4 to illustrate our definitions.
3 Capture of Recursive Algorithms
We now proceed with the second step of our behavioural theory, the definition of an abstract machine model—these will be recursive ASMs, an extension of sequential ASMs—and the proof that the runs of this model capture the runs of recursive algorithms.
3.1 Recursive Abstract State Machines
As common with ASMs let $\Sigma$ be a signature and let $U$ be a universe of values. In addition, we assume a background structure comprising at least the truth values and their connectives as well as the operations on them. Values defined by the background are assumed to be elements of $U$. Then (ground) terms over $\Sigma$ are built in the usual way (using also the operations from the background), and they are interpreted taking $U$ as base set—for details we refer to the standard definitions of ASMs . This defines the set of states of the recursive ASM rules we are going to define now syntactically. We proceed by induction, adding to the usual rules of non-deterministic sequential (nd-seq) ASMs (which we repeat here for the sake of completeness) named rules which can be called (the terse definition here avoids complicated syntax; we tacitly permit parentheses to be used in rules when needed, and we use an arbitrary set of names for named rules).
- Assignment.
If $t_1, \dots, t_n, t$ are terms over the signature $\Sigma$ and $f$ is a function symbol of arity $n$, then $f(t_1, \dots, t_n) := t$ is a recursive ASM rule.
- Branching.
If $\varphi$ is a Boolean term over the signature $\Sigma$ and $r$ is a recursive ASM rule, then also IF $\varphi$ THEN $r$ is a recursive ASM rule.
- Bounded Parallelism.
If $r_1, \dots, r_n$ are recursive ASM rules, then also their parallel composition, denoted $r_1$ PAR $\dots$ PAR $r_n$, is a recursive ASM rule.
- Bounded Choice.
If $r_1, \dots, r_n$ are recursive ASM rules, then also the non-deterministic choice among them, denoted CHOOSE $r_1, \dots, r_n$, is a recursive ASM rule.
- Let.
If $r$ is a recursive ASM rule, $t$ is a term and $x$ is a variable, then LET $x = t$ IN $r$ is also a recursive ASM rule.
- Call.
Let $t_0, t_1, \dots, t_n$ be terms where the outermost function symbol of $t_0$ is different from the outermost function symbol of $t_i$ for every $i \geq 1$. Let $R$ be the name of a rule of arity $n + 1$, declared by $R(x_0, x_1, \dots, x_n) = r$, where $r$ is a recursive ASM rule all of whose free variables are contained in $\{x_0, x_1, \dots, x_n\}$. Then $R(t_0, t_1, \dots, t_n)$ is a recursive ASM rule.
A recursive ASM rule of the form $R(t_0, t_1, \dots, t_n)$ is called a named i/o-rule or simply i/o-rule.
In the same way that a recursive algorithm consists of finitely many i/o-algorithms, a recursive ASM $\mathcal{M}$ consists of finitely many recursive ASM rules, also called components (or component ASMs) of $\mathcal{M}$.
A recursive Abstract State Machine (rec-ASM) consists of a finite set of recursive ASM rules, one of which is declared to be the main rule.
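To make the inductive definition concrete, the nd-seq fragment of the rule syntax can be given a toy semantics in which a rule maps a state to the list of update sets it yields. This encoding (assignment, IF-THEN, PAR and CHOOSE only, no call rule) is our own illustration, not the official ASM semantics.

```python
import itertools

# A state maps location names to values; a term is a function of the
# state; a rule is a function from a state to a list of update sets.

def assign(loc, term):
    # f(...) := t yields exactly one update set with one update.
    return lambda state: [{(loc, term(state))}]

def if_then(cond, rule):
    # IF phi THEN r: the rule's update sets if phi holds, else nothing.
    return lambda state: rule(state) if cond(state) else [set()]

def par(*rules):
    # Bounded parallelism: union one update set from each component.
    def run(state):
        return [set().union(*combo)
                for combo in itertools.product(*(r(state) for r in rules))]
    return run

def choose(*rules):
    # Bounded choice: any update set of any of the components.
    return lambda state: [u for r in rules for u in r(state)]
```

For example, `par(assign('x', lambda s: s['y'] + 1), choose(assign('z', lambda s: 0), assign('z', lambda s: 1)))` yields two possible update sets in any state, reflecting the bounded non-determinism discussed in Section 2.1.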
For the signature $\Sigma$ of recursive ASM rules we use the notation $\Sigma = \Sigma_{in} \mathbin{\dot\cup} \Sigma_{loc} \mathbin{\dot\cup} \Sigma_{out}$ for the split of $\Sigma$ into the disjoint union of input, output and local functions. For named i/o-rules $R(t_0, t_1, \dots, t_n)$ the outermost function symbol of $t_0$ is declared as an element of $\Sigma_{out}$, and for each $t_i$ ($1 \leq i \leq n$) the outermost function symbol of $t_i$ is declared as an element of $\Sigma_{in}$. In the definition of the semantics of a named i/o-rule we will take care that the input/output assumption and the call relationship defined in Section 2.2 for i/o-algorithms are satisfied by named i/o-rules.
Sequential and recursive ASMs differ in their run concept, analogously to the difference between runs of an nd-seq algorithm and a recursive algorithm. A sequential ASM is a ‘mono-agent’ machine: it consists of just one rule (for notational convenience, this rule is often spelled out as a set of rules, but these rules are always executed together, in parallel) and in a sequential run this very same rule is applied in each step—by an execution agent that normally remains unmentioned. This changes with recursive ASMs, which are ‘multi-agent’ machines. They consist of a set of independent rules, multiple instances of which (even of the same rule) may be called to be executed independently (for an example see the rule in Sect. 4). We capture this by associating an execution agent with each rule so that each agent can execute its rule instance independently of the other agents, in its own copy of the state space (i.e. instances of states over the signature of the executed rule), taking into account the call relationship between caller and callee (see below).
Therefore every single step of a recursive ASM may involve the execution of one step by each of finitely many and not agents, each of which executes in its own state space the rule it is associated (we also say equipped) with. To describe this separation of the state spaces of different agents (in particular if they execute the same program), we define instances of a rule by ambient ASMs of form with agents (see below for details). The following definition paraphrases the run constraint in the Recursive Run Postulate 5.
A recursive run of a recursive ASM is a sequence of states together with a sequence of subsets of , where each is equipped with a that is an instance of a rule , such that the following holds:
is a singleton set , which in equals the set , and its agent is equipped with .
is a finite set of in and not agents. We define (see Definition 11):
is obtained from in one -step by performing for each agent one step of .
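The run definition above can be sketched operationally: in each step, every agent that is not waiting contributes one step of its own rule, and the resulting updates are applied jointly. The following Python sketch is purely illustrative; the names (`Agent`, `run_step`, the dictionary encoding of states and locations) are our own assumptions, not the paper's notation.

```python
# Hedged sketch: one step of a recursive ASM run. States are modelled as
# dictionaries from locations to values; each agent carries its own rule.
from dataclasses import dataclass
from typing import Callable, Dict, Set, Tuple

Location = Tuple[str, tuple]       # (function symbol, argument tuple)
Update = Tuple[Location, object]   # (location, new value)

@dataclass
class Agent:
    rule: Callable[["Agent", Dict[Location, object]], Set[Update]]
    waiting: bool = False          # a caller waits while its callee runs

def run_step(state: Dict[Location, object], agents: list) -> None:
    """One recursive-ASM step: every agent that is not waiting contributes
    one step of its own rule; the combined update set is then applied."""
    updates: Set[Update] = set()
    for a in agents:
        if not a.waiting:
            updates |= a.rule(a, state)
    for loc, val in updates:       # apply the combined update set
        state[loc] = val
```

Note that waiting agents contribute nothing to the step, which models a caller suspended until its callee terminates.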
To complete the definition of recursive ASM runs, which extends the notion of runs of sequential ASMs, it suffices (besides explaining ambient ASMs) to add a definition of what it means to apply a named i/o-rule. In the ASM framework this boils down to extending the inductive definition of the update sets computed by sequential ASMs in a given state by defining the update sets computed by named i/o-rules.
A detailed definition of ambient ASMs can be found in [9, Ch.4.1]. Here it suffices to say that using as instance of a called rule makes it possible to isolate the state space of agent from that of other agents, namely by evaluating terms in state considering also the agent parameter , using instead of . To establish the call relationship we require the following: when a recursive ASM rule , executed by a parent agent , calls a rule to be executed by a child agent , then the input/output functions of are also functions in and are interpreted there in the state space of the same way as in the state space of .
For the sake of completeness we repeat the definition of update sets for sequential ASM rules from  and extend it to named i/o-rules. Rules of sequential ASMs change neither the set nor the function, so and do not appear in the definition of . -rules are the only rules which also involve introducing a new element into with an assignment to and a state initialization corresponding to the provided input, so that executes its instance of the called rule.
If is an assignment rule , then let . We define .
If is a branching rule IF THEN , then let be the truth value . We define for and otherwise.
If is a parallel composition rule PAR ,222222Parallel composition rules are also written by displaying the components vertically, omitting PAR and . then we define .
If is a bounded choice rule CHOOSE , then we define .
If is a let rule LET IN , then let , and define .
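The clauses above can be condensed into a toy evaluator that computes, for a sequential rule, the set of update sets it yields in a state. The tuple encoding of rules and the helper `ev` are our own illustrative assumptions; only the clause structure (assignment, branching, parallel composition, bounded choice, let) follows the definitions above.

```python
# Hedged sketch of the update-set clauses for sequential ASM rules.
# Each update set is a frozenset of ((name, args), value) pairs.
def yields(rule, state, env=None):
    """Return the set of update sets a rule computes in a state."""
    env = env or {}
    kind = rule[0]
    if kind == "assign":                       # f(t1,...,tn) := s
        _, name, args, term = rule
        loc = (name, tuple(ev(t, state, env) for t in args))
        return {frozenset({(loc, ev(term, state, env))})}
    if kind == "if":                           # IF phi THEN R
        _, phi, body = rule                    # empty update set otherwise
        return yields(body, state, env) if ev(phi, state, env) else {frozenset()}
    if kind == "par":                          # PAR R1 ... Rk: one update set
        _, *rules = rule                       # per component, union them
        out = {frozenset()}
        for r in rules:
            out = {u | v for u in out for v in yields(r, state, env)}
        return out
    if kind == "choose":                       # CHOOSE among R1 ... Rk
        _, *rules = rule
        return set().union(*(yields(r, state, env) for r in rules))
    if kind == "let":                          # LET x = t IN R
        _, x, term, body = rule
        return yields(body, state, {**env, x: ev(term, state, env)})
    raise ValueError(kind)

def ev(term, state, env):
    """Evaluate a term: a bound variable, a 0-ary location, or a literal."""
    if isinstance(term, str) and term in env:
        return env[term]
    if (term, ()) in state:                    # dynamic 0-ary function
        return state[(term, ())]
    return term                                # literal value
```

As in the text, a branching rule whose guard fails yields only the empty update set, and bounded choice yields the union of the alternatives' sets of update sets.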
Now consider the case that is a call rule . In this case let , and let be the declaration of the rule named , with all free variables of among .
In the call tree, the caller program plays the role of the parent of the called child program that will be executed by a new agent . The child program is an instance of with the outer function symbols of for classified as denoting input functions (which are not read by the caller program) and with the outer function symbol of classified as denoting an output function (which is not updated by the caller program).232323The input parameters and the output location parameters are passed by value, so that the involved i/o-function symbols can be considered as belonging to the signature of caller and callee. The first two of the call relationship conditions are purely syntactical and can be assumed (without loss of generality) for caller and callee programs. The third condition is satisfied, because each local function symbol of arity is, in a program instance, implicitly turned into an ()-ary function symbol, namely by adding the additional agent as environment parameter for the evaluation of terms with as leading function symbol. Therefore, each local function of the callee is different from each local function of the caller, and to execute the call rule means to create a new agent ,242424The function is assumed to yield for each invocation a fresh element, pairwise different ones for parallel invocations. One can define such a function also by an construct which operates on a (possibly infinite) special reserve set and comes with an additional constraint on the construct to guarantee that parallel imports yield pairwise different fresh elements, see [12, 2.4.4]. which is the agent that executes the call, to equip with the fresh program instance and to Initialize its state by the values of . This makes the callee ready to run and puts the caller into mode, in the sense defined by Definition 11 (except in the trivial case that is already when so that it will not be executed).
In other words, we define as the singleton set containing the update set computed in state by the following ASM, a rule we denote by which interprets the named i/o-rule .
Note that denotes the output location which the caller expects to be updated by the callee with the return value.
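The call mechanics just described can be summarized in a small sketch: a call creates a fresh child agent, passes the input parameters by value into the child's state space (local functions receive the agent as an extra argument), and puts the caller into waiting mode until the callee updates the agreed output location with the return value. All names below (`call`, `new_agent` via a counter, the dictionary state) are illustrative assumptions, not the paper's definitions.

```python
# Hedged sketch of the call-rule mechanics: fresh agent, call-by-value
# input initialization, caller set to waiting, output location recorded.
import itertools

_fresh = itertools.count()   # stands in for the 'new' function: every
                             # invocation yields a fresh agent identifier

def call(state, caller, rule_body, input_values, output_loc):
    """Create and initialize a child agent for one invocation of a
    named i/o-rule; the caller waits until the child terminates."""
    child = next(_fresh)
    for i, v in enumerate(input_values):
        # inputs are passed by value; the child's local state space is
        # separated by using the agent as an extra argument
        state[(f"in{i}", (child,))] = v
    caller["waiting"] = True
    return {"agent": child, "body": rule_body, "output": output_loc}
```

The returned record models the equipped child instance: the callee is ready to run, and the caller resumes only once the output location has been written.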
Each recursive ASM defines a recursive algorithm (in the sense of Definition 10) that is behaviourally equivalent to .
Remember that each sequential ASM (i.e. without named i/o-rules) is an nd-seq algorithm  and thus satisfies the branching time, abstract state and bounded exploration postulates.
Each rule belonging to a recursive ASM , including named i/o-rules, is associated with a signature given by the function symbols that appear in the rule or, if the rule is a named rule, in the rule body. This, together with the agents in , defines the states (as sets of states of signature , one per ) and gives the satisfaction of the abstract state postulate 2.
The satisfaction of the branching time postulate 1 is an immediate consequence of the fact that for every state , applying any recursive ASM rule in , including named i/o-rules, defines a set of successor states.
For the satisfaction of the bounded exploration postulate 3, for a named i/o-rule we take all terms appearing in the rule body, which according to our definition of the update sets yielded by a named i/o-rule define exactly the update sets in a given state.
As to the recursive run postulate 5, the run constraint is satisfied by the definition of recursive ASM runs (Definition 14). The bounded call tree branching constraint is satisfied, because there are only finitely many named i/o-rules in each rule .
It remains to show that the runs of the recursive algorithm , induced by this interpretation of the given recursive ASM , are exactly the recursive runs of . However, this follows immediately from the two run characterizations in Postulate 5 and Definition 14 and from the fact that the successor relation of is defined by the update sets yielded by the rules of .
3.2 The Characterisation Theorem
We now show the converse of Theorem 3.1, giving our first main result that recursive algorithms are captured by recursive ASMs. The proof largely follows the ideas underlying the proof of the sequential ASM thesis in .
Let denote any recursive algorithm. Then for each state and a successor state in a recursive run of we obtain an update set . According to the Recursive Run Postulate 5 each such state transition is defined by one step of each of finitely many and not i/o-algorithms . Each of these i/o-algorithms is a fresh instance of some component of . In particular, by the freshness and the independence condition in the Call Step Postulate 4, the instances have disjoint signatures and yield subruns with states and update sets .
Consider now any such fresh instance of a component . All function symbols used by in its states and update sets are copies of function symbols of , somehow labelled to ensure the freshness condition of the instance. Removing these labels, we obtain for any state and successor state pairs with states of and . Let be the set of all pairs obtained this way. We choose a fixed bounded exploration witness for all .
To complete the proof of the theorem it now suffices to show the following Lemma 1.
For each there exists a recursive ASM rule with for all states appearing in .
First we show that the argument values of any location in an update of in any state are critical in . The proof uses the same argument as in [22, Lemma 6.2].
To show the property consider an arbitrary update set and let be an update at location . We show that the assumption that is not a critical value leads to a contradiction.
If is not a critical value, one can create a new structure by swapping with a fresh value not appearing in (e.g. taken from the set), so is a state of . As is not critical, we must have for all terms . According to the bounded exploration postulate we obtain for the set of update sets produced by in states and . Then the update appears in at least one update set in contradicting the fact that does not occur in and thus cannot occur in the update set created in this state.
Furthermore, for each pair we have a recursion depth function defined inductively as follows (induction on the call tree):
(with ) defined as follows:
Case 1: For some callee just activated in the run by , the restriction of to the signature of is an initial state of a terminating run of the callee, during which remains waiting, and such that and for a final state for and . Then the depth of the update is .
Case 2: If there is no such child with , then .
We now proceed by a case distinction for .
is defined for all states . In this case we proceed by induction over . The base case is de facto the proof of the non-deterministic sequential ASM thesis.
Let and let be a state with , and let be an update at location . As all are critical values and there is no child with , there exist terms with . Thus, the assignment rule produces the given update in state .
The parallel composition of all these assignment rules gives a rule with , and the bounded choice composition of these rules for all successor states defines a rule with .
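The construction in this step can be mirrored syntactically: one assignment rule per update, a PAR of these assignments per successor state, and a CHOOSE over the successor states. The string-based rule syntax below is a deliberately naive illustration of this composition, not the paper's formal rule language.

```python
# Hedged sketch: build a (textual) rule from the update sets of the
# successor states, as in the construction above.
def assignment(loc_term, value_term):
    # one assignment rule per update
    return f"{loc_term} := {value_term}"

def par(rules):
    # parallel composition of all assignments for one successor state
    return "PAR " + " ".join(rules)

def choose(rules):
    # bounded choice over the rules for all successor states
    return "CHOOSE " + " | ".join(rules)

def rule_for(update_sets):
    """update_sets: one set of (loc_term, value_term) pairs per successor."""
    return choose([par([assignment(l, v) for l, v in sorted(us)])
                   for us in update_sets])
```

For a deterministic step (a single successor state) the outer choice is trivial and the rule degenerates to a single parallel block of assignments.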
Next, step by step, we extend the set of states252525These cases are captured in Lemmata 6.7, 6.8 and 6.9 in , for which the application of yields the update sets defined by .
First, let be a state such that and coincide on . Then we have , because the rule only uses terms in , which have the same values in and . We further have due to the bounded exploration postulate. These equations together give for all states such that and coincide on .
Second, let be isomorphic states such that holds. Let be the isomorphism with . Then we have (by the Abstract State Postulate) and also (because the ASMs satisfy the Abstract State Postulate). These equations together give and hence also .
Third, each state defines an equivalence relation on : . States are called -similar iff holds. Now let be a state that is -similar to . Consider a state isomorphic to , in which each value that appears also in is replaced by a fresh one. Then is disjoint from and by construction -similar to , hence also -similar to .
Thus, we can assume without loss of generality that and are disjoint. Define a structure isomorphic to by replacing by for all . As and are -similar, holds for all terms , so the definition of is consistent. Now and coincide on , so by the first case we obtain .
To complete the proof for the induction base we exploit that is finite, hence there are only finitely many partitions of and only finitely many -similarity classes (). For each such class we define a formula such that holds. Then we can define the rule as follows:
PAR (IF THEN ) … (IF THEN )
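The guard formulas used here can be built, as in the classical proof of the sequential ASM thesis, from equalities and inequalities between the witness terms: two states satisfy the same guard exactly when their witness terms induce the same partition. The sketch below assumes the witness terms and their values in a state are given; the encoding and naming are ours.

```python
# Hedged sketch: the guard formula of a similarity class records, for each
# pair of witness terms, whether they are equal in the given state.
from itertools import combinations

def similarity_guard(terms, state_values):
    """Build the formula characterizing the similarity class of a state,
    given the values the witness terms take in that state."""
    clauses = []
    for t1, t2 in combinations(terms, 2):
        op = "=" if state_values[t1] == state_values[t2] else "!="
        clauses.append(f"{t1} {op} {t2}")
    return " AND ".join(clauses) if clauses else "true"
```

Since the witness set is finite, there are only finitely many such guards, one per similarity class, matching the finitely many branches of the parallel rule above.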
Now let . For a state with