Quantitative Expressiveness of Instruction Sequence Classes for Computation on Single Bit Registers

by   Jan A. Bergstra, et al.
University of Amsterdam

The number of instructions of an instruction sequence is taken as its logical SLOC, abbreviated LLOC. A notion of quantitative expressiveness is based on LLOC, and in the special case of operation over a family of single bit registers a collection of elementary properties is established. A dedicated notion of interface is developed and is used for stating relevant properties of classes of instruction sequences.






1 Introduction

This paper makes use of the theory and notation regarding instruction sequences for operation on Boolean registers as surveyed in [11], thereby following the notation of [9] and simplifying the general presentation of [3] and [12].

Existing notations and results regarding instruction sequences will be used mostly without further reference or technical introduction, because such expositions have amply been published. We mention [2, 3, 4, 6, 7, 9] and [11], and further references listed in these papers. For the following notions, terms and phrases, we refer to the papers just mentioned and the references contained in those: basic instruction (), focus, method, focus method notation for basic instructions ( with focus and method ), yield (also called reply) of a basic instruction (), positive test instruction (), negative test instruction (), termination instruction (), (forward) jump instruction (), backward jump instruction (), indirect jump instruction, finite PGA instruction sequence (alternatively: single pass instruction sequence or PGA instruction sequence without iteration), PGLB program (PGA instruction sequence with backward jumps instead of iteration), generalised semi-colon (text sequential composition), thread, terminated thread (stopped thread), diverging thread (), thread extraction from an instruction sequence ( for an instruction sequence ), service, service family, empty service family, service family composition operator, service family algebra, apply operator (), and the method interface consisting of 16 methods of the form for Boolean registers ( for yield, for effect).

1.1 Logical lines of code for an instruction sequence

Because the identification of Booleans and bits may lead to confusion, Boolean registers will be referred to as single bit registers below.

The number of instructions of an instruction sequence is referred to as its length in e.g. [7, 8]. However, in order to develop a terminology which is more similar to the classical notion of LOC (lines of code, also referred to as SLOC for source lines of code) we will make use of the following terminology:

Definition 1.1.

LLOC (logical lines of code): for an instruction sequence , written in any PGA-style instruction sequence notation, denotes the number of instructions of .

Conventions for the notation of instructions are such that equals the number of semi-colons in plus one.
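The counting convention above can be sketched in a few lines of Python; the concrete instruction syntax in the example is illustrative only, not the paper's official notation.

```python
def lloc(instruction_sequence: str) -> int:
    """LLOC of a PGA-style instruction sequence given as a single line of
    text. By the notational convention above, a sequence of n instructions
    contains exactly n - 1 semicolons, so LLOC is the semicolon count plus one.
    """
    return instruction_sequence.count(";") + 1

# a three-instruction sequence (illustrative instruction syntax)
print(lloc("+in:1.get ; out:1.set:1 ; !"))  # -> 3
```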

Definition 1.2.

An instruction sequence has low register indices if for each kind of register the collection of register numbers of registers involved in one or more of its basic instructions constitute an initial segment of the positive natural numbers.

LLOC is not a precise measure of the size of an instruction sequence in terms of bytes. A reasonable estimate is that, for the instruction sequence notations used below, and assuming that the instruction sequence has low register indices, the size as measured in bytes will not exceed, say, .

We refer to [19] for an exposition on various forms of LOC and SLOC in software engineering practice. In the setting of PGA style instruction sequences no distinction between a statement and an instruction is made, and LLOC according to Definition 1.1 is a plausible interpretation of logical SLOC, which is characterised in [19] as a metric, or rather a family of metrics, based on counting the number of statements in source code. LLOC as in Definition 1.1 comes close to the metric used implicitly in [16].

1.2 Existing approaches to program size

Work on program size has been carried out in the setting of computability theory, for instance [14, 18], and [15] in relation to Kolmogorov complexity. In [17] program size is defined as the number of characters of a program and is related to practical computational tasks, while [13] links program size with information theory. Unlike these approaches we use a rather fixed family of program notations, viewing a program as a sequence of instructions. By taking the number of instructions as a metric, full precision is obtained while at the same time abstraction from the ad hoc syntax of instructions is achieved.

1.3 Objectives of the paper

The objective of this paper is to describe some elementary quantitative observations pertaining to instruction sequences and the LLOC metric under the simplifying assumption that basic actions operate on a family of single bit registers, which arguably are the simplest conceivable data structures. We will assume that the semantics of an instruction sequence, i.e. what it computes, is a partial function from tuples of bits to tuples of bits, thereby excluding instruction sequences meant for computing interactive systems.

We will demonstrate that for very simple tasks determination of the lowest LLOC of an implementation for that task is possible, and we will show by means of examples that theoretical work on LLOC minimisation is greatly facilitated by being explicit about the precise method interfaces of the various single bit registers. For each bit there are possible interfaces, and therefore when designing an instruction sequence for a task involving input registers and output registers, while allowing the use of an arbitrary number of auxiliary registers each with the same method interface, a total of different combinations of method interfaces arises, each constituting a potentially different version of the problem to implement with a minimal (or relatively small) LLOC count. Many questions are stated and left unanswered.
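The count of interface combinations can be made concrete under one reading of the setup above: one of the 2^16 possible method interfaces (cf. Section 4) is chosen independently for each input register, for each output register, and once for the shared interface of all auxiliary registers. The following sketch rests on that assumption.

```python
def task_versions(n_inputs: int, m_outputs: int) -> int:
    # 2**16 method interfaces per single bit register (all subsets of the
    # 16-method interface); one choice per input register, one per output
    # register, and a single shared choice for all auxiliary registers
    interfaces_per_register = 2 ** 16
    return interfaces_per_register ** (n_inputs + m_outputs + 1)

print(task_versions(2, 1))  # (2**16)**4 == 2**64 versions for 2 inputs, 1 output
```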

A single bit register is a service (program algebra terminology for a system component able to execute the actions of an instruction sequence) which is accessed by a calling instruction via its focus. A focus plays the role of the name of a service, and at the same time it is informative about the role of the service. Below we will mainly consider the following foci: and for . The inputs for a computation are placed in the registers (so-called input registers), the outputs of a computation are found in the registers (initialised output registers). At the end of a computation the final value of the input registers is forgotten. The focus prefixes and are referred to as register roles. Other register roles exist, for instance for output registers which have initial value , for a register which serves both as an input and as an output, for an auxiliary register with initial value , and for an auxiliary register with initial value .

1.4 Quantitative expressiveness versus qualitative expressiveness

Expressiveness of a formalism for denoting instruction sequences may be measured in many ways. We will mainly consider the following idea: given a task, computing a total or partial function of type , we are interested in the shortest instruction sequence(s), taken from some class of instruction sequences, that is, instruction sequences with a minimal number of instructions, which compute . Clearly, if are two classes of instruction sequences, then may be considered more expressive (more expressive w.r.t. LLOC) than if for some task all instruction sequences in that compute are longer than the minimal LLOC for an instruction sequence in that implements task .

Definition 1.3.

Let be classes of instruction sequences. is more expressive (more expressive w.r.t. LLOC) than if for some task there is an instruction sequence which computes while there is no instruction sequence which also computes such that .

In some cases the smaller class of instruction sequences does not provide any implementation for a task which is implementable with the larger class. Then we will speak of differentiation of qualitative expressiveness.

Definition 1.4.

Let be classes of instruction sequences. is qualitatively more expressive than if for some task there is an instruction sequence which computes while there is no instruction sequence which also computes .

1.5 Rationale of designing additional forms of instructions

Below several types of instructions outside the core syntax of PGA will be discussed: instructions for structured programming, backward jumps, indirect jumps, and generalised semi-colon instructions. These constitute merely a fraction of the options for extension of the syntax of instruction sequences that have been explored in recent years.

We will assume that the rationale of the introduction of additional kinds of instructions is to achieve one or more of four potential advantages, upon making use of the “new” instructions:

Fewer instructions.

Some tasks may be implemented with a shorter instruction sequence, that is with fewer instructions. (This criterion when applied in practice amounts to the optimisation of program size or achieving good code compactness.)

Fewer steps.

A given task may be implemented by an instruction sequence which produces faster runs, i.e. fewer steps are taken till termination, either in the worst case or in average or according to some other efficiency criterion.

Fewer mistakes.

Correct or “high quality” instruction sequences can be produced either more quickly, or in a more readable form, or in such a manner that some given form of analysis or verification is more easily applied, or can be applied with a higher rate of success.

Fewer compiler optimisations.

A given task may be implemented by an instruction sequence which allows the production of an efficient compiled version with fewer optimisation steps.

Below we will focus exclusively on the first two advantages. Undoubtedly the third advantage may become harder to achieve when optimising either code compactness or execution speed or both.

1.6 Generalised semi-colon and a non-expanding LLOC metric

We will make use of generalised semi-colon notation: . In order to apply the LLOC metric the generalised semi-colons must be expanded first.

An alternative presentational metric , called the generalised semi-colon non-expanding LLOC metric, works as follows: (i) for not containing any occurrence of the generalised sequential composition construct: , (ii) , and (iii) . The idea is to count “” as well as the corresponding closing bracket “” as if these were instructions, and to add a logarithmic increment accounting for the size of .
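One plausible reading of this metric can be sketched in code. The term representation, the choice of base-2 logarithm, and the exact clause for the increment are assumptions of this sketch, not the paper's official definition.

```python
import math

# A term is a plain list of instructions, or ("rep", n, subterm) standing for
# n text-sequential copies of subterm written with a generalised semi-colon.
def lloc_star(term) -> int:
    if isinstance(term, tuple) and term[0] == "rep":
        _, n, sub = term
        # the opening and closing brackets each count as one instruction,
        # plus a logarithmic increment accounting for the size of n
        return 2 + math.ceil(math.log2(n)) + lloc_star(sub)
    return len(term)

print(lloc_star(("rep", 8, ["a.set:1", "#2"])))  # 2 + 3 + 2 = 7
```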

When writing an instruction sequence the use of generalised semicolon notation may improve readability. It may also be easier to write a compiler for instruction sequence expressions involving generalised semi-colons than for expanded versions thereof.

1.7 Terminology and notation for roles

The strings , , , and serve as role headers which prefix the role base, whereas and are role postfixes which may be appended to the role base. A role comes about, given a role base, by prefixing the role base with a header and, in case the header is either or , postfixing the result with a postfix.

For single bit registers the preferred role base is the empty string and the respective roles are . Corresponding foci include a further number so that different copies of services for the role at hand can be distinguished. Examples of foci for the various roles for single bit registers are e.g. . Below we will introduce instructions with role base for 1-dimensional single bit arrays, and we will use additional role bases and (with foci e.g. ) in order to enhance readability.

2 Expressiveness of single pass instruction sequences

All functions from bit vectors of length to bit vectors of length can be computed without the use of backward jumps, that is, without the use of any form of iteration or looping. Proposition 2.2 was shown for in [7]; the extension to is straightforward. The following function will be used.

Definition 2.1.

is given by , .

Proposition 2.1.



Induction on . The case is immediate. Step: . ∎

Proposition 2.2.

Let , with . For each total there is a finite PGA instruction sequence (i.e. single pass instruction sequence, or instruction sequence without iteration) with basic instructions of the form with focus and method which computes . Moreover the ’s can be chosen such that .


We will use induction on . If , produces a sequence of constants which is computed by . We notice that .

Now consider the case . We split into and such that for all , and . Using the induction hypothesis one may find and , with instructions each, which compute and respectively. Now the instruction sequence computes , and . ∎

The design of has many alternatives. For instance setting works as well, though its basic actions are less amenable to reading input from the input registers, as the value of an input register is set to 0 at the first method call to it.
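The divide-and-conquer construction in the proof of Proposition 2.2 can be sketched as follows. The textual instruction encoding (tests `+in:i.get`, forward jumps `#k` with `#1` meaning the next instruction) is an illustrative convention, not the paper's concrete syntax, and the closed-form LLOC in the final comment is derived for this sketch only.

```python
def build(f, n, m, i=1, fixed=()):
    """Single-pass instruction sequence (as a list of instructions) computing
    a total f: {0,1}^n -> {0,1}^m from registers in:1..in:n into out:1..out:m,
    splitting f on its first remaining argument as in the proof sketch."""
    if i > n:  # all inputs fixed: emit the m constant outputs, then terminate
        return [f"out:{j + 1}.set:{b}" for j, b in enumerate(f(fixed))] + ["!"]
    p0 = build(f, n, m, i + 1, fixed + (0,))
    p1 = build(f, n, m, i + 1, fixed + (1,))
    # positive test on in:i: reply 1 executes the jump (landing at p1),
    # reply 0 skips the jump and enters p0 directly
    return [f"+in:{i}.get", f"#{len(p0) + 1}"] + p0 + p1

xor = build(lambda xs: (xs[0] ^ xs[1],), 2, 1)
print(len(xor))  # -> 14, i.e. 2^n * (m + 3) - 2 for n = 2, m = 1
```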

2.1 Computational metrics: NOS

With (number of steps) we indicate the number of instructions that are processed during the (unique) run of the instruction sequence on service family . If divergence occurs, i.e. a jump with counter 0 or a jump outside the range of instructions, takes the value . Similarly, if an error occurs, i.e. a method call outside the interface provided by , takes the value . Interfaces are discussed in detail in Section 5.

Some examples of :
, and .

The instruction sequence constructed in the proof of Proposition 2.2 computes in such a manner that each result is found in precisely steps. This is the average as well as the worst case for inputs and outputs. This figure for is fairly low as, apart from producing outputs, it spends just one instruction on average to process each bit of input after it has been read.
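The NOS metric can be made concrete with a small interpreter. The instruction syntax here is an illustrative encoding (not the paper's concrete syntax), the reply of a `set` instruction is fixed to true by assumption, and `None` models the value infinity assigned to divergent or erroneous runs.

```python
def run(instrs, regs):
    """Execute a finite PGA-style instruction sequence on a family of single
    bit registers; return (final registers, NOS). NOS is the number of
    instructions processed; None models infinity (divergence via #0 or a
    jump out of range, or a method call outside the provided interface).
    Syntax: '+f.get'/'-f.get' tests, 'f.set:b' writes, '#k' forward jump
    (with #1 the next instruction), '!' termination."""
    pc, steps = 0, 0
    while 0 <= pc < len(instrs):
        ins = instrs[pc]
        steps += 1
        if ins == "!":
            return regs, steps
        if ins.startswith("#"):
            k = int(ins[1:])
            if k == 0:
                return regs, None          # jump with counter 0: divergence
            pc += k
            continue
        polarity = ins[0] if ins[0] in "+-" else None
        focus, method = (ins[1:] if polarity else ins).split(".")
        if focus not in regs:
            return regs, None              # method call outside the interface
        if method == "get":
            reply = regs[focus] == 1
        else:                               # 'set:b'; reply assumed true
            regs[focus] = int(method.split(":")[1])
            reply = True
        if polarity == "+":
            pc += 1 if reply else 2        # positive test: false skips one
        elif polarity == "-":
            pc += 2 if reply else 1        # negative test: true skips one
        else:
            pc += 1
    return regs, None                      # ran past the last instruction

# copy in:1 to out:1
copy = ["+in:1.get", "#3", "out:1.set:0", "!", "out:1.set:1", "!"]
print(run(copy, {"in:1": 1, "out:1": 0}))  # NOS 4 on input 1
print(run(copy, {"in:1": 0, "out:1": 0}))  # NOS 3 on input 0
```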

2.2 Tradeoff between LLOC and NOS: an open question

The implication of this observation is that it is easy to write an instruction sequence which produces fast computations, i.e. a low worst case for relevant , while it may be hard to ensure in addition that is kept reasonably small. If entails a combinatorial explosion, then so does the activity of designing and constructing .

In other words: given a task (with inputs and outputs, both fixed numbers) the programming problem to write an instruction sequence implementing the task primarily constitutes a challenge to find an implementation with low LLOC, a state of affairs which brings LLOC to prominence. It is unclear to what extent minimising LLOC stands in the way of obtaining a low worst case or average NOS in practice, that is, for meaningful tasks . The following question, for which we have no answer, constitutes one of many ways to formalise this matter.

Problem 2.1.

Is there a family of functions for which LLOC minimal implementing instruction sequences (admitting auxiliary registers) have superpolynomial worst case NOS performance?

However, as minimising is in most cases an infeasible challenge, it is reasonable to look for a combined metric. We are unaware of a plausible candidate for a combined metric, however, which leads us to state the following conceptual question.

Problem 2.2.

Find a plausible metric for instruction sequences (which measures the success of a design) and combines and by capturing a useful tradeoff between these.

In [11] we have presented various designs of single pass instruction sequences for multiplication of natural numbers in binary notation. As it stands we have no systematic method to assess the success of these designs in quantitative terms. The processing speed (low worst case NOS) which is achieved by way of a divide and conquer approach is relevant only if the cost in terms of LLOC is not too high, and we have no obvious way to assess that matter.

A way out of this matter is to insist that implementing a family of functionalities by means of single pass instruction sequences must be done under the additional requirement that . That requirement, however, rules out the instruction sequences found in [11].

2.3 Backward jumps and LLOC, an open problem

One may incorporate iteration by allowing backward jump instructions (written ). PGLB is the instruction sequence notation which admits the instructions from PGLA (without iteration) as well as backward jump instructions. Thus PGLB instructions are with a basic action, i.e. an action of the form with a focus and a method.

Proposition 2.3.

There is a computable translation which transforms PGLB instruction sequences for single bit registers into finite PGA instruction sequences working on the same single bit registers in such a manner that for each , computes the same function as on the single bit registers which it makes use of.


Given a PGLB instruction sequence working on inputs, all input vectors are presented to and the results are computed and collected in an appropriate finite data structure. Now the proof of Proposition 2.2 is understood as the description of an algorithm by means of which the required instruction sequence is created. ∎

Proposition 2.4.

If there exists a translation which transforms each PGLB instruction sequence for single bit registers with low register indices into a finite PGA instruction sequence with low register indices working on the same single bit registers, perhaps making use of additional auxiliary registers, in such a manner that (i) for each , computes the same function as , and (ii) for each , is bounded by a fixed polynomial in LLOC, then NP P/Poly, and in fact NP


The connection between instruction sequences and complexity theory with advice functions has been explored in detail in [7]. The idea is that one may understand the instruction sequence itself as an advice function. The proof is an elementary application of the results in the mentioned paper. As a bit sequence the instruction sequence is of polynomial length in its number of instructions. The mechanism to compute the result of the execution of a single pass instruction sequence with low register indices on given inputs is with the of . ∎

It follows from these observations that it is implausible that for each PGLB instruction sequence a finite PGA instruction sequence of equal LLOC size can be found which computes the same function.

Upon taking into account the presence of more powerful services it is easily possible to demonstrate that backward jumps allow one to write shorter programs for certain tasks. This idea is pursued in detail for certain services that represent an array of bits, i.e. indirect addressing of single bit registers.

2.4 The simplest array: using a single bit as an index

The simplest Boolean array has two single bit registers. Its role base is (for 1 dimensional array), role headers are . The method interface is as follows: (i) methods for apply to the index bit , and (ii) methods for can be used and will apply to the register indexed by the current value of .

The initial value of both registers, when of relevance, is given by the role postfix. The access bit is in fact also a part of this service kernel, which has 8 states for that reason.

For instance is the focus for the th output service of this kind for which it is required that the registers are initially set to . (By consequence is a different focus.)

Copying say to say can be done as follows with :

A lower bound on for array copying in dimension 1, for a single pass instruction sequence, however, is : an access method for each of the two arrays must appear twice (4), a method application to the index bits of both arrays is necessary (2), and at least one termination instruction (1) is required. We find:

Proposition 2.5.

In the presence of 1D single bit addressed single bit arrays the use of backward jumps increases the expressive power of the instruction sequence notation.

The following question is open:

Problem 2.3.

Is it the case that the introduction of backward jumps in addition to PGA instructions renders the instruction sequence notation more expressive, in the sense of allowing some functions to be computed with smaller LLOC size, for the purpose of computing Boolean functions?

The stated question is not very specific about the precise syntax that allows for repetition. As alternatives to backward relative jumps one might consider: absolute jumps (see [1]), goto's with label instructions (see [1]), and indirect jumps (see [2]).

We expect that multiplication of two -bit natural numbers, thereby producing output values, constitutes a task for which the availability of backward jumps provides a provable advantage in terms of the minimisation of the LLOC metric. This phenomenon may well appear already for fairly low , say or below.

2.5 Unfolding an instruction sequence with backward jumps

Given an instruction sequence in PGLB notation, i.e. with backward jumps, one obtains which computes the same function as follows. Let . In obtain by replacing each jump by as follows:

  • if and then ,

  • if and then ,

  • if and then ,

  • if and then .

Then take . Assuming that works on a finite domain, for some , ( consecutive copies of ) is a finite PGA instruction sequence which computes the same function as . Moreover the computations take precisely as many steps as for . In Section LABEL:Cmetrics below we will discuss a computational metric for which and are equivalent, assuming that is taken sufficiently large. We notice: .
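The unfolding described above can be sketched as follows. The encoding of backward jumps as `\#k`, the forward-jump convention `#k` (with `#1` the next instruction), and the jump arithmetic are assumptions of this sketch rather than the paper's concrete definitions.

```python
def unfold(instrs, copies):
    """Concatenate `copies` copies of a PGLB body, replacing each backward
    jump '\\#k' by the forward jump '#(len(instrs) - k)' that reaches the
    same instruction in the next copy (a sketch of the unfolding above;
    assumes every backward jump satisfies 0 < k <= len(instrs))."""
    n = len(instrs)
    body = [f"#{n - int(i[2:])}" if i.startswith("\\#") else i
            for i in instrs]
    return body * copies

# a backward jump of 2 in a 3-instruction body becomes a forward jump of 1,
# landing on the same instruction within the next copy
print(unfold(["a", "\\#2", "b"], 2))
```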

3 Proper subclasses of single pass instruction sequences

In this section we consider two restrictions on the design of instruction sequences in relation to expressiveness. The first restriction is that no register is acted upon more than once. The second restriction imposes an upper bound on the size of jumps. In Paragraph 5.4 below we will consider a third proper subclass of instruction sequences, obtained by disallowing intermediate termination.

3.1 Single visit single pass instruction sequences

A useful subclass of finite PGA instruction sequences consists of those instruction sequence which contain at most one method call for each register. We will refer to these instruction sequences as single visit instruction sequences.

The single visit restriction comes with consequences for qualitative expressiveness. Consider the instruction sequence with two inputs and one output. computes the function . As it turns out, imposing the requirement that single pass instruction sequences are also single visit instruction sequences reduces the qualitative expressiveness of the system.

Proposition 3.1.

The function as mentioned above cannot be computed by a single visit single pass PGA instruction sequence.


For single visit instruction sequences the use of auxiliary registers is not relevant, as the first and last method call to such a register, if any call is made, will only return the known initial value of an auxiliary register. Assume that (i) is a single visit single pass PGA instruction sequence which has the required functionality, (ii) contains at most one call to each of the three single bit registers involved, and (iii) the first method call to a register in is for . Now notice that after reading , for both replies the intended output still depends on the content of . Thus in both cases at some stage (i.e. after or more jumps) some test instruction takes input from . As there is only a single test instruction for in , it follows that irrespective of the outcome of the test on , the same test on is performed. As a consequence the result of the computation of cannot depend on the initial content of , which is wrong. So one may assume that the first call is to register . Now the output still depends on the value of and therefore in both cases the unique call to is reached, and the output must be determined after processing that instruction, so that it will not depend on the value read from . ∎

3.2 Single pass instruction sequences with bounded jumps

Another plausible restriction on single pass PGA instruction sequences results from imposing an upper bound on the size of jumps. At the moment of writing we have no answer concerning the following question.

Proposition 3.2.

Each Boolean function with finite range and domain can be computed by a single pass PGA instruction sequence that involves jumps of size at most 2.


We consider a function taking its arguments from registers and producing results and in registers and . The construction is done in such a manner that it generalises to all cases.

Let be an enumeration of the arguments of . We write . An instruction sequence computing is found as follows:

with . ∎

Following [3], an instruction sequence with jumps of size only can be transformed into an equivalent instruction sequence without jumps. Thus the use of jumps of size does not increase expressiveness. Moreover, in the presence of auxiliary registers jumps can be avoided altogether.

Proposition 3.3.

With the use of arbitrarily many auxiliary registers (say ) each function on single bit registers can be computed by a single pass PGA instruction sequence without jumps.


Using Proposition 2.2, given , some single pass PGA instruction sequence over the registers used by may be chosen such that computes . Using the main result of [3], with the help of sufficiently many auxiliary registers a single pass instruction sequence is found such that (after abstraction from internal steps). It follows that , which implies that computes . ∎

Although large jumps are not required for computing any Boolean function, it may still be the case that imposing a restriction to small jumps leads to the need for longer instruction sequences, or implies the need for the use of more auxiliary registers.

Example 3.1.

Consider the function with one input and outputs , . returns with each set to , while returns with set to . is computed by:

but in this case the jump instruction can be avoided, thereby achieving , as follows:

None of the instructions of can be avoided in any instruction sequence able to compute . It follows that is demonstrably a shortest instruction sequence able to compute .

Example 3.2.

Now the example is modified by having additional inputs which govern whether or not the outputs are to be set to 1. Moreover these additional inputs serve also as outputs and are complemented with each call. For focus naming, non-empty role bases (see Paragraph 1.7) and are used, and the function is computed by with :

It is easy to see that no instruction sequence with fewer than instructions can compute . From Proposition 3.3 we know that can be computed by an instruction sequence without jumps with the use of auxiliary registers, and from Proposition 3.2 we know that it can be computed by means of an instruction sequence involving jumps of length or less. The latter instruction sequence may be quite long, however. By admitting jumps of size , LLOC can be achieved:

It is plausible that for increasing the shortest single pass PGA instruction sequences for computing must involve increasingly large jumps, as does . Proving that to be the case is another matter, however. We will provide a partial result on that matter in Proposition 5.3 below, making use of interfaces in order to restrict the scope of the assertion and thereby to allow for its proof.

4 Interfaces

The setting of instruction sequences acting on services presents an incentive for the introduction and application of various forms of interfaces. Interfaces may be classified and qualified in different ways. To begin with we distinguish required interfaces and provided interfaces. A thread or an instruction sequence comes with a required interface, whereas services and service families come with a provided interface.

Mathematically speaking, required interfaces and provided interfaces are the same, though when working with interface groups required interfaces and provided interfaces may be thought of as inverses w.r.t. composition.

If a component is placed in a context, it is plausible to assume that the component comes with a required interface and that the context has a provided interface, and to require for a good fit that the component’s required interface is a subinterface of the context’s provided interface. The roles of component and context are not set in stone: if an instruction sequence computes over a service family , the thread is placed in a context made up of (denoted ), whereas if the instruction sequence uses the service family by way of the use operator (denoted ) it is less plausible to take this view, as the use operator is not based on the assumption that provides a way of processing each request (method call) that is required (issued) by .

4.1 Service kernels and method interfaces

A method interface is a finite set of methods (i.e. method names). A service kernel is a state dependent partial function from methods to Booleans. The domain of that function is called the method interface of the service kernel, and it is denoted with . The method interface of is supposed to be independent of the state of . Applying method to produces a yield and an effect which technically is another service kernel with . It is a useful convention to write thereby making a state explicit.

Using notation with an explicit state we may write and . A service kernel with an empty method interface is called degenerate or inactive. We will equate all inactive kernels denoting these with the constant which satisfies: .

4.2 Method interface of a single bit register service kernel

There are 16 methods for single bit registers, written (yield / effect), with . These are the methods applicable to any single bit register kernel (i.e. with content ). We write for the collection of these.

Each subset of constitutes a method interface. Consequently there are method interfaces for single bit registers.
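Concretely, each of the 16 methods is determined by a yield function and an effect function on the current register content, both unary functions on {0,1}. The naming scheme `yield/effect` below is an illustrative encoding, not the paper's focus.method notation.

```python
from itertools import product

UNARY = {  # the four functions {0,1} -> {0,1}
    "0": lambda b: 0, "1": lambda b: 1,
    "id": lambda b: b, "not": lambda b: 1 - b,
}

# a method = (yield function, effect function); 4 * 4 = 16 methods in all
METHODS = {f"{y}/{e}": (UNARY[y], UNARY[e]) for y, e in product(UNARY, UNARY)}

def apply_method(name, content):
    """Reply and new register content of applying a method to a register."""
    yld, eff = METHODS[name]
    return yld(content), eff(content)

print(len(METHODS), 2 ** len(METHODS))  # 16 methods, 65536 method interfaces
print(apply_method("id/not", 1))        # read the bit, then complement it
```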

For , denotes the service kernel which admits precisely the methods of on the single bit register . It follows that for , , so that , and .

4.3 Focus kernel linking and service family composition

A service kernel may be linked to (or: prefixed by, or: positioned under, or: combined with) a focus whereby a new service is obtained. If then is a different service starting out as a copy of .

Service families are combinations of services created from the empty service family and services by way of service family composition (denoted ), which is commutative and associative, and for which the empty service family is a unit element. Service family composition is not idempotent, however, as (with as in Paragraph 4.1 above), thereby indicating that ambiguity in the service provided by a context is considered problematic, rather than being resolved in a non-deterministic manner. Indeed, if services with the same focus are combined, an ambiguity arises as to which service kernel is to process , and for this dilemma no simple solution exists. For that reason the combination is understood as an error in the algebra of service families.

When combining services f1.H1, …, fn.Hn with pairwise distinct foci, the service family f1.H1 ⊕ … ⊕ fn.Hn is obtained. If a basic action f.m is applied to a service family then two cases are distinguished: (i) f equals one of the fi, in which case the method m is applied to Hi, so that either, if m ∈ I(Hi), a reply Hi!m is obtained and the state of Hi is updated, or otherwise an error occurs, or (ii) none of the fi equals f, in which case an error occurs.

When computing the application of an instruction sequence X to a service family u (i.e. computing X • u) an error is represented by having the empty service family as the result: X • u = ∅. Evaluation of a basic action f.m over u works fine as long as there is exactly one i for which fi = f, and moreover, for that i, m ∈ I(Hi). In that case performing the basic action f.m yields the reply Hi!m while changing the state of Hi accordingly, leaving the states of the other services in the service family unmodified. In other cases the empty service family is produced.
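The application discipline above can be sketched as follows, encoding a service family as a mapping from foci to single bit register kernels. The method names get, set:0, set:1 and the reply convention for set:i are assumptions in the style of the Boolean-register literature, not the paper's exact syntax.

```python
# Yield/effect semantics for three illustrative single bit register methods;
# the reply of set:i is taken to be i (an assumption).
SEMANTICS = {
    "get":   (lambda b: b, lambda b: b),
    "set:0": (lambda b: 0, lambda b: 0),
    "set:1": (lambda b: 1, lambda b: 1),
}

def apply(family, focus, method):
    """Apply basic action focus.method to a service family (dict: focus ->
    (content, method interface)). On error, produce the empty service family."""
    if focus not in family or method not in family[focus][1]:
        return None, {}                       # error: empty service family
    content, iface = family[focus]
    yld, eff = SEMANTICS[method]
    new = dict(family)
    new[focus] = (eff(content), iface)        # only this service changes state
    return yld(content), new

u = {"in:1": (1, {"get"}), "out": (0, {"set:0", "set:1"})}
reply, u2 = apply(u, "in:1", "get")
assert reply == 1 and u2["in:1"][0] == 1
assert apply(u, "out", "get") == (None, {})   # get is not provided by out
```

Note that a dict cannot even represent two services with the same focus; the composition error f.H ⊕ f.H′ must be caught separately when families are combined.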

In the case of single bit services the inactive service kernel is denoted with a register containing an error value E, i.e. SBR(E) = δ, rather than with δ itself. Applying any method to a register containing E is considered a run time error, the handling of which in principle depends on the context. In the setting of this paper an error leads to the production of the empty service family.

4.4 Service family restriction

Let F be a set of foci and u a service family. Then ∂F(u), the F-restriction of u, results from u by removing (i.e. replacing by ∅) each service f.H in u with f ∉ F. For the special case that F = {f} we find that each ∂F(u) can be written in one of two forms: ∅ or f.H. Service family restriction satisfies some useful equations: ∂F(∅) = ∅, ∂F(f.H) = f.H if f ∈ F, ∂F(f.H) = ∅ if f ∉ F, and ∂F(u ⊕ v) = ∂F(u) ⊕ ∂F(v).
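With a dict encoding of service families, F-restriction is a one-line filter, and the equations listed above can be checked directly. This is a sketch; the foci and kernels are illustrative placeholders.

```python
# F-restriction of a service family: keep exactly the services whose focus
# lies in F (family encoded as a dict from foci to kernels).
def restrict(F, u):
    return {f: s for f, s in u.items() if f in F}

u = {"in:1": "H1", "out": "H2"}
assert restrict(set(), u) == {}                  # restricting to no foci gives {}
assert restrict({"out"}, u) == {"out": "H2"}     # a singleton restriction
# restriction distributes over composition of families with disjoint foci:
v = {"aux": "H3"}
F = {"out", "aux"}
assert restrict(F, {**u, **v}) == {**restrict(F, u), **restrict(F, v)}
```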

4.5 Basic action interfaces

A basic action (name) is a pair f.m with f a focus and m a method (name). A basic action interface is a finite collection of pairs f.I where f is a focus and I is a method interface. The notation is simplified by writing f.I for the basic action interface {f.m | m ∈ I}, and by writing ⊕ also for the union of basic action interfaces. Both instruction sequences and service families come with a basic action interface. We write I(u) for the basic action interface of a service family u and I(X) for the basic action interface of an instruction sequence X.

For an instruction sequence X the interface I(X) collects all focus.method pairs f.m that occur in instructions of X. I(X) is a required interface, as it collects the requests (method calls) which an environment is supposed to respond to. Defining equations for I(X) are: I(f.m) = I(+f.m) = I(−f.m) = {f.m}, I(#k) = I(!) = ∅, and I(X ; Y) = I(X) ⊕ I(Y), in combination with I(X^ω) = I(X).

For a service family u, I(u) collects the method calls to which u is able to respond. For service families the interface definition is less straightforward than for instruction sequences: I(∅) = ∅, I(f.H) = f.I(H), and I(u ⊕ v) = I(u) ⊕ I(v) whenever u and v have no foci in common. From these equations it follows that I(f.H ⊕ f.H′) = f.I(δ) = ∅, and therefore distribution of I over ⊕, i.e. I(u ⊕ v) = I(u) ⊕ I(v), fails if u and v share a focus and if the kernels linked to that focus have nonempty method interfaces.
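Both interface notions can be computed mechanically. The sketch below assumes a textual PGA-like instruction encoding (plain f.m, tests +f.m and -f.m, jumps #k, termination !) and a dict encoding of service families; these encodings are assumptions for illustration.

```python
def iface_of_instruction_sequence(X):
    """Required interface: all focus.method pairs occurring in instructions of X."""
    out = set()
    for u in X:
        if u and u[0] in "+-":
            u = u[1:]                # strip the test marker
        if u.startswith("#") or u == "!":
            continue                 # jumps and termination contribute nothing
        out.add(u)                   # a basic action "focus.method"
    return out

def iface_of_family(u):
    """Provided interface: all focus.method pairs the family can respond to
    (family encoded as dict: focus -> (content, method interface))."""
    return {f + "." + m for f, (content, iface) in u.items() for m in iface}

X = ["+in:1.get", "#2", "out.set:1", "!"]
assert iface_of_instruction_sequence(X) == {"in:1.get", "out.set:1"}
u = {"in:1": (0, {"get"}), "out": (0, {"set:0", "set:1"})}
# the required interface of X is provided by u:
assert iface_of_instruction_sequence(X) <= iface_of_family(u)
```

The subset check in the last line is exactly the constraint discussed in Section 5: the required interface must not exceed the provided one.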

5 Interfaces as constraints on instruction sequences

Given a basic action interface I, the collection of PGA instruction sequences X for acting on single bit registers such that I(X) ⊆ I is denoted PGA(I). Membership of PGA(I) for an appropriate basic action interface I is a useful constraint on an instruction sequence. We will provide several examples of such constraints in the following Paragraphs of this Section.

Interfaces are partially ordered by inclusion (I ⊆ J: I is a subinterface of J, I is contained in J, J includes I). An interface I may serve as a constraint on an instruction sequence X, in particular through the requirement that the required interface of the instruction sequence is not too large: I(X) ⊆ I.

At the same time a basic action interface I may serve as a constraint on a service family u on which X is supposed to operate: I ⊆ I(u), that is, the requirement that the provided interface of u is not too small. We will provide four examples of the use of interfaces in the following Paragraphs.

5.1 Alternative initialisation of output registers

An obvious extension of the instruction set outlined in Paragraph 1.3 above is to allow instruction sequences to make use of registers which have 1 as the initial content. Allowing 1-initialised output registers extends the class of instruction sequences in such a manner that there is a gain of expressiveness.

To see this improvement, consider the function with a single input which takes the constant value 1. Working with an interface that provides a 1-initialised output register, the mere termination instruction ! constitutes an instruction sequence that computes this function with LLOC 1. Alternatively, if an instruction sequence over a 0-initialised output register is sought, a longer instruction sequence (LLOC 2), such as out.set:1 ; !, is required.
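The LLOC 1 versus LLOC 2 comparison can be replayed with a toy interpreter. The instruction encoding and the method name set:1 are assumptions for illustration, not the paper's exact syntax.

```python
def run(program, out_init):
    """Run a single-pass instruction sequence over one output register "out";
    return the final content of out, or None on improper termination."""
    out = out_init
    for instr in program:
        if instr == ("!",):
            return out                       # termination instruction
        if instr == ("b", "out.set:1"):      # write 1 to out
            out = 1
    return None                              # fell off the end

# With a 1-initialised output register, "!" alone (LLOC 1) computes the
# constant-1 function:
assert run([("!",)], out_init=1) == 1
# With a 0-initialised output register, LLOC 2 is needed, e.g. out.set:1 ; !
assert run([("b", "out.set:1"), ("!",)], out_init=0) == 1
```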

5.2 Bit complementation

We will consider the function C : {0,1} → {0,1} given by C(b) = 1 − b. C represents complementation (negation).

Below, seven instruction sequences are listed, each of which computes complementation. By imposing restrictions on the basic action interface serving as a constraint, the differences between these options for implementing complementation of a single bit can be made explicit.

One role stands for a register which serves as an output only, as it won't be read, but which may have initial value 0 or 1. Thus a single bit register with such a focus, say out, may have arbitrary initialisation.

  • Both input and output reside in the same register. The instruction sequence computes complementation and is a shortest possible program, because at least one basic action needs to be applied to the input and a termination instruction must be included.

  • In this case the output is placed in a different register, serving as an output register only. The instruction sequence computes complementation. Moreover, a shorter implementation cannot be found: the input needs to be read, some writing of the output is unavoidable, and a termination instruction is needed as well.

  • The instruction sequence computes complementation.

    A shorter instruction sequence for computing complementation under the constraints of this basic action interface does not exist. An input instruction is necessary, and both values must be written by some output instruction, because both outputs can arise while the initial content of the output register is not known in advance.

  • Now the instruction sequence is in the class determined by this interface and computes complementation, and it is easy to see that it constitutes a shortest possible program in that class for this task.

  • Now the instruction sequence is in the class determined by this interface, computes complementation, and as such has minimal LLOC for that task.

  • A shortest implementation of complementation under these constraints can be determined as well.

  • Complementation is computed by an instruction sequence whose minimality is addressed in Proposition 5.1.

Proposition 5.1.

The instruction sequence of the preceding item minimises LLOC as a single pass instruction sequence computing complementation under the given interface constraint.


Proof. Suppose, towards a contradiction, that there is an implementation of complementation which has 4 instructions, say X = u1 ; u2 ; u3 ; u4. We may assume that u4 is a termination instruction, because otherwise u4 cannot be performed unless a faulty termination takes place, with the effect that X may be simplified to three or even fewer instructions while still computing complementation. That is impossible, because at least one read instruction on the input register and two different write instructions on the output register (one for output 0 and one for output 1) must appear in X. This observation also implies that the LLOC of any implementation is at least 4. So LLOC(X) = 4. If u4 were an input instruction, the output of X would be independent of the input, which is not the case.

Thus u4 is a termination instruction. Now a case distinction on u1 reveals that the jump u1 = #2 fails, because starting with it the second instruction is skipped and no input action is performed. Similarly u1 = #3 fails, because starting with it the second instruction is skipped and no input action will be performed. If u1 is a write instruction then the collection of results for input 0 and for input 1 is left unchanged when deleting u1, so that a shorter instruction sequence also implements complementation, which has been ruled out already.

Thus u1 is an input instruction. It must be a test instruction, because otherwise the output would not depend on the input. Let u1 be a positive test; the symmetric case of a negative test can be dealt with similarly. Upon input the computation of X proceeds with u2 for one input value and with u3 for the other. Consider the case with an appropriate initial value of the output register: then, for each option for the remaining instructions, the resulting value of the output register is that initial value instead of the required output. In all cases a contradiction has been derived, thus contradicting the initial assumption that X, with LLOC equal to 4, computes complementation. ∎

The following fact admits an easy but tedious proof, the details of which are left aside.

Proposition 5.2.

For each basic action interface I the following holds: if complementation can be computed by a finite single pass instruction sequence in the class determined by I, then it can be computed by such an instruction sequence with LLOC at most 5.
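Consistent with the LLOC bound of Proposition 5.2, a five-instruction single pass sequence computing complementation into an output register of unknown initial content can be checked mechanically. The interpreter below assumes standard PGA test-and-skip semantics and illustrative method names (get, set:0, set:1); it is a sketch, not the paper's construction.

```python
def run(program, in_val, out_init):
    """Run a single-pass instruction sequence; return the final content of the
    output register on proper termination, None otherwise."""
    out, pc = out_init, 0
    while 0 <= pc < len(program):
        op = program[pc]
        if op[0] == "!":
            return out                       # proper termination
        if op[0] == "#":
            if op[1] == 0:
                return None                  # #0 amounts to divergence
            pc += op[1]
            continue
        # perform the basic action, collecting its reply
        action = op[1]
        if action == "in.get":
            reply = in_val
        elif action == "out.set:0":
            out, reply = 0, 0                # reply convention for set:i assumed
        else:                                # "out.set:1"
            out, reply = 1, 1
        if (op[0] == "+" and reply == 0) or (op[0] == "-" and reply == 1):
            pc += 2                          # failed test: skip the next instruction
        else:
            pc += 1
    return None                              # ran off the end: improper termination

# out.set:1 ; -in.get ; ! ; out.set:0 ; !   (LLOC 5)
X = [("b", "out.set:1"), ("-", "in.get"), ("!",),
     ("b", "out.set:0"), ("!",)]
for b in (0, 1):
    for init in (0, 1):                      # arbitrary initialisation of out
        assert run(X, b, init) == 1 - b      # X computes complementation
```

The loop verifies all four input/initialisation combinations, i.e. correctness irrespective of the unknown initial content of the output register.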

5.3 Parity checking

The second example of the use of interfaces as constraints concerns the role of auxiliary registers in single pass instruction sequences for computing multivariate functions on Booleans. We will survey the results of [8] while reformulating these in terms of interfaces.

Let n ≥ 1. The function PAR_n on bit sequences of length n is given by: PAR_n(b_1, …, b_n) = b_1 + … + b_n (mod 2). PAR_n determines the parity of a sequence of n bits. We are interested in instruction sequences for computing PAR_n from inputs stored in input registers with foci in:1, …, in:n.
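The parity function itself is a plain XOR fold, which the following sketch checks exhaustively for small numbers of inputs.

```python
from functools import reduce
from itertools import product

def par(bits):
    """PAR_n: the XOR (sum modulo 2) of a sequence of bits."""
    return reduce(lambda x, y: x ^ y, bits, 0)

assert par([1, 0, 1]) == 0 and par([1, 0, 0]) == 1
# PAR_n is 1 exactly when the number of 1-bits is odd:
for n in (1, 2, 3, 4):
    for bits in product((0, 1), repeat=n):
        assert par(bits) == sum(bits) % 2
```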

From [8] we take an instruction sequence which computes parity for a given number of bits:

and for :

Formalisation of the fact that these instruction sequences perform parity checking looks as follows in the notation of [4] and [11].

For all and for all bit sequences :

For we find that .

Next consider the interface and the instruction sequences with .

and for : .

In [8] it is shown that this instruction sequence computes parity on its inputs. In formal notation this reads:

For . Moreover it was shown in [8] that from