1 Introduction
For each n ∈ N, each function from B^n to B can be computed by a finite instruction sequence that contains only instructions to set and get the content of Boolean registers, forward jump instructions, and a termination instruction. It is an intuitively evident fact that the correctness of an arbitrary instruction sequence of this kind as an implementation of the restriction to B^n of a given function from B^+ to B, for n ∈ N, cannot be efficiently determined. In this paper, we investigate under what restrictions on the arbitrary instruction sequence the correctness can be efficiently determined in the case that the given function is the function that models the nonzeroness test on natural numbers with respect to their binary representations. To our knowledge, there are no previous investigations of this kind.
One of the main results of this work (Theorem 6.3) states, roughly, that the problem of determining the correctness of an arbitrary instruction sequence as an implementation of the restriction to B^n of the function from B^+ to B that models the nonzeroness test function, for n ∈ N, is coNP-complete, even under the restriction on the arbitrary instruction sequence that its length depends linearly on n. Another of the main results of this work (Theorem 6.4) states, roughly, that this problem can be decided in time polynomial in n under the restriction on the arbitrary instruction sequence that its length is the length of the shortest possible correct implementations plus a constant amount and that it has a certain form. We expect that similar results can be established for many other functions, but possibly at considerable effort.
The question to what extent it can be efficiently determined whether an arbitrary program correctly solves a given problem is of importance to programming. We have chosen to investigate this question, but, to our knowledge, there exists no literature about it. This made us decide to start our investigation with programs of a very simple form, namely instruction sequences, and a very simple problem, namely the nonzeroness problem. Moreover, we decided to conduct our investigation as an application of program algebra, the algebraic theory of instruction sequences that we have developed (see below).
Instruction sequences are programs in their simplest form. Therefore, it is to be expected that it is somehow easier to understand the concept of an instruction sequence than to understand the concept of a program. The first objective of our work on instruction sequences that started with [2], and of which an enumeration is available at [9], is to understand the concept of a program. The basis of all this work is an algebraic theory of instruction sequences, called program algebra, and an algebraic theory of mathematical objects that represent in a direct way the behaviours produced by instruction sequences under execution, called basic thread algebra.^1 The body of theory developed through this work is such that its use as a conceptual preparation for programming is practically feasible.

^1 Both program algebra and basic thread algebra were first introduced in [2], but in that paper the latter was introduced under the name basic polarized process algebra.
The notion of an instruction sequence appears in the work concerned as a mathematical abstraction for which the rationale is based on the objective mentioned above. In this capacity, instruction sequences constitute a primary field of investigation in programming comparable to propositions in logic and rational numbers in arithmetic. The structure of the mathematical abstraction at issue has been determined in advance with the hope of applying it in diverse circumstances where in each case the fit may be less than perfect. Until now, this work has, among other things, yielded an approach to computational complexity where program size is used as complexity measure, a contribution to the conceptual analysis of the notion of an algorithm, and new insights into such diverse issues as the halting problem, garbage collection, program parallelization for the purpose of explicit multithreading and virus detection.
Like in the work on computational complexity (see [3, 5]) and the work on algorithmic equivalence of programs (see [4]) referred to above, in the work presented in this paper, use is made of the fact that, for each n ∈ N, each function from B^n to B can be computed by a finite instruction sequence that contains only instructions to set and get the content of Boolean registers, forward jump instructions, and a termination instruction. Program algebra is parameterized by a set of uninterpreted basic instructions. In applications of program algebra, this set is instantiated by a set of interpreted basic instructions. In a considerable part of our work on instruction sequences that started with [2], the interpreted basic instructions are instructions to set and get the content of Boolean registers.
This paper is organized as follows. First, we survey program algebra and the particular fragment and instantiation of it that is used in this paper (Section 2). Next, we present a simple nonzeroness test instruction sequence (Section 3). After that, as a preparation for establishing the main results, we first present a nonzeroness test instruction sequence whose length is minimal (Section 4) and then introduce the set of all nonzeroness test instruction sequences of minimal length (Section 5). Following this, we study the time complexity of several restrictions of the problem of deciding whether an arbitrary instruction sequence correctly implements the restriction to B^n of the function from B^+ to B that models the nonzeroness test function, for n > 0 (Section 6). Finally, we make some concluding remarks (Section 7).
As mentioned earlier, to our knowledge, there is no previous work that addresses a question similar to the question to what extent it can be efficiently determined whether an arbitrary program correctly solves a given problem. For this reason, there is no mention of related work in this paper.
The following should be mentioned in advance. The set B of Boolean values is a set with two elements whose intended interpretations are the truth values false and true. As is common practice, we represent the elements of B by the bits 0 and 1, and we identify the elements of B with their representations where appropriate.
This paper draws somewhat from the preliminaries of earlier papers that built on program algebra. The most recent one of those papers is [7].
2 Preliminaries: instruction sequences and computation
In this section, we present a brief outline of PGA (ProGram Algebra) and the particular fragment and instantiation of it that is used in this paper. A mathematically precise treatment of this particular case can be found in [3].
The starting-point of PGA is the simple and appealing perception of a sequential program as a single-pass instruction sequence, i.e., a finite or infinite sequence of instructions each of which is executed at most once and can be dropped after it has been executed or jumped over.
It is assumed that a fixed but arbitrary set A of basic instructions has been given. The intuition is that the execution of a basic instruction may modify a state and produces a reply at its completion. The possible replies are 0 and 1. The actual reply is generally state-dependent. Therefore, successive executions of the same basic instruction may produce different replies. The set A is the basis for the set of instructions of which the instruction sequences considered in PGA are composed. The elements of the latter set are called primitive instructions. There are five kinds of primitive instructions, which are listed below:

for each a ∈ A, a plain basic instruction a;

for each a ∈ A, a positive test instruction +a;

for each a ∈ A, a negative test instruction -a;

for each l ∈ N, a forward jump instruction #l;

a termination instruction !.
We write I for the set of all primitive instructions.
On execution of an instruction sequence, these primitive instructions have the following effects:

the effect of a positive test instruction +a is that basic instruction a is executed and execution proceeds with the next primitive instruction if 1 is produced and otherwise the next primitive instruction is skipped and execution proceeds with the primitive instruction following the skipped one — if there is no primitive instruction to proceed with, inaction occurs;

the effect of a negative test instruction -a is the same as the effect of +a, but with the role of the value produced reversed;

the effect of a plain basic instruction a is the same as the effect of +a, but execution always proceeds as if 1 is produced;

the effect of a forward jump instruction #l is that execution proceeds with the l-th next primitive instruction of the instruction sequence concerned — if l equals 0 or there is no primitive instruction to proceed with, inaction occurs;

the effect of the termination instruction ! is that execution terminates.
Inaction occurs if no more basic instructions are executed, but execution does not terminate.
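As a concrete illustration of these effects, the following Python sketch interprets a finite instruction sequence under the semantics just described. The tuple encoding and the name `execute` are our own and not part of PGA; basic instructions are modelled as callables that update a state and return the reply.

```python
# Illustrative interpreter for the five kinds of primitive instructions.
# Encoding (ours): ('plain', a), ('pos', a), ('neg', a), ('jump', l),
# ('halt',), where a is a callable taking a mutable state and
# returning the reply 0 or 1.

def execute(seq, state):
    """Run a finite instruction sequence on the given state.
    Returns True if execution terminates and False if inaction occurs."""
    pc = 0
    while 0 <= pc < len(seq):
        ins = seq[pc]
        if ins[0] == 'halt':              # !: execution terminates
            return True
        if ins[0] == 'jump':              # #l: go l instructions ahead
            if ins[1] == 0:               # #0 causes inaction
                return False
            pc += ins[1]
        elif ins[0] == 'plain':           # a: execute, always proceed
            ins[1](state)
            pc += 1
        elif ins[0] == 'pos':             # +a: skip next on reply 0
            pc += 1 if ins[1](state) == 1 else 2
        elif ins[0] == 'neg':             # -a: skip next on reply 1
            pc += 1 if ins[1](state) == 0 else 2
    return False                          # ran past the end: inaction
```

For instance, with a basic instruction `get = lambda s: s['r']`, the sequence `[('pos', get), ('halt',)]` terminates when register r contains 1 and ends in inaction when it contains 0, since the skipped instruction is the termination instruction.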
To build terms, PGA has a constant for each primitive instruction and two operators. These operators are: the binary concatenation operator ; and the unary repetition operator ω. We use the notation ;_{i=1}^{n} X_i, where X_1, …, X_n are terms, for the term X_1 ; … ; X_n. We use the convention that ;_{i=1}^{n} X_i stands for X_1 if n = 1.
The instruction sequences that concern us in the remainder of this paper
are the finite ones, i.e., the ones that can be denoted by terms
without variables in which the repetition operator does not occur.
Moreover, the basic instructions that concern us are instructions to set
and get the content of Boolean registers.
More precisely, we take the set

{in:i.get | i ∈ N+} ∪ {out.set:b | b ∈ B} ∪ {aux:i.get | i ∈ N+} ∪ {aux:i.set:b | i ∈ N+ ∧ b ∈ B}

as the set A of basic instructions.^2

^2 We write N+ for the set of positive natural numbers.
Each basic instruction consists of two parts separated by a dot. The part on the left-hand side of the dot plays the role of the name of a Boolean register and the part on the right-hand side of the dot plays the role of a command to be carried out on the named Boolean register. The names are employed as follows:

for each i ∈ N+, in:i serves as the name of the Boolean register that is used as the i-th input register in instruction sequences;

out serves as the name of the Boolean register that is used as the output register in instruction sequences;

for each i ∈ N+, aux:i serves as the name of the Boolean register that is used as the i-th auxiliary register in instruction sequences.
On execution of a basic instruction, the commands have the following effects:

the effect of get is that nothing changes and the reply is the content of the named Boolean register;

the effect of set:0 is that the content of the named Boolean register becomes 0 and the reply is 0;

the effect of set:1 is that the content of the named Boolean register becomes 1 and the reply is 1.
We write ISbr for the set of all instruction sequences that can be denoted by a PGA term without variables in which the repetition operator does not occur in the case that A is taken as specified above. ISbr is the set of all instruction sequences that matter in the remainder of this paper.
We write len(X), where X ∈ ISbr, for the length of X.
Let n ∈ N, let f : B^n → B, and let X ∈ ISbr. Then X computes f if there exists an m ∈ N such that, for all b_1, …, b_n ∈ B, on execution of X in an environment with input registers in:1, …, in:n, output register out, and auxiliary registers aux:1, …, aux:m, if

for each i ∈ {1, …, n}, the content of register in:i is b_i when execution starts;

the content of register out is 0 when execution starts;

for each i ∈ {1, …, m}, the content of register aux:i is 0 when execution starts;

then execution terminates and the content of register out is f(b_1, …, b_n) when execution terminates.
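The definition above can be checked by brute force for small n. The Python sketch below uses an encoding of our own: primitive instructions as strings such as '+in:1.get', '#2', and '!', and register contents in a dictionary. It runs an instruction sequence in such an environment and tests all 2^n initial contents of the input registers; auxiliary registers come into existence on demand with initial content 0.

```python
from itertools import product

def run(seq, regs):
    """Execute a finite instruction sequence over Boolean registers.
    Returns True on termination and False on inaction."""
    def basic(b):                       # execute a basic instruction, return reply
        name, cmd = b.rsplit('.', 1)
        if cmd == 'get':
            return regs.get(name, 0)    # absent registers have content 0
        bit = int(cmd[-1])              # cmd is 'set:0' or 'set:1'
        regs[name] = bit
        return bit
    pc = 0
    while 0 <= pc < len(seq):
        ins = seq[pc]
        if ins == '!':
            return True
        if ins.startswith('#'):
            l = int(ins[1:])
            if l == 0:
                return False
            pc += l
        elif ins.startswith('+'):
            pc += 1 if basic(ins[1:]) == 1 else 2
        elif ins.startswith('-'):
            pc += 1 if basic(ins[1:]) == 0 else 2
        else:
            basic(ins)
            pc += 1
    return False

def computes(seq, f, n):
    """Does seq compute f : B^n -> B in the sense defined above?"""
    for bits in product((0, 1), repeat=n):
        regs = {f'in:{i + 1}': bits[i] for i in range(n)}
        regs['out'] = 0
        if not run(seq, regs) or regs['out'] != f(*bits):
            return False
    return True
```

For example, `['+in:1.get', 'out.set:1', '!']` computes the identity function on one Boolean argument, and `['-in:1.get', 'out.set:1', '!']` computes its negation.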
We conclude these preliminaries with some terminology and notations that are used in the rest of this paper.
We refer to the content of a register when execution starts as the initial content of the register and we refer to the content of a register when execution terminates as the final content of the register.
The primitive instructions of the forms +in:i.get and -in:i.get are called read instructions. For a read instruction u, the input register whose name occurs in u is said to be the input register that is read by u. For an X ∈ ISbr and a k ∈ N, an input register that is read by k occurrences of a read instruction in X is said to be an input register that is read k times in X. For an X ∈ ISbr, an occurrence of a read instruction in X that is neither immediately preceded nor immediately followed by a read instruction is said to be an isolated read instruction of X. For an X ∈ ISbr, an occurrence of two read instructions in a row in X that is neither immediately preceded nor immediately followed by a read instruction is said to be a read instruction pair of X. We write Read(X), where X ∈ ISbr, for the set of all i ∈ N+ such that in:i is read by some occurrence of a read instruction in X.
Take an instruction sequence X ∈ ISbr and a function f : B^n → B (n > 0) such that X computes f. Modify X by replacing all occurrences of +in:1.get by #2, all occurrences of -in:1.get by #1, and, for each i ∈ {1, …, n-1}, all occurrences of the register name in:i+1 by in:i. Then the resulting instruction sequence computes the function f' defined by f'(b_1, …, b_{n-1}) = f(0, b_1, …, b_{n-1}). If the occurrences of +in:1.get are replaced by #1 instead of #2 and the occurrences of -in:1.get are replaced by #2 instead of #1, then the resulting instruction sequence computes the function f'' defined by f''(b_1, …, b_{n-1}) = f(1, b_1, …, b_{n-1}). Such register elimination and its generalization from one register to multiple registers are used a number of times in this paper. A notation for register elimination is introduced in the next paragraph.
For an X ∈ ISbr and a function β from a finite subset D of N+ to B such that Read(X) ⊆ {1, …, n} for some n ∈ N, D ⊆ {1, …, n}, and D is a proper subset of {1, …, n}, we write X⟨β⟩ for the instruction sequence obtained from X by replacing, for each i ∈ D, all occurrences of +in:i.get by #2 if β(i) = 0 and by #1 if β(i) = 1, all occurrences of -in:i.get by #1 if β(i) = 0 and by #2 if β(i) = 1, and, for each j ∈ {1, …, n} \ D, all occurrences of the register name in:j by in:σ(j), where σ is the unique bijection from {1, …, n} \ D to {1, …, n - |D|} such that, for all j, j' ∈ {1, …, n} \ D with j < j', σ(j) < σ(j'). For an X ∈ ISbr, an i ∈ N+, and a b ∈ B such that Read(X) ⊆ {1, …, n} for some n ∈ N and i ≤ n, we write X⟨i ↦ b⟩ for X⟨β⟩, where β is the function from {i} to B defined by β(i) = b.
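In the string encoding used in our earlier sketches, single-register elimination is the following purely syntactic transformation. The helper name `eliminate` is ours; the claim that the result computes f with the i-th argument fixed hinges on the fact that one instruction is replaced by one jump, so the length, and hence every forward jump distance, is preserved.

```python
def eliminate(seq, i, b):
    """Register elimination sketch (our own encoding): fix input
    register in:i to the constant b and renumber the higher inputs.

    +in:i.get replies b, so it behaves as #1 (proceed) if b = 1 and
    as #2 (skip next) if b = 0; -in:i.get behaves the other way
    round; a plain in:i.get always proceeds, i.e. behaves as #1."""
    def rename(name):
        # in:j with j > i becomes in:j-1; other names are unchanged
        if name.startswith('in:') and int(name[3:]) > i:
            return f'in:{int(name[3:]) - 1}'
        return name
    out = []
    for ins in seq:
        if ins == f'+in:{i}.get':
            out.append('#1' if b == 1 else '#2')
        elif ins == f'-in:{i}.get':
            out.append('#2' if b == 1 else '#1')
        elif ins == f'in:{i}.get':
            out.append('#1')
        elif ins == '!' or ins.startswith('#'):
            out.append(ins)
        else:
            sign = ins[0] if ins[0] in '+-' else ''
            body = ins if sign == '' else ins[1:]
            name, cmd = body.rsplit('.', 1)
            out.append(f'{sign}{rename(name)}.{cmd}')
    return out
```

For instance, fixing in:1 to 0 in -in:1.get ; +in:2.get ; out.set:1 ; ! turns the negative test into #1 and renames in:2 to in:1.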
Register elimination is reminiscent of gate elimination as used in much work on circuit lower bounds (see e.g. Chapter 16 of [11]).
3 Simple nonzeroness test instruction sequences
The remainder of the paper goes into programming by means of instruction sequences of the kind introduced in Section 2. We consider the programming of a function from B^n to B that models a particular function from N to B with respect to the binary representations of the natural numbers by elements of B^n. The particular function is the nonzeroness test function NZ : N → B defined by the equations NZ(0) = 0 and NZ(n + 1) = 1. In this section, we present a simple instruction sequence computing the restriction to B^n of the function from B^+ to B that models this function, for n > 0.
NZ_n, the restriction to B^n of the function from B^+ to B that models NZ, is defined by

NZ_n(b_1, …, b_n) = 1 iff b_1 = 1 or … or b_n = 1.
We define an instruction sequence NZT_n which is intended to compute NZ_n as follows:

NZT_n = ;_{i=1}^{n} (+in:i.get ; out.set:1) ; !.
The following proposition states that the instruction sequence NZT_n correctly implements NZ_n.
Proposition 1
For each n > 0, NZT_n computes NZ_n.
Proof
We prove this proposition by induction on n. The basis step consists of proving that NZT_1 computes NZ_1. This follows easily by a case distinction on the content of in:1. The inductive step is proved in the following way. It follows directly from the induction hypothesis that on execution of NZT_{n+1}, after ;_{i=1}^{n} (+in:i.get ; out.set:1) has been executed, (a) the content of out equals 1 iff the content of at least one of the input registers in:1, …, in:n equals 1 and (b) execution proceeds with the next instruction. From this, it follows easily by a case distinction on the content of in:n+1 that NZT_{n+1} computes NZ_{n+1}. ∎
The length of the instruction sequence NZT_n defined above is as follows:

len(NZT_n) = 2n + 1.
NZT_n is a simple instruction sequence to compute NZ_n. It computes NZ_n by checking all input registers. This is rather inefficient because, once an input register is encountered whose content is 1, checking of the remaining input registers can be skipped. Moreover, NZT_n does not belong to the shortest instruction sequences computing NZ_n. The shortest instruction sequences computing NZ_n are the subject of Sections 4 and 5.
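Using the same string encoding and brute-force checker as before (repeated here so that the sketch is self-contained), NZT_n can be generated and tested mechanically for small n; the names nzt and nz are ours.

```python
from itertools import product

def run(seq, regs):
    """Compact interpreter for instruction sequences over Boolean registers."""
    def basic(b):
        name, cmd = b.rsplit('.', 1)
        if cmd == 'get':
            return regs.get(name, 0)
        regs[name] = int(cmd[-1])
        return regs[name]
    pc = 0
    while 0 <= pc < len(seq):
        ins = seq[pc]
        if ins == '!':
            return True
        if ins.startswith('#'):
            if int(ins[1:]) == 0:
                return False
            pc += int(ins[1:])
        elif ins.startswith('+'):
            pc += 1 if basic(ins[1:]) == 1 else 2
        elif ins.startswith('-'):
            pc += 1 if basic(ins[1:]) == 0 else 2
        else:
            basic(ins)
            pc += 1
    return False

def computes(seq, f, n):
    """Exhaustive check of the 'computes' relation for small n."""
    for bits in product((0, 1), repeat=n):
        regs = dict({f'in:{i + 1}': bits[i] for i in range(n)}, out=0)
        if not run(seq, regs) or regs['out'] != f(*bits):
            return False
    return True

def nz(*bits):
    """NZ_n: 1 iff at least one argument is 1."""
    return 1 if 1 in bits else 0

def nzt(n):
    """The simple sequence: (+in:i.get ; out.set:1) for i = 1..n, then !."""
    seq = []
    for i in range(1, n + 1):
        seq += [f'+in:{i}.get', 'out.set:1']
    return seq + ['!']
```

For n up to 5, nzt(n) passes the exhaustive check and has length 2n + 1, as stated above.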
4 Shortest nonzeroness test instruction sequences
For all initial contents of in:1 and in:2, we have that execution of the instruction sequences denoted by +in:1.get ; out.set:1 ; +in:2.get ; out.set:1 and -in:1.get ; +in:2.get ; out.set:1 yields the same final content of out. In this section, we present an instruction sequence NZT'_n which can be considered an adaptation of NZT_n based on this fact. There are no instruction sequences shorter than NZT'_n that compute NZ_n. Section 5 is concerned with the set of all instruction sequences of the same length as NZT'_n that compute NZ_n.
We define an instruction sequence NZT'_n which is intended to compute NZ_n as follows:

NZT'_n = ;_{i=1}^{n/2} (-in:2i-1.get ; +in:2i.get ; out.set:1) ; ! if n is even,

NZT'_n = +in:1.get ; out.set:1 ; ;_{i=1}^{(n-1)/2} (-in:2i.get ; +in:2i+1.get ; out.set:1) ; ! if n is odd.
The following proposition states that the instruction sequence NZT'_n correctly implements NZ_n.
Proposition 2
For each n > 0, NZT'_n computes NZ_n.
Proof
We split the proof of this proposition into a proof for even n and a proof for odd n. The proof for even n goes by induction on n. The basis step consists of proving that NZT'_2 computes NZ_2. This follows easily by a case distinction on the contents of in:1 and in:2. The inductive step is proved in the following way. It follows directly from the induction hypothesis that on execution of NZT'_{n+2}, after ;_{i=1}^{n/2} (-in:2i-1.get ; +in:2i.get ; out.set:1) has been executed, (a) the content of out equals 1 iff the content of at least one of the input registers in:1, …, in:n equals 1 and (b) execution proceeds with the next instruction. From this, it follows easily by a case distinction on the contents of in:n+1 and in:n+2 that NZT'_{n+2} computes NZ_{n+2}. The proof for odd n is similar. ∎
The length of the instruction sequence NZT'_n defined above is as follows:

len(NZT'_n) = 3n/2 + 1 if n is even, and len(NZT'_n) = 3(n+1)/2 if n is odd.
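The adapted sequence and its length can also be checked mechanically. The sketch below (interpreter repeated for self-containment; the name nzt_short is ours) builds the construction just defined; note that the negative test must come first in each pair, since with the order of the two tests exchanged the sequence would not compute NZ_n.

```python
from itertools import product

def run(seq, regs):
    """Compact interpreter for instruction sequences over Boolean registers."""
    def basic(b):
        name, cmd = b.rsplit('.', 1)
        if cmd == 'get':
            return regs.get(name, 0)
        regs[name] = int(cmd[-1])
        return regs[name]
    pc = 0
    while 0 <= pc < len(seq):
        ins = seq[pc]
        if ins == '!':
            return True
        if ins.startswith('#'):
            if int(ins[1:]) == 0:
                return False
            pc += int(ins[1:])
        elif ins.startswith('+'):
            pc += 1 if basic(ins[1:]) == 1 else 2
        elif ins.startswith('-'):
            pc += 1 if basic(ins[1:]) == 0 else 2
        else:
            basic(ins)
            pc += 1
    return False

def computes(seq, f, n):
    for bits in product((0, 1), repeat=n):
        regs = dict({f'in:{i + 1}': bits[i] for i in range(n)}, out=0)
        if not run(seq, regs) or regs['out'] != f(*bits):
            return False
    return True

def nz(*bits):
    return 1 if 1 in bits else 0

def nzt_short(n):
    """The adapted sequence: a (+in:1.get ; out.set:1) prefix when n is
    odd, then one (-in:i.get ; +in:i+1.get ; out.set:1) block per pair
    of input registers, then !."""
    seq, first = [], 1
    if n % 2 == 1:
        seq += ['+in:1.get', 'out.set:1']
        first = 2
    for i in range(first, n, 2):
        seq += [f'-in:{i}.get', f'+in:{i + 1}.get', 'out.set:1']
    return seq + ['!']
```

An exhaustive check for n up to 7 confirms correctness, and the lengths agree with the two-case formula above.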
Proposition 3
For each n > 0, we have len(NZT'_{n+1}) = len(NZT'_n) + 2, if n is even, and len(NZT'_{n+1}) = len(NZT'_n) + 1 if n is odd.
Proof
This follows immediately from the fact that len(NZT'_n) = 3n/2 + 1 if n is even and len(NZT'_n) = 3(n+1)/2 if n is odd. ∎
Proposition 3 and the following corollary of this proposition are used in several proofs to come.
Corollary 1
We have len(NZT'_{n+1}) ≤ len(NZT'_n) + 2 and len(NZT'_{n+2}) = len(NZT'_n) + 3.
We also have len(NZT'_1) = len(NZT_1) and len(NZT'_n) < len(NZT_n) for each n > 1. In fact, NZT'_n belongs to the shortest instruction sequences computing NZ_n. This is stated by the following theorem.
Theorem 4.1
For all n > 0, for all X ∈ ISbr, X computes NZ_n only if len(X) ≥ len(NZT'_n).
Proof
We prove the following stronger result:
for all , for all , or computes only if .
We use the following in the proof. Let , where , be obtained from by replacing all occurrences of and by and , respectively, and replacing, for each , all occurrences of , , and by , , and , respectively.^3 It follows directly from the proof of Theorem 8.1 from [4] that computes if computes and is of the form or , where is , or . Moreover, .

^3 Here, we write b̄ for the complement of b.
We prove the theorem by induction on n.
The basis step consists of proving that for all , or computes only if . The following observations can be made about all such that or computes : (a) there must be at least one occurrence of or in — because otherwise the final content of will not depend on the content of ; (b) there must be at least one occurrence of , or in if computes and there must be at least one occurrence of , or in otherwise — because otherwise the final content of will always be the same; (c) there must be at least one occurrence of ! in — because otherwise nothing will ever be computed. It follows trivially from these observations that, for all , or computes only if .
The inductive step is proved by contradiction. Suppose that , or computes , and . Assume that there does not exist an such that or computes and . Obviously, this assumption can be made without loss of generality. From this assumption, it follows that where , , and is or for some such that . This can be seen as follows:
if is or with or , then and cannot compute ;
if is with , then there is an such that or computes and — which contradicts the assumption;
if is , or , then can be replaced by or in and so, there is an such that computes and — which contradicts the assumption;
if is , or , then can be replaced by or in and so, there is an such that computes and — which contradicts the assumption;
if is , , , , or for some , then can be replaced by or in and and so there is an such that or computes and — which contradicts the assumption;
if is , or for some , then can be replaced by or in and and so, because or also computes and , there is an such that or computes and — which contradicts the assumption;
if is for some , then can be replaced by in and and so there is an such that or computes and — which contradicts the assumption;
if is or for some such that , then, because the final content of is independent of the initial content of , can be replaced by and in and so there is an such that or computes and — which contradicts the assumption. So, we distinguish between the case that is and the case that is .
In the case that is , we consider the case that contains . In this case, after execution of , execution proceeds with . Let . Then or computes . Moreover, by Corollary 1, we have that . Hence, there exists a such that or computes and . This contradicts the induction hypothesis.
In the case that is , we consider the case that contains . In this case, after execution of , execution proceeds with . Let . Then or computes . From here, because , we cannot derive a contradiction immediately as in the case that is . A case distinction on is needed. With the exception of the cases that is or , for some such that and , we still consider the case that contains . In the cases that are not excepted above, a contradiction is derived as follows:
if is or with or , then and cannot compute ;
if is with , then there is a such that or computes and, by Corollary 1, — which contradicts the induction hypothesis;
if is , or , then can be replaced by or in and so, there is a such that computes and, by Corollary 1, — which contradicts the induction hypothesis;
if is , or , then can be replaced by or in and so, there is a such that computes and, by Corollary 1, — which contradicts the induction hypothesis;
if is , , , , or for some , then can be replaced by or in and so, there is a such that or computes and, by Corollary 1, — which contradicts the induction hypothesis;
if is , or for some , then can be replaced by or in and so, because or also computes and , there is a such that or computes and, by Corollary 1, — which contradicts the induction hypothesis;
if is for some , then can be replaced by in and and so there is an such that or computes and — which contradicts the induction hypothesis;
if is or for some such that , then, because the final content of is independent of the initial content of , can be replaced by and in and so there is an such that or computes and, by Corollary 1, — which contradicts the induction hypothesis;
if is or , then has been replaced by or in and so there is a such that or computes and, by Corollary 1, — which contradicts the induction hypothesis. In the case that is , we consider the case that both and contain . Let . Then, or computes and, by Corollary 1, — which contradicts the induction hypothesis. In the case that is , we consider the case that only contains . Let . Then, or computes and, by Corollary 1, — which contradicts the induction hypothesis. ∎
Theorem 4.1 is a result similar to certain results on circuit lower bounds (see e.g. Chapter 16 of [11]). In the proof of this theorem use is made of register elimination, a technique similar to gate elimination as used in work on circuit lower bounds.
The following result is a corollary of the strengthening of Theorem 4.1 that is actually proved above.
Corollary 2
For all , for all of the form or , computes only if .
5 More shortest nonzeroness test instruction sequences
In this section, we study the remaining instruction sequences of the same length as NZT'_n that compute NZ_n. The final outcome of this study is important for the proof of Theorem 6.1 in Section 6.
The following proposition states that a change of the order in which the read instructions occur in NZT'_n yields again a correct implementation of NZ_n.
Proposition 4
For each n > 0, for each bijection ϱ on {1, …, n}, NZ_n is also computed by the instruction sequence obtained from NZT'_n by replacing, for each i with 1 ≤ i ≤ n, all occurrences of the register name in:i in NZT'_n by in:ϱ(i).
Proof
The proof is like the proof of Proposition 2, but with, for each i with 1 ≤ i ≤ n, all occurrences of the register name in:i in the proof replaced by in:ϱ(i). ∎
The proof of Proposition 2 can be seen as a special case of the proof of Proposition 4, namely the case where ϱ is the identity function.
The following proposition states that, for instruction sequences as considered in Proposition 4, in the case that n is odd, a change of the position of the isolated read instruction yields again a correct implementation of NZ_n.
Proposition 5
For each odd n > 0, for each bijection ϱ on {1, …, n} and each m ∈ N with m ≤ (n-1)/2, NZ_n is also computed by the instruction sequence

;_{i=1}^{m} (-in:ϱ(2i-1).get ; +in:ϱ(2i).get ; out.set:1) ; +in:ϱ(2m+1).get ; out.set:1 ; ;_{i=m+1}^{(n-1)/2} (-in:ϱ(2i).get ; +in:ϱ(2i+1).get ; out.set:1) ; !.
Proof
Let n be odd, and let ϱ be a bijection on {1, …, n}. For each m with 0 ≤ m ≤ (n-1)/2, let X_m be the instruction sequence from the statement of the proposition. We prove that X_m computes NZ_n by induction on m. The basis step follows immediately from Proposition 4. The inductive step goes as follows. Let 0 ≤ m < (n-1)/2 and assume that X_m computes NZ_n. Then X_{m+1} is X_m with the subsequence +in:ϱ(2m+1).get ; out.set:1 ; -in:ϱ(2m+2).get ; +in:ϱ(2m+3).get ; out.set:1 replaced by -in:ϱ(2m+1).get ; +in:ϱ(2m+2).get ; out.set:1 ; +in:ϱ(2m+3).get ; out.set:1. From this, it follows easily by a case distinction on the contents of in:ϱ(2m+1), in:ϱ(2m+2), and in:ϱ(2m+3) that X_m and X_{m+1} compute the same function from B^n to B. From this and the induction hypothesis, it follows that X_{m+1} computes NZ_n. ∎
Let X be an instruction sequence as considered in Proposition 4 or 5. Then replacement of one or more occurrences of out.set:1 in X by +out.set:1 yields again a correct implementation of NZ_n because the effects of these instructions on execution of X are always the same. Even replacement of one or more occurrences of out.set:1 in X by -out.set:1 yields again a correct implementation of NZ_n, unless its last occurrence is replaced, because checking of the first of the remaining input registers can be skipped once an input register is encountered whose content is 1. Moreover, replacement of one or more occurrences of out.set:1 in X by a forward jump instruction that leads to another occurrence of out.set:1 yields again a correct implementation of NZ_n because, once an input register is encountered whose content is 1, checking of the remaining input registers can be skipped.
In the preceding paragraph, the clarifying intuitions given for the statements made do not constitute proofs of those statements. Below, the statements are incorporated into Theorem 5.1 and verified via the proof of that theorem.
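For a small case, the three replacement claims can be checked mechanically. The sketch below (interpreter repeated for self-containment; encoding ours) takes the even instance -in:1.get ; +in:2.get ; out.set:1 ; -in:3.get ; +in:4.get ; out.set:1 ; ! and replaces its first occurrence of out.set:1 by +out.set:1, by -out.set:1, and by #3 (a jump to the other occurrence). It also confirms that replacing the last occurrence by -out.set:1 destroys correctness, since the skipped instruction is then the termination instruction.

```python
from itertools import product

def run(seq, regs):
    """Compact interpreter for instruction sequences over Boolean registers."""
    def basic(b):
        name, cmd = b.rsplit('.', 1)
        if cmd == 'get':
            return regs.get(name, 0)
        regs[name] = int(cmd[-1])
        return regs[name]
    pc = 0
    while 0 <= pc < len(seq):
        ins = seq[pc]
        if ins == '!':
            return True
        if ins.startswith('#'):
            if int(ins[1:]) == 0:
                return False
            pc += int(ins[1:])
        elif ins.startswith('+'):
            pc += 1 if basic(ins[1:]) == 1 else 2
        elif ins.startswith('-'):
            pc += 1 if basic(ins[1:]) == 0 else 2
        else:
            basic(ins)
            pc += 1
    return False

def computes(seq, f, n):
    for bits in product((0, 1), repeat=n):
        regs = dict({f'in:{i + 1}': bits[i] for i in range(n)}, out=0)
        if not run(seq, regs) or regs['out'] != f(*bits):
            return False
    return True

def nz(*bits):
    return 1 if 1 in bits else 0

BASE = ['-in:1.get', '+in:2.get', 'out.set:1',
        '-in:3.get', '+in:4.get', 'out.set:1', '!']

def with_replacement(pos, ins):
    """BASE with the instruction at position pos (0-based) replaced by ins."""
    seq = BASE.copy()
    seq[pos] = ins
    return seq
```

Replacing the first out.set:1 (position 2) by any of the three alternatives preserves correctness; replacing the last one (position 5) by -out.set:1 does not.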
We define a set S_n of instruction sequences with Propositions 4 and 5 and the statements made above in mind.
For even n, we define the subset S_n of ISbr as follows: X ∈ S_n iff

X = ;_{i=1}^{n/2} (-in:ϱ(2i-1).get ; +in:ϱ(2i).get ; φ(i)) ; !

for some function φ from {1, …, n/2} to I such that

φ(j) ∈ {#3k | k ∈ N+ ∧ k ≤ n/2 - j} ∪ {out.set:1, +out.set:1, -out.set:1} for each j < n/2, and φ(n/2) ∈ {out.set:1, +out.set:1},

and some bijection ϱ on {1, …, n}.
For odd n, we define the subset S_n of ISbr as follows: X ∈ S_n iff there exists an m ∈ {k ∈ N | k ≤ (n-1)/2} such that

X = ;_{i=1}^{m} (-in:ϱ(2i-1).get ; +in:ϱ(2i).get ; φ'_m(i)) ; +in:ϱ(2m+1).get ; φ'_m(m+1) ; ;_{i=m+1}^{(n-1)/2} (-in:ϱ(2i).get ; +in:ϱ(2i+1).get ; φ'_m(i+1)) ; !

for some function φ'_m from {1, …, (n+1)/2} to I such that

φ'_m(j) ∈ {#(3k-1) | k ∈ N+ ∧ k ≤ (n+1)/2 - j ∧ j ≤ m < j + k} ∪ {#3k | k ∈ N+ ∧ k ≤ (n+1)/2 - j ∧ ¬(j ≤ m < j + k)} ∪ {out.set:1, +out.set:1, -out.set:1} for each j < (n+1)/2, and φ'_m((n+1)/2) ∈ {out.set:1, +out.set:1},

and some bijection ϱ on {1, …, n}.
Obviously, we have that NZT'_n ∈ S_n and, for each X ∈ S_n, len(X) = len(NZT'_n).
The following theorem states that each instruction sequence from correctly implements .
Theorem 5.1
For all n > 0, for all X ∈ ISbr, X ∈ S_n only if X computes NZ_n.
Proof
For convenience, forward jump instructions, out.set:1, +out.set:1, and -out.set:1 are called replaceable instructions in this proof.
Let n > 0, and let X ∈ ISbr be such that X ∈ S_n. Let Y be obtained from X by replacing all occurrences of a replaceable instruction other than out.set:1 in X by out.set:1. It follows immediately from Propositions 4 and 5 that Y computes NZ_n. Hence, it remains to be proved that X and Y compute the same function from B^n to B.
The fact that X and Y compute the same function from B^n to B is proved by induction on the number of occurrences of replaceable instructions other than out.set:1 in X. The basis step is trivial. The inductive step goes as follows. Let X' be obtained from X by replacing the first occurrence of a replaceable instruction other than out.set:1 in X by out.set:1. From the induction hypothesis and the fact that Y computes NZ_n, it follows that X' computes NZ_n. Clearly, execution of X and X' yield the same final content of out if the initial contents of in:1, …, in:n are such that execution of X' does not proceed at some point with the replacing occurrence of out.set:1. What remains to be shown is that execution of X and X' yield the same final content of out if the initial contents of in:1, …, in:n are such that execution of X' proceeds at some point with the replacing occurrence of out.set:1. Call this case the decisive case. If the decisive case occurs, then the content of at least one of the input registers is 1. From this and the fact that X' computes NZ_n, it follows that on execution of X' the final content of out is 1 in the decisive case. Execution of X and execution of X' have the same effects in the decisive case until the point where X' proceeds with the replacing occurrence of out.set:1. At that point, execution of X proceeds with the replaced occurrence of a replaceable instruction other than out.set:1 instead. From this, the fact that X contains no instructions by which the content of out can become 0, and the fact that X contains only forward jump instructions that lead in one or more steps to a replaceable instruction other than a forward jump instruction, it follows that on execution of X the final content of out is also 1 in the decisive case. Hence, X and Y compute the same function from B^n to B. ∎
There are instruction sequences in S_n in which there is only one occurrence of out.set:1 and no occurrences of +out.set:1 or -out.set:1. These instruction sequences compute NZ_n much more efficiently than NZT'_n because, once an input register is encountered whose content is 1, checking of the remaining input registers is skipped.
Not all instruction sequences with the same length as that correctly implement belong to . If is odd and , may occur once in an instruction sequence that belongs to . Let be such that occurs in , and let be such that or occurs before in . If the occurrence of will only be executed if the content of is , then its replacement by an occurrence of yields a correct implementation of that does not belong to . Because of this, we introduce an extension of .
We define a