A Theory of Lazy Imperative Timing

by Eric C. R. Hehner et al.

We present a theory of lazy imperative timing.






1 Introduction

Lazy evaluation was introduced as a programming language execution strategy in 1976 by Peter Henderson and Jim Morris [4], and by David Turner [8], and is now part of several programming languages, including Gofer, Miranda, and Haskell. It was introduced into the setting of functional programming, and has mainly stayed there, although it is just as applicable to imperative programs [2]. The name “lazy evaluation” is appropriate in the functional setting, but in the imperative setting it is more appropriately called “lazy execution”.

The usual, familiar execution of programs is called “eager execution”. For example, a program consisting of two assignments followed by a statement that prints one of the two variables is executed eagerly by first executing the one assignment, then the other, and then the print statement. If this is the entire program, a lazy execution executes only the assignment to the printed variable, and then the print statement, because the other assignment is unnecessary.

Here is a more interesting example. Let n be an integer variable, and let A be an infinite array of integers.

After initializing n to 0 and A 0 to 1, there is an infinite loop that assigns n! (n factorial) to each array element A n. Then, after the infinite loop, the value of A 3 is printed. An eager execution will execute the loop forever, and the final printing will never be done. A lazy execution executes only the first three iterations of the loop, and then prints the desired result. Of course it is easy to modify the program so that the loop is iterated only 3 times in an eager execution: just bound the loop so that it stops after three iterations. But [5] gives a reason for writing it as above: to separate the producer (initialization and loop) from the consumer (printing). Many programs include a producer and a consumer, and each may be complicated, and it is useful to be able to write them separately. When written as above, we can change the consumer, for example to print a different array element, without changing the producer. It is not the purpose of this paper to argue the relative merits of eager and lazy execution, nor to advocate any particular way of programming. The example is intended to show only that lazy execution can reduce execution time, and in the extreme case, it can be reduced from infinite time to finite time.
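The producer/consumer separation is the essence of lazy streams. As an illustration only (Python generators standing in for lazy execution; the names are ours, not the paper's):

```python
from itertools import islice

def factorials():
    """Producer: an infinite stream 0!, 1!, 2!, ... (like the loop filling the array)."""
    n, f = 0, 1
    while True:          # an infinite loop, as in the example
        yield f          # publish n!
        n += 1
        f *= n

def consume(k):
    """Consumer: demand element k only; the producer runs just far enough."""
    return next(islice(factorials(), k, None))

print(consume(3))  # 3! = 6; only the first four elements are ever produced
```

Changing the consumer, say to `consume(5)`, requires no change to the producer.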

The analysis of eager execution time is well known; for example, see [3]. Some analysis of lazy execution time has also been done in the functional setting [7]. The purpose of this paper is to present a theory for the analysis of lazy execution time in the imperative setting. This paper is based on part of the PhD thesis of Albert Lai [6], but simplifications have been made to shorten the explanations, and a different measure of time is being used.

2 A Practical Theory of Programming

In a Practical Theory of Programming [3], we do not specify programs; we specify computation, or computer behavior. The free variables of the specification represent whatever we wish to observe about a computation, such as the initial values of variables, their final values, their intermediate values, interactions during a computation, the time taken by the computation, and the space occupied by the computation. Observing a computation provides values for those variables. When you put the observed values into the specification, there are two possible outcomes: either the computation satisfies the specification, or it doesn’t. So a specification is a binary (boolean) expression. If you write anything other than a binary expression as a specification, such as a pair of predicates, or a predicate transformer, you must say what it means for a computation to satisfy a specification, and to do that formally you must write a binary expression anyway.

A program is an implemented specification. It is a specification of computer behavior that you can give to a computer to get the specified behavior. I also refer to any statement in a program, or any sequence or structure of statements, as a program. Since a program is a specification, and a specification is a binary expression, a program is a binary expression. For example, if the program variables are x and y, then the assignment program x:= e is the binary expression

x′ = e  ∧  y′ = y

where unprimed variables represent the values of the program variables before execution of the assignment, and primed variables represent the values of the program variables after execution of the assignment.
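For instance (our illustration, choosing y+1 as the assigned expression):

```latex
% The assignment x := y+1 as a binary expression over pre/post values:
x := y{+}1 \;\;=\;\; (x' = y+1) \,\wedge\, (y' = y)
```

Any observed computation whose values satisfy the right side is a correct execution of the assignment.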

We can connect specifications using any binary operators, even when one or both of the specifications are programs. If P and S are specifications, then P ⇒ S says that any behavior satisfying P also satisfies S, where ⇒ is implication. This is exactly the meaning of refinement. We can say “P implies S”, or “P refines S”, or “P implements S”. When we are programming, we start with a specification that may not be a program, and refine it until we obtain a program, so we may prefer to write

S ⇐ P

using reverse implication (“is implied by”, “is refined by”, “is implemented by”).
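As a representative instance (our illustration, not necessarily the paper's elided example), with integer program variables x and y:

```latex
% The specification x' > x ("x increases") is refined by the assignment x := x+1:
x' > x \;\Longleftarrow\; (x := x{+}1)
\qquad\text{because}\qquad
(x' = x+1 \,\wedge\, y' = y) \;\Rightarrow\; (x' > x)
```

The program on the right is one of many that implement the specification on the left.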

3 Eager Timing

If we are interested in execution time, we just add a time variable t. Then t is the start time, and t′ is the finish time, which is ∞ if the execution time is infinite. We could decide to account for the real time spent executing a program. Or we could decide to measure time as a count of various operations. In [3] and [6], time is a count of loop iterations. In this paper, time is a count of assignments; I make this choice to keep my explanations short, but I could choose any other measure.

Using the same program variables x and y, and time variable t, the empty program ok (elsewhere called skip), whose execution does nothing and takes no time, is defined as

ok   =   x′=x  ∧  y′=y  ∧  t′=t

An example assignment is

x:= y   =   x′=y  ∧  y′=y  ∧  t′=t+1

in which the conjunct t′=t+1 counts the one assignment.

The conditional specification if b then P else Q is defined as

if b then P else Q   =   b∧P  ∨  ¬b∧Q

A conditional specification is a conditional program if its parts are programs. The sequential composition of specifications P and Q is defined as

P;Q   =   ∃x″, y″, t″·   (for x′, y′, t′ substitute x″, y″, t″ in P)

∧   (for x, y, t substitute x″, y″, t″ in Q)

Sequential composition of P and Q is mainly the conjunction of P and Q, but the final state and time of P are identified with the initial state and time of Q. A sequential composition is a program if its parts are programs.
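As a worked instance (our illustration, using the definition of sequential composition just given and the one-point law):

```latex
% Composing two unit-time assignments:
(x:= x{+}1 \;;\; x:= x{+}1)
\;=\; \exists x'', y'', t''\cdot\;
      (x''{=}x{+}1 \,\wedge\, y''{=}y \,\wedge\, t''{=}t{+}1)
      \,\wedge\, (x'{=}x''{+}1 \,\wedge\, y'{=}y'' \,\wedge\, t'{=}t''{+}1)
\;=\; (x' = x+2 \;\wedge\; y' = y \;\wedge\; t' = t+2)
```

The intermediate state x″, y″, t″ is eliminated by the one-point law, leaving total time t+2: two assignments.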

In our example program, to prove that the execution time is infinite, there are two parts to the proof. The first is to write and prove a specification for the loop: we must prove that the specification is refined by the loop body followed by the specification itself.

The specification we are interested in is that the loop takes infinite time: t′ = ∞.

The proof uses the arithmetic of ∞ (the arithmetic used here is defined in complete detail in [3, pp. 233–234]), and is trivial, so we omit it. If we were to try a specification bounding the time by any finite number expression, the proof would fail, because no finite bound survives another iteration. A stronger specification that succeeds also states the final values of the variables, but the final values of variables after an infinite computation are normally not of interest (or perhaps not meaningful). The proof again uses the arithmetic of ∞ and is otherwise easy, so we omit it.

The other part of the eager timing proof is to prove that the initializations, followed by the loop, refine the overall specification. This proof is again trivial, and omitted. Eager execution is presented in great detail in [3], and is not the point of this paper.

4 Need Variables

To calculate lazy execution time, we introduce a time variable and need variables. For each variable of a basic (unstructured) type, we introduce a binary (boolean) need variable. If x is an integer variable, then introduce binary need variable nx (pronounced “need x”). The value of x may be 0, or 1, or any other integer; the value of nx may be ⊤ or ⊥. As always, we use x and x′ for the value of this integer variable at the start and end of a program (which could be a simple assignment, or any composition of programs). Likewise we use nx and nx′ for the value of its need variable at the start and end of a program. At the start, nx = ⊤ means that the initial value of variable x is needed, either in the computation or following the computation, and nx = ⊥ means that the initial value of x is not needed for the computation nor following the computation. At the end, nx′ = ⊤ means that the final value of x is needed for something following the computation, and nx′ = ⊥ means that the final value of x is not needed.

With program variables x and y and time variable t, we earlier defined

ok   =   x′=x  ∧  y′=y  ∧  t′=t

We now augment this definition with need variables. From x′=x we see that the initial value of x is needed if and only if the final value is needed. Likewise for y. So

ok   =   x′=x  ∧  y′=y  ∧  t′=t  ∧  (nx = nx′)  ∧  (ny = ny′)

Although = is a symmetric operator, making nx = nx′ and nx′ = nx equivalent, as a matter of style we write x′ = (some expression in unprimed variables) because the final value of a program variable is determined by the initial values of the program variables. But we write nx = (some expression of primed need variables) because the need for an initial value is determined by the need for final values.

We now augment the assignment x:= y with need variables. We have a choice. Perhaps the most reasonable option is

x:= y   =   (nx′ ⇒ x′=y ∧ t′=t+1)  ∧  (¬nx′ ⇒ t′=t)  ∧  y′=y  ∧  ¬nx  ∧  (ny = nx′ ∨ ny′)

This says that if the value of x is needed after this assignment, then that value is y and the assignment takes time 1, but if the value of x is not needed afterward, then no final value of x is stated and the assignment takes time 0 because it is not executed. In either case, the value of y is unchanged. The initial value of x does not appear in any right side, so it is not needed, hence ¬nx. The last conjunct says that the initial value of y is needed if and only if the final value of x or of y is needed, because y appears in the right side for each of them.

The other option is

x:= y   =   x′=y  ∧  y′=y  ∧  (nx′ ⇒ t′=t+1)  ∧  (¬nx′ ⇒ t′=t)  ∧  ¬nx  ∧  (ny = nx′ ∨ ny′)

This option seems less reasonable because it says the final value of x is y even if that value is not needed and the assignment is not executed. But if the final value of x is not used, then it doesn’t hurt to say it’s y. This option has the calculational advantage that it untangles the results from the timing. So this is the option we choose. Every assignment has this same timing part, (nx′ ⇒ t′=t+1) ∧ (¬nx′ ⇒ t′=t), but using the need variable for the variable being assigned.

In the assignment

x:= x+y

we see that x appears once, to obtain x′, so nx = nx′. And y appears twice, to obtain x′ and to obtain y′, so ny = nx′ ∨ ny′. Time and need variables can be added automatically, but the algorithm to add them is not presented in this short paper.
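The occurrence rule can be mechanized. Here is a small Python sketch (our construction, not an algorithm from the paper) that propagates needs backward through a single assignment and computes its lazy time:

```python
def needs_before(target, rhs_vars, needs_after):
    """Backward need propagation for one assignment (illustrative sketch).

    target:      name of the variable being assigned
    rhs_vars:    set of variable names appearing in the right side
    needs_after: dict saying whether each variable's final value is needed

    A variable's initial value is needed iff it appears in the expression
    producing some needed final value: the right side for the target,
    the variable itself for every other variable.
    """
    def expr_vars(w):
        return rhs_vars if w == target else {w}
    return {v: any(needs_after[w] and v in expr_vars(w) for w in needs_after)
            for v in needs_after}

def time_taken(target, needs_after):
    """Time counts executed assignments: 1 if the target's final value is needed."""
    return 1 if needs_after[target] else 0

# For x := x+y: x appears once (to obtain x'), so nx = nx';
# y appears twice (to obtain x' and y'), so ny = nx' or ny'.
print(needs_before('x', {'x', 'y'}, {'x': True, 'y': False}))
print(time_taken('x', {'x': False, 'y': True}))  # 0: the assignment is not executed
```

Running the first call shows both initial values needed, matching nx = nx′ and ny = nx′ ∨ ny′.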

For each structured variable in a program, there is a need variable structured exactly the same way. For example, if p is a pair of integers, then np is a pair of binaries (booleans); the value of np is (⊥,⊥) or (⊥,⊤) or (⊤,⊥) or (⊤,⊤). And if x is a plain integer variable, then nx is, as before, a single binary.

If we define a tree datatype recursively, so that a tree is either the empty list, or a list of three components, the first component being the left subtree, the middle component being the root value, and the last component being the right subtree, then we must define a correspondingly structured datatype for need variables. If we have a variable of the tree type, we also have a need variable of the need-tree type. If the tree is the empty list, its need variable is either ⊤ or ⊥. If the tree is a three-component list, its need variable may be ⊥ or any of 31 other values.

Returning to integer variables x and y, here is an example conditional program. One of the variables occurs in the right sides of both branches, so its initial value is needed whenever either branch’s result is needed; the other occurs in the right side of only one branch, so its initial value is needed only when that branch’s result is needed. We have added the need variables in accordance with the rules, as we would expect a compiler to do. But we can do better by using some algebra. Noticing that the two branches’ results can be stated equivalently in a form in which one of the variables no longer appears, the need part becomes correspondingly simpler. We find that the need part has been strengthened, making lazier execution possible. But a compiler would not be expected to make this improvement.

Sequential composition remains the same with need variables added.

P;Q   =   ∃x″, y″, t″, nx″, ny″·   (for x′, y′, t′, nx′, ny′ substitute x″, y″, t″, nx″, ny″ in P)

∧   (for x, y, t, nx, ny substitute x″, y″, t″, nx″, ny″ in Q)

At the end of an entire program, we put done, defined as

done   =   x′=x  ∧  y′=y  ∧  t′=t  ∧  ¬nx  ∧  ¬ny

Like ok, its execution does nothing and takes no time. Since this is the end of the whole program, done says there is no further need for the values of any variables.

5 Example

We now have all the theory we need. Let us apply it to our example program from the introduction: the infinite factorial loop followed by a print.

To begin, we need a specification for the loop, written with a number on each line for reference.

Lines 0 and 1 are the same as in the stronger version of the eager specification presented earlier. For eager execution, lines 0 and 1 are not necessary because the loop execution is nonterminating, but for lazy execution they are necessary. Line 2 is the timing. It says that if the final value of n is needed, then the loop takes forever; otherwise, if the final value of an array element A i is needed for any i, then the loop time is twice the difference between the largest such i and n, because there are two assignments in each iteration; otherwise, the loop takes time 0 because it will not be executed. (I confess that I did not get the lazy specification right the first time I wrote it; my error was in the time line 2. The specification is used in two proofs (below), and any error prevents one of the proofs from succeeding. That is how an error is discovered. Fortunately, the way the proof fails gives guidance on how to correct the specification.) Line 3 says that the loop needs an initial value for A n if and only if a final value of A i is needed for some i at or beyond n. Line 4 says that for i below n, A i must have an initial value if and only if its final value is needed. Line 5 says that n needs an initial value if and only if the final value of A i is needed for any i. And line 6 says that for i beyond n, the initial value of A i is not needed. In this paragraph, the words “initial” and “final” are used to mean relative to the entire loop execution: initially before the first iteration (if there is one), and finally after the last iteration (if there is one).

There can be more than one specification that is correct, in the sense that it makes the proofs succeed. For example, if the type of variable n allows it, we could add the line n′ = ∞, with a corresponding change to line 3; but since line 2 says that if we need n′ then execution time is infinite, these additions really don’t matter.

The first proof is the loop refinement: we must prove that the loop specification is refined by the loop body followed by the specification itself. For the proof, we first replace each of the sequentially composed programs with its binary equivalent, including time and need variables. Then we use the sequential composition rule, and use one-point laws to eliminate the quantifiers. And we make any simplifications we can along the way. The proof is in the Appendix.

Then to prove that the overall execution time is finite, we must prove that the whole program, from the initializations through the print, refines the corresponding specification. For the print statement, we suppose it is like an assignment, except that its target is not a program variable. This proof is also in the Appendix.
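As a sanity check on the loop timing, here is a Python sketch that simulates lazy execution of one plausible rendering of the example (the loop shape n:= n+1 followed by A n:= A(n−1)×n, with initializations n:= 0 and A 0:= 1, is our assumption), counting the assignments actually executed:

```python
def lazy_print_time(k):
    """Assignments executed to lazily print A k, assuming the program shape
       n := 0.  A 0 := 1.  loop(n := n+1.  A n := A(n-1) * n).  print (A k)
    Returns (value printed, assignment count)."""
    count = 1                     # A 0 := 1 is needed to produce any A k
    value = 1
    if k >= 1:
        count += 1                # n := 0 is needed as soon as the loop runs
        for n in range(1, k + 1):
            value *= n            # A n := A(n-1) * n
            count += 2            # two assignments per iteration
    return value, count

print(lazy_print_time(3))  # (6, 8): 3! = 6, after 8 assignments
print(lazy_print_time(0))  # (1, 1): only A 0 := 1 is executed
```

Printing A 3 executes the two initializations and three iterations of two assignments each; printing A 0 executes only the initialization of A 0, since n is never demanded.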

6 Execution versus Proof

In a lazy execution, the value of a variable may not be evaluated at various times during execution. Nonetheless, the value that would be evaluated if the execution were eager can still be used in the proof of lazy execution time. For example, in the loop specification line 3, we see the conjunct relating the initial need for A n to the final needs for the array elements. If there is no i for which A i is needed after the loop, then the value of A n is not needed before the loop. The value of the need variable is used in the proof to say whether the value of A n is needed in execution.

If we change the print statement to print (A 0), then the loop is not executed at all. The initialization of A 0 is still required, but the initialization of n is not. The theory tells us that the execution time is 1. The theory still requires that the assignment to n produces the value 0, but the execution does not.

7 Conclusion

We have presented a theory of lazy imperative timing. The examples presented are small enough that we know the right answers without using the theory; that enables us to see whether the theory is working. But the theory is not limited to small, easy examples.

Time and need variables are added according to a syntactic formula, and that can be automated. But in some cases, that formula does not achieve maximum laziness. To achieve maximum laziness may require some further algebra. The proofs can also be automated, but the prover needs to be given domain knowledge.


  • [2] Walter Guttmann (2010): Lazy UTP. In: Symposium on Unifying Theories of Programming, Springer LNCS 5713, pp. 82–101, doi:10.1007/978-3-642-14521-6_6.
  • [3] Eric C.R. Hehner (1993): A Practical Theory of Programming. Springer, doi:10.1007/978-1-4419-8596-5. Available at http://www.cs.utoronto.ca/~hehner/aPToP.
  • [4] Peter Henderson & James H. Morris (1976): A Lazy Evaluator. In: ACM Symposium on Principles of Programming Languages, pp. 95–103, doi:10.1145/800168.811543.
  • [5] John Hughes (1989): Why Functional Programming Matters. Computer Journal 32(2), pp. 98–107, doi:10.1093/comjnl/32.2.98.
  • [6] Albert Y.C. Lai (2013): Eager, Lazy, and Other Executions for Predicative Programming. Ph.D. thesis, University of Toronto.
  • [7] David Sands (1990): Complexity Analysis for a Lazy Higher-Order Language. In: European Symposium on Programming, Springer LNCS 432, pp. 361–376, doi:10.1007/3-540-52592-0_74.
  • [8] David A. Turner (1979): A New Implementation Technique for Applicative Languages. Software: Practice and Experience 9(1), pp. 31–49.

Appendix A Appendix

Proof of the loop refinement

starting with the right side:

Replace each statement by its definition.

Eliminate the first semi-colon.

Eliminate the last semi-colon.

Proof of the overall timing refinement,

starting with the right side:

Replace each statement by its definition.