Parallel Complexity Analysis with Temporal Session Types

04/17/2018 ∙ by Ankush Das, et al. ∙ Carnegie Mellon University

We study the problem of parametric parallel complexity analysis of concurrent, message-passing programs. To make the analysis local and compositional, it is based on a conservative extension of binary session types, which structure the type and direction of communication between processes and stand in a Curry-Howard correspondence with intuitionistic linear logic. The main innovation is to enrich session types with the temporal modalities next (○A), always (□A), and eventually (◇A), to additionally prescribe the timing of the exchanged messages in a way that is precise yet flexible. The resulting temporal session types uniformly express properties such as the message rate of a stream, the latency of a pipeline, the response time of a concurrent queue, or the span of a fork/join parallel program. The analysis is parametric in the cost model and the presentation focuses on communication cost as a concrete example. The soundness of the analysis is established by proofs of progress and type preservation using a timed multiset rewriting semantics. Representative examples illustrate the scope and usability of the approach.




1. Introduction

For sequential programs, several type systems and program analyses have been proposed to structure, formalize (Lago and Gaboardi, 2011; Danner et al., 2015; Çiçek et al., 2017), and automate (Gulwani et al., 2009; Hoffmann et al., 2017; Avanzini et al., 2015) complexity analysis. Analyzing the complexity of concurrent, message-passing processes poses additional challenges that these systems do not address. To begin with, we need information about the possible interactions between processes to enable compositional and local reasoning about concurrent cost.

Session types (Honda et al., 1998) provide a structured way to prescribe communication behavior between message-passing processes and are a natural foundation for compositional, concurrent complexity analysis. In particular, we use a system of binary session types that stands in a Curry-Howard correspondence with intuitionistic linear logic (Caires and Pfenning, 2010; Caires et al., 2016). Our communication model is asynchronous in the sense of the asynchronous π-calculus: sending always succeeds immediately, while receiving blocks until a message arrives.

In addition to the structure of communication, the timing of messages is of central interest for analyzing concurrent cost. With information on message timing we may analyze not only properties such as the rate or latency with which a stream of messages can proceed through a pipeline, but also the span of a parallel computation, which can be defined as the time of the final response message assuming maximal parallelism.

There are several possible ways to enrich session types with timing information. A challenge is to find a balance between precision and flexibility. We would like to express precise times according to a global clock as in synchronous dataflow languages whenever that is possible. However, sometimes this will be too restrictive. For example, we may want to characterize the response time of a concurrent queue where enqueue and dequeue operations arrive at unpredictable intervals.

In this paper, we develop a type system that captures the parallel complexity of session-typed message-passing programs by adding temporal modalities next (○), always (□), and eventually (◇), interpreted over a linear model of time. When considered as types, the temporal modalities allow us to express properties of concurrent programs such as the message rate of a stream, the latency of a pipeline, the response time of a concurrent data structure, or the span of a fork/join parallel program, all in the same uniform manner. Our results complement prior work on expressing the work of session-typed processes in the same base language (Das et al., 2017). Together, they form a foundation for analyzing the parallel implementation complexity of session-typed processes.

The way in which we construct the type system is conservative over the base language of session types, which makes it quite general and easily able to accommodate various concrete cost models. Our language contains standard session types and process expressions, and their typing rules remain unchanged. They correspond to processes that do not induce cost and send all messages at the same constant time 0.

To model computation cost we introduce a new syntactic form delay, which advances time by one step. To specify a particular cost semantics we take an ordinary, non-temporal program and add delays capturing the intended cost. For example, if we decide only the blocking operations should cost one unit of time, we add a delay before the continuation of every receiving construct. If we want sends to have unit cost as well, we also add a delay immediately after each send operation. Processes that contain delays cannot be typed using standard session types.

To type processes with non-zero cost, we first introduce the type ○A (read: next A), which is inhabited only by the process expression (delay ; P). This forces time to advance on all channels that P can communicate along. The resulting types prescribe the exact time a message is sent or received, and sender and receiver are precisely synchronized.

As an example, consider a stream of bits terminated by $, expressed as the recursive session type

bits = ⊕{b0 : bits, b1 : bits, $ : 1}

where ⊕ stands for internal choice and 1 for termination, ending the session. A simple cost model for asynchronous communication prescribes a cost of one unit of time for every receive operation. A stream of bits then needs to delay every continuation to give the recipient time to receive the message, expressing a rate of one. This can be captured precisely with the temporal modality ○:

bits = ⊕{b0 : ○bits, b1 : ○bits, $ : ○1}

A transducer neg that negates each bit it receives along channel x and passes it on along channel y would be typed as

neg : (x : bits) ⊢ (y : ○bits)

expressing a latency of one. A process negneg that puts two negations in sequence has a latency of two, compared with copy which passes on each bit, and id which terminates and identifies the channel y with the channel x, short-circuiting the communication.

All these processes have the same extensional behavior, but different latencies. They also have the same rate since after the pipelining delay, the bits are sent at the same rate they are received, as expressed in the common type bits used in the context and the result.
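To make the latency/rate arithmetic concrete, here is a small sketch of our own (not from the paper; the function name is illustrative) of how a transducer's latency and the stream's rate determine arrival times of output messages:

```python
# Illustrative sketch (ours, not the paper's semantics): with rate 1,
# a transducer of latency L delivers its k-th output message at time L + k.
def arrival_times(n_msgs, latency, rate=1):
    return [latency + rate * k for k in range(n_msgs)]

# neg has latency one; negneg composes two negations and has latency two.
assert arrival_times(4, latency=1) == [1, 2, 3, 4]
assert arrival_times(4, latency=2) == [2, 3, 4, 5]
# Both have the same rate: consecutive arrivals are one time unit apart.
```

Latencies shift the whole output stream; the rate (the spacing between consecutive messages) is unchanged by pipelining.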

While precise and minimalistic, the resulting system is often too precise for typical concurrent programs such as pipelines or servers. We therefore introduce the dual type formers ◇A and □A to talk about varying time points in the future. Remarkably, even if part of a program is typed using these constructs, we can still make precise and useful statements about other aspects.

For example, consider a transducer compress that shortens a stream by combining consecutive 1 bits so that each maximal run of 1's in the input becomes a single 1 in the output. For such a transducer, we cannot bound the latency statically, even if the bits are received at a constant rate as in the type bits. So we have to express that after seeing a 1 bit we will eventually see either another bit or the end of the stream. For this purpose, we introduce a new type sbits with the same message alternatives as bits, but different timing. In particular, after sending b1 we have to send either the next bit or end-of-stream eventually (◇), rather than immediately.

sbits = ⊕{b0 : ○sbits, b1 : ○◇sbits, $ : ○1}

We write ○◇sbits instead of ◇sbits for the continuation type after b1 to express that there will always be a delay of at least one, to account for the unit cost of receive in this particular cost model.

The dual modality, □A, is useful to express, for example, that a server providing A is always ready, starting from “now”. As an example, consider the temporal type of an interface to a queue process with elements of type A. It expresses that there must be at least three time units between successive enqueue operations and that the response to a dequeue request is immediate, only one time unit later (& stands for external choice, the dual to internal choice ⊕).

As an example of a parametric cost analysis, we can type a process append that appends inputs l1 and l2 to yield l, where the message rate on all three lists is two units of time (that is, the interval between consecutive list elements needs to be at least 2).

The type expresses that append has a latency of two units of time and that it inputs the first message from l2 after 2n + 2 units of time, where n is the number of elements sent along l1.

To analyze the span of a fork/join parallel program, we capture the time at which the (final) answer is sent. For example, we can give a temporal type describing the span of a process that computes the parity of a binary tree of height h with boolean values at the leaves. The session type expresses that the result of the computation is a single boolean that arrives at a fixed time, linear in h, after the request.
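The span recurrence behind this claim can be sketched as follows (our own illustration with made-up unit constants, not the paper's exact figures): since both subtrees of a node are queried in parallel, their spans combine with max rather than +, so the completion time grows linearly in the height h even though the work is exponential.

```python
def span(h, c_leaf=1, c_node=1):
    """Completion time of tree parity under maximal parallelism.

    Both subtrees of a node run in parallel, so their spans combine with
    max (not +); the per-leaf and per-node constants are illustrative only.
    """
    if h == 0:
        return c_leaf
    left = right = span(h - 1, c_leaf, c_node)  # children proceed in parallel
    return max(left, right) + c_node

assert span(0) == 1
assert span(5) == 6  # c_leaf + 5 * c_node: linear in h
```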

In summary, the main contributions of the paper are (1) a generic framework for parallel cost analysis of asynchronously communicating session-typed processes rooted in a novel combination of temporal and linear logic, (2) a soundness proof of the type system with respect to a timed operational semantics, showing progress and type preservation, (3) instantiations of the framework with different cost models, e.g., where either just receives, or receives and sends, cost one time unit each, and (4) examples illustrating the scope of our method. Our technique for proving progress and preservation does not require dependency graphs and may be of independent interest. We further provide decidable systems for time reconstruction and subtyping that greatly simplify the programmer’s task. They also enhance modularity by allowing the same program to be assigned temporally different types, depending on the context of use.

Related is work on space and time complexity analysis of interaction nets by Gimenez and Moser (2016), which is a parallel execution model for functional programs. While also inspired by linear logic and, in particular, proof nets, it treats only special cases of the additive connectives and recursive types and does not have analogues of the ◇ and □ modalities. It also does not provide a general source-level programming notation with a syntax-directed type system. On the other hand, they incorporate sharing and space bounds, which are beyond the scope of this paper.

Another related thread is the research on timed multiparty session types (Bocchi et al., 2014) for modular verification of real-time choreographic interactions. Their system is based on explicit global timing interval constraints, capturing a new class of communicating timed automata, in contrast to our system based on binary session types in a general concurrent language. Therefore, their system has no need for general ◇ and □ modalities, the ability to pass channels along channels, or the ability to identify channels via forwarding. Their work is complemented by an expressive dynamic verification framework in real-time distributed systems (Neykova et al., 2014), which we do not consider. Semantics counting communication costs for work and span in session-typed programs were given by Silva et al. (2016), but no techniques for analyzing them were provided.

The remainder of the paper is organized as follows. We review our basic system of session types in Section 2, then introduce the next-time modality ○ in Section 3, followed by ◇ and □ in Section 4. We establish fundamental metatheoretic type safety properties in Section 5 and time reconstruction in Section 6. Additional examples in Section 7 are followed by a discussion of further related work in Section 8 and a brief conclusion.

2. The Base System of Session Types

The underlying base system of session types is derived from a Curry-Howard interpretation of intuitionistic linear logic (Caires and Pfenning, 2010; Caires et al., 2016). We present it here to fix our particular formulation, which can be considered the purely linear fragment of SILL (Toninho et al., 2013; Pfenning and Griffith, 2015). Remarkably, the rules remain exactly the same when we consider temporal extensions in the next section. The key idea is that an intuitionistic linear sequent

A1, A2, …, An ⊢ C

is interpreted as the interface to a process expression P. We label each of the antecedents with a channel name xi and the succedent with a channel name z. The xi's are channels used by P and z is the channel provided by P.

x1 : A1, x2 : A2, …, xn : An ⊢ P :: (z : C)

The resulting judgment formally states that process P provides a service of session type C along channel z, while using the services of session types A1, …, An provided along channels x1, …, xn, respectively. All these channels must be distinct, and we sometimes implicitly rename them to preserve this presupposition. We abbreviate the antecedent of the sequent by Δ.

Type             Provider Action                      Session Continuation
⊕{ℓ : Aℓ}ℓ∈L     send label k ∈ L                     Ak
&{ℓ : Aℓ}ℓ∈L     receive and branch on label k ∈ L    Ak
1                send token close                     none
A ⊗ B            send channel c : A                   B
A ⊸ B            receive channel c : A                B
Figure 1. Basic Session Types. Every provider action has a matching client action.
Figure 2. Basic Process Expressions

Figure 1 summarizes the basic session types and their actions. The process expressions for these actions are shown in Figure 2; the process typing rules in Figure 3. The first few examples (well into Section 4) only use internal choice, termination, and recursive types, together with process definitions and forwarding, so we explain these in some detail together with their formal operational semantics. A summary of all the operational semantics rules can be found in Figure 4.

2.1. Internal Choice

A type is said to describe a session, which is a particular sequence of interactions. As a first type construct we consider internal choice ⊕{ℓ : Aℓ}ℓ∈L, an n-ary labeled generalization of the linear logic connective A ⊕ B. A process that provides x : ⊕{ℓ : Aℓ}ℓ∈L can send any label k ∈ L along x and then continue by providing x : Ak. We write the corresponding process as (x.k ; P), where P is the continuation. This typing is formalized by the right rule ⊕R in our sequent calculus. The corresponding client branches on the label received along x as specified by the left rule ⊕L.

We formalize the operational semantics as a system of multiset rewriting rules (Cervesato and Scedrov, 2009). We introduce semantic objects proc(c, t, P) and msg(c, t, M), which mean that process P or message M provide along channel c and are at time t. A process configuration is a multiset of such objects, where any two offered channels are distinct. Communication is asynchronous, so that a process (c.k ; P) sends a message k along c and continues as P without waiting for it to be received. As a technical device to ensure that consecutive messages on a channel arrive in order, the sender also creates a fresh continuation channel c′ so that the message is actually represented as (c.k ; c ← c′) (read: send k along c and continue as c′).
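The continuation-channel device can be illustrated with a minimal executable sketch (ours, not the paper's formal semantics; all names are illustrative): each send allocates a fresh channel that the next message will travel on, so messages that conceptually belong to one session arrive in order.

```python
from collections import deque

# channel name -> queue of (label, continuation channel)
channels = {}
fresh = iter(range(10**6))

def new_channel():
    c = f"c{next(fresh)}"
    channels[c] = deque()
    return c

def send(c, label):
    """Asynchronous send: deposit `label` on c together with a fresh
    continuation channel, and return that continuation channel."""
    c2 = new_channel()
    channels[c].append((label, c2))
    return c2

def receive(c):
    """Receive the next message along c: (label, continuation channel)."""
    return channels[c].popleft()

# The sender emits b0 then b1 without waiting for the receiver.
d = new_channel()
d1 = send(d, "b0")
send(d1, "b1")

label, cont = receive(d)
assert label == "b0"
assert receive(cont)[0] == "b1"  # the continuation carries the next message
```

Since each message names the channel on which the conversation continues, no global ordering on a single channel is needed to keep a session's messages sequential.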

When the message k is received along c, we select branch k and also substitute the continuation channel c′ for c.

The message (c.k ; c ← c′) is just a particular form of process, where c ← c′ is identity or forwarding, explained in Section 2.3. Therefore no separate typing rules for messages are needed; they can be typed as processes (Balzer and Pfenning, 2017).

In the receiving rule we require the time of the message and receiver process to match. Until we introduce temporal types, this is trivially satisfied since all actions are considered instantaneous and processes will always remain at time 0.

The dual of internal choice is external choice &{ℓ : Aℓ}ℓ∈L, which just reverses the role of provider and client and reuses the same process notation. It is the n-ary labeled generalization of the linear logic connective A & B.

Figure 3. Basic Typing Rules

2.2. Termination

The type 1, the multiplicative unit of linear logic, represents termination of a process, which (due to linearity) is not allowed to use any channels.

Operationally, a client has to wait for the corresponding closing message, which has no continuation since the provider terminates.

2.3. Forwarding

A process (c ← d) identifies the channels c and d so that any further communication along either c or d will be along the unified channel. Its typing rule corresponds to the logical rule of identity.

We have already seen this form in the continuations of message objects. Operationally, the intuition is realized by forwarding: a process (c ← d) forwards any message that arrives along either channel to the other. Because channels are used linearly, the forwarding process can then terminate, making sure to apply the proper renaming. The corresponding rules of operational semantics are as follows.

In the last transition, we write M(c) to indicate that the channel c must occur in the message M, which implies that this message is the sole client of c. In anticipation of the extension by temporal operators, we do not require the time of the message and the forwarding process to be identical, but just that the forwarding process is ready before the message arrives.

2.4. Process Definitions

Process definitions have the form Δ ⊢ X = P :: (x : A), where X is the name of the process and P its definition. All definitions are collected in a fixed global signature Σ. We require that Δ ⊢ P :: (x : A) for every definition, which allows the definitions to be mutually recursive. For readability of the examples, we break a definition into two declarations, one providing the type and the other the process definition binding the variable x and those in Δ (generally omitting their types):

A new instance of a defined process X can be spawned with the expression

x ← X ← y ; Q

where y is a sequence of variables matching the antecedents Δ. The newly spawned process will use all variables in y and provide x to the continuation Q. Operationally, the spawn creates a fresh channel for x and a new process object executing the definition of X in parallel with Q.

Here, the channels passed at the spawn site are substituted for the corresponding variables in the definition of X.

Sometimes a process invocation is a tail call, written without a continuation as x ← X ← y. This is a short-hand for x′ ← X ← y ; x ← x′ for a fresh variable x′, that is, we create a fresh channel and immediately identify it with x (although it is generally implemented more efficiently).

Figure 4. Basic Operational Semantics

2.5. Recursive Types

Session types can be naturally extended to include recursive types. For this purpose we allow (possibly mutually recursive) type definitions V = A in the signature, where we require A to be contractive (Gay and Hole, 2005). This means here that A should not itself be a type name. Our type definitions are equi-recursive so we can silently replace V by A during type checking, and no explicit rules for recursive types are needed.
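The contractiveness requirement, in the simple sense used here, is easy to check mechanically. A hedged sketch (ours; the representation of type definitions is made up for illustration):

```python
# Type definitions: a name maps either to a structured type (here a tuple
# tagging the constructor) or, illegally, to another bare type name.
defs = {
    "bits": ("internal_choice", {"b0": "bits", "b1": "bits", "$": "one"}),
    "bad":  "bits",   # not contractive: the rhs is itself a type name
}

def is_contractive(name):
    """A definition V = A is contractive here iff A is not a bare type name."""
    rhs = defs[name]
    return not isinstance(rhs, str)

assert is_contractive("bits")
assert not is_contractive("bad")
```

Contractiveness guarantees that equi-recursive unfolding of a definition always exposes a type constructor, so silent replacement of V by A during type checking terminates at each step.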

As a first example, consider a stream of bits (introduced in Section 1) defined recursively as

bits = ⊕{b0 : bits, b1 : bits, $ : 1}

When considering bits as representing natural numbers, we think of the least significant bit being sent first. For example, a process six sending the number 6 = (110)₂ would be

six : · ⊢ (c : bits)
c ← six = c.b0 ; c.b1 ; c.b1 ; c.$ ; close c

Executing proc(c0, 0, c0 ← six) yields (with some fresh channels c1, …, c4)

msg(c0, 0, c0.b0 ; c0 ← c1), msg(c1, 0, c1.b1 ; c1 ← c2), msg(c2, 0, c2.b1 ; c2 ← c3), msg(c3, 0, c3.$ ; c3 ← c4), msg(c4, 0, close c4)

As a first example of a recursive process definition, consider one that just copies the incoming bits.

copy : (x : bits) ⊢ (y : bits)
y ← copy ← x =
  case x ( b0 ⇒ y.b0 ; y ← copy ← x        % received b0 on x, send b0 on y, recurse
         | b1 ⇒ y.b1 ; y ← copy ← x        % received b1 on x, send b1 on y, recurse
         | $  ⇒ y.$ ; wait x ; close y )   % received $ on x, send $ on y, wait on x, close y

The process neg mentioned in the introduction would just swap the occurrences of b0 and b1. We see here an occurrence of a (recursive) tail call to copy.
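Extensionally (ignoring timing and channels), copy and neg act on finished label streams as follows; this is an illustrative sketch of ours, not the process calculus itself:

```python
# Bit streams as finished lists of labels, least significant bit first,
# terminated by "$".

def copy(xs):
    return [x for x in xs]          # pass every label through unchanged

def neg(xs):
    flip = {"b0": "b1", "b1": "b0", "$": "$"}
    return [flip[x] for x in xs]    # swap b0 and b1, as for neg above

six = ["b0", "b1", "b1", "$"]       # 6 = (110) in binary, LSB first
assert copy(six) == six
assert neg(neg(six)) == six          # negneg is extensionally just copy
```

This matches the earlier observation that copy, negneg, and id have the same extensional behavior and differ only in latency.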

A last example in this section: to increment a bit stream we turn b0 to b1 but then forward the remaining bits unchanged (y ← x), or we turn b1 to b0 but then increment the remaining stream (y ← plus1 ← x) to capture the effect of the carry bit.

plus1 : (x : bits) ⊢ (y : bits)
y ← plus1 ← x =
  case x ( b0 ⇒ y.b1 ; y ← x
         | b1 ⇒ y.b0 ; y ← plus1 ← x
         | $  ⇒ y.b1 ; y.$ ; wait x ; close y )
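The carry logic can be checked extensionally with a small sketch of ours over finished label streams (the function name mirrors the process name for readability):

```python
# Increment an LSB-first bit stream: b0 becomes b1 and the rest is
# forwarded; b1 becomes b0 and the carry propagates into the rest.

def plus1(xs):
    head, rest = xs[0], xs[1:]
    if head == "b0":
        return ["b1"] + rest          # no carry: forward remaining bits
    if head == "b1":
        return ["b0"] + plus1(rest)   # carry into the remaining stream
    return ["b1", "$"]                # head == "$": incrementing 0 gives 1

six = ["b0", "b1", "b1", "$"]         # 6, least significant bit first
assert plus1(six) == ["b1", "b1", "b1", "$"]               # 7
assert plus1(plus1(six)) == ["b0", "b0", "b0", "b1", "$"]  # 8
```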

3. The Temporal Modality Next (○)

In this section we introduce actual cost by explicitly advancing time. Remarkably, all the rules we have presented so far remain literally unchanged. As mentioned, they correspond to the cost-free fragment of the language in which time never advances. In addition, we have a new type construct ○A (read: next A) with a corresponding process construct (delay ; P), which advances time by one unit. In the corresponding typing rule

  Δ ⊢ P :: (x : A)
  ─────────────────────────────
  ○Δ ⊢ (delay ; P) :: (x : ○A)

we abbreviate the context ○A1, …, ○An by ○Δ. Intuitively, when P idles, time advances on all channels connected to P. Computationally, we delay the process for one time unit without any external interactions.

There is a subtle point about forwarding: a process (c ← d) may be ready to forward a message before a client reaches time t, while in all other rules the times must match exactly. We can avoid this mismatch by transforming uses of forwarding at type ○ⁿS, where S is not itself of the form ○S′, into (delayⁿ ; c ← d). In this discussion we have used the notation ○ⁿA (n-fold application of ○ to A) and delayⁿ (n consecutive delays), which will be useful later.

3.1. Modeling a Cost Semantics

Our system allows us to represent a variety of different abstract cost models in a straightforward way. We will mostly use two different abstract cost models. In the first, called R, we assign unit cost to every receive action while all other operations remain cost-free. We may be interested in this since receiving a message is the only blocking operation in the asynchronous semantics. A second one, called RS and considered in Section 7, assigns unit cost to both send and receive actions.

To capture R we take a source program and insert a delay operation before the continuation of every receive. We write this delay as tick in order to remind the reader that it arises systematically from the cost model and is never written by the programmer. In all other respects, tick is just a synonym for delay.
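Step (1) of this transformation is purely syntactic. A hedged sketch of ours, assuming the cost-model delay is written tick and representing a process body as a flat list of operation strings (a simplification; real process expressions are trees with branching):

```python
def insert_ticks(ops):
    """Insert a cost-model `tick` after every receiving operation.

    `recv …` here stands for any receiving construct (case, wait);
    the representation is illustrative, not the paper's syntax.
    """
    out = []
    for op in ops:
        out.append(op)
        if op.startswith("recv"):
            out.append("tick")      # unit cost charged to the receive
    return out

src = ["recv x", "send y.b0", "recv x", "close y"]
assert insert_ticks(src) == [
    "recv x", "tick", "send y.b0", "recv x", "tick", "close y"
]
```

Step (2), time reconstruction, then inserts whatever further delays are needed to make the result temporally well-typed.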

For example, the earlier copy process would become

copy : (x : bits) ⊢ (y : bits)             % No longer correct!
y ← copy ← x =
  case x ( b0 ⇒ tick ; y.b0 ; y ← copy ← x
         | b1 ⇒ tick ; y.b1 ; y ← copy ← x
         | $  ⇒ tick ; y.$ ; wait x ; tick ; close y )

As indicated in the comment, the type of copy is now no longer correct because the bits that arrive along x are delayed by one unit before they are sent along y. We can observe this concretely by starting to type-check the first branch:

  case x ( b0 ⇒    % (x : bits) ⊢ (y : bits)
    tick ; …       % does not type-check

We see that the delay does not type-check, because neither x nor y have a type of the form ○A. We need to redefine the type bits so that the continuation type after every label is delayed by one, anticipating the time it takes to receive the label b0, b1, or $. Similarly, we capture in the type of copy that its latency is one unit of time.

bits = ⊕{b0 : ○bits, b1 : ○bits, $ : ○1}
copy : (x : bits) ⊢ (y : ○bits)

With these declarations, we can now type-check the definition of copy. We show the intermediate type of the used and provided channels after each interaction.

y ← copy ← x =
  case x ( b0 ⇒       % (x : ○bits) ⊢ (y : ○bits)
    tick ;            % (x : bits) ⊢ (y : bits)
    y.b0 ;            % (x : bits) ⊢ (y : ○bits)
    y ← copy ← x      % well-typed by type of copy
  | … )               % branches for b1 and $ are analogous

Armed with this experience, we now consider the increment process plus1. Again, we expect the latency of the increment to be one unit of time. Since we are interested in detailed type-checking, we show the transformed program, with a tick after each receive.

plus1 : (x : bits) ⊢ (y : ○bits)
y ← plus1 ← x =
  case x ( b0 ⇒ tick ; y.b1 ; y ← x        % type error here!
         | b1 ⇒ tick ; y.b0 ; y ← plus1 ← x
         | $  ⇒ … )

The branches for b1 and $ type-check as before, but the branch for b0 does not. We make the types at the crucial point explicit:

  b0 ⇒ tick ; y.b1 ;   % (x : bits) ⊢ (y : ○bits)
       y ← x           % ill-typed, since bits ≠ ○bits

The problem here is that identifying y and x removes the delay mandated by the type of plus1. A solution is to call copy to reintroduce the latency of one time unit: b0 ⇒ tick ; y.b1 ; y ← copy ← x.

In order to write plus2 as a pipeline of two increments we need to delay the second increment explicitly in the program and stipulate, in the type, that there is a latency of two.

plus2 : (x : bits) ⊢ (y : ○○bits)
y ← plus2 ← x =
  z ← plus1 ← x ;     % (z : ○bits) ⊢ (y : ○○bits)
  delay ;             % (z : bits) ⊢ (y : ○bits)
  y ← plus1 ← z


Programming with so many explicit delays is tedious, but fortunately we can transform a source program without all these delay operations (but with explicit temporal session types) automatically in two steps: (1) we insert the delays mandated by the cost model (here: a tick after each receive), and (2) we perform time reconstruction to insert the additional delays so the result is temporally well-typed, or issue an error message if this is impossible (see Section 6).

3.2. The Interpretation of a Configuration

We reconsider the program six to produce the number 6 under the cost model R from the previous section, where each receive action costs one unit of time. There are no receive operations in this program, but time reconstruction must insert a delay after each send in order to match the delays mandated by the type bits.

Executing proc(c0, 0, c0 ← six) then leads to the following configuration:

msg(c0, 0, c0.b0 ; c0 ← c1), msg(c1, 1, c1.b1 ; c1 ← c2), msg(c2, 2, c2.b1 ; c2 ← c3), msg(c3, 3, c3.$ ; c3 ← c4), msg(c4, 4, close c4)

These messages are at increasing times, which means any client of c0 will have to immediately (at time 0) receive b0, then (at time 1) b1, then (at time 2) b1, etc. In other words, the time stamps on messages predict exactly when the message will be received. Of course, if there is a client in parallel we may never reach this state because, for example, the first message along channel c0 may be received before the continuation of the sender produces the next message. So different configurations may be reached depending on the scheduler for the concurrent processes. It is also possible to give a time-synchronous semantics in which all processes proceed in parallel from time 0 to time 1, then from time 1 to time 2, etc.
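A toy sketch of ours reproduces these timestamps, assuming (as the cost model mandates here) that the sender delays one unit after each send, so the k-th message is stamped with time k:

```python
def run_six():
    """Timestamps of the messages produced by `six` under cost model R,
    with a unit delay inserted after each send (illustrative model)."""
    labels = ["b0", "b1", "b1", "$", "close"]
    t, config = 0, []
    for lab in labels:
        config.append((lab, t))   # send at the current time...
        t += 1                    # ...then delay one unit
    return config

assert run_six() == [("b0", 0), ("b1", 1), ("b1", 2), ("$", 3), ("close", 4)]
```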

4. The Temporal Modalities Always (□) and Eventually (◇)

The strength and also the weakness of the system so far is that its timing is very precise. Now consider a process compress that combines runs of consecutive 1’s to a single 1, so that each maximal run of 1’s in the input yields a single 1 in the output. First, in the cost-free setting we might write

compress : (x : bits) ⊢ (y : bits)
skip1s : (x : bits) ⊢ (y : bits)

y ← compress ← x =
  case x ( b0 ⇒ y.b0 ; y ← compress ← x
         | b1 ⇒ y.b1 ; y ← skip1s ← x
         | $  ⇒ y.$ ; wait x ; close y )

y ← skip1s ← x =
  case x ( b0 ⇒ y.b0 ; y ← compress ← x
         | b1 ⇒ y ← skip1s ← x
         | $  ⇒ y.$ ; wait x ; close y )

The problem is that if we adopt the cost model R where every receive takes one unit of time, then this program cannot be typed. Actually, worse: there is no way to insert next-time modalities into the type and additional delays into the program so that the result is well-typed. This is because if the input stream is unknown we cannot predict how long a run of 1’s will be, but the length of such a run determines the delay between sending a bit 1 and the following bit 0.
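Extensionally, the intended behavior of compress can be sketched as follows (our own illustration over finished streams of 0/1 integers, ignoring the terminator and all timing):

```python
def compress(bits):
    """Collapse each maximal run of 1s in the input to a single 1."""
    out, prev = [], None
    for b in bits:
        if not (b == 1 and prev == 1):   # drop 1s that merely extend a run
            out.append(b)
        prev = b
    return out

assert compress([1, 1, 0, 0, 1, 1, 1, 0]) == [1, 0, 0, 1, 0]
assert compress([0, 0]) == [0, 0]
```

The output length depends on the run lengths of the input, which is exactly why no fixed pattern of ○ modalities can describe the timing of the output stream.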

The best we can say is that after a bit 1 we will eventually send either a bit 0 or the end-of-stream token $. This is the purpose of the type ◇A. We capture this timing in the type sbits (for slow bits).

sbits = ⊕{b0 : ○sbits, b1 : ○◇sbits, $ : ○1}

In the next section we introduce the process constructs and typing rules so we can revise our compress and skip1s programs so they have the right temporal semantics.

4.1. Eventually (◇)

A process providing promises only that it will eventually provide . There is a somewhat subtle point here: since not every action may require time and because we do not check termination separately, expresses only that if the process providing