Back to Futures

02/11/2020
by Klaas Pruiksma, et al.
Carnegie Mellon University

We briefly introduce the semi-axiomatic sequent calculus for linear logic whose natural computational interpretation is session-typed asynchronous communication. This natural asynchrony allows us to endow it with a shared-memory semantics that is weakly bisimilar to its more standard message-passing semantics. We then show how to further refine the concurrent shared memory semantics into a sequential one. Exploiting the expressive framework of adjoint logic, we show how to combine instances of message-passing, shared-memory, and sequential languages into a coherent whole. We exemplify this by providing rational reconstructions for SILL and futures, two approaches for introducing concurrency into functional programming languages. As a byproduct we obtain a first complete definition of typed linear futures.


1 Introduction

The computational interpretation of constructive proofs depends not only on the logic, but on the fine structure of its presentation. For example, intuitionistic logic may be presented in axiomatic form (and we obtain combinatory reduction [Curry34]), or via natural deduction (which corresponds to the $\lambda$-calculus [Howard69]), or a sequent calculus with a stoup (and we discern explicit substitutions [Herbelin94csl]).

More recently, a correspondence between linear logic in its sequent formulation and the session-typed synchronous $\pi$-calculus has been discovered [Bellin94, Cockett09scp, Caires10concur, Wadler12icfp] and analyzed. Cut reduction here corresponds to synchronous communication. We can also give an asynchronous message-passing semantics [Gay10jfp] to the same syntax (in the sense that input is blocking but output is not), but this is no longer directly related to cut reduction in the sequent calculus [DeYoung12csl]. In this paper we briefly introduce a new style of inference system that combines features from Hilbert's axiomatic form [Hilbert34] and Gentzen's sequent calculus [Gentzen35], which we call the semi-axiomatic sequent calculus [DeYoung20phd] (Section 2). Cut reduction in the semi-axiomatic sequent calculus corresponds to asynchronous communication, somewhat like the asynchronous $\pi$-calculus [Boudol92tr, Honda91ecoop] except that message order must be preserved.

In this paper we show that the apparently small change from the ordinary to the semi-axiomatic sequent calculus has profound and far-reaching consequences. The first of these is that if we stick to intuitionistic (rather than classical) linear logic, we can easily give a natural (although restricted to write-once memory) shared-memory semantics which is weakly bisimilar to the distributed message-passing semantics (Section 4). In this semantics, channels are reinterpreted as locations in shared memory. Naively, we might expect that sending a message corresponds to writing to memory, while receiving a message reads it. Instead, all right rules in the sequent calculus write to memory and all left rules read from memory. This presents the opportunity for a simple sequential semantics consistent with the more typical interpretations of intuitionistic logic, but here the role of memory is made explicit (Section 5). The sequential semantics is a refinement of the shared-memory semantics that can be seen as a particularly simple stack-based scheduler. This observation, together with the conceptual tools of adjoint logic [Reed09un, Licata16lfcs, Licata17fscd, Pruiksma19places], allows us to reconstruct SILL [Toninho13esop, Toninho15phd, Griffith16phd], fork/join parallelism [Conway63afips], and futures [Halstead85], three approaches that introduce concurrency into (functional) programming languages (see Sections 7 and 8 and the appendix on fork/join parallelism). As a side effect, we obtain a rigorous treatment of linear futures, already anticipated and used in parallel complexity analysis by Blelloch and Reid-Miller [Blelloch99tocs] without fully developing them.

For simplicity, we focus mainly on purely linear channels and processes. The ideas are robust enough to easily generalize to the case where weakening and contraction are permitted on the logical side, which leads to memory structures that may be referenced zero (weakening) or multiple (contraction) times (Section 9). Both concurrent and sequential semantics generalize, which means we have captured SILL and a typed version of futures in their usual generality: concurrency primitives layered over a fully expressive functional language.

The principal contributions of this paper are:

  1. a shared-memory semantics for session-typed programs based on a computational interpretation of linear logic with a direct proof of bisimilarity to the usual message-passing semantics;

  2. a sequential refinement of this semantics;

  3. generalizations of these constructions from linear logic to adjoint logic, which supports conservative combinations of multiple modes of truth with varying combinations of the structural properties of weakening and contraction;

  4. a logical reconstruction of SILL which combines functional programming with session-typed message-passing concurrency;

  5. a logical reconstruction of typed futures (including a rigorous definition of linear futures) within adjoint logic.

2 A Linear Sequent Calculus for Asynchronous Communication

The Curry-Howard interpretation of linear logic [Girard87tapsoft, Bellin94, Cockett09scp, Caires10concur, Caires16mscs, Wadler12icfp] interprets linear propositions as session types [Honda93concur, Honda98esop], proofs in the sequent calculus as processes, and cut reduction as communication. Communication is synchronous in the sense that sender and receiver proceed to their continuations in one joint step. Asynchronous communication, in the sense that the sender may proceed before the message is received, can also be modeled operationally [Gay10jfp] (departing from the correspondence with linear logic) or logically [DeYoung12csl]. The latter interprets a message as a process whose only job is to deliver that message. Nevertheless, to be faithful to the logical foundation, an implementation would have to support the default of synchronous communication.

One can ask if there is a different formulation of linear logic that forces asynchronous communication because cut reduction itself is asynchronous. An instantiation of this idea can be found in the Solos calculus [Laneve03mscs, Guenot14un], which goes to an extreme in the sense that both the send and the receive of channels are asynchronous operations, although branching on received labels still has to block. Here we present a practical intermediate point where all send actions are asynchronous but receive actions block.

We call the underlying sequent calculus semi-axiomatic. It retains the invertible left or right rules for each connective and replaces the noninvertible ones with axioms, so half the rules will be axioms, while the other half will be the familiar sequent calculus rules. They are presented in Figure 1.

Figure 1: Semi-Axiomatic Sequent Calculus, linear fragment
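As an illustration of this division (our rendering of two representative connectives): for internal choice the left rule keeps its standard invertible form while the right rules collapse to axioms, and for linear implication it is the left rule that becomes an axiom.

$$\frac{}{A \vdash A \oplus B}\ {\oplus}R_1 \qquad \frac{}{B \vdash A \oplus B}\ {\oplus}R_2 \qquad \frac{\Delta, A \vdash C \quad \Delta, B \vdash C}{\Delta, A \oplus B \vdash C}\ {\oplus}L$$

$$\frac{\Delta, A \vdash B}{\Delta \vdash A \multimap B}\ {\multimap}R \qquad \frac{}{A \multimap B,\ A \vdash B}\ {\multimap}L$$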

In the presence of cut, it is easy to show that $\Delta \vdash A$ is provable in the ordinary sequent calculus if and only if it is provable in the semi-axiomatic sequent calculus. In order to state and prove a result that is analogous to cut elimination, we syntactically classify some proofs as normal and then show that for each proof of $\Delta \vdash A$ there exists one that is normal. Normal forms allow some analytic cuts [Smullyan1969analytic, MartinLof94] (which preserve the subformula property) and can also be translated compositionally to cut-free proofs in the ordinary sequent calculus. This is analogous to the different notions of normal form for natural deductions [Prawitz65, Howard69] and combinatory terms [Curry34], which is where the Curry-Howard isomorphism originated.

In this paper, our interest lies elsewhere, so we do not carry out this proof-theoretic analysis (see [DeYoung20phd] for the results on the additive fragment), but we do extract the key steps of cut reduction from the proof of normalization. For this purpose, we label antecedents and succedents with distinct variables so we can better recognize the effects of cut reduction. For internal choice, the reduction selects the branch for the transmitted label and also substitutes for the antecedent.
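Schematically (our rendering, writing ${\oplus}R_k$ for the axiom concluding $y : A_k \vdash x : A_1 \oplus A_2$ and $\mathcal{D}_1, \mathcal{D}_2$ for the branches of the matching left rule, each binding $x'$):

$$\mathsf{cut}_x\big({\oplus}R_k,\ {\oplus}L(x'.\mathcal{D}_1,\ x'.\mathcal{D}_2)\big) \;\longrightarrow\; [y/x']\,\mathcal{D}_k$$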

For linear implication, we substitute for both the antecedent and the succedent.

We see that in both of these cases the cut disappears entirely rather than being reduced to smaller cuts as in the ordinary sequent calculus. In the operational interpretation presented in the next section, axioms correspond to messages, cut reduction corresponds to message receipt, and certain cuts correspond to the (asynchronous) sending of messages.

3 Process Expressions and a Message-Passing Semantics

We now assign process expressions to deductions in the linear semi-axiomatic sequent calculus and formalize their operational semantics. We begin by presenting the types and process terms in Figure 2. Note that we separate out some pieces of process terms as values and others as continuations. This will allow us to simplify not only this message-passing semantics, but also the other systems of operational semantics examined later (Sections 4 and 5). One important feature of the values is that they are all small (provided that a reference to a channel is small), and as such are easily passed between processes over a network, for example. By contrast, continuations can contain arbitrary process terms and can therefore grow unboundedly large. It is in principle possible to send such continuations over a network, but it would be preferable not to, and we use this as a guiding principle for our message-passing semantics, which we see as suitable for distributed computation.

Figure 2: Linear semi-axiomatic sequent calculus, types and process expressions.

A central operation (in this message-passing semantics as well as the shared-memory and sequential semantics of Sections 4 and 5) is to pass a value $V$ to a continuation $K$, written as $V \triangleright K$ and defined by cases on the structure of $V$ and $K$.
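In a notation that is ours rather than the paper's, two representative defining equations: a label with its continuation channel selects the matching branch, and a pair of channels is decomposed by a pattern.

$$\ell(c) \,\triangleright\, \mathsf{case}\,(\ell'(x) \Rightarrow P_{\ell'})_{\ell' \in L} \;=\; [c/x]\,P_\ell \qquad \langle c_1, c_2\rangle \,\triangleright\, (\langle x, y\rangle \Rightarrow P) \;=\; [c_1/x,\ c_2/y]\,P$$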

We will see special cases of this operation come up when we discuss the operational rules, and we will then use this notation to unify many of the rules into a few simpler forms.

3.1 Typing and Operational Rules, In Detail

The typing judgment for processes has the form $\Delta \vdash P :: (x : A)$, where $P$ is a process expression providing $A$ along channel $x$ and using the channels in $\Delta$. The rules for this judgment can be found in Figure 3.

Figure 3: Typing rules based on linear semi-axiomatic sequent calculus.
Figure 4: Operational rules for message-passing

We will also present the dynamic semantics in the form of multiset rewriting rules [Cervesato09ic]. A configuration is a multiset of semantic objects $\mathsf{proc}(c, P)$ where $c$ is a channel and $P$ is a process providing $c$. These semantic objects are ephemeral in the sense that if they appear on the left-hand side of a rewriting rule they are consumed and replaced by the right-hand side. Rules can be applied to any subconfiguration, leaving the remainder of the configuration unchanged. All the channels provided in a configuration must be pairwise distinct. By convention, we write the provider of a channel to the left of its client. In the reduction rules we use $a, b, c$ for channels, which are runtime artifacts, and we continue to use $x, y, z$ for variables in programs that stand for channels created at runtime. In an object $\mathsf{proc}(c, P)$ the expression $P$ will refer to channels, but will not have any free variables. These evaluation rules can be found in Figure 4, and we will discuss configurations in a bit more detail in Section 3.2. We now go on to examine the typing and evaluation rules in detail, one or two at a time. This represents a special case of the semantics of adjoint logic [Pruiksma19places], so we will only discuss what is necessary here.

Cut and Identity

We have the generic typing rules of cut and identity:

Operationally, we have the following rule for cut, which corresponds to spawning a new process:
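In the multiset-rewriting notation (names ours, with $a$ fresh):

$$\mathsf{proc}(c,\ x \leftarrow P;\ Q) \;\longrightarrow\; \mathsf{proc}(a,\ [a/x]P),\ \mathsf{proc}(c,\ [a/x]Q)$$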

This is straightforward: we need to create a new channel between the existing process and the newly spawned process, and then we perform substitutions to allow that new channel to be used. After this, both the new and the old process can continue execution.

In some sense, the most natural semantics for identity is a global renaming of one channel to the other. This is difficult to implement, however, as such a global operation requires communication between all processes which mention either channel, so we instead look for a local version of the semantics. As can be seen from the progress and preservation theorems (Theorems 3.1 and 3.2), it is sufficient for the forwarding process to communicate with the provider of the forwarded channel, and that only when this provider itself tries to communicate along it. This leads to the following definition:

Definition 1

Given a process term $P$, we say that $P$ is poised on a channel $a$ when:

  1. It is a message, trying to send along $a$.

  2. It is a receiving process, trying to receive and decompose a message from $a$.

  3. It is a forward, forwarding between $a$ and some other channel.

We then restrict the identity rule to only apply when the provider of the forwarded channel is poised on it. This is also the solution adopted in Concurrent C0 [Willsey16linearity].
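Writing the forward as $b \leftarrow a$, the resulting rule can be rendered as follows (our notation):

$$\mathsf{proc}(a, P),\ \mathsf{proc}(b,\ b \leftarrow a) \;\longrightarrow\; \mathsf{proc}(b,\ [b/a]P) \qquad (P\ \text{poised on}\ a)$$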

Internal and External Choice.

In order to support programming more naturally, we generalize internal choice and write $\oplus\{\ell : A_\ell\}_{\ell \in L}$ where $L$ is a non-empty finite set of labels. (Empty label sets work in principle, but require additional annotation to make typing possible, and so we omit them here.) By defining $A \oplus B$ as a choice over a two-element label set, we recover the standard binary choice. We then obtain the following rules:

The right rule corresponds to a message carrying a label $k \in L$ and a continuation channel. The left rule provides a branch for every possible label and also a bound name for the continuation channel. Interaction between these objects is captured in the following computation rule.
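Rendered in our notation, with $c.k(c')$ a message along $c$ carrying label $k$ and continuation channel $c'$:

$$\mathsf{proc}(c,\ c.k(c')),\ \mathsf{proc}(d,\ \mathsf{case}\ c\ (\ell(x) \Rightarrow P_\ell)_{\ell \in L}) \;\longrightarrow\; \mathsf{proc}(d,\ [c'/x]\,P_k)$$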

This rule represents receipt of the message along its channel. Note that the result is exactly a passing operation $V \triangleright K$: we pass the value to the continuation to construct the new process term. This pattern will be repeated in subsequent computation rules (though we will not call attention to it further). There is no corresponding rule to send such a message, since sending is achieved asynchronously by spawning a message process (using cut).

External choice behaves dually. The left rule, now an axiom, represents a message with a label and a continuation channel, while the right rule receives and branches on such a message. These rules can be found in Figure 3. Computationally, we have

The first process above continues, but now providing the continuation channel that is received with the message.

Multiplicative Unit.

The right rule for the multiplicative unit corresponds to a message that also closes the channel after it is received, so there is no continuation. The left rule just waits for this message and then proceeds.

Even though the usual right rule for the unit already has no premises, we classify it as an axiom here to emphasize the principle that in the semi-axiomatic sequent calculus all noninvertible rules become axioms. Operationally, the unit is straightforward.

Linear Implication and Multiplicative Conjunction.

As is true generally for session types, the provider of a linear implication $A \multimap B$ receives a channel of type $A$. But because communication is asynchronous, it also must receive a continuation channel, so the message pairs the channel being sent with the continuation channel. The provider has a corresponding receiving construct, as seen below:

The operational rule is then rather straightforward:
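In the same assumed notation, with $c.\langle d, c'\rangle$ the message sending channel $d$ along $c$ together with continuation channel $c'$, the provider continues by providing $c'$:

$$\mathsf{proc}(c,\ \mathsf{recv}\ c\ (\langle x, y\rangle \Rightarrow P)),\ \mathsf{proc}(c',\ c.\langle d, c'\rangle) \;\longrightarrow\; \mathsf{proc}(c',\ [d/x,\ c'/y]\,P)$$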

As in the additive rules (for internal and external choice), this rule represents the receipt of a message (here, a pair of channels) and the continuing execution of a new process term.

Multiplicative conjunction is again dual, just reversing the roles of sender and receiver. Rules can be found in Figures 3 and 4.

Recursive Types and Processes.

So far, we have strictly adhered to the logical foundation provided by the Curry-Howard interpretation of linear logic in its semi-axiomatic formulation. In a programming language, we also need recursive types and recursively defined processes. As is customary in session types, we use equirecursive types, collected in a signature $\Sigma$ in which we also collect recursive process definitions and their types. For each type definition $t = A$, the type $A$ must be contractive so that we can treat types equirecursively, with a straightforward coinductive definition of and efficient algorithm for type equality [Gay05acta].

A definition of a process $p$ has the form $x \leftarrow p\ \overline{y} = P$, where the type of $p$ is given by a corresponding declaration. We will use the common notation $\overline{y}$ for the list $y_1, \ldots, y_n$ in order to improve readability.

For a valid signature $\Sigma$ we require that each declaration has a corresponding definition whose body is well-typed against that declaration. This means that all type and process definitions can be mutually recursive.

A call is then typed by

Operationally, we simply unfold the definition, so this rule does not require communication.

In the remainder of this paper we assume that we have a fixed valid signature $\Sigma$, so we annotate neither the typing judgment nor the computation rules with an explicit signature.

3.2 Overview and Results

Summary.

The semantics presented in this section is a message-passing semantics and therefore suitable for distributed computation. It is easy to identify the messages and the threads of control that define the state of each process. A message consists of a channel $c$ and a value $V$ carried along $c$. The values are all small, as discussed before, provided only that the representation of a channel is small. Messages are uniformly received by processes that apply a continuation expression $K$ to the incoming value.

Process Configurations.

A process configuration is either a single process $\mathsf{proc}(c, P)$ providing a channel $c$, the empty configuration $(\cdot)$, or the join of two configurations. We think of the join as an associative and commutative operator and impose the condition that all channels provided in a configuration are distinct. Configurations use some channels and provide others, so we type them with a judgment $\Delta \vdash \mathcal{C} :: \Delta'$, meaning that $\mathcal{C}$ uses the channels in $\Delta$ and provides those in $\Delta'$, defined by the following rules.

In the first rule, the context may contain channels not referenced by the process itself, which are therefore still available to further clients to its right; the rule for the empty configuration reads similarly. We then have our first theorems: preservation and progress. The proofs follow standard patterns in the literature [Pruiksma19places] and are therefore omitted here. Their origins lie in the correctness of the cut reductions presented for the semi-axiomatic sequent calculus.

Theorem 3.1 (Type Preservation [Pruiksma19places])

If $\Delta \vdash \mathcal{C} :: \Delta'$ and $\mathcal{C} \longrightarrow \mathcal{C}'$ then $\Delta \vdash \mathcal{C}' :: \Delta'$.

Theorem 3.2 (Progress [Pruiksma19places])

If $\Delta \vdash \mathcal{C} :: \Delta'$ then either

  1. $\mathcal{C} \longrightarrow \mathcal{C}'$ for some $\mathcal{C}'$, or

  2. every $\mathsf{proc}(c, P)$ in $\mathcal{C}$ is poised on some channel.

3.3 Examples

Example: Binary Numbers.

As a first simple example we consider binary numbers, defined as a type bin with labels b0 and b1 for the two bits and e for the end of the number: $\mathsf{bin} = \oplus\{\mathsf{b0} : \mathsf{bin},\ \mathsf{b1} : \mathsf{bin},\ \mathsf{e} : \mathbf{1}\}$.

A number is then represented by the sequence of its bits as labels, least significant bit first, terminated by e. The first bit to be received arrives along a channel, say $a_0$, and carries with it the continuation channel for the remaining bits. Writing out a whole number as a configuration yields a chain of message processes, one per bit.
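For instance (our own rendering, writing $a.\ell(a')$ for a message along $a$ with label $\ell$ and continuation channel $a'$), the number $6 = (110)_2$ becomes

$$\mathsf{proc}(a_0,\ a_0.\mathsf{b0}(a_1)),\ \mathsf{proc}(a_1,\ a_1.\mathsf{b1}(a_2)),\ \mathsf{proc}(a_2,\ a_2.\mathsf{b1}(a_3)),\ \mathsf{proc}(a_3,\ a_3.\mathsf{e}(a_4)),\ \mathsf{proc}(a_4,\ \mathsf{close}\ a_4)$$

with the least significant bit arriving first along $a_0$.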

Example: Computing with Binary Numbers.

We implement a process succ which receives the bits of a binary number $n$ along a channel $x$ and produces the bits of $n+1$ along $y$. For a spawn/cut in the examples, we treat a line break like a semicolon. Also, in the examples it is helpful to have a reverse cut, where the order of the premises is reversed.

Operationally, cut and reverse cut behave identically. Using the channel names, it is easy to disambiguate which syntactic form of cut is used, so we will point out uses of reverse cut only in this first example. As a general convention in the example processes, we write the continuation of a channel $x$ as $x'$. The code for the successor process is in Figure 5.

Figure 5: Successor and plus2 processes on binary numbers (succ sends each output bit along $y$, forwards the remaining bits when no carry is needed, carries via a recursive call, and terminates by a forward at type $\mathbf{1}$; plus2 spawns the first successor in the pipeline and continues as the second).
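To make the pipeline concrete, the following Go sketch is our own analogue of succ and plus2 (all names and the representation are assumptions: each session channel becomes a Go channel of bit labels, and closing a channel plays the role of the label e):

    package main

    import "fmt"

    // Bit labels of the type bin; closing a channel stands in for e.
    type bit int

    const (
        b0 bit = iota
        b1
    )

    // succ reads the bits of n (least significant first) from in and
    // writes the bits of n+1 to out: b1 becomes b0 with a carry (the
    // recursive call), and the first b0 absorbs the carry as b1.
    func succ(in <-chan bit, out chan<- bit) {
        defer close(out)
        for x := range in {
            if x == b0 {
                out <- b1
                for y := range in { // forward the remaining bits
                    out <- y
                }
                return
            }
            out <- b0 // carry
        }
        out <- b1 // the carry ran past the end: append a new top bit
    }

    // plus2 composes two successors into a concurrently executing
    // pipeline, just as the cut in plus2 spawns the first succ.
    func plus2(in <-chan bit, out chan<- bit) {
        mid := make(chan bit) // the fresh channel introduced by cut
        go succ(in, mid)      // spawn first successor in pipeline
        succ(mid, out)        // continue as second successor
    }

    func main() {
        in, out := make(chan bit), make(chan bit)
        go func() { // 6 = (110)_2, least significant bit first
            for _, x := range []bit{b0, b1, b1} {
                in <- x
            }
            close(in)
        }()
        go plus2(in, out)
        for x := range out {
            fmt.Print(x, " ") // prints: 0 0 0 1, i.e., 8 = (1000)_2
        }
        fmt.Println()
    }

The cut in plus2 is visible as the goroutine: the spawned first successor runs concurrently while the parent continues as the second.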

To implement plus2 we can just compose succ with itself. In the message-passing semantics, the two successor processes form a concurrently executing pipeline. This definition is also in Figure 5. We have taken a small syntactic liberty in that the second line of plus2 is a bare call, which technically abbreviates a cut whose continuation is a forward. We will continue to use the same abbreviation in the remaining examples.

Example: A Binary Counter.

As a second example, now using external choice, we implement a binary counter for which the client has two choices: it can either increment the counter (message inc) or it can request the value in the form of a binary number from the preceding example (message val).

We implement the counter as a chain of processes, each carrying one bit of information, and a final process representing the end of the chain; see Figure 6. An increment message travels through the chain as far as necessary to account for all carries, and the value message travels all the way to the end, converting each process bit0, bit1, and end to its corresponding message b0, b1, and e.

Figure 6: A binary counter (on inc, bit0 continues as bit1, while bit1 continues as bit0 and carries the increment along; on val, each process responds with its bit, converts the remaining bits, and forwards; the end process spawns a new end and continues as bit1 on a carry, and responds with the empty sequence, closing the channel, on val).
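In the same spirit, a Go sketch of the counter chain (again entirely our own illustration, reusing bit, b0, and b1 from the previous sketch; the req type encodes the external choice between inc and val):

    // Requests of the external choice: inc (increment) or val (read value).
    type req struct {
        val   bool       // false: inc, true: val
        reply chan<- bit // for val: bits are sent here, then the channel is closed
    }

    // bitProc holds one bit b and a channel next to the rest of the chain.
    func bitProc(b bit, self <-chan req, next chan<- req) {
        for r := range self {
            if r.val {
                r.reply <- b // respond with this bit along the reply channel
                next <- r    // convert the remaining bits and forward
                return
            }
            if b == b0 {
                b = b1 // increment without carry: continue as bit1
            } else {
                b = b0    // continue as bit0 ...
                next <- r // ... and carry the increment along the chain
            }
        }
    }

    // endProc represents the end of the chain.
    func endProc(self chan req) {
        if r := <-self; r.val {
            close(r.reply) // respond with the empty bit sequence
        } else {
            next := make(chan req)
            go endProc(next)        // spawn a new end process
            bitProc(b1, self, next) // continue as bit1
        }
    }

    func counterDemo() {
        c := make(chan req)
        go endProc(c)
        for i := 0; i < 5; i++ {
            c <- req{} // five increments
        }
        reply := make(chan bit)
        c <- req{val: true, reply: reply}
        for x := range reply {
            fmt.Print(x, " ") // prints: 1 0 1, i.e., 5 least significant bit first
        }
    }

Note that requesting the value consumes the counter, matching the linear typing of the session.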

4 A Shared Memory Semantics

The semantics presented in the previous section is a message-passing semantics and therefore suitable for distributed computation. As we have seen, it is easy to identify the messages and the threads of control that define the state of each process. Notably, messages are small: no process expressions or complicated data structures need to be transmitted. As for threads of control, cut/spawn spawns a new process and continues with its own remainder; id/forward terminates; and in the rules for the logical connectives the process receiving the message continues while the message terminates. A receiving process represents a thread of control with a continuation expression $K$, which proceeds as $V \triangleright K$ when a value $V$ is received along its channel.

We can now implement this message-passing semantics using a distributed model of computation, or we could implement it using shared memory, depending on our application or programming language. In fact, existing implementations of session types (see, for example, [Jespersen15wgp, Scalas16ecoop, Willsey16linearity, Kouzapas18scp]) leverage libraries that may support one or the other or even both. Because of the varying levels of abstraction and the complexity of the linear typing invariant, proving the correctness of such an implementation is difficult, and we are not aware of any successful efforts in that direction.

Our approach to understanding the implementation of session types is to give a high-level definition of a shared memory semantics for the session-typed language described in Section 3. Note that we continue to use the same process terms and typing rules shown in Figures 3 and 2, changing only the semantics. We use the style of substructural operational semantics (SSOS) [Pfenning04aplas] and leverage the technique of destination-passing style [Pfenning09lics], which also has shown promise for a high-performance implementation [Shaikhha17fhpc]. A significant advantage of SSOS is the modularity it affords, which allows us to subsequently integrate the semantics with others (see Section 6).

A key restriction behind the message-passing semantics is that messages are small. In particular, they should carry no code in the form of closures—constructing and controlling the size of closures is a well-known problem in the practice of distributed computing in high-level languages (see, for example, [Miller16onward]). In the shared memory setting no such restriction is necessary since code can reside in shared memory without difficulty. On the other hand, we would like to make the allocation of memory explicit and ensure that concurrent access is safe.

Here are the central ideas of the shared-memory semantics:

  1. Channels are reinterpreted as addresses in shared memory.

  2. Cut/spawn is the only way to allocate a new cell in shared memory.

  3. Identity/forward will move data between cells.

  4. A process that provides a channel $a$ will write to location $a$ and then terminate.

  5. A process that is poised on a channel $a$ will read from location $a$ (once it is available) and then continue.

In addition, memory cells that have been read can be deallocated due to linear typing. The counterintuitive part of this interpretation (when using the message-passing semantics as a point of reference) is that a process providing a channel of negative type does not read a value from shared memory. Instead, it writes a continuation to memory and terminates. Conversely, a client of such a channel does not write a value to shared memory. Instead, it continues by jumping to the continuation. A memory cell may therefore contain either a value $V$ or a continuation $K$, a class of expressions we denote by $W$.
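A minimal sketch of such a write-once cell in Go (our own illustration, not the paper's formal semantics): a buffered channel of capacity one makes the single write non-blocking for the provider, lets the client block until the cell is filled, and the receive is the destructive read that linearity licenses. The content may be a continuation (here, a Go function) as well as a value.

    // A write-once, read-once cell, modeled as a buffered channel of
    // capacity one. Linearity guarantees exactly one writer and one reader.
    type cell[T any] chan T

    func alloc[T any]() cell[T] { return make(cell[T], 1) } // allocation at cut/spawn

    func (c cell[T]) write(w T) { c <- w }     // the provider writes and then terminates
    func (c cell[T]) read() T   { return <-c } // the client blocks until the cell is written

    // The content may be a continuation rather than a value:
    //   k := alloc[func(int) int]()
    //   go func() { k.write(func(n int) int { return n + 1 }) }()
    //   r := k.read()(6) // the client jumps to the stored continuation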

We formalize these principles using the following semantic objects:

  1. $\mathsf{thread}(c, P)$: a thread executing $P$ with destination $c$

  2. $\mathsf{cell}(c, \_)$: a cell $c$ that has been allocated, but not yet written

  3. $\mathsf{cell}(c, W)$: a cell $c$ containing $W$

We maintain the invariant that in a configuration $\mathsf{cell}(c, \_)$ either appears together with $\mathsf{thread}(c, P)$, or we have just $\mathsf{cell}(c, W)$, and moreover that if two semantic objects provide the same channel $c$, then they are exactly such a thread/cell pair. The invariant could be made slightly cleaner by removing the $\mathsf{cell}(c, \_)$ objects, which leads to an interpretation where cells are allocated lazily, just before they are written. While this has some advantages, it is unclear how to inform the thread that will eventually read from the new cell where said cell can be found, and so, in the interest of having a realistically implementable semantics, we simply allocate an empty cell on spawning a new thread, allowing the parent thread to see the location of that cell.

At a lower level of abstraction the continuation $K$ would presumably be a closure, pairing a code pointer with an environment assigning addresses to its free channel variables. Unlike messages, values do not carry their destination, so we type cell contents $W$ with a straightforward judgment assigning them a type.

The typing rules for configurations (shown below) are similar to those for the message-passing semantics, although it is convenient to type a thread together with its pre-allocated destination.

The rules for the operational semantics (Figure 7) formalize the intuitions we have presented for how shared memory should work.

Figure 7: Operational rules for shared memory

Note in particular how the last rule reverses the earlier roles of messages and processes. The thread computing a value for its destination continues execution by jumping to the stored continuation after substitution.

This semantics satisfies the expected variants of preservation and progress. Preservation does not change at all.

Theorem 4.1 (Type Preservation)

If $\Delta \vdash \mathcal{C} :: \Delta'$ and $\mathcal{C} \longrightarrow \mathcal{C}'$ then $\Delta \vdash \mathcal{C}' :: \Delta'$.

Progress changes in that a configuration that cannot take a step must have filled in all of its destination cells. Note that this progress theorem uses the distinction between cells and threads in much the same way that the progress theorem for the message-passing semantics used the concept of a process being poised on a channel.

Theorem 4.2 (Progress)

If $\Delta \vdash \mathcal{C} :: \Delta'$ then either

  1. $\mathcal{C} \longrightarrow \mathcal{C}'$ for some $\mathcal{C}'$, or

  2. for every channel $c$ there is an object $\mathsf{cell}(c, W)$.

We can now establish a weak bisimulation with the message-passing semantics of Section 3. We define a relation between configurations in the message-passing and shared-memory semantics, relating each message or process on the one side to the corresponding thread or memory cell on the other.

We extend this relation to whole configurations compositionally, recalling that the join operation for configurations is associative and commutative with the empty configuration as its unit.

It is easy to see that the resulting relation is a weak bisimulation. In the case of cut, we have to take advantage of the stipulation that new names may be chosen arbitrarily as long as they are fresh.

Theorem 4.3 (Weak Bisimulation)

The relation defined above is a weak bisimulation between the message-passing and the shared-memory semantics.

The threads of control in the two semantic specifications are different, however. As we can see from the bisimulation itself, some messages correspond to threads and others to memory cells. We return to the examples to illustrate the differences.

Example Revisited: Computing with Binary Numbers, Figure 5.

Recall that a succ process was a transducer from sequences of bits to sequences of bits, all of them messages. Under the shared-memory semantics, succ is still the only process, and it reads the elements of the input sequence from memory and writes the elements of the output sequence to memory. In the plus2 example, under both interpretations there are two succ processes in a pipeline operating concurrently, either on messages or on memory content. We see from this that the two interpretations are quite close for positive types (such as $\oplus$, $\otimes$, and $\mathbf{1}$).

Example Revisited: A Binary Counter, Figure 6.

In the message-passing semantics for the binary counter we have a sequence of processes, one for each bit and one to mark the end. An increment message inc is passed through this sequence until there are no further carries to be processed. In the shared-memory semantics each bit is a continuation expression stored in memory. An increment is actually a process that executes such a continuation, writes a new continuation to memory, and then either reads and executes the next continuation (if necessitated by a carry) or terminates. When receiving a value message, the bits b0 and b1 are written to memory as each continuation bit0 and bit1 is reached and executed.

This alternative operational interpretation is greatly aided by our insistence on an intuitionistic sequent calculus. It is precisely the asymmetry between multiple antecedents and a single conclusion that supports the shared memory interpretation, together with the idea of taking the semi-axiomatic sequent calculus as our formal basis.

5 A Sequential Semantics

Once we have introduced threads communicating through shared memory in the particular way we have, it is only a small step to a sequential semantics. The key idea is that for a cut/spawn $x \leftarrow P;\ Q$ we execute $P$ with destination $x$ and wait until $P$ has terminated (and therefore deposited a value in the location denoted by $x$). We then proceed with the execution of $Q$. This would not have been possible in the message-passing semantics because processes pause and resume as messages are exchanged between them. We choose here a call-by-value sequential semantics because futures (see Section 8) were first described in this context. Of course, in the language so far we can code sequential computation by performing additional synchronization before proceeding with the continuation of a cut. As with the coding of asynchronous communication under a synchronous semantics, however, this would present at best a tortuous path towards an understanding of how to combine sequential and concurrent modes of computation, since a type by itself would remain agnostic to its intended interpretation.
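Reusing the cell sketch from Section 4, the contrast between the two readings of cut can be made concrete (our illustration; cutConc and cutSeq are hypothetical helpers, not part of the paper):

    // Concurrent cut: spawn the child and continue immediately;
    // synchronization happens only when the parent actually reads the cell.
    func cutConc[T any](child func(cell[T]), cont func(cell[T])) {
        c := alloc[T]()
        go child(c)
        cont(c)
    }

    // Sequential, call-by-value cut: the child runs to completion (and has
    // therefore written the cell) before the parent continues.
    func cutSeq[T any](child func(cell[T]), cont func(cell[T])) {
        c := alloc[T]()
        child(c)
        cont(c)
    }

Under cutSeq the child has deposited its value before the parent runs, which is exactly the call-by-value discipline described above.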

The idea of the sequential semantics is to split the thread objects into two: an evaluation object, which evaluates a process with a given destination, and a continuation object, which waits for the evaluation of one channel to complete before continuing with the evaluation of its own process with its own destination. In addition, we have a special case of the cell object, a returned value, which means that a value has been returned to its destination but has not yet been committed to storage. The configuration then always has one of two forms, adhering to our convention to show a provider to the left of its client.

The continuation objects are threaded in the form of a stack, while the cells represent a global store as before. However, we no longer have uninitialized cells, since a cell will always be written by its provider before it is accessed by the client. When typing configurations we have to express the ordered threading of continuations. We accomplish this by generalizing the typing judgment so that it additionally tracks either nothing or a single channel denoting the top of the continuation stack.

The dynamics then consists of the rules in Figure 8.

Figure 8: Sequential operational rules

Using these rules, we have new but uninteresting versions of preservation and progress.

Theorem 5.1 (Type Preservation)

If $\Delta \vdash \mathcal{C} :: \Delta'$ and $\mathcal{C} \longrightarrow \mathcal{C}'$ then $\Delta \vdash \mathcal{C}' :: \Delta'$.

Progress changes in that a configuration that cannot take a step must have filled in all of its destinations either by writing values to memory or by returning a value to the continuation on top of the stack.

Theorem 5.2 (Progress)

If $\Delta \vdash \mathcal{C} :: \Delta'$ then either

  1. $\mathcal{C} \longrightarrow \mathcal{C}'$ for some $\mathcal{C}'$, or

  2. we have

    1. an object $\mathsf{cell}(c, W)$ for every channel $c$ other than the top of the continuation stack, and

    2. if the continuation stack is nonempty, a returned value for the channel on its top.

The sequential semantics is no longer bisimilar to the concurrent one, but it represents a particular schedule. A simple way of stating this is that the concurrent semantics simulates the sequential one, as evidenced by a weak simulation relation constructed analogously to the bisimulation of Section 4. More might be said by developing a notion of observational equivalence [Perez14ic], but this is beyond the scope of this paper.

Theorem 5.3 (Weak Simulation)

The relation described above is a weak simulation.

Example Revisited: Computing with Binary Numbers, Figure 5.

Under the sequential semantics, a single succ process will consume the given input sequence until it encounters the first b0 (which requires no carry), building up continuation objects that then construct the output. When the two are composed in plus2, the first succ must finish its computation and write all results to memory before the second succ starts.

Example Revisited: A Binary Counter, Figure 6.

In the binary counter, an increment represents the only thread of control. It invokes the sequence of continuations in memory in turn until it encounters a bit0 process, at which point it returns and writes fresh continuations. When multiple increments are executed, each has to finish completely before the next one starts. A value message instead traverses the entire sequence, invoking the stored continuations, and then writes corresponding bits to memory as it returns from each.

6 Adjoint Logic: Combining Languages

At this point we have developed several different versions of an operational semantics for the same source language and explored their relationship. They have different use cases: the message-passing semantics is suitable for distributed computation, the shared-memory semantics for multi-threaded computation, and the sequential semantics for functional programming (linear, for now). One obvious generalization of this is to allow the structural rules of weakening and contraction on the logical side and, correspondingly, persistent cells on the computational side. Another question is how we can combine these languages in a coherent manner, since an application may demand, say, mostly sequential programming with some concurrency.

In this section we review adjoint logic [Reed09un, Licata16lfcs, Licata17fscd, Pruiksma19places], a general framework for combining logics with different structural properties. In the next sections we explore particular combinations to reconstruct the essence of SILL [Toninho13esop, Toninho15phd, Griffith16phd] and futures [Halstead85] as instances of the general schema.

In adjoint logic, propositions are stratified into distinct layers, each identified by a mode $m$. For each mode $m$ there is a set $\sigma(m) \subseteq \{\mathsf{W}, \mathsf{C}\}$ of structural properties satisfied by antecedents of mode $m$ in a sequent. Here, $\mathsf{W}$ stands for weakening and $\mathsf{C}$ for contraction. For simplicity, we always assume exchange is possible. In addition, any instance of adjoint logic specifies a preorder $\geq$ between modes, expressing that the proof of a proposition of mode $k$ may depend on assumptions of mode $m$ only if $m \geq k$. In order for cut elimination to hold, this ordering must be compatible with the structural properties: if $m \geq k$ then $\sigma(m) \supseteq \sigma(k)$. Sequents then have the form $\Gamma \vdash A_m$ where, critically, each antecedent $B_k$ in $\Gamma$ satisfies $k \geq m$. We express this concisely as $\Gamma \geq m$.

A prototypical and inspirational example is Benton's LNL [Benton94csl], combining linear and nonlinear intuitionistic logic. There are two modes, a linear mode $\mathsf{L}$ with $\sigma(\mathsf{L}) = \{\,\}$ and an unrestricted mode $\mathsf{U}$ with $\sigma(\mathsf{U}) = \{\mathsf{W}, \mathsf{C}\}$. Critically, $\mathsf{U} > \mathsf{L}$, so the proof of a linear proposition can depend on unrestricted assumptions, while a proof of an unrestricted proposition can only depend on other unrestricted propositions but not linear ones.

We can go back and forth between the layers using the shifts $\uparrow^m_k A_k$ (up from $k$ to $m$, requiring $m \geq k$) and $\downarrow^m_k A_m$ (down from $m$ to $k$, requiring $m \geq k$). A given pair $\uparrow^m_k$ and $\downarrow^m_k$ forms an adjunction, justifying the name adjoint logic.

We also sometimes restrict the available propositions of a given mode. For example, we can recover a formulation of intuitionistic linear logic [Girard87tapsoft] with the two modes $\mathsf{U}$ and $\mathsf{L}$ from LNL with $\mathsf{U} > \mathsf{L}$, but stipulating that the propositions of mode $\mathsf{U}$ are only those of the form $\uparrow^{\mathsf{U}}_{\mathsf{L}} A_{\mathsf{L}}$. Then $!A = \downarrow^{\mathsf{U}}_{\mathsf{L}} \uparrow^{\mathsf{U}}_{\mathsf{L}} A$.

The formulation of adjoint logic for a fixed set of modes denoting linear, affine, and unrestricted propositions has previously been proposed [Pfenning15fossacs]. More recently, this has been generalized to a uniform message-passing semantics for all of adjoint logic [Pruiksma19places] which gives rise to new communication patterns. One example is multicast, where a single message is sent to multiple recipients.