
Structured Parallel Programming Language Based on True Concurrency

Based on our previous work on algebraic laws for true concurrency, we design the skeleton of a structured parallel programming language for true concurrency called SPPLTC. Unlike most programming languages, SPPLTC has an explicit parallel operator as an essential operator. SPPLTC can structure a truly concurrent graph into a normal form, which means that it is possible to implement a compiler for SPPLTC.


1 Introduction

Parallel computing [4] [3] is becoming more and more important. Traditionally, parallelism mostly appeared in distributed computing, since distributed systems are usually autonomous while the local computer was single-core, single-processor, and timed (timed computing is serial in nature). Today, thanks to progress in hardware, multi-core and multi-processor machines and GPUs make the local computer truly parallel.

Parallel programming languages have a relatively long research history. There have always been two approaches: the structured way, and the graph (true concurrency) way. The structured way is often based on interleaving semantics, as in the process algebra CCS. Since parallelism under interleaving is not a fundamental computational pattern (the parallel operator can be replaced by alternative composition and sequential composition), the parallel operator usually does not occur as an explicit operator, as in the mainstream programming languages C, C++, Java, etc.
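The interleaving reduction mentioned above (for atomic actions, a ∥ b behaves as a·b + b·a) can be sketched in a few lines; this is an illustrative Python sketch, not part of SPPLTC, and the dot-joined string encoding of traces is an assumption for display:

```python
from itertools import permutations

def interleavings(actions):
    """Expand a parallel composition of atomic actions into the set of
    all sequential interleavings, e.g. a || b  =>  {a.b, b.a}."""
    return {".".join(p) for p in permutations(actions)}

print(interleavings(["a", "b"]))  # {'a.b', 'b.a'} (set order may vary)
```

With three actions the expansion already has 3! = 6 alternatives, which illustrates why interleaving treats parallelism as a derived, rather than fundamental, pattern.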

The graph way is also called true concurrency. There have been some attempts to structure the graph [2] [5], but these works only considered the causal relation in the graph and neglected conflict and even communication. There have also been industrial efforts to adopt the graph way, such as the workflow description language WSFL. The later workflow description language BPEL adopts both the structured way and the graph way. Why does BPEL not adopt the structured way only? Because the expressive power of the structured way is limited. Then why does BPEL not adopt the graph way only? Because the graph could not be structured at that time, and the structured way is the basis for implementing a compiler.

In our previous work on true concurrency, we found the algebraic laws for true concurrency, called APTC [1]. APTC not only can be used to verify the behaviors of computational systems directly, but also implies a way to structure the truly concurrent graph. So, based on APTC, we design the skeleton of a structured programming language for true concurrency called SPPLTC.

This paper is organized as follows. In Section 2 we introduce APTC briefly; for more details, please refer to [1]. We introduce the syntax of SPPLTC in Section 3, the operational semantics of SPPLTC in Section 4, and the structuring algorithm in Section 5. Finally, we conclude this paper in Section 6.

2 APTC

APTC captures several computational properties in the form of algebraic laws, and proves soundness and completeness modulo truly concurrent bisimulation equivalence / rooted branching truly concurrent bisimulation equivalence. These computational properties are organized in a modular way by use of the concept of conservative extension, in the following modules. Note that every algebra is composed of constants and operators: the constants are the computational objects, while the operators capture the computational properties.

  1. BATC (Basic Algebra for True Concurrency). BATC has sequential composition · and alternative composition + to capture causality and conflict. The constants are ranged over by E, the set of atomic events. The algebraic laws on · and + are sound and complete modulo truly concurrent bisimulation equivalences, such as pomset bisimulation ∼_p, step bisimulation ∼_s, history-preserving (hp-) bisimulation ∼_hp and hereditary history-preserving (hhp-) bisimulation ∼_hhp.

  2. APTC (Algebra for Parallelism in True Concurrency). APTC uses the whole parallel operator ≬ and the parallel operator ∥ to model parallelism, and the communication merge | to model causality (communication) among different parallel branches. Since a communication may be blocked, a new constant called deadlock δ is extended to APTC, and a new unary encapsulation operator ∂_H is introduced to eliminate δ, which may exist in processes. There is also a conflict elimination operator Θ to eliminate conflicts existing in different parallel branches. The algebraic laws on these operators are also sound and complete modulo truly concurrent bisimulation equivalences, such as pomset bisimulation ∼_p, step bisimulation ∼_s, and history-preserving (hp-) bisimulation ∼_hp. Note that all these operators in a process, except the parallel operator ∥, can be eliminated by deductions on the process using the axioms of APTC, so that the process is eventually expressed by ·, + and ∥ only; this is also why the bisimulations are called truly concurrent semantics.

  3. Recursion. To model infinite computation, recursion is introduced into APTC. In order to obtain a sound and complete theory, guarded recursion and linear recursion are needed. The corresponding axioms are RSP (Recursive Specification Principle) and RDP (Recursive Definition Principle): RDP says that the solutions of a recursive specification can represent the behaviors of the specification, while RSP says that a guarded recursive specification has only one solution. They are sound with respect to APTC with guarded recursion modulo truly concurrent bisimulation equivalences, such as pomset bisimulation ∼_p, step bisimulation ∼_s and history-preserving (hp-) bisimulation ∼_hp, and they are complete with respect to APTC with linear recursion modulo the same equivalences.

  4. Abstraction. To abstract internal implementations away from the external behaviors, a new constant called the silent step τ is added to APTC, and a new unary abstraction operator τ_I is used to rename the actions in I into τ (the theory that results from adding the silent step and the abstraction operator is called APTC_τ). The recursive specification is adapted to guarded linear recursion specifically to prevent infinite τ-loops. The axioms for τ and τ_I are sound modulo rooted branching truly concurrent bisimulation equivalences (a kind of weak truly concurrent bisimulation equivalence), such as rooted branching pomset bisimulation ≈_rbp, rooted branching step bisimulation ≈_rbs and rooted branching history-preserving (hp-) bisimulation ≈_rbhp. To eliminate infinite τ-loops caused by τ_I and obtain completeness, CFAR (Cluster Fair Abstraction Rule) is used to prevent infinite τ-loops in a constructible way.

APTC can be used to verify the correctness of system behaviors, by deduction on the description of the system using the axioms of APTC. Based on the modularity of APTC, it can be extended easily and elegantly. For more details, please refer to the manuscript of APTC [1].
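As a toy illustration of how laws of the kind listed in module 1 can be checked on finite terms, here is a sketch under a trace-set semantics. This is a deliberate simplification: APTC's equivalences are bisimulations, which are strictly finer than trace equivalence, and the class and function names are illustrative, not part of APTC:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Atom:
    name: str

@dataclass(frozen=True)
class Seq:       # sequential composition .
    left: object
    right: object

@dataclass(frozen=True)
class Alt:       # alternative composition +
    left: object
    right: object

def traces(t):
    """Completed traces of a finite term: sequential composition
    concatenates traces, alternative composition unions them."""
    if isinstance(t, Atom):
        return {(t.name,)}
    if isinstance(t, Seq):
        return {l + r for l in traces(t.left) for r in traces(t.right)}
    if isinstance(t, Alt):
        return traces(t.left) | traces(t.right)
    raise TypeError(t)

a, b, c = Atom("a"), Atom("b"), Atom("c")
# commutativity of +
print(traces(Alt(a, b)) == traces(Alt(b, a)))                          # True
# right distributivity of . over +
print(traces(Seq(Alt(a, b), c)) == traces(Alt(Seq(a, c), Seq(b, c))))  # True
```

A bisimulation-based checker would compare transition systems rather than trace sets, but the trace view is enough to see the shape of the algebraic laws.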

3 Syntax

Let τ denote the silent step (an internal action or event) and define Act = E ∪ {τ} to be the set of actions, ranged over by a, b, c, .... We write P for the set of processes. For each process constant schema A, a defining equation of the form

A ≝ P

is assumed, where P is a process.

The standard BNF grammar of the syntax of SPPLTC can be defined as follows:

P ::= A | a | P · P | P + P | P ∥ P

where · defines sequential computation, which is causality in execution time; + defines alternative computation, which is a kind of conflict; and ∥ explicitly defines concurrency. There are other operators in APTC [1], such as the communication merge |, but these operators can be replaced by the above three fundamental operators.
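To make the three core constructs concrete, here is a small recursive-descent parser sketch in Python for an assumed ASCII rendering of the operators (`.` sequential, `+` alternative, `||` parallel, parentheses for grouping); the concrete syntax, the tuple-shaped AST, and the precedence chosen here are illustrative assumptions, not part of SPPLTC's definition:

```python
import re

# Precedence from loosest to tightest: +  <  ||  <  .
TOKEN = re.compile(r"\s*(\|\||[.+()]|[a-z]\w*)")

def tokenize(src):
    pos, out = 0, []
    while pos < len(src):
        m = TOKEN.match(src, pos)
        if not m:
            raise SyntaxError(f"bad input at {pos}")
        out.append(m.group(1))
        pos = m.end()
    out.append("$")  # end-of-input marker
    return out

def parse(src):
    toks = tokenize(src)
    i = 0
    def peek():
        return toks[i]
    def eat(t):
        nonlocal i
        assert toks[i] == t, f"expected {t}"
        i += 1
    def alt():                       # P + P
        node = par()
        while peek() == "+":
            eat("+"); node = ("+", node, par())
        return node
    def par():                       # P || P
        node = seq()
        while peek() == "||":
            eat("||"); node = ("||", node, seq())
        return node
    def seq():                       # P . P
        node = atom()
        while peek() == ".":
            eat("."); node = (".", node, atom())
        return node
    def atom():                      # constant / atomic action / ( P )
        nonlocal i
        if peek() == "(":
            eat("("); node = alt(); eat(")")
            return node
        name = peek(); i += 1
        return name
    tree = alt()
    eat("$")
    return tree

print(parse("a.b + (c || d)"))  # ('+', ('.', 'a', 'b'), ('||', 'c', 'd'))
```

Because the grammar has only three binary operators and constants, such a parser is a natural front end for the structuring algorithm of Section 5.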

As a programming language, whether imperative or functional, SPPLTC should contain more ingredients, such as the set of numbers, the set of truth values, the set of store locations, arithmetic expressions, boolean expressions, commands or functions, and iteration or recursion. The above grammar definition is a simplification of a traditional programming language, with a focus on parallelism. We treat atomic actions as commands: they can operate on values, but the details of the operations are omitted. The if-else conditional is simplified as alternative composition, with the condition omitted. And we neglect iteration, because APTC already contains recursion.
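The claim that iteration can be dropped in favor of recursion can be illustrated by bounded unfolding of a guarded recursive specification; the dictionary encoding of specifications and the function name below are illustrative assumptions:

```python
def unfold(defs, var, depth):
    """Approximate the finite behaviors of a guarded recursive
    specification by unfolding variable definitions up to `depth` times.
    `defs` maps a variable to its summands; each summand is a tuple of
    atomic actions, optionally ending with a variable name to re-enter."""
    if depth == 0:
        return set()
    out = set()
    for summand in defs[var]:
        *prefix, last = summand
        if last in defs:  # guarded recursive call: prefixed by actions
            out |= {tuple(prefix) + t for t in unfold(defs, last, depth - 1)}
        else:             # summand terminates without recursion
            out.add(tuple(summand))
    return out

# X = a.X + b  -- guarded, since the recursive call is prefixed by a
spec = {"X": [("a", "X"), ("b",)]}
print(sorted(unfold(spec, "X", 3)))
```

Unfolding X to depth 3 yields the completed traces b, a.b, a.a.b, i.e. the behavior a while-style loop would produce, which is why explicit iteration is redundant.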

4 Operational Semantics

True concurrency is a graph driven by causality and conflict, while concurrency and consistency are implied. For causality, there are two kinds: causality in execution time, and communications between communicating actions in different parallel branches. For conflict, there are also two kinds: the conflict structured by +, and the conflicts existing among actions in different parallel branches. Other computational properties, such as the whole parallel operator ≬, the conflict elimination operator Θ, the deadlock constant δ, the encapsulation operator ∂_H, recursion, the silent step τ, and the placeholder, are also needed in parallel programming.

The operational semantics defined by labelled transition systems (LTSs) is almost the same as that of APTC [1], except for the parallel operator ∥: we know that in true concurrency, by use of the placeholder, ∥ contains both the interleaving semantics and true concurrency. For SPPLTC, as a parallel programming language, there is another computational property that should be considered: the race condition, denoted %. Two actions a and b in race condition, denoted a % b, may share a same variable, and they should be executed serially and non-deterministically. We use the race condition relation as a predicate in the transition rules.
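The race-condition rule described above can be sketched as a first-move function for two atomic actions; `parallel_steps` and its tuple/frozenset encodings are illustrative assumptions, not the paper's formal transition rules:

```python
def parallel_steps(a, b, race):
    """First moves of a || b under a race relation:
    if a and b are in race condition (they may share a variable), they
    must run serially in either order; otherwise they may fire together
    as one true-concurrency step {a, b}."""
    if (a, b) in race or (b, a) in race:
        return [(a, b), (b, a)]      # serial, non-deterministic order
    return [frozenset({a, b})]       # one simultaneous step

# two writes to the same variable race; two independent reads do not
print(parallel_steps("w1", "w2", race={("w1", "w2")}))
print(parallel_steps("r1", "r2", race=set()))
```

This shows how a single predicate lets the semantics of ∥ degrade gracefully from true concurrency to interleaving exactly where shared state forces serialization.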

So, we give the operational semantics of parallelism as Table 1 defines; we omit the transition rules of the other computational properties, for which please refer to APTC [1]. In the following, let a, b ∈ Act, let the variables x, y, x′, y′ range over the set of terms for true concurrency, and let the predicate →√ represent successful termination after execution of an action.

Table 1: Transition rules of parallel operator