A universal concept in many disciplines is to characterize the behavior of an object, often called a module or a system, and thereby to intentionally ignore internal aspects considered irrelevant. We will here use the term system. The purpose of a system is to be used or embedded in an environment, which can for example consist of several other systems. A system’s behavior is the exact characterization of the effect a system can have when embedded in an environment. In such a consideration, anything internal, not affecting the behavior, is by definition considered irrelevant, and two systems with the same behavior are considered to be equal. Which aspects are considered irrelevant, and hence are not modeled as part of the behavior, depends strongly on the concrete investigated question. For example, if for a software module computing a function, one is only interested in the input-output behavior, then this function characterizes the behavior of the software module, and aspects like the underlying computational model, the program, the complexity, timing guarantees, etc., are irrelevant. In another consideration, for example, one may want to include the timing aspects as part of the observable (and hence relevant) behavior. In yet another consideration one may be interested in the memory requirements for the specific computational model, etc.
One can think of a system as connecting to the environment via an interface. (The term “interface” is indeed used in some contexts to refer to the specification of the behavior of an object, such as a software module, but here we use the term in a more general and more abstract sense.) More generally, if one wants to model a system composed of several (sub-)systems, then one can consider each system to have several interfaces and that systems can be composed by connecting an interface of one system with an interface of another system. The term interface is here used in the sense of capturing the potential connection to another system. This corresponds to drawing a diagram with boxes, each having several lines (interfaces), and where some interfaces of systems are connected (by lines) and some interfaces remain free, i.e., accessible by the environment; see Figure 1. A composed system appears to the environment as having only the free (not connected) interfaces and its behavior is observed only via these free interfaces; the internal topology becomes irrelevant and is not part of the behavior.
In some applications, certain internal details of a system matter. One can then define the relevant internal aspects as being part of the behavior, to make them, by definition, visible from the outside. If too many internal details become relevant, our approach might be less suitable than directly using a model of systems that considers these internals. Indeed, a majority of existing work models systems by defining their internal operations, e.g., via states and transition functions. In many cases, however, such a detailed description of the internal operations is unnecessary and cumbersome, and our abstract approach would be beneficial. We now describe two examples for which ignoring internal details appears particularly useful.
In distributed systems, where systems are connected to other systems with which they can communicate, one is often interested in certain properties of the composed system. As an example, we present a simple impossibility proof of bit-broadcast for three parties with one dishonest party. This famous result was first proven by Lamport et al. [LSP82, PSL80]; the proof we present here is in the spirit of that given by Fischer et al. [FLM86]. This proof only requires that the involved systems can be connected and rearranged as described; the communication between them and how they operate internally is irrelevant. Ignoring these internal details not only simplifies the proof but also makes the result more general, e.g., it also holds if the systems communicate via some sort of analog signals.
The goal of a bit-broadcast protocol is to allow a sender to send a bit such that all honest receivers output the same bit (consistency) and if the sender is honest, they output the bit that was sent (validity). Assume an honest sender uses the system for broadcasting a bit , and honest receivers use systems and that decide on a bit , and , respectively. If these systems implement a broadcast protocol with the required guarantees, each condition in Figures 1(a) to 1(d) must hold for all systems , which capture the possible behaviors of a dishonest party. Assume toward a contradiction that this is the case and consider the system in (e). We can view the system in the dotted box composed of and as a system in (b) to obtain . We can also view the system in the densely dotted box composed of and as a system in (c) to obtain . Finally, the system in the dashed box can be viewed as a system in (d), which implies , a contradiction. Hence, there are no systems , , , and that satisfy these constraints.
Cryptographic schemes are often defined as some sort of efficient algorithms. While efficiency is of course relevant in practice, one can separate the computational aspects from the functionality to simplify the analysis. Constructive cryptography by Maurer and Renner [MR11, Mau12] allows one to model what cryptographic protocols achieve using a system algebra that abstracts away cumbersome details. To this end, one considers so-called resource systems, which provide a certain functionality to the parties connected to their interfaces, and converter systems, which can be connected to resources to obtain a new resource. Typical resources with interfaces for two honest parties and an adversary are a shared secret key, which provides a randomly generated key at the interfaces for the honest parties and nothing at the interface for the adversary (one could also view a shared secret key as having no interface for the adversary, but as defined in constructive cryptography, all resources involved in a construction have an interface for each party), and different types of channels with different capabilities for the adversary. The goal of a cryptographic protocol is then to construct a resource from a resource . Such a protocol consists of a converter for each honest party, and it achieves the construction if the system obtained from by connecting the protocol converters to the interfaces of the honest parties is indistinguishable from the system obtained from by connecting some converter system, called simulator, to the interface of the adversary. The notion of indistinguishability can be defined in several ways leading to different types of security, e.g., as the systems being identical or via a certain class of distinguishers.
A crucial property of this construction notion is that it is composable, i.e., if some protocol constructs a resource from a resource and another protocol constructs a resource from , these protocols can be composed to obtain a construction of from . Turned around, one can also decompose the construction of from into two separate constructions. Since these two constructions can be analyzed independently, this approach provides modularity and simplifies the analysis of complex protocols by breaking them down into smaller parts.
An example of a construction is that of a secure channel, which leaks only the length of the sent messages to the adversary and does not allow modifications of them, from the resource consisting of a shared secret key and an authenticated channel, which leaks the sent messages to the adversary but also does not allow modifications of them, by a symmetric encryption scheme. See Figure 3
for an illustration of the involved systems. To achieve this construction, the simulator must, knowing only the length of the sent messages, output bit-strings that are indistinguishable from encryptions of these messages. If the used encryption scheme is, e.g., the one-time pad, the two systems in Figure 3 are identical for an appropriate simulator, i.e., they have the same input-output behavior [Mau12], which provides the strongest possible security guarantee. Note that internally, these systems are very different, but this is intentionally ignored.
We develop a theory of systems with different levels of abstraction. To achieve generality and to strive for simplicity, theorems are proved at the highest level of abstraction at which they hold. See Figure 4 for an overview of the types of systems we consider. This is not meant to be a complete picture of all systems one can consider; systems we do not consider in this paper but could be treated within our theory include probabilistic systems, physical systems, circuits, etc.
Abstract system algebras and composition-order invariance.
At the highest level of abstraction, we do not specify what systems are, but only postulate two operations, depicted in Figure 5; one for combining two systems in parallel as in (a) and one for connecting interfaces as in (b). Using these operations one can build “graphs” such as those depicted in Figure 1, by first taking the systems in parallel and then connecting the interfaces. We call a set of systems together with these operations, and the specification which interfaces a system has and which of them can be connected, a system algebra.
A natural property of such an algebra that can be specified at this level of abstraction is composition-order invariance, that is, a composed system is completely described by its “graph”, and the order in which the operations are applied to build the graph does not matter. This property is not only very natural, but also necessary for many applications. For example, the impossibility proof for broadcast sketched above relies on it since otherwise, rearranging and subdividing systems as in (e) would not be allowed. To illustrate this, we formalize the systems occurring in that figure in an abstract system algebra and show how composition-order invariance appears in the proof. Composition-order invariance was also used by Maurer and Renner to prove the composition theorem of constructive cryptography [MR11]. While appearing natural and innocent, examples throughout our paper indicate that composition-order invariance is actually a nontrivial property and requires a proof.
Functional system algebras.
An important type of system in computer science takes inputs and produces outputs depending on these inputs. The behavior of such a system can be fully described by a function mapping inputs to outputs. We define a type of system algebra, called functional system algebra, where the systems have input interfaces and output interfaces and correspond to such functions. Connecting an input interface to an output interface is understood as setting the value input at the former to be equal to the value output at the latter. Determining the resulting system thus involves finding a fixed point of the underlying function; if multiple fixed points exist, the system algebra has to specify which one to select. By appropriately choosing the domains and functions, various types of systems can be modeled in this way, including interactive systems that take many inputs in different rounds and systems that depend on time.
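To make the fixed-point view of interface connection concrete, here is a minimal Python sketch. It is our own illustration, not the paper's formal definition: a system is modeled as a function from a dict of values at input interfaces to a dict of values at output interfaces, and connecting an input to an output means iterating the fed-back value until it is a fixed point. The names `connect` and `toy` are ours, and termination of the loop is an assumption that holds only for suitable functions.

```python
# Illustrative sketch (not the paper's formal definition): a "system" is a
# function from a dict of values at input interfaces to a dict of values
# at output interfaces; connecting an input to an output means finding a
# fixed point of the fed-back value.

def connect(system, in_label, out_label, other_inputs):
    """Connect input `in_label` to output `out_label`, iterating from the
    least value () and assuming the iteration stabilizes after finitely
    many steps."""
    fed_back = ()
    while True:
        outputs = system({**other_inputs, in_label: fed_back})
        if outputs[out_label] == fed_back:   # fixed point reached
            return outputs
        fed_back = outputs[out_label]

# A toy system: outputs "a" followed by (at most) the first two input values.
def toy(inputs):
    return {"y": ("a",) + tuple(inputs["x"])[:2]}

connect(toy, "x", "y", {})   # stabilizes at {"y": ("a", "a", "a")}
```

Feeding `toy`'s output back into its own input stabilizes after three iterations, illustrating why the resulting system is well-defined exactly when such a fixed point exists and a selection rule is fixed.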
We prove several basic results at this level of abstraction, i.e., without specifying which functions are considered. For example, we show that not all functional system algebras are composition-order invariant, but if there are always unique fixed points for interface connections and connections can be reordered, composition-order invariance holds.
While this paper focuses on the deterministic case, we point out that functional systems can be used as a basis to model probabilistic systems. For example, one can consider systems that take randomness as an explicit input at a dedicated interface, one can include random variables in the domains of the functions, or one can consider probability distributions over deterministic systems. Systematically understanding probabilistic systems in this way is future work.
Instantiations of functional system algebras.
To instantiate the concept of a functional system algebra, we need to specify the domains of the functions and the set of functions to consider. To be able to define the interface connection, we have to ensure that all functions have the required fixed points. One way to guarantee fixed points that is well-studied in mathematics, especially domain theory, is to equip the domain with a partial order such that all chains have a supremum and to consider monotone functions. A related concept is that of continuous functions, which are defined as preserving these suprema. In both cases the functions have a least fixed point. While continuity is a stronger requirement than monotonicity, a slightly weaker assumption on the domains is sufficient to guarantee least fixed points. We show that if least fixed points are chosen for interface connections, both classes of functions form a functional system algebra. Under the additional assumption that nonempty chains have an infimum, we show that these system algebras are composition-order invariant.
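The finite case gives some intuition for why monotonicity yields least fixed points. The following Python sketch (our own illustration; the graph and names are invented for the example) iterates a monotone function on the powerset of a finite set, which is a CPO under inclusion, starting from its least element, the empty set:

```python
# Least fixed point of a monotone function on the powerset lattice of a
# finite set, computed by iterating from the least element (the empty
# set). This is the finite special case of the construction in the text.

def least_fixed_point(f, bottom=frozenset()):
    x = bottom
    while True:
        y = f(x)
        if y == x:
            return x
        x = y

# Example: the set of nodes reachable from node 1 in a graph is the least
# fixed point of a monotone function on the powerset lattice.
edges = {1: {2}, 2: {3}, 3: set(), 4: {1}}
f = lambda s: frozenset({1}) | frozenset(t for v in s for t in edges[v])

least_fixed_point(f)   # frozenset({1, 2, 3})
```

Monotonicity guarantees the iterates form a chain, so on a finite lattice the loop must stabilize; the transfinite iteration discussed later generalizes exactly this procedure.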
Monotone and continuous functions are not only a mathematical convenience to obtain fixed points, but they also encompass as a special case an intuitive and useful model known as Kahn networks [Kah74] (or Kahn process networks; in Kahn’s paper only defined for continuous functions, but monotone functions can be used as well). Kahn networks have been developed to provide a semantics for parallel programming languages [Kah74], but they have also been used in other contexts, including embedded systems [SZT04] and signal processing [LP95]. The domains of the functions there consist of sequences of values and the partial order is defined to be the initial segment (or prefix) relation. An interpretation of a function is that it maps input histories to output histories. Such functions therefore correspond to interactive systems that take one input after the other at each input interface and produce outputs depending on the inputs. Monotonicity means that additional inputs can only yield additional outputs; an output cannot be “taken back”. Even though it appears to be a very natural question whether the order in which interfaces of Kahn networks are connected matters, we are not aware of any result in this direction. Our proof of composition-order invariance, which indeed turned out to be nontrivial, therefore also provides new insights into this well-studied model.
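A minimal Python sketch of a Kahn-style system (our own example, with invented names) is a function on finite input histories that is monotone with respect to the prefix order:

```python
# A Kahn-style stream function: it maps an input history (a finite
# sequence) to an output history and is monotone with respect to the
# prefix order -- extending the input can only extend the output, never
# retract any of it.

def running_sum(history):
    out, s = [], 0
    for v in history:
        s += v
        out.append(s)
    return tuple(out)

def is_prefix(a, b):
    return b[:len(a)] == a

# Monotonicity in the prefix order: an input prefix yields an output prefix.
is_prefix(running_sum((1, 2)), running_sum((1, 2, 5)))   # True
```

A function that, say, emitted the total sum only once the stream ends would violate this monotonicity, since a longer input would force it to retract an earlier output.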
We finally provide an instantiation of functional system algebras consisting of causal systems in which inputs to the system can only influence “later” outputs. We formalize this by considering a partially ordered set (where “less than” can be interpreted as “before”) and letting the domains of the functions consist of subsets thereof. As an example, consider the partially ordered set containing pairs $(v, t)$, which can be interpreted as the value $v$ being input (or output) at time $t$, where the order is naturally defined as the one induced by the second component. The domains are then sets of such pairs. This allows us, as for Kahn networks, to model systems that take several inputs at each input interface and produce several outputs. We define causality for such systems and prove that the corresponding functions have unique fixed points. Therefore, we obtain a composition-order invariant functional system algebra. This system algebra can in particular be used to model and analyze systems that depend on time, such as clocks and channels with certain delays.
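The following Python sketch (our own toy example; the finite horizon and the event encoding are illustrative assumptions, not part of the formal model) shows the flavor of such a causal system: a behavior is a set of (value, time) events, each output occurs strictly later than the inputs that cause it, and a feedback loop therefore has a unique fixed point reached by iteration:

```python
# Sketch of a causal timed system: its behavior is a function on sets of
# (value, time) events, where an input at time t influences only outputs
# at strictly later times. Feeding the output back into the input then
# has a unique fixed point, found by iterating from the empty event set.

HORIZON = 4  # finite time horizon, for illustration only

def delay_with_seed(events):
    """Emit value+1 one time step after each input event, plus a seed
    event (0, 0); strict causality makes the feedback fixed point unique."""
    out = {(0, 0)}
    out |= {(v + 1, t + 1) for (v, t) in events if t + 1 <= HORIZON}
    return frozenset(out)

def feedback(system):
    events = frozenset()
    while True:
        nxt = system(events)
        if nxt == events:
            return events
        events = nxt

feedback(delay_with_seed)   # {(0,0), (1,1), (2,2), (3,3), (4,4)}
```

Because every produced event is strictly later than the events it depends on, each iteration can only add events at later times, which is the intuition behind the uniqueness of the fixed point proved in the paper.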
1.3 Related Work
There exists a large body of work on modeling certain types of systems mathematically. Some models can be understood as special cases in our theory, but also very general theories and models that do not fit in our theory exist. The work we are aware of, however, only captures partial aspects of our theory. We now describe some of this work and compare it to ours.
The abstract concept of a system algebra in which complex systems are built from components has been informally described in the context of cryptography by Maurer and Renner [MR11]. They also, again informally, introduced composition-order independence, which corresponds to our composition-order invariance. We provide in this paper a formalization that matches their requirements.
Hardy has developed an abstract theory, in which composition-order invariance (there called order independence) plays an important role [Har13]. That work, however, focuses on physical systems. Lee and Sangiovanni-Vincentelli [LS98] also introduce an abstract system model, but it is specific to systems that consider some form of time and does not follow an algebraic approach.
Closely related to our abstract system algebras are block algebras as introduced by de Alfaro and Henzinger in the context of interface theories [dAH01b]. Our systems and interfaces are there called blocks and ports, respectively, and they also define parallel composition and port connection operations. A major difference compared to our system algebras is that port connections do not hide the connected ports. Moreover, while de Alfaro and Henzinger require the parallel composition to be commutative and associative, they do not define a notion that corresponds to our composition-order invariance, i.e., their port connections do not necessarily commute with other port connections and parallel composition.
Milner’s Flowgraphs [Mil79] model composed systems as generalized graphs, in which the nodes correspond to subsystems. This is considerably different from our theory since we abstract away all internal details of a system, in particular the subsystems that compose a system.
A line of work on system models based on functions has been initiated by Kahn’s seminal paper [Kah74] on networks of autonomous computing systems. These systems may be sensitive to the order in which messages arrive on one interface, but they are oblivious to the relative order of incoming messages on different interfaces. He shows that least fixed points exist, based on earlier work by Tarski [Tar55], Scott [Sco70], and Milner [Mil73] (other sources attribute the original theorem to Kleene or Knaster; see [LNS82]), and therefore connecting systems is well-defined. Tackmann [Tac14] considered the case where systems are fully oblivious to the order of their incoming messages, which can be seen as a special case of Kahn networks where each interface contains at most one message. Micciancio and Tessaro [MT13] start from the same type of systems as Kahn but extend it to tolerate certain types of order-dependent behavior within complex systems.
Several works have defined causal system models. Lee and Sangiovanni-Vincentelli [LS98] define delta causality, which intuitively requires that each output must be provoked by an input that occurred at least a $\delta$-difference earlier. They show that fixed points exist, based on Banach’s theorem. Cataldo et al. [CLL06] generalize this to a notion of “superdense” time where multiple events may occur simultaneously. Portmann et al. [PMM16], in the quantum scenario, describe a type of strict causality based on a causality function that can be seen as a generalization of delta causality. Naundorf [Nau00] considers strict causality without any minimal time distance, and proves that fixed points still exist. Matsikoudis and Lee [ML15] then show a constructive fixed point theorem for the same notion, which they refer to as strictly contracting. They show that it is implied by a more natural notion of (strict) causality where outputs can be influenced only by inputs that occur strictly earlier, under the assumption that the ordering of inputs is well-founded (a partial order on a set $X$ is well-founded if every nonempty subset of $X$ has one or more minimal elements). We show in Appendix A that the strict causality notion of [ML15] is essentially equivalent to the definition we introduce in this work.
Except for the work of Portmann et al., none of the previously mentioned definitions of causal functions explicitly captures systems with multiple interfaces as the work of Kahn [Kah74] or our work does. Also, the mentioned papers investigating causal functions do not define how to connect systems such that one obtains a system of the same type, and therefore they do not provide a system algebra as we do. The model by Portmann et al. [PMM16] captures quantum information-processing systems and can be seen as a generalization of our causal systems. Restricting that model to classical, deterministic inputs and outputs yields, however, a more complex and less general model than our causal systems. For example, the causality definition in that paper is more restrictive, and in contrast to our causal systems, the systems there are not allowed to produce infinitely many outputs in finite time.
Several models of systems have been proposed that model the systems as objects that explicitly contain state. I/O automata initially discussed by Lynch and Tuttle [LT89] and interface automata by de Alfaro and Henzinger [dAH01a]
enhance stateful automata with interactive communication. Interactive Turing machines basically equip Turing machines with additional tapes that they share with other machines and have been used widely in (complexity-theoretic) cryptography [Gol01, Can01].
2.1 Functions and Notation for Sets and Tuples
A function $f \colon A \to B$ is a subset of $A \times B$ such that for every $a \in A$, there is exactly one $b \in B$ such that $(a, b) \in f$, where we will usually write $f(a) = b$ instead of $(a, b) \in f$. For two sets $A$ and $B$, the set of all functions $A \to B$ is denoted by $B^A$. A partial function $A \to B$ is a function $A' \to B$ for some $A' \subseteq A$. For a subset $S \subseteq A$, $f(S) \coloneqq \{ f(a) \mid a \in S \}$ denotes the image of $S$ under $f$. For $S \subseteq A$, we define the restriction of $f$ to $S$ as $f|_S \coloneqq \{ (a, b) \in f \mid a \in S \}$. Note that for $S' \subseteq S \subseteq A$, we have $(f|_S)|_{S'} = f|_{S'}$. An element in $B^A$ can equivalently be interpreted as a tuple of elements in $B$ indexed by elements in $A$. In case we interpret a function as a tuple, we usually use a boldface symbol to denote it. For a tuple $\mathbf{x}$ with $x_a \in B$ for all $a \in A$, we also write $\mathbf{x} = (x_a)_{a \in A}$, and if $S \subseteq A$, we write $\mathbf{x}_S \coloneqq (x_a)_{a \in S}$. The symmetric difference of two sets $A$ and $B$ is defined as $A \mathbin{\triangle} B \coloneqq (A \setminus B) \cup (B \setminus A)$.
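As a concrete illustration, the set-of-pairs view of a function, its restriction, its image, and the symmetric difference of two sets can be sketched in a few lines of Python (the example values and helper names are ours):

```python
# A function viewed as a set of (argument, value) pairs, with restriction
# to a subset of the domain, the image of a subset, and the symmetric
# difference of two sets. Example values are for illustration only.

f = {("a", 1), ("b", 2), ("c", 3)}             # a function {a,b,c} -> N

def restrict(f, S):
    """Restriction of f to S: keep the pairs whose argument lies in S."""
    return {(x, y) for (x, y) in f if x in S}

def image(f, S):
    """Image of S under f."""
    return {y for (x, y) in f if x in S}

restrict(f, {"a", "c"})        # {("a", 1), ("c", 3)}
image(f, {"a", "b"})           # {1, 2}
{1, 2, 3} ^ {2, 3, 4}          # symmetric difference: {1, 4}
```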
Finally, we denote the power set of a set $A$ by $\mathcal{P}(A)$.
2.2 Order Relations
We first recall some basic definitions about relations.
Let $X$ be a set. A (binary) relation $R$ on $X$ is a subset of $X \times X$. We write $x \mathrel{R} y$ for $(x, y) \in R$. A relation $R$ is called reflexive if $x \mathrel{R} x$ for all $x \in X$. It is called symmetric if $x \mathrel{R} y \implies y \mathrel{R} x$ for all $x, y \in X$, and antisymmetric if $x \mathrel{R} y \land y \mathrel{R} x \implies x = y$. A relation $R$ is called transitive if $x \mathrel{R} y \land y \mathrel{R} z \implies x \mathrel{R} z$ for all $x, y, z \in X$.
For sets $A$ and $X$ and a binary relation $R$ on $X$, we define the relation $R$ on $X^A$ as the componentwise relation, i.e., for $\mathbf{x}, \mathbf{y} \in X^A$, $\mathbf{x} \mathrel{R} \mathbf{y} \iff x_a \mathrel{R} y_a$ for all $a \in A$.
A partial order on $X$ is a binary relation on $X$ that is reflexive, antisymmetric, and transitive. A partially ordered set (poset) is a set $X$ together with a partial order on $X$.
We will typically denote partial orders by $\leq$, $\sqsubseteq$, or $\preceq$, and define the relation $<$ by $x < y \iff x \leq y \land x \neq y$, and analogously $\sqsubset$ and $\prec$.
Let $(X, \leq)$ be a poset. Two elements $x, y \in X$ are comparable if $x \leq y$ or $y \leq x$, and incomparable otherwise. If all $x, y \in X$ are comparable, $(X, \leq)$ is totally ordered. A totally ordered subset of a poset is called a chain.
Let $(X, \leq)$ be a poset. An element $x \in X$ is the least element of $X$ if $x \leq y$ for all $y \in X$. Similarly, $x$ is the greatest element of $X$ if $y \leq x$ for all $y \in X$. The least element and greatest element of $X$ are denoted by $\bot$ and $\top$, respectively. An element $x \in X$ is a minimal element of $X$ if there is no $y \in X$ with $y < x$, and $x$ is a maximal element if there is no $y \in X$ with $x < y$. For a subset $S \subseteq X$, $x \in X$ is a lower bound of $S$ if $x \leq s$ for all $s \in S$, and an upper bound of $S$ if $s \leq x$ for all $s \in S$. If the set of lower bounds of $S$ has a greatest element, it is called the infimum of $S$, denoted $\inf S$; the supremum of $S$, denoted $\sup S$, is the least upper bound of $S$.
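These notions can be made concrete in a small example poset. The following Python sketch (our own example, not from the paper) uses the divisors of 36 ordered by divisibility, where the infimum of a pair of elements is their greatest common divisor:

```python
# Bounds and infima in a concrete poset: the divisors of 36 ordered by
# divisibility ("a less than b" means "a divides b"); the infimum of a
# set of divisors is their gcd.

X = [d for d in range(1, 37) if 36 % d == 0]   # {1,2,3,4,6,9,12,18,36}
leq = lambda a, b: b % a == 0                   # a divides b

def lower_bounds(S):
    return [x for x in X if all(leq(x, s) for s in S)]

def infimum(S):
    """Greatest element of the set of lower bounds, if it exists."""
    lbs = lower_bounds(S)
    for x in lbs:
        if all(leq(y, x) for y in lbs):
            return x
    return None

infimum({12, 18})   # 6, i.e., gcd(12, 18)
```

In a general poset the loop may return `None`: the set of lower bounds need not have a greatest element, which is exactly why the existence of infima is an assumption rather than a given.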
A poset $(X, \leq)$ is well-ordered if every nonempty subset of $X$ has a least element.
Note that every well-ordered poset is totally ordered because every subset $\{x, y\}$ has a least element.
Let $(X, \leq_X)$ and $(Y, \leq_Y)$ be posets. An order isomorphism is a bijection $f \colon X \to Y$ such that $x \leq_X y \iff f(x) \leq_Y f(y)$ for all $x, y \in X$. The posets $(X, \leq_X)$ and $(Y, \leq_Y)$ are called order isomorphic if such an order isomorphism exists.
2.3 Ordinals and Transfinite Induction
We briefly recall some basics of set theory, following Halbeisen [Hal12] and Jech [Jec03]. A class is a collection of sets. More formally, a class $C$ corresponds to a logical formula, and we write $X \in C$ if the set $X$ satisfies that formula. Every set is a class but not all classes are sets; for example, the class of all sets and the class of all ordinals are not sets. A class that is not a set is called a proper class.
An ordinal is a set $\alpha$ that is transitive (i.e., every element of $\alpha$ is also a subset of $\alpha$) and well-ordered by the relation $\in$.
For ordinals $\alpha, \beta$ with $\alpha \in \beta$, we write $\alpha < \beta$. For all ordinals $\alpha \neq \beta$, we have either $\alpha < \beta$ or $\beta < \alpha$ (but not both), and $\alpha < \beta \iff \alpha \subsetneq \beta$. It can be shown that every nonempty class of ordinals has a least element (according to the relation $<$) [Hal12, Theorem 3.12]. For an ordinal $\alpha$, we define $\alpha + 1 \coloneqq \alpha \cup \{\alpha\}$.
We have that $\alpha + 1$ is the least ordinal greater than $\alpha$ [Hal12, Corollary 3.13]. An ordinal $\beta$ is called a successor ordinal if $\beta = \alpha + 1$ for some ordinal $\alpha$. A limit ordinal is an ordinal that is not a successor ordinal.
Every well-ordered set is order isomorphic to exactly one ordinal [Jec03, Theorem 2.12]. This ordinal is called the order type of the well-ordered set.
We define the natural numbers as $0 \coloneqq \emptyset$, $1 \coloneqq \{0\}$, $2 \coloneqq \{0, 1\}$, and so on. That is, a number $n$ is the set of all numbers less than $n$. The set of natural numbers is also an ordinal, denoted by $\omega$. Note that $0$ and $\omega$ are limit ordinals and all nonzero natural numbers are successor ordinals. A method for proving a statement about all ordinals is via the transfinite induction theorem [Jec03, Theorem 2.14].
Theorem (Transfinite Induction).
Let $C$ be a class of ordinals such that
$0 \in C$,
if $\alpha \in C$, then $\alpha + 1 \in C$, and
if $\lambda$ is a nonzero limit ordinal and $\alpha \in C$ for all $\alpha < \lambda$, then $\lambda \in C$.
Then, $C$ is the class of all ordinals.
The following lemma will later be useful.
Let $(X, \leq)$ be a poset and assume that for every ordinal $\alpha$, there is an $x_\alpha \in X$ such that $x_\beta \leq x_\alpha$ for all $\beta < \alpha$. (More formally, we consider a class function from the class of all ordinals to $X$, and write $x_\alpha$ for the image of $\alpha$.) Then, there exists an ordinal $\alpha$ such that $x_{\alpha + 1} = x_\alpha$.
A result by Hartogs implies that for any set $X$, there exists an ordinal $\gamma$ such that there is no injective function $\gamma \to X$ [Joh87, Lemma 7.1]. Hence, there exist ordinals $\alpha < \beta$ such that $x_\alpha = x_\beta$, since otherwise the function $\alpha \mapsto x_\alpha$ would be injective. Since $\alpha < \beta$, we have $\alpha + 1 \leq \beta$. Thus, $x_\alpha \leq x_{\alpha + 1} \leq x_\beta = x_\alpha$ implies that $x_{\alpha + 1} = x_\alpha$. ∎
2.4 Complete Posets and Fixed Points of Monotone and Continuous Functions
A natural requirement for functions between posets is that they preserve order. Order-preserving functions are also called monotone and are defined below.
Let $(X, \leq_X)$ and $(Y, \leq_Y)$ be posets. A function $f \colon X \to Y$ is monotone if for all $x, y \in X$, $x \leq_X y$ implies $f(x) \leq_Y f(y)$.
Note that a monotone bijection is not necessarily an order isomorphism: For $X = \{a, b\}$ with incomparable $a$ and $b$, and $Y = \{a, b\}$ with $a \leq_Y b$, the identity bijection is trivially monotone but not an order isomorphism.
Let $X$ be a set and $f \colon X \to X$ be a function. Then, $x \in X$ is called a fixed point of $f$ if $f(x) = x$.
A complete partially ordered set (CPO) is a poset in which every chain has a supremum.
Note that the empty set is a chain and every element is an upper bound of $\emptyset$. Therefore, a CPO contains a least element, namely $\sup \emptyset$.
Theorem ([És09, Theorem 2.5]).
Let $(X, \leq)$ be a CPO and $f \colon X \to X$ be monotone. Then, $f$ has a least fixed point, which equals $x_\alpha$ for some ordinal $\alpha$, where $x_0 \coloneqq \bot$, $x_{\beta + 1} \coloneqq f(x_\beta)$ for any ordinal $\beta$, and $x_\lambda \coloneqq \sup \{ x_\beta \mid \beta < \lambda \}$ for nonzero limit ordinals $\lambda$. We further have $x_\beta \leq x_\gamma$ for $\beta \leq \gamma$.
The above theorem is constructive in the sense that it not only guarantees the existence of a least fixed point, but also provides a procedure to find it. However, this procedure might only terminate after transfinitely many steps. The situation improves if the function is not only monotone but also continuous in the sense that it preserves suprema. In this case, a weaker requirement on the domain of the function is sufficient, namely only chains that correspond to infinite sequences need to have a supremum.
Let $(X, \leq)$ be a poset. An $\omega$-chain in $X$ is a sequence $(x_n)_{n \in \mathbb{N}}$ such that $x_n \leq x_{n+1}$ for all $n \in \mathbb{N}$. We say $(X, \leq)$ is an $\omega$-chain complete partially ordered set ($\omega$-CPO) if it has a least element and every $\omega$-chain has a supremum.
Let $(X, \leq_X)$ and $(Y, \leq_Y)$ be $\omega$-CPOs. A function $f \colon X \to Y$ is $\omega$-continuous if for every $\omega$-chain $(x_n)_{n \in \mathbb{N}}$ in $X$, $\sup_n f(x_n)$ exists and $f\bigl(\sup_n x_n\bigr) = \sup_n f(x_n)$.
The next lemma shows that $\omega$-continuity implies monotonicity (the converse is not true).
Let $X$ and $Y$ be $\omega$-CPOs and $f \colon X \to Y$ an $\omega$-continuous function. Then, $f$ is monotone.
Let $x, y \in X$ such that $x \leq y$. Then, $x, y, y, y, \ldots$ is an $\omega$-chain. Therefore, $f(y) = f\bigl(\sup \{x, y\}\bigr) = \sup \{f(x), f(y)\}$, which implies $f(x) \leq f(y)$. ∎
Theorem ([Dp02, Theorem 8.15]).
Let $X$ be an $\omega$-CPO and $f \colon X \to X$ be $\omega$-continuous. Then, $f$ has a least fixed point, which equals $\sup_n x_n$, where $x_0 \coloneqq \bot$ and $x_{n+1} \coloneqq f(x_n)$ for $n \in \mathbb{N}$.
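A classic illustration of this kind of iteration (our own sketch, not an example from the paper) is the factorial function, obtained as the least fixed point of an ω-continuous functional on partial functions: each iterate from the nowhere-defined bottom element extends the function by one more argument.

```python
# Kleene-style iteration: the factorial function as the least fixed point
# of a functional on partial functions, represented as dicts. Each step
# F extends the partial function; after n steps it is defined on
# {0, ..., n-1}, and the supremum of the chain is factorial itself.

def F(g):
    """One step of the factorial functional applied to a partial function g."""
    h = {0: 1}
    for k, v in g.items():
        h[k + 1] = (k + 1) * v
    return h

g = {}                      # bottom: the nowhere-defined partial function
for _ in range(5):          # the fifth iterate, defined on {0, ..., 4}
    g = F(g)

g                           # {0: 1, 1: 1, 2: 2, 3: 6, 4: 24}
```

The iterates form an $\omega$-chain in the poset of partial functions ordered by extension; the theorem says their supremum, the total factorial function, is the least fixed point of $F$.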
In the literature, CPOs and continuity are often defined in terms of so-called directed subsets instead of chains [DP02, És09]. For our purposes, chains are more intuitive and directly applicable in our proofs. This definitional inconsistency is not an issue since the two types of definitions have been shown to be equivalent [Mar76].
3 Abstract System Algebras
We first define system algebras at an abstract level described, but not formalized, by Maurer and Renner [MR11], where systems are objects with interfaces via which they can be connected to other systems. A system algebra is a set of systems with certain operations that allow one to compose several systems to obtain a new system. In this way, complex systems can be decomposed into independent components. At this level of abstraction, we only specify how systems can be composed, but not what systems are or how they interact with other systems. In the same sense as the elements of an algebraic ring are abstract objects without concrete meaning, abstract systems have no particular meaning attached, beyond how they can be composed. In a concrete instantiation, systems could, e.g., communicate via discrete inputs and outputs at their interfaces or via analog signals, et cetera. We define two operations; an operation for taking two systems in parallel, and an operation for connecting two interfaces of a system. See Figure 5 for a depiction of these two operations. Several systems can be connected by first taking them in parallel and then connecting their interfaces. A similar definition has been given by Tackmann [Tac14, Definition 3.4].
Let be a set (the set of interface labels). A -system algebra consists of a set (the set of systems), a function (assigning to each system its set of interface labels), a partial function (the parallel composition operation), a function (specifying for each system the set of interface-label pairs that can be connected), and a partial function (the interface connection operation), such that
for all , is finite and for all , we have ,
for , , denoted , is defined if and only if , and in this case, and for and for all , we have , and
for and , , denoted by , is defined if and only if , and in this case, .
We will usually identify a system algebra with the set of systems and use the same symbols , , , and for different system algebras. The parallel composition of two systems is only allowed if they have disjoint interface sets. This means in particular that one cannot consider the parallel composition of a system with itself. One can imagine that each system exists only once and therefore cannot be used twice within another system. This is not an issue because can contain many “copies” of a system with different interface labels, and different systems can have different interface labels. One could also introduce an interface-renaming operation, but we will not formalize this because it is not needed here. The set determines which interfaces of a system are compatible, i.e., can be connected to each other. It might or might not be possible to connect an interface to itself. Figuratively speaking, one could imagine that interfaces come with different types of plugs and one can only connect interfaces with matching plugs, where contains all unordered pairs of matching interfaces. For example, we will later consider system algebras with separate interfaces for inputs and outputs, where one can only connect input interfaces to output interfaces, but not two interfaces of the same type. Since the connection operation is defined for unordered pairs of interfaces, one always connects two interfaces to each other, without a direction. The condition on for the parallel composition ensures that if one can connect two interfaces of a system, one can still do so after taking another system in parallel, and additional connections are only created between the two systems, not for a single system. The intuition behind this condition is that the two systems are independent and do not influence what is possible for the other system. After connecting interfaces, however, it is possible that connections that were allowed before become disallowed.
For example, one might want to consider a system algebra in which one cannot create cycles by connecting systems, e.g., when modeling systems that correspond to Boolean circuits. Then, certain connections are only allowed as long as other interfaces are not connected.
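Such a cycle-freeness constraint can be checked incrementally; the following hedged sketch (an assumed implementation, not part of the formalism) tracks which components are already connected with a union-find structure and disallows any connection that would close a cycle.

```python
class UnionFind:
    """Track connected components of systems; a connection between two
    systems already in the same component would create a cycle."""

    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            x = self.parent[x]
        return x

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return False  # already connected: a new edge would close a cycle
        self.parent[rx] = ry
        return True
```

Connecting A–B and then B–C is allowed, but a further A–C connection is rejected, matching the intuition that which connections are allowed depends on which connections were made before.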
We restrict ourselves to systems with finitely many interfaces because we are only interested in systems that are composed of finitely many components. Therefore, we can define parallel composition as a binary operation and interface connection for a single pair of interfaces, whereas in general, one would define the parallel composition of potentially infinitely many systems and the connection of potentially infinitely many pairs of interfaces. In our simplified setting, repeated applications of the binary parallel composition and the connection of two interfaces are sufficient.
An important property that a system algebra can have is composition-order invariance (called composition-order independence by Maurer and Renner [MR11]). Loosely speaking, it guarantees that a system that is composed of several systems is independent of the order in which they have been composed. Put differently, a figure in which several systems are connected by lines uniquely determines the overall system; the order in which the figure was drawn is irrelevant.
For a -system algebra , we say permits reordering if for all , , and , we have and . If additionally , is called connection-order invariant. A connection-order invariant system algebra is called composition-order invariant if the operation is associative and commutative and for all and such that , we have .
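In the toy model where a system is reduced to its set of free interfaces, connection-order invariance holds trivially, which the following minimal sketch (an assumed simplification, not the paper's definition) makes explicit: connecting two disjoint interface pairs in either order yields the same result.

```python
def connect(interfaces, i, j):
    # Connecting a pair of free interfaces makes them internal.
    assert {i, j} <= interfaces, "both interfaces must be free"
    return interfaces - {i, j}

s = frozenset({"a", "b", "c", "d"})
# Connection-order invariance in this toy model: the order of the two
# connection operations does not matter.
left = connect(connect(s, "a", "b"), "c", "d")
right = connect(connect(s, "c", "d"), "a", "b")
assert left == right == frozenset()
```

For richer system algebras, where connecting interfaces changes behavior and not just the interface set, this property is a genuine requirement rather than an automatic consequence.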
All system algebras we consider in this paper have associative and commutative parallel composition. Note, however, that one can also imagine system algebras where this is not the case: Consider a set of systems that correspond to software components that are compiled together when two systems are composed in parallel. Depending on compiler optimizations, the efficiency of the resulting program might depend on the order in which components are compiled together.
3.2 An Abstract Proof of Broadcast Impossibility
In this section, we provide a formal proof at the level of abstract systems of the impossibility of bit-broadcast for three parties with one dishonest party, as sketched in the introduction. This exemplifies how a proof that involves drawing figures can be justified at the level of abstract systems and why composition-order invariance is crucial for doing so. Since there is no notion of outputting a bit for abstract systems, one cannot directly formulate the requirements for broadcast at this level. We therefore first prove a more abstract statement and afterwards argue how this implies the impossibility for more concrete systems. We assume in the following that we have a system algebra that allows connecting systems arbitrarily, i.e., two (different) interfaces of a system can always be connected. This is a reasonable assumption, e.g., in a setting where the systems communicate with other systems by sending messages over channels.
Let be a composition-order invariant -system algebra such that for all . Let such that , , , and are pairwise disjoint and , , , and , where these eight interface labels are distinct. Further let
Note that corresponds to the set of systems for all possible and in (d), corresponds to (c) for , and corresponds to the systems in (b). Let be the system in (e). By composition-order invariance, we have for ,
Hence, we obtain for , and , that . Again using composition-order invariance, we further have for , , and ,
Finally, we have for , , and ,
Therefore, we have . ∎
To see why this implies the claimed impossibility, assume a protocol for broadcast exists and let for be a system that implements the protocol for the sender to broadcast the bit and let and be systems for the two receivers such that these systems have distinct interface labels.666Since interface labels are only used for connecting systems and typical protocols do not depend on them, it is reasonable to assume that such systems with distinct interface labels exist. The validity of the broadcast protocol implies that for all systems in , the subsystem decides on the bit (say with probability more than ) and for all systems in , the subsystem decides on the bit (with probability more than ). The consistency condition further implies that for all systems in , and decide on the same bit (with probability more than ). Now Section 3.2 says that there is a system that satisfies all three constraints, which is impossible. Therefore no such protocol exists.
4 Functional System Algebras
We now introduce special system algebras for functional systems that take inputs at dedicated input interfaces and produce outputs at their output interfaces, where the outputs are computed as a function of the inputs. This allows us to model not only systems that take a single input at each input interface and produce a single output at each output interface, but also much more general systems. For example, to model interactive systems that successively take inputs and produce outputs, one can consider the set of sequences of values as the domain of the functions. The function corresponding to a system then maps an entire input history to the output history and is a compact description of the system.
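As a hedged example of this history-based view (not from the paper): an interactive system that, on each input, outputs the XOR of all inputs received so far can be described compactly by a single function from the entire input history to the entire output history.

```python
from itertools import accumulate
from operator import xor

def running_xor(history):
    """Map an input history (a tuple of bits) to the output history:
    the t-th output is the XOR of the first t inputs."""
    return tuple(accumulate(history, xor))
```

Here the sequence domain encodes interaction over time, even though the system itself is just one function.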
We define the parallel composition of two systems to be the function that evaluates both systems independently. Interface connection is defined in a way such that after connecting an input interface to an output interface, the input at the former equals the output at the latter. This corresponds to having a fixed point of a certain function determined by the connected interfaces and the system. One therefore has to choose such that fixed points for all allowed connections exist. Ideally, there is always a unique fixed point, because in this case, the interface connection operation is uniquely determined by this condition. If there are several fixed points, one has to be chosen in each case. A functional system algebra is therefore characterized by a set of functions , a function determining the allowed interface connections, and an appropriate choice of fixed points . We use boldface letters for functional systems to distinguish them from abstract systems.
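The fixed-point condition can be illustrated with a minimal sketch (illustrative names and a finite domain; the paper's formalism is more general): a functional system maps an assignment of values at its input interfaces to an assignment at its output interfaces, and connecting an input interface to an output interface means finding a value that reproduces itself.

```python
def connect_io(system, in_iface, out_iface, domain, other_inputs):
    """Connect input interface in_iface to output interface out_iface by
    searching the finite domain for fixed points: values v such that
    feeding v at in_iface makes the system output v at out_iface."""
    fixed = [v for v in domain
             if system({**other_inputs, in_iface: v})[out_iface] == v]
    if len(fixed) != 1:
        raise ValueError(f"no unique fixed point: {fixed}")
    return system({**other_inputs, in_iface: fixed[0]})

# Example system with inputs 'a', 'b' and outputs 'x' (= a AND b)
# and 'y' (= b, an identity wire).
gate = lambda ins: {"x": ins["a"] and ins["b"], "y": ins["b"]}
```

Connecting 'a' to 'y' with b = True has the unique fixed point True. Connecting 'a' to 'x' instead would yield two fixed points (both False and True satisfy x = a), illustrating why, in general, a choice of fixed point has to be made.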
Let and be sets, let for all finite disjoint , be a set of functions , and let be the union of all . For , , , and , let
be the set of fixed points of the function . Further let such that for all and , we have , (or , ),777We formally cannot require and because is unordered. To simplify the notation we will, however, always assume that and when we write , and similarly for , , etc. and for all , we have . Finally let for and be a function such that for all , we have . Then, we define
where is the set of all and
for , we have ,
for pairwise disjoint , , and , we have with
and for and , we have with
If is closed under and ,888That is, for for which is defined, , and for and , . and if for all with , for , and for all , we have , then is a -system algebra. In this case, we say is a functional -system algebra over .
Our definition of interface connections via fixed points implies that the “line” connecting two systems has no effect on the values. This means in particular that it does not introduce delays or transmission errors. If, e.g., delays are required (and the domains and functions are defined at a level such that delays can be specified, see Section 6), one can allow the connection of systems only via dedicated channel systems that introduce the desired delays.
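Such a dedicated channel system can be sketched as follows (a hedged example; the initial output value and the history-based domain are assumptions of this illustration): a unit-delay channel modeled as a function on input histories.

```python
def delay_channel(history, initial=0):
    """A unit-delay channel on input histories (finite sequences):
    the output at position t is the input at position t-1, and the
    first output is a fixed initial value."""
    return ((initial,) + tuple(history[:-1])) if history else ()
```

Connecting two systems through such a channel then introduces the desired delay, while the connection operation itself remains delay-free.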
The choice of fixed points is crucial for obtaining a system algebra with desired properties. For example, if the system algebra is supposed to model a class of real-world systems, the chosen fixed point should correspond to the value generated by the real system. The choice of the fixed point can also influence whether connection-order invariance holds. If fixed points are not unique, a reasonable requirement is that they are consistently chosen in the sense that whenever two systems have the same set of fixed points for a specific interface connection, the same fixed point is chosen for both systems.
Let be a functional -system algebra over . We say it has unique fixed points if for all , , and . If there exists999While uniquely determines , the converse is not true. For example, if , , and are such that the input at interface does not influence the outputs of at interfaces different from , the choice of is irrelevant. Hence, we only require for consistently chosen fixed points that can be explained consistently. such that and for all , , , and ,