Many classical protocols and distributed systems are symmetric. This means that every process, independently of its identity, starts in the same initial state and follows the same set of transitions. Symmetric systems are easier to understand and maintain; especially in VLSI designs, which usually contain large numbers of identical components, this is a significant cost factor. Constructing symmetric systems is also a step towards building arbitrarily scalable systems [7, 2, 11].
There is a large body of results [1, 18, 5, 12, 26, 13] that deal with the question of which distributed systems need symmetry breaking and which do not. Leader election among the processes on a ring, for example, cannot be implemented symmetrically; similarly, in resource-sharing problems, such as the Dining Philosophers, the only way to avoid starvation is to break the symmetry.
Our goal is to automate this type of reasoning. Given a specification of a reactive system in temporal logic, we wish to automatically determine whether there exists a symmetric implementation. This is a refinement of the classic distributed synthesis problem, which asks whether a temporal specification has an implementation where the processes are arranged in a particular architecture. Distributed synthesis is well-studied [25, 21, 14, 15, 16, 9, 24]. However, the approach presented in this paper is the first to synthesize symmetric implementations.
We consider rotation-symmetric system architectures. Rotation-symmetric architectures are multi-process architectures where all processes have access to all system inputs, but see different rotations of the inputs. Figure 1 shows a simple rotation-symmetric architecture. Rotation-symmetric architectures are suitable to reason about distributed systems that lack a central coordination process. They can, for example, model leader election scenarios and distributed traffic light controllers. The fact that the processes obtain their input in different rotations is important: since all processes have the same implementation, they would otherwise also produce the same output, and the synthesis problem for such systems could trivially be reduced to the standard synthesis problem by adding a constraint that the outputs are the same at all times.
We present an algorithm for the synthesis of symmetric systems in rotation-symmetric architectures from specifications in linear-time temporal logic (LTL). Most standard synthesis algorithms follow the automata-theoretic approach, whereby the given temporal formula is translated into a tree automaton that accepts exactly those computation trees that satisfy the formula. Hence, the specification is realizable if and only if the language of the automaton is non-empty. The synthesis algorithm then simply extracts some finite-state implementation from the language of the automaton. The situation is more difficult when we wish to decide the existence of a symmetric solution, because the language of the automaton may contain both computation trees that belong to symmetric implementations and computation trees that belong to asymmetric implementations. As we show in Section 4, symmetry is not a regular property: we therefore cannot check symmetry with a separate tree automaton or encode symmetry as a temporal logic formula and add it to the specification.
The key insight of our algorithm is that the paths in the computation trees produced by symmetric implementations are guaranteed to be invariant under rotations: if, in each position of two (finite or infinite) computation paths, the values of the input variables of the i-th process in the first path correspond to the values of the input variables of the ((i + r) mod n)-th process, for some r, in the second path, then the values of the output variables of the i-th process must also, in each position, correspond to the values of the output variables of the ((i + r) mod n)-th process (for all i, where n is the number of processes). Our algorithm exploits this observation to simplify the computation trees. Paths that are just rotations of each other are collapsed into a single representative. Computations in different processes that must lead to identical outputs are thus kept in the same path of the reduced tree; the paths only split when the symmetry is broken by some input. While symmetry is difficult to check on the original computation tree, it becomes a local condition on individual paths in the reduced tree: as long as the output never spontaneously introduces asymmetry, i.e., as long as every asymmetry in the output can be explained by a previous asymmetry in the input, the reduced tree can be expanded into a full computation tree that we know, by construction, to be symmetric.
As we show in Section 4, the running time of our synthesis algorithm is single-exponential in the number of processes. In Section 5, we show that our algorithm is asymptotically optimal: the problem is EXPTIME-complete in the number of processes. In Section 6, we study the extension of the synthesis problem to the case where the processes no longer have access to all variables. Here, our result is negative: under incomplete information, the symmetric synthesis problem is undecidable even for system architectures where the standard synthesis problem is decidable. This paper is based on previously unpublished results from the first author’s PhD thesis, which also contains additional details of the presented results.
A reactive system reads the values of the input propositions in some set I and produces a valuation of the output propositions in some set O in every step of its execution. The behavior of a reactive system can be described as a computation tree ⟨T, τ⟩, where T = (2^I)* is the set of tree nodes and τ : (2^I)* → 2^O labels every tree node by the output propositions that the system sets to true after having read the node as its (prefix) input sequence.
A trace in a computation tree ⟨T, τ⟩ is an infinite sequence (x_1 ∪ τ(ε))(x_2 ∪ τ(x_1))(x_3 ∪ τ(x_1 x_2))… for some input sequence x_1 x_2 x_3 … ∈ (2^I)^ω. Given some language L ⊆ (2^{I ∪ O})^ω, reactive synthesis is the process of checking if there exists a computation tree with (2^I)* as node set such that every trace of the tree is in L. A classical logic to denote specification languages is linear temporal logic (LTL). LTL formulas for reactive system specifications are built according to the grammar
φ ::= p | ¬φ | φ ∨ φ | φ ∧ φ | X φ | F φ | G φ | φ U φ,
where p ranges over the atomic propositions in I ∪ O.
For LTL specifications, it is known that a specification is realizable, i.e., that there exists a computation tree all of whose traces satisfy it, if and only if there exists a regular such computation tree. A computation tree is regular if it has only finitely many different sub-trees. Given a computation tree ⟨T, τ⟩, a tree ⟨T, τ'⟩ is a sub-tree of ⟨T, τ⟩ if and only if there exists a t_0 ∈ T such that for every t ∈ T, we have τ'(t) = τ(t_0 · t). Regular computation trees can be translated to finite-state machines and implemented in hardware or software using a finite amount of memory. A tree language for some sets Σ_1 and Σ_2 is a subset of the set of all trees ⟨T, τ⟩ with T = Σ_1* and τ : Σ_1* → Σ_2. A tree or word language is called regular if it can be recognized by some finite tree or word automaton (with a Muller acceptance condition).
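The correspondence between regular computation trees and finite-state machines can be illustrated with a small sketch. The concrete machine and all names below are ours, not from the paper: a Moore machine whose states play the role of the finitely many distinct sub-trees, and whose output function gives the label of the node reached by a prefix input sequence.

```python
# A regular computation tree, represented as a finite-state (Moore) machine.
# States correspond to the finitely many distinct sub-trees; the label of a
# tree node is the output of the state reached by reading the node's
# (prefix) input sequence. Illustrative example: output {"o"} exactly after
# an odd number of steps in which input "i" was read.

class MooreMachine:
    def __init__(self, transitions, outputs, initial):
        self.transitions = transitions  # (state, input letter) -> state
        self.outputs = outputs          # state -> output letter
        self.initial = initial

    def label(self, input_sequence):
        """Return the computation-tree label of the node reached by the
        given (prefix) input sequence."""
        state = self.initial
        for letter in input_sequence:
            state = self.transitions[(state, letter)]
        return self.outputs[state]

I = frozenset({"i"})   # input letter with proposition "i" set
E = frozenset()        # input letter with no proposition set
machine = MooreMachine(
    transitions={(0, I): 1, (0, E): 0, (1, I): 0, (1, E): 1},
    outputs={0: frozenset(), 1: frozenset({"o"})},
    initial=0,
)
```

The two states of the machine are exactly the two distinct sub-trees of this computation tree.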
In distributed synthesis, we search for a distributed implementation of a finite-state machine. We are given an architecture that defines several processes and the signals that connect the processes among themselves and with the global input and output of the architecture. Starting from a specification over all signals, we search for implementations of all processes such that the computation tree induced by the process implementations and the architecture satisfies the specification. In the induced computation tree, all processes execute at the same time and in parallel, using the usual parallel composition semantics.
It has been known since the seminal work by Pnueli and Rosner that not all architectures have a decidable distributed synthesis problem. Figure 2 depicts the A0 architecture that they defined as an example of an undecidable architecture. Finkbeiner and Schewe later proved that the distributed synthesis problem is decidable if and only if there exists no information fork in the architecture. An information fork is a pair of processes that are incomparably informed, i.e., for which each of the processes has access to some global input that the other process cannot read. For a more formal definition of distributed synthesis, the interested reader is referred to these works.
A Turing machine is a tuple M = (Q, Σ, Γ, δ, q_0, τ) in which Q is a finite set of states, Σ is an input alphabet, Γ ⊇ Σ is a (finite) tape alphabet, δ : Q × Γ → (Q × Γ × {−1, 1})² encodes the Turing machine transition function, q_0 ∈ Q is an initial state, and τ maps every state to its type, which can be accepting, rejecting, or transient. The function δ maps every state/tape content combination to exactly two possible successor state/tape content/tape motion combinations. For deterministic Turing machines, the two successor combinations are always the same. Alternating Turing machines extend non-deterministic Turing machines by partitioning the transient states into universally branching and existentially branching states. An (alternating) Turing machine accepts a word w if there exists an accepting run tree when starting in state q_0 with the tape empty except for a copy of w, where the machine head starts on the first character of w. In all universal states, the Turing machine execution must be accepting for both possible transitions.
We assume that the modulo function always returns a non-negative number, such that, e.g., (−1) mod 3 = 2.
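This convention matches Python's built-in `%` operator, which returns a non-negative result whenever the modulus is positive, so rotated process indices can be computed directly (a small illustration of the convention, not part of the paper):

```python
# Python's % already follows the convention assumed in the text:
# the result is non-negative for a positive modulus.
assert (-1) % 3 == 2
assert (-4) % 3 == 2

# Rotating process index 0 backwards by one position in a 3-process ring:
rotated_index = (0 - 1) % 3
```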
3 The Symmetric Synthesis Problem
We consider distributed reactive synthesis problems in which all processes share the same implementation. A process has an interface (I, O) with the local input proposition set I and a local output proposition set O. The connections between the processes are described in an architecture. [Symmetric architecture] Given an interface (I, O), a symmetric architecture over (I, O) is a tuple A = (S, P, I_g, E_in, E_out) with:
the set of (internal) signals S,
the process set P,
the global input signal set I_g,
the input edge function E_in : P × I → S ∪ I_g, and
the output edge function E_out : P × O → S.
As an example, the architecture given in the right part of Figure 2 hosts two processes; its signal sets and edge functions can be read off the figure. We only consider architectures in which every internal signal is written to by exactly one local output of one process. Given an FSM F for a process with an interface (I, O) and an architecture A over (I, O), we can construct an FSM that has the global inputs of A as input proposition set and the signals of A as output proposition set and that implements the behavior of the complete architecture when using F as the process implementation. Without loss of generality, we use the standard synchronous composition semantics to do so. We define the symmetric synthesis problem as follows: Given an interface (I, O), an architecture A, and a specification φ over the signals of A, the symmetric synthesis problem is to check if an FSM implementation F with the input proposition set I and output proposition set O exists such that the FSM obtained by plugging F into A satisfies φ. In case of a positive answer, we also want to obtain F.
4 Rotation-Symmetric Synthesis
Many symmetric architectures found in practice consist of a ring of processes, all of which read all the input to the overall system. A slight generalization of this architecture shape is the class of rotation-symmetric architectures.
A symmetric architecture over the interface (I, O) with n processes p_0, …, p_{n−1} is called rotation-symmetric if and only if there exists a local designated proposition set for every process instance such that the following conditions hold:
every process reads every global input signal, where the input connections of process p_j are those of process p_0 rotated by j positions, i.e., process p_j observes the designated proposition set of process p_{(j+k) mod n} wherever process p_0 observes the designated proposition set of process p_k, and
the output connections of the processes are likewise rotations of each other.
We show in this section that the symmetric synthesis problem for rotation-symmetric architectures and linear-time temporal logic (LTL) is decidable.
The key observation that we use to prove decidability is that the computation trees that characterize the input/output behavior of a process implementation plugged into a rotation-symmetric architecture have a useful property that we call the symmetry property. While this property is non-regular and thus cannot be encoded into the specification (Lemma 4), we show how to decompose it into two sub-properties, one of which is regular. The other one is still non-regular, but has the advantage that we can enforce it during synthesis by post-processing the computation tree obtained from the synthesis procedure so that it contains only rotations of the computation tree paths along so-called normalized inputs. Since every tree with the symmetry property is left unaltered by this step, and since we also describe how to ensure that the result of the post-processing step is guaranteed to be a correct solution, this approach is sound and complete.
We assume some fixed rotation-symmetric architecture with n processes over a local process interface (I, O) to be given, and define Σ_I = 2^{I × {0, …, n−1}} to denote the global input alphabet to all processes, while Σ_O = 2^{O × {0, …, n−1}} denotes the global output alphabet. The local output of one process is given as an element of 2^O.
The following rotation function will become useful in the analysis below. Let Σ = 2^{X × {0, …, n−1}} for some other set X. We define a rotation operator rot_r : Σ → Σ with rot_r(s) = {(x, (i + r) mod n) | (x, i) ∈ s} for every s ∈ Σ and r ∈ {0, …, n−1}. Furthermore, we extend the function to LTL formulas and define, for an LTL formula ψ over the set of propositions X × {0, …, n−1}, rot_r(ψ) to be ψ with all atomic propositions (x, i) replaced by (x, (i + r) mod n) for x ∈ X, i ∈ {0, …, n−1}. For clarity, when dealing with the rot_r function for some set Σ, we often partition the elements of Σ by their process indices and, for example, write (s_0, s_1, …, s_{n−1}) instead of s for s ∈ Σ. The rotation function is extended to sequences of elements in Σ by rotating the individual sequence items.
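The rotation operator can be sketched directly in code. The encoding of letters as sets of (proposition, process index) pairs follows the definition above; the function names are ours:

```python
def rot(s, r, n):
    """Rotate a letter s, i.e., a set of (proposition, process index) pairs,
    by r positions in an n-process architecture: (x, i) becomes (x, (i+r) mod n)."""
    return frozenset((x, (i + r) % n) for (x, i) in s)

def rot_word(word, r, n):
    """The rotation extends to sequences by rotating each letter individually."""
    return tuple(rot(s, r, n) for s in word)

# Example: in a 3-process architecture, the input "i" held by processes 0
# and 2, rotated by 1, is held by processes 1 and 0.
letter = frozenset({("i", 0), ("i", 2)})
```

Note that rotating by r and then by n − r is the identity, as one would expect from a group of rotations.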
[Symmetry property] Given a tree ⟨T, τ⟩ over T = Σ_I* and τ : Σ_I* → Σ_O, we say that the tree has the symmetry property if for each x ∈ Σ_I* and r ∈ {0, …, n−1}, we have τ(rot_r(x)) = rot_r(τ(x)). [Symmetry lemma] The set of regular trees having the symmetry property is precisely the same as the set of trees that are induced by a rotation-symmetric architecture for some process implementation. A proof of the lemma can be found in the appendix. The symmetry property is not a regular tree property, and hence cannot be encoded into a tree or word automaton.
The set of symmetric computation trees for a two-process rotation-symmetric architecture with a process interface consisting of a single local input proposition and a single local output proposition is not a regular tree language.
For a proof by contradiction, suppose that the set of symmetric computation trees is regular. The language includes a tree with the symmetry property in which the node labels along some input path x and, symmetrically, along its rotation rot_1(x) form output sequences whose block lengths grow with the distance to the root. According to the pumping lemma for regular tree languages, however, the label sequence along x can be partitioned into u · v · w such that, for every k ≥ 0, there exists a tree in the language where the label sequence along x is u · v^k · w, while the label sequence along rot_1(x) is still the original one. Clearly, these trees are not symmetric. ∎
Since the symmetry property is non-regular, we need to alter the synthesis process itself to account for it. In order to synthesize an implementation for one process, we synthesize implementations for all processes together. These only need to work correctly on normalized input sequences. An input sequence x ∈ Σ_I* is normalized if x = min{rot_r(x) | r ∈ {0, …, n−1}}, where the min function uses the lexicographic ordering over the strings in Σ_I*. For the ordering of the elements in Σ_I, we consider the lexicographic ordering of their tuple representation. For example, in a three-process architecture, a sequence is normalized if and only if rotating it by one or two positions does not yield a lexicographically smaller sequence. A tree with the symmetry property is fully determined by the labels along normalized input sequences, as for every non-normalized input sequence x, we have τ(x) = rot_{n−r}(τ(rot_r(x))) for every r such that rot_r(x) is normalized.
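A normalization check can be sketched as follows, using the tuple representation of letters mentioned above (the representation details and function names are our own):

```python
def rotate_word(word, r, n):
    """Rotate each letter of a word by r positions. Each letter is a tuple
    of n per-process values; position j of the rotated letter is position
    (j - r) mod n of the original letter."""
    return tuple(tuple(letter[(j - r) % n] for j in range(n)) for letter in word)

def normalize(word, n):
    """The lexicographically smallest rotation of the word."""
    return min(rotate_word(word, r, n) for r in range(n))

def is_normalized(word, n):
    """An input sequence is normalized iff no rotation of it is smaller."""
    return tuple(word) == normalize(word, n)
```

For instance, in a three-process architecture, the one-letter word whose tuple representation is (b, a, a) is not normalized, since its rotation (a, a, b) is lexicographically smaller.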
When only considering the normalized input sequences during synthesis, we can take the computation tree for all processes in the architecture together and complete it by filling all other tree labels with rotations of the tree labels along normalized inputs. We call the resulting tree its symmetric completion. If, afterwards, we have τ(rot_r(x)) = rot_r(τ(x)) for all x ∈ Σ_I* and r ∈ {0, …, n−1}, then the symmetry lemma guarantees that the resulting tree is induced by some process instantiated in a rotation-symmetric architecture. So if we can guarantee (1) that this equality actually holds for all normalized x and all r, and (2) that the symmetric completion of the tree satisfies the specification along all paths, then we can obtain a correct process implementation by synthesizing a computation tree for the complete architecture. Our construction for symmetric synthesis consists of these two components, which we describe in more detail below.
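The label lookup in a symmetric completion follows directly from the identity τ(x) = rot_{n−r}(τ(rot_r(x))): rotate the word into its normalized form, look up the label there, and rotate the label back. A sketch under the tuple representation used above; the helper names and the toy labeling in the usage example are ours:

```python
def rotate_letter(letter, r, n):
    # Position j of the rotated letter is position (j - r) mod n of the original.
    return tuple(letter[(j - r) % n] for j in range(n))

def rotate_word(word, r, n):
    return tuple(rotate_letter(letter, r, n) for letter in word)

def completion_label(label_of_normalized, word, n):
    """Given labels defined only on normalized words, return the label of an
    arbitrary word in the symmetric completion: find r such that rot_r(word)
    is normalized, look up its label, and rotate the label back by -r."""
    word = tuple(word)
    rotations = [rotate_word(word, r, n) for r in range(n)]
    smallest = min(rotations)               # the normalized form of the word
    r = rotations.index(smallest)           # rot_r(word) is normalized
    return rotate_letter(label_of_normalized[smallest], -r, n)
```

As a usage example, take a two-process tree whose output simply echoes the last input letter (a labeling that clearly has the symmetry property): given the labels on the normalized one-letter words only, the completion reproduces the echo on the non-normalized word as well.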
4.1 Ensuring Symmetric Completability
Not every Σ_O-labeled computation tree can easily be made symmetric by replacing the tree labels for non-normalized input sequences. Take for example a tree for the architecture given in Figure 1 in which, at the root, one process emits an output while the other one does not. Since the outputs of the processes differ before any input has been read, the processes cannot have the same implementation. We show in this section that detecting such cases is simple, and that the formalization of this observation is a regular property that can be easily encoded into LTL. Let X be some set, and let the processes be indexed by {0, …, n−1}. For every word w ∈ (2^{X × {0, …, n−1}})*, we define
per(w) = n / gcd({r ∈ {1, …, n} | rot_r(w) = w}),
where gcd denotes the greatest common divisor function. For some word w, per(w) represents how many different rotations of w exist that map the word to itself.
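The per function can be sketched as follows; the code counts the fixing rotations directly and checks the count against the gcd-based formula (the word representation is the tuple encoding used above, and is our own choice):

```python
from functools import reduce
from math import gcd

def per(word, n):
    """Number of rotations in {1, ..., n} that map the word to itself."""
    def rotate(w, r):
        return tuple(tuple(letter[(j - r) % n] for j in range(n)) for letter in w)
    fixing = [r for r in range(1, n + 1) if rotate(word, r) == tuple(word)]
    # The fixing rotation values form the multiples of the smallest one
    # (a subgroup of the rotations), so n divided by their gcd counts them.
    assert n // reduce(gcd, fixing) == len(fixing)
    return len(fixing)
```

For example, for four processes, a letter with tuple representation (0, 1, 0, 1) is fixed by rotations 2 and 4, so its per value is 2; a fully uniform letter has per value 4, and a letter with no non-trivial symmetry has per value 1.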
[Second symmetry lemma] Let ⟨T, τ⟩ be a computation tree with T = Σ_I* and τ : Σ_I* → Σ_O for which, for every x ∈ Σ_I*, we have that per(x) divides per(τ(x)) (where divisibility refers to division without remainder). The unique symmetric completion of ⟨T, τ⟩ has the symmetry property. Furthermore, if ⟨T, τ⟩ is regular, then so is its unique symmetric completion. By the second symmetry lemma, it suffices for a computation tree to have per(x) divide per(τ(x)) for all x ∈ Σ_I* to ensure that the symmetric completion of the tree has the symmetry property. We can encode this requirement in LTL as a conjunction over the rotation values r: for every r, the formula requires that the output letter remains invariant under rotation by r until the input has produced a letter that is not invariant under rotation by r, for a function that encodes, for each letter s, whether rot_r(s) = s.
4.2 Ensuring That the Tree Completion Satisfies the Specification
If we have a computation tree all of whose traces satisfy some linear-time specification ψ, this does not imply that its rotation-symmetric completion satisfies ψ as well. If all traces of the tree however satisfy rot_0(ψ) ∧ rot_1(ψ) ∧ … ∧ rot_{n−1}(ψ), then, since we know that every infinite trace in the rotation-symmetric completion is a rotation of a trace in the original tree by some value r, we know that the rotation-symmetric completion also satisfies ψ along every trace. So if we synthesize a tree for rot_0(ψ) ∧ … ∧ rot_{n−1}(ψ) as specification instead of ψ, taking the rotation-symmetric completion maintains ψ.
Note that strengthening ψ to rot_0(ψ) ∧ … ∧ rot_{n−1}(ψ) comes without loss of generality if we are interested in rotation-symmetric implementations. By the symmetry property, if the tree induced by a rotation-symmetric architecture and a process implementation satisfies ψ, then it also satisfies rot_r(ψ) for all r, as every rotation of every trace in the tree is also a trace in the tree. Hence, to satisfy ψ, the tree also needs to satisfy rot_0(ψ) ∧ … ∧ rot_{n−1}(ψ), as otherwise we could take a trace not satisfying rot_r(ψ) for some r, rotate it by n − r, and obtain a trace that does not satisfy ψ.
4.3 Putting Everything Together
Using the concepts defined above, we are now ready to tie them together into a complete synthesis process. We start with a specification ψ over the architecture input propositions I × {0, …, n−1} and the output propositions O × {0, …, n−1}.
We modify the specification ψ to rot_0(ψ) ∧ rot_1(ψ) ∧ … ∧ rot_{n−1}(ψ) (as described in Section 4.2).
We strengthen the resulting specification by conjoining the LTL encoding of the symmetry condition from Section 4.1.
We synthesize a regular tree that satisfies the modified specification along all paths using a classical reactive synthesis procedure. If there is no such tree, the specification is unrealizable.
If a regular computation tree is found, we replace every label along non-normalized directions by rotations of the tree's labels along normalized directions to obtain a tree with the symmetry property.
We cut off the labels of the resulting tree except for the output of the first process in the architecture. The resulting (regular) tree is the synthesized process implementation.
The above synthesis process from LTL has a time complexity that is doubly-exponential in the length of the specification and singly-exponential in the number of processes.
We use the automata-theoretic approach to reactive system synthesis from [17, 24] and the concepts defined in these works. We start by translating the specification to a universal co-Büchi word (UCW) automaton, which is of size exponential in the length of the specification. As UCWs do not blow up under conjunction, executing step 1 from the construction above enlarges the automaton by at most a factor of n. A deterministic automaton for the property added in step 2 can be built with a number of states that is at most exponential in n, so executing step 2 also adds at most exponentially many states in n, and we obtain an automaton with exponentially many states. The bounded synthesis approach works with specifications given as universal co-Büchi word automata and takes time exponential in the number of states of the automaton. The overall time complexity so far is thus doubly-exponential in the length of the specification and exponential in the number of processes. Step 4 leads to a blow-up of at most a factor of n and can be done in time polynomial in the number of states of the synthesized finite-state machine (whose size is proportional to the time complexity of the synthesis procedure executed in the previous step). Step 5 is simple and takes time linear in the size of the FSM. ∎
Note that even though the construction above discards all non-normalized parts of the synthesized computation tree, asking the synthesis algorithm to nevertheless synthesize these parts according to the specification comes without loss of generality: trees with the symmetry property (which we are actually searching for) fulfill rot_0(ψ) ∧ … ∧ rot_{n−1}(ψ) along all paths if all of their paths satisfy ψ. So the synthesis process does not report spurious unrealizability.
5 Rotation-Symmetric Synthesis – Complexity
The symmetric synthesis construction from the previous section has a time complexity that is doubly-exponential in the length of the specification and singly-exponential in the number of processes. In this section, we show that this matches the complexity of the problem by giving a corresponding hardness result. The 2EXPTIME-hardness in the specification length is inherited from the complexity of LTL synthesis. For the EXPTIME-hardness in the number of processes, we provide the following result:
Given an n-space-bounded alternating Turing machine M and a word w, we can reduce the acceptance of w by M to the symmetric realizability problem for n processes with a specification in LTL of size polynomial in the sizes of M and w.
We build a specification that requires the processes to output the Turing tape configuration along an execution of the machine. The specification is realizable if and only if the Turing machine does not accept the word. Every process outputs the value of one Turing tape cell and, if the tape head is at the cell, also the state of the Turing machine. There are n input signals to the architecture, and when the processes start, the left-most local input signal of the processes is used to tell one or more processes that the Turing tape computation should start at that cell with the tape head being initially there (with w as the initial tape content). To account for the rotation symmetry, the processes output not only the tape content and tape head position, but also the current boundaries of the tape. The specification is modeled such that if start and end markers collide, the simulation of the Turing machine can stop.
The specification also includes conjuncts that require all processes together to simulate the Turing machine computation correctly and to never reach an accepting state. Whenever the alternating Turing machine branches universally, the left-most local process input signal is used to select which successor state is picked. In case of existential branching, the processes can decide which successor state to pick. Enforcing the specification to be realizable if and only if the word is not accepted by the Turing machine helps with taking care of the diverging computations of the Turing machine and those computations that exceed the space bound. Both count as non-accepting in the definition of space-bounded Turing machines. Since these runs never visit accepting states and/or permit the simulation to stop, they are allowed to be simulated by a synthesized implementation.
The specification can be written with size polynomial in the sizes of M and w, as we only need to define the specification for one process. By the symmetry of the architecture, the other processes have to fulfill it as well. ∎
A more detailed proof can be found in the appendix.
The rotation-symmetric realizability problem (for LTL) has a time complexity that is at least exponential in the number of processes.
Given the question whether a word is in the language defined by some k-EXPTIME = (k−1)-AEXPSPACE problem for some k ≥ 1, we can reduce it to the symmetric realizability problem for an LTL specification of length polynomial in the length of the word and with a number of processes that is (k−1)-fold exponential in it. Since, by the space hierarchy theorem, the k-EXPTIME hierarchy is strict for increasing k, we can conclude that, in general, we cannot solve the symmetric realizability problem faster than in time exponential in the number of components. ∎
6 The General Case – Undecidability
The synthesis problem for standard, not necessarily symmetric, distributed systems is decidable as long as the processes can be ordered with respect to their relative knowledge about the system inputs. The problem becomes undecidable as soon as it contains an information fork, i.e., a pair of processes with incomparable knowledge. The simplest such architecture is Pnueli and Rosner’s A0 architecture, shown on the left in Fig. 2. In this section, we show that for symmetric synthesis, even architectures without information forks, such as the S0 architecture shown on the right in Fig. 2, are undecidable. Our proof is based on Pnueli and Rosner’s undecidability argument for A0:
For a given Turing machine M, there exists an LTL formula φ_M that is realizable in the distributed architecture A0 if and only if M halts, and such that the two processes of the unique implementation of φ_M sequentially output binary encodings of the configurations of the Turing machine on their respective outputs upon the first true value on their respective inputs.
Because of the undecidability of the halting problem, Lemma 6 implies that the distributed synthesis problem of architecture A0 is undecidable. We prove the undecidability of the symmetric synthesis problem of architecture S0 in two steps. First, we establish the undecidability of the larger architecture S2, depicted in Figure 3, by showing that the realizability of the formula of Lemma 6 in A0 can be reduced to the symmetric realizability of an LTL formula over S2; in the second step, we encode the synthesis problem of S2 into the synthesis problem of S0 and thus establish that the synthesis problem for the simpler architecture S0 is undecidable as well.
The symmetric synthesis problem for architecture S2 is undecidable.
We show that there exists an implementation for the specification of Lemma 6 in the A0 architecture if and only if there exists a joint implementation for the two processes in the S2 architecture that satisfies a modified specification, which results from prefixing all occurrences of the delayed signals in the original specification with a next-time operator.
The results of the two synthesis problems can be translated into each other. A distributed implementation of the specification over A0 is necessarily symmetric: both processes output the same bitstream when reading a true value as their local input for the first time. To obtain an implementation for S2, we simulate the A0 process on the local input and use its output as the local output; additionally, we copy the remaining local input values through to the corresponding local outputs.
Conversely, an implementation found by the symmetric synthesis of S2 provides an implementation of the specification in A0. The key property of the architecture S2 is that a process does not know if its local input is the (delayed) input to the other process, or if its input is the (Turing machine tape) output of the other process. Thus, it cannot find out whether it is the top process or the bottom process in the architecture and must satisfy the specification in either case. A more detailed proof is given in the appendix. ∎
In order to reduce the symmetric synthesis problem of S2 to the symmetric synthesis problem of S0, we introduce compression functions that time-share multiple signals of S2 into a single signal in S0.
Let S be a set of signals. We call a function f : (2^S)^ω → (2^{{b}})^ω for some Boolean variable b a compression function if f is injective. We call a function g that maps a specification over the signal set S to a specification over the signal set {b} the adjunct compression function to f if, for all words w ∈ (2^S)^ω and specifications ψ over S, we have that w satisfies ψ if and only if f(w) satisfies g(ψ).
In the appendix, we give such a pair of compression functions for LTL. The compression mechanism is illustrated in Figure 4. One clock cycle in the four-bit-per-character version of a word is spread over 10 computation cycles in the one-bit-per-character version of the word. Every 10 cycles, the 2-cycle character start sequence (CSS) is instantiated, followed by four two-cycle slots, one for every signal in S. The construction ensures that whenever the CSS bit pattern occurs as a part of a compressed word, a character start sequence begins at that position.
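One possible instance of such an encoding can be sketched as follows. The concrete bit patterns here are our own assumptions for illustration (the paper's actual encoding is in its appendix): the CSS is the pattern (1, 1), and each signal bit b is encoded as the two-cycle slot (b, 0). Because every slot's second cycle is 0, the sub-pattern (0, 1, 1) can only occur where a character start sequence begins, which makes the character boundaries recoverable and the encoding injective.

```python
# Hypothetical compression function for a four-signal set: each input
# letter (a subset of SIGNALS) becomes 10 output cycles, namely a
# two-cycle character start sequence (1, 1) followed by four two-cycle
# slots, one per signal, each encoding membership bit b as (b, 0).
SIGNALS = ("s1", "s2", "s3", "s4")  # illustrative signal names

def compress(word):
    """Map a sequence of letters (sets of signals) to a single bit stream."""
    bits = []
    for letter in word:
        bits += [1, 1]  # character start sequence (CSS)
        for sig in SIGNALS:
            bits += [1 if sig in letter else 0, 0]  # slot for this signal
    return bits
```

Since the slot encoding never produces two consecutive 1-bits after a 0-bit, every occurrence of (0, 1, 1) in a compressed stream marks the start of the next character, 10 cycles after the previous one.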
The symmetric synthesis problem for architecture S0 is undecidable.
In order to reduce the symmetric synthesis problem of architecture S2 to the symmetric synthesis problem of architecture S0, we compress the multiple input, internal, and output signals of S2 into the single corresponding signals of S0. A more detailed proof is given in the appendix. ∎
7 Conclusion
In this paper, we have studied the problem of synthesizing symmetric systems. Our new synthesis algorithm is a useful tool in the development of distributed algorithms, because it automatically checks whether certain properties in a design problem require symmetry breaking.
Our algorithm synthesizes implementations of rotation-symmetric architectures, i.e., architectures where the processes observe all inputs. The undecidability result for the architecture S0 indicates that it is impossible to extend the synthesis algorithm to architectures where the processes no longer have access to all inputs. A promising direction of research, however, is to use our results to extend existing semi-algorithms for synthesis under incomplete information to such symmetric architectures. An example of such an approach is bounded synthesis, which determines if there exists an implementation with at most k states, where k is a given bound. The specification is translated into a universal co-Büchi automaton, which is then, together with the bound k, encoded into a satisfiability modulo theories problem. To ensure correctness under incomplete information, constraints are added that ensure that if a process cannot distinguish two inputs, it transitions to the same successor state. Similarly, for symmetric synthesis, constraints can be added that ensure that the outputs of the individual processes are identical in states that are indistinguishable for them.
Algorithms for symmetric synthesis also offer a new perspective on the problem of synthesizing arbitrarily scalable (i.e., parametric) systems. Due to the undecidability of that problem, only very limited solutions have been found so far. For example, Jacobs and Bloem tackle the case of asynchronous processes with local input in a ring architecture and use the bounded synthesis approach mentioned above. Emerson and Srinivasan present a solution for a multi-process version of a small subset of the temporal logic CTL, while Attie and Emerson give a different solution that allows a bigger subset of CTL but only guarantees correctness of the solution if certain other conditions are fulfilled, such as deadlock-freeness of the produced solution. In such a setting, symmetric synthesis can be used to detect specifications that are unrealizable even for small system sizes: if there is no solution for a fixed number of processes n, then there is certainly none for arbitrarily scalable systems either.
-  Dana Angluin. Local and global properties in networks of processors (extended abstract). In Twelfth Annual ACM Symposium on Theory of Computing (STOC), pages 82–93, 1980.
-  Paul C. Attie and E. Allen Emerson. Synthesis of concurrent systems with many similar processes. ACM Trans. Program. Lang. Syst., 20(1):51–115, 1998. URL: http://doi.acm.org/10.1145/271510.271519, doi:10.1145/271510.271519.
-  Ashok K. Chandra, Dexter Kozen, and Larry J. Stockmeyer. Alternation. J. ACM, 28(1):114–133, 1981.
-  E. M. Clarke, Orna Grumberg, and Doron Peled. Model Checking. MIT Press, 1999.
-  Shimon Cohen, Daniel J. Lehmann, and Amir Pnueli. Symmetric and economical solutions to the mutual exclusion problem in a distributed system. Theor. Comput. Sci., 34:215–225, 1984.
-  Rüdiger Ehlers. Symmetric and efficient synthesis. PhD thesis, Saarland University, 2013. URL: http://scidok.sulb.uni-saarland.de/volltexte/2013/5607/.
-  E. Allen Emerson and Jai Srinivasan. A decidable temporal logic to reason about many processes. In Proc. PODC, pages 233–246, 1990.
-  N. J. Fine and H. S. Wilf. Uniqueness theorems for periodic functions. Proceedings of the American Mathematical Society, 16:109–114, 1965.
-  Bernd Finkbeiner and Sven Schewe. Uniform distributed synthesis. In Proc. LICS, pages 321–330, 2005.
-  Jörg Flum, Erich Grädel, and Thomas Wilke, editors. Logic and Automata: History and Perspectives [in Honor of Wolfgang Thomas], volume 2 of Texts in Logic and Games. Amsterdam University Press, 2008.
-  Swen Jacobs and Roderick Bloem. Parameterized synthesis. Logical Methods in Computer Science, 10(1), 2014. doi:10.2168/LMCS-10(1:12)2014.
-  Ralph E. Johnson and Fred B. Schneider. Symmetry and similarity in distributed systems. In Proc. PODC, pages 13–22. ACM, 1985.
-  Evangelos Kranakis. Invited talk: Symmetry and computability in anonymous networks. In Nicola Santoro and Paul G. Spirakis, editors, Proc. SIROCCO, pages 1–16. Carleton Scientific, 1996.
-  Orna Kupferman and Moshe Y. Vardi. Synthesis with incomplete information. In Proc. ICTL, 1997.
-  Orna Kupferman and Moshe Y. Vardi. μ-calculus synthesis. In Proc. MFCS, pages 497–507, 2000.
-  Orna Kupferman and Moshe Y. Vardi. Synthesizing distributed systems. In 16th Annual IEEE Symposium on Logic in Computer Science (LICS 2001), July 2001.
-  Orna Kupferman and Moshe Y. Vardi. Safraless decision procedures. In FOCS, pages 531–542. IEEE, 2005.
-  Daniel J. Lehmann and Michael O. Rabin. On the advantages of free choice: A symmetric and fully distributed solution to the dining philosophers problem. In Proc. POPL, 1981.
-  Amir Pnueli. The temporal logic of programs. In FOCS, pages 46–57. IEEE, 1977.
-  Amir Pnueli and Roni Rosner. On the synthesis of an asynchronous reactive module. In Giorgio Ausiello, Mariangiola Dezani-Ciancaglini, and Simona Ronchi Della Rocca, editors, ICALP, volume 372 of Lecture Notes in Computer Science, pages 652–671. Springer, 1989.
-  Amir Pnueli and Roni Rosner. Distributed reactive systems are hard to synthesize. In FOCS, volume II, pages 746–757. IEEE, 1990.
-  Michael O. Rabin. Automata on Infinite Objects and Church’s Problem. American Mathematical Society, 1972.
-  Desh Ranjan, Richard Chang, and Juris Hartmanis. Space bounded computations: review and new separation results. Theoretical Computer Science, 80(2):289 – 302, 1991. doi:10.1016/0304-3975(91)90391-E.
-  Sven Schewe and Bernd Finkbeiner. Bounded synthesis. In Kedar S. Namjoshi, Tomohiro Yoneda, Teruo Higashino, and Yoshio Okamura, editors, ATVA, volume 4762 of Lecture Notes in Computer Science, pages 474–488. Springer, 2007.
-  Pierre Wolper. Synthesis of Communicating Processes from Temporal-Logic Specifications. PhD thesis, Stanford University, 1982.
-  Masafumi Yamashita and Tiko Kameda. Computing on an anonymous network. In Proc. PODC, pages 117–130, 1988.
Appendix A: Proof Details
A.1 Additional Preliminaries
We use Moore machines as a finite-state model for regular computation trees. Formally, a Moore machine is a tuple $\mathcal{M} = (S, \Sigma_I, \Sigma_O, \delta, s_0, L)$ with the (finite) set of states $S$, the input alphabet $\Sigma_I$, the output alphabet $\Sigma_O$, the transition function $\delta \colon S \times \Sigma_I \to S$, the initial state $s_0 \in S$, and the labelling function $L \colon S \to \Sigma_O$. A Moore machine induces a computation tree $\langle T, \tau \rangle$ with $T = \Sigma_I^*$ and $\tau \colon T \to \Sigma_O$ such that for all $t \in T$, we have that $\tau(t) = L(\delta^*(s_0, t))$, where $\delta^*$ denotes the iterated application of $\delta$ along $t$. Moore machines induce regular computation trees, i.e., computation trees that only have a finite number of distinct sub-trees.
Given a Moore machine, an extended computation tree induced by it is the same as a computation tree induced by the Moore machine, except that the tree labels are in $S \times \Sigma_O$: for every node $t \in T$, the first label element of $\tau(t)$ describes the state of the Moore machine after reading the input $t$ from the initial state, and the second label element describes the last output after reading $t$ from the initial state as before.
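To make these definitions concrete, here is a minimal sketch of a Moore machine in Python; the class and method names are illustrative, not the paper's notation. The `run` method returns exactly the pair used as a label in the extended computation tree: the state reached after reading the input word and the output of that state.

```python
class MooreMachine:
    """Finite-state Moore machine: the output is a function of the state."""

    def __init__(self, states, s0, delta, label):
        self.states = states  # finite set of states
        self.s0 = s0          # initial state
        self.delta = delta    # dict: (state, input letter) -> successor state
        self.label = label    # dict: state -> output letter

    def run(self, word):
        """Return (state, output) after reading `word` from the initial
        state -- the label of the corresponding node in the extended
        computation tree."""
        s = self.s0
        for a in word:
            s = self.delta[(s, a)]
        return s, self.label[s]
```

As a usage example, a two-state toggle machine over the input alphabet {0, 1} that flips its state on input 1 and outputs 'a' in state 0 and 'b' in state 1 yields `run([1, 1, 1]) == (1, 'b')`.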
A.2 Additional Definitions
In Definition 3, we used the standard definition of parallel composition to say what it means to plug a process implementation into a symmetric architecture. For the sake of completeness, let us formally define this special case of parallel composition.
Given an architecture for some process interface and some Moore machine with and , we define the aggregated Moore machine of the architecture and as with:
for all , we have ,
for all and , such that for all , , and
for all , .
This definition ensures that the values of all signals are “exported” from the aggregated finite-state machine. Thus, when specifying the system behaviour of an aggregated system in a language such as linear-time temporal logic (LTL), we can refer to the signals used internally between the components.
In the main part of the paper, we also define computation trees that encode the behavior of a rotation-symmetric architecture after we plug one process into it. If the process is a finite-state machine, then the resulting computation tree for the behavior of the complete architecture is regular, and hence can be translated (back) to a Moore machine. We call this Moore machine, which describes the behavior of the complete rotation-symmetric architecture implementation, the symmetric product of the single process; its definition is given next. The reader is reminded that and are defined on page 4.
[Symmetric product] Given a Moore machine , we say that a Moore machine is the symmetric product of if , , and for all , :
s. t. and . Note that Definition A.2 is just a combination of Definition 4 and the usual definition of parallel composition of Moore machines, applied to architectures consisting of a single cycle of processes.
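Under the assumption that each process receives the global input rotated by its own index, one step of the symmetric product can be sketched in a few lines. All names below (`rot`, `symmetric_product_step`, `delta`, `label`) are illustrative, not the paper's notation:

```python
def rot(t, k):
    """Rotate a tuple left by k positions."""
    k %= len(t)
    return t[k:] + t[:k]

def symmetric_product_step(states, x, delta):
    """One step of the symmetric product of a single-process machine:
    process i reads the global input tuple x rotated by i, so all
    processes run the same transition function on rotated views of
    the same input.

    states: tuple of the n local states
    x:      global input, a tuple of length n
    delta:  single-process transition, (state, input tuple) -> state
    """
    n = len(states)
    return tuple(delta[(states[i], rot(x, i))] for i in range(n))

def symmetric_product_output(states, label):
    """Global output of the product: the tuple of local outputs."""
    return tuple(label[s] for s in states)
```

This matches the intuition from the introduction: although all processes share one implementation, the rotated input views let them produce different outputs.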
A.3 Proof of the Symmetry Lemma
In the following, for every , let the expression denote the local output of process , i.e., define .
(⇒): The fact that the computation tree induced by the symmetric product of some Moore machine has the symmetry property follows directly from the definitions.
(⇐): For the converse direction, we prove that from every regular computation tree with the symmetry property, we can construct a Moore machine that implements a single process such that taking the symmetric product of this Moore machine yields a product machine whose computation tree is the one we started with.
Let be the computation tree to start with. As it is regular, we have an equivalence relation over the nodes in the tree. Let be the function that maps a tree node in onto a tree node representing its equivalence class, so for all , we have that the sub-trees induced by and are the same if and only if , and for every there is some such that . We build a Moore machine for one process in the symmetric architecture from by setting with:
We now show that the symmetric product of induces a computation tree that is the same as . If we take the symmetric product (Definition A.2) of , we obtain with:
Let be the extended computation tree induced by with . We can show by induction that for every , we have that . The induction basis is trivial, as . For the inductive step, we have:
In step (1)-(A.3) of this deduction, we applied the definitions of the elements of and . In step (A.3)-(3), we used the inductive hypothesis. In step (3)-(4), we used the regularity of the tree: for some and , we need to have as the subtree induced by has to be the same as the one induced by , as otherwise and would not be in the same equivalence class of subtrees (which is a contradiction). The last step uses the fact that if we concatenate two strings that are rotated by the same number of indices, then we can also first concatenate, and then rotate.
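The fact used in the last step, that rotating every letter of a word commutes with concatenating words, is easy to check concretely. The helper names below are illustrative:

```python
def rot(t, k):
    """Rotate a tuple (one letter of the word, i.e., one valuation of
    the n signals) left by k positions."""
    k %= len(t)
    return t[k:] + t[:k]

def rot_word(word, k):
    """Apply the same rotation to every letter of a word."""
    return [rot(a, k) for a in word]

# Rotating the letters of two words and then concatenating them equals
# concatenating the words first and then rotating the letters:
u = [(0, 1, 0), (1, 1, 0)]
v = [(1, 0, 1)]
assert rot_word(u, 1) + rot_word(v, 1) == rot_word(u + v, 1)
```

The equality holds because the rotation acts on each letter independently, so it is oblivious to where one word ends and the next begins.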
Now let us have a look at the outputs in the extended computation tree . For every , we have:
In step (8)-(9), we simply applied the definition of . In step (9)-(10), we used the fact that we are dealing with equivalence classes over nodes in the computation tree that respect the labelling of the system. In step (10)-(11), we use the symmetry property of . For every , we have by this property, and then by renaming. In the last step, we just plug together the tuple. ∎
A.4 Correctness of the Function
The definition of the function in Section 4.1 is supposed to describe how to compute the symmetry degree of a word, i.e., the number of processes getting the same rotations of an input proposition valuation, or the number of rotations of the output of the processes that lead to the same element of . We prove in two steps that the definition of the function achieves this goal, starting with the following sub-lemma:
If there are precisely values (for some ) such that for some , then the list of indices is precisely the list of indices but such that for all , we have but for all , we have or either or .
For all , we know that as well since for all , . Furthermore, .
To show that all elements in are equally spaced (modulo ), assume the contrary. So we have with and there are no indices in in between and or and , respectively. By the argument above, if we also have , or if we also have , which is a contradiction. The case that involves wrapping around in the modulo space can be proven similarly.
So we know that there are equally spaced elements in , and by the same line of reasoning, we can also deduce that the spacing between the elements in is the same as the spacing between and the largest element in L. Since furthermore and for all , the claim follows. ∎
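The quantity characterized here can be illustrated concretely as the number of rotations under which a word is invariant; by the equal-spacing argument above, this number always divides the length of the word. The name `symmetry_degree` below is illustrative, not the paper's notation:

```python
def symmetry_degree(word):
    """Number of rotations k (0 <= k < n) that map the word to itself.
    By periodicity of words (cf. Fine and Wilf), the rotations fixing a
    word are equally spaced, so this count always divides len(word)."""
    n = len(word)
    return sum(1 for k in range(n) if word[k:] + word[:k] == word)
```

For example, "abab" is fixed by the rotations 0 and 2, so its symmetry degree is 2, while a word with no repetition such as "abcd" has symmetry degree 1.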
Lemma A.4 can alternatively be shown by applying a theorem by Fine and Wilf from combinatorics on words. To use it, we would however have to rearrange the letters in a word, and describing that construction would be more complicated than giving a direct proof, which is why the latter has been given here.

For every , we have
The proof is done by induction on the length of .
Basis: Trivial, since for every