A faster algorithm for the FSSP in one-dimensional CA with multiple speeds

03/30/2020
by Thomas Worsch, et al.

In cellular automata with multiple speeds there is, for each cell i, a positive integer p_i such that this cell still updates its state periodically, but only at times which are a multiple of p_i. Additionally there is a finite upper bound on all p_i. Manzoni and Umeo have described an algorithm for these (one-dimensional) cellular automata which solves the Firing Squad Synchronization Problem. This algorithm needs linear time (in the number of cells to be synchronized), but for many problem instances it is slower than the optimum time by some positive constant factor. In the present paper we derive lower bounds on possible synchronization times and describe an algorithm which is never slower and in some cases faster than the one by Manzoni and Umeo, and which is close to a lower bound (up to a constant summand) in more cases.


1 Introduction

The Firing Squad Synchronization Problem (FSSP) has a relatively long history in the field of cellular automata. The formulation of the problem dates back to the late fifties and first solutions were published in the early sixties. A general overview of different variants of the problem and solutions with many references can be found in [3]. Readers interested in more recent developments concerning several specialized problems and questions are referred to the survey [4].

In recent years asynchronous CA have received a lot of attention. In a “really” asynchronous setting (when nothing can be assumed about the relation between updates of different cells) it is of course impossible to achieve synchronization. As a middle ground the FSSP has been considered in what Manzoni and Umeo [2] have called CA with multiple speeds, abbreviated in the following as MS-CA. In these CA different cells may update their states at different times. But there is still enough regularity so that the problem setting of the FSSP makes sense: As in standard CA there is a global clock. For each cell $i$ there is a positive integer $p_i$ such that this cell only updates its state at times which are a multiple of $p_i$. We will call $p_i$ the period of cell $i$. Additionally there is a finite upper bound on all $p_i$, so that it can be assumed that each cell has $p_i$ stored as part of its state. This also means that there are always times (namely the multiples of the least common multiple of all periods) when all cells update their states simultaneously.

The rest of this paper is organized as follows: In Section 2 we fix some notation and review the basics of standard CA in general and of the FSSP for them. In Section 3 cellular automata with multiple speeds (MS-CA) and the corresponding FSSP will be introduced. Since most algorithms for the FSSP make heavy use of signals, we have a closer look at what can happen with them in MS-CA. In Section 4 some lower bounds for the synchronization times will be derived. Finally, in Section 5 an algorithm for the FSSP in MS-CA will be described in detail.

2 Basics

$\mathbb{Z}$ denotes the set of integers, $\mathbb{N}_+$ the set of positive integers and $\mathbb{N}_0 = \mathbb{N}_+ \cup \{0\}$. For $a, b \in \mathbb{Z}$ with $a \le b$ we define $[a, b] = \{x \in \mathbb{Z} \mid a \le x \le b\}$. For $n \in \mathbb{N}_+$ let $[n] = [1, n]$.

The greatest common divisor of a set $M$ of numbers is abbreviated as $\gcd(M)$ and the least common multiple as $\operatorname{lcm}(M)$.

We write $B^A$ for the set of all functions $f\colon A \to B$. The cardinality of a set $M$ is denoted $|M|$.

For a finite alphabet $A$ and $n \in \mathbb{N}_0$ we write $A^n$ for the set of all words over $A$ having length $n$, $A^{\le n}$ for $\bigcup_{0 \le k \le n} A^k$, and $A^* = \bigcup_{k \ge 0} A^k$. For a word $w \in A^*$ and some $n \in \mathbb{N}_0$ the longest prefix of $w$ which has length at most $n$ is denoted as $\operatorname{pref}_n(w)$, i. e. $\operatorname{pref}_n(w)$ is the prefix of length $n$ of $w$, or the whole word $w$ if it is shorter than $n$. Analogously $\operatorname{suf}_n(w)$ is used for suffixes of $w$.

Usually cellular automata are specified by a finite set $Q$ of states, a neighborhood $N$, and a local transition function $\delta$. In the present paper we will only consider one-dimensional CA with Moore neighborhood of radius $1$, i. e. $N = \{-1, 0, 1\}$.

Therefore a (global) configuration of a CA is a function $c\colon \mathbb{Z} \to Q$, i. e. $c \in Q^{\mathbb{Z}}$. Given a configuration $c$ and some cell $i$ the so-called local configuration observed by $i$ in $c$ is the mapping $N \to Q$, $n \mapsto c(i+n)$, and is denoted by $c_{i+N}$. In the standard definition of CA the local transition function $\delta\colon Q^N \to Q$ induces a global transition function $\Delta\colon Q^{\mathbb{Z}} \to Q^{\mathbb{Z}}$ describing one step of the CA from $c$ to $\Delta(c)$ by requiring that $\Delta(c)(i) = \delta(c_{i+N})$ holds for each $i \in \mathbb{Z}$.

By contrast in MS-CA it is not possible to speak about the successor configuration. The relevant definitions will be given and discussed in the next section.

Before that, we quickly recap the Firing Squad Synchronization Problem (FSSP). A CA solving the FSSP has to have a set of states $Q \supseteq \{G, \_, F, \#\}$. For $n \in \mathbb{N}_+$ the problem instance of size $n$ is the initial configuration $c_n$ where

$$c_n(i) = \begin{cases} G & \text{if } i = 1, \\ \_ & \text{if } 2 \le i \le n, \\ \# & \text{otherwise.} \end{cases}$$

Cells outside the segment between cell $1$ and cell $n$ are in a state # which is supposed to be fixed by all transitions. State G is the initial state for the so-called general, state _ is the initial state for all other cells, and state F indicates that the cells have been synchronized.

The goal is to find a local transition function which makes the CA transit from each $c_n$ to a configuration in which all cells $1, \dots, n$ are in state F (the other cells still all in state #) and such that no cell ever was in state F before. In addition the local transition function has to map a cell in state _ both of whose neighbors are in state _ or # again to state _, which prohibits the trivial “solution” of having all cells enter state F in the first step and implies that “activities” have to start at the G-cell and spread to other cells from there.

Within the framework of synchronization let’s call the set $\{i \mid c(i) \ne \#\}$ the support of configuration $c$. As a consequence of all these requirements, during a computation starting with some problem instance $c_n$ all subsequent configurations have the same support $[1, n]$.

It is well known that there are CA which achieve synchronization in time $2n - 2$ (for $n \ge 2$) and that no CA can be faster, not even for a single problem instance.

3 CA with multiple speeds

3.1 Definition of MS-CA

A cellular automaton with multiple speeds (MS-CA) is a specialization of standard CA. Its specification requires a finite set of so-called possible periods (in [2] they are called lengths of update cycles). Before a computation starts a period has to be assigned to each cell; it remains fixed throughout the computation. Requiring the set of possible periods to be finite is meaningful for several reasons:

  • It can be shown [2, Prop. 3.1] that otherwise it is impossible to solve the FSSP for MS-CA with one fixed set of states.

  • We want each cell to be able to make its computation depend on its own period and those of its neighbors to the left and right, but of course the analogue of the local transition function should still have a finite description. To this end we want to be able to assume that each cell has its own period stored in its state.

For the rest of the paper we assume that the set of states is always a cartesian product whose first component is the period of the cell, and that the transition function never changes this first component. We will denote the period of a cell $i$ as $p_i$. For a state $q$ we write $\pi_1(q)$ and $\pi_2(q)$ for the projections on the first and second component, and analogously for global configurations. For a global configuration $c$ we write $P(c)$ (or simply $P$ if $c$ is clear from the context) for the set of numbers that are periods of cells in the support of $c$; cells in state # can be ignored because they don’t change their state by definition.

For MS-CA it is not possible to speak about the successor configuration. Instead it is necessary to know how many time steps have already happened since the CA started. Borrowing some notation from asynchronous CA, for any subset $A$ of cells and any configuration $c$ denote by $\Delta_A(c)$ the configuration reached from $c$ if exactly the cells in $A$ update their states according to $\delta$ and all other cells do not change their state:

$$\Delta_A(c)(i) = \begin{cases} \delta(c_{i+N}) & \text{if } i \in A, \\ c(i) & \text{otherwise.} \end{cases}$$

(Thus, the global transition function of a standard CA is $\Delta = \Delta_{\mathbb{Z}}$.) Given some MS-CA and some time $t$ denote by $A(t) = \{i \mid p_i \text{ divides } t\}$ the set of so-called active cells at time $t$. Then, for each initial configuration $c_0$ the computation resulting from it is the sequence $(c_t)_{t \in \mathbb{N}_0}$ where $c_0$ is that configuration and $c_{t+1} = \Delta_{A(t+1)}(c_t)$ for each $t \in \mathbb{N}_0$. In particular this means that at time $L$ all cells will update their states according to $\delta$, where $L$ denotes the least common multiple of all $p \in P$. More generally this is true for all $t$ that are a multiple of $L$, i. e. all $t$ that are a multiple of all elements in $P$. We will speak of a common update when $L$ divides $t$.

The observations collected in the following lemma are very simple and don’t need an explicit proof.

Lemma 1.

Let $g$ be the greatest common divisor of all $p \in P$ and let $P/g = \{p/g \mid p \in P\}$. Let $L = \operatorname{lcm}(P)$ as before.

  1. For each $t$ that is not a multiple of $g$ the set $A(t)$ is empty.

  2. For each computation, $c_t = c_{t-1}$ for each $t$ that is not a multiple of $g$.

  3. The computation $(c_{g t})_{t \in \mathbb{N}_0}$ results when using the periods $p_i / g$ instead of $p_i$ and exactly the same local transition function.

  4. When all cells involved in a computation have the same period $p$, it is simply a $p$ times slower “copy” of the computation in a standard CA.

Therefore the interesting cases are whenever $g = 1$ and $|P| \ge 2$. We will assume this for the rest of the paper without always explicitly mentioning it again.

Fact 2.

If $g = 1$ and $|P| \ge 2$ then there is at least one odd number in $P$ that can be used as a period.
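A small worked example of these observations, with the symbols as reconstructed above:

$$P = \{2, 6\}:\ g = 2,\ P/g = \{1, 3\},\ L = 6; \qquad P = \{2, 3\}:\ g = 1,\ |P| = 2,\ 3 \in P \text{ is odd.}$$

In the first case Lemma 1 says that, up to a slowdown by the factor $2$, the computation is the same as the one with periods $\{1, 3\}$; the second case satisfies the assumption of Fact 2.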

3.2 Signals in MS-CA

Since almost all CA algorithms for the synchronization problem make extensive use of signals, they are also our first example for some MS-CA. Figure 1 shows a sketch of a space-time diagram. Time is increasing in the downward direction (throughout this paper). When a cell is active at time $t+1$ a triangle between the old state in row $t$ and the new state in row $t+1$ indicates the state transition. The numbers in the top row are the periods of the cells.

At this point it is not important to understand how an appropriate local transition function could be designed to realize the depicted signal. But, assuming that this can be done, the example has been chosen such that the signal is present in a cell $i$ for the first time exactly $p_i$ steps after it was first present in the left neighbor $i-1$.

Figure 1: Sketch of how a basic signal could move right in an MS-CA as fast as possible. The numbers in the top row are the periods of the cells (here 1 1 3 3 2 2 2). Time is going down. Triangles indicate active state transitions, i. e. when they are missing the state has to stay the same. Common updates are marked.

One possibility to construct such computations is the following. Putting aside an appropriate number of cells at either end, the configuration consists of blocks of cells. In each block all cells have some common period $p$ and there are $L/p$ cells in the block. For example, in Figure 1 one has $L = 6$ (at least one can assume that, since only the periods $1$, $3$, and $2$ are used) and there are $2$ cells with period $3$ and $3$ cells with period $2$. Let’s number the cells in such a block from $1$ to $L/p$.

Assume that a signal should move to the right as fast as possible. For each such block of cells with period $p$ the following holds: If the signal appears in the left neighbor of such a block for the first time after a common update at some time $t$, then it can only enter cell $1$ of the block at time $t + p$, cell $2$ of the block at time $t + 2p$ etc., and hence the last cell of the block at time $t + (L/p)\,p = t + L$. That cell is the left neighbor of the next block. Hence by induction the same is true for every block and the passage of the signal through each block will take $L$ steps. This happens to be the sum of all periods of cells in one block.
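The argument can be checked with a small sketch of the earliest possible first-appearance times of such a signal; the function name and indexing below are mine, not the paper's.

    # Earliest times at which a fastest-possible right-moving signal can first
    # be present in each cell: a cell with period p can pick the signal up at
    # its first activation strictly after its left neighbour holds it.
    def earliest_arrivals(periods, start_time=0):
        arrivals = [start_time]          # the signal starts in the leftmost cell
        for p in periods[1:]:
            t_prev = arrivals[-1]
            arrivals.append(((t_prev // p) + 1) * p)
        return arrivals

    # the situation of Figure 1: the signal is in the last period-1 cell at a
    # common update (time 0); each full block is then traversed in L = 6 steps
    print(earliest_arrivals([1, 3, 3, 2, 2, 2]))   # [0, 3, 6, 8, 10, 12]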

Unfortunately there are also cases in which signals are not delayed by the increase of the periods of some cells. Figure 2 shows a situation where the periods $1$ and $2$ are assigned to subsequent cells alternatingly. As can be seen, a signal can move from each cell to the next one in every step, including a change of direction at the right border.

Figure 2: Basic signal first moving right and bouncing back at the border with speed $1$ although half of the cells have only period $2$. The periods of the cells are 1 2 1 2 1 2 1 2 1 1.

3.3 The FSSP in MS-CA

In the standard setting for each $n \in \mathbb{N}_+$ there is exactly one problem instance of the FSSP of length $n$.

In MS-CA we will assume that the set of states is always of the form described in Subsection 3.1, i. e. the first component of a state is the period of the cell. We will call a configuration $c$ a problem instance for the MS-FSSP if two conditions are satisfied:

  • $\pi_2(c)$ is a problem instance for the FSSP in standard CA.

  • The period of all border cells is the same as that of the G-cell.

By definition border cells never change their state, no matter what their period is. The second condition just makes sure that formally a period is assigned even to border cells, but this does not change the set of periods that are present in the cells of the support that do the real work.

Now for each size $n$ there are as many problem instances of the MS-FSSP as there are assignments of possible periods to the $n$ cells of the support.

It should be clear that the minimum synchronization time will at least in some cases depend on the periods. Assume that there are two different possible periods, say $p < p'$. Then, when all cells have period $p$ synchronization can be achieved more quickly than when all cells have period $p'$. A straightforward transfer of a (time optimal) FSSP algorithm for standard CA (needing $2n - 2$ steps) yields an MS-CA running in time $p\,(2n - 2)$. This is faster than any MS-CA with uniform period $p'$ can be, which needs $p'\,(2n - 2)$ (see Section 4).
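For example, with the times as reconstructed above and $n = 10$:

$$p = 1:\ 1 \cdot (2 \cdot 10 - 2) = 18 \text{ steps}, \qquad p' = 3:\ 3 \cdot (2 \cdot 10 - 2) = 54 \text{ steps}.$$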

4 On lower bounds for the synchronization time on MS-CA

In the case of standard CA the argument used for deriving lower bounds for the synchronization time uses the following observation. Whenever an algorithm makes the leftmost cell fire at some time $t$, it can only be correct if changing the border state # in cell $n+1$ to state _ (i. e. increasing the size of the initial configuration by $1$) can possibly (and in fact will) have an influence on the state of cell $1$ at time $t$. If cell $1$ fires too early, changing the state at the right end cannot have an influence on cell $1$ in time. But then adding cells in state _ to the right will still make cell $1$ enter state F at time $t$, while the now rightmost cell will not have had any chance to leave its state _.

This argument can of course be transferred to MS-CA, and it means that one has to find out the minimum time to send a signal to the rightmost cell of the support and back to cell $1$.

Theorem 3.

For every MS-CA solving the MS-FSSP there are constants $c$ and $\beta > 1$ such that for infinitely many $n$ there are at least $\beta^{n}$ problem instances of size $n$ such that the MS-CA needs at least

$$2 \sum_{i=1}^{n} p_i - c$$

steps for the synchronization of each of them.

Proof.

The example in Figure 1 can be generalized.

We first define a set $P'$ of two periods we will make use of. According to Fact 2 the set of odd numbers in $P$ is not empty. If $P$ contains an even number, then let $p_e$ be such a number and $p_o$ an odd number in $P$. If $P$ contains only odd numbers, then let $p_o$ and $p_e$ be two of them, where $p_e = \max(P)$ (choosing the maximum is not important; we just want to be concrete). Let $P' = \{p_o, p_e\}$ and $L' = \operatorname{lcm}(P')$.

We will use blocks of length $L'/p$ of successive cells with the same period $p \in P'$ (as in Figure 1). As will be seen it is useful to have at least one odd block length. Indeed, $L'/p_e$ is odd (because $p_e$ is the only possibly even number in $P'$).

Since there are $2$ different numbers in $P'$, there are also $2$ different block lengths, denoted as $\ell_o = L'/p_o$ and $\ell_e = L'/p_e$.

Consider now all problem instances similar to Figure 1 which are built as follows, for some $b \in \mathbb{N}_+$:

  • The periods of the cells in an initial segment are the same but otherwise arbitrary.

  • The rest of the cells is partitioned into $2b$ blocks. There are $b$ blocks consisting of $\ell_o$ cells each, where the period of all cells in the block is $p_o$, and there are $b$ blocks consisting of $\ell_e$ cells each, where the period of all cells in the block is $p_e$.

  • At the right end of the instance there is a segment consisting of $k$ cells of period $p_e$ where $2k - 1 = \ell_e$. Since $\ell_e$ is odd, one has $k = (\ell_e + 1)/2 \in \mathbb{N}_+$.

For each $b$ the total size of the problem instances is the length of the initial segment plus $b\,(\ell_o + \ell_e) + k$, which is linear in $b$, and there are $\binom{2b}{b}$ different arrangements of the blocks. This number is known to be larger than $2^b$ (proof by induction). Formulated the other way around, for these problem sizes there is a number of problem instances which is exponential in $b$ and hence also in $n$ (for some appropriately chosen base $\beta$).
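Assuming the number of arrangements is the central binomial coefficient, as reconstructed above, the inductive step can be seen as follows:

$$\binom{2(b+1)}{b+1} = \binom{2b}{b} \cdot \frac{(2b+1)(2b+2)}{(b+1)^2} = \binom{2b}{b} \cdot \frac{2(2b+1)}{b+1} \ \ge\ 2 \binom{2b}{b}, \qquad\text{hence } \binom{2b}{b} \ge 2^b.$$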

It remains to estimate the synchronization time for these problem instances. As already described in Subsection 3.2, a signal that is supposed to first move to the right border as fast as possible and then back to cell $1$ will arrive in the first cell to the right of the initial segment at the earliest after the first common update. From that time on it will take exactly $L'$ steps to “traverse” each block, which is also the sum of all periods of the cells in the block.

For the passage through the last $k$ cells forth and back have a look at Figure 3. Each cell is passed twice, once when the signal moves right and once when it moves back to the left, each time for $p_e$ steps. The only exception is the rightmost cell, where the signal stays only for the duration of one period, i. e. $p_e$ steps. Altogether these are $(2k - 1)\,p_e = \ell_e\,p_e = L'$ steps, which is exactly the same number of steps as for each full block to the left. Consequently the signal moving back to the left is for the first time in the cell to the right of a full block immediately after a common update.
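With the quantities as reconstructed above, this count reads for the instance of Figure 3:

$$(2k - 1)\,p_e = (2 \cdot 2 - 1) \cdot 2 = 6 = \ell_e\,p_e = L'.$$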

Figure 3: Reflection of a “fast” signal at the right border. We use the periods $3$ and $2$, hence $L' = 6$ and $k = 2$; therefore at the right end there are always $2$ cells with period $2$. The periods in the top row are 3 3 2 2 2 2 2 (left part) and 3 3 3 3 2 2 (right part); small spaces have been introduced to make the boundaries of the last full block better visible. On the left hand side the last full block has $3$ cells of period $2$ and on the right hand side the last full block has $2$ cells of period $3$. For more details see the proof of Theorem 3. Common updates are marked.

Summing up all terms for the movement of a signal from the very first cell to the right border and back results in a total time which contains the term $2b \cdot L'$, i. e. the sum of all periods of the cells in the $2b$ blocks, twice (once for the passage to the right and once for the passage back to the left), hence the lower bound claimed above.

It can be observed that in the case that all $p_i = 1$ the formula becomes the well-known lower bound of $2n - 2$. ∎

In the following section we will describe an algorithm which achieves a running time which is slower than the lower bound in Theorem 3 by only a constant summand.

5 Detailed description of the synchronization algorithm for MS-CA

To the best of our knowledge the paper by Manzoni and Umeo [2] is the only work on the FSSP in one-dimensional MS-CA until now. They describe an algorithm which achieves synchronization in a time that is linear in $n$, with a factor depending on the maximum period used by some cell in the initial configuration.

Below we will describe an algorithm which proves the following:

Theorem 4.

For each finite set $P \subseteq \mathbb{N}_+$ of possible periods there is a constant $c_P$ and an algorithm which synchronizes each MS-FSSP instance of size $n$ with periods $p_1, \dots, p_n \in P$ in time

$$2 \sum_{i=1}^{n} p_i + c_P . \qquad (1)$$

In the case of standard CA all $p_i = 1$ and formula (1) becomes $2n + c_P$, which is only a constant number of steps slower than the fastest algorithms possible.

5.1 Core idea for synchronization

In the proof of the lower bound above we have constructed problem instances consisting of blocks of cells with identical period. The arrangement was chosen in such a way that a signal, even if it were to move as fast as possible, would have to spend $p$ steps in a cell with period $p$ before moving on. In a standard CA this is the time a signal with speed $1$ needs to move across $p$ cells. This leads to the idea of having each cell with period $p$ of the MS-CA simulate $p$ cells of a standard CA (solving the FSSP). We’ll call the simulated cells virtual cells or v-cells for short, and where disambiguation seems important call the cells of the MS-CA host cells. States of v-cells will be called v-states.
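The correspondence between host cells and v-cells can be written down directly; this is only a tiny sketch and the function name is mine.

    # Each host cell with period p represents p consecutive cells of the
    # simulated standard CA (its v-cells).
    def v_cell_ranges(periods):
        ranges, start = [], 0
        for p in periods:
            ranges.append(range(start, start + p))
            start += p
        return ranges

    print(v_cell_ranges([1, 3, 3, 2, 2, 2]))
    # [range(0, 1), range(1, 4), range(4, 7), range(7, 9), range(9, 11), range(11, 13)]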

5.2 Details of the synchronization algorithm

From now on, assume that we are given some standard CA for the standard FSSP. Its set of states will be denoted as $S$.

Algorithm 5.

As a first step we will sketch the components of the set of states of the MS-CA.

We already mentioned in Subsection 3.3 that we assume the set of states to be of the form described there, i. e. with the period as the first component. Let $L$ denote the least common multiple of all periods occurring in the initial configuration. Since in the algorithm below host cells will have to count from $0$ up to $L - 1$, we require that the set of states always contains a component $\{0, 1, \dots, L-1\}$. Hence every state contains such a counter value, and we assume that the transition function will in each step update the counter component of a cell by incrementing it by its period, modulo $L$. Imagine that this is always “the last part” of a transition whenever a cell is active. Thus an active cell can identify the common updates by the fact that its counter component is $0$. But of course it is equally easy for an active cell to identify an activation that is the last before a common update.

Next, each host cell will have to store the states of some v-cells. As will be seen this will not only comprise the states of the v-cells it is going to simulate, but also the states of v-cells simulated by neighboring host cells; we will call these neighboring v-cells. To this end we choose three further components of the set of states, each holding a word of v-states; we will denote them as $\ell$, $m$, and $r$. In component $\ell$ a host cell will accumulate the states of more and more neighboring v-cells from the left. Analogously, in component $r$ a host cell will accumulate the states of more and more neighboring v-cells from the right. In the middle component $m$ a host cell will always store the states of the v-cells it has to simulate itself.

The simulation will run in cycles each of which is $L$ steps long and begins with a common update. During one cycle a cell with period $p$ will be active $L/p$ times. Whenever a host cell is active it collects as many neighboring v-states as possible, but at most $L$ from either side. More precisely this is done as follows:


$$\ell \leftarrow \operatorname{suf}_L(\ell' \cdot m'), \qquad m \leftarrow m, \qquad r \leftarrow \operatorname{pref}_L(m'' \cdot r''),$$

where $(\ell', m', r')$ and $(\ell'', m'', r'')$ denote the components of the left and the right neighbor, respectively.

In other words, the states of the own v-cells are not changed, but more and more neighboring v-states are being collected. We will show in Lemma 6 below that during the last activation of a cycle, i. e. the last activation before a common update, after having collected neighboring v-states once more, the lengths of the $\ell$ and $r$ components are in fact $L$ and not shorter. It is therefore now possible for each host cell to replace the v-states of its v-cells by the v-states those v-cells would be in after $L$ steps. The $\ell$ and $r$ components are reset to the empty word.

It is during the last activation of a cycle that a host cell will compute state F for each of its v-cells. The immediately following activation is a common update for all host cells. They will simultaneously detect that their v-cells have reached the “virtual” F and all enter the “real” firing state.
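The collection phase can be illustrated with the following toy sketch; the component names, the pref/suf update rule and the exact timing within a cycle are reconstructions and simplifying assumptions, not the paper's literal definitions. V-states are represented by global v-cell indices so that one can see what has been collected, and border cells behave as if already filled with #.

    from math import lcm

    def one_cycle(periods):
        """Run the collection phase for one cycle (times 1..L) and return the
        (l, m, r) components of every host cell after the common update at L."""
        n, L = len(periods), lcm(*periods)
        m, start = [], 0
        for p in periods:                      # host i owns p_i consecutive v-cells
            m.append(list(range(start, start + p)))
            start += p
        l = [[] for _ in range(n)]
        r = [[] for _ in range(n)]
        border = ["#"] * L
        for t in range(1, L + 1):
            new_l = [list(x) for x in l]
            new_r = [list(x) for x in r]
            for i in range(n):
                if t % periods[i] == 0:        # host i is active at time t
                    ll = l[i - 1] if i > 0 else border
                    lm = m[i - 1] if i > 0 else border
                    rm = m[i + 1] if i < n - 1 else border
                    rr = r[i + 1] if i < n - 1 else border
                    new_l[i] = (ll + lm)[-L:]  # suf_L(l' . m')
                    new_r[i] = (rm + rr)[:L]   # pref_L(m'' . r'')
            l, r = new_l, new_r
        return l, m, r

    l, m, r = one_cycle([1, 2, 3])             # L = 6
    for i in range(len(m)):
        print(i, len(l[i]), m[i], len(r[i]))   # each host has collected L v-states per side

Running it for the periods 1, 2, 3 shows that by the end of the cycle every host cell has $L = 6$ neighboring v-states available on each side, which is what Lemma 6 asserts.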

For a proof of Theorem 4 only the following two aspects remain to be considered.

Lemma 6.

After one cycle of Algorithm 5 each host cell will have collected the states of $2L$ neighboring v-cells, $L$ to the left and $L$ to the right.

Proof.

Without loss of generality we only consider the collection to the left. We will prove by induction on the global time that the following holds:

For each cell with components $(\ell, m, r)$ as above and with period $p$ and for all $t$ with $0 \le t \le L$: If $p$ divides $t$ then the cell is active at time $t$ and after the transition $|\ell| \ge t$.

If $t = 0$ then $\ell$ is the empty word, $|\ell| = 0$, and the claim obviously holds.

Now assume that the statement is true for all times less than or equal to some $t$. Again, nothing has to be done if $p$ does not divide $t + 1$; assume therefore that $p$ divides $t + 1$.

Consider a cell with components $(\ell, m, r)$ and period $p$ and its left neighbor with components $(\ell', m', r')$ and period $p'$, and therefore $|m'| = p'$. Let $t'$ be the time when the left neighbor was active for the last time before $t + 1$, and let $d = t + 1 - t'$. Then $t' = t + 1 - d$ for some $d \ge 1$, and since it was the last activation before $t + 1$, $d \le p'$. By the induction hypothesis the left neighbor already had $|\ell'| \ge t'$. The new $\ell$ of the cell under consideration is $\operatorname{suf}_L(\ell' \cdot m')$, which then has length at least $t' + p' \ge t + 1$. Since $t + 1 \le L$ the proof is almost complete.

Strictly speaking the above argument does not hold when the left neighbor is a border cell. But in that case the border cell can be treated by its neighbor as if it had already filled its components with states #. ∎

Lemma 7.

For a problem instance of size $n$ with periods $p_1, \dots, p_n$ the time needed by Algorithm 5 for synchronization can be bounded by

$$2 \sum_{i=1}^{n} p_i + c_P . \qquad (2)$$
Proof.

The total number of v-cells simulated is $\sum_{i=1}^{n} p_i$. A time optimal FSSP algorithm for standard CA needs $2 \sum_{i=1}^{n} p_i - 2$ steps for the synchronization of that many cells. During each cycle of length $L$ exactly $L$ steps of each v-cell are simulated, except possibly for the last cycle. During that, the F v-state may be reached in less than $L$ steps.

Hence the total number of steps is $2 \sum_{i=1}^{n} p_i + c_P$ for some appropriately chosen constant $c_P$. ∎
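Under the reconstruction above, one rough way to account for the constant is to bound the number of full cycles and allow one more cycle for the concluding common update (the exact constant in the paper may differ):

$$L \cdot \Big\lceil \frac{2\sum_{i=1}^{n} p_i - 2}{L} \Big\rceil + L \ \le\ 2\sum_{i=1}^{n} p_i - 2 + 2L, \qquad \text{i. e. } c_P \le 2L - 2.$$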

To sum up, taking Theorem 3 and Theorem 4 together one obtains

Corollary 8.

For each finite set $P \subseteq \mathbb{N}_+$ of possible periods there is a constant $c_P$ such that there is an MS-CA for the MS-FSSP which needs synchronization time at most $2 \sum_{i=1}^{n} p_i + c_P$, and for infinitely many sizes $n$ there are exponentially many problem instances (of size $n$) for which there is a lower bound on the synchronization time of $2 \sum_{i=1}^{n} p_i - c_P$.

6 Outlook

In this paper we have described an MS-CA for the synchronization problem which is sometimes faster and never slower than the one by Manzoni and Umeo. For a number of problem instances which is exponential in the number of cells to be synchronized the time needed is close to some lower bound derived in Section 4. An initial version of the proof could be improved thanks to an anonymous reviewer.

While higher-dimensional MS-CA have been considered [1], in the present paper we have restricted ourselves to the one-dimensional case. In fact it is not completely clear how to generalize the algorithm described above to two-dimensional CA. For the MS-CA described in this paper it is essential that

  • from one cell to another one there is only one shortest path

  • and it is clear how many v-cells a cell should simulate.

The generalization of this approach to two-dimensional CA is not obvious to us. In addition the derivation of reasonably good lower bounds on the synchronization times seems to be more difficult, but if one succeeds that might give a hint as to how to devise an algorithm. As a matter of fact, the same happened in the one-dimensional setting.

Similarly it is not clear how to apply the ideas in the case of CA solving some problem other than the FSSP, because only (?) for the FSSP is it obvious which state(s) to choose for the v-cells in the initial configuration.

Both aspects, algorithms and lower bounds, are interesting research topics but need much more attention. Even in the one-dimensional case there is still room for improvement, as has been seen in Figure 2.

It remains an open problem how cells with different update periods should be ordered to ensure that they can be synchronized as soon as possible.

References

  • [1] Manzoni, L., Porreca, A.E., Umeo, H.: The firing squad synchronization problem on higher-dimensional CA with multiple updating cycles. In: Fourth International Symposium on Computing and Networking, CANDAR 2016, Hiroshima, Japan, November 22-25, 2016, pp. 258–261 (2016)
  • [2] Manzoni, L., Umeo, H.: The firing squad synchronization problem on CA with multiple updating cycles. Theor. Comput. Sci. 559, 108–117 (2014)
  • [3] Umeo, H.: Firing squad synchronization problem in cellular automata. In: Encyclopedia of Complexity and Systems Science, pp. 3537–3574. Springer (2009)
  • [4] Umeo, H.: How to synchronize cellular automata – recent developments –. Fundam. Inform. 171(1-4), 393–419 (2020)