Synchronization of Deterministic Visibly Push-Down Automata

05/04/2020 · by Henning Fernau et al.

We generalize the concept of synchronizing words for finite automata, which map all states of an automaton to the same state, to deterministic visibly push-down automata. Here, a synchronizing word w not only maps all states to the same state but also fulfills certain conditions on the stack content of each run after reading w. We consider three types of these stack constraints: after reading w, the stack (1) is empty in each run, (2) contains the same sequence of stack symbols in each run, or (3) contains an arbitrary sequence which is independent of the other runs. We show that in contrast to general deterministic push-down automata, it is decidable for deterministic visibly push-down automata whether there exists a synchronizing word with each of these stack constraints, i.e., the problems are in EXPTIME. Under constraint (1), the problem is even in P. For the sub-class of deterministic very visibly push-down automata, the problem is in P for all three types of constraints. We further study variants of the synchronization problem where the number of turns in the stack height behavior caused by a synchronizing word is restricted, as well as the problem of synchronizing a variant of a sequential transducer, which shows some visible behavior, by a word that synchronizes the states and produces the same output on all runs.


1 Introduction

The classical synchronization problem asks, given a deterministic finite automaton (DFA), whether there exists a synchronizing word that brings all states of the automaton to a single state. While this problem is solvable in polynomial time [cerny1964, San2005, Vol2008], many variants, such as synchronizing only a subset of states [San2005] or synchronizing a partial automaton without taking an undefined transition (called careful synchronization) [DBLP:journals/mst/Martyugin14], are PSPACE-complete. Restricting the length of a potential synchronizing word by a parameter in the input also yields a harder problem, namely the NP-complete short synchronizing word problem [Rys80, DBLP:journals/siamcomp/Eppstein90]. The field of synchronizing automata has been intensively studied over the last years, among others in an attempt to verify the famous Černý conjecture, which claims that every synchronizable DFA admits a synchronizing word of length quadratic in the number of states [cerny1964, DBLP:journals/jalc/Cerny19, DBLP:journals/eik/Starke66b, DBLP:journals/jalc/Starke19]. The currently best upper bound on this length is cubic, and only very little progress has been made, basically improving the multiplicative constant factor in front of the cubic term, see [DBLP:journals/jalc/Shitov19, DBLP:conf/stacs/Szykula18]. More information on the synchronization of DFAs and the Černý conjecture can be found in [Vol2008, beal_perrin_2016, JALC20]. In this work, we want to move away from deterministic finite automata to more general deterministic visibly push-down automata. (The term synchronization of push-down automata already occurs in the literature, e.g., in [caucal2006synchronization, DBLP:journals/mst/ArenasBL11], but there it refers to some relation between the input symbols and the stack behavior [caucal2006synchronization] or to reading different words in parallel [DBLP:journals/mst/ArenasBL11]; it should not be confused with our notion of synchronizing states.)

The synchronization problem has been generalized in the literature to other automata models including infinite-state systems with infinite branching such as weighted and timed automata [DBLP:conf/fsttcs/0001JLMS14, DBLP:phd/hal/Shirmohammadi14] or register automata [babari2016synchronizing]. For instance, register automata are infinite state systems where a state consists of a control state and register contents.

Another automaton model where the state set is enhanced with a potentially infinite memory structure, namely a stack, is the class of nested word automata (NWAs, introduced in [DBLP:journals/jacm/AlurM09]), where an input word is enhanced with a matching relation determining at which pairs of positions in a word a symbol is pushed to and popped from the stack. The class of languages accepted by NWAs coincides with the class of visibly push-down languages (VPLs) accepted by visibly push-down automata (VPDAs) and forms a proper sub-class of the deterministic context-free languages. VPDAs were first studied by Mehlhorn [DBLP:conf/icalp/Mehlhorn80] under the name input-driven pushdown automata and became quite popular more recently due to the work by Alur and Madhusudan [DBLP:conf/stoc/AlurM04], showing that VPLs share several nice properties with regular languages. For more on VPLs we refer to the survey [DBLP:journals/sigact/OkhotinS14]. In [DBLP:journals/jalc/ChistikovMS19], the synchronization problem for NWAs was studied. There, the concept of synchronization was generalized to bringing all states to one single state such that for all runs the stack is empty (or in its start configuration) after reading the synchronizing word. In this setting, the synchronization problem is solvable in polynomial time (again indicating similarities of VPLs with regular languages), while the short synchronizing word problem (with the length bound given in binary) is PSPACE-complete; the question of synchronizing from or into a subset is EXPTIME-complete. Also, matching exponential upper bounds on the length of a synchronizing word are given.

Our aim in this work is to study the synchronization problem for real-time (no ε-transitions) deterministic visibly push-down automata (DVPDAs) and several sub-classes thereof, like real-time deterministic very visibly push-down automata (DVVPDAs for short; this model was introduced in [DBLP:phd/dnb/Ludwig19]), real-time deterministic visibly counter automata (DVCAs for short; this model appeared, among others, in [DBLP:conf/stacs/BaranyLS06, DBLP:journals/corr/abs-0901-2068, DBLP:conf/fsttcs/Bollig16, DBLP:conf/mfcs/HahnKLL15, DBLP:conf/dlt/KrebsLL15, DBLP:conf/stacs/KrebsLL15]), and finite-turn variants thereof. We want to point out that, despite the equivalence of the accepted language classes, the automata models of nested word automata and visibly push-down automata still differ, and the results from [DBLP:journals/jalc/ChistikovMS19] do not immediately transfer to VPDAs. In general, the complexity of the synchronization problem can differ for different automata models accepting the same language class. For instance, in contrast to the polynomial-time solvable synchronization problem for DFAs, the generalized synchronization problem for finite automata with one ambiguous transition is PSPACE-complete, as is the problem of carefully synchronizing a DFA with one undefined transition [DBLP:conf/wia/Martyugin12]. We will not only consider the synchronization model introduced in [DBLP:journals/jalc/ChistikovMS19], where reading a synchronizing word results in an empty stack on all runs, but also a synchronization model where not only the final state on every run must be the same but also the stack contents need to be identical, as well as a model where only the states need to be synchronized and the stack contents may be arbitrary. These three models of synchronization have been introduced in [MY20], where length bounds on synchronizing words for general DPDAs have been studied dependent on the stack height. The complexity of these three concepts of synchronization for general DPDAs is considered in [dpda-crossref], where it is shown that synchronizability is undecidable for general DPDAs and deterministic counter automata (DCAs). It becomes decidable for deterministic partially blind counter automata and is PSPACE-complete for some types of finite-turn DPDAs, while it is still undecidable for other types of finite-turn DPDAs.

In contrast, we will show in the following that for DVPDAs and the considered sub-classes thereof, the synchronization problem for all three stack models, with restricted or unrestricted number of turns, is in EXPTIME and hence decidable. For DVVPDAs and DVCAs, the synchronization problems for all three stack models (with unbounded number of turns) are even in P. Like the synchronization problem for NWAs in the empty stack model considered in [DBLP:journals/jalc/ChistikovMS19], we observe that the synchronization problem for DVPDAs in the empty stack model is solvable in polynomial time, whereas synchronization of DVPDAs in the same and arbitrary stack models is at least PSPACE-hard. If the number of turns caused by a synchronizing word on each run is restricted, the synchronization problem becomes PSPACE-hard for all considered automata models for n ≥ 1 turns and is only in P for n = 0 in the empty stack model. We will further introduce variants of the synchronization problems distinguishing the same and arbitrary stack models by showing contrasting complexities in these models. For the problems considered in [dpda-crossref], these two stack models have always shared their complexity status.

Missing proof details can be found in the appendix.

2 Fixing Notations

We refer to the empty word as ε. For a finite alphabet Σ we denote with Σ* the set of all words over Σ and with Σ⁺ = Σ*∖{ε} the set of all non-empty words. For i, j ∈ ℕ we set [i..j] = {i, i+1, …, j}. For w ∈ Σ* we denote with |w| the length of w, with w[i] for 1 ≤ i ≤ |w| the i-th symbol of w, and with w[i..j] the subword w[i]w[i+1]⋯w[j] of w. We call w[1..i] a prefix and w[i..|w|] a suffix of w. If i > j, then w[i..j] = ε.

We call A = (Q, Σ, δ, q_0, F) a deterministic finite automaton (DFA for short) if Q is a finite set of states, Σ is a finite input alphabet, δ: Q × Σ → Q is a (total) transition function, q_0 ∈ Q is the initial state and F ⊆ Q is the set of final states. The transition function δ is generalized to words by δ(q, wa) = δ(δ(q, w), a) for w ∈ Σ* and a ∈ Σ. A word w ∈ Σ* is accepted by A if δ(q_0, w) ∈ F and the language accepted by A is defined by L(A) = {w ∈ Σ* | δ(q_0, w) ∈ F}. We extend δ to sets of states S ⊆ Q or to sets of letters Γ ⊆ Σ, letting δ(S, Γ) = {δ(q, a) | q ∈ S, a ∈ Γ}. Similarly, we may write δ(S, w) to define δ(q, w) for each q ∈ S. The synchronization problem for DFAs (called DFA-Sync) asks for a given DFA A whether there exists a synchronizing word for A. A word w ∈ Σ* is called a synchronizing word for a DFA A if it brings all states of the automaton to one single state, i.e., if |δ(Q, w)| = 1.
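
To make the classical pair-based approach concrete, the following sketch decides DFA-Sync and produces a (not necessarily shortest) synchronizing word. It is our own illustration: it assumes a dictionary encoding delta[q][a] of the transition function, and all function names are ours.

```python
from collections import deque

def dfa_sync(Q, Sigma, delta):
    """Greedy pair-merging algorithm for DFA-Sync (polynomial time).
    delta[q][a] is the successor of state q under letter a.
    Returns a synchronizing word (list of letters) or None."""

    def run(q, word):
        for a in word:
            q = delta[q][a]
        return q

    def merge(p, q):
        # BFS over subsets reached from {p, q}: shortest word mapping the
        # pair to a single state.
        start = frozenset((p, q))
        parent = {start: None}
        queue = deque([start])
        while queue:
            cur = queue.popleft()
            if len(cur) == 1:
                word = []
                while parent[cur] is not None:
                    cur, a = parent[cur]
                    word.append(a)
                return word[::-1]
            for a in Sigma:
                nxt = frozenset(delta[s][a] for s in cur)
                if nxt not in parent:
                    parent[nxt] = (cur, a)
                    queue.append(nxt)
        return None  # this pair can never be merged

    active, word = set(Q), []
    while len(active) > 1:
        p, q, *_ = active
        w = merge(p, q)
        if w is None:
            return None  # some pair is not mergeable => not synchronizable
        word.extend(w)
        active = {run(s, w) for s in active}
    return word

# Example: the Cerny automaton C_4; its shortest synchronizing word (baaabaaab)
# has length 9, the greedy algorithm returns some synchronizing word.
delta = {0: {'a': 1, 'b': 0}, 1: {'a': 2, 'b': 1},
         2: {'a': 3, 'b': 2}, 3: {'a': 0, 'b': 0}}
print(dfa_sync([0, 1, 2, 3], ['a', 'b'], delta))
```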

We call M = (Q, Σ, Γ, δ, q_0, ⊥, F) a deterministic push-down automaton (DPDA for short) if Q is a finite set of states; the finite sets Σ and Γ are the input and stack alphabet, respectively; δ: Q × Σ × Γ → Q × Γ* is a transition function; q_0 ∈ Q is the initial state; ⊥ ∈ Γ is the stack bottom symbol, which is only allowed as the first (lowest) symbol on the stack, i.e., if δ(q, a, γ) = (q′, υ) and υ contains ⊥, then ⊥ only occurs in υ as its first symbol and moreover γ = ⊥; and F ⊆ Q is the set of final states. We will only consider real-time push-down automata and forbid ε-transitions, as can be seen in the definition. Notice that the bottom symbol can be removed, but then the computation gets stuck.

Following [DBLP:journals/jalc/ChistikovMS19], a configuration of M is a tuple (q, υ) ∈ Q × Γ*. For a letter σ ∈ Σ and a stack content υ = γ_1⋯γ_n with γ_1, …, γ_n ∈ Γ, we write (q, υ) →_σ (q′, γ_1⋯γ_{n−1}υ′) if δ(q, σ, γ_n) = (q′, υ′). This means that the top of the stack is the right end of υ. We also denote with →* the reflexive transitive closure of the union of →_σ over all letters σ ∈ Σ. The input words on top of → are concatenated accordingly, so that (q, υ) →_w (q′, υ′) means that M reaches the configuration (q′, υ′) from (q, υ) by reading the word w. The language accepted by a DPDA M is L(M) = {w ∈ Σ* | (q_0, ⊥) →_w (q, υ) with q ∈ F, υ ∈ Γ*}. We call the sequence of configurations traversed while reading w, starting in (q_0, ⊥) and ending in (q, υ), the run induced by w. We might also call q the final state of the run.

We will discuss three different concepts of synchronizing DPDAs. For all concepts we demand that a synchronizing word w ∈ Σ* maps all states, starting with an empty stack, to the same synchronizing state, i.e., (q, ⊥) →_w (q_sync, υ_q) for all q ∈ Q and some fixed state q_sync ∈ Q. In other words, for a synchronizing word w all runs started in some state of Q end up in the same state. In addition to synchronizing the states of a DPDA, we will consider the following two conditions on the stack contents: (1) υ_q = ⊥ for all q ∈ Q, (2) υ_p = υ_q for all p, q ∈ Q. We will call (1) the empty stack model and (2) the same stack model. In the third case, we do not put any restriction on the stack contents and call this the arbitrary stack model.
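
The three models can be illustrated by simply simulating a candidate word from every state. The helper below is our own illustration (not part of the paper's formal development); it assumes a callable delta(q, a, top) that returns the successor state together with the string replacing the topmost stack symbol, and it reports the most restrictive stack model in which the word is synchronizing.

```python
def synchronization_model(Q, delta, BOT, word):
    """Classify a candidate synchronizing word for a real-time DPDA (sketch).
    delta(q, a, top) -> (q', pushed): 'pushed' replaces the topmost symbol;
    the top of the stack is the right end of the stack word."""
    final = {}
    for q in Q:
        state, stack = q, [BOT]
        for a in word:
            state, pushed = delta(state, a, stack[-1])
            stack = stack[:-1] + list(pushed)
            if not stack:          # the bottom symbol was removed:
                return None        # the computation gets stuck
        final[q] = (state, tuple(stack))
    states = {s for s, _ in final.values()}
    stacks = {st for _, st in final.values()}
    if len(states) != 1:
        return None                          # states are not synchronized
    if stacks == {(BOT,)}:
        return "empty stack model"           # every run ends with stack ⊥
    if len(stacks) == 1:
        return "same stack model"            # identical (possibly non-empty) stacks
    return "arbitrary stack model"           # states agree, stacks may differ
```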

As we are only interested in synchronizing a DPDA, we can neglect the start and final states and write M = (Q, Σ, Γ, δ, ⊥).

Starting from DPDAs we define the following sub-classes thereof:

  • A deterministic visibly push-down automaton (DVPDA) is a DPDA where the input alphabet Σ can be partitioned into Σ = Σ_call ∪ Σ_int ∪ Σ_ret such that the change of the stack height is determined by the partition of the alphabet. To be more precise, the transition function is partitioned accordingly into δ_call: Q × Σ_call → Q × (Γ∖{⊥}), which puts a symbol on the stack; δ_int: Q × Σ_int → Q, which leaves the stack unchanged; and δ_ret: Q × Σ_ret × Γ → Q, which reads and pops the topmost symbol from the stack [DBLP:conf/stoc/AlurM04]. If ⊥ is the symbol on top of the stack, then it is only read and not popped. We call letters in Σ_call call or push letters; letters in Σ_int internal letters; and letters in Σ_ret return or pop letters. The language class accepted by DVPDAs is equivalent to the class of languages accepted by deterministic nested word automata (see [DBLP:journals/jalc/ChistikovMS19]).

  • A deterministic very visibly push-down automaton (DVVPDA) is a DVPDA where not only the stack height but also the stack content is completely determined by the input word, i.e., for every letter σ ∈ Σ_call and all states p, q ∈ Q with δ_call(p, σ) = (p′, γ) and δ_call(q, σ) = (q′, γ′), it holds that γ = γ′.

  • A deterministic visibly (one) counter automaton (DVCA) is a DVPDA with |Γ| = 2, i.e., the stack alphabet contains only one symbol besides ⊥; note that every DVCA is also a DVVPDA. A minimal machine representation of these classes is sketched below.
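
The following minimal sketch fixes one possible representation of these automata classes; all field and method names are our own choices, not notation from the paper. It records the partitioned alphabet and transition functions of a DVPDA and checks the additional conditions defining DVVPDAs and DVCAs.

```python
from dataclasses import dataclass
from typing import Dict, Set, Tuple

@dataclass
class DVPDA:
    """A real-time deterministic visibly push-down automaton
    (illustrative encoding; field and method names are ours)."""
    states: Set[str]
    calls: Set[str]                                  # Sigma_call
    internals: Set[str]                              # Sigma_int
    returns: Set[str]                                # Sigma_ret
    stack_alphabet: Set[str]
    bottom: str                                      # the symbol ⊥
    d_call: Dict[Tuple[str, str], Tuple[str, str]]   # (q, a) -> (q', pushed)
    d_int: Dict[Tuple[str, str], str]                # (q, a) -> q'
    d_ret: Dict[Tuple[str, str, str], str]           # (q, a, top) -> q'

    def is_very_visibly(self) -> bool:
        """DVVPDA: the pushed stack symbol depends only on the call letter."""
        for a in self.calls:
            pushed = {x for (q, b), (_, x) in self.d_call.items() if b == a}
            if len(pushed) > 1:
                return False
        return True

    def is_counter(self) -> bool:
        """DVCA: only one stack symbol besides the bottom symbol."""
        return len(self.stack_alphabet - {self.bottom}) <= 1
```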

We are now ready to define a family of synchronization problems, the complexity of which will be our subject of study in the following sections. [Sync-DVPDA-Empty]
Given: DVPDA M = (Q, Σ, Γ, δ, ⊥).
Question: Does there exist a word w ∈ Σ* that synchronizes M in the empty stack model?
For the same stack model, we refer to the synchronization problem above as Sync-DVPDA-Same, and as Sync-DVPDA-Arb in the arbitrary stack model. Variants of these problems are defined by replacing the DVPDA in the definition above by a DVVPDA or a DVCA. If results hold for several stack models or automata models, then we summarize the problems by using set notation in the corresponding statements. For the problems Sync-DVPDA-Same and Sync-DVPDA-Arb we introduce two further refined variants, denoted by the extensions -Return and -NoReturn, where Σ_ret ≠ ∅ holds for all input DVPDAs in the former variant, whereas Σ_ret = ∅ holds in the latter. In the following, these variants reveal insights into the differences between synchronization in the same stack and arbitrary stack models, as well as connections to a concept of trace-synchronizing a sequential transducer showing some visible behavior.

We will further consider synchronization of these automata classes in a finite-turn setting. Finite-turn push-down automata were introduced in [ginsburg1966finite]. We adopt the definition from [valiant1973decision]. For a DVPDA M, an upstroke of M is a sequence of configurations induced by an input word w such that no transition decreases the stack height. Accordingly, a downstroke of M is a sequence of configurations in which no transition increases the stack height. A stroke is either an upstroke or a downstroke. A DVPDA M is an n-turn DVPDA if for all w ∈ L(M) the sequence of configurations induced by w can be split into at most n+1 strokes. In particular, for 1-turn DVPDAs each sequence of configurations induced by an accepted word consists of one upstroke followed by at most one downstroke. Two subtleties arise when translating this concept to synchronization: (a) there is no initial state, so there is no way to associate a stroke counter with a state, and (b) there is no language of accepted words that restricts the set of words on which the number of strokes should be limited. We therefore generalize the concept of finite-turn DVPDAs to finite-turn synchronization for DVPDAs as follows. [n-Turn-Sync-DVPDA-Empty]
Given: DVPDA M = (Q, Σ, Γ, δ, ⊥).
Question: Is there a synchronizing word w for M in the empty stack model such that for all states q ∈ Q, the sequence of configurations induced by w starting in (q, ⊥) consists of at most n+1 strokes? We call such a synchronizing word an n-turn synchronizing word for M. We define n-Turn-Sync-DVPDA-Same and n-Turn-Sync-DVPDA-Arb accordingly for the same stack and arbitrary stack models. Further, we extend the problem definitions to other classes of automata such as real-time DVVPDAs and DVCAs. Table 1 summarizes our results, obtained in the next sections, on the complexity status of these problems together with the synchronization problems introduced above.

class of automata | empty stack model | same stack model | arbitrary stack model
DVPDA | P | PSPACE-hard | PSPACE-hard
DVPDA-NoReturn | P | PSPACE-complete | P
DVPDA-Return | P | P | PSPACE-hard
n-Turn-Sync-DVPDA | PSPACE-hard | PSPACE-hard | PSPACE-hard
0-Turn-Sync-DVPDA | P | PSPACE-complete | PSPACE-complete
DVVPDA | P | P | P
n-Turn-Sync-DVVPDA | PSPACE-hard | PSPACE-hard | PSPACE-hard
0-Turn-Sync-DVVPDA | P | PSPACE-complete | PSPACE-complete
DVCA | P | P | P
n-Turn-Sync-DVCA | PSPACE-hard | PSPACE-hard | PSPACE-hard
1-Turn-Sync-DVCA | PSPACE-complete | PSPACE-complete | PSPACE-complete
0-Turn-Sync-DVCA | P | PSPACE-complete | PSPACE-complete
Table 1: Complexity status of the synchronization problem for different classes of deterministic real-time visibly push-down automata in different stack synchronization modes. For the n-turn synchronization variants, n takes all values not explicitly listed. All our problems are in EXPTIME.

Finally, we introduce two PSPACE-complete problems for DFAs to reduce from later. [DFA-Sync-Into-Subset (PSPACE-complete [DBLP:journals/ipl/Rystsov83])]
Given: DFA A = (Q, Σ, δ), subset S ⊆ Q.
Question: Is there a word w ∈ Σ* such that δ(Q, w) ⊆ S? [DFA-Sync-From-Subset (PSPACE-complete [San2005])]
Given: DFA A = (Q, Σ, δ) with S ⊆ Q.
Question: Is there a word w ∈ Σ* that synchronizes S, i.e., for which |δ(S, w)| = 1 holds?

3 DVPDAs – Distinguishing the Stack Models

We start with a positive result showing that the undecidability of the synchronization problem for general DPDAs in the empty stack model turns into polynomial-time solvability when we restrict ourselves to visibly DPDAs. The problems Sync-DVPDA-Empty, Sync-DVCA-Empty, and Sync-DVVPDA-Empty are decidable in polynomial time.

Proof.

We prove the claim for Sync-DVPDA-Empty, as the other automata classes are sub-classes of DVPDAs. Let M = (Q, Σ, Γ, δ, ⊥) be a DVPDA. First, observe that if Σ_ret is empty, then any synchronizing word for M in the empty stack model cannot contain any letter from Σ_call. Hence, M is basically a DFA and for DFAs the synchronization problem is in P [cerny1964, Vol2008, San2005]. From now on, assume Σ_ret ≠ ∅. We show that a pair argument similar to the one for DFAs can be applied, namely that M is synchronizable in the empty stack model if and only if every pair of states p, q ∈ Q can be synchronized in the empty stack model. The only-if direction is clear, as every synchronizing word for M also synchronizes each pair of states. For the other direction, observe that since M is a DVPDA, the stack height of each run starting in any state of M is predetermined by the sequence of input symbols. Hence, if we focus on the two runs starting in p and q and ensure that their stacks are empty after reading a word w, then the stacks of all other runs started in the remaining states in parallel are also empty after reading w. Therefore, we can successively concatenate words that synchronize some pair of currently active states in the empty stack model and end up with a word that synchronizes all states of M in the empty stack model. Further formal algorithmic details can be found in the appendix.

Formal and algorithmic proof details of Theorem 3.

In order to determine whether a pair of states p, q ∈ Q can be synchronized in the empty stack model, we build the following product automaton M_{p,q}. For all pairs of states (s, t) ∈ Q × Q with s ≠ t, M_{p,q} simulates the actions of M on s in the first component and the actions of M on t in the second component. For pairs of states (s, s), this is also the case for all transitions except for zero-tests of the stack, i.e., return transitions reading the bottom symbol ⊥; here we transition to an accepting trap state instead. Clearly, M_{p,q}, with start state (p, q), accepts all words that have a prefix wr, where w synchronizes the states p and q in M in the empty stack model and r is any return letter, which checks the empty stack condition. Further, for all pairs of states p, q, the automaton M_{p,q} is a DVPDA. As the emptiness problem for DVPDAs is in P [DBLP:conf/stoc/AlurM04], we can build and test all product automata for non-emptiness in polynomial time. ∎
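
The same pair test can also be phrased directly as a polynomial-time fixpoint computation instead of an explicit product-automaton emptiness check. The sketch below is ours (encoding: d_call[(q, a)] = (q', X), d_int[(q, a)] = q', d_ret[(q, a, X)] = q', as in the earlier sketch); it first computes which pairs of states are reachable from a given pair via well-matched words, and then checks that every pair can reach the diagonal while the stack is back at the bottom symbol. Its correctness rests on the visibly condition, which keeps both stacks at equal height.

```python
from itertools import product

def all_pairs_sync_empty(Q, calls, ints, rets, d_call, d_int, d_ret, BOT):
    """Pair test behind Sync-DVPDA-Empty (a sketch, our encoding):
    d_call[(q, a)] = (q', X) pushes X; d_int[(q, a)] = q';
    d_ret[(q, a, X)] = q' pops X, or only reads X when X == BOT."""
    pairs = list(product(Q, repeat=2))
    # summ[(p, q)]: pairs reachable from (p, q) by a well-matched word, i.e.
    # a word whose runs return to the starting stack and never go below it.
    summ = {pq: {pq} for pq in pairs}
    changed = True
    while changed:
        changed = False
        for (p, q) in pairs:
            new = set()
            for a in ints:                    # internal step, then summary
                new |= summ[(d_int[(p, a)], d_int[(q, a)])]
            for a in calls:                   # call, summary, matching return
                p1, X = d_call[(p, a)]
                q1, Y = d_call[(q, a)]
                for (p2, q2) in summ[(p1, q1)]:
                    for b in rets:
                        new |= summ[(d_ret[(p2, b, X)], d_ret[(q2, b, Y)])]
            if not new <= summ[(p, q)]:
                summ[(p, q)] |= new
                changed = True

    def succ_on_empty(pq):                    # stack is at the bottom symbol
        p, q = pq
        out = set(summ[pq])
        for b in rets:                        # return letters only read BOT here
            out.add((d_ret[(p, b, BOT)], d_ret[(q, b, BOT)]))
        return out

    # Every pair must reach the diagonal with the stack back at BOT.
    for start in pairs:
        seen, todo, ok = {start}, [start], start[0] == start[1]
        while todo and not ok:
            for nxt in succ_on_empty(todo.pop()):
                ok = ok or nxt[0] == nxt[1]
                if nxt not in seen:
                    seen.add(nxt)
                    todo.append(nxt)
        if not ok:
            return False
    return True
```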

Does this mean everything is easy and we are done? Interestingly, the picture is not that simple, as considering the same and arbitrary stack models shows. The problem Sync-DVPDA-Same is PSPACE-hard.

Proof.

We reduce from DFA-Sync-Into-Subset. Let A = (Q, Σ, δ) be a DFA and S ⊆ Q. We construct from A a DVPDA M with state set Q ∪ {q_sync}, Σ_int = Σ, Σ_call = {c}, Σ_ret = ∅, and Γ = {⊥, #, $}. The transition function of M agrees with δ on all letters in Σ_int. For the call letter c we set δ_call(q, c) = (q_sync, #) for q ∈ S and δ_call(q, c) = (q, $) for all q ∈ Q∖S. For q_sync, we set δ_call(q_sync, c) = (q_sync, #), and for σ ∈ Σ_int, δ_int(q_sync, σ) = q_sync.

Note that q_sync is a sink state and can only be reached from states in S with a transition by the call letter c. For states not in S, the input letter c pushes a $ on the stack, a symbol which cannot be pushed to the stack by any letter on a path starting in S. Hence, in order to synchronize M in the same stack model, the letter c may only, and eventually must, be read in a configuration in which only states in S (and q_sync) are active. Every word that brings M into such a configuration also synchronizes Q in A into the set S. ∎
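
One concrete way to realize the reduction just described is sketched below; the sink name q_sync and the stack symbols '#' and '$' are illustrative choices, and the dictionary encoding is ours.

```python
def reduce_into_subset_to_same_stack(Q, Sigma, delta, S):
    """Turn a DFA-Sync-Into-Subset instance (Q, Sigma, delta, S) into a DVPDA
    for Sync-DVPDA-Same, following the construction sketched above."""
    SYNC, BOT, CALL = "q_sync", "bot", "c"     # illustrative names
    d_int, d_call = {}, {}
    for q in Q:
        for a in Sigma:
            d_int[(q, a)] = delta[q][a]        # internal letters copy the DFA
        if q in S:
            d_call[(q, CALL)] = (SYNC, "#")    # from S: enter the sink, push '#'
        else:
            d_call[(q, CALL)] = (q, "$")       # outside S: push a symbol that
                                               # no run through S ever pushes
    for a in Sigma:
        d_int[(SYNC, a)] = SYNC                # q_sync is a sink state
    d_call[(SYNC, CALL)] = (SYNC, "#")
    return {
        "states": set(Q) | {SYNC},
        "calls": {CALL}, "internals": set(Sigma), "returns": set(),
        "stack_alphabet": {BOT, "#", "$"}, "bottom": BOT,
        "d_call": d_call, "d_int": d_int, "d_ret": {},
    }
```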

From the proof of Theorem 3, we can conclude the next results by observing that a DVPDA without any return letter cannot make any turn. Sync-DVPDA-Same-NoReturn and 0-Turn-Sync-DVPDA-Same are PSPACE-hard.

In contrast to the two previous results, Sync-DVPDA-Same is solvable in polynomial time if we have the promise that Σ_ret ≠ ∅. Sync-DVPDA-Same-Return is in P.

Proof.

We prove the claim by reducing to Sync-DVPDA-Empty via the identity function on instances. If a DVPDA M with Σ_ret ≠ ∅ can be synchronized in the same stack model by a synchronizing word w, then w can be extended to a word wv such that v ∈ Σ_ret* empties the stack. As M is deterministic and complete, δ_ret is defined on all states. As the stack content on all runs is the same after reading w, reading v extends all runs by the same sequence of states. Conversely, a word synchronizing a DVPDA M with Σ_ret ≠ ∅ in the empty stack model also synchronizes M in the same stack model. ∎

The arbitrary stack model requires the most interesting construction in the following proof. Sync-DVPDA-Arb is PSPACE-hard.

Proof.

We give a reduction from the PSPACE-complete problem DFA-Sync-From-Subset. Let A = (Q, Σ, δ) be a DFA with S ⊆ Q. We construct from A a DVPDA M = (Q, Σ_call ∪ Σ_int ∪ Σ_ret, Γ, δ′, ⊥), where all unions in the definition of M are disjoint. Let Σ_call = Σ, Σ_int = ∅, and Σ_ret = {r} with r ∉ Σ; the stack alphabet is Γ = Q ∪ {⊥}.

For states q ∈ S we set δ′_ret(q, r, ⊥) = q and for states q ∈ Q∖S we set δ′_ret(q, r, ⊥) = ŝ for some arbitrary but fixed ŝ ∈ S. For all states q ∈ Q we set δ′_ret(q, r, p) = p for every stack symbol p ∈ Q.

For each call letter a ∈ Σ_call we set δ′_call(q, a) = (δ(q, a), q) for q ∈ Q, i.e., a acts on the states as in A and stores the pre-image state q on the stack.

First, assume w ∈ Σ* is a word that synchronizes the set S in the DFA A. Then, it can easily be observed that rw is a synchronizing word for M in the arbitrary stack model.

Now, assume w is a synchronizing word for M in the arbitrary stack model. If w ∈ Σ*, then w is also a synchronizing word for A and especially synchronizes the set S in A. (*) Next, assume w contains some occurrences of the letter r. The action of r is designed such that it maps Q to the set S if applied to an empty stack and otherwise gradually undoes the transitions performed by letters from Σ_call. This is possible as each letter a ∈ Σ_call stores its pre-image on the stack when a is applied. Further, r acts as the identity on the states in S if applied to an empty stack. Hence, whenever the stacks are empty while reading some word, all states in S are active.

Hence, if ar is a factor of a synchronizing word uarv of M, with a ∈ Σ_call, then uv is also a synchronizing word of M. This justifies the set of rewriting rules T = {ar → ε | a ∈ Σ_call}. Now, consider a synchronizing word w of M to which none of the rewriting rules from T applies. Hence, every return letter in w occurs before the first call letter. By (*), we may write w = r^j v with j ≥ 1 and v ∈ Σ*. Then, rv is also a synchronizing word of M, because for all states q ∈ Q, M is in the same configuration after reading r^j, starting in configuration (q, ⊥), as after reading r. But as only (and all) states from S are active after reading r, v is also a word in Σ* that synchronizes the set S in A. ∎
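
The construction of this proof can be sketched as follows; the dictionary encoding, the bottom symbol name, and the default return letter are our illustrative choices (r must not occur in Sigma, and "bot" must not be a state name).

```python
def reduce_from_subset_to_arb_stack(Q, Sigma, delta, S, r="r"):
    """Turn a DFA-Sync-From-Subset instance (Q, Sigma, delta, S) into a DVPDA
    for Sync-DVPDA-Arb as described above (encoding is ours)."""
    s_fixed = next(iter(S))                    # arbitrary but fixed state of S
    BOT = "bot"
    d_call, d_ret = {}, {}
    for q in Q:
        for a in Sigma:
            d_call[(q, a)] = (delta[q][a], q)  # act as the DFA, store pre-image
        for p in Q:                            # non-bottom top symbol p:
            d_ret[(q, r, p)] = p               #   undo: pop and jump back to p
        d_ret[(q, r, BOT)] = q if q in S else s_fixed   # empty stack: Q -> S
    return {
        "states": set(Q),
        "calls": set(Sigma), "internals": set(), "returns": {r},
        "stack_alphabet": set(Q) | {BOT}, "bottom": BOT,
        "d_call": d_call, "d_int": {}, "d_ret": d_ret,
    }
```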

Observe that in the construction above, Σ_ret ≠ ∅ for all input DFAs. The next corollary follows from Theorem 3 and should be observed together with the next theorem, in contrast to Theorem 3 and Corollary 3. Sync-DVPDA-Arb-Return is PSPACE-hard. Sync-DVPDA-Arb-NoReturn ≡ DFA-Sync.

Proof.

Let M be a DVPDA with an empty set of return symbols. As there is no return symbol, the transitions of M cannot depend on the stack content. Hence, we can redistribute the symbols in Σ_call into Σ_int and obtain a DFA. The converse is trivial. ∎

If we move from deterministic visibly push-down automata to even more restricted classes, like deterministic very visibly push-down automata or deterministic visibly counter automata, the three stack models no longer yield synchronization problems with different complexities. Instead, all three models are equivalent, as stated next. Hence, their synchronization problems can be solved by the pair argument presented in Theorem 3 in polynomial time. Sync-DVCA-Empty ≡ Sync-DVCA-Same ≡ Sync-DVCA-Arb.
Sync-DVVPDA-Empty ≡ Sync-DVVPDA-Same ≡ Sync-DVVPDA-Arb.

Proof.

First, note that every DVCA is also a DVVPDA. If Σ_ret ≠ ∅ for a DVVPDA M, then we can empty the stack after synchronizing the state set, as the very visibly condition ensures that the contents of the stacks on all runs coincide. As the automaton is deterministic, all transitions for letters in Σ_ret are defined on each state. As the stack content on all runs coincides in every step, the arbitrary stack model is identical to the same stack model and hence equivalent to the empty stack model. If Σ_ret = ∅, then we can reassign Σ_call to Σ_int in order to reduce the same stack and arbitrary stack variants to the empty stack variant, as transitions cannot depend on the stack content, which is again the same on all runs due to the very visibly condition. ∎

4 Restricting the Number of Turns Makes Synchronization Harder

We are now restricting the number of turns a synchronizing word may cause on any run. Despite the fact that we are hereby restricting the considered model even further, the synchronization problem becomes even harder, in contrast to the previous section. For every fixed n with n ≥ 1, the problems n-Turn-Sync-DVCA-Same and n-Turn-Sync-DVCA-Arb are PSPACE-hard.

Proof.

We give a reduction from the PSPACE-complete problem DFA-Sync-Into-Subset. Let A = (Q, Σ, δ) be a DFA with S ⊆ Q. We construct from A a DVCA M, where all unions are disjoint. The state set of M extends Q by a chain of fresh states p_1, p_2, …, p_n and a fresh sink state p_sync; the input alphabet of M consists of the letters of Σ, acting as internal letters, together with additional call, return, and internal letters. For all internal letters from Σ, the transition function of M agrees with δ on all states in Q, and every letter of Σ loops in the fresh states.

The additional letters implement a gadget on the fresh states with the following properties: the sink p_sync can only be reached by traversing the chain p_1, p_2, …, p_n in this order; consecutive chain states can only be passed with an empty and a non-empty counter in alternation; and the entry into the chain is arranged such that runs entering it from a state outside of S either pick up a counter value greater than one or are forced to make an additional turn. All other transitions (on internal letters) act as the identity.

Observe that the state p_sync must be the synchronizing state of M, since it is a sink state. In order to reach p_sync from any state in Q, the automaton must pass through all the states p_i for 1 ≤ i ≤ n by construction. Since we can only transition from a state p_i to p_{i+1} with an empty or a non-empty stack in alternation, passing the gadget forces M to make n turns. For even n, the last upstroke is enforced by passing from p_n to p_sync by explicitly increasing the stack. As M is only allowed to make n turns while reading the n-turn synchronizing word, this implies that each of the states p_i may be visited at most once, as branching back into the chain by a transition that maps back to p_1 would force M to go through all states p_i again, which exceeds the allowed number of strokes. Note that only counter values of at most one are allowed in any run which is currently in a state in Q, as otherwise the run will necessarily branch back into the chain later on. (In some states, such as p_i for even i, it is simply impossible to have a higher counter value.) In particular, this ensures that each n-turn synchronizing word has first synchronized Q into the set S before the gadget is entered, as otherwise p_1 is reached with a counter value greater than 1, or M has already made a turn in Q and hence cannot reach p_sync anymore.

In the construction above, for odd n each run enters the synchronizing state p_sync with an empty stack (*). For even n, each run enters the synchronizing state with a counter value of 1. The visibly condition, or more precisely the very visibly condition, as we are considering DVCAs, tells us that at each point in time while reading a synchronizing word, the stack contents of all runs are identical. In particular, this is the case at the moment when the last run enters the synchronizing state and hence, any n-turn synchronizing word for M is a synchronizing word in both the arbitrary and the same stack models. ∎

By observing that, in the empty stack model, allowing n turns for even n is as good as allowing only n−1 turns, essentially (*) from the previous proof yields the next result. For every fixed n with n ≥ 1, the problem n-Turn-Sync-DVCA-Empty is PSPACE-hard.

Proof of Cor. 4.

Since we need to synchronize with an empty stack, for even n the last upstroke cannot be performed. Hence, for even n, every DVCA M can be synchronized by an n-turn synchronizing word if and only if M can be synchronized by an (n−1)-turn synchronizing word. As for odd n, in the construction above, every n-turn synchronizing word synchronizes M in the empty stack model, the claim follows from the proof of Theorem 4. ∎

For every fixed n with n ≥ 1, the problems n-Turn-Sync-DVPDA and n-Turn-Sync-DVVPDA in the empty, same, and arbitrary stack models are PSPACE-hard. 0-Turn-Sync-DVPDA-Empty ≡ DFA-Sync.

Proof.

The visibly condition and the fact that we can only synchronize with an empty stack mean that we cannot read any letter from Σ_call; hence we cannot use the stack at all. Delete (a) all transitions with a symbol from Σ_call and (b) all transitions with a symbol from Σ_ret and a non-empty stack. Then, assigning the elements of Σ_ret to Σ_int gives us a DFA. ∎
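
The direction from 0-Turn-Sync-DVPDA-Empty to DFA-Sync can be sketched as follows, using the dictionary encoding from the earlier sketches; the converse direction simply views a DFA as a DVPDA with only internal letters.

```python
def zero_turn_empty_to_dfa(dvpda):
    """Project a DVPDA (encoded as in the earlier sketches) to the DFA used in
    the equivalence 0-Turn-Sync-DVPDA-Empty = DFA-Sync (our encoding)."""
    BOT = dvpda["bottom"]
    Sigma = dvpda["internals"] | dvpda["returns"]   # call letters are unusable
    delta = {}
    for q in dvpda["states"]:
        delta[q] = {}
        for a in dvpda["internals"]:
            delta[q][a] = dvpda["d_int"][(q, a)]
        for a in dvpda["returns"]:
            # only the transition on the bottom symbol survives: the stack
            # never grows, so no other symbol can ever be on top
            delta[q][a] = dvpda["d_ret"][(q, a, BOT)]
    return dvpda["states"], Sigma, delta
```

A word synchronizes the resulting DFA if and only if it is a 0-turn synchronizing word for the original DVPDA in the empty stack model.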

The next result is obtained by a reduction from DFA-Sync-From-Subset. The problems 0-Turn-Sync-DVCA-{Same, Arb} are PSPACE-hard.

Proof of Theorem 4.

We give a reduction from the PSPACE-complete problem DFA-Sync-From-Subset. Let A = (Q, Σ, δ) be a DFA with S ⊆ Q. We construct from A a DVCA M. We set Σ_call = Σ, Σ_int = ∅, and Σ_ret = {r} with r ∉ Σ. For all q ∈ Q and a ∈ Σ, we set δ_call(q, a) = (δ(q, a), X), where X is the single counter symbol. For all states q ∈ Q∖S, we set δ_ret(q, r, ⊥) = ŝ for some arbitrary but fixed state ŝ ∈ S. All other transitions act as the identity.

Note that the 0-turn condition only allows us to read the letter r before any letter in Σ_call has been read, as afterwards r would decrease the stack after it has been increased. Therefore, every synchronizing word for M in the same and arbitrary stack models also synchronizes S in A: it either synchronizes the whole set Q without using any r-transition, or it brings M to exactly the set S with a leading letter r and then continues to synchronize S. ∎

The problems 0-Turn-Sync-DVVPDA-{Same, Arb} and 0-Turn-Sync-DVPDA-{Same, Arb} are PSPACE-hard.

5 (Non-)Tight Upper Bounds

In this section we prove that at least all considered problems are decidable (in contrast to non-visibly DPDAs and DCAs, see [dpda-crossref]) by giving exponential time upper bounds. We also give some tight PSPACE upper bounds for some of the PSPACE-hard problems discussed in the previous section, but for other problems discussed previously a gap between PSPACE-hardness and membership in EXPTIME remains. All problems listed in Table 1 are in EXPTIME.

Proof.

We show the claim explicitly for Sync-DVPDA-Same, Sync-DVPDA-Arb, n-Turn-Sync-DVPDA-Empty, n-Turn-Sync-DVPDA-Same, and n-Turn-Sync-DVPDA-Arb. The other results follow by inclusion of automata classes.

Let M = (Q, Σ, Γ, δ, ⊥) be a DVPDA. We construct from M the |Q|-fold product DVPDA M^|Q| with state set Q^|Q|, consisting of |Q|-tuples of states, and alphabet Σ. Since M is a DVPDA, for every word w ∈ Σ* the stack heights of runs starting in different states of M are equal at every position of w. Hence, we can multiply the stacks to obtain the stack alphabet Γ^|Q| for M^|Q|. For the transition function δ^|Q| (split up into δ^|Q|_call, δ^|Q|_int, δ^|Q|_ret) of M^|Q|, we simulate δ independently on every state of an |Q|-tuple, i.e., for states q_1, q_2, …, q_{|Q|} ∈ Q, stack symbols γ_1, γ_2, …, γ_{|Q|} ∈ Γ, and letters σ_c ∈ Σ_call, σ_i ∈ Σ_int, σ_r ∈ Σ_ret, we set

  • δ^|Q|_call((q_1, …, q_{|Q|}), σ_c) = ((q′_1, …, q′_{|Q|}), (γ′_1, …, γ′_{|Q|})) if δ_call(q_j, σ_c) = (q′_j, γ′_j) for 1 ≤ j ≤ |Q|;

  • δ^|Q|_int((q_1, …, q_{|Q|}), σ_i) = (δ_int(q_1, σ_i), …, δ_int(q_{|Q|}, σ_i));

  • δ^|Q|_ret((q_1, …, q_{|Q|}), σ_r, (γ_1, …, γ_{|Q|})) = (δ_ret(q_1, σ_r, γ_1), …, δ_ret(q_{|Q|}, σ_r, γ_{|Q|})).

The bottom symbol of the stack of M^|Q| is the |Q|-tuple (⊥, …, ⊥). Let q_1, q_2, …, q_{|Q|} be an enumeration of the states in Q and set (q_1, q_2, …, q_{|Q|}) as the start state of M^|Q|.

For Sync-DVPDA-Arb, set the diagonal states {(q, q, …, q) | q ∈ Q} as the final states of M^|Q|. Clearly, for Sync-DVPDA-Arb, M^|Q| is a DVPDA and the words accepted by M^|Q| are precisely the synchronizing words for M in the arbitrary stack model. As the emptiness problem can be decided for visibly push-down automata in time polynomial in the size of the automaton [DBLP:conf/stoc/AlurM04], the claim follows, observing that M^|Q| is only exponentially larger than M.
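
For the arbitrary stack model, the |Q|-fold product can be written down directly. The sketch below uses our dictionary encoding and is deliberately explicit, and therefore of exponential size, so it is only feasible for very small automata; it builds the product's transition tables and marks the diagonal tuples as accepting.

```python
from itertools import product

def q_fold_product_arb(dvpda):
    """Build the |Q|-fold product DVPDA whose accepted words are exactly the
    synchronizing words in the arbitrary stack model (exponential-size sketch,
    encoding as in the earlier sketches; stack symbols become |Q|-tuples)."""
    Q = sorted(dvpda["states"])
    tuples = list(product(Q, repeat=len(Q)))
    d_call, d_int, d_ret = {}, {}, {}
    for t in tuples:
        for a in dvpda["internals"]:
            d_int[(t, a)] = tuple(dvpda["d_int"][(q, a)] for q in t)
        for a in dvpda["calls"]:
            images = [dvpda["d_call"][(q, a)] for q in t]
            d_call[(t, a)] = (tuple(q2 for q2, _ in images),
                              tuple(x for _, x in images))  # tuple of pushes
        for a in dvpda["returns"]:
            for g in product(dvpda["stack_alphabet"], repeat=len(Q)):
                d_ret[(t, a, g)] = tuple(dvpda["d_ret"][(q, a, x)]
                                         for q, x in zip(t, g))
    start = tuple(Q)                                 # one component per state
    bottom = tuple(dvpda["bottom"] for _ in Q)
    accepting = {t for t in tuples if len(set(t)) == 1}   # the diagonal
    return dict(states=set(tuples), start=start, bottom=bottom,
                accepting=accepting, d_call=d_call, d_int=d_int, d_ret=d_ret,
                calls=dvpda["calls"], internals=dvpda["internals"],
                returns=dvpda["returns"])
```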

For Sync-DVPDA-Same, we produce a DVPDA M′ by enhancing the automaton M^|Q| with three additional states q_same, q_trap, and q_acc and an additional new return letter r, and we set q_acc as the single accepting state of M′, while the start state coincides with the one of M^|Q|. For states (q_1, …, q_{|Q|}) we set δ′_ret((q_1, …, q_{|Q|}), r, (γ_1, …, γ_{|Q|})) = q_same if q_1 = q_2 = ⋯ = q_{|Q|} and γ_1 = γ_2 = ⋯ = γ_{|Q|}, for all stack symbols other than the bottom symbol. We set δ′_ret((q_1, …, q_{|Q|}), r, (⊥, …, ⊥)) = q_acc if q_1 = q_2 = ⋯ = q_{|Q|}. In all other cases, we map with r to q_trap. We let the transitions for q_trap be defined such that q_trap is a non-accepting trap state for all alphabet symbols. For q_same we set δ′_ret(q_same, r, (γ_1, …, γ_{|Q|})) = q_same if γ_1 = γ_2 = ⋯ = γ_{|Q|}. Further, we set δ′_ret(q_same, r, (⊥, …, ⊥)) = q_acc and map with r to q_trap in all other cases. The state q_same also maps to q_trap with all input symbols other than r. We let the transitions for q_acc be defined such that q_acc is an accepting trap state for all alphabet symbols.

Clearly, for Sync-DVPDA-Same, M′ is a DVPDA and the words accepted by M′ are precisely the synchronizing words for M in the same stack model, potentially prolonged by a sequence of r's, as the single accepting state q_acc can only be reached from a state in which the states of M are synchronized and the stack content is identical for each run (which is checked in the state q_same). As the size of M′ is exponential in the size of M, we get the claimed result as in the previous case.

For the n-Turn synchronization problems, we have to modify the previous construction by adding a stroke counter, similarly to the proof of Theorem 4 (see Appendix 5).

Further proof details for the n-Turn cases in Theorem 5.

For the problems n-Turn-Sync-DVPDA in the empty, same, and arbitrary stack models, we enhance each |Q|-tuple with an additional index i ∈ {1, …, n+1} that counts the strokes performed so far, i.e., the basic set of states is now Q^|Q| × {1, …, n+1}. We further add, for all three models, the non-accepting trap state q_trap to the set of states. For each |Q|-tuple, we implement the transition function of M^|Q| for internal letters in Σ_int as before, keeping the value of the index i in each transition. For call letters in Σ_call, we realize δ^|Q|_call as before for state tuples with index i ≤ n by simulation on the individual states, setting the index to i+1 in every image if i is even, and keeping the value of i if i is odd. For tuples with index n+1, we proceed as before for smaller indices if n+1 is odd, while for even n+1 we map with every call letter to the state q_trap. For the return letters in Σ_ret, we realize δ^|Q|_ret for tuples of states and the bottom-of-stack symbol as before by simulating δ_ret on the individual states and keeping the value of i. For all other stack symbols, we realize δ^|Q|_ret as before for state tuples with index i ≤ n by simulation on the individual states, keeping the value of i if i is even, and setting the index to i+1 in every image if i is odd. For tuples with index n+1, we proceed as before if n+1 is even, while for odd n+1 we map with each return letter to q_trap for all stack symbols other than the bottom-of-stack symbol. In all three models we set ((q_1, q_2, …, q_{|Q|}), 1) as the start state, with q_1, q_2, …, q_{|Q|} being an enumeration of the states in Q.

For n-Turn-Sync-DVPDA-Arb, we set {((q, q, …, q), i) | q ∈ Q, 1 ≤ i ≤ n+1} as the set of final states.

For n-Turn-Sync-DVPDA-Empty, we add a further accepting trap state q_acc, set it as the single accepting state, and add a new return letter r with which we map to q_acc for diagonal states ((q, …, q), i) with the bottom-of-stack symbol on top of the stack, and to q_trap for all other stack symbols or states.

For n-Turn-Sync-DVPDA-Same, we add the two states q_same and q_acc to the set of states and set q_acc as the single accepting state. Again, we add a new return letter r. For diagonal states ((q, …, q), i) and a symbol (γ_1, …, γ_{|Q|}) on top of the stack which is not the bottom-of-stack symbol, we map with r to q_same if all entries of the stack-symbol tuple are identical. If instead the bottom-of-stack symbol is on top of the stack, we map such states with r directly to q_acc. For all other states and stack symbols, r maps to q_trap. For q_same, we stay in q_same with the letter r if we see a symbol (γ_1, …, γ_{|Q|}) with γ_1 = ⋯ = γ_{|Q|} on the stack, and we map with r to q_acc if we see the bottom-of-stack symbol. For all other stack symbols, r maps to q_trap. Also, all input letters other than r map q_same to q_trap. For q_acc we define all transitions such that q_acc is an accepting trap state.

In all three cases, the constructed automaton is a DVPDA that accepts precisely the n-turn synchronizing words for M (potentially prolonged by a sequence of r's) in the respective stack model. As the constructed automaton is of size exponential in the size of M in all three cases, we can decide whether it accepts at least one word in time exponential in the description of M. ∎

It cannot be expected to show PSPACE-membership of synchronization problems concerning DVPDAs using a |Q|-fold product DVPDA, as the resulting automaton is exponentially large in the size of the DVPDA that is to be synchronized and the emptiness problem for DVPDAs is P-complete [DBLP:journals/sigact/OkhotinS14] (in contrast to DFA emptiness, which is in NLOGSPACE). Rather, one would need a separate membership proof. We conjecture that a PSPACE-membership proof similar to the one for the short synchronizing word problem presented in [DBLP:journals/jalc/ChistikovMS19] can be obtained if exponential upper bounds on the length of synchronizing words for DVPDAs in the respective models can be shown. The problems 0-Turn-Sync-{DVPDA, DVVPDA, DVCA}-Same are in PSPACE. Let M = (Q, Σ, Γ, δ, ⊥) be a DVPDA. For the same stack model, the 0-turn condition forbids us to put different symbols on the stack in simultaneous runs at any time while reading a synchronizing word, as we cannot exchange symbols on the stack with visibly PDAs. Note that this is a dynamic runtime behavior and does not imply that M is necessarily very visibly. Further, the 0-turn and visibility conditions enforce that at each step the next transition does not depend on the stack content if the symbol on top of the stack is not ⊥. Hence, we can construct from M a |Q|-fold product DFA (with a state set exponential in the size of M) in a similar way as in the proof of Theorem 5 by neglecting the stack, as nothing is ever popped from the stack. Details of the construction can be found in the appendix. As the emptiness problem for DFAs can be solved in NLOGSPACE, the claim follows with Savitch's famous theorem stating that PSPACE = NPSPACE [DBLP:journals/jcss/Savitch70]. (Here, a smaller powerset construction would also work, but for simplicity we stick with the introduced |Q|-fold product construction.)

Proof of Theorem 5.

Let M = (Q, Σ, Γ, δ, ⊥) be a DVPDA. For the same stack model, the 0-turn condition forbids us to put different symbols on the stack in simultaneous runs at any time while reading a synchronizing word, as we cannot exchange symbols on the stack with visibly PDAs. Note that this is a dynamic runtime behavior and does not imply that M is necessarily very visibly. Further, the 0-turn and visibility conditions enforce that at each step the next transition does not depend on the stack content if the symbol on top of the stack is not ⊥. We construct from M a partial |Q|-fold product DFA with state set Q^|Q| × {0, 1}, consisting of |Q|-tuples of states with an additional bit of information b which indicates whether the stack is still empty, and alphabet Σ. For the transition function of the product DFA, we simulate δ for a state ((q_1, …, q_{|Q|}), b) and a letter σ ∈ Σ (restricting the image to the first component of δ_call for call letters) on the individual states q_1, …, q_{|Q|} of the tuple if (1) σ ∈ Σ_ret and b indicates an empty stack, (2) σ ∈ Σ_int, or (3) σ ∈ Σ_call and for all 1 ≤ j, k ≤ |Q| it holds that the stack symbols pushed by δ_call(q_j, σ) and δ_call(q_k, σ) coincide. In cases (1) and (2), we keep the value of b in the transition, and in case (3), we ensure that b indicates a non-empty stack in the image of the transition. The size of the state graph of the product DFA is bounded by 2·|Q|^|Q|. Clearly, the DVPDA M can be synchronized by a 0-turn synchronizing word in the same stack model if and only if there is a path in the state graph of the product DFA from the state consisting of an enumeration of all states in Q with the empty-stack bit set to some state whose tuple entries all coincide. These reachability tests can be performed in nondeterministic polynomial space and hence in PSPACE [DBLP:journals/jcss/Savitch70]. The claim for the other problems follows by inclusion of automata classes. ∎

Sync-DVPDA-Same-NoReturn is in PSPACE.

Proof of Cor. 5.

Let M be a DVPDA with Σ_ret = ∅. As we have no return letters, any synchronizing word for M is also a 0-turn synchronizing word and hence the claim follows with Theorem 5. ∎

The problems 0-Turn-Sync-{DVPDA, DVVPDA, DVCA}-Arb and 1-Turn-Sync-DVCA-{Empty, Same, Arb} are in PSPACE.

Proof.

The claim follows from [dpda-crossref, Theorem 16 & 17] by inclusion of automata classes. ∎

6 Sequential Transducers

In [dpda-crossref], the concept of trace-synchronizing a sequential transducer has been introduced. We want to extend this concept to sequential transducers showing some kind of visible behavior regarding their output, inspired by the predetermined stack height behavior of DVPDAs. We call T = (Q, Σ, Δ, δ, q_0, F) a sequential transducer (ST for short) if Q is a finite set of states, Σ is an input alphabet, Δ is an output alphabet, q_0 ∈ Q is the start state, δ: Q × Σ → Q × Δ* is a total transition function, and F ⊆ Q collects the final states. We generalize δ from input letters to words by concatenating the produced outputs. T is called a visibly sequential transducer (VST for short) [or a very visibly sequential transducer (VVST for short)] if for each σ ∈ Σ and for all states p, q ∈ Q, δ(p, σ) = (p′, u) and δ(q, σ) = (q′, v) implies that |u| = |v| [or that u = v, respectively]. A VVST is thereby computing the same homomorphism h: Σ* → Δ*, regardless of which states are chosen as start and final states (*). Hence, if A is the underlying DFA (ignoring any outputs), then h(L(A)) describes the language of all possible outputs of T. By Nivat's theorem [Niv68], a language family is a full trio iff it is closed under VVST mappings and inverse homomorphisms. Our considerations also show that a language family is a full trio iff it is closed under VVST and inverse VVST mappings.
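
The visibly and very visibly conditions can be checked directly from the transition table of a transducer; the small sketch below uses our own encoding trans[(q, a)] = (q', output_word) and is only an illustration.

```python
def classify_transducer(states, sigma, trans):
    """Return 'VVST', 'VST', or 'ST' for a sequential transducer given by
    trans[(q, a)] = (q_next, output_word)  (encoding is ours)."""
    very_visibly, visibly = True, True
    for a in sigma:
        outputs = {trans[(q, a)][1] for q in states}
        if len(outputs) > 1:                  # outputs differ between states
            very_visibly = False
        if len({len(u) for u in outputs}) > 1:  # even the lengths differ
            visibly = False
    if very_visibly:
        return "VVST"
    return "VST" if visibly else "ST"
```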

We say that a word w ∈ Σ* trace-synchronizes a sequential transducer T if, for all states p, q ∈ Q, δ(p, w) = δ(q, w), i.e., a synchronizing state is reached and all runs produce identical output. Notice that from the viewpoint of trace-synchronization, we do not assume that a VVST has only one state.

[Trace-Sync-Transducer]
Given: Sequential transducer T = (Q, Σ, Δ, δ).
Question: Does there exist a word that trace-synchronizes T?

Remarks on sequential transducers. The definitions in the literature are not uniform for finite automata with output. We follow here the name used by Berstel in [DBLP:books/lib/Berstel79]; Ginsburg [Ginsburg66] called Berstel's sequential transducers generalized sequential machines, but used the term sequential transducer for the nondeterministic counterpart. For non-deterministic transducers which allow reading multiple letters at once, the concept of fixing the ratio between the length of the produced output and the length of the input was already studied in [DBLP:journals/tcs/Carton07] and was even mentioned in [sakarovitch2003elements]. There, the ratio is fixed for every transition independently of the input letter(s), and a transducer admitting such a fixed ratio is called synchronous with this ratio. The term 'synchronization' again appears here, but it refers to finding such a synchronous transducer for a given rational relation.

We define Trace-Sync-VST and Trace-Sync-VVST by considering a VST, respectively a VVST, instead. In contrast to the undecidability of Trace-Sync-Transducer [dpda-crossref], we get the following results for trace-synchronizing VSTs and VVSTs from previous results. Trace-Sync-VST is PSPACE-complete.

Proof.

First, observe that there is a straightforward reduction from the problem Sync-DVPDA-Same-NoReturn to Trace-Sync-VST, as the input DVPDAs of the problem Sync-DVPDA-Same-NoReturn have no return letters and hence the stack is basically a write-only tape. Further, as the remaining alphabet is partitioned into letters in Σ_call, which write precisely one symbol on the stack, and letters in Σ_int, which write nothing on the stack, the visibly condition is satisfied when interpreting the DVPDA with output alphabet Γ as a VST.

There is also a straightforward reduction from Trace-Sync-VST to Sync-DVPDA-Same-NoReturn as follows. For a VST T = (Q, Σ, Δ, δ) we construct a DVPDA M with Σ_ret = ∅ by introducing for each σ ∈ Σ a new alphabet Γ_σ consisting of the output words that can be produced by reading σ. Observe that Γ_σ is either {ε} or contains only words of the same length. By setting Σ_int = {σ ∈ Σ | Γ_σ = {ε}}, Σ_call = Σ∖Σ_int, Γ = {⊥} ∪ ⋃_{σ ∈ Σ_call} Γ_σ, and interpreting the output sequence produced by σ as a single stack symbol in Γ_σ, we obtain a DVPDA whose synchronizing words in the same stack model are precisely the words that trace-synchronize T. ∎

Yet, by Observation (*), we inherit from DFA-Sync the following algorithmic result. Trace-Sync-VVST is in P.

Proof of Theorem 6.

For each VVST T and every word w ∈ Σ*, the same output is already produced in all runs on w, for every choice of the starting state. Hence, we can ignore the output and test for trace-synchronization with the polynomial-time pair algorithm for DFAs [San2005]. ∎
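
Following this observation, trace-synchronizability of a VVST can be tested by projecting away the outputs and applying the pair criterion for DFAs. The sketch below uses the same encoding trans[(q, a)] = (q', output_word) as above and is, again, only an illustration.

```python
from itertools import combinations

def trace_sync_vvst(states, sigma, trans):
    """Trace-synchronizability test for a VVST: by (*) the outputs can be
    ignored, so it suffices to check the DFA pair criterion, namely that
    every pair of states can be mapped to a single state."""
    delta = {q: {a: trans[(q, a)][0] for a in sigma} for q in states}

    def mergeable(p, q):
        seen, todo = {frozenset((p, q))}, [frozenset((p, q))]
        while todo:
            cur = todo.pop()
            if len(cur) == 1:
                return True
            for a in sigma:
                nxt = frozenset(delta[s][a] for s in cur)
                if nxt not in seen:
                    seen.add(nxt)
                    todo.append(nxt)
        return False

    return all(mergeable(p, q) for p, q in combinations(states, 2))
```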

7 Discussion

Our results concerning DVPDAs and sub-classes thereof are summarized in Table 1. While all problems listed in the table are contained in EXPTIME, the table lists several problems whose known complexity status still exhibits a gap between a PSPACE-hardness lower bound and an EXPTIME upper bound. Presumably, their precise complexity status is closely related to upper bounds on the length of synchronizing words, which we want to investigate in the near future. One of the questions that could be settled in this work is whether there is a difference between the complexity of synchronization in the same stack model and synchronization in the arbitrary stack model. While for general DPDAs, DCAs, and sub-classes thereof (see [dpda-crossref]) these two models admitted synchronization problems of the same complexity, here we observed that these models can differ significantly. While the focus of this work is on determining the complexity status of synchronizability for different models of automata, an obvious question for future research is the complexity status of closely related and well understood questions in the realm of DFAs, such as the problems of shortest synchronizing word, subset synchronization, synchronization into a subset, and careful synchronization.

Here is one subtlety that comes with shortest synchronizing words: While for finding synchronizing words of length at most k for DFAs it does not matter whether the bound k is given in unary or in binary, due to the known cubic upper bound on the length of shortest synchronizing words, this makes a difference in other models where such polynomial length bounds are unknown. More precisely, for instance for DVPDAs, it is rather obvious that with a unary length bound k the problem becomes NP-complete, while the status is unclear for binary length bounds. As there is no general polynomial upper bound on the length of shortest synchronizing words for DVPDAs, they might be of exponential length. Hence, we do not get membership in PSPACE easily, not even for synchronization models concerning DVPDAs for which general synchronizability is solvable in P, as it might be necessary to store the whole word on the stack in order to test its synchronization effect.

References