# Finding Short Synchronizing Words for Prefix Codes

We study the problems of finding a shortest synchronizing word and its length for a given prefix code. This is done in two different settings: when the code is defined by an arbitrary decoder recognizing its star and when the code is defined by its literal decoder (whose size is polynomially equivalent to the total length of all words in the code). For the first case, for every ε > 0 we prove n^{1-ε}-inapproximability for recognizable binary maximal prefix codes, Θ(n)-inapproximability for finite binary maximal prefix codes and n^{1/2-ε}-inapproximability for finite binary prefix codes. By c-inapproximability here we mean the non-existence of a polynomial time c-approximation algorithm under the assumption P ≠ NP, and by n the number of states of the decoder in the input. For the second case, we propose approximation and exact algorithms and conjecture that for finite maximal prefix codes the problem can be solved in polynomial time. We also study the related problems of finding a shortest mortal and a shortest avoiding word.


## 1 Introduction

Prefix codes are a simple and powerful class of variable-length codes that are widely used in information compression and transmission. A famous example of prefix codes is Huffman codes [15]. In general, variable-length codes are not resistant to errors, since a single deletion, insertion or change of a symbol can desynchronize the decoder, causing incorrect decoding of the whole remaining part of the message. However, in a large class of codes called synchronizing codes, resynchronization of the decoder is possible in such situations. It is known that almost all maximal finite prefix codes are synchronizing [12]. Synchronization of finite prefix codes has been investigated a lot [5, 7, 8, 10, 21, 22]; see also the book [6] and references therein. For efficiency reasons, it is important to resynchronize the decoder with words that are as short as possible, to decrease the synchronization time. However, despite the interest in synchronizing prefix codes, the computational complexity of finding short synchronizing words for them has not been studied so far. In this paper, we provide a systematic investigation of this topic.
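The desynchronization phenomenon is easy to demonstrate. Below is a minimal sketch (our own toy example, not taken from the paper) of literal prefix-code decoding in Python, showing how flipping a single bit garbles the symbols decoded afterwards:

```python
def decode(code, bits):
    """Literal decoding: scan the bit string and emit a symbol each time
    the bits read since the last emission form a codeword."""
    out, cur = [], ''
    for b in bits:
        cur += b
        if cur in code:
            out.append(code[cur])
            cur = ''
    return ''.join(out)

# A toy Huffman-style maximal prefix code (hypothetical example).
code = {'0': 'a', '10': 'b', '11': 'c'}
msg = '10' + '0' + '11' + '0' + '10'   # encodes "bacab"
corrupted = '0' + msg[1:]              # the same stream with its first bit flipped
```

Here `decode(code, msg)` returns "bacab", while `decode(code, corrupted)` starts emitting wrong symbols; since this particular code happens to be synchronizing, the decoder realigns with the codeword boundaries after a few symbols.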

Each maximal prefix code recognizable by a finite automaton can be represented by an automaton decoding the star of this code. For a finite code, this automaton can be exponentially smaller than the representation of the code by listing all its words (consider, for example, the code of all words of some fixed length). This can, of course, happen even if the code is synchronizing. An important example is a code whose minimized decoder is the famous Wielandt automaton (see e.g. [1]), while its literal automaton is much larger; see Figure 1 for an example. In different applications, either the first or the second way of representing the code can be useful. In some cases large codes having a short description may be represented by a minimized decoder, while in other applications the code can be described by simply providing the list of all codewords. The number of states of the literal decoder is equal to the number of different prefixes of the codewords, and thus the representation of a prefix code by listing all its codewords and its representation by the literal automaton are polynomially equivalent. We study the complexity of problems for both arbitrary and literal decoders of finite prefix codes.

In this paper we study the existence of approximation algorithms for the problem Short Sync Word of finding a shortest synchronizing word in several classes of deterministic automata decoding prefix codes. In Section 2 we give the main definitions and survey existing results on the computational complexity of Short Sync Word. In Section 3 we provide a strong inapproximability result for this problem in the class of strongly connected automata. Section 4 is devoted to the same problem in acyclic automata, which are then used in Section 5 to show logarithmic inapproximability of Short Sync Word in the class of Huffman decoders. In Section 6 we provide a much stronger inapproximability result for partial Huffman decoders. In Section 7 we provide several algorithms for literal Huffman decoders and conjecture that Short Sync Word can be solved in polynomial time in this class. Finally, in Section 8 we apply the developed techniques to the problems of finding shortest mortal and avoiding words.

## 2 Main Definitions and Related Results

A partial deterministic finite automaton (which we simply call a partial automaton in this paper) is a triple A = (Q, Σ, δ), where Q is a set of states, Σ is a finite alphabet and δ : Q × Σ → Q is a (possibly incomplete) transition function. The function δ can be canonically extended to a function Q × Σ* → Q by defining δ(q, wa) = δ(δ(q, w), a) for w ∈ Σ*, a ∈ Σ. If δ is a complete function, the automaton is called complete (in this case we call it just an automaton). An automaton is called strongly connected if for every ordered pair (p, q) of states there is a word mapping p to q.

A state in a partial automaton is called a sink if each letter either maps this state to itself or is undefined on it. A simple cycle in a partial automaton is a sequence q_1, …, q_k of its states such that all the states in the sequence are different and there exist letters a_1, …, a_k such that δ(q_i, a_i) = q_{i+1} for 1 ≤ i ≤ k − 1 and δ(q_k, a_k) = q_1. A simple cycle is a self-loop if it consists of only one state. We call a partial automaton weakly acyclic if all its cycles are self-loops, and strongly acyclic if moreover all its states with self-loops are sink states. Some properties of these automata have been studied in [19].

There is a strong relation between partial automata and prefix codes [6]. A set X of words is called a prefix code if no word in X is a prefix of another word in X. The class of prefix codes recognizable by an automaton can be described as follows. Take a strongly connected partial automaton and pick a state q in it. Then the set of all first return words of q (that is, words mapping q to itself such that no proper non-empty prefix of them maps q to itself) is a recognizable prefix code. Moreover, each recognizable prefix code can be obtained this way. A prefix code is called maximal if it is not a subset of another prefix code. The class of maximal recognizable prefix codes corresponds to the class of complete automata. If a state can be picked in an automaton in such a way that the set of all first return words is a finite prefix code, we call the automaton a partial Huffman decoder. If such an automaton is complete (and thus the finite prefix code is maximal), we call it simply a Huffman decoder.
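First return words can be enumerated directly from a decoder. A small sketch (representation and names are ours: transitions as a dict of dicts, `max_len` bounding the search so that it also terminates on decoders of infinite codes):

```python
def first_return_words(delta, root, max_len):
    """Enumerate the first return words of `root` up to length max_len:
    words mapping root back to itself such that no proper non-empty
    prefix does.  For a Huffman decoder these words form the prefix
    code the automaton recognizes."""
    code, frontier = [], [('', root)]
    while frontier:
        nxt = []
        for w, q in frontier:
            for a, t in delta[q].items():
                if t == root:
                    code.append(w + a)          # first return: stop here
                elif len(w) + 1 < max_len:
                    nxt.append((w + a, t))      # keep extending the prefix
        frontier = nxt
    return sorted(code)

# Huffman decoder of the (hypothetical) code {0, 10, 11}: the states are
# the proper prefixes '' and '1'.
delta = {'': {'0': '', '1': '1'}, '1': {'0': '', '1': ''}}
```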

Let A be a partial automaton. A word w is called synchronizing for A if there exists a state q such that w maps each state of A either to q or to an undefined value, and the mapping defined by w is defined for at least one state. In particular, for a complete automaton a word is synchronizing if it maps the whole set of states of the automaton to a set of size exactly one. An automaton having a synchronizing word is called synchronizing. A recognizable prefix code is synchronizing if a trim (partial) automaton recognizing the star of this code is synchronizing [6] (an automaton is called trim if there is a state from which every state is accessible, and a state which is reachable from every state). Whether a strongly connected partial automaton is synchronizing can be checked in polynomial time (Proposition 3.6.5 of [6]).
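For complete automata, checking synchronizability reduces to the classical pairwise criterion: an automaton is synchronizing if and only if every pair of states can be mapped to a single state by some word. A fixpoint sketch in Python (the dict-of-dicts representation and all names are ours):

```python
from itertools import combinations

def is_synchronizing(states, alphabet, delta):
    """Pairwise criterion for complete DFAs: synchronizing iff every
    unordered pair of states is mergeable by some word."""
    merge = {frozenset([q]) for q in states}        # singletons: already merged
    changed = True
    while changed:
        changed = False
        for p, q in combinations(states, 2):
            pair = frozenset((p, q))
            if pair in merge:
                continue
            # a pair is mergeable if some letter sends it to a mergeable
            # pair (or collapses it outright)
            if any(frozenset((delta[p][a], delta[q][a])) in merge
                   for a in alphabet):
                merge.add(pair)
                changed = True
    return all(frozenset((p, q)) in merge
               for p, q in combinations(states, 2))

# Černý automaton C_3: letter a is a cyclic shift, b merges states 2 and 0.
states, alphabet = [0, 1, 2], ['a', 'b']
cerny = {i: {'a': (i + 1) % 3, 'b': 0 if i == 2 else i} for i in states}
# Two permutations never merge anything: not synchronizing.
perm = {i: {'a': (i + 1) % 3, 'b': (i + 2) % 3} for i in states}
```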

Synchronizing automata have applications in different domains, such as synchronizing codes, symbolic dynamics, manufacturing and testing of reactive systems. They are also the subject of the Černý conjecture, one of the main open problems in automata theory. It states that every n-state synchronizing automaton has a synchronizing word of length at most (n − 1)^2, while the best known upper bounds are cubic [17, 24]. See [26] for a survey on this topic. The upper bound on the length of a shortest synchronizing word has been improved in particular for Huffman decoders [2] and further for literal Huffman decoders [5].

We consider the following computational problem.

**Short Sync Word**
Input: A synchronizing partial automaton A;
Output: The length of a shortest synchronizing word for A.
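For small instances, Short Sync Word can be solved exactly by breadth-first search in the subset (power) automaton. The following sketch is our own reference implementation (exponential in the worst case); it also handles partial automata in the sense defined above, dropping states on undefined transitions:

```python
from collections import deque

def shortest_sync_len(states, alphabet, delta):
    """Length of a shortest synchronizing word, or None if there is none.
    delta[q] may lack a letter, modelling a partial automaton."""
    start = frozenset(states)
    if len(start) == 1:
        return 0
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        cur, d = queue.popleft()
        for a in alphabet:
            img = frozenset(delta[q][a] for q in cur if a in delta[q])
            if not img:
                continue            # the word is defined nowhere: useless
            if len(img) == 1:
                return d + 1
            if img not in seen:
                seen.add(img)
                queue.append((img, d + 1))
    return None

# Demo: Černý automaton C_4, whose shortest synchronizing word has
# length (4 - 1)^2 = 9.
states, alphabet = [0, 1, 2, 3], ['a', 'b']
cerny4 = {i: {'a': (i + 1) % 4, 'b': 0 if i == 3 else i} for i in states}
```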

We now briefly survey existing results and techniques in the computational complexity and approximability of finding shortest synchronizing words for deterministic automata. To the best of our knowledge, there are no such results for partial automata. See [23] for an introduction to NP-completeness and [25] for an introduction to inapproximability and gap-preserving reductions.

There exist several techniques for proving that Short Sync Word is hard in different classes of automata. The first and most widely used idea is the one of Eppstein [11]. Here, the automaton in the reduction is composed of a set of “pipes”, and transitions define the way the active states are moved inside the pipes to reach the state where synchronization takes place. This idea (sometimes substantially extended) allows proving NP-completeness of Short Sync Word in the classes of strongly acyclic [11], ternary Eulerian [16], binary Eulerian [27] and binary cyclic [16] automata. This idea is also used in the proofs of [3] for inapproximability within an arbitrary constant factor for binary automata, and for n^{1−ε}-inapproximability for n-state binary automata [13] (the latter proof uses the theory of Probabilistically Checkable Proofs). In fact, the proof in [13] holds true for binary automata with linear (in the number of states of the automaton) length of a shortest synchronizing word and a sink state.

Another idea is to construct a reduction from the Set Cover problem. It can be used to show logarithmic inapproximability of Short Sync Word in weakly acyclic [14] and binary [4] automata. Finally, a reduction from Shortest Common Supersequence provides inapproximability of this problem within a constant factor [14].

In the class of monotonic automata Short Sync Word is solvable in polynomial time: because of the structure of these automata this problem reduces to the problem of finding a shortest word synchronizing a pair of states [20]. For general n-state automata, an O(n)-approximation polynomial time algorithm exists [14].

## 3 The Construction of Gawrychowski and Straszak

In this section we briefly recall the construction of a gadget invented by Gawrychowski and Straszak [13] to show n^{1−ε}-inapproximability of the Short Sync Word problem in the general class of automata. Below we will use this construction several times.

Suppose that we have a constraint satisfiability problem (CSP) with a set of variables and constraints such that each constraint is satisfied by a bounded number of assignments (see [13] for the definitions and missing details). Following the results in [13], we can assume that either the CSP is satisfiable, or at most a small fraction of all constraints can be satisfied by any assignment. It is possible to construct the following ternary automaton in polynomial time. For each constraint the automaton contains a corresponding binary gadget which is a compressed tree (that is, an acyclic digraph) with different leaves corresponding to satisfying and non-satisfying assignments. The automaton also contains a sink state such that all the leaves corresponding to satisfying assignments are mapped to the sink, and all other leaves are mapped to the roots of the corresponding trees. The third letter is defined to map all the states of each gadget to its root and to map the sink to itself. For every ε > 0 such an automaton can be constructed in polynomial time. Moreover, for a satisfiable CSP we get an automaton with a short shortest synchronizing word, while for a non-satisfiable CSP the length of a shortest synchronizing word is polynomially larger. Since ε can be chosen arbitrarily small, this provides a gap-preserving reduction with a gap of n^{1−ε}.

The described construction can be modified to get the same inapproximability in the class of strongly connected automata.

The Short Sync Word problem cannot be approximated in polynomial time within a factor of n^{1−ε} for any ε > 0 for n-state binary strongly connected automata unless P = NP.

###### Proof.

Consider the automaton described above. Add a new letter that cyclically permutes the roots of all gadgets, maps the sink state to the root of one of the gadgets and acts as a self-loop on all the remaining states. Observe that the automaton thus constructed is strongly connected, and it keeps the property that in the non-satisfiable case every assignment satisfies at most a small fraction of all constraints. Thus, the gap between the lengths of a shortest synchronizing word in the satisfiable and non-satisfiable cases is preserved.

It remains to make the automaton binary. This can be done by using Lemma 3 of [4]. This way we get a binary automaton with a comparable number of states and an essentially unchanged gap in the length of a shortest synchronizing word. By choosing ε small enough, we get a reduction with gap n^{1−ε} for binary strongly connected n-state automata, which proves the statement. ∎

## 4 Acyclic Automata

In this section we investigate the simply-defined classes of weakly acyclic and strongly acyclic automata. The results for strongly acyclic automata are used in Section 5 to obtain inapproximability for Huffman decoders. Even though the automata in the classes of weakly and strongly acyclic automata are very restricted and have a very simple structure, the inapproximability bounds for them are quite strong. Thus we believe that these classes are of independent interest.

We will need the following problem.

**Set Cover**
Input: A set X of p elements and a family C of m subsets of X;
Output: A subfamily of C of minimum size covering X.

A family C′ of subsets of X is said to cover X if X is a subset of the union of the sets in C′. The Set Cover problem cannot be approximated in polynomial time within a factor of c ln p for some c > 0 unless P = NP [4].
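The classical greedy algorithm matches this logarithmic threshold up to constants (its approximation ratio is ln p + 1); a short Python sketch for reference (the names are ours):

```python
def greedy_set_cover(X, sets):
    """Repeatedly pick the set covering the most still-uncovered elements.
    Returns the indices of the chosen sets, or None if X cannot be covered."""
    uncovered, chosen = set(X), []
    while uncovered:
        best = max(range(len(sets)), key=lambda i: len(uncovered & sets[i]))
        if not uncovered & sets[best]:
            return None            # no remaining set makes progress
        chosen.append(best)
        uncovered -= sets[best]
    return chosen

# Hypothetical instance: the optimum picks the first and the third set.
sets = [{1, 2, 3}, {4}, {4, 5}, {3, 5}]
cover = greedy_set_cover({1, 2, 3, 4, 5}, sets)
```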

The Short Sync Word problem cannot be approximated in polynomial time within a factor of c ln n for some c > 0 for n-state strongly acyclic automata over an unrestricted alphabet unless P = NP.

###### Proof.

We reduce from the Set Cover problem. Provided X and C, we construct an automaton A as follows. To each set in C we assign a letter. To each element of X we assign a “pipe” of states in A. Additionally, we construct a sink state in A.

The letter assigned to a set moves the states of the pipe of each element contained in this set towards the sink state, and leaves the states of all other pipes unchanged; the sink state is mapped to itself by every letter.

We claim that the length of a shortest synchronizing word for A is equal to the minimum size of a set cover in C. Let C′ be a set cover of minimum size. Then the concatenation of the letters corresponding to the elements of C′ is a synchronizing word of corresponding length.

In the other direction, consider a shortest synchronizing word w for A. No letter appears in w twice. If the length of w is less than m, then by the construction of A the subfamily of C corresponding to the letters of w forms a set cover. Otherwise we can take an arbitrary subfamily of C of size m which is a set cover (such a subfamily trivially exists if C covers X).

The resulting automaton has a number of states polynomial in p, and m letters. Thus we get a reduction with gap c ln n for some c > 0. Because of the mentioned result of Berlinkov, we can also assume that the alphabet size is bounded by an arbitrarily small polynomial in the number of states. ∎
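The correspondence between covers and synchronizing words can be sanity-checked on a stripped-down variant of the reduction (one state per element instead of a pipe, which already preserves the key property; the names and representation are ours, not the paper's):

```python
from collections import deque

def cover_automaton(elements, sets):
    """One letter per set; letter i sends an element-state to the sink 'f'
    iff sets[i] contains it.  (The actual proof uses longer "pipes"; this
    simplified variant keeps sync-word length == minimum cover size.)"""
    states = list(elements) + ['f']
    alphabet = range(len(sets))
    delta = {q: {a: ('f' if q == 'f' or q in sets[a] else q)
                 for a in alphabet} for q in states}
    return states, alphabet, delta

def sync_len_bfs(states, alphabet, delta):
    # plain BFS in the subset automaton (exponential; illustration only)
    start = frozenset(states)
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        cur, d = queue.popleft()
        if len(cur) == 1:
            return d
        for a in alphabet:
            img = frozenset(delta[q][a] for q in cur)
            if img not in seen:
                seen.add(img)
                queue.append((img, d + 1))
    return None
```

For an instance whose minimum cover has size 2, the shortest synchronizing word of the resulting automaton also has length 2.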

Now we are going to extend this result to the case of binary weakly acyclic automata.

The Short Sync Word problem cannot be approximated in polynomial time within a factor of c ln n for some c > 0 for n-state binary weakly acyclic automata unless P = NP.

###### Proof.

We extend the construction from the proof of Theorem 4 by using Lemma 3 of [4]. If we start with a strongly acyclic automaton with n states and k letters, this results in a binary weakly acyclic automaton with polynomially many states, and the length of a shortest synchronizing word of the new automaton is within a small factor of the length of a shortest synchronizing word of the original automaton. Since we can assume that k is polynomial in n with an arbitrarily small exponent, the logarithmic gap is preserved with respect to the number of states of the new automaton. ∎

For ternary strongly acyclic automata it is possible to get a polynomial-factor inapproximability.

The Short Sync Word problem cannot be approximated in polynomial time within a certain polynomial factor for n-state strongly acyclic automata over an alphabet of size three unless P = NP.

## 5 Huffman Decoders

We start with a statement relating strongly acyclic automata to Huffman decoders.

Let A be a synchronizing strongly acyclic automaton over an alphabet Σ. Let ℓ be the length of a shortest synchronizing word for A. Then there exists a Huffman decoder B over an alphabet of size |Σ| + 2 with the same length of a shortest synchronizing word, and B can be constructed in polynomial time.

###### Proof.

Provided a strongly acyclic automaton A, we construct a Huffman decoder B.

Since A is a synchronizing strongly acyclic automaton, it has a unique sink state s. We define the alphabet of B as the union of the alphabet Σ of A with two additional letters 0 and 1. The set of states of B is the union of the states of A with some auxiliary states defined as follows. Consider the set R of states of A having no incoming transitions. Construct a full binary tree with root s having R as the set of its leaves (if |R| is not a power of two, some subtrees of the tree can be merged). Define 0 to map each state of this tree to its left child, and 1 to map it to its right child. Transfer the action of Σ from A to B for all states of A. For all the internal states of the tree, define all the letters of Σ to map these states to s. Finally, for all the states of A, define the action of 0 and 1 in the same way as the action of some fixed letter of Σ.

Observe that any word over the alphabet Σ synchronizing A also synchronizes B. In the other direction, any synchronizing word for B has to synchronize the states of A, and each state of A has to be mapped to s first, so the length of a shortest synchronizing word for B is at least the length of a shortest synchronizing word for A. ∎

Now we use Lemma 5 to get preliminary inapproximability results for Huffman decoders.

(i) The Short Sync Word problem is NP-complete for Huffman decoders over an alphabet of constant size.

(ii) The Short Sync Word problem cannot be approximated in polynomial time within a logarithmic factor for Huffman decoders over an alphabet of constant size unless P = NP.

(iii) For every ε > 0, the Short Sync Word problem cannot be approximated in polynomial time within a certain polynomial factor for Huffman decoders over an alphabet of sufficiently large size unless P = NP.

###### Proof.

(i) The automaton in Eppstein’s proof of NP-completeness of Short Sync Word [11] is strongly acyclic, so the reduction described in Lemma 5 can be applied.

(ii) A direct consequence of Theorem 4 and Lemma 5.

(iii) A direct consequence of Theorem 4 and Lemma 5. ∎

Now we show how to get a better inapproximability result for binary Huffman decoders using the composition of synchronizing prefix codes. We present a more general result for the composition of synchronizing codes, which is of independent interest. This result shows how to change the size of the alphabet of a synchronizing complete code in such a way that the approximate length of a shortest absorbing pair for it is preserved.

A set X of words over an alphabet A is a code if no word can be represented as a concatenation of elements of X in two different ways. In particular, every prefix code is a code. A pair (x, y) of words is called absorbing if x w y ∈ X* for every word w over A. The length of a pair is the total length of its two words. A code X over an alphabet A is called complete if every word is a factor of some word in X*, that is, if for every word w there exist words u, v such that u w v ∈ X*. In particular, every maximal (by inclusion) code is complete. A complete code having an absorbing pair is called synchronizing. We refer to [6] for a survey on the theory of codes.

Let Y be a code over an alphabet B and Z be a code over an alphabet A. Suppose that there exists a bijection β between B and Z. The composition Y ∘ Z is then defined as the code β(Y) over the alphabet A [6], where β is extended to words by β(b_1 … b_k) = β(b_1) … β(b_k). Sometimes β is omitted in the notation of the composition.

Let Y and Z be two synchronizing complete codes such that Z is finite, and let ℓ and L be the lengths of a shortest and a longest codeword of Z. Suppose that the composition Y ∘ Z is defined. Then the code Y ∘ Z is synchronizing, and the length of a shortest absorbing pair for Y ∘ Z is bounded from below and above in terms of ℓ, L and the lengths of shortest absorbing pairs for Y and for Z.

###### Proof.

Let , , and be such that . First, assume that and are synchronizing, and let , be shortest absorbing pairs for and . Then and . We will show that is an absorbing pair for . Consider the set . It is a subset of the set . Thus, is an absorbing pair for . Moreover, the length of this pair is between and .

Conversely, assume that is a shortest absorbing pair for , hence . Then by the definition of composition and ; thus, is also absorbing for . Next, let , , . Then . Since the mapping is injective, . Consequently is synchronizing, and is an absorbing pair for it of length between and .

Summarizing, we get that the length of a shortest absorbing pair for is between and . ∎

In the case of maximal prefix codes the first element of the absorbing pair can be taken as an empty word. For recognizable maximal prefix codes and , where is finite, a Huffman decoder recognizing the star of can be constructed as follows. Let be a Huffman decoder for . Consider the full tree for , where each edge is marked by the corresponding letter. For each state in we substitute the transitions going from this state with a copy of as follows. The root of coincides with , and the inner vertices are new states of the resulting automaton. Suppose that is a leaf of , and the path from the root to is marked by a word . Let be the letter of the alphabet of which is mapped to the word in the composition. Then the image of under the mapping defined by is merged with . In such a way we get a Huffman decoder with states, where is the number of states in and . By the definition of composition, this decoder has the same alphabet as . See Figure 2 for an example.

The Short Sync Word problem cannot be approximated in polynomial time within a factor of cn for some c > 0 for binary n-state Huffman decoders unless P = NP.

###### Proof.

We start with claim (iii) in Corollary 5 and use Theorem 5 to reduce the size of the alphabet. Thus, we reduce Short Sync Word for Huffman decoders over a large alphabet to Short Sync Word for binary Huffman decoders.

Assume that the size of the alphabet is a power of two (if not, duplicate some letter the required number of times). As the inner code we take a binary maximal prefix code whose words have one of two consecutive lengths (after minimization, the star of this code is recognized by the Wielandt automaton discussed in the introduction). This code has a short synchronizing word [1]. The number of vertices in the tree of this code is linear in the size of the alphabet being encoded.

Let ℓ be the length of a shortest synchronizing word for the original automaton. By Theorem 5, the length of a shortest synchronizing word for the result of the composition is between two values proportional to ℓ.

For the Set Cover problem the inapproximability result holds even if we assume that the optimal solution is of size at least p^δ for some δ > 0. Indeed, if the optimum were constant, we could check all the subfamilies of C of constant size in polynomial time. Thus, we can assume that a shortest synchronizing word is long enough, implying that after the composition its length is changed by at most a constant multiplicative factor, and we get a gap-preserving reduction with the same gap. The resulting automaton is of polynomial size, and the dependence on ε is hidden in the constant in the statement of this corollary. ∎

## 6 Partial Huffman Decoders

In this section we investigate automata recognizing the star of a non-maximal finite prefix code. Such codes have some notable properties which do not hold for maximal finite prefix codes. For example, there exist non-trivial non-maximal finite prefix codes with finite synchronization delay, which provides guarantees on the synchronization time [9]. This makes it possible to read a stream of correctly transmitted compressed data from an arbitrary position, which can be useful for audio and video decompression.

First we show that the known upper bounds and approximation guarantees for Short Sync Word carry over to strongly connected partial automata. Because of Proposition 3.6.5 of [6], Algorithm 1 of [26] works without any changes for strongly connected partial automata. The analysis of its approximation ratio is the same as in [14]. Thus we get the following.

There exists a polynomial time algorithm (Algorithm 1 of [26]) finding a synchronizing word of length O(n^3) for an n-state strongly connected partial automaton. Moreover, this algorithm provides an O(n)-approximation for the Short Sync Word problem.
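The greedy procedure behind this kind of algorithm can be sketched as follows for complete automata (a simplified pairwise-merging variant in Python; the representation and names are ours). Each merging word is no longer than a shortest synchronizing word, and there are at most n − 1 merging steps, which is what yields the O(n) approximation ratio:

```python
from collections import deque

def delta_word(q, word, delta):
    for a in word:
        q = delta[q][a]
    return q

def merge_word(p, q, alphabet, delta):
    """Shortest word mapping states p and q to one state (pair BFS)."""
    start = (p, q) if p <= q else (q, p)
    seen, queue = {start}, deque([(start, [])])
    while queue:
        (u, v), w = queue.popleft()
        if u == v:
            return w
        for a in alphabet:
            nu, nv = delta[u][a], delta[v][a]
            nxt = (nu, nv) if nu <= nv else (nv, nu)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, w + [a]))
    return None

def greedy_sync_word(states, alphabet, delta):
    """Repeatedly merge two of the remaining active states."""
    active, word = set(states), []
    while len(active) > 1:
        p, q = sorted(active)[:2]
        w = merge_word(p, q, alphabet, delta)
        if w is None:
            return None                 # the automaton is not synchronizing
        word += w
        active = {delta_word(s, w, delta) for s in active}
    return word

# Demo: Černý automaton C_3.
states, alphabet = [0, 1, 2], ['a', 'b']
cerny = {i: {'a': (i + 1) % 3, 'b': 0 if i == 2 else i} for i in states}
```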

Now we provide a lower bound on the approximability of the Short Sync Word problem for partial Huffman decoders by extending the idea used to prove inapproximability for Huffman decoders in the previous sections. First we prove the result for a large alphabet and then use a composition with a maximal finite prefix code to get the same result for the binary case.

The Short Sync Word problem cannot be approximated within a factor of n^{1/2−ε} for every ε > 0 for n-state partial Huffman decoders over an alphabet of sufficiently large size unless P = NP.

###### Proof.

First we prove inapproximability for the class of partial strongly acyclic automata, that is, partial automata having no simple cycles except a self-loop in the sink state. We start with the CSP problem described in Section 3, with all the restrictions defined there. Given an instance of this problem in which each constraint is satisfied by a bounded number of assignments, we construct an automaton over an extended alphabet. For each constraint we construct several identical copies of the compressed tree corresponding to this constraint (also described in Section 3). For each copy except the last one, we merge the leaves corresponding to non-satisfying assignments with the root of the next copy, and delete all the leaves corresponding to satisfying assignments (leaving all the transitions leading to deleted states undefined). For the last copy, we again delete all the leaves corresponding to satisfying assignments and merge all the leaves corresponding to non-satisfying assignments with a new state f, which has a self-loop on every letter. Finally, we define an additional letter r mapping f to the root of the first copy, and add a chain of auxiliary states connected by the remaining letters. All other transitions are left undefined.

If r is applied first, the set of states to be synchronized consists of the auxiliary states together with the roots of the first copies. Observe that r cannot be applied anymore, since that would map all the active states of the automaton to void. If a letter other than r is applied first, a superset of this set must be synchronized then.

If there exists a satisfying assignment, then a short word is synchronizing, since it maps all the states but one to void. Otherwise, to synchronize the automaton we need to pass through a large number of compressed trees, since each tree can map only a bounded number of states to void (recall that for every non-satisfiable CSP the maximum fraction of satisfiable constraints is bounded in the construction, see Section 3). Thus we get a gap for the class of n-state strongly acyclic partial automata.

Now we are going to transfer this result to the case of partial Huffman decoders. We extend the idea of Lemma 5. All we need is to define transitions leading from the sink state to the states having no incoming transitions. The only difference is that now we have to make sure that r cannot be applied too early, which would map all the states of the compressed trees to void while leaving one state active.

To do that, we introduce two new letters and perform branching as described in Lemma 5. To each leaf of the constructed full binary tree we attach a chain of states ending in the root of one of the trees (or in f). That is, we introduce new states and define the letters to map each state of a chain to the next state of the same chain. This guarantees that if the letter r appears twice in a short word, this word maps all the states of the automaton to void. Finally, the action of the two new letters on the compressed trees and the remaining states repeats, for example, the action of one of the original letters.

The number of states of the automaton in this construction is polynomial in the size of the CSP instance. By choosing ε small enough we thus get the required gap. ∎

The next lemma shows that under some restrictions it is possible to reduce the alphabet of a non-maximal prefix code in such a way that the approximate length of a shortest synchronizing word is preserved. A word is called non-mortal for a prefix code X if it is a factor of some word in X*.

Let Y and Z be synchronizing prefix codes such that Z is finite and maximal. Let ℓ and L be the lengths of a shortest and a longest codeword of Z. Suppose that the composition Y ∘ Z is defined for some bijection β. If there exists a synchronizing word w for Y such that β(w) is a non-mortal word for Y ∘ Z, then the composition Y ∘ Z is synchronizing. Moreover, the length of a shortest synchronizing word for Y ∘ Z is then between ℓ · k and L · k, where k is the length of a shortest synchronizing word for Y.

###### Proof.

Let Y, Z and β be such that the composition Y ∘ Z is defined. Let w be a synchronizing word for Y as in the statement of the lemma. Then β(w) is a synchronizing word for Y ∘ Z of length at most L · |w|, where L is the length of a longest codeword of Z. In the other direction, the decoding of any synchronizing word for Y ∘ Z yields a synchronizing word for Y, which gives the lower bound. ∎

The Short Sync Word problem cannot be approximated in polynomial time within a factor of n^{1/2−ε} for every ε > 0 for binary n-state partial Huffman decoders unless P = NP.

###### Proof.

We use the composition of the automaton constructed in the proof of Theorem 6 with a finite maximal prefix code having a short synchronizing word. This synchronizing word is a concatenation of two different codewords, so the pre-images of these codewords form a word whose image is non-mortal for the composition, and we can apply Lemma 6. ∎

## 7 Literal Huffman Decoders

In this section we deal with literal Huffman decoders. Given a finite maximal prefix code X over an alphabet Σ, the literal automaton recognizing X* is an automaton defined as follows. The states of the automaton correspond to all proper prefixes of the words in X (the empty word ε among them), and the transition function is defined as

 δ(q, x) = qx if qx ∉ X, and δ(q, x) = ε if qx ∈ X.
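This definition translates directly into code. A sketch constructing the literal decoder from a finite prefix code (the dict-based representation and names are ours); for a non-maximal code some transitions are simply absent, giving a partial automaton:

```python
def literal_decoder(code):
    """States are the proper prefixes of codewords ('' is the root);
    reading a full codeword returns to the root.  Transitions possible
    for no codeword stay undefined (this happens only for non-maximal
    codes)."""
    code = set(code)
    alphabet = {c for w in code for c in w}
    states = {w[:i] for w in code for i in range(len(w))}
    delta = {}
    for q in states:
        for x in alphabet:
            if q + x in code:
                delta[(q, x)] = ''          # codeword completed: restart
            elif q + x in states:
                delta[(q, x)] = q + x       # still a proper prefix
    return states, delta
```

For the code of all binary words of length two this yields three states, matching the count of distinct proper prefixes.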

We will need the following useful lemma. The rank of a word w with respect to an automaton is the size of the image of the set of all states under the mapping defined by w.

[5, Lemma 16] For every n-state literal Huffman decoder over an alphabet of constant size there exists a word of logarithmic length and bounded rank.

Note that if an n-state literal Huffman decoder has a synchronizing word of logarithmic length, this word can be found in polynomial time by examining all words up to this length. Thus, in the further algorithms we will assume that the length of a shortest synchronizing word is greater than this value. Lemma 7 states that a word of bounded rank can also be found in polynomial time.
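The exhaustive first stage is straightforward; a sketch (names ours), polynomial whenever `max_len` is logarithmic in the number of states for a fixed alphabet:

```python
from itertools import product

def find_short_sync_word(states, alphabet, delta, max_len):
    """Try every word of length up to max_len (shortest first) and return
    the first synchronizing one, or None if there is no synchronizing word
    that short.  Cost: O(|alphabet|**max_len * |states| * max_len)."""
    for length in range(1, max_len + 1):
        for word in product(alphabet, repeat=length):
            image = set()
            for q in states:
                for a in word:
                    q = delta[q][a]
                image.add(q)
            if len(image) == 1:
                return ''.join(word)
    return None

# Demo: Černý automaton C_3, whose shortest synchronizing word has length 4.
states, alphabet = [0, 1, 2], 'ab'
cerny = {i: {'a': (i + 1) % 3, 'b': 0 if i == 2 else i} for i in states}
```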

There exists a -approximation polynomial time algorithm for the Short Sync Word problem for literal Huffman decoders.

###### Proof.

Let be a literal Huffman decoder, and . Let be a word of rank at most found as described above. Let be the image of under the mapping defined by , i.e. . Define the word by successively merging pairs of states in with shortest possible words. Note that a shortest word synchronizing has to synchronize every pair of states, in particular the pair requiring the longest merging word. Thus the length of is at most times the length of a shortest word synchronizing . Then the word is a -approximation for the Short Sync Word problem. ∎
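The greedy merging step of this proof can be sketched in Python as follows. This is a hedged illustration, not the paper's pseudocode: `merge_pair` finds a shortest word collapsing one pair of states by breadth-first search in the pair automaton, and `greedy_sync` repeats this until one state remains. It assumes a complete transition function (as for maximal codes) and a synchronizing decoder.

```python
from collections import deque

def merge_pair(delta, p, q):
    """Shortest word mapping states p and q to a single state,
    found by BFS over (unordered) pairs of states."""
    alphabet = {x for (_, x) in delta}
    start = frozenset((p, q))
    seen = {start}
    queue = deque([(start, "")])
    while queue:
        pair, w = queue.popleft()
        if len(pair) == 1:
            return w
        for x in sorted(alphabet):
            nxt = frozenset(delta[(s, x)] for s in pair)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, w + x))
    return None  # the pair cannot be merged

def greedy_sync(states, delta):
    """Synchronize by repeatedly appending a shortest word that
    merges two of the remaining states."""
    cur = set(states)
    word = ""
    while len(cur) > 1:
        p, q = sorted(cur)[:2]
        w = merge_pair(delta, p, q)  # assumes the decoder synchronizes
        for x in w:
            cur = {delta[(s, x)] for s in cur}
        word += w
    return word
```

Each merging word is no longer than the shortest synchronizing word, which is what yields the approximation factor stated in the proof.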

For every , there exists a -approximation -time algorithm for the problem Short Sync Word for -state literal Huffman decoders.

###### Proof.

Let be a literal Huffman decoder, and . First we check whether any word of length at most is synchronizing. The number of such words is polynomial, and each check can be performed in polynomial time. If a synchronizing word is found, we have an exact solution. Otherwise, a shortest synchronizing word must be longer, and we proceed to the second stage.

Let be a word of rank at most found as before. Now we construct the power automaton restricted to all the subsets of size at most . Using it, we find a shortest word synchronizing the subset ; let this word be . We return .

Let be a shortest synchronizing word for . Clearly, and . Thus , so is a -approximation as required. ∎
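The second stage, a BFS in the power automaton restricted to small subsets, can be sketched as follows (our own illustration; `sync_subset` and the explicit subset argument are assumed names). The search is exponential in the subset size but polynomial when that size is bounded by a constant, which is exactly the regime the proof uses.

```python
from collections import deque

def sync_subset(delta, alphabet, subset):
    """Shortest word synchronizing the given subset of states,
    by BFS over the subsets reachable from it in the power
    automaton (assumes complete transitions)."""
    start = frozenset(subset)
    seen = {start}
    queue = deque([(start, "")])
    while queue:
        cur, w = queue.popleft()
        if len(cur) == 1:
            return w
        for x in sorted(alphabet):
            nxt = frozenset(delta[(q, x)] for q in cur)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, w + x))
    return None  # the subset cannot be synchronized
```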

In view of the presented results we propose the following conjecture.

###### Conjecture .

There exists an exact polynomial time algorithm for the Short Sync Word problem for literal Huffman decoders.

Finally, we remark that it is possible to define the notion of the literal automaton of a non-maximal finite prefix code in the same way. In this case we leave undefined the transitions for a state and a letter such that is a proper prefix of a codeword, but is neither a proper prefix of a codeword nor a codeword itself. However, the statement of Lemma 7 is false for partial automata. Indeed, consider a two-word prefix code . Its literal automaton has states, and a shortest synchronizing word for it is of length . Every word of length at most which is defined for at least one state is of the form or and thus has rank at least .

## 8 Mortal and Avoiding Words

A word is called mortal for a partial automaton if its mapping is undefined for all the states of . The techniques described in this paper can be easily adapted to get the same inapproximability for the Short Mortal Word problem defined as follows.

Short Mortal Word
Input: A partial automaton A with at least one undefined transition;
Output: The length of a shortest mortal word for A.

This problem is related, for instance, to Restivo's famous conjecture [18].

Unless P = NP, the Short Mortal Word problem cannot be approximated in polynomial time within a factor of

(i) for every for -state binary strongly connected partial automata;

(ii) for some for -state binary partial Huffman decoders.

###### Proof.

It can be seen that in Theorem 3 and Corollary 5 we construct an automaton with a state such that each state has to visit before synchronization. Introduce a new state having all the transitions the same as , and for set the only defined transition (for an arbitrary letter) to map to . Thus we get an automaton such that every mortal word has to map each state to before mapping it to nowhere. We therefore preserve all the estimates on the length of a shortest mortal word, which proves both statements. ∎

Moreover, it is easy to get a -approximation polynomial time algorithm for Short Mortal Word for literal Huffman decoders following the idea of Theorem 7. Indeed, it follows from Lemma 7 that either there exists a mortal word of length at most , or there exists a word of rank at most . In the latter case we can find a word which is a concatenation of and a shortest word mapping all these states to nowhere one by one. By arguments similar to the proof of Theorem 7 we then get the following.

###### Proposition .

There exists a -approximation polynomial time algorithm for the Short Mortal Word problem for -state literal Huffman decoders. This algorithm always finds a mortal word of length .
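For small length bounds, mortality can also be checked exhaustively, in the same style as the synchronization check earlier. The sketch below is our own (assumed names `is_mortal`, `shortest_mortal`); a partial automaton is encoded by simply omitting undefined transitions from the dictionary.

```python
from itertools import product

def is_mortal(states, delta, w):
    """True if w is undefined on every state; states are dropped
    as soon as they hit an undefined transition."""
    cur = set(states)
    for x in w:
        cur = {delta[(q, x)] for q in cur if (q, x) in delta}
    return not cur

def shortest_mortal(states, delta, alphabet, max_len):
    """Brute-force search for a shortest mortal word, feasible
    only for small max_len."""
    for length in range(1, max_len + 1):
        for w in product(sorted(alphabet), repeat=length):
            if is_mortal(states, delta, "".join(w)):
                return "".join(w)
    return None
```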

Another connected and important problem is to find a shortest avoiding word. Given an automaton , a word is called avoiding for a state if is not contained in the image of , that is, . Avoiding words play an important role in the recent improvement on the upper bound on the length of a shortest synchronizing word [24]. They are in some sense dual to synchronizing words.

Short Avoiding Word
Input: An automaton A and its state q admitting a word avoiding q;
Output: The length of a shortest word avoiding q in A.

If is not the root of a literal Huffman decoder (that is, not the state corresponding to the empty prefix), then a shortest avoiding word consists of just one letter: after reading a letter x, every state in the image is either the root or a prefix ending with x, so any letter different from the last letter of avoids it. Thus avoiding is non-trivial only for the root state.
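A brute-force search for a shortest avoiding word can be sketched as below (our own illustration, with assumed names); it also makes the remark above concrete: in a two-state literal decoder, the non-root state is avoided by a single letter, while avoiding the root takes a longer word.

```python
from itertools import product

def avoids(states, delta, w, q):
    """True if q lies outside the image of the whole state set
    under w (undefined transitions drop states)."""
    cur = set(states)
    for x in w:
        cur = {delta[(s, x)] for s in cur if (s, x) in delta}
    return q not in cur

def shortest_avoiding(states, delta, alphabet, q, max_len):
    """Brute-force search for a shortest word avoiding state q."""
    for length in range(1, max_len + 1):
        for w in product(sorted(alphabet), repeat=length):
            if avoids(states, delta, "".join(w), q):
                return "".join(w)
    return None
```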

###### Proposition .

For every , there exists a -approximation -time algorithm for the Short Avoiding Word problem for -state literal Huffman decoders.

###### Proof.

We use the same algorithm as in the proof of Theorem 7. The only difference is that we check whether the words are avoiding instead of synchronizing. ∎

## 9 Concluding Remarks

For prefix codes, a synchronizing word is usually required to map all the states to the root [6]. One can see that this property holds for all the constructions of the paper. Moreover, in all the constructions the length of a shortest synchronizing word is linear in the number of states of the automaton. Thus, if we restrict to this case, we still get the same inapproximability results. Also, it should be noted that all the inapproximability results are proved by providing a gap-preserving reduction, thus proving NP-hardness of approximating the Short Sync Word problem within a given factor.

## References

• [1] Dmitry S. Ananichev, Vladimir V. Gusev, and Mikhail V. Volkov. Slowly synchronizing automata and digraphs. In Mathematical Foundations of Computer Science, LNCS vol. 6281, pages 55–65. Springer, 2010.
• [2] Marie-Pierre Béal, Mikhail V. Berlinkov, and Dominique Perrin. A quadratic upper bound on the size of a synchronizing word in one-cluster automata. International Journal of Foundations of Computer Science, 22(2):277–288, 2011.
• [3] Mikhail V. Berlinkov. Approximating the minimum length of synchronizing words is hard. Theory of Computing Systems, 54(2):211–223, 2014.
• [4] Mikhail V. Berlinkov. On two algorithmic problems about synchronizing automata. In Arseny M. Shur and Mikhail V. Volkov, editors, DLT 2014. LNCS, vol. 8633, pages 61–67. Springer, Cham, 2014.
• [5] Mikhail V. Berlinkov and Marek Szykuła. Algebraic synchronization criterion and computing reset words. Information Sciences, 369:718 – 730, 2016.
• [6] Jean Berstel, Dominique Perrin, and Christophe Reutenauer. Codes and Automata. Encyclopedia of Mathematics and its Applications 129. Cambridge University Press, 2010.
• [7] Marek Biskup. Error Resilience in Compressed Data – Selected Topics. PhD thesis, Faculty of Mathematics, Informatics and Mechanics, University of Warsaw, 2008.
• [8] Marek Tomasz Biskup and Wojciech Plandowski. Shortest synchronizing strings for Huffman codes. Theoretical Computer Science, 410(38):3925–3941, 2009.
• [9] Véronique Bruyère. On maximal codes with bounded synchronization delay. Theoretical Computer Science, 204(1):11–28, 1998.
• [10] Renato M. Capocelli, A. A. De Santis, Luisa Gargano, and Ugo Vaccaro. On the construction of statistically synchronizable codes. IEEE Transactions on Information Theory, 38(2):407–414, 1992.
• [11] David Eppstein. Reset sequences for monotonic automata. SIAM Journal on Computing, 19(3):500–510, 1990.
• [12] Christopher F Freiling, Douglas S Jungreis, François Théberge, and Kenneth Zeger. Almost all complete binary prefix codes have a self-synchronizing string. IEEE Transactions on Information Theory, 49(9):2219–2225, 2003.
• [13] Paweł Gawrychowski and Damian Straszak. Strong inapproximability of the shortest reset word. In F. Giuseppe Italiano, Giovanni Pighizzini, and T. Donald Sannella, editors, MFCS 2015. LNCS, vol. 9234, pages 243–255. Springer, Heidelberg, 2015.
• [14] Michael Gerbush and Brent Heeringa. Approximating Minimum Reset Sequences, pages 154–162. Springer Berlin Heidelberg, Berlin, Heidelberg, 2011.
• [15] David A. Huffman. A method for the construction of minimum-redundancy codes. Proceedings of the IRE, 40(9):1098–1101, 1952.
• [16] Pavel Martyugin. Complexity of problems concerning reset words for cyclic and Eulerian automata. Theoretical Computer Science, 450:3–9, 2012. Implementation and Application of Automata (CIAA 2011).
• [17] Jean-Eric Pin. On two combinatorial problems arising from automata theory. In C. Berge, D. Bresson, P. Camion, J.F. Maurras, and F. Sterboul, editors, Combinatorial Mathematics Proceedings of the International Colloquium on Graph Theory and Combinatorics, volume 75 of North-Holland Mathematics Studies, pages 535 – 548. North-Holland, 1983.
• [18] Antonio Restivo. Some remarks on complete subsets of a free monoid. Quaderni de ”La ricerca scientifica”, CNR Roma, 109:19–25, 1981.
• [19] Andrew Ryzhikov. Synchronization problems in automata without non-trivial cycles. In Arnaud Carayol and Cyril Nicaud, editors, CIAA 2017, LNCS, vol. 10329, pages 188–200. Springer, Cham, 2017.
• [20] Andrew Ryzhikov and Anton Shemyakov. Subset synchronization in monotonic automata. In Juhani Karhumäki and Aleksi Saarela, editors, Proceedings of the Fourth Russian Finnish Symposium on Discrete Mathematics, TUCS Lecture Notes 26, pages 154–164. 2017. Accepted to Fundamenta Informaticae.
• [21] Marcel-Paul Schützenberger. On the synchronizing properties of certain prefix codes. Information and Control, 7(1):23 – 36, 1964.
• [22] Marcel-Paul Schützenberger. On synchronizing prefix codes. Information and Control, 11(4):396 – 401, 1967.
• [23] Michael Sipser. Introduction to the Theory of Computation. Cengage Learning, 3rd edition, 2012.
• [24] Marek Szykuła. Improving the Upper Bound on the Length of the Shortest Reset Word. In STACS 2018, LIPIcs, pages 56:1–56:13. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik, 2018.
• [25] Vijay V. Vazirani. Approximation Algorithms. Springer-Verlag New York, 2001.
• [26] Mikhail V. Volkov. Synchronizing automata and the Černý conjecture. In Carlos Martín-Vide, Friedrich Otto, and Henning Fernau, editors, LATA 2008. LNCS, vol. 5196, pages 11–27. Springer, Heidelberg, 2008.
• [27] Vojtěch Vorel. Complexity of a problem concerning reset words for Eulerian binary automata. Information and Computation, 253(Part 3):497–509, 2017. LATA 2014.