Complete Variable-Length Codes: An Excursion into Word Edit Operations

12/05/2019 · by Jean Néraud, et al. · Université de Rouen

Given an alphabet A and a binary relation τ ⊆ A* × A*, a language X ⊆ A* is τ-independent if τ(X) ∩ X = ∅; X is τ-closed if τ(X) ⊆ X. The language X is complete if any word over A is a factor of some concatenation of words in X. Given a family of languages F containing X, X is maximal in F if no other set of F can strictly contain X. A language X ⊆ A* is a variable-length code if any equation among the words of X is necessarily trivial. The study discusses the relationship between maximality and completeness in the case of τ-independent or τ-closed variable-length codes. We focus on the binary relations by which the images of words are computed by deleting, inserting, or substituting some characters.


1 Introduction

In formal language theory, given a property P, the embedding problem with respect to P consists in examining whether a language satisfying P can be included in some language that is maximal with respect to P, in the sense that no language satisfying P can strictly contain it. In the literature, maximality is often connected to completeness: a language X over the alphabet A is complete if any string in the free monoid A* (the set of the words over A) is a factor of some word of X* (the submonoid of all concatenations of words in X). Such a connection takes on special importance for codes: a language X over A is a variable-length code (for short, a code) if every equation among the words (i.e. strings) of X is necessarily trivial.

A famous result due to M.P. Schützenberger states that, for the family of the so-called thin codes (which contains the regular codes and therefore also the finite ones), being maximal is equivalent to being complete. Many challenging theoretical questions have been raised in connection with these two concepts. For instance, to this day the problem of the existence of a finite maximal code containing a given finite one is not known to be decidable. From this latter point of view, in [16] the author asked the question of the existence of a regular complete code containing a given finite one: a positive answer was brought in [4], where a now classical formula was provided for embedding a given regular code into some complete regular one. Famous families of codes have also been the object of such studies: we mention prefix and bifix codes [2, Proposition 3.3.8, Proposition 6.2.1], codes with a finite deciphering delay [3], and infix [10], solid [11], and circular [13] codes.

Actually, with each of those families a so-called dependence system can be associated. Formally, such a system is a family F of languages constituted by those sets that contain a non-empty finite subset in F. Languages in F are F-dependent, the other ones being F-independent. A special case corresponds to binary word relations τ ⊆ A* × A*, where the dependence system is constituted by those sets X satisfying τ(X) ∩ X ≠ ∅: X is τ-independent if we have τ(X) ∩ X = ∅. Prefix codes certainly constitute the best-known example: they are exactly the codes that are independent with respect to the relation obtained by removing each pair (w, w) from the famous prefix order. Bifix, infix, and solid codes can be similarly characterized.

As regards dependence, an extremal condition corresponds to the so-called closed sets: given a word relation τ, a language X is closed under τ (τ-closed, for short) if we have τ(X) ⊆ X. Many topics involve this notion. We mention the framework of the prefix order, where a one-to-one correspondence between independent and closed sets is provided in [2, Proposition 3.1.3] (cf. also [1, 18]). Congruences in the free monoid are also concerned [15], as are their connections to DNA computing [7]. With respect to morphisms, related topics are also provided by the famous L-systems [17] and, in the case of one-to-one (anti-)automorphisms, the so-called invariant sets [14].

As commented in [6], maximality and completeness concern the economy of a code: if X is a complete code, then every word occurs as part of some message, hence no part of X* is potentially useless. The present paper emphasizes the following questions: given a regular binary relation τ, in the family of regular τ-independent (τ-closed) codes, are maximality and completeness equivalent notions? Given a non-complete regular τ-independent (τ-closed) code, is it embeddable into some complete one?

Independence has some peculiar importance in the framework of coding theory. Informally, given some concatenation of words in X, each codeword x is transmitted via a channel into a corresponding word x′. According to the combinatorial structure of X and the type of channel, one has to make use of codes with prescribed error-detecting constraints: some minimum-distance restraint is generally applied. In this paper, where we consider variable-length codewords, we appeal to the Levenshtein metric [12]: given two different words u, v, their distance is the minimal total number of elementary edit operations that can transform u into v, such an operation consisting of a one-character deletion, insertion, or substitution. Formally, it is the smallest integer k such that v can be obtained from u by composing k operations drawn from the deletion, insertion, and substitution relations further defined below. From the point of view of error detection, independence with respect to such an edit relation guarantees that a codeword affected by the corresponding errors cannot be confused with another codeword. In addition, a code satisfies the property of error correction if the sets of words obtainable by such errors from two of its elements are disjoint unless the elements are equal: according to [9, chap. 6], the existence of such codes is decidable. Denote by Subw(w) the set of the subsequences of w:

– δ_k, the k-character deletion, associates with every word w all the words of Subw(w) whose length is |w| − k. The at-most-k-character deletion is δ_1 ∪ … ∪ δ_k;

– ι_k, the k-character insertion, is the converse relation of δ_k; we similarly set ι_1 ∪ … ∪ ι_k (the at-most-k-character insertion);

– σ_k, the k-character substitution, associates with every word w all the words w′ of length |w| that differ from w in exactly k positions (that is, the letter of position i in w′ differs from the letter of position i in w for exactly k indices i); we set σ_1 ∪ … ∪ σ_k for the at-most-k-character substitution;

– We denote by Λ_k the antireflexive relation obtained by removing all pairs (w, w) from the union of the at-most-k-character deletion, insertion, and substitution relations (equivalently, w′ ∈ Λ_k(w) holds if, and only if, w′ ≠ w and the Levenshtein distance between w and w′ is at most k).
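For readers who wish to experiment, the three edit operations above can be realized by a short Python sketch (a minimal illustration; the function names are ours, not notation from the paper):

```python
from itertools import combinations

def delete_k(w, k):
    """k-character deletion: all subwords of w of length |w| - k."""
    return {"".join(c for i, c in enumerate(w) if i not in pos)
            for pos in map(set, combinations(range(len(w)), k))}

def insert_k(w, k, alphabet):
    """k-character insertion: all words obtained from w by inserting
    exactly k characters (the converse of delete_k)."""
    out = {w}
    for _ in range(k):
        out = {v[:i] + a + v[i:] for v in out
               for i in range(len(v) + 1) for a in alphabet}
    return out

def subst_k(w, k, alphabet):
    """k-character substitution: all words of length |w| differing from w
    in exactly k positions."""
    out = set()
    for pos in combinations(range(len(w)), k):
        variants = {w}
        for i in pos:
            variants = {v[:i] + a + v[i + 1:]
                        for v in variants for a in alphabet if a != w[i]}
        out |= variants
    return out
```

By construction, insert_k is the converse of delete_k: a word w′ is returned by insert_k(w, k, A) exactly when w belongs to delete_k(w′, k).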

For short, we will refer to the preceding relations as edit relations. For reasons of consistency, in the whole paper we assume k ≥ 1 and that the alphabet A has at least two letters. In what follows, we outline the main contributions of the study:

Firstly, we prove that, given a positive integer k, the two families of languages that are independent with respect to the k-character deletion or the k-character insertion are identical. In addition, for , no set can be -independent. We establish the following result:

Theorem A.

Let A be a finite alphabet and let τ be one of the edit relations above. Given a regular τ-independent code X, X is complete if, and only if, it is maximal in the family of τ-independent codes.

A code whose pairwise distinct words always lie at Levenshtein distance larger than k is precisely an independent code with respect to the last edit relation above: from this point of view, Theorem A states a noticeable characterization of maximal k-error-detecting codes in the framework of the Levenshtein metric.
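This error-detection property can be tested mechanically on a finite set: the codewords must be pairwise at Levenshtein distance larger than k. A minimal sketch (function names ours):

```python
def levenshtein(u, v):
    """Classic dynamic-programming edit distance: minimal number of
    one-character deletions, insertions, and substitutions turning u into v."""
    prev = list(range(len(v) + 1))
    for i, cu in enumerate(u, 1):
        cur = [i]
        for j, cv in enumerate(v, 1):
            cur.append(min(prev[j] + 1,          # delete cu
                           cur[j - 1] + 1,       # insert cv
                           prev[j - 1] + (cu != cv)))  # substitute
        prev = cur
    return prev[-1]

def is_edit_independent(X, k):
    """True when all distinct codewords of X lie at distance > k,
    i.e. up to k edit errors are detectable."""
    X = list(X)
    return all(levenshtein(X[i], X[j]) > k
               for i in range(len(X)) for j in range(i + 1, len(X)))
```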

Secondly, we explore the domain of closed codes. A noticeable fact is that, for any k, there are only finitely many codes closed under the k-character deletion, and each of them has finite cardinality. Furthermore, one can decide whether a given non-complete such code can be embedded into some complete one. We also prove that no code at all can be closed under the insertion relations, nor under the at-most-k-character deletion.

As regards substitutions, we beforehand focus on the structure of the set of words obtained from a given word by iterated substitutions. Actually, except for two special cases (that is, [5, 19], or [8, ex. 8, p. 77]), to the best of our knowledge no general description is provided in the literature. In any event we provide such a description; furthermore, we establish the following result:

Theorem B.

Let be a finite alphabet and . Given a complete -closed code , either every word in has length not greater than , or a unique integer exists such that . In addition for every ()-closed code , some positive integer exists such that .

In other words, no -closed code can simultaneously possess words in and words in . As a consequence, one can decide whether a given non-complete -closed code is embeddable into some complete one.

2 Preliminaries

We adopt the notation of the free monoid theory. Given a word w, we denote by |w| its length; for a ∈ A, |w|_a denotes the number of occurrences of the letter a in w. The set of the words whose length is not greater (not smaller) than n is denoted by A^≤n (A^≥n). Given x, w ∈ A*, we say that x is a factor of w if words u, v exist such that w = uxv; a subword of w consists of any (perhaps empty) subsequence of w. We denote by F(X) (Subw(X)) the set of the words that are factors (subwords) of some word in X (we have F(X) ⊆ Subw(X)). A pair of words (w, w′) is overlapping-free if no non-empty proper prefix of one of the two words is a suffix of the other; if w = w′, we say that w itself is overlapping-free.
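The notions of factor, subword, and overlapping-free (i.e. unbordered) word can be made concrete by the following sketch (assuming the standard definitions above; names ours):

```python
from itertools import combinations

def factors(w):
    """All factors (contiguous blocks) of w, including the empty word."""
    return {w[i:j] for i in range(len(w) + 1) for j in range(i, len(w) + 1)}

def subwords(w):
    """Subw(w): all (possibly empty, scattered) subsequences of w."""
    return {"".join(w[i] for i in pos)
            for r in range(len(w) + 1) for pos in combinations(range(len(w)), r)}

def is_overlapping_free(w):
    """w overlaps itself when a non-empty proper prefix of w is also a
    suffix of w; overlapping-free means no such border exists."""
    return not any(w[:i] == w[-i:] for i in range(1, len(w)))
```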

It is assumed that the reader has a fundamental understanding of the main concepts of the theory of variable-length codes: if necessary, we suggest referring to [2]. A set X is a variable-length code (a code, for short) if, for any pair of sequences of words in X, say (x_1, …, x_m) and (y_1, …, y_n), the equation x_1 ⋯ x_m = y_1 ⋯ y_n implies m = n and x_i = y_i for each integer i (equivalently, the submonoid X* is free). The two following results are famous ones from the theory of variable-length codes:
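Whether a finite set is a code can be decided by the classical Sardinas–Patterson procedure, which iterates sets of "dangling suffixes" and reports failure when the empty word appears. A compact sketch of this standard algorithm (not taken from the paper):

```python
def is_code(X):
    """Sardinas-Patterson test for a finite set X of words."""
    X = set(X)

    def lq(A, B):
        # left quotient: all w such that a + w lies in B for some a in A
        return {b[len(a):] for a in A for b in B if b.startswith(a)}

    U = lq(X, X) - {""}          # U_1: proper dangling suffixes of X over X
    seen = []
    while U and U not in seen:
        if "" in U:
            return False         # a double factorization exists
        seen.append(U)
        U = lq(X, U) | lq(U, X)  # U_{n+1}
    return True                  # suffix sets vanished or cycled: X is a code
```

Termination is guaranteed because every iterated set is a set of suffixes of words of X, of which there are finitely many.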

Theorem 2.1

Schützenberger [2, Theorem 2.5.16] Let X be a regular code. Then the following properties are equivalent:

(i) X is complete;

(ii) X is a maximal code;

(iii) a positive Bernoulli distribution π exists such that π(X) = 1;

(iv) for every positive Bernoulli distribution π, we have π(X) = 1.

Theorem 2.2

[4] Given a non-complete code X, let y be an overlapping-free word such that y ∉ F(X*), and set U = A* ∖ (X* ∪ A*yA*). Then Y = X ∪ y(Uy)* is a complete code.

With regard to word relations, the following statement comes from the definitions:

Lemma 2.3

Let τ ⊆ A* × A* and X ⊆ A*. Each of the following properties holds:

(i) X is τ-independent if, and only if, it is τ⁻¹-independent (τ⁻¹ denotes the converse relation of τ).

(ii) is ()-independent if, and only if, it is (-independent.

(iii) is -closed if, and only if, it is -closed.

3 Complete independent codes

We start by providing a few examples:

Example 3.1

For , , the prefix code is not -independent (we have ), whereas the following codes are -independent:

– the regular (prefix) code: . Note that since it contains , is not a code.

– the complete (non-regular) context-free Dyck bifix code , which generates the Dyck free submonoid (for every word we have ). Note that contains the empty word, , thus it cannot be a code; however it remains a (non-complete) bifix code.

– the non-complete finite bifix code : actually, is the complete uniform code .

– for every pair of different integers , the prefix code . We have , which is not a code, although it is complete.

In view of establishing the main result of Section 3, we will construct some peculiar word:

Lemma 3.2

Let , , . Given a non-complete code, some overlapping-free word exists such that does not intersect and .

Proof. Let be a non-complete code, and let . Trivially, we have . Moreover, in a classical way a word exists such that is overlapping-free (e.g. [2, Proposition 1.3.6]). Since we assume , each word in is constructed by deleting (inserting, substituting) at most letters from , hence by construction it contains at least one occurrence of as a factor. This implies , thus does not intersect .

By contradiction, assume that a word exists such that . It follows from and that is obtained by deleting (inserting, substituting) at most letters from : consequently at least one occurrence of appears as a factor of : this contradicts , therefore we obtain (cf. Figure 1).



Fig. 1: Proof of Lemma 3.2: implies ; for and , the action of the substitution is represented by the arrows, in some extremal condition.

As a consequence, we obtain the following result:

Theorem 3.3

Let and . Given a regular -independent code , is complete if, and only if, it is maximal as an -independent code.

Proof. According to Theorem 2.1, every complete -independent code is a maximal code, hence it is maximal in the family of -independent codes. For proving the converse, we make use of the contrapositive. Let be a non-complete -independent code, and consider a word satisfying the conditions of Lemma 3.2. With the notation of Theorem 2.2, necessarily , which is a subset of , is a code. According to Lemma 3.2, we have . Since is -independent and antireflexive, this implies , thus is non-maximal as a -independent code.
  
We notice that, for , no -independent set can exist (indeed, we have ). However, the following result holds:

Corollary 3.4

Let . Given a regular -independent code , is complete if, and only if, it is maximal as a -independent code.

Proof. As indicated above, if is complete, then it is maximal as a -independent code. For the converse, once more we argue by contrapositive: that is, with the notation of Lemma 3.2, we prove that remains independent. By definition, for each , we have , with . According to Lemma 3.2, since is antireflexive, for each we have : this implies , thus is -independent.
  
With regard to the relation , Corollary 3.4 expresses some interesting property in terms of error detection. Indeed, as indicated in Section 1, a code is -independent if the Levenshtein distance between its (distinct) elements is always larger than . From this point of view, Corollary 3.4 states some characterization of maximality in the family of such codes.

It remains to develop some method for embedding a given non-complete -code into a complete one. Since the construction from the proof of Theorem 2.2 does not preserve independence, this question remains open.

4 Complete closed codes with respect to deletion or insertion

We start with the deletion relation . A noticeable fact is that the corresponding closed codes are necessarily finite, as attested by the following result:

Proposition 4.1

Given a -closed code , and , we have .

Proof. It follows from and being -closed that . By contradiction, assume and let be the unique pair of integers such that , with . Since we have , an integer exists such that , thus words exist such that , with and . By construction, every word with belongs to (indeed, we have and ). This implies , thus : a contradiction with being a code.

Example 4.2

(1) According to Proposition 4.1, no code can be -closed. This can also be drawn from the fact that, for every set we have .

(2) Let and . According to Proposition 4.1, every word in any -closed code has length not greater than . It is straightforward to verify that is a -closed code. In addition, a finite number of examinations allows one to verify that is maximal as a -closed code. Taking for the uniform distribution, we have : thus is non-complete.
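Closedness under deletion is easy to test exhaustively on a finite set. The following sketch (notation ours) also illustrates the obstruction of Example 4.2(1): closing a set under single-character deletion eventually forces the empty word, which no code may contain.

```python
from itertools import combinations

def delta_k(w, k):
    """Exactly-k-character deletion: subwords of w of length |w| - k."""
    return {"".join(c for i, c in enumerate(w) if i not in pos)
            for pos in map(set, combinations(range(len(w)), k))}

def is_delta_closed(X, k):
    """X is closed under the k-character deletion when every deletion of a
    codeword (whenever defined) stays inside X."""
    return all(v in X for w in X for v in delta_k(w, k))
```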

According to Example 4.2 (2), no result similar to Theorem 3.3 can be stated in the framework of -closed codes. We also notice that, in Proposition 4.1, the bound does not depend on the size of the alphabet, but only on .

Corollary 4.3

Given a finite alphabet and a positive integer , one can decide whether a non-complete -closed code is included in some complete one. In addition, there are only finitely many such complete codes, all of them computable, if any.

Proof. According to Proposition 4.1 only a finite number of -closed codes over can exist, each of them being a subset of .
  
We close the section by considering the relations , and :

Proposition 4.4

No code can be -closed, -closed, nor -closed.

Proof. By contradiction, assume that some -closed code exists. Let , and such that . It follows from , that . According to Lemma 2.3(iii), we have , thus . Since , we have : a contradiction with being a code. Consequently, no -closed codes can exist. According to Example 4.2(1), given a code , we have : this implies , thus is not -closed.

5 Complete codes closed under substitutions

Beforehand, given a word , we need a thorough description of the set . Actually, it is well known that, over a binary alphabet, all -bit words can be computed by making use of some Gray sequence [5]. With our notation, we have . Furthermore, for every finite alphabet , the so-called -ary Gray sequences allow one to generate [8, 19]: once more we have . In addition, in the special case where and , it can be proved that we have [8, Exercise 8, p. 28]. However, except in these special cases, to the best of our knowledge no general description of the structure of appears in the literature. In any event, in the next paragraph we provide an exhaustive description of . Strictly speaking, the proofs, which we report in Section 5.2, are not involved in the study of -closed codes: we suggest that, on a first reading, the reader jump from para. 5.1 directly to para. 5.3.
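The Gray sequences mentioned above can be generated by the classical binary reflected construction, in which consecutive words differ in exactly one position, i.e. each step is a single-character substitution:

```python
def gray_sequence(n):
    """Binary reflected Gray code: an ordering of {0,1}^n in which any two
    consecutive words differ in exactly one position."""
    if n == 0:
        return [""]
    prev = gray_sequence(n - 1)
    # prefix the (n-1)-sequence with 0, then its reversal with 1
    return ["0" + w for w in prev] + ["1" + w for w in reversed(prev)]
```

Since the sequence visits all of {0,1}^n, iterating single-character substitutions from any binary word reaches every word of the same length.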

5.1 Basic results concerning

Proposition 5.1

Assume . For each , we have .

In the case where is a binary alphabet, we set : this allows a well-known algebraic interpretation of . Indeed, denote by the addition in the group with identity , and fix a positive integer ; given , define as the unique word of such that, for each , the letter of position in is . With this notation the sets and are in one-to-one correspondence. Classically, we have if, and only if, some exists such that with (thus ). From the fact that , the following property holds:

(1)

In addition is equivalent to . Let . The following property follows from and :

(2)

Finally, for we denote by its complementary letter, that is, ; for we set .
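Over the binary alphabet, the identification with the group (Z/2Z)^n makes the k-character substitution a letterwise addition (XOR) with a word containing exactly k ones; a small sketch of this correspondence (names ours):

```python
from itertools import combinations

def xor_words(u, v):
    """Letterwise addition in (Z/2Z)^n for binary words of equal length."""
    return "".join("1" if a != b else "0" for a, b in zip(u, v))

def sigma_k_xor(w, k):
    """Over {0,1}: the k-character substitutions of w are exactly the
    words w XOR u where u has exactly k occurrences of 1."""
    n = len(w)
    masks = ("".join("1" if i in pos else "0" for i in range(n))
             for pos in combinations(range(n), k))
    return {xor_words(w, u) for u in masks}
```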

Lemma 5.2

Let , . Given the two following properties hold:

(i) If is even and then is an even integer;

(ii) If is even then we have , for every .

Given a positive integer , we denote by ( ) the set of the words such that is even (odd).

Proposition 5.3

Assume . Given exactly one of the following conditions holds:

(i) , is even, and ;

(ii) , is odd, and ;

(iii) and .
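On small instances this trichotomy can be verified exhaustively: over the binary alphabet, flipping an even number of positions preserves the parity of the number of occurrences of a letter, and iterating such flips fills the whole parity class. A quick check (notation ours):

```python
from itertools import combinations, product

def flip_k(w, k):
    """Binary k-character substitution: flip w at exactly k positions."""
    return {"".join(("1" if c == "0" else "0") if i in pos else c
                    for i, c in enumerate(w))
            for pos in combinations(range(len(w)), k)}

def closure(w, k):
    """All words reachable from w by iterated k-position flips (BFS)."""
    seen, frontier = {w}, {w}
    while frontier:
        frontier = {v for u in frontier for v in flip_k(u, k)} - seen
        seen |= frontier
    return seen
```

For instance, with words of length 4 and k = 2, the closure of a word is exactly its parity class, whereas k = 1 reaches every word.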

5.2 Proofs of the statements 5.1, 5.2 and 5.3

Actually, Proposition 5.1 is a consequence of the following property:

Lemma 5.4

Assume . For every word we have .

Proof. Let and . We prove that exists with and . By construction, exists such that:

(a) if, and only if, .
It follows from that some -element subset exists. Since we have , some letter exists. Let such that:

(b) and, for each : if, and only if, .
By construction we have , moreover implies . According to (a) and (b), we obtain:

(c) ,

(d) if , and:

(e) if .
Since we have , this implies .
  
Proof of Proposition 5.1. Let : we prove that . Let and let be a sequence of words such that , and, for each : if, and only if, . Since we have (), by induction over we obtain thus, according to Lemma 5.4, .
  
In view of proving Lemma 5.2 and Proposition 5.3, we need some new lemma:

Lemma 5.5

Assume . For every , we have .

Proof. Set . It follows from that the result holds for . Assume and let , . By construction, there are distinct integers such that the following holds:

(a) if, and only if, .
Since , some -element set exists, as well as words such that:

(b) if, and only if, , and:

(c) if, and only if, .
By construction, we have and , thus . Moreover, the fact that we have is attested by the following equations:

(d) ,

(e) , and:

(f) for : if, and only if, .
  
Proof of Lemma 5.2. Assume even. According to Property (1) we have with . According to (2), is even: hence (i) follows. Conversely, assume even and let . According to (2), is also even, moreover according to (1) we obtain : this implies . According to Lemma 5.5, we have : this establishes (ii).  
Proof of Proposition 5.3. Let and . (iii) is trivial and (i) follows from Lemma 5.2(i): indeed, since is even, is the set of the words such that is even. Assume odd and let ; we will prove that . If is even, the result comes from Lemma 5.2(ii). Assume odd and let , thus that is, for some . It follows from that is odd, whence is even: according to Lemma 5.2(ii), this implies . But since is even, we have : according to Lemma 5.5, this implies (we have ). We obtain : this completes the proof.
  

5.3 The consequences for -closed codes

Given a -closed code , we say that the tuple satisfies Condition (3) if each of the three following properties holds:

(3)

We start by proving the following technical result:

Lemma 5.6

Assume and even. Given a pair of words , if then the set cannot be a code.

Proof. Let , and (hence we have ). By contradiction, we assume that is a code. We are in Condition (i) of Proposition 5.3, that is, we have . On the one hand, since is a right-complete prefix code [2, Theorem 3.3.8], it follows from that a (perhaps empty) word exists such that . On the other hand, it follows from that, for each , a unique pair of letters , exists such that , with , that is, exists with . According to Lemma 5.2(i), is even; according to Lemma 5.2(ii), this implies . Since we have , the set cannot be a code.
  
As a consequence of Lemma 5.6, we obtain the following result:

Lemma 5.7

Given a -closed code , if satisfies Condition (3) then either we have , or we have for some .

Proof. Assume that we have . Firstly, consider two words and, by contradiction, assume that is, without loss of generality, . Since is -closed, we have , whence the set , which is a subset of , is a code: this contradicts the result of Lemma 5.6. Consequently, we have , with . Secondly, once more by contradiction, assume that words , exist. As indicated above, since is -closed, is a code: since we have and , once more this contradicts the result of Lemma 5.6. As a consequence, if then necessarily we have , for some . Under such a condition, according to Proposition 5.3, for each pair of words , we have , : this implies .
  
According to Lemma 5.7, under Condition (3) no -closed code can simultaneously possess words in and words in .

Lemma 5.8

Given a -closed code , if does not satisfy Condition (3), then either we have , or we have , with .

Proof. If Condition (3) does not hold, then exactly one of the three following conditions holds:

(a) ;

(b) and ;

(c) with and odd.
Under each of the two last conditions, let . Since is -closed, according to Propositions 5.1 and 5.3(ii), we have . Since is a maximal code, it follows from Lemma 2.3(iii) that .
  
As a consequence, every -closed code is finite. In addition, we state:

Theorem 5.9

Given a complete (, )-closed code , exactly one of the following conditions holds:

(i) is a subset of ;

(ii) a unique integer exists such that .
In addition, every ()-closed code is equal to , for some .

Proof. Let be a complete -closed code. If Condition (3) does not hold, the result is expressed by Lemma 5.8. Assume that Condition (3) holds with . According to Lemma 5.7, in any case some integer exists such that . Taking for the uniform distribution, we have and thus, according to Theorem 2.1: