Optimal Quasi-Gray Codes: Does the Alphabet Matter?

A quasi-Gray code of dimension n and length ℓ over an alphabet Σ is a sequence of distinct words w_1, w_2, ..., w_ℓ from Σ^n such that any two consecutive words differ in at most c coordinates, for some fixed constant c > 0. In this paper we are interested in the read and write complexity of quasi-Gray codes in the bit-probe model, where we measure the number of symbols read and written in order to transform any word w_i into its successor w_{i+1}. We present a construction of quasi-Gray codes of dimension n and length 3^n over the ternary alphabet {0,1,2} with worst-case read complexity O(log n) and write complexity 2. This generalizes to arbitrary odd-size alphabets. For the binary alphabet, we present quasi-Gray codes of dimension n and length at least 2^n - 20n with worst-case read complexity 6 + log n and write complexity 2. Our results significantly improve on previously known constructions and for the odd-size alphabets we break the Ω(n) worst-case barrier for space-optimal (non-redundant) quasi-Gray codes with constant number of writes. We obtain our results via a novel application of algebraic tools together with the principles of catalytic computation [Buhrman et al. '14, Ben-Or and Cleve '92, Barrington '89, Coppersmith and Grossman '75]. We also establish certain limits of our technique in the binary case. Although our techniques cannot give space-optimal quasi-Gray codes with small read complexity over the binary alphabet, our results strongly indicate that such codes do exist.


1 Introduction

One of the fundamental problems in the domain of algorithm design is to list all the objects belonging to a certain combinatorial class. Researchers are interested in efficient generation of such a list in which each element can be obtained by a small amount of change from the element that precedes it. One of the classic examples is the binary Gray code introduced by Gray [22], initially used in pulse code communication. The original idea of a Gray code was to list all binary strings of length n, i.e., all the elements of {0,1}^n, such that any two successive strings differ in exactly one bit. The idea was later generalized to other combinatorial classes (e.g. see [36, 27]). Gray codes have found applications in a wide variety of areas, such as information storage and retrieval [8], processor allocation [9], computing the permanent [36], circuit testing [40], data compression [39], graphics and image processing [1], signal encoding [31], modulation schemes for flash memories [25, 21, 47] and many more. Interested readers may refer to the excellent survey by Savage [41] for a comprehensive treatment of this subject.
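As a concrete illustration of the original idea (our own sketch, not part of the paper's contribution), the following Python snippet generates the binary reflected Gray code of length 2^n and checks the defining property that consecutive words, including the wrap-around pair, differ in exactly one bit.

    def reflected_gray(n):
        """Binary reflected Gray code: a list of 2**n strings over {0,1}."""
        if n == 0:
            return [""]
        prev = reflected_gray(n - 1)
        # Prefix the (n-1)-bit code with 0, then its reversal with 1.
        return ["0" + w for w in prev] + ["1" + w for w in reversed(prev)]

    code = reflected_gray(4)
    assert len(set(code)) == 2 ** 4
    for i, w in enumerate(code):
        nxt = code[(i + 1) % len(code)]
        # Exactly one coordinate changes between cyclically consecutive words.
        assert sum(a != b for a, b in zip(w, nxt)) == 1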

In this paper we study the construction of Gray codes over Z_m^n for any m ≥ 2. Originally, Gray codes were meant to list all the elements of their domain, but later studies (e.g. [19, 37, 5, 6]) focused on the generalization where we list ℓ distinct elements from the domain, each two consecutive elements differing in one position. We refer to such codes as Gray codes of length ℓ [19]. When the code lists all the elements of its domain it is referred to as space-optimal. It is often required that the last and the first strings appearing in the list also differ in one position. Such codes are called cyclic Gray codes. Throughout this paper we consider only cyclic Gray codes and we refer to them simply as Gray codes. Researchers also study codes where two successive strings differ in at most c positions, for some fixed constant c, instead of differing in exactly one position. Such codes are called quasi-Gray codes [5] or c-Gray codes. (Readers may note that the definition of quasi-Gray code given in [19] was different: the code referred to as a quasi-Gray code by Fredman [19] is, in our notation, a Gray code whose length ℓ may be smaller than the size of the domain.)

We study the problem of constructing quasi-Gray codes over Z_m^n in the cell probe model [46], where each cell stores an element from Z_m. The efficiency of a construction is measured using three parameters. First, we want the length of a quasi-Gray code to be as large as possible. Ideally, we want space-optimal codes. Second, we want to minimize the number of coordinates of the input string the algorithm reads in order to generate the next (or previous) string in the code. Finally, we also want the number of cells written in order to generate the successor (or predecessor) string to be as small as possible. Since our focus is on quasi-Gray codes, the number of writes will always be bounded by a universal constant. We are interested in the worst-case behavior and we use the decision assignment trees (DAT) of Fredman [19] to measure these complexities.

The second requirement above is motivated by the study of loopless generation of combinatorial objects. In loopless generation we are required to generate the next string of the code in constant time. Different loopless algorithms to generate Gray codes are known in the literature [16, 4, 27]. However, those algorithms use extra memory cells in addition to the space required for the input string, which makes it impossible to obtain a space-optimal code from them. More specifically, our goal is to design a decision assignment tree on n variables to generate a code over the domain Z_m^n. If we allow extra memory cells (as in the case of loopless algorithms) then the corresponding DAT will be on n + s variables, where s is the number of extra memory cells used.

Although there are known quasi-Gray codes with logarithmic read complexity and constant write complexity [19, 37, 5, 6], none of these constructions is space-optimal. The best previous result misses at least 2^{n-t} strings from the domain while having read complexity t + O(log n) [6]. Despite extensive research under many names, e.g., construction of Gray codes [19, 35, 15, 23], the dynamic language membership problem [18], efficient representation of integers [37, 6], so far we do not have any quasi-Gray code of length 2^n - cn, for some constant c, with worst-case read complexity o(n) and constant write complexity. The best known worst-case read complexity for a space-optimal Gray code is due to [20]. Recently, Raskin [38] showed that any space-optimal quasi-Gray code over the domain {0,1}^n must have read complexity Ω(n). This lower bound holds even if we allow non-constant write complexity. It is worth noting that this result can be extended to the domain Z_m^n when m is even.

In this paper we show that such a lower bound does not hold for quasi-Gray codes over Z_m^n when m is odd. In particular, we construct space-optimal quasi-Gray codes over Z_m^n, for odd m, with read complexity O(log n) and write complexity 2. As a consequence we get an exponential separation between the read complexity of space-optimal quasi-Gray codes over {0,1}^n and that over {0,1,2}^n.

Theorem 1.1.

Let m ∈ N be odd and let n ∈ N be sufficiently large. Then, there is a space-optimal quasi-Gray code C over Z_m^n for which the two functions next_C and prev_C can be implemented by inspecting O(log n) cells while writing only 2 cells.

In the statement of the above theorem, next_C(w) denotes the element appearing after w in the cyclic sequence of the code C, and analogously, prev_C(w) denotes the element preceding w. Using the argument as in [19, 35] it is easy to see a lower bound of Ω(log_m n) on the read complexity when the domain is Z_m^n. Hence our result is optimal up to a small constant factor.

Raskin shows an Ω(n) lower bound on the read complexity of space-optimal binary quasi-Gray codes. The existence of binary quasi-Gray codes of length 2^n - cn, for some constant c, with sub-linear read complexity was open. Using a different technique than that used in the proof of Theorem 1.1, we get a quasi-Gray code over the binary alphabet which enumerates all but at most 20n strings. This result generalizes to the domain Z_q^n for any prime power q.

Theorem 1.2.

Let n ∈ N be any natural number. Then, there is a quasi-Gray code C of length at least 2^n - 20n over Z_2^n such that the two functions next_C and prev_C can be implemented by inspecting at most 6 + log n cells while writing only 2 cells.

We remark that the points of Z_2^n that are missing from the code in the above theorem are all of a restricted form: they are zero outside a fixed set of O(log n) coordinates.

If we are allowed to read and write a constant fraction of the bits, then Theorem 1.2 can be adapted to give a quasi-Gray code of length 2^n - O(1) (see Section 5). In this way we get a trade-off between the length of the quasi-Gray code and the number of bits read in the worst case. All of our constructions can be made uniform (see the remark after Corollary 5.6).

Using the Chinese Remainder Theorem (cf. [13]), we also develop a technique that allows us to compose Gray codes over various domains. Hence, from quasi-Gray codes over domains Z_{m_1}^n, ..., Z_{m_k}^n, where the m_i's are pairwise co-prime, we can construct quasi-Gray codes over Z_m^n, where m = m_1 m_2 ... m_k. Using this technique on our main results, we get a quasi-Gray code over Z_m^n, for any m, that misses only O(k n (m')^n) strings, where m = 2^k m' for an odd m', while achieving read complexity similar to that stated in Theorem 1.1. It is worth mentioning that if we get a space-optimal quasi-Gray code over the binary alphabet with non-trivial savings in read complexity, then we will have a space-optimal quasi-Gray code over an alphabet of any size with similar savings.

The technique by which we construct our quasi-Gray codes relies heavily on simple algebra, which is a substantial departure from previous, mostly combinatorial, constructions. We view Gray codes as permutations on Z_m^n and we decompose them into simpler permutations on Z_m^n, each computable with constant read complexity and write complexity 1. Then we apply a composition theorem, different from the one mentioned above, to obtain space-optimal quasi-Gray codes on Z_m^n, for odd m, with read complexity O(log n) and write complexity 2. The main issue is the decomposition of permutations into few simple permutations. This is achieved by techniques of catalytic computation [7] going back to the work of Coppersmith and Grossman [12, 2, 3].

It follows from the work of Coppersmith and Grossman [12] that our technique is incapable of designing a space-optimal quasi-Gray code on Z_2^n, as any such code represents an odd permutation. The tools we use give inherently only even permutations. However, we can construct quasi-Gray codes from cycles of length 2^n - 1 on Z_2^n, as those are even permutations. Indeed, that is what we do for our Theorem 1.2. We note that any efficiently computable odd permutation on Z_2^n, with say read complexity r and constant write complexity, could be used together with our technique to construct a space-optimal quasi-Gray code on Z_2^n with read complexity at most r + O(log n) and constant write complexity. This would represent major progress on space-optimal Gray codes. (We would compose the odd permutation with some even permutation to obtain a full cycle on Z_2^n. The size of the decomposition of the even permutation into simpler permutations would govern the read complexity of the resulting quasi-Gray code.)

Interestingly, Raskin’s result relies on showing that a decision assignment tree of sub-linear read complexity must compute an even permutation.

1.1 Related works

The construction of Gray codes is central to the design of algorithms for many combinatorial problems [41]. Frank Gray [22] first came up with a construction of a Gray code over binary strings of length n in which one needs to read all n bits in the worst case to generate the successor or predecessor string. The type of code described in [22] is known as the binary reflected Gray code. Later, Bose et al. [5] provided a different Gray code construction, namely the recursive partition Gray code, which attains O(log n) average-case read complexity while having the same worst-case read requirement. The read complexity we refer to here is in the bit-probe model. It is easy to observe that any space-optimal binary Gray code must read Ω(log n) bits in the worst case [19, 35, 20]. Recently, this lower bound was improved to Ω(n) in [38]. An upper bound noticeably below the trivial n was not known until very recently [20]. This is also the best bound known so far.

Fredman [19] extended the definition of Gray codes by considering codes that may not enumerate all the strings of the domain (though presented in a slightly different way in [19]) and also introduced the notion of decision assignment trees (DAT) to study the complexity of any code in the bit-probe model. He provided a construction that generates a Gray code of length c^n, for some constant c < 2, while reducing the worst-case number of bits read to O(log n). Using the idea of Lucal's modified reflected binary code [30], Munro and Rahman [37] obtained a code with worst-case read complexity only O(log n); however, in their code two successive strings may differ in a constant number of coordinates instead of just one, and we refer to such codes as quasi-Gray codes following the nomenclature used in [5]. Brodal et al. [6] extended the results of [37] by constructing quasi-Gray codes of length 2^n - 2^{n-t}, for arbitrary t, with t + O(log n) worst-case read complexity, in which any two successive strings differ in O(1) bits.

In contrast to Gray codes over binary alphabets, Gray codes over non-binary alphabets have received much less attention. The construction of the binary reflected Gray code was generalized to the alphabet Z_m, for any m, in [17, 11, 26, 39, 27, 24]. However, each of those constructions reads n coordinates in the worst case to generate the next element. As mentioned before, we measure the read complexity in the well-studied cell probe model [46], where we assume that each cell stores an element of Z_m. The argument of Fredman in [19] implies a lower bound of Ω(log_m n) on the read complexity of quasi-Gray codes on Z_m^n. To the best of our knowledge, for non-binary alphabets, there is nothing known similar to the results of Munro and Rahman or Brodal et al. [37, 6]. We summarize the previous results along with ours in Table 1.

Reference | Value of m | Length of counter | Worst-case cell read | Worst-case cell write
[22] | 2 | 2^n | n | 1
[19] | 2 | – | O(log n) | O(1)
[18] | 2 | – | – | –
[37] | 2 | – | O(log n) | O(1)
[5] | 2 | 2^n | n | 1
[6] | 2 | 2^n - 2^{n-t} | t + O(log n) | O(1)
[6] | 2 | – | – | –
[20] | 2 | 2^n | – | –
Theorem 1.2 | 2 | at least 2^n - 20n | 6 + log n | 2
[11] | any m | m^n | n | 1
Theorem 1.1 | any odd m | m^n | O(log n) | 2
Table 1: Taxonomy of constructions of Gray/quasi-Gray codes over Z_m^n

Additionally, many variants of Gray codes have been studied in the literature. A particular one that has garnered a lot of attention in the past 30 years is the well-known middle levels conjecture; see [32, 33, 34, 23] and the references therein. It has been established only recently [32]. The conjecture says that there exists a Hamiltonian cycle in the graph induced by the vertices on levels n and n+1 of the hypercube graph of dimension 2n+1. In other words, there exists a Gray code on the middle levels. Mütze et al. [33, 34] studied the question of efficiently enumerating such a Gray code in the word RAM model. They [34] gave an algorithm to enumerate a Gray code on the middle levels that requires O(n) space and on average takes O(1) time to generate the next vertex. In this paper we consider the bit-probe model, and Gray codes over the complete hypercube. It would be interesting to know whether our technique can be applied to middle levels Gray codes.

1.2 Our technique

Our construction of Gray codes relies heavily on the notion of k-functions defined by Coppersmith and Grossman [12]. A k-function is a permutation τ on Z_m^n specified by a function f: Z_m^k -> Z_m and a (k+1)-tuple of distinct indices i_1, i_2, ..., i_k, j ∈ [n]; it maps (x_1, ..., x_n) to (x_1, ..., x_{j-1}, x_j + f(x_{i_1}, ..., x_{i_k}), x_{j+1}, ..., x_n), where the addition is inside Z_m. Each k-function can be computed by a decision assignment tree that, given a vector x ∈ Z_m^n, inspects k coordinates of x and then writes into a single coordinate of x.

A counter (quasi-Gray code) on Z_m^n can be thought of as a permutation on Z_m^n. Our goal is to construct some permutation α on Z_m^n that can be written as a composition of k-functions α_1, α_2, ..., α_ℓ, i.e., α = α_ℓ ∘ α_{ℓ-1} ∘ ... ∘ α_1.

Given such a decomposition, we can build another counter C on Z_m^{k'} x Z_m^n, where m^{k'} ≥ ℓ, for which the function next_C operates as follows. The first k' coordinates serve as an instruction pointer that determines which α_i should be executed on the remaining n coordinates. Hence, based on the current value i of the pointer coordinates, we perform α_i on the remaining coordinates and then we update the value of the pointer from i to i+1. (For i > ℓ we can execute the identity permutation, which does nothing.)

We can use known Gray codes on Z_m^{k'} to represent the instruction pointer, so that when incrementing the pointer we only need to write into one of its coordinates. This gives a counter which can be computed by a decision assignment tree that reads the k' pointer coordinates plus the coordinates inspected by the relevant α_i, and writes into at most 2 coordinates. (A similar composition technique is implicit in Brodal et al. [6].) If α is a cycle of length L, then C is of length m^{k'} · L. In particular, if α is space-optimal, that is, a cycle through all of Z_m^n, then so is C.
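The following Python sketch illustrates this composition on a toy example of our own (not the paper's exact construction): a full cycle α on Z_3^2 is split into α_1 (add 1 to the first coordinate) and α_2 (add 1 to the second coordinate if the first is 0), each reading at most one data cell and writing one; a single ternary cell serves as the instruction pointer, so advancing it trivially writes one cell. The resulting counter enumerates all 3 · 3^2 = 27 states of the combined domain.

    # alpha_1 and alpha_2 each read at most one cell and write one cell of x.
    def alpha_1(x):          # always add 1 to the first coordinate (mod 3)
        return ((x[0] + 1) % 3, x[1])

    def alpha_2(x):          # add 1 to the second coordinate iff the first is 0
        return (x[0], (x[1] + 1) % 3) if x[0] == 0 else x

    def step(state):
        p, x = state         # p: instruction pointer in Z_3, x: data in Z_3^2
        if p == 0:
            x = alpha_1(x)
        elif p == 1:
            x = alpha_2(x)
        # p == 2 applies the identity; the pointer always advances by one write.
        return ((p + 1) % 3, x)

    state, seen = (0, (0, 0)), set()
    while state not in seen:
        seen.add(state)
        state = step(state)
    assert len(seen) == 3 * 3 ** 2   # the composed counter is space-optimal

Per step the sketch reads the pointer cell plus at most one data cell and writes at most two cells, matching the read/write pattern described above.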

Hence, we reduce the problem of constructing quasi-Gray codes to the problem of designing large cycles in Z_m^n that can be decomposed into few k-functions. Coppersmith and Grossman [12] studied precisely the question of which permutations on Z_2^n can be written as a composition of 2-functions. They show that a permutation on Z_2^n can be written as a composition of 2-functions if and only if the permutation is even. Since Z_2^n is of even size, a cycle of length 2^n on Z_2^n is an odd permutation and thus it cannot be represented as a composition of 2-functions. However, their result also implies that a cycle of length 2^n - 1 on Z_2^n can be decomposed into 2-functions.

We want to use the counter composition technique described above in connection with a cycle of length 2^n - 1. To maximize the length of the resulting counter, we need to minimize ℓ, the number of 2-functions in the decomposition, and hence the number of pointer coordinates. By a simple counting argument, most cycles of length 2^n - 1 on Z_2^n require ℓ to be exponentially large in n. This is too large for our purposes. Luckily, there are cycles of length 2^n - 1 on Z_2^n that can be decomposed into polynomially many 2-functions, and we obtain such cycles from linear transformations.

There are linear transformations which define a cycle on Z_2^n of length 2^n - 1. For example, the matrix corresponding to multiplication by a fixed generator of the multiplicative group of the Galois field GF(2^n) is such a matrix. Such matrices are full rank and they can be decomposed into O(n^2) elementary matrices, each corresponding to a 2-function. Moreover, there are matrices derived from primitive polynomials that can be decomposed into only O(n) elementary matrices. (Primitive polynomials were previously also used for a similar problem, namely to construct shift-register sequences; see e.g. [27].) We use them to get a counter on Z_2^n of length at least 2^n - 20n whose successor and predecessor functions are computable by decision assignment trees of read complexity 6 + log n and write complexity 2. Such a counter represents a 2-Gray code of the prescribed length. For any prime p, the same construction yields quasi-Gray codes over Z_p^n of length at least p^n - O(pn), with decision assignment trees of read complexity O(log n) and write complexity 2.
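A minimal sketch of the linear-algebra fact (our illustration; the primitive polynomial x^4 + x + 1 is chosen for concreteness): multiplication by x in F_2[x]/(x^4 + x + 1) is a full-rank linear map on Z_2^4, and iterating it from any nonzero vector visits all 2^4 - 1 nonzero vectors before returning. In the paper's construction such a matrix is further decomposed into elementary row operations, each of which reads and writes only a constant number of cells.

    MOD = 0b10011          # x^4 + x + 1, a primitive polynomial over F_2

    def times_x(v):
        """Multiply the degree-<4 polynomial encoded by the bits of v by x."""
        v <<= 1
        if v & 0b10000:    # reduce modulo x^4 + x + 1 when the degree reaches 4
            v ^= MOD
        return v

    v, count = 1, 0
    while True:
        v = times_x(v)
        count += 1
        if v == 1:
            break
    assert count == 2 ** 4 - 1   # the linear map cycles through all nonzero vectors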

The results of Coppersmith and Grossman [12] can be generalized to Z_m, as stated in Richard Cleve's thesis [10]. (Unfortunately, there is no written record of the proof.) For odd m, if a permutation on Z_m^n is even then it can be decomposed into 2-functions. Since m^n is odd, a cycle of length m^n on Z_m^n is an even permutation and so it can be decomposed into 2-functions. If the number ℓ of those functions is small, so that the instruction pointer is small, we get the sought-after counter with small read complexity. However, for most cycles of length m^n on Z_m^n, ℓ is exponential in n.

We show, though, that there is a cycle of length m^n on Z_m^n that can be decomposed into polynomially many 2-functions. This in turn gives space-optimal quasi-Gray codes on Z_m^n with decision assignment trees of read complexity O(log n) and write complexity 2.

We obtain the cycle and its decomposition in two steps. First, for i ∈ [n], we consider the permutation α_i on Z_m^n which maps each element of the form (0, ..., 0, x_i, x_{i+1}, ..., x_n) onto (0, ..., 0, x_i + 1, x_{i+1}, ..., x_n), for x_i ∈ Z_m and (x_{i+1}, ..., x_n) ∈ Z_m^{n-i}, while all other elements are mapped to themselves. Hence, α_i is a product of m^{n-i} disjoint cycles of length m. We show that the composition of α_n, α_{n-1}, ..., α_1, applied in this order, is a cycle of length m^n. In the next step we decompose each α_i into 2-functions.
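A quick sanity check of the first step as reconstructed here (the paper's exact indexing may differ): over Z_3^3, let α_i add 1 to coordinate i whenever all preceding coordinates are 0. Applying α_3, then α_2, then α_1 realizes a single cycle through all 3^3 elements, while each α_i alone is a product of disjoint 3-cycles.

    m, n = 3, 3

    def alpha(i, x):
        """Add 1 (mod m) to coordinate i iff all preceding coordinates are 0."""
        if all(c == 0 for c in x[:i]):
            x = list(x)
            x[i] = (x[i] + 1) % m
        return tuple(x)

    def full_step(x):
        # Apply alpha_n, ..., alpha_1; each only reads earlier coordinates,
        # so all the "carry" conditions are evaluated on the original word.
        for i in reversed(range(n)):
            x = alpha(i, x)
        return x

    x, seen = (0,) * n, set()
    while x not in seen:
        seen.add(x)
        x = full_step(x)
    assert len(seen) == m ** n   # the composition is a cycle of length m^n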

For i ≤ n - 2, we can decompose α_i using the technique of Ben-Or and Cleve [3] and its refinement in the form of catalytic computation of Buhrman et al. [7]. We can think of x_1, ..., x_n as the content of n memory registers, where x_1, ..., x_{i-1} are the input registers, x_i is the output register, and x_{i+1}, ..., x_n are the working registers. The catalytic computation technique gives a program consisting of polynomially many instructions, each being equivalent to a 2-function, which performs the desired adjustment of x_i based on the values of x_1, ..., x_{i-1} without changing the ultimate values of the other registers. (We need to increment x_i iff x_1, ..., x_{i-1} are all zero.) This program directly gives the desired decomposition of α_i, for i ≤ n - 2. (Our proof in Section 6 uses the language of permutations.)
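To convey the flavor of such register programs, here is a minimal sketch of our own (not the program used in the paper) of the classic Ben-Or–Cleve gadget: using only instructions of the form "add or subtract a product of two registers" and "add or subtract a constant", it adds the product f·g to the target register while returning both working registers to their initial values, whatever those values were. In the actual construction f and g would themselves be delivered by recursively defined sub-programs acting on the same registers.

    import random

    def add_product(regs, f, g, m):
        """Perform r1 += f*g (mod m) while restoring r2 and r3 to their old values."""
        r1, r2, r3 = regs
        r2 = (r2 + f) % m
        r1 = (r1 - r2 * r3) % m
        r3 = (r3 + g) % m
        r1 = (r1 + r2 * r3) % m
        r2 = (r2 - f) % m
        r1 = (r1 - r2 * r3) % m
        r3 = (r3 - g) % m
        r1 = (r1 + r2 * r3) % m
        return (r1, r2, r3)

    m = 5
    for _ in range(100):
        a, b, c, f, g = (random.randrange(m) for _ in range(5))
        r1, r2, r3 = add_product((a, b, c), f, g, m)
        assert (r1, r2, r3) == ((a + f * g) % m, b, c)  # product added, workspace restored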

The technique of catalytic computation fails for α_{n-1} and α_n, as the program needs at least two working registers to operate. Hence, for α_{n-1} and α_n we have to develop an entirely different technique. This is not trivial and quite technical, but it is nevertheless possible thanks to the specific structure of α_{n-1} and α_n.

Organization of the paper

In Section 2 we define the notions of counters, Gray codes and our computational model, namely decision assignment trees, and also recall certain known results regarding the construction of Gray codes. Then in Section 3 we describe how to combine counters over smaller alphabets to get another counter over a larger alphabet, by introducing a Chinese Remainder Theorem for counters. Next, in Section 4, we provide some basic facts about the permutation group and the underlying structure behind all of our constructions of quasi-Gray codes. We devote Section 5 to the construction of a quasi-Gray code over the binary alphabet that misses only a few words, using full-rank linear transformations. In Section 6 we construct a space-optimal quasi-Gray code over any odd-size alphabet. Finally, in Section 7 we rule out the existence of a certain kind of space-optimal binary counter.

2 Preliminaries

In the rest of the paper we only present constructions of the successor function next_C for our codes. Since all the operations in those constructions are readily invertible, the same arguments also give the predecessor function prev_C.

Notations:

We use standard notions of groups and fields, and mostly we will use only elementary facts about them (see [14, 29] for background). By Z_m we mean the set of integers modulo m, i.e., {0, 1, ..., m-1}. Throughout this paper, whenever we use addition and multiplication between two elements of Z_m, we mean the operations within Z_m, that is, modulo m. For any n ∈ N, we let [n] denote the set {1, 2, ..., n}. Unless stated otherwise explicitly, all logarithms in this paper are base 2.

Now we define the notion of counters used in this paper.

Definition 1 (Counter).

A counter of length ℓ over a domain D is any cyclic sequence C = (c_1, c_2, ..., c_ℓ) such that c_1, ..., c_ℓ are distinct elements of D. With the counter C we associate two functions, next_C and prev_C, that give the successor and predecessor element of each c_i in C; that is, next_C(c_i) = c_{i+1} and prev_C(c_{i+1}) = c_i for i ∈ [ℓ], where c_{ℓ+1} = c_1. If ℓ = |D|, we call the counter a space-optimal counter.

Often elements in the underlying domain have some “structure” to them. In such cases, it is desirable to have a counter such that consecutive elements in the sequence differ by a “small” change in the “structure”. We make this concrete in the following definition.

Definition 2 (Gray Code).

Let Σ_1, Σ_2, ..., Σ_n be finite sets. A Gray code of length ℓ over the domain Σ_1 x Σ_2 x ... x Σ_n is a counter (c_1, c_2, ..., c_ℓ) of length ℓ over Σ_1 x ... x Σ_n such that any two consecutive strings c_i and c_{i+1}, where c_{ℓ+1} = c_1, differ in exactly one coordinate when viewed as n-tuples. More generally, if, for some fixed constant c > 0, any two consecutive strings c_i and c_{i+1} differ in at most c coordinates, such a counter is called a c-Gray code.

By a quasi-Gray code we mean a c-Gray code for some unspecified fixed c > 0. In the literature, sometimes no restriction is placed on the relationship between c_ℓ and c_1, and such a sequence is still referred to as a (quasi-)Gray code. In their terms, our codes would be cyclic (quasi-)Gray codes. If ℓ = |Σ_1 x ... x Σ_n|, we call the codes space-optimal (quasi-)Gray codes.

Decision Assignment Tree:

The computational model we consider in this paper is called Decision Assignment Tree (DAT). The definition we provide below is a generalization of that given in [19]. It is intended to capture random access machines with small word size.

Let us fix an underlying domain Z_m^n whose elements we wish to enumerate. In the following, we will denote an element of Z_m^n by (x_1, x_2, ..., x_n). A decision assignment tree is an m-ary tree such that each internal node is labeled by one of the variables x_1, x_2, ..., x_n. Furthermore, each outgoing edge of an internal node is labeled with a distinct element of Z_m. Each leaf node of the tree is labeled by a set of assignment instructions that set new (fixed) values to chosen variables. The variables which are not mentioned in the assignment instructions remain unchanged.

The execution of a decision assignment tree on a particular input vector (x_1, ..., x_n) ∈ Z_m^n starts from the root of the tree and continues in the following way: at a non-leaf node labeled with a variable x_i, the execution queries x_i and, depending on the value of x_i, control passes to the node following the outgoing edge labeled with that value. Upon reaching a leaf, the corresponding set of assignment statements is used to modify the vector (x_1, ..., x_n), and the execution terminates. The modified vector is the output of the execution.

Thus, each decision assignment tree computes a mapping from Z_m^n into Z_m^n. We are interested in decision assignment trees computing the mapping next_C for some counter C. When C is space-optimal we can assume, without loss of generality, that each leaf assigns values only to the variables that are read on the path from the root to that leaf. (Otherwise, the decision assignment tree does not compute a bijection.) We define the read complexity of a decision assignment tree T, denoted by READ(T), as the maximum number of non-leaf nodes along any path from the root to a leaf. Observe that any mapping from Z_m^n into Z_m^n can be implemented by a decision assignment tree with read complexity n. We also define the write complexity of a decision assignment tree T, denoted by WRITE(T), as the maximum number of assignment instructions in any leaf.
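As an illustration of the model (our own encoding, not notation from the paper), a decision assignment tree can be represented with internal nodes that name a queried variable and leaves that list assignments. The sketch below hard-codes a tree computing the successor of the 2-bit reflected Gray code 00, 01, 11, 10; it has read complexity 2 and write complexity 1.

    # An internal node is ("query", variable index, {symbol: child}); a leaf is
    # ("assign", [(variable index, new value), ...]).
    GRAY2_DAT = ("query", 0, {
        0: ("query", 1, {0: ("assign", [(1, 1)]),     # 00 -> 01
                         1: ("assign", [(0, 1)])}),   # 01 -> 11
        1: ("query", 1, {1: ("assign", [(1, 0)]),     # 11 -> 10
                         0: ("assign", [(0, 0)])}),   # 10 -> 00
    })

    def execute(dat, word):
        word, node, reads = list(word), dat, 0
        while node[0] == "query":
            reads += 1
            node = node[2][word[node[1]]]
        for idx, val in node[1]:
            word[idx] = val
        return tuple(word), reads, len(node[1])

    w = (0, 0)
    for _ in range(4):
        w, reads, writes = execute(GRAY2_DAT, w)
        assert reads == 2 and writes == 1
    assert w == (0, 0)   # the DAT cycles through all four 2-bit words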

Instead of the domain Z_m^n, we will sometimes also use domains that are a cartesian product D_1 x D_2 x ... x D_n of different domains. The definition of a decision assignment tree naturally extends to this case of different variables having different domains.

For any counter C over a domain D, we say that C is computed by a decision assignment tree T if and only if, for every element c of C, next_C(c) = T(c), where T(c) denotes the output string obtained after an execution of T on c. Note that any two consecutive strings in the cyclic sequence of C differ in at most WRITE(T) coordinates.

For a small constant c > 0, some domain D = D_1 x D_2 x ... x D_n, and all large enough n, we will be interested in constructing cyclic counters on D that are computed by decision assignment trees of write complexity at most c and read complexity o(n). By definition, such cyclic counters will necessarily be c-Gray codes.

2.1 Construction of Gray codes

For our constructions of quasi-Gray codes on a domain Z_m^n with decision assignment trees of small read and write complexity, we will need ordinary Gray codes on a smaller domain. Several constructions of space-optimal binary Gray codes are known, the oldest being the binary reflected Gray code [22]. This can be generalized to space-optimal (cyclic) Gray codes over non-binary alphabets (see e.g. [11, 27]).

Theorem 2.1 ([11, 27]).

For any m, n ∈ N, there is a space-optimal (cyclic) Gray code over Z_m^n.
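One standard way to obtain such a code (a sketch of a folklore construction, not necessarily the one used in [11, 27]) maps the j-th word of the ordinary base-m counter through digitwise differences; consecutive outputs then differ in exactly one coordinate, by +1 mod m, and the sequence is cyclic.

    def modular_gray(m, n):
        """Cyclic m-ary Gray code: word j has digits (a_i - a_{i+1}) mod m."""
        words = []
        for v in range(m ** n):
            digits = [(v // m ** i) % m for i in range(n)]   # base-m digits of v
            digits.append(0)
            words.append(tuple((digits[i] - digits[i + 1]) % m for i in range(n)))
        return words

    m, n = 3, 2
    code = modular_gray(m, n)
    assert len(set(code)) == m ** n
    for j, w in enumerate(code):
        nxt = code[(j + 1) % len(code)]
        diffs = [i for i in range(n) if w[i] != nxt[i]]
        assert len(diffs) == 1 and (w[diffs[0]] + 1) % m == nxt[diffs[0]]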

3 Chinese Remainder Theorem for Counters

In this paper we consider quasi-Gray codes over Z_m^n for arbitrary m ≥ 2. Below we describe how to compose decision assignment trees over different domains to get a decision assignment tree for a larger, mixed domain.

Theorem 3.1 (Chinese Remainder Theorem for Counters).

Let k ≥ 2 and n_1, n_2, ..., n_k be positive integers, and let D_1, ..., D_k be some finite sets of size at least two. Let ℓ_1 ≥ k - 1 be an integer, and let ℓ_1, ℓ_2, ..., ℓ_k be pairwise co-prime integers. For i ∈ [k], let C_i be a counter of length ℓ_i over D_i^{n_i} computed by a decision assignment tree T_i over n_i variables. Then, there exists a decision assignment tree T over n_1 + n_2 + ... + n_k variables that implements a counter of length ℓ_1 · ℓ_2 ... ℓ_k over D_1^{n_1} x D_2^{n_2} x ... x D_k^{n_k}. Furthermore, READ(T) ≤ n_1 + max_{2 ≤ j ≤ k} READ(T_j), and WRITE(T) ≤ WRITE(T_1) + max_{2 ≤ j ≤ k} WRITE(T_j).

Proof.

For any i ∈ [k], let the counter C_i = (c^i_1, c^i_2, ..., c^i_{ℓ_i}). Let x_1, ..., x_k be variables taking values in D_1^{n_1}, ..., D_k^{n_k}, respectively, and fix distinct elements v_2, v_3, ..., v_k of C_1. The following procedure, applied repeatedly, defines the counter C:

If x_1 = v_j for some j ∈ {2, ..., k}: set x_j to next_{C_j}(x_j) and x_1 to next_{C_1}(x_1);
else: set x_1 to next_{C_1}(x_1).

It is easily seen that the above procedure defines a valid cyclic sequence when starting at (x_1, ..., x_k) for any (x_1, ..., x_k) ∈ C_1 x ... x C_k. That is, every element has a unique predecessor and a unique successor, and the sequence is cyclic. It can easily be implemented by a decision assignment tree, say T. First it reads the value of x_1. Since x_1 consists of n_1 cells, it queries n_1 components. Then, depending on the value of x_1, it reads and updates another component, say x_j. This can be accomplished using the decision assignment tree T_j. We also update the value of x_1, and to that end we use the appropriate assignments from the decision assignment tree T_1. Observe that irrespective of how efficient T_1 is, we read x_1 completely to determine which of the remaining counters to update. Hence, READ(T) ≤ n_1 + max_{2 ≤ j ≤ k} READ(T_j), and WRITE(T) ≤ WRITE(T_1) + max_{2 ≤ j ≤ k} WRITE(T_j).

Now it only remains to show that the counter C described above is indeed of length ℓ_1 ℓ_2 ... ℓ_k. Thus, it suffices to establish that, starting with the string (c^1_1, c^2_1, ..., c^k_1), we can generate the string (c^1_{i_1}, c^2_{i_2}, ..., c^k_{i_k}) for any choice of indices i_1, ..., i_k. Let us assume i_1 = 1. At the end of the proof we will remove this assumption. Suppose the string (c^1_1, c^2_{i_2}, ..., c^k_{i_k}) is reachable from (c^1_1, c^2_1, ..., c^k_1) in t steps. As our procedure always increments x_1, t must be divisible by ℓ_1. Let t = s · ℓ_1. Furthermore, the procedure increments a variable x_j, for j ∈ {2, ..., k}, exactly once in every ℓ_1 consecutive steps. Thus, (c^1_1, c^2_{i_2}, ..., c^k_{i_k}) is reachable if and only if s satisfies the following equations:

s ≡ i_j - 1 (mod ℓ_j), for every j ∈ {2, ..., k}.

Since ℓ_2, ..., ℓ_k are pairwise co-prime, the Chinese Remainder Theorem (for a reference, see [13]) guarantees the existence of a unique integral solution s such that 0 ≤ s < ℓ_2 ℓ_3 ... ℓ_k. Hence, (c^1_1, c^2_{i_2}, ..., c^k_{i_k}) is reachable from (c^1_1, c^2_1, ..., c^k_1) in at most ℓ_1 ℓ_2 ... ℓ_k steps.

Now we remove the assumption i_1 = 1. Consider the string (c^1_1, c^2_{j_2}, ..., c^k_{j_k}), where j_r = i_r - 1 if the designated value v_r appears among the first i_1 - 1 elements of C_1, and j_r = i_r otherwise (indices taken cyclically within C_r). From the arguments in the previous paragraph, we know that this tuple is reachable. We now observe that the next i_1 - 1 steps bring x_1 to c^1_{i_1} and each x_r to c^r_{i_r}, thus reaching the desired string. ∎

Remark.

We remark that if the counters C_i in Theorem 3.1 are space-optimal, then so is C.

In the above proof, we constructed a special type of counter where we always read the first coordinate, incremented it, and further, depending on its value, we may update the value of one other coordinate. From now on we refer to such counters as hierarchical counters. In Section 7 we will show that for such counters the co-primality condition is, in general, necessary. One can further note that the above theorem is similar to the well-known Chinese Remainder Theorem and has a similar type of application: constructing space-optimal quasi-Gray codes over Z_m^n for arbitrary m.
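A small simulation of a hierarchical counter as reconstructed above (our illustration; the designated trigger values are an assumption of this sketch): the first counter has length 2 and is advanced at every step, and each remaining counter is advanced exactly when the first one sits at its designated value. With pairwise co-prime lengths 2, 3 and 5, the combined counter runs through all 2 · 3 · 5 = 30 states.

    lengths = (2, 3, 5)               # pairwise co-prime counter lengths
    triggers = {1: 0, 2: 1}           # counter j advances when x_1 equals triggers[j]

    def step(x):
        x = list(x)
        for j, v in triggers.items():
            if x[0] == v:             # depending on the value of the first counter...
                x[j] = (x[j] + 1) % lengths[j]   # ...advance one other counter
        x[0] = (x[0] + 1) % lengths[0]           # the first counter always advances
        return tuple(x)

    x, seen = (0, 0, 0), set()
    while x not in seen:
        seen.add(x)
        x = step(x)
    assert len(seen) == 2 * 3 * 5     # the composed counter has length l1*l2*l3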

Lemma 3.2.

Let m, n ∈ N be such that m = 2^a · m', where m' is odd and a ≥ 1. Given decision assignment trees T_1 and T_2 computing space-optimal (quasi-)Gray codes over Z_{2^a}^n and Z_{m'}^n, respectively, there exists a decision assignment tree T implementing a space-optimal quasi-Gray code over Z_m^n whose read and write complexities are bounded by those guaranteed by Theorem 3.1 for T_1 and T_2.

Proof.

We will view Z_m^n as Z_{2^a}^n x Z_{m'}^n and simulate a decision assignment tree operating on Z_{2^a}^n x Z_{m'}^n on Z_m^n. From the Chinese Remainder Theorem (see [13]), we know that there exists a bijection (in fact, an isomorphism) φ: Z_m -> Z_{2^a} x Z_{m'}. We denote the tuple φ(x) by (x^{(1)}, x^{(2)}). From Theorem 3.1 we know that there exists a decision assignment tree T' over Z_{2^a}^n x Z_{m'}^n computing a space-optimal quasi-Gray code, with read and write complexity bounded as stated there.

We can simulate the actions of T' on an input from Z_m^n to obtain the desired decision assignment tree T. Indeed, whenever T' queries the i-th coordinate of the Z_{2^a}^n part, T queries the i-th coordinate of its input and makes its decision based on the first component of the value of that coordinate. Similarly, whenever T' queries the i-th coordinate of the Z_{m'}^n part, T queries the i-th coordinate of its input and makes its decision based on the second component of the value of that coordinate. Assignments by T' are handled in a similar fashion by updating only the appropriate part of the corresponding coordinate. (Notice, queries made by T might reveal more information than the queries made by T'.) ∎
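The bijection used in this proof is the standard Chinese Remainder map. As a concrete check (our illustration for m = 12 = 4 · 3), x -> (x mod 4, x mod 3) identifies Z_12 with Z_4 x Z_3, and incrementing a Z_12 cell increments both components, so a cell over Z_12 can be read and written as if it were a pair of cells over Z_4 and Z_3.

    # x -> (x mod 4, x mod 3) is a bijection from Z_12 to Z_4 x Z_3, since gcd(4, 3) = 1.
    pairs = [(x % 4, x % 3) for x in range(12)]
    assert len(set(pairs)) == 12
    # Adding 1 in Z_12 corresponds to adding 1 in each component.
    assert all(((x + 1) % 12) % 4 == (x % 4 + 1) % 4 and
               ((x + 1) % 12) % 3 == (x % 3 + 1) % 3 for x in range(12))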

Before proceeding further, we would also like to point out that to get a space-optimal decision assignment tree over Z_m^n, it suffices to get space-optimal decision assignment trees over Z_2 and Z_{m'} in arbitrary dimensions. Thus, to get a decision assignment tree implementing space-optimal quasi-Gray codes over Z_m^n, with m = 2^a · m' and m' odd, we only need decision assignment trees implementing space-optimal quasi-Gray codes over Z_2^{n_1} and Z_{m'}^{n_2}. This also justifies our sole focus on the construction of space-optimal decision assignment trees over Z_2^n and Z_{m'}^n, for odd m', in the later sections.

Lemma 3.3.

If, for all n ∈ N, there exists a decision assignment tree implementing a space-optimal (quasi-)Gray code over Z_m^n, then for any a ≥ 1 and n ∈ N, there exists a decision assignment tree implementing a space-optimal (quasi-)Gray code over Z_{m^a}^n such that the read and write complexity remain the same.

Proof.

Consider any bijective map φ: Z_m^a -> Z_{m^a}. For example, one can take the standard positional (base-m) encoding of integers ranging from 0 to m^a - 1 as the bijective map φ. Next, define another map Φ: Z_m^{an} -> Z_{m^a}^n as follows: Φ(x_1, ..., x_{an}) = (φ(x_1, ..., x_a), φ(x_{a+1}, ..., x_{2a}), ..., φ(x_{a(n-1)+1}, ..., x_{an})). Now consider a decision assignment tree T' that implements a space-optimal (quasi-)Gray code over Z_m^{an}. We fix a partition of the an variables into n blocks of a variables each.

We now construct a decision assignment tree T over Z_{m^a}^n using T' and the map Φ. As in the proof of Lemma 3.2, our T follows T' in the decision making. That is, if T' queries a variable, then T queries the block in the partition where that variable lies. (Again, as noted before, T may get more information than required by T'.) Upon reaching a leaf, using φ, T updates the blocks depending on T''s updates to the variables. ∎

We devote the rest of the paper to the construction of counters over Z_2^n and Z_m^n for any odd m.

4 Permutation Group and Decomposition of Counters

We start this section with some basic notation and facts about the permutation group, which we will use heavily in the rest of the paper. The set of all permutations over a domain D forms a group under the composition operation, denoted by ∘, which is defined as follows: for any two permutations σ and τ, (σ ∘ τ)(x) = σ(τ(x)) for every x ∈ D. The corresponding group, denoted S_D, is the symmetric group on the set D. We say a permutation σ ∈ S_D is a cycle of length r if there are distinct elements a_1, ..., a_r ∈ D such that σ(a_i) = a_{i+1} for 1 ≤ i < r, σ(a_r) = a_1, and σ(b) = b for all b not in {a_1, ..., a_r}. We denote such a cycle by (a_1, a_2, ..., a_r). Below we state a few simple facts about compositions of cycles.

Proposition 4.1.

Consider two cycles σ = (a_1, a_2, ..., a_r) and τ = (b_1, b_2, ..., b_s) such that a_i ≠ b_j for every i ∈ [r] and j ∈ [s]. Then, σ ∘ τ ∘ (a_1, b_1) is a cycle of length r + s.

Proposition 4.2.

If σ ∈ S_D is a cycle of length r, then for any permutation π ∈ S_D, π ∘ σ ∘ π^{-1} is also a cycle of length r. Moreover, if σ = (a_1, a_2, ..., a_r), then π ∘ σ ∘ π^{-1} = (π(a_1), π(a_2), ..., π(a_r)).

The permutation π ∘ σ ∘ π^{-1} is called the conjugate of σ with respect to π. The above proposition is a special case of a well-known fact about the cycle structure of conjugates of any permutation, and it can be found in any standard textbook on group theory (see e.g. [14]).
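A short check of this fact (our illustration): conjugating the cycle (1, 2, 3) by any permutation π simply relabels its entries by π.

    from itertools import permutations

    def compose(f, g):                 # (f o g)(x) = f(g(x))
        return {x: f[g[x]] for x in g}

    sigma = {1: 2, 2: 3, 3: 1, 4: 4}   # the cycle (1, 2, 3) on {1, 2, 3, 4}
    for p in permutations([1, 2, 3, 4]):
        pi = dict(zip([1, 2, 3, 4], p))
        pi_inv = {v: k for k, v in pi.items()}
        conj = compose(pi, compose(sigma, pi_inv))
        # The conjugate maps pi(1) -> pi(2) -> pi(3) -> pi(1) and fixes pi(4).
        assert conj[pi[1]] == pi[2] and conj[pi[2]] == pi[3] and conj[pi[3]] == pi[1]
        assert conj[pi[4]] == pi[4]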

Roughly speaking, a counter of length ℓ over a domain D is, in the language of permutations, nothing but a cycle of the same length in S_D. We now make this correspondence precise and give a construction of a decision assignment tree that implements such a counter.

Lemma 4.3.

Let D be a domain. Suppose α_1, α_2, ..., α_ℓ ∈ S_D are such that α = α_ℓ ∘ α_{ℓ-1} ∘ ... ∘ α_1 is a cycle of length L. Let T_1, ..., T_ℓ be decision assignment trees that implement α_1, ..., α_ℓ, respectively. Let D' = D'_1 x ... x D'_{n'} be a domain such that |D'| ≥ ℓ, and let T_0 be a decision assignment tree that implements a counter C' of length K over D', where K ≥ ℓ.

Then, there exists a decision assignment tree T that implements a counter of length K · L over D' x D such that READ(T) ≤ n' + max_{i ∈ [ℓ]} READ(T_i), and WRITE(T) ≤ WRITE(T_0) + max_{i ∈ [ℓ]} WRITE(T_i).

Proof.

Suppose C' = (c_1, c_2, ..., c_K). Now let us consider the following procedure P: on any input (y, x) ∈ D' x D,

If y = c_i for some i ∈ [ℓ]: set x to α_i(x) and y to next_{C'}(y);
else: set y to next_{C'}(y).

Now, using a similar argument as in the proof of Theorem 3.1, the above procedure is easily seen to be implementable by a decision assignment tree T of the prescribed complexity. Each time we check the value of y. Thus, we need to read its n' components. Depending on the value of y, we may apply some α_i on x using the decision assignment tree T_i. Then we update the value of y. Hence, READ(T) ≤ n' + max_{i ∈ [ℓ]} READ(T_i), and WRITE(T) ≤ WRITE(T_0) + max_{i ∈ [ℓ]} WRITE(T_i).

Let (d_0, d_1, ..., d_{L-1}) be the cycle of length L given by α = α_ℓ ∘ ... ∘ α_1. We now argue that the procedure P generates a counter of length K · L over D' x D starting at (c_1, d_0). Without loss of generality, let us assume that ℓ = K, by letting α_i be the identity map for ℓ < i ≤ K. Fix j ∈ {0, 1, ..., K-1}. Define β_0 to be the identity map, and β_j = α_j ∘ α_{j-1} ∘ ... ∘ α_1 for j ≥ 1. For r ≥ 0, let y_{j,r} = β_j(α^r(d_0)), where α^r denotes r invocations of α. Since P advances the pointer coordinate in every invocation, after r·K + j steps starting from (c_1, d_0) the procedure is at the state (c_{j+1}, y_{j,r}).

By Proposition 4.2, β_j ∘ α ∘ β_j^{-1} is a cycle of length L. Hence, y_{j,0}, y_{j,1}, ..., y_{j,L-1} are all distinct, and y_{j,r} = (β_j ∘ α ∘ β_j^{-1})^r(y_{j,0}).

As a consequence we conclude that for any r, r' ∈ {0, ..., L-1} and j, j' ∈ {0, ..., K-1}, the states (c_{j+1}, y_{j,r}) and (c_{j'+1}, y_{j',r'}) coincide only if j = j' and r = r'. This completes the proof. ∎

In the next two sections we describe constructions of the permutations α_1, ..., α_ℓ for which α = α_ℓ ∘ ... ∘ α_1 is a cycle of length L, for a suitably large L, and how the value of ℓ depends on the length L of the cycle α.

5 Counters via Linear Transformation

The construction in this section is based on linear transformations. Consider the vector space Z_2^n, and let A: Z_2^n -> Z_2^n be a linear transformation. A basic fact in linear algebra says that if A has full rank, then the mapping x -> Ax is a bijection. Thus, when A is full rank, the mapping can also be thought of as a permutation over