State spaces of convolutional codes, codings and encoders

12/05/2017 ∙ by Štěpán Holub, et al. ∙ Charles University in Prague

In this paper we give a compact presentation of the theory of abstract spaces for convolutional codes and convolutional encoders, and show a connection between them that seems to be missing in the literature. We use it for a short proof of two facts: the size of a convolutional encoder of a polynomial matrix is at least its inner degree, and the minimal encoder has the size of the external degree if the matrix is reduced.


1. Notation

Let $\mathbb{F}$ be a (finite) field. An element of the field $\mathbb{F}((D))$ of Laurent series over $\mathbb{F}$ is a series $\mathbf{u} = \sum_i u_i D^i$ over a formal variable $D$, where the least integer $i$ such that $u_i \neq 0$ is called the delay of $\mathbf{u}$, denoted by $\operatorname{del} \mathbf{u}$. The degree of $\mathbf{u}$, denoted by $\deg \mathbf{u}$, is the largest index $i$ for which $u_i \neq 0$ (possibly $\deg \mathbf{u} = \infty$). (Note that $\operatorname{del} \mathbf{0} = \infty$ and $\deg \mathbf{0} = -\infty$.) We avoid the tedious use of the variable $D$ in our notation: instead of $\mathbf{u}(D)$, the fact that we deal with a series is indicated by the boldface. Important subsets of $\mathbb{F}((D))$ are the ring of polynomials $\mathbb{F}[D]$, and its fraction field, the set of rational series $\mathbb{F}(D)$. Let $\mathbf{u}^+ = \sum_{i \ge 0} u_i D^i$ be the causal part of $\mathbf{u}$ and $\mathbf{u}^- = \mathbf{u} - \mathbf{u}^+$ its anticausal part. The set of causal series is the ring of power series $\mathbb{F}[[D]]$. Boldface variables will also denote vectors of series. If $\mathbf{u} \in \mathbb{F}((D))^k$ or $\mathbf{u} \in \mathbb{F}((D))^n$, then $\deg \mathbf{u} = \max_i \deg \mathbf{u}_i$ and $\operatorname{del} \mathbf{u} = \min_i \operatorname{del} \mathbf{u}_i$.
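These notions are easy to experiment with. The following is a minimal Python sketch over GF(2), where a finite series is stored as the set of exponents carrying coefficient 1; the representation and all helper names are ours, not the paper's.

```python
# A finite Laurent series over GF(2) as the set of exponents with nonzero
# coefficient, e.g. {-2, 0, 3} stands for D^-2 + 1 + D^3 (our own encoding).

def delay(u):
    """del(u): the least exponent with a nonzero coefficient (None for u = 0)."""
    return min(u) if u else None

def degree(u):
    """deg(u): the largest exponent with a nonzero coefficient (None for u = 0)."""
    return max(u) if u else None

def causal_part(u):
    """u^+: the terms with nonnegative exponents."""
    return {i for i in u if i >= 0}

def anticausal_part(u):
    """u^-: the terms with negative exponents, so that u = u^- + u^+."""
    return {i for i in u if i < 0}

u = {-2, 0, 3}           # D^-2 + 1 + D^3
print(delay(u))          # -2
print(degree(u))         # 3
print(causal_part(u))    # {0, 3}
```

For the zero series the sketch returns `None` where the text uses the conventions $\operatorname{del} \mathbf{0} = \infty$ and $\deg \mathbf{0} = -\infty$.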

2. Levels of abstractness, state spaces and minimality

An $(n,k)$ convolutional code can be seen on three levels of increasing definiteness and/or decreasing abstractness: a code, an encoding and an encoder.

1. The first level, called simply a (convolutional) code, is the set $\mathcal{C}$ of codewords, which is by definition a $k$-dimensional subspace of the vector space $\mathbb{F}((D))^n$ intersecting $\mathbb{F}(D)^n$ in a $k$-dimensional subspace, that is, generated by a basis from $\mathbb{F}(D)^n$.

2. An encoding is a realizable linear mapping $\mathbb{F}((D))^k \to \mathbb{F}((D))^n$ which assigns codewords to messages. Such an encoding is defined by the choice of a causal rational basis of the space $\mathcal{C}$ or, equivalently, by a generating matrix $G$, which is a $k \times n$ matrix of rank $k$ whose rows generate $\mathcal{C}$ and whose entries are causal rational series. An information vector (a message) $\mathbf{u} \in \mathbb{F}((D))^k$ is mapped to the codeword $\mathbf{u}G$.

3. The encoding can be realized in real time by an encoder, which is a transducer characterized by a finite vector space $S$ (over $\mathbb{F}$) of states and by a linear mapping $\Phi\colon S \times \mathbb{F}^k \to S \times \mathbb{F}^n$. Given a state $s$ and an input $u$, the encoder outputs $v$ and enters a state $s'$, where $(s', v) = \Phi(s, u)$. Since $\Phi$ is linear, it can be expressed as a matrix of size $(m+k) \times (m+n)$ over $\mathbb{F}$, which is composed of four matrices $A$, $B$, $C$, $E$ such that $s' = sA + uB$ and $v = sC + uE$. The set of states is then $S = \mathbb{F}^m$, where $m$ is the degree (the size) of the encoder. The encoder is typically realized by a linear circuit which is built from adders, multipliers and memory (or delay) elements. Then $m$ is the number of delay elements, and a state is the content of the encoder's memory.
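For illustration, here is a small sketch of such an encoder over GF(2); the example matrix $G = (1+D^2,\ 1+D+D^2)$ and its controller canonical form realization with $m = 2$ delay elements are our own choice, not taken from the paper. The matrix names follow the state equations $s' = sA + uB$, $v = sC + uE$.

```python
import numpy as np

# Controller canonical form encoder for the (2,1) matrix G = (1+D^2, 1+D+D^2)
# over GF(2); the state is (u_{t-1}, u_{t-2}), so m = 2 delay elements.
A = np.array([[0, 1], [0, 0]])   # shift register: old u_{t-1} moves one step
B = np.array([[1, 0]])           # new input enters the first delay element
C = np.array([[0, 1], [1, 1]])   # taps of the memory into the two outputs
E = np.array([[1, 1]])           # direct feed-through of the input

def step(s, u):
    """One encoder step: return (next state, output), all arithmetic mod 2."""
    return (s @ A + u @ B) % 2, (s @ C + u @ E) % 2

s = np.zeros((1, 2), dtype=int)
out = []
for ut in [1, 0, 0, 0]:          # impulse input
    s, v = step(s, np.array([[ut]]))
    out.append(tuple(v[0]))
print(out)   # [(1, 1), (0, 1), (1, 1), (0, 0)]
```

The impulse response reproduces the coefficients of the rows of $G$: the first output stream is $1 + D^2$ and the second is $1 + D + D^2$, as expected.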

The main point is that each of the above three levels can be assigned a finite-dimensional vector space of states whose dimension is the corresponding degree. We can therefore distinguish (with growing abstractness) the encoder state space, the encoding state space and the code state space, as well as the encoder degree, the encoding degree and the code degree.

Remark 1.

In the literature, terminology varies. The circuit realization is called a “physical encoder” by McEliece [5], or a “physical realization” of an “abstract linear encoder”, where the latter is identified with the generating matrix $G$, that is, with the encoding. McEliece points out that a matrix can have different physical realizations (even minimal ones). At the same time, however, the term “convolutional encoder” without qualification is used for the encoder defined by four matrices as above. Those four matrices are also called a “state-space realization” of the matrix $G$, this time without pointing out that the same matrix can have different state-space realizations. Johannesson and Zigangirov [4] use the term “encoder” directly for the circuit.

The state space of the encoder is explicit in its definition. On the other hand, it is not obvious what a “state of an encoding”, or a “state of a matrix $G$”, should be. In order to define the encoding state space, the encoding should be seen as the “behaviour” of an encoder. The encoding state at a given instant of time is therefore naturally defined by the mapping of future inputs to future outputs. Since we deal with time invariant systems, it is convenient to denote the “given instant” as time zero, and the state is then a mapping of causal series. We are interested only in “reachable” states, that is, those corresponding to some anticausal input $\mathbf{u}^-$. Due to linearity, we have that for the causal input $\mathbf{u}^+$ the causal output will be
$$\bigl((\mathbf{u}^- + \mathbf{u}^+)G\bigr)^+ = (\mathbf{u}^- G)^+ + \mathbf{u}^+ G,$$
which is the behaviour of the initial state modified by the addition of $\mathbf{u}^+ G$. This suggests that we can define the encoding state space as the factor space
$$S_G = \mathbb{F}((D))^k / \Delta_G,$$
where
$$\Delta_G = \{\mathbf{u} \in \mathbb{F}((D))^k : (\mathbf{u}^- G)^+ = \mathbf{0}\}.$$
We denote elements of $S_G$ by $[\mathbf{u}]_G$. Note, in particular, that $[\mathbf{u}]_G = [\mathbf{u}^-]_G$ for any $\mathbf{u}$. The set $S_G$ is a vector space over the basic field $\mathbb{F}$ (rather than over the field of rational functions). Consequently, also $\mathbb{F}((D))^k$ and $\Delta_G$ in the above definition are understood as vector spaces over $\mathbb{F}$. Our definition of $S_G$ may seem less intuitive than the definition used by Forney [1], and Johannesson and Zigangirov [4]:
$$\Sigma_G = \{(\mathbf{u}^- G)^+ : \mathbf{u} \in \mathbb{F}((D))^k\}.$$
However, $S_G$ and $\Sigma_G$ are isomorphic via the isomorphism $[\mathbf{u}]_G \mapsto (\mathbf{u}^- G)^+$. The advantage of our definition will become clear with the definition of the mapping $\sigma$ below.

One more step of abstraction is needed to introduce the code state space, which should be independent even of a particular encoding. The idea is illustrated by the well-known construction of the minimal automaton in the theory of regular languages, where states are given by classes of the Myhill–Nerode equivalence, and are thus defined purely in terms of the accepted language. In our case, it is natural to define the “code state zero” as the set
$$\Delta_{\mathcal{C}} = \{\mathbf{v} \in \mathcal{C} : \mathbf{v}^+ \in \mathcal{C}\}$$
of code words whose causal part is again a (causal) code word, again a vector space over $\mathbb{F}$. The state space of $\mathcal{C}$ is now defined as the factor space
$$S_{\mathcal{C}} = \mathcal{C} / \Delta_{\mathcal{C}},$$
and the elements of $S_{\mathcal{C}}$ are denoted by $[\mathbf{v}]_{\mathcal{C}}$.

Remark 2.

This definition (and notation) is used by McEliece [5, p. 1096], who uses the term “abstract state space” for $S_{\mathcal{C}}$. The same definition is used also in Forney [6], where, instead of $[\mathbf{v}]_{\mathcal{C}}$, a different notation is used (moreover indexed by the particular time instant, for systems which may not be time invariant). Beware that “abstract state space” denotes the space $\Sigma_G$ in Forney [1] as well as in Johannesson and Zigangirov [4, p. 90]. We remark that the “behavioral” approach leads to a slightly different definition
$$\hat{S}_{\mathcal{C}} = \mathcal{C}^+ / \mathcal{C}_0,$$
where
$$\mathcal{C}^+ = \{\mathbf{v}^+ : \mathbf{v} \in \mathcal{C}\}, \qquad \mathcal{C}_0 = \mathcal{C} \cap \mathbb{F}[[D]]^n.$$
This is the definition used (somewhat implicitly, see our concluding remarks) by Forney [1]. However, it is straightforward that $\mathbf{v} \mapsto \mathbf{v}^+$ induces an isomorphism between $S_{\mathcal{C}}$ and $\hat{S}_{\mathcal{C}}$, which is also noted in Forney [6].

An encoder is called minimal if it has the smallest possible degree among all encoders realizing a given encoding. The degree of a minimal encoder of a generating matrix $G$ is called the McMillan degree of $G$ ([5, p. 1095]), denoted $\mu(G)$. Since the state of an encoding is given by the (future) behavior of the encoder, it is obvious that the degree of the minimal encoder is at least the degree of the corresponding encoding. On the other hand, the state space $S_G$ can be used as the state space of an encoder which could be called “standard”. If $s = [\mathbf{u}]_G$ and the input is $a \in \mathbb{F}^k$, then $\Phi(s, a) = ([D^{-1}(\mathbf{u} + a)]_G,\, v_0)$, where $\mathbf{v} = (\mathbf{u} + a)G$ and $v_0$ is the absolute component of $\mathbf{v}$. It is easy to see that $\Phi$ is linear and well defined. Therefore, the McMillan degree of $G$ is in fact the dimension of the space $S_G$:
$$\mu(G) = \dim S_G.$$

Similarly, an encoding (a generating matrix $G$) is called minimal if $\dim S_G$ is the smallest possible among all encodings defining the same code. A simple tool for studying minimal encodings is the linear mapping
$$\sigma\colon S_G \to S_{\mathcal{C}}, \qquad \sigma([\mathbf{u}]_G) = [\mathbf{u}G]_{\mathcal{C}}.$$
Let us verify that $\sigma$ is well defined, which means that $\sigma$ maps $\Delta_G$ into $\Delta_{\mathcal{C}}$. For $\mathbf{u} \in \Delta_G$, we have
$$(\mathbf{u}G)^+ = (\mathbf{u}^- G)^+ + \mathbf{u}^+ G = \mathbf{u}^+ G \in \mathcal{C},$$
and therefore $\mathbf{u}G$ is indeed in $\Delta_{\mathcal{C}}$. Since $\sigma$ is clearly surjective, we now have
$$\dim S_{\mathcal{C}} \le \dim S_G,$$
and $G$ is minimal if $\sigma$ is injective. This directly translates into the equivalent criterion “Only the zero abstract state can be a codeword” (see Theorem 2.37 (iii) [4]).

Remark 3.

An encoder is a minimal possible encoder of a code $\mathcal{C}$ if it is a minimal encoder of a minimal encoding. However, the minimal possible encoder can be constructed directly from $S_{\mathcal{C}}$ in a way similar to the above construction of the “standard encoder”. Such a minimal realization was studied by Willems [7] and is called a “canonical realization” by Forney [6].

3. Polynomial generating matrices

Consider a polynomial generating matrix $G$ with rows $\mathbf{g}_1, \dots, \mathbf{g}_k$. Define the degree of the $i$th row as
$$e_i = \deg \mathbf{g}_i,$$
and the external degree of $G$ as $\operatorname{extdeg} G = e_1 + \dots + e_k$. Assume, without loss of generality, that $e_1 \le e_2 \le \dots \le e_k$. It is well known (and not difficult to see) that, for a given code $\mathcal{C}$, $\operatorname{extdeg} G$ is minimized if and only if $(e_1, \dots, e_k)$ is the uniquely given $k$-tuple of Forney indices of $\mathcal{C}$. Then $G$ is called a canonical generating matrix of $\mathcal{C}$.

Each polynomial matrix $G$ admits a circuit realization whose degree is $\operatorname{extdeg} G$ (this is the most obvious realization, called “direct-form” or “controller canonical form” [5, 4]). Using Section 2, we therefore have
$$\mu(G) = \dim S_G \le \operatorname{extdeg} G.$$

One of the fundamental results of the Forney theory is the State Space Theorem, which claims that
$$\dim S_{\mathcal{C}} = e_1 + \dots + e_k$$
for the Forney indices $e_1, \dots, e_k$ of $\mathcal{C}$.

This means that canonical matrices define minimal encodings of a given code and, moreover, that their direct-form realization is a minimal encoder.

A standard characterization of canonical matrices is the following: a polynomial matrix $G$ is canonical if and only if it is basic and reduced. Each of these two properties is captured by several equivalent conditions.

I. Being reduced is related to the external degree of the matrix. A matrix is reduced if its external degree cannot be lowered by adding polynomial multiples of some rows to another row. Equivalently, a reduced matrix of $\mathcal{C}$ is obtained from a polynomial generating matrix by multiplication by a unimodular matrix (a polynomial matrix with determinant in $\mathbb{F} \setminus \{0\}$) minimizing the external degree. The matrix $G$ is reduced if and only if it has the predictable degree property, which states that
$$\deg(\mathbf{u}G) = \max_{1 \le i \le k} (\deg \mathbf{u}_i + e_i)$$
for each polynomial $\mathbf{u} \in \mathbb{F}[D]^k$.
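The predictable degree property is easy to check numerically. The following Python sketch over GF(2) verifies it for one input vector and one reduced matrix; the matrix, the test vector and all helper names are our own illustration.

```python
# Check of the predictable degree property for a reduced matrix over GF(2).
# Polynomials are represented as sets of exponents with coefficient 1.

def pmul(p, q):
    r = set()
    for i in p:
        for j in q:
            r ^= {i + j}        # symmetric difference: coefficients add mod 2
    return r

def padd(p, q):
    return p ^ q

def deg(p):
    return max(p) if p else float("-inf")

# Reduced matrix G with rows (1, 0, D) and (D, 1, 0): row degrees e = (1, 1).
G = [[{0}, set(), {1}], [{1}, {0}, set()]]
e = [1, 1]

def row_combination(u, G):
    """uG for an input vector u of two polynomials."""
    n = len(G[0])
    return [padd(pmul(u[0], G[0][j]), pmul(u[1], G[1][j])) for j in range(n)]

u = [{0, 2}, {1}]                # u = (1 + D^2, D)
v = row_combination(u, G)
lhs = max(deg(c) for c in v)                     # deg(uG)
rhs = max(deg(u[i]) + e[i] for i in range(2))    # max_i (deg u_i + e_i)
print(lhs == rhs)   # True: the predictable degree property holds
```

Here $\mathbf{u}G = (1,\ D,\ D + D^3)$, so both sides equal $3$.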

II. Being basic is related to the internal degree of the matrix, denoted $\operatorname{intdeg} G$, which is the largest degree of its $k \times k$ minors. A polynomial generating matrix of $\mathcal{C}$ is basic if its internal degree is minimal among all polynomial generating matrices of $\mathcal{C}$. Another equivalent characterization is that $G$ is basic if and only if it has a polynomial right inverse.

It is also not difficult to see that $\operatorname{intdeg} G \le \operatorname{extdeg} G$ for any polynomial matrix, and that $\operatorname{intdeg} G = \operatorname{extdeg} G$ if and only if $G$ is reduced.
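The inequality between the two degrees can be observed on a small example. In the Python sketch below (the matrices and helper names are ours), a non-reduced matrix has external degree strictly larger than its internal degree, while a unimodular row operation produces an equivalent reduced matrix for which the two degrees coincide.

```python
from itertools import combinations

# extdeg vs intdeg over GF(2); polynomials are sets of exponents (our encoding).

def pmul(p, q):
    r = set()
    for i in p:
        for j in q:
            r ^= {i + j}        # coefficients add mod 2
    return r

def deg(p):
    return max(p) if p else float("-inf")

def extdeg(G):
    """Sum of the row degrees."""
    return sum(max(deg(c) for c in row) for row in G)

def intdeg(G):
    """Largest degree of a k x k minor (k = 2 here; in char 2, det = ad + bc)."""
    best = float("-inf")
    for j1, j2 in combinations(range(len(G[0])), 2):
        minor = pmul(G[0][j1], G[1][j2]) ^ pmul(G[0][j2], G[1][j1])
        best = max(best, deg(minor))
    return best

# G has rows (1, 0, D) and (0, 1, D^2): not reduced, extdeg 3 > intdeg 2.
G = [[{0}, set(), {1}], [set(), {0}, {2}]]
# Adding D times row 1 to row 2 (a unimodular operation) gives the reduced
# matrix Gr with rows (1, 0, D) and (D, 1, 0): extdeg = intdeg = 2.
Gr = [[{0}, set(), {1}], [{1}, {0}, set()]]

print(extdeg(G), intdeg(G))     # 3 2
print(extdeg(Gr), intdeg(Gr))   # 2 2
```

Note also that the internal degree is unchanged by the unimodular operation, in line with its minimality characterization.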

The above claims can be found in any textbook about convolutional codes (see [5, Appendix A] for a longer list of equivalent conditions) and we will use them in what follows.

We formulate the State Space Theorem in a way that reveals the main idea of the proof, which closely follows McEliece [5]. The idea is to consider a particular basis of $S_{\mathcal{C}}$ defined by a canonical matrix $G$.

Theorem 1.

Let $G$ be a canonical generating matrix of $\mathcal{C}$. Then
$$\mathcal{B} = \{\,[D^{-j}\mathbf{g}_i]_{\mathcal{C}} : 1 \le i \le k,\ 1 \le j \le e_i\,\}$$
is a basis of $S_{\mathcal{C}}$.

Proof.

Take an arbitrary element of $S_{\mathcal{C}}$ and write it as $[\mathbf{u}G]_{\mathcal{C}}$. Since $[\mathbf{u}G]_{\mathcal{C}} = [\mathbf{u}^- G]_{\mathcal{C}}$, we may assume that $\mathbf{u}$ is anticausal. Decompose $\mathbf{u} = \mathbf{w} + \mathbf{r}$ so that
$$\mathbf{w}_i = \sum_{j=1}^{e_i} c_{ij} D^{-j}, \qquad c_{ij} \in \mathbb{F}.$$
We have $\deg \mathbf{r}_i < -e_i$ and $\deg \mathbf{g}_i = e_i$, which implies $[\mathbf{r}G]_{\mathcal{C}} = [\mathbf{0}]_{\mathcal{C}}$. The vector $[\mathbf{u}G]_{\mathcal{C}}$ is a linear combination of vectors $[D^{-j}\mathbf{g}_i]_{\mathcal{C}}$ (over $\mathbb{F}$), namely
$$[\mathbf{u}G]_{\mathcal{C}} = [\mathbf{w}G]_{\mathcal{C}} = \sum_{i=1}^{k} \sum_{j=1}^{e_i} c_{ij}\, [D^{-j}\mathbf{g}_i]_{\mathcal{C}}.$$
We have shown that the vectors of $\mathcal{B}$ generate $S_{\mathcal{C}}$.

It remains to show linear independence. Let therefore
$$\sum_{i=1}^{k} \sum_{j=1}^{e_i} c_{ij}\, [D^{-j}\mathbf{g}_i]_{\mathcal{C}} = [\mathbf{0}]_{\mathcal{C}}.$$
This can be written as
$$(\mathbf{w}G)^+ = \mathbf{u}G$$
for some $\mathbf{u} \in \mathbb{F}((D))^k$, where
$$\mathbf{w}_i = \sum_{j=1}^{e_i} c_{ij} D^{-j}.$$
Since $G$ has a polynomial right inverse $H$, we have $\mathbf{u} = (\mathbf{w}G)^+ H$, hence $\mathbf{u}$ is polynomial. Then $\deg(\mathbf{u}G) \ge 0$, unless $\mathbf{u} = \mathbf{0}$. On the other hand, we have $(\mathbf{u} - \mathbf{w})G = -(\mathbf{w}G)^-$, which means that $\deg\bigl((\mathbf{u} - \mathbf{w})G\bigr) < 0$. The predictable degree property of $G$ now implies that $\deg(\mathbf{u}_i - \mathbf{w}_i) < -e_i$ for each $i$, therefore also $\mathbf{u} = \mathbf{w} = \mathbf{0}$, since $\mathbf{w}$ is anticausal with $\operatorname{del} \mathbf{w}_i \ge -e_i$ and $\mathbf{u}$ is causal. This completes the proof.  ∎

We now turn our attention to bases of $S_G$. For any reduced matrix $G$, there is a characterization of a basis of $S_G$ which is quite analogous to the State Space Theorem above. Nevertheless, while the State Space Theorem has become a classical result, a clear exposition of the following parallel result seems to be missing in the literature.

We denote by $\mathbf{e}_i$ the $i$th canonical vector of $\mathbb{F}^k$ (which should not be confused with the $i$th Forney index $e_i$).

Theorem 2.

Let $G$ be a reduced generating matrix with $\deg \mathbf{g}_i = e_i$, $1 \le i \le k$. Then the set
$$\{\,[D^{-j}\mathbf{e}_i]_G : 1 \le i \le k,\ 1 \le j \le e_i\,\}$$
is a basis of $S_G$.

Proof.

The proof closely resembles the proof of Theorem 1. Let $\mathbf{u}^- = \mathbf{w} + \mathbf{r}$ be a decomposition of an arbitrary $\mathbf{u}^-$ given by
$$\mathbf{w} = \sum_{i=1}^{k} \sum_{j=1}^{e_i} c_{ij} D^{-j} \mathbf{e}_i, \qquad \deg \mathbf{r}_i < -e_i.$$
From the definition of $\Delta_G$, we obtain $[\mathbf{r}]_G = [\mathbf{0}]_G$, which implies that the vectors $[D^{-j}\mathbf{e}_i]_G$ generate $S_G$.

To show linear independence, consider $[\mathbf{w}]_G = [\mathbf{0}]_G$, where
$$\mathbf{w} = \sum_{i=1}^{k} \sum_{j=1}^{e_i} c_{ij} D^{-j} \mathbf{e}_i.$$
Since $\mathbf{w} = \mathbf{w}^-$, we deduce from $[\mathbf{w}]_G = [\mathbf{0}]_G$ that $(\mathbf{w}G)^+ = \mathbf{0}$, hence $\deg(\mathbf{w}G) < 0$, and the predictable degree property of $G$ implies $\mathbf{w} = \mathbf{0}$. ∎

For general polynomial generating matrices, a variant of the linear independence part of the proof can be carried out, providing a lower bound in terms of an equivalent reduced matrix.

Theorem 3.

Let $G$ be a polynomial generating matrix and let $T$ be a unimodular matrix such that $TG$ is a reduced matrix with row degrees $e_1, \dots, e_k$. Then the set
$$\{\,[D^{-j}\mathbf{t}_i]_G : 1 \le i \le k,\ 1 \le j \le e_i\,\},$$
where $\mathbf{t}_1, \dots, \mathbf{t}_k$ are the rows of $T$, is a linearly independent subset of $S_G$.

Proof.

Let
$$\sum_{i=1}^{k} \sum_{j=1}^{e_i} c_{ij}\, [D^{-j}\mathbf{t}_i]_G = [\mathbf{0}]_G$$
for some $c_{ij} \in \mathbb{F}$. Then $[\mathbf{w}T]_G = [\mathbf{0}]_G$, where
$$\mathbf{w} = \sum_{i=1}^{k} \sum_{j=1}^{e_i} c_{ij} D^{-j} \mathbf{e}_i,$$
and
$$\bigl((\mathbf{w}T)^- G\bigr)^+ = \mathbf{0}.$$
From this and from $\mathbf{w}T = (\mathbf{w}T)^- + (\mathbf{w}T)^+$ we obtain that $(\mathbf{w}TG)^+ = \mathbf{q}\,TG$, where $\mathbf{q} = (\mathbf{w}T)^+ T^{-1}$. Since both $(\mathbf{w}T)^+$ and $T^{-1}$ are polynomial, we deduce that also $\mathbf{q}$ is polynomial. Now $(\mathbf{w} - \mathbf{q})TG = (\mathbf{w}TG)^-$ is anticausal, and the predictable degree property of $TG$ implies that $\deg(\mathbf{w}_i - \mathbf{q}_i) < -e_i$ for each $i$, and therefore also $\mathbf{w} = \mathbf{q} = \mathbf{0}$, since $\mathbf{w}$ is anticausal with $\operatorname{del} \mathbf{w}_i \ge -e_i$ and $\mathbf{q}$ is causal. Hence all $c_{ij}$ vanish. ∎

Corollary 1.

For any polynomial matrix $G$,
$$\dim S_G \ge \operatorname{intdeg} G.$$
If $G$ is reduced, then
$$\dim S_G = \operatorname{extdeg} G.$$

Proof.

Let $T$ be a unimodular matrix such that $TG$ is reduced. Since $T$ is unimodular and $TG$ is reduced, Theorem 3 yields
$$\dim S_G \ge \operatorname{extdeg}(TG) = \operatorname{intdeg}(TG) = \operatorname{intdeg} G.$$
If $G$ is itself reduced, we may take $T$ to be the identity, and the opposite inequality $\dim S_G = \mu(G) \le \operatorname{extdeg} G$ follows from the direct-form realization. The claim follows. ∎

4. Concluding remarks

We have shown that it is useful to clearly distinguish the state space of the code from the state space of the encoding, and to consider the relation between the two. Although the relation is as simple as the mapping $\sigma$, it has apparently not been used in the literature so far.

A historical reason for this may be that Forney, in his seminal paper [1, Lemma 5], uses yet another factor space. That space is equivalent to $S_G$ if and only if $G$ has a causal right inverse. However, for a general matrix $G$ it is not very useful. In particular, unlike $S_{\mathcal{C}}$, it is not independent of $G$.

McEliece [5] explicitly uses the space $S_{\mathcal{C}}$, and he calls it the “abstract state space”, which is an important terminological shift with respect to the usage introduced by Forney [1] (cf. Remark 2 above), not pointed out in the “Advice to experts” [5, Preface, p. 1068]. Nevertheless, not even McEliece liberates the code degree from a particular encoding, and (following Forney [1]) defines it as the minimal possible external degree of a polynomial generating matrix. Such a definition, however, is fully justified only once the State Space Theorem is proven.

The present paper can be summarized as a vindication of a different approach: if, as we did, the degree of the code is defined as $\dim S_{\mathcal{C}}$, then the mapping $\sigma$ immediately shows that we cannot hope for any smaller encoding, let alone a smaller encoder. The State Space Theorem then shows that this absolute minimum is not only achievable, it is even achieved by direct-form encoders of (canonical) polynomial matrices.

References

  • [1] G. D. Forney Jr., “Convolutional codes I: Algebraic structure,” IEEE Transactions on Information Theory, vol. 16, no. 6, pp. 720–738, Nov 1970.
  • [2] ——, “Structural analysis of convolutional codes via dual codes,” IEEE Trans. Information Theory, vol. 19, no. 4, pp. 512–518, 1973. [Online]. Available: https://doi.org/10.1109/TIT.1973.1055030
  • [3] ——, “Minimal bases of rational vector spaces, with applications to multivariable linear systems,” SIAM Journal on Control, vol. 13, no. 3, pp. 493–520, 1975.
  • [4] R. Johannesson and K. S. Zigangirov, Fundamentals of Convolutional Coding, 2nd ed.   Wiley-IEEE Press, 2015.
  • [5] R. J. McEliece, “The algebraic theory of convolutional codes,” in Handbook of coding theory.   Amsterdam: North-Holland, 1998, vol. I, ch. 12, pp. 1065–1138.
  • [6] G. D. Forney Jr., “Minimal realizations of linear systems: The ‘shortest basis’ approach,” IEEE Trans. Information Theory, vol. 57, no. 2, pp. 726–737, 2011. [Online]. Available: https://doi.org/10.1109/TIT.2010.2094811
  • [7] J. C. Willems, Models for Dynamics.   Wiesbaden: Vieweg+Teubner Verlag, 1989, pp. 171–269. [Online]. Available: https://doi.org/10.1007/978-3-322-96657-5_5