In digital communication, messages transmitted through a public channel are often distorted by channel noise. The theory of error-correcting codes studies mechanisms to cope with this problem. It is an important research area with many applications in modern life. For example, error-correcting codes are widely employed in cell phones to correct errors arising from fading noise during high-frequency radio transmission. One of the major challenges in coding theory remains the construction of new error-correcting codes with good properties and the study of their encoding and decoding algorithms.
In a binary erasure channel (BEC), a binary symbol is either received correctly or totally erased with probability $\epsilon$. The concept of the BEC was first introduced by Elias in 1955. Together with the binary symmetric channel (BSC), it is frequently used in coding theory and information theory because these are among the simplest channel models, and many problems in communication theory can be reduced to problems over a BEC. Here we consider more generally a $q$-ary erasure channel, in which a $q$-ary symbol is either received correctly or totally erased with probability $\epsilon$.
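The channel model just described can be sketched in a few lines. The following is an illustrative simulation only (the helper name `erasure_channel` and its parameters are ours, not from the paper): each symbol is independently erased with probability $\epsilon$, and erasures are marked by `None`.

```python
import random

def erasure_channel(symbols, eps, seed=None):
    """Memoryless q-ary erasure channel: each symbol is erased
    (replaced by None) independently with probability eps and
    is received intact otherwise."""
    rng = random.Random(seed)
    return [None if rng.random() < eps else s for s in symbols]

# A word over F_5 sent through the channel with erasure probability 0.3.
received = erasure_channel([1, 4, 0, 2, 3, 3, 1, 0], eps=0.3, seed=1)
# Every received position is either the transmitted symbol or an erasure;
# unlike a BSC, the channel never substitutes a wrong symbol.
```

The key feature exploited throughout the paper is visible here: the receiver knows exactly which positions were lost, so decoding over the erasure channel reduces to linear algebra on the surviving coordinates.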
The problem of decoding linear codes over the erasure channel has received renewed attention in recent years due to its wide applications to the internet and to distributed storage systems, where random packet losses are analyzed [1, 8, 9]. Three important decoding principles, namely unambiguous decoding, maximum likelihood decoding and list decoding, were studied in recent years for linear codes over the erasure channel, and the corresponding decoding error probabilities under these principles were also investigated (see [3, 6, 11, 13] and the references therein).
In particular, in [11], improving upon previous results, the authors provided a detailed study of the decoding error probabilities of a general $q$-ary linear code over the erasure channel under the three decoding principles. Via the notion of $r$-incorrigible sets for linear codes, they showed that all these decoding error probabilities can be expressed explicitly in terms of the $r$-th support weight distributions of the code. As applications, they obtained explicit formulas for the decoding error probabilities of some of the most interesting linear codes, such as MDS codes, the binary Golay code, the simplex codes and the first-order Reed–Muller codes, for which the $r$-th support weight distributions were known. They also computed the average decoding error probabilities of a random code over the erasure channel and obtained the error exponent of a random code under one of the decoding principles.
I-B Statement of the main results
In this paper we consider a new code ensemble, namely the random matrix ensemble, that is, the set of all $m \times n$ matrices over $\mathbb{F}_q$ endowed with the uniform probability, each of which is associated with a parity-check code as follows: for each matrix $H$ in the ensemble, the corresponding parity-check code is given by
$$C_H = \{\mathbf{x} \in \mathbb{F}_q^n : H\mathbf{x}^{T} = \mathbf{0}\}. \qquad (1)$$
Here boldface letters such as $\mathbf{x}$ denote row vectors.
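As a concrete illustration of this construction, the toy sketch below (over $\mathbb{F}_2$ with small parameters; all function names are hypothetical, and brute-force enumeration is only feasible for tiny $n$) samples a uniform random parity-check matrix and enumerates its code. The code dimension is $n$ minus the rank of the matrix, so the rate is at least $1 - m/n$, with equality exactly when the rows are linearly independent.

```python
import itertools
import random

def rank_f2(rows):
    """Rank of a binary matrix over F_2, via Gaussian elimination."""
    rows = [list(r) for r in rows]
    rank = 0
    for col in range(len(rows[0])):
        piv = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def parity_check_code(H):
    """Enumerate {x in F_2^n : H x^T = 0} by brute force (tiny n only)."""
    n = len(H[0])
    return [x for x in itertools.product((0, 1), repeat=n)
            if all(sum(h * xi for h, xi in zip(row, x)) % 2 == 0 for row in H)]

rng = random.Random(0)
m, n = 3, 6
H = [[rng.randrange(2) for _ in range(n)] for _ in range(m)]
C = parity_check_code(H)

# dim C = n - rank(H) >= n - m, so the rate of C is at least 1 - m/n.
assert len(C) == 2 ** (n - rank_f2(H))
```

This also makes visible the multiplicity phenomenon discussed below: distinct matrices with the same row space define the same code.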
As for previous results about this matrix ensemble, the undetected error probability over the binary symmetric channel was studied by Wadayama (i.e., the case $q = 2$), and some bounds on the error probability under the maximum likelihood decoding principle were obtained for the $q$-ary erasure channel [4, 7]; other than these results, not much is known. It is easy to see that the matrix ensemble contains all linear codes of the random code ensemble considered in [11], but the two ensembles are quite different for two reasons: first, in the random code ensemble considered in [11], each code is counted exactly once, while in the matrix ensemble each code is counted with some multiplicity, as different choices of the matrix may give rise to the same code; second, some codes in the matrix ensemble may have rates strictly larger than $1 - m/n$, as the rows of the matrix need not be linearly independent.
It is conceivable that most of the codes in the matrix ensemble have rate $1 - m/n$, and that the average behavior of codes in this ensemble should be similar to that of the random code ensemble considered in [11]. The advantage of studying the matrix ensemble is that it is much easier to handle mathematically than the random code ensemble – an advantage that has been exploited before – hence we may be able to obtain much stronger results than those of [11]. We will show that this is indeed the case.
We first obtain explicit formulas for the average decoding error probability of the matrix ensemble over the erasure channel under the three different decoding principles. This is comparable to [11, Theorem 2] for the random code ensemble. Such formulas are useful as they allow explicit evaluation of the average decoding error probabilities for any given set of parameters, hence giving us meaningful guidance as to what to expect from a good code over the erasure channel.
Let $\mathcal{M}$ be the random matrix ensemble described above. Denote by ${n \brack k}_q$ the Gaussian $q$-binomial coefficient, and denote
The average unsuccessful decoding probability of the ensemble under list decoding with list size $L$, where $L$ is a non-negative integer, is given by
The average unsuccessful decoding probability of the ensemble under unambiguous decoding is given by
The average decoding error probability of the ensemble under maximum likelihood decoding is given by
Next, letting $m = (1 - R)n$ for a fixed rate $R$, we compute the error exponents of the average decoding error probability of the ensemble series as $n \to \infty$ under these decoding principles.
Let the rate $R$ be fixed with $0 < R < 1$.
For any fixed integer $L$, the error exponent of the average unsuccessful decoding probability of the ensemble under list decoding with list size $L$ is given by
The error exponents of the average unsuccessful decoding probability of the ensemble under unambiguous decoding and under maximum likelihood decoding are both given by
A plot of this error exponent as a function of the rate is given in Fig. 1.
It can be checked that the error exponent here under the unambiguous decoding principle coincides with that for the random code ensemble obtained in [11, Theorem 3].
Next, we establish a strong concentration result, under unambiguous decoding, for the unsuccessful decoding probability of a random code in the ensemble around its mean.
Let the rate $R$ be fixed with $0 < R < 1$. Then as $H$ runs over the ensemble, we have
under either of the following conditions:
if for any , or
if for .
Here the notion WHP in (8) refers to “with high probability”, that is, for any , there is and such that
Noting that in the range , it was known that (see Theorem 2), hence
Finally, we point out a weaker but more general concentration result:
Let the rate $R$ be fixed with $0 < R < 1$. Then as $H$ runs over the ensemble, we have
I-C Discussion of Theorem 2
It is interesting to compare Theorem 2 with what can be obtained by Gallager’s method for nonlinear code ensembles over the erasure channel (see [5, Exercise 5.20, page 538]): consider the ensemble of all block codes of length $n$ and rate $R$ over the erasure channel in which each letter of each codeword is selected independently and uniformly from the alphabet; then the average decoding error probability under list decoding with list size $L$ is upper bounded by
where the function is given as
We compare the error exponent given in Theorem 2, corresponding to list decoding with list size $L$ for the random matrix ensemble, with the exponent above, which corresponds to list decoding with the same list size for the random code ensemble of the same rate described by Gallager. We observe that in the high rate region the two exponents coincide, but in the low rate region they differ whenever the list size is at least 3. As illustrations, we plot the two exponents as functions of the rate in Figs. 2 and 3 for two choices of parameters. This shows that under the list decoding principle over the erasure channel, the performance of linear codes on average is as good as that of nonlinear codes in the high rate region, but is inferior in the low rate region if the list size is at least 3. It is well known that the two ensembles have the same performance under the unambiguous decoding and the maximum likelihood decoding principles.
The paper is organized as follows. In Section II, we introduce the Gaussian $q$-binomial coefficient in more detail. Then in Section III, we provide three counting results regarding matrices of certain rank over $\mathbb{F}_q$. Afterwards, in Sections IV, V and VI, we give the proofs of Theorems 1, 2 and 3–4, respectively. The proofs of Theorems 3 and 4 involve some technical calculus computations on the error exponent of the variance. In order to streamline these proofs, we defer some of the arguments to the Appendix (Section VII). Finally, we conclude the paper in Section VIII.
For integers $0 \le k \le n$, the Gaussian binomial coefficient ${n \brack k}_q$ is defined as
$${n \brack k}_q \;=\; \prod_{i=0}^{k-1}\frac{q^{n-i}-1}{q^{i+1}-1}.$$
By convention, ${n \brack 0}_q = 1$ for any $n \ge 0$, and ${n \brack k}_q = 0$ if $k < 0$ or $k > n$. The function defined in (2) can be written as
We extend the function in (2) by analogous conventions when its arguments fall outside the natural range. Next, recall the well-known combinatorial interpretation of ${n \brack k}_q$:
Lemma 1.
The number of $k$-dimensional subspaces of an $n$-dimensional vector space over $\mathbb{F}_q$ is ${n \brack k}_q$.
The Gaussian binomial coefficient satisfies the property
and the identity
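The coefficient and its subspace-counting interpretation can be checked numerically. The sketch below (the helper name is ours) computes ${n \brack k}_q$ via the standard product formula and verifies Lemma 1 in a small case.

```python
def gaussian_binomial(n, k, q):
    """Gaussian (q-)binomial coefficient: the number of k-dimensional
    subspaces of an n-dimensional vector space over F_q."""
    if k < 0 or k > n:
        return 0
    num = den = 1
    for i in range(k):
        num *= q ** (n - i) - 1
        den *= q ** (i + 1) - 1
    return num // den  # the quotient is always an integer

# 1-dimensional subspaces of F_2^3: each is spanned by one of the
# 2^3 - 1 = 7 nonzero vectors (over F_2 a line has a unique nonzero
# point), so there are exactly 7 of them.
assert gaussian_binomial(3, 1, 2) == 7
# Symmetry [n k]_q = [n n-k]_q, inherited from subspace duality:
assert gaussian_binomial(5, 2, 3) == gaussian_binomial(5, 3, 3)
```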
III Three counting results for the ensemble
In this section we provide three counting results about matrices of certain rank in the ensemble . Such results may not be new, but since we cannot locate them in the literature, we prove them here. These results will be used repeatedly in the proofs later on.
For a matrix $H$ in the ensemble, denote by $\mathrm{rk}(H)$ the rank of $H$ over $\mathbb{F}_q$.
Let $H$ be a random matrix in the ensemble. Then for any integer $r$, we have
$$\Pr\left[\mathrm{rk}(H) = r\right] \;=\; q^{-mn}\,{n \brack r}_q \prod_{i=0}^{r-1}\left(q^m - q^i\right). \qquad (12)$$
We may assume that $r$ satisfies $0 \le r \le \min\{m, n\}$, because if $r$ is not in this range, then both sides of Equation (12) are obviously zero.
Denote by $\mathcal{L}$ the set of $\mathbb{F}_q$-linear transformations from $\mathbb{F}_q^m$ to $\mathbb{F}_q^n$. Writing vectors in $\mathbb{F}_q^m$ and $\mathbb{F}_q^n$ as row vectors, we see that the random matrix ensemble can be identified with the set $\mathcal{L}$ via the relation $H \mapsto (\mathbf{x} \mapsto \mathbf{x}H)$.
Since $\mathrm{rk}(H) = r$ if and only if the image of the corresponding linear transformation is an $r$-dimensional subspace of $\mathbb{F}_q^n$, we have
The inner sum counts the number of surjective linear transformations from $\mathbb{F}_q^m$ onto $U$, an $r$-dimensional subspace of $\mathbb{F}_q^n$. Since $U \cong \mathbb{F}_q^r$, this is also the number of surjective linear transformations from $\mathbb{F}_q^m$ onto $\mathbb{F}_q^r$, or, equivalently, the number of $m \times r$ matrices over $\mathbb{F}_q$ whose columns are linearly independent. The number of such matrices can be counted as follows: the first column can be any nonzero vector of $\mathbb{F}_q^m$, giving $q^m - 1$ choices; given the first column, the second column can be any vector lying outside the space of scalar multiples of the first column, giving $q^m - q$ choices; inductively, given the first $i$ columns, the $(i+1)$-th column must lie outside an $i$-dimensional subspace, so the number of choices for it is $q^m - q^i$. Thus we have
$$\prod_{i=0}^{r-1}\left(q^m - q^i\right)$$
surjective linear transformations from $\mathbb{F}_q^m$ onto $U$.
Together with Lemma 1, we obtain
which is the desired result. ∎
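As a sanity check on this counting argument, one can exhaustively tabulate the ranks of all $2 \times 3$ binary matrices and compare them against the count "(number of $r$-dimensional subspaces) $\times$ (number of surjections onto one of them)". The sketch below is illustrative only (all names are ours; $q = 2$, tiny parameters).

```python
import itertools
from collections import Counter

def rank_f2(rows):
    """Rank of a binary matrix over F_2, via Gaussian elimination."""
    rows = [list(r) for r in rows]
    rank = 0
    for col in range(len(rows[0])):
        piv = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def gaussian_binomial(n, k, q):
    """Number of k-dimensional subspaces of F_q^n."""
    if k < 0 or k > n:
        return 0
    num = den = 1
    for i in range(k):
        num *= q ** (n - i) - 1
        den *= q ** (i + 1) - 1
    return num // den

def predicted_count(m, n, r, q):
    """Count of m x n matrices over F_q of rank r: choose an
    r-dimensional image subspace of F_q^n, then count surjections
    of F_q^m onto it (prod over i of q^m - q^i)."""
    surj = 1
    for i in range(r):
        surj *= q ** m - q ** i
    return gaussian_binomial(n, r, q) * surj

m, n = 2, 3
counts = Counter(
    rank_f2(mat)
    for mat in itertools.product(itertools.product((0, 1), repeat=n), repeat=m)
)
for r in range(min(m, n) + 1):
    assert counts[r] == predicted_count(m, n, r, 2)
```

Dividing `predicted_count` by $q^{mn}$ gives the rank probability of a uniform random matrix, matching the statement of Lemma 2.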
Let $H$ be a random matrix in the ensemble. Let $I \subseteq \{1, \ldots, n\}$ be a subset with cardinality $s$. Denote by $H_I$ the submatrix of $H$ formed by the columns of $H$ indexed by $I$. Then for any integers $r$ and $t$, we have
We may assume that $r$ and $t$ lie in the appropriate ranges, because otherwise both sides of Equation (15) are zero.
Here the subspace in question is the subspace of $\mathbb{F}_q^{|I|}$ obtained by restricting vectors to the coordinates with indices in $I$. We may consider the projection given by
The kernel of has dimension and is of the form for some subspace . So we can further decompose the sum on the right hand side of (16) as
Now we compute the inner sum on the right hand side of (17). Suppose we are given an ordered basis of the -dimensional subspace of . We extend it to an ordered basis of some -dimensional subspace as follows: first we need other basis vectors to be linearly independent. At the same time, they have to be linearly independent with any nonzero vector in due to the kernel condition. This requires the set to be linearly independent in . On the other hand, if this condition is satisfied, then the vectors are also linearly independent with one another as well as with any nonzero vector in . Therefore it reduces to counting the number of ordered linearly independent sets of vectors in . This number is clearly given by , so the total number of different ordered bases is given by .
On the other hand, given a fixed -dimensional subspace with , we count the number of ordered bases of of the form stated in previous paragraph as follows: we choose to be any vector in but not in , which gives many choices for ; similarly is any vector in but not in the span of and , this gives us many choices for ; using this argument, we see that the number of such ordered bases is given by .
Let $H$ be a random matrix in the ensemble, and let $I$ and $J$ be subsets of $\{1, \ldots, n\}$ such that
It is clear that if a submatrix $H_I$ has full rank, then so does the matrix $H$ itself, for any index subset $I$. Hence we have
It is easy to see that the two full-rank events for $H_I$ and $H_J$ are conditionally independent, since the columns of $H_I$ and those of $H_J$ are independent as random vectors over $\mathbb{F}_q$. Hence we get
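The independence of the two submatrix events can be verified exhaustively in a small case. The following sketch (illustrative, over $\mathbb{F}_2$, with hypothetical names) checks that for disjoint column sets $I$ and $J$ of a uniform random $2 \times 4$ binary matrix, the full-rank events of $H_I$ and $H_J$ occur independently.

```python
import itertools
from fractions import Fraction

def rank_f2(rows):
    """Rank of a binary matrix over F_2, via Gaussian elimination."""
    rows = [list(r) for r in rows]
    rank = 0
    for col in range(len(rows[0])):
        piv = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

m, n = 2, 4
I, J = (0, 1), (2, 3)  # disjoint column index sets
total = both = only_i = only_j = 0
for mat in itertools.product(itertools.product((0, 1), repeat=n), repeat=m):
    total += 1
    fi = rank_f2([[row[c] for c in I] for row in mat]) == m  # H_I full rank
    fj = rank_f2([[row[c] for c in J] for row in mat]) == m  # H_J full rank
    both += fi and fj
    only_i += fi
    only_j += fj

# Independence: P(both full rank) = P(H_I full rank) * P(H_J full rank).
assert Fraction(both, total) == Fraction(only_i, total) * Fraction(only_j, total)
```

The exact rational check (via `Fraction`) avoids floating-point comparison; over disjoint coordinates the columns are sampled independently, which is the heart of the argument in the text.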
IV Proof of Theorem 1
The background of the three decoding principles (unambiguous decoding, maximum likelihood decoding and list decoding) for linear codes over the erasure channel, the computation of their decoding error probability functions, and their relation to the concept of the $r$-incorrigible set of a linear code were all laid out in detail in [11, II. Preliminaries], so we do not repeat them here. Interested readers may refer to that paper for more details. We focus on what is most relevant to the proof of Theorem 1 in this paper.
Let $C$ be an $[n, k]$ linear code, that is, $C$ is a $k$-dimensional subspace of $\mathbb{F}_q^n$. Denote $[n] = \{1, \ldots, n\}$. For any $I \subseteq [n]$, define
Since $C$ is a linear code, the set so defined is also a vector space over $\mathbb{F}_q$.
Denote the $r$-incorrigible set distribution of $C$ and the incorrigible set distribution of $C$, which are defined respectively as follows:
It is easy to see that , so if , then . We also define
It is easy to see that , if and
We also have the identity
Recall from [11] that the decoding error probabilities under the three principles can all be expressed in terms of these distributions as follows:
For a matrix $H$ in the ensemble, we write the corresponding quantities for the parity-check code $C_H$ defined by (1). The average decoding error probabilities over the matrix ensemble are given by
Here the expectation is taken over the ensemble.
Now we can start the proof of Theorem 1. For , we denote
We now compute . Noting that for and , we have , thus
By the symmetry of the ensemble, the inner sum on the right-hand side depends only on the cardinality of the index set, so we may fix a representative set to obtain
The right-hand side is exactly the desired probability, where the probability is taken over the ensemble. So from Lemma 2 we have
Using this and (21), we also obtain