Random Matrices from Linear Codes and Wigner's semicircle law

In this paper we consider a new normalization of matrices obtained by choosing distinct codewords at random from linear codes over finite fields, and we find that, under some natural algebraic conditions on the codes, their empirical spectral distribution converges to Wigner's semicircle law as the length of the codes goes to infinity. One such condition is that the dual distance of the codes is at least 5. This is analogous to previous work in which the empirical spectral distribution of similar matrices obtained in this fashion was shown to converge to the Marchenko-Pastur law.

I Introduction

The theory of random matrices mainly concerns the statistical behavior of the eigenvalues of large random matrices arising from various matrix models. There is a universality phenomenon: much like the law of large numbers in probability theory, the collective behavior of the eigenvalues of a large random matrix does not depend on the detailed distribution of its entries. Partly for this reason, random matrix theory, which originated in statistics [21] and mathematical physics [20] and was nurtured by mathematicians, has found important applications in many diverse disciplines such as number theory [15], computer science, economics and communication theory [19], and it remains a prominent research area.

Most of the matrix models considered in the literature are matrices whose entries are independent. In a series of works ([3, 2, 22]), initiated in [4], the authors studied matrices formed from linear codes over finite fields and ultimately proved that, if the minimum Hamming distance of the dual codes is at least 5, they behave like truly random matrices (i.e., random matrices with i.i.d. entries) in terms of the empirical spectral distribution. This was the first result relating the randomness of matrices from linear codes to the algebraic properties of the underlying dual codes, and it can be interpreted as a joint randomness test for codes or sequences. This is called a “group randomness” property [4] and may have many applications.

In this paper we study a new group randomness property of linear codes. To describe our results, we need some notation.

Let $\mathcal{C}$ be a family of linear codes of length $n$, dimension $k$ and minimum Hamming distance $d$ over the finite field $\mathbb{F}_q$ of $q$ elements ($\mathcal{C}$ is called an $[n,k,d]_q$ code for short). Assume that $k \to \infty$ as $n \to \infty$. The standard additive character on $\mathbb{F}_q$ extends component-wise to a natural mapping $\psi \colon \mathbb{F}_q^{n} \to \mathbb{C}^{n}$. For each $n$, choosing $p$ codewords $c_1, \ldots, c_p$ at random uniformly from $\mathcal{C}$ and applying the mapping $\psi$, we obtain a $p \times n$ random matrix $\Phi$ whose rows are $\psi(c_1), \ldots, \psi(c_p)$. The Gram matrix of $\Phi$ is

$$\mathcal{G} = \frac{1}{n}\, \Phi \Phi^{*},$$

here $\Phi^{*}$ denotes the conjugate transpose of $\Phi$. Denote by $\mathbb{E}$ the expectation with respect to the underlying probability space.
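To make the construction concrete, here is a minimal sketch (not the authors' code; the toy [7, 4] generator matrix, the choice $q = 2$, and all variable names are ours) that draws codewords uniformly at random, applies the character map $c \mapsto (-1)^{c}$, and forms the normalized Gram matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy generator matrix of a small binary [7, 4] code (illustrative only).
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 0, 1, 1],
              [0, 0, 1, 0, 1, 1, 1],
              [0, 0, 0, 1, 1, 0, 1]], dtype=np.int64)
k, n = G.shape

def random_codewords(G, p, rng):
    """Draw p codewords uniformly at random (messages chosen uniformly)."""
    msgs = rng.integers(0, 2, size=(p, G.shape[0]))
    return (msgs @ G) % 2

def character_map(codewords):
    """For q = 2 the standard additive character sends a bit c to (-1)^c."""
    return (-1.0) ** codewords

p = 3
Phi = character_map(random_codewords(G, p, rng))   # p x n matrix with +-1 entries
Gram = Phi @ Phi.T / n                             # normalized Gram matrix (1/n) Phi Phi^*
```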

For any $p \times p$ Hermitian matrix $A$ with eigenvalues $\lambda_1, \ldots, \lambda_p$, the spectral measure of $A$ is defined by

$$\mu_A = \frac{1}{p} \sum_{j=1}^{p} \delta_{\lambda_j},$$

where $\delta_x$ is the Dirac measure at the point $x$. The empirical spectral distribution of $A$ is defined as

$$F_A(x) = \mu_A\bigl( (-\infty, x] \bigr) = \frac{1}{p}\, \#\{\, 1 \le j \le p : \lambda_j \le x \,\}.$$
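As an illustration of these definitions (a sketch in our own notation; the function name is ours), the empirical spectral distribution can be evaluated directly from the eigenvalues:

```python
import numpy as np

def esd(A, x):
    """Empirical spectral distribution F_A(x): fraction of eigenvalues of A that are <= x."""
    eigvals = np.linalg.eigvalsh(A)   # A is assumed Hermitian, so eigenvalues are real
    return float(np.mean(eigvals <= x))
```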

For the sake of brevity, a slightly simplified version of [22, Theorem 1] may be stated as follows.

Theorem 1.

Let $F_{\mathcal{G}}$ be the empirical spectral distribution of the Gram matrix $\mathcal{G}$. If the dual distance of the code $\mathcal{C}$ satisfies $d^{\perp} \ge 5$ for each $n$ and the ratio $y = p/n$ is fixed, then for any $x \in \mathbb{R}$, we have

$$\lim_{n \to \infty} \mathbb{E}\, F_{\mathcal{G}}(x) = F_{\mathrm{MP},y}(x). \qquad (1)$$

Here $F_{\mathrm{MP},y}$ denotes the cumulative distribution function of the Marchenko-Pastur measure, whose density function (for $0 < y \le 1$) is given by

$$f_{\mathrm{MP},y}(x) = \frac{1}{2\pi x y}\sqrt{(b - x)(x - a)}\;\mathbf{1}_{[a,b]}(x),$$

where $a = (1-\sqrt{y})^{2}$, $b = (1+\sqrt{y})^{2}$, and $\mathbf{1}_{[a,b]}$ is the indicator function of the interval $[a,b]$.

It is well-known in random matrix theory that if $X$ is a $p \times n$ matrix whose entries are i.i.d. random variables of zero mean and unit variance, the empirical spectral distribution of the Gram matrix $\frac{1}{n} X X^{*}$ of $X$ satisfies the same Marchenko-Pastur law (1) as $n \to \infty$ and $y = p/n$ is fixed (see [1, 14]). Hence the above result can be interpreted as saying that matrices formed from linear codes of dual distance at least 5 behave like truly random matrices with i.i.d. entries. In other words, sequences from linear codes of dual distance at least 5 possess a group randomness property. The condition is also necessary, because the empirical spectral distribution of matrices formed from the first-order Reed-Muller codes, whose dual distance is 4, behaves very differently from the Marchenko-Pastur law ([4]).

In this paper we consider a different group randomness property. If $X$ is a $p \times n$ random matrix whose entries are i.i.d. random variables of zero mean and unit variance, let

$$M = \frac{1}{\sqrt{np}}\left( X X^{*} - n I_{p} \right).$$

It is well-known in random matrix theory ([1, 5]) that in the limit $p \to \infty$, $p/n \to 0$ simultaneously, the empirical spectral distribution of the matrix $M$ converges to Wigner's semicircle law, whose density function is given by

$$f_{\mathrm{SC}}(x) = \frac{1}{2\pi}\sqrt{4 - x^{2}}\;\mathbf{1}_{[-2,2]}(x).$$

Here $I_{p}$ denotes the identity matrix of size $p$. So a natural question is to investigate when similarly formed matrices from linear codes satisfy the same property. For this purpose, we consider the $p \times n$ random matrix $\Phi$ obtained by choosing $p$ distinct codewords at random uniformly from $\mathcal{C}$ and applying the mapping $\psi$, and we define

$$M_{\mathcal{C}} = \frac{1}{\sqrt{np}}\left( \Phi \Phi^{*} - n I_{p} \right).$$
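Continuing the sketch above (illustrative only; the function name is ours), the analogous normalization for a code matrix $\Phi$ reads as follows; its diagonal vanishes because every row of $\Phi$ has squared norm $n$:

```python
import numpy as np

def normalized_matrix(Phi):
    """Form (Phi Phi^* - n I_p) / sqrt(n p), the Wigner-type normalization used above."""
    p, n = Phi.shape
    S = Phi @ Phi.conj().T        # Gram-type matrix with diagonal entries equal to n
    return (S - n * np.eye(p)) / np.sqrt(n * p)
```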

Now we state the main result of this paper.

Theorem 2.

Let $F_{M_{\mathcal{C}}}$ be the empirical spectral distribution of the matrix $M_{\mathcal{C}}$. Assume that the linear codes $\mathcal{C}$ satisfy:

(i) as , where is the cardinality of the code ;

(ii) the dual distance satisfies $d^{\perp} \ge 5$ for each $n$; and

(iii) there is a fixed constant $C$ independent of $n$ such that for any two distinct codewords $c, c' \in \mathcal{C}$,

$$\bigl| \langle \psi(c), \psi(c') \rangle \bigr| \le C \sqrt{n}. \qquad (2)$$

Here $\langle \cdot, \cdot \rangle$ is the standard inner product of the complex vectors $\psi(c)$ and $\psi(c')$. Then as $p \to \infty$ and $p/n \to 0$ simultaneously, for any $x \in \mathbb{R}$, we have

$$F_{M_{\mathcal{C}}}(x) \longrightarrow F_{\mathrm{SC}}(x) \quad \text{in probability},$$

where $F_{\mathrm{SC}}$ is the cumulative distribution function of the semicircle measure.

We remark that condition (iii) is quite natural for linear codes; for instance, it appeared as a requirement in the construction of deterministic sensing matrices from linear codes that satisfy the ideal Statistical Restricted Isometry Property (see [7, Definition 1] or [12]). For binary linear codes of length $n$, (iii) is equivalent to the condition

$$\bigl| n - 2\,\mathrm{wt}(c) \bigr| \le C \sqrt{n}$$

for any nonzero codeword $c \in \mathcal{C}$. Here $\mathrm{wt}(c)$ is the Hamming weight of the codeword $c$. There is an abundance of binary linear codes that satisfy this condition, for example the Gold codes ([13]), some families of BCH codes (see [7, 9, 10]), and many families of cyclic and linear codes studied in the literature (see for example [8, 18, 23]).
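As a brute-force sanity check of this binary form of condition (iii) (an illustration under the reconstruction of (2) given above, feasible only for codes of small dimension; all names are ours), one can enumerate the nonzero codewords and test the weight condition directly:

```python
import numpy as np
from itertools import product

def weights_of_code(G):
    """Hamming weights of all nonzero codewords generated by the rows of G (mod 2)."""
    k, n = G.shape
    weights = []
    for msg in product((0, 1), repeat=k):
        if any(msg):
            c = np.mod(np.array(msg) @ G, 2)
            weights.append(int(c.sum()))
    return np.array(weights)

def satisfies_condition_iii(G, C):
    """Check |n - 2 wt(c)| <= C sqrt(n) for every nonzero codeword c."""
    n = G.shape[1]
    w = weights_of_code(G)
    return bool(np.all(np.abs(n - 2 * w) <= C * np.sqrt(n)))
```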

Next, we emphasize that in Theorem 2 we prove convergence “in probability”. This is not only stronger than the convergence of expectations obtained in Theorem 1 (see [11]), but also much more useful in practice: it implies that, under conditions (i)-(iii), if $n$ is relatively large, then for any fixed $x$, upon randomly choosing $p$ codewords from $\mathcal{C}$, in most cases the resulting value $F_{M_{\mathcal{C}}}(x)$ will be very close to $F_{\mathrm{SC}}(x)$. This can be easily confirmed by numerical experiments. We focus on binary Gold codes, which have length $n = 2^{m} - 1$ ($m$ odd) and dual distance 5. Binary Gold codes satisfy condition (2) because they have only three nonzero weights, namely $2^{m-1}$ and $2^{m-1} \pm 2^{(m-1)/2}$. Also, the Gold codes have dimension $2m$, so the cardinality $|\mathcal{C}| = 2^{2m}$ grows like $n^{2}$ and condition (i) holds as $n \to \infty$. For each pair $(n, p)$ under consideration, we randomly pick $p$ codewords from the binary Gold code of length $n$ and form the corresponding matrix $M_{\mathcal{C}}$, from which we compute and plot the empirical spectral distribution together with Wigner's semicircle distribution (see Figures 1 to 4 below). We repeat this 10 times for each such pair, and each time we find that the plots are essentially the same: they are all very close to Wigner's semicircle law, and as the length $n$ increases they become more and more indistinguishable from it.

Fig. 1: Empirical spectral distribution (ESD) of binary Gold code versus Wigner semicircle law (SC), with
Fig. 2: Empirical spectral distribution (ESD) of binary Gold code versus Wigner semicircle law (SC), with
Fig. 3: Empirical spectral distribution (ESD) of binary Gold code versus Wigner semicircle law (SC), with
Fig. 4: Empirical spectral distribution (ESD) of binary Gold code versus Wigner semicircle law (SC), with
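The experiment behind Figures 1 to 4 can be reproduced in outline as follows (a sketch, not the authors' code: the code is supplied abstractly by a generator matrix, Gold-code generation is omitted, and the parameters are placeholders):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)

def sample_distinct_codewords(G, p, rng):
    """Pick p distinct codewords by drawing distinct messages (assumes p <= 2^k)."""
    k = G.shape[0]
    msgs = set()
    while len(msgs) < p:
        msgs.add(tuple(rng.integers(0, 2, size=k)))
    return np.mod(np.array(sorted(msgs)) @ G, 2)

def esd_experiment(G, p, rng):
    """Eigenvalues of the normalized matrix built from p distinct codewords."""
    n = G.shape[1]
    Phi = (-1.0) ** sample_distinct_codewords(G, p, rng)
    M = (Phi @ Phi.T - n * np.eye(p)) / np.sqrt(n * p)
    return np.linalg.eigvalsh(M)

def plot_against_semicircle(eigs):
    """Histogram of eigenvalues versus the semicircle density on [-2, 2]."""
    xs = np.linspace(-2, 2, 400)
    plt.hist(eigs, bins=40, density=True, alpha=0.5, label="ESD")
    plt.plot(xs, np.sqrt(4 - xs**2) / (2 * np.pi), label="semicircle")
    plt.legend()
    plt.show()

# Example usage with a placeholder generator matrix G and p sampled codewords:
# eigs = esd_experiment(G, p=200, rng=rng)
# plot_against_semicircle(eigs)
```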

To prove Theorem 2, we use the moment method; that is, we compute the moments and the variance of the empirical spectral distribution and compare them with those of Wigner's semicircle law. This is a standard method in random matrix theory and has been used in [2, 22]. We mainly follow the ideas and techniques from [22]. However, compared with [22], due to the nature of the problem the computation, especially that of the variance, becomes much more complicated. In order to present the ideas of the proof of Theorem 2 more clearly, in Section II we sketch the main steps of the proof of Theorem 1 in [22]; this will serve as a general guideline for the proofs later on. We also prove some counting lemmas which will be used later. In Section III we compute the required moments and compare them with those of Wigner's semicircle law, and in Section IV we study the variance; this completes the proof of Theorem 2. Sections III and IV require the use of some crucial but technical lemmas. In order to present the ideas of the proofs more transparently, we postpone the proofs of those lemmas to Section V (Appendix). Finally, in Section VI we conclude the paper.

II Preliminaries

In this section we outline the main steps in the proof of Theorem 1 in [22]. This not only serves as a guideline for the general ideas used in later sections, but also allows us to introduce some crucial results which will be used repeatedly later.

Throughout the paper, let be an linear code. We always assume that its dual distance satisfies . For any , denote by the set of integers in the closed interval . Let be the natural mapping obtained component-wise from the standard additive character on .

II-A Outline of the main steps in [22]

For a positive integer , let be the set of maps endowed with the uniform probability measure. Each gives rise to a matrix whose rows are listed as . Let denote the Gram matrix of , that is, . For any positive integer , the -th moment of the spectral measure of is given by

Expanding the trace , we have

where is the set of all closed maps from to (“closed” means ), and

(3)

Here is the composition of the functions and , and is the standard inner product. Taking expectation with respect to the probability space and rearranging the terms, the first main step is to rewrite as

where is the set of equivalence classes of closed paths of under the equivalence relation

Here is the permutation group on the set of integers .

It is easy to see that

where

and is the uniform probability space of all maps from to .

For simplicity, define

(4)

The second main step is to use properties of linear codes over finite fields to conclude that the quantity is exactly the number of solutions satisfying the system of equations

Here we write

and are the columns of a generating matrix of the linear code .

Finally, in the last main step, by some detailed analysis using number theory and graph theory, one can obtain (see [22, Section IV])

Lemma 1.

Here is the subset of all closed paths that form double trees.

Armed with Lemma 1, we can then easily obtain the estimate

which is more than enough to prove Theorem 1.

II-B Two counting lemmas

For , we define

(5)

We may reorder the indices as

and

Let

Similar to the second main step in the previous subsection, expanding the expression , collecting terms according to the sets and respectively and taking expectation over the probability space , we can conclude that the term defined above is exactly the number of solutions such that

(6)
(7)
(8)

We remark that in equations (6)–(8), one equation is redundant, so we can remove any one equation without affecting the set of solutions. Using this we can obtain an estimate of as below:

Lemma 2.

If , then

where is the set of all such that the systems of equations (6)-(8) for can be completely solved in the forms and for some and .

Proof of Lemma 2.

Since , it can be easily seen that the graph is a closed path with vertices and edges, where is the closed path defined by reversing the directions of the edges of (after a cyclic relabelling of the vertices if necessary). The systems of equations (6)-(8) for are precisely the same as those for . Therefore Lemma 2 follows directly from Lemma 1 on the estimate of . ∎

First notice that for any . Armed with Lemmas 1 and 2, we obtain

Lemma 3.
Proof of Lemma 3.

We write . If , then the equations in (6) become empty, and the equations in (7) and (8) are independent of each other; the numbers of solutions to them are and , respectively. Hence and so .

If , then there is precisely one equation in (6). We remove this equation without affecting . The remaining equations are either in (7) or in (8), and the numbers of solutions to them are exactly and , respectively. Hence in this case we also have .

Now assume . If , then each reduced equation is either of the form or , corresponding to an equation in (7) or in (8) respectively. Hence we still have ; otherwise, if , then the result follows from the fact that and from Lemma 2 on the estimate of . ∎

III The -th Moment Estimate

We use notation from Section II. Let be an linear code with dual distance . For a positive integer , let be the set of all injective maps endowed with the uniform probability measure. Each gives rise to a matrix whose rows are listed as . Let denote the Gram matrix of , that is, .

Define

and

We prove

Theorem 3.

If the conditions (i)-(iii) of Theorem 2 are satisfied, then for , we have

Here the constant implied in the big-O term depends only on the parameter .

Noting that the even moments of the Wigner semicircle distribution are the Catalan numbers (the $2k$-th moment equals $\frac{1}{k+1}\binom{2k}{k}$, while the odd moments vanish), we conclude from Theorem 3 that, for any fixed , as and , we have

The rest of this section is devoted to a proof of Theorem 3.
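As a concrete reference point for this comparison (an illustration only, not part of the proof), a quick numerical integration confirms that the even moments of the density $\frac{1}{2\pi}\sqrt{4-x^{2}}$ on $[-2,2]$ are indeed the Catalan numbers:

```python
import numpy as np
from math import comb

def semicircle_moment(ell, grid=200001):
    """Trapezoidal approximation of the integral of x^ell * sqrt(4 - x^2) / (2 pi) over [-2, 2]."""
    x = np.linspace(-2.0, 2.0, grid)
    f = x**ell * np.sqrt(4.0 - x**2) / (2.0 * np.pi)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

for k in range(5):
    catalan = comb(2 * k, k) // (k + 1)
    print(2 * k, round(semicircle_moment(2 * k), 4), catalan)
```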

III-A Setting Up the Problem

Definition 1.

A closed path is called simple if it satisfies .

Denote by the set of all closed simple paths . This is a subset of appearing in Section II. Since all the diagonal entries of are zero, we can expand the expression of the trace in as

where is already defined in (3).

Similar to the first main step in Section II (see also Section III of [22]) we can write

where

and is the set of equivalence classes of simple closed paths of under the equivalence relation

We remark that

where is the uniform probability space of all injective maps from to .

III-B Proof of Theorem 3

Since is injective, is simple, so . Hence, from (2), we have

(9)

By Lemma 5 in Section V (Appendix), we have another estimate:

(10)

Define

hence we have

From (9), (10) and Lemma 1 we can summarize the estimates of as follows:

Note that (c) and (d) may appear only when is even. Using

and the identity (see [22] or [6, Lemma 2.4])

we obtain the desired estimates on . This completes the proof of Theorem 3.

IV Proof of Theorem 2

To complete the proof of Theorem 2, by the moment convergence theorem [6, p.24], it suffices to prove the following result.

Theorem 4.

Assume the conditions of Theorem 2 are satisfied. Then

This section is devoted to a proof of Theorem 4.

IV-A Setting Up the Problem

By definition,

Similar to the first main step in Section II, we can write

(11)

where

Here

denotes the set of equivalence classes of ordered pairs of simple closed paths in

under the equivalence relation

For simplicity, for , we define

IV-B Study of

First, by the condition in (2), we easily obtain

(12)

Next, we have the following estimate:

Lemma 4.

Assume and . Then

(13)
Proof of Lemma 4.

If , we apply Lemma 6 and Lemma 5 in Section V (Appendix) directly to the terms and (), respectively; then, using Lemmas 1-3 in Section II and observing that , we obtain the desired result by a straightforward computation.

Now assume . We remark that if we use the above approach, we can only obtain

which falls short of the desired bound (13). So we adopt a different method.

Denote

and

By definition, we can rewrite as

where