Pseudo-Hadamard matrices of the first generation and an algorithm for producing them

05/19/2021, by Ruslan Sharipov

Hadamard matrices in {0,1} presentation are square m × m matrices whose entries are zeros and ones and whose rows considered as vectors in R^m produce the Gram matrix of a special form with respect to the standard scalar product in R^m. The concept of Hadamard matrices is extended in the present paper. As a result, pseudo-Hadamard matrices of the first generation are defined and investigated. An algorithm for generating these pseudo-Hadamard matrices is designed and is used for testing some conjectures.


1. Introduction.

Regular Hadamard matrices are defined in {1,-1} presentation. They are square matrices whose entries are ones and minus ones and whose rows are orthogonal to each other with respect to the standard scalar product (see [1]). Hadamard matrices are associated with Hadamard's maximal determinant problem (see [2] and [3]). A simplified version of this problem was suggested in [4]. Using the well-known transformation from {1,-1} to {0,1} presentation (see [2]), in [5] the concept of Hadamard matrices was transferred to the class of matrices whose entries are zeros and ones. A Hadamard matrix in {0,1} presentation is a special square m × m matrix, where m = 4k - 1 or m = 1 for some k ∈ ℕ, where ℕ is the set of positive integers.
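The {1,-1} to {0,1} transformation mentioned above can be sketched in code. The following is a minimal illustration in Python (the paper's own code is in Maxima), assuming the standard convention: normalize a regular Hadamard matrix so that its first row and first column consist of ones, delete them, and map +1 to 1 and -1 to 0. The Sylvester construction is used here only to obtain an example matrix.

```python
def sylvester(order):
    """Sylvester construction: a regular Hadamard matrix of order 2^t in {1,-1} form."""
    H = [[1]]
    while len(H) < order:
        H = [r + r for r in H] + [r + [-x for x in r] for r in H]
    return H

def to_01_presentation(H):
    """Assumed transform: normalize so that the first row and the first column
    consist of +1, delete them, then map +1 -> 1 and -1 -> 0."""
    m = len(H)
    N = [[H[i][j] * H[0][j] * H[i][0] * H[0][0] for j in range(m)] for i in range(m)]
    return [[(1 + N[i][j]) // 2 for j in range(1, m)] for i in range(1, m)]

# order 4k with k = 2 gives a 7x7 Hadamard matrix in {0,1} presentation
A = to_01_presentation(sylvester(8))
```

With this convention each row of the resulting 7 × 7 matrix carries 2k - 1 = 3 ones, and any two distinct rows have scalar product k - 1 = 1.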

The case m = 1 is trivial. In this case we have exactly one Hadamard matrix, which coincides with the 1 × 1 identity matrix. In the case m = 4k - 1 Hadamard matrices in {0,1} presentation can be defined as follows.

Definition 1.1.

A Hadamard matrix is a square m × m matrix, where m = 4k - 1 for some k ∈ ℕ, whose entries are zeros and ones and whose rows considered as vectors in R^m produce the Gram¹ matrix G of the form

G_ij = 2k - 1 for i = j,    G_ij = k - 1 for i ≠ j,    (1.1)

where 1 ≤ i ≤ m and 1 ≤ j ≤ m, with respect to the standard scalar product in R^m.

¹ A Gram matrix is a matrix formed by pairwise mutual scalar products of a sequence of vectors in a space equipped with some scalar product.

This definition is based on Theorems 2.1 and 2.2 from [5]. Below we omit the case m = 1 and consider the case m = 4k - 1 with k ∈ ℕ only.

Note that the restrictions in Definition 1.1 are essential since one can exhibit a matrix of zeros and ones which is not a Hadamard matrix, i.e. which is not produced from a regular Hadamard matrix by the {1,-1} to {0,1} transformation.

Theorem 1.1

The transpose of a Hadamard matrix is again a Hadamard matrix.

Theorem 1.1 is immediate from Theorems 2.4 and 2.2 in [5]. In particular this theorem means that the Gram matrix associated with columns of a Hadamard matrix coincides with the Gram matrix 1.1 associated with its rows.

Below, by analogy to Definition 1.1, we define pseudo-Hadamard matrices in {0,1} presentation and study a subclass of them. In particular, we design an algorithm for generating this subclass of pseudo-Hadamard matrices.

2. Pseudo-Hadamard matrices of the first generation.

Relying upon Definition 1.1 and Theorem 1.1, it is easy to see that the set of Hadamard matrices is invariant under the following transformations:

1) permutations of rows;
2) permutations of columns;
3) transposition.

Using these transformations, one can bring any Hadamard matrix to the form 2.1, where the ones of the first row precede its zeros and the ones of the first column precede its zeros.

Due to 1.1 with i = j the number of ones in the first row of the matrix 2.1 is equal to 2k - 1. The number of zeros in this row is equal to 2k. Due to Theorem 1.1 the same is valid for the first column of the matrix 2.1, i.e. its first column comprises 2k - 1 ones and 2k zeros.

Let's remove the first row and the first column of the matrix in 2.1 and denote through D the rest of the matrix, i.e. the entry D_ij in 2.2 is the entry of the matrix 2.1 in the row i + 1 and the column j + 1. The matrix D in 2.2 coincides with the minor in 2.1 associated with the top left entry of the matrix 2.1.

Definition 2.1.

An n × n matrix, where n = 4k - 2 and k ∈ ℕ, produced from some Hadamard matrix of the form 2.1 according to 2.2 is called a pseudo-Hadamard matrix of the first generation.

Theorem 2.1

For any pseudo-Hadamard matrix of the first generation of the size n × n, where n = 4k - 2 and k ∈ ℕ, its rows considered as vectors of R^n with the standard scalar product produce the Gram matrix G of the form

G_ij = 2k - 2 for i = j ≤ 2k - 2,
G_ij = 2k - 1 for i = j > 2k - 2,
G_ij = k - 2 for i ≠ j with i ≤ 2k - 2 and j ≤ 2k - 2,    (2.3)
G_ij = k - 1 for all other i ≠ j,

where 1 ≤ i ≤ n, 1 ≤ j ≤ n, n = 4k - 2, and k ∈ ℕ.
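This block structure can be checked numerically. The sketch below (in Python, not the paper's Maxima code) obtains a {0,1} Hadamard matrix from the Sylvester construction of the previous section, brings it to the form 2.1 by sorting rows and columns, and deletes the first row and column as in 2.2; the paper's algorithm generates such matrices differently.

```python
def sylvester(order):
    # Sylvester construction of a {1,-1} Hadamard matrix of order 2^t
    H = [[1]]
    while len(H) < order:
        H = [r + r for r in H] + [r + [-x for x in r] for r in H]
    return H

def to_01_presentation(H):
    # normalize, delete the first row and column, map +1 -> 1, -1 -> 0
    m = len(H)
    N = [[H[i][j] * H[0][j] * H[i][0] * H[0][0] for j in range(m)] for i in range(m)]
    return [[(1 + N[i][j]) // 2 for j in range(1, m)] for i in range(1, m)]

def first_generation(A):
    # bring A to the form 2.1: ones first in the first row and in the first
    # column, then remove the first row and the first column (2.1 -> 2.2)
    m = len(A)
    cols = sorted(range(m), key=lambda j: -A[0][j])
    A = [[row[j] for j in cols] for row in A]
    A = [A[0]] + sorted(A[1:], key=lambda r: -r[0])
    return [row[1:] for row in A[1:]]

k = 2
n = 4 * k - 2
D = first_generation(to_01_presentation(sylvester(4 * k)))   # a 6x6 matrix
G = [[sum(a * b for a, b in zip(D[i], D[j])) for j in range(n)] for i in range(n)]
```

For k = 2 the Gram matrix G has the two diagonal blocks with entries 2k - 2 = 2 and 2k - 1 = 3 and the off-diagonal values k - 2 = 0 and k - 1 = 1.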

Proof.

Let's denote through b_1, …, b_{4k-2} the rows of the matrix in 2.2 and through a_1, …, a_{4k-1} the rows of the matrix in 2.1, i.e. we denote the initial row of the matrix 2.1 through a_1. If we consider a_i as vectors in R^{4k-1} and b_i as vectors in R^{4k-2}, then, applying the standard scalar products in R^{4k-1} and in R^{4k-2} to them, we derive the relation 2.4.

Looking at 2.1 and taking into account the arrangement of ones in its first column, we see that the first entries of its rows are given by 2.5.

Since the matrix 2.1 is a Hadamard matrix, the formula 1.1 written for its rows is equivalent to 2.6. Similarly, the formula 2.3 written for the rows of the matrix 2.2 is equivalent to 2.7.

Applying 2.4 and 2.5 to 2.6, we easily derive 2.7. This means that 2.6 implies 2.7 and, hence, 1.1 implies 2.3. Theorem 2.1 is proved. ∎
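The displayed relations behind this computation can be sketched as follows, in hypothetical notation (a_i for the rows of the matrix 2.1, c_i for their first entries, b_i for the rows of the matrix 2.2; these names are assumptions, not fixed by the text):

```latex
(a_i, a_j) = (b_{i-1}, b_{j-1}) + c_i c_j,
\qquad 2 \le i,\ j \le 4k-1,
\qquad
c_i =
\begin{cases}
1, & 2 \le i \le 2k-1,\\
0, & 2k \le i \le 4k-1,
\end{cases}
```

whence, using the values of the scalar products (a_i, a_j) given by 1.1,

```latex
(b_{i-1}, b_{j-1}) = (a_i, a_j) - c_i\, c_j =
\begin{cases}
2k-2, & i = j \le 2k-1,\\
2k-1, & i = j \ge 2k,\\
k-2,  & i \ne j,\ \ i, j \le 2k-1,\\
k-1,  & \text{otherwise},
\end{cases}
```

which is exactly the block structure described by 2.3.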

A similar result is valid for columns of pseudo-Hadamard matrices. It is given by the following theorem.

Theorem 2.2

For any pseudo-Hadamard matrix of the first generation of the size n × n, where n = 4k - 2 and k ∈ ℕ, its columns considered as vectors of the space R^n with the standard scalar product produce the Gram matrix of the form 2.3, where 1 ≤ i ≤ n, 1 ≤ j ≤ n, n = 4k - 2, and k ∈ ℕ.

Theorem 2.2 follows from Theorem 2.1 due to Theorem 1.1. Theorems 2.1 and 2.2 are strengthened in the following theorem.

Theorem 2.3

A square matrix whose entries are zeros and ones is a pseudo-Hadamard matrix of the first generation if and only if it is of the size n × n, where n = 4k - 2 for some k ∈ ℕ, and if its rows and its columns considered as vectors of the space R^n with the standard scalar product produce the same Gram matrix of the form 2.3, where 1 ≤ i ≤ n, 1 ≤ j ≤ n, n = 4k - 2, and k ∈ ℕ.

Proof.

The necessity part in the statement of Theorem 2.3 is proved by Theorems 2.1 and 2.2. Let’s prove the sufficiency.

In proving Theorem 2.1 we have seen that 2.6 implies 2.7. However the converse is not true since the equalities 2.7 do not cover the cases of 2.6 where i = 1 or j = 1. Let's denote through r the initial row of the matrix 2.1 shortened by omitting the first entry of it. This row obeys the equalities 2.8, 2.9, and 2.10.

The equalities 2.8, 2.9, and 2.10 follow from 2.6 due to 2.4 and 2.5 and moreover, if we adjoin 2.8, 2.9, and 2.10 to 2.7, the whole set of equalities 2.7, 2.8, 2.9, and 2.10 turns out to be equivalent to 2.6. Therefore, in order to complete our proof we need to derive 2.8, 2.9, and 2.10 from the premises of Theorem 2.3 being proved.

The equality 2.8 is trivial. It is fulfilled since the number of ones in the shortened initial row of the matrix 2.1 is equal to 2k - 2. In order to derive 2.9 and 2.10 we define the row 2.11 all of whose entries are equal to one. The scalar products of the row 2.11 with the shortened initial row and with the rows of the matrix in the statement of Theorem 2.3 are easily calculated. They are equal to the numbers of ones in these rows, which yields 2.12 and 2.13.

Now let's consider the sum 2.14 of all rows of the matrix in the statement of Theorem 2.3.

Each entry of the row 2.14 is equal to the number of ones in the corresponding column of the matrix. Since 2.3 is the Gram matrix not only for rows, but also for columns of this matrix, the numbers of ones in its columns are given by the diagonal entries of the Gram matrix 2.3. As a result we get 2.15.

Let's calculate the scalar products of both sides of 2.15 with the row 2.11. In the case of the left hand side of 2.15 we have the result 2.16.

The last sum in 2.16 is explicitly calculated using 2.3; this yields 2.17 and 2.18.

After elementary simplifications the formulas 2.17 and 2.18 reduce to 2.19.

Now let's proceed to the right hand side of 2.15. In this case we have 2.20 and 2.21 (see 2.12 and 2.13). From 2.20 and 2.21 we then derive 2.22.

Note that 2.15 implies the equality of these two scalar products. Substituting 2.19 and 2.22 into this equality, we get the two expressions 2.23 and 2.24.

Note that 2.23 and 2.24 do coincide with 2.9 and 2.10.

Thus, the formulas 2.8, 2.9, and 2.10 are derived from the premises of Theorem 2.3. Adjoining them to 2.7 and applying 2.4 and 2.5, we derive 2.6 for the rows of the matrix from the statement of Theorem 2.3. A larger matrix now is produced backward from it by adjoining the initial row and the initial column according to 2.1 and 2.2. The equality 2.6 is equivalent to 1.1. Therefore we can apply either Theorem 2.2 or Theorem 2.4 from [5]. Each of these two theorems means that the matrix produced backward in 2.1 is a Hadamard matrix in {0,1} presentation. Hence the matrix from the statement of Theorem 2.3 is a pseudo-Hadamard matrix of the first generation according to Definition 2.1. The proof of Theorem 2.3 is over. ∎
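In this backward passage the matrix of the form 2.1 is assembled from the matrix 2.2 (denoted D below; this notation is an assumption) and the shortened initial row u as

```latex
H =
\begin{pmatrix}
1 & u \\
u^{\top} & D
\end{pmatrix},
\qquad
u = \bigl(\,\underbrace{1, \dots, 1}_{2k-2},\ \underbrace{0, \dots, 0}_{2k}\,\bigr),
```

since in the form 2.1 both the first row and the first column carry the same pattern: a one in the corner, then 2k - 2 ones, then 2k zeros.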

3. Pseudo-Hadamard matrices of higher generations.

Pseudo-Hadamard matrices of higher generations are defined recursively. A pseudo-Hadamard matrix of the second generation is produced from some pseudo-Hadamard matrix of the first generation upon rearranging its rows and columns in a way similar to 2.1 and then by removing its initial row and initial column like in 2.2. Matrices of the third generation are produced in this way from matrices of the second generation etc., i.e. each next generation is produced from the previous one.

In this paper we shall not consider pseudo-Hadamard matrices of higher generations. They will be studied separately in forthcoming papers.

4. An algorithm for generating pseudo-Hadamard matrices of the first generation.

Like the algorithm for generating Hadamard matrices from [5], our present algorithm is based on partitioning of rows of matrices into groups (see Section 3 in [5]). We use the Maxima programming language (see [6]) for presenting its code. Almost all of the code coincides with the code in [5]. Below are the lines of the code that should be changed; each replaced line from [5] is shown as a Maxima comment immediately above its replacement.

 HM_size:m$
 /* old: HM_quarter:(HM_size+1)/4$ */
 HM_quarter:(HM_size+2)/4$
 q:HM_quarter$
 /* old: HM_row[1]:[[0,2*q],[1,2*q-1]]$ */
 HM_row[1]:[[0,2*q-1],[1,2*q-1]]$
 /* old: HM_row[2]:[[0,q],[1,q],[2,q],[3,q-1]]$ */
 HM_row[2]:[[0,q-1],[1,q],[2,q],[3,q-1]]$
 HM_matrix_num:1$
 HM_stream:openw("output_file.txt")$
 HM_make_row(3)$
 close(HM_stream)$

Like in [5], the whole job is practically done by the recursive function HM_make_row(). Here its code is also slightly changed.

 HM_make_row(i):=block
  ([n,s,k,l,q,dummy,kk,y,dpnd,indp,nrd,nri,r,kr,qq,eq,eq_list,j,
    LLL,RLL,RVV,RRV,subst_list],
   /* old: if not integerp(HM_size) or HM_size<3 or mod(HM_size,4)#3 */
   if not integerp(HM_size) or HM_size<2 or mod(HM_size,4)#2
   then
    (
     print(printf(false,"Error: m=~a is incorrect size for
      generation one pseudo-Hadamard matrices",HM_size)),
     return(false)
    ),
   /* old: if HM_size=3 */
   if HM_size=2
   then
    (
     /* old: HM_row[2]:[[0,1],[1,1],[2,1]], */
     /* old: HM_row[3]:[[1,1],[2,1],[4,1]], */
     HM_row[2]:[[1,1],[2,1]],
     HM_output_matrix(),
     return(false)
    ),
   print(printf(false,"i=~a",i)),
   ..............................
   /*-- prepare the equation list --*/
   eq_list:[],
   var_list:[],
   ..............................
   /* old: eq_list:endcons(eq=2*HM_quarter,eq_list), */
   if i<2*HM_quarter
   then eq_list:endcons(eq=2*HM_quarter-1,eq_list)
   else eq_list:endcons(eq=2*HM_quarter,eq_list),
   qq:1,
   ..............................
   /* old: eq_list:endcons(eq=HM_quarter,eq_list), */
   if i<2*HM_quarter
   then eq_list:endcons(eq=HM_quarter-1,eq_list)
   else eq_list:endcons(eq=HM_quarter,eq_list),
   qq:qq*2
   ..............................

For the sake of brevity some unchanged portions of the code above are replaced with dots. The lacking code can be taken from [5].

Apart from the function HM_make_row(), the algorithm comprises two other functions, HM_output_matrix() and HM_sc_prods_ok(i). Their code is unchanged. It can also be taken from [5].

Like in [5], the above code was run in Maxima, version 5.42.2, on the Linux platform Ubuntu 16.04 LTS using a DEXP Atlas H161 laptop with an Intel Core i7-4710MQ processor. Below are performance data of the code.

The case n = 2 is trivial. In this case the algorithm terminated instantly and produced exactly one pseudo-Hadamard matrix, which coincides with the 2 × 2 identity matrix.

The case n = 6 is less trivial. In this case the algorithm also terminated instantly, but produced 6 matrices.

The case n = 10. In this case the algorithm ran for 6 seconds and produced 1440 matrices. The matrix production rate is 14400 matrices/minute.

The case n = 14. In this case the algorithm did not terminate within an observably short time. But setting timestamps upon each next 10000 matrices, I have found that the first 10000 matrices were produced in 56 seconds, i.e. the matrix production rate is 10714 matrices/minute.

The case n = 18. In this case the first 10000 matrices were produced in 1 minute and 29 seconds, i.e. the matrix production rate is 6742 matrices/minute.

The case n = 22 is different. In this case the algorithm becomes very slow. It produced 10000 matrices upon running for 5 hours 1 minute and 17 seconds. The average matrix production rate is 33 matrices/minute. However, this production rate is very unevenly distributed over the running interval. In the beginning the algorithm does not produce matrices for about 3 hours.

As a conclusion we can say that n = 22 is a practical limit for the algorithm in its present version, though theoretically the algorithm has no limits.

5. Analysis of output and some conjectures.

The above algorithm produces matrices whose entries are zeros and ones and whose rows, when treated as vectors in R^n, generate the Gram matrix of the form 2.3 with respect to the standard scalar product in R^n. However, we cannot apply Theorem 2.3 to these matrices since the Gramians of their columns are uncertain. Therefore the output matrices were additionally analyzed. Relying upon this analysis, the following conjectures are formulated.
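A per-matrix test of this kind can be sketched as follows (a hypothetical Python checker, not the author's Maxima code): it verifies whether the row Gram matrix and, separately, the column Gram matrix of a {0,1} matrix have the form 2.3, with the block split of 2.3 assumed as in Section 2 (diagonal entries 2k - 2 / 2k - 1, off-diagonal entries k - 2 / k - 1, split at index 2k - 2).

```python
def gram(rows):
    # Gram matrix of a list of {0,1} vectors under the standard scalar product
    return [[sum(a * b for a, b in zip(u, v)) for v in rows] for u in rows]

def has_form_23(M, k):
    # check whether M has the assumed block form 2.3
    n = 4 * k - 2
    if len(M) != n:
        return False
    s = 2 * k - 2   # size of the first diagonal block
    for i in range(n):
        for j in range(n):
            if i == j:
                expected = 2 * k - 2 if i < s else 2 * k - 1
            elif i < s and j < s:
                expected = k - 2
            else:
                expected = k - 1
            if M[i][j] != expected:
                return False
    return True

def rows_and_columns_ok(D, k):
    # the criterion of Theorem 2.3: rows AND columns must give the form 2.3
    cols = [list(c) for c in zip(*D)]
    return has_form_23(gram(D), k) and has_form_23(gram(cols), k)
```

For k = 1 the 2 × 2 identity matrix passes this test, while an upper triangular matrix of ones does not.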

Conjecture 5.1

Let D be a square n × n matrix, where n = 4k - 2 for some k ∈ ℕ, whose entries are zeros and ones and whose rows considered as vectors of the space R^n with the standard scalar product produce the Gram matrix of the form 2.3 with 1 ≤ i ≤ n, 1 ≤ j ≤ n, n = 4k - 2, and k ∈ ℕ. Then D coincides with some pseudo-Hadamard matrix of the first generation upon some permutation of its columns.

Conjecture 5.2

Let D be a square n × n matrix, where n = 4k - 2 for some k ∈ ℕ, whose entries are zeros and ones and whose columns considered as vectors of the space R^n with the standard scalar product produce the Gram matrix of the form 2.3 with 1 ≤ i ≤ n, 1 ≤ j ≤ n, n = 4k - 2, and k ∈ ℕ. Then D coincides with some pseudo-Hadamard matrix of the first generation upon some permutation of its rows.

Conjectures 5.1 and 5.2 are dual to each other. They are either both valid or both invalid. I have tested these conjectures for all of my output matrices. They turned out to be valid.

This makes good evidence in favor of these conjectures being valid, though it does not prove them.

6. Dedicatory.

This paper is dedicated to my sister Svetlana Abdulovna Sharipova.

References
