# Binary Subspace Chirps

We describe in detail the interplay between binary symplectic geometry and quantum computation, with the ultimate goal of constructing highly structured codebooks. The Binary Chirps (BCs) are complex Grassmannian lines in $N = 2^m$ dimensions, used in deterministic compressed sensing and random/unsourced multiple access in wireless networks. Their entries are fourth roots of unity and can be described in terms of second order Reed-Muller codes. The Binary Subspace Chirps (BSSCs) are a unique collection of BCs of ranks ranging from $r = 0$ to $r = m$, embedded in $N$ dimensions according to an on-off pattern determined by a rank-$r$ binary subspace. This yields a codebook that is asymptotically 2.38 times larger than the codebook of BCs, has the same minimum chordal distance as the codebook of BCs, and whose alphabet is minimally extended from $\{\pm 1, \pm i\}$ to $\{\pm 1, \pm i, 0\}$. Equivalently, we show that BSSCs are stabilizer states, and we characterize them as columns of a well-controlled collection of Clifford matrices. By construction, the BSSCs inherit all the properties of BCs, which in turn makes them good candidates for a variety of applications. For applications in wireless communication, we use the rich algebraic structure of BSSCs to construct a low complexity decoding algorithm that is reliable against Gaussian noise. In simulations, BSSCs exhibit an error probability comparable to or slightly lower than that of BCs, both for single-user and multi-user transmissions.


## 1 Introduction

Codebooks of complex projective (Grassmann) lines, or tight frames, have found application in multiple problems of interest for communications and information processing, such as code division multiple access sequence design [1], precoding for multi-antenna transmissions [2] and network coding [3]. Contemporary interest in such codes arises, e.g., from deterministic compressed sensing [4, 5, 6, 7, 8], virtual full-duplex communication [9], mmWave communication [10], and random access [11].

One of the challenges/promises of 5G wireless communication is to enable massive machine-type communications (mMTC) in the Internet of Things (IoT), in which a massive number of low-cost devices sporadically and randomly access the network [12]. In this scenario, users are assigned a unique signature sequence which they transmit whenever active [13]. A twin use-case is unsourced multiple access, where a large number of messages is transmitted infrequently. Polyanskiy [12] proposed a framework in which communication occurs in blocks of channel uses, and the task of the receiver is to correctly identify the small set of active users (messages) out of a much larger total population. Ever since its introduction, there have been several follow-up works [14, 15, 16, 11, 17], extensions to a massive MIMO scenario [18] where the base station has a very large number of antennas, and a discussion on the fundamental limits on what is possible [19].

Given the massive number of to-be-supported (to-be-encoded) users (messages), the design criteria are fundamentally different and one simply cannot rely on classical multiple-access channel solutions. For instance, interference is unavoidable since it is impossible to have orthogonal signatures/codewords. On the other hand, given that only a small number of users is active at any time, the interference is limited. Thus, the challenge becomes to design highly structured codebooks of large cardinality along with a reliable and low-complexity decoding algorithm.

Codebooks of Binary Chirps (BCs) [5, 20] provide such a highly structured Grassmannian line codebook in $N = 2^m$ dimensions with additional desirable properties. All entries come from a small alphabet, each being a fourth root of unity, and can be described in terms of second order Reed-Muller (RM) codes. RM codes have the fascinating property that a Walsh-Hadamard measurement cuts the solution space in half. This yields a single-user decoding complexity of $\mathcal{O}(N \log^2 N)$, coming from the $\mathcal{O}(N \log N)$ Walsh-Hadamard transform and the $\mathcal{O}(\log N)$ required measurements. Additionally, the number of codewords is reasonably large, growing as $2^{m(m+3)/2}$, while the minimum chordal distance is $1/\sqrt{2}$.
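To make this structure concrete, the following sketch generates a Binary Chirp from a binary symmetric matrix $S$ and a binary vector $b$, and can be used to check the coherence claim numerically for small $m$. The function name and the phase convention $i^{v^T S v}(-1)^{b^T v}$ are our own illustrative choices; conventions in the BC literature may differ by a global phase or an index ordering.

```python
import itertools

import numpy as np

def binary_chirp(S, b):
    """Binary Chirp in N = 2^m dimensions (m = len(b)).

    The entry indexed by v in F_2^m is i^{v^T S v} (-1)^{b^T v} / sqrt(N),
    with the quadratic form v^T S v evaluated over the integers, so that
    every entry is a (normalized) fourth root of unity.
    """
    m = len(b)
    N = 2 ** m
    w = np.empty(N, dtype=complex)
    for idx, v in enumerate(itertools.product([0, 1], repeat=m)):
        v = np.array(v)
        w[idx] = (1j) ** int(v @ S @ v) * (-1) ** int(b @ v)
    return w / np.sqrt(N)
```

For $m = 2$, enumerating all $2^{m(m+3)/2} = 32$ chirps and computing their Gram matrix confirms that the maximum pairwise coherence is $2^{-1/2}$.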

We expand the BC codebook to the codebook of Binary Subspace Chirps (BSSCs) by collectively considering all BCs in $2^r$ dimensions, $0 \le r \le m$, embedded in $N = 2^m$ dimensions. That is, given a BC in $2^r$ dimensions, we embed it in $N$ dimensions via a unique on-off pattern determined by a rank-$r$ binary subspace. Thus, a BSSC is characterized by a sparsity level $2^r$, a BC part parametrized by a binary symmetric matrix and a binary vector, and a unique on-off pattern parametrized by a rank-$r$ binary subspace; see (79) for the formal definition. The codebook of BSSCs inherits all the desirable properties of BCs, and in addition, it has asymptotically about 2.384 times more codewords. Thus, an active device with a rank-$r$ signature will transmit during the $2^r$ time slots determined by its rank-$r$ subspace, and will be silent otherwise. This resembles the model of [9], in which active devices can also be used (to listen) as receivers during the off-slots.
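The embedding idea can be sketched as follows: a $2^r$-dimensional chirp is placed on the $2^r$ coordinates indexed by a rank-$r$ binary subspace, and all other coordinates are switched off. This is an illustration of the on-off mechanism only, not the paper's exact parametrization (79); all names and the phase convention are ours.

```python
import itertools

import numpy as np

def embed_bssc(H, S_r, b_r):
    """Illustrative rank-r Binary Subspace Chirp in N = 2^m dimensions.

    A 2^r-dimensional Binary Chirp (parametrized by the r x r binary
    symmetric matrix S_r and the binary vector b_r) is placed on the
    2^r coordinates indexed by the column space of the m x r binary
    matrix H; every other coordinate is off.  H is assumed to have
    full column rank, so the 2^r support indices are distinct.
    """
    m, r = H.shape
    x = np.zeros(2 ** m, dtype=complex)
    for a in itertools.product([0, 1], repeat=r):
        a = np.array(a)
        v = (H @ a) % 2                     # on-coordinate in F_2^m
        idx = int("".join(map(str, v)), 2)  # its binary index
        x[idx] = (1j) ** int(a @ S_r @ a) * (-1) ** int(b_r @ a)
    return x / np.sqrt(2 ** r)
```

The resulting vector is unit norm with exactly $2^r$ nonzero entries, each from the alphabet $\{\pm 1, \pm i\}/2^{r/2}$, matching the minimally extended alphabet described in the abstract.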

Given the structure of BSSCs, a unified estimation technique for the rank, the on-off pattern, and the BC part (in this order) is needed. In [21], a reliable on-off pattern detection was proposed, which made use of a Weyl-type transform [22] on qubit diagonal Pauli matrices; see (103). The algorithm can be described in the common language of symplectic geometry and quantum computation. The key insight here is to view BSSCs as common eigenvectors of maximal sets of commuting Pauli matrices, commonly referred to as stabilizer groups. Indeed, we show that BSSCs are nothing else but stabilizer states [23], and their sparsity is determined by the diagonal portion of the corresponding stabilizer group; see Corollaries 2 and 3. We also show that each BSSC is a column of a unique Clifford matrix (86), which itself corresponds to the common eigenspace of a unique stabilizer group (101); see also Theorem 1. The interplay between the binary world and the complex world is depicted in Figure 1.

Making use of these structural results, the on-off pattern detection of [21] can be generalized to recover the BC part of the BSSC, this time by using the Weyl-type transform on the off-diagonal part of the corresponding stabilizer group. This yields a single-user BSSC reconstruction, described in Algorithm 2. In [24], we added Orthogonal Matching Pursuit (OMP) to obtain a multi-user BSSC reconstruction (see Algorithm 3) with reliable performance when there is a small number of active users. As the number of active users increases, so does the interference, which has a quite destructive effect on the on-off pattern. However, state-of-the-art solutions for BCs [8, 17, 25], such as slotting and patching, can be used to reduce the interference. Preliminary simulations show that BSSCs exhibit a lower error probability than BCs. This is because BSSCs have fewer nearest neighbors on average than BCs. In addition, BSSCs are uniformly distributed over the sphere, which makes them optimal when dealing with Gaussian noise.

Throughout, the decoding complexity is kept in check by the underlying symplectic geometry. The sparsity, the BC part, and the on-off pattern of a BSSC can be described in terms of the Bruhat decomposition (22) of a symplectic matrix. Indeed, the unique Clifford matrix (86) of which a BSSC is a column is parametrized by a coset representative (24), as described in Lemma 1. In turn, such a coset representative determines a unique stabilizer group (101). We use this interplay to reconstruct a BSSC by reconstructing the stabilizer group that stabilizes the given BSSC. This alone yields a significant reduction in complexity.

The paper is organized as follows. In Section 2 we review the basics of binary symplectic geometry and quantum computation. In order to obtain a unique parametrization of BSSCs, we use Schubert cells and the Bruhat decomposition of the symplectic group. In Section 3 we lift the Bruhat decomposition of the symplectic group to obtain a decomposition of the Clifford group. Additionally, we parametrize those Clifford matrices whose columns are BSSCs. In Section 4 we give the formal definition of BSSCs, along with their algebraic and geometric properties. In Sections 5 and 6 we present reliable low complexity decoding algorithms, and discuss simulation results. We end the paper with some conclusions and directions for future research.

### 1.1 Conventions

All vectors, binary or complex, will be columns. $\mathbb{F}_2$ denotes the binary field, $GL(m;2)$ denotes the group of $m \times m$ binary invertible matrices, and $Sym(m;2)$ denotes the group of $m \times m$ binary symmetric matrices. We will denote matrices (resp., vectors) with upper case (resp., lower case) bold letters. $\bullet^T$ will denote the transpose and $\bullet^{-T}$ will denote the inverse transpose. $\mathrm{cs}(\bullet)$ and $\mathrm{rs}(\bullet)$ will denote the column space and the row space of a matrix, respectively. Since all our vectors are columns, we will typically deal with column spaces, except when we work with notions from quantum computation, where row spaces are customary. $I_n$ will denote the $n \times n$ identity matrix (complex or binary). $G(m,r)$ denotes the binary Grassmannian, that is, the set of all $r$-dimensional subspaces of $\mathbb{F}_2^m$. $U(N)$ denotes the set of $N \times N$ unitary complex matrices and $\bullet^\dagger$ will denote the conjugate transpose of a matrix.

## 2 Preliminaries

In this section we will introduce all the preliminary notions needed for navigating the connection between the $2m$-dimensional binary world and the $N = 2^m$-dimensional complex world, as depicted in Figure 1. The primary bridge used here is the well-known homomorphism (62) and the Bruhat decomposition of the symplectic group. We focus on cosets of the symplectic group modulo the semidirect product $\mathcal{P}$ of (18). These cosets are characterized by a rank $0 \le r \le m$ and a binary subspace, which we will think of as the column space of an $m \times r$ binary matrix in column reduced echelon form. We will use Schubert cells as a formal and systematic approach. This also provides a framework for describing well-known facts from binary symplectic geometry (e.g., Remark 4). Finally, Subsection 2.3 discusses common notions from quantum computation.

### 2.1 Schubert Cells

Here we discuss the Schubert decomposition of the Grassmannian $G(m,r)$ with respect to the standard flag

 \{0\} = V_0 \subset V_1 \subset \cdots \subset V_m, (1)

where $V_i = \langle e_1, \dots, e_i \rangle$ and $\{e_1, \dots, e_m\}$ is the standard basis of $\mathbb{F}_2^m$. Fix a set of indices $I = \{i_1, \dots, i_r\} \subseteq \{1, \dots, m\}$, which, without loss of generality, we assume to be in increasing order. The Schubert cell $C_I$ is the set of all $m \times r$ matrices that have 1 in the leading positions $i_1, \dots, i_r$, 0 to the left, right, and above each leading position, and every other entry free. This is simply the set of all $m \times r$ binary matrices in column reduced echelon form with leading positions $I$. By counting the number of free entries in each column one concludes that

 \dim C_I = \sum_{j=1}^{r} \big[ (m - i_j) - (r - j) \big]. (2)
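The counting behind (2) is easy to confirm by brute force for small $m$; the helper names below are ours. For $m = 3$ and $r = 2$, the cells $C_{\{1,2\}}, C_{\{1,3\}}, C_{\{2,3\}}$ have dimensions 2, 1, 0, and the cell sizes $2^{\dim C_I}$ sum to 7, the total number of 2-dimensional subspaces of $\mathbb{F}_2^3$.

```python
from itertools import combinations

def schubert_dim(m, I):
    """dim C_I = sum_{j=1}^{r} (m - i_j) - (r - j), with I 1-based, increasing."""
    r = len(I)
    return sum((m - i) - (r - j) for j, i in enumerate(I, start=1))

def free_positions(m, I):
    """Free entries of an m x r column reduced echelon matrix with leading
    rows I: in column j, the rows strictly below the leading row i_j that
    are not themselves leading rows of some column."""
    lead = set(I)
    return [(row, j) for j, i in enumerate(I, start=1)
            for row in range(i + 1, m + 1) if row not in lead]
```

Enumerating the free positions directly reproduces the dimension formula, and summing $2^{\dim C_I}$ over all index sets recovers the cardinality of the Grassmannian, which is the point of the Schubert decomposition.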

Fix $U \in G(m,r)$, and think of it as the column space of an $m \times r$ matrix. After column operations, it will belong to some cell $C_I$, and to emphasize this fact, we will denote it as $C_I(U)$. Schubert cells have a well-known duality theory which we outline next. Let $I^c$ be such that $I \cup I^c = \{1, \dots, m\}$. Of course $|I^c| = m - r$. Let $\hat{I} = \{m + 1 - i \mid i \in I^c\}$ and put $\hat{C}_{I^c} := C_{\hat{I}}$. There is a bijection between $C_{I^c}$ and $C_{\hat{I}}$, realized by reverting the rows and columns and identifying $\hat{C}_{I^c}$ with $C_{\hat{I}}$. With this identification, we will denote by $\tilde{C}_I(U)$ the unique element of cell $\hat{C}_{I^c}$ that is equivalent with $C_I(U)$, obtained by reverting the rows and columns:

where $J$ is the antidiagonal matrix in the respective dimensions.

Each cell has a distinguished element: $I_I$ will denote the identity matrix $I_m$ restricted to the columns indexed by $I$, that is, the unique element in $C_I$ that has all the free entries 0. Note that $I_I$ has as $j$th column the $i_j$th column of $I_m$, and thus its non-zero rows form $I_r$. In particular, if $I = \{1, \dots, r\}$ then the non-zero rows of $I_I$ are the first $r$ rows. We also have that $\tilde{I}_I$ consists of the columns of $I_m$ indexed by $I^c$. With this notation one easily verifies that

 (I_I)^T H_I = I_r, \qquad (I_I)^T \tilde{I}_I = 0, \qquad (\tilde{H}_I)^T \tilde{I}_I = I_{m-r}. (4)

$C_I(U)$ can be completed to an invertible matrix

 (5)

Note that when $I_I$ is completed to an invertible matrix as in (5), it gives rise to a permutation matrix. Next, (4) along with the default equality implies that

 (6)

Let us describe this framework with an example.

###### Example 1.

Let $m = 3$ and $r = 2$. Then

 C_{\{1,2\}} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ u & v \end{bmatrix}, \quad \tilde{C}_{\{1,2\}} = \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}, \quad C_{\widehat{\{3\}}} \cong C_{\{1\}} = \begin{bmatrix} 1 \\ v \\ u \end{bmatrix},

 C_{\{1,3\}} = \begin{bmatrix} 1 & 0 \\ u & 0 \\ 0 & 1 \end{bmatrix}, \quad \tilde{C}_{\{1,3\}} = \begin{bmatrix} u \\ 1 \\ 0 \end{bmatrix}, \quad C_{\widehat{\{2\}}} \cong C_{\{2\}} = \begin{bmatrix} 0 \\ 1 \\ u \end{bmatrix},

 C_{\{2,3\}} = \begin{bmatrix} 0 & 0 \\ 1 & 0 \\ 0 & 1 \end{bmatrix}, \quad \tilde{C}_{\{2,3\}} = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \quad C_{\widehat{\{1\}}} \cong C_{\{3\}} = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}.

Let us focus on $C_{\{1,3\}}$. The matrix $C_{\{1,3\}}$ is constructed directly by definition, that is, in column reduced echelon form with leading positions 1 and 3. Whereas, $\tilde{C}_{\{1,3\}}$ is constructed so that $(C_{\{1,3\}})^T \tilde{C}_{\{1,3\}} = 0$. Then we revert the rows and columns (only rows in this case) to obtain the last object, where we identify (in this specific case there is no need for identification, but this is only a coincidence; for different choices of $I$ one needs a true identification) $C_{\widehat{\{2\}}}$ with $C_{\{2\}}$.

In this case, as we see from above, there is only one free bit. This yields two subspaces/matrices $C_{\{1,3\}}(u = 0)$ and $C_{\{1,3\}}(u = 1)$, which when completed to an invertible matrix as in (5) yield

 P_{u=0} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix}, \qquad P_{u=1} = \begin{bmatrix} 1 & 0 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix}. (7)

Then one directly computes

 (8)

Compare (8) with (6); the first two columns are obviously those of $C_{\{1,3\}}$, whereas the last column is precisely $\tilde{C}_{\{1,3\}}$ with rows reverted. Note here that when all the free bits are zero the resulting $P$ is simply a permutation matrix, and in this case $P^{-T} = P$.

### 2.2 Bruhat Decomposition of the Symplectic Group

We first briefly describe the symplectic structure of $\mathbb{F}_2^{2m}$ via the symplectic bilinear form

 \langle a, b \,|\, c, d \rangle_s := b^T c + a^T d. (9)

One is naturally interested in automorphisms that preserve this symplectic structure. It follows directly from the definition that a matrix $F$ preserves $\langle \cdot \,|\, \cdot \rangle_s$ iff $F \Omega F^T = \Omega$, where

 \Omega = \begin{bmatrix} 0_m & I_m \\ I_m & 0_m \end{bmatrix}. (10)

We will denote the group of all such symplectic matrices by $Sp(2m;2)$. Equivalently,

 F = \begin{bmatrix} A & B \\ C & D \end{bmatrix} \in Sp(2m;2) (11)

iff $A B^T$ and $C D^T$ are symmetric and $A D^T + B C^T = I_m$. It is well-known that

 |Sp(2m;2)| = 2^{m^2} \prod_{i=1}^{m} (4^i - 1). (12)

Consider the row space $\mathrm{rs}[A \; B]$ of the upper half of a symplectic matrix $F$. Because $A B^T$ is symmetric, any two rows of $[A \; B]$ have vanishing symplectic inner product (9). We will denote by $\bullet^{\perp_s}$ the dual with respect to the symplectic inner product (9). It follows that $\mathrm{rs}[A \; B] \subseteq \mathrm{rs}[A \; B]^{\perp_s}$, that is, $\mathrm{rs}[A \; B]$ is self-orthogonal or totally isotropic. Moreover, $\mathrm{rs}[A \; B]$ is maximal totally isotropic because $\dim \mathrm{rs}[A \; B] = m$, and thus $\mathrm{rs}[A \; B] = \mathrm{rs}[A \; B]^{\perp_s}$. The set of all self-dual/maximal totally isotropic subspaces is commonly referred to as the Lagrangian Grassmannian $L(2m, m)$. It is well-known that

 |L(2m, m)| = \prod_{i=1}^{m} (2^i + 1). (13)
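Both counting formulas are easy to confirm by exhaustive search in the smallest case; the helper names below are ours. For $m = 1$, (12) predicts $2 \cdot 3 = 6$ symplectic matrices, and (13) predicts $3$ Lagrangian subspaces (indeed, every line in $\mathbb{F}_2^2$ is isotropic).

```python
import itertools

import numpy as np

def is_symplectic(F, m):
    """Check F Omega F^T = Omega over F_2, with Omega as in (10)."""
    Z, I = np.zeros((m, m), dtype=int), np.eye(m, dtype=int)
    Om = np.block([[Z, I], [I, Z]])
    return np.array_equal((F @ Om @ F.T) % 2, Om)

def count_symplectic(m):
    """Brute-force count of Sp(2m;2); feasible only for tiny m."""
    n = 2 * m
    return sum(is_symplectic(np.array(bits).reshape(n, n), m)
               for bits in itertools.product([0, 1], repeat=n * n))
```

Running `count_symplectic(1)` enumerates all sixteen $2 \times 2$ binary matrices and retains exactly the six symplectic ones, in agreement with (12).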

For reasons that will become clear later on, we are interested in decomposing symplectic matrices into more elementary symplectic matrices, and we will do this via the Bruhat decomposition of $Sp(2m;2)$. While the decomposition holds in a general group-theoretic setting [26], here we give a rather elementary approach; see also [27]. We start the decomposition by writing

 Sp(2m;2) = \bigcup_{r=0}^{m} \mathcal{C}_r, (14)

where

 \mathcal{C}_r = \left\{ F = \begin{bmatrix} A & B \\ C & D \end{bmatrix} \in Sp(2m;2) \;\middle|\; \mathrm{rank}\, C = r \right\}. (15)

In $Sp(2m;2)$ there are two distinguished subgroups:

 SD := \left\{ F_D(P) := \begin{bmatrix} P & 0 \\ 0 & P^{-T} \end{bmatrix} \;\middle|\; P \in GL(m;2) \right\}, (16)

 SU := \left\{ F_U(S) := \begin{bmatrix} I_m & S \\ 0 & I_m \end{bmatrix} \;\middle|\; S \in Sym(m;2) \right\}. (17)

Let $\mathcal{P}$ be the semidirect product of $SD$ and $SU$, that is,

 \mathcal{P} = \{ F_D(P) F_U(S) \mid P \in GL(m;2),\; S \in Sym(m;2) \}. (18)

Note that the order of the multiplication doesn't matter since

 F_D(P) F_U(S) = F_U(P S P^T) F_D(P), (19)

and $P S P^T$ is again symmetric. It is straightforward to verify that $\mathcal{C}_0 = \mathcal{P}$, and that in general

 \mathcal{C}_r = \{ F_1 F_\Omega(r) F_2 \mid F_1, F_2 \in \mathcal{P} \}, (20)

where

 F_\Omega(r) = \begin{bmatrix} I_{m|-r} & I_{m|r} \\ I_{m|r} & I_{m|-r} \end{bmatrix}, (21)

with $I_{m|r}$ being the $m \times m$ block matrix with $I_r$ in the upper left corner and 0 elsewhere, and $I_{m|-r} := I_m + I_{m|r}$. Note here that $F_\Omega(0) = I_{2m}$ and $F_\Omega(m) = \Omega$. Then it follows by (20) (and by (19)) that every $F \in \mathcal{C}_r$ can be written as

 F = F_D(P_1) F_U(S_1) F_\Omega(r) F_D(P_2) F_U(S_2). (22)

The above constitutes the Bruhat decomposition of a symplectic matrix; see also [28, 24].
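The commutation rule (19), which drives the whole decomposition, can also be checked numerically. The block forms used below for $F_D$ and $F_U$ follow the standard convention assumed for (16)-(17); the $\mathbb{F}_2$ inversion helper is ours.

```python
import numpy as np

def gf2_inv(P):
    """Inverse of an invertible binary matrix over F_2 (Gauss-Jordan)."""
    m = len(P)
    A = np.concatenate([P % 2, np.eye(m, dtype=int)], axis=1)
    for col in range(m):
        piv = next(r for r in range(col, m) if A[r, col])
        A[[col, piv]] = A[[piv, col]]          # move a pivot into place
        for r in range(m):
            if r != col and A[r, col]:          # clear the other rows
                A[r] = (A[r] + A[col]) % 2
    return A[:, m:]

def F_D(P):
    """Block matrix diag(P, P^{-T}), the form assumed for (16)."""
    Z = np.zeros_like(P)
    return np.block([[P, Z], [Z, gf2_inv(P).T]])

def F_U(S):
    """Block matrix [[I, S], [0, I]], the form assumed for (17)."""
    I, Z = np.eye(len(S), dtype=int), np.zeros_like(S)
    return np.block([[I, S], [Z, I]])
```

With these helpers one checks, for random invertible $P$ and symmetric $S$, that $F_D(P) F_U(S)$ and $F_U(P S P^T) F_D(P)$ agree modulo 2, which is exactly (19).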

###### Remark 1.

It was shown in [29] that a symplectic matrix can be decomposed as

 F = F_D(P_1) F_U^T(S_1) \Omega F_\Omega(r) F_U(S_2) F_D(P_2). (23)

If we instead decompose as in (23), we see that (23) can be reduced to (22). This reduction from a seven-component decomposition to a five-component decomposition is beneficial in quantum circuit design [28, 30].

In what follows we will focus on the right action of $\mathcal{P}$ on $Sp(2m;2)$, that is, the right cosets in the quotient $Sp(2m;2)/\mathcal{P}$. It is an immediate consequence of (22) and (19) that a coset representative will look like

 F_D(P) F_U(S) F_\Omega(r), (24)

for some rank $r$, invertible $P$, and symmetric $S$. However, two different invertibles may yield representatives of the same coset. We make this precise below.

###### Lemma 1.

A right coset in $Sp(2m;2)/\mathcal{P}$ is uniquely characterized by a rank $0 \le r \le m$, an $r \times r$ symmetric matrix $S_r$, and an $r$-dimensional subspace in $\mathbb{F}_2^m$.

###### Proof.

Write a coset representative as in (24). This immediately determines $r$. Next, write $S$ in block form

 S = \begin{bmatrix} S_r & X \\ X^T & S_{m-r} \end{bmatrix}, (25)

where $S_r$ and $S_{m-r}$ are symmetric. Denote by $\tilde{S}_r$ and $\hat{S}_{m-r}$ the $m \times m$ matrices that have $S_r$ and $S_{m-r}$ in the upper left and lower right corner, respectively, and 0 otherwise. Put also

 \tilde{X} = \begin{bmatrix} I_r & 0 \\ X^T & I_{m-r} \end{bmatrix}. (26)

With this notation we have

 F_U(S) F_\Omega(r) = F_U(\tilde{S}_r) F_\Omega(r) F_U(\hat{S}_{m-r}) F_D(\tilde{X}). (27)

In other words, $F_U(S) F_\Omega(r)$ and $F_U(\tilde{S}_r) F_\Omega(r)$ belong to the same coset. Now consider an invertible

 \tilde{P} = \begin{bmatrix} P_r & 0 \\ 0 & P_{m-r} \end{bmatrix}. (28)

It is also straightforward to verify that

 F_U(\tilde{S}_r) F_\Omega(r) F_D(\tilde{P}) = F_U(\tilde{S}_r) F_D(\hat{P}) F_\Omega(r) = F_D(\hat{P}) F_U(\hat{P}^{-1} \tilde{S}_r \hat{P}^{-T}) F_\Omega(r),

where

 \hat{P} = \begin{bmatrix} P_r^{-T} & 0 \\ 0 & P_{m-r} \end{bmatrix}, (29)

and the second equality follows by (19). Thus $F_D(P_1) F_U(S_1) F_\Omega(r)$, where

 P_1 := P \hat{P}, \qquad S_1 := \hat{P}^{-1} \tilde{S}_r \hat{P}^{-T}, (30)

represents the same coset. Note that the transformation (30) doesn't change the column space of the lower left corner of the representative, which is an $r$-dimensional subspace in $\mathbb{F}_2^m$.

Next, using Schubert cells, we will choose a canonical coset representative. We will use the same notation as in the above lemma. Let $r$ and $S_r$ be as above. To choose $P$, think of the $r$-dimensional subspace from the above lemma as the column space of an $m \times r$ matrix, which belongs to some Schubert cell $C_I$. We will use the coset representative

 F_O(P_I, S_r) := F_D(P_I) F_U(\tilde{S}_r) F_\Omega(r), (31)

where $P_I$ is as in (5).

Let $F \in Sp(2m;2)$ be in block form as in (11), and assume it is written as

 F = F_D(P^{-T}) F_U(\tilde{S}_r) F_\Omega(r) F_D(M) F_U(S). (32)

Multiplying both sides of (32) on the left with $F_D(P^T)$ and on the right with $F_U(S)^{-1} = F_U(S)$, and then comparing the respective blocks, we obtain

 P^T A = (\tilde{S}_r + I_{m|-r}) M, (33)
 P^T A S = P^T B + I_{m|r} M^{-T}, (34)
 P^{-1} C = I_{m|r} M, (35)
 P^{-1} C S = P^{-1} D + I_{m|-r} M^{-T}, (36)

which we can solve for $M$ and $S$, while assuming that we know $P$ (and implicitly $r$, which can be determined by the column space of the lower-left block $C$). First we find $M$. For this, recall that $\tilde{S}_r$ has nonzero entries only on the upper left block. Thus, it follows by (33) that the last $m - r$ rows of $M$ coincide with the last $m - r$ rows of $P^T A$. Similarly, it follows from (35) that the first $r$ rows of $M$ coincide with the first $r$ rows of $P^{-1} C$. With $M$ in hand we have

 \tilde{S}_r = P^T A M^{-1} + I_{m|-r}. (37)

By using (35) in (36) we see that the first $r$ rows of $M S$ coincide with the first $r$ rows of $P^{-1} D$. Similarly, by using (33) in (34), we see that the last $m - r$ rows of $M S$ coincide with the last $m - r$ rows of $P^T B$. Multiplication with $M^{-1}$ yields $S$. We collect everything in Algorithm 1, which gives not only the Bruhat decomposition but also a canonical coset representative.

We end this section with a few remarks.

###### Remark 2.

One can follow an analogous path by considering the left action of $\mathcal{P}$ on $Sp(2m;2)$. This follows most directly from the observation that if $F$ is a right coset representative then $F^{-1}$ is a left coset representative.

###### Remark 3.

Note that for the extremal case $r = m$, a coset representative as in (31) is completely determined by a symmetric matrix $S$, since in this case, as one would recall, $F_\Omega(m) = \Omega$.

###### Remark 4.

Directly from the definition we have

 |\mathcal{P}| = |GL(m;2)| \cdot |Sym(m;2)| = 2^{m^2} \prod_{i=1}^{m} (2^i - 1), (38)

which combined with (12) yields

 |Sp(2m;2)/\mathcal{P}| = \prod_{i=1}^{m} (2^i + 1) = |L(2m, m)|. (39)

The above is of course not a coincidence. Indeed, $Sp(2m;2)$ acts transitively from the right on $L(2m, m)$. Next, consider $\mathrm{rs}[0_m \; I_m] \in L(2m, m)$. If a symplectic matrix $F$ as in (11) fixes this space, then $C = 0_m$ and $A$ is invertible. Additionally, because $F$ is symplectic to start with, we obtain $D = A^{-T}$ and $A^{-1} B$ is symmetric. Thus $F = F_D(A) F_U(A^{-1} B)$, and $F \in \mathcal{P}$. That is, $\mathcal{P}$ is the stabilizer (in a group action terminology) of $\mathrm{rs}[0_m \; I_m]$. The mapping $Sp(2m;2)/\mathcal{P} \longrightarrow L(2m, m)$, given by

 F_O(P_I, S_r) \longmapsto \mathrm{rs}\!\left[\, I_{m|r} P_I^T \;\middle|\; (I_{m|r} \tilde{S}_r + I_{m|-r}) P_I^{-1} \,\right], (40)

is well-defined (because, as mentioned, the upper half of a symplectic matrix is maximal isotropic). It is also injective, and thus bijective due to cardinality reasons. Of course one can have many bijections but we choose this one due to Theorem 1.
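The orders appearing in (38)-(39) can be sanity-checked numerically; the helper names below are ours, and the brute-force count uses the fact that a binary matrix is invertible over $\mathbb{F}_2$ iff its integer determinant is odd.

```python
import itertools

import numpy as np

def gl_order(m):
    """|GL(m;2)| via the standard product formula."""
    return int(np.prod([2 ** m - 2 ** i for i in range(m)]))

def count_gl_bruteforce(m):
    """Count invertible m x m binary matrices exhaustively.

    A binary matrix is invertible over F_2 iff its integer determinant
    is odd; feasible only for very small m.
    """
    count = 0
    for bits in itertools.product([0, 1], repeat=m * m):
        A = np.array(bits).reshape(m, m)
        if round(np.linalg.det(A)) % 2 == 1:
            count += 1
    return count
```

Together with $|Sym(m;2)| = 2^{m(m+1)/2}$, this reproduces (38), and dividing (12) by (38) recovers the Lagrangian count (39).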

### 2.3 The Heisenberg-Weyl Group

Fix $N = 2^m$, and let $\{e_0, e_1\}$ be the standard basis of $\mathbb{C}^2$, which is commonly referred to as the computational basis. For $v = (v_1, \dots, v_m)^T \in \mathbb{F}_2^m$ set $e_v := e_{v_1} \otimes \cdots \otimes e_{v_m}$. Then $\{e_v \mid v \in \mathbb{F}_2^m\}$ is the standard basis of $\mathbb{C}^N$. The Pauli matrices are

 \sigma_x = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}, \qquad \sigma_z = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}. (41)

For $a = (a_1, \dots, a_m)^T, b = (b_1, \dots, b_m)^T \in \mathbb{F}_2^m$ put

 D(a, b) := \sigma_x^{a_1} \sigma_z^{b_1} \otimes \cdots \otimes \sigma_x^{a_m} \sigma_z^{b_m}. (42)

Directly by definition we have

 D(a, 0)\, e_v = e_{v+a}, \qquad D(0, b)\, e_v = (-1)^{b^T v} e_v, (43)

and thus the former is a permutation matrix whereas the latter is a diagonal matrix. Then

 D(a, b) D(c, d) = (-1)^{b^T c} D(a + c, b + d). (44)

Thanks to (44) we have

 D(a, b) D(c, d) = (-1)^{b^T c + a^T d} D(c, d) D(a, b). (45)

In turn, $D(a, b)$ and $D(c, d)$ commute iff

 \langle a, b \,|\, c, d \rangle_s := b^T c + a^T d = 0, (46)

that is, iff $(a, b)$ and $(c, d)$ are orthogonal with respect to the symplectic inner product (9). Also thanks to (44), the set

 HW_N := \{ i^k D(a, b) \mid a, b \in \mathbb{F}_2^m,\; k = 0, 1, 2, 3 \} (47)

is a subgroup of $U(N)$, called the Heisenberg-Weyl group. We will call its elements Pauli matrices as well. Directly from the definition, we have a surjective homomorphism of groups

 \Psi_N : HW_N \longrightarrow \mathbb{F}_2^{2m}, \qquad i^k D(a, b) \longmapsto (a, b). (48)

Its kernel is $\{\pm I_N, \pm i I_N\}$. We will denote by $PHW_N := HW_N / \{\pm I_N, \pm i I_N\}$ the projective Heisenberg-Weyl group, and by $\psi_N$ the induced isomorphism $PHW_N \cong \mathbb{F}_2^{2m}$.
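The multiplication rule (44), and hence the commutation relation (45), can be verified directly from the definition (42); the sketch below is an illustrative check with our own helper names.

```python
import itertools

import numpy as np

sx = np.array([[0, 1], [1, 0]])
sz = np.array([[1, 0], [0, -1]])

def D(a, b):
    """D(a,b) = sx^{a_1} sz^{b_1} (x) ... (x) sx^{a_m} sz^{b_m}, as in (42)."""
    M = np.eye(1, dtype=int)
    for ai, bi in zip(a, b):
        M = np.kron(M, np.linalg.matrix_power(sx, int(ai)) @
                       np.linalg.matrix_power(sz, int(bi)))
    return M
```

Exhausting all pairs for $m = 2$ confirms (44), and therefore also the commutation criterion (46): two of these Pauli matrices commute precisely when their binary labels are symplectically orthogonal.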

Note that (46) defines a nondegenerate bilinear form on $\mathbb{F}_2^{2m}$ that translates commutativity in $HW_N$ to orthogonality in $\mathbb{F}_2^{2m}$. A commutative subgroup $S \leq HW_N$ is called a stabilizer group if $-I_N \notin S$. Thus, for a stabilizer $S$, thanks to (46) we have that $\Psi_N(S)$ is a self-orthogonal subspace of $\mathbb{F}_2^{2m}$ [31, 32]. In addition, because $\Psi_N$ restricted to a stabilizer is an isomorphism, we have that $|S| = 2^r$ iff $\dim \Psi_N(S) = r$. We will think of $\Psi_N(S)$ as the row space of a full rank matrix $[A \;\; B]$, where both $A$ and $B$ are $r \times m$ binary matrices. We will write

 E(A, B) := \{ E(x^T A, x^T B) \mid x \in \mathbb{F}_2^r \}, (49)

where $E(a, b) := i^{a^T b} D(a, b)$. Combining this with (43) and (44) we obtain

 E(a, b) = i^{a^T b} \sum_{v \in \mathbb{F}_2^m} (-1)^{b^T v} e_{v+a} e_v^T. (50)

Next, if $\mathrm{rs}[A \;\; B]$ is self-orthogonal in $\mathbb{F}_2^{2m}$ then $E(A, B)$ is a stabilizer. Moreover, $\Psi_N(E(A, B)) = \mathrm{rs}[A \;\; B]$, which yields a one-to-one correspondence between stabilizers in $HW_N$ and self-orthogonal subspaces in $\mathbb{F}_2^{2m}$. It also follows that a maximal stabilizer must have $2^m$ elements. Thus there is a one-to-one correspondence between maximal stabilizers and the Lagrangian Grassmannian $L(2m, m)$. Of particular interest are the maximal stabilizers

 X_N := E(I_m, 0_m) = \{ E(a, 0) \mid a \in \mathbb{F}_2^m \}, (51)
 Z_N := E(0_m, I_m) = \{ E(0, b) \mid b \in \mathbb{F}_2^m \}, (52)

which we naturally identify with $\mathrm{rs}[I_m \;\; 0_m]$ and $\mathrm{rs}[0_m \;\; I_m]$.

What follows holds in general for any stabilizer, but for our purposes we need only focus on the maximal ones. Let $S$ be a maximal stabilizer and let $E_1, \dots, E_m$ be an independent generating set of $S$ (that is, $S = \langle E_1, \dots, E_m \rangle$). Consider the complex vector space [33]

 V(S) := \{ v \in \mathbb{C}^N \mid E_i v = v,\; i = 1, \dots, m \}. (53)

It is well-known (see, e.g., [34]) that $\dim V(S) = 1$. A unit norm vector that generates it is called a stabilizer state, and with a slight abuse of notation is also denoted by $V(S)$. Because we are disregarding scalars, it is beneficial to think of a stabilizer state as a Grassmannian line, that is, a point in the complex Grassmannian of lines in $\mathbb{C}^N$. Next,

 \Pi_S := \prod_{i=1}^{m} \frac{I_N + E_i}{2} = \frac{1}{N} \sum_{E \in S} E (54)

is a projection onto $V(S)$.
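As an illustration of (54), the sketch below builds $\Pi_S$ for the two distinguished maximal stabilizers of (51)-(52): $Z_N$ projects onto the computational basis vector $e_0$, while $X_N$ projects onto the uniform superposition. Helper names are ours; for a general stabilizer one must also track the sign pattern of the generators, which we sidestep here by sticking to these two sign-free examples.

```python
import itertools

import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def D(a, b):
    M = np.eye(1, dtype=complex)
    for ai, bi in zip(a, b):
        M = np.kron(M, np.linalg.matrix_power(sx, int(ai)) @
                       np.linalg.matrix_power(sz, int(bi)))
    return M

def E(a, b):
    """Hermitian Pauli matrix E(a,b) = i^{a^T b} D(a,b)."""
    return (1j) ** int(np.dot(a, b)) * D(a, b)

def stabilizer_projector(A, B):
    """(1/N) sum of E(x^T A, x^T B) over x in F_2^m, as in (54)."""
    m = A.shape[0]
    N = 2 ** m
    P = np.zeros((N, N), dtype=complex)
    for x in itertools.product([0, 1], repeat=m):
        x = np.array(x)
        P += E((x @ A) % 2, (x @ B) % 2)
    return P / N
```

In both cases the result is a rank-one orthogonal projection, consistent with $\dim V(S) = 1$.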

Given a stabilizer $S$ as above, for any $d \in \mathbb{F}_2^m$, the set generated by $(-1)^{d_1} E_1, \dots, (-1)^{d_m} E_m$ also describes a stabilizer $S_d$. Similarly to (54) put

 \Pi_{S_d} := \prod_{i=1}^{m} \frac{I_N + (-1)^{d_i} E_i}{2} = \frac{1}{N} \sum_x