Codebooks of complex projective (Grassmann) lines, or tight frames, have found application in multiple problems of interest in communications and information processing, such as code division multiple access sequence design, precoding for multi-antenna transmissions, and network coding. Contemporary interest in such codes arises, e.g., from deterministic compressed sensing [4, 5, 6, 7, 8], virtual full-duplex communication, mmWave communication, and random access.
One of the challenges/promises of 5G wireless communication is to enable massive machine-type communications (mMTC) in the Internet of Things (IoT), in which a massive number of low-cost devices sporadically and randomly access the network. In this scenario, users are assigned a unique signature sequence which they transmit whenever active. A twin use-case is unsourced multiple access, where a large number of messages is transmitted infrequently. Polyanskiy proposed a framework in which communication occurs in blocks of channel uses, and the task of the receiver is to correctly identify the active users (messages) out of the total population, with one particular parameter regime being of special interest. Ever since its introduction, there have been several follow-up works [14, 15, 16, 11, 17], extensions to a massive MIMO scenario where the base station has a very large number of antennas, and a discussion of the fundamental limits on what is possible.
Given the massive number of to-be-supported (to-be-encoded) users (messages), the design criteria are fundamentally different, and one simply cannot rely on classical multiple-access channel solutions. For instance, interference is unavoidable since it is impossible to have orthogonal signatures/codewords. On the other hand, given that there is a small number of active users, the interference is limited. Thus, the challenge becomes to design highly structured codebooks of large cardinality along with a reliable and low-complexity decoding algorithm.
Codebooks of Binary Chirps (BCs) [5, 20] provide such a highly structured Grassmannian line codebook with additional desirable properties. All entries come from a small alphabet, each being a fourth root of unity, and can be described in terms of second-order Reed-Muller (RM) codes. RM codes have the fascinating property that a Walsh-Hadamard measurement cuts the solution space in half. This yields a low single-user decoding complexity, driven by the Walsh-Hadamard transform and the number of required measurements. Additionally, the number of codewords is reasonably large, while the minimum chordal distance remains substantial.
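For concreteness, the chirp construction can be sketched in a few lines of Python. The phase convention i^{x^T S x + 2 b^T x} and the normalization used here are assumptions of this sketch; the paper's exact definition is fixed in (79).

```python
def binary_chirp(S, b):
    """Binary chirp of length N = 2^m; entries are (normalized) fourth
    roots of unity determined by a binary symmetric matrix S and a
    binary vector b. A sketch, not the paper's exact convention."""
    m = len(b)
    N = 2 ** m
    w = []
    for idx in range(N):
        x = [(idx >> j) & 1 for j in range(m)]   # binary label of the index
        xSx = sum(x[i] * S[i][j] * x[j] for i in range(m) for j in range(m))
        bx = sum(b[i] * x[i] for i in range(m))
        w.append(1j ** ((xSx + 2 * bx) % 4) / N ** 0.5)
    return w

# m = 2, so N = 4; S symmetric and b binary parametrize the codeword
S = [[1, 1], [1, 0]]
b = [0, 1]
w = binary_chirp(S, b)
assert abs(sum(abs(c) ** 2 for c in w) - 1) < 1e-12      # unit norm
assert all(abs(abs(c) - 0.5) < 1e-12 for c in w)         # flat magnitude
```

Every entry has flat magnitude and a quaternary phase, illustrating the small-alphabet property claimed above.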
We expand the BC codebook to the codebook of Binary Subspace Chirps (BSSCs) by collectively considering all lower-dimensional BCs inside the ambient dimension. That is, given a BC in a lower dimension, we embed it in the ambient dimension via a unique on-off pattern determined by a binary subspace of the corresponding rank.
Thus, a BSSC is characterized by a sparsity level, a BC part parametrized by a binary symmetric matrix and a binary vector, and a unique on-off pattern parametrized by a binary subspace of the corresponding rank; see (79) for the formal definition. The codebook of BSSCs inherits all the desirable properties of BCs and, in addition, asymptotically has about 2.384 times as many codewords. Thus, an active device with a given signature will transmit during the time slots determined by the corresponding subspace, and will be silent otherwise. This resembles the model of , in which active devices can also be used (to listen) as receivers during the off-slots.
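A minimal sketch of this on-off embedding, under the hypothetical assumption that the support consists of the indices whose binary labels lie in the chosen subspace (the paper's exact indexing is fixed in (79)):

```python
def embed_on_off(bc, basis, m):
    """Embed a 2^r-dimensional BC into 2^m dimensions, supported on the
    binary subspace spanned by `basis` (an illustrative indexing)."""
    r = len(basis)
    assert len(bc) == 2 ** r
    v = [0j] * (2 ** m)
    for t in range(2 ** r):
        x = [0] * m
        for j in range(r):                 # combine basis vectors over F_2
            if (t >> j) & 1:
                x = [(xi + bi) % 2 for xi, bi in zip(x, basis[j])]
        idx = sum(bit << i for i, bit in enumerate(x))
        v[idx] = bc[t]
    return v

# rank-1 subspace of F_2^3 spanned by (1, 0, 1): on-off support {000, 101}
bc = [1 / 2 ** 0.5, -1 / 2 ** 0.5]
v = embed_on_off(bc, [[1, 0, 1]], m=3)
assert sum(1 for c in v if c != 0) == 2    # sparsity 2^r = 2
assert v[0] != 0 and v[5] != 0             # labels 000 and 101
```

The device transmits only on the slots in the support of `v`, matching the on-off transmission pattern described above.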
Given the structure of BSSCs, a unified estimation technique for the rank, the on-off pattern, and the BC part (in this order) is needed. In , a reliable on-off pattern detection was proposed, which made use of a Weyl-type transform on qubit diagonal Pauli matrices; see (103). The algorithm can be described in the common language of symplectic geometry and quantum computation. The key insight here is to view BSSCs as common eigenvectors of maximal sets of commuting Pauli matrices, commonly referred to as stabilizer groups. Indeed, we show that BSSCs are nothing else but stabilizer states, and their sparsity is determined by the diagonal portion of the corresponding stabilizer group; see Corollaries 2 and 3. We also show that each BSSC is a column of a unique Clifford matrix (86), and spans the common eigenspace of a unique stabilizer group (101); see also Theorem 1. The interplay between the binary world and the complex world is depicted in Figure 1.
Making use of these structural results, the on-off pattern detection of  can be generalized to recover the BC part of the BSSC, this time by using the Weyl-type transform on the off-diagonal part of the corresponding stabilizer group.
This yields a single-user BSSC reconstruction as described in Algorithm 2.
In , we added Orthogonal Matching Pursuit (OMP) to obtain a multi-user BSSC reconstruction (see Algorithm 3) with reliable performance when the number of active users is small.
As the number of active users increases, so does the interference, which has a quite destructive effect on the on-off pattern. However, state-of-the-art solutions for BCs [8, 17, 25], such as slotting and patching, can be used to reduce the interference.
Preliminary simulations show that BSSCs exhibit a lower error probability than BCs. This is because BSSCs have fewer closest neighbors on average than BCs. In addition, BSSCs are uniformly distributed over the sphere, which makes them optimal when dealing with Gaussian noise.
Throughout, the decoding complexity is kept at bay by exploiting the underlying symplectic geometry. The sparsity, the BC part, and the on-off pattern of a BSSC can be described in terms of the Bruhat decomposition (22) of a symplectic matrix. Indeed, the unique Clifford matrix (86) of which a BSSC is a column is parametrized by a coset representative (24), as described in Lemma 1. In turn, such a coset representative determines a unique stabilizer group (101). We use this interplay to reconstruct a BSSC by reconstructing the stabilizer group that stabilizes it. This alone yields a substantial reduction in complexity.
The paper is organized as follows. In Section 2 we review the basics of binary symplectic geometry and quantum computation. In order to obtain a unique parametrization of BSSCs, we use Schubert cells and the Bruhat decomposition of the symplectic group. In Section 3 we lift the Bruhat decomposition of the symplectic group to obtain a decomposition of the Clifford group. Additionally, we parametrize those Clifford matrices whose columns are BSSCs. In Section 4 we give the formal definition of BSSCs, along with their algebraic and geometric properties. In Sections 5 and 6 we present reliable, low-complexity decoding algorithms, and discuss simulation results. We end the paper with some conclusions and directions for future research.
All vectors, binary or complex, will be columns. denotes the binary field, denotes the group of binary invertible matrices, and denotes the group of binary symmetric matrices. We will denote matrices (resp., vectors) with upper case (resp., lower case) bold letters. will denote the transpose and will denote the inverse transpose. and will denote the column space and the row space of , respectively. Since all our vectors are columns, we will typically deal with column spaces, except when we work with notions from quantum computation, where row spaces are customary. will denote the matrix (complex or binary). denotes the binary Grassmannian, that is, the set of all -dimensional subspaces of . denotes the set of unitary complex matrices, and will denote the conjugate transpose of a matrix.
In this section we will introduce all preliminary notions needed for navigating the connection between the dimensional binary world and the dimensional complex world, as depicted in Figure 1. The primary bridge used here is the well-known homomorphism (62) and the Bruhat decomposition of the symplectic group. We focus on cosets of the symplectic group modulo the semidirect product . These cosets are characterized by a rank and a binary subspace , which we will think of as the column space of an binary matrix in column reduced echelon form. We will use Schubert cells as a formal and systematic approach. This also provides a framework for describing well-known facts from binary symplectic geometry (e.g., Remark 4). Finally, Subsection 2.3 discusses common notions from quantum computation.
2.1 Schubert Cells
Here we discuss the Schubert decomposition of the Grassmannian with respect to the standard flag
where and is the standard basis of . Fix a set of indices , which, without loss of generality, we assume to be in increasing order. The Schubert cell is the set of all matrices that have a 1 in each leading position, a 0 to the left of, to the right of, and above each leading position, and every other entry free. This is simply the set of all binary matrices in column reduced echelon form with the given leading positions. By counting the number of free entries in each column one concludes that
Fix , and think of it as the column space of a matrix . After column operations, it will belong to some cell , and to emphasize this fact, we will denote it as . Schubert cells have a well-known duality theory, which we outline next. Let be such that . Of course . Let and put . There is a bijection between and , realized by reversing the rows and columns of and identifying with . With this identification, we will denote the unique element of cell that is equivalent with , obtained by reversing the rows and columns of :
where is the antidiagonal matrix in respective dimensions.
Each cell has a distinguished element: will denote the identity matrix restricted to , that is, the unique element in that has all the free entries 0. Note that has as th column the th column of , and thus its non-zero rows form . In particular if then . We also have . With this notation one easily verifies that
In addition, can be completed to an invertible matrix
Note that when is completed to an invertible matrix it gives rise to a permutation matrix. Next, (4) along with the default equality implies that
Let us describe this framework with an example.
Let and . Then
Let us focus on . Then is constructed directly by definition, that is, in column reduced echelon form with leading positions and . Whereas is constructed so that . Then we reverse the rows and columns (only the rows in this case) to obtain the last object, where we identify with . (In this specific case there is no need for identification, but this is only a coincidence; for different choices of one needs a true identification.)
In this case, as we see from above, there is only one free bit. This yields two subspaces/matrices , which when completed to an invertible matrix as in (5) yield
Then one directly computes
Compare (8) with (6); the first two columns are obviously , whereas the last column is precisely with its rows reversed. Note here that when all the free bits are zero, the resulting is simply a permutation matrix, and in this case .
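The free-entry count behind the cell cardinality can be verified by brute force. The following sketch enumerates all 4×2 binary matrices in column reduced echelon form with leading positions 1 and 3, and compares the count against the number of free entries (rows below a leading 1, excluding later leading rows):

```python
import itertools

def in_cell(M, leads):
    """Column reduced echelon form with 1-based leading rows `leads`."""
    k = len(M[0])
    for j, i in enumerate(leads):
        if M[i - 1][j] != 1:
            return False
        if any(M[r][j] for r in range(i - 1)):          # zeros above the lead
            return False
        if any(M[i - 1][c] for c in range(k) if c != j):  # zeros beside the lead
            return False
    return True

n, k, leads = 4, 2, (1, 3)
count = 0
for bits in itertools.product([0, 1], repeat=n * k):
    M = [list(bits[r * k:(r + 1) * k]) for r in range(n)]
    if in_cell(M, leads):
        count += 1
# free entries in column j: rows below the lead, minus the later leading rows
free = sum(n - leads[j] - (k - (j + 1)) for j in range(k))
assert count == 2 ** free   # here: 3 free bits, 8 matrices in the cell
```

This matches the counting argument given above for the cell cardinality.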
2.2 Bruhat Decomposition of the Symplectic Group
We first briefly describe the symplectic structure of via the symplectic bilinear form
One is naturally interested in automorphisms that preserve this symplectic structure. It follows directly from the definition that a matrix preserves iff where
We will denote the group of all such symplectic matrices with . Equivalently,
iff and . It is well-known that
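As a quick sanity check, one can enumerate the smallest binary symplectic group by brute force, under the common convention that the form is represented by Omega = [[0, I], [I, 0]]. For m = 1 the known order formula 2^(m^2) · prod_{i=1..m}(4^i − 1) gives 6:

```python
import itertools

# symplectic form over F_2 for m = 1, in the convention Omega = [[0, I], [I, 0]]
Omega = [[0, 1], [1, 0]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) % 2
             for j in range(len(B[0]))] for i in range(len(A))]

def is_symplectic(F):
    Ft = [list(row) for row in zip(*F)]
    return mat_mul(mat_mul(F, Omega), Ft) == Omega

# enumerate all 2x2 binary matrices and keep those preserving the form
count = sum(
    is_symplectic([list(bits[:2]), list(bits[2:])])
    for bits in itertools.product([0, 1], repeat=4)
)
assert count == 6   # |Sp(2, 2)| = 2 * (4 - 1) = 6
```

For m = 1 the condition reduces to having determinant 1 over F_2, so Sp(2, 2) coincides with GL(2, 2).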
Consider the row space of the upper half of a symplectic matrix . Because is symmetric one has and thus for all . We will denote the dual with respect to the symplectic inner product (9). It follows that , that is, is self-orthogonal or totally isotropic. Moreover, is maximal totally isotropic because and thus . The set of all self-dual/maximal totally isotropic subspaces is commonly referred to as the Lagrangian Grassmannian . It is well-known that
For reasons that will become clear later on, we are interested in decomposing symplectic matrices into more elementary symplectic matrices, and we will do this via the Bruhat decomposition of . While the decomposition holds in a general group-theoretic setting , here we give a rather elementary approach; see also . We start the decomposition by writing
In there are two distinguished subgroups:
Let be the semidirect product of and , that is,
Note that the order of the multiplication does not matter since
and is again symmetric. It is straightforward to verify that , and that in general
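The membership of these two distinguished subgroups in the symplectic group, and the fact that conjugation preserves the symmetric block, can be checked concretely. The sketch below assumes the convention Omega = [[0, I], [I, 0]] and a block placement of the symmetric part in the lower-left corner, which are assumptions of this illustration:

```python
A  = [[1, 1], [0, 1]]     # invertible over F_2 (here its own inverse)
Ai = [[1, 1], [0, 1]]     # A^{-1}
S  = [[1, 0], [0, 1]]     # symmetric

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y))) % 2
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(r) for r in zip(*X)]

def block(TL, TR, BL, BR):
    return [TL[i] + TR[i] for i in range(2)] + [BL[i] + BR[i] for i in range(2)]

I2, Z2 = [[1, 0], [0, 1]], [[0, 0], [0, 0]]
Omega = block(Z2, I2, I2, Z2)

def is_symplectic(F):
    return mul(mul(F, Omega), transpose(F)) == Omega

D = block(A, Z2, Z2, transpose(Ai))   # diag(A, A^{-T})
U = block(I2, Z2, S, I2)              # unipotent part built from symmetric S
assert is_symplectic(D) and is_symplectic(U) and is_symplectic(mul(D, U))

# conjugating U by D transforms S but keeps the block symmetric
Dinv = block(Ai, Z2, Z2, transpose(A))
C = mul(mul(D, U), Dinv)
BL = [row[:2] for row in C[2:]]
assert BL == transpose(BL)
```

The last assertion illustrates why the semidirect product is closed: conjugation maps the symmetric matrix to another symmetric matrix.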
It was shown in  that a symplectic matrix can be decomposed as
for some rank , invertible , and symmetric . However, two different invertibles may yield representatives of the same coset. We make this precise below.
If we instead decompose as in (23) and insert between and , we see that (23) reduces to (22). This reduction from a seven-component decomposition to a five-component decomposition is beneficial in quantum circuit design [28, 30].
A right coset in is uniquely characterized by a rank , an symmetric matrix , and a -dimensional subspace in .
Write a coset representative as in (24). This immediately determines . Next, write in a block form
where are symmetric. Denote the matrices that have and in upper left and lower right corner respectively and 0 otherwise. Put also
With this notation we have
In other words and belong to the same coset. Now consider an invertible
It is also straightforward to verify that
and the second equality follows by (19). Thus , where
represents the same coset. Note that the transformation (30) doesn’t change the column space (that is, the lower left corner of ), which is an -dimensional subspace in .
Next, using Schubert cells, we will choose a canonical coset representative. We will use the same notation as in the above lemma. Let and be as above. To choose , think of the -dimensional subspace from the above lemma as the column space of a matrix , which belongs to some Schubert cell . We will use the coset representative
where is as in (5).
Let be in block form as in (11), and assume it is written as
Multiplying both sides of (32) on the left with and on the right with , and then comparing respective blocks we obtain
which we can solve for and , assuming that we know (and implicitly , which can be determined from the column space of the lower-left block of ). First we find . For this, recall that has nonzero entries only in the upper-left block. Thus, it follows from (33) that the last rows of coincide with the last rows of . Similarly, it follows from (35) that the first rows of coincide with the first rows of . With in hand we have
By using (35) in (36) we see that the first rows of coincide with first rows of . Similarly, by using (33) in (34), we see that the last rows of coincide with the last rows of . Multiplication with yields . We collect everything in Algorithm 1, which gives not only the Bruhat decomposition but also a canonical coset representative.
We end this section with a few remarks.
One can follow an analogous path by considering the left action of on . This follows most directly from the observation that if is a right coset representative then is a left coset representative.
Note that for the extremal case , a coset representative as in (31) is completely determined by a symmetric matrix , since in this case, as one would recall, .
Directly from the definition we have
which combined with (12) yields
The above is of course not a coincidence. Indeed, acts transitively from the right on . Next, consider . If a symplectic matrix as in (11) fixes this space, then and is invertible. Additionally, because is symplectic to start with, we obtain and is symmetric. Thus , and . That is, is the stabilizer (in a group action terminology) of . The mapping , given by
is well-defined (because, as mentioned, the upper half of a symplectic matrix is maximal isotropic). It is also injective, and thus bijective for cardinality reasons. Of course there are many such bijections, but we choose this one due to Theorem 1.
2.3 The Heisenberg-Weyl Group
Fix , and let be the standard basis of , which is commonly referred to as the computational basis. For set . Then is the standard basis of . The Pauli matrices are
Directly by definition we have
and thus, the former is a permutation matrix whereas the latter is a diagonal matrix. Then
Thanks to (44) we have
In turn, and commute iff
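This commutation criterion can be checked exhaustively for two qubits: two Pauli matrices commute exactly when the symplectic inner product of their binary labels vanishes, and otherwise they anticommute. A self-contained sketch (global phases of the individual matrices are untracked, which does not affect the commutation sign):

```python
import itertools

X  = [[0, 1], [1, 0]]
Z  = [[1, 0], [0, -1]]
I2 = [[1, 0], [0, 1]]

def mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def kron(A, B):
    return [[A[i][j] * B[k][l] for j in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for k in range(len(B))]

def pauli(a, b):
    """Tensor product of X^{a_i} Z^{b_i} over the qubits (phases untracked)."""
    M = [[1]]
    for ai, bi in zip(a, b):
        f = I2
        if ai: f = X
        if bi: f = mul(f, Z)
        M = kron(M, f)
    return M

m = 2
vecs = list(itertools.product([0, 1], repeat=m))
for a, b, c, d in itertools.product(vecs, repeat=4):
    lhs = mul(pauli(a, b), pauli(c, d))
    rhs = mul(pauli(c, d), pauli(a, b))
    # symplectic inner product <(a,b),(c,d)> = a.d + b.c (mod 2)
    sign = (-1) ** ((sum(x * y for x, y in zip(a, d))
                     + sum(x * y for x, y in zip(b, c))) % 2)
    assert lhs == [[sign * e for e in row] for row in rhs]
```

The loop verifies, over all pairs of two-qubit Pauli matrices, that commutativity in the Heisenberg-Weyl group translates to orthogonality with respect to the symplectic form.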
is a subgroup of and is called the Heisenberg-Weyl group. We will call its elements Pauli matrices as well. Directly from the definition, we have a surjective group homomorphism
Its kernel is . We will denote the projective Heisenberg-Weyl group, and the induced isomorphism .
Note that defines a nondegenerate bilinear form in that translates commutativity in to orthogonality in . A commutative subgroup is called a stabilizer group if . Thus, for a stabilizer , thanks to (46) we have [31, 32]. In addition, because restricted to a stabilizer is an isomorphism, we have that iff . We will think of as the row space of a full rank matrix where both and are binary matrices. We will write
Next, if is self-orthogonal in then is a stabilizer. Moreover, , which yields a one-to-one correspondence between stabilizers in and self-orthogonal subspaces in . It also follows that a maximal stabilizer must have elements. Thus there is a one-to-one correspondence between maximal stabilizers and points of the Lagrangian Grassmannian . Of particular interest are the maximal stabilizers
which we naturally identify with and .
What follows holds in general for any stabilizer, but for our purposes, we need only focus on the maximal ones. Let be a maximal stabilizer and let be an independent generating set of (that is, ). Consider the complex vector space 
It is well-known (see, e.g., ) that . A unit-norm vector that generates it is called a stabilizer state, and with a slight abuse of notation is also denoted by . Because we are disregarding scalars, it is beneficial to think of a stabilizer state as a Grassmannian line, that is, . Next,
is a projection onto .
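For illustration, take m = 2 and the maximal stabilizer generated by Z⊗I and I⊗Z (an assumed example, not one from the paper). Averaging the group elements yields a rank-one projector onto the stabilizer state, here the first computational basis vector:

```python
import itertools

Z  = [[1, 0], [0, -1]]
I2 = [[1, 0], [0, 1]]

def mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def kron(A, B):
    return [[A[i][j] * B[k][l] for j in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for k in range(len(B))]

m = 2
gens = [kron(Z, I2), kron(I2, Z)]   # commuting Paulis generating a maximal stabilizer

# enumerate all 2^m elements of the stabilizer group from its generators
elems = []
for bits in itertools.product([0, 1], repeat=m):
    E = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
    for bit, g in zip(bits, gens):
        if bit:
            E = mul(E, g)
    elems.append(E)

# averaging the group elements projects onto their common +1 eigenspace
P = [[sum(E[i][j] for E in elems) / 2 ** m for j in range(4)] for i in range(4)]
assert mul(P, P) == P                        # idempotent
assert sum(P[i][i] for i in range(4)) == 1   # trace 1: a one-dimensional eigenspace
assert P[0][0] == 1                          # the stabilizer state is e_1
```

The trace-1 assertion reflects the fact that a maximal stabilizer singles out a one-dimensional common eigenspace, i.e., a single stabilizer state.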
Given a stabilizer as above, for any , also describes a stabilizer . Similarly to (54) put