Binary Subspace Chirps

02/24/2021
by Tefjol Pllaha, et al.

We describe in detail the interplay between binary symplectic geometry and quantum computation, with the ultimate goal of constructing highly structured codebooks. The Binary Chirps (BCs) are complex Grassmannian lines in N = 2^m dimensions used in deterministic compressed sensing and random/unsourced multiple access in wireless networks. Their entries are fourth roots of unity and can be described in terms of second-order Reed-Muller codes. The Binary Subspace Chirps (BSSCs) are a unique collection of BCs of ranks ranging from r = 0 to r = m, embedded in N dimensions according to an on-off pattern determined by a rank-r binary subspace. This yields a codebook that is asymptotically 2.38 times larger than the codebook of BCs, has the same minimum chordal distance as the codebook of BCs, and an alphabet that is minimally extended from {±1, ±i} to {±1, ±i, 0}. Equivalently, we show that BSSCs are stabilizer states, and we characterize them as columns of a well-controlled collection of Clifford matrices. By construction, the BSSCs inherit all the properties of BCs, which in turn makes them good candidates for a variety of applications. For applications in wireless communication, we use the rich algebraic structure of BSSCs to construct a low-complexity decoding algorithm that is reliable against Gaussian noise. In simulations, BSSCs exhibit an error probability comparable to or slightly lower than that of BCs, both for single-user and multi-user transmissions.


1 Introduction

Codebooks of complex projective (Grassmann) lines, or tight frames, have found application in multiple problems of interest for communications and information processing, such as code division multiple access sequence design [1], precoding for multi-antenna transmissions [2], and network coding [3]. Contemporary interest in such codes arises, e.g., from deterministic compressed sensing [4, 5, 6, 7, 8], virtual full-duplex communication [9], mmWave communication [10], and random access [11].

One of the challenges/promises of 5G wireless communication is to enable massive machine-type communications (mMTC) in the Internet of Things (IoT), in which a massive number of low-cost devices sporadically and randomly access the network [12]. In this scenario, each user is assigned a unique signature sequence, which it transmits whenever active [13]. A twin use case is unsourced multiple access, where a large number of messages is transmitted infrequently. Polyanskiy [12] proposed a framework in which communication occurs in blocks of channel uses, and the task of the receiver is to correctly identify the small set of active users (messages) out of a much larger total population. Ever since its introduction, there have been several follow-up works [14, 15, 16, 11, 17], extensions to a massive MIMO scenario [18] where the base station has a very large number of antennas, and a discussion of the fundamental limits on what is possible [19].

Given the massive number of to-be-supported (to-be-encoded) users (messages), the design criteria are fundamentally different, and one simply cannot rely on classical multiple-access channel solutions. For instance, interference is unavoidable since it is impossible to have orthogonal signatures/codewords. On the other hand, since only a small number of users is active at any given time, the interference is limited. Thus, the challenge becomes to design highly structured codebooks of large cardinality along with a reliable and low-complexity decoding algorithm.

Codebooks of Binary Chirps (BCs) [5, 20] provide such a highly structured Grassmannian line codebook in N = 2^m dimensions with additional desirable properties. All entries come from a small alphabet, each being a fourth root of unity, and the codewords can be described in terms of second-order Reed-Muller (RM) codes. RM codes have the fascinating property that a Walsh-Hadamard measurement cuts the solution space in half. This yields a low single-user decoding complexity, coming from the fast Walsh-Hadamard transform and the small number of required measurements. Additionally, the number of codewords is reasonably large, growing as 2^{m(m+3)/2}, while the minimum chordal distance is 1/sqrt(2).
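As an illustration, the following is a minimal Python sketch (function names ours) of the standard BC parametrization w_{S,b}(x) = i^{x^T S x + 2 b^T x} / sqrt(N), with x ranging over F_2^m and the exponent computed mod 4. This normalization and indexing are assumptions made for illustration; the paper's formal definition governs the exact conventions.

```python
import numpy as np
from itertools import product

def binary_chirp(S, b):
    """Sketch of a Binary Chirp in N = 2^m dimensions.

    S: (m, m) binary symmetric matrix; b: length-m binary vector.
    The entry indexed by x in F_2^m is i^(x^T S x + 2 b^T x) / sqrt(N),
    with the exponent computed modulo 4 over the integers, so every
    entry is a fourth root of unity (assumed parametrization).
    """
    m = len(b)
    N = 2 ** m
    w = np.empty(N, dtype=complex)
    for idx, x in enumerate(product([0, 1], repeat=m)):  # x_1 is the MSB
        x = np.array(x)
        w[idx] = 1j ** int((x @ S @ x + 2 * b @ x) % 4)
    return w / np.sqrt(N)

# Two distinct chirps in N = 4 dimensions. For BCs, the inner product
# magnitude is 2^{-rank(S1 + S2)/2} over F_2; here rank(S1 + S2) = 1.
S1, S2 = np.array([[1, 0], [0, 1]]), np.array([[0, 1], [1, 0]])
b = np.array([0, 0])
w1, w2 = binary_chirp(S1, b), binary_chirp(S2, b)
print(abs(np.vdot(w1, w2)))  # 0.7071 = 2^{-1/2}
```

The printed correlation 2^{-1/2} is the largest possible between distinct BCs, consistent with the minimum chordal distance quoted above.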

We expand the BC codebook to the codebook of Binary Subspace Chirps (BSSCs) by collectively considering all BCs in 2^r dimensions, 0 <= r <= m, embedded in N = 2^m dimensions. That is, given a BC in 2^r dimensions, we embed it in N dimensions via a unique on-off pattern determined by a rank-r binary subspace. Thus, a BSSC is characterized by a sparsity 2^r, a BC part parametrized by an r x r binary symmetric matrix and a binary vector of length r, and a unique on-off pattern parametrized by a rank-r binary subspace; see (79) for the formal definition. The codebook of BSSCs inherits all the desirable properties of BCs and, in addition, has asymptotically about 2.384 times more codewords. Thus, an active device with a rank-r signature will transmit during the 2^r time slots determined by the rank-r subspace, and will be silent otherwise. This resembles the model of [9], in which active devices can also be used (to listen) as receivers during the off-slots.
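To make the on-off structure concrete, here is a sketch of the embedding under our simplifying assumption that the 2^r nonzero entries are indexed by the column space of a full-rank m x r binary matrix H; the formal definition (79) in Section 4 is authoritative.

```python
import numpy as np
from itertools import product

def binary_subspace_chirp(H, S, b):
    """Sketch: embed a rank-r BC into N = 2^m dimensions.

    H: (m, r) binary matrix of rank r whose column space determines
    the on-off pattern; S: (r, r) binary symmetric; b: length-r
    binary. Nonzero entries sit at indices x = H a (mod 2), a in
    F_2^r, and carry the BC value i^(a^T S a + 2 b^T a). Assumed
    embedding, for illustration only.
    """
    m, r = H.shape
    v = np.zeros(2 ** m, dtype=complex)
    for a in product([0, 1], repeat=r):
        a = np.array(a)
        x = H @ a % 2
        idx = int("".join(map(str, x)), 2)  # MSB-first index of x
        v[idx] = 1j ** int((a @ S @ a + 2 * b @ a) % 4)
    return v / np.sqrt(2 ** r)  # unit norm; sparsity is 2^r
```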

Given the structure of BSSCs, a unified estimation technique for the rank, the on-off pattern, and the BC part (in this order) is needed. In [21], a reliable on-off pattern detection was proposed, which made use of a Weyl-type transform [22] on diagonal Pauli matrices; see (103). The algorithm can be described in the common language of symplectic geometry and quantum computation. The key insight here is to view BSSCs as common eigenvectors of maximal sets of commuting Pauli matrices, commonly referred to as stabilizer groups. Indeed, we show that BSSCs are nothing but stabilizer states [23], and that their sparsity is determined by the diagonal portion of the corresponding stabilizer group; see Corollaries 2 and 3. We also show that each BSSC is a column of a unique Clifford matrix (86), which itself is obtained from the common eigenspaces of a unique stabilizer group (101); see also Theorem 1. The interplay between the binary world and the complex world is depicted in Figure 1.


Figure 1: Interplay of binary world and complex world. Prior art is depicted in yellow. The contributions of this paper are depicted in green. See also [24, 21].

Making use of these structural results, the on-off pattern detection of [21] can be generalized to recover the BC part of a BSSC, this time by using the Weyl-type transform on the off-diagonal part of the corresponding stabilizer group. This yields a single-user BSSC reconstruction, described in Algorithm 2. In [24], we added Orthogonal Matching Pursuit (OMP) to obtain a multi-user BSSC reconstruction (see Algorithm 3) with reliable performance when the number of active users is small. As the number of active users increases, so does the interference, which has a quite destructive effect on the on-off pattern. However, state-of-the-art solutions for BCs [8, 17, 25], such as slotting and patching, can be used to reduce the interference. Preliminary simulations show that BSSCs exhibit a lower error probability than BCs. This is because BSSCs have, on average, fewer closest neighbors than BCs. In addition, BSSCs are uniformly distributed over the sphere, which makes them optimal when dealing with Gaussian noise.

Throughout, the decoding complexity is kept at bay by the underlying symplectic geometry. The sparsity, the BC part, and the on-off pattern of a BSSC can all be described in terms of the Bruhat decomposition (22) of a symplectic matrix. Indeed, the unique Clifford matrix (86) of which a BSSC is a column is parametrized by a coset representative (24), as described in Lemma 1. In turn, such a coset representative determines a unique stabilizer group (101). We use this interplay to reconstruct a BSSC by reconstructing the stabilizer group that stabilizes it. This alone yields a significant reduction in complexity.
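To illustrate the Walsh-Hadamard mechanism behind chirp reconstruction (a sketch of the principle, not the paper's Algorithm 2), note that under the BC parametrization sketched earlier, the shift-and-multiply signal w(x XOR e) * conj(w(x)) equals, up to a unit phase, the Walsh function (-1)^{x^T S e}; its fast Walsh-Hadamard transform therefore peaks at index S e.

```python
import numpy as np

def fwht(f):
    """Fast Walsh-Hadamard transform, O(N log N), on a copy of f."""
    f = f.copy(); h = 1
    while h < len(f):
        for i in range(0, len(f), 2 * h):
            for j in range(i, i + h):
                f[j], f[j + h] = f[j] + f[j + h], f[j] - f[j + h]
        h *= 2
    return f

def chirp_measurement(w, m, e_idx):
    """Recover S e from a noiseless BC w via shift-and-multiply.

    e_idx is the integer index of e under the same MSB-first bit
    convention as the binary_chirp sketch. Since
    w(x XOR e) * conj(w(x)) is a Walsh function with index S e,
    the Walsh-Hadamard transform peaks there. Principle only.
    """
    N = 2 ** m
    x = np.arange(N)
    g = w[x ^ e_idx] * np.conj(w)   # proportional to (-1)^{x^T S e}
    peak = int(np.argmax(np.abs(fwht(g))))
    return np.array([(peak >> k) & 1 for k in range(m - 1, -1, -1)])
```

Sweeping e over the standard basis recovers S column by column, each measurement costing one O(N log N) transform; b then follows from one more Walsh-Hadamard transform of the dechirped signal.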

The paper is organized as follows. In Section 2 we review the basics of binary symplectic geometry and quantum computation. In order to obtain a unique parametrization of BSSCs, we use Schubert cells and the Bruhat decomposition of the symplectic group. In Section 3 we lift the Bruhat decomposition of the symplectic group to obtain a decomposition of the Clifford group. Additionally, we parametrize those Clifford matrices whose columns are BSSCs. In Section 4 we give the formal definition of BSSCs, along with their algebraic and geometric properties. In Sections 5 and 6 we present reliable low-complexity decoding algorithms and discuss simulation results. We end the paper with some conclusions and directions for future research.

1.1 Conventions

All vectors, binary or complex, will be columns. F_2 denotes the binary field, GL(m, 2) denotes the group of m x m binary invertible matrices, and Sym(m) denotes the group of m x m binary symmetric matrices. We will denote matrices (resp., vectors) with upper case (resp., lower case) bold letters. P^T will denote the transpose and P^{-T} the inverse transpose. col(A) and row(A) will denote the column space and the row space of A, respectively. Since all our vectors are columns, we will typically deal with column spaces, except when we work with notions from quantum computation, where row spaces are customary. I_n will denote the n x n identity matrix (complex or binary). G(m, r) denotes the binary Grassmannian, that is, the set of all r-dimensional subspaces of F_2^m. U(N) denotes the set of N x N unitary complex matrices, and A^* will denote the conjugate transpose of a matrix A.

2 Preliminaries

In this section we will introduce all preliminary notions needed for navigating the connection between the 2m-dimensional binary world and the N = 2^m-dimensional complex world, as depicted in Figure 1. The primary bridge used here is the well-known homomorphism (62) and the Bruhat decomposition of the symplectic group. We focus on cosets of the symplectic group modulo the semidirect product (18). These cosets are characterized by a rank r and a binary subspace, which we will think of as the column space of an m x r binary matrix in column reduced echelon form. We will use Schubert cells as a formal and systematic approach. This also provides a framework for describing well-known facts from binary symplectic geometry (e.g., Remark 4). Finally, Subsection 2.3 discusses common notions from quantum computation.

2.1 Schubert Cells

Here we discuss the Schubert decomposition of the Grassmannian G(m, r) with respect to the standard flag

(1)   {0} = V_0 ⊂ V_1 ⊂ ... ⊂ V_m = F_2^m,

where V_i = <e_1, ..., e_i> and {e_1, ..., e_m} is the standard basis of F_2^m. Fix a set of indices I = {i_1, ..., i_r}, which, without loss of generality, we assume to be in increasing order. The Schubert cell C_I is the set of all m x r binary matrices that have 1 in the leading positions (i_k, k), 0 to the left, right, and above each leading position, and every other entry free. This is simply the set of all m x r binary matrices in column reduced echelon form with leading positions I. By counting the number of free entries in each column one concludes that

(2)   |C_I| = 2^{sum_{k=1}^{r} (m - i_k - r + k)}.

Fix a subspace in G(m, r), and think of it as the column space of an m x r matrix. After column operations, this matrix will belong to some cell C_I, and to emphasize this fact we will denote the subspace accordingly. Schubert cells have a well-known duality theory, which we outline next. Let I^c be a complementary set of indices, so that |I^c| = m - r. There is a bijection between C_I and the corresponding dual cell, realized by reverting the rows and columns and identifying G(m, r) with G(m, m - r). With this identification, we will denote the unique element of the dual cell that is equivalent with the given one, obtained by reverting its rows and columns:

(3)

where J denotes the antidiagonal matrix in the respective dimensions.

Each cell has a distinguished element: the identity matrix restricted to I, that is, the unique element in C_I that has all the free entries 0. Note that its k-th column is the i_k-th column of the identity matrix I_m, and thus its non-zero rows form I_r. In particular, if I = {1, ..., r}, then it consists of the first r columns of I_m. With this notation one easily verifies that

(4)

In addition, the distinguished element can be completed to an invertible matrix

(5)

Note that when the distinguished element is completed to an invertible matrix, it gives rise to a permutation matrix. Next, (4) along with the default equality implies that

(6)

Let us describe this framework with an example.

Example 1.

Let and . Then

Let us focus on the first cell. The first matrix is constructed directly by definition, that is, in column reduced echelon form with the prescribed leading positions. The second is constructed so that the duality relation holds. Then we revert the rows and columns (only the rows in this case) to obtain the last object, where we make the identification. (In this specific case there is no need for identification, but this is only a coincidence; for different choices of I one needs a true identification.)

In this case, as we see from above, there is only one free bit. This yields two subspaces/matrices, which, when completed to invertible matrices as in (5), yield

(7)

Then one directly computes

(8)

Compare (8) with (6); the first two columns are obvious, whereas the last column is precisely the dual element with rows reverted. Note here that when all the free bits are zero, the resulting completion is simply a permutation matrix.
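The following sketch (ours) enumerates a Schubert cell as the set of column reduced echelon matrices with prescribed leading positions, and confirms the free-bit count of Example 1:

```python
import numpy as np
from itertools import product

def schubert_cell(m, I):
    """Enumerate all m x r binary matrices in column reduced echelon
    form with leading positions I (1-based, increasing). Sketch."""
    r = len(I)
    # Free positions: in column k, rows below the leading row I[k]
    # that are not leading rows of other columns.
    free = [(row, k) for k in range(r)
            for row in range(I[k], m) if (row + 1) not in I]
    cell = []
    for bits in product([0, 1], repeat=len(free)):
        M = np.zeros((m, r), dtype=int)
        for k in range(r):
            M[I[k] - 1, k] = 1          # leading ones
        for (row, k), bit in zip(free, bits):
            M[row, k] = bit             # free entries
        cell.append(M)
    return cell

cell = schubert_cell(3, [1, 3])
print(len(cell))  # 2 = 2^1: exactly one free bit, as in (2)
```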

2.2 Bruhat Decomposition of the Symplectic Group

We first briefly describe the symplectic structure of F_2^{2m} via the symplectic bilinear form

(9)   <(a, b), (c, d)>_s = a^T d + b^T c.

One is naturally interested in automorphisms that preserve such symplectic structure. It follows directly by the definition that a matrix F preserves (9) iff F Ω F^T = Ω, where

(10)   Ω = [[0, I_m], [I_m, 0]].

We will denote the group of all such symplectic matrices with Sp(2m, F_2). Equivalently, a block matrix

(11)   F = [[A, B], [C, D]]

is symplectic iff A B^T and C D^T are symmetric and A D^T + B C^T = I_m. It is well-known that

(12)   |Sp(2m, F_2)| = 2^{m^2} prod_{k=1}^{m} (4^k - 1).

Consider the row space of the upper half [A B] of a symplectic matrix F. Because A B^T is symmetric, one has <(a_i, b_i), (a_j, b_j)>_s = 0 for all rows (a_i, b_i), (a_j, b_j) of [A B]. We will denote by V^perp the dual of a subspace V with respect to the symplectic inner product (9). It follows that row([A B]) is contained in its own dual, that is, it is self-orthogonal or totally isotropic. Moreover, it is maximal totally isotropic because its dimension is m, which equals the dimension of its dual. The set of all self-dual/maximal totally isotropic subspaces is commonly referred to as the Lagrangian Grassmannian L(m). It is well-known that

(13)   |L(m)| = prod_{k=1}^{m} (2^k + 1).
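As a quick numerical companion (ours) to these definitions, the following checks F Ω F^T = Ω over F_2 and confirms the order formula (12) by brute force for m = 1:

```python
import numpy as np
from itertools import product

def omega(m):
    """The form matrix Omega = [[0, I_m], [I_m, 0]] over F_2."""
    O, I = np.zeros((m, m), dtype=int), np.eye(m, dtype=int)
    return np.block([[O, I], [I, O]])

def is_symplectic(F, m):
    """Check F Omega F^T = Omega (mod 2)."""
    return np.array_equal(F @ omega(m) @ F.T % 2, omega(m))

# Brute-force count for m = 1: |Sp(2, F_2)| = 2^1 * (4 - 1) = 6,
# matching the formula 2^{m^2} prod_k (4^k - 1).
count = sum(is_symplectic(np.array(M).reshape(2, 2), 1)
            for M in product([0, 1], repeat=4))
print(count)  # 6
```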

For reasons that will become clear later on, we are interested in decomposing symplectic matrices into more elementary symplectic matrices, and we will do this via the Bruhat decomposition of Sp(2m, F_2). While the decomposition holds in a general group-theoretic setting [26], here we give a rather elementary approach; see also [27]. We start the decomposition by writing

(14)

where

(15)

In Sp(2m, F_2) there are two distinguished subgroups, one parametrized by invertible matrices P in GL(m, 2) and one parametrized by symmetric matrices S in Sym(m):

(16)
(17)

Consider the semidirect product of these two subgroups, that is,

(18)

Note that the order of the multiplication doesn’t matter since

(19)

and the conjugated matrix is again symmetric. It is straightforward to verify this, and that in general

(20)

where

(21)   Ω_r = [[I_m - E_r, E_r], [E_r, I_m - E_r]],

with E_r being the m x m block matrix with I_r in the upper left corner and 0 elsewhere. Note here that Ω_0 = I_{2m} and Ω_m = Ω. Then it follows by (20) (and by (19)) that every F in Sp(2m, F_2) can be written as

(22)

The above constitutes the Bruhat decomposition of a symplectic matrix; see also [28, 24].
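The sketch below builds one standard realization of the elementary symplectic matrices; the block conventions F_P = diag(P, P^{-T}), F_S = [[I, 0], [S, I]], and Ω_r = [[I - E_r, E_r], [E_r, I - E_r]] are our assumptions for illustration, and the paper's conventions may differ.

```python
import numpy as np

def inv_gl2(P):
    """Inverse of a binary invertible matrix via Gauss-Jordan
    elimination over F_2 (assumes P is invertible)."""
    m = len(P)
    A = np.concatenate([P % 2, np.eye(m, dtype=int)], axis=1)
    for c in range(m):
        p = c + next(i for i in range(m - c) if A[c + i, c])
        A[[c, p]] = A[[p, c]]            # pivot swap
        for rw in range(m):
            if rw != c and A[rw, c]:
                A[rw] = (A[rw] + A[c]) % 2
    return A[:, m:]

def F_P(P):
    """Assumed convention: diag(P, P^{-T})."""
    Z = np.zeros_like(P)
    return np.block([[P, Z], [Z, inv_gl2(P).T]])

def F_S(S):
    """Assumed convention: [[I, 0], [S, I]] with S symmetric."""
    I, Z = np.eye(len(S), dtype=int), np.zeros_like(S)
    return np.block([[I, Z], [S, I]])

def Omega_r(m, r):
    """Partial swap: Omega_0 = I_{2m}, Omega_m = Omega."""
    E = np.diag([1] * r + [0] * (m - r))
    I = np.eye(m, dtype=int)
    return np.block([[I - E, E], [E, I - E]])
```

Together with is_symplectic from the earlier sketch, one can verify that any five-component product F_P(P1) @ F_S(S1) @ Omega_r(m, r) @ F_S(S2) @ F_P(P2) % 2 is again symplectic, matching the shape of (22).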

Remark 1.

It was shown in [29] that any symplectic matrix can be decomposed as

(23)

If we instead decompose as in (23) and combine adjacent components, we see that (23) reduces to (22). This reduction from a seven-component decomposition to a five-component decomposition is beneficial in quantum circuit design [28, 30].

In what follows we will focus on the right action of the semidirect product (18) on Sp(2m, F_2), that is, on the right cosets in the corresponding quotient. It is an immediate consequence of (22) and (19) that a coset representative will look like

(24)

for some rank r, invertible P, and symmetric S. However, two different invertible matrices may yield representatives of the same coset. We make this precise below.

Lemma 1.

A right coset is uniquely characterized by a rank r, an r x r symmetric matrix, and an r-dimensional subspace of F_2^m.

Proof.

Write a coset representative as in (24). This immediately determines the rank r. Next, write S in block form

(25)

where the diagonal blocks are symmetric. Denote the m x m matrices that have these diagonal blocks in the upper left and lower right corner, respectively, and 0 otherwise. Put also

(26)

With this notation we have

(27)

In other words, the two representatives belong to the same coset. Now consider an invertible matrix

(28)

It is also straightforward to verify that

where

(29)

and the second equality follows by (19). Thus the matrix given by

(30)

represents the same coset. Note that the transformation (30) does not change the column space of the lower left corner of the representative, which is an r-dimensional subspace of F_2^m.

Next, using Schubert cells we will choose a canonical coset representative. We will use the same notation as in the above lemma. Let r and S be as above. To choose the invertible part, think of the r-dimensional subspace from the above lemma as the column space of an m x r matrix, which belongs to some Schubert cell C_I. We will use the coset representative

(31)

where the invertible completion is as in (5).

Let F in Sp(2m, F_2) be in block form as in (11), and assume it is written as

(32)

Multiplying both sides of (32) on the left and on the right with the appropriate factors, and then comparing the respective blocks, we obtain

(33)
(34)
(35)
(36)

which we can solve for the invertible and symmetric components, assuming that we know the rank r (and implicitly the cell C_I, which can be determined from the column space of the lower-left block of F). First we find the invertible component. For this, recall that E_r has nonzero entries only in its upper left block. Thus, it follows by (33) that its last m - r rows coincide with the last m - r rows of a known block of F, and similarly, it follows from (35) that its first r rows coincide with the first r rows of another known block. With the invertible component in hand we have

(37)

By using (35) in (36) we see that the first r rows of the symmetric component coincide with the first r rows of a known product, and similarly, by using (33) in (34), that its last m - r rows coincide with the last m - r rows of another known product. Multiplication with the inverse of the invertible component then yields the symmetric component. We collect everything in Algorithm 1, which gives not only the Bruhat decomposition but also a canonical coset representative.

Input: A symplectic matrix F.

   1.   Block decompose F as in (11).
   2.   .
   3.   Find as in (5) from .
   4.   is the first rows of .
   5.   is the last rows of .
   6.   .
   7.   .
   8.   is the upper left block of .
   9.   is the first rows of .
   10. is the last rows of .
   11. .

Output: The components of the Bruhat decomposition (22), including a canonical coset representative (31).

Algorithm 1 Bruhat Decomposition of Symplectic Matrix

We end this section with a few remarks.

Remark 2.

One can follow an analogous path by considering the left action of the semidirect product (18) on Sp(2m, F_2). This follows most directly from the observation that if F is a right coset representative, then F^{-1} is a left coset representative.

Remark 3.

Note that for the extremal case r = m, a coset representative as in (31) is completely determined by a symmetric matrix S since in this case, as one would recall, Ω_m = Ω.

Remark 4.

Directly from the definition, the order of the semidirect product (18) satisfies

(38)   2^{m(m+1)/2} |GL(m, 2)| = 2^{m^2} prod_{k=1}^{m} (2^k - 1),

which combined with (12) yields that its index in Sp(2m, F_2) equals

(39)   prod_{k=1}^{m} (2^k + 1).

The above is of course not a coincidence. Indeed, Sp(2m, F_2) acts transitively from the right on L(m). Next, consider the subspace row([I_m 0]). If a symplectic matrix as in (11) fixes this space, then B = 0 and A is invertible. Additionally, because the matrix is symplectic to start with, we obtain D = A^{-T} and that C A^{-1} is symmetric. Thus the matrix belongs to the semidirect product (18); that is, (18) is the stabilizer (in group action terminology) of row([I_m 0]). The mapping from the quotient to L(m), sending a coset to the row space of the upper half of any representative, given by

(40)

is well-defined (because, as mentioned, the upper half of a symplectic matrix is maximal isotropic). It is also injective, and thus bijective due to cardinality reasons. Of course one can have many bijections, but we choose this one due to Theorem 1.
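A brute-force check (ours) of the Lagrangian count (13), and hence of the coset count (39), for small m:

```python
import numpy as np
from itertools import product, combinations

def lagrangians(m):
    """Brute-force count of maximal totally isotropic subspaces of
    F_2^{2m} under <u, v>_s = u^T Omega v. Sketch for small m."""
    O = np.block([[np.zeros((m, m), dtype=int), np.eye(m, dtype=int)],
                  [np.eye(m, dtype=int), np.zeros((m, m), dtype=int)]])
    vecs = [np.array(v) for v in product([0, 1], repeat=2 * m)]
    found = set()
    for basis in combinations(vecs[1:], m):   # skip the zero vector
        span = {tuple(sum(c * b for c, b in zip(coeffs, basis)) % 2)
                for coeffs in product([0, 1], repeat=m)}
        if len(span) != 2 ** m:
            continue                          # basis not independent
        if all((np.array(u) @ O @ np.array(v)) % 2 == 0
               for u in span for v in span):
            found.add(frozenset(span))        # totally isotropic
    return len(found)

print(lagrangians(1))  # 3  = 2^1 + 1
print(lagrangians(2))  # 15 = (2^1 + 1)(2^2 + 1)
```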

2.3 The Heisenberg-Weyl Group

Fix N = 2^m, and let {e_0, e_1} be the standard basis of C^2, which is commonly referred to as the computational basis. For v = (v_1, ..., v_m) in F_2^m set e_v = e_{v_1} ⊗ ... ⊗ e_{v_m}. Then {e_v : v in F_2^m} is the standard basis of C^N. The Pauli matrices are

(41)   X = [[0, 1], [1, 0]],   Z = [[1, 0], [0, -1]].

For a, b in F_2^m put

(42)   D(a, b) = X^{a_1} Z^{b_1} ⊗ ... ⊗ X^{a_m} Z^{b_m}.

Directly by definition we have

(43)   D(a, 0) e_v = e_{v + a},   D(0, b) e_v = (-1)^{b^T v} e_v,

and thus, the former is a permutation matrix whereas the latter is a diagonal matrix. Then

(44)   D(a, b) D(c, d) = (-1)^{b^T c} D(a + c, b + d).

Thanks to (44) we have

(45)   D(a, b) D(c, d) = (-1)^{a^T d + b^T c} D(c, d) D(a, b).

In turn, D(a, b) and D(c, d) commute iff

(46)   a^T d + b^T c = 0,

that is, iff (a, b) and (c, d) are orthogonal with respect to the symplectic inner product (9). Also thanks to (44), the set

(47)   HW_N = { i^k D(a, b) : a, b in F_2^m, k in Z_4 }

is a subgroup of U(N) and is called the Heisenberg-Weyl group. We will call its elements Pauli matrices as well. Directly from the definition, we have a surjective homomorphism of groups

(48)   HW_N --> F_2^{2m},   i^k D(a, b) |--> (a, b).

Its kernel is { i^k I_N : k in Z_4 }. The quotient by this kernel is the projective Heisenberg-Weyl group, which (48) identifies isomorphically with F_2^{2m}.
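A small sketch (with our naming) of the matrices D(a, b) and the commutation rule (45)-(46):

```python
import numpy as np
from functools import reduce

X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def D(a, b):
    """D(a, b) = X^{a_1} Z^{b_1} tensor ... tensor X^{a_m} Z^{b_m}."""
    factors = [np.linalg.matrix_power(X, ai) @ np.linalg.matrix_power(Z, bi)
               for ai, bi in zip(a, b)]
    return reduce(np.kron, factors)

def symplectic_form(a, b, c, d):
    """<(a, b), (c, d)>_s = a^T d + b^T c (mod 2), as in (9)."""
    return (np.dot(a, d) + np.dot(b, c)) % 2

# Two Pauli matrices commute iff their binary labels are orthogonal
# with respect to the symplectic form, as in (46).
a, b = np.array([1, 0]), np.array([0, 1])
c, d = np.array([0, 1]), np.array([1, 1])
lhs = D(a, b) @ D(c, d)
rhs = (-1) ** symplectic_form(a, b, c, d) * D(c, d) @ D(a, b)
print(np.array_equal(lhs, rhs))  # True
```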

Note that (9) defines a nondegenerate bilinear form on F_2^{2m} that translates commutativity in HW_N to orthogonality in F_2^{2m}. A commutative subgroup S of HW_N is called a stabilizer group if -I_N is not in S. Thus, for a stabilizer S, thanks to (46), its image under (48) is self-orthogonal [31, 32]. In addition, because (48) restricted to a stabilizer is an isomorphism, two elements of a stabilizer are equal iff their images under (48) are equal. We will think of the image as the row space of a full rank matrix [A B], where both A and B are binary matrices. We will write

(49)

where the rows of [A B] generate the image. Combining this with (43) and (44) we obtain

(50)

Next, if a subspace is self-orthogonal in F_2^{2m}, then the corresponding set of Pauli matrices is a stabilizer. This yields a one-to-one correspondence between stabilizers in HW_N and self-orthogonal subspaces in F_2^{2m}. It also follows that a maximal stabilizer must have 2^m elements. Thus there is a one-to-one correspondence between maximal stabilizers and the Lagrangian Grassmannian L(m). Of particular interest are the maximal stabilizers

(51)   X_N = { D(a, 0) : a in F_2^m },
(52)   Z_N = { D(0, b) : b in F_2^m },

which we naturally identify with row([I_m 0]) and row([0 I_m]), respectively.

What follows holds in general for any stabilizer, but for our purposes, we need only focus on the maximal ones. Let S be a maximal stabilizer and let {G_1, ..., G_m} be an independent generating set of S (that is, S = <G_1, ..., G_m>). Consider the complex vector space [33]

(53)   V(S) = { v in C^N : E v = v for all E in S }.

It is well-known (see, e.g., [34]) that dim V(S) = 1. A unit norm vector that generates it is called a stabilizer state and, with a slight abuse of notation, is also denoted by S. Because we are disregarding scalars, it is beneficial to think of a stabilizer state as a Grassmannian line. Next,

(54)   Π_S = (1 / 2^m) sum_{E in S} E

is a projection onto V(S).
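Finally, a sketch of the projector (54) for the maximal stabilizer of diagonal Pauli matrices identified above (restating the D(a, b) helper for self-containment); its rank-one range is spanned by the first computational basis vector:

```python
import numpy as np
from functools import reduce
from itertools import product

X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def D(a, b):
    factors = [np.linalg.matrix_power(X, ai) @ np.linalg.matrix_power(Z, bi)
               for ai, bi in zip(a, b)]
    return reduce(np.kron, factors)

m, N = 2, 4
# Maximal stabilizer { D(0, b) : b in F_2^m }, identified with the
# Lagrangian row([0 I_m]) as above.
stabilizer = [D((0,) * m, b) for b in product([0, 1], repeat=m)]

P = sum(stabilizer) / 2 ** m       # the projector (54)
print(np.linalg.matrix_rank(P))    # 1: the common eigenspace is a line
print(P @ np.eye(N)[:, 0])         # e_0 is the stabilizer state
```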

Given a stabilizer as above, any choice of signs for the generators G_i also describes a stabilizer. Similarly to (54) put