Orthogonal Sparse Superposition Codes

07/13/2020
by Yunseo Nam, et al.
POSTECH

This paper presents a new class of sparse superposition codes for efficient short-packet and low-rate communication over the AWGN channel. The new codes are orthogonal sparse superposition codes, in which a codeword is constructed as a superposition of orthogonal columns of a dictionary matrix. We propose a successive encoding technique to construct such codewords. In addition, we introduce a near-optimal decoder, named element-wise maximum a posteriori decoding with successive support set cancellation, whose decoding complexity is linear in the block length. Via simulations, we demonstrate that the proposed encoding and decoding techniques are less complex and better performing than existing coded modulation techniques for reliable short-packet communications.


I Introduction

Sparse superposition codes (SPARCs), initially introduced by Joseph and Barron [1], are a class of capacity-achieving codes for the AWGN channel. Unlike traditional coded modulation, a SPARC codeword is constructed by multiplying a Gaussian random matrix with independent and identically distributed (IID) entries by a sparse message vector. Leveraging a proper power allocation and adaptive successive decoding, SPARCs can achieve any fixed rate below the capacity of the Gaussian channel as the block length goes to infinity. Notwithstanding this asymptotic optimality, adaptive successive decoding performs poorly at practical block lengths. Subsequently, adaptive successive decoding with soft decisions [2] and its variations [3] were proposed to improve the decoding performance at finite block lengths, which is of great interest for reliable short-packet communications in 6G [4, 5].

Recently, low-complexity decoding methods inspired by compressed sensing have received significant attention due to the deep connection between SPARCs and compressed sensing [6]. In principle, the decoding of SPARCs can be interpreted through the lens of a sparse signal recovery problem from noisy measurements under a certain sparsity structure. Exploiting this connection, approximate message passing (AMP) [7], successfully used in sparse recovery, has been proposed for the decoding of SPARCs [8]. AMP decoding is attractive in that its per-iteration performance is guaranteed by an elegant recursion, called state evolution [7]. Furthermore, under a Gaussian random dictionary matrix, it provides a better block-error-rate (BLER) performance than adaptive successive decoding at finite block lengths. The performance of SPARCs, however, degrades when the block length is short. In this regime, the dictionary matrix should be carefully designed and optimized. Nevertheless, most existing research has focused on the asymptotic performance analysis of SPARCs under various decoding methods. Focusing on short block lengths, in this paper we take a different approach and construct SPARCs in a deterministic manner.

In this paper, we present a new class of sparse superposition codes, called orthogonal sparse superposition codes. A codeword of the orthogonal superposition codes consists of multiple sub-codewords. Each sub-codeword is constructed by multiplying a commonly shared unitary dictionary matrix by its own sub-message vector. The key idea in encoding is to sequentially select the non-zero supports of all sub-message vectors in such a way that they are mutually exclusive. We refer to this technique as successive orthogonal encoding. This construction guarantees that all sub-codewords are mutually orthogonal. This class of codes has a number of intriguing aspects, because classical permutation modulation codes introduced in the 1960s [9] as well as recently introduced index and spatial modulation techniques [10, 11, 12] can be interpreted as special cases of the proposed codes. We also present a near-optimal decoding algorithm for the orthogonal superposition codes, referred to as element-wise maximum a posteriori (MAP) decoding with successive support set cancellation (E-MAP-SSC). The proposed decoding algorithm is simple yet powerful in that it achieves near-optimal decoding performance while requiring complexity linear in the block length. Therefore, the proposed method is very useful for ultra-reliable and low-latency communication (URLLC) [5]. By simulations, we show that the proposed encoding and decoding techniques outperform classical convolutional codes with Viterbi decoding and polar codes with successive cancellation (SC) decoding [13] at short block lengths.

II Orthogonal Sparse Superposition Coding

In this section, we present how to construct orthogonal sparse superposition codes by a novel encoding strategy called successive orthogonal encoding. Then, we provide some properties of the codes and a relevant example to highlight the merits of the orthogonal superposition codes.

Fig. 1: Proposed successive encoder structure for orthogonal sparse superposition code construction.

II-A Preliminaries

Before presenting the code construction idea, we introduce some notations and definitions.

Dictionary and sub-codeword: Let $N$ be the block length of a code and let $L$ be the number of layers. We let $\mathbf{m}_\ell \in \mathbb{R}^N$ be the $\ell$th sparse message vector with sparsity level $K_\ell$ for $\ell \in [L] = \{1,\ldots,L\}$. We also define a unitary dictionary matrix $\mathbf{U} \in \mathbb{R}^{N\times N}$, i.e., $\mathbf{U}^\top\mathbf{U} = \mathbf{I}_N$, shared by all sub-codewords. Then, the $\ell$th sub-codeword is constructed by multiplying the unitary dictionary matrix and the sparse message vector, i.e., $\mathbf{c}_\ell = \mathbf{U}\mathbf{m}_\ell$ for $\ell \in [L]$.

Constellation: We define a set of signal levels for the non-zero elements of $\mathbf{m}_\ell$. Let $M_\ell$ be the non-zero alphabet size of $\mathbf{m}_\ell$ for $\ell \in [L]$. Then, the signal level set for the non-zero values of the $\ell$th sub-message vector is $\mathcal{A}_\ell = \{a_{\ell,1},\ldots,a_{\ell,M_\ell}\}$, typically chosen from a pulse amplitude modulation (PAM) signal set. By the union of the signal level sets, we define a multi-level modulation set $\mathcal{A} = \bigcup_{\ell=1}^{L}\mathcal{A}_\ell$.

II-B Successive Orthogonal Encoding

As illustrated in Fig. 1, the key idea of the proposed orthogonal sparse superposition coding is to successively map information bits into $L$ orthogonal sub-codeword vectors $\mathbf{c}_1,\ldots,\mathbf{c}_L$, where $\mathbf{c}_\ell = \mathbf{U}\mathbf{m}_\ell$ for $\ell \in [L]$.

We first explain how to construct the first sub-codeword $\mathbf{c}_1$ from a binary information string $\mathbf{b}_1$ of length $B_1 = \left\lfloor\log_2\binom{N}{K_1}\right\rfloor + K_1\log_2 M_1$. First, the encoder maps $\left\lfloor\log_2\binom{N}{K_1}\right\rfloor$ bits by uniformly selecting $K_1$ non-zero indices in $[N] = \{1,\ldots,N\}$. Here, we define the support set of $\mathbf{m}_1$ by $\mathcal{I}_1 = \{i : m_{1,i} \neq 0\}$. Then, the encoder maps the remaining $K_1\log_2 M_1$ bits by uniformly assigning the elements of $\mathcal{A}_1$ to the non-zero elements of $\mathbf{m}_1$. Once the sparse message vector $\mathbf{m}_1$ is determined, the first sub-codeword vector is obtained by $\mathbf{c}_1 = \mathbf{U}\mathbf{m}_1$.

Utilizing the support information of $\mathbf{m}_1$, i.e., $\mathcal{I}_1$, the encoder constructs $\mathbf{m}_2$ so that it is orthogonal to $\mathbf{m}_1$, i.e., $\mathbf{m}_1^\top\mathbf{m}_2 = 0$. To accomplish this, the encoder defines the candidate index set of $\mathbf{m}_2$ by $\mathcal{B}_2 = [N]\setminus\mathcal{I}_1$. Then, the encoder maps the information vector $\mathbf{b}_2$ into $\mathbf{m}_2$ by uniformly choosing $K_2$ indices in $\mathcal{B}_2$ and uniformly allocating the elements of $\mathcal{A}_2$ to the support set of $\mathbf{m}_2$, i.e., $\mathcal{I}_2$. The second sub-codeword $\mathbf{c}_2 = \mathbf{U}\mathbf{m}_2$, therefore, carries $B_2 = \left\lfloor\log_2\binom{N-K_1}{K_2}\right\rfloor + K_2\log_2 M_2$ information bits. This successive encoding method guarantees the orthogonality between $\mathbf{m}_1$ and $\mathbf{m}_2$. Therefore, $\mathbf{c}_1^\top\mathbf{c}_2 = \mathbf{m}_1^\top\mathbf{U}^\top\mathbf{U}\mathbf{m}_2 = 0$ under the unitary condition on $\mathbf{U}$.

The encoder successively applies the same principle up to the $L$th layer. Let $\mathcal{B}_\ell = [N]\setminus\bigcup_{j=1}^{\ell-1}\mathcal{I}_j$ with $N_\ell = |\mathcal{B}_\ell| = N - \sum_{j=1}^{\ell-1}K_j$. The encoder maps $\mathbf{b}_\ell$ into $\mathbf{m}_\ell$ by uniformly choosing $K_\ell$ indices in $\mathcal{B}_\ell$. Then, it uniformly assigns elements of $\mathcal{A}_\ell$ to the selected positions of $\mathbf{m}_\ell$. Since the support set of $\mathbf{m}_\ell$, i.e., $\mathcal{I}_\ell$, is mutually exclusive with the support sets $\mathcal{I}_j$ for $j < \ell$, the orthogonality among the sub-codewords is guaranteed, i.e., $\mathbf{c}_j^\top\mathbf{c}_\ell = 0$ for $j \neq \ell$.

Finally, a codeword of the orthogonal superposition code is a superposition of the $L$ sub-codeword vectors with power control coefficients, namely,

$$\mathbf{x} = \sum_{\ell=1}^{L}\alpha_\ell\mathbf{c}_\ell = \mathbf{U}\sum_{\ell=1}^{L}\alpha_\ell\mathbf{m}_\ell, \qquad (1)$$

where $\boldsymbol{\alpha} = [\alpha_1,\ldots,\alpha_L]^\top$ denotes the vector of power control coefficients, with $\alpha_\ell > 0$ chosen to meet the transmit power constraint.
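To make the successive encoding concrete, the following is a minimal Python sketch of the mapping above, assuming each $M_\ell$ is a power of two and using lexicographic unranking of $K$-subsets for the index bits; the helper names (`unrank_combination`, `oss_encode`) are illustrative, not from the paper.

```python
import numpy as np
from math import comb, floor, log2

def unrank_combination(rank, n, k):
    """Map an integer rank in [0, C(n,k)) to a k-subset of {0,...,n-1}
    (lexicographic order, combinatorial number system)."""
    subset, start = [], 0
    for slots in range(k, 0, -1):
        for cand in range(start, n):
            c = comb(n - cand - 1, slots - 1)
            if rank < c:
                subset.append(cand)
                start = cand + 1
                break
            rank -= c
    return subset

def bits_to_int(bits):
    return int("".join(map(str, bits)), 2) if bits else 0

def oss_encode(bits, N, Ks, alphabets, U=None):
    """Successive orthogonal encoding (sketch): layer ell picks K_ell indices
    from the still-free positions (index bits) and one level from A_ell per
    index (level bits). Supports are disjoint by construction, so the
    sub-codewords are orthogonal whenever U is unitary."""
    m = np.zeros(N)
    free = list(range(N))                        # candidate index set B_ell
    pos = 0
    for K, A in zip(Ks, alphabets):
        idx_bits = floor(log2(comb(len(free), K)))
        lvl_bits = int(log2(len(A)))             # assumes |A_ell| is a power of 2
        r = bits_to_int(bits[pos:pos + idx_bits]); pos += idx_bits
        support = [free[j] for j in unrank_combination(r, len(free), K)]
        for i in support:
            s = bits_to_int(bits[pos:pos + lvl_bits]); pos += lvl_bits
            m[i] = A[s]
        free = [i for i in free if i not in support]   # successive cancellation
    return m if U is None else U @ m
```

With `U=None` the dictionary is taken to be the identity, which matches the element-wise decoding assumption used in Section III.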

II-C Properties

To shed further light on the significance of our code construction method, we provide some remarks.

Decodability: For the noiseless case, an orthogonal sparse superposition code is uniquely decodable if all non-zero signal levels of distinct layers are distinct, i.e., $\alpha_j\mathcal{A}_j \cap \alpha_\ell\mathcal{A}_\ell = \emptyset$ for $j \neq \ell$. This is true because the decoder can distinguish the $\ell$th sub-codeword from $\mathbf{U}^\top\mathbf{x} = \sum_{j=1}^{L}\alpha_j\mathbf{m}_j$, provided the scaled level sets are disjoint. Then, the decoder performs an inverse mapping from $\mathbf{m}_\ell$ to $\mathbf{b}_\ell$ to obtain the information bits.
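As a quick numerical illustration of this unique decodability (a sketch assuming two layers with the disjoint level sets $\{\pm 1\}$ and $\{\pm 2\}$ and unit power coefficients), projecting a noiseless codeword back through $\mathbf{U}^\top$ reveals each layer from its level magnitudes:

```python
import numpy as np

rng = np.random.default_rng(0)
U = np.linalg.qr(rng.normal(size=(8, 8)))[0]   # a random unitary dictionary
m1 = np.zeros(8); m1[[1, 4]] = [1, -1]         # layer 1: levels in {1, -1}
m2 = np.zeros(8); m2[[0, 6]] = [2, -2]         # layer 2: levels in {2, -2}
x = U @ (m1 + m2)                              # noiseless codeword
z = U.T @ x                                    # recovers m1 + m2 exactly
layer1 = {i for i in range(8) if round(abs(z[i])) == 1}
layer2 = {i for i in range(8) if round(abs(z[i])) == 2}
print(layer1, layer2)                          # {1, 4} {0, 6}
```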

Orthogonal property: Unlike conventional sparse superposition codes, the most prominent characteristic of the orthogonal sparse superposition codes is the orthogonal property, implying that all support sets of $\mathbf{m}_1,\ldots,\mathbf{m}_L$ are mutually exclusive, i.e., $\mathcal{I}_j\cap\mathcal{I}_\ell = \emptyset$ for all $j \neq \ell$. This property facilitates decoding in a computationally efficient manner, as will be explained in the sequel.

Code rate: The $\ell$th sub-codeword conveys $B_\ell = \left\lfloor\log_2\binom{N_\ell}{K_\ell}\right\rfloor + K_\ell\log_2 M_\ell$ information bits using $N$ channel uses. Therefore, the rate of the orthogonal sparse superposition code is

$$R = \frac{\sum_{\ell=1}^{L}\left(\left\lfloor\log_2\binom{N_\ell}{K_\ell}\right\rfloor + K_\ell\log_2 M_\ell\right)}{N}. \qquad (2)$$

For a symmetric case in which $K_\ell = K$, $M_\ell = M$, and $N_\ell \approx N$ for all $\ell \in [L]$, the code rate simplifies to

$$R \approx \frac{L\left(\left\lfloor\log_2\binom{N}{K}\right\rfloor + K\log_2 M\right)}{N}. \qquad (3)$$

For a fixed block length $N$, the proposed encoding scheme can construct codes with very flexible rates by appropriately choosing the code parameters, including the number of layers $L$, the number of non-zero values per layer $K_\ell$, the candidate index set size per layer $N_\ell$, and the non-zero alphabets $\mathcal{A}_\ell$ in each layer. These code parameters can be optimized to minimize decoding errors.
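The following small helper (an illustrative sketch; `oss_rate` is not from the paper) evaluates the exact rate in (2) for given parameters, assuming each $M_\ell$ is a power of two:

```python
from math import comb, floor, log2

def oss_rate(N, Ks, Ms):
    """Exact rate per eq. (2): layer ell spends floor(log2 C(N_ell, K_ell))
    index bits plus K_ell * log2(M_ell) level bits over N channel uses."""
    bits, remaining = 0, N
    for K, M in zip(Ks, Ms):
        bits += floor(log2(comb(remaining, K))) + K * int(log2(M))
        remaining -= K
    return bits / N

print(oss_rate(16, [2, 2], [2, 2]))   # -> 1.0, the symmetric setup of Example 1 below
```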

The average transmit power: One intriguing property of the orthogonal superposition codes is that their average transmit power can be extremely low. Without loss of generality, we set $\alpha_\ell = 1$, i.e., $\mathbf{x} = \sum_{\ell=1}^{L}\mathbf{c}_\ell$, and assume the non-zero values are drawn uniformly from $\mathcal{A}_\ell$ for $\ell \in [L]$. Since the $\ell$th sub-codeword has sparsity level $K_\ell$ and its non-zero values are chosen from $\mathcal{A}_\ell$, the average power of $\mathbf{c}_\ell$ is

$$\mathsf{E}\left[\|\mathbf{c}_\ell\|_2^2\right] = \frac{K_\ell}{M_\ell}\sum_{a\in\mathcal{A}_\ell}a^2. \qquad (4)$$

Since the sub-codeword vectors are mutually orthogonal, the average power of $\mathbf{x}$ becomes

$$\mathsf{E}\left[\|\mathbf{x}\|_2^2\right] = \sum_{\ell=1}^{L}\frac{K_\ell}{M_\ell}\sum_{a\in\mathcal{A}_\ell}a^2. \qquad (5)$$
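A one-liner suffices to evaluate (5); as a sanity check (a sketch with illustrative names), a two-layer code with $K_\ell = 2$ per layer and levels $\{\pm 1\}$, $\{\pm 2\}$ has average codeword energy 10:

```python
def oss_avg_energy(Ks, alphabets):
    """Average codeword energy per eq. (5): disjoint supports add up, and each
    non-zero level is uniform over its layer's alphabet."""
    return sum(K * sum(a * a for a in A) / len(A) for K, A in zip(Ks, alphabets))

print(oss_avg_energy([2, 2], [[1, -1], [2, -2]]))   # -> 10.0
```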

Nominal coding gain: We define a codebook of size $2^{NR}$ by $\mathcal{C} = \{\mathbf{x}^1,\ldots,\mathbf{x}^{2^{NR}}\}$, where $\mathbf{x}^i$ is the $i$th codeword of the orthogonal sparse superposition code. The minimum Euclidean distance of the codebook is defined as

$$d_{\min}(\mathcal{C}) = \min_{i\neq j}\left\|\mathbf{x}^i - \mathbf{x}^j\right\|_2. \qquad (6)$$

Then, we define the nominal coding gain [14]:

$$\gamma_c(\mathcal{C}) = \frac{d_{\min}^2(\mathcal{C})}{4E_b}, \qquad (7)$$

where $E_b$ is the average energy per information bit.

This coding gain is useful for fairly comparing different types of coded modulation schemes under maximum likelihood decoding in the high-SNR regime.
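In code, (7) reads as follows (a small sketch; `nominal_coding_gain_db` is an illustrative name):

```python
import math

def nominal_coding_gain_db(d2_min, total_energy, num_bits):
    """gamma_c = d_min^2 / (4 Eb) from eq. (7), returned in dB."""
    eb = total_energy / num_bits
    return 10 * math.log10(d2_min / (4 * eb))
```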

It is instructive to consider an example of the proposed coding scheme.

Example 1: For ease of exposition, we restrict attention to the symmetric case where $N = 16$, $L = 2$, and $K_1 = K_2 = 2$. We also consider the two PAM sets $\mathcal{A}_1 = \{1,-1\}$ and $\mathcal{A}_2 = \{2,-2\}$. In this example, we construct an orthogonal sparse superposition code with rate $R = 1$. Using the idea of successive encoding, the encoder generates two sub-codeword vectors. By choosing two non-zero positions in $\mathbf{m}_1$ and setting each to 1 or -1 uniformly, we map $\lfloor\log_2\binom{16}{2}\rfloor + 2 = 8$ information bits to $\mathbf{m}_1$. Then, the encoder maps 8 bits to $\mathbf{m}_2$ by selecting two non-zero indices from $\mathcal{B}_2 = [16]\setminus\mathcal{I}_1$ with $|\mathcal{B}_2| = 14$, and it uniformly allocates 2 or -2 to the non-zero elements of $\mathbf{m}_2$. Since each sub-codeword has a ternary alphabet, the orthogonal superposition codeword becomes $\mathbf{x} = \mathbf{U}(\mathbf{m}_1 + \mathbf{m}_2)$. This code has an alphabet size of five, and the codeword is sparse, i.e., $\|\mathbf{U}^\top\mathbf{x}\|_0 = 4$. The normalized average transmit power per channel use becomes

$$\frac{\mathsf{E}\left[\|\mathbf{x}\|_2^2\right]}{N} = \frac{2\cdot 1^2 + 2\cdot 2^2}{16} = \frac{10}{16}. \qquad (8)$$

Since the squared minimum distance of this code is two, i.e., $d_{\min}^2(\mathcal{C}) = 2$, the normalized minimum distance is given by

$$\frac{d_{\min}^2(\mathcal{C})}{4E_b} = \frac{2}{4\cdot(10/16)} = 0.8. \qquad (9)$$
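The squared minimum distance claimed above can be verified exhaustively on a shortened variant of Example 1 (a sketch with $N = 6$ and one non-zero per layer so the codebook stays small; the minimizing pair moves a $\pm 1$ entry to an adjacent free position):

```python
import numpy as np
from itertools import permutations, product

# Shortened Example-1 variant: N = 6, one non-zero per layer,
# levels {+/-1} for layer 1 and {+/-2} for layer 2.
N = 6
codebook = []
for i, j in permutations(range(N), 2):          # disjoint supports
    for a1, a2 in product([1, -1], [2, -2]):
        x = np.zeros(N)
        x[i], x[j] = a1, a2
        codebook.append(x)
C = np.stack(codebook)                          # 30 * 4 = 120 codewords
d2 = min(float(np.sum((C[p] - C[q]) ** 2))
         for p in range(len(C)) for q in range(p + 1, len(C)))
print(d2)                                       # -> 2.0 (two unit entries differ)
```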

Remark 1 (Generalization of index modulation): The proposed coding scheme also generalizes existing index modulation methods [10]. Consider a single-layer encoding with one section, i.e., $L = 1$ and $K_1 = 1$. This method is identical to index modulation. Therefore, our coding scheme can be interpreted as an efficient method for multiplexing multi-layered index (spatial) modulated signals in an orthogonal manner [10, 11, 12]. Accordingly, one can also use our codes as a modulation technique in conjunction with modern codes (e.g., polar and low-density parity-check codes).

III A Low-Complexity Decoding Algorithm

This section presents a low-complexity decoding algorithm, referred to as element-wise maximum a posteriori decoding with successive support set cancellation (E-MAP with SSC), under the assumption of $\mathbf{U} = \mathbf{I}_N$, so that the received signal is $\mathbf{y} = \mathbf{x} + \mathbf{n}$ with $\mathbf{n}\sim\mathcal{N}\left(\mathbf{0},\sigma^2\mathbf{I}_N\right)$.

The key idea of the proposed algorithm is to successively decode the sub-message vectors $\mathbf{m}_1,\ldots,\mathbf{m}_L$ in $\mathbf{y} = \sum_{\ell=1}^{L}\alpha_\ell\mathbf{m}_\ell + \mathbf{n}$ so as to maximize the a posteriori probability (APP) in a Bayesian fashion [15]. Recall that the joint APP factorizes as

$$P\left(\mathbf{m}_1,\ldots,\mathbf{m}_L\mid\mathbf{y}\right) = \prod_{\ell=1}^{L}P\left(\mathbf{m}_\ell\mid\mathbf{y},\mathcal{I}_1,\ldots,\mathcal{I}_{\ell-1}\right), \qquad (10)$$

where the equality follows from the fact that the support sets $\mathcal{I}_1,\ldots,\mathcal{I}_{\ell-1}$ are sufficient information to decode $\mathbf{m}_\ell$. To see this, recall the orthogonality between the sub-codewords, i.e.,

$$\mathbf{m}_j^\top\mathbf{m}_\ell = 0 \qquad (11)$$

for $j \neq \ell$. In addition, for $i \in \mathcal{I}_j$, we have $m_{\ell,i} = 0$ for all $\ell \neq j$. Therefore, the conditional probabilities in (10) depend on the previously decoded sub-message vectors only through their support sets. From this decomposition, our decoding algorithm successively estimates each sub-message vector by exploiting the knowledge of previously identified support sets. This motivates a decoding method with successive support set cancellation.

The decoder performs $L$ iterations. Each iteration decodes one sub-message vector and removes the newly identified support set to update the prior distribution for the next iteration. Suppose the decoder has correctly identified the non-zero support sets $\mathcal{I}_1,\ldots,\mathcal{I}_{\ell-1}$ in the previous iterations. Using this information, it performs element-wise MAP decoding to identify the support set $\mathcal{I}_\ell$. Let $\mathcal{B}_\ell = [N]\setminus\bigcup_{j=1}^{\ell-1}\mathcal{I}_j$ with $N_\ell = |\mathcal{B}_\ell|$. Since all received signals $y_i$ for $i \in \mathcal{B}_\ell$ are conditionally independent, the joint APP factorizes as

$$P\left(\mathbf{m}_\ell\mid\mathbf{y},\mathcal{I}_1,\ldots,\mathcal{I}_{\ell-1}\right) = \prod_{i\in\mathcal{B}_\ell}P\left(m_{\ell,i}\mid y_i\right). \qquad (12)$$

To compute this, we need the likelihood function, which is given by a mixture of Gaussian densities: for $i \in \mathcal{B}_\ell$,

$$f\left(y_i\mid m_{\ell,i}=a\right) = \begin{cases}\mathcal{N}\left(y_i;\alpha_\ell a,\sigma^2\right), & a\in\mathcal{A}_\ell,\\[2pt] \left(1-\sum_{j=\ell+1}^{L}\frac{K_j}{N_\ell}\right)\mathcal{N}\left(y_i;0,\sigma^2\right)+\sum_{j=\ell+1}^{L}\frac{K_j}{N_\ell M_j}\sum_{b\in\mathcal{A}_j}\mathcal{N}\left(y_i;\alpha_j b,\sigma^2\right), & a=0,\end{cases} \qquad (13)$$

since a position not used by layer $\ell$ may still carry a non-zero level of a later layer.

The prior distribution of $m_{\ell,i}$ is also decomposed into

$$P\left(m_{\ell,i}=a\right) = \frac{1}{Z}\left(\frac{K_\ell}{N_\ell M_\ell}\,\mathbb{1}_{\mathcal{A}_\ell}(a) + \left(1-\frac{K_\ell}{N_\ell}\right)\mathbb{1}_{\{0\}}(a)\right), \qquad (14)$$

where $Z$ is a normalization constant ensuring a proper probability distribution and $\mathbb{1}_{\mathcal{S}}(\cdot)$ is the indicator function of a set $\mathcal{S}$. Recall that the non-zero supports of $\mathbf{m}_\ell$ are uniformly drawn from $\mathcal{B}_\ell$ with $|\mathcal{B}_\ell| = N_\ell$. The probability mass function of $\mathcal{I}_\ell$ becomes

$$P\left(\mathcal{I}_\ell=\mathcal{S}\right) = \binom{N_\ell}{K_\ell}^{-1}\quad\text{for }\mathcal{S}\subseteq\mathcal{B}_\ell\text{ with }|\mathcal{S}| = K_\ell. \qquad (15)$$

Invoking the prior distribution of $\mathcal{I}_\ell$ in (15), we obtain

$$P\left(i\in\mathcal{I}_\ell\right) = \frac{\binom{N_\ell-1}{K_\ell-1}}{\binom{N_\ell}{K_\ell}} = \frac{K_\ell}{N_\ell}. \qquad (16)$$

Utilizing (13), (15), and (16), the decoder computes the probability of the event that $i \in \mathcal{I}_\ell$ given $y_i$ as

$$P\left(i\in\mathcal{I}_\ell\mid y_i\right) = \frac{\frac{K_\ell}{N_\ell M_\ell}\sum_{a\in\mathcal{A}_\ell}f\left(y_i\mid m_{\ell,i}=a\right)}{\frac{K_\ell}{N_\ell M_\ell}\sum_{a\in\mathcal{A}_\ell}f\left(y_i\mid m_{\ell,i}=a\right) + \left(1-\frac{K_\ell}{N_\ell}\right)f\left(y_i\mid m_{\ell,i}=0\right)}. \qquad (17)$$

To satisfy the sparsity condition on $\mathbf{m}_\ell$, i.e., $\|\mathbf{m}_\ell\|_0 = K_\ell$, the decoder estimates the support set of $\mathbf{m}_\ell$ by selecting the $K_\ell$ indices with the largest element-wise MAP metrics in (17). Let $\pi(k) \in \mathcal{B}_\ell$ be the index with the $k$th largest element-wise MAP metric, i.e., $P\left(\pi(1)\in\mathcal{I}_\ell\mid y_{\pi(1)}\right) \geq \cdots \geq P\left(\pi(N_\ell)\in\mathcal{I}_\ell\mid y_{\pi(N_\ell)}\right)$. Then, the estimated support set of $\mathbf{m}_\ell$ is

$$\hat{\mathcal{I}}_\ell = \{\pi(1),\ldots,\pi(K_\ell)\}. \qquad (18)$$

Once the support set is identified, the decoder performs MAP estimation of the signal levels on $\hat{\mathcal{I}}_\ell$, which is given by

$$\hat{m}_{\ell,i} = \arg\max_{a\in\mathcal{A}_\ell}P\left(m_{\ell,i}=a\mid y_i\right) = \arg\min_{a\in\mathcal{A}_\ell}\left(y_i-\alpha_\ell a\right)^2 \qquad (19)$$

for $i \in \hat{\mathcal{I}}_\ell$, where the second equality holds because the levels in $\mathcal{A}_\ell$ are a priori equiprobable.

The iteration ends when $\ell = L$.
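Putting (12)-(19) together, the following is a compact Python sketch of E-MAP with SSC, assuming $\mathbf{U} = \mathbf{I}_N$ and unit power coefficients; the off-support evidence uses the Gaussian mixture of (13) with an element-wise prior, so this is an illustration of the decoding rule rather than a verbatim reproduction of the paper's algorithm.

```python
import numpy as np

def gauss(y, mu, s2):
    """Gaussian density N(y; mu, s2), evaluated element-wise."""
    return np.exp(-(y - mu) ** 2 / (2 * s2)) / np.sqrt(2 * np.pi * s2)

def emap_ssc_decode(y, Ks, alphabets, s2):
    """Element-wise MAP decoding with successive support set cancellation
    (sketch). Returns the estimated superimposed message vector and the
    per-layer support sets."""
    y = np.asarray(y, dtype=float)
    N, L = len(y), len(Ks)
    free = list(range(N))                 # candidate index set B_ell
    m_hat = np.zeros(N)
    supports = []
    for ell in range(L):
        K, A = Ks[ell], alphabets[ell]
        n_free = len(free)
        yf = y[np.array(free)]
        # "on" evidence: position carries a level of layer ell (prior K/n_free)
        on = (K / n_free) * np.mean([gauss(yf, a, s2) for a in A], axis=0)
        # "off" evidence: zero, or a level of some later layer -- the
        # Gaussian mixture of eq. (13)
        p0 = 1.0 - K / n_free
        off = np.zeros(n_free)
        for j in range(ell + 1, L):
            pj = Ks[j] / n_free
            off += pj * np.mean([gauss(yf, b, s2) for b in alphabets[j]], axis=0)
            p0 -= pj
        off += p0 * gauss(yf, 0.0, s2)
        app = on / (on + off)             # element-wise APP, cf. eq. (17)
        top = np.argsort(app)[::-1][:K]   # K largest metrics, cf. eq. (18)
        support = [free[t] for t in top]
        for i in support:                 # level MAP = nearest level, eq. (19)
            m_hat[i] = min(A, key=lambda a: (y[i] - a) ** 2)
        supports.append(support)
        free = [i for i in free if i not in support]   # support set cancellation
    return m_hat, supports
```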

Remark 2 (Linear decoding complexity): The decoding complexity of the proposed E-MAP with SSC increases linearly with the block length $N$. In the $\ell$th iteration, the decoder needs to compute $N_\ell$ APPs as in (17). In addition, $K_\ell M_\ell$ computations are required for the signal level detection in (19). As a result, the total decoding complexity becomes $\mathcal{O}\left(\sum_{\ell=1}^{L}\left(N_\ell + K_\ell M_\ell\right)\right)$. For example, when $K_\ell M_\ell \ll N$, the decoding complexity order is linear in both the block length and the number of layers, i.e., $\mathcal{O}(NL)$.

Remark 3 (Optimality): In our journal version, we show that the two-layered OSS code with the E-MAP with SSC decoder achieves the Shannon limit in the power-limited regime, i.e., $\frac{E_b}{N_0}\rightarrow\ln 2$ as $R\rightarrow 0$.

IV Simulation Results

In this section, we provide numerical results to demonstrate the effectiveness of the proposed encoding and decoding methods.

We consider a two-layered orthogonal superposition code with block length $N = 32$. Each layer consists of one section, i.e., $K_1 = K_2 = 1$, and the non-zero alphabets are $\mathcal{A}_1 = \{1\}$ and $\mathcal{A}_2 = \{-1\}$. Since each layer conveys five information bits, the code rate becomes $R = \frac{10}{32}$. For decoding, we use the E-MAP decoding with SSC explained in the previous section. We compare the proposed encoding and decoding method with conventional coding schemes, including a $(32,10)$ polar code and a $(32,10)$ convolutional code. Maximum likelihood (ML) decoding and optimal soft-output Viterbi decoding are used for the polar and convolutional codes, respectively. Fig. 2 shows the BLER performance as a function of $E_b/N_0$, which is typically used to fairly compare the BLERs of codes with different rates. As can be seen, the proposed encoding and decoding method outperforms the convolutional code with optimal decoding. In addition, it achieves a BLER performance similar to that of the polar code with ML decoding, while the proposed decoder's complexity is much lower than that of ML decoding.

Why do the proposed orthogonal superposition code and the polar code achieve similar performance? This can be explained by comparing the nominal coding gains of the two codes. Recall that the squared minimum distance of the orthogonal superposition code is two, i.e., $d_{\min}^2(\mathcal{C}) = 2$. The energy per bit is $E_b = \frac{2}{10} = 0.2$. Therefore, the nominal coding gain of the code is

$$\gamma_c = \frac{d_{\min}^2(\mathcal{C})}{4E_b} = \frac{2}{4\cdot 0.2} = 2.5 \approx 3.98\,\text{dB}. \qquad (20)$$

For the chosen parameters, the polar code has a minimum Hamming distance of eight, i.e., $d_{\min,H} = 8$. Since it uses BPSK modulation with unit symbol energy, the energy per bit is $E_b = \frac{32}{10} = 3.2$, and the nominal coding gain of the polar code is

$$\gamma_c = \frac{4\,d_{\min,H}}{4E_b} = \frac{32}{4\cdot 3.2} = 2.5. \qquad (21)$$

Therefore, the proposed coding scheme achieves the same nominal coding gain as the polar code at this short block length.
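With the parameters above, the two nominal coding gains can be checked directly (a sketch reusing the `nominal_coding_gain_db` helper from Sec. II-C):

```python
# OSS code: d_min^2 = 2, codeword energy 2, 10 information bits.
# (32, 10) polar code with BPSK: d_min^2 = 4 * 8 (Hamming distance 8),
# codeword energy 32, 10 information bits.
print(nominal_coding_gain_db(2, 2, 10))        # -> ~3.98 dB
print(nominal_coding_gain_db(4 * 8, 32, 10))   # -> ~3.98 dB
```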

Fig. 2: BLER performance comparison of different codes under a very short block length of $N = 32$.

Fig. 3 compares the BLER performance of the orthogonal superposition codes and polar codes under different code lengths and rates. We use the simple SC decoder for the polar codes in these simulations. As can be seen, the proposed encoding and decoding techniques outperform the polar codes with SC decoding in the short block length regime. For example, our method provides approximately a 0.3 dB gain over the polar code with the SC decoder. We emphasize that the decoding complexity of the proposed method, $\mathcal{O}(NL)$, is much less than that of the SC decoder, i.e., $\mathcal{O}(N\log N)$, while performing better.
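For reproducibility, a minimal Monte-Carlo sketch of the Sec. IV setup is given below, reusing the `emap_ssc_decode` sketch above; messages are drawn directly as support/level pairs rather than through the bit mapping, and the noise variance follows from $E_b = 0.2$.

```python
import numpy as np

rng = np.random.default_rng(0)
N, Ks, As = 32, [1, 1], [[1.0], [-1.0]]
eb = 2.0 / 10                                   # codeword energy 2, 10 bits
for ebn0_db in [2.0, 4.0, 6.0, 8.0]:
    s2 = eb / (2.0 * 10 ** (ebn0_db / 10))      # sigma^2 = N0 / 2
    errs, trials = 0, 4000
    for _ in range(trials):
        i, j = rng.choice(N, size=2, replace=False)
        x = np.zeros(N)
        x[i], x[j] = 1.0, -1.0                  # layer-1 level +1, layer-2 level -1
        y = x + rng.normal(0.0, np.sqrt(s2), N)
        m_hat, _ = emap_ssc_decode(y, Ks, As, s2)
        errs += int(not np.array_equal(m_hat, x))
    print(f"Eb/N0 = {ebn0_db} dB, BLER ~ {errs / trials}")
```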

Fig. 3: BLER performance comparison as the block length increases while the code rate decreases.

V Conclusion

This paper has introduced a new class of sparse superposition codes, called orthogonal sparse superposition codes. To construct this class of codes, we have presented a novel successive encoding technique that generates codewords as sparse linear combinations of orthogonal columns of a dictionary matrix. Harnessing the orthogonal structure, we have also proposed a near-optimal decoder whose complexity is linear in the block length. In comparison with polar codes under SC decoding and convolutional codes under ML (Viterbi) decoding, we have demonstrated that the proposed encoding and decoding techniques are more effective than existing coded modulation techniques in the short block length regime.

References

  • [1] A. Barron and A. Joseph, “Least squares superposition codes of moderate dictionary size are reliable at rates up to capacity,” IEEE Trans. Inf. Theory, vol. 58, pp. 2541–2557, Feb. 2012.
  • [2] S. Cho and A. Barron, “Approximate iterative Bayes optimal estimates for high-rate sparse superposition codes,” in Sixth Workshop on Information-Theoretic Methods in Science and Engineering, 2013.
  • [3] A. Greig and R. Venkataramanan, “Techniques for improving the finite length performance of sparse superposition codes,” IEEE Trans. Commun., vol. 66, pp. 905–917, Mar. 2018.
  • [4] F. Boccardi, R. Heath, A. Lozano, T. Marzetta, and P. Popovski, “Five disruptive technology directions for 5G,” IEEE Commun. Mag., vol. 52, no. 2, pp. 74–80, Feb. 2014.
  • [5] P. Popovski, C. Stefanovic, J. J. Nielsen, E. de Carvalho, M. Angjelichinoski, K. F. Trillingsgaard, and A.-S. Bana, “Wireless access in ultra-reliable low-latency communication (URLLC),” IEEE Trans. Commun., vol. 67, no. 8, pp. 5783–5801, Aug. 2019.
  • [6] E. J. Candès, J. Romberg, and T. Tao, “Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information,” IEEE Trans. Inf. Theory, vol. 52, no. 2, pp. 489–509, Feb. 2006.
  • [7] D. L. Donoho, A. Maleki, and A. Montanari, “Message-passing algorithms for compressed sensing,” Proceedings of the National Academy of Sciences, vol. 106, no. 45, pp. 18914–18919, 2009.
  • [8] C. Rush, A. Greig, and R. Venkataramanan, “Capacity-achieving sparse superposition codes via approximate message passing decoding,” IEEE Trans. Inf. Theory, vol. 63, no. 3, pp. 1476–1500, Mar. 2017.
  • [9] D. Slepian, “Permutation modulation,” Proceedings of the IEEE, vol. 53, no. 3, pp. 228–236, Mar. 1965.
  • [10] E. Başar, Ü. Aygölü, E. Panayırcı, and H. V. Poor, “Orthogonal frequency division multiplexing with index modulation,” IEEE Trans. Sig. Process., vol. 61, no. 22, pp. 5536–5549, Nov. 2013.
  • [11] M. Di Renzo, H. Haas, and P. M. Grant, “Spatial modulation for multiple-antenna wireless systems: A survey,” IEEE Commun. Mag., vol. 49, no. 12, Dec. 2011.
  • [12] J. Choi, Y. Nam, and N. Lee, “Spatial lattice modulation for MIMO systems,” IEEE Trans. Sig. Process., vol. 66, no. 12, pp. 3185–3198, Jun. 2018.
  • [13] E. Arikan, “Channel polarization: A method for constructing capacity-achieving codes for symmetric binary-input memoryless channels,” IEEE Trans. Inf. Theory, vol. 55, pp. 3051–3073, Jul. 2009.
  • [14] G. D. Forney Jr. and G. Ungerboeck, “Modulation and coding for linear Gaussian channels,” IEEE Trans. Inf. Theory, vol. 44, no. 6, pp. 2384–2415, Jun. 1998.
  • [15] Y. Nam and N. Lee, “Bayesian matching pursuit: a finite-alphabet sparse signal recovery algorithm for quantized compressive sensing,” IEEE Sig. Process. Letters, vol. 26, no. 9, pp. 1285–1289, Sep. 2019.