The channel capacity of a finite-alphabet input can be predicted by theoretical calculation of the mutual information and confirmed by the bit error rate (BER) performance of coded modulation schemes. So far, the smallest gap between the theoretical result and the coding performance is 0.0045 dB, found using a low-density parity-check (LDPC) code with BPSK input [1].
A standard way of modeling communication is the additive white Gaussian noise (AWGN) channel

$y = x + n$, (1)

where $y$ is the received signal, $x$ is the transmitted signal, and $n$ is the received AWGN component drawn from a normally distributed ensemble of power $\sigma^2$, denoted $n \sim \mathcal{N}(0, \sigma^2)$.
For a finite-alphabet input, the channel capacity can be expressed in terms of the achievable bit rate (ABR) through the mutual information as

$C = I(x; y) = H(y) - H(n)$, (2)

where $I(x; y)$ is the mutual information, $H(y)$ is the entropy of the received signal, and $H(n)$ is the entropy of the AWGN.
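The finite-alphabet mutual information in (2) can be estimated numerically. As an illustrative sketch (not the authors' code), the following Monte Carlo estimate of $I(x; y)$ for equiprobable BPSK over a real AWGN channel uses the identity $I(x;y) = \mathbb{E}[\log_2 p(y|x) - \log_2 p(y)]$; the function name and sample size are my own choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def bpsk_mutual_info(snr_db, n=200_000):
    """Monte Carlo estimate of I(x; y) for equiprobable BPSK on a real AWGN channel."""
    snr = 10.0 ** (snr_db / 10.0)           # E_s / sigma^2 with E_s = 1
    sigma = (1.0 / snr) ** 0.5
    x = rng.choice([-1.0, 1.0], size=n)     # BPSK symbols
    y = x + sigma * rng.standard_normal(n)  # received signal, as in (1)
    # conditional densities p(y | x = +1) and p(y | x = -1), up to a common factor
    p_pos = np.exp(-(y - 1.0) ** 2 / (2.0 * sigma ** 2))
    p_neg = np.exp(-(y + 1.0) ** 2 / (2.0 * sigma ** 2))
    p_cond = np.where(x > 0, p_pos, p_neg)
    # I(x; y) = E[ log2 p(y|x) - log2 p(y) ], with p(y) = (p_pos + p_neg) / 2
    return float(np.mean(np.log2(p_cond) - np.log2(0.5 * (p_pos + p_neg))))
```

The estimate approaches 1 bit per symbol at high SNR and falls toward zero at low SNR, tracing the BPSK-input ABR curve that the 0.0045 dB gap is measured against.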
Our proposed transmission strategy is inspired by a consideration of signal separation in (1) and, consequently, in (2) [3]. Let us envision that the input signal $x$ can be separated into $x_1$ and $x_2$ at the receiver; subsequently, (1) can be re-written as

$y_1 = x_1 + n$, (3)

$y_2 = x_2 + n$, (4)

where $x_1$ and $x_2$ are the separated signals, and $y_1$ and $y_2$ are the corresponding received signals.
As such, instead of (2), the ABR of the system with these separated signals is calculated using

$I = I_1(\mathrm{SNR}_1) + I_2(\mathrm{SNR}_2)$, (5)

where $I$ is the overall mutual information and $I_i$ is the mutual information pertaining to the transmission of $x_i$ for $i = 1, 2$, with the signal-to-noise power ratio (SNR) $\mathrm{SNR}_i = E_i/\sigma^2$ as the argument. Moreover, $E_i$ is the energy of $x_i$.
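To see numerically why the sum in (5) need not equal (2), here is a hedged illustration in which the Gaussian-input capacity formula stands in for each $I_i$ (the paper's $I_i$ are finite-alphabet quantities, so this is only a stand-in, with energy values chosen for illustration): when the energy is split but each separated stream sees the full noise power, the two calculations give different numbers.

```python
import math

def c_awgn(snr):
    """Gaussian-input AWGN capacity, used here only as a stand-in for I_i(SNR_i)."""
    return 0.5 * math.log2(1.0 + snr)

sigma2 = 1.0        # noise power sigma^2
e1, e2 = 1.0, 1.0   # energies E_1, E_2 of x_1 and x_2 (illustrative values)

split = c_awgn(e1 / sigma2) + c_awgn(e2 / sigma2)  # the form of (5)
joint = c_awgn((e1 + e2) / sigma2)                 # the form of (2) with total energy
# split = 1.0, joint = 0.5 * log2(3) ≈ 0.792: the two expressions differ.
```

The point carried over to the paper's setting is only that (5) is a different functional of the SNRs than (2), which is what motivates the approach in the next paragraph.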
Since the result of (5) is not necessarily equal to (2), one can take a different approach to achieving a higher ABR. This work proceeds through (3) and (4), aiming to improve on the capacity achieved by BPSK input, i.e., to close the gap of 0.0045 dB.
Throughout this paper, we use a capital letter to denote a vector in Hamming space and the lowercase letter to indicate its components, e.g., $B = (b_1, b_2, \ldots)$, where $B$ represents the vector and $b_j$ is the $j$th component. We use boldface letters to denote signals in Euclidean space on the two-dimensional complex plane. In the derivations, we use $\hat{x}$ to express the estimate of $x$ at the receiver and $I(\overline{\mathrm{SNR}})$ to express the mutual information with the averaged SNR as the argument.
II. Transmission Scheme
Consider two independent binary source bit sequences, expressed in vector form as $S_i = (s_{i,1}, \ldots, s_{i,K})$, where $s_{i,j}$ is the $j$th information bit (info-bit) of $S_i$, $K$ is the length of each source subsequence, and $i = 1, 2$ indexes the two source bit sequences.
These two source bit sequences are encoded into two channel code words of equal length by using two different channel coding matrices:

$B_i = S_i G_i$, $i = 1, 2$, (6)

where $b_{i,j}$ is the $j$th code-bit of $B_i$, $G_i$ is the code matrix of $B_i$, and $N$ is the length of each channel code word.
The code-bits of $B_1$ and $B_2$ are mapped onto the two signal sets represented by $x_1$ and $x_2$ in (3) and (4), respectively. To simplify the expression of the signal organization, we omit the series indices of the code-bits of $B_1$ and $B_2$ and work on the modulation of $b_1$ and $b_2$ without loss of generality in the following derivations.
First, we define $x_1$ by conventional BPSK in the complex plane to modulate each code-bit in $B_1$:

$x_1 = (2 b_1 - 1)\sqrt{E_s}$, (7)

with $b_1 \in \{0, 1\}$, where $E_s$ is the symbol energy of $x_1$.
For constructing $x_2$, we define a rotation operator that rotates a vector by an angle $\theta$ in the complex plane, e.g.,

$\mathcal{R}(z, \theta) = z e^{j\theta}$, (8)

where $z e^{j\theta}$ is the vector obtained by rotating $z$ by the angle $\theta$. To complete the operator, we note that $\mathcal{R}(z, \theta + 2\pi) = \mathcal{R}(z, \theta)$ and $\mathcal{R}(\mathcal{R}(z, \theta_1), \theta_2) = \mathcal{R}(z, \theta_1 + \theta_2)$ hold.
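The rotation operator and its two stated properties (periodicity in $2\pi$ and composition of angles) can be checked directly; a minimal Python sketch:

```python
import cmath

def rotate(z, theta):
    """R(z, theta): rotate the complex symbol z by the angle theta (radians)."""
    return z * cmath.exp(1j * theta)
```

For example, `rotate(rotate(z, 0.4), 0.5)` equals `rotate(z, 0.9)` up to floating-point error, which is the composition property used implicitly when the receiver later undoes a rotation.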
Each code-bit of $B_2$ is mapped onto a rotation angle as follows: $b_2 = 0$ is mapped onto $\theta_0$ and $b_2 = 1$ is mapped onto $\theta_1$. The signal $x_2$ is defined in vector form in the complex plane as

$x_2 = \mathcal{R}(x_1, \theta(b_2))$, (9)

where $x_2$ is $x_1$ rotated by the angle $\theta(b_2)$.

Since $x_1$ takes two possible values, each value of $b_2$ can be mapped onto two possible points in the complex plane, as shown in Fig. 1, where $s_1$ and $s_2$ are the mapping results of $b_2 = 0$, and $s_3$ and $s_4$ are the results of $b_2 = 1$. We refer to this method as double mapping modulation (DMM), because one value of the code-bit is mapped onto two points in Euclidean space. The DMM is listed in Table I for use in demodulation later.
It is noted that the symbol energy of $x_2$ is also equal to $E_s$.
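Since Table I and Fig. 1 are not reproduced here, the following sketch uses two illustrative angles (hypothetical choices of mine, not the paper's assignments; the names `ES`, `THETA`, `bpsk`, and `dmm` are likewise mine) to show how the double mapping sends each code-bit pair to one of four points of equal symbol energy:

```python
import cmath
import math

ES = 1.0  # symbol energy E_s

# Hypothetical rotation angles standing in for Table I (the paper's actual
# angle assignments are not reproduced here): b2 = 0 -> theta0, b2 = 1 -> theta1.
THETA = {0: math.pi / 4.0, 1: 3.0 * math.pi / 4.0}

def bpsk(b1):
    """x1: conventional BPSK point for code-bit b1."""
    return math.sqrt(ES) * (2 * b1 - 1)

def dmm(b1, b2):
    """x2: the BPSK point x1 rotated by the angle assigned to b2."""
    return bpsk(b1) * cmath.exp(1j * THETA[b2])
```

Because $x_1$ takes two signs, each value of `b2` appears at two constellation points, giving the four points $s_1, \ldots, s_4$ of Fig. 1, all of energy `ES`.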
After the above modulations, we find that $x = x_2$ and that each transmitted symbol $x$ contains one bit from $B_1$ and another bit from $B_2$.
In the proposed method, the transmitter inputs $x$ to the AWGN channel:

$y = x + n$, (10)

where $n$ is the noise in vector form.
At the receiver, all received signals are recorded in storage, because each signal will be used twice: once for the demodulation of $B_2$ and once for $B_1$.
The information recovery proceeds through the inverse of the transmitter's steps, with the demodulation of $x_2$ first:

$\hat{x}_2 = \arg\min_{s \in \{s_1, s_2, s_3, s_4\}} |y - s|^2$, (11)

where $\hat{x}_2$ is the estimate obtained from $y$ for the demodulation of $B_2$. Then $B_2$ is recovered using the conventional decoding scheme.
Once $\hat{B}_2$ has been obtained, the receiver finds each value of $b_2$ by reconstructing $\hat{B}_2$ with the code matrix of (6). Then, using Table I, one can find the rotation angle and recover the BPSK modulation of $B_1$ by

$\hat{y}_1 = \mathcal{R}(y, -\hat{\theta})$, (12)

where $\hat{y}_1$ is the recovered BPSK symbol plus noise, $y$ is the reused signal, and $\hat{\theta}$ is the estimated rotation angle obtained using Table I.
Finally, one can decode $B_1$ with the results of (12) and obtain the recovered $S_1$.
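The two-stage recovery can be sketched as an uncoded, symbol-wise version (a simplification of mine: the paper decodes $B_2$ through the channel code before reconstructing the angle, whereas here a hard nearest-point decision stands in for that step, and the angles are the same hypothetical ones as above):

```python
import cmath
import math

# Hypothetical Table I angles (illustration only): b2 = 0 -> pi/4, b2 = 1 -> 3*pi/4.
THETA = {0: math.pi / 4.0, 1: 3.0 * math.pi / 4.0}

def dmm(b1, b2):
    """Transmit side: BPSK point for b1, rotated by the angle assigned to b2."""
    return (2 * b1 - 1) * cmath.exp(1j * THETA[b2])  # unit symbol energy

# the four constellation points s1..s4 of Fig. 1, keyed by the bit pair
POINTS = {(b1, b2): dmm(b1, b2) for b1 in (0, 1) for b2 in (0, 1)}

def demod_b2(y):
    """Stage 1 (cf. (11)): the nearest of the four DMM points decides b2."""
    (_, b2), _ = min(POINTS.items(), key=lambda kv: abs(y - kv[1]))
    return b2

def demod_b1(y, b2_hat):
    """Stage 2 (cf. (12)): rotate back by the estimated angle, then BPSK sign decision."""
    y1 = y * cmath.exp(-1j * THETA[b2_hat])
    return 1 if y1.real > 0 else 0
```

With a noiseless channel, both bits of every symbol are recovered exactly; the paper's point is that, with coding, errors in the stage-1 decision translate into the dB losses measured in the next section.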
III. Beyond the Capacity of BPSK Input
In this section, we present BER simulations to show the advantage of the proposed signal transmission scheme.
For a given symbol energy $E_s$, the signal $x_1$ achieves exactly the same BER performance as BPSK input when $\hat{B}_2$ is error-free in (12).
Theoretically, according to the Shannon theorem, there exists a channel code of infinite length that can make the error probability of the info-bits of $S_2$ arbitrarily small [2]. Consequently, the error probability of the recovered code-bits $\hat{B}_2$ can also be made arbitrarily small, because of the linear relation between the source and the channel code. Thus, the error-free assumption on $\hat{B}_2$ is justified.
Then, the ABR contribution from $x_2$ increases the overall ABR of the proposed method to a level beyond the channel capacity of conventional BPSK.
From a practical point of view, the gap between the ABR of BPSK and the channel capacity is 0.0045 dB [1]. We therefore need to find a net ABR contribution from $x_2$ in the case where errors present in $\hat{B}_2$ lead to errors in (12).
Let us study the degradation of $x_1$ by simulating the signal communication scheme of Section II exactly. Since the target spectral efficiency is 0.5 bit/s/Hz, we use the DVB-S.2 LDPC code of length 64800 bits with code rate 1/2 for $B_1$, and LDPC codes of rates 1/8 and 1/16, constructed by repeating each code-bit of the rate-1/4 LDPC code $K$ times, with $K = 2$ and $4$, for $B_2$.

Aiming at the target BER, we examine the dB loss of the BPSK transmission of $B_1$ due to the errors in $\hat{B}_2$. The simulation results are shown in Fig. 2, where one finds degradations of 0.1 dB and 0.7 dB in terms of the symbol-energy-to-noise ratio for $K = 2$ and $4$, i.e., code rates 1/8 and 1/16, respectively.
Then, we select the constructed LDPC code of rate 1/16 for $B_2$ and the LDPC code of rate 1/2 for $B_1$ to simulate the scheme of Section II, compare against conventional BPSK using the same LDPC code as $B_1$, and eventually find a 0.052 dB gain, as shown in Fig. 3.
An extension is made by adding the 0.052 dB gain to the 0.0045 dB gap of [1]: the gap between this approach and the channel capacity becomes $G = 0.0045 - 0.052 = -0.0475$ dB, where $G$ is the new gap obtained by the extension; its negative sign indicates performance beyond the channel capacity of BPSK input.
It is noted that the extension showing performance beyond the channel capacity is a conservative estimate, because using the two best existing LDPC codes for both $B_1$ and $B_2$ would yield the largest gain, while the extension is made using the best existing code only for $B_1$. The constructed code of rate 1/16 can be far from the best existing code in BER performance for helping the overall performance of this approach.
In this paper, we proposed a parallel transmission method whose signals can be separated at the receiver, where the separation is enabled by the proposed DMM working through Hamming and Euclidean space. The simulation results show better BER performance in comparison with conventional BPSK, and both the theoretical analysis and the extension of this approach indicate performance beyond the channel capacity of BPSK input.
- [1] S.-Y. Chung, G. D. Forney, T. J. Richardson and R. Urbanke, "On the design of low-density parity-check codes within 0.0045 dB of the Shannon limit," IEEE Communications Letters, vol. 5, no. 2, pp. 58-60, Feb. 2001.
- [2] C. E. Shannon, "A mathematical theory of communication," The Bell System Technical Journal, vol. 27, no. 3, pp. 379-423, July 1948.
- [3] B. Jiao and D. Li, "Double-space-cooperation method for increasing channel capacity," China Communications, vol. 12, no. 12, pp. 76-83, Dec. 2015.
- [4] D. P. Palomar and S. Verdú, "Representation of mutual information via input estimates," IEEE Trans. Inform. Theory, vol. 53, no. 2, pp. 453-470, Feb. 2007.
- [5] K. Sayana, J. Zhuang and K. Stewart, "Short term link performance modeling for ML receivers with mutual information per bit metrics," IEEE GLOBECOM 2008, New Orleans, LA, 2008, pp. 1-6.