I Introduction
As a subclass of permutation modulation [1], index modulation (IM) has recently attracted significant interest [2, 3] due to its feature of “achieving more by doing less”. The central idea of IM lies in the observation that, in addition to encoding information in a signal, one can encode information in the order in which a signal is conveyed in a given domain. The idea of encoding information using permutations or combinations has been applied in several contexts. For example, by using different transmit antennas and channel uniqueness, permutation modulation has been employed in the spatial domain in the form of so-called spatial modulation [4, 5]. Similar ideas have been applied to the medium/channel domain by manipulating the radiation patterns of antennas [6, 7]. Permutation modulation has also been used in the subcarrier index domain in multicarrier systems, such as orthogonal frequency-division multiplexing (OFDM). This approach is commonly referred to as subcarrier-IM or simply IM [8, 9]. Finally, the use of permutation methods in conjunction with different modes in orbital angular momentum transmissions has been studied [10, 11].
To facilitate the use of combinatorial patterns for encoding, a codebook for the mapping between patterns and the source messages (bit sequences) must be specified. Many existing works that study permutation modulation in digital communication systems assume that the number of possible patterns is a power of two [4, 12, 13]. However, such an assumption limits the applicability of permutation modulation, e.g., conventional spatial modulation (with a single active antenna in each transmission period) is only applicable when the number of antennas at the transmitter is a power of two.
Another typical approach that has been studied is to assume that only a subset of all possible patterns contains valid patterns, and the size of the subset is a power of two [14, 15, 9, 16]. However, this approach is not able to utilize the full potential of permutation modulation in terms of data rate, because a certain number of possible permutations that could have been used to carry information are neglected [17]. The study detailed in [18] considers the possibility of using all permutation patterns with uniform probability, but no treatment of how to realize the uniform probability distribution in digital communication systems is given in that work.
To address these issues related to the mapping of source bit sequences to permutation patterns, a few recent contributions have focused on the adaptation of binary Huffman coding [19] for permutation/index codebook design [20, 21, 22, 23, 11, 17]. Here, a bijective mapping between information bit sequences and the permutation/index patterns is constructed with the aid of a full binary tree; patterns are associated with leaves in the tree, and corresponding bit sequences are defined according to a labeling rule (used in the Huffman algorithm) pertaining to the respective paths from each leaf to the root. Importantly, in contrast to conventional application scenarios for source compression where the source symbol distribution is known a priori, the probability distribution of the patterns observed during transmission in permutation modulation systems is dependent upon the binary source [17]. In this sense, the Huffman mapping is applied in permutation modulation schemes in a reversed manner. We adopt the term binary-tree encoding rather than Huffman coding for the bit-to-pattern mapping operation for the remainder of this paper in order to highlight this subtle, but important, difference.
Binary-tree encoding for permutation modulation schemes enables one to choose the probability distribution of the permutation patterns to achieve certain design criteria, e.g., achievable rate maximization [21, 23, 11, 17] or symbol-error rate (SER) minimization [21, 23]. However, existing works along this direction fall short in a number of ways. For example, the support of the (random) patterns, when constrained by full binary tree structures, is discrete. As a result, optimization problems for maximizing achievable rates or minimizing SERs are of mixed-integer form, and an exhaustive search over all admissible probability distributions may be required to find the global optimum. However, the number of admissible distributions has not been characterized in the literature, and thus the complexity of an exhaustive search is not well understood. A common way to reduce optimization complexity that has been treated in the literature is to relax the full-binary-tree constraint on the pattern probability distribution [21, 11]. However, the problem of how to project the relaxed probability distribution to a feasible distribution that satisfies the full-binary-tree constraint remains open. An alternative strategy that has received attention recently has been to focus on high and low signal-to-noise ratio (SNR) regimes. For the limited case of single-active-antenna spatial modulation, analytic forms of the asymptotically optimal probability distributions for the permutation patterns were reported in [21]. A generalization that activates multiple resources per channel use, which is the scenario of interest for multicarrier communication systems such as OFDM-IM [9], is desirable.
In this work, we study the subclass of permutation modulation where out of resources are active during each channel use. We concentrate our investigation on OFDM-IM systems [9], because OFDM-IM is a primary user of the permutation modulation subclass that we study, and any results obtained for full binary trees would be directly applicable to other permutation modulation schemes. Our main goal is to optimize the bit-to-pattern mapping operation and transmit power allocation strategy for achievable rate maximization when channel state information is available at the transmitter. We make the following contributions.

We give a complete and rigorous formulation of the bit-to-pattern mapping problem using the formalism of full binary trees, which covers all admissible pattern probability distributions given a uniform binary source. To this end, we report a new method to generate a reduced set of these trees and establish bounds on the number of trees in this set, which have not been reported in the mathematics or engineering literature to the best of our knowledge.

We formulate a relaxation of the achievable rate optimization problem with pattern probabilities and transmit powers as the optimization variables and give a number of analytic bounds and high/low-SNR asymptotic results that can be used to (approximately) solve the problem.

We propose an efficient, heuristic algorithm that projects a relaxed pattern probability distribution onto the feasible set of distributions that obey the full binary tree constraints, and demonstrate that this method yields an achievable rate that is superior to a conventional OFDM-IM benchmark.
The rest of the paper is organized as follows. In Section II, the basic OFDM-IM model is described, with emphasis placed on the binary-tree encoding operation. Section III explores the fundamental properties of binary trees; in this section, details of the new tree construction algorithm are provided along with a proof of completeness, and bounds on the number of trees of a given size are reported. In Section IV, a relaxation of the achievable rate optimization problem is explained, and several analytic bounds and asymptotic results related to this problem are given. The fully constrained optimization problem is treated in Section V, where the aforementioned heuristic projection algorithm is outlined. A numerical analysis of all results reported in the paper is included in Section VI, and conclusions are drawn in Section VII.
II System Model
II-A Binary-Tree Encoding
Consider a binary sequence , which is conveyed from a maximum entropy source to an OFDMIM encoder. The maximum entropy property of the source implies the sequence elements
are independent, uniformly distributed random variables. The encoder partitions
the sequence into two subsequences and , where and (the exact details of how this partitioning is accomplished are beyond the scope of this paper). One subsequence (say, ) is mapped to a sequence of complex-valued constellation symbols. For example, if 16-QAM is employed, , and each group of bits in is mapped to a QAM symbol. The other subsequence is used to assign the constellation symbols to subcarriers in preparation for transmission. In the IM system considered in this paper, we assume each OFDM symbol vector is comprised of
groups of subcarriers, and subcarriers in each group are active, while the remaining subcarriers are nulled (we will focus on the case where the inequality is strict, since equality corresponds to a conventional OFDM system). In keeping with convention, we use the term subcarrier activation pattern (SAP) to denote a pattern of active subcarriers (out of ). We are interested in system designs that maximize the achievable rate of OFDM-IM; hence, we consider bit-to-SAP mapping strategies that cover the full set of available SAPs. Since there are SAPs, it is generally not possible to construct a fixed-length bit-to-SAP mapping scheme that satisfies this condition. For example, with and , six SAPs are available. By using a fixed-length mapping scheme, it would be possible to map two bits to one of four SAPs, leaving two SAPs unused.
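The counting in this worked example (4 subcarriers per group, 2 active) can be reproduced with a short sketch; Python is used purely for illustration:

```python
from math import comb, floor, log2

# Values from the example in the text: 4 subcarriers per group, 2 active.
n, k = 4, 2
n_saps = comb(n, k)               # number of available subcarrier activation patterns
fixed_bits = floor(log2(n_saps))  # a fixed-length mapping can carry only this many bits...
used_saps = 2 ** fixed_bits       # ...and therefore addresses only a subset of the SAPs
```

Here six SAPs are available, a fixed-length scheme conveys two bits, and two of the six SAPs go unused, exactly as described above.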
To overcome this issue, we employ a variable-length scheme based on full binary trees. A tree is a full binary tree if every node other than the leaf nodes has exactly two children. Every full binary tree comprised of internal nodes has leaves. Considering the total set of node full binary trees (for the rest of the paper, unless explicitly stated otherwise, a node full binary tree is one with internal nodes), the maximum depth of a tree in the set ranges from to .
It is well known that one can map symbols from a source alphabet to uniquely and instantaneously decodable bit sequences using full binary trees. Indeed, this method is employed in the celebrated Huffman source coding algorithm. For the IM system considered herein, we apply a reverse mapping approach, which entails the use of a chosen binary tree to map source bit sequences to SAPs. Each edge of the tree is labeled with a zero or a one, and the tree is constructed such that it has leaves. Each SAP in the set of admissible patterns is associated with a leaf. The bit-to-SAP mappings are determined by tracing the unique path from the root node to each leaf, recording the bit labels for each edge in order along the way. Fig. 1 provides an illustration of this procedure for the example of and .
Similar to Huffman source coding, the use of full binary trees to develop a bit-to-SAP mapping rule ensures each mapping is unique and instantaneously encodable. Uniqueness results from the binary tree structure. Instantaneous encodability simply means that the encoder can map bit sequences to SAPs using the minimum amount of information. To illustrate this point, we can again turn to Fig. 1. Suppose the SAP bit sequence is . Reading left to right and referring to Fig. 1, we see that the encoder only needs to read the first three bits to map them to the SAP associated with the second leaf. The encoder would then read , which also yields a valid SAP (the fifth leaf), and so on. In this example, it is clear that the encoder does not need to interpret long sequences of bits in order to decide upon the correct mapping relating to the first few bits.
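The greedy encoding procedure can be sketched as follows. The codebook below is a hypothetical prefix-free bit-to-SAP mapping with six leaves at depths (2, 2, 3, 3, 3, 3) satisfying the Kraft equality, not the specific labeling of Fig. 1:

```python
# Hypothetical prefix-free codebook derived from a full binary tree:
# each key is the bit labeling of the root-to-leaf path for one SAP.
codebook = {
    "00": "SAP-1", "01": "SAP-2",
    "100": "SAP-3", "101": "SAP-4",
    "110": "SAP-5", "111": "SAP-6",
}

def encode(bits):
    """Greedily map a bit string to a sequence of SAPs (instantaneous encoding):
    as soon as the accumulated prefix matches a leaf, emit its SAP and reset."""
    saps, prefix = [], ""
    for b in bits:
        prefix += b
        if prefix in codebook:          # a leaf has been reached
            saps.append(codebook[prefix])
            prefix = ""
    return saps

# For a maximum entropy source, a leaf at depth l is reached with probability 2^-l.
probs = {sap: 2.0 ** -len(code) for code, sap in codebook.items()}
```

Because the codebook is prefix-free, the encoder never needs to look ahead, mirroring the instantaneous encodability discussed above; the induced SAP probabilities are dyadic and sum to one.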
It is also important to note that every possible two- and three-bit sequence is accounted for in the mapping shown in Fig. 1. This property extends to mappings based on other values of and . For a maximum entropy bit source, each subsequence consisting of bits will appear with probability . Thus, we immediately deduce that an SAP associated with a leaf node at level below the root will be transmitted with probability . This feature of binary-tree encoding imposes a constraint on the system that much of the literature published on this topic to date has largely ignored. In this work, we will exploit the structure imposed by this constraint to develop efficient optimization procedures for OFDM-IM systems.
II-B OFDM Model
Once bit-to-symbol mappings (both constellation and SAP) have been completed, each OFDM symbol vector is processed with an inverse discrete Fourier transform (DFT), and a cyclic prefix of adequate length (to mitigate the effects of channel dispersion) is appended to each time-domain symbol array prior to filtering, upconversion, and transmission. At the receiver, the received signal is downconverted, filtered, and sampled. The cyclic prefix is then removed from each received baseband symbol vector before processing with a DFT. It is well known that this sequence of procedures converts the dispersive channel into a parallel channel, and the signal on each subcarrier is (ideally) free of interference from other subcarriers.
We now formalize the OFDM model. Define . We can uniquely associate each SAP with an index in the set . For each , denote by the set of indices of the subcarriers that are active under pattern , where equality holds when (which corresponds to a conventional OFDM system). The index symbol is randomly distributed over with probabilities , . The channel input-output relationship for subcarrier conditioned on the SAP can be written as
(1) 
where is the complex channel coefficient for subcarrier (with ); the input symbols are zero mean and independent over the subcarriers with being the transmit power on subcarrier for index ; the noise is independent over the subcarriers with . Throughout this paper, we will assume the channel gains are known at the transmitter. We will return to this model in the context of mutual information optimization in Sections IV and V.
III Full Binary Trees
One of the goals of this work is to develop a method of computing the bit-to-SAP mapping that maximizes the achievable rate of an OFDM-IM system. This is equivalent to determining the full binary tree that defines the optimal mapping. To achieve this aim, we will need a method of considering all full binary trees of a given size as well as all SAP-to-leaf assignments. At first glance, this is a complicated problem. The number of node trees is given by the Catalan number
and the number of SAP-to-leaf assignments is . However, it is possible to significantly simplify the problem by making use of symmetry. The important aspect of the mapping is not the exact tree that is chosen, but rather the level of the leaf node to which a given SAP is assigned. As noted in Section II, an SAP assigned to a leaf at level has probability of being transmitted. We can transpose leaf nodes at a given level in any way we wish and still achieve the same SAP probability distribution. This reasoning leads us to consider a smaller set of trees, which we call the reduced set of node full binary trees . Each tree in this set actually corresponds to an automorphism group of the complete set. Moreover, consider a given tree and denote the number of leaves at level by . Due to the symmetry stated above, the number of ways of assigning objects (i.e., SAPs, where ) to the leaf nodes such that we attain a unique probability distribution is
(2) 
which can be considerably smaller than the total permutations. We now give preliminary results on the construction and enumeration of the set , which will be useful in determining systematic optimization procedures and analyzing computational complexity.
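These counts can be sketched numerically. The multinomial form used in `distinct_assignments` below is a hedged reading of (2), inferred from the symmetry argument above (the expression itself is not legible in this copy), so it should be checked against the original:

```python
from math import comb, factorial, prod

def catalan(n):
    """Number of ordered full binary trees with n internal nodes."""
    return comb(2 * n, n) // (n + 1)

def distinct_assignments(leaves_per_level):
    """Number of SAP-to-leaf assignments yielding distinct probability
    distributions, for a tree with leaves_per_level[d] leaves at depth d.
    Permuting leaves within a level leaves the SAP distribution unchanged,
    so assignments are counted up to within-level permutations."""
    n_leaves = sum(leaves_per_level.values())
    return factorial(n_leaves) // prod(factorial(m) for m in leaves_per_level.values())

# Hypothetical tree with 6 leaves: two at depth 2 and four at depth 3.
n_distinct = distinct_assignments({2: 2, 3: 4})
n_raw = factorial(6)   # raw leaf permutations, for comparison
```

For this hypothetical tree there are 15 distinct distributions versus 720 raw permutations, illustrating how much smaller the reduced count can be.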
III-A Construction
In order to choose the best tree for encoding, we require a method of constructing all trees in . The approach we propose is outlined in Algorithm 1, which is valid for . The initial set consists of the single full binary tree with one root and two leaves (at level one). This protograph is recursively appended to trees to obtain the set . The algorithm is presented in a somewhat informal way here for clarity; we formalize it slightly in Appendix A in order to prove the following proposition.
Proposition 1.
Algorithm 1 returns the complete reduced set of full binary trees with internal nodes.
Proof:
See Appendix A in the Supplemental Material. ∎
As an example of the output of Algorithm 1, Fig. 2 shows the sets generated for . Note that the tree shown for the set is the protograph . The number of protographs contained in a graph of is .
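As an illustrative cross-check (a sketch, not the paper's Algorithm 1), one can enumerate the leaf-depth profiles realizable by full binary trees directly; by the symmetry argument of the previous subsection, each profile determines one attainable SAP probability distribution:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def profiles(n_leaves):
    """Leaf-depth multisets (sorted tuples) realizable by full binary trees
    with n_leaves leaves. A tree with more than one leaf splits at the root
    into two full binary subtrees, which shifts every leaf depth by one."""
    if n_leaves == 1:
        return frozenset({(0,)})
    out = set()
    for n_left in range(1, n_leaves):
        for left in profiles(n_left):
            for right in profiles(n_leaves - n_left):
                out.add(tuple(sorted(d + 1 for d in left + right)))
    return frozenset(out)
```

Every profile satisfies the Kraft equality (the depth-l terms 2^-l sum to one), and for four leaves (three internal nodes) the sketch finds two profiles, consistent with the two trees shown for that set in Fig. 2. Whether this profile count coincides with the size of the reduced set for all sizes is an assumption of this sketch.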
III-B Enumeration
As noted above, the number of ordered full binary trees with internal nodes is given by the Catalan number . The reduced set of node full binary trees contains significantly fewer elements. For example, Fig. 2 shows that two trees are contained in ; yet, by considering all orderings of these two trees, we can enumerate five ordered trees ().
Let denote the number of trees in the set . From Algorithm 1, we can infer the relations
(3) 
since each step in the for loop at most doubles the number of elements in . This bound captures the slower exponential growth in the number of trees in the reduced set compared to the set of ordered trees. Numerical results have shown that the bound overestimates the rate of increase in . Published results on full binary trees have attempted to obtain generating functions for the number of trees in unordered, unlabelled sets (see, e.g., [24] and references therein). However, it appears that results on the reduced sets of interest here have not been reported.
It is possible to obtain a tighter bound on by analyzing Algorithm 1. The bound is given as a recurrence relation in the following proposition.
Proposition 2.
The number of trees in is upper bounded by
(4) 
where if is a power of two and otherwise, and the summation is empty when .
Proof:
See Appendix B in the Supplemental Material. ∎
The accuracy of each of the two bounds given above is illustrated for sets of up to twenty internal nodes in Fig. 3. From the figure, we see that the loose bound slightly overestimates the growth rate of . The recursion is exact up to , but slowly diverges for larger , although it clearly remains fairly tight up to . Practically, we will be interested in reasonably small ; hence, the recursion is a useful tool for analyzing the IM systems studied in this paper.
IV Mutual Information Optimization: Relaxation
We now provide details of new results and methods related to the optimization of the mutual information in OFDM-IM systems. As noted in Section II, the SAP probabilities are constrained by the binary tree chosen for encoding. Before we treat these constraints, we will consider the relaxed problem, for which it is assumed that SAPs can be transmitted with any probability. This will give an upper bound on the achievable rate for the constrained system, and we will use the approaches developed herein to treat that case in Section V.
Consider a single set of subcarriers that adhere to the model described in Section II. We collect the received symbols in the vector . Furthermore, we collect the transmitted symbols in the vector , noting that is nonzero only when subcarrier is active, as given by the encoded SAP. Define the SAP probability vector and the power vector . We are interested in the probabilities in and transmit powers in that maximize the mutual information
(5) 
Conditioned on , we assume when . Choosing to be Gaussian is not proven to achieve capacity, but the assumption provides a tractable expression. In this case, the complex random vector
has probability density function (pdf)
(6) 
where
(7) 
with being the complex Gaussian pdf with mean zero and variance . Writing , the optimization problem is formulated as
(8)  
subject to  
Note that the relaxation alluded to earlier manifests in the simple constraint . If we were to consider only probability vectors that adhere to the binarytree encoding methodology, this constraint would be defined differently (see Section V). We now detail several strategies for solving, either approximately or exactly, the optimization problem stated in (8).
IV-A Concavity and Numerical Optimization
The following result can be used to solve (8) numerically.
Lemma 1.
For fixed , the problem
(9)  
subject to  
is concave.
Proof:
See Appendix C in the Supplemental Material. ∎
For the special case where is constant for all and a balanced power distribution is chosen (i.e., for all and ), Lemma 1 leads to the following.
Proposition 3.
When the channel gains and transmit powers are constant across frequency, the optimal SAP probability distribution is uniform.
Proof:
See Appendix D in the Supplemental Material. ∎
More generally, Lemma 1 suggests that it may be economical to solve (8) by employing a block coordinate descent (BCD) approach [25], in which one would alternately maximize the mutual information in either or while keeping the other vector fixed at the previously obtained optimum value. The method requires the constraints of the problem to be convex, which is clearly satisfied. Furthermore, the maximization over each of the vectors and , keeping the other constant, must be unique. Lemma 1 implies this condition is met in part, but it is not clear whether the condition may be violated for the maximization of over for a fixed in some parameterizations of and . Nevertheless, the smoothness of the objective function provides some assurance that a BCD approach will converge to a local extremum.
One may encounter numerical problems when using the BCD technique to solve (8) since, in general, the evaluation of requires highdimensional numerical integration or timeconsuming Monte Carlo methods. In practice, we have found that the BCD method can only be employed to optimize systems with three or four subcarriers per group; larger systems require different approaches.
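A generic sketch of the alternating scheme is given below. The inner per-block solvers (e.g., the concave maximization covered by Lemma 1) are assumed to be supplied by the caller; the toy objective and constant maximizers in the usage example are purely illustrative stand-ins:

```python
def bcd_maximize(f, block_solvers, x0, tol=1e-9, max_iter=200):
    """Block coordinate ascent: cycle through the blocks, replacing each one
    with its conditional maximizer while the others are held fixed, until
    the objective improvement falls below tol."""
    x = list(x0)
    best = f(*x)
    for _ in range(max_iter):
        for i, solve in enumerate(block_solvers):
            x[i] = solve(x)              # conditional argmax for block i
        val = f(*x)
        if val - best < tol:
            break
        best = val
    return x, best

# Toy usage: concave f(p, q) = -(p - 1)^2 - (q - 2)^2, whose conditional
# maximizers happen to be constants (hypothetical, for illustration only).
f = lambda p, q: -(p - 1.0) ** 2 - (q - 2.0) ** 2
x, best = bcd_maximize(f, [lambda x: 1.0, lambda x: 2.0], [0.0, 0.0])
```

In the setting of (8), one block would hold the SAP probabilities and the other the transmit powers, with each inner solver running a concave program rather than returning a constant.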
IV-B A Lower Bound
It is possible to obtain an approximate solution to (8) by considering a lower bound on the mutual information rather than the mutual information itself. The following proposition provides one such bound.
Proposition 4.
For transmit powers and SAP probabilities , satisfies the lower bound
(10) 
where is a diagonal matrix with the th element of the diagonal satisfying
(11) 
Proof:
See Appendix E in the Supplemental Material. ∎
The bound given above is a result of Jensen’s inequality and is, thus, not particularly tight. In fact, a slightly different application of the inequality yields a marginally tighter bound [26, Th. 2]. However, the utility in Proposition 4 is not in the accuracy of the bound, but rather in the ease with which this bound can be optimized over the SAP probabilities. These optimal probabilities are captured in the following proposition.
Proposition 5.
Let with . Suppose is nonsingular, and let , with denoting the element in the th row and th column of . The SAP probabilities that maximize the lower bound given in (10) are given by
(12) 
where .
Proof:
See Appendix F in the Supplemental Material. ∎
Note that the probabilities given in (12) are dependent upon the subcarrier powers. The BCD approach can be employed in a fairly straightforward manner to compute the power values by alternately computing (12) for fixed powers, then fixing these probabilities in (10) and computing the maximizing power values. Alternatively, one can, in theory, substitute (12) into (10) and compute the optimal powers directly. However, the nonlinear form of (12) can cause problems using this approach.
A condition that must be satisfied in order to invoke Proposition 5 is that must be nonsingular. It is possible that this condition is not met, for example when only a single subcarrier in the set of active subcarriers is allocated power. Such cases can typically be dealt with by using other results reported in this section (e.g., the asymptotic results detailed below). In general, we have found that Proposition 5 is applicable to a wide range of system configurations.
IV-C Closed-Form Asymptotics
It is naturally preferable to solve (8) analytically. To make progress in this direction, we apply the following strategy: first, we find the probabilities that maximize the mutual information for any given values of the transmit powers, i.e., the optimal probabilities are functions of the powers; then, the mutual information that corresponds to the optimal probabilities found previously is maximized over the powers in .
IV-C1 Probability Optimization
To obtain a closed-form expression for the optimal SAP probability distribution as a function of the powers, we first resort to a high-SNR analysis, which gives rise to the following result.
Proposition 6.
For fixed powers , let and define the probability values
(13) 
Then, is a lower bound of that is tight in the high SNR regime, i.e., as .
Proof:
See Appendix G in the Supplemental Material. ∎
In addition to a simple, closedform expression for the optimal SAP probability distribution at high SNR, this result also provides an upper bound on the achievable rate, as stated in the following corollary.
Corollary 1.
The function is an upper bound on the maximal mutual information , which is tight as .
Proof:
See Appendix H in the Supplemental Material. ∎
We now turn our attention to the low-SNR case, for which we obtain the following beautifully intuitive result, which is a somewhat discrete version of the well-known waterfilling principle at low SNR.
Proposition 7.
For fixed powers , let , i.e., corresponds to the group of strongest subcarriers. Then, the index probabilities
(14) 
maximize the mutual information at low SNR, which satisfies the asymptotic equivalence , as .
Proof:
See Appendix I in the Supplemental Material. ∎
IV-C2 Power Optimization
Propositions 6 and 7 and Corollary 1 yield closed-form expressions for the mutual information, which depend upon the powers . As a result, these expressions can be used to develop optimal power allocation rules in the high- and low-SNR regimes. It turns out that the optimal rules follow our conventional understanding of power allocation in OFDM systems, as formalized in the following proposition.
Proposition 8.
For high SNR (as ), allocating powers for the subcarriers of each SAP according to the waterfilling strategy is optimal under power constraints for each pattern. For low SNR (as ), allocating powers according to the waterfilling strategy is optimal.
Proof:
See Appendix J in the Supplemental Material. ∎
The waterfilling result for the lowSNR case is somewhat unsurprising given that Proposition 7 indicates the mutual information expression is the same as that for OFDM with only active subcarriers. On the other hand, the optimality of waterfilling at high SNR is not immediately obvious from Corollary 1. These results lead us to a simple mutual information optimization strategy for and at high and low SNR: one should perform waterfilling power allocation for each subcarrier pattern and then compute the corresponding probabilities according to Proposition 6 or 7 and select the result that maximizes the objective.
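A minimal waterfilling routine of the classical form might look as follows. This is a sketch, not the exact per-SAP procedure of Proposition 8: it assumes unit-variance noise by default and finds the water level by bisection on the total-power constraint:

```python
def waterfill(gains, total_power, noise=1.0):
    """Water-filling sketch: allocate p_i = max(0, mu - noise/|h_i|^2), with
    the water level mu chosen by bisection so the powers sum to total_power."""
    inv_snr = [noise / abs(h) ** 2 for h in gains]
    lo, hi = 0.0, max(inv_snr) + total_power
    for _ in range(200):                 # bisection on the water level mu
        mu = 0.5 * (lo + hi)
        if sum(max(0.0, mu - v) for v in inv_snr) > total_power:
            hi = mu
        else:
            lo = mu
    mu = 0.5 * (lo + hi)
    return [max(0.0, mu - v) for v in inv_snr]
```

As expected, equal channel gains yield an equal power split, while a sufficiently weak subcarrier is allocated no power at all, which is the behavior the per-pattern strategy of Proposition 8 would exploit.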
V Achievable Rate Optimization: Constrained
We now consider a more practical rate optimization problem that is effectively the same as (8) but with a nonlinear constraint on the probabilities . As discussed in Sections II and III, SAP probabilities depend on two things: (1) the full binary tree that corresponds to the bit-to-SAP mapping operation, and (2) the ordering of the SAP-to-leaf assignment.
Let denote the set of feasible probability vectors of length that can be constructed by considering all nonredundant SAP-to-leaf assignments for all binary trees in , including null assignments. For example, for , is constructed by considering all mappings of three (out of four) SAPs to two leaves on the second level and one leaf on the first level of the single tree in (cf. Fig. 2). There are mappings of three SAPs to the leaves, and ways of choosing the active SAPs. The inclusion of null assignments in this way ensures we consider the case of not using some SAPs that may correspond to poor channel conditions. Under this definition of , the number of elements (probability vectors) in is
(15) 
where denotes the number of leaves at level in tree . We further define the union , where consists of the vectors with one element equal to one and the rest equal to zero.
The constrained optimization problem can now be formulated as
(16)  
subject to  
We propose two methods of solving this problem here: an enumerative approach, and a projection from the relaxation.
V-A Enumerative Approach
This approach is based on the enumeration of all possible probability distributions of the SAPs, i.e., all . The allocated powers are optimized for each probability distribution . The pair that yields the highest mutual information is the solution to the problem stated in (16).
For a given probability distribution , the power allocation problem may not be solved analytically. In this case, one can invoke the asymptotic results stated in Proposition 8 to obtain the power values. First, the waterfilling power allocation solution would be calculated for each SAP. Then, the distribution that maximizes the mutual information would be chosen.
V-B Projection from the Relaxation
A much more computationally efficient method of treating (16) can be developed by first considering the relaxation studied in the previous section. First, we relax the constraint to find a solution to (8). Any of the approaches used in Section IV can be applied. We let denote the probability distribution computed in this step. We then project onto the feasible vector and take this to be the partial solution to (16). The power allocation vector is then computed to maximize the mutual information.
The projection of the relaxed solution onto a point in the set can be accomplished efficiently by using the Huffman coding algorithm [19]. To this end, we interpret the elements of as source symbol probabilities, then generate a full binary tree according to the Huffman algorithm. As discussed in Section II, SAPs associated with a leaf node in the tree at level will be transmitted with probability . Hence, the probabilities in are replaced with the corresponding probabilities derived from the tree structure to yield a candidate for .
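The core of this projection step can be sketched with a standard Huffman construction. The relaxed distribution below is hypothetical, and the refinements of Algorithm 2 (e.g., the handling of null assignments) are omitted:

```python
import heapq
import itertools

def huffman_depths(probs):
    """Leaf depth (codeword length) of each symbol in a Huffman tree built
    from the relaxed probabilities; reading each depth l as probability 2^-l
    yields the projected dyadic distribution."""
    counter = itertools.count()   # tie-breaker so the heap never compares trees
    heap = [(p, next(counter), ("leaf", i)) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    while len(heap) > 1:          # repeatedly merge the two least likely trees
        p1, _, t1 = heapq.heappop(heap)
        p2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (p1 + p2, next(counter), ("node", t1, t2)))
    depths = [0] * len(probs)
    stack = [(heap[0][2], 0)]
    while stack:                  # walk the finished tree, recording leaf depths
        tree, d = stack.pop()
        if tree[0] == "leaf":
            depths[tree[1]] = d
        else:
            stack.append((tree[1], d + 1))
            stack.append((tree[2], d + 1))
    return depths

relaxed = [0.4, 0.3, 0.2, 0.1]                       # hypothetical relaxed solution
projected = [2.0 ** -d for d in huffman_depths(relaxed)]
```

The projected probabilities are dyadic by construction and sum to one, so they correspond to a valid full-binary-tree assignment as described above.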
It is important to note that this basic approach will only yield trees (and associated probability distributions) with leaves, i.e., the algorithm maps to only. To ensure we consider mappings to all points in , we require a slightly modified approach. The full details of the complete projection algorithm are given in Algorithm 2, and an example depicting how the algorithm works is shown in Fig. 4. The function in Algorithm 2 arranges the set of arguments in decreasing order; performs the inverse mapping (again, acting on elements). The function takes a set of “source probabilities” and returns the corresponding set of depths, or path lengths from the root to a given leaf. Finally, the function computes the distance between the discrete probability distributions and . In the next section, we consider three distance measures: Euclidean distance, for which
(17) 
Kullback–Leibler (KL) divergence, for which
(18) 
and total variation distance, for which
(19) 
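For reference, the three distance measures can be sketched as below. Since the definitions in (17)-(19) are not legible in this copy, the direction shown for the KL divergence is one common convention and should be checked against the original:

```python
import math

def euclidean(p, q):
    """Euclidean distance between two discrete distributions."""
    return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))

def kl_divergence(p, q):
    """KL divergence D(p || q); terms with p_i = 0 contribute zero by
    convention, while q_i = 0 with p_i > 0 would make the divergence infinite."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def total_variation(p, q):
    """Total variation distance, half the L1 distance between p and q."""
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))
```

Any of the three can serve as the distance function in Algorithm 2; as noted in Section VI, the choice makes little practical difference.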
Algorithm 2 is heuristic. It is not guaranteed to produce the solution to (16). However, results have shown it performs very well in practical scenarios (see Section VI).
VI Numerical Results
In this section, we present a numerical analysis of the methods described above. We begin with a discussion of the mutual information. We then give a brief analysis of the error rate of the systems described herein. In what follows, we define , which can be interpreted as the average transmit SNR per subcarrier. All mutual information results are given in units of nats, and all curves were obtained via Monte Carlo sampling when closedform expressions were not available. For all systems, we set and . It should be noted, however, that we also performed extensive simulations for and , and observed very similar trends to the case where and . We have included some of these results in the Supplemental Material (see Appendix K).
VI-A Mutual Information
We begin with a simple case. Consider a system operating in AWGN with equal channel gains (i.e., for all ). Proposition 3 states that the optimal SAP probability distribution in this system is uniform. Adopting this result, Fig. 5 shows the mutual information for an OFDM-IM system that uses all six SAPs (each with probability ) compared with one that limits the number of utilized SAPs to four and applies uniform power allocation. The small improvement offered by the former approach arises simply from the additional SAPs that are used. (One would expect an improvement when finite constellations are used for signalling, rather than Gaussian signals; however, that study is beyond the scope of this paper.) We note, however, that a uniform SAP distribution is infeasible given a uniform binary source, a fact that has typically been ignored in the literature to date.
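The infeasibility of the uniform distribution can be checked mechanically: a full binary tree assigns each leaf a probability 2^(-d), so a target distribution is realizable by a bit-to-SAP mapping only if every probability is a dyadic rational and the Kraft equality holds. A small sketch using exact rational arithmetic (the function name is ours):

```python
from fractions import Fraction

def tree_realizable(probs):
    """True iff a full-binary-tree bit mapping can realize `probs`:
    each probability must be a dyadic rational 2**(-d), and together
    they must satisfy the Kraft equality (sum to one)."""
    if sum(probs) != 1:
        return False
    def is_power_of_two(n):
        return n > 0 and n & (n - 1) == 0
    # A positive rational equals 2**(-d) iff, in lowest terms, its
    # numerator is 1 and its denominator is a power of two.
    return all(p > 0 and p.numerator == 1 and is_power_of_two(p.denominator)
               for p in probs)
```

For six SAPs, the uniform target 1/6 fails the dyadic test, whereas, e.g., the distribution (1/4, 1/4, 1/8, 1/8, 1/8, 1/8) is realizable, which is exactly why the projection of Algorithm 2 is needed.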


To better understand the advantages of utilizing all SAPs together with the binary-tree encoding strategy, we now analyze the case where the channel gains are defined by , for some . We assume full channel knowledge is available at the transmitter, so that the SAP probabilities and power allocation can be optimized. Fig. 5(a) shows the mutual information for , and Fig. 5(b) gives results for . In both figures, the different curves represent different SAP probability assignment strategies and bounds. The first three curves show the mutual information computed using the respective analytic results. The fourth curve (“Projected (Euclidean)”) shows the mutual information attained by employing Algorithm 2. (Curves corresponding to the other distance functions are not shown because they yield results that are nearly identical to the Euclidean case.) In this case, is computed using the analytic form given in Proposition 6. Note that this curve represents an achievable rate for OFDM-IM systems that utilize all SAPs. For all curves other than the “Benchmark”, water-filling power allocation is employed, since this approach is optimal at high and low SNR in the relaxed setting (cf. Proposition 8). The benchmark curve corresponds to a standard OFDM-IM system in which the four SAPs, chosen according to the lexicographic principle discussed in [16], are transmitted with equal probability and uniform power allocation. (Note that the curves that employ water-filling power allocation and SAP probability optimization include the case where only four SAPs may be chosen, which would correspond to the benchmark curve with water-filling applied. Such a selection, if optimal, would arise naturally through the SAP probability optimization procedure. As a result, the true benchmark here utilizes only uniform power allocation.)
In Fig. 5(a), we see that the (relaxed) lower bound of Proposition 6 and the low-SNR result of Proposition 7 are similar, and that convergence to the upper bound of Corollary 1 occurs at high SNR. Moreover, the fully constrained result (where ) denoted by the “” markers is very close to the analytic curves corresponding to the relaxed optimization. The benchmark curve is noticeably lower than all results that offer SAP probability optimization and power allocation. This behavior was also seen in simulations for and systems (not shown). Turning our attention to Fig. 5(b), we see that the advantages offered by optimization diminish under less variable channel conditions. The optimized scenario (“Projected (Euclidean)”) still offers an advantage, which saturates the upper bound at mid-to-high SNR values, but the gain is marginal.
For this simple system ( and ), the results shown in Fig. 6 point to a need to understand how frequency selectivity affects performance. To this end, Fig. 7 illustrates the mutual information as a function of . The first, fourth, and fifth curves correspond to those with the same labels in Fig. 6. The second curve shows the mutual information attained by using the probabilities given in Proposition 5. The third curve (“Projected (Euclidean)”) shows the mutual information attained by employing Algorithm 2, where is computed using Proposition 5. Again, water-filling is used for all systems except the benchmark, where uniform power allocation is applied. The advantages offered by optimization in highly frequency-selective channels are apparent in this example. (Again, similar results were observed for and systems.) Note that one data point is missing from the curves related to Proposition 5. This omission results from the fact that the matrix in Proposition 5 is singular for the corresponding parameterization; hence, for this point, one would choose a different method of obtaining in the initialization step of Algorithm 2. To conclude this discussion, it is important to note that Algorithm 2 achieves roughly the same mutual information promised by the analytic lower bound. The upper bound is only tight at high SNR; hence, it is not particularly tight for most of the values in this figure.
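For reference, the water-filling allocation used for the non-benchmark curves above can be computed by bisection on the water level. This is a standard textbook implementation sketch, not the authors' code; the gain and power arguments are illustrative:

```python
def water_filling(gains, total_power, tol=1e-12):
    """Classic water-filling: allocate p_k = max(0, mu - 1/g_k) to each
    subchannel, with the water level mu chosen by bisection so that
    sum(p_k) equals the total power budget."""
    inv = [1.0 / g for g in gains]          # noise-to-gain levels
    lo, hi = 0.0, max(inv) + total_power    # mu is bracketed by these
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        used = sum(max(0.0, mu - v) for v in inv)
        if used > total_power:
            hi = mu
        else:
            lo = mu
    mu = 0.5 * (lo + hi)
    return [max(0.0, mu - v) for v in inv]
```

As expected, stronger subchannels receive at least as much power as weaker ones, and deeply faded subchannels may receive none, which is consistent with the single-SAP behavior observed in the BLER discussion below.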
VI-B Block Error Rate


Apart from the achievable rate, error performance is another key performance metric of a communication system. It should be apparent that a scheme designed to maximize the achievable rate does not necessarily optimize the error performance. Nevertheless, it is important to consider the effects that the designs detailed in Sections IV and V have on the error rate. Note that, because the lengths of the bit sequences encoded as SAPs and modulated signals are variable, bit errors would be difficult to assess in a standardized manner. Therefore, for brevity and clarity, we choose to evaluate the block-error rate (BLER) instead of the bit-error rate [27, 28, 29]. Here, a block is a group of subcarriers.
For simplicity and optimality, we adopt the maximum likelihood (ML) detection scheme at the receiver to estimate the transmitted signal from the received signal vector , consisting of received modulated symbols and nulls on independent subcarriers. (Independence can be assumed by considering the case where a subcarrier-level block interleaver is employed [30].) We assume that channel knowledge is available at both the transmitter and the receiver. The estimated signal vector satisfies
$\hat{\mathbf{x}} = \arg\min_{\mathbf{x}} \left\| \mathbf{y} - \mathbf{H}\mathbf{x} \right\|^{2}$ (20)
where is a candidate transmit vector (obtained by following the model described in Section II) and is the diagonal channel coefficient matrix. We write the BLER conditioned on the transmitted signal vector as . Averaging over gives our measure of interest: , which is now only dependent on the channel state captured in . This measure is useful for evaluating performance in a slowfading environment. It also allows us to observe how channel variations affect error performance.
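A minimal sketch of the exhaustive ML rule in (20) for a diagonal channel, assuming an explicit list of candidate transmit vectors is available; the symbol names are ours, chosen to match the generic y, h, x of the detection rule:

```python
def ml_detect(y, h, candidates):
    """Exhaustive ML detection over a diagonal channel: return the
    candidate transmit vector x that minimizes ||y - diag(h) x||^2,
    where y, h, and x have complex entries of equal length."""
    def metric(x):
        return sum(abs(yk - hk * xk) ** 2 for yk, hk, xk in zip(y, h, x))
    return min(candidates, key=metric)
```

The candidate set here ranges over every SAP and every constellation point on the active subcarriers, so the search size grows quickly; low-complexity detectors such as [20] exist, but exhaustive search keeps the BLER comparison free of detector-induced suboptimality.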
Following the simulation setting used for the mutual information analysis, we configured the channel gains to be and let be uniformly distributed over , . We also normalized for the noise power and let and as an example. We do not apply rate adaptation in this study; consequently, we employ a uniform power allocation scheme in all simulations related to error performance. This allows us to focus on the effect that binary-tree optimization (i.e., bit-to-SAP optimization) has on performance. We numerically examined the BLER for OFDM-IM with the SAP probability distribution optimized under two conditions. The first condition only requires the number of leaves in the binary tree that defines the bit-to-SAP mapping to be equal to or smaller than the number of SAPs. The second condition restricts the encoder to consider only full binary trees with leaves, which reduces the achievable rate at low SNR. The classic OFDM-IM scheme studied in [9] was adopted as a benchmark, for which we use the lexicographic codebook design to select four out of six SAPs [16]. The numerical results, obtained by collecting block-error events for each SNR point (subject to random additive white Gaussian noise), are presented in Fig. 8.
In Fig. 8, it is apparent that the rate-optimized OFDM-IM systems designed according to the first condition outperform those designed under the second condition. This behavior is consistent with the fact that the first condition is less restrictive than the second. Perhaps more interestingly, we note that the rate-optimized system does not always outperform the classic OFDM-IM scheme. When signals are subject to deep fading, the rate-optimized system designed according to the first condition may utilize only a single SAP consisting of the best subcarriers, in which case the system reduces to OFDM with active subcarriers. Such a system can perform better than those that encode information in the index domain as well as the signal space, since block errors arising from incorrect SAP decoding do not occur.
VII Conclusions
In this paper, we provided a thorough treatment of the rate-optimization problem for OFDM-IM systems with channel knowledge at the transmitter. We cast the problem as one of mapping bit sequences to activation patterns, which enabled us to utilize a binary tree formalism for algorithm development and analysis. To this end, we presented new results on full binary trees, both in terms of algorithmic construction and enumeration. We also reported a number of new analytic bounds and asymptotic results related to the relaxed mutual information optimization problem, where the SAP probabilities can take any values in the interval subject to a sum probability constraint. We then used the results pertaining to the relaxed problem to develop a heuristic algorithm for obtaining a feasible solution to the constrained problem. Numerical results indicate that this solution is nearly optimal (relative to the relaxed upper bound), particularly in the low and high SNR regimes, and that the optimized approach offers a rate advantage over the conventional OFDM-IM benchmark of [9].
A number of open problems remain. First, it is not clear whether an analytic form for the optimal power values exists at all SNR values; only low- and high-SNR results were reported here. In fact, it is not readily apparent that the mutual information is concave in the power vector ; hence, a general analytic form may not be forthcoming. As an alternative, it would be worthwhile to develop an efficient numerical approach to solving the relaxed optimization problem. The use of BCD was briefly discussed here, but further work is needed to determine whether this method would be viable in practice. It is also not known whether the heuristic projection algorithm (Algorithm 2) is, in fact, optimal in some sense. Finally, and more generally, only rate optimization for uniform sources was considered; it would be fruitful to study the BLER-minimization problem as well as nonuniform sources and non-Gaussian signalling (i.e., finite signal constellations).
Acknowledgment
The authors wish to thank Dr. Hachem Yassine and Mr. Jinchuan Tang for their helpful suggestions regarding the proof of Proposition 3 and the structuring of numerical simulations.
References
 [1] D. Slepian, “Permutation modulation,” Proceedings of the IEEE, vol. 53, no. 3, pp. 228–236, Mar 1965.
 [2] E. Basar, M. Wen, R. Mesleh, M. D. Renzo, Y. Xiao, and H. Haas, “Index modulation techniques for nextgeneration wireless networks,” IEEE Access, vol. 5, pp. 16 693–16 746, 2017.
 [3] N. Ishikawa, S. Sugiura, and L. Hanzo, “50 years of permutation, spatial and index modulation: From classic RF to visible light communications and data storage,” IEEE Communications Surveys & Tutorials, vol. 20, no. 3, pp. 1905–1938, thirdquarter 2018.
 [4] R. Y. Mesleh, H. Haas, S. Sinanović, C. W. Ahn, and S. Yun, “Spatial modulation,” IEEE Transactions on Vehicular Technology, vol. 57, no. 4, pp. 2228–2241, July 2008.
 [5] M. D. Renzo, H. Haas, A. Ghrayeb, S. Sugiura, and L. Hanzo, “Spatial modulation for generalized MIMO: Challenges, opportunities, and implementation,” Proceedings of the IEEE, vol. 102, no. 1, pp. 56–103, Jan 2014.
 [6] A. K. Khandani, “Mediabased modulation: A new approach to wireless transmission,” in 2013 IEEE International Symposium on Information Theory, July 2013, pp. 3050–3054.
 [7] E. Basar and I. Altunbas, “Spacetime channel modulation,” IEEE Transactions on Vehicular Technology, vol. 66, no. 8, pp. 7609–7614, Aug 2017.
 [8] D. Tsonev, S. Sinanovic, and H. Haas, “Enhanced subcarrier index modulation (SIM) OFDM,” in IEEE GLOBECOM Workshops (GC Wkshps), Dec 2011, pp. 728–732.
 [9] E. Başar, U. Aygölü, E. Panayırcı, and H. V. Poor, “Orthogonal frequency division multiplexing with index modulation,” IEEE Transactions on Signal Processing, vol. 61, no. 22, pp. 5536–5549, Nov 2013.
 [10] A. E. Willner, “Communication with a twist,” IEEE Spectrum, vol. 53, no. 8, pp. 34–39, August 2016.
 [11] Y. Yang, W. Cheng, W. Zhang, and H. Zhang, “Mode modulation for wireless communications with a twist,” IEEE Transactions on Vehicular Technology, vol. 67, no. 11, pp. 10 704–10 714, Nov 2018.
 [12] Y. Yang and B. Jiao, “Informationguided channelhopping for high data rate wireless communication,” IEEE Communications Letters, vol. 12, no. 4, pp. 225–227, April 2008.
 [13] M. Maleki, H. R. Bahrami, S. Beygi, M. Kafashan, and N. H. Tran, “Space modulation with CSI: Constellation design and performance evaluation,” IEEE Transactions on Vehicular Technology, vol. 62, no. 4, pp. 1623–1634, May 2013.
 [14] A. Younis, N. Serafimovski, R. Mesleh, and H. Haas, “Generalized spatial modulation,” in Conference Record of the Forty Fourth Asilomar Conference on Signals, Systems and Computers, Nov 2010, pp. 1498–1502.
 [15] J. Wang, S. Jia, and J. Song, “Generalized spatial modulation system with multiple active transmit antennas and low complexity detection scheme,” IEEE Transactions on Wireless Communications, vol. 11, no. 4, pp. 1605–1615, April 2012.
 [16] S. Dang, G. Chen, and J. P. Coon, “Lexicographic codebook design for OFDM with index modulation,” IEEE Transactions on Wireless Communications, vol. 17, no. 12, pp. 8373–8387, Dec 2018.
 [17] Y. Liu and J. P. Coon, “Mitigating bitsynchronization errors in Huffmancodingaided index modulation,” accepted by IEEE Communications Letters, Dec 2018.
 [18] M. Wen, X. Cheng, M. Ma, B. Jiao, and H. V. Poor, “On the achievable rate of OFDM with index modulation,” IEEE Transactions on Signal Processing, vol. 64, no. 8, pp. 1919–1932, April 2016.
 [19] D. A. Huffman, “A method for the construction of minimumredundancy codes,” Proceedings of the IRE, vol. 40, no. 9, pp. 1098–1101, Sept 1952.
 [20] A. I. Siddiq, “Low complexity OFDMIM detector by encoding all possible subcarrier activation patterns,” IEEE Communications Letters, vol. 20, no. 3, pp. 446–449, March 2016.
 [21] W. Wang and W. Zhang, “Huffman codingbased adaptive spatial modulation,” IEEE Transactions on Wireless Communications, vol. 16, no. 8, pp. 5090–5101, Aug 2017.
 [22] Z. Hu, J. Liu, and F. Chen, “On the mutual information and Huffman coding for OFDMIM,” in 2nd IEEE Advanced Information Management, Communicates, Electronic and Automation Control Conference (IMCEC), May 2018, pp. 298–302.
 [23] M. I. Kadir, H. Zhang, S. Chen, and L. Hanzo, “Entropy coding aided adaptive subcarrierindex modulated OFDM,” IEEE Access, vol. 6, pp. 7739–7752, 2018.
 [24] F. Murtagh, “Counting dendrograms: a survey,” Discrete Applied Mathematics, vol. 7, no. 2, pp. 191–199, 1984.
 [25] D. P. Bertsekas, Nonlinear Programming. Athena Scientific Belmont, 1999.
 [26] M. F. Huber, T. Bailey, H. DurrantWhyte, and U. D. Hanebeck, “On entropy approximation for Gaussian mixture random vectors,” in Multisensor Fusion and Integration for Intelligent Systems, 2008. MFI 2008. IEEE International Conference on. IEEE, 2008, pp. 181–188.
 [27] S. Dang, J. P. Coon, and G. Chen, “Adaptive OFDM with index modulation for twohop relayassisted networks,” IEEE Transactions on Wireless Communications, vol. 17, no. 3, pp. 1923–1936, Mar. 2018.
 [28] Z. Wang, S. Dang, and D. T. Kennedy, “Multihop index modulationaided OFDM with decodeandforward relaying,” IEEE Access, vol. 6, pp. 26 457–26 468, 2018.
 [29] J. Zhao, S. Dang, and Z. Wang, “Fullduplex relayassisted orthogonal frequencydivision multiplexing with index modulation,” IEEE Systems Journal, pp. 1–12, 2018.
 [30] Y. Xiao, S. Wang, L. Dan, X. Lei, P. Yang, and W. Xiang, “OFDM with interleaved subcarrierindex modulation,” IEEE Communications Letters, vol. 18, no. 8, pp. 1447–1450, 2014.
 [31] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge University Press, 2004.
Appendix A Proof of Proposition 1
Define the mapping for any integers and , such that appends the protograph to the leftmost available node at height , relative to the deepest leaf, of a tree in . Hence, for a tree with maximum depth , connects two edges to the leftmost leaf node in level of . These edges are, in turn, each connected to a leaf node at depth . Note that always maps a tree to a new tree with one additional internal node and one additional leaf, whereas , for , will return the empty set if no height leaves exist in the tree on which acts. Algorithm 1 applies and to each tree in with every step of the for loop. The following lemma guarantees that and generate unique trees.
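Assuming the two mappings applied in Algorithm 1 are those for heights 0 and 1, and representing a tree by its multiset of leaf depths (which is all that matters for the induced probabilities 2^(-depth)), the growth procedure can be sketched as follows. This is our illustrative reconstruction, not the authors' code:

```python
def split_leaf(profile, h):
    """Apply the growth mapping at height h to a tree represented by its
    sorted leaf-depth profile: replace one leaf at height h above the
    deepest level with an internal node carrying two new leaves.
    Returns None in the inadmissible case (no leaf at that height)."""
    depth = max(profile) - h
    if depth not in profile:
        return None
    p = list(profile)
    p.remove(depth)
    p += [depth + 1, depth + 1]
    return tuple(sorted(p))

def grow(n_leaves):
    """Generate the distinct leaf-depth profiles of full binary trees
    with n_leaves leaves by repeatedly applying the height-0 and
    height-1 mappings, starting from the two-leaf tree (1, 1)."""
    trees = {(1, 1)}
    for _ in range(n_leaves - 2):
        new = set()
        for t in trees:
            for h in (0, 1):
                s = split_leaf(t, h)
                if s is not None:
                    new.add(s)
        trees = new
    return trees
```

Every generated profile satisfies the Kraft equality (the depths d give probabilities 2^(-d) summing to one), and for small leaf counts the procedure yields 2, 3, and 5 distinct profiles for 4, 5, and 6 leaves, respectively, consistent with the completeness claimed by Lemma 3.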
Lemma 2.
When applied to elements of , the mappings and yield nonisomorphic trees.
Proof:
The mapping is onetoone. Hence, the image consists of trees. None of these trees are isomorphic, since no trees in are isomorphic. Similarly, where admissible, the mapping is onetoone. In the inadmissible case where returns the empty set, the mapping is manytoone; but this can be ignored, since no tree is generated. Thus, the image consists of at most trees. Again, none of these trees are isomorphic, since no trees in are isomorphic. Furthermore, we deduce that , since every has two leaves at the deepest level and every has more than two leaves at the deepest level. ∎
We now state the following lemma, which concludes the proof.
Lemma 3.
For ,
(21) 
is a complete reduced set of node full binary trees.
Proof:
Let . It is easy to verify (cf. Fig. 2) that and are complete sets. Moreover, Lemma 2 ensures contains no isomorphic trees for . Hence, to prove that is a complete reduced set of full binary trees for , we must show that
(22) 
for and .
Assume the lemma is true for all . Choose . Consider the mapping . We treat several possibilities. If the operation maps to the empty set, the set relation is satisfied since by definition (no tree is generated). On the other hand, if is nonempty and the deepest level of contains exactly two leaves (and hence the same can be said for since ), then we must show that there exists a tree such that . Note that, in this case, has an inverse, and the composition commutes. Thus, we write
(23) 
where, in the second and third equalities, it is understood that operates on the level at height relative to the deepest leaf in . But, by the inductive hypothesis and Lemma 2, we have that . It follows that
(24) 
as required.
Now suppose is nonempty and the deepest level of contains more than two leaves. In this case, we must show that there exists a tree such that . We take a similar approach, recognizing that has an inverse, and the composition commutes. It follows that
(25) 
and, by induction, . Finally, we have that
(26) 
as required. ∎
Appendix B Proof of Proposition 2
The proposition can be seen to hold (with equality) for by explicit construction of . For , consider the mappings given in Appendix A. As noted in the proof of Lemma 2 in that appendix, is onetoone. Thus, . Moreover, from Lemma 2 and the definition of given in Lemma 3, we know that
(27) 
The set can be partitioned into two subsets: one that contains trees mapped to node trees in under , and one that contains trees that do not admit a mapping under . We call trees in the first subset open trees and trees in the second subset closed trees. Noting that , we have
(28) 
To lower bound , we apply the following reasoning. A node closed tree is formed by appending a closed subtree of size, say, internal nodes to a subtree of size . Each set with for some positive integer has exactly one dense closed tree, i.e., a tree in which every level is fully connected to the previous and next levels, and the deepest level has leaves. Thus, for every , we can enumerate closed trees. This is a lower bound, since other combinations of node closed subtrees and node trees exist. Finally, we note that if is one less than a power of two, contains a dense subtree, which gives rise to the parameter stated in the proposition.
Appendix C Proof of Lemma 1
Referring to (9), the equality constraint is affine and the inequality constraints are convex. Now, consider the objective function
(29) 
Hence, we must prove that is concave in . Let us interpret (cf. (6)) as a function of for a fixed . Then the mapping is linear in this context. Furthermore, , where is concave. It is well known that a composition of a concave function and a linear function is concave, and that concavity is preserved under nonnegative weighted integration [31, sec. 3.2]. The weight function here is simply one, so it follows that is concave in .
Appendix D Proof of Proposition 3
Starting from Lemma 1, we form the KKT conditions
(30) 
where and are the Lagrange multipliers. The gradient of the Lagrangian is
(31) 
and the gradient of with respect to can be written as
(32) 
where . Hence, the optimal vector must satisfy
(33) 
where we have used the fact that . Furthermore, since the problem is concave, any , , and that satisfy (33) and the rest of the conditions in (D) are primal and dual optimal, and thus yield the maximum mutual information. Choose for all . In this case, the inequality constraints are inactive, which implies . As a result, (33) is satisfied (along with the rest of the KKT conditions), and thus a uniform SAP probability distribution is optimal.
Appendix E Proof of Proposition 4
Using Jensen’s inequality and considering the concavity of the logarithm, can be lower bounded as
(34) 
where
(35)  
and . The integrals in the expectation can be evaluated, which yields