Efficient Pairing in Unknown Environments: Minimal Observations and TSP-based Optimization

03/23/2022
by   Naoki Fujita, et al.

Generating paired sequences with maximal compatibility from a given set is one of the most important challenges in various applications, including information and communication technologies. However, the number of possible pairings explodes in a double factorial order as a function of the number of entities, manifesting the difficulty of finding the optimal pairing that maximizes the overall reward. Meanwhile, in real-world systems, such as user pairing in non-orthogonal multiple access (NOMA), pairing often needs to be conducted at high speed in dynamically changing environments; hence, efficient recognition of the environment and the finding of high-reward pairings are highly demanded. In this paper, we demonstrate an efficient pairing algorithm that recognizes compatibilities among elements and finds a pairing that yields a high total compatibility. The proposed pairing strategy consists of two phases. The first is the observation phase, in which compatibility information among elements is obtained by observing only the sum of rewards. We show an efficient strategy that allows obtaining all compatibility information with minimal observations. The minimum number of observations under these conditions is also discussed, along with its mathematical proof. The second is the combination phase, by which a pairing with a large total reward is determined heuristically. We transform the pairing problem into a traveling salesman problem (TSP) in a three-layer graph structure, which we call the Pairing-TSP. We demonstrate heuristic algorithms for solving the Pairing-TSP efficiently. This research is expected to be utilized in real-world applications such as NOMA and social networks, among others.


1 Introduction

1.1 Introduction of a pairing problem

Various systems and applications, including information and communication technologies, require combining multiple elements into an array of pairs. The process of partitioning the set of elements into disjoint sets with exactly 2 elements each is called "pairing" in this paper. An example is found in non-orthogonal multiple access (NOMA) in the latest wireless communication systems [1, 2, 3, 4, 5, 6]. In NOMA, multiple terminals share a common frequency band simultaneously, which greatly improves the frequency utilization efficiency. The key process here is user pairing: the base station allocates higher and lower transmission power for communications to the terminals located far from and near the base station, respectively. The terminals then conduct successive interference cancellation (SIC) calculations to extract the original signal. Therefore, determining the combination of user pairing that maximizes the total data rate of all users is critical. However, to the best of the authors' knowledge, optimal pairing algorithms that can work with a large number of users or terminals have not been proposed, even though various pairing algorithms have been proposed in previous studies [7, 8]. When the number of users is 10, the total number of possible pairings is already 945, and the total number grows with double factorial scaling as the number of users increases, as introduced in Sect. III. Therefore, an efficient pairing strategy is indispensable. The importance of pairing is also observed in other situations and applications, such as college admission [9], economics [10], and donor exchange [11], among others [12, 13].

In this paper, we demonstrate a fast pairing algorithm consisting of an efficient recognition of the compatibilities among elements as well as an efficient determination of a pairing that yields high total compatibility. Here, compatibility quantifies the performance of a given pair, a pairing is a set of pairs that covers all elements, and the total compatibility is the summation of the compatibilities of all pairs in a pairing. The optimal pairing maximizes the total compatibility of the system. However, in general, obtaining the globally maximal total compatibility would require an exhaustive search over all pairings. Therefore, a heuristic algorithm is needed to obtain an approximately maximal total compatibility. This study highlights the following two aspects in discussing the pairing problem.

The first point is the time duration required to obtain information about the compatibilities of the system, which we call observation time hereafter. In the absence of prior information about compatibilities, multiple observations are required to infer the compatibility between all elements. Furthermore, we presuppose that we cannot directly measure individual compatibility among elements; only the total compatibility of a certain pairing is observable. The fewer observations, the shorter the overall time required for pairing. More generally, the objective of an algorithm for compatibility observation is to guess as accurately as possible the real compatibilities in as few steps as possible, which is schematically illustrated in Fig. 1(a). In this paper, we demonstrate that by exploiting the inherent structural properties of the pairing problem, which we call exchange rules, the number of observations needed for acquiring all compatibilities is significantly reduced.

The second point is the efficient derivation of the optimal pairing based on the information on compatibility; we call the time required for this process the combining time in this paper. Even if complete information about the compatibilities is available, it may take a considerable amount of time to find the optimal pairing because of the huge number of possible pairings, as schematically represented in Fig. 1(b). In this paper, we transform the derivation of the optimal pairing into a traveling salesman problem (TSP). The TSP is a widely known combinatorial optimization problem of finding the shortest route in a graph for a salesman who visits all vertices via the edges. In addition to the compatibility information, we append two more layers to account for the requirements of the pairing problem; we call the re-formulated pairing problem the Pairing-TSP. Notably, the resulting graph is not fully connected. Once the situation is represented as a TSP, we can benefit from a variety of heuristic algorithms in the literature to efficiently address the combinatorial explosion. Furthermore, this paper proposes a novel heuristic algorithm that differs from conventional algorithms and is suitable for pairing problems.

Regarding the second point discussed above, a related problem is maximum weight matching (MWM). In MWM, the goal is to select edges from a weighted graph so that no two selected edges share a common vertex while the sum of the weights of the selected edges is maximized. This is a combinatorial optimization problem. The pairing problem discussed in this study is a particular case of MWM in which the graph is complete and the number of vertices is even. Several efficient algorithms have been proposed for solving MWM. For example, Gabow [14] proposed an algorithm with a computation time of O(n(m + n log n)); Cygan et al. [15] developed a randomized algorithm with a computation time of O(W n^{omega}) for graphs with integer weights (omega is the exponent of matrix multiplication [16] and W is the maximum integer edge weight); and Duan et al. [17] worked on an approximation algorithm achieving an approximation ratio of (1 - epsilon), with a computation time of O(m epsilon^{-1} log epsilon^{-1}) for arbitrary weights and O(m epsilon^{-1} log W) for integer weights (epsilon is an arbitrary positive value and W is the maximum weight). Here, n and m denote the numbers of vertices and edges of the graph, respectively.

In our paper, we approach the problem from a different perspective and present new methods for the pairing problem. In particular, we formulate it as a TSP. Additionally, to the best of our knowledge, the MWM literature does not consider any approach for obtaining the compatibility information in the first place, which is the main aspect of the observation phase in our manuscript.

Figure 1: Efficient pairing in unknown environments. There are two phases. (a) The first is the observation phase to grasp the compatibility among elements. (b) The second is the combining phase to find a pairing yielding high compatibility.

1.2 Overview of this paper

With a view to the efficient realization of optimal pairing, the present study demonstrates an efficient observation strategy to measure the compatibilities among the entities on the basis of limited information. Furthermore, based on the insight that the optimal pairing problem is transformed into a TSP problem in a three-layer graph structure, we demonstrate heuristic algorithms to find a high-performance pairing that can be applied even when the number of elements is large.

This paper is organized as follows. First, we formulate the pairing problem in Sect. II. Second, for the observation phase, in Sect. III, we show the minimum number of observations needed to infer the complete set of compatibilities and propose an observation algorithm with a computational complexity of the square of the number of elements. Sect. IV examines the combining phase, where we introduce how to convert the pairing problem to a TSP and propose an algorithm for solving the resulting Pairing-TSP. In Sect. V, we numerically evaluate the performances of the combining phase algorithms. Sect. VI concludes the paper.

2 Objective function and constraints

Here we assume that the number of elements is an even natural number N, while the index of each element is a natural number between 1 and N. We define the set of all users as follows:

U = \{1, 2, \ldots, N\}.

Then, we define the set of all possible pairs for U as P:

P = \{(i, j) \mid i, j \in U,\; i < j\}.

The compatibility between the elements i and j is denoted by r_{i,j}. The reward function of a pair is given by its compatibility.

We define a pairing p as a set of N/2 pairs taken from P in which every element of U appears exactly once:

p = \{(i_1, j_1), (i_2, j_2), \ldots, (i_{N/2}, j_{N/2})\} \subset P, \quad \{i_1, j_1, \ldots, i_{N/2}, j_{N/2}\} = U.

Then, R(p), which is called "the total compatibility of pairing p" hereafter, is defined as follows:

R(p) = \sum_{(i,j) \in p} r_{i,j}.

We also define the set of all pairings as \mathcal{P}. The pairing problem discussed in this study is formulated as follows:

\max_{p \in \mathcal{P}} R(p).
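As a concrete illustration of the formulation above, the following short Python sketch (an illustration added for readability, not taken from the paper) enumerates all pairings of a small element set by brute force and evaluates their total compatibilities, which also shows why exhaustive search quickly becomes infeasible for large N.

import random

def all_pairings(elements):
    # Recursively enumerate all pairings (perfect matchings) of an even-sized list.
    if not elements:
        yield []
        return
    first, rest = elements[0], elements[1:]
    for k, partner in enumerate(rest):
        remaining = rest[:k] + rest[k + 1:]
        for sub in all_pairings(remaining):
            yield [(first, partner)] + sub

def total_compatibility(pairing, r):
    # R(p): the sum of the compatibilities r[i][j] over all pairs (i, j) in the pairing p.
    return sum(r[i][j] for i, j in pairing)

N = 6  # for N = 6 the number of pairings is (N - 1)!! = 15
r = [[0.0] * N for _ in range(N)]
for i in range(N):
    for j in range(i + 1, N):
        r[i][j] = r[j][i] = random.uniform(0, 10000)  # symmetric compatibility matrix

pairings = list(all_pairings(list(range(N))))
best = max(pairings, key=lambda p: total_compatibility(p, r))
print(len(pairings), "pairings in total")
print("best pairing:", best, "with R(p) =", total_compatibility(best, r))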

3 Observation Phase

3.1 Exchange Rule

As discussed in the Introduction, we assume that each compatibility r_{i,j} cannot be directly observed, but the total compatibility R(p) is observable. By observing such values for different pairings p, we can recognize all the compatibility information. The number of all available pairings is (N - 1)!!, meaning that the number of necessary observations is at most (N - 1)!!. Here, (N - 1)!! denotes the double factorial of an odd number, defined by (N - 1)!! = (N - 1)(N - 3) \cdots 3 \cdot 1. Therefore, the total number of possible pairings dramatically increases when N becomes large, indicating the importance of efficiently recognizing compatibilities with as few observations as possible. In the following, we prove that the total compatibility of all possible pairings can be calculated based on a limited number of observations, leading to a significant reduction of the required observations.
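The double-factorial growth can be made concrete with a short sketch (illustrative, not from the paper):

from math import prod

def num_pairings(N):
    # (N - 1)!!: the number of ways to partition N elements into unordered pairs.
    return prod(range(N - 1, 0, -2))

for N in (4, 6, 8, 10, 20):
    print(N, num_pairings(N))  # prints 3, 15, 105, 945, 654729075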

To improve the readability of the following discussion, we define the exchange rule as:

E_{(i,j),(k,l)} = (r_{i,j} + r_{k,l}) - (r_{i,k} + r_{j,l}).   (1)

This exchange rule describes the amount of change in the total compatibility between a pairing containing the pairs (i, j) and (k, l) and a pairing containing the pairs (i, k) and (j, l), with all other pairs unchanged. Therefore, each exchange rule can be calculated from two observations: we observe the total compatibilities of two pairings that differ only in these two pairs and calculate the difference. For a large N, there are many sets of pairings corresponding to any given exchange rule, so that finding one exchange rule gives the amount of change between multiple sets of pairings simultaneously.
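To make the exchange rule concrete, the following sketch (the element indices and helper names are illustrative choices, not taken from the paper) checks numerically that an exchange-rule value can be obtained as the difference between two observed total compatibilities.

import random

N = 6
r = {}
for i in range(N):
    for j in range(i + 1, N):
        r[(i, j)] = random.uniform(0, 10000)

def R(pairing):
    # The observable quantity: the total compatibility of a pairing.
    return sum(r[pair] for pair in pairing)

# Two pairings that differ only in how the elements 0, 1, 2, 3 are paired;
# the pair (4, 5) is left untouched, so its contribution cancels out.
p_a = [(0, 1), (2, 3), (4, 5)]
p_b = [(0, 2), (1, 3), (4, 5)]

exchange_from_observations = R(p_a) - R(p_b)
exchange_from_compatibilities = (r[(0, 1)] + r[(2, 3)]) - (r[(0, 2)] + r[(1, 3)])
assert abs(exchange_from_observations - exchange_from_compatibilities) < 1e-9
print(exchange_from_observations)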

3.2 Observation Algorithm

In this part, we propose a simple algorithm whose observation time scales as the square of the number of elements N. As an example, we will use the setting shown in Table 1 to illustrate the proposed observation algorithm. As discussed earlier, we assume that each compatibility r_{i,j} cannot be observed directly. It will prove beneficial not to calculate the original set of compatibilities directly, but to use a derived set of compatibilities with the following two properties: for any given pairing, the total compatibility computed from the derived set equals the total compatibility computed from the original set, and some of the derived values are always equal to 0. If such properties hold, we can calculate any total compatibility via the derived set with a reduced number of observations, instead of via the original compatibilities.

Indeed, we found that such a derived set of compatibilities exists and can be constructed from the original compatibilities. In this construction, the number of non-zero elements is smaller than the number of original compatibilities; that is, the definition reduces the number of non-zero elements. Moreover, when we denote a pairing as p, it follows from the definition of the derived set that the total compatibility computed from the derived set and R(p) are equal for any pairing p.
As a consequence, any exchange rule can be written using either the original or the derived compatibilities while providing the same value. Thanks to this property, the computation can be greatly simplified: when one of the derived compatibilities involved in an exchange rule is zero, the exchange rule directly yields the difference between two of the remaining derived values. That is, we can obtain the difference between two derived elements from a single exchange rule.
In the proposed observation algorithm, the following values (Eqs. (2), (3), and (4)) are obtained from observations;

(2)
(3)
(4)

By definition, the following equations hold:

(5)
(6)
Table 1: The original setting
Table 2: (a) The changes in the horizontal direction (solid arrows) and the changes in the vertical direction (dotted arrows). (b) The values calculated by observation.

Eq. (5) represents the changes along the horizontal direction, and Eq. (6) represents the changes along the vertical direction in Table 2(a). Once one reference value is given, the remaining values are represented using Eqs. (5) and (6). Then, by using Eq. (4), the reference value is determined, and subsequently all values are determined, as summarized in Table 2(b). Each exchange-rule value in Eqs. (2), (3), and (4) can be calculated from only two observations, and the observations can be shared among the exchange rules; the number of observations needed for Eq. (4) is only 1. It follows that the number of observations required by the proposed algorithm scales as the square of the number of elements N.
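As one possible construction satisfying the two properties stated above (identical totals for every pairing and some entries identically zero), the following sketch subtracts per-element offsets whose sum is zero; this construction is an assumption made purely for illustration and is not necessarily the exact definition used in the paper.

import random

N = 6
r = [[0.0] * N for _ in range(N)]
for i in range(N):
    for j in range(i + 1, N):
        r[i][j] = r[j][i] = random.uniform(0, 10000)

# Per-element offsets a[i] with sum(a) == 0, chosen so that g[i][N-1] == 0 for all i < N-1.
last = N - 1
a = [0.0] * N
a[last] = sum(r[i][last] for i in range(last)) / (N - 2)
for i in range(last):
    a[i] = r[i][last] - a[last]
assert abs(sum(a)) < 1e-6

# Derived compatibilities: g[i][j] = r[i][j] - a[i] - a[j].
g = [[r[i][j] - a[i] - a[j] for j in range(N)] for i in range(N)]

# Property 1: every pairing has the same total under r and under g, because a
# pairing uses every element exactly once and the offsets sum to zero.
pairing = [(0, 3), (1, 4), (2, 5)]
R_r = sum(r[i][j] for i, j in pairing)
R_g = sum(g[i][j] for i, j in pairing)
assert abs(R_r - R_g) < 1e-6

# Property 2: the entries involving the last element are identically zero,
# so fewer unknown values remain to be determined by observations.
assert all(abs(g[i][last]) < 1e-6 for i in range(last))
print("totals match:", R_r, R_g)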

3.3 Minimum Number of Observations

We prove the following theorem:

Theorem 1

The minimum number of observations required to know the entire set of compatibilities equals the number of linearly independent pairings, which is quadratic in the number of elements N (N even).

This theorem is based on the idea that if there are a certain number of linearly independent pairings in total, then the required number of observations equals that number. The theorem is proved by the following two-sided argument. First, by design, the derived set of compatibilities preserves the total compatibility obtained from the original compatibilities for all pairings, and the number of independent values in the derived set equals the number of linearly independent pairings; therefore, the minimum number of observations is at most this number. Second, the values indicated in Eqs. (2), (3), and (4) represent linearly independent observables, such that the minimum number of observations is at least this number. For these reasons, the minimum number of observations required to know the complete set of compatibilities equals the number of linearly independent pairings, and it scales as the square of the number of elements N.
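To illustrate the notion of linearly independent pairings used in the argument above, the following brute-force check (an illustration relying on NumPy and a small N, not part of the paper) computes the rank of the matrix whose rows are the indicator vectors of all pairings, i.e., the number of linearly independent observations that total-compatibility measurements can ever provide, and shows that it is far smaller than the total number of pairings.

import numpy as np
from itertools import combinations

def all_pairings(elements):
    # Enumerate all pairings (perfect matchings) of an even-sized sorted list.
    if not elements:
        yield []
        return
    first, rest = elements[0], elements[1:]
    for k, partner in enumerate(rest):
        for sub in all_pairings(rest[:k] + rest[k + 1:]):
            yield [(first, partner)] + sub

N = 8
edges = list(combinations(range(N), 2))   # all unordered pairs (i, j) with i < j
edge_index = {e: k for k, e in enumerate(edges)}

# Each row is the indicator vector of one pairing over the N(N-1)/2 possible pairs.
rows = []
for p in all_pairings(list(range(N))):
    v = np.zeros(len(edges))
    for pair in p:
        v[edge_index[pair]] = 1.0
    rows.append(v)
M = np.vstack(rows)

print("pairings:", M.shape[0])                      # (N - 1)!! rows
print("compatibilities:", len(edges))               # N(N - 1)/2 unknowns
print("independent observations:", np.linalg.matrix_rank(M))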

4 Combining Algorithm

Based on the compatibility information obtained by the observation algorithm, we can compute the total compatibility of any possible pairing. However, as discussed in the Introduction, the number of pairings scales up very quickly as a function of N. In this section we re-formulate the pairing problem as a traveling salesman problem (TSP) to realize an efficient combining algorithm.

4.1 Traveling Salesman Problem

The TSP concerns finding the route that minimizes the total cost of traveling to a given set of locations, with the cost between each pair of locations given. The salesman starts the tour from a starting node and visits all other nodes exactly once before returning to the starting node. The complexity of the TSP stems from the large number of possible routes, which scales up very quickly with the number of nodes, such that brute-force consideration of all possible routes is too costly in general.

4.2 Solving Pairing Problem as a TSP: Pairing-TSP

Figure 2: The path of the traveling salesman problem in the three-layer graph structure (Pairing-TSP) corresponds to the pairing problem. An example case with the number of elements N = 6 is illustrated. The first and second layers have N nodes each, and the third layer has N/2 nodes. All nodes in the first layer are connected with each other. Each node in the second layer is connected to a distinct node in the first layer and to all nodes in the third layer. By constructing such a three-layer graph structure, the solution to the corresponding TSP provides a pairing yielding high compatibility.

In this study, we transform the problem of heuristically finding the pairing with a large total compatibility into a TSP with a three-layer network structure, which is schematically shown in Fig. 2. We call the re-formulated problem Pairing-TSP.

In this Pairing-TSP, we arrange the first and the second layers to have N nodes each, while the third layer is configured with N/2 nodes. Let the nodes of the first and the second layers be indexed with natural numbers ranging from 1 to N. In the first layer, the cost of the route between the nodes i and j is given by the negated compatibility -r_{i,j}. There is a one-to-one correspondence between the nodes in the first layer and the nodes in the second layer; in other words, there is a unique link between each node in the first layer and the corresponding node in the second layer. As the other links between the first and second layers are not permitted, the Pairing-TSP results in a non-complete graph.

Finally, the third layer consists of N/2 nodes, N being even. Here, the nodes in the second layer and the nodes in the third layer are fully connected; that is, every node in the second layer is connected with all N/2 nodes in the third layer. Note that the cost of all routes except the intra-first-layer links is set to zero. Nevertheless, remember that the salesman must visit all nodes in the second and the third layers too, not just the first layer. We now demonstrate that the solution of such a Pairing-TSP corresponds to the solution of pairing by noticing the following two inherent constraints.
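As a sketch of the three-layer construction described above (the node numbering is a choice made for this illustration, not the paper's), the edge costs of the Pairing-TSP graph can be assembled as follows, with the negated compatibilities on the intra-first-layer edges and zero cost everywhere else.

def build_pairing_tsp_costs(r):
    # Edge costs of the three-layer Pairing-TSP graph.
    #   First layer : nodes 0 .. N-1 (the elements), fully connected, cost -r[i][j].
    #   Second layer: nodes N .. 2N-1; node N+i is linked only to first-layer node i
    #                 and to every third-layer node, both with cost 0.
    #   Third layer : nodes 2N .. 2N + N/2 - 1, fully connected to the second layer.
    N = len(r)
    cost = {}
    for i in range(N):                     # intra-first-layer edges carry the pairing reward
        for j in range(i + 1, N):
            cost[(i, j)] = -r[i][j]
    for i in range(N):                     # unique first-to-second-layer links
        cost[(i, N + i)] = 0.0
    for i in range(N):                     # full second-to-third-layer connections
        for k in range(N // 2):
            cost[(N + i, 2 * N + k)] = 0.0
    return cost                            # absent keys mean "no edge" (non-complete graph)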

Figure 3: A solution to the TSP problem in the three-layer graph structure corresponds to a pairing. This can be explained via forbidden routes illustrated in the following two examples. (a) A route that goes from the third layer to the third layer by passing through the second layer cannot be included in the TSP solution. (b) A route that visits three nodes of the first layer consecutively cannot be included in the TSP solution.

First, consider a route that goes from a node in the third layer to a node in the second layer and then goes back to the third layer, as shown by the red lines in Fig. 3(a). Such a route fragment cannot be included in the solution of the TSP. Each node in the third layer can be connected to at most 2 nodes in the second layer. Therefore, if different nodes in the third layer are connected to the same node in the second layer, there will be at least one node in the second layer that cannot be connected to the third layer. For these reasons, a route fragment such as the red lines in Fig. 3(a) is forbidden.

Secondly, a route that visits three nodes of the first layer consecutively, shown by the thick red lines in Fig. 3(b), cannot be included in the solution of the TSP. The reason is that if such connections existed, the forbidden configuration of Fig. 3(a) would have to appear somewhere else in the route. Therefore, by construction, the salesman never visits three consecutive nodes in the first layer; instead, after visiting two nodes in the first layer, the salesman always moves to the second layer. Consequently, in the solution of the Pairing-TSP, the salesman repeatedly visits two nodes in the first layer consecutively and then returns to the first layer via the second and the third layers. When the connection between the nodes i and j in the first layer is included in the solution of the Pairing-TSP, we consider that the elements i and j are paired.

Since the summation of the cost along the route of a solution of Pairing-TSP and the total compatibility of the corresponding pairing are opposite in sign, minimizing the cost of Pairing-TSP is equivalent to maximizing the total compatibility by appropriate pairing construction. For those reasons, we can guarantee the correspondence between the original pairing problem and Pairing-TSP.

4.3 Pairing-Nearest Neighbor Method (PNN)

In solving Pairing-TSP, we propose two algorithms on the basis of existing algorithms for the general TSP.

0:  Array indexes start at 1
1:  input: C (C is the compatibility matrix, whose element c_{i,j} stores compatibility r_{i,j})
2:  V ← empty array (V stores the nodes the salesman visits, and V[t] denotes the node which the salesman visits t-th)
3:  s ← start point in the first layer
4:  V[1] ← s
5:  V[5N/2] ← adjacent node of s in the second layer
6:  t ← 1
7:  while t < 5N/2 - 1 do
8:     if t mod 5 = 1 then
9:        V[t+1] ← nearest unvisited adjacent node of V[t] in the first layer (if there are multiple nearest adjacent nodes, the salesman chooses among them with the same probability)
10:     else if t mod 5 = 2 then
11:        V[t+1] ← adjacent node of V[t] in the second layer
12:     else if t mod 5 = 3 then
13:        V[t+1] ← unvisited adjacent node of V[t] in the third layer chosen with the same probabilities
14:     else if t mod 5 = 4 then
15:        V[t+1] ← unvisited adjacent node of V[t] in the second layer chosen with the same probabilities
16:     else if t mod 5 = 0 then
17:        V[t+1] ← adjacent node of V[t] in the first layer
18:     end if
19:     mark V[t+1] as visited
20:     t ← t + 1
21:  end while
22:  return V
Algorithm 1 Pairing-Nearest Neighbor Method (PNN)

The first one is what we call the pairing-nearest-neighbor method, referred to as PNN in short hereafter. PNN is a modification of the nearest neighbor method, an algorithm that repeatedly visits the nearest unvisited node from the current node [18]. As discussed in Sect. IV.B, since a solution of the Pairing-TSP does not allow three or more consecutive node visits in the first layer, the salesman needs to go through the second, the third, and again the second layer before coming back to the first layer. If there are multiple least-cost routes to the next node, one of them is chosen randomly with equal probability. This algorithm obtains an estimated solution with a computational complexity on the order of the square of the number of elements. A pseudo-code of PNN is summarized in Algorithm 1. Herein, V denotes the route of the salesman, and t represents the time step of the salesman. Note that there are in total 5N/2 vertices through which the salesman travels. Lines 4 and 5 specify the start and the end node, respectively. The time step t indicates where the salesman is in the three-layer structure as well as whether he or she is moving downward or upward through the layers. There are five kinds of possible movements of the salesman: (1) move from the 1st layer to the 1st layer; (2) move from the 1st layer to the 2nd layer; (3) move from the 2nd layer to the 3rd layer; (4) move from the 3rd layer to the 2nd layer; (5) move from the 2nd layer to the 1st layer. In all cases, the destination is chosen only from the unvisited nodes. In this manner, duplicate visits to any node are avoided.

4.4 Pairing 2-opt Method (P2-opt)

The second algorithm is what we call the pairing 2-opt method, referred to as P2-opt hereafter, which is a modification of the 2-opt method [19] used to update the initial solution derived by PNN.

Figure 4: Example of P2-opt reconnection. The red numbers overlaid on the connections indicate the costs of the connections. At each check, two pairs (or two connections) are considered while all other pairs (or connections) remain the same. Starting from the current pairing, we first examine the reconnections concerning the first and the second pairs: the current connection and the two alternative connections are compared, and since the current connection gives the smallest-cost route, the reconnection is not applied. Second, we examine the reconnections concerning the first and the third pairs; the reconnection is again not adopted because the current connection is the smallest-cost route. Third, the reconnections concerning the second and the third pairs are investigated, and here the reconnection is applied since it yields the minimum-cost route. In this case, the number of checks (NOC) is three.
0:  Array indexes start at 1
1:  input: p (p stores which nodes are paired; p[2k-1] and p[2k] are paired for each positive integer k)
2:  input: C (C is the compatibility matrix, whose element c_{i,j} stores compatibility r_{i,j})
3:  input: L (exchange limit)
4:  l ← 0
5:  while l < L do
6:     for i ← 1 to N/2 - 1 do
7:        for j ← i + 1 to N/2 do
8:           r_0 ← c_{p[2i-1],p[2i]} + c_{p[2j-1],p[2j]}
9:           r_1 ← c_{p[2i-1],p[2j-1]} + c_{p[2i],p[2j]}
10:           r_2 ← c_{p[2i-1],p[2j]} + c_{p[2i],p[2j-1]}
11:           if r_1 > r_0 and r_1 ≥ r_2 then
12:              tmp ← p[2i]
13:              p[2i] ← p[2j-1]
14:              p[2j-1] ← tmp
15:              l ← l + 1
16:              break all for-loops
17:           else if r_2 > r_0 then
18:              tmp ← p[2i]
19:              p[2i] ← p[2j]
20:              p[2j] ← tmp
21:              l ← l + 1
22:              break all for-loops
23:           end if
24:           if i = N/2 - 1 and j = N/2 then
25:              return p
26:           end if
27:        end for
28:     end for
29:  end while
30:  return p
Algorithm 2 Pairing 2-opt Method (P2-opt)

The original 2-opt method compares the original route with one alternative route and updates the current solution by reconnecting some of the nodes so that the total cost decreases [19]. In contrast, in the Pairing-TSP, there are three possible combinations for the 4 nodes contained in 2 given pairs. Therefore, the proposed P2-opt compares the costs of three routes. If the compatibility is not improved by recombining any of the pairs, the algorithm terminates. Fig. 4 illustrates the reconnection procedure of the proposed P2-opt with an example pairing. A pseudo-code of P2-opt is shown in Algorithm 2. The three alternatives are represented by lines 8 to 10. In P2-opt, the rewiring is considered only within the first layer among these three alternatives. This rewiring never introduces duplicate visits. Note that the connections involving the 2nd and 3rd layers have zero cost for the salesman. Therefore, any rewired route in the first layer, i.e., a pairing, provides a valid route for the salesman in the three-layer graph structure.
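A condensed sketch of the P2-opt local search described above, operating directly on the list of pairs (again an illustrative reading of Algorithm 2; the variable names and the bookkeeping of the exchange limit are choices of this sketch):

def p2opt(pairing, r, exchange_limit):
    # Pairing 2-opt sketch. r is assumed to be a symmetric compatibility matrix.
    # For every two pairs, compare the current combination with the two
    # alternative recombinations of their four nodes; apply the best improving
    # rewiring, restart the round-robin scan, and stop when no exchange
    # improves the total compatibility or when the exchange limit is reached.
    pairs = [list(p) for p in pairing]
    exchanges = 0
    improved = True
    while improved and exchanges < exchange_limit:
        improved = False
        for x in range(len(pairs) - 1):
            for y in range(x + 1, len(pairs)):
                (a, b), (c, d) = pairs[x], pairs[y]
                current = r[a][b] + r[c][d]
                alt1 = r[a][c] + r[b][d]        # re-pair as (a, c) and (b, d)
                alt2 = r[a][d] + r[b][c]        # re-pair as (a, d) and (b, c)
                best = max(current, alt1, alt2)
                if best > current:
                    if best == alt1:
                        pairs[x], pairs[y] = [a, c], [b, d]
                    else:
                        pairs[x], pairs[y] = [a, d], [b, c]
                    exchanges += 1
                    improved = True
                    break                       # restart the scan after a rewiring
            if improved:
                break
    return [tuple(p) for p in pairs]

Starting from a PNN result, for example p2opt(pnn(r), r, exchange_limit=600), mirrors the PNN-and-P2-opt pipeline evaluated in Sect. V.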

5 Simulation

5.1 Problem Setting

We constructed the compatibility set by generating uniform random numbers between 0 and 10000. A total of 100 different sets were generated for each setting, and the average over the different settings was examined. Note that each set of compatibilities used here is reconstructed through the observation algorithm, based on the construction of the derived compatibilities described in Sect. III. We want to compare the performance of PNN versus random pairing, evaluate how much P2-opt can improve a solution found by PNN through additional rewiring steps, and examine how the performance gain depends on the number of rewirings introduced in Sect. IV.

5.2 Performance Indicator for the Derived Pairing

Let R_c be the total compatibility that corresponds to the pairing derived through the combining algorithm. The larger R_c is and the closer it is to the global maximum, the better. To quantify the performance of the combining algorithm in terms of how far R_c is from the maximum, we define the performance indicator F with the following formula:

F = \frac{R_c - (N/2)\, r_{\min}}{(N/2)\, r_{\max} - (N/2)\, r_{\min}},   (7)

where N is the number of nodes in the first layer, r_{\max} is the upper limit value of the compatibilities, and r_{\min} is the lower limit value. In this simulation, r_{\max} = 10000 and r_{\min} = 0. F ranges from 0 to 1 and represents the relative distance of the current pairing from the theoretical minimum and maximum possible values of the total compatibility, 0 corresponding to the absolute worst and 1 to the absolute best pairing.

5.3 Performance of PNN and P2-opt

We conducted a performance comparison among (a) No-Strategy, (b) PNN, and (c) PNN and P2-opt as a function of the number of elements N from 100 to 1000, as summarized in Fig. 5. Herein, the exchange limit L was fixed to 600. "No-Strategy" indicates random selection of the route in the first layer. "PNN and P2-opt" means that we obtain an initial solution by PNN and update the solution by P2-opt.

The performance F of No-Strategy is roughly 0.5 regardless of N, as expected from the definition of F in Eq. (7). The pairing of the proposed strategies, PNN and P2-opt, reaches a performance index greater than 0.9. Furthermore, we can confirm that P2-opt processing enhances the solution of PNN. The standard deviation tends to be smaller for (c) PNN and P2-opt, (b) PNN, and (a) No-Strategy, in that order. Regarding the relationship between the number of elements N and the performance, the performance of both (b) PNN and (c) PNN and P2-opt improves as the number of elements increases. The standard deviation tends to decrease for all three methods as the number of elements increases.

Figure 5: Performance comparison by the index F among (a) No-Strategy, (b) PNN, and (c) PNN and P2-opt methods as a function of the number of elements N. We can observe that PNN greatly improves the performance, and P2-opt provides additional enhancements.

5.4 Effect of P2-opt

As described in Sect. IV.D, P2-opt aims at reducing the total cost of a TSP route by locally exchanging connections. To examine the effect of such exchanges, here we set an upper limit on the number of exchanges in P2-opt, which we call the P2-opt exchange limit and denote by L. Fig. 6 shows the evolution of F as a function of L for numbers of elements N from 100 to 1000 in intervals of 100, each point representing the average over 100 different compatibility sets. This result highlights two trends: first, F saturates beyond a certain exchange limit; second, as N increases, increasing L improves the performance up to a new saturation level. Indeed, for small N the performance reaches its maximum value at a small exchange limit, whereas F keeps increasing monotonically up to larger values of L as N grows. These observations demonstrate that a sufficient exchange limit exists depending on the number of first-layer nodes of the given problem.

Figure 6: Performance evaluation of the proposed P2-opt algorithm, i.e., rewiring of connections in the first layer of the Pairing-TSP. The performance F increases as a function of the exchange limit L in the P2-opt algorithm. The colors indicate the different numbers of elements N ranging from 100 to 1000.

5.5 Number of checks of P2-opt

In the P2-opt algorithm, two pairs of the current pairing are compared at every turn, and the nodes are reconnected if the rewiring improves the total compatibility (Fig. 4). Here, the order in which the pairs are checked is round-robin, meaning that each time a pair is reconnected, the pairs are rechecked from the beginning. Therefore, there is a possibility of double-checking, meaning that certain reconnections are re-calculated. That is to say, there is room for further accelerating the algorithm by reducing the number of checks.

In the meantime, the computational cost of the P2-opt algorithm is represented by how often compatible pairs are compared, which we call the number of checks (NOC). The circular marks and their associated error bars in Fig. 7 represent the mean and the standard deviation of the NOC, respectively, when the number of elements N ranges from 100 to 2000 in intervals of 100. For each N, 100 different compatibility sets were examined. The exchange limit L was set to 600 regardless of N. However, when P2-opt reaches a locally maximal pairing, the algorithm terminates, and the total number of exchanges is then actually less than L.

Figure 7: Evaluation of the computational cost in P2-opt. The total number of checks (NOC), which quantifies how often compatible pairs are compared, is evaluated as a function of the number of elements N. 100 different compatibility sets were examined for each N. The graph shows the average and standard deviation.
Figure 8: Analysis of the underlying mechanism of P2-opt. The number of actually conducted checks per exchange loop is evaluated as the algorithm progresses. Here, the exchange limit L is set to 600. For each N, the graph shows the average value over 100 different compatibility sets.

From Fig. 7, we can observe several trends. Clearly, the NOC increases as the number of elements increases. However, the slope flattens when the number of elements is greater than approximately 1200. Furthermore, the standard deviation gets larger when the number of elements goes beyond 1200.
To examine the inherent mechanisms behind such tendencies, we analyzed the time evolution of the NOC per exchange loop. The curves in Fig. 8 represent the evolution over exchange loops of the NOC for compatibility settings whose number of elements ranges from 100 to 1200 in intervals of 100, averaged over 100 different compatibility sets for each setting. The P2-opt exchange limit was fixed at 600. From Fig. 8, we observe that the average NOC initially increases as the number of exchange loops elapses. Initially, almost any rewiring may improve the total compatibility; hence, the NOC per exchange loop is small. As the number of exchanges increases, a rewiring may not necessarily improve the total compatibility because the calculated route may already be a good solution. Therefore, the NOC until an actual rewiring happens increases. Beyond a certain point, the calculated route has a relatively low cost; therefore, the NOC grows until P2-opt has converged, and it becomes 0 once P2-opt has converged. In Fig. 8, 100 trials were simulated and averaged for each N, such that the NOC gradually decreases after some point because the number of converged trials steadily increases.

Indeed, for small N, the NOC becomes almost zero well before the exchange loop count reaches 300. For intermediate N, the NOC becomes very small when the total number of exchanges approaches 600. For the largest N shown, however, the NOC remains large when the total number of exchanges reaches 600; that is to say, the search for a better solution may be insufficient. Such an observation is consistent with the change of the slope in Fig. 7 at around N = 1200. In other words, when N is small, the variance is small because a sufficiently low-cost route has already been obtained, whereas when N is greater than 1000, the exchange limit L is insufficient, so the variance becomes large and the slope of the NOC against N flattens.

5.6 Comparison of computational costs

In this section, we discuss the computational complexity of each method. First, the total number of possible pairings is (N - 1)!!. Therefore, the computational complexity of exhaustive enumeration is of the order of (N - 1)!!, and the number of observations required is also of that order. On the other hand, the number of observations needed for the proposed observation algorithm scales as the square of the number of elements, and the computational complexity of the proposed combining algorithm is on the order of the square of the number of elements for PNN and, for P2-opt, at most proportional to the exchange limit times the number of pair combinations checked per loop.

6 Conclusion

In this study, we propose an algorithm for efficiently and heuristically determining a pairing that provides large total compatibility among entities, a process that lies at the heart of some of the latest information and communication technologies, such as non-orthogonal multiple access (NOMA) in wireless networks and matching problems in economics, among others. We identify two main phases to optimize the pairing: observation and combination. One of the main hypotheses of this study is that one can only observe the total compatibility for any given pairing. In the meantime, the number of all possible pairings grows as (N - 1)!!, where N is the number of entities. Therefore, efficient strategies to measure the compatibility among elements are essential. We demonstrate that the minimum number of observations needed to know the complete set of compatibilities is smaller than the total number of combinations of this set. This finding does not depend on the combining phase. Also, by exploiting the exchange relationships inherent in the problem, we propose an efficient algorithm, scaling as the square of the number of elements, to observe all compatibilities among elements. In the combining phase, we demonstrate that the derivation of the best pairing is equivalent to solving a traveling salesman problem (TSP) in a three-layer graph structure, which we name Pairing-TSP. We propose two heuristic approaches to efficiently solve the Pairing-TSP: the pairing-nearest-neighbor (PNN) and the pairing 2-opt (P2-opt) methods, both of which exploit unique characteristics inherent in the architecture of the Pairing-TSP. Numerical simulations confirm the principles of the algorithms. In summary, the present study first proposed an algorithm to estimate the compatibility among elements only via the total compatibility with minimal observations. Then, through the insight that the pairing problem is equivalent to solving a special class of TSPs, we demonstrated heuristic methods to accomplish pairing efficiently. We consider that the contents herein contribute to achieving more efficient pairing than conventional methods, especially for the case of a large number of users in NOMA systems, as well as in other pairing applications. We expect our findings to be applicable also to social systems such as social networking services and education.

7 Effect of Initial Node in PNN

In the PNN, traveling starts from a node in the first layer. Here we examined the effect of the starting node on the resultant pairing performance. More specifically, we analyzed the standard deviation of the performance indicator F defined in Sect. V.B while changing the starting node through all nodes in the first layer. In the simulations, N was varied from 100 to 1000 with an interval of 100, and 100 types of compatibility sets were prepared for each N. We calculated the standard deviation over the initial nodes for each of the 100 compatibility sets and then averaged the 100 standard deviations for each N.

Figure 9: The standard deviation of the performance for each method as a function of N when the initial point is changed. N is set from 100 to 1000, and 100 types of compatibility sets are prepared for each N.

The red, green, and blue circular marks in Fig. 9 show the standard deviation of the performance indicator as a function of the number of elements N when the pairing was conducted with the completely random strategy (No-Strategy), PNN, and PNN and P2-opt, respectively. We can observe that the standard deviation decreases as the number of elements increases for all methods. In particular, the dependence of the performance on the initial node is smaller for PNN and P2-opt than for No-Strategy and for PNN alone. Since the maximum standard deviation remains small in the cases of PNN and of PNN and P2-opt, we can conclude that the initial node selection in PNN has a negligible effect on the resultant pairing quality.

References

  • [1] M. Aldababsa, M. Toka, S. Gökçeli, G. K. Kurt, and O. Kucur, "A tutorial on nonorthogonal multiple access for 5G and beyond," Wirel. Commun. Mob. Comput., vol. 2018, 2018.
  • [2] Z. Ding, P. Fan, and H. V. Poor, "Impact of user pairing on 5G nonorthogonal multiple-access downlink transmissions," IEEE Trans. Veh. Technol., vol. 65, no. 8, pp. 6010-6023, 2016.
  • [3] L. Chen, L. Ma, and Y. Xu, "Proportional Fairness-Based User Pairing and Power Allocation Algorithm for Non-Orthogonal Multiple Access System," IEEE Access, vol. 7, pp. 19602-19615, 2019.
  • [4] Z. Ali, W. U. Khan, A. Ihsan, O. Waqar, G. A. S. Sidhu, and N. Kumar, "Optimizing Resource Allocation for 6G NOMA-Enabled Cooperative Vehicular Networks," IEEE OJ-ITS, vol. 2, pp. 269-281, 2021.
  • [5] H. Zhang, Y. Duan, K. Long, and V. C. M. Leung, "Energy Efficient Resource Allocation in Terahertz Downlink NOMA Systems," IEEE Trans. Commun., vol. 69, no. 2, pp. 1375-1384, 2021.
  • [6] W. Yin, L. Xu, P. Wang, Y. Wang, Y. Yang, and T. Chai, "Joint Device Assignment and Power Allocation in Multihoming Heterogeneous Multicarrier NOMA Networks," IEEE Syst. J., 2020.
  • [7] M. B. Shahab, M. Irfan, M. F. Kader, and S. Y. Shin, "User pairing schemes for capacity maximization in non-orthogonal multiple access systems," Wirel. Commun. Mob. Comput., vol. 16, no. 17, pp. 2884-2894, 2016.
  • [8] L. Zhu, J. Zhang, Z. Xiao, X. Cao, and D. O. Wu, “Optimal User Pairing for Downlink Non-Orthogonal Multiple Access (NOMA),” IEEE Wireless Commun. Lett., Vol. 8, pp. 328–331, 2019.
  • [9] D. Gale, L. S. Shapley, "College Admissions and the Stability of Marriage," Am. Math. Mon., vol. 69, pp. 9-15, 1962.
  • [10] A. E. Roth, "The Economics of Matching: Stability and Incentives," Math. Oper. Res., vol. 7, no. 4, pp. 617-628, 1982.
  • [11] E. Ergin, T. Sonmez, and M. U. Unver, "Dual-Donor Organ Exchange," Econometrica, vol. 85, no. 5, pp. 1645-1671, 2017.
  • [12] N. Kohl and S. E. Karisch, "Airline crew rostering: Problem types, modeling, and optimization," Ann. Oper. Res., vol. 127, no. 1, pp. 223-257, 2004.
  • [13] R. Krishankumar and K. S. Ravichandran, "Optimal pairing of teammates for enhancing communication rates in software projects using ant colony optimization approach," ARPN J. Eng. Appl. Sci., vol. 11, no. 5, pp. 2939-2944, 2016.
  • [14] H. N. Gabow, "Data structures for weighted matching and nearest common ancestors with linking," in Proc. the first annual ACM-SIAM SODA, pp. 434-443, 1990.
  • [15] M. Cygan, H. N. Gabow, and P. Sankowski, "Algorithmic applications of Baur-Strassen’s theorem: Shortest cycles, diameter, and matchings," J. ACM, vol. 62, no. 4, pp. 1-30, 2015.
  • [16] V. V. Williams, "Multiplying matrices faster than Coppersmith-Winograd," in Proc. the forty-fourth annual ACM STOC, pp. 887-898, 2012.
  • [17] R. Duan and S. Pettie, "Linear-time approximation for maximum weight matching," J. ACM, vol. 61, no. 1, pp. 1-23, 2014.
  • [18] A. H. Halim and I. Ismail, "Combinatorial optimization: comparison of heuristic algorithms in travelling salesman problem," Arch. Comput. Methods Eng., vol. 26, no. 2, pp. 367-380, 2019.
  • [19] S. Hougardy, F. Zaiser, and X. Zhong, "The approximation ratio of the 2-Opt Heuristic for the metric Traveling Salesman Problem," Oper. Res. Lett., vol. 48, no. 4, pp. 401-404, 2020.