A Robust Advantaged Node Placement Strategy for Sparse Network Graphs

12/09/2018 ∙ by Kai Ding, et al.

Establishing robust connectivity in heterogeneous networks (HetNets) is an important yet challenging problem. For a HetNet accommodating a large number of nodes, establishing perturbation-invulnerable connectivity is of utmost importance. This paper provides a robust advantaged node placement strategy best suited for sparse network graphs. In order to offer connectivity robustness, this paper models the communication range of an advantaged node with a hexagon embedded within a circle representing the physical range of a node. Consequently, the proposed node placement method of this paper is based on a so-called hexagonal coordinate system (HCS) in which we develop an extended algebra. We formulate a class of geometric distance optimization problems aiming at establishing robust connectivity of a graph of multiple clusters of nodes. After showing that our formulated problem is NP-hard, we utilize HCS to efficiently solve an approximation of the problem. First, we show that our solution closely approximates an exhaustive search solution approach for the originally formulated NP-hard problem. Then, we illustrate its advantages in comparison with other alternatives through experimental results capturing advantaged node cost, runtime, and robustness characteristics. The results show that our algorithm is most effective in sparse networks for which we derive classification thresholds.







1 Introduction

Establishing connectivity in heterogeneous networks has been of high significance in the studies of HetNets in MANETs, WSNs, and multi-facility locations [1]. HetNets are typically composed of nodes with different capabilities and are formed by a collection of clusters. Generally, each cluster contains several standard nodes with short communication ranges and a cluster head node [2]. The cluster head node is an advantaged node serving as the gateway of this cluster in communication with other cluster heads. Connectivity scenarios of multi-tier networks have found extensive applications in different disciplines including but not limited to health surveillance, environment monitoring, earthquake detection, and Internet of Things (IoT). In all these applications, a large number of low-capability standard nodes (SNs) rely on a small number of advantaged nodes (ANs) to communicate.

Similar to the literature works [17, 22], this paper assumes HetNets are formed by SNs arranged in clusters, with each cluster designated an AN gateway. AN gateways are assumed to have much longer communication ranges and to be able to simultaneously connect to multiple nodes [19]. While this assumption guarantees intra-cluster connectivity and a certain lifetime [22], inter-cluster connectivity still needs to be established by placement of additional intermediate ANs. Lin and Xue [4] abstract this problem in the form of a Steiner minimum tree problem with a minimum number of Steiner points and bounded edge length. We refer to this algorithm as SMT, not to be confused with MST used to represent minimum spanning trees. Lin and Xue provide an approximation algorithm for the original NP-complete problem with a polynomial time complexity and a constant performance ratio. This algorithm lays the groundwork for several other approximation algorithms with smaller (better) performance ratios [6, 7, 16, 8]. In [19], the authors develop a node placement algorithm for clustered ad-hoc networks subject to capacity constraints. Other related works, albeit at the scale of small sensor networks, include [9, 23, 22, 24], in which energy and network lifetime constraints are emphasized in node placement.

All of the above algorithms use the Gilbert disk connectivity model [3, 20, 15], representing the communication range of an AN as a circle. One disadvantage of this model is a lack of boundary connectivity robustness when the distance between two centers is close to the connectivity distance threshold. In such cases, a pair of connected nodes can easily become disconnected as the result of small position perturbations, a phenomenon occurring frequently and unpredictably, especially in harsh environments. To compensate for these cases, fault-tolerant k-connectivity node placement algorithms have been developed [10, 9, 12, 11, 13]. By using a much larger number of ANs, these algorithms guarantee there are always k different paths between each pair of ANs.

In addition to the disadvantage above, SMT-based methods are subject to a second, major disadvantage. Since the minimum spanning tree is formed once, statically, to represent the topology of the network graph, SMT-based methods do not consider the effects of changes to the minimum spanning tree as a result of placing ANs in subsequent iterations. This can potentially over-utilize AN resources, since it may be possible to establish connectivity with a smaller number of ANs.

As detailed in Section 3 and Section 5, this work provides a dynamic strategy for AN placement capable of dynamically considering the effects of changes to the minimum spanning tree while offering robust network connectivity in the presence of perturbations. In essence, we seek an AN placement strategy that carries a certain level of robustness therein. To avoid the inherent problem of the Gilbert disk model in boundary connectivity cases, we model the communication ranges of nodes as hexagons embedded within the circles representing the actual communication ranges. Two nodes are considered connected only when their associated hexagons have a common edge. Consequently, a pair of connected nodes have a margin of perturbation that conserves connectivity. Projecting the node placement problem into an HCS with integer coordinates allows us to utilize the higher computational efficiency of the HCS compared to a conventional Cartesian coordinate system (CCS) in minimizing the number of intermediate ANs, identifying their positions, and accounting for topology perturbations.

In our work, we consider a two-tier graph of nodes in which clusters of SNs are to be connected with a minimum number of ANs. ANs are distinguished from SNs by their higher ranges of communication and ability to simultaneously connect to a large number of standard nodes. Each cluster of SNs is assumed to be equipped with an AN allowing full connectivity of the nodes within the cluster. Multiple clusters of SNs may or may not be connected depending on their separation distance. It is important to note that inter-cluster connectivity as facilitated by ANs is mostly a function of distance as opposed to interference because of the much larger separation distances of ANs and much stronger power profiles compared to SNs.

The main contribution of our work is as follows. First, for the purpose of offering robustness, we introduce a hexagonal coordinate system and develop the associated extended algebra. Relying on the proposed HCS, we then formulate a class of geometric distance optimization problems aiming at finding the minimum number of ANs and their positions to guarantee robust connectivity of a given HetNet. We prove that our formulated problem is NP-hard and offer an exhaustive search algorithm for solving this NP-hard problem as well as a low complexity algorithm for solving an approximation of this problem. We show that our heuristic solution closely tracks the exhaustive search algorithm while enjoying excellent node cost, runtime, and robustness characteristics compared to other alternatives. Our proposed approximation algorithm utilizes far fewer ANs than a k-connected network. This is because establishing a k-connected network requires adding many edges to a graph so as to preserve connectivity under edge or vertex cuts. Naturally, adding edges increases the number of intermediate ANs.

The rest of the paper is organized as follows. Section II describes the connectivity model. In Section III, the hexagonal coordinate system and the associated algebra are introduced. Section IV describes the formulation of the connectivity problem, the proof of NP-hardness, and an exhaustive search algorithm solving the problem. Section V includes the heuristic node placement algorithm and the associated analysis. Section VI contains our experimental results. Finally, Section VII concludes the paper.

2 Connectivity Model

Based on the landmark Gilbert connectivity model [3], early connectivity models in network graphs mainly consider the distance between nodes. Later, a number of more realistic models [26, 21, 18] were established to capture connectivity using propagation, fading, shadowing, signal-to-interference-noise ratio (SINR), symbol error rate (SER), and capacity. A review of these recent works reveals that using a distance-based connectivity model is justified when high power long range communication dominates other factors such as interference, fading, and shadowing. Accordingly, this work assumes that ANs are characterized by longer communication ranges, higher powers, and higher lifetimes compared to SNs.

A pair of nodes is considered to be bi-directionally connected if each node is located within the other's communication range. In the definition above, the distance between nodes is a realistic measure of connectivity because inter-cluster communication relies on LOS links established between high power ANs. For a pair of SN and AN nodes, connectivity is established only when the distance between the two nodes is less than or equal to the smaller of their two communication radii.

In our model, a number of SNs form a connected cluster for which the center of geometry can be calculated. A number of these clusters in a given area compose a network topology scenario. The location of clusters could be random or could follow a certain distribution rule depending on the SN deployment preference. The 3 red dots in Fig. 1 represent 3 SNs forming a sample cluster of SNs. Each cluster is assumed to be supported by an AN gateway node. This AN is typically located at the center of geometry of the cluster in order to maximize the number of SNs to which it is directly connected. Alternatively, AN gateways may have a small displacement from the center of geometry. Nonetheless, SNs within a cluster are all connected to the AN gateway node and able to communicate with nodes outside of the cluster through it. Thus, the problem of global connectivity is converted to connecting individual clusters utilizing additional intermediate AN nodes as necessary. Based on the connectivity condition given above, one AN ought to be located within the communication range of another AN so as to establish connectivity. A pair of ANs are connected if the two circles representing their communication ranges overlap.
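The gateway placement described above can be sketched in a few lines. This is a minimal illustration, not the paper's code; the coordinates and helper name are hypothetical.

```python
# Sketch: placing a cluster's AN gateway at the center of geometry of its SNs.
# The coordinates and helper name here are illustrative, not from the paper.

def center_of_geometry(points):
    """Return the centroid of a list of (x, y) SN positions."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

# Three SNs forming a sample cluster (cf. the red dots in Fig. 1):
cluster = [(0.0, 0.0), (3.0, 0.0), (0.0, 3.0)]
gateway = center_of_geometry(cluster)
print(gateway)  # (1.0, 1.0)
```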

In order to provide a margin of robustness in the presence of location perturbation, we model the communication area of an AN by a hexagon. Considering the extended range of an AN compared to an SN, we assume the AN communication radius is approximately two orders of magnitude larger than that of an SN. Without loss of generality, the edge length of the AN hexagon can be expressed in terms of a positive integer parameter chosen such that the resulting length accurately approximates the AN communication radius. This length selection offers a couple of geometrical advantages. First, any vertex of a large hexagon overlaps with the vertex of a hexagonal cell at the same relative position. Second, the center-to-edge distance of a hexagon is conveniently measurable by the distance measure defined in the next section. This distance relates to the minimum distance covered by an AN and is utilized in Section 5. Two ANs are then robustly connected if their associated hexagons have a common edge.

3 Algebra in Hexagonal Coordinate System

First, the node placement problem is projected into a so-called hexagonal coordinate system. To set up the HCS, we have to specify the origin, axes, and coordinates. The origin is defined as the center of geometry of all clusters. From this origin, we start tiling the plane with hexagonal cells. These cells have an edge length equal to the communication radius of an SN. The first cell shares its center of geometry with the origin point, whose coordinates are (0, 0). Then, we establish the rest of the tessellation with equal-sized hexagonal cells. Theoretically, an infinite tessellation can tile an infinitely extending plane without overlaps or gaps. In practice, we stop when the area of interest is fully tiled. The x-axis goes through the origin and is perpendicular to a pair of parallel edges of the cell containing the origin. The x-axis cuts through all hexagonal cells along that direction through their centers and edges. The y-axis is defined as a counter-clockwise rotation of the x-axis, as shown in Fig. 1. The y-axis also crosses the origin and perpendicularly cuts across the edges of all cells along the way, including the origin cell.

Fig. 1: The hexagonal coordinate system (HCS).

In this coordinate system, coordinates are associated with those of a hexagonal cell, unlike other coordinate systems such as that of [25] in which the axis goes through the center and a cell vertex. Points A and B in Fig. 1 illustrate a pair of coordinate examples.

3.1 Operation Definitions in HCS

3.1.1 Distance Measure

Since a point in an HCS actually represents the location of a hexagonal cell, a distance measure between two hexagonal cells counts the number of cells traversed when moving from one cell to the other. The distance between points A and B in Fig. 1 serves as a typical example. For a given pair of points, the distance measure of the vector connecting them is defined by (1).

For example, the vector from A to B starts at the center of cell A and ends at the center of cell B in Fig. 1. The distance between A and B is 6, representing the fact that the shortest path from A to B covers 6 cells.
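A cell-counting distance of this kind can be sketched as follows. This is a minimal sketch under the assumption of an axial hexagonal grid whose basis vectors are 60 degrees apart (so single steps along either axis or along their common diagonal each cross one cell); the paper's Eq. (1) may use a different but equivalent convention.

```python
# Hex-grid cell-counting distance, assuming axial coordinates with basis
# vectors 60 degrees apart. Under this convention a diagonal step (1, 1)
# also moves to an adjacent cell.

def hcs_distance(a, b):
    """Number of cells traversed moving from cell a to cell b."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    # Same-sign components can share diagonal moves; opposite-sign
    # components must be covered independently.
    return (abs(dx) + abs(dy) + abs(dx - dy)) // 2

print(hcs_distance((0, 0), (4, -2)))  # 6, i.e., a shortest path covering 6 cells
```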

Theorem 1.

The distance measure defined by (1) is a distance.


Noticing that a distance in the HCS calculated through (1) is non-negative, it is left to prove the triangle inequality:

We have the following three cases to consider.

Case 1 . In this case, we have

Case 2 . In this case, we have

Case 3 . In this case, we have

The vector addition rule in the HCS follows that of the Cartesian coordinate system: the sum of two vectors is obtained by component-wise addition of their coordinates.

3.1.2 Inner Product

The definition of the inner product in an HCS is not the same as that of the Cartesian coordinate system, since the two basis vectors are not perpendicular to each other. Let us denote the two basis vectors in the HCS along the x- and y-axes, respectively. We define the inner product as follows.


where the vectors are expressed in HCS coordinates, the superscript denotes the transpose operator, and the kernel is a symmetric matrix defined below.


It is observed that the inner product of a pair of vectors is zero if they are perpendicular to each other, as shown by the pair of perpendicular vectors in Fig. 1. Further, the inner product operation is commutative, as the kernel is a symmetric matrix.
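A bilinear form of this kind can be sketched as follows. The specific Gram matrix below is an assumption (unit-length basis vectors 60 degrees apart, giving off-diagonal entries of 1/2); the paper's matrix may differ in scale.

```python
# Inner product in HCS as u^T M v, where M is the symmetric Gram matrix of
# the basis. ASSUMPTION: unit basis vectors 60 degrees apart, so
# <e1, e2> = cos(60 deg) = 0.5.
M = ((1.0, 0.5),
     (0.5, 1.0))

def hcs_inner(u, v):
    """<u, v> = u^T M v for 2-component HCS coordinate vectors u and v."""
    return sum(u[i] * M[i][j] * v[j] for i in range(2) for j in range(2))

# (1, 0) and (-1, 2) are perpendicular in the plane under this basis:
print(hcs_inner((1, 0), (-1, 2)))  # 0.0
```

Because M is symmetric, `hcs_inner(u, v) == hcs_inner(v, u)`, matching the commutativity noted above.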

3.2 Orientation of Distance Vector

When determining the least number of ANs required to link two clusters, one has to realize that the maximum distance covered by an AN depends on its orientation. When the distance vector between two clusters is closely aligned with the x- or y-axis, one may have to use more ANs than in a case where the distance vector is oriented away from either axis. In the latter case, one divides the distance by the length of the diagonal to decide how many ANs are needed. Since we are mainly concerned about whether the distance vector is more aligned with the x-axis, the y-axis, or with the diagonals of the head and tail clusters, we take the basis axes as the reference for orientation. In the following subsections, we discuss a number of cases in which the distance vector lies in different quadrants. We specify the quadrant in which the distance vector is located by inspecting a parallel vector of the same length starting at the origin.

3.2.1 The Distance Vector is in the 1st or 3rd Quadrant

As shown in Fig. 2, the orientation of the distance vector between the two clusters is represented by the angle it forms with the x-axis.

Fig. 2: Calculating the angle between the distance vector and the x-axis when the vector lies in the 1st or 3rd quadrant. In the other two quadrants, the quadrant is divided into two areas, and the angle is calculated with respect to a different reference axis in each area.
Fig. 3: The longest possible distance covered by an AN.

In the triangle formed by the distance vector and the axes in Fig. 2, we can identify the value of the angle from the Law of Sines. Hence,


3.2.2 The Distance Vector is in the 2nd or 4th Quadrant

In an HCS, quadrants 2 and 4 are larger in area than quadrants 1 and 3, since the angle between the axes bounding them exceeds 90°. Whether we take the x- or y-axis as the reference, the method of the former subsection leads to a point of discontinuity in Eq. (4). Therefore, we partition each of these quadrants into two areas, as shown in quadrant 2 of Fig. 2. In Area I, the orientation of the distance vector is measured with respect to one bounding axis, while in Area II it is measured with respect to the other. Then, we can extract the associated equations from the Law of Sines separately. In Area I, the Law of Sines gives the following. Therefore,


Similarly, in Area II, we have the following. Then,


With the distance orientation information, we are able to calculate the least number of intermediate ANs required to connect two clusters. That is, we obtain the longest covering range of one AN along the direction of the distance vector and then divide the distance by that range. The following lemma gives the longest possible distance covered by an AN.

Lemma 1.

The longest possible distance covered by an AN is an odd integer, with minimum and maximum values corresponding to its direction relative to its neighboring AN.


Fig. 3 shows two extreme cases. One segment is the shortest possible distance covered by adding one AN connected to an existing AN; the direction of the connecting vector is perpendicular to the common edge. The other segment is the longest possible distance covered by adding one AN. Here, we calculate the two cases separately. Recalling the edge lengths of the large hexagon and the small hexagonal cell in Fig. 3, we have

As long as two ANs are connected and have one common edge, the distance between them in the HCS is no smaller than the minimum case and no larger than the maximum case. Last but not least, if two ANs have one edge in common, the distance between them is always an odd number. ∎
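The division described before the lemma can be sketched as follows. This is a minimal illustration, not the paper's procedure; the per-hop coverage value used below is hypothetical.

```python
import math

# Sketch: least number of intermediate ANs needed to span an inter-cluster
# HCS distance D, assuming (per Lemma 1) each added AN extends coverage by
# at most d_max cells along the distance vector's direction.

def min_intermediate_ans(D, d_max):
    """Lower bound: divide the distance by the longest per-AN coverage."""
    return math.ceil(D / d_max)

# Hypothetical AN whose longest per-hop coverage is 21 cells:
print(min_intermediate_ans(100, 21))  # 5
```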

4 NP-hard Problem Statement and Exhaustive Search Algorithm

In this section, we prove that our node placement problem in HCS is NP-hard via a reduction from the Knapsack problem, which is known to be NP-complete. Then, we provide an exhaustive search algorithm for solving the problem as a comparison benchmark.

4.1 NP-Hard Problem Statement

Problem 1 (Knapsack Problem [29]) Given a set of items, each with a weight and a profit, is there a way of choosing a number of units of each item to fill the knapsack such that the total profit of the chosen items is at least a given threshold while their total weight does not exceed the knapsack capacity?

Problem 2 (Node placement problem in HCS) Given pre-deployed gateway nodes with integer coordinates in an HCS and a minimum spanning tree of a given total length formed by these nodes, can one cover that total distance with a given number of additional intermediate nodes?

In Problem 2, covering the total length with ANs is equivalent to being able to find a connected path between any arbitrary pair of nodes where every pair of neighboring nodes has a distance within the range established in Lemma 1.

Theorem 2.

There is a polynomial time reduction from Problem 1 to Problem 2.


Suppose a set contains all odd integers between the minimum and maximum coverage distances of Lemma 1. We start with an instance of Problem 1 with which this set is associated. Then, we construct an instance of Problem 2 with which the same set is also associated.

According to Lemma 1, each intermediate AN, based on the orientation of the distance vector to its neighbor, covers one of the odd distance types in the set. These distance types are the items to be packed in the constructed Knapsack instance. Letting a decision variable denote the number of ANs of each type, the total distance covered by all intermediate ANs is expressed as


Let the profit of each item in the Knapsack instance equal its covered distance. Assuming the weight of each AN is one, the total weight of all intermediate ANs amounts to the number of intermediate ANs, i.e.,


Considering the statement above, the process of constructing the instance of Problem 2 from the instance of Problem 1 occurs in polynomial time.

In Problem 2, we are seeking a yes/no answer to the question "Can we, by using the given number of intermediate ANs, cover the total distance of the minimum spanning tree?" If the answer to Problem 1 is yes, we can fill the knapsack such that the minimum profit is reached without exceeding the maximum weight.

Through the reduction above, it is then feasible to cover the required length by placing at most the given number of additional nodes. In addition, if the answer to Problem 2 is no, the corresponding instance of Problem 1 also has a no answer. This implies a polynomial reduction from Problem 1 to Problem 2. ∎

Therefore, we conclude that Problem 2 is NP-hard. In the next subsection, we provide an exhaustive search algorithm to solve Problem 2.
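The decision question behind the reduction can be illustrated as follows. This is a sketch under the reduction's own assumptions (each AN is an item of weight one whose profit is its odd coverage distance); the list of distance types below is hypothetical.

```python
# Sketch of the reduction's decision question: with at most k intermediate
# ANs, each covering one odd distance type from `types` (weight 1, profit =
# distance), can a total distance of at least L be covered?

def can_cover(L, k, types):
    # Greedy is exact here because every item has weight 1:
    # k copies of the longest type maximize the covered distance.
    return k * max(types) >= L

odd_types = [3, 5, 7]               # hypothetical coverage types from Lemma 1
print(can_cover(20, 3, odd_types))  # True: 3 ANs x 7 cells = 21 >= 20
print(can_cover(22, 3, odd_types))  # False
```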

4.2 Exhaustive Search Algorithm

Our exhaustive search algorithm places a number of intermediate ANs and tries to rearrange their locations so as to establish global connectivity, until the smallest number of ANs that connects the entire graph is identified. There is a finite set of feasible locations representing the candidate coordinates of intermediate AN locations in the HCS. Generally speaking, all coordinate points of the HCS except those occupied by pre-deployed clusters are feasible. The number of feasible locations is then derived as


where the number of feasible locations equals the field area divided by the area of a hexagonal cell, minus the number of pre-deployed clusters. The feasible locations are then stored in a matrix.

In our exhaustive search algorithm, we test all possible combinations of candidate coordinates and check whether one configuration accomplishes connectivity of all clusters. If not, we increase the number of ANs by one and repeat the same process until the least number of intermediate ANs rendering global connectivity is reached. The algorithmic pseudo code is given in Algorithm 1.

One may notice the considerable computational complexity of the nested 'for' loop. When the optimal solution is reached, say connecting the entire graph with a certain number of ANs, the runtime of the exhaustive algorithm is at least on the order of the number of combinations of feasible locations of that size. It can be shown, by Stirling's formula and the binomial theorem, that the runtime is bounded as shown below.


In practice, we are able to strategically preclude some locations that have a very low possibility of accommodating an AN. For instance, an AN may not be placed too close to a pre-deployed cluster, and all ANs typically, but not always, lie inside the convex hull containing all pre-deployed clusters. With this strategy, we can reduce the number of feasible locations to some extent. However, to the best of our knowledge, there is no systematic strategy for reducing the number of feasible locations.

  Input: Location of pre-deployed clusters
  Output: Coordinates of intermediate ANs
  Establish finite HCS
  Put coordinates of feasible AN positions in the candidate matrix
  while (Graph not fully connected) do
     for all possible combos of feasible positions do
        Place ANs at these positions
        if the graph is connected then
           Set coordinates of ANs from this combo
        end if
     end for
  end while
Algorithm 1 Exhaustive Search Algorithm
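A runnable sketch of Algorithm 1 on a toy instance follows. All coordinates, the `reach` parameter, and the assumed hex-distance convention are illustrative, not from the paper.

```python
from itertools import combinations

# Toy version of the exhaustive search: try ever-larger subsets of the
# feasible AN cells until the whole graph becomes one connected component.

def dist(a, b):  # cell-counting hex distance, assumed axial convention
    dx, dy = b[0] - a[0], b[1] - a[1]
    return (abs(dx) + abs(dy) + abs(dx - dy)) // 2

def connected(nodes, reach):
    """Check single-component connectivity with a union-find."""
    parent = list(range(len(nodes)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for i, j in combinations(range(len(nodes)), 2):
        if dist(nodes[i], nodes[j]) <= reach:
            parent[find(i)] = find(j)
    return len({find(i) for i in range(len(nodes))}) == 1

def exhaustive_place(clusters, feasible, reach):
    """Smallest set of intermediate ANs making all clusters connected."""
    for n in range(len(feasible) + 1):
        for combo in combinations(feasible, n):
            if connected(clusters + list(combo), reach):
                return list(combo)
    return None

clusters = [(0, 0), (6, 0)]                  # two far-apart gateways
feasible = [(2, 0), (3, 0), (4, 0), (1, 1)]  # candidate AN cells
print(exhaustive_place(clusters, feasible, reach=3))  # [(3, 0)]
```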

5 Heuristic Connectivity Algorithm

As the node placement problem described in the previous section is NP-hard, it is reasonable to seek a heuristic near-optimal solution offering a manageable time complexity. Hence, this section provides a description of our heuristic connectivity algorithm and its complexity analysis.

As illustrated in Section 2, we model the communication range of an AN by a hexagonal cell whose edge length is defined in terms of a positive integer parameter. The robust connectivity criterion is satisfied when two large hexagons have a common edge. Consequently, we are dealing with the task of connecting a number of hexagons within the network graph by optimally placing a number of hexagons of the same size between each pair as needed. As this task can be abstracted as a distance optimization problem within the HCS, we introduce a class of geometric distance optimization (GDO) algorithms.

The main algorithm of interest in this paper is referred to as the enhanced geometric distance optimization (EGDO) algorithm. The name stems from the fact that the algorithm is an enhanced version of a pair of GDO algorithms proposed in [27]. Our work in [27] proposes two such algorithms, referred to as LongestHCS and ShortestHCS, and shows that LongestHCS outperforms ShortestHCS in most scenarios. The main improvement of the EGDO algorithm over the GDO (LongestHCS) algorithm is its significantly improved time complexity. As described in Subsection 5.1, the latter is achieved by locally modifying the existing MST in iterative steps as opposed to forming a new MST after each iteration, as was the case for the GDO algorithms. In the rest of this paper, we refer to the LongestHCS algorithm as the GDO algorithm.

5.1 EGDO Algorithm

Given a number of clusters distributed in the plane, the first step is to set up the HCS origin and axes. We set the origin of the HCS at the center of geometry of these clusters. Then, we set up the x- and y-axes of the HCS at the origin. After setting up the HCS, the coordinates of all clusters in the HCS are specified. Utilizing the HCS and the associated algebra developed in Section 3, the 5-step iterative Algorithm 2 shown below leads us to establishing network graph connectivity using the minimum number of intermediate ANs. In what follows, the individual steps of Algorithm 2 are described in detail.

  Input: Location of pre-deployed clusters
  Output: Coordinates of intermediate ANs
  Step 1: Establish HCS, calculate MST, find terminals
  while (Graph not fully connected) do
     Step 2: Identify the pair of clusters to be connected
     Step 3: Connect the pair using the minimum no of ANs
     Step 4: Modify MST using ANs placed in Step 3
     Step 5: Break if graph is fully connected
  end while
Algorithm 2 EGDO Algorithm

Step 1: Calculate MST and Find Terminals

In this step, we first initialize the iteration counter. Then, we calculate the initial distance-weighted MST formed by the given set of clusters. The distance between a pair of clusters is calculated based on the distance measure defined in Section 3. The MST is calculated using Kruskal's algorithm [5]. The calculated MST is represented by a matrix, indexed by the iteration number, in which each row identifies the two vertices of an edge. The matrix lists the edges in increasing order of edge length. Given the MST, we find all terminal nodes. A terminal node is an AN in the network connected to only one other AN. From these terminals, we establish a set of nodes of interest for use in the next step. We refer to these nodes of interest as potentially adjustable nodes (PANs). Once we place a new intermediate AN, we compare the distances between this new AN and the PANs. If the distance between the new AN and a PAN is smaller than the length of the edge connected to that PAN, we change the route by deleting the edge connected to the PAN and connecting the PAN to the new AN.
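The MST construction in this step can be sketched with Kruskal's algorithm over the cluster gateways. The coordinates below are illustrative, and the distance convention is the assumed axial one rather than the paper's exact Eq. (1).

```python
from itertools import combinations

# Step 1 sketch: build the initial MST over cluster gateways with Kruskal's
# algorithm, edges sorted by (assumed) HCS cell-counting distance.

def dist(a, b):
    dx, dy = b[0] - a[0], b[1] - a[1]
    return (abs(dx) + abs(dy) + abs(dx - dy)) // 2

def kruskal_mst(clusters):
    """Return MST edges as (length, i, j), in increasing order of length."""
    parent = list(range(len(clusters)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    edges = sorted((dist(clusters[i], clusters[j]), i, j)
                   for i, j in combinations(range(len(clusters)), 2))
    mst = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:            # adding this edge creates no cycle
            parent[ri] = rj
            mst.append((w, i, j))
    return mst

clusters = [(0, 0), (5, 0), (0, 7)]
print(kruskal_mst(clusters))  # [(5, 0, 1), (7, 0, 2)]
```

Because the returned edge list is already sorted by length, the last row directly gives the longest edge that Step 2 selects.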

Step 2: Identify the Pair of Connecting Clusters

Since the rows of the distance matrix are in increasing order, the last row identifies the edge to be connected in the next step. Let us assume the elements of the last row identify the pair of clusters to be connected.

One might be curious as to why we select the pair of nodes with the longest distance in between. The answer lies in the fact that the longest-distance pair yielded our best experimental results from among the variants tested. Other variants include always selecting the shortest-distance pair, alternating between shortest- and longest-distance pairs, and several types of clustering strategies discussed in [28].

Step 3: Connect the Pair of Selected Clusters

This step attempts to achieve two goals. The primary goal is to connect the selected pair of clusters (representing either gateway ANs or intermediate ANs) with the least number of intermediate ANs. The secondary goal is to deploy those intermediate ANs in a way that brings the remaining isolated clusters closer, thereby helping their future connectivity. In order to achieve these goals, we utilize the following iterative process.

Case 1: A single AN suffices to connect the selected pair, and the new AN is placed between them. The exact position of the new AN is calculated by solving the optimization problem given in this section.

In order to identify the coordinates of the new node placed in the current iteration of Case 1, we introduce a pair of conditions.

  1. Maintain the connectivity of all three ANs, namely, the two selected nodes and the new node.

  2. Identify the position of the new node by maximizing the probability of connecting the newly aggregated cluster to other pre-deployed clusters and minimizing its overlap area with the other two nodes.

In short, we want to connect the pair of selected clusters by placing the new AN, while also expecting to facilitate the connectivity of the remaining clusters by placing it intelligently when possible. Accordingly, we formulate the following distance maximization problem, graphically depicted in Fig. 4.

S.T. (12)

In this problem, and .


By definition, the new node is connected to each of the selected nodes and has the least overlap area with it as long as its position assumes a value from the set of feasible positions determined by the inner product constraint (12) (or (13)). In other words, the distance between the new node and each selected node is maximized along the direction of the corresponding constraint vector.

Fig. 4: A graphical representation of the feasible points of in Case 1.

In what follows, we explain the meaning of these constraints. As shown in Fig. 4, the cells in solid dark and light grey represent all feasible positions of the new node assuring connectivity to the two selected nodes. The two inner product constraints make the new node slide along the lines perpendicular to the corresponding direction vectors, as shown by the gray cells in Fig. 4, while the distance constraint identifies the furthest position the new node can reach along those directions while staying connected. In short, the inner product constraint identifies the track of movement for the new node, and the distance constraint controls the range on the track. Although a hexagon has six possible facade directions, we do not need to inspect them all: based on the relative positions of the two selected nodes, only one facade of each node needs to be considered. To solve the optimization problem, we first ignore the inequality constraints and calculate the position of the new node from the two equality constraints; we then verify the connectivity of the two gateway nodes. With different pairs of direction vectors selected, there will be two solutions for the coordinates of the new node satisfying the inequality constraints. The one closer to the origin is chosen in order to improve the probability of connecting to other clusters. Now, let us assume Case 1 of Step 3 specifies the coordinates of the intermediate node and of the two selected nodes. Then, solving the equality constraint (12) yields


Similarly, solving the equality constraint (13) yields


By solving Eq. (19) and Eq. (20), the coordinates of the new node are identified, which are required to be integers within the HCS. More importantly, the position of the new node must satisfy the inequality constraints (14), (15), (16), and (17). In this case, all possible combinations of direction vectors in the feasible set are to be inspected. Note that the two direction vectors cannot be parallel to each other; otherwise, Eq. (19) and Eq. (20) have no joint solution. This rule only applies to Case 1.

Case 2: When a single AN cannot establish connectivity between and , the following iterative process is initiated. In this case, we assume , i.e., ANs are required to connect and where .

  1. Place an AN next to each of the two clusters and . Referred to as and , , these two ANs are connected to their associated clusters, and , respectively.

  2. Find the exact positions of and by minimizing the sum of three distances: between and , between and the origin, and between and the origin.

  3. Inspect connectivity between and . If connected, terminate the process. If more ANs are needed, replace the two end nodes with and and then solve the problem of connecting them by going through the same process described before.
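The three steps above can be sketched as a generic loop; `place_pair` and `connected` are problem-specific callbacks standing in for the Step 2 optimization and the connectivity test, and are assumptions for illustration only:

```python
def connect_clusters(end_a, end_b, place_pair, connected):
    """Case 2 as a loop: while the two end nodes are disconnected, place
    and position an AN next to each end (Steps 1-2), then repeat the same
    process on the newly placed pair (Step 3).
    place_pair and connected are illustrative callbacks."""
    a, b = end_a, end_b
    placed = []
    while not connected(a, b):
        a, b = place_pair(a, b)  # positions of the two new intermediate ANs
        placed.extend([a, b])
    return placed  # all intermediate ANs, in placement order
```

A toy run on a one-dimensional stand-in, where each step moves the two ends one unit toward each other, terminates once the ends are within unit distance.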

Fig. 5: A graphical representation of the feasible points of and in Case 2.

We assume the AN pair and are being connected using ANs and with coordinates and , respectively. We use two sets of constraints to find the feasible positions of and corresponding to and , respectively. Then, different combinations of these feasible positions of and are inspected in order to find the one minimizing the distance between and , and the origin, as well as and the origin. As shown in Fig. 5, the dark gray cells on the line segment perpendicular to represent all feasible positions of . Similarly, the light gray cells on the line segment perpendicular to represent all feasible positions of . The distance optimization problem is accordingly described below.

S.T. (22)

The set of constraints used to find feasible positions of is expressed by Eq. (22) and Eq. (26), while the one used to find positions of is expressed by Eq. (23) and Eq. (27). Here, and are in .

However, only has one choice in because the vector can only point to the facade that faces . Similarly, only has one choice in because the vector can only point to the facade that faces . After finding all feasible positions for and , we conduct an exhaustive search among the different combinations of feasible positions to find the unique combination that minimizes the objective function.
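The exhaustive search over combinations of feasible positions can be sketched as follows, with the objective taken as the distance between the two ANs plus each AN's distance to the origin, as described above; the function names and coordinate tuples are illustrative assumptions:

```python
import itertools
import math

def best_pair(feasible_a, feasible_b):
    """Exhaustively inspect every combination of feasible positions for the
    two intermediate ANs and keep the pair minimizing the distance between
    them plus each one's distance to the origin (illustrative objective)."""
    def d(p, q=(0.0, 0.0)):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    return min(itertools.product(feasible_a, feasible_b),
               key=lambda pq: d(pq[0], pq[1]) + d(pq[0]) + d(pq[1]))
```

Because each feasible set is bounded by the AN communication range, the product being searched is finite, consistent with the constant-time argument in Section 5.2.1.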

Step 4: Modify MST

In this step, first the current PAN set is identified following the placement of one or more intermediate ANs in the previous step. In Case 1 of Step 3, this step takes place after the single AN is placed. In Case 2 of Step 3, this step is initiated right after and are placed. It is observed that placing one or two new ANs can only introduce local changes to the topology of the network, i.e., topology changes are limited to the neighboring nodes of newly placed ANs.

In a given MST, a -connected node or terminal may be connected to a newly placed AN with the guarantee of not creating a loop. However, connecting a node with or more edges to a PAN may result in creating a loop. In order to avoid this possibility, only -connected nodes and terminals in an MST are considered as PANs.

Accordingly, we propose a line graph method in order to identify these PANs. A line graph starts from a terminal node; this terminal is the first node. Then, following the line, the second node is reached, and so on. The line ends when a -connected node, a terminal node, or a currently selected node is reached. If a node on the line does not belong to any of the categories above, then it is a -connected node and is subsequently added to the current PAN set. Note that we are only interested in terminals or -connected nodes because -connected nodes cannot be modified, or else the entire spanning tree would become disconnected. The nodes on the line represent the current PAN set, which is a subset of all PANs.
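Assuming, as the stripped symbols suggest, that "k-connected" here denotes a node's degree in the spanning tree, the line graph walk can be sketched as a traversal from a degree-1 terminal that stops at the first node of degree other than 2 or at an already-visited node; names are illustrative:

```python
def line_graph_from_terminal(adj, terminal, visited=None):
    """Walk from a degree-1 terminal along the tree until a node of degree
    >= 3, another terminal, or an already-visited node is reached. Interior
    nodes on the walk have degree 2 and join the current PAN set.
    (Sketch: assumes 'k-connected' means tree degree k.)"""
    visited = set() if visited is None else visited
    line = [terminal]
    prev, cur = terminal, next(iter(adj[terminal]))
    while len(adj[cur]) == 2 and cur not in visited:
        line.append(cur)
        nxt = next(n for n in adj[cur] if n != prev)  # continue along line
        prev, cur = cur, nxt
    line.append(cur)  # stopping node: degree != 2 or already visited
    visited.update(line)
    return line
```

Passing the shared `visited` set across calls from different terminals prevents the same interior nodes from being collected twice.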

Having identified the current PAN set, the MST can be modified accordingly. In order to always keep the last row of as the edge to be connected, we modify the MST according to the cases of Step 3 above.

If we are to follow Case 1 and place a single AN, we remove the last row of and insert two rows and as the top rows of . This changes the representation of MST from in this iteration to in the next iteration.

If we are to follow Case 2 when adding a pair of ANs and , we replace the last row of with and then insert and as the top rows of for iteration .

Second, for each newly added AN in set , say , we compare the distance between and the -th node on a line graph, , with the distance between the -th and -th nodes on the line. If is smaller, we modify the tree by deleting the edge between the -th and -th nodes on this line graph and then adding the edge between and the -th node. After this modification, becomes a -connected node, as it is connected with and (or other intermediate ANs between and ) as well as the -th node on this line graph. Therefore, the edges ending at can never be modified. After making each modification, we stop searching for other PANs on the current or other line graphs.

In this process, we always compare with the edge lengths of the MST entries and insert the associated edges at the right place in order to preserve the ascending order of edge lengths in the MST matrix.
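The ordered-insertion bookkeeping can be sketched with a binary search on the length column; the row layout `(length, u, v)` is an illustrative stand-in for the paper's MST matrix rows:

```python
import bisect

def insert_edge(mst_rows, new_edge):
    """Insert a (length, u, v) row into the MST row list while preserving
    the ascending order of edge lengths, mirroring the Step 4 bookkeeping
    (illustrative row format)."""
    lengths = [row[0] for row in mst_rows]
    mst_rows.insert(bisect.bisect(lengths, new_edge[0]), new_edge)
    return mst_rows
```

Keeping the rows sorted means the last row is always the longest remaining edge, which is the pair selected for connection in the next iteration.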

Step 5: Check Stoppage Rule

When the selected pair of clusters is found to have already been connected, the algorithm stops. Otherwise, we increment the iteration counter by and go back to Step 2.

It is worth noting that the main difference between the EGDO and GDO algorithms is that the EGDO algorithm adjusts the MST to reflect the local topology changes associated with adding ANs, as opposed to recalculating a new MST as done by GDO. This leads to a significant reduction of the average time complexity of the EGDO algorithm, as reported in Section 5.2.2 and Section 6.

5.2 Analysis of Complexity

In this subsection, we analyze the computational complexity of EGDO in comparison with the GDO algorithm. We first determine the time complexity of solving the optimization problem of Step 3, as it is the common step shared by both the GDO and EGDO algorithms, and then analyze the complexity of the recursive algorithm.

5.2.1 Complexity of Solving the Optimization Problem

The total number of cases that need to be inspected in order to solve the optimization problem of Step 3 of Section 5.1 is equal to . As mentioned before, each and has six possible directions, but the two cannot be parallel or else there is no solution to Eq. (19) and Eq. (20). However, this number can be reduced to based on the relative position of with respect to . We argue that solving the optimization problem in Case 1 of Step 3 takes constant time, as solving Eq. (19) costs constant time.

If we are to follow Case 2 in Step 3, we conduct an exhaustive search for combinations of feasible positions for and . The total number of all individual feasible positions for is


The same number also represents the total number of all individual feasible positions for . Thus, the total number of combinations that either the GDO or the EGDO algorithm needs to inspect in Case 2 of Step 3 is . This is an exhaustive search within a finite number of candidates, since is determined by the communication range of an AN. Therefore, finding the particular combination of the pair , that minimizes their distance takes constant time.

Next, we note that Case 2 follows the same approach iteratively until and are connected. This is because , the total number of intermediate ANs required to connect and in iteration , is a finite number known at the beginning of each iteration and decreases in subsequent iterations, as the selected edge length within the MST never increases. Hence, we conclude that solving the optimization problem of Case 2 also takes constant time. This constant is a function of as well as the number of ANs needed to connect the two selected nodes.

5.2.2 Complexity of EGDO Algorithm

In this subsection, we analyze the complexity of the other steps of the EGDO algorithm. In Step 1, the time complexity of calculating the distance weight matrix between all nodes is in the order of , provided that there are pre-deployed clusters. Calculating the minimum spanning tree takes , where is the number of edges in the initial network graph. Since we need to inspect all edges in the weight matrix, is close to . To find terminals, we need to inspect the degree of each node; completing this process takes a time complexity of . Hence, the total complexity of Step 1 is in the order of . In Step 4, in order to find the sets of PANs, we start from each terminal node and stop after reaching a certain type of node. This is a search process that usually stops well before going through all of the nodes present. Assume that the algorithm starts with pre-deployed clusters and stops after iterations, that is the number of intermediate ANs added in iteration with , and that represents the total number of ANs after iteration . Then, the worst case time complexity of Step 4 at the -th iteration is . However, the average time complexity is much lower, as the search stops well before . Step 2 and Step 5 take constant time, which can be ignored.

In summary, the worst case time complexity of EGDO algorithm is in the order of

However, the average time complexity is much lower considering the fact that the search process of Step 4 stops before , as shown by our experiments in Section 6.
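The dominant Step 1 cost quoted above is the MST computation on the distance weight graph. As an illustration of that bound, a standard Prim's implementation with a binary heap, run on the complete distance graph, might look like the following (a sketch; the paper works with HCS coordinates):

```python
import heapq
import math

def prim_mst(points):
    """Prim's algorithm over the complete Euclidean distance graph of the
    given points, matching the O(E log V) MST bound quoted for Step 1."""
    def dist(i, j):
        return math.hypot(points[i][0] - points[j][0],
                          points[i][1] - points[j][1])
    n = len(points)
    in_tree = {0}
    edges = []
    heap = [(dist(0, j), 0, j) for j in range(1, n)]
    heapq.heapify(heap)
    while len(in_tree) < n:
        w, u, v = heapq.heappop(heap)
        if v in in_tree:
            continue  # stale entry: v already joined via a shorter edge
        in_tree.add(v)
        edges.append((w, u, v))
        for j in range(n):
            if j not in in_tree:
                heapq.heappush(heap, (dist(v, j), v, j))
    return edges  # the n-1 edges of the minimum spanning tree
```

On a complete graph, E is close to V squared, which is why the Step 1 total above is dominated by the MST term.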

5.2.3 Complexity of GDO Algorithm

In comparison, we analyze the complexity of the other steps of the GDO algorithm. In Step 1, the time complexity of calculating the distance weight matrix between all nodes is , provided that there are pre-deployed clusters. Similarly, the total time complexity of Step 1 is in the order of . Since the GDO algorithm recalculates the MST after each iteration and the number of nodes in the MST increases, the runtime accumulates. With the same definitions of , , and , the time complexity of the GDO algorithm is

The worst case time complexity of the GDO algorithm is hence in the order of . Even though the bound is not tight, the GDO algorithm has a much higher time complexity than EGDO, as presented in Section 6.

6 Experimental Results

In this section, we first compare the results of our algorithm with those of the exhaustive search algorithm. The latter serves as the benchmarking baseline, finding the globally optimal solution to the node placement problem albeit with a very high time complexity. We show that our algorithm provides results close to the optimal solution given by exhaustive search within a limited area in which the time complexity of exhaustive search is affordable. Then, we compare the performance of the EGDO algorithm with those of GDO and variants of the SMT [4] algorithm. Our experimental results cover AN cost, i.e., the number of intermediate ANs, runtime, robustness, and the effects of HCS.

6.1 Comparison with Exhaustive Search Algorithm

In this subsection, we compare the AN cost of the EGDO algorithm with that of the exhaustive search algorithm, without consideration of runtime, in order to show that our EGDO algorithm in fact produces results close to the globally optimal solution.

In order to examine the deviation of the EGDO solution from the globally optimal solution, we run experiments in a field of , with and . This selection of parameters allows for completing the exhaustive search experiments in realistic time. Fig. 6 gives the results. The horizontal axis is the number of pre-deployed clusters varying in the range from to , and the vertical axis is the AN cost. For each point on the -axis, we run 50 different configurations and record the number of intermediate ANs used. Then, we fit the data to a polynomial curve. The blue and black curves show the AN cost of the EGDO and exhaustive search algorithms, respectively. While the AN cost of EGDO is always higher than that of exhaustive search, the largest gap observed between the two curves along the vertical axis is less than . While we are unable to offer a mathematical proof, the gap falls within the range of a -approximation ratio. It can also be observed that both curves start dropping beyond a certain point. This is because when the number of pre-deployed clusters grows, the network becomes denser and requires fewer ANs to establish connectivity. This aspect is further investigated in the following subsections.

On the aspect of runtime, the completion time of EGDO algorithm is in the range of to seconds in our simulation setting. However, the exhaustive search algorithm takes from several minutes to over ten hours to complete within the same simulation settings.

Fig. 6: An AN cost comparison of EGDO and exhaustive search algorithms.

6.2 Performance Comparison of SMT and EGDO Algorithm

In this subsection, we compare the performance of the SMT, GDO, and EGDO algorithms measured by the minimum AN cost and runtime. When comparing the two classes of algorithms, we also consider the fact that the GDO and EGDO algorithms dynamically update minimum spanning trees while the original SMT algorithm forms the minimum spanning tree once, statically. Therefore, we modify the SMT algorithm into a dynamic algorithm in which the minimum spanning tree is recalculated after connecting every edge. We refer to the original static SMT algorithm as StaSMT and to the revised dynamic SMT algorithm as DynSMT. Because the SMT algorithms model the communication range of a node as a disk while GDO and EGDO model it as a hexagon embedded within the disk, the SMT algorithms cover a given distance with a smaller number of ANs on average. Yet one should notice that this is not a fair comparison, as a circle always covers a longer distance than the hexagon embedded in it. Additionally, it is important to note that the DynSMT and GDO algorithms recalculate the entire spanning tree each time a pair of clusters is connected. Hence, the time complexity of these algorithms is higher than those of the StaSMT and EGDO algorithms.

Fig. 7: An AN cost comparison among StaSMT, DynSMT, GDO, and EGDO algorithms.

The experiments are conducted in a field with and . Fig. 7 provides an AN cost comparison among the StaSMT, DynSMT, GDO, and EGDO algorithms for a fixed field size. The results show that the GDO algorithm uses an average of more AN resources than StaSMT. Further, the EGDO algorithm sometimes consumes a slightly larger number of AN resources than the GDO algorithm, because in performing local modifications it might miss some larger scale variations in the network graph topology caused by a newly added AN. However, comprehensive experimental results have shown that these differences are negligible. Interestingly, it is also observed that there is no significant difference between the performance of the two variants of the SMT algorithm. This is attributed to the fact that, unlike for the GDO algorithm, the recalculated minimum spanning tree in DynSMT is not much different from the previously calculated minimum spanning tree obtained by the StaSMT algorithm. The results of all four algorithms show an initial rise followed by a drop alongside some variations. The rise is related to the fact that an increase in the number of pre-deployed clusters in a sparse network requires utilizing more intermediate ANs. As the grows even larger within a fixed field size, the sparse network evolves into a dense network covering most of the field with AN gateways, thereby reducing the number of intermediate ANs. All four algorithms tend to use the same number of intermediate ANs as the value of grows to in this setting.

Fig. 8: A runtime comparison of StaSMT, DynSMT, GDO, and EGDO algorithms.

Fig. 8 includes a comparison of runtimes among the StaSMT, DynSMT, GDO, and EGDO algorithms for the same fixed field size. It is observed that StaSMT has the lowest runtime because it only forms the minimum spanning tree once. DynSMT has a much longer runtime than the other three algorithms in general. Among the three dynamic algorithms GDO, EGDO, and DynSMT, EGDO has the shortest runtime by far. While its runtime is generally higher than that of StaSMT, it gets closer to that of StaSMT for values of greater than . This behavior is related to the fact that the cost of calculating the minimum spanning tree increases as grows while a smaller number of intermediate nodes is needed.

Considering the fact that the AN cost performance of DynSMT is slightly better than that of StaSMT while its runtime is significantly longer, we conclude that the advantage of DynSMT does not justify its increased time complexity. Therefore, we mainly compare the performance of the StaSMT and EGDO algorithms in the rest of our experiments, considering the comparable performance of GDO and EGDO and the much better time complexity of EGDO. We note that the EGDO algorithm uses an additional AN resources on average due to the use of a hexagon instead of a circle to represent the communication range of a node, and it also has a slightly longer runtime compared to the StaSMT algorithm. However, it offers much better robustness characteristics, as reported in the next subsection.

6.3 Partial and Global Robustness Tests

In this subsection, we evaluate the robustness of network connectivity algorithms by applying perturbations to the positions of nodes. In each experiment, we first establish global network connectivity by applying the EGDO and StaSMT algorithms. Once connectivity is established, we introduce random perturbations to the positions of pre-deployed clusters. This scenario is referred to as partial perturbation, as it does not perturb the positions of the intermediate ANs added for establishing connectivity. We also conduct additional robustness experiments in which all existing ANs after node placement are perturbed. We refer to such experiments as global perturbation experiments. A perturbation constitutes a random directional displacement of an AN from its original position by a fixed distance . The fixed displacement , albeit in a random direction, was found experimentally to introduce the most pronounced impact on network connectivity within the topology of our experiments without completely partitioning the network. In each experiment and after applying the perturbation, we test global connectivity.

We conduct our experiments in different field sizes but report sample results for a field. The set of pre-deployed clusters is distributed randomly following a uniform Poisson point process in the field of experiment. We set parameters and at and , respectively. The number of clusters varies from to with a step size of . Before reporting our results, we define a measure to quantify robustness. Equation (29) gives the definition of this measure, referred to as the robustness factor (RF). The RF measure takes into consideration not only the probability of staying connected after perturbation, but also the number of intermediate ANs used to establish connectivity.


In Equation (29), and represent the probabilities of remaining connected after perturbation is applied in the cases of the EGDO and SMT algorithms, respectively. Accordingly, the calculation of in perturbation tests is described below. In each experiment, the global connectivity count is increased by one if the network remains connected after applying the perturbation. The value of is identified by dividing the global connectivity count by the total number of experiments, which is 500 here. The value of is identified similarly. The numbers and represent the numbers of intermediate ANs used to establish global connectivity in the EGDO and SMT algorithms.
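The connectivity probabilities entering the RF measure can be estimated by Monte Carlo perturbation trials. The sketch below displaces every node by a fixed distance in a random direction and counts the fraction of trials in which the graph stays connected under unit-disk connectivity; all names and parameters are illustrative, and the exact form of Eq. (29) is not reproduced here:

```python
import math
import random

def perturb(position, delta):
    """Displace a node by a fixed distance delta in a uniformly random
    direction, as in the partial/global perturbation tests."""
    theta = random.uniform(0.0, 2.0 * math.pi)
    return (position[0] + delta * math.cos(theta),
            position[1] + delta * math.sin(theta))

def connectivity_probability(positions, radius, delta, trials=500):
    """Monte Carlo estimate of the probability that the network remains
    connected after perturbing every node (disk connectivity model)."""
    def connected(pts):
        n = len(pts)
        seen, stack = {0}, [0]
        while stack:  # flood fill over the disk graph
            u = stack.pop()
            for v in range(n):
                if v not in seen and math.dist(pts[u], pts[v]) <= radius:
                    seen.add(v)
                    stack.append(v)
        return len(seen) == n
    hits = sum(connected([perturb(p, delta) for p in positions])
               for _ in range(trials))
    return hits / trials
```

Dividing the connectivity count by the number of trials mirrors the probability calculation described above for the 500-experiment runs.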

Because EGDO algorithm uses hexagons instead of circles, it generally covers a given distance along a line with a larger number of ANs than SMT. However, placing nodes towards the center of geometry within HCS offsets some of the impact. Generally speaking, the EGDO algorithm is observed to use a larger number of intermediate ANs than StaSMT. In return, it offers a higher level of robustness.

Fig. 9: A drawing of average robustness factor as a function of the number of pre-deployed clusters in perturbation tests.

Experimental results of partial perturbation within confidence intervals are shown by the red line in Fig. 9. The horizontal axis shows the number of pre-deployed disconnected clusters before we apply any node placement algorithm. The vertical axis is the value of RF averaged over 500 different scenarios at each given number of pre-deployed clusters. We notice that the value of RF is in the range , as the two probability measures are within and the EGDO algorithm is expected to use a larger number of ANs than the SMT algorithm. A positive value of RF closer to 1 means that the EGDO algorithm achieves much better robustness characteristics while using a relatively small number of ANs. An inspection of the results of Fig. 9 reveals that the EGDO algorithm shows a significant performance advantage in sparse networks. However, as the number of pre-deployed clusters increases, there is a threshold of cluster density beyond which the EGDO algorithm loses its advantage over the SMT algorithm. More information about the threshold is given in the next subsection.

Besides partial perturbation tests, we also conduct global perturbation experiments. In these tests, we perturb the positions of pre-deployed AN gateway nodes as well as intermediate AN nodes. All ANs within the connected network graph are displaced along a random direction by an amplitude of . The value of RF is calculated in the same way as explained before. The test results within confidence intervals are shown in Fig. 9 by the blue curve. Compared to partial perturbation test results, the RF values in global perturbation tests show a lower starting point and a faster drop rate as the density of clusters grows higher. The results show that the difference in perturbation robustness is very significant in some scenarios. Specifically, it is observed that the value of is one to two orders of magnitude larger than the value of in some instances.

6.4 Inspection of Cluster Density Threshold Value

Fig. 10: The identification of threshold for different field sizes.

As described in the previous subsection, we observe a threshold of AN density beyond which the network can no longer be regarded as sparse. The threshold, to which we refer as , denotes a cluster density value past which the EGDO algorithm offers no advantage compared to the SMT algorithm. In this subsection, we hypothesize that the value of threshold is related to the density of ANs, namely, the field area divided by the total area of AN coverage. We note that both the SMT and EGDO algorithms seek to minimize the AN cost. Yet, the SMT algorithm attempts to reduce the total distance covered by ANs while the EGDO algorithm tries to reduce the AN overlap areas. In essence, minimizing the area of overlap is no longer meaningful when the AN density goes beyond a certain value. As cluster density grows, the average overlap area increases. Thus, the robustness of the SMT algorithm inherently improves and the EGDO algorithm no longer offers any robustness advantage. To numerically validate this hypothesis, we conduct experiments on three different field sizes of , , and . We apply the partial perturbation test to each field and vary the number of pre-deployed clusters. The threshold value for each field size is identified as the point where the plot of RF versus the number of ANs crosses the horizontal axis. Perturbation experiments are repeated 100 times in each scenario and for every number of clusters. Further, we test 100 different scenarios and report the average results. In Fig. 9(a), Fig. 9(b), and Fig. 9(c), the RF curves approximately cross the -axis at values of , , and .

Table I(a) records average intermediate AN cost for each given number of pre-deployed clusters in the test of the field. Table I(b) and Table I(c) show the AN cost in the tests of and field sizes, respectively. As described above, the threshold is defined as


The threshold values , , and are calculated below for , , and field size scenarios where absorbs all constants.

From the calculations, the values of , , and are all around to . While not reported here, we have observed similar patterns with different values of , , and field sizes. The results numerically support our hypothesis that the value of threshold is related to the ratio of the field area to the total area covered by ANs.
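The hypothesized density measure, the field area divided by the total area covered by ANs, can be computed directly once the per-AN coverage cell is fixed. The sketch below uses the hexagonal cell of the HCS model; function names and the circumradius parameterization are illustrative assumptions:

```python
import math

def hexagon_area(r):
    """Area of a regular hexagon inscribed in a circle of radius r,
    i.e., the HCS coverage cell of an AN with physical range r."""
    return 3.0 * math.sqrt(3.0) / 2.0 * r * r

def sparsity_ratio(field_area, num_ans, r):
    """Hypothesized sparsity measure: field area over total AN coverage
    (num_ans hexagonal cells of circumradius r). The threshold is the
    value of this ratio at which the RF curve crosses zero."""
    return field_area / (num_ans * hexagon_area(r))
```

A network would be classified as sparse while this ratio stays above the empirically identified threshold, and as dense once it falls below it.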

No. of Clusters EGDO AN cost SMT AN Cost
10 20.2 18.1
20 26.6 24.0
30 29.2 26.1
32 29.7 27.1
35 30.3 27.3
40 30.9 28.6
50 30.9 28.2
60 30.5 28.0
70 30.2 28.4
80 29.5 27.2
(a) Measures of AN cost in field test.
No. of Clusters EGDO AN cost SMT AN Cost
10 46.5 41.6
20 63.7 56.9
30 74.8 67.4
40 84.1 75.5
50 91.4 81.9
60 96.0 86.2
70 100.4 90.8
80 103.9 93.8
90 105.2 95.2
100 109.0 99.8
120 112.9 102.7
150 116.3 106.4
200 117.9 109.1
(b) Measures of AN cost in field test.
No. of Clusters EGDO AN cost SMT AN Cost
30 127.8 120.7
40 136.5 121.6
50 148.7 133.3
60 161.8 145.3
70 170.6 153.0
80 179.0 159.8
90 187.0 169.3
100 193.2 173.5
120 205.4 184.8
150 218.3 197.0
200 233.8 211.3
240 244.3 222.3
260 247.3 227.2
300 252.1 231.6
(c) Measures of AN cost in a field test.
TABLE I: Average AN costs in different field sizes.

6.5 An AN Cost Comparison of SMT and EGDO in HCS

Since the EGDO algorithm utilizes hexagonal tiles instead of radial disks to model the range of advantaged nodes, one can raise the question as to what happens when applying the SMT algorithm to a network using hexagonal tiling. In order to answer this question, we run an additional experiment.

Fig. 11: An AN cost comparison of SMT and EGDO algorithms in HCS.

Our experimental setting is described as follows. Within an area of , we randomly deploy a number of clusters ranging from to with a step size of . In this experiment, we set and at and , respectively. For each fixed number of pre-deployed clusters, we run different randomly distributed scenarios. Then, we average the number of ANs to report our results. Fig. 11 compares the AN cost of establishing connected graphs through the SMT and EGDO algorithms, with the same level of built-in robustness, as a function of the number of pre-deployed clusters. In this setting, the network is no longer considered sparse once the number of pre-deployed clusters reaches .

It can be observed from the results that the EGDO algorithm performs slightly better when the number of clusters is small. As the number of pre-deployed clusters grows, the EGDO algorithm tends to use an even smaller number of ANs than the SMT algorithm to establish full connectivity. The number of ANs used by the EGDO algorithm is typically to less than that used by the SMT algorithm as long as the network is sparse, i.e., the number of pre-deployed clusters is less than . Interestingly, the AN cost advantage of the EGDO algorithm becomes even more apparent for a dense network with more than pre-deployed clusters. However, the advantages of EGDO over SMT in dense networks are not of high significance because a dense network naturally offers robustness.

While not shown here, it is also important to note that representing the communication range of an AN with a reduced radius circle or a reduced edge square in CCS leads to utilizing an increased number of ANs in establishing connectivity.

7 Conclusion

In this paper, we investigated robust connectivity in two-tiered heterogeneous network graphs through systematic placement of advantaged nodes. Our method was developed utilizing a so-called hexagonal coordinate system (HCS) in which we developed an extended algebra. We formulated and solved (within bounds) an NP-hard problem addressing graph connectivity. Further, we developed a class of near-optimal yet low complexity geometric distance optimization (GDO) algorithms approximating the original problem. Experimental results showed the effectiveness of our proposed GDO algorithms, measured in terms of advantaged node cost and robustness of connectivity in sparse networks, in comparison with variants of exhaustive search and Steiner minimum tree (SMT) algorithms. Our experimental results also offered a couple of additional important insights. First, it was commonly observed that our proposed GDO algorithms lost their advantages in comparison with SMT algorithms past a density threshold value due to the higher density of nodes. Second, below that sparsity threshold, our proposed algorithms used smaller numbers of ANs even when the HCS representation was applied to SMT algorithms in order to improve robustness.


  • [1] Y. Levin and A. Ben-Israel, A heuristic method for large-scale multifacility location problems, Comput. Oper. Res., 2004.
  • [2] M. Younis and K. Akkaya, Strategies and techniques for node placement in wireless sensor networks: A Survey, Ad Hoc Networks 6(2008).
  • [3] E. N. Gilbert, Random plane networks, Journal of the Society for Industrial and Applied Mathematics, 9 (1961).
  • [4] G-H. Lin and G. Xue, Steiner tree problem with minimum number of Steiner points and bounded edge-length. Information Processing Letters 69.2 (1999).
  • [5] R.K. Ahuja, T.L. Magnanti, J.B. Orlin, Network Flows, Prentice-Hall, Englewood Cliffs, NJ (1993)
  • [6] E.L. Lloyd, G. Xue, Relay node placement in wireless sensor networks, IEEE Transactions on Computers 56 (1) (2007).
  • [7] D. Chen, D-Z Du, X-D Hu, G-H Lin, L. Wang, and G. Xue, Approximations for Steiner trees with minimum number of Steiner points, Journal for Global Optimization 18: 17–33, 2000.
  • [8] D. Du, L. Wang and B. Xu, The Euclidean bottleneck Steiner tree and Steiner tree with minimum number of Steiner points, Computing and Combinatorics, Guilin, 2001.
  • [9] J. Tang, B. Hao, and A. Sen, Relay node placement in large scale wireless sensor networks, Computer Communications, 29 (2006).
  • [10] B. Hao, J. Tang and G. Xue, Fault-tolerance relay node placement in wireless sensor networks: Formulation and Approximation, in Proceeding of the Workshop on High Performance Switching and Routing (HPSR), 2004.
  • [11] G. Gupta, M. Younis, Fault-tolerant clustering of wireless sensor networks, Proceedings of IEEE WCNC, 2003.
  • [12] X. Han, X. Cao, E.L. Lloyd and C.-C. Shen, Fault-tolerant relay nodes placement in heterogeneous wireless sensor networks, Proceeding of the 26th IEEE/AMC Joint Conference on Computers and Communications(INFOCOM’07), Anchorage AK, May 2007.
  • [13] J. Bredin, E. Demaine, M. Taghi Hajiaghayi, D. Rus, Deploying sensor networks with guaranteed fault tolerance, MobiHOC, 2005
  • [14] Q. Wang, K. Xu, G. Takahara, H. Hassanein, Locally optimal relay node placement in heterogeneous wireless sensor networks, In Proc. IEEE LCN, 2005.
  • [15] J. Pan, Y.T. Hou, L. Cai, Y. Shi, S.X. Shen, Topology control for wireless sensor networks, Proceedings of ACM MOBICOM, 2003.
  • [16] X. Cheng, D.Z. Du, L. Wang, B. Xu, Relay sensor placement in wireless sensor networks, ACM/Springer Journal of Wireless Networks 14, 3(2008).
  • [17] Y.T. Hou, Y. Shi, H.D. Sherali, On energy provisioning and relay node placement for wireless sensor networks, IEEE Transactions on Wireless Communications 4 (5) (2005).
  • [18] Y. Lin, W. Yu, and Y. Lostanlen, Optimization of wireless access point placement in realistic urban heterogeneous networks, in GLOBECOM, 2012.
  • [19] S. Perumal, J.S. Baras, Aerial platform placement algorithm to satisfy connectivity and capacity constraints in wireless ad-hoc networks, IEEE GLOBECOM, 2008.
  • [20] N. Li and J. C. Hou, Topology control in heterogeneous wireless networks: Problems and Solutions, IEEE INFOCOM, 2004.
  • [21] O. Dousse, F. Baccelli, P. Thiran, Impact of interference on connectivity in ad-hoc networks, IEEE INFOCOM, 2003.
  • [22] K. Xu, Q. Wang, H. Hassanein, G. Takahara, Optimal design of wireless sensor networks: Minimum cost with lifetime constraints, Proc. IEEE WiMob 2005.
  • [23] Q. Wang, G. Takahara, H. Hassanein and K. Xu, On relay node placement and locally optimal traffic allocation in Heterogeneous wireless sensor networks, IEEE LCN, 2005
  • [24] Q. Wang, K. Xu, H. Hassanei, G. Takahara, Minimum cost guaranteed lifetime design for heterogeneous wireless sensor networks, IEEE IPCCC, 2005.
  • [25] I. Stojmenovic, Honeycomb networks: Topological properties and communication algorithms, IEEE Transactions on Parallel and Distributed Systems, 8(10)(1997).
  • [26] H. Yousefi’zadeh, H. Jafarkhani, J. Kazemitabar, Outage probability metrics of connectivity for MIMO fading ad-hoc networks, KICS/IEEE Journal of Communications and Networks, (2009).
  • [27] K. Ding, H. Yousefi’zadeh, A systematic node placement strategy for multi-tier heterogeneous network graphs, In Proc. of IEEE WCNC, 2016.
  • [28] N. Li, and J.C. Hou, Improving connectivity of wireless ad hoc networks, The Second Annual International Conference on Mobile and Ubiquitous Systems: Networking and Services. IEEE, 2005.
  • [29] D.Z. Du and P.M. Pardalos, eds., Handbook of Combinatorial Optimization: Supplement (Vol. 1), Springer Science & Business Media, 2013.