Distributed Beamforming for Agents with Localization Errors

03/27/2020 ∙ by Erfaun Noorani, et al. ∙ University of Maryland; The University of Texas at Austin

We consider a scenario in which a group of agents aims to collectively transmit a message signal to a client through beamforming. In such a scenario, we investigate the effects of uncertainty in the agents' local positions on the quality of the communication link formed between the agents and the client. We measure the quality of the communication link by the magnitude of the beamforming gain, which is a random variable due to the agents' localization errors. To ensure the existence of a reliable communication link despite the agents' localization errors, we formulate a subset selection problem: select a subset of agents that minimizes the variance of the beamforming gain while ensuring that the expected beamforming gain exceeds a desired threshold. We first develop two greedy algorithms, greedy and double-loop-greedy, each of which returns a globally optimal solution to the subset selection problem under certain sufficient conditions on the maximum localization error. Then, using the functional properties of the variance and expected value of the beamforming gain, we develop a third algorithm, difference-of-submodular, which returns a locally optimal solution to a certain relaxation of the subset selection problem regardless of the maximum localization error. Finally, we present numerical simulations to illustrate the performance of the proposed algorithms as a function of the total number of agents, the maximum localization error, and the expected beamforming gain threshold.


I Introduction

Beamforming is a technique for wireless communication in which the phases of antenna array elements are selected to induce constructive or destructive interference in desired directions [VanVeen1988]. The theory of beamforming is well-developed [Trees2002], and its applications are ubiquitous [Jayaprakasam2017]. Distributed beamforming refers to a beamforming scheme in which the antenna elements, e.g., mobile robots equipped with small antennas [Oh2013], are distributed over a wireless network and cooperatively organize into a virtual array. In this paper, we study the effect of uncertainty in the positions of the antenna elements, which we refer to as agents, on the quality of the communication link formed between the agents and a client.

In distributed beamforming, the main objective is to set the phases of the signals transmitted by the agents such that the beamforming gain, i.e., a quantity proportional to the signal-to-noise ratio of the signal received by the client, is maximized [Trees2002]. When the related system parameters are perfectly known, one can attain the maximum beamforming gain by including all the agents in the beamforming array and setting their phases appropriately [mudumbai]. The presence of uncertainties in the system parameters, on the other hand, significantly affects the applicability of the classical theory. For instance, phase estimation errors in linear arrays [Steinberg1976PrinciplesOA] and phase jitters [Ochiai2005] cause possible antenna misalignment and thus result in interference patterns that preclude forming a reliable communication link.

In the presence of localization errors, even though including all the agents in the beamforming array still maximizes the expected beamforming gain, it may deteriorate the quality of the communication link by decreasing the probability that the actual beamforming gain is above a certain threshold. Therefore, in order to ensure the existence of a reliable communication link between the agents and the client, it may sometimes be desirable to include only a subset of the agents in the beamforming array. Assuming that the agents' local positions follow a Gaussian distribution with known mean and covariance, we consider the variance of the beamforming gain as a risk measure and develop algorithms to select a subset of agents that minimizes the variance of the beamforming gain while ensuring that the expected beamforming gain is above a desired threshold.

We first show that, when all the agents' localization errors are below a certain threshold, a simple greedy selection algorithm returns a globally optimal solution to the formulated subset selection problem. We then illustrate with a numerical example that greedy selection does not always minimize the considered risk measure. By slightly modifying the greedy algorithm to improve its empirical performance when the agents' localization errors violate the aforementioned threshold, we obtain the double-loop-greedy algorithm, which has the same theoretical optimality guarantees as the greedy algorithm.

We also develop a third algorithm, the difference-of-submodular algorithm, which utilizes the functional properties of the mean and variance of the beamforming gain and returns a locally optimal solution to a certain relaxation of the subset selection problem regardless of the magnitude of the agents' localization errors. Specifically, we show that both the mean and the variance of the beamforming gain are supermodular set functions. Then, considering the expected beamforming gain as a regularization term instead of a constraint in the subset selection problem, we utilize the submodular-supermodular procedure developed in [Narasimhan2005] to find a locally optimal solution to the resulting optimization problem.

Notation: We denote the set of real numbers and the set of natural numbers by $\mathbb{R}$ and $\mathbb{N}$, respectively. The set $\{1, 2, \ldots, n\}$, where $n \in \mathbb{N}$, is denoted by $[n]$. The size of a set $S$ is denoted by $|S|$. Finally, the expectation and the covariance of an $n$-dimensional random vector $x$ are denoted by $\mathbb{E}[x]$ and $\mathrm{Cov}[x]$, respectively.

II System Model

In this section, we define some contextual details required to formalize the notion of distributed beamforming. In particular, we state our assumptions regarding the communication channel between the agents and the client, and introduce the distributed model through which the agents transmit signals.

II-A Communication Channel

We consider a group of $n$ agents, each of which is equipped with a single ideal isotropic antenna with a constant transmit power. The agents aim to transmit a common message $m(t)$, a continuous-time signal, to a stationary client. The content of the message could represent raw measurements or a waveform encoding digital data. The assumptions regarding the communication channels of the agents are listed as follows.

  1. The transmitted message propagates in free space with no reflection or scattering, i.e., no multipath fading or shadowing, and the incident wave can be assumed to be a plane wave.

  2. The client is located in the far-field region. Formally, for an agent $i \in [n]$, let $d_i$ be the Euclidean distance between the agent and the client. We assume that $d_i \gg \lambda$, where $\lambda$ is the carrier wavelength.

  3. The distances between the agents are large enough that the mutual coupling among their antennas is negligible.

  4. The agents communicate with the client over a narrow-band wireless channel at some carrier frequency $f_c$. Therefore, each agent $i \in [n]$ has a flat-fading channel to the client, which can be represented by a complex scalar gain $h_i e^{j\gamma_i}$, where $h_i$ is the known channel gain and $\gamma_i$ is the known phase response of the channel.

  5. The channel gains satisfy $h_i = h$ for all $i \in [n]$, for some common constant $h > 0$.

  6. Each agent has a local oscillator synchronized to the same carrier frequency $f_c$. Furthermore, all local oscillators are time-synchronized.

The first three assumptions, as well as the last two, are typical in the study of phased arrays [Ochiai2005, mudumbai]. From a practical point of view, it has been shown that, when the agents operate in the low very high frequency (VHF) band, i.e., at carrier frequencies of tens of megahertz (MHz), the communication channel has improved penetration, long-range propagation, and significantly reduced multi-path [Dagefu2015]. As for the fourth assumption, for an agent with a known relative position with respect to the client, one can characterize the frequency response of the channel up to a certain accuracy, above which further improvements have a negligible effect on the output. The synchronization problem in distributed beamforming is a well-studied subject [Jayaprakasam2017]. In this work, we abstract away the synchronization problem through our last assumption and focus solely on the effects of localization error on beamforming.

II-B Distributed Transmission Model

Due to the power constraints of the antennas carried by the agents, a subset $S \subseteq [n]$ of agents collectively forms a virtual array to transmit the message signal to the stationary client. Specifically, each agent $i \in S$ transmits the signal $s_i(t) = a_i e^{j\psi_i} m(t)$, where $a_i$ is the amplitude of the transmission and $\psi_i$ is a phase adjustment performed by the agent. Then, the signal received by the client is

$$r(t) = \sum_{i \in S} h_i e^{j\gamma_i} s_i(t) + w(t) \qquad (1)$$
$$\phantom{r(t)} = \sum_{i \in S} h_i a_i e^{j(\gamma_i + \psi_i)} m(t) + w(t) \qquad (2)$$

where $w(t)$ is additive Gaussian noise with zero mean, and $\gamma_i$ is the relative phase offset resulting from the relative positions of the agents and the client. In particular, under the assumption that the local oscillators are time-synchronized, the phase offset of the signal at the location of the client relative to a signal transmitted by an agent located at $p_i$ is

$$\gamma_i = \left(\frac{2\pi}{\lambda}\left\langle \frac{p_c}{\|p_c\|},\, p_i \right\rangle\right) \bmod 2\pi \qquad (3)$$

where $p_c$ is the client's known position, $\lambda$ is the wavelength, $\langle \cdot, \cdot \rangle$ denotes the inner product of two vectors, $\|\cdot\|$ denotes the Euclidean norm of a vector, and $\bmod$ is the standard modulo operation.

In this paper, we assume that the agents' local positions are not exactly known. In particular, for a given $i \in [n]$, we assume that $p_i \sim \mathcal{N}(\bar{p}_i, \Sigma_i)$, where $\bar{p}_i$ and $\Sigma_i$ are, respectively, the known mean and the known covariance of the Gaussian distribution. This assumption is reasonable in practice because the first- and second-order statistics of pose within an unknown environment can be obtained through, e.g., LIDAR scans. Moreover, we assume that $p_i$ and $p_k$ are independent for all $i, k \in [n]$ such that $i \neq k$.
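To make the position model concrete, the following minimal sketch (ours, not from the paper) samples agent positions from the assumed Gaussian distributions and computes one realization of the wrapped phase offsets, relying on the reconstructed form of (3) above; all numerical values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 2.0                       # carrier wavelength (illustrative units)
p_c = np.array([100.0, 0.0])    # known client position, far from the agents
u = p_c / np.linalg.norm(p_c)   # unit vector pointing toward the client

# Known means and covariances of three agents' positions -- illustrative values.
p_bar = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, -1.0]])
Sigma = [0.01 * np.eye(2), 0.05 * np.eye(2), 0.10 * np.eye(2)]

# Sample true positions p_i ~ N(p_bar_i, Sigma_i); each sample yields one
# realization of the wrapped phase offsets gamma_i in (3).
p = np.array([rng.multivariate_normal(m, C) for m, C in zip(p_bar, Sigma)])
gamma = ((2 * np.pi / lam) * (p @ u)) % (2 * np.pi)
print(gamma)
```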

In the setting described above, we investigate the effects of the agents' localization errors on the quality of the beam formed by the agents and develop algorithms for choosing a subset of agents that collectively forms a beam with an acceptable quality of service despite the agents' localization errors. To that end, a few definitions are required. Specifically, for a given subset $S \subseteq [n]$ and phase adjustment parameters $\psi_i$ for all $i \in S$, the array factor is defined as

$$F(S, \Psi) := \sum_{i \in S} e^{j(\gamma_i + \psi_i)} \qquad (4)$$

where $\Psi := (\psi_i)_{i \in S}$ is the vector of phase adjustments. Under the assumption that the agents' channel gains satisfy $h_i = h$ for all $i \in [n]$, the magnitude of the array factor is proportional to the square root of the signal-to-noise ratio (SNR) received by the client [mudumbai]. Let the total phase $\Phi_i$ be

$$\Phi_i := (\gamma_i + \psi_i) \bmod 2\pi. \qquad (5)$$

Taking the square of the magnitude of the array factor, we obtain the beamforming gain which is proportional to the received SNR and given by

$$G(S) := |F(S, \Psi)|^2 = \sum_{i \in S} \sum_{k \in S} \cos(\Phi_i - \Phi_k). \qquad (6)$$

The beamforming gain (6) is a fundamental quantifier of the quality of a communications link, which we seek to optimize as a function of the agents that transmit the message signal.
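For illustration, here is a short sketch (ours) that evaluates (4)-(6) directly from a vector of total phases; the phase values are arbitrary.

```python
import numpy as np

def array_factor(total_phases):
    """Array factor F = sum_{i in S} exp(j * Phi_i), cf. (4)-(5)."""
    return np.exp(1j * np.asarray(total_phases)).sum()

def beamforming_gain(total_phases):
    """Beamforming gain G = |F|^2 of (6), proportional to the received SNR."""
    return np.abs(array_factor(total_phases)) ** 2

aligned = np.zeros(4)                        # all total phases equal
scattered = np.array([0.0, 1.8, 3.5, 5.1])   # misaligned total phases
print(beamforming_gain(aligned))             # 16.0 = |S|^2: fully constructive
print(beamforming_gain(scattered))           # strictly smaller: interference
```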

III Problem Statement

A canonical quantifier of performance in communication systems is the signal-to-noise ratio (SNR) of the signal received by the client, which measures the quality of the communication link formed by a virtual antenna array. As previously mentioned, the SNR is proportional to the beamforming gain (6). Thus, when the relative phase offsets $\gamma_i$, resulting from the relative positions of the agents and the client, are known, one may seek a reliable communication link by maximizing the beamforming gain, i.e., selecting a pair $(S, \Psi)$ such that

$$(S^\star, \Psi^\star) \in \operatorname*{arg\,max}_{S \subseteq [n],\, \Psi} \; G(S). \qquad (7)$$

The beamforming gain is maximized by choosing $S = [n]$ and setting $\psi_i = -\gamma_i$ for all $i \in [n]$. To see this, recall that the total phase is $\Phi_i = (\gamma_i + \psi_i) \bmod 2\pi$, and note that

$$G(S) = \sum_{i \in S} \sum_{k \in S} \cos(\Phi_i - \Phi_k) \le |S|^2 \le n^2. \qquad (8)$$

The upper bound in (8) is attained if and only if $S = [n]$ and $\Phi_i = \Phi_k$ for all $i, k \in S$, i.e., the total phases of all the agents are aligned. In other words, for a fully known system, the set of all agents achieves the highest SNR since the phases of all the agents can be precisely set such that the message signals from all agents interfere constructively at the location of the client.

In this paper, we focus on a scenario where the exact values of the relative phase offsets $\gamma_i$ are not known to the system designer due to the agents' localization errors. This situation arises, e.g., in complex urban terrains or GPS-denied environments. Consequently, in the considered model, the beamforming gain is a random variable. We are interested in finding a subset of agents that, with high probability, forms a reliable communication link with the client through beamforming. The formal problem statement is as follows.

Problem 1: (Subset selection) For a constant $\beta > 0$ and the fixed phase adjustment parameters $\Psi$ where, for all $i \in [n]$, $\psi_i = -\mathbb{E}[\gamma_i]$, find a subset $S \subseteq [n]$ such that

$$\underset{S \subseteq [n]}{\text{minimize}} \;\; \mathrm{Var}[G(S)] \qquad (9a)$$
$$\text{subject to} \;\; \mathbb{E}[G(S)] \ge \beta. \qquad (9b)$$

Note that the vector $\Psi$ of phase adjustment parameters is not a design variable in the subset selection problem. The rationale behind the choice $\psi_i = -\mathbb{E}[\gamma_i]$ is that, for any given subset $S$, it can be shown that the expected beamforming gain is maximized by this choice. Since $\Psi$ is not a variable in the considered problem, for simplicity, we write $G(S)$ instead of $G(S, \Psi)$ in what follows.

Remark: One can argue that, when the beamforming gain is a random variable, a reasonable objective for the designer may be to maximize the expected beamforming gain, i.e., to select a pair $(S, \Psi)$ such that

$$(S^\star, \Psi^\star) \in \operatorname*{arg\,max}_{S \subseteq [n],\, \Psi} \; \mathbb{E}[G(S)]. \qquad (10)$$

It can be shown that $\mathbb{E}[G(S)]$ is maximized by choosing $S = [n]$ and $\psi_i = -\mathbb{E}[\gamma_i]$ for all $i \in [n]$. In other words, the expected beamforming gain is maximized by including all the agents in the beamforming array and aligning their total phases in expectation.

Although including all the agents in the beamforming array, i.e., choosing $S = [n]$, maximizes the expected beamforming gain, it may decrease the probability that the beamforming gain exceeds a certain threshold, which in turn reduces the reliability of the link. In the subset selection problem, considering the variance of the beamforming gain as a risk measure, we aim to find a subset of agents that achieves a desired level of expected gain while increasing the reliability of the link. Such a formulation is widely used in risk-sensitive optimization models that arise in econometrics [levy1979approximating, markowitz2000mean].

IV Statistical Properties of Beamforming Gains

To solve the subset selection problem, we first derive the explicit forms of the expected value and the variance of the beamforming gain $G(S)$. We do so by deriving the probability distribution of the total phase $\Phi_i$, which is defined in (5).

Recall that in the subset selection problem, we fix the phase adjustment parameters by setting $\psi_i = -\mathbb{E}[\gamma_i]$. Therefore, we have $\Phi_i = (\gamma_i - \mathbb{E}[\gamma_i]) \bmod 2\pi$, which implies that the distribution of $\Phi_i$ is just a shifted version of the distribution of $\gamma_i$. Recall also that the phase offset $\gamma_i$ is defined in (3) and that the $i$th agent's local position $p_i$ has the Gaussian distribution $\mathcal{N}(\bar{p}_i, \Sigma_i)$. Then, using simple algebraic manipulations, it can be shown that the unwrapped total phase is distributed as $\mathcal{N}(0, \sigma_i^2)$, where

$$\sigma_i^2 = \frac{4\pi^2}{\lambda^2}\, u^\top \Sigma_i\, u, \qquad u := \frac{p_c}{\|p_c\|}. \qquad (11)$$

We refer to $\sigma_i^2$ as the effective error in the localization of the $i$th agent. We now need to map the probability density function (pdf) of the total phase to the interval $[-\pi, \pi)$. To perform the mapping, we use the so-called wrapped distributions [wrapped_book, wrapped_1].

Definition 1

[wrapped_book] For a given random variable $x$ on $\mathbb{R}$ with the pdf $f_x$, let $x_w := x \bmod 2\pi$ be the induced wrapped variable of period $2\pi$. The wrapped pdf of the variable $x_w$ is given by $f_{x_w}(\phi) = \sum_{k=-\infty}^{\infty} f_x(\phi + 2\pi k)$.

Applying the above definition to the total phase and using the fact that its unwrapped distribution is $\mathcal{N}(0, \sigma_i^2)$, we obtain the wrapped normal distribution of $\Phi_i$ as

$$f_{\Phi_i}(\phi) = \frac{1}{\sqrt{2\pi\sigma_i^2}} \sum_{k=-\infty}^{\infty} \exp\left(-\frac{(\phi + 2\pi k)^2}{2\sigma_i^2}\right) \qquad (12)$$

for all $\phi \in [-\pi, \pi)$.
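As a quick numerical sanity check (a sketch, not part of the paper), the wrapped normal density in (12) can be evaluated by truncating the infinite sum; a modest truncation already integrates to one over a period.

```python
import numpy as np

def wrapped_normal_pdf(phi, sigma2, K=50):
    """Wrapped normal density on [-pi, pi): the sum in (12) truncated at |k| <= K."""
    ks = np.arange(-K, K + 1)
    shifts = phi + 2 * np.pi * ks
    return np.exp(-shifts**2 / (2 * sigma2)).sum() / np.sqrt(2 * np.pi * sigma2)

grid = np.linspace(-np.pi, np.pi, 2001)
vals = np.array([wrapped_normal_pdf(p, sigma2=1.5) for p in grid])
print(vals.sum() * (grid[1] - grid[0]))  # Riemann sum over one period: ~1.0
```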

We now state a remarkable property of wrapped distributions, which allows us to derive the explicit forms of the expected value and the variance of the beamforming gain.

Proposition 1

[wrapped_book] Let $x$ be a random variable on $\mathbb{R}$, and let $x_w$ be the induced wrapped variable of period $2\pi$. For any $m \in \mathbb{N}$, we have

$$\mathbb{E}\left[e^{jmx_w}\right] = \mathbb{E}\left[e^{jmx}\right]. \qquad (13)$$

Proposition 1 states that the wrapped distribution has trigonometric moments identical to those of the unwrapped distribution, which permits us to derive explicit forms of $\mathbb{E}[G(S)]$ and $\mathrm{Var}[G(S)]$, as stated in the following lemma.

Lemma 1

Let $\alpha_i := e^{-\sigma_i^2/2}$. The expected value and the variance of the beamforming gain are, respectively,

$$\mathbb{E}[G(S)] = |S| + \sum_{\substack{i,k \in S \\ i \ne k}} \alpha_i \alpha_k, \qquad (14)$$
$$\mathrm{Var}[G(S)] = \sum_{i,k,l,m \in S} \mathbb{E}\!\left[e^{j(\Phi_i - \Phi_k + \Phi_l - \Phi_m)}\right] - \big(\mathbb{E}[G(S)]\big)^2, \qquad (15)$$

where each expectation in (15) factors over the distinct indices among $i, k, l, m$ and is evaluated through the trigonometric moments $\mathbb{E}[e^{jq\Phi_i}] = e^{-q^2\sigma_i^2/2}$ of the wrapped normal distribution.

The proof of the above result, as well as the proofs of all the results presented in this paper, can be found in the Appendix. These properties of the mean and variance of the beamforming gain, built upon properties of wrapped distributions, motivate the algorithm development in the next section. In particular, through their explicit forms, we derive conditions on the localization error for which monotonicity and supermodularity properties may be established.
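As a sanity check on Lemma 1 (a sketch under the reconstructed mean formula (14) above, with illustrative error values), the closed-form mean can be compared against a Monte Carlo estimate:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma2 = np.array([0.1, 0.4, 0.9])   # effective errors of a 3-agent subset

# Closed-form expected gain (14): |S| + sum_{i != k} alpha_i * alpha_k.
alpha = np.exp(-sigma2 / 2)
mean_closed = len(sigma2) + alpha.sum() ** 2 - (alpha**2).sum()

# Monte Carlo: the gain depends on Phi_i only through exp(j * Phi_i), which is
# invariant under wrapping (Proposition 1), so the unwrapped N(0, sigma_i^2)
# samples can be used directly.
phi = rng.normal(0.0, np.sqrt(sigma2), size=(200_000, 3))
gain = np.abs(np.exp(1j * phi).sum(axis=1)) ** 2
print(mean_closed, gain.mean())   # the two should nearly agree
print(gain.var())                 # empirical Var[G(S)]
```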

V Agent Selection under Localization Errors

In this section, we propose three algorithms to solve the subset selection problem and discuss their optimality. The key point is that when the localization errors are below a certain threshold, we are able to establish that a greedy subset selection scheme attains optimal performance directly. Then, for the case in which the localization errors exceed the aforementioned threshold, we provide a counterexample demonstrating that the greedy algorithm may yield a suboptimal solution. Inspired by the numerical examples generated by the greedy approach, we propose a modified "double-loop" greedy scheme with the same optimality guarantees as the greedy scheme and better empirical performance at the cost of higher computation. For the case in which these sufficient optimality conditions break down, we derive monotonicity and supermodularity properties of the expressions in (9a)-(9b). By taking advantage of these properties, we propose an algorithm based on the difference of submodular functions that exploits this structure, and we discuss its local optimality.

For the rest of the paper, we assume that the optimization problem in (9a)-(9b) has a feasible solution. For a given problem instance, the validity of such an assumption can be easily verified by checking whether $\mathbb{E}[G([n])] \ge \beta$, due to the following result.

Proposition 2

For any $A \subseteq B \subseteq [n]$, we have

$$\mathbb{E}[G(A)] \le \mathbb{E}[G(B)]. \qquad (16)$$

The above result follows immediately from the monotonicity of $\mathbb{E}[G(\cdot)]$ with respect to set inclusion. Specifically, since $\mathbb{E}[G(S)]$, given in (14), is a sum of nonnegative terms, adding an element to the subset $S$ can only increase the sum.

V-A Greedy Algorithm for Small Localization Errors

The first algorithm that we propose to solve the subset selection problem (9a)-(9b), shown in Algorithm 1, is based on a simple greedy approach. The Greedy algorithm first sorts the agents' effective errors in ascending order. Initializing the output set to the empty set, it then iteratively adds the agent with the next lowest effective error to the output set until the constraint $\mathbb{E}[G(S)] \ge \beta$ is satisfied.

1: Input: $\sigma_i^2$ for all $i \in [n]$, $\beta$.
2: Sort the agents such that $\sigma_{i_1}^2 \le \sigma_{i_2}^2 \le \cdots \le \sigma_{i_n}^2$.
3: $S \leftarrow \emptyset$, $k \leftarrow 1$
4: while $\mathbb{E}[G(S)] < \beta$ do
5:      $S \leftarrow S \cup \{i_k\}$, $k \leftarrow k + 1$
6: return $S$.
Algorithm 1 Greedy

The time complexity of the Greedy algorithm is the same (up to a constant factor) as that of the sorting algorithm used as a subprocedure. Sorting an array of length $n$ can be performed in $O(n \log n)$ time by a variety of algorithms, including merge sort [mergesort] and quicksort [quicksort].
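For concreteness, here is a minimal Python sketch of Algorithm 1 (ours); the helper expected_gain implements the closed-form mean (14) as reconstructed in Lemma 1, and the input values are illustrative.

```python
import numpy as np

def expected_gain(sigma2):
    """Closed-form E[G(S)] of (14) for the effective errors sigma2 of S."""
    alpha = np.exp(-np.asarray(sigma2) / 2)
    return len(alpha) + alpha.sum() ** 2 - (alpha**2).sum()

def greedy(sigma2, beta):
    """Add agents in ascending order of effective error until E[G(S)] >= beta."""
    sigma2 = np.asarray(sigma2)
    S = []
    for i in np.argsort(sigma2):
        S.append(int(i))
        if expected_gain(sigma2[S]) >= beta:
            return S
    raise ValueError("infeasible instance: E[G([n])] < beta")

print(greedy([0.2, 0.05, 0.6, 0.3], beta=7.5))  # -> [1, 0, 3]
```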

Optimality of the Greedy algorithm: We now present sufficient conditions on the set of effective localization errors under which the Greedy algorithm returns an optimal solution to the problem in (9a)-(9b).

Recall from (12) that the effective localization error $\sigma_i^2$ is the variance of the total phase $\Phi_i$. Hence, it can be seen as a representative of the uncertainty in the total phase. In particular, in the two extremes, when $\sigma_i^2 = 0$, $\Phi_i = 0$ with probability 1, and as $\sigma_i^2 \to \infty$, $\Phi_i$ becomes uniformly distributed over the interval $[-\pi, \pi)$. We measure the total effective error of a subset $S$ by the function $\Delta: 2^{[n]} \to \mathbb{R}$ such that

$$\Delta(S) := \sum_{i \in S} \sigma_i^2. \qquad (17)$$

One can intuitively expect that a solution to the subset selection problem can be obtained by choosing a subset of agents that satisfies the constraint (9b) and has the minimum total effective error (17), i.e.,

$$\underset{S \subseteq [n]}{\text{minimize}} \;\; \Delta(S) \qquad (18a)$$
$$\text{subject to} \;\; \mathbb{E}[G(S)] \ge \beta. \qquad (18b)$$

The following result, together with Proposition 2, implies that Algorithm 1 returns a globally optimal solution to the problem in (18a)-(18b).

Proposition 3

For any $A, B \subseteq [n]$ such that $|A| = |B|$ and the sorted effective errors of the agents in $A$ are elementwise no larger than those of the agents in $B$, we have $\mathbb{E}[G(A)] \ge \mathbb{E}[G(B)]$.

Unfortunately, the optimization problems in (9a)-(9b) and (18a)-(18b) are not equivalent in general. However, there are certain sufficient conditions under which these problems become equivalent, which are formalized in the following theorem.

Theorem 1

For a given set of effective localization errors, let $i_1, i_2, \ldots, i_n$ be an ordering of the agents such that $\sigma_{i_1}^2 \le \sigma_{i_2}^2 \le \cdots \le \sigma_{i_n}^2$. A solution to the problem in (18a)-(18b) is also a solution to the problem in (9a)-(9b) if either one of the following conditions holds:

  • $\mathbb{E}[G(\{i_1, i_2\})] \ge \beta$, where $i_1$ and $i_2$ are the two agents with the lowest effective errors,

  • $\sigma_i^2 \le \bar{\sigma}^2$ for all $i \in [n]$, where $\bar{\sigma}^2$ is a critical error level made precise in the Appendix.

The proof of the above result is provided in the Appendix. Theorem 1 establishes that if the two agents with the lowest effective localization errors already meet the desired expected beamforming gain threshold $\beta$, or if all the agents have "small" effective localization errors, i.e., $\sigma_i^2 \le \bar{\sigma}^2$ for all $i \in [n]$, then the Greedy algorithm returns an optimal solution to the original subset selection problem (9a)-(9b).

We emphasize that the Greedy algorithm returns an optimal solution to the subset selection problem if a given problem instance satisfies either one of the sufficient conditions in Theorem 1. The following numerical example illustrates that, for problem instances that violate the sufficient conditions, the Greedy algorithm may result in a suboptimal subset selection.

Example 1

We construct a problem instance for which the optimization problems in (9a)-(9b) and (18a)-(18b) are not equivalent. Let the total number of agents be $n = 4$, let the expected gain threshold be $\beta$, and let the ordered set of effective errors be $\{\sigma_{i_1}^2 \le \sigma_{i_2}^2 \le \sigma_{i_3}^2 \le \sigma_{i_4}^2\}$. It is clear that the problem instance does not satisfy the second sufficient condition in Theorem 1. Moreover, by direct calculation, one can observe that $\mathbb{E}[G(\{i_1, i_2\})] < \beta$. Hence, the instance does not satisfy the first sufficient condition in Theorem 1 either. Using Proposition 3, one can conclude that, for this problem instance, at least three agents should be included in the beamforming array. Let $S_1$, $S_2$, $S_3$, and $S_4$ be all possible subsets of $[4]$ containing three elements. In Fig. 1, we provide the expected value and the variance of the beamforming gain, as well as the total effective error, for each of these subsets. All subsets satisfy the constraint $\mathbb{E}[G(S_i)] \ge \beta$. Observe that the optimal solution for this problem instance is the subset with the maximum total effective error instead of the one with the minimum total effective error. Therefore, for this instance, the Greedy algorithm does not yield the optimal solution to the subset selection problem.

Fig. 1: Expected value and variance of the beamforming gain as a function of the subsets satisfying $\mathbb{E}[G(S_i)] \ge \beta$. The maximum expected value is attained by the subset with the minimum total effective error. The variance of the beamforming gain is minimized by the subset with the maximum total effective error.

The above numerical example illustrates a counterintuitive fact: for some problem instances, the optimal subset consists of the agents with the highest effective localization errors. This phenomenon arises due to the physical effects of constructive/destructive interference of the sinusoids in (6). Next, we develop a modified algorithm that accounts for the fact that, for some instances, the agents with the highest effective localization errors provide an optimal solution to the subset selection problem.

V-B Double-Loop-Greedy for Improved Empirical Performance

Inspired by the numerical example given in the previous section, as a second approach, we propose the Double-Loop-Greedy (DLG) algorithm, shown in Algorithm 2, to solve the subset selection problem. Similar to the Greedy algorithm, the DLG algorithm first sorts the agents' effective errors in ascending order. It then initializes two sets, $S_L$ and $S_H$, to the empty set. Starting from the agent with the lowest effective error, at each iteration, the agent with the next lowest effective error is added to the set $S_L$ until the constraint $\mathbb{E}[G(S_L)] \ge \beta$ is satisfied. Note that $S_L$ is the same as the output of the Greedy algorithm. Similarly, starting from the agent with the highest effective error, at each iteration, the agent with the next highest effective error is added to the set $S_H$ until the constraint $\mathbb{E}[G(S_H)] \ge \beta$ is satisfied. Finally, the DLG algorithm compares the variance of the beamforming gain for the sets $S_L$ and $S_H$, and outputs the one with the smaller value.

1: Input: $\sigma_i^2$ for all $i \in [n]$, $\beta$.
2: Sort the agents such that $\sigma_{i_1}^2 \le \sigma_{i_2}^2 \le \cdots \le \sigma_{i_n}^2$.
3: $S_L \leftarrow \emptyset$, $S_H \leftarrow \emptyset$, $k \leftarrow 1$, $l \leftarrow n$
4: while $\mathbb{E}[G(S_L)] < \beta$ do
5:      $S_L \leftarrow S_L \cup \{i_k\}$, $k \leftarrow k + 1$
6: while $\mathbb{E}[G(S_H)] < \beta$ do
7:      $S_H \leftarrow S_H \cup \{i_l\}$, $l \leftarrow l - 1$
8: if $\mathrm{Var}[G(S_L)] \le \mathrm{Var}[G(S_H)]$ then $S \leftarrow S_L$
9: else $S \leftarrow S_H$
10: return $S$.
Algorithm 2 Double-Loop-Greedy (DLG)

We note that the time complexity of the DLG algorithm is the same (up to a constant factor) as that of the Greedy algorithm.
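A corresponding sketch of Algorithm 2 (again ours, with illustrative inputs); here a Monte Carlo estimate stands in for the closed-form variance (15):

```python
import numpy as np

def expected_gain(sigma2):
    alpha = np.exp(-np.asarray(sigma2) / 2)
    return len(alpha) + alpha.sum() ** 2 - (alpha**2).sum()

def gain_variance(sigma2, n_samples=100_000, seed=0):
    """Monte Carlo estimate of Var[G(S)]; a stand-in for the closed form (15)."""
    rng = np.random.default_rng(seed)
    phi = rng.normal(0.0, np.sqrt(sigma2), size=(n_samples, len(sigma2)))
    return (np.abs(np.exp(1j * phi).sum(axis=1)) ** 2).var()

def double_loop_greedy(sigma2, beta):
    sigma2 = np.asarray(sigma2)
    def fill(order):
        S = []
        for i in order:
            S.append(int(i))
            if expected_gain(sigma2[S]) >= beta:
                return S
        raise ValueError("infeasible instance")
    S_low = fill(np.argsort(sigma2))         # ascending pass (same as Greedy)
    S_high = fill(np.argsort(sigma2)[::-1])  # descending pass
    return min(S_low, S_high, key=lambda S: gain_variance(sigma2[S]))

print(double_loop_greedy([0.2, 0.05, 0.6, 0.3], beta=7.5))
```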

Optimality of the DLG algorithm: For a given problem instance, the subset $S_{\mathrm{DLG}}$ returned by the DLG algorithm is guaranteed to satisfy $\mathrm{Var}[G(S_{\mathrm{DLG}})] \le \mathrm{Var}[G(S_{\mathrm{G}})]$, where $S_{\mathrm{G}}$ is the subset returned by the Greedy algorithm. Hence, the DLG algorithm also provides a globally optimal solution to the problem in (18a)-(18b) under the sufficient conditions stated in Theorem 1. Moreover, as can be seen from the numerical example given in the previous section, the DLG algorithm may also return a globally optimal solution on problem instances on which the Greedy algorithm performs poorly.

V-C Difference-of-Submodular (DS) Algorithm

Both the Greedy (Algorithm 1) and the DLG (Algorithm 2) algorithms are guaranteed to return globally optimal solutions to the subset selection problem only under the sufficient conditions stated in Theorem 1. In this section, we propose a third approach to solve the subset selection problem, which utilizes the functional properties of the expected value and the variance of the beamforming gain and always returns a locally optimal solution to a certain relaxation of the subset selection problem.

Before presenting the Difference-of-Submodular (DS) algorithm, we first provide a definition of submodularity and show that both the expected value and the variance of the beamforming gain are supermodular set functions.

Definition 2

A set function $f: 2^{[n]} \to \mathbb{R}$ is submodular if for every $A \subseteq B \subseteq [n]$ and every $x \in [n] \setminus B$, we have $f(A \cup \{x\}) - f(A) \ge f(B \cup \{x\}) - f(B)$.

A set function $f$ is said to be supermodular if $-f$ is submodular. Next, we formalize that the objective and constraint functions in (9a)-(9b) satisfy the definition of supermodularity.

Theorem 2

Both the expected value, $\mathbb{E}[G(S)]$, and the variance, $\mathrm{Var}[G(S)]$, of the beamforming gain are supermodular set functions.

The proof of Theorem 2 can be found in the Appendix. Let $f$ and $g$ be submodular set functions. In [Narasimhan2005], the authors present an algorithm, called the Submodular-Supermodular Procedure (SSP), that returns a locally optimal solution to the following problem

$$\min_{S \subseteq [n]} \; f(S) - g(S). \qquad (19)$$

To be precise in our statements, we also provide a definition of local optimality for set functions.

Definition 3

[Narasimhan2005] For a set function $f$, a sequence of sets $\{S_t\}$ is said to converge to a local minimum if there exists a constant $T \in \mathbb{N}$ such that $S_t = S_T$ for all $t \ge T$, and, for any $x \in [n]$, $f(S_T) \le f(S_T \cup \{x\})$ and $f(S_T) \le f(S_T \setminus \{x\})$.

The DS algorithm, shown in Algorithm 3, utilizes the SSP as a subprocedure to return a locally optimal solution to a certain relaxation of the subset selection problem. In particular, it takes two parameters $\lambda_0 > 0$ and $c > 1$ as inputs, as well as the agents' effective localization errors and the expected gain threshold $\beta$. At the $k$th iteration, where $k \in \mathbb{N}$, using the SSP as a subprocedure, the DS algorithm finds a locally optimal solution to the following problem

$$\min_{S \subseteq [n]} \; \mathrm{Var}[G(S)] - \lambda_k\, \mathbb{E}[G(S)] \qquad (20)$$

where $\lambda_k$ is iteratively defined as $\lambda_{k+1} = c\,\lambda_k$. Since both $\mathrm{Var}[G(\cdot)]$ and $\mathbb{E}[G(\cdot)]$ are supermodular by Theorem 2, the objective in (20) can be written as the difference $f(S) - g(S)$ of the submodular functions $f := -\lambda_k\,\mathbb{E}[G(\cdot)]$ and $g := -\mathrm{Var}[G(\cdot)]$, matching the form in (19). The algorithm terminates when the solution $S$ returned by the SSP satisfies $\mathbb{E}[G(S)] \ge \beta$.

1: Input: $\sigma_i^2$ for all $i \in [n]$, $\beta$, $\lambda_0$, $c$.
2: $k \leftarrow 0$, $S \leftarrow \emptyset$.
3: while $\mathbb{E}[G(S)] < \beta$ do
4:      $f(\cdot) \leftarrow \mathrm{Var}[G(\cdot)]$, $g(\cdot) \leftarrow \lambda_k\, \mathbb{E}[G(\cdot)]$
5:      $S \leftarrow \mathrm{SSP}(f, g)$
6:      $\lambda_{k+1} \leftarrow c\,\lambda_k$, $k \leftarrow k + 1$.
7: return $S$.
Algorithm 3 Difference-of-Submodular (DS)

Convergence of the DS algorithm: For the DS algorithm to terminate, the SSP must output a subset $S$ such that $\mathbb{E}[G(S)] \ge \beta$. At the $k$th iteration, the SSP finds a locally optimal solution to the problem in (20) by computing successive modular approximations of the function $g$ and finding a globally optimal solution to each of the resulting approximation problems. Since $c > 1$ and $\lambda_0 > 0$, the parameter $\lambda_k$ increases at each iteration. Hence, in terms of the objective value, the globally optimal solutions of the approximate problems become closer to the globally optimal solution of $\max_{S \subseteq [n]} \mathbb{E}[G(S)]$, which is $S = [n]$. Since we assumed at the beginning that there exists a feasible solution to the subset selection problem, the DS algorithm is guaranteed to terminate after finitely many iterations.
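The outer loop of Algorithm 3 can be sketched as follows (ours). Implementing the full SSP of [Narasimhan2005] is beyond the scope of this sketch, so for tiny $n$ an exhaustive search over subsets stands in for the SSP subprocedure; expected_gain and gain_variance are the helper names used in the earlier sketches.

```python
import numpy as np
from itertools import combinations

def expected_gain(sigma2):
    alpha = np.exp(-np.asarray(sigma2) / 2)
    return len(alpha) + alpha.sum() ** 2 - (alpha**2).sum()

def gain_variance(sigma2, n_samples=50_000, seed=0):
    rng = np.random.default_rng(seed)
    phi = rng.normal(0.0, np.sqrt(sigma2), size=(n_samples, len(sigma2)))
    return (np.abs(np.exp(1j * phi).sum(axis=1)) ** 2).var()

def ds_outer_loop(sigma2, beta, lam0=0.1, c=2.0, max_iter=30):
    """Outer loop of Algorithm 3; brute force stands in for the SSP call."""
    sigma2 = np.asarray(sigma2)
    n, lam = len(sigma2), lam0
    subsets = [list(S) for r in range(1, n + 1)
               for S in combinations(range(n), r)]
    for _ in range(max_iter):
        # Stand-in for SSP: minimize Var[G(S)] - lam * E[G(S)], cf. (20).
        S = min(subsets, key=lambda S: gain_variance(sigma2[S])
                                       - lam * expected_gain(sigma2[S]))
        if expected_gain(sigma2[S]) >= beta:   # constraint (9b) reached
            return S
        lam *= c                               # increase the weight and retry
    raise RuntimeError("constraint not reached within max_iter iterations")

print(ds_outer_loop([0.2, 0.05, 0.6, 0.3], beta=7.5))
```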

Optimality of the DS algorithm: As mentioned earlier, at each iteration, the DS algorithm computes a locally optimal solution to the problem in (20). Hence, the subset returned by the DS algorithm is a locally optimal solution to the following relaxation of the subset selection problem

$$\min_{S \subseteq [n]} \; \mathrm{Var}[G(S)] - \lambda_K\, \mathbb{E}[G(S)] \qquad (21)$$

where $K$ is the number of iterations until the convergence of the DS algorithm. We also note that the above problem formulation is sometimes referred to as a "regularized version" of the original constrained optimization problem [boyd2004convex]. Such regularization approaches are also quite common in portfolio management [neu2017unified] and reinforcement learning [steinbach2001, koppel2017policy, koppel2018nonparametric], among many other areas.

VI Numerical Simulations

In this section, we present numerical simulation results that demonstrate the performance of the proposed algorithms on randomly generated subset selection problem instances. The possible heterogeneity of agents and their capabilities in accurate localization requires algorithms that offer acceptable performance for a mixture of agents with a range of localization accuracies. In addition, the scalability of the algorithms and their performance for small to mid-size swarms are of special interest in robotic applications. To address these points, we provide simulation results that show the scalability and average performance of our proposed algorithms for a variety of collections of randomly generated agents.

For a given problem instance, we measure the performance of an algorithm by the suboptimality ratio of its output. Specifically, let $S^\star$ be an optimal solution to the given subset selection problem instance (9a)-(9b), and let $S_{\mathrm{alg}}$ be the output of a given algorithm. We define the suboptimality ratio (SR) of the algorithm on the given problem instance as

$$\mathrm{SR} = \frac{\mathrm{Var}[G(S_{\mathrm{alg}})]}{\mathrm{Var}[G(S^\star)]}.$$

Note that all the proposed algorithms, i.e., Greedy, Double-Loop-Greedy (DLG), and Difference-of-Submodular (DS), have $\mathrm{SR} \ge 1$ since they are guaranteed to output a subset $S$ such that $\mathbb{E}[G(S)] \ge \beta$.
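For small instances, the SR can be computed directly by brute-force search for $S^\star$; the sketch below (ours, with the helpers from the earlier sketches) is illustrative rather than the evaluation code used for the figures.

```python
import numpy as np
from itertools import combinations

def expected_gain(sigma2):
    alpha = np.exp(-np.asarray(sigma2) / 2)
    return len(alpha) + alpha.sum() ** 2 - (alpha**2).sum()

def gain_variance(sigma2, n_samples=100_000, seed=0):
    rng = np.random.default_rng(seed)
    phi = rng.normal(0.0, np.sqrt(sigma2), size=(n_samples, len(sigma2)))
    return (np.abs(np.exp(1j * phi).sum(axis=1)) ** 2).var()

def suboptimality_ratio(sigma2, beta, S_alg):
    """SR = Var[G(S_alg)] / Var[G(S*)], with S* found by exhaustive search
    over all feasible subsets (practical only for small n)."""
    sigma2 = np.asarray(sigma2)
    n = len(sigma2)
    # Subsets of size >= 2: singletons have E[G] = 1 and zero variance.
    feasible = [list(S) for r in range(2, n + 1)
                for S in combinations(range(n), r)
                if expected_gain(sigma2[list(S)]) >= beta]
    S_star = min(feasible, key=lambda S: gain_variance(sigma2[S]))
    return gain_variance(sigma2[S_alg]) / gain_variance(sigma2[S_star])

print(suboptimality_ratio([0.2, 0.05, 0.6, 0.3], beta=7.5, S_alg=[1, 0, 3]))
```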

Fig. 2: Average suboptimality ratios of the proposed methods over 1000 randomly generated problem instances. (Top) The suboptimality ratios of the Greedy, Double-Loop-Greedy (DLG), and Difference-of-Submodular (DS) algorithms for varying total number of agents $n$ and maximum localization errors $\sigma_{\max}^2$. (Bottom) A closer look at the average suboptimality ratios of the greedy algorithms.

We compare the performance of the Greedy, DLG, and DS algorithms with respect to three parameters: the total number of agents $n$, the maximum localization error $\sigma_{\max}^2$, and the expected beamforming gain threshold $\beta$, expressed as a fraction of $E_{\max}$. Recall that, for a given problem instance, $E_{\max} := \mathbb{E}[G([n])]$ is defined as the maximum expected beamforming gain that can be achieved by the agents.

In the first experiment, we investigate the relationship between the algorithms' average suboptimality ratios, the total number of agents $n$, and the bound $\sigma_{\max}^2$ on the agents' localization errors. For a given $n$ and $\sigma_{\max}^2$, a problem instance consists of localization errors $\sigma_1^2, \ldots, \sigma_n^2$, each generated uniformly at random from the interval $[0, \sigma_{\max}^2]$, together with an expected beamforming gain threshold $\beta$. The threshold $\beta$ is set low enough to allow the algorithms to output subsets of different sizes if it is optimal to do so. For the DS algorithm, we initialize the Lagrangian parameter with $\lambda_0$ and set the multiplying factor to $c$. For each $n$ and each $\sigma_{\max}^2$, we generate 1000 problem instances and illustrate the average suboptimality ratios of all algorithms in Fig. 2.

As can be seen from Fig. 2, on average, both the Greedy and DLG algorithms perform better than the DS algorithm for all $(n, \sigma_{\max}^2)$ pairs. Moreover, the DLG algorithm always performs at least as well as the Greedy algorithm, as it is theoretically guaranteed to do. Since the average suboptimality ratios of all three algorithms are less than 2 over the randomly generated instances, we believe that all of them are good candidates for solving the subset selection problem in practice. However, considering the relative simplicity of the Greedy and DLG algorithms with respect to the DS algorithm, a designer may find the Greedy and DLG algorithms more appealing.

In the second experiment, we investigate the relationship between the algorithms' worst-case suboptimality ratios, the total number of agents $n$, and the threshold $\beta$ on the expected beamforming gain. For a given $n$ and $\beta$, a problem instance consists of localization errors $\sigma_1^2, \ldots, \sigma_n^2$, each generated uniformly at random from the interval $[0, \sigma_{\max}^2]$. We initialize the Lagrangian parameter with $\lambda_0$ and set the multiplying factor to $c$ for the DS algorithm, as in the first experiment. For each $n$ and each $\beta$, we generate 1000 problem instances. The maximum suboptimality ratios of the algorithms over the generated instances are shown in Fig. 3.

Fig. 3: The maximum suboptimality ratios of the proposed methods over 1000 randomly generated problem instances, shown for varying total number of agents $n$ and thresholds $\beta$.

As can be seen from Fig. 3, both the Greedy and DLG algorithms perform better than the DS algorithm over all randomly generated instances. In particular, the maximum suboptimality ratios of the greedy algorithms are always less than 1.5, whereas the DS algorithm has large suboptimality ratios for large total numbers of agents $n$ and small expected beamforming gain thresholds $\beta$. Finally, we note that, in general, the suboptimality ratios of all algorithms increase with decreasing threshold $\beta$.

We observe in our experiments that the optimal subset usually consists of agents with similar localization errors rather than the agents with the smallest localization errors. For example, for a fixed collection of localization errors, the optimal set under one threshold $\beta$ can be entirely different from the optimal set under another. Unfortunately, we have no theoretical explanation for this counterintuitive phenomenon due to the complicated functional form of the variance $\mathrm{Var}[G(S)]$.

VII Conclusion

We have formulated a subset selection problem that aims to find a subset of agents, each of which is equipped with an ideal isotropic antenna, that forms a reliable communication link with a client through beamforming. We have presented three algorithms for solving the subset selection problem and have discussed their computational complexity and optimality. The presented theoretical analysis, together with the conducted numerical experiments, suggests that simple greedy approaches provide reasonably good performance at low computational cost, although such approaches may not be optimal unless the given problem instance satisfies certain sufficient conditions on the maximum localization error. On the other hand, for general problem instances, one can obtain locally optimal solutions to the formulated subset selection problem by using an algorithm that utilizes the supermodularity of the variance and the expected value of the beamforming gain. All the proposed algorithms can be thought of as attempts toward approximate trade-off analysis and toward finding desirable Pareto points.

References