I Introduction
Submodular maximization problems are relevant to many fields and applications, including sensor placement [13], outbreak detection in networks [14], maximizing and inferring influence in a social network [12, 9], document summarization [15], clustering [17], assigning satellites to targets [19], path planning for multiple robots [21], and leader selection and resource allocation in multiagent systems [2, 16]. An important similarity among these applications is the presence of an objective function which exhibits a "diminishing returns" property. For instance, consider a company choosing locations for its retail stores. If, for a given city, the company has no retail stores and chooses to add a location in that city, the marginal gain in revenue from that store would be higher than if the company chose a city where it already had 100 retail stores. Objectives (such as revenue in this instance) satisfying this property are submodular. While submodular minimization can be solved in polynomial time, submodular maximization is an NP-hard problem for certain subclasses of submodular functions. Therefore, much effort has been devoted to finding and improving algorithms which approximate the optimal solution in polynomial time. A key result of this line of research is that algorithms exist which give strong guarantees as to how well the optimal solution can be approximated.
One such algorithm is the greedy algorithm, first proposed in the seminal work [18]. Here it was shown that for cardinality constraints the solution provided by the greedy algorithm must be within $1-1/e$ of the optimal, and within $1/2$ of the optimal for the more general case of matroid constraints [6]. Since then, more sophisticated algorithms have been developed to show that there are many instances of the submodular maximization problem that can be solved efficiently within the $1-1/e$ guarantee [1, 7]. It has also been shown that progress beyond this level of optimality is not possible using a polynomial-time algorithm, where the indicator step for the time complexity is the evaluation of the objective function [5].
In addition to the strong performance guarantees, a nice benefit of using the greedy algorithm to solve a submodular maximization problem is that it is simple to implement, even in distributed settings. One recent line of research has studied a distributed version of the greedy algorithm, which can be implemented using a set of agents, each with its own action set [8]. In this algorithm, the agents are ordered and choose sequentially, each agent greedily choosing its best action relative to the actions which the previous agents have chosen. The solution provided is the set of all actions chosen. Like the standard greedy algorithm, this distributed greedy algorithm guarantees a solution within 1/2 of the optimal.
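To make the sequential structure concrete, the following is a minimal sketch of the distributed greedy algorithm. All names are illustrative, and a weighted-coverage objective stands in for the paper's general submodular function:

```python
# Minimal sketch of the sequential distributed greedy algorithm.
# A weighted-coverage function stands in for a general submodular f;
# all names here are illustrative, not the paper's notation.

def weighted_coverage(selected, weights):
    """f(S) = total weight of distinct elements covered by the choices in S."""
    covered = set()
    for choice in selected:
        covered |= choice
    return sum(weights[e] for e in covered)

def distributed_greedy(action_sets, weights):
    """Agents choose in a fixed order, each maximizing the marginal
    gain of its own actions given all earlier agents' choices."""
    chosen = []
    for actions in action_sets:            # one agent per entry
        base = weighted_coverage(chosen, weights)
        best = max(actions,
                   key=lambda a: weighted_coverage(chosen + [a], weights) - base)
        chosen.append(best)
    return chosen

weights = {1: 3.0, 2: 2.0, 3: 1.0}
action_sets = [
    [frozenset({1}), frozenset({2, 3})],   # agent 1's available actions
    [frozenset({1}), frozenset({2})],      # agent 2's available actions
]
solution = distributed_greedy(action_sets, weights)
```

Note that on this instance the greedy solution covers elements {1, 2} for a value of 5.0, while the optimal choice ({2, 3} then {1}) achieves 6.0 — consistent with the 1/2 guarantee discussed above.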
In this setting, recent literature has emerged which attempts to quantify how information impacts the performance of the algorithm, specifically as the information shared among the agents degrades. For instance, [10] describes how the 1/2 guarantee decreases as agents can only observe a subset of the actions chosen by previous agents. The work in [11] shows how an intelligent choice of which action to send to future agents can recover some of this loss in performance. Other work has also begun to explore how additional knowledge of the structure of the action sets can offset this loss [3, 4].
This paper addresses the impact of information in the other direction: how increasing the number of actions that can be shared among the agents improves the performance. We introduce the concept of message passing, wherein agents not only choose an action as part of the algorithm solution, but also choose some actions to share with future agents. Future agents may choose these shared actions as part of the solution; message passing is thus a way to augment the action sets of future agents in the sequence to offset any agent that may not have access to valuable actions.
Message passing gives rise to two key questions: what policy should agents use to select actions to pass, and how does it affect performance? We address the first question in Section II-C, where we propose an augmented greedy policy that is near-optimal in the limit of a large number of agents or a large number of shared messages. The performance question is addressed from both a worst-case and a best-case perspective. It turns out that it is possible to find "bad" problem instances for which message sharing brings little benefit when the number of agents is large. Moreover, we prove in Theorem 1 that this is so regardless of the message passing policy used. On the flip side, there are also "good" problem instances for which message passing can improve performance significantly, by a multiplicative factor that can be as large as the number of agents (Theorem 2), and such performance gains are achieved by the proposed augmented greedy policy.
II Model
Let $S$ be a set of elements and $f : 2^S \to \mathbb{R}$ a scalar-valued function. We restrict $f$ to have the following three properties:

Normalized: $f(\emptyset) = 0$.

Monotone: $f(A) \le f(B)$ for $A \subseteq B \subseteq S$.

Submodular: $f(A \cup \{x\}) - f(A) \ge f(B \cup \{x\}) - f(B)$ for all $A \subseteq B \subseteq S$ and $x \in S \setminus B$.
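As a small self-contained illustration (not from the paper), the following verifies all three properties exhaustively for a coverage function on a tiny ground set, where each selectable element covers a set of underlying items:

```python
# Exhaustive check that a coverage function is normalized, monotone,
# and submodular on a tiny ground set (illustrative example).
from itertools import chain, combinations

# Each selectable element covers a set of underlying items;
# f(S) counts the distinct items covered.
COVERS = {"x": {1, 2}, "y": {2, 3}, "z": {3}}
GROUND = set(COVERS)

def f(S):
    covered = set()
    for e in S:
        covered |= COVERS[e]
    return len(covered)

def subsets(s):
    s = list(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

assert f(set()) == 0                                   # normalized
for A in map(set, subsets(GROUND)):
    for B in map(set, subsets(GROUND)):
        if A <= B:
            assert f(A) <= f(B)                        # monotone
            for x in GROUND - B:                       # diminishing returns
                assert f(A | {x}) - f(A) >= f(B | {x}) - f(B)
```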
This paper focuses on a subclass of distributed submodular optimization problems. To that end, let be a set of agents where each is associated with a local set of elements that contains the various sets of elements that agent can choose from. We focus on the scenario where each agent can select at most elements from , and hence we express this choice set by , where we use as opposed to when the dependence on and is clear. We denote a choice profile by , and will evaluate the objective of this profile as . The central goal is to establish an admissible selection process for the agents such that the emergent action profile satisfies
(1) 
We will often express as merely when the problem instance is implied.
II-A The Greedy Algorithm
It is well known that characterizing an optimal allocation in (1) is an NP-hard problem in the number of agents for certain subclasses of submodular functions. However, there are very simple algorithms that can attain near-optimal behavior for this class of submodular optimization problems. One such algorithm, termed the greedy algorithm [6], proceeds according to the following rule: each agent sequentially selects its choice by greedily choosing the action which yields the greatest benefit to the objective, i.e., writing $S_j$ for the choice of agent $j$ and $\mathcal{X}_i$ for agent $i$'s choice set,
$$S_i \in \operatorname*{arg\,max}_{S \in \mathcal{X}_i} f\big(S \cup S_1 \cup \dots \cup S_{i-1}\big). \qquad (2)$$
Note that while the greedy algorithm can be implemented in a distributed fashion, there is an informational demand on the system as each agent must be aware of the choices of all previous agents , i.e., . Further, each agent must also be able to compute the optimal choice as defined in (2).
While relatively simple, the greedy algorithm is also high-performing, as it is guaranteed to produce a solution which is within 1/2 of the optimal, i.e., . In fact, the greedy algorithm is shown to give the highest performance guarantees possible for any algorithm which runs in polynomial time for some classes of distributed submodular maximization problems.
II-B A Motivating Example
Consider a scenario in which flying vehicles carry onboard cameras that capture images of ground vehicles of interest and return their pixel coordinates. Each vehicle has access to a large collection of pixel coordinate measurements taken by its own camera, which we denote by the local element set . However, each vehicle needs to select a much smaller subset of these measurements (no more than ) to send to a centralized location for data fusion. The goal is to select the best set of
measurements that each vehicle should send to the centralized location so that an optimal estimate
of the ground vehicle's position can be recovered by fusing the measurements from all the vehicles. It was shown in [20] that an optimal estimator that achieves the Cramér–Rao lower bound (CRLB) results in an error covariance matrix that can be written as
$$P(\mathcal{S}) = \Big( Q_0 + \sum_{s \in \mathcal{S}} Q_s \Big)^{-1},$$
where $Q_0$ is a symmetric positive definite matrix that encodes a priori information about the position of the ground vehicle, $\mathcal{S}$ is the complete set of measurements, and $Q_s$ is a symmetric positive semidefinite matrix that depends on the relative position and orientation of the flying vehicle's camera with respect to the ground vehicle for each measurement $s$. In addition, it was shown that the function defined for any $A \subseteq \mathcal{S}$ as
$$f(A) = \log\det\Big( Q_0 + \sum_{s \in A} Q_s \Big) - \log\det(Q_0) \qquad (3)$$
is normalized, monotone, and submodular. It turns out that maximizing (3) corresponds to so-called D-optimality, which essentially minimizes the volume of confidence ellipsoids.
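A minimal sketch of this log-det information gain, assuming the common form $f(A) = \log\det(Q_0 + \sum_{a \in A} Q_a) - \log\det(Q_0)$ with made-up 2×2 information matrices (the specific values below are illustrative, not from [20]):

```python
# Illustrative log-det information-gain objective of the form in (3),
# evaluated with hand-picked 2x2 matrices (assumed values, for demo only).
import math

def det2(M):
    """Determinant of a 2x2 matrix stored as nested lists."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def madd(A, B):
    """Entrywise sum of two 2x2 matrices."""
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

Q0 = [[1.0, 0.0], [0.0, 1.0]]          # prior information (positive definite)
Q = {                                   # per-measurement information (PSD)
    "m1": [[2.0, 0.0], [0.0, 0.0]],
    "m2": [[0.0, 0.0], [0.0, 2.0]],
    "m3": [[1.0, 1.0], [1.0, 1.0]],
}

def f(A):
    total = Q0
    for a in A:
        total = madd(total, Q[a])
    return math.log(det2(total)) - math.log(det2(Q0))

# Diminishing returns: m2 contributes less once m3 (which shares an
# informative direction) has already been selected.
gain_alone = f({"m2"}) - f(set())
gain_after = f({"m2", "m3"}) - f({"m3"})
assert gain_after <= gain_alone
```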
II-C The Greedy Algorithm with Message Passing
The focus of this work is on understanding the degree to which inter-agent communications can be exploited to improve the performance guarantees associated with the greedy algorithm. Accordingly, here we propose an augmented greedy algorithm where each agent is tasked with making both a selection and a communication decision, denoted by and respectively. In particular, we focus on the situation where agent can communicate up to of its measurements to the forthcoming agents , i.e., . Agent can then select its choices either from among its original set or also include some subset of the communicated measurements . Accordingly, we replace the decision-making rule of each agent given in (2) with a new rule which dictates how the agent selects its choice and its message in response to the previous selections and messages. In particular, we focus on rules of the form
(4) 
where and . It is important to highlight that the performance of a collection of policies is ultimately gauged by the performance of the resulting allocation , as the communicated messages are merely employed to influence these decision-making rules. The following algorithm highlights an opportunity for message passing to potentially improve the performance guarantees associated with the greedy algorithm by augmenting the agents' choice sets.
Definition 1 (Augmented Greedy Algorithm)
In the augmented greedy algorithm each agent is associated with a selection rule as in (4) of the form
(5)  
(6) 
The communication rule depicted above entails each agent forwarding the best measurements that it is unable to select to the remaining agents. Each remaining agent can then choose whether or not its selection should include these augmented choices. Note that we require a policy to be deterministic, so the rules in (5)–(6) do not constitute a specific policy, since the maximizers may not be unique. We therefore refer to a policy which has the form of (5)–(6), in conjunction with some tie-breaking rule, as an augmented greedy policy.
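The select-then-share structure of Definition 1 can be sketched as follows. The function names and the weighted-coverage objective are illustrative stand-ins for the general submodular $f$; `k` is each agent's selection budget and `q` the number of shared messages:

```python
# Sketch of the augmented greedy policy: each agent greedily selects up
# to k elements from its own set plus all previously shared messages,
# then shares its q best remaining local elements with later agents.
# (Names and the modular objective below are illustrative assumptions.)

def marginal(f, base, elem):
    return f(base | {elem}) - f(base)

def augmented_greedy(f, local_sets, k, q):
    chosen, shared = set(), set()
    for local in local_sets:
        pool = (set(local) | shared) - chosen
        for _ in range(k):                 # greedy selection from the pool
            if not pool:
                break
            best = max(pool, key=lambda e: marginal(f, chosen, e))
            chosen.add(best)
            pool.discard(best)
        leftovers = set(local) - chosen    # share the q best unchosen elements
        for _ in range(q):
            if not leftovers:
                break
            best = max(leftovers, key=lambda e: marginal(f, chosen, e))
            shared.add(best)
            leftovers.discard(best)
    return chosen

weights = {1: 5.0, 2: 4.0, 3: 1.0, 4: 0.5}
f = lambda S: sum(weights[e] for e in S)
# Agent 1 holds the valuable elements; agent 2 holds only a poor one.
value = f(augmented_greedy(f, [[1, 2, 3], [4]], k=1, q=1))
```

Here agent 1 selects element 1 and shares element 2, which agent 2 then prefers over its own element 4, for a total of 9.0; without sharing the total would be 5.5.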
Our goal in this paper is to characterize the policies of the form given in (4) that optimize the quality of the emergent choice profile . Formally, a policy for agent is a function of the form
(7) 
where for any , the output
satisfies and . This constraint ensures that each agent can only select elements either from its own set or elements shared by previous agents . See Figure 1 for an example which uses this extended model.
Given and , we will refer to the set of all admissible policies as . For a policy , we formally let denote the emergent profile when is applied to the problem instance . We will often simply write when the problem instance is implied.
II-D Performance Measure
Given a submodular function with element set , we measure the performance of the best policy as
(8) 
where we restrict attention to choice sets of the form . Throughout, we will often focus on worst-case guarantees for any submodular function , which we characterize by
(9) 
where we restrict attention to 's that are submodular, monotone, and normalized. Note that when we restrict attention to decision rules aligned with the greedy algorithm, as in (2), the bound given in (9) is . The goal of this paper is to approximate the optimal policy in .
III A Worst-Case Analysis
In this section we explore whether message passing can increase worst-case guarantees beyond the 1/2 guarantee of the nominal greedy algorithm. We show that an augmented greedy algorithm is near-optimal in this setting, but also that its benefits in terms of worst-case guarantees decrease as the number of agents increases. Additionally, we show that even an optimal policy does not generally increase performance by much.
Theorem 1
For any and any , the following statements are true:
(10)  
(11) 
We will give the formal proof for the theorem below, with a brief description here. The upper bound is shown by presenting a canonical example, for which no policy can guarantee a performance above the bound in (10). The lower bound is proven by showing that if an augmented greedy policy is implemented, the system performance cannot be below that in (11).
Assuming that one could design a policy that meets the upper bound in (10), we see that in general the guaranteed performance does not increase much beyond the 1/2 guaranteed by the standard greedy algorithm in (2), especially for large . However, in cases where is small, one could see an increase in guaranteed performance: for instance, when and , then .
Another observation about Theorem 1 is that the upper and lower bounds are equal when , a range for which increasing does not affect . This implies that increasing beyond does not offer any benefit in terms of worst-case performance guarantees. Therefore, when considering constraints on how much information agents may share with one another, it may not benefit the system to increase capacity beyond that bound.
We also see that the upper and lower bounds can be equal depending on how many agents are in the system. For instance, when , the lower and upper bounds are equal. Since any augmented greedy policy of the form (5)–(6) achieves the lower bound, this implies that any augmented greedy policy is optimal in this setting. Likewise, as , the bounds become increasingly tight, showing that for large , any augmented greedy policy is near-optimal. This provides motivation for further studying the augmented greedy algorithm in the next section. We now proceed with the proof of Theorem 1:
For convenience of notation, we define for to be . To show the upper bound in (10), we introduce a problem instance such that for any ,
(12) 
Fix and let , where is such that . Let for all . This implies that all the elements in are equally valued, and that restricted to this subset is modular. For agents , any reasonable strategy (if a policy does not follow the prescribed decisions listed in this proof, a similar one can be designed which leads to a lower value) will simply set to be elements which have not been previously chosen or shared by other agents. Likewise, any reasonable will simply be elements which have not been previously chosen or shared.
We now define the action set for agent . For simplicity, denote , for some fixed . Define , which will serve as an element that “covers” . In other words, and for any . Then . For some illustrative examples, see Figure 2.
The objective function is fully defined as
(13) 
for any . Notice that the term is binary, indicating whether , and if so, appropriately “covers” any elements in . It should be clear (especially considering the examples in Figure 2) that such an is submodular, monotone, and normalized.
Under any reasonable strategy (as described above), the selections for the first agents are unique elements, thus . The selection for agent is elements, thus . The optimal selections for agent is elements not in . Then the optimal selection for agent is with additional elements in , thus . This implies that
(14) 
Since this performance is achieved by any optimal policy for this particular choice of , we conclude that this expression is an upper bound on .
We now show the lower bound on by assuming takes on the form in (5)–(6), and showing that for any , cannot be lower than the lower bound in (11). We first define the function :
(15) 
for . This can be thought of as the marginal contribution of given .
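The marginal-contribution function in (15) can be sketched directly (the weighted-sum objective below is an illustrative stand-in for the general submodular $f$):

```python
# Sketch of the marginal contribution in (15): f(A | B) = f(A ∪ B) − f(B).
# The modular weighted-sum objective is an illustrative assumption.

def marginal(f, A, B):
    """Marginal contribution of the set A given the set B."""
    return f(A | B) - f(B)

weights = {1: 3.0, 2: 2.0}
f = lambda S: sum(weights[e] for e in S)

# For disjoint sets under a modular f, the marginal gain of A given B
# equals A's standalone value.
assert marginal(f, {1}, {2}) == f({1})
```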
Lemma 1
For a problem instance , assume that is an augmented greedy policy. Then
(16) 
IV A Best-Case Analysis
In this section we consider an optimistic approach to understanding how an increase in message passing affects the performance of the system. We assume the use of an augmented greedy policy, which Theorem 1 shows is near-optimal, and study its potential effects. In particular, since the nominal greedy algorithm in (2) is simply the augmented greedy algorithm when , comparing the solutions of the two on an instance-by-instance basis gives insight into the potential benefits of message passing. While the previous section focused on worst-case scenarios, here we ask: how much could action sharing increase performance on any individual problem instance?
Figure 3 answers this question for 100 million randomly chosen problem instances; the details are described at the end of this section. For each instance, an augmented greedy policy was implemented, as well as a nominal greedy policy . We report the ratio , so any number greater than 1 indicates a strict increase in performance. Here, , and the theoretical upper and lower bounds from Theorem 2 are shown as vertical orange lines. We can see from the histogram that in about 2/3 of problem instances there is an increase in performance, and no instance shows a decrease. Furthermore, over 1/4 of instances show at least a 10% increase, and over 5% show at least a 20% increase. Thus we see in simulation that message passing can improve performance significantly, which is stated more formally in the following result:
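A rough, scaled-down sketch of this kind of random-instance comparison is given below. The parameters, distributions, and the modular weighted-coverage objective are illustrative assumptions, not the paper's exact setup, so the improvement fraction will differ from Figure 3:

```python
# Rough sketch of a random-instance comparison between the augmented
# greedy policy (q = 1) and the nominal greedy policy (q = 0).
# All parameters and the weighted objective are illustrative assumptions.
import random

def f(S, w):                    # weighted coverage: duplicates count once
    return sum(w[e] for e in set(S))

def run(local_sets, w, k, q):
    """Each agent picks its k best available elements, then shares its
    q best remaining local elements with later agents."""
    chosen, shared = set(), set()
    for local in local_sets:
        pool = (set(local) | shared) - chosen
        picks = sorted(pool, key=lambda e: w[e], reverse=True)[:k]
        chosen |= set(picks)
        rest = sorted(set(local) - chosen, key=lambda e: w[e], reverse=True)
        shared |= set(rest[:q])
    return f(chosen, w)

random.seed(0)
trials, improved = 1000, 0
for _ in range(trials):
    w = {e: random.random() for e in range(6)}          # 6 weighted elements
    sets = [[random.randrange(6) for _ in range(2)]     # 3 agents, 2 random
            for _ in range(3)]                          # elements each
    ratio = run(sets, w, k=1, q=1) / run(sets, w, k=1, q=0)
    improved += ratio > 1
```

In this simplified setting, a sizable fraction of trials show `ratio > 1`, qualitatively matching the histogram's message that sharing often helps.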
Theorem 2
For any , any , and any problem instance , the following holds:
(21) 
where is the solution to an augmented greedy algorithm and satisfies (5) for all , and is the solution to a nominal greedy algorithm and satisfies (2) for all . Furthermore, for each inequality there exists a problem instance which shows tightness.
For space considerations, we present this result without its proof (the interested reader may find the full proof at https://www.ece.ucsb.edu/davidgrimsman/CDC2020.pdf; the proof will be included in future iterations of this work). The proof focuses mainly on the upper bound in (21), relying, in a similar fashion to Theorem 1, on the properties of given in –, and on the submodularity and monotonicity of . The lower bound, somewhat trivially, corresponds to the lower bound for in Theorem 1.
While the result in Theorem 1 was a somewhat negative result (in general, increasing does not provide much higher performance guarantees), here we see an upside to message passing. For any given instance , one could increase by a factor of . And, though this will not be the case for every problem instance, we see in Figure 3 that one can expect to increase by some amount.
Theorem 2 also gives insight into how much increasing might help any given problem instance. Whereas previously we saw that increasing above offered no further guarantees, here we see potential benefits to increasing all the way up to . We also see the potential drawback: on any given instance, could decrease by almost a factor of 1/2, although we did not see scenarios like this surface in simulation.
We now describe the details of the simulations in Figure 3. There are 3 agents and 6 elements , where is a uniformly randomly assigned number in . Then for any , . The action sets for each agent are also uniformly randomly assigned with replacement, i.e., agent 1 could be assigned an element twice. This gives the sizes of the action sets some variance among the simulations. We then impose that , , and . In this setting, we run the algorithm twice on each instance: once implementing an augmented greedy policy and once implementing a nominal greedy policy . We then report the ratio . This simulation was run 100 million times to generate the histogram in Figure 3. We see that in about 2/3 of cases, the augmented greedy algorithm improves performance.

V Numerical Example
In this section, we present the results for instances of the state estimation problem in Section II-B, where flying vehicles move in straight lines, each carrying a side-looking camera with planar projective geometry, focal length
, and measurement noise in the image plane with standard deviation
pixel. A large number of instances were created with the target fixed at the origin, drawing the initial positions of the two vehicles uniformly at random in the square and the direction of motion uniformly across the interval . Each vehicle collects 100 independent measurements along a path of length 50. Details of how to construct the corresponding matrices and in (3) that quantify the information gain of camera measurements are found in [20].

Figure 4 summarizes the results in terms of the ratio between the performance of the augmented greedy algorithm (5)–(6) and the classical greedy algorithm (2); tie-breaks were resolved arbitrarily. In both cases, each algorithm selects measurements to be sent to the centralized location for fusion. For the augmented greedy algorithm, the first vehicle selects measurements to be shared with the second vehicle. This means that the second vehicle can pick its best measurements to send to the fusion center out of the set consisting of its own 100 collected measurements together with those shared by the first vehicle. Because the number of measurements is very large, we cannot actually find the measurements that exactly maximize (5)–(6) and (2). Instead, we have used the usual greedy algorithm to make this selection, which is guaranteed to achieve no less than $1-1/e$ of the optimal.
While measurement selection is based on the submodular function (3), it is more common (and more meaningful) to judge performance gains in terms of the D-optimality estimation criterion (i.e., without the log function). The performance ratio in terms of the D-optimality criterion is shown in Figure 4(b). Figure 5 shows a simulation example corresponding to a large performance improvement of the augmented greedy policy over the basic one.
VI Conclusion
In this paper we have shown how message passing affects the performance guarantees of a group of agents using the greedy algorithm. We showed that when , message passing yields no additional worst-case guarantees. We also showed that a simple augmented greedy policy gives near-optimal performance guarantees. Using such a policy, we explored how much performance could increase on individual problem instances, and showed by simulation that these results are relevant in a real-world application.
Future work will continue to explore message passing, first by attempting to create tighter bounds on . We also have some preliminary results related to settings where (5) and (6) cannot be computed directly, only approximated. Another direction could be to apply these results to situations where agents can only see the actions of a subset of previous agents, and again ask questions about what actions should be shared and selected.
References
[1] (2007) Maximizing a submodular set function subject to a matroid constraint. In International Conference on Integer Programming and Combinatorial Optimization, pp. 182–196.
[2] (2011) A submodular optimization framework for leader selection in linear multiagent systems. In Conference on Decision and Control and European Control Conference, pp. 3614–3621.
[3] (2018) Distributed submodular maximization on partition matroids for planning on large sensor networks. In Conference on Decision and Control, pp. 6792–6799.
[4] (2019) Distributed matroid-constrained submodular maximization for multi-robot exploration: theory and practice. Autonomous Robots 43(2), pp. 485–501.
[5] (1998) A threshold of ln n for approximating set cover. Journal of the ACM 45(4), pp. 634–652.
[6] (1978) An analysis of approximations for maximizing submodular set functions—II. In Polyhedral Combinatorics, pp. 73–87.
[7] (2009) Covering games: approximation through non-cooperation. In International Workshop on Internet and Network Economics, pp. 184–195.
[8] (2017) Distributed submodular maximization with limited information. IEEE Transactions on Control of Network Systems 5(4), pp. 1635–1645.
[9] (2012) Inferring networks of diffusion and influence. ACM Transactions on Knowledge Discovery from Data 5(4), p. 21.
[10] (2018) The impact of information in greedy submodular maximization. IEEE Transactions on Control of Network Systems.
[11] (2018) Strategic information sharing in greedy submodular maximization. In IEEE Conference on Decision and Control, pp. 2722–2727.
[12] (2003) Maximizing the spread of influence through a social network. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 137–146.
[13] (2006) Near-optimal sensor placements: maximizing information while minimizing communication cost. In Proceedings of the International Conference on Information Processing in Sensor Networks, pp. 2–10.
[14] (2007) Cost-effective outbreak detection in networks. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 420–429.
[15] (2011) A class of submodular functions for document summarization. In Proceedings of the Meeting of the Association for Computational Linguistics: Human Language Technologies, Volume 1, pp. 510–520.
[16] (2017) The role of information in distributed resource allocation. IEEE Transactions on Control of Network Systems 4(3), pp. 654–664.
[17] (2013) Distributed submodular maximization: identifying representative elements in massive data. In Advances in Neural Information Processing Systems, pp. 2049–2057.
[18] (1978) An analysis of approximations for maximizing submodular set functions—I. Mathematical Programming 14(1), pp. 265–294.
[19] (2019) Distributed greedy algorithm for multi-agent task assignment problem with submodular utility functions. Automatica 105, pp. 206–215.
[20] (2020) Heterogeneous measurement selection for vehicle tracking using submodular optimization. In Proceedings of the IEEE Aerospace Conference, pp. 1–10.
[21] (2007) Efficient planning of informative paths for multiple robots. In International Joint Conference on Artificial Intelligence, Vol. 7, pp. 2204–2211.