The Impact of Message Passing in Agent-Based Submodular Maximization

04/07/2020 · David Grimsman, et al. · The Regents of the University of California

Submodular maximization problems model many real-world applications. Since these problems are generally NP-hard, many methods have been developed to approximate the optimal solution in polynomial time. One such approach uses an agent-based greedy algorithm, where each agent chooses an action from its action set so that the union of all chosen actions is as high-valued as possible. Recent work has shown how the performance of the greedy algorithm degrades as the amount of information shared among the agents decreases; this work addresses the scenario where agents are capable of sharing more information than is allowed in the greedy algorithm. Specifically, we show how performance guarantees increase as agents are allowed to pass messages, which can augment the allowable decision set of each agent. Under these circumstances, we give a near-optimal method for message passing and quantify how much such an algorithm can increase performance for any given problem instance.


I Introduction

Submodular maximization problems are relevant to many fields and applications, including sensor placement [13], outbreak detection in networks [14], maximizing and inferring influence in a social network [12, 9], document summarization [15], clustering [17], assigning satellites to targets [19], path planning for multiple robots [21], and leader selection and resource allocation in multiagent systems [2, 16]. An important similarity among these applications is the presence of an objective function which exhibits a “diminishing returns” property. For instance, consider a company choosing locations for its retail stores. If the company has no retail stores in a given city and adds a location there, the marginal gain in revenue from that store is higher than if the company adds a location in a city where it already has 100 retail stores. Objectives (such as revenue in this instance) satisfying this property are submodular.

While submodular minimization can be solved in polynomial time, submodular maximization is an NP-hard problem for certain subclasses of submodular functions. Therefore, much effort has been devoted to finding and improving algorithms which approximate the optimal solution in polynomial time. A key result of this line of research is that algorithms exist which give strong guarantees as to how well the optimal solution can be approximated.

One such algorithm is the greedy algorithm, first proposed in the seminal work [18]. There it was shown that for cardinality constraints the solution provided by the greedy algorithm must be within 1 - 1/e of the optimal, and within 1/2 of the optimal for the more general case of matroid constraints [6]. Since then, more sophisticated algorithms have been developed to show that there are many instances of the submodular maximization problem that can be solved efficiently within the 1 - 1/e guarantee [1, 7]. It has also been shown that progress beyond this level of optimality is not possible using a polynomial-time algorithm, where time complexity is measured by the number of evaluations of the objective function [5].

In addition to these strong performance guarantees, a nice benefit of using the greedy algorithm to solve a submodular maximization problem is that it is simple to implement, even in distributed settings. One recent line of research has studied a distributed version of the greedy algorithm, which can be implemented using a set of agents, each with its own action set [8]. In this algorithm, the agents are ordered and choose sequentially, each agent greedily choosing its best action relative to the actions which the previous agents have chosen. The solution provided is the set of all actions chosen. Like the standard greedy algorithm, this distributed greedy algorithm has been shown to guarantee a solution within 1/2 of the optimal.

In this setting, recent literature has emerged which attempts to quantify how information impacts the performance of the algorithm, specifically as the information among the agents degrades. For instance, [10] describes how the 1/2-guarantee decreases as agents can only observe a subset of the actions chosen by previous agents. The work in [11] shows how an intelligent choice of which action to send to future agents can recover some of this loss in performance. Other work has also begun to explore how additional knowledge of the structure of the action sets can offset this loss [3, 4].

This paper addresses the impact of information in the other direction: how increasing the number of actions that can be shared among the agents improves performance. We introduce the concept of message passing, wherein agents not only choose an action as part of the algorithm solution, but also choose some actions to share with future agents. Future agents may choose these shared actions as part of the solution; thus message passing is a way to augment the action sets of future agents in the sequence, offsetting any agent that may not have access to valuable actions.

Message passing gives rise to two key questions: what policy should agents use to select actions to pass, and how does message passing affect performance? We address the first question in Section II-C, where we propose an augmented greedy policy that is near-optimal in the limit of a large number of agents or a large number of shared messages. The performance question is addressed from both a worst-case and a best-case perspective. It turns out that it is possible to find “bad” problem instances for which message sharing brings little benefit when the number of agents is large. Moreover, we prove in Theorem 1 that this is so regardless of the message passing policy used. On the flip side, there are also “good” problem instances for which message passing can improve performance significantly, by a multiplicative factor that can be as large as the number of agents (Theorem 2), and such performance gains are achieved by the proposed augmented greedy policy.

II Model

Let $S$ be a finite set of elements and $f : 2^S \to \mathbb{R}$ a scalar-valued set function. We restrict $f$ to have the following three properties (illustrated by the sketch below):

  • Normalized: $f(\emptyset) = 0$.

  • Monotone: $f(A) \le f(B)$ for all $A \subseteq B \subseteq S$.

  • Submodular: $f(A \cup \{x\}) - f(A) \ge f(B \cup \{x\}) - f(B)$ for all $A \subseteq B \subseteq S$ and all $x \in S \setminus B$.
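
To make these properties concrete, the following Python sketch builds a small coverage objective, a standard example of a normalized, monotone, submodular function (in the spirit of the coverage-style illustrations in Figures 1 and 2), and exhaustively checks the three properties on a toy instance. The element names and covered points are arbitrary choices for illustration, not data from the paper.

```python
import itertools

# A small coverage objective: each element covers a set of points, and f(A) is the
# number of distinct points covered by A. Coverage functions of this kind are
# normalized, monotone, and submodular. (Toy data chosen purely for illustration.)
COVERAGE = {
    "a": {1, 2, 3},
    "b": {3, 4},
    "c": {4, 5, 6, 7},
    "d": {2, 6},
}
S = set(COVERAGE)

def f(A):
    covered = set()
    for e in A:
        covered |= COVERAGE[e]
    return len(covered)

# Normalized: f of the empty set is 0.
assert f(set()) == 0

# Monotone and submodular: check every pair A subset of B and every element x outside B.
subsets = [set(c) for r in range(len(S) + 1) for c in itertools.combinations(sorted(S), r)]
for A in subsets:
    for B in subsets:
        if not A <= B:
            continue
        assert f(A) <= f(B)                               # monotone
        for x in S - B:                                   # diminishing returns
            assert f(A | {x}) - f(A) >= f(B | {x}) - f(B)

print("coverage example is normalized, monotone, and submodular")
```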

This paper focuses on a subclass of distributed submodular optimization problems. To that end, let $N = \{1, \dots, n\}$ be a set of agents, where each agent $i \in N$ is associated with a local set of elements $S_i \subseteq S$ that contains the various elements agent $i$ can choose from. We focus on the scenario where each agent can select at most $k$ elements from $S_i$, and hence we express this choice set by $\mathcal{X}_i = \{x_i \subseteq S_i : |x_i| \le k\}$, where we write $\mathcal{X}_i$ as opposed to $\mathcal{X}_i(S_i, k)$ when the dependence on $S_i$ and $k$ is clear. We denote a choice profile by $x = (x_1, \dots, x_n)$, and will evaluate the objective of this profile as $f(x_1 \cup \dots \cup x_n)$. The central goal is to establish an admissible selection process for the agents such that the emergent action profile $x$ satisfies

(1)

We will often suppress the explicit dependence on the problem instance when it is clear from context.

II-A The Greedy Algorithm

It is well known that characterizing an optimal allocation in (1) is an NP-hard problem in the number of agents for certain subclasses of submodular functions $f$. However, there are very simple algorithms that can attain near-optimal behavior for this class of submodular optimization problems. One such algorithm, termed the greedy algorithm [6], proceeds according to the following rule: each agent $i$ sequentially selects its choice by greedily choosing the action which yields the greatest benefit to the objective $f$, i.e.,

$x_i \in \arg\max_{\tilde{x}_i \in \mathcal{X}_i} f\big(\tilde{x}_i \cup x_1 \cup \dots \cup x_{i-1}\big). \qquad (2)$

Note that while the greedy algorithm can be implemented in a distributed fashion, there is an informational demand on the system, as each agent $i$ must be aware of the choices $x_1, \dots, x_{i-1}$ of all previous agents. Further, each agent must also be able to compute the optimal choice as defined in (2).

While relatively simple, the greedy algorithm is also high-performing, as it is guaranteed to produce a solution whose value is within 1/2 of the optimal. In fact, the greedy algorithm is shown to give the highest performance guarantees possible for any algorithm which runs in polynomial time for some classes of distributed submodular maximization problems.
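
To illustrate the sequential rule in (2), the sketch below implements the distributed greedy algorithm over a generic set function; agents are processed in a fixed order and each one exhaustively maximizes its own marginal gain. The coverage objective, budget k, and agent sets in the example are illustrative assumptions rather than values from the paper.

```python
from itertools import combinations

def coverage_value(selection, coverage):
    """Example submodular objective: number of distinct points covered."""
    covered = set()
    for element in selection:
        covered |= coverage[element]
    return len(covered)

def distributed_greedy(agent_sets, k, value):
    """Each agent, in order, greedily picks its best set of at most k of its own
    elements given the union of everything chosen by earlier agents."""
    chosen = []            # x_1, ..., x_n
    union_so_far = set()
    for local_set in agent_sets:
        best, best_gain = set(), 0.0
        for r in range(1, k + 1):
            for cand in combinations(sorted(local_set), r):
                gain = value(union_so_far | set(cand)) - value(union_so_far)
                if gain > best_gain:
                    best, best_gain = set(cand), gain
        chosen.append(best)
        union_so_far |= best
    return chosen, value(union_so_far)

if __name__ == "__main__":
    # Toy instance; element sets and coverage chosen only for illustration.
    coverage = {"a": {1, 2}, "b": {2, 3}, "c": {3, 4, 5}, "d": {5, 6}, "e": {1, 6}}
    agent_sets = [{"a", "b"}, {"b", "c"}, {"c", "d", "e"}]
    choices, total = distributed_greedy(agent_sets, k=1,
                                        value=lambda sel: coverage_value(sel, coverage))
    print(choices, total)
```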

II-B A Motivating Example

Consider a scenario in which flying vehicles carry on-board cameras that capture images of ground vehicles of interest and return their pixel coordinates. Each vehicle has access to a large collection of pixel coordinate measurements taken by its own camera, which we denote by the local element set $S_i$. However, each vehicle needs to select a much smaller subset of these measurements (no more than $k$) to send to a centralized location for data fusion. The goal is to select the best set of $k$ measurements that each vehicle should send to the centralized location so that an optimal estimate of the ground vehicle's position can be recovered by fusing the measurements from all the vehicles. It was shown in [20] that an optimal estimator achieving the Cramér–Rao lower bound (CRLB) results in an error covariance matrix determined by three quantities: a symmetric positive definite matrix that encodes a priori information about the position of the ground vehicle, the complete set of measurements, and, for each measurement, a symmetric positive semi-definite matrix that depends on the relative position and orientation of the flying vehicle's camera with respect to the ground vehicle. In addition, it was shown that the function defined for any subset of measurements as

(3)

is normalized, monotone, and submodular. It turns out that maximizing (3) corresponds to the so-called D-optimality criterion, which essentially minimizes the volume of confidence intervals.
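
The explicit form of (3) is not reproduced above. A common normalized log-det information-gain objective that is consistent with the description (a prior information matrix plus one positive semi-definite information matrix per selected measurement) is sketched below in Python; the matrices, dimensions, and the precise form used in [20] should be treated as assumptions here.

```python
import numpy as np

rng = np.random.default_rng(0)

DIM = 2                               # planar target position (illustrative)
P0 = np.eye(DIM)                      # assumed prior information matrix (symmetric PD)

def random_info_matrix():
    """Rank-one PSD information matrix, e.g. along a camera viewing direction."""
    direction = rng.normal(size=DIM)
    direction /= np.linalg.norm(direction)
    return np.outer(direction, direction)

MEASUREMENTS = {i: random_info_matrix() for i in range(20)}

def f(selected):
    """Assumed form of (3): log det of accumulated information, normalized so f({}) = 0."""
    info = P0 + sum((MEASUREMENTS[a] for a in selected), np.zeros((DIM, DIM)))
    return np.linalg.slogdet(info)[1] - np.linalg.slogdet(P0)[1]

def greedy_select(candidates, k):
    """Pick k measurements greedily by marginal information gain."""
    selected = set()
    for _ in range(k):
        best = max(candidates - selected, key=lambda a: f(selected | {a}) - f(selected))
        selected.add(best)
    return selected

picked = greedy_select(set(MEASUREMENTS), k=3)
print(sorted(picked), round(f(picked), 3))
```

Objectives of this log-det form are known to be normalized, monotone, and submodular, which is what makes greedy measurement selection meaningful here.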

II-C The Greedy Algorithm with Message Passing

(a) An example problem, where , , . Each box represents an element of , and each row represents for each agent, i.e., the elements in to which the agent has access. The function is represented by the width of each box, where the width of elements not specifically labeled in the diagram is 1. For , is the total amount of horizontal space covered by the elements in . For instance, and . Here we assume the use of some policy for selection and message passing. The arrows indicate the message passing dictated by , for instance . The boxes with the dashed outline indicate that is not in or , but is included as part of the agents’ augmented action set, should they choose to use it. The boxes shaded in blue indicate the elements chosen by , and the boxes shaded in green are the optimal choices, where those differ.
(b) A table representing the performance of 4 different solution methods. First, the optimal solution to (1) is given. Then, the independent solution is shown, i.e., the solution where each agent chooses independently of the other agents. The solution to the greedy algorithm assumes that the agents choose according to (2). Finally, the last row assumes agents choose according to some message-passing policy.
Fig. 1: An example problem illustrating the extended model from Section II-C.

The focus of this work is on understanding the degree to which inter-agent communication can be exploited to improve the performance guarantees associated with the greedy algorithm. Accordingly, here we propose an augmented greedy algorithm where each agent is tasked with making both a selection decision and a communication decision. In particular, we focus on the situation where agent $i$ can communicate up to $w$ of its measurements to the forthcoming agents $i+1, \dots, n$. Agent $i$ can then select its choices either from its original set $S_i$ or from the measurements communicated by previous agents. Accordingly, we replace the decision-making rule of each agent given in (2) with a new rule which dictates how the agent selects its choice and its message in response to the previous selections and messages. In particular, we focus on rules of the form

(4)

where the selection and the message satisfy the constraints described below. It is important to highlight that the performance of a collection of policies is ultimately gauged by the performance of the resulting allocation, as the communicated messages are merely employed to influence these decision-making rules. The following algorithm highlights an opportunity for message passing to potentially improve the performance guarantees associated with the greedy algorithm through augmenting the agents' choice sets.

Definition 1 (Augmented Greedy Algorithm)

In the augmented greedy algorithm each agent is associated with a selection rule as in (4) of the form

(5)
(6)

The communication depicted above entails each agent forwarding to the remaining agents the best $w$ measurements that it is unable to select itself. Each remaining agent can then choose whether or not its selection should include these augmented choices. Note that we require a policy to be deterministic, so the rules in (5)–(6) do not constitute a specific policy, since the maximizers may not be unique. Therefore we refer to a policy which has the form of (5)–(6), in conjunction with some tie-breaking rule, as an augmented greedy policy.
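
The sketch below gives one concrete reading of the augmented greedy rules (5)–(6): each agent greedily selects at most k elements from its own set plus everything shared so far, and then forwards the w own elements it did not select that have the largest remaining marginal value. The objective, the budgets k and w, and the tie-breaking (by sorted element name) are illustrative assumptions, not the paper's exact specification.

```python
from itertools import combinations

def coverage_value(selection, coverage):
    """Example submodular objective: number of distinct points covered."""
    covered = set()
    for element in selection:
        covered |= coverage[element]
    return len(covered)

def augmented_greedy(agent_sets, k, w, value):
    """Sketch of (5)-(6): select greedily from own elements plus shared messages,
    then message forward the best unselected own elements."""
    union_so_far = set()     # union of all selections so far
    shared = set()           # all elements messaged to future agents so far
    selections = []
    for local_set in agent_sets:
        available = set(local_set) | shared            # augmented choice set
        # (5): greedy selection of at most k available elements.
        best_sel, best_gain = set(), 0.0
        for r in range(1, k + 1):
            for cand in combinations(sorted(available), r):
                gain = value(union_so_far | set(cand)) - value(union_so_far)
                if gain > best_gain:
                    best_sel, best_gain = set(cand), gain
        selections.append(best_sel)
        union_so_far |= best_sel
        # (6): pass along the w unselected own elements with the largest marginal value.
        leftovers = sorted(set(local_set) - best_sel,
                           key=lambda e: value(union_so_far | {e}) - value(union_so_far),
                           reverse=True)
        shared |= set(leftovers[:w])
    return selections, value(union_so_far)

if __name__ == "__main__":
    coverage = {"a": {1, 2}, "b": {3, 4}, "c": {1, 2}, "d": {3, 4}, "e": {5, 6}}
    agent_sets = [{"a", "b"}, {"c", "d"}, {"e"}]
    print(augmented_greedy(agent_sets, k=1, w=1,
                           value=lambda sel: coverage_value(sel, coverage)))
```

With w = 0 this reduces to the nominal greedy rule in (2), which is the comparison exploited in Sections IV and V.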

Our goal in this paper is to characterize the policies of the form given in (4) that optimize the quality of the emergent choice profile . Formally, a policy for agent is a function of the form

(7)

where, for any input, the output satisfies the constraint that each agent can select only elements from its own set $S_i$ or elements shared by previous agents. See Figure 1 for an example which uses this extended model.

Given the problem data, we will refer to the set of all admissible policies as $\Pi$. For a policy $\pi \in \Pi$, we formally let $x^{\pi}$ denote the emergent profile when $\pi$ is applied to the problem instance; we will often simply write $x^{\pi}$ when the problem instance is implied.

II-D Performance Measure

Given a submodular function $f$ with element set $S$, we measure the performance of the best policy as

(8)

where we restrict attention to choice sets of the form $\mathcal{X}_i$. Throughout, we will often focus on worst-case guarantees over all admissible submodular functions, which we characterize by

(9)

where we restrict attention to $f$'s that are submodular, monotone, and normalized. Note that when we restrict attention to decision rules aligned with the greedy algorithm, as in (2), the bound given in (9) is 1/2. The goal of this paper will be to approximate the optimal policy in $\Pi$.
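
The precise definitions in (8)–(9) are stripped from this rendering. One plausible reading, consistent with the surrounding text and the notation used here, and offered only as an assumption about the paper's exact formulation, is a worst-case efficiency ratio taken first over problem instances and then over admissible objective functions:

$\gamma(f) \;=\; \max_{\pi \in \Pi} \;\min_{S_1, \dots, S_n \subseteq S} \; \dfrac{f(x^{\pi})}{\max_{x_i \in \mathcal{X}_i} f(x_1 \cup \dots \cup x_n)} \qquad \text{(cf. (8))}$

$\gamma \;=\; \inf_{f} \; \gamma(f), \qquad \text{(cf. (9))}$

with the infimum taken over normalized, monotone, submodular $f$.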

III A Worst-Case Analysis

(a) An example where and , i.e. . Here , since is “covered” by . In this example, .
(b) An example where and , i.e., . Here , since this element of is “covered” by . We see that .
Fig. 2: Two graphical illustrations of worst-case instances of the style used in the proof of Theorem 1. This representation is similar to that in Figure 1, in that the rows are the action sets and the value of a set is the amount of horizontal space covered by its boxes. Again, blue squares represent elements chosen by some policy, orange squares are the elements passed as messages under that policy, and green squares are the optimal choices, when those differ from the algorithm's. Note that any elements which are shared with future agents are already in the future agents' action sets.

In this section we explore whether message passing can increase worst-case guarantees beyond the 1/2 guarantee of the nominal greedy algorithm. We show that the augmented greedy algorithm is near-optimal in this setting, but also that the benefits in terms of worst-case guarantees decrease as the number of agents increases. Additionally, we show that even an optimal policy does not generally increase performance by much.

Theorem 1

For any and any , the following statements are true:

(10)
(11)

We will give the formal proof for the theorem below, with a brief description here. The upper bound is shown by presenting a canonical example, for which no policy can guarantee a performance above the bound in (10). The lower bound is proven by showing that if an augmented greedy policy is implemented, the system performance cannot be below that in (11).

Assuming that one could design a policy such that meets the upper bound in (10), we see that in general the guaranteed performance does not increase much above the 1/2 guaranteed by the standard greedy algorithm given by (2), especially for large . However, in cases where is small, one could see an increase in guaranteed performance: for instance, when and , then .

Another observation about Theorem 1 is that the upper and lower bounds are equal beyond a certain point, a range over which increasing the message budget does not affect the guarantee. This implies that increasing the message budget beyond that point does not offer any benefit in terms of worst-case performance guarantees. Therefore, when considering constraints on how much information agents may share with one another, it may not benefit the system to increase capacity beyond that bound.

We also see that the upper and lower bounds are equal depending on how many agents are in the system. For instance, for certain numbers of agents, the lower and upper bounds are equal. Since any augmented greedy policy of the form (5)–(6) can be used to achieve the lower bound, this implies that any augmented greedy policy is optimal in this setting. Likewise, as the number of agents grows, we see that the bounds become increasingly tight, showing that for a large number of agents, any augmented greedy policy is a near-optimal policy. This provides motivation for further studying the augmented greedy algorithm in the next section. We now proceed with the proof of Theorem 1:

For convenience of notation, we define for to be . To show the upper bound in (10), we introduce a problem instance such that for any ,

(12)

Fix and let , where is such that . Let for all . This implies that all the elements in are equally valued, and that on this subset the objective is modular. For the first $n-1$ agents, any reasonable strategy will simply select elements which have not been previously chosen or shared by other agents; likewise, any reasonable message will consist of elements which have not been previously chosen or shared. (It should be straightforward to see that if a policy does not follow the prescribed decisions listed in this proof, a similar instance can be designed which leads to a lower value.)

We now define the action set for agent . For simplicity, denote , for some fixed . Define , which will serve as an element that “covers” . In other words, and for any . Then . For some illustrative examples, see Figure 2.

The objective function is fully defined as

(13)

for any . Notice that the term is binary, indicating whether , and if so, appropriately “covers” any elements in . It should be clear (especially considering the examples in Figure 2) that such an is submodular, monotone, and normalized.

Under any reasonable strategy (as described above), the selections for the first agents are unique elements, thus . The selection for agent is elements, thus . The optimal selections for agent are elements not in . Then the optimal selection for agent is with additional elements in , thus . This implies that

(14)

Since this performance is achieved by any optimal policy for this particular problem instance, we conclude that this expression is an upper bound on the worst-case guarantee.

We now show the lower bound on by assuming takes on the form in (5)–(6), and showing that for any , cannot be lower than the lower bound in (11). We first define the function :

(15)

for . This can be thought of as the marginal contribution of given .
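
Equation (15) is stripped above; the standard marginal-contribution definition that matches the surrounding sentence (stated here under assumed notation) is

$\Delta(A \mid B) \;:=\; f(A \cup B) - f(B), \qquad A, B \subseteq S,$

i.e., the additional value obtained by adding the elements of $A$ when $B$ has already been selected.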

Lemma 1

For a problem instance , assume that is an augmented greedy policy. Then

(16)

The proof of this lemma is found in Appendix A. We can now show the following:

(17)
(18)
(19)
(20)

where (17) is true by monotonicity and by definition of , (18) is true by submodularity, and (19) is true by Lemma 1. This implies that is greater than the lower bound in (11).

IV A Best-Case Analysis

Fig. 3: The results for the simulations described in Section IV. Here we randomly generate an objective and action sets for 3 agents, then apply an augmented greedy policy followed by a nominal greedy policy. This plot shows a histogram of the performance ratio between the two across the simulations. A value greater than 1 shows that there was an increase in performance; note that the bar at value 1.00 is cut off to show the curve for the other values. The main takeaways are summarized in the table: roughly 2/3 of simulations see an increase in performance, whereas none see a decrease.

In this section we consider an optimistic approach to understanding how an increase in message passing affects the performance of the system. We assume the use of an augmented greedy policy, which Theorem 1 shows is near-optimal, and study its potential effects. In particular, since the nominal greedy algorithm in (2) is merely the augmented greedy algorithm with no message passing, comparing solutions of the two on an instance-by-instance basis can give insight into the potential benefits of message passing. While the previous section focused on worst-case scenarios, here we ask the question: how much could action sharing increase performance for any individual problem instance?

Figure 3 answers this question for 100 million problem instances, randomly chosen; the details are described at the end of this section. For each instance, an augmented greedy policy was implemented, as well as a nominal greedy policy. We report the ratio of their performances, meaning that any number greater than 1 is a strict increase in performance. The theoretical upper and lower bounds which will be shown in Theorem 2 are the vertical lines in orange. We can see from the histogram that in about 2/3 of problem instances there is an increase in performance, and no instance shows a decrease. Furthermore, over 1/4 of instances show at least a 10% increase and over 5% show at least a 20% increase. Thus we see in simulation that message passing can improve performance significantly, which is stated more formally with the following result:

Theorem 2

For any , any , and any problem instance , the following holds:

(21)

where is the solution to an augmented greedy algorithm and satisfies (5) for all , and is the solution to a nominal greedy algorithm and satisfies (2) for all . Furthermore, for each inequality there exists a problem instance which shows tightness.

For space considerations, we present this result without its proof. (The interested reader may find the full proof at https://www.ece.ucsb.edu/davidgrimsman/CDC2020.pdf; note that the proof will be included in future iterations of this work.) The proof focuses mainly on the upper bound in (21), relying, in a similar fashion to Theorem 1, on the properties of the construction and on the submodularity and monotonicity of the objective. The lower bound, somewhat trivially, corresponds to the lower bound in Theorem 1.

While the result in Theorem 1 was a somewhat negative result (in general, increasing does not provide much higher performance guarantees), here we see an upside to message passing. For any given instance , one could increase by a factor of . And, though this will not be the case for every problem instance, we see in Figure 3 that one can expect to increase by some amount.

Theorem 2 also gives insight into how much increasing might help any given problem instance. Whereas previously we saw that increasing above offered no further guarantees, here we see potential benefits to increasing all the way up to . We also see the potential drawback: on any given instance, could decrease by almost a factor of 1/2, although we did not see scenarios like this surface in simulation.

We now describe the details of the simulations in Figure 3. There are 3 agents and 6 elements; each element is assigned a value uniformly at random, and these element values define the objective. The action sets for each agent are also uniformly randomly assigned with replacement, i.e., agent 1 could be assigned the same element twice. This gives the sizes of the action sets some variance among the simulations. We then place restrictions on the number of elements each agent may select and share. In this setting, we run the algorithm twice on each instance: once implementing an augmented greedy policy and once implementing a nominal greedy policy. We then report the ratio of their performances. This simulation was run 100 million times to generate the histogram in Figure 3. We see that in about 2/3 of cases, the augmented greedy algorithm improves performance.
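
A minimal Python sketch of this Monte Carlo comparison is given below. The per-agent selection budget (one element), the message budget (one element), the number of draws per local set, and the number of trials are assumptions, since those values are stripped from the text above; the objective is a weighted coverage function consistent with the description of randomly valued elements.

```python
import random

random.seed(1)
N_AGENTS, N_ELEMENTS, DRAWS, TRIALS = 3, 6, 3, 10_000   # DRAWS and TRIALS are assumed

def run_instance():
    weights = [random.random() for _ in range(N_ELEMENTS)]
    f = lambda sel: sum(weights[e] for e in set(sel))    # weighted coverage objective
    # Local sets drawn uniformly with replacement, so their (deduplicated) sizes vary.
    local = [{random.randrange(N_ELEMENTS) for _ in range(DRAWS)} for _ in range(N_AGENTS)]

    def greedy(message_budget):
        chosen, shared = set(), set()
        for own in local:
            pool = own | shared
            pick = max(pool, key=lambda e: f(chosen | {e}) - f(chosen))
            chosen.add(pick)
            # Share the best own elements that were not selected.
            leftovers = sorted(own - chosen, key=lambda e: f(chosen | {e}) - f(chosen),
                               reverse=True)
            shared |= set(leftovers[:message_budget])
        return f(chosen)

    nominal, augmented = greedy(0), greedy(1)
    return augmented / nominal if nominal > 0 else 1.0

ratios = [run_instance() for _ in range(TRIALS)]
improved = sum(r > 1.0 for r in ratios) / TRIALS
print(f"fraction of instances where message passing strictly helps: {improved:.3f}")
```

In this toy setup the fraction of improved instances will not match the paper's reported 2/3 exactly, since the budget and set-size parameters here are guesses.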

(a) Ratio of performance using the criterion in (3).
(b) Ratio of performance using the D-optimality estimation criterion (without the log).
Fig. 4: Histograms of the relative performance for random simulations of the 2-vehicle camera estimation problem. A large number of samples fall at 1, which is interpreted as augmented greedy performing the same as nominal greedy. Instances where the value is greater than one indicate that augmented greedy performed better than nominal greedy in the simulated scenario. Figure (a) shows the criterion of (3), which is the log of the D-optimality estimation criterion. Figure (b) directly shows the D-optimality criterion (without the log). The height of the bin corresponding to the ratio of 1 has been cropped.

V Numerical Example

In this section, we present the results for instances of the state estimation problem in Section II-B, where two flying vehicles move in straight lines, each carrying a side-looking camera with planar projective geometry, a fixed focal length, and measurement noise in the image plane with a given standard deviation in pixels. A large number of instances were created with the target fixed at the origin, drawing the initial positions of the two vehicles uniformly at random in a square and the direction of motion uniformly at random. Each vehicle collects 100 independent measurements along a path of length 50. Details of how to construct the corresponding matrices in (3) that quantify the information gain of camera measurements are found in [20].

Figure 4 summarizes the results in terms of the ratio between the performance of the augmented greedy algorithm (5)–(6) and the classical greedy algorithm (2); tie breaks were resolved arbitrarily. In both cases, each algorithm selects the measurements to be sent to the centralized location for fusion. For the augmented greedy algorithm, the first vehicle additionally selects measurements to be shared with the second vehicle. This means that the second vehicle now has the option of picking its best measurements to send to the fusion center out of the set consisting of its own 100 collected measurements together with those shared by the first vehicle. Because the number of measurements is very large, we cannot actually find the measurements that maximize (5)–(6) and (2). Instead, we have used the usual greedy algorithm to make this selection, which is guaranteed to achieve no less than 1 - 1/e of the optimal.

While measurement selection is based on the submodular function (3), it is more common (and meaningful) to judge performance gains in terms of the D-optimality estimation criterion (without the log function). The ratio of the performance in terms of the D-optimality estimation criterion is shown in Figure 4(b). Figure 5 shows a simulation example corresponding to a large performance improvement of the augmented greedy policy over the basic one.

Fig. 5: Simulation example corresponding to a large performance improvement of the augmented greedy policy over the basic one. The blue line is the path of the vehicle that makes greedy selections first, and the blue '*' represent its selections. The red line is the path of the second vehicle and the red '*' represent its selections. The red '*' on the blue path are the selections chosen by the second vehicle from among the measurements shared by the first. The black '*' is the ground truth of the object being tracked. In this example, the 2nd vehicle is moving almost directly towards the target and therefore has very little diversity in viewing angle. Because of this, it is beneficial for this vehicle to transmit to the fusion station some of the measurements collected by vehicle 1 and shared with vehicle 2.

VI Conclusion

In this paper we have shown how message passing affects the performance guarantees of a group of agents using the greedy algorithm. We showed that increasing the message budget beyond a certain threshold provides no additional worst-case guarantees. We also showed that a simple augmented greedy policy gives near-optimal performance guarantees. Using such a policy, this paper explored how much performance could increase for any problem instance, and showed by simulation that these results are relevant in a real-world application.

Future work will continue to explore message passing, first by attempting to create tighter bounds on the worst-case guarantees. We also have some preliminary results related to settings where (5) and (6) cannot be computed directly, only approximated. Another direction could be to apply these results to situations where agents can only see the actions of a subset of previous agents, and again ask what actions should be shared and selected.

References

  • [1] G. Calinescu, C. Chekuri, M. Pál, and J. Vondrák (2007) Maximizing a submodular set function subject to a matroid constraint. In International Conference on Integer Programming and Combinatorial Optimization, pp. 182–196. Cited by: §I.
  • [2] A. Clark and R. Poovendran (2011) A submodular optimization framework for leader selection in linear multi-agent systems. In Conference on Decision and Control and European Control Conference, pp. 3614–3621. Cited by: §I.
  • [3] M. Corah and N. Michael (2018) Distributed submodular maximization on partition matroids for planning on large sensor networks. In Conference on Decision and Control, pp. 6792–6799. Cited by: §I.
  • [4] M. Corah and N. Michael (2019) Distributed matroid-constrained submodular maximization for multi-robot exploration: theory and practice. Autonomous Robots 43 (2), pp. 485–501. Cited by: §I.
  • [5] U. Feige (1998) A threshold of ln n for approximating set cover. Journal of the ACM (JACM) 45 (4), pp. 634–652. Cited by: §I.
  • [6] M. L. Fisher, G. L. Nemhauser, and L. A. Wolsey (1978) An analysis of approximations for maximizing submodular set functions—II. In Polyhedral combinatorics, pp. 73–87. Cited by: §I, §II-A.
  • [7] M. Gairing (2009) Covering games: approximation through non-cooperation. In International Workshop on Internet and Network Economics, pp. 184–195. Cited by: §I.
  • [8] B. Gharesifard and S. L. Smith (2017) Distributed submodular maximization with limited information. Trans. on Control of Network Systems 5 (4), pp. 1635–1645. Cited by: §I.
  • [9] M. Gomez-Rodriguez, J. Leskovec, and A. Krause (2012) Inferring networks of diffusion and influence. ACM Transactions on Knowledge Discovery from Data (TKDD) 5 (4), pp. 21. Cited by: §I.
  • [10] D. Grimsman, M. S. Ali, J. P. Hespanha, and J. R. Marden (2018) The impact of information in greedy submodular maximization. Trans. on Control of Network Systems. Cited by: §I.
  • [11] D. Grimsman, J. P. Hespanha, and J. R. Marden (2018) Strategic information sharing in greedy submodular maximization. In 2018 IEEE Conference on Decision and Control (CDC), pp. 2722–2727. Cited by: §I.
  • [12] D. Kempe, J. Kleinberg, and É. Tardos (2003) Maximizing the spread of influence through a social network. In Proceedings of the ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 137–146. Cited by: §I.
  • [13] A. Krause, C. Guestrin, A. Gupta, and J. Kleinberg (2006) Near-optimal sensor placements: maximizing information while minimizing communication cost. In Proceedings of the International Conference on Information Processing in Sensor Networks, pp. 2–10. Cited by: §I.
  • [14] J. Leskovec, A. Krause, C. Guestrin, C. Faloutsos, C. Faloutsos, J. VanBriesen, and N. Glance (2007) Cost-effective outbreak detection in networks. In Proceedings of the ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 420–429. Cited by: §I.
  • [15] H. Lin and J. Bilmes (2011) A class of submodular functions for document summarization. In Proceedings of the Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, pp. 510–520. Cited by: §I.
  • [16] J. R. Marden (2017) The role of information in distributed resource allocation. IEEE Transactions on Control of Network Systems 4 (3), pp. 654–664. Cited by: §I.
  • [17] B. Mirzasoleiman, A. Karbasi, R. Sarkar, and A. Krause (2013) Distributed submodular maximization: identifying representative elements in massive data. In Advances in Neural Information Processing Systems, pp. 2049–2057. Cited by: §I.
  • [18] G. L. Nemhauser, L. A. Wolsey, and M. L. Fisher (1978) An analysis of approximations for maximizing submodular set functions I. Mathematical Programming 14 (1), pp. 265–294. Cited by: §I.
  • [19] G. Qu, D. Brown, and N. Li (2019) Distributed greedy algorithm for multi-agent task assignment problem with submodular utility functions. Automatica 105, pp. 206–215. Cited by: §I.
  • [20] M. R. Kirchner, J. P. Hespanha, and D. Garagic (2020-03) Heterogeneous measurement selection for vehicle tracking using submodular optimization. In Proc. of the 2020 IEEE Aerospace Conf., pp. 1–10. Cited by: §II-B, §V.
  • [21] A. Singh, A. Krause, C. Guestrin, W. J. Kaiser, and M. A. Batalin (2007) Efficient planning of informative paths for multiple robots. In International Joint Conference on Artificial Intelligence, Vol. 7, pp. 2204–2211. Cited by: §I.