Distributed Submodular Maximization with Parallel Execution

03/09/2020 ∙ by Haoyuan Sun, et al. ∙ The Regents of the University of California ∙ California Institute of Technology

The submodular maximization problem is widely applicable in many engineering problems where objectives exhibit diminishing returns. While this problem is known to be NP-hard for certain subclasses of objective functions, there is a greedy algorithm which guarantees a solution within at least 1/2 of the optimal. This greedy algorithm can be implemented with a set of agents, each making a decision sequentially based on the choices of all prior agents. In this paper, we consider a generalization of the greedy algorithm in which agents can make decisions in parallel, rather than strictly in sequence. In particular, we are interested in partitioning the agents, where each set of agents in the partition makes its decisions simultaneously based on the choices of prior agents, so that the algorithm terminates in a limited number of iterations. We provide bounds on the performance of this parallelized version of the greedy algorithm and show that dividing the agents evenly among the sets in the partition yields an optimal structure. We additionally show that this optimal structure remains near-optimal when the objective function exhibits a certain monotonicity property. Lastly, we show that the same performance guarantees can be achieved by the parallelized greedy algorithm even when agents can only observe the decisions of a subset of prior agents.







I Introduction

Submodular maximization is an important topic with relevance to many fields and applications, including sensor placement [1], outbreak detection in networks [2], maximizing and inferring influence in a social network [3, 4], document summarization [5], clustering [6], assigning satellites to targets [7], path planning for multiple robots [8], and leader selection and resource allocation in multiagent systems [9, 10]. An important similarity among these applications is the presence of an objective function which exhibits a “diminishing returns” property. For instance, a group of leaders can impose some influence on a social network, but the marginal gain in influence achieved by adding a new leader to the group decreases as the size of the group increases. Objective functions (such as influence) satisfying this property are submodular.

The submodular maximization problem is to choose a set of elements (such as leaders) which maximizes the submodular objective function, subject to some constraints. This problem is known to be NP-hard for certain subclasses of submodular functions [11]. Therefore, much research has focused on how to approximate the optimal solution [12, 13, 14, 15]. The overall message of this research is that simple algorithms can perform well by providing solutions which are guaranteed to be within some factor of optimal.

One such algorithm is the greedy algorithm, first proposed in [16]. It was shown in this seminal work that for certain classes of constraints the solution provided by the greedy algorithm is guaranteed to be within 1 − 1/e of the optimal, and within 1/2 of the optimal for the more general case [17]. Since then, more sophisticated algorithms have been developed to show that there are many instances of the submodular maximization problem which can be solved efficiently within the 1 − 1/e guarantee [12, 18]. It has also been shown that progress beyond this level of optimality is not possible using a polynomial-time algorithm [19].

More recent research has focused on distributed algorithms, since in many cases having a centralized agent with access to all the relevant data is untenable [20, 6, 21]. In this case, the greedy algorithm can be generalized using a set of agents, each with its own decision set. The combined set of decisions by the agents is evaluated by the submodular objective function, which they seek to maximize. Each agent chooses sequentially, maximizing its marginal contribution relative to the choices of the prior agents. In this setting, the greedy algorithm has been shown to provide a solution within 1/2 of the optimal.
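As a concrete sketch of this sequential process (the paper states the algorithm abstractly; the function names, the toy coverage objective, and the decision sets below are our own illustrative assumptions), each agent in turn picks the decision from its own permissible set with the largest marginal contribution:

```python
def sequential_greedy(decision_sets, f):
    """Each agent, in order, picks the decision from its own permissible
    set that maximizes its marginal contribution given all prior choices."""
    chosen = []
    for X_i in decision_sets:
        base = frozenset(chosen)
        # marginal contribution of x given prior choices: f(base ∪ {x}) − f(base)
        chosen.append(max(X_i, key=lambda x: f(base | {x}) - f(base)))
    return chosen

# Toy coverage objective (hypothetical): an action's value is the number of
# distinct elements covered; each decision is a frozenset of elements.
cover = lambda S: len(set().union(*S)) if S else 0
agents = [[frozenset('ab'), frozenset('c')],
          [frozenset('ab'), frozenset('d')]]
print(sequential_greedy(agents, cover))
```

Here the second agent observes that the first already covers 'a' and 'b', so it picks the decision covering 'd' instead of duplicating coverage.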

Observe that with this distributed greedy algorithm, each agent must have access to all prior agents’ decisions, meaning that the algorithm requires as many iterations as there are agents to terminate. For systems with many agents, this may be unacceptable. Therefore, researchers have explored how to parallelize the greedy algorithm [5, 22, 23]. While much has been done, many of these parallelization techniques still require some entity to have access to all decisions – in essence, part of the parallelization technique is deciding how to assign decision sets to agents.

A natural extension is to consider the case where no such centralized authority is present, i.e., the agents’ decision sets are determined a priori and cannot be modified. Terminating the greedy algorithm in a limited number of iterations thus requires partitioning the agents into sets, where each agent in a given set simultaneously chooses an action which maximizes its marginal contribution relative to the actions chosen by all the agents in earlier sets. In this setting, one can ask the following questions:

  1. What is the best way to partition the agents? Should the agents be spread evenly across the sets in the partition, or should the first or last set be larger than the others?

  2. If some additional structure is known about the submodular objective function, how does that affect the partitioning strategy?

In the non-parallelized setting, recent research has explored how the performance of the greedy algorithm degrades as we relax the information sharing constraint that each agent must have access to the decisions of all prior agents [24, 25]. Therefore, it is a natural extension to ask in the parallelized setting:

  1. If we relax the requirement that an agent in a given partition set must have access to the decisions of all agents in earlier sets, can the same level of performance be maintained by the greedy algorithm?

In response to the questions listed above, the contributions of this paper are the following:

  1. Theorem 1 shows that partitioning the agents into equally-sized sets yields the highest performance guarantee of the greedy algorithm, given a constraint on the number of iterations. The resulting performance guarantee is also stated, in terms of the number of agents and the number of iterations.

  2. Theorem 2 shows that if we know some additional structure about the submodular function, i.e., a lower bound on each agent’s relative marginal contribution, then a near-optimal partitioning strategy is the same as that of Theorem 1. Increased performance guarantees are also shown.

  3. Theorems 3 and 4 prove that the above performance guarantees on the greedy algorithm can be preserved even with less information sharing among the agents.

II Model

Consider a base set and a set function f defined on its subsets. Given sets S and A, we define the marginal contribution of A with respect to S as

Δ(A | S) := f(S ∪ A) − f(S).
In this paper, we are interested in distributed algorithms for the maximization of submodular functions, where there is a collection of decision-making agents and a set of decisions. Each decision-making agent is associated with a set of permissible decisions, and these sets form a partition of the full decision set. For the rest of the paper, we refer to decision-making agents simply as agents. The collection of decisions made by all agents is called an action, and the set of all actions is the action profile. An action is evaluated by an objective function f. Furthermore, we restrict our attention to objective functions that are submodular, i.e., that satisfy the following properties:

  • Normalized: f(∅) = 0.

  • Monotonic: f(S) ≤ f(T) for all S ⊆ T.

  • Submodular: f(S ∪ {x}) − f(S) ≥ f(T ∪ {x}) − f(T) for all S ⊆ T and x ∉ T.

For simplicity, we will refer to the set of submodular functions as . The goal of the submodular maximization problem is to find:


For convenience, we overload f to accept single elements, writing f(x) for f({x}), and to accept multiple inputs, writing f(S, T) for f(S ∪ T). We also use standard shorthand for sets of consecutive nonnegative integers.

A distributed greedy algorithm can be used to approximate the problem as stated in (2). In the greedy algorithm, the agents make decisions in sequential order according to their names, and each agent attempts to maximize its marginal contribution with respect to the choices of prior agents:


Note that the greedy solution may not be unique. In the context of this paper, we take the greedy solution to be the worst among all possible greedy solutions.

Consider the solution produced by the greedy algorithm for a given objective function and action profile. To measure the quality of the greedy algorithm, we consider its competitive ratio: the worst-case ratio of the value of the greedy solution to the value of an optimal action:


The well-known result from [17] states that the competitive ratio is 1/2, which means that (3) guarantees that the performance of the greedy solution is always within 1/2 of optimal. Furthermore, only one agent can make a decision at any given time, and thus a solution is found in precisely as many iterations as there are agents. The central focus of this work is on characterizing the achievable competitive ratio in situations where a system designer does not have the luxury of one iteration per agent to construct a decision. To this end, we introduce the parallelized greedy algorithm, which allows multiple agents to make decisions at the same time. More formally, we consider a situation where the system is given a limited number of iterations and needs to come up with an iteration assignment so that each agent makes its decision based only on the choices made by agents from earlier iterations:


Note that, to maintain consistency with (3), the iteration assignment must preserve the implicit ordering of the greedy algorithm: an agent with a smaller name is never assigned a later iteration than an agent with a larger name.
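A minimal sketch of the parallelized variant follows (our own naming; the toy coverage objective is a hypothetical illustration, and the `concurrency` helper assumes the ceiling-of-agents-per-iteration reading suggested by the pigeonhole argument later in the paper):

```python
import math

def parallel_greedy(decision_sets, f, assignment):
    """assignment[i] gives the iteration in which agent i decides.  Agents
    sharing an iteration see only choices from strictly earlier iterations,
    so they effectively decide simultaneously."""
    n = len(decision_sets)
    chosen = [None] * n
    for t in sorted(set(assignment)):
        # decisions visible at iteration t: everything decided before t
        prior = frozenset(chosen[j] for j in range(n) if assignment[j] < t)
        for i in range(n):
            if assignment[i] == t:
                chosen[i] = max(decision_sets[i],
                                key=lambda x: f(prior | {x}) - f(prior))
    return chosen

def concurrency(n, k):
    # assumed definition: average agents per iteration, rounded up
    return math.ceil(n / k)

cover = lambda S: len(set().union(*S)) if S else 0
agents = [[frozenset('ab'), frozenset('c')],
          [frozenset('ab'), frozenset('d')]]
# Both agents in iteration 1: neither sees the other, so both may grab 'ab'.
print(parallel_greedy(agents, cover, [1, 1]))
```

Running the same instance with assignment `[1, 2]` recovers the sequential greedy outcome, illustrating the "blind spot" that parallel execution introduces.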

We are interested in the performance of (5) given a fixed number of agents and a fixed number of iterations. We define the competitive ratio of an iteration assignment as:


where the solution is produced by the parallelized greedy algorithm subject to the given objective function, action profile, and iteration assignment. Denote the set of all possible iteration assignments with a given number of agents and at most a given number of iterations as:


So we seek to find the best possible competitive ratio, and the iteration assignment which achieves the optimal competitive ratio (if such an assignment exists):


Additionally, we want to know on average how many agents are making decisions in the same iteration, so we define the concurrency as the number of agents divided by the number of iterations, rounded up. Later we will find the competitive ratio to be heavily dependent on this measure.

(a) Illustration of the weighted subset cover problem. The 5 agents are represented by nodes. The elements of the target set are represented by boxes; each box's label indicates its weight. Each dark line indicates a decision mapped to some subset of the target set.
Optimal 13
Greedy 11
Parallel () 8
Parallel () 9
(b) For the above weighted set cover problem, this table shows the decisions made by several algorithms described in this paper. The two parallel rows use the optimal iteration assignments shown in Fig. 1(a) and 1(c), respectively. Due to space constraints, we use an element rather than a set to denote each decision. To illustrate how the greedy algorithm works: agents 1 and 2 decide at the same time; then agents 3 and 4 choose at the same time; lastly, agent 5 makes a decision, sees what agents 3 and 4 already took, and chooses the decision maximizing its marginal contribution.
Fig. 1: An instance of weighted subset cover problem and the behavior of various algorithms for this problem.

An example of the submodular maximization problem is the weighted set cover problem [18]. Given a target set and a mapping from the decisions to subsets of the target set, the value of an action profile is determined by a weight function:


Intuitively, this problem aims to “cover” as much of the target set as possible. An instance of this problem is illustrated by Figure 1.
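Since the figure's exact weights and coverage sets are not reproduced here, the following sketch uses hypothetical weights and decisions of our own to show the objective and its diminishing-returns property:

```python
# Hypothetical weighted set cover instance: each decision covers a subset
# of the target set, and an action's value is the total weight covered.
weights = {'u1': 5, 'u2': 4, 'u3': 2, 'u4': 2}
covers = {'x1': {'u1'}, 'x2': {'u1', 'u2'}, 'x3': {'u3', 'u4'}}

def f(S):
    """Total weight of target elements covered by the decisions in S."""
    covered = set().union(*(covers[x] for x in S)) if S else set()
    return sum(weights[u] for u in covered)

# Diminishing returns: adding x1 after x2 gains nothing, since x2
# already covers u1, whereas x1 alone gains 5.
print(f({'x1'}) - f(set()), f({'x2', 'x1'}) - f({'x2'}))
```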

III Optimal Parallel Structures

In this section, we present the best possible competitive ratio for the parallelized greedy algorithm and the optimal iteration assignment which achieves this bound. We perform this analysis both for general submodular objective functions and for submodular objective functions with an additional property which we call -strict monotonicity.

Note that according to (5), a pair of agents do not utilize each other’s decisions (in either direction) when they are in the same iteration. This intuitively represents “blind spots” in the parallelized greedy algorithm compared to the original greedy algorithm. One way to reduce these “blind spots” is to divide the agents evenly among the available iterations; in other words, we want the number of agents deciding in parallel to be as close to the concurrency as possible. Theorems 1 and 3 show that this idea yields the best possible iteration assignment.

Theorem 1

Given a parallelized submodular maximization problem with agents and iterations, the competitive ratio is:


In particular, when ,




The above optimal iteration assignment is illustrated by Figure 2. This theorem shows that the competitive ratio is inversely proportional to concurrency. We will present a more general version of Theorem 1 and its proof in Section IV-C.

(a) The optimal assignment for , as described in (12).
(b) A non-optimal assignment for .
(c) The optimal assignment for , as described in (13).
(d) A non-optimal assignment for .
Fig. 2: Examples of optimal iteration assignments as given by Theorem 1. Each agent is represented by a node. Each column, labeled by a Roman numeral, contains all agents executed at a given iteration. Also shown are some examples of iteration assignments that are not optimal. According to Theorem 1, the competitive ratios in Fig. 1(a) and Fig. 1(c) are equal. On the other hand, in Section IV-C, Lemma 1 shows that the competitive ratios in Fig. 1(b) and Fig. 1(d) are also equal to each other. Also, according to Theorem 2, when the objective function is -strictly monotone, Fig. 1(a) and Fig. 1(c) are nearly optimal, but Fig. 1(b) and 1(d) are not.

Recent research, such as [21], has focused on whether imposing additional structure on the objective function and action profile helps the underlying greedy algorithm yield better-performing solutions. Here, we introduce a property that enables the parallelized greedy algorithm to yield a higher competitive ratio.

Fig. 3: The lower bound (16) on the optimal competitive ratio when the objective function is -strictly monotone, as given in Theorem 2. For several values of the concurrency, we plot the lower bound against the monotonicity parameter. Note that at one endpoint the lower bound is consistent with (11), and at the other the competitive ratio becomes 1, as expected from our intuition. We can also observe that the lower bound increases approximately linearly for large concurrency, demonstrating that the property of -strict monotonicity is effective at increasing the competitive ratio.
Definition 1

Objective function is said to be -strictly monotonic for some real value if for all and . For simplicity, we will refer to the set of functions that are both submodular and -strictly monotone as .

We can extend the competitive ratios defined in (6) and (8) to the setting where they are subject to -strict monotonicity:


The theorem below presents an upper bound on the best possible competitive ratio subject to -strict monotonicity and demonstrates that the iteration assignment (13) nearly achieves this bound.

Theorem 2

Given a parallelized submodular and -strictly monotone maximization problem with a fixed number of agents and iterations, the following holds:


Furthermore, the iteration assignment in (13) achieves the lower bound.

Note that at one extreme of the monotonicity parameter, (16) converges to (11), and at the other extreme it converges to 1. This confirms the intuition that for a larger parameter, actions have less “overlap” and as a result the greedy algorithm can perform closer to the optimal. For more detail, Figure 3 illustrates how the lower bound changes with the parameter at different concurrencies. Also, the lower and upper bounds converge toward each other, so they are close when the concurrency is high. We will present a more general version of Theorem 2 and its proof in Section IV-D.

IV Parallelization as Information Exchange

IV-A Preliminaries

In this section, we will employ several concepts from graph theory. Throughout this section, we will assume that we have an undirected graph:

Definition 2

Nodes form a clique if every pair of distinct nodes among them is joined by an edge. A clique cover of the graph is a partition of the nodes so that each set in the partition forms a clique. The clique cover number is the least number of cliques necessary to form a clique cover.

Definition 3

Nodes form an independent set if no pair of distinct nodes among them is joined by an edge. A maximum independent set is an independent set with the largest possible number of nodes. The independence number is the size of a maximum independent set.
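For the small information graphs used as examples in this paper, the independence number can be checked by exhaustive search (a sketch with our own naming; the example graph is illustrative, not one from the paper's figures):

```python
from itertools import combinations

def independence_number(nodes, edges):
    """Size of a largest independent set, found by brute force:
    try subsets from largest to smallest and return the first subset
    containing no edge."""
    E = {frozenset(e) for e in edges}
    for size in range(len(nodes), 0, -1):
        for sub in combinations(nodes, size):
            if all(frozenset(p) not in E for p in combinations(sub, 2)):
                return size
    return 0

# Path 1-2-3: {1, 3} is independent, so the independence number is 2;
# the cliques {1, 2} and {3} form a clique cover of size 2.
print(independence_number([1, 2, 3], [(1, 2), (2, 3)]))
```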

IV-B Parallelization and Information

(a) The optimal assignment for , as described in (12).
(b) A non-optimal assignment for .
(c) The optimal assignment for , as described in (13).
(d) A non-optimal assignment for .
Fig. 4: The induced information graphs of the iteration assignments from Figure 2, as discussed in Section IV. In an information graph, an edge between two agents represents that one agent’s iteration precedes the other’s, and hence the later agent requires knowing the choice made by the earlier agent before making its own decision. Each edge can be thought of as a flow of information between the agents, in which a prior agent shares its choice so that later agents can decide.

In both (3) and (5), each agent uses the choices of prior agents to make its own decision. Alternatively, we can view this process as an agent communicating its decision to other agents who depend on this piece of information. We can thus model this information exchange as an undirected graph, where the nodes represent the agents and each edge represents information being exchanged between two agents. Since the greedy algorithm has an implicit ordering induced by the agents’ names, for every edge the later-named agent requires knowing the choice of the earlier-named agent, but the earlier agent does not access the later agent’s decision. Hence, we can use this graph, which we will call the information graph, to determine the set of prior agents whose decisions a given agent accesses. From here, it is natural to consider the generalized greedy algorithm proposed in [24]:


First, notice that the parallelized greedy algorithm as defined in (5) is a special case of (17). In the parallelized greedy algorithm, each agent requires information from all agents executed in prior iterations. Therefore, an iteration assignment induces a corresponding information graph in which two agents are joined whenever they are assigned to different iterations. Figure 4 illustrates how the iteration assignments from Figure 2 induce corresponding information graphs. For the rest of this section, when we mention an iteration assignment, we are interested in its induced information graph.
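Under the stated rule, the induced information graph can be built directly from an iteration assignment (a sketch with our own naming; agents are indexed from 0 here purely for convenience):

```python
def induced_information_graph(assignment):
    """Edges join every pair of agents assigned to different iterations:
    the agent in the earlier iteration informs the one in the later."""
    n = len(assignment)
    return [(i, j) for i in range(n) for j in range(n)
            if assignment[i] < assignment[j]]

# Agents 0 and 1 in iteration 1, agent 2 in iteration 2: agent 2 must
# observe both earlier agents, while 0 and 1 share no edge.
print(induced_information_graph([1, 1, 2]))
```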

Conversely, an information graph induces an ordering with which to parallelize the generalized greedy algorithm. If all the agents whose decisions agent j requires have made their decisions while another agent i is still undecided, then we can let agents i and j make their decisions in the same iteration without affecting the algorithm. Intuitively, we can think of information being relayed through paths in the information graph, and the earliest iteration in which an agent can decide depends on the longest path leading to its node. Formally, a simple induction argument shows that the following function determines the earliest iteration in which an agent could decide:


This function can be used to construct a parallelization of the greedy algorithm in which each agent makes its decision in the earliest iteration the information graph allows. From here we can define the set of graphs which induce a parallelization with at most a given number of iterations:
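The depth function described above can be sketched as follows (our own naming), assuming agents are numbered so that every edge points from a lower- to a higher-numbered agent, as the implicit ordering of the greedy algorithm requires:

```python
def earliest_iteration(n, edges):
    """phi[j] is one more than the largest phi among the agents that j
    observes (iteration 1 if it observes no one) -- equivalently, one more
    than the longest path in the information graph ending at node j."""
    in_nbrs = {j: [] for j in range(1, n + 1)}
    for i, j in edges:            # agent j observes agent i, with i < j
        in_nbrs[j].append(i)
    phi = {}
    for j in range(1, n + 1):     # processing in name order respects edges
        phi[j] = 1 + max((phi[i] for i in in_nbrs[j]), default=0)
    return phi

# A chain 1→2→3 forces three iterations; a star 1→2, 1→3 needs only two.
print(earliest_iteration(3, [(1, 2), (2, 3)]))
print(earliest_iteration(3, [(1, 2), (1, 3)]))
```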


From the definitions, it is clear that the induced information graph of any iteration assignment belongs to this set. Hence we can adopt competitive ratios for information graphs so that (6), (8) and (9) become:


and (14) and (15) become:


where the solution is produced by the generalized greedy algorithm subject to the given objective function, action profile, and information graph.

It is natural to ask whether the above generalization of the parallelized greedy algorithm yields a higher competitive ratio under the same number of agents and iterations. Also, in some applications, the information exchange may incur costs, and therefore an information graph with many edges is undesirable. So we wish to answer a second question: whether we can achieve the same competitive ratios as in (11) and (16) with information graphs that have fewer edges than those of (12) and (13). Theorems 3 and 4, presented below, show that generalizing to information graphs does not yield a higher competitive ratio, but it does allow us to achieve the same optimal competitive ratio with fewer edges.

IV-C Extension of Theorem 1 to Information Graphs

We will present and prove a version of Theorem 1 generalized to information graphs; Theorem 1 then follows from the same logic.

Theorem 3

Given a parallelized submodular maximization problem with agents and iterations, the competitive ratio is:


In particular, when , for the following edge set , achieves the equality :


Otherwise, for the following edge set , achieves the equality :


Figure 5 illustrates the aforementioned optimal information graphs. Note that (26) and (27) are subgraphs of the information graphs induced by (12) and (13), respectively, and contain substantially fewer edges. This is because every agent requires information from just one agent from each prior time step. Additionally, (27) is “delightfully parallel”: we can partition the agents into threads according to the clique in which they reside, and no information needs to be exchanged among threads.

(a) The optimal graph for , as described in (26).
(b) The optimal graph for , as described in (27).
Fig. 5: Examples of optimal information graphs as given by Theorem 3. The agents, represented by nodes, are implicitly named from left to right, and the Roman numeral labels indicate the iteration in which the agents will be executed. Note that Fig. 4(a) is a subgraph of Fig. 3(a) and Fig. 4(b) is a subgraph of Fig. 3(c). All four of these graphs achieve their respective optimal competitive ratios, which all happen to coincide. However, both Fig. 4(a) and Fig. 4(b) have 4 edges, whereas Fig. 3(a) has 6 edges and Fig. 3(c) has 8 edges. Also, according to Theorem 4, when the objective function is -strictly monotone, Fig. 4(a) and Fig. 4(b) are nearly optimal.

To prove Theorem 3, we utilize results from [25] that relate the competitive ratio of a graph to its independence number and clique cover number:

Lemma 1

Given any information graph ,


Additionally, if there exists and a maximum independent set s.t. for some , then


To better leverage Lemma 1, it is also useful to show that the independence number is related to the concurrency:

Lemma 2

For any , .

Consider the set of agents executed in each iteration. Each such set must be an independent set, since agents in the same iteration do not exchange information. By the pigeonhole principle, some iteration contains at least as many agents as the concurrency; hence the independence number is at least the concurrency.

If , combining Lemmas 1 and 2 yields . Now let be the graph described in (26), and we can show that . First note that for any , and if , . Hence this graph is in . Let , and note that for all , . Then we have:


where (30a) and (30g) follow from monotonicity, (30b) and (30f) follow from telescoping sums, (30c) follows from submodularity, (30d) follows from the graph structure as defined in (26), and (30e) follows from the definition of the greedy algorithm as stated in (17). From here we can conclude that when .

Now we consider the case where . If , then by Lemma 1, . If , then , and applying the pigeonhole principle on , there exists some so that . The above is true because the condition ensures that . Therefore there must exist some so that . Then, Lemma 1 implies that . Now let be the graph described by (27). For any , , hence this graph is in . Also note that it consists of disjoint cliques; therefore . By Lemma 1, we have , hence the equality is achieved.

IV-D Extension of Theorem 2 to Information Graphs

We will prove a version of Theorem 2 generalized to information graphs, and the original theorem follows from the same logic.

Theorem 4

Given a parallelized submodular and -strictly monotone maximization problem with a fixed number of agents and iterations, the following holds:


Furthermore, the graph as defined in (27) achieves the lower bound .

To prove Theorem 4, we present a lower and an upper bound on the competitive ratio subject to -strict monotonicity, in a similar fashion to Lemma 1. The following lemma is proven in the Appendix.

Lemma 3

Given any information graph ,


Now we return to the proof of Theorem 4. The upper bound in (31) follows from combining Lemma 2 and the upper bound in Lemma 3. Consider the graph described by (27); its relevant graph parameters were already computed in Section IV-C. Hence, from the lower bound in Lemma 3, we conclude that this graph achieves the lower bound in (31), so we are done.

V Conclusion

In this paper, we derived bounds on the competitive ratio of the parallelized greedy algorithm, both for submodular objective functions and for those with the additional property of -strict monotonicity. We also provided the optimal design which achieves these bounds and showed that a graph-theoretic approach yields more efficient parallelization that still achieves the same competitive ratio.

There are several directions for future research. One possibility is to consider whether imposing other structural properties on the objective functions can also improve the competitive ratio of the greedy algorithm. In particular, we are interested in properties that consider a fixed number of actions at once, since -strict monotonicity considers arbitrarily many actions at once. Another possible direction is to consider applying something other than the marginal contribution in making the greedy decisions.

[Proof of Lemma 3]

We first show the lower bound. In this proof, we consider a minimal clique cover of the information graph and, for each node, the clique containing it. We will upper bound the optimal value in terms of the cliques' values. Then we express the resulting bound as a concave function and use convexity to derive the final lower bound.

First, we need an appropriate objective function and action profile. We may assume, without affecting the competitive ratio, that all agents' decisions are distinct. Suppose not, and two agents share a decision; then we transform the action profile by replacing the duplicated decision with a fresh one, and define the transformed objective as:


where . Note that the above transformation does not affect the competitive ratio because and . Also, from some simple algebra:


where , . From (34), it is easy to verify that the transformed function satisfies all properties of submodular functions and -strict monotonicity; for brevity, we do not show this explicitly here. Hence, for the rest of the proof, we can safely assume that all agents' decisions are distinct.

We bound through strict monotonicity:


where (35a) follows from submodularity, (35b) follows from the fact that the sets are disjoint, (35c) follows from -strict monotonicity, and (35d) is a telescoping sum.

Then we bound using properties of submodular functions and the greedy algorithm.


where (36a) and (36f) are telescoping sums, (36b) and (36d) follow from submodularity, (36e) follows from the fact that ’s are disjoint, and (36c) follows from the greedy algorithm as defined in (17).

Combining (35) and (36), we have that


And we can upper bound the remaining terms using strict monotonicity in the same manner as (35).


Hence for any ,


where we impose, without loss of generality, a normalization. Note that the RHS is concave. Recall that Jensen’s inequality implies that a sum of a convex function's values with a fixed total argument is minimized when the arguments are all equal. Therefore, substituting equal values into (37) yields:


Hence we derived the lower bound.

Now we show the upper bound. Fix a maximum independent set of the information graph. Consider an objective function and action profile determined by:


Define the objective function so that for any ,


Note that through this construction, the objective is a submodular function and is also -strictly monotone. It also has the following properties for any :

  1. .

  2. for any because by definition, we have .

  3. for any

From these properties, the agents in the independent set are equally incentivized to pick either option. In the greedy solution, they pick one family of actions, and in the optimal solution they pick the other. Comparing the resulting values yields the claimed upper bound.


  • [1] A. Krause, C. Guestrin, A. Gupta, and J. Kleinberg, “Near-optimal sensor placements: Maximizing information while minimizing communication cost,” in Proceedings of the International Conference on Information Processing in Sensor Networks.   ACM, 2006, pp. 2–10.
  • [2] J. Leskovec, A. Krause, C. Guestrin, C. Faloutsos, J. VanBriesen, and N. Glance, “Cost-effective outbreak detection in networks,” in Proceedings of the ACM SIGKDD international conference on Knowledge discovery and data mining.   ACM, 2007, pp. 420–429.
  • [3] D. Kempe, J. Kleinberg, and É. Tardos, “Maximizing the spread of influence through a social network,” in Proceedings of the ACM SIGKDD international conference on Knowledge discovery and data mining.   ACM, 2003, pp. 137–146.
  • [4] M. Gomez-Rodriguez, J. Leskovec, and A. Krause, “Inferring networks of diffusion and influence,” ACM Transactions on Knowledge Discovery from Data (TKDD), vol. 5, no. 4, p. 21, 2012.
  • [5] H. Lin and J. Bilmes, “A class of submodular functions for document summarization,” in Proceedings of the Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1.   Association for Computational Linguistics, 2011, pp. 510–520.
  • [6] B. Mirzasoleiman, A. Karbasi, R. Sarkar, and A. Krause, “Distributed submodular maximization: Identifying representative elements in massive data,” in Advances in Neural Information Processing Systems, 2013, pp. 2049–2057.
  • [7] G. Qu, D. Brown, and N. Li, “Distributed greedy algorithm for multi-agent task assignment problem with submodular utility functions,” Automatica, vol. 105, pp. 206–215, 2019.
  • [8] A. Singh, A. Krause, C. Guestrin, W. J. Kaiser, and M. A. Batalin, “Efficient planning of informative paths for multiple robots,” in International Joint Conferences on Artificial Intelligence, vol. 7, 2007, pp. 2204–2211.
  • [9] A. Clark and R. Poovendran, “A submodular optimization framework for leader selection in linear multi-agent systems,” in Conference on Decision and Control and European Control Conference.   IEEE, 2011, pp. 3614–3621.
  • [10] J. R. Marden, “The role of information in distributed resource allocation,” IEEE TCNS, vol. 4, no. 3, pp. 654–664, Sept 2017.
  • [11] L. Lovász, “Submodular functions and convexity,” in Mathematical Programming The State of the Art.   Springer, 1983, pp. 235–257.
  • [12] G. Calinescu, C. Chekuri, M. Pál, and J. Vondrák, “Maximizing a submodular set function subject to a matroid constraint,” in International Conference on Integer Programming and Combinatorial Optimization.   Springer, 2007, pp. 182–196.
  • [13] M. Minoux, “Accelerated greedy algorithms for maximizing submodular set functions,” in Optimization Techniques.   Springer, 1978, pp. 234–243.
  • [14] N. Buchbinder, M. Feldman, J. Seffi, and R. Schwartz, “A tight linear time (1/2)-approximation for unconstrained submodular maximization,” SIAM Journal on Computing, vol. 44, no. 5, pp. 1384–1402, 2015.
  • [15] J. Vondrák, “Optimal approximation for the submodular welfare problem in the value oracle model,” in ACM Symposium on Theory of Computing.   ACM, 2008, pp. 67–74.
  • [16] G. L. Nemhauser, L. A. Wolsey, and M. L. Fisher, “An analysis of approximations for maximizing submodular set functions I,” Mathematical Programming, vol. 14, no. 1, pp. 265–294, 1978.
  • [17] M. L. Fisher, G. L. Nemhauser, and L. A. Wolsey, “An analysis of approximations for maximizing submodular set functions—ii,” in Polyhedral combinatorics.   Springer, 1978, pp. 73–87.
  • [18] M. Gairing, “Covering games: Approximation through non-cooperation,” in International Workshop on Internet and Network Economics.   Springer, 2009, pp. 184–195.
  • [19] U. Feige, “A threshold of ln n for approximating set cover,” Journal of the ACM (JACM), vol. 45, no. 4, pp. 634–652, 1998.
  • [20] A. Clark, B. Alomair, L. Bushnell, and R. Poovendran, Submodularity in dynamics and control of networked systems.   Springer, 2016.
  • [21] M. Corah and N. Michael, “Distributed submodular maximization on partition matroids for planning on large sensor networks,” in Conference on Decision and Control.   IEEE, 2018, pp. 6792–6799.
  • [22] X. Pan, S. Jegelka, J. E. Gonzalez, J. K. Bradley, and M. I. Jordan, “Parallel double greedy submodular maximization,” in Advances in Neural Information Processing Systems, 2014, pp. 118–126.
  • [23] A. Ene, H. L. Nguyen, and A. Vladu, “A parallel double greedy algorithm for submodular maximization,” arXiv preprint arXiv:1812.01591, 2018.
  • [24] B. Gharesifard and S. L. Smith, “Distributed submodular maximization with limited information,” Trans. on Control of Network Systems, vol. 5, no. 4, pp. 1635–1645, 2017.
  • [25] D. Grimsman, M. S. Ali, J. P. Hespanha, and J. R. Marden, “The impact of information in greedy submodular maximization,” Trans. on Control of Network Systems, 2018.