1 Introduction
Submodular functions are discrete analogs of convex functions. They arise naturally in many areas, such as the study of graphs, matroids, covering problems, and facility location problems. These functions are extensively studied in operations research and combinatorial optimization
(Krause and Golovin, 2012). Recently, submodular functions have proven to be key concepts in other areas such as machine learning, algorithmic game theory, and social sciences. As such, they have been applied to a host of important problems such as modeling valuation functions in combinatorial auctions, feature and variable selection
(Krause and Guestrin, 2005a), data summarization (Lin and Bilmes, 2011), and influence maximization (Kempe et al., 2003).

Classical results in submodular optimization consider the oracle model
whereby access to the optimization objective is provided through a black box (an oracle). However, in many applications, the objective has to be estimated from data and is subject to stochastic fluctuations. In other cases, the value of the objective may only be obtained through simulation. As such, exact computation might not be feasible due to statistical or computational constraints. As a concrete example, consider the problem of
influence maximization in social networks (Kempe et al., 2003). The objective function is defined as the expectation of a stochastic process, quantifying the size of the (random) subset of nodes influenced from a selected seed set. This expectation cannot be computed efficiently, and is typically approximated via random sampling, which introduces an error in the estimate of the value of a seed set. Another practical example is the exemplar-based clustering problem, which is an instance of the facility location problem. Here, the objective is the sum of similarities of all the points inside a (large) collection of data points to a selected set of centers. Given a distribution over point locations, the true objective is defined as the expected value w.r.t. this distribution, and can only be approximated as a sample average. Moreover, evaluating the function on a sample involves computation of many pairwise similarities, which is computationally prohibitive in the context of massive data sets.

In this work, we provide a formalization of such stochastic submodular maximization tasks. More precisely, we consider set functions $f: 2^V \to \mathbb{R}_+$, defined as $f(S) = \mathbb{E}_{\gamma \sim \Gamma}[f_\gamma(S)]$ for $S \subseteq V$, where $\Gamma$ is an arbitrary distribution and, for each realization $\gamma \sim \Gamma$, the set function $f_\gamma: 2^V \to \mathbb{R}_+$ is monotone and submodular (hence $f$ is monotone submodular). The goal is to maximize $f$ subject to some constraints (e.g. a cardinality constraint) having access only to i.i.d. samples $f_{\gamma \sim \Gamma}$.
Methods for submodular maximization fall into two major categories: (i) the classic approach is to directly optimize the objective using discrete optimization methods (e.g. the Greedy algorithm and its accelerated variants), which are state-of-the-art algorithms (both in practice and in theory), at least in the case of simple constraints, and are most widely considered in the literature; (ii) the alternative is to lift the problem into a continuous domain and exploit the continuous optimization techniques available therein (Calinescu et al., 2011). While the continuous approaches may lead to provably good results, even for more complex constraints, their high computational complexity inhibits their practicality.
In this paper we demonstrate how modern stochastic optimization techniques (such as SGD, AdaGrad (Duchi et al., 2011) and Adam (Kingma and Ba, 2015)) can be used to solve an important class of discrete optimization problems which can be modeled using weighted coverage functions. In particular, we show how to efficiently maximize them under matroid constraints by (i) lifting the problem into the continuous domain using the multilinear extension (Vondrák, 2008), (ii) efficiently computing a concave relaxation of the multilinear extension (Seeman and Singer, 2013), (iii) efficiently computing an unbiased estimate of the gradient of the concave relaxation, thus enabling (projected) stochastic gradient ascent-style algorithms to maximize it, and (iv) rounding the resulting fractional solution without loss of approximation quality (Calinescu et al., 2011). In addition to providing convergence and approximation guarantees, we demonstrate that our algorithms enjoy strong empirical performance, often achieving an order-of-magnitude speedup with small error relative to Greedy. As a result, the presented approach unleashes the powerful toolkit of stochastic gradient-based approaches for discrete optimization problems.

Our contributions.
In this paper we (i) introduce a framework for stochastic submodular optimization, (ii) provide a general methodology for constrained maximization of stochastic submodular objectives, (iii) prove that the proposed approach guarantees a $(1-1/e)$-approximation in expectation for the class of weighted coverage functions, which is the best approximation guarantee achievable in polynomial time unless $\mathrm{P} = \mathrm{NP}$, (iv) highlight the practical benefit and efficiency of using continuous stochastic optimization techniques for submodular maximization, and (v) demonstrate the practical utility of the proposed framework in an extensive experimental evaluation. We show for the first time that continuous optimization is a highly practical, scalable avenue for maximizing submodular set functions.
2 Background and problem formulation
Let $V$ be a ground set of $n$ elements. A set function $f: 2^V \to \mathbb{R}_+$ is submodular if for every $A, B \subseteq V$, it holds $f(A) + f(B) \ge f(A \cup B) + f(A \cap B)$. A function $f$ is said to be monotone if $f(A) \le f(B)$ for all $A \subseteq B$. We focus on maximizing $f$ subject to some constraints on $S$. The prototypical example is maximization under the cardinality constraint, i.e., for a given integer $k$, find $S \subseteq V$, $|S| \le k$, which maximizes $f$. Finding an exact solution for monotone submodular functions is NP-hard (Feige, 1998), but a $(1-1/e)$-approximation can be efficiently determined (Nemhauser et al., 1978). Going beyond the $(1-1/e)$-approximation is NP-hard for many classes of submodular functions (Nemhauser et al., 1978; Krause and Guestrin, 2005b). More generally, one may consider matroid constraints, whereby $\mathcal{M} = (V, \mathcal{I})$ is a matroid with the family of independent sets $\mathcal{I}$, and maximize $f$ such that $S \in \mathcal{I}$. The Greedy algorithm achieves a $1/2$-approximation (Fisher et al., 1978), but Continuous Greedy, introduced by Vondrák (2008); Calinescu et al. (2007), can achieve a $(1-1/e)$-optimal solution in expectation. Their approach is based on the multilinear extension of $f$, $F: [0,1]^V \to \mathbb{R}_+$, defined as
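The Greedy algorithm referenced above can be sketched in a few lines; this is a minimal illustration, not the accelerated (lazy) variant, and all names are ours:

```python
def greedy_max(f, ground_set, k):
    """Classic Greedy for monotone submodular maximization under |S| <= k.

    `f` maps a frozenset to a float; names here are illustrative.
    """
    S = frozenset()
    for _ in range(k):
        # Select the element with the largest marginal gain f(S + e) - f(S).
        gains = {e: f(S | {e}) - f(S) for e in ground_set - S}
        if not gains:
            break
        best = max(gains, key=gains.get)
        if gains[best] <= 0:
            break
        S |= {best}
    return S

# Toy coverage instance: picking v covers the elements in covers[v].
covers = {0: {0, 1, 2}, 1: {2, 3}, 2: {3, 4, 5}, 3: {0}}
f = lambda S: len(set().union(*(covers[v] for v in S))) if S else 0
S = greedy_max(f, frozenset(covers), 2)
```

Each iteration scans all remaining elements, which is exactly the cost profile the comparison with SGD in Section 3 refers to.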
$F(x) = \sum_{S \subseteq V} f(S) \prod_{i \in S} x_i \prod_{j \notin S} (1 - x_j)$   (1)
for all $x \in [0,1]^V$. In other words, $F(x)$ is the expected value of $f$ over sets wherein each element $i$ is included with probability $x_i$ independently. Then, instead of optimizing over $S$, we can optimize $F$ over the matroid base polytope corresponding to $\mathcal{M}$: $\mathcal{B}(\mathcal{M}) = \{x \in \mathbb{R}^n_+ \mid x(S) \le r(S)\ \forall S \subseteq V,\ x(V) = r(V)\}$, where $r(\cdot)$ is the matroid's rank function. The Continuous Greedy algorithm then finds a solution $x \in \mathcal{B}(\mathcal{M})$ which provides a $(1-1/e)$-approximation. Finally, the continuous solution is efficiently rounded to a feasible discrete solution without loss in objective value, using Pipage Rounding (Ageev and Sviridenko, 2004; Calinescu et al., 2007). The idea of converting a discrete optimization problem into a continuous one was first exploited by Lovász (1983) in the context of submodular minimization, and this approach was recently applied to a variety of problems (Vondrák, 2007; Iyer and Bilmes, 2015; Bach, 2010).

Problem formulation.
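Since the sum in (1) has exponentially many terms, the multilinear extension is in practice evaluated by sampling, exactly as its probabilistic interpretation suggests. A minimal sketch (function and parameter names are ours):

```python
import random

def multilinear_estimate(f, x, num_samples=2000, seed=0):
    """Monte-Carlo estimate of the multilinear extension F(x) = E[f(S)],
    where S contains each element i independently with probability x[i].
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(num_samples):
        S = frozenset(i for i, xi in enumerate(x) if rng.random() < xi)
        total += f(S)
    return total / num_samples

# Sanity check on a modular function: F(x) is exactly sum_i vals[i] * x[i].
vals = [1.0, 2.0, 4.0]
f = lambda S: sum(vals[i] for i in S)
est = multilinear_estimate(f, [0.5, 0.5, 0.5])  # true value: 3.5
```

For weighted coverage functions, Section 3 shows that this sampling is unnecessary because $F$ admits a closed form.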
The aforementioned results are based on the oracle model, whereby the exact value of $f(S)$ for any $S \subseteq V$ is given by an oracle. In the absence of such an oracle, we face additional challenges in evaluating $f$, both statistical and computational. In particular, consider set functions that are defined as expectations, i.e. for $S \subseteq V$ we have
$f(S) = \mathbb{E}_{\gamma \sim \Gamma}\big[f_\gamma(S)\big],$   (2)
where $\Gamma$ is an arbitrary distribution and, for each realization $\gamma \sim \Gamma$, the set function $f_\gamma: 2^V \to \mathbb{R}_+$ is submodular. The goal is to efficiently maximize $f$ subject to constraints such as the cardinality constraint, or more generally, a matroid constraint.
As a motivating example, consider the problem of propagation of contagions through a network. The objective is to identify the most influential seed set of a given size. A propagation instance (concrete realization of a contagion) is specified by a graph $G = (V, E)$. The influence $f_G(S)$ of a set of nodes $S$ in instance $G$ is the fraction of nodes reachable from $S$ using the edges $E$. To handle uncertainty in the concrete realization, it is natural to introduce a probabilistic model such as the Independent Cascade model (Kempe et al., 2003), which defines a distribution $\mathcal{G}$ over instances $G$ that share a set $V$ of nodes. The influence of a seed set $S$ is then the expectation $f(S) = \mathbb{E}_{G \sim \mathcal{G}}[f_G(S)]$, which is a monotone submodular function. Estimating the expected influence exactly is computationally demanding, as it requires summing over exponentially many instances $G$. Assuming $f$ is of the form (2), one can easily obtain an unbiased estimate of $f(S)$ for a fixed set $S$ by random sampling according to $\Gamma$. The critical question is, given that the underlying function $f$ is an expectation, can we optimize it more efficiently?
Our approach is based on continuous extensions that are linear operators on the class of set functions, namely, linear continuous extensions. As a specific example, consider the multilinear extension: we can write $F(x) = \mathbb{E}_{\gamma \sim \Gamma}[F_\gamma(x)]$, where $F_\gamma$ denotes the multilinear extension of $f_\gamma$. As a consequence, the value $F_\gamma(x)$, when $\gamma \sim \Gamma$, is an unbiased estimator for $F(x)$, and unbiased estimates of the (sub)gradients may be obtained analogously. We explore this avenue to develop efficient algorithms for maximizing an important subclass of submodular functions that can be expressed as weighted coverage functions. Our approach harnesses a concave relaxation detailed in Section 3.
Further related work. The emergence of new applications, combined with a massive increase in the amount of data, has created a demand for fast algorithms for submodular optimization. A variety of approximation algorithms have been presented, ranging from submodular maximization subject to a cardinality constraint (Mirzasoleiman et al., 2015; Wei et al., 2014; Badanidiyuru and Vondrák, 2014), submodular maximization subject to a matroid constraint (Calinescu et al., 2007), non-monotone submodular maximization (Feige et al., 2011), and approximately submodular functions (Horel and Singer, 2016), to algorithms for submodular maximization subject to a wide variety of constraints (Kulik et al., 2009; Feldman et al., 2011; Vondrák, 2013; Iyer and Bilmes, 2013; Ene and Nguyen, 2016). A closely related setting to ours is online submodular maximization (Streeter and Golovin, 2008), where functions come one at a time and the goal is to provide time-dependent solutions (sets) such that a cumulative regret is minimized. In contrast, our goal is to find a single (time-independent) set that maximizes the objective (2). Another relevant setting is noisy submodular maximization, where the evaluations returned by the oracle are noisy (Hassidim and Singer, 2016; Singla et al., 2016). Specifically, Singla et al. (2016) assume a noisy but unbiased oracle (with independent sub-Gaussian noise) which allows one to sufficiently estimate the marginal gains of items by averaging. In the context of cardinality constraints, some of these ideas can be carried over to our setting by introducing additional assumptions on how the values $f_\gamma(S)$ vary w.r.t. their expectation $f(S)$. However, we provide a different approach that does not rely on uniform convergence, and we compare the sample and running time complexity with variants of Greedy in Section 3.
3 Stochastic Submodular Optimization
We follow the general framework of Vondrák (2008), whereby the problem is lifted into the continuous domain, a continuous optimization algorithm is designed to maximize the transferred objective, and the resulting solution is rounded. Maximizing $f$ subject to a matroid constraint can then be done by first maximizing its multilinear extension $F$ over the matroid base polytope and then rounding the solution. Methods such as projected stochastic gradient ascent can be used to maximize $F$ over this polytope.
Critically, we have to ensure that the computed local optima are good in expectation. Unfortunately, the multilinear extension $F$ lacks concavity and therefore may have bad local optima. Hence, we consider concave continuous extensions of $F$ that are efficiently computable and at most a constant factor away from $F$, to ensure solution quality. Such a concave extension can then be efficiently maximized over a polytope using projected stochastic gradient ascent, which enables the application of modern continuous optimization techniques. One important class of functions for which such an extension can be efficiently computed is the class of weighted coverage functions.
The class of weighted coverage functions (WCF).
Let $U$ be a set and let $w: U \to \mathbb{R}_+$ be a nonnegative modular function on $U$, i.e. $w(S) = \sum_{u \in S} w(u)$ for $S \subseteq U$. Let $\{B_1, \dots, B_n\}$ be a collection of subsets of $U$. The weighted coverage function $f$ defined as $f(S) = w\big(\bigcup_{v \in S} B_v\big)$ for $S \subseteq V = \{1, \dots, n\}$
is monotone submodular. For all $u \in U$, let us denote by $P_u = \{v \in V \mid u \in B_v\}$ the indices of the sets covering $u$, and by $\mathbb{I}$ the indicator function. The multilinear extension of $f$ can be expressed in a more compact way:
$F(x) = \mathbb{E}\big[f(S)\big] = \mathbb{E}\Big[\sum_{u \in U} w(u)\, \mathbb{I}\big(u \in \textstyle\bigcup_{v \in S} B_v\big)\Big] = \sum_{u \in U} w(u)\Big(1 - \prod_{v \in P_u} (1 - x_v)\Big)$   (3)
where we used the fact that each element $v$ is chosen independently with probability $x_v$.
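The closed form $F(x) = \sum_u w(u)\,(1 - \prod_{v \in P_u}(1 - x_v))$ is cheap to evaluate exactly, with no sampling. A small sketch with illustrative names (`weights` for $w$, `covers` for the sets $B_v$):

```python
from math import prod

def wcf_multilinear(weights, covers, x):
    """Exact multilinear extension of a weighted coverage function:
    F(x) = sum_u w(u) * (1 - prod_{v : u in B_v} (1 - x_v)).

    weights: {u: w(u)}; covers: {v: B_v}; x: {v: inclusion probability}.
    """
    F = 0.0
    for u, w in weights.items():
        # Probability that no chosen set covers u (empty product is 1).
        p_uncovered = prod(1 - x[v] for v, B in covers.items() if u in B)
        F += w * (1 - p_uncovered)
    return F

weights = {"a": 2.0, "b": 1.0}
covers = {0: {"a"}, 1: {"a", "b"}}
F = wcf_multilinear(weights, covers, {0: 0.5, 1: 0.5})
```

The cost is $O(\sum_v |B_v|)$ when the membership lists $P_u$ are precomputed.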
Concave upper bound for weighted coverage functions.
To efficiently compute a concave upper bound on the multilinear extension, we use the framework of Seeman and Singer (2013). Given that all the weights $w(u)$, $u \in U$, in (3) are nonnegative, we can construct a concave upper bound for the multilinear extension using the following lemma. Proofs can be found in Appendix A.
Lemma 1.
For $x \in [0,1]^n$, define $\alpha(x) := 1 - \prod_{i=1}^n (1 - x_i)$. Then the Fenchel concave biconjugate of $\alpha$ is $\bar{\alpha}(x) := \min\{1, \sum_{i=1}^n x_i\}$. Also,
$(1 - 1/e)\,\bar{\alpha}(x) \le \alpha(x) \le \bar{\alpha}(x) \qquad \forall x \in [0,1]^n.$
Furthermore, $\bar{\alpha}$ is an extension of $\alpha$, i.e. $\forall x \in \{0,1\}^n$: $\alpha(x) = \bar{\alpha}(x)$.
Consequently, given a weighted coverage function $f$ with $F$ represented as in (3), we can define
$\bar{F}(x) := \sum_{u \in U} w(u)\, \min\Big\{1, \sum_{v \in P_u} x_v\Big\}$   (4)
and conclude using Lemma 1 that $(1 - 1/e)\,\bar{F}(x) \le F(x) \le \bar{F}(x)$, as desired. Furthermore, $\bar{F}$ has three interesting properties: (1) it is a concave function over $[0,1]^n$; (2) it is equal to $F$ on the vertices of the hypercube, i.e. for $x \in \{0,1\}^n$ one has $F(x) = \bar{F}(x)$; and (3) it can be computed efficiently and deterministically given access to the sets $B_v$, $v \in V$. In other words, we can compute the value of $\bar{F}$ using at most $O\big(\sum_{v \in V} |B_v|\big)$ operations. Note that $\bar{F}$ is not the tightest concave upper bound of $F$, even though we use the tightest concave upper bound for each term of $F$.
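The sandwich property of (4) can be checked numerically on a toy instance. The sketch below (names are ours) compares $\bar{F}$ against a brute-force evaluation of the exact multilinear extension:

```python
from itertools import product
from math import prod

def wcf_concave_upper(weights, covers, x):
    """Concave upper bound of eq. (4): sum_u w(u) * min(1, sum_{v in P_u} x_v).
    weights: {u: w(u)}; covers: {v: B_v} (illustrative names)."""
    return sum(
        w * min(1.0, sum(x[v] for v, B in covers.items() if u in B))
        for u, w in weights.items()
    )

def multilinear_bruteforce(weights, covers, x):
    """Exact F(x): enumerate all subsets S with their probabilities."""
    vs = list(covers)
    F = 0.0
    for bits in product([0, 1], repeat=len(vs)):
        S = {v for v, b in zip(vs, bits) if b}
        p = prod(x[v] if b else 1 - x[v] for v, b in zip(vs, bits))
        covered = set().union(*(covers[v] for v in S)) if S else set()
        F += p * sum(weights[u] for u in covered)
    return F

weights = {"a": 2.0, "b": 1.0}
covers = {0: {"a"}, 1: {"a", "b"}}
x = {0: 0.5, 1: 0.5}
F = multilinear_bruteforce(weights, covers, x)
F_bar = wcf_concave_upper(weights, covers, x)
# Sandwich property from Lemma 1: (1 - 1/e) * F_bar <= F <= F_bar.
```

Unlike the brute-force check, $\bar{F}$ itself costs only $O(\sum_v |B_v|)$, which is what makes it usable inside a gradient loop.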
Optimizing the concave upper bound by stochastic gradient ascent.
Instead of maximizing $F$ over a polytope $\mathcal{P}$, one can now attempt to maximize $\bar{F}$ over $\mathcal{P}$. Critically, this task can be done efficiently, as $\bar{F}$ is concave, by using projected stochastic gradient ascent. In particular, one can control the convergence speed by choosing from the toolbox of modern continuous optimization algorithms, such as SGD, AdaGrad and Adam. Let us denote a maximizer of $\bar{F}$ over $\mathcal{P}$ by $\bar{x}^*$, and a maximizer of $F$ over $\mathcal{P}$ by $x^*$. We can thus write
$F(\bar{x}^*) \ge (1 - 1/e)\,\bar{F}(\bar{x}^*) \ge (1 - 1/e)\,\bar{F}(x^*) \ge (1 - 1/e)\,F(x^*),$
which is exactly the guarantee that previous methods give, and in general the best near-optimality ratio achievable in polynomial time. Finally, to round the continuous solution we may apply Randomized Pipage Rounding (Calinescu et al., 2011), as the quality of the approximation is preserved in expectation.
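For the special case of a cardinality constraint, pipage-style rounding admits a compact sketch: repeatedly couple two fractional coordinates and shift mass between them, with probabilities chosen so that the expectation of each coordinate is preserved. This is an illustrative simplification, not the exact procedure of Calinescu et al. (2011):

```python
import random

def pipage_round(x, rng=random.Random(0)):
    """Round a fractional point with integral coordinate sum to a 0/1 vector.

    Each step makes at least one of two fractional coordinates integral
    while keeping E[x_i] unchanged, so the rounding is lossless in
    expectation for the cardinality constraint.
    """
    x = list(x)
    while True:
        frac = [i for i, xi in enumerate(x) if 1e-9 < xi < 1 - 1e-9]
        if len(frac) < 2:
            break
        i, j = frac[0], frac[1]
        delta_up = min(1 - x[i], x[j])   # raise x_i, lower x_j
        delta_dn = min(x[i], 1 - x[j])   # lower x_i, raise x_j
        # Probabilities chosen so the expected change of x_i (and x_j) is 0.
        if rng.random() < delta_dn / (delta_up + delta_dn):
            x[i] += delta_up; x[j] -= delta_up
        else:
            x[i] -= delta_dn; x[j] += delta_dn
    return [round(xi) for xi in x]

S = pipage_round([0.5, 0.5, 1.0, 0.0])
```

Since the coordinate sum is preserved, the output always selects exactly $k$ elements when the input lies on the $k$-cardinality face.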
Matroid constraints.
Constrained optimization can be efficiently performed by projected gradient ascent, whereby after each step of the stochastic ascent we project the solution back onto the feasible set. For the case of matroid constraints, it is sufficient to consider projection onto the matroid base polytope. This problem of projecting onto the base polytope has been widely studied, and fast algorithms exist in many cases (Bach et al., 2013; Brucker, 1984; Pardalos and Kovoor, 1990). While these projection algorithms were used as a key subprocedure in constrained submodular minimization, here we consider them for submodular maximization. Details of a fast projection algorithm for the problems considered in this work are presented in Appendix D. Algorithm 1 summarizes all steps required to maximize $f$ subject to matroid constraints.
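For the uniform (cardinality-$k$) matroid, the base polytope is $\{x \in [0,1]^n : \sum_i x_i = k\}$, and the Euclidean projection onto it has the closed form $x_i = \mathrm{clip}(v_i - \tau, 0, 1)$ for a scalar threshold $\tau$. A sketch that finds $\tau$ by bisection (not the paper's Appendix D algorithm; assumes $k \le n$):

```python
def project_cardinality(v, k, iters=60):
    """Euclidean projection of v onto {x in [0,1]^n : sum(x) = k}."""
    def clipped_sum(tau):
        # sum_i clip(v_i - tau, 0, 1); nonincreasing in tau.
        return sum(min(1.0, max(0.0, vi - tau)) for vi in v)

    # clipped_sum(lo) = n >= k and clipped_sum(hi) = 0 <= k, so bisect.
    lo, hi = min(v) - 1.0, max(v)
    for _ in range(iters):
        mid = (lo + hi) / 2
        if clipped_sum(mid) > k:
            lo = mid
        else:
            hi = mid
    tau = (lo + hi) / 2
    return [min(1.0, max(0.0, vi - tau)) for vi in v]

x = project_cardinality([0.9, 0.8, 0.1, 0.0], 2)
```

Sorting-based variants achieve the same result in $O(n \log n)$ deterministically; bisection is shown here because it is the shortest correct sketch.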
Convergence rate.
Since we are maximizing a concave function over a matroid base polytope $\mathcal{P}$, the convergence rate (and hence the running time) depends on the diameter $D$ of the polytope, as well as on the maximum gradient norm $B$ (i.e. $\|\nabla \bar{F}\|_2 \le B$ with probability 1).^1 In the case of the base polytope for a matroid of rank $r$, $D$ is at most $\sqrt{2r}$, since each vertex of the polytope has exactly $r$ ones. Also, from (4), one can derive a rough upper bound on the norm of the gradient,
$\|\nabla \bar{F}(x)\|_2 \le \Big(\sum_{v \in V} \big(\sum_{u \in B_v} w(u)\big)^2\Big)^{1/2},$
which depends on the weights as well as on the covering sets, and is hence problem-dependent. We will provide tighter upper bounds on the gradient norm for our specific examples in the later sections. With these values of $D$ and $B$, classic results for SGD (Shalev-Shwartz and Ben-David, 2014) give
$\mathbb{E}\big[\bar{F}(\bar{x}_T)\big] \ge \bar{F}(\bar{x}^*) - \frac{D B}{\sqrt{T}},$
where $T$ is the total number of SGD iterations and $\bar{x}_T$ is the final outcome of SGD (see Algorithm 1). Therefore, for a given $\varepsilon > 0$, after $T \ge D^2 B^2 / \varepsilon^2$ iterations, we have $\mathbb{E}[\bar{F}(\bar{x}_T)] \ge \bar{F}(\bar{x}^*) - \varepsilon$.

^1 Note that the function $\bar{F}$ is neither smooth nor strongly concave, as functions such as $\min\{1, \sum_i x_i\}$ are not smooth or strongly concave.
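The full pipeline of projected stochastic gradient ascent on the concave upper bound can be sketched end to end. This is an illustrative simplification of Algorithm 1 for the cardinality constraint, with plain SGD, a bisection projection, and averaged iterates; `sample_covers` stands in for drawing one realization $\gamma$ and all names are ours:

```python
import random

def sga_max_coverage(sample_covers, weights, n, k, steps=500, eta=0.1, seed=0):
    """Projected stochastic subgradient ascent on
    F_bar(x) = E_gamma[ sum_u w(u) * min(1, sum_{v in P_u(gamma)} x_v) ]
    over {x in [0,1]^n : sum(x) = k}. Returns the averaged iterate.
    """
    rng = random.Random(seed)

    def project(v):  # Euclidean projection onto the k-cardinality polytope
        lo, hi = min(v) - 1.0, max(v)
        for _ in range(60):
            tau = (lo + hi) / 2
            s = sum(min(1.0, max(0.0, vi - tau)) for vi in v)
            lo, hi = (tau, hi) if s > k else (lo, tau)
        return [min(1.0, max(0.0, vi - (lo + hi) / 2)) for vi in v]

    x = project([k / n] * n)
    avg = [0.0] * n
    for _ in range(steps):
        covers = sample_covers(rng)          # one realization {v: B_v}
        g = [0.0] * n
        for u, w in weights.items():
            members = [v for v, B in covers.items() if u in B]
            if sum(x[v] for v in members) < 1.0:
                for v in members:            # subgradient of w*min(1, sum x_v)
                    g[v] += w
        x = project([xi + eta * gi for xi, gi in zip(x, g)])
        avg = [a + xi / steps for a, xi in zip(avg, x)]
    return avg

# Deterministic toy instance, k = 1: selecting item 0 covers both elements.
x = sga_max_coverage(lambda rng: {0: {"a", "b"}, 1: {"a"}, 2: set()},
                     {"a": 1.0, "b": 1.0}, n=3, k=1)
```

The averaged iterate is what the SGD bound above applies to; the fractional output is then handed to the rounding step.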
Summing up, we have the following theorem:
Theorem 2.
Let $f$ be a weighted coverage function, $\mathcal{P}$ be the base polytope of a matroid $\mathcal{M}$, and $D$ and $B$ be as above. Then, for each $\varepsilon > 0$, Algorithm 1 after $T = D^2 B^2 / \varepsilon^2$ iterations produces a set $S \in \mathcal{I}$ such that $\mathbb{E}[f(S)] \ge (1 - 1/e) \max_{S' \in \mathcal{I}} f(S') - \varepsilon$.
Remark.
Indeed, this approximation ratio is the best one can achieve unless $\mathrm{P} = \mathrm{NP}$ (Feige, 1998). A key point here is that our approach also works for more general constraints (in particular, it is efficient for simple matroids such as partition matroids). In the latter case, Greedy only gives a $1/2$-approximation and fast discrete methods like StochasticGreedy (Mirzasoleiman et al., 2015) do not apply, whereas our method still yields a $(1-1/e)$-optimal solution.
Time Complexity.
One can compute an upper bound on the running time of Algorithm 1 by estimating the time required to perform gradient computations, projection onto $\mathcal{P}$, and rounding. For the case of uniform matroids, projection and rounding take $O(n \log n)$ and $O(n)$ time, respectively (see Appendix D). Furthermore, for the applications considered in this work, namely expected influence maximization and exemplar-based clustering, we provide linear-time algorithms to compute the gradients. Also, when our matroid is the uniform matroid (i.e. a cardinality constraint), we have $D \le \sqrt{2k}$. By Theorem 2, the total computational complexity of our algorithm is thus $O\big(\tfrac{k B^2}{\varepsilon^2}\, n \log n\big)$.
Comparison to Greedy.
Let us relate our results to the classical approach. When running the Greedy algorithm in the stochastic setting, one estimates $f(S) \approx \frac{1}{m}\sum_{i=1}^m f_{\gamma_i}(S)$, where $\gamma_1, \dots, \gamma_m$ are i.i.d. samples from $\Gamma$. The following proposition bounds the sample and computational complexity of Greedy. The proof is detailed in Appendix B.
Proposition 3.
Let $f$ be a submodular function defined as in (2). Suppose $f_\gamma(S) \in [0, H]$ for all $S \subseteq V$ and all $\gamma$. Let $S^*$ denote the optimal solution for $f$ subject to the cardinality constraint $|S| \le k$, and let $S_k$ denote the solution computed by the greedy algorithm on $\frac{1}{m}\sum_{i=1}^m f_{\gamma_i}$ after $k$ steps. Then, in order to guarantee, with probability at least $1 - \delta$,
$f(S_k) \ge (1 - 1/e)\, f(S^*) - \varepsilon,$
it is enough to have
$m \in O\Big(\frac{H^2}{\varepsilon^2}\big(k \log n + \log(1/\delta)\big)\Big)$
i.i.d. samples from $\Gamma$. The running time of Greedy is then bounded by
$O\Big(n\, k\, m\, \tau\Big) = O\Big(n\, k\, \tau\, \frac{H^2}{\varepsilon^2}\big(k \log n + \log(1/\delta)\big)\Big),$
where $\tau$ is an upper bound on the computation time for a single evaluation of $f_\gamma$.
As an example, let us compare the worst-case complexity bound obtained for SGD with that of Greedy for the influence maximization problem. Each single function evaluation for Greedy amounts to computing the total influence of a set in a sample graph, so $\tau$ is linear in the number of edges of the sample graph. Also, a crude upper bound on the size of the gradient for each sample function follows from the bound above (see Appendix E.1). Comparing the resulting bounds, SGD can achieve a substantial polynomial-factor speedup w.r.t. Greedy.
4 Applications
We will now show how to instantiate the stochastic submodular maximization framework using several prototypical discrete optimization problems.
Influence maximization.
As discussed in Section 2, the Independent Cascade model (Kempe et al., 2003) defines a distribution $\mathcal{G}$ over instances $G$ that share a set $V$ of nodes. The influence $f_G(S)$ of a set of nodes $S$ in instance $G = (V, E)$ is the fraction of nodes reachable from $S$ using the edges $E$. The following lemma shows that the influence belongs to the class of WCF.
Lemma 4.
The influence function $f_G$ is a WCF. Moreover,
$F_G(x) = \frac{1}{|V|} \sum_{u \in V} \Big(1 - \prod_{v \in P_u} (1 - x_v)\Big),$   (5)
$\bar{F}_G(x) = \frac{1}{|V|} \sum_{u \in V} \min\Big\{1, \sum_{v \in P_u} x_v\Big\},$   (6)
where $P_u$ is the set of all nodes having a (directed) path to $u$.
We return to the problem of maximizing $f(S) = \mathbb{E}_{G \sim \mathcal{G}}[f_G(S)]$ given a distribution $\mathcal{G}$ over graphs sharing the node set $V$. Since $f$ is a weighted sum of submodular functions, it is submodular. Moreover, let $\mathcal{U}$ be the uniform distribution over vertices. Then,
$F(x) = \mathbb{E}_{G \sim \mathcal{G}}\big[F_G(x)\big] = \mathbb{E}_{G \sim \mathcal{G}}\, \mathbb{E}_{u \sim \mathcal{U}} \Big[1 - \prod_{v \in P_u(G)} (1 - x_v)\Big],$   (7)
and the corresponding upper bound would be
$\bar{F}(x) = \mathbb{E}_{G \sim \mathcal{G}}\, \mathbb{E}_{u \sim \mathcal{U}} \Big[\min\Big\{1, \sum_{v \in P_u(G)} x_v\Big\}\Big].$   (8)
This formulation proves helpful for efficient calculation of subgradients, as one can obtain a random subgradient in linear time. For more details see Appendix E.1, where we also provide a more efficient, biased estimator of the expectation.
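The ingredients of (5)-(8) are easy to compute on a sampled instance. A minimal sketch (names are ours) of evaluating $f_G(S)$ by graph traversal; a random subgradient of (8) additionally needs $P_u$ for a uniformly sampled node $u$, which one traversal of the reversed graph yields:

```python
def reachable_from(adj, S):
    """All nodes reachable from the seed set S in a directed graph.
    adj maps each node to its list of out-neighbors."""
    seen, stack = set(S), list(S)
    while stack:
        v = stack.pop()
        for w in adj.get(v, []):
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen

def influence(adj, S):
    """f_G(S): fraction of nodes reachable from S in this realization."""
    nodes = set(adj) | {w for outs in adj.values() for w in outs}
    return len(reachable_from(adj, S)) / len(nodes)

# A chain 0 -> 1 -> 2 plus an isolated node 3.
adj = {0: [1], 1: [2], 2: [], 3: []}
val = influence(adj, {0})
```

Both the evaluation and the reverse traversal run in time linear in the number of edges of the sampled graph.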
Facility location.
Let $G = (X \cup Y, E)$ be a complete weighted bipartite graph with parts $X$ and $Y$ and nonnegative weights $w_{x,y}$. The weights can be considered as utilities or some similarity metric. We select a subset $S \subseteq X$, and each $y \in Y$ selects the $s \in S$ with the highest weight $w_{s,y}$. Our goal is to maximize the average weight of these selected edges, i.e. to maximize
$f(S) = \frac{1}{|Y|} \sum_{y \in Y} \max_{s \in S} w_{s,y},$   (9)
given some constraints on $S$. This problem is indeed the Facility Location problem, if one takes $X$ to be the set of facilities, $Y$ to be the set of customers, and $w_{x,y}$ to be the utility of facility $x$ for customer $y$. Another interesting instance is the Exemplar-based Clustering problem, in which $X = Y$ is a set of objects and $w_{x,y}$ is the similarity (or inverted distance) between objects $x$ and $y$, and one tries to find a subset $S$ of exemplars (i.e. centroids) for these objects.
The stochastic nature of this problem is revealed when one writes (9) as the expectation $f(S) = \mathbb{E}_{y \sim \mathcal{Y}}[f_y(S)]$, where $\mathcal{Y}$ is the uniform distribution over $Y$ and $f_y(S) = \max_{s \in S} w_{s,y}$. One can also consider the more general case where the $y$'s are drawn from an unknown distribution, and one tries to maximize the aforementioned expectation.
First, we claim that $f_y$, for each $y \in Y$, is again a weighted coverage function. For simplicity, let $X = \{1, \dots, m\}$ and write $w_i := w_{i,y}$, with $w_1 \ge w_2 \ge \dots \ge w_m \ge 0$ and $w_{m+1} := 0$.
Lemma 5.
The utility function $f_y$ is a WCF. Moreover,
$F_y(x) = \sum_{i=1}^m (w_i - w_{i+1}) \Big(1 - \prod_{j=1}^i (1 - x_j)\Big),$   (10)
$\bar{F}_y(x) = \sum_{i=1}^m (w_i - w_{i+1}) \min\Big\{1, \sum_{j=1}^i x_j\Big\}.$   (11)
We remark that the gradients of both $F_y$ and $\bar{F}_y$ can be computed in linear time using a recursive procedure. We refer to Appendix E.2 for more details.
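The level-set decomposition of (10) can be verified directly: sorting weights in decreasing order and accumulating the probability that none of the top-$i$ items is selected evaluates $F_y$ exactly in one pass. A sketch with illustrative names:

```python
def facility_value(w_row, S):
    """f_y(S) = max_{s in S} w_{s,y}; 0 for the empty set."""
    return max((w_row[s] for s in S), default=0.0)

def facility_multilinear(w_row, x):
    """Multilinear extension of f_y via the level-set form of eq. (10):
    with weights sorted as w_1 >= ... >= w_m and w_{m+1} = 0,
    F_y(x) = sum_i (w_i - w_{i+1}) * (1 - prod_{j<=i} (1 - x_j)).
    """
    order = sorted(range(len(w_row)), key=lambda i: -w_row[i])
    w = [w_row[i] for i in order] + [0.0]
    xs = [x[i] for i in order]
    F, p_none = 0.0, 1.0  # p_none: prob. none of the top-i items is selected
    for i in range(len(w_row)):
        p_none *= 1 - xs[i]
        F += (w[i] - w[i + 1]) * (1 - p_none)
    return F

w_row = [3.0, 1.0, 2.0]
F = facility_multilinear(w_row, [1.0, 0.0, 1.0])  # deterministic set {0, 2}
```

On the fractional point $x = (0.5, 0, 0.5)$ the same routine reproduces the exact expectation $\mathbb{E}[\max_{s \in S} w_s]$ over random sets, which is the correctness check for (10).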
5 Experimental Results
We demonstrate the practical utility of the proposed framework and compare it to standard baselines. We compare the performance of the algorithms in terms of their wallclock running time and the obtained utility. We consider the following problems:

Influence Maximization for the Epinions network.^2 The network consists of 75 879 nodes and 508 837 directed edges. We consider the subgraph induced by the top 10 000 nodes with the largest out-degree and use the Independent Cascade model (Kempe et al., 2003). The diffusion model is specified by a fixed probability for each node to influence its neighbors in the underlying graph; we fix this influence probability and the number of seeds $k$ throughout.

^2 http://snap.stanford.edu/

Facility Location for Blog Selection. We use the data set from Glance et al. (2005), consisting of 45 193 blogs and 16 551 cascades. The goal is to detect information cascades/stories spreading over the blogosphere. This dataset is heavy-tailed, hence a small random sample of the events has high variance in terms of the cascade sizes. We fix the cardinality constraint $k$ for this task.
Exemplar-based Clustering on CIFAR-10. The data set contains 60 000 color images with resolution $32 \times 32$. We use a single batch of 10 000 images and compare our algorithms to variants of Greedy over the full data set. We use the Euclidean norm as the distance function and fix the number of exemplars $k$. Further details about the preprocessing of the data, as well as the formulation of the submodular function, can be found in Appendix E.3.
Baselines.
In the case of cardinality constraints, we compare our stochastic continuous optimization approach against the most efficient discrete approaches, LazyGreedy and Lazy-StochasticGreedy, which both provide the optimal $(1-1/e)$ approximation guarantee. For StochasticGreedy, we vary its subsampling parameter $\varepsilon$ in order to explore the running time/utility tradeoff. We also report the performance of randomly selected sets. For the two facility location problems, when applying the greedy variants we can evaluate the exact objective (true expectation). In the Influence Maximization application, computing the exact expectation is intractable. Hence, we use an empirical average over a fixed number of samples (cascades) from the model. We note that the number of samples suggested by Proposition 3 is overly conservative, and we instead make a practical choice of far fewer samples.
Results.
The results are summarized in Figure 1. On the blog selection and influence maximization applications, the proposed continuous optimization approach outperforms StochasticGreedy in terms of the running time/utility tradeoff. In particular, for blog selection we can compute a solution with the same utility faster than StochasticGreedy. Similarly, for influence maximization on Epinions we obtain a solution of the same quality faster than StochasticGreedy. On the exemplar-based clustering application, StochasticGreedy outperforms the proposed approach. We note that the proposed approach is still competitive, as it recovers most of the value after less than a thousand iterations.
We also include an experiment on Influence Maximization over partition matroids for the Epinions network. In this case, Greedy only provides a $1/2$-approximation guarantee and StochasticGreedy does not apply. To create the partition, we first sorted all the vertices by their out-degree. Using this order on the vertices, we divided them into two partitions, one containing the vertices at even positions, the other containing the rest. Figure 1 clearly demonstrates that the proposed approach outperforms Greedy in terms of utility (as well as running time).
Acknowledgments
The research was partially supported by ERC StG 307036. We would like to thank Yaron Singer for helpful comments and suggestions.
References
 Ageev and Sviridenko [2004] Alexander A Ageev and Maxim I Sviridenko. Pipage rounding: A new method of constructing algorithms with proven performance guarantee. Journal of Combinatorial Optimization, 8(3):307–328, 2004.
 Bach et al. [2013] Francis Bach et al. Learning with submodular functions: A convex optimization perspective. Foundations and Trends in Machine Learning, 6(2–3):145–373, 2013.
 Bach [2010] Francis R. Bach. Convex analysis and optimization with submodular functions: a tutorial. CoRR, abs/1010.4207, 2010.
 Badanidiyuru and Vondrák [2014] Ashwinkumar Badanidiyuru and Jan Vondrák. Fast algorithms for maximizing submodular functions. In Proceedings of the TwentyFifth Annual ACMSIAM Symposium on Discrete Algorithms, pages 1497–1514. SIAM, 2014.
 Brucker [1984] P. Brucker. An o(n) algorithm for quadratic knapsack problems. Operations Research Letters, 3(3):163–166, 1984.
 Calinescu et al. [2007] Gruia Calinescu, Chandra Chekuri, Martin Pál, and Jan Vondrák. Maximizing a submodular set function subject to a matroid constraint. In International Conference on Integer Programming and Combinatorial Optimization, pages 182–196. Springer, 2007.
 Calinescu et al. [2011] Gruia Calinescu, Chandra Chekuri, Martin Pál, and Jan Vondrák. Maximizing a monotone submodular function subject to a matroid constraint. SIAM Journal on Computing, 40(6):1740–1766, 2011.
 Duchi et al. [2011] John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121–2159, 2011.
 Ene and Nguyen [2016] Alina Ene and Huy L. Nguyen. Constrained submodular maximization: Beyond 1/e. In Foundations of Computer Science (FOCS), pages 248–257, 2016.
 Feige [1998] Uriel Feige. A threshold of ln n for approximating set cover. Journal of the ACM, 45(4):634–652, 1998.
 Feige et al. [2011] Uriel Feige, Vahab S Mirrokni, and Jan Vondrak. Maximizing nonmonotone submodular functions. SIAM Journal on Computing, 40(4):1133–1153, 2011.
 Feldman et al. [2011] Moran Feldman, Joseph Naor, and Roy Schwartz. A unified continuous greedy algorithm for submodular maximization. In Foundations of Computer Science (FOCS), 2011 IEEE 52nd Annual Symposium on, pages 570–579. IEEE, 2011.
 Fisher et al. [1978] Marshall L Fisher, George L Nemhauser, and Laurence A Wolsey. An analysis of approximations for maximizing submodular set functions. In Polyhedral combinatorics, pages 73–87. Springer, 1978.
 Glance et al. [2005] Natalie Glance, Matthew Hurst, Kamal Nigam, Matthew Siegler, Robert Stockton, and Takashi Tomokiyo. Deriving marketing intelligence from online discussion. In Proceedings of the eleventh ACM SIGKDD international conference on Knowledge discovery in data mining, pages 419–428, 2005.
 Gomes and Krause [2010] Ryan Gomes and Andreas Krause. Budgeted nonparametric learning from data streams. In Proceedings of the 27th International Conference on Machine Learning, 2010.
 Hassidim and Singer [2016] Avinatan Hassidim and Yaron Singer. Submodular optimization under noise. CoRR, abs/1601.03095, 2016.
 Horel and Singer [2016] Thibaut Horel and Yaron Singer. Maximizing approximately submodular functions. NIPS, 2016.
 Iyer and Bilmes [2013] Rishabh K Iyer and Jeff A Bilmes. Submodular optimization with submodular cover and submodular knapsack constraints. In Advances in Neural Information Processing Systems, pages 2436–2444, 2013.
 Iyer and Bilmes [2015] Rishabh K. Iyer and Jeff A. Bilmes. Polyhedral aspects of submodularity, convexity and concavity. Arxiv, CoRR, abs/1506.07329, 2015.
 Kempe et al. [2003] David Kempe, Jon Kleinberg, and Éva Tardos. Maximizing the spread of influence through a social network. In 9th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 137–146, 2003.
 Kingma and Ba [2015] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. ICLR, 2015.
 Krause and Golovin [2012] Andreas Krause and Daniel Golovin. Submodular function maximization. Tractability: Practical Approaches to Hard Problems, 3(19):8, 2012.
 Krause and Guestrin [2005a] Andreas Krause and Carlos Guestrin. Near-optimal nonmyopic value of information in graphical models. In Conference on Uncertainty in Artificial Intelligence (UAI), July 2005a.
 Krause and Guestrin [2005b] Andreas Krause and Carlos Guestrin. Near-optimal nonmyopic value of information in graphical models. In Proceedings of the Twenty-First Conference on Uncertainty in Artificial Intelligence, pages 324–331. AUAI Press, 2005b.
 Kulik et al. [2009] Ariel Kulik, Hadas Shachnai, and Tami Tamir. Maximizing submodular set functions subject to multiple linear constraints. In Proceedings of the twentieth Annual ACMSIAM Symposium on Discrete Algorithms, pages 545–554. Society for Industrial and Applied Mathematics, 2009.
 Kumar and Bach [2016] K. S. Sesh Kumar and Francis Bach. Activeset methods for submodular minimization problems. hal01161759v3, 2016.
 Lin and Bilmes [2011] Hui Lin and Jeff Bilmes. A class of submodular functions for document summarization. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, Volume 1, pages 510–520. Association for Computational Linguistics, 2011.
 Lovász [1983] László Lovász. Submodular functions and convexity. In Mathematical Programming: The State of the Art, pages 235–257. Springer, 1983.
 Mirzasoleiman et al. [2015] Baharan Mirzasoleiman, Ashwinkumar Badanidiyuru, Amin Karbasi, Jan Vondrak, and Andreas Krause. Lazier than lazy greedy. In Proceedings of the AAAI Conference on Artificial Intelligence, 2015.
 Nemhauser et al. [1978] George L. Nemhauser, Laurence A. Wolsey, and Marshall L. Fisher. An analysis of approximations for maximizing submodular set functions, I. Mathematical Programming, 14(1):265–294, 1978.
 Pardalos and Kovoor [1990] P. M. Pardalos and N. Kovoor. An algorithm for a singly constrained class of quadratic programs subject to upper and lower bounds. Mathematical Programming, 46(1):321–328, 1990.
 Seeman and Singer [2013] Lior Seeman and Yaron Singer. Adaptive seeding in social networks. In Foundations of Computer Science (FOCS), pages 459–468, 2013.
 ShalevShwartz and BenDavid [2014] Shai ShalevShwartz and Shai BenDavid. Understanding Machine Learning : From Theory to Algorithms. Cambridge University Press, 2014.
 Singla et al. [2016] Adish Singla, Sebastian Tschiatschek, and Andreas Krause. Noisy submodular maximization via adaptive sampling with applications to crowdsourced image collection summarization. In Proc. Conference on Artificial Intelligence (AAAI), February 2016.
 Streeter and Golovin [2008] Matthew Streeter and Daniel Golovin. An online algorithm for maximizing submodular functions. In Advances in Neural Information Processing Systems (NIPS), 2008.
 Vondrák [2007] Jan Vondrák. Submodularity in combinatorial optimization. PhD thesis, Charles University, Prague, 2007.

 Vondrák [2008] Jan Vondrák. Optimal approximation for the submodular welfare problem in the value oracle model. In Proceedings of the Fortieth Annual ACM Symposium on Theory of Computing, pages 67–74. ACM, 2008.
 Vondrák [2013] Jan Vondrák. Symmetry and approximability of submodular maximization problems. SIAM Journal on Computing, 42(1):265–304, 2013.
 Wei et al. [2014] Kai Wei, Rishabh Iyer, and Jeff Bilmes. Fast multi-stage submodular maximization. In International Conference on Machine Learning (ICML), Beijing, China, 2014.
Appendix A Proof of Lemma 1
Here we prove the inequality stated in the lemma; the proof of the Fenchel biconjugacy claim is given in Appendix F.
We prove the left-hand-side inequality, since the right-hand-side inequality is a consequence of Fenchel biconjugacy.
Let . We note from the inequality that . We thus obtain
Now, if then the result is clear. Also, if , then we note that the function is decreasing for , and hence, . The left-hand-side inequality thus follows immediately.
Appendix B Proof of Proposition 3
Note that the total number of subsets of cardinality less than is bounded from above by . For each such set we want the estimate to be at most away from . Also, note that the function is itself a submodular function, and maximizing it gives a approximation to its optimum. Hence, it suffices to have enough samples so that, for all subsets of cardinality at most , the two values and differ by at most epsilon. The result then follows from Hoeffding's inequality together with a union bound over all subsets of cardinality at most (note that ).
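Since the inline symbols in this proof were lost in extraction, the following toy experiment (the coverage model, names, and constants are our own illustrative choices, not the paper's) shows the concentration the argument relies on: a sample-average estimate of a stochastic monotone submodular value concentrates around its expectation at the Hoeffding rate for [0, 1]-bounded samples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stochastic monotone submodular objective: probabilistic coverage.
# Element i covers item j independently with probability P[i, j]; the
# realized value of a set S is the fraction of items covered, in [0, 1].
n_elems, n_items = 5, 40
P = rng.uniform(0.0, 0.3, size=(n_elems, n_items))

def sample_value(S, rng):
    covered = np.zeros(n_items, dtype=bool)
    for i in S:
        covered |= rng.random(n_items) < P[i]
    return covered.mean()

def true_value(S):
    # Item j stays uncovered only if every element of S misses it.
    miss = np.prod([1.0 - P[i] for i in S], axis=0)
    return float((1.0 - miss).mean())

S = [0, 2, 4]
m = 20000  # Hoeffding: deviation O(sqrt(log(1/delta) / m)) for [0,1] values
estimate = np.mean([sample_value(S, rng) for _ in range(m)])
```

With this many samples the empirical mean lands well within 0.01 of the exact expectation, matching the union-bound accounting in the proof.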
Appendix C Proof of Lemmas 4 and 5
C.1 Lemma 4
Proof.
Let , where is the set of vertices reachable from . By construction, there is a one-to-one correspondence between elements of and , namely . For , let be its corresponding subset in , i.e. . It is obvious that . Setting , makes a WCF. But , so is also a WCF.
Moreover, for each , the set is the set of all elements of that contain , which are precisely those vertices from which there is a (directed) path to . We also relax our notation, and replace any element of by its correspondent in . Hence,
which are computable in polynomial time, since one can find with a simple BFS algorithm in for each .
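The reachability computation used here can be sketched as a BFS from each target vertex in the reversed graph; this is a minimal sketch assuming an adjacency-list (dict of lists) representation, with the function name our own.

```python
from collections import deque

def nodes_reaching(adj, target):
    """All vertices u with a directed path from u to `target`
    (including target itself): BFS from `target` in the reversed graph."""
    radj = {u: [] for u in adj}  # reversed adjacency lists
    for u, nbrs in adj.items():
        for v in nbrs:
            radj[v].append(u)
    seen = {target}
    queue = deque([target])
    while queue:
        u = queue.popleft()
        for w in radj[u]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return seen
```

Running this once per vertex gives all the required sets in polynomial time, as claimed.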
∎
C.2 Lemma 5
Proof.
Write instead of .
Let , where , and let (set ). Note that there is a natural bijection between and , namely . Let be the modular function with weights , defined on , and define the WCF as
(12) 
Since the ’s form a decreasing chain, and (12) becomes
which is exactly .
Furthermore, is simply the set . Hence, we can write the multilinear extension and the corresponding upper bound as
∎
Appendix D Fast Algorithms for Projection and Rounding
In this section, we show how the projection (w.r.t. a Mahalanobis norm) can be done in time, and the rounding in time, for the uniform matroid. This projection algorithm also proves useful in the case of the partition matroid polytope. We also discuss a projection method for general matroid base polytopes, based on the method of Kumar and Bach [2016], which requires solving a number of submodular function minimization (SFM) tasks (details below).
D.1 Efficient projection on the uniform matroid
Let be a diagonal matrix with positive entries, . Our aim is to project a vector on the uniform matroid base polytope defined as . The polytope is the convex hull of all the vectors that have precisely ones and zeros. Projecting onto entails finding a point in such that
where is the Mahalanobis norm (i.e., the Mahalanobis distance to ). Note that in the special case of , this problem boils down to the orthogonal projection of onto . We first transform this problem into an orthogonal projection, and then solve that projection in .
(13) 
where (13) suggests an orthogonal projection onto the polytope . By defining the vector , one has . Theorem 6 shows that this projection can be done in , and Algorithm 2 gives the corresponding procedure.
Theorem 6.
Let , where is given. Then for any given point one can find the solution to in time. Moreover this solution is unique.
Proof.
Let us begin by writing the KKT optimality conditions for the projected vector . The Lagrangian is defined by
where and . Minimizing the Lagrangian w.r.t. gives for each :
(14) 
and also considering complementary slackness, we should have and . If one provides suitable and that satisfy the equations above, then would be the optimal solution. In what follows, we construct and provide suitable .
For each , define , where and are applied elementwise. By definition, one has . Let . We claim that if for a value of , , we are done, since , and it satisfies the KKT conditions: If , by definition of it means that , so we can set and . If , it means , so we can set and . Otherwise, , which in that case we set .
So it suffices to provide an such that . For each , define and . It is obvious that if then , if then , and otherwise . So is a continuous decreasing function, and hence so is . Note that if , then , and if , then . So by continuity, there is some such that . Now let be the set of all distinct values among and . It is clear that for all , is a linear function. By exploiting this fact, we can find by searching through these endpoints. The detailed procedure is given in Algorithm 2. ∎
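The breakpoint search above can be sketched as follows in the simplified orthogonal case; the Mahalanobis case reduces to it by the change of variables described earlier. The function name and interface are ours, not the paper's Algorithm 2, but the logic is the same: binary-search the sorted breakpoints for the multiplier, then interpolate on the final linear piece.

```python
import numpy as np

def project_uniform_matroid(y, k):
    """Orthogonal projection of y onto {x in [0,1]^n : sum(x) = k}.

    KKT conditions give x_i = clip(y_i - lam, 0, 1), where lam solves
    g(lam) = sum_i clip(y_i - lam, 0, 1) = k.  g is continuous,
    non-increasing and piecewise linear with breakpoints at y_i - 1
    and y_i, so lam is found by a binary search over the sorted
    breakpoints plus one linear interpolation: O(n log n) overall.
    """
    y = np.asarray(y, dtype=float)
    n = len(y)
    assert 0 <= k <= n
    g = lambda lam: np.clip(y - lam, 0.0, 1.0).sum()
    bps = np.sort(np.concatenate([y - 1.0, y]))
    # g(bps[0]) = n >= k and g(bps[-1]) = 0 <= k, so a root lies inside.
    lo, hi = 0, 2 * n - 1
    while lo < hi:  # find the last breakpoint with g >= k
        mid = (lo + hi + 1) // 2
        if g(bps[mid]) >= k:
            lo = mid
        else:
            hi = mid - 1
    lam, gl = bps[lo], g(bps[lo])
    if gl > k:
        # Interpolate on the linear piece right of bps[lo]; its slope is
        # minus the number of coordinates with y_i - lam in (0, 1].
        active = np.count_nonzero((y - lam > 0.0) & (y - lam <= 1.0))
        lam += (gl - k) / active
    return np.clip(y - lam, 0.0, 1.0)
```

Points already in the polytope project to themselves, which is a convenient sanity check for the multiplier search.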
D.2 Efficient projection on the partition matroid base polytope
Let be a ground set and let be a partition of . A partition matroid includes all sets such that for all we have . It is easy to see that the base polytope is
In order to project onto , we first note that the objective is separable, partitioned over the blocks of the partition. This means that it suffices to project onto the uniform matroid base polytope of each . Since each such projection takes time, the total process takes .
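A sketch of this block-wise reduction follows; for self-containedness each per-block projection is done here by bisection on the Lagrange multiplier rather than the exact breakpoint search, and all names are ours.

```python
import numpy as np

def project_block(y, k, iters=100):
    """Project y onto {x in [0,1]^m : sum(x) = k} by bisection on the
    multiplier lam in x = clip(y - lam, 0, 1)."""
    lo, hi = y.min() - 1.0, y.max()  # g(lo) = m >= k and g(hi) = 0 <= k
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if np.clip(y - mid, 0.0, 1.0).sum() >= k:
            lo = mid
        else:
            hi = mid
    return np.clip(y - lo, 0.0, 1.0)

def project_partition_matroid(y, blocks, ks):
    """The squared distance separates over the blocks of the partition,
    so we project each block onto its own uniform matroid base polytope."""
    x = np.asarray(y, dtype=float).copy()
    for idx, k in zip(blocks, ks):
        x[idx] = project_block(x[idx], k)
    return x
```

Each block's coordinates end up in [0, 1] and sum to that block's budget, so the concatenated result lies in the partition matroid base polytope.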
D.3 Projection on general matroid base polytopes
Let us now ask whether there is an efficient projection algorithm for general matroid polytopes. Here, we argue that the method proposed by Kumar and Bach [2016] would be a reasonable candidate in the case of general matroid polytopes.
Let be a submodular function such that , and let be its Lovász extension. We define the base polytope of as the set
It can be shown (see Bach et al. [2013]) that the Lovász extension is the support function of this polytope, i.e.
(15) 
For any consider the task of minimizing the following objective with respect to :
(16) 
By using (15), we can rewrite (16) in the following dual form
(17) 
in which the latter expression is precisely the projection of onto . In Kumar and Bach [2016], the authors exploit the structural properties of the Lovász extension and of the faces of the base polytope to derive the so-called "active-set" algorithm. The active-set algorithm iteratively solves instances of isotonic regression as well as submodular function minimization tasks, whose overall complexity is less than that of a single submodular function minimization call (recall that by submodular function minimization we mean the task of solving ). In light of (17), the algorithm can be viewed as a sequence of iterative projections onto outer approximations of the base polytope.
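As a side illustration of the support-function identity (15): the Lovász extension can be evaluated by the classical greedy algorithm, which simultaneously produces the base attaining the maximum. A hedged sketch, with the interface our own:

```python
import numpy as np

def lovasz_extension(F, w):
    """Greedy evaluation of the Lovasz extension f of a submodular set
    function F (with F(empty set) = 0) on the ground set {0, ..., n-1}.

    Sorting the coordinates of w decreasingly and taking marginal gains
    yields both f(w) and a base s attaining the maximum in (15):
    f(w) = max over s in the base polytope of <w, s>.
    """
    w = np.asarray(w, dtype=float)
    order = np.argsort(-w)   # coordinates of w in decreasing order
    s = np.zeros(len(w))
    S, prev = frozenset(), 0.0
    for j in order:
        S = S | {int(j)}
        val = F(S)
        s[j] = val - prev    # marginal gain of adding j to the prefix
        prev = val
    return float(np.dot(w, s)), s
```

For a matroid rank function the returned base is a vertex of the matroid base polytope, which connects directly to the discussion below.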
For any matroid, its associated rank function is a monotone submodular function. Also, the base polytope for a matroid’s rank function is exactly the matroid base polytope. As a result of (17), we