Scaling Submodular Maximization via Pruned Submodularity Graphs

06/01/2016
by   Tianyi Zhou, et al.
University of Washington

We propose a new random pruning method (called "submodular sparsification (SS)") to reduce the cost of submodular maximization. The pruning is applied via a "submodularity graph" over the n ground elements, where each directed edge is associated with a pairwise dependency defined by the submodular function. In each step, SS prunes a 1 − 1/√c (for c > 1) fraction of the nodes using weights on edges computed based on only a small number (O(log n)) of randomly sampled nodes. The algorithm requires log_√c n steps with a small and highly parallelizable per-step computation. An accuracy-speed tradeoff parameter c, set as c = 8, leads to a fast shrink rate √2/4 and small iteration complexity log_{2√2} n. Analysis shows that w.h.p., the greedy algorithm on the pruned set of size O(log² n) can achieve a guarantee similar to that of processing the original dataset. In news and video summarization tasks, SS is able to substantially reduce both computational costs and memory usage, while maintaining (or even slightly exceeding) the quality of the original (and much more costly) greedy algorithm.


1 Introduction

Machine learning applications benefit from the existence of large volumes of data. The recent explosive growth of data, however, poses serious challenges both to humans and machines. One of the primary goals of a summarization process is to select a representative subset that reduces redundancy but preserves fidelity to the original data [19]. Any further processing on only a summary (a small representative set) by either a human or machine thus reduces computation, memory requirements, and overall effort. Summarization has many applications such as news digesting, photo stream presenting, data subset selection, and video thumbnailing. A summarization algorithm, however, involves challenging combinatorial optimization problems, whose quality and speed heavily depend on the objective that assigns quality scores to candidate summaries.

Submodular functions [11, 19] are broadly applied as objectives for summarization, since they naturally capture redundancy amongst groups of data elements. A submodular function is a set function f: 2^V → ℝ with a diminishing returns property, i.e., given a finite "ground" set V, any A ⊆ B ⊆ V, and a v ∉ B, we have:

f(A ∪ {v}) − f(A) ≥ f(B ∪ {v}) − f(B)    (1)

This implies v is more important to the smaller set A than to the larger set B. The increase f(A ∪ {v}) − f(A) reflects the importance of v to A and is called the "marginal gain" f(v|A) of v conditioned on A. The objective can be chosen from a large family of functions (e.g., including but not limited to facility location and set cover functions). Usually one requires a small summary, so a cardinality-based budget k is used. Hence, a summarization task can be cast as the following:

max_{S ⊆ V, |S| ≤ k} f(S)    (2)

Knapsacks and matroids are also often used as constraints. In this paper, however, we will primarily be concerned with cardinality constraints, but our methods do generalize to other constraints as well.
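As a concrete illustration (our own toy example, not from the paper), the diminishing-returns inequality (1) can be checked numerically for a simple coverage function:

```python
# Toy example: a coverage function f(S) = number of distinct items
# covered by the sets indexed by S. Coverage functions are submodular.
universe = {
    0: {"a", "b"},
    1: {"b", "c"},
    2: {"c", "d", "e"},
    3: {"a", "e"},
}

def f(S):
    covered = set()
    for v in S:
        covered |= universe[v]
    return len(covered)

def gain(v, S):
    """Marginal gain f(v | S) = f(S + v) - f(S)."""
    return f(S | {v}) - f(S)

# A is a subset of B and v is outside B: the gain of v can only
# shrink as the conditioning set grows, per inequality (1).
A, B, v = {0}, {0, 1}, 2
assert gain(v, A) >= gain(v, B)  # here: 3 >= 2
```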

Though submodular maximization is NP-hard, a near optimal solution of (2) can be achieved via the greedy algorithm, having an approximation factor of 1 − 1/e [24]. The greedy algorithm starts with S ← ∅, and selects the next element with the largest marginal gain from V ∖ S, i.e., S ← S ∪ {v*} where v* ∈ argmax_{v ∈ V∖S} f(v|S), and this repeats until |S| = k. It is simple to implement and usually outperforms other methods, e.g., those based on integer linear programming.

Scaling up the greedy algorithm to very large data sizes (where n = |V| is big) is a nontrivial practical problem. The per-step computation of greedy is expensive: each step needs to re-evaluate the marginal gains of all elements in V ∖ S conditioned on the new S, and thus requires O(n) function evaluations. In addition, each step depends on the results from previous steps, so the computation does not trivially parallelize. Moreover, one typically must keep all elements in memory until the end of the algorithm, since any element might become the one with the largest marginal gain as S grows. To overcome this problem, it would be helpful to have an economical screening method to reduce the data size before the costly submodular maximization is performed. While related work is described in §1.2, we next describe the contributions of this work.

1.1 Main Contribution

A submodular function f can describe higher order relationships among multiple (≥ 2) elements via f(A) for A ⊆ V. In the greedy algorithm, selecting important elements (for maximizing f) requires evaluating f(v|S) for all v ∈ V∖S each step. In this paper, we show that removing unimportant elements from V need only use a rough estimate of f(v|S), one that can be derived solely from pairwise relationships f(v|u) for a small set of element pairs (u, v). We encode the pairwise relationships as edge weights on a "submodularity graph". By taking advantage of the properties of this graph, the size of the ground set can efficiently be reduced from n to O(log² n) by randomly pruning the nodes on the graph according to a subset of the edge weights.

In particular, given objective f, we define a directed submodularity graph whose nodes are the elements in V, and each edge from tail u to head v is associated with a weight w_{uv} that reflects the worst-case net loss when maximizing f caused by removing v while retaining u (f(v|u) is the greatest loss when removing v while retaining u, while f(u|V∖u) is the least gain of retaining u). Intuitively, removing head nodes of small-weight edges reduces the ground set from V to a (hopefully much) smaller V′, and selecting elements from V′ rather than V causes a small overall objective loss but can be much faster.

Finding, however, the smallest V′ such that the resulting objective loss can be upper bounded by some constant turns out to be another challenging non-monotone submodular maximization problem, leading to a chicken-and-egg situation. In addition, finding a near optimal solution to this problem requires computing weights on all n(n−1) edges. We instead propose a randomized pruning method called "submodular sparsification (SS)" to reduce the ground set. By leveraging a directed triangle inequality on the submodularity graph (Lemma 3), SS only needs to compute partial weights on a few randomly selected edges, and this only slightly increases the objective loss caused by using the reduced set V′ rather than V. At each step, SS randomly samples O(log n) elements from V as probes, and removes a 1 − 1/√c fraction of head elements in V that have the smallest weights from amongst the randomly selected elements. When the tradeoff parameter c increases, the success probability of the randomized algorithm increases, but the memory size also increases. With it set as c = 8, the number of iterations log_{2√2} n is small, and per-iteration complexity is dominated by the computation of the pairwise edge weights, which is small and highly parallelizable. Hence, SS can scale to large data sizes.

In experiments, we compare SS with the lazy greedy and sieve-streaming algorithms [2] on real-world news and video summarization datasets. Using the lazy greedy algorithm with an SS-reduced ground set, we achieve quality similar to that on the original ground set, but with computation and memory load greatly reduced and, in fact, comparable to a streaming algorithm, whose quality is usually much worse than that of offline methods.

1.2 Related Work

A number of methods have been proposed to accelerate the greedy algorithm. Most of them, however, aim to reduce or distribute the computation rather than the memory, and rarely do they study how to reduce the ground set V. Their contributions are therefore mostly complementary to SS (i.e., they can be combined with SS to further improve algorithmic scalability).

The lazy, or accelerated, greedy algorithm [20, 17] reduces the number of function evaluations per step by lazily updating a priority queue of marginal gains over all elements. At each step, the algorithm repeatedly re-evaluates the marginal gain f(v|S) of the top element and re-inserts it into the queue until the top element does not change position in the queue; it then adds this element to the running solution. Due to submodularity, the lazy greedy algorithm has the same output and mathematical guarantee as the original greedy algorithm, and it significantly reduces computation in practice, but in the worst case it is as slow as (if not slower than) the original greedy algorithm.
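The lazy evaluation idea can be sketched with a max-heap of possibly stale marginal gains (our own illustration, not the authors' code; `f` is any monotone submodular function):

```python
import heapq

def lazy_greedy(f, V, k):
    """Lazy greedy: keep upper bounds on marginal gains in a max-heap
    (negated for heapq's min-heap). By submodularity, a stale gain can only
    over-estimate, so if a refreshed gain still tops the heap, that element
    is the true argmax and can be added without checking the rest."""
    S, f_S = set(), f(set())
    heap = [(-(f({v}) - f(set())), v) for v in V]
    heapq.heapify(heap)
    while len(S) < k and heap:
        _, v = heapq.heappop(heap)
        gain = f(S | {v}) - f_S                # refresh the stale bound
        if heap and gain < -heap[0][0]:        # fell below the new top:
            heapq.heappush(heap, (-gain, v))   # re-insert and try again
            continue
        S.add(v)
        f_S += gain
    return S

# Small coverage example; lazy greedy matches plain greedy's output.
sets = {0: {"a", "b"}, 1: {"b", "c"}, 2: {"c", "d", "e"}}
cov = lambda S: len(set().union(*(sets[v] for v in S))) if S else 0
picked = lazy_greedy(cov, set(sets), k=2)
```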

Approximate greedy algorithms further reduce the number of function evaluations per step at the cost of a worse approximation factor. In [27, 3], each step only approximately identifies the element with the largest marginal gain, by finding any element whose marginal gain is larger than a fraction of its upper bound. The "lazier than lazy greedy" approach [22] selects the element from a smaller random subset R each step, so only the marginal gains of the elements in R need be computed. A similar algorithm in [7] randomly selects an element from a reasonably good subset per step, and extends to the non-monotone case.
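A minimal sketch of the random-subset idea of [22] (our own illustration; the subset size (n/k)·ln(1/ε) follows that paper, which shows a (1 − 1/e − ε) guarantee in expectation):

```python
import math
import random

def stochastic_greedy(f, V, k, eps=0.1, seed=0):
    """'Lazier than lazy greedy' sketch: each step evaluates marginal
    gains only on a random subset R of size ~(n/k)*ln(1/eps) instead of
    the whole remaining ground set."""
    rng = random.Random(seed)
    S = set()
    n = len(V)
    r = max(1, int((n / k) * math.log(1 / eps)))
    for _ in range(k):
        pool = [v for v in sorted(V) if v not in S]
        if not pool:
            break
        R = rng.sample(pool, min(r, len(pool)))
        v_star = max(R, key=lambda v: f(S | {v}) - f(S))
        S.add(v_star)
    return S
```

With a very small ε the subset covers the whole pool and the method degenerates to exact greedy.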

Streaming submodular maximization [2, 8, 9, 12, 4] studies how to approximate the greedy algorithm in one pass over the data under a limited memory budget (i.e., the algorithm can access only a small number of elements in the stream history at a time). The best known approximation factor and hardness are both 1/2 [2, 8], worse than the 1 − 1/e of the offline greedy algorithm.

Distributed and parallel greedy algorithms [23, 26] typically partition the ground set into several not-necessarily disjoint pieces, assign them to multiple machines, run greedy on each machine, and finally combine the results. These approaches fall into the framework of composable coresets. The existence of such methods for some important submodular maximization problems is not always possible [14]. In [21], a randomized composable coreset method is proposed to achieve an expected constant-factor bound for the combined solution. The major difference of this paper is that we study how to reduce the ground set rather than partition it, by developing a coreset-like algorithm on the submodularity graph rather than running the greedy algorithm to achieve a coreset on each machine. However, by replacing the greedy algorithm on each machine with SS, we can further speed up distributed submodular maximization by speeding up the computation at each parallel node.

Another class of methods [16, 27] accelerates the greedy algorithm by maximizing a surrogate function whose evaluation is faster and cheaper than the original objective. The surrogate can be either a tight modular lower bound or a simpler submodular function. It can also be adaptively changed in each step to better approach the original objective. In [27], a simple pruning method is used to reduce V by exploiting f(v|V∖v), a lower bound of f(v|A) for any A ⊆ V∖v. E.g., an element v whose singleton gain f(v) is less than the largest f(u|V∖u) over all u ∈ V can be safely removed. Besides exploiting the global redundancy of v via f(v|V∖v), the weight w_{uv} used in SS further takes the pairwise relationship between elements into account. This can result in further ground set reduction.

2 Submodularity Graph

We next introduce the “submodularity graph,” a useful and efficient tool to explore the redundancy of ground sets in a submodular maximization process.

Definition 1.

The submodularity graph G_f = (V, E, w) is a weighted directed graph defined by a normalized submodular function f, where the node set V corresponds to the ground set, and each directed edge from u to v has weight w_{uv} defined as:

w_{uv} ≜ f(v|u) − f(u|V∖u)    (3)

Intuitively, the weight w_{uv} measures the worst case net loss in maximizing f on a reduced set with v removed and u retained. In Eq. (3), f(v|u) is the maximum possible gain v can offer a set involving u, while f(u|V∖u) is the minimal possible gain u can contribute to the solution, because f(u|A) ≥ f(u|V∖u) for any A ⊆ V∖u holds by submodularity. Hence, a small f(v|u) indicates v is unimportant if u is retained in a solution, while a large f(u|V∖u) implies that u is always important. Taken together, a small w_{uv} would suggest removing v while keeping u. Note w_{uv} is a net loss, combining both the "local" importance of v and the "global" importance of u. Previous work such as [27] and curvature based methods [15] do not leverage local and global importance in the same way.
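Reading Eq. (3) as w_{uv} = f(v|u) − f(u|V∖u), the edge weight can be computed as follows (our own sketch on a toy coverage function; the reading of the garbled equation is our reconstruction):

```python
def marginal(f, v, A):
    """f(v | A) = f(A + v) - f(A)."""
    return f(A | {v}) - f(A)

def edge_weight(f, V, u, v):
    """w_uv = f(v | {u}) - f(u | V \\ {u}): the largest gain v can offer
    a set containing u, minus the smallest gain u can ever contribute."""
    return marginal(f, v, {u}) - marginal(f, u, V - {u})

# Toy coverage function: element 1 is largely redundant given element 0,
# so the edge 0 -> 1 ("keep 0, drop 1") gets a low weight.
sets = {0: {"a", "b"}, 1: {"b", "c"}, 2: {"c", "d", "e"}}
cov = lambda S: len(set().union(*(sets[v] for v in S))) if S else 0
V = set(sets)
w_01 = edge_weight(cov, V, 0, 1)
```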

We further generalize G_f to a "conditional submodularity graph" G_f(A) describing the pairwise relationships conditioned on a set A. Accordingly, the edge weight on G_f(A) is:

w_{uv|A} ≜ f(v|u ∪ A) − f(u|V∖u)    (4)

G_f(A) reduces to G_f when A = ∅, usually the starting set in a greedy submodular maximization procedure. Below we give a detailed analysis of how edge weight can be used to remove elements from V. For notational simplicity, we use "u ∪ A" to denote the set union "{u} ∪ A," and "A∖u" for the set subtraction "A∖{u}". We start by studying two properties of w_{uv|A}.

Lemma 1.

If A ⊆ B ⊆ V, for any u, v such that u, v ∉ B, w_{uv|B} ≤ w_{uv|A}.

Proof.

Submodularity requires f(v|u ∪ B) ≤ f(v|u ∪ A). From the definition of w_{uv|A} in (4), the conclusion is immediate. ∎

Lemma 2.

For any u, v ∈ V and A ⊆ V, if u ∉ A and v ∉ A, then

f(v|A) ≤ f(u|A) + w_{uv|A}    (5)
Proof.
f(v|A) − f(u|A) = f(v|u ∪ A) − f(u|v ∪ A)    (6)
≤ f(v|u ∪ A) − f(u|V∖u) = w_{uv|A}    (7)

The first equality is obtained using the definition of the marginal gain, while the inequality is from submodularity, since f(u|v ∪ A) ≥ f(u|V∖u). ∎

Lemma 2 states that the weight w_{uv|A} relates the two marginal gains of u and v relative to A. The marginal gain f(v|A) plays a critical role in various submodular maximization algorithms since it measures how much f(A) is improved by adding v to A. In each step, the greedy algorithm selects the element with the largest marginal gain, i.e., u* ∈ argmax_{u ∈ V∖A} f(u|A), and increases f(A) by f(u*|A).

If v should be selected by the greedy algorithm at the current step, but for some reason is missing in V′ (a reduced ground set), then greedy instead selects u* ∈ argmax_{u ∈ V′∖A} f(u|A). In this case, the objective increases by f(u*|A) rather than f(v|A). By the relative optimality of u* in V′ and Lemma 2, we have

f(v|A) − f(u*|A) ≤ min_{u ∈ V′∖A} w_{uv|A}    (8)

Hence, the objective loss caused by removing v from V and using u* instead is at most the minimal weight over all edges entering v from other elements in V′. In other words, an upper bound on the price for pruning v is min_{u ∈ V′∖A} w_{uv|A}, which reflects the contribution of v to the set V′. If it is small, the objective loss is, relatively speaking, negligible and v may be removed with impunity. We hence define this concept as a "divergence" of v from V′ on G_f(A):

Definition 2.

On the submodularity graph G_f, the divergence of a node v from a set of nodes U is defined as w_{Uv} ≜ min_{u ∈ U} w_{uv}. Similarly, the divergence on the conditional submodularity graph G_f(A) is defined as w_{Uv|A} ≜ min_{u ∈ U} w_{uv|A}.

Although the edge weights are asymmetric, we next show that a directed triangle inequality holds on G_f. This plays a significant role in SS, since it provides an upper bound on an edge weight based on the weights of adjacent edges, and thus avoids needing to compute all the edge weights exactly.

Lemma 3.

For u, x, v ∈ V, we have w_{uv} ≤ w_{ux} + w_{xv}.

The proof is given in [1]. A similar inequality also holds for w_{uv|A} defined on G_f(A).

3 Submodular Sparsification

In this section, we introduce submodular sparsification (SS), a randomized pruning algorithm that reduces V to V′ without drastically hurting the optimality of submodular maximization. Although pruning the conditional submodularity graph interleaved with the greedy algorithm can rule out additional elements, here we focus on reducing V before running any submodular maximization algorithm, i.e., when A = ∅; it is worth noting, though, that SS can be easily extended to A ≠ ∅.

3.1 Pruning as Submodular Maximization

According to Eq. (8) and Definition 2, a small divergence w_{V′v} for all pruned elements v ∈ V∖V′ leads to a small loss in the per-step increase of the objective function by the greedy algorithm. By parameterizing an upper bound ε on this divergence, the following seeks the best pruned set for use in the maximization of f.

Definition 3 (submodular sparsification).

The submodular sparsification problem is to solve:

(9)
Proposition 1.

The objective function in Eq. (9) is non-monotone submodular.

The proof is in [1]. Let V*_ε of size m_ε be the optimal solution of Eq. (9) (note both are ε-dependent; the proof also shows m_ε is monotone in ε). Running greedy on V*_ε rather than V yields:

Theorem 1.

Let S* ∈ argmax_{S ⊆ V, |S| ≤ k} f(S), where f is normalized, non-decreasing, and submodular, and let Ŝ be a greedy solution to the problem max_{S ⊆ V*_ε, |S| ≤ k} f(S). If the divergence of every pruned element is at most ε, the following approximation bound holds for Ŝ.

f(Ŝ) ≥ (1 − 1/e)(f(S*) − kε)    (10)

A proof of this is given in [1]. Unfortunately, solving Eq. (9) leads to a chicken-and-egg problem: even approximately solving this unconstrained non-monotone submodular maximization requires the expensive bi-directional randomized greedy algorithm [6], which has approximation factor 1/2 and is slow in practice. Also, when f is not a graph based submodular function (such as facility location or saturated coverage), solving Eq. (9) requires a costly computation of the weights on all n(n−1) edges.

3.2 Randomized Pruning

Drawing inspiration from bi-criteria k-clustering in Euclidean space [10], we develop a randomized pruning method ("submodular sparsification (SS)") on the submodularity graph to produce a reduced ground set without either computing all weights or running bi-directional greedy.

1:  Input: V, f, c > 1, and a stopping threshold
2:  Output: V′
3:  Initialize: V_t ← V, U ← ∅
4:  while |V_t| is above the stopping threshold do
5:     Sample O(log n) items uniformly at random from V_t and place them in U_t;
6:     U ← U ∪ U_t;
7:     V_t ← V_t ∖ U_t;
8:     for v ∈ V_t do
9:        w_{Uv} ← min_{u ∈ U} w_{uv}
10:     end for
11:     Remove from V_t the top 1 − 1/√c fraction of elements with the smallest w_{Uv};
12:  end while
13:  V′ ← V_t ∪ U
Algorithm 1 Submodular Sparsification (SS)

The submodular sparsification procedure is given in Algorithm 1. It starts from the original ground set V and an empty set U. At each iteration, it randomly samples a size-O(log n) set U_t of elements from the current V_t (all logarithms in this paper use a fixed base unless otherwise specified), acting as probes to test the redundancy of the remaining elements in V_t; the sampled elements are removed from V_t and added to U. It then removes the top 1 − 1/√c fraction of elements from V_t having the smallest divergence w_{Uv} from U, because of their unimportance relative to U. The procedure repeats, and the size of V_t shrinks exponentially fast (with a shrink rate of 1/√c) until it falls below a threshold. The parameter c controls the size of a probe set and influences the size of the final V′. In our analysis below, a probe set of size O(log n) suffices to produce a sufficiently large success probability. In practice, we choose c = 8 to produce a fast shrink rate of √2/4, since SS can then remove more than half (1 − √2/4 ≈ 0.65) of V_t per step. Since the constants in the analysis are unknown in practice, we find that a probe-set size proportional to log n also empirically works well (see Section 4).

Algorithm 1 finishes in O(log_√c n) iterations, a small iteration complexity when c is large. The per-iteration computation is dominated by computing w_{Uv}, which requires calculating |U| · |V_t| pairwise relationships. This can be simplified if f is graph based, because the first greedy step already requires all of the pairwise similarities/distances needed for further evaluations. When f is not graph based, this can be accelerated via parallelization, since disjoint pairs in the set may be independently computed. The term f(u|V∖u) may be precomputed once in linear time.
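Putting the pieces together, Algorithm 1 might be sketched as follows (a simplified, sequential rendering under our reconstruction; the probe-set size, the `stop` threshold, and the weight formula w_uv = f(v|u) − f(u|V∖u) are our assumptions, not the authors' exact choices):

```python
import math
import random

def submodular_sparsify(f, V, c=8.0, stop=32, seed=0):
    """SS sketch: repeatedly sample ~c*log(n) probes into U, score each
    remaining element v by its divergence w_Uv = min over u in U of w_uv,
    and keep only the 1/sqrt(c) fraction with the largest divergence."""
    rng = random.Random(seed)
    V_t, U = set(V), set()
    n = len(V_t)
    # f(u | V - u), precomputed once in linear time
    f_min = {u: f(set(V)) - f(set(V) - {u}) for u in V}
    while len(V_t) > stop:
        k = min(len(V_t), int(c * math.log(n)) + 1)
        probes = rng.sample(sorted(V_t), k)
        U |= set(probes)
        V_t -= set(probes)
        # divergence of each remaining v from the probe set U (Definition 2)
        div = {v: min(f({u, v}) - f({u}) - f_min[u] for u in U) for v in V_t}
        keep = int(len(V_t) / math.sqrt(c))   # retain a 1/sqrt(c) fraction
        V_t = set(sorted(V_t, key=div.get, reverse=True)[:keep])
    return V_t | U
```

The returned set V_t ∪ U can then be handed to the (lazy) greedy algorithm in place of V.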

3.3 Analysis of Submodular Sparsification

According to Lemma 2, a small w_{uv} leads to a small objective loss when v is removed and u retained. Instead of solving the non-monotone submodular maximization in Eq. (9), SS randomly selects probes to rule out elements from V. The following lemma uses the directed triangle inequality in Lemma 3 to study which u's, if sampled, can lead to a relatively small w_{uv} and thus a small w_{Uv} in Algorithm 1. Proofs of all the following results can be found in [1].

Lemma 4.

Let be the tail node of an edge with the minimal weight over all edges from elements in to head . Then, for any item , where

we have that .

Lemma 4 states that for any item v, if at least one suitable probe is sampled in Algorithm 1, then the divergence w_{Uv}, which upper bounds the loss in f caused by dropping v, is sufficiently small, so v can be safely removed. The discussion below describes how to sample the probes and which elements to drop.

Proposition 2.

For an element and , define its -NN ball as the set of elements in with the smallest , and let denote the set of elements ruled out by . If one is sampled into in some iteration of Algorithm 1, then all the elements in outside the ball fulfill the following:

(11)

Based on Proposition 2, we can derive the maximal number of removed elements whose importance represented by cannot be upper bounded.

Proposition 3.

For each , if one is sampled into and added to in some iteration of Algorithm 1, then

(12)

The following proposition explains why Algorithm 1 reduces the ground set exponentially by a ratio of 1/√c. It also shows that the divergence of all the pruned elements is bounded, which indicates that ruling them out of V will lead to at most a small loss in the objective f.

Proposition 4.

Before line 11 of Algorithm 1, the following holds.

(13)

Therefore, it is safe to remove the 1 − 1/√c fraction of items from V_t with the smallest w_{Uv}, since their importance can be upper bounded. Proposition 4 results in the following Lemma.

Lemma 5.

For each v ∈ V∖V′, if at least one suitable probe is sampled and added into U, where V′ is the output of Algorithm 1, we have that the divergence w_{V′v} is bounded.

Figure 1: Utility and time cost vs. size of data

Now we study the failure probability, i.e., the probability that the condition in Lemma 5 is not true.

Proposition 5.

If for each , the probability that sampling an item uniformly from such that is not less than , and if , then the probability that no is sampled and added into for at least one in at least one iteration of Algorithm 1 is at most .

By using Lemma 5 and Proposition 5, we can replace the bound ε in the proof of Theorem 1 with the divergence guaranteed by Algorithm 1, which yields:

Theorem 2.

Under the assumptions in Proposition 5, the size of the output V′ of Algorithm 1 is O(log² n). With high probability, every pruned element has small divergence from V′, and thus the greedy algorithm on V′ outputs a solution Ŝ such that

(14)

where S* is the optimal solution to Eq. (2), and k is the budget in Eq. (2).

Remarks: Critically, via ε and c, the above analysis shows a tradeoff between: 1) the approximation bound, 2) the size of V′ (the memory load), and 3) the computational cost. The approximation bound in Eq. (14) can be improved if ε in Eq. (9) is small, but a smaller ε leads to a larger m_ε (the size of the optimal solution to Eq. (9)). This results in a larger reduced set V′; and a larger V′ produced by Algorithm 1 means more computation per step. It also shows a tradeoff between the success probability and the memory via c: if c is large, the success probability increases, but |V′| also increases. Note that ε measures the loss from approximate optimality (the guarantee), and m_ε measures the ε-reducibility of V. SS fails when m_ε is too large. On real datasets we observe that m_ε is small even when ε is small, thus suggesting a large zone of practical success for SS.

Figure 2: Relative utility and time cost associated with different sizes of the reduced set V′, which correspond to different values of c.

SS can also reduce the ground set for non-monotone submodular maximization and for monotone maximization under general constraints (e.g., knapsack or matroid) by applying it before any algorithm runs. All the previous analysis still holds in general except Theorem 1 and Theorem 2, whose proofs rely on a cardinality constraint and monotonicity. They can be easily modified, however, by applying Eq. (19) to the proof of the other algorithm's bound. The fundamental reason is that the properties (Lemmas 1-3) of the weight w_{uv} on the submodularity graph depend only on the submodularity and non-negativity of f.

3.4 Additional Improvements

In practice, several techniques can further be applied to Algorithm 1 to improve either its effectiveness or efficiency. Firstly, the pruning technique based on f(v|V∖v) proposed in [27] can be applied to V before running Algorithm 1 to rule out additional elements and save computation.

Figure 3: Statistics of relative utility, ROUGE-2 score, and F1-score on daily news summarization results over the evaluated days of news from the New York Times corpus between 1996 and 2007.

The second improvement would use importance sampling rather than uniform sampling in Algorithm 1. According to Proposition 5, sampling probes with large weight is helpful to increase the success probability. Intuitively, a large f(u) suggests u may be important, while a large f(u|V∖u) indicates its importance is undiminished by other elements in V.

The third strategy is to further reduce V′ by exploring its redundancy. In particular, after Algorithm 1, the bi-directional greedy algorithm [6] can be used to solve Eq. (9) defined on the reduced ground set V′. Since V′ is much smaller than V, the cost may be acceptable.

4 Experiments

In this section, on several news and video datasets, we compare the summary achieved by running the greedy algorithm on the reduced set V′ of SS with summaries achieved by other algorithms on the original set V. We use a feature based submodular function f(S) = Σ_{w∈W} √(m_w(S)) as our objective, where W is a set of features and m_w is a modular score (m_w(v) is the affinity of element v to feature w). This function typically achieves good performance on summarization tasks. Our baseline algorithms are the lazy greedy approach [20] (which has output identical to greedy but is faster) and the "sieve-streaming" [2] approach for streaming submodular maximization, which has low memory requirements as it takes one pass over the data. We set c = 8 and a probe-set size proportional to log n in Algorithm 1.
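A sketch of such a feature-based objective (the square-root as the concave function is our reading of the garbled formula; any non-decreasing concave function yields a submodular f):

```python
import math

def feature_based(S, affinity):
    """f(S) = sum over features w of sqrt(sum over v in S of m_w(v)),
    where m_w(v) >= 0 is the affinity of element v to feature w. The
    concave sqrt damps repeated coverage of a feature, rewarding
    diversity across features."""
    totals = {}
    for v in S:
        for w, score in affinity[v].items():
            totals[w] = totals.get(w, 0.0) + score
    return sum(math.sqrt(t) for t in totals.values())

# Two redundant "sports" sentences score lower than sports + finance.
aff = {
    0: {"sports": 4.0},
    1: {"sports": 4.0},
    2: {"finance": 4.0},
}
assert feature_based({0, 1}, aff) < feature_based({0, 2}, aff)
```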

4.1 Empirical Study on News

Figure 1 shows how f(S) and time cost vary when we change n. The budget size k of the summary set is set to the number of sentences in a human generated summary. The number of trials in sieve-streaming determines its memory requirement. The utility curve of SS overlaps that of lazy greedy, while its time cost is much less and increases more slowly than that of lazy greedy. Sieve-streaming performs much worse than SS in terms of utility, and its time cost is only slightly less (this is because it quickly fills its memory with elements and stops much earlier, before seeing all elements). Figure 2 shows how the relative utility (the ratio of the achieved utility to that of the greedy solution) and the SS time cost vary with the size of the reduced set V′. SS quickly reaches the utility of the greedy solution once the size of V′ exceeds a small threshold, while its computational cost increases slowly.

4.2 News Summarization

Figure 4: Size of data vs. time cost on daily news summarization over the evaluated days of news from the New York Times corpus between 1996 and 2007. The area of each circle is proportional to the relative utility.

We conduct summarization experiments on two large news corpora, the New York Times annotated corpus 1996-2007 (https://catalog.ldc.upenn.edu/LDC2008T19), and the DUC 2001 corpus (http://www-nlpir.nist.gov/projects/duc). The first dataset includes articles published in the New York Times over the days from 1996 to 2007. We collect the sentences in articles associated with human generated summaries as the ground set (with sizes varying from day to day), and extract their TFIDF features to build the objective. We concatenate the sentences from all human generated summaries for the same date as a reference summary. We compare the machine generated summaries produced by different methods with the reference summary by ROUGE-2 [18] (recall on 2-grams) and ROUGE-2 F1-score (F1-measure based on recall and precision of 2-grams).

We also compare their relative utility. As before, sieve-streaming has the same memory setting. The statistics over all evaluated days are shown in Figure 3. SS has a relative utility close to 1 on most days, while sieve-streaming is mostly well below that. Both the ROUGE-2 and F1 scores of SS are better than those of sieve-streaming, and even outperform greedy a bit. This may be because SS removes many of the elements on which greedy might become trapped in some locally sub-optimal region.

Figure 5: Scatter plot of relative utility achieved by submodular sparsification on each day's news, with the corresponding size of the ground set V and the size of the reduced set V′. Each point corresponds to one day.

Figure 4 shows the number of sentences per day and the corresponding time cost of each algorithm. The area of each circle is proportional to relative utility. We use a log scale time axis for a wider dynamic range. SS reduces computation over lazy greedy, especially when n is large. Sieve-streaming's time cost decreases for large n, but its relative utility is reduced due to the aforementioned early stopping. Figure 5 shows the distribution of relative utility achieved by SS with different data sizes and reduced ground set sizes over different days. The relative utility of SS is close to 1 on most days, and sometimes even exceeds 1. This indicates that summarization on the reduced set V′ achieved by SS can even occasionally outperform that on the original ground set V.

4.3 Video Summarization

We apply lazy greedy, sieve-streaming, and SS to videos from the SumMe dataset [13] (http://www.vision.ee.ethz.ch/~gyglim/vsum/). The number of frames per video is given in Table 2 of [1], and the results are given in [1]. The greedy algorithm on the SS-reduced ground set consistently approaches or outperforms lazy greedy on recall and F1-score, while the time cost is much smaller and a large fraction of frames may be removed.

References

  • [1] Anonymous. Supplementary material for submodular sparsification. In Submitted to NIPS, 2016.
  • [2] Ashwinkumar Badanidiyuru, Baharan Mirzasoleiman, Amin Karbasi, and Andreas Krause. Streaming submodular maximization: Massive data summarization on the fly. In SIGKDD, pages 671–680, 2014.
  • [3] Ashwinkumar Badanidiyuru and Jan Vondrák. Fast algorithms for maximizing submodular functions. In SODA, pages 1497–1514, 2014.
  • [4] Mohammadhossein Bateni, Mohammadtaghi Hajiaghayi, and Morteza Zadimoghaddam. Submodular secretary problem and extensions. ACM Trans. Algorithms, 9(4):32:1–32:23, 2013.
  • [5] Anna Bosch, Andrew Zisserman, and Xavier Munoz. Representing shape with a spatial pyramid kernel. In ACM International Conference on Image and Video Retrieval, pages 401–408, 2007.
  • [6] Niv Buchbinder, Moran Feldman, Joseph (Seffi) Naor, and Roy Schwartz. A tight linear time (1/2)-approximation for unconstrained submodular maximization. In FOCS, pages 649–658, 2012.
  • [7] Niv Buchbinder, Moran Feldman, Joseph (Seffi) Naor, and Roy Schwartz. Submodular maximization with cardinality constraints. In SODA, pages 1433–1452, 2014.
  • [8] Niv Buchbinder, Moran Feldman, and Roy Schwartz. Online submodular maximization with preemption. In SODA, pages 1202–1216, 2015.
  • [9] Chandra Chekuri, Shalmoli Gupta, and Kent Quanrud. Streaming algorithms for submodular function maximization. arXiv:1504.08024, 2015.
  • [10] Dan Feldman, Amos Fiat, Micha Sharir, and Danny Segev. Bi-criteria linear-time approximations for generalized k-mean/median/center. In Proceedings of the Twenty-third Annual Symposium on Computational Geometry, pages 19–26, 2007.
  • [11] Satoru Fujishige. Submodular functions and optimization. Annals of discrete mathematics. Elsevier, 2005.
  • [12] Ryan Gomes and Andreas Krause. Budgeted nonparametric learning from data streams. In ICML, 2010.
  • [13] Michael Gygli, Helmut Grabner, Hayko Riemenschneider, and Luc Van Gool. Creating summaries from user videos. In ECCV, 2014.
  • [14] Piotr Indyk, Sepideh Mahabadi, Mohammad Mahdian, and Vahab S. Mirrokni. Composable core-sets for diversity and coverage maximization. In PODS, pages 100–108, 2014.
  • [15] Rishabh Iyer, Stefanie Jegelka, and Jeff Bilmes. Curvature and optimal algorithms for learning and minimizing submodular functions. In NIPS, 2013.
  • [16] Rishabh Iyer, Stefanie Jegelka, and Jeff A. Bilmes. Fast semidifferential-based submodular function optimization. In ICML, 2013.
  • [17] Jure Leskovec, Andreas Krause, Carlos Guestrin, Christos Faloutsos, Jeanne VanBriesen, and Natalie Glance. Cost-effective outbreak detection in networks. In SIGKDD, pages 420–429, 2007.
  • [18] Chin-Yew Lin. Rouge: A package for automatic evaluation of summaries. In Text Summarization Branches Out: Proceedings of the ACL-04 Workshop, pages 74–81, 2004.
  • [19] Hui Lin and Jeff Bilmes. A class of submodular functions for document summarization. In ACL, pages 510–520, 2011.
  • [20] Michel Minoux. Accelerated greedy algorithms for maximizing submodular set functions. In Optimization Techniques, volume 7 of Lecture Notes in Control and Information Sciences, chapter 27, pages 234–243. 1978.
  • [21] Vahab Mirrokni and Morteza Zadimoghaddam. Randomized composable core-sets for distributed submodular maximization. In STOC, pages 153–162, 2015.
  • [22] Baharan Mirzasoleiman, Ashwinkumar Badanidiyuru, Amin Karbasi, Jan Vondrák, and Andreas Krause. Lazier than lazy greedy. In AAAI, pages 1812–1818, 2015.
  • [23] Baharan Mirzasoleiman, Amin Karbasi, Rik Sarkar, and Andreas Krause. Distributed submodular maximization: Identifying representative elements in massive data. In NIPS, pages 2049–2057, 2013.
  • [24] G. L. Nemhauser, L. A. Wolsey, and M. L. Fisher. An analysis of approximations for maximizing submodular set functions—I. Mathematical Programming, 14(1):265–294, 1978.
  • [25] Aude Oliva and Antonio Torralba. Modeling the shape of the scene: A holistic representation of the spatial envelope. International Journal of Computer Vision, 42(3):145–175, 2001.
  • [26] Xinghao Pan, Stefanie Jegelka, Joseph E Gonzalez, Joseph K Bradley, and Michael I Jordan. Parallel double greedy submodular maximization. In NIPS, pages 118–126, 2014.
  • [27] Kai Wei, Rishabh Iyer, and Jeff Bilmes. Fast multi-stage submodular maximization. In ICML, 2014.

5 Appendix

5.1 Proof of Lemma 3

Proof.

First, we have the following inequality.

(15)

The first two equalities follow from the definition of marginal gain, while the inequality is due to submodularity. Following the definition of in Eq. (3), we have

(16)

The first inequality is due to Eq. (5.1), and the second inequality is via submodularity. ∎
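The proof above manipulates marginal gains and the diminishing-returns inequality of submodular functions. A tiny numeric illustration (the coverage function and element names here are hypothetical, chosen only to make the inequality concrete):

```python
def marginal(f, v, S):
    """Marginal gain f(v | S) = f(S ∪ {v}) - f(S)."""
    return f(S | {v}) - f(S)

# Hypothetical coverage function, used only to illustrate the submodularity
# inequality invoked in the proof: for A ⊆ B and v ∉ B, f(v | A) >= f(v | B).
cover = {0: {"a", "b"}, 1: {"b", "c"}, 2: {"c"}}
f = lambda S: len(set().union(*(cover[i] for i in S))) if S else 0

A, B, v = {0}, {0, 1}, 2
print(marginal(f, v, A), marginal(f, v, B))  # 1 0  (the gain shrinks as the set grows)
```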

5.2 Proof of Proposition 1

Proof.

Define a set for each such that . Note because and hence . The objective function in Eq. (9) can be written as

(17)
(18)

where is the simple set cover function [11], which is monotone non-decreasing submodular, and is a monotone decreasing modular (negative cardinality) function. Because the sum of a submodular function and a modular function is still submodular, the objective in Eq. (9) is non-monotone submodular. ∎
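The decomposition above (a monotone set-cover function plus a negative-cardinality modular term) can be checked numerically on a toy instance. The sketch below uses hypothetical coverage data and brute-forces the diminishing-returns condition, confirming that the sum stays submodular while becoming non-monotone:

```python
from itertools import chain, combinations

# Hypothetical coverage data: element i covers the items in covers[i].
covers = {0: {"a", "b"}, 1: {"b", "c"}, 2: {"c", "d"}, 3: {"d"}}
lam = 0.5  # weight of the negative-cardinality (modular) term

def g(S):
    """Set cover minus weighted cardinality: submodular but non-monotone."""
    covered = set().union(*(covers[i] for i in S)) if S else set()
    return len(covered) - lam * len(S)

def subsets(items):
    items = list(items)
    return chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))

def is_submodular(ground):
    # Brute-force diminishing returns: for all A ⊆ B and x ∉ B,
    # g(A ∪ {x}) - g(A) >= g(B ∪ {x}) - g(B).
    for B in map(set, subsets(ground)):
        for A in map(set, subsets(B)):
            for x in ground - B:
                if g(A | {x}) - g(A) < g(B | {x}) - g(B) - 1e-12:
                    return False
    return True

ground = {0, 1, 2, 3}
print(is_submodular(ground))  # True: the sum stays submodular
print(g({0, 2}) > g(ground))  # True: but it is non-monotone
```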

5.3 Proof of Theorem 1

Proof.

Recall that is the tail node of the minimum-weight edge among all edges from elements in to head . Since , the greedy algorithm on runs for steps and selects elements. Let denote the solution set at the beginning of the step, and let be the element selected in this step. In addition, let be the unfettered greedy choice at step . Then we have the following:

(19)

The first inequality is by Eq. (8), the second inequality is due to Lemma 1, while the last inequality comes from the definition of problem Eq. (9). Hence, for arbitrary , we have

(20)

The first inequality uses monotonicity of , while the second one is due to submodularity. The third inequality is due to the non-negativity of . The fourth inequality is due to the maximal greedy selection rule for the greedy algorithm on the original ground set . The fifth inequality is the result of applying Eq. (19). The last equality is due to the greedy selection rule for the greedy algorithm on the reduced ground set . Rearranging Eq. (20) yields

(21)

Let

(22)

then the rearranged inequality is equivalent to

(23)

Since , this is equivalent to

(24)

Since the greedy algorithm selects elements in total, applying Eq. (24) from to yields

(25)

By using the definition of in Eq. (22), the above inequality leads to

(26)

This completes the proof. ∎
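Theorem 1 compares greedy run on the pruned ground set against greedy run on the full set. For reference, here is a minimal sketch of the plain greedy rule that both runs share; the coverage objective and element names are illustrative, not from the paper:

```python
def greedy(f, ground, k):
    """Plain greedy: repeatedly add the element with the largest marginal
    gain.  The theorem bounds the loss of running this same rule on the
    pruned ground set instead of the full one."""
    S = []
    for _ in range(k):
        best = max((x for x in ground if x not in S),
                   key=lambda x: f(S + [x]) - f(S))
        S.append(best)
    return S

# Hypothetical coverage objective for demonstration.
groups = {0: {"a", "b"}, 1: {"b", "c"}, 2: {"d"}, 3: {"a"}}
f = lambda S: len(set().union(*(groups[i] for i in S))) if S else 0
print(greedy(f, list(groups), 2))  # [0, 1]
```

Ties are broken by ground-set order here; any fixed tie-breaking rule works for the analysis.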

5.4 Proof of Lemma 4

Proof.

The proof follows from Lemma 3 and our assumption on .

The first inequality is due to Lemma 3. The second inequality holds because , which follows from . The third inequality is due to . ∎

5.5 Proof of Proposition 2

Proof.

Recall that is the optimal solution of the problem in Eq. (9). By the definition of the -NN ball, we have

(27)

Hence, . Applying Lemma 4, we have

(28)

This completes the proof. ∎

5.6 Proof of Proposition 3

Proof.

According to Proposition 2, for each , if one is sampled into in some iteration of Algorithm 1, then any item outside the ball satisfies

Hence, any element in the complement set fulfilling must be contained in at least one of the -NN balls whose centers are the elements in . Therefore, the total number of such elements is at most , the maximal number of elements across all the -NN balls. ∎

5.7 Proof of Proposition 4

Proof.

We consider , the set at the beginning of the iteration, and , the set right before the removal step of the previous iteration. By the pruning amount :

(29)

Since Proposition 3 indicates

(30)

we have

Because the above result holds for arbitrary , this completes the proof. ∎
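Proposition 4 bounds how the candidate set shrinks across iterations. Under the shrink rate stated in the abstract (keep a 1/√c fraction per round, using O(log n) randomly sampled probe nodes), one possible sketch of the pruning loop is below. The function `weight(u, v)` is a hypothetical stand-in for the paper's submodularity-graph edge weight, and the stopping size and scoring rule are illustrative assumptions, not the exact algorithm:

```python
import math
import random

def ss_prune(V, weight, c=8, seed=0):
    """Sketch of one reading of the SS loop: each round samples O(log n)
    probe nodes, scores every node by its cheapest edge into the probes,
    and discards the (1 - 1/sqrt(c)) fraction with the smallest scores.
    weight(u, v) stands in for the submodularity-graph edge weight."""
    rng = random.Random(seed)
    V = list(V)
    n = len(V)
    target = max(1, int(math.log2(n) ** 2))  # stop near O(log^2 n) survivors
    while len(V) > target:
        probes = rng.sample(V, min(len(V), max(2, round(math.log2(n)))))
        # Score each node by its cheapest edge into the probe set.
        score = {u: min(weight(u, v) for v in probes if v != u) for u in V}
        keep = max(target, int(len(V) / math.sqrt(c)))
        V = sorted(V, key=score.get, reverse=True)[:keep]
    return V

# Toy usage: elements on a line, edge weight = distance (hypothetical).
survivors = ss_prune(range(256), lambda u, v: abs(u - v))
print(len(survivors))
```

With c = 8 each round keeps roughly a 1/(2√2) fraction, matching the shrink rate and iteration count quoted in the abstract.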

5.8 Proof of Lemma 5

Proof.

According to Proposition 4, all the elements in are retained in after the removal, so none of them is in .

According to Proposition 2, if for each at least one alternate is sampled and added into ,