Streaming Algorithms for Cardinality-Constrained Maximization of Non-Monotone Submodular Functions in Linear Time

04/14/2021 · by Alan Kuhnle et al.

For the problem of maximizing a nonnegative, (not necessarily monotone) submodular function with respect to a cardinality constraint, we propose deterministic algorithms with linear time complexity; these are the first algorithms to obtain constant approximation ratio with high probability in linear time. Our first algorithm is a single-pass streaming algorithm that obtains ratio 9.298 + ϵ and makes only two queries per received element. Our second algorithm is a multi-pass streaming algorithm that obtains ratio 4 + ϵ. Empirically, the algorithms are validated to use fewer queries than and to obtain comparable objective values to state-of-the-art algorithms.


1 Introduction

A nonnegative set function f : 2^N → ℝ≥0, where the ground set N is of size n, is submodular if for all A, B ⊆ N, f(A) + f(B) ≥ f(A ∪ B) + f(A ∩ B). Submodular objective functions arise in many learning objectives, e.g.

interpreting neural networks

[9], nonlinear sparse regression [10]. Some applications yield submodular functions that are not monotone (a set function f is monotone if A ⊆ B implies f(A) ≤ f(B)): for example, image summarization with diversity [24], MAP Inference for Determinantal Point Processes [14], or revenue maximization on a social network [17]. In these applications, the task is to optimize a submodular function subject to a variety of constraints. In this work, we study the problem of submodular maximization with respect to a cardinality constraint (SMCC): i.e., given submodular f and integer k, determine max{f(S) : S ⊆ N, |S| ≤ k}. We consider the value query model, in which the function f is available to an algorithm as an oracle that returns, in a single operation, the value f(S) of any queried set S ⊆ N.
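As a concrete illustration of the value-query model, the sketch below (ours, not from the paper) implements a weighted coverage function, a standard example of a nonnegative submodular function, and checks the diminishing-returns property at two points:

```python
# Value-query model: f is exposed only as an oracle returning f(S) for any
# queried set S. Here f is a weighted coverage function (monotone submodular;
# the cut objective used later in the paper is submodular but non-monotone).

def make_coverage_oracle(sets, weights):
    """Return an oracle f(S) = total weight of items covered by the sets in S."""
    def f(S):
        covered = set()
        for i in S:
            covered |= sets[i]
        return sum(weights[item] for item in covered)
    return f

# Ground set N = {0, 1, 2}: each element covers some items.
sets = {0: {"a", "b"}, 1: {"b", "c"}, 2: {"c"}}
weights = {"a": 2.0, "b": 1.0, "c": 3.0}
f = make_coverage_oracle(sets, weights)

# Diminishing returns: the gain of adding element 1 shrinks as the base set grows.
gain_to_empty = f({1}) - f(set())          # 4.0
gain_to_02 = f({0, 1, 2}) - f({0, 2})      # 0.0: items b and c already covered
assert gain_to_empty >= gain_to_02
```

Every algorithm in the paper interacts with f only through such oracle calls, which is why query complexity is tracked alongside running time.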

Because of the recent phenomenon of big data, in which data size has exhibited exponential growth [26, 22], there has been substantial effort into the design of algorithms for SMCC with efficient time complexity (in addition to the time complexity of an algorithm, we also discuss the number of oracle queries an algorithm makes, or its query complexity; this is important as the function evaluation may be expensive), e.g. Badanidiyuru and Vondrák [2], Mirzasoleiman et al. [23], Fahrbach et al. [11]. An algorithm introduced by Buchbinder et al. [6] obtains an expected ratio of 1/e in O(n) time, which is close to the best known factor of 0.385 in polynomial time [4]. However, an algorithm with an expected ratio may produce a poor solution with constant probability unless independent repetitions are performed. Moreover, the derandomization of algorithms for submodular optimization has proven difficult; a method to derandomize some algorithms at the cost of a polynomial increase in time complexity was recently given by Buchbinder and Feldman [5]. No algorithm in prior literature has been shown to obtain a constant approximation ratio with high probability in linear time.

In addition to fast algorithms, much work has been done on streaming algorithms for SMCC, e.g. Badanidiyuru et al. [3], Chakrabarti and Kale [7], Mirzasoleiman et al. [25]; a streaming algorithm makes one or more passes through the ground set while using a small amount of memory. No streaming algorithm with linear time complexity has been proposed in prior literature.

Contributions

Reference Ratio Time Passes QpEpP Memory
[8] 9 1
[12] 1
[15] 1
[1] 1
[6] N/A N/A
[18] 1
This paper 1
This paper 1
Table 1: The symbol indicates that the ratio holds only in expectation; to obtain its ratio with high probability, such an algorithm must be independently repeated. The second-to-last column gives the queries per element per pass (QpEpP) of each streaming algorithm. The algorithm is an offline algorithm for SMCC with ratio and time complexity on input of size . The notation indicates that the constants dependent on accuracy parameter have been suppressed.

We provide two deterministic algorithms that achieve a constant approximation factor in linear time. These are 1) the first algorithms that achieve a constant ratio with high probability for SMCC in linear time; 2) the first deterministic algorithms to require only a linear number of oracle queries; and 3) the first linear-time streaming algorithms for SMCC, even among streaming algorithms that obtain a ratio in expectation only. Our first algorithm, QuickStream, is a single-pass algorithm that obtains ratio 9.298 + ϵ using O(n) oracle queries and O(n) time. Our second algorithm, MultiPassLinear, is a multi-pass algorithm that obtains a ratio of 4 + ϵ in O(n) time. Table 1 shows how our algorithms compare theoretically to state-of-the-art algorithms designed for SMCC.

To obtain our single-pass algorithm, we generalize the linear-time streaming method of Kuhnle [19] to non-monotone objectives. To do so, we follow a strategy of maintaining two disjoint candidate sets that compete for incoming elements. Variations of this general strategy have been employed for non-monotone objectives in many recent algorithms, e.g. Kuhnle [18], Alaluf et al. [1], Feldman et al. [13], Haba et al. [15], Han et al. [16]. At a high level, one novel component of our work is that our disjoint candidate sets are infeasible and may lose elements, a complicating factor that requires careful analysis and, to the best of our knowledge, is novel to our algorithm. To obtain our multi-pass algorithm, we use the constant factor from our single-pass algorithm to modify and speed up the greedy-based (1/4 − ϵ)-approximation of Kuhnle [18], which requires nearly linear time.

Finally, an empirical validation shows improvement in query complexity and solution value of both our single-pass algorithm and our multi-pass algorithm over the current state-of-the-art algorithms on two applications of SMCC.

1.1 Related Work

The field of submodular optimization is too broad to provide a comprehensive survey. In this section, we focus on the most relevant works to ours.

Kuhnle [19] recently introduced a deterministic, single-pass streaming algorithm for SMCC with monotone objectives that obtains a constant ratio in O(n) time. Our single-pass algorithm may be interpreted as an extension of this approach to non-monotone submodular functions, which requires significant changes to the algorithm and analysis: 1) To control the non-monotonicity, we introduce two disjoint sets A and B that compete for incoming elements. 2) Part of our analysis (Lemma 2) bounds the loss in value from set A (resp. B) in terms of the gains to A (resp. B). This bound is unnecessary in Kuhnle [19] and is complicated by the periodic deletions from A and B. 3) The condition on the marginal gain required to add an element is relaxed through a parameter α, which we use to optimize the ratio.

Alaluf et al. [1] introduced a deterministic, single-pass streaming algorithm that obtains the state-of-the-art ratio, where the ratio depends on an offline post-processing algorithm for SMCC. The algorithm of Alaluf et al. [1] uses disjoint candidate solutions, all of which are feasible solutions. The empirical solution quality of our algorithm may be improved by using similar post-processing, as we evaluate in Section 4. The time complexity of their algorithm depends on the post-processing algorithm and requires multiple queries per incoming element, whereas our algorithm requires O(n) time, two queries per incoming element, and one additional query after the stream terminates.

Haba et al. [15] recently provided a general framework to convert a streaming algorithm for monotone submodular functions into a streaming algorithm for general submodular functions, as long as the original algorithm satisfies certain conditions. The results in Table 1 are an application of this framework with the streaming algorithm of Badanidiyuru et al. [3], which is a (1/2 − ϵ)-approximation if the objective is monotone and submodular. The method of Haba et al. [15] requires an offline algorithm for SMCC. Unfortunately, this framework cannot be directly applied to the algorithm of Kuhnle [19] to obtain a linear-time streaming algorithm for SMCC, because guesses for OPT are required in the conversion, which would increase the time complexity by a logarithmic factor. Further discussion is provided in Appendix A.

Feldman et al. [12] provided a randomized, single-pass algorithm that achieves ratio 5.828 in expectation. This algorithm requires superlinear time and makes multiple oracle queries per element received. Further, the algorithm would have to be repeated to ensure the ratio holds with high probability. Our single-pass algorithm is deterministic and requires O(n) time.

Kuhnle [18] presented a deterministic algorithm that achieves a constant ratio in nearly linear time; this is the fastest deterministic algorithm in previous literature. Feldman et al. [13] and Han et al. [16] independently extended this algorithm to handle more general constraints with the same time complexity. Our multi-pass algorithm is based upon these algorithms; it uses the constant factor from our single-pass algorithm to speed up the basic approach and obtain ratio 4 + ϵ in O(n) time.

Preliminaries

An alternative characterization of submodularity is the following: f is submodular if and only if, for all A ⊆ B ⊆ N and x ∈ N \ B, f(A ∪ {x}) − f(A) ≥ f(B ∪ {x}) − f(B). We use the following notation for the marginal gain of adding x to a set S: Δf(x|S) := f(S ∪ {x}) − f(S). For an element x, we write f(x) for f({x}). Throughout the paper, f is a nonnegative submodular set function defined on subsets of the ground set N, and OPT denotes the value of an optimal solution of the instance of SMCC. Technically, all algorithms described in this work as streaming algorithms are semi-streaming algorithms: since the solution size may be as large as k = Ω(n), it may take linear space even to store a solution to the problem. Omitted proofs may be found in the Appendices, all of which are located in the Supplementary Material.
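To make the marginal-gain notation concrete, here is a small illustrative helper (our own sketch; the toy objective is hypothetical and chosen only because it is submodular and non-monotone):

```python
# Marginal gain Δf(x | S) = f(S ∪ {x}) − f(S): the quantity the algorithms
# in this paper threshold against. f may be any set-function oracle.

def marginal_gain(f, x, S):
    return f(S | {x}) - f(S)

# Toy objective (our invention): f(S) = |S| * (4 - |S|), defined for |S| <= 4.
# It is concave in |S|, hence submodular, and non-monotone (it decreases
# once |S| exceeds 2), so marginal gains can be negative.
def f(S):
    return len(S) * (4 - len(S))

assert marginal_gain(f, "x", set()) == 3        # f({x}) - f(∅) = 3 - 0
assert marginal_gain(f, "x", {"a", "b"}) == -1  # size 3 vs. size 2: 3 - 4
```

Negative marginal gains are exactly what distinguishes the non-monotone setting studied here from the monotone one.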

2 Single-Pass Streaming Algorithm for Smcc

In this section, a linear-time, constant-factor algorithm is described. This algorithm (QuickStream, Alg. 1) is a deterministic streaming algorithm that makes one pass through the ground set and two queries to per element received.

2.1 Description of Algorithm

As input, the algorithm receives the value oracle for f, the cardinality constraint k, an accuracy parameter ϵ, and a threshold parameter α. Two disjoint sets A and B are maintained throughout the execution of the algorithm. Elements of the ground set are processed one-by-one in an arbitrary order by the for loop on Line 6. Upon receipt of element x, if the marginal gain of x to A or to B satisfies the threshold condition on Line 8, x is added to the set to which it has the higher marginal gain; otherwise, the element is discarded. The maximum size of each set is controlled by checking whether its size exceeds an upper bound; if it does, the size is reduced by a constant factor by keeping only the more recently added elements. Finally, at the termination of the stream, let A′ and B′ be the last k elements added to A and B, respectively. Of these two sets, the one with the larger f value is returned.

1:procedure QuickStream()
2:     Input: oracle , cardinality constraint , ,
3:     
4:     
5:     ,
6:     for  element received  do
7:         
8:         if  then
9:                        
10:         if  then
11:               elements most recently added to               
12:     .
13:     .
14:     return
Algorithm 1 A single-pass algorithm for SMCC.
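A runnable sketch of the logic described above may clarify the data flow. The exact threshold condition and size bound of Alg. 1 did not survive in this copy, so the condition gain ≥ α·f(C)/k and the cap max_size below are our assumptions; the real algorithm also caches f(A) and f(B) so that only two oracle queries are needed per element:

```python
def quickstream(f, stream, k, alpha=1.0, max_size=None):
    """One-pass sketch of QuickStream (Alg. 1). A and B are the two disjoint,
    possibly infeasible candidate lists, kept in insertion order so that
    'the last k elements added' is a simple slice. Threshold and cap are
    illustrative assumptions, not the paper's exact expressions."""
    if max_size is None:
        max_size = 2 * k  # assumed cap; the paper's bound depends on eps and k
    A, B = [], []
    for x in stream:
        gain_A = f(set(A) | {x}) - f(set(A))
        gain_B = f(set(B) | {x}) - f(set(B))
        # Add x to the set it benefits more, if the gain clears the threshold.
        if gain_A >= gain_B and gain_A >= alpha * f(set(A)) / k:
            A.append(x)
        elif gain_B > gain_A and gain_B >= alpha * f(set(B)) / k:
            B.append(x)
        # Control memory: keep only the more recently added elements.
        if len(A) > max_size:
            A = A[len(A) // 2:]
        if len(B) > max_size:
            B = B[len(B) // 2:]
    A_last, B_last = set(A[-k:]), set(B[-k:])  # last k elements added to each
    return A_last if f(A_last) >= f(B_last) else B_last
```

Note that since each element is appended to at most one list, A and B remain disjoint throughout, matching the description above.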

2.2 Theoretical Guarantees

In this section, we prove the following theorem concerning the performance of QuickStream (Alg. 1).

Theorem 1.

Let , and let be an instance of SMCC. The solution returned by QuickStream satisfies

Further, QuickStream makes queries to the value oracle for , has time complexity , memory complexity , and makes one pass over the ground set.

The ratio decreases (improves) monotonically with increasing k. It is trivial to obtain a k-approximation by remembering the best singleton, so plugging this in and optimizing over α in the ratio of Theorem 1 yields a worst-case ratio of 9.298 + ϵ. In the limit as k → ∞, the ratio converges to an improved constant at the corresponding optimal choice of α.

Overview of Proof

The proof can be summarized as follows. If no deletions occurred on Line 11, then , for . Once this is established (Lemmas 2 and 3), a bound (Inequality 16) on OPT with respect to follows from the inequality , which is a consequence of the submodularity of and the fact that .

Moreover, a large fraction of the value of (resp. ) is concentrated in the last elements added to (resp. ), as shown in Lemma 4. Finally, Lemma 1 shows that only a small amount of value is lost due to deletions, although this error creates considerable complications in the proof of Lemma 2.

Proof of Theorem 1.

The time, query, and memory complexities of QuickStream follow directly from inspection of the pseudocode. The rest of the proof is devoted to proving the approximation ratio.

The first lemma (Lemma 1) establishes basic facts about the growth of the value in the sets A and B as elements are received. Lemma 1 considers a general sequence of elements that satisfies the same conditions on addition and deletion as the elements of A or B, respectively. The proof, deferred to Appendix B, depends on the condition for adding elements and uses the submodularity of f to bound the loss in value due to periodic deletions.

Lemma 1.

Let be a sequence of elements, and a sequence of sets, such that , and satisfies , and , unless , in which case , where . Then

  1. , for any .

  2. Let . Then

Notation.

Next, we define notation used throughout the proof. Let denote the respective values of variables at the beginning of the current iteration of the for loop; let denote their respective final values. Also, let ; analogously, define . Let denote the element received at the beginning of the iteration. We refer to line numbers of the pseudocode of Alg. 1. Notice that, after deletion of duplicate entries, the sequences satisfy the hypotheses of Lemma 1, with the sequence of elements in , respectively. Since many of the following lemmata are symmetric with respect to A and B, we state them generically, with variables standing in for one of the two, respectively. The notations are defined analogously to those defined above. Finally, if , define as above. Observe that

Lemma 2.

Let , such that . Let . Let . Let denote the iteration in which was processed. Then

Proof.

Since , we know that is added to the set during iteration ; therefore, by the comparison on Line 7 of Alg. 1, it holds that

(1)

If no deletion from occurs during iteration , the lemma follows from the fact that

For the rest of the proof, suppose that a deletion from does occur during iteration . For convenience, denote by the value of after the deletion from . By Inequality 17 in the proof of Lemma 1, it holds that

(2)

Hence,

(3)
(4)
(5)
(6)

where Inequality 3 follows from Lemma 1, Inequality 4 follows from Inequality 2, Inequality 5 follows from submodularity of , and Inequality 6 follows from Inequality 1. ∎

Lemma 3.

Let , such that . Then

Proof.
(7)
(8)
(9)
(10)

where Inequalities 7 and 8 follow from the submodularity of f. Inequality 9 holds by the following argument: if the element under consideration was added by the comparison on Line 7, the bound is immediate; otherwise, Lemma 2 yields the bound. Inequality 10 follows from Lemma 1. ∎

Lemma 4.

Let , and let be the set of elements most recently added to . Then .

Proof.

For simplicity of notation, let . If , the result follows since . So suppose , and let be ordered by the iteration in which each element was added to . Also, let , for , and let . By Line 8, it holds that , for each ; thus,

(11)

From Inequality 11 and the submodularity, nonnegativity of , we have

Hence

By application of Lemma 3 with and then again with , we obtain

(12)
(13)

Next, we have that

(14)
(15)

where Inequality 14 follows from the fact that and submodularity and nonnegativity of . Inequality 15 follows from the summation of Inequalities 12 and 13. By application of Property 2 of Lemma 1, we have from Inequality 15

(16)

where . Observe that the choice of on Line 4 ensures that , by Lemma 8. Therefore, by application of Lemma 4, we have from Inequality 16

2.3 Post-Processing: Qs++ and Parameter

In this section, we briefly describe a modification to QuickStream that improves its empirical performance. Instead of choosing, on Line 14, the better of the two candidate sets as the solution, we introduce a third candidate solution as follows: use an offline algorithm for SMCC in a post-processing procedure on the restricted universe of retained elements to select a set of size at most k to return. This method can only improve the objective value of the returned solution and therefore does not compromise the theoretical analysis of the preceding section. The empirical solution value can be further improved by lowering the parameter α, as this increases the size of the retained universe, potentially improving the quality of the solution found by the selected post-processing algorithm.

Parameter

To speed up any algorithm by a constant factor (at a cost of a constant factor in the approximation ratio), the following reduction may be employed. Suppose the input instance is (f, k). Define a new instance by arbitrarily grouping elements of the ground set together in blocks of size b (except for one block that may have fewer than b elements). Define the objective value of a set of blocks to be f evaluated on the union of their elements; run the algorithm on this instance to obtain a set of blocks, and let U be the corresponding set of elements. Then, arbitrarily partition U into subsets of size at most k and take the subset with the highest f value as a solution to the original instance (f, k).

For most algorithms, this reduction is not useful, since it results in a large loss in solution quality. However, with QS++, if the reduction with blocksize b is applied to the initial run of QuickStream only, and the post-processing is applied on the original (unblocked) universe, this reduction can result in a large speedup while still retaining a good quality of solution.
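The blocking reduction can be sketched in a few lines (the helper names blockify and best_chunk are ours; the chunking of the returned elements follows the prose above):

```python
def blockify(ground_set, f, b):
    """Group elements into blocks of size b (the last block may be smaller)
    and build the block objective g(T) = f(union of the blocks in T)."""
    blocks = [ground_set[i:i + b] for i in range(0, len(ground_set), b)]
    def g(T):
        return f({x for i in T for x in blocks[i]})
    return blocks, g

def best_chunk(f, chosen_blocks, blocks, k):
    """Flatten the chosen blocks, split into chunks of size at most k,
    and return the chunk with the highest f value."""
    elems = [x for i in chosen_blocks for x in blocks[i]]
    chunks = [set(elems[i:i + k]) for i in range(0, len(elems), k)]
    return max(chunks, key=f)
```

Running the base algorithm on (g, blocks) instead of (f, ground_set) cuts the stream length by a factor of b, which is the source of the constant-factor speedup.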

3 Multi-Pass Streaming Algorithm for Smcc

1:procedure MultiPassLinear()
2:     Input: oracle , cardinality constraint , , parameters , such that .
3:      Choice satisfies .
4:     while   do
5:         for   do
6:               and If is empty, break from loop.
7:              if  then
8:                                          
9:               
10:     return
Algorithm 2 A multi-pass algorithm for SMCC.

In this section, we describe a multi-pass streaming algorithm for SMCC that obtains ratio 4 + ϵ in linear time.

3.1 Description of Algorithm

The algorithm MultiPassLinear (Alg. 2) takes as input a value and a parameter that together provide lower and upper bounds on OPT, as required by Theorem 2. Two initially empty, disjoint sets are maintained throughout the algorithm. The algorithm takes multiple passes through the ground set with descending thresholds in a greedy approach, where the stepsize is determined by the accuracy parameter ϵ. An element is added to the set to which it has the higher marginal gain, as long as the gain is at least the current threshold and the set has fewer than k elements. Finally, the set with the highest f value is returned.

The initial value of the threshold allows MultiPassLinear to achieve its ratio in a small, ϵ-dependent number of passes. The linear-time (4 + ϵ)-approximation algorithm is obtained by using QuickStream to obtain the input parameters for MultiPassLinear.
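A sketch of the descending-threshold loop described above follows. The exact initial and final thresholds of Alg. 2 were lost in this copy, so they are taken as explicit inputs here; all names are ours:

```python
def multipass_linear(f, ground_set, k, tau0, tau_min, eps=0.1):
    """Multi-pass sketch of MultiPassLinear (Alg. 2): greedy passes with a
    threshold descending by a factor (1 - eps) per pass, maintaining two
    disjoint feasible candidates A and B. tau0 and tau_min stand in for the
    paper's exact bounds derived from the QuickStream output."""
    A, B = set(), set()
    tau = tau0
    while tau >= tau_min:
        for x in ground_set:
            if x in A or x in B:
                continue  # keep A and B disjoint
            gain_A = f(A | {x}) - f(A) if len(A) < k else float("-inf")
            gain_B = f(B | {x}) - f(B) if len(B) < k else float("-inf")
            # Add x to the set with the higher gain, if it clears the threshold.
            if max(gain_A, gain_B) >= tau:
                if gain_A >= gain_B:
                    A.add(x)
                else:
                    B.add(x)
        tau *= (1 - eps)
    return A if f(A) >= f(B) else B
```

Because the threshold only decreases, an element rejected in one pass may still be accepted in a later pass, which is what drives the improved ratio relative to the single-pass algorithm.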

3.2 Theoretical Guarantees

In this section, we prove the following theorem concerning the performance of MultiPassLinear (Alg. 2).

Theorem 2.

Let , and let be an instance of SMCC, with optimal solution value OPT. Suppose satisfy . The solution returned by MultiPassLinear satisfies Further, MultiPassLinear has time and query complexity , memory complexity , and makes passes over the ground set.

Proof.

To establish the approximation ratio, consider first the case in which satisfies after the first iteration of the while loop. Let be ordered by the order in which elements were added to on Line 7, let , , and let . Then and the ratio is proven.

Therefore, for the rest of the proof, suppose and immediately after the execution of the first iteration of the while loop. First, let , such that have their values at the termination of the algorithm. For the definition of and the proofs of the next two lemmata, see Appendix C. These lemmata together establish an upper bound on in terms of the gains of elements added to and .

Lemma 5.
Lemma 6.

Applying Lemma 6 with and separately with and summing the resulting inequalities yields

Thus,

from which the result follows. ∎

4 Empirical Evaluation

(a) Normalized objective vs. k
(b) Normalized queries vs. k
(c) Memory vs. k
(d) Normalized objective vs. ϵ
(e) Normalized queries vs. ϵ
(f) Memory vs. ϵ
Figure 1: Evaluation of single-pass streaming algorithms (Set 1) on ca-AstroPh , in terms of objective value normalized by the standard greedy value, total number of queries normalized by , and the maximum memory used by each algorithm normalized by . The legend shown in (a) applies to all subfigures.
(a) Objective vs. k (ca-AstroPh)
(b) Objective vs. k (web-Google)
(c) Queries vs. k (web-Google)
Figure 2: Evaluation of algorithms (Set 2) on ca-AstroPh and web-Google , in terms of objective value normalized by the standard greedy value and total number of queries normalized by . The legend shown in (a) applies to all subfigures.

In this section, we validate our proposed algorithms in the context of single-pass streaming algorithms and other algorithms for SMCC that require nearly linear time. The source code and scripts to reproduce all plots are given in the Supplementary Material.

We performed two sets of experiments; the first set evaluates single-pass algorithms, while the second set evaluates algorithms with low overall time complexity.

Set 1: Single-Pass Algorithms

In this set of experiments, we compared our single-pass algorithm with the following algorithms:

  • Algorithm 2 (FKK) of Feldman, Karbasi, and Kazemi [12]; this algorithm achieves ratio in expectation and has time complexity.

  • Algorithm 1 (AEFNS) of Alaluf, Ene, Feldman, Nguyen, and Suh [1]; this algorithm achieves ratio and has time complexity , where are the approximation ratio and time complexity, respectively, of post-processing algorithm . The selection of is discussed below.

  • Our Algorithm 1 (QS) with parameters and , as described in Section 2.

  • Our Algorithm 1 (QS++) with various blocksize parameters and with parameter , as described in Section 2.3. The post-processing algorithm is described below.

AEFNS and QS++ used the same post-processing algorithm MultiPassLinear with input parameters as follows: QS++ used its solution value (before post-processing) and its approximation ratio for and , respectively. AEFNS does not have a ratio before post-processing, so the maximum singleton value and were used for and , respectively. Both algorithms used their respective value for accuracy parameter for the same parameter in MultiPassLinear.

Set 2: Algorithms with Low Time Complexity

In this set, we compared our algorithms with state-of-the-art algorithms with lowest time complexity in prior literature:

  • Algorithm 4 (BFS) of Buchbinder, Feldman, and Schwartz [6], the fastest randomized algorithm, which achieves expected ratio with time complexity.

  • Algorithm 2 (K19) of Kuhnle [18], the fastest deterministic algorithm in prior literature, which achieves ratio with time complexity.

  • Algorithm 2 (K21) of Kuhnle [20], the fastest algorithm with nearly optimal adaptivity, which achieves expected ratio in time.

  • Our Algorithm 2 (MPL), with an initial run of QS to determine input parameters and .

We also evaluated the performance of QS++ in this context. We remark that MPL and QS++ are similar in that each runs QS followed by MultiPassLinear; the difference is that MPL runs MultiPassLinear on the entire ground set (which results in the improved approximation ratio of 4 + ϵ), whereas QS++ runs MultiPassLinear only on its restricted ground set (which does not improve the approximation ratio of QS).

All algorithms used lazy evaluations whenever possible, as follows. Suppose the marginal gain Δf(x|S′) has already been computed, and the algorithm needs to check whether Δf(x|S) ≥ τ for some S ⊇ S′ and threshold τ. If Δf(x|S′) < τ, the fresh evaluation may be safely skipped, since Δf(x|S) ≤ Δf(x|S′) by the submodularity of f. Strictly speaking, this means that MultiPassLinear is not run as a streaming algorithm in our evaluation. However, using this optimization during post-processing does not compromise the streaming nature of the single-pass algorithms. Unless otherwise specified, the accuracy parameter of each algorithm is set to its default value. For QS++, no attempt was made to optimize over the parameters α and b, although the choice of α is important to obtain a large enough set of elements to improve the empirical objective value through post-processing.
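The lazy-evaluation rule above can be packaged as a small guard around the oracle (a sketch; it is sound as long as the candidate set only grows and the threshold never increases between checks of the same element):

```python
# Lazy evaluation via submodularity: if an element's previously computed
# gain (with respect to an earlier, smaller set) is already below the
# threshold, its current gain can be no larger, so the oracle query
# is skipped entirely.

def lazy_threshold_check(f, x, S, tau, cache):
    stale = cache.get(x)
    if stale is not None and stale < tau:
        return False  # skipped: current gain <= stale gain < tau
    gain = f(S | {x}) - f(S)  # two oracle queries
    cache[x] = gain
    return gain >= tau
```

In threshold-greedy passes such as MultiPassLinear, most elements fail the check repeatedly, so this guard eliminates a large fraction of oracle queries in practice.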

Applications and Datasets

The algorithms were evaluated on two applications: cardinality-constrained maximum cut and revenue maximization on social networks. The maximum cut objective is defined as follows: given a graph G = (V, E) with edge weight function w, f(S) is the total weight of edges with exactly one endpoint in S. A variety of network topologies from the Stanford Large Network Dataset Collection [21] were used. For more details on the applications and datasets, see Appendix D.
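For reference, the cut objective can be computed directly from an edge list; the sketch below also illustrates its non-monotonicity (the full vertex set has cut value zero):

```python
def cut_value(edges, S):
    """Weighted cut objective f(S): sum of w(u, v) over edges with exactly
    one endpoint in S. Nonnegative and submodular, but not monotone."""
    S = set(S)
    return sum(w for (u, v, w) in edges if (u in S) != (v in S))

# Triangle with unit weights: any single vertex cuts two edges,
# while the whole vertex set cuts none.
edges = [("a", "b", 1.0), ("b", "c", 1.0), ("a", "c", 1.0)]
assert cut_value(edges, {"a"}) == 2.0
assert cut_value(edges, {"a", "b"}) == 2.0
assert cut_value(edges, {"a", "b", "c"}) == 0.0
```

This non-monotonicity is why the evaluated instances require algorithms for general (not just monotone) submodular maximization.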

Results: Set 1

Results for the objective value of maximum cut, total queries to , and memory are shown for the single-pass algorithms in Fig. 1 as the cardinality constraint and accuracy parameter vary. Results on other datasets and revenue maximization were analogous and are given in Appendix D.

In summary, QS++ with exhibited the best objective value of the streaming algorithms and sacrificed less than of the greedy value on all instances (Fig. 1(a)). Further, QS++ with achieved better than of the greedy value while using less than queries; and the variants of QS used more than an order of magnitude fewer queries than the other streaming algorithms (Fig. 1(b)). In addition, QS++ exhibited better robustness to the accuracy parameter than AEFNS (Fig. 1(d)), although the query complexity of AEFNS improved drastically with larger (Fig. 1(e)).

Results: Set 2

Results for the objective value of maximum cut and total queries to are shown for the algorithms in Fig. 2 as the cardinality constraint varies. Results on other datasets and revenue maximization were analogous and are given in Appendix D.

In summary, all of K19, K21, MultiPassLinear, and QS++ () returned nearly the greedy objective value, while BFS and QS++ () returned lower objective values (Figs. 2(a), 2(b)). In terms of total queries, our algorithms had the lowest, closely followed by K19. K21 and BFS required substantially more queries (Fig. 2(c)).

Discussion

QS++ with returned nearly the greedy value on all instances while using queries on instances with small and at most queries, the most efficient of any prior algorithm. Larger values of obtained further speedup (to less than queries) at the cost of objective value. MultiPassLinear improved the objective value of QS++ slightly for larger values and used roughly the same number of queries as QS++ with ; however, MultiPassLinear requires multiple passes through the ground set.

5 Concluding Remarks

In this work, we have presented two deterministic, linear-time algorithms for SMCC, which are the first linear-time algorithms for this problem that yield a constant approximation ratio with high probability. The first algorithm is a single-pass streaming algorithm with ratio 9.298 + ϵ; the second uses the output of the first algorithm in a multi-pass streaming framework to obtain ratio 4 + ϵ. A natural question for future work is whether the ratio of 4 + ϵ could be improved in linear time; currently, the best ratio achieved by a deterministic algorithm requires substantially superlinear time [5].

References

  • Alaluf et al. [2020] Naor Alaluf, Alina Ene, Moran Feldman, Huy L. Nguyen, and Andrew Suh. Optimal streaming algorithms for submodular maximization with cardinality constraints. In 47th International Colloquium on Automata, Languages, and Programming (ICALP), 2020.
  • Badanidiyuru and Vondrák [2014] Ashwinkumar Badanidiyuru and Jan Vondrák. Fast algorithms for maximizing submodular functions. In ACM-SIAM Symposium on Discrete Algorithms (SODA), 2014.
  • Badanidiyuru et al. [2014] Ashwinkumar Badanidiyuru, Baharan Mirzasoleiman, Amin Karbasi, and Andreas Krause. Streaming Submodular Maximization: Massive Data Summarization on the Fly. In ACM SIGKDD Knowledge Discovery and Data Mining (KDD), pages 671–680, 2014.
  • Buchbinder and Feldman [2016] Niv Buchbinder and Moran Feldman. Constrained Submodular Maximization via a Non-symmetric Technique. Mathematics of Operations Research, 44(3), 2016.
  • Buchbinder and Feldman [2018] Niv Buchbinder and Moran Feldman. Deterministic Algorithms for Submodular Maximization. ACM Transactions on Algorithms, 14(3), 2018.
  • Buchbinder et al. [2015] Niv Buchbinder, Moran Feldman, and Roy Schwartz. Comparing Apples and Oranges: Query Tradeoff in Submodular Maximization. In ACM-SIAM Symposium on Discrete Algorithms (SODA), 2015.
  • Chakrabarti and Kale [2015] Amit Chakrabarti and Sagar Kale. Submodular maximization meets streaming: matchings, matroids, and more. Mathematical Programming, 154(1-2):225–247, 2015.
  • Chekuri et al. [2015] Chandra Chekuri, Shalmoli Gupta, and Kent Quanrud. Streaming Algorithms for Submodular Function Maximization. In International Colloquium on Automata, Languages, and Programming (ICALP), 2015.
  • Elenberg et al. [2017] Ethan R. Elenberg, Alexandros G. Dimakis, Moran Feldman, and Amin Karbasi. Streaming Weak Submodularity: Interpreting Neural Networks on the Fly. In Advances in Neural Information Processing Systems (NeurIPS), 2017.
  • Elenberg et al. [2018] Ethan R. Elenberg, Rajiv Khanna, Alexandros G. Dimakis, and Sahand Negahban. Restricted strong convexity implies weak submodularity. Annals of Statistics, 46(6B):3539–3568, 2018.
  • Fahrbach et al. [2019] Matthew Fahrbach, Vahab Mirrokni, and Morteza Zadimoghaddam. Submodular Maximization with Nearly Optimal Approximation, Adaptivity, and Query Complexity. In ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 255–273, 2019.
  • Feldman et al. [2018] Moran Feldman, Amin Karbasi, and Ehsan Kazemi. Do less, Get More: Streaming Submodular Maximization with Subsampling. In Advances in Neural Information Processing Systems (NeurIPS), 2018.
  • Feldman et al. [2020] Moran Feldman, Christopher Harshaw, and Amin Karbasi. Simultaneous Greedys: A Swiss Army Knife for Constrained Submodular Maximization. 2020.
  • Gillenwater et al. [2012] Jennifer Gillenwater, Alex Kulesza, and Ben Taskar. Near-Optimal MAP Inference for Determinantal Point Processes. In Advances in Neural Information Processing Systems (NeurIPS), 2012.
  • Haba et al. [2020] Ran Haba, Ehsan Kazemi, Moran Feldman, and Amin Karbasi. Streaming Submodular Maximization under a k-Set System Constraint. In International Conference on Machine Learning (ICML), 2020.
  • Han et al. [2020] Kai Han, Zongmai Cao, Shuang Cui, and Benwei Wu. Deterministic Approximation for Submodular Maximization over a Matroid in Nearly Linear Time. In NeurIPS, pages 1–12, 2020.
  • Hartline et al. [2008] Jason Hartline, Vahab S. Mirrokni, and Mukund Sundararajan. Optimal marketing strategies over social networks. International Conference on World Wide Web (WWW), pages 189–198, 2008.
  • Kuhnle [2019] Alan Kuhnle. Interlaced Greedy Algorithm for Maximization of Submodular Functions in Nearly Linear Time. In Advances in Neural Information Processing Systems (NeurIPS), 2019.
  • Kuhnle [2021a] Alan Kuhnle. Quick Streaming Algorithms for Maximization of Monotone Submodular Functions in Linear Time. In Artificial Intelligence and Statistics (AISTATS), 2021a.
  • Kuhnle [2021b] Alan Kuhnle. Nearly Linear-Time, Parallelizable Algorithms for Non-Monotone Submodular Maximization. In AAAI Conference on Artificial Intelligence, 2021b.
  • Leskovec and Krevl [2020] Jure Leskovec and Andrej Krevl. SNAP Datasets: Stanford Large Network Dataset Collection. http://snap.stanford.edu/data, June 2020.
  • Libbrecht et al. [2017] Maxwell W Libbrecht, Jeffrey A Bilmes, and William Stafford. Choosing non-redundant representative subsets of protein sequence data sets using submodular optimization. Proteins: Structure, Function, and Bioinformatics, (July 2017):454–466, 2017.
  • Mirzasoleiman et al. [2015] Baharan Mirzasoleiman, Ashwinkumar Badanidiyuru, Amin Karbasi, Jan Vondrak, and Andreas Krause. Lazier Than Lazy Greedy. In AAAI Conference on Artificial Intelligence (AAAI), 2015.
  • Mirzasoleiman et al. [2016] Baharan Mirzasoleiman, Ashwinkumar Badanidiyuru, and Amin Karbasi. Fast Constrained Submodular Maximization : Personalized Data Summarization. In International Conference on Machine Learning (ICML), 2016.
  • Mirzasoleiman et al. [2018] Baharan Mirzasoleiman, Stefanie Jegelka, and Andreas Krause. Streaming Non-Monotone Submodular Maximization: Personalized Video Summarization on the Fly. In AAAI Conference on Artificial Intelligence, 2018.
  • Mislove et al. [2008] Alan Mislove, Hema Swetha Koppula, Krishna P Gummadi, Peter Druschel, and Bobby Bhattacharjee. Growth of the Flickr Social Network. In First Workshop on Online Social Networks, 2008.

Appendix A Additional Discussion of Related Work

Haba et al. [15] recently provided a general framework to convert a streaming algorithm for monotone, submodular functions into a streaming algorithm for general submodular functions, as long as the original algorithm satisfies certain conditions, namely:

Definition 1 ([15]).

Consider a data stream algorithm for maximizing a non-negative submodular function subject to a constraint . We say that such an algorithm is an -approximation algorithm, for some and , if it returns two sets , such that , and for all , we have

In the above definition, is a set independence system; in our work, is always the -uniform matroid: . The algorithm of Kuhnle [19] returns a set ; it does hold that , where and satisfy the above definition with and , but is not stored by the algorithm. Hence, the technique as outlined in Haba et al. [15] and Chekuri et al. [8] must be followed to create a tradeoff between the size of and ; since must be set to , logarithmically many guesses for OPT must be employed, which increases the runtime of Kuhnle [19] by a logarithmic factor. Furthermore, the method of Haba et al. [15] requires an offline algorithm for SMCC. Since no linear-time algorithm exists for SMCC, one cannot obtain a linear-time algorithm by application of the framework of Haba et al. [15] to the streaming algorithm of Kuhnle [19].
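The OPT-guessing technique referenced above can be illustrated with a generic single-pass sketch: one threshold instance is maintained per guess of OPT on a geometric grid, and only guesses consistent with the largest singleton value seen so far are kept alive. This is an illustrative sketch, not the algorithm of Kuhnle [19] or the framework of Haba et al. [15]; the function names and the threshold rule (marginal gain at least the guess divided by 2k) are placeholders.

```python
import math

def stream_with_opt_guesses(stream, f, k, epsilon):
    """Generic sketch of logarithmic OPT-guessing in a streaming algorithm.

    One partial solution is kept per guess tau = (1 + epsilon)**i; since OPT
    lies in [m, k*m] for the largest singleton value m seen so far, only
    O(log(k) / epsilon) guesses are alive at any time.
    """
    instances = {}   # exponent i -> partial solution for guess (1+epsilon)**i
    m = 0.0          # largest singleton value seen so far
    for e in stream:
        m = max(m, f({e}))
        if m == 0:
            continue
        lo = math.ceil(math.log(m, 1 + epsilon))
        hi = math.floor(math.log(k * m, 1 + epsilon))
        # Discard instances whose guess fell outside the live window.
        instances = {i: S for i, S in instances.items() if lo <= i <= hi}
        for i in range(lo, hi + 1):
            tau = (1 + epsilon) ** i
            S = instances.setdefault(i, set())
            # Placeholder threshold rule: accept e if its marginal gain
            # with respect to S clears tau / (2k).
            if len(S) < k and f(S | {e}) - f(S) >= tau / (2 * k):
                S.add(e)
    return max(instances.values(), key=f, default=set())
```

Because the live window of exponents is recomputed from m on every arrival, the number of stored instances (and hence queries per element) stays logarithmic rather than growing with the stream.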

Appendix B Proof of Lemma 1

Claim 1.

For any , , if , then .

Proof.

Follows directly from the inequality for . ∎

Proof of Property 1 of Lemma 1.

If no deletion is made during iteration of the for loop, then any change in is clearly nonnegative. So suppose deletion of set from occurs on line 11 of Alg. 1 during this iteration. Observe that , because the deletion is triggered by the addition of to . In addition, at some iteration of the for loop, it holds that . If , the lemma follows by submodularity and the condition to add to . Therefore, for the rest of the proof, suppose . From the beginning of iteration to the beginning of iteration , there have been additions and no deletions to , which add to precisely the elements in .

It holds that

where inequality (a) follows from submodularity and nonnegativity of , inequality (b) follows from the fact that each addition from to increases the value of by a factor of at least , and inequality (c) follows from Claim 1. Therefore

(17)

Next,

(18)

where inequality (d) follows from submodularity; inequality (e) is by the condition to add to on line 8; and inequality (f) holds since and . Finally, using Inequalities (17) and (18) as indicated below, we have

where the last inequality follows since and . ∎

Proof of Property 2 of Lemma 1.

Next, we bound the total value of and lost from deletions throughout the run of the algorithm.

Lemma 7.

Let .

Proof.

Observe that may be written as the union of pairwise disjoint sets, each of which is size and was deleted on line 11 of Alg. 1. Suppose there were sets deleted from ; write , where each is deleted on line 11, ordered such that implies was deleted after (the reverse order in which they were deleted); finally, let .

Claim 2.

Let . Then .

Proof.

There are at least elements added to , and exactly one deletion event, during the period starting when until . Moreover, each addition except possibly one (corresponding to the deletion event) increases by a factor of at least . Hence, by Lemma 1 and Claim 1, . ∎

By Claim 2, for any , . Thus,

(Submodularity, Nonnegativity of )
(Claim 2)
(Sum of geometric series)

B.1 Justification of Choice of

Lemma 8.

Let , and let . Choose , and let . Then