1 Introduction
As a natural extension of finite sets (equivalently, $\{0,1\}^n$), optimization of discrete functions on the integer lattice $\mathbb{Z}_{\ge 0}^n$ has received attention recently (Alon et al., 2012; Demaine et al., 2014; Soma and Yoshida, 2015). As an example, consider the placement of sensors in a water network (Krause et al., 2008a); in the set version, each sensor takes a value in $\{0,1\}$, which corresponds to whether the sensor was placed. In the lattice version of Soma and Yoshida (2015), each sensor has a power level in $\{0,1,\ldots,b\}$, to which the sensitivity of the sensor is correlated. As a second example, consider the influence maximization problem (Kempe et al., 2003); instead of the binary seeding of a user, the lattice version enables partial incentives or discounts to be used (Demaine et al., 2014).
Although many results from the optimization of submodular set functions have been generalized to the integer lattice (Soma and Yoshida, 2015, 2016; Ene and Nguyen, 2016), many objective functions arising from applications are not submodular (Bian et al., 2017b; Lin et al., 2017; Das and Kempe, 2011; Horel and Singer, 2016). In this work, we consider maximization subject to a cardinality constraint (MCC), where the function to be maximized may be nonsubmodular. Let $k \in \mathbb{Z}_{>0}$ (the budget), $\mathbf{b} \in \mathbb{Z}_{>0}^n$ (the box), and let $f$ (the objective) be a nonnegative and monotonic$^1$ ($^1$For all $\mathbf{x} \le \mathbf{y}$ (coordinatewise), $f(\mathbf{x}) \le f(\mathbf{y})$.) function with $f(\mathbf{0}) = 0$. Then determine
(MCC)  $\max f(\mathbf{x})$ subject to $\|\mathbf{x}\|_1 \le k$ and $\mathbf{0} \le \mathbf{x} \le \mathbf{b}$,
where $\|\mathbf{x}\|_1 = \sum_{i=1}^{n} \mathbf{x}(i)$, and $\mathbf{x} \le \mathbf{b}$ is taken coordinatewise.
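To make Problem MCC concrete, the following is a minimal sketch in Python of the feasibility test and an exact brute-force solver for tiny instances; the toy objective and all names are our own illustrative choices, not the paper's.

```python
from itertools import product

def feasible(x, k, b):
    """x is feasible for Problem MCC if 0 <= x <= b coordinatewise and ||x||_1 <= k."""
    return all(0 <= xi <= bi for xi, bi in zip(x, b)) and sum(x) <= k

def brute_force_mcc(f, k, b):
    """Enumerate the whole box (exponential; for intuition on tiny instances only)."""
    candidates = [x for x in product(*(range(bi + 1) for bi in b)) if sum(x) <= k]
    best = max(candidates, key=f)
    return best, f(best)

# toy monotone objective with f(0) = 0: each coordinate saturates after 2 copies
f = lambda x: sum(min(xi, 2) for xi in x)
x_star, opt = brute_force_mcc(f, k=3, b=(2, 2, 2))
```

Of course, the enumeration above is exactly what the algorithms in this paper avoid; it is included only to pin down the feasible region.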
Since the integer lattice may be represented as a multiset of size $\|\mathbf{b}\|_1$, one may use results for Problem MCC with nonsubmodular set functions. In particular, the tight ratio $\frac{1}{\alpha}\left(1 - e^{-\alpha\gamma_s}\right)$ of the standard greedy algorithm by Bian et al. (2017b), where the curvature $\alpha$ and submodularity ratio $\gamma_s$ are discussed below, applies with the lattice adaptation of the standard greedy algorithm (StandardGreedy) given in Alg. 1. However, this approach requires $O(nk)$ queries of $f$, which is not polynomial in the input size$^2$ ($^2$The input is considered to be the vector $\mathbf{b}$ of length $n$ and the number $k$ represented in binary (w.l.o.g. each component of $\mathbf{b}$ is at most $k$); the function $f$ is regarded as an oracle and hence does not contribute to the input size.). Even for applications with set functions, this many queries may be prohibitive, and researchers (Leskovec et al., 2007; Mirzasoleiman et al., 2015; Badanidiyuru and Vondrák, 2014) have sought ways to speed up the StandardGreedy algorithm. Unfortunately, these approaches rely upon the submodularity of $f$, and there has been no analogous effort for nonsubmodular functions. To quantify the nonsubmodularity of a lattice function $f$, we generalize the following quantities defined for set functions to the lattice: (1) the diminishing-return (DR) ratio $\gamma_d$ of $f$ (Lehmann et al., 2006), (2) the submodularity ratio $\gamma_s$ of $f$ (Das and Kempe, 2011), and (3) the generalized curvature $\alpha$ of $f$ (Bian et al., 2017b). Our main contributions are:

To speed up StandardGreedy (Alg. 1), we adapt the threshold greedy framework of Badanidiyuru and Vondrák (2014) to nonsubmodular functions; this yields an algorithm (ThresholdGreedy, Alg. 2) with approximation ratio $1 - e^{-\gamma_d\gamma_s} - \eta$, for any $\eta > 0$: the first approximation algorithm with polynomial query complexity for Problem MCC on the lattice. The query complexity of the StandardGreedy algorithm is improved from $O(nk)$ to $O\!\left(\frac{n \log(\|\mathbf{b}\|_\infty) \log(k/\delta)}{\epsilon}\right)$, where $\epsilon, \delta$ are parameters of ThresholdGreedy.

We introduce the novel approximation algorithm FastGreedy, which combines elements of StandardGreedy and ThresholdGreedy to improve the performance ratio to $1 - e^{-\beta\gamma_s} - \eta$, where $\beta$ is at least $\gamma_d$ and in many cases$^3$ is determined by the algorithm ($^3$When the solution $\mathbf{x}$ returned by FastGreedy satisfies $\|\mathbf{x}\|_1 = k$. Otherwise, an upper bound on $\beta$ is returned.). Furthermore, FastGreedy exploits the nonsubmodularity of the function to decrease its runtime in practice without sacrificing its performance guarantee, while maintaining the same worst-case query complexity as ThresholdGreedy up to a constant factor.

To demonstrate our algorithms, we introduce a general budget allocation problem for viral marketing, which unifies submodular influence maximization (IM) under the independent cascade model (Kempe et al., 2003) with the nonsubmodular boosting problem (Lin et al., 2017) and in addition allows partial incentives. We prove a lower bound on the DR and submodularity ratios for this unified framework, and we experimentally validate our proposed algorithms in this setting.
2 Related Work
The study of optimization of submodular set functions is too extensive to give a comprehensive overview. On the integer lattice, there have been many efforts to maximize submodular functions, e.g., Soma and Yoshida (2017); Bian et al. (2017a); Gottschalk and Peis (2016). To the best of our knowledge, we are the first to study the optimization of nonsubmodular functions on the integer lattice. In the following discussion, we primarily restrict our attention to the maximization of monotonic, submodular lattice functions subject to a cardinality constraint and the maximization of nonsubmodular set functions.
Reduction of Ene and Nguyen (2016).
Ene and Nguyen (2016) have given a polynomial-time reduction from the lattice to a set that enables unified translation of submodular optimization strategies to DR-submodular (i.e., DR ratio $\gamma_d = 1$; see Section 3) functions on the integer lattice. Since this translation is designed for DR-submodular functions, it does not give a polynomial-time algorithm for Problem MCC when $f$ is nonsubmodular. Specifically, for the case of maximization subject to a cardinality constraint, Ene and Nguyen (2016) rely upon the threshold greedy algorithm for submodular set functions (Badanidiyuru and Vondrák, 2014), which does not work for nonsubmodular functions without modifications such as the ones in our paper.
Threshold Greedy and Lattice Optimization.
To speed up StandardGreedy for submodular set functions, Badanidiyuru and Vondrák (2014) introduced the threshold greedy framework, which reduces the query complexity of maximizing a submodular set function under a cardinality constraint from $O(nk)$ function evaluations to $O\!\left(\frac{n}{\epsilon}\log\frac{n}{\epsilon}\right)$ while maintaining the approximation ratio $1 - 1/e - \epsilon$, for any $\epsilon > 0$. Soma and Yoshida (2016) adapted the threshold approach to efficiently maximizing DR-submodular functions on the integer lattice and provided approximation algorithms. Other adaptations of the threshold approach of Badanidiyuru and Vondrák (2014) to the integer lattice include Ene and Nguyen (2016); Soma and Yoshida (2015). To the best of our knowledge, in this work we make the first use of the threshold framework for nonsubmodular functions.
Our ThresholdGreedy algorithm is an adaptation of the algorithm of Soma and Yoshida (2016) for DR-submodular maximization to nonsubmodular functions. The nonsubmodularity requires new analysis, in the following specific ways: (1) during the binary search phase, we cannot guarantee that we find the maximum number of copies whose average gain exceeds the threshold $\tau$; hence, we must settle for any number of copies whose average gain exceeds $\tau$, while ensuring that the gain of adding one additional copy falls below $\tau$. (2) To prove the performance ratio, we require a combination of the DR ratio $\gamma_d$ and the submodularity ratio $\gamma_s$. (3) The stopping condition (smallest threshold value) differs as a result of the nonsubmodularity; the proof that this condition is sufficient requires another application of the DR ratio.
Optimization of Non-Submodular Set Functions.
For nonsubmodular set functions, the submodularity ratio was introduced by Das and Kempe (2011); we generalize it to lattice functions in Section 3, where we also show that the DR ratio satisfies $\gamma_d \le \gamma_s$. Bian et al. (2017b) introduced the generalized curvature $\alpha$ of a set function, a concept analogous to the DR ratio, as we discuss in Section 3. Bian et al. (2017b) extended the analysis of Conforti and Cornuéjols (1984) to nonsubmodular set functions; together with the submodularity ratio $\gamma_s$, they proved StandardGreedy has the tight approximation ratio $\frac{1}{\alpha}\left(1 - e^{-\alpha\gamma_s}\right)$ under a cardinality constraint.
The DR ratio was introduced by Lehmann et al. (2006) for the valuation functions in the maximum allocation problem for a combinatorial auction; if each valuation function has DR ratio at least $\gamma$, the maximum allocation problem is a special case of maximizing a set function with DR ratio $\gamma$ over a matroid, for which Lehmann et al. (2006) employ the StandardGreedy algorithm (for matroids).
Many other notions of nonsubmodularity of set functions have been introduced (Krause et al., 2008b; Horel and Singer, 2016; Borodin et al., 2014; Feige and Izsak, 2013). For a comprehensive discussion of the relation of these and additional notions to the submodularity ratio $\gamma_s$, we refer the reader to Bian et al. (2017b).
3 Non-Submodularity on the Lattice
In this section, we define the lattice versions of the DR ratio $\gamma_d$, submodularity ratio $\gamma_s$, and generalized curvature $\alpha$, which are used in the approximation ratios proved in Section 4.
Notations.
For each coordinate $i \in \{1, \ldots, n\}$, let $\chi_i$ be the unit vector with $1$ in the coordinate corresponding to $i$, and $0$ elsewhere. We write $\delta_i f(\mathbf{x})$ for the marginal gain $f(\mathbf{x} + \chi_i) - f(\mathbf{x})$. Given a box $\mathbf{b}$ in the integer lattice $\mathbb{Z}_{\ge 0}^n$, let the set of all nonnegative, monotonic lattice functions with $f(\mathbf{0}) = 0$ and domain $\{\mathbf{x} : \mathbf{0} \le \mathbf{x} \le \mathbf{b}\}$ be denoted $\mathcal{F}_{\mathbf{b}}$. It is often useful to think of a vector $\mathbf{x}$ as a multiset containing $\mathbf{x}(i)$ copies of $i$, where $\mathbf{x}(i)$ is the value of $\mathbf{x}$'s coordinate corresponding to $i$. We use the notation $\{\mathbf{x}\}$ to represent the multiset corresponding to the vector $\mathbf{x}$. Finally, we define $\mathbf{x} \vee \mathbf{y}$ and $\mathbf{x} \wedge \mathbf{y}$ to be the vectors with the coordinatewise maximum and minimum, respectively. Rather than an algorithm taking an explicit description of the function $f$ as input, we consider the function as an oracle and measure the complexity of an algorithm in terms of the number of oracle calls or queries.
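The vector notation above can be sketched in code; the helper names here are ours.

```python
def unit(i, n):
    """chi_i: 1 in coordinate i, 0 elsewhere."""
    return tuple(1 if j == i else 0 for j in range(n))

def marginal(f, x, i):
    """The marginal gain of one more copy of i: f(x + chi_i) - f(x)."""
    y = list(x); y[i] += 1
    return f(tuple(y)) - f(tuple(x))

def as_multiset(x):
    """View x as a multiset: x(i) copies of element i."""
    return {i: xi for i, xi in enumerate(x) if xi > 0}

def join(x, y):
    """Coordinatewise maximum of x and y."""
    return tuple(max(a, c) for a, c in zip(x, y))

def meet(x, y):
    """Coordinatewise minimum of x and y."""
    return tuple(min(a, c) for a, c in zip(x, y))
```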
We begin with the related concepts of DR ratio and generalized curvature.
Definition 1.
Let $f \in \mathcal{F}_{\mathbf{b}}$. The diminishing-return (DR) ratio of $f$, $\gamma_d(f)$, is the maximum value in $[0,1]$ such that for any $i$, and for all $\mathbf{x} \le \mathbf{y}$ such that $\mathbf{y} + \chi_i \le \mathbf{b}$,
$$\delta_i f(\mathbf{x}) \ge \gamma_d\, \delta_i f(\mathbf{y}).$$
Definition 2.
Let $f \in \mathcal{F}_{\mathbf{b}}$. The generalized curvature of $f$, $\alpha(f)$, is the minimum value in $[0,1]$ such that for any $i$, and for all $\mathbf{x} \le \mathbf{y}$ such that $\mathbf{y} + \chi_i \le \mathbf{b}$,
$$\delta_i f(\mathbf{y}) \ge (1 - \alpha)\, \delta_i f(\mathbf{x}).$$
The DR ratio extends the notion of DR-submodularity of Soma and Yoshida (2015), which is obtained as the special case $\gamma_d = 1$. Generalized curvature for set functions was introduced in Bian et al. (2017b). Notice that $\gamma_d$ results in lower bounds on the marginal gain $\delta_i f(\mathbf{x})$ for $\mathbf{x} \le \mathbf{y}$, while $\alpha$ results in upper bounds on the same quantity:
$$\gamma_d\, \delta_i f(\mathbf{y}) \le \delta_i f(\mathbf{x}) \le \frac{\delta_i f(\mathbf{y})}{1 - \alpha},$$
whenever $\mathbf{x} \le \mathbf{y}$, $\mathbf{y} + \chi_i \le \mathbf{b}$, and the above expressions are defined. Next, we generalize the submodularity ratio of Das and Kempe (2011) to the integer lattice.
Definition 3.
Let $f \in \mathcal{F}_{\mathbf{b}}$. The submodularity ratio of $f$, $\gamma_s(f)$, is the maximum value in $[0,1]$ such that for all $\mathbf{x}, \mathbf{y}$ such that $\mathbf{x} \vee \mathbf{y} \le \mathbf{b}$,
$$\sum_{i \in \{\mathbf{y}\}} \delta_i f(\mathbf{x}) \ge \gamma_s \left( f(\mathbf{x} \vee \mathbf{y}) - f(\mathbf{x}) \right),$$
where the sum counts each copy of $i$ in the multiset $\{\mathbf{y}\}$ separately.
The next proposition, proved in Appendix C, shows the relationship between DR ratio and submodularity ratio.
Proposition 1.
For all $f \in \mathcal{F}_{\mathbf{b}}$, $\gamma_d(f) \le \gamma_s(f)$.
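On instances small enough to enumerate, both ratios can be computed directly, which also illustrates Proposition 1. The estimators below follow our reading of Definitions 1 and 3 (the multiset sum in the submodularity ratio is one plausible lattice reading), and the toy function in the test is our own.

```python
from itertools import product

def marg(f, x, i):
    y = list(x); y[i] += 1
    return f(tuple(y)) - f(tuple(x))

def dr_ratio(f, b):
    """gamma_d: largest gamma with marg(x, i) >= gamma * marg(y, i)
    for all x <= y with y + chi_i <= b, by exhaustive enumeration."""
    box = list(product(*(range(bi + 1) for bi in b)))
    best = 1.0
    for x in box:
        for y in box:
            if all(a <= c for a, c in zip(x, y)):
                for i in range(len(b)):
                    if y[i] + 1 <= b[i] and marg(f, y, i) > 0:
                        best = min(best, marg(f, x, i) / marg(f, y, i))
    return best

def submod_ratio(f, b):
    """gamma_s: largest gamma with the sum of single-copy gains at x, taken over
    the multiset of copies in y - x, at least gamma * (f(y) - f(x)), for x <= y."""
    box = list(product(*(range(bi + 1) for bi in b)))
    best = 1.0
    for x in box:
        for y in box:
            if all(a <= c for a, c in zip(x, y)) and f(y) - f(x) > 0:
                total = sum((y[i] - x[i]) * marg(f, x, i) for i in range(len(b)))
                best = min(best, total / (f(y) - f(x)))
    return best
```

For instance, for the supermodular toy function $f(\mathbf{x}) = x_1 + x_2 + x_1 x_2$ on the box $(1,1)$, these estimators give $\gamma_d = 1/2$ and $\gamma_s = 2/3$, consistent with $\gamma_d \le \gamma_s$.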
In the rest of this work, we parameterize functions by the nonsubmodularity ratios defined above, partitioning $\mathcal{F}_{\mathbf{b}}$ into sets of functions sharing the same parameters.
Greedy versions.
In the proofs of this paper, the full power of the parameters defined above is not required. It suffices to consider restricted versions, where the maximization is taken over only those vectors which appear in the ratio proofs. We define these greedy versions in Appendix B and include more discussion in Remark 1 of Section 4.1.
4 Algorithms
4.1 The ThresholdGreedy Algorithm
In this section, we present the algorithm ThresholdGreedy (Alg. 2), which approximates Problem MCC with ratio $1 - e^{-\gamma_d\gamma_s} - \eta$, for any $\eta > 0$, with polynomial query complexity. Appendix D contains the proofs of all lemmas, claims, and omitted details from this section.
Description.
ThresholdGreedy operates by considering decreasing thresholds $\tau$ for the marginal gain in its outer for loop; for each threshold $\tau$, the algorithm adds on line 7 elements whose marginal gain exceeds $\tau$, as described below. The parameter $\epsilon$ determines the stepsize between successive thresholds; the algorithm continues until the budget is met (line 8) or the threshold falls below a minimum value dependent on the parameter $\delta$.
Intuitively, the goal of the threshold approach (Badanidiyuru and Vondrák, 2014) for submodular set functions is as follows. At each threshold $\tau$ (i.e., iteration of the outer for loop), add all elements whose marginal gain exceeds $\tau$ to the solution $\mathbf{x}$. On the lattice, adding all copies of an element $i$ whose average gain exceeds $\tau$ on line 7 would require the addition of the maximum multiple $l$ such that the average marginal gain exceeds $\tau$:
(P1)  $\dfrac{f(\mathbf{x} + l\chi_i) - f(\mathbf{x})}{l} \ge \tau,$
as in the threshold algorithm of Soma and Yoshida (2016) for DR-submodular functions, in which the maximum such $l$ is identified by binary search. However, since $f$ is not DR-submodular, the average gain $\frac{f(\mathbf{x} + l\chi_i) - f(\mathbf{x})}{l}$ is not necessarily nonincreasing in $l$. For this reason, we cannot find the maximum such $l$ by binary search. Furthermore, even if we found the maximum $l$ for each $i$, we could not guarantee that all elements of marginal gain at least $\tau$ were added, due to the nonsubmodularity of $f$: an element whose gain is less than $\tau$ when considered in the inner for loop might have gain greater than $\tau$ after additional elements are added to the solution.
ThresholdGreedy more conservatively ensures that the number $l$ chosen for each $i$ satisfies both (P1) and
(P2)  $f(\mathbf{x} + (l+1)\chi_i) - f(\mathbf{x} + l\chi_i) < \tau,$
but it is not necessarily the maximum $l$ satisfying (P1).
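Putting (P1) and (P2) together, the outer structure of ThresholdGreedy can be sketched as follows. This is a hedged rendering, not the paper's exact Alg. 2: the initial threshold, the stopping floor, and `find_pivot` (standing in for BinarySearchPivot) are our assumptions.

```python
def threshold_greedy(f, k, b, eps, find_pivot):
    """Thresholds decrease by a (1 - eps) factor; at each threshold tau,
    find_pivot(f, x, i, tau, room) returns a number of copies of coordinate i
    satisfying (P1) and (P2)."""
    n = len(b)
    x = [0] * n
    # initial threshold: the largest single-copy gain (using f(0) = 0)
    M = max(f([1 if j == i else 0 for j in range(n)]) for i in range(n))
    tau = M
    while tau >= eps * M / k and sum(x) < k:  # stand-in for the delta-based floor
        for i in range(n):
            room = min(b[i] - x[i], k - sum(x))
            if room > 0:
                x[i] += find_pivot(f, x, i, tau, room)
            if sum(x) >= k:
                break
        tau *= 1 - eps
    return x

def sequential_pivot(f, x, i, tau, room):
    """Simplest valid pivot: add copies while each one gains at least tau.
    Every copy gaining >= tau gives (P1); stopping gives (P2). The paper
    instead finds a pivot by binary search."""
    l, y = 0, list(x)
    while l < room:
        y2 = y[:]; y2[i] += 1
        if f(y2) - f(y) < tau:
            break
        y, l = y2, l + 1
    return l
```

The sequential pivot above needs up to `room` queries per coordinate; the next sketch shows the logarithmic-query alternative.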
Pivot.
Any $l$ satisfying both (P1) and (P2) is termed a pivot$^4$ with respect to $\tau$ ($^4$For convenience, we also define the maximum feasible value of $l$ to be a pivot if it satisfies (P1) only, regarding (P2) as vacuously satisfied in this case, so that all pivots satisfy both properties.). Perhaps surprisingly, a valid pivot can be found with binary search in $O(\log \|\mathbf{b}\|_\infty)$ function queries; discussion of BinarySearchPivot and the proof of this result are provided in Appendix D, Lemma 2. By finding a pivot for each $i$, ThresholdGreedy does not attempt to add all elements exceeding the marginal gain of threshold $\tau$; instead, ThresholdGreedy maintains the following property at each threshold.
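A pivot can be found by binary search on the predicate (P1) alone, and (P2) falls out of the invariant: if the average gain of $l$ copies is at least $\tau$ while that of $l+1$ copies is not, then the $(l+1)$-st copy gains less than $\tau$. The sketch below is our reconstruction of the idea behind BinarySearchPivot (Appendix D of the paper), with names of our choosing.

```python
def binary_search_pivot(f, x, i, tau, room):
    """Return a pivot l <= room: average gain of l copies >= tau (P1) and,
    unless l == room, the (l+1)-st copy gains < tau (P2)."""
    base = f(list(x))
    def p1(l):  # total gain of l copies is at least l * tau
        y = list(x); y[i] += l
        return f(y) - base >= l * tau
    if p1(room):
        return room  # maximum feasible l: (P1) alone suffices (cf. footnote 4)
    lo, hi = 0, room  # invariant: p1(lo) holds (l = 0 trivially), p1(hi) fails
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if p1(mid):
            lo = mid
        else:
            hi = mid
    # p1(lo) holds and p1(lo + 1) fails, so the (lo+1)-st copy gains
    # less than (lo+1)*tau - lo*tau = tau: property (P2).
    return lo
```

Each probe costs one oracle call, so a pivot is found in $O(\log(\text{room}))$ queries.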
Property 1.
Let $\mathbf{x}_\tau$ be the solution of ThresholdGreedy immediately after the iteration of the outer for loop corresponding to threshold $\tau$. Then for each $i$, there exists $\mathbf{z} \le \mathbf{x}_\tau$ such that $\delta_i f(\mathbf{z}) < \tau$.
Performance ratios.
Next, we present the main result of this section, the performance guarantee involving the DR and submodularity ratios. Observe that the query complexity of ThresholdGreedy is polynomial in the input size.
Theorem 1.
Let an instance of Problem MCC be given, with $f \in \mathcal{F}_{\mathbf{b}}$. If $\mathbf{x}$ is the solution returned by ThresholdGreedy and $\mathbf{x}^*$ is an optimal solution to this instance, then
$$f(\mathbf{x}) \ge \left(1 - e^{-\gamma_d\gamma_s(1-\epsilon)} - \delta\right) f(\mathbf{x}^*).$$
The query complexity of ThresholdGreedy is $O\!\left(\frac{n \log(\|\mathbf{b}\|_\infty) \log(k/\delta)}{\epsilon}\right)$.
If $\eta > 0$ is given, a suitable assignment of $\epsilon$ and $\delta$ in terms of $\eta$ yields performance ratio at least $1 - e^{-\gamma_d\gamma_s} - \eta$.
Proof.
If $f(\mathbf{x}^*) = 0$, the ratio holds trivially; so assume $f(\mathbf{x}^*) > 0$. The proof of the following claim requires an application of the DR ratio.
Claim 1.
Let $\mathbf{x}'$ be produced by a modified version of ThresholdGreedy that continues until $\|\mathbf{x}'\|_1 = k$. If we show the performance ratio holds for $\mathbf{x}'$, the result follows.
Thus, for the rest of the proof, let $\mathbf{x}$ be as described in Claim 1. Let $\mathbf{x}_j$ be the value of the solution after the $j$th execution of line 7 of ThresholdGreedy. Let $l_j$ be the $j$th pivot, so that $\mathbf{x}_j = \mathbf{x}_{j-1} + l_j \chi_{i_j}$ for some coordinate $i_j$. The next claim lower bounds the marginal gain in terms of the DR ratio and the previous threshold.
Claim 2.
For each $j$ and each coordinate $i$,
$$\frac{f(\mathbf{x}_j) - f(\mathbf{x}_{j-1})}{l_j} \ge (1 - \epsilon)\,\gamma_d\, \delta_i f(\mathbf{x}_{j-1}).$$
Proof.
Let $\tau$ be the threshold at which $l_j$ is added to the solution; let $i$ be any coordinate. If $\tau$ is the first threshold, then $\tau$ is the maximum single-copy gain, so $\delta_i f(\mathbf{x}_{j-1}) \le \delta_i f(\mathbf{0}) / \gamma_d \le \tau / \gamma_d$ by the definition of the DR ratio.
If $\tau$ is not the first threshold, let $\tau' = \tau / (1 - \epsilon)$ be the threshold value of the previous iteration of the outer for loop. By Property 1, there exists $\mathbf{z} \le \mathbf{x}_{j-1}$ such that $\delta_i f(\mathbf{z}) < \tau'$. By the definition of the DR ratio, $\delta_i f(\mathbf{x}_{j-1}) < \tau' / \gamma_d$.
In either case, by the fact that property (P1) of a pivot holds for $l_j$, we have
$$\frac{f(\mathbf{x}_j) - f(\mathbf{x}_{j-1})}{l_j} \ge \tau \ge (1 - \epsilon)\tau' \ge (1 - \epsilon)\,\gamma_d\, \delta_i f(\mathbf{x}_{j-1}).$$
Query complexity.
The for loop on line 4 (Alg. 2) iterates $O\!\left(\frac{\log(k/\delta)}{\epsilon}\right)$ times; each iteration requires $O(n \log \|\mathbf{b}\|_\infty)$ queries, by Lemma 2. ∎ For additional speedup, the inner for loop of ThresholdGreedy may be parallelized, which divides the factor of $n$ in the query complexity by the number of threads but worsens the performance ratio; in addition to $\gamma_d$ and $\gamma_s$, the generalized curvature $\alpha$ is required in the proof.
Corollary 1.
If the inner for loop of ThresholdGreedy is parallelized, the performance ratio becomes $1 - e^{-(1-\alpha)\gamma_d\gamma_s} - \eta$, for any $\eta > 0$.
Remark 1.
A careful analysis of the usage of $\gamma_d$ and $\gamma_s$ in the proof of Theorem 1 shows that the full power of the definitions of these quantities is not required. Rather, it is sufficient to consider ThresholdGreedy versions of these parameters, as defined in Appendix B. In the same way, we also have a FastGreedy version of $\gamma_s$ based upon the proof of Theorem 2. The FastGreedy version of the DR ratio is an integral part of how the algorithm works and is calculated directly by the algorithm, as we discuss in the next section.
4.2 The FastGreedy Algorithm
The proof of the performance ratio of ThresholdGreedy requires both the submodularity ratio $\gamma_s$ and the DR ratio $\gamma_d$. In this section, we provide an algorithm (FastGreedy, Alg. 3) that achieves ratio $1 - e^{-\beta\gamma_s} - \eta$, with a factor $\beta \ge \gamma_d$ that it can determine during its execution. Appendix E provides proofs for all lemmas, claims, and omitted details.
Description.
FastGreedy employs a threshold framework analogous to that of ThresholdGreedy. Each iteration of the outer while loop of FastGreedy is analogous to an iteration of the outer for loop in ThresholdGreedy, in which elements are added whose marginal gain exceeds a threshold. FastGreedy employs BinarySearchPivot to find a pivot for each element at each threshold value $\tau$. Finally, the parameter $\delta$ determines a minimum threshold value.
As its threshold, FastGreedy uses $\tau = (1 - \epsilon)\beta M$, where $M$ is the maximum marginal gain found on line 5, the parameter $\epsilon$ is the intended stepsize between thresholds as in ThresholdGreedy, and $\beta$ is an upper bound on the DR ratio $\gamma_d$, as described below. This choice of $\tau$ has the following advantages over the approach of ThresholdGreedy: (1) since the threshold is related to the maximum marginal gain $M$, the theoretical performance ratio is improved; (2) the use of $\beta$ to lower the threshold ensures the same$^5$ worst-case query complexity as ThresholdGreedy ($^5$Up to a constant factor, which depends on $\epsilon$.) and leads to a substantial reduction of the number of queries in practice, as we demonstrate in Section 6.
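The interplay between $M$, $\beta$, and $\tau$ can be sketched as follows. This is a speculative rendering of the mechanism described above, not the paper's Alg. 3: the threshold form, the $\beta$ update, the stopping floor, and the sequential pivot stand-in are all our assumptions. The key idea shown is that a gain observed to exceed the previous threshold must belong to an element whose gain was below that threshold (by (P2)), which directly witnesses non-submodularity and tightens $\beta$.

```python
def fast_greedy_sketch(f, k, b, eps):
    """Each round recomputes the maximum single-copy gain M; a gain exceeding
    the previous threshold tightens beta, the running upper bound on gamma_d."""
    n = len(b)
    x, beta, tau_prev, M0 = [0] * n, 1.0, None, None
    while sum(x) < k:
        gains = []
        for i in range(n):
            if x[i] < b[i]:
                y = x[:]; y[i] += 1
                gains.append(f(y) - f(x))
        if not gains:
            break
        M = max(gains)
        M0 = M if M0 is None else M0
        if M <= eps * M0 / k:  # stand-in for the delta-based minimum threshold
            break
        if tau_prev is not None and M > tau_prev:
            # a gain that was below tau_prev (by (P2)) now exceeds it
            beta = min(beta, tau_prev / M)
        tau = (1 - eps) * beta * M  # assumed threshold form
        for i in range(n):
            room = min(b[i] - x[i], k - sum(x))
            while room > 0:  # sequential pivot stand-in for BinarySearchPivot
                y = x[:]; y[i] += 1
                if f(y) - f(x) < tau:
                    break
                x, room = y, room - 1
            if sum(x) >= k:
                break
        tau_prev = tau
    return x, beta
```

On a DR-submodular input no gain ever grows, so $\beta$ stays at $1$ and the sketch behaves like a plain threshold greedy.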
FastGreedy DR ratio $\beta$.
If FastGreedy is modified$^6$ to continue until $\|\mathbf{x}\|_1 = k$ ($^6$This modification can be accomplished by choosing $\delta$ small enough to ensure the condition on line 4 is always true on this instance.), let the final, smallest value of $\beta$ be termed the FastGreedy DR ratio on the instance. The FastGreedy DR ratio is at least the DR ratio of the function, up to the parameter $\epsilon$:
Lemma 1.
Let parameters $\epsilon, \delta$ be given. Throughout the execution of FastGreedy on an instance of Problem MCC with $f \in \mathcal{F}_{\mathbf{b}}$, $\beta \ge (1-\epsilon)\gamma_d$. Since $\epsilon$ can be arbitrarily small, the FastGreedy DR ratio is essentially at least $\gamma_d$.
Proof.
Initially, $\beta = 1$; it decreases by a factor of at most $(1-\epsilon)$ once per iteration of the while loop. Suppose $\beta \ge (1-\epsilon)\gamma_d$ during some iteration $j$ of the while loop, let $\mathbf{x}$ have the value assigned immediately after iteration $j$, and let $M$ have the value assigned on line 5 of iteration $j+1$. Since a valid pivot was found for each element during iteration $j$, by property (P2) there exists $\mathbf{z} \le \mathbf{x}$ whose marginal gain fell below the threshold $\tau_j$ of iteration $j$. Hence $M < \tau_j / \gamma_d$, by the definition of the DR ratio. In iteration $j+1$, the new value of $\beta$ is computed from $\tau_j$ and this value of $M$; since $\tau_j / M > \gamma_d$, the value computed during iteration $j+1$ is at least $(1-\epsilon)\gamma_d$, and the invariant is maintained. ∎
Performance ratio.
Next, we present the main result of this section. In contrast to ThresholdGreedy, the factor of $\gamma_d$ in the performance ratio has been replaced with $\beta$; at the termination of the algorithm, the value of $\beta$ is an output of FastGreedy if the solution satisfies $\|\mathbf{x}\|_1 = k$. In any case, by Lemma 1, the performance ratio is at worst the same as that of ThresholdGreedy.
Theorem 2.
Let an instance of Problem MCC be given, with $f \in \mathcal{F}_{\mathbf{b}}$. Let $\mathbf{x}$ be the solution returned by FastGreedy with parameters $\epsilon, \delta$, and let $\mathbf{x}^*$ be an optimal solution to this instance; also, suppose $\|\mathbf{x}\|_1 = k$. Let $\beta$ be the FastGreedy DR ratio on this instance. Then,
$$f(\mathbf{x}) \ge \left(1 - e^{-\beta\gamma_s(1-\epsilon)} - \delta\right) f(\mathbf{x}^*).$$
The worst-case query complexity of FastGreedy is $O\!\left(\frac{n \log(\|\mathbf{b}\|_\infty) \log(k/\delta)}{\epsilon}\right)$, up to a factor depending on $\epsilon$ and $\gamma_d$.
If $\eta > 0$ is given, a suitable assignment of $\epsilon$ and $\delta$ in terms of $\eta$ yields performance ratio at least $1 - e^{-\beta\gamma_s} - \eta$.
Proof of query complexity.
The performance ratio is proved in Appendix E. Let $\tau_1, \tau_2, \ldots$ be the sequence of threshold values in the order considered by the algorithm. By Lemma 1, $\tau_{j+1} > \tau_j$ at most $O\!\left(\frac{\log(1/\gamma_d)}{\epsilon}\right)$ times; label each such index an uptick, and let $u_1 < u_2 < \cdots$ be the indices of the upticks in order of their appearance. Also, let $d_j$ be the first index after $u_j$ such that $\tau_{d_j} < \tau_{u_j}$, for each $j$.
Next, we iteratively delete from the sequence of threshold values. Initially, let $u$ be the last uptick in the sequence; delete the terms $\tau_u, \ldots, \tau_{d_u - 1}$ from the sequence. Set $u$ to the latest remaining uptick and repeat this process until no upticks remain.
Claim 3.
For each $u$ selected in the iterative deletion above, there are at most $O\!\left(\frac{\log(k/\delta)}{\epsilon}\right)$ values deleted from the sequence.
By Claim 3 and the bound on the number of upticks, the number of deleted thresholds is bounded by the product of those two quantities; every consecutive pair in the remaining sequence satisfies $\tau_{j+1} \le (1-\epsilon)\tau_j$; hence, the remaining sequence contains at most $O\!\left(\frac{\log(k/\delta)}{\epsilon}\right)$ terms, by its initial and terminal values. The query complexity follows from the number of queries per value of $\tau$, which is $O(n \log \|\mathbf{b}\|_\infty)$ by Lemma 2. ∎
5 Influence Maximization: A General Framework
In this section, we provide a nonsubmodular framework for viral marketing on a social network that unifies the classical influence maximization (Kempe et al., 2003) with the boosting problem (Lin et al., 2017).
Overview.
The goal of influence maximization is to select seed users (i.e., initially activated users), subject to a budget on the total number of seeds, such that the expected adoption in the social network is maximized. The boosting problem is, given a fixed seed set, to incentivize users (i.e., increase the susceptibility of a user to the influence of his friends) within a budget such that the expected adoption with that seed set increases the most.
Our framework combines the above two scenarios with a partial incentive: an incentive (say, a percentage off the purchase price) increases the probability that a user will purchase the product independently, and increases the susceptibility of the user to the influence of his friends. Hence, our problem asks how to best allocate the budget between (partially) seeding users and boosting the influence of likely extant seeds. Both classical influence maximization and the nonsubmodular boosting problem can be obtained as special cases, as shown in Appendix F.
Our model is related to the formulation of Demaine et al. (2014); however, they employ a submodular threshold-based model, while our model is inherently nonsubmodular due to the boosting mechanism (Lin et al., 2017). Also, GIM is related to the submodular budgeted allocation problem of Alon et al. (2012), in which the influence of an advertiser increases with the amount of budget allocated; the main difference with GIM is that we modify incoming edge weights with incentives instead of outgoing, which creates the boosting mechanism responsible for the nonsubmodularity.
Model.
Given a social network $G = (V, E)$ and a product, we define the following model of adoption. The allocation of budget to a user $v$ is thought of as a discount towards purchasing the product; this discount increases the probability that the user will adopt or purchase the product. Furthermore, this discount increases the susceptibility of the user to influence from its (incoming) social connections.
Formally, an incentive level $\boldsymbol{\psi}(v)$ is chosen for each user $v$. With independent probability $p_v(\boldsymbol{\psi}(v))$, user $v$ initially activates or adopts the product; altogether, this creates a probabilistic initial set of activated users. Next, through the classical Independent Cascade (IC) model$^7$ of adoption ($^7$The IC model is defined in Appendix F.), users influence their neighbors in the social network, wherein the weight $w_{uv}$ for edge $(u, v)$ is determined by the incentive level of user $v$ as well as the strength of the social connection from $u$ to $v$.
We write $\Pr(g, S \mid \boldsymbol{\psi})$ to denote the probability of full graph realization $g$ and seed set $S$ when $\boldsymbol{\psi}$ gives the incentive levels for each user. We write $r_g(S)$ to denote the size of the reachable set from $S$ in realization $g$. The expected activation in the network given a choice of incentive levels is given by $\sigma(\boldsymbol{\psi}) = \sum_{g, S} r_g(S) \Pr(g, S \mid \boldsymbol{\psi})$, where an explicit formula for $\Pr(g, S \mid \boldsymbol{\psi})$ is given in Appendix F. Finally, let $f(\boldsymbol{\psi}) = \sigma(\boldsymbol{\psi}) - \sigma(\mathbf{0})$, so that $f(\mathbf{0}) = 0$.
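The expected activation $\sigma(\boldsymbol{\psi})$ can be estimated by Monte Carlo simulation, as done in our experiments (Section 6). The sketch below is ours: the linear incentive mappings (`base_p * psi[v]` for self-activation, `base_w + boost * psi[v]` for incoming edge weights) are illustrative assumptions, not the paper's exact parameterization.

```python
import random

def simulate_ic(nodes, p_seed, edge_w, rng):
    """One realization: each v self-activates with prob p_seed[v]; each edge
    (u, v) is live with prob edge_w[(u, v)]; return |reachable set of seeds|."""
    seeds = [v for v in nodes if rng.random() < p_seed[v]]
    live = {v: [] for v in nodes}
    for (u, v), w in edge_w.items():
        if rng.random() < w:
            live[u].append(v)
    seen, stack = set(seeds), list(seeds)
    while stack:
        u = stack.pop()
        for v in live[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return len(seen)

def expected_activation(nodes, edges, psi, base_p, base_w, boost, samples=1000, seed=0):
    """Monte Carlo estimate of sigma(psi): the incentive psi[v] raises both v's
    self-activation probability and the weights of v's incoming edges (boosting)."""
    rng = random.Random(seed)
    p_seed = {v: min(1.0, base_p * psi[v]) for v in nodes}
    edge_w = {(u, v): min(1.0, base_w + boost * psi[v]) for (u, v) in edges}
    total = sum(simulate_ic(nodes, p_seed, edge_w, rng) for _ in range(samples))
    return total / samples
```

Note that the incentive of the *receiving* endpoint $v$ sets the weight of edge $(u, v)$, matching the incoming-edge boosting mechanism described above.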
Definition 4 (Generalized Influence Maximization (GIM)).
Let social network $G = (V, E)$ be given, together with the mappings $\psi \mapsto p_v(\psi)$ for all $v \in V$ and $\psi \mapsto w_{uv}(\psi)$ for each $(u, v) \in E$, where $\psi$ ranges over the incentive levels $\{0, 1, \ldots, \ell\}$ and $\ell$ is the number of incentive levels. Given budget $k$, determine incentive levels $\boldsymbol{\psi}$, with $\|\boldsymbol{\psi}\|_1 \le k$, such that $\sigma(\boldsymbol{\psi})$ is maximized.
Bound on nonsubmodularity.
Next, we provide a lower bound on the greedy submodularity ratios (see Appendix B). We emphasize that the assumption that the probability mappings, as functions of the incentive level, be submodular does not imply the objective is DR-submodular. Theorem 3 is proved in Appendix F.
Theorem 3.
Let an instance of GIM be given, with budget $k$. Suppose for all $v \in V$ and $(u, v) \in E$, the mappings $\psi \mapsto p_v(\psi)$ and $\psi \mapsto w_{uv}(\psi)$ are submodular. Then the greedy submodularity ratios defined in Appendix B and the FastGreedy DR ratio are lower bounded by an explicit positive quantity (stated in Appendix F) depending on the edge probabilities and the maximum in-degree $\Delta$ in $G$.
6 Experimental Evaluation
In this section, we evaluate our proposed algorithms for the GIM problem defined in Section 5. We compare our algorithms with StandardGreedy; under the naive (exponential-size) reduction of the lattice to sets, this algorithm is equivalent to performing the reduction and running the standard greedy algorithm for sets, whose performance for nonsubmodular set functions was analyzed by Bian et al. (2017b).
In Section 6.1, we describe our methodology; in Section 6.2, we compare the algorithms and nonsubmodularity parameters. In Appendix G.1, we explore the behavior of FastGreedy as the parameters $\epsilon$ and $\delta$ are varied.
6.1 Methodology
Our implementation uses Monte Carlo sampling to estimate the objective value $\sigma(\boldsymbol{\psi})$, with a fixed number of samples used. As a result, each function query is relatively expensive. We evaluate on two networks taken from the SNAP dataset (Leskovec and Krevl, 2014): ca-GrQc ("GrQc"; 15K nodes, 14.5K edges) and facebook ("Facebook"; 4K nodes, 176K edges). Unless otherwise specified, we use 10 repetitions per datapoint and display the mean. The width of shaded intervals is one standard deviation. StandardGreedy is omitted from some figures where its running time is prohibitive. Unless noted otherwise, we use default settings of the parameters $\epsilon$ and $\delta$. We use a uniform box constraint and assign each user the same number of incentive levels; the maximum incentive level for a user corresponds to giving the product to the user for free, and hence deterministically seeds the user; we adopt linear models for the mappings $p_v$ and $w_{uv}$. We often plot against the maximum number of deterministic seeds, i.e., the budget divided by the maximum incentive level.
6.2 Results
In this section, we demonstrate the following: (1) our algorithms exhibit virtually identical quality of solution to StandardGreedy; (2) our algorithms query the function far fewer times, which leads to a dramatic runtime improvement over StandardGreedy; (3) FastGreedy further reduces the number of queries of ThresholdGreedy while sacrificing little in solution quality; and (4) the nonsubmodularity parameters computed on a small instance provide evidence that our theoretical performance ratios are useful.
Quality of Solution
In Fig. 1(a), we plot the objective value of the solution returned by each algorithm on the GrQc network; the difference in quality of solution returned by the three algorithms is negligible. In Fig. 1(b), we plot the same for the Facebook network; on Facebook, we drop StandardGreedy due to its prohibitive runtime. FastGreedy is observed to lose a small (up to 3%) factor, which we consider acceptable in light of its large runtime improvement, which we discuss next.
Number of Queries
Next, we present in Fig. 2 the number of function queries$^8$ each algorithm requires on the GrQc and Facebook networks ($^8$Our implementation is in terms of the marginal gain; the number of function queries shown is the number of times the marginal gain function was called.). StandardGreedy required up to 20M queries on Facebook; hence, it is not shown in Fig. 2(b). Both of our algorithms provide a large improvement over StandardGreedy; in particular, notice that the query count of StandardGreedy increases linearly with the budget, while both of the others exhibit logarithmic increase, in agreement with the theoretical query complexity of each. Furthermore, FastGreedy uses at least 14.5% fewer function queries than ThresholdGreedy, and up to 43% fewer as the budget grows.
In terms of CPU runtime, we show in Fig. 3 that FastGreedy significantly outperforms ThresholdGreedy; hence, the runtime appears to be dominated by the number of function queries as expected.
NonSubmodularity Parameters
The value of the FastGreedy DR ratio $\beta$ on GrQc is shown in Fig. 4(a); notice that it is relatively stable as the budget increases, although there is a substantial drop as the number of incentive levels grows; this may be explained as an apparent increase in nonsubmodularity resulting from inaccurate sampling of $\sigma$, since it is more difficult to detect differences between the finer levels. Still, on all instances tested, $\beta$ remained well above zero, which suggests the worst-case performance ratio of FastGreedy is not far from that of StandardGreedy.
Finally, we examine the various nonsubmodularity parameters on a very small instance which admits their computation: a small random Barabasi-Albert network. We compute the FastGreedy version of the submodularity ratio defined in Appendix B by direct enumeration, and we consider the FastGreedy DR ratio $\beta$. Results are shown in Fig. 4(b). The value of $\beta$ is close to $1$ and remains constant with increasing budget, while the FastGreedy submodularity ratio decreases slowly with the budget. With these values, we can compute the worst-case performance ratio of FastGreedy across these instances.
7 Conclusions
In this work, we provide two approximation algorithms with polynomial query complexity for maximizing nonsubmodular functions subject to a cardinality constraint on the integer lattice. Since set functions are a special case, our work provides faster algorithms for the same problem with set functions than the standard greedy algorithm, although the performance ratio degrades from $\frac{1}{\alpha}\left(1 - e^{-\alpha\gamma_s}\right)$ to $1 - e^{-\beta\gamma_s} - \eta$, where $\beta$ is the FastGreedy DR ratio. We propose a natural application of nonsubmodular influence maximization, for which we lower bound the relevant nonsubmodularity parameters and validate our algorithms.
References
 Alon et al. (2012) Noga Alon, Iftah Gamzu, and Moshe Tennenholtz. Optimizing budget allocation among channels and influencers. Proceedings of the 21st International Conference on World Wide Web (WWW), pages 381–388, 2012. doi: 10.1145/2187836.2187888. URL http://dl.acm.org/citation.cfm?doid=2187836.2187888.
 Badanidiyuru and Vondrák (2014) Ashwinkumar Badanidiyuru and Jan Vondrák. Fast algorithms for maximizing submodular functions. Proceedings of the 25th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 1497–1514, 2014. doi: 10.1137/1.9781611973402.110.
 Bian et al. (2017a) An Bian, Kfir Y. Levy, Andreas Krause, and Joachim M. Buhmann. Non-monotone Continuous DR-submodular Maximization: Structure and Algorithms. In Advances in Neural Information Processing Systems (NIPS), pages 486–496, 2017a. URL http://arxiv.org/abs/1711.02515.

 Bian et al. (2017b) Andrew An Bian, Joachim M. Buhmann, Andreas Krause, and Sebastian Tschiatschek. Guarantees for Greedy Maximization of Non-submodular Functions with Applications. In Proceedings of the 34th International Conference on Machine Learning (ICML), 2017b.
 Borodin et al. (2014) Allan Borodin, Dai Tri Man Le, and Yuli Ye. Weakly Submodular Functions. arXiv preprint arXiv:1401.6697, 2014. URL http://arxiv.org/abs/1401.6697.
 Conforti and Cornuéjols (1984) Michele Conforti and Gérard Cornuéjols. Submodular set functions, matroids and the greedy algorithm: Tight worst-case bounds and some generalizations of the Rado-Edmonds theorem. Discrete Applied Mathematics, 7(3):251–274, 1984. ISSN 0166-218X. doi: 10.1016/0166-218X(84)90003-9.
 Das and Kempe (2011) Abhimanyu Das and David Kempe. Submodular meets Spectral: Greedy Algorithms for Subset Selection, Sparse Approximation and Dictionary Selection. Proceedings of the 28th International Conference on Machine Learning (ICML), 2011. URL http://arxiv.org/abs/1102.3975.
 Demaine et al. (2014) Erik D. Demaine, Mohammad T. Hajiaghayi, Hamid Mahini, David L. Malec, S. Raghavan, Anshul Sawant, and Morteza Zadimoghadam. How to Influence People with Partial Incentives. Proceedings of the 23rd International Conference on World Wide Web (WWW), pages 937–948, 2014. doi: 10.1145/2566486.2568039. URL http://dl.acm.org/citation.cfm?id=2568039.
 Ene and Nguyen (2016) Alina Ene and Huy L. Nguyen. A Reduction for Optimizing Lattice Submodular Functions with Diminishing Returns. arXiv preprint arXiv:1606.08362v1, 2016.
 Feige and Izsak (2013) Uriel Feige and Rani Izsak. Welfare Maximization and the Supermodular Degree. In Proceedings of the 4th conference on Innovations in Theoretical Computer Science (ITCS), pages 247–256, 2013. ISBN 9781450318594. doi: 10.1145/2422436.2422466. URL http://dl.acm.org/citation.cfm?doid=2422436.2422466.
 Gottschalk and Peis (2016) Corinna Gottschalk and Britta Peis. Submodular Function Maximization over Distributive and Integer Lattices. arXiv preprint arXiv:1505.05423, 2016. URL http://arxiv.org/abs/1505.05423.
 Horel and Singer (2016) Thibaut Horel and Yaron Singer. Maximization of Approximately Submodular Functions. In Advances in Neural Information Processing Systems (NIPS), 2016.
 Kempe et al. (2003) David Kempe, Jon Kleinberg, and Éva Tardos. Maximizing the spread of influence through a social network. In Proceedings of the 9th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), pages 137–146, 2003. ISBN 1581137370. doi: 10.1145/956755.956769. URL http://portal.acm.org/citation.cfm?doid=956750.956769.
 Krause et al. (2008a) Andreas Krause, Jure Leskovec, Carlos Guestrin, Jeanne M. VanBriesen, and Christos Faloutsos. Efficient sensor placement optimization for securing large water distribution networks. Journal of Water Resources Planning and Management, 134(6):516–526, 2008a. ISSN 0733-9496. doi: 10.1061/(ASCE)0733-9496(2008)134:6(516).
 Krause et al. (2008b) Andreas Krause, Ajit Singh, and Carlos Guestrin. Near-Optimal Sensor Placements in Gaussian Processes: Theory, Efficient Algorithms and Empirical Studies. Journal of Machine Learning Research, 9:235–284, 2008b.
 Lehmann et al. (2006) Benny Lehmann, Daniel Lehmann, and Noam Nisan. Combinatorial auctions with decreasing marginal utilities. Games and Economic Behavior, 55(2):270–296, 2006. ISSN 0899-8256. doi: 10.1016/j.geb.2005.02.006.
 Leskovec and Krevl (2014) Jure Leskovec and Andrej Krevl. SNAP Datasets: Stanford large network dataset collection. http://snap.stanford.edu/data, June 2014.
 Leskovec et al. (2007) Jure Leskovec, Andreas Krause, Carlos Guestrin, Christos Faloutsos, Jeanne VanBriesen, and Natalie Glance. Cost-effective Outbreak Detection in Networks. In Proceedings of the 13th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), pages 420–429, 2007. ISBN 9781595936097. doi: 10.1145/1281192.1281239.
 Lin et al. (2017) Yishi Lin, Wei Chen, and John C.S. Lui. Boosting information spread: An algorithmic approach. Proceedings of the International Conference on Data Engineering (ICDE), pages 883–894, 2017. ISSN 1084-4627. doi: 10.1109/ICDE.2017.137.
 Mirzasoleiman et al. (2015) Baharan Mirzasoleiman, Ashwinkumar Badanidiyuru, Amin Karbasi, Jan Vondrak, and Andreas Krause. Lazier Than Lazy Greedy. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence (AAAI), pages 1812–1818, 2015. ISBN 9781577357018. URL http://arxiv.org/abs/1409.7938.
 Soma and Yoshida (2015) Tasuku Soma and Yuichi Yoshida. A Generalization of Submodular Cover via the Diminishing Return Property on the Integer Lattice. Advances in Neural Information Processing Systems (NIPS), 2015. ISSN 1049-5258.
 Soma and Yoshida (2016) Tasuku Soma and Yuichi Yoshida. Maximizing Monotone Submodular Functions over the Integer Lattice. In Quentin Louveaux and Martin Skutella, editors, Integer Programming and Combinatorial Optimization, pages 325–336, Cham, 2016. Springer International Publishing. ISBN 9783319334615. URL http://arxiv.org/abs/1503.01218.

 Soma and Yoshida (2017) Tasuku Soma and Yuichi Yoshida. Non-monotone DR-Submodular Function Maximization. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI), pages 898–904, 2017.
Appendix A Organization of the Appendix
Appendix B defines the greedy versions of the non-submodularity parameters.
Appendix F defines the Independent Cascade model, proves that classical IM and boosting are subproblems of our IM model, and provides the proof of Theorem 3 from Section 5.
Appendix G provides additional experimental results characterizing the parameters of FastGreedy.
Appendix H presents details of our GIM implementation.
Appendix B Greedy Versions of Non-Submodularity Parameters
In this section, we define various greedy versions of the non-submodularity parameters. Where the instance is clear from the context, these are referred to as the FastGreedy submodularity ratio, etc.
ThresholdGreedy DR ratio.
Definition 5 (ThresholdGreedy DR ratio).
Let an instance of Problem MCC be given, with budget constraint . Let be the sequence of values takes during execution of ThresholdGreedy on . The ThresholdGreedy version of the DR ratio on , is the maximum value such that for any , for any , if is the value of the greedy vector immediately after was considered during the inner for loop of the threshold directly preceding the one in which was considered ( if was considered during the first threshold),
Greedy versions of submodularity ratio.
Definition 6.
Let , and let an instance of Problem MCC be given, with budget constraint . Let be the sequence of values takes during execution of on . The version of the submodularity ratio on , is the maximum value such that for any , for any , for any such that and and ,
The FastGreedy submodularity ratio differs from the above two only in that the sequence of vectors consists of the values of the greedy vector at the beginning of each iteration of the outer while loop, rather than all values of during execution of the algorithm.
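For intuition about the underlying (non-greedy) lattice DR ratio that these greedy variants specialize, it can be brute-forced on a toy instance. The helper below is illustrative only and not part of the algorithms analyzed here; it assumes the standard lattice DR-ratio definition, i.e. the largest gamma with f(x + e_i) - f(x) >= gamma * (f(y + e_i) - f(y)) for all coordinatewise x <= y:

```python
import itertools

def dr_ratio(f, box, n):
    """Brute-force the DR ratio of a monotone lattice function f on
    {0, ..., box}^n: the largest gamma such that
        f(x + e_i) - f(x) >= gamma * (f(y + e_i) - f(y))
    for all coordinatewise x <= y and all coordinates i with y_i < box.
    Exponential in n * box -- intended only for tiny instances."""
    gamma = 1.0
    points = list(itertools.product(range(box + 1), repeat=n))
    for x in points:
        for y in points:
            if not all(a <= b for a, b in zip(x, y)):
                continue  # require x <= y coordinatewise
            for i in range(n):
                if y[i] >= box:
                    continue  # y + e_i would leave the box
                xi = tuple(v + (j == i) for j, v in enumerate(x))
                yi = tuple(v + (j == i) for j, v in enumerate(y))
                gain_x, gain_y = f(xi) - f(x), f(yi) - f(y)
                if gain_y > 0:
                    gamma = min(gamma, gain_x / gain_y)
    return gamma
```

A modular function attains the ideal ratio gamma = 1, while a function with increasing returns such as f(x) = x_1 * x_2 has gamma = 0.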
Appendix C Proofs for Section 3
Proof of Proposition 1.
Suppose . Let . Then,
Therefore, , since is the maximum number satisfying the above inequality. ∎
Appendix D BinarySearchPivot and Proofs for Section 4.1 (ThresholdGreedy)
BinarySearchPivot.
The routine BinarySearchPivot (Alg. 4) efficiently finds a pivot for each . BinarySearchPivot uses a modified binary-search procedure that maintains such that both
(1)  
(2) 
Initially, and satisfy (1) and (2), or else we have already found a valid pivot (lines 5, 7). The midpoint of the interval is tested to determine whether or should be updated so that (1) and (2) are maintained; this process continues until .
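As a rough sketch of this invariant-maintaining search (not the paper's exact routine: the pivot properties are abstracted here into a single monotone predicate, and the names and signature are hypothetical; the actual routine also checks property (P2)):

```python
def binary_search_pivot(lo, hi, satisfies_p1):
    """Binary search for the smallest level in (lo, hi] satisfying a
    monotone predicate (False up to some threshold, then True).
    The loop invariants mirror (1) and (2) above: lo always violates
    the predicate and hi always satisfies it, so the boundary level
    remains inside the interval (lo, hi]."""
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if satisfies_p1(mid):
            hi = mid  # maintain: hi satisfies the predicate
        else:
            lo = mid  # maintain: lo violates the predicate
    return hi  # the smallest satisfying level
```

Each iteration halves the interval with a constant number of predicate tests, which is the source of the logarithmic query bound stated in Lemma 2.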
Lemma 2.
BinarySearchPivot finds a valid pivot in queries of , where , .
Proof of Lemma 2.
Since the routine BinarySearchPivot maintains inequalities (1) and (2), it is enough to show that, given (1) and (2), there exists a such that is a pivot. Consider , for ; there must be a smallest such that satisfies property (P1) of being a pivot, since satisfies property (P1). If property (P2) is unsatisfied, then
contradicting the choice of since . The query complexity follows from a constant number of queries per iteration of the while loop and the fact that each iteration reduces the distance from to by a factor of 2; initially, this distance was . ∎
Omitted proofs from Section 4.1.
Proof that Property 1 holds.
Let be the value of immediately after is considered during the iteration corresponding to ; then property (P2) of pivot was satisfied: . ∎
Proof of Claim 1.
Suppose . Suppose , and let be the solution returned by a modified ThresholdGreedy that continues updating the threshold until . Order , and let , , with , so that . Also, let be the vector guaranteed for by Lemma 1 with the last threshold value of ThresholdGreedy. Then
Hence, for any , if
then
∎
From proof of Theorem 1:
“If , the ratio holds trivially”.
If , the ratio holds trivially from the inequality , for real , since
“from which the hypothesis of Claim 1 follows”.
Since and , we have .
Proof of Corollary 1.
As in proof of Theorem 1, suppose . Claim 1 still holds as before. Now, let be the value of at the beginning of the th iteration of the outer for loop with threshold value . Since the inner for loop is conducted in parallel, all marginal gains in iteration are considered with respect to . Order the vectors added in this iteration ; because each is a pivot, we know and .
Let , so and . Now for each and for each , there exists a vector such that (namely , from when was considered during the previous iteration , or if is the first iteration). Furthermore and . Hence
for any . The preceding argument proves an analogue of Claim 2, and the argument from here is exactly analogous to the proof of Theorem 1. ∎
Appendix E Proofs for Section 4.2
Proof of Theorem 2.
Since we have included as a hypothesis, we have the following claim, analogous to Claim 1.
Claim 4.
If is produced by the modified version of FastGreedy that continues until , and , then the Theorem is proved.
Proof.
Suppose , and let be the solution returned by a FastGreedy* which continues updating the threshold until . Order , and let , , with , so that . Then