Fast Maximization of Non-Submodular, Monotonic Functions on the Integer Lattice

May 17, 2018 · Alan Kuhnle et al.

The optimization of submodular functions on the integer lattice has received much attention recently, but the objective functions of many applications are non-submodular. We provide two approximation algorithms for maximizing a non-submodular function on the integer lattice subject to a cardinality constraint; these are the first algorithms for this purpose that have polynomial query complexity. We propose a general framework for influence maximization on the integer lattice that generalizes prior works on this topic, and we demonstrate the efficiency of our algorithms in this context.







1 Introduction

As a natural extension of finite sets (equivalently, functions on $\{0,1\}^n$), optimization of discrete functions on the integer lattice has received attention recently (Alon et al., 2012; Demaine et al., 2014; Soma and Yoshida, 2015). As an example, consider the placement of sensors in a water network (Krause et al., 2008a); in the set version, each sensor takes a value in $\{0,1\}$, which corresponds to whether the sensor was placed. In the lattice version (Soma and Yoshida, 2015), each sensor has a power level in $\{0, 1, \ldots, b\}$, to which the sensitivity of the sensor is correlated. As a second example, consider the influence maximization problem (Kempe et al., 2003); instead of the binary seeding of a user, the lattice version enables partial incentives or discounts to be used (Demaine et al., 2014).

Although many results from the optimization of submodular set functions have been generalized to the integer lattice (Soma and Yoshida, 2015, 2016; Ene and Nguyen, 2016), many objective functions arising from applications are not submodular (Bian et al., 2017b; Lin et al., 2017; Das and Kempe, 2011; Horel and Singer, 2016). In this work, we consider maximization subject to a cardinality constraint (MCC), where the function to be maximized may be non-submodular. Let $k \in \mathbb{N}$ (the budget), $\mathbf{b} \in \mathbb{N}^n$ (the box), and let the objective $f$ be a non-negative and monotonic (i.e., $f(\mathbf{x}) \le f(\mathbf{y})$ for all $\mathbf{x} \le \mathbf{y}$ coordinate-wise) function with $f(\mathbf{0}) = 0$. Then determine


$$\max \left\{ f(\mathbf{x}) : |\mathbf{x}| \le k,\ \mathbf{0} \le \mathbf{x} \le \mathbf{b} \right\}, \tag{MCC}$$

where $|\mathbf{x}| = \sum_i x_i$.

Since the integer lattice may be represented as a multiset containing $b_e$ copies of each element $e$, one may use results for Problem MCC with non-submodular set functions. In particular, the tight ratio $\frac{1}{\alpha}\left(1 - e^{-\alpha \gamma_s}\right)$ of the standard greedy algorithm proved by Bian et al. (2017b), where the curvature $\alpha$ and submodularity ratio $\gamma_s$ are discussed below, applies to the lattice adaptation of the standard greedy algorithm (StandardGreedy) given in Alg. 1. However, this approach requires $O(nk)$ queries of $f$, which is not polynomial in the input size. (The input is considered to be the vector $\mathbf{b}$ of length $n$ and the number $k$ represented in $O(\log k)$ bits; w.l.o.g. each component of $\mathbf{b}$ is at most $k$. The function $f$ is regarded as an oracle and hence does not contribute to the input size.) Even for applications with set functions, $O(nk)$ queries may be prohibitive, and researchers (Leskovec et al., 2007; Mirzasoleiman et al., 2015; Badanidiyuru and Vondrák, 2014) have sought ways to speed up the StandardGreedy algorithm. Unfortunately, these approaches rely upon the submodularity of $f$, and there has been no analogous effort for non-submodular functions.

To quantify the non-submodularity of a lattice function $f$, we generalize the following quantities defined for set functions to the lattice: (1) the diminishing-return (DR) ratio $\gamma_d$ of $f$ (Lehmann et al., 2006), (2) the submodularity ratio $\gamma_s$ of $f$ (Das and Kempe, 2011), and (3) the generalized curvature $\alpha$ of $f$ (Bian et al., 2017b). Our main contributions are:

  • To speed up StandardGreedy (Alg. 1), we adapt the threshold greedy framework of Badanidiyuru and Vondrák (2014) to non-submodular functions; this yields an algorithm (ThresholdGreedy, Alg. 2) whose approximation ratio approaches $1 - e^{-\gamma_d \gamma_s}$ as its parameters $\epsilon, \delta \to 0$; it is the first approximation algorithm with polynomial query complexity for Problem MCC on the lattice. The query complexity is improved from the $O(nk)$ of the StandardGreedy algorithm to $O\!\left(\frac{n \log b \, \log(k/\delta)}{\epsilon}\right)$, where $\epsilon, \delta$ are parameters of ThresholdGreedy and $b = \max_e b_e$.

  • We introduce the novel approximation algorithm FastGreedy, which combines elements of StandardGreedy and ThresholdGreedy to replace the factor $\gamma_d$ in the performance ratio with a quantity $\beta$, where $\beta$ is at least $\gamma_d$ and in many cases is determined by the algorithm. (When the solution $\mathbf{x}$ returned by FastGreedy satisfies $|\mathbf{x}| = k$, $\beta$ is determined exactly; otherwise, an upper bound on $\beta$ is returned.) Furthermore, FastGreedy exploits the non-submodularity of the function to decrease its runtime in practice without sacrificing its performance guarantee, while maintaining the same worst-case query complexity as ThresholdGreedy up to a constant factor.

  • To demonstrate our algorithms, we introduce a general budget allocation problem for viral marketing, which unifies submodular influence maximization (IM) under the independent cascade model (Kempe et al., 2003) with the non-submodular boosting problem (Lin et al., 2017) and in addition allows partial incentives. We prove a lower bound on the DR and submodularity ratios for this unified framework, and we experimentally validate our proposed algorithms in this setting.

1:  Input: $f \in \mathcal{F}_{\mathbf{b}}$, $\mathbf{b}$, $k$
2:  Output: $\mathbf{x}$ with $|\mathbf{x}| \le k$
3:  $\mathbf{x} \leftarrow \mathbf{0}$
4:  for $i = 1$ to $k$ do
5:     $\mathbf{x} \leftarrow \mathbf{x} + \arg\max_{\mathbf{1}_e : \mathbf{x} + \mathbf{1}_e \le \mathbf{b}} \left( f(\mathbf{x} + \mathbf{1}_e) - f(\mathbf{x}) \right)$
6:  return $\mathbf{x}$
Algorithm 1 StandardGreedy
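For concreteness, the lattice StandardGreedy can be sketched in a few lines of Python (a minimal sketch, not the authors' implementation; the toy objective and box below are assumptions for illustration). Each of the $k$ iterations queries every coordinate once, which is the source of the $O(nk)$ query count:

```python
def standard_greedy(f, b, k):
    """Lattice StandardGreedy: repeatedly add the single unit
    increment with the largest marginal gain."""
    n = len(b)
    x = [0] * n
    for _ in range(k):
        best_e, best_gain = None, 0.0
        fx = f(x)
        for e in range(n):
            if x[e] < b[e]:              # respect the box constraint
                x[e] += 1
                gain = f(x) - fx         # marginal gain of one copy of e
                x[e] -= 1
                if gain > best_gain:
                    best_e, best_gain = e, gain
        if best_e is None:               # no feasible improving element
            break
        x[best_e] += 1
    return x

# toy monotone, non-submodular objective on a 3-element ground set
def f(x):
    return sum(x) + 0.5 * x[0] * x[1]

print(standard_greedy(f, [2, 2, 2], 3))  # → [2, 1, 0]
```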

2 Related Work

The study of optimization of submodular set functions is too extensive to give a comprehensive overview. On the integer lattice, there have been many efforts to maximize submodular functions, e.g., Soma and Yoshida (2017); Bian et al. (2017a); Gottschalk and Peis (2016). To the best of our knowledge, we are the first to study the optimization of non-submodular functions on the integer lattice. In the following discussion, we primarily restrict our attention to the maximization of monotonic, submodular lattice functions subject to a cardinality constraint and to the maximization of non-submodular set functions.

Reduction of Ene and Nguyen (2016).

Ene and Nguyen (2016) have given a polynomial-time reduction from the lattice to a set that enables unified translation of submodular optimization strategies to DR-submodular (i.e., DR ratio $\gamma_d = 1$; see Section 3) functions on the integer lattice. Since this translation is designed for DR-submodular functions, it does not give a polynomial-time algorithm for Problem MCC when $f$ is non-submodular. Specifically, for the case of maximization subject to a cardinality constraint, Ene and Nguyen (2016) rely upon the threshold greedy algorithm for submodular set functions (Badanidiyuru and Vondrák, 2014), which does not work for non-submodular functions without modifications such as the ones in our paper.

Threshold Greedy and Lattice Optimization.

To speed up StandardGreedy for submodular set functions, Badanidiyuru and Vondrák (2014) introduced the threshold greedy framework, which reduces the number of function evaluations for maximizing a submodular set function under a cardinality constraint from $O(nk)$ to $O\!\left(\frac{n}{\epsilon} \log \frac{n}{\epsilon}\right)$, while maintaining the approximation ratio $1 - 1/e - \epsilon$, for any $\epsilon > 0$. Soma and Yoshida (2016) adapted the threshold approach to efficiently maximizing DR-submodular functions on the integer lattice and provided $(1 - 1/e - \epsilon)$-approximation algorithms. Other adaptations of the threshold approach of Badanidiyuru and Vondrák (2014) to the integer lattice include Ene and Nguyen (2016); Soma and Yoshida (2015). To the best of our knowledge, in this work we make the first use of the threshold framework for non-submodular functions.

Our ThresholdGreedy algorithm is an adaptation of the algorithm of Soma and Yoshida (2016) for DR-submodular maximization to non-submodular functions. The non-submodularity requires new analysis, in the following specific ways: (1) during the binary search phase, we cannot guarantee that we find the maximum number of copies whose average gain exceeds the threshold $\tau$; hence, we must settle for any number of copies whose average gain exceeds $\tau$, while ensuring that the gain of adding one additional copy falls below $\tau$. (2) To prove the performance ratio, we require a combination of the DR ratio $\gamma_d$ and the submodularity ratio $\gamma_s$. (3) The stopping condition (smallest threshold value) differs as a result of the non-submodularity; the proof that this condition is sufficient requires another application of the DR ratio.

Optimization of Non-Submodular Set Functions.

For non-submodular set functions, the submodularity ratio $\gamma_s$ was introduced by Das and Kempe (2011); we generalize $\gamma_s$ to lattice functions in Section 3, and we show that the DR ratio satisfies $\gamma_d \le \gamma_s$. Bian et al. (2017b) introduced the generalized curvature $\alpha$ of a set function, a concept analogous to the DR ratio, as we discuss in Section 3. Bian et al. (2017b) extended the analysis of Conforti and Cornuéjols (1984) to non-submodular set functions; together with the submodularity ratio $\gamma_s$, they proved StandardGreedy has the tight approximation ratio $\frac{1}{\alpha}\left(1 - e^{-\alpha \gamma_s}\right)$ under a cardinality constraint.

The DR ratio was introduced by Lehmann et al. (2006) for the valuation functions in the maximum allocation problem for a combinatorial auction; if each valuation function has DR ratio at least $\gamma_d$, the maximum allocation problem is a special case of maximizing a set function with DR ratio $\gamma_d$ over a matroid, for which Lehmann et al. (2006) employ the StandardGreedy algorithm (for matroids).

Many other notions of non-submodularity of set functions have been introduced (Krause et al., 2008b; Horel and Singer, 2016; Borodin et al., 2014; Feige and Izsak, 2013). For a comprehensive discussion of the relation of these and additional notions to the submodularity ratio $\gamma_s$, we refer the reader to Bian et al. (2017b).

3 Non-Submodularity on the Lattice

In this section, we define the lattice versions of the DR ratio $\gamma_d$, the submodularity ratio $\gamma_s$, and the generalized curvature $\alpha$, which are used in the approximation ratios proved in Section 4.


For each $e \in E$, let $\mathbf{1}_e$ be the unit vector with $1$ in the coordinate corresponding to $e$, and $0$ elsewhere. We write $|\mathbf{x}|$ for $\sum_{e \in E} x_e$. Given a box $\mathbf{b}$ in the integer lattice $\mathbb{Z}_{\ge 0}^E$, let the set of all non-negative, monotonic lattice functions with $f(\mathbf{0}) = 0$ and domain $\{\mathbf{x} : \mathbf{0} \le \mathbf{x} \le \mathbf{b}\}$ be denoted $\mathcal{F}_{\mathbf{b}}$. It is often useful to think of a vector $\mathbf{x}$ as a multiset containing $x_e$ copies of each $e$, where $x_e$ is the value of $\mathbf{x}$'s coordinate corresponding to $e$. We use the notation $\{\mathbf{x}\}$ to represent the multiset corresponding to the vector $\mathbf{x}$. Finally, we define $\mathbf{x} \vee \mathbf{y}$ and $\mathbf{x} \wedge \mathbf{y}$ to be the coordinate-wise maximum and minimum, respectively. Rather than an algorithm taking an explicit description of the function as input, we consider the function as an oracle and measure the complexity of an algorithm in terms of the number of oracle calls or queries.
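A few lines of Python (the list-based vector representation is assumed for illustration) make the multiset view and the join/meet operations concrete, including the lattice identity $\mathbf{x} + (\mathbf{y} - \mathbf{x} \wedge \mathbf{y}) = \mathbf{x} \vee \mathbf{y}$:

```python
def join(x, y):   # x ∨ y: coordinate-wise maximum
    return [max(a, c) for a, c in zip(x, y)]

def meet(x, y):   # x ∧ y: coordinate-wise minimum
    return [min(a, c) for a, c in zip(x, y)]

def size(x):      # |x|: total number of copies in the multiset view
    return sum(x)

def multiset(x, ground):
    """Multiset {x}: element e appears x_e times."""
    return [e for e, cnt in zip(ground, x) for _ in range(cnt)]

x, y = [2, 0, 1], [1, 3, 0]
assert join(x, y) == [2, 3, 1] and meet(x, y) == [1, 0, 0]
assert size(x) == 3 and multiset(x, "abc") == ["a", "a", "c"]
# lattice identity: x + (y − x ∧ y) = x ∨ y
assert [a + c - m for a, c, m in zip(x, y, meet(x, y))] == join(x, y)
```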

We begin with the related concepts of DR ratio and generalized curvature.

Definition 1.

Let $f \in \mathcal{F}_{\mathbf{b}}$. The diminishing-return (DR) ratio of $f$, $\gamma_d = \gamma_d(f)$, is the maximum value in $[0, 1]$ such that for any $e \in E$, and for all $\mathbf{x} \le \mathbf{y}$ such that $\mathbf{y} + \mathbf{1}_e \le \mathbf{b}$,

$$\gamma_d \left( f(\mathbf{y} + \mathbf{1}_e) - f(\mathbf{y}) \right) \le f(\mathbf{x} + \mathbf{1}_e) - f(\mathbf{x}).$$

Definition 2.

Let $f \in \mathcal{F}_{\mathbf{b}}$. The generalized curvature of $f$, $\alpha = \alpha(f)$, is the minimum value in $[0, 1]$ such that for any $e \in E$, and for all $\mathbf{x} \le \mathbf{y}$ such that $\mathbf{y} + \mathbf{1}_e \le \mathbf{b}$,

$$f(\mathbf{y} + \mathbf{1}_e) - f(\mathbf{y}) \ge (1 - \alpha) \left( f(\mathbf{x} + \mathbf{1}_e) - f(\mathbf{x}) \right).$$

The DR ratio extends the notion of DR-submodularity of Soma and Yoshida (2015), which is obtained as the special case $\gamma_d = 1$. Generalized curvature for set functions was introduced in Bian et al. (2017b). Notice that $\gamma_d$ results in a lower bound on the marginal gain of adding $\mathbf{1}_e$ to a vector $\mathbf{x}$, while $\alpha$ results in an upper bound on the same quantity:

$$\gamma_d \left( f(\mathbf{y} + \mathbf{1}_e) - f(\mathbf{y}) \right) \le f(\mathbf{x} + \mathbf{1}_e) - f(\mathbf{x}) \le \frac{1}{1 - \alpha} \left( f(\mathbf{y} + \mathbf{1}_e) - f(\mathbf{y}) \right),$$

whenever $\mathbf{x} \le \mathbf{y}$, $\mathbf{y} + \mathbf{1}_e \le \mathbf{b}$, and the above expressions are defined. Next, we generalize the submodularity ratio of Das and Kempe (2011) to the integer lattice.

Definition 3.

Let $f \in \mathcal{F}_{\mathbf{b}}$. The submodularity ratio of $f$, $\gamma_s = \gamma_s(f)$, is the maximum value in $[0, 1]$ such that for all $\mathbf{x}$, $\mathbf{y}$ such that $\mathbf{x} \vee \mathbf{y} \le \mathbf{b}$,

$$\sum_{e \in \{\mathbf{y} - \mathbf{x} \wedge \mathbf{y}\}} \left( f(\mathbf{x} + \mathbf{1}_e) - f(\mathbf{x}) \right) \ge \gamma_s \left( f(\mathbf{x} \vee \mathbf{y}) - f(\mathbf{x}) \right),$$

where the sum is over the multiset $\{\mathbf{y} - \mathbf{x} \wedge \mathbf{y}\}$.

The next proposition, proved in Appendix C, shows the relationship between DR ratio and submodularity ratio.

Proposition 1.

For all $f \in \mathcal{F}_{\mathbf{b}}$, $\gamma_d(f) \le \gamma_s(f)$.
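On a tiny box these quantities can be computed by brute force; the Python sketch below (the toy objective is an assumption for illustration) estimates the DR ratio of Definition 1 by enumerating all pairs $\mathbf{x} \le \mathbf{y}$, and the same enumeration style can be used to verify Proposition 1 on small examples:

```python
from itertools import product

def dr_ratio(f, b):
    """Brute-force DR ratio: the smallest (capped at 1) observed value of
    (gain at x) / (gain at y) over all pairs x <= y and elements e."""
    ratios = [1.0]
    boxes = list(product(*[range(c + 1) for c in b]))
    for y in boxes:
        for x in boxes:
            if not all(a <= c for a, c in zip(x, y)):
                continue
            for e in range(len(b)):
                if y[e] + 1 > b[e]:          # y + 1_e must stay in the box
                    continue
                xe = list(x); xe[e] += 1
                ye = list(y); ye[e] += 1
                gain_x = f(xe) - f(list(x))
                gain_y = f(ye) - f(list(y))
                if gain_y > 0:
                    ratios.append(min(1.0, gain_x / gain_y))
    return min(ratios)

def f(x):  # monotone but not DR-submodular: copies of coordinate 0 reinforce each other
    return x[0] ** 2 + x[1]

print(dr_ratio(f, [2, 2]))   # → 1/3: the gain of the first copy vs. the second
```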

In the rest of this work, we will parameterize functions by the non-submodularity ratios defined above, grouping together functions that share the same values of these parameters.

Greedy versions.

In the proofs of this paper, the full power of the parameters defined above is not required. It suffices to consider restricted versions, where the maximization is taken over only those vectors which appear in the ratio proofs. We define these greedy versions in Appendix B and include more discussion in Remark 1 of Section 4.1.

4 Algorithms

4.1 The ThresholdGreedy Algorithm

In this section, we present the algorithm ThresholdGreedy (Alg. 2), which approximates Problem MCC to within a factor approaching $1 - e^{-\gamma_d \gamma_s}$ with polynomial query complexity. Appendix D contains the proofs of all lemmas, claims, and omitted details from this section.


ThresholdGreedy operates by considering decreasing thresholds $\tau$ for the marginal gain in its outer for loop; for each threshold $\tau$, the algorithm adds on line 7 elements whose marginal gain exceeds $\tau$, as described below. The parameter $\epsilon$ determines the stepsize between successive thresholds; the algorithm continues until the budget is met (line 8) or the threshold falls below a minimum value dependent on the parameter $\delta$.

Intuitively, the goal of the threshold approach (Badanidiyuru and Vondrák, 2014) for submodular set functions is as follows. At each threshold $\tau$ (i.e., iteration of the outer for loop), add all elements whose marginal gain exceeds $\tau$ to the solution $\mathbf{x}$. On the lattice, adding all copies of $e$ whose average gain exceeds $\tau$ on line 7 would require the addition of the maximum multiple $l$ such that the average marginal gain exceeds $\tau$:

$$\frac{f(\mathbf{x} + l \mathbf{1}_e) - f(\mathbf{x})}{l} \ge \tau, \tag{P1}$$

as in the threshold algorithm of Soma and Yoshida (2016) for DR-submodular functions, in which the maximum $l$ is identified by binary search. However, since $f$ is not DR-submodular, the average gain is not always non-increasing in $l$; for this reason, we cannot find the maximum such $l$ by binary search. Furthermore, even if we found the maximum $l$ for each $e$, we could not guarantee that all elements of marginal gain at least $\tau$ were added, due to the non-submodularity of $f$: an element whose gain is less than $\tau$ when considered in the inner for loop might have gain greater than $\tau$ after additional elements are added to the solution.
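The failure of plain binary search can be seen on a toy objective (contrived for this illustration): the average per-copy gain need not be monotone in the number of copies $l$, so the set of $l$ with average gain above a threshold need not be a prefix.

```python
def avg_gain(f, x, e, l):
    """Average per-copy gain of adding l copies of e to x."""
    xe = list(x)
    xe[e] += l
    return (f(xe) - f(x)) / l

# monotone but not DR-submodular: the 3rd copy of element 0 unlocks a bonus
def f(x):
    return min(x[0], 1) + (2.0 if x[0] >= 3 else 0.0)

gains = [avg_gain(f, [0, 0], 0, l) for l in (1, 2, 3)]
print(gains)   # → [1.0, 0.5, 1.0]: with threshold τ = 0.9, l = 1 and l = 3
               # qualify but l = 2 does not, so the qualifying set is not a prefix
```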

ThresholdGreedy more conservatively ensures that the number $l$ chosen for each $e$ satisfies both (P1) and

$$f(\mathbf{x} + (l + 1) \mathbf{1}_e) - f(\mathbf{x} + l \mathbf{1}_e) < \tau, \tag{P2}$$

but $l$ is not necessarily the maximum number satisfying (P1).


Any $l$ satisfying both (P1) and (P2) is termed a pivot with respect to $\tau$. (For convenience, we also define the maximum feasible value of $l$ to be a pivot if it satisfies (P1) only, so that all pivots satisfy both properties.) Perhaps surprisingly, a valid pivot can be found with binary search in $O(\log b)$ function queries, where $b = \max_e b_e$; discussion of BinarySearchPivot and the proof of this result are provided in Appendix D, Lemma 2. By finding a pivot for each $e$, ThresholdGreedy does not attempt to add all elements exceeding the marginal gain of threshold $\tau$; instead, ThresholdGreedy maintains the following property at each threshold.

Property 1.

Let $\mathbf{x}_\tau$ be the solution of ThresholdGreedy immediately after the iteration of the outer for loop corresponding to threshold $\tau$. Then for each $e \in E$, there exists $\mathbf{x}' \le \mathbf{x}_\tau$ such that $f(\mathbf{x}' + \mathbf{1}_e) - f(\mathbf{x}') < \tau$.

1:  Input: $f \in \mathcal{F}_{\mathbf{b}}$, $k$, $\epsilon$, $\delta$.
2:  Output: $\mathbf{x}$ with $|\mathbf{x}| \le k$
3:  $\mathbf{x} \leftarrow \mathbf{0}$, $M \leftarrow \max_{e \in E} f(\mathbf{1}_e)$.
4:  for ($\tau = M$; $\tau \ge \delta M / k$; $\tau \leftarrow \tau(1 - \epsilon)$) do
5:     for $e \in E$ do
6:        $l \leftarrow$ BinarySearchPivot$(f, \mathbf{x}, e, \tau, k - |\mathbf{x}|)$
7:        $\mathbf{x} \leftarrow \mathbf{x} + l \mathbf{1}_e$
8:        if $|\mathbf{x}| = k$ then
9:           return $\mathbf{x}$
10:  return $\mathbf{x}$
Algorithm 2 ThresholdGreedy
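The pivot search and the threshold loop can be sketched in Python (a sketch under stated assumptions, not the paper's BinarySearchPivot: the search maintains the invariant that (P1) holds at lo and fails at hi, which forces the one-extra-copy condition (P2) at termination; the toy objective is illustrative):

```python
def binary_search_pivot(f, x, e, tau, l_max):
    """Find a pivot l in [0, l_max]: (P1) f(x + l*1_e) - f(x) >= l*tau,
    and copy l+1 gains < tau (P2), or l == l_max.  O(log l_max) queries."""
    fx = f(x)
    def g(l):
        xe = list(x)
        xe[e] += l
        return f(xe) - fx
    if g(l_max) >= l_max * tau:   # (P1) holds at the top: l_max is a pivot by convention
        return l_max
    lo, hi = 0, l_max             # invariant: (P1) holds at lo, fails at hi
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if g(mid) >= mid * tau:
            lo = mid
        else:
            hi = mid
    # g(lo) >= lo*tau and g(lo+1) < (lo+1)*tau imply copy lo+1 gains < tau
    return lo

def threshold_greedy(f, b, k, eps=0.2, delta=0.1):
    n = len(b)
    x = [0] * n
    M = max(f([1 if i == e else 0 for i in range(n)]) for e in range(n))
    tau = M
    while tau >= delta * M / k and sum(x) < k:
        for e in range(n):
            l_max = min(b[e] - x[e], k - sum(x))
            if l_max > 0:
                x[e] += binary_search_pivot(f, x, e, tau, l_max)
            if sum(x) == k:
                return x
        tau *= 1 - eps
    return x

def f(x):   # toy monotone, non-submodular objective
    return sum(x) + 0.5 * x[0] * x[1]

print(threshold_greedy(f, [2, 2, 2], 3))   # → [2, 1, 0]
```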

Performance ratios.

Next, we present the main result of this section, the performance guarantee involving the DR and submodularity ratios. Observe that the query complexity of ThresholdGreedy is polynomial in the input size.

Theorem 1.

Let an instance of Problem MCC be given, with $f \in \mathcal{F}_{\mathbf{b}}$. If $\mathbf{x}$ is the solution returned by ThresholdGreedy and $\mathbf{o}$ is an optimal solution to this instance, then $f(\mathbf{x})$ is within a factor of $f(\mathbf{o})$ that approaches $1 - e^{-\gamma_d \gamma_s}$ as $\epsilon, \delta \to 0$.

The query complexity of ThresholdGreedy is $O\!\left(\frac{n \log b \, \log(k/\delta)}{\epsilon}\right)$.

If $\eta > 0$ is given, a suitable assignment of $\epsilon$, $\delta$ yields performance ratio at least $1 - e^{-\gamma_d \gamma_s} - \eta$.


If $f(\mathbf{o}) = 0$, the ratio holds trivially; so assume $f(\mathbf{o}) > 0$. The proof of the following claim requires an application of the DR ratio.

Claim 1.

Let $\mathbf{x}'$ be produced by a modified version of ThresholdGreedy that continues until $|\mathbf{x}'| = k$. If we show the claimed bound for $\mathbf{x}'$, the result follows.

Thus, for the rest of the proof let $\mathbf{x}$ be as described in Claim 1. Let $\mathbf{x}_i$ be the value of $\mathbf{x}$ after the $i$th execution of line 7 of ThresholdGreedy. Let $l_i$ be the $i$th pivot, such that $\mathbf{x}_i = \mathbf{x}_{i-1} + l_i \mathbf{1}_{e_i}$. The next claim lower bounds the marginal gain in terms of the DR ratio and the previous threshold.

Claim 2.

For each $i$ and every $e \in E$,

$$f(\mathbf{x}_i) - f(\mathbf{x}_{i-1}) \ge \gamma_d (1 - \epsilon) \, l_i \left( f(\mathbf{x}_{i-1} + \mathbf{1}_e) - f(\mathbf{x}_{i-1}) \right).$$


Let $\tau$ be the threshold at which $l_i \mathbf{1}_{e_i}$ is added to $\mathbf{x}$; let $\tau' = \tau / (1 - \epsilon)$. If $\tau$ is the first threshold, then $\tau = M \ge \gamma_d \left( f(\mathbf{x}_{i-1} + \mathbf{1}_e) - f(\mathbf{x}_{i-1}) \right)$ by the definition of the DR ratio.

If $\tau$ is not the first threshold, $\tau'$ is the threshold value of the previous iteration of the outer for loop. By Property 1, there exists $\mathbf{x}' \le \mathbf{x}_{i-1}$ such that $f(\mathbf{x}' + \mathbf{1}_e) - f(\mathbf{x}') < \tau'$. By the definition of the DR ratio, $f(\mathbf{x}_{i-1} + \mathbf{1}_e) - f(\mathbf{x}_{i-1}) < \tau' / \gamma_d$.

In either case, since property (P1) of a pivot holds for $l_i$, we have $f(\mathbf{x}_i) - f(\mathbf{x}_{i-1}) \ge l_i \tau$, which combined with the bounds above yields the claim.
Since $|\{\mathbf{o}\} \setminus \{\mathbf{x}_{i-1}\}| \le k$, we have by Claim 2

$$\frac{k \left( f(\mathbf{x}_i) - f(\mathbf{x}_{i-1}) \right)}{\gamma_d (1 - \epsilon) l_i} \ge \sum_{e \in \{\mathbf{o} - \mathbf{x}_{i-1} \wedge \mathbf{o}\}} \left( f(\mathbf{x}_{i-1} + \mathbf{1}_e) - f(\mathbf{x}_{i-1}) \right) \ge \gamma_s \left( f(\mathbf{x}_{i-1} + (\mathbf{o} - \mathbf{x}_{i-1} \wedge \mathbf{o})) - f(\mathbf{x}_{i-1}) \right) = \gamma_s \left( f(\mathbf{x}_{i-1} \vee \mathbf{o}) - f(\mathbf{x}_{i-1}) \right) \ge \gamma_s \left( f(\mathbf{o}) - f(\mathbf{x}_{i-1}) \right),$$

where the equality follows from the lattice identity $\mathbf{x} + (\mathbf{y} - \mathbf{x} \wedge \mathbf{y}) = \mathbf{x} \vee \mathbf{y}$ for all $\mathbf{x}, \mathbf{y}$, the second inequality is by the definition of the submodularity ratio, and the third inequality is from monotonicity. From here, we obtain by induction $f(\mathbf{x}_i) \ge \left(1 - \prod_{j \le i} \left(1 - \frac{\gamma_d \gamma_s (1 - \epsilon) l_j}{k}\right)\right) f(\mathbf{o})$, from which the hypothesis of Claim 1 follows.

Query complexity.

The for loop on line 4 (Alg. 2) iterates at most $O\!\left(\frac{\log(k/\delta)}{\epsilon}\right)$ times; each iteration requires $O(n \log b)$ queries, by Lemma 2. ∎ For additional speedup, the inner for loop of ThresholdGreedy may be parallelized, which divides the factor of $n$ in the query complexity by the number of threads but worsens the performance ratio; in addition to $\gamma_d$ and $\gamma_s$, the generalized curvature $\alpha$ is required in the proof.

Corollary 1.

If the inner for loop of ThresholdGreedy is parallelized, the performance ratio degrades to one depending on the generalized curvature $\alpha$ in addition to $\gamma_d$ and $\gamma_s$.

Remark 1.

A careful analysis of the usage of $\gamma_d$, $\gamma_s$ in the proof of Theorem 1 shows that the full power of the definitions of these quantities is not required. Rather, it is sufficient to consider ThresholdGreedy versions of these parameters, as defined in Appendix B. In the same way, we also have FastGreedy versions based upon the proof of Theorem 2. The FastGreedy version of the DR ratio is an integral part of how the algorithm works and is calculated directly by the algorithm, as we discuss in the next section.

4.2 The FastGreedy Algorithm

The proof of the performance ratio of ThresholdGreedy requires both the submodularity ratio $\gamma_s$ and the DR ratio $\gamma_d$. In this section, we provide an algorithm (FastGreedy, Alg. 3) whose ratio replaces $\gamma_d$ with a factor $\beta \ge \gamma_d$ that it can determine during its execution. Appendix E provides proofs for all lemmas, claims, and omitted details.


FastGreedy employs a threshold framework analogous to ThresholdGreedy. Each iteration of the outer while loop of FastGreedy is analogous to an iteration of the outer for loop in ThresholdGreedy, in which elements are added whose marginal gain exceeds a threshold. FastGreedy employs BinarySearchPivot to find a pivot for each $e \in E$ at each threshold value $\tau$. Finally, the parameter $\delta$ determines a minimum threshold value.

As its threshold, FastGreedy uses a value proportional to $\beta \Delta$, where $\Delta$ is the maximum marginal gain found on line 5, the parameter $\epsilon$ is the intended stepsize between thresholds as in ThresholdGreedy, and $\beta$ is an upper bound on the DR ratio $\gamma_d$, as described below. This choice of $\tau$ has the following advantages over the approach of ThresholdGreedy: (1) since the threshold is related to the maximum marginal gain $\Delta$, the theoretical performance ratio is improved; (2) the use of $\beta$ to lower the threshold ensures the same worst-case query complexity as ThresholdGreedy (up to a constant factor, which depends on $\epsilon$) and leads to a substantial reduction of the number of queries in practice, as we demonstrate in Section 6.

FastGreedy DR ratio $\beta$.

If FastGreedy is modified to continue until $|\mathbf{x}| = k$ (this modification can be accomplished by choosing $\delta$ so that the condition on line 4 is always true on this instance), let the final, smallest value of $\beta$ be termed the FastGreedy DR ratio on the instance. The FastGreedy DR ratio is at least the DR ratio of the function, up to the parameter $\epsilon$:

Lemma 1.

Let parameters $\epsilon$, $\delta$ be given. Throughout the execution of FastGreedy on an instance of Problem MCC with $f \in \mathcal{F}_{\mathbf{b}}$, $\beta \ge (1 - \epsilon) \gamma_d$. Since $\epsilon$ can be arbitrarily small, the FastGreedy DR ratio is essentially at least $\gamma_d$.


Initially, $\beta = 1$; it decreases at most once per iteration of the while loop. Suppose $\beta$ decreases during some iteration $j$ of the while loop, and let $\mathbf{x}$ have the value assigned immediately after iteration $j - 1$ and $\Delta$ have the value assigned after line 5 of iteration $j - 1$. Since a valid pivot was found for each $e$ during iteration $j - 1$, by property (P2) there exists $\mathbf{x}' \le \mathbf{x}$ with $f(\mathbf{x}' + \mathbf{1}_e) - f(\mathbf{x}') < \tau$. Hence $f(\mathbf{x} + \mathbf{1}_e) - f(\mathbf{x}) < \tau / \gamma_d$, by the definition of the DR ratio. In iteration $j$, $\tau$ retains its value from iteration $j - 1$, so the value of $\beta$ computed during iteration $j$ is at least $(1 - \epsilon) \gamma_d$, and $\beta$ does not decrease below this value during iteration $j$. ∎

1:  Input: $f \in \mathcal{F}_{\mathbf{b}}$, $k$, $\epsilon$, $\delta$.
2:  Output: $\mathbf{x}$ with $|\mathbf{x}| \le k$
3:  $\mathbf{x} \leftarrow \mathbf{0}$, $\beta \leftarrow 1$, $M \leftarrow \max_{e \in E} f(\mathbf{1}_e)$, $\tau \leftarrow M$
4:  while $\tau \ge \delta \beta M / k$ do
5:     $\Delta \leftarrow \max_{e : \mathbf{x} + \mathbf{1}_e \le \mathbf{b}} \left( f(\mathbf{x} + \mathbf{1}_e) - f(\mathbf{x}) \right)$
6:     if $\tau > \beta (1 - \epsilon) \Delta$ then
7:        $\tau \leftarrow \beta (1 - \epsilon) \Delta$
8:     else
9:        $\beta \leftarrow (1 - \epsilon) \beta$
10:     for $e \in E$ do
11:        $l \leftarrow$ BinarySearchPivot$(f, \mathbf{x}, e, \tau, k - |\mathbf{x}|)$
12:        $\mathbf{x} \leftarrow \mathbf{x} + l \mathbf{1}_e$
13:        if $|\mathbf{x}| = k$ then
14:           return $\mathbf{x}$
15:  return $\mathbf{x}$
Algorithm 3 FastGreedy

Performance ratio.

Next, we present the main result of this section. In contrast to ThresholdGreedy, the factor of $\gamma_d$ in the performance ratio has been replaced with $\beta$; at the termination of the algorithm, the value of $\beta$ is an output of FastGreedy if the solution $\mathbf{x}$ satisfies $|\mathbf{x}| = k$. In any case, by Lemma 1, the performance ratio is at worst the same as that of ThresholdGreedy.

Theorem 2.

Let an instance of Problem MCC be given, with $f \in \mathcal{F}_{\mathbf{b}}$. Let $\mathbf{x}$ be the solution returned by FastGreedy with parameters $\epsilon$, $\delta$, and let $\mathbf{o}$ be an optimal solution to this instance; also, suppose $f(\mathbf{o}) > 0$. Let $\beta$ be the FastGreedy DR ratio on this instance. Then $f(\mathbf{x})$ is within a factor of $f(\mathbf{o})$ that approaches $1 - e^{-\beta \gamma_s}$ as $\epsilon, \delta \to 0$.

The worst-case query complexity of FastGreedy is $O\!\left(\frac{n \log b \, \log(k/\delta)}{\epsilon}\right)$.

If $\eta > 0$ is given, a suitable assignment of $\epsilon$, $\delta$ yields performance ratio at least $1 - e^{-\beta \gamma_s} - \eta$.

Proof of query complexity.

The performance ratio is proved in Appendix E. Let $\tau_1, \tau_2, \ldots$ be the sequence of values of $\tau$ in the order considered by the algorithm. By Lemma 1, $\tau_{i+1} > \tau_i$ for a bounded number of indices $i$; label each such index an uptick, and let $u_1 < u_2 < \cdots$ be the indices of each uptick in order of their appearance. Also, let $v_i$ be the first index after $u_i$ such that $\tau_{v_i} \le \tau_{u_i}$, for each $i$.

Next, we will iteratively delete terms from the sequence of $\tau$ values. Initially, let $u$ be the last uptick in the sequence; delete the terms from $\tau_u$ up to (but not including) the first later term that is at most $\tau_u$. Repeat this process with the preceding uptick until no upticks remain.

Claim 3.

For each uptick $u$ selected in the iterative deletion above, at most $O\!\left(\frac{\log(k/\delta)}{\epsilon}\right)$ values are deleted from the sequence.

By Claim 3 and the bound on the number of upticks, the number of deleted thresholds is bounded; every consecutive pair of terms in the remaining sequence satisfies $\tau_{i+1} \le (1 - \epsilon) \tau_i$; hence, the remaining sequence contains at most $O\!\left(\frac{\log(k/\delta)}{\epsilon}\right)$ terms, by its initial and terminal values. The query complexity follows from the number of queries per value of $\tau$, which is $O(n \log b)$ by Lemma 2. ∎

5 Influence Maximization: A General Framework

In this section, we provide a non-submodular framework for viral marketing on a social network that unifies the classical influence maximization (Kempe et al., 2003) with the boosting problem (Lin et al., 2017).


The goal of influence maximization is to select seed users (i.e., initially activated users), where the total number of seeds is restricted by a budget, such that the expected adoption in the social network is maximized. The boosting problem is, given a fixed seed set $S$, to incentivize users (i.e., increase their susceptibility to the influence of their friends) within a budget such that the expected adoption with seed set $S$ increases the most.

Our framework combines the above two scenarios with a partial incentive: an incentive (say, a percentage off the purchase price) increases the probability that a user will purchase the product independently and increases the susceptibility of the user to the influence of his friends. Hence, our problem asks how to best allocate the budget between (partially) seeding users and boosting the influence of likely extant seeds. Both the classical influence maximization and the non-submodular boosting problem can be obtained as special cases, as shown in Appendix F.


Our model is related to the formulation of Demaine et al. (2014); however, they employ a submodular threshold-based model, while our model is inherently non-submodular due to the boosting mechanism (Lin et al., 2017). Also, GIM is related to the submodular budgeted allocation problem of Alon et al. (2012), in which the influence of an advertiser increases with the amount of budget allocated; the main difference with GIM is that we modify incoming edge weights with incentives instead of outgoing, which creates the boosting mechanism responsible for the non-submodularity.


Given a social network $G = (V, E)$ and a product, we define the following model of adoption. The allocation of budget to a user $u$ is thought of as a discount towards purchasing the product; this discount increases the probability that the user will adopt or purchase the product. Furthermore, this discount increases the susceptibility of the user to influence from its (incoming) social connections.

Formally, an incentive level $c_u$ is chosen for each user $u$. With independent probability $p_u(c_u)$, user $u$ initially activates or adopts the product; altogether, this creates a probabilistic initial set of activated users. Next, through the classical Independent Cascade (IC) model of adoption (defined in Appendix F), users influence their neighbors in the social network, wherein the weight for an edge $(v, u)$ is determined by the incentive level of user $u$ as well as the strength of the social connection from $v$ to $u$.

We write $\Pr(g, S \mid \mathbf{c})$ to denote the probability of full graph realization $g$ and seed set $S$ when $\mathbf{c}$ gives the incentive levels for each user. We write $R_g(S)$ to denote the size of the reachable set from $S$ in realization $g$. The expected activation in the network given a choice of incentive levels is $A(\mathbf{c}) = \sum_{g, S} \Pr(g, S \mid \mathbf{c}) \, R_g(S)$, where an explicit formula for $\Pr(g, S \mid \mathbf{c})$ is given in Appendix F.
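The expectation above is naturally estimated by Monte Carlo simulation, as in the experiments of Section 6. The sketch below is a minimal illustration, not the paper's implementation; the linear mappings `p_seed` and `boost`, the tiny graph, and all parameter values are assumptions:

```python
import random

def simulate_activation(nodes, edges, c, p_seed, boost, trials=1000, seed=0):
    """Monte Carlo estimate of expected adoption: each user u self-activates
    w.p. p_seed(c[u]); adoption then spreads over edges (v, u) whose base
    weight is boosted by u's incentive level (Independent Cascade)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        active = {u for u in nodes if rng.random() < p_seed(c[u])}
        frontier = list(active)
        while frontier:
            v = frontier.pop()
            for (src, dst, w) in edges:
                if src == v and dst not in active:
                    if rng.random() < boost(w, c[dst]):  # incoming weight boosted by dst
                        active.add(dst)
                        frontier.append(dst)
        total += len(active)
    return total / trials

nodes = [0, 1, 2]
edges = [(0, 1, 0.2), (1, 2, 0.2)]                   # (source, target, base weight)
p_seed = lambda lvl: lvl / 10                        # linear seeding probability (assumed)
boost = lambda w, lvl: min(1.0, w * (1 + lvl / 10))  # linear susceptibility boost (assumed)
print(simulate_activation(nodes, edges, [10, 0, 0], p_seed, boost))
```

With the full incentive on node 0, node 0 always seeds and the expected activation is $1 + 0.2 + 0.04 = 1.24$, which the estimate approaches as the number of trials grows.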

Definition 4 (Generalized Influence Maximization (GIM)).

Let a social network $G = (V, E)$ be given, together with the mappings $c \mapsto p_u(c)$ and $c \mapsto w_{vu}(c)$, for all $u \in V$ and $(v, u) \in E$, for each incentive level $c \in \{0, \ldots, L\}$, where $L$ is the number of incentive levels. Given budget $k$, determine incentive levels $\mathbf{c}$, with $|\mathbf{c}| \le k$, such that $A(\mathbf{c})$ is maximized.

Bound on non-submodularity.

Next, we provide a lower bound on the greedy submodularity ratios (see Appendix B). We emphasize that the assumption that the probability mappings, as functions of the incentive level, be submodular does not imply that the objective is DR-submodular. Theorem 3 is proved in Appendix F.

Theorem 3.

Let an instance of GIM be given, with budget $k$. Suppose for all $u \in V$, the mappings $c \mapsto p_u(c)$ and $c \mapsto w_{vu}(c)$ are submodular. Then, the greedy submodularity ratios defined in Appendix B and the FastGreedy DR ratio are lower bounded by a positive quantity depending on the edge probabilities and the maximum in-degree in $G$.

6 Experimental Evaluation

In this section, we evaluate our proposed algorithms for the GIM problem defined in Section 5. We compare our algorithms with StandardGreedy; by the naive (exponential-time) reduction of the lattice to sets, this algorithm is equivalent to performing this reduction and running the standard greedy algorithm for sets, the performance of which for non-submodular set functions was analyzed by Bian et al. (2017b).

In Section 6.1, we describe our methodology; in Section 6.2, we compare the algorithms and non-submodularity parameters. In Appendix G.1, we explore the behavior of FastGreedy as the parameters $\epsilon$ and $\delta$ are varied.

6.1 Methodology

Our implementation uses Monte Carlo sampling to estimate the objective value $A(\mathbf{c})$, with a fixed number of samples per query. As a result, each function query is relatively expensive.

We evaluate on two networks taken from the SNAP dataset (Leskovec and Krevl, 2014): ca-GrQc ("GrQc"; 15K nodes, 14.5K edges) and facebook ("Facebook"; 4K nodes, 176K edges). Unless otherwise specified, we use 10 repetitions per datapoint and display the mean; the width of shaded intervals is one standard deviation. StandardGreedy is omitted from some figures where its running time is prohibitive. Unless noted otherwise, we use default settings of the parameters $\epsilon$, $\delta$. We use a uniform box constraint and assign each user the same number of incentive levels $L$; the maximum incentive level for a user corresponds to giving the product to the user for free and hence deterministically seeds the user. We adopt linear models for the mappings $p_u$, $w_{vu}$. We often plot versus $k_{\max} = k / L$, which is defined as the maximum number of deterministic seeds.
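The budget-to-seeds conversion under this setup is simple; a tiny sketch (the linear model follows the description above; variable names are illustrative):

```python
L = 10                                      # incentive levels per user (assumed)
p_seed = [lvl / L for lvl in range(L + 1)]  # linear seeding probability
assert p_seed[L] == 1.0                     # top level deterministically seeds

def k_max(k, levels=L):
    """Budget expressed as the maximum number of deterministic seeds."""
    return k // levels

print(k_max(35))   # → 3: a budget of 35 units buys 3 full seeds
```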

6.2 Results

(a) GrQc (10 levels)
(b) Facebook (100 levels)
Figure 1: Activation for the solution returned by each algorithm.
(a) GrQc (10 levels)
(b) Facebook (100 levels)
Figure 2: Total function queries on the GrQc and Facebook networks.
(a) GrQc (10 levels)
(b) Facebook (100 levels)
Figure 3: Runtime on the GrQc and Facebook networks.
(a) GrQc
(b) BA network
Figure 4: (a): The value of the FastGreedy DR ratio $\beta$ on the GrQc dataset. (b): FastGreedy submodularity ratio and FastGreedy DR ratio on a small, random BA network.

In this section, we demonstrate the following: (1) our algorithms produce solutions of virtually identical quality to StandardGreedy; (2) our algorithms query the function far fewer times, which leads to dramatic runtime improvement over StandardGreedy; (3) FastGreedy further reduces the number of queries of ThresholdGreedy while sacrificing little in solution quality; and (4) the non-submodularity parameters computed on a small instance provide evidence that our theoretical performance ratios are useful.

Quality of Solution

In Fig. 1(a), we plot the activation of the solution returned by each algorithm on the GrQc network with 10 incentive levels; the difference in quality of solution returned by the three algorithms is negligible. In Fig. 1(b), we plot the same for the Facebook network with 100 incentive levels; on Facebook, we drop StandardGreedy due to its prohibitive runtime. FastGreedy is observed to lose a small (up to 3%) factor, which we consider acceptable in light of its large runtime improvement, which we discuss next.

Number of Queries

Next, we present in Fig. 2 the number of function queries each algorithm requires on the GrQc and Facebook networks. (Our implementation is in terms of the marginal gain; the number of function queries shown is the number of times the marginal gain function was called.) StandardGreedy required up to 20M queries on Facebook; hence it is not shown in Fig. 2(b). Both of our algorithms provide a large improvement over StandardGreedy; in particular, notice that the query count of StandardGreedy increases linearly with the budget, while both of the others exhibit logarithmic increase, in agreement with the theoretical query complexity of each. Furthermore, FastGreedy uses at least 14.5% fewer function queries than ThresholdGreedy and up to 43% fewer as the budget grows.

In terms of CPU runtime, we show in Fig. 3 that FastGreedy significantly outperforms ThresholdGreedy; hence, the runtime appears to be dominated by the number of function queries as expected.

Non-Submodularity Parameters

The value of the FastGreedy DR ratio $\beta$ on GrQc is shown in Fig. 4(a); notice that it is relatively stable as the budget increases, although there is a substantial drop from 10 incentive levels to 100; this may be explained as an increase in the non-submodularity resulting from inaccurate sampling of the objective, since it is more difficult to detect differences between the finer levels. Still, $\beta$ remained large on all instances tested, which suggests the worst-case performance ratio of FastGreedy is not far from that of StandardGreedy.

Finally, we examine the various non-submodularity parameters on a very small instance which admits their computation: a small random Barabási–Albert network. We compute the FastGreedy version of the submodularity ratio defined in Appendix B by direct enumeration and consider the FastGreedy DR ratio $\beta$. Results are shown in Fig. 4(b). The value of $\beta$ is close to 1 and remains roughly constant with increasing budget $k$, while the FastGreedy submodularity ratio decreases slowly with $k$. With the FastGreedy submodularity ratio and $\beta$, we can compute the worst-case performance ratio of FastGreedy across these instances.

7 Conclusions

In this work, we provide two approximation algorithms with polynomial query complexity for maximizing non-submodular functions subject to a cardinality constraint on the integer lattice. Since set functions are a special case, our work provides faster algorithms for the same problem with set functions than the standard greedy algorithm, although the performance ratio degrades from the standard greedy's $\frac{1}{\alpha}\left(1 - e^{-\alpha \gamma_s}\right)$ to roughly $1 - e^{-\beta \gamma_s}$, where $\beta$ is the FastGreedy DR ratio. We propose a natural application of non-submodular influence maximization, for which we lower bound the relevant non-submodularity parameters and validate our algorithms.


Appendix A Organization of the Appendix

Appendix B defines the greedy versions of the non-submodularity parameters.

Appendix C provides omitted proofs from Section 3.

Appendix D defines the BinarySearchPivot procedure and omitted proofs from Section 4.1.

Appendix E provides omitted proofs from Section 4.2.

Appendix F defines the Independent Cascade model, proves that classical IM and boosting are subproblems of our IM model, and provides the proof of Theorem 3 from Section 5.

Appendix G provides additional experimental results characterizing the parameters of FastGreedy.

Appendix H presents details of our GIM implementation.

Appendix B Greedy Versions of Non-Submodularity Parameters

We define various greedy versions of the non-submodularity parameters in this section. In this work, these are referred to as the FastGreedy submodularity ratio, etc., when the instance is clear from context.

ThresholdGreedy DR ratio.

Definition 5 (ThresholdGreedy DR ratio).

Let an instance of Problem MCC be given, with budget constraint . Let be the sequence of values takes during execution of ThresholdGreedy on . The ThresholdGreedy version of the DR ratio on , is the maximum value such that for any , for any , if is the value of the greedy vector immediately after was considered during the inner for loop of the threshold directly preceding the one in which was considered ( if was considered during the first threshold),

Greedy versions of submodularity ratio.

Definition 6.

Let , and let an instance of Problem MCC be given, with budget constraint . Let be the sequence of values takes during execution of on . The version of the submodularity ratio on , is the maximum value such that for any , for any , for any such that and and ,

The FastGreedy submodularity ratio differs from the above two only in that the sequence of vectors consists of the greedy vector's values at the beginning of each iteration of the outer while loop, rather than all values taken during execution of the algorithm.

Appendix C Proofs for Section 3

Proof of Proposition 1.

Suppose . Let . Then,

Therefore, , since is the maximum number satisfying the above inequality. ∎

Appendix D BinarySearchPivot and Proofs for Section 4.1 (ThresholdGreedy)


1:  Input: , , , , ,
2:  Output:
3:  ,
4:  if   then
5:     return
6:  if  then
7:     return
8:  while   do
9:     ⟨set the midpoint of the current interval⟩
10:     if  then
11:        ⟨update one endpoint to the midpoint, maintaining (1)⟩
12:     else
13:        ⟨update the other endpoint to the midpoint, maintaining (2)⟩
14:  return
Algorithm 4 BinarySearchPivot

The routine BinarySearchPivot (Alg. 4) efficiently finds a pivot for each . BinarySearchPivot uses a modified binary-search procedure that maintains endpoints satisfying two invariants, (1) and (2).


Initially, the endpoints either satisfy (1) and (2), or else we have already found a valid pivot (lines 5 and 7). The midpoint of the interval is then tested to determine which endpoint should be updated to maintain (1) and (2); this process continues until the interval can no longer be halved.
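To make the search concrete, the following is a minimal sketch of a BinarySearchPivot-style routine. The exact pivot conditions (P1)/(P2) and the argument list are abstracted into a single predicate `ok(k)` — here assumed, for illustration, to mean "the average per-unit marginal gain of adding k copies meets the threshold tau"; the paper's precise conditions may differ.

```python
def binary_search_pivot(gain, lo, hi, tau):
    """Sketch of a BinarySearchPivot-style binary search.
    gain(k) stands for the marginal gain f(x + k*e_s) - f(x);
    ok(k) is the assumed pivot test: gain(k) >= tau * k.
    Maintains the invariants ok(l) and not ok(h), and returns l
    once the interval can no longer be halved."""
    ok = lambda k: gain(k) >= tau * k
    l, h = lo, hi
    if ok(h):        # entire range qualifies (cf. line 5 of Alg. 4)
        return h
    if not ok(l):    # no amount qualifies (cf. line 7 of Alg. 4)
        return 0
    while h - l > 1:
        m = (l + h) // 2
        if ok(m):
            l = m    # preserve invariant: ok(l)
        else:
            h = m    # preserve invariant: not ok(h)
    return l

# Example: gain(k) = sqrt(k), tau = 0.4; the largest k with
# sqrt(k) >= 0.4 * k on [1, 8] is k = 6.
print(binary_search_pivot(lambda k: k ** 0.5, 1, 8, 0.4))  # → 6
```

Each loop iteration makes a constant number of evaluations of `gain` and halves the interval, so the routine uses O(log(hi − lo)) queries, in line with the query bound of Lemma 2.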

Lemma 2.

BinarySearchPivot finds a valid pivot in queries of , where , .

Proof of Lemma 2.

Since the routine BinarySearchPivot maintains inequalities (1) and (2), it is enough to show that, given (1) and (2), there exists a such that is a pivot. Consider , for ; there must be a smallest such that satisfies property (P1) of being a pivot, since satisfies property (P1). If property (P2) is unsatisfied, then

contradicting the choice of since . The query complexity follows from a constant number of queries per iteration of the while loop and the fact that each iteration reduces the distance from to by a factor of 2; initially, this distance was . ∎

Omitted proofs from Section 4.1.

Proof that Property 1 holds.

Let be the value of immediately after is considered during the iteration corresponding to ; then property (P2) of pivot was satisfied: . ∎

Proof of Claim 1.

Suppose . Suppose , and let be the solution returned by a modified ThresholdGreedy that continues updating the threshold until . Order , and let , , with , so that . Also, let be the vector guaranteed for by Lemma 1 with the last threshold value of ThresholdGreedy. Then

Hence, for any , if


From proof of Theorem 1:

“If , the ratio holds trivially”.

If , the ratio holds trivially from the inequality , for real , since

“from which the hypothesis of Claim 1 follows”.

Since and , we have .

Proof of Corollary 1.

As in proof of Theorem 1, suppose . Claim 1 still holds as before. Now, let be the value of at the beginning of the th iteration of the outer for loop with threshold value . Since the inner for loop is conducted in parallel, all marginal gains in iteration are considered with respect to . Order the vectors added in this iteration ; because each is a pivot, we know and .

Let , so and . Now for each and for each , there exists a vector such that (namely , from when was considered during the previous iteration , or if is the first iteration). Furthermore and . Hence

for any . The preceding argument proves an analogue of Claim 2, and the argument from here is exactly analogous to the proof of Theorem 1. ∎

Appendix E Proofs for Section 4.2

Proof of Theorem 2.

Since we have included as a hypothesis, we have the following claim, analogous to Claim 1.

Claim 4.

If is produced by the modified version of FastGreedy that continues until , and , then the Theorem is proved.


Suppose , and let be the solution returned by FastGreedy*, a modified version of FastGreedy that continues updating the threshold until . Order , and let , , with , so that . Then