1 Introduction
The optimal allocation of resources for maximizing influence, the spread of information, or coverage has gained attention in the past few years, in particular in machine learning and data mining [Domingos & Richardson, 2001; Kempe et al., 2003; Chen et al., 2009; Gomez Rodriguez & Schölkopf, 2012; Borgs et al., 2014].
In the Budget Allocation Problem, one is given a bipartite influence graph between channels $S$ and people $T$, and the task is to assign a budget $y(s)$ to each channel $s \in S$ with the goal of maximizing the expected number of influenced people in $T$. Each edge $(s, t)$ between channel $s$ and person $t$ is weighted with a probability $p_{st}$ that, e.g., an advertisement on radio station $s$ will influence person $t$ to buy some product. The budget $y(s)$ controls how many independent attempts are made via channel $s$ to influence the people in $T$. The probability that a customer $t \in T$ is influenced when the advertising budget is $y$ is
$$I_t(y) = 1 - \prod_{s \in S} (1 - p_{st})^{y(s)}, \tag{1}$$
and hence the expected number of influenced people is $I(y) = \sum_{t \in T} I_t(y)$. We write $I(y; p)$ to make the dependence on the probabilities $p$ explicit. The total budget $y$ must remain within some feasible set $\mathcal{Y}$ which may encode, e.g., a total budget limit $\sum_{s \in S} y(s) \le C$. We allow the budgets $y$ to be continuous, as in [Bian et al., 2017].
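To make the model concrete, here is a small numerical sketch of the influence function (illustrative code, not the authors' implementation; the example probabilities and budgets are made up):

```python
import numpy as np

def expected_influence(p, y):
    """I(y; p) = sum_t [ 1 - prod_s (1 - p_st)^{y(s)} ]:
    the expected number of influenced people under budget y."""
    # Probability that person t is influenced by no channel at all:
    miss = np.prod((1.0 - p) ** y[:, None], axis=0)
    return np.sum(1.0 - miss)

# Two channels, two people (illustrative probabilities p_st):
p = np.array([[0.3, 0.1],
              [0.2, 0.4]])
y = np.array([1.0, 2.0])   # continuous budget per channel
val = expected_influence(p, y)
```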
Since its introduction by Alon et al. [2012], several works have extended the formulation of Budget Allocation and provided algorithms [Bian et al., 2017; Hatano et al., 2015; Maehara et al., 2015; Soma et al., 2014; Soma & Yoshida, 2015]. Budget Allocation may also be viewed as influence maximization on a bipartite graph, where information spreads as in the Independent Cascade model. For an integer budget $y$, Budget Allocation and Influence Maximization are NP-hard. Yet, constant-factor approximations are possible, and build on the fact that the influence function is submodular in the binary case, and DR-submodular in the integer case [Soma et al., 2014; Hatano et al., 2015]. If $y$ is continuous, the problem is a concave maximization problem.
The formulation of Budget Allocation assumes that the transmission probabilities $p_{st}$ are known exactly. But this is rarely true in practice. Typically, the probabilities, and possibly the graph itself, must be inferred from observations [Gomez Rodriguez et al., 2010; Du et al., 2013; Narasimhan et al., 2015; Du et al., 2014; Netrapalli & Sanghavi, 2012]. In Section 4 we will see that a misspecification or point estimate of the parameters can lead to much reduced outcomes. A more realistic assumption is to know confidence intervals for the $p_{st}$. Realizing this severe deficiency, recent work studied robust versions of Influence Maximization, where a budget must be chosen that maximizes the worst-case approximation ratio over a set of possible influence functions [He & Kempe, 2016; Chen et al., 2016; Lowalekar et al., 2016]. The resulting optimization problem is hard but admits bicriteria approximations.

In this work, we revisit Budget Allocation under uncertainty from the perspective of robust optimization [Bertsimas et al., 2011; Ben-Tal et al., 2009]. We maximize the worst-case influence – not the approximation ratio – for $p$ in a confidence set centered around the “best guess” (e.g., the posterior mean). This avoids pitfalls of the approximation-ratio formulation (which can be misled into returning poor worst-case budgets, as demonstrated in Appendix A), while also allowing us to formulate the problem as a max-min game:
$$\max_{y \in \mathcal{Y}} \; \min_{p \in \mathcal{U}} \; I(y; p), \tag{2}$$
where an “adversary” can arbitrarily manipulate $p$ within the confidence set $\mathcal{U}$. With $p$ fixed, $I(y; p)$ is concave in $y$. However, the influence function is not convex, and not even quasiconvex, in the adversary’s variables $p$.
The new, key insight we exploit in this work is that $I(y; p)$ has the property of continuous submodularity in $p$ – in contrast to the previously exploited submodular maximization in $y$ – and can hence be minimized by generalizing techniques from discrete submodular optimization [Bach, 2015]. The techniques in [Bach, 2015], however, are restricted to box constraints, and do not directly apply to our confidence sets. In fact, general constrained submodular minimization is hard [Svitkina & Fleischer, 2011; Goel et al., 2009; Iwata & Nagano, 2009]. We make the following contributions:

We present an algorithm with optimality bounds for Robust Budget Allocation in the nonconvex adversarial scenario (2).

We provide the first results for continuous submodular minimization with box constraints and one more “nice” constraint, and conditions under which the algorithm is guaranteed to return a global optimum.
1.1 Background and Related Work
We begin with some background material and, along the way, discuss related work.
1.1.1 Submodularity over the integer lattice and continuous domains
Submodularity is perhaps best known as a property of set functions. A function $F$ defined on subsets of a ground set $V$ is submodular if for all sets $A, B \subseteq V$, it holds that $F(A) + F(B) \ge F(A \cup B) + F(A \cap B)$. A similar definition extends to functions defined over a distributive lattice $\mathcal{L}$, e.g., the integer lattice. Such a function $f$ is submodular if for all $x, y \in \mathcal{L}$, it holds that
$$f(x) + f(y) \ge f(x \vee y) + f(x \wedge y). \tag{3}$$
For the integer lattice and vectors $x, y$, $x \vee y$ denotes the coordinate-wise maximum and $x \wedge y$ the coordinate-wise minimum. Submodularity has also been considered on continuous domains $\mathcal{X} \subseteq \mathbb{R}^n$, where, if $f$ is also twice-differentiable, submodularity means that all off-diagonal entries of the Hessian are nonpositive, i.e., $\frac{\partial^2 f}{\partial x_i \partial x_j} \le 0$ for all $i \ne j$ [Topkis, 1978, Theorem 3.2]. These functions may be convex, concave, or neither.

Submodular functions on lattices can be minimized by a reduction to set functions, more precisely, ring families [Birkhoff, 1937]. Combinatorial algorithms for submodular optimization on lattices are discussed in [Khachaturov et al., 2012]. More recently, Bach [2015] extended results based on the convex Lovász extension, by building on connections to optimal transport. The subclass of $L^\natural$-convex functions admits strongly polynomial time minimization [Murota, 2003; Kolmogorov & Shioura, 2009; Murota & Shioura, 2014], but does not apply in our setting.
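The Hessian characterization suggests a simple numerical check of continuous submodularity via finite differences (a sketch under the assumption that the function is twice-differentiable; this helper and the example function are not from the paper):

```python
import numpy as np

def mixed_partial(f, x, i, j, h=1e-4):
    """Finite-difference estimate of d^2 f / dx_i dx_j at x (i != j).
    Nonpositive mixed partials everywhere indicate continuous submodularity."""
    ei = np.zeros_like(x); ei[i] = h
    ej = np.zeros_like(x); ej[j] = h
    return (f(x + ei + ej) - f(x + ei) - f(x + ej) + f(x)) / h**2

# f(x) = -x1*x2 has mixed partial -1 <= 0, so it is continuous submodular.
f = lambda x: -x[0] * x[1]
m = mixed_partial(f, np.array([1.0, 2.0]), 0, 1)
```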
Similarly, results for submodular maximization extend to integer lattices, e.g., [Gottschalk & Peis, 2015]. Stronger results are possible if the submodular function also satisfies diminishing returns: for all $x \le y$ (coordinate-wise) and unit vectors $e_i$ such that $y + e_i$ lies in the domain, it holds that $f(x + e_i) - f(x) \ge f(y + e_i) - f(y)$. For such DR-submodular functions, many approximation results for the set function case extend [Bian et al., 2017; Soma & Yoshida, 2015; Soma et al., 2014]. In particular, Ene & Nguyen [2016] show a generic reduction to set function optimization that they apply to maximization. In fact, it also applies to minimization:
Proposition 1.1.
A DR-submodular function $f$ defined on $\{0, 1, \dots, k-1\}^n$ can be minimized in strongly polynomial time, polynomial in $n$, $\log k$, and EO, where $n$ is the dimension and EO is the time complexity of evaluating $f$.
In particular, the time complexity is logarithmic in $k$. For general lattice submodular functions, this is not possible without further assumptions.
1.1.2 Related Problems
A sister problem of Budget Allocation is Influence Maximization on general graphs, where a set of seed nodes is selected to start a propagation process. The influence function is still monotone submodular and amenable to the greedy algorithm [Kempe et al., 2003], but it cannot be evaluated explicitly and requires approximation [Chen et al., 2010]. Stochastic Coverage [Goemans & Vondrák, 2006] is a version of Set Cover where the covering sets are random. A variant of Budget Allocation can be written as stochastic coverage with multiplicity. Stochastic Coverage has mainly been studied in the online or adaptive setting, where logarithmic approximation factors can be achieved [Golovin & Krause, 2011; Deshpande et al., 2016; Adamczyk et al., 2016].
Our objective function (2) is a signomial in $x$, i.e., a linear combination of monomials of the form $\prod_i x_i^{a_i}$. General signomial optimization is NP-hard [Chiang, 2005], but certain subclasses are tractable: posynomials with all nonnegative coefficients can be minimized via Geometric Programming [Boyd et al., 2007], and signomials with a single negative coefficient admit sum-of-squares-like relaxations [Chandrasekaran & Shah, 2016]. Our problem, a constrained posynomial maximization, is not in general a geometric program. Some work addresses this setting via monomial approximation [Pascual & Ben-Israel, 1970; Ecker, 1980], but, to our knowledge, our algorithm is the first that solves this problem to arbitrary accuracy.
1.1.3 Robust Optimization
Two prominent strategies for addressing uncertainty in the parameters of optimization problems are stochastic and robust optimization. If the distribution of the parameters is known (stochastic optimization), formulations such as value-at-risk (VaR) and conditional value-at-risk (CVaR) apply [Rockafellar & Uryasev, 2000, 2002]. In contrast, robust optimization [Ben-Tal et al., 2009; Bertsimas et al., 2011] assumes that the parameters $u$ (of the cost function and constraints) can vary arbitrarily within a known confidence set $\mathcal{U}$, and the aim is to optimize for the worst-case setting, i.e.,
$$\max_{y \in \mathcal{Y}} \; \min_{u \in \mathcal{U}} \; f(y; u). \tag{4}$$
Here, we will only have uncertainty in the cost function.
In this paper we are principally concerned with robust maximization of the continuous influence function $I$, but we mention some results for the discrete case. While there exist results for robust and CVaR optimization of modular (linear) functions [Nikolova, 2010; Bertsimas & Sim, 2003], submodular objectives do not in general admit such optimization [Maehara, 2015], but variants admit approximations [Zhang et al., 2014]. The brittleness of submodular optimization under noise has been studied in [Balkanski et al., 2016, 2017; Hassidim & Singer, 2016].
2 Robust and Stochastic Budget Allocation
The unknown parameters in Budget Allocation are the transmission probabilities, or edge weights in a graph. If these are estimated from data, we may have posterior distributions or, under a weaker assumption, confidence sets for the parameters. For ease of notation, we will work with the failure probabilities $x_{st} = 1 - p_{st}$ instead of the $p_{st}$ directly, and write $I(y; x)$ instead of $I(y; p)$.
2.1 Stochastic Optimization
If a (posterior) distribution of the parameters is known, a simple strategy is to use expectations. We place a uniform prior on each $x_{st}$, and make independent observations of whether channel $s$ influences person $t$. If we observe $a$ failures and $b$ successes, the resulting posterior distribution on the variable $x_{st}$ is $\mathrm{Beta}(a+1, b+1)$. Given such a posterior $P$, we may optimize
$$\max_{y \in \mathcal{Y}} \; \mathbb{E}_{x \sim P}\big[ I(y; x) \big] \tag{5}$$
$$= \max_{y \in \mathcal{Y}} \; \sum_{t \in T} \Big( 1 - \prod_{s \in S} \mathbb{E}\big[ x_{st}^{y(s)} \big] \Big), \tag{6}$$
where the second line uses the independence of the $x_{st}$.
Proposition 2.1.
Problem (6) is a concave maximization problem.

Concavity of (6) follows since it is an expectation over functions that are concave in $y$, and the problem can be solved by stochastic gradient ascent or by explicitly computing gradients.
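A minimal sketch of this stochastic approach (illustrative code with made-up posterior parameters; the rescaling step is a crude stand-in for an exact projection onto the budget set):

```python
import numpy as np

rng = np.random.default_rng(0)
S, T, C = 3, 4, 2.0                      # channels, people, total budget
a = rng.uniform(1, 5, (S, T))            # Beta posterior parameters for the
b = rng.uniform(1, 5, (S, T))            # failure probabilities x_st (made up)

def grad_y(x, y):
    """Gradient in y of I(y; x) = sum_t [1 - prod_s x_st^{y(s)}]."""
    prod_t = np.prod(x ** y[:, None], axis=0)        # prod_s x_st^{y(s)}
    return -np.sum(prod_t[None, :] * np.log(x), axis=1)

y = np.full(S, C / S)
for t in range(200):
    x = rng.beta(a, b)                   # sample failure probabilities
    y = y + 0.05 / np.sqrt(t + 1) * grad_y(x, y)     # stochastic ascent step
    y = np.clip(y, 0.0, None)
    if y.sum() > C:                      # crude rescaling back into the set
        y *= C / y.sum()
```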
Merely maximizing the expectation does not explicitly account for volatility and hence risk. One option is to penalize the variance [Ben-Tal & Nemirovski, 2000; Bertsimas et al., 2011; Atamtürk & Narayanan, 2008]:
$$\max_{y \in \mathcal{Y}} \; \mathbb{E}[I(y; x)] - \lambda \operatorname{Var}[I(y; x)], \tag{7}$$
but in our case this mean-risk formulation seems difficult:
Fact 2.1.
For $x$ in the nonnegative orthant, the variance term $\operatorname{Var}[I(y; x)]$ need not be convex or concave in $y$, and need not be submodular or supermodular.
This observation does not rule out a solution, but the apparent difficulties further motivate a robust formulation that, as we will see, is amenable to optimization.
2.2 Robust Optimization
The focus of this work is the robust version of Budget Allocation, where we allow an adversary to arbitrarily set the parameters $x$ within an uncertainty set $\mathcal{U}$. This uncertainty set may result, for instance, from a known distribution, or simply from assumed bounds. Formally, we solve
$$\max_{y \in \mathcal{Y}} \; \min_{x \in \mathcal{U}} \; I(y; x), \tag{8}$$
where $\mathcal{Y}$ is a convex set with an efficient projection oracle, and $\mathcal{U}$ is an uncertainty set containing an estimate $\hat{x}$. In the sequel, we use uncertainty sets of the form $\mathcal{U} = \{x \in B : d(x, \hat{x}) \le \epsilon\}$, where $d$ is a distance (or divergence) from the estimate $\hat{x}$, and $B$ is the box $\prod_{s,t} [\ell_{st}, u_{st}]$. The intervals $[\ell_{st}, u_{st}]$ can be thought of as confidence intervals around $\hat{x}_{st}$, or, if $[\ell_{st}, u_{st}] = [0, 1]$, as enforcing that each $x_{st}$ is a valid probability.
Common examples of uncertainty sets used in Robust Optimization are Ellipsoidal and D-norm uncertainty sets [Bertsimas et al., 2011]. Our algorithm in Section 3.1 applies to both.
Ellipsoidal uncertainty. The ellipsoidal or quadratic uncertainty set is defined by
$$\mathcal{U}_{\text{ellip}} = \big\{x \in B : (x - \hat{x})^\top \Sigma^{-1} (x - \hat{x}) \le \epsilon\big\},$$
where $\Sigma$ is the covariance of the random vector $x$ of probabilities distributed according to our Beta posteriors. In our case, since the distributions on each $x_{st}$ are independent, $\Sigma$ is actually diagonal. Writing $\sigma_{st}^2 = \operatorname{Var}[x_{st}]$, we have
$$\mathcal{U}_{\text{ellip}} = \Big\{x \in B : \sum_{s,t} \frac{(x_{st} - \hat{x}_{st})^2}{\sigma_{st}^2} \le \epsilon\Big\}.$$
D-norm uncertainty. The D-norm uncertainty set is similar to an $\ell_1$-ball around $\hat{x}$, and is defined as
$$\mathcal{U}_D = \Big\{x : x_{st} = \hat{x}_{st} + \delta_{st},\; 0 \le \delta_{st} \le u_{st} - \hat{x}_{st},\; \sum_{s,t} \tfrac{\delta_{st}}{u_{st} - \hat{x}_{st}} \le \gamma\Big\}.$$
Essentially, we allow an adversary to increase each $x_{st}$ up to some upper bound $u_{st}$, subject to a total budget $\gamma$ across all terms. The set can be rewritten as
$$\mathcal{U}_D = \Big\{x : x_{st} = \hat{x}_{st} + c_{st}(u_{st} - \hat{x}_{st}),\; c \in [0, 1]^{S \times T},\; \|c\|_1 \le \gamma\Big\},$$
where $c_{st}$ is the fraction of the interval $[\hat{x}_{st}, u_{st}]$ we have used up in increasing $x_{st}$.
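As a sanity check, membership in the D-norm set can be tested directly from the $c$-representation above (an illustrative helper with made-up numbers, not part of the paper's code):

```python
import numpy as np

def in_dnorm_set(x, x_hat, upper, gamma, tol=1e-9):
    """Test x = x_hat + c*(upper - x_hat) with c in [0,1]^n, sum(c) <= gamma."""
    c = (x - x_hat) / (upper - x_hat)    # fraction of each interval used
    return bool(np.all(c >= -tol) and np.all(c <= 1 + tol)
                and c.sum() <= gamma + tol)

x_hat = np.array([0.2, 0.5])             # estimate
upper = np.array([0.6, 0.9])             # per-coordinate upper bounds
inside = in_dnorm_set(np.array([0.4, 0.5]), x_hat, upper, gamma=1.0)
outside = in_dnorm_set(np.array([0.7, 0.5]), x_hat, upper, gamma=1.0)
```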
The max-min formulation (8) has several merits: the model is not tied to a specific learning algorithm for the probabilities, as long as we can choose a suitable confidence set. Moreover, this formulation allows us to fully hedge against a worst-case scenario.
3 Optimization Algorithm
As noted above, $I(y; x)$ is concave as a function of $y$ for fixed $x$. As a pointwise minimum of concave functions, $g(y) = \min_{x \in \mathcal{U}} I(y; x)$ is concave. Hence, if we can compute subgradients of $g$, we can solve our max-min problem via the subgradient method, as outlined in Algorithm 1.
A subgradient of $g$ at $y$ is given by the gradient of $I(\cdot\,; x^*)$ for the minimizing $x^* \in \arg\min_{x \in \mathcal{U}} I(y; x)$, i.e., $\nabla_y I(y; x^*)$. Hence, we must be able to compute $x^*$ for any $y$. We also obtain a duality gap: for any pair $(\bar{y}, \bar{x})$ we have
$$\min_{x \in \mathcal{U}} I(\bar{y}; x) \;\le\; \max_{y \in \mathcal{Y}} \min_{x \in \mathcal{U}} I(y; x) \;\le\; \max_{y \in \mathcal{Y}} I(y; \bar{x}). \tag{9}$$
This means we can estimate the optimal value and use it in Polyak’s step-size rule for the subgradient method [Polyak, 1987].
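The outer loop can be sketched as projected subgradient ascent on $g(y) = \min_x I(y; x)$ (a simplified stand-in for Algorithm 1, using a normalized diminishing step instead of Polyak's rule; the three callbacks are assumptions of this sketch):

```python
import numpy as np

def robust_maximize(inner_min, grad_y, project, y0, iters=100):
    """Subgradient ascent on the concave function g(y) = min_x I(y; x).

    inner_min(y) -> x* minimizing I(y, .) over the uncertainty set
    grad_y(x, y) -> gradient of I(., x) at y; at x = x* this is a
                    supergradient of g at y
    project(y)   -> projection onto the feasible budget set Y
    """
    y = np.asarray(y0, dtype=float)
    for t in range(iters):
        x_star = inner_min(y)
        d = np.asarray(grad_y(x_star, y), dtype=float)
        step = 1.0 / np.sqrt(t + 1.0)            # diminishing step size
        y = project(y + step * d / (np.linalg.norm(d) + 1e-12))
    return y

# Toy check with g(y) = -(y - 1)^2; the inner minimizer is irrelevant here.
y_opt = robust_maximize(lambda y: None,
                        lambda x, y: -2.0 * (y - 1.0),
                        lambda y: np.clip(y, 0.0, 2.0),
                        np.array([0.0]), iters=300)
```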
But $I(y; x)$ is not convex in $x$, and not even quasiconvex. For example, standard methods [Wainwright & Chiang, 2004, Chapter 12] imply that $x \mapsto I(y; x)$ is not quasiconvex on the nonnegative orthant. Moreover, the above-mentioned signomial optimization techniques do not apply for an exact solution either. So, it is not immediately clear that we can solve the inner optimization problem.
The key insight we will be using is that $I$ has a different beneficial property: while not convex, $I(y; x)$ as a function of $x$ is continuous submodular.
Lemma 3.1.
Suppose we have differentiable, nonnegative functions $f_i : \mathbb{R} \to \mathbb{R}_{\ge 0}$ for $i = 1, \dots, n$, either all nonincreasing or all nondecreasing. Then $F(x) = \prod_{i=1}^n f_i(x_i)$ is a continuous supermodular function from $\mathbb{R}^n$ to $\mathbb{R}_{\ge 0}$.
Proof.
For $n = 1$, the resulting function is modular and therefore supermodular. In the case $n \ge 2$, we simply need to compute derivatives. The mixed derivatives for $i \ne j$ are
$$\frac{\partial^2 F}{\partial x_i \partial x_j} = f_i'(x_i)\, f_j'(x_j) \prod_{k \ne i, j} f_k(x_k). \tag{10}$$
By monotonicity, $f_i'$ and $f_j'$ have the same sign, so their product is nonnegative, and since each $f_k$ is nonnegative, the entire expression is nonnegative. Hence, $F$ is continuous supermodular by Theorem 3.2 of [Topkis, 1978]. ∎
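A quick numerical illustration of Eq. (10) (not part of the paper's proof; the two factor functions are made up): for two nonnegative, nondecreasing factors, the mixed partial of their product matches $f_1'(x_1) f_2'(x_2) \ge 0$.

```python
import numpy as np

f1 = lambda t: t ** 2                 # nonnegative, nondecreasing for t >= 0
f2 = lambda t: 1.0 - np.exp(-t)       # nonnegative, nondecreasing
F = lambda x: f1(x[0]) * f2(x[1])     # their product: supermodular by Lemma 3.1

x = np.array([0.7, 1.3])
h = 1e-4
e1, e2 = np.array([h, 0.0]), np.array([0.0, h])
mixed = (F(x + e1 + e2) - F(x + e1) - F(x + e2) + F(x)) / h**2
analytic = 2.0 * x[0] * np.exp(-x[1])  # f1'(x1) * f2'(x2), as in Eq. (10)
```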
Corollary 3.1.
The influence function $I(y; x)$ defined in Section 2 is continuous submodular in $x$ over the nonnegative orthant, for each fixed $y \ge 0$.
Proof.
Since submodularity is preserved under summation, it suffices to show that each function $I_t(y; x) = 1 - \prod_{s \in S} x_{st}^{y(s)}$ is continuous submodular in $x$. By Lemma 3.1, since each $x_{st} \mapsto x_{st}^{y(s)}$ is nonnegative and monotone nondecreasing for $x_{st} \ge 0$, the product $\prod_{s \in S} x_{st}^{y(s)}$ is continuous supermodular in $x$. Flipping the sign and adding a constant term yields $I_t(y; x)$, which is hence continuous submodular. ∎
Conjecture 3.1.
Strong duality holds, i.e.,
$$\max_{y \in \mathcal{Y}} \min_{x \in \mathcal{U}} I(y; x) \;=\; \min_{x \in \mathcal{U}} \max_{y \in \mathcal{Y}} I(y; x). \tag{11}$$
If strong duality holds, then the duality gap in Equation (9) is zero at optimality. If $I(y; x)$ were quasiconvex in $x$, strong duality would hold by Sion’s minimax theorem, but this is not the case. In practice, we observe that the duality gap always converges to zero.
Bach [2015] demonstrates how to minimize a continuous submodular function subject to box constraints $x \in [\ell, u]$, up to an arbitrary suboptimality gap. The constraint set in our Robust Budget Allocation problem, however, consists of box constraints with an additional constraint $g(x) \le \epsilon$. This case is not addressed in any previous work. Fortunately, for a large class of functions $g$, there is still an efficient algorithm for continuous submodular minimization, which we present in the next section.
3.1 Constrained Continuous Submodular Function Minimization
We next address an algorithm for minimizing a monotone continuous submodular function $f$ subject to box constraints and a constraint $g(x) \le \epsilon$:
$$\min_{x \in [\ell, u],\; g(x) \le \epsilon} f(x). \tag{12}$$
If $f$ and $g$ were convex, the constrained problem would be equivalent to solving, with the right Lagrange multiplier $\lambda \ge 0$:
$$\min_{x \in [\ell, u]} f(x) + \lambda g(x). \tag{13}$$
Although $f$ and $g$ are not necessarily convex here, it turns out that a similar approach indeed applies. The main idea of our approach bears similarity to [Nagano et al., 2011] for the set function case, but our setting with continuous functions and various uncertainty sets is more general, and requires more argumentation. We outline our theoretical results here, and defer further implementation details and proofs to the appendix.
Following Bach [2015], we discretize the problem; for a sufficiently fine discretization, we will achieve arbitrary accuracy. Let $\pi : \{0, \dots, k-1\}^n \to [\ell, u]$ be an interpolation mapping that maps the discrete set $\{0, \dots, k-1\}^n$ into the box $[\ell, u]$ via the component-wise interpolation functions $\pi_i$. We say $\pi_i$ is $\delta$-fine if $|\pi_i(z_i + 1) - \pi_i(z_i)| \le \delta$ for all $z_i$, and we say the full interpolation function $\pi$ is $\delta$-fine if each $\pi_i$ is $\delta$-fine. This mapping yields functions $F$ and $G$ via $F(z) = f(\pi(z))$ and $G(z) = g(\pi(z))$; $F$ is lattice submodular (on the integer lattice). This construction leads to a reduction of Problem (12) to a submodular minimization problem over the integer lattice:
$$\min_{z \in \{0, \dots, k-1\}^n,\; G(z) \le \epsilon} F(z). \tag{14}$$
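A minimal sketch of the discretization step (illustrative uniform mesh with a toy objective; the paper allows general interpolation mappings):

```python
import numpy as np

def make_interpolation(lower, upper, k):
    """Uniform componentwise interpolation pi: {0,...,k-1}^n -> [lower, upper].
    The mesh is (upper - lower)/(k-1)-fine in each coordinate."""
    grid = np.linspace(lower, upper, k)          # shape (k, n)
    def pi(z):                                   # z: integer vector in {0..k-1}^n
        return grid[z, np.arange(len(z))]
    return pi

pi = make_interpolation(np.zeros(2), np.ones(2), k=5)
f = lambda x: -(x[0] + x[1])                     # toy continuous objective
F = lambda z: f(pi(z))                           # induced lattice function
```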
Ideally, there should then exist a multiplier $\lambda$ such that the associated minimizer yields a close-to-optimal solution for the constrained problem. Theorem 3.1 below states that this is indeed the case.
Moreover, a second benefit of submodularity is that we can find the entire solution path for Problem (14) by solving a single optimization problem.
Lemma 3.2.
Suppose $f$ is continuous submodular, and suppose the regularizer $g$ is strictly increasing and separable: $g(x) = \sum_{i=1}^n g_i(x_i)$. Then we can recover a minimizer for the induced discrete Problem (14) for any $\lambda > 0$ by solving a single convex optimization problem.
The problem in question arises from a relaxation $f_\downarrow$ that extends $F$ in each coordinate to a function on distributions over the domain $\{0, \dots, k-1\}$. These distributions are represented via their inverse cumulative distribution functions $\theta_i$, which take a coordinate value as input, and output the probability of exceeding it. The function $f_\downarrow$ is an analogue of the Lovász extension of set functions for continuous submodular functions [Bach, 2015]; it is convex and coincides with $F$ on lattice points.

Formally, the resulting single optimization problem is:
$$\min_{\theta \in \Theta} \; f_\downarrow(\theta) + \sum_{i=1}^n \sum_{j=1}^{k-1} h_{ij}(\theta_i(j)), \tag{15}$$
where $\Theta$ refers to the set of ordered vectors that satisfy $1 \ge \theta_i(1) \ge \theta_i(2) \ge \dots \ge \theta_i(k-1) \ge 0$, the notation $\theta_i(j)$ denotes the $j$th coordinate of the vector $\theta_i$, and the $h_{ij}$ are strictly convex functions given by
(16) 
Problem (15) can be solved by Frank-Wolfe methods [Frank & Wolfe, 1956; Dunn & Harshbarger, 1978; Lacoste-Julien, 2016; Jaggi, 2013]. This is because the greedy algorithm for computing subgradients of the Lovász extension can be generalized, and yields a linear optimization oracle for the dual of Problem (15). We detail the relationship between Problems (14) and (15), as well as how to implement the Frank-Wolfe methods, in Appendix C.
Let $\theta^*$ be the optimal solution of Problem (15). For any $\lambda > 0$, we obtain a rounded solution $z^\lambda$ for Problem (14) by thresholding: we set $z_i^\lambda$ to the largest $j$ with $\theta_i^*(j) \ge \lambda$, or zero if $\theta_i^*(j) < \lambda$ for all $j$. Each $z^\lambda$ is the optimal solution of the Lagrangian relaxation of Problem (14) with multiplier $\lambda$. We use the largest parameterized solution that is still feasible, i.e., the solution $z^{\lambda^*}$ where $\lambda^*$ solves
$$\min \{\lambda > 0 : G(z^\lambda) \le \epsilon\}. \tag{17}$$
This can be found efficiently via binary search or a linear scan.
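The rounding step can be sketched as follows (illustrative code under the assumption that each row $\theta_i^*$ is sorted in nonincreasing order, as membership in $\Theta$ requires; $G$ and $\epsilon$ are passed in as a callback and a value):

```python
import numpy as np

def threshold(theta, lam):
    """z_i = number of entries of row theta*_i that are >= lam; since rows are
    nonincreasing, this is the largest index j with theta*_i(j) >= lam (else 0)."""
    return np.array([int(np.sum(row >= lam)) for row in theta])

def largest_feasible(theta, G, eps):
    """Scan candidate lambdas (the entries of theta*) from large to small and
    return the last thresholded solution that stays feasible, G(z) <= eps."""
    z_best = threshold(theta, np.inf)            # all-zeros solution
    for lam in np.sort(np.unique(theta))[::-1]:
        z = threshold(theta, lam)
        if G(z) <= eps:
            z_best = z                           # still feasible: keep growing
        else:
            break                                # G is increasing: stop here
    return z_best

theta = np.array([[0.9, 0.5, 0.1],               # made-up theta*, rows sorted
                  [0.8, 0.4, 0.2]])
z = largest_feasible(theta, lambda z: z.sum(), eps=2)
```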
Theorem 3.1.
Let $f$ be continuous submodular and monotone decreasing with Lipschitz constant $L$, and let $g$ be strictly increasing and separable. Assume all entries of the optimal solution $\theta^*$ of Problem (15) are distinct. Let $\hat{x} = \pi(z^{\lambda^*})$ be the thresholding corresponding to the optimal solution of Problem (17), mapped back into the original continuous domain $[\ell, u]$. Then $\hat{x}$ is feasible for the continuous Problem (12), and is an approximately optimal solution, with additive suboptimality that vanishes with the fineness $\delta$ of the discretization at a rate governed by $L$.
Theorem 3.1 implies an algorithm for solving Problem (12) to optimality: (1) fix a sufficiently fine discretization, (2) compute $\theta^*$ which solves Problem (15), (3) find the optimal thresholding $z^{\lambda^*}$ of $\theta^*$ by determining the smallest $\lambda$ for which $G(z^\lambda) \le \epsilon$, and (4) map $z^{\lambda^*}$ back into continuous space via the interpolation mapping $\pi$.
Optimality Bounds.
Theorem 3.1 is proved by comparing $f(\hat{x})$ to the optimal solution on the discretized mesh. Beyond the theoretical guarantee of Theorem 3.1, for any problem instance and candidate solution $\hat{x}$, we can compute a bound on the gap between $f(\hat{x})$ and the true optimum. The following two bounds are proved in the appendix:

We can generate a discrete point satisfying

The Lagrangian yields the bound
Improvements.
The requirement in Theorem 3.1 that the elements of $\theta^*$ be distinct may seem somewhat restrictive, but as long as $\theta^*$ has distinct elements in the neighborhood of our particular $\lambda^*$, the bound still holds. We see in Section 4.1.1 that in practice, $\theta^*$ almost always has distinct elements in the regime we care about, and the bounds of Remark 3.1 are very good.
3.2 Application to Robust Budget Allocation
The above algorithm directly applies to Robust Budget Allocation with the uncertainty sets in Section 2.2. The ellipsoidal uncertainty set corresponds to the constraint $g(x) \le \epsilon$ with $g(x) = \sum_{s,t} (x_{st} - \hat{x}_{st})^2 / \sigma_{st}^2$, together with box constraints. By the monotonicity of $I(y; x)$ in $x$, there is never an incentive for the adversary to reduce any $x_{st}$ below $\hat{x}_{st}$, so we can replace the box $[\ell_{st}, u_{st}]$ with $[\hat{x}_{st}, u_{st}]$. On this interval, each term of $g$ is strictly increasing, and Theorem 3.1 applies.

For D-norm sets, we have $g(x) = \sum_{s,t} (x_{st} - \hat{x}_{st}) / (u_{st} - \hat{x}_{st})$ on the box $[\hat{x}_{st}, u_{st}]$. Since each term is monotone increasing, Theorem 3.1 applies.
Runtime and Alternatives.
Since the core algorithm is Frank-Wolfe, it is straightforward to show that Problem (15) can be solved to a given suboptimality in time inversely proportional to that suboptimality and to the minimum derivative of the functions $h_{ij}$. If $\theta^*$ has distinct elements separated by at least $\Delta$, then choosing the target suboptimality small enough relative to $\Delta$ results in an exact solution to (14).
Noting that $F + \lambda G$ is submodular for all $\lambda \ge 0$, one could instead perform binary search over $\lambda$, each time converting the objective into a submodular set function via Birkhoff’s theorem and solving submodular minimization, e.g., via one of the recent fast methods [Chakrabarty et al., 2017; Lee et al., 2015]. However, we are not aware of a practical implementation of the algorithm in [Lee et al., 2015]. The algorithm in [Chakrabarty et al., 2017] yields a solution in expectation. This approach also requires care in the precision of the search over $\lambda$, whereas our approach searches directly over the elements of $\theta^*$.
4 Experiments
We evaluate our Robust Budget Allocation algorithm on both synthetic test data and a real-world bidding dataset from Yahoo! Webscope [yah, ] to demonstrate that our method yields real improvements. For all experiments, we used Algorithm 1 as the outer loop. For the inner submodular minimization step, we implemented the pairwise Frank-Wolfe algorithm of Lacoste-Julien & Jaggi [2015]. In all cases, the feasible set of budgets is $\mathcal{Y} = \{y \ge 0 : \sum_{s \in S} y(s) \le C\}$, where the specific budget $C$ depends on the experiment. Our code is available at git.io/vHXkO.
4.1 Synthetic
On the synthetic data, we probe two questions: (1) how often does the distinctness condition of Theorem 3.1 hold, so that we are guaranteed an optimal solution; and (2) what is the gain of using a robust versus a nonrobust solution in an adversarial setting? For both settings, we fix the problem dimensions and the discretization level, generate true probabilities $p_{st}$, create Beta posteriors, and build both Ellipsoidal uncertainty sets and D-norm sets.
4.1.1 Optimality
Theorem 3.1 and Remark 3.1 demand that the values $\theta_i^*(j)$ be distinct at our chosen Lagrange multiplier $\lambda^*$ and, under this condition, guarantee optimality. We illustrate this in four examples: an Ellipsoidal or a D-norm uncertainty set, each with two settings of the total influence budget. Figure 3 shows all elements of $\theta^*$ in sorted order, as well as a horizontal line indicating our Lagrange multiplier $\lambda^*$, which serves as a threshold. Despite some plateaus, the entries are distinct in most regimes, in particular around $\lambda^*$, the regime that is needed for our results. Moreover, in practice (on the Yahoo data) we observe later in Figure 3 that both solution-dependent bounds from Remark 3.1 are very good, and all solutions are optimal within a very small gap.
4.1.2 Robustness and Quality
Next, we probe the effect of a robust versus a nonrobust solution for different uncertainty sets and budgets of the adversary. We compare our robust solution with the budget obtained from a point estimate, i.e., treating the estimate $\hat{x}$ as ground truth, and with the stochastic solution as per Section 2.1. These two optimization problems were solved via standard first-order methods using TFOCS [Becker et al., 2011].
Figure 2 demonstrates that, indeed, the alternative budgets are sensitive to the adversary, and the robustly chosen budget performs better, even in cases where the other budgets achieve zero influence. When the total budget is large, the nonrobust budget performs nearly as well as the robust one, but when resources are scarce (the total budget is small) and the actual choice seems to matter more, the robust budget performs far better.
4.2 Yahoo! data
To evaluate our method on real-world data, we formulate a Budget Allocation instance on advertiser bidding data from Yahoo! Webscope [yah, ]. This dataset logs bids on 1000 different phrases by advertising accounts. We map the phrases to channels $S$ and the accounts to customers $T$, with an edge between $s$ and $t$ if a corresponding bid was made. For each pair $(s, t)$, we draw the associated true transmission probability $p_{st}$ uniformly from an interval biased towards zero, because we expect people not to be easily influenced by advertising in the real world. We then generate an estimate and build up a posterior by generating samples from $\mathrm{Bernoulli}(p_{st})$, where the number of samples is the number of bids between $s$ and $t$ in the dataset.
This transformation yields a bipartite graph with more than 50,000 edges that we use for Budget Allocation. In our experiments, the typical gap between the naive and robust budgets was 100–500 expected influenced people. We plot convergence of the outer loop in Figure 3, where we observe fast convergence of both the primal influence value and the dual bound.
4.3 Comparison to firstorder methods
Given the success of first-order methods on nonconvex problems in practice, it is natural to compare them to our method for finding the worst-case parameters $x$. On one of our Yahoo problem instances with a D-norm uncertainty set, we compared our submodular minimization scheme to Frank-Wolfe with fixed step size as in [Lacoste-Julien, 2016], implementing the linear oracle using MOSEK [MOSEK ApS, 2015]. Interestingly, from various initializations, Frank-Wolfe finds an optimal solution, as verified by comparing to the guaranteed solution of our algorithm. Note that, due to nonconvexity, there are no formal guarantees for Frank-Wolfe to be optimal here, motivating the question of global convergence properties of Frank-Wolfe in the presence of submodularity.
It is important to note that there are many cases where first-order methods are inefficient or do not apply to our setup. These methods require either a projection oracle (PO) onto, or a linear optimization oracle (LO) over, the feasible set defined by the box constraints and $g(x) \le \epsilon$. The D-norm set admits an LO via linear programming, but we are not aware of any efficient LO for Ellipsoidal uncertainty, nor of a PO for either set, that does not require quadratic programming. Even more, our algorithm applies for nonconvex functions $g$ which induce nonconvex feasible sets. Such nonconvex sets may not even admit a unique projection, while our algorithm still achieves provable solutions.

5 Conclusion
We address the issue of uncertain parameters (or, model misspecification) in Budget Allocation, or Bipartite Influence Maximization [Alon et al., 2012], from a robust optimization perspective. The resulting Robust Budget Allocation is a nonconvex-concave saddle point problem. Although the inner optimization problem is nonconvex, we show how continuous submodularity can be leveraged to solve the problem to arbitrary accuracy, as can be verified with the proposed bounds on the duality gap. In particular, our approach extends continuous submodular minimization methods [Bach, 2015] to more general constraint sets, introducing a mechanism to solve a new class of constrained nonconvex optimization problems. We confirm on synthetic and real data that our method finds high-quality solutions that are robust to parameters varying arbitrarily in an uncertainty set, and scales up to graphs with over 50,000 edges.
There are many compelling directions for further study. The uncertainty sets we use are standard in the robust optimization literature, but have not been applied to e.g. Robust Influence Maximization; it would be interesting to generalize our ideas to general graphs. Finally, despite the inherent nonconvexity of our problem, firstorder methods are often able to find a globally optimal solution. Explaining this phenomenon requires further study of the geometry of constrained monotone submodular minimization.
Acknowledgements
We thank the anonymous reviewers for their helpful suggestions. We also thank MIT Supercloud and the Lincoln Laboratory Supercomputing Center for providing computational resources. This research was conducted with Government support under and awarded by DoD, Air Force Office of Scientific Research, National Defense Science and Engineering Graduate (NDSEG) Fellowship, 32 CFR 168a, and also supported by NSF CAREER award 1553284.
References
 [1] Yahoo! Webscope dataset ydataysmadvertiserbidsv1_0. URL http://research.yahoo.com/Academic_Relations.
 Adamczyk et al. [2016] Adamczyk, Marek, Sviridenko, Maxim, and Ward, Justin. Submodular Stochastic Probing on Matroids. Mathematics of Operations Research, 41(3):1022–1038, 2016.
 Alon et al. [2012] Alon, Noga, Gamzu, Iftah, and Tennenholtz, Moshe. Optimizing Budget Allocation Among Channels and Influencers. In WWW. 2012.
 Atamtürk & Narayanan [2008] Atamtürk, Alper and Narayanan, Vishnu. Polymatroids and meanrisk minimization in discrete optimization. Operations Research Letters, 36(5):618–622, 2008.
 Bach [2015] Bach, Francis. Submodular Functions: From Discrete to Continuous Domains. arXiv:1511.00394, 2015.
 Balkanski et al. [2016] Balkanski, Eric, Rubinstein, Aviad, and Singer, Yaron. The power of optimization from samples. In NIPS, 2016.
 Balkanski et al. [2017] Balkanski, Eric, Rubinstein, Aviad, and Singer, Yaron. The limitations of optimization from samples. In STOC, 2017.
 Becker et al. [2011] Becker, Stephen R., Candès, Emmanuel J., and Grant, Michael C. Templates for convex cone problems with applications to sparse signal recovery. Mathematical programming computation, 3(3):165–218, 2011.
 Ben-Tal & Nemirovski [2000] Ben-Tal, Aharon and Nemirovski, Arkadi. Robust solutions of Linear Programming problems contaminated with uncertain data. Mathematical Programming, 88(3):411–424, 2000.
 Ben-Tal et al. [2009] Ben-Tal, Aharon, El Ghaoui, Laurent, and Nemirovski, Arkadi. Robust Optimization. Princeton University Press, 2009.
 Bertsimas & Sim [2003] Bertsimas, Dimitris and Sim, Melvyn. Robust discrete optimization and network flows. Mathematical programming, 98(1):49–71, 2003.
 Bertsimas et al. [2011] Bertsimas, Dimitris, Brown, David B., and Caramanis, Constantine. Theory and Applications of Robust Optimization. SIAM Review, 53(3):464–501, 2011.
 Best & Chakravarti [1990] Best, Michael J. and Chakravarti, Nilotpal. Active set algorithms for isotonic regression; A unifying framework. Mathematical Programming, 47(13):425–439, 1990.
 Bian et al. [2017] Bian, Andrew An, Mirzasoleiman, Baharan, Buhmann, Joachim M., and Krause, Andreas. Guaranteed Nonconvex Optimization: Submodular Maximization over Continuous Domains. In AISTATS, 2017.
 Birkhoff [1937] Birkhoff, Garrett. Rings of sets. Duke Mathematical Journal, 3(3):443–454, 1937.
 Borgs et al. [2014] Borgs, Christian, Brautbar, Michael, Chayes, Jennifer, and Lucier, Brendan. Maximizing Social Influence in Nearly Optimal Time. In SODA, 2014.
 Boyd et al. [2007] Boyd, Stephen, Kim, SeungJean, Vandenberghe, Lieven, and Hassibi, Arash. A tutorial on geometric programming. Optimization and engineering, 8(1):67–127, 2007.
 Chakrabarty et al. [2017] Chakrabarty, Deeparnab, Lee, Yin Tat, Sidford, Aaron, and Wong, Sam Chiuwai. Subquadratic submodular function minimization. In STOC, 2017.
 Chandrasekaran & Shah [2016] Chandrasekaran, Venkat and Shah, Parikshit. Relative Entropy Relaxations for Signomial Optimization. SIAM Journal on Optimization, 26(2):1147–1173, 2016.
 Chen et al. [2009] Chen, Wei, Wang, Yajun, and Yang, Siyu. Efficient influence maximization in social networks. In KDD, 2009.
 Chen et al. [2010] Chen, Wei, Wang, Chi, and Wang, Yajun. Scalable Influence Maximization for Prevalent Viral Marketing in Largescale Social Networks. In KDD, 2010.
 Chen et al. [2016] Chen, Wei, Lin, Tian, Tan, Zihan, Zhao, Mingfei, and Zhou, Xuren. Robust influence maximization. In KDD. 2016.
 Chiang [2005] Chiang, Mung. Geometric Programming for Communication Systems. Commun. Inf. Theory, 2(1/2):1–154, 2005.
 Deshpande et al. [2016] Deshpande, Amol, Hellerstein, Lisa, and Kletenik, Devorah. Approximation Algorithms for Stochastic Submodular Set Cover with Applications to Boolean Function Evaluation and Min-Knapsack. ACM Trans. Algorithms, 12(3):42:1–42:28, 2016.
 Domingos & Richardson [2001] Domingos, Pedro and Richardson, Matt. Mining the network value of customers. In KDD, 2001.
 Du et al. [2013] Du, Nan, Song, Le, Gomez Rodriguez, Manuel, and Zha, Hongyuan. Scalable influence estimation in continuous-time diffusion networks. In NIPS, 2013.
 Du et al. [2014] Du, Nan, Liang, Yingyu, Balcan, Maria-Florina, and Song, Le. Influence function learning in information diffusion networks. In ICML, 2014.
 Dunn & Harshbarger [1978] Dunn, Joseph C. and Harshbarger, S. Conditional gradient algorithms with open loop step size rules. Journal of Mathematical Analysis and Applications, 62(2):432–444, 1978.
 Ecker [1980] Ecker, Joseph. Geometric Programming: Methods, Computations and Applications. SIAM Review, 22(3):338–362, 1980.
 Ene & Nguyen [2016] Ene, Alina and Nguyen, Huy L. A Reduction for Optimizing Lattice Submodular Functions with Diminishing Returns. arXiv:1606.08362, 2016.
 Frank & Wolfe [1956] Frank, Marguerite and Wolfe, Philip. An algorithm for quadratic programming. Naval Research Logistics Quarterly, 3(1–2):95–110, 1956.
 Goel et al. [2009] Goel, Gagan, Karande, Chinmay, Tripathi, Pushkar, and Wang, Lei. Approximability of combinatorial problems with multiagent submodular cost functions. In FOCS, 2009.
 Goemans & Vondrák [2006] Goemans, Michel and Vondrák, Jan. Stochastic Covering and Adaptivity. In LATIN 2006: Theoretical Informatics. Springer Berlin Heidelberg, 2006.

 Golovin & Krause [2011] Golovin, Daniel and Krause, Andreas. Adaptive Submodularity: Theory and Applications in Active Learning and Stochastic Optimization. Journal of Artificial Intelligence Research, 42:427–486, 2011.
 Gomez Rodriguez & Schölkopf [2012] Gomez Rodriguez, Manuel and Schölkopf, Bernhard. Influence maximization in continuous time diffusion networks. In ICML, 2012.
 Gomez Rodriguez et al. [2010] Gomez Rodriguez, Manuel, Leskovec, Jure, and Krause, Andreas. Inferring networks of diffusion and influence. In KDD, 2010.
 Gottschalk & Peis [2015] Gottschalk, Corinna and Peis, Britta. Submodular function maximization on the bounded integer lattice. In Approximation and Online Algorithms: 13th International Workshop (WAOA), 2015.
 Hassidim & Singer [2016] Hassidim, Avinatan and Singer, Yaron. Submodular optimization under noise. arXiv preprint arXiv:1601.03095, 2016.
 Hatano et al. [2015] Hatano, Daisuke, Fukunaga, Takuro, Maehara, Takanori, and Kawarabayashi, Ken-ichi. Lagrangian Decomposition Algorithm for Allocating Marketing Channels. In AAAI, 2015.
 He & Kempe [2016] He, Xinran and Kempe, David. Robust influence maximization. In KDD, 2016.
 Iwata & Nagano [2009] Iwata, Satoru and Nagano, Kiyohito. Submodular function minimization under covering constraints. In FOCS, 2009.
 Jaggi [2013] Jaggi, Martin. Revisiting Frank-Wolfe: Projection-Free Sparse Convex Optimization. In ICML, 2013.
 Kempe et al. [2003] Kempe, David, Kleinberg, Jon, and Tardos, Éva. Maximizing the Spread of Influence Through a Social Network. In KDD, 2003.
 Khachaturov et al. [2012] Khachaturov, Vladimir R., Khachaturov, Roman V., and Khachaturov, Ruben V. Supermodular programming on finite lattices. Computational Mathematics and Mathematical Physics, 52(6):855–878, 2012.

 Kolmogorov & Shioura [2009] Kolmogorov, Vladimir and Shioura, Akiyoshi. New algorithms for convex cost tension problem with application to computer vision. Discrete Optimization, 6:378–393, 2009.
 Krause et al. [2008] Krause, Andreas, McMahan, H. Brendan, Guestrin, Carlos, and Gupta, Anupam. Robust submodular observation selection. Journal of Machine Learning Research, 9(Dec):2761–2801, 2008.
 Lacoste-Julien [2016] Lacoste-Julien, Simon. Convergence Rate of Frank-Wolfe for Non-Convex Objectives. arXiv:1607.00345, 2016.
 Lacoste-Julien & Jaggi [2015] Lacoste-Julien, Simon and Jaggi, Martin. On the global linear convergence of Frank-Wolfe optimization variants. In NIPS, 2015.
 Lee et al. [2015] Lee, Yin Tat, Sidford, Aaron, and Wong, Sam Chiu-wai. A faster cutting plane method and its implications for combinatorial and convex optimization. In FOCS, 2015.
 Lowalekar et al. [2016] Lowalekar, Meghna, Varakantham, Pradeep, and Kumar, Akshat. Robust Influence Maximization: (Extended Abstract). In AAMAS, 2016.
 Maehara [2015] Maehara, Takanori. Risk averse submodular utility maximization. Operations Research Letters, 43(5):526–529, 2015.
 Maehara et al. [2015] Maehara, Takanori, Yabe, Akihiro, and Kawarabayashi, Ken-ichi. Budget Allocation Problem with Multiple Advertisers: A Game Theoretic View. In ICML, 2015.
 MOSEK ApS [2015] MOSEK ApS. MOSEK MATLAB Toolbox 8.0.0.57, 2015. URL http://docs.mosek.com/8.0/toolbox/index.html.
 Murota [2003] Murota, Kazuo. Discrete convex analysis. SIAM, 2003.
 Murota & Shioura [2014] Murota, Kazuo and Shioura, Akiyoshi. Exact bounds for steepest descent algorithms of convex function minimization. Operations Research Letters, 42:361–366, 2014.
 Nagano et al. [2011] Nagano, Kiyohito, Kawahara, Yoshinobu, and Aihara, Kazuyuki. Size-constrained submodular minimization through minimum norm base. In ICML, 2011.
 Narasimhan et al. [2015] Narasimhan, Harikrishna, Parkes, David C., and Singer, Yaron. Learnability of influence in networks. In NIPS, 2015.
 Netrapalli & Sanghavi [2012] Netrapalli, Praneeth and Sanghavi, Sujay. Learning the graph of epidemic cascades. In SIGMETRICS, 2012.

 Nikolova [2010] Nikolova, Evdokia. Approximation algorithms for reliable stochastic combinatorial optimization. In APPROX, 2010.
 Orlin et al. [2016] Orlin, James B., Schulz, Andreas, and Udwani, Rajan. Robust monotone submodular function maximization. In IPCO, 2016.
 Pascual & Ben-Israel [1970] Pascual, Luis D. and Ben-Israel, Adi. Constrained maximization of posynomials by geometric programming. Journal of Optimization Theory and Applications, 5(2):73–80, 1970.
 Polyak [1987] Polyak, Boris T. Introduction to Optimization. Optimization Software, 1987.
 Rockafellar & Uryasev [2000] Rockafellar, R Tyrrell and Uryasev, Stanislav. Optimization of conditional valueatrisk. Journal of risk, 2:21–42, 2000.
 Rockafellar & Uryasev [2002] Rockafellar, R Tyrrell and Uryasev, Stanislav. Conditional valueatrisk for general loss distributions. Journal of banking & finance, 26(7):1443–1471, 2002.
 Soma & Yoshida [2015] Soma, Tasuku and Yoshida, Yuichi. A Generalization of Submodular Cover via the Diminishing Return Property on the Integer Lattice. In NIPS, 2015.
 Soma et al. [2014] Soma, Tasuku, Kakimura, Naonori, Inaba, Kazuhiro, and Kawarabayashi, Ken-ichi. Optimal Budget Allocation: Theoretical Guarantee and Efficient Algorithm. In ICML, 2014.
 Svitkina & Fleischer [2011] Svitkina, Zoya and Fleischer, Lisa. Submodular approximation: Samplingbased algorithms and lower bounds. SIAM Journal on Computing, 40(6):1715–1737, 2011.
 Topkis [1978] Topkis, Donald M. Minimizing a submodular function on a lattice. Operations research, 26(2):305–321, 1978.
 Wainwright & Chiang [2004] Wainwright, Kevin and Chiang, Alpha. Fundamental Methods of Mathematical Economics. McGrawHill Education, 2004.
 Zhang et al. [2014] Zhang, Peng, Chen, Wei, Sun, Xiaoming, Wang, Yajun, and Zhang, Jialin. Minimizing seed set selection with probabilistic coverage guarantee in a social network. In KDD, 2014.
Appendix A Worst-Case Approximation Ratio versus True Worst-Case
Consider a function $g(x; y)$ of a decision variable $x$ and an adversarial parameter $y$, with values given by:
(18) 
We wish to choose $x$ to maximize $g$ robustly with respect to adversarial choices of $y$. If $y$ were fixed, we could directly choose $x$ to maximize $g(x; y)$. Of course, we want to deal with worst-case $y$. One option is to maximize the worst-case approximation ratio:
\[ \max_{x}\; \min_{y}\; \frac{g(x;y)}{\max_{x'} g(x';y)} \tag{19} \]
One can verify that the best $x$ according to this criterion attains a worst-case approximation ratio of 0.6 and a worst-case function value of 0.6. In this paper, we instead optimize the worst case of the actual function value:
\[ \max_{x}\; \min_{y}\; g(x;y) \tag{20} \]
This criterion selects a different $x$: one with a worse worst-case approximation ratio of 0.5, but a guaranteed function value of 1, significantly better than the 0.6 achieved by the ratio-based formulation of robustness.
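The contrast between criteria (19) and (20) can be checked numerically. The payoff table below is hypothetical, chosen only so that it reproduces the numbers quoted above (worst-case ratios 0.6 versus 0.5, worst-case values 0.6 versus 1):

```python
# Hypothetical payoff table for g(x; y): rows are decisions x, columns are
# adversarial parameters y. Values are illustrative, not from the paper.
g = {
    "x1": {"y1": 0.6, "y2": 2.0},
    "x2": {"y1": 1.0, "y2": 1.0},
}
ys = ["y1", "y2"]

# Best attainable value for each fixed y (denominator of the ratio criterion).
best = {y: max(g[x][y] for x in g) for y in ys}

# Criterion (19): worst-case approximation ratio of each decision.
ratio = {x: min(g[x][y] / best[y] for y in ys) for x in g}
# Criterion (20): worst-case function value of each decision.
value = {x: min(g[x][y] for y in ys) for x in g}

x_ratio = max(ratio, key=ratio.get)  # ratio-robust choice
x_value = max(value, key=value.get)  # value-robust choice
```

On this table the ratio criterion prefers the decision whose worst-case value is only 0.6, while maximizing the worst-case value directly guarantees 1.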
Appendix B DR-submodularity and L♮-convexity
A function $f : \mathbb{Z}^n \to \mathbb{R}$ is L♮-convex if it satisfies a discrete version of midpoint convexity, i.e. for all $x, y \in \mathbb{Z}^n$ it holds that
\[ f(x) + f(y) \;\ge\; f\!\left(\left\lceil \tfrac{x+y}{2} \right\rceil\right) + f\!\left(\left\lfloor \tfrac{x+y}{2} \right\rfloor\right) \tag{21} \]
Remark B.1.
An L♮-convex function need not be DR-submodular, and vice versa. Hence, algorithms for optimizing one class may not apply to the other.
Proof.
Consider $f(x_1, x_2) = \min(x_1 + x_2, 1)$ and $g(x_1, x_2) = x_1^2 + x_2^2$, both defined on $\{0,1,2\}^2$. The function $f$ is DR-submodular but violates discrete midpoint convexity for the pair of points $(0,0)$ and $(2,0)$, while $g$ is L♮-convex (being separable convex) but does not have diminishing returns in either dimension. ∎
Intuitively speaking, L♮-convex functions look like discretizations of convex functions. The continuous objective function we consider need not be convex, hence its discretization need not be L♮-convex, and we cannot use those tools. However, in some regimes the objective turns out to be DR-submodular.
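The incomparability of the two classes is easy to verify by brute force on a small lattice. The two witness functions below are illustrative choices of ours: $f(x_1,x_2)=\min(x_1+x_2,1)$ is DR-submodular but violates discrete midpoint convexity, while $g(x_1,x_2)=x_1^2+x_2^2$ is separable convex (hence L♮-convex) but lacks diminishing returns:

```python
import itertools
import math

# Check both properties by exhaustive enumeration on the lattice {0,1,2}^2.
PTS = list(itertools.product(range(3), repeat=2))

def is_dr_submodular(h):
    # Marginal gains h(x + e_i) - h(x) must be nonincreasing in x.
    for i in range(2):
        for x in PTS:
            for y in PTS:
                if not all(a <= b for a, b in zip(x, y)):
                    continue  # only compare comparable points x <= y
                xi = tuple(v + (j == i) for j, v in enumerate(x))
                yi = tuple(v + (j == i) for j, v in enumerate(y))
                if max(xi) > 2 or max(yi) > 2:
                    continue  # stay inside the bounded lattice
                if h(xi) - h(x) < h(yi) - h(y):
                    return False
    return True

def is_midpoint_convex(h):
    # Discrete midpoint convexity:
    # h(x) + h(y) >= h(ceil((x+y)/2)) + h(floor((x+y)/2)).
    for x in PTS:
        for y in PTS:
            up = tuple(math.ceil((a + b) / 2) for a, b in zip(x, y))
            dn = tuple((a + b) // 2 for a, b in zip(x, y))
            if h(x) + h(y) < h(up) + h(dn):
                return False
    return True

f = lambda x: min(x[0] + x[1], 1)     # DR-submodular, not midpoint convex
g = lambda x: x[0] ** 2 + x[1] ** 2   # midpoint convex, not DR-submodular
```

For $f$, midpoint convexity fails at $(0,0)$ and $(2,0)$: the left side is $0+1=1$ while both midpoints equal $(1,0)$, giving $1+1=2$.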
Appendix C Constrained Continuous Submodular Function Minimization
Define to be the set of vectors in which are monotone nonincreasing, i.e. . As in the main text, define . One of the key results from Bach [2015] is that an arbitrary submodular function defined on can be extended to a particular convex function so that
(22) 
Moreover, Theorem 4 from Bach [2015] states that, if are strictly convex functions for all and each , then the two problems
(23) 
and
(24) 
are equivalent. In particular, one recovers a solution to Problem (23) just as alluded to in Lemma 3.2: find the point which solves Problem (24) and, for each component, choose the maximal value for which the corresponding extension variable is still above threshold.
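This recovery step can be sketched as follows. The matrix `rho`, the threshold `t`, and the variable names are hypothetical stand-ins for the extension variables of Problem (24), with one row per component, columns indexed by the discrete levels, and rows monotone nonincreasing:

```python
# Hypothetical optimal extension variables from Problem (24): rho[i][j] is the
# variable for component i at level j+1; each row is monotone nonincreasing.
rho = [
    [0.9, 0.7, 0.2],
    [0.4, 0.1, 0.0],
]

def recover(rho, t):
    # Choose, per component, the maximal level whose variable still meets the
    # threshold t; with nonincreasing rows this is simply a count.
    return [sum(1 for v in row if v >= t) for row in rho]
```

For instance, `recover(rho, 0.5)` yields the integer point `[2, 0]` for this `rho`.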
C.1 Proof of Lemma 3.2
Proof.
The discretized form of the regularizer is also separable and can be written . For each and each with , define , so that . Since we assumed is strictly increasing, the coefficient of in each is strictly positive, so that each is strictly convex. Then,
(25)  
(26) 
so that the discretized version of the minimization problem can be written as
(27) 
Since the term does not depend on the variable , this minimization is equivalent to
(28) 
This problem is in the precise form where we can apply the preceding equivalence result between Problems (23) and (24), so we are done. ∎
C.2 Proof of Theorem 3.1
Proof.
The general idea of this proof is to first show that the integer-valued point which solves
is also nearly a minimizer of the continuous version of the problem, due to the fineness of the discretization. Then, we show that the solutions traced out by get very close to . These two results are simply combined via the triangle inequality.
C.2.1 Continuous and Discrete Problems
We begin by proving that
(29) 
Consider . If corresponds to an integral point in the discretized domain, then and we are done. Otherwise, has at least one nonintegral coordinate. By rounding coordinatewise, we can construct a set so that . By monotonicity, there must be some with , i.e. is feasible for the original continuous problem. By construction, since the discretization given by is fine, we must have . Applying the Lipschitz property of and the optimality of , we have
from which (29) follows.
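The rounding argument above can be sketched numerically. The grid spacing `delta`, the Lipschitz constant `L`, and the point `x` are hypothetical; the point of the sketch is only that snapping each coordinate to the grid moves it by at most `delta / 2`, so an objective that is `L`-Lipschitz in the sup-norm changes by at most `L * delta / 2`:

```python
# Hypothetical discretization fineness and sup-norm Lipschitz constant.
delta = 0.1
L = 2.0

def round_to_grid(x, delta):
    # Snap each coordinate to the nearest multiple of delta.
    return [round(xi / delta) * delta for xi in x]

x = [0.34, 0.78]          # hypothetical nonintegral point
x_grid = round_to_grid(x, delta)
gap = max(abs(a - b) for a, b in zip(x, x_grid))  # <= delta / 2
bound = L * gap           # change in an L-Lipschitz objective is <= L * delta / 2
```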
C.2.2 Discrete and Parameterized Discrete Problems
Define and by
The next step in proving our suboptimality bound is to prove that