# Price of Dependence: Stochastic Submodular Maximization with Dependent Items

In this paper, we study the stochastic submodular maximization problem with dependent items, subject to a variety of packing constraints such as matroid and knapsack constraints. The input to our problem is a finite set of items, and each item is in a particular state from a set of possible states. After picking an item, we are able to observe its state. We assume a monotone and submodular utility function over items and states, and our objective is to adaptively select a group of items so as to maximize the expected utility. Previous studies on stochastic submodular maximization often assume that items' states are independent; however, this assumption may not hold in general. This motivates us to study the stochastic submodular maximization problem with dependent items. We first introduce the concept of degree of independence to capture the degree to which one item's state is dependent on the others'. We then propose a non-adaptive policy based on a modified continuous greedy algorithm and show that its approximation ratio is α(1 − e^{−κ/2 + κ/(18m²) − (κ+2)/(3mκ)}), where α depends on the type of constraint (e.g., α = 1 for a matroid constraint), κ is the degree of independence (e.g., κ = 1 for independent items), and m is the number of items.


## 1 Introduction

Stochastic submodular maximization (SSM) has been extensively studied recently (Asadpour et al. 2008). The input of SSM is a set of items, and each item is in a particular state from a set of possible states. After picking an item, we are able to observe its state. Given a monotone and submodular utility function over all items and their states, our objective is to adaptively select a group of items that maximize the expected utility subject to a variety of constraints. One example is the stochastic sensor cover problem. In this example, we are given a set of sensors, and the state of each sensor is the subset of targets it covers; this subset may change due to uncertain environmental conditions. After selecting a sensor, we are able to observe its state, i.e., the actual subset of targets covered by this sensor. The objective of the stochastic sensor cover problem is then to adaptively select a group of sensors that covers the largest number of targets (in expectation).
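
To make the sensor cover example concrete, the following toy simulation (all sensor names, target sets, and the 0.8 coverage probability are invented for illustration) estimates the expected coverage of a fixed sensor set by sampling states:

```python
import random

# Toy stochastic sensor cover instance (sensor names, target sets, and the
# 0.8 coverage probability are all invented for illustration).
NOMINAL = {
    "s1": {1, 2, 3},
    "s2": {3, 4},
    "s3": {4, 5, 6},
}

def realize(rng):
    # A sensor's state is the subset of its nominal targets it actually
    # covers; here each target survives independently with probability 0.8.
    return {s: {t for t in ts if rng.random() < 0.8} for s, ts in NOMINAL.items()}

def expected_coverage(chosen, trials=2000, seed=0):
    # Estimate the expected number of covered targets by sampling states.
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        state = realize(rng)
        covered = set()
        for s in chosen:
            covered |= state[s]
        total += len(covered)
    return total / trials

# {s1, s3} covers disjoint nominal targets, so it beats the
# overlapping pair {s2, s3} in expectation.
assert expected_coverage(["s1", "s3"]) > expected_coverage(["s2", "s3"])
```

An adaptive policy would re-estimate these expectations after observing each selected sensor's state; the sketch above only evaluates fixed (non-adaptive) choices.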

The majority of existing work assumes that items are independent of each other, i.e., one item's state does not depend on the others'. However, this assumption does not always hold in reality. Consider the stochastic sensor cover example: since each sensor's state is affected by environmental conditions that are shared among all sensors, their states are correlated. Another example is viral marketing, where one customer's decision on whether or not to buy a product could depend on her neighbors' decisions. Golovin and Krause (2011) extended the previous studies to dependent items; however, their results only hold when the utility function is adaptive submodular, and it is not clear how to generalize their results to more general settings. In this paper, we study a very general setting for SSM with dependent items. To capture the degree to which one item's state is correlated with the others', we introduce the concept of degree of independence, κ. A larger degree of independence indicates a weaker correlation among items' states; in particular, κ = 1 for independent items. We then propose a non-adaptive policy based on a modified continuous greedy algorithm. We say a policy is non-adaptive if it always picks the next item before observing the states of already-picked items. We show that our non-adaptive policy achieves approximation ratio α(1 − e^{−κ/2 + κ/(18m²) − (κ+2)/(3mκ)}), where α depends on the type of constraint (e.g., α = 1 for a matroid constraint), κ is the degree of independence, and m is the number of items. Since our policy is non-adaptive, this ratio also bounds the adaptivity gap, i.e., the gap between the utilities of the best adaptive and the best non-adaptive policies.

## 2 Preliminaries

### 2.1 Submodular Function and Multilinear Extension

A submodular function is a set function f: 2^E → ℝ, where 2^E denotes the power set of a ground set E, which satisfies a natural "diminishing returns" property: the marginal gain from adding an element to a set X is at least as high as the marginal gain from adding the same element to a superset of X. Formally, a submodular function f satisfies the following property: for every X, Y ⊆ E with X ⊆ Y and every e ∈ E∖Y, we have f(X ∪ {e}) − f(X) ≥ f(Y ∪ {e}) − f(Y). We say a submodular function f is monotone if f(X) ≤ f(Y) whenever X ⊆ Y.

Consider any vector y ∈ [0,1]^E. The multilinear extension F of f is defined as F(y) = Σ_{S⊆E} f(S) ∏_{e∈S} y_e ∏_{e∉S} (1 − y_e); equivalently, F(y) = E[f(R(y))], where R(y) is a random set that contains each e ∈ E independently with probability y_e.
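
To make this definition concrete, here is a small sketch (the coverage function and the point y are invented for illustration) that evaluates the multilinear extension both exactly, by enumerating subsets, and by Monte Carlo sampling:

```python
import itertools
import random

# A small monotone submodular function: coverage over targets {1,2,3,4}.
COVER = {"a": {1, 2}, "b": {2, 3}, "c": {4}}

def f(S):
    covered = set()
    for e in S:
        covered |= COVER[e]
    return len(covered)

def F_exact(y):
    # Multilinear extension by full enumeration:
    # F(y) = sum_S f(S) * prod_{e in S} y_e * prod_{e not in S} (1 - y_e).
    items = list(y)
    total = 0.0
    for r in range(len(items) + 1):
        for S in itertools.combinations(items, r):
            p = 1.0
            for e in items:
                p *= y[e] if e in S else 1 - y[e]
            total += p * f(set(S))
    return total

def F_sample(y, trials=20000, seed=1):
    # Equivalent Monte Carlo view: include each e independently w.p. y[e].
    rng = random.Random(seed)
    acc = 0
    for _ in range(trials):
        R = [e for e in y if rng.random() < y[e]]
        acc += f(R)
    return acc / trials

y = {"a": 0.5, "b": 0.5, "c": 0.25}
assert abs(F_exact(y) - F_sample(y)) < 0.05
```

Exact enumeration takes 2^m terms, which is why the algorithm in Section 4 relies on the sampling estimate instead.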

## 3 Notations and Problem Formulation

### 3.1 Items and States

Let E denote a finite set of m items; each item is in a particular state from a set O of possible states. Let φ denote a realization of item states, where φ_e ∈ O denotes the state of item e under φ. Let Φ denote a random realization, where Φ_e denotes the random state of item e. After picking an item e, we are able to observe its realization Φ_e. Let U denote the set of all realizations; we assume there is a known prior probability distribution D over realizations, i.e., Φ ∼ D.

### 3.2 Utility Function and Problem Formulation

Let f be a monotone and submodular function over all items and their states. A policy π is a function that specifies which item to pick next given the observations made so far. Note that π can be regarded as a decision tree that specifies a rule for picking items adaptively. Let I be a downward-closed family of subsets of E. Let E(π, φ) denote the subset of items picked by policy π under realization φ. Then the utility of π can be expressed as f(π) = Σ_{φ∈U} α_φ f(∪_{e∈E(π,φ)} φ_e), where α_φ denotes the probability that φ is realized. We say a policy π is feasible if E(π, φ) ∈ I for every φ ∈ U. Our goal is to identify the best feasible policy that maximizes the expected utility.

$$\max_{\pi} f(\pi) \quad \text{subject to } E(\pi,\phi)\in \mathcal{I} \text{ for every } \phi$$
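
As a concrete instance of this formulation, the sketch below (the items, states, prior distribution, and coverage sets are all invented) evaluates the expected utility f(π) of a simple non-adaptive policy by summing over realizations of a correlated prior:

```python
# Toy instance. Realizations map item -> state; PRIOR lists
# (realization, probability) pairs of a known, correlated prior D.
PRIOR = [
    ({"a": 1, "b": 1}, 0.4),   # the two items' states are positively correlated
    ({"a": 1, "b": 0}, 0.1),
    ({"a": 0, "b": 1}, 0.1),
    ({"a": 0, "b": 0}, 0.4),
]

# Utility over (item, state) pairs: a coverage function, hence monotone
# and submodular; an item in state 0 covers nothing.
HITS = {("a", 1): {1, 2}, ("b", 1): {2, 3}}

def f(pairs):
    covered = set()
    for p in pairs:
        covered |= HITS.get(p, set())
    return len(covered)

def utility(chosen):
    # f(pi) = sum_phi alpha_phi * f(states of the items picked under phi),
    # for the non-adaptive policy that always picks `chosen`.
    return sum(prob * f({(e, phi[e]) for e in chosen}) for phi, prob in PRIOR)

# 0.4*f({a:1,b:1}) + 0.1*f({a:1,b:0}) + 0.1*f({a:0,b:1}) + 0.4*0
#   = 0.4*3 + 0.1*2 + 0.1*2 = 1.6
assert abs(utility(["a", "b"]) - 1.6) < 1e-9
```

An adaptive policy would branch on the observed state of its first pick; enumerating its decision tree over the same prior follows the same pattern.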

### 3.3 More Notations and Degree of Independence

By abuse of notation, define f(S) as the value of a set S of item–state pairs. Let f_S(e) denote the marginal value of item e with respect to S, and let f_S(φ_e) denote the marginal value of item e's state φ_e with respect to S.

Given a vector x ∈ [0,1]^E, let R(x) be a random set obtained by picking each item e independently with probability x_e; the multilinear extension F is then the expected value of f(R(x)): F(x) = E[f(R(x))]. Let F_x(e) denote the marginal value of item e with respect to x, and let F_x(φ_e) denote the marginal value of item e's state with respect to x. For notational convenience, denote by x∖e the vector obtained from x by setting its e-th coordinate to 0.

We next introduce the concept of degree of independence, which refers to the degree to which one item's state is correlated with the others'. [Definition: Degree of Independence] The degree of independence κ of a known prior probability distribution D is defined as follows.

$$\kappa:=\sup\{c \mid \forall e\in E,\ \forall S\subseteq E\setminus\{e\}:\ f_S(e)\geq c\,\mathbb{E}_{\Phi\sim D}[f_S(\Phi_e)]\} \tag{1}$$

Notice that if all items' states are realized independently of each other, then the degree of independence is 1, i.e., κ = 1.
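
The sketch below computes this quantity for a tiny invented instance under one possible reading of the definition, purely for illustration: f_S(e) draws e's state from its marginal distribution, independently of S's realized states, while the right-hand side of (1) draws all states jointly from the prior. All names, priors, and coverage sets are hypothetical.

```python
import itertools

# Invented coverage utility over (item, state) pairs; state 0 hits nothing.
HITS = {("a", 1): {1, 2}, ("b", 1): {1, 3}}

def f(pairs):
    covered = set()
    for p in pairs:
        covered |= HITS.get(p, set())
    return len(covered)

def degree_of_independence(prior):
    # kappa = min over (e, S) of f_S(e) / E[f_S(Phi_e)], under the reading
    # described in the lead-in; `prior` lists (realization, probability).
    items = sorted({e for phi, _ in prior for e in phi})
    ratios = []
    for e in items:
        others = [x for x in items if x != e]
        pe = {}  # marginal distribution of e's state
        for phi, p in prior:
            pe[phi[e]] = pe.get(phi[e], 0.0) + p
        for r in range(len(others) + 1):
            for S in itertools.combinations(others, r):
                # left: e's state drawn independently of S's realized states
                left = sum(p * q * (f({(x, phi[x]) for x in S} | {(e, s)})
                                    - f({(x, phi[x]) for x in S}))
                           for phi, p in prior for s, q in pe.items())
                # right: all states drawn jointly from the prior
                right = sum(p * (f({(x, phi[x]) for x in S} | {(e, phi[e])})
                                 - f({(x, phi[x]) for x in S}))
                            for phi, p in prior)
                if right > 1e-12:
                    ratios.append(left / right)
    return min(ratios)

# A product (independent) prior gives kappa = 1, matching the remark above.
independent = [({"a": sa, "b": sb}, 0.25) for sa in (0, 1) for sb in (0, 1)]
assert abs(degree_of_independence(independent) - 1.0) < 1e-9

# Anti-correlated states shrink kappa below 1 on this instance.
anticorrelated = [({"a": 1, "b": 0}, 0.5), ({"a": 0, "b": 1}, 0.5)]
assert degree_of_independence(anticorrelated) < 1.0
```

The enumeration over all (e, S) pairs is exponential in m; it is only meant to make the definition tangible on small examples.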

## 4 Algorithm Design

In this section, we present a non-adaptive policy; later, we show that the ratio of its utility to that of the best adaptive policy is bounded. This ratio is also known as the adaptivity gap. The general idea is to first find a fractional solution using a modified continuous greedy algorithm (Algorithm 1) and then round it to an integral solution.

#### Stage 1: Modified Continuous Greedy Algorithm

We first explain the design of the modified continuous greedy algorithm. Algorithm 1 maintains a fractional solution y(t), starting with y(0) = 0. Let R̄_e(t) be a random set that contains each item e′ ≠ e independently with probability y_{e′}(t). In each round t, it updates the weight of each item e as follows,

$$F_{y(t)\setminus e}(e)=\mathbb{E}[f(\bar{R}_e(t)\cup\{e\})]-\mathbb{E}[f(\bar{R}_e(t))]$$

where R̄_e(t) is distributed according to y(t)∖e, i.e., item e is excluded. Since we are not able to obtain the exact value of F_{y(t)∖e}(e), we estimate this value by averaging over sufficiently many independent samples of R̄_e(t). Let F̂_{y(t)∖e}(e) denote the estimated value of F_{y(t)∖e}(e). Notice that, as compared with the standard continuous greedy algorithm (Calinescu et al. 2011), we define the weight of each item in a different way: Calinescu et al. (2011) adopt the weight function F_{y(t)}(e). Assuming P, the convex relaxation of I, is a down-monotone solvable polytope, we solve the following optimization problem.

P1: Maximize Σ_{e∈E} x_e · F̂_{y(t)∖e}(e) subject to x ∈ P.

After solving P1 in round t and obtaining an optimal solution ȳ, we update the fractional solution as y(t + δ) = y(t) + δȳ. After 1/δ rounds, y(1) is returned as the final solution.
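
The loop of Stage 1 can be sketched as follows for the special case of a cardinality constraint, where P1 reduces to picking the k items with the largest estimated weights; the utility function, step size, and sample count are all illustrative choices, not the paper's parameters:

```python
import random

# Toy coverage utility (invented): "d" covers targets disjoint from the rest.
COVER = {"a": {1, 2}, "b": {2, 3}, "c": {3}, "d": {4, 5}}

def f(S):
    covered = set()
    for e in S:
        covered |= COVER[e]
    return len(covered)

def estimate_weight(y, e, samples, rng):
    # Estimate F_{y\e}(e): marginal of e against R sampled from y with e excluded.
    acc = 0
    for _ in range(samples):
        R = {x for x in y if x != e and rng.random() < y[x]}
        acc += f(R | {e}) - f(R)
    return acc / samples

def continuous_greedy(k, delta=0.05, samples=200, seed=0):
    rng = random.Random(seed)
    y = {e: 0.0 for e in COVER}
    for _ in range(int(round(1 / delta))):          # 1/delta rounds
        w = {e: estimate_weight(y, e, samples, rng) for e in y}
        # P1 for the cardinality polytope {x : sum_e x_e <= k}: the optimum
        # is the indicator vector of the k largest weights.
        top = sorted(y, key=lambda e: -w[e])[:k]
        for e in top:
            y[e] = min(1.0, y[e] + delta)
    return y

y = continuous_greedy(k=2)
# "d" covers targets no other item covers, so it keeps the largest weight
# and accumulates far more mass than the redundant item "c".
assert y["d"] > y["c"]
```

For a general down-monotone polytope P, the `top` step would be replaced by a linear program over P, as in the standard continuous greedy framework.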

#### Stage 2: Rounding Fractional Solution

In the second stage, we round the fractional solution y(1) to an integral solution. As shown in Chekuri et al. (2014), if there exists an α-balanced contention resolution scheme for I, then we can find a feasible solution S such that E[f(S)] ≥ α · F(y(1)). It turns out that many useful constraints admit good α-balanced contention resolution schemes, including matroid constraints and knapsack constraints, as well as their intersections. Notice that when I specifies a single matroid constraint, we can apply the pipage rounding technique (Ageev and Sviridenko 2004) to find a feasible solution S with expected utility E[f(S)] ≥ F(y(1)), i.e., α = 1.
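
For intuition, here is a minimal sketch in the spirit of pipage rounding, specialized to a cardinality (uniform matroid) constraint and an invented toy coverage function small enough that F can be evaluated exactly. Mass is repeatedly shifted between two fractional coordinates in whichever direction does not decrease F; for a submodular f, F is convex along such directions, so one direction is always safe.

```python
import itertools

# Invented toy coverage function.
COVER = {"a": {1, 2}, "b": {2, 3}, "c": {3}}

def f(S):
    covered = set()
    for e in S:
        covered |= COVER[e]
    return len(covered)

def F(y):
    # Exact multilinear extension by enumeration (fine for 3 items).
    items = list(y)
    total = 0.0
    for r in range(len(items) + 1):
        for S in itertools.combinations(items, r):
            p = 1.0
            for e in items:
                p *= y[e] if e in S else 1 - y[e]
            total += p * f(set(S))
    return total

def pipage_round(y):
    y = dict(y)
    while True:
        frac = [e for e in y if 1e-9 < y[e] < 1 - 1e-9]
        if len(frac) < 2:
            break
        e1, e2 = frac[0], frac[1]
        # Shift mass between e1 and e2 (preserving the total) as far as
        # possible in each direction while staying inside [0, 1]^E.
        up = min(1 - y[e1], y[e2])
        down = min(y[e1], 1 - y[e2])
        y_up = {**y, e1: y[e1] + up, e2: y[e2] - up}
        y_down = {**y, e1: y[e1] - down, e2: y[e2] + down}
        y = y_up if F(y_up) >= F(y_down) else y_down
    return y

y = {"a": 0.5, "b": 0.5, "c": 1.0}   # fractional point with total mass 2
rounded = pipage_round(y)
assert F(rounded) >= F(y) - 1e-9      # rounding never loses value here
```

This is only a sketch under the stated assumptions; the general pipage rounding of Ageev and Sviridenko (2004) handles arbitrary matroid constraints via the matroid's exchange structure.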

## 5 Performance Analysis

In this section, we prove the following main results.

Assume π⋄ is the optimal adaptive policy and there exists an α-balanced contention resolution scheme for I. Then our non-adaptive policy returns a solution S with E[f(S)] ≥ α(1 − e^{−κ/2 + κ/(18m²) − (κ+2)/(3mκ)}) f(π⋄), where κ is the degree of independence.

Proof: Observe that if there exists an α-balanced contention resolution scheme for I, we can find a feasible solution S such that E[f(S)] ≥ α · F(y(1)). It thus suffices to prove that F(y(1)) ≥ (1 − e^{−κ/2 + κ/(18m²) − (κ+2)/(3mκ)}) f(π⋄), and in the rest of the proof we focus on this inequality.

For every e ∈ E, let y⋄_e denote the probability that e is picked by π⋄. Because π⋄ is a feasible policy, y⋄ is a convex combination of (indicator vectors of) feasible solutions in I; thus, y⋄ ∈ P. We first prove that f(π⋄) ≤ F(x) + (1/κ) Σ_{e∈E} y⋄_e F_{x∖e}(e) for any vector x ∈ [0,1]^E.

$$\begin{aligned}
f(\pi^\diamond) &= \sum_{\phi\in U}\alpha_\phi f\Big(\bigcup_{e\in E(\pi^\diamond,\phi)}\phi_e\Big)\\
&\leq \sum_{\phi\in U}\alpha_\phi\Big(F(x)+\sum_{e\in E(\pi^\diamond,\phi)}F_x(\phi_e)\Big)\\
&\leq \sum_{\phi\in U}\alpha_\phi\Big(F(x)+\sum_{e\in E(\pi^\diamond,\phi)}F_{x\setminus e}(\phi_e)\Big)\\
&\leq F(x)+\sum_{\phi\in U}\sum_{e\in E(\pi^\diamond,\phi)}\alpha_\phi F_{x\setminus e}(\phi_e)\\
&\leq F(x)+\frac{1}{\kappa}\sum_{e\in E}y^\diamond_e F_{x\setminus e}(e)
\end{aligned} \tag{2}$$

The first two inequalities are due to the submodularity of f. The third inequality is due to Σ_{φ∈U} α_φ = 1. The last inequality is due to the definition of the degree of independence (Section 3.3) and the definition of y⋄.

We next provide a lower bound on the increase of F during one round of Algorithm 1. To simplify the notation, we use ȳ to denote the optimal solution to P1 in round t.

$$\begin{aligned}
F(y(t+\delta))-F(y(t)) &\geq \sum_{e\in E}\delta\bar{y}_e\prod_{e'\neq e}(1-\delta\bar{y}_{e'})\,F_{y(t)}(e)\\
&\geq \sum_{e\in E}\delta\bar{y}_e\prod_{e'\neq e}(1-\delta\bar{y}_{e'})(1-y_e(t))\,F_{y(t)\setminus e}(e)\\
&\geq \sum_{e\in E}\delta\bar{y}_e(1-\delta)^{m-1}(1-y_e(t))\,F_{y(t)\setminus e}(e)\\
&= \delta(1-\delta)^{m-1}\sum_{e\in E}\bar{y}_e(1-y_e(t))\,F_{y(t)\setminus e}(e)\\
&\geq \delta(1-\delta)^{m-1}\sum_{e\in E}\bar{y}_e(1-t\delta)\,F_{y(t)\setminus e}(e)\\
&= (1-t\delta)\,\delta(1-\delta)^{m-1}\sum_{e\in E}\bar{y}_e F_{y(t)\setminus e}(e)\\
&\geq (1-t\delta)\,\delta(1-\delta)^{m-1}\Big(\sum_{e\in E}y^\diamond_e F_{y(t)\setminus e}(e)-2m\delta f(\pi^\diamond)\Big)\\
&\geq (1-t\delta)\,\delta(1-\delta)^{m-1}\Big(\kappa\big(f(\pi^\diamond)-F(y(t))\big)-2m\delta f(\pi^\diamond)\Big)\\
&= (1-t\delta)\,\delta(1-\delta)^{m-1}\kappa\Big(\big(1-\tfrac{2m\delta}{\kappa}\big)f(\pi^\diamond)-F(y(t))\Big)\\
&\geq (1-t\delta)\,\delta(1-\delta m)\,\kappa\Big(\big(1-\tfrac{2m\delta}{\kappa}\big)f(\pi^\diamond)-F(y(t))\Big)\\
&\geq (1-t\delta)\,\delta\kappa\Big(\big(1-\tfrac{(\kappa+2)m\delta}{\kappa}\big)f(\pi^\diamond)-F(y(t))\Big)
\end{aligned}$$

The first inequality is due to Lemma 3.3 in (Calinescu et al. 2011). The second inequality follows from F_{y(t)}(e) = (1 − y_e(t)) F_{y(t)∖e}(e). The third inequality holds because ȳ_{e′} ≤ 1 for every e′. The fourth inequality is due to y_e(t) ≤ tδ, since each coordinate increases by at most δ per round. The fifth inequality holds because ȳ is an optimal solution to P1 while y⋄ is a feasible solution to P1, with the 2mδf(π⋄) term accounting for the estimation error; the argument is similar to that of Lemma 3.2 in (Calinescu et al. 2011). The sixth inequality is due to Inequality (2) with x = y(t). The last two inequalities follow from (1 − δ)^{m−1} ≥ 1 − mδ and (1 − mδ)(1 − 2mδ/κ) ≥ 1 − (κ+2)mδ/κ.

By induction on t over all 1/δ rounds, we have F(y(1)) ≥ (1 − e^{−κ/2 + κ/(18m²) − (κ+2)/(3mκ)}) f(π⋄). Finally, since the rounding stage loses a factor of at most α, our policy returns a solution S with E[f(S)] ≥ α · F(y(1)) ≥ α(1 − e^{−κ/2 + κ/(18m²) − (κ+2)/(3mκ)}) f(π⋄), which completes the proof.

As a byproduct of our main theorem, we have the following corollary: the adaptivity gap of SSM with dependent items is bounded, i.e., the expected utility of the best non-adaptive policy is at least α(1 − e^{−κ/2 + κ/(18m²) − (κ+2)/(3mκ)}) times that of the best adaptive policy.

## 6 Conclusion

Previous studies on SSM often assume that items are independent; however, this assumption may not always hold. In this paper, we study SSM with dependent items. To capture the impact of item dependency, we first introduce the concept of degree of independence. We then propose a non-adaptive policy based on a modified continuous greedy algorithm and show that its performance is close to that of the optimal adaptive policy. In particular, we prove that our non-adaptive policy achieves an approximation ratio whose value depends on the degree of independence.

## References

• Ageev, Alexander A., Maxim I. Sviridenko. 2004. Pipage rounding: A new method of constructing algorithms with proven performance guarantee. Journal of Combinatorial Optimization 8(3) 307–328.
• Asadpour, Arash, Hamid Nazerzadeh, Amin Saberi. 2008. Stochastic submodular maximization. International Workshop on Internet and Network Economics. Springer, 477–489.
• Calinescu, Gruia, Chandra Chekuri, Martin Pál, Jan Vondrák. 2011. Maximizing a monotone submodular function subject to a matroid constraint. SIAM Journal on Computing 40(6) 1740–1766.
• Chekuri, Chandra, Jan Vondrák, Rico Zenklusen. 2014. Submodular function maximization via the multilinear relaxation and contention resolution schemes. SIAM Journal on Computing 43(6) 1831–1879.
• Golovin, Daniel, Andreas Krause. 2011. Adaptive submodularity: Theory and applications in active learning and stochastic optimization. Journal of Artificial Intelligence Research 42 427–486.