
An efficient branch-and-cut algorithm for approximately submodular function maximization

When approaching problems in computer science, we often encounter situations where a subset of a finite set maximizing some utility function needs to be selected. Some such utility functions are known to be approximately submodular. For the problem of maximizing an approximately submodular function (ASFM problem), a greedy algorithm quickly finds good feasible solutions for many instances while guaranteeing a (1 − e^{−γ})-approximation ratio for a given submodular ratio γ. However, we still encounter applications that demand more accurate or exactly optimal solutions within a reasonable computation time. In this paper, we present an efficient branch-and-cut algorithm for the non-decreasing ASFM problem based on its binary integer programming (BIP) formulation with an exponential number of constraints. To this end, we first derive a BIP formulation of the ASFM problem and then develop an improved constraint generation algorithm that starts from a reduced BIP problem with a small subset of constraints and repeatedly solves the reduced BIP problem while adding a promising set of constraints at each iteration. Moreover, we incorporate it into a branch-and-cut algorithm to attain good upper bounds while solving a smaller number of nodes of a search tree. The computational results for three types of well-known benchmark instances show that our algorithm performs better than the conventional exact algorithms.


1 Introduction

When approaching problems in computer science, we often encounter situations where a subset of a finite set maximizing some utility function needs to be selected. Some such utility functions are known to be submodular (e.g., sensor placement (Golovin and Krause, 2011; Kawahara et al., 2009; Kratica et al., 2001), document summarization (Lin and Bilmes, 2011), and influence spread problems (Kempe et al., 2003; Sakaue and Ishihata, 2018)). A set function f on a finite set N is called submodular if it satisfies f(S ∪ {i}) − f(S) ≥ f(T ∪ {i}) − f(T) for all S ⊆ T ⊆ N and i ∈ N \ T. Submodular functions can be considered as discrete counterparts of convex functions through the continuous relaxation called the Lovász extension (Lovász, 1983).

Meanwhile, in many practical situations, utility functions may not necessarily be submodular. Even in those cases, however, submodularity is often approximately satisfied, as in feature selection (Das and Kempe, 2011; Yu and Liu, 2004), boosting influence spread (Lin et al., 2018), data summarization (Balkanski et al., 2016), and combinatorial auctions (Conitzer et al., 2005). For this reason, the optimization of approximately submodular functions has attracted increasing attention recently (Das and Kempe, 2018; Horel and Singer, 2016; Krause and Golovin, 2014). This type of function is characterized by a submodular ratio γ, defined for a set function f as the maximum value γ ∈ [0, 1] such that γ(f(T ∪ {i}) − f(T)) ≤ f(S ∪ {i}) − f(S) holds for all S ⊆ T ⊆ N and i ∈ N \ T. That is, the submodular ratio measures how close the function f is to being submodular (Das and Kempe, 2011; Johnson et al., 2016).
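To make the definition concrete, the following sketch brute-forces the submodular ratio γ of a small set function by checking every pair S ⊆ T and element i ∉ T. The function names and the toy coverage function are our own illustration, not from the paper, and the enumeration is only viable for tiny ground sets.

```python
from itertools import combinations

def submodular_ratio(f, ground):
    """Brute-force the submodular ratio: the largest gamma with
    gamma * (f(T | {i}) - f(T)) <= f(S | {i}) - f(S)
    for all S <= T <= ground and i not in T.  Exponential in |ground|,
    so this is only for tiny illustrative examples."""
    subsets = [frozenset(c) for r in range(len(ground) + 1)
               for c in combinations(ground, r)]
    gamma = 1.0
    for T in subsets:
        for i in ground - T:
            big_gain = f(T | {i}) - f(T)
            if big_gain <= 0:
                continue  # constraint is vacuous for non-positive gains
            for S in subsets:
                if S <= T:
                    small_gain = f(S | {i}) - f(S)
                    gamma = min(gamma, small_gain / big_gain)
    return gamma

# Toy coverage function: submodular, so the ratio is exactly 1.
items = {1: {"a"}, 2: {"a", "b"}, 3: {"c"}}
def cover(S):
    return len(set().union(set(), *(items[i] for i in S)))

print(submodular_ratio(cover, frozenset({1, 2, 3})))  # 1.0
```

A truly submodular function attains γ = 1, while a function with weaker diminishing returns yields some γ < 1.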

In this paper, we address the problem of maximizing a non-decreasing approximately submodular function under a cardinality constraint (hereafter referred to as the approximately submodular function maximization (ASFM) problem):

maximize f(S) subject to |S| ≤ k, S ⊆ N,  (1)

where k is a positive integer defining the cardinality constraint. A set function f is non-decreasing if f(S) ≤ f(T) holds for all S ⊆ T ⊆ N. Das and Kempe (2011) presented a greedy algorithm for the ASFM problem that guarantees a (1 − e^{−γ})-approximation ratio for a given submodular ratio γ. Chen et al. (2015) proposed an A* search algorithm to obtain an exactly optimal solution for the ASFM problem. Their algorithm computes an upper bound by a variant of variable fixing techniques with oracle queries. Their algorithm quickly finds upper bounds; however, the attained upper bounds are often not tight enough to prune nodes of the search tree effectively. Therefore, their algorithm often processes a huge number of nodes of the search tree before obtaining an optimal solution.
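As a reference point, here is a minimal sketch of the standard greedy algorithm for problem (1); the toy weighted-coverage utility is our own illustration.

```python
def greedy_maximize(f, ground, k):
    """Greedy for max f(S) s.t. |S| <= k: repeatedly add the element with the
    largest marginal gain.  For a non-decreasing f with submodular ratio gamma,
    this achieves a (1 - e^{-gamma}) approximation (Das and Kempe, 2011)."""
    S = set()
    for _ in range(k):
        best, best_gain = None, float("-inf")
        for i in ground - S:
            gain = f(S | {i}) - f(S)
            if gain > best_gain:
                best, best_gain = i, gain
        S.add(best)
    return S

# Toy weighted-coverage utility (our own example, not from the paper).
weights = {"a": 3.0, "b": 1.0, "c": 2.0}
items = {1: {"a"}, 2: {"a", "b"}, 3: {"c"}}
def f(S):
    return sum(weights[e] for e in set().union(set(), *(items[i] for i in S)))

print(sorted(greedy_maximize(f, {1, 2, 3}, 2)))  # [2, 3]
```

Each iteration costs one oracle query per remaining element, which is why the greedy algorithm is fast but offers no proof of optimality.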

Here, we present an efficient branch-and-cut algorithm for the ASFM problem based on its binary integer programming (BIP) formulation with an exponential number of constraints. To this end, we first derive a BIP formulation of the ASFM problem and then develop a modified constraint generation algorithm based on the BIP formulation. Unfortunately, the modified constraint generation algorithm is not efficient because of the large number of reduced BIP problems to be solved. To overcome this, we propose an improved constraint generation algorithm in which a promising set of constraints is added at each iteration. We further incorporate it into a branch-and-cut algorithm to attain good upper bounds while solving a smaller number of reduced BIP problems. Finally, we evaluate our algorithms in comparison with the existing ones using three types of well-known benchmark instances and the combinatorial auction problem.

The remainder of this paper is organized as follows. In Section 2, we give a brief review of the existing algorithms. In Section 3, we derive the BIP formulation of the ASFM problem. In Section 4, we propose three algorithms for solving the ASFM problem. We illustrate the effectiveness of the proposed algorithm on the combinatorial auction problem in Section 5 and report computational results for three types of well-known benchmark instances in Section 6. Finally, the paper is concluded in Section 7.

2 Existing Algorithms

Here, we first review the constraint generation algorithm of Nemhauser and Wolsey (1981) for the problem (1) when f is not approximately but exactly submodular (referred to as the submodular function maximization (SFM) problem) in Subsection 2.1, and then the A* search algorithm proposed by Chen et al. (2015) for the ASFM problem in Subsection 2.2.

2.1 Constraint Generation Algorithm for the SFM Problem

Nemhauser and Wolsey (1981) proposed an exact algorithm for the SFM problem, called the constraint generation algorithm. The algorithm starts from a reduced BIP problem with a small subset of constraints and then repeatedly solves the reduced BIP problem while adding a new constraint at each iteration.

Given a set 𝒮 of feasible solutions, we define BIP(𝒮) as the following reduced BIP problem of the SFM problem:

maximize η
subject to η ≤ f(T) + Σ_{i ∈ N \ T} ρ_i(T) x_i  (T ∈ 𝒮),
Σ_{i ∈ N} x_i ≤ k,  x_i ∈ {0, 1}  (i ∈ N),  (2)

where ρ_i(T) = f(T ∪ {i}) − f(T) denotes the marginal gain of adding an element i to T.

The initial set 𝒮 is obtained by applying the greedy algorithm (Minoux, 1978; Nemhauser et al., 1978): the algorithm starts with 𝒮 = {S^1, …, S^k}, where S^j denotes the set of the first j elements of the feasible solution, in the order obtained by the greedy algorithm. We now consider the t-th iteration of the constraint generation algorithm. The algorithm first solves BIP(𝒮) with the current 𝒮 to obtain an optimal solution x_t and the optimal value η_t, which gives an upper bound on the optimal value of the SFM problem. Let X_t denote the solution of the SFM problem corresponding to x_t, and let X* denote the incumbent solution obtained so far. If f(X_t) > f(X*) holds, then the algorithm replaces the incumbent solution X* with X_t. If η_t > f(X_t) holds, the algorithm adds X_t to 𝒮, because the constraint for X_t is violated at (x_t, η_t). That is, the algorithm adds the following constraint to BIP(𝒮) to improve the upper bound on the optimal value of the SFM problem.

η ≤ f(X_t) + Σ_{i ∈ N \ X_t} ρ_i(X_t) x_i.  (3)

These procedures are repeated until the upper bound η_t and the incumbent objective value f(X*) meet.

The pseudo code of this algorithm is shown below. We note that the value of η_t is non-increasing in the number of iterations, and the algorithm must terminate after finitely many iterations.

Algorithm CG
Input:

The initial feasible solution S obtained by the greedy algorithm.

Output:

The incumbent solution X*.

Step1:

Set 𝒮 ← {S^1, …, S^k}, X* ← S, and t ← 1.

Step2:

Solve BIP(𝒮). Let x_t and η_t be an optimal solution and the optimal value of BIP(𝒮), respectively, and let X_t denote the solution corresponding to x_t.

Step3:

If f(X_t) > f(X*) holds, then set X* ← X_t.

Step4:

If η_t = f(X*) holds, then output the incumbent solution X* and exit. Otherwise (i.e., η_t > f(X*)), set 𝒮 ← 𝒮 ∪ {X_t}, t ← t + 1, and return to Step2.
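The loop above can be sketched end to end. Since we cannot bundle a MIP solver here, this sketch solves the reduced BIP by brute-force enumeration over all subsets of size at most k, which only works at toy scale; the function names and the toy utility are our own.

```python
from itertools import combinations

def constraint_generation(f, ground, k, initial):
    """Sketch of Nemhauser-Wolsey constraint generation: repeatedly solve the
    reduced BIP  max eta  s.t.  eta <= f(T) + sum_{i in Y\\T} rho_i(T)  (T in cuts),
    |Y| <= k, then add the maximizer Y as a new cut until the upper bound
    meets the incumbent value.  The reduced BIP is solved by enumeration."""
    cuts = [frozenset(initial)]
    incumbent, lb = frozenset(initial), f(frozenset(initial))
    while True:
        best_Y, ub = None, float("-inf")
        for r in range(k + 1):
            for Y in combinations(sorted(ground), r):
                Yset = frozenset(Y)
                # eta is limited by the tightest cut currently in the model
                eta = min(f(T) + sum(f(T | {i}) - f(T) for i in Yset - T)
                          for T in cuts)
                if eta > ub:
                    best_Y, ub = Yset, eta
        if f(best_Y) > lb:
            incumbent, lb = best_Y, f(best_Y)
        if ub <= lb:
            return incumbent, lb  # upper bound meets incumbent: optimal
        cuts.append(best_Y)  # add the constraint generated for best_Y

weights = {"a": 3.0, "b": 1.0, "c": 2.0}
items = {1: {"a"}, 2: {"a", "b"}, 3: {"c"}}
def f(S):
    return sum(weights[e] for e in set().union(set(), *(items[i] for i in S)))

best, val = constraint_generation(f, {1, 2, 3}, 2, {1})
print(sorted(best), val)  # [2, 3] 6.0
```

In practice each reduced BIP is handed to a MIP solver, and only the cut-management loop remains as shown here.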

2.2 A* Search Algorithm for the ASFM Problem

Chen et al. (2015) proposed an A* search algorithm for the ASFM problem. We first define the search tree of the A* search algorithm. Each node X of the search tree represents a feasible solution, where the root node is set to ∅. The parent of a node X is defined as X \ {i*}, where i* is the element of X with the largest index; for example, the node {1, 2} is the parent of the node {1, 2, 3}. The A* search algorithm employs a list L to manage nodes of the search tree. The value of a node X is defined as f(X) + h(X), where h is a heuristic function. We note that f(X) + h(X) gives an upper bound on the optimal value attainable from the node X.

The initial feasible solution is obtained by the greedy algorithm (Minoux, 1978; Nemhauser et al., 1978). The algorithm repeatedly extracts a node with the largest value from the list L and inserts its children into the list at each iteration. Let X be a node extracted from the list L, and let X* be the incumbent solution (i.e., the best feasible solution obtained so far). The algorithm obtains a feasible solution Y from the node X, e.g., by a variety of greedy algorithms. If f(Y) > f(X*) holds, then the algorithm replaces the incumbent solution X* with Y. Then, all children X′ of the node X satisfying f(X′) + h(X′) > f(X*) are inserted into the list L. The algorithm repeats these procedures until the list L becomes empty.

The pseudo code of this algorithm is shown below.

Algorithm A*
Input:

The initial feasible solution Y.

Output:

The incumbent solution X*.

Step1:

Set L ← {∅} and X* ← Y.

Step2:

If L = ∅ holds, then output the incumbent solution X* and exit.

Step3:

Extract a node X with the largest value f(X) + h(X) from the list L. If f(X) + h(X) ≤ f(X*) holds, then return to Step2.

Step4:

Obtain a feasible solution Y from the node X. If f(Y) > f(X*) holds, then set X* ← Y.

Step5:

Insert into L all children X′ of the node X satisfying f(X′) + h(X′) > f(X*) and |X′| ≤ k. Return to Step2.

We now describe the heuristic function h applied to the A* search algorithm. Let X be the current node of the A* search algorithm. We consider the following reduced problem of the SFM problem for obtaining h(X):

maximize f(X ∪ S) − f(X) subject to S ⊆ N \ X, |S| ≤ k − |X|.  (4)

Let S* be an optimal solution of the reduced problem (4). By approximate submodularity, we obtain for any S ⊆ N \ X the following inequality:

f(X ∪ S) − f(X) ≤ (1/γ) Σ_{i ∈ S} (f(X ∪ {i}) − f(X)).  (5)

Since the reduced problem (4) is still NP-hard, we consider obtaining an upper bound on its optimal value. Let the elements of N \ X be sorted in non-increasing order of the marginal gain f(X ∪ {i}) − f(X), and let X_j denote the set of the first j elements of the sorted set. We then define the heuristic function h by

h(X) = (1/γ) Σ_{i ∈ X_{k−|X|}} (f(X ∪ {i}) − f(X)).  (6)

We note that X ∪ X_{k−|X|} can be taken as the feasible solution Y for the node X (Step 4). If the marginal gain is non-positive for some element, it can be dropped from the sum without weakening the bound. For a given node X, we compute the upper bound f(X) + h(X).
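Here is a runnable sketch of this best-first scheme. It is our own simplification: the heuristic below sums the k − |X| largest single-element marginal gains, an admissible bound for submodular f; the paper's heuristic is tighter and scales by 1/γ for approximately submodular f.

```python
import heapq
from itertools import count

def astar_maximize(f, ground, k):
    """Best-first search in the spirit of Chen et al.'s A* algorithm.  Each
    node X fixes a set of chosen elements; h(X) bounds the extra gain from
    adding up to k - |X| more elements, and nodes are expanded in decreasing
    f(X) + h(X) order, pruning those that cannot beat the incumbent."""
    order = sorted(ground)
    tie = count()  # tie-breaker so the heap never compares frozensets

    def h(X):
        budget = k - len(X)
        gains = sorted((f(X | {i}) - f(X) for i in ground - X), reverse=True)
        return sum(g for g in gains[:budget] if g > 0)

    best_set, best_val = frozenset(), f(frozenset())
    heap = [(-(f(frozenset()) + h(frozenset())), next(tie), frozenset())]
    while heap:
        neg_bound, _, X = heapq.heappop(heap)
        if -neg_bound <= best_val:
            continue  # bound cannot beat the incumbent: prune
        if f(X) > best_val:
            best_set, best_val = X, f(X)
        if len(X) < k:
            last = max((order.index(i) for i in X), default=-1)
            for j in order[last + 1:]:  # children add a larger-indexed element
                child = X | {j}
                bound = f(child) + h(child)
                if bound > best_val:
                    heapq.heappush(heap, (-bound, next(tie), child))
    return best_set, best_val

weights = {"a": 3.0, "b": 1.0, "c": 2.0}
items = {1: {"a"}, 2: {"a", "b"}, 3: {"c"}}
def f(S):
    return sum(weights[e] for e in set().union(set(), *(items[i] for i in S)))

best, val = astar_maximize(f, {1, 2, 3}, 2)
print(sorted(best), val)  # [2, 3] 6.0
```

The quality of h determines how many nodes survive the pruning test, which is exactly the weakness of this approach that the paper's BIP-based bounds address.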

3 IP Formulation

In this section, we formulate the ASFM problem as a BIP problem. First, the submodular ratio γ of a set function f can be written as

γ = min_{S ⊆ T ⊆ N, i ∈ N \ T} (f(S ∪ {i}) − f(S)) / (f(T ∪ {i}) − f(T)),  (7)

where terms with zero denominator are excluded from the minimum. Since computing γ exactly is intractable in general, following Johnson et al. (2016), we define an upper bound of the submodular ratio as follows:

(8)

Proposition 1

A function f is approximately submodular if there exist constants such that any of the following hold:

(i)

(ii)

,

(iii)

(iv)

, .

(v)

.

The proof of Proposition 1 is in the Appendix.

Proposition 2

A function f is non-decreasing approximately submodular if there exist constants such that any of the following hold:

.

,

.

The proof of Proposition 2 is in the Appendix. We next consider the set of solutions satisfying the following condition:

(9)

Proposition 3

Suppose f is a non-decreasing approximately submodular function. Then the condition (9) holds if and only if the corresponding inequalities hold.

The proof of Proposition 3 is in the Appendix. By Proposition 3, we can replace the original constraints with the condition (9). We thus formulate the ASFM problem as the following BIP problem (10).

(10)

where 𝒳 denotes the set of all feasible solutions satisfying the cardinality constraint |S| ≤ k.

4 Proposed Algorithms

We first present a modified constraint generation algorithm for the ASFM problem based on the algorithm (Nemhauser and Wolsey, 1981) in Subsection 2.1. The modified constraint generation algorithm often needs to solve a large number of reduced BIP problems because of generating only one constraint at each iteration. We accordingly propose an improved constraint generation algorithm to generate a promising set of constraints for attaining good upper bounds while solving a smaller number of reduced BIP problems in Subsection 4.2. Moreover, we develop a branch-and-cut algorithm by using the above algorithm in Subsection 4.3.

4.1 Modified Constraint Generation Algorithm

We first define BIP(𝒮) as the following reduced BIP problem of the problem (10):

(11)

where 𝒮 is a set of feasible solutions. We propose a modified constraint generation algorithm for the ASFM problem based on the constraint generation algorithm for the SFM problem (Subsection 2.1), where the proposed algorithm solves the above problem (11) instead of (2).

4.2 Improved Constraint Generation Algorithm

Let x_t and η_t be an optimal solution and the optimal value of BIP(𝒮) at the t-th iteration of the constraint generation algorithm, respectively. We note that η_t gives an upper bound on the optimal value of the problem (10). To improve the upper bound η_t, it is necessary to add to 𝒮 a new feasible solution Y satisfying the following inequality.

(12)

For this purpose, we now consider the following problem, called the separation problem, to generate a new feasible solution to add to 𝒮.

(13)

If the optimal value of the separation problem (13) is less than η_t, then we add an optimal solution of the separation problem (13) to 𝒮; otherwise, we conclude that η_t is the optimal value of the problem (10). We repeat adding a new feasible solution obtained from the separation problem (13) to 𝒮 and solving the updated BIP(𝒮) until the upper and lower bounds meet. This procedure is often called the cutting-plane algorithm and is widely used for mixed integer programs (Marchand et al., 2002). However, the computational cost of solving the separation problem (13) is very expensive, almost the same as that of solving the SFM problem itself. To overcome this, we propose an improved constraint generation algorithm that quickly generates a promising set of constraints.

After solving BIP(𝒮), we obtain at least one feasible solution T ∈ 𝒮 attaining the optimal value of BIP(𝒮), i.e.,

(14)

Let x be the optimal solution of BIP(𝒮) corresponding to this feasible solution T.

We then consider adding an element i to T. When i satisfies x_i = 1, we obtain the following inequality by approximate submodularity:

(15)

In the other case, when x_i = 0, we obtain a similar inequality. By the inequality (15), we observe that it is preferable to add the element i to T to improve the upper bound. Here, we note that it is necessary to remove another element from T if the cardinality constraint is violated.

Based on this observation, we develop a heuristic algorithm that generates a set of new feasible solutions for improving the upper bound. Given a set 𝒮 of feasible solutions, let c_j be the number of feasible solutions in 𝒮 that include an element j. We define the occurrence rate of each element j with respect to 𝒮 as

(16)

For each element j ∈ N, we set a random value δ_j. If there are multiple feasible solutions satisfying the equation (14), then we select one of them at random. We take the k largest elements with respect to the perturbed occurrence rate to generate a feasible solution Y.

Algorithm SUB-ICG
Input:

A set 𝒮 of feasible solutions. A feasible solution X. The number m of feasible solutions to be generated.

Output:

A set 𝒮′ of feasible solutions.

Step1:

Set 𝒮′ ← ∅ and h ← 1.

Step2:

Select a feasible solution satisfying the equation (14) from 𝒮 at random. Set a random value δ_j for each element j ∈ N.

Step3:

Take the k largest elements with respect to the perturbed occurrence rate to generate a feasible solution Y, respecting the cardinality constraint.

Step4:

If Y ∉ 𝒮 ∪ 𝒮′ holds, then set 𝒮′ ← 𝒮′ ∪ {Y} and h ← h + 1.

Step5:

If h > m holds, then output 𝒮′ and exit. Otherwise, return to Step2.
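The element-scoring idea can be sketched as follows. The scoring rule (occurrence rate plus a uniform random perturbation) and all names are our own reading of the procedure, not the paper's exact rule.

```python
import random

def sub_icg(ground, k, pool, m, seed=0):
    """Sketch of SUB-ICG: propose up to m new feasible solutions by ranking
    elements by their occurrence rate across the current pool of solutions,
    perturbed by a random value, and taking the top-k elements."""
    rng = random.Random(seed)
    new_sols = []
    for _ in range(50 * m):  # bounded number of random trials
        rate = {j: sum(j in T for T in pool) / len(pool) for j in ground}
        score = {j: rate[j] + rng.random() for j in ground}
        cand = frozenset(sorted(ground, key=lambda j: -score[j])[:k])
        if cand not in pool and cand not in new_sols:
            new_sols.append(cand)  # only keep solutions not seen before
        if len(new_sols) == m:
            break
    return new_sols

pool = [frozenset({1, 2}), frozenset({2, 3})]
sols = sub_icg({1, 2, 3, 4}, 2, pool, 3)
for S in sols:
    assert len(S) == 2 and S not in pool  # each proposal is new and feasible
```

Elements that appear in many current cuts are favored, so the generated solutions tend to tighten the same region of the feasible set that the reduced BIP is exploring.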

We summarize the improved constraint generation algorithm as follows, in which we define 𝒮₁ as the set of feasible solutions obtained by solving reduced BIP problems and 𝒮₂ as the set of feasible solutions generated by SUB-ICG.

Algorithm ICG
Input:

The initial feasible solution X. The number m of feasible solutions to be generated at each iteration.

Output:

The incumbent solution X*.

Step1:

Set 𝒮₁ ← {X}, 𝒮₂ ← ∅, X* ← X, and t ← 1.

Step2:

Solve BIP(𝒮₁ ∪ 𝒮₂). Let x_t and η_t be an optimal solution and the optimal value of BIP(𝒮₁ ∪ 𝒮₂), respectively, and let X_t denote the solution corresponding to x_t.

Step3:

If f(X_t) > f(X*) holds, then set X* ← X_t.

Step4:

If η_t = f(X*) holds, then output the incumbent solution X* and exit.

Step5:

Set 𝒮₁ ← 𝒮₁ ∪ {X_t}, t ← t + 1, and 𝒮₂ ← 𝒮₂ ∪ SUB-ICG(𝒮₁ ∪ 𝒮₂, X_t, m).

Step6:

For each generated feasible solution Y, if f(Y) > f(X*) holds, then set X* ← Y. Return to Step2.

We note that the improved constraint generation algorithm often attains good lower bounds as well as good upper bounds, because SUB-ICG gives good feasible solutions at each iteration.

4.3 Branch-and-Cut Algorithm

We propose a branch-and-cut algorithm incorporating the improved constraint generation algorithm. We first define the search tree of the branch-and-cut algorithm. Each node of the search tree consists of a pair of sets (F₁, F₀), where elements of F₁ (resp., F₀) correspond to variables fixed to 1 (resp., 0) in the problem (10). The root node is set to (∅, ∅). Each node has two children (F₁ ∪ {i}, F₀) and (F₁, F₀ ∪ {i}), where i ∉ F₁ ∪ F₀.

The branch-and-cut algorithm employs a stack list L to manage nodes of the search tree. The value of a node (F₁, F₀) is defined as the optimal value of the following reduced BIP problem:

(17)

where the variables in F₁ and F₀ are fixed accordingly, and 𝒮 is the set of feasible solutions generated by the improved constraint generation algorithm so far. We note that this optimal value gives an upper bound on the optimal value of the problem (10) at the node (F₁, F₀), i.e., under the condition that x_i = 1 for i ∈ F₁ and x_i = 0 for i ∈ F₀.

We start with the root pair of sets (∅, ∅), where the initial feasible solution is obtained by the greedy algorithm (Minoux, 1978; Nemhauser et al., 1978). To obtain good upper and lower bounds quickly, we first apply a number of initial iterations of the improved constraint generation algorithm. We then repeatedly extract a node from the top of the stack list and insert its children at the top of the stack list at each iteration. Thus, we employ a depth-first search for the tree search of the branch-and-cut algorithm.

Let (F₁, F₀) be a node extracted from the stack list L, and let X* be the incumbent solution of the problem (10) (i.e., the best feasible solution obtained so far). We first solve the reduced BIP problem at the node to obtain an optimal solution and the optimal value. We then generate a set of feasible solutions by SUB-ICG. For each generated feasible solution Y, if f(Y) > f(X*) holds, then we replace the incumbent solution X* with Y. If the optimal value exceeds f(X*), then we insert the two children (F₁ ∪ {i}, F₀) and (F₁, F₀ ∪ {i}) at the top of the stack list L in this order.

To decrease the number of reduced BIP problems to be solved in the branch-and-cut algorithm, we keep the optimal value of the node's reduced BIP problem as an upper bound of each child when it is inserted into the stack list L. If this stored upper bound does not exceed f(X*) when we extract a node from the stack list L, then we can prune the node without solving its reduced BIP problem. We set the upper bound of the root node to +∞. We repeat these procedures until the stack list L becomes empty.

Algorithm BC-ICG
Input:

The initial feasible solution X. The number m of feasible solutions to be generated at each node.

Output:

The incumbent solution X*.

Step1:

Set L ← {(∅, ∅)}, 𝒮 ← {X}, X* ← X, and the upper bound of the root node to +∞.

Step2:

Apply a number of initial iterations of ICG to update the sets 𝒮₁ and 𝒮₂ and the incumbent solution X*.

Step3:

If L = ∅ holds, then output the incumbent solution X* and exit.

Step4:

Extract a node (F₁, F₀) from the top of the stack list L. If its stored upper bound does not exceed f(X*), then return to Step3.

Step5:

Solve the reduced BIP problem at the node (F₁, F₀). Let x and η be an optimal solution and the optimal value, respectively.

Step6:

Update 𝒮 with the feasible solutions generated by SUB-ICG.

Step7:

For each generated feasible solution Y, if f(Y) > f(X*) holds, then set X* ← Y.

Step8:

If η ≤ f(X*) holds, then return to Step3.

Step9:

If there remains a free element i ∉ F₁ ∪ F₀ and |F₁| < k holds, then push the children (F₁, F₀ ∪ {i}) and (F₁ ∪ {i}, F₀) onto the stack list L with upper bound η. Return to Step3.

We note that the branch-and-cut algorithm is similar to that for the traveling salesman problem based on a BIP formulation with an exponential number of subtour elimination constraints (Crowder and Padberg, 1980; Grötschel and Holland, 1991).
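To make the pruning-with-inherited-bounds idea concrete, here is a self-contained depth-first branch-and-bound skeleton. It is our own simplification: the reduced BIP at each node is replaced by a generic `upper_bound` callback, here a top-(k − |F₁|) marginal-gain bound rather than the paper's BIP relaxation.

```python
def branch_and_bound(f, ground, k, upper_bound):
    """Depth-first skeleton mirroring BC-ICG's tree management: each node fixes
    F1 (variables at 1) and F0 (variables at 0), and the parent's bound is
    stored with the child so nodes are pruned without re-solving a relaxation.
    upper_bound(F1, F0) must bound max f(S) over F1 <= S, S ∩ F0 = ∅, |S| <= k."""
    order = sorted(ground)
    best_set, best_val = frozenset(), f(frozenset())
    stack = [(frozenset(), frozenset(), float("inf"))]  # (F1, F0, inherited bound)
    while stack:
        F1, F0, inherited = stack.pop()
        if inherited <= best_val:
            continue  # pruned by the parent's bound; no relaxation solved
        ub = upper_bound(F1, F0)
        if f(F1) > best_val:
            best_set, best_val = F1, f(F1)
        if ub <= best_val:
            continue
        free = [i for i in order if i not in F1 and i not in F0]
        if free and len(F1) < k:
            i = free[0]
            stack.append((F1, F0 | {i}, ub))  # child with i fixed to 0
            stack.append((F1 | {i}, F0, ub))  # child with i fixed to 1 (explored first)
    return best_set, best_val

weights = {"a": 3.0, "b": 1.0, "c": 2.0}
items = {1: {"a"}, 2: {"a", "b"}, 3: {"c"}}
def f(S):
    return sum(weights[e] for e in set().union(set(), *(items[i] for i in S)))

def ub(F1, F0):  # f(F1) plus the k - |F1| largest remaining marginal gains
    free = {1, 2, 3} - F1 - F0
    gains = sorted((f(F1 | {i}) - f(F1) for i in free), reverse=True)
    return f(F1) + sum(g for g in gains[:2 - len(F1)] if g > 0)

best, val = branch_and_bound(f, {1, 2, 3}, 2, ub)
print(sorted(best), val)  # [2, 3] 6.0
```

Storing the parent's optimal value with each child is what lets many nodes be discarded on extraction, which is the main source of the reduction in solved BIP problems reported for BC-ICG.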

5 Example

We consider the combinatorial auction (CA) problem that asks a bidder to select a package of items maximizing its utility, which can be formulated as the ASFM problem. A greedy algorithm obtains a feasible solution quickly; however, it often fails to select important items. To analyze such items, we need an optimal solution as well as other good solutions whose objective values are close to the optimal value. For this purpose, we modify BC-ICG to keep all incumbent solutions obtained so far in addition to the final incumbent solution.

Combinatorial auction (CA). We are given a set N of items. We select a set of items S ⊆ N to make a package. We define u_i as the individual utility of an item i ∈ N and w_ij as the mutual utility of a pair of items i, j ∈ N. The utility of a package of items S is defined as

f(S) = Σ_{i ∈ S} u_i + Σ_{i, j ∈ S, i < j} w_ij.  (18)
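A direct implementation of the package utility of this form; the item names and utility values below are hypothetical illustrations, not taken from the paper's dataset.

```python
from itertools import combinations

def package_utility(S, u, w):
    """Utility of a package S: individual utilities plus pairwise mutual
    utilities over unordered pairs of selected items (missing pairs count 0)."""
    pairs = combinations(sorted(S), 2)
    return sum(u[i] for i in S) + sum(w.get(frozenset(p), 0.0) for p in pairs)

# Hypothetical values for illustration only.
u = {"yogurt": 2.0, "tea": 3.0, "sugar": 1.0}
w = {frozenset({"yogurt", "sugar"}): 1.5}
print(package_utility({"yogurt", "sugar"}, u, w))  # 4.5
```

Because the pairwise terms w_ij can be negative or positive, the resulting set function is generally only approximately submodular, which is why the CA problem fits the ASFM framework.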

We have tested BC-ICG on an instance arising from supermarket transaction data containing 170 items and 9835 transactions (http://www.sci.csueastbay.edu/~esuess/classes/Statistics_6620/Presentations/ml13/groceries.csv). We set the size k of the package. We set the individual utility u_i randomly, while setting the mutual utility w_ij according to the number of times that both items are selected in the same transaction. We obtain a lower bound of the submodular ratio γ by the following formula:

(19)

where the sets are the non-increasing positive and non-decreasing negative ordered sets with respect to the mutual utility w_ij for an item i, respectively, and the bound uses their leading elements. For this instance, we obtained the corresponding lower bounds of the submodular ratio for each package size.

Table 1 shows the frequency of items in the series of solutions obtained by BC-ICG. The optimal solutions obtained by BC-ICG are [“yogurt”, “frozen vegetables”] and [“yogurt”, “sugar”, “organic products”] for k = 2 and k = 3, respectively. On the other hand, the feasible solutions obtained by the greedy algorithm are [“tea”, “yogurt”] and [“tea”, “yogurt”, “frozen vegetables”] for k = 2 and k = 3, respectively. We note that “tea” is not selected in the optimal solutions even though it has the largest value among the 170 items. That is, the greedy algorithm sometimes fails to select important items that constitute the optimal solution.

k | “tea” | “yogurt” | “sugar” | “organic products” | “frozen vegetables” | “softner”
2 | 1 | 3 | 2 | 1 | 2 | 1
3 | 1 | 4 | 2 | 2 | 3 | 0
Table 1: Frequency of items in a series of solutions obtained by BC-ICG.

6 Computational Results

We tested two existing algorithms: (i) the A* search algorithm with the heuristic function h (A*-MOD) and (ii) the modified constraint generation algorithm (MCG), and two proposed algorithms: (iii) the improved constraint generation algorithm (ICG) and (iv) the branch-and-cut algorithm (BC-ICG). All algorithms were tested on a personal computer with a 4.0 GHz Intel Core i7 processor and 32 GB of memory. For MCG, ICG, and BC-ICG, we used a mixed integer programming (MIP) solver, CPLEX 12.8 (2019), for solving reduced BIP problems, and the number m of feasible solutions to be generated at each iteration was set based on preliminary computational experiments.

We report computational results for three types of well-known benchmark instances called facility location (LOC), weighted coverage (COV), and bipartite influence (INF), following Kawahara et al. (2009) and Sakaue and Ishihata (2018). We note that these instances were originally generated for the SFM problem. To generate instances for the ASFM problem, we replace the original utility function with