An efficient branch-and-bound algorithm for submodular function maximization

Submodular function maximization is an attractive optimization model that appears in many real applications. Although a variety of greedy algorithms quickly find good feasible solutions for many instances while guaranteeing a (1-1/e)-approximation ratio, we still encounter many real applications that demand optimal or near-optimal solutions within reasonable computation time. In this paper, we present an efficient branch-and-bound algorithm for the non-decreasing submodular function maximization problem based on its binary integer programming (BIP) formulation with an exponential number of constraints. Nemhauser and Wolsey developed an exact algorithm, called the constraint generation algorithm, that starts from a reduced BIP problem with a small subset of the constraints and repeatedly solves a reduced BIP problem while adding a new constraint at each iteration. However, their algorithm is still computationally expensive due to the many reduced BIP problems to be solved. To overcome this, we propose an improved constraint generation algorithm that adds a promising set of constraints at each iteration. We incorporate it into a branch-and-bound algorithm to attain good upper bounds while solving a smaller number of reduced BIP problems. According to computational results for well-known benchmark instances, our algorithm achieved better performance than the state-of-the-art exact algorithms.



1 Introduction

There are many problems in computer science that are formulated as submodular function maximization, such as active learning (Krause and Golovin, 2014), sensor placement (Golovin and Krause, 2011; Kawahara et al., 2009; Kratica et al., 2001), influence spread (Kempe et al., 2003) and feature selection (Jovic et al., 2015; Yu and Liu, 2004; Das and Kempe, 2011). A submodular function is a set function $f: 2^V \to \mathbb{R}$ satisfying $f(S) + f(T) \ge f(S \cup T) + f(S \cap T)$ for all $S, T \subseteq V$, where $V$ is a finite set. Submodular functions can be considered as discrete counterparts of convex functions through the continuous relaxation called the Lovász extension (Lovász, 1983). We address the problem of maximizing a submodular function under a cardinality constraint, formulated as

$\max\,\{\, f(S) \mid S \subseteq V,\ |S| \le k \,\}$   (1)

where $n = |V|$ is the size of the finite set $V$ and $k \le n$ is a positive integer defining the cardinality constraint. This paper focuses on non-decreasing submodular functions. A submodular function $f$ is non-decreasing if $f(S) \le f(S \cup \{i\})$ holds for all $S \subseteq V$ and $i \in V \setminus S$. This problem is well known to be NP-hard, unlike submodular function minimization (Lee et al., 2009).

For the problem (1), Nemhauser et al. (1978) proposed a greedy algorithm that achieves an approximation ratio of $1 - 1/e$. For large-scale instances, the greedy algorithm is not efficient since it takes $O(nk)$ oracle queries. To overcome this, Minoux (1978) improved the practical performance of the greedy algorithm with an accelerated (lazy evaluation) variant.
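To illustrate Minoux's idea, the following is a minimal Python sketch of a lazy greedy under the standard value-oracle model: stale marginal gains sit in a max-heap and are re-evaluated only when they surface, which is valid because submodularity guarantees gains never increase as the solution grows. The set-accepting oracle `f` and element universe `V` are assumptions of this sketch, not the paper's implementation.

```python
import heapq

def lazy_greedy(f, V, k):
    """Minoux-style accelerated greedy (a sketch, not the paper's code).
    f: value oracle for a non-decreasing submodular function, called on sets;
    V: iterable of comparable elements; k: cardinality bound."""
    S, f_S = set(), f(set())
    # Heap entries: (negated gain, element, round in which the gain was computed).
    heap = [(-(f({i}) - f_S), i, 0) for i in V]
    heapq.heapify(heap)
    rnd = 0
    while heap and len(S) < k:
        neg_gain, i, computed = heapq.heappop(heap)
        if computed == rnd:          # gain is current: greedily accept i
            S.add(i)
            f_S -= neg_gain          # f(S + i) = f(S) + gain
            rnd += 1
        else:                        # gain may be stale: recompute and push back
            heapq.heappush(heap, (-(f(S | {i}) - f_S), i, rnd))
    return S
```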

Although the greedy algorithm quickly finds good feasible solutions for many instances of submodular function maximization, we often encounter real applications that demand optimal or near-optimal solutions within reasonable computation time. For instance, the feature selection problem asks us to select the essential features that represent a model with minimal loss of information. When a strongly relevant feature is involved, the greedy algorithm often fails to select essential elements that cannot be removed without affecting the original conditional target distribution (John et al., 1994; Yu and Liu, 2004). The sensor placement problem involves maximizing the area covered by a limited number of placed sensors. The placement of the sensors is crucial because they are often operated for a long time once installed. We accordingly prefer to find as good a solution as possible, given sufficient computation time.

Recently, Chen et al. (2015) proposed an A* search algorithm (Zeng et al., 2015) to obtain an optimal solution of the non-decreasing submodular function maximization problem. Their algorithm computes an upper bound by a variant of the variable fixing technique based on oracle queries. Sakaue and Ishihata (2018) improved the A* search algorithm to obtain better upper bounds for non-decreasing submodular function maximization with a knapsack constraint. Their algorithms quickly find good upper bounds; however, the attained upper bounds are often not tight enough to prune nodes of the search tree effectively. Therefore, their algorithms often process many nodes of the search tree before obtaining an optimal solution.

In this paper, we propose an efficient branch-and-bound algorithm for the non-decreasing submodular function maximization problem based on the following binary integer programming (BIP) formulation (Nemhauser and Wolsey, 1981):

maximize   $\eta$
subject to $\eta \le f(S) + \sum_{i \in V \setminus S} \rho_i(S)\, x_i, \quad S \in \mathcal{F},$
           $\sum_{i \in V} x_i \le k,$
           $x_i \in \{0, 1\}, \quad i \in V,$   (2)

where $\rho_i(S) = f(S \cup \{i\}) - f(S)$ for all $i \in V \setminus S$, and $\mathcal{F}$ denotes the set of all feasible solutions satisfying the cardinality constraint $|S| \le k$.

Nemhauser and Wolsey (1981) previously proposed an exact algorithm, called the constraint generation algorithm, based on the BIP formulation (2). The size of the BIP formulation grows exponentially with $n$ since it has more than $\binom{n}{k}$ constraints. To overcome this, they proposed the constraint generation algorithm, which starts from a reduced BIP problem with a small subset of the constraints. Their algorithm repeatedly solves a reduced BIP problem while adding a new constraint at each iteration. Unfortunately, this is not efficient in practice because the algorithm requires solving many reduced BIP problems. They also proposed a branch-and-bound algorithm that solves linear programming (LP) relaxations of the reduced BIP problems to obtain upper bounds. Their branch-and-bound algorithm, however, cannot prune nodes of the search tree efficiently because the LP relaxations often give much worse upper bounds than the reduced BIP problems.

Kawahara et al. (2009) proposed an exact algorithm that updates a lower bound based on Tuy's cutting plane method (Tuy, 1964). Their algorithm reformulates the submodular function maximization problem (1) using the Lovász extension (Lovász, 1983). It iteratively finds a feasible solution and a unique hyperplane that cuts off a feasible subset which clearly does not include any better feasible solutions. To obtain an upper bound, they introduced the constraint generation algorithm (Nemhauser and Wolsey, 1981), adding the obtained feasible solution as a constraint.

Both of the exact algorithms (Nemhauser and Wolsey, 1981; Kawahara et al., 2009) often need to solve a large number of reduced BIP problems because they generate only one or two constraints at each iteration. To overcome this, we propose in this paper an improved constraint generation algorithm that adds a promising set of constraints at each iteration. We incorporate it into a branch-and-bound algorithm to attain good upper bounds while solving a smaller number of reduced BIP problems. To improve the efficiency of the branch-and-bound algorithm, we also introduce a few techniques to attain good upper and lower bounds quickly.

2 Existing algorithms

We review the A* search algorithms proposed by Chen et al. (2015) and Sakaue and Ishihata (2018), and the constraint generation algorithm proposed by Nemhauser and Wolsey (1981) for the non-decreasing submodular function maximization problem.

2.1 A* Search Algorithm

We first define the search tree of the A* search algorithm. Each node of the search tree represents a feasible solution $S$, where the root node is set to $\emptyset$. The parent of a node $S$ is defined as $S \setminus \{i_{\max}\}$, where $i_{\max} \in S$ is the element with the largest index. For example, node $\{1, 2\}$ is the parent of node $\{1, 2, 4\}$, since $i_{\max} = 4$.

The A* search algorithm employs a list $L$ to manage nodes of the search tree. The value of a node $S$ is defined as $g(S) = f(S) + h(S)$, where $h$ is a heuristic function. We note that $g(S)$ gives an upper bound on the optimal value of the problem (1) at the node $S$.

The initial feasible solution $S_0$ is obtained by the greedy algorithm (Nemhauser et al., 1978; Minoux, 1978). At each iteration, the algorithm extracts a node with the largest value $g(S)$ from the list $L$ and inserts its children into the list $L$. Let $S$ be a node extracted from the list $L$, and $S^*$ be the best feasible solution obtained so far. The algorithm applies the greedy algorithm to the node $S$ to obtain a feasible solution $S'$. If $f(S') > f(S^*)$ holds, then the algorithm replaces the best feasible solution $S^*$ with $S'$. Then, all children $S \cup \{i\}$ of the node $S$ satisfying $g(S \cup \{i\}) > f(S^*)$ are inserted into the list $L$. The algorithm repeats these procedures until the list $L$ becomes empty.

Algorithm A
Input:

The initial feasible solution $S_0$.

Output:

An optimal solution $S^*$.

Step 1:

Set $S^* \leftarrow S_0$ and $L \leftarrow \{\emptyset\}$.

Step 2:

If $L = \emptyset$ holds, then output an optimal solution $S^*$ and exit.

Step 3:

Extract a node $S$ with the largest value $g(S)$ from the list $L$. If $g(S) \le f(S^*)$ holds, then return to Step 2.

Step 4:

Apply the greedy algorithm from the node $S$ to obtain $S'$. If $f(S') > f(S^*)$ holds, then set $S^* \leftarrow S'$.

Step 5:

Set $L \leftarrow L \cup \{S \cup \{i\}\}$ for all children $S \cup \{i\}$ of the node $S$ satisfying $g(S \cup \{i\}) > f(S^*)$ and $|S \cup \{i\}| \le k$. Return to Step 2.
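As a concrete illustration of Algorithm A, the sketch below realizes the list $L$ with a max-heap (via negated keys). The oracle `f`, the heuristic `h` (e.g., the MOD or DOM functions below), and the greedy completion routine `greedy` are caller-supplied assumptions; the ground set is taken as $\{0, \dots, n-1\}$ so that children only add elements larger than $\max S$, matching the parent definition above.

```python
import heapq, itertools

def a_star_maximize(f, h, n, k, greedy):
    """Sketch of Algorithm A. f: set-function oracle; h: heuristic such that
    g(S) = f(S) + h(S) upper-bounds the best feasible value reachable from S;
    greedy(S): completes S to a feasible solution with at most k elements."""
    best = greedy(frozenset())                 # initial solution S0 (Step 1)
    tie = itertools.count()                    # tie-breaker for heap entries
    root = frozenset()
    L = [(-(f(root) + h(root)), next(tie), root)]
    while L:                                   # Step 2
        neg_g, _, S = heapq.heappop(L)         # node with the largest g(S)
        if -neg_g <= f(best):                  # Step 3: bound cannot beat incumbent
            continue
        S_prime = greedy(S)                    # Step 4: greedy completion
        if f(S_prime) > f(best):
            best = S_prime
        if len(S) < k:                         # Step 5: insert promising children
            start = (max(S) + 1) if S else 0   # children add larger indices only
            for i in range(start, n):
                child = S | {i}
                g_child = f(child) + h(child)
                if g_child > f(best):
                    heapq.heappush(L, (-g_child, next(tie), child))
    return best
```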

We then illustrate the two heuristic functions $h_{\mathrm{MOD}}$ (Chen et al., 2015) and $h_{\mathrm{DOM}}$ (Sakaue and Ishihata, 2018) applied to the A* search algorithm.

2.2 Upper bound with modular functions (MOD)

Chen et al. (2015) proposed a heuristic function $h_{\mathrm{MOD}}$. Let $S$ be the current node of the A* search algorithm. We consider the following reduced problem of the problem (1) for obtaining $h_{\mathrm{MOD}}(S)$:

$\max\,\{\, f(S \cup T) \mid T \subseteq V',\ |T| \le k' \,\}$   (3)

where $V' = V \setminus S$ and $k' = k - |S|$. Let $T^*$ be an optimal solution of the reduced problem (3). By submodularity, we obtain, for any $T \subseteq V'$, the following inequality:

$f(S \cup T) \le f(S) + \sum_{i \in T} \rho_i(S)$   (4)

Since the reduced problem (3) is still NP-hard, we consider obtaining an upper bound on $f(S \cup T^*)$. Let $V' = \{i_1, \dots, i_{|V'|}\}$ be ordered so that $\rho_{i_1}(S) \ge \cdots \ge \rho_{i_{|V'|}}(S)$, i.e., non-increasing with respect to $\rho_i(S)$ for $i \in V'$. We assume that $|V'| > k'$, because otherwise we can obtain the upper bound by computing $f(S) + \sum_{i \in V'} \rho_i(S)$. Let $V'_{k'}$ denote the set of the first $k'$ elements of the sorted set $V'$. We then define a heuristic function by

$h_{\mathrm{MOD}}(S) = \sum_{i \in V'_{k'}} \rho_i(S)$   (5)

Since any feasible $T$ in the reduced problem (3) satisfies $|T| \le k'$, the inequality (4) yields $f(S \cup T^*) \le f(S) + h_{\mathrm{MOD}}(S)$ by submodularity. For a given node $S$, we therefore compute an upper bound $f(S) + h_{\mathrm{MOD}}(S)$.
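Concretely, $h_{\mathrm{MOD}}$ amounts to summing the $k - |S|$ largest marginal gains, which a short sketch makes explicit (the set-accepting oracle `f` is again an assumption):

```python
def h_mod(f, V, S, k):
    """MOD heuristic: sum of the k - |S| largest gains rho_i(S) over i not in S.
    By submodularity, f(S) + h_mod(f, V, S, k) upper-bounds every feasible
    completion of S, which is how the A* search uses it."""
    f_S = f(S)
    gains = sorted((f(S | {i}) - f_S for i in V if i not in S), reverse=True)
    return sum(gains[:max(k - len(S), 0)])
```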

2.3 Upper bound with dominant elements (DOM)

Sakaue and Ishihata (2018) proposed another heuristic function $h_{\mathrm{DOM}}$ based on dominant elements. We define $G = (g_1, \dots, g_k)$ as the ordered set of elements added to the current solution by the greedy algorithm. Let $g_j$ and $G_j$ denote the $j$-th element of the ordered set $G$ and the set of the first $j$ elements of $G$, respectively. We define

(6)

where

(7)

We then define a heuristic function $h_{\mathrm{DOM}}$ by

(8)

The inequality $h_{\mathrm{DOM}}(S) \le h_{\mathrm{MOD}}(S)$ holds for any given $S$. For a given node $S$, we compute an upper bound $f(S) + h_{\mathrm{DOM}}(S)$.

2.4 Constraint generation algorithm

Nemhauser and Wolsey (1981) proposed an exact algorithm, called the constraint generation algorithm, that starts from a reduced BIP problem with a small subset of the constraints. The algorithm repeatedly solves a reduced BIP problem while adding a new constraint at each iteration. Given a set of feasible solutions $\mathcal{F}' \subseteq \mathcal{F}$, we define $\mathrm{BIP}(\mathcal{F}')$ as the following reduced BIP problem of the problem (2):

maximize   $\eta$
subject to $\eta \le f(S) + \sum_{i \in V \setminus S} \rho_i(S)\, x_i, \quad S \in \mathcal{F}',$
           $\sum_{i \in V} x_i \le k,$
           $x_i \in \{0, 1\}, \quad i \in V.$   (9)

The initial feasible solution $S_0$ is obtained by the greedy algorithm (Nemhauser et al., 1978; Minoux, 1978). Their algorithm starts with the set $\mathcal{F}'_1 = \{S_0\}$. Consider the algorithm at the $t$-th iteration. It first solves $\mathrm{BIP}(\mathcal{F}'_t)$ to obtain an optimal solution $x^t$ and the optimal value $\eta_t$, which gives an upper bound on the optimal value of the problem (2). Let $S_t$ denote the feasible solution of the problem (2) corresponding to $x^t$, and $S^*$ denote the best feasible solution of the problem (2) obtained so far. If $f(S_t) > f(S^*)$ holds, then the algorithm replaces the best feasible solution $S^*$ with $S_t$. If $\eta_t > f(S_t)$ holds, the algorithm adds $S_t$ to $\mathcal{F}'_t$, because $x^t$ does not satisfy the constraint of the problem (2) defined by $S_t$. That is, the algorithm adds the following constraint to $\mathrm{BIP}(\mathcal{F}'_t)$ for improving the upper bound on the optimal value of the problem (2):

$\eta \le f(S_t) + \sum_{i \in V \setminus S_t} \rho_i(S_t)\, x_i$   (10)

The algorithm repeats these procedures until $\eta_t$ and $f(S^*)$ meet. We note that the value of $\eta_t$ is non-increasing in the number of iterations, and the algorithm terminates after at most $|\mathcal{F}|$ iterations.

Algorithm CG
Input:

The initial feasible solution $S_0$.

Output:

An optimal solution $S^*$.

Step 1:

Set $t \leftarrow 1$, $\mathcal{F}'_1 \leftarrow \{S_0\}$, and $S^* \leftarrow S_0$.

Step 2:

Solve $\mathrm{BIP}(\mathcal{F}'_t)$. Let $x^t$ and $\eta_t$ be an optimal solution and the optimal value of $\mathrm{BIP}(\mathcal{F}'_t)$, respectively, and let $S_t$ be the feasible solution corresponding to $x^t$.

Step 3:

If $f(S_t) > f(S^*)$ holds, then set $S^* \leftarrow S_t$.

Step 4:

If $\eta_t = f(S^*)$ holds, then output an optimal solution $S^*$ and exit. Otherwise (i.e., $\eta_t > f(S^*)$), set $\mathcal{F}'_{t+1} \leftarrow \mathcal{F}'_t \cup \{S_t\}$, $t \leftarrow t + 1$, and return to Step 2.
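To make the loop concrete, here is a compact sketch of Algorithm CG that uses PuLP with its bundled CBC solver as a generic BIP back end (a stand-in choice for this sketch; Section 4 uses CPLEX). A small tolerance replaces the exact test $\eta_t = f(S^*)$ to guard against floating-point noise.

```python
import pulp

def constraint_generation(f, V, k, S0):
    """Sketch of Algorithm CG. f: set-function oracle called on frozensets;
    S0: initial greedy solution. Returns an optimal solution of problem (1)."""
    best = frozenset(S0)
    cuts = [frozenset(S0)]                       # F'_t: solutions defining constraints
    while True:
        prob = pulp.LpProblem("reducedBIP", pulp.LpMaximize)
        x = {i: pulp.LpVariable(f"x_{i}", cat="Binary") for i in V}
        eta = pulp.LpVariable("eta")             # free continuous variable
        prob += eta                              # objective: maximize eta
        prob += pulp.lpSum(x.values()) <= k      # cardinality constraint
        for S in cuts:                           # eta <= f(S) + sum rho_i(S) x_i
            f_S = f(S)
            prob += eta <= f_S + pulp.lpSum(
                (f(S | {i}) - f_S) * x[i] for i in V if i not in S)
        prob.solve(pulp.PULP_CBC_CMD(msg=False))
        S_t = frozenset(i for i in V if x[i].value() > 0.5)
        eta_t = eta.value()
        if f(S_t) > f(best):
            best = S_t                           # Step 3: update the incumbent
        if eta_t <= f(best) + 1e-6:              # Step 4: bounds meet, optimal
            return best
        cuts.append(S_t)                         # add violated constraint (10)
```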

3 Proposed algorithms

The constraint generation algorithm (Nemhauser and Wolsey, 1981) often solves a large number of reduced BIP problems because it generates only one constraint at each iteration. We accordingly propose an improved constraint generation algorithm that generates a promising set of constraints for attaining good upper bounds while solving a smaller number of reduced BIP problems.

3.1 Improved constraint generation algorithm

Let $x^t$ and $\eta_t$ be an optimal solution and the optimal value of $\mathrm{BIP}(\mathcal{F}'_t)$ at the $t$-th iteration of the constraint generation algorithm, respectively. We note that $\eta_t$ gives an upper bound on the optimal value of the problem (2). To improve the upper bound $\eta_t$, it is necessary to add to $\mathcal{F}'_t$ a new feasible solution $S$ satisfying the following inequality:

$f(S) + \sum_{i \in V \setminus S} \rho_i(S)\, x^t_i < \eta_t$   (11)

For this purpose, we now consider the following problem, called the pricing problem, to generate a new feasible solution to add to $\mathcal{F}'_t$:

$\min\,\Big\{\, f(S) + \sum_{i \in V \setminus S} \rho_i(S)\, x^t_i \;\Big|\; S \in \mathcal{F} \,\Big\}$   (12)

If the optimal value of the pricing problem (12) is less than $\eta_t$, then we add an optimal solution of the pricing problem (12) to $\mathcal{F}'_t$; otherwise, we conclude that $\eta_t$ is the optimal value of the problem (2). We repeat adding a new feasible solution obtained from the pricing problem (12) to $\mathcal{F}'_t$ and solving the updated $\mathrm{BIP}(\mathcal{F}'_t)$ until $\eta_t$ and $f(S^*)$ meet. This procedure is often called the column generation method (Chvátal, 1983), which is used for LP problems with a huge number of constraints. However, the computational cost of solving the pricing problem (12) is very high, almost the same as that of solving the problem (1). To overcome this, we propose an improved constraint generation algorithm that quickly generates a promising set of constraints.

After solving $\mathrm{BIP}(\mathcal{F}'_t)$, we obtain at least one feasible solution $S \in \mathcal{F}'_t$ attaining the optimal value of $\mathrm{BIP}(\mathcal{F}'_t)$, i.e., satisfying

$\eta_t = f(S) + \sum_{i \in V \setminus S} \rho_i(S)\, x^t_i$   (13)

Let $S_t$ be the feasible solution of the problem (2) corresponding to the optimal solution $x^t$ of $\mathrm{BIP}(\mathcal{F}'_t)$, where we assume $S_t \neq S$. We then consider adding an element $i \in S_t \setminus S$ to $S$, and obtain the following inequality by submodularity:

$f(S \cup \{i\}) + \sum_{j \in V \setminus (S \cup \{i\})} \rho_j(S \cup \{i\})\, x^t_j \;\le\; f(S) + \sum_{j \in V \setminus S} \rho_j(S)\, x^t_j$   (14)

where $x^t_i = 1$ holds due to $i \in S_t$. From the inequality (14), we observe that it is preferable to add the element $i \in S_t \setminus S$ to $S$ for improving the upper bound $\eta_t$. Here, we note that it is necessary to remove another element from $S$ if $|S \cup \{i\}| > k$ holds.

Based on this observation, we develop a heuristic algorithm to generate a set of new feasible solutions for improving the upper bound $\eta_t$. Given a set of feasible solutions $\mathcal{F}'_t$, let $n_i$ be the number of feasible solutions $S \in \mathcal{F}'_t$ including an element $i \in V$. We define the occurrence rate of each element $i \in V$ with respect to $\mathcal{F}'_t$ as

$r_i = n_i / |\mathcal{F}'_t|$   (15)

For each element $i \in V$, we set a random value $\varepsilon_i$ satisfying $0 \le \varepsilon_i \le 1$. If there are multiple feasible solutions satisfying the equation (13), then we select one of them at random. We take the $k$ largest elements with respect to the value $r_i + \varepsilon_i$ to generate a feasible solution $S'$.

Algorithm SUB-ICG
Input:

A set of feasible solutions $\mathcal{F}'_t$. An optimal solution $x^t$ of $\mathrm{BIP}(\mathcal{F}'_t)$. The number of feasible solutions to be generated $M$.

Output:

A set of feasible solutions $\mathcal{Q}$.

Step 1:

Set $\mathcal{Q} \leftarrow \emptyset$ and $m \leftarrow 0$.

Step 2:

Select a feasible solution $S \in \mathcal{F}'_t$ satisfying the equation (13) at random. Set a random value $\varepsilon_i$ for each $i \in V$.

Step 3:

Take the $k$ largest elements with respect to the value $r_i + \varepsilon_i$ to generate a feasible solution $S'$.

Step 4:

If $S' \notin \mathcal{F}'_t \cup \mathcal{Q}$ holds, then set $\mathcal{Q} \leftarrow \mathcal{Q} \cup \{S'\}$ and $m \leftarrow m + 1$.

Step 5:

If $m = M$ holds, then output $\mathcal{Q}$ and exit. Otherwise, return to Step 2.
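The following sketch traces SUB-ICG using the score $r_i + \varepsilon_i$ assumed above, and folds in the filter of inequality (11) that ICG's Step 6 applies afterwards; both choices should be read as one plausible instantiation rather than the paper's exact rule. An attempt counter guards against stalling when few new solutions exist.

```python
import random

def sub_icg(f, cuts, V, k, eta_t, x_t, M, max_tries=None):
    """Sketch of SUB-ICG. cuts: current set F'_t of feasible solutions
    (frozensets); x_t: dict mapping each element to its 0/1 value in the
    BIP optimum; eta_t: current upper bound. The score is an assumed rule."""
    counts = {i: sum(i in S for S in cuts) for i in V}
    r = {i: counts[i] / len(cuts) for i in V}        # occurrence rates (15)
    Q, seen = set(), set(cuts)
    tries = max_tries if max_tries is not None else 50 * M
    while len(Q) < M and tries > 0:
        tries -= 1
        S_new = frozenset(sorted(V, key=lambda i: r[i] + random.random(),
                                 reverse=True)[:k])  # k best-scoring elements
        if S_new in seen:
            continue
        f_S = f(S_new)                               # test inequality (11):
        rhs = f_S + sum((f(S_new | {i}) - f_S) * x_t[i]
                        for i in V if i not in S_new and x_t[i] > 0)
        if rhs < eta_t:                              # S_new would cut off x^t
            Q.add(S_new)
            seen.add(S_new)
    return Q
```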

We summarize the improved constraint generation algorithm as follows, in which $S_t$ denotes the feasible solution obtained by solving $\mathrm{BIP}(\mathcal{F}'_t)$ and $\mathcal{Q}_t$ denotes the set of feasible solutions generated by SUB-ICG.

Algorithm ICG
Input:

The initial feasible solution $S_0$. The number of feasible solutions to be generated at each iteration $M$.

Output:

An optimal solution $S^*$.

Step 1:

Set $t \leftarrow 1$, $\mathcal{F}'_1 \leftarrow \{S_0\}$, and $S^* \leftarrow S_0$.

Step 2:

Solve $\mathrm{BIP}(\mathcal{F}'_t)$. Let $x^t$ and $\eta_t$ be an optimal solution and the optimal value of $\mathrm{BIP}(\mathcal{F}'_t)$, respectively, and let $S_t$ be the feasible solution corresponding to $x^t$.

Step 3:

If $f(S_t) > f(S^*)$ holds, then set $S^* \leftarrow S_t$.

Step 4:

If $\eta_t = f(S^*)$ holds, then output an optimal solution $S^*$ and exit.

Step 5:

Set $\mathcal{Q}_t \leftarrow$ SUB-ICG$(\mathcal{F}'_t, x^t, M)$ and $\mathcal{F}'_{t+1} \leftarrow \mathcal{F}'_t \cup \{S_t\}$.

Step 6:

For each feasible solution $S \in \mathcal{Q}_t$, if $S$ satisfies the inequality (11), then set $\mathcal{F}'_{t+1} \leftarrow \mathcal{F}'_{t+1} \cup \{S\}$. Set $t \leftarrow t + 1$ and return to Step 2.

3.2 Branch-and-bound algorithm

We propose a branch-and-bound algorithm incorporating the improved constraint generation algorithm. The branch-and-bound algorithm is an exact algorithm for optimization problems that combines an implicit enumeration over a search tree with a pruning mechanism based on relaxation problems.

We first define the search tree of the branch-and-bound algorithm. Each node of the search tree consists of a pair of sets $(I_1, I_0)$, where the elements $i \in I_1$ (resp., $i \in I_0$) correspond to variables $x_i$ fixed to $1$ (resp., $0$) in the problem (2). The root node is set to $(\emptyset, \emptyset)$. Each node has two children $(I_1 \cup \{i\}, I_0)$ and $(I_1, I_0 \cup \{i\})$, where $i \in V \setminus (I_1 \cup I_0)$.

The branch-and-bound algorithm employs a stack list $L$ to manage nodes of the search tree. The value of a node $(I_1, I_0)$ is defined as the optimal value $\eta(I_1, I_0)$ of the following reduced BIP problem $\mathrm{BIP}(\mathcal{F}', I_1, I_0)$:

maximize   $\eta$
subject to $\eta \le f(S) + \sum_{i \in V \setminus S} \rho_i(S)\, x_i, \quad S \in \mathcal{F}',$
           $\sum_{i \in V} x_i \le k,$
           $x_i = 1 \ (i \in I_1), \quad x_i = 0 \ (i \in I_0),$
           $x_i \in \{0, 1\}, \quad i \in V,$   (16)

where $\mathcal{F}'$ is the set of feasible solutions generated by the improved constraint generation algorithm so far. We note that $\eta(I_1, I_0)$ gives an upper bound on the optimal value of the problem (2) at the node $(I_1, I_0)$, i.e., under the condition that $x_i = 1$ for $i \in I_1$ and $x_i = 0$ for $i \in I_0$.

We start with the pair of sets $(\emptyset, \emptyset)$ and $\mathcal{F}' = \{S_0\}$, where $S_0$ is the initial feasible solution obtained by the greedy algorithm (Nemhauser et al., 1978; Minoux, 1978). To obtain good upper and lower bounds quickly, we first apply the first $T$ iterations of the improved constraint generation algorithm. We then repeatedly extract a node from the top of the stack list $L$ and insert its children into the top of the stack list $L$.

Let $(I_1, I_0)$ be a node extracted from the stack list $L$, and $S^*$ be the best feasible solution of the problem (2) obtained so far. We first solve $\mathrm{BIP}(\mathcal{F}', I_1, I_0)$ to obtain an optimal solution $\hat{x}$ and the optimal value $\eta(I_1, I_0)$, and let $\hat{S}$ denote the feasible solution corresponding to $\hat{x}$. We then generate a set $\mathcal{Q}$ of feasible solutions by the heuristic algorithm SUB-ICG. For each feasible solution $S \in \mathcal{Q} \cup \{\hat{S}\}$, if $f(S) > f(S^*)$ holds, then we replace the best feasible solution $S^*$ with $S$. If $\eta(I_1, I_0) > f(S^*)$ holds, then we insert the two children $(I_1, I_0 \cup \{i\})$ and $(I_1 \cup \{i\}, I_0)$ into the top of the stack list $L$ in this order.

To decrease the number of reduced BIP problems to be solved in the branch-and-bound algorithm, we keep the optimal value $\eta(I_1, I_0)$ of $\mathrm{BIP}(\mathcal{F}', I_1, I_0)$ as an upper bound $\bar{\eta}(I_1 \cup \{i\}, I_0)$ (resp., $\bar{\eta}(I_1, I_0 \cup \{i\})$) of the child $(I_1 \cup \{i\}, I_0)$ (resp., $(I_1, I_0 \cup \{i\})$) when it is inserted into the stack list $L$. If $\bar{\eta}(I_1, I_0) \le f(S^*)$ holds when we extract a node $(I_1, I_0)$ from the stack list $L$, then we can prune the node without solving $\mathrm{BIP}(\mathcal{F}', I_1, I_0)$. We set the upper bound of the root node to $\bar{\eta}(\emptyset, \emptyset) = \infty$. We repeat these procedures until the stack list $L$ becomes empty.

Algorithm BB-ICG
Input:

The initial feasible solution $S_0$. The number of feasible solutions to be generated at each node $M$.

Output:

An optimal solution $S^*$.

Step 1:

Set $L \leftarrow \{(\emptyset, \emptyset)\}$, $\mathcal{F}' \leftarrow \{S_0\}$, $S^* \leftarrow S_0$, and $\bar{\eta}(\emptyset, \emptyset) \leftarrow \infty$.

Step 2:

Apply the first $T$ iterations of ICG to update the set $\mathcal{F}'$ and the best feasible solution $S^*$.

Step 3:

If $L = \emptyset$ holds, then output an optimal solution $S^*$ and exit.

Step 4:

Extract a node $(I_1, I_0)$ from the top of the stack list $L$. If $\bar{\eta}(I_1, I_0) \le f(S^*)$ holds, then return to Step 3.

Step 5:

Solve $\mathrm{BIP}(\mathcal{F}', I_1, I_0)$. Let $\hat{x}$ and $\eta(I_1, I_0)$ be an optimal solution and the optimal value of $\mathrm{BIP}(\mathcal{F}', I_1, I_0)$, respectively, and let $\hat{S}$ be the feasible solution corresponding to $\hat{x}$.

Step 6:

Set $\mathcal{Q} \leftarrow$ SUB-ICG$(\mathcal{F}', \hat{x}, M)$ and $\mathcal{F}' \leftarrow \mathcal{F}' \cup \mathcal{Q} \cup \{\hat{S}\}$.

Step 7:

For each feasible solution $S \in \mathcal{Q} \cup \{\hat{S}\}$, if $f(S) > f(S^*)$ holds, then set $S^* \leftarrow S$.

Step 8:

If $\eta(I_1, I_0) \le f(S^*)$ holds, then return to Step 3.

Step 9:

Set $L \leftarrow L \cup \{(I_1, I_0 \cup \{i\}), (I_1 \cup \{i\}, I_0)\}$, $\bar{\eta}(I_1, I_0 \cup \{i\}) \leftarrow \eta(I_1, I_0)$, and $\bar{\eta}(I_1 \cup \{i\}, I_0) \leftarrow \eta(I_1, I_0)$, where $i \in V \setminus (I_1 \cup I_0)$ and $\hat{x}_i = 1$ hold. Return to Step 3.
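The overall control flow of BB-ICG fits in a short depth-first skeleton. Here `solve_reduced_bip` and `gen_cuts` are placeholders standing in for the reduced BIP (16) and SUB-ICG; the child with $x_i = 1$ is pushed last so it is explored first, and inherited bounds implement the pruning rule described above.

```python
def branch_and_bound(f, V, k, S0, solve_reduced_bip, gen_cuts, M):
    """Skeleton of BB-ICG. solve_reduced_bip(cuts, I1, I0) is assumed to return
    (eta, S_hat): the optimal value of the reduced BIP (16) with variables in
    I1/I0 fixed to 1/0, and the corresponding solution. gen_cuts(cuts, S_hat, M)
    plays the role of SUB-ICG. Both are placeholders for this sketch."""
    best = frozenset(S0)
    cuts = {frozenset(S0)}
    stack = [((frozenset(), frozenset()), float("inf"))]  # (node, inherited bound)
    while stack:
        (I1, I0), ub = stack.pop()                 # Step 4: take node from the top
        if ub <= f(best):                          # prune with the inherited bound
            continue
        eta, S_hat = solve_reduced_bip(cuts, I1, I0)   # Step 5
        Q = gen_cuts(cuts, S_hat, M)               # Step 6: new constraints
        cuts |= Q | {S_hat}
        for S in Q | {S_hat}:                      # Step 7: update the incumbent
            if f(S) > f(best):
                best = S
        if eta <= f(best):                         # Step 8: node is exhausted
            continue
        free = [i for i in V if i not in I1 and i not in I0]
        branch = next((i for i in free if i in S_hat),
                      free[0] if free else None)   # prefer an element with x_i = 1
        if branch is not None:                     # Step 9: push children
            stack.append(((I1, I0 | {branch}), eta))
            stack.append(((I1 | {branch}, I0), eta))  # x_i = 1 child on top
    return best
```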

3.3 Improved branch-and-bound algorithm

We finally propose an improved branch-and-bound algorithm that introduces (i) the heuristic function $h_{\mathrm{MOD}}$ to obtain a good upper bound quickly and (ii) a local search algorithm to improve the lower bound given by the best feasible solution obtained so far.

The computational cost of evaluating the heuristic function $h_{\mathrm{MOD}}$ is much lower than that of solving a reduced BIP problem. We accordingly compute an upper bound $f(S) + h_{\mathrm{MOD}}(S)$ of the problem (2) at the node $(I_1, I_0)$ before solving $\mathrm{BIP}(\mathcal{F}', I_1, I_0)$, where we set $S = I_1$ for computing $h_{\mathrm{MOD}}(S)$.

To improve the efficiency of the branch-and-bound algorithm, it is also important to improve the lower bound, i.e., the value of the best feasible solution obtained so far. We accordingly introduce a simple local search at each node of the branch-and-bound algorithm. We first apply the greedy algorithm from $I_1$ to obtain an initial feasible solution $S$, where we only consider adding elements $i \in V \setminus (I_1 \cup I_0)$ at each iteration. We then repeatedly replace $S$ with a better feasible solution $S'$ in its neighborhood $\mathrm{NB}(S)$ until no better feasible solution is found in $\mathrm{NB}(S)$. For a given feasible solution $S$, we define an exchange neighborhood as $\mathrm{NB}(S) = \{(S \setminus \{i\}) \cup \{j\} \mid i \in S \setminus I_1,\ j \in V \setminus (S \cup I_0)\}$.

Algorithm LS
Input:

A node $(I_1, I_0)$ of the branch-and-bound algorithm.

Output:

A feasible solution $S$.

Step 1:

Apply the greedy algorithm from $I_1$ to obtain an initial feasible solution $S$.

Step 2:

Find the best feasible solution $S' \in \mathrm{NB}(S)$. If $f(S') > f(S)$ holds, then set $S \leftarrow S'$ and return to Step 2; otherwise, output $S$ and exit.
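A direct sketch of LS, assuming a helper `greedy_from(I1, I0)` that greedily completes $I_1$ to a feasible solution while never selecting elements of $I_0$:

```python
def local_search(f, V, I1, I0, greedy_from):
    """Sketch of LS: greedy completion from the fixed set I1 followed by
    best-improvement moves in the exchange neighborhood NB(S)."""
    S = greedy_from(I1, I0)                        # Step 1: initial solution
    improved = True
    while improved:                                # Step 2: repeat until no gain
        improved = False
        candidates = ((S - {i}) | {j}              # swap i out, j in
                      for i in S - I1              # fixed elements never leave
                      for j in V if j not in S and j not in I0)
        best_nb = max(candidates, key=f, default=None)
        if best_nb is not None and f(best_nb) > f(S):
            S = best_nb
            improved = True
    return S
```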

The improved branch-and-bound algorithm is described by replacing Steps 4 and 8 of the branch-and-bound algorithm as follows.

Step 4:

Extract a node $(I_1, I_0)$ from the top of the list $L$. Set $S \leftarrow$ LS$(I_1, I_0)$. If $f(S) > f(S^*)$ holds, then set $S^* \leftarrow S$. If $\min\{\bar{\eta}(I_1, I_0),\ f(I_1) + h_{\mathrm{MOD}}(I_1)\} \le f(S^*)$ holds, then return to Step 3.

Step 8:

If $\eta(I_1, I_0) \le f(S^*)$ holds, then return to Step 3.

4 Computational Results

We tested three existing algorithms and three proposed algorithms: (i) the A* search algorithm with the heuristic function $h_{\mathrm{MOD}}$ (A*-MOD), (ii) the A* search algorithm with the heuristic function $h_{\mathrm{DOM}}$ (A*-DOM), (iii) the constraint generation algorithm (CG), (iv) the improved constraint generation algorithm (ICG), (v) the branch-and-bound algorithm (BB-ICG), and (vi) the improved branch-and-bound algorithm (BB-ICG+). All algorithms were tested on a personal computer with a 4.0 GHz Intel Core i7 processor and 32 GB memory. For ICG, BB-ICG and BB-ICG+, we used a mixed integer programming (MIP) solver, CPLEX 12.6, for solving the reduced BIP problems, and the number $M$ of feasible solutions to be generated at each iteration was set based on preliminary computational experiments.

We report computational results for two types of well-known benchmark instances called facility location (LOC) and weighted coverage (COV) (Kawahara et al., 2009; Sakaue and Ishihata, 2018).

Facility location (LOC): We are given a set of locations $V$ and a set of clients $C$. We consider selecting a set $S \subseteq V$ of locations at which to build facilities. We define $c_{ij}$ as the benefit that client $j$ attains from a facility at location $i$. Each client attains the benefit from the most beneficial facility among the selected locations. The total benefit for the clients is defined as

$f(S) = \sum_{j \in C} \max_{i \in S} c_{ij}$   (17)
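As a small illustration, objective (17) can be evaluated directly from a benefit matrix; the dictionary-of-lists layout `C[i][j]` below is an assumed data format, not one prescribed by the paper.

```python
def facility_location_value(C, S):
    """Objective (17): every client j gets the benefit of its best facility in S.
    C: dict mapping location i to a list of benefits c_ij over clients j;
    S: selected locations. Empty S yields 0 by convention."""
    if not S:
        return 0.0
    num_clients = len(next(iter(C.values())))
    return sum(max(C[i][j] for i in S) for j in range(num_clients))
```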

Weighted coverage (COV): We are given a set of items $I$ and a set of sensors $V$. Let $I_j \subseteq I$ be the subset of items covered by a sensor $j \in V$, and $w_i$ be the weight of an item $i \in I$. The total weighted coverage for the items is defined as

$f(S) = \sum_{i \in \bigcup_{j \in S} I_j} w_i$   (18)
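Similarly, objective (18) sums the weights of items covered at least once; `I_sets[j]` and `w[i]` below are assumed containers for the coverage sets and item weights.

```python
def weighted_coverage_value(I_sets, w, S):
    """Objective (18): total weight of items covered by at least one sensor in S.
    I_sets: dict mapping sensor j to the set I_j of items it covers;
    w: dict mapping item i to its weight w_i; S: selected sensors."""
    covered = set().union(*(I_sets[j] for j in S)) if S else set()
    return sum(w[i] for i in covered)
```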

We tested all algorithms on 24 classes of randomly generated instances that are characterized by several parameters; the sizes $n$ and the cardinality bounds $k$ are common to both types of instances. For the facility location instances, each benefit $c_{ij}$ is a random value taken from a fixed interval. For the weighted coverage instances, each sensor covers each item independently at random with a fixed probability, and each weight $w_i$ is a random value taken from a fixed interval. For each class, five instances were generated and tested. For all instances, we set the time limit to one hour (3600 seconds).

[Table: results of A*-MOD, A*-DOM, CG, ICG, BB-ICG and BB-ICG+ for the LOC and COV instance classes (entries omitted).]