1 Introduction
Let G = (V, E) be a graph, where V is the vertex set and E is the edge set. Let n = |V| and m = |E|. A vertex subset S ⊆ V is independent if no two vertices in S are adjacent, and dominating if every vertex in V \ S is adjacent to at least one vertex in S. Given a graph, the minimum independent dominating set (MinIDS) problem asks for a smallest vertex subset that is both dominating and independent. The MinIDS problem has many practical applications in data communication and networks [13].
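To make the two defining properties concrete, here is a minimal Python sketch (ours, not part of the paper's implementation) that checks independence and domination on a 5-cycle; the dictionary-based graph representation is an illustrative assumption.

```python
# Illustrative sketch: checking the two defining properties of an
# independent dominating set on a small graph given as an adjacency-set dict.

def is_independent(adj, s):
    """True iff no two vertices of s are adjacent."""
    return all(v not in adj[u] for u in s for v in s if u != v)

def is_dominating(adj, s):
    """True iff every vertex outside s has a neighbor in s."""
    return all(any(u in s for u in adj[v]) for v in adj if v not in s)

# The 5-cycle 0-1-2-3-4-0: {0, 2} is an independent dominating set,
# while {0} alone is independent but fails to dominate vertex 3.
C5 = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 0}}
```

For the 5-cycle above, {0, 2} satisfies both properties, so it is an IDS of size two (which is minimum for this graph).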
There is much literature on the MinIDS problem in the field of discrete mathematics [8]. The problem is NP-hard [6] and is hard even to approximate: there is no constant ε > 0 such that the problem can be approximated within a factor of n^(1−ε) in polynomial time, unless P = NP [11].
From the algorithmic perspective, Liu and Song [15] and Bourgeois et al. [4] proposed exact exponential-time algorithms with polynomial space; their running times are stated in the O* notation, which ignores polynomial factors. Laforest and Phan [14] proposed an exact algorithm based on clique partition, and made an empirical comparison with one of Liu and Song's algorithms in terms of computation time. Davidson et al. [5] proposed an integer linear optimization model for the weighted version of the MinIDS problem (i.e., weights are given to edges as well as vertices, and the weight of an edge {u, v} is counted as cost if the edge is used to assign a nonsolution vertex u to a solution vertex v; every nonsolution vertex u is automatically assigned to an adjacent solution vertex v such that the weight of {u, v} is the minimum) and performed experimental validation on random graphs. Recently, Wang et al. [20] proposed a tabu-search based memetic algorithm and Wang et al. [21] proposed a metaheuristic algorithm based on GRASP (greedy randomized adaptive search procedure). They showed their effectiveness on DIMACS instances, in comparison with CPLEX 12.6 and LocalSolver 5.5.
A vertex subset is an IDS iff it is a maximal independent set with respect to set inclusion [2]. One can then readily see that the MinIDS problem is equivalent to the maximum minimal vertex cover (MMVC) problem (the complement of a maximal independent set is a minimal vertex cover) and to the minimum maximal clique problem (an independent set of a graph is a clique of its complement graph). Zehavi [22] studied the MMVC problem, which has applications to wireless ad hoc networks, from the viewpoint of fixed-parameter tractability.
For a combinatorially hard problem like the MinIDS problem, it is practically meaningful to develop a heuristic algorithm that obtains a nearly optimal solution in reasonable time. In the present paper, we propose an efficient local search for the MinIDS problem. By the term "efficient", we mean that the proposed local search has a better time bound than a naïve implementation. The local search can serve as a key tool of local improvement in a metaheuristic algorithm, or can be used in an initial solution generator for an exact algorithm. We also expect that it can be extended to the weighted version of the MinIDS problem in future work.
Our strategy is to search for a smallest maximal independent set. Hereafter, we may call a maximal independent set simply a solution. In the proposed local search, we use the d-swap as the neighborhood operation. Given a solution S, a d-swap refers to the operation of obtaining another solution by dropping exactly d vertices from S and then adding any number of vertices to it. The d-neighborhood of S is the set of all solutions that can be obtained by performing a d-swap on S. We call S d-minimal if no solution S' with |S'| < |S| can be obtained from S by a d'-swap with d' ≤ d.
To speed up the local search, one should search the neighborhood for an improved solution as efficiently as possible. For this, we propose neighborhood search algorithms for d = 2 and 3. When d = 2 (resp., d = 3 and a given solution is 2-minimal), the algorithm finds an improved solution or decides that no such solution exists in O(nΔ) (resp., O(nΔ^3)) time, where Δ denotes the maximum degree of the input graph.
Furthermore, we develop a metaheuristic algorithm named ILPS (Iterated Local & Plateau Search) that repeats the proposed local search and a plateau search iteratively. ILPS is so effective that, among 80 DIMACS graphs, it updates the best-known solution size for five graphs and performs as well as existing methods for the remaining graphs.
The paper is organized as follows. After making preparations in Section 2, we present the neighborhood search algorithms for d = 2 and 3 in Section 3 and describe ILPS in Section 4. We show computational results in Section 5 and then give concluding remarks in Section 6. Some proofs and experimental results are included in the appendix. The source code of ILPS is written in C++ and available at http://puzzle.haraguchis.otaru-uc.ac.jp/minids/.
2 Preliminaries
2.1 Notation and Terminology
For a vertex v ∈ V, we denote by d(v) the degree of v, and by N(v) the set of neighbors of v, i.e., N(v) = {u ∈ V : {u, v} ∈ E}. For S ⊆ V, we define N(S) = (∪_{v∈S} N(v)) \ S. We denote by G[S] the subgraph induced by S. The set S is called a k-subset if |S| = k.
Suppose that S is an independent set. The tightness of a vertex v ∈ V \ S is the number of neighbors of v that belong to S, i.e., τ(v) = |N(v) ∩ S|. We call v k-tight if its tightness is k. In particular, a 0-tight vertex is called free. We denote by T_k the set of k-tight vertices. Then V is partitioned into S, T_0, T_1, ..., T_Δ, where some of these sets may be empty. Let T_{≥k} denote the set of vertices whose tightness is no less than k, that is, T_{≥k} = T_k ∪ T_{k+1} ∪ ... ∪ T_Δ.
An independent set S is a solution (i.e., a maximal independent set) iff T_0 = ∅. We call v ∈ S a solution vertex and v ∈ V \ S a nonsolution vertex. When a solution vertex w and a k-tight vertex v are adjacent, w is a solution neighbor of v, or equivalently, v is a tight neighbor of w.
A d-swap on a solution S is the operation of obtaining another solution S' = (S \ D) ∪ A such that D is a d-subset of S and A is a nonempty subset of V \ S. We call D the dropped subset and A the added subset. The d-neighborhood of S is the set of all solutions obtained by performing a d-swap on S. A solution S is d-minimal if, for every d' ≤ d, the d'-neighborhood of S contains no improved solution S' such that |S'| < |S|. Note that every solution is 1-minimal, since a 1-swap removes one vertex and adds at least one.
If a subset D ⊆ S is dropped from S, then trivially the solution vertices in D become free, and some nonsolution vertices may also become free. Observe that a nonsolution vertex v becomes free iff its solution neighbors are completely contained in D. We denote by Free(D) the set of such vertices, defined as Free(D) = {v ∈ V \ S : N(v) ∩ S ⊆ D}. Clearly the added subset A should be a maximal independent set in the induced subgraph G[D ∪ Free(D)]. We have Free(D) ⊆ N(D), and the tightness of any vertex in Free(D) is at most |D| (at the time before D is dropped from S).
2.2 Data Structure
We store the input graph by means of the typical adjacency list. We maintain a solution based on the data structure that Andrade et al. [1] invented for the maximum independent set problem. For the current solution S, we have an ordering π on all vertices in V such that:

- π(u) < π(v) whenever u ∈ S and v ∈ V \ S;
- π(u) < π(v) whenever u ∈ T_0 and v ∈ T_{≥1};
- π(u) < π(v) whenever u ∈ T_1 and v ∈ T_{≥2};
- π(u) < π(v) whenever u ∈ T_2 and v ∈ T_{≥3}.

Note that the ordering is partitioned into five sections: S, T_0, T_1, T_2 and T_{≥3}. In each section, the vertices are arranged arbitrarily. We also maintain the number of vertices in each section and the tightness τ(v) for every nonsolution vertex v.
Let us describe the time complexities of some elementary operations. We can scan each vertex section in time linear in its size. We can pick up a free vertex (if one exists) in O(1) time. We can drop (resp., add) a vertex v from (resp., to) the solution in O(d(v)) time, since the tightness of each neighbor of v changes by one and a vertex moves between adjacent sections in O(1) time. See [1] for details.
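The constant-time section moves can be illustrated by the following simplified sketch (ours, modeled on the description of Andrade et al.; class and method names are assumptions). All vertices live in one array grouped into the five consecutive sections, and a vertex crosses a section boundary by a single swap with the vertex at that boundary.

```python
# Simplified sketch of the ordered-vertex structure: one array `order`
# grouped into consecutive sections  S | T0 | T1 | T2 | T>=3 ,
# with O(1) moves between adjacent sections via boundary swaps.

class SectionOrder:
    SECTIONS = 5  # 0: S, 1: T0 (free), 2: T1, 3: T2, 4: T>=3

    def __init__(self, n, section_of):
        self.sec = list(section_of)            # current section of each vertex
        self.order = sorted(range(n), key=lambda v: self.sec[v])
        self.pos = [0] * n                     # position of each vertex
        for i, v in enumerate(self.order):
            self.pos[v] = i
        # start[k] = index of the first slot of section k; start[5] = n
        self.start = [0] * (self.SECTIONS + 1)
        for v in range(n):
            self.start[self.sec[v] + 1] += 1
        for k in range(self.SECTIONS):
            self.start[k + 1] += self.start[k]

    def _swap(self, i, j):
        u, v = self.order[i], self.order[j]
        self.order[i], self.order[j] = v, u
        self.pos[u], self.pos[v] = j, i

    def move_right(self, v):
        """Move v from its section k to section k+1 in O(1) time."""
        k = self.sec[v]
        self._swap(self.pos[v], self.start[k + 1] - 1)  # park v at the boundary
        self.start[k + 1] -= 1                          # shrink section k
        self.sec[v] = k + 1

    def move_left(self, v):
        """Move v from its section k to section k-1 in O(1) time."""
        k = self.sec[v]
        self._swap(self.pos[v], self.start[k])          # park v at the boundary
        self.start[k] += 1                              # grow section k-1
        self.sec[v] = k - 1

    def section(self, k):
        """All vertices of section k (scanned in time linear in its size)."""
        return self.order[self.start[k]:self.start[k + 1]]
```

Dropping or adding a solution vertex v then amounts to O(d(v)) such moves, one per neighbor whose tightness changes.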
Before closing this preparatory section, we mention the time complexities of two essential operations.
Proposition 1
Let D be a subset of S. We can list all vertices in Free(D) in O(Δ|D|) time.
Proof: We let every vertex v have an integral counter, which we denote by c(v). It suffices to scan the adjacency lists of the vertices in D twice. In the first scan, we initialize the counter value as c(v) ← 0 for every neighbor v of every solution vertex w ∈ D. In the second, we increase the counter of v by one (i.e., c(v) ← c(v) + 1) when v is found in the adjacency list of some w ∈ D. Then, if c(v) = τ(v) holds, we output v as a member of Free(D), since the equality represents that every solution neighbor of v is contained in D. Obviously the time bound is O(Σ_{w∈D} d(w)) = O(Δ|D|).
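The two-scan counting argument of Proposition 1 can be transcribed directly (our sketch; `adj` is an adjacency-set dictionary and `tau` the maintained tightness of each nonsolution vertex):

```python
# Two-scan counting from Proposition 1: a nonsolution vertex v enters
# Free(D) exactly when all tau[v] of its solution neighbors lie in D.

def free_of(adj, tau, D):
    cnt = {}
    for w in D:                # first scan: reset the counters of N(D)
        for v in adj[w]:
            cnt[v] = 0
    for w in D:                # second scan: count solution neighbors inside D
        for v in adj[w]:
            cnt[v] += 1
    return {v for v, c in cnt.items() if c == tau[v]}
```

Note that since S is independent, every neighbor of a vertex of D is a nonsolution vertex, so `tau` is defined for every counted vertex.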
Proposition 2
Let D be a subset of S. For any nonsolution vertex v, we can decide whether v is adjacent to all vertices in Free(D) \ {v} in O(Δ|D|) time.
Proof: We use the algorithm of Proposition 1. As preprocessing, we set the counter of each neighbor u of v to 0, i.e., c(u) ← 0, which can be done in O(d(v)) time. After we acquire Free(D) by running the algorithm of Proposition 1, we can see whether v is adjacent to all other vertices in Free(D) in O(d(v)) time by counting the number of vertices u such that u ∈ N(v) and c(u) = τ(u). If the number equals (resp., does not equal) |Free(D) \ {v}|, then we conclude that the answer is true (resp., false).
3 Local Search
Assume that, for some d ≥ 2, a given solution S is d'-minimal for every d' < d. Such a d always exists, e.g., d = 2, since every solution is 1-minimal. In this section, we consider how to find an improved solution in the d-neighborhood of S, or to conclude that S is d-minimal, efficiently.
Let us describe how time-consuming a naïve implementation is. In a naïve implementation, we search all d-subsets of S as candidates of the dropped subset D; the number of them is C(|S|, d) = O(n^d). Furthermore, for each D, there are up to 2^{|D ∪ Free(D)|} candidates of the added subset A, where |D ∪ Free(D)| can be as large as d(Δ + 1). The number of possible pairs (D, A) is thus up to O(n^d · 2^{d(Δ+1)}).
In the proposed neighborhood search algorithm, we do not search dropped subsets but added subsets; we generate a dropped subset from each added subset. When d ∈ {2, 3}, the added subsets can be searched more efficiently than the dropped subsets. This search strategy stems from Proposition 3, a necessary condition on a subset D for an improvement to be possible by a swap that drops D. We introduce the condition in Section 3.1.
Then in Section 3.2 (resp., 3.3), we present a neighborhood search algorithm that finds an improved solution or decides that no such solution exists for d = 2 (resp., 3), which runs in O(nΔ) (resp., O(nΔ^3)) time.
3.1 A Necessary Condition for Improvement
Let D be a subset of S. If there is a subset A ⊆ D ∪ Free(D) such that A is a maximal independent set in G[D ∪ Free(D)] and |A| < |D|, then we have an improved solution (S \ D) ∪ A. The connectivity of G[D ∪ Free(D)] is necessary for the existence of such an A, as stated in the following proposition.
Proposition 3
Suppose that a solution S is d'-minimal for every d' < d, for some integer d ≥ 2. Let D be a d-subset of S. There is a maximal independent set A in G[D ∪ Free(D)] such that |A| < |D| only when the subgraph G[D ∪ Free(D)] is connected.
Proof: Suppose that G[D ∪ Free(D)] is not connected. Let q ≥ 2 be the number of connected components and V_i be the set of vertices in the i-th component (i = 1, ..., q), so that D ∪ Free(D) = V_1 ∪ ... ∪ V_q. Each D_i := D ∩ V_i is nonempty, since every vertex of Free(D) has at least one solution neighbor (S is a solution, so its tightness is at least one), that neighbor belongs to D, and hence to the same component. Since q ≥ 2, we have |D_i| < |D| for every i.
The maximal independent set A is a subset of D ∪ Free(D). We partition A into A_1 ∪ ... ∪ A_q, where A_i = A ∩ V_i. Each A_i is a nonempty maximal independent set of the i-th component G[V_i]. As |A| < |D|, |A_i| < |D_i| holds for some i. Then (S \ D_i) ∪ A_i is an improved solution obtained by a |D_i|-swap with |D_i| < d, which contradicts the assumption that S is d'-minimal for every d' < d.
3.2 2-Neighborhood Search
Applying Proposition 3 to the case of d = 2, we have the following proposition.
Proposition 4
Let D be a 2-subset of S. There is a nonsolution vertex v such that (S \ D) ∪ {v} is a solution only when there is a 2-tight vertex in Free(D).
We can say more about Proposition 4. The added vertex v itself should be 2-tight since, if not (i.e., if v is 1-tight), {v} would not be a maximal independent set in G[D ∪ Free(D)]: writing D = {z1, z2}, a 1-tight v is adjacent to only one of z1 and z2 by the definition of 1-tightness.
In summary, if there is an improved solution (S \ D) ∪ {v}, then v is 2-tight and has the two vertices of D, say z1 and z2, as its solution neighbors. Hence, instead of searching all 2-subsets of S, we scan all 2-tight vertices, and for each 2-tight vertex v, we take D = N(v) ∩ S as the candidate of the dropped subset. We have the following theorem.
Theorem 1
Given a solution S, we can find an improved solution in the 2-neighborhood or conclude that S is 2-minimal in O(nΔ) time.
Proof: Since we maintain the solution by means of the vertex ordering, we can scan all the 2-tight vertices in O(n) time. For each 2-tight vertex v, we can detect the two solution neighbors, say z1 and z2, in O(d(v)) = O(Δ) time.
Let D = {z1, z2}. The singleton {v} is a maximal independent set in G[D ∪ Free(D)], and thus yields an improved solution, iff v is adjacent to all other vertices in Free(D). Whether v is adjacent to all other vertices in Free(D) is decided in O(Δ|D|) = O(Δ) time, as stated in Proposition 2. If it is the case, then we can construct the improved solution (S \ D) ∪ {v} in O(Δ) time, as the vertex ordering takes O(Δ) time to drop a vertex from the solution and O(Δ) time to add one to it [1]. Otherwise, we can conclude that (S \ D) ∪ {v} is not a solution because some vertices in Free(D) are not dominated.
We have seen that, for each 2-tight vertex v, it takes O(Δ) time to find an improved solution or to conclude that (S \ D) ∪ {v} is not a solution. Therefore, the overall running time is bounded by O(nΔ).
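The whole 2-neighborhood search can be sketched as follows (our illustrative Python; plain dict/set bookkeeping stands in for the ordered-vertex structure, so only the logic, not the O(nΔ) bound, is reproduced):

```python
# Sketch of the 2-neighborhood search of Theorem 1: scan the 2-tight
# vertices and test whether one of them alone can replace its two
# solution neighbors.

def two_improvement(adj, S):
    tau = {v: len(adj[v] & S) for v in adj if v not in S}
    for v, t in tau.items():
        if t != 2:
            continue                          # only 2-tight vertices qualify
        D = adj[v] & S                        # its two solution neighbors
        free = {u for u in tau if adj[u] & S <= D}   # Free(D); contains v
        if free <= adj[v] | {v}:              # v dominates all of Free(D)
            return (S - D) | {v}              # improved solution (size - 1)
    return None                               # S is 2-minimal
```

For example, on the path 0-1-2-3-4 with solution {0, 2, 4}, vertex 1 is 2-tight and replaces {0, 2}, giving the smaller solution {1, 4}.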
3.3 3-Neighborhood Search
We have the following proposition by applying Proposition 3 to the case of d = 3.
Proposition 5
Suppose that S is a 2-minimal solution and that D = {z1, z2, z3} is a 3-subset of S. There is a subset A of D ∪ Free(D) such that A is a maximal independent set in G[D ∪ Free(D)] and |A| < |D| only when either of the following holds:
(a) there is a 3-tight vertex in Free(D) that has z1, z2 and z3 as its solution neighbors;
(b) there are two 2-tight vertices in Free(D) such that one has z1 and z2 as its solution neighbors and the other has z2 and z3 as its solution neighbors (up to renaming of z1, z2, z3).
Let us make an observation on the added subset. Suppose that, for an arbitrary 3-subset D ⊆ S, there is A ⊆ D ∪ Free(D) such that A is a maximal independent set in G[D ∪ Free(D)] and |A| < |D| = 3. When |A| = 1, the only vertex in A is 3-tight, since otherwise some vertex in D would not be dominated. When |A| = 2, at least one of the two vertices in A is either 2-tight or 3-tight; if both were 1-tight, some vertex of D would not be dominated. Concerning the tightness, the following four situations are possible:
(i) A = {v} and v is 3-tight;
(ii) A = {v, v'}, v is 3-tight, and v' is k-tight for some k ∈ {1, 2, 3};
(iii) A = {v, v'}, v is 2-tight, and v' is 2-tight;
(iv) A = {v, v'}, v is 2-tight, and v' is 1-tight.
In cases (ii) to (iv), the vertices v and v' are not adjacent. We illustrate (i) to (iv) in Figure 1.
Figure 1: Illustration of the situations (i) to (iv).
Based on the above, we summarize the search strategy as follows. In order to generate all 3-subsets D of S such that Free(D) satisfies either (a) or (b) of Proposition 5, we scan all 3-tight vertices (for Proposition 5 (a)) and all pairs of 2-tight vertices, say u and u', that have exactly one solution neighbor in common (for Proposition 5 (b)). For (a), we take D = N(v) ∩ S and search for a 1- or 2-subset A that is a maximal independent set in G[D ∪ Free(D)], regarding the 3-tight vertex as the vertex v in (i) and (ii). Similarly, for (b), we take D = (N(u) ∪ N(u')) ∩ S and search for a 2-subset A that is a maximal independent set in G[D ∪ Free(D)], regarding the 2-tight vertex u as the vertex v in (iii) and (iv).
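The candidate generation just described can be sketched as follows (our illustrative Python, not the paper's C++; the naive set-based bookkeeping ignores the stated time bounds). Each candidate dropped 3-subset D is derived from a 3-tight vertex (case (a)) or from a pair of 2-tight vertices sharing exactly one solution neighbor (case (b)):

```python
# Sketch of Section 3.3's candidate generation: dropped 3-subsets D are
# produced from tight vertices rather than by enumerating all C(|S|,3)
# subsets.  Yields (D, witnesses).

def candidate_drops(adj, S):
    tau = {v: adj[v] & S for v in adj if v not in S}
    seen = set()
    for v, nb in tau.items():            # case (a): one 3-tight vertex
        if len(nb) == 3:
            D = frozenset(nb)
            if D not in seen:
                seen.add(D)
                yield D, (v,)
    two = [v for v, nb in tau.items() if len(nb) == 2]
    for i, u in enumerate(two):          # case (b): a pair of 2-tight vertices
        for u2 in two[i + 1:]:
            if len(tau[u] & tau[u2]) == 1:   # exactly one shared neighbor
                yield frozenset(tau[u] | tau[u2]), (u, u2)
```

For each generated D, the actual search then looks for a 1- or 2-subset A that is maximal independent in G[D ∪ Free(D)], as in situations (i) to (iv).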
We have the following theorem on the time complexity of 3neighborhood search. The proof is included in the appendix.
Theorem 2
Given a 2-minimal solution S, we can find an improved solution in the 3-neighborhood or conclude that S is 3-minimal in O(nΔ^3) time.
4 Iterated Local & Plateau Search
In this section, we present a metaheuristic algorithm named ILPS (Iterated Local & Plateau Search) that repeats the proposed local search and the plateau search iteratively.
We show the pseudo-code of ILPS in Algorithm 1. ILPS has four parameters, namely S_0, d, δ and ω, where S_0 is an initial solution, d is the order of the local search (i.e., a d-minimal solution is searched for by LocalSearch_d in Line 6), and δ and ω are integers. The roles of the last two parameters are explained in Section 4.2.
LocalSearch_d in Line 6 is the subroutine that returns a d-minimal solution computed from an initial solution, where d is set to either two or three. When d = 2, it determines a 2-minimal solution by moving to an improved solution repeatedly, as long as the 2-neighborhood search algorithm delivers one. When d = 3, it first finds a 2-minimal solution and then runs the 3-neighborhood search algorithm. If an improved solution is delivered, then the local search moves to the improved solution and seeks a 2-minimal one again, since the new solution is not necessarily 2-minimal. Otherwise, the current solution is 3-minimal.
Below we explain two key ingredients, the plateau search and the vertex penalty, in Sections 4.1 and 4.2 respectively. We remark that they are inspired by Dynamic Local Search for the maximum clique problem [19] and Phased Local Search for the unweighted/weighted maximum independent set and minimum vertex cover problems [18].
4.1 Plateau Search
In the plateau search (referred to as PlateauSearch in Line 7), we search solutions of the same size that can be obtained by swapping a solution vertex w and a nonsolution vertex v. Let P(S) denote the collection of all solutions that are obtainable from S in this way; the size of any solution in P(S) is |S|. We execute the local search for every solution in P(S), and if we find an improved solution S' such that |S'| < |S|, then we do the same for S', i.e., we execute the local search for every solution in P(S'). We repeat this until no improved solution is found, and employ a best solution among those searched as the output of the plateau search.
We emphasize the efficiency of the plateau search: all solutions in P(S) can be listed in O(nΔ) time. Observe that (S \ {w}) ∪ {v} is a solution iff v is 1-tight, w is the only solution neighbor of v, and v is adjacent to all vertices in Free({w}) other than v itself. We can scan all 1-tight vertices in O(n) time. For each 1-tight vertex v, the solution neighbor w is detected in O(d(v)) time, and whether the last condition is satisfied or not is decided in O(Δ) time by Proposition 2. Dropping w from the solution and adding v to it can be done in O(Δ) time.
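The enumeration of plateau moves can be sketched as follows (our illustrative Python with naive set bookkeeping, so only the characterization, not the O(nΔ) bound, is reproduced):

```python
# Sketch of listing the plateau moves: (S \ {w}) | {v} is again a solution
# exactly when v is 1-tight with unique solution neighbor w and v is
# adjacent to every other vertex that dropping w would free.

def plateau_moves(adj, S):
    tau = {v: adj[v] & S for v in adj if v not in S}
    for v, nb in tau.items():
        if len(nb) != 1:
            continue
        (w,) = nb                                    # unique solution neighbor
        free_w = {u for u, nbu in tau.items() if nbu <= {w}}   # Free({w})
        if free_w - {v} <= adj[v]:                   # v dominates the rest
            yield w, v                               # swap w out, v in
```

On a triangle with solution {0}, for instance, both nonsolution vertices yield equal-size swaps, while the center of a path 0-1-2 admits none.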
4.2 Vertex Penalty
In order to avoid search stagnation, one possible approach is to start from a variety of initial solutions. To realize this, we introduce a penalty function p on the vertices. The penalty function is initialized so that p(v) ← 0 for all v ∈ V (Line 3). During the algorithm, p is managed by the subroutine UpdatePenalty (Lines 4 and 12). When the initial solution of the next local search is determined, UpdatePenalty increases the penalty of every vertex in that solution by one, i.e., p(v) ← p(v) + 1. Furthermore, to "forget" the search history of long ago, it reduces the penalties of all vertices once in every δ iterations. The integer δ is the third parameter of ILPS and is called the penalty delay.
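The penalty bookkeeping can be sketched as follows (our assumption-laden Python: the function name and the halving decay rule are illustrative choices, since the exact damping rule is not spelled out here):

```python
# Sketch of UpdatePenalty: penalize the vertices used in each initial
# solution, and periodically damp all penalties every `delay` iterations.
# The halving decay is an assumed example of a damping rule.

def update_penalty(p, solution, iteration, delay):
    for v in solution:
        p[v] += 1                        # reward-avoidance: used vertices cost more
    if iteration % delay == 0:           # "forget" old history periodically
        for v in p:
            p[v] //= 2
    return p
```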
The penalty function is used in the subroutine Kick (Line 11), the initial solution generator, so that vertices with smaller penalties are more likely to be included in the initial solution. Kick generates an initial solution by adding nonsolution vertices (with respect to the incumbent solution) "forcibly" to the incumbent. The added vertices are chosen one by one as follows: in one trial, Kick picks up one nonsolution vertex. It then goes on to the next trial with probability 1 − 1/ω, or stops the selection with probability 1/ω, where the integer ω is the fourth parameter of ILPS. Observe that ω specifies the expected number of added vertices. In the first trial, Kick randomly picks up a nonsolution vertex that has the smallest penalty. In each subsequent trial, letting W denote the set of vertices chosen so far, Kick samples three vertices randomly from the remaining candidates and picks up the one that has the smallest penalty among the three. Suppose that W has been picked up as the result of the trials. Then we construct an independent set S' = (S* \ N(W)) ∪ W, where S* denotes the incumbent solution. This S' may not be a solution, as there may remain free vertices; if so, we repeatedly pick up free vertices by the maximum-degree greedy method until S' becomes a solution. We use the acquired S' as the initial solution of the next local search.

5 Computational Results
We report some experimental results in this section. In Section 5.1, to gain insight into what kinds of instances are difficult, we examine the phase transition of difficulty with respect to the edge density. The next two subsections are devoted to observations on the behavior of the proposed method. In Section 5.2, we show how a single run of the local search improves a given initial solution. In Section 5.3, we show how the penalty delay affects the search. Finally, in Section 5.4, we compare ILPS with the memetic algorithm [20], GRASP+PC [21], CPLEX 12.6 [12] and LocalSolver 5.5 [16] in terms of the solution size, using DIMACS graphs.

All the experiments are conducted on a workstation that carries an Intel Core i7-4770 Processor (up to 3.90 GHz by means of Turbo Boost Technology) and 8 GB of main memory. The installed OS is Ubuntu 16.04. Under this environment, it takes approximately 0.25 s, 1.54 s and 5.90 s to execute dmclique (http://dimacs.rutgers.edu/pub/dsj/clique/) on the instances r300.5.b, r400.5.b and r500.5.b, respectively. The ILPS algorithm is implemented in C++ and compiled by the g++ compiler (ver. 5.4.0) with the -O2 option.
5.1 Phase Transition of Difficulty
The phase transition has been observed for many combinatorial problems [7, 9, 10]. Roughly speaking, overconstrained and underconstrained instances are relatively easy, while intermediately constrained ones tend to be more difficult.
In the MinIDS problem, the amount of constraint is proportional to the edge density p = 2m / (n(n − 1)). We examine the change of difficulty with respect to p, estimating the difficulty of an instance by how long CPLEX 12.8 takes to solve it.
For each value of p, we generate 100 random graphs (Erdős–Rényi model) with n vertices and edge density p, i.e., an edge is drawn between any two vertices independently with probability p. We solve the 100 instances by CPLEX 12.8 and take the average computation time. We set the time limit of each run to 60 s. If CPLEX 12.8 is terminated by the time limit, then we regard the computation time as 60 s.
5.2 A Single Run of Local Search
We show how a single run of the local search improves an initial solution. Again we use random graphs. We fix the number of vertices n, and for every edge density p ∈ {.1, .2, ..., .9, .95, .99}, we generate random graphs. Then, for each graph, we run the local search five times, where we use a different random seed each time and construct the initial solution randomly.
We show the average sizes of random, 2-minimal and 3-minimal solutions in Table 1. We see that the larger the edge density is, the smaller the solution size becomes. The local search improves a random solution to some extent, and LocalSearch_3 improves the solution more than LocalSearch_2. The difference between the two local searches is the largest when p = .1, where it is 37.37 − 35.44 = 1.93. The difference gets smaller as p gets larger; in particular, when p ≥ .95, we see no difference.
Table 1: Average sizes of random, 2-minimal and 3-minimal solutions.

p           .1     .2     .3     .4     .5     .6     .7     .8     .9    .95    .99
random     44.57  24.42  16.70  12.50   9.66   7.70   6.12   4.84   3.62   3.00   2.12
2-minimal  37.37  20.36  13.84  10.18   7.86   6.12   4.95   3.95   2.99   2.00   1.95
3-minimal  35.44  19.04  12.74   9.28   7.01   5.64   4.06   3.02   2.15   2.00   1.95
Let us discuss the computation time. On the left of Figure 3, we show how the average computation time changes with respect to p. We see that the computation time of LocalSearch_3 is tens to thousands of times that of LocalSearch_2. However, this does not necessarily diminish the value of the 3-neighborhood search: as will be shown in Section 5.4, with d = 3, ILPS can find good solutions that are not obtained with d = 2.
In general, for a fixed number of vertices, the local search takes more computation time when Δ is larger. Recall Theorems 1 and 2: when d = 2 (resp., 3), the neighborhood search algorithm finds an improved solution for the current solution or concludes that it is d-minimal in O(nΔ) (resp., O(nΔ^3)) time. Roughly, Δ increases as p gets larger.
For d = 3, we attribute the peak of the computation time around p = .8 to the number of 3-tight vertices. On the right of Figure 3, we show the average numbers of 2- and 3-tight vertices with respect to 3-minimal solutions. The 3-neighborhood search algorithm scans 2- and 3-tight vertices. The numbers of both kinds of vertices are generally nondecreasing from p = .1 to p = .8, but for larger p, the number of 3-tight vertices decreases dramatically. This is due to the solution size: the solution size gives an upper bound on the tightness of any nonsolution vertex, and when p ≥ .9, the average size of a 3-minimal solution is less than three; see Table 1. Since most of the nonsolution vertices are then either 1- or 2-tight, we hardly encounter the situations (i) and (ii) of Section 3.3.
5.3 Penalty Delay
We introduced the notion of vertex penalty to control the search diversification. When the penalty delay δ is larger, a greater variety of initial solutions is expected to be tested in ILPS.
To examine this expectation, we evaluate how many iterations ILPS takes until all vertices are covered by the initial solutions, that is, used in at least one initial solution. The solid line in Figure 4 shows the number of iterations taken to cover all vertices. The graph we employ here is a grid graph such that each vertex is associated with a 2D integral point (x, y), and two vertices (x, y) and (x', y') are adjacent iff |x − x'| + |y − y'| = 1. For each value of δ, the number of iterations is averaged over 500 runs of ILPS with different random seeds, where we fix the other parameters and construct the first initial solution by the maximum-degree greedy algorithm.
The observed phenomenon meets our expectation: the number is nonincreasing with respect to δ and becomes saturated for large δ. In other words, when δ is larger, a greater variety of initial solutions is generated within a given number of iterations.
However, setting δ to a large value does not necessarily lead to the discovery of better solutions. The dashed line in Figure 4 shows the average number of iterations that ILPS takes to find an optimal solution; we know that the optimal size is 24, since we solved the instance optimally by CPLEX. The number at first decreases as δ grows and takes its minimum at an intermediate value, but a larger δ does not yield any further improvement. Hence, given an instance, we need to choose an appropriate value of δ carefully.
5.4 Performance Validation
We run the ILPS algorithm on the 80 DIMACS instances that are downloadable from [17]. We generate the first initial solution by the maximum-degree greedy method, and fix the parameter ω to three. For the parameters d and δ, all pairs of candidate values are tested. For each instance and each pair (d, δ), we run the ILPS algorithm 10 times, using different random seeds. We terminate the algorithm by a time limit, which is set to 200 s. When d = 3, we modify Algorithm 1 so that PlateauSearch in Line 7 is called only in a restricted case, as the plateau search is rather time-consuming.
We take four competitors from [20] and [21]. The first is MEM, the tabu-search based memetic algorithm of [20]. The second is GP, the GRASP+PC algorithm of [21]. The third is CP, which stands for CPLEX 12.6 [12] solving an integer optimization model of the MinIDS problem. The fourth is LS, which stands for LocalSolver 5.5 [16], a general discrete optimization solver based on local search. MEM is run on a computer with a 2.0 GHz CPU and a 4 GB memory, whereas the other competitors are run on computers with a 2.3 GHz CPU and an 8 GB memory. The time limit of MEM and GP is set to 200 s, and that of CP and LS is set to 3600 s.
In Table 2, we show the results on selected instances. The columns "n" and "p" indicate the number of vertices and the edge density, respectively. The edge density is between 0.1 and 0.5 in all instances except hamming8-2; in our context, such instances are expected to be difficult. For ILPS, we show the results for one representative parameter pair (d, δ) in detail. The columns "Min" and "Max" indicate the minimum/maximum solution size over the 10 runs, and the column "Avg" indicates the average. The column "TTB" indicates the time to best (in seconds), that is, the average of the computation time that ILPS takes to find a solution of the size "Min"; an entry of 0.0 represents that the time is less than 0.1 s. The column "Best" indicates the minimum solution size attained over all tested pairs (d, δ). The rightmost four columns indicate the solution sizes attained by the competitors. A symbol before an instance name indicates that the solution size attained by CPLEX is proved optimal.
Table 2: Results on selected DIMACS instances (ILPS vs. MEM [20], GP [21], CP and LS).

Instance         n     p    Min   Avg   Max   TTB   Best  MEM   GP    CP    LS
brock400_2  400  .25  10  10.0  10  1.1  9  9  10  10  11 
C1000.9  1000  .10  27  27.8  29  0.0  26  27  27  29  30 
C125.9  125  .10  14  14.0  14  0.1  14  14  15  14  14 
C2000.9  2000  .10  32  33.6  35  12.1  32  33  33  48  36 
C4000.5  4000  .50  7  7.9  8  49.7  7  8  8  –  –
C500.9  500  .10  22  22.2  23  92.3  21  22  23  23  22 
gen400_p0.9_55  400  .10  20  20.1  21  39.4  20  20  21  22  22 
gen400_p0.9_65  400  .10  20  20.7  21  99.0  20  20  21  21  22 
hamming8-2  256  .03  36  36.0  36  0.0  32  –  32  32  32
keller6  3361  .18  18  18.0  18  26.1  16  18  18  32  19 
san200_0.7_1  200  .30  6  6.1  7  85.9  6  6  7  6  7 
san200_0.9_1  200  .10  15  15.0  15  16.7  15  15  16  15  16 
san400_0.7_3  400  .30  7  7.8  8  106.6  7  7  8  8  9 
The table contains only the results on the 13 selected instances such that the solution sizes attained by "Best", "MEM" and "GP" are not all equal, plus hamming8-2. We confirmed that, for the remaining 67 instances, ILPS's "Best" is as good as that of any competitor. A boldface entry indicates that the solution size is strictly smaller than those of all the competitors; thus we update the best-known solution size for five graphs. These results show the effectiveness of the proposed local search and the ILPS algorithm. All results are included in the appendix.
For hamming8-2, when d = 3, ILPS fails to find a solution of the optimal size 32 for every penalty delay δ. However, when d = 2, ILPS finds an optimal solution for several values of δ.
Before closing this section, let us report our preliminary results briefly.

A preliminary version of ILPS happened to find a solution of size 31 for C2000.9 and a solution of size 15 for keller6. See the appendix for details.

Let us consider a finer swap operation, the (j, k)-swap, which obtains another solution by dropping exactly j vertices from the current one and then adding exactly k vertices to it. One can prove that, given a solution and constants j and k, we can improve the solution by a (j, k)-swap or conclude that it is not possible in polynomial time. We implemented the (j, k)-swap in a preliminary version of ILPS, but it did not yield significant improvement even when the number of dropped vertices was set to a constant larger than three.

We tested Laforest and Phan's exact algorithm [14], and found that it is not suitable for the task of finding a good solution quickly. Their source code is available at http://todo.lamsade.dauphine.fr/spip.php?article42.

BHOSLIB [3] is another well-known collection of benchmark instances. It contains 36 instances such that n is between 450 and 4000 and the edge density is no less than 0.82. Hence, the BHOSLIB instances are expected to be easy in our context. ILPS finds a solution of size three for all the instances. We also ran CPLEX 12.8 for 200 s per instance, generating an initial solution by the maximum-degree greedy algorithm. CPLEX 12.8 finds a solution of size five for frb100-40, and a solution of size three for each of the other instances. In addition, the solution of size three is proved to be optimal for the 15 instances whose names start with frb30, frb35 and frb40.
6 Concluding Remark
We have considered an efficient local search for the MinIDS problem. We proposed fast neighborhood search algorithms for d = 2 and 3, and developed a metaheuristic algorithm named ILPS that repeats the local search and the plateau search iteratively. ILPS is so effective that it updates the best-known solution size for five DIMACS graphs.
The proposed local search is applicable to other metaheuristics such as genetic algorithms, as a key tool of local improvement. The future work includes an extension of the local search to a weighted version of the MinIDS problem.
References
 [1] D.V. Andrade, M.G.C. Resende, and R.F. Werneck. Fast local search for the maximum independent set problem. Journal of Heuristics, 18:525–547, 2012.
 [2] C. Berge. Theory of Graphs and its Applications. Methuen, London, 1962.
 [3] BHOSLIB: Benchmarks with hidden optimum solutions for graph problems. http://sites.nlsde.buaa.edu.cn/~kexu/benchmarks/graph-benchmarks.htm. accessed on February 1, 2018.
 [4] N. Bourgeois, F.D. Croce, B. Escoffier, and V.Th. Paschos. Fast algorithms for MIN independent dominating set. Discrete Applied Mathematics, 161(4):558–572, 2013.
 [5] P.P. Davidson, C. Blum, and J. Lozano. The weighted independent domination problem: ILP model and algorithmic approaches. In Proc. EvoCOP 2017, pages 201–214, 2017.
 [6] M.R. Garey and D.S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman & Company, 1979.
 [7] I.P. Gent and T. Walsh. The SAT phase transition. In Proc. ECAI-94, pages 105–109, 1994.
 [8] W. Goddard and M.A. Henning. Independent domination in graphs: A survey and recent results. Discrete Mathematics, 313:839–854, 2013.
 [9] C.P. Gomes and B. Selman. Problem structure in the presence of perturbations. In Proc. AAAI-97, pages 221–227, 1997.
 [10] C.P. Gomes and D.B. Shmoys. Completing quasigroups or latin squares: a structured graph coloring problem. In Proc. Computational Symposium on Graph Coloring and Generalizations, 2002.
 [11] M.M. Halldórsson. Approximating the minimum maximal independence number. Information Processing Letters, 46(4):169–172, 1993.
 [12] IBM ILOG CPLEX. https://www.ibm.com/analytics/data-science/prescriptive-analytics/cplex-optimizer. accessed on February 1, 2018.
 [13] F. Kuhn, T. Nieberg, T. Moscibroda, and R. Wattenhofer. Local approximation schemes for ad hoc and sensor networks. In Proc. the 2005 Joint Workshop on Foundations of Mobile Computing, pages 97–103, 2005.
 [14] C. Laforest and R. Phan. Solving the minimum independent domination set problem in graphs by exact algorithm and greedy heuristic. RAIRO-Operations Research, 47(3):199–221, 2013.
 [15] C. Liu and Y. Song. Exact algorithms for finding the minimum independent dominating set in graphs. In Proc. ISAAC 2006, LNCS 4288, pages 439–448, 2006.
 [16] LocalSolver. http://www.localsolver.com/. accessed on February 1, 2018.
 [17] F. Mascia. DIMACS benchmark set. http://iridia.ulb.ac.be/~fmascia/maximum_clique/DIMACS-benchmark. accessed on February 1, 2018.
 [18] W. Pullan. Optimisation of unweighted/weighted maximum independent sets and minimum vertex covers. Discrete Optimization, 6(2):214–219, 2009.

 [19] W. Pullan and H.H. Hoos. Dynamic local search for the maximum clique problem. Journal of Artificial Intelligence Research, 25:159–185, 2006.
 [20] Y. Wang, J. Chen, H. Sun, and M. Yin. A memetic algorithm for minimum independent dominating set problem. Neural Computing and Applications, in press. URL: https://doi.org/10.1007/s00521-016-2813-7.
 [21] Y. Wang, R. Li, Y. Zhou, and M. Yin. A path cost-based GRASP for minimum independent dominating set problem. Neural Computing and Applications, 28(1):143–151, 2017. URL: https://doi.org/10.1007/s00521-016-2324-6.
 [22] M. Zehavi. Maximum minimal vertex cover parameterized by vertex cover. SIAM Journal on Discrete Mathematics, 31(4):2440–2456, 2017.
Appendix
Proof of Theorem 2
For preparation, we introduce the following proposition. It generalizes Proposition 2 in the sense that the vertex subset is taken arbitrarily, and its proof is similar to that of Proposition 2.
Proposition 6
Given an arbitrary vertex subset and a vertex , we can decide whether is adjacent to all vertices in in time.
Proof: We let every have another integral variable, which we denote by . Initially, is set to zero. We also maintain a global integral variable that is set to one initially.
First, for all , we set to the current (i.e., ). We then count the number of vertices in such that . If the number equals , then we can conclude that is adjacent to all vertices in . As post-processing, we increase by one (i.e., ).
An integral variable is bounded in conventional programming languages. If reaches the upper limit (e.g., INT_MAX in C), then we reset to zero for all and to one again.
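The time-stamp marking scheme in the proof above can be sketched as follows. This is an illustrative Python sketch with names of our own choosing (the paper gives no code); note that Python integers do not overflow, so the reset step for the upper limit (INT_MAX in C) is unnecessary here.

```python
# Illustrative sketch of the time-stamp marking scheme from Proposition 6:
# test whether a vertex v is adjacent to every vertex of an arbitrary subset,
# without resetting the per-vertex marks between queries.
class StampedAdjacencyChecker:
    def __init__(self, adj):
        self.adj = adj                  # adjacency lists: adj[v] = neighbors of v
        self.mark = [0] * len(adj)      # per-vertex integer mark, initially 0
        self.stamp = 1                  # global counter, initially 1

    def adjacent_to_all(self, v, subset):
        for u in subset:                # stamp every vertex of the subset
            self.mark[u] = self.stamp
        # count neighbors of v that carry the current stamp
        hits = sum(1 for u in self.adj[v] if self.mark[u] == self.stamp)
        self.stamp += 1                 # old stamps become invalid automatically
        return hits == len(subset)
```

Incrementing the global counter after each query invalidates all stamps at once, which is what makes the per-query cost proportional to the subset size plus the degree of the queried vertex.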
We now prove Theorem 2, which is stated in Section 3.3. If there is an improved solution , then one of the four situations (i) to (iv) occurs as to the tightnesses of the vertices in . (For illustration, see Figure 1.) Given a non-solution vertex in each of (i) to (iv), Lemmas 1 to 4 below show the time complexity of finding an improved solution or concluding that no such solution exists, respectively.
Lemma 1
Suppose that a 3-tight vertex is given. Let be the set of solution neighbors of . We can decide whether is a solution or not in time.
Proof: The set can be decided in time. It suffices to decide whether is adjacent to all vertices in . This can be done in time from Proposition 2.
Lemma 2
Suppose that a 3-tight vertex is given. Let be the set of solution neighbors of . We can find a non-solution vertex such that is a solution, or conclude that no such exists, in time.
Proof: Let be the subset of such that the vertices in are not adjacent to . The sets , and can be constructed in time. All we have to do is to check whether there is such that is adjacent to all vertices in . From Proposition 6 and , this can be done in time.
Lemma 3
Suppose that a 2-tight vertex is given. We can decide in time whether there exists a 2-tight vertex such that:

and have exactly one solution neighbor in common;

is a solution, where .
Proof: Let be the set of solution neighbors of . The target 2-tight vertex should be a neighbor of either or , but not both, since and have exactly one solution neighbor in common. Hence, there are at most candidates for .
For each candidate of , let be the set of solution neighbors of . (If , then we discard this candidate.) Let . To check whether is a solution, it suffices to check whether is a solution in the subgraph . It takes time to decide , to decide whether is independent, and to decide whether dominates the vertices in .
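The verification step used in this proof (and again in Lemma 4) is a plain independent-dominating-set check on a small induced subgraph. A generic sketch, with names of our own choosing, might look as follows; the lemmas obtain their time bounds precisely by applying such a check only to a small subgraph rather than the whole graph.

```python
def is_ids(adj, candidate):
    """Check that `candidate` is an independent dominating set of the graph
    given by adjacency lists `adj` (an illustrative sketch, not the paper's
    implementation; the lemmas apply this test only to a small subgraph)."""
    cand = set(candidate)
    # Independence: no edge joins two candidate vertices.
    for v in cand:
        if any(u in cand for u in adj[v]):
            return False
    # Domination: every non-candidate vertex has a neighbor in the candidate.
    for v in range(len(adj)):
        if v not in cand and not any(u in cand for u in adj[v]):
            return False
    return True
```

Recall from the introduction that a vertex subset is an IDS iff it is a maximal independent set, so passing both checks certifies maximality as well.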
Lemma 4
Suppose that a 2-tight vertex is given. We can decide in time whether there exist a 2-tight vertex and a 1-tight vertex such that:

and are adjacent, and have exactly one solution neighbor in common. Let and be the sets of solution neighbors of and respectively;

the unique solution neighbor of is ;

is a solution, where .
Proof: Similarly to Lemma 3, there are at most candidates for . The adjacency between and can be checked in time.
Note that the number of candidates for is also at most . Each candidate of has at most 1-tight neighbors that are candidates for . Hence, for , there are candidates.
For each candidate of , to check whether is a solution, it suffices to check whether is a solution in the subgraph , which can be done in time. Then we have the time bound .
(Proof of Theorem 2) For every 3-tight vertex , we check whether there is an improved solution in situations (i) and (ii). This can be done in time by Lemmas 1 and 2. Similarly, for every 2-tight vertex , we check whether there is an improved solution in situations (iii) and (iv). This can be done in time by Lemmas 3 and 4.
As there are nonsolution vertices, we have the time bound .
All Computational Results on DIMACS Graphs
Table 3 below shows all results on the 80 DIMACS graphs that are downloadable from [17]. The column “CP12.8” represents CPLEX 12.8. We ran CPLEX 12.8 on each instance with the time limit set to 200 seconds. An initial solution is constructed by the maximum-degree greedy algorithm. A hyphen in the rightmost four columns indicates that the corresponding result is not available in [20, 21].
As mentioned in the paper, we happened to find a solution of size 31 for C2000.9 and a solution of size 15 for keller6 with a preliminary version of ILPS. The vertices in the solution for C2000.9 have the following IDs:
23, 78, 161, 252, 279, 344, 441, 556, 662, 671, 703, 769, 847, 864, 926, 952, 1056, 1266, 1274, 1475, 1540, 1619, 1636, 1641, 1646, 1673, 1826, 1839, 1915, 1947, 1979.
The solution for keller6 is the set of vertices with the following IDs:
169, 601, 659, 855, 1020, 1215, 1352, 1586, 2052, 2376, 2463, 2818, 2847, 2944, 3281.
By the ID of a vertex, we mean the integer assigned to the vertex in the DIMACS files.
Graph  |  ILPS: Min  Avg  Max  TTB  |  Best  |  CP12.8  |  MEM [20]  |  GP [21]  |  CP12.6  |  LS
brock200_1  8  8.0  8  8  8          
brock200_2  4  4.0  4  0.3  4  4  4  4  4  4  
brock200_3  5  5.0  5  0.7  5  5          
brock200_4  6  6.0  6  0.8  6  6  6  6  6  6  
brock400_1  10  10.0  10  1.1  10  10          
brock400_2  10  10.0  10  32.2  9  10  9  10  10  11  
brock400_3  9  9.3  10  168.0  9  10          
brock400_4  9  9.8  10  102.8  9  10  9  9  10  11  
brock800_1  8  8.3  9  33.5  8  9          
brock800_2  8  8.7  9  85.9  8  9  8  8  9  9  
brock800_3  8  8.4  9  81.2  8  10          
brock800_4  8  8.5  9  8  9  8  8  9  9  
c-fat200-1  10  10.0  10  10  10          
c-fat200-2  22  22.0  22  22  22          
c-fat200-5  56  56.0  56  56  56          
c-fat500-1  12  12.0  12  12  12          
c-fat500-10  124  124.0  124  124  124          
c-fat500-2  24  24.0  24  24  24          
c-fat500-5  62  62.0  62  62  62          
C1000.9  27  27.8  29  26  30  27  27  29  30  
C125.9  14  14.0  14  1.1  14  14  14  15  14  14  
C2000.5  7  7.0  7  12.1  7  8  7  7  11  8  
C2000.9  32  33.6  35  26.5  32  33  33  33  48  36  
C250.9  17  17.0  17  49.7  17  17  17  17  18  18  
C4000.5  7  7.9  8  92.3  7  9  8  8      
C500.9  22  22.2  23  0.2  21  23  22  23  23  22  
DSJC1000.5  6  6.0  6  1.3  6  7  6  6  6  6  
DSJC500.5  5  5.0  5  1.6  5  5  5  5  10  7  
gen200_p0.9_44  16  16.0  16  5.4  16  16  16  16  16  16  
gen200_p0.9_55  16  16.0  16  39.4  16  16  16  16  16  16  
gen400_p0.9_55  20  20.1  21  99.0  20  20  20  21  22  22  
gen400_p0.9_65  20  20.7  21  95.3  20  22  20  21  21  22  
gen400_p0.9_75  20  20.7  21  28.6  20  21  20  20  21  22  
hamming10-2  128  131.1  133  1.2  128  161          
hamming10-4  12  12.0  12  12  14  12  12  14  12  
hamming6-2  12  12.0  12  12  12  12  12  12  12  
hamming6-4  2  2.0  2  2  2  2  2  2  2  
hamming8-2  36  36.0  36  32  32    32  32  32  
hamming8-4  4  4.0  4  4  4  4  4  4  4  
johnson16-2-4  8  8.0  8  8  8  8  8  8  8  
johnson32-2-4  16  16.0  16  16  16  16  16  16  16  
johnson8-2-4  4  4.0  4  4  4  4  4  4  4  
johnson8-4-4  7  7.0  7  7  7  7  7  7  7  
keller4  5  5.0  5  8.9  5  5  5  5  5  5  
keller5  9  9.0  9  26.1  9  11  9  9  11  10  
keller6  18  18.0  18  16  20  18  18  32  19  
MANN_a27  27  27.0  27  27  27  27  27  27  27  
MANN_a45  45  45.0  45  45  45  45  45  45  45  
MANN_a81  81  81.0  81  81  81  81  81  81  81  
MANN_a9  9  9.0  9  0.9  9  9  9  9  9  12  
p_hat1000-1  3  3.0  3  54.8  3  3          
p_hat1000-2  6  6.1  7  21.5  6  7          
p_hat1000-3  11  12.0  13  11  13          
p_hat1500-1  4  4.0  4  34.7  4  4          
p_hat1500-2  8  8.0  8  13.3  7  9          
p_hat1500-3  14  14.7  15  14  18          
p_hat300-1  3  3.0  3  3  3          
p_hat300-2  5  5.0  5  0.9  5  5          
p_hat300-3  9  9.0  9  9  9          
p_hat500-1  3  3.0  3  0.3  3  3          
p_hat500-2  6  6.0  6  28.4  6  6          
p_hat500-3  10  10.0  10  0.2  10  11          
p_hat700-1  3  3.0  3  182.4  3  3          
p_hat700-2  6  6.9  7  52.0  6  6          
p_hat700-3  11  11.0  11  48.1  11  12          
san1000  4  4.8  5  85.9  4  4  4  4  4  5  
san200_0.7_1  6  6.1  7  6  6  6  7  6  7  
san200_0.7_2  6  6.0  6  16.7  6  6  6  6  6  6  
san200_0.9_1  15  15.0  15  2.3  15  15  15  16  15  16  
san200_0.9_2  16  16.0  16  20.4  16  16  16  16  16  16  
san200_0.9_3  15  15.2  16  0.3  15  17  15  15  15  15  
san400_0.5_1  4  4.0  4  64.9  4  4  4  4  4  4  
san400_0.7_1  7  7.9  8  96.1  7  7  7  7  8  8  
san400_0.7_2  7  7.7  8  106.6  7  7  7  7  7  8  
san400_0.7_3  7  7.8  8  59.2  7  8  7  8  8  9  
san400_0.9_1  20  20.4  21  19  20          
sanr200_0.7  7  7.0  7  7.0  7  7          
sanr200_0.9  16  16.0  16  16  16          
sanr400_0.5  5  5.0  5  10.6  5  5          
sanr400_0.7  8  8.0  8  8  9         