1 Introduction
In a simple, connected graph, an independent set is a subset of the nodes in which no two nodes are adjacent. The maximum independent set problem asks for an independent set of maximum cardinality. Many applications benefit from large independent sets, including information retrieval, signal transmission analysis, classification theory, economics, scheduling and computer vision
[9]. As a more specific example, finding large independent sets is useful in map labeling [11], where one wants to maximize the number of visible non-overlapping labels on a map. Here, a graph model is built such that labels correspond to nodes and there is an edge between two nodes if the associated labels overlap. It is easy to see that a maximum independent set in this model yields a maximum number of non-overlapping labels. The maximum independent set problem is closely related to the maximum clique problem and the minimum vertex cover problem. More precisely, the complement of an independent set is a vertex cover, and an independent set is a clique in the complement graph. However, note that results for the maximum clique problem are usually only partially transferable to practical algorithms for the maximum independent set problem, since building the complement of a sparse graph yields a dense graph. It is well known that all of these problems are NP-hard [10]. Thus, one relies on heuristic algorithms to find good solutions on large graphs.
Most of the work in the literature considers heuristics and local search algorithms for the maximum clique problem (see for example [5, 15, 13, 16, 20, 14]). These algorithms keep a single solution and try to improve it by node deletions, insertions and swaps, as well as by plateau search. In this context, plateau search only accepts moves that do not change the objective function of the optimization problem; heuristics usually employ node swaps to achieve this. A node swap replaces a node by one of its neighbors. Hence, a node swap cannot directly increase the size of the independent set, but it can yield a situation in which an additional node may be inserted into the solution. A very successful approach for the maximum clique problem has been presented by Grosso et al. [14]: in addition to plateau search, different diversification operations are performed and restart rules are added. In the independent set context, Andrade et al. [1] extended the notion of swaps to (j,k)-swaps, which remove j nodes from the current solution and insert k nodes. The authors present a fast linear-time implementation that, given a maximal solution, can find a (1,2)-swap or prove that none exists. We implemented this algorithm and use it within our evolutionary algorithm to improve newly computed offspring.
There are very few papers considering evolutionary algorithms for the maximum independent set problem. The general idea behind evolutionary algorithms is to use mechanisms which are highly inspired by biological evolution such as selection, mutation, recombination and survival of the fittest. An evolutionary algorithm starts with a population of individuals (in our case independent sets of the graph) and evolves the population into different populations over several rounds. In each round, the evolutionary algorithm uses a selection rule based on the fitness of the individuals of the population to select good individuals and combine them to obtain improved offspring [12].
Bäck and Khuri [3] and Borisovsky and Zavolovskaya [6] use fairly similar approaches. They encode solutions as bitstrings such that the value at position i equals one if and only if node v_i is in the current solution. In both cases a classic two-point crossover is used, which randomly selects two crossover positions and exchanges all bits between these positions between the two input individuals. Note that this likely results in invalid solutions, so a penalty approach is used to guide the search towards valid solutions. A major drawback of the work by Bäck and Khuri [3] is that the authors only test their algorithm on synthetic instances. Moreover, in both cases the graphs under consideration are very small.
The main contribution of our paper is a very natural evolutionary framework for the computation of large maximal independent sets. The core innovations of the algorithm are combine operations based on graph partitioning and local search algorithms. More precisely, we employ the state-of-the-art graph partitioner KaHIP [21] to derive operations that enable us to quickly exchange whole blocks of given individuals. The newly computed offspring are then improved using a local search algorithm. In contrast to previous evolutionary algorithms, each computed offspring is valid. Hence, we only allow valid solutions in our population and thus are able to use the cardinality of the independent set as a fitness function. The rest of the paper is organized as follows. We begin in Section 2 by introducing basic concepts and related work. We describe the core components of our evolutionary algorithm in Section 3. This includes a number of partitioning-based combine operators that take two individuals as input, as well as combine operators that can take multiple individuals as input. A summary of the extensive experiments done to tune the algorithm and evaluate its performance is presented in Section 4. The experiments indicate that our algorithm computes very good independent sets and outperforms state-of-the-art algorithms on a large variety of instances. Finally, we conclude in Section 5.
2 Preliminaries
2.1 Basic Concepts
Let G = (V, E) be an undirected graph with n = |V| and m = |E|. The set N(v) := {u : {v, u} ∈ E} denotes the neighbors of v. The complement of a graph G = (V, E) is the graph on V that contains exactly the edges not present in G. An independent set is a subset I ⊆ V such that no two nodes in I are adjacent. It is maximal if it is not a subset of any larger independent set. The maximum independent set problem is that of finding the independent set of maximum cardinality. A vertex cover is a subset of nodes C ⊆ V such that every edge is incident to at least one node in C. The minimum vertex cover problem asks for a vertex cover with the minimum number of nodes. It is worth mentioning that, by definition, the complement V \ C of a vertex cover C is always an independent set. A clique is a subset Q ⊆ V of the nodes such that there is an edge between every pair of nodes from Q.
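The duality between vertex covers and independent sets can be checked directly. The following Python sketch (the helper names and the toy path graph are ours, for illustration only) verifies that the complement of a vertex cover is an independent set:

```python
# Graphs are represented as a dict mapping each node to its set of neighbors.
def is_independent_set(adj, nodes):
    """True if no two nodes in `nodes` are adjacent."""
    s = set(nodes)
    return all(not (s & adj[v]) for v in s)

def is_vertex_cover(adj, nodes):
    """True if every edge has at least one endpoint in `nodes`."""
    s = set(nodes)
    return all(u in s or v in s for u in adj for v in adj[u])

# Path graph 0-1-2-3-4.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
cover = {1, 3}
independent = set(adj) - cover  # the complement of the cover
assert is_vertex_cover(adj, cover) and is_independent_set(adj, independent)
```

The same helpers are convenient for sanity-checking the combine operators described later, all of which are required to output valid independent sets.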
A k-way partition of a graph is a division of V into k blocks of nodes V_1, ..., V_k, i.e. V_1 ∪ ... ∪ V_k = V and V_i ∩ V_j = ∅ for i ≠ j. A balancing constraint demands that |V_i| ≤ (1 + ε)⌈|V|/k⌉ for some imbalance parameter ε. The objective is to minimize the total cut, i.e. the weight of the edges E_{ij} := {{u, v} ∈ E : u ∈ V_i, v ∈ V_j} running between different blocks. The set of cut edges is also called the edge separator. The node separator problem asks to find two blocks, V_1 and V_2, and a separator S that partition V such that there are no edges between the two blocks. Again, a balancing constraint demands that the blocks have roughly equal size; however, there is no balancing constraint on the separator S. The objective is to minimize the size |S| of the separator. Note that removing S from the graph results in at least two connected components, and that the blocks themselves do not need to be connected. By default, our initial inputs have unit edge and node weights.
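The two separator notions above can be made concrete with a small sketch (helper names ours; the toy graph is the path used before):

```python
def cut_edges(adj, block_of):
    """Edges whose endpoints lie in different blocks: the edge separator."""
    return {frozenset((u, v)) for u in adj for v in adj[u]
            if block_of[u] != block_of[v]}

def is_node_separator(adj, blocks, sep):
    """True if, once `sep` is removed, no edge connects two different blocks."""
    block_of = {v: i for i, b in enumerate(blocks) for v in b}
    return all(block_of[u] == block_of[v]
               for u in block_of for v in adj[u] if v not in sep)

# Path graph 0-1-2-3-4.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
assert cut_edges(adj, {0: 0, 1: 0, 2: 0, 3: 1, 4: 1}) == {frozenset((2, 3))}
assert is_node_separator(adj, [{0, 1}, {3, 4}], {2})
```

Note the asymmetry that matters for the combine operators: an edge separator still leaves edges between the blocks (the cut), whereas removing a node separator disconnects the blocks entirely.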
2.2 Detailed Related Work
We now discuss algorithmic details of the algorithm by Andrade et al. [1], which we call ARW as an abbreviation of Andrade, Resende and Werneck. While we compare our algorithm against ARW, we also use it within our algorithm to improve newly created offspring. Moreover, we briefly present the KaHIP graph partitioning framework, since we use it to compute partitions and node separators.
ARW.
One iteration of the ARW algorithm consists of a perturbation and a local search step. The ARW local search algorithm uses simple 2-improvements, or (1,2)-swaps, to gradually improve a single current solution. In general, a (j,k)-swap removes j nodes from the solution and then inserts k new nodes into it. A (1,2)-swap in particular removes a single node from the solution and adds two free nodes. A node is called free if none of its neighbors is in the current solution; the tightness of a node is its number of neighbors in the solution, so free nodes have tightness zero. The simple version of the local search algorithm iterates over all nodes of the graph and looks for a (1,2)-swap. It is shown that this procedure finds a valid (1,2)-swap in linear time, if one exists. This is achieved by using a data structure that allows insertion and removal of nodes in time proportional to their degree. The data structure basically divides the nodes into solution nodes, free nodes and non-free non-solution nodes. The perturbation step, used for diversification, forces nodes into the solution and removes neighboring nodes as necessary. In most cases, one node is forced into the solution per iteration; with a small probability, the number of forced nodes is set to a higher value. Moreover, the node to be forced into the solution is picked from a number of random candidates, among which the node that has been outside the solution for the longest time is chosen. We refer the reader to the original paper for more details on the ARW algorithm. There is also an even faster incremental version of the algorithm that maintains a list of candidates; we use this version of the algorithm here.
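To illustrate the role of tightness in finding (1,2)-swaps, here is a deliberately simple Python sketch. It scans all solution nodes in quadratic time, whereas ARW's actual implementation uses the data structure described above to achieve a linear-time scan; the function name and graph are ours:

```python
def find_one_two_swap(adj, solution):
    """Search for a (1,2)-swap: remove one solution node x and insert two of
    its neighbors that become free once x leaves.  Only neighbors of x with
    tightness 1 (their single solution neighbor is x itself) are candidates."""
    sol = set(solution)
    tight = {v: len(adj[v] & sol) for v in adj}  # tightness of every node
    for x in sol:
        candidates = [v for v in adj[x] if tight[v] == 1]
        for i, u in enumerate(candidates):
            for w in candidates[i + 1:]:
                if w not in adj[u]:  # u and w must not be adjacent
                    return x, u, w
    return None

# Path 0-1-2-3-4: the maximal solution {2} admits the swap 2 -> {1, 3}.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
swap = find_one_two_swap(adj, {2})
```

Once the swap is applied, the new solution {1, 3} admits no further (1,2)-swap, matching the notion of a locally optimal solution used throughout the paper.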
KaHIP.
Karlsruhe High Quality Partitioning (KaHIP) is a family of graph partitioning programs that tackle the balanced graph partitioning problem [21, 22]. The algorithms in KaHIP have been able to compute the best results in various benchmarks. It implements different sequential and parallel algorithms to compute k-way partitions and node separators. In this work, we use the sequential multilevel graph partitioner KaFFPa (Karlsruhe Fast Flow Partitioner) to obtain partitions and separators for the graphs. In particular, we use specialized partitioning techniques based on multilevel size-constrained label propagation [18].
3 Evolutionary Components
We now discuss the main contributions of the paper. We begin by outlining the general structure of our evolutionary algorithm and then explain how we build the initial population. Finally, we present our new combine operations and the methods we use for mutation.
3.1 General Structure
As in previous work [3, 6], we use bitstrings as a natural way to represent individuals/solutions in our population. More precisely, an independent set I is represented as an array s of length n, where s[v] = 1 if and only if v ∈ I. The general structure of our evolutionary algorithm is very simple. Our algorithm starts by creating a population of individuals (in our case independent sets in the graph) and evolves the population over several rounds until a stopping criterion is reached.
In each round, our evolutionary algorithm uses a selection rule based on the fitness of the individuals of the population (in our case the size of the independent set) to select good individuals and combine them to obtain improved offspring. In contrast to previous work [3, 6], our combine and mutation operators always create valid independent sets. Hence, we can use the size of the independent set as a fitness function, and there is no need for a penalty function to ensure that the final individuals generated by our algorithm are independent sets. As we will see later, a generated offspring may be a non-maximal independent set. Hence, we apply one iteration of ARW local search without the perturbation step to ensure that it is locally maximal, and then apply a mutation operation to the offspring. We use mutation operations since it is of major importance to keep the diversity of the population high [2], i.e. the individuals should not become too similar, in order to avoid premature convergence of the algorithm.
We then use an eviction rule to select a member of the population and replace it with the new offspring. In general, one has to take both the fitness of an individual and the distance between individuals in the population into consideration [2]. Our algorithm evicts the solution most similar to the newly computed offspring among those individuals of the population that have a smaller or equal objective than the offspring itself. Once an individual has been accepted into the population, we further refine it using additional iterations of the ARW algorithm. The general structure of our evolutionary algorithm follows the steady-state approach [8], which generates only one offspring per generation. We give an outline in Algorithm 1.
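One round of the scheme just described can be sketched as follows. This is an illustrative skeleton, not the paper's implementation: `combine`, `mutate` and `local_search` are placeholders for the operators of this section, and individuals are plain node sets:

```python
import random

def steady_state_round(population, graph, fitness, combine, mutate, local_search):
    """One generation of the steady-state scheme: tournament selection,
    combination, maximization via local search, mutation, then eviction of
    the most similar individual among those no fitter than the offspring."""
    def tournament():
        a, b = random.sample(population, 2)
        return max(a, b, key=fitness)

    offspring = combine(graph, tournament(), tournament())
    offspring = local_search(graph, offspring)  # make the offspring maximal
    offspring = mutate(graph, offspring)
    victims = [p for p in population if fitness(p) <= fitness(offspring)]
    if victims:
        evicted = max(victims, key=lambda p: len(set(p) & set(offspring)))
        population[population.index(evicted)] = offspring
    return population
```

Because every operator passed in is required to return a valid independent set, the fitness function can simply be `len`, exactly as argued above.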
3.2 Initial Solutions
We use three different approaches to create initial solutions. Each time we create an individual for the population we pick one of the approaches uniformly at random. The first and most simplistic way is to start from an empty independent set and add nodes at random until no further nodes can be added. To ensure that adding a node results in a valid independent set we have to check if the node is free. We do this by simply checking if any of the surrounding nodes is already in the set. The method adds a decent amount of diversity during the construction phase, which over an extended period of time can lead to good solutions.
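The first construction routine amounts to a randomized maximal independent set; a minimal sketch (function name ours):

```python
import random

def random_maximal_independent_set(adj, seed=None):
    """Insert nodes in random order, keeping only those that are free,
    i.e. have no neighbor already in the solution."""
    rng = random.Random(seed)
    order = list(adj)
    rng.shuffle(order)
    solution = set()
    for v in order:
        if not (adj[v] & solution):  # v is free
            solution.add(v)
    return solution
```

By construction the result is independent, and it is maximal because every skipped node had a neighbor in the solution at the time it was considered.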
Secondly, we use a greedy approach similar to Andrade et al. [1]. Starting from an empty solution, we always add the node with the smallest residual degree, i.e. the number of its free neighbors. After a node is added to the solution, we remove all its neighboring nodes from the graph and update the residual degrees of their neighbors. We repeat the procedure until no further node can be added. The implementation uses a simple bucket priority queue that groups nodes into buckets by residual degree. This allows us to pick a random node whenever multiple nodes share the same residual degree.
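A sketch of this greedy construction follows, with the bucket queue kept as a plain dict of degree buckets for brevity (names ours; the paper's implementation additionally randomizes ties):

```python
def greedy_min_degree_is(adj):
    """Repeatedly take a node of minimum residual degree, delete it and its
    neighbors, and decrement the residual degrees of the surviving nodes."""
    degree = {v: len(adj[v]) for v in adj}
    buckets = {}
    for v, d in degree.items():
        buckets.setdefault(d, set()).add(v)
    alive = set(adj)
    solution = set()
    while alive:
        d = min(b for b in buckets if buckets[b])
        v = buckets[d].pop()                        # any min-degree node
        removed = ({v} | adj[v]) & alive
        for u in removed:                           # delete v and neighbors
            alive.discard(u)
            buckets.get(degree[u], set()).discard(u)
        for u in removed:
            for w in adj[u] & alive:                # update residual degrees
                buckets[degree[w]].discard(w)
                degree[w] -= 1
                buckets.setdefault(degree[w], set()).add(w)
        solution.add(v)
    return solution
```

On the five-node path mentioned below, this routine always finds an optimal solution of size three, whichever endpoint it picks first.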
The last approach that we use to create initial solutions is also greedy. Here, we take a detour and generate an independent set by computing a vertex cover: we first create a vertex cover and then take its complement to obtain an independent set. The algorithm also starts with an empty solution and always adds the node that covers the most currently uncovered edges. We repeat this until all edges are covered and then return the corresponding independent set. Note that the two greedy algorithms can compute different independent sets (e.g. consider a path with five nodes). While the first approach always maintains an independent set and tries to extend it, the second approach only yields an independent set once the algorithm has terminated.
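The vertex cover detour can be sketched compactly (function name ours; recomputing the gains each round is quadratic but keeps the sketch short):

```python
def greedy_cover_complement_is(adj):
    """Build a vertex cover by repeatedly taking the node that covers the
    most uncovered edges, then return its complement as an independent set."""
    uncovered = {frozenset((u, v)) for u in adj for v in adj[u]}
    cover = set()
    while uncovered:
        gain = {v: sum(1 for e in uncovered if v in e)
                for v in adj if v not in cover}
        best = max(gain, key=gain.get)
        cover.add(best)
        uncovered = {e for e in uncovered if best not in e}
    return set(adj) - cover
```

On the five-node path this yields the cover {1, 3} and hence the optimal independent set {0, 2, 4}, illustrating that the intermediate solutions (complements of partial covers) are not independent sets themselves.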
3.3 Combine Operations
We perform different kinds of combine operations, all of which are based on graph partitioning. The main idea of our operators is to use a partition of the graph to exchange whole blocks of solution nodes. In general, our combine operators generate new independent sets that are not necessarily maximal. We then perform a maximization step that adds as many free nodes as possible and afterwards apply a single iteration of the ARW local search algorithm to ensure that our solution is locally optimal. Depending on the type of the operator, we use a node separator or an edge separator of the graph, computed by the graph partitioning framework KaHIP. As a side note, small edge or node separators are vital for our combine operations to work well: large separators yield offspring that are far from maximal, so the maximization step has to perform a lot of fixing and the computed offspring is not of high quality. This is supported by the experiments presented in Section 4.1.
The first and the second operator need precisely two input solutions, while our third operator is a multi-point combine operator that can take multiple input solutions. In the first case, we use a simple tournament selection rule [19] to determine the inputs: the first input I_1 is the fittest out of two random individuals from the population, and the same is done to select the second input I_2. Note that, since our algorithms are randomized, a combine operation performed twice on the same parents can yield different offspring.
Node Separator Combination.
In its simplest form, the operator starts by computing a node separator V = V_1 ∪ V_2 ∪ S of the input graph. We then use S as a crossover point for our operation. The operator generates two offspring: O_1 = (I_1 ∩ V_1) ∪ (I_2 ∩ V_2) and O_2 = (I_2 ∩ V_1) ∪ (I_1 ∩ V_2). In other words, we exchange whole parts of the independent sets between the blocks V_1 and V_2 of the node separator. Note that the exchange can be implemented in time linear in the number of nodes. Recall that the definition of a node separator implies that there are no edges running between V_1 and V_2. Hence, the computed offspring are independent sets, but they may not be maximal, since the separator nodes have been ignored and some of them can potentially be added to the solution. We maximize the offspring using the greedy independent set algorithm from Section 3.2. The operator finishes with one iteration of the ARW algorithm to ensure that we have reached a local optimum and to add some diversification. An example illustrating the combine operation is shown in Figure 1.
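The exchange itself is a handful of set operations; a minimal sketch (function name ours, separator nodes simply left out):

```python
def node_separator_combine(i1, i2, v1, v2):
    """Crossover along a node separator V = V1 | V2 | S: the first offspring
    keeps I1's nodes in V1 and I2's nodes in V2, the second the reverse.
    Since no edge runs between V1 and V2, both offspring are independent
    sets, though possibly non-maximal (separator nodes are dropped)."""
    o1 = (i1 & v1) | (i2 & v2)
    o2 = (i2 & v1) | (i1 & v2)
    return o1, o2
```

On the path 0-1-2-3-4 with separator S = {2}, combining I_1 = {0, 2, 4} and I_2 = {1, 3} yields the offspring {0, 3} and {1, 4}; node 2 would then be reconsidered by the maximization step.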
Edge Separator Combination.
This operator computes offspring by taking a detour over vertex covers. It starts by computing a bipartition V = V_1 ∪ V_2 of the graph. Let C_i = V \ I_i be the vertex cover associated with the input independent set I_i. We define temporary vertex cover offspring similar to before: O_1 = (C_1 ∩ V_1) ∪ (C_2 ∩ V_2) and O_2 = (C_2 ∩ V_1) ∪ (C_1 ∩ V_2). Unfortunately, it is possible that an offspring created this way leaves some edges non-covered; these edges can only be a subset of the cut edges of the partition. Since we want to add as few nodes as possible to fix this, we add a minimum vertex cover of the bipartite graph induced by the non-covered cut edges to our vertex cover offspring. Such a minimum vertex cover can be computed using the Hopcroft-Karp algorithm. Afterwards, we transform the vertex cover back into an independent set and follow our general approach by applying ARW local search to reach a local optimum.
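The following sketch shows one offspring of this crossover. For brevity the repair step uses a greedy cover of the leftover edges instead of the optimal Hopcroft-Karp-based cover the operator actually uses, so it is a simplification (names ours):

```python
def vertex_cover_combine(adj, c1, c2, v1, v2):
    """Cross two vertex covers along a bipartition (V1, V2), then cover any
    cut edges left uncovered.  NOTE: the greedy repair below is a stand-in
    for the minimum bipartite vertex cover used in the actual operator."""
    cover = (c1 & v1) | (c2 & v2)
    uncovered = {frozenset((u, w)) for u in adj for w in adj[u]
                 if u not in cover and w not in cover}
    while uncovered:                       # repair: cover remaining cut edges
        counts = {}
        for e in uncovered:
            for v in e:
                counts[v] = counts.get(v, 0) + 1
        best = max(counts, key=counts.get)
        cover.add(best)
        uncovered = {e for e in uncovered if best not in e}
    return set(adj) - cover                # back to an independent set
```

The uncovered edges can only be cut edges: inside each block the offspring inherits a valid cover from one parent, so only edges crossing the bipartition can slip through.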
Multiway Combination.
Our last two operators are multi-point crossover operators that extend the previous two operators. Both of them divide the graph into a number of blocks k; depending on the type of the operator, a node or edge separator is used. We start with the description of the node separator approach, where V = V_1 ∪ ... ∪ V_k ∪ S. The operator first selects a number of parents. We then calculate a score for every possible pair of a parent I_i and a block V_j, namely the number of the parent's solution nodes inside the given block, and select the parent with the highest score for each of the blocks to compute the offspring. As before, since we left out the separator nodes, we use a maximization step to make the solution maximal and afterwards apply ARW local search to ensure that our solution is a local optimum.
If we use an edge separator for the combination, we start with a k-way partition V = V_1 ∪ ... ∪ V_k of the nodes. This approach also computes scores for each pair of parent and block; this time the score of a pair is the number of nodes of the parent's vertex cover (the complement of its independent set) inside the given block. We then select the parent with the lowest score for each of the blocks to compute the offspring. As in the simple vertex cover combine operator, it is possible that some cut edges are not covered. Since the graph induced by the non-covered cut edges is no longer bipartite, we use the simple greedy vertex cover algorithm to fix the offspring. We then once again complement our vertex cover to obtain the final offspring.
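The scoring-and-selection step of the node separator variant reduces to a few lines; a sketch under our own naming:

```python
def multiway_separator_combine(parents, blocks):
    """For each block of a k-way node separator, take the solution nodes of
    the parent with the most solution nodes inside that block.  Blocks are
    pairwise non-adjacent, so the union is again an independent set."""
    offspring = set()
    for block in blocks:
        best = max(parents, key=lambda p: len(p & block))
        offspring |= best & block
    return offspring
```

The edge separator variant is analogous but selects, per block, the parent whose vertex cover has the fewest nodes in the block, followed by the greedy repair described above.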
3.4 Mutation Operations
After a combine operation, we apply a mutation operator to introduce further diversification. Previous work [3, 6] uses bit-flipping for mutation, i.e. every bit in the representation of a solution has a certain probability of being flipped. We cannot use this approach since our population only allows valid solutions. Instead, we perform forced insertions of new nodes into the solution and remove adjacent solution nodes where necessary, as in the perturbation routine of the ARW algorithm. Afterwards, we perform ARW local search to improve the perturbed solution.
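A forced insertion preserves validity by construction, since the evicted nodes are exactly the new node's solution neighbors; a minimal sketch (names ours):

```python
import random

def force_insert(adj, solution, node):
    """Force `node` into the solution, evicting its solution neighbors."""
    return (solution - adj[node]) | {node}

def mutate(adj, solution, count=1, seed=None):
    """Perturbation-style mutation: force `count` random outside nodes in."""
    rng = random.Random(seed)
    outside = [v for v in adj if v not in solution]
    for node in rng.sample(outside, min(count, len(outside))):
        solution = force_insert(adj, solution, node)
    return solution
```

The result can be non-maximal (evictions may free other nodes), which is why the mutation is always followed by ARW local search.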
3.5 Miscellanea
Instead of computing a new partition for every combine operation, we maintain a pool of partitions and separators that is computed in the beginning. A combine operation then picks a random partition or node separator from the pool. If the combine operations have been unsuccessful for too many iterations, we compute a fresh set of partitions; in our experiments we used two hundred unsuccessful combine operations as the threshold. Additionally, we have to ensure that the partitions created for the combine operations are sufficiently different over multiple runs. Although KaHIP is a randomized algorithm, small cuts in a graph may be similar. To avoid similar cuts and increase the diversity of the partitions and node separators, we additionally give KaHIP a random imbalance for the partitioning problem. We also tried one more combine operator based on set intersection, which computes an offspring by keeping the nodes that are in both inputs (by definition an independent set). However, our experiments with this operator did not yield good results, so we omit further investigation here.
4 Experimental Evaluation
Methodology.
We have implemented the algorithm described above (EvoMIS) in C++ and compiled all algorithms using gcc 4.6.3 with full optimizations turned on (-O3 flag). We mainly compare our algorithm against the ARW algorithm, since it showed a relatively clear advantage in the experiments of Andrade et al. [1]. The algorithm by Grosso et al. [14] was originally formulated for the maximum clique problem; Andrade et al. [1] used an implementation of that algorithm for the maximum independent set problem. Hence, we also compare against the results of the algorithm by Grosso et al. presented in the paper of Andrade et al. [1]. Additionally, we compare against our implementation of the evolutionary algorithm presented by Bäck and Khuri [3].
Unless otherwise mentioned, we perform five repetitions, where each algorithm gets ten hours of running time to compute a solution. Each run was made on a machine equipped with two Quad-core Intel Xeon processors (X5355) running at a clock speed of 2.667 GHz, with 2x4 MB of level 2 cache per processor and 64 GB of main memory, running Suse Linux Enterprise 10 SP 1. We used the fastsocial configuration of the KaHIP v0.6 graph partitioning package [21] to obtain graph partitions and node separators. The test results for the ARW algorithm were obtained using the original implementation of Andrade et al. [1]; within the evolutionary algorithm we used our own implementation of the ARW algorithm.
We mostly present two kinds of data: tables reporting maximum, average and minimum solution sizes, and plots that show the evolution of solution quality over time. We now explain how we compute the convergence plots. Whenever an algorithm creates a new best independent set I, it reports a tuple (t, |I|), where the time stamp t is the currently elapsed time and |I| is the size of the independent set. Since we perform multiple repetitions, the final plots correspond to average values over these repetitions. To compute them, we take the time stamps of all repetitions and sort them in ascending order. For each time stamp in this series, we report the average over all repetitions of the best solution size at that time.
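The averaging scheme can be stated precisely in a few lines; an illustrative sketch (function name ours; a run that has not yet reported anything contributes a best-so-far size of zero):

```python
def average_convergence(runs):
    """Merge (time, size) traces of several repetitions into one averaged
    curve: at every reported time stamp, average the best-so-far size of
    each run at that time."""
    def best_at(run, t):
        best = 0  # runs that have not reported yet contribute 0
        for ts, size in run:
            if ts <= t:
                best = max(best, size)
        return best

    stamps = sorted({ts for run in runs for ts, _ in run})
    return [(t, sum(best_at(run, t) for run in runs) / len(runs))
            for t in stamps]

runs = [[(1, 10), (5, 12)], [(2, 11), (6, 13)]]
curve = average_convergence(runs)  # e.g. at t=5: (12 + 11) / 2 = 11.5
```

This makes the curves monotone per run and directly comparable between algorithms that report at different time stamps.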
Algorithm Configuration.
After an extensive evaluation of the parameters [17], we fixed the population size to two hundred fifty, the partition pool size to thirty, and the number of blocks used for the multiway combine operations to sixty-four; the number of ARW iterations per refinement was likewise fixed. In each iteration, one of our three combine operations is picked uniformly at random. However, our experiments indicate that our algorithm is not overly sensitive to the precise choice of these parameters. We mark the instances that have also been used for the parameter tuning in [17] with a * in Appendix A.
Instances.
We use graphs from various sources to test our algorithm and divide them into five categories: social networks, meshes, road networks, networks from finite element computations, and networks stemming from matrices. The social networks include citation networks, autonomous systems graphs and web graphs taken from the 10th DIMACS Implementation Challenge benchmark set [4]. Road networks and meshes are taken from Andrade et al. [1] and have been kindly provided by Renato Werneck; the meshes are dual graphs of triangular meshes. The networks stemming from finite element computations have been taken from Chris Walshaw's benchmark archive [23]. The graphs stemming from matrices have been taken from the Florida Sparse Matrix Collection [7]: we randomly selected one matrix from each group of real, symmetric matrices having between 10K and 65K columns. A graph is derived by inserting a node for each column and creating an edge between two nodes if the corresponding matrix entry is non-zero; self-loops are removed from the graphs.
4.1 Main Results
We now briefly summarize the main results of our experiments. First of all, in 50 out of the 67 instances, we either improve or reproduce the best result computed by the ARW algorithm. Our algorithm computes a maximum solution that is strictly larger than the maximum solution of the ARW algorithm in 21 cases; conversely, in 17 cases the maximum result of the ARW algorithm is larger than the maximum result of our algorithm. Looking at average values, there are 23 cases in which our algorithm strictly outperforms the ARW algorithm and 17 cases for the opposite direction. Remarkably, on the graphs obtained from the Florida Sparse Matrix Collection, the average value of the ARW algorithm outperforms our algorithm on only one instance. The mesh family used in this paper has also been used in the original ARW paper [1]. We would like to stress that most of the maximum results of the ARW algorithm are strictly larger than the maximum values originally reported by Andrade et al. [1] (including the maximum values presented there for the algorithm by Grosso et al. [14]). Except for four instances, the same holds for our algorithm; on these four instances, our algorithm is worse than the original maximum value of the ARW algorithm. On the mesh family, in 8 out of 14 cases our algorithm computes the best result ever reported in the literature. On road networks and on the largest graphs from the mesh and Walshaw families, the ARW algorithm outperforms our algorithm. We gave both algorithms more time, i.e. a whole day of computation, but did not observe significantly different results. Lastly, there is an interesting observation on social networks: in 5 out of 9 cases, the minimum, average and maximum results produced by both algorithms are precisely the same. We suspect that these instances are in a sense easy and that both algorithms compute the optimal result or come very close to it.
We provide detailed per instance results in Appendix A.
Figure 2 shows how solution quality evolves over time on four example instances from the mesh family for both algorithms. As one would expect, our algorithm barely improves its solution quality in the beginning, since it has to build the full population before it can start with combine and mutation operations. Contrarily, the ARW algorithm can directly start with local search and improve its solution, so its solution quality initially rises above that of our algorithm. As soon as our algorithm has finished computing the population, its solution quality starts to improve, and eventually the size of the computed independent sets surpasses the solution quality of the ARW algorithm.
We also implemented the algorithm presented by Bäck and Khuri [3]. The algorithm uses a two-point crossover as a combine operation, as well as a bit-flip approach for mutation. Solutions created by the combine and mutation operations can be invalid, so a penalty approach is used to deal with invalid solution candidates. In the original paper, the algorithm is only tested on small synthetic or random instances. We tested the algorithm on the four smallest graphs from the mesh family and gave it ten hours of time to compute a solution. However, the best valid solution created during the course of the algorithm never exceeded the size of the best solution after the initial population had been created. This is due to the fact that the two-point crossover and the mutation operations found valid solutions very rarely, so that the average solution quality of the population degrades over time. On average, the final solution quality of the algorithm was more than 20% worse than the final result of our algorithm. Due to the bad solution quality observed, we did not perform additional experiments with this algorithm.
The Role of Graph Partitioning.
To estimate the influence of good partitions in this context, we performed an experiment in which partitions of the graph were obtained by simple breadth-first searches. More precisely, we obtain a two-way partition of the graph using a breadth-first search starting from a random node: every node touched by the search is added to the first block and every remaining node to the second block, and the search is stopped as soon as a specified number of nodes has been touched. In our experiments, using this approach instead of a graph partitioner yields significantly worse results. The influence of the different combine operators that we use here is presented in the thesis [17].

5 Conclusion
We presented a very natural evolutionary framework for the computation of large maximal independent sets. Our core innovations are combine operations that are based on graph partitioning and local search algorithms. More precisely, our combine operations enable us to quickly exchange whole blocks of given individuals. In contrast to previous evolutionary algorithms for the problem, our operators guarantee that the created offspring is valid. Experiments indicate that our algorithm outperforms state-of-the-art algorithms on a large variety of instances, and some of the computed solutions are the best reported in the literature. Important future work includes a coarse-grained parallelization of our approach, which can be done using an island-based approach. Moreover, it would be interesting to improve the solution quality of our approach on road networks and to compare our algorithm with exact approaches. Additionally, it would be interesting to overcome the slow start of our algorithm due to the initialization of the population; for example, one could try to adjust the size of the population dynamically.
Acknowledgements
We would like to thank Renato Werneck for providing us with the source code of the local search algorithms presented in Andrade et al. [1]. Moreover, we thank the Steinbuch Centre for Computing for giving us access to the IC2 machine.
References
 [1] D. V. Andrade, M. G. C. Resende, and R. F. Werneck. Fast Local Search for the Maximum Independent Set Problem. J. Heuristics, 18(4):525–547, 2012.

 [2] T. Bäck. Evolutionary Algorithms in Theory and Practice: Evolution Strategies, Evolutionary Programming, Genetic Algorithms. PhD thesis, 1996.
 [3] T. Bäck and S. Khuri. An Evolutionary Heuristic for the Maximum Independent Set Problem. In Proc. 1st IEEE Conf. on Evolutionary Computation, pages 531–535. IEEE, 1994.
 [4] D. Bader, A. Kappes, H. Meyerhenke, P. Sanders, C. Schulz, and D. Wagner. Benchmarking for Graph Clustering and Partitioning. In Encyclopedia of Social Network Analysis and Mining. Springer, 2014.
 [5] R. Battiti and M. Protasi. Reactive Local Search for the Maximum Clique Problem. Algorithmica, 29(4):610–637, 2001.
 [6] P. A. Borisovsky and M. S. Zavolovskaya. Experimental Comparison of Two Evolutionary Algorithms for the Independent Set Problem. In Applications of Evolutionary Computing, pages 154–164. Springer, 2003.
 [7] T. Davis. The University of Florida Sparse Matrix Collection.
 [8] K. A. De Jong. Evolutionary Computation: A Unified Approach. MIT Press, 2006.
 [9] T. A. Feo, M. G. C. Resende, and S. H. Smith. A Greedy Randomized Adaptive Search Procedure for Maximum Independent Set. Operations Research, 42(5):860–878, 1994.
[10] M. R. Garey and D. S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman, 1979.
[11] A. Gemsa, B. Niedermann, and M. Nöllenburg. Trajectory-Based Dynamic Map Labeling. In Proc. 24th Int. Symp. on Algorithms and Computation (ISAAC’13), volume 8283 of LNCS, pages 413–423. Springer, 2013.
[12] D. E. Goldberg. Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley, 1989.
[13] A. Grosso, M. Locatelli, and F. Della Croce. Combining Swaps and Node Weights in an Adaptive Greedy Approach for the Maximum Clique Problem. J. Heuristics, 10(2):135–152, 2004.
 [14] A. Grosso, M. Locatelli, and W. Pullan. Simple Ingredients Leading to Very Efficient Heuristics for the Maximum Clique Problem. J. Heuristics, 14(6):587–612, 2008.
 [15] P. Hansen, N. Mladenović, and D. Urošević. Variable Neighborhood Search for the Maximum Clique. Discrete Applied Mathematics, 145(1):117–125, 2004.
 [16] K. Katayama, A. Hamamoto, and H. Narihisa. An Effective Local Search for the Maximum Clique Problem. Inf. Proc. Letters, 95(5):503–511, 2005.
 [17] S. Lamm. Evolutionary Algorithms for Independent Sets. Bachelor’s Thesis, Karlsruhe Institute of Technology, 2014.
[18] H. Meyerhenke, P. Sanders, and C. Schulz. Partitioning Complex Networks via Size-constrained Clustering. In Proc. of the 13th Int. Symp. on Experimental Algorithms, LNCS. Springer, 2014.
[19] B. L. Miller and D. E. Goldberg. Genetic Algorithms, Tournament Selection, and the Effects of Noise. Evolutionary Computation, 4(2):113–131, 1996.
 [20] W. J. Pullan and H. H. Hoos. Dynamic Local Search for the Maximum Clique Problem. J. Artif. Intell. Res.(JAIR), 25:159–185, 2006.
[21] P. Sanders and C. Schulz. KaHIP – Karlsruhe High Quality Partitioning Homepage. http://algo2.iti.kit.edu/documents/kahip/index.html.
 [22] P. Sanders and C. Schulz. Think Locally, Act Globally: Highly Balanced Graph Partitioning. In Proc. of the 12th Int. Symp. on Experimental Algorithms (SEA’13), LNCS. Springer, 2013.
[23] A. J. Soper, C. Walshaw, and M. Cross. A Combined Evolutionary Search and Multilevel Optimisation Approach to Graph-Partitioning. Journal of Global Optimization, 29(2):225–241, 2004.
Appendix A Detailed per-Instance Results
Graph  EvoMIS  ARW
Name  n  Avg.  Max.  Min.  Avg.  Max.  Min.
enron  69 244  62 811  62 811  62 811  62 811  62 811  62 811 
gowalla  196 591  112 369  112 369  112 369  112 369  112 369  112 369 
citation  268 495  150 380  150 380  150 380  150 380  150 380  150 380 
cnr-2000*  325 557  229 981  229 991  229 976  229 955  229 966  229 940 
356 648  174 072  174 072  174 072  174 072  174 072  174 072  
coPapers  434 102  47 996  47 996  47 996  47 996  47 996  47 996 
skitter*  554 930  328 519  328 520  328 519  328 609  328 619  328 599 
amazon  735 323  309 774  309 778  309 769  309 792  309 793  309 791 
in-2004*  1 382 908  896 581  896 585  896 580  896 477  896 562  896 408 
Graph  EvoMIS  ARW
Name  n  Avg.  Max.  Min.  Avg.  Max.  Min.
beethoven  4 419  2 004  2 004  2 004  2 004  2 004  2 004 
cow  5 036  2 346  2 346  2 346  2 346  2 346  2 346 
venus  5 672  2 684  2 684  2 684  2 684  2 684  2 684 
fandisk  8 634  4 075  4 075  4 075  4 073  4 074  4 072 
blob  16 068  7 249  7 250  7 248  7 249  7 250  7 249 
gargoyle  20 000  8 853  8 854  8 852  8 852  8 853  8 852 
face  22 871  10 218  10 218  10 218  10 217  10 217  10 217 
feline  41 262  18 853  18 854  18 851  18 847  18 848  18 846 
gameguy  42 623  20 726  20 727  20 726  20 670  20 690  20 659 
bunny*  68 790  32 337  32 343  32 330  32 293  32 300  32 287 
dragon  150 000  66 373  66 383  66 365  66 503  66 505  66 500 
turtle  267 534  122 378  122 391  122 370  122 506  122 584  122 444 
dragonsub  600 000  281 403  281 436  281 384  282 006  282 066  281 954 
ecat  684 496  322 285  322 357  322 222  322 362  322 529  322 269 
buddha  1 087 716  478 879  478 936  478 795  480 942  480 969  480 921 
Graph  EvoMIS  ARW
Name  n  Avg.  Max.  Min.  Avg.  Max.  Min.
crack  10 240  4 603  4 603  4 603  4 603  4 603  4 603 
vibrobox  12 328  1 852  1 852  1 852  1 850  1 851  1 849 
4elt  15 606  4 944  4 944  4 944  4 942  4 944  4 940 
cs4  22 499  9 172  9 177  9 170  9 173  9 174  9 172 
bcsstk30  28 924  1 783  1 783  1 783  1 783  1 783  1 783 
bcsstk31  35 588  3 488  3 488  3 488  3 487  3 487  3 487 
fe_pwt  36 519  9 309  9 310  9 309  9 310  9 310  9 308 
brack2  62 631  21 417  21 417  21 417  21 416  21 416  21 415 
fe_tooth  78 136  27 793  27 793  27 793  27 792  27 792  27 791 
fe_rotor  99 617  22 022  22 026  22 019  21 974  22 030  21 902 
598a  110 971  21 826  21 829  21 824  21 891  21 894  21 888 
wave  156 317  37 057  37 063  37 046  37 023  37 040  36 999 
fe_ocean  143 437  71 390  71 576  71 233  71 492  71 655  71 291 
auto  448 695  83 935  83 969  83 907  84 462  84 478  84 453 
Graph  EvoMIS  ARW
Name  n  Avg.  Max.  Min.  Avg.  Max.  Min.
ny  264 346  131 384  131 395  131 377  131 481  131 485  131 476 
bay  321 270  166 329  166 345  166 318  166 368  166 375  166 364 
col  435 666  225 714  225 721  225 706  225 764  225 768  225 759 
fla  1 070 376  549 093  549 106  549 072  549 581  549 587  549 574 
Graph  EvoMIS  ARW
Name  n  Avg.  Max.  Min.  Avg.  Max.  Min.
Oregon1  11 174  9 512  9 512  9 512  9 512  9 512  9 512 
ca-HepPh  12 006  4 994  4 994  4 994  4 994  4 994  4 994 
skirt  12 595  2 383  2 383  2 383  2 383  2 383  2 383 
cbuckle  13 681  1 097  1 097  1 097  1 097  1 097  1 097 
cyl6  13 681  600  600  600  600  600  600 
case9  14 453  7 224  7 224  7 224  7 224  7 224  7 224 
rajat07  14 842  4 971  4 971  4 971  4 971  4 971  4 971 
Dubcova1  16 129  4 096  4 096  4 096  4 096  4 096  4 096 
olafu  16 146  735  735  735  735  735  735 
bodyy6  19 366  6 232  6 233  6 230  6 226  6 228  6 224 
raefsky4  19 779  1 055  1 055  1 055  1 053  1 053  1 053 
smt  25 710  782  782  782  780  780  780 
pdb1HYS  36 417  1 078  1 078  1 078  1 070  1 071  1 070 
c57  37 833  19 997  19 997  19 997  19 997  19 997  19 997 
copter2  55 476  15 192  15 195  15 191  15 186  15 194  15 179 
TSOPF_FS_b300_c2  56 813  28 338  28 338  28 338  28 338  28 338  28 338 
c67  57 975  31 257  31 257  31 257  31 257  31 257  31 257 
dixmaanl  60 000  20 000  20 000  20 000  20 000  20 000  20 000 
blockqp1  60 012  20 011  20 011  20 011  20 011  20 011  20 011 
Ga3As3H12  61 349  8 118  8 151  8 097  8 061  8 124  7 842 
GaAsH6  61 349  8 562  8 572  8 547  8 519  8 575  8 351 
cant  62 208  6 260  6 260  6 260  6 255  6 255  6 254 
ncvxqp5  62 500  24 526  24 537  24 510  24 580  24 608  24 520 
crankseg_2  63 838  1 735  1 735  1 735  1 735  1 735  1 735 
c68  64 810  36 546  36 546  36 546  36 546  36 546  36 546 