Given an undirected graph with a set of vertices and a set of edges, graph vertex coloring involves assigning each vertex a color so that two adjacent vertices (linked by an edge) receive different colors. The Graph Vertex Coloring Problem (GVCP) involves finding the minimum number of colors required to color a given graph with respect to these binary constraints. The GVCP is a famous and much-studied problem because this simple formalization can be applied to various issues such as frequency assignment problems [1, 10], scheduling problems [25, 39, 37] and flight level allocation problems. Most problems that involve sharing a scarce resource (colors) between different operators (vertices) can be modeled as a GVCP, as can most resource allocation problems. The GVCP is NP-hard. Given a positive integer k corresponding to the maximum number of colors, a k-coloring of a given graph is a function that assigns each vertex a color (i.e. an integer between 1 and k).
We recall some definitions. A k-coloring is called legal or proper if it respects the binary constraints, i.e. the two endpoints of every edge receive different colors. Otherwise the k-coloring is called non-legal or non-proper; edges whose endpoints share the same color are called conflicting edges, and their endpoints conflicting vertices. A given graph is k-colorable if a proper k-coloring exists. The chromatic number of a given graph is the smallest integer k such that the graph is k-colorable. A coloring is called a complete coloring if a color is assigned to every vertex; otherwise it is called a partial coloring, whereby some vertices remain uncolored. An independent set or stable set is a set of vertices, no two of which are adjacent. It is therefore possible to assign the same color to all the vertices of an independent set without producing any conflicting edges. The problem of partitioning a graph into a minimum number of independent sets is thus equivalent to the GVCP.
The k-coloring problem - finding a proper k-coloring of a given graph - is NP-complete for k ≥ 3. Therefore, the best performing exact algorithms are generally not able to find a proper k-coloring in reasonable time when the number of vertices grows large [22, 11]. For large graphs, one uses heuristics that partially explore the search space in order to find a proper k-coloring in a reasonable time frame. However, this partial search does not guarantee that a better solution does not exist. Very interesting and comprehensive surveys on the GVCP and the most effective heuristics to solve it can be found in [16, 14]. These studies classify heuristics by the search space used. In order to define the search space (or what is termed the strategy), one has to answer three questions:
Is the number of available colors fixed or not?
Are non-proper colorings included in the search space?
Does the heuristic use complete colorings or partial colorings?
Among the eight theoretical possibilities, four main strategies are defined:
Proper strategy if the number of colors is not fixed and only complete and proper colorings are taken into account. The aim is to find a coloring that minimizes the number of colors used under constraints of legality and completeness of the coloring.
k-fixed partial proper strategy if the number of colors is fixed and partial and proper colorings are taken into account. The aim is to find a coloring that minimizes the number of uncolored vertices under constraints of the given number of colors and of proper coloring.
k-fixed penalty strategy if the number of colors is fixed and complete but non-proper colorings are taken into account. The aim is to find a coloring that minimizes the number of conflicting edges under constraints of the given number of colors and of complete coloring.
Penalty strategy if the number of colors is not fixed and non-proper, complete colorings are taken into account. The aim is to find a coloring that minimizes both the number of conflicting edges and the number of colors used under the constraint of complete coloring.
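The objective function of the k-fixed penalty strategy simply counts conflicting edges in a complete, possibly non-proper coloring. A minimal sketch (function and variable names are ours, for illustration only):

```python
def conflicting_edges(edges, coloring):
    """Objective of the k-fixed penalty strategy: number of edges whose two
    endpoints share the same color, for a complete (possibly non-proper)
    coloring given as a list indexed by vertex."""
    return sum(1 for u, v in edges if coloring[u] == coloring[v])

# A triangle cannot be properly 2-colored: at least one conflict remains.
triangle = [(0, 1), (1, 2), (0, 2)]
print(conflicting_edges(triangle, [0, 1, 0]))  # -> 1 (edge (0, 2))
```

Minimizing this count to zero yields a proper k-coloring.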
The Variable Space Search of  is interesting and didactic because it works with three of the four above strategies. Another, more classical means of classifying the different methods is to consider how these methods explore the search space.
Constructive or exhaustive methods (such as greedy methods, branch and bound, backtracking and constraint programming) build a coloring step-by-step from an empty coloring; these approaches usually follow a k-fixed partial proper strategy. DSATUR  and RLF  are the most well-known greedy algorithms. These algorithms rapidly provide an upper bound on the chromatic number, but it is often quite distant from the exact value. They are used to initialize solutions before a local search or an evolutionary algorithm is employed. Improved algorithms such as XRLF , based on RLF, provide much better results. Exact algorithms such as branch and bound (B&B), backtracking, or constraint programming  follow k-fixed partial proper strategies. The B&B implementation of  takes too much time beyond 100 vertices. Column generation approaches are also exact methods:  divided the problem into two parts, where the first problem involves generating useful columns in the form of independent sets, and the second involves selecting a minimum number of independent sets (created by the first problem) in order to cover the graph. To obtain better results, exact methods can be hybridized with local searches [32, 9].
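The DSATUR greedy rule mentioned above can be sketched as follows; this is our reading of the classical scheme (select the uncolored vertex with the most distinct neighbor colors, break ties by degree), not the paper's implementation:

```python
def dsatur(n, adj):
    """DSATUR greedy sketch: repeatedly color the uncolored vertex with the
    highest saturation, i.e. the largest number of distinct colors among its
    neighbors, breaking ties by degree; returns a dict vertex -> color."""
    colors = {}
    uncolored = set(range(n))
    while uncolored:
        v = max(uncolored,
                key=lambda u: (len({colors[w] for w in adj[u] if w in colors}),
                               len(adj[u])))
        used = {colors[w] for w in adj[v] if w in colors}
        c = 0
        while c in used:  # smallest color not used by a neighbor
            c += 1
        colors[v] = c
        uncolored.remove(v)
    return colors
```

On a triangle, this produces a proper coloring with three colors, an upper bound that here matches the chromatic number.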
Local (or neighborhood, or trajectory) searches (such as hill-climbing, simulated annealing , tabu search [20, 8], variable neighborhood search , variable space search  and min-conflicts) start from an initial coloring and try to improve it by local moves; usually these approaches follow a proper or k-fixed penalty strategy. A detailed overview of these local search methods for graph coloring is provided in . In our algorithm, we employ an improved version  of a tabu search algorithm called TabuCol ; it is one of the first metaheuristics developed for graph coloring and uses a k-fixed penalty strategy.
Population-based approaches (such as evolutionary algorithms, ant colony optimization , particle swarm algorithms and quantum annealing [34, 35]) work with several colorings that can interact, for example through a crossover operator, a shared memory, a repulsion or attraction operator, or other criteria such as sharing. These methods currently obtain the best results when they are combined with the aforementioned local search algorithms. However, used alone, these population-based approaches are limited.
The first objective of this paper is to present a new and simple algorithm that provides the best results for coloring DIMACS benchmark graphs. The second objective is to show why this algorithm effectively controls diversification. Indeed, in our work, the population of this memetic algorithm is reduced to only two individuals, called a couple-population. This provides the opportunity to focus on the correct ‘dose’ of diversity to add to a local search. This new approach is simpler than a general evolutionary algorithm because it does not require numerous parameters such as the selection policy, the population size, and the crossover and mutation rates.
The organization of this paper is as follows. The issue of diversification in heuristics is presented in Section 2. Section 3 describes our improvement of the memetic algorithm for the GVCP. The experimental results are presented in Section 4 and some of the impacts of diversification are analyzed in Section 5. Finally, we consider the conclusions of this study and discuss possible future research in Section 6.
2 How to manage diversity?
Because the search is not exhaustive, heuristics provide only sub-optimal solutions with no information about the distance between those sub-optimal solutions and the optimal solution. Heuristics return the best solution found after a partial exploration of the search space. To produce a good solution, heuristics must alternate exploitation phases and exploration phases. During an exploitation phase (or intensification phase), a solution produced by the heuristic is improved. It resembles an exhaustive search but occurs in a small part of the search space. An exploration phase (or diversification phase) involves a search into numerous uncharted regions of the search space. The balance between these two phases is difficult to achieve. Metaheuristics therefore define a framework that manages this balance. In this section, we classify the main components of several metaheuristics as intensification operators or as diversification operators. Of course, this list, presented in table 1, is not exhaustive and we take into account only well-known metaheuristics: Local Search (LS), Tabu Search (TS), Simulated Annealing (SA), Variable Neighborhood Search (VNS), Evolutionary Algorithms (EA), and Ant Colony (AC). Some components are shared between several metaheuristics. Other components can be at the same time an intensification operator and a diversification operator, such as parent selection or population update; it depends on the context of the algorithm, as we will now demonstrate.
Local search algorithms start from an initial solution and then try to improve it iteratively through local transformations. The simplest LS, hill-climbing, accepts a move only if the objective function is improved: it is inherently an intensification operator. It is possible to introduce some diversity by generating several different initial solutions; this process is called multi-starting. The limit of a simple LS is that after a given number of iterations, the algorithm is blocked within a local optimum: no local transformation can improve the solution. SA and TS therefore accept some worsening moves during the search, with a probabilistic criterion for SA and with a tabu list for TS. VNS, on the other hand, changes the neighborhood structure to escape local optima. The change in neighborhood structure is an effective diversification operator because it never worsens the current solution. However, this diversification operator is too weak: VNS must add a shaking process, similar to a partial restart (as in Iterated Local Search), in order to achieve a more global optimization.
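The SA acceptance criterion mentioned above illustrates how a single parameter tunes the intensification/diversification balance. A minimal sketch of the Metropolis-style rule (a generic textbook form, not the paper's code; for minimization, delta is the change in objective value):

```python
import math
import random

def sa_accept(delta, temperature, rng=random.random):
    """Metropolis-style criterion: improving moves (delta <= 0) are always
    accepted; a worsening move is accepted with probability exp(-delta/T)."""
    if delta <= 0:
        return True
    return rng() < math.exp(-delta / temperature)
```

At high temperature the search behaves almost like a random walk (diversification); as the temperature decreases it converges toward hill-climbing (intensification).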
Population-based algorithms are often classified as global optimization algorithms because they work with several candidate solutions, although this does not guarantee that the algorithm will find the global optimum. Moreover, working with several candidate solutions can be regarded as a diversification operator. EAs are population-based algorithms. A basic EA can be presented in five steps as follows: 1) Parent selection: one selects two individuals of the population, called parents. 2) Crossover: according to a given rate, a crossover operator is applied to these two parents, which creates two new individuals, called children, each a blend of the two parents. 3) Mutation: according to a given rate, a mutation operator is applied to each child, modifying it slightly. 4) Population update: under given conditions, the two children take the place of two individuals of the population. 5) This cycle of four steps is called a generation; it is repeated until a given stop condition is met. The best individual of the population is the output of the EA. We shall classify the different components of this basic EA as intensification operators or diversification operators.
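The five steps above can be sketched as a toy EA on bitstrings; all names and parameter values here are illustrative and not taken from the paper:

```python
import random

def basic_ea(fitness, length, pop_size=20, generations=200,
             cx_rate=0.9, mut_rate=0.05, seed=0):
    """Toy EA (maximization) following the five steps described above."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        # 1) parent selection (random here, hence a diversification choice)
        p1, p2 = rng.sample(pop, 2)
        # 2) one-point crossover, applied with a given rate
        if rng.random() < cx_rate:
            cut = rng.randrange(1, length)
            c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
        else:
            c1, c2 = p1[:], p2[:]
        # 3) bit-flip mutation, applied to each child with a given rate
        for child in (c1, c2):
            for i in range(length):
                if rng.random() < mut_rate:
                    child[i] ^= 1
        # 4) population update: the children replace the two worst individuals
        pop.sort(key=fitness)
        pop[0], pop[1] = c1, c2
    # output: best individual of the final population
    return max(pop, key=fitness)
```

For example, with `fitness=sum` (the OneMax toy problem) the returned bitstring has a clearly above-random number of ones.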
Mutation and crossover operators are different in nature from selection and population update processes. Indeed, mutation and crossover operators are chance generators, while selection and population update processes are higher-level systems that decide how the products of chance are exploited. These two modes of functioning are pithily summarized in an expression attributed to Democritus: “Everything existing in the universe is the fruit of chance and necessity”, which is also considered in Jacques Monod’s book Chance and Necessity . The mutation and crossover processes are therefore diversification operators in essence. The mutation operator slightly changes one of the individuals of the population. The crossover operator mixes two individuals of the population in order to create a new one. Mutation is a unary operator while crossover is a binary operator. There also exist ternary operators in some other EAs, such as Differential Evolution. The unary or binary changes (applied by mutation or crossover) are performed randomly and are therefore exploration phases. Occasionally, a random modification improves the solution, but this outcome is exploited by the higher-level system. These changes cannot be considered intensification operators, except in some specific cases where the modifications (mutation or crossover) are not performed randomly. For example, in , the crossover is a quasi-deterministic process directly guided by the separability of the objective function in order to improve the latter. The aim of a diversification operator is to provide diversity to the current solution(s), but there is a risk of providing too much diversity and thus breaking the structure of the current solution(s). The crossover operator generally provides more diversity than the mutation operator.
The classification of operators such as parent selection and population update depends on the choice of the EA parameters. Indeed, if one chooses an elitist parent selection policy (choosing the best individuals of the population for the crossover and the mutation), the selection is an intensification operator. Conversely, a policy of random parent selection (a random selection of individuals from the population for the crossover and the mutation) makes it a diversification operator. This double role of parent selection can be very interesting but also very difficult to control properly. The population update has the same feature. If one chooses to include in the population the individuals created by the crossover and/or the mutation (the children) only if they are better than the individuals of the population that one removes, then the population update is an intensification operator. Conversely, if one chooses to systematically replace some individuals of the population with the created children (even if they are worse), then the population update is a diversification operator. Sharing and elitism are two other EA mechanisms that influence this balance.
In AC, the interactions between individuals of the population occur through the sharing of a collective memory. The pheromone mechanism (deposit and evaporation) plays the intensification role, while the random choice of the instantiation of variables plays the role of diversification.
|Intensification operators||Diversification operators|
|LS: accept improving moves||LS: multi-starts|
|||SA, TS: accept worsening moves|
|VNS: change of neighborhood structure||VNS: change of neighborhood structure|
|||p-bA: population (several candidate solutions)|
|EA: parents selection (the best)||EA: parents selection (random)|
|EA: population update (if child is better)||EA: population update (systematic)|
|AC: collective memory (pheromone)||AC: random choice|
Table 1 summarizes the classification of the main components of several metaheuristics as diversification operators or as intensification operators. The separation is not always very clear, as in the case of EAs, in which the parent selection and population update processes play both roles. Controlling the correct balance between intensification and diversification is difficult; much fine-tuning is required to obtain good results from heuristics.
An interesting feature of a diversification operator is its ability to explore new areas of the search space without breaking the structure of the current solution(s). We recall a quotation attributed to the physiologist Claude Bernard about hormones , which corresponds well with the role of a diversification operator:
“Everything is poisonous, nothing is poisonous, it is all a matter of dose.”
Claude Bernard - 1872
The level or dose of diversity provided by a diversification operator is difficult to evaluate. However, it is possible to compare diversification operators by their dose of diversity. For example, the mutation operator generally provides less diversity than the crossover operator. For each operator, there are many parameters that affect the dose of diversity; one easily identifiable parameter for the crossover operator is the Hamming distance between the two parents. Indeed, if we pick two solutions that have very good objective values but that are very different (in terms of Hamming distance), then they have a high probability of producing children with bad objective values after crossover. The danger of the crossover is that it may completely destroy the solution structure. On the other hand, two very close solutions (in terms of Hamming distance) produce a child with quasi-similar fitness. Chart 1 shows the correlation between the Hamming distance separating two parents of the same fitness (abscissa axis) and the fitness of the child (ordinate axis). This chart is obtained for the k-coloring problem where the objective is to minimize the number of conflicting edges (the fitness) in a complete but non-legal k-coloring. We consider the DSJC500.5 graph from the DIMACS benchmark  and the GPX crossover of . The parents used for the crossover have the same fitness value, measured in conflicting edges. There is a quasi-linear correlation between these two parameters.
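The Hamming distance used above is simply the number of positions at which two colorings differ. A minimal sketch:

```python
def hamming(c1, c2):
    """Number of vertices that receive a different color in the two
    colorings (colorings given as lists indexed by vertex)."""
    return sum(1 for a, b in zip(c1, c2) if a != b)

print(hamming([0, 1, 2, 0], [0, 1, 1, 2]))  # -> 2
```

Note that, since a coloring is really a partition, a label-independent partition distance is often preferable; plain Hamming distance is the simple proxy discussed here.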
The key question is: how does one manage the correct ‘dose’ of diversification in a heuristic? In the next section, we present a memetic algorithm with only two individuals, which simplifies the management of the diversity dose.
3 Memetic Algorithms with only two individuals
Memetic Algorithms (MA)  are hybrid metaheuristics using a local search algorithm inside a population-based algorithm. They can also be viewed as specific EAs in which all individuals of the population are local minima (of a specific neighborhood). This hybrid metaheuristic is a low-level relay hybrid (LRH) in the Talbi taxonomy . A low-level hybridization means that a component of one metaheuristic is replaced by a second metaheuristic: in an MA, the mutation of the EA is replaced by a local search algorithm. Conversely, a high-level hybridization refers to a combination of several metaheuristics, none of which is modified. A relay (respectively teamwork) hybrid means that the metaheuristics are used successively (respectively in parallel).
In graph coloring, the Hybrid Evolutionary Algorithm (HEA) of Galinier and Hao  is an MA: the mutation of the EA is replaced by a tabu search. HEA is one of the best algorithms for solving the GVCP; from 1999 until 2012, it provided the majority of the best results for DIMACS benchmark graphs , particularly for difficult graphs such as DSJC500.5 and DSJC1000.5 (see table 2). These results were obtained with a population of 8 individuals.
The tabu search used is an improvement of the TabuCol of . This version of TabuCol is a very powerful tabu search, which obtains very good results for the GVCP even when used alone (see table 2). Another benefit of this version of TabuCol is that it has only two parameters to adjust in order to control the tabu tenure. Moreover, it has been demonstrated  on a very large number of instances that, with the same setting, TabuCol obtains very good k-colorings. Indeed, one of the main disadvantages of heuristics is that the number of parameters to set is often high and difficult to adjust. This version of TabuCol is very robust. Thus we retain the setting of  in all our tests and consider TabuCol as an atomic algorithm.
The mutation, which is a diversification operator, is replaced by a local search, an intensification operator. The balance between intensification and diversification is restored because in the case of the graph coloring problem, all crossovers are too strong and destroy too much of a solution’s structure.
The crossover used in HEA is called the Greedy Partition Crossover (GPX); it is based on color classes. Its aim is to add slightly more diversification to the TabuCol algorithm.
These hybridizations combine the benefits of population-based methods, which are better at diversification by means of a crossover operator, and local search methods, which are better at intensification.
The intensification/diversification balance is difficult to achieve. In order to simplify the numerous parameters involved in EAs, we have chosen to consider a population with only two individuals. This implies 1) no selection process, and 2) a simpler management of the diversification because there is only one diversification operator: the crossover.
3.1 General Pattern: Hybrid approach with 2 trajectories-based Optimization
The basic building blocks of HEA are the TabuCol algorithm, which is a very powerful local search for intensification, and the Greedy Partition Crossover (GPX), which adds a little more diversification.
The idea is to combine these two blocks as simply as possible. The local search is a unary operator while the crossover is a binary operator. We present in algorithm 1 the pseudo-code of a first version of this simple algorithm, which can be seen as a 2 trajectories-based algorithm; its name signifies a Hybrid approach with 2 trajectories-based Optimization.
Algorithm 1 needs an asymmetric crossover, which means that the crossover applied to the two parents in one order produces a different child than in the reverse order. After randomly initializing the two solutions, the algorithm repeats an instruction loop until a stop criterion occurs. First, we introduce some diversity with the crossover operator, then the two offspring are improved by means of the local search. Next, we register the best solution and we systematically replace the parents with the two children. An iteration of this algorithm is called a generation.
In order to add more diversity to algorithm 1, we present a second version with two extra elite solutions.
We add two other candidate solutions (similar to elite solutions) in order to reintroduce some diversity into the couple-population. Indeed, after a given number of generations, the two individuals of the population become increasingly similar within the search space. To maintain a given diversity in the couple-population, an elite solution replaces one of the population individuals after a fixed number of generations, i.e. one cycle: the first elite is the best solution found during the current cycle and the second is the best solution found during the previous cycle. Figure 2 gives a graphical view of algorithm 2.
3.2 Application to graph coloring
Our algorithm is the application of this pattern to the k-coloring problem with GPX as the crossover and TabuCol as the local search. It uses only one parameter: the number of iterations performed by the TabuCol algorithm.
This algorithm is an improvement of the TabuCol algorithm: two parallel TabuCol searches interact periodically through the crossover. We will now briefly recall the principles of the TabuCol algorithm and the GPX crossover.
The TabuCol algorithm was presented in 1987 , one year after Fred Glover introduced the tabu search. This algorithm, which solves k-coloring problems, was enhanced in 1999 by . The three basic features of this trajectory-based algorithm are as follows:
Search Space and Objective Function: the algorithm follows a k-fixed penalty strategy. The objective function minimizes the number of conflicting edges.
Neighborhood: a k-coloring solution is a neighbor of another k-coloring solution if the color of exactly one conflicting vertex differs. This move is called a critical 1-move. The neighborhood size therefore depends on the number of conflicting vertices.
Move Strategy: the move strategy is the standard tabu search strategy. At each iteration, one moves to the best neighbor not forbidden by the tabu list, even if the objective function worsens. Note that the whole neighborhood is explored. If there are several best moves, one of them is chosen at random; this is the only random aspect of this metaheuristic. The tabu list is not the list of all already-visited solutions, because this would be computationally expensive. It is more efficient to put only the reverse moves in the tabu list. Indeed, the aim is to forbid returning to previous solutions, and this goal can be reached by forbidding the reverse moves during a given number of iterations (the tabu tenure). The tabu tenure is dynamic: it depends on the neighborhood size. A basic aspiration criterion is also implemented: it accepts a tabu move to a k-coloring that has a better objective value than the best k-coloring encountered so far.
Data structures have a major impact on algorithm efficiency, constituting one of the main differences between the Hertz and de Werra version of TabuCol  and the Galinier and Hao version . Checking whether a 1-move is tabu and updating the tabu list are constant-time operations (figure 2(b)). TabuCol also uses an incremental evaluation: the objective function of the neighbors is not computed from scratch; only the difference between the two solutions is computed. This is a very important feature for local search efficiency. Finding the best 1-move corresponds to finding the maximum value of a matrix (cf. figure 2(a)).
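A minimal sketch of one classical incremental-evaluation scheme for this setting (our illustration; the paper's exact data structures may differ): a matrix gamma where `gamma[v][c]` counts the neighbors of `v` colored `c`, so the cost of any 1-move is read in O(1) and a move is applied in O(deg(v)).

```python
def build_gamma(adj, coloring, k):
    """gamma[v][c] = number of neighbors of v currently colored c. The change
    in conflict count when recoloring v to c is then
    gamma[v][c] - gamma[v][coloring[v]], read in O(1)."""
    gamma = [[0] * k for _ in range(len(adj))]
    for v, neighbors in enumerate(adj):
        for w in neighbors:
            gamma[v][coloring[w]] += 1
    return gamma

def apply_move(gamma, adj, coloring, v, new_color):
    """Perform a 1-move and update gamma incrementally in O(deg(v))."""
    old = coloring[v]
    for w in adj[v]:
        gamma[w][old] -= 1
        gamma[w][new_color] += 1
    coloring[v] = new_color
```

The total number of conflicting edges can be recovered at any time as half the sum of `gamma[v][coloring[v]]` over all vertices.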
3.2.2 Greedy Partition Crossover (GPX)
The second building block is the Greedy Partition Crossover (GPX) from the Hybrid Evolutionary Algorithm HEA . The two main principles of GPX are: 1) a coloring is a partition of the vertices and not an assignment of colors to vertices, and 2) large color classes should be transmitted to the child. Figure 4 gives an example of GPX for a problem with three colors (red, blue and green) and 10 vertices (A, B, C, D, E, F, G, H, I and J). The first step is to transmit to the child the largest color class of the first parent. After withdrawing those vertices from the second parent, one proceeds to step 2, where the largest color class of the second parent is transmitted to the child. This process is repeated until all the colors are used. Some vertices most probably remain uncolored in the child solution. The final step is to add those vertices to the color classes at random.
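The steps above can be sketched as follows, with colorings represented as partitions (lists of k vertex sets); this is an illustrative reading of GPX, not the authors' implementation:

```python
import random

def gpx(parent1, parent2, k, n, seed=0):
    """GPX sketch: alternately transmit the largest remaining color class of
    each parent to the child, withdrawing the transmitted vertices from both
    parents; leftover vertices are finally assigned at random."""
    rng = random.Random(seed)
    parents = [[set(c) for c in parent1], [set(c) for c in parent2]]
    child = []
    for i in range(k):
        transmitted = set(max(parents[i % 2], key=len))  # alternate parents
        child.append(transmitted)
        for p in parents:                 # withdraw transmitted vertices
            for cls in p:
                cls -= transmitted
    leftover = set(range(n)) - set().union(*child)
    for v in leftover:                    # final step: random assignment
        child[rng.randrange(k)].add(v)
    return child
```

The child is always a complete partition of the vertices into k classes, though not necessarily a proper coloring; TabuCol then repairs the conflicts.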
4 Experimental Results
In this section we present the results obtained with the two versions of the proposed memetic algorithm: the first version, without elite solutions, and the second version with the two extra elite solutions. Test instances are selected among the most studied graphs since the 1990s, which are known to be very difficult (DIMACS ). To validate the proposed approach, our results are compared with the results obtained by some of the best methods currently known.
4.1 Instances and Benchmarks
We study some graphs from the second DIMACS challenge of 1992-1993 . This is to date the most widely-used benchmark for the graph coloring problem. These instances are available at the following address: ftp://dimacs.rutgers.edu
We focus on two main types of graphs from the DIMACS benchmark: DSJC and FLAT, which are randomly or quasi-randomly generated. DSJC graphs are random graphs whose names encode the number of vertices and the density, each vertex being connected to an average fraction of the other vertices given by the density. The chromatic number of these graphs is unknown. FLAT graphs have another structure: they are built for a known chromatic number, which appears in the graph name together with the number of vertices.
4.2 Computational Results
Both versions were programmed in standard C++. The results presented in this section were obtained on a computer with an Intel Xeon 3.10GHz processor (4 cores) and 8GB of RAM. Note that the RAM size has no impact on the calculations: even for large graphs such as DSJC1000.9 (with 1000 vertices and a high density of 0.9), the memory used does not exceed 125 MB. The main characteristic is the processor speed.
As shown in Section 3, the proposed algorithms make two successive calls to the local search (lines 8 and 9 of algorithms 1, 2 and 3), one for each child of the current generation. Almost all of the time is spent performing the local search. It is possible, and moreover easy, to parallelize both local searches on a multi-core processor architecture. This is what we have done using the OpenMP API (Open Multi-Processing), which has the advantage of being cross-platform (Linux, Windows, MacOS, etc.) and simple to use. The execution times provided in the following tables are in CPU time. Thus, when we give an execution time of 30 minutes, the required wall-clock time is actually close to 15 minutes using two processing cores.
Table 2 presents the results of the principal methods known to date. For each graph, it indicates the lowest number of colors found by each algorithm. For TabuCol , the reported results are from , which are better than those of 1987. The most recent algorithm, QA-col (Quantum Annealing for graph coloring ), provides the best results but is based on a cluster of PCs using 10 processing cores simultaneously. Note that HEA , AmaCol , MACOL  and EXTRACOL  are also population-based algorithms using TabuCol and the GPX crossover or an improvement of GPX (GPX with more than two parents for MACOL and EXTRACOL, while in AmaCol the GPX process is replaced by a selection of color classes from a very large pool). Only QA-col takes another approach, based on several parallel simulated annealing algorithms interacting through a sharing criterion.
In a standard Simulated Annealing algorithm (SA), the probability of accepting a candidate solution is managed through a temperature criterion. The value of the temperature decreases during the SA iterations. A Quantum Annealing (QA) is a memetic algorithm without crossover in which the local search is an SA. The only interaction between the individuals of the population occurs through a specific sharing process. A standard sharing process involves penalizing solutions that are too ‘close’ within the search space (the simplest sharing is to forbid having two similar solutions in the population). However, comparing a solution with all solutions of the population can be computationally expensive, especially for a large population. This is why QA-col compares a solution with only two others of the population: this comparison provides the solution-population distance. The value of this measure is integrated into the temperature value of each SA. If the solution-population distance is greater, then the temperature will be higher, and there will be a higher probability that the solution will be accepted. [30, 31] have also developed a specific population spacing management that resembles this one. However, there are different ways to calculate the distance between two solutions. For a set of solutions,  define frozen same pairs (respectively frozen different pairs) as pairs of vertices that are in the same color class (respectively in different color classes) in all solutions. In QA-col, the authors use these definitions with a set of two solutions in order to define the distance between these two solutions as the difference between the number of frozen same pairs and the number of frozen different pairs.
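Restricted to a set of two solutions, the frozen-pairs measure can be sketched as follows; this is our reading of the definition above (difference of the two counts), purely for illustration:

```python
from itertools import combinations

def frozen_pairs_distance(c1, c2):
    """Difference between the number of frozen same pairs (vertex pairs in
    the same color class in both colorings) and frozen different pairs
    (pairs in different classes in both colorings)."""
    same = diff = 0
    for u, v in combinations(range(len(c1)), 2):
        same_in_1, same_in_2 = c1[u] == c1[v], c2[u] == c2[v]
        if same_in_1 and same_in_2:
            same += 1
        elif not same_in_1 and not same_in_2:
            diff += 1
    return same - diff
```

Note that the pairwise loop is quadratic in the number of vertices, which is one reason QA-col limits each comparison to only two other solutions of the population.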
(Columns of Table 2: TabuCol [20, 21], HEA , AmaCol , MACOL , EXTRACOL , QA-col .)
Table 3 presents the results obtained with the first version (without elites). This simplest version finds very good solutions, close to the best known results, for difficult graphs within the literature. Only one method, QA-col, occasionally finds a solution with fewer colors. One column indicates the number of iterations of the TabuCol algorithm (this is the stop criterion of TabuCol). The Success column evaluates the robustness of this method, providing the success rate: success_runs/total_runs. A success run is one that finds a legal k-coloring. The average number of generations or crossovers performed during one success run is given by the Gene value. The total average number of TabuCol iterations performed during one run follows from these values. The Time column indicates the average CPU time, in minutes, of success runs.
The algorithm does not find a solution every time for these graphs, but when it does, it is generally very fast. For example, for the coloring of graph DSJC1000.1 with 20 colors, recent results reported in the literature are:
Our algorithm achieves such solutions in less than one minute (CPU 3.1 GHz).
The main drawback of this first version is that it sometimes converges too quickly: in such cases it cannot find a solution before the two individuals of a generation become identical. The second version adds more diversity while still performing an intensifying role.
Table 4 shows the results obtained with the second version. Of primary importance is that it finds solutions with fewer colors than all the best-known methods; only the Quantum Annealing algorithm, using ten CPU cores simultaneously, achieves this level of performance. In particular, DSJC500.5 is solved with only 47 colors and flat1000_76_0 with 81 colors. We denote by 1* the runs for which the solution is occasionally found, but on average in fewer than one run in 20. This is the case for graphs DSJC500.5 and flat1000_76_0 with 47 and 81 colors respectively.
The computation time of the second version is generally close to that of the first, but the second version is more robust, with nearly 100% success. In particular, the two graphs DSJC500.5 and DSJC1000.1, with 48 and 20 colors respectively, are solved every time, in less than one CPU minute on average. Using a multicore CPU, these instances are solved in less than 30 seconds on average, often in less than 10 seconds. As a comparison, the shortest time reported in the literature for DSJC1000.1 is 93 minutes for EXTRACOL with a 2.8 GHz processor (and 108 minutes for MACOL with a 3.4 GHz processor).
5 Analysis of diversification
In this section we perform several tests in order to analyze the role of diversification in the algorithm, by increasing or decreasing the dose of diversification within it. Only two operators lead to diversification: the GPX crossover and the population update process. In a first set of tests, we slightly modify the dose of diversification in the GPX crossover and analyze the results. In a second set of tests, we focus on the population update process: in our algorithm, the two children produced systematically replace both parents, even if they have worse fitness values than their parents. If we relax this rule, the diversification decreases.
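The two-individual scheme described above can be sketched as a generic skeleton. This is an illustration, not the paper's exact implementation: `init`, `crossover`, `local_search` and `is_legal` are caller-supplied placeholders standing in for random initialization, GPX, TabuCol and the legality test.

```python
def two_individual_memetic(init, crossover, local_search, is_legal, max_gens):
    """Skeleton of a memetic algorithm with a population of two.

    init() returns a candidate solution, crossover(a, b) builds a child
    led by its first argument, local_search(s) improves a solution.
    The two children systematically replace the two parents, even when
    they are worse -- the diversification rule studied in this section.
    """
    p1, p2 = init(), init()
    for _ in range(max_gens):
        c1 = local_search(crossover(p1, p2))  # child 1: parent 1 leads the crossover
        c2 = local_search(crossover(p2, p1))  # child 2: parent 2 leads
        if is_legal(c1):
            return c1
        if is_legal(c2):
            return c2
        p1, p2 = c1, c2                       # systematic replacement of both parents
    return p1
```

With only two individuals, the population update reduces to this single replacement line, which is what makes the role of each operator easy to isolate in the tests below.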
5.1 Dose of diversification in the GPX crossover
We modify the GPX crossover in order to increase (first test) or decrease (second test) the dose of diversification within this operator.
5.1.1 Test on GPX with increased chance: random draw of transmitted color classes
In order to increase the level of chance within the GPX crossover, we randomize it. Recall (cf. section 3.2.2) that at each step of the GPX, the selected parent transmits its largest color class to the child. In this test, we begin by randomly transmitting r color classes chosen from the parents to the child; after those r steps, we resume by alternately transmitting the largest color class of each parent. The parameter r is the random level. If r = 0, the crossover is the same as the initial GPX; as r increases, the chance and the diversity also increase. To evaluate this modification of the crossover, we count the cumulative number of TabuCol iterations that one run requires in order to find a legal k-coloring. For each r value, the algorithm runs ten times in order to produce more robust results. For the test, we consider the k-coloring problem for graph DSJC500.5 of the DIMACS benchmark. Figure 5 shows on the abscissa the random level r and on the ordinate the average number of iterations required to find a legal k-coloring.
First, r can range from 0 to k, with k the number of colors, but we stop the computation beyond a certain random level, because past it the algorithm does not find a k-coloring within an acceptable computing time limit. This means that when we introduce too much diversification, the algorithm cannot find a legal solution. Indeed, for high r values, the crossover does not transmit the good features of the parents, so the child amounts to a random initial solution. For lower r values, the algorithm finds a legal coloring in roughly 10 million iterations; it is not easy to decide which r value obtains the quickest result.
5.1.2 Test on GPX with decreased chance: imbalanced crossover and an important difference between the two parents
In the standard GPX, the role of each parent is balanced: they alternately transmit their largest color class to the child. Of course, the parent that transmits its largest class first has more importance than the other; this is why it is an asymmetric crossover. In this test, we give a higher importance to one of the parents. At each step, we randomly draw the parent that transmits its largest color class, with a different probability for each parent. We introduce p, the probability of selecting the first parent; 1 - p is the probability of selecting the second parent. For example, if p = 0.75, parent 1 always has a 3 in 4 chance of being selected to transmit its largest color class (parent 2 only a 1 in 4 chance). If p = 0.5, both parents have an equal (fifty-fifty) probability of being chosen; this almost corresponds to the standard GPX. If p = 1, the child is a copy of parent 1; there is no longer any crossover, and the algorithm reduces to a TabuCol with two initial solutions. When p moves away from 0.5, the chance and the diversity brought by the crossover decrease. Figure 6 shows on the abscissa the probability p and on the ordinate the average number of iterations required to find a legal k-coloring (as in the previous test).
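The parent-selection rule of the imbalanced crossover can be sketched in a few lines. This replaces only the strict alternation of the standard GPX; the function name `imbalanced_parent_sequence` is ours, for illustration.

```python
import random

def imbalanced_parent_sequence(p, steps, rng=random.Random(42)):
    """Which parent transmits its largest color class at each GPX step.

    Instead of alternating strictly, parent 1 is drawn with probability
    p and parent 2 with probability 1 - p at every step. p = 0.5 is
    close to the standard (alternating) GPX; p = 1 degenerates into a
    copy of parent 1, i.e. no crossover at all.
    """
    return [1 if rng.random() < p else 2 for _ in range(steps)]
```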
First, it is evident that the results are symmetric around p = 0.5, and the best results are obtained near this balanced setting. The impact of this parameter is weaker than that of the previous one: it gives a finer control over the reduction of diversification.
5.2 Test on parents’ replacement: systematic or not
In the base algorithm, the two children produced systematically replace both parents, even if they have worse fitness values than their parents. We modify this replacement rule in this test. If the fitness value of a child is better than that of its parents, the child automatically replaces one of the parents. Otherwise, we introduce a probability q corresponding to the probability of the parents’ replacement even when the child is worse than its parents. If q = 1, the replacement is systematic, as in the standard algorithm; if q = 0, the replacement is performed only if the children are better. When the q value decreases, the diversity also decreases. Figure 7 shows on the abscissa the parents’ replacement probability q and on the ordinate the average number of iterations required to find a legal k-coloring (as in the previous test).
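The modified replacement rule can be sketched as follows, for a single parent/child pair. This is an assumption-laden illustration: `replace_parent` and the convention that fitness counts conflicting edges (lower is better) are ours.

```python
import random

def replace_parent(parent, child, fitness, q, rng=random.Random(0)):
    """Parent-replacement rule with replacement probability q.

    A child with a better (lower) fitness always replaces the parent;
    a worse child replaces it only with probability q. q = 1 recovers
    the systematic replacement of the base algorithm, q = 0 a purely
    elitist update. Fitness is assumed to be the number of conflicting
    edges, so lower is better.
    """
    if fitness(child) <= fitness(parent) or rng.random() < q:
        return child
    return parent
```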
If the parents’ replacement probability q is zero or very low, then more time is required to produce the results: the absence or lack of diversification penalizes the search. However, over a large range of q values it is not possible to identify a best policy for this criterion. The dramatic change in the behavior of the algorithm happens very quickly, around low q values.
These studies enable us to better understand the role of the diversification operators (crossover and parent updating). Some interesting criteria are identified, such as the random level of the crossover or its imbalance level. We will integrate the criteria studied in this paper into future algorithms in order to dynamically manage the diversity.
We have proposed a new algorithm for the graph coloring problem. It is a variation of a memetic algorithm with only two candidate solutions. This simplification has the great advantage of clarifying the roles of the diversification and intensification operators and of more effectively managing the right ‘dose’ of diversification. The algorithm combines a local search algorithm (TabuCol) as an intensification operator with a crossover operator (GPX) as a diversification operator, the two main building blocks of Galinier and Hao’s memetic algorithm. The computational experiments, carried out on a set of challenging DIMACS graphs, show that our algorithm finds the best existing results, such as 47-colorings for DSJC500.5, 82-colorings for DSJC1000.5, 222-colorings for DSJC1000.9 and 81-colorings for flat1000_76_0, which had so far only been found by quantum annealing with massive multi-CPU resources.
We have performed an in-depth analysis of the crossover operator in order to better understand its role in the diversification process. Some interesting criteria have been identified, such as the crossover’s levels of randomness and imbalance; these criteria pave the way for further research. We have generalized this optimization methodology, a specific memetic algorithm with only two individuals in its population. This hybrid approach with two trajectory-based optimizations improves the local search algorithm through a crossover operator: the crossover inserts a dose of diversification that is then easy to manage.
-  Karen I. Aardal, Stan P. M. van Hoesel, Arie M. C. A. Koster, Carlo Mannino, and Antonio Sassano. Models and solution techniques for frequency assignment problems. Quarterly Journal of the Belgian, French and Italian Operations Research Societies, 1(4):261–317, 2003.
-  C. Avanthay, Alain Hertz, and Nicolas Zufferey. A variable neighborhood search for graph coloring. European Journal of Operational Research, 2003.
-  Nicolas Barnier and Pascal Brisset. Graph Coloring for Air Traffic Flow Management. Annals of Operations Research, 130(1-4):163–178, 2004.
-  Claude Bernard. Leçons de pathologie expérimentale. J.-B. Baillière et fils, Paris, 1872.
-  D. Brélaz. New Methods to Color the Vertices of a Graph. Communications of the ACM, 22(4):251–256, 1979.
-  Massimiliano Caramia and Paolo Dell’Olmo. Constraint Propagation in Graph Coloring. Journal of Heuristics, 8(1):83–107, 2002.
-  Joseph Culberson and Ian P. Gent. Frozen Development in Graph Coloring. Theoretical Computer Science, 265(1-2), August 2001.
-  Isabelle Devarenne, Hakim Mabed, and Alexandre Caminada. Intelligent neighborhood exploration in local search heuristics. In Proceedings of the 18th IEEE International Conference on Tools with Artificial Intelligence (ICTAI ’06), pages 144–150. IEEE Computer Society, 2006.
-  Mohammad Dib. Tabu-NG: hybridization of constraint programming and local search for solving CSP. PhD thesis, University of Technology of Belfort-Montbéliard, December 2010.
-  Mohammad Dib, Alexandre Caminada, and Hakim Mabed. Frequency management in Radio military Networks. In INFORMS Telecom 2010, 10th INFORMS Telecommunications Conference, Montreal, Canada, May 2010.
-  N. Dubois and D. de Werra. Epcot: An efficient procedure for coloring optimally with Tabu Search. Computers & Mathematics with Applications, 25(10–11):35–45, 1993.
-  Nicolas Durand and Jean-Marc Alliot. Genetic crossover operator for partially separable functions. In Genetic Programming 1998: Proceedings of the Third Annual Conference, pages 487–494, University of Wisconsin, Madison, Wisconsin, USA, 22-25 July 1998. Morgan Kaufmann.
-  C. Fleurent and J. Ferland. Genetic and Hybrid Algorithms for Graph Coloring. Annals of Operations Research, 63:437–464, 1996.
-  Philippe Galinier, Jean-Philippe Hamiez, Jin-Kao Hao, and Daniel Cosmin Porumbel. Recent Advances in Graph Vertex Coloring. In Ivan Zelinka, Václav Snásel, and Ajith Abraham, editors, Handbook of Optimization, volume 38 of Intelligent Systems Reference Library, pages 505–528. Springer, 2013.
-  Philippe Galinier and Jin-Kao Hao. Hybrid evolutionary algorithms for graph coloring. Journal of Combinatorial Optimization, 3(4):379–397, 1999.
-  Philippe Galinier and Alain Hertz. A survey of local search methods for graph coloring. Computers & Operations Research, 33:2547–2562, 2006.
-  Philippe Galinier, Alain Hertz, and Nicolas Zufferey. An adaptive memory algorithm for the k-coloring problem. Discrete Applied Mathematics, 156(2):267–279, 2008.
-  M. R. Garey and David S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. Freeman, San Francisco, CA, USA, 1979.
-  Jin-Kao Hao. Memetic Algorithms in Discrete Optimization. In Ferrante Neri, Carlos Cotta, and Pablo Moscato, editors, Handbook of Memetic Algorithms, volume 379 of Studies in Computational Intelligence, pages 73–94. Springer, 2012.
-  Alain Hertz and Dominique de Werra. Using Tabu Search Techniques for Graph Coloring. Computing, 39(4):345–351, 1987.
-  Alain Hertz, M. Plumettaz, and Nicolas Zufferey. Variable Space Search for Graph Coloring. Discrete Applied Mathematics, 156(13):2551 – 2560, 2008.
-  David S. Johnson, C. R. Aragon, L. A. McGeoch, and C. Schevon. Optimization by Simulated Annealing: An Experimental Evaluation; Part II, Graph Coloring and Number Partitioning. Operations Research, 39(3):378–406, 1991.
-  David S. Johnson and Michael Trick, editors. Cliques, Coloring, and Satisfiability: Second DIMACS Implementation Challenge, 1993, volume 26 of DIMACS Series in Discrete Mathematics and Theoretical Computer Science. American Mathematical Society, Providence, RI, USA, 1996.
-  R.M. Karp. Reducibility among combinatorial problems. In R. E. Miller and J. W. Thatcher, editors, Complexity of Computer Computations, pages 85–103. Plenum Press, New York, USA, 1972.
-  F. T. Leighton. A Graph Coloring Algorithm for Large Scheduling Problems. Journal of Research of the National Bureau of Standards, 84(6):489–506, 1979.
-  Zhipeng Lü and Jin-Kao Hao. A memetic algorithm for graph coloring. European Journal of Operational Research, 203(1):241–250, 2010.
-  A. Mehrotra and Michael A. Trick. A Column Generation Approach for Graph Coloring. INFORMS Journal On Computing, 8(4):344–354, 1996.
-  Jacques Monod. Chance and necessity: an essay on the natural philosophy of modern biology. Knopf, 1971.
-  M. Plumettaz, D. Schindl, and Nicolas Zufferey. Ant Local Search and its efficient adaptation to graph colouring. Journal of the Operational Research Society, 61(5):819–826, 2010.
-  Daniel Cosmin Porumbel, Jin-Kao Hao, and Pascale Kuntz. Diversity Control and Multi-Parent Recombination for Evolutionary Graph Coloring Algorithms. In 9th European Conference on Evolutionary Computation in Combinatorial Optimisation (EvoCOP 2009), Tübingen, Germany, 2009.
-  Daniel Cosmin Porumbel, Jin-Kao Hao, and Pascale Kuntz. An evolutionary approach with diversity guarantee and well-informed grouping recombination for graph coloring. Computers & Operations Research, 37:1822–1832, 2010.
-  Steven David Prestwich. Generalised graph colouring by a hybrid of local search and constraint programming. Discrete Applied Mathematics, 156(2):148–158, 2008.
-  El-Ghazali Talbi. A Taxonomy of Hybrid Metaheuristics. Journal of Heuristics, 8(5):541–564, September 2002.
-  Olawale Titiloye and Alan Crispin. Graph Coloring with a Distributed Hybrid Quantum Annealing Algorithm. In James O’Shea, Ngoc Nguyen, Keeley Crockett, Robert Howlett, and Lakhmi Jain, editors, Agent and Multi-Agent Systems: Technologies and Applications, volume 6682 of Lecture Notes in Computer Science, pages 553–562. Springer Berlin / Heidelberg, 2011.
-  Olawale Titiloye and Alan Crispin. Quantum annealing of the graph coloring problem. Discrete Optimization, 8(2):376–384, 2011.
-  Olawale Titiloye and Alan Crispin. Parameter Tuning Patterns for Random Graph Coloring with Quantum Annealing. PLoS ONE, 7(11):e50060, 11 2012.
-  D. C. Wood. A Technique for Coloring a Graph Applicable to Large-Scale Timetabling Problems. Computer Journal, 12:317–322, 1969.
-  Qinghua Wu and Jin-Kao Hao. Coloring large graphs based on independent set extraction. Computers & Operations Research, 39(2):283–290, 2012.
-  Nicolas Zufferey, P. Amstutz, and P. Giaccari. Graph Colouring Approaches for a Satellite Range Scheduling Problem. Journal of Scheduling, 11(4):263 – 277, 2008.