I Introduction
The maximum satisfiability problem (MAXSAT) is an optimization extension of a well-studied, prototypical NP-complete problem, the satisfiability problem (SAT) [18, 22, 19, 7, 11, 5, 26, 29]. The two problems share some aspects, but their solvers are very different. The SAT problem is to determine whether there is an assignment of truth values to the propositional variables such that all clauses in a given conjunctive normal form (CNF) formula are satisfied, while the MAXSAT problem is to find an assignment of truth values such that the number of satisfied clauses is maximized. Different clauses in a MAXSAT instance can have different importance. When some clauses are considered hard and others soft, the problem is referred to as the partial MAXSAT problem, whose goal is to find an assignment of truth values that satisfies all hard clauses and maximizes the number of satisfied soft clauses. The SAT problem can then be considered as a partial MAXSAT problem with no soft clauses. When weights are assigned to soft clauses to distinguish their importance, the partial MAXSAT problem is referred to as the weighted partial MAXSAT problem, whose goal is to find an assignment of truth values that satisfies all hard clauses while maximizing the total weight of satisfied soft clauses. In recent years, MAXSAT and its variants have attracted increasing interest in academia and industry.
As an NP-hard problem, MAXSAT is very difficult to solve, and there are two classes of algorithms for it: exact algorithms and heuristic algorithms. Exact algorithms (see e.g., [9, 24, 25, 1]) return solutions and prove their optimality. Heuristic algorithms (see e.g., [14, 33, 28, 4, 15, 3]) efficiently return good-quality solutions that may not be optimal. Searching for optimal solutions with exact algorithms can be impossible within reasonable time for large instances due to the NP-hardness of the problem. Thus heuristic algorithms, including local search approaches, are often used for the MAXSAT problem. The key issue in designing a local search based heuristic algorithm for MAXSAT is how to address the local cycling phenomenon, in which the search process gets trapped in a local optimum. As the MAXSAT problem is closely related to the SAT problem, one could adapt effective local search strategies for SAT, such as random walk [32], promising decreasing variable picking [23] and configuration checking (CC) [13, 11, 2], to solve the MAXSAT problem. Unfortunately, making these adaptations effective for MAXSAT is highly nontrivial because of the difference between SAT and MAXSAT. A solution of a SAT instance must satisfy every clause of the instance, meaning that when a clause is falsified by an assignment, at least one variable in the clause is assigned the wrong truth value. So picking a variable in a falsified clause and flipping its value to satisfy the clause has a chance of approaching a solution of the instance. Therefore, local search strategies for SAT usually focus on falsified clauses. However, an optimal solution of a MAXSAT instance can falsify some clauses of the instance, so that flipping the value of any variable in these clauses is a wrong decision. Consequently, guiding the local search using falsified clauses is much more complex for MAXSAT than for SAT.
In this paper, we propose a new strategy named PathBreaking, instead of falsified clauses, to guide our local search for MAXSAT. PathBreaking is an improvement of the PathRelinking method [17] adapted to MAXSAT. Given two different elite solutions of a combinatorial optimisation problem, PathRelinking tries to find better solutions by establishing trajectories between the two elite solutions. It has been used to solve many combinatorial optimisation problems [30, 31], including MAXSAT [16]. However, it does not appear to be very effective for MAXSAT, because the state-of-the-art local search solvers in the recent MAXSAT evaluations (http://maxsat.ia.udl.cat/introduction) do not use PathRelinking.
In order to make the PathRelinking method competitive for MAXSAT, we identify two drawbacks in the previous PathRelinking algorithm proposed in [16]: (1) complete trajectories between the two elite solutions are constructed, regardless of the quality of the solutions along these trajectories, so that many search steps are spent exploring low-quality solutions; (2) the search is not sufficiently diversified, because PathRelinking is only used to intensify the search around the solutions produced by a GRASP (Greedy Randomized Adaptive Search Procedure) heuristic. Consequently, we propose an effective local search algorithm for MAXSAT called IPBMR (Iterated PathBreaking with Mutation and Restart) to remedy the above two drawbacks: (1) we establish a condition to break off the construction of a trajectory between two elite solutions, allowing the search to focus only on high-quality solutions; (2) we randomize the construction of the trajectories between two elite solutions, and if the search falls into a local optimum, we perform weak mutations followed by strong mutations that randomly flip a subset of variables of the local optimum solution in order to further diversify the search; (3) we restart [8] the search to explore new regions of the search space if the mutations fail to improve the local optimum solution.
Our experiments show that IPBMR significantly outperforms the state-of-the-art local search solvers CCLS [27] and Swccams [10], which do not use PathRelinking but use falsified clauses to guide the search, on most benchmarks of the MAXSAT evaluation 2016 (MSE2016) [6]. In order to understand the performance of IPBMR, we carry out an empirical investigation to identify and explain the effect of its different components.
This paper is organized as follows. Section 2 provides some necessary definitions and notations. Section 3 describes the proposed IPBMR algorithm in detail. Section 4 presents the empirical evaluation of the IPBMR algorithm after describing the experimental environment and the benchmark instances. Section 5 concludes the paper.
II Preliminaries
A Boolean variable has two possible values: True (denoted as 1) and False (denoted as 0). A literal is a variable x or its negation ¬x; the literal x is satisfied if the value 1 is assigned to x and falsified otherwise, while the literal ¬x is satisfied if the value 0 is assigned to x and falsified otherwise. A clause is a disjunction (∨) of literals and a formula in conjunctive normal form (CNF) is a conjunction (∧) of clauses. A clause is satisfied if at least one of its literals is satisfied and is falsified if all its literals are falsified. A CNF formula is satisfied if all its clauses are satisfied. A truth assignment assigns a value to each variable of a CNF formula. For the MAXSAT problem, any assignment is called a solution of the problem. The MAXSAT problem is to find an optimal solution that maximizes the number of satisfied clauses, or equivalently minimizes the number of falsified clauses.
Flipping a variable in an assignment changes its value from 0 to 1 or from 1 to 0. Flipping a variable can change some clauses from falsified to satisfied (Case 1) or from satisfied to falsified (Case 2). The number of clauses in Case 1 is called the make of the variable and the number of clauses in Case 2 is called the break of the variable. The score of the variable is defined as make minus break, i.e., the net increase in the number of satisfied clauses. The inverse solution of a solution is obtained by flipping all its variables.
For example, consider a CNF formula φ with three clauses over the variables x1, x2 and x3. With the assignment x1=0, x2=1, x3=1 (denoted as 011), φ contains two satisfied clauses and one falsified clause. Flipping variable x2, the assignment becomes 001. Then all clauses are satisfied and φ is satisfied. The make of variable x2 is 1. Since no clause changes from satisfied to falsified after the flip, the break is 0 and the score of x2 is 1.
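The make/break/score computation above can be sketched in code. The three-clause formula below is a hypothetical illustration chosen to reproduce the numbers of the worked example; it is not taken from the paper.

```python
# A literal is a signed integer: +i means x_i, -i means NOT x_i.
# The clauses below are illustrative, not the paper's own formula.

def satisfied(clause, assignment):
    """A clause is satisfied if at least one of its literals is satisfied."""
    return any((assignment[abs(l) - 1] == 1) == (l > 0) for l in clause)

def make_break_score(cnf, assignment, var):
    """make/break/score of flipping variable `var` (1-based) in `assignment`."""
    flipped = assignment[:]
    flipped[var - 1] ^= 1
    make = sum(1 for c in cnf if not satisfied(c, assignment) and satisfied(c, flipped))
    brk = sum(1 for c in cnf if satisfied(c, assignment) and not satisfied(c, flipped))
    return make, brk, make - brk

cnf = [[1, -2], [-1, 3], [2, 3]]   # (x1 v ~x2) & (~x1 v x3) & (x2 v x3)
a = [0, 1, 1]                      # assignment 011
print(make_break_score(cnf, a, 2))  # flipping x2 -> (1, 0, 1)
```

With this formula, flipping x2 satisfies the single falsified clause and breaks nothing, matching the make = 1, break = 0, score = 1 of the example.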
III The IPBMR Algorithm for the MAXSAT
In this section we describe the IPBMR algorithm in detail. First, we modify the PathRelinking method with a break condition and propose a new algorithm called PathBreaking (PB). Then, we design an iterative PathBreaking algorithm with a restart strategy (IPBR) based on PB. Finally, we apply a mutation operator borrowed from genetic algorithms [20, 21] to design IPBMR (Iterated PathBreaking with Mutation and Restart).

i The PathRelinking Method
PathRelinking is a strategy used to intensify the search around pairs of elite solutions in order to find better solutions. For solving MAXSAT, a PathRelinking procedure such as the one used in [16] picks two elite solutions as the starting solution S and the target solution T respectively, and generates a trajectory S = s0, s1, …, sk = T, where each s(i+1) is obtained from si by picking and flipping one variable that has different values in si and the target solution. The best solution along the trajectory is then returned. Table 1 shows a simple example of the PathRelinking process.
[Table 1: an example PathRelinking trajectory, showing the starting solution, two intermediate solutions obtained by flipping one differing variable at a time, and the target solution.]
In this work, we pick one elite solution as the starting solution and take its inverse solution as the target solution, so that all variables are candidate variables in the beginning. In this way, we explore around one elite solution. So, the search region is bigger in our PathRelinking process than in the usual PathRelinking process for the MAXSAT, because the search region in the usual PathRelinking process can be considered as the intersection of two regions around the two elite solutions. In fact, the variables having the same value in the two elite solutions are never flipped in a usual PathRelinking process.
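The trajectory construction described above can be sketched as follows. This is a minimal greedy variant assumed for illustration (the variable-selection heuristic of the procedure in [16] may differ): at each step we flip the differing variable whose flip leaves the fewest falsified clauses, and we return the best solution seen along the trajectory.

```python
# Greedy path-relinking sketch: walk from `start` toward `target`,
# flipping one differing variable per step, and keep the best solution.

def satisfied(clause, assignment):
    return any((assignment[abs(l) - 1] == 1) == (l > 0) for l in clause)

def num_falsified(cnf, assignment):
    return sum(1 for c in cnf if not satisfied(c, assignment))

def path_relinking(cnf, start, target):
    cur = start[:]
    best = cur[:]
    candidates = [i for i in range(len(cur)) if cur[i] != target[i]]
    while candidates:
        # Pick the candidate flip yielding the fewest falsified clauses.
        def cost(i):
            nxt = cur[:]
            nxt[i] ^= 1
            return num_falsified(cnf, nxt)
        i = min(candidates, key=cost)
        candidates.remove(i)
        cur[i] ^= 1
        if num_falsified(cnf, cur) < num_falsified(cnf, best):
            best = cur[:]
    return best

cnf = [[1, 2], [-1, 3], [-2, -3]]       # hypothetical 3-clause instance
start = [0, 0, 0]
best = path_relinking(cnf, start, [1 - v for v in start])  # inverse as target
```

Using the inverse of the starting solution as the target, as in this work, makes every variable a candidate at the beginning, so the whole neighbourhood around the elite solution is reachable.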
ii The PathBreaking Strategy
Using the inverse solution as the target solution also makes the trajectory longer, and going through this trajectory needs more computation, because every variable of the starting solution is flipped along the complete trajectory. We thus establish a condition to break off the search through the trajectory. At each step of the PathRelinking process, flipping a variable with a positive score improves the solution. Therefore, if there is no positive-score variable, the solution will not be improved. Fig. 1 contains four plots, each showing ten trajectories of the PathRelinking process for a representative MAXSAT instance by giving the maximum variable score at each step of the trajectory. These trajectories suggest that the maximum variable score becomes negative after a few steps along the trajectory. We also tested many other instances and the trajectories are similar. We thus define a condition to break off the PathRelinking process. Specifically, we record the last positive maximum score (denoted as s) and the sum of the negative maximum scores after flipping the last variable with a positive score (denoted as n). When |n| > γ·s, where γ is a positive integer parameter, we break off the PathRelinking process, which reduces the amount of computation by at least half.
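The break-off test can be sketched as follows. The exact inequality |n| > γ·s is a reconstruction from the text, and γ = 3 matches the setting reported in the experiments; both are assumptions in this sketch.

```python
# `max_scores` is the maximum variable score observed at each step of a
# trajectory. s records the last positive maximum score, and n accumulates
# the negative maximum scores seen since then.

def break_step(max_scores, gamma=3):
    """Return the step index at which the break condition fires, or None."""
    s, n = 0, 0
    for t, score in enumerate(max_scores):
        if score > 0:
            s, n = score, 0          # new last positive maximum score
        else:
            n += score               # accumulate negative scores
            if s > 0 and abs(n) > gamma * s:
                return t
    return None

print(break_step([3, 1, -1, -1, -2]))  # fires at step 4: |n| = 4 > 3 * 1
```

In this trace the last positive maximum score is 1 (step 1), and the accumulated negative sum reaches −4 at step 4, exceeding γ·s = 3, so the trajectory is cut there.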
Besides the break condition, the heuristic that decides which variable to flip at each step is also important. A greedy heuristic is usually used, but it often results in premature local optima. In this paper, we design a heuristic combining greediness and randomization. Algorithm 1 shows the pseudocode of the PathBreaking (PB) algorithm. The details are as follows.
Given a starting solution ST, at each step the PB algorithm flips a picked variable to move along the trajectory towards the inverse solution of ST. In order to pick a promising variable, PB calculates the scores of all candidate variables, which are stored in a candidate list C. We put the variables with positive scores into a list P and allocate to each of them a probability of being picked according to its score. Let f be a function of the score. The probability allocated to a variable v in P is f(score(v)) divided by the sum of f(score(u)) over all u in P. In Algorithm 1, f is chosen to give high-score variables more opportunity to be flipped.
Concretely, with a constant probability p, if P is not empty, PB picks and flips one of the variables in P according to their probabilities. Otherwise (i.e., with probability 1−p, or if P is empty), PB picks and flips a variable with the maximum score among all variables in C. The flipped variable is removed from the candidate list C. PB continues to pick and flip a candidate variable until the break condition is satisfied or C becomes empty. Then the search process ends and the best solution in the explored part of the trajectory is returned.
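One PB flip decision can be sketched as follows. The probability function f and the parameter p are assumptions here: the paper leaves f's exact form to Algorithm 1, and the identity f used below is only a placeholder.

```python
import random

# Sketch of one PB flip decision: with probability p, roulette-wheel pick
# among positive-score variables (list P); otherwise greedy pick of the
# maximum-score candidate in C. f is a placeholder probability function.

def pick_variable(scores, candidates, p=0.5, f=lambda s: s, rng=random):
    """scores: dict var -> score; candidates: current candidate list C."""
    pos = [v for v in candidates if scores[v] > 0]      # the list P
    if pos and rng.random() < p:
        # Probability of v proportional to f(score(v)).
        weights = [f(scores[v]) for v in pos]
        return rng.choices(pos, weights=weights, k=1)[0]
    # Otherwise: greedy pick of the maximum-score candidate.
    return max(candidates, key=lambda v: scores[v])
```

With p = 0 the choice is purely greedy; with p = 1 and a nonempty P it is purely the randomized roulette-wheel pick.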
iii Iterative PB with Restart Strategy
We construct an iterative PB algorithm with a restart strategy, called IPBR, as follows. Given a random starting solution S, the PB function returns an improved solution, which we then take as the starting solution for the next PB process. The iteration is stopped when the current solution cannot be improved any more, which happens when there is no positive-score variable at the beginning of a PB process. Then the iteration is restarted by generating a new random starting solution and iteratively improving it using the PB process. Finally, the best solution found in these iterations is returned. The pseudocode of the iterative algorithm IPBR is presented in Algorithm 2.
Note that due to the random generation of the starting solution, each restart allows IPBR to explore a different search region.
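The outer loop just described can be sketched as follows; this is an assumed skeleton of Algorithm 2, not its exact pseudocode, with PB abstracted into any solution-improving function.

```python
import random

# Skeleton of the IPBR loop: iterate PB from a random start until it can no
# longer improve, then restart. `pb` maps a solution to a solution and
# `cost` counts falsified clauses (lower is better).

def ipbr(pb, cost, n_vars, restarts=10, rng=random):
    best = None
    for _ in range(restarts):
        cur = [rng.randint(0, 1) for _ in range(n_vars)]   # random start
        while True:
            nxt = pb(cur)
            if cost(nxt) >= cost(cur):   # no improvement: stop this iteration
                break
            cur = nxt
        if best is None or cost(cur) < cost(best):
            best = cur
    return best

# Stub PB for illustration only: flip the first 0 bit, which always improves
# the toy cost below (the number of zeros).
def pb_stub(a):
    a = a[:]
    for i, v in enumerate(a):
        if v == 0:
            a[i] = 1
            return a
    return a

def cost(a):
    return a.count(0)
```

With the stub PB, every restart climbs to the all-ones assignment, illustrating how each restart explores from a fresh random region.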
iv The IPBMR Algorithm
To further improve the IPBR algorithm, we adapt and apply the mutation operator from genetic algorithms to IPBR. When the search process reaches a local optimum, IPBR simply restarts the iteration to search in other regions. However, in a neighbourhood of the local optimum solution larger than the one that can be reached in a PathBreaking process, there may exist a better solution. Let S be the current local optimum solution. We randomly flip some variables of S to obtain a new solution that is not too far from S and maintains part of its good quality. Then we improve the new solution using the PathBreaking algorithm.
The IPBMR (Iterated PathBreaking with Mutation and Restart) algorithm is the IPBR algorithm extended with the mutating and improving strategy described above, as depicted in Algorithm 3. In the IPBMR algorithm, the mutation acts on the local optimum solution returned by each iterative PathBreaking process. We exploit two types of mutation: the strong mutation flips a large percentage of the variables and the weak mutation flips a small percentage. The percentages are chosen so as to maintain the major structure of the local optimum solution; we set 70% for the strong mutation and 20% for the weak mutation. Both mutations are applied t times, where t is a parameter, to increase the chance of finding a better solution. The search continues from the mutated local optimum solution.
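The two mutations can be sketched as a single operator parameterized by the flip rate, 20% for the weak mutation and 70% for the strong one, as stated above:

```python
import random

# Weak/strong mutation sketch: flip a randomly chosen subset of variables,
# keeping the remaining variables of the local optimum solution unchanged.

def mutate(solution, rate, rng=random):
    n = len(solution)
    k = max(1, int(rate * n))            # number of variables to flip
    out = solution[:]
    for i in rng.sample(range(n), k):    # k distinct random positions
        out[i] ^= 1
    return out

def weak(solution):
    return mutate(solution, 0.2)         # 20% of the variables

def strong(solution):
    return mutate(solution, 0.7)         # 70% of the variables
```

Flipping only a subset keeps the mutated solution close to the local optimum, so a subsequent PB run starts from a solution that already satisfies most clauses.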
IV Experiments and Discussions
For convenience, the IPBMR algorithm was presented in Section III for the unweighted MAXSAT. In reality, we also implemented it for the weighted MAXSAT and the weighted partial MAXSAT (recall that the unweighted MAXSAT is a special case of the weighted MAXSAT in which the weight of each clause is 1). For the weighted MAXSAT, the make (break) of a variable is the total weight of the falsified (satisfied) clauses that become satisfied (falsified) if the variable is flipped. For the weighted partial MAXSAT, if there are falsified hard clauses, the make (break) of a variable is the number of falsified (satisfied) hard clauses that become satisfied (falsified) if the variable is flipped; otherwise, the make (break) of a variable is the total weight of the falsified (satisfied) soft clauses that become satisfied (falsified) if the variable is flipped. In all cases, the score of a variable is equal to make minus break.
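The weighted variant of make/break/score can be sketched as follows; the clauses and weights are hypothetical, and soft clauses simply contribute their weight instead of 1.

```python
# Weighted make/break/score sketch. A literal is a signed integer:
# +i means x_i, -i means NOT x_i. weights[i] is the weight of cnf[i].

def satisfied(clause, assignment):
    return any((assignment[abs(l) - 1] == 1) == (l > 0) for l in clause)

def weighted_make_break(cnf, weights, assignment, var):
    """Weighted make, break, and score of flipping `var` (1-based)."""
    flipped = assignment[:]
    flipped[var - 1] ^= 1
    make = sum(w for c, w in zip(cnf, weights)
               if not satisfied(c, assignment) and satisfied(c, flipped))
    brk = sum(w for c, w in zip(cnf, weights)
              if satisfied(c, assignment) and not satisfied(c, flipped))
    return make, brk, make - brk        # score = make - break

cnf = [[1, -2], [-1, 3], [2, 3]]        # hypothetical soft clauses
weights = [5, 1, 1]
```

Under the assignment 011, flipping x2 satisfies the falsified clause of weight 5 and breaks nothing, so its weighted score is 5 rather than 1.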
In the implementation of IPBMR, the parameters are empirically fixed as follows. The parameter in the breaking condition is set to 3, and the probability of picking the variable with the greatest score in the PB function is set to 0.2 for the weighted MAXSAT instances and to 0.99 for the weighted partial MAXSAT instances. The maximum number of mutations applied to a local optimum solution is set to 3 for industrial instances and to 7 for the other instances. The remaining parameter is set according to the cutoff time, which is 300 seconds.
We conducted two experiments to evaluate IPBMR. In the first experiment, we compared IPBMR with two state-of-the-art local search algorithms for MAXSAT that do not use the PathRelinking strategy, but focus on falsified clauses to pick the next variable to flip. In the second experiment, we compared IPBMR with three of its variants to analyze its performance.
The benchmarks used in the experiments come from the MSE2016 and are divided into two groups: unweighted MAXSAT and weighted partial MAXSAT. Each group consists of three types of instances: Random, Crafted and Industrial. IPBMR was implemented in the C programming language and all experiments were performed on an Intel(R) Xeon(R) CPU E5-2640 v4 @ 2.40GHz under the Linux operating system.
In the experiments, each solver solved each instance 10 times independently with different random seeds, and each run was limited to 300 seconds and 4 GB of memory. Let ti (1 ≤ i ≤ 10) be the time in seconds at which a solver found its best solution in the i-th run for an instance. The solving time of the solver for the instance is defined as the average of t1, …, t10. The instances are divided into subsets according to their size and/or their properties as in the MSE2016; the number of instances in a subset is denoted as “#inst.”. We compared the solvers on each subset of instances in terms of the total solving time in seconds for the instances of the subset, denoted as “time”, and the number of instances for which a solver reported the best solution (i.e., the solution satisfying the greatest number of clauses) among the compared solvers, denoted as “#win.”. A solver is better than another on a subset of instances if it reported a better solution for more instances (i.e., has a greater “#win.”) in the subset, ties being broken by the shorter total solving time.
i Comparing IPBMR with two state-of-the-art solvers
We compared IPBMR with the following two state-of-the-art solvers, which are among the best in the MSE2016:
CCLS [27]: an efficient local search algorithm for MAXSAT which applies configuration checking with the make heuristic to pick and flip variables. The configuration checking (CC) strategy is a kind of tabu method that forbids a variable x to be flipped again after it is flipped, until a variable occurring in a clause containing x is flipped. In CCLS, the score of a variable is equal to its make, where make is defined as in IPBMR. A step of CCLS can be described as follows. With a fixed probability, CCLS randomly chooses a falsified clause c and flips a randomly chosen variable of c. Otherwise, let V be the set of the variables in falsified clauses that are not forbidden by the CC strategy. If V is not empty, CCLS flips the variable of V with the greatest make; otherwise, CCLS randomly chooses a falsified clause c and flips a randomly chosen variable of c.
Swccams [10]: an adaptation to MAXSAT of the local search algorithm for SAT called Swcca [12]. In Swccams, the weight of each clause is initialized to 1. Then, every time Swccams encounters a local optimum solution, the weight of each falsified clause is incremented by one. The score of a variable is the total weight of the falsified clauses that become satisfied minus the total weight of the satisfied clauses that become falsified if the variable is flipped. A step of Swccams can roughly be described as follows. Let V be the set of the variables with positive score that are not forbidden by the CC strategy (see the description of CCLS above). If V is not empty, flip the variable of V with the greatest score. Otherwise, if there are variables with a very high score that are forbidden by the CC strategy, flip such a variable with the greatest score. Otherwise, randomly choose a falsified clause and flip the least recently flipped variable in it.
CCLS and Swccams are available on the homepage of the MSE2016. The results of the MSE2016 show that the two solvers are competitive. Note that variables with a positive make or a positive score necessarily occur in falsified clauses. The search strategies of CCLS and Swccams thus clearly focus on falsified clauses. The comparison of IPBMR with CCLS and Swccams is in reality a comparison of a search strategy guided by PathRelinking with a strategy focused on falsified clauses.
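For contrast with the PathRelinking-guided search, a falsified-clause-guided step in the style of CCLS can be sketched as follows. The CC bookkeeping is abstracted into an `allowed` predicate and clauses are simplified to lists of variables; both simplifications and the function name are illustrative assumptions, not CCLS's actual code.

```python
import random

# Sketch of a CCLS-style step: random walk with probability p, otherwise a
# greedy make-based pick among CC-allowed variables of falsified clauses.

def ccls_like_step(falsified, make, allowed, p=0.5, rng=random):
    """Return the variable such a solver would flip, or None if no clause
    is falsified. falsified: list of clauses (lists of variables);
    make: dict var -> make value; allowed: var -> bool (CC permits it)."""
    if not falsified:
        return None
    if rng.random() < p:
        return rng.choice(rng.choice(falsified))        # pure random walk
    ccv = {v for c in falsified for v in c if allowed(v)}
    if ccv:
        return max(ccv, key=lambda v: make[v])          # greedy on make
    return rng.choice(rng.choice(falsified))            # fallback random walk
```

The key contrast with PB is visible here: every candidate variable is drawn from falsified clauses, whereas PB draws candidates from the trajectory toward the inverse solution regardless of which clauses they appear in.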
Table 2 shows the results of CCLS, Swccams and IPBMR on the 454 unweighted random instances and the 402 crafted instances of the MSE2016. From Table 2, we can see that IPBMR is better than CCLS and Swccams on almost all subsets of instances, the total solving time being divided by 10 or more for some subsets.
Ins.Type  Subset Name (#inst.)  CCLS  Swccams  IPBMR 
avg. time (#win.)  avg. time (#win.)  avg. time (#win.)  
Random  120v(45)  0.09(45)  1.28(45)  0.01(45) 
140v(45)  0.56(45)  0.72(45)  0.01(45)  
160v(45)  0.38(45)  0.75(45)  0.01(45)  
180v(44)  1.25(44)  0.56(44)  0.02(44)  
200v(49)  0.12(49)  0.49(49)  0.10(49)  
70v(45)  0.81(45)  0.98(45)  0.01(45)  
90v(49)  0.59(49)  0.91(49)  0.01(49)  
110v(50)  2.59(50)  0.95(50)  0.06(50)  
3sat(50)  165.86(50)  8.09(50)  115.60(50)  
4sat(32)  166.57(32)  5.47(32)  30.02(32)  
Crafted  maxcut1406300.7(50)  0.16(50)  0.98(50)  0.13(50) 
maxcut1406300.8(50)  0.01(50)  0.77(50)  0.01(50)  
v140(45)  0.54(45)  296.54(45)  0.14(45)  
v160(45)  4.70(45)  7.64(45)  0.33(45)  
v180(45)  1.11(45)  15.51(45)  0.52(45)  
v200(45)  4.65(45)  92.40(45)  0.82(45)  
v220(45)  2.05(45)  22.79(45)  0.99(45)  
dimacsmod(62)  0.84(62)  0.54(62)  0.01(62)  
spinglass(5)  0.07(5)  0.07(5)  0.01(5)  
scpclr(4)  0.14(4)  312.95(4)  225.31(1)  
scpcyc(6)  305.37(6)  196.79(1)  391.57(1) 
Table 3 shows the results of CCLS, Swccams and IPBMR on the weighted partial random and crafted MAXSAT instances of the MSE2016. Swccams reported false solutions for most of these instances, because the reported total weight of the falsified clauses is not equal to the sum of the weights of the clauses falsified by the reported solution. IPBMR is significantly better than CCLS on these instances.
Ins.Type  Subset Name (#inst.)  CCLS  Swccams  IPBMR 
avg. time (#win.)  avg. time (#win.)  avg. time (#win.)  
Random  120v(50)  0.01(50)  False  0.01(50) 
140v(50)  0.15(50)  False  0.10(50)  
160v(45)  0.33(45)  False  0.32(45)  
180v(44)  0.44(44)  False  0.34(44)  
200v(49)  1.22(49)  False  0.89(49)  
70v(45)  0.38(45)  False  0.15(45)  
90v(49)  1.00(49)  False  0.30(49)  
110v(50)  2.03(50)  False  1.21(50)  
wpmax2sat/hi(30)  311.65(29)  False  7.85(29)  
wpmax2sat/lo(30)  250.84(29)  False  2.09(30)  
wpmax2sat/me(30)  85.34(29)  False  4.93(30)  
wpmax3sat/hi(30)  1.50(30)  False  0.03(30)  
Crafted  aucpaths(20)  1.3(20)  False  80.13(20) 
aucscheduling(50)  0.01(20)  False  1.33(20)  
CSG(10)  868.27(2)  False  710.74(6)  
dimacs_mod(43)  0.03(43)  False  0.01(43)  
frb(34)  539.19(34)  False  1927.05(17)  
miplib(12)  165.16(3)  False  44.35(1)  
ramsey(15)  590.03(15)  False  899.81(8)  
random_net(32)  5667.22(30)  False  4646.64(2)  
scp4x(10)  1367.67(2)  False  1274.52(9)  
scp5x(10)  1642.85(2)  False  1191.79(8)  
scp6x(5)  293.63(3)  False  647.2(3)  
scpn(20)  2490.19(7)  False  1665.47(18)  
spinglass(5)  84.04(4)  False  112.48(5)  
warehouses(18)  188.85(6)  False  2865.5(12) 
The performance of local search solvers is generally not good enough on weighted partial industrial MAXSAT instances, as shown in recent MAXSAT evaluations. Table 4 shows the results of CCLS, Swccams and IPBMR on the weighted partial industrial MAXSAT instances of the MSE2016 for which at least one of the three compared solvers reported a solution satisfying all hard clauses for at least one instance in the subset. Swccams also reported false solutions on most instances, indicating that it is not suitable for weighted partial instances. The results show that IPBMR is substantially better than CCLS on these instances, indicating that the PathRelinking strategy can substantially improve the performance of local search on industrial instances.
Ins.Type  Subset Name (#inst.)  CCLS  Swccams  IPBMR 
Industrial  avg. time (#win.)  avg. time (#win.)  avg. time (#win.)  
BTBNSL(60)  N/A  False  544.93(4)  
correlationclustering(129)  1039.26(1)  False  3736.2(20)  
dir(21)  1400.58(9)  False  2216.64(16)  
packupwpms(99)  153.08(1)  False  2155.96(13)  
haplotypingpredigrees(100)  1993.62(23)  False  N/A  
log(21)  1522.14(21)  False  2285.61(5)  
preference_planning(29)  22.35(6)  False  N/A  
relationalinference(9)  N/A  False  247.23(1)  
upgradeabilityproblem(100)  N/A  False  12906.39(98) 
ii Performance analysis of IPBMR
In order to analyze what makes IPBMR perform well, we compared it with the following three variants:
IPBMRnoRandom: IPBMR without the randomized variable picking in PB, i.e., the variable with the maximum score is always flipped;
IPBMRnoBreak: IPBMR without the break condition, i.e., complete trajectories are constructed;
IPBR: IPBMR without the mutation operator, i.e., the search restarts from a new random solution at each local optimum.
Table 5 shows the results of IPBMRnoRandom, IPBMRnoBreak, IPBR and IPBMR on the unweighted MAXSAT benchmark of the MSE2016. From Table 5, we can see that IPBMR and IPBMRnoRandom are substantially better than IPBR and IPBMRnoBreak on all subsets, meaning that the mutations of local optimum solutions and the PathBreaking strategy in IPBMR and IPBMRnoRandom are essential to their performance. IPBMR is slightly better than IPBMRnoRandom, meaning that some randomness in the choice of the variable to flip is useful, especially for the less random instances such as 3sat and scpclr, because a purely greedy choice usually leads to a local optimum more easily.
Ins.Type  Subset Name (#inst.)  IPBMRnoRandom  IPBMRnoBreak  IPBR  IPBMR 
avg. time (#win.)  avg. time (#win.)  avg. time (#win.)  avg. time (#win.)  
Random  120v(45)  0.01(45)  0.43(45)  0.01(45)  0.01(45) 
140v(45)  0.01(45)  1.02(45)  0.11(45)  0.01(45)  
160v(45)  0.03(45)  1.39(45)  0.07(45)  0.01(45)  
180v(44)  0.04(44)  3.09(44)  0.18(44)  0.02(44)  
200v(49)  0.07(49)  4.94(49)  0.37(49)  0.10(49)  
70v(45)  0.01(45)  0.53(45)  0.06(45)  0.01(45)  
90v(49)  0.06(49)  1.67(49)  0.15(49)  0.01(49)  
110v(50)  0.05(50)  4.89(50)  0.64(50)  0.06(50)  
3sat(50)  233.39(50)  4316.69(24)  2703.54(44)  115.60(50)  
4sat(32)  50.12(32)  1052.92(30)  516.08(32)  30.02(32)  
Crafted  maxcut1406300.7(50)  0.17(50)  5.63(50)  1.32(50)  0.13(50) 
maxcut1406300.8(50)  0.24(50)  6.51(50)  1.2(50)  0.01(50)  
v140(45)  0.38(45)  6.63(45)  1.56(45)  0.14(45)  
v160(45)  0.77(45)  12.65(45)  4.25(45)  0.33(45)  
v180(45)  0.52(45)  14.59(45)  6.74(45)  0.52(45)  
v200(45)  2.00(45)  43.93(45)  17.1(45)  0.82(45)  
v220(45)  1.84(45)  79.37(45)  22.02(45)  0.99(45)  
dimacsmod(62)  0.01(62)  0.02(62)  0.01(62)  0.01(62)  
spinglass(5)  0.43(5)  5.18(5)  1.76(5)  0.01(5)  
scpclr(4)  759.21(3)  550.13(1)  353.41(0)  225.31(3)  
scpcyc(6)  392.24(4)  426.16(1)  176.46(1)  391.57(4) 
We now show how the mutations of local optimum solutions and the PathBreaking strategy improve the performance of IPBMR and IPBMRnoRandom. Recall that IPBMR, IPBR, IPBMRnoBreak and IPBMRnoRandom all iteratively call the PB function (Algorithm 1), each call returning a solution that falsifies some clauses. We ran each of the four solvers on six representative instances from the unweighted benchmarks of the MSE2016, using a cutoff time of 300 seconds per instance, and collected the set of solutions returned by the PB function, which we partitioned according to the number of clauses falsified by the solutions. Figure 2 shows, for each solver and each of the six representative instances, the distribution of these solutions over the number of falsified clauses: each point (x, y) in a curve represents the fact that the PB function returned y solutions falsifying x clauses within 300 seconds for the corresponding solver.
Two observations can be made from Figure 2, explaining the performance of IPBMR and IPBMRnoRandom over IPBR and IPBMRnoBreak.

The solutions returned by the PB function falsify substantially fewer clauses in IPBMR and IPBMRnoRandom than in IPBR and IPBMRnoBreak, because the mutation and PathBreaking strategies allow IPBMR and IPBMRnoRandom to focus only on high-quality solutions.

With the same cutoff time of 300 seconds, IPBMR and IPBMRnoRandom call the PB function much more often than IPBR and IPBMRnoBreak, meaning that the PB function is executed much faster in IPBMR and IPBMRnoRandom, so that they can explore many more solutions returned by the PB function than IPBR and IPBMRnoBreak. This observation can be explained as follows. On the one hand, the PathBreaking strategy makes the PB function carry out fewer calculations in IPBMR and IPBMRnoRandom than in IPBMRnoBreak. On the other hand, although IPBR also uses the PathBreaking strategy, it restarts the search process each time it encounters a local optimum, while IPBMR and IPBMRnoRandom apply the mutation strategies to the local optimum solution, keeping part of its good quality. In other words, while IPBR works on a randomly generated solution, IPBMR and IPBMRnoRandom work on a mutated local optimum solution. The difference is that the breaking condition in the PB function can be satisfied earlier when working on a mutated local optimum solution than on a randomly generated solution. Consequently, the calculations of the PB function are less time-consuming in IPBMR and IPBMRnoRandom than in IPBR.
V Conclusions and Future Work
We proposed a new effective local search algorithm called IPBMR for the MAXSAT, based on the classical PathRelinking method. The performance of IPBMR comes from a careful combination of three components:

A PathBreaking strategy, which significantly increases the probability of finding a good solution by avoiding the exploration of unpromising regions of the search space.

A new variable picking heuristic to generate paths between two elite solutions, which helps avoid premature local optima.

Weak and strong mutation strategies for local optimum solutions, which keep part of the good properties of a local optimum solution, so that the search is diversified while staying in promising regions of the search space; this is better than a complete restart of the search.
We conducted an in-depth empirical investigation to identify and explain the effect of the three components, and compared IPBMR with two state-of-the-art local search algorithms, CCLS and Swccams, which do not use PathRelinking but focus on falsified clauses to pick the next variable to flip. Experimental results show that IPBMR significantly outperforms CCLS and Swccams, indicating that the PathRelinking method reinforced with the three components is very effective for MAXSAT.
The mutation strategies of IPBMR, which aim at diversifying the search but keeping good properties of a local optimum solution, are based on flipping a randomly chosen subset of variables in the solution. In the future, we plan to use some machine learning approach to accurately identify the good properties in the local optimum solution, so that the mutation can better keep these good properties. We believe that this is a promising direction to improve the performance of IPBMR on industrial MAXSAT instances.
VI Acknowledgement
This work was supported by the National Natural Science Foundation of China (grants 61472147, 61772219 and 61602196).
References

[1]
Abramé, A., and Habet, D.
Local maxresolution in branch and bound solvers for maxsat.
In
Tools with Artificial Intelligence (ICTAI), 2014 IEEE 26th International Conference on
(2014), IEEE, pp. 336–343.  [2] Abramé, A., Habet, D., and Toumi, D. Improving configuration checking for satisfiable random ksat instances. Annals of Mathematics and Artificial Intelligence 79, 13 (2017), 5–24.
[3] Ansótegui, C., and Gabàs, J. WPM3: An (in)complete algorithm for weighted partial MaxSAT. Artificial Intelligence 250 (2017), 37–57.
[4] Ansótegui, C., Gabàs, J., and Levy, J. Exploiting subproblem optimization in SAT-based MaxSAT algorithms. Journal of Heuristics 22, 1 (2016), 1–53.
[5] Ansótegui, C., Giráldez-Cru, J., and Levy, J. The community structure of SAT formulas. In International Conference on Theory and Applications of Satisfiability Testing (2012), Springer, pp. 410–423.
[6] Argelich, J., Li, C. M., Manyà, F., and Planes, J. MaxSAT 2016: Eleventh MaxSAT evaluation. http://maxsat.ia.udl.cat/introduction/. Accessed June 6, 2018.
[7] Audemard, G., and Simon, L. Predicting learnt clauses quality in modern SAT solvers. In IJCAI (2009), vol. 9, pp. 399–404.
[8] Biere, A. Adaptive restart strategies for conflict driven SAT solvers. In International Conference on Theory and Applications of Satisfiability Testing (2008), Springer, pp. 28–33.
[9] Borchers, B., and Furman, J. A two-phase exact algorithm for MaxSAT and weighted MaxSAT problems. Journal of Combinatorial Optimization 2, 4 (1998), 299–306.
[10] Cai, S., and Luo, C. Swcca-ms. http://maxsat.ia.udl.cat/solvers/7/Swcca_ms%20binary%20for%20MSE2016.tar.gz201603280928. Accessed June 6, 2018.
[11] Cai, S., and Su, K. Configuration checking with aspiration in local search for SAT. In AAAI (2012).
[12] Cai, S., and Su, K. Local search for Boolean satisfiability with configuration checking and subscore. Artificial Intelligence 204 (2013), 75–98.
[13] Cai, S., Su, K., and Sattar, A. Local search with edge weighting and configuration checking heuristics for minimum vertex cover. Artificial Intelligence 175, 9–10 (2011), 1672–1696.
[14] Cha, B., Iwama, K., Kambayashi, Y., and Miyazaki, S. Local search algorithms for partial MaxSAT. In AAAI/IAAI (1997), pp. 263–268.
[15] Djenouri, Y., Habbas, Z., and Djenouri, D. Data mining-based decomposition for solving the MaxSAT problem: toward a new approach. IEEE Intelligent Systems 32, 4 (2017), 48–58.
[16] Festa, P., Pardalos, P. M., Pitsoulis, L. S., and Resende, M. G. GRASP with path relinking for the weighted MaxSAT problem. Journal of Experimental Algorithmics (JEA) 11 (2007), 2–4.
[17] Glover, F., Laguna, M., and Martí, R. Fundamentals of scatter search and path relinking. Control and Cybernetics 29, 3 (2000), 653–684.
[18] Gu, J. Local search for satisfiability (SAT) problem. IEEE Transactions on Systems, Man, and Cybernetics 23, 4 (1993), 1108–1129.
[19] Gu, J., Purdom, P. W., Franco, J., and Wah, B. W. Algorithms for the satisfiability (SAT) problem. In Handbook of Combinatorial Optimization. Springer, 1999, pp. 379–572.
[20] Holland, J. H. Genetic algorithms and the optimal allocation of trials. SIAM Journal on Computing 2, 2 (1973), 88–105.
[21] Holland, J. H. Genetic algorithms. Scientific American 267, 1 (1992), 66–73.
[22] Li, C. M., and Anbulagan, A. Heuristics based on unit propagation for satisfiability problems. In Proceedings of the 15th International Joint Conference on Artificial Intelligence - Volume 1 (1997), Morgan Kaufmann Publishers Inc., pp. 366–371.
[23] Li, C. M., and Huang, W. Q. Diversification and determinism in local search for satisfiability. In Proceedings of SAT 2005 (2005), Springer LNCS 3569, pp. 158–172.
[24] Li, C. M., Manyà, F., and Planes, J. New inference rules for MaxSAT. Journal of Artificial Intelligence Research (JAIR) 30 (2007), 321–359.
[25] Li, C. M., and Quan, Z. An efficient branch-and-bound algorithm based on MaxSAT for the maximum clique problem. In AAAI (2010), vol. 10, pp. 128–133.
[26] Liang, J. H., Ganesh, V., Poupart, P., and Czarnecki, K. Exponential recency weighted average branching heuristic for SAT solvers. In AAAI (2016), pp. 3434–3440.
[27] Luo, C., Cai, S., Su, K., Wu, W., and Jie, Z. CCLS. http://maxsat.ia.udl.cat/solvers/5/CCLS201604080726. Accessed June 6, 2018.
[28] Luo, C., Cai, S., Wu, W., Jie, Z., and Su, K. CCLS: an efficient local search algorithm for weighted maximum satisfiability. IEEE Transactions on Computers 64, 7 (2015), 1830–1843.
[29] Luo, M., Li, C.-M., Xiao, F., Manyà, F., and Lü, Z. An effective learnt clause minimization approach for CDCL SAT solvers. In Proceedings of the 26th International Joint Conference on Artificial Intelligence (2017), AAAI Press, pp. 703–711.
[30] Ma, F., Wang, Y., and Hao, J.-K. Path relinking for the vertex separator problem. Expert Systems with Applications 82 (2017), 332–343.
[31] Muritiba, A. E. F., Rodrigues, C. D., and da Costa, F. A. A path-relinking algorithm for the multi-mode resource-constrained project scheduling problem. Computers & Operations Research (2018).
[32] Selman, B., Kautz, H. A., and Cohen, B. Noise strategies for improving local search. In Proceedings of the 12th National Conference on Artificial Intelligence, AAAI'94, Seattle/WA, USA (1994), AAAI Press, pp. 337–343.
[33] Tompkins, D. A., and Hoos, H. H. UBCSAT: An implementation and experimentation environment for SLS algorithms for SAT and MAX-SAT. In International Conference on Theory and Applications of Satisfiability Testing (2004), Springer, pp. 306–320.