1 Introduction
The propositional satisfiability problem (SAT) is one of the most prominent problems in AI. It is relevant both for theory (having been the first problem proven to be NP-complete Cook (1971)) and for practice (having important applications in many fields, such as hardware and software verification Biere et al. (1999); Prasad et al. (2005); Clarke et al. (2004), test-case generation Stephan et al. (1996); Cadar et al. (2008), AI planning Kautz & Selman (1996, 2014), scheduling Crawford & Baker (1994), and graph colouring van Gelder (2002)). The SAT community has a long history of regularly assessing the state of the art via competitions Järvisalo et al. (2012). The first SAT competition dates back to the year 2002 Simon et al. (2005), and the event has been growing over time: in 2014, it had a record participation of 58 solvers by 79 authors in 11 tracks Belov et al. (2014).
In practical applications of SAT, solvers can typically be adjusted to perform well for the specific type of instances at hand, such as software verification instances generated by a particular static checker on a particular software system Babić & Hu (2007), or a particular family of bounded model checking instances Zarpas (2005). To support this type of customization, most SAT solvers already expose a range of command line parameters whose settings substantially affect most parts of the solver. Solvers typically come with robust default parameter settings meant to provide good all-round performance, but it is widely known that adjusting parameter settings to particular target instance classes can yield orders-of-magnitude speedups Hutter et al. (2007); KhudaBukhsh et al. (2009); Tompkins et al. (2011a). Current SAT competitions do not take this possibility of customizing solvers into account, and rather evaluate solver performance with default parameters.
Unlike the SAT competition, the Configurable SAT Solver Challenge (CSSC) evaluates SAT solver performance after application-specific customization, thereby taking into account the fact that effective algorithm configuration procedures can automatically customize solvers for a given distribution of benchmark instances. Specifically, for each type of instances $\Pi$ and each SAT solver $A$, an automated fixed-time offline configuration phase determines parameter settings of $A$ optimized for high performance on $\Pi$. Then, the performance of $A$ on $\Pi$ is evaluated with these settings, and the solver with the best performance wins.
To avoid a potential misunderstanding, we note that for winning the competition, only solver performance after configuration counts, and that it does not matter how much performance was improved by configuration. As a consequence, in principle, even a parameterless solver could win the CSSC if it was very strong: it would not benefit from configuration, but if it nevertheless outperformed all solvers that were specially configured for the instance families in a given track, it would still win that track. (In practice, we have not observed this, since the improvements resulting from configuration tend to be large.)
The competition conceptually most closely related to the CSSC is the learning track of the international planning competition (IPC; see, e.g., the description by Fern et al. (2011) and http://www.cs.colostate.edu/~ipc2014/), which also features an offline time-limited learning phase on training instances from a given planning domain and an online testing phase on a disjoint set of instances from the same domain. The main difference between this IPC learning track and the CSSC (other than their focus on different problems) is that in the IPC learning track every planner uses its own learning method, and the learning methods thus vary between entries. In contrast, in the CSSC, the corresponding customization process is part of the competition setup and uses the same algorithm configuration procedure for each submitted solver. Our approach to evaluating solver performance after configuration could of course be transferred to any other competition. (In fact, the 2014 IPC learning track for non-portfolio solvers was won by FastDownward-SMAC Seipp et al. (2014), a system that employs a combination of general algorithm configuration and a highly parameterized solver framework similar to the one used in the CSSC.)
In the following, we first describe the criteria we used for the design of the CSSC (Section 2). Next, we provide some background on the automated algorithm configuration methods we used when running the competition (Section 3). Then, we discuss the two CSSCs held so far, in 2013 and 2014 (Sections 4 and 5), including the specific benchmarks used, the participating solvers, and the results. We describe two main insights that we obtained from these results:

- In many cases, automated algorithm configuration found parameter settings that performed much better than the solver defaults, in several cases yielding average speedups of several orders of magnitude.

- Some solvers benefited more from automated configuration than others; as a result, the ranking of algorithms after configuration was often substantially different from the ranking based on the algorithm defaults (as, e.g., measured in the SAT competition).
Finally, we analyze various aspects of these results (Section 6) and discuss the implications we see for future algorithm development (Section 7).
2 Design Criteria for the CSSC
We organized the CSSC 2013 and 2014 in coordination with the international SAT competition and presented them in the competition slots at the 2013 and 2014 SAT conferences (as well as in the 2014 FLoC Olympic Games, in which all SAT-related competitions took part). We coordinated solver submission deadlines with the SAT competition to minimize overhead for participants, who could submit their solver to the SAT competition using default parameters and then open up their parameter spaces for the CSSC.
We designed the CSSC to remain close to the international SAT competition's established format; in particular, we used the same general categories (Industrial, crafted, and Random, and, in 2014, also Random SAT, i.e., satisfiable instances only). Furthermore, we used the same input and output formats, the SAT competition's mature code for verifying correctness of solver outputs (only for checking models of satisfiable instances; we did not have a certified UNSAT track), and the same scoring function (number of instances solved, breaking ties by average runtime).
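To make the scoring rule concrete, the following is a minimal sketch (not competition code) of ranking solvers by number of instances solved, breaking ties by average runtime. The input format, with a runtime in seconds per solved instance and None for an unsolved one, is an assumption made purely for illustration.

```python
# Sketch of the CSSC/SAT-competition scoring rule: rank by number of solved
# instances, break ties by average runtime on the solved instances.

def score(runtimes):
    """Return a sort key: more solved is better, then lower average runtime."""
    solved = [t for t in runtimes if t is not None]
    num_solved = len(solved)
    avg_runtime = sum(solved) / num_solved if num_solved else float("inf")
    # Sort descending by #solved, ascending by average runtime.
    return (-num_solved, avg_runtime)

results = {
    "solver_A": [12.3, None, 45.0, 3.1],   # 3 solved, average 20.1 s
    "solver_B": [10.0, 80.0, 20.0, None],  # 3 solved, average 36.7 s
}
ranking = sorted(results, key=lambda s: score(results[s]))
print(ranking)  # ['solver_A', 'solver_B']
```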
The main way in which our setup differed from that of the SAT competition was that we used a relatively small budget of five minutes per solver run. We based this choice partly on the fact that many solvers have runtime distributions with rather long tails (or even heavy tails Gomes et al. (2000)), and that practitioners often use many instances and relatively short runtimes to benchmark solvers for a new application domain. There is also evidence that SAT competition results would remain quite similar if based on shorter runtimes, but not if based on fewer instances Hutter et al. (2010). Therefore, in order to achieve more robust performance within a fixed computational budget, we chose to use many test instances (at least 250 for each benchmark) but relatively low runtime cutoffs per solver run (five minutes). (We also note that a short time limit of five minutes had already been used in the agile track of the 2014 International Planning Competition.) Due to constraints imposed by our computational infrastructure, we used a memory limit of 3 GB for each solver run.
To simulate the situation faced by practitioners with limited computational resources, we limited the computational budget for configuring a solver on a benchmark with a given configuration procedure to two days on 4 or 5 cores (in 2014 and 2013, respectively). Our results are therefore indicative of what could be obtained by performing configuration runs over the weekend on a modern desktop machine.
2.1 Controlled Execution of Solver Runs
Since all configuration procedures ran in an entirely automated fashion, they had to be robust against any kind of solver failure (segmentation faults, unsupported combinations of parameters, wrong results, infinite loops, etc.). We handled all such conditions in a generic wrapper script that used Olivier Roussel’s runsolver tool Roussel (2011) to limit runtime and memory, and counted any errors or limit violations as timeouts at the maximum runtime of 300 seconds. We also kept track of the rich solver runtime data we gathered in our configuration runs and made it publicly available on the competition website.
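The following sketch illustrates the kind of generic wrapper described above. It is not the actual CSSC wrapper (which delegated runtime and memory limits to Olivier Roussel's runsolver); it simply runs a solver command under Python's subprocess timeout and counts crashes and limit violations as timeouts at the 300-second cutoff. The exit-code convention (10 for SAT, 20 for UNSAT) follows the SAT competition.

```python
# Simplified wrapper sketch: any crash, unsupported parameter combination, or
# limit violation is penalized as a timeout at the maximum runtime.
import subprocess
import time

CUTOFF = 300  # seconds

def run_solver(cmd):
    """Run `cmd` (a list of strings); return (status, runtime in seconds)."""
    start = time.monotonic()
    try:
        result = subprocess.run(cmd, capture_output=True, text=True,
                                timeout=CUTOFF)
    except subprocess.TimeoutExpired:
        return "TIMEOUT", CUTOFF
    runtime = time.monotonic() - start
    if result.returncode == 10:
        return "SAT", runtime
    if result.returncode == 20:
        return "UNSAT", runtime
    return "CRASHED", CUTOFF  # segfaults, bad parameters, etc. count as timeouts
```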
2.2 Choice of Configuration Pipeline
To avoid bias arising from our choice of algorithm configuration method, we independently used all three state-of-the-art methods applicable to runtime optimization (ParamILS Hutter et al. (2009), GGA Ansótegui et al. (2009), and SMAC Hutter et al. (2011b)), as described in detail in Section 3. We evaluated the configurations resulting from all configuration runs on the entire training data set and selected the configuration with the best training performance. We then executed only this configuration on the test set to determine the performance of the configured solver. Except where specifically noted otherwise, all performance data we report in this article is for this optimized configuration on previously unseen test instances from the respective benchmark set.
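A minimal sketch of this selection step follows, assuming hypothetical placeholder objects for configurator runs (each exposing its final incumbent configuration) and a training-performance function where lower values are better.

```python
# Sketch: re-evaluate all incumbents on the full training set and keep the best.
# `configurator_runs` and `training_performance` are illustrative placeholders.

def pick_configuration(configurator_runs, training_performance):
    """Select the incumbent with the best (lowest) training performance."""
    incumbents = [run.best_configuration for run in configurator_runs]
    return min(incumbents, key=training_performance)
```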
2.3 Presubmission Bug Fixing
As part of the submission package, we provided solver authors with our configuration pipeline, so that they could run it themselves to identify bugs in their solver before submission (e.g., problems due to the choice of nondefault parameters). We also provided some trivial benchmark sets for this presubmission preparation, which were not part of the competition.
We did not offer a bug fixing phase after solver submission, except that we ran a very simple configuration experiment (10 minutes on trivial instances) to verify that the setup of all participants was correct.
2.4 Choice of Benchmarks
We chose the benchmark families for the CSSC to be relatively homogeneous in terms of the origin and/or construction process of instances in the same family. Typically, we selected benchmark families that are neither too easy (since speedups are less interesting for easy instances), nor too hard (so that solvers could solve a large fraction of instances within the available computational budgets). We aimed for benchmark sets of which at least 20-40% could be solved within the maximum runtime on a recent machine by the default configuration of a SAT solver that would perform reasonably well in the SAT competition. We also aimed for benchmark sets with a sufficient number of instances to safeguard against overtuning; in practice, the smallest datasets we used had 500 instances: 250 for training and 250 for testing.
We did not disclose which benchmark sets we used until the competition results were announced. While we encouraged competition entrants to also contribute benchmarks, we made sure to not substantially favor any solver by using such contributed benchmarks.
3 Automated Algorithm Configuration Procedures
The problem of finding performance-optimizing algorithm parameter settings arises for many computational problems. In recent years, the AI community has developed several dedicated systems for this general algorithm configuration problem Hutter et al. (2009); Ansótegui et al. (2009); López-Ibáñez et al. (2011); Hutter et al. (2011b).
We now describe this problem more formally. Let $A$ be an algorithm with parameters $p_1, \dots, p_n$ that have domains $D_1, \dots, D_n$. Parameters can be real-valued (with domains $[l, u] \subseteq \mathbb{R}$, where $l$ and $u$ denote lower and upper bounds), integer-valued (with domains $[l, u] \cap \mathbb{Z}$), or categorical (with finite unordered domains, such as {red, blue, green}). Parameters can also be conditional on an instantiation of other (so-called parent) parameters; as an example, consider the parameters of a heuristic mechanism $h$, which are completely ignored unless $h$ is chosen to be used by means of another, categorical parameter. Finally, some combinations of parameter instantiations can be labelled as forbidden. Algorithm $A$'s configuration space $\Theta$ then consists of all possible combinations of parameter values: $\Theta = D_1 \times \dots \times D_n$ (excluding forbidden combinations). We refer to elements of this configuration space as parameter configurations, or simply configurations. Given a benchmark set $\Pi$ of problem instances and a performance metric $m(\theta, \pi)$ capturing the performance of configuration $\theta \in \Theta$ on problem instance $\pi \in \Pi$, the algorithm configuration problem then aims to find a configuration $\theta^* \in \Theta$ that minimizes $m$ on average over $\Pi$, i.e., that minimizes
$$f(\theta) = \frac{1}{|\Pi|} \sum_{\pi \in \Pi} m(\theta, \pi).$$
(An alternative definition considers the optimization of expected performance across a distribution of instances rather than average performance across a set of instances Hutter et al. (2009). What we consider here can be seen as a special case where the distribution is uniform over a given set of training instances. It is also possible to optimize performance metrics other than mean performance across instances, but mean performance is by far the most widely used option.)
In the CSSC, the specific metric we optimized was penalized average runtime (PAR10), which counts runs that exceed a maximal cutoff time $\kappa_{max}$ without solving the given instance as $10 \cdot \kappa_{max}$. We terminated individual solver runs as unsuccessful after $\kappa_{max} = 300$ seconds.
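As a concrete illustration, the following minimal sketch computes PAR10 for a list of runtimes, with None denoting a run that hit the cutoff without solving the instance; this list format is an assumption made for illustration only.

```python
# PAR10 sketch: timeouts are counted as 10 * cutoff (here 10 * 300 s = 3000 s).

def par10(runtimes, cutoff=300.0):
    penalized = [10 * cutoff if t is None else t for t in runtimes]
    return sum(penalized) / len(penalized)

print(par10([1.2, 250.0, None, 4.5]))  # (1.2 + 250 + 3000 + 4.5) / 4 = 813.925
```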
We refer to an instance of the algorithm configuration problem as a configuration scenario and to a method for solving the algorithm configuration problem as a configuration procedure (or a configurator), in order to avoid confusion with the solver to be optimized (which we refer to as the target algorithm) and the problem instances the solver is being optimized for.
Algorithm configuration has been demonstrated to be very effective for optimizing various SAT solvers in the literature. For example, Hutter et al. (2007) configured the solver Spear Babić & Hutter (2007) on formal verification instances, achieving a 500-fold speedup on software verification instances generated with the static checker Calysto Babić & Hu (2007) and a 4.5-fold speedup on IBM bounded model checking instances by Zarpas (2005). Algorithm configuration has also enabled the development of general frameworks for stochastic local search SAT solvers that can be automatically instantiated to yield state-of-the-art performance on new types of instances; examples of such frameworks are SATenstein KhudaBukhsh et al. (2009) and Captain Jack Tompkins et al. (2011a).
While all of these applications used the local-search-based algorithm configuration method ParamILS Hutter et al. (2009), in the CSSC we wanted to avoid bias that could arise from commitment to one particular algorithm configuration method and thus used all three existing general algorithm configuration methods for runtime optimization: ParamILS, GGA Ansótegui et al. (2009), and SMAC Hutter et al. (2011b). (We did not use the iterated racing method I/F-Race López-Ibáñez et al. (2011), since it does not effectively support runtime optimization and its authors thus discourage its use for this purpose; personal communication with Manuel López-Ibáñez and Thomas Stützle.) We refer the interested reader to Appendix B for details on each of these configurators. Here, we only mention some details that were important for the setup of the CSSC:

- ParamILS does not natively support parameters specified only as real- or integer-valued intervals, but requires all parameter values to be listed explicitly; for simplicity, we refer to the transformation used to satisfy this requirement as discretization (see the sketch after this list). When multiple parameter spaces were available for a solver, we only ran ParamILS on the discretized version, whereas we ran GGA and SMAC on both the discretized and the non-discretized versions.

- ParamILS and SMAC are randomized algorithms and have been shown to benefit substantially from multiple independent runs. Given $k$ cores, the usual approach is simply to execute $k$ independent configurator runs and pick the configuration from the one with the best performance on the training set. GGA, on the other hand, can use multiple cores on a single machine, and in fact requires these to run effectively. Therefore, given $k$ available cores per configuration approach, we used $k$ independent runs each of ParamILS and SMAC, and one run using all $k$ cores for GGA.

- GGA could not handle the complex parameter conditionalities found in some solvers; for those solvers, we only ran ParamILS and SMAC.
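The discretization mentioned in the first item above can be illustrated by the following sketch, which replaces a numerical range by an explicit grid of candidate values; the example parameter names, grid sizes, and scale choices are hypothetical and not taken from any submitted solver.

```python
# Sketch of discretizing numerical parameter ranges for ParamILS.
import numpy as np

def discretize(lower, upper, num_values=10, log_scale=False):
    """Return an explicit list of candidate values for a numerical parameter."""
    if log_scale:
        return list(np.logspace(np.log10(lower), np.log10(upper), num_values))
    return list(np.linspace(lower, upper, num_values))

# Hypothetical examples: a decay-like parameter in [0.90, 0.999] and a
# restart-interval-like parameter spanning two orders of magnitude.
decay_values = discretize(0.90, 0.999, num_values=8)
restart_values = [int(v) for v in discretize(100, 10000, 6, log_scale=True)]
```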
4 The Configurable SAT Solver Challenge 2013
Table 1: Benchmark families used in the CSSC 2013 (number of training and test instances, and references; see Appendix A for details).

Benchmark     | #Train | #Test | Reference
SWV           | 302    | 302   | Babić & Hu (2008)
IBM           | 383    | 302   | Zarpas (2005)
Circuit Fuzz  | 299    | 302   | Brummayer et al. (2012)
BMC           | 807    | 302   | Biere et al. (2008)
GI            | 1032   | 351   | Mugrauer & Balint (2013a); Torán (2013)
LABS          | 350    | 351   | Mugrauer & Balint (2013b)
K3            | 300    | 250   | Bayless et al. (2014)
unifk5        | 300    | 250   | –
5sat500       | 250    | 250   | Tompkins et al. (2011a)
The first CSSC^{4}^{4}4http://www.cs.ubc.ca/labs/beta/Projects/CSSC2013/ was held in 2013. It featured three tracks mirroring those of the SAT competition: Industrial SAT+UNSAT, crafted SAT+UNSAT, and Random SAT+UNSAT. Table 1 lists the benchmark families we used in each of these tracks, all of which are described in detail in A. Within each track, we used the same number of test instances for each benchmark family, thereby weighting each equally in our analysis.
4.1 Participating Solvers and Their Parameters
Table 2: Solvers that participated in the CSSC 2013; the structure and size of each solver's configuration space is described in the text below.

Solver        | Reference
Gnovelty+GCa  | Duong & Pham (2013)
Gnovelty+GCwa | Duong & Pham (2013)
Gnovelty+PCL  | Duong & Pham (2013)
Simpsat       | Han & Jiang (2012a)
Sat4j         | Berre & Parrain (2010)
Solver43      | Balabanov (2013)
Forlnodrup    | Soos et al. (2009)
Clasp2.1.3    | Gebser et al. (2012)
Riss3g        | Manthey (2013)
Riss3gExt     | Manthey (2013)
Lingeling     | Biere (2013)
Table 2 summarizes the solvers that participated in the CSSC 2013; information on their configuration spaces is given below. The eleven submitted solvers ranged from complete solvers based on conflict-driven clause learning (CDCL; Bayardo Jr. & Schrag (1997)) to stochastic local search (SLS; Hoos & Stützle (2004)) solvers. The degree of parameterization varied substantially across these submitted solvers, from 2 to 241 parameters. We briefly discuss the main features of the solvers' parameter configuration spaces, ordering solvers by their number of parameters.
Gnovelty+GCa and Gnovelty+GCwa Duong & Pham (2013) are closely related SLS solvers. Both have two numerical parameters: the probability of selecting false clauses randomly and the probability of smoothing clause weights. The parameters were pre-discretized by the solver developer to 11 and 10 values, yielding 110 possible combinations.
Gnovelty+PCL Duong & Pham (2013) is an SLS solver with five parameters: one binary parameter (determining whether the stagnation path is dynamic or static) and four numerical parameters: the length of the stagnation path, the size of the time window storing stagnation paths, the probability of smoothing stagnation weights, and the probability of smoothing clause weights. All numerical parameters were pre-discretized to ten values each by the solver developer, yielding 20 000 possible combinations.
Simpsat Han & Jiang (2012a) is a CDCL solver based on Cryptominisat Soos (2014), which adds additional strategies for explicitly handling XOR constraints Han & Jiang (2012b). It has five numerical parameters that govern both these XOR constraint strategies and the frequency of random decisions. All parameters were pre-discretized by the solver developer, yielding 2 400 possible combinations.
Sat4j Berre & Parrain (2010) is a full-featured library of solvers for Boolean satisfiability and optimization problems. For the contest, it applied its default CDCL SAT solver with ten exposed parameters: four categorical parameters deciding between different restart strategies, phase selection strategies, simplifications, and cleaning; and six numerical parameters pre-discretized by its developer.
Solver43 Balabanov (2013) is a CDCL solver with 12 parameters: three categorical parameters concerning sorting heuristics used in bounded variable elimination, in definitions and in adding blocked clauses; and nine numerical parameters concerning various frequencies, factors, and limits. All parameters were pre-discretized by the solver developer.
Forlnodrup Soos et al. (2009) is a CDCL solver with 44 parameters. Most notably, these control variable selection, Boolean propagation, restarts, and learned clause removal. About a third of the parameters are numerical (particularly most of those concerning restarts and learned clause removal); all parameters were pre-discretized by the solver developer.
Clasp2.1.3 Gebser et al. (2012) is a solver for the more general answer set programming (ASP) problem, but it can also solve SAT, MAXSAT and PB problems. As a SAT solver, Clasp2.1.3 is a CDCL solver whose parameters control preprocessing, the variable selection heuristic, the restart policy, the deletion policy, and (with 10 further parameters) a variety of other mechanisms. The configuration space is highly conditional, with several top-level parameters enabling or disabling certain strategies. Clasp2.1.3 exposes both a mixed continuous/discrete parameter configuration space and a manually-discretized one.
Riss3g Manthey (2013) is a CDCL solver with 125 parameters. These include 6 numerical parameters from MiniSAT Eén & Sörensson (2003), 10 numerical parameters from Glucose Audemard & Simon (2009), 17 mostly numerical Riss3g parameters, and 92 parameters controlling preprocessing/inprocessing performed by the integrated Coprocessor Manthey (2012). The inprocessor parameters resemble those in Lingeling Biere (2013), emphasizing blocked clause elimination Järvisalo et al. (2010), bounded variable addition Manthey et al. (2013), and probing Lynce & Marques-Silva (2003). About 50 of the parameters are Boolean, and most others are numerical parameters pre-discretized by the solver developer. The parameter space is highly conditional: the inprocessor parameters depend on a switch that turns inprocessing on, alongside various other dependencies, and there are only 18 unconditional parameters. Finally, there are also seven forbidden parameter combinations that ensure various switches are turned on if inprocessing is used.
Riss3gExt Manthey (2013) is an experimental extension of Riss3g. It exposes all of the parameters previously discussed for Riss3g, along with an additional 11 Riss3g parameters and 57 inprocessing parameters. Its developer implemented all of these extensions in one week and did not have time for extensive testing before the CSSC; therefore, he submitted Riss3gExt as closed source, making it ineligible for medals. We discuss the results of this closed-source solver separately, in Appendix C.
Lingeling Biere (2013) is a CDCL solver with 241 parameters (making it the solver with the largest configuration space in the CSSC 2013). 102 of these parameters are categorical, and the remaining 139 are integer-valued (76 of them with the trivial upper bound of max-integer). Lingeling parameterizes many details of the solution process, including probing and lookahead (about 25 mostly numerical parameters), blocked clause elimination and bounded variable elimination (about 20 mostly categorical parameters each), glue clauses (about 15 mostly numerical parameters), and a host of other mechanisms parameterized by about 5-10 parameters each. Lingeling exposes its full parameter space, a discretized version of all parameters, and a subspace consisting of only the 102 categorical parameters.
4.2 Configuration Pipeline
We executed this competition on the QDR partition of the Compute Canada Westgrid cluster Orcinus. Each node in this cluster was provisioned with 24 GB memory and two 6core, 2.66 GHz Intel Xeon X5650 CPUs with 12 MB L2 cache each, and ran Red Hat Enterprise Linux Server 5.5 (kernel 2.6.18, glibc 2.5).
In this first edition of the CSSC, we were unfortunately unable to run GGA. This was because it requires multiple cores for effective runtime minimization, and the respective multiplecore jobs we submitted on the Orcinus cluster were stuck in the queue for months without getting started. (Singlecore runs, on the other hand, were often scheduled within minutes.)
We thus limited ourselves to using ParamILS for the discretized parameter space of each of the 11 solvers and SMAC for each of the parameter spaces that solver authors submitted (as discussed above, 9 submissions with one parameter space, 1 submission with two, and 1 submission with three, i.e., 14 in total). For each of the nine benchmark families, this gave rise to 11 configuration scenarios for ParamILS and 14 for SMAC, for a total of 225 configuration scenarios. Since our budget for each configuration procedure was two CPU days on five cores (five independent runs of ParamILS and SMAC, respectively), the competition’s configuration phase required a total of 2250 CPU days (just over 6 CPU years). Thanks to a special allocation on the Orcinus cluster, we were able to complete this phase within a week.
Following standard practice, we then evaluated the configurations resulting from all configuration runs on the entire training data set and selected the configuration with the best training performance. We then executed only this configuration on the test set to assess the performance of the configured solver. This evaluation phase required much less time than the configuration phase.
We note that all scripts we used for performing the configuration and analysis experiments were written in Ruby and are available for download on the competition website.
4.3 Results
Table 3: Winners of the three tracks of the CSSC 2013.

Rank | Industrial SAT+UNSAT | crafted SAT+UNSAT | Random SAT+UNSAT
1    | Lingeling            | Clasp2.1.3        | Clasp2.1.3
2    | Riss3g               | Forlnodrup        | Lingeling
3    | Solver43             | Lingeling         | Riss3g
For each of the three tracks of the CSSC 2013, we configured each of the eleven submitted solvers for each of the benchmark families within the track and aggregated results across the respective test instances. We show the winners in Table 3 and discuss the results for each track in the following sections. Additional details, tables, and figures are provided in an accompanying technical report Hutter et al. (2014).
We remind the reader that the CSSC score only depends on how well the configured solver did and not on the difference between default and configured performance. We nevertheless still cover default performance prominently in the following results, in order to emphasize the impact configuration had and the difference between the CSSC and standard solver competitions (e.g., the SAT competition).
4.3.1 Results of the Industrial SAT+UNSAT Track
[Table 4: Results for the CSSC 2013 Industrial SAT+UNSAT track.]
Our Industrial SAT+UNSAT track consisted of the four industrial benchmarks detailed in Appendix A.1: Bounded Model Checking 2008 (BMC) Biere (2007), Circuit Fuzz Brummayer et al. (2012), Hardware Verification (IBM) Zarpas (2005), and SWV Babić & Hu (2008).
Figure 5 visualizes the results of the configuration process for the winning solver Lingeling on these four benchmark sets. It demonstrates that even Lingeling, a highly competitive solver in terms of default performance, can be configured for improved performance on a wide range of benchmarks. We note that for the easy benchmark SWV, configuration sped up Lingeling by a factor of about 20 (average runtime 3.3s vs. 0.16s), and that for the harder Circuit Fuzz instances, it nearly halved the number of timeouts (39 vs. 20). The improvements were smaller for more traditional hardware verification instances (IBM and BMC) similar to those used to determine Lingeling's default parameter settings.
Table 4 summarizes the results of the ten solvers that were eligible for medals. From this table, we note that, like Lingeling, many other solvers benefited from configuration. Indeed, some solvers (in particular Forlnodrup and Clasp2.1.3) benefited much more from configuration on the BMC instances, largely because their default performance was worse on this benchmark. On the other hand, Riss3g featured stronger default performance than Lingeling but did not benefit as much from configuration.
Table 4 also aggregates results across the four benchmark families to yield the overall results for the Industrial SAT+UNSAT track. These results show that many solvers benefited substantially from configuration, and that some benefited more than others, causing the CSSC ranking to differ substantially from the ranking according to default solver performance; for instance, based on default performance, the overall winning solver, Lingeling, would have only ranked fourth.
4.3.2 Results of the crafted SAT+UNSAT Track
[Table 5: Results for the CSSC 2013 crafted SAT+UNSAT track.]
The crafted SAT+UNSAT track consisted of the two crafted benchmarks detailed in Appendix A.2: Graph Isomorphism (GI) and Low Autocorrelation Binary Sequence (LABS).
Figure 8 visualizes the improvements algorithm configuration yielded for the best-performing solver Clasp2.1.3 on these benchmarks. Improvements were particularly large on the GI instances, where algorithm configuration decreased the number of timeouts from 42 to 6. Table 5 summarizes the results we obtained for all solvers on these benchmarks, showing that configuration also substantially improved the performance of many other solvers. The table also aggregates results across both benchmark families to yield overall results for the crafted SAT+UNSAT track. While Forlnodrup showed the best default performance and benefited substantially from configuration (#timeouts reduced from 135 to 98), Clasp2.1.3 improved even more (#timeouts reduced from 139 to 96).
4.3.3 Results of the Random SAT+UNSAT Track

[Table 6: Results for the CSSC 2013 Random SAT+UNSAT track.]
The Random SAT+UNSAT track consisted of three random benchmarks detailed in Appendix A.3: 5sat500, K3, and unifk5. The instances in 5sat500 were all satisfiable, those in unifk5 all unsatisfiable, and those in K3 were mixed.
Table 6 summarizes the results for these benchmarks. It shows that the unifk5 benchmark set was very easy for complete solvers (although configuration still yielded up to 4-fold speedups), that the K3 benchmark was also quite easy for the best solvers, and that only the SLS solvers could tackle benchmark 5sat500, with configuration making a big difference to performance.
Here again, our aggregate results demonstrate that rankings were substantially different between the default and configured versions of the solvers: the three solvers with the best default performance were ranked 4th to 6th in the CSSC, and vice versa. Figure 12 visualizes the very substantial speedups achieved by configuration for the winning solver Clasp2.1.3 on K3 and unifk5, and for the SLS solver Gnovelty+GCa on 5sat500.
5 The Configurable SAT Solver Challenge 2014
Table 7: Benchmark families used in the CSSC 2014 (number of training and test instances, and references; see Appendix A for details).

Benchmark     | #Train | #Test | Reference
IBM           | 383    | 302   | Zarpas (2005)
Circuit Fuzz  | 299    | 302   | Brummayer et al. (2012)
BMC           | 604    | 302   | Biere et al. (2008)
GI            | 1032   | 351   | Mugrauer & Balint (2013a); Torán (2013)
LABS          | 350    | 351   | Mugrauer & Balint (2013b)
NRooks        | 484    | 351   | Manthey & Steinke (2014)
K3            | 300    | 250   | Bayless et al. (2014)
3cnf          | 500    | 250   | Bebel & Yuen (2013)
unifk5        | 300    | 250   | –
3sat1k        | 250    | 250   | Tompkins et al. (2011a)
5sat500       | 250    | 250   | Tompkins et al. (2011a)
7sat90        | 250    | 250   | Tompkins et al. (2011a)
The second CSSC (http://aclib.net/cssc2014/) was held in 2014. Compared to the inaugural CSSC in 2013, we improved the competition design in several ways:

- We used a different computer cluster (the META cluster at the University of Freiburg, whose compute nodes contained 64 GB of RAM and two 2.60 GHz Intel Xeon E5-2650v2 8-core CPUs with 20 MB L2 cache each, running Ubuntu 14.04 LTS, 64-bit), enabling us to run GGA as one of the configuration procedures.

- We added a Random SAT track to facilitate comparisons of stochastic local search solvers.

- We let solver authors decide which tracks their solver should run in.

- For fairness, we performed the same number of configuration experiments for each solver. (This is in contrast to 2013, where we performed the same number of configuration runs for every configuration space of every solver, which led to a larger combined configuration budget for solvers submitted with multiple configuration spaces.)

- We kept track of all of the (millions of) solver runs performed during the configuration process and made all information about errors available to solver developers after the competition.
5.1 Participating Solvers
Table 8: Solvers that participated in the CSSC 2014, the tracks they were submitted to, and references; the structure and size of each solver's configuration space is described in the text below.

Solver            | Categories            | Reference
DCCASat+marchrw   | Random                | Luo et al. (2014)
CSCCSat2014       | Random SAT            | Luo et al. (2012); Luo et al. (2014)
ProbSAT           | Random SAT            | Balint & Schöning (2012)
MinisatHACK999ED  | All categories        | Oh (2014)
YalSAT            | Crafted & Random SAT  | Biere (2014)
Cryptominisat     | Industrial & Crafted  | Soos (2014)
Clasp3.0.4p8      | All categories        | Gebser et al. (2012)
Riss4.27          | All but Random SAT    | Manthey (2014)
SparrowToRiss     | All categories        | Balint & Manthey (2014)
Lingeling         | All categories        | Biere (2014)
The ten solvers that participated in the CSSC 2014 are summarized in Table 8; they included CDCL, SLS and hybrid solvers. These solvers differed substantially in their degree of parameterization, with the number of parameters ranging from 1 to 323. We briefly discuss the main features of each solver’s parameter configuration space, ordering solvers by their number of parameters.
DCCASat+marchrw Luo et al. (2014) combines the SLS solver DCCASat with the CDCL solver marchrw. It was submitted to the Random SAT+UNSAT track. Its only (continuous) parameter is the time ratio of the SLS solver. This parameter was prediscretized to nine values.
CSCCSat2014 Luo et al. (2012); Luo et al. (2014) is an SLS solver based on configuration checking and dynamic local search methods. It was submitted to the Random SAT track. It features 3 continuous parameters that were prediscretized to 7, 9, and 9 values each, giving rise to a total configuration space of 567 possible parameter configurations. The parameters control the weighting of the dynamic local search part and the probabilities for the linear make functions used in the random walk steps.
ProbSAT Balint & Schöning (2012) is a simple SLS solver based on probability distributions that are built from simple features, such as the make and break of variables. ProbSAT's 9 parameters control the type and the parameters of the probability distribution, as well as the type of restart. ProbSAT was submitted to the Random SAT track.

MinisatHACK999ED Oh (2014) is a CDCL solver; it was submitted to all tracks. It has one categorical parameter (whether or not to use the Luby restart strategy) and 9 numerical parameters fine-tuning the Luby and geometric restart strategies, as well as controlling clause removal and the treatment of glue clauses. 3 of these 9 numerical parameters are conditional on the choice of the Luby restart strategy, and all numerical parameters were pre-discretized by the solver developer. There are also 3 forbidden parameter combinations derived from a weak inequality constraint between two parameter values.
YalSAT Biere (2014) is an SLS solver; it was submitted to the crafted SAT+UNSAT and Random SAT tracks. It has 27 parameters, which parameterize the solver's restart component (7 parameters) among many other components. 11 of the 27 parameters are numerical, with 6 of them having a trivial upper bound of max-integer.
Cryptominisat Soos (2014) is a CDCL solver; it was submitted to the Industrial SAT+UNSAT and crafted SAT+UNSAT tracks. It has 29 parameters that control restarts (6 mostly numerical parameters), clause removal (7 mostly numerical parameters), variable branching and polarity (3 parameters each), simplification (5 parameters), and several other mechanisms. 2 of the numerical parameters further parameterize the blocking restart mechanism and are thus conditional on that mechanism being selected.
Clasp3.0.4p8 Gebser et al. (2012) is a solver for the more general answer set programming (ASP) problem, but it can also solve SAT, MAXSAT and PB problems. It is fundamentally similar to the solver submitted in 2013; changes in the new version focused on the ASP solving part rather than the SAT solving part. As a SAT solver, Clasp3.0.4p8 has 75 parameters, which control preprocessing, variable selection, the restart policy, the deletion policy and miscellaneous other mechanisms. The configuration space is highly conditional, with several top-level parameters enabling or disabling certain strategies. Finally, there are also forbidden parameter combinations that prevent certain combinations of deletion strategies. Clasp3.0.4p8 exposes both a mixed continuous/discrete parameter configuration space and a manually-discretized one. It was submitted to all tracks.
Riss4.27 Manthey (2014) is a CDCL solver submitted to all tracks except Random SAT. Compared to the 2013 version Riss3g, it almost doubled its number of parameters, yielding 214 parameters organized into 121 simplification and 93 search parameters. In particular, it added many new preprocessing and inprocessing techniques, including XOR handling (via Gaussian elimination Han & Jiang (2012b)) and extraction of cardinality constraints Biere et al. (2014). Roughly half of the simplification parameters and a third of the search parameters are categorical (in both cases most of the categorical parameters are binary). The simplification parameters comprise about 20 Boolean switches for preprocessing techniques and about 100 inprocessor parameters, prominently including blocked clause elimination, bounded variable addition, equivalence elimination Gelder (2005), numerical limits, probing, symmetry breaking, unhiding Heule et al. (2011), Gaussian elimination, covered literal elimination Manthey & Philipp (2014), and even some stochastic local search. The search parameters parameterize a wide range of mechanisms, including variable selection, clause learning and removal, restarts, clause minimization, restricted extended resolution, and interleaved clause strengthening.
SparrowToRiss Balint & Manthey (2014) combines the SLS solver Sparrow with the CDCL solver Riss4.27 by first running Sparrow, followed by Riss4.27. It was submitted to all tracks. SparrowToRiss's configuration space is that of Riss4.27 plus 6 Sparrow parameters and 2 parameters controlling when to switch from Sparrow to Riss4.27: the maximal number of flips for Sparrow (by default 500 million) and the CPU time for Sparrow (by default 150 seconds). Also, in contrast to Riss4.27, SparrowToRiss does not pre-discretize its numerical parameters, but expresses them as 36 integer and 16 continuous parameters.
Lingeling Biere (2014) is a successor to the 2013 version; it was submitted to the Industrial SAT+UNSAT and crafted SAT+UNSAT tracks. Compared to 2013, Lingeling's parameter space grew by roughly a third, to a total of 323 parameters (meaning that, again, Lingeling was the solver with the most parameters). As in 2013, roughly 40% of these parameters were categorical and the rest integer-valued (many with a trivial upper bound of max-integer). Notable groups of parameters introduced in the 2014 version include additional preprocessing/inprocessing options and new restart strategies.
5.2 Configuration Pipeline
In the CSSC 2014, we used the configurators ParamILS, GGA, and SMAC. For each benchmark and solver, we ran GGA and SMAC on the solver's full configuration space, which could contain an arbitrary combination of numerical and categorical parameters. We also ran all configurators on a discretized version of the configuration space (automatically constructed unless provided by the solver authors), yielding a total of five configuration approaches: ParamILS-discretized, GGA, GGA-discretized, SMAC, and SMAC-discretized. GGA could not handle the complex conditionals of some solvers; therefore, for these solvers we only ran ParamILS and the two SMAC variants.
Due to the cost of running a third configurator on nearly every configuration scenario, we reduced the budget for each configuration approach from two CPU days on five cores in CSSC 2013 to two CPU days on four cores in CSSC 2014. In the case of ParamILS and SMAC, as in 2013, we used these four cores to perform four independent 2-day configurator runs. In the case of GGA, we performed one 2-day run using all four cores. We evaluated the configurations resulting from each of the 14 configuration runs (4 ParamILS-discretized, 4 SMAC-discretized, 4 SMAC, 1 GGA-discretized, and 1 GGA) on the entire training data set of the benchmark at hand and selected the configuration with the best performance. We then executed only this configuration on the benchmark's test set to determine the performance of the configured solver.
In the four tracks of the CSSC (Industrial SAT+UNSAT, crafted SAT+UNSAT, Random SAT+UNSAT, Random SAT) we had 6, 6, 5, and 6 participating solvers, respectively, and since there were three benchmark families per track, we ended up with 69 pairs of solvers and benchmarks to configure them on. For each of these configuration scenarios, each of the 5 configuration approaches above required four cores for 2 days, yielding a total computational expense of 2 760 CPU days (close to 8 CPU years). Thanks to a special allocation on the META cluster at the University of Freiburg, we were able to finish this process within 2 weeks.
We note that all scripts we used for performing the configuration and analysis experiments were written in Python (updated from Ruby in 2013) and are available for download on the competition website.
5.3 Results
Table 9: Winners of the four tracks of the CSSC 2014.

Rank | Industrial SAT+UNSAT | crafted SAT+UNSAT | Random SAT+UNSAT  | Random SAT
1    | Lingeling            | Clasp3.0.4p8      | Clasp3.0.4p8      | ProbSAT
2    | MinisatHACK999ED     | Lingeling         | DCCASat+marchrw   | SparrowToRiss
3    | Clasp3.0.4p8         | Cryptominisat     | MinisatHACK999ED  | CSCCSat2014
For each of the four tracks of CSSC 2014, we configured the solvers submitted to the track on each of the three benchmark families from that track and aggregated results across the respective test instances. We show the winners for each track in Table 9 and discuss the results in the following sections. Additional details, tables, and figures are provided in an accompanying technical report Hutter et al. (2014).
5.3.1 Results of the Industrial SAT+UNSAT Track
[Table 10: Results for the CSSC 2014 Industrial SAT+UNSAT track.]
The Industrial SAT+UNSAT track consisted of three industrial benchmarks detailed in Appendix A.1: BMC Biere (2007), Circuit Fuzz Brummayer et al. (2012), and IBM Zarpas (2005). Figure 16 visualizes the results of applying algorithm configuration to the winning solver Lingeling on these three benchmark sets. It shows results similar to those in the Industrial SAT+UNSAT track of CSSC 2013: Lingeling's strong default performance on 'typical' hardware verification benchmarks (IBM and BMC) could only be improved slightly by configuration, but much larger improvements were possible on less standard benchmarks, such as Circuit Fuzz.
Table 10 summarizes the results for all six solvers that participated in the Industrial SAT+UNSAT track. These results demonstrate that, in contrast to Lingeling, several solvers (in particular, Clasp3.0.4p8, Riss4.27, and SparrowToRiss) benefited greatly from configuration on the BMC benchmark, but did not reach Lingeling's performance even after configuration. MinisatHACK999ED performed even better than Lingeling with its default parameters, but did not benefit from configuration as much as Lingeling did (particularly on the Circuit Fuzz benchmark family).
5.3.2 Results of the crafted SAT+UNSAT Track
[Table 11: Results for the CSSC 2014 crafted SAT+UNSAT track.]
The crafted SAT+UNSAT track consisted of the three crafted benchmarks detailed in Appendix A.2: Graph Isomorphism (GI), Low Autocorrelation Binary Sequence (LABS), and NRooks. Figure 20 visualizes the improvements configuration yielded on these benchmarks for the best-performing solver, Clasp3.0.4p8. The effect of configuration was particularly large on the NRooks instances, where it reduced the number of timeouts from 81 to 0. Similar to the results from CSSC 2013, configuration also substantially improved performance on the GI instances, decreasing the number of timeouts from 43 to 9. In contrast to 2013, an unusual effect occurred for Clasp3.0.4p8 on the LABS instances, where the number of timeouts on the test set increased from 87 to 93 as a result of configuration; we study the reasons for this in more detail in Section 6.1.
Table 11 summarizes the results of all solvers on the crafted SAT+UNSAT track, showing that the performance of many other solvers also substantially improved on the benchmarks GI and NRooks, and only mildly (if at all) on the LABS benchmark. The aggregate results across these 3 benchmark families show that Lingeling had the best default performance, but only benefited mildly from configuration (#timeouts reduced from 115 to 109), whereas Clasp3.0.4p8 benefited much more from configuration and thus outperformed Lingeling after configuration (#timeouts reduced from 211 to 102). Once again, we note that the winning solver only showed mediocre performance based on its default: Clasp3.0.4p8 would have ranked 5th in a comparison based on default performance.
5.3.3 Results of the Random SAT+UNSAT Track
[Table 12: Results for the CSSC 2014 Random SAT+UNSAT track.]
The Random SAT+UNSAT track consisted of three random benchmarks detailed in Appendix A.3: 3cnf, K3, and unifk5. The instances in unifk5 are all unsatisfiable, while the other two sets contain both satisfiable and unsatisfiable instances. Figure 24 visualizes the improvements achieved by configuration on these benchmarks for the best-performing solver Clasp3.0.4p8. Clasp3.0.4p8 benefited most from configuration on benchmark 3cnf, where configuration reduced the number of timeouts from 18 to 0. For the other benchmarks, it could already solve all instances with its default parameter configuration, but configuration helped reduce its average runtime by factors of 3 (K3) and 2 (unifk5), respectively. Table 12 summarizes the results of all solvers for these benchmarks. We note that the solver DCCASat+marchrw showed the best default performance, and that after configuration, it also solved all instances from the three benchmark sets, only ranking behind Clasp3.0.4p8 because the latter solved these instances faster.
5.3.4 Results of the Random SAT Track
[Table 13: Results for the CSSC 2014 Random SAT track.]
The Random SAT track consisted of the three benchmarks detailed in Appendix A.3: 3sat1k, 5sat500 and 7sat90. Figure 28 visualizes the improvements configuration achieved on these benchmarks for the best-performing solver ProbSAT. ProbSAT benefited most from configuration on benchmark 5sat500: its default did not solve a single instance within the maximum runtime of 300 seconds, while its configured version solved all instances in an average runtime below 2 seconds. Since timeouts at 300s yield a PAR10 score of 3000, the PAR10 speedup factor on this benchmark exceeded 1 500, the largest we observed in the CSSC. On the other two scenarios, configuration was also very beneficial, reducing ProbSAT's number of timeouts from 24 to 0 (7sat90) and from 10 to 4 (3sat1k), respectively. Table 13 summarizes the results of all solvers for these benchmarks, showing that next to ProbSAT, only SparrowToRiss benefited from configuration. Neither of the CDCL solvers (Clasp3.0.4p8 and MinisatHACK999ED) solved a single instance in any of the three benchmarks (in either default or configured variants). For the other two SLS solvers, YalSAT and CSCCSat2014, the defaults were already well tuned for these benchmark sets. Indeed, we observed overtuning to the training sets in one case each: YalSAT for 3sat1k and CSCCSat2014 for 7sat90. Overall, the configurability of ProbSAT and SparrowToRiss allowed them to place first and second, respectively, despite their poor default performance (especially on 5sat500, where neither of them solved a single instance with default settings).
6 PostCompetition Analyses
While the previous sections focussed on the results of the respective competitions, we now discuss several analyses we performed afterwards to study overarching phenomena and general patterns.
6.1 Why Does Configuration Work So Well and How Can It Fail?
Several practitioners have asked us why automated configuration can yield the large speedups over the default configuration we observed. We believe there are two key reasons for this:

- Solver defaults are typically chosen to be robust across benchmark families. For any given benchmark family $\Pi$, highly parameterized solvers can, however, typically be instantiated to exploit the idiosyncrasies of $\Pi$ substantially better. (These improvements only need to generalize to other instances from $\Pi$, not to other benchmark families.)
However, algorithm configuration does not necessarily work in all cases. For example, in the crafted SAT+UNSAT track of the CSSC 2014, we encountered a case in which the configured solver performed somewhat worse than the default solver: Clasp configured on benchmark family LABS timed out on 93 test instances, whereas its default only timed out on 87 test instances (see also Figure 20). Two obvious causes suggest themselves in the case of such a failure:

- insufficiently long configuration runs, which can result in worse-than-default performance on the training set (this is possible since configurators base their decisions only on a subset of the training instances; that subset grows over time and only reaches the full training set when the configuration process is given enough time); and/or

- overtuning on the training set that does not generalize to the test set.
We investigated the configuration of Clasp on LABS further after the competition and found that in this case both of these effects applied: in the CSSC, training performance slightly deteriorated, and the improved training performance we found afterwards with a larger configuration budget (specifically, 32 SMAC runs of 10 days each) also did not generalize to the test set.
To contrast the conditions under which configuration can fail and under which it works well, we compared the configuration of Clasp on benchmarks LABS (93 test timeouts in the CSSC) and NRooks (0 test timeouts in the CSSC). For this analysis, we sampled 100 Clasp configurations uniformly at random and evaluated their PAR10 training and test scores on both of the benchmarks; Figure 31 shows the result. The first observation we make directly from the figure is that for NRooks, about 20% of the random configurations outperform the default, whereas for LABS none of them do: the default is simply very good to start with for LABS and thus much harder to beat. Second, since several configurations are very good (i.e., fast) for NRooks, configurators can make progress much faster there (and also take full advantage of adaptive capping to limit the time spent on poor configurations); indeed, in the CSSC the configurators managed to perform many more Clasp runs in the same time (2 days) for NRooks than for LABS. This explains why configurators require more time to improve training performance on LABS. Third, to assess the potential for overtuning, we studied how training and test performance correlate. Visually, Figure 31 shows a strong overall correlation of PAR10 training and test scores for both of the benchmarks; Spearman correlation coefficients are indeed high: 0.99 (NRooks) and 0.98 (LABS). However, for the top 20% of sampled configurations, the correlation is much stronger for NRooks (0.98) than for LABS (0.49). This explains why improvements on the LABS training set do not necessarily translate to improvements on its test set.
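The following sketch outlines this kind of analysis, assuming hypothetical helper functions for sampling a random configuration and evaluating its PAR10 on a given data split; it uses SciPy's Spearman rank correlation.

```python
# Sketch: sample configurations at random, evaluate training and test PAR10,
# and measure how well training performance predicts test performance
# (overall and among the top configurations by training score).
import numpy as np
from scipy.stats import spearmanr

def train_test_correlation(sample_random_configuration, evaluate_par10,
                           num_samples=100, top_fraction=0.2):
    configs = [sample_random_configuration() for _ in range(num_samples)]
    train = np.array([evaluate_par10(c, split="train") for c in configs])
    test = np.array([evaluate_par10(c, split="test") for c in configs])
    overall_rho = spearmanr(train, test).correlation
    # Correlation restricted to the top fraction of configurations by training score.
    top = np.argsort(train)[: int(top_fraction * num_samples)]
    top_rho = spearmanr(train[top], test[top]).correlation
    return overall_rho, top_rho
```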
6.2 Overall Configurability of Solvers
Some solvers consistently benefited more from configuration than others. Here, we quantify the configurability of a solver on a given benchmark as the PAR10 speedup factor its configured version achieved over its default version, computed on the set of instances solved by at least one of the two. We then examine the relationship between configurability and the number of parameters, to determine whether solvers with many parameters consistently benefited more or less from configuration than solvers with few parameters. (Of course, it is simple to construct examples where a solver with a single parameter is highly configurable, e.g., by giving the parameter a poor default setting, or where a solver has many parameters but does not benefit from configuration at all, e.g., because it exposes many parameters that are not actually used. The focus of our analysis is therefore on the relationship between configurability and the number of parameters that a solver author reasonably expected would be useful to expose.)
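A minimal sketch of this configurability measure follows, assuming per-instance runtimes (with None for timeouts) for the default and configured versions on the same test instances; the helper par10 mirrors the metric defined in Section 3.

```python
# Sketch: PAR10 speedup of the configured over the default version, restricted
# to instances solved by at least one of the two.

def par10(runtimes, cutoff=300.0):
    return sum(10 * cutoff if t is None else t for t in runtimes) / len(runtimes)

def configurability(default_runtimes, configured_runtimes, cutoff=300.0):
    pairs = [(d, c) for d, c in zip(default_runtimes, configured_runtimes)
             if d is not None or c is not None]   # solved by at least one version
    if not pairs:
        return 1.0
    default_times = [d for d, _ in pairs]
    configured_times = [c for _, c in pairs]
    return par10(default_times, cutoff) / par10(configured_times, cutoff)
```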
Figure 34 shows that configurability was indeed high for solvers with many parameters (e.g., the variants of Lingeling, Riss, and Clasp), but that it did not increase monotonically with the number of parameters: some solvers with very few parameters were surprisingly configurable. For example, configuration sped up the single-parameter solver DCCASat+marchrw by at least a factor of four on all three benchmarks it was configured for, while the 4-parameter solver CSCCSat2014 was not improved at all by configuration. Furthermore, ProbSAT, which achieved the best single-benchmark performance improvement (as previously discussed in Section 5.3.4), has only 9 parameters.
We note that the notion of configurability used here is strongly dependent on the time budget available for configuration. In the next section, we investigate this issue in more detail.
6.3 Impact of Configuration Budget
The runtime budget we allow to configure each solver has an obvious impact on the results. In one extreme case, if we let this budget go towards zero, the configuration pipeline returns the solver defaults (and we are back in the setting of the standard SAT competition). For small, nonzero budgets, we can expect solvers with few parameters to benefit from configuration more, since their configuration spaces are easier to search. On the other hand, if we increase the time budget, solvers with larger parameter spaces are likely to benefit more than those with smaller parameter spaces (since larger parts of their configuration space can be searched given additional time).
[Figure 38: PAR10 performance (with one standard deviation) of the incumbent configurations found by 4 runs of SMAC over time.]

Figure 38 illustrates this phenomenon for the two top solvers in the Random SAT+UNSAT track of CSSC 2014. With the competition's configuration budget of two days across 4 cores, Clasp3.0.4p8 performed better than DCCASat+marchrw (both solved all test instances, with average runtimes of 13 vs. 21 seconds). In the extreme case of no time budget for configuration, DCCASat+marchrw would have won against Clasp3.0.4p8, since its default version performed much better (2 vs. 18 timeouts). In fact, Figure 38(a) shows that a substantial configuration budget was required before improving Clasp3.0.4p8 parameters were found for the 3cnf benchmark (where the default version of Clasp3.0.4p8 produced 18 timeouts). While the configuration of DCCASat+marchrw's single parameter converged early, the configuration of Clasp3.0.4p8's 75 parameters continued to improve performance until the end of the configuration budget, and, in particular for the 3cnf benchmark, performance would likely have continued to improve further if the budget had been larger.
We thus conclude that a solver's flexibility should be chosen in relation to the available budget for configuration: solvers with few parameters can often be improved more quickly than highly flexible solving frameworks, but, given enough computational resources and powerful configurators, the latter typically offer greater performance potential.
6.4 Results with an Increased Cutoff Time for Validation
Besides the overall time budget allowed for configuration, another important time limit is the cutoff time allowed for each single solver run; due to our limited overall budget, we chose this to be quite low: 300 seconds both for solver runs during the configuration process and for the final evaluation of solvers on previously unseen test instances.
Here, we study how using a larger cutoff time at evaluation time affects results, mimicking a situation in which we care about performance with a large cutoff time but use a smaller cutoff time during the configuration process to make progress faster. In fact, several studies in the literature (e.g., Hutter et al. (2007); KhudaBukhsh et al. (2009); Tompkins et al. (2011b)) used a smaller cutoff time for configuration than for testing, and we found that improvements obtained with a time budget of around 300 seconds often led to improvements with larger cutoff times.
Table 14 shows the results we obtained when using a cutoff time of 5000 seconds for validation (the same as in the SAT competition) for the Industrial SAT+UNSAT track of CSSC 2014. Qualitatively, these results are quite similar to those obtained with an evaluation cutoff time of 300 seconds (compare Table 10), with only a few differences. As expected, given the larger cutoff time, all solvers solved substantially more instances (especially for the BMC and Circuit Fuzz benchmarks). Nevertheless, with a cutoff time of 5000 seconds, the configured variant of every solver (configured to perform well with a cutoff time of 300 seconds) still performed better than the default version, making us more confident that configuration does not substantially over-tune to achieve good performance on easy instances only.
Table 14: Number of timeouts of the default and configured solver versions on the test sets of the Industrial SAT+UNSAT track of CSSC 2014 (BMC, Circuit Fuzz, and IBM, plus the overall count), evaluated with a cutoff time of 5000 seconds, together with the resulting ranks. Only the ranks are reproduced here:

Solver              Rank (default)   Rank (CSSC)
Lingeling                 1               1
MinisatHACK999ED          2               2
Riss4.27                  3               3
SparrowToRiss             4               4
Cryptominisat             5               5
Clasp3.0.4p8              6               6
6.5 Results with a Single Configurator
Table 15: Geometric mean of PAR10 slowdown factors relative to the full CSSC configuration pipeline, for each configuration approach (SMAC (discretized), SMAC, ParamILS (discretized), GGA (discretized), and GGA), evaluated on the training and test sets of the scenarios each solver was configured for. Rows: DCCASat+marchrw, CSCCSat2014, ProbSAT, MinisatHACK999ED, YalSAT, Cryptominisat, Clasp3.0.4p8, Riss4.27, SparrowToRiss, and Lingeling. Entries marked ‘–’ denote solver/configurator combinations that could not be run; the numeric entries are omitted here (full results are given in the accompanying technical report).
While the CSSC addressed the performance of SAT solvers rather than the performance of configurators, we have been asked whether our complex configuration pipeline was necessary, or whether a single configurator would have produced similar or identical results. Indeed, counting the choice of discretized vs. non-discretized parameter spaces, our pipeline used five configuration approaches (ParamILS (discretized), GGA, GGA (discretized), SMAC (discretized), and SMAC). Thus, if one of these approaches had yielded the same results all by itself, we could have reduced our overall configuration budget fivefold.
To determine whether this was the case, we evaluated the solver performance we would have observed if we had used each configuration approach in isolation. For each configuration scenario and each approach, we computed the PAR10 slowdown factor over the CSSC result as the PAR10 achieved with the respective approach, divided by the PAR10 of the approach with best training performance (which we selected in the CSSC). If a configuration approach achieves a PAR10 slowdown factor close to one, this means that it gives rise to solver performance close to that achieved by our full CSSC configuration pipeline. For each solver, we then computed the geometric mean of these factors across the scenarios it was configured for.
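To make this computation concrete, the following minimal Python sketch recomputes the slowdown factors and their geometric mean from per-scenario runtime data; the data layout and the helper names (par10, slowdown_factors) are illustrative assumptions, not the actual CSSC evaluation scripts.

import math

CUTOFF = 300.0  # evaluation cutoff in seconds used in the CSSC

def par10(runtimes, cutoff=CUTOFF):
    # Penalized average runtime: unsolved instances count as 10 times the cutoff.
    return sum(t if t < cutoff else 10 * cutoff for t in runtimes) / len(runtimes)

def slowdown_factors(train_runtimes, test_runtimes):
    # train_runtimes/test_runtimes map configurator name -> list of runtimes.
    # The CSSC pipeline selects the approach with the best training PAR10;
    # every approach's test PAR10 is then divided by that of the selected one.
    selected = min(train_runtimes, key=lambda c: par10(train_runtimes[c]))
    base = par10(test_runtimes[selected])
    return {c: par10(test_runtimes[c]) / base for c in test_runtimes}

def geometric_mean(factors):
    return math.exp(sum(math.log(f) for f in factors) / len(factors))

# Aggregating one configurator's factors across the scenarios it was used for:
print(geometric_mean([1.0, 1.3, 1.1]))  # roughly 1.13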
Table 15 shows that both SMAC variants performed close to best for all solvers, meaning that we would have achieved similar results had we only used SMAC in the CSSC. ParamILS yielded the next best performance, followed by GGA. Full results can be found in the accompanying technical report Hutter et al. (2014). Despite SMAC’s strong performance, we believe it will still be useful to run several configuration approaches in future CSSCs, both to ensure robustness and to assess whether some configuration scenarios are better suited to other configuration approaches.
7 Conclusion
In this article, we have described the design of the Configurable SAT Solver Challenge (CSSC) and the details of CSSC 2013 and CSSC 2014. We have highlighted two main insights that we gained from this competition:

Automated algorithm configuration often improved performance substantially, in several cases yielding average speedups of orders of magnitude.

Some solvers benefited more from automated configuration than others, leading to substantially different algorithm rankings after configuration than before (as, e.g., measured by the SAT competition).
Also, the configuration budget influenced which algorithm would perform best, and with the competition budget of 2 days on 4–5 cores, algorithms with larger parameter spaces exhibited more capacity for improvement.
These conclusions have interesting implications for algorithm design: if an algorithm is likely to be applied across a range of specialized applications, it should be made flexible by parameterizing its key mechanisms and components, and this flexibility should be exploited by automated algorithm configuration. Our findings thus challenge the traditional approach to solver design, which tries to avoid having too many algorithm parameters (since these parameters complicate manual tuning and analysis). Rather, they support the design paradigm of Programming by Optimization (PbO) Hoos (2012), which aims to avoid premature design choices and instead to actively develop promising alternatives for parts of the design, so that automated customization can achieve peak performance on particular benchmarks of interest. Indeed, in the CSSC, we have already observed a trend towards PbO, as evidenced by the introduction of a host of new parameters into state-of-the-art solvers, such as Riss4.27 and Lingeling, between 2013 and 2014.
Finally, there is no reason why a configurable solver competition should be appropriate and insightful only for SAT. On the contrary, similar events would be interesting in the context of many other challenging computational problems, such as answer set programming, constraint programming, or AI planning. Another interesting application domain is automated machine learning, where algorithm configuration can adapt flexible machine learning frameworks to each new dataset at hand Thornton et al. (2013); Feurer et al. (2015). We believe that for these and many other problems, findings similar to those we have reported here for the CSSC would be obtained, leading to analogous conclusions regarding algorithm design.
Acknowledgements
Many thanks go to Kevin Tierney for his generous help with running GGA, including his addition of new features, his suggestion of parameter settings and his conversion script to read the pcs format. We also thank the solver developers for proofreading the description of their solvers and their parameters. For computational resources to run the competition, we thank Compute Canada (CSSC 2013) and the German Research Foundation (DFG; CSSC 2014). F. Hutter and M. Lindauer thank the DFG for funding this research under Emmy Noether grant HU 1900/21. H. Hoos acknowledges funding through an NSERC Discovery Grant.
References

Ansótegui et al. (2009) Ansótegui, C., Sellmann, M., & Tierney, K. (2009). A gender-based genetic algorithm for the automatic configuration of algorithms. In Gent, I. (Ed.), Proceedings of the Fifteenth International Conference on Principles and Practice of Constraint Programming (CP’09), volume 5732 of Lecture Notes in Computer Science, (pp. 142–157). SpringerVerlag.
 Audemard & Simon (2009) Audemard, G. & Simon, L. (2009). Predicting learnt clauses quality in modern SAT solvers. In Boutilier (2009), (pp. 399–404).
 Babić & Hu (2007) Babić, D. & Hu, A. (2007). Structural abstraction of software verification conditions. In Damm, W. & Hermanns, H. (Eds.), Proceedings of the international conference on Computer Aided Verification (CAV’07), volume 4590 of Lecture Notes in Computer Science, (pp. 366–378). Springer.
 Babić & Hu (2008) Babić, D. & Hu, A. J. (2008). Exploiting shared structure in software verification conditions. In Yorav, K. (Ed.), Proceedings of the International Conference on Hardware and Software: Verification and Testing (HVC’08), volume 4899 of Lecture Notes in Computer Science, (pp. 169–184). Springer.
 Babić & Hutter (2007) Babić, D. & Hutter, F. (2007). Spear theorem prover. Solver description, SAT competition.
 Balabanov (2013) Balabanov, V. (2013). Solver43. In Balint et al. (2013), (pp. 86).
 Balint et al. (2013) Balint, A., Belov, A., Heule, M., & Järvisalo, M. (Eds.). (2013). Proceedings of SAT Competition 2013: Solver and Benchmark Descriptions, volume B20131 of Department of Computer Science Series of Publications B. University of Helsinki.
 Balint & Manthey (2014) Balint, A. & Manthey, N. (2014). SparrowToRiss. In Belov et al. (2014), (pp. 77–78).
 Balint & Schöning (2012) Balint, A. & Schöning, U. (2012). Choosing probability distributions for stochastic local search and the role of make versus break. In Cimatti & Sebastiani (2012), (pp. 16–29).

Bayardo Jr. & Schrag (1997) Bayardo Jr., R. J. & Schrag, R. (1997). Using CSP look-back techniques to solve real-world SAT instances. In Kuipers, B. & Webber, B. (Eds.), Proceedings of the Fourteenth National Conference on Artificial Intelligence (AAAI’97), (pp. 203–208). AAAI Press.
 Bayless et al. (2014) Bayless, S., Tompkins, D., & Hoos, H. (2014). Evaluating instance generators by configuration. In Pardalos, P. & Resende, M. (Eds.), Proceedings of the Eighth International Conference on Learning and Intelligent Optimization (LION’14), Lecture Notes in Computer Science. SpringerVerlag.
 Bebel & Yuen (2013) Bebel, J. & Yuen, H. (2013). Hard SAT instances based on factoring. In Balint et al. (2013), (pp. 102).
 Belov et al. (2014) Belov, A., Diepold, D., Heule, M., & Järvisalo, M. (Eds.). (2014). Proceedings of SAT Competition 2014: Solver and Benchmark Descriptions, volume B20142 of Department of Computer Science Series of Publications B. University of Helsinki.
 Berre & Parrain (2010) Berre, D. L. & Parrain, A. (2010). The Sat4j library, release 2.2, system description. Journal on Satisfiability, Boolean Modeling and Computation, 7, 59–64.
 Biere (2007) Biere, A. (2007). The AIGER and-inverter graph (AIG) format. Available at fmv.jku.at/aiger.
 Biere (2013) Biere, A. (2013). Lingeling, Plingeling and Treengeling entering the SAT competition 2013. In Balint et al. (2013), (pp. 51–52).
 Biere (2014) Biere, A. (2014). Yet another local search solver and Lingeling and friends entering the SAT competition 2014. In Belov et al. (2014), (pp. 39–40).
 Biere et al. (2008) Biere, A., Cimatti, A., Claessen, K. L., Jussila, T., McMillan, K., & Somenzi, F. (2008). Benchmarks from the 2008 hardware model checking competition (HWMCC’08). Available at http://fmv.jku.at/hwmcc08/benchmarks.html.
 Biere et al. (1999) Biere, A., Cimatti, A., Clarke, E., Fujita, M., & Zhu, Y. (1999). Symbolic model checking using SAT procedures instead of BDDs. In Proceedings of Design Automation Conference (DAC’99), (pp. 317–320).
 Biere et al. (2014) Biere, A., Le Berre, D., Lonca, E., & Manthey, N. (2014). Detecting cardinality constraints in CNF. In Sinz, C. & Egly, U. (Eds.), Proceedings of the Seventeenth International Conference on Theory and Applications of Satisfiability Testing (SAT’14), volume 8561 of Lecture Notes in Computer Science, (pp. 285–301). SpringerVerlag.
 Boutilier (2009) Boutilier, C. (Ed.). (2009). Proceedings of the Twenty-First International Joint Conference on Artificial Intelligence (IJCAI’09).
 Breimann (2001) Breiman, L. (2001). Random forests. Machine Learning, 45(1), 5–32.
 Brummayer et al. (2012) Brummayer, R., Lonsing, F., & Biere, A. (2012). Automated testing and debugging of SAT and QBF solvers. In Cimatti & Sebastiani (2012), (pp. 44–57).
 Cadar et al. (2008) Cadar, C., Dunbar, D., & Engler, D. R. (2008). Klee: Unassisted and automatic generation of highcoverage tests for complex systems programs. In Proceedings of the 8th USENIX conference on Operating systems design and implementation (OSDI’08), volume 8, (pp. 209–224).
 Cimatti & Sebastiani (2012) Cimatti, A. & Sebastiani, R. (Eds.). (2012). Proceedings of the Fifteenth International Conference on Theory and Applications of Satisfiability Testing (SAT’12), volume 7317 of Lecture Notes in Computer Science. SpringerVerlag.
 Clarke et al. (2004) Clarke, E., Kroening, D., & Lerda, F. (2004). A tool for checking ANSIC programs. In Tools and Algorithms for the Construction and Analysis of Systems (pp. 168–176). Springer.

Cook (1971) Cook, S. (1971). The complexity of theorem proving procedures. In Harrison, M., Banerji, R., & Ullman, J. (Eds.), Proceedings of the Third Annual ACM Symposium on the Theory of Computing (STOC’71), (pp. 151–158). ACM.
 Crawford & Baker (1994) Crawford, J. & Baker, A. (1994). Experimental results on the application of satisfiability algorithms to scheduling problems. In Proceedings of the National Conference on Artificial Intelligence (AAAI’94), (pp. 1092–1097). AAAI Press/MIT Press.
 Duong & Pham (2013) Duong, T.T. & Pham, D.N. (2013). gNovelty+GC: WeightEnhanced Diversification on Stochastic Local Search for SAT. In Balint et al. (2013), (pp. 49–50).
 Eén & Sörensson (2003) Eén, N. & Sörensson, N. (2003). An extensible SATsolver. In Giunchiglia, E. & Tacchella, A. (Eds.), Proceedings of the Sixth International Conference on Theory and Applications of Satisfiability Testing (SAT’03), volume 2919 of Lecture Notes in Computer Science, (pp. 502–518). Springer.
 Fern et al. (2011) Fern, A., Khardon, R., & Tadepalli, P. (2011). The first learning track of the international planning competition. Machine Learning, 84(1–2), 81–107.
 Feurer et al. (2015) Feurer, M., Klein, A., Eggensperger, K., Springenberg, J. T., Blum, M., & Hutter, F. (2015). Efficient and robust automated machine learning. In Cortes, C., Lawrence, N., Lee, D., Sugiyama, M., & Garnett, R. (Eds.), Proceedings of the 29th International Conference on Advances in Neural Information Processing Systems (NIPS’15).
 Gebser et al. (2012) Gebser, M., Kaufmann, B., & Schaub, T. (2012). Conflict-driven answer set solving: From theory to practice. Artificial Intelligence, 187–188, 52–89.
 Gelder (2005) Gelder, A. V. (2005). Toward leaner binaryclause reasoning in a satisfiability solver. Annals of Mathematics and Artificial Intelligence, 43(1), 239–253.

Gomes et al. (2000) Gomes, C., Selman, B., Crato, N., & Kautz, H. (2000). Heavy-tailed phenomena in satisfiability and constraint satisfaction problems. Journal of Automated Reasoning, 24(1–2), 67–100.
 Han & Jiang (2012a) Han, C.S. & Jiang, J.H. (2012a). Simpsat 1.0 for SAT Challenge 2012. In Balint, A., Belov, A., Diepold, D., Gerber, S., Järvisalo, M., & Sinz, C. (Eds.), Proceedings of SAT Challenge 2012: Solver and Benchmark Descriptions, volume B20122 of Department of Computer Science Series of Publications B, (pp. 59). University of Helsinki.
 Han & Jiang (2012b) Han, C.S. & Jiang, J.H. (2012b). When Boolean satisfiability meets Gaussian elimination in a simplex way. Computer Aided Verification, 410–426.
 Han & Somenzi (2009) Han, H. & Somenzi, F. (2009). On-the-fly clause improvement. In Kullmann (2009), (pp. 209–222).
 Heule et al. (2011) Heule, M. J., Järvisalo, M., & Biere, A. (2011). Efficient CNF simplification based on binary implication graphs. In Sakallah & Simon (2011), (pp. 201–215).
 Hoos & Stützle (2004) Hoos, H. & Stützle, T. (2004). Stochastic Local Search: Foundations & Applications. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc.
 Hoos (2012) Hoos, H. H. (2012). Programming by optimization. Commun. ACM, 55(2), 70–80.
 Hutter et al. (2007) Hutter, F., Babić, D., Hoos, H., & Hu, A. (2007). Boosting verification by automatic tuning of decision procedures. In O’Conner, L. (Ed.), Formal Methods in Computer Aided Design (FMCAD’07), (pp. 27–34). IEEE Computer Society Press.
 Hutter et al. (2014) Hutter, F., Balint, A., Bayless, S., Hoos, H., & LeytonBrown, K. (2014). Results of the Configurable SAT Solver Challenge 2013. Technical Report 276, University of Freiburg, Department of Computer Science.
 Hutter et al. (2010) Hutter, F., Hoos, H., & LeytonBrown, K. (2010). Tradeoffs in the empirical evaluation of competing algorithm designs. Annals of Mathematics and Artificial Intelligence (AMAI), Special Issue on Learning and Intelligent Optimization, 60(1), 65–89.
 Hutter et al. (2011a) Hutter, F., Hoos, H., & LeytonBrown, K. (2011a). Bayesian optimization with censored response data. In NIPS workshop on Bayesian Optimization, Sequential Experimental Design, and Bandits. Published online.
 Hutter et al. (2011b) Hutter, F., Hoos, H., & LeytonBrown, K. (2011b). Sequential model-based optimization for general algorithm configuration. In Coello, C. (Ed.), Proceedings of the Fifth International Conference on Learning and Intelligent Optimization (LION’11), volume 6683 of Lecture Notes in Computer Science, (pp. 507–523). SpringerVerlag.
 Hutter et al. (2009) Hutter, F., Hoos, H., LeytonBrown, K., & Stützle, T. (2009). ParamILS: An automatic algorithm configuration framework. Journal of Artificial Intelligence Research, 36, 267–306.
 Hutter et al. (2014) Hutter, F., Lindauer, M., Bayless, S., Hoos, H., & LeytonBrown, K. (2014). Results of the Configurable SAT Solver Challenge 2014. Technical Report 277, University of Freiburg, Department of Computer Science.
 Hutter et al. (2014) Hutter, F., Xu, L., Hoos, H., & LeytonBrown, K. (2014). Algorithm runtime prediction: Methods and evaluation. Artificial Intelligence, 206, 79–111.
 Järvisalo et al. (2012) Järvisalo, M., Berre, D. L., Roussel, O., & Simon, L. (2012). The international SAT solver competitions. AI Magazine, 33(1).
 Järvisalo et al. (2010) Järvisalo, M., Biere, A., & Heule, M. J. (2010). Blocked clause elimination. In Esparza, J. & Majumdar, R. (Eds.), Tools and Algorithms for the Construction and Analysis of Systems, volume 6015 of Lecture Notes in Computer Science, (pp. 129–144). Springer Berlin Heidelberg.
 Kadioglu et al. (2011) Kadioglu, S., Malitsky, Y., Sabharwal, A., Samulowitz, H., & Sellmann, M. (2011). Algorithm selection and scheduling. In Lee, J. (Ed.), Proceedings of the Seventeenth International Conference on Principles and Practice of Constraint Programming (CP’11), volume 6876 of Lecture Notes in Computer Science, (pp. 454–469). SpringerVerlag.
 Kautz & Selman (1996) Kautz, H. & Selman, B. (1996). Pushing the envelope: Planning, propositional logic, and stochastic search. In Shrobe, H. & Senator, T. (Eds.), Proceedings of the Thirteenth National Conference on Artificial Intelligence (AAAI’96), (pp. 1194–1201). AAAI Press.
 Kautz & Selman (2014) Kautz, H. & Selman, B. (2014). Unifying SAT-based and graph-based planning. In Belov et al. (2014), (pp. 318–325).
 KhudaBukhsh et al. (2009) KhudaBukhsh, A., Xu, L., Hoos, H., & LeytonBrown, K. (2009). SATenstein: Automatically building local search SAT solvers from components. In Boutilier (2009), (pp. 517–524).
 Kullmann (2009) Kullmann, O. (Ed.). (2009). Proceedings of the Twelfth International Conference on Theory and Applications of Satisfiability Testing (SAT’09), volume 5584 of Lecture Notes in Computer Science. Springer.
 LópezIbáñez et al. (2011) LópezIbáñez, M., DuboisLacoste, J., Stützle, T., & Birattari, M. (2011). The irace package, iterated race for automatic algorithm configuration. Technical report, IRIDIA, Université Libre de Bruxelles, Belgium.
 Lourenço et al. (2003) Lourenço, H., Martin, O., & Stützle, T. (2003). Iterated local search. In Handbook of Metaheuristics (pp. 321–353). Springer New York.
 Luo et al. (2012) Luo, C., Cai, S., Wu, W., & Su, K. (2012). Focused random walk with configuration checking and break minimum for satisfiability. In Bessiere, C. (Ed.), Proceedings of the Nineteenth International Conference on Principles and Practice of Constraint Programming (CP’13), volume 4741 of Lecture Notes in Computer Science, (pp. 481–496). SpringerVerlag.
 Luo et al. (2014) Luo, C., Cai, S., Wu, W., & Su, K. (2014). Double configuration checking in stochastic local search for satisfiability. In Brodley, C. & Stone, P. (Eds.), Proceedings of the Twenty-Eighth National Conference on Artificial Intelligence (AAAI’14), (pp. 2703–2709). AAAI Press.
 Lynce & MarquesSilva (2003) Lynce, I. & MarquesSilva, J. P. (2003). Probing-based preprocessing techniques for propositional satisfiability. In 15th IEEE International Conference on Tools with Artificial Intelligence, (pp. 105–110). IEEE Computer Society.
 Manthey (2012) Manthey, N. (2012). Coprocessor 2.0–a flexible CNF simplifier. In Cimatti & Sebastiani (2012), (pp. 436–441).
 Manthey (2013) Manthey, N. (2013). The SAT solver RISS3G at SC 2013. In Balint et al. (2013), (pp. 72–73).
 Manthey (2014) Manthey, N. (2014). Riss 4.27. In Belov et al. (2014), (pp. 65–67).
 Manthey et al. (2013) Manthey, N., Heule, M. J., & Biere, A. (2013). Automated reencoding of Boolean formulas. In Biere, A., Nahir, A., & Vos, T. (Eds.), Hardware and Software: Verification and Testing, volume 7857 of Lecture Notes in Computer Science, (pp. 102–117). Springer Berlin Heidelberg.
 Manthey & Philipp (2014) Manthey, N. & Philipp, T. (2014). Formula simplifications as DRAT derivations. In Lutz, C. & Thielscher, M. (Eds.), KI 2014: Advances in Artificial Intelligence, volume 8736 of Lecture Notes in Computer Science, (pp. 111–122). Springer Berlin Heidelberg.
 Manthey & Steinke (2014) Manthey, N. & Steinke, P. (2014). Too many rooks. In Belov et al. (2014), (pp. 97–98).
 Mugrauer & Balint (2013a) Mugrauer, F. & Balint, A. (2013a). SAT encoded graph isomorphism benchmark description. In Balint et al. (2013).
 Mugrauer & Balint (2013b) Mugrauer, F. & Balint, A. (2013b). SAT encoded low autocorrelation binary sequence (labs) benchmark description. In Balint et al. (2013).
 Nudelman et al. (2004) Nudelman, E., LeytonBrown, K., Hoos, H. H., Devkar, A., & Shoham, Y. (2004). Understanding random SAT: Beyond the clausestovariables ratio. In M. Wallace (Ed.), Proceedings of the Tenth International Conference on Principles and Practice of Constraint Programming (CP’04), volume 3258 of Lecture Notes in Computer Science (pp. 438–452). SpringerVerlag.
 Oh (2014) Oh, C. (2014). Minisat hack 999ed, minisat hack 1430ed and swdia5by. In Belov et al. (2014), (pp. 46).
 Prasad et al. (2005) Prasad, M., Biere, A., & Gupta, A. (2005). A survey of recent advances in SATbased formal verification. International Journal on Software Tools for Technology Transfer, 7(2), 156–173.
 Roussel (2011) Roussel, O. (2011). Controlling a solver execution with the runsolver tool. Journal on Satisfiability, Boolean Modeling and Computation, 7(4), 139–144.
 Sakallah & Simon (2011) Sakallah, K. A. & Simon, L. (Eds.). (2011). Proceedings of the Fourteenth International Conference on Theory and Applications of Satisfiability Testing (SAT’11), volume 6695 of Lecture Notes in Computer Science. Springer.
 Seipp et al. (2014) Seipp, J., Sievers, S., & Hutter, F. (2014). Fast downward SMAC. Planner abstract, IPC 2014 Planning and Learning Track.
 Simon et al. (2005) Simon, L., Berre, D. L., & Hirsch, E. (2005). The SAT2002 competition report. Annals of Mathematics and Artificial Intelligence, 43, 307–342.
 Soos (2014) Soos, M. (2014). CryptoMiniSat v4. In Belov et al. (2014), (pp. 23).
 Soos et al. (2009) Soos, M., Nohl, K., & Castelluccia, C. (2009). Extending SAT solvers to cryptographic problems. In Kullmann (2009), (pp. 244–257).
 Stephan et al. (1996) Stephan, P., Brayton, R., & SangiovanniVencentelli, A. (1996). Combinational test generation using satisfiability. IEEE Transactions on ComputerAided Design of Integrated Circuits and Systems, 15, 1167–1176.

Thornton et al. (2013) Thornton, C., Hutter, F., Hoos, H., & LeytonBrown, K. (2013). Auto-WEKA: Combined selection and hyperparameter optimization of classification algorithms. In Dhillon, I., Koren, Y., Ghani, R., Senator, T., Bradley, P., Parekh, R., He, J., Grossman, R., & Uthurusamy, R. (Eds.), The 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD’13), (pp. 847–855). ACM Press.
 Tompkins et al. (2011a) Tompkins, D., Balint, A., & Hoos, H. (2011a). Captain Jack: New variable selection heuristics in local search for SAT. In Sakallah & Simon (2011), (pp. 302–316).
 Tompkins et al. (2011b) Tompkins, D., Balint, A., & Hoos, H. (2011b). Captain Jack: New variable selection heuristics in local search for SAT. In Sakallah & Simon (2011), (pp. 302–316).
 Torán (2013) Torán, J. (2013). On the resolution complexity of graph nonisomorphism. In M. Järvisalo & A. Van Gelder (Eds.), Proceedings of the Sixteenth International Conference on Theory and Applications of Satisfiability Testing (SAT’13), volume 7962 of Lecture Notes in Computer Science (pp. 52–66). SpringerVerlag.
 van Gelder (2002) van Gelder, A. (2002). Another look at graph coloring via propositional satisfiability. In Proceedings of Computational Symposium on Graph Coloring and Generalizations (COLOR02), (pp. 48–54).
 Xu et al. (2008) Xu, L., Hutter, F., Hoos, H., & LeytonBrown, K. (2008). SATzilla: Portfolio-based algorithm selection for SAT. Journal of Artificial Intelligence Research, 32, 565–606.
 Zarpas (2005) Zarpas, E. (2005). Benchmarking SAT solvers for bounded model checking. In Bacchus, F. & Walsh, T. (Eds.), Proceedings of the Eighth International Conference on Theory and Applications of Satisfiability Testing (SAT’05), volume 3569 of Lecture Notes in Computer Science, (pp. 340–354). Springer.
Appendix A Benchmark Sets Used
We mirrored the three main categories of instances from the SAT competition: industrial, crafted, and random. In 2014, we also included a category of satisfiable random instances from the SAT Races. For each of these categories, we used various benchmark sets, each of them split into a training set to be used for algorithm configuration and a disjoint test set.
For each category, to weight all benchmarks equally, we used the same number of test instances from each benchmark; these test sets were subsampled uniformly at random from the respective complete test sets.
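The subsampling step itself is straightforward; the following minimal sketch (with an assumed data layout) illustrates drawing the same number of test instances from each benchmark so that all benchmarks carry equal weight.

import random

def subsample_test_sets(test_sets, n_per_benchmark, seed=0):
    # test_sets maps a benchmark name to the list of its test instance files;
    # each benchmark contributes exactly n_per_benchmark instances, drawn
    # uniformly at random, so that all benchmarks are weighted equally.
    rng = random.Random(seed)
    return {name: rng.sample(instances, n_per_benchmark)
            for name, instances in test_sets.items()}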
A.1 Industrial Benchmark Sets
SWV
This set of SAT-encoded software verification instances consists of 604 instances generated with the CALYSTO static checker Babić & Hu (2008), used for the verification of five programs: the spam filter Dspam, the SAT solver HyperSAT, the Wine Windows OS emulator, the gzip archiver, and a component of xinetd (a secure version of inetd). We used the same training/test split as Hutter et al. Hutter et al. (2007), containing 302 training instances and 302 test instances. We used this benchmark set in the 2013 CSSC. (In 2014, we only used it for preliminary tests since it is quite easy for modern solvers.)
Hardware Verification (IBM)
This set of SAT-encoded bounded model checking instances consists of 765 instances generated by Zarpas Zarpas (2005). These instances were originally selected by Hutter et al. Hutter et al. (2007) as the instances in 40 randomly selected folders from the IBM Formal Verification Benchmarks Library. We used their original training/test split, containing 382 training instances and 383 test instances. We used this benchmark set in both the 2013 and 2014 CSSCs.
Circuit Fuzz
These instances were produced by a circuit-based CNF fuzzing tool, FuzzSAT Brummayer et al. (2012) (version 0.1). As FuzzSAT was originally designed to produce semi-realistic test cases for debugging SAT solvers, the majority of the instances it produces are trivial; however, occasionally, it produces more challenging instances. The Circuit Fuzz instances were found by generating 10,000 FuzzSAT instances and removing all those that could be solved within one second by Lingeling. This instance generator was originally described in detail by Bayless et al. Bayless et al. (2014); we used the 300 instances from that paper as the training set (except one quite easy instance, ‘fuzz_100_25433.cnf’, which was dropped unintentionally by a script) and produced 585 additional instances using the same method, to form a testing set. We used this benchmark set in both the 2013 and 2014 CSSCs. We used these instances as part of the industrial track since they are “structured in ways that resemble (at least superficially) real-world, circuit-derived instances” Bayless et al. (2014); a case could, however, also be made for them to be part of the crafted or random track.
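The generate-and-filter process described above can be summarized by the following sketch; the command-line interfaces assumed for fuzzsat and lingeling are illustrative only, not the exact invocations used for the competition.

import os
import subprocess

def generate_hard_fuzz_instances(n_candidates=10000, timeout=1.0, outdir="circuit_fuzz"):
    # Generate candidate CNFs with FuzzSAT and keep only those that Lingeling
    # cannot solve within `timeout` seconds; easy instances are discarded.
    os.makedirs(outdir, exist_ok=True)
    kept = []
    for i in range(n_candidates):
        cnf = os.path.join(outdir, "fuzz_%05d.cnf" % i)
        with open(cnf, "w") as f:
            subprocess.run(["fuzzsat"], stdout=f, check=True)  # assumed generator call
        try:
            subprocess.run(["lingeling", cnf], stdout=subprocess.DEVNULL, timeout=timeout)
            os.remove(cnf)      # solved within the cutoff: too easy, discard
        except subprocess.TimeoutExpired:
            kept.append(cnf)    # not solved within the cutoff: keep as benchmark
    return kept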
Bounded Model Checking 2008 (BMC)
This set of SAT instances was derived by unrolling the 2008 Hardware Model Checking Competition circuits Biere et al. (2008). Each of these instances is a sequential circuit with safety properties. Each circuit was unrolled to 50, 100, and 200 iterations using the tool aigunroll (version 1.9.4) from the AIGER tools Biere (2007). We omitted trivial instances that were proven SAT or UNSAT during the unrolling process. While we used the entire set in 2013, in 2014 we removed the 60 instances provided by Intel in order to allow us to publicly share the instances.
A.2 Crafted Benchmark Sets
Graph Isomorphism (GI)
These instances were first used in the 2013 SAT Competition Mugrauer & Balint (2013a) and were generated by encoding the graph isomorphism problem to SAT according to the procedure described by Torán Torán (2013). Given two graphs (for which the isomorphism problem is to be solved), the generator creates a SAT formula whose size grows quickly with the number of vertices and edges; consequently, the generated instances can contain very many clauses. The 2 064 SAT instances in this set were generated from different types of graphs, with the number of vertices ranging from 10 to 1296. (Footnote 10: Note that the larger graphs have varying node degrees, and that each node can only match other nodes of the same degree; this allows the encoding to generate far fewer clauses than in the worst case of equal node degrees.) We split the instances uniformly at random into 1 032 training and 1 032 test instances; in both the 2013 and 2014 CSSCs, we only used 351 of the test instances.
Low Autocorrelation Binary Sequence (LABS)
This set contains 651 low-autocorrelation binary sequence (LABS) search problems that were encoded to SAT by first encoding them as pseudo-Boolean problems and then translating these into SAT. Instances from this set were first used in the crafted category of the SAT Competition 2013 Mugrauer & Balint (2013b). We split this benchmark set uniformly at random into 350 training and 351 test instances, and used it in both the 2013 and 2014 CSSCs.
NRooks
These 835 instances Manthey & Steinke (2014) represent a parameterized unsatisfiable variation of the well-known n-queens problem, in which the task is to place queens on a chess board such that they do not attack each other. In the variation considered here, the problem is to place either rooks or queens on the board, with parameters chosen such that the resulting instances are unsatisfiable. Additional constraints enforcing that there is a piece in each row/column/diagonal make it easier to prove unsatisfiability, and these constraints can be enabled or disabled by generator parameters. We used the generator newDame provided by Norbert Manthey to generate instances of various board sizes, using all rooks or all queens, six different problem encodings, and all combinations of enabling/disabling the constraint types. We then removed trivial instances, ending up with 835 instances. For the CSSC 2014, we selected 484 training instances uniformly at random and used the remaining 351 as test instances.
A.3 Random Benchmark Sets
K3
This is a set of 600 randomlygenerated 3SAT instances at the phase transition (clause to variable ratio of approximately 4.26). It includes both satisfiable and unsatisfiable instances. The set includes 100 instances each with 200 variables (853 clauses), 225 variables (960 clauses), 250 variables (1066 clauses), 275 variables (1172 clauses), 300 variables (1279 clauses), and 325 variables (1385 clauses). These 600 instances were generated by Lin Xu using the random instance generator from the 2009 SAT competition, and were previously described by
Bayless et al. Bayless et al. (2014). We employed their uniform random split into 300 training and 300 test instances, using all 300 test instances in the CSSC 2013 (random track) and only a subset of 250 test instances in the CSSC 2014 (random track).
3cnf
This is a set of 750 random 3SAT instances (satisfiable and unsatisfiable) at the phase transition, with 350 variables and 1493 clauses. These instances were generated by the ToughSAT instance generator Bebel & Yuen (2013) and split into 500 training and 250 test instances uniformly at random. We used this benchmark set in the 2014 CSSC (random track).
unifk5
This set contains only unsatisfiable 5-SAT instances generated uniformly at random with 50 variables and 1 056 clauses (a clause-to-variable ratio right at the phase transition). The instances were generated by the uniform random generator used in the SAT Challenge 2012 and SAT Competition 2013, with satisfiable instances filtered out by running the SLS solver ProbSAT. We used this benchmark set in both the 2013 and 2014 CSSCs (random track).
3sat1k
This is a set of 500 3SAT instances at the phase transition, all satisfiable. Each instance has 1000 variables and 4260 clauses. These instances were previously described by Tompkins et al. Tompkins et al. (2011a). We used their uniform random split into 250 training and test instances in the 2013 CSSC (random track) and in the 2014 CSSC (random satisfiable track).
5sat500
This set contains 500 5SAT instances generated uniformly at random with a clausetovariable ratio of 20. Each instance is satisfiable and has 500 variables and 10000 clauses. This set was first used for tuning the SAT solver Captain Jack and other SLS solvers Tompkins et al. (2011a). We used the original uniform random split into 250 training and test instances in the 2014 CSSC (random satisfiable track).
7sat90
This set contains 500 7SAT instances generated uniformly at random with a clausetovariable ratio of 85. Each instance is satisfiable and has 90 variables and 7650 clauses. This set was also first used for tuning the SAT solver Captain Jack and other SLS solvers Tompkins et al. (2011a). We used the original uniform random split into 250 training and test instances in the 2014 CSSC (random satisfiable track).
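As a quick sanity check of the figures quoted above, the clause-to-variable ratios of these random sets can be recomputed directly:

# Clause-to-variable ratios of the random benchmark sets described above.
sets = {
    "K3 (200 variables)": (853, 200),     # ~4.27, near the 3-SAT phase transition
    "3cnf":               (1493, 350),    # ~4.27
    "unifk5":             (1056, 50),     # ~21.1 (5-SAT phase transition)
    "3sat1k":             (4260, 1000),   # 4.26
    "5sat500":            (10000, 500),   # 20
    "7sat90":             (7650, 90),     # 85
}
for name, (clauses, variables) in sets.items():
    print("%s: %.2f clauses per variable" % (name, clauses / variables))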
A.4 Instance Features Used for These Benchmark Sets
As described in Appendix B.3, SMAC can use instance features to guide its search. Such instance features have predominantly been studied in the work on SATzilla for algorithm selection Nudelman et al. (2004); Xu et al. (2008) and in machine learning models for predicting algorithm runtime Hutter et al. (2014). These features range from simple summary statistics, such as the number of variables or clauses in an instance, to the results of short, runtime-limited probes with local search solvers. In the context of algorithm configuration, we can afford somewhat more expensive features than in algorithm selection, since we only require them for the training instances (not the test instances) and can compute them once, offline. Nevertheless, we kept feature computation costs low so as not to add substantially to the time required for algorithm configuration.
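As an illustration of the cheapest end of this feature spectrum, the following sketch computes a few summary statistics from a DIMACS CNF file; it is a simplified stand-in (assuming one clause per line) rather than the actual SATzilla feature code.

def basic_cnf_features(path):
    # Cheap summary features of a DIMACS CNF instance; probing-based
    # feature groups are deliberately omitted here.
    n_vars = n_clauses = 0
    clause_lens = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith(("c", "%")):
                continue
            if line.startswith("p cnf"):
                _, _, v, c = line.split()
                n_vars, n_clauses = int(v), int(c)
                continue
            lits = [x for x in line.split() if x != "0"]
            if lits:
                clause_lens.append(len(lits))
    return {
        "n_vars": n_vars,
        "n_clauses": n_clauses,
        "clauses_per_var": n_clauses / max(n_vars, 1),
        "mean_clause_len": sum(clause_lens) / max(len(clause_lens), 1),
    }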
For the instance sets where we already had available instance features from previous work, we used those features. In particular, we used the 138 features described by Hutter et al. Hutter et al. (2014) for the datasets SWV, IBM, 3sat1k, 5sat500, and 7sat90. For the set unifk5, we did not compute features since these instances were very easy to solve even with algorithm defaults (note that SMAC
also worked very well without features). For the other datasets, we computed a subset of 119 features, including basic features and feature groups based on survey propagation, clause learning, local search probing, and search space size estimates.
(Footnote 11: The code for computing these features is available at http://www.cs.ubc.ca/labs/beta/Projects/EPMs/.)
Appendix B Configuration Procedures
This appendix describes the configuration procedures we used in more detail. Configurators typically iterate the following steps: (1) execute the target algorithm on one or more instances with one or more configurations for a limited amount of time; (2) measure the resulting performance; and (3) decide which target algorithm runs to execute next. Beyond the key question of which configuration to try next, configurators also need to decide how many runs and which instances to use for each evaluation, and after which time to terminate unsuccessful runs. ParamILS, SMAC, and GGA differ in how they instantiate these components.
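The following skeleton (with assumed callback names) makes this generic loop explicit; the individual configurators described below differ mainly in how propose and evaluate are realized.

import random
import time

def configure(propose, evaluate, instances, budget_seconds, cutoff):
    # Generic configurator skeleton: (1) run the target algorithm with some
    # configuration on some instances, (2) measure the resulting cost, and
    # (3) decide what to try next, until the time budget is exhausted.
    start = time.time()
    incumbent, incumbent_cost = None, float("inf")
    history = []
    while time.time() - start < budget_seconds:
        theta = propose(history)                               # step (3)
        sample = random.sample(instances, min(5, len(instances)))
        cost = evaluate(theta, sample, cutoff)                 # steps (1) and (2)
        history.append((theta, cost))
        if cost < incumbent_cost:
            incumbent, incumbent_cost = theta, cost
    return incumbent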
B.1 ParamILS: Local Search in Configuration Space
ParamILS Hutter et al. (2009), short for iterated local search in parameter configuration space, generalizes the simple (often manually performed) tuning approach of changing one parameter at a time and keeping changes if performance improves. While that simple tuning approach is a local search that terminates in the first local optimum, ParamILS carries out an iterated local search Lourenço et al. (2003) that applies a perturbation step in each local optimum θ in order to escape θ’s basin of attraction and then performs another local search, leading to a new local optimum θ′. Iterated local search then decides whether to continue from the new optimum θ′ or to return to the previous optimum θ, thereby performing a biased random walk over locally optimal solutions. ParamILS only supports categorical parameters, so numerical parameters need to be discretized before ParamILS is run.
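A minimal sketch of the subsidiary local search and the perturbation step, assuming a fully discretized parameter space and a deterministic cost estimate (a BasicILS-like simplification rather than the full FocusedILS machinery described below):

import random

def one_exchange_neighbours(theta, domains):
    # All configurations that differ from theta in exactly one categorical parameter.
    for name, values in domains.items():
        for value in values:
            if value != theta[name]:
                yield {**theta, name: value}

def local_search(theta, domains, cost):
    # First-improvement search: keep moving to a better neighbour until none exists.
    improved = True
    while improved:
        improved = False
        for neighbour in one_exchange_neighbours(theta, domains):
            if cost(neighbour) < cost(theta):
                theta, improved = neighbour, True
                break
    return theta

def perturb(theta, domains, strength=3):
    # Randomly re-set a few parameters to escape the current basin of attraction.
    theta = dict(theta)
    for name in random.sample(list(domains), min(strength, len(domains))):
        theta[name] = random.choice(domains[name])
    return theta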
ParamILS is an algorithm framework with two different instantiations that differ in how they decide how many runs to use for evaluating each configuration. The most straightforward instantiation, BasicILS(N), resembles the approach most frequently used in manual parameter optimization: it evaluates each configuration according to a fixed number of N runs on a fixed set of instances. While this approach is simple and intuitive, it gives rise to the problem of how to set N. Setting N to a large value yields slow evaluations; using a small N yields fast evaluations, but these evaluations are often not representative of the instance set (for example, with N runs we can cover at most N instances, even if we only allow a single run per instance). The second ParamILS instantiation, FocusedILS, solves this problem by allocating most of its runs to strong configurations: it starts with a single run per configuration and incrementally performs more runs for promising configurations. This means that it can often afford a large number of runs for the best configurations while rejecting most poor configurations based on a few runs. There is also a guarantee that configurations that were ‘unlucky’ can be revisited later in the search, allowing for a proof that FocusedILS, if run indefinitely, will eventually identify the configuration with the best performance on the entire training set.
Finally, ParamILS also implements a mechanism for adaptively choosing the time after which to terminate unsuccessful target algorithm runs. Intuitively, when comparing the performance of two configurations θ1 and θ2 on an instance, if we already know that θ1 solves the instance in time t, we do not need to run θ2 for longer than t: we do not need to know precisely how bad θ2 is, as long as we know that θ1 is better. More precisely, each comparison of configurations in ParamILS is performed with respect to an instance set I, and the evaluation of a configuration θ2 can be terminated prematurely once θ2’s aggregated performance on I is provably worse than that of θ1. In practice, this so-called adaptive capping mechanism can speed up ParamILS’s progress by orders of magnitude when the best configuration solves instances much faster than the overall maximal cutoff time Hutter et al. (2009).
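A minimal sketch of such a capped evaluation; run_solver is an assumed helper that runs a configuration on one instance and returns its runtime, capped at the given cutoff.

def capped_total_runtime(run_solver, theta, instances, bound, cutoff):
    # Evaluate configuration theta on `instances`, aborting as soon as its
    # accumulated runtime provably exceeds `bound` (e.g., the incumbent's total
    # runtime on the same instances). Returns None if the bound was exceeded.
    total = 0.0
    for instance in instances:
        remaining = min(cutoff, bound - total)
        if remaining <= 0:
            return None
        total += run_solver(theta, instance, cutoff=remaining)
        if total > bound:
            return None
    return total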
For all experiments in this paper, we used the FocusedILS variant of the most recent publicly available ParamILS release, version 2.3.7 (available at http://www.cs.ubc.ca/labs/beta/Projects/ParamILS/), with default parameters.
B.2 GGA: Gender-based Genetic Algorithm
The Genderbased Genetic Algorithm (GGA) Ansótegui et al. (2009) is a configuration procedure that maintains a population of configurations and proceeds according to an evolutionary metaphor, evolving the population over a number of generations in which pairs of configurations mate and produce offspring. GGA also uses the concept of gender: each configuration is labeled with a gender chosen uniformly at random, and when configurations are selected to mate there are separate selection pressures for each gender: configurations from the first gender are selected based on their empirical performance, whereas configurations from the other gender are selected uniformly at random. The second gender thus serves as a pool of diversity, countering premature convergence to a poor parameter configuration.
Unlike ParamILS’ local search mechanism, GGA’s recombination operator for combining the parameter values of two parent configurations can operate directly on numerical parameter domains, avoiding the need for discretization.
Like ParamILS, GGA implements an adaptive capping mechanism, elegantly combining it with a parallelization mechanism that lets it effectively use multiple processing units. GGA only ever evaluates configurations in the selection step for the first gender, and its strategy is to evaluate several candidates in parallel until the first one succeeds. Here, the number of configurations evaluated in parallel is set to the number of available processing units. (Footnote 13: This coupling of adaptive capping and parallelization is the reason that GGA should not be run on a single core if the objective is to minimize runtime.)
Like the FocusedILS variant of ParamILS, GGA also implements an “intensification” mechanism for increasing the number of runs it performs for each configuration over time. Specifically, it keeps the number of instances per configuration constant within each generation, starting with a small number in the first generation and increasing it linearly until it reaches a larger target number in a given generation, after which it remains constant; the initial number, the target number, and the generation at which the target is reached are parameters of GGA.
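Since the original parameter symbols were lost in this version of the text, the schedule can be sketched with generic names as follows:

def instances_per_generation(generation, n_start, n_end, target_generation):
    # Number of instances each configuration is evaluated on: starts at n_start
    # and grows linearly until it reaches n_end at target_generation, after
    # which it stays constant.
    if generation >= target_generation:
        return n_end
    return n_start + round((n_end - n_start) * generation / target_generation)

# e.g. instances_per_generation(0, 5, 100, 50) == 5 and
#      instances_per_generation(50, 5, 100, 50) == 100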
For all experiments in the CSSC, we used the most recent publicly available version of GGA (available at https://wiwi.unipaderborn.de/dep3/entscheidungsunterstuetzungssystemeundoperationsresearchjunprofdrtierney/research/sourcecode/). GGA’s author Kevin Tierney kindly provided a script to convert the parameter configuration space description for each solver from the competition’s pcs format (see http://aclib.net/cssc2014/pcsformat.pdf) to GGA’s native XML format. This script allowed us to run GGA for all solvers except those with complex conditionals.
Next to the parameters mentioned above, free GGA parameters include the maximal number of generations and the size of the population. The settings of these parameters considerably affect GGA’s behaviour and also determine its overall runtime (when run to completion). If there is an external fixed time budget (as in the CSSC), these parameters can be adjusted such that GGA neither finishes far too early (thus not making effective use of the available configuration budget) nor takes far too long (in which case configuration would be cut off in one of the first generations, where the search essentially still amounts to random sampling). It is thus important to set GGA’s parameters carefully. We set several parameters to values hand-chosen by Kevin Tierney for the CSSC, leaving all other parameters at their default values. (Footnote 16: Due to a miscommunication, we first ran experiments with an incorrect setting for one of these parameters, obtaining somewhat worse results than reported here. After double-checking with Kevin Tierney, we re-ran everything with the correct value, which depended on the number of training instances in each configuration scenario; we only report these latter results here.)
We performed a post hoc analysis, which suggests that these parameter settings may not have been optimal: GGA often finished relatively few generations within its configuration budget. It might thus make sense in the future to reduce the target number of instances considered per configuration. However, this would mean that GGA never considers all instances and may over-tune as a result. How best to set GGA’s parameters therefore remains an open research question.
B.3 SMAC: Sequential Model-based Algorithm Configuration
In contrast to the model-free configurators ParamILS and GGA, SMAC Hutter et al. (2011b) is a sequential model-based algorithm configuration method, which means that it uses predictive models of algorithm performance Hutter et al. (2014) to guide its search for good configurations. More specifically, it uses previously observed ⟨configuration, performance⟩ pairs to learn a random forest of regression trees (see, e.g., Breimann (2001)) that represents a function predicting the performance of arbitrary parameter configurations (including those not yet evaluated), and then uses this function to guide its search. When instance characteristics are available for each problem instance, SMAC uses observed ⟨configuration, instance characteristics, performance⟩ triplets to learn a function $\hat{f}(\theta, \pi)$ that predicts the performance of an arbitrary parameter configuration $\theta$ on an instance $\pi$ with arbitrary characteristics. These so-called empirical performance models Hutter et al. (2014) are then marginalized over the instance characteristics of all training benchmark instances $\Pi$ in order to derive a function that predicts average performance for each parameter configuration:

$\hat{f}(\theta) = \frac{1}{|\Pi|} \sum_{\pi \in \Pi} \hat{f}(\theta, \pi).$
This performance model is used in a sequential optimization process as follows. After an initialization phase, SMAC iterates the following three steps: (1) use the performance measurements observed so far to fit the marginal random forest model $\hat{f}(\theta)$; (2) use this model to select a promising configuration $\theta$ to evaluate next, trading off exploration of new parts of the configuration space against exploitation of parts of the space known to perform well; and (3) run $\theta$ on one or more benchmark instances and compare its performance to that of the best configuration observed so far.
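The following heavily simplified sketch illustrates this loop with a scikit-learn random forest as the surrogate model; it assumes purely numerical configurations and a single aggregated cost per configuration, and it is an illustration of the general idea rather than the actual SMAC implementation.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

def model_based_configuration(sample_config, run_target, n_init=10, n_iter=50, n_candidates=500):
    # sample_config() returns a random configuration as a list of numbers;
    # run_target(config) returns its (penalized) average cost on training instances.
    X = [sample_config() for _ in range(n_init)]
    y = [run_target(x) for x in X]
    for _ in range(n_iter):
        model = RandomForestRegressor(n_estimators=10).fit(np.array(X), np.array(y))
        candidates = np.array([sample_config() for _ in range(n_candidates)])
        per_tree = np.stack([tree.predict(candidates) for tree in model.estimators_])
        mean, std = per_tree.mean(axis=0), per_tree.std(axis=0)
        acquisition = mean - std            # favour low predicted cost and high uncertainty
        chosen = candidates[int(np.argmin(acquisition))].tolist()
        X.append(chosen)
        y.append(run_target(chosen))
    best = int(np.argmin(y))
    return X[best], y[best]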
SMAC employs a similar criterion as FocusedILS to determine how many runs to perform for each configuration, and for finite configuration spaces in the limit it also provably converges to the best configuration on the training set. Unlike ParamILS, SMAC does not require that the parameter space be discretized.
When used to optimize target algorithm runtime, SMAC implements an adaptive capping mechanism similar to the one used in ParamILS. When this capping mechanism prematurely terminates an algorithm run, we only observe a lower bound on the algorithm’s runtime. In order to construct predictive models of algorithm runtime in the presence of such so-called right-censored data points, SMAC applies model-building techniques derived from the survival analysis literature Hutter et al. (2011a).
Appendix C Hors-Concours Solver Riss3gExt
So far, we have limited our analysis to the ten opensource solvers that competed for medals. Recall that one additional solver, Riss3gExt, only participated hors concours. It was not eligible for a medal, because it had been submitted as closed source, being based on a highly experimental code branch of Riss3g that had not been exhaustively tested and was therefore likely to contain bugs.
As discussed in Section 2.1, our experimental protocol included various safeguards against such bugs: we measured runtime and memory externally, compared the reported satisfiability status against the true status where it was known, and checked the returned models whenever an instance was reported satisfiable. Our configuration pipeline detected and penalized crashes automatically, enabling the configuration procedures to continue their search and to find Riss3gExt configurations with no or few crashes. In fact, the best configurations identified by our configuration pipeline performed very well and would have handily won both the industrial and the crafted track of the CSSC 2013 had Riss3gExt been submitted as open source: in the industrial track, it left only 82 problem instances unsolved (compared to 115 for Lingeling), and in the crafted track only 44 (compared to 96 for Clasp3.0.4p8). Most of the instances Riss3gExt failed to solve were due to crashes, but all of these were ‘legal’ crashes that simply did not output a solution (such as segmentation faults). In particular, we never observed Riss3gExt producing incorrect output for a CSSC test instance with known satisfiability status.
However, empirical tests with benchmark instances are of course no substitute for formal correctness guarantees, and even seasoned solvers can have bugs. Indeed, after the competition, Riss3gExt’s developer found a bug in it (in on-the-fly clause improvement Han & Somenzi (2009)) that caused some satisfiable instances to be incorrectly labeled as unsatisfiable. (Footnote 17: Personal communication with Riss3gExt’s developer Norbert Manthey.) This being the case, it was fortunate that Riss3gExt was ineligible for medals.
While empirical testing on benchmark instances, as done in a competition, can never guarantee the correctness of a solver, in future CSSCs we will consider tightening the solubility checks on the benchmark instances used, either by limiting the benchmark sets to instances with known satisfiability status or by requiring (and checking) proofs of unsatisfiability, as in the certified UNSAT track of the SAT competition.
Appendix D Additional Results with PAR1 Score
Figure 41 visualizes runtime speedups obtained for each solver, counting timeouts at the cutoff time as the cutoff time itself (PAR1). Compared to the PAR10 results in Figure 34, speedups with PAR1 are up to a factor of ten smaller for benchmark/solver combinations with many timeouts for the default, but otherwise results are qualitatively similar.