claspfolio 2: Advances in Algorithm Selection for Answer Set Programming

05/07/2014, by Holger Hoos et al.

To appear in Theory and Practice of Logic Programming (TPLP). Building on the award-winning, portfolio-based ASP solver claspfolio, we present claspfolio 2, a modular and open solver architecture that integrates several different portfolio-based algorithm selection approaches and techniques. The claspfolio 2 solver framework supports various feature generators, solver selection approaches, solver portfolios, as well as solver-schedule-based pre-solving techniques. The default configuration of claspfolio 2 relies on a light-weight version of the ASP solver clasp to generate static and dynamic instance features. The flexible open design of claspfolio 2 is a distinguishing factor even beyond ASP. As such, it provides a unique framework for comparing and combining existing portfolio-based algorithm selection approaches and techniques in a single, unified framework. Taking advantage of this, we conducted an extensive experimental study to assess the impact of different feature sets, selection approaches and base solver portfolios. In addition to gaining substantial insights into the utility of the various approaches and techniques, we identified a default configuration of claspfolio 2 that achieves substantial performance gains not only over clasp's default configuration and the earlier version of claspfolio, but also over manually tuned configurations of clasp.


1 Introduction

Answer Set Programming (ASP; [Baral (2003)]) has become a popular approach to declarative problem solving. This is mainly due to its combination of a rich and simple modeling language with high-performance solving technology. ASP decouples problem specifications from solving algorithms; however, modern ASP solvers are known to be sensitive to search configurations – a phenomenon that is common to advanced Boolean constraint processing techniques. To avoid the necessity of manual solver configuration, a substantial amount of research has thus been devoted to automated algorithm configuration and selection approaches, as we detail in Section 2; in ASP, we find works by Gebser et al. (2011), Maratea et al. (2012), Silverthorn et al. (2012), Maratea et al. (2013) and Hoos et al. (2014), and in particular the two portfolio-based systems claspfolio [Gebser et al. (2011)] and ME-ASP [Maratea et al. (2013)]. The idea of such portfolio-based systems is to train classifiers on features of benchmark instances in order to predict the putatively best solver from a given solver portfolio. The portfolio of solvers used in this approach may consist of distinct configurations of the same solver or contain different solvers.
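
To make this idea concrete, the following minimal sketch (in Python, with synthetic data; it is not the code of any of the systems discussed here) trains a classifier that maps instance features to the index of the putatively best solver in a portfolio:

```python
# A minimal, illustrative sketch of portfolio-based algorithm selection:
# train a classifier on instance features to predict the putatively best
# solver from a portfolio. All data below is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_instances, n_features, n_solvers = 200, 10, 4

features = rng.normal(size=(n_instances, n_features))              # instance features
runtimes = rng.exponential(100.0, size=(n_instances, n_solvers))   # measured performance
best_solver = runtimes.argmin(axis=1)                              # label: fastest solver per instance

selector = RandomForestClassifier(n_estimators=100, random_state=0)
selector.fit(features[:150], best_solver[:150])                    # train on a training split

chosen = selector.predict(features[150:])                          # pick a solver for each new instance
```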

In what follows, we describe the new portfolio-based ASP system claspfolio 2, whose earlier version won first, second, and third places at various ASP competitions. An early version of claspfolio was briefly described in a short paper by Gebser et al. (2011) and is conceptually identical to the first stable release. The key design features of this prototype were (i) feature generation using a light-weight version of the ASP solver clasp, the original claspre system, (ii) performance estimation of portfolio solvers via support vector regression, and (iii) a portfolio consisting of different clasp configurations only. In contrast to this rigid original design, the new version 2 of claspfolio provides a modular and open architecture (Section 3) that allows for integrating several different approaches and techniques. This includes (i) different feature generators, (ii) different approaches to solver selection, (iii) variable solver portfolios, as well as (iv) solver-schedule-based pre-solving techniques. The default setting of claspfolio 2 relies on an advanced version of claspre (Section 4), a light-weight version of clasp that produces statistics from which numerous static and dynamic instance features are generated.

The flexible and open design of claspfolio 2 is a distinguishing factor even beyond ASP. As such, it provides a unique framework for comparing and combining existing approaches and techniques in a uniform setting. We take advantage of this and conduct an extensive experimental study comparing the influence of different options regarding (i), (ii), and (iii). In addition to gaining insights into the impact of the various approaches and techniques, we identify distinguished options showing substantial performance gains not only over clasp’s default configuration but also over manually tuned configurations of clasp. claspfolio 2 is considerably faster than the best known static clasp configuration and also faster than claspfolio 1.0, as shown in Table 7 at the end of the paper. To facilitate reproducibility of our results and to promote the use of high-performance ASP solving technology, we have made claspfolio 2 publicly available as open-source software at http://potassco.sourceforge.net/#claspfolio.

2 Related Work

Our work continues a long line of research that can be traced back to John Rice’s seminal work on algorithm selection [Rice (1976)] on one side, and to work by Huberman et al. (1997) on parallel algorithm portfolios on the other side. Especially on SAT problems, automatic algorithm selectors have achieved impressive performance improvements in the last decade. SATzilla [Xu et al. (2007), Xu et al. (2008), Xu et al. (2009), Xu et al. (2011), Xu et al. (2012)] originally predicted algorithm performance by means of ridge regression and nowadays uses a pairwise voting scheme based on random forests;

ISAC [Kadioglu et al. (2010)] clusters instances in the instance feature space and uses a nearest-neighbour approach on cluster centers for algorithm selection; 3S [Kadioglu et al. (2011), Malitsky et al. (2013)] uses k-NN in the feature space and introduces pre-solving schedules computed by Integer Linear Programming and cost-sensitive clustering; SNAPP [Collautti et al. (2013)] predicts algorithm performance based on instance features and chooses an algorithm based on the similarity of the predicted performances. All these systems are specialized to a single approach. They are highly efficient but do not provide a uniform setting, that is, support for different inputs and different performance metrics.

Apart from SAT, there exist several algorithm selectors for other problems. Following the approach of the original claspfolio [Gebser et al. (2011)], Maratea et al. (2012) presented ME-ASP, a multi-engine algorithm selector for ASP with an instance feature generator for syntactic features. Similarly, AQME [Pulina and Tacchella (2007)] is a multi-engine selector for QSAT. CP-Hydra [O’Mahony et al. (2008)] selects a set of CSP solvers based on case-based reasoning and schedules them heuristically. Stone Soup [Seipp et al. (2012), Helmert et al. (2011)] uses greedy hill climbing to find algorithm schedules for planning problems. aspeed [Hoos et al. (2014)] also computes algorithm schedules, but takes advantage of the modeling and solving capabilities of ASP to find timeout-minimal schedules.

Related to our work on a more general level, Hutter et al. (2012) gave an overview of runtime prediction techniques, which are also used in some algorithm selection approaches, e.g., SATzilla’09. A comparison of different machine learning algorithms for algorithm selection was presented by Kotthoff et al. (2012). Based on these results, Kotthoff (2013) introduced LLAMA, Leveraging Learning to Automatically Manage Algorithms, a flexible framework that provides functionality to train and assess the performance of different algorithm selection techniques.

3 Generalized Algorithm Selection Framework

claspfolio 2’s new algorithm framework combines the flexibility of LLAMA with additional state-of-the-art techniques and produces an executable algorithm selection solver. As such, it provides a unique framework for comparing and combining existing approaches and techniques in a uniform setting. Furthermore, the new design of claspfolio 2 follows the idea of programming by optimisation [Hoos (2012)]: “The software-development process is centered on the idea of providing design choices and alternatives in all parts of a project that might benefit from them; design choices that cannot be justified convincingly are not made prematurely.”

A further distinguishing feature of the claspfolio 2 framework is the efficient and deep integration of an algorithm scheduling system, viz. aspeed [Hoos et al. (2014)], into an algorithm selection framework to compute a static pre-solving schedule. claspfolio 2 uses aspeed to determine the running times used within pre-solving schedules. Thereby, it considers the estimated quality of the algorithm selector to determine the running time of the complete pre-solving schedule. This also allows us to integrate the pre-solving strategies of SATzilla and 3S.

[Figure 1 (workflow diagram): Resources (Training Instances, Algorithms) feed Data Collection (Assess Performance, Compute Features via claspre); Training comprises Prediction (Feature Preprocessing, Performance Preprocessing, Train Scoring Model, Performance Estimation) and Scheduling (Pre-Solving Schedule by aspeed); Solving comprises Compute Features, Score Algorithms, Run Pre-Solving Schedule and Run Best Scored Algorithm, with Run Backup Algorithm if feature computation failed. Arrow I feeds the performance estimate into schedule generation; Arrow II feeds instances solved by the pre-solving schedule back into model training.]
Figure 1: General workflow of claspfolio 2. Objects such as algorithms and instances are shown as rectangles, and activities are depicted as rectangles with rounded corners. Activities related to algorithm selection are tinted red, and activities related to algorithm schedules yellow.

The general workflow underlying claspfolio 2 consists of collecting training data, learning a prediction model and training a pre-solving schedule; the portfolio-based ASP solver thus obtained solves a given problem instance with the pre-solving schedule and a solver selected by the prediction model. In what follows, we describe how this workflow is implemented efficiently in claspfolio 2; see Figure 1.

1. Resources.

To train an algorithm selector, training instances and a portfolio of algorithms are required. Algorithm selection is based on the assumption that the given training instances are representative for the instances to be solved using the trained algorithm selection solver. In addition, a portfolio, i.e., a set of algorithms with complementary strengths (e.g., high-performance solvers used in a competition), provides the basis for algorithm selectors to efficiently solve a large variety of instances.

2. Data Collection.

An algorithm selection task is defined by the performance of all algorithms on all training instances (Assess Performance), the instance features of each instance (Compute Features), and the costs of feature computation. claspfolio 2 supports several feature generators, of which claspre is used by default.
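
The following sketch illustrates, under assumed and purely illustrative names (not claspfolio 2’s internal API), the three ingredients that make up such an algorithm selection task:

```python
# Hypothetical sketch of the data that defines an algorithm selection task
# (illustrative names and shapes, not claspfolio 2's internal API).
from dataclasses import dataclass
import numpy as np

@dataclass
class SelectionTask:
    performance: np.ndarray    # shape (n_instances, n_algorithms), e.g. runtimes or PAR10 scores
    features: np.ndarray       # shape (n_instances, n_features)
    feature_costs: np.ndarray  # shape (n_instances,), time spent on feature computation
    algorithms: list           # algorithm (or configuration) names
    cutoff: float              # runtime cutoff in CPU seconds

rng = np.random.default_rng(1)
task = SelectionTask(
    performance=rng.exponential(100.0, size=(50, 4)),
    features=rng.normal(size=(50, 38)),            # e.g. the 38 static claspre features
    feature_costs=rng.uniform(0.1, 2.0, size=50),
    algorithms=["clasp-cfg-1", "clasp-cfg-2", "clasp-cfg-3", "clasp-cfg-4"],
    cutoff=600.0,
)
```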

3. Training.

The training phase of claspfolio 2 makes use of two distinct components: Prediction and Scheduling. Both components can also be used separately in claspfolio 2.

The Prediction component of claspfolio 2 involves feature pre-processing, e.g., feature normalization and feature selection, and performance pre-processing, e.g., performance score transformation and algorithm filtering [Footnote 1: Algorithm filtering removes components of the portfolio according to some strategy; e.g., algorithms with only a marginal contribution to virtual best solver performance can be removed. In [Xu et al. (2008)], this is called solver subset selection and in [Maratea et al. (2012)], solver selection.]. Based on the preprocessed data, a scoring model is learned, which maps the feature vector of a given problem instance to scores for all algorithms such that algorithms expected to perform well on the given instance are assigned better scores.
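
As a hedged illustration of this pipeline (a simplified stand-in rather than claspfolio 2’s actual implementation), the following sketch combines z-score feature normalization with one runtime-regression model per algorithm and derives scores from the predicted runtimes:

```python
# Illustrative sketch of one possible scoring model (a simplification, not the
# exact claspfolio 2 implementation): z-score feature normalization followed by
# one runtime-regression model per algorithm; higher score = better.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestRegressor

def train_scoring_model(features, performance):
    scaler = StandardScaler().fit(features)            # feature pre-processing (z-score)
    X = scaler.transform(features)
    models = []
    for a in range(performance.shape[1]):              # one regression model per algorithm
        y = np.log1p(performance[:, a])                # performance pre-processing (log transform)
        models.append(RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y))
    return scaler, models

def score_algorithms(scaler, models, feature_vector):
    x = scaler.transform(np.asarray(feature_vector).reshape(1, -1))
    # negate the predicted (log) runtime so that better algorithms get higher scores
    return np.array([-m.predict(x)[0] for m in models])
```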

The Scheduling component of claspfolio 2 computes a timeout-minimal pre-solving schedule using aspeed [Hoos et al. (2014)], where each algorithm gets a (potentially zero) time slice of the overall runtime budget available for solving a given problem instance. If the prediction component is not used, the schedule consists only of the given algorithms. If the prediction component is used, cross validation is used to obtain an unbiased estimate of the performance (Performance Estimation) of the prediction component (Arrow I). The resulting performance estimate of the prediction component is used as an additional simulated algorithm in the schedule generation process. All components of the schedule except the simulated one form the pre-solving schedule used in claspfolio 2. If the prediction performs well, the pre-solving schedule may be empty, because the pre-solving schedule cannot perform better than a perfect predictor, i.e., the selection of the best solver. In contrast, if prediction performs very poorly (e.g., as a result of non-informative instance features), the simulated algorithm may be assigned a time slice of zero seconds, and the prediction component is de facto ignored in the solving step.
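
The following simplified sketch conveys the underlying optimization problem; unlike aspeed, which solves it with an ASP encoding, it merely brute-forces coarse time slices for a tiny portfolio. The cross-validated estimate of the prediction component would enter as one additional, simulated column of the runtime matrix.

```python
# Simplified, illustrative stand-in for the schedule computation (the real
# system delegates this to aspeed's ASP encoding): brute-force over coarse time
# slices for a tiny portfolio, minimizing the number of unsolved instances.
import itertools
import numpy as np

def timeout_minimal_schedule(runtimes, cutoff, step):
    """runtimes: (n_instances, n_algorithms) matrix; returns (timeouts, slices)."""
    n_algos = runtimes.shape[1]
    candidates = range(0, cutoff + 1, step)
    best = (runtimes.shape[0] + 1, None)
    for alloc in itertools.product(candidates, repeat=n_algos):
        if sum(alloc) > cutoff:                        # respect the overall budget
            continue
        solved = (runtimes <= np.array(alloc)).any(axis=1)
        timeouts = int((~solved).sum())
        if timeouts < best[0]:
            best = (timeouts, alloc)
    return best

# The simulated algorithm (cross-validated selector performance) would be added
# here as one extra column of `runtimes` before computing the schedule.
runtimes = np.random.default_rng(2).exponential(300.0, size=(40, 3))
print(timeout_minimal_schedule(runtimes, cutoff=600, step=200))
```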

Like SATzilla [Xu et al. (2008)], claspfolio 2 allows ignoring instances solved by the pre-solving schedule (Arrow II) when learning the scoring model, such that the resulting model focuses on the harder instances that are not solved by the pre-solvers and are thus actually subject to algorithm selection during the solving phase.

4. Solving.

Solving a (new) instance starts with the computation of its features. If feature computation fails, e.g., because it requires too much time, a backup solver is used to solve the instance. Otherwise, the scoring model is used to score each algorithm of the portfolio based on the computed feature vector. If the algorithm with the best score is part of the pre-solving schedule, it is removed from the schedule, because running the same algorithm twice does not increase the solving probability (when using deterministic algorithms like clasp). Next, the pre-solving schedule is executed. [Footnote 2: Unlike this, SATzilla runs the pre-solving schedule first and then computes the instance features, because feature computation can be costly in SAT and the pre-solving schedule may solve the instance without incurring this cost. However, this does not permit removing the selected solver from the pre-solving schedule.] If the instance has not been solved by the end of the pre-solving schedule, the algorithm with the highest score is run for the remainder of the overall time budget.
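
A hypothetical sketch of this solving-phase logic, with all solver- and feature-related operations passed in as illustrative callbacks, might look as follows:

```python
# Hypothetical sketch of the solving phase (function names and signatures are
# illustrative): compute features, fall back to a backup solver on failure,
# otherwise run the pre-solving schedule (minus the selected solver) and then
# the best-scored algorithm for the remaining budget.
def solve(instance, compute_features, score_fn, algorithms, schedule, backup,
          run, cutoff):
    try:
        feats = compute_features(instance)
    except Exception:
        result, _ = run(backup, instance, cutoff)  # feature computation failed
        return result
    scores = score_fn(feats)
    selected = algorithms[int(scores.argmax())]    # highest score = predicted best
    spent = 0.0
    for algo, slice_time in schedule:
        if algo == selected:
            continue                               # do not run the selected solver twice
        result, used = run(algo, instance, slice_time)
        spent += used
        if result is not None:                     # a pre-solver solved the instance
            return result
    result, _ = run(selected, instance, cutoff - spent)
    return result
```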

4 claspre: Instance Features for ASP

Table 1: 38 static features computed by claspre, given as numbers or fractions of the listed quantities (SCCs = Strongly Connected Components, BADG = Body-Atom-Dependency Graph): tight; problem variables; free problem variables; assigned problem variables; constraints; constraints/variables; created bodies; program atoms; SCCs; nodes in positive BADG; rules; normal, cardinality, choice and weight rules; negative-body and positive-body rules; unary, binary and ternary rules; integrity constraints; equivalences (atom-atom, body-body and other); binary, ternary and other constraints.

Table 2: 25 dynamic features computed by claspre at each restart, given as numbers, fractions or running averages of the listed quantities (LBD = Literal Blocking Distance): choices; conflicts/choices; conflict level; LBD level; learnt conflict nogoods; learnt loop nogoods; literals in conflict nogoods; literals in loop nogoods; removed nogoods; learnt binary, ternary and other nogoods; longest backjump (bj); levels skipped while backjumping; running average conflict level; running average LBD level.

The entire concept of algorithm selection is based on instance features which characterize benchmark instances and allow for predicting the putatively best solver from a given portfolio. These instance features should be cheap-to-compute to save as much time as possible for the actual solving process, but should also provide sufficient information to distinguish between (classes of) instances for which different solvers or solver configurations work best.

For feature generation, claspfolio 2 uses claspre in its default configuration. claspre is a light-weight version of clasp [Gebser et al. (2011)] that extracts instance features of ground ASP instances in smodels format [Syrjänen], using clasp’s internal statistics. The features determined by claspre can be grouped into static and dynamic ones. The former are listed in Table 1 and include properties such as the number of constraints. Beyond that, claspre performs a limited amount of search to collect dynamic information about solving characteristics. These dynamic features (Table 2), such as the average number of conflict levels skipped while back-jumping, are extracted after each restart of the search process, where restarts are performed after a fixed number of conflicts.

The number of restarts performed is a parameter of claspre. More restarts lead to longer feature vectors that may contain more information. The number of restarts and the number of conflicts between restarts determine the time used by claspre for feature computation. We note that the pre-processing and search performed by claspre can actually solve a given ASP instance. The probability of this happening increases with the length of the search performed within claspre; however, at the same time, long runs of claspre reduce the time available for running solvers from the portfolio.
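
Purely as an illustration of how the feature vector grows with the number of restarts (the actual claspre output format may differ), consider:

```python
# Illustrative only (not claspre's actual output format): each restart adds one
# block of dynamic features, so more restarts yield a longer feature vector.
import numpy as np

N_STATIC, N_DYNAMIC = 38, 25   # feature counts from Tables 1 and 2

def assemble_features(static_feats, dynamic_blocks):
    """static_feats: length-38 array; dynamic_blocks: one length-25 array per restart."""
    return np.concatenate([static_feats] + list(dynamic_blocks))

vec = assemble_features(np.zeros(N_STATIC), [np.zeros(N_DYNAMIC)] * 2)
print(vec.shape)   # (88,) with 2 restarts: 38 + 2 * 25
```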

5 Empirical Performance Analysis

As previously described, claspfolio 2’s modular and open architecture (Section 3) allows for integrating several different approaches and techniques, including (i) different feature generators, (ii) different approaches to solver selection, as well as (iii) variable solver portfolios. Taking advantage of this flexibility, we conducted an extensive experimental study to assess the efficacy of the various choices on large and representative sets of ASP instances.

Training data of claspfolio 2 is stored in the algorithm selection data format developed by the COSEAL Group [Footnote 3: https://code.google.com/p/coseal], an international group of experts in the field of algorithm selection and configuration. Detailed experimental results and the source code of claspfolio 2 are available at http://www.cs.uni-potsdam.de/claspfolio. Our empirical analysis makes use of commonly used techniques from statistics and machine learning (see, e.g., [Bishop (2007)]).

5.1 Setup

All our experiments were performed on a computer cluster with dual Intel Xeon quad-core processors (8192 KB cache) per node, running Scientific Linux (2.6.18-308.4.1.el5). Each algorithm run was limited by a fixed runtime cutoff (in CPU seconds) and a fixed memory cutoff. Furthermore, we used permutation tests to assess the statistical significance of observed performance differences with respect to our performance metrics: the number of timeouts and the PAR10 and PAR1 scores. [Footnote 4: PARk denotes the penalized average runtime, in which timed-out runs are counted as k times the runtime cutoff.]
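
For concreteness, the following sketch shows a generic PARk computation and a simple paired permutation test on per-instance results; the exact test statistic and permutation count used in the paper are not reproduced here.

```python
# Worked sketch of the evaluation machinery described above (simplified).
import numpy as np

def par_k(runtimes, cutoff, k=10):
    """Penalized average runtime: runs hitting the cutoff count as k * cutoff."""
    r = np.asarray(runtimes, dtype=float)
    return np.where(r >= cutoff, k * cutoff, r).mean()

def permutation_test(a, b, n_perm=10000, seed=0):
    """Two-sided paired permutation (sign-flip) test on the mean difference of
    two per-instance performance vectors a and b; returns a p-value."""
    rng = np.random.default_rng(seed)
    diffs = np.asarray(a, float) - np.asarray(b, float)
    observed = abs(diffs.mean())
    count = 0
    for _ in range(n_perm):
        signs = rng.choice([-1.0, 1.0], size=diffs.size)
        if abs((signs * diffs).mean()) >= observed:
            count += 1
    return count / n_perm

cutoff = 600.0
runs_a = [12.0, cutoff, 300.0, 45.0]
runs_b = [20.0, 550.0, 280.0, 60.0]
print(par_k(runs_a, cutoff), par_k(runs_b, cutoff), permutation_test(runs_a, runs_b))
```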

5.2 Instance Sets

We used all instances submitted to the NP category of the 2013 ASP Competition that could be grounded with gringo within our time and memory limits; we call the resulting instance set Comp-13-Set. As an even more heterogeneous instance set, we used the ASP Potassco-Set introduced by Hoos et al. (2013); it spans a larger number of problem classes and includes instances from several ASP competitions (the SLparse track, the competition run with the encodings of the Potassco group, and the decision NP-problems from the system track), as well as several instances from the ASP benchmark collection platform asparagus. [Footnote 5: http://asparagus.cs.uni-potsdam.de] All instances were grounded with gringo, and the grounding time was not counted towards solving the instances.

Each instance set was randomly split into equally sized, disjoint training and test sets; only the training sets were used in the process of building algorithm portfolios. The resulting claspfolio 2 solvers were evaluated on the hold-out test sets. We also used the training instances to determine the best claspfolio 2 configuration (Subsection 5.3). To assess the performance of claspfolio 2 (Subsection 5.6), we used cross-validation on the test set. Notice that we cannot evaluate claspfolio 2 on the training set to obtain an unbiased learned model, because the algorithm portfolios yield optimistic performance estimates on the training set on which they were built.

5.3 Building Algorithm Portfolios

In addition to a set of training instances, a portfolio (i.e., a set) of algorithms is required to construct a portfolio solver. claspfolio 2 can handle portfolios containing different solvers as well as different configurations of a given solver, all of which are viewed as individual ASP solvers. We investigated four portfolios of ASP solvers.

Expert-portfolio and SOTA-portfolio are portfolios manually constructed by experts. In contrast, Hydra and ISAC are automatic methods for constructing portfolios using algorithm configurators, e.g., ParamILS [Hutter et al. (2007)], GGA [Ansótegui et al. (2009)] or SMAC [Hutter et al. (2011)]. They generate a portfolio of configurations of a given solver by determining configurations that complement each other well on a given set of training instances, with the goal of optimizing the performance of the portfolio under the idealized assumption of perfect selection; this performance is also called the virtual best solver (vbs) or oracle performance of the portfolio.
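
In other words, the VBS performance is simply the average of the per-instance best performances, as in the following minimal sketch:

```python
# Minimal sketch: the virtual best solver (oracle) performance of a portfolio is
# the performance obtained by always picking the per-instance best algorithm.
import numpy as np

def vbs_performance(performance):
    """performance: (n_instances, n_algorithms) matrix of, e.g., PAR10 scores."""
    return performance.min(axis=1).mean()

perf = np.array([[10.0, 6000.0],
                 [6000.0, 42.0],
                 [7.0, 9.0]])
print(vbs_performance(perf))   # mean of the per-instance minima: (10 + 42 + 7) / 3
```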

An implementation of Hydra that can be applied to solvers for arbitrary problems has not yet been published by Xu et al. (2010); therefore, we have implemented our own version of Hydra (in consultation with the authors), which we refer to as Hydra-like-portfolio in the following. Also, since the only published version of ISAC does not include algorithm configuration, we reimplemented the part of ISAC responsible for portfolio generation, dubbed ISAC-like-portfolio. In contrast to the original ISAC, which performs g-means clustering, ISAC-like-portfolio uses k-means clustering, where the number of clusters is determined by using cross-validation to optimize the scoring function of the k-means procedure (following Hoos et al. (2013)).
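
The following sketch outlines this ISAC-like construction step (the range of cluster numbers and the use of scikit-learn’s k-means score are assumptions made for illustration):

```python
# Sketch of the ISAC-like construction step described above: cluster the
# training instances in feature space with k-means, choosing k by
# cross-validated k-means score; each cluster then defines one
# algorithm-configuration task (e.g., one SMAC run).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import cross_val_score

def choose_k(features, k_range=range(2, 11), cv=3, seed=0):
    best_k, best_score = None, -np.inf
    for k in k_range:
        km = KMeans(n_clusters=k, n_init=10, random_state=seed)
        score = cross_val_score(km, features, cv=cv).mean()   # higher = better fit
        if score > best_score:
            best_k, best_score = k, score
    return best_k

features = np.random.default_rng(0).normal(size=(120, 38))
k = choose_k(features)
clusters = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(features)
print(k, np.bincount(clusters))   # cluster sizes; one configuration task per cluster
```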

Using this approach, ISAC-like-portfolio determined a number of clusters for each of Comp-13-Set and Potassco-Set, inducing one configuration task per cluster. To obtain a fair comparison, we allocated the same time budget to Hydra-like-portfolio and allowed it a corresponding number of iterations (each consisting of one configuration task). The configuration process performed by SMAC [Hutter et al. (2011)] on each cluster and in each Hydra iteration, respectively, was allocated a fixed budget of CPU time (a multiple of the target algorithm cutoff time) and several independent repetitions, from which the result with the best PAR10 score on the given training set was selected. SMAC optimized PAR10.

Table 3: Virtual best solver (VBS) performance of the portfolio building approaches (Expert-portfolio, SOTA-portfolio, Hydra-like-portfolio and ISAC-like-portfolio) on the test sets of Comp-13-Set and Potassco-Set, reported as the number of timeouts (#TOs) and the PAR10 and PAR1 scores. Results shown in boldface were statistically significantly better than all others within the respective column (according to a permutation test).

Table 3 shows the performance of the virtual best solvers (i.e., the performance of a perfect algorithm selector) for the different portfolios considered. Interestingly, the results differ qualitatively between the two benchmark sets. While SOTA-portfolio performs better than Expert-portfolio on Comp-13-Set, Expert-portfolio is better on Potassco-Set. Furthermore, while for both sets the automatic generation methods found better performing portfolios than the manually constructed ones, ISAC-like-portfolio produced a better portfolio than Hydra-like-portfolio on Comp-13-Set, and the opposite holds for Potassco-Set. Finally, contrary to a conjecture by Maratea et al. (2012), a set of configurations of the same, highly parameterized solver (Expert-portfolio, ISAC-like-portfolio and Hydra-like-portfolio) generally did not yield worse performance than a mixed portfolio, such as SOTA-portfolio.

While we gave Hydra the same time budget as ISAC to find portfolios, the components added by Hydra-like-portfolio in its final three iterations decreased the number of timeouts only by one on our training and test sets. Following Xu et al. (2010), Hydra would be terminated when the performance on the training set does not improve after an iteration. Hence, Hydra-like-portfolio not only produced a better portfolio on Potassco-Set than ISAC-like-portfolio, but also did so using less configuration time.

5.4 Feature Sets

In addition to the claspre feature set presented in Section 4, we considered a set of ASP features introduced by Maratea et al. (2013) that focusses on very efficiently computable syntactic features, such as the number of variables. The published version of their feature generator supports only the ASPCore language [Calimeri et al. (2011)] of an earlier ASP Competition, whereas our Comp-13-Set consists of instances of the 2013 ASP Competition, written in a later ASPCore version that introduced further language constructs. Therefore, we re-implemented this feature generator, with the help of Maratea et al., to be compatible with the newer ASPCore version. [Footnote 7: The new feature generator is implemented in Python, whereas the original generator was implemented in C++; this induced an overhead by a constant factor in terms of average running time on ASPCore instances from the earlier ASP Competition.]

One of the most established and investigated feature generators for SAT is provided as part of SATzilla [Xu et al. (2008)]. ASP instances can be translated to SAT with techniques by Janhunen (2006), using his tool lpsat. We used a combination of lpsat [Footnote 8: lpsat was used as submitted to the ASP Competition.] with the feature generator of SATzilla to generate a set of instance features for ASP instances; this is the first time these features have been studied in the context of ASP. Since the full set of SATzilla features is very expensive to compute and our SAT encodings can get quite large, we decided to use only the efficiently computable base features.

Table 4: Time required for computing the features of a single ASP instance in CPU seconds, with a fixed runtime cutoff, for claspre(s), claspre(s+d), ME-ASP and lpsat on Comp-13-Set and Potassco-Set. We report the minimum (Min), the median and two further quantiles of the distribution over the respective instance set, as well as the percentage of timeouts (%TOs).

Table 4 shows the runtime statistics for claspre with static features only, claspre(s), for claspre with static and dynamic features, claspre(s+d), using a fixed number of restarts and of conflicts between restarts, for the (re-implemented) feature generator of ME-ASP, and for the combination of lpsat and SATzilla’s feature generator, on our full benchmark sets (training + test instances). claspre(s) is only slightly faster than claspre with additional dynamic features, since the latter’s search was limited to a small number of conflicts. Solving typical ASP instances often requires searches well beyond this limit; nevertheless, claspre(s) solved some instances through pre-processing alone, and claspre(s+d) solved additional instances on both Comp-13-Set and Potassco-Set. The feature generation of ME-ASP was faster, but (unsurprisingly, considering the nature of these features) did not solve any instance. Because of the substantial overhead of generating translations from ASP to SAT, the combination of lpsat and SATzilla’s feature generator turned out to be substantially slower than the other approaches and failed to compute the feature vectors of some instances on Comp-13-Set and Potassco-Set within the given cutoff time.

5.5 Algorithm Selection Approaches

As previously mentioned, claspfolio 2 was explicitly designed to easily integrate several state-of-the-art algorithm selection approaches. This not only permits us to optimize the performance of claspfolio 2, but also to compare the considered algorithm selection approaches within a controlled environment. Although our re-implementations may not reproduce the original implementations in all details (something that would be difficult to achieve, considering that sources are not available for some published approaches), they provide the only freely available, open-source implementations of some of these systems and thus provide a basis for further analysis and improvements. [Footnote 9: As with Hydra and ISAC above, published and trainable, general-purpose implementations of 3S and ME-ASP are not available.]

Table 5: Algorithm selection mechanisms supported by claspfolio 2, together with the feature normalization, the maximal number of pre-solvers and the maximal pre-solving time [sec] used by each approach: aspeed (static schedule; no feature normalization); claspfolio-1.0-like (support vector regression; z-score normalization); ME-ASP-like (nearest neighbor; no normalization); ISAC-like (k-means clustering; linear normalization); 3S-like (k-NN; linear normalization); SATzilla’09-like (ridge regression; z-score normalization); SATzilla’11-like (voting with random forests; z-score normalization).

Table 5 gives an overview of the approaches available within claspfolio 2. These differ with respect to (i) the algorithm selection method, (ii) the feature normalization technique, (iii) the maximal number of pre-solvers used and (iv) the maximal running time allocated to the pre-solving schedule. In all cases, the pre-solving schedules were computed by aspeed, and the hyperparameters of the machine learning techniques were set using grid search on training data.
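
As an example of one of these mechanisms, the following simplified reconstruction of the SATzilla’11-like approach (not the original code) trains one random forest per pair of algorithms and selects the algorithm with the most pairwise votes:

```python
# Illustrative reconstruction of the SATzilla'11-like mechanism from Table 5
# (simplified): one random-forest classifier per pair of algorithms votes for
# the better of the two; the most-voted algorithm is selected.
import itertools
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_pairwise_voting(features, performance, seed=0):
    n_algos = performance.shape[1]
    models = {}
    for i, j in itertools.combinations(range(n_algos), 2):
        labels = (performance[:, j] < performance[:, i]).astype(int)  # 1 if j beats i
        clf = RandomForestClassifier(n_estimators=100, random_state=seed)
        models[(i, j)] = clf.fit(features, labels)
    return models, n_algos

def select(models, n_algos, feature_vector):
    votes = np.zeros(n_algos)
    x = np.asarray(feature_vector).reshape(1, -1)
    for (i, j), clf in models.items():
        winner = j if clf.predict(x)[0] == 1 else i
        votes[winner] += 1
    return int(votes.argmax())   # index of the selected algorithm

rng = np.random.default_rng(0)
F = rng.normal(size=(100, 38))
P = rng.exponential(100.0, size=(100, 4))
models, n = train_pairwise_voting(F, P)
print(select(models, n, F[0]))
```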

5.6 Results

Table 6: Statistics (average, standard deviation and minimum) of PAR10 performance on Comp-13-Set and Potassco-Set over all combinations except for the one kept fixed to assess its impact. The three parts of the table report the impact of the feature set (claspre(s), claspre(s+d), ME-ASP, lpsat), the impact of the portfolio (Expert-portfolio, SOTA-portfolio, Hydra-like-portfolio, ISAC-like-portfolio) and the impact of the selection mechanism (aspeed, claspfolio-1.0-like, ME-ASP-like, ISAC-like, 3S-like, SATzilla’09-like, SATzilla’11-like).

We assessed the performance of claspfolio 2 on all combinations of our feature sets, portfolios and algorithm selection approaches, using cross-validation on both test sets. To study the effect of each design choice, we collected statistics over the distribution of results obtained by keeping one choice fixed and varying all remaining components; the results are shown in Table 6. The top part of the table shows the results obtained when using each of the feature sets, in terms of average PAR10 performance, standard deviation of PAR10 performance and best PAR10 performance over all combinations of portfolios and selection approaches. The subsequent parts of Table 6 show analogous results for the different portfolios and selection approaches.

On average, the best feature set on Comp-13-Set was claspre(s) (the static claspre features), followed by claspre(s+d) (the static + dynamic claspre features) and the feature sets of ME-ASP and lpsat. However, the best claspfolio 2 configuration on Comp-13-Set used the ME-ASP features. The fact that claspre(s+d) gave worse results than claspre(s), although the former is a superset of the latter, indicates that not all features were useful and that feature selection should be used to identify a subset of features with the highest information content. On Potassco-Set, the best average performance and the best performance of any claspfolio 2 configuration were consistently obtained by using claspre(s+d). We believe that the additional dynamic features are necessary to distinguish between the larger number of different problem classes in Potassco-Set.

The results on the impact of the portfolio of algorithms used as a basis for algorithm selection confirm our assumption that the best potential performance, i.e., best VBS performance, is a good indicator of the actual performance achieved by a high-performance selection approach. On Comp-13-Set, ISAC-like-portfolio achieved the best performance, while on Potassco-Set, Hydra-like-portfolio yielded even better results. Furthermore, the portfolios obtained using the two automatic portfolio generation methods, ISAC and Hydra, yielded better results than the manually created ones, Expert-portfolio and SOTA-portfolio.

As shown in the lower part of Table 6, the SATzilla’11-like approach performed best on both benchmark sets, followed closely by 3S-like and ISAC-like. SATzilla’09-like and claspfolio-1.0-like showed similar, but weaker performance results, followed by the ME-ASP-like approach and the pure algorithm schedules of aspeed.

Overall, the best combination on both the training and test sets of Comp-13-Set was the ME-ASP features with ISAC-like-portfolio and the SATzilla’11-like selection approach; for Potassco-Set, it was the claspre(s+d) features with Hydra-like-portfolio and the SATzilla’11-like selection approach.

Table 7: Comparison of two clasp configurations (the default configuration and the configuration manually tuned for the ASP Competition), the Single Best solver in all portfolios (cf. Subsection 5.3), claspfolio 1.0, claspfolio 2 with claspre(s+d) features, Hydra-like-portfolio and the SATzilla’11-like approach, and the best known VBS, in terms of PAR10 and PAR1 scores on Comp-13-Set and Potassco-Set. The significantly best performances (except VBS) are shown in boldface (according to a permutation test).

6 Conclusions and Future Work

Our new, modular claspfolio 2 ASP solver architecture comprises a diverse set of portfolio-based algorithm selection techniques, including feature extractors, manually and automatically constructed base algorithm portfolios, algorithm selection mechanisms and solver-schedule-based pre-solving techniques. As seen from the high-level overview of empirical performance results in Table 7, on standard, diverse and heterogeneous sets of ASP benchmarks, claspfolio 2 is substantially more robust than the default configuration of clasp, the manually tuned configuration of clasp from the ASP Competition, and all other individual solvers we assessed; in fact, its performance in terms of PAR10 score lies only about 20% and 15% above that of the best known oracle on the Potassco-Set and Comp-13-Set benchmark sets, respectively. The re-implementation of claspfolio 1.0 within claspfolio 2, which showed performance similar to the original implementation in preliminary experiments, also has a considerably higher (i.e., worse) PAR10 score than claspfolio 2. While the best configuration of claspfolio 2 varies between these two benchmark sets, the performance differences are relatively minor: on Comp-13-Set, the best configuration of claspfolio 2 for Potassco-Set – which we also chose as the default configuration of claspfolio 2 – achieves a PAR10 score only about 2.1% higher than that of the best configuration for Comp-13-Set, while on Potassco-Set, the PAR10 score of the best configuration for Comp-13-Set is about 9.6% higher than that of the default configuration. This default configuration uses the claspre(s+d) feature set in combination with the Hydra-like-portfolio base algorithm portfolio construction approach and the SATzilla’11-like algorithm selection mechanism, but other feature sets, base algorithm portfolios and algorithm selection mechanisms also achieve very strong performance.

Acknowledgments

T. Schaub and M. Lindauer were supported by the DFG projects under SCHA 550/8-3. H. Hoos was supported by an NSERC Discovery Grant.

References

  • Alviano et al. (2013) Alviano, M., Dodaro, C., Faber, W., Leone, N., and Ricca, F. 2013. WASP: A native ASP solver based on constraint learning. In Proceedings of the Twelfth International Conference on Logic Programming and Nonmonotonic Reasoning (LPNMR’13), P. Cabalar and T. Son, Eds. Lecture Notes in Artificial Intelligence, vol. 8148. Springer-Verlag, 54–66.

  • Ansótegui et al. (2009) Ansótegui, C., Sellmann, M., and Tierney, K. 2009. A gender-based genetic algorithm for the automatic configuration of algorithms. In Proceedings of the Fifteenth International Conference on Principles and Practice of Constraint Programming (CP’09), I. Gent, Ed. Lecture Notes in Computer Science, vol. 5732. Springer-Verlag, 142–157.
  • Baral (2003) Baral, C. 2003. Knowledge Representation, Reasoning and Declarative Problem Solving. Cambridge University Press.
  • Bessiere (2007) Bessiere, C., Ed. 2007. Proceedings of the Thirteenth International Conference on Principles and Practice of Constraint Programming (CP’07). Lecture Notes in Computer Science, vol. 4741. Springer-Verlag.
  • Bishop (2007) Bishop, C. 2007. Pattern Recognition and Machine Learning (Information Science and Statistics), 1st ed. 2006. Corr. 2nd printing ed. Springer.
  • Calimeri et al. (2011) Calimeri, F., Ianni, G., and Ricca, F. 2011. Third ASP competition - file and language formats. Tech. rep., Università della Calabria.
  • Collautti et al. (2013) Collautti, M., Malitsky, Y., Mehta, D., and O’Sullivan, B. 2013. SNAPP: Solver-based nearest neighbor for algorithm portfolios. In Proceedings of the Twenty-Fourth European Conference on Machine Learning (ECML’13), F. Zelezny, Ed. Lecture Notes in Computer Science. Springer-Verlag.
  • Dovier and Santos Costa (2012) Dovier, A. and Santos Costa, V., Eds. 2012. Technical Communications of the Twenty-eighth International Conference on Logic Programming (ICLP’12). Vol. 17. Leibniz International Proceedings in Informatics (LIPIcs).
  • Gebser et al. (2011) Gebser, M., Kaminski, R., Kaufmann, B., Ostrowski, M., Schaub, T., and Schneider, M. 2011. Potassco: The Potsdam answer set solving collection. AI Communications 24, 2, 107–124.
  • Gebser et al. (2011) Gebser, M., Kaminski, R., Kaufmann, B., Schaub, T., Schneider, M., and Ziller, S. 2011. A portfolio solver for answer set programming: Preliminary report. In Proceedings of the Eleventh International Conference on Logic Programming and Nonmonotonic Reasoning (LPNMR’11), J. Delgrande and W. Faber, Eds. Lecture Notes in Artificial Intelligence, vol. 6645. Springer-Verlag, 352–357.
  • Giunchiglia et al. (2006) Giunchiglia, E., Lierler, Y., and Maratea, M. 2006. Answer set programming based on propositional satisfiability. Journal of Automated Reasoning 36, 4, 345–377.
  • Helmert et al. (2011) Helmert, M., Röger, G., and Karpas, E. 2011. Fast downward stone soup: A baseline for building planner portfolios. In ICAPS 2011 Workshop on Planning and Learning. 28–35.
  • Hoos (2012) Hoos, H. 2012. Programming by optimisation. Communications of the ACM 55, 70–80.
  • Hoos et al. (2014) Hoos, H., Kaminski, R., Lindauer, M., and Schaub, T. 2014. aspeed: Solver scheduling via answer set programming. Theory and Practice of Logic Programming First View, 1–26. Available at http://arxiv.org/abs/1401.1024.
  • Hoos et al. (2013) Hoos, H., Kaufmann, B., Schaub, T., and Schneider, M. 2013. Robust benchmark set selection for boolean constraint solvers. See Pardalos and Nicosia (2013), 138–152.
  • Huberman et al. (1997) Huberman, B., Lukose, R., and Hogg, T. 1997. An economic approach to hard computational problems. Science 275, 51–54.
  • Hutter et al. (2011) Hutter, F., Hoos, H., and Leyton-Brown, K. 2011. Sequential model-based optimization for general algorithm configuration. In Proceedings of the Fifth International Conference on Learning and Intelligent Optimization (LION’11). Lecture Notes in Computer Science, vol. 6683. Springer-Verlag, 507–523.
  • Hutter et al. (2007) Hutter, F., Hoos, H., and Stützle, T. 2007. Automatic algorithm configuration based on local search. In Proceedings of the Twenty-second National Conference on Artificial Intelligence (AAAI’07). AAAI Press, 1152–1157.
  • Hutter et al. (2012) Hutter, F., Xu, L., Hoos, H. H., and Leyton-Brown, K. 2012. Algorithm runtime prediction: The state of the art. Artificial Intelligence.
  • Janhunen (2006) Janhunen, T. 2006. Some (in)translatability results for normal logic programs and propositional theories. Journal of Applied Non-Classical Logics 16, 1-2, 35–86.
  • Kadioglu et al. (2011) Kadioglu, S., Malitsky, Y., Sabharwal, A., Samulowitz, H., and Sellmann, M. 2011. Algorithm selection and scheduling. In Proceedings of the Seventeenth International Conference on Principles and Practice of Constraint Programming (CP’11), J. Lee, Ed. Lecture Notes in Computer Science, vol. 6876. Springer-Verlag, 454–469.
  • Kadioglu et al. (2010) Kadioglu, S., Malitsky, Y., Sellmann, M., and Tierney, K. 2010. ISAC – instance-specific algorithm configuration. In Proceedings of the Nineteenth European Conference on Artificial Intelligence (ECAI’10), H. Coelho, R. Studer, and M. Wooldridge, Eds. IOS Press, 751–756.
  • Kotthoff (2013) Kotthoff, L. 2013. LLAMA: leveraging learning to automatically manage algorithms. Tech. rep., Cork Constraint Computation Centre. published at arXiv.
  • Kotthoff et al. (2012) Kotthoff, L., Gent, I. P., and Miguel, I. 2012. An evaluation of machine learning in algorithm selection for search problems. AI Communications 25, 3, 257–270.
  • Liu et al. (2012) Liu, G., Janhunen, T., and Niemelä, I. 2012. Answer set programming via mixed integer programming. In Proceedings of the Thirteenth International Conference on Principles of Knowledge Representation and Reasoning (KR’12), G. Brewka, T. Eiter, and S. McIlraith, Eds. AAAI Press, 32–42.
  • Malitsky et al. (2013) Malitsky, Y., Sabharwal, A., Samulowitz, H., and Sellmann, M. 2013. Boosting sequential solver portfolios: Knowledge sharing and accuracy prediction. See Pardalos and Nicosia (2013), 153–167.
  • Maratea et al. (2012) Maratea, M., Pulina, L., and Ricca, F. 2012. Applying machine learning techniques to ASP solving. See Dovier and Santos Costa (2012), 37–48.
  • Maratea et al. (2013) Maratea, M., Pulina, L., and Ricca, F. 2013. A multi-engine approach to answer-set programming. Theory and Practice of Logic Programming First View, 1–28.
  • Nguyen et al. (2013) Nguyen, M., Janhunen, T., and Niemelä, I. 2013. Translating answer-set programs into bit-vector logic. In Proceedings of the Nineteenth International Conference on Applications of Declarative Programming and Knowledge Management (INAP’11) and the Twenty-fifth Workshop on Logic Programming (WLP’11), H. Tompits, S. Abreu, J. Oetsch, J. Pührer, D. Seipel, M. Umeda, and A. Wolf, Eds. Lecture Notes in Computer Science, vol. 7773. Springer-Verlag, 105–116.
  • O’Mahony et al. (2008) O’Mahony, E., Hebrard, E., Holland, A., Nugent, C., and O’Sullivan, B. 2008. Using case-based reasoning in an algorithm portfolio for constraint solving. In Proceedings of the Nineteenth Irish Conference on Artificial Intelligence and Cognitive Science (AICS’08), D. Bridge, K. Brown, B. O’Sullivan, and H. Sorensen, Eds.
  • Pardalos and Nicosia (2013) Pardalos, P. and Nicosia, G., Eds. 2013. Proceedings of the Seventh International Conference on Learning and Intelligent Optimization (LION’13). Lecture Notes in Computer Science, vol. 7997. Springer-Verlag.
  • Pulina and Tacchella (2007) Pulina, L. and Tacchella, A. 2007. A multi-engine solver for quantified boolean formulas. See Bessiere (2007), 574–589.
  • Rice (1976) Rice, J. 1976. The algorithm selection problem. Advances in Computers 15, 65–118.
  • Seipp et al. (2012) Seipp, J., Braun, M., Garimort, J., and Helmert, M. 2012. Learning portfolios of automatically tuned planners. In Proceedings of the Twenty-Second International Conference on Automated Planning and Scheduling (ICAPS’12), L. McCluskey, B. Williams, J. R. Silva, and B. Bonet, Eds. AAAI.
  • Silverthorn et al. (2012) Silverthorn, B., Lierler, Y., and Schneider, M. 2012. Surviving solver sensitivity: An ASP practitioner’s guide. See Dovier and Santos Costa (2012), 164–175.
  • Simons et al. (2002) Simons, P., Niemelä, I., and Soininen, T. 2002. Extending and implementing the stable model semantics. Artificial Intelligence 138, 1-2, 181–234.
  • Syrjänen Syrjänen, T. Lparse 1.0 user’s manual.
  • Xu et al. (2007) Xu, L., Hoos, H., and Leyton-Brown, K. 2007. Hierarchical hardness models for SAT. See Bessiere (2007), 696–711.
  • Xu et al. (2010) Xu, L., Hoos, H., and Leyton-Brown, K. 2010. Hydra: Automatically configuring algorithms for portfolio-based selection. In Proceedings of the Twenty-fourth National Conference on Artificial Intelligence (AAAI’10), M. Fox and D. Poole, Eds. AAAI Press, 210–216.
  • Xu et al. (2008) Xu, L., Hutter, F., Hoos, H., and Leyton-Brown, K. 2008. SATzilla: Portfolio-based algorithm selection for SAT. Journal of Artificial Intelligence Research 32, 565–606.
  • Xu et al. (2009) Xu, L., Hutter, F., Hoos, H., and Leyton-Brown, K. 2009. SATzilla2009: An automatic algorithm portfolio for SAT. In SAT 2009 competitive events booklet: preliminary version, D. Le Berre, O. Roussel, L. Simon, V. Manquinho, J. Argelich, C. Li, F. Manyà, and J. Planes, Eds. 53–55. Available at http://www.cril.univ-artois.fr/SAT09/solvers/booklet.pdf.
  • Xu et al. (2011) Xu, L., Hutter, F., Hoos, H., and Leyton-Brown, K. 2011. Hydra-MIP: Automated algorithm configuration and selection for mixed integer programming. In RCRA workshop on Experimental Evaluation of Algorithms for Solving Problems with Combinatorial Explosion at the International Joint Conference on Artificial Intelligence (IJCAI’11).
  • Xu et al. (2012) Xu, L., Hutter, F., Hoos, H., and Leyton-Brown, K. 2012. Evaluating component solver contributions to portfolio-based algorithm selectors. In Proceedings of the Fifteenth International Conference on Theory and Applications of Satisfiability Testing (SAT’12), A. Cimatti and R. Sebastiani, Eds. Lecture Notes in Computer Science, vol. 7317. Springer-Verlag, 228–241.