Introduction
Answer Set Programming (ASP) [Baral2003, Eiter, Gottlob, and Mannila1997, Gelfond and Lifschitz1988, Gelfond and Lifschitz1991, Marek and Truszczyński1998, Niemelä1998] is a declarative language based on logic programming and nonmonotonic reasoning. ASP has been applied in several areas, e.g., for solving a variety of hard combinatorial problems (see, e.g., [Calimeri et al.2011] and [Potsdam since 2002]). Nowadays, several efficient ASP systems are available [Gebser et al.2007, Janhunen, Niemelä, and Sevalnev2009, Leone et al.2006, Lierler2005, Mariën et al.2008, Simons, Niemelä, and Soininen2002]. It is well-established that, for solving empirically hard problems, there is rarely a single best algorithm or heuristic, while it is often the case that different algorithms perform well on different problems or instances. It can be easily verified (e.g., by analyzing the results of the ASP Competition series) that this is also the case for ASP implementations. In order to take advantage of this fact, one should be able to automatically select the “best” solver on the basis of the characteristics (called features) of the input instance, i.e., one has to solve an algorithm selection problem [Rice1976]. Inspired by the successful attempts [Gomes and Selman2001, O’Mahony et al.2008, Pulina and Tacchella2009, Xu et al.2008] made in the neighboring fields of SAT, QSAT, and CSP, the application of algorithm selection techniques to ASP solving was ignited by the release of the portfolio solver claspfolio [Gebser et al.2011]. This solver imports into ASP the satzilla [Xu et al.2008] approach: claspfolio employs inductive techniques based on regression to choose the “best” configuration/heuristic of the solver clasp. The complete picture of inductive approaches applied to ASP solving also includes techniques for learning heuristic orderings [Balduccini2011], solutions combining portfolio and automatic algorithm configuration approaches [Silverthorn, Lierler, and Schneider2012], automatic selection of a schedule of ASP solvers [Hoos et al.2012] (in this case, clasp configurations), and the multi-engine approach. The aim of a multi-engine solver [Pulina and Tacchella2009] is to select the “best” solver among a set of efficient ones used as black-box engines. The multi-engine ASP solver measp was proposed in [Maratea, Pulina, and Ricca2012b], and ports to ASP an approach previously applied to QBF [Pulina and Tacchella2009].
measp exploits inductive techniques based on classification to choose, on a per-instance basis, an engine among a selection of black-box heterogeneous ASP solvers. The first implementation of measp, despite not being highly optimized, already reached good performance. Indeed, measp can combine the strengths of its component engines, and thus it performs well on a broad set of benchmarks including 14 domains and 1462 ground instances (detailed results are reported in [Maratea, Pulina, and Ricca2014a]).
In this paper we report on a new, optimized implementation of measp, and on a first attempt at applying algorithm selection to the entire process of computing answer sets of non-ground programs.
As a matter of fact, the state-of-the-art ASP solutions employing machine-learning techniques are devised to solve ground (or propositional) programs, and – to the best of our knowledge – no solution has been proposed that is able to cope directly with non-ground programs. Note that ASP programmers almost always write non-ground programs, which have to be first instantiated by a grounder. It is well-known that this instantiation phase can significantly influence the performance of the entire solving process. At the time of this writing, there are two prominent alternative implementations that are able to instantiate ASP programs:
DLV [Leone et al.2006] and GrinGo [Gebser, Schaub, and Thiele2007]. Once the peculiarities of the instantiation process are properly taken into account, both implementations can be combined into a multi-engine grounder by applying an algorithm selection recipe to this phase as well, building on [Maratea, Pulina, and Ricca2013]. The entire process of evaluating a non-ground ASP program can thus be obtained by applying algorithm selection to the instantiation phase, selecting either DLV or GrinGo, and then, in a subsequent step, evaluating the propositional program obtained in the first step with a multi-engine solver. An experimental analysis reported in the paper shows that the new implementation of measp is substantially faster than the previous version, and that the straightforward application of the multi-engine recipe to the instantiation phase is already beneficial. At the same time, there remains room for future work, in particular for devising more specialized techniques to exploit the full potential of the approach.
A Multi-Engine ASP System
We next overview the components of the multi-engine approach, and we report on how we have instantiated it to cope with both instantiation and solving, thus obtaining a complete multi-engine system for computing answer sets of non-ground ASP programs.
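The two-stage workflow just outlined — select a grounder, instantiate, then select a solver on the resulting ground program — can be rendered schematically as follows. This is a minimal Python sketch: all arguments are placeholder callables standing in for the real feature extractors, grounders (DLV/GrinGo), and engines, which are external binaries in the actual system.

```python
# Minimal sketch of the two-stage evaluation workflow. Every argument is a
# placeholder for a real component; none of these names exist in the system.

def evaluate(program,
             extract_nonground_features,  # program -> feature dict
             select_grounder,             # features -> grounder name
             ground,                      # (grounder, program) -> ground program
             extract_ground_features,     # ground program -> feature dict
             select_solver,               # features -> engine name
             solve):                      # (engine, ground program) -> result
    """Evaluate a non-ground ASP program via grounder and solver selection."""
    ng_feats = extract_nonground_features(program)
    grounder = select_grounder(ng_feats)
    ground_prog = ground(grounder, program)
    g_feats = extract_ground_features(ground_prog)
    engine = select_solver(g_feats)
    return solve(engine, ground_prog)
```

The point of the sketch is that both selection steps are pluggable classifiers, while the grounders and engines remain unmodified black boxes.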
General Approach. The design of a multi-engine solver based on classification is composed of three main ingredients:
a set of features that are significant for classifying the instances;
a selection of solvers that are representative of the state of the art and complementary to one another; and
a choice of effective classification algorithms.
Each instance in a fairly designed training set is analyzed by considering both its features and the performance of each solver. An inductive model is computed by the classification algorithm during this phase. Then, each instance in a test set is processed by first extracting its features, and the solver is selected from these features using the learned model. Note that this schema makes no assumption on the engines (other than the basic one of supporting a common input).

The measp solver. In [Maratea, Pulina, and Ricca2012b, Maratea, Pulina, and Ricca2014a] we described the choices made in developing the measp solver. In particular, we singled out a set of syntactic features that are both significant for classifying the instances and cheap to compute (so that the classifier can be fast and accurate). In detail, we considered: the number of rules and the number of atoms; the ratios of Horn, unary, binary, and ternary rules; as well as some ASP-peculiar features, such as the number of true and disjunctive facts and the fractions of normal rules and constraints. The number of resulting features, together with some of their combinations, amounts to 52. In order to select the engines, we ran preliminary experiments [Maratea, Pulina, and Ricca2014a] to collect a pool of solvers representative of the state-of-the-art solver (sota), i.e., considering a problem instance, the oracle that always fares best among the solvers that entered the system track of the 3rd ASP Competition [Calimeri et al.2011], plus DLV. The pool of engines collected in measp is composed of 5 solvers, namely clasp, claspD, cmodels, DLV, and idp, as submitted to the 3rd ASP Competition. We experimented with several classification algorithms [Maratea, Pulina, and Ricca2014a], and proved empirically that measp
can perform better than its engines under any such choice. Nonetheless, we selected the k-nearest neighbor (kNN) classifier for our new implementation: it was already used in measp [Maratea, Pulina, and Ricca2012b] with good performance, and it was easy to integrate its implementation into the new version of the system.

Multi-engine instantiator. Concerning the automatic selection of the grounder, we selected: the number of disjunctive rules; the presence of queries; the total number of functions and predicates; the number of strongly connected and Head-Cycle Free [Ben-Eliyahu and Dechter1994] components; and the stratification property, for a total of 11 features. These features are able to discriminate the class of the problem, and are also pragmatically cheap to compute. Indeed, given the high expressivity of the language, non-ground ASP programs (which are what programmers usually write) typically contain only a few rules. Concerning the grounders, given that there are only two alternative solutions, namely DLV and GrinGo, we considered both for our implementation.
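To illustrate the kind of per-instance engine selection kNN performs, the following is a toy Python sketch. The feature vectors, the engine labels, the value of k, and the choice of Euclidean distance are illustrative assumptions, not the system's actual 52-feature model.

```python
import math

def knn_select(train, query, k=3):
    """train: list of (feature_vector, engine_label) pairs.
    Returns the engine label occurring most often among the k
    training instances closest (Euclidean distance) to query."""
    nearest = sorted(train, key=lambda pair: math.dist(pair[0], query))[:k]
    votes = {}
    for _, label in nearest:
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

# Toy training data: (number of rules, fraction of Horn rules) -> best engine.
# These values are invented for illustration only.
train = [((100, 0.9), "clasp"), ((120, 0.8), "clasp"),
         ((5000, 0.2), "dlv"), ((4500, 0.3), "dlv")]
```

For instance, `knn_select(train, (110, 0.85))` picks clasp here, since both clasp instances are among the query's nearest neighbors.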
Concerning the classification method, we used an implementation of the PART decision list generator [Frank and Witten1998], a classifier that returns a human-readable model based on if-then-else rules. We used PART because, given the relatively small number of features of the non-ground instances, it allows us to compare the generated model with the knowledge of a human expert.
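A PART model is essentially a decision list: an ordered sequence of if-then rules in which the first matching rule fires. Rendered as code, a learned grounder-selection model could look like the sketch below; the rule conditions and feature names are invented for illustration and are not the model actually learned by the system.

```python
def select_grounder(f):
    """f: dict of non-ground features (names are hypothetical).
    Ordered if-then rules; the first rule that matches decides the grounder."""
    if f.get("disjunctive_rules", 0) > 0:
        return "dlv"
    if f.get("has_query", False):
        return "dlv"
    return "gringo"   # default rule
```

This readable form is precisely what makes the model easy to compare against a human expert's intuition.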
Multi-Engine System measp. Given a (non-ground) ASP program, the evaluation workflow of the multi-engine ASP solution called measp is the following:
non-ground features extraction,
grounder selection,
grounding phase,
ground features extraction,
solver selection, and
solving phase on the ground program.

Implementation and Experiments
In this section we report the results of two experiments conceived to assess the performance of the new versions of the measp system. The first experiment measures the performance improvements obtained by the new, optimized implementation of the measp solver. The second experiment assesses the complete system, reporting on the performance improvements that can be obtained by selecting the grounder first and then calling the measp solver. Both systems are available for download at www.mat.unical.it/ricca/measp. Concerning the hardware employed and the execution settings, all the experiments were run on a cluster of Intel Xeon E3-1245 PCs at 3.30 GHz equipped with 64-bit Ubuntu 12.04, granting 600 seconds of CPU time and 2GB of memory to each solver. The benchmarks used in this paper belong to the suite of the 3rd ASP Competition, encoded in the ASP-Core 1.0 language. Note that in the 4th ASP Competition [Alviano et al.2013] the new language ASP-Core 2.0 was introduced. We still rely on the language of the 3rd ASP Competition, given that the number of solvers and grounders supporting the new standard language is very limited compared with the number of tools supporting ASP-Core 1.0.
Assessment of the new implementation of measp. The original implementation of measp was obtained by combining a general-purpose feature extractor developed in Java (which we had initially built for experimenting with a variety of additional features) with a collection of Perl scripts linking the other components of the system, which are based on the rapidminer library. This is a general-purpose implementation that also supports several classification algorithms. Since the CPU time spent on feature extraction and solver selection has to be made negligible, we developed an optimized version of measp. The goal was to optimize the interaction among system components and further improve their efficiency. To this end, we have reengineered the feature extractor, enabling it to read ground instances expressed in the numeric format used by GrinGo. Furthermore, we have integrated it with an implementation of the kNN algorithm built on top of the ANN library (www.cs.umd.edu/~mount/ANN), in a single binary developed in C++. This way the new implementation minimizes the overhead introduced by solver selection.
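To illustrate why such syntactic features are cheap to compute, the sketch below derives a handful of them from rules in a simplified (head atoms, body literals) representation. This is a hypothetical stand-in: the real extractor parses GrinGo's numeric format and computes the full 52-feature set.

```python
def extract_features(rules):
    """rules: list of (head_atoms, body_literals) pairs, a simplified stand-in
    for a parsed ground program. Negation is ignored in this sketch, so the
    Horn ratio here only checks that the head has at most one atom."""
    n = len(rules)
    return {
        "rules": n,
        "facts": sum(1 for h, b in rules if h and not b),
        "constraints": sum(1 for h, b in rules if not h),
        "disjunctive": sum(1 for h, b in rules if len(h) > 1),
        "horn_ratio": (sum(1 for h, b in rules if len(h) <= 1) / n) if n else 0.0,
    }
```

Each feature is a single linear pass over the rules, which is why extraction can be made negligible relative to solving time.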
We now present the results of an experiment in which we compare the old implementation of measp with the new one. In this experiment, assessing solving performance, we used GrinGo as grounder for both implementations, and we considered problems belonging to the NP and Beyond NP classes of the competition (i.e., the grounder and domains considered by measp in [Maratea, Pulina, and Ricca2014a]). The inductive model used in the new implementation was the same used in the old one (details are reported in [Maratea, Pulina, and Ricca2014a]). The plot in Figure 1 (top) depicts the performance of the old and the new implementation (dotted red and solid blue lines in the plot, respectively). Considering the total number of NP and Beyond NP instances evaluated at the 3rd ASP Competition (140), the new implementation solved 92 instances (77 NP and 15 Beyond NP) in about 4120 seconds, while the old one solved 77 instances (62 NP and 15 Beyond NP) in about 6498 seconds. We report an improvement both in the total number of solved instances (the new implementation solves 66% of the whole set of instances, while the old one stops at 51%) and in the average CPU time over solved instances (about 45 seconds against 84).
The improvements are due to the optimized implementation. Once feature extraction and solver selection are made very efficient, features can be extracted for more instances and the engines are invoked earlier than in the old implementation. As a result, more instances are processed and solved within the timeout.


Assessment of the complete system. We developed a preliminary implementation of a grounder selector, which combines a feature extractor for non-ground programs written in Java with an implementation of the PART decision list generator, as mentioned in the previous section. The grounder selector is then combined with measp.
We now present the results of an experiment in which we compare the complete system against measp coupled with a fixed grounder, and against the sota solver. measp coupled with DLV (resp. GrinGo) is denoted by measp (dlv) (resp. measp (gringo)). In this case we considered all the benchmark problems of the 3rd ASP Competition, including those belonging to the P class: here we are also interested in the grounders' performance, which is crucial in the P class.
The plot in Figure 1 (bottom) shows the performance of the aforementioned solvers. In the plot, we depict the performance of measp (dlv) with a red dotted line, measp (gringo) with a solid blue line, the complete system with a double-dotted dashed yellow line, and, finally, the sota solver with a dotted dashed black line. Looking at the plot, we can see that measp (gringo) solves more instances than measp (dlv) – 126 and 111, respectively – while both are outperformed by the complete system, which is able to solve 134 instances. The average CPU time of solved instances for measp (dlv), measp (gringo), and the complete system is 86.86, 67.93, and 107.82 seconds, respectively. Looking at the bottom plot in Figure 1, concerning the performance of the sota solver, we report that it is able to solve 173 instances out of the 200 evaluated at the 3rd ASP Competition, highlighting room for further improving this preliminary version of the system. Indeed, the current classification model predicts GrinGo for most of the NP instances, but a more detailed look at the results shows that clasp and idp with GrinGo each solve 72 instances, while with DLV they solve 93 and 92 instances, respectively. A detailed analysis of the performance of the various ASP solvers with both grounders can be found in [Maratea, Pulina, and Ricca2013].
It is also worth mentioning that the output formats of GrinGo and DLV differ; thus, some grounder/solver combinations require additional conversion steps in our implementation. Since the new feature extractor is designed to read the numeric format produced by GrinGo, if DLV is selected as grounder then the non-ground program is instantiated twice. Moreover, if DLV is selected as grounder but not also as solver, the produced propositional program is fed into GrinGo to be converted into the numeric format. These additional steps, due to technical issues, result in a suboptimal implementation of the execution pipeline, which could be further optimized if both grounders agreed on a common output format.
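The extra steps just described can be summarized as a small dispatch sketch. This is a schematic reading of the pipeline (the step names are invented, and the actual implementation may organize these runs differently), not the system's real control code.

```python
def plan_pipeline(grounder, solver):
    """Return the processing steps implied by a grounder/solver pair,
    reflecting the conversion steps described in the text."""
    steps = []
    if grounder == "dlv":
        steps.append("ground with dlv")
        # the feature extractor reads GrinGo's numeric format only,
        # so the program is instantiated a second time with gringo
        steps.append("re-instantiate with gringo for feature extraction")
        if solver != "dlv":
            steps.append("convert dlv output to numeric format via gringo")
    else:
        steps.append("ground with gringo")
    steps.append("solve with " + solver)
    return steps
```

With a common output format, the DLV branch would collapse to a single grounding run, matching the GrinGo branch.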
Conclusion. In this paper we presented improvements to the multi-engine ASP solver measp. Experiments show that the new implementation of measp is more efficient, and that the straightforward application of the multi-engine recipe to the instantiation phase is already beneficial. Directions for future research include exploiting the full potential of the approach by predicting the grounder+solver pair, and importing the policy adaptation techniques employed in [Maratea, Pulina, and Ricca2014b].
Acknowledgments. This research has been partly supported by Regione Calabria under project PIA KnowRex POR FESR 2007 2013 BURC n. 49 s.s. n. 1 16/12/2010, the Italian Ministry of University and Research under PON project “Ba2Know S.I.LAB” n. PON03PE_0001, the Autonomous Region of Sardinia (Italy) and the Port Authority of Cagliari (Italy) under L.R. 7/2007, Tender 16 2011 project “desctop”, CRP49656.
References
 [Rice1976] Rice, J. R. 1976. The algorithm selection problem. Advances in Computers 15:65–118.
 [Gelfond and Lifschitz1988] Gelfond, M., and Lifschitz, V. 1988. The Stable Model Semantics for Logic Programming. In Logic Programming, 1070–1080. Cambridge, Mass.: MIT Press.
 [Gelfond and Lifschitz1991] Gelfond, M., and Lifschitz, V. 1991. Classical Negation in Logic Programs and Disjunctive Databases. NGC 9:365–385.
 [Eiter, Gottlob, and Mannila1997] Eiter, T.; Gottlob, G.; and Mannila, H. 1997. Disjunctive Datalog. ACM TODS 22(3):364–418.
 [Frank and Witten1998] Frank, E., and Witten, I. H. 1998. Generating accurate rule sets without global optimization. In ICML’98, 144.
 [Marek and Truszczyński1998] Marek, V. W., and Truszczyński, M. 1998. Stable models and an alternative logic programming paradigm. CoRR cs.LO/9809032.
 [Niemelä1998] Niemelä, I. 1998. Logic Programs with Stable Model Semantics as a Constraint Programming Paradigm. In CANR 98 Workshop, 72–79.
 [Gomes and Selman2001] Gomes, C. P., and Selman, B. 2001. Algorithm portfolios. Artificial Intelligence 126(1–2):43–62.
 [Potsdam since 2002] Potsdam, U. since 2002. Asparagus homepage. http://asparagus.cs.uni-potsdam.de/.
 [Simons, Niemelä, and Soininen2002] Simons, P.; Niemelä, I.; and Soininen, T. 2002. Extending and Implementing the Stable Model Semantics. Artificial Intelligence 138:181–234.
 [Baral2003] Baral, C. 2003. Knowledge Representation, Reasoning and Declarative Problem Solving. Cambridge University Press.
 [Lierler2005] Lierler, Y. 2005. Disjunctive Answer Set Programming via Satisfiability. In LPNMR 05, LNCS 3662, 447–451.
 [Leone et al.2006] Leone, N.; Pfeifer, G.; Faber, W.; Eiter, T.; Gottlob, G.; Perri, S.; and Scarcello, F. 2006. The DLV System for Knowledge Representation and Reasoning. ACM TOCL 7(3):499–562.
 [Gebser et al.2007] Gebser, M.; Kaufmann, B.; Neumann, A.; and Schaub, T. 2007. Conflict-driven answer set solving. In IJCAI 2007, 386–392.
 [Gebser, Schaub, and Thiele2007] Gebser, M.; Schaub, T.; and Thiele, S. 2007. GrinGo: A New Grounder for Answer Set Programming. In LPNMR 2007, LNCS 4483, 266–271.
 [Mariën et al.2008] Mariën, M.; Wittocx, J.; Denecker, M.; and Bruynooghe, M. 2008. SAT(ID): Satisfiability of propositional logic extended with inductive definitions. In SAT 2008, LNCS, 211–224.
 [O’Mahony et al.2008] O’Mahony, E.; Hebrard, E.; Holland, A.; Nugent, C.; and O’Sullivan, B. 2008. Using case-based reasoning in an algorithm portfolio for constraint solving. In ICAICS 08.
 [Xu et al.2008] Xu, L.; Hutter, F.; Hoos, H. H.; and Leyton-Brown, K. 2008. SATzilla: Portfolio-based algorithm selection for SAT. JAIR 32:565–606.
 [Janhunen, Niemelä, and Sevalnev2009] Janhunen, T.; Niemelä, I.; and Sevalnev, M. 2009. Computing stable models via reductions to difference logic. In LPNMR 09, LNCS, 142–154.
 [Pulina and Tacchella2009] Pulina, L., and Tacchella, A. 2009. A self-adaptive multi-engine solver for quantified Boolean formulas. Constraints 14(1):80–116.
 [Gebser et al.2011] Gebser, M.; Kaminski, R.; Kaufmann, B.; Schaub, T.; Schneider, M. T.; and Ziller, S. 2011. A portfolio solver for answer set programming: Preliminary report. In LPNMR 11, LNCS 6645, 352–357.
 [Balduccini2011] Balduccini, M. 2011. Learning and using domain-specific heuristics in ASP solvers. AICOM 24(2):147–164.
 [Ben-Eliyahu and Dechter1994] Ben-Eliyahu, R., and Dechter, R. 1994. Propositional Semantics for Disjunctive Logic Programs. Annals of Mathematics and Artificial Intelligence 12:53–87.
 [Calimeri et al.2011] Calimeri, F.; Ianni, G.; Ricca, F.; et al. 2011. The Third Answer Set Programming Competition: Preliminary Report of the System Competition Track. In LPNMR 2011, LNCS, 388–403.
 [Hoos et al.2012] Hoos, H.; Kaminski, R.; Schaub, T.; and Schneider, M. T. 2012. ASPeed: ASP-based solver scheduling. In Tech. Comm. of ICLP 2012, volume 17 of LIPIcs, 176–187.
 [Maratea, Pulina, and Ricca2012b] Maratea, M.; Pulina, L.; and Ricca, F. 2012b. The multi-engine ASP solver measp. In JELIA 2012, LNCS 7519, 484–487.
 [Silverthorn, Lierler, and Schneider2012] Silverthorn, B.; Lierler, Y.; and Schneider, M. 2012. Surviving solver sensitivity: An ASP practitioner’s guide. In Tech. Comm. of ICLP 2012, volume 17 of LIPIcs, 164–175.
 [Alviano et al.2013] Alviano, M.; Calimeri, F.; Charwat, G.; et al. 2013. The fourth answer set programming competition: Preliminary report. In LPNMR, LNCS 8148, 42–53.
 [Maratea, Pulina, and Ricca2013] Maratea, M.; Pulina, L.; and Ricca, F. 2013. Automated selection of grounding algorithm in answer set programming. In AI*IA 2013, 73–84.
 [Maratea, Pulina, and Ricca2014a] Maratea, M.; Pulina, L.; and Ricca, F. 2014a. A multiengine approach to answerset programming. Theory and Practice of Logic Programming. DOI: http://dx.doi.org/10.1017/S1471068413000094
 [Maratea, Pulina, and Ricca2014b] Maratea, M.; Pulina, L.; and Ricca, F. 2014b. Multi-engine ASP solving with policy adaptation. JLC. In press.