The challenge received a total of 8 submissions from 4 different groups of researchers comprising 15 people. Participants were based in 4 different countries on 2 continents. Table 1 gives an overview of all submissions.
All systems were submitted for evaluation on all ASlib scenarios. Most systems used a presolver and specified a subset of features to use for each scenario.
The evaluation was performed as follows. For each scenario, 10 bootstrap samples of the entire data were used to create 10 different train/test splits. No stratification was used. The training part was left unmodified. For the test part, algorithm performances were set to 0 and runstatus to “ok” for all algorithms and all instances, as the ASlib specification requires algorithm performance data to be part of a scenario. A cv.arff file was generated for both training and testing with 10 folds, with instances assigned to folds in the order in which they appeared in the original scenario.
For systems that specified a presolver, the instances that were solved by the presolver within the specified time were removed from the training set. If a subset of features was specified, only these features (and only the costs associated with these features) were kept in both training and test sets; all other feature values were removed.
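The split construction described above can be sketched as follows. This is a minimal sketch, not the challenge's actual code; in particular, treating the out-of-bag instances as the test set is an assumption:

```python
import random

def make_splits(instances, n_splits=10, seed=0):
    """Create bootstrap train/test splits: each training set is a
    sample of the instances drawn with replacement; instances never
    drawn form the corresponding test set (out-of-bag)."""
    rng = random.Random(seed)
    splits = []
    for _ in range(n_splits):
        train = [rng.choice(instances) for _ in instances]
        train_set = set(train)
        test = [i for i in instances if i not in train_set]
        splits.append((train, test))
    return splits
```

No stratification is performed, matching the description above.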
Each system was trained on the training part of each split and made predictions for the corresponding test part. In total, 130 evaluations (10 for each of the 13 scenarios) were performed per submitted system. The total CPU time spent was 4685.11 hours.
The predictions were evaluated as follows. If a presolver was specified, it was “run” for the specified time. If the instance was solved within this time, the time to solve the instance was taken as the performance on that instance and the instance recorded as solved.
Otherwise, the time limit given for the presolving run was added to the time required to compute all features specified for the particular scenario. For any instances that were solved during feature computation, the instance was recorded as solved at this point and the time for the presolving run plus feature computation recorded as the performance. The misclassification penalty was set to 0 in this case regardless of the performance of the best solver.
For instances not solved during feature computation, the solvers specified in the prediction schedule of the system were “run”. For each instance, the predicted solvers were ordered by the runID specified. If a run was unable to solve an instance, the smaller of the time the schedule specified to run it for and the time it actually took to run on the instance was added to the total. If a run solved the respective instance, the actual time required by the algorithm was added to the total and the instance recorded as solved. If the total time exceeded the time limit for the scenario, the instance was recorded as unsolved.
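The scoring of a single instance that the presolver did not solve can be sketched as follows. All names are illustrative, and the sketch assumes a solver either finishes within its scheduled budget or runs past it (crashed runs, which are also charged the smaller of the two times, are not modelled separately):

```python
def simulate_instance(schedule, runtimes, presolve_time, feature_time, cutoff):
    """Score one instance that the presolver did not solve.

    schedule: list of (solver, budget) pairs in runID order
    runtimes: dict mapping solver -> time it actually needs
    Returns (solved, total_time)."""
    total = presolve_time + feature_time  # charged up front
    for solver, budget in schedule:
        actual = runtimes[solver]
        if actual <= budget:               # this run solves the instance
            total += actual
            return total <= cutoff, total  # must still respect the time limit
        total += min(budget, actual)       # failed run: charge the smaller time
    return False, total
```

For example, a schedule that runs solver "a" for 50 seconds (it would need 80) and then solver "b" for 100 seconds (it needs 60) accumulates the presolve and feature time, then 50 seconds for the failed run and 60 for the successful one.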
Each system was evaluated in terms of mean PAR10 score, mean misclassification penalty, and mean number of instances solved on each of the 130 scenario/split combinations.
To facilitate comparison of the different measures across the different scenarios, all measures were normalised by the performance of the virtual best (VBS) and the single best (SB) solver. The single best solver was determined as the solver with the smallest overall runtime across all instances. Equation 1 defines the normalisation of a score s:

s' = (s - s_VBS) / (s_SB - s_VBS)   (1)
This normalises the score to the interval from 0 (VBS) to 1 (SB), with smaller values being better. The normalised score denotes how much of the gap between single best and virtual best solver was left by the system.
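Assuming Equation 1 takes the usual min-max form anchored at the two reference solvers, the normalisation can be written as:

```python
def normalise(score, vbs, sb):
    """Map a raw score to [0, 1]: 0 at the virtual best solver's
    score, 1 at the single best solver's score.  Works both for
    measures where smaller is better (PAR10, misclassification
    penalty) and larger is better (instances solved), since vbs
    and sb anchor the two ends in either case."""
    return (score - vbs) / (sb - vbs)
```

For example, a system with a PAR10 score of 500 on a scenario where the VBS scores 200 and the SB scores 800 closes half the gap: normalise(500, 200, 800) gives 0.5.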
To determine the overall winner, the mean across all of the normalised measurements was taken. For each submitted system, 390 scores were taken into account for this (13 scenarios times 10 splits times 3 measures).
Table 2 shows the final ranking. The first and second placed entries are very close. All systems perform well on average, closing more than half of the gap between virtual and single best solver.
For comparison, we show three other systems. Autofolio-48 is identical to Autofolio, but was allowed 48 hours of training time to assess the impact of additional exploration of the hyperparameter space. Llama-regrPairs and llama-regr are simple llama models (see Appendix A).
| System | Average total score |
To assess how significant the differences are and how stable the ranking is, we took 1 000 bootstrap samples from the scenario/split combinations and computed the scores and ranks on each of them. The means of the total scores over the bootstrap samples and the corresponding confidence intervals are shown in Table 3.
| System | Average total score | 95% CI upper | 95% CI lower |
The ranking is the same as the final ranking in Table 2. The confidence intervals show that the rankings are relatively stable.
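The bootstrap analysis can be sketched like this. It is a sketch under assumptions (uniform resampling of scenario/split combinations, percentile confidence intervals), not the challenge's actual analysis code:

```python
import random
import statistics

def bootstrap_summary(scores, n_boot=1000, seed=0):
    """scores: dict system -> list of normalised scores, one per
    scenario/split/measure combination (same order for all systems).
    Resample combinations with replacement, average per system, and
    report the mean plus a 95% percentile confidence interval."""
    rng = random.Random(seed)
    n = len(next(iter(scores.values())))
    boots = {s: [] for s in scores}
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        for s, vals in scores.items():
            boots[s].append(statistics.mean(vals[i] for i in idx))
    summary = {}
    for s, means in boots.items():
        means.sort()
        summary[s] = (statistics.mean(means),
                      means[int(0.025 * n_boot)],      # lower bound
                      means[int(0.975 * n_boot) - 1])  # upper bound
    return summary
```

Resampling entire scenario/split combinations (rather than individual runs) keeps the three measures for one combination together, which matches how the scores were produced.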
3.1 Winner – zilla
The winner of the ICON Challenge on Algorithm Selection is zilla by Chris Cameron, Alex Fréchette, Holger Hoos, Frank Hutter, and Kevin Leyton-Brown.
3.2 Honourable mention – ASAP_RF
ASAP_RF by François Gonard, Marc Schoenauer, and Michèle Sebag receives an honourable mention as a submission that had not been described in the literature before and showed respectable performance, beating all other approaches in some cases.
3.3 Alternative rank aggregations
An alternative (and probably fairer) way of determining the winner is to see the ranking of systems induced by each measure on each split of each scenario as a ballot (for a total of 260 ballots) and aggregate the ranks in those ballots. Here, we optimise the aggregated Spearman coefficient between candidate rankings and ballot rankings. That is, the final ranking has the optimal Spearman coefficient with respect to the ballots.
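A brute-force version of this aggregation, feasible only for a handful of systems (the actual optimisation presumably used a smarter search), could look like:

```python
from itertools import permutations

def spearman(r1, r2):
    """Spearman rank correlation for two tie-free rank vectors."""
    n = len(r1)
    d2 = sum((a - b) ** 2 for a, b in zip(r1, r2))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

def aggregate_ballots(ballots):
    """Return the ranking (1 = best, systems in a fixed order) that
    maximises the summed Spearman correlation with all ballots."""
    n = len(ballots[0])
    return max(permutations(range(1, n + 1)),
               key=lambda cand: sum(spearman(cand, b) for b in ballots))
```

With 260 ballots over the submitted systems this enumeration is still tractable, since the number of permutations depends only on the number of systems.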
Table 4 shows the aggregated ranks. Now autofolio is in second position.
There are significant changes, however, when averaging the performance across all measures, splits, and scenarios by median rather than mean. Table 5 shows this ranking. Zilla is now in second position, beaten by ASAP_RF.
| System | Median total score |
3.4 Detailed results
| System | Mean PAR10 score |
| System | Mean misclassification penalty |
| System | Mean number of instances solved |
Table 9 shows the ranks for the different scenarios for all systems by mean across all measures and splits.
Figures 1 through 3 give a more detailed overview of the performance of the systems on the different scenarios. The colour of each boxplot denotes the system, the mean performance of which is shown in the legend (this corresponds to the number in the respective table above). The boxplot shows the variation of performance across the 10 different splits for each scenario. The solid black line denotes the performance of the single best solver; anything above is worse.
Two of the SAT scenarios are hard for all systems, in the sense that the performance they deliver on at least one of the splits is worse than that of the single best solver. For most other scenarios, though, using any algorithm selection system gives a significant performance improvement over the single best solver.
3.5 Time required to run
The time required to train the models and make the predictions varied significantly across systems and scenarios, with some completing in minutes and others requiring hours. Figure 4 presents a summary.
We would like to thank all the participants for taking the time to prepare submissions and their help in getting them to run; in alphabetical order: Alex Fréchette, Chris Cameron, David Bergdoll, Fabio Biselli, François Gonard, Frank Hutter, Holger Hoos, Jacopo Mauro, Kevin Leyton-Brown, Marc Schoenauer, Marius Lindauer, Michèle Sebag, Roberto Amadini, Tong Liu, and Torsten Schaub. We thank Barry Hurley for setting up and maintaining the submission website and Luc De Raedt, Siegfried Nijssen, Benjamin Negrevergne, Behrouz Babaki, Bernd Bischl, and Marius Lindauer for feedback on the design of the challenge.
All data, code and results from the challenge are available at http://4c.ucc.ie/~larsko/downloads/challenge.tar.gz.
Appendix A Llama models used for comparison