The Results

This page displays the results of the first Cross-domain Heuristic Search Challenge. For each entry (subject to the authors' consent), the algorithm description can be downloaded as a PDF. Additionally, the scores per domain are given as bar plots for the best 15 entries on each domain. The Top 3 Plot shows the distribution of points obtained by the top 3 competition entries across the domains. A more detailed explanation of how the results were calculated is given below.
 #   Algorithm    Score   Author/Team        Affiliation
 1   AdapHH       181     Mustafa Misir      University KaHo Sint-Lieven, Belgium
 2   VNS-TW       134     Ping-Che Hsiao     National Taiwan University, Taiwan
 3   ML           131.5   Mathieu Larose     Université de Montréal, Canada
 4   PHUNTER      93.25   Fan Xue            Hong Kong Polytechnic University, Hong Kong
 5   EPH          89.75   David Meignan      Polytechnique Montréal, Canada
 6   HAHA         75.75   Andreas Lehrbaum   Vienna University of Technology, Austria
 7   NAHH         75      Franco Mascia      Université Libre de Bruxelles, Belgium
 8   ISEA         71      Jiri Kubalik       Czech Technical University, Czech Rep.
 9   KSATS-HH     66.5    Kevin Sim          Edinburgh Napier University, UK
10   HAEA         53.5    Jonatan Gomez      Univ. Nacional de Colombia, Colombia
11   ACO-HH       39      José Luis Núñez    Universidad de Santiago de Chile
12   GenHive      36.5    CS-PUT             Poznan University of Technology, Poland
13   DynILS       27      Mark Johnston      Victoria University of Wellington, New Zealand
14   SA-ILS       24.25   He Jiang           Dalian University of Technology, China
15   XCJ          22.5    Kamran Shafi       University of New South Wales, Australia
16   AVEG-Nep     21      Tommaso Urli       University of Udine, Italy
17   GISS         16.75   Alberto Acuña      University of Santiago de Chile, Chile
18   SelfSearch   7       Jawad Elomari      Warwick University, UK
19   MCHH-S       4.75    Kent McClymont     University of Exeter, UK
20   Ant-Q        0       Imen Khamassi      University of Tunisia, Tunisia


Scores per Domain

In order to give more information on the algorithms' performance, the plots below illustrate the scores of the best 15 competition entries on each domain. The maximum possible score per domain is 50 (10 points for each of the 5 competition instances per domain).

Max-SAT

[Figure: Max-SAT scores of the best 15 entries]

Bin Packing

[Figure: Bin Packing scores of the best 15 entries]

Personnel Scheduling

[Figure: Personnel Scheduling scores of the best 15 entries]

Flow Shop

[Figure: Flow Shop scores of the best 15 entries]

TSP

[Figure: TSP scores of the best 15 entries]

VRP

[Figure: VRP scores of the best 15 entries]

The Top 3 Plot

This plot shows the total score, and the points obtained per domain, for the top 3 competition entries.

[Figure: total and per-domain scores of the top 3 entries]

How the Results were Calculated

The information below clarifies the procedure for calculating the competition results:
  • Hidden domains: two hidden domains were considered, each adding 5 instances. The hidden domains are the Vehicle Routing Problem with time windows (VRP) and the Travelling Salesman Problem (TSP).
  • Hidden instances: for each of the four test domains (Max-SAT, bin packing, personnel scheduling and flow shop), two hidden instances were considered.
  • Training instances: for each of the four test domains, 3 of the 10 training instances were randomly selected.
  • Instance selection: the algorithm for selecting the 3 training instances from each of the 4 test domains, and the 5 instances for the 2 hidden domains, is given by 'CompetitionInstanceSelector.java'. This program uses the Java random number generator with the seed 15062011, which corresponds to the date of the competition submission deadline (15 June 2011).
  • Number of runs and performance metric: to strengthen the statistical significance of the results, 31 runs per instance were conducted, and the median objective values were calculated.
  • Running time: each run lasted 10 minutes (600 seconds) per instance, according to the benchmarking description.
  • Scoring system: the median values were used to calculate the scores with the Formula 1 point system, under which the top eight entries on each instance receive 10, 8, 6, 5, 4, 3, 2 and 1 points, respectively; a sketch of the selection and scoring steps is given after this list.
  • Maximum possible score: there are 5 instances per domain and 6 domains; the maximum score is therefore 50 per domain and 300 for the total competition score.
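
For concreteness, the following minimal Java sketch illustrates the two computational steps above: a seeded instance selection and Formula 1 scoring of median results. It is not the actual 'CompetitionInstanceSelector.java' (whose selection logic is not reproduced on this page); the class and method names are illustrative, the shuffle-and-take-three selection scheme is an assumption, and so is the tie rule in which entries with equal medians share the points of the tied ranks (a convention that would explain fractional totals such as 93.25 in the leaderboard).

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.Collections;
    import java.util.List;
    import java.util.Random;

    // Illustrative sketch only -- not the actual CompetitionInstanceSelector.java.
    public class ResultsCalculationSketch {

        // Seed stated above: the submission deadline, 15 June 2011.
        static final long SEED = 15062011L;

        // Formula 1 points awarded to the top eight entries on each instance.
        static final double[] F1_POINTS = {10, 8, 6, 5, 4, 3, 2, 1};

        // Reproducibly pick 3 of the 10 training instances of one domain.
        // (Assumed scheme: shuffle the indices and take the first three.)
        static List<Integer> selectTrainingInstances(Random rng) {
            List<Integer> indices = new ArrayList<>();
            for (int i = 0; i < 10; i++) indices.add(i);
            Collections.shuffle(indices, rng);
            return new ArrayList<>(indices.subList(0, 3));
        }

        // Points per entry on one instance, given each entry's median objective
        // value over 31 runs (lower is better). Assumed tie rule: entries with
        // equal medians share the points of the tied ranks equally.
        static double[] f1Scores(double[] medians) {
            int n = medians.length;
            Integer[] order = new Integer[n];
            for (int i = 0; i < n; i++) order[i] = i;
            Arrays.sort(order, (a, b) -> Double.compare(medians[a], medians[b]));
            double[] points = new double[n];
            int rank = 0;
            while (rank < n) {
                int end = rank; // find the block of entries tied with order[rank]
                while (end + 1 < n && medians[order[end + 1]] == medians[order[rank]]) end++;
                double pool = 0; // points available to the tied block
                for (int r = rank; r <= end && r < F1_POINTS.length; r++) pool += F1_POINTS[r];
                double share = pool / (end - rank + 1);
                for (int r = rank; r <= end; r++) points[order[r]] = share;
                rank = end + 1;
            }
            return points;
        }

        public static void main(String[] args) {
            Random rng = new Random(SEED);
            System.out.println("Selected training instances: " + selectTrainingInstances(rng));
            // Hypothetical medians of four entries on one instance:
            double[] medians = {812.0, 799.5, 830.0, 799.5};
            System.out.println("F1 points: " + Arrays.toString(f1Scores(medians)));
        }
    }

On the hypothetical medians in main, the two entries tied for first share the first- and second-place points (9 each), while the remaining entries take 6 and 5 points; summing such per-instance points over all 30 instances yields the competition totals.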

Last Updated: 09 September 2011.