ASAP Default Hyper-heuristics
During the preparation and testing of the competition software, we developed 8 hyper-heuristics inspired by state-of-the-art approaches. However, we make no guarantees about their quality (or lack thereof); they are intended as a starting benchmark against which participants can compare the performance of their algorithms. The tables below show the results obtained by these 8 default hyper-heuristics on the 10 training instances of each of the 4 test domains (Table 1: MAX-SAT, Table 2: 1D Bin Packing, Table 3: Personnel Scheduling, and Table 4: Permutation Flow Shop). Each table entry is the objective function value obtained by the corresponding algorithm after a single 10-minute run on the respective domain instance. Recall that all domains are minimisation problems, so the best results correspond to the smallest values.
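For reference, each table entry was produced by a run of the kind shown in the HyFlex tutorial's example runner. The fragment below is a minimal sketch of such a run, not the exact benchmarking harness: the class name `SingleRun`, the seed, the instance index, and the package layout (`AbstractClasses.*`, `SAT.SAT`) are assumptions based on the competition software distribution.

```java
import AbstractClasses.HyperHeuristic;
import AbstractClasses.ProblemDomain;
import SAT.SAT;

public class SingleRun {
    public static void main(String[] args) {
        long seed = 1234L; // arbitrary seed, for illustration only

        // Load one of the 10 MAX-SAT training instances (index 0 here).
        ProblemDomain problem = new SAT(seed);
        problem.loadInstance(0);

        // Any HyperHeuristic subclass can be plugged in here; HH5, for
        // example, is ExampleHyperHeuristic1 from the HyFlex tutorial
        // (assumed to be on the classpath from the tutorial code).
        HyperHeuristic hh = new ExampleHyperHeuristic1(seed);

        // Each table entry is the result of a single 10-minute run.
        hh.setTimeLimit(600000); // milliseconds
        hh.loadProblemDomain(problem);
        hh.run();

        // Minimisation: the smaller the value, the better.
        System.out.println(hh + ": " + hh.getBestSolutionValue());
    }
}
```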
Note that the Personnel Scheduling instances are a subset of the instances in the Staff Rostering Benchmark Data Sets, maintained by Timothy Curtois, University of Nottingham, UK.
Please note that success against these hyper-heuristics will not necessarily translate into success in the competition, as that depends on the quality of the other competitors' entries. If you wish to gauge the performance of your algorithm against those of other competitors, please consider sending your results to be included in the Leaderboard.
Table 1: MAX-SAT, objective function values obtained by 8 hyper-heuristics on the 10 training instances, after a 10-minute run. The best result for each instance is highlighted in bold.
| Instance | HH1 | HH2 | HH3 | HH4 | HH5 | HH6 | HH7 | HH8 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Inst1 | 47 | 35 | 23 | 38 | 125 | **14** | 51 | 46 |
| Inst2 | 35 | 31 | 38 | 51 | 109 | **30** | 37 | 57 |
| Inst3 | 32 | **24** | 26 | 44 | 115 | 27 | 29 | 54 |
| Inst4 | 19 | **13** | 31 | 15 | 54 | 17 | 18 | 25 |
| Inst5 | 11 | **8** | 39 | 32 | 56 | 33 | 10 | 42 |
| Inst6 | 25 | **17** | 56 | 46 | 110 | 50 | 23 | 54 |
| Inst7 | 7 | **6** | 12 | 12 | 16 | 10 | **6** | 18 |
| Inst8 | **6** | **6** | 11 | 12 | 16 | 11 | 7 | 15 |
| Inst9 | 9 | **8** | 13 | 17 | 26 | 14 | 10 | 21 |
| Inst10 | 213 | **211** | 216 | 235 | 263 | 219 | 215 | 233 |
Table 2: 1D Bin Packing, objective function values obtained by 8 hyper-heuristics on the 10 training instances, after a 10-minute run. The best result for each instance is highlighted in bold.
| Instance | HH1 | HH2 | HH3 | HH4 | HH5 | HH6 | HH7 | HH8 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Inst1 | 0.01738 | 0.01740 | **0.00683** | 0.01179 | 0.05065 | 0.01574 | 0.02195 | 0.07166 |
| Inst2 | 0.01643 | 0.01696 | **0.00723** | 0.01174 | 0.05013 | 0.01180 | 0.02114 | 0.06794 |
| Inst3 | 0.02339 | 0.02354 | 0.02419 | 0.02353 | 0.02760 | **0.02314** | 0.02422 | 0.03068 |
| Inst4 | 0.02477 | 0.02456 | 0.02604 | 0.02460 | 0.03145 | **0.02425** | 0.02553 | 0.03285 |
| Inst5 | 0.00628 | 0.00651 | **0.00034** | 0.00457 | 0.01512 | 0.00682 | 0.00688 | 0.02195 |
| Inst6 | 0.00428 | 0.00845 | **0.00335** | 0.00365 | 0.01785 | 0.00842 | 0.00897 | 0.02373 |
| Inst7 | 0.11477 | 0.09875 | **0.01084** | 0.02209 | 0.17178 | 0.04843 | 0.13877 | 0.18497 |
| Inst8 | 0.13643 | 0.13595 | **0.01909** | 0.06375 | 0.18226 | 0.08434 | 0.15052 | 0.18409 |
| Inst9 | 0.05505 | **0.05444** | 0.05798 | 0.09188 | 0.09279 | 0.06073 | 0.05549 | 0.12584 |
| Inst10 | 0.01236 | **0.01127** | 0.01575 | 0.02671 | 0.03521 | 0.01667 | 0.01497 | 0.04291 |
Table 3: Personnel Scheduling, objective function values obtained by 8 hyper-heuristics on the 10 training instances, after a 10-minute run. The best result for each instance is highlighted in bold.
| Instance | HH1 | HH2 | HH3 | HH4 | HH5 | HH6 | HH7 | HH8 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Inst1 | 3324 | **3309** | 3354 | 3339 | 3340 | 8342 | 3324 | 3334 |
| Inst2 | 2240 | 2280 | 2485 | 2218 | 2260 | 11673 | **2135** | 3983 |
| Inst3 | 360 | 340 | 545 | 340 | 380 | 755 | 370 | **315** |
| Inst4 | 22 | 18 | 28 | **17** | 21 | 74 | 19 | 21 |
| Inst5 | 25 | 23 | 35 | 25 | **22** | 69 | **22** | 27 |
| Inst6 | 23 | 26 | 34 | 22 | 22 | 73 | **19** | 26 |
| Inst7 | 1129 | **1113** | 1241 | 1125 | 1130 | 30094 | 1117 | 1643 |
| Inst8 | 2261 | 2266 | 2270 | 2290 | 2278 | 46800 | **2197** | 4035 |
| Inst9 | 3238 | 3173 | 3252 | 3236 | 3276 | 42151 | **3166** | 8074 |
| Inst10 | **9690** | 9735 | 14321 | 12752 | 9815 | 95582 | 9789 | 19148 |
(Note: these instances are a subset of the instances in the Staff Rostering Benchmark Data Sets.)
Table 4: Permutation Flow Shop, objective function values obtained by 8 hyper-heuristics on the 10 training instances, after a 10-minute run. The best result for each instance is highlighted in bold.
| Instance | HH1 | HH2 | HH3 | HH4 | HH5 | HH6 | HH7 | HH8 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Inst1 | 6381 | 6383 | 6368 | 6326 | 6387 | **6312** | 6391 | 6315 |
| Inst2 | 6329 | 6336 | 6339 | **6263** | 6315 | 6271 | 6334 | 6265 |
| Inst3 | 6404 | 6404 | 6398 | 6362 | 6407 | **6344** | 6404 | 6351 |
| Inst4 | 6390 | 6389 | 6369 | 6366 | 6392 | **6350** | 6385 | 6366 |
| Inst5 | 6481 | 6468 | 6439 | 6407 | 6468 | **6398** | 6480 | 6419 |
| Inst6 | 10547 | 10549 | 10544 | 10503 | 10549 | **10500** | 10542 | 10522 |
| Inst7 | 10965 | 10965 | 10965 | 10923 | 10965 | **10922** | 10968 | 10957 |
| Inst8 | 26440 | 26440 | 26487 | **26382** | 26476 | 26424 | 26450 | 26406 |
| Inst9 | 26984 | 26958 | 26998 | **26864** | 26974 | 26896 | 26928 | 26939 |
| Inst10 | 26779 | 26754 | 26818 | 26721 | 26756 | **26704** | 26767 | 26726 |
The design principles of some of these hyper-heuristics can be found in the following article:

E. K. Burke, T. Curtois, M. Hyde, G. Kendall, G. Ochoa, S. Petrovic, J. A. Vazquez-Rodriguez and M. Gendreau (2010) Iterated Local Search vs. Hyper-heuristics: Towards General-purpose Search Algorithms, IEEE Congress on Evolutionary Computation (CEC 2010), IEEE Press, pp. 3073-3080.
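As a rough illustration of the iterated local search idea from the article above, the sketch below alternates a randomly chosen mutation heuristic with a randomly chosen local search heuristic, keeping the result when it does not worsen the objective. It is a minimal sketch in HyFlex style, not the implementation evaluated in the paper; the class name, selection scheme, and acceptance rule are simplifying assumptions.

```java
import AbstractClasses.HyperHeuristic;
import AbstractClasses.ProblemDomain;

// Simplified ILS-style hyper-heuristic sketch (not the paper's version).
public class SimpleILSHyperHeuristic extends HyperHeuristic {

    public SimpleILSHyperHeuristic(long seed) {
        super(seed);
    }

    @Override
    public void solve(ProblemDomain problem) {
        // All four competition domains provide both heuristic types;
        // in general, getHeuristicsOfType may return null.
        int[] mutation = problem.getHeuristicsOfType(ProblemDomain.HeuristicType.MUTATION);
        int[] localSearch = problem.getHeuristicsOfType(ProblemDomain.HeuristicType.LOCAL_SEARCH);

        problem.setMemorySize(2);      // slot 0: incumbent, slot 1: candidate
        problem.initialiseSolution(0);
        double incumbent = problem.getFunctionValue(0);

        while (!hasTimeExpired()) {
            // Perturbation: one randomly chosen mutation heuristic.
            int m = mutation[rng.nextInt(mutation.length)];
            problem.applyHeuristic(m, 0, 1);

            // Local search on the perturbed solution.
            int ls = localSearch[rng.nextInt(localSearch.length)];
            double candidate = problem.applyHeuristic(ls, 1, 1);

            // Accept if no worse (minimisation).
            if (candidate <= incumbent) {
                problem.copySolution(1, 0);
                incumbent = candidate;
            }
        }
    }

    @Override
    public String toString() {
        return "Simple ILS sketch";
    }
}
```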
Also, the hyper-heuristic identified as HH5 corresponds to the ExampleHyperHeuristic1.java discussed in the HyFlex Tutorial.
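From memory of the tutorial code, ExampleHyperHeuristic1 is roughly a single-point search that applies a uniformly random low-level heuristic with a lenient acceptance rule. The sketch below captures that structure; the 50% acceptance of non-improving moves and the class name are assumptions, so consult the tutorial source for the authoritative version.

```java
import AbstractClasses.HyperHeuristic;
import AbstractClasses.ProblemDomain;

// Rough sketch of the structure of HH5 (ExampleHyperHeuristic1-like).
public class RandomSelectionSketch extends HyperHeuristic {

    public RandomSelectionSketch(long seed) {
        super(seed);
    }

    @Override
    public void solve(ProblemDomain problem) {
        problem.setMemorySize(2);      // slot 0: current, slot 1: candidate
        problem.initialiseSolution(0);
        double current = problem.getFunctionValue(0);

        while (!hasTimeExpired()) {
            // Pick any low-level heuristic uniformly at random.
            int h = rng.nextInt(problem.getNumberOfHeuristics());
            double candidate = problem.applyHeuristic(h, 0, 1);

            // Accept improving moves; accept non-improving ones half the
            // time (assumed acceptance rule, from memory of the tutorial).
            if (candidate < current || rng.nextBoolean()) {
                problem.copySolution(1, 0);
                current = candidate;
            }
        }
    }

    @Override
    public String toString() {
        return "Random selection sketch (HH5-like)";
    }
}
```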