The Challenge Rules

We distinguish between General, Algorithm Design and Implementation, Adjudication, and Leaderboard Rules.


General

Rule 1

This challenge seeks to encourage research into automated heuristic design and hyper-heuristic methods, and to offer prizes to the most generally applicable algorithms, those able to perform well across several domains. It is the spirit of these rules that is important, not the letter. With any set of rules for any competition it is possible to work within the letter of the rules but outside the spirit.

Rule 2

The organisers reserve the right to disqualify any participant from the competition at any time if the participant is determined by the organisers to have worked outside the spirit of the competition rules. The organisers' decision is final in any matter. Decisions will be made democratically by the organisers and advisory board, with the Chairs having the casting vote in the case of ties.

Rule 3

The organisers reserve the right to change the rules at any time. Any change of rules will be accompanied by a general email to all participants.



Algorithm Design and Implementation

Rule 5

Participants should implement a hyper-heuristic algorithm to tackle the test instances on the provided problem domains. Specifically, they should implement in Java a subclass of the provided hyper-heuristic abstract class, and then submit the Java source code. This class will be run on five instances of each of the four public problem domains, and also on five instances of the hidden domains (to be used at the competition).
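For illustration, the following is a minimal sketch of what such a subclass might look like, assuming the class and method names of the HyFlex distribution (HyperHeuristic, ProblemDomain, hasTimeExpired(), applyHeuristic(), and so on). It implements a simple random selection strategy with a non-worsening acceptance criterion and is intended only to show the required structure, not to serve as a competitive entry.

import AbstractClasses.HyperHeuristic;
import AbstractClasses.ProblemDomain;

/**
 * Minimal sketch of a competition entry: repeatedly applies a randomly chosen
 * low level heuristic and keeps the new solution only if it does not worsen
 * the objective value.
 */
public class SimpleRandomHyperHeuristic extends HyperHeuristic {

    public SimpleRandomHyperHeuristic(long seed) {
        super(seed); // the seed initialises the protected 'rng' random generator
    }

    @Override
    protected void solve(ProblemDomain problem) {
        problem.setMemorySize(2);       // slot 0: current solution, slot 1: candidate
        problem.initialiseSolution(0);  // create an initial solution in slot 0
        double current = problem.getFunctionValue(0);

        // hasTimeExpired() must be polled regularly so that the run stops
        // exactly at the allotted time (see Rule 10).
        while (!hasTimeExpired()) {
            int h = rng.nextInt(problem.getNumberOfHeuristics());
            double candidate = problem.applyHeuristic(h, 0, 1); // apply to slot 0, store result in slot 1
            if (candidate <= current) { // accept non-worsening moves only
                problem.copySolution(1, 0);
                current = candidate;
            }
        }
    }

    @Override
    public String toString() {
        return "SimpleRandomHyperHeuristic";
    }
}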

Rule 6

A submission must only consist of Java classes. For example, Java code that calls 'C' libraries will not be allowed.

Rule 7

The competition will run using the default "-client" option when running Java (as opposed to "-server").

Rule 8

Competitors must not make use of multithreading in Java (e.g. extending from the 'Thread' class, or implementing 'Runnable'). Only the main thread may be used. This is mainly to ensure accurate timing within the competition software. Multithreading may be considered in a future edition of this challenge.

Rule 9

In order to run the competing hyper-heuristics across different domains, the HyFlex framework should be used. Each of the problem domains provided through HyFlex contains a number of challenging training instances. The competition will use three of the training instances provided (yet to be selected). It will also use two hidden instances. Moreover, at least two additional hidden domains will be added. See Rule 12 and the section entitled 'Challenge Description' for more details.
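As an illustration of how a run is set up through HyFlex, the snippet below loads one instance of one of the public domains and runs a hyper-heuristic on it for a fixed time limit. The class SimpleRandomHyperHeuristic is the hypothetical entry sketched under Rule 5, the package and method names (SAT, setTimeLimit(), loadProblemDomain(), run(), getBestSolutionValue()) are assumed to follow the HyFlex distribution, and the seed and time values are purely illustrative.

import AbstractClasses.HyperHeuristic;
import AbstractClasses.ProblemDomain;
import SAT.SAT; // one of the four public domains in the HyFlex distribution

public class RunExample {
    public static void main(String[] args) {
        long seed = 1234L; // illustrative seed

        // the problem domain and the hyper-heuristic are seeded separately
        ProblemDomain problem = new SAT(seed);
        problem.loadInstance(0);            // instance index within the domain

        HyperHeuristic hh = new SimpleRandomHyperHeuristic(seed);
        hh.setTimeLimit(600000);            // nominal time in milliseconds; use the benchmarked value (Rule 10)
        hh.loadProblemDomain(problem);
        hh.run();                           // blocks until the time limit is reached

        System.out.println(hh + " best objective value: " + hh.getBestSolutionValue());
    }
}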

Rule 10

Participants have to benchmark their machine with the program provided in order to determine how much time their program is allowed to run on that machine.

Rule 11

The same version of the algorithm must be used for all instances and all domains. That is, the algorithm should not "know" which domain/instance it is solving. While your algorithm may analyse the behaviour of the low level heuristics on the problem instance and set parameters accordingly, it should not "recognise" the particular instance.

Before the competition, the organisers will randomly re-assign the indices of the problem domains, and the instances within them. The indices of the low level heuristics (within each type) will also be randomised, so that the hyper-heuristic will not know which low level heuristic is associated with each index before the run. The idea is that the hyper-heuristics should use information about the performance of the low level heuristics as the run progresses. This is also the reason why hidden domains are to be added, as nothing will be known about their instances or low level heuristics before the competition.

There is a definite difference between the following two strategies (a sketch of the second, adaptive style is given after this list).
  1. "On instance 1, apply heuristic 2, then 3, then 5, then 2, then 3…"
  2. "If the solution has not changed for 10 seconds, apply heuristic 3" or "If the search landscape of this instance seems to be very flat, apply a heuristic which performs a large mutation".
The first strategy is not in the spirit of the competition, and it should be rendered useless by the random re-assignment of the instance and heuristic indices.
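The sketch below illustrates the second, adaptive style: the choice of heuristic depends only on behaviour observed during the run (here, stagnation of the best objective value), never on the identity of the instance or of a particular heuristic index. The HeuristicType grouping and the other method names are assumed to follow the HyFlex distribution, and the 10-second threshold is purely illustrative.

import AbstractClasses.HyperHeuristic;
import AbstractClasses.ProblemDomain;
import AbstractClasses.ProblemDomain.HeuristicType;

public class AdaptiveSketch extends HyperHeuristic {

    public AdaptiveSketch(long seed) {
        super(seed);
    }

    @Override
    protected void solve(ProblemDomain problem) {
        // heuristic indices grouped by type; a full entry should also handle
        // domains where a given type is absent (the returned array may be null)
        int[] mutations = problem.getHeuristicsOfType(HeuristicType.MUTATION);
        int n = problem.getNumberOfHeuristics();

        problem.setMemorySize(2);
        problem.initialiseSolution(0);
        double best = problem.getFunctionValue(0);
        long lastImprovement = System.currentTimeMillis();

        while (!hasTimeExpired()) {
            boolean stuck = System.currentTimeMillis() - lastImprovement > 10000; // no improvement for 10 s
            int h;
            if (stuck && mutations != null && mutations.length > 0) {
                h = mutations[rng.nextInt(mutations.length)]; // large perturbation to escape stagnation
            } else {
                h = rng.nextInt(n);                           // otherwise, any low level heuristic
            }

            double value = problem.applyHeuristic(h, 0, 1);
            if (value < best) {
                best = value;
                lastImprovement = System.currentTimeMillis();
            }
            problem.copySolution(1, 0); // this sketch always continues from the new solution
        }
    }

    @Override
    public String toString() {
        return "AdaptiveSketch";
    }
}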

Rule 12

Participants should also submit a concise and clear description of their algorithm, so that in principle others can implement it. This description may be very short if the hyper-heuristic is very simple. This is a fundamental part of a competitor's submission.

Adjudication

Rule 13

Our intention is to calculate the scores based on a "typical" run of the algorithms. Therefore, we plan to run each competing algorithm a number of times with different random seeds for each instance. These seeds will be re-used for each algorithm and instance, so the competing algorithms will start from the same initial conditions on each "event". From this set of runs per instance, we will take the median value. These medians will then be used to calculate the scores.
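To make the procedure concrete, the sketch below runs a hypothetical entry several times on one instance with a fixed set of seeds and reports the median objective value. The seed values and the number of runs are illustrative only, and the class names are again assumed to follow the HyFlex distribution and the earlier sketches.

import java.util.Arrays;

import AbstractClasses.HyperHeuristic;
import AbstractClasses.ProblemDomain;
import SAT.SAT;

public class MedianRunExample {
    public static void main(String[] args) {
        long[] seeds = {11L, 22L, 33L, 44L, 55L}; // the same seeds are reused for every competing algorithm
        double[] results = new double[seeds.length];

        for (int run = 0; run < seeds.length; run++) {
            ProblemDomain problem = new SAT(seeds[run]);
            problem.loadInstance(0);

            HyperHeuristic hh = new SimpleRandomHyperHeuristic(seeds[run]);
            hh.setTimeLimit(600000); // replace with the benchmarked time for your machine
            hh.loadProblemDomain(problem);
            hh.run();
            results[run] = hh.getBestSolutionValue();
        }

        Arrays.sort(results);
        double median = results[results.length / 2]; // middle value of an odd number of runs
        System.out.println("Median objective value over " + seeds.length + " runs: " + median);
    }
}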

Competitors' final rankings will be based on their total points. The scoring (and tie-break) system used is inspired by that used in Formula 1. It is a relative system, so the outcome will depend on the number and quality of the competing algorithms. The organisers will run the competing algorithms and produce the scores. A detailed description of this system can be found in the section entitled 'Scoring System'.

Rule 14

In some circumstances, finalists may be required to show their source code to the organisers; for the winners this will be a matter of routine. This is simply to check that they have adhered to the rules, and the code will be treated in the strictest confidence.

Rule 15

Entries from participating organising partners will not be permitted. However, results from organising partners who choose to work on the problems will be presented for comparison.

Rule 16

Each participant may submit only one entry. Teams are allowed as participants. Several participants from the same institution/research group are allowed, provided that their entries are sufficiently different. We reserve the right to check the source code for similarity in this case.

Leaderboard

Rule 17

Participants can submit their results on the 10 training instances of the 4 open domains in order to be included in the Leaderboard. The submission details and format are described in the Registration and Submission page.

Rule 18

Each participant may submit a maximum of two Leaderboard entries per week.

Rule 19

The Leaderboard will include the scores from the two best entries from each participant/team. If a participant/team already has two entries in the leaderboard, and submits a better set of results, then this will replace their worst entry in the Leaderboard.

Rule 20

The Leaderboard will be updated as new submissions are received, at most once per week.

Rule 21

Participants should benchmark their machine with the program provided in order to determine how much time their program is allowed to run on that machine (see Rule 10). They should run their algorithms for the allotted time and no more.

Rule 22

Participants should remember that in the final competition, we aim to calculate the scores according to a "typical" run of the algorithms (see Rule 13). Therefore, their Leaderboard submissions should follow the procedure indicated in Rule 13 as closely as possible.

Rule 23

Participants should bear in mind that the Leaderboard indicates the performance of different algorithms but is not the final arbiter for deciding the winner of the competition prize. The final competition will also consider hidden instances and hidden domains. Furthermore, all competing submissions will be run on a standard machine, thereby creating a "level playing field".


Last Updated: 14 June 2011, by Gabriela Ochoa.