Description of the Challenge 

This challenge differs from other competitions in search and optimisation in that it measures performance across several problem domains rather than on a single one. Currently, the HyFlex framework provides the following four test domains (follow the links for more details on each):
  1. Boolean Satisfiability (MAX-SAT)
  2. One Dimensional Bin Packing
  3. Permutation Flow Shop
  4. Personnel Scheduling (see also the Staff Rostering Benchmark Data Sets)
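
Because every domain implements the same HyFlex ProblemDomain interface, a hyper-heuristic can be run unchanged on any of them. The sketch below is loosely modelled on the example code shipped with the public HyFlex distribution; the package and class names (AbstractClasses.ProblemDomain, AbstractClasses.HyperHeuristic, SAT.SAT) and methods are assumptions taken from that distribution, and MyHyperHeuristic is a hypothetical competitor class, not part of this announcement.

```java
import AbstractClasses.HyperHeuristic;
import AbstractClasses.ProblemDomain;
import SAT.SAT;

// Hypothetical competitor: repeatedly applies a random low-level heuristic.
public class MyHyperHeuristic extends HyperHeuristic {

    public MyHyperHeuristic(long seed) { super(seed); }

    @Override
    protected void solve(ProblemDomain problem) {
        problem.initialiseSolution(0); // build an initial solution in memory slot 0
        while (!hasTimeExpired()) {
            // pick a low-level heuristic uniformly at random and apply it in place
            int h = rng.nextInt(problem.getNumberOfHeuristics());
            problem.applyHeuristic(h, 0, 0);
        }
    }

    @Override
    public String toString() { return "MyHyperHeuristic"; }

    public static void main(String[] args) {
        long seed = 1234L;                     // competition seeds are fixed and shared across entrants
        ProblemDomain problem = new SAT(seed); // any other domain subclass works identically
        problem.loadInstance(0);               // load one benchmark instance of this domain

        HyperHeuristic hh = new MyHyperHeuristic(seed);
        hh.setTimeLimit(600_000);              // e.g. a ten-minute budget, in milliseconds (assumed)
        hh.loadProblemDomain(problem);
        hh.run();                              // runs solve() until the time limit expires

        System.out.println("best objective value: " + hh.getBestSolutionValue());
    }
}
```

Swapping SAT for another domain class changes nothing else in the code; this cross-domain portability is exactly what the competition is designed to measure.
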
For the competition, five instances from each of these four test domains will be selected. The selection will be as follows:
  • Three instances from the test set provided for each domain, and
  • Two new instances (not in the test set) to be used at the competition.
Additionally, at least two hidden domains will be considered at the competition, each contributing a further five instances. These additional domains will be revealed only after the competition has been completed.

Our intention is to calculate the scores based on a "typical" run of each algorithm. We therefore plan to run each competing algorithm several times on every instance, using a different random seed for each run. The same seeds will be re-used for every algorithm and instance, so the competing algorithms start from identical initial conditions on each run. From these runs we will take the median objective value per instance, and these medians will then be used to calculate the scores. A minimal sketch of this protocol follows.
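
For concreteness, here is one way the median-of-seeded-runs scoring could be implemented. This is an illustration only, not the organisers' evaluation harness: medianOverSeeds and singleRun are hypothetical names, the seed values are placeholders, and a real singleRun would perform a complete HyFlex run, as in the earlier sketch, and return its best objective value.

```java
import java.util.Arrays;
import java.util.function.LongToDoubleFunction;

public class MedianScore {

    // Median of the best objective values over the fixed, shared seeds.
    // 'singleRun' stands in for a complete run of one algorithm on one
    // instance with the given seed (hypothetical abstraction).
    static double medianOverSeeds(long[] seeds, LongToDoubleFunction singleRun) {
        double[] results = new double[seeds.length];
        for (int i = 0; i < seeds.length; i++) {
            results[i] = singleRun.applyAsDouble(seeds[i]); // same seeds for every competitor
        }
        Arrays.sort(results);
        int n = results.length;
        // Odd number of runs: middle value; even: average of the two middle values.
        return (n % 2 == 1) ? results[n / 2]
                            : (results[n / 2 - 1] + results[n / 2]) / 2.0;
    }

    public static void main(String[] args) {
        long[] seeds = {11L, 22L, 33L, 44L, 55L}; // illustrative; the real seeds are chosen by the organisers
        // Dummy stand-in for a real run, so the sketch is executable on its own.
        LongToDoubleFunction dummyRun = seed -> (double) (seed % 7);
        System.out.println("median = " + medianOverSeeds(seeds, dummyRun));
    }
}
```

Because the seed list is shared, any difference between competitors' medians reflects the algorithms themselves rather than lucky draws of initial conditions.
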


Last Updated: 25 May 2011, by Gabriela Ochoa