Dynamic and multimodal features are two core properties that widely exist in real-world optimization problems. The former means that the objectives and/or constraints of a problem change over time, while the latter means that there is more than one optimal solution (sometimes including acceptable local solutions) in each environment. Dynamic multimodal optimization problems (DMMOPs), which have both of these characteristics, have been proposed recently and have attracted increasing attention. Solving such problems requires optimization algorithms to find multiple optima simultaneously in changing environments, so that decision makers can pick out one optimal solution in each environment according to their experience and preferences. In this competition, a test suite of DMMOPs that models real-world applications is given. Specifically, this test suite adopts 8 multimodal functions and 8 change modes to construct 24 typical dynamic multimodal optimization problems. Moreover, a metric is given to measure algorithm performance, which considers the average number of optimal solutions found over all environments.
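As a rough sketch, this metric can be expressed as a peak ratio averaged over environments; the symbols below (N_env, NPF_t, NKP_t) are illustrative and not taken from the official technical report:

\[
\mathrm{PR} = \frac{1}{N_{\mathrm{env}}} \sum_{t=1}^{N_{\mathrm{env}}} \frac{\mathrm{NPF}_t}{\mathrm{NKP}_t},
\]

where NPF_t is the number of known optima found (to the given accuracy) in environment t, NKP_t is the total number of known optima in that environment, and N_env is the number of environments.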
In the competition, the optimization algorithms are tested on the 24 benchmark problems constructed from the 8 multimodal functions and 8 change modes. These problems are divided into three groups. When the environment changes, the problems in the first group have different basic multimodal functions but share the same change mode, while the problems in the second group have the same basic multimodal function but different change modes. The third group is used to test algorithm performance on relatively high-dimensional problems. In all cases, the metric described above, based on the average number of optimal solutions found over all environments, is used to measure algorithm performance.
Participants must submit their experimental results on all benchmark problems at three accuracy levels: 1e-3, 1e-4, and 1e-5. Each problem is tested 30 times, with the random seed set to the corresponding run index from 1 to 30. For each problem, the best, worst, and average values of the peak ratio achieved by the optimization algorithm should be recorded. The results for all problems need to be summarized in a table and sent to xlin@nuist.edu.cn.
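For illustration, the following minimal Python sketch shows how these statistics could be collected under the stated protocol; summarize_runs and dummy_run are hypothetical placeholders, not part of the official benchmark code.

```python
import numpy as np

def summarize_runs(run_once, problem_id, accuracy, n_runs=30):
    """Execute `run_once` with seeds 1..n_runs and summarize the peak ratios.

    `run_once(problem_id, accuracy, seed)` is a hypothetical placeholder for
    the participant's optimizer plus the benchmark's peak-ratio scoring; it
    should return the peak ratio achieved in a single run.
    """
    ratios = np.array([run_once(problem_id, accuracy, seed)
                       for seed in range(1, n_runs + 1)])
    return {"best": float(ratios.max()),
            "worst": float(ratios.min()),
            "average": float(ratios.mean())}

# Illustrative stand-in: a random "optimizer" used only to exercise the loop.
def dummy_run(problem_id, accuracy, seed):
    rng = np.random.default_rng(seed)  # seed fixed to the run index
    return rng.uniform(0.5, 1.0)       # fake peak ratio for demonstration

for acc in (1e-3, 1e-4, 1e-5):
    print(acc, summarize_runs(dummy_run, problem_id=1, accuracy=acc))
```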
The data results and a brief analysis report (1-2 pages) should be provided and sent to xlin@nuist.edu.cn. Note that the authors of the top-ranked algorithm are required to release its source code after the competition ends, and this code will be open to all researchers. Authors can upload their source code to GitHub or another platform, and links to the source code will be provided on this webpage.
This competition does not require papers, but we encourage participants to submit papers to CEC 2024.