Dynamic and multimodal features are two core properties that widely exist in real-world optimization problems. The former means that the objectives and/or constraints of a problem change over time, and the latter means that there is more than one optimal solution (sometimes including acceptable local optima) in each environment. Dynamic multimodal optimization problems (DMMOPs), which have both characteristics, have been proposed recently and are attracting more and more attention. Solving such problems requires optimization algorithms to find multiple optima simultaneously in changing environments, so that decision makers can pick one optimal solution in each environment according to their experience and preferences. In this competition, a test suite of DMMOPs that models real-world applications is given. Specifically, the test suite combines 8 multimodal functions and 8 change modes to construct 24 typical dynamic multimodal optimization problems. Moreover, a metric that considers the average number of optimal solutions found over all environments is given to measure algorithm performance.

In the competition, the optimization algorithms are tested on 24 benchmark problems constructed from 8 multimodal functions and 8 change modes. These problems are divided into three groups. When the environment changes, the problems in the first group have different basic multimodal functions but change in the same mode, while the problems in the second group have the same basic multimodal function but different change modes. The third group tests algorithm performance on relatively high-dimensional problems. Algorithm performance is measured by a metric based on the average number of optimal solutions found over all environments.
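The averaging described above can be sketched as follows. This is a minimal illustration, not the official scoring code: it assumes the number of optima found and the total number of known optima are available for each environment, and the function name is ours (the exact counting rules are defined in the technical report).

```python
def peak_ratio(found_per_env, total_per_env):
    """Average fraction of known optima found, taken over all environments.

    found_per_env[i] -- number of optima the algorithm found in environment i
    total_per_env[i] -- total number of known optima in environment i
    """
    ratios = [found / total for found, total in zip(found_per_env, total_per_env)]
    return sum(ratios) / len(ratios)

# Example: 2 of 4 optima found in the first environment, all 4 in the second.
print(peak_ratio([2, 4], [4, 4]))  # → 0.75
```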

Participants must submit their experimental results on all benchmark problems at three accuracy levels: 1e-3, 1e-4 and 1e-5. Each problem is tested 30 times, with the random seed set to the run index from 1 to 30. For each problem, the best, worst and average peak ratio achieved by the optimization algorithm should be recorded. The results for all problems should be summarized in a table and sent to luowenjian@hit.edu.cn. We expect around 20 participants in this competition.
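A minimal sketch of the required bookkeeping, assuming one peak-ratio value per run for a given problem and accuracy level (the function name and the three-run example are illustrative; the competition requires 30 runs):

```python
def summarize_runs(peak_ratios):
    """Reduce the per-run peak ratios of one problem to the three
    values to be reported: best, worst and average."""
    best = max(peak_ratios)
    worst = min(peak_ratios)
    average = sum(peak_ratios) / len(peak_ratios)
    return best, worst, average

# Example: three runs of one problem at one accuracy level.
print(summarize_runs([0.50, 1.00, 0.75]))  # → (1.0, 0.5, 0.75)
```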

This competition is supported by the IEEE CIS ECTC Task Force on Evolutionary Computation in Dynamic and Uncertain Environments.


  • Technical Report

This document presents the details of the competition on seeking multiple optima in dynamic environments.


  • Source Code

This project provides the source code for the competition on seeking multiple optima in dynamic environments.



  • Submit Instructions

The result data and a brief analysis report (1-2 pages) should be sent to luowenjian@hit.edu.cn. Note that the top three algorithms are required to provide their source code after the end of the competition, and this source code should be open to all researchers. The authors can upload their source code to GitHub or another platform, and the links will be provided on this webpage.

This competition does not require papers, but we encourage participants to submit papers to CEC 2022.

  • Important Dates

  • Deadline for submitting results: June 1, 2022
  • CEC 2022 Conference: July 18-23, 2022


  • Wenjian Luo, Harbin Institute of Technology, Shenzhen, China, Email: luowenjian@hit.edu.cn
  • Xin Lin, University of Science and Technology of China, Hefei, China, Email: iskcal@mail.ustc.edu.cn
  • Changhe Li, China University of Geosciences, Wuhan, China, Email: change.lw@gmail.com
  • Shengxiang Yang, De Montfort University, Leicester LE1 9BH, United Kingdom, Email: syang@dmu.ac.uk
  • Yuhui Shi, Southern University of Science and Technology, Shenzhen, China, Email: shiyh@sustech.edu.cn