Competition on "Multiparty Multiobjective Optimization" at the 2024 IEEE Congress on Evolutionary Computation (IEEE CEC 2024)

June 30 - July 5, 2024, Yokohama, Japan


Overview

Multiparty multiobjective optimization problems (MPMOPs) are multiobjective optimization problems that involve multiple decision makers. In general, there are at least two decision makers, i.e., two parties, and each party has at least two conflicting optimization objectives. MPMOPs arise in many real-world applications. However, such problems are often treated as many-objective optimization problems: the objectives of the different decision makers are simply pooled together, and the multiple decision makers are merged into a single role. Therefore, multiparty multiobjective optimization problems, as a new type of optimization problem involving multiple decision makers, are worthy of study.
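
For illustration, an MPMOP with M parties can be written in the following commonly used form (the notation below is assumed here for exposition only):

    \min_{x \in \Omega} \; F_i(x) = \big( f_{i,1}(x), f_{i,2}(x), \ldots, f_{i,m_i}(x) \big), \qquad i = 1, 2, \ldots, M,

where M >= 2 is the number of parties (decision makers), the i-th party has m_i >= 2 conflicting objectives f_{i,j}, and \Omega is the decision space shared by all parties. Each party has its own Pareto optimal set, and the aim is to find solutions that are as satisfactory as possible for all parties.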

The goal of this competition is to promote the development of algorithms for MPMOPs. The test suite of this competition consists of two parts. The first part contains eleven problems with common Pareto optimal solutions; that is, for each of these eleven problems, the Pareto optimal sets of all parties have a non-empty intersection, and this intersection is provided for algorithm evaluation. The second part contains six biparty multiobjective UAV path planning (BPMO-UAVPP) problems, whose optimal solutions are unknown.

In this competition, the optimization algorithms are evaluated on all the test problems. For the problems with known Pareto optimal solutions, the metric is the multiparty inverted generational distance (MPIGD); for the problems without known solutions, the metric is the multiparty hypervolume (MPHV). Each problem is run 30 times, with the random seed set to the run index from 1 to 30. For each problem, the best, worst, and average MPIGD (or MPHV) values should be recorded, and all algorithms are ranked according to their average MPIGD (or MPHV). Finally, each algorithm's average rank over all problems is used to evaluate and compare its performance.
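
As an illustration of this evaluation protocol, the sketch below shows how the 30 seeded runs, the per-problem statistics, and the final average ranking could be computed. The functions run_algorithm and compute_metric are hypothetical placeholders, not part of the official evaluation code, and tie handling is omitted for brevity.

import numpy as np

def evaluate(algorithms, problems, run_algorithm, compute_metric, n_runs=30):
    # A minimal sketch of the competition evaluation protocol (assumptions noted above).
    stats = {}                                        # (alg, prob) -> (min, max, mean) over 30 runs
    avg = np.zeros((len(algorithms), len(problems)))  # average metric value per algorithm/problem
    for a, alg in enumerate(algorithms):
        for p, prob in enumerate(problems):
            values = []
            for seed in range(1, n_runs + 1):         # random seed set to the run index 1..30
                solutions = run_algorithm(alg, prob, seed=seed)
                values.append(compute_metric(prob, solutions))
            values = np.asarray(values)
            # Record minimum, maximum, and mean; which of min/max is the "best"
            # value depends on the metric (lower is better for MPIGD, higher for MPHV).
            stats[(alg, prob)] = (values.min(), values.max(), values.mean())
            avg[a, p] = values.mean()
    # Rank algorithms on each problem by average metric value
    # (ascending order shown here, as appropriate for MPIGD; MPHV would be descending).
    ranks = avg.argsort(axis=0).argsort(axis=0) + 1
    avg_rank = ranks.mean(axis=1)                     # average rank over all problems
    return stats, dict(zip(algorithms, avg_rank))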


Download


Submission


Important Dates


Results


Organizers


© Machine Intelligence Laboratory --- HITSZ