Competition on Evolutionary Computation in MultiLabel Adversarial Examples


The "Competition on Evolutionary Computation in MultiLabel Adversarial Examples" will be held as a part of the 2026 IEEE World Congress on Computational Intelligence (IEEE WCCI 2026), June 21 - June 26, 2026, MECC Maastricht, the Netherlands


Overview

Artificial intelligence (AI), exemplified by deep learning, has permeated fields ranging from industrial production to everyday life and scientific research, and has become one of the core technological engines driving the progress of human society. AI automates repetitive labor and pushes the boundaries of human cognition.

However, a large body of work has demonstrated that AI models face serious security and privacy threats, such as adversarial examples. Adversarial examples -- carefully crafted inputs that mislead AI systems -- pose significant threats to the reliability and safety of these models. By investigating adversarial examples and developing efficient attack-based testing methodologies, we can not only assess the security of models but also enhance their intrinsic trustworthiness.

The generation of adversarial examples is a typical large-scale optimization problem, for which evolutionary algorithms have shown remarkable effectiveness. In this competition, we focus on a challenging category of adversarial examples -- multi-label adversarial examples targeting deep neural networks; an illustrative sketch of this formulation follows below.
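For illustration only, the following minimal Python sketch casts an untargeted multi-label attack as a black-box optimization problem and solves it with a simple (1+1) evolution strategy. The toy linear classifier, the fitness function, and the L-infinity perturbation budget are assumptions for the sketch and are not part of the competition specification.

import numpy as np

# Minimal sketch of a (1+1) evolutionary attack on a multi-label classifier.
# The classifier below is a stand-in; in the competition setting it would be a
# deep neural network returning one probability per label.

rng = np.random.default_rng(0)
NUM_LABELS = 5
W = rng.normal(size=(NUM_LABELS, 32))  # toy linear "model" weights (placeholder)

def predict(x):
    """Return per-label probabilities for input vector x (placeholder model)."""
    return 1.0 / (1.0 + np.exp(-W @ x))

def attack_loss(x_adv, x_orig, true_labels, eps=0.1):
    """Fitness: push the originally positive labels below the 0.5 threshold
    while penalising perturbations outside an L-infinity budget of eps."""
    probs = predict(x_adv)
    label_term = np.sum(np.maximum(probs[true_labels] - 0.5, 0.0))
    budget_term = max(np.max(np.abs(x_adv - x_orig)) - eps, 0.0)
    return label_term + 10.0 * budget_term

def evolve(x_orig, true_labels, iters=2000, sigma=0.02):
    """(1+1) evolution strategy: keep a mutated candidate only if it improves fitness."""
    x_adv = x_orig.copy()
    best = attack_loss(x_adv, x_orig, true_labels)
    for _ in range(iters):
        candidate = x_adv + sigma * rng.normal(size=x_orig.shape)
        loss = attack_loss(candidate, x_orig, true_labels)
        if loss < best:
            x_adv, best = candidate, loss
    return x_adv, best

x = rng.normal(size=32)
labels = np.where(predict(x) > 0.5)[0]  # labels currently predicted positive
x_adv, final_loss = evolve(x, labels)
print("original positive labels:  ", labels)
print("adversarial positive labels:", np.where(predict(x_adv) > 0.5)[0])
print("final loss:", final_loss)

Real competition entries would replace the placeholder model with the provided multi-label networks and may use population-based evolutionary algorithms rather than this single-candidate strategy.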

The goals of this competition are: (1) from the perspective of multi-label adversarial examples, to explore and advance the development of evolutionary algorithms for testing the security of AI models, and (2) to promote research into understanding the security level and boundaries of AI models, thereby establishing a performance baseline for testing multi-label learning models.


Download


Submission


Important Dates


Results


Organizers


© Machine Intelligence Laboratory --- HITSZ