How can AI workloads be engineered for optimal performance in modern HPC environments?
The rapid advancement of Artificial Intelligence (AI) and Machine Learning (ML) has positioned High-Performance Computing (HPC) systems as indispensable platforms for developing, training, and executing these workloads. However, the architectural complexity and batch-oriented design of traditional HPC systems pose unique challenges distinct from those encountered in resource-elastic environments such as clouds.
The parallelization characteristics, input/output requirements, and dynamic workflows of AI workloads demand innovative techniques for efficient utilization of HPC resources. Moreover, the performance engineering of such workloads is crucial to achieve scalability, portability, and reproducibility across diverse system architectures.
This workshop aims to bring together researchers, practitioners, and system developers to discuss engineering challenges, performance optimization, and emerging opportunities at the intersection of AI and HPC. It invites, among others, papers that present experimental results, architectural insights, performance studies, and best practices advancing the convergence of these domains.
Important dates
- Submission deadline: March 2, 2026 (AoE)
- Notification of acceptance: April 13, 2026
- Camera-ready deadline: May 11, 2026 (AoE)
- Workshop day: June 26, 2026