A major challenge for supercomputing today is to demonstrate how advances in HPC technology translate into accelerated progress in key application domains – especially with respect to reductions in the “time to solution” and the “energy to solution” of advanced codes that model complex physical systems. To address effectively the extreme concurrency present in modern supercomputing hardware, one of the most effective programming methodologies has been the hybrid MPI/OpenMP approach, which combines distributed-memory message passing across nodes with shared-memory multithreading within them to engage very large numbers of processor cores efficiently. This presentation describes the deployment of scalable scientific software for extreme-scale applications – with a focus on Fusion Energy Science as an illustrative application domain.
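To make the hybrid programming model referred to above concrete, the following minimal sketch shows the pattern in C: each MPI rank owns a block of particles and advances them with an OpenMP-threaded loop. It is an illustrative example only, not GTC-P source; the array sizes and the push_particle routine are hypothetical stand-ins for a PIC particle-advance kernel.

    /* Minimal hybrid MPI/OpenMP sketch (illustrative only, not GTC-P code). */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    #define NLOCAL 1000000                 /* particles owned by this rank (assumed) */

    static double x[NLOCAL], v[NLOCAL];    /* hypothetical particle positions/velocities */

    static void push_particle(int i, double dt)   /* hypothetical push kernel */
    {
        x[i] += v[i] * dt;
    }

    int main(int argc, char **argv)
    {
        int provided, rank, nranks;
        /* distributed-memory parallelism across nodes via MPI */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);

        double dt = 1.0e-3;
        /* shared-memory multithreading within each rank via OpenMP */
        #pragma omp parallel for schedule(static)
        for (int i = 0; i < NLOCAL; i++)
            push_particle(i, dt);

        if (rank == 0)
            printf("%d ranks x %d threads advanced %d particles each\n",
                   nranks, omp_get_max_threads(), NLOCAL);

        MPI_Finalize();
        return 0;
    }

In this style the MPI rank count is matched to the number of nodes or sockets while OpenMP threads fill the cores within each, which is one common way such codes reduce communication overhead at very large scale.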
Computational advances in magnetic fusion energy research have produced particle-in-cell (PIC) simulations of turbulent kinetic dynamics whose run time and problem size scale very well with the number of processors on massively parallel many-core supercomputers. For example, the GTC-Princeton (GTC-P) code, developed with a “co-design” focus, has demonstrated effective use of the full power of current leadership-class computational platforms worldwide at the petascale and beyond, producing efficient nonlinear PIC simulations that have advanced understanding of the complex nature of plasma turbulence and confinement in fusion systems [1, 2]. These results provide strong encouragement that increasingly realistic dynamics can be included in extreme-scale computing campaigns, with the goal of enabling predictive simulations of unprecedented physics realism needed to accelerate progress toward delivering clean energy via magnetically confined fusion systems. In particular, the future challenges associated with achieving further improvements in portability and performance scalability through deployment of OpenMP 4.0 and OpenACC will be discussed.
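As a brief illustration of the directive-based portability path mentioned above, the sketch below expresses the same hypothetical particle-push loop in the two styles the abstract names: OpenACC and OpenMP 4.0 device offload. It is a sketch under stated assumptions, not GTC-P code; the function names and arrays are illustrative.

    /* Illustrative directive-based offload of a hypothetical particle-push loop. */

    void push_openacc(double *x, const double *v, int n, double dt)
    {
        /* OpenACC: move the particle arrays to the accelerator and parallelize the loop */
        #pragma acc parallel loop copy(x[0:n]) copyin(v[0:n])
        for (int i = 0; i < n; i++)
            x[i] += v[i] * dt;
    }

    void push_omp4_target(double *x, const double *v, int n, double dt)
    {
        /* OpenMP 4.0: the same loop expressed with target/teams device directives */
        #pragma omp target map(tofrom: x[0:n]) map(to: v[0:n])
        #pragma omp teams distribute parallel for
        for (int i = 0; i < n; i++)
            x[i] += v[i] * dt;
    }

The appeal of this approach for portability is that a single annotated source can be built for multicore hosts or attached accelerators, with the compiler and runtime handling data movement described by the map and copy clauses.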