Bulletin of the American Physical Society
APS April Meeting 2023
Volume 68, Number 6
Minneapolis, Minnesota (Apr 15-18)
Virtual (Apr 24-26); Time Zone: Central Time
Session EE04: V: Exascale Computational Astrophysics (Invited)
Sponsoring Units: DAP DCOMP
Chair: Bronson Messer, Oak Ridge National Lab
Room: Virtual Room 4
Monday, April 24, 2023 1:00PM - 1:30PM
EE04.00001: A Grand-Challenge Galaxy Simulation
Invited Speaker: Evan Schneider
Recent years have witnessed enormous gains in the complexity of numerical astrophysics simulations and in the computational power of the machines that run them. Only a few decades ago, models of galaxy formation and evolution relied on calculations with a few million cells or particles; now those numbers typically exceed billions. With the advent of modern GPU-based machines, such as Frontier at Oak Ridge National Lab, the first machine to break the exascale barrier, a new opportunity arises to increase resolution by further orders of magnitude, provided the software algorithms can keep up. In this talk, I will describe our work to prepare the astrophysics code Cholla to run a "grand challenge" trillion-cell galaxy simulation on Frontier. With a domain resolution of 10,000³ cells, this simulation will be the first to capture the cycle of star formation, supernova explosions, and galaxy outflows on the scale of our own Milky Way galaxy. The simulation will additionally produce hundreds of terabytes of data to be compared with the most detailed surveys available. Combining these results promises to help us answer fundamental questions about how galaxies form, grow, and evolve throughout cosmic history.
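To connect the numbers above, a back-of-the-envelope estimate (the field count and precision here are illustrative assumptions, not figures from the abstract):

    10,000³ = 10¹² cells, i.e. one trillion.
    Assuming ~8 hydrodynamic fields per cell stored in single precision,
    one snapshot is roughly 10¹² cells × 8 fields × 4 B ≈ 32 TB,
    so a few dozen outputs readily reach the hundreds of terabytes quoted above.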
Monday, April 24, 2023 1:30PM - 2:00PM
EE04.00002: CRK-HACC: Exascale Simulations for Modern Cosmological Surveys
Invited Speaker: Nick Frontiere
Numerical simulations play a vital role in precision cosmology. Current and next-generation large-scale structure surveys of the Universe involve measurements at extremely low levels of statistical uncertainty, accompanied by high-resolution data. Simulations that capture the extensive dynamic range of cosmological scales are required to provide commensurate theoretical predictions. Exascale computing delivers a powerful new tool for performing the state-of-the-art simulations needed to meet the growing demand for large-volume, high-fidelity runs. We will present the CRK-HACC framework, a cosmology code built to run performantly on all modern GPU-accelerated supercomputers and to take full advantage of the computational capabilities of exascale machines.
Monday, April 24, 2023 2:00PM - 2:30PM
EE04.00003: Parthenon - A Performance Portable Block-Structured Adaptive Mesh Refinement Framework
Invited Speaker: Forrest Glines
On the path to exascale, the landscape of computer device architectures and corresponding programming models has become much more diverse. While various low-level performance-portable programming models are available, support at the application level lags behind. To address this issue, we present Parthenon, a performance-portable block-structured adaptive mesh refinement (AMR) framework derived from the well-tested and widely used Athena++ astrophysical magnetohydrodynamics code, but generalized to serve as the foundation for a variety of downstream multi-physics codes. Parthenon adopts the Kokkos programming model and provides several levels of abstraction, including multi-dimensional variables, packages defining multi-physics components, and compute kernels on device architectures. Parthenon allocates all data in device memory to reduce data movement, supports the logical packing of variables and mesh blocks to reduce the number of kernels and thus mitigate kernel-launch overhead, and employs one-sided, asynchronous MPI calls to reduce communication overhead in multi-node simulations. Using a hydrodynamics miniapp, we demonstrate weak and strong scaling on various architectures, including AMD and NVIDIA GPUs, Intel and AMD x86 CPUs, IBM Power9 CPUs, and Fujitsu A64FX CPUs. At the largest scale on Frontier (the first TOP500 exascale machine), the miniapp reaches a total of 1.7×10¹³ zone-cycles/s on 9,216 nodes (73,728 logical GPUs) at ~92% weak-scaling parallel efficiency (starting from a single node). In combination with being an open, collaborative project, this makes Parthenon an ideal framework for targeting exascale simulations, in which downstream developers can focus on their specific application rather than on the complexity of handling massively parallel, device-accelerated AMR. Finally, we present existing open-source astrophysical downstream codes, including AthenaPK (MHD), Phoebus (GRMHD), and KHARMA (GRMHD), that already leverage Parthenon.
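To make the Kokkos-based approach concrete, here is a minimal sketch of the performance-portability pattern the abstract describes: arrays allocated in device memory and a single kernel dispatched portably across CPU and GPU backends. The array names and the averaging stencil are illustrative assumptions, not Parthenon's actual API.

    #include <Kokkos_Core.hpp>

    int main(int argc, char* argv[]) {
      Kokkos::initialize(argc, argv);
      {
        const int nx = 256, ny = 256, nz = 256;
        // Views live in device memory when compiled for a GPU backend,
        // mirroring Parthenon's allocate-on-device strategy.
        Kokkos::View<double***> u("u", nx, ny, nz);
        Kokkos::View<double***> u_avg("u_avg", nx, ny, nz);

        // One portable kernel over the 3D interior; the backend
        // (CUDA, HIP, OpenMP, ...) is selected at compile time
        // rather than in application code.
        Kokkos::parallel_for("six_point_average",
            Kokkos::MDRangePolicy<Kokkos::Rank<3>>({1, 1, 1},
                                                   {nx - 1, ny - 1, nz - 1}),
            KOKKOS_LAMBDA(const int i, const int j, const int k) {
              u_avg(i, j, k) = (u(i - 1, j, k) + u(i + 1, j, k) +
                                u(i, j - 1, k) + u(i, j + 1, k) +
                                u(i, j, k - 1) + u(i, j, k + 1)) / 6.0;
            });
        Kokkos::fence();  // wait for the device kernel to complete
      }
      Kokkos::finalize();
      return 0;
    }

For context, the quoted throughput of 1.7×10¹³ zone-cycles/s across 73,728 logical GPUs corresponds to roughly 2.3×10⁸ zone-updates per second on each GPU.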