Bulletin of the American Physical Society
61st Annual Meeting of the APS Division of Plasma Physics
Volume 64, Number 11
Monday–Friday, October 21–25, 2019; Fort Lauderdale, Florida
Session NM9: Mini-conference: Building the Bridge to Exascale: Applications and Opportunities for Plasma Physics I
Chair: Jack Wells, Oak Ridge National Laboratory
Room: Grand C/E
Wednesday, October 23, 2019 9:30AM - 9:55AM
NM9.00001: Toward the Modeling of Chains of Plasma Accelerator Stages with WarpX
Jean-Luc Vay
One of the most challenging applications of plasma accelerators is the development of a plasma-based collider for high-energy physics studies. Fast and accurate simulation tools are essential for studying the physics of configurations that enable the production and acceleration of very small beams with low energy spread and emittance preservation over long distances, as required for a collider. The Particle-In-Cell code WarpX is being developed by a team of the U.S. DOE Exascale Computing Project (with non-U.S. collaborators on part of the code) to enable the modeling of chains of tens of plasma accelerator stages on exascale supercomputers, for collider designs. The code combines the latest algorithmic advances (e.g., boosted frame, pseudo-spectral Maxwell solvers) with mesh refinement and runs on the latest CPU and GPU architectures. The application to the modeling of up to three successive multi-GeV stages will be discussed. The latest implementation on GPU architectures will also be reported, as well as novel algorithmic developments.
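As background on the boosted-frame method mentioned in this abstract (a standard estimate from the literature, not part of the abstract itself): simulating a wakefield stage in a frame moving toward the beam with Lorentz factor $\gamma_b$ shortens the propagation distance while lengthening the plasma wavelength, so the number of time steps needed drops by roughly
$$ S \approx (1+\beta_b)^2\,\gamma_b^2, \qquad \beta_b = v_b/c, $$
which is why the boosted-frame technique, combined with accurate pseudo-spectral field solvers, can make multi-stage simulations tractable.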
Wednesday, October 23, 2019 9:55AM - 10:20AM
NM9.00002: High-fidelity Whole Device Model of Magnetically Confined Fusion Plasma
Amitava Bhattacharjee
The Whole Device Model Application (WDMApp) in the DOE Exascale Computing Project (ECP) is developing a high-fidelity model of magnetically confined fusion plasmas, urgently needed to plan experiments on ITER and optimize the design of next-step fusion facilities. These devices will operate in high-fusion-gain physics regimes not achieved by any experiment, making predictive numerical simulation the best tool for the task. WDMApp is focused on building the main driver and coupling framework for a WDM. The main driver is based on the coupling of two advanced and highly scalable gyrokinetic codes, XGC and GENE: the former is a particle-in-cell code optimized for treating the edge plasma, while the latter is a continuum code optimized for the core. WDMApp aims to exploit the complementary nature of these two applications to build the most advanced and efficient whole-device kinetic transport kernel for the WDM. A major part of the technical development work targets the coupling framework, which will be further developed for exascale and optimized for coupling most of the physics modules operating at various space and time scales. The current MPI+X approach is to be enhanced with communication-avoiding methods, task-based parallelism, in situ analysis with resources for load-optimization workflows, and deep-memory-hierarchy-aware algorithms. The status of the project and recent results will be presented as the ECP enters its CD-2 phase.
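A minimal sketch of what a core-edge coupling driver of this kind might look like, assuming hypothetical CoreSolver and EdgeSolver interfaces and an overlap-region data exchange; this is illustrative only and is not WDMApp's actual framework or API.

```cpp
// Illustrative core-edge coupling loop (hypothetical interfaces, not WDMApp's API).
#include <cstddef>
#include <vector>

struct FluxSurfaceData { std::vector<double> density, temperature; };

struct CoreSolver {               // stands in for a continuum core code (GENE-like)
    void advance(double dt) {}
    FluxSurfaceData overlap() const { return {}; }
    void set_boundary(const FluxSurfaceData&) {}
};

struct EdgeSolver {               // stands in for a particle-in-cell edge code (XGC-like)
    void advance(double dt) {}
    FluxSurfaceData overlap() const { return {}; }
    void set_boundary(const FluxSurfaceData&) {}
};

int main() {
    CoreSolver core;
    EdgeSolver edge;
    const double dt = 1.0e-7;     // coupling interval (arbitrary illustrative value)
    for (std::size_t step = 0; step < 1000; ++step) {
        core.advance(dt);         // advance each domain over the coupling interval
        edge.advance(dt);
        // Exchange moments/fields in the overlap region so that each code sees
        // the other's solution as a boundary condition for the next interval.
        FluxSurfaceData from_core = core.overlap();
        FluxSurfaceData from_edge = edge.overlap();
        core.set_boundary(from_edge);
        edge.set_boundary(from_core);
    }
}
```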
Wednesday, October 23, 2019 10:20AM - 10:45AM
NM9.00003: Towards an Exascale Implementation of an Adaptive Sparse Grid Discretization (ASGarD)
David Green, Graham Lopez, Lin Mu, Ed D'Azevedo, Wael Elwasif, Tyler McDaniel, Timothy Younkin, Adam McDaniel, Sebastian De Pascuale, Diego Del-Castillo-Negrete
The development, implementation details, and progress of an exascale-targeted continuum solver for the high-dimensional PDEs of relevance to fusion will be presented. The Adaptive Sparse Grid Discretization (ASGarD) software project combines novel methods from the applied-math community with performance-portable computer-science efforts to enable the extreme numbers of degrees of freedom required to simulate high-dimensional PDEs in a noise-free manner. We will discuss the project workflow whereby domain scientists, applied mathematicians, computer scientists, software engineers, and vendors contribute to building an exascale-enabled tool in a maintainable manner. Application of ASGarD to several standard plasma physics benchmark problems, as well as progress on specific physics use cases, will also be presented.
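To illustrate why sparse grids matter for high-dimensional PDEs, the sketch below counts degrees of freedom for a full tensor-product grid versus a standard sparse grid built from hierarchical levels with |l|_1 bounded; the routine and parameters are generic textbook constructions, not ASGarD's data structures.

```cpp
// Compare sparse-grid vs full-grid point counts (generic hierarchical construction).
// Each 1D hierarchical level l >= 1 contributes 2^(l-1) points; the sparse grid keeps
// multi-indices with l_1 + ... + l_d <= n + d - 1, the full grid keeps all l_i <= n.
#include <cstdint>
#include <cstdio>

std::uint64_t sparse_points(int d, int budget) {
    if (d == 0) return 1;                          // empty product
    std::uint64_t total = 0;
    for (int l = 1; l <= budget - (d - 1); ++l)    // leave at least level 1 per remaining dim
        total += (1ull << (l - 1)) * sparse_points(d - 1, budget - l);
    return total;
}

std::uint64_t full_points(int d, int n) {
    std::uint64_t per_dim = (1ull << n) - 1;       // 2^n - 1 interior points per dimension
    std::uint64_t total = 1;
    for (int i = 0; i < d; ++i) total *= per_dim;
    return total;
}

int main() {
    const int n = 8;                               // refinement level (illustrative)
    for (int d = 1; d <= 6; ++d)
        std::printf("d=%d  sparse=%llu  full=%llu\n", d,
                    (unsigned long long)sparse_points(d, n + d - 1),
                    (unsigned long long)full_points(d, n));
}
```

The sparse count grows like 2^n n^(d-1) rather than 2^(nd), which is the source of the "extreme numbers of degrees of freedom" savings the abstract refers to.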
Wednesday, October 23, 2019 10:45AM - 11:10AM
NM9.00004: Heterogeneous Programming and Optimization of Gyrokinetic Toroidal Code Using Directives
Zhihong Lin
The latest production version of the fusion particle simulation code, the Gyrokinetic Toroidal Code (GTC), has been ported to and optimized for the next-generation exascale GPU supercomputing platform. Heterogeneous programming using directives has been utilized to balance continuously evolving physics capabilities against rapidly changing software/hardware systems. The original code has been refactored into a set of unified functions/calls to enable acceleration for all particle species. Extensive GPU optimization has been performed on GTC to boost the performance of the particle push and shift operations. To identify the hotspots, the code was first benchmarked on up to 8000 nodes of the Titan supercomputer, showing an overall speedup of about 2-3x when comparing NVIDIA M2050 GPUs to Intel Xeon X5670 CPUs. This Phase I optimization was followed by further optimizations in Phase II, where single-node tests show an overall speedup of about 34x on SummitDev and 7.9x on Titan. Production physics tests on the Summit machine showed impressive scaling, reaching roughly 50% efficiency on 928 nodes of Summit. The GPU+CPU speedup over CPU-only execution is more than 20x, leading to unprecedented simulation speed.
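As a generic illustration of the directive-based approach described above (a sketch of the style, not GTC's actual source), a particle-push loop can be offloaded to a GPU with OpenACC annotations while remaining valid CPU code:

```cpp
// Generic directive-based offload of a simple particle push (illustrative, not GTC code).
#include <cstddef>
#include <vector>

void push_particles(std::vector<double>& x, std::vector<double>& v,
                    const std::vector<double>& E, double qm, double dt) {
    const std::size_t n = x.size();
    double* xp = x.data();
    double* vp = v.data();
    const double* Ep = E.data();
    // Map the arrays to device memory and spread the loop across GPU threads;
    // without an OpenACC compiler the pragma is ignored and the loop runs on the CPU.
    #pragma acc parallel loop copy(xp[0:n], vp[0:n]) copyin(Ep[0:n])
    for (std::size_t i = 0; i < n; ++i) {
        vp[i] += qm * Ep[i] * dt;   // accelerate in the local field
        xp[i] += vp[i] * dt;        // advance position
    }
}
```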
Wednesday, October 23, 2019 11:10AM - 11:35AM
NM9.00005: Extreme-scale high-fidelity simulation of tokamak edge plasma en route to exascale computing
C.S. Chang, M. Shephard, S. Klasky, S. Parker, L. Chacon, P. Worley, Mark Adams, M. Greenwald, E. D'Azevedo, S. Ku
XGC is a high-fidelity gyrokinetic code aimed at understanding the difficult plasma dynamics in the edge region of a tokamak reactor. The tokamak edge plasma is in a non-Maxwellian state, dominated by several multiscale, multi-physics kinetic processes in a complicated geometry that includes the divertor and the magnetic X-point. To solve this difficult problem, XGC is designed to utilize extreme-scale computers, eventually targeting exascale and post-exascale machines. XGC was in the Early Science Program (ESP) for Summit and is in the ESPs for the upcoming exascale computer Aurora and the upcoming Perlmutter at NERSC. XGC is also a SciDAC and ECP code. We will present the performance of, and the scientific achievements by, XGC on the US leadership-class computers Titan, Summit, Theta, and Cori. These achievements, which would not have been possible without such leadership-class computers, include the L-H bifurcation dynamics, the divertor heat-flux width for ITER, neutral-particle effects on edge turbulence, RMP-turbulence interaction, the pedestal shape in ITER, blob dynamics, etc. We will also present future plans toward exascale computing.
Wednesday, October 23, 2019 11:35AM - 12:00PM
NM9.00006: Turbulence in fusion and astrophysical plasmas: Grid-based gyrokinetics on exascale systems with GENE
Gabriele Merlo, Frank Jenko, Bryce Allen, Alejandro Banon Navarro, Tilman Dannert, Denis Jarema, Daniel Told
It is widely recognized that turbulence is an important and exciting frontier topic of both basic and applied plasma physics, as well as of many neighboring fields of science. Numerous aspects of this paradigmatic example of nonlinear multiscale dynamics remain to be better understood. Meanwhile, for both laboratory and natural plasmas, an impressive combination of new experimental and observational data and new computational capabilities has become available and will continue to grow. Thus, we are facing a unique window of opportunity to push the boundaries of our grasp of plasma turbulence. In this context, a main goal is to further unravel its crucial role in phenomena such as cross-field transport of mass, momentum, and heat; particle acceleration and propagation; and plasma heating. Future challenges and opportunities in this vibrant area of research, on the brink of the exascale era, will be described, with a focus on the grid-based gyrokinetic turbulence code GENE.
Wednesday, October 23, 2019 12:00PM - 12:25PM
NM9.00007: Computing Challenges in Kinetic Modeling of FRC Stability and Transport
Calvin Lau, Francesco Ceccherini, Sean Dettrick
In TAE Technologies' current experimental device, C-2W (also called "Norman") [1], record-breaking, advanced beam-driven field-reversed configuration (FRC) plasmas are produced and sustained in steady state utilizing variable-energy neutral beams, advanced divertors, end-bias electrodes, and an active plasma control system. Two particle-in-cell HPC codes are under development to support the main goals of TAE's research program: 1) the ANC kinetic micro-stability code, to understand energy confinement and turbulence [2], and 2) the FPIC kinetic macro-stability code, to model global stability and study plasma control methods that could be deployed on current and future devices. Using the computing resources of NERSC Cori and ALCF Theta, these two simulation codes are the most computationally demanding components of the integrated modeling project at TAE, dubbed the FRC Whole Device Model (WDM). The WDM is a hierarchy of models that will use a global transport model as the framework to integrate microstability, macrostability, electron dynamics, neutral transport, and neutral-beam and RF source terms to perform full-system simulations.
[1] H. Gota et al., Nucl. Fusion 59, 112009 (2019).
[2] C. K. Lau et al., Nucl. Fusion 59, 066018 (2019).