Bulletin of the American Physical Society
61st Annual Meeting of the APS Division of Plasma Physics
Volume 64, Number 11
Monday–Friday, October 21–25, 2019; Fort Lauderdale, Florida
Session PM8: Mini-conference: Building the Bridge to Exascale: Applications and Opportunities for Plasma Physics II
Chair: Amitava Bhattacharjee, Princeton Plasma Physics Laboratory; Room: Grand H
Wednesday, October 23, 2019 2:00PM - 2:25PM
PM8.00001: OSIRIS: A Highly Scalable High-Performance Computing Application for Plasma Physics
Ricardo Fonseca
The OSIRIS [1] electromagnetic particle-in-cell (EM-PIC) code is widely used in the numerical modeling of many laboratory and astrophysical kinetic plasma scenarios. Working at the most fundamental microscopic level and needing to resolve the smallest spatial and temporal scales, these are the most compute-intensive models in plasma physics, requiring efficient use of large-scale HPC systems. Exascale computing opens the opportunity for ab initio, full-scale modeling of many relevant kinetic plasma scenarios, allowing the code to address an increasingly wide range of problems. In this presentation I will discuss our efforts to deploy OSIRIS on these advanced architectures, focusing on the latest trends and emerging technologies. I will address our implementation of a tile-based dynamic load-balancing algorithm, as well as support for the latest hardware (GPU, ARM, and Xeon Phi architectures). Finally, I will report on recent scalability tests performed on the Cori system, showing excellent weak and strong parallel scalability at full system scale.
[1] R. A. Fonseca et al., Lecture Notes in Computer Science 2331, 342-351 (2002)
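The abstract does not detail the tile-based load-balancing algorithm; purely as a hypothetical illustration of the general idea (not OSIRIS source code), the sketch below redistributes tiles across MPI ranks by assigning the heaviest tiles to the least-loaded rank, using per-tile particle counts as the cost metric.

```python
# Hypothetical sketch of tile-based dynamic load balancing (not OSIRIS code):
# each tile carries a particle count, and a greedy "heaviest tile to the
# least-loaded rank" pass produces a balanced tile-to-rank assignment.
import heapq

def assign_tiles(tile_particle_counts, n_ranks):
    """Return rank_of_tile[i] so that total particles per rank are balanced."""
    # Min-heap of (current_load, rank): the least-loaded rank pops first.
    heap = [(0, rank) for rank in range(n_ranks)]
    heapq.heapify(heap)
    rank_of_tile = [None] * len(tile_particle_counts)
    # Place the heaviest tiles first (longest-processing-time heuristic).
    order = sorted(range(len(tile_particle_counts)),
                   key=lambda i: tile_particle_counts[i], reverse=True)
    for i in order:
        load, rank = heapq.heappop(heap)
        rank_of_tile[i] = rank
        heapq.heappush(heap, (load + tile_particle_counts[i], rank))
    return rank_of_tile

if __name__ == "__main__":
    counts = [120, 80, 300, 40, 220, 90, 150, 60]   # particles per tile
    print(assign_tiles(counts, n_ranks=3))
```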
Wednesday, October 23, 2019 2:25PM - 2:50PM
PM8.00002: Kinetic simulations for laboratory astrophysics with laser-produced plasmas
W. Fox, J. Matteucci, K. Lezhnin, D.B. Schaeffer, A. Bhattacharjee, K. Germaschewski
Recent laboratory experiments with laser-produced plasmas have opened new opportunities for studying a number of fundamental physical processes relevant to magnetized astrophysical plasmas, including magnetic reconnection, collisionless shocks, and magnetic field generation by the Weibel instability and the Biermann battery. We develop a fully kinetic simulation model for first-principles simulation of these systems. Leadership-scale kinetic simulations in 2-D and 3-D are conducted on Titan and Summit at OLCF using the particle-in-cell code PSC. Key dimensionless parameters describing the system are derived for scaling between kinetic simulation, recent experiments, and astrophysical plasmas. First, simulations are presented which model Biermann battery magnetic field generation in plasmas expanding from a thin target. Ablation of two neighboring plumes leads to the formation of a current sheet as the opposing Biermann-generated fields collide, resulting in strongly driven magnetic reconnection at plasma $\beta \sim 10$. Second, we model recent experiments on collisionless magnetized shocks, generated by expanding a piston plasma into a pre-magnetized ambient plasma, and discuss opportunities and predictions for collisionless shock physics available from such experiments.
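For reference (a standard definition, not specific to these simulations), the plasma beta quoted above is the ratio of thermal to magnetic pressure, $\beta = p / (B^2/2\mu_0) = 2\mu_0 p / B^2$, so $\beta \sim 10$ indicates a reconnection layer in which the plasma pressure exceeds the magnetic pressure by roughly an order of magnitude.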
Wednesday, October 23, 2019 2:50PM - 3:15PM
PM8.00003: Introducing the GPU-based Particle-in-Cell Code Aperture
Yuran Chen
Aperture is a particle-in-cell code designed and developed from scratch to run on GPUs and to scale to large GPU clusters. It was originally developed for simulations of neutron-star magnetospheres, but it is flexible enough to be applied to many other plasma physics problems, especially those where the interaction of radiation and plasma is important. It includes radiation modules that handle synchrotron loss, resonant and non-resonant inverse Compton scattering, triplet pair production, and photon-photon pair production. The code has been used to simulate the pair creation process near pulsars, the hard X-ray emission from magnetars, and pair-producing gaps in the vicinity of supermassive black holes. The GPU architecture provides a speedup of several hundred over conventional CPU cores and alleviates load-balancing issues by permitting larger subdomains. I will also present my accompanying work on interactive visualization of the simulation results using WebGL. This pipeline can render volumetric data, isosurfaces, and particles in real time in a modern browser, and the results are one click away from being viewed in virtual reality.
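Aperture's GPU kernels are not reproduced in the abstract; purely as a hypothetical, simplified illustration of the per-particle update at the heart of any PIC code, the sketch below implements a vectorized, non-relativistic Boris push in NumPy (function and variable names are mine, not Aperture's).

```python
import numpy as np

def boris_push(x, v, E, B, q_over_m, dt):
    """Non-relativistic Boris push: advance velocities and positions one step.

    x, v : (N, 3) arrays of particle positions and velocities
    E, B : (N, 3) arrays of fields interpolated to the particle positions
    """
    # Half acceleration by the electric field.
    v_minus = v + 0.5 * q_over_m * dt * E
    # Rotation by the magnetic field.
    t = 0.5 * q_over_m * dt * B
    s = 2.0 * t / (1.0 + np.sum(t * t, axis=1, keepdims=True))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)
    # Second half acceleration, then position update.
    v_new = v_plus + 0.5 * q_over_m * dt * E
    x_new = x + v_new * dt
    return x_new, v_new

# Example: one step for 4 particles in uniform fields (arbitrary units).
x = np.zeros((4, 3)); v = np.random.default_rng(0).normal(size=(4, 3))
E = np.tile([0.0, 0.0, 1.0], (4, 1)); B = np.tile([0.0, 0.0, 2.0], (4, 1))
x, v = boris_push(x, v, E, B, q_over_m=-1.0, dt=0.01)
```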
Wednesday, October 23, 2019 3:15PM - 3:40PM
PM8.00004: WarpX: implementation and performance on GPUs
Rémi Lehe
WarpX is an advanced electromagnetic particle-in-cell code and part of the DOE Exascale Computing Project (ECP). The code provides many powerful features for large-scale simulations of plasmas (e.g., mesh refinement, load balancing, perfectly matched layers), and in particular for intense laser-plasma interactions (e.g., boosted-frame simulations, spectral solvers, quasi-cylindrical geometry). The code was recently ported to GPUs and runs at scale on the Summit supercomputer. We will describe the key components of the GPU implementation of WarpX and how they allowed us to rapidly port the code while avoiding code duplication. We will also discuss the performance of the code on Summit, as well as the main limiting factors to overcome in order to reach additional speedup.
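The abstract does not describe WarpX's portability layer in detail; as a loose, hypothetical analogy only (WarpX itself is written in C++), the sketch below shows one way to keep a single kernel source shared between CPU and GPU in Python, by dispatching on the array library (NumPy or CuPy) that owns the data, so that no CPU/GPU code duplication is needed.

```python
import numpy as np
try:
    import cupy as cp          # optional GPU backend; CPU-only if absent
except ImportError:
    cp = None

def get_array_module(a):
    """Return the array library (numpy or cupy) that owns array `a`."""
    if cp is not None and isinstance(a, cp.ndarray):
        return cp
    return np

def push_positions(x, v, dt):
    """Single-source kernel: the same code path runs on CPU or GPU arrays."""
    xp = get_array_module(x)
    return x + xp.asarray(v) * dt

# CPU usage:
x = np.zeros((1000, 3)); v = np.ones((1000, 3))
x = push_positions(x, v, dt=0.1)
# GPU usage (if CuPy and a GPU are available):
# x_gpu = push_positions(cp.asarray(x), cp.asarray(v), dt=0.1)
```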
Wednesday, October 23, 2019 3:40PM - 4:00PM
PM8.00005: The Spatial Core-edge Coupling of Particle-in-cell Gyrokinetic Codes GEM and XGC
Junyi Cheng, J. Dominski, Y. Chen, C.S. Chang, S. Ku, R. Hager, E. Suchyta, K. Klasky, A. Bhattacharjee, S. Parker
Within the Exascale Computing Project (ECP), the High-Fidelity Whole Device Modeling (WDM) project aims at delivering a first-principles-based computational tool that simulates the plasma neoclassical and turbulence dynamics from the core to the edge of a tokamak. To permit such simulations, different gyrokinetic codes need to be coupled, taking advantage of the complementary nature of the different applications to build an advanced and efficient whole-volume kinetic transport kernel for WDM. Here we present the successful coupling of the two existing particle-in-cell (PIC) gyrokinetic codes GEM and XGC, where GEM is optimized for the core and XGC for the edge plasma. The current GEM-XGC coupling adopts a scheme initially developed for XGCcore-XGCedge coupled simulations [1]. In this scheme, the time-stepping of the global core and edge distribution functions is achieved by pushing the composite distribution function independently in each code while using a common global potential field solution for the whole domain. Because the two codes use different grids, an interpolation scheme transfers data back and forth between GEM's structured grid and XGC's unstructured grid. The whole coupling framework is built on the high-performance ADIOS library, with its state-of-the-art file- and memory-based (DataSpaces) coupling capabilities.
[1] J. Dominski et al., Physics of Plasmas 25, 072308 (2018)
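The actual GEM-XGC data exchange is not detailed here; purely as a hypothetical illustration of one direction of such a transfer, the sketch below interpolates a field from a structured (R, Z) grid onto a set of unstructured mesh vertices using SciPy (grid extents, field values, and vertex locations are all synthetic stand-ins).

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical structured (R, Z) grid holding a potential field phi,
# standing in for the core code's field; the values are synthetic.
R = np.linspace(1.0, 2.5, 128)
Z = np.linspace(-1.2, 1.2, 192)
RR, ZZ = np.meshgrid(R, Z, indexing="ij")
phi_structured = np.sin(2 * np.pi * RR) * np.cos(np.pi * ZZ)

# Build an interpolator on the structured grid ...
interp = RegularGridInterpolator((R, Z), phi_structured)

# ... and evaluate it at unstructured mesh vertices (random points here,
# standing in for an edge-code triangular mesh).
rng = np.random.default_rng(0)
vertices = np.column_stack([rng.uniform(1.0, 2.5, 5000),
                            rng.uniform(-1.2, 1.2, 5000)])
phi_on_vertices = interp(vertices)
print(phi_on_vertices.shape)
```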
Wednesday, October 23, 2019 4:00PM - 4:20PM
PM8.00006: Using the OMFIT framework to streamline HPC workflows
Sterling Smith, Orso Meneghini, David Eldon, Joseph McClenaghan, Brendan Lyons, Matthias Knolker, Kathreen Thome, Brian Grierson, Nik Logan, Arash Ashourvan, Qiming Hu, Shaun Haskey, Christopher Holland, Theresa Wilks, JM Park, Kyungjin Kim, Valerie Izzo, Cody Moynihan, Leonardo Pigatto, Gregorio Trevisan
The OMFIT framework [http://gafusion.github.io/OMFIT-source], through its convenient GUIs and API, has greatly streamlined the use of high-performance computing (HPC) in fusion research. Specifically, OMFIT has been used to conduct scans of physics and computational input parameters for a broad range of HPC simulations, including gyrokinetic, MHD, pedestal-structure, and SOL applications. In many cases the results of these HPC simulations have been compiled into databases that have been used to generate machine-learning reduced models. Finally, the interface that the framework provides to experimental data is the ideal environment in which to carry out detailed validation studies of first-principles simulations and reduced models alike. As we bridge toward exascale computing, we expect OMFIT to continue to play a key role in making HPC applications accessible to the broader fusion community.
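OMFIT's actual GUIs and API are not reproduced here; purely as a schematic of the kind of parameter scan such a framework automates (the file names, parameters, and submission command below are all hypothetical), a minimal sketch might look like the following.

```python
import itertools
import pathlib

# Hypothetical scan over two physics inputs; neither the input-file format
# nor the (commented) scheduler command reflects OMFIT's actual interface.
scan = {"temperature_keV": [1.0, 2.0, 4.0],
        "density_1e19m3": [3.0, 6.0]}

def write_input(case_dir, params):
    """Write a simple key = value input deck for one scan point."""
    case_dir.mkdir(parents=True, exist_ok=True)
    (case_dir / "input.nml").write_text(
        "\n".join(f"{key} = {value}" for key, value in params.items()) + "\n")

for values in itertools.product(*scan.values()):
    params = dict(zip(scan.keys(), values))
    case = pathlib.Path("scan") / "_".join(f"{k}{v}" for k, v in params.items())
    write_input(case, params)
    # Submission is site-specific and therefore left as a comment, e.g.:
    # subprocess.run(["sbatch", "run_case.sh", str(case)], check=True)
```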
Wednesday, October 23, 2019 4:20PM - 4:40PM
PM8.00007: Writing clean scientific software for plasma simulation
Nicholas A. Murphy
Most scientific programmers are self-taught. Graduate programs in plasma physics often lack courses on scientific programming, which leaves students to learn these skills on their own. High pressure to get results prevents us from taking the time to learn software engineering practices that can greatly improve the reliability, maintainability, and usability of software. Adopting such practices can make research more efficient and reliable, improve scientific reproducibility, and prevent future headaches. Writing readable code is particularly important because code is communication. I will discuss strategies for writing clean scientific code, such as choosing meaningful variable names, refactoring code for readability in preference to commenting on how it works, and writing short functions that do exactly one thing. I will describe techniques such as periodic refactoring, continuous integration testing, test-driven development, and layering code at different levels of abstraction, and how these techniques and strategies can be applied to plasma simulation software.
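As a small, hypothetical example (not taken from the talk) of the practices listed above: meaningful names, short single-purpose functions, and tests that double as documentation.

```python
import math

# A terse, do-everything version might read:
#   def f(m, v): return 0.5 * m * sum(c * c for c in v)
# The refactored version below splits that into two well-named,
# single-purpose functions, each easy to read and to test in isolation.

def particle_speed(velocity):
    """Magnitude of a velocity vector."""
    return math.sqrt(sum(component ** 2 for component in velocity))

def kinetic_energy(mass, speed):
    """Kinetic energy of a particle with the given mass and speed."""
    return 0.5 * mass * speed ** 2

# Unit tests (run with pytest) act as executable documentation of intent.
def test_particle_speed():
    assert particle_speed([3.0, 4.0, 0.0]) == 5.0

def test_kinetic_energy():
    assert kinetic_energy(mass=2.0, speed=3.0) == 9.0
```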
Wednesday, October 23, 2019 4:40PM - 5:00PM
PM8.00008: Scalable, Performance-Portable Particle-in-Cell Simulations and PByte-Scale Data Challenges
A. Huebl, R. Widera, M. Garten, R. Pausch, K. Steiniger, S. Bastrakov, A. Debus, T. Kluge, S. Ehrig, F. Meyer, M. Werner, B. Worpitz, A. Matthes, F. Poeschel, S. Starke, M. Bussmann
We present the architecture, abstractions, novel developments, and workflows that enable high-resolution, fast-turn-around computations on contemporary, leadership-scale supercomputers powered by both GPUs and CPUs from various vendors, on top of a generalized programming model (Alpaka). Drawing on the experience of developing the open-source community code PIConGPU, we present strategies for handling PByte-scale data flows from thousands of computing devices for analysis with in situ processing and open data formats (openPMD). Furthermore, simulation control via a lightweight Python Jupyter interface, as well as recent research toward just-in-time kernel generation for C++ with Cling-CUDA, are shown as a means for fast-turn-around, close-to-experiment simulations.
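The openPMD data format mentioned above has Python bindings (openPMD-api); as a brief, hedged sketch, assuming a simulation output series in openPMD layout (the file pattern and mesh names below are invented), reading a field record might look like the following.

```python
import openpmd_api as io

# Open a (hypothetical) openPMD series; %T expands to the iteration number.
series = io.Series("diags/simData_%T.bp", io.Access.read_only)

for step, iteration in series.iterations.items():
    # Request the x component of an electric-field mesh, then flush to load it.
    E_x = iteration.meshes["E"]["x"]
    data = E_x.load_chunk()
    series.flush()
    print(f"iteration {step}: shape={data.shape}, mean={data.mean():.3e}")
```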