Bulletin of the American Physical Society
APS March Meeting 2023
Volume 68, Number 3
Las Vegas, Nevada (March 5-10)
Virtual (March 20-22); Time Zone: Pacific Time
Session F60: Extreme Scale Computational Science Discovery in Fluid Dynamics and Related Disciplines I (Focus)
Sponsoring Units: DCOMP
Chair: Pui-Kuen Yeung, Georgia Institute of Technology; Daniel Livescu, LANL
Room: Room 419
Tuesday, March 7, 2023 8:00AM - 8:36AM |
F60.00001: Towards adaptive high-order simulations of multiphase compressible turbulent flows at exa-scale Invited Speaker: Sanjiva K Lele Numerical simulations of turbulent flows require numerical methods that preserve the broadband multiscale dynamics of turbulence. Compressible and multiphase turbulent flows pose additional challenges, including accurate treatment of acoustic waves, shock waves and their interaction with turbulence, and phase-interface phenomena such as droplet breakup, atomization, cavitation, evaporation, and condensation. The thermodynamics of the continuous phases must also be appropriately and consistently treated. A high-accuracy solver capable of representing these physical complexities with minimal numerical dispersion and dissipation, and designed for portability and performance scaling on exa-scale computing hardware, would enable computational exploration of multi-physics turbulent flows at a fundamental level and advance modeling approaches for applications ranging from sustainable energy to aerospace. Recent progress towards exa-scale-capable compressible multiphase turbulent flow simulations is reviewed, with emphasis on work at Stanford. Under support from NSF, an open-source software framework called AMR-H is being developed. It combines a robust, high-accuracy compact-scheme-based approach on curvilinear grids with shock capturing where required. For performance on HPC systems, it combines Kokkos and Legion to support AMR with performance portability and runtime parallelism control. The banded linear systems arising with compact schemes (differentiation and interpolation) are efficiently solved using cyclic reduction, considering both shared- and distributed-memory hierarchies. The scaling performance and turbulence physics results obtained on Summit at ORNL are discussed. Results on compressible isotropic turbulence, compressible shear layers, and supercritical wall-bounded turbulence are presented, and the status of the AMR software is summarized.
Collaborations from the scientific community are sought to further advance the capabilities of this software infrastructure. |
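The cyclic-reduction solve named in the abstract can be sketched compactly. The following is a minimal serial Python sketch for a single tridiagonal system of the kind produced by compact differentiation or interpolation; it illustrates the algorithm only and is not the AMR-H implementation, which targets banded systems across shared- and distributed-memory hierarchies:

```python
import numpy as np

def cyclic_reduction(a, b, c, d):
    """Solve a tridiagonal system by cyclic reduction.
    a: sub-diagonal (a[0] must be 0), b: diagonal,
    c: super-diagonal (c[-1] must be 0), d: right-hand side.
    This sketch requires n = 2**m - 1 unknowns."""
    a, b, c, d = (np.array(v, dtype=float) for v in (a, b, c, d))
    n = len(b)
    m = int(np.log2(n + 1))
    assert 2**m - 1 == n, "sketch requires n = 2**m - 1"
    # Forward reduction: each level eliminates every other active unknown,
    # leaving a tridiagonal system of half the size.
    for level in range(m - 1):
        s = 2**level
        for i in range(2 * s - 1, n, 2 * s):
            alpha = -a[i] / b[i - s]
            beta = -c[i] / b[i + s]
            b[i] += alpha * c[i - s] + beta * a[i + s]
            d[i] += alpha * d[i - s] + beta * d[i + s]
            a[i] = alpha * a[i - s]
            c[i] = beta * c[i + s]
    # Back substitution, from the single middle equation outward.
    x = np.zeros(n)
    for level in range(m - 1, -1, -1):
        s = 2**level
        for i in range(s - 1, n, 2 * s):
            rhs = d[i]
            if i - s >= 0:
                rhs -= a[i] * x[i - s]
            if i + s < n:
                rhs -= c[i] * x[i + s]
            x[i] = rhs / b[i]
    return x
```

Because each reduction level operates on independent rows, the method exposes the fine-grained parallelism that GPU and distributed solvers exploit.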
Tuesday, March 7, 2023 8:36AM - 8:48AM |
F60.00002: Exascale Computational Science on Frontier Reuben D Budiardja The exascale era is here. Frontier, the first exascale computer in the world, is up and running at the Oak Ridge Leadership Computing Facility (OLCF) at Oak Ridge National Laboratory. Frontier provides users with unprecedented access to memory size and computational power, enabling them to solve complex problems that were previously inaccessible. In this talk, I will present Frontier's hardware configuration in some detail, the software and programming models available on the system, and, more importantly, how one can obtain access to Frontier and OLCF's staff expertise to scale up their science. I will also present some initial results from early projects running on Frontier. |
Tuesday, March 7, 2023 8:48AM - 9:00AM |
F60.00003: Direct numerical simulation with time dependent subspaces for reduced-order modeling (ROM) of turbulent compressible reacting flows Jacqueline H Chen, Swapnil Desai, Hessam Babaee, Seshu Yamajala Development of accurate predictive models in compressible reacting flows for practical fuels requires solving a large number of species transport equations. This can become computationally intractable for complex alternative fuels, including sustainable aviation fuel surrogates or even blends of methane with hydrogen. At the same time, such flow-flame interactions are often characterized by the presence of strongly transient events associated with finite-time instabilities, including auto-ignition, extinction, re-ignition, etc., whose detection through infinite-time methods, e.g., long-term averages or information about the statistical steady-state, is not possible. In this work, the performance of dynamically bi-orthogonal (DBO) decomposition [D. Ramezanian et al., Comput. Methods Appl. Mech. Engrg. 382 (2021) 113882] is evaluated for the reduced-order modeling (ROM) of compressible reacting flows in the context of DNS performed on heterogeneous supercomputers. DBO is an on-the-fly low-rank approximation technique in which the instantaneous composition matrix of the reactive flow field is decomposed into a set of orthonormal spatial modes, a set of orthonormal vectors in the composition space, and a factorization of the low-rank correlation matrix. This approach bypasses the need to solve the full-dimensional set of species transport equations to generate high-fidelity data, as is commonly done in data-driven dimension reduction techniques such as principal component analysis (PCA). A demonstration case of DBO-based ROM of reacting transport equations exhibiting strongly transient combustion phenomena, including flame propagation and extinction, is presented using a methane-hydrogen mechanism. 
A-posteriori comparisons against the data generated via full-rank direct numerical simulation (DNS), as well as the instantaneous PCA reduction of the DNS data, are carried out to assess the effectiveness and accuracy of the DBO-based ROM of turbulent compressible reacting flows. |
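For intuition, the low-rank structure that DBO exploits can be seen in a static truncated SVD of a hypothetical composition matrix (grid points by species). DBO itself evolves the low-rank factors on the fly rather than factorizing snapshots, and all sizes below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical snapshot: n_x grid points by n_s species mass fractions,
# constructed to be nearly rank r with a small full-rank perturbation.
n_x, n_s, r = 512, 20, 5
Phi = rng.standard_normal((n_x, r)) @ rng.standard_normal((r, n_s)) \
      + 1e-3 * rng.standard_normal((n_x, n_s))

# Truncated SVD: spatial modes (U), composition-space modes (Vt),
# and singular values playing the role of the correlation factor.
U, s, Vt = np.linalg.svd(Phi, full_matrices=False)
Phi_r = U[:, :r] * s[:r] @ Vt[:r]          # rank-r approximation
err = np.linalg.norm(Phi - Phi_r)          # Frobenius-norm error
print(err, np.linalg.norm(s[r:]))          # Eckart-Young: these agree
```

The discarded singular values quantify exactly what a rank-r representation loses, which is why a rapidly decaying spectrum makes low-rank ROMs attractive.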
Tuesday, March 7, 2023 9:00AM - 9:12AM |
F60.00004: Updates to a public turbulence database system and applications to studying local features of the energy cascade Charles Meneveau, Hanxun Yao, Michael Schnaubelt, Alex Szalay, P.K Yeung, Tamer A Zaki We describe updates to an open big-data system that houses large datasets from direct numerical simulations of fluid turbulence. The JHTDB (Johns Hopkins Turbulence Databases) has been operating for over a decade and has led to hundreds of peer-reviewed articles on turbulence. A new set of analysis tools based on Jupyter notebooks has been developed that enables direct access to subsets of the data based on the virtual sensors concept. These notebooks provide fast and stable operation on the existing turbulence data sets while enabling user-programmable, server-side computations. To date, the new data access tools have been tested mostly on the high Reynolds number, forced isotropic turbulence data set at a Taylor microscale Reynolds number of 1,300. We report on a novel analysis based on the Karman-Howarth-Monin-Hill (KHMH) equation, a generalization of the Karman-Howarth equation relating third-order velocity increment moments to the rate of viscous dissipation in the flow and the separation length scale. We explore various implications of Kolmogorov's refined similarity hypothesis (KRSH), which states that statistics in the inertial range conditioned on the locally averaged dissipation rate at that scale are universal. Conditional statistics of the local third-order structure function evaluated from the dataset show good agreement with KRSH predictions. The ability to access local regions in vast amounts of turbulence data without the need to download entire datasets, and the ability to perform the analysis near where the data reside, have greatly enhanced the flexibility needed for the exploratory analyses and results to be presented. |
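As a toy stand-in for the server-side structure-function computations described above (not the actual JHTDB notebook code), a third-order longitudinal structure-function estimator for a 1-D periodic velocity sample can be written as:

```python
import numpy as np

def s3_longitudinal(u, max_sep):
    """Third-order longitudinal structure function <(u(x+r) - u(x))^3>
    for separations r = 1..max_sep grid cells, assuming a periodic
    1-D velocity sample. A toy estimator for illustration only."""
    return np.array([np.mean((np.roll(u, -r) - u) ** 3)
                     for r in range(1, max_sep + 1)])
```

In the inertial range the unconditioned statistic is constrained by Kolmogorov's 4/5 law, S3(r) = -(4/5) ε r; the KRSH analysis in the abstract conditions such statistics on the locally averaged dissipation rate instead.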
Tuesday, March 7, 2023 9:12AM - 9:24AM |
F60.00005: Compressible multiphase flow simulation at near-exascale via a scalable GPU implementation Spencer H Bryngelson, Henry Le Berre, Anand Radhakrishnan Today's, and likely tomorrow's, exascale computers get the lion's share of their computational capabilities from GPUs. We present a method for simulating compressible multiphase flows that efficiently leverages GPU accelerators in the face of otherwise low arithmetic-intensity operations. The simulation method is based on high-order accurate finite-volume reconstructions that perform near the compute roofline of the latest accelerators. Offloading is handled via OpenACC and remote direct memory access (RDMA) keeps network costs low. The implementation is in MFC (mflowcode.github.io), an open-source Fortran-based solver. We observe about a 500-times speedup for an A100 GPU over a single modern Intel CPU core. These numbers correspond to between a 50- and 100-times node-level speed-up on current leadership class computers. Within 3% of ideal weak scaling is observed on OLCF Summit (we test on up to about 13,000 GPUs). We close with a discussion of how the algorithms perform on other architectures, including ARM and POWER-based systems, and what that entails for CFD simulation on future heterogeneous supercomputers. |
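Weak-scaling efficiency of the kind quoted above is simply the ratio of the baseline runtime to the observed runtime at fixed work per GPU; the timings below are invented for illustration and are not MFC measurements:

```python
# Weak scaling: work per GPU is fixed, so the ideal runtime curve is flat.
# Hypothetical seconds-per-step timings at increasing GPU counts.
gpus = [8, 64, 512, 4096, 13000]
times = [1.00, 1.01, 1.01, 1.02, 1.03]   # made-up numbers

base = times[0]
for n, t in zip(gpus, times):
    eff = base / t   # efficiency relative to the 8-GPU baseline
    print(f"{n:6d} GPUs: weak-scaling efficiency = {eff:.1%}")
```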
Tuesday, March 7, 2023 9:24AM - 9:36AM |
F60.00006: Turbulence at the Exascale: particle tracking and asynchronous GPU algorithm for low-diffusivity turbulent mixing. Pui-Kuen Yeung, Kiran Ravikumar, Stephen Nichols, Rohini U Vaideswaran Recent advances in GPU algorithm development targeting the world's first exascale computer (Frontier) are making new milestones for direct numerical simulation of fluid turbulence, with many trillions of grid points in a simplified domain, achievable in the very near future. In this talk, we present further work focusing on two fundamental turbulence phenomena: namely, the dispersion of fluid or particulate material at high Reynolds numbers, and the mixing of transported substances with very low molecular diffusivity (such as salinity in the ocean). The first problem requires a particle-tracking capability that is accurate, scalable, and capable of supporting larger particle counts with little increase in cost. This objective is met by cubic spline interpolation combined with ghost layers on the GPU, and a dynamic local decomposition for the particles which greatly reduces the communication costs. In the second, the challenge of low-diffusivity scalars requiring finer resolution than the velocity field is met very economically by a dual-resolution, dual-communicator scheme in which both velocity and scalar fields are computed by pseudo-spectral methods asynchronously. |
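A one-dimensional sketch of spline-based velocity interpolation at particle positions, here using SciPy's `CubicSpline` with periodic boundary conditions as a simplified stand-in for the GPU ghost-layer scheme described above:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Velocity sampled on a periodic 1-D grid (a toy stand-in for one
# component of a DNS velocity field).
N = 64
x = np.linspace(0.0, 2 * np.pi, N + 1)   # endpoint included for periodic BC
u = np.sin(x)                            # u(x[0]) == u(x[-1])

spline = CubicSpline(x, u, bc_type="periodic")

# Interpolate the velocity at arbitrary particle positions.
xp = np.array([0.1, 1.7, 4.2])
up = spline(xp % (2 * np.pi))
print(np.abs(up - np.sin(xp)).max())     # fourth-order-small error
```

In the production setting the same idea applies per direction on a 3-D domain, with ghost layers supplying the off-rank neighbor values the spline stencil needs.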
Tuesday, March 7, 2023 9:36AM - 9:48AM |
F60.00007: Exploration of the FleCSI Asynchronous Runtime for Large Scale Plasma Simulations on Heterogeneous Architectures Robert M Chiodi, Peter T Brady, Zach Jibben, Oleksandr Koshkarov, Ryan Wollaeger, Svetlana Tokareva, Chris L Fryer, Gian Luca Delzanno, Daniel Livescu Modern supercomputers leverage a mix of CPUs and GPUs to maximize available computing power. This heterogeneity must be specifically accounted for when developing new software to be run on these machines; otherwise, significant portions of their power will go unused. With heterogeneous architectures playing an integral role in future plans for exa-scale computer clusters, this will only become increasingly important. In this talk, we will present our use of LANL's FleCSI library, an adaptable infrastructure created to facilitate asynchronous multiphysics applications, to develop a new plasma dynamics simulation tool targeted for deployment on exa-scale machines. The impact of performing an asynchronous computation in place of a sequential MPI-based simulation will be discussed, along with realized speedups from offloading work to GPUs via the Kokkos library. Results from large-scale simulations relevant to space weather and electron beta decay in supernova fronts will also be shown. |
Tuesday, March 7, 2023 9:48AM - 10:00AM |
F60.00008: Selected-Eddy Simulations (SES): a novel approach for turbulence simulations at extreme scales Diego A Donzis Direct Numerical Simulations (DNS) resolve all dynamically relevant scales of a turbulent flow. Because of the detail they provide, they have |
Tuesday, March 7, 2023 10:00AM - 10:12AM |
F60.00009: Structure and Dynamics of Puffing Plumes Peter E Hamlington, Michael Meehan, Omkar T Patil The near-field characteristics of highly buoyant plumes, commonly referred to as lazy plumes, remain relatively poorly understood across a range of flow conditions, particularly compared to our understanding of far-field characteristics. Buoyant plumes are found in a wide range of naturally occurring phenomena (e.g., volcanic eruptions, hydrothermal vents, and fire plumes) and engineering applications (e.g., heat treatment processes, desalination plants, and space heaters). The distinguishing feature of these flows is the presence of a dominant buoyant force resulting from density variations in the presence of a gravitational field. In each of these flows, when lower density fluid is injected into higher density ambient fluid, the plume contracts laterally, producing large coherent vortical structures that rise vertically, opposite to the direction of gravity. This process repeats continuously, resulting in a characteristic "puffing" behavior. The frequency at which vortices are shed is the most studied characteristic of this instability, and much research has been devoted to developing scaling relations for the frequency based on plume inlet parameters. In this talk, we use three-dimensional numerical simulations of buoyant plumes across a range of conditions and configurations to characterize the effects of inlet Richardson and Reynolds numbers and two-plume interactions on the puffing frequency. We focus, in particular, on the development of scaling relations for the puffing frequency, the identification of the laminar to turbulent flow transition, and the role of Rayleigh-Taylor instabilities in the plume structure and dynamics. |
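The puffing frequency is typically extracted as the dominant spectral peak of a probe time series; below is a minimal sketch with a synthetic signal (the 3.2 Hz frequency, sampling rate, and noise level are all arbitrary choices, not values from the simulations):

```python
import numpy as np

# Synthetic centerline-velocity time series from a puffing plume:
# a 3.2 Hz oscillation buried in noise.
rng = np.random.default_rng(1)
dt, n = 1e-3, 8192
t = dt * np.arange(n)
signal = np.sin(2 * np.pi * 3.2 * t) + 0.3 * rng.standard_normal(n)

# Puffing frequency = location of the largest spectral peak (DC removed).
spec = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(n, d=dt)
f_puff = freqs[np.argmax(spec)]
print(f_puff)   # close to 3.2 Hz, within the 1/(n*dt) frequency resolution
```

Scaling studies then repeat such measurements across inlet Richardson and Reynolds numbers to collapse f_puff against inlet parameters.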
Tuesday, March 7, 2023 10:12AM - 10:24AM |
F60.00010: Molecular-level simulations of turbulence via the direct simulation Monte Carlo method Ryan McMullen, John R Torczynski, Michael A Gallis In the last decade, the advent of exascale computing platforms has enabled molecular-level investigations of turbulence using the direct simulation Monte Carlo method. This talk highlights insights gained from this molecular-level approach, including the demonstration that, beginning at a length scale comparable to the Kolmogorov scale, the rapid decay of the energy spectrum observed in deterministic Navier-Stokes simulations is replaced by quadratic growth due to thermal fluctuations. Importantly, the effects of thermal fluctuations become dominant at length scales much larger than the molecular mean free path, in contrast to conventional arguments for the validity of the deterministic Navier-Stokes equations in this regime. |
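The crossover described above can be caricatured with a toy model spectrum: a decaying turbulent spectrum plus a thermal-fluctuation floor growing as k². All coefficients below are arbitrary and chosen only to make the crossover visible; this is not a DSMC result:

```python
import numpy as np

k = np.logspace(0, 4, 400)
E_turb = k ** (-5.0 / 3.0) * np.exp(-k / 2e3)   # decaying turbulent spectrum
E_fluc = 1e-12 * k ** 2                          # thermal-fluctuation floor

E = E_turb + E_fluc                              # what a molecular method sees
k_cross = k[np.argmin(np.abs(E_turb - E_fluc))]  # where fluctuations take over
print(k_cross)
```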
Tuesday, March 7, 2023 10:24AM - 10:36AM |
F60.00011: Supersonic Reacting Flows Alexei Y Poludnenko Reacting flows are pervasive both in our daily lives on Earth and in the Universe. Such flows are fundamental to systems ranging from astrophysical Type Ia supernova explosions to novel propulsion applications such as scramjets or detonation-based engines. Despite this ubiquity in Nature, turbulent reacting flows still pose a number of fundamental questions concerning their structure and dynamics, often exhibiting surprising behavior. In recent years, the advent of massively parallel high-performance computing has enabled the use of large-scale direct numerical simulations (DNS) for the exploration of reacting flow dynamics in extreme, previously inaccessible regimes. This talk will discuss the computational requirements and challenges associated with such extreme-scale DNS. Furthermore, we will present an overview of a range of phenomena recently discovered in DNS of high-speed reacting flows characterized by high flow speeds, significant compressibility effects, and strong coupling between exothermic reactions and the flow. Such phenomena include intrinsic instabilities of reacting turbulence, the onset of catastrophic transitions, e.g., spontaneous detonation formation, and qualitative changes in the nature of the turbulent cascade in the presence of exothermic reactions. |
Tuesday, March 7, 2023 10:36AM - 10:48AM |
F60.00012: High-Resolution Simulations of Richtmyer-Meshkov Instability and variable-density turbulence induced by reshock Man Long Wong, Jon R Baltzer, Sanjiva K Lele, Daniel Livescu The Richtmyer-Meshkov instability (RMI) and its transition to turbulence appear in many astrophysical events and engineering applications, such as supernova remnant formation, supersonic combustion, and inertial confinement fusion. In this talk, a multi-mode RMI problem with variable-density turbulence induced by reshock is studied using high-resolution Navier-Stokes simulations, where the highest-resolution run has cell counts exceeding 4.5 billion and was computed on the 20-petaflop machine Trinity at Los Alamos National Laboratory, one of the largest supercomputers in the world. In these simulations, adaptive mesh refinement and high-order shock-capturing methods are employed to prevent excessive use of grid cells while the shocks and turbulence are well captured. Direct numerical simulation (DNS)-like results are obtained before reshock, and the turbulent mixing layer is highly resolved after reshock in the highest-resolution simulation. Transport equations of second-moment turbulent quantities are studied to analyze the dominant mechanisms governing the turbulence and mixing before and after the shocks' interactions with the material interface, for Reynolds-averaged Navier-Stokes (RANS) modeling. Budgets of the large-scale turbulent quantities computed with spatially filtered fields after reshock are further studied, and the effects of subfilter-scale stress on the transport of large-scale quantities in scale-resolving simulations are also examined. |
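The subfilter-scale stress examined above has the standard definition τ = filtered(u v) − filtered(u) filtered(v); below is a minimal periodic box-filter sketch (not the authors' AMR code, and with a random field standing in for DNS data):

```python
import numpy as np

def box_filter(f, w):
    """Top-hat filter of width w cells along each axis of a periodic
    field, built from 1-D running means via np.roll."""
    h = w // 2
    for axis in range(f.ndim):
        f = sum(np.roll(f, s, axis=axis)
                for s in range(-h, h + 1)) / (2 * h + 1)
    return f

# Subfilter-scale stress for one velocity-component pair on a toy 2-D field.
rng = np.random.default_rng(2)
u = rng.standard_normal((64, 64))
v = rng.standard_normal((64, 64))
tau = box_filter(u * v, 5) - box_filter(u, 5) * box_filter(v, 5)
print(tau.mean())
```

Budgets of the filtered transport equations then quantify how this stress exchanges energy between resolved and subfilter scales.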
Tuesday, March 7, 2023 10:48AM - 11:00AM |
F60.00013: Multi-Precision Solvers for Non-Linear Systems on AMR Grids using GPUs Peter T Brady, Bobby Philip Modern hardware designed for high performance computing has become increasingly heterogeneous in |