Bulletin of the American Physical Society
72nd Annual Meeting of the APS Division of Fluid Dynamics
Volume 64, Number 13
Saturday–Tuesday, November 23–26, 2019; Seattle, Washington
Session L16: Focus Session: Exascale Computations of Complex Turbulent Flows II
Chair: Fady Najjar; Room: 4C3
Monday, November 25, 2019 1:45PM - 1:58PM
L16.00001: Exascale Simulations for Exploring the Physics of an Expanding Bed of Particles S. Balachandar, David Zwick In this work we present simulations of the rapid depressurization of a particle bed in a gas shock tube. Historically, experiments of this nature have been used as a laboratory surrogate for volcanic eruptions. The present simulations use a state-of-the-art Euler-Lagrange (EL) approach that discretizes the governing equations by coupling discontinuous Galerkin (DG) and discrete element (DEM) methods. Appropriate numerical parameters for the EL equations were selected through numerous low-fidelity simulations and then used in three high-fidelity simulations. The results are compared and contrasted with experimental observations to explain various physical phenomena. The DG and DEM codes are open-source and highly scalable, with proven scalability to more than one hundred thousand processors. Exascale trends also suggest exceptional scaling on future architectures.
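To make the Euler-Lagrange coupling concrete, the following minimal Python sketch shows the two-way gas-particle exchange the abstract describes: gas velocity interpolated to particle positions, a drag update on the particles, and an equal-and-opposite momentum source deposited back on the grid. It is illustrative only; a simple Stokes drag law and nearest-cell deposition stand in for the actual DG/DEM coupling, and all names and parameter values are hypothetical.

    # Two-way Euler-Lagrange gas-particle coupling on a 1D periodic grid.
    # Illustrative sketch only: Stokes drag and nearest-cell deposition
    # stand in for the DG/DEM coupling used in the actual work.
    import numpy as np

    nx, npart, dt = 64, 1000, 1e-4
    x_grid = np.linspace(0.0, 1.0, nx)
    u_gas = np.ones(nx)              # gas velocity field (placeholder)
    x_p = np.random.rand(npart)      # particle positions
    v_p = np.zeros(npart)            # particle velocities
    tau_p = 1e-3                     # particle response time

    for step in range(100):
        # Gas -> particle: interpolate gas velocity to particle locations
        u_at_p = np.interp(x_p, x_grid, u_gas)
        a_p = (u_at_p - v_p) / tau_p         # Stokes drag acceleration
        v_p += dt * a_p
        x_p = (x_p + dt * v_p) % 1.0         # periodic domain
        # Particle -> gas: deposit equal-and-opposite momentum source
        src = np.zeros(nx)
        idx = np.clip(np.searchsorted(x_grid, x_p) - 1, 0, nx - 1)
        np.add.at(src, idx, -a_p)
        u_gas += dt * src / npart            # scaled back-coupling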
Monday, November 25, 2019 1:58PM - 2:11PM
L16.00002: Soleil-X: An Exascale-Ready Multiphysics Solver for Particle-Laden Turbulence in a Radiation Environment Hilario Torres, Gianluca Iaccarino The Predictive Science Academic Alliance Program at Stanford University is developing an exascale-ready multiphysics solver to investigate particle-laden turbulence subjected to thermal radiation for solar energy receiver applications. The three physics solvers (fluid, particles, and radiation) run concurrently to form the integrated multiphysics simulation, yet they use substantially different algorithms and data access patterns. Coordinating data communication, balancing computational load, and scaling these solvers together in parallel on modern heterogeneous high-performance computing systems present several major computational challenges. We have chosen the Legion programming system and its task-parallel programming model to address these challenges. Our multiphysics solver, Soleil-X, is written entirely in Regent, a high-level language that compiles to the Legion runtime. We will give an overview of the software architecture of Soleil-X and discuss how the solver was designed around the task-parallel model provided by Legion. We will also discuss scaling, performance, and multiphysics simulation results.
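As a rough illustration of the task-parallel idea, the sketch below expresses three concurrent solvers in plain Python with a thread pool rather than in Regent/Legion: each solver is a task, and tasks that touch disjoint data may overlap. The solver bodies and coupling are placeholder assumptions, not Soleil-X code; a real Legion runtime infers the overlap from declared region accesses rather than from manual futures.

    # Task-parallel structure of three concurrent physics solvers,
    # sketched with a Python thread pool (not actual Regent/Legion code).
    from concurrent.futures import ThreadPoolExecutor
    import numpy as np

    def fluid_step(u):     return u + 0.1 * np.roll(u, 1)   # placeholder
    def particle_step(x):  return (x + 0.01) % 1.0          # placeholder
    def radiation_step(T): return 0.99 * T                  # placeholder

    u, xp, T = np.zeros(32), np.random.rand(100), np.full(32, 300.0)
    with ThreadPoolExecutor(max_workers=3) as pool:
        for step in range(10):
            # The three tasks touch disjoint data, so they can run
            # concurrently; Legion would deduce this automatically.
            fu = pool.submit(fluid_step, u)
            fp = pool.submit(particle_step, xp)
            fr = pool.submit(radiation_step, T)
            u, xp, T = fu.result(), fp.result(), fr.result()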
Monday, November 25, 2019 2:11PM - 2:24PM
L16.00003: Direct Numerical Simulation of Coupled Convection and Radiation on Heterogeneous Computing Architectures Simone Silvestri, Rene Pecnik, Dirk Roekaerts In high-temperature applications, thermal radiation plays an important role in the heat transfer process. In particular, because of its non-locality, radiation interacts counter-intuitively with the turbulent temperature field. These so-called turbulence-radiation interactions (TRI) greatly modify the well-known patterns of heat transfer and variable-property turbulence. Solving the radiative transfer problem on fine grids is notoriously challenging, especially for optically intermediate systems; we therefore implemented an approach that exploits heterogeneous high-performance computing facilities: the Navier-Stokes equations are solved on CPUs, while the radiative transfer equation is solved on GPUs using an optimized Monte Carlo method. This method gives access to the full description of TRI in a direct numerical simulation framework. We applied the algorithm to a thermally developing turbulent channel flow of high-temperature water vapor to study the interaction between the different heat transfer mechanisms. The results allowed us to identify the destruction of turbulent convection caused by radiative damping of thermal fluctuations and to relate it to the size of the thermal scales within the flow.
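A minimal Monte Carlo sketch of the radiation side, for orientation: rays are emitted in a gray, homogeneous slab, and each ray is absorbed after an exponentially distributed free path unless it first escapes. Vectorized NumPy stands in for the GPU kernel described in the abstract; the absorption coefficient and slab thickness are arbitrary illustrative values.

    # Monte Carlo absorption estimate in a gray, homogeneous slab
    # (a stand-in for the optimized GPU Monte Carlo RTE solver).
    import numpy as np

    rng = np.random.default_rng(0)
    n_rays, kappa, L = 100_000, 2.0, 1.0   # rays, absorption [1/m], slab [m]

    x0 = rng.uniform(0.0, L, n_rays)                 # emission points
    mu = rng.uniform(-1.0, 1.0, n_rays)              # direction cosines
    mu = np.where(np.abs(mu) < 1e-12, 1e-12, mu)     # avoid division by zero
    s = rng.exponential(1.0 / kappa, n_rays)         # free path to absorption

    # A ray escapes if its free path carries it past a slab boundary
    dist_to_wall = np.where(mu > 0.0, (L - x0) / mu, x0 / (-mu))
    absorbed = s < dist_to_wall
    print(f"fraction absorbed in slab: {absorbed.mean():.3f}")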
Monday, November 25, 2019 2:24PM - 2:37PM
L16.00004: Dynamic Bridging Modeling for Coarse-Grained Simulations of Shock-Driven Turbulent Mixing Fernando Grinstein, Juan Saenz, Rick Rauenzahn, Massimo Germano We focus on simulating the consequences of material interpenetration, hydrodynamic instabilities, and mixing arising from perturbations at shocked material interfaces, where vorticity is introduced by the impulsive loading of shock waves, as in Inertial Confinement Fusion (ICF) capsule implosions. Such complex flow physics can be captured with coarse-grained simulation (CGS), that is, classical and implicit LES (ILES), in which the small-scale flow dynamics is presumed enslaved to the dynamics of the largest scales. Beyond the complex multiscale resolution issues of shocks and variable-density turbulence, we must address the difficult problem of predicting flow transitions promoted by energy deposited at the material interfacial layers during shock-interface interactions. Transition involves unsteady large-scale coherent-structure dynamics resolvable by CGS but not by RANS modeling based on equilibrium-turbulence assumptions and single-point closures. We propose a dynamic blended hybrid RANS/ILES bridging strategy for applications involving variable-density turbulent mixing, and we report progress in testing its implementation on relevant canonical problems. Test cases include the Taylor-Green vortex, prototyping transition to turbulence, and a shock tube experiment, prototyping shock-driven turbulent mixing.
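The generic form of such a blended closure can be written in a few lines: the modeled stress is a convex combination of RANS and ILES contributions, weighted by how much turbulent kinetic energy the grid resolves. The sketch below shows this generic hybrid form only; it does not reproduce the authors' specific dynamic bridging model, and the blending function is an assumption.

    # Generic blended hybrid RANS/ILES stress (illustrative only; not the
    # authors' dynamic bridging model). f -> 1 in the RANS limit where the
    # grid resolves little TKE, f -> 0 in the well-resolved ILES limit.
    import numpy as np

    def blending_factor(k_resolved, k_total, eps=1e-12):
        return np.clip(1.0 - k_resolved / (k_total + eps), 0.0, 1.0)

    def hybrid_stress(tau_rans, tau_iles, k_resolved, k_total):
        f = blending_factor(k_resolved, k_total)
        return f * tau_rans + (1.0 - f) * tau_iles

    # Example: a region resolving 80% of the TKE leans strongly on ILES
    tau = hybrid_stress(tau_rans=1.0, tau_iles=0.2, k_resolved=0.8, k_total=1.0)
    print(f"blended stress: {tau:.3f}")   # 0.2*1.0 + 0.8*0.2 = 0.36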
Monday, November 25, 2019 2:37PM - 2:50PM
L16.00005: Large-eddy simulation of Rayleigh-Taylor mixing on the Sierra supercomputer Brandon Morgan, Jason Burmark, Michael Collette, Cyrus Harrison, Matthew Larsen, Brian Pudliner, Brian Ryujin The Sierra system is Lawrence Livermore National Laboratory's first production supercomputer accelerated by graphics processing units (GPUs). As part of the system's initial acceptance testing in October 2018, a large-eddy simulation of Rayleigh-Taylor mixing in a spherical geometry was conducted using 97.8 billion computational volumes across 16,384 GPUs on Sierra. This talk will discuss how the Sierra system enabled such a massive calculation and how the results have been used to inform development of the k-L-a-V Reynolds-averaged Navier-Stokes (RANS) model for reacting turbulence [Morgan, B. E., Olson, B. J., Black, W. J., and McFarland, J. A., "Large-eddy simulation and Reynolds-averaged Navier-Stokes modeling of a reacting Rayleigh-Taylor mixing layer in a spherical geometry," Phys. Rev. E 98, 033111 (2018)].
Monday, November 25, 2019 2:50PM - 3:03PM
L16.00006: Towards Exascale Direct Numerical Simulations of Multi-Stage Ignition and Turbulent Mixing in Diesel Jets Jacqueline Chen, Martin Rieth, Myoungke Lee, Elliott Slaughter, Seshu Yamajala, Alex Aiken Direct numerical simulations of multi-stage ignition in low-temperature surrogate diesel jets are used to study the 'turbulence-chemistry' interactions governing cool-flame propagation and turbulent diffusion and their role in accelerating low- and high-temperature ignition. The effects of varying the ambient temperature and oxygen concentration on mixture formation and combustion processes are quantified. Conditional statistics are presented showing the significance of turbulent diffusion relative to laminar flame propagation. These simulations are enabled by an asynchronous task-based programming model and runtime, Legion, which is used to obtain scalable performance of the Legion-S3D DNS code on Summit at the Oak Ridge Leadership Computing Facility. The Legion runtime is able to hide communication latency by overlapping communication and computation and to optimize data movement. Refactoring Legion-S3D in Regent, a companion compiled language that maps directly onto the Legion runtime, simplifies and enforces the rules of the Legion programming model, enabling domain scientists to write extensible code with sequential semantics.
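The latency-hiding pattern the abstract credits to Legion can be illustrated independently of the runtime: start the halo exchange, update the interior points that need no remote data, then finish the boundary once the halos arrive. In the sketch below a thread and a sleep stand in for asynchronous communication; the stencil and sizes are illustrative, and none of this is Legion-S3D code.

    # Overlapping a (mock) halo exchange with interior computation.
    from concurrent.futures import ThreadPoolExecutor
    import numpy as np
    import time

    def exchange_halo(u):
        time.sleep(0.01)          # stand-in for network latency
        return u[-1], u[0]        # values a periodic neighbor would send

    u = np.random.rand(1024)
    u_new = np.empty_like(u)
    with ThreadPoolExecutor(max_workers=1) as pool:
        fut = pool.submit(exchange_halo, u)      # start the "communication"
        u_new[1:-1] = 0.5 * (u[:-2] + u[2:])     # interior overlaps with it
        left, right = fut.result()               # wait for halo values
        u_new[0] = 0.5 * (left + u[1])           # boundary points last
        u_new[-1] = 0.5 * (u[-2] + right)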
Monday, November 25, 2019 3:03PM - 3:16PM
L16.00007: Parallel and dynamic mesh adaptation of tetrahedral-based meshes for propagating fronts and interfaces: application to premixed combustion and primary atomization Vincent Moureau, Pierre Benard, Ghislain Lartigue, Mélody Cailler, Renaud Mercier Thanks to the steady growth of computational resources and a sustained effort on solver optimization, Large-Eddy Simulation (LES) of realistic systems has become attainable. In these systems, turbulent multi-physics flows involve a large range of scales that must be resolved by the mesh to capture the proper flow dynamics. Adaptive or dynamic mesh refinement (AMR) is an appealing technique for reducing modeling errors in LES. AMR of tetrahedral-based meshes for LES is difficult because it requires numerous mesh-topology changes and high-quality grids to resolve the turbulent scales close to the cut-off frequency of the mesh. A parallel AMR strategy has recently been developed [Bénard et al., IJNMF 2015] in the YALES2 flow solver [www.coria-cfd.fr]. It combines adaptation and repartitioning steps to enable the AMR of massive grids counting billions of cells while exploiting up to tens of thousands of cores. Mesh adaptation relies on the work of Dapogny et al. [JCP 2014], available in the MMG library [www.mmgtools.org]. The presentation will focus on the parallel adaptation strategy for both volume and surface meshes, its optimization on modern supercomputers, and various academic and industrial applications related to premixed turbulent combustion and primary atomization.
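The adapt-and-repartition cycle at the heart of such a strategy can be sketched in a few lines: refine the cells flagged by an error indicator, then rebalance cell counts across ranks. A 1D interval mesh stands in for tetrahedra and a greedy split for graph repartitioning; this is a toy illustration, not the YALES2/MMG machinery.

    # Toy adapt-and-repartition cycle (1D intervals in place of tets).
    import numpy as np

    def adapt(cells, indicator, tol=0.1):
        # Split any cell whose error indicator exceeds tol into two halves
        new_cells = []
        for (a, b), err in zip(cells, indicator):
            if err > tol:
                mid = 0.5 * (a + b)
                new_cells += [(a, mid), (mid, b)]
            else:
                new_cells.append((a, b))
        return new_cells

    def repartition(cells, n_ranks):
        # Assign contiguous, nearly equal-sized chunks of cells to ranks
        return np.array_split(np.arange(len(cells)), n_ranks)

    cells = [(i / 8, (i + 1) / 8) for i in range(8)]
    indicator = np.abs(np.sin(8.0 * np.arange(8)))   # fake per-cell error
    cells = adapt(cells, indicator)
    parts = repartition(cells, n_ranks=4)
    print(len(cells), "cells;", [len(p) for p in parts], "per rank")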