Bulletin of the American Physical Society
76th Annual Meeting of the Division of Fluid Dynamics
Sunday–Tuesday, November 19–21, 2023; Washington, DC
Session G16: CFD: HPC II
Chair: Sara Youssoufi, George Washington University; Room: 145A
Sunday, November 19, 2023, 3:00PM - 3:13PM
G16.00001: Implementation of the asynchronous discontinuous-Galerkin method for reacting flow simulations
Shubham K Goswami, Dapse Vidyesh, Konduri Aditya
One of the recent advances toward asynchronous numerical schemes for scalable PDE solvers is the newly introduced asynchronous discontinuous-Galerkin (ADG) method. It avoids communication/synchronization at a mathematical level and uses asynchrony-tolerant (AT) fluxes to retain high accuracy despite asynchrony. The ADG method is particularly beneficial for massively parallel reacting flow solvers, where very small chemical time scales make simulations extremely long. Such simulations are typically performed on tens of thousands of processors, where communication overheads significantly limit the scalability of the solver. In this work, we implement the ADG method in a 1D solver for the compressible Navier-Stokes equations with relaxed communication/synchronization requirements. A series of numerical experiments is performed to validate its performance on reacting and non-reacting problems. We also demonstrate the scalability of the ADG method, based on one of the compressible flow solvers in deal.II (an open-source finite element library), for two- and three-dimensional inviscid flow problems. The results demonstrate the strong potential of the ADG method for developing exascale PDE solvers for reacting flow simulations.
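To make the asynchrony-tolerant idea concrete, the following is a minimal, hypothetical C++/MPI sketch, not the authors' ADG solver: each rank advances a 1D upwind advection stencil and, when the latest halo message has not yet arrived, extrapolates the neighbor's boundary value in time from older messages instead of blocking. The model equation, extrapolation order, and all names are illustrative assumptions.

    // Hypothetical sketch of an asynchrony-tolerant halo exchange (1D periodic
    // advection, first-order upwind). Illustrative only.
    #include <mpi.h>
    #include <vector>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        const int left = (rank - 1 + size) % size, right = (rank + 1) % size;

        const int n = 64;                              // local cells (assumed)
        const int steps = 100;
        const double c = 1.0, dx = 1.0 / (n * size), dt = 0.4 * dx / c;
        std::vector<double> u(n, 0.0);
        if (rank == 0) u[n / 2] = 1.0;                 // initial pulse

        double halo[2] = {0.0, 0.0};                   // newest, previous value
        double recv_buf = 0.0;
        std::vector<double> send_buf(steps);           // keep send data live
        std::vector<MPI_Request> sreqs(steps);
        MPI_Request rreq;
        MPI_Irecv(&recv_buf, 1, MPI_DOUBLE, left, 0, MPI_COMM_WORLD, &rreq);

        for (int step = 0; step < steps; ++step) {
            // Post the boundary send without waiting for completion.
            send_buf[step] = u[n - 1];
            MPI_Isend(&send_buf[step], 1, MPI_DOUBLE, right, 0,
                      MPI_COMM_WORLD, &sreqs[step]);

            // Use fresh halo data if it has arrived; otherwise extrapolate in
            // time from the two most recent values (an AT flux in spirit).
            int ready = 0;
            MPI_Test(&rreq, &ready, MPI_STATUS_IGNORE);
            double uL;
            if (ready) {
                halo[1] = halo[0];
                halo[0] = recv_buf;
                uL = halo[0];
                MPI_Irecv(&recv_buf, 1, MPI_DOUBLE, left, 0, MPI_COMM_WORLD, &rreq);
            } else {
                uL = 2.0 * halo[0] - halo[1];          // linear extrapolation
            }

            // First-order upwind update with the (possibly stale) halo value.
            std::vector<double> un = u;
            u[0] = un[0] - c * dt / dx * (un[0] - uL);
            for (int i = 1; i < n; ++i)
                u[i] = un[i] - c * dt / dx * (un[i] - un[i - 1]);
        }
        MPI_Waitall((int)sreqs.size(), sreqs.data(), MPI_STATUSES_IGNORE);
        MPI_Cancel(&rreq);
        MPI_Request_free(&rreq);
        MPI_Finalize();
        return 0;
    }

The point of the sketch is the branch on MPI_Test: the update never waits on the network, and the AT extrapolation keeps the stale-data error at the order of the extrapolation rather than the delay.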
Sunday, November 19, 2023, 3:13PM - 3:26PM
G16.00002: Task Graph Scheduler: A Library for Dynamic Runtime Scheduling in MPI Applications Hilario C Torres, Scott Murman The current state-of-the-practice for Single-Program, Multiple-Data (SPMD) applications utilizes a bulk-synchronous paradigm (BSP) implemented with non-blocking Message Passing Interface (MPI) communication calls. In this paradigm, the order of execution of the computational kernels is hard coded at compile time in order to overlap communication and computation in a synchronized fashion. In simple applications this approach is relatively easy to implement and can provide sufficient parallel scalability. However, it is difficult to specify a performant schedule at compile time for applications that simultaneously run multiple interdependent algorithms on a diverse set of data structures. This presentation covers a library that we have developed to solve this problem by dynamically scheduling computational kernels at runtime using directed acyclic graphs to track the data dependencies between kernels. This system is specifically designed to leverage existing computational infrastructure as much as possible, facilitating the extension to legacy applications. This scheduling system is demonstrated using the eddy high-order multi-physics solver developed at NASA. Details regarding the implementation, our experiences using this system, and performance will be discussed. |
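The core mechanism, dependency-counted execution over a directed acyclic graph, can be illustrated with a minimal, hypothetical C++ sketch. This is not the library's API; the task names and struct layout are assumptions, and a real scheduler would dispatch ready tasks to threads or interleave MPI progress rather than run them serially.

    // Hypothetical sketch of DAG-based kernel scheduling: a task runs as soon
    // as all of its data dependencies have completed (Kahn-style).
    #include <cstdio>
    #include <functional>
    #include <queue>
    #include <vector>

    struct Task {
        std::function<void()> kernel;  // the computational kernel to run
        std::vector<int> dependents;   // tasks consuming this task's output
        int unmet_deps = 0;            // number of incoming edges
    };

    void run(std::vector<Task>& graph) {
        std::queue<int> ready;
        for (int i = 0; i < (int)graph.size(); ++i)
            if (graph[i].unmet_deps == 0) ready.push(i);
        while (!ready.empty()) {
            int t = ready.front(); ready.pop();
            graph[t].kernel();                    // execute when runnable
            for (int d : graph[t].dependents)     // release downstream tasks
                if (--graph[d].unmet_deps == 0) ready.push(d);
        }
    }

    int main() {
        // Toy graph: halo exchange -> {interior flux, boundary flux} -> update.
        auto say = [](const char* s) { return [s] { std::puts(s); }; };
        std::vector<Task> g(4);
        g[0] = {say("halo exchange"), {1, 2}, 0};
        g[1] = {say("interior flux"), {3}, 1};
        g[2] = {say("boundary flux"), {3}, 1};
        g[3] = {say("time update"), {}, 2};
        run(g);
        return 0;
    }

Because the execution order falls out of the dependency counts at runtime, kernels from independent algorithms interleave automatically, which is exactly what is hard to encode in a compile-time BSP schedule.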
Sunday, November 19, 2023, 3:26PM - 3:39PM
G16.00003: Mixed-precision parallel linear solver for high-order compact finite difference schemes
Hang Song, Akshay Subramaniam, Britton J Olson, Sanjiva K Lele
Compact finite difference methods are widely used in simulations of fluid mechanics problems for their high spectral resolution. For large-scale simulations on modern high-performance computing systems, it is extremely challenging to efficiently solve the linear systems arising from compact numerical schemes with hierarchical parallelism across distributed and shared memory. This work further optimizes the authors' previously published scalable direct parallel algorithm by introducing conditional mixed-precision operations for the data involved in communication between distributed-memory partitions. The feasibility of these mixed-precision operations rests on the mathematical structure of the linear system during the solution process, and the resulting numerical error behavior is characterized. Theoretical analysis and numerical experiments demonstrate that, compared to the full-precision direct linear solver, the mixed-precision solution still achieves double-precision accuracy. This optimization is particularly beneficial for multi-dimensional computations using higher-order compact schemes, where cyclic penta-diagonal systems are solved and performance is limited by communication bandwidth.
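The abstract's conditional scheme demotes only the communicated data inside a direct solver; as a stand-in illustration of the underlying principle, that reduced-precision arithmetic can still deliver double-precision answers, here is a generic mixed-precision iterative-refinement sketch on a tridiagonal system. This is explicitly not the authors' algorithm: the model matrix, precisions, and names are all assumptions.

    // Hypothetical sketch: single-precision corrections plus double-precision
    // residuals recover (near-)double accuracy on a 1D Poisson-like system.
    #include <algorithm>
    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Thomas algorithm for a constant-coefficient tridiagonal system, in T.
    template <typename T>
    std::vector<T> thomas(T a, T b, T c, std::vector<T> d) {
        const int n = (int)d.size();
        std::vector<T> cp(n);
        cp[0] = c / b; d[0] /= b;
        for (int i = 1; i < n; ++i) {
            T m = b - a * cp[i - 1];
            cp[i] = c / m;
            d[i] = (d[i] - a * d[i - 1]) / m;
        }
        for (int i = n - 2; i >= 0; --i) d[i] -= cp[i] * d[i + 1];
        return d;
    }

    int main() {
        const int n = 256;
        // Diagonal 2, off-diagonals -1; right-hand side so that x_i = 1 exactly.
        std::vector<double> x(n, 0.0), b(n);
        for (int i = 0; i < n; ++i) b[i] = 2.0 - (i > 0) - (i < n - 1);

        for (int it = 0; it < 6; ++it) {
            // Residual r = b - A x in double, demoted to float for the
            // "communicated" correction solve.
            std::vector<float> r(n);
            for (int i = 0; i < n; ++i) {
                double Ax = 2.0 * x[i] - (i > 0 ? x[i - 1] : 0.0)
                                       - (i < n - 1 ? x[i + 1] : 0.0);
                r[i] = (float)(b[i] - Ax);
            }
            std::vector<float> dx = thomas<float>(-1.f, 2.f, -1.f, r);
            double err = 0.0;
            for (int i = 0; i < n; ++i) {
                x[i] += dx[i];                       // promote and update
                err = std::max(err, std::fabs(x[i] - 1.0));
            }
            std::printf("iteration %d: max error %.3e\n", it, err);
        }
        return 0;
    }

Each pass contracts the error by roughly the single-precision solve error, so a handful of iterations drives the solution far past single-precision accuracy, the same error-recovery behavior the abstract characterizes for its demoted communication data.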
Sunday, November 19, 2023, 3:39PM - 3:52PM
G16.00004: A hybrid staggered/non-staggered formulation for simulating incompressible flows with block-structured mesh refinement
Tam T Nguyen, S N V Rajasekhar Rao Dathi, Andy Nonaka, Trung B Le
We present a novel approach for simulating incompressible flows with local mesh refinement. In many biological problems, such as simulations of cells, it is highly desirable to accommodate high-resolution regions near the surfaces of moving bodies in the flow. In this work, we present a new approach to local mesh refinement that makes dual use of staggered and non-staggered grid layouts. Our finite volume solver for incompressible flows is based on a fractional step method. The fluxes are stored at the face centers, whereas the pressure field and the Cartesian velocity components are stored at the volume centers. This hybrid staggered/non-staggered approach allows the flexibility of prescribing boundary conditions on the moving bodies while satisfying the incompressibility constraint exactly. We use the Adaptive Mesh Refinement for Exascale (AMReX) framework to design our grid infrastructure. The momentum equation is solved iteratively with an implicit Runge-Kutta method, and the native Poisson solver of AMReX is used for the projection step. Our preliminary work includes two benchmark simulations, (a) lid-driven cavity flow and (b) the Taylor-Green vortex, to demonstrate the feasibility of our approach. We will report on the efficiency and scalability of the approach for different grid sizes on different heterogeneous computing infrastructures.
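A minimal, hypothetical C++ sketch of one projection (fractional) step on a single periodic 2D patch illustrates the hybrid layout: velocity at cell centers, fluxes at face centers, and a pressure solve that drives the face-flux divergence to zero. Grid size, the Gauss-Seidel solver, and the predictor field are illustrative assumptions; the authors use AMReX's multilevel grids, an implicit Runge-Kutta momentum solve, and AMReX's native Poisson solver instead.

    // Hypothetical projection step: cell-centered u,v; face fluxes U,V.
    #include <cmath>
    #include <cstdio>
    #include <vector>

    const int N = 32;
    const double PI = 3.141592653589793;
    const double dx = 1.0 / N, dt = 0.01;
    inline int idx(int i, int j) { return ((i + N) % N) * N + ((j + N) % N); }

    int main() {
        std::vector<double> u(N * N), v(N * N), p(N * N, 0.0);
        std::vector<double> U(N * N), V(N * N);   // x-face and y-face fluxes

        // Smooth, non-solenoidal predictor field (stands in for the momentum solve).
        for (int i = 0; i < N; ++i)
            for (int j = 0; j < N; ++j) {
                double x = (i + 0.5) * dx, y = (j + 0.5) * dx;
                u[idx(i, j)] = std::sin(2 * PI * x);
                v[idx(i, j)] = std::cos(2 * PI * y);
            }

        // Face fluxes by averaging adjacent cell-centered velocities
        // (the hybrid layout: state at volume centers, fluxes at face centers).
        for (int i = 0; i < N; ++i)
            for (int j = 0; j < N; ++j) {
                U[idx(i, j)] = 0.5 * (u[idx(i - 1, j)] + u[idx(i, j)]);
                V[idx(i, j)] = 0.5 * (v[idx(i, j - 1)] + v[idx(i, j)]);
            }

        // Gauss-Seidel for  lap(p) = div(U,V)/dt.
        for (int sweep = 0; sweep < 2000; ++sweep)
            for (int i = 0; i < N; ++i)
                for (int j = 0; j < N; ++j) {
                    double div = (U[idx(i + 1, j)] - U[idx(i, j)]
                                + V[idx(i, j + 1)] - V[idx(i, j)]) / dx;
                    p[idx(i, j)] = 0.25 * (p[idx(i - 1, j)] + p[idx(i + 1, j)]
                                         + p[idx(i, j - 1)] + p[idx(i, j + 1)]
                                         - dx * dx * div / dt);
                }

        // Project: face fluxes become discretely divergence-free; the
        // cell-centered velocities get the cell-averaged pressure gradient.
        for (int i = 0; i < N; ++i)
            for (int j = 0; j < N; ++j) {
                U[idx(i, j)] -= dt * (p[idx(i, j)] - p[idx(i - 1, j)]) / dx;
                V[idx(i, j)] -= dt * (p[idx(i, j)] - p[idx(i, j - 1)]) / dx;
                u[idx(i, j)] -= dt * (p[idx(i + 1, j)] - p[idx(i - 1, j)]) / (2 * dx);
                v[idx(i, j)] -= dt * (p[idx(i, j + 1)] - p[idx(i, j - 1)]) / (2 * dx);
            }

        double maxdiv = 0.0;
        for (int i = 0; i < N; ++i)
            for (int j = 0; j < N; ++j)
                maxdiv = std::fmax(maxdiv, std::fabs(
                    (U[idx(i + 1, j)] - U[idx(i, j)]
                   + V[idx(i, j + 1)] - V[idx(i, j)]) / dx));
        std::printf("max face-flux divergence after projection: %.3e\n", maxdiv);
        return 0;
    }

Because the pressure gradient applied to the face fluxes is exactly the one whose divergence the Poisson solve targets, the incompressibility constraint is satisfied exactly on the faces, while the cell-centered velocities remain free for prescribing boundary conditions on moving bodies.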
Sunday, November 19, 2023, 3:52PM - 4:05PM
G16.00005: Abstract Withdrawn