Bulletin of the American Physical Society
68th Annual Meeting of the APS Division of Fluid Dynamics
Volume 60, Number 21
Sunday–Tuesday, November 22–24, 2015; Boston, Massachusetts
Session M7: CFD: High Performance Computing
Chair: Diego Donzis, Texas A&M University; Room: 107
Tuesday, November 24, 2015, 8:00AM - 8:13AM
M7.00001: Effect of asynchrony on numerical simulations of fluid flow phenomena
Aditya Konduri, Bryan Mahoney, Diego Donzis
Designing scalable CFD codes for massively parallel computers is a challenge, mainly because of the large number of communications between processing elements (PEs) and their synchronization, which leads to idling of PEs. Indeed, communication will likely be the bottleneck for code scalability on exascale machines. Our recent work on asynchronous computing for PDEs based on finite differences has shown that it is possible to relax synchronization between PEs at a mathematical level. Computations then proceed regardless of the status of communication, reducing the idle time of PEs and improving scalability. However, the accuracy of the schemes is greatly affected. We have proposed asynchrony-tolerant (AT) schemes to address this issue. In this work, we study the effect of asynchrony on the solution of fluid flow problems using standard and AT schemes. We show that asynchrony creates additional scales with low energy content. The specific wavenumbers affected can be traced to two distinct effects: the randomness in the arrival of messages and the corresponding switching between schemes. Understanding these errors allows us to control them effectively, making the method feasible for solving turbulent flows at realistic conditions on future computing systems.
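The relaxed synchronization described in this abstract can be sketched in miniature: a 1D explicit heat-equation update in which the value received from a neighboring PE may be stale by a few time steps. The function and its delay model are illustrative assumptions, not the authors' AT schemes.

```python
import numpy as np

def async_step(u, u_hist, alpha, dt, dx, delay):
    """One explicit step of the 1D heat equation u_t = alpha * u_xx,
    where the neighbor value at the left subdomain boundary may be
    stale by `delay` steps, mimicking a late message from another PE.
    u_hist is the list of past solution states (most recent last)."""
    un = u.copy()
    for i in range(1, len(u) - 1):
        # Only the first interior point sees the (possibly delayed) value.
        left = u_hist[-1 - delay][i - 1] if i == 1 else u[i - 1]
        un[i] = u[i] + alpha * dt / dx**2 * (left - 2.0 * u[i] + u[i + 1])
    return un
```

With `delay=0` this reduces to the standard synchronous scheme; a positive delay introduces the kind of error the AT schemes are designed to tolerate.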
Tuesday, November 24, 2015, 8:13AM - 8:26AM
M7.00002: A Multiscale/Multifidelity CFD Framework for Robust Simulations
Seungjoon Lee, Yannis Kevrekidis, George Karniadakis
We develop a general CFD framework based on multifidelity simulations that targets multiscale problems as well as resilience in exascale simulations, where faulty processors may lead to gappy simulated fields. We combine approximation theory and domain decomposition with machine learning techniques, e.g. co-Kriging, to estimate boundary conditions and minimize communications by performing independent parallel runs. To demonstrate this new simulation approach, we consider two benchmark problems. First, we solve the heat equation with different patches of the domain simulated by finite differences at fine resolution or very low resolution, but also with Monte Carlo, hence fusing multifidelity and heterogeneous models to obtain the final answer. Second, we simulate the flow in a driven cavity by fusing finite difference solutions with solutions obtained by dissipative particle dynamics, a coarse-grained molecular dynamics method. In addition to its robustness and resilience, the new framework generalizes previous multiscale approaches (e.g. continuum-atomistic) in a unified parallel computational framework.
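A minimal stand-in for the information fusion this abstract describes is an additive-discrepancy multifidelity surrogate: evaluate the cheap model everywhere and correct it with a discrepancy learned from a few expensive samples. This is a simpler cousin of the co-Kriging the authors use, and all names here are illustrative.

```python
import numpy as np

def multifidelity_predict(x, x_hi, f_hi_samples, f_lo):
    """Additive-discrepancy multifidelity prediction: correct the cheap
    low-fidelity model f_lo with delta = f_hi - f_lo, learned at a few
    high-fidelity sample points x_hi and interpolated linearly here
    (co-Kriging would instead fit correlated Gaussian processes)."""
    delta_hi = f_hi_samples - f_lo(x_hi)
    return f_lo(x) + np.interp(x, x_hi, delta_hi)
```

The same idea extends to filling in gappy fields from faulty processors: the surviving data play the role of the high-fidelity samples.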
Tuesday, November 24, 2015, 8:26AM - 8:39AM
M7.00003: Fast linear solvers for variable density turbulent flows
Hadi Pouransari, Ali Mani, Eric Darve
Variable density flows are ubiquitous in a variety of natural and industrial systems. Two-phase and multi-phase flows in natural and industrial processes, astrophysical flows, and flows involved in combustion processes are examples. For an ideal gas subject to the low-Mach approximation, variations in temperature can lead to a non-uniform density field. In this work, we consider radiatively heated particle-laden turbulent flows as an example application in which density variability results from inhomogeneities in the heat absorption by an inhomogeneous particle field. Under such conditions, the divergence constraint of the fluid is enforced through a variable-coefficient Poisson equation. Inversion of the discretized variable-coefficient Poisson operator is difficult with conventional linear solvers as the size of the problem grows. We apply a novel hierarchical linear solver based on low-rank approximations. The proposed solver can be applied to a variety of linear systems arising from discretized partial differential equations. It can be used as a standalone direct solver with tunable accuracy and linear complexity, or as a high-accuracy preconditioner in conjunction with other iterative methods.
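The variable-coefficient Poisson operator can be made concrete with a 1D assembly, solved here by a dense direct factorization for illustration; the hierarchical low-rank solver of the abstract addresses exactly the regime where such an O(n^3) direct solve becomes too expensive.

```python
import numpy as np

def variable_poisson_1d(beta, f, dx):
    """Assemble and solve the 1D variable-coefficient Poisson problem
    -d/dx(beta dp/dx) = f with homogeneous Dirichlet boundaries.
    beta is given at the n+1 cell faces, f at the n interior nodes.
    Illustrative dense solve, not the authors' hierarchical method."""
    n = len(f)
    A = np.zeros((n, n))
    for i in range(n):
        bl, br = beta[i], beta[i + 1]  # face coefficients around node i
        A[i, i] = (bl + br) / dx**2
        if i > 0:
            A[i, i - 1] = -bl / dx**2
        if i < n - 1:
            A[i, i + 1] = -br / dx**2
    return np.linalg.solve(A, f)
```

With constant beta this reduces to the standard (2, -1) Laplacian stencil; a non-uniform beta, as produced by the heated particle field, makes the operator coefficients spatially varying.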
Tuesday, November 24, 2015, 8:39AM - 8:52AM
M7.00004: Direct numerical simulation of fluid-particle mass, momentum, and heat transfers in reactive systems
Abdelkader Hammouti, Anthony Wachs
Many industrial processes, like coal combustion, catalytic cracking, gas-phase polymerization reactors and, more recently, biomass gasification and chemical looping, involve two-phase reactive flows in which the continuous phase is a fluid and the dispersed phase consists of rigid particles. Improving both the design and the operating conditions of these processes represents a major scientific and industrial challenge in a context of markedly rising energy costs and sustainable development. It is therefore crucial to better understand the coupling of hydrodynamic, chemical and thermal phenomena in these flows in order to predict them reliably. The aim of our work is to build a multi-scale modelling approach for reactive particulate flows, focusing first on the development of a microscopic-scale model including heat and mass transfer and chemical reactions for the prediction of particle-laden flows in dense and dilute regimes. A first step is the upgrading and validation of our numerical tools via analytical solutions or empirical correlations where feasible. These couplings are implemented in a massively parallel numerical code that already enables a step towards the enhanced design of semi-industrial processes.
Tuesday, November 24, 2015, 8:52AM - 9:05AM
M7.00005: Feasibility of Amazon Cloud Computing Platform for Parallel Multi-phase Flow Simulations
Cole Freniere, Ashish Pathak, Mehdi Raessi
The feasibility of Amazon's Elastic Compute Cloud (EC2) service is evaluated as a resource for multi-phase flow simulations. Results for two multi-phase flow solvers are presented: a 2D GPU-accelerated serial code and a 3D MPI-parallel GPU-accelerated solver. In both cases, the interaction of two-fluid flow with a moving solid phase is captured, and a GPU pressure Poisson solver is used. A virtual cloud cluster is compared to a conventional high-performance computing cluster at the researchers' university in terms of performance and cost. The accuracy of the results obtained on Amazon's cloud, where the GPUs are single-precision, is the same as that of results obtained on the university cluster with double-precision GPUs. The parallel code is benchmarked on clusters of varying size, with strong and weak scaling curves. The steps necessary to outsource the data to the cloud, as well as to acquire the appropriate hardware and software stacks, are outlined. Amazon's HPC cloud is competitive with the university cluster, but there are some performance limitations that will be discussed in the presentation.
Tuesday, November 24, 2015, 9:05AM - 9:18AM
M7.00006: Repartitioning Strategies for Massively Parallel Simulation of Reacting Flow
Patrick Pisciuneri, Angen Zheng, Peyman Givi, Alexandros Labrinidis, Panos Chrysanthis
The majority of parallel CFD simulators partition the domain into equal regions and assign the calculations for a particular region to a unique processor. This type of domain decomposition is vital to the efficiency of the solver. However, as the simulation develops, the workload among the partitions often becomes uneven (e.g. through adaptive mesh refinement, or chemically reacting regions) and a new partitioning should be considered. The process of repartitioning adjusts the current partition to evenly distribute the load again. We compare two repartitioning tools: Zoltan, an architecture-agnostic graph repartitioner developed at Sandia National Laboratories; and Paragon, an architecture-aware graph repartitioner developed at the University of Pittsburgh. The comparative assessment is conducted via simulation of the Taylor-Green vortex flow with chemical reaction.
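The decision to repartition is typically driven by a load-imbalance metric of the kind sketched below; the tolerance value mentioned in the comment is an illustrative choice, not a value from the abstract.

```python
def imbalance(loads):
    """Load-imbalance factor: maximum partition load over mean load.
    A value near 1.0 means an even distribution; a repartitioner like
    Zoltan or Paragon would typically be invoked once this factor
    exceeds a user-set tolerance (e.g. 1.05)."""
    mean = sum(loads) / len(loads)
    return max(loads) / mean
```

For example, a chemically reacting region that doubles one partition's work on a four-way decomposition raises the factor from 1.0 to 1.6, signalling that a new partitioning is warranted.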
Tuesday, November 24, 2015, 9:18AM - 9:31AM
M7.00007: A GPU-accelerated semi-implicit ADI method for incompressible and compressible Navier-Stokes equations
Sanghyun Ha, Donghyun You
The computational power of Graphics Processing Units (GPUs) is exploited for solutions of both incompressible and compressible Navier-Stokes equations. A semi-implicit ADI finite-volume method (J. Comput. Phys. 230 (2011), pp. 7400-7417) for integration of the incompressible and compressible Navier-Stokes equations, which are discretized on a structured arbitrary grid, is parallelized for GPU computations using CUDA (Compute Unified Device Architecture). In the semi-implicit ADI finite-volume method, the nonlinear convection terms and the linear diffusion terms are integrated in time using a combination of an explicit scheme and an ADI scheme. Inversion of multiple tri-diagonal matrices is found to be the major challenge in GPU computations with the present method. Several algorithms for solving tri-diagonal matrices on GPUs are evaluated and optimized for GPU acceleration of the present semi-implicit ADI computations of incompressible and compressible Navier-Stokes equations.
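The tri-diagonal systems arising from the ADI sweeps are classically solved by the Thomas algorithm, sketched below. Its sequential recurrence is precisely what makes GPU parallelization non-trivial, motivating the alternative GPU algorithms (e.g. cyclic reduction) that such work evaluates; this generic sketch is not the authors' implementation.

```python
def thomas(a, b, c, d):
    """Thomas algorithm: O(n) direct solve of a tridiagonal system with
    sub-diagonal a (a[0] unused), diagonal b, super-diagonal c
    (c[-1] unused), and right-hand side d. The forward sweep is a
    sequential recurrence, which is the obstacle to GPU parallelism."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                       # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```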
Tuesday, November 24, 2015, 9:31AM - 9:44AM
M7.00008: Discrete Particle Model for Porous Media Flow using OpenFOAM on Intel Xeon Phi Coprocessors
Zhi Shang, Krishnaswamy Nandakumar, Honggao Liu, Mayank Tyagi, James A. Lupo, Karsten Thompson
The discrete particle model (DPM) in OpenFOAM was used to study turbulent solid-particle suspension flows through the porous media of a natural dual-permeability rock. The 2D and 3D pore geometries of the porous media were generated by sphere packing with a radius ratio of 3. The porosity is about 38%, the same as the natural dual-permeability rock. In the 2D case, the mesh reaches 5 million cells with 1 million solid particles, and in the 3D case, the mesh exceeds 10 million cells with 5 million solid particles. The solid particle sizes follow a Gaussian distribution from 20 μm to 180 μm with a mean of 100 μm. Through the numerical simulations, not only was HPC performance on Intel Xeon Phi coprocessors studied, but also the flow behavior of large-scale solid suspension flows in porous media.
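The particle-size distribution described in this abstract can be reproduced with a simple truncated-Gaussian sampler; the standard deviation below is an assumed value, since the abstract reports only the range and the mean.

```python
import random

def sample_diameters(n, mean=100.0, sigma=25.0, lo=20.0, hi=180.0):
    """Draw n particle diameters (micrometers) from a Gaussian centered
    at 100 um, rejecting samples outside the 20-180 um range stated in
    the abstract. sigma=25 is an illustrative assumption."""
    out = []
    while len(out) < n:
        d = random.gauss(mean, sigma)
        if lo <= d <= hi:
            out.append(d)
    return out
```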