Bulletin of the American Physical Society
72nd Annual Meeting of the APS Division of Fluid Dynamics
Volume 64, Number 13
Saturday–Tuesday, November 23–26, 2019; Seattle, Washington
Session B19: CFD: Advanced Methods and Models I
Chair: Laurette Tuckerman, ESPCI Paris; Room: 401
Saturday, November 23, 2019, 4:40PM - 4:53PM
B19.00001: In situ data compression for large-scale computational fluid dynamics simulations via interpolative decomposition methods
Heather Pacella, Alec Dunton, Alireza Doostan, Gianluca Iaccarino
Over the next decade, exascale supercomputers will provide a thousand-fold increase in floating-point performance but only a hundred-fold increase in memory. As a result, data generated by computational fluid dynamics simulations will easily surpass the available memory capacity; additional data from ensemble simulations for uncertainty quantification, inference, and optimization will widen this gap further. To address this, we implement a lossy in situ compression algorithm, interpolative decomposition (ID), within the solvers themselves, which allows us to store simulation results at a fraction of the memory cost. Because ID algorithms operate independently on subregions of the fluid domain, they are a natural fit for the flexibility that task-based parallel programming systems provide. Legion is one such programming system; it allows for implicitly extracted parallelism, easy performance tuning, and porting to various heterogeneous architectures. We will discuss the implementation of both the sub-sampled and single-pass ID algorithms in a high-order Navier-Stokes solver written in Regent, as well as the performance, scalability, and cost of both ID algorithms in a task-parallel environment.
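As a rough illustration of the factorization underlying this technique (a hypothetical NumPy sketch of ours, not the authors' Regent/Legion implementation; the sub-sampled and single-pass variants discussed in the talk differ), a rank-$k$ ID can be built from a column-pivoted QR: a subset of $k$ columns is stored, and the remaining columns are reconstructed from interpolation coefficients.

```python
import numpy as np
from scipy.linalg import qr

def interp_decomp(A, k):
    """Rank-k interpolative decomposition A ~ A[:, idx] @ P
    via column-pivoted QR (a standard construction)."""
    Q, R, piv = qr(A, pivoting=True, mode='economic')
    # Interpolation coefficients: R11^{-1} R12 maps the kept
    # columns to the discarded ones.
    T = np.linalg.solve(R[:k, :k], R[:k, k:])
    P = np.zeros((k, A.shape[1]))
    P[:, :k] = np.eye(k)
    P[:, k:] = T
    # Undo the pivoting so P matches A's original column order.
    return piv[:k], P[:, np.argsort(piv)]

# Compress a synthetic low-rank "snapshot" matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((512, 40)) @ rng.standard_normal((40, 200))
idx, P = interp_decomp(A, k=40)
print(np.linalg.norm(A - A[:, idx] @ P) / np.linalg.norm(A))
```

In an in situ setting the same factorization would be applied independently to subdomain data as it is produced, which is what makes the method compatible with task-based parallelism.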
Saturday, November 23, 2019, 4:53PM - 5:06PM
B19.00002: An efficient algorithm to differentiate statistics in turbulent flows
Nisha Chandramoorthy, Qiqi Wang
In a chaotic system such as the turbulent flow around an airplane wing, the derivatives of state functions such as lift and drag with respect to design or control inputs (for example, the geometry of the airfoil or the freestream Mach number) grow exponentially with time. Yet the infinite-time average of a state function, equal to its average with respect to the steady-state distribution over state space, has a bounded derivative with respect to parameters. Computing this statistical response to infinitesimal parameter changes is an important technical challenge; addressing it enables uncertainty quantification, mesh adaptation, parameter estimation, and other gradient-based multidisciplinary design optimization techniques. We present a novel approach, the perturbation space-split sensitivity (S3) algorithm, that is provably convergent to the sensitivity of statistics and computationally efficient. The S3 algorithm is demonstrated on a low-Reynolds-number flow over a vertical block.
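For orientation, the object being differentiated can be written as an ergodic average (a standard statement of the problem, not the S3 construction itself):
$$ \langle J \rangle(s) \;=\; \lim_{T\to\infty} \frac{1}{T}\int_0^T J\big(u(t;s)\big)\,dt, \qquad \text{sought:}\quad \frac{d\langle J\rangle}{ds}, $$
where $u(t;s)$ is the chaotic trajectory at parameter $s$ and $J$ is a state function such as lift or drag. While the trajectory derivative $\partial u/\partial s$ grows exponentially in time, the derivative of $\langle J\rangle$ remains bounded, and it is this bounded statistical response that the S3 algorithm computes.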
Saturday, November 23, 2019, 5:06PM - 5:19PM
B19.00003: Modeling the Effect of Resolution Inhomogeneity in LES
Gopal Yalla, Robert Moser, Todd Oliver, Sigfried Haering, Bjorn Engquist
Large eddy simulation (LES) of complex turbulent flows often requires discretizations whose resolution varies rapidly in space. The importance of resolution inhomogeneity in LES has long been recognized, but its numerical impact is not well understood. Consequently, resolution-inhomogeneity effects are largely ignored in the formulation of standard subgrid stress models, which can lead to poor performance in practical applications on complex, highly inhomogeneous grids. In this talk, the effect of convection through inhomogeneous resolution on homogeneous, isotropic turbulence is examined, and the development of a new formulation to correct for such inhomogeneity issues is presented. This model formulation is based on (1) an updated analysis of the commutator between the filtering and differentiation operators, and (2) the propagation properties of the underlying numerical methods. We will also discuss how to further exploit the structure of numerical operators to correct issues associated with resolution inhomogeneity.
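For reference, the commutator in question takes a standard form for a one-dimensional convolution filter with spatially varying width $\Delta(x)$ (a textbook identity, not the updated analysis presented in the talk):
$$ \frac{\partial \bar u}{\partial x} - \overline{\frac{\partial u}{\partial x}} \;=\; \frac{d\Delta}{dx}\,\frac{\partial \bar u}{\partial \Delta}, $$
which vanishes only when $\Delta$ is constant; wherever the resolution varies, the filtered equations therefore acquire unclosed commutation terms that standard subgrid stress models do not represent.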
Saturday, November 23, 2019, 5:19PM - 5:32PM
B19.00004: Extending the Active Model-Split to Compressible Flow
Clark Pederson, Todd Oliver, Sigfried Haering, Robert Moser
Hybrid RANS/LES methods have shown success in accurately predicting a wide range of turbulent scales while keeping computational cost low. Nevertheless, their predictive accuracy is limited by shortcomings such as modeled-stress depletion and scalar grid measures. Haering, Oliver, and Moser developed an "active model-split" (AMS) approach to address many of these shortcomings. This model splits the unresolved turbulent stress into a mean and a fluctuating portion, each with its own model. AMS has shown improved accuracy in several incompressible test cases. To enable more complex test cases, the model has been extended to the compressible flow equations and implemented in SU2. Results are presented for several compressible test cases, including supersonic channel and boundary-layer flows and the Bachalo-Johnson transonic axisymmetric bump.
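Schematically, the split described here reads (our paraphrase of the model structure, with the closures left abstract):
$$ \tau_{ij} \;=\; \underbrace{\tau_{ij}^{\mathrm{m}}\big(\langle u \rangle\big)}_{\text{model for the mean portion}} \;+\; \underbrace{\tau_{ij}^{\mathrm{f}}\big(u - \langle u \rangle\big)}_{\text{model for the fluctuating portion}}, $$
so that the two contributions are controlled independently, which is intended to avoid the modeled-stress depletion noted above.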
Saturday, November 23, 2019, 5:32PM - 5:45PM
B19.00005: Hyperviscosity and bottlenecks in the Taylor-Green vortex
Rahul Agrawal, Alexandros Alexakis, Marc Brachet, Laurette Tuckerman
Direct numerical simulations of turbulence are sometimes performed with hyperviscosity, in which the standard viscous term is replaced by a higher power $p$ of the Laplacian, in order to increase the dissipation at high wavenumbers and thus widen the computationally accessible inertial range. It is essential to determine the effect of hyperviscosity on various features of the turbulent energy spectrum, such as the bottleneck, an increase in energy at wavenumbers just below the dissipation range. Here, we use the symmetries of decaying Taylor-Green flow to study the effect of hyperviscosity on the bottleneck in high-resolution direct numerical simulations at resolutions up to $1024^3$ and hyperviscosity up to order $p=100$, using simulations with $2048^3$ and $p=1$ as a reference case. We also investigate numerical issues that must be addressed at these high parameter values, in particular the time-stepping scheme, the time step, and the evaluation of the dissipation.
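Concretely, hyperviscosity of order $p$ replaces the viscous term as follows (the standard hyperviscous form; normalization conventions vary between studies):
$$ \nu\,\nabla^{2}\mathbf{u} \;\longrightarrow\; (-1)^{p+1}\,\nu_p\,\nabla^{2p}\,\mathbf{u}, $$
so that in Fourier space each mode is damped at a rate $\nu_p k^{2p}$ instead of $\nu k^2$: dissipation is concentrated near the highest resolved wavenumbers, the inertial range widens, and $p=1$ recovers ordinary Navier-Stokes viscosity.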
Saturday, November 23, 2019, 5:45PM - 5:58PM
B19.00006: On the role of the anisotropic term in a subgrid-scale model for enhancing the energy spectrum in the high-wavenumber region
Kenichi Abe
We investigate an anisotropy-resolving subgrid-scale (SGS) model for large eddy simulation that is constructed by combining an isotropic linear eddy-viscosity model (EVM) with an extra anisotropic term (EAT) (Abe, Int. J. Heat Fluid Flow, 39, pp. 42-52 (2013)). In this study, to reveal the role of the EAT in the SGS model, we perform simulations using several combinations of the terms in the SGS model: the EVM only, the EAT only, and the full version (EVM$+$EAT). We calculate the power spectra of the grid-scale (GS) velocity components from the resulting data and compare them in detail. The comparison shows that the EAT is effective at enhancing small-scale structures, resulting in an apparent upshift of the power spectrum, particularly in the high-wavenumber region. We also investigate an alternative formulation for the EAT, e.g., a modified Leonard-stress formulation.
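In schematic form, this model class combines the two contributions as (our paraphrase; see the cited paper for the actual coefficients):
$$ \tau_{ij} - \tfrac{1}{3}\tau_{kk}\delta_{ij} \;=\; \underbrace{-2\,\nu_{\mathrm{SGS}}\,\bar S_{ij}}_{\text{EVM}} \;+\; \underbrace{\tau_{ij}^{\mathrm{a}}}_{\text{EAT}}, $$
where $\bar S_{ij}$ is the grid-scale strain-rate tensor; the EVM-only, EAT-only, and EVM$+$EAT runs correspond to retaining the first term, the second term, or both.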
Saturday, November 23, 2019, 5:58PM - 6:11PM
B19.00007: Asynchronous Direct Numerical Simulations (DNS) of turbulent flows at extreme scales
Komal Kumari, Diego Donzis
A major challenge in turbulence simulations is to accurately resolve the wide range of spatio-temporal scales. The computational cost of well-resolved DNS grows steeply with Reynolds number and therefore necessitates massively parallel computation on supercomputers. However, the increasing communication and synchronization costs of current approaches could pose an insurmountable bottleneck at extreme scales. We have therefore developed a novel paradigm that relaxes these synchronization requirements at the mathematical level, leading to so-called asynchrony-tolerant (AT) schemes. A first-of-its-kind implementation of these schemes in a 3-D compressible Navier-Stokes DNS solver (forced and decaying) will be presented. We will discuss the implementation of asynchrony using communication- and synchronization-avoiding algorithms that result in periodic and random delays. We show that these asynchronous algorithms accurately resolve the large- and small-scale characteristics of turbulence, including instantaneous fields. We also show their efficiency in mitigating the communication bottleneck. The improved scaling, without affecting the physics, makes this asynchronous paradigm a path toward exascale simulations of turbulence and other nonlinear phenomena.
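To convey the flavor of asynchrony tolerance (a hypothetical 1-D heat-equation toy of ours, not the authors' AT schemes or their 3-D compressible solver), the sketch below lets one grid point consume neighbor data that arrives several steps late and compensates by extrapolating from two delayed time levels, which removes the leading error of naively using stale data.

```python
import numpy as np

# 1-D heat equation u_t = alpha * u_xx on a periodic grid, explicit
# Euler in time. Grid point 0 "lives on another process": its left
# neighbor (point nx-1) is known only through a message that arrives
# `delay` time steps late, mimicking an unsynchronized halo exchange.
alpha, nx = 1.0, 64
dx = 1.0 / nx
dt = 0.2 * dx**2 / alpha
x = np.arange(nx) * dx
u = np.sin(2 * np.pi * x)

delay = 2
# FIFO of past left-neighbor values; startup pretends the pre-history
# equals the initial condition.
hist = [u[-1]] * (delay + 1)

for step in range(400):
    received = hist[-delay]        # neighbor value, `delay` steps old
    older = hist[-delay - 1]       # one step older still
    # Asynchrony-tolerant flavor: extrapolate the delayed levels to the
    # current time level, cancelling the leading O(delay*dt) error that
    # naive use of stale data would incur.
    left0 = received + delay * (received - older)

    coeff = alpha * dt / dx**2
    un = np.empty_like(u)
    un[0] = u[0] + coeff * (u[1] - 2 * u[0] + left0)
    un[1:-1] = u[1:-1] + coeff * (u[2:] - 2 * u[1:-1] + u[:-2])
    un[-1] = u[-1] + coeff * (u[0] - 2 * u[-1] + u[-2])

    hist = hist[1:] + [u[-1]]      # "send" the current value; it is
    u = un                         # read `delay` steps from now

# Compare against the exact decaying solution at the final time.
t_final = 400 * dt
exact = np.exp(-alpha * (2 * np.pi) ** 2 * t_final) * np.sin(2 * np.pi * x)
print("max error:", np.max(np.abs(u - exact)))
```

The actual AT schemes achieve the analogous correction by modifying the finite-difference stencil coefficients themselves, so that accuracy is preserved under both periodic and random message delays.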