Bulletin of the American Physical Society
72nd Annual Meeting of the APS Division of Fluid Dynamics
Volume 64, Number 13
Saturday–Tuesday, November 23–26, 2019; Seattle, Washington
Session G16: CFD: Data-driven Methods
Chair: Mihailo Jovanovic, USC; Room: 4c3
Sunday, November 24, 2019 3:48PM - 4:01PM
G16.00001: Towards Generalizable Data-driven Turbulence Model Augmentations Vishal Srivastava, Karthik Duraisamy Reynolds-Averaged Navier-Stokes (RANS) models are based on a mix of physical and phenomenological ideas. Once a model structure is fixed, calibration is typically based on a few canonical flows, and as a result, models are often insufficiently accurate in many general applications. Data-driven techniques offer the possibility of more accurate models of complex flows, though the generalizability and robustness of such models are open issues and the topic of this work. We address data-augmented turbulence models with a focus on enforcing consistency of the augmentations with the underlying modeling environment using integrated inference and learning. Further, the augmentation is constrained to satisfy uncompromisable physical laws (such as frame invariance) and known relationships (such as preserving the law of the wall in the large Reynolds number limit). These constraints are either imposed directly at the inference step or implicitly enforced by construction. Sample results are presented for equilibrium and non-equilibrium wall-bounded turbulent flows.
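One common way to enforce frame invariance by construction, as this abstract describes, is to feed the learned augmentation only tensor invariants of the mean flow rather than raw velocities or gradients. The following is a minimal numpy sketch of that idea (not code from the talk; the two-invariant feature set is illustrative):

```python
import numpy as np

def invariant_features(grad_u):
    """Build frame- and Galilean-invariant inputs for a learned augmentation.

    grad_u : (3, 3) mean-velocity-gradient tensor. Using only tensor
    invariants (not raw velocities or gradients) enforces the invariance
    constraint by construction.
    """
    S = 0.5 * (grad_u + grad_u.T)   # mean strain-rate tensor (symmetric)
    R = 0.5 * (grad_u - grad_u.T)   # mean rotation-rate tensor (antisymmetric)
    return np.array([np.trace(S @ S), np.trace(R @ R)])

# Invariance check: the features are unchanged under a rigid rotation Q.
rng = np.random.default_rng(0)
grad_u = rng.standard_normal((3, 3))
theta = 0.7
Q = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
f1 = invariant_features(grad_u)
f2 = invariant_features(Q @ grad_u @ Q.T)
assert np.allclose(f1, f2)
```

Galilean invariance follows because velocity gradients, unlike velocities, are unaffected by a uniform frame translation.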
Sunday, November 24, 2019 4:01PM - 4:14PM
G16.00002: A data-driven approach to modeling turbulent decay at non-asymptotic Reynolds numbers Mateus Dias Ribeiro, Gavin D Portwood, Peetak Mitra, Tan Minh Nguyen, Balasubramanya T Nadiga, Michael Chertkov, Anima Anandkumar, David P Schmidt Dynamic modeling of turbulent processes away from asymptotic parameter limits is an active area of turbulence research. This study considers the transient modeling of the kinetic energy dissipation rate, an important component of turbulence closure models like $k-\epsilon$. While asymptotic analysis of the turbulent dissipation process effectively calibrates the model parameters at high and low Reynolds numbers, these calibrations are inaccurate at intermediate Reynolds numbers, with strong dependence on large-scale turbulence properties. In intermediate regimes, model tuning via data-driven regression has a leading-order effect on accuracy, such that a purely data-driven approach is sensible. Here, we model the kinetic energy dissipation rate in decaying isotropic turbulence using a NeuralODE, a continuous-depth neural network that models continuous-time processes. After training a model on direct numerical simulations (DNSs) over a range of Reynolds numbers and large-scale turbulence initial conditions, we show that a purely data-driven approach to modeling turbulent dynamics via NeuralODEs provides an attractive solution to turbulence closure in non-idealized parameter regimes.
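A NeuralODE learns the right-hand side of a continuous-time system and is trained by backpropagating through a numerical integrator. The sketch below (not the authors' code) integrates the classical $k-\epsilon$ decay model with RK4 as a stand-in for the learned dynamics; the coefficient value 1.92 is the standard $C_{\epsilon 2}$, and a NeuralODE would replace `rhs` with a trainable network $f_\theta(k, \epsilon)$:

```python
import numpy as np

def rhs(state, C_eps2=1.92):
    """Classical k-epsilon decay model, used here as a stand-in for the
    learned right-hand side; a NeuralODE replaces this with a trained
    network f_theta(k, eps) and differentiates through the integrator."""
    k, eps = state
    return np.array([-eps, -C_eps2 * eps**2 / k])

def rk4(f, state, dt, n_steps):
    """Fixed-step fourth-order Runge-Kutta integration."""
    for _ in range(n_steps):
        k1 = f(state)
        k2 = f(state + 0.5 * dt * k1)
        k3 = f(state + 0.5 * dt * k2)
        k4 = f(state + dt * k3)
        state = state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return state

# Decaying isotropic turbulence: integrate from t = 0 to t = 2.
k0, eps0 = 1.0, 1.0
kT, epsT = rk4(rhs, np.array([k0, eps0]), dt=1e-3, n_steps=2000)
assert 0.0 < kT < k0 and 0.0 < epsT < eps0   # both decay monotonically
```

Training then amounts to fitting $f_\theta$ so that integrated trajectories match the DNS decay curves across Reynolds numbers and initial conditions.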
Sunday, November 24, 2019 4:14PM - 4:27PM
G16.00003: A data-driven approach to modeling turbulent flows in an engine environment Peetak Mitra, Mateus Dias Ribeiro, David Schmidt In Internal Combustion Engines, turbulence plays a key role in fuel/air mixing, improving overall efficiency and reducing emissions. Modeling these environments involves dealing with turbulence, multiphase flow, combustion, and moving boundaries. Because of the wide variations in length and time scales, high-fidelity approaches such as Large Eddy Simulation impose strict resolution requirements, making the computations expensive while still omitting critical information that cannot be resolved within a pragmatic design cycle. Here we propose a data-driven method for learning optimized approximations to these unresolved features, trained on an in-house-generated high-fidelity dataset. In this hybrid PDE-ML framework, developed in OpenFOAM, the large-scale, resolvable features are obtained by solving the governing flow/energy equations (PDE), and machine learning is applied only to the small, unresolved scales. A key aspect in developing this framework is that the machine learning model respects the rotational and Galilean invariance of the Reynolds stress models and uses local quantities to construct the feature set for the data-driven model, thereby improving model performance on a low-resolution grid and providing a pathway to coarse-graining methods.
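The resolved/unresolved split underlying such a hybrid PDE-ML framework can be illustrated with a filtering decomposition: the PDE solver evolves the filtered field while the ML model targets only the residual. A minimal 1D numpy sketch (the signal and filter width are hypothetical, not from the talk):

```python
import numpy as np

def box_filter(u, width):
    """Top-hat filter: the 'resolved' large scales handled by the PDE solver."""
    kernel = np.ones(width) / width
    return np.convolve(u, kernel, mode="same")

# Decompose a signal into resolved and subgrid parts: the PDE solver evolves
# u_bar, while the data-driven model is trained to represent only the effect
# of the residual u_prime on the resolved scales.
x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
u = np.sin(x) + 0.1 * np.sin(20.0 * x)   # large scale + small scale
u_bar = box_filter(u, width=16)
u_prime = u - u_bar                       # unresolved part, target for ML
assert np.allclose(u, u_bar + u_prime)    # decomposition is exact
assert np.std(u_prime) < np.std(u)        # most energy stays resolved
```

Building the ML feature set from local quantities of `u_bar` (gradients, invariants) rather than absolute position or velocity is what preserves the Galilean and rotational invariance the abstract emphasizes.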
Sunday, November 24, 2019 4:27PM - 4:40PM
G16.00004: Toward data-driven stochastically forced turbulence closure models Armin Zare, Anubhav Dwivedi, Mihailo Jovanovic We build on work by Zare, Jovanovic, and Georgiou (JFM, vol. 812, 2017) to develop stochastically forced closure models for the mean flow equations of a turbulent channel flow. Given a subset of steady-state velocity correlations for a turbulent channel flow at a friction Reynolds number of 186, we formulate an inverse problem to determine the forcing statistics to the linearized model that provide consistency with DNS. The resulting stochastically forced linearized model is used to drive the mean flow equations in time-dependent simulations. This provides a correction to the mean velocity profile which perturbs the linearized Navier-Stokes dynamics. The feedback connection of mean flow equations with stochastically forced linearized equations incorporates a two-way interaction between the mean flow and second-order statistics of the fluctuating velocity field. By analyzing conditions under which this feedback connection converges, we take a step toward the development of new classes of data-driven turbulence closure models.
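The forward direction of the consistency problem in this abstract is classical: for stable linearized dynamics $\dot{x} = Ax + Bw$ driven by white noise, the steady-state covariance $X$ satisfies the Lyapunov equation $AX + XA^{T} + BB^{T} = 0$; the inverse problem seeks the forcing statistics that reproduce DNS correlations. A minimal numpy sketch of the forward computation on a hypothetical 2-state system (not the channel-flow model itself):

```python
import numpy as np

def lyapunov_solve(A, Q):
    """Solve A X + X A^T + Q = 0 for the steady-state covariance X via the
    Kronecker-product (vec) formulation; fine for small systems."""
    n = A.shape[0]
    I = np.eye(n)
    M = np.kron(I, A) + np.kron(A, I)              # linear operator on vec(X)
    return np.linalg.solve(M, -Q.reshape(-1)).reshape(n, n)

# Hypothetical stable linearized dynamics with forcing covariance Q = B B^T.
A = np.array([[-1.0, 2.0],
              [0.0, -3.0]])
B = np.array([[1.0], [0.5]])
Q = B @ B.T
X = lyapunov_solve(A, Q)
assert np.allclose(A @ X + X @ A.T + Q, 0.0)   # Lyapunov equation satisfied
assert np.all(np.linalg.eigvalsh(X) > 0.0)     # valid covariance (positive definite)
```

In the data-driven setting the roles are reversed: entries of $X$ come from DNS, and the optimization variable is the forcing covariance $Q$.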
Sunday, November 24, 2019 4:40PM - 4:53PM
G16.00005: A data-driven approach to simulate turbulent bubbly flows using machine learning for modeling bubble size Hokyo Jung, Youngjae Kim, Serin Yoon, Gangwoo Ha, Jun Ho Lee, Hyungmin Park, Dongjoo Kim, Jungwoo Kim, Seongwon Kang We investigated a bubble size model using artificial neural networks (ANN). A multi-layer ANN is employed as a basis to remove the need to assume a functional form. In the training process, the average relative error was 4.98%, showing good agreement with experimental data. A sensitivity analysis was performed to understand the relative importance of each flow parameter. In order to evaluate the prediction capability in an a posteriori sense, RANS simulations were performed for turbulent bubbly flows for which experimental data are available. For a case in the wall-peaking regime, the present results were similar to the experimental data. However, for a case in the core-peaking regime, the agreement with the experimental data was unsatisfactory. These errors were attributed to an issue with the shear-induced lift model. The present model, combined with corrections to the shear-induced lift model by Tomiyama, showed significantly improved results compared to existing models. In conclusion, both the evaluation and validation procedures showed that the present model based on ANN can estimate the bubble size reasonably well in turbulent bubbly flows.
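The "no assumed functional form" point can be made concrete with a small multi-layer network fitted by gradient descent. The sketch below is purely illustrative: the two inputs, the synthetic target, and the network size are hypothetical stand-ins for the flow parameters and bubble-size data used in the talk:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in data: bubble size as an unknown nonlinear function of
# two flow parameters; the actual model is trained on experimental data.
X = rng.uniform(0.0, 1.0, size=(200, 2))
y = 0.5 + 0.3 * np.tanh(2.0 * X[:, 0] - X[:, 1])

# One hidden layer: no functional form for size vs. inputs is assumed.
W1 = 0.5 * rng.standard_normal((2, 8)); b1 = np.zeros(8)
W2 = 0.5 * rng.standard_normal(8);      b2 = 0.0

def forward(X):
    return np.tanh(X @ W1 + b1) @ W2 + b2

mse0 = np.mean((forward(X) - y) ** 2)
lr = 0.1
for _ in range(2000):
    H = np.tanh(X @ W1 + b1)
    err = H @ W2 + b2 - y
    gH = np.outer(err, W2) * (1.0 - H ** 2)      # backprop through tanh
    W2 -= lr * H.T @ err / len(y);  b2 -= lr * err.mean()
    W1 -= lr * X.T @ gH / len(y);   b1 -= lr * gH.mean(axis=0)

mse = np.mean((forward(X) - y) ** 2)
assert mse < mse0   # training reduces the fitting error
```

A finite-difference perturbation of each input column of such a trained network is one simple way to carry out the sensitivity analysis the abstract mentions.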
Sunday, November 24, 2019 4:53PM - 5:06PM
G16.00006: Deep Neural Networks for Data-Driven Turbulence Models Andrea Beck, David Flad, Claus-Dieter Munz Machine learning methods, and in particular deep learning via artificial neural networks, have generated significant enthusiasm in recent years. Since these methods can provide approximations to general, non-linear functions by learning from data without a priori assumptions, they are particularly attractive for the generation of subscale models for multiscale problems. In this presentation, we present a novel data-based approach to turbulence modelling for Large Eddy Simulation by deep learning via artificial neural networks. We first discuss and define the exact closure terms and generate training data from direct numerical simulations of decaying homogeneous isotropic turbulence. We then present the design and training of artificial neural networks based on local convolution filters to predict the underlying unknown non-linear mapping from the coarse grid quantities to the closure terms without a priori assumptions. All investigated networks are able to generalize from the data and learn approximations. We further show that selecting both the coarse grid primitive variables and the coarse grid LES operator as input features significantly improves training results. Finally, we show how to construct a stable and accurate LES model from the learned closure terms.
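The key property of the local convolution filters in this approach is that the closure prediction at a point depends only on a small neighborhood of coarse-grid values. A minimal 1D numpy sketch (not the authors' network; the 3-point stencil is illustrative) shows that such a learned stencil can represent local differential operators of the kind that appear in closure terms:

```python
import numpy as np

def conv1d(field, kernel):
    """Local convolution on a periodic 1D grid: the prediction at each point
    sees only a small neighborhood of coarse-grid values, as in the talk's
    convolution-filter networks."""
    r = len(kernel) // 2
    padded = np.concatenate([field[-r:], field, field[:r]])
    return np.array([padded[i:i + len(kernel)] @ kernel
                     for i in range(len(field))])

# Illustrative check: a 3-point stencil (here fixed, in practice learned)
# reproduces a second derivative, one building block of closure terms.
n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
dx = x[1] - x[0]
u = np.sin(x)
kernel = np.array([1.0, -2.0, 1.0]) / dx**2
d2u = conv1d(u, kernel)
assert np.allclose(d2u, -np.sin(x), atol=dx**2)   # second-order accurate
```

In the trained networks, the stencil weights are learned from DNS data, and stacking layers lets the mapping from coarse-grid quantities to closure terms be non-linear.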
Sunday, November 24, 2019 5:06PM - 5:19PM
G16.00007: Generalized Non-Linear Eddy Viscosity Models for Data-Assisted Reynolds Stress Closure Basu Parmar, Eric Peters, Kenneth Jansen, Alireza Doostan, John Evans The prediction of turbulent flow is critical for the design and analysis of engineering systems. Unfortunately, Linear (LEV) and Non-Linear Eddy Viscosity (NLEV) models lack predictive capability in practical flow scenarios involving severe flow separation, secondary flows, and adverse pressure gradients. In this talk, we propose Generalized Non-Linear Eddy Viscosity (GNLEV) models for modeling the Reynolds stress tensor. In these models, we assume the anisotropic part of the Reynolds stress tensor is a function of the mean strain rate tensor, the mean rotation rate tensor, and the mean pressure gradient. The Hilbert basis theorem can be used to generate a symmetric tensor integrity basis, and thus any GNLEV model can be written as a linear combination of these basis functions. The coefficients associated with this expansion are themselves scalar functions of the invariants. The exact form of these coefficients is unknown, and hence models must be introduced to obtain complete Reynolds stress closure. In this talk, we use a tensor-based feed-forward neural network as a surrogate model to predict these coefficients. Numerical results illustrate the effectiveness of the proposed Reynolds stress closure approach.
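A minimal numpy sketch of the integrity-basis expansion (not the authors' code): the first three basis tensors in the strain-rate and rotation-rate tensors follow Pope's classical construction, the coefficient values are hypothetical placeholders for the neural-network outputs, and the full GNLEV basis in the talk also includes the mean pressure gradient:

```python
import numpy as np

def tensor_basis(S, R):
    """First three elements of the symmetric tensor integrity basis in S and
    R (after Pope 1975); each element is symmetric and trace-free."""
    I = np.eye(3)
    T1 = S
    T2 = S @ R - R @ S
    T3 = S @ S - np.trace(S @ S) / 3.0 * I
    return [T1, T2, T3]

rng = np.random.default_rng(2)
G = rng.standard_normal((3, 3))
S = 0.5 * (G + G.T)
S -= np.trace(S) / 3.0 * np.eye(3)   # deviatoric mean strain rate
R = 0.5 * (G - G.T)                  # mean rotation rate

# A GNLEV model writes the anisotropy tensor b as sum_n g_n(invariants) T_n,
# with the scalar coefficients g_n supplied by the neural-network surrogate.
g = [-0.1, 0.05, 0.02]               # hypothetical coefficient values
b = sum(gn * Tn for gn, Tn in zip(g, tensor_basis(S, R)))
assert np.allclose(b, b.T)           # anisotropy is symmetric
assert abs(np.trace(b)) < 1e-12      # and trace-free by construction
```

Because the basis tensors and the coefficient inputs (invariants) are themselves frame-invariant, any model of this form inherits the required invariance properties automatically.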
Sunday, November 24, 2019 5:19PM - 5:32PM
G16.00008: An S-frame Discrepancy Correction for Data-Driven Reynolds Stress Closure Aviral Prakash, Eric Peters, Riccardo Balin, Kenneth Jansen, Alireza Doostan, John Evans Scale-resolving simulations demand large computational resources. Therefore, industry often relies on solving the ensemble-averaged mean flow equations. This averaging leads to an unclosed term known as the Reynolds stress tensor. This closure problem is often addressed using linear eddy viscosity (LEV) models, which assume alignment of the anisotropic part of the Reynolds stress tensor and the mean strain rate tensor. However, the two tensors do not align for many turbulent flows, including those exhibiting flow separation. In this work, we present a strategy for modeling the discrepancy between the Reynolds stress tensor predicted by an LEV model and the actual Reynolds stress tensor. The strategy relies on learning the discrepancy components in the mean strain rate eigenframe. By intelligently selecting model inputs, we arrive at a model that is both frame and Galilean invariant. We can also ensure energy stability using a simple constraint on the diagonal terms of the discrepancy. To learn a computable model, we employ high-fidelity DNS data and neural networks. Numerical results illustrate the effectiveness of our discrepancy modeling strategy. Finally, we discuss how to use model-derived turbulence variables rather than DNS data in the learning process.
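The frame-invariance benefit of working in the strain-rate eigenframe can be checked directly: rotating the observer's frame rotates both the strain rate and the discrepancy tensor, but the components expressed in the eigenframe of S are unchanged (up to the usual eigenvector sign ambiguity). A minimal numpy sketch with random stand-in tensors, not the actual learned model:

```python
import numpy as np

def eigenframe_components(S, delta):
    """Express a (symmetric) discrepancy tensor in the eigenframe of the
    mean strain rate; this is the representation learned in the talk."""
    _, V = np.linalg.eigh(S)    # columns: eigenvectors of S (the S-frame)
    return V.T @ delta @ V

rng = np.random.default_rng(3)
G = rng.standard_normal((3, 3))
S = 0.5 * (G + G.T)                            # stand-in mean strain rate
D = rng.standard_normal((3, 3))
delta = 0.5 * (D + D.T)                        # stand-in discrepancy tensor

# Rotate the observer's frame by Q and re-express both tensors.
theta = 0.4
Q = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
c1 = eigenframe_components(S, delta)
c2 = eigenframe_components(Q @ S @ Q.T, Q @ delta @ Q.T)
# Invariant up to per-eigenvector sign flips, hence compare magnitudes.
assert np.allclose(np.abs(c1), np.abs(c2))
```

The energy-stability constraint mentioned in the abstract then acts only on the diagonal of these eigenframe components, which is simple precisely because the representation is frame-invariant.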