Bulletin of the American Physical Society
72nd Annual Meeting of the APS Division of Fluid Dynamics
Volume 64, Number 13
Saturday–Tuesday, November 23–26, 2019; Seattle, Washington
Session G17: Focus Session: Recent Advances in Data-driven and Machine Learning Methods for Turbulent Flows II
Chair: Pedro M. Milani, Stanford University; Room: 4c4
Sunday, November 24, 2019 3:48PM - 4:01PM
G17.00001: Embedded Tensor Basis Neural Network for RANS Simulation of 3D Flows Andrew J. Banko, David S. Ching, John K. Eaton Reynolds-Averaged Navier-Stokes (RANS) simulations continue to be primary tools for engineering design, but standard models are inaccurate when applied to 3D turbulent flows with separation. Recently, the confluence of machine learning algorithms and large simulation datasets has spurred the development of data-driven turbulence models. This work details an embedded Tensor Basis Neural Network (TBNN) to improve Reynolds stress anisotropy predictions in complex turbulent flows. A novel aspect of our approach is that the TBNN is trained using only highly resolved large-eddy simulation data and embedded within a RANS code, so that its predictions are agnostic to errors in any baseline RANS solution. A Gaussian mixture model is used to prevent the TBNN from extrapolating while the simulation iterates to convergence. The training and testing sets include the flow over an asymmetric bump and a 180-degree U-bend. When applied to the U-bend, the TBNN-RANS improves predictions of the mean velocity deficit and eliminates unphysical secondary flows predicted by the baseline k-omega RANS model. After quantifying its performance, we compare our formulation to previous algebraic stress models and the optimal basis representation derived by Gatski and Jongen (2000).
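The core of any tensor basis network is the final combination step: a neural network predicts scalar coefficients, and the anisotropy is assembled as a linear combination of invariant basis tensors (Pope's integrity basis). A minimal sketch of that step, assuming normalized strain-rate and rotation-rate tensors and using fixed hypothetical coefficients in place of the network outputs:

```python
import numpy as np

def tensor_basis(S, R):
    """First three terms of Pope's integrity basis, built from the
    normalized strain-rate tensor S and rotation-rate tensor R."""
    I = np.eye(3)
    T1 = S
    T2 = S @ R - R @ S
    T3 = S @ S - np.trace(S @ S) / 3.0 * I
    return [T1, T2, T3]

def anisotropy(S, R, g):
    """Anisotropy b = sum_n g_n T^(n); in a TBNN the coefficients g_n
    are the network outputs, evaluated on scalar invariants of S and R."""
    return sum(gn * Tn for gn, Tn in zip(g, tensor_basis(S, R)))
```

By construction every basis tensor is symmetric and traceless, so the predicted anisotropy inherits those properties regardless of the coefficient values, which is the rotational-invariance guarantee the abstracts refer to.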
Sunday, November 24, 2019 4:01PM - 4:14PM
G17.00002: Tensor Basis Neural Networks for Turbulent Scalar Flux Modeling Pedro M. Milani, Julia Ling, John K. Eaton The Tensor Basis Neural Network (TBNN) model developed by Ling et al. (2016) has shown great promise to improve the momentum equations in Reynolds-averaged Navier-Stokes (RANS) solvers. It uses physical insight together with machine learning paradigms to embed rotational invariance into a deep neural network, and then predicts a turbulence anisotropy tensor that obeys this property. The original formulation allows only the prediction of a symmetric, traceless tensor (which is the case for the turbulence anisotropy). In the scenario where a turbulent flow carries a scalar, such as heat or a contaminant, the turbulent scalar flux (a vector) needs to be modeled concurrently in the RANS framework. In this talk, we will present how to use the TBNN construction to model the turbulent scalar flux in a way that can be readily applied to a RANS solver. Manipulation of the appropriate invariant vector basis leads to a form with a general, tensorial turbulent diffusivity which is predicted by the deep neural network at test time. We apply this model to an inclined jet in crossflow and obtain significant improvement in the mean scalar concentration prediction.
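The practical difference from a scalar eddy diffusivity is that a tensorial diffusivity lets the modeled flux point away from the mean scalar gradient, which matters in flows such as the jet in crossflow. A minimal sketch of the closure form (the diffusivity matrix here is a hypothetical constant; in the method described above it is the network's prediction):

```python
import numpy as np

def turbulent_scalar_flux(D, grad_theta):
    """Closure u_i' theta' = -D_ij * d(theta)/dx_j with a tensorial
    turbulent diffusivity D; a scalar diffusivity is the special case
    D = alpha_t * identity."""
    return -np.asarray(D) @ np.asarray(grad_theta)
```

With any off-diagonal entries in D, the flux is no longer anti-parallel to the gradient, something a scalar diffusivity can never represent.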
Sunday, November 24, 2019 4:14PM - 4:27PM
G17.00003: CFD-ready Turbulence Models from Gene Expression Programming: Concepts Yaomin Zhao, Harshal D. Akolekar, Richard D. Sandberg The gene expression programming (GEP) method is applied to develop Reynolds-averaged Navier-Stokes (RANS) models via symbolic regression. The candidate models, represented by strings of genes competing and evolving in the training, can be interpreted as explicitly given equations. Thus, the resulting model, which minimizes the cost function, can be directly implemented into RANS solvers. Based on the advantages of the GEP method, two training strategies have been proposed to develop CFD-ready RANS models. In the first framework, called frozen training, the models are trained to fit high-fidelity Reynolds stress data. In the second approach, called CFD-driven training, the fitness of candidate models is evaluated by running RANS calculations in an integrated way. Both methods have been applied to model development for wake mixing in turbomachines. New models are trained based on a high-pressure turbine case and then tested for three additional cases. Despite the different configurations and operating conditions, the predicted wake mixing profiles are significantly improved in all \textit{a posteriori} tests. Furthermore, analysis of the differences between the models shows that the enhanced wake prediction is predominantly due to the extra diffusion introduced by the CFD-driven model.
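In the frozen-training strategy, each candidate expression is scored directly against high-fidelity data; the CFD-driven variant would instead run a RANS calculation per candidate. A toy sketch of the frozen-training fitness loop, with hypothetical candidate expressions and synthetic stand-in data (a real GEP implementation evolves the expression strings themselves):

```python
import numpy as np

def fitness(model, invariants, b_target):
    """Frozen-training cost: mean-squared error of a candidate symbolic
    model against high-fidelity anisotropy data."""
    return np.mean((model(invariants) - b_target) ** 2)

# Hypothetical candidate expressions a GEP run might produce.
candidates = {
    "linear":    lambda I1: -0.09 * I1,
    "quadratic": lambda I1: -0.09 * I1 + 0.01 * I1**2,
}

I1 = np.linspace(0.1, 2.0, 50)           # a scalar invariant
b_target = -0.09 * I1 + 0.01 * I1**2     # stand-in for LES/DNS training data
best = min(candidates, key=lambda k: fitness(candidates[k], I1, b_target))
```

Because the surviving candidate is an explicit algebraic expression, it can be pasted into a RANS solver as-is, which is the "CFD-ready" property the abstract emphasizes.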
Sunday, November 24, 2019 4:27PM - 4:40PM
G17.00004: CFD-ready Turbulence Models from Gene Expression Programming: Unsteady Flows Chitrarth Lav, Jimmy Philip, Richard Sandberg Prediction of flows exhibiting vortex shedding using URANS is still a challenge today. Existing turbulence closures in URANS not only represent the turbulence length scales poorly, but also account for the deterministic shedding scales twice: once through the closure and once through the scale resolution. We propose an alternative non-linear closure, built only for the stochastic scales, i.e., devoid of the shedding scales, allowing URANS to resolve the deterministic unsteadiness. The closure is obtained from a novel symbolic regression algorithm, Gene Expression Programming (GEP), which generates a tangible equation for the modelled anisotropy. Using a high-fidelity dataset as reference, the stochastic component of the anisotropy is extracted by triple decomposing the data, which is subsequently used by GEP to produce the new closure. Once obtained, the closure can be used in isolation within URANS as it doesn't rely on high-fidelity data anymore. The approach is demonstrated using a zero pressure gradient turbulent wake as the reference dataset. The obtained closure was tested on 6 unseen cases, including pressure gradients, and the model significantly outperforms the existing closure, while being 400 times cheaper than the high-fidelity simulation.
Sunday, November 24, 2019 4:40PM - 4:53PM
G17.00005: Non-local, frame-independent data-driven turbulence modeling by using deep neural networks Muhammad Irfan Zafar, Jiequn Han, Heng Xiao Recent advances in machine learning techniques have enabled researchers to explore data-driven turbulence models as attractive alternatives to traditional algebraic or PDE-based models. However, current data-driven models are all based on local mapping and thus are only applicable to equilibrium turbulence (as with the eddy-viscosity model and algebraic stress models). In this work, we present a PDE-inspired deep neural network architecture based on non-local mapping, which will be used to discover a turbulent constitutive relation from data. Such a network-based representation retains the non-local transport physics embodied in the Reynolds stress transport equations but avoids explicit modeling of individual terms. Furthermore, the neural network is devised to be frame-independent, which is a basic requirement of all constitutive models. Simple illustrative examples are presented to demonstrate the merits of the proposed framework.
Sunday, November 24, 2019 4:53PM - 5:06PM
G17.00006: LES of turbulent channel flow using an artificial neural network Jonghwan Park, Haecheon Choi Neural networks (NNs) are used to map the relation between subgrid-scale (SGS) stresses and various input sets, and \textit{a priori} and \textit{a posteriori} tests are conducted to investigate the performance of NNs in a turbulent channel flow at \textit{Re}$_{\tau }\approx $180. The NN with stencils of the velocity components as the input shows the highest correlation coefficient between true and predicted SGS stresses, and also predicts backscatter very well. However, the NN with the strain rate as the input shows the best agreement with the filtered DNS data for the averaged SGS shear stress, even though the correlation coefficient between true and predicted SGS stress is low. In the \textit{a posteriori} test, NNs which predict the backscatter well in the \textit{a priori} test provide inaccurate statistics. On the other hand, even without a wall damping function or \textit{ad hoc} clipping, the NN with the strain rate as the input shows excellent agreement with the filtered DNS data for the mean velocity and Reynolds shear stress. The present NN predicts pointwise SGS stresses without averaging in the homogeneous direction(s), which is often adopted for the use of the dynamic Smagorinsky model. The present NN model is applied to a higher Reynolds number (\textit{Re}$_{\tau }\approx $720) with the model trained at the lower Reynolds number, and the results also show good agreement with the filtered DNS data.
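The \textit{a priori} metric quoted in this abstract is simply the correlation coefficient between the filtered-DNS ("true") and network-predicted SGS stress fields, computed pointwise over the domain. A minimal sketch of that evaluation (the stress fields here are placeholders):

```python
import numpy as np

def apriori_correlation(tau_true, tau_pred):
    """Pearson correlation between true (filtered DNS) and modeled SGS
    stress fields, the standard a priori test metric. Both inputs are
    flattened so arbitrary grid shapes are accepted."""
    return np.corrcoef(np.ravel(tau_true), np.ravel(tau_pred))[0, 1]
```

As the abstract notes, a high a priori correlation does not guarantee good a posteriori statistics, which is why both tests are run.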
Sunday, November 24, 2019 5:06PM - 5:19PM
G17.00007: Deep learning based sub-grid scale closure for LES of Kraichnan turbulence Suraj Pawar, Omer San, Adil Rasheed Performing high-fidelity simulations of large-scale multiscale flow problems that resolve fine spatiotemporal features is computationally intractable. Large eddy simulation (LES) techniques aim at reducing the computational cost by resolving the large scales of the flow and modeling the effect of the small scales. In the present work, we put forth data-driven sub-grid scale closure models for LES of two-dimensional Kraichnan turbulence in an a priori setting. We use the resolved flow field on the coarser grid to estimate the eddy viscosity and subgrid stresses. Our data-driven closure models are based on a convolutional neural network (CNN) fed by snapshot data from the whole domain, and a multilayer feedforward deep neural network (DNN) that utilizes localized stencil data. We analyze these two different neural networks in terms of the amount of training data, training and deployment computational time, selection of input and output features, and their characteristics in modeling accuracy and numerical stability.
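For context, the classical algebraic estimate that such data-driven closures aim to replace is the Smagorinsky eddy viscosity computed from the resolved strain rate. A sketch on a 2D periodic grid, where the model constant and the central-difference stencil are illustrative choices, not the paper's settings:

```python
import numpy as np

def smagorinsky_nu_t(u, v, dx, Cs=0.17):
    """Smagorinsky eddy viscosity nu_t = (Cs*dx)^2 * |S| from a resolved
    2D velocity field (u, v) on a periodic grid; a CNN/DNN closure would
    replace this algebraic estimate with a learned prediction."""
    dudx = (np.roll(u, -1, axis=1) - np.roll(u, 1, axis=1)) / (2 * dx)
    dudy = (np.roll(u, -1, axis=0) - np.roll(u, 1, axis=0)) / (2 * dx)
    dvdx = (np.roll(v, -1, axis=1) - np.roll(v, 1, axis=1)) / (2 * dx)
    dvdy = (np.roll(v, -1, axis=0) - np.roll(v, 1, axis=0)) / (2 * dx)
    # |S| = sqrt(2 S_ij S_ij) with S12 = (dudy + dvdx)/2
    Smag = np.sqrt(2 * (dudx**2 + dvdy**2) + (dudy + dvdx) ** 2)
    return (Cs * dx) ** 2 * Smag
```

The learned models in the abstract predict this quantity (or the subgrid stresses directly) from the resolved field, either from whole-domain snapshots (CNN) or local stencils (DNN).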
Sunday, November 24, 2019 5:19PM - 5:32PM
G17.00008: Improving linear embedding of complex nonlinear flow dynamics Nikolaus Adams, Ludger Paehler We propose an improvement on the concept of linear embedding of nonlinear flow dynamics by a Koopman-mode encoding network. A solution representation of approximate Koopman modes enables a linear estimation of the time evolution on a reduced number of degrees of freedom. Lusch et al., Deep learning for universal linear embeddings of nonlinear dynamics, Nature Communications, 2018, proposed an encoder-decoder deep learning approach of approximate Koopman projection, and demonstrated application feasibility for dynamical systems with continuous spectra. The most complex flow considered by Lusch et al. is that of low-Reynolds-number incompressible 2D cylinder flow. The objective of our work is to obtain a better representation of the latent dynamics in order to represent significantly more complex flow dynamics. The concept is to improve on the auto-encoding capability of the deep learning approach with a probabilistic objective and by including input information. We demonstrate the feasibility of the approach for broad-band flow dynamics such as that generated by 3D Taylor-Green vortex-flow transition. We also consider the representation of compressibility effects in oscillating gas-bubble dynamics.
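The payoff of a Koopman embedding is that time advancement in the latent space reduces to repeated multiplication by a single matrix K. A sketch of that linear rollout, with the learned encoder/decoder networks of Lusch et al. omitted and the latent state and operator as placeholders:

```python
import numpy as np

def koopman_rollout(z0, K, steps):
    """Advance a latent state linearly, z_{k+1} = K z_k, for `steps`
    steps; in a Koopman autoencoder z0 = encoder(x0) and each z_k is
    mapped back to physical space by the decoder."""
    traj = [np.asarray(z0)]
    for _ in range(steps):
        traj.append(K @ traj[-1])
    return np.stack(traj)
```

Any dynamics the autoencoder fails to linearize shows up as rollout error, which is why a richer latent representation is needed for broad-band flows like the Taylor-Green transition.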
© 2020 American Physical Society