Bulletin of the American Physical Society
75th Annual Meeting of the Division of Fluid Dynamics
Volume 67, Number 19
Sunday–Tuesday, November 20–22, 2022; Indiana Convention Center, Indianapolis, Indiana.
Session A21: Turbulence: Machine Learning Methods for Turbulence Modeling I
Chair: Pedram Hassanzadeh, Rice University; Room: 208
Sunday, November 20, 2022, 8:00AM–8:13AM
A21.00001: Capturing small scale dynamics of turbulent velocity and scalar fields using deep learning Dhawal Buaria, Katepalli R Sreenivasan The notion of small scale universality, captured for instance by the statistics of velocity gradients, is central to our understanding and modeling of turbulent flows. Similarly, when considering associated scalar transport and mixing processes, the statistics of scalar gradients are of equal importance. However, obtaining statistics of velocity and scalar gradients from direct numerical simulations at practical Reynolds numbers and Schmidt numbers is still beyond the capability of current supercomputers. In this work, we use a deep learning framework to model gradient dynamics by utilizing physics-informed tensor-based neural networks. We learn from a massive direct numerical simulation database at various Reynolds numbers and demonstrate that our model can predict statistics at higher unseen Reynolds numbers. Likewise, we illustrate extensions to passive scalar mixing at high Schmidt numbers. Our work demonstrates that prohibitively expensive direct numerical simulations at increasingly high Reynolds and Schmidt numbers can possibly be avoided, and the small scale dynamics of turbulence can be adequately modeled from existing datasets.
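The tensor-based architecture mentioned in this abstract can be illustrated with a minimal numpy sketch: the velocity gradient is split into strain and rotation parts, scalar invariants feed a learned function, and the prediction is a linear combination of basis tensors. The basis, invariants, and the stand-in `coeff_fn` below are illustrative choices, not the authors' model.

```python
import numpy as np

def tensor_basis_prediction(A, coeff_fn):
    """Sketch of a tensor-basis prediction: decompose the velocity
    gradient A into strain S and rotation W, form scalar invariants
    and a small tensor basis, and combine the basis using coefficients
    produced by a learned function (here a stand-in callable)."""
    S = 0.5 * (A + A.T)          # strain-rate tensor (symmetric part)
    W = 0.5 * (A - A.T)          # rotation-rate tensor (antisymmetric part)
    invariants = np.array([np.trace(S @ S), np.trace(W @ W), np.trace(S @ S @ S)])
    basis = [S,
             S @ S - np.eye(3) * np.trace(S @ S) / 3.0,   # deviatoric S^2
             W @ S - S @ W]                               # symmetric commutator
    coeffs = coeff_fn(invariants)  # in the real model, a neural network
    return sum(c * T for c, T in zip(coeffs, basis))

# toy stand-in for the trained network
A = np.array([[0.1, 0.3, 0.0], [-0.2, 0.0, 0.1], [0.0, 0.2, -0.1]])
H = tensor_basis_prediction(A, lambda inv: np.tanh(inv))
```

Because every basis tensor here is symmetric, the predicted tensor is symmetric by construction, which is one way such architectures embed physical constraints.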
Sunday, November 20, 2022, 8:13AM–8:26AM
A21.00002: Lagrangian Large Eddy Simulations via Physics-Informed Machine Learning Michael Chertkov, Yifeng Tian, Mikhail Stepanov, Chris L Fryer, Michael Woodward, Criston M Hyett, Daniel Livescu In this work, we apply Physics-Informed Machine Learning to develop Lagrangian Large Eddy Simulation (L-LES) models for turbulent flows. We generalize the evolutionary equations of Lagrangian particles moving in weakly compressible turbulence with extended, physics-informed parametrization and functional freedom, by combining physics-based parameters and physics-inspired Neural Networks (NN) to describe the evolution of turbulence within the resolved range of scales. The sub-grid scale contributions are modeled separately with physical constraints to account for the effects from un-resolved scales. We build the resulting model under the Differentiable Programming framework to facilitate efficient training and then train the model on a set of coarse-grained Lagrangian data extracted from fully-resolved Direct Numerical Simulations. We experiment with loss functions of different types, including trajectory, field, and statistics-based ones to embed physics into the learning. We show that our Lagrangian LES model is capable of reproducing Eulerian and unique Lagrangian turbulence structures and statistics over a range of turbulent Mach numbers. |
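Of the loss types this abstract mentions, a trajectory-based loss is the simplest to sketch: penalize the squared distance between model-advanced particle positions and the coarse-grained reference trajectories. The array shapes and function name below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def trajectory_loss(model_traj, ref_traj):
    """Mean squared position error between model and reference particle
    trajectories; arrays have shape (time, particle, 3)."""
    return np.mean(np.sum((model_traj - ref_traj) ** 2, axis=-1))

# toy example: model positions offset from the reference by 0.01 in x, y, z
ref = np.zeros((10, 5, 3))
model = ref + 0.01
loss = trajectory_loss(model, ref)
```

Field- and statistics-based losses would replace the per-particle position error with errors on Eulerian fields or on ensemble statistics, respectively.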
Sunday, November 20, 2022, 8:26AM–8:39AM
A21.00003: Combining spectral analyses of turbulent flows and neural networks for explainable data-driven closure modeling Pedram Hassanzadeh, Adam Subel, Yifei Guan, Ashesh K Chattopadhyay Recent studies have found promising results using machine learning (ML) techniques such as convolutional neural networks (CNNs) to develop data-driven subgrid-scale closures, e.g., for large-eddy simulations. However, the lack of interpretability of such deep NNs is a serious shortcoming, limiting the applications of such data-driven closures. Furthermore, NNs and similar techniques cannot be expected to work accurately outside their training manifold, i.e., they often do not extrapolate. Transfer learning (TL), which involves re-training some layers with a small amount of new data, offers a solution to this, and a few recent studies have found promising results in simple test cases. Here, we present a framework, based on combining the spectral analysis of turbulent flows with the spectral analysis of CNNs, to 1) provide full explainability of what is learned during TL, and 2) provide insights into what the CNNs learn, from the input to the output. Using 2D turbulence as the test case, we show how this framework connects with the physics of the flow and some of the recent advances in the ML community on the training of NNs.
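The spectral view of a CNN described here can be sketched in a few lines: the Fourier transform of a learned convolution kernel shows which wavenumbers that layer passes or suppresses. The kernel below is a hand-picked smoothing stencil standing in for a learned one; the grid size is arbitrary.

```python
import numpy as np

# Illustrative 2-D kernel (a separable smoothing stencil) standing in
# for a learned convolutional filter; its spectrum reveals its
# wavenumber response, the core of this kind of spectral analysis.
kernel = np.outer([1.0, 2.0, 1.0], [1.0, 2.0, 1.0]) / 16.0
spectrum = np.abs(np.fft.fftshift(np.fft.fft2(kernel, s=(64, 64))))

# a low-pass kernel concentrates its response at small wavenumbers:
center = spectrum[32, 32]   # zero wavenumber (after fftshift)
edge = spectrum[32, 0]      # Nyquist wavenumber along one axis
```

Comparing such kernel spectra before and after transfer learning is one way to make explicit what re-training changes.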
Sunday, November 20, 2022, 8:39AM–8:52AM
A21.00004: Toward neural-network-based large eddy simulation: application to turbulent flow over a circular cylinder Myunghwa Kim, Jonghwan Park, Haecheon Choi A neural-network (NN)-based large eddy simulation is conducted for flow over a circular cylinder. We propose NN models with a fusion layer in addition to consecutive hidden layers. The input variables are the grid- and test-filtered strain rate or velocity gradient, and the output is the subgrid-scale (SGS) stresses. The training data are from a direct numerical simulation of flow over a circular cylinder at Re=3,900 based on the free-stream velocity and cylinder diameter. The trained SGS models are evaluated in a priori and a posteriori tests under the trained flow condition and show slightly better predictions than physics-based SGS models such as the dynamic Smagorinsky model. The SGS models are also applied to higher Reynolds number flows at Re=5,000 and 10,000, and accurately predict the flow statistics.
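The fusion-layer idea can be sketched as two input branches (grid-filtered and test-filtered quantities) processed separately and then concatenated before the output mapping. Layer sizes are arbitrary and the weights below are random placeholders, not trained values from this work.

```python
import numpy as np

rng = np.random.default_rng(1)

def layer(x, w, b):
    """One dense layer with tanh activation."""
    return np.tanh(x @ w + b)

# two branches: 9 components each of grid- and test-filtered velocity gradient
grid_in = rng.normal(size=9)
test_in = rng.normal(size=9)
h_grid = layer(grid_in, rng.normal(size=(9, 16)), np.zeros(16))
h_test = layer(test_in, rng.normal(size=(9, 16)), np.zeros(16))

fused = np.concatenate([h_grid, h_test])      # the fusion layer
tau = fused @ rng.normal(size=(32, 6))        # six independent SGS stresses
```

The output dimension of six follows from the symmetry of the SGS stress tensor, which has six independent components.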
Sunday, November 20, 2022, 8:52AM–9:05AM
A21.00005: Unsupervised machine-learning-based sub-grid scale modeling for coarse-grid LES Soju Maejima, Soshi Kawai In this talk, we propose a machine-learning-based sub-grid-scale (SGS) modeling approach for coarse-grid large-eddy simulation (LES). The machine learning model performs super-resolution of the LES flow field into a flow field of direct numerical simulation (DNS) quality. In other words, the model estimates the high-wavenumber components of the flow that the coarse-grid LES does not resolve. By utilizing an unsupervised learning model (CycleGAN), the model is able to learn the correlation between poorly-resolved flows of coarse-grid LES and well-resolved flows of DNS, which is impossible with supervised learning methods. The resultant super-resolved flow is then used to calculate the SGS stress components. We show that the results agree well with the SGS stress derived from DNS data in a priori tests, including the strong anisotropies near the wall. The model is also tested in an a posteriori manner, and the results are discussed.
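The final step described here, computing SGS stress from a super-resolved field, follows the standard definition tau = bar(uu) - bar(u)bar(u). A minimal 1-D numpy sketch, with a simple periodic box filter and a synthetic velocity signal standing in for the super-resolved flow:

```python
import numpy as np

def box_filter(f, w=4):
    """Periodic 1-D box filter of width w samples, applied via FFT
    (circular convolution with a normalized top-hat kernel)."""
    return np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(np.ones(w) / w, n=f.size)))

# synthetic 'super-resolved' velocity: a resolved mode plus fine-scale content
x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
u = np.sin(x) + 0.2 * np.sin(16 * x)

# SGS stress: filtered product minus product of filtered fields
tau = box_filter(u * u) - box_filter(u) ** 2
```

In 2-D or 3-D the same construction gives the full tensor tau_ij = bar(u_i u_j) - bar(u_i) bar(u_j); the mean of tau is non-negative because filtering removes energy.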
Sunday, November 20, 2022, 9:05AM–9:18AM
A21.00006: Deep Reinforcement Learning for Large-Eddy Simulation Subgrid-Scale Modeling in Turbulent Channel Flow Junhyuk Kim, Hyojin Kim, Jiyeon Kim, Changhoon Lee The need for high-precision and high-efficiency simulation naturally leads to turbulence modeling, which is challenging due to the inevitable trade-off between accuracy and cost. Recently, artificial intelligence, mainly deep neural networks (DNNs), has been actively tested with the expectation of outperforming existing models. Classical supervised learning models not only require expensive training data but also do not perform as well as expected. To overcome this, we adopted deep reinforcement learning (DRL), one of the online learning algorithms, for subgrid-scale (SGS) modeling of large-eddy simulation (LES) in turbulent channel flow. Here, DRL uses a reward function defined by a solution of LES for training and can be carried out using only statistics as given information. Through this, we trained a DNN model that produces local SGS stresses from resolved local velocity gradients with constraints of physical invariance. As a result, we found that in several simulation conditions it is possible to find an SGS model that accurately predicts given target statistics, the mean velocity and mean Reynolds shear stress profiles, showing the potential of DRL for turbulence modeling. We are now extending this approach to develop a general SGS model.
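A statistics-based reward of the kind this abstract describes can be sketched as the negative mismatch between LES statistics and target profiles. The profile shapes, weights, and function name below are illustrative assumptions, not the authors' reward.

```python
import numpy as np

def statistics_reward(u_mean_les, uv_les, u_mean_target, uv_target, w_uv=1.0):
    """Reward = negative weighted mean-squared error between the LES
    mean-velocity and Reynolds-shear-stress profiles and their targets;
    a perfect match gives the maximum reward of zero."""
    err_u = np.mean((u_mean_les - u_mean_target) ** 2)
    err_uv = np.mean((uv_les - uv_target) ** 2)
    return -(err_u + w_uv * err_uv)

# toy wall-normal profiles standing in for channel-flow statistics
y = np.linspace(0, 1, 32)
target_u, target_uv = y ** (1 / 7), -0.1 * y * (1 - y)
r_perfect = statistics_reward(target_u, target_uv, target_u, target_uv)
r_biased = statistics_reward(target_u + 0.05, target_uv, target_u, target_uv)
```

Because the reward depends only on statistics of the running LES, no paired DNS data is needed during training, which is the advantage over supervised learning noted above.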
Sunday, November 20, 2022, 9:18AM–9:31AM
A21.00007: Machine Learning assisted modeling of velocity gradient dynamics in turbulent flows Deep Shikha, Sawan S Sinha The evolution of nonlinear turbulent processes like energy cascading, scalar mixing, and intermittency is highly dependent on the evolution of velocity gradients. Accessing velocity gradient evolution through simple dynamic models has many advantages over direct numerical simulations (DNS) and experimental methods. Indeed, such velocity gradient models can be directly used as closure models for Lagrangian probability density function (PDF) methods. Modeling velocity gradient dynamics requires modeling two terms, the pressure Hessian tensor and the viscous term, which are nonlocal and mathematically unclosed. In this work, the tensor-basis neural network (TBNN) architecture is used to model the pressure Hessian term. The TBNN model is trained with locally normalized incompressible isotropic DNS data. Using a local normalization strategy enables our model to integrate the pressure Hessian term in the velocity gradient evolution equation. The current model's performance is evaluated based on known turbulent characteristics observed in DNS results. The performance of the new model is also compared with existing velocity gradient models for incompressible as well as compressible flows. The model shows a significant improvement over the existing models.
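The local normalization strategy mentioned here can be sketched as scaling each velocity-gradient sample by its own Frobenius norm, so the network sees dimensionless inputs and the scale can be reapplied afterwards. The exact normalization used by the authors is not specified in the abstract; this is a plausible minimal version.

```python
import numpy as np

def normalize_local(A, eps=1e-12):
    """Scale a velocity-gradient sample by its own Frobenius norm,
    returning the dimensionless tensor and the scale for later use."""
    scale = np.sqrt(np.sum(A ** 2)) + eps
    return A / scale, scale

# toy 2x2 velocity-gradient sample
A = np.array([[0.0, 2.0], [-1.0, 1.0]])
A_hat, s = normalize_local(A)
```

Normalizing each sample locally, rather than by a single global scale, keeps inputs well conditioned across the strongly intermittent gradient magnitudes found in turbulence.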
Sunday, November 20, 2022, 9:31AM–9:44AM
A21.00008: Accurate Deep Learning sub-grid scale models for large eddy simulations Rikhi Bose, Arunabha M Roy In this work, sub-grid scale (SGS) turbulence models have been developed for the purpose of large-eddy simulations (LES).
Sunday, November 20, 2022, 9:44AM–9:57AM
A21.00009: Turbulence Model Development based on a Novel Method Combining Gene Expression Programming and Artificial Neural Network Haochen Li, Yaomin Zhao Data-driven methods have been widely used for developing physical models. Compared with deep learning methods that usually provide "black-box" models, evolutionary algorithms like gene expression programming (GEP) focus on finding explicit model equations via symbolic regression. However, the optimization process in GEP usually causes slow convergence and difficulty in finding accurate model coefficients. Combining GEP, which has global searching capabilities, with neural networks (NNs) for gradient-based optimization, we propose a novel method called the gene expression programming neural network (GEPNN). In each GEPNN training iteration, candidate models are first optimized in the GEP framework; selected GEP models are then expressed and optimized as NNs, after which they are transformed back to the GEP framework for the next iteration. The method was first tested on recovering different physical laws, showing that GEPNN converges quickly to models with precise constant coefficients. Furthermore, GEPNN is applied to model the subgrid-scale stress for large-eddy simulation of turbulence. The GEPNN model shows significant improvements in predicting turbulence statistics and flow structures in a posteriori tests.
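The coefficient-refinement phase of such a hybrid can be sketched as follows: given a symbolic structure found by the evolutionary search (here, assumed to be y = a*x^2 + b), gradient descent sharpens the constants, which is the step plain GEP struggles with. The target law, starting constants, and learning rate are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = 3.0 * x ** 2 + 0.5        # 'true' law the hybrid should recover

# suppose GEP proposed the structure y = a*x**2 + b with rough constants;
# gradient descent on the mean-squared error refines a and b
a, b = 1.0, 0.0
for _ in range(2000):
    pred = a * x ** 2 + b
    grad_a = np.mean(2 * (pred - y) * x ** 2)
    grad_b = np.mean(2 * (pred - y))
    a, b = a - 0.1 * grad_a, b - 0.1 * grad_b
```

In the full method the refined constants are written back into the symbolic individual, so the evolutionary search in the next iteration starts from an accurately calibrated model.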