Bulletin of the American Physical Society
77th Annual Meeting of the Division of Fluid Dynamics
Sunday–Tuesday, November 24–26, 2024; Salt Lake City, Utah
Session X15: Low-Order Modeling and Machine Learning in Fluid Dynamics: Turbulence Modeling I
Chair: Leixin Ma, Arizona State University; Room: 155 E
Tuesday, November 26, 2024 8:00AM - 8:13AM
X15.00001: Epistemic Uncertainty Quantification of Deep Neural-Network Based Turbulence Closures Cody Grogan, Som Dutta, Mauricio Tano, Som Dhulipala, Izabela Gutowska With the increase in data surrounding diverse turbulence phenomena, Deep Neural Network (DNN) based turbulence closures are being developed to increase the fidelity of turbulence models. A common approach is to increase the fidelity of Reynolds-Averaged Navier-Stokes (RANS) simulations, with DNNs trained using data from Direct Numerical Simulation (DNS) or high-resolution Large Eddy Simulations (LES). However, the main obstacle to wider adoption of DNN-based turbulence closures in simulations of critical industrial processes is their inability to quantify the uncertainty of the DNN's predictions. The unknown uncertainty of the DNN-based closure can be quantified within a Bayesian framework, which determines the epistemic (model) uncertainty of the DNN through Bayesian inference. This study introduces the idea and background of Bayesian inference and how it is used to quantify the epistemic uncertainty of DNN-based turbulence closures. Different Bayesian inference approximation methods, such as Deep Ensembles, Monte-Carlo Dropout, and Stochastic Variational Inference, will be compared, along with their associated uncertainty quantification performance.
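As an illustration of the Monte-Carlo Dropout approximation named above, the sketch below keeps dropout active at inference and reads the spread over repeated stochastic forward passes as the epistemic uncertainty of the closure prediction. The network size, feature count, and data are placeholder assumptions, not the authors' closure.

# Minimal Monte-Carlo Dropout sketch (illustrative only; network size,
# feature count, and data are placeholders, not the authors' model).
import torch
import torch.nn as nn

class DropoutClosure(nn.Module):
    def __init__(self, n_features=5, n_outputs=1, p=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(), nn.Dropout(p),
            nn.Linear(64, 64), nn.ReLU(), nn.Dropout(p),
            nn.Linear(64, n_outputs),
        )

    def forward(self, x):
        return self.net(x)

def mc_dropout_predict(model, x, n_samples=100):
    """Keep dropout active and average over stochastic forward passes."""
    model.train()  # leaves dropout on; a trained model is assumed
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)  # prediction, epistemic spread

model = DropoutClosure()
x = torch.randn(8, 5)                 # 8 hypothetical input states
mean, std = mc_dropout_predict(model, x)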
Tuesday, November 26, 2024 8:13AM - 8:26AM
X15.00002: Improving Predicted Statistics of Velocity Gradient Closures using Parameterized Lagrangian Deformation Models Criston M Hyett, Michael Woodward, Yifeng Tian, Mikhail Stepanov, Chris L Fryer, Daniel Livescu, Michael Chertkov We advance the modeling of the statistical evolution of the velocity gradient tensor in isotropic turbulence by incorporating parameterized Lagrangian memory terms into a physics-informed machine learning framework. Our novel approach synergizes data-driven techniques from the previously proposed Tensor Basis Neural Network (TBNN) model with phenomenological deformation theories, such as the recent Fluid Deformation Models (FDM). This new model, termed the Lagrangian Deformation Tensor Network, outperforms both the TBNN and phenomenological models in predictive capability while elucidating Lagrangian memory effects. The learned memory kernels are analyzed and compared to the results obtained from the alternative Mori-Zwanzig representation of memory effects and the phenomenological upstream assumptions in the FDMs. Our findings highlight the significant role of time-history in predicting the deviatoric pressure Hessian.
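As a minimal sketch of the tensor-basis idea underlying TBNN-type models, the snippet below maps scalar invariants of the velocity gradient to coefficients that weight a fixed set of basis tensors. The particular basis, invariants, and network size are illustrative assumptions, not the Lagrangian Deformation Tensor Network itself.

# Sketch of a tensor-basis expansion in the spirit of TBNN-type models:
# coefficients g_n, predicted from scalar invariants, weight basis tensors
# built from the strain (S) and rotation (W) parts of the velocity gradient.
# The basis, invariants, and network size are illustrative only.
import torch
import torch.nn as nn

def bases_and_invariants(A):
    S = 0.5 * (A + A.T)          # symmetric (strain-rate) part
    W = 0.5 * (A - A.T)          # antisymmetric (rotation) part
    T = [torch.eye(3), S,
         S @ S - torch.eye(3) * torch.trace(S @ S) / 3.0,
         W @ W - torch.eye(3) * torch.trace(W @ W) / 3.0]
    lam = torch.stack([torch.trace(S @ S), torch.trace(W @ W),
                       torch.trace(S @ S @ S)])
    return T, lam

coeff_net = nn.Sequential(nn.Linear(3, 32), nn.Tanh(), nn.Linear(32, 4))

A = torch.randn(3, 3)            # a hypothetical velocity-gradient sample
T, lam = bases_and_invariants(A)
g = coeff_net(lam)               # one coefficient per basis tensor
H_model = sum(g[n] * T[n] for n in range(len(T)))  # modeled tensor, e.g. pressure Hessian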
Tuesday, November 26, 2024 8:26AM - 8:39AM
X15.00003: Neural Network-Based Closure Model of the Ensemble-Averaging Dynamics of Turbulent Puffs in Transitional Pipe Flow Yu Shuai, Clarence W Rowley The subcritical transition to turbulence in pipe flow is closely related to localized turbulent patterns called puffs. Individual puffs have chaotic dynamics, as their trajectories wander around exact coherent states (ECS) of the Navier-Stokes equations in phase space. Nevertheless, they share and maintain a well-defined characteristic spatial structure resembling that of localized relative periodic orbits (RPOs) of the Navier-Stokes equations (NSE). Such similarity indicates the feasibility of investigating puff dynamics from a statistical perspective to capture the common features of their time evolution. In this work, we derive the equations for the ensemble-averaged puff profile from a simplified dynamical model (Barkley 2016). As the ensemble of puffs becomes large enough, the average profile stops fluctuating and approaches a stable equilibrium; this behavior is due to unclosed terms in the averaged equations. We then put forward a closure model by training a neural network with an initial guess based on the eddy viscosity hypothesis. The model indicates that the unclosed terms strengthen the turbulent diffusion while decreasing the Reynolds number to stabilize the structure of the mean puff profile. Finally, we relate our model to previous conclusions on the differences between the dynamics of puffs and RPOs in actual pipe flow, which motivates our next step: investigating the mean evolution of real puffs governed by the NSE.
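The closure problem referred to above can be seen in miniature: ensemble-averaging a nonlinear term leaves a covariance that the mean fields alone do not determine. The sketch below uses a synthetic ensemble (not the Barkley model itself) to compute that unclosed residual.

# Miniature illustration of the closure problem that arises when
# ensemble-averaging a nonlinear term: <u*q> != <u><q> in general.
# Synthetic ensemble; not the Barkley (2016) puff model.
import numpy as np

rng = np.random.default_rng(0)
n_members, n_x = 2000, 128
x = np.linspace(0.0, 2.0 * np.pi, n_x)

# Hypothetical ensemble of realizations: a mean profile plus correlated noise.
u = np.sin(x) + 0.3 * rng.standard_normal((n_members, n_x))
q = np.cos(x) + 0.3 * rng.standard_normal((n_members, n_x)) + 0.5 * (u - np.sin(x))

uq_mean = (u * q).mean(axis=0)                                  # ensemble average of the nonlinear term
closure_gap = uq_mean - u.mean(axis=0) * q.mean(axis=0)         # unclosed covariance <u'q'>
print(f"max unclosed term: {np.abs(closure_gap).max():.3f}")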
Tuesday, November 26, 2024 8:39AM - 8:52AM
X15.00004: RANN: A Neural RANS Closure Model for Physics-Informed Machine Learning on General Geometries Matthew Uffenheimer, Luca Rigazio, Eckart Heinz Meiburg Aerodynamic design involves the use of various techniques to optimize the geometry of a design component to achieve desired characteristics such as increased lift or decreased drag. This usually requires iterative design loops that are slow and computationally expensive, and may rely on simplified turbulence models such as Reynolds-Averaged Navier-Stokes (RANS).
Tuesday, November 26, 2024 8:52AM - 9:05AM
X15.00005: Subgrid Stress Modeling with Data Driven Structured State Space Sequence Models Andy Wu, Sanjiva K Lele Data-driven subgrid stress modeling with machine learning has shown promise in increasing the accuracy of Large Eddy Simulations (LES) compared to traditional subgrid stress models. Adapting Structured State Space sequence (S4) models to learn the subgrid stress tensor in multi-dimensional space with the S4ND model allows global, continuous convolution kernels to be learned. The S4ND model is used in a U-net architecture to extract multi-scale spatial features of turbulence, and is trained on forced homogeneous isotropic turbulence (HIT) and channel flow at two different filter widths. From a priori analysis, the S4ND U-net model is able to generalize to both interpolative and extrapolative filter widths with minimal change in the loss, even when the extrapolative filter width corresponds to situations where over 20 percent of the energy has been filtered out compared to Direct Numerical Simulation (DNS). In contrast, for other data-driven subgrid models the loss increases by a factor of 1.5-3 when generalizing to an extrapolative filter width. Furthermore, a posteriori analyses involving both channel flow and forced HIT are conducted with various models to evaluate the S4ND U-net model.
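A priori testing of this kind compares a model against the exact subgrid stress extracted from filtered DNS. The sketch below, using a synthetic field and a simple box filter, shows how that reference stress and the filtered-out energy fraction are commonly computed; it is an illustration of the procedure, not the authors' pipeline.

# A priori extraction of the exact subgrid stress tau_ij from a (synthetic)
# DNS velocity field using a box filter; illustrative procedure only.
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(1)
N = 64
u = [rng.standard_normal((N, N, N)) for _ in range(3)]   # placeholder "DNS" velocity

def box_filter(f, width=4):
    return uniform_filter(f, size=width, mode="wrap")     # periodic box filter

u_f = [box_filter(ui) for ui in u]

# Exact SGS stress: tau_ij = filter(u_i u_j) - filter(u_i) filter(u_j)
tau = {(i, j): box_filter(u[i] * u[j]) - u_f[i] * u_f[j]
       for i in range(3) for j in range(i, 3)}

# Fraction of kinetic energy removed by the filter (cf. the >20 percent case above).
e_dns = sum((ui ** 2).mean() for ui in u)
e_les = sum((uf ** 2).mean() for uf in u_f)
print(f"filtered-out energy fraction: {1.0 - e_les / e_dns:.2f}")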
Tuesday, November 26, 2024 9:05AM - 9:18AM
X15.00006: The multiscale-based data-driven subgrid-scale model with physics constraints for enhanced prediction of unresolved scales in turbulent flow Bahrul Jalaali, Kie Okabayashi Recent advances in data-driven subgrid-scale (SGS) models are paving the way to capture subfilter-scale fluctuations through deep neural networks (DNN). In this study, we introduce a multi-scale convolutional NN (CNN)-based SGS model that leverages the multi-scale nature of turbulence vortices. The model progressively encodes features from coarser to finer scales, incorporating the energy transfer process between scales. We aim to determine the model's effectiveness in extracting features of complex turbulent fields to resolve the residual stress (τij). To enhance predictions, we integrate a physics-constrained DNN. We apply our data-driven SGS model to large-eddy simulation (LES) of turbulent channel flow. A priori tests demonstrate that this model outperforms the conventional CNN-based model, achieving high correlation coefficients against the label data in the viscous sublayer, buffer layer, and outer layer. The results highlight the model's proficiency and robustness in resolving scales of motion and residual stress, and its effectiveness in mimicking the energy transfer process of turbulence. A comprehensive a posteriori test will be presented at the conference, highlighting the advancements of this model in actual flow simulations.
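As a minimal sketch of a coarse-to-fine multi-scale CNN encoder, the snippet below processes a filtered velocity field at several pooled resolutions and merges the features before predicting a residual-stress component. Layer sizes, the number of scales, and the single-component output are placeholder assumptions, not the model proposed in the abstract.

# Minimal multi-scale CNN sketch: encode a filtered velocity field at several
# resolutions, merge the features, and predict one residual-stress component.
# Sizes and structure are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleSGS(nn.Module):
    def __init__(self, in_ch=3, width=16, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        self.encoders = nn.ModuleList(
            [nn.Conv3d(in_ch, width, kernel_size=3, padding=1) for _ in scales]
        )
        self.head = nn.Conv3d(width * len(scales), 1, kernel_size=1)

    def forward(self, u):                                       # u: (batch, 3, nx, ny, nz)
        feats = []
        for s, enc in zip(self.scales, self.encoders):
            coarse = F.avg_pool3d(u, s) if s > 1 else u         # coarser copy of the field
            f = F.relu(enc(coarse))
            feats.append(F.interpolate(f, size=u.shape[2:]))    # back onto the fine grid
        return self.head(torch.cat(feats, dim=1))               # one tau component

model = MultiScaleSGS()
tau_pred = model(torch.randn(2, 3, 32, 32, 32))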
Tuesday, November 26, 2024 9:18AM - 9:31AM
X15.00007: A dynamic recursive neural-network-based subgrid-scale model for large eddy simulation Chonghyuk Cho, Haecheon Choi One approach to developing a subgrid-scale (SGS) model for large eddy simulation involves obtaining the SGS stresses and resolved flow variables from filtered direct numerical simulation (fDNS) data and using them to train a neural network (NN). However, neural networks can make arbitrary predictions when the input data lie outside the scope of the training data (the extrapolation issue). As a result, a trained NN-based SGS model performs poorly when applied to the trained flow at a much higher Reynolds number or with different grid sizes, or to untrained flows having different flow topologies. To overcome these difficulties, we first develop a recursive procedure to simulate high Reynolds number flow and then adopt a dynamic approach to accommodate different flow topologies. The present neural network is trained only on forced homogeneous isotropic turbulence. We apply the dynamic recursive NN-based SGS model to turbulent channel flow and other complex flows. The results are comparable to those of traditional SGS models.
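One common remedy for the extrapolation issue described above is to non-dimensionalize the network inputs with local filtered quantities so that different Reynolds numbers and grid sizes map onto comparable input ranges. The scaling below is one such choice, shown for illustration; it is not necessarily part of the authors' recursive or dynamic procedure.

# One common remedy for the extrapolation issue: non-dimensionalize NN inputs
# with local filtered quantities so that different Reynolds numbers and grid
# sizes map to similar input ranges. The scaling is illustrative only.
import numpy as np

def normalized_inputs(grad_u, delta, eps=1e-12):
    """grad_u: (..., 3, 3) filtered velocity-gradient tensors; delta: filter width."""
    S = 0.5 * (grad_u + np.swapaxes(grad_u, -1, -2))            # strain-rate tensor
    s_mag = np.sqrt(2.0 * np.einsum("...ij,...ij", S, S))       # |S| = sqrt(2 S_ij S_ij)
    scale = s_mag[..., None, None] + eps
    # dimensionless gradient; delta*|S| gives a local velocity scale (e.g. for outputs)
    return grad_u / scale, delta * s_mag

grad_u = np.random.default_rng(2).standard_normal((1000, 3, 3))  # placeholder data
x_tensor, velocity_scale = normalized_inputs(grad_u, delta=0.05)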
Tuesday, November 26, 2024 9:31AM - 9:44AM
X15.00008: SGS backscatter effects in coarse-grid LES predicted by a machine-learning-based SGS model Soju Maejima, Soshi Kawai In this talk, a machine-learning-based subgrid-scale (SGS) model for coarse-grid large-eddy simulation (LES) is proposed. The proposed SGS model combines unsupervised and supervised machine learning models to enable accurate prediction of the SGS stresses on unconventionally coarse LES grids.
Tuesday, November 26, 2024 9:44AM - 9:57AM
X15.00009: Toward machine-learning-based large eddy simulation of flow over a complex geometry Myunghwa Kim, Haecheon Choi Our purpose is to develop a machine-learning-based subgrid-scale model that can be applied to large eddy simulation (LES) of flow over or inside a complex geometry. We conduct direct numerical simulation (DNS) of flow over a circular cylinder at a Reynolds number of 3900, and use filtered DNS data to train the neural network (NN). The trained NN is applied not only to the trained flow at higher Reynolds numbers but also to untrained flows such as flows over a backward-facing step and an airfoil. LES of the trained flow at 2-3 times higher Reynolds numbers provides good predictions. For application to untrained flows, various aspects, including modification of the NN architecture, choice of input variables, and normalization method, are considered and will be discussed in the presentation.
Tuesday, November 26, 2024 9:57AM - 10:10AM
X15.00010: Abstract Withdrawn