Bulletin of the American Physical Society
76th Annual Meeting of the Division of Fluid Dynamics
Sunday–Tuesday, November 19–21, 2023; Washington, DC
Session J29: Low-Order Modeling and Machine Learning for Turbulence I |
Chair: Yifei Guan, Rice University | Room: 152B |
Sunday, November 19, 2023 4:35PM - 4:48PM |
J29.00001: A priori screening of machine-learning turbulence models Peng Chen, Yuanwei Bin, Yipeng Shi, Mahdi Abkar, George Park, Xiang Yang Assessing the compliance of a white-box turbulence model with established turbulence knowledge is straightforward. It enables users to quickly screen conventional turbulence models and identify apparent inadequacies, allowing for more focused and fruitful validation and verification. On the other hand, comparing a black-box machine-learning (ML) model to known empiricisms is not straightforward. Without implementing and testing the model, it is not clear whether an ML model trained at finite Reynolds numbers preserves the known high-Reynolds-number limit. Having to implement a model is inconvenient, particularly when implementation involves retraining and reinterfacing. This work addresses this issue, enabling fast a priori screening of ML models based on feed-forward neural networks. The method leverages mathematical theorems, presented in this work, that estimate a network's limits even when the exact weights and biases are unknown. We screen existing ML wall models and RANS models for their compliance with the logarithmic law in an a priori manner. In addition to enabling fast screening of existing models, the theorems provide guidelines for future ML models. |
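A minimal sketch of the kind of a priori check described above (the network here is a hypothetical stand-in for a trained wall model, and κ = 0.41, B = 5.2 are assumed log-law constants; the actual screening uses the theorems' weight-free estimates rather than pointwise evaluation):

```python
import numpy as np

KAPPA, B = 0.41, 5.2  # assumed log-law constants

def ml_wall_model(y_plus):
    """Hypothetical stand-in for a trained ML wall model mapping y+ -> u+.
    A real screening would interrogate the network's weights and biases."""
    return np.log(y_plus) / KAPPA + B

def log_law_slope(model, y_plus, eps=1e-3):
    """Estimate d(u+)/d(ln y+); log-law compliance requires -> 1/kappa."""
    return (model(y_plus * np.exp(eps)) - model(y_plus)) / eps

# Probe the model's high-Reynolds-number behavior at several large y+.
slopes = [log_law_slope(ml_wall_model, yp) for yp in (1e3, 1e4, 1e5)]
print(slopes)  # a compliant model stays close to 1/KAPPA
```

A model whose slope drifts away from 1/κ at large y+ fails the screen without ever being coupled to a flow solver.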
Sunday, November 19, 2023 4:48PM - 5:01PM |
J29.00002: Data-driven classification of sheared stratified turbulence from experimental shadowgraphs Miles M Couchman, Adrien Lefauve We present a novel dimensionality reduction and unsupervised clustering framework for the classification and reduced-order modeling of density-stratified turbulence in laboratory experiments. Our method is applied to shadowgraph data collected in the 'Stratified Inclined Duct' (SID) experiment, where a rich set of turbulent states arises in a sheared buoyancy-driven counterflow at Prandtl number Pr≈700, as a function of the Reynolds number (Re) and duct tilt angle (θ). By analyzing statistics of the morphology of density interfaces embedded within the turbulent flow, we identify a 'skeleton' of distinct turbulent states underpinning the complex physics of the SID experiment. The proportion of time spent in each turbulent state varies gradually across the (Re, θ) parameter space, and at least two distinct routes to stratified turbulence are revealed. |
Sunday, November 19, 2023 5:01PM - 5:14PM |
J29.00003: Enhancing Wall-Bounded Turbulence Simulation through Differentiable Neural Wall Modeling Xiantao Fan, Jian-Xun Wang Efficiently simulating complex turbulence at high Reynolds numbers is crucial for numerous engineering applications. Accurate reconstruction of near-wall flow features is of paramount importance for effectively simulating wall-bounded turbulence. Wall-Modeled Large Eddy Simulation (WMLES) offers an efficient alternative to Wall-Resolved Large Eddy Simulation (WRLES) and Direct Numerical Simulation (DNS), especially at high Reynolds numbers. However, the traditional equilibrium wall-stress model, based on the algebraic log law, lacks accuracy for non-equilibrium flows. Recent advances in Deep Neural Networks (DNNs) provide an opportunity for a more generalized wall model. In this study, we develop a differentiable neural solver inspired by WMLES. By seamlessly integrating sequential neural networks with a differentiable Computational Fluid Dynamics (CFD) solver, we aim to effectively learn turbulent wall-bounded flows across various conditions. The proposed models are trained using an a posteriori metric to ensure accurate a posteriori predictions. Through comparisons with purely data-driven models and conventional WMLES, our method demonstrates advantages in efficiency and generalizability. |
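For contrast, the algebraic equilibrium wall model the abstract criticizes can be sketched in a few lines (κ = 0.41 and B = 5.2 are assumed constants; a Newton solve recovers the friction velocity from the LES velocity at the first off-wall point):

```python
import numpy as np

KAPPA, B = 0.41, 5.2  # assumed log-law constants

def wall_stress_log_law(u_les, y, nu, rho=1.0, iters=50):
    """Solve u_les = u_tau * (ln(y*u_tau/nu)/KAPPA + B) for u_tau by
    Newton iteration, then return the wall stress tau_w = rho*u_tau^2."""
    u_tau = max(np.sqrt(nu * u_les / y), 1e-12)  # viscous-scaling guess
    for _ in range(iters):
        log_term = np.log(y * u_tau / nu) / KAPPA + B
        f = u_tau * log_term - u_les          # residual of the log law
        df = log_term + 1.0 / KAPPA           # d f / d u_tau
        u_tau -= f / df
    return rho * u_tau**2

# Illustrative inputs: LES velocity 20 at height 0.1 above the wall.
tau_w = wall_stress_log_law(u_les=20.0, y=0.1, nu=1e-5)
```

Being an equilibrium relation, this returns a stress consistent with the log law regardless of pressure gradient or unsteadiness, which is precisely the limitation a learned, differentiable wall model aims to remove.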
Sunday, November 19, 2023 5:14PM - 5:27PM |
J29.00004: A posteriori learning of closures for geophysical turbulence using ensemble Kalman inversion Yifei Guan, Pedram Hassanzadeh, Tapio Schneider, Zhengyu Huang, Oliver Dunbar, Ignacio Lopez-Gomez, Jinlong Wu In the era of big data, one major question is whether data-driven methods can shed new light on traditional physics-based turbulence subgrid-scale models (SGMs). To answer this question, we develop data-optimized SGMs for large eddy simulation (LES) of a canonical geophysical turbulent flow, i.e., forced 2D (beta-plane) turbulence, using ensemble Kalman inversion (EKI). The EKI method has the merit of being derivative-free, thus enabling a posteriori learning: SGM parameters are learned from chosen direct numerical simulation (DNS) statistics, such as the turbulent kinetic energy spectra. To further quantify the uncertainty of the optimized coefficients, we use the calibrate, emulate, and sample (CES) algorithm, applying EKI as the "calibration" step of CES to optimize traditional LES SGMs, e.g., the Smagorinsky, Leith, and backscattering models, for a wide range of systems. We find that the optimized Smagorinsky and Leith coefficients are universal across different systems, i.e., independent of LES grid resolution, forcing wavenumber, Reynolds number, and beta-plane waves. This universality is supported by the large overlap of the posterior distributions across all the systems. While the optimized eddy-viscosity SGMs (Smagorinsky and Leith) perform well in cases where backscattering is not significant, they cannot capture some of the flow statistics in cases where the inverse cascade/backscattering dominates; there, backscattering SGMs are required for better performance. Specifically, we find that the Jansen-Held model, consisting of a biharmonic diffusion and an anti-diffusion term with coefficients optimized by EKI, performs best in matching the flow statistics, particularly in capturing extreme events and a priori metrics such as the interscale energy/enstrophy transfers. Our findings further show the power of the CES method in addressing the parametric uncertainty of physics-based SGMs when only a small amount of data is available from the original system, where invariant statistics can be used as the EKI target. |
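The derivative-free update at the heart of EKI can be sketched on a toy linear inverse problem (the numbers and forward map below are illustrative, not the SGM calibration itself):

```python
import numpy as np

rng = np.random.default_rng(0)

def eki_step(theta, forward, y, gamma, rng):
    """One ensemble Kalman inversion update with perturbed observations.
    theta: (J, p) parameter ensemble; forward maps it to (J, d) outputs."""
    J = theta.shape[0]
    G = forward(theta)
    dtheta = theta - theta.mean(axis=0)
    dG = G - G.mean(axis=0)
    C_tg = dtheta.T @ dG / J                 # cross-covariance C^{theta,g}
    C_gg = dG.T @ dG / J                     # output covariance C^{g,g}
    K = C_tg @ np.linalg.inv(C_gg + gamma)   # Kalman-type gain, (p, d)
    noise = rng.multivariate_normal(np.zeros(len(y)), gamma, size=J)
    return theta + (y + noise - G) @ K.T     # derivative-free update

# Toy inverse problem: recover theta_true from y = A @ theta_true.
A = np.array([[1.0, 2.0], [0.5, -1.0], [2.0, 0.3]])
theta_true = np.array([1.5, -0.7])
y = A @ theta_true
gamma = 1e-4 * np.eye(3)                     # observation-noise covariance

theta = rng.normal(size=(50, 2))             # initial ensemble, J = 50
for _ in range(20):
    theta = eki_step(theta, lambda t: t @ A.T, y, gamma, rng)
print(theta.mean(axis=0))                    # ensemble mean nears theta_true
```

Because the update needs only forward evaluations, the same loop can wrap an LES solver and target invariant statistics such as energy spectra.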
Sunday, November 19, 2023 5:27PM - 5:40PM |
J29.00005: Velocity gradient prediction using parameterized Lagrangian deformation models Criston M Hyett, Yifeng Tian, Mikhail Stepanov, Daniel Livescu, Michael Chertkov We seek to efficiently predict the statistical evolution of the velocity gradient tensor (VGT) by creating local models for the pressure Hessian. Previous work has shown physics-informed machine learning (PIML) to be adept at this prediction; of note in this class of models is the Tensor Basis Neural Network (TBNN), for its embedded physical constraints and demonstrated performance. Simultaneously, phenomenological models were advanced by approximating the local closure to the pressure Hessian via deformation models using the history of the VGT. The latest in this series of models is the Recent Deformation of Gaussian Fields (RDGF) model. In this work, we combine the (local in time) PIML approach with the phenomenological idea of incorporating recent deformation to create a data-driven Lagrangian deformation model. We compare the model's performance to both the TBNN and RDGF models, and provide data-driven hypotheses regarding the upstream assumptions made in the RDGF model. |
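For background, the Lagrangian closure problem referenced above can be written in its standard form (incompressible flow with unit density; the abstract itself does not restate these equations):

```latex
\frac{D A_{ij}}{D t}
  = -A_{ik}A_{kj} \;-\; H_{ij} \;+\; \nu\,\nabla^{2} A_{ij},
\qquad
A_{ij} = \frac{\partial u_i}{\partial x_j}, \quad
H_{ij} = \frac{\partial^{2} p}{\partial x_i\,\partial x_j}.
```

Taking the trace and using incompressibility ($A_{ii}=0$) fixes $\mathrm{tr}\,H = -A_{mn}A_{nm}$, so only the anisotropic part of the pressure Hessian, together with the viscous term, must be modeled along a trajectory; this is the quantity the TBNN, RDGF, and the proposed deformation model supply.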
Sunday, November 19, 2023 5:40PM - 5:53PM |
J29.00006: Learning Closed-form Equations for Subgrid-scale Closures from High-fidelity Data: Promises and Challenges Karan Jakhar, Yifei Guan, Rambod Mojgani, Ashesh K Chattopadhyay, Pedram Hassanzadeh, Laura Zanna There is growing interest in discovering interpretable, closed-form equations for subgrid-scale (SGS) closures/parameterizations of complex processes in the Earth system. Here, we apply a common equation-discovery technique with expansive libraries to learn closures from filtered direct numerical simulations of 2D forced turbulence and Rayleigh–Bénard convection (RBC). Across common filters, we robustly discover closures of the same form for momentum and heat fluxes. These closures depend on nonlinear combinations of gradients of filtered variables (velocity, temperature), with constants that are independent of the fluid/flow properties and depend only on filter type/size. We show that these closures are the nonlinear gradient model (NGM), which is derivable analytically using Taylor-series expansions. In fact, we suggest that with common (physics-free) equation-discovery algorithms, regardless of the system/physics, the discovered closures are always consistent with the Taylor series. Like previous studies, we find that large-eddy simulations with NGM closures are unstable, despite significant similarities between the true and NGM-predicted fluxes (pattern correlations > 0.95). We identify two shortcomings as reasons for these instabilities: in 2D, NGM produces zero kinetic energy transfer between resolved and subgrid scales, lacking both diffusion and backscattering; in RBC, backscattering of potential energy is poorly predicted. Moreover, we show that SGS fluxes diagnosed from the data presumed to be the "truth" for discovery depend on the filtering procedure and are not unique. Accordingly, to learn accurate, stable closures from high-fidelity data in future work, we propose several ideas involving physics-informed libraries, loss functions, and metrics. These findings are relevant beyond turbulence to closure modeling of any multi-scale system. |
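For reference, the nonlinear gradient (Clark) model that the discovered closures reduce to has, for a box or Gaussian filter of width $\Delta$, the leading-order Taylor-series form (a standard result, not restated in the abstract):

```latex
\tau_{ij} \;=\; \overline{u_i u_j} - \bar{u}_i\,\bar{u}_j
\;\approx\; \frac{\Delta^{2}}{12}\,
\frac{\partial \bar{u}_i}{\partial x_k}\,
\frac{\partial \bar{u}_j}{\partial x_k},
```

with an analogous expression for the SGS heat flux in terms of filtered velocity and temperature gradients; the factor $\Delta^{2}/12$ is the second moment of the filter kernel, which is why the discovered constants depend only on filter type and size.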
Sunday, November 19, 2023 5:53PM - 6:06PM |
J29.00007: Removing the log-layer mismatch in wall-modeled LES using near-wall erroneous flows via physics-informed neural network Soju Maejima, Soshi Kawai In this talk, we propose a physics-informed neural network that corrects near-wall erroneous flows so as to accurately drive the equilibrium wall model using the first off-wall grid point as input. The proposed neural networks predict the numerical errors in the flow variables near the wall, which are then used to supply physically correct values to the wall model. The input and output features of the neural networks are chosen based on near-wall turbulence physics for robustness across varying Reynolds and Mach number conditions. Tests on the zero-pressure-gradient flat-plate turbulent boundary layer show that the log-layer mismatch, which plagues the results of conventional WMLES, is removed. Furthermore, the probability density distributions of the wall shear stress predicted by the proposed method show better agreement with the reference. |
Sunday, November 19, 2023 6:06PM - 6:19PM |
J29.00008: Resolvent analysis of turbulent flows over progressive surface waves Ziyan Ren, Anqing Xuan, Lian Shen Understanding the mechanisms of air-sea interaction is crucial for numerous geophysical, environmental, and engineering applications. In this study, we propose a reduced-order model, based on resolvent decomposition, for turbulent flows over progressive surface waves. The model builds on the linearized Navier-Stokes equations with a boundary-fitted grid. Large-eddy simulations (LES) at various wave ages are performed to obtain the two-dimensional time-averaged flows. The resolvent analysis reveals the energy amplification and the associated coherent structural response to turbulent fluctuations over a wide range of frequencies. Using this method, we are able to examine the airflow response under various wave conditions and describe the flow characteristics with a low-rank model. |
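In the standard resolvent formulation underlying such a model (the abstract does not spell out the operators; the notation here is generic), fluctuations about the time-averaged flow are treated as the linear response to a forcing that lumps the nonlinear terms:

```latex
(-\,i\omega I - \mathcal{L}_{\bar{q}})\,\hat{q}(\omega) = \hat{f}(\omega)
\;\;\Longrightarrow\;\;
\hat{q} = \mathcal{H}(\omega)\,\hat{f},
\qquad
\mathcal{H}(\omega) = \sum_{j} \sigma_{j}(\omega)\,\psi_{j}\,\phi_{j}^{*},
```

where $\mathcal{L}_{\bar{q}}$ is the Navier-Stokes operator linearized about the mean $\bar{q}$, and the singular value decomposition of the resolvent $\mathcal{H}$ yields gains $\sigma_j$ with response modes $\psi_j$ and forcing modes $\phi_j$; truncating to the leading modes gives the low-rank model.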
Sunday, November 19, 2023 6:19PM - 6:32PM |
J29.00009: Convective parametrization of dry atmospheric boundary layer by generative machine learning model Joerg Schumacher, Florian Heyder, Juan Pedro Mellado Even though global simulations of the Earth system on monthly timescales now reach resolutions of 2.5 kilometers, essential turbulent transport processes in the lowest part of the atmosphere still have to be modeled. Here, we implement a machine-learning-based mass-flux parametrization of the subgrid-scale heat flux for a shear-free dry atmospheric boundary layer using a generative adversarial network. The training data and predictive capability of the algorithm are enhanced by incorporating the physics of boundary-layer growth following from classical mixed-layer similarity theory. Our model compares successfully with standard mass-flux parametrizations. It is additionally found to correctly reproduce the intermittent fluctuations of the convective buoyancy flux and the horizontal organisation of mesoscale turbulence. |
© 2024 American Physical Society