Bulletin of the American Physical Society
77th Annual Meeting of the Division of Fluid Dynamics
Sunday–Tuesday, November 24–26, 2024; Salt Lake City, Utah
Session A15: Low-Order Modeling and Machine Learning in Fluid Dynamics: Methods I |
Chair: Ricardo Vinuesa, KTH Royal Institute of Technology | Room: 155 E |
Sunday, November 24, 2024 8:00AM - 8:13AM |
A15.00001: Adaptive Local Domain Decomposition for Learning Large-Scale Multi-physics Numerical Simulations Wenzhuo Xu, Christopher McComb, Noelia Grande Gutiérrez Applying physics-informed neural networks (PINNs) in engineering scenarios involving millions of elements presents a significant computational challenge, as the complexity and diversity of the physics can exceed the capacity of machine learning (ML) models and available GPU memory. Recently, methods have been developed for PINNs that segment the input domain and perform concurrent inference on smaller subdomains, improving computational efficiency. Building on this concept, we introduce the Adaptive Local Domain Decomposition (ALDD) method, which enhances performance in two primary ways: (1) it employs domain decomposition to boost the efficiency of training and inference, achieving near-linear reductions in computation time as parallel GPUs are added; (2) it uses adaptive domain scheduling to divide the physics domain into subdomains according to their physical features, applying specialized sub-ML models to each subdomain. We use the energy spectrum of each subdomain, combined with k-means clustering of the spectra's pairwise Wasserstein distances, to determine the most effective distribution of submodels across the subdomains according to the local physical features. This approach shows superior performance compared to other partitioning methods. With ALDD, we can extend the prediction capabilities of modern ML methods for forward problems on discretized domains with over 6 million elements and achieve over 99.6% accuracy on complex physical problems such as turbulent boundary layer flow. |
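The spectrum-distance scheduling step described above can be illustrated with a toy numpy sketch. This is not the authors' implementation: the function names are invented, the spectra are crude radial binnings, and k-medoids (a k-means variant that works directly on a precomputed distance matrix) stands in for whatever clustering the paper actually uses.

```python
import numpy as np

def energy_spectrum(field):
    """Radially binned power spectrum of a 2-D field, normalised to sum to 1."""
    ny, nx = field.shape
    power = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    ky, kx = np.meshgrid(np.arange(ny) - ny // 2, np.arange(nx) - nx // 2,
                         indexing="ij")
    k = np.hypot(kx, ky).astype(int)
    spec = np.bincount(k.ravel(), weights=power.ravel())
    return spec / spec.sum()

def wasserstein_1d(p, q):
    """W1 distance between two normalised spectra on a unit-spaced grid."""
    n = max(p.size, q.size)
    p = np.pad(p, (0, n - p.size))
    q = np.pad(q, (0, n - q.size))
    return np.abs(np.cumsum(p) - np.cumsum(q)).sum()

def cluster_subdomains(fields, k, n_iter=50, seed=0):
    """Group subdomains by spectral similarity via k-medoids on pairwise W1."""
    specs = [energy_spectrum(f) for f in fields]
    n = len(specs)
    D = np.array([[wasserstein_1d(specs[i], specs[j]) for j in range(n)]
                  for i in range(n)])
    medoids = np.random.default_rng(seed).choice(n, size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(D[:, medoids], axis=1)
        new = []
        for c in range(k):
            members = np.flatnonzero(labels == c)
            if members.size == 0:        # keep the old medoid if a cluster empties
                new.append(medoids[c])
                continue
            new.append(members[np.argmin(D[np.ix_(members, members)].sum(axis=1))])
        new = np.asarray(new)
        if np.array_equal(new, medoids):
            break
        medoids = new
    return labels

# toy "subdomains": two smooth fields and two noise fields
x = np.linspace(0, 2 * np.pi, 32)
xx, yy = np.meshgrid(x, x)
rng = np.random.default_rng(1)
fields = [np.sin(xx + yy), np.sin(2 * (xx + yy)),
          rng.standard_normal((32, 32)), rng.standard_normal((32, 32))]
labels = cluster_subdomains(fields, k=2)
```

Each cluster would then be served by its own specialized submodel.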
Sunday, November 24, 2024 8:13AM - 8:26AM |
A15.00002: Robust dominant balance analysis for identifying governing flow physics in experimental settings Samuel Ahnert, Christian Lagemann, Esther Lagemann, Steven L Brunton Deploying unsupervised learning in the context of system identification, the data-driven dominant balance algorithm extracts a low-dimensional representation of the dominant physical processes underlying high-dimensional fluid flow data. While the original implementation proved to be successful in extracting sparse representations from time-averaged DNS data, it is not able to adequately identify governing patterns in more challenging conditions, e.g., instantaneous dynamics, noisy measurement data, and measurement uncertainties. We address these issues by taking an ensemble approach to stabilize the unsupervised learning process and provide a measure of uncertainty quantification. Moreover, we leverage the integral form of the governing equations to cast a "weak form" of the problem which lends additional noise robustness to our algorithm and circumvents the challenging determination of gradients in experimental settings. The effectiveness of this novel formulation is demonstrated on a variety of numerical and experimental test cases including transitional boundary layer flows, wall-bounded turbulence with favorable and adverse pressure gradients, and shock-dominated configurations such as the viscous Burgers' equation. |
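The noise-robustness argument for the weak form can be sketched in a few lines: integrating the governing term against a compactly supported test function and integrating by parts moves the derivative onto the smooth test function, so the noisy data are never differentiated. The specific signal, noise level, and polynomial bump below are our own illustrative choices, not the authors' setup.

```python
import numpy as np

def integrate(f, x):
    """Trapezoidal rule (avoids the np.trapz/np.trapezoid naming change)."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

def weak_projection(u, x, phi_prime):
    """<phi, du/dx> without differentiating the (noisy) data u:
    integration by parts moves the derivative onto the test function."""
    return -integrate(phi_prime * u, x)

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 2001)
u_noisy = np.sin(3 * x) + 0.01 * rng.standard_normal(x.size)

# polynomial bump: phi and phi' both vanish at the boundary
phi = (1 - x**2) ** 2
phi_prime = -4 * x * (1 - x**2)

estimate = weak_projection(u_noisy, x, phi_prime)
exact = integrate(phi * 3 * np.cos(3 * x), x)   # <phi, u'> from the clean signal
```

Because the noise enters only through an integral against a smooth weight, its contribution averages out instead of being amplified by differentiation.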
Sunday, November 24, 2024 8:26AM - 8:39AM |
A15.00003: SGD-SINDy: Stochastic Gradient-Descent based Framework for Flexible System Identification Amirhossein Arzani, Siva Viknesh, Younes Tatari We propose a novel methodology within the Sparse Identification of Nonlinear Dynamical Systems (SINDy) framework that leverages stochastic gradient descent (SGD) optimization (SGD-SINDy) to enhance the identification of parameters in dynamical systems and reduce the dependency on a prior library of candidate terms. Unlike the traditional SINDy method, which typically requires parameters to appear linearly in a predefined candidate library, our framework finds the parameters more efficiently and accurately through a flexible global optimization setting. That is, SGD-SINDy does not require prior knowledge of nonlinear parameters such as frequencies in trigonometric functions or bandwidths in exponential functions. Importantly, our approach also alleviates the need for extensive hyperparameter tuning by optimizing hyperparameters simultaneously during the process. We demonstrate the efficacy of our methodology across various dynamical systems, including coupled ordinary differential equations (ODEs) such as harmonic oscillators, Van der Pol oscillators, the chaotic ABC flow, and reaction kinetics. Our results show substantial improvements in parameter identification when nonlinear features exist, highlighting the potential of SGD optimization to advance SINDy-based analyses. |
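The key idea — treating a nonlinear library parameter such as a frequency as trainable alongside the linear coefficients — can be illustrated with a minimal 1-D sketch. The model form, learning rates, and initialization are invented for illustration, and plain full-batch gradient descent with hand-written gradients stands in for the paper's SGD framework.

```python
import numpy as np

def fit_sgd_sindy(t, x, dxdt, n_iter=3000, lr_theta=1e-2, lr_omega=1e-3):
    """Gradient-descent fit of dx/dt = theta1*x + theta2*sin(omega*t),
    with the nonlinear parameter omega trained jointly with theta."""
    theta1, theta2, omega = 0.0, 0.0, 2.9   # omega starts near, not at, truth
    for _ in range(n_iter):
        s = np.sin(omega * t)
        r = theta1 * x + theta2 * s - dxdt          # residual
        theta1 -= lr_theta * np.mean(2 * r * x)
        theta2 -= lr_theta * np.mean(2 * r * s)
        omega -= lr_omega * np.mean(2 * r * theta2 * t * np.cos(omega * t))
    return theta1, theta2, omega

t = np.linspace(0.0, 10.0, 500)
x = np.cos(t)
dxdt = -0.5 * x + 2.0 * np.sin(3.0 * t)   # synthetic "measured" derivative
theta1, theta2, omega = fit_sgd_sindy(t, x, dxdt)
```

A classical SINDy regression would need sin(3t) in its library up front; here the frequency is discovered by descent.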
Sunday, November 24, 2024 8:39AM - 8:52AM |
A15.00004: At the intersection of Reduced Order Modelling and Discrete Loss Minimization: Can complicated, high-Re incompressible flow models become edge computable? Sean R Breckling, Jacob Murri, Clifford E Watkins, Caleb C Monoran, James Watts Digital twin technology is being adopted widely across a number of research communities. As with any new digital tool, its usefulness is often directly proportional to its speed. As a result, large-scale direct 2D and 3D incompressible flow calculations are not widely considered in digital twins, regardless of the application space. There have been a number of advancements in model reduction methodologies in recent years, including submerged surrogates and autodifferentiation-based solvers. Herein we present a hybrid technique capable of accurately assimilating temperature and pressure data into a generalizable convection-driven flow model without the use of PINNs. |
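The generic discrete-loss-minimization idea behind this kind of assimilation — minimize a discretized PDE residual plus an observation-mismatch penalty — can be sketched on a toy 1-D Poisson problem. Everything here (the equation, grid, weights, and the hand-written gradient standing in for autodifferentiation) is an illustrative assumption, not the authors' model.

```python
import numpy as np

def assimilate(f, obs_idx, obs_val, n=11, lam=10.0, lr=5e-6, n_iter=20000):
    """Gradient descent on a discrete least-squares objective:
    PDE residual ||D2 u - f||^2 plus an observation penalty at obs_idx."""
    h = 1.0 / (n - 1)
    u = np.zeros(n)
    for _ in range(n_iter):
        r = (u[:-2] - 2 * u[1:-1] + u[2:]) / h**2 - f[1:-1]  # interior residual
        grad = np.zeros(n)
        grad[:-2] += r / h**2
        grad[1:-1] += -2 * r / h**2
        grad[2:] += r / h**2
        grad[obs_idx] += lam * (u[obs_idx] - obs_val)
        grad[0] = grad[-1] = 0.0     # homogeneous Dirichlet ends stay pinned
        u -= lr * grad
    return u

xg = np.linspace(0.0, 1.0, 11)
f = -np.pi**2 * np.sin(np.pi * xg)   # manufactured forcing, solution sin(pi*x)
u = assimilate(f, obs_idx=5, obs_val=1.0)   # observation consistent with the PDE
```

The same structure — residual term plus data term, minimized by a gradient method — carries over when the residual comes from a discretized flow model.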
Sunday, November 24, 2024 8:52AM - 9:05AM |
A15.00005: Data-Driven Resolvent Analysis with Residual Information Katherine Cao, Matthew J Colbrook, Benjamin Herrmann, Steven L Brunton, Beverley J McKeon Data-driven resolvent analysis (Herrmann et al., JFM, 2021) has seen success in performing resolvent analysis in an equation-free manner, utilizing dynamic mode decomposition (DMD, Schmid, JFM, 2010) to identify the most responsive forcing and receptive states. Towards the application of data-driven resolvent analysis for turbulent flows and multiphysics problems, treatment of inherent nonlinearity and error control is necessary to capture the underlying physics. In this work, we incorporate residual dynamic mode decomposition (ResDMD, Colbrook et al., JFM, 2023) into data-driven resolvent analysis to reduce spectral pollution in the DMD-learned linear operator. The proposed approach is applied to transitional channel flow data. In addition, we investigate the connections between the learned DMD linear operator spectrum and Orr-Sommerfeld and Squire pseudospectra. |
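The equation-free pipeline underlying data-driven resolvent analysis can be sketched in numpy: fit a linear propagator from snapshot pairs (exact DMD), form the resolvent of its continuous-time generator, and take its SVD to read off forcing and response modes with their gains. This is a bare-bones sketch without the ResDMD residual control the abstract adds; function names are our own.

```python
import numpy as np

def dmd_operator(X, Xp):
    """Least-squares propagator A with Xp ≈ A X (exact DMD, full rank)."""
    return Xp @ np.linalg.pinv(X)

def resolvent_modes(A, omega, dt):
    """SVD of the resolvent (i*omega*I - L)^-1 of the continuous-time
    generator L = log(A)/dt (assumes A diagonalisable, dt small)."""
    evals, V = np.linalg.eig(A)
    L = V @ np.diag(np.log(evals) / dt) @ np.linalg.inv(V)
    H = np.linalg.inv(1j * omega * np.eye(A.shape[0]) - L)
    U, s, Vh = np.linalg.svd(H)
    return U, s, Vh        # response modes, gains, forcing modes

# snapshots from a damped oscillator with generator [[-0.1, 1], [-1, -0.1]]
dt = 0.1
rot = np.array([[np.cos(dt), np.sin(dt)], [-np.sin(dt), np.cos(dt)]])
A_true = np.exp(-0.1 * dt) * rot
rng = np.random.default_rng(0)
X = rng.standard_normal((2, 10))
A = dmd_operator(X, A_true @ X)
U, s, Vh = resolvent_modes(A, omega=1.0, dt=dt)
```

Forcing at the oscillator's natural frequency yields the expected large gain 1/0.1 = 10 for the leading mode.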
Sunday, November 24, 2024 9:05AM - 9:18AM |
A15.00006: KOopman Operator Learning: A KOOL model for long-term stable prediction of dynamical systems Dibyajyoti Chakraborty, Conrad S Ainslie, Derek F DeSantis, Arvind T Mohan, Ashesh K Chattopadhyay, Romit Maulik Long-term stable prediction is crucial for surrogate modeling of dynamical systems. Typical machine learning models are accurate over short ranges but either diverge to infinity or decay and lose the dynamics after multiple autoregressive steps. To address this challenge, we propose a Koopman-based deep learning model that effectively learns the underlying invariant statistics, providing improved accuracy and dynamic stability for long-term predictions. We use state-of-the-art deep learning techniques such as Adaptive Fourier Neural Operators (AFNOs) for function space approximations and a notably smaller nonlinear approximation for the Koopman operator. Moreover, we implement a novel loss function that is used to alternately learn the function spaces and the Koopman operator. During inference, the dynamics of the state are governed only by the autoregressive application of the Koopman operator. We demonstrate the efficacy of our approach on chaotic systems such as the Kuramoto-Sivashinsky equation and 2D turbulence. Furthermore, extensive experiments on climate datasets showcase its ability to forecast key climate variables with greater precision, and over longer periods, than other machine learning techniques. We obtain high stability without using any forcing variables or induced biases. Our results highlight the potential of Koopman-based deep learning models as a powerful tool for enhancing the reliability of long-term climate surrogate models, offering valuable insights for climate science. |
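The inference step — nonlinear dynamics advanced purely by repeated application of a linear operator on lifted observables — can be shown with a textbook toy example (not the KOOL model itself; the lifting and operator below are chosen by hand rather than learned).

```python
import numpy as np

def koopman_rollout(K, g0, steps):
    """Autoregressive inference: the latent state evolves only via K."""
    traj = [np.asarray(g0, dtype=float)]
    for _ in range(steps):
        traj.append(K @ traj[-1])
    return np.array(traj)

# toy system x_{k+1} = mu*x_k, lifted with observables g(x) = (x, x^2):
# the nonlinear observable x^2 also evolves linearly, with eigenvalue mu^2
mu = 0.9
K = np.diag([mu, mu**2])
traj = koopman_rollout(K, g0=[2.0, 4.0], steps=5)   # x0 = 2, so x0^2 = 4
```

In the KOOL setting the lifting is learned (via AFNOs) and K is fit from data, but the rollout loop is exactly this.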
Sunday, November 24, 2024 9:18AM - 9:31AM |
A15.00007: Data-Driven Dimension Reduction Through Symmetry-Promoting Regularization Nicholas Zolman, Samuel E Otto, J. Nathan Kutz, Steven L Brunton Reduced order modeling of complex phenomena has become an increasingly necessary tool for developing digital twins, designing controllers, and rapidly performing studies. Active subspace approaches have been shown to be lightweight and powerful methods for discovering the dominant linear subspaces for science and engineering applications in the low-data limit. However, these approaches either (1) require access to gradients of the quantities of interest, (2) need sufficient quantities of data to estimate those gradients, or (3) rely on linear regression to identify a one-dimensional subspace. In this work, we demonstrate that the presence of an active subspace is equivalent to the presence of a translationally-invariant subspace. By modeling quantities of interest through a convex, symmetry-regularized optimization, we demonstrate that we can discover maximally invariant subspaces and their active subspace counterparts directly from data without explicit access to gradients. In particular, we explore the effectiveness of our approach on a variety of applications in the low-data limit without access to gradients. |
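For context, the classical gradient-based active subspace construction that this work avoids can be sketched as an eigendecomposition of the gradient outer-product matrix. The ridge function and sample sizes below are illustrative assumptions.

```python
import numpy as np

def active_subspace(grads):
    """Eigenvectors of C = E[grad f grad f^T], sorted by decreasing eigenvalue."""
    C = grads.T @ grads / grads.shape[0]
    evals, evecs = np.linalg.eigh(C)          # ascending order
    return evals[::-1], evecs[:, ::-1]        # flip to descending

# ridge function f(x) = sin(w^T x): every gradient is parallel to w,
# so the active subspace is exactly span{w}
w = np.array([0.6, 0.8])                      # unit vector
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))
grads = np.cos(X @ w)[:, None] * w            # analytic gradient of f
evals, evecs = active_subspace(grads)
```

When analytic gradients like these are unavailable, the symmetry-regularized formulation in the abstract recovers the same subspace from function values alone.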
Sunday, November 24, 2024 9:31AM - 9:44AM |
A15.00008: Dynamic Stall Estimation with Transition Networks Ricardo Cavalcanti Linhares, Karen Mulleners, Ellen Kathryn Longmire, Melissa A Green Estimating dynamic stall under in-flight conditions presents significant challenges and requires advanced aerodynamic prediction tools. This study focuses on enhancing dynamic stall estimation by predicting lift for a NACA0015 airfoil experiencing intermittent, highly separated flow conditions at a Reynolds number of 5.5×10^5. We present a data-driven approach using surface pressure sensor data as input. The airfoil is pitched around a static stall angle of α_ss = 20° with an 8° pitching amplitude. Particle Image Velocimetry (PIV) provides detailed visualization of flow structures, aiding in the refinement of the prediction model. The research covers six experimental cases with reduced frequencies ranging from 0.025 to 0.15. A weighted-average transition network is applied to data from a varying number of surface pressure taps. This study investigates the role of clustering in distinguishing different flow states and how reducing the number of clusters affects dynamic stall lift estimation accuracy. It aims to balance effective state differentiation with cluster reduction. Additionally, it explores how improvements in data clustering and network node reduction impact overall accuracy, phase-averaged performance, and the ability to capture variations between cycles. |
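The transition-network machinery can be sketched in a few lines: cluster the sensor snapshots into discrete flow states, count transitions between consecutive states, and use the row-normalised transition matrix to form weighted-average predictions. The state sequence and lift values below are invented placeholders, not the experimental data.

```python
import numpy as np

def transition_matrix(labels, n_states):
    """Row-normalised transition counts between consecutive flow states."""
    P = np.zeros((n_states, n_states))
    for a, b in zip(labels[:-1], labels[1:]):
        P[a, b] += 1.0
    rows = P.sum(axis=1, keepdims=True)
    return np.divide(P, rows, out=np.zeros_like(P), where=rows > 0)

def expected_lift(P, state, lift_per_state):
    """Weighted-average prediction: expected lift one transition ahead."""
    return float(P[state] @ lift_per_state)

labels = [0, 1, 2, 0, 1, 2, 0]          # toy sequence of clustered flow states
P = transition_matrix(labels, n_states=3)
pred = expected_lift(P, state=0, lift_per_state=np.array([0.5, 1.0, 0.2]))
```

Reducing the number of clusters shrinks P, trading state resolution against estimation robustness — the balance the study investigates.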
Sunday, November 24, 2024 9:44AM - 9:57AM |
A15.00009: A Y-network encoder-decoder model for predicting vorticity fields from kinetic energy spectra in 2D decaying turbulence Mrigank Dhingra, Omer San, Anne E Staples We present an encoder-decoder Y-network model as a lifting operator for reconstructing 2D isotropic decaying turbulent vorticity fields from their instantaneous energy spectra. Two initial specialized models were trained separately to capture different features of the turbulence, focusing on either small-scale or large-scale structures. The encoders from these models were then leveraged in a transfer learning framework that combined their outputs through a new decoder in a Y-network architecture. This innovative combination led to a significant improvement in accuracy, with the overall mean squared error (MSE) and mean absolute error (MAE) reduced by 65% and 40%, respectively, for a suite of reconstructed vorticity fields. Previously, we implemented an encoder-decoder model using stacked LSTM layers to serve as an energy spectrum-to-velocity field lifting operator for a 1D Burgers' turbulence model. We integrated that model into a coarse projective integration multiscale simulation scheme that treated the energy spectrum as the coarse (slow) variable and the velocity field as the fine (fast) variable and found that it accelerated the evolution of the flow to statistical stationarity by a factor of 443 (Dhingra et al., Phys. Fluids, 2024). Preliminary tests show that the current model can be integrated into similar multiscale simulation schemes to enable accelerated simulations of 2D turbulent vorticity fields. |
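The Y-shaped topology — two specialised encoders whose latents merge into one decoder — can be sketched as a forward pass with tiny MLPs. Layer sizes, activations, and random weights are purely illustrative; the real model is trained, and its branches target small- versus large-scale structures.

```python
import numpy as np

def mlp(x, Ws):
    """Tiny MLP: tanh between linear layers, linear output."""
    for W in Ws[:-1]:
        x = np.tanh(x @ W)
    return x @ Ws[-1]

def y_network(spectrum, enc_small, enc_large, dec):
    """Two specialised encoders; their latents are concatenated and decoded."""
    z = np.concatenate([mlp(spectrum, enc_small), mlp(spectrum, enc_large)])
    return mlp(z, dec)

rng = np.random.default_rng(0)
enc_small = [rng.standard_normal((32, 16)), rng.standard_normal((16, 8))]
enc_large = [rng.standard_normal((32, 16)), rng.standard_normal((16, 8))]
dec = [rng.standard_normal((16, 64)), rng.standard_normal((64, 256))]
field = y_network(rng.standard_normal(32), enc_small, enc_large, dec)  # 32-bin
# spectrum in, flattened 256-point "vorticity field" out
```

Transfer learning enters by freezing the two pretrained encoder branches and training only the shared decoder.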
Sunday, November 24, 2024 9:57AM - 10:10AM |
A15.00010: Neural Operator Based Coarse-Grid Navier-Stokes / Interface Tracking Model Development Anna Iskhakova, Arsen S Iskhakov, Nam T Dinh, Igor A Bolotnov Direct numerical simulations (DNS) combined with an interface tracking method enable the investigation of interface dynamics in multiphase flows. However, these simulations are typically limited to small, millimeter-sized domains and/or short simulation times. To overcome these limitations, a novel data-driven approach has been proposed. This approach facilitates coarse-grid (CG) Navier-Stokes modeling using the level-set method, allowing for extended domain sizes and/or longer simulation periods. Numerical simulations are performed using the finite element code PHASTA, which has been validated for various two-phase flows and geometries, such as bubbly flow through a spacer grid with mixing vanes, flow regime transition in a pipe, and two-phase flow near the pickoff ring of a steam separator. The Fourier neural operator is trained on DNS datasets generated by PHASTA. The trained model is then incorporated into CG PHASTA simulations as a torch script to provide solutions for the level-set function at each time step. Several tests have been conducted, and the observed speed-up has been documented. Future work aims to extend this workflow to predict temperature variations. |
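The hybrid structure — a time loop in which the level-set update can come either from the fine-grid solver or from a trained operator — can be shown schematically. The 1-D upwind advection step below is a stand-in for PHASTA's level-set solve, not anything from the actual workflow, and `step_fn` is where a trained neural operator would be swapped in.

```python
import numpy as np

def fine_step(phi, c, dt, dx):
    """Upwind level-set advection step (stand-in for the fine-grid solver)."""
    return phi - c * dt / dx * (phi - np.roll(phi, 1))   # periodic, c > 0

def evolve(phi, steps, step_fn, **kw):
    """Time loop where step_fn is the solver or a trained surrogate."""
    for _ in range(steps):
        phi = step_fn(phi, **kw)
    return phi

phi0 = np.zeros(16)
phi0[4:8] = 1.0                       # toy interface indicator
phi3 = evolve(phi0, steps=3, step_fn=fine_step, c=1.0, dt=0.1, dx=0.1)
```

With CFL = 1 the upwind step shifts the profile exactly one cell per step, which makes the loop easy to verify; the paper's version replaces `fine_step` with an FNO called as a torch script inside CG PHASTA.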
Sunday, November 24, 2024 10:10AM - 10:23AM |
A15.00011: Learning Acoustic Scattering in Turbulent Stratified Flows With Neural Operators Christophe Millet Recent advances in machine learning have demonstrated that neural networks can approximate operators using specialized architectures known as neural operators. In this work, we explore the use of Fourier Neural Operators (FNOs) to learn the physics of wave propagation in randomly layered media, mapping the space of random sound speed fields to acoustic waveforms. This approach is tested by predicting the scattering of broadband and narrowband acoustic wave packets off a stochastic gravity wave field, a key factor in atmospheric infrasound variability. Gravity wave fields are computed using a stochastic multiwave series that recovers the usual vertical wavenumber power spectral density and produces intermittency. Using spectral analysis tools, we demonstrate that FNOs can approximate physically consistent scattered pressure fields but fail to capture fine details due to the truncation of high-frequency modes in each Fourier layer. Inspired by reduced-order modeling, we propose a variant of FNOs that learns the optimal number of modes for representing the integral kernel in the Fourier layers. These modes enable the FNOs to capture intricate patterns related to the interaction between the incoming infrasound and vertically distributed small-scale structures in the sound speed profile. When applied to the inverse problem of estimating gravity wave fields from acoustic waveforms, this approach can be orders of magnitude more efficient compared to traditional finite-difference solvers. |
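The mode truncation blamed above for the loss of fine detail is easy to see in a single FNO-style spectral layer: transform, multiply the lowest k_max modes by learned complex weights, discard the rest, transform back. This 1-D sketch uses identity weights for clarity; a trained layer would learn them (and, in the proposed variant, learn k_max itself).

```python
import numpy as np

def fourier_layer(u, weights, k_max):
    """One FNO-style spectral layer: keep and reweight the lowest k_max
    Fourier modes, zero the rest, and transform back."""
    u_hat = np.fft.rfft(u)
    out_hat = np.zeros_like(u_hat)
    out_hat[:k_max] = weights[:k_max] * u_hat[:k_max]
    return np.fft.irfft(out_hat, n=u.size)

N = 64
x = np.arange(N) / N
u = np.sin(2 * np.pi * x) + 0.3 * np.sin(2 * np.pi * 20 * x)
out = fourier_layer(u, weights=np.ones(33, dtype=complex), k_max=4)
# the mode-20 component lies above k_max and is discarded entirely
```

Whatever the weights, structure above the cutoff is unrecoverable — which is why the scattering off small-scale sound-speed structures motivates learning the number of retained modes.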