Bulletin of the American Physical Society
77th Annual Meeting of the Division of Fluid Dynamics
Sunday–Tuesday, November 24–26, 2024; Salt Lake City, Utah
Session A12: Low-Order Modeling and Machine Learning in Fluid Dynamics: General I |
Chair: Samuel Grauer, Pennsylvania State University; Room: 155 B |
Sunday, November 24, 2024 8:00AM - 8:13AM |
A12.00001: Single-snapshot machine learning for super-resolution analysis of turbulence Kai Fukami, Kunihiko Taira While modern machine-learning techniques are generally considered data-hungry, this may not be the case for turbulence, as each of its snapshots is likely to hold a greater amount of information than the images studied in image science. In this talk, we discuss how nonlinear machine learning can efficiently extract physical insights even from a single snapshot of a turbulent vortical flow. We perform machine-learning-based super-resolution analysis, which reconstructs a high-resolution field from low-resolution data, for the example of two-dimensional decaying turbulence. We find that vortical structures across a range of Reynolds numbers can be reconstructed from grossly coarse data using a carefully designed convolutional neural network trained with flow tiles sampled from only a single snapshot. Our results show that nonlinear machine learning can leverage scale-invariant properties to efficiently learn turbulent flows. We further show that training data for turbulent flows can be efficiently collected from a single snapshot by incorporating prior knowledge of their statistical characteristics. |
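For readers unfamiliar with the tile-based training described in this abstract, the following is a minimal sketch of the idea in PyTorch, assuming a synthetic single snapshot, placeholder tile sizes, and a generic small CNN rather than the authors' carefully designed network:

```python
# Hedged sketch: tile-sampled CNN super-resolution from a single snapshot.
# All sizes, layer widths, and the random "snapshot" are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SRCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, coarse):
        # Upsample the coarse tile to the target resolution, then correct it.
        up = F.interpolate(coarse, scale_factor=8, mode="bicubic", align_corners=False)
        return up + self.net(up)

# One synthetic "snapshot" standing in for the single training field.
snapshot = torch.randn(1, 1, 256, 256)

def sample_tiles(field, tile=32, n=16):
    """Randomly crop fine tiles and pool them down to make coarse inputs."""
    tiles = []
    for _ in range(n):
        i = torch.randint(0, field.shape[-2] - tile, (1,)).item()
        j = torch.randint(0, field.shape[-1] - tile, (1,)).item()
        tiles.append(field[..., i:i + tile, j:j + tile])
    fine = torch.cat(tiles, dim=0)
    coarse = F.avg_pool2d(fine, 8)          # grossly coarsened input
    return coarse, fine

model = SRCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(200):
    coarse, fine = sample_tiles(snapshot)
    loss = F.mse_loss(model(coarse), fine)
    opt.zero_grad(); loss.backward(); opt.step()
```

The point illustrated is only that many training pairs can be harvested from one field by random tiling and coarsening.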
Sunday, November 24, 2024 8:13AM - 8:26AM |
A12.00002: Mesh-based Super-Resolution of Fluid Flows with Multiscale Graph Neural Networks Shivam Barwey, Pinaki Pal, Saumil S Patel, Riccardo Balin, Bethany A Lusch, Venkatram Vishwanath, Romit Maulik, Ramesh Balakrishnan A graph neural network (GNN)-based scientific machine learning framework is developed for mesh-based super-resolution of three-dimensional fluid flows. In this framework, the GNN operates on local interpretations of flow fields (it acts on local meshes of elements/cells). To facilitate GNN representations in a manner similar to spectral (or finite) element discretizations, the baseline message passing layer is modified to account for synchronization of coincident graph nodes, rendering it compatible with commonly used element-based mesh connectivities. The multiscale architecture comprises a coarse-scale processor and a fine-scale processor separated by a graph unpooling layer. The coarse-scale processor embeds a query element (alongside a set number of neighboring coarse elements) into a single latent graph representation using coarse-scale synchronized message passing over the element neighborhood, and the fine-scale processor leverages additional message passing operations on this latent graph at smaller length scales to produce the super-resolved flow. Demonstration studies are conducted using hexahedral mesh-based data produced by simulations of the Taylor-Green vortex flow (at Reynolds numbers of 1600 and 3200) performed using NekRS, Argonne's high-order spectral element flow solver. The results show that the GNN architecture is able to produce accurate super-resolved fields for a variety of model configurations. |
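The synchronization of coincident graph nodes mentioned above can be illustrated with a hedged sketch, assuming toy connectivity and feature sizes; this is not the authors' multiscale architecture, only the idea of averaging features over graph nodes that represent the same physical mesh point:

```python
# Hedged sketch of a message-passing layer with coincident-node synchronization.
import torch
import torch.nn as nn

class SyncMessagePassing(nn.Module):
    """One GNN layer: aggregate neighbor messages, then average features of
    graph nodes that share the same physical mesh point (coincident nodes)."""
    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Linear(2 * dim, dim)
        self.upd = nn.Linear(2 * dim, dim)

    def forward(self, x, edge_index, coincident_id):
        src, dst = edge_index                                  # (2, n_edges)
        m = self.msg(torch.cat([x[src], x[dst]], dim=-1))
        agg = torch.zeros_like(x).index_add_(0, dst, m)        # sum messages per node
        x = x + self.upd(torch.cat([x, agg], dim=-1))
        # Synchronization: average features over nodes sharing a physical point.
        n_groups = int(coincident_id.max()) + 1
        summed = torch.zeros(n_groups, x.shape[-1]).index_add_(0, coincident_id, x)
        counts = torch.zeros(n_groups).index_add_(0, coincident_id, torch.ones(x.shape[0]))
        return summed[coincident_id] / counts[coincident_id].unsqueeze(-1)

# Toy usage: 6 graph nodes from two elements, where nodes 2 and 3 are coincident.
x = torch.randn(6, 8)
edge_index = torch.tensor([[0, 1, 2, 3, 4, 5], [1, 2, 0, 4, 5, 3]])
coincident_id = torch.tensor([0, 1, 2, 2, 3, 4])
out = SyncMessagePassing(8)(x, edge_index, coincident_id)
```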
Sunday, November 24, 2024 8:26AM - 8:39AM |
A12.00003: Physics-informed image inpainting for fluid flow reconstruction Alvaro Moreno Soto, Manuel Soler, Stefano Discetti Image inpainting is the process of reconstructing missing or masked regions in an image. Typically, these techniques take an input image partially covered by a mask blocking a certain area to be recovered. The output image must faithfully reproduce the available information of the input image while remaining consistent in the area to be reconstructed. For fluid flow applications, a direct analogy can be made between an image in RGB color code and a flow with UVP (2D velocity and pressure) or UVW (3D velocity) components. We leverage image inpainting techniques to predict and recover the missing parts of the flow based on the surrounding available information. The masked area can be filled with flow features adhering to physical constraints, ensuring that the flow reconstruction is consistent with viable physical behavior. Physics-informed image inpainting is highly relevant for experimental fluid dynamicists facing visual limitations from setup constraints and for other fields where fluid information is only partially available, such as weather reconstruction based on ground weather stations or partially blanked satellite data. |
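A minimal sketch of one way such a physics constraint could enter the reconstruction, assuming a 2D incompressible flow on a uniform grid: the loss below combines data fidelity outside the mask with a divergence-free penalty inside it. The function, shapes, and weights are illustrative assumptions, not the authors' implementation:

```python
# Hedged sketch of a physics-informed inpainting loss.
import torch
import torch.nn.functional as F

def inpainting_loss(pred, target, mask, dx=1.0, lam=0.1):
    """pred/target: (B, 2, H, W) velocity (u, v); mask: (B, 1, H, W), 1 where data is missing.
    Assumes a uniform grid with spacing dx in both directions."""
    data_term = F.mse_loss(pred * (1 - mask), target * (1 - mask))
    u, v = pred[:, 0], pred[:, 1]
    # Central-difference divergence du/dx + dv/dy on interior points.
    du_dx = (u[:, 1:-1, 2:] - u[:, 1:-1, :-2]) / (2 * dx)
    dv_dy = (v[:, 2:, 1:-1] - v[:, :-2, 1:-1]) / (2 * dx)
    div = du_dx + dv_dy
    phys_term = (div ** 2 * mask[:, 0, 1:-1, 1:-1]).mean()
    return data_term + lam * phys_term

# Toy usage with a square masked region.
pred = torch.randn(2, 2, 64, 64, requires_grad=True)
target = torch.randn(2, 2, 64, 64)
mask = torch.zeros(2, 1, 64, 64); mask[..., 20:40, 20:40] = 1.0
loss = inpainting_loss(pred, target, mask)
```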
Sunday, November 24, 2024 8:39AM - 8:52AM |
A12.00004: Toward guaranteed stability in low-order data-driven models with closure modeling Vamsi Krishna Chinta, Diganta Bhattacharjee, Peter Seiler, Maziar S Hemati |
Sunday, November 24, 2024 8:52AM - 9:05AM |
A12.00005: Predictive reduced order models of tsunamis via neural Galerkin projection and hierarchical pooling Shane X Coffing, John Tipton, Darren Engwirda, Arvind T Mohan Reduced order models (ROMs) often posit that the state of a dynamical system can be decomposed into temporal weights that activate spatial bases. By projecting these bases back onto the state's governing partial differential equations, we form a system of ordinary differential equations that describes how those temporal weights evolve, called a Galerkin-projection ROM (GP-ROM). New extensions of this method based on differentiable programming, known as neural GP-ROMs, can then be used to stabilize these equations to unprecedented levels of accuracy. Since neural GP-ROMs are tied to the spatial basis from which they are constructed, they typically remain valid for some range of deviations from the original basis. In the case of tsunamis, this implies that the spatial basis of one tsunami (a reference model) may also be used to describe another (a test model) nearby or of slightly varying earthquake magnitude, with acceptable accuracy. In this presentation, we describe how this can be accomplished using a hierarchical pooling method that parameterizes our neural GP-ROMs to ensure that our models provide interpretable, accurate representations of these tsunamis. We demonstrate that a neural GP-ROM built on the basis of one realization can be leveraged to model another realization's dynamics in the neighborhood and predict unseen trajectories. |
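As a hedged illustration of the ROM setting described above (not the authors' neural GP-ROM), the sketch below builds a POD basis from synthetic snapshots and fits a small network for the coefficient dynamics; note that here the dynamics are fit to data, whereas a true Galerkin projection would derive them from the governing equations:

```python
# Hedged sketch: POD basis plus learned coefficient dynamics da/dt = f(a).
import numpy as np
import torch
import torch.nn as nn

snapshots = np.random.randn(500, 64).astype(np.float32)   # (time, space) stand-in field
mean = snapshots.mean(axis=0)
U, S, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
r = 4
basis = Vt[:r]                                 # r spatial modes (POD basis)
coeffs = (snapshots - mean) @ basis.T          # temporal weights a(t)

f = nn.Sequential(nn.Linear(r, 32), nn.Tanh(), nn.Linear(32, r))   # learned da/dt
a = torch.tensor(coeffs)
dt = 1.0
opt = torch.optim.Adam(f.parameters(), lr=1e-3)
for _ in range(300):                           # fit f so forward Euler matches the data
    pred_next = a[:-1] + dt * f(a[:-1])
    loss = ((pred_next - a[1:]) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():                          # roll out a new trajectory in mode space
    state, traj = a[0], [a[0]]
    for _ in range(100):
        state = state + dt * f(state)
        traj.append(state)
reconstructed = torch.stack(traj).numpy() @ basis + mean   # back to physical space
```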
Sunday, November 24, 2024 9:05AM - 9:18AM |
A12.00006: Accelerating dynamic fluid solvers with pure data-driven deep learning models Isaac C Bannerman, Shaowu Pan, Lucy T Zhang Numerical simulations of the Navier-Stokes equations are typically expensive and complex, which drives the need to build reduced order models (ROMs) capable of predicting fluid flows. Frameworks currently exist in which deep learning-based ROMs are coupled with numerical solvers to accelerate steady-state flow computations. Extending such frameworks to dynamic problems is demanding, requiring careful handling of the coupling between the ROM and the numerical solver at each time step to prevent error accumulation. Furthermore, approaches that couple differentiable hybrid neural models with fluid solvers rely on automatic differentiation (AD), limiting integration with Computational Fluid Dynamics (CFD) platforms that do not support AD. In this study, we present a novel non-AD-dependent approach for accelerating dynamic fluid solvers by building a framework that allows the ROM prediction to be used as an initial guess for the fluid solver at any chosen time step. Careful choice of the prediction length and of how often the purely data-driven ROM's output is used as the initial guess prevents error accumulation while expediting solver convergence. The capability of the method is verified on the flow-around-a-cylinder benchmark case, where the ROM prediction is used to accelerate the fluid solver at Reynolds numbers not encountered during training of the ROM. The methodology significantly speeds up the CFD solver while preserving the dynamical behavior of the flow. The results show the potential of applying the methodology to more complex cases, such as fluid solvers in fluid-structure interaction and other time-dependent models. |
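The coupling idea of using a data-driven prediction as an initial guess can be illustrated with a toy implicit diffusion solver; the surrogate below is a stand-in analytical guess, not a trained ROM, and the whole example is an assumption-laden sketch rather than the authors' framework:

```python
# Hedged sketch: warm-starting an iterative solver with a surrogate prediction.
import numpy as np

def implicit_diffusion_step(u_prev, guess, r=50.0, tol=1e-8, max_iter=100000):
    """Backward-Euler diffusion step solved by Jacobi iteration from `guess`.
    Returns the converged field and the number of iterations used."""
    u = guess.copy()
    for k in range(max_iter):
        u_new = u.copy()
        u_new[1:-1] = (u_prev[1:-1] + r * (u[2:] + u[:-2])) / (1 + 2 * r)
        if np.max(np.abs(u_new - u)) < tol:
            return u_new, k + 1
        u = u_new
    return u, max_iter

n = 200
x = np.linspace(0.0, 1.0, n)
u = np.sin(np.pi * x)

def surrogate_guess(u_prev, r=50.0):
    """Placeholder for the data-driven ROM: an approximate analytical decay of the
    leading mode. In the talk this guess would come from a trained ROM instead."""
    return u_prev / (1.0 + r * (np.pi / (n - 1)) ** 2)

cold, it_cold = implicit_diffusion_step(u, guess=u)                    # previous field as guess
warm, it_warm = implicit_diffusion_step(u, guess=surrogate_guess(u))   # surrogate as guess
print(it_cold, it_warm)   # the better initial guess converges in fewer iterations
```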
Sunday, November 24, 2024 9:18AM - 9:31AM |
A12.00007: Abstract Withdrawn |
Sunday, November 24, 2024 9:31AM - 9:44AM |
A12.00008: Bayesian autoencoders for physics learning Liyao Mars M Gao, J. Nathan Kutz Recent progress in autoencoder-based sparse identification of nonlinear dynamics (SINDy) under $\ell_1$ constraints allows the joint discovery of governing equations and latent coordinate systems from spatio-temporal data, including simulated video frames. To address the data-driven discovery of physics in the low-data and high-noise regimes, we propose Bayesian SINDy autoencoders, which incorporate a hierarchical Bayesian spike-and-slab Gaussian Lasso prior. The Bayesian SINDy autoencoder enables the joint discovery of governing equations and coordinate systems with uncertainty estimates. To overcome the computational intractability of the hierarchical Bayesian setting, we adopt an adaptive empirical Bayesian method with stochastic gradient Langevin dynamics (SGLD), which provides a computationally tractable way of sampling the Bayesian posterior within our framework. The Bayesian SINDy autoencoder achieves better physics discovery with less data and fewer training epochs, along with valid uncertainty quantification, as suggested by our experimental studies. The Bayesian SINDy autoencoder can be applied to real video data with accurate physics discovery, correctly identifying the governing equation and providing a close estimate of standard physical constants such as gravity $g$, for example, in videos of a pendulum. We further demonstrate the power of Bayesian SINDy deep learning on a broader range of physics discovery problems, including global temperature data, synthetic Kolmogorov flow, and real video recordings of flow over a cylinder. |
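For context, the deterministic SINDy regression that the Bayesian autoencoder builds upon can be sketched as follows; the spike-and-slab prior, SGLD sampling, and autoencoder coordinates of the talk are not reproduced here, and the toy system and threshold are assumptions:

```python
# Hedged sketch: sequentially thresholded least squares (STLSQ) on a toy system.
import numpy as np

# Toy data: damped oscillator dx/dt = [-0.1 x + 2 y, -2 x - 0.1 y]
dt = 0.01
t = np.arange(0, 10, dt)
X = np.zeros((len(t), 2)); X[0] = [2.0, 0.0]
A = np.array([[-0.1, 2.0], [-2.0, -0.1]])
for k in range(len(t) - 1):
    X[k + 1] = X[k] + dt * (A @ X[k])
dX = np.gradient(X, dt, axis=0)

# Candidate library Theta(X): [1, x, y, x^2, xy, y^2]
x, y = X[:, 0], X[:, 1]
Theta = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])

# Sequentially thresholded least squares to sparsify the coefficient matrix Xi.
Xi = np.linalg.lstsq(Theta, dX, rcond=None)[0]
for _ in range(10):
    small = np.abs(Xi) < 0.05
    Xi[small] = 0.0
    for j in range(dX.shape[1]):
        big = ~small[:, j]
        if big.any():
            Xi[big, j] = np.linalg.lstsq(Theta[:, big], dX[:, j], rcond=None)[0]
print(Xi)   # should recover approximately the entries of A in the x and y rows
```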
Sunday, November 24, 2024 9:44AM - 9:57AM |
A12.00009: A multilevel flow-agnostic LES approach using deep learning Dhawal Buaria Turbulent flows in nature and engineering are characterized by an enormous range of scales, rendering their direct numerical simulation (DNS) prohibitively expensive. A well-established practical alternative is large eddy simulation (LES), which resolves the large scales while modeling the entire range of small scales. However, conventional LES closure models fall short in applications where the controlling physical processes predominantly occur at small scales, such as scalar mixing, particle transport, and chemical reactions. In this talk, we introduce a novel modeling approach for LES in which a multilevel strategy is employed instead of modeling the entire range of small scales at once. By leveraging tensor representation theory, a general functional closure is obtained in terms of filtered velocity gradients at each level, which is then represented using tensor-based neural networks. Training and performance of the model are assessed using DNS data from isotropic and wall turbulence, demonstrating significant improvement over conventional LES approaches. Extensions to other scenarios are discussed, highlighting the versatility of the approach for a broad range of flows. |
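A hedged sketch of what a tensor-based neural network closure can look like, assuming a truncated three-term basis, two invariants, and arbitrary layer sizes; the actual multilevel closure of the talk is not reproduced:

```python
# Hedged sketch: an MLP maps invariants of the filtered velocity gradient to
# coefficients of a small tensor basis, yielding a modeled subfilter stress.
import torch
import torch.nn as nn

class TensorBasisClosure(nn.Module):
    def __init__(self):
        super().__init__()
        self.coef_net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 3))

    def forward(self, grad_u):
        """grad_u: (batch, 3, 3) filtered velocity gradients -> (batch, 3, 3) stress."""
        S = 0.5 * (grad_u + grad_u.transpose(1, 2))            # strain-rate tensor
        R = 0.5 * (grad_u - grad_u.transpose(1, 2))            # rotation-rate tensor
        I1 = (S * S).sum(dim=(1, 2))                           # |S|^2 invariant
        I2 = (R * R).sum(dim=(1, 2))                           # |R|^2 invariant
        g = self.coef_net(torch.stack([I1, I2], dim=-1))       # (batch, 3) coefficients
        T1 = S
        T2 = S @ R - R @ S
        T3 = S @ S - (I1 / 3.0).view(-1, 1, 1) * torch.eye(3).expand_as(S)
        basis = torch.stack([T1, T2, T3], dim=1)               # (batch, 3, 3, 3)
        return (g.view(-1, 3, 1, 1) * basis).sum(dim=1)

tau = TensorBasisClosure()(torch.randn(8, 3, 3))   # toy batch of gradients
```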
Sunday, November 24, 2024 9:57AM - 10:10AM |
A12.00010: A hierarchical deep neural network for long-term prediction of turbulent flow Jonghyun Chae, Youngmin Jeon, Donghyun You A hierarchical neural-network method is developed to stably predict the future of turbulent flow over long periods using recursive predictions. This method introduces a reconstruction network in addition to a base prediction network, aiming to restore turbulent fluctuations in the output fields of the prediction network. The reconstruction network is trained using a statistical loss function to match the statistical properties of turbulence. In tests with turbulent channel flow, the method accurately predicted instantaneous and mean flow fields over 15 flow-through times, whereas the base prediction model became unstable. Turbulent kinetic energy budget analysis revealed significant errors in kinetic energy dissipation and production rates near the wall in predictions without the reconstruction model. Conversely, the reconstruction network provided accurate predictions of these rates, ensuring stable long-term predictions. Additionally, the method was tested on predicting unsteady wake flow over a square cylinder, yielding flow fields in good agreement with large-eddy simulation results, while predictions without the reconstruction model diverged. |
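One plausible form of the statistical loss mentioned above, matching plane-averaged mean and fluctuation intensity rather than pointwise values; the averaging directions and weights are assumptions, not the authors' exact formulation:

```python
# Hedged sketch of a statistical loss for a reconstruction network.
import torch

def statistical_loss(pred, ref, w_mean=1.0, w_rms=1.0):
    """pred/ref: (B, C, Ny, Nx); statistics gathered over batch and the x direction."""
    dims = (0, 3)                                  # assumed homogeneous directions
    mean_p, mean_r = pred.mean(dim=dims), ref.mean(dim=dims)
    rms_p = (pred - mean_p.unsqueeze(0).unsqueeze(-1)).pow(2).mean(dim=dims).sqrt()
    rms_r = (ref - mean_r.unsqueeze(0).unsqueeze(-1)).pow(2).mean(dim=dims).sqrt()
    return w_mean * (mean_p - mean_r).pow(2).mean() + w_rms * (rms_p - rms_r).pow(2).mean()

loss = statistical_loss(torch.randn(4, 3, 64, 128), torch.randn(4, 3, 64, 128))
```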