Bulletin of the American Physical Society
77th Annual Meeting of the Division of Fluid Dynamics
Sunday–Tuesday, November 24–26, 2024; Salt Lake City, Utah
Session T15: Low-Order Modeling and Machine Learning in Fluid Dynamics: Methods V |
Chair: Haithem Taha, University of California, Irvine | Room: 155 E |
Monday, November 25, 2024 4:45PM - 4:58PM |
T15.00001: Deterministic local reduced-order modelling for chaotic flows with cluster-based quantization Antonio Colanera, Luca Magri The long-term behavior of dissipative dynamical systems can be represented on manifolds that have fewer degrees of freedom than the phase space. In chaotic systems, the manifolds can have intricate shapes, which may make a single reduced-order model (ROM) inaccurate. In this work, we construct a series of local ROMs, each of which models the dynamics of a specific portion of the manifold. To do so, we quantize the manifold with cluster-based analysis, whose centroids identify local patches to create the manifold cartography. We develop both linear methods based on Galerkin projections and nonlinear methods based on Echo State Networks and Long Short-Term Memory networks. Our methodology is verified on the Kuramoto-Sivashinsky equation, and finally applied to the wake flow past the fluidic pinball in different flow regimes. Deterministic local ROMs open opportunities for efficient flow control and real-time prediction of turbulent systems. |
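As an illustration of the clustering idea above, the following is a minimal sketch (not the authors' implementation) of quantizing a snapshot manifold with k-means and fitting one local linear propagator per cluster; the names fit_local_models and predict, the use of scikit-learn's KMeans, and the linear least-squares propagators are assumptions made for this sketch.

```python
import numpy as np
from sklearn.cluster import KMeans

def fit_local_models(X, n_clusters=5):
    """X: (n_snapshots, n_dof) time-ordered snapshot matrix."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(X[:-1])
    models = {}
    for c in range(n_clusters):
        idx = np.where(km.labels_ == c)[0]
        Xc, Yc = X[idx], X[idx + 1]                 # states and their successors
        # Local linear propagator A_c with x_{k+1} ~ A_c x_k (least squares).
        A_c, *_ = np.linalg.lstsq(Xc, Yc, rcond=None)
        models[c] = A_c.T
    return km, models

def predict(x, km, models, n_steps):
    traj = [x]
    for _ in range(n_steps):
        c = km.predict(traj[-1][None, :])[0]        # which patch of the manifold?
        traj.append(models[c] @ traj[-1])           # advance with that local ROM
    return np.array(traj)
```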
Monday, November 25, 2024 4:58PM - 5:11PM |
T15.00002: Intrinsic Instabilities and Generalization Challenges in Neural Partial Differential Equations Arvind T Mohan, Ashesh K Chattopadhyay, Jonah M Miller NeuralPDEs have seen considerable success and interest, as they directly embed neural networks inside physics PDEs. Like most successful ML models, they are trained on PDE simulations as “ground truth.” An implicit assumption is that this training data represents only physics; mathematics dictates, however, that it is only a numerical approximation of the true physics. Since NeuralPDEs intimately tie networks to the mathematically rigorous governing PDEs, there is also a widespread assumption that NeuralPDEs are more trustworthy and generalizable. In this work, we rigorously test these assumptions, using established ideas from computational physics and numerical analysis to verify whether they predict accurate solutions for the right reasons. We posit that NeuralPDEs learn the artifacts in the simulation training data that arise from the truncation error of the discretized Taylor-series approximation of the spatial derivatives. Consequently, we find that NeuralPDE models are systematically biased, and their generalization capability often results from a fortuitous interplay of numerical dissipation and truncation error in the training dataset and NeuralPDE, which seldom happens in practical applications. The evidence for our hypothesis is provided with theory, numerical experiments and dynamical-system analysis. We show that this bias manifests aggressively in simple systems such as the Burgers and KdV equations. Additionally, we demonstrate that an eigenanalysis of the learned network weights can indicate a priori whether the model will be unstable or inaccurate for out-of-distribution inputs. Furthermore, we show evidence that even when the training dataset is qualitatively and quantitatively accurate, intrinsic sample differences in truncation error act as an “adversarial attack” by destroying generalization accuracy in NeuralPDEs despite excellent training accuracy. Finally, we discuss the implications of this finding for the reliability and robustness of NeuralPDEs and ML models in applications. |
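A minimal numerical illustration (an assumption of this write-up, not taken from the paper) of why discretized training data is not the continuous PDE: the central-difference derivative used to generate such data carries a truncation error that shrinks only as O(dx^2), and it is this kind of artifact that the abstract argues a NeuralPDE can absorb.

```python
import numpy as np

for n in (64, 128, 256):
    x = np.linspace(0, 2 * np.pi, n, endpoint=False)
    dx = x[1] - x[0]
    u = np.sin(x)
    du_fd = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)  # central difference
    err = np.max(np.abs(du_fd - np.cos(x)))              # vs the exact derivative
    print(f"n={n:4d}  max truncation error = {err:.2e}") # halving dx -> roughly 1/4 the error
```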
Monday, November 25, 2024 5:11PM - 5:24PM |
T15.00003: Submerged Reduced-Order Models for Incompressible Flow around Obstacles Jacob W Murri, Clifford E Watkins, James Watts, Sean R Breckling We present a hybrid reduced-order modeling scheme utilizing domain decomposition to simulate Navier-Stokes pipe flow with circular obstacles. In contrast to previous schemes, which use a reduced-order model for the full domain or a series of separate reduced-order models on small-scale unit components that compose the whole domain, our scheme deploys reduced-order models on user-identified subdomains containing obstacles. These user-identified subdomains have the same shape and size, allowing a single simulation of flow around obstacles to provide multiple useful snapshots per time step for training. Proper orthogonal decomposition (POD) is performed on the collection of snapshots to form a suitable low-rank Galerkin basis for each ROM subdomain. We employ discontinuous Galerkin domain decomposition (DG-DD) finite element methods to enforce interface boundary conditions weakly. We demonstrate that this method can approximate Navier-Stokes flow with high accuracy but reduced computational expense in terms of time and memory. |
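For reference, a minimal sketch of the proper orthogonal decomposition step described above, applied to the pooled subdomain snapshots; the function name pod_basis, the energy threshold, and the mean subtraction are assumptions of this sketch rather than details from the paper.

```python
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """snapshots: (n_dof_subdomain, n_snapshots), one column per snapshot per subdomain."""
    mean = snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)          # fraction of captured "energy"
    r = int(np.searchsorted(cum, energy)) + 1     # smallest rank reaching the threshold
    return U[:, :r], mean                         # low-rank Galerkin basis and mean field
```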
Monday, November 25, 2024 5:24PM - 5:37PM |
T15.00004: Data-driven artificial viscosity closures for projection-based reduced order modeling of incompressible fluid flows Aviral Prakash, Yongjie J Zhang Advancements in computational hardware and physical simulation techniques have pushed the envelope of the complexity of fluid physics that can be modeled with adequate accuracy. However, these high-fidelity simulations are expensive for multi-query applications, real-time simulation and dynamics forecasting. In such situations, reduced order models (ROMs) are an attractive alternative, as they can simulate engineering systems at a lower computational overhead without a significant loss in accuracy. Projection-based ROMs rely on an offline-online decomposition, where an energetic spatial basis extracted from data during the expensive offline stage is used to derive equations for the reduced states, which are then evolved in time during the inexpensive online stage. |
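The offline-online split can be illustrated with a deliberately simplified sketch: a linear full-order operator is projected onto the POD basis once (offline), and only the small reduced system is integrated thereafter (online). The linear operator A, the function names, and the use of scipy's solve_ivp are assumptions; the paper's incompressible Navier-Stokes setting and its artificial viscosity closure are not reproduced here.

```python
import numpy as np
from scipy.integrate import solve_ivp

def offline(A, Phi):
    """Project the full operator A (n x n) onto the POD basis Phi (n x r)."""
    return Phi.T @ A @ Phi                        # reduced operator, r x r

def online(A_r, Phi, x0, t_span, t_eval):
    a0 = Phi.T @ x0                               # project the initial condition
    sol = solve_ivp(lambda t, a: A_r @ a, t_span, a0, t_eval=t_eval)
    return Phi @ sol.y                            # lift reduced states back to full space
```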
Monday, November 25, 2024 5:37PM - 5:50PM |
T15.00005: Flexi-Propagator For Partial Differential Equations Khalid Rafiq, Wenjing Liao, Aditya G Nair We introduce a novel data-driven architecture that leverages an enhanced Variational Autoencoder (VAE) framework to predict solutions to nonlinear partial differential equations (PDEs). Our method integrates an end-to-end learnable model for capturing temporal dynamics within the VAE architecture, allowing for the direct representation and evolution of the system's state in the latent space over time. A key innovation of our approach is its ability to predict the solution field for multiple future time steps in a single forward pass. This capability eliminates the need for conventional recursive one-step predictions, enhancing computational efficiency and improving prediction accuracy. We demonstrate the effectiveness of our model by learning low-dimensional representations and providing one-shot predictions for the nonlinear Burgers' equation. The model exhibits strong generalization across a broad spectrum of Reynolds numbers and time steps not encountered during training, underscoring its potential for predictive modeling in complex nonlinear dynamical systems. Additionally, the framework is extended to a parametric reduced-order model, embedding parametric information into the latent space to identify trends in system evolution. |
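A minimal, hypothetical sketch of the idea of mapping a VAE latent state directly to several future times without recursive one-step rollout; the class name FlexiPropagatorSketch, the layer sizes, and the time-conditioned MLP dynamics are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class FlexiPropagatorSketch(nn.Module):
    def __init__(self, n_state, n_latent, horizon):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_state, 64), nn.ReLU(),
                                     nn.Linear(64, 2 * n_latent))      # mean and log-variance
        self.dynamics = nn.Linear(n_latent + 1, n_latent)              # latent jump, time-conditioned
        self.decoder = nn.Sequential(nn.Linear(n_latent, 64), nn.ReLU(),
                                     nn.Linear(64, n_state))
        self.horizon = horizon

    def forward(self, x):
        mu, logvar = self.encoder(x).chunk(2, dim=-1)
        z0 = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)       # reparameterization
        preds = []
        for k in range(1, self.horizon + 1):
            t = torch.full_like(mu[..., :1], k / self.horizon)         # normalized target time
            z_k = self.dynamics(torch.cat([z0, t], dim=-1))            # jump straight to step k
            preds.append(self.decoder(z_k))
        return torch.stack(preds, dim=1), mu, logvar                   # (batch, horizon, n_state)
```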
Monday, November 25, 2024 5:50PM - 6:03PM |
T15.00006: Bundle embeddings for learning chaotic dynamics from irregularly sampled, partially observable data Charles Douglas Young A challenge in training models on experimental data is the spatial sparsity of sampling. These incomplete state measurements can be augmented with state history to recover the underlying dynamics, e.g. through time delay embedding. Often the data is also sampled at irregular time intervals, such as when a sensor fails to transmit or one sensor records at different intervals than others. We use bundle embeddings to generalize previous work on uniform time delay embeddings for learning continuous dynamics from partially observed data sampled irregularly in time. With neural ODEs, this amounts to a simple modification of the loss function. We demonstrate the accuracy of the approach for several benchmark multivariate periodic and chaotic dynamical systems. |
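A toy sketch (assumptions: a scalar observable, a two-parameter stand-in for the learned vector field) of the loss-function modification described above: the continuous model is integrated to whatever irregular times the sensor actually reported, and the mismatch is penalized only at those times.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

rng = np.random.default_rng(0)
t_obs = np.sort(rng.uniform(0.0, 10.0, 40))             # irregular sample times
y_obs = np.sin(t_obs) + 0.05 * rng.standard_normal(40)  # partial, noisy observations

def loss(theta):
    a, b = theta
    rhs = lambda t, y: [y[1], -a * y[0] - b * y[1]]      # stand-in for a learned (neural) ODE
    sol = solve_ivp(rhs, (0.0, 10.0), [y_obs[0], 1.0], t_eval=t_obs)
    return np.mean((sol.y[0] - y_obs) ** 2)              # misfit at the observed times only

theta_hat = minimize(loss, x0=[0.5, 0.5], method="Nelder-Mead").x
print("fitted parameters:", theta_hat)
```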
Monday, November 25, 2024 6:03PM - 6:16PM |
T15.00007: Objective Determination of Optimal POD Modes for Large-Scale Motion Reconstruction in Turbulent Flows Nathan Ziems, Venkatesh Pulletikurthi, Suranga I Dharmarathne This study proposes and evaluates an objective method for identifying the optimal number of proper orthogonal decomposition (POD) modes. The method is implemented at two friction Reynolds numbers, Re𝜏 = 500 and 1000, to account for both moderate and high Reynolds numbers. The physical structures of large-scale motions (LSM) are identified using two-point correlations. We then apply the method of snapshots POD to decompose the flow field and develop a novel approach using two-point correlations to determine the precise number of POD modes necessary for a realistic reconstruction of the LSM. This method addresses the limitations of arbitrary mode selection in snapshot POD analysis, which can lead to the visualization of large-scale structures that do not exist in the physical flow domain. By comparing the reconstructed fields with fully resolved DNS data, we assess the effectiveness of our approach in capturing the essential features of LSM at different Reynolds numbers. Our findings provide new insights into the ability of low-order models to capture both the spatial structure and the energy content of large-scale motions. |
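As an illustrative sketch of the mode-selection criterion (synthetic usage; the tolerance, the streamwise periodic shift, and the function names are assumptions), the POD rank can be increased until the reconstructed field reproduces the two-point correlation of the reference field:

```python
import numpy as np

def two_point_corr(U):
    """U: (n_points, n_snapshots) fluctuating field; normalized correlation vs. separation."""
    R = np.array([np.mean(U * np.roll(U, -k, axis=0)) for k in range(U.shape[0])])
    return R / R[0]

def select_rank(U, tol=0.05):
    Phi, s, Vt = np.linalg.svd(U, full_matrices=False)
    R_ref = two_point_corr(U)
    for r in range(1, len(s) + 1):
        U_r = (Phi[:, :r] * s[:r]) @ Vt[:r]               # rank-r reconstruction
        if np.max(np.abs(two_point_corr(U_r) - R_ref)) < tol:
            return r                                      # smallest rank matching the LSM statistics
    return len(s)
```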
Monday, November 25, 2024 6:16PM - 6:29PM |
T15.00008: Continuous latent flow modeling for model-based reinforcement learning using temporal transformer networks Christian Lagemann, Kai Lagemann, Steven L Brunton Dynamical models are central to our ability to understand and predict natural and engineered systems. However, real-world systems often show time-varying behavior that is too complex for straightforward statistical forecasting approaches. This is because the temporal behavior, while potentially explained by an underlying dynamical model, can show strong, possibly abrupt changes in the observation space. To tackle these fundamental challenges, we propose a new model class that is explicitly aimed at predicting dynamical trajectories from high-dimensional empirical fluid flow data. This is done by combining amortized variational autoencoders and spatio-temporal attention within a framework designed to enforce certain scientifically motivated invariances. |
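A minimal, hypothetical sketch of the combination described above: a variational encoder compresses each snapshot, a temporal transformer attends over the latent history, and a decoder returns the predicted next snapshot. Layer widths, the class name LatentFlowSketch, and the single transformer layer are assumptions, not the authors' model (note that n_latent must be divisible by n_heads).

```python
import torch
import torch.nn as nn

class LatentFlowSketch(nn.Module):
    def __init__(self, n_state, n_latent, n_heads=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_state, 128), nn.GELU(),
                                     nn.Linear(128, 2 * n_latent))
        self.temporal = nn.TransformerEncoderLayer(d_model=n_latent, nhead=n_heads,
                                                   batch_first=True)
        self.decoder = nn.Sequential(nn.Linear(n_latent, 128), nn.GELU(),
                                     nn.Linear(128, n_state))

    def forward(self, x_seq):                        # x_seq: (batch, time, n_state)
        mu, logvar = self.encoder(x_seq).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # amortized variational sampling
        z_next = self.temporal(z)[:, -1]             # attend over the latent history
        return self.decoder(z_next), mu, logvar      # predicted next snapshot + VAE terms
```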