Bulletin of the American Physical Society
76th Annual Meeting of the Division of Fluid Dynamics
Sunday–Tuesday, November 19–21, 2023; Washington, DC
Session X29: Modeling Methods II: Speed-up, Stabilization, and Super-Resolution |
Chair: Arvind Mohan, Los Alamos National Laboratory | Room: 152B |
Tuesday, November 21, 2023 8:00AM - 8:13AM |
X29.00001: Implicit Neural Solver for Stable Surrogate Simulation of Fluid Dynamics Deepak Akhare, Pan Du, Tengfei Luo, Jian-Xun Wang Fluid simulations play a critical role in scientific and engineering domains, but their computational complexity has traditionally hindered real-time or many-query applications. Recent advances in scientific machine learning and neural solvers show promise in creating fast surrogate models using data, neural networks, and numerical techniques. However, most existing neural solvers rely on auto-regressive architectures to capture temporal dynamics, similar to explicit numerical methods, which can lead to error accumulation and limit their reliability for long-term predictions. To address this challenge, we propose an innovative implicit neural solver inspired by stable numerical implicit schemes. By adopting this approach, our neural network effectively mitigates the error accumulation problem, enabling accurate and dependable long-term trajectory predictions in fluid simulations. Through comprehensive numerical experiments, we demonstrate the effectiveness and merit of our proposed approach, showcasing the potential for advancing data-driven neural solvers in spatiotemporal simulations. |
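A minimal sketch of the contrast this abstract draws (our illustration, not the authors' architecture): an explicit autoregressive update advances the state directly, while an implicit update solves for the next state, here by fixed-point iteration. `N` is a stand-in for a learned network, scaled to be contractive so the iteration provably converges.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))
W *= 0.4 / np.linalg.norm(W, 2)           # contractive stand-in "network"

def N(u):
    # toy stand-in for a learned dynamics network
    return np.tanh(W @ u)

def explicit_step(u, dt=0.1):
    # explicit (autoregressive): next state computed from the current one
    return u + dt * N(u)

def implicit_step(u, dt=0.1, iters=50):
    # implicit: solve v = u + dt * N(v), as in stable implicit schemes
    v = u.copy()
    for _ in range(iters):
        v = u + dt * N(v)
    return v

u0 = rng.normal(size=8)
v = implicit_step(u0)
residual = np.linalg.norm(v - (u0 + 0.1 * N(v)))
print(residual)
```

Because the contraction factor here is dt times the network's Lipschitz constant, the fixed-point residual shrinks geometrically with each inner iteration.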
Tuesday, November 21, 2023 8:13AM - 8:26AM |
X29.00002: Machine-aided initial guesses for unstable periodic orbits Pierre Beck, Jeremy P Parker, Tobias M Schneider Unstable periodic orbits (UPOs) are believed to be the underlying dynamical structures of turbulence. Loop convergence algorithms deform entire space-time fields (loops) until they satisfy the evolution equations, and initial guesses are thus space-time fields in a high-dimensional space, rendering their identification highly challenging. We use a convolutional autoencoder to obtain a low-dimensional latent representation of the discretized physical space for the one-dimensional Kuramoto-Sivashinsky equation. In this latent space, we construct loops, which are decoded to physical space and used as initial guesses. These prove to be realistic initial guesses that, together with variational convergence algorithms, allow us to converge quickly to UPOs. These initial loops are constructed both through random guesses and by 'gluing' known UPOs to create longer ones. |
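A sketch of this pipeline, with PCA standing in for the convolutional autoencoder (names, dimensions, and data are our assumptions): snapshots are encoded to a low-dimensional latent space, a closed loop is drawn there, and decoding it yields a space-time field that can seed a variational loop-convergence algorithm.

```python
import numpy as np

rng = np.random.default_rng(6)
snapshots = rng.normal(size=(200, 64))      # stand-in "physical" snapshots
mean = snapshots.mean(axis=0)
_, _, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)

def encode(u):
    return (u - mean) @ Vt[:3].T            # 3-dimensional latent space

def decode(z):
    return z @ Vt[:3] + mean

# construct a closed loop in latent space around the data's latent center
t = np.linspace(0, 2 * np.pi, 50, endpoint=False)
center = encode(snapshots).mean(axis=0)
loop_latent = center + np.stack([np.cos(t), np.sin(t), 0 * t], axis=1)
guess = decode(loop_latent)                 # space-time initial guess
print(guess.shape)
```

The decoded loop is a full space-time field, yet it was specified by only three latent coordinates per time slice, which is what makes guess construction tractable.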
Tuesday, November 21, 2023 8:26AM - 8:39AM |
X29.00003: Eigen analysis of neural autoregressive models of multi-scale chaotic systems: Stability and error propagation Ashesh K Chattopadhyay, Pedram Hassanzadeh Recent years have seen unprecedented success in the development of data-driven autoregressive (AR) models for predicting high-dimensional multi-scale chaotic systems, e.g., weather, climate, and ocean. These models have proven to be more accurate than state-of-the-art numerical models at orders of magnitude lower computational cost. Despite their success, these models suffer from instabilities when integrated for long periods of time and show unphysical predictions. A cause of this instability is spectral bias, wherein the small scales of the system are poorly represented, resulting in nontrivial error propagation through the neural AR models. In this work, for the first time, we present a rigorous theoretical dynamical systems analysis of such error propagation on state-of-the-art neural networks and operators, closing the gap between theory and practice in the application of neural AR models to scientific problems. |
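A toy linear version (ours) of the kind of stability analysis this abstract describes: for an autoregressive map u_{n+1} = J u_n, a small error e evolves as e_{n+1} = J e_n, so the spectral radius of the Jacobian J decides whether errors decay or amplify over a long rollout.

```python
import numpy as np

rng = np.random.default_rng(1)
J = rng.normal(size=(6, 6))
J_stable = J * (0.9 / np.abs(np.linalg.eigvals(J)).max())   # rho = 0.9
J_unstable = J_stable * 1.5                                 # rho = 1.35

def error_growth(J, steps=200):
    e = 1e-6 * np.ones(J.shape[0])        # small initial perturbation
    for _ in range(steps):
        e = J @ e                          # linearized error propagation
    return np.linalg.norm(e)

growth_stable = error_growth(J_stable)
growth_unstable = error_growth(J_unstable)
print(growth_stable, growth_unstable)
```

Even a spectral radius modestly above one produces explosive error growth over a 200-step rollout, which is why long-term integration exposes instabilities that short validation horizons miss.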
Tuesday, November 21, 2023 8:39AM - 8:52AM |
X29.00004: Harnessing the power of natural language processing techniques for multiscale turbulence simulation Mrigank Dhingra, Omer San, Anne E Staples In this work we leveraged neural machine translation (NMT) methods to enable accelerated multiscale simulations of Burgers turbulence. We employed a sequence-to-sequence (seq2seq) autoencoding mechanism with long short-term memory (LSTM) integration. Originally employed in natural language processing (NLP), this mechanism enabled implementation of a coarse projective integration (CPI) multiscale scheme by translating between the energy spectrum and velocity field descriptions of the flow field. Using seq2seq, our model creates a many-to-many mapping between these scales. When integrated into the CPI scheme, this mapping forms an effective closure model for the lifting operator, translating coarse-scale information back to the fine scale with a mean squared error (MSE) of 0.005. Compared to a velocity signal initialized with random phases (MSE of 0.01), our method exhibits superior precision. The Burgers equation was evolved to statistical stationarity using this model, yielding a savings factor of 442 compared to DNS while retaining three-digit precision. Additionally, a convolutional neural network (CNN) was trained to perform the translation, showing superior performance to the LSTM for certain CPI parameters. This work demonstrates neural networks' potential to accelerate multiscale turbulence simulations. |
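A minimal sketch (ours) of the lifting problem the seq2seq model addresses in coarse projective integration: the coarse description is the energy spectrum |u_hat|, and lifting back to a velocity field must supply the missing Fourier phases. True phases recover the field exactly; random phases (the baseline mentioned in the abstract) do not.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
u = np.sin(x) + 0.5 * np.sin(3 * x)         # toy "velocity" field

u_hat = np.fft.rfft(u)
spectrum = np.abs(u_hat)                    # restriction: phases are lost

# lift with the true phases (what a learned translator approximates)
u_true_phase = np.fft.irfft(spectrum * np.exp(1j * np.angle(u_hat)), n)
# lift with random phases (the naive baseline)
phases = rng.uniform(0, 2 * np.pi, u_hat.size)
u_rand_phase = np.fft.irfft(spectrum * np.exp(1j * phases), n)

mse_true = np.mean((u - u_true_phase) ** 2)
mse_rand = np.mean((u - u_rand_phase) ** 2)
print(mse_true, mse_rand)
```

The gap between the two MSE values is exactly the information a learned spectrum-to-velocity translation must recover.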
Tuesday, November 21, 2023 8:52AM - 9:05AM |
X29.00005: Abstract Withdrawn |
Tuesday, November 21, 2023 9:05AM - 9:18AM |
X29.00006: MuRFiV-Net: A Multi-Resolution Finite-Volume Inspired Neural Network for Predicting Spatiotemporal Dynamics Xin-yang Liu, Xiantao Fan, Jian-Xun Wang Predicting complex spatiotemporal dynamics in physical processes often demands computationally expensive numerical methods or data-driven neural networks that suffer from high training costs, error accumulation, and limited generalizability to unseen parameters. An effective approach to address these challenges is leveraging physics priors in training neural networks, known as physics-informed deep learning (PiDL). In this work, we introduce the Multi-Resolution Finite-Volume-inspired network, MuRFiV-Net, designed to capitalize on the conservative property of finite volume schemes on the global scale and the expressive power of deep learning on the local scale. We demonstrate the effectiveness of MuRFiV-Net on several spatiotemporal systems governed by partial differential equations (PDEs), including the Burgers equation, the Kuramoto–Sivashinsky equation, and the Navier-Stokes equations. By embedding PDE information into the deep learning architecture, MuRFiV-Net achieves superior performance in predicting spatiotemporal dynamics, surpassing data-driven neural networks. This novel approach offers a promising avenue for tackling complex dynamic systems with improved accuracy and efficiency. |
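A sketch of the finite-volume prior the abstract invokes (our toy, not MuRFiV-Net itself): if the update is written in flux form with periodic faces, the total of the cell averages is conserved no matter what the (here randomly weighted, "learned") face flux produces.

```python
import numpy as np

rng = np.random.default_rng(3)
W = rng.normal(size=(2,))                 # stand-in for learned flux weights

def face_flux(u):
    # flux at face i+1/2 from the two neighboring cells (toy "network")
    return np.tanh(W[0] * u + W[1] * np.roll(u, -1))

def fv_step(u, dt=0.01, dx=0.1):
    # conservative update: u_i -= (dt/dx) * (F_{i+1/2} - F_{i-1/2})
    F = face_flux(u)                      # F[i] is the flux at face i+1/2
    return u - dt / dx * (F - np.roll(F, 1))

u = rng.normal(size=32)
total_before = u.sum()
for _ in range(100):
    u = fv_step(u)
conservation_error = abs(u.sum() - total_before)
print(conservation_error)                 # conserved to round-off
```

Because fluxes telescope across faces, conservation holds by construction, so the network only needs to learn the local flux, not the global constraint.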
Tuesday, November 21, 2023 9:18AM - 9:31AM |
X29.00007: Non-Intrusive Reduced Order Models with Neural PDEs: The Interpretability Challenge Arvind T Mohan Surrogate models of partial differential equations (PDEs) are an important area of research for applications where we desire rapid, accurate predictions with low computational costs. Differentiable programming is an emerging paradigm that aims to enable the expressivity of neural networks inside PDEs, such that the learned model is intimately connected to the physics of the problem by construction. Recent efforts in differentiable programming, such as Neural PDEs, have shown promise in learning accurate parameterizations for PDEs from simulation data. However, several earth/climate applications have incomplete or partially known PDEs that need non-intrusive parameterization from observational training data. This leads to a significantly challenging learning problem, where the strengths and weaknesses of differentiable programming are less known. This work systematically studies strategies for learning such dynamics with differentiable programming in Neural PDEs. Our results show that differentiable programming as a paradigm can accurately model PDEs while surpassing vanilla neural networks. Interestingly, it succeeds even when strong assumptions are made about the missing physics, while requiring less data and lower computational cost. However, we also discover that differences in numerical methods between the training data and the Neural PDE have a non-trivial impact on the quality and stability of the learned model, with significant implications for the interpretability and robustness of this technique. |
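A hypothetical sketch of the Neural PDE idea: the right-hand side is partially known physics plus a learned term theta*g(u), and because the time integrator is differentiable, a trajectory-misfit loss can be differentiated with respect to theta (here by finite differences; in practice by autodiff through the solver). All names and the damping term are our assumptions.

```python
import numpy as np

def rhs(u, theta):
    known = -0.5 * u                      # known (linear damping) physics
    learned = theta * np.tanh(u)          # stand-in for a neural closure
    return known + learned

def rollout(u0, theta, dt=0.05, steps=40):
    u = u0
    for _ in range(steps):
        u = u + dt * rhs(u, theta)        # explicit Euler inside the "solver"
    return u

u0 = np.linspace(-1, 1, 16)
target = rollout(u0, theta=0.3)           # synthetic "data" from true theta

def loss(theta):
    return np.mean((rollout(u0, theta) - target) ** 2)

eps = 1e-6
grad = (loss(0.0 + eps) - loss(0.0 - eps)) / (2 * eps)
print(loss(0.3), grad)
```

The gradient at theta = 0 points toward the true parameter, which is the property that lets the closure be trained end-to-end through the solver.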
Tuesday, November 21, 2023 9:31AM - 9:44AM |
X29.00008: Particle Dispersion in Indoor Environments: Can Super-resolution Autoencoders Revolutionize Air Quality Predictions? Adib Bazgir, Hong Y Kek, Huiyi Tan, Yuwen Zhang, Keng Y Wong The critical concern of airborne infections and their associated health implications has underscored the necessity for a comprehensive understanding of indoor airflow dynamics and particle dispersion. While the conventional use of Computational Fluid Dynamics (CFD) has been instrumental in indoor air quality assessments, the repeated analysis of varying patient postures may prove time-consuming. To address this issue and improve efficiency, we develop a Super-resolution Autoencoder (SR-AE) algorithm for prediction. Using the SR-AE algorithm together with snapshots from high-fidelity CFD simulations, which provide direct visualization of particle dispersion in indoor environments, this study aims to demonstrate the potential of a novel Autoencoder-based approach in accurately interpreting indoor airflow and particle dynamics for airborne infection exposure assessment. Through this research, we seek to bridge the gap between high-fidelity CFD simulations and advanced deep learning techniques, contributing to more efficient indoor air quality predictions, mitigating health risks, and ensuring safer indoor environments. |
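A minimal super-resolution sketch (ours, with a linear least-squares decoder standing in for the SR-AE): fit a map from 8-point coarse samples of smooth 1-D "flow" profiles to their 32-point fine versions, the same coarse-to-fine mapping an SR autoencoder learns nonlinearly from CFD snapshots.

```python
import numpy as np

rng = np.random.default_rng(7)
x_fine = np.linspace(0, 1, 32)

def sample_field():
    # toy smooth "flow" profiles spanning a two-dimensional family
    a, b = rng.normal(size=2)
    return a * np.sin(2 * np.pi * x_fine) + b * np.cos(2 * np.pi * x_fine)

fine = np.stack([sample_field() for _ in range(200)])
coarse = fine[:, ::4]                     # 8-point low-resolution input
W, *_ = np.linalg.lstsq(coarse, fine, rcond=None)   # linear "decoder"

test = sample_field()                     # unseen snapshot
recon = test[::4] @ W                     # super-resolved prediction
err = np.max(np.abs(recon - test))
print(err)
```

Because the toy fields live in a low-dimensional family, even a linear decoder reconstructs them; real CFD snapshots need the nonlinear capacity of an autoencoder, which is the abstract's point.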
Tuesday, November 21, 2023 9:44AM - 9:57AM |
X29.00009: Data-driven symmetry-aware low-dimensional models for predicting turbulent fluid flows Carlos E Perez De Jesus, Alec Linot, Michael D Graham Reduced-order models (ROMs) that capture flow dynamics are important for decreasing computational cost in simulations and for practical applications such as control for drag reduction. In this work we present a framework for developing low-dimensional models that take advantage of discrete and continuous symmetries in the Navier-Stokes equations (NSE). In general, ROMs will not have information about the symmetries. This means that to learn accurate ROMs, the models need access to data in every symmetry subspace that is populated in the long-time dynamics. To overcome this, we learn ROMs with neural networks in a subspace of the symmetries and apply this to the case of two-dimensional Kolmogorov flow in a chaotic bursting regime. By charting the state space into symmetric sections related by the symmetries of the system, tracked with indicators that distinguish them, we can map the flow field to a fundamental space and learn the dynamics there. With this framework, equivariance is satisfied, less data is needed to learn accurate models, better short-time tracking with respect to the true data is observed, and long-time statistics are captured. |
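A toy sketch (ours) of the fundamental-domain idea: for dynamics equivariant under a discrete symmetry such as u -> -u, a state and its symmetry copy are dynamically identical, so an indicator (here the sign of the spatial mean, our choice) maps every state to one representative, and a ROM trained on that fundamental space covers all symmetry copies.

```python
import numpy as np

def to_fundamental(u):
    # indicator: pick the symmetry copy with non-negative spatial mean
    return u if u.mean() >= 0 else -u

rng = np.random.default_rng(4)
u = rng.normal(size=16) + 0.5
# a state and its symmetry copy map to the same representative
same = np.allclose(to_fundamental(u), to_fundamental(-u))
print(same)
```

With every trajectory folded into the fundamental space, the model never needs data from the other symmetry subspaces, which is where the data savings come from.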
Tuesday, November 21, 2023 9:57AM - 10:10AM |
X29.00010: Taylor series error correction network for super-resolution of discretized fluid solutions Wenzhuo Xu, Christopher McComb, Noelia Grande Gutiérrez High-fidelity fluid simulations can impose an enormous computational burden, creating the need for an effective up-sampling method for generating high-resolution data. However, conventional up-sampling methods encounter challenges when estimating results based on low-resolution meshes due to the often non-linear behavior of the discretization error induced by the coarse mesh [1]. In this study, we present TEECNet (Taylor Expansion Error Correction Network), designed to efficiently super-resolve solutions of partial differential equations (PDEs) via graph representations. We use neural networks to learn high-dimensional non-linear mappings between low- and high-fidelity solution spaces to mitigate the effects of discretization error. Building upon the notion that the discretization error can be expressed as a Taylor series expansion in the mesh size, we directly encode approximations of the numerical error in the network design. This novel approach is capable of calibrating point-wise evaluations and emulating physical laws in infinite-dimensional solution spaces. Additionally, computational experiments verify that the proposed method generalizes favorably across diverse physics domains, including heat transfer and simplified Navier-Stokes equations, achieving over 96% accuracy as measured by mean squared error and close to 2% better performance than state-of-the-art methods. |
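A toy version (ours, with quadrature standing in for the PDE solver) of the Taylor-series premise behind TEECNet: a second-order method's error behaves like C*h^2 in the mesh size h, so a coefficient identified at one resolution can correct the solution on a coarser mesh.

```python
import numpy as np

def coarse_solve(n):
    # composite trapezoid rule for the integral of sin on [0, pi] (= 2)
    x = np.linspace(0.0, np.pi, n)
    y = np.sin(x)
    h = x[1] - x[0]
    return h * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

exact = 2.0
h_fit = np.pi / 16
C = (coarse_solve(17) - exact) / h_fit**2   # "learn" the error coefficient

h = np.pi / 8                               # coarser mesh
raw = coarse_solve(9)
corrected = raw - C * h**2                  # subtract the modeled error
err_raw = abs(raw - exact)
err_corrected = abs(corrected - exact)
print(err_raw, err_corrected)
```

The corrected value beats the raw coarse solution by orders of magnitude; TEECNet replaces the single coefficient C with a network that predicts the error field pointwise.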
Tuesday, November 21, 2023 10:10AM - 10:23AM |
X29.00011: A Differentiable Hybrid Neural Solver for Efficient Simulation of Cavitating Flows Bo Zhang, Xiantao Fan, Jian-Xun Wang Cavitation is a prevalent phenomenon in nature and engineering, leading to erosion, noise, and efficiency loss in hydraulic machines. However, the computational costs of conventional numerical solvers for such multi-physics, multi-scale simulations are prohibitively high. To address this challenge and leverage advances in machine learning and ever-increasing data availability, we present a novel differentiable programming approach that merges machine learning with classical numerical solvers to achieve efficient GPU-accelerated simulation of cavitating flows. Specifically, we develop a differentiable hybrid neural solver, which employs a homogeneous equilibrium model with a barotropic correlation to accurately model cavitation. Since all the modules are coded in JAX with auto-differentiation capabilities, gradients can be back-propagated through the entire model, allowing seamless integration with neural networks, which can be trained in an end-to-end, sequence-to-sequence manner. The performance of the proposed neural differentiable model is demonstrated through comparison against a purely data-driven model and a traditional numerical solver. |
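A scalar toy (ours) of why end-to-end differentiability matters: for an unrolled solver with a learned parameter theta in its right-hand side, the gradient of a terminal loss can be propagated back through every step by the chain rule, which is what JAX's autodiff does for the full hybrid solver. Here the backprop is derived by hand and checked against finite differences.

```python
import numpy as np

a, dt, N, u0, y = -0.3, 0.1, 20, 1.0, 0.5

def rollout(theta):
    u = u0
    for _ in range(N):
        u = u + dt * (a + theta) * u      # solver step with learned term
    return u

def loss(theta):
    return (rollout(theta) - y) ** 2

theta = 0.05
g = 1.0 + dt * (a + theta)
# hand-derived backprop through all N steps (chain rule):
# u_N = u0 * g^N, so dL/dtheta = 2 (u_N - y) * u0 * N * dt * g^(N-1)
grad_manual = 2 * (rollout(theta) - y) * u0 * N * dt * g ** (N - 1)
eps = 1e-7
grad_fd = (loss(theta + eps) - loss(theta - eps)) / (2 * eps)
grad_gap = abs(grad_manual - grad_fd)
print(grad_gap)
```

The two gradients agree to roundoff, confirming that the unrolled solver is a single differentiable computation graph.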
Tuesday, November 21, 2023 10:23AM - 10:36AM |
X29.00012: Neural operator-based super-fidelity: A warm-start approach for accelerating steady-state fluid flow simulations Xuhui Zhou, Jiequn Han, Muhammad Irfan Zafar, Christopher Roy, Heng Xiao Neural operators have emerged as a powerful tool for approximating mappings between infinite-dimensional function spaces, gaining attention in both research and industry. However, relying solely on neural operators as surrogate models may fall short in scientific tasks that prioritize computational precision and deterministic outcomes. In this study, we demonstrate that despite visual fidelity in flow field predictions, integral quantities can significantly deviate from ground truths. Consequently, we emphasize the necessity of numerically solving governing equations and present a novel warm-start approach, termed neural operator-based super-fidelity, to accelerate steady-state fluid flow simulations. The concept of super-fidelity, inspired by super-resolution in computer vision, involves mapping low-fidelity model solutions to high-fidelity ones through a vector-cloud neural network with equivariance (VCNN-e). The VCNN-e preserves all desired invariance/equivariance properties for solutions and adapts to different spatial resolutions. We evaluate the approach in two scenarios: (1) simulating incompressible laminar flows over parameterized elliptical cylinders and (2) simulating compressible flows over various airfoils at different angles of attack using a turbulence model. By utilizing the network's prediction as refined initial conditions, both simulations achieve speed-up ratios of at least two while maintaining the same level of accuracy compared to iterative convergence from potential flows. The robustness of this approach is demonstrated across diverse CPU configurations and various iterative algorithms. 
Moreover, this method offers distinct advantages and practicality, bypassing the need for extensive high-quality data during training, leading to substantial time savings in data preparation, particularly for industrial applications. This study highlights the benefits of initializing traditional CFD solvers with neural operator-based predictions, enhancing computational efficiency while ensuring outcome accuracy. |
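A sketch of the warm-start effect (ours; Jacobi iteration on a random well-conditioned linear system stands in for the CFD solver, and a perturbed true solution stands in for the neural-operator prediction): both starts converge to the same solution to the same tolerance, but the warm start needs fewer iterations.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 50
A = rng.normal(size=(n, n)) + 20.0 * np.eye(n)   # dominant diagonal
b = rng.normal(size=n)
x_true = np.linalg.solve(A, b)

def jacobi_iters(x, tol=1e-8, max_iter=500):
    # count Jacobi iterations until the residual drops below tol
    D = np.diag(A)
    for k in range(max_iter):
        if np.linalg.norm(A @ x - b) <= tol:
            return k
        x = x + (b - A @ x) / D
    return max_iter

iters_cold = jacobi_iters(np.zeros(n))           # uninformed start
iters_warm = jacobi_iters(x_true + 1e-6 * rng.normal(size=n))  # "NN" start
print(iters_cold, iters_warm)
```

Because the iterative method converges at the same asymptotic rate from either start, the saving comes entirely from the smaller initial residual, mirroring the paper's claim that accuracy is unchanged while iterations drop.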
© 2024 American Physical Society