Bulletin of the American Physical Society
75th Annual Meeting of the Division of Fluid Dynamics
Volume 67, Number 19
Sunday–Tuesday, November 20–22, 2022; Indiana Convention Center, Indianapolis, Indiana.
Session U21: Nonlinear Dynamics: Machine Learning |
Chair: Moritz Linkmann, University of Edinburgh; Andrew Fox, University of Wisconsin - Madison Room: 207 |
Tuesday, November 22, 2022 8:00AM - 8:13AM |
U21.00001: Applicability of Machine Learning Methodologies to Model the Statistical Evolution of the Coarse-Grained Velocity Gradient Tensor Criston M Hyett, Yifeng Tian, Michael Woodward, Michael Chertkov, Daniel Livescu, Mikhail Stepanov The evolution of the Lagrangian velocity gradient tensor contains local information about a variety of important turbulence characteristics. Work to model this evolution in isotropic turbulence, and at the smallest scales, has been successful, particularly through the use of machine learning (ML) techniques to approximate local closures to the non-local pressure Hessian. However, extending these methods to describe the evolution of the coarse-grained velocity gradient tensor (CGVGT), filtered at a scale within the inertial range of turbulence, remains a challenge. In this work, we examine the statistics of the CGVGT and its associated pressure Hessian to determine why the proposed ML methods struggle as the coarse-graining scale increases. Through this investigation, we hope to enable a path forward in modeling the statistical evolution of the CGVGT. |
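Much of velocity-gradient-tensor analysis is organized around the second and third invariants Q and R of the trace-free tensor. A minimal numpy sketch of their computation (the tensor values below are illustrative, not from the authors' data):

```python
import numpy as np

def vgt_invariants(A):
    """Second and third invariants Q, R of a trace-free velocity
    gradient tensor A (3x3). For incompressible flow tr(A) = 0,
    so Q = -tr(A^2)/2 and R = -tr(A^3)/3 = -det(A)."""
    Q = -0.5 * np.trace(A @ A)
    R = -np.trace(A @ A @ A) / 3.0
    return Q, R

# Illustrative trace-free tensor (hypothetical values)
A = np.array([[0.1, 0.5, -0.2],
              [-0.3, 0.2, 0.4],
              [0.6, -0.1, -0.3]])
Q, R = vgt_invariants(A)
```

The joint Q-R statistics of such tensors are what distinguish the small-scale and inertial-range (coarse-grained) regimes discussed in the abstract.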
Tuesday, November 22, 2022 8:13AM - 8:26AM |
U21.00002: Attention-enhanced PDE-preserved Neural Network for Predicting Spatiotemporal Physics Xinyang Liu, Jian-Xun Wang Modeling complex spatiotemporal dynamics plays an essential role in predicting, understanding, and controlling physical processes. However, traditional numerical methods are prohibitively expensive in many-query tasks (e.g., design optimization). Although data-driven models based on deep learning have shown extraordinary capabilities in learning complicated dynamics, issues like high training costs, error accumulation, and poor generalizability limit their applications to real-world problems. A promising way forward is to combine the advantages of physics models and deep learning, known as physics-informed deep learning (PiDL). One direction in this regard is to preserve the mathematical structure of the governing physics in the deep learning architecture, i.e., the PDE-preserved neural network (PPNN). In this work, we extend the PPNN structure by leveraging the attention mechanism in both time and space to learn a more accurate representation of the system. A more efficient multi-step time integration scheme can be learned using temporal attention, alleviating error accumulation even further than the original PPNN. The merit of the attention-enhanced PPNN is demonstrated on a set of complex spatiotemporal systems governed by PDEs, including the Navier-Stokes equations. |
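The attention mechanism the authors leverage can be illustrated in isolation. A minimal numpy sketch of scaled dot-product attention over a hypothetical sequence of latent states (shapes and values are illustrative, not the PPNN architecture itself):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: each query row attends to all key
    rows; the output is a softmax-weighted combination of the values."""
    scores = Q @ K.T / np.sqrt(K.shape[1])
    w = np.exp(scores - scores.max(axis=1, keepdims=True))  # stable softmax
    w /= w.sum(axis=1, keepdims=True)
    return w @ V, w

# Hypothetical sequence of 5 latent states with 8 features each
rng = np.random.default_rng(0)
Q = rng.normal(size=(5, 8))
K = rng.normal(size=(5, 8))
V = rng.normal(size=(5, 8))
out, w = attention(Q, K, V)   # each row of w sums to 1 over the sequence
```

In temporal attention as described here, the queries/keys/values would come from states at different time steps, letting the learned integrator weight past steps adaptively.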
Tuesday, November 22, 2022 8:26AM - 8:39AM |
U21.00003: Transitions in electromagnetically-driven 2D flows with random forcing configurations Himanshi Saini, Jeffrey Tithof Two-dimensional (2D) flows offer a convenient platform for testing new theoretical approaches for predicting turbulent flow. Compared to 3D flows, 2D flows are faster to simulate numerically and less technically challenging to measure experimentally. We present a combined numerical and experimental study of quasi-2D flows in a shallow electrolyte layer driven by Lorentz forces produced by electric current interacting with a magnetic field generated by a random array of permanent magnets. The simulations are based on a 2D model derived by depth-averaging the three-dimensional Navier-Stokes equations. Ensembles of simulations and experiments are carried out with different random magnet arrangements to study the transitions the flow undergoes as the forcing is increased. These transitions are sensitive to the forcing profile, which varies with different magnet arrangements. In this work, we predict flow dynamics for different magnet arrangements using machine learning-based algorithms, including physics-informed neural networks (PINNs), and present preliminary results comparing different approaches. Our research will provide a foundational stepping stone toward machine learning-based predictions of (quasi-2D) geophysical flows, 3D turbulence, and more. |
Tuesday, November 22, 2022 8:39AM - 8:52AM |
U21.00004: Manifold learning and deep autoencoders for nonlinear embedding of unsteady fluid flows Hunor Csala, Scott T Dawson, Amirhossein Arzani Computational fluid dynamics (CFD) is known for producing high-dimensional data in space and time. Modern data-driven modeling approaches present a myriad of techniques to extract physical information from these datasets and identify an optimal set of coordinates for representing them in a low-dimensional embedding. This is a crucial first step toward reduced order modeling, usually done via proper orthogonal decomposition (POD), which gives the best linear approximation. However, fluid flows are often highly complex with nonlinear structures. Several unsupervised machine learning algorithms have been developed in other branches of science for nonlinear dimensionality reduction (NDR), but have not yet been extensively used for fluid flow data. We investigate four manifold learning and two deep learning based NDR methods and compare them to POD. These are tested on two canonical fluid flow problems and biomedical flows in diseased arteries. We compare the performance of these methods and discuss the associated challenges. The temporal vs. spatial arrangement of input data and its influence on NDR mode extraction is investigated, and the obtained spatial modes are compared. Finite time Lyapunov exponents (FTLE) are calculated to facilitate flow physics interpretation. |
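The POD baseline the nonlinear methods are compared against reduces to a thin SVD of the mean-subtracted snapshot matrix. A minimal sketch on synthetic travelling-wave data (illustrative, not the canonical or biomedical flows studied here):

```python
import numpy as np

def pod(snapshots, r):
    """POD of a snapshot matrix X (n_space x n_time): mean-subtract,
    take the thin SVD, and keep the r leading spatial modes."""
    X = snapshots - snapshots.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    modes = U[:, :r]                  # spatial modes
    coeffs = np.diag(s[:r]) @ Vt[:r]  # temporal coefficients
    energy = s[:r]**2 / np.sum(s**2)  # captured energy fractions
    return modes, coeffs, energy

# Synthetic travelling wave: sin(x - t) is exactly rank 2
x = np.linspace(0, 2 * np.pi, 64)
t = np.linspace(0, 2 * np.pi, 100)
X = np.sin(np.subtract.outer(x, t))   # shape (64, 100)
modes, coeffs, energy = pod(X, r=2)
```

A travelling wave needs two linear modes, which hints at why linear embeddings are inefficient for advection-dominated flows, the motivation for the NDR methods compared in this talk.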
Tuesday, November 22, 2022 8:52AM - 9:05AM |
U21.00005: From Navier-Stokes simulations for thin films to amplitude equations and back via physics-assisted machine-learning Cristina P Martin Linares, Eleni Koronaki, Yorgos Psarellis, George Karapetsas, Ioannis G Kevrekidis Amplitude equations for the interfaces in thin film flows are reduced representations of the physics under limiting conditions. The Kuramoto-Sivashinsky (KS) equation is the first in a hierarchy of models, with the most detailed being Navier-Stokes but requiring involved computations. We present a machine learning approach to leverage velocity and amplitude data at various Re from simulations of a thin film flow under conditions where the KS is approximately valid. We use this dataset to train a Neural Network (NN) that learns the PDE describing the time-evolution of the interface, integrated in time as a “black box” to compute the amplitude. We adopt a “gray box” approach to learn a correction to the KS. We also show that local values of the right-hand side of the KS and corresponding derivatives in space and time as input in a NN can predict the dynamics. The approximate nonlinear manifold of the dataset can be parametrized by a small number of latent variables using linear (Proper orthogonal decomposition-POD) and nonlinear (Diffusion Maps and autoencoders) methods. We can then predict fluid velocity distributions from some amplitude data points by interpolating in the latent space with Gappy POD and Geometric Harmonics with the linear and nonlinear methods, respectively. |
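The KS equation at the base of this model hierarchy can be integrated cheaply with a pseudo-spectral scheme, which is the kind of reference trajectory a learned "gray box" correction would be compared against. A minimal semi-implicit sketch (domain size, resolution, and time step are illustrative choices, not the authors' setup):

```python
import numpy as np

def ks_step(u, dt, k):
    """One semi-implicit pseudo-spectral step of the KS equation
    u_t = -u u_x - u_xx - u_xxxx on a periodic domain.
    Stiff linear terms are treated implicitly in Fourier space."""
    u_hat = np.fft.fft(u)
    N_hat = -0.5j * k * np.fft.fft(u * u)   # -FT(u u_x) via (u^2)_x / 2
    L = k**2 - k**4                          # linear symbol
    u_hat = (u_hat + dt * N_hat) / (1.0 - dt * L)
    return np.real(np.fft.ifft(u_hat))

# Illustrative parameters: small periodic domain, short run
n, Ldom, dt = 128, 22.0, 0.01
x = Ldom * np.arange(n) / n
k = 2 * np.pi * np.fft.fftfreq(n, d=Ldom / n)
u = 0.1 * np.cos(2 * np.pi * x / Ldom)      # seed the unstable mode
for _ in range(1000):
    u = ks_step(u, dt, k)
```

The long-wave instability (growth for k < 1) amplifies the initial perturbation, which is the amplitude dynamics the neural network in this work learns to correct.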
Tuesday, November 22, 2022 9:05AM - 9:18AM |
U21.00006: Interpreted machine learning in fluid dynamics: Explaining relaminarisation events in wall-bounded shear flows Moritz Linkmann, Martin Lellep, Jonathan Prexl, Bruno Eckhardt Powerful machine learning (ML) methods are notoriously difficult to interpret. Here, we use ML methods to predict relaminarisation events in wall-bounded shear flows and obtain human-interpretable information through an explainable artificial intelligence method, the game-theoretic Shapley additive explanations (SHAP) algorithm (Lundberg & Lee, Advances in Neural Information Processing Systems, 4765 (2017)). For a proof of concept, we consider a low-dimensional model based on the self-sustaining process (SSP), where each data feature has a clear physical and dynamical interpretation in terms of representative features of the near-wall dynamics. SHAP determines that only the laminar profile, the streamwise vortex and a specific streak instability play a major role in the prediction of relaminarisation events. The method is applicable to larger datasets; in minimal plane Couette flow, the prediction is based on proxies for linear streak instabilities. The SHAP analysis thus suggests that the break-up of the self-sustaining cycle is connected with a suppression of streak instabilities. |
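The Shapley values that SHAP approximates can be computed exactly for a model with few features by enumerating all coalitions. A sketch on a hypothetical three-feature model (not the SSP features used in the talk):

```python
import numpy as np
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values: phi_i averages the marginal contribution of
    feature i over all coalitions S, where 'present' features take
    values from x and absent ones from the baseline."""
    n = len(x)
    phi = np.zeros(n)
    def v(S):
        z = baseline.copy()
        z[list(S)] = x[list(S)]
        return f(z)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                w = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi[i] += w * (v(S + (i,)) - v(S))
    return phi

# Hypothetical model with one additive and one interacting pair of features
f = lambda z: 2.0 * z[0] + z[1] * z[2]
x = np.array([1.0, 2.0, 3.0])
base = np.zeros(3)
phi = shapley_values(f, x, base)
# Efficiency property: the attributions sum to f(x) - f(baseline)
```

The exponential cost of this enumeration is why SHAP uses approximations for high-dimensional inputs; for the low-dimensional SSP model described here, attributions are tractable and physically interpretable.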
Tuesday, November 22, 2022 9:18AM - 9:31AM |
U21.00007: Data-driven modeling of a dynamic system with extreme events through neural networks in an atlas of charts Andrew J Fox, Michael D Graham Fluid dynamic systems with extreme events are difficult to capture with data-driven modeling, due to strong dependence of the long-time occurrence of extreme events on short-time conditions and the relative scarcity of data within extreme events compared to non-extreme states. Our technique, known as Charts and Atlases for Nonlinear Data-Driven Dynamics on Manifolds (CANDyMan), works by decomposing the time series into separate charts based on data similarity, learning dynamical models on each chart via individual time-mapping neural networks, then stitching the charts together to create a single atlas, obtaining a global dynamical model. We apply CANDyMan to a nine-dimensional model of turbulent shear flow between infinite parallel free-slip walls under a sinusoidal body force developed by Moehlis, Faisst and Eckhardt (MFE), which undergoes extreme events in the form of high-energy, intermittent quasi-relaminarization. We demonstrate that the application of CANDyMan reduces the error in predictions and captures the frequency of high-energy extreme events more accurately than a single time-mapping neural network. Finally, we project onto the full velocity field, where CANDyMan creates a more accurate reproduction of the turbulent velocity statistics than a single dynamical model. |
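The decompose-then-model step can be illustrated on a toy trajectory. A sketch using k-means for the chart assignment and chart-local linear time-maps as stand-ins for the per-chart neural networks (the circular trajectory and all parameters are illustrative, not CANDyMan itself):

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=50):
    """Plain k-means: split snapshots into k charts by similarity."""
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
        C = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, C

# Toy trajectory on a circle; each chart gets its own local linear
# time-map, standing in for the per-chart neural networks.
theta = np.linspace(0, 2 * np.pi, 400, endpoint=False)
X = np.c_[np.cos(theta), np.sin(theta)]
labels, centers = kmeans(X, k=3)
models = {}
for j in range(3):
    idx = np.where(labels[:-1] == j)[0]
    A, *_ = np.linalg.lstsq(X[idx], X[idx + 1], rcond=None)
    models[j] = A   # chart-local map: x_{t+1} ~ x_t @ A
```

Forecasting then looks up the active chart at each step and applies that chart's model; the "stitching" in CANDyMan handles the hand-off between charts consistently.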
Tuesday, November 22, 2022 9:31AM - 9:44AM |
U21.00008: Neighbor search in latent spaces via geometric deep learning for nonlocal methods in fluid dynamics Liam K Magargal, Steven N Rodriguez, Justin Jaworski, Athanasios Iliopoulos, John Michopoulos Nonlocal (NL) numerical methods are algorithmically versatile and enable effective simulations of complex physical phenomena, such as free-surface and multi-phase fluid flows. In contrast to traditional methods that employ local operations, such as finite elements or finite volumes, NL methods rely on NL collocation support to model dynamical systems. It follows that NL methods suffer generally from a lack of sparsity, and incur a higher computational cost than local methods due to the requirement of a NL neighbor search. Recent efforts in NL projection-based model order reduction have attempted to ameliorate this cost bottleneck using dimensional reduction, hyper-reduction, and hierarchical agglomeration. Unfortunately, most of this work relies on hierarchical agglomeration of neighbors in a linear subspace and cannot account for neighbor shifting and evolution. Toward addressing these limitations, the current work aims to leverage graph neural networks, a framework within geometric deep learning, to serve as a time-adaptive NL neighbor search algorithm in a nonlinear latent space. This approach will be applied to the NL smoothed-particle hydrodynamics framework, where case studies will include natural convection instabilities. |
Tuesday, November 22, 2022 9:44AM - 9:57AM |
U21.00009: Predicting wake-body synchronization using deep learning Amir Chizfahm, Rajeev K Jaiman We present a deep learning-based reduced-order model (DL-ROM) for the stability prediction of unsteady 3D fluid-structure interaction systems. The proposed DL-ROM has the format of a nonlinear state-space model and employs a recurrent neural network with long short-term memory (LSTM). We consider a canonical fluid-structure system of an elastically-mounted sphere coupled with incompressible fluid flow in a state-space format. We develop a nonlinear data-driven coupling for predicting unsteady forces and vortex-induced vibration (VIV) lock-in of the freely vibrating sphere in a transverse direction. We design an input-output relationship as a temporal sequence of force and displacement datasets for a low-dimensional approximation of the fluid-structure system. Based on the prior knowledge of the VIV lock-in process, the input function contains a range of frequencies and amplitudes, which enables an efficient DL-ROM without the need for a massive training dataset for the low-dimensional modeling. Once trained, the network provides a nonlinear mapping of input-output dynamics that can predict the coupled fluid-structure dynamics for a longer horizon via the feedback process. By integrating the LSTM network with the eigensystem realization algorithm (ERA), we construct a data-driven state-space model for the reduced-order stability analysis. We investigate the underlying mechanism and stability characteristics of VIV via an eigenvalue selection process. To understand the frequency lock-in mechanism, we study the eigenvalue trajectories for a range of the reduced oscillation frequencies and the mass ratios. Consistent with the full-order simulations, the frequency lock-in branches are accurately captured by the combined LSTM-ERA procedure. 
The proposed DL-ROM aligns with the development of physics-based digital twin of engineering systems involving fluid-structure interactions. |
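The eigensystem realization algorithm used for the reduced-order stability analysis builds a discrete state-space model from impulse-response (Markov parameter) data via a Hankel-matrix SVD. A minimal single-input, single-output sketch (the damped-oscillator Markov parameters are illustrative, not the VIV data):

```python
import numpy as np

def era(markov, r):
    """Eigensystem Realization Algorithm (SISO sketch): build Hankel
    matrices from Markov parameters h_0, h_1, ..., take the rank-r SVD,
    and return a discrete state-space realization (A, B, C)."""
    m = (len(markov) - 1) // 2
    H0 = np.array([[markov[i + j] for j in range(m)] for i in range(m)])
    H1 = np.array([[markov[i + j + 1] for j in range(m)] for i in range(m)])
    U, s, Vt = np.linalg.svd(H0)
    U, s, Vt = U[:, :r], s[:r], Vt[:r]
    S = np.diag(np.sqrt(s))
    Sinv = np.diag(1.0 / np.sqrt(s))
    A = Sinv @ U.T @ H1 @ Vt.T @ Sinv
    B = (S @ Vt)[:, :1]
    C = (U @ S)[:1, :]
    return A, B, C

# Impulse response of a hypothetical damped oscillator mode
true_A = 0.95 * np.array([[np.cos(0.3), -np.sin(0.3)],
                          [np.sin(0.3),  np.cos(0.3)]])
true_B = np.array([[1.0], [0.0]])
true_C = np.array([[1.0, 0.0]])
h = [(true_C @ np.linalg.matrix_power(true_A, k) @ true_B).item()
     for k in range(21)]
A, B, C = era(h, r=2)
```

The eigenvalues of the realized A recover the oscillator pole pair; in the LSTM-ERA procedure described above, tracking such eigenvalues over reduced frequency and mass ratio is what exposes the lock-in branches.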
Tuesday, November 22, 2022 9:57AM - 10:10AM |
U21.00010: Spatio-Temporal Mode Decomposition for Unsteady Flow with Convolutional Neural Network Yosuke Shimoda, Naoya Fukushima We develop CNN mode decomposition models to perform spatio-temporal mode decompositions of flow around a cylinder at Reynolds number Re_D = U∞D/ν = 100, as an example of an unsteady flow. The input of the model, a time series of the flow field during one cycle of vortex shedding, is mapped into one, two, or three modes in the latent space, and a time series of each decomposed flow field is then reconstructed from each mode. The models with only one, two, or three modes can represent the flow field for one cycle of vortex shedding with high accuracy. For the model with two modes, both decomposed flow fields are unsteady: the first represents large unsteady structures similar to the Karman vortices, while the second includes similar-size, opposite-phase structures in the wake that compensate for the discrepancies. For the three-mode model, the first decomposed field is a steady flow field similar to the time-averaged flow field; large unsteady structures corresponding to the Karman vortices in the wake region appear in the second; and the third consists of unsteady structures of opposite phase, smaller size, and smaller magnitude in the wake. |
Tuesday, November 22, 2022 10:10AM - 10:23AM |
U21.00011: Learning spatiotemporal dynamics in a turbulent flow: A 3D Autoencoded Reservoir Computer approach Nguyen Anh Khoa Doan, Alberto Racca, Luca Magri Deep learning has shown the potential to learn the dynamics of chaotic systems and reduced-order models of turbulence. The scalability of deep learning to three-dimensional turbulent flows, as well as the ability to time-accurately predict their evolution, however, are yet to be investigated. |
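A reservoir computer in its simplest form is an echo state network: a fixed random recurrent network is driven by the signal, and only a linear readout is trained before the model runs autonomously. A minimal sketch forecasting a periodic signal (hyperparameters and the signal are illustrative, not the 3D autoencoded-turbulence application):

```python
import numpy as np

rng = np.random.default_rng(1)

def esn(u, n_res=200, rho=0.9, washout=100, steps=200):
    """Minimal echo state network: fixed random reservoir, trained
    linear readout (ridge regression), then closed-loop forecasting."""
    W_in = rng.uniform(-0.5, 0.5, n_res)
    W = rng.normal(size=(n_res, n_res))
    W *= rho / np.max(np.abs(np.linalg.eigvals(W)))   # set spectral radius
    x = np.zeros(n_res)
    states = []
    for ut in u:                                       # drive the reservoir
        x = np.tanh(W @ x + W_in * ut)
        states.append(x.copy())
    S = np.array(states[washout:-1])                   # states at times t
    y = u[washout + 1:]                                # targets u(t+1)
    W_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(n_res), S.T @ y)
    preds = []
    for _ in range(steps):                             # closed-loop forecast
        preds.append(W_out @ x)
        x = np.tanh(W @ x + W_in * preds[-1])
    return np.array(preds)

t = 0.1 * np.arange(1200)
u = np.sin(t)
preds = esn(u[:1000])    # autonomous 200-step forecast
```

In the autoencoded variant described in the title, the reservoir would evolve the autoencoder's latent variables rather than the raw 3D fields, which is what makes the approach scalable.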
Tuesday, November 22, 2022 10:23AM - 10:36AM Author not Attending |
U21.00012: Extrapolating fluid dynamics with spatiotemporal convolution networks Indu Kant Deo, Rui Gao, Rajeev K Jaiman There is a critical need for efficient and reliable active flow control strategies to reduce drag and noise in various engineering systems. While traditional full-order models based on the Navier-Stokes equations are computationally infeasible for such tasks, advanced model reduction techniques can also be inefficient for active control, especially with strong nonlinearity and convection-dominated phenomena. In our recent works, deep learning-based surrogate models have been shown to be effective, running orders of magnitude faster than full-order simulations. However, outside of the training data, these models encounter significant challenges, limiting their effectiveness in real-world applications. In this study, we aim to improve the extrapolation capability of deep neural networks by modifying the network architecture and integrating physics as an implicit bias. Surrogate models via deep learning generally employ decoupling in spatial and temporal dimensions, which can introduce modelling and approximation errors. To alleviate these errors, we propose a novel technique for learning coupled spatial-temporal correlation using total convolution networks. We compare the proposed technique against a standard encoder-propagator-decoder model and demonstrate a superior extrapolation performance. |
© 2024 American Physical Society