Bulletin of the American Physical Society
74th Annual Meeting of the APS Division of Fluid Dynamics
Volume 66, Number 17
Sunday–Tuesday, November 21–23, 2021; Phoenix Convention Center, Phoenix, Arizona
Session M31: Nonlinear Dynamics: Model Reduction & Turbulence IV
Chair: Jian-Xun Wang, University of Notre Dame
Room: North 232 ABC
Monday, November 22, 2021 1:10PM - 1:23PM
M31.00001: Development of closures for coarse-scale modeling of multiphase and free surface flows using machine learning
Cristina P Martin Linares, Tom Bertalan, Eleni Koronaki, Jiacai Lu, Gretar Tryggvason, Ioannis G Kevrekidis
The aim of this work is to learn coarse-grained PDEs, as well as reduced-order models of them, using a data-driven approach. We train a neural network to learn an approximate inertial form: ODEs for the coarse-scale system behavior obtained from fine-scale simulations of a bubbly multiphase flow in a vertical channel. We average in the direction parallel to the overall flow to create a dataset of one-spatial-dimension, time-dependent profiles. We perform Proper Orthogonal Decomposition (POD) to reduce the high-dimensional averaged snapshot data to a truncated set of 10 leading-mode amplitude coefficients, and further reduce these through an autoencoder. We then train a second neural network to approximate the continuous-time dynamics of the system in terms of the amplitudes of the "determining" POD coefficients (after filtering through the autoencoder), and reconstruct the full solution via a third network that approximates the remaining POD coefficients as a function of the determining ones. Finally, we also learn a "grey-box" model for the right-hand-side operator of the averaged PDE that uses the known parts. To evolve the relevant fields, a pair of unknown closure terms, the wall-normal liquid flux and the summed dissipative terms, is learned from coarse evolution data using only spatially local information.
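The POD truncation step described above can be sketched generically as follows; the synthetic snapshot matrix, grid, and mode count below are illustrative assumptions, not the authors' bubbly-flow dataset.

```python
import numpy as np

# Minimal POD sketch on synthetic 1-D profile data: snapshots are stacked
# as columns, the SVD yields the POD modes, and keeping the leading r
# modes gives the truncated amplitude coefficients used as a coarse state.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 64)
t = np.linspace(0, 1, 200)
# synthetic "averaged profile" data: two traveling structures plus noise
X = (np.sin(2 * np.pi * (x[:, None] - t[None, :]))
     + 0.3 * np.sin(6 * np.pi * (x[:, None] + 2 * t[None, :]))
     + 0.01 * rng.standard_normal((64, 200)))

U, s, Vt = np.linalg.svd(X, full_matrices=False)
r = 10                        # number of retained POD modes
modes = U[:, :r]              # spatial POD modes
a = np.diag(s[:r]) @ Vt[:r]   # time-dependent amplitude coefficients
X_r = modes @ a               # rank-r reconstruction

energy = np.cumsum(s**2) / np.sum(s**2)
print(f"energy captured by {r} modes: {energy[r - 1]:.4f}")
```

A second (nonlinear) compression stage, like the abstract's autoencoder, would then act on the rows of `a`.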
Monday, November 22, 2021 1:23PM - 1:36PM
M31.00002: Analyses of the Differentiable-programming Paradigm for Learning Physics-constrained Surrogate Models
Divya Sri Praturi, Arvind T Mohan
Surrogate models of PDEs are an important area of research for applications where rapid, accurate predictions are desired at low computational cost. Deep learning is a popular approach, but deep models typically lack the strong physical constraints that are intrinsic to PDEs. Furthermore, PDEs such as the Navier-Stokes equations often exhibit chaotic, non-local dynamics, which are considerably harder to model than the local dynamics seen in several canonical PDEs. In this work, we present differentiable-programming-based strategies as an alternative for learning such dynamics by training neural networks embedded directly inside the PDE structure. In particular, we represent the nonlinear and non-local terms as neural networks and use backpropagation to train them, while simultaneously solving the surrogate PDE in the forward pass. Finally, we investigate the properties of the learned surrogate PDEs, including their sensitivity to system noise and external forcing and their impact on prediction accuracy, and comment on potential applications.
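The differentiable-solver idea can be illustrated at toy scale: below, a single unknown diffusion coefficient stands in for the abstract's neural-network terms, and a finite-difference gradient taken through the explicit time stepper stands in for backpropagation. The grid, step sizes, and learning rate are all assumptions.

```python
import numpy as np

# Toy "train a term inside the PDE solve" sketch: fit the diffusion
# coefficient nu of u_t = nu * u_xx by gradient descent on a loss that
# compares the forward solve to reference data from the true system.
N, dt, steps = 32, 2e-3, 50
x = np.linspace(0.0, 1.0, N, endpoint=False)
dx = x[1] - x[0]
u0 = np.sin(2 * np.pi * x)

def solve(nu):
    """Explicit-Euler solve of u_t = nu * u_xx with periodic BCs."""
    u = u0.copy()
    for _ in range(steps):
        lap = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
        u = u + dt * nu * lap
    return u

u_true = solve(0.1)                       # "data" from the true system
loss = lambda nu: np.mean((solve(nu) - u_true) ** 2)

nu, lr, eps = 0.05, 0.2, 1e-6
for _ in range(100):                      # gradient descent through the solver
    g = (loss(nu + eps) - loss(nu - eps)) / (2 * eps)
    nu -= lr * g
print(f"recovered nu = {nu:.4f}")
```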
Monday, November 22, 2021 1:36PM - 1:49PM
M31.00003: Frequency-time analysis of turbulent flows using spectral POD
Akhil Nekkanti, Oliver T Schmidt
Intermittency is an inherent feature of turbulent flows that describes the occurrence of flow events at irregular intervals. A common approach for the characterization of intermittent behaviour is frequency-time analysis. The standard tools of frequency-time analysis are wavelet and short-time Fourier transforms, which are applied to 1-D time series and quantify intermittency locally. In this work, we propose a method that identifies the intermittency of spatially coherent flow structures identified by spectral proper orthogonal decomposition (SPOD). The SPOD-based frequency-time analysis provides spectrograms that characterize the temporal evolution of the SPOD modes. This requires the computation of time-continuous expansion coefficients, which can in principle be obtained from a SPOD with a sliding window. This approach, however, is computationally intractable even for moderately-sized data. To mitigate this limitation, we propose an alternative strategy based on convolution in the time domain. We demonstrate this approach on large-eddy simulation data of a turbulent jet. The SPOD-based frequency-time analysis reveals that the intermittent occurrence of large-scale coherent structures is directly associated with high-energy events.
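The convolution strategy for time-continuous expansion coefficients can be sketched in one dimension. Here a window-weighted complex exponential stands in for an SPOD mode, and the signal, window length, and frequency are illustrative assumptions; convolving the data with this kernel is equivalent to a sliding-window transform without recomputing a decomposition per window.

```python
import numpy as np

# Sketch: time-continuous coefficient a(t) at frequency f0 obtained by
# convolving the signal with a window-weighted kernel; |a(t)| localizes
# an intermittent event in time, as in a spectrogram.
fs, T = 256.0, 8.0
t = np.arange(0, T, 1 / fs)
f0 = 10.0
# intermittent event: a 10 Hz wave packet centered at t = 5 s
q = np.exp(-((t - 5.0) / 0.4) ** 2) * np.cos(2 * np.pi * f0 * t)

nwin = 128
win = np.hanning(nwin)
kernel = win * np.exp(-2j * np.pi * f0 * np.arange(nwin) / fs)
a = np.convolve(q, kernel[::-1].conj(), mode="same")   # a(t) at f0
t_peak = t[np.argmax(np.abs(a))]
print(f"peak of |a(t)| at t = {t_peak:.2f} s")
```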
Monday, November 22, 2021 1:49PM - 2:02PM
M31.00004: Data-driven coarse graining of many-body systems
Zachary G Nicolaou, Matthew Kafker, Steven L Brunton, J. Nathan Kutz
First-principles derivations of governing equations for many-body systems have traditionally been based on systematic coarse-graining procedures, but classical approaches rely on heuristic assumptions and become intractable for non-ideal systems. Recently, system identification has received renewed interest as machine learning has revolutionized data-driven discovery. Modern algorithms leverage sparsity and physical constraints to discover dynamical governing equations directly from trajectory data. These methods offer a powerful new approach to study the emergence of macroscopic behavior from microscopic physics. We apply system identification algorithms to discover coarse-grained dynamics governing data derived from molecular dynamics simulations. We focus on systems of spherical particles with hard and soft interaction potentials, which exhibit a myriad of collective behaviors including gaseous, liquid, crystalline, glassy, and jammed phases. Our results shed light on the emergence of universal macroscopic dynamics and may aid in the study of intractable disordered systems.
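The sparse-regression machinery behind such system identification can be sketched with sequentially thresholded least squares (STLSQ, as in SINDy) on a toy system; the oscillator, library, and threshold below are illustrative choices, not the molecular-dynamics setting of the abstract.

```python
import numpy as np

# STLSQ sketch: regress time derivatives onto a polynomial library and
# iteratively zero out small coefficients, refitting the active terms.
t = np.linspace(0, 10, 500)
x, y = np.cos(t), np.sin(t)                       # trajectory of x' = -y, y' = x
dX = np.column_stack([-np.sin(t), np.cos(t)])     # exact derivatives

# candidate library: [1, x, y, x^2, x*y, y^2]
Theta = np.column_stack([np.ones_like(t), x, y, x**2, x * y, y**2])

Xi = np.linalg.lstsq(Theta, dX, rcond=None)[0]
for _ in range(10):                               # STLSQ iterations
    small = np.abs(Xi) < 0.1
    Xi[small] = 0.0
    for k in range(dX.shape[1]):                  # refit the active terms
        big = ~small[:, k]
        if big.any():
            Xi[big, k] = np.linalg.lstsq(Theta[:, big], dX[:, k], rcond=None)[0]
print(np.round(Xi, 3))
```

The recovered `Xi` has exactly two nonzero entries, reproducing x' = -y and y' = x.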
Monday, November 22, 2021 2:02PM - 2:15PM
M31.00005: Optimizing oblique projections for nonlinear systems using trajectories
Samuel E Otto, Alberto Padovan, Clarence W Rowley
Reduced-order modeling techniques, including balanced truncation and H2-optimal model reduction, exploit the structure of linear dynamical systems to produce models that accurately capture the dynamics. For nonlinear systems operating far away from equilibria, on the other hand, current approaches seek low-dimensional representations of the state that often neglect low-energy features that have high dynamical significance. For instance, low-energy features are known to play an important role in fluid dynamics, where they can be a driving mechanism for shear-layer instabilities. Neglecting these features leads to models with poor predictive accuracy despite being able to accurately encode and decode states. In order to improve predictive accuracy, we propose to optimize the reduced-order model to fit a collection of coarsely sampled trajectories from the original system. In particular, we optimize over the product of two Grassmann manifolds defining Petrov-Galerkin projections of the full-order governing equations. We compare our approach with existing methods such as proper orthogonal decomposition and balanced truncation-based Petrov-Galerkin projection, and it demonstrates significantly improved accuracy both on a nonlinear toy model and on an incompressible (nonlinear) axisymmetric jet flow with 69,000 states.
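The Petrov-Galerkin (oblique) projection at the heart of this approach can be illustrated on a small linear system; the matrix below and the eigenvector-based choice of trial and test bases are illustrative stand-ins for the trajectory-based optimization over Grassmann manifolds.

```python
import numpy as np

# Petrov-Galerkin sketch for x' = A x: with trial basis V and test basis
# W, the reduced operator is Ar = (W^T V)^{-1} W^T A V. Choosing W from
# the left eigenvectors (an oblique projection, W != V) recovers the
# slow eigenvalue exactly despite the fast coupled mode.
A = np.array([[-1.0,   5.0],
              [ 0.0, -100.0]])          # slow mode coupled to a fast one
evals, R = np.linalg.eig(A)
L = np.linalg.inv(R).T                  # left eigenvectors (columns)
i = np.argmax(evals.real)               # index of the slow eigenvalue
V = R[:, [i]].real                      # trial subspace
W = L[:, [i]].real                      # test subspace (oblique, not V)
Ar = np.linalg.solve(W.T @ V, W.T @ A @ V)
print(f"reduced eigenvalue: {Ar[0, 0]:.3f}  (slow eigenvalue of A: {evals[i].real:.3f})")
```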
Monday, November 22, 2021 2:15PM - 2:28PM
M31.00006: Neural Implicit Flow: A mesh-agnostic representation paradigm for spatio-temporal fields
Shaowu Pan, Steven L Brunton, Nathan Kutz
Fluid dynamics exhibits complex, multi-scale spatial structure, chaotic dynamics in time, and bifurcations in the relevant parameters. Among these challenges, spatial complexity is the major barrier for modeling and control of fluid dynamics, which motivates the need for dimensionality reduction. Existing paradigms, such as proper orthogonal decomposition and convolutional autoencoders, struggle to accurately and efficiently represent flow structures for problems requiring variable geometry, non-uniform grid resolution (e.g., wall-bounded flows, or flow phenomena induced by small geometric features), adaptive mesh refinement, or parameter-dependent meshes. To resolve these difficulties, we propose Neural Implicit Flow (NIF) as a general framework that enables a compact and flexible dimension reduction of large-scale, parametric, spatio-temporal data into mesh-agnostic, fixed-length representations. This work complements existing meshless methods, e.g., physics-informed neural networks, and we focus specifically on obtaining reduced coordinates where modeling and control tasks may be performed more efficiently. We apply our mesh-agnostic approach to several fluid flows, including flow past a cylinder, sea surface temperature data, and 3D homogeneous isotropic turbulence. In these examples, we demonstrate the utility of NIF for parametric surrogate modeling, efficient differential queries in space, learning nonlinear manifolds, and the interpretable low-rank decomposition of fluid flow data.
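The mesh-agnostic, coordinate-based idea can be sketched with a simple stand-in: a Fourier-feature linear model (not NIF's neural network) maps a spatial coordinate directly to the field value, so the fitted representation can be queried on any grid. The target field and feature set below are assumptions.

```python
import numpy as np

# Coordinate-based representation sketch: fit on scattered, non-uniform
# sample points, then query on an unrelated uniform grid. No mesh is
# stored; the model is a function of the coordinate alone.
rng = np.random.default_rng(1)
x_train = rng.uniform(0, 1, 200)                 # scattered, non-uniform samples
u_train = np.sin(2 * np.pi * x_train) + 0.5 * np.sin(6 * np.pi * x_train)

freqs = 2 * np.pi * np.arange(1, 11)             # Fourier features, k = 1..10
feats = lambda x: np.concatenate(
    [np.sin(np.outer(x, freqs)), np.cos(np.outer(x, freqs))], axis=1)
w, *_ = np.linalg.lstsq(feats(x_train), u_train, rcond=None)

x_query = np.linspace(0, 1, 57)                  # query on an unrelated grid
u_pred = feats(x_query) @ w
u_exact = np.sin(2 * np.pi * x_query) + 0.5 * np.sin(6 * np.pi * x_query)
err = np.max(np.abs(u_pred - u_exact))
print(f"max query error at off-grid points: {err:.2e}")
```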
Monday, November 22, 2021 2:28PM - 2:41PM
M31.00007: Physics-guided machine learning for surrogate modeling in fluid mechanics
Suraj A Pawar, Omer San, Adil Rasheed
Recently, computational modeling has shifted towards the use of statistical inference, deep learning, and other data-driven modeling frameworks. Although this shift holds promise for many applications, like design optimization and real-time control, by lowering the computational burden, training deep learning models requires a huge amount of data. Such big data is not always available for scientific problems and leads to poorly generalizable data-driven models. This gap can be bridged by leveraging information from physics-based simplified approximations. In particular, we combine the information from simplified analytical models with the noisy data obtained either from experiments or from computational fluid dynamics simulations (high-fidelity models) through a neural network. We illustrate the proposed physics-guided machine learning framework on different test cases, such as boundary layer flow reconstruction, airfoil force prediction, and projection-based reduced-order modeling. This multi-fidelity information fusion framework produces physically consistent models that attempt to achieve better generalizability than models obtained purely from data. This work builds a bridge between simplified physics-based theories and the data-driven modeling paradigm, and paves the way for hybrid physics and machine learning modeling approaches for next-generation digital twin technologies.
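A toy sketch of the multi-fidelity fusion idea: a simplified model's prediction is used as a feature in a learned correction, here a small least-squares regression rather than the abstract's neural network. The lift-curve setup below is entirely hypothetical.

```python
import numpy as np

# Hypothetical airfoil-force example: a cheap linear lift model (lo-fi)
# is fused with noisy "high-fidelity" data by regressing the data on
# features that include the physics-based prediction.
rng = np.random.default_rng(2)
aoa = np.linspace(0, 12, 20)                     # angle of attack (deg), assumed
cl_true = 0.11 * aoa - 0.002 * aoa**2            # "high-fidelity" lift coefficient
cl_lofi = 0.10 * aoa                             # simplified linear model
data = cl_true + 0.005 * rng.standard_normal(aoa.size)   # noisy measurements

# fuse: regress the noisy data on [1, lo-fi prediction, aoa^2]
Phi = np.column_stack([np.ones_like(aoa), cl_lofi, aoa**2])
w, *_ = np.linalg.lstsq(Phi, data, rcond=None)
cl_fused = Phi @ w
err_fused = np.max(np.abs(cl_fused - cl_true))
err_lofi = np.max(np.abs(cl_lofi - cl_true))
print(f"lo-fi max error {err_lofi:.3f} -> fused max error {err_fused:.3f}")
```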
Monday, November 22, 2021 2:41PM - 2:54PM
M31.00008: Data-driven estimation of inertial manifold dimension for chaotic Kolmogorov flow and time evolution on the manifold
Carlos E Perez De Jesus, Michael D Graham
Model reduction techniques have previously been applied to evolve the Navier-Stokes equations in time; however, finding the minimal dimension needed to correctly capture the key dynamics is not a trivial task. To estimate this dimension, we trained an undercomplete autoencoder on weakly chaotic vorticity data (32x32 grid) from Kolmogorov flow simulations, tracking the reconstruction error as a function of dimension. We also trained a discrete time stepper that evolves the reduced-order model with a nonlinear dense neural network. The trajectory travels in the vicinity of relative periodic orbits (RPOs), followed by sporadic bursting events. At a dimension of five (as opposed to the full state dimension of 1024), the input power-dissipation probability density function is well approximated; the Fourier coefficient evolution shows that the trajectory correctly captures the heteroclinic connections (bursts) between the different RPOs, and the prediction and the true data track each other for approximately a Lyapunov time. In the autoencoder, we also account for the group symmetries and find further improvement in the reconstruction error.
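The dimension-sweep diagnostic can be illustrated with a linear stand-in (PCA/SVD instead of the undercomplete autoencoder): reconstruction error drops sharply once the latent dimension reaches the data's intrinsic dimension. The synthetic dataset below is an assumption, not the Kolmogorov-flow vorticity data.

```python
import numpy as np

# Dimension sweep: data with intrinsic dimension 5 embedded linearly in
# 64 dimensions; the best rank-d reconstruction error collapses at d = 5.
rng = np.random.default_rng(3)
z = rng.standard_normal((500, 5))               # latent samples, dimension 5
M = rng.standard_normal((5, 64))
X = z @ M                                       # embedded in 64 dimensions
X -= X.mean(axis=0)

U, s, Vt = np.linalg.svd(X, full_matrices=False)
errs = []
for d in range(1, 9):
    Xd = U[:, :d] @ np.diag(s[:d]) @ Vt[:d]     # best rank-d reconstruction
    errs.append(np.linalg.norm(X - Xd) / np.linalg.norm(X))
print(np.round(errs, 4))
```

For curved (nonlinear) manifolds, the autoencoder version of this sweep can collapse at a lower dimension than any linear method.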
Monday, November 22, 2021 2:54PM - 3:07PM
M31.00009: Promoting global stability in data-driven models of quadratic nonlinear dynamics
Alan Kaptanoglu, Jared Callaham, Christopher J Hansen, Aleksandr Aravkin, Steven L Brunton
Modeling realistic fluid and plasma flows is computationally intensive, motivating the use of reduced-order models for a variety of scientific and engineering tasks. However, it is challenging to characterize, much less guarantee, the global stability (i.e., long-time boundedness) of these models. The seminal work of Schlegel and Noack (2015) provided a theorem outlining necessary and sufficient conditions to ensure global stability in systems with energy-preserving, quadratic nonlinearities, with the goal of evaluating the stability of projection-based models. In this work, we incorporate this theorem into modern data-driven models obtained via machine learning. First, we propose that this theorem should be a standard diagnostic for the stability of projection-based and data-driven models, examining the conditions under which it holds. Second, we illustrate how to modify the objective function in machine learning algorithms to promote globally stable models, with implications for the modeling of fluid and plasma flows. Specifically, we introduce a modified "trapping SINDy" algorithm based on the sparse identification of nonlinear dynamics (SINDy) method. This method enables the identification of models that, by construction, only produce bounded trajectories. The effectiveness and accuracy of this approach are demonstrated on a broad set of examples of varying model complexity and physical origin, including the vortex shedding in the wake of a circular cylinder.
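The energy-preserving-nonlinearity hypothesis underlying such stability theorems can be checked numerically on a familiar quadratic system; the Lorenz equations below are used purely as an illustration, not as one of the authors' examples.

```python
import numpy as np

# Diagnostic sketch: for the Lorenz system the quadratic terms Q(x, x)
# satisfy x . Q(x, x) = 0, i.e. they contribute nothing to the growth of
# the energy |x|^2 / 2, so boundedness is governed by the linear part.
rng = np.random.default_rng(4)

def quadratic_part(x, y, z):
    """Quadratic terms of the Lorenz right-hand side."""
    return np.array([0.0, -x * z, x * y])

# at random states, the quadratic terms are orthogonal to the state
worst = 0.0
for _ in range(100):
    s = rng.standard_normal(3)
    worst = max(worst, abs(s @ quadratic_part(*s)))
print(f"max |x . Q(x,x)| over samples: {worst:.2e}")
```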
Monday, November 22, 2021 3:07PM - 3:20PM
M31.00010: Modeling chaotic spatiotemporal dynamics with a minimal representation using Neural ODEs
Alec Linot, Michael D Graham
Solutions to dissipative partial differential equations that exhibit chaotic dynamics often evolve to attractors that exist on finite-dimensional manifolds. We describe a data-driven reduced order modelling (ROM) method to find the coordinates on this manifold and an ordinary differential equation (ODE) in these coordinates. This ROM is useful because it is data-driven, it is computationally less expensive than the full system, and it provides coordinates which may be physically meaningful. We find the manifold coordinates by reducing the system dimension via an undercomplete autoencoder – a neural network (NN) that reduces then expands dimension. By varying the dimension, we get a minimal representation of the state. Then, in the manifold coordinate system, we train a Neural ODE – a NN that approximates an ODE. Learning an ODE, instead of a discrete time map, allows us to evolve trajectories arbitrarily far forward and allows for training on unevenly and/or widely spaced data in time. We test on the Kuramoto-Sivashinsky equation for domain sizes that exhibit spatiotemporal chaos. These ROMs generate accurate short- and long-time statistics with data separated by up to 0.7 Lyapunov times. We also study the effect of reducing the dimension below the expected manifold dimension.
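The "learn an ODE, not a discrete map" point can be sketched on a linear toy problem: the right-hand side is fit from samples at unevenly spaced times (something a fixed-step discrete time map cannot use directly) and then integrated arbitrarily far with RK4. The system, sampling, and least-squares fit below are illustrative assumptions standing in for the Neural ODE.

```python
import numpy as np

# Fit dx/dt = A x from derivative estimates at uneven sample times, then
# integrate the learned ODE far beyond the training window with RK4.
rng = np.random.default_rng(5)
A_true = np.array([[0.0, 1.0], [-1.0, 0.0]])         # harmonic oscillator

t = np.sort(rng.uniform(0, 6, 300))                  # uneven sample times
X = np.column_stack([np.cos(t), -np.sin(t)])         # trajectory samples
dX = X @ A_true.T                                    # derivative "measurements"

A_fit = np.linalg.lstsq(X, dX, rcond=None)[0].T      # fit dx/dt = A x

def rk4(A, x, dt, n):
    f = lambda u: A @ u
    for _ in range(n):
        k1 = f(x); k2 = f(x + dt / 2 * k1)
        k3 = f(x + dt / 2 * k2); k4 = f(x + dt * k3)
        x = x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

x_end = rk4(A_fit, np.array([1.0, 0.0]), 0.01, 2000)  # integrate to t = 20
x_exact = np.array([np.cos(20.0), -np.sin(20.0)])
err = np.linalg.norm(x_end - x_exact)
print(f"state error at t = 20: {err:.2e}")
```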