Bulletin of the American Physical Society
72nd Annual Meeting of the APS Division of Fluid Dynamics
Volume 64, Number 13
Saturday–Tuesday, November 23–26, 2019; Seattle, Washington
Session L10: Nonlinear Dynamics: Model Reduction II
Chair: Nathan Kutz, University of Washington | Room: 3A
Monday, November 25, 2019, 1:45PM–1:58PM
L10.00001: A Bayesian Reinterpretation of Dynamical System Identification by Sparse Regression Methods
Robert K Niven, Laurent Cordier, Markus Abel, Markus Quade, Ali Mohammad-Djafari
Recently, many researchers have developed sparse regression methods for the identification of a dynamical system from its time-series data. We demonstrate that these methods fall within the framework of Bayesian inverse methods. Indeed, the Bayesian maximum a posteriori (MAP) method, using Gaussian likelihood and prior functions, is equivalent to Tikhonov regularization based on Euclidean norms. This insight provides a Bayesian rationale for the choice of residual and regularization terms for any problem, drawn respectively from the Bayesian likelihood and prior distributions. It also provides access to the full Bayesian inversion apparatus, including estimation of uncertainties in the inferred parameters and the model, explicit calculation of the optimal regularization parameter, and the ranking of competing models using Bayes factors. In addition, advanced Bayesian methods are available to explore the inferred probability distribution of the model, should this be desired. We demonstrate these points by analysis of several dynamical systems using standard and Bayesian sparse regression methods. We also discuss the estimation of intermediate parameters and their handling within a Bayesian framework.
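The stated equivalence between Gaussian maximum a posteriori estimation and Tikhonov regularization can be checked numerically. The sketch below is not the authors' code; the design matrix, noise level, and prior variance are illustrative assumptions. It shows that the ridge solution and the Gaussian MAP estimate coincide when the regularization parameter equals the noise-to-prior variance ratio:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear regression problem: y = Theta @ xi + noise
Theta = rng.normal(size=(50, 5))           # library / design matrix
xi_true = np.array([1.0, 0.0, -2.0, 0.0, 0.5])
y = Theta @ xi_true + 0.1 * rng.normal(size=50)

sigma2 = 0.1 ** 2    # Gaussian likelihood (noise) variance
tau2 = 1.0           # Gaussian prior variance on the coefficients
lam = sigma2 / tau2  # equivalent Tikhonov regularization parameter

# Tikhonov (ridge) solution: argmin ||y - Theta xi||^2 + lam ||xi||^2
I = np.eye(Theta.shape[1])
xi_ridge = np.linalg.solve(Theta.T @ Theta + lam * I, Theta.T @ y)

# Bayesian MAP: posterior mode under likelihood N(Theta xi, sigma2 I)
# and prior N(0, tau2 I) -- the same linear system, up to a scaling
xi_map = np.linalg.solve(Theta.T @ Theta / sigma2 + I / tau2,
                         Theta.T @ y / sigma2)

print(np.allclose(xi_ridge, xi_map))  # True: the two estimates coincide
```

Multiplying the MAP normal equations through by sigma2 reproduces the ridge system exactly, which is the equivalence the abstract refers to.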
Monday, November 25, 2019, 1:58PM–2:11PM
L10.00002: Model Reduction via Time-Continuous Least-Squares Residual Minimization
Eric Parish
A time-continuous residual minimization approach for reduced-order models of dynamical systems is presented. The proposed approach, referred to as Time-Continuous Least-Squares Residual Minimization (TC-LSRM), sequentially minimizes the time-continuous full-order model residual within a low-dimensional trial space over a series of time slabs. The stationarity conditions for the time-continuous minimization problems are obtained by deriving the associated Euler-Lagrange equations. Both direct (discretize, then minimize) and indirect (minimize, then discretize) solution techniques are explored. The proposed approach displays commonalities with optimal control problems and can be viewed as a generalization of the popular Least-Squares Petrov-Galerkin (LSPG) method. By formulating the residual minimization problem at the time-continuous level, the TC-LSRM approach overcomes the time-step sensitivity and time-scheme dependence to which LSPG is subject. Numerical experiments demonstrate that the proposed approach can lead to more accurate and physically relevant solutions than existing model reduction approaches.
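As a rough illustration of time-continuous residual minimization (a simplified sketch, not the TC-LSRM algorithm itself, and without its time-slab structure): for a linear full-order model x' = Ax and an orthonormal trial basis V, minimizing the instantaneous residual ||V a' − A V a||² over the reduced velocity a' gives the reduced operator VᵀAV. The test problem below is hypothetical and chosen so that the trial subspace is invariant, making the reduced solution exact:

```python
import numpy as np

# Full-order linear model x' = A x, block-diagonal so that the span of the
# first two coordinates is an invariant subspace (a made-up test problem)
A = np.zeros((4, 4))
A[:2, :2] = [[0.0, 1.0], [-1.0, -0.1]]   # lightly damped oscillator block
A[2:, 2:] = [[-5.0, 0.0], [0.0, -8.0]]   # fast, decaying modes

V = np.eye(4)[:, :2]   # orthonormal trial basis (columns span the subspace)

# Time-continuous residual minimization: choosing the reduced velocity a'(t)
# to minimize ||V a' - A V a||^2 yields a' = V^T A V a when V^T V = I
Ar = V.T @ A @ V

def rk4(M, z0, dt, nsteps):
    """Integrate z' = M z with the classical fourth-order Runge-Kutta scheme."""
    z = z0.copy()
    for _ in range(nsteps):
        k1 = M @ z
        k2 = M @ (z + 0.5 * dt * k1)
        k3 = M @ (z + 0.5 * dt * k2)
        k4 = M @ (z + dt * k3)
        z = z + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return z

x0 = np.array([1.0, 0.0, 0.0, 0.0])        # initial condition in the subspace
x_fom = rk4(A, x0, 1e-3, 2000)             # full-order solution at t = 2
x_rom = V @ rk4(Ar, V.T @ x0, 1e-3, 2000)  # reduced solution, lifted back
print(np.linalg.norm(x_fom - x_rom))       # ~0: the ROM is exact here
```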
Monday, November 25, 2019, 2:11PM–2:24PM
L10.00003: LSTM-based nonintrusive ROM of convective flows
Shady Ahmed, Sk. Mashfiqur Rahman, Omer San, Adil Rasheed
A feasible digital twin of any complex system requires simulations that are computationally efficient, accurate, and possible even without a complete mathematical description of the driving physics. Conventional projection-based reduced-order modeling (ROM) techniques can satisfy the first two requirements but usually fail at the third. In the present study, we aim to address all three requirements within a nonintrusive ROM framework. Proper orthogonal decomposition (POD) is well known for its optimality in representing complex systems. However, its intrinsically global nature often causes a deformation of the generated bases, especially in convective flows. Building on dimensionality reduction by POD, we introduce a long short-term memory (LSTM) neural network architecture along with a principal interval decomposition (PID) framework to account for localized modal deformation. We describe the concept of a buffer zone, in which a reconstruction step is performed at the interface between any two consecutive partitions. We test our framework on several convection-dominated, unsteady flow problems.
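The POD step this framework builds on can be sketched as a singular value decomposition of a snapshot matrix. The convecting-wave data below are synthetic (not from the paper) and chosen to show how a traveling structure spreads its energy over several global standing modes, the behavior the abstract addresses:

```python
import numpy as np

nx, nt = 128, 200
x = np.linspace(0, 2 * np.pi, nx, endpoint=False)
t = np.linspace(0, 1, nt)
c = 2 * np.pi   # hypothetical convection speed

# Snapshot matrix (columns are snapshots in time) of a convecting wave;
# sin(x - ct) = sin(x)cos(ct) - cos(x)sin(ct) splits into standing modes
S = np.array([np.sin(x - c * tk) + 0.3 * np.sin(3 * (x - c * tk))
              for tk in t]).T

# POD = SVD of the snapshot matrix; singular values rank modal energy
U, sv, Vt = np.linalg.svd(S, full_matrices=False)
energy = np.cumsum(sv ** 2) / np.sum(sv ** 2)

# Rank-r reconstruction from the leading POD modes
r = 4
S_r = U[:, :r] @ np.diag(sv[:r]) @ Vt[:r, :]
err = np.linalg.norm(S - S_r) / np.linalg.norm(S)
print(energy[:r], err)  # four modes capture this two-harmonic traveling wave
```

Each traveling harmonic costs two global modes here; richer convective dynamics need many more, which motivates the localized (PID-based) bases described above.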
Monday, November 25, 2019, 2:24PM–2:37PM
L10.00004: Nonlinear dimensionality reduction and prediction of chaotic spatiotemporal dynamics in translation-symmetric systems via deep learning
Alec Linot, Michael D. Graham
Many flow geometries, including pipe, channel, and boundary-layer flows, have an unbounded or spatially periodic direction in which the governing equations have continuous translation symmetry. As a model for such systems, we consider the Kuramoto-Sivashinsky equation (KSE) in a periodic domain for a parameter regime with chaotic dynamics. We describe a method to map the dynamics of the KSE onto a translationally invariant low-dimensional manifold and evolve them forward in time using neural networks (NNs). Invariant dimensionality reduction is achieved by first applying the method of slices to phase-align the spatial structures at each time; the aligned structures are then input into an undercomplete autoencoder that maps the original dynamics onto a lower-dimensional manifold where the long-time dynamics live. The spatial structure on this manifold, along with the spatial phase, is integrated forward in time using a NN. This approach significantly outperforms linear methods, such as POD, for drastic dimensionality reduction. Furthermore, evolving the nonlinear dynamics on the manifold with this NN architecture allows us to capture statistics of the chaotic attractor, whereas linear methods, such as dynamic mode decomposition (DMD), fail to capture the nonlinear dynamics.
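The phase-alignment step (in the spirit of the method of slices) can be sketched with a first-Fourier-mode phase fix: each snapshot is shifted so that the phase of its lowest wavenumber is zero, so translated copies of the same structure collapse onto one template. The pattern below is a made-up band-limited profile, not data from the talk:

```python
import numpy as np

nx = 256
x = np.linspace(0, 2 * np.pi, nx, endpoint=False)
pattern = np.sin(x) + 0.5 * np.cos(2 * x)   # hypothetical spatial structure

def phase_align(u):
    """Shift u so the phase of its first Fourier mode is zero."""
    uh = np.fft.fft(u)
    phi = np.angle(uh[1])                        # spatial phase of mode k = 1
    k = np.fft.fftfreq(len(u), d=1.0 / len(u))   # integer wavenumbers
    # translating by phi multiplies mode k by exp(-i k phi)
    return np.real(np.fft.ifft(uh * np.exp(-1j * k * phi))), phi

u1 = pattern
u2 = np.roll(pattern, 37)   # the same structure, translated on the grid

a1, _ = phase_align(u1)
a2, _ = phase_align(u2)
print(np.max(np.abs(a1 - a2)))  # ~0: translation removed before the autoencoder
```

With the phase factored out, a dimensionality-reduction stage only has to represent the shape of the structure, not every translated copy of it.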
Monday, November 25, 2019, 2:37PM–2:50PM
L10.00005: Koopman operator approximations for PDEs using deep learning
Craig Gin, Bethany Lusch, Steven Brunton, Nathan Kutz
Koopman operator theory allows any autonomous nonlinear dynamical system to be transformed into a linear system. Because of the linearity of the Koopman operator, the dynamics can be represented using traditional methods such as eigenfunction expansions. The ability to transform a nonlinear dynamical system into a linear one is therefore a powerful tool for fluid flow problems and other physical systems. However, finding a transformation that linearizes a general nonlinear system is difficult. Dynamic mode decomposition (DMD), introduced in the fluid mechanics community, is one approach for approximating the Koopman operator. We present an approach that uses deep learning to approximate the Koopman operator in the context of partial differential equations. Our method is data-driven and therefore does not require knowledge of the governing equations. As a prototypical example, we demonstrate the method on Burgers' equation and show the importance of choosing the right neural network architecture in order to obtain a coordinate transformation that linearizes the dynamics.
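A minimal version of the DMD approximation mentioned above fits a best-fit linear operator to snapshot pairs. The test map below is an invented example (a damped rotation), not from the talk, used only to show that DMD recovers the spectrum of the underlying operator from data:

```python
import numpy as np

# Known linear dynamics x_{k+1} = A x_k; DMD should recover eig(A)
theta = 0.3
A = 0.95 * np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

# Collect a trajectory and split it into snapshot pairs (X1, X2)
m = 50
X = np.empty((2, m))
X[:, 0] = [1.0, 0.0]
for k in range(1, m):
    X[:, k] = A @ X[:, k - 1]
X1, X2 = X[:, :-1], X[:, 1:]

# DMD: best-fit linear operator mapping X1 to X2, via the pseudoinverse
A_dmd = X2 @ np.linalg.pinv(X1)
eigs = np.sort_complex(np.linalg.eigvals(A_dmd))
print(eigs)  # ~ 0.95 * exp(+/- 0.3i), the true eigenvalues
```

For genuinely nonlinear dynamics no such exact linear fit exists in the original coordinates, which is what motivates learning a linearizing coordinate transformation instead.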
Monday, November 25, 2019, 2:50PM–3:03PM
L10.00006: A Koopman-based framework for forecasting the spatiotemporal evolution of chaotic dynamics
Mohammad Amin Khodkar, Pedram Hassanzadeh, Athanasios Antoulas
We demonstrate the strong performance of a data-driven method in the spatiotemporal prediction of high-dimensional, chaotic dynamics. The method is based on a finite-dimensional approximation of the Koopman operator in which the observables are vector-valued and delay-embedded, and the nonlinearities are treated as external forcings. The predictive capabilities of the method are demonstrated on well-known prototypes of chaos, such as the Kuramoto-Sivashinsky equation and the Lorenz-96 system, for which the data-driven predictions remain accurate for several Lyapunov timescales. Similar performance is observed for two-dimensional lid-driven cavity flows at high Reynolds numbers.
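The value of delay embedding can be illustrated on a toy problem (not from the talk): a scalar observable of a hidden two-state rotation is not linearly predictable on its own, but stacking one time delay restores a linear system whose eigenvalues match the hidden dynamics:

```python
import numpy as np

# Scalar observable of an unseen two-state rotation: y_k = cos(k * theta)
theta = 0.5
m = 100
y = np.cos(theta * np.arange(m))

# Delay-embed the scalar series into vectors z_k = [y_k, y_{k+1}]
Z = np.vstack([y[:-1], y[1:]])    # Hankel-style embedding, shape (2, m-1)
Z1, Z2 = Z[:, :-1], Z[:, 1:]

# Best-fit linear map on the delay coordinates (the DMD step)
A_delay = Z2 @ np.linalg.pinv(Z1)
eigs = np.linalg.eigvals(A_delay)
print(eigs)  # ~ exp(+/- 0.5i): both hidden eigenvalues recovered
```

This is the delay-embedded, vector-valued-observable construction in miniature; the method described above augments it with external forcing terms for the nonlinearities.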
Monday, November 25, 2019, 3:03PM–3:16PM
L10.00007: Predicting long-term dynamics of chaotic systems with hybrid machine learning
Sreetej Lakkam, Balamurali B T, Jurriaan J J Gillissen, Roland Bouffanais
Forecasting chaotic systems relies on estimating the long-term dynamics of the system in order to make reasonable predictions. Our work uses an efficient hybrid machine learning technique to improve the estimation of long-term statistics while remaining resilient to short-term anomalies in determining future states of the system. The technique combines a long short-term memory (LSTM) architecture with ensemble modeling: the LSTM extracts the long-term dependencies in the chaotic system data, while ensembling perturbs and combines multiple LSTMs to obtain better predictive performance than any constituent LSTM alone. We demonstrate the forecasting capability of this framework on time-series data from a Lorenz system and subsequently apply it to a planar homogeneous turbulence flow field. Using visual verification and power-spectrum analysis, we conclude that our model can learn and predict the long-term dynamics even when short-term forecasting fails owing to the inherent unpredictability of chaotic systems. This approach is completely data-driven, relying on the LSTM's capability to capture the long-term statistics of a chaotic system, and the results have far-reaching implications for the use of machine learning in fluid mechanics.