Bulletin of the American Physical Society
71st Annual Meeting of the APS Division of Fluid Dynamics
Volume 63, Number 13
Sunday–Tuesday, November 18–20, 2018; Atlanta, Georgia
Session Q01: Nonlinear Dynamics: Model Reduction II
Chair: Pedram Hassanzadeh, Rice University
Room: Georgia World Congress Center B201
Tuesday, November 20, 2018 12:50PM - 1:03PM
Q01.00001: Data-driven reduced modeling of turbulent convection using DMD-enhanced Fluctuation-Dissipation Theorem. Pedram Hassanzadeh, Mohammad Amin Khodkar. A data-driven, model-free framework is introduced for calculating Reduced-Order Models (ROMs) capable of accurately predicting the time-mean responses to external forcings, or the forcings needed for specified responses (e.g., for control), in fully turbulent flows. The framework is based on applying the Fluctuation-Dissipation Theorem (FDT) in the space of a limited number of modes obtained from Dynamic Mode Decomposition (DMD). Using the DMD modes as the basis functions, rather than the commonly used Proper Orthogonal Decomposition (POD) modes, resolves a previously identified problem in applying FDT to high-dimensional, non-normal turbulent flows. Employing this DMD-enhanced FDT method (FDT-DMD), a linear ROM, with the horizontally averaged temperature as the state vector, is calculated for a 3D Rayleigh-Bénard convection system at a Rayleigh number of 10^6 using data obtained from Direct Numerical Simulation (DNS). The calculated ROM performs well in various tests for this turbulent flow, suggesting FDT-DMD as a promising method for developing ROMs for high-dimensional, turbulent systems.
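For readers unfamiliar with the DMD building block used here, a minimal sketch of exact DMD on toy snapshot data (the standard algorithm, not the authors' FDT implementation; the toy data and all names are illustrative):

```python
import numpy as np

def dmd(X, Xp, r):
    # Exact DMD: fit a rank-r linear map with Xp ~= A X from snapshot pairs
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    Atilde = U.conj().T @ Xp @ Vh.conj().T @ np.diag(1.0 / s)  # A projected onto POD modes of X
    evals, W = np.linalg.eig(Atilde)
    modes = Xp @ Vh.conj().T @ np.diag(1.0 / s) @ W            # exact DMD modes
    return evals, modes

# Toy snapshots: two spatial structures, each decaying at its own rate
t = np.linspace(0.0, 10.0, 201)                  # dt = 0.05
x = np.linspace(0.0, 1.0, 64)[:, None]
data = np.sin(2*np.pi*x) * np.exp(-0.1*t) + np.cos(2*np.pi*x) * np.exp(-0.5*t)
evals, modes = dmd(data[:, :-1], data[:, 1:], r=2)
# Continuous-time decay rates recovered from the discrete eigenvalues
rates = np.sort(np.log(evals).real / 0.05)
```

On this clean rank-2 dataset the recovered rates match the generating decay rates -0.5 and -0.1 to machine precision.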
Tuesday, November 20, 2018 1:03PM - 1:16PM
Q01.00002: Why is the cylinder flow a terrible test case for deep learning? Jean-Christophe Loiseau. Recently, deep learning has attracted a lot of attention, and its successes are regularly reported by both the scientific and mainstream media. Within the fluid dynamics community, the two-dimensional cylinder flow is often used as a test case to illustrate the performance of different network architectures for tasks such as reduced-order modeling, flow field estimation, or nonlinear control. Despite its wide use as a representative test case for "complex" nonlinear dynamics, this flow is inherently low-dimensional and can be captured by a fairly simple model. The reduced-order model proposed herein not only mimics the nonlinear dynamics of the system but also accounts for the mode deformation that occurs as the flow evolves from the base flow to the mean flow. Based on ideas from dynamical systems and differential geometry, the simplicity and accuracy of our data-driven model provide hints about which features recently proposed neural network models may have actually learned. Its simplicity moreover strongly underlines that the aforementioned neural networks are likely to be overly complex for the configuration considered and that, as a consequence, the performance reported for the cylinder flow may not be indicative of what would be obtained for more realistic configurations.
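One well-known "fairly simple model" of this type is the generalized mean-field model of Noack et al. (2003), in which a shift mode tracks the deformation from base flow to mean flow. A sketch with illustrative parameter values (mu, omega, lam are assumptions, not values from the abstract):

```python
import numpy as np

def rhs(a, mu=0.1, omega=1.0, lam=10.0):
    # Mean-field model: (a1, a2) oscillate, a3 is the shift-mode amplitude
    a1, a2, a3 = a
    return np.array([mu*a1 - omega*a2 - a1*a3,
                     omega*a1 + mu*a2 - a2*a3,
                     -lam*(a3 - a1**2 - a2**2)])

# Forward-Euler integration from a small perturbation of the base flow
a = np.array([1e-3, 0.0, 0.0])
dt = 1e-3
for _ in range(200_000):          # integrate to t = 200
    a = a + dt * rhs(a)

r = np.hypot(a[0], a[1])          # oscillation amplitude saturates near sqrt(mu)
```

The trajectory spirals out from the unstable base flow and saturates on a limit cycle of radius near sqrt(mu), with the shift mode a3 relaxing to r^2, mimicking the transient the abstract describes.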
Tuesday, November 20, 2018 1:16PM - 1:29PM
Q01.00003: Deep learning of dynamics and signal-noise decomposition with time-stepping constraints. Samuel Rudy, Nathan Kutz, Steven L Brunton. A critical challenge in the data-driven modeling of dynamical systems is producing methods robust to measurement error, particularly when data is limited. Many leading methods either rely on denoising prior to learning or on access to large volumes of data to average over the effect of noise. We propose a novel paradigm for data-driven modeling that simultaneously learns the dynamics and estimates the measurement noise. Our method explicitly accounts for measurement error in the map between observations, treating both the measurement error and the dynamics as unknowns to be identified, rather than assuming idealized noiseless trajectories. We model the unknown vector field using a neural network, imposing a Runge-Kutta integrator structure to isolate this vector field, even when the data has a non-uniform time-step, thus constraining and focusing the modeling effort. We demonstrate the ability of this framework to form predictive models on a variety of test problems, including low-dimensional flow around a cylinder, and discuss some challenges with using neural networks to interpolate governing equations.
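The Runge-Kutta structure referred to here can be sketched as follows; a known harmonic-oscillator vector field stands in for the trained neural network, and all names are illustrative. In the paper's setting, f would be the network and the training loss would compare rk4_step(f, y_n, dt_n) against the next observation, with dt_n free to vary between samples.

```python
import numpy as np

def rk4_step(f, y, dt):
    # One classical fourth-order Runge-Kutta step; the learned vector
    # field f is evaluated four times, so the time-stepper structure
    # constrains f rather than the discrete flow map itself
    k1 = f(y)
    k2 = f(y + 0.5*dt*k1)
    k3 = f(y + 0.5*dt*k2)
    k4 = f(y + dt*k3)
    return y + (dt/6.0)*(k1 + 2*k2 + 2*k3 + k4)

# Stand-in vector field: a harmonic oscillator
f = lambda y: np.array([y[1], -y[0]])
y = np.array([1.0, 0.0])
dt = 2*np.pi/1000
for _ in range(1000):             # integrate one full period
    y = rk4_step(f, y, dt)
```

After one full period the state returns to its initial condition to high accuracy, reflecting RK4's fourth-order convergence.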
Tuesday, November 20, 2018 1:29PM - 1:42PM
Q01.00004: Sparse identification of nonlinear dynamics for model predictive control in the low-data limit. Eurika Kaiser, J. Nathan Kutz, Steven L Brunton. This work extends the recent sparse identification of nonlinear dynamics (SINDY) modeling procedure to include the effects of actuation and demonstrates the ability of these models to enhance the performance of model predictive control (MPC), based on limited, noisy data. SINDY models are parsimonious, identifying the fewest terms in the model needed to explain the data, making them interpretable and generalizable. Many leading methods in machine learning, such as neural networks, require large volumes of training data, may not be interpretable, do not easily include known constraints and symmetries, and may not generalize beyond the attractor where models are trained. In contrast, we demonstrate that the resulting SINDY-MPC framework has higher performance, requires significantly less data, and is more computationally efficient and robust to noise, making it viable for online training and execution in response to rapid system changes. SINDY-MPC also shows improved performance over linear data-driven models, although linear models may provide a stopgap until enough data is available for SINDY.
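The core of SINDY is sequentially thresholded least squares over a library of candidate terms. A minimal sketch on a 1D toy system (the library, thresholding parameter, and toy dynamics are illustrative, not from the abstract):

```python
import numpy as np

def stlsq(Theta, dXdt, lam=0.1, iters=10):
    # Sequentially thresholded least squares: fit, zero small
    # coefficients, and refit on the surviving library terms
    Xi = np.linalg.lstsq(Theta, dXdt, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(Xi) < lam
        Xi[small] = 0.0
        for j in range(dXdt.shape[1]):
            big = ~small[:, j]
            if big.any():
                Xi[big, j] = np.linalg.lstsq(Theta[:, big], dXdt[:, j], rcond=None)[0]
    return Xi

np.random.seed(0)
x = np.random.uniform(-2, 2, size=400)
dxdt = 0.5*x - x**3                                   # "measured" rhs of dx/dt = 0.5 x - x^3
Theta = np.column_stack([np.ones_like(x), x, x**2, x**3])  # library: 1, x, x^2, x^3
Xi = stlsq(Theta, dxdt[:, None])
```

On clean data the recovered coefficient vector is exactly sparse, keeping only the x and x^3 terms, which is what makes the models interpretable.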
Tuesday, November 20, 2018 1:42PM - 1:55PM
Q01.00005: Recovering Quasi-2D Navier-Stokes Model Parameters via Weak Formulation. Patrick Reinbold, Roman O Grigoriev. Quantitative prediction of thin-layer fluid flow based on a modified two-dimensional Navier-Stokes model demands that the model and its parameters accurately describe the system. Algorithms exist that can infer model parameters from trajectory observations alone, but differential model reconstruction is greatly hindered by noisy data; noise makes numerical derivatives rather inaccurate. Unobservable quantities (e.g. pressure) complicate things further, and although higher-order PDEs in which they are not present (e.g. the vorticity transport equation) can be used instead, this only exacerbates the first problem. Thus, we developed a method that considers a weak formulation instead of the model PDE directly, which allows us to estimate parameters despite some quantities being noisy and others being unobservable altogether. We confirm the quality of the method by accurately finding the parameters for simulation data obtained from a quasi-2D Navier-Stokes model, with added Gaussian noise. We also apply the method to spatiotemporally-chaotic experimental data and predict new parameter values.
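The key idea, integrating the PDE against a smooth test function so that derivatives land on the test function instead of the noisy data, can be sketched on the 1D heat equation (a toy stand-in for the quasi-2D Navier-Stokes model; the grid, test function, and parameter value are illustrative):

```python
import numpy as np

nu_true, T = 0.5, 1.0
x = np.linspace(0.0, 2*np.pi, 201)
t = np.linspace(0.0, T, 201)
X, Tt = np.meshgrid(x, t, indexing='ij')
u = np.exp(-nu_true*Tt) * np.sin(X)      # exact solution of u_t = nu u_xx

# Test function w = sin(x) sin(pi t / T) vanishes on the whole space-time
# boundary, so integration by parts moves every derivative onto w:
#   -int u w_t dx dt = nu * int u w_xx dx dt
w_t  = np.sin(X) * (np.pi/T) * np.cos(np.pi*Tt/T)
w_xx = -np.sin(X) * np.sin(np.pi*Tt/T)

def integrate(f):
    # 2D trapezoidal rule on the tensor-product grid
    wx = np.ones(len(x)); wx[[0, -1]] = 0.5
    wt = np.ones(len(t)); wt[[0, -1]] = 0.5
    return (x[1]-x[0]) * (t[1]-t[0]) * (wx[:, None] * wt * f).sum()

nu_est = -integrate(u*w_t) / integrate(u*w_xx)
```

Only values of u are ever needed; no derivative of the data is computed, which is what makes the approach tolerant of noise.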
Tuesday, November 20, 2018 1:55PM - 2:08PM
Q01.00006: Koopman mode expansions between two invariant sets. Jacob Page, Rich Kerswell. Koopman mode expansions have been touted as a very useful way to represent nonlinear dynamics, given the ability of dynamic mode decomposition to extract the modes from data. Here we explore how such an approach works for heteroclinic dynamics between two equilibria using a 1D nonlinear system (a pitchfork bifurcation), which allows explicit calculations. Well-defined Koopman mode expansions are found to exist around either equilibrium, but each fails at the same intermediate point between them, indicating that there is no uniformly valid Koopman expansion for the dynamics. Results will be presented to indicate that this lack of uniformity carries over to the 3D Navier-Stokes equations as well.
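As an explicit illustration of the kind of calculation the abstract describes (the symmetric pitchfork normal form, not necessarily the authors' exact system), a principal Koopman eigenfunction can be written down in closed form:

```latex
\text{For } \dot{x} = x - x^3 \text{ (equilibria at } x=0, \pm 1\text{), the function}
\qquad \varphi(x) = \frac{x}{\sqrt{1-x^2}}
\qquad\text{satisfies}\qquad
\dot{\varphi} = \varphi'(x)\,\dot{x}
= \frac{(1-x^2)+x^2}{(1-x^2)^{3/2}}\; x(1-x^2)
= \varphi ,
```

so it is a Koopman eigenfunction with eigenvalue 1 about the origin. Its monomial (Taylor) expansion converges only for |x| < 1, so an expansion anchored at one invariant set breaks down before the dynamics reach the other, the non-uniformity the abstract highlights.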
Tuesday, November 20, 2018 2:08PM - 2:21PM
Q01.00007: Abstract Withdrawn
Tuesday, November 20, 2018 2:21PM - 2:34PM
Q01.00008: Control-oriented model learning with a recurrent neural network. Michele Alessandro Bucci, Onofrio Semeraro, Alexandre Allauzen, Laurent Cordier, Guillaume Wisniewski, Lionel Mathelin. In recent years, model learning has been boosted by increased computational power and the availability of large amounts of high-quality data. We here focus on approximating the dynamics of complex systems using Recurrent Neural Networks (RNNs) for control purposes. RNNs are able to accurately approximate the attractor of chaotic systems solely by observing their time evolution [Pathak et al. 2018] and to predict the state of the system over long time horizons [Vlachas et al. 2018]. However, it is crucial to ensure that the learned model generalizes to data unseen in the training set (the overfitting issue). In this work, we consider the Kuramoto-Sivashinsky equation in the chaotic regime and show that it is necessary to train with more than one trajectory emanating from each of the equilibrium solutions of the chaotic attractor to learn a generalizable model; in particular, we combine the Long Short-Term Memory (LSTM) architecture for the time prediction with a convolutional process for the spatial embedding. The quality of the proxy with respect to the actual dynamics will also be discussed in terms of Lyapunov exponents and the distance between trajectories in phase space.
Tuesday, November 20, 2018 2:34PM - 2:47PM
Q01.00009: Nonlinear integro-differential operator regression with neural networks. Ravi G Patel, Olivier Desjardins. While direct numerical simulation (DNS) may provide accurate data for the evolution of a fluid mechanical system, there are still many challenges in synthesizing information obtained from these simulations into reduced-order models. Machine learning has already shown promise as a tool for physical modeling. In this talk we discuss a technique for extracting nonlinear partial integro-differential equations from data using a combination of Fourier spectral methods and neural network regression. Using a database of DNS results for the fractional heat equation, the Burgers' equation, and the Kuramoto-Sivashinsky equation, we demonstrate that this technique is capable of recovering approximate equations. We also show that a subgrid scale model for the Burgers' equation can be recovered using filtered DNS results.
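The Fourier spectral ingredient is straightforward: differential and even fractional operators act as multipliers in wavenumber space. A minimal sketch (the grid, test field, and fractional order are illustrative, not from the talk):

```python
import numpy as np

n, L = 128, 2*np.pi
x = np.arange(n) * L / n
u = np.sin(3*x)
k = 2*np.pi * np.fft.fftfreq(n, d=L/n)                   # angular wavenumbers

# First derivative: multiply by i k in Fourier space
du = np.fft.ifft(1j*k * np.fft.fft(u)).real

# Fractional Laplacian (-d^2/dx^2)^(alpha/2), alpha = 1.5: multiply by |k|^alpha,
# the operator appearing in the fractional heat equation
lap = np.fft.ifft(np.abs(k)**1.5 * np.fft.fft(u)).real
```

For the single mode sin(3x) these return 3 cos(3x) and 3^1.5 sin(3x) to machine precision; in the regression setting such spectrally evaluated operator outputs become inputs to the neural network.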
Tuesday, November 20, 2018 2:47PM - 3:00PM
Q01.00010: Control-Informed Dynamic Mode Decomposition. Michael J Banks, Daniel Joseph Bodony. Dynamic mode decomposition (DMD) is a data-driven system identification and model reduction technique. In some situations, the data used as an input to DMD poorly represents the dynamics of the system, often when the data are used for controller development. This insufficient data set results in a reduced-order model which also poorly captures the desired dynamics. In order to enrich the data set, we apply a control to our system. This control is designed to elicit a more complete and representative response from the data, which can then be used to create a more robust DMD model. We use the adjoint method to create such a control. The control is chosen to minimize a cost functional designed to fit the desired data to the control-informed DMD reconstruction of the data. The method is demonstrated on the Ginzburg-Landau equation.
Tuesday, November 20, 2018 3:00PM - 3:13PM
Q01.00011: Improvements on Extended Kalman Filter Dynamic Mode Decomposition for Noisy Datasets. Taku Nonomura. In the present study, a family of Kalman filter dynamic mode decomposition (KFDMD) methods for system identification is reviewed, and preliminary results on a new formulation of extended Kalman filter dynamic mode decomposition (EKFDMD) are discussed. First, the advantages of KFDMD and the points to be improved are summarized; one of the problems is that it must be used together with batch proper orthogonal decomposition (POD), and because of this it cannot run as a purely online algorithm. To address this problem, EKFDMD is improved here so that it can handle a streaming dataset. In the presentation, the rough idea is presented and preliminary results for EKFDMD are summarized.
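The Kalman filter machinery underlying KFDMD processes one measurement at a time, which is what enables streaming operation. A generic predict/update step on a toy estimation problem (standard filter equations; the toy system and all parameter values are illustrative, not from the abstract):

```python
import numpy as np

def kf_step(xhat, P, z, A, H, Q, R):
    # Predict with the model, then correct with the new measurement z
    xhat = A @ xhat
    P = A @ P @ A.T + Q
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    xhat = xhat + K @ (z - H @ xhat)
    P = (np.eye(len(xhat)) - K @ H) @ P
    return xhat, P

# Toy streaming problem: estimate a constant state from noisy measurements
rng = np.random.default_rng(0)
A = H = np.eye(1)
Q, R = 1e-8*np.eye(1), 0.01*np.eye(1)            # process / measurement noise covariances
xhat, P = np.zeros(1), np.eye(1)
for _ in range(500):
    z = np.array([1.0]) + 0.1*rng.standard_normal(1)
    xhat, P = kf_step(xhat, P, z, A, H, Q, R)
```

Each step uses only the latest measurement, so the estimate improves online without ever storing the data batch; in EKFDMD the state being estimated is the (nonlinearly parameterized) dynamics itself, hence the extended variant.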
Tuesday, November 20, 2018 3:13PM - 3:26PM
Q01.00012: Reduced Order Control using Low-Rank Dynamic Mode Decomposition. Palash Sashittal, Daniel Joseph Bodony. In this work we present a non-intrusive, data-driven method for reduced-order modeling of fluid flows using a low-rank Dynamic Mode Decomposition (lr-DMD). A non-convex matrix optimization problem is formulated, and two methods of solving it are discussed along with variants for high-dimensional flows. lr-DMD is a generalization of Optimal Mode Decomposition (OMD) and Dynamic Mode Decomposition (DMD), and is shown to give lower residual errors in comparison for a given rank of the low-order model. We perform model order reduction on the complex linearized Ginzburg-Landau equation in the globally unstable regime and on unsteady flow over a flat plate at a high angle of attack. A low-dimensional controller is then constructed using the reduced-order model. We compare the performance of controllers constructed using DMD, OMD, and lr-DMD.