Bulletin of the American Physical Society
72nd Annual Meeting of the APS Division of Fluid Dynamics
Volume 64, Number 13
Saturday–Tuesday, November 23–26, 2019; Seattle, Washington
Session C17: Focus Session: Recent Advances in Data-driven and Machine Learning Methods for Turbulent Flows I
Chair: Karthik Duraisamy, University of Michigan
Room: 4c4
Sunday, November 24, 2019 8:00AM - 8:13AM
C17.00001: Unsteady Flow Field Predictions Using Multi-level Deep Convolutional Autoencoder Networks Jiayang Xu, Karthik Duraisamy A machine learning framework is proposed for unsteady flow field predictions. Three levels of deep neural networks are used, with the goal of predicting the future state of the flow for unseen global parameters. A convolutional autoencoder is used as the top level to encode the high-dimensional data sequence along the spatial dimensions into a sequence of latent variables. A temporal convolutional autoencoder serves as the second level, which further encodes the output sequence from the first level along the temporal dimension and outputs a set of latent variables that fully captures the spatio-temporal evolution of the flow field. A fully connected network is used as the third level to learn the mapping between these latent variables and the global parameters from training data, and to predict them for new parameters. For future-state predictions, the second level uses a temporal convolutional network to predict subsequent steps of the output sequence from the top level. Outputs at the bottom level are decoded to obtain the high-dimensional flow field sequence at unseen global parameters and/or future states.
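The three-level pipeline can be illustrated with a minimal numpy sketch; the linear maps below are untrained stand-ins for the convolutional, temporal-convolutional, and fully connected levels, and all sizes are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 64 spatial points, 20 time steps, 8 spatial latents,
# 4 spatio-temporal latents, 2 global parameters.
n_x, n_t, d_s, d_st, n_p = 64, 20, 8, 4, 2

# Level 1: spatial encoder applied per snapshot (linear stand-in for the
# convolutional autoencoder).
W_s = rng.standard_normal((d_s, n_x)) / np.sqrt(n_x)
def encode_space(U):          # U: (n_t, n_x) -> (n_t, d_s)
    return U @ W_s.T

# Level 2: temporal encoder collapsing the latent sequence (stand-in for
# the temporal convolutional autoencoder).
W_t = rng.standard_normal((d_st, n_t * d_s)) / np.sqrt(n_t * d_s)
def encode_time(Z):           # Z: (n_t, d_s) -> (d_st,)
    return W_t @ Z.ravel()

# Level 3: map global parameters -> spatio-temporal latents (stand-in for
# the fully connected network; in the real framework this map is trained).
W_p = rng.standard_normal((d_st, n_p))
def params_to_latent(mu):     # mu: (n_p,) -> (d_st,)
    return W_p @ mu

U = rng.standard_normal((n_t, n_x))   # one synthetic flow-field sequence
z = encode_time(encode_space(U))
z_pred = params_to_latent(rng.standard_normal(n_p))
```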
Sunday, November 24, 2019 8:13AM - 8:26AM
C17.00002: Physics-Constrained Convolutional LSTM Neural Networks for Generative Modeling of Turbulence Arvind Mohan, Daniel Livescu, Michael Chertkov High-fidelity modeling of turbulence and related physical phenomena is often challenging due to prohibitive computational costs or the lack of accurate theoretical models. In recent years, deep learning approaches have shown much promise in the modeling of complex systems. A major challenge in deep learning for generative modeling of turbulence is the chaotic, high-dimensional, and spatio-temporal nature of the data, which can make the learning process ineffective and/or expensive. Previous work by the authors (Mohan et al., 2018) showed the capability of Convolutional LSTM (ConvLSTM) neural networks in modeling high-fidelity 3D turbulence. ConvLSTM augments the traditional architecture of an LSTM cell with a convolutional layer to learn spatial features in high-dimensional datasets. In this work, we introduce various physical constraints of incompressible turbulent flows into ConvLSTM networks. We demonstrate the efficacy of this approach by learning and predicting physically consistent dynamics of a homogeneous isotropic turbulence DNS dataset. Statistical tests are also performed on the predicted turbulence to assess the effect of the physical constraints on the "learned" physics. Finally, we discuss challenges and opportunities with ConvLSTM when physical constraints are enforced, with additional focus on the computational scaling of this approach to large datasets.
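One such statistical test, comparing the energy spectrum of a predicted field against ground truth, can be sketched in numpy on synthetic 1D fields (illustrative only; the actual work uses 3D DNS data):

```python
import numpy as np

def energy_spectrum(u):
    """1D kinetic-energy spectrum E(k) = 0.5 |u_hat(k)|^2 with u_hat = FFT/N."""
    n = u.size
    u_hat = np.fft.rfft(u) / n
    return 0.5 * np.abs(u_hat) ** 2

rng = np.random.default_rng(1)
x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
u_dns = np.sin(3 * x) + 0.5 * np.sin(7 * x)          # "ground truth" stand-in
u_ml = u_dns + 0.01 * rng.standard_normal(x.size)    # "predicted" field stand-in

E_dns, E_ml = energy_spectrum(u_dns), energy_spectrum(u_ml)
# A simple statistical test: relative spectral error over the resolved modes.
err = np.abs(E_ml[:16] - E_dns[:16]).sum() / E_dns[:16].sum()
```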
Sunday, November 24, 2019 8:26AM - 8:39AM
C17.00003: Physics Informed Learning of Lagrangian Turbulence: Velocity Gradient Tensor over Inertial-Range Geometry Yifeng Tian, Daniel Livescu, Michael Chertkov The phenomenological model of the coarse-grained velocity gradient tensor (VGT), constructed by considering the Lagrangian dynamics of four points, or the tetrad, is extended under the Physics-Informed Machine Learning (PIML) framework. The pressure Hessian contribution is reconstructed from the dynamics of the Lagrangian tetrad, which provides an improved representation of its magnitude and orientation. The unclosed incoherent small-scale fluctuations are modeled using ML techniques trained on Lagrangian data from a high-Reynolds-number direct numerical simulation (DNS). Certain constraints, such as Galilean invariance, rotational invariance, and the zero-pressure-work condition, are enforced to incorporate known physics into the ML model. Then, a comprehensive diagnostic test is performed. Statistics of the flow, as indicated by the joint PDF of the second and third invariants of the VGT at different coarse-grained scales, show good agreement with the ground-truth DNS. Some important features of the structure of the turbulence are correctly reproduced by the model, including the skewed distribution of the velocity gradient, the vorticity-strain-rate alignment, and the vortex-stretching mechanism. The pressure Hessian and small-scale contributions to the Lagrangian dynamics are also well captured.
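The second and third VGT invariants used in the diagnostic can be computed as follows; this is a generic numpy sketch on a synthetic traceless tensor, not the authors' code:

```python
import numpy as np

def vgt_invariants(A):
    """Second and third invariants Q, R of a traceless velocity gradient
    tensor A (incompressible flow: tr A = 0)."""
    Q = -0.5 * np.trace(A @ A)
    R = -np.linalg.det(A)
    return Q, R

# Synthetic traceless VGT (not DNS data): random matrix with trace removed.
rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))
A -= np.trace(A) / 3 * np.eye(3)

Q, R = vgt_invariants(A)
# For traceless A, R also equals -(1/3) tr(A^3); a useful consistency check.
R_alt = -np.trace(A @ A @ A) / 3
```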
Sunday, November 24, 2019 8:39AM - 8:52AM
C17.00004: Prediction of Aerodynamic Flow Fields Using Spectral Convolutions on Graph Networks James Duvall, Karthik Duraisamy, Yaser Afshar In this work, spectral methods for performing localized convolutions on graphs are investigated to predict aerodynamic flow fields given the geometry of the surface and the flow configuration. Previous work has shown that convolutional neural networks (CNNs) can be used for this purpose. CNNs, however, are restricted to Euclidean domains, and their use requires interpolation from the non-regular mesh representations typical of flow solutions to an evenly spaced Cartesian mesh. This represents a loss of information, as flow-solver meshes cluster points near boundary layers and other regions of sharp gradients. We pursue graph convolutional networks (GCNs), which operate on non-Euclidean data represented by a graph. GCNs generalize many of the characteristics associated with CNNs. Localized filtering operations are defined in the graph spectral domain and depend on the graph Laplacian, which in turn depends on the graph structure. Although meshes for different geometries may be spatially distinct, they share spectral characteristics if a binary adjacency matrix is considered. GCNs operating directly on graph representations of spatial flow-solver meshes are shown to predict aerodynamic flow fields on unseen airfoil shapes and operating conditions to a good degree of accuracy.
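A minimal numpy sketch of spectral filtering on a graph, using the combinatorial Laplacian of a toy 4-node graph; the fixed smoothing filter g stands in for the learned filters of a GCN:

```python
import numpy as np

# Tiny undirected graph (binary adjacency), standing in for a mesh graph.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=1))
L = D - A                      # combinatorial graph Laplacian

# Spectral filtering: y = U g(Lambda) U^T x, with eigenpairs of L.
lam, U = np.linalg.eigh(L)
def spectral_filter(x, g):
    return U @ (g(lam) * (U.T @ x))

x = np.array([1.0, -1.0, 1.0, -1.0])   # a "rough" signal on the graph
y = spectral_filter(x, lambda s: np.exp(-s))   # smoothing filter damps it
```

A constant signal lies in the Laplacian's null space, so a filter with g(0) = 1 leaves it unchanged; rough signals, aligned with large eigenvalues, are damped.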
Sunday, November 24, 2019 8:52AM - 9:05AM
C17.00005: Potential of using deep neural networks for turbulent-flow predictions Ricardo Vinuesa, Prem A. Srinivasan, Luca Guastoni, Hossein Azizpour, Philipp Schlatter The capabilities of deep neural networks to predict temporally evolving turbulent flows are evaluated in this work. To this end, we employ the nine-equation shear flow model by Moehlis et al. (New J. Phys. 6, 56, 2004) as a low-order dynamical representation of near-wall turbulence. We thoroughly tested two different neural networks, the multilayer perceptron (MLP) and the long short-term memory (LSTM) network, and determined the best configurations for flow prediction (i.e., number of layers, number of units per layer, dimension of the input, weight-initialization strategy, and activation function). Because of its ability to exploit the sequential nature of the data, the LSTM network outperformed the MLP. In particular, relative errors of 0.45% and 2.49% were obtained in the mean and fluctuating quantities, respectively, with the LSTM. Furthermore, this network also led to an excellent representation of the dynamical behavior of the system, characterized by Poincaré maps and Lyapunov exponents. The present results underpin future applications aimed at developing inflow and off-wall boundary conditions for turbulence simulations, and data-driven flow reconstruction of more complex wall-bounded turbulent flows, including channels and developing boundary layers.
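The LSTM cell at the core of the second network can be sketched in numpy as follows (random, untrained weights; the input size mirrors the nine-dimensional model state, but all other sizes are hypothetical):

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One forward step of a standard LSTM cell (gates stacked as i, f, o, g)."""
    z = W @ x + U @ h + b                 # pre-activations, shape (4*nh,)
    nh = h.size
    i = 1 / (1 + np.exp(-z[:nh]))         # input gate
    f = 1 / (1 + np.exp(-z[nh:2*nh]))     # forget gate
    o = 1 / (1 + np.exp(-z[2*nh:3*nh]))   # output gate
    g = np.tanh(z[3*nh:])                 # candidate cell state
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

# Roll the cell over a sequence of 9-dimensional states, mimicking the
# nine-equation model's mode amplitudes (synthetic inputs, untrained weights).
rng = np.random.default_rng(3)
nx, nh, T = 9, 16, 10
W = rng.standard_normal((4 * nh, nx)) * 0.1
U = rng.standard_normal((4 * nh, nh)) * 0.1
b = np.zeros(4 * nh)
h, c = np.zeros(nh), np.zeros(nh)
for t in range(T):
    h, c = lstm_step(rng.standard_normal(nx), h, c, W, U, b)
```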
Sunday, November 24, 2019 9:05AM - 9:18AM
C17.00006: Turbulence inflow generation using generative adversarial network Junhyuk Kim, Changhoon Lee Using unsupervised learning, we developed an inflow generator that performs better than previously proposed synthetic methods. Direct numerical simulations of turbulent channel flow were carried out at three Reynolds numbers, and temporally successive flow fields in a cross-sectional (y-z) plane were collected. Using the collected data, we trained a novel model, RNN-GAN, which is composed of a recurrent neural network (RNN) and a generative adversarial network (GAN). Here, the RNN captures the temporal variation of the generated flow, while the GAN captures its spatial correlations. Our trained RNN-GAN produces remarkable results. First, the generated flow is qualitatively and statistically accurate compared with the DNS. Second, it is possible to generate the flow not only at the trained Reynolds numbers but also at other Reynolds numbers, although the extrapolated case shows a slight deterioration in statistical accuracy. Third, the generated flow varies stochastically over time, unlike the output of a supervised learning method. Finally, the domain size of the generated flow is extendable. These results indicate that our model provides a good inflow generator for simulations of developing channel flow.
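The kind of statistical comparison involved can be sketched in numpy by computing mean and RMS wall-normal profiles of cross-sectional velocity planes (the "DNS" and "generated" fields below are synthetic stand-ins, not real data):

```python
import numpy as np

def plane_statistics(frames):
    """Mean and RMS wall-normal (y) profiles, averaged over time and the
    spanwise (z) direction, for planes of shape (T, ny, nz)."""
    mean_y = frames.mean(axis=(0, 2))
    rms_y = frames.std(axis=(0, 2))
    return mean_y, rms_y

# Synthetic stand-ins: a parabolic mean profile plus small fluctuations.
rng = np.random.default_rng(7)
y = np.linspace(0, 1, 32)
base = (1 - (2 * y - 1) ** 2)[None, :, None]
dns = base + 0.05 * rng.standard_normal((100, 32, 16))
gen = base + 0.05 * rng.standard_normal((100, 32, 16))

m_dns, r_dns = plane_statistics(dns)
m_gen, r_gen = plane_statistics(gen)
mean_err = np.abs(m_gen - m_dns).max()   # agreement of the mean profiles
```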
Sunday, November 24, 2019 9:18AM - 9:31AM
C17.00007: Physics-informed Spatio-temporal Deep Learning Models Karthik Kashinath, Adrian Albert, Rui Wang, Mustafa Mustafa, Rose Yu Simulating the spatio-temporal evolution of a complex system over a realistic domain is extremely compute-intensive with current PDE solvers. Deep learning (DL) shows great promise for augmenting or replacing compute-intensive parts of computational physics models. However, it remains a grand challenge to incorporate physical principles in a systematic manner into the design, training, and inference of such models. Physics-informed DL aims to infuse principles governing the dynamics of physical systems into DL models, but existing studies are either limited to linear dynamics or to purely spatial constraints of physical systems. We study spatio-temporal modeling of velocity fields for a highly nonlinear turbulent flow using various state-of-the-art physics-informed DL methods. We benchmark these methods on the task of forecasting velocity fields at different future time horizons, given historic data of different lengths. We find that incorporating prior physics knowledge can not only speed up the training process but also improve model performance. Our results show that the Spatiotemporal Generative Network with an autoregressive U-net as the generator performs best across the forecasting horizons considered.
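Forecasting at different horizons with a one-step model reduces to an autoregressive rollout, sketched here in numpy with an exact-advection step standing in for a learned generator (the real benchmark uses trained networks on turbulent fields):

```python
import numpy as np

def rollout(step, u0, horizon):
    """Autoregressively apply a one-step model to forecast `horizon` steps."""
    u, traj = u0, []
    for _ in range(horizon):
        u = step(u)
        traj.append(u)
    return np.array(traj)

# Illustrative one-step model: exact advection of a periodic 1D field by one
# grid point per step (a stand-in for an autoregressive U-net generator).
step = lambda u: np.roll(u, 1)

u0 = np.sin(np.linspace(0, 2 * np.pi, 64, endpoint=False))
for horizon in (1, 5, 20):                 # benchmark at several horizons
    forecast = rollout(step, u0, horizon)
    err = np.abs(forecast[-1] - np.roll(u0, horizon)).max()
```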
Sunday, November 24, 2019 9:31AM - 9:44AM
C17.00008: Neural Network Optimization Under Partial Differential Equation Constraints Karthik Kashinath, Chiyu Jiang, Gavin Eli Jergensen, Prabhat, Philip Marcus Enforcing physical constraints on solutions generated by neural networks (NNs) remains a challenge, yet it is essential to their accuracy and trustworthiness. We propose a novel differentiable spectral projection layer that efficiently enforces spatial PDE constraints using spectral methods, yet is fully differentiable, facilitating end-to-end training. We train a 3D conditional generative adversarial network for turbulence super-resolution while guaranteeing the spatial constraint of zero divergence. Results show that the model produces realistic flow fields with more accurate flow statistics when trained with hard constraints, compared to soft-constrained and unconstrained baselines. We also present a method of applying multiple PDE constraints by modifying the loss function directly. We provide theoretical guarantees of convergence, evaluate the computational complexity of the method, and offer an approximation that trades convergence guarantees for improved speed. Experimentally, we train constrained NNs to learn continuous representations of solutions to the linear Helmholtz equation and the nonlinear steady-state Navier-Stokes equation. We show that the model outputs better respect the underlying physics, but note that the complexity restricts its application to small NNs.
Sunday, November 24, 2019 9:44AM - 9:57AM
C17.00009: Data-driven prediction of a multi-scale Lorenz 96 chaotic system using a hierarchy of deep learning methods: Reservoir computing, ANN, and RNN-LSTM Pedram Hassanzadeh, Ashesh Chattopadhyay, Krishna Palem, Devika Subramanian The performance of three deep learning methods for predicting the short-term evolution and reproducing the long-term statistics of a multi-scale spatio-temporal Lorenz 96 system is examined. The methods are: echo state network (a type of reservoir computing, RC-ESN), deep feed-forward artificial neural network (ANN), and recurrent neural network with long short-term memory (RNN-LSTM). This Lorenz system has three tiers of nonlinearly interacting variables representing slow/large-scale (X), intermediate (Y), and fast/small-scale (Z) processes. For training and testing, only X is available; Y and Z are never known or used. It is shown that RC-ESN substantially outperforms ANN and RNN-LSTM for short-term prediction, e.g., accurately forecasting the chaotic trajectories for hundreds of the numerical solver's time steps, equivalent to several Lyapunov timescales. RNN-LSTM and ANN show some prediction skill as well, with RNN-LSTM outperforming ANN. Furthermore, even after losing track of the true trajectory, data predicted by RC-ESN and RNN-LSTM have probability density functions (PDFs) that closely match the true PDF, even at the tails. The PDF of the ANN data deviates from the true PDF. Implications, caveats, and applications to data-driven surrogate modeling of complex dynamical systems are discussed.
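A minimal echo state network in numpy, trained here on a sine wave with a ridge-regression readout (illustrative only; all hyperparameters are arbitrary and the talk's target system is far more challenging):

```python
import numpy as np

rng = np.random.default_rng(5)
n_res, n_in = 200, 1

# Fixed random reservoir, rescaled to spectral radius 0.9 (echo state property).
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))

def run_reservoir(u_seq):
    """Drive the reservoir with an input sequence; collect the states."""
    r, states = np.zeros(n_res), []
    for u in u_seq:
        r = np.tanh(W @ r + W_in @ np.atleast_1d(u))
        states.append(r.copy())
    return np.array(states)

# One-step-ahead prediction: only the linear readout W_out is trained (ridge).
t = np.linspace(0, 20 * np.pi, 2000)
u = np.sin(t)
R = run_reservoir(u[:-1])                  # states driven by u_0 .. u_{T-2}
y = u[1:]                                  # targets: next input value
beta = 1e-6
W_out = np.linalg.solve(R.T @ R + beta * np.eye(n_res), R.T @ y)
pred = R @ W_out
```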
Sunday, November 24, 2019 9:57AM - 10:10AM
C17.00010: Data-driven super-parametrization using deep learning for large scale turbulent flow in weather/climate modeling Ashesh Chattopadhyay, Adam Subel, Pedram Hassanzadeh, Krishna Palem Some of the physical processes that play key roles in turbulent systems such as weather/climate systems occur at such small spatial and fast time scales that trying to solve for them explicitly can lead to computationally intractable numerical models. These subgrid-scale processes (denoted by the variable Y hereafter) are often parameterized using semi-empirical or physics-based schemes as a function of the large-scale/slow variables (X) that are explicitly solved. Multi-scale numerical models that explicitly solve for X and Y, but at different numerical resolutions, dubbed super-parameterization (SP), have been shown to improve simulations of large-scale turbulence in climate models, but at a large computational cost. More recently, several studies have shown the promise of deep neural networks, trained on data from high-resolution climate models, for data-driven parameterization (DDP) of Y as a function of X. Here, we show that gated recurrent units (GRUs) can be used for data-driven super-parameterization (DDSP): X is solved numerically at low resolution while the evolution of Y at the higher resolution is emulated by a GRU, at a much lower computational cost and with accuracy similar to that of SP, on a multi-scale Lorenz 96 test bed.
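The two-tier Lorenz 96 equations that underlie such test beds can be sketched in numpy (the multi-scale system used here adds a third, faster Z tier with analogous coupling; the parameter values below are conventional choices, not necessarily those used in the talk):

```python
import numpy as np

def l96_tendencies(X, Y, F=10.0, h=1.0, b=10.0, c=10.0):
    """Tendencies of the two-tier Lorenz 96 system: K slow variables X,
    each coupled to J fast variables Y (Y stored flat, grouped by k)."""
    K = X.size
    J = Y.size // K
    Yk = Y.reshape(K, J)
    dX = (np.roll(X, 1) * (np.roll(X, -1) - np.roll(X, 2))
          - X + F - (h * c / b) * Yk.sum(axis=1))
    dY = (-c * b * np.roll(Y, -1) * (np.roll(Y, -2) - np.roll(Y, 1))
          - c * Y + (h * c / b) * np.repeat(X, J))
    return dX, dY

# One explicit Euler step as a stand-in for the solver; SP and DDSP differ
# in how dY is obtained (numerically vs. emulated by a trained GRU).
rng = np.random.default_rng(6)
K, J, dt = 8, 32, 1e-3
X, Y = rng.standard_normal(K), 0.1 * rng.standard_normal(K * J)
dX, dY = l96_tendencies(X, Y)
X, Y = X + dt * dX, Y + dt * dY
```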
© 2020 American Physical Society