Bulletin of the American Physical Society
71st Annual Meeting of the APS Division of Fluid Dynamics
Volume 63, Number 13
Sunday–Tuesday, November 18–20, 2018; Atlanta, Georgia
Session F32: Machine Learning and Data Driven Models I
Chair: Michael Brenner, Harvard University
Room: Georgia World Congress Center B404
Monday, November 19, 2018 8:00AM - 8:13AM
F32.00001: From Deep to Physics-Informed Learning of Turbulence: Diagnostics
Michael Chertkov, Oliver Hennigh, Ryan King, Arvind Mohan
We describe tests that validate the progress made toward acceleration and automation of hydro-codes. We aim to verify whether various statistical properties, constraints, and relations not enforced explicitly within the Deep Learning (DL) training hold. To do this, we compare results extracted from the training data with results extracted from the generated/synthetic data. Through these tests we check physical laws and intuition about turbulence. Three DL schemes, the GAN of [1], LAT-NET of [2], and the LSTM of [3], are compared in the setting of homogeneous, isotropic, stationary turbulence. Even the bare DL solutions, which do not take any physics of turbulence into account explicitly, are impressively good overall at a qualitative description of important features of turbulence. However, we also uncover some significant caveats of the DL approaches and describe the next steps aimed at correcting the respective DL schemes by reinforcing the special features of turbulence that the current schemes fail to extract. [1] http://meetings.aps.org/Meeting/DFD17/Session/A31.8 [2] https://arxiv.org/abs/1705.09036 [3] https://arxiv.org/abs/1804.09269
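One diagnostic of the kind described, comparing a statistical property of the training data with the same property of the synthetic data, is the radially binned kinetic-energy spectrum. A minimal NumPy sketch (the shell-binning convention and normalization here are illustrative choices, not the authors' exact test):

```python
import numpy as np

def energy_spectrum(u, v):
    """Radially binned kinetic-energy spectrum of a 2D periodic velocity field."""
    n = u.shape[0]
    uh = np.fft.fft2(u) / n**2           # normalized Fourier coefficients
    vh = np.fft.fft2(v) / n**2
    e = 0.5 * (np.abs(uh)**2 + np.abs(vh)**2)
    k = np.fft.fftfreq(n, d=1.0 / n)     # integer wavenumbers
    kmag = np.hypot(k[:, None], k[None, :]).ravel()
    edges = np.arange(0.5, n // 2 + 1.0)  # shells centred on k = 1, 2, ...
    spec, _ = np.histogram(kmag, bins=edges, weights=e.ravel())
    return spec                           # spec[i]: energy in shell k = i + 1
```

Comparing the spectrum of training fields with that of generated fields, particularly at high wavenumbers, probes whether a DL model reproduces the small scales, a property that is not enforced by the training loss.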
Monday, November 19, 2018 8:13AM - 8:26AM
F32.00002: Neural Network Powered Adjoint Methods - Gradient Based Shape Optimization with Deep Learning
Dana Lynn Ona Lansigan, Chiyu Max Jiang, Philip S Marcus
Recent work has shown that neural networks, especially Convolutional Neural Networks (CNNs), can serve as powerful surrogate models for physical processes that depend on input shapes. However, two major issues stand in the way of deep-learning-based shape optimization using predictions from these surrogate models: how to represent shape functions to feed to the networks, and how to efficiently and accurately compute gradients of the predicted output quantity of interest with respect to the input shape coordinates for optimization. In this work, we present a pipeline for efficient shape optimization that includes an optimal shape representation based on simplex-mesh Fourier transforms, a CNN-based surrogate model trained for the prediction of physical quantities, and a method for backpropagating gradients into the original shape coordinates. We illustrate the effectiveness of the methodology with a case study of 2D airfoil optimization.
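The gradient step at the heart of such a pipeline can be sketched with a toy stand-in for the CNN surrogate: a small dense network whose backpropagated gradient with respect to the shape parameters drives a descent step. The weights and the 4-parameter shape vector below are hypothetical placeholders, not the talk's model:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)   # toy surrogate weights
w2 = rng.normal(size=8)

def surrogate(x):
    """Scalar quantity of interest (e.g., drag) predicted from 4 shape parameters."""
    return w2 @ np.tanh(W1 @ x + b1)

def grad_surrogate(x):
    """Chain-rule (backpropagated) gradient of the prediction w.r.t. the shape."""
    h = np.tanh(W1 @ x + b1)
    return W1.T @ (w2 * (1.0 - h**2))

x = rng.normal(size=4)            # current shape parameters
x = x - 0.1 * grad_surrogate(x)   # one gradient-descent step on the shape
```

Because the surrogate is differentiable end to end, each optimization step costs one forward and one backward pass instead of an adjoint CFD solve.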
Monday, November 19, 2018 8:26AM - 8:39AM
F32.00003: Data-driven discretization of PDEs
Yohai Bar-Sinai, Stephan Hoyer, Dmitrii Kochkov, Jason Hickey, Michael Phillip Brenner
One of the most generic problems in theoretical physics in general, and in fluid mechanics in particular, is that of coarse graining: how to represent the behavior of a physical theory at long wavelengths and low frequencies by "integrating out" degrees of freedom which change rapidly in time and space. This is crucial for efficient computation of the field equations, as otherwise the number of grid points required to simulate a given system becomes unmanageable. Here we introduce data-driven discretization, a method for learning the effective long-wavelength dynamics from actual solutions to the known underlying equations. We use a neural network to learn a discretization for the true spatial derivatives of partial differential equations (PDEs). We demonstrate that this approach achieves remarkable accuracy, allowing equations to be integrated in time and emergent scaling relations to be extracted in one and two spatial dimensions. Possible applications and generalizations are discussed.
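The idea can be caricatured in a few lines: fit the coefficients of a local stencil to data generated from known solutions, instead of fixing them by Taylor expansion. In the actual method the coefficients are produced by a neural network conditioned on the local solution; the constant least-squares stencil below is a deliberately simplified stand-in:

```python
import numpy as np

rng = np.random.default_rng(1)
n, dx = 64, 1.0 / 64
x = np.arange(n) * dx

# Training data: smooth periodic fields with known exact derivatives.
patches, targets = [], []
for _ in range(200):
    k = rng.integers(1, 5)
    phase = rng.uniform(0.0, 2.0 * np.pi)
    u = np.sin(2 * np.pi * k * x + phase)
    du = 2 * np.pi * k * np.cos(2 * np.pi * k * x + phase)
    for i in range(1, n - 1):
        patches.append(u[i - 1:i + 2])   # 3-point neighbourhood
        targets.append(du[i])            # exact derivative at the centre

# "Learn" the stencil: least-squares fit of constant 3-point coefficients.
coeffs, *_ = np.linalg.lstsq(np.asarray(patches), np.asarray(targets), rcond=None)
```

The fitted coefficients come out antisymmetric, close to the centred difference (-1/(2dx), 0, 1/(2dx)) but slightly sharpened to reduce dispersive error at the wavenumbers present in the data, a miniature version of the accuracy gain the method targets.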
Monday, November 19, 2018 8:39AM - 8:52AM
F32.00004: Surrogate Modeling of High-Order Physics-Based Fluid Modeling Tools
Nicholas Magina, James Tallman, Robert Zacharias
A neural network surrogate modeling methodology was used to reproduce the two-dimensional flowfield distribution over a set of NACA airfoils. Once trained and validated, the surrogate model has the potential to generate subsequent CFD-quality predictions with 5-6 orders of magnitude less computational effort, in terms of CPU-hours, than traditional methods. A suite of 250 RANS-based Computational Fluid Dynamics (CFD) solutions, for varying NACA airfoil shapes and angles of attack, was used as training data for a machine learning algorithm. The resulting tuned surrogate model was validated, and the differences between the CFD and surrogate-model predictions were compared on a node-by-node basis for mean shift and standard deviation. For these validation cases, the averages of these metrics were -7.230e-05 and 1.710e-03, respectively, for the Mach number. The surrogate model was then applied to three specific engineering problem classes of interest: (1) providing an initial guess for accelerated convergence of a CFD model, (2) generating derived quantities (pressure envelopes), and (3) optimization of the airfoil geometry toward an objective function. Conclusions and recommendations are reported as to the appropriateness of using the surrogate model to expedite these problem classes.
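The two validation metrics quoted amount to the mean and standard deviation of the node-by-node error field; a sketch (the sign convention, surrogate minus CFD, is an assumption):

```python
import numpy as np

def node_metrics(cfd, surrogate):
    """Node-by-node validation: mean shift and standard deviation of the error."""
    err = surrogate - cfd
    return float(err.mean()), float(err.std())
```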
Monday, November 19, 2018 8:52AM - 9:05AM
F32.00005: Bridging simulation and deep learning - convolutional neural networks on unstructured grids
Chiyu Max Jiang, Karthik Kashinath, Philip S Marcus, Mr Prabhat
We develop a novel method for efficiently deploying Convolutional Neural Networks (CNNs) on arbitrary unstructured grids. Unstructured grids have been the major workhorse of PDE solvers, while CNNs have been the predominant neural network architecture in deep learning for processing spatial data. Recent work has shown success in utilizing CNN-based deep neural networks for better modeling of physical systems (e.g., turbulence modeling) and for accelerating solutions to PDEs. However, the standard CNN framework operates on regular grids and cannot be easily incorporated into PDE solvers that operate on unstructured grids (e.g., FEM, DG, FV). Natively performing CNN-based deep learning in the unstructured-grid domain allows for smooth integration between physical simulations and deep-learning-based physical models.
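The abstract does not specify the convolution operator, but the flavor of a convolution on an unstructured grid can be sketched as a neighbor-averaging layer over the mesh connectivity. This is a common graph-convolution pattern, offered as an assumption rather than the authors' construction:

```python
import numpy as np

def mesh_conv(x, adjacency, w_self, w_neigh):
    """One convolution-like layer on an unstructured mesh.

    x         : (nodes, features) field values at mesh nodes
    adjacency : (nodes, nodes) 0/1 mesh connectivity
    """
    deg = adjacency.sum(axis=1, keepdims=True)
    neigh_mean = (adjacency @ x) / np.maximum(deg, 1)           # neighbour average
    return np.maximum(0.0, x @ w_self + neigh_mean @ w_neigh)   # ReLU
```

Because the layer sees only the connectivity, its output permutes with the nodes, which is what lets the same weights be reused on any mesh.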
Monday, November 19, 2018 9:05AM - 9:18AM
F32.00006: Physics-Informed Generative Learning to Predict Unresolved Physics in Complex Systems
Jinlong Wu, Yang Zeng, Karthik Kashinath, Adrian Albert, Mr Prabhat, Heng Xiao
Simulating complex physical systems often involves solving partial differential equations (PDEs) with closures due to the presence of multi-scale physics. Although the advancement of high-performance computing has made resolving small-scale physics possible, such simulations are still very expensive. Therefore, reliable and accurate models for the unresolved physics remain an important requirement for many complex systems, e.g., turbulent flows. Recently, machine learning techniques have been explored in many data-driven physical modeling problems, and several researchers have adopted generative adversarial networks (GANs) to generate solutions of PDE-governed complex systems by training on existing simulation results from these PDEs. We present a physics-informed GAN that enforces constraints of both conservation laws and certain statistical properties of the training data. We show that the physics-informed GAN is more robust and better captures high-order statistics. These results suggest that physics-informed GANs may be an attractive alternative to the explicit modeling of closures for unresolved physics, which accounts for a major source of uncertainty when simulating complex systems, e.g., turbulent flows and Earth's climate.
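The abstract does not list the exact constraints enforced, but the simplest conservation-law penalty of this kind, for incompressible flow, is a zero-divergence term added to the generator loss; a sketch on a periodic grid:

```python
import numpy as np

def divergence(u, v, dx=1.0):
    """Central-difference divergence of a 2D velocity field on a periodic grid."""
    du_dx = (np.roll(u, -1, axis=1) - np.roll(u, 1, axis=1)) / (2 * dx)
    dv_dy = (np.roll(v, -1, axis=0) - np.roll(v, 1, axis=0)) / (2 * dx)
    return du_dx + dv_dy

def physics_penalty(u, v, dx=1.0):
    """Soft mass-conservation constraint to add to the generator loss."""
    return float(np.mean(divergence(u, v, dx) ** 2))
```

A generated field derived from a streamfunction incurs zero penalty, while a generic field does not, which is exactly the kind of physical structure the adversarial loss alone does not enforce.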
Monday, November 19, 2018 9:18AM - 9:31AM
F32.00007: A transfer learning approach for data-driven turbulence modeling
Rui Fang, David Sondak, Pavlos Protopapas, Sauro Succi
The Reynolds-Averaged Navier-Stokes (RANS) equations are widely used to predict engineering flow fields, but traditional Reynolds stress closure models lead to only partially reliable predictions. Recently, with continuing advances in high-performance computing and machine learning practices, data-driven turbulence modeling has become possible. In this work, the Reynolds stress anisotropy tensor is learned using a physics-aware machine learning model. The Tensor Basis Neural Network (TBNN), first proposed by Ling et al., is tested on turbulent channel flow at various Reynolds numbers. Numerical experiments demonstrate that the TBNN is fundamentally limited by the mathematical structure of the underlying tensor basis. In spite of this limitation, the neural network attempts to match the provided turbulence data by adjusting its model parameters. With these observations in mind, the TBNN model is trained on turbulent channel flow data at several Reynolds numbers and used to predict the Reynolds stress tensor at a different Reynolds number. We show that adjustments to the neural network architecture via transfer learning techniques improve predictions of the Reynolds stress tensor.
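The tensor-basis structure referred to here is Pope's expansion b = Σ_n g_n T_n, in which the network predicts only the scalar coefficients g_n. A sketch with the first three basis tensors, with hard-coded coefficients standing in for the network outputs:

```python
import numpy as np

def tensor_basis(S, R):
    """First three of Pope's integrity-basis tensors (S symmetric, R antisymmetric)."""
    I = np.eye(3)
    return [S,
            S @ R - R @ S,
            S @ S - np.trace(S @ S) / 3.0 * I]

def tbnn_anisotropy(S, R, g):
    """b = sum_n g_n T_n; here g stands in for the network's scalar outputs."""
    return sum(gn * Tn for gn, Tn in zip(g, tensor_basis(S, R)))
```

Each T_n is symmetric and traceless, so b inherits both properties by construction; it also inherits the limitation noted in the abstract, since no choice of g_n can produce an anisotropy outside the span of the basis.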
Monday, November 19, 2018 9:31AM - 9:44AM
F32.00008: Machine Learning to Improve RANS Turbulent Kinetic Energy Transport Equation
David S Ching, Andrew J Banko, John K Eaton
Conventional Reynolds-Averaged Navier-Stokes (RANS) models are not predictive for 3D separated flows. Inaccuracies in the predicted turbulent kinetic energy are a major source of error in the computed Reynolds stresses. A neural network machine learning approach is used to improve the realizable k-epsilon model by modifying terms in the turbulent kinetic energy transport equation. The network is trained on Large Eddy Simulation (LES) data for a smooth three-dimensional bump flow and is coupled into a RANS solver to continually update the model predictions as the solution converges. Inputs to the model are complex invariant functions of the strain rate, rotation rate, and wall-distance Reynolds number. The machine-learned model is tested on a wall-mounted cube and shows improved turbulent kinetic energy and mean velocities compared to the baseline RANS. Additional comparisons in other separated flows are underway.
Monday, November 19, 2018 9:44AM - 9:57AM
F32.00009: Physics-Informed Machine Learning Approach for Augmenting Turbulence Models: A Comprehensive Framework
Heng Xiao, Jinlong Wu, Jianxun Wang, Eric G Paterson
Turbulence modeling introduces large model-form uncertainties into predictions. Recently, data-driven methods have been proposed as a promising alternative that uses existing databases of experiments or high-fidelity simulations. In this talk, we present a comprehensive framework for augmenting turbulence models with physics-informed machine learning, illustrating a complete workflow from the identification of input features to the final prediction of mean velocities. The learned model satisfies two key requirements in turbulence modeling: Galilean invariance and coordinate rotational invariance. The framework consists of three components: (1) reconstructing Reynolds stress modeling discrepancies from DNS data via machine learning techniques, (2) assessing the prediction confidence a priori based on distance metrics in the mean-flow feature space, and (3) propagating the predicted Reynolds stress field to the mean velocity field using physics-informed stabilization. Several flows with massive separation are investigated to evaluate the performance of the proposed framework. Significant improvements over the baseline RANS simulation are observed for both the Reynolds stress and the mean velocity fields.
Monday, November 19, 2018 9:57AM - 10:10AM
F32.00010: Interpretability of Machine Learning Models for the Reynolds Stress Tensor in Reynolds-Averaged Navier-Stokes Simulations
Andrew J. Banko, David S. Ching, Julia Ling, John K. Eaton
Data-driven approaches to turbulence modeling have grown in popularity because Reynolds-Averaged Navier-Stokes (RANS) models continue to be industrial workhorses and Direct Numerical Simulation databases for complex turbulent flows are becoming available for use as training sets. Applications of modern machine learning architectures have improved predictions of the Reynolds stress anisotropy tensor over standard two-equation RANS models, but suffer from black-box obscurity. Interpretable machine learning predictions are needed to understand the high-dimensional input feature space, advance physical intuition, establish confidence in model generalizability, and develop models which are robust and easy to train. In this work we apply several interpretability methods to the Tensor Basis Neural Network (TBNN) architecture developed by Ling et al., JFM, 2016. The TBNN structure is exploited to understand the physical effect of each term with respect to shifting the state of anisotropy. Sensitivity maps and importance rankings are also obtained for the input features. The methodology is first validated on a network trained to reproduce the k-ω model. Results are then presented for a turbulent square duct flow and the flow over a wall-mounted cube.
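Sensitivity maps like those mentioned can be approximated without access to network internals by perturbing each input feature in turn; a generic finite-difference stand-in (the talk's own method may instead use backpropagated gradients):

```python
import numpy as np

def sensitivity(f, x, eps=1e-6):
    """Central-difference sensitivity of a scalar model output to each input feature."""
    g = np.zeros_like(x, dtype=float)
    for i in range(x.size):
        e = np.zeros_like(x, dtype=float)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2.0 * eps)
    return g

# An importance ranking then follows by ordering features by |sensitivity|:
# ranking = np.argsort(-np.abs(sensitivity(f, x)))
```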