Bulletin of the American Physical Society
72nd Annual Meeting of the APS Division of Fluid Dynamics
Volume 64, Number 13
Saturday–Tuesday, November 23–26, 2019; Seattle, Washington
Session Q17: Focus Session: Recent Advances in Data-driven and Machine Learning Methods for Turbulent Flows V
Chair: Michael Chertkov, University of Arizona; Room: 4c4
Tuesday, November 26, 2019 7:45AM - 7:58AM
Q17.00001: Towards (Machine) Learning of Large Eddy Lagrangian Models (of Turbulence) Michael Chertkov, Mikhail Stepanov. Aimed at developing a physics-informed simulation approach compatible with modern Machine Learning, we focus here on designing, analyzing and experimenting with reduced Lagrangian, multi-particle models capable of faithfully capturing turbulent dynamics within the resolved large-scale portion of the inertial range. We generalize over popular particle-based models, e.g. Molecular Dynamics (MD) and Smoothed Particle Hydrodynamics (SPH), known to generate hydrodynamic behaviour at scales (typically much) larger than the mean inter-particle distance. The generalization, reflected in the introduction of a sufficient number of interpretable parameters, is inclusive: we allow variability in the choice of (a) the MD pair-wise potential, (b) the SPH weighting function, and (c) the thermodynamic relation between pressure and density. To mimic the effects of the under-resolved scales, we include additional regularizations in the model, such as dependence of the potential and of the weighting function on the inter-particle velocity. To generate homogeneous, isotropic and weakly compressible turbulence, we force the Lagrangian system at large scales. To gain a qualitative understanding of what can be achieved at large scales, we test the model in different regimes.
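To fix ideas, a minimal sketch of the kind of parameterized SPH ingredients the abstract describes: a weighting function with a tunable shape exponent, a kernel-weighted density estimate, and a polytropic pressure-density relation. The kernel form, the 1D setting, and all parameter names are illustrative assumptions, not taken from the talk.

```python
import numpy as np

def kernel(r, h, p):
    """Hypothetical parameterized SPH weighting function: a Gaussian-like
    kernel whose shape exponent p is treated as a learnable parameter."""
    return np.exp(-(r / h) ** p)

def sph_density(positions, m, h, p):
    """Estimate density at each particle by summing kernel-weighted
    particle masses (pairwise distances in 1D for simplicity)."""
    r = np.abs(positions[:, None] - positions[None, :])
    return m * kernel(r, h, p).sum(axis=1)

def pressure(rho, k, gamma):
    """Hypothetical polytropic thermodynamic relation p = k * rho**gamma,
    with (k, gamma) among the interpretable parameters to be learned."""
    return k * rho ** gamma
```

In a learning setting, (h, p, k, gamma) would be the interpretable parameters fitted against resolved-scale turbulence statistics.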
Tuesday, November 26, 2019 7:58AM - 8:11AM
Q17.00002: Dynamical System Analysis of Data-Driven Turbulence Models Salar Taghizadeh, Freddie Witherden, Sharath Girimaji. Recent advances in machine learning (ML) algorithms, in conjunction with the availability of direct numerical simulation data, have resulted in a surge of interest in data-driven turbulence modelling. The idea behind such models is to replace one or more components of a classical closure model with an implicit function obtained through a trained ML procedure. In training such a procedure, data are used to infer unknown turbulence constitutive relationships. However, as the properties of these learned functions are often poorly understood, the resulting set of modelled equations can be internally inconsistent. Specifically, attributes such as fixed-point behaviour, realizability, and consistency with other physical and mathematical constraints, such as the rapid distortion limit, may no longer be preserved. In this work, we introduce a novel procedure, based on fixed-point analysis, for ensuring that the overall set of equations in data-driven turbulence modelling forms a self-consistent dynamical system. The procedure will be showcased on a new data-driven Reynolds-averaged Navier-Stokes model which we have developed.
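A minimal sketch of the fixed-point analysis idea, on a scalar toy surrogate for a modelled evolution equation; the right-hand side, its coefficients, and the Newton/stability helpers are illustrative assumptions and do not come from the authors' model.

```python
import numpy as np

def closure_rhs(a, c1=1.8, c2=0.6):
    """Hypothetical scalar surrogate for an anisotropy evolution equation,
    da/dt = -c1*a + c2 - a**3; the coefficients stand in for a learned
    closure and are purely illustrative."""
    return -c1 * a + c2 - a ** 3

def find_fixed_point(f, a0, tol=1e-10, max_iter=200):
    """Newton iteration with a finite-difference derivative to locate a
    fixed point f(a*) = 0 of the modelled dynamical system."""
    a, eps = a0, 1e-7
    for _ in range(max_iter):
        fa = f(a)
        if abs(fa) < tol:
            return a
        dfa = (f(a + eps) - fa) / eps
        a -= fa / dfa
    return a

def is_stable(f, a_star, eps=1e-6):
    """A fixed point is linearly stable when df/da < 0 there."""
    return (f(a_star + eps) - f(a_star - eps)) / (2 * eps) < 0
```

Checking that every learned closure component preserves the expected fixed points and their stability is the self-consistency test the abstract advocates.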
Tuesday, November 26, 2019 8:11AM - 8:24AM
Q17.00003: Uncertainty quantification and optimization of spray break-up submodel using regularized multi-task neural nets. Xiang Gao, Hongyuan Zhang, Krishna Bavandla, Ping Yi, Suo Yang. For high-fidelity simulation of engine combustion, the parameters of a spray atomization break-up submodel need to be optimized for the specified conditions to match non-reactive experiments. The well-accepted KH-RT spray breakup model includes at least six parameters that are not independent of each other and thus cannot be optimized individually. Proper tuning is time-consuming and often requires expert guidance. We propose a regularized multi-task neural-net approach to find the optimal submodel parameters θ at working condition X that minimize the "error" ε. The proposed model includes two neural nets: a predictor and an autoencoder. The predictor is trained to predict the submodel parameters θ for a given X and ε; the optimal θ can then be estimated by setting ε to zero. The autoencoder is used to learn a latent representation of a pair (X, θ), which a regularization term encourages to share the same latent space as the predictor. For an unseen condition X and the estimated optimal ε, we can use the autoencoder to find similar (X, θ) pairs in the training data to interpret the predictor's output and quantify the uncertainty.
Tuesday, November 26, 2019 8:24AM - 8:37AM
Q17.00004: Approximate Bayesian Computation for Parameter Estimation in RANS Turbulence Models Olga Doronina, Scott Murman, Peter Hamlington. Traditionally, turbulence model parameters have been determined either through direct inversion of the model equations given some reference data or using optimization techniques. However, the former approach becomes complicated for models with many different parameters or when the model consists of partial differential equations. Here, we use an Approximate Bayesian Computation (ABC) approach to estimate unknown model parameter values, as well as their uncertainties, in a nonequilibrium anisotropy closure for Reynolds-averaged Navier-Stokes (RANS) simulations. ABC does not require direct computation of a likelihood function, thereby enabling substantially faster estimation of unknown parameters as compared to full Bayesian analyses. Details of the ABC approach are described, including the use of a Markov chain Monte Carlo technique as well as the choice of summary statistics and distance function. Unknown model parameters are estimated based on reference data for different homogeneous nonequilibrium test cases. We also discuss the calibration of turbulence models in inhomogeneous flows using forward simulations of an axisymmetric bump.
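A minimal sketch of the rejection-sampling flavour of ABC (the simplest relative of the MCMC variant the talk uses), on a toy Gaussian problem; the prior, summary statistic, and tolerance are illustrative assumptions, not the authors' choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "reference data": samples whose mean plays the role of the
# summary statistic; the unknown model parameter is that mean.
true_param = 1.5
reference = rng.normal(true_param, 1.0, size=500)
s_obs = reference.mean()

def abc_rejection(n_draws=20000, tol=0.05):
    """Basic ABC rejection sampling: draw parameters from a prior,
    simulate data, and keep draws whose summary statistic lies within
    `tol` of the observed one (distance = absolute difference of means).
    A likelihood function is never evaluated."""
    prior = rng.uniform(-5.0, 5.0, size=n_draws)
    accepted = []
    for theta in prior:
        sim = rng.normal(theta, 1.0, size=500)
        if abs(sim.mean() - s_obs) < tol:
            accepted.append(theta)
    return np.array(accepted)
```

The accepted draws approximate the posterior; their spread is the parameter-uncertainty estimate the abstract refers to.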
(Author Not Attending)
Q17.00005: Machine-learning-assisted investigation of turbulence anisotropy Junyi Mi, Chao Jiang, Shujin Laima, Hui Li. The anisotropy invariant map (AIM) and barycentric map (BMap), which are based on the space spanned by the invariants of the anisotropic stress tensor, have played a crucial role in stress-invariant analyses quantifying turbulence anisotropy. However, these methods cannot offer any scale information about the turbulent structures beyond the degree of axisymmetry and anisotropy. Only a small portion of real-world turbulent flows reaches the edges or vertices of the AIM or BMap, so a deeper understanding of the flow pattern regimes is rarely developed from them. Here we report an unsupervised machine-learning algorithm (a modified K-means method) as a classifier of flow pattern regimes, with the Reynolds stress tensor components (together with the distances from the walls) rather than their secondary quantities as input features. Tests are performed in duct flows. As a result, (i) there is a consistent one-to-one match between the separation boundaries between regimes in physical space and the borders of the stress invariants for the flow itself in the AIM or BMap; and (ii) the size of a flow regime in coordinate space allows the scales of the turbulent structures to be identified. In addition, the effects of the aspect ratio and Reynolds number are examined.
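For reference, a plain K-means clusterer (a stand-in for the abstract's modified variant) in a few lines of numpy; in the talk's setting the feature rows would be Reynolds-stress components plus wall distances, whereas the data below are synthetic.

```python
import numpy as np

def kmeans(features, k, n_iter=100, seed=0):
    """Plain K-means: assign each point to its nearest centroid, then move
    each centroid to the mean of its cluster; repeat until (approximate)
    convergence.  This is the textbook algorithm, not the authors'
    modified version."""
    rng = np.random.default_rng(seed)
    centroids = features[rng.choice(len(features), k, replace=False)]
    for _ in range(n_iter):
        d = np.linalg.norm(features[:, None] - centroids[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = features[labels == j].mean(axis=0)
    return labels, centroids
```

Mapping each grid point's cluster label back to physical space is what produces the regime boundaries the abstract compares with the AIM/BMap borders.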
Tuesday, November 26, 2019 8:50AM - 9:03AM
Q17.00006: Predicting the stochasticity: GAN-VAE based deep learning model for turbulence prediction Changlin Jiang, Amir Barati Farimani. Turbulence is a classical spatio-temporal system with high non-linearity and prohibitively many degrees of freedom, generally considered impossible to solve analytically. Unlike Computational Fluid Dynamics (CFD) techniques, which generally have considerable time and memory requirements, deep learning (DL) models can learn hidden features automatically and hierarchically at multiple levels given a massive dataset. Instead of solely learning a turbulence parameterization, we seek to predict turbulence without any underlying physical rules. Our GAN-VAE based DL model can simulate the ambiguous nature of turbulence by combining two distinct but complementary approaches: (a) a variational auto-encoder (VAE), which explicitly models stochasticity by latent-layer sampling, and (b) a generative adversarial network (GAN), which aims to produce realistic predictions. Our model also takes advantage of recent work in object detection and motion prediction, such as Convolutional Dynamic Neural Advection (CDNA) and convolutional LSTMs. The model predictions are assessed with physics-based metrics such as small-scale statistics and flow morphology. Our results show a promising capability for predicting turbulence that satisfies physical rules with high accuracy.
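The latent-layer sampling that gives the VAE branch its stochasticity is the standard reparameterization trick, sketched below in numpy with a toy stand-in for the decoder; shapes, names, and the decoder itself are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def reparameterize(mu, log_var):
    """VAE reparameterization trick: draw z = mu + sigma * eps with
    eps ~ N(0, I), so the sample is differentiable in (mu, log_var)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def sample_predictions(mu, log_var, decoder, n_samples=100):
    """Draw several latent samples and decode each one, giving an
    ensemble of plausible future states rather than a single
    deterministic prediction.  `decoder` is a hypothetical stand-in
    for the trained network."""
    return np.array([decoder(reparameterize(mu, log_var))
                     for _ in range(n_samples)])
```

The spread of the decoded ensemble is what lets such a model represent the "ambiguous" (stochastic) nature of a turbulent forecast.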
Tuesday, November 26, 2019 9:03AM - 9:16AM
Q17.00007: Using machine learning to predict low-altitude atmospheric optical turbulence. Chris Jellen, John Burkhardt, Charles Nelson, Cody Brownell. Laser-based systems employed within the atmospheric surface layer are subject to degradation due to index-of-refraction fluctuations within the beam path, known as optical turbulence. Laser propagation through optical turbulence results in beam spread, loss of coherence, and reduced irradiance on target. The root causes of optical turbulence are temperature and humidity fluctuations within the atmosphere, and prediction of these parameters from basic meteorological data is required for the effective implementation of any long-range laser system. A field measurement site at the U.S. Naval Academy in Annapolis, Maryland, is used to gather data related to atmospheric effects on optical propagation. A scintillometer measures the refractive index structure constant along a 1-km path over the Severn River adjacent to the Chesapeake Bay. At each end of the scintillometer link, weather data including wind velocity, air and sea surface temperatures, and other quantities are captured. Using these data, machine learning techniques are applied to predict the refractive index structure parameter of the atmosphere and the scintillation on target of the laser.
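A minimal regression baseline for this kind of task, on synthetic data: predict log10 of the refractive index structure constant Cn2 from a few meteorological features via ridge regression. The feature set, coefficients, and data are invented for illustration; the talk's actual dataset and models are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical synthetic stand-in for the scintillometer dataset.
n = 300
dT = rng.normal(0.0, 2.0, n)        # air - sea surface temperature, K
wind = rng.uniform(0.0, 10.0, n)    # wind speed, m/s
rh = rng.uniform(30.0, 90.0, n)     # relative humidity, %
log_cn2 = -15.0 + 0.4 * np.abs(dT) - 0.05 * wind + rng.normal(0.0, 0.1, n)

# Ridge regression as a minimal "machine learning" baseline.
Xd = np.column_stack([np.ones(n), np.abs(dT), wind, rh])
lam = 1e-3
beta = np.linalg.solve(Xd.T @ Xd + lam * np.eye(4), Xd.T @ log_cn2)

def predict_log_cn2(dT_abs, wind_speed, rel_hum):
    """Predict log10(Cn2) for new meteorological inputs."""
    return beta[0] + beta[1] * dT_abs + beta[2] * wind_speed + beta[3] * rel_hum
```

In practice one would swap in richer features and nonlinear models, but the input/output structure (meteorology in, structure parameter out) is the same.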
Tuesday, November 26, 2019 9:16AM - 9:29AM
Q17.00008: Convolutional Neural Networks for the Solution of the 2D Poisson Equation with Arbitrary Dirichlet Boundary Conditions, Mesh Sizes and Grid Spacings Ali Girayhan Ozbay, Panagiotis Tzirakis, Georgios Rizos, Bjorn Schuller, Sylvain Laizet. The Poisson equation is a problem commonly encountered in engineering, including in computational fluid dynamics, where it is needed to compute corrections to the pressure field. However, solving the Poisson equation numerically can be very costly, especially for large-scale problems. We propose a fully convolutional neural network (CNN) architecture to infer the solution of the Poisson equation on a Cartesian grid of arbitrary size and grid spacing, given the right-hand-side term, Dirichlet boundary conditions and grid parameters. Analytical test cases indicate that our CNN architecture is capable of predicting the correct solution of a Poisson equation with mean percentage errors of a few percent and a reduction in wall-clock time compared to traditional solvers based on finite difference methods.
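For context, the kind of traditional finite-difference solver such a CNN would be benchmarked against can be sketched as a Jacobi iteration for the 2D Poisson equation with Dirichlet boundaries; this is a generic textbook baseline, not the specific solver used in the talk.

```python
import numpy as np

def solve_poisson_jacobi(f, bc, h, n_iter=5000):
    """Jacobi iteration for the five-point finite-difference
    discretization of laplacian(u) = f on a uniform grid with spacing h.
    `bc` carries the Dirichlet values on the boundary ring (interior
    entries are the initial guess)."""
    u = bc.copy()
    for _ in range(n_iter):
        # Right-hand side uses only the previous iterate (pure Jacobi:
        # numpy evaluates the full RHS before assigning).
        u[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1] +
                                u[1:-1, 2:] + u[1:-1, :-2] -
                                h * h * f[1:-1, 1:-1])
    return u
```

On an analytical test case such as u = x^2 + y^2 (so f = 4), the iteration converges to the exact discrete solution; the iteration count needed is what makes such solvers costly on large grids, which is the gap the CNN targets.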