Bulletin of the American Physical Society
72nd Annual Meeting of the APS Division of Fluid Dynamics
Volume 64, Number 13
Saturday–Tuesday, November 23–26, 2019; Seattle, Washington
Session Q17: Focus Session: Recent Advances in Data-driven and Machine Learning Methods for Turbulent Flows V
Chair: Michael Chertkov, University of Arizona. Room: 4C4
Tuesday, November 26, 2019, 7:45AM–7:58AM
Q17.00001: Towards (Machine) Learning of Large Eddy Lagrangian Models (of Turbulence) Michael Chertkov, Mikhail Stepanov Aimed at developing a physics-informed simulation approach compatible with modern Machine Learning, we focus here on designing, analyzing, and experimenting with reduced Lagrangian, multi-particle models capable of faithfully capturing turbulent dynamics within the resolved large-scale portion of the inertial range. We generalize over popular particle-based models, e.g. Molecular Dynamics (MD) and Smoothed Particle Hydrodynamics (SPH), known to generate hydrodynamic behaviour at scales (typically much) larger than the mean inter-particle distance. The generalization, reflected in the introduction of a sufficient number of interpretable parameters, is inclusive: we allow variability in the choice of (a) the MD pairwise potential, (b) the SPH weighting function, and (c) the thermodynamic relation between pressure and density. To mimic the effects of the under-resolved scales, we include additional regularizations in the model, such as dependence of the potential and of the weighting function on the inter-particle velocity. In order to generate homogeneous, isotropic, and weakly compressible turbulence, we force the Lagrangian system at large scales. To gain a qualitative understanding of what can be achieved at large scales, we test the model in different regimes. [Preview Abstract]
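To make the SPH ingredient of this abstract concrete: the core of any SPH-type model is a weighting (kernel) function used to estimate fields such as density from particle positions. The sketch below is a generic, minimal 1D illustration of that idea (standard cubic-spline kernel, summation density), not the authors' model; all values are made up.

```python
import numpy as np

def cubic_spline_kernel(r, h):
    # Standard 1D cubic-spline SPH weighting function with smoothing length h,
    # normalized (factor 2/(3h)) so its integral over the line equals one.
    q = np.asarray(r, dtype=float) / h
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return (2.0 / (3.0 * h)) * w

def sph_density(positions, mass, h):
    # Summation density estimate: rho_i = sum_j m * W(|x_i - x_j|, h).
    r = np.abs(positions[:, None] - positions[None, :])
    return mass * cubic_spline_kernel(r, h).sum(axis=1)
```

The abstract's generalization would replace this fixed kernel (and the pairwise potential and equation of state) with parameterized, learnable forms, possibly depending on inter-particle velocity.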
Tuesday, November 26, 2019, 7:58AM–8:11AM
Q17.00002: Dynamical System Analysis of Data-Driven Turbulence Models Salar Taghizadeh, Freddie Witherden, Sharath Girimaji Recent advances in machine learning (ML) algorithms, in conjunction with the availability of direct numerical simulation data, have led to a surge of interest in data-driven turbulence modelling. The idea with such models is to replace one or more components of a classical closure model with an implicit function obtained through a trained ML procedure. In training such a procedure, data are used to infer unknown turbulence constitutive relationships. However, as the properties of these learned functions are often poorly understood, the set of modelled equations can be internally inconsistent. Specifically, attributes such as fixed-point behaviour, realizability, and consistency with other physical and mathematical constraints, such as the rapid-distortion limit, may no longer be preserved. In this work, we introduce a novel procedure – based on fixed-point analysis – for ensuring that the overall set of equations in data-driven turbulence modelling forms a self-consistent dynamical system. The procedure will be showcased on a new data-driven Reynolds-averaged Navier-Stokes model which we have developed. [Preview Abstract]
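As a toy illustration of the fixed-point analysis the abstract invokes (not the authors' procedure), one can locate the equilibria of a scalar model dx/dt = f(x) numerically and classify their stability from the sign of f'(x*); the same idea extends to the coupled modelled equations via Jacobian eigenvalues. The example system x(1 - x) is made up.

```python
def classify_fixed_points(f, seeds, n_iter=50):
    # Find fixed points of the scalar dynamical system dx/dt = f(x) by Newton
    # iteration from several seed values, then classify each one as stable or
    # unstable from the sign of f'(x*) (central finite difference).
    found = []
    for x in map(float, seeds):
        for _ in range(n_iter):
            h = 1e-6
            deriv = (f(x + h) - f(x - h)) / (2.0 * h)
            if abs(deriv) < 1e-12:
                break  # flat spot: Newton step undefined, abandon this seed
            x = x - f(x) / deriv
        if abs(f(x)) < 1e-9 and all(abs(x - r) > 1e-6 for r, _ in found):
            deriv = (f(x + 1e-6) - f(x - 1e-6)) / 2e-6
            found.append((x, "stable" if deriv < 0.0 else "unstable"))
    return sorted(found)
```

For f(x) = x(1 - x) this recovers the unstable equilibrium at 0 and the stable one at 1; a data-driven closure would be checked the same way for spurious or lost equilibria.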
Tuesday, November 26, 2019, 8:11AM–8:24AM
Q17.00003: Uncertainty Quantification and Optimization of a Spray Breakup Sub-model Using Regularized Multi-task Neural Nets Xiang Gao, Hongyuan Zhang, Krishna Bavandla, Ping Yi, Suo Yang For a high-fidelity simulation of engine combustion, the parameters of a spray atomization breakup sub-model need to be optimized for the specified conditions to match the non-reactive experiment. The well-accepted KH-RT spray breakup model includes at least six parameters, which are not independent of each other and thus cannot be optimized independently. Proper tuning is time-consuming and often needs expert guidance. We propose a regularized multi-task neural-net approach to find the optimal sub-model parameters θ at a working condition X that minimize the "error" ε. The proposed model includes two neural nets: a predictor and an autoencoder. The predictor is trained to predict the sub-model parameters θ for a given X and ε; the optimal θ can then be estimated by setting ε to zero. The autoencoder is used to learn a latent representation of a pair (X, θ), which a regularization term encourages to share the same latent space as the predictor. For an unseen condition X and the estimated optimal θ, we can use the autoencoder to find similar (X, θ) pairs in the training data to interpret the predictor's prediction and quantify its uncertainty. [Preview Abstract]
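The key trick here — train a predictor θ(X, ε) and then query it at ε = 0 to read off the optimum — can be shown with a deliberately simplified stand-in: a linear "predictor" fit by least squares on synthetic data, where the assumed optimum θ_opt(X) = 2X + 1 is made up and the authors' networks are replaced by one regression.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up training data: trial parameters scatter around an assumed optimum
# theta_opt(X) = 2X + 1, and eps records the signed deviation from it.
X = rng.uniform(0.0, 1.0, 200)
theta = 2.0 * X + 1.0 + rng.normal(0.0, 0.3, 200)
eps = theta - (2.0 * X + 1.0)

# Linear "predictor": regress theta on (X, eps).
A = np.column_stack([np.ones_like(X), X, eps])
w, *_ = np.linalg.lstsq(A, theta, rcond=None)

def optimal_theta(x):
    # Query the trained predictor at eps = 0 to estimate the optimum.
    return w[0] + w[1] * x
```

In the paper's setting the regression is a neural net, θ and X are multi-dimensional, and a companion autoencoder supplies interpretability and uncertainty via nearest training pairs in the shared latent space.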
Tuesday, November 26, 2019, 8:24AM–8:37AM
Q17.00004: Approximate Bayesian Computation for Parameter Estimation in RANS Turbulence Models Olga Doronina, Scott Murman, Peter Hamlington Traditionally, turbulence model parameters have been determined through either direct inversion of model equations given some reference data or using optimization techniques. However, the former approach becomes complicated for models with many different parameters or when the model consists of partial differential equations. Here, we use an Approximate Bayesian Computation (ABC) approach to estimate unknown model parameter values, as well as their uncertainties, in a non-equilibrium anisotropy closure for Reynolds-averaged Navier-Stokes (RANS) simulations. ABC does not require direct computation of a likelihood function, thereby enabling substantially faster estimation of unknown parameters as compared to full Bayesian analyses. Details of the ABC approach are described, including the use of a Markov chain Monte Carlo technique as well as the choice of summary statistics and distance function. Unknown model parameters are estimated based on reference data for different homogeneous non-equilibrium test cases. We also discuss the calibration of turbulence models in inhomogeneous flows using forward simulations of an axisymmetric bump. [Preview Abstract]
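The likelihood-free idea at the heart of ABC is easiest to see in its simplest (rejection) form — not the MCMC variant the abstract uses: draw parameters from a prior, simulate a summary statistic, and keep draws whose statistic falls within a tolerance of the observed value. The Gaussian-mean example below is made up for illustration.

```python
import numpy as np

def abc_rejection(simulate, observed, prior_draw, n_draws, tol, rng):
    # ABC rejection sampling: keep prior draws whose simulated summary
    # statistic lands within `tol` of the observed one (the distance function
    # here is a simple absolute difference) -- no likelihood is evaluated.
    kept = []
    for _ in range(n_draws):
        theta = prior_draw(rng)
        if abs(simulate(theta, rng) - observed) < tol:
            kept.append(theta)
    return np.array(kept)
```

The accepted draws approximate the posterior; their spread gives the parameter uncertainty. An ABC-MCMC scheme, as in the abstract, replaces the independent prior draws with a Markov chain proposal to concentrate sampling where acceptance is likely.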
(Author Not Attending)

Q17.00005: Machine-Learning-Assisted Investigation of Turbulence Anisotropy Junyi Mi, Chao Jiang, Shujin Laima, Hui Li The anisotropy invariant map (AIM) and the barycentric map (BMap), which are based on the space spanned by the invariants of the anisotropic stress tensor, have played a crucial role in stress-invariant analysis to quantify turbulence anisotropy. However, these methods cannot offer any scale information about the turbulent structures beyond the degree of axisymmetry and anisotropy. Only a small portion of real-world turbulent flows can reach the edges or vertices of the AIM or BMap, so a deeper understanding of the flow-pattern regimes is rarely developed from them. Here we report an unsupervised machine-learning algorithm (a modified K-means method) as a classifier of flow-pattern regimes, with the Reynolds stress tensor, rather than its secondary quantities, as input features (including the distances from the walls). Tests are performed in duct flows. As a result, (i) there is a consistent one-to-one match between the separation boundaries for different regimes in flow space and the border of the stress invariants for the flow itself in the AIM or BMap; (ii) the size of a flow regime in coordinate space leads to identifying the scales of turbulent structures. In addition, the effects of the aspect ratio and Reynolds number are examined. [Preview Abstract]
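For readers unfamiliar with the barycentric map mentioned above, the standard construction (Banerjee et al.'s weights, not this talk's classifier) maps a Reynolds stress tensor to three weights (C1, C2, C3) built from the ordered eigenvalues of the anisotropy tensor; the limiting one-component, two-component, and isotropic states land on the triangle's vertices.

```python
import numpy as np

def barycentric_weights(R):
    # Barycentric-map weights (C1, C2, C3) of a Reynolds stress tensor R,
    # from the descending eigenvalues of the anisotropy tensor
    # b = R/(2k) - I/3, with turbulent kinetic energy k = tr(R)/2.
    k = 0.5 * np.trace(R)
    b = R / (2.0 * k) - np.eye(3) / 3.0
    lam = np.sort(np.linalg.eigvalsh(b))[::-1]
    return np.array([lam[0] - lam[1],          # C1: one-component limit
                     2.0 * (lam[1] - lam[2]),  # C2: two-component limit
                     3.0 * lam[2] + 1.0])      # C3: isotropic limit
```

The abstract's point is that clustering the stress tensors themselves (K-means in physical space) recovers these invariant-space boundaries while adding the spatial extent of each regime.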
Tuesday, November 26, 2019, 8:50AM–9:03AM
Q17.00006: Predicting the Stochasticity: A GAN-VAE Based Deep Learning Model for Turbulence Prediction Changlin Jiang, Amir Barati Farimani Turbulence is a classical spatio-temporal system with high nonlinearity and a prohibitively large number of degrees of freedom, and is generally considered impossible to solve analytically. Unlike computational fluid dynamics (CFD) techniques, which generally have considerable time and memory consumption, deep learning (DL) models can learn hidden features automatically and hierarchically at multiple levels given a massive dataset. Instead of solely learning to do turbulence parameterization, we seek to predict turbulence without any underlying physical rules. Our GAN-VAE based DL model can successfully simulate the ambiguous nature of turbulence by combining two distinct but complementary approaches: (a) a variational autoencoder (VAE), which explicitly models stochasticity by latent-layer sampling, and (b) a generative adversarial network (GAN), which aims to produce realistic predictions. In addition, our model takes advantage of recent work in object detection and motion prediction, such as Convolutional Dynamic Neural Advection (CDNA) and the convolutional LSTM. The model predictions are assessed with physics-based metrics such as small-scale statistics and flow morphology. Our results show a promising capability of predicting turbulence that satisfies physical rules with high accuracy. [Preview Abstract]
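The "latent-layer sampling" that lets a VAE model stochasticity is the reparameterization trick: the encoder outputs a mean and log-variance, and the latent sample is written as a deterministic function of those plus independent noise, so gradients can pass through. A minimal NumPy sketch of just that step (not the authors' network):

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    # VAE reparameterization trick: sample z = mu + sigma * eps with
    # eps ~ N(0, I), so the stochastic latent draw remains a differentiable
    # function of the encoder outputs (mu, log_var).
    eps = rng.standard_normal(np.shape(mu))
    return np.asarray(mu) + np.exp(0.5 * np.asarray(log_var)) * eps
```

In the GAN-VAE combination, samples drawn this way feed the decoder/generator, while the adversarial discriminator pushes each sampled prediction toward realistic flow fields.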
Tuesday, November 26, 2019, 9:03AM–9:16AM
Q17.00007: Using Machine Learning to Predict Low-Altitude Atmospheric Optical Turbulence Chris Jellen, John Burkhardt, Charles Nelson, Cody Brownell Laser-based systems employed within the atmospheric surface layer are subject to degradation due to index-of-refraction fluctuations within the beam path, known as optical turbulence. Laser propagation through optical turbulence results in beam spread, loss of coherence, and reduced irradiance on target. The root causes of optical turbulence are temperature and humidity fluctuations within the atmosphere. Prediction of these parameters from basic meteorological data is required for effective implementation of any long-range laser system. A field measurement site at the U.S. Naval Academy in Annapolis, Maryland, is used to gather data related to atmospheric effects on optical propagation. A scintillometer measures the refractive-index structure constant along a 1 km path over the Severn River adjacent to the Chesapeake Bay. At each end of the scintillometer link, weather data including wind velocity, air and sea surface temperatures, etc., are captured. Using these data, machine learning techniques are used to predict the refractive-index structure parameter of the atmosphere and the on-target scintillation of the laser. [Preview Abstract]
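The abstract does not say which ML technique is used, so as a neutral stand-in here is one of the simplest regressors one could train on such weather-to-turbulence data: k-nearest-neighbour regression over (standardized) meteorological features. The synthetic target below is made up purely to exercise the code.

```python
import numpy as np

def knn_regress(X_train, y_train, x_query, k=5):
    # k-nearest-neighbour regression: predict the target at x_query as the
    # mean target of the k closest training samples in feature space.
    d = np.linalg.norm(X_train - x_query, axis=1)
    return y_train[np.argsort(d)[:k]].mean()
```

In practice the features (wind velocity, air-sea temperature difference, humidity, time of day) would be standardized first, and the target would be log10 of the measured refractive-index structure parameter, which spans several orders of magnitude.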
Tuesday, November 26, 2019, 9:16AM–9:29AM
Q17.00008: Convolutional Neural Networks for the Solution of the 2D Poisson Equation with Arbitrary Dirichlet Boundary Conditions, Mesh Sizes and Grid Spacings Ali Girayhan Ozbay, Panagiotis Tzirakis, Georgios Rizos, Bjorn Schuller, Sylvain Laizet The Poisson equation is a problem commonly encountered in engineering, including in computational fluid dynamics, where it is needed to compute corrections to the pressure field. However, solving the Poisson equation numerically can be very costly, especially for large-scale problems. We propose a fully convolutional neural network (CNN) architecture to infer the solution of the Poisson equation on a Cartesian grid of arbitrary size and grid spacing, given the right-hand-side term, Dirichlet boundary conditions, and grid parameters. Analytical test cases indicate that our CNN architecture is capable of predicting the correct solution of a Poisson equation with mean percentage errors of a few percent and a reduction in wall-clock time compared to traditional solvers based on finite-difference methods. [Preview Abstract]
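For context on the finite-difference baseline such a CNN is compared against: a minimal sketch (not the authors' solver) that solves -∇²u = f with homogeneous Dirichlet boundaries on a uniform grid via Jacobi iteration on the 5-point stencil. Production codes would use multigrid or FFT-based solvers instead.

```python
import numpy as np

def poisson_jacobi(f, h, n_iter=4000):
    # Solve -laplacian(u) = f with zero Dirichlet boundary conditions on a
    # uniform grid of spacing h by Jacobi iteration on the 5-point stencil:
    # u_ij <- (u_N + u_S + u_E + u_W + h^2 f_ij) / 4.
    u = np.zeros_like(f)
    for _ in range(n_iter):
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                u[1:-1, :-2] + u[1:-1, 2:] +
                                h * h * f[1:-1, 1:-1])
    return u
```

Against the analytic solution u = sin(πx)sin(πy) (f = 2π² sin(πx)sin(πy)) this converges to the O(h²) discretization error; the iteration count needed grows rapidly with grid size, which is the cost a single CNN inference pass aims to avoid.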
© 2020 American Physical Society