Bulletin of the American Physical Society
73rd Annual Meeting of the APS Division of Fluid Dynamics
Volume 65, Number 13
Sunday–Tuesday, November 22–24, 2020; Virtual, CT (Chicago time)
Session S01: Focus Session: Deep Learning in Experimental and Computational Fluid Mechanics (Part II) (5:45pm - 6:30pm CST), Interactive On Demand
S01.00001: Super-resolution of Finite Element spaces using Physics-informed Deep Learning Networks for Turbulent flows Aniruddhe Pradhan, Rajarshi Biswas, Karthik Duraisamy High-dimensional representation of turbulent flows presents several challenges due to the wide range of spatial and temporal scales involved. High Reynolds number simulations demand coarse-grained modeling, which requires an adequate representation of the impact of the unresolved scales. We present a deep learning (DL) based super-resolution technique to recover the fine-scale information in the form of high-order Discontinuous Galerkin (DG) fields from a low-order DG solution. We train the DL models using coarse- and fine-scale data obtained by $L^2$-projection of the full-order solution on low- and high-order DG sub-spaces, respectively. The predictive and operational efficacy of the learning algorithms is then assessed. The performance of the model is improved by: (i) introducing non-dimensionalized physics-informed input and output features; and (ii) weighting the loss with a prior obtained directly from the training data. The present approach is found to generalise to unseen data at different flow conditions as well.
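The coarse/fine training pairs described above can be illustrated in one dimension. The sketch below is ours, not the authors': it uses modal Legendre bases as stand-ins for the DG subspaces and $L^2$-projects a smooth field onto a low-order and a high-order space via Gauss-Legendre quadrature, which is the kind of input/target pair a super-resolution network would train on.

```python
import numpy as np
from numpy.polynomial import legendre as L

def project_legendre(f, degree, nquad=32):
    """L2-project f onto Legendre polynomials up to `degree` on [-1, 1]."""
    x, w = L.leggauss(nquad)
    coeffs = np.zeros(degree + 1)
    for k in range(degree + 1):
        Pk = L.legval(x, np.eye(degree + 1)[k])          # k-th Legendre poly
        coeffs[k] = (2 * k + 1) / 2 * np.sum(w * f(x) * Pk)
    return coeffs

def l2_error(f, coeffs, nquad=64):
    """L2 norm of (f - reconstruction) via quadrature."""
    x, w = L.leggauss(nquad)
    return np.sqrt(np.sum(w * (f(x) - L.legval(x, coeffs)) ** 2))

f = lambda x: np.sin(np.pi * x)
coarse = project_legendre(f, 2)   # low-order input field
fine = project_legendre(f, 8)     # high-order target field
```

The high-order projection retains far more of the fine-scale content than the low-order one, which is exactly the gap the super-resolution model is trained to close.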
S01.00002: Dispersed Multiphase Flow Generation using 3D Steerable Convolutional Neural Network Bhargav Sriram Siddani, S. Balachandar, Ruogu Fang, William Chandler Moore, Yunchao Yang This work deals with recreating particle-resolved fluid flow around a random distribution of particles in a dispersed multiphase setup using Convolutional Neural Networks (\textbf{CNN}s). The considered problem is rotationally invariant about the mean velocity (streamwise) direction. Thus, the objective of our work is to enforce this symmetry using an \textbf{SE(3)-equivariant} CNN architecture, which is translation and three-dimensional rotation equivariant. This study mainly explores the generalization capabilities of the SE(3)-equivariant network when it is used in conjunction with physics-based loss terms. Synthetic flow fields that are 75-95{\%} accurate are produced for Reynolds number and particle volume fraction combinations spanning the ranges [2.69, 172.96] and [0.11, 0.45], respectively, with careful application of a physics-constrained data-driven approach, whose computational cost is more than four orders of magnitude lower than that of an equivalent CFD approach.
S01.00003: Deep Reinforcement Learning for Control of Fuel Injection in Compression Ignition Engines Nicholas Wimer, Marc Henry de Frahan, Shashank Yellapantula, Ray Grout Compression ignition (CI) engines have long offered high thermal efficiencies and torque across a wide range of loads, but at the cost of high quantities of NOx and soot. One strategy to decrease harmful emissions from CI engines is to split the fuel injection into a series of smaller injections. In this talk, we explore a new way of discovering optimal injection strategies for the next generation of compression ignition engines using deep reinforcement learning (DRL). The DRL algorithm and training procedure are outlined, and the resulting new injection schedules are discussed. We demonstrate the use of transfer learning (TL) across hierarchies of physical models to accelerate the learning process, making this approach feasible for a range of complex scientific problems. Using a well-trained DRL agent as a controller, NOx emissions from a zero-dimensional model are reduced three-fold while decreasing net work by only 2\%.
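The reward-driven search over injection schedules can be caricatured by its simplest special case, a bandit-style value update over a discrete set of pulse counts. Everything below is invented for illustration (the emissions surrogate, the action set, the learning rule); it is not the authors' DRL algorithm or engine model.

```python
import numpy as np

rng = np.random.default_rng(0)

def emissions_penalty(n_pulses):
    """Toy surrogate: splitting the injection lowers NOx but costs some work."""
    nox = 1.0 / n_pulses          # NOx falls with more, smaller pulses
    work_loss = 0.02 * n_pulses   # small efficiency penalty per extra pulse
    return nox + work_loss

actions = [1, 2, 3, 4]            # number of injection pulses
q = np.zeros(len(actions))        # value estimate per action
counts = np.zeros(len(actions))

for step in range(500):
    # epsilon-greedy: mostly exploit the best-known action, sometimes explore
    a = rng.integers(len(actions)) if rng.random() < 0.1 else int(np.argmax(q))
    reward = -emissions_penalty(actions[a]) + 0.01 * rng.normal()
    counts[a] += 1
    q[a] += (reward - q[a]) / counts[a]   # incremental mean update

best = actions[int(np.argmax(q))]
```

Under this surrogate the learner settles on the largest pulse split, mirroring the abstract's finding that multiple smaller injections reduce NOx at a small work penalty.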
S01.00004: Performance Bounds of Data-Driven Reynolds Stress Models via Optimal Tensor Basis Expansions Andrew J. Banko, Christopher J. Elkins, John K. Eaton Reynolds-Averaged Navier-Stokes simulations continue to be primary tools for engineering design, but standard models are inaccurate when applied to 3D turbulent flows with large-scale separation. As a result, data-driven approaches have been developed to derive non-linear algebraic stress models from high-fidelity simulations. Most use a tensor basis expansion and learn the coefficients as functions of the basis tensor invariants. However, it is often unclear how to adjust the model form or algorithm hyperparameters to further improve a priori and a posteriori accuracy. In this work, we propose optimal tensor basis expansions as a methodology to determine the performance bounds of data-driven Reynolds stress models. The optimal expansion is independent of the machine learning algorithm, and therefore isolates errors associated with an assumed tensor basis. We apply the optimal bases in forward simulations using large-eddy simulation data to analyze the relative importance of errors in the anisotropy and auxiliary turbulence equations. Results are demonstrated for the flow over a 3D bump with large-scale separation. We find that few tensor basis terms are needed to model the Reynolds stress anisotropy, and that the greatest errors reside in the auxiliary equations.
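The "optimal expansion" idea — best-possible coefficients for a fixed tensor basis, independent of any learning algorithm — can be sketched with least squares over a two-term basis. This is a toy with synthetic tensors: only the first two terms of the usual strain/rotation basis are used, and the target anisotropy is manufactured from known coefficients so that the optimum is checkable.

```python
import numpy as np

def basis_tensors(S, W):
    """First two terms of the classical strain/rotation tensor basis."""
    T1 = S
    T2 = S @ W - W @ S   # symmetric commutator term
    return [T1, T2]

# A fixed velocity-gradient surrogate: symmetric traceless S, antisymmetric W
A = np.array([[0.5, 1.0, -0.3],
              [0.2, -0.4, 0.8],
              [-0.6, 0.1, 0.9]])
S = 0.5 * (A + A.T)
S -= np.trace(S) / 3 * np.eye(3)
W = 0.5 * (A - A.T)
T = basis_tensors(S, W)

# Manufactured "high-fidelity" anisotropy built from known coefficients
b_target = 0.3 * T[0] + 0.1 * T[1]

# Optimal (least-squares) basis coefficients: the performance bound for any
# model restricted to this basis, whatever ML algorithm produces them
M = np.stack([t.ravel() for t in T], axis=1)
coeffs, *_ = np.linalg.lstsq(M, b_target.ravel(), rcond=None)
```

On real LES/DNS data the same least-squares solve (pointwise, over the full basis) yields the floor on a priori error that the abstract uses to separate basis errors from learning errors.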
S01.00005: Using generative adversarial networks for subfilter modeling of turbulent flows Mathis Bode Accurately modeling turbulence is still one of the main challenges in many industrial flows, and the development of universally applicable turbulence closures is therefore essential. One approach is to employ data-driven methods, which have become very popular in many fields in recent years as large, often extensively labeled, datasets became available and the use of GPUs sped up the training of large neural networks tremendously. However, the successful application of deep neural networks in fluid dynamics, for example for subfilter modeling in the context of large-eddy simulations (LESs), is still challenging: the high requirements with respect to accuracy, error robustness, and physical plausibility demand tailored methods, and generalization remains an open question. This work focuses on generative adversarial networks (GANs) with a physics-informed loss function. In particular, physics-informed enhanced super-resolution GANs (PIESRGANs) are discussed, and their application to turbulent reactive flows and multiphase flows is shown. The superior performance of PIESRGAN-based subfilter models over classical subfilter models is demonstrated. Aspects such as a two-step learning approach and the adversarial part of the loss function are emphasized.
S01.00006: Predictions in Wall-bounded Turbulence Through Convolutional-network Models Using Wall Quantities Luca Guastoni, Alejandro G\"uemes, Andrea Ianiro, Stefano Discetti, Philipp Schlatter, Hossein Azizpour, Ricardo Vinuesa Deep neural networks (DNNs) have been applied to a variety of fluid dynamics problems in recent years, providing encouraging results in flow prediction, modelling and control. Here we train two models based on convolutional neural networks, aiming to predict velocity fields in a turbulent open channel flow using quantities measured at the wall. The first model is a fully-convolutional neural network (FCN) which directly predicts the flow fluctuations, while the second one, named FCN-POD, uses orthonormal basis functions obtained through proper orthogonal decomposition. The performance assessment is based on predictions of the instantaneous fields, turbulence statistics and power-spectral densities, at different wall-normal locations, for friction Reynolds numbers $Re_{\tau} = 180$ and $550$. The FCN exhibits the best predictions closer to the wall, whereas the FCN-POD model provides better predictions at larger wall-normal distances. Both models are shown to perform better than traditional linear models, thanks to their ability to also capture non-linear interactions in turbulent flows. The potential of transfer learning between friction Reynolds numbers is also investigated.
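The orthonormal POD basis that the FCN-POD model predicts coefficients for is obtained from a singular value decomposition of a snapshot matrix. A minimal sketch on a synthetic rank-3 dataset (grid, modes, and sizes are ours): the SVD recovers the spatial basis, and any snapshot in the data's subspace is exactly reconstructed from its projection coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic snapshot matrix: 200 "time steps" of a field on 64 grid points,
# built from 3 spatial modes with random time coefficients
xg = np.linspace(0, 2 * np.pi, 64)
true_modes = np.stack([np.sin(xg), np.sin(2 * xg), np.cos(3 * xg)])
a = rng.normal(size=(200, 3))
snapshots = a @ true_modes                 # rows are snapshots

# POD via SVD: rows of Vt are orthonormal spatial modes
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
pod_basis = Vt[:3]                         # leading POD modes

# Project one snapshot onto the POD subspace and reconstruct it
coeffs = snapshots[0] @ pod_basis.T        # what FCN-POD would predict
recon = coeffs @ pod_basis
```

In the abstract's setting the network maps wall measurements to `coeffs`; the reconstruction step from coefficients back to a velocity field is exactly the last two lines.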
S01.00007: SimNet: A neural network solver for multi-Physics applications Oliver Hennigh, Kaustubh Tangsali, Akshay Subramaniam, Susheela Narasimhan, Mohammad Nabian, Jose del Aguila Ferrandis, Sanjay Choudhry There is an ever-growing body of work using neural networks to solve partial differential equations (PDEs), often referred to as Physics Informed Neural Networks (PINNs). By virtue of inherent data parallelism and the use of point clouds, neural networks eliminate time-consuming tasks such as domain decomposition for parallelization or meshing for domain discretization in numerical solvers. In addition, neural network solvers can solve parameterized problems and offer a powerful tool that can evaluate multiple designs simultaneously, thereby facilitating a faster design cycle. Despite the considerable interest in this field, there has been little success in solving complex problems beyond simple benchmarks. SimNet improves on existing work to handle real-world engineering problems while taking maximal advantage of GPU computing. In this work we present new neural network architectures and training methodologies that allow for solving multi-physics problems with complex geometries. In particular, we present the solution of conjugate heat transfer problems for heat sinks used to cool the next generation of DGX servers.
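SimNet's own API is not reproduced here. As a sketch of the residual-minimisation idea shared with PINNs, the snippet below solves a 1-D Poisson problem by minimising the PDE residual at collocation points, with a polynomial ansatz standing in for the neural network (a linear least-squares solve replaces gradient-descent training; the problem and weights are our choices).

```python
import numpy as np

# Solve u'' = -sin(x), u(0) = u(pi) = 0; exact solution is u(x) = sin(x).
deg = 9
xs = np.linspace(0, np.pi, 40)        # collocation points

def d2_row(x):
    """Second derivative of the monomial basis [1, x, x^2, ...] at x."""
    return np.array([k * (k - 1) * x ** (k - 2) if k >= 2 else 0.0
                     for k in range(deg + 1)])

A = np.array([d2_row(x) for x in xs])  # PDE residual rows: u''(x_i)
b = -np.sin(xs)

# Boundary-condition rows, strongly weighted (the "soft BC" analogue of a
# weighted loss term)
for xb in (0.0, np.pi):
    A = np.vstack([A, 100.0 * xb ** np.arange(deg + 1)])
    b = np.append(b, 0.0)

c = np.linalg.lstsq(A, b, rcond=None)[0]   # "train" the ansatz
u = lambda x: np.polyval(c[::-1], x)       # recovered solution
```

A PINN replaces the monomial basis with a network and the least-squares solve with stochastic-gradient training, but the objective — residual plus boundary terms evaluated on a point cloud, no mesh required — is the same.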
S01.00008: Generalization of Machine Learning Criteria for Ignition Prediction Faustino Martinez, Pavel Popov We present work on machine learning criteria for predicting the outcome of an attempted ignition. A hotspot of varying shape and peak temperature is introduced into a partially-premixed two-dimensional flow with a random velocity field following the Kolmogorov 5/3 law and a random stoichiometric surface. The machine learning ignition criteria predict whether the ignition will eventually be successful or not, based on temperature and radical information known early in the ignition process. A successful prediction of this binary outcome can reduce computational effort in simulations of turbulent flow ignition. The criteria are trained on 1000 realizations of the random velocity and composition fields, and are tested on a separate set of 200 realizations. The performance of convolutional neural networks is compared to that of densely-connected networks. We examine how well both types of networks generalize to new values of the random field parameters, specifically the fuel sheet thickness and stoichiometric surface curvature, as well as the expected amplitude of the velocity field fluctuations. The feasibility of an ignition criterion prior to any radical generation -- based purely on the velocity and composition field at the time of hotspot deposition -- is also examined.
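The densely-connected baseline can be caricatured by its simplest special case: logistic regression on early-time features, trained to predict the binary ignite/fail outcome. The features, labels, and decision rule below are invented for illustration; they are not the authors' DNS realisations or network architectures.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "early hotspot" features (e.g. scaled peak temperature and a
# radical-concentration proxy) with a linearly separable ignition rule
X = rng.normal(size=(1000, 2))
y = (1.5 * X[:, 0] + 0.8 * X[:, 1] > 0).astype(float)

# Logistic regression trained by full-batch gradient descent
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))   # predicted ignition probability
    g = p - y                            # gradient of the log loss
    w -= 0.1 * X.T @ g / len(y)
    b -= 0.1 * g.mean()

acc = np.mean(((X @ w + b) > 0) == (y > 0.5))   # training accuracy
```

A dense network adds hidden layers on top of this, and a CNN replaces the fixed feature vector with learned spatial filters over the temperature and radical fields; the binary-classification training loop is otherwise the same shape.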
S01.00009: Learning minimal representations for chaotic dynamics of partial differential equations Alec J. Linot, Michael D. Graham We describe a method of reduced order modeling for systems with high-dimensional chaotic dynamics in which we map data to an ``exact'' minimal representation called an inertial manifold and evolve trajectories forward in time with this representation. The mapping is learned by training an undercomplete autoencoder where we vary the dimension of the minimal representation. Once we reach the dimension of the inertial manifold there is a drastic drop in reconstruction error. For the Kuramoto-Sivashinsky equation (KSE), we validate this conclusion against known estimates of the dimension and make predictions for larger domain sizes. Next, we show that time evolution can be predicted in the inertial manifold coordinates either in a data-driven manner by learning a neural network representation for the differential equation on the inertial manifold, or in a hybrid data/equation-based method using knowledge of the governing equations and a nonlinear Galerkin method. For the KSE, both methods show excellent short- and long-time predictive capabilities when using the correct number of dimensions and a significant drop in performance with too few dimensions. Finally, we apply this method to direct numerical simulations of chaotic 2D Kolmogorov flow.
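The "drastic drop" in reconstruction error at the manifold dimension can be seen in a linear toy: PCA, the linear analogue of an undercomplete autoencoder, applied to data that lie on a known 2-D subspace. The data and embedding below are ours; the abstract's autoencoder handles the genuinely nonlinear case.

```python
import numpy as np

rng = np.random.default_rng(0)

# Data on a known 2-D linear subspace of R^6 -- a linear stand-in for the
# inertial manifold
latent = rng.normal(size=(500, 2))
W = np.array([[1., 0., 0., 1., 0., 0.],
              [0., 1., 0., 0., 1., 0.]])
X = latent @ W

def recon_error(X, d):
    """Relative reconstruction error of the best rank-d linear map (PCA),
    i.e. a linear 'autoencoder' with a d-dimensional bottleneck."""
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    Xd = (X @ Vt[:d].T) @ Vt[:d]
    return np.linalg.norm(X - Xd) / np.linalg.norm(X)

# Sweep the bottleneck dimension: error is large below the true dimension
# and collapses once it is reached
errors = [recon_error(X, d) for d in (1, 2, 3)]
```

The error stays essentially zero for every dimension at or above the true one, which is the signature the abstract uses to read off the inertial-manifold dimension.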
S01.00010: Flowtaxis in the wakes of oscillating airfoils Haotian Hang, Sina Heydari, Brendan Colvert, Eva Kanso Swimming animals can produce long-lasting wakes in the surrounding fluid, and many exhibit fascinating abilities to sense and respond to these flows. Although the physiological mechanisms underlying such flow sensing abilities remain unclear, mathematical modeling offers an enticing platform for developing and testing sensorimotor control hypotheses. Here, we propose a simple flow sensing scenario in which a mobile sensor of constant speed reorients its heading in response to local flow stimuli, with the goal of flowtaxis, or tracing vortical wakes to their source. We consider five types of sensors that measure the lateral difference, relative to the sensor's direction of motion, in speed, velocity components, vorticity, and pressure, and we train each of these sensors on classic von K\'arm\'an vortex streets with decaying vorticity. Our results suggest that local lateral differences in the flow speed are most effective for flowtaxis. We then test the trained policies in wakes obtained from high-fidelity numerical simulations of oscillating airfoils, and quantify the robustness of the policy to wake variations.
S01.00011: Physics-informed Autoencoders for Operator-theoretic decomposition and Model reduction of Complex Flows Karthik Duraisamy, Shaowu Pan We explore the design of physics-informed autoencoders for operator-theoretic decomposition and reduced order modeling of complex flow dynamics. The focus is on enforcing additional physical and mathematical structure in convolutional neural network-based autoencoders. The autoencoders are used to extract the lower-dimensional manifold of the latent variables; they are parameterized to yield provably stable predictions and are constrained by the governing equations of the full-order dynamics that we aim to represent. Further, the latent space is explicitly endowed with a specific structure to promote interpretability and to extract Koopman modes. Variational inference is used in a hierarchical Bayesian setting to quantify uncertainties in the characterization and prediction of the spatio-temporal dynamics. The framework is evaluated on a range of problems involving strong gradients, wave propagation, and coherent structures.
S01.00012: Fast solver of the shallow water equations with application to estimation of the riverine surface flow velocity Mojtaba Forghani, Yizhou Qian, Peter Kitanidis, Matthew Farthing, Tyler Hesser, Jonghyun Lee, Eric Darve Estimation of the riverine flow velocity is important in applications such as safe and efficient maritime transportation, prediction of beach erosion, and flood risk management. By assuming a small vertical length scale compared to the horizontal length scale, the shallow water equations (SWE) are derived from the Navier-Stokes equations to predict flow velocity, given the riverbed profile (bathymetry) and the boundary conditions (BCs), e.g., the discharge and the free surface elevation. Here, we propose a fast machine learning solver for the SWE that can be used for the online prediction of riverine flow velocities. Our approach consists of first estimating the probability density function of the bathymetry from the flow velocity measurements, and then using deep learning to obtain a fast solver of the SWE, given the distribution of the bathymetry and known BCs. Our method can incorporate bathymetry information into the flow velocity prediction for improved accuracy at no additional cost; for example, in cases where the bathymetry is available for a limited number of cross-sections. Our results, validated on the Savannah River, GA, show reasonable accuracy of prediction at a very low computational cost. Collaborations: Stanford Civil Engineering Department; University of Hawaii Civil Engineering Department; Oak Ridge Institute for Science and Education; Stanford Institute for Computational and Mathematical Engineering; Stanford Mechanical Engineering
S01.00013: Deep learning to predict the effectiveness factor in the closure problems Ehsan Taghizadeh, Paul Macklin, Helen Byrne, Brian Wood We adopt a combination of upscaling and machine learning to generate a closed macroscale equation to predict convection, diffusion, and reactions within a tissue. We start with a microscale description of the system and upscale these equations to determine a macroscale representation of the system. Nonlinearity in the reaction rate prevents computation of the closure factor using conventional analytical techniques. We overcome the nonlinear closure problem by using deep neural networks (DNNs) to learn an appropriate representation. First, we construct a simple “representative” geometry that approximates the true geometry. Then we determine the feature space on the basis of the source terms in the nonlinear closure problem. Next, we perform exhaustive microscale simulations for the nonlinear problem to compute an effectiveness factor at each point in feature space. Then, we design a DNN that learns the nonlinear dependencies from the feature space. Finally, we test the algorithm on two representative tissues (brain and liver) with different cell geometries and scales. Our results show that the effectiveness factor predicted by the artificial neural network can accurately estimate the correction factor computed by direct numerical solutions.
S01.00014: Embedded training of neural-network sub-grid-scale turbulence models Justin Sirignano, Jonathan MacArt, Jonathan Freund The weights of a deep neural network model are optimized in conjunction with the governing flow equations to provide a model for sub-grid-scale stresses in a temporally developing plane turbulent jet at Reynolds number 6000. The objective function for training is first based on the instantaneous filtered velocity fields from a corresponding direct numerical simulation, and the training uses the adjoint Navier–Stokes equations to provide the end-to-end sensitivities of the model weights to the velocity fields. In-sample and out-of-sample testing on multiple dual-jet configurations show that its required mesh density in each coordinate direction for prediction of mean flow, Reynolds stresses, and spectra is half that needed by the dynamic Smagorinsky model for comparable accuracy. The same neural-network model trained directly to match filtered sub-grid-scale stresses fails to provide a qualitatively correct prediction. The formulation is generalized to train based only on mean-flow and Reynolds stresses, which provides a robust model though a somewhat less accurate prediction. The anticipated advantage of the formulation is that the inclusion of resolved physics in the training increases its capacity to extrapolate, which is assessed for the case of passive scalar transport.
S01.00015: Deep learning-based assignment of combustion submodels for large-eddy simulation Wai Tong Chung, Aashwin Mishra, Nikolaos Perakis, Matthias Ihme This work introduces a data-assisted approach in the form of a neural network classifier for local and dynamic combustion submodel assignment in simulations of a rocket combustor. In this data-assisted simulation, three different combustion models -- finite-rate chemistry (FRC), flamelet progress variable (FPV), and inert mixing (IM) models -- are assigned in the same domain using neural networks. \textit{A priori} and \textit{a posteriori} assessments are conducted to (i) evaluate the accuracy and adjustability of the classifier for targeting different quantities-of-interest (QoIs), and (ii) assess improvements of the data-assisted simulations compared to monolithic FRC and FPV model utilization in predicting target QoIs during simulation runtime. Results from employing neural networks, trained with local flow properties as input variables and combustion model errors in temperature and emissions as training labels, are compared with results from employing random forests, representing another classification approach. These results demonstrate that the present data-driven framework holds promise for dynamic combustion submodel assignment in reacting flow simulations.
S01.00016: Embedding Physics as Hard Constraints in Generative Adversarial Networks for 3D Turbulence Dima Tretiak, Arvind Mohan, Daniel Livescu Generative Adversarial Networks (GANs) have achieved impressive results in the deep learning literature for being able to generate photorealistic 2D images. Among many other neural networks, GANs are now being applied to more complex physical problems, such as turbulence. However, neural networks have become notorious for being physics-agnostic ``black boxes.'' Recent work in enforcing physical laws using augmented loss functions as ``soft constraints'' shows promise, but still suffers from excessive dependence on hyper-parameter tuning and interpretability issues. In this work, we analyze a variety of physics embeddings into GANs for effectiveness and present a novel GANs architecture capable of capturing the statistics of homogeneous isotropic turbulence while also enforcing the zero divergence condition as a hard constraint for incompressible turbulence. We evaluate our model's strengths and weaknesses through the use of rigorous physical and statistical diagnostics and discuss future directions for physics-embedded GANs in turbulence.
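One standard way to make zero divergence a hard constraint rather than a loss penalty is to have the network output a vector potential and take its curl, so incompressibility holds by construction. The sketch below demonstrates that property on a periodic grid with central differences, independently of any particular GAN; the potential here is an arbitrary smooth field standing in for a generator output.

```python
import numpy as np

n = 32
h = 2 * np.pi / n
x = np.arange(n) * h
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")

# Arbitrary smooth vector potential A (a generator would output this)
Ax = np.sin(X) * np.cos(Y)
Ay = np.cos(Y) * np.sin(Z)
Az = np.sin(Z) * np.cos(X)

def ddx(f, axis):
    """Central difference with periodic wrap along `axis`."""
    return (np.roll(f, -1, axis) - np.roll(f, 1, axis)) / (2 * h)

# Velocity as the discrete curl of A
u = ddx(Az, 1) - ddx(Ay, 2)
v = ddx(Ax, 2) - ddx(Az, 0)
w = ddx(Ay, 0) - ddx(Ax, 1)

# Discrete divergence vanishes identically because the central-difference
# operators along different axes commute -- no penalty term needed
div = ddx(u, 0) + ddx(v, 1) + ddx(w, 2)
```

Whether the paper's architecture uses exactly this curl parameterization or another hard-constraint construction, the point carried by the sketch is the same: the constraint is satisfied by the output's structure, not by tuning a loss weight.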
S01.00017: Dynamic Masking of PIV Images using Deep Learning Bernhard Vennemann, Thomas Rösgen Masking of foreign objects in PIV image frames is required to avoid correlation bias from interrogation windows overlapping the object. Automatic masking strategies are required in the presence of moving or deforming objects, where manual masking becomes unfeasible. We developed a dynamic masking technique based on convolutional autoencoders (a special type of convolutional neural network) to mask semi-transparent, moving and deforming objects in the PIV image frame. The neural network was trained either on the PIV image sequence to be masked or exclusively on synthetic PIV images. The proposed deep learning approach was found to yield improvements over existing methods, especially for objects of high transparency and scenarios with high seeding density. We tested the method on real-world PIV data of a swimming jellyfish, where the highly transparent animal could be successfully extracted from the images. A quantitative evaluation on synthetic benchmark images confirmed that masks could be produced with high fidelity, at an average shape deviation of less than one pixel.
S01.00018: Pollution Source Localization Using Physics-Driven Deep Neural Net Roshan D'Souza, Isaac Perez-Raya Pollution source localization is of great significance in environmental damage prevention and mitigation. While sensors can detect time-resolved chemical concentrations, spatio-temporal localization of the source is a difficult inverse problem. Here we propose a novel method based on physics-driven deep learning to detect the spatio-temporal location of pollution sources from the time-resolved chemical concentration readings of a finite number of sensors. The chemical concentration and the mobile source spatio-temporal functions are modeled as neural nets. The training process minimizes a loss function that enforces data fidelity with respect to the sensor readings and the physics of advection-diffusion through regularization. The proposed method is purely data-driven and does not require specification of the geometry and boundary conditions of the domain. The method has been tested on a 1-D unsteady problem with a single mobile point source that is active for a certain time duration. The reference data was generated using the commercial finite volume solver Fluent. Results show an average source location error of 1{\%}, an average source magnitude error of 12{\%}, and an average source time duration error of 2.5{\%}.
S01.00019: LES Turbulence Model with Learnt Closure; Integration of DNN into a CFD Solver Majid Haghshenas, Peetak Mitra, Niccolo Dal Santo, Mateus Dias Ribeiro, Shounak Mitra, David Schmidt Turbulence modeling has been an ongoing subject of study. While high-fidelity turbulence models such as Large Eddy Simulation (LES) show promise, there is a continuing need for better closure models for subgrid flow features. Here we propose a CFD-DNN approach that uses a data-driven closure to approximate subgrid features and close an LES model. The workflow is implemented in an open-source CFD solver (OpenFOAM), and learning is performed using MATLAB. Our approach uses neural networks to estimate the closure model relating the small scales to the mean flow features. A high-fidelity LES method with a well-established closure model is used to generate the ground truth data on which the ML model is trained. The trained model is integrated with the CFD solver to predict eddy viscosity. The CFD-DNN solver is tested on a standard channel-flow problem and also on a practical Internal Combustion Engine (ICE) simulation, which involves complex flow features. Additionally, different network architectures and the corresponding accuracy and efficiency are reported. Overall, the approach shows promising results and provides new opportunities for developing CFD-ML infrastructure.
S01.00020: Deep learning-based shadowgraph: implementation of Mask R-CNN to bubble detection in complex two-phase flows Yewon Kim, Hyungmin Park One of the tricky issues in experimentally investigating gas-liquid two-phase flows is measuring the bubble statistics. This is especially challenging when the flows are measured optically and the bubbles are densely populated. Due to the wide range of flow conditions and lighting systems, it is impossible to apply a global threshold when processing the optically obtained images. Recently, deep learning has emerged as a promising tool for tackling complex fluid mechanics problems, including two-phase flow experiments. However, bubble detection still remains at the level of bounding-box detection, which is not sufficient. In this study, we trained Mask R-CNN, a popular model in the field of object detection, using optimized datasets (real experimental and synthetic images of bubbly flows) and parameters to develop a universal tool for detecting the actual bubble shapes (not a box). We also used a customized loss function to enhance the detection performance for small objects (bubbles). We found that the detection accuracy on the validation data set is above 95{\%}. Furthermore, the time taken for detection is reduced by up to 3 times compared to conventional digital image processing methods, while providing comparable quality of detected bubble masks.
S01.00021: A hybrid data-driven deep learning technique for fluid-structure interaction Rajeev Jaiman, Tharindu Miyanawala This work presents the development of a hybrid data-driven technique for unsteady fluid-structure interaction systems. The proposed data-driven technique combines a deep learning framework with projection-based low-order modeling. While the deep learning provides low-dimensional approximations from datasets arising from black-box solvers, the projection-based model constructs the low-dimensional approximations by projecting the original high-dimensional model onto a low-dimensional subspace. Of particular interest is the prediction of long time series of unsteady flow fields of a freely vibrating bluff body subjected to wake-body synchronization. We consider convolutional neural networks (CNNs) for learning the dynamics of wake-body interaction. The time-dependent coefficients of the proper orthogonal decomposition (POD) subspace are mapped to the flow field via a CNN with nonlinear rectification, and the CNN is iteratively trained using the stochastic gradient descent method to predict the POD time coefficients when a new flow field is fed to it. The time-averaged flow field, the POD basis vectors, and the trained CNN are used to predict long time series of the flow fields, and the flow predictions are quantitatively assessed against the full-order simulation data. The proposed POD-CNN model based on the data-driven approximation has remarkable accuracy in the entire fluid domain, including the highly nonlinear near-wake region.
S01.00022: Learning to write and paint using a liquid rope trick Gaurav Chaudhary, Stephanie Christ, A. John Hart, L. Mahadevan The range and speed of direct ink writing, the workhorse of 3D and 4D printing, is limited by the practice of extruding liquid from a nozzle just above the surface. This is done to prevent instabilities that lead to folding and coiling, which cause deviations from the required print path. But what if we could harness and control the “liquid rope coiling trick”, whereby a thin stream of viscous fluid falling from a height spontaneously folds or coils, to write specified patterns on a substrate? Here, we show that a type of machine learning known as reinforcement learning can be used to control the motion of a liquid-extruding nozzle and thence the fluid patterns that are deposited on the surface. The learner (nozzle) repeatedly interacts with the environment (a viscous filament simulator) and improves its strategy using the results of this experience. We demonstrate the results in an experimental setting where the learned motion control instructions are used to drive a viscous jet to accomplish complex tasks such as cursive writing and painting \`a la Pollock.
S01.00023: Inference on spatially unstructured flow fields using Graph Neural Networks Francis Ogoke, Kazem Meidani, Amirreza Hashemi, Amir Barati Farimani Experimental and computational models of fluid behavior frequently produce spatially unstructured data. However, machine learning models typically require an ordered set of features within each sample, which limits the ability to form a coherent feature matrix from a spatially unstructured dataset. Therefore, we present a data-driven model to perform inference on fields defined on an unstructured mesh, using a Graph Convolutional Neural Network framework. We demonstrate the ability of the method to predict global properties from spatially irregular measurements with high accuracy, by predicting the body forces associated with the laminar flow around airfoils from scattered velocity measurements. The network can infer from field samples at different resolutions and is invariant to the order in which the measurements within each sample are presented. The results are compared to the performance of both shallow and deep conventional machine learning methods.
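The order-invariance claim can be checked directly on a minimal graph convolution: neighbour aggregation followed by a shared linear map is permutation-equivariant, and global pooling then makes the graph-level output permutation-invariant. This is a simplified sketch (mean aggregation, one layer, random weights), not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def gcn_layer(features, adjacency, weights):
    """One graph-convolution layer: mean-aggregate neighbour features,
    then apply a shared linear map and ReLU (a simplified GCN update)."""
    deg = adjacency.sum(1, keepdims=True)
    agg = (adjacency @ features) / np.maximum(deg, 1)
    return np.maximum(agg @ weights, 0.0)

def predict(features, adjacency, weights):
    """Graph-level output: convolve, then global mean pooling."""
    h = gcn_layer(features, adjacency, weights)
    return h.mean(0)

# Scattered measurement points as graph nodes with 3 features each
feats = rng.normal(size=(5, 3))
adj = np.ones((5, 5)) - np.eye(5)      # fully connected, no self-loops
Wt = rng.normal(size=(3, 4))

# Permuting the node ordering (and the adjacency consistently) leaves the
# graph-level prediction unchanged
perm = rng.permutation(5)
out1 = predict(feats, adj, Wt)
out2 = predict(feats[perm], adj[perm][:, perm], Wt)
```

It is this invariance that lets the abstract's model ingest scattered velocity measurements in any order and at varying resolution, where a fixed feature matrix would not be well defined.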
S01.00024: Machine Learning Statistical Lagrangian Geometry of Turbulence Criston Hyett, Michael Chertkov, Yifeng Tian, Daniel Livescu Recently, there has been great success in machine learning the Lagrangian dynamics of fluid particles in turbulent flows. We extend this work in search of Lagrangian dynamics of coarse-grained fluid volume/geometry and velocity gradient. Our work builds on the machine learning of Lagrangian dynamics, as well as the development of phenomenological reduced-order models, by approximating the closure of a physics-based model using neural networks to create a parameterized stochastic differential equation (SDE); coupling the evolution of the geometry to the evolution of the coarse-grained dynamical quantities; and including deterministic and stochastic dynamics. Further, because the stochastic terms are themselves parameterized, we are able to target higher-order moments of dynamical quantities of interest. We train and evaluate the parameterized SDE against filtered Lagrangian DNS data to obtain a data-driven closure to the hypothesized model. We then evaluate the trained model to recover the learned insights into the phenomenological model.
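The parameterized-SDE backbone can be sketched with an Euler–Maruyama integrator. Here a known Ornstein–Uhlenbeck drift and a constant noise amplitude stand in for the learned neural closure (in the abstract's model both terms are parameterized and fit to filtered DNS), so the stationary statistics are checkable analytically.

```python
import numpy as np

rng = np.random.default_rng(0)

def euler_maruyama(x0, drift, sigma, dt, n_steps, n_paths, rng):
    """Integrate dX = drift(X) dt + sigma dW over an ensemble of paths.
    In a learned closure, `drift` and `sigma` would be neural networks."""
    x = np.full(n_paths, x0, dtype=float)
    for _ in range(n_steps):
        x += drift(x) * dt + sigma * np.sqrt(dt) * rng.normal(size=n_paths)
    return x

# Ornstein-Uhlenbeck test case: dX = -X dt + sigma dW, whose stationary
# distribution has mean 0 and variance sigma^2 / 2
samples = euler_maruyama(1.0, lambda x: -x, 0.5, 0.01, 2000, 10000, rng)
```

Because the stochastic term enters explicitly, ensemble moments of the integrated paths (here the stationary variance) can be matched to data — the same mechanism the abstract exploits to target higher-order moments.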
|
S01.00025: Modeling active fluids via physically constrained machine learning Matthew Golden, Jyothishraj Nambisan, Alberto Fernandez-Nieves, Roman Grigoriev Active matter is abundant on Earth, especially in living systems, with examples ranging from cell division to bacterial suspensions. We investigate a particular example of active matter: a fluid driven by chemically powered molecular motors acting on a suspension of microtubules. Its dynamics should be described by a model comprising a pair of coupled partial differential equations, one governing the fluid flow and another governing the orientation of the microtubules. These equations must capture all relevant forces and torques acting on the two components, both described by tensor fields. Deriving these equations from first principles is difficult, as interactions occur over many length and time scales and not all the relevant physical processes are understood. Data-driven model discovery offers a promising alternative. We use a hybrid approach that combines general physical constraints, such as locality, causality, and symmetries, to construct a library of candidate models with symbolic regression to narrow it down. We show that this approach allows a parsimonious model of the system to be derived from experimental recordings.
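One common way to narrow a symmetry-constrained candidate library down to a parsimonious model is sparse regression. The SINDy-style sequentially thresholded least squares below is an illustrative stand-in for that step, not the authors' algorithm; the scalar library and synthetic data are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
u = rng.normal(size=200)

# Candidate library of symmetry-allowed terms (illustrative, scalar case).
library = np.column_stack([u, u**2, u**3, np.ones_like(u)])
names = ["u", "u^2", "u^3", "1"]

# Synthetic "measured" right-hand side: du/dt = 0.5 u - 2 u^3 + noise.
rhs = 0.5 * u - 2.0 * u**3 + 0.01 * rng.normal(size=u.shape)

def stlsq(Theta, y, threshold=0.1, n_iter=10):
    """Sequentially thresholded least squares: repeatedly zero out terms
    whose coefficients fall below the threshold, then refit the rest."""
    xi = np.linalg.lstsq(Theta, y, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        big = ~small
        if big.any():
            xi[big] = np.linalg.lstsq(Theta[:, big], y, rcond=None)[0]
    return xi

xi = stlsq(library, rhs)
model = {n: c for n, c in zip(names, xi) if c != 0.0}
```

Only the two terms actually present in the synthetic dynamics survive the thresholding, which is the sense in which regression "narrows down" the physically constrained library.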
|
S01.00026: Reconstruction of Turbulent High-resolution DNS Data Using Deep Learning Pranshu Pant, Amir Barati Farimani Within the domain of Computational Fluid Dynamics, Direct Numerical Simulation (DNS) is used to obtain highly accurate numerical solutions for fluid flows. However, this approach to numerically solving the Navier-Stokes equations is extremely computationally expensive, mostly due to the greatly refined grids it requires. Large Eddy Simulation (LES) presents a more computationally efficient approach for solving fluid flows on lower-resolution (LR) grids, but results in an overall reduction in solution fidelity. In this paper, we introduce a novel deep learning framework, DNS-SR Net, which aims to mitigate this inherent tradeoff between solution fidelity and computational complexity by leveraging deep learning techniques used in image super-resolution. Using our model, we learn the mapping from a coarser LR solution to a refined high-resolution (HR) DNS solution so as to eliminate the need for DNS on highly refined grids. Our model efficiently reconstructs high-fidelity DNS data from LES-like low-resolution solutions while yielding good reconstruction metrics. Our implementation thus improves the accuracy of LR solutions while incurring only the marginal computational cost of deploying the trained deep learning model.
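The LR-to-HR mapping can be sketched in its simplest form: upsample the coarse field to the target grid and add a learned convolutional correction. The functions, the zero kernel standing in for trained weights, and the 4x upscaling factor below are all illustrative assumptions, not the DNS-SR Net architecture:

```python
import numpy as np

def upsample_nearest(x, r):
    """Nearest-neighbour upsampling of a 2-D field by integer factor r."""
    return np.repeat(np.repeat(x, r, axis=0), r, axis=1)

def conv2d(x, k):
    """'Same' 2-D cross-correlation with zero padding (single channel)."""
    p = k.shape[0] // 2
    xp = np.pad(x, p)
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + k.shape[0], j:j + k.shape[1]] * k)
    return out

def sr_forward(lr_field, kernel, r=4):
    """Upsample a low-resolution field, then add a learned residual
    correction (here a fixed kernel stands in for trained weights)."""
    hr = upsample_nearest(lr_field, r)
    return hr + conv2d(hr, kernel)

lr = np.arange(16, dtype=float).reshape(4, 4)   # toy LES-like coarse field
kernel = np.zeros((3, 3))                       # stand-in for trained weights
hr = sr_forward(lr, kernel)
```

In a real super-resolution network the single kernel is replaced by stacks of trained convolutional layers, but the coarse-to-fine residual structure is the same.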
|
S01.00027: Machine Learning of Reduced Lagrangian Models of Turbulence Michael Woodward, Yifeng Tian, Michael Chertkov, Mikhail Stepanov, Daniel Livescu, Chris Fryer While it has been demonstrated that scientific machine learning can be successfully applied to many fluid dynamics applications, encoding physical constraints remains a great challenge. In this work, we develop physics-informed machine learning techniques to discover reduced Lagrangian models from turbulence simulation data. Specifically, we utilize symplectic integrators consistent with backpropagation over the parameter space while embedding physical constraints within artificial neural networks. We explore parameterized families of molecular dynamics (MD) and smoothed particle hydrodynamics (SPH) models for simulating coarse-grained Lagrangian turbulence and for validating our learning algorithms. We show that our method is capable of extracting relevant physics and can be used for data-driven discovery of parameters (e.g., smoothing kernels in SPH) while retaining a high level of interpretability and explainability. We train and evaluate our method on high-fidelity Lagrangian DNS data and show that it captures turbulent dynamics within the resolved coarse-grained scales.
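A symplectic integrator compatible with backpropagation can be illustrated with velocity Verlet and a parameterized force law. The harmonic force here is a hypothetical stand-in for the parameterized MD/SPH interactions in the work; the point is that every update is a smooth function of the parameter, so gradients can flow through the rollout, while the scheme conserves energy well over long horizons:

```python
import numpy as np

def force(q, theta):
    # Hypothetical parameterised interaction (a linear spring with
    # stiffness theta); in the paper this role is played by learnable
    # MD/SPH terms such as smoothing kernels.
    return -theta * q

def velocity_verlet(q, p, theta, dt, n_steps):
    """Symplectic velocity-Verlet rollout (unit mass); each step is
    differentiable in theta, so it is consistent with backpropagation."""
    traj = [q]
    f = force(q, theta)
    for _ in range(n_steps):
        p = p + 0.5 * dt * f        # half-step kick
        q = q + dt * p              # drift
        f = force(q, theta)
        p = p + 0.5 * dt * f        # half-step kick
        traj.append(q)
    return np.array(traj), p

theta = 4.0                          # omega^2 of a harmonic oscillator
q0, p0 = 1.0, 0.0
traj, p_end = velocity_verlet(q0, p0, theta, dt=1e-3, n_steps=1000)

# The symplectic scheme conserves energy to high accuracy.
E0 = 0.5 * p0**2 + 0.5 * theta * q0**2
E1 = 0.5 * p_end**2 + 0.5 * theta * traj[-1]**2
```

In a learning loop one would differentiate a loss on `traj` with respect to `theta` (with an autodiff framework) rather than compute gradients by hand.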
|
S01.00028: Physics-informed Machine Learning of the Lagrangian Dynamics of Velocity Gradient Tensor Yifeng Tian, Daniel Livescu, Michael Chertkov Reduced Lagrangian models describing the dynamics of the Velocity Gradient Tensor (VGT), probing the Kolmogorov scale and also coarse-grained at scales within the inertial range of turbulence, are developed under the Physics-Informed Machine Learning (PIML) framework. The coherent part of the pressure Hessian contribution is reconstructed with a Tensor-based Neural Network (TBNN) using the integrity bases and invariants of the VGT, which provides an improved representation of the magnitude and orientation of the pressure Hessian eigenvectors. The incoherent part, associated with small-scale fluctuations, is modeled using standard ML techniques. Both constructs are trained on Lagrangian data from a high-Reynolds-number Direct Numerical Simulation (DNS). Physical constraints, such as Galilean invariance, rotational invariance, and the zero-pressure-work condition, are embedded into the models. Statistics of the flow, as indicated by the joint PDF of the second and third invariants of the VGT, show good agreement with the ground-truth DNS. A number of important features describing the structure of turbulence are reproduced correctly by the model. We have also identified features, e.g., those related to inertial-range dynamics, that require more in-depth modeling. This helps us identify important directions for future research, in particular towards including inertial-range geometry in the TBNN.
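The TBNN idea is that the output tensor is expanded in an integrity basis built from the strain and rotation parts of the VGT, with scalar coefficients that depend only on invariants; rotational equivariance then holds by construction. A minimal sketch with a three-term basis and an analytic stand-in for the trained coefficient network (both are illustrative, not the paper's exact basis or network):

```python
import numpy as np

def tensor_basis(A):
    """A few integrity-basis tensors built from the strain S and
    rotation W parts of the velocity gradient A (illustrative subset)."""
    S = 0.5 * (A + A.T)
    W = 0.5 * (A - A.T)
    I = np.eye(3)
    T1 = S
    T2 = S @ W - W @ S
    T3 = S @ S - np.trace(S @ S) / 3.0 * I
    return [T1, T2, T3]

def invariants(A):
    S = 0.5 * (A + A.T)
    W = 0.5 * (A - A.T)
    return np.array([np.trace(S @ S), np.trace(W @ W)])

def tbnn_predict(A, coeff_fn):
    """H = sum_n g_n(invariants) * T^(n); g_n would be a trained network."""
    g = coeff_fn(invariants(A))
    return sum(gn * Tn for gn, Tn in zip(g, tensor_basis(A)))

# Analytic stand-in for the trained coefficient network.
coeff_fn = lambda inv: np.array([1.0, 0.5, -0.2]) * (1.0 + np.tanh(inv.sum()))

A = np.array([[0.1, 0.4, 0.0], [-0.3, 0.0, 0.2], [0.1, -0.2, -0.1]])
H = tbnn_predict(A, coeff_fn)

# Rotational equivariance: rotating the input rotates the prediction.
c, s = np.cos(0.7), np.sin(0.7)
Q = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
H_rot = tbnn_predict(Q @ A @ Q.T, coeff_fn)
assert np.allclose(H_rot, Q @ H @ Q.T)
```

Because the invariants are unchanged and every basis tensor conjugates under rotation, the constraint is exact rather than learned from data.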
|
S01.00029: Learning Physics-based Galerkin models of turbulence with Neural Differential Equations Arvind Mohan, Kaushik Nagarajan, Daniel Livescu Turbulent flow control has numerous applications, and building reduced-order models (ROMs) of the flow and its associated feedback control laws is extremely challenging. Despite the complexity of building data-driven ROMs for turbulence, deep neural networks, with their superior representational capacity, have demonstrated considerable success in learning ROMs. However, these strategies are typically devoid of physical foundations and often lack interpretability. Conversely, the Proper Orthogonal Decomposition (POD) based Galerkin projection (GP) approach to ROMs has been popular, with successes in many problems. A key limitation is that the ordinary differential equations (ODEs) arising from GP ROMs are highly susceptible to instabilities due to truncation of POD modes, which leads to deteriorating temporal predictions. In this work, we propose a deep learning approach that blends the strengths of both strategies by incorporating neural networks directly into the GP ODE formulation. Given the structure of the projected equations, the resulting Neural Galerkin approach implicitly learns stable ODE coefficients from POD data and demonstrates significantly longer time-horizon predictions. Finally, we demonstrate various applications of the Neural Galerkin projection approach compared to traditional GP ROMs, including learning stable ODEs when only the partial structure of the equation is known.
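A GP ROM for the POD coefficients has the structure da/dt = L a + a.Q.a, and a learned term can be inserted directly into that right-hand side. The sketch below, with toy operators and a damping stand-in for the trained closure network, shows the shape of such a hybrid model; it is an assumption-laden illustration, not the authors' Neural Galerkin formulation:

```python
import numpy as np

def galerkin_rhs(a, L, Q, closure):
    """POD-Galerkin ODE: linear term + quadratic term + learned closure.
    In the paper the closure would be a neural network trained so the
    truncated ODE stays stable; here it is a stand-in damping term."""
    quad = np.einsum("ijk,j,k->i", Q, a, a)   # quad_i = Q[i,j,k] a_j a_k
    return L @ a + quad + closure(a)

def rk4_step(a, dt, rhs):
    """Classical fourth-order Runge-Kutta step."""
    k1 = rhs(a)
    k2 = rhs(a + 0.5 * dt * k1)
    k3 = rhs(a + 0.5 * dt * k2)
    k4 = rhs(a + dt * k3)
    return a + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

r = 3                                  # number of retained POD modes
L = -0.1 * np.eye(r)                   # illustrative projected operators
Q = np.zeros((r, r, r))
Q[0, 1, 2] = 1.0
Q[1, 2, 0] = -1.0                      # energy-neutral quadratic exchange
closure = lambda a: -0.05 * a          # stand-in for the trained network

a = np.array([1.0, 0.5, -0.5])
for _ in range(1000):                  # integrate to t = 10
    a = rk4_step(a, 1e-2, lambda x: galerkin_rhs(x, L, Q, closure))
```

Here the quadratic term only exchanges energy between modes, so the damping closure controls the long-time behaviour; that is the stabilising role the learned term plays in the Neural Galerkin approach.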
|
S01.00030: Designing networks to accurately learn 2D turbulence closures Keaton Burns, Ronan Legin, Adrian Liu, Laurence Perreault-Levasseur, Yashar Hezaveh, Siamak Ravanbakhsh, Gregory Wagner Scientifically meaningful deployment of machine-learned subgrid closures in large-eddy simulations (LES) requires learned closures to be more accurate or faster to compute than existing closure models. Here we present a systematic study of the accuracy of neural LES closures for forced 2D turbulence as a function of network architecture and hyperparameters. We examine statistically steady flows where we can control the location of the filtering scale with respect to the stationary spectrum, and include a range of architectures that allow us to distinguish the effects of nonlocality and finite-differencing errors on closure accuracy. We consider fully connected, convolutional, and U-Net architectures trained on filtered snapshots from highly resolved direct numerical simulations (DNS). We vary the breadth and depth of the networks as well as the input variables and cost functions used during training. We examine how these choices impact the accuracy of the learned closures in predicting the true subgrid stresses from DNS, and how they affect the statistics of new coarse forward models (LES) using the learned closures.
|
S01.00031: Turbulence closure modeling with machine-learning methods: Influence of choice of neural network and training procedure Salar Taghizadeh, Yassin Hassan, Freddie Witherden, Sharath Girimaji Generalizability of machine-learning (ML) assisted turbulence closure models to unseen flows remains an important challenge. It is well known from the computer vision community that the architecture of a neural network and the manner of training have a profound influence on the performance of the resulting model [Goodfellow et al., Deep Learning, MIT Press, 2016]. The objective of the present work is to characterize the relationship among the choice of network (in terms of the number of nodes and layers), the type of layers (fully connected or convolutional), the set of training flows, and the domain of generalizability. We also examine the impact of the training procedure and of techniques such as dropout. For a given set of training data (from different flows), it is reasonable to expect that most networks would perform reasonably well in predictive computations of similar classes of flows. However, it is unclear how the closure model network will perform on a class of flows different from the training flows. In our study, two sets of training and prediction flows are considered: (i) training in simple rectilinear shear flows and prediction of separated flows; and (ii) training in one type of separated flow and prediction of a different type of separated flow. It is expected that this line of investigation will lead to a formal procedure for selecting the optimal neural network for turbulence closure modeling, contingent upon the training data sets and the targeted prediction flow classes.
|
S01.00032: Turbulence closure modeling with Machine-Learning Methods: Can RANS overcome the curse of averaging? Sharath Girimaji The Reynolds-averaged Navier-Stokes (RANS) method is the most commonly used turbulence closure approach in engineering applications due to its inherent simplicity and reasonable predictive capability in elementary flows. However, RANS models are restricted in their applicability, as many complicating influences cannot be adequately accounted for at this level of turbulence closure. In recent years, data-driven methods, specifically machine learning (ML) procedures, have been used to extend the capability of RANS to complex flows. At this stage of development, the extent to which ML can help RANS models overcome their inherent inadequacies is unclear. In this work, we demonstrate that averaging the governing equations over all scales of motion renders the RANS method intrinsically inadequate in many respects. This `curse' of averaging stems from the fact that many critical physical processes occurring in the fluctuating fields cannot be accurately represented in terms of low-order statistics. We investigate the types of physical effects that cannot be captured within the RANS paradigm even with the best data-driven procedures. It is expected that the findings will help better delineate the limitations of ML turbulence models and temper expectations.
|
S01.00033: A Deep Learning Based Physics Informed Continuous Spatio-Temporal Super-Resolution Framework Soheil Esmaeilzadeh, Chiyu Max Jiang, Kamyar Azizzadenesheli, Karthik Kashinath, Mustafa Mustafa, Hamdi A. Tchelepi, Philip Marcus, Mr Prabhat, Anima Anandkumar We propose a novel deep-learning-based super-resolution framework to generate continuous (grid-free) spatio-temporal solutions from low-resolution inputs. While computationally efficient, our proposed framework accurately recovers the fine-scale quantities of interest and allows: (i) the output to be sampled at any spatio-temporal resolution; (ii) a set of Partial Differential Equation (PDE) constraints to be imposed; and (iii) training on fixed-size inputs on arbitrarily sized spatio-temporal domains, owing to its fully convolutional encoder. We empirically study the performance of our framework on the task of super-resolution of turbulent flows in the Rayleigh-Bénard convection problem. Across a diverse set of evaluation metrics, we show that our proposed framework significantly outperforms existing baselines. Furthermore, we provide a large-scale implementation of our framework and show that it scales efficiently across large clusters, achieving 96.80 percent scaling efficiency on up to 128 GPUs and a training time of less than 4 minutes. We provide an open-source implementation of our method that supports arbitrary combinations of PDE constraints.
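A PDE constraint of the kind mentioned in (ii) is typically imposed as a soft penalty added to the data-mismatch loss. The sketch below penalises the discrete divergence of a predicted 2-D velocity field (incompressibility); the function names, penalty weight, and grid are illustrative assumptions, not the framework's API:

```python
import numpy as np

def divergence(u, v, dx):
    """Central-difference divergence of a 2-D velocity field on a
    periodic grid (u varies along axis 1 = x, v along axis 0 = y)."""
    dudx = (np.roll(u, -1, axis=1) - np.roll(u, 1, axis=1)) / (2 * dx)
    dvdy = (np.roll(v, -1, axis=0) - np.roll(v, 1, axis=0)) / (2 * dx)
    return dudx + dvdy

def sr_loss(pred_u, pred_v, true_u, true_v, dx, lam=0.1):
    """Data mismatch plus a soft incompressibility (PDE) penalty, the
    kind of constraint the framework lets users impose on the output."""
    mse = np.mean((pred_u - true_u) ** 2 + (pred_v - true_v) ** 2)
    pde = np.mean(divergence(pred_u, pred_v, dx) ** 2)
    return mse + lam * pde

# A divergence-free (Taylor-Green) field incurs zero PDE penalty.
n = 32
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x)
u = np.cos(X) * np.sin(Y)
v = -np.sin(X) * np.cos(Y)

div = divergence(u, v, dx)
loss = sr_loss(u, v, u, v, dx)
```

In training, the penalty gradient pushes the network toward outputs that satisfy the governing equations even where no high-resolution data exist.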
|
S01.00034: Developing an automatic calibration tool for turbulence closure models using machine learning techniques Ismael Boureima, Vitaliy Gyrya, Juan Saenz, Susan Kurien We present a new data-driven methodology, using machine-learning techniques, to develop, test, and optimize turbulence closure models. The proposed methodology is validated by automatically tuning and calibrating the parameter coefficients of the BHR 3.1 turbulence closure model against reference statistics from direct numerical simulation (DNS) of two canonical turbulent flows: homogeneous variable-density turbulence and the Rayleigh-Taylor instability. Two approaches are considered: a static approach, which minimizes the instantaneous rate of deviation of the model from the DNS data, and a dynamic approach, which considers the deviation over a finite (rather than infinitesimal) time interval. Both approaches were found to work with a high degree of accuracy in the ideal case where the ground-truth data were generated by the model itself. However, on actual DNS data, the static method was found to approximate well only the short-time (instantaneous) limit of the dynamics. We will contrast results obtained using the different approaches, discuss their merits and limitations, and suggest possible remedies. We will also discuss various challenges and decisions that were made along the way.
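The static/dynamic distinction can be made concrete with a toy one-coefficient closure. Below, a scalar decay ODE stands in for the BHR equations (a loud assumption); the static calibration fits instantaneous rates by least squares, while the dynamic calibration minimises trajectory mismatch over a finite interval:

```python
import numpy as np

def model_rhs(k, c):
    """Toy closure ODE dk/dt = -c*k for a decaying turbulence statistic;
    an illustrative stand-in for the coefficient-bearing BHR equations."""
    return -c * k

def rollout(k0, c, dt, n):
    """Forward-Euler integration of the toy model."""
    k = np.empty(n + 1)
    k[0] = k0
    for i in range(n):
        k[i + 1] = k[i] + dt * model_rhs(k[i], c)
    return k

# Synthetic "DNS" reference generated with a known coefficient c_true.
c_true, dt, n = 1.3, 1e-3, 2000
k_dns = rollout(1.0, c_true, dt, n)

# Static approach: match instantaneous rates dk/dt via least squares.
rates = np.gradient(k_dns, dt)
c_static = -np.sum(rates * k_dns) / np.sum(k_dns**2)

# Dynamic approach: minimise trajectory mismatch over the whole interval
# (a coarse 1-D scan standing in for a gradient-based optimiser).
cs = np.linspace(0.5, 2.0, 151)
errs = [np.mean((rollout(1.0, c, dt, n) - k_dns) ** 2) for c in cs]
c_dynamic = cs[int(np.argmin(errs))]
```

On this ideal, model-generated data both approaches recover the true coefficient, mirroring the ideal-case result above; on real DNS data the two would disagree, which is the behaviour the abstract reports.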