Bulletin of the American Physical Society
73rd Annual Meeting of the APS Division of Fluid Dynamics
Volume 65, Number 13
Sunday–Tuesday, November 22–24, 2020; Virtual, CT (Chicago time)
Session R01: Focus Session: Deep Learning in Experimental and Computational Fluid Mechanics (Part I) (5:00pm - 5:45pm CST), Interactive On Demand
R01.00001: Pollution Transport Simulation and Machine-Learning Aided Source Detection in Metropolitan Areas Sarah Zhang Air pollution is one of the world’s largest environmental health threats. This study aims to use remote signals to locate the source of pollution release, which will strengthen our readiness to counter its threat. In urban areas, flow structures advecting pollution are extremely complex: boundary layer separation generates vortical structures that increase pollutant spread and break the plume into smaller patches by dispersion. Flow structures were obtained by solving the two-dimensional Navier-Stokes equations using Computational Fluid Dynamics in a simplified scenario with idealized urban geometries. A conventional neural network was applied to relate characteristics of pollutant detector signals to the release location. The proposed algorithm identified the source and its uncertainty through a Monte Carlo analysis. When the number of training samples was small, as limited by the number of trial releases that can be performed in practice, data augmentation was done by introducing noisy measurements as new training samples. In the results, sensors away from the centerline of the flow outperformed those near it, indicating that boundary layer separation enhanced the differentiability between sensor measurements from various sources and improved source reconstruction for off-center sensors. [Preview Abstract] |
R01.00002: Unstructured fluid flow data recovery using machine learning and Voronoi diagrams Kai Fukami, Romit Maulik, Nesar Ramachandra, Kunihiko Taira, Koji Fukagata Recent studies have demonstrated the strengths of convolutional neural networks (CNNs) in a range of applications in fluid dynamics. However, most studies have been performed on structured grids since traditional convolutional operations in CNNs are founded on image processing. We here introduce the use of a Voronoi diagram, as a simple data preprocessing step, to interface the structured grid-based convolutional methods and unstructured data arising from sparse sensor placements or unstructured grids widely used in numerical simulations. The Voronoi diagram provides a structured-grid approximation of low-dimensional measurements based on Euclidean distance from the unstructured data. The present idea serves as a proof of concept for spatial fluid flow reconstruction on unstructured grids or from randomly placed sensors. To demonstrate the overall CNN approach with the Voronoi diagram inputs, we consider (1) two-dimensional cylinder wake, (2) NOAA sea surface temperature, and (3) turbulent channel flow. We show that the present CNN with the Voronoi idea can reconstruct the high-resolution flow field from coarse information. Our results reveal that the unstructured fluid data sets can be handled by CNNs without considering complex machine learning algorithms. [Preview Abstract] |
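The preprocessing step described above can be sketched in a few lines. The following is an illustrative example (grid size, sensor count, and the scalar field are assumptions, not the authors' configuration) of how a nearest-sensor lookup produces the Voronoi-tessellated input image, plus a sensor-location mask, for a standard CNN.

```python
# Sketch of Voronoi-diagram preprocessing for sparse sensors (illustrative only):
# every structured-grid point takes the value of its nearest sensor, which is
# exactly a Voronoi partition of the domain.
import numpy as np
from scipy.spatial import cKDTree

def voronoi_tessellate(sensor_xy, sensor_values, nx=128, ny=64):
    """Return (field, mask) on an nx-by-ny grid from scattered sensor data."""
    gx, gy = np.meshgrid(np.linspace(0.0, 1.0, nx), np.linspace(0.0, 1.0, ny))
    grid_points = np.column_stack([gx.ravel(), gy.ravel()])
    _, nearest = cKDTree(sensor_xy).query(grid_points)   # nearest-sensor index
    field = sensor_values[nearest].reshape(ny, nx)       # Voronoi-filled image
    mask = np.zeros((ny, nx))                            # marks true sensor cells
    ix = np.clip((sensor_xy[:, 0] * (nx - 1)).round().astype(int), 0, nx - 1)
    iy = np.clip((sensor_xy[:, 1] * (ny - 1)).round().astype(int), 0, ny - 1)
    mask[iy, ix] = 1.0
    return field, mask

# Example with 30 random sensors sampling an arbitrary scalar field
rng = np.random.default_rng(0)
xy = rng.random((30, 2))
vals = np.sin(4 * np.pi * xy[:, 0]) * np.cos(2 * np.pi * xy[:, 1])
field, mask = voronoi_tessellate(xy, vals)
cnn_input = np.stack([field, mask], axis=0)   # (channels, ny, nx) for a CNN
```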
R01.00003: Estimating model error using sparsity-promoting ensemble Kalman inversion Jinlong Wu, Tapio Schneider, Andrew Stuart Closure models are widely used in simulating complex systems such as turbulence and Earth’s climate, for which direct numerical simulation is too expensive. Although it is almost impossible to perfectly reproduce the true system with closure models, it is often sufficient to correctly reproduce time-averaged statistics. Here we present a sparsity-promoting, derivative-free optimization method to estimate model error from time-averaged statistics. Specifically, we show how sparsity can be imposed as a constraint in ensemble Kalman inversion (EKI), resulting in an iterative quadratic programming problem. We illustrate how this approach can be used to quantify model error in the closures of dynamical systems. In addition, we demonstrate the merit of introducing stochastic processes to quantify model error for certain systems. We also present the potential of replacing existing closures with purely data-driven closures using the proposed methodology. The results show that the proposed methodology provides a systematic approach to estimate model error in closures of dynamical systems. [Preview Abstract] |
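For readers unfamiliar with ensemble Kalman inversion, the basic (non-sparse) update step can be sketched as follows; the sparsity-promoting quadratic-programming step described in the abstract is omitted, and the toy linear forward map is purely illustrative.

```python
# Illustrative EKI step (not the authors' implementation): estimate parameters
# theta from time-averaged statistics y using only forward-model evaluations.
import numpy as np

def eki_step(theta, forward, y, gamma, rng):
    """One EKI iteration. theta: (J, p) ensemble; forward: (p,) -> (d,); y: (d,)."""
    J = theta.shape[0]
    g = np.array([forward(t) for t in theta])        # (J, d) forward evaluations
    dtheta = theta - theta.mean(axis=0)
    dg = g - g.mean(axis=0)
    C_tg = dtheta.T @ dg / J                         # parameter-output covariance
    C_gg = dg.T @ dg / J                             # output covariance
    noise = rng.multivariate_normal(np.zeros(len(y)), gamma, size=J)
    innovation = y + noise - g                       # perturbed data misfit
    return theta + innovation @ np.linalg.solve(C_gg + gamma, C_tg.T)

# Toy usage: recover 3 closure parameters from 5 "time-averaged statistics"
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
truth = np.array([1.0, 0.0, -0.5])
gamma = 0.01 * np.eye(5)
y = A @ truth
theta = rng.standard_normal((100, 3))                # initial ensemble
for _ in range(10):
    theta = eki_step(theta, lambda t: A @ t, y, gamma, rng)
print(theta.mean(axis=0))                            # approaches the truth
```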
R01.00004: Deep Operator Neural Networks (DeepONets) for prediction of instability waves in high-speed boundary layers Patricio Clark Di Leoni, Charles Meneveau, George Karniadakis, Tamer Zaki We show how DeepONets can predict the amplification of instability waves in high-speed flows. In contrast to traditional networks that are intended to approximate functions, DeepONets are designed to approximate operators and functionals. Using this framework, we train a DeepONet that takes as inputs an upstream disturbance and a downstream location of interest, and provides as output the amplified profile at the downstream position in the boundary layer. The DeepONet thus approximates the linearized Navier-Stokes operator for this flow. Once trained, the network can perform predictions of the downstream flow for a wide variety of inflow conditions without the need to calculate the whole trajectory of the perturbations, and at a very small computational cost compared to discretization of the original flow equations. [Preview Abstract] |
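A minimal DeepONet of the kind described above, with a branch net for the upstream disturbance and a trunk net for the downstream query location, might look as follows (layer sizes and sensor counts are assumptions, not the authors' architecture).

```python
# Illustrative DeepONet sketch in PyTorch: the dot product of branch and trunk
# features approximates the operator output G(u)(x).
import torch
import torch.nn as nn

class DeepONet(nn.Module):
    def __init__(self, m_sensors=100, coord_dim=2, width=64, p=50):
        super().__init__()
        self.branch = nn.Sequential(nn.Linear(m_sensors, width), nn.Tanh(),
                                    nn.Linear(width, p))
        self.trunk = nn.Sequential(nn.Linear(coord_dim, width), nn.Tanh(),
                                   nn.Linear(width, p))

    def forward(self, u_inflow, x_query):
        # u_inflow: (batch, m_sensors) samples of the upstream disturbance
        # x_query: (batch, coord_dim) downstream location(s) of interest
        b = self.branch(u_inflow)
        t = self.trunk(x_query)
        return (b * t).sum(dim=-1, keepdim=True)   # G(u)(x) ~ sum_k b_k t_k

net = DeepONet()
u = torch.randn(8, 100)       # 8 inflow disturbances
x = torch.rand(8, 2)          # query locations (e.g., streamwise, wall-normal)
pred = net(u, x)              # predicted perturbation amplitude at x
```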
R01.00005: Active Learning of Nonlinear Operators for Forecasting Extreme and Rare Events Themistoklis Sapsis, George Karniadakis We formulate algorithms capable of predicting rare extreme events occurring in complex dynamical systems using only scarce, but carefully chosen, data-points produced by an accurate (but expensive) model or experiment. For these problems modern machine-learning methods have very limited capability as the phenomena of interest are typically transient, i.e. they `live' away from the statistical steady state of the chaotic attractor. This feature combined with the fact that the majority of machine-learning schemes have non-guaranteed generalization properties leads to limited applicability to these problems. We utilize machine learned representations of operators (in contrast to functions) and train those using carefully chosen datasets. In particular, we follow the paradigm of active learning, whereby existing samples of a black-box function are utilized to optimize the next most informative sample. We develop a new class of acquisition functions for sample selection that leads to faster convergence in applications related to statistical quantification of rare events. The proposed method relies on the fact that some directions of input functional space have a larger impact on the output than others, which is important especially for systems exhibiting rare and extreme events. [Preview Abstract] |
R01.00006: Reconstruction of turbulent data with deep generative models for semantic inpainting from TURB-Rot database Michele Buzzicotti, Fabio Bonaccorso, Patricio Clark Di Leoni, Luca Biferale We study the applicability of tools developed by the computer vision community for feature learning and semantic image inpainting to perform data reconstruction of fluid turbulence configurations. The aim is twofold. First, we explore, on a quantitative basis, the capability of Convolutional Neural Networks embedded in a Deep Generative Adversarial Model (Deep-GAN) to generate missing data in turbulence, a paradigmatic high dimensional chaotic system. In particular, we investigate their use in reconstructing two-dimensional damaged snapshots extracted from a large database of numerical configurations of 3D turbulence in the presence of rotation, a case with multi-scale random features where both large-scale organised structures and small-scale highly intermittent and non-Gaussian fluctuations are present. Second, following a reverse engineering approach, we aim to rank the input flow properties (features) in terms of their qualitative and quantitative importance to obtain a better set of reconstructed fields. Finally, we present a comparison with a different data assimilation tool, based on Nudging, an equation-informed unbiased protocol, well known in the numerical weather prediction community. M. Buzzicotti, et al. arXiv preprint arXiv:2006.09179 (2020). [Preview Abstract] |
R01.00007: Non-invasive Inference of Thrombus Material Properties with Physics-informed Neural Networks Minglang Yin, Xiaoning Zheng, Jay Humphrey, George Karniadakis We employ physics-informed neural networks (PINNs) to infer properties of biological materials using synthetic data. In particular, we successfully apply PINNs to infer the thrombus permeability and visco-elastic modulus from thrombus deformation data, which can be described by the fourth-order Cahn-Hilliard and Navier-Stokes equations. In addition, to tackle the challenge of calculating the fourth-order derivative in the Cahn-Hilliard equation with automatic differentiation, we introduce an auxiliary network along with the main neural network to approximate the second derivative of the energy potential term. Our model can simultaneously predict the unknown parameters and the velocity, pressure, and deformation gradient fields by training with only partial information, i.e., phase-field and pressure measurements, and is also highly flexible in sampling within the spatio-temporal domain for data acquisition. We validate our model against numerical solutions from the spectral element method (SEM) and demonstrate its robustness by training it with noisy measurements. Our results show that PINNs can accurately infer the material properties with noisy synthetic data, and thus they have great potential for inferring these properties from experimental data. [Preview Abstract] |
R01.00008: Application of a Machine Learning Turbulent and Non-turbulent Classification Method to Wall Modeled LES of Transitional Channel Flows Ghanesh Narasimhan, Charles Meneveau, Tamer Zaki While wall-resolved large eddy simulation (LES) can predict laminar-to-turbulence transition, further reduction in computational cost by wall modeling compromises the ability to accurately capture the transition process. This issue arises, in part, because the wall model assumes the flow is in a statistically stationary turbulent state and hence incorrectly prescribes turbulent wall stresses in laminar regions. We retain the application of the wall model within the turbulent regions of transitional channel flow where even nascent spots exhibit high-Reynolds number characteristics, and we exclude the model from laminar regions. The distinction is performed using a self-organizing map (Wu et al, PRF 2019), an unsupervised machine-learning classifier. We discuss the capability of WMLES with turbulent/non-turbulent classification (WMSOM) in predicting both natural and bypass transitions in channel flow. Predictions of bypass transition agree well with DNS, while for natural transition both K- and H-type are predicted with only a slight delay in the transition time. In addition, the approach offers a significant reduction in computational cost. [Preview Abstract] |
R01.00009: Learning high dimensional surrogates from mantle convection simulations Siddhant Agarwal, Nicola Tosi, Pan Kessel, Doris Breuer, Sebastiano Padovan, Grégoire Montavon Exploring the high-dimensional parameter space governing 2D or 3D mantle convection simulations is computationally challenging. Hence, surrogates are helpful. Using 10,000 simulations of Mars' thermal evolution carried out in a 2D cylindrical-shell geometry, we recently demonstrated that Neural Networks (NN) can take five key parameters (initial temperature, radial distribution of radiogenic elements, reference viscosity, pressure- and temperature-dependence of the viscosity) plus time as an additional variable, and predict the 1D horizontally-averaged temperature profile at any time during 4.5 billion years of evolution. We now extend this work and attempt to predict the entire 2D temperature field which contains more information than the 1D profile such as the structure of plumes and downwellings. First, we compress the temperature fields by a factor of 70 using a convolutional autoencoder. Then, we use NNs to predict this compressed (latent) state from five parameters (plus time). The predictions on the test set are 99.5% accurate on average. Animations of the true and predicted thermal evolutions show that the 0.5% error comes from a failure to capture small-scale structures, thereby motivating further research. [Preview Abstract] |
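The two-stage surrogate described above can be sketched as follows (field resolution, latent dimension, and layer sizes are illustrative assumptions, not the authors' architecture): an autoencoder compresses the 2D temperature field, and a small fully connected network maps the five parameters plus time to the latent state, which the decoder expands back to a full field.

```python
# Sketch of a convolutional autoencoder plus a parameters-to-latent regressor.
import torch
import torch.nn as nn

class FieldAutoencoder(nn.Module):
    def __init__(self, latent=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(), nn.Linear(32 * 16 * 16, latent))
        self.decoder = nn.Sequential(
            nn.Linear(latent, 32 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (32, 16, 16)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

# Regressor: [5 physical parameters, time] -> latent code of the temperature field
latent_net = nn.Sequential(nn.Linear(6, 256), nn.ReLU(), nn.Linear(256, 128))

ae = FieldAutoencoder()
field = torch.randn(4, 1, 64, 64)      # batch of temperature snapshots (placeholder)
recon, z = ae(field)                   # stage 1: compression
params = torch.randn(4, 6)             # [5 parameters, time]
z_pred = latent_net(params)            # stage 2: predict latent state
field_pred = ae.decoder(z_pred)        # decode to the full 2D field
```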
R01.00010: Deep Reinforcement Learning for Bluff Body Active Flow Control in Experiments and Simulations. Dixia Fan, Liu Yang, Zhicheng Wang, Michael Triantafyllou, George Karniadakis We demonstrate in experimental environments the feasibility of applying deep reinforcement learning (DRL) in complex fluid applications, automatically discovering active control strategies without any prior knowledge of the flow physics. We demonstrate the methodology in the active control of the turbulent flow past a circular cylinder with the aim of reducing the drag force. We maximize the power gain efficiency by properly selecting the rotation of two small-diameter cylinders located parallel to and downstream of the main cylinder. By properly defining rewards and noise-reduction techniques, and after an automatic sequence of tens of towing experiments, the DRL agent is shown to discover a control strategy that is comparable to the optimal strategy found through lengthy planned control experiments. In addition, companion DRL-guided simulations illustrate the flow mechanism: the fast rotation of the small cylinders reattaches the flow at the rear of the main cylinder and hence significantly reduces the pressure drag. While DRL has been used effectively in recent flow simulation studies, this is the first time that its effectiveness is demonstrated experimentally, providing a potential paradigm shift in conducting fluid experiments and paving the way for exploring even more complex flow phenomena. [Preview Abstract] |
R01.00011: Super-resolution and Denoising of Fluid Flows Using Physics-informed Convolutional Neural Networks Jian-Xun Wang, Han Gao, Luning Sun High-resolution (HR) information of fluid flows, although preferable, is usually less accessible due to limited computational or experimental resources. In many cases, fluid data are sparse, incomplete, and possibly noisy. How to enhance the spatial resolution and decrease the noise levels of fluid flow data is important and practically useful. Deep learning (DL) techniques have been demonstrated effective for super-resolution (SR) tasks, which, however, largely rely on sufficient HR labeled data for training. In this work, we present a novel weakly-supervised or unsupervised DL-based SR framework based on physics-informed convolutional neural networks (CNN), which can generate HR flow fields from low-resolution (LR) inputs in high-dimensional parameter space. By leveraging conservation laws and flow conditions, the CNN SR model can be trained even without using any HR labeled data. Numerical examples of several fluid flows have been used to demonstrate the effectiveness and merit of the proposed method. [Preview Abstract] |
R01.00012: Stable and Generalizable Subgrid Modeling of Forced Burgers Turbulence Using Neural Networks and Transfer Learning Adam Subel, Ashesh Chattopadhyay, Yifei Guan, Pedram Hassanzadeh In order to model turbulence on computationally affordable coarse grids, methods like LES and RANS are used to model the feedback of the subgrid scales not explicitly resolved. Recently, there has been an increasing interest in using machine learning to improve the accuracy of these subgrid-scale models by either directly predicting the subgrid closure terms, or by estimating coefficients for LES or RANS. These data-driven methods lose accuracy when the parameters (e.g. the Reynolds number) of the system change from those of the training set. This is an obstacle for practical applications as long, high-quality simulations of high-Re turbulent flows, and thus enough data for training, may not be available. Here, using the forced Burgers equation as a test bed, we look at the generalization of a data-driven parameterization method, which uses a regularized artificial neural network (ANN) to stably predict the subgrid closure terms a posteriori. We find that a fivefold increase in the Reynolds number degrades the performance of the ANN. Transfer learning provides a practical solution to this problem: By taking an ANN trained on a lower Reynolds number and using a small dataset to re-train the final layers of the ANN, we show that we can recapture the statistics of the subgrid terms. [Preview Abstract] |
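The transfer-learning step described above, freezing early layers of a network trained at a lower Reynolds number and re-training the final layers on a small higher-Re dataset, can be sketched as follows (the network size and the data are placeholders, not the authors' setup).

```python
# Illustrative transfer learning for a subgrid-closure ANN.
import torch
import torch.nn as nn

ann = nn.Sequential(                 # stand-in subgrid-closure network
    nn.Linear(10, 64), nn.ReLU(),    # input: stencil of resolved velocities
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1))                # output: subgrid closure term

# Pretend this was trained at the lower Reynolds number, then:
for p in ann[0].parameters():        # freeze the first layer
    p.requires_grad = False

optimizer = torch.optim.Adam(
    [p for p in ann.parameters() if p.requires_grad], lr=1e-4)
loss_fn = nn.MSELoss()

# Small high-Re re-training set (synthetic placeholders here)
x_hi, y_hi = torch.randn(256, 10), torch.randn(256, 1)
for epoch in range(100):             # re-train only the unfrozen layers
    optimizer.zero_grad()
    loss = loss_fn(ann(x_hi), y_hi)
    loss.backward()
    optimizer.step()
```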
R01.00013: Statistically constrained neural networks for augmenting LES wall modeling Yue Hao, Charles Meneveau, Tamer Zaki The equilibrium wall model in large-eddy simulations (LES) is designed to predict the mean behavior but, applied to instantaneous realizations, it appreciably underpredicts the variance of the stress. We introduce a formalism whereby the equilibrium model provides a prior estimation of the stress, and a statistically constrained neural network (NN) provides a correction to that estimate. The network design is motivated by universal properties of the joint probability density functions of the local LES Reynolds number at the first grid point above the wall and the instantaneous wall stress. Inputs and outputs of the network are normalized, conditioned on the estimate from the equilibrium wall model; and the loss function is designed to ensure the statistics of the corrected stress match the universal trends. Spatially filtered data from the JHTDB channel flow at $Re_{\tau}=1000$ and 5200 are used for training and testing. A priori tests are performed to assess the accuracy of the model relative to filtered wall stress from the database. The NN demonstrates better accuracy than the equilibrium wall model in (i) predicting statistics of the wall stress and (ii) the correlation of instantaneous predictions with the true filtered stress. [Preview Abstract] |
R01.00014: FiniteNet: A Fully Convolutional LSTM Network Architecture for Time-Dependent Partial Differential Equations Ben Stevens, Tim Colonius In this work, we present a machine learning approach for reducing the error when numerically solving fluid mechanics problems governed by time-dependent partial differential equations (PDE). We use a fully convolutional LSTM network to exploit the spatiotemporal dynamics of PDEs. The neural network serves to enhance finite-difference and finite-volume methods (FDM/FVM) that are commonly used to solve PDEs in fluid mechanics, allowing us to maintain guarantees on the order of convergence of our method. We train the network on simulation data, and show that our network can significantly reduce error compared to the baseline algorithms. We also explore the effect of adding a temporal modeling component to the method through the LSTM, and compare the results we can achieve using this strategy to other temporal modeling techniques. We demonstrate our method on three PDEs relevant to flow problems that each feature qualitatively different dynamics: the linear advection equation, which propagates its initial conditions at a constant speed, the inviscid Burgers' equation, which develops shockwaves, and the Kuramoto-Sivashinsky (KS) equation, which is chaotic. [Preview Abstract] |
R01.00015: Visualization of internal procedure in neural networks for fluid flows Masaki Morimoto, Kai Fukami, Koji Fukagata In recent years, many researchers have explored the use of neural networks for various problems in fluid dynamics. To promote their practical use, we aim here to increase the interpretability of machine learning models, i.e., to provide an understandable explanation of their results. Generally, the internal structure of deep networks is complicated, and we often encounter difficulty in its interpretation due to a massive number of parameters and nonlinear activation functions. In the present talk, we introduce two ways of visualizing the internal procedure of neural networks following our previous studies, i.e., (1) $C_D$ prediction for a cylinder wake (Fukami et al., Theor. Comput. Fluid Dyn., 2020) and (2) experimental velocity estimation from PIV images (Morimoto et al., arXiv:2005.00756). The visualization of each layer for the trained network is demonstrated first. We find that the upstream layer attends mainly to the alignment of the bodies, while the downstream layer is more related to the velocity fluctuations. We also use gradient-weighted class activation mapping (Grad-CAM), which can map the influential regions. We anticipate that both methods will serve as powerful tools for the interpretation of various neural networks in fluid flow problems. [Preview Abstract] |
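Grad-CAM itself is straightforward to implement; a hedged sketch for a CNN that outputs a scalar flow quantity is given below (the model and the layer name are placeholders, not the networks used in the abstract).

```python
# Illustrative Grad-CAM: weight the activations of a chosen convolutional layer
# by the gradient of the scalar output (e.g., a predicted drag coefficient) to
# map influential regions of the input flow field.
import torch
import torch.nn.functional as F

def grad_cam(model, conv_layer, x):
    feats, grads = {}, {}
    h1 = conv_layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
    h2 = conv_layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))
    out = model(x)                       # scalar prediction per sample
    out.sum().backward()
    h1.remove(); h2.remove()
    w = grads['g'].mean(dim=(2, 3), keepdim=True)       # channel weights
    cam = F.relu((w * feats['a']).sum(dim=1, keepdim=True))
    return F.interpolate(cam, size=x.shape[-2:], mode='bilinear',
                         align_corners=False)

# Usage with any CNN whose forward() returns a scalar per sample
# (model and "conv3" are hypothetical names):
# heat_map = grad_cam(model, model.conv3, flow_snapshot)   # (batch, 1, H, W)
```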
R01.00016: Emulating turbulence via a Physics-Informed Deep Learning framework Mohammadreza Momenifar, Enmao Diao, Vahid Tarokh, Andrew D. Bragg We use a data-driven approach to model a three-dimensional turbulent flow using cutting-edge Deep Learning techniques. The deep learning framework incorporates physical constraints on the flow, such as preserving incompressibility and global statistical invariants and relationships for the filtered strain-rate and vorticity. The accuracy of the model is assessed using statistical and physics-based metrics. The data set comes from Direct Numerical Simulation of an incompressible, statistically stationary, isotropic turbulent flow in a cubic box. Since the dataset is memory intensive, we first generate a low-dimensional representation of the velocity data, and then pass it to a sequence prediction network that learns the spatial and temporal correlations of the underlying data. The dimensionality reduction is performed using a Vector-Quantized Variational Autoencoder (VQ-VAE), which learns discrete latent variables. For the sequence forecasting, the Transformer architecture from natural language processing is used, and its performance is compared against more standard recurrent networks (such as the convolutional LSTM). Detailed results on the multi-scale turbulence properties predicted by the model will be presented in the talk. [Preview Abstract] |
R01.00017: Autoencoded Reservoir Computing for the Spatio-Temporal Prediction of a Turbulent Flow Nguyen Anh Khoa Doan, Wolfgang Polifke, Luca Magri The spatio-temporal prediction of turbulence is challenging because of the sensitivity of the temporal evolution of the flow to perturbations, the nonlinear spatial interactions between turbulent structures of different scales, and the seemingly-random nature of sudden energy/dissipation bursts, which are extreme events. However, turbulence exhibits spatio-temporal correlations, such as the energy cascade, which can be inferred by a data-driven method. We develop an AutoEncoded Reservoir Computing (AE-RC) framework to predict the evolution of turbulent flows. The AE-RC consists of a Convolutional Autoencoder, which learns an efficient latent representation of the flow state, and a reservoir approach based on Echo State Networks, which learns the time evolution of the flow in the latent space. The AE-RC is applied to learn the dynamics of the 2D Kolmogorov flow in the quasi-periodic and turbulent regimes with/without extreme events. The AE-RC is able to adequately predict the short-term evolution of the Kolmogorov flow and the long-term statistics in all cases. This AE-RC approach demonstrates the potential of machine learning in the spatio-temporal prediction of turbulence. [Preview Abstract] |
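The reservoir component of such an architecture reduces to a few lines: the following sketch shows an echo state network trained by ridge regression on latent trajectories (sizes and hyperparameters are illustrative, not those of the AE-RC in the abstract).

```python
# Minimal echo state network (ESN): only the linear read-out W_out is trained.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_res, rho, beta = 32, 500, 0.9, 1e-6       # latent dim, reservoir size
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= rho / np.max(np.abs(np.linalg.eigvals(W)))   # set the spectral radius

def run_reservoir(U):
    """U: (T, n_in) latent trajectory -> reservoir states (T, n_res)."""
    X, x = np.zeros((len(U), n_res)), np.zeros(n_res)
    for t, u in enumerate(U):
        x = np.tanh(W_in @ u + W @ x)
        X[t] = x
    return X

U = rng.standard_normal((2000, n_in))             # placeholder latent training data
X = run_reservoir(U[:-1])
Y = U[1:]                                         # one-step-ahead targets
W_out = np.linalg.solve(X.T @ X + beta * np.eye(n_res), X.T @ Y)   # ridge fit
prediction = run_reservoir(U[:-1])[-1] @ W_out    # next-step latent estimate
```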
R01.00018: Deep Reinforcement Learning for Efficient Navigation in Vortical Flow Fields Peter Gunnarson, Ioannis Mandralis, Guido Novati, Petros Koumoutsakos, John Dabiri Efficient point-to-point navigation in the presence of a background flow field is important for robotic applications such as ocean surveying. In such applications, robots may only have knowledge of their immediate surroundings rather than the global flow field, which limits the use of optimal control theory for planning trajectories. Here, we investigate the application of deep reinforcement learning to discover efficient navigation policies for a fixed-speed swimmer through steady and unsteady 2D flow fields. The algorithm entails encoding the swimmer policy as a deep neural network that uses as input the swimmer’s location and local vorticity, and outputs a swimming direction. We find that the resulting deep reinforcement learning policies significantly outperform a simple policy of swimming towards the target. The present navigation policies exploit the vorticity field to reach the target quickly and reliably. [Preview Abstract] |
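An illustrative form of the policy network described above, mapping the swimmer's relative position and locally sensed vorticity to a unit swimming direction, is sketched below (the input and output choices are assumptions, not the authors' agent).

```python
# Illustrative swimmer policy: observation -> normalized swimming direction.
import torch
import torch.nn as nn

class SwimmerPolicy(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),      # (dx, dy, local vorticity)
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2))                 # (cos, sin) of heading, pre-norm

    def forward(self, obs):
        d = self.net(obs)
        return d / (d.norm(dim=-1, keepdim=True) + 1e-8)   # unit direction

policy = SwimmerPolicy()
obs = torch.tensor([[0.4, -0.2, 1.3]])            # relative target + local vorticity
direction = policy(obs)                           # fixed speed along this heading
```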
R01.00019: Convolutional neural network based wall modeling for large eddy simulation in a turbulent channel flow Naoki Moriya, Kai Fukami, Yusuke Nabae, Masaki Morimoto, Taichi Nakamura, Koji Fukagata Large-eddy simulation (LES) has played a significant role in the fluid dynamics community in dealing with various aerospace and mechanical engineering designs. Especially for LES at practically high Reynolds numbers, a proper wall model is required to keep the number of computational points in the regions near the walls at a reasonable level while retaining accuracy. Although a wide range of wall models has been proposed, the quest for more generalized models remains challenging. To tackle this issue, we here propose a supervised machine-learning-based wall model for LES of a turbulent channel flow. The present model, based on a convolutional neural network, aims to predict the virtual wall-surface velocity from $x-z$ sectional fields near the wall, whose training data are prepared with a direct numerical simulation (DNS). The results in the {\it a priori} test are in statistical agreement with the reference DNS data. The present model is then combined with an LES as the {\it a posteriori} test. We find that the present machine learning based wall modeling can successfully augment the LES. We will also discuss the dependence of the model performance on the grid coarseness in the wall-normal direction. [Preview Abstract] |
R01.00020: A data-driven wall model for LES of flow over periodic hills Zhideng Zhou, Guowei He, Xiaolei Yang In wall-modeled large-eddy simulation (WMLES), wall models are often employed to provide wall shear stress for outer flow simulations. However, conventional wall models based on the equilibrium hypothesis are not able to accurately predict the wall shear stress for flows with separation and reattachment. In this work, we propose a data-driven wall model based on the physics-informed feedforward neural network (FNN) and wall-resolved LES (WRLES) data for flow over periodic hills. In the proposed FNN wall model, we employ the wall-normal distance, near-wall velocities and pressure gradients as input features and the wall shear stresses as output labels, respectively. The trained FNN wall model is applied to different snapshots and spanwise slices for both training and testing datasets. For the instantaneous wall shear stress, the correlation coefficients between the predicted results and WRLES data are larger than 0.6 and the relative errors are smaller than 0.3 at most streamwise locations. For the time-averaged wall shear stress, the predictions from the FNN wall model and the WRLES data agree well with each other for both training and testing datasets, demonstrating the outstanding generalization capacity of the FNN wall model. [Preview Abstract] |
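The input-output structure of such an FNN wall model can be sketched as follows (layer sizes and the exact feature list are assumptions, not the authors' configuration).

```python
# Illustrative feedforward wall model: near-wall features -> wall shear stresses.
import torch
import torch.nn as nn

fnn_wall_model = nn.Sequential(
    nn.Linear(6, 64), nn.ReLU(),      # e.g., [y, u, v, w, dp/dx, dp/dz]
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 2))                 # [tau_wx, tau_wz]

features = torch.randn(1024, 6)       # samples drawn from wall-resolved LES data
tau_wall = fnn_wall_model(features)   # predicted wall shear-stress components
```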
R01.00021: Machine learning method for 3D particle tracking velocimetry based on digital inline holography Jiarong Hong, Ruichen He, Siyao Shao, Kevin Mallery, Santosh Kumar We present our recent work incorporating machine learning into the reconstruction and tracking processes of 3D particle tracking velocimetry based on digital inline holography (ML-DIH). Specifically, we developed a U-net based convolutional neural network (CNN) architecture for hologram reconstruction and a long short-term memory (LSTM) recurrent architecture for 3D particle tracking. The performance of our machine learning approach has been evaluated through 3D flow measurements in four cases, i.e., synthetic isotropic turbulence, droplet characterization in sprays, microorganism locomotion, and nanoparticle deposition on surfaces. Through these measurements, our ML-DIH has demonstrated its ability: (i) to achieve high precision PTV at a tracer concentration more than 10 times higher than conventional DIH methods (in the synthetic turbulence case); (ii) to obtain accurate characterization of particle size and shape across more than two orders of magnitude in scale (in the spray case); (iii) to reconstruct complex locomotion trajectories over a dense cellular medium ($3\times {10}^{6}$ cells/ml) (in the microorganism locomotion case); and (iv) to capture nanoparticle motions at nanoscale precision in highly noisy images (in the nanoparticle deposition case). [Preview Abstract] |
R01.00022: Modeling wall-shear stress of turbulent flows through deep reinforcement learning Junhyuk Kim, Hyojin Kim, Changhoon Lee Deep reinforcement learning (DRL) of turbulent flows, which has rarely been studied, is challenging because the state and action are spatio-temporally high dimensional, but it would be useful for turbulence modeling and control. In the present work, we applied DRL to wall modeling of large-eddy simulation (LES) in turbulent channel flow, developing a deep neural network that maps off-wall velocity to wall-shear stress. Our approach is cost-efficient since we use only wall-modeled LES rather than direct numerical simulation (DNS), and it is free from the prior assumptions used in supervised learning. Using the deep deterministic policy gradient, an actor-critic algorithm, we automatically control the wall shear boundary condition to match the target statistics, including the mean and root-mean-square (RMS) velocity profiles, the responses of which are delayed in the wall-normal direction. As a result, an LES with the trained wall model reproduced the target mean profile in the log layer well, and the RMS profile was improved compared with the conventional equilibrium wall model. [Preview Abstract] |
R01.00023: Super-resolution reconstruction of turbulence using unsupervised deep learning Hyojin Kim, Junhyuk Kim, Sungjin Won, Changhoon Lee We propose an unsupervised learning model that adopts a cycle-consistent generative adversarial network (CycleGAN) for super-resolution reconstruction of turbulence. In most practical problems, turbulence data are unpaired. A representative example is large-eddy simulation (LES) and the corresponding direct numerical simulation (DNS) data, for which supervised learning is impossible. We trained our model using unpaired LES and DNS data in turbulent channel flows. As a result, the model can successfully reconstruct a high-resolution flow field of statistically DNS quality from the LES field. In addition, the model showed excellent performance on other input data obtained with a different LES model that was not used in the training process, and produced highly accurate statistics for temporal behavior despite not considering the temporal information. Through unsupervised learning, super-resolution reconstruction of turbulent flows can be extended to more practical applications such as LES modeling, removal of experimental noise, and synchronization of different experiments. [Preview Abstract] |
R01.00024: Data assimilation assisted neural network parameterizations for subgrid processes in multiscale systems Suraj Pawar, Omer San Despite the success of data-driven closure models for different types of flow, their online deployment may cause instabilities and biases in modeling the overall effect of subgrid scale processes, which in turn lead to inaccurate predictions. To tackle this issue, we exploit the data assimilation technique to correct the physics-based model coupled with the neural network as a surrogate for unresolved flow dynamics in multiscale systems. In particular, we use a set of neural network architectures to learn the correlation between resolved flow variables and the parameterizations of unresolved flow dynamics and formulate a data assimilation approach to correct the hybrid model during its online deployment. We illustrate our framework in an application of the multiscale Lorenz 96 system for which the parameterization model for unresolved scales is exactly known. We show significant improvement in the long-term prediction of the underlying chaotic dynamics with our framework compared to using only neural network parameterizations for the forecasting. Moreover, we demonstrate that these data-driven parameterization models can handle the non-Gaussian statistics of subgrid scale processes, and effectively improve the accuracy of outer data assimilation workflow loops. [Preview Abstract] |
R01.00025: Closed-loop optimal control for shear flows using reinforcement learning Onofrio Semeraro, Michele Alessandro Bucci, Lionel Mathelin Numerous research efforts have been devoted to the application of control theory to fluid flows in the last decades. Despite some success in the application of model-based techniques, limitations imposed by the model often result in moderate performance in actual conditions. A possible workaround is offered by fully data-driven methods, where a physical model is not employed. Reinforcement Learning (RL) algorithms allow such a strategy while preserving optimality of the control solutions. This class of algorithms can be regarded as a fully data-driven counterpart of the discrete-in-time optimal control strategies based on the Bellman equation. When neural networks are employed as the approximation format, the framework is referred to as deep RL (DRL). In this contribution, we clarify the connection between RL and optimal control through our recent results obtained for the control of the Kuramoto-Sivashinsky (KS) equation. We focus our attention on the application of the Deep Deterministic Policy Gradient. We show that, by means of localized actuation and partial knowledge of the state, it is possible to control the KS equation in its chaotic regime. These results will be put in perspective by comparing the DRL policy with standard optimal controllers. [Preview Abstract] |
R01.00026: Avoiding High-frequency Thermoacoustic Instabilities in Liquid Propellant Rocket Engines Using Bayesian Deep Learning Ushnish Sengupta, Guenther Waxenegger-Wilfing, Jan Martin, Justin Hardi, Matthew Juniper Destructive high-frequency thermoacoustic instabilities have afflicted liquid propellant rocket engine development for decades. The 90 MW cryogenic liquid oxygen/hydrogen multi-injector research combustor BKD operated by DLR Lampoldshausen is a platform that allows their study under realistic conditions. In this study, we use data from BKD experimental campaigns where the static chamber pressure and fuel-oxidizer ratio were varied such that the first tangential mode of the combustor is excited under some conditions. We train a Bayesian neural network to predict the occurrence probability of thermoacoustic instabilities 500 ms in the future, given the power spectra of the most recent 300 ms sample of the dynamic pressure data and mass flowrate control signals as input. The Bayesian nature of our algorithms allows us to work in this "small data" setting where the size of our dataset is restricted by the effort and expense associated with each experimental run, without making overconfident extrapolations. We find that the network is able to accurately forecast the occurrence probability of instabilities on unseen experimental runs. We envision that these algorithms will eventually be used online by rocket engine controllers to avoid regions of thermoacoustic instabilities. [Preview Abstract] |
R01.00027: Estimation of 3D Velocity and Pressure Fields from Tomographic Background Oriented Schlieren Videos using a Physics-Informed Neural Network Shengze Cai, Zhicheng Wang, Frederik Fuest, Young Jin Jeon, Callum Gray, George Karniadakis Tomographic background oriented schlieren (Tomo-BOS) imaging measures density or temperature fields in 3D using multiple camera BOS projections, and is particularly useful for instantaneous flow visualizations of complex fluid dynamics problems. In this paper, we propose a new algorithm based on physics-informed neural networks (PINNs) to infer the full continuous 3D velocity and pressure fields from snapshots of 3D temperature fields obtained by Tomo-BOS imaging. PINNs seamlessly integrate the underlying physics of the observed fluid flow and the visualization data, hence enabling the inference of latent quantities using limited experimental data. In this hidden fluid mechanics paradigm, the neural network is trained by minimizing a loss function composed of a data mismatch term and a residual term associated with the coupled Navier-Stokes and temperature equations. The proposed method is first validated based on a 2D set of synthetic data for buoyancy-driven flow, and subsequently it is applied to the 3D Tomo-BOS data set. We demonstrate that by using PINNs, we are able to quantify accurately the instantaneous three-dimensional velocity and pressure of the flow over a coffee mug based on the temperature field provided by the tomographic Tomo-BOS imaging. [Preview Abstract] |
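The structure of the PINN loss, a data-mismatch term plus PDE residuals evaluated by automatic differentiation, is sketched below in a reduced 2D form (the actual work is 3D and also enforces the momentum equations; the thermal diffusivity, layer sizes, and variable names are assumptions).

```python
# Reduced PINN-loss illustration: the network maps (x, y, t) -> (u, v, p, T);
# velocity and pressure are latent fields inferred from temperature data alone.
# Only continuity and the temperature-transport residual are shown for brevity.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(3, 64), nn.Tanh(), nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 4))              # outputs (u, v, p, T)
alpha = 1.4e-7                                      # assumed thermal diffusivity

def gradients(f, x):
    return torch.autograd.grad(f, x, grad_outputs=torch.ones_like(f),
                               create_graph=True)[0]

def pinn_loss(xyt_obs, T_obs, xyt_col):
    # Data mismatch on the measured temperature field
    T_pred = net(xyt_obs)[:, 3:4]
    loss_data = ((T_pred - T_obs) ** 2).mean()

    # PDE residuals at collocation points
    xyt_col = xyt_col.requires_grad_(True)
    u, v, _, T = net(xyt_col).split(1, dim=1)
    du, dv, dT = gradients(u, xyt_col), gradients(v, xyt_col), gradients(T, xyt_col)
    dT_xx = gradients(dT[:, 0:1], xyt_col)[:, 0:1]
    dT_yy = gradients(dT[:, 1:2], xyt_col)[:, 1:2]
    cont = du[:, 0:1] + dv[:, 1:2]                          # u_x + v_y
    heat = dT[:, 2:3] + u * dT[:, 0:1] + v * dT[:, 1:2] \
           - alpha * (dT_xx + dT_yy)                        # T_t + u.grad(T) - alpha*lap(T)
    return loss_data + (cont ** 2).mean() + (heat ** 2).mean()

# Training step (observation and collocation tensors are placeholders):
# loss = pinn_loss(xyt_obs, T_obs, xyt_col); loss.backward(); optimizer.step()
```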
R01.00028: Convolutional neural networks to predict the onset of oscillatory instabilities in turbulent systems Eustaquio Aguilar Ruiz, Vishnu Rajasekharan Unni, R. I. Sujith, Abhishek Saha Oscillatory instabilities marked by ruinous high amplitude oscillations are common in fluid dynamic systems. Examples include thermoacoustic, aeroacoustic, and aeroelastic instabilities. In a turbulent system, the transition regime from safe operation to oscillatory instabilities exhibits a dynamical state of intermittency where the system exhibits bursts of high amplitude periodic oscillations amidst low amplitude aperiodic fluctuations. In this study, we identify the extent of periodicity during intermittency by classifying the corresponding recurrence plots utilizing a Convolutional Neural Network (CNN), and thereby predict the onset of oscillatory instability. The CNN we use consists of two convolutional layers, each followed by a rectified linear unit (activation function) and a max-pooling layer, all of which are followed by a fully connected layer that classifies the dynamics of the input recurrence plot as aperiodic fluctuations or periodic oscillations. The trained CNN is used to analyze time series of a state variable, to which it assigns a probability of periodicity, which in turn indicates the proximity of the system to oscillatory instability. We validate this methodology by predicting the onset of instabilities in thermoacoustic, aeroacoustic, and aeroelastic systems. [Preview Abstract] |
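The recurrence-plot construction and the classifier layout described above can be sketched as follows (window length, threshold, and layer sizes are assumptions, not the authors' values).

```python
# Illustrative pipeline: recurrence plot from a signal window, then a small CNN
# (two conv + ReLU + max-pool stages and a fully connected head) that outputs
# logits for "aperiodic" vs "periodic".
import numpy as np
import torch
import torch.nn as nn

def recurrence_plot(x, eps=0.1):
    """x: 1D window of a state variable -> binary recurrence matrix."""
    d = np.abs(x[:, None] - x[None, :])
    return (d < eps * np.ptp(x)).astype(np.float32)

classifier = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(16 * 16 * 16, 2))   # logits: [aperiodic, periodic]

window = np.sin(np.linspace(0, 20 * np.pi, 64)) + 0.1 * np.random.randn(64)
rp = torch.from_numpy(recurrence_plot(window))[None, None]   # (1, 1, 64, 64)
prob_periodic = torch.softmax(classifier(rp), dim=1)[0, 1]   # proximity indicator
```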
R01.00029: Equivariance-preserving Deep Spatial Transformers for Auto-regressive Data-driven Forecasting of Geophysical Turbulence. Ashesh Chattopadhyay, Mustafa Mustafa, Pedram Hassanzadeh, Karthik Kashinath A deep spatial transformer based encoder-decoder model has been developed to autoregressively predict the time evolution of the upper layer's stream function of a two-layered fully turbulent quasi-geostrophic (QG) system without any information about the lower layer's stream function. The spatio-temporal complexity of QG flow is comparable to the complexity of the observed atmospheric flow dynamics. The ability to autoregressively predict the turbulent dynamics of QG is the first step towards building data-driven surrogates for more complex climate models. We show that the equivariance preserving properties of modern spatial transformers incorporated within a convolutional encoder-decoder module can predict up to 9 days ahead in a QG system (outperforming a baseline persistence model and a standard convolutional encoder-decoder with a custom loss function). The proposed data-driven model remains stable for multiple time steps, thus promising stable and physical data-driven long-term statistics. [Preview Abstract] |
R01.00030: Data-driven super-parameterization of subgrid-scale processes using deep learning Pedram Hassanzadeh, Ashesh Chattopadhyay, Adam Subel, Yifei Guan A common approach to simulating turbulent flows is parameterization, in which the large-scale flow is numerically solved for on a low-resolution grid and the small-scale processes are represented in terms of the resolved flow using a parameterization scheme. Another approach, computationally more demanding but often more accurate, is called super-parameterization (SP), which involves integrating the equations of small-scale processes on high-resolution grids embedded within the low-resolution grid. Recently, a number of studies have explored applications of deep learning to find data-driven parameterization (DD-P) schemes. Leveraging recurrent neural networks (RNNs), here we introduce a data-driven super-parameterization (DD-SP) approach, in which the equations for small-scale processes are integrated data-drivenly, and thus inexpensively, using RNNs and the equations for the large-scale flow are integrated numerically on a low-resolution grid. Using a chaotic multi-scale Lorenz system and forced Burgers' turbulence, we show that DD-SP provides accuracy comparable to that of the SP (and better than DD-P) but at the low computational cost of parameterized low-resolution models and DD-P. Earlier results are presented at preprint arXiv:2002.11167: Data-driven super-parameterization using deep learning: Experimentation with multi-scale Lorenz 96 systems and transfer-learning. [Preview Abstract] |
R01.00031: Prediction of Rheological Parameters using Surrogate Models with Neural Networks James Hewett, Mathieu Sellier, Dale Cusack, Ben Kennedy, Miguel Moyers-Gonzalez, Jerome Monnier Directly measuring the rheology of fluids in adverse conditions, such as lava flowing from an eruption, can be both challenging and impractical. Instead, an inverse problem is posed, where rheology of the lava can be inferred in situ from tracking the free surface velocity of the flow, by minimising the discrepancy between the observed and model output velocity field. Solving the full numerical simulations for the optimisation problem is computationally expensive. Therefore, we explore the use of surrogate models that are capable of predicting the output of the expensive simulation, by training a neural network. [Preview Abstract] |
R01.00032: Control by Deep Reinforcement Learning of a separated flow Thibaut Guegan, Michele Alessandro Bucci, Onofrio Semeraro, Laurent Cordier, Lionel Mathelin In the closed-loop control framework, a dynamical model is often used to predict the effect of a given control action on the system. Specifically, model-based control approaches rely on a physical model derived from first-principle equations. However, in the general case, a useful model is not always available. Besides systems whose governing equations are poorly known, there are situations where solving the governing equations is too slow with respect to the dynamics at play. While reduced-order models may help, they can lose accuracy when control is applied, resulting in poor performance. A different line of control strategies relies on a data-driven approach: no model is assumed to be known, and the control command is based on measurements only. In this contribution, we consider a reinforcement learning strategy for the closed-loop nonlinear control of separated flows. Deep neural networks are used to approximate both the control objective and the control policy. We consider the flow over a 2D open cavity in the realistic setting where one relies only on a few pressure sensors at the wall. The performance of the control strategy is demonstrated on the damping of the Kelvin-Helmholtz vortices of the shear layer. [Preview Abstract] |
R01.00033: Learning Full Flow Fields from Sparse Wind Tunnel Data Pablo Hermoso Moreno, Emile Oshima, Shengze Cai, Morteza Gharib Surface tufts and pressure taps are commonly employed in wind tunnel tests to diagnose flow around aerodynamic models. These are relatively simple to implement but can only provide spatially sparse information. On the other hand, techniques that give full flow fields such as pressure sensitive paint or particle image velocimetry are costly and limited in spatial domain. To bridge this gap, we employ deep learning methods to obtain full flow fields from simple and sparse data. In particular, data provided to the learning algorithm is limited to flow direction and pressure which can be obtained from tufts and taps, respectively. To demonstrate concept feasibility, we developed a physics-informed neural network (PINN) which takes points in space as input and outputs flow variables at those points. The algorithm minimizes a loss function that represents the deviation of the learned flow field from provided data and the governing flow physics. The PINN is first validated with 2D flow over a NACA0012 airfoil. Sensitivity to experimentally relevant factors such as data point distribution and noise are investigated. Finally, the work is extended to 3D flows that are representative of wind tunnel testing and the ability to predict wall shear stress is explored. [Preview Abstract] |
R01.00034: Interface learning paradigms for multi-scale and multi-physics systems Shady Ahmed, Suraj Pawar, Omer San A multitude of natural and engineered systems comprise multiple characteristic scales, multiple spatiotemporal domains, multiple physical closure laws, and even multiple disciplines. In a naive implementation of numerical simulation, the stiffest component dictates the spatial mesh resolution and time stepping requirements, making the solution of such systems computationally daunting. Instead, an ensemble of solvers and modeling approaches with varying levels of complexity has to be selected for efficient computations. This includes domain decomposition techniques, multi-fidelity solvers, and multi-geometrical abstractions. However, effective communication and information sharing among solvers have to be accomplished to guarantee solution convergence and reduce idle times. To this end, we exploit machine learning capabilities to provide physically-consistent interface conditions. A variety of interface learning paradigms are presented for full and reduced order modeling (FOM-ROM) coupling, macro-micro solvers coupling, and mixed-dimensional coupling using hybrid analysis and modeling (HAM) techniques. [Preview Abstract] |
R01.00035: Robust Reservoir Computing for the Prediction of Chaotic Systems Alberto Racca, Luca Magri Reservoir computing with Echo State Networks (ESNs) is an accurate machine learning technique to predict the evolution of chaotic dynamical systems. ESNs have been applied for the prediction of extreme events in turbulent channel flow and learning ergodic averages in thermoacoustics. These studies indicate that ESNs are an accurate tool for the prediction of chaotic dynamics, but they are sensitive to the ``tuning'' of the hyperparameters. In this work, we assess and improve the robustness of existing architectures. First, we find that the commonly used strategy to determine the hyperparameters lacks robustness. Second, we propose a validation strategy, the Recycle Validation, to improve robustness. Third, we modify the fold selection in existing validation strategies. We call this variant \emph{chaotic}, given its roots in the properties of the underlying signal. Both methods are versatile and can be readily applied to Recurrent Neural Network architectures. We test the robust ESNs on different datasets obtained from 3D ODE systems, including a reduced order model of Rayleigh-Bénard convection. In all test cases, the robust ESN outperforms the traditional ESN. This work opens up new possibilities for robustly employing reservoir computing in higher-dimensional fluid dynamics. [Preview Abstract] |
R01.00036: A Deep Learning Framework for Computational Fluid Dynamics on Irregular Geometries Ali Kashefi, Davis Rempe, Leonidas Guibas We present a novel deep learning framework for the prediction of flow fields in irregular domains. Grid vertices in a CFD domain are viewed as a point cloud and used as input to a neural network based on the PointNet architecture that learns an end-to-end mapping between spatial positions and CFD quantities. Using our approach, (i) the network inherits the features of unstructured meshes (e.g., fine points near the object surface and coarse points in the far field); hence the training cost is optimized; (ii) object geometry is accurately represented through vertices located on the object boundaries with no artificial effect; and (iii) no data interpolation is employed for creating training data; thus the accuracy of CFD data is preserved. None of these features are achievable by extant methods based on projecting scattered CFD data onto Cartesian grids and then using regular convolutional neural networks. To evaluate the network, flow past a cylinder with different shapes for its cross section is considered. The mass and momentum of predicted fields are conserved. For the first time, our network predicts flow fields around multiple objects and airfoils, while it has only seen one object per object class and has never seen airfoils during the training process. [Preview Abstract] |
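A reduced PointNet-style network of the kind described above, with a shared per-point MLP and a max-pooled global geometry feature, is sketched below (feature widths and output channels are assumptions, not the authors' exact architecture).

```python
# Illustrative point-cloud network: per-point features + a global max-pooled
# geometry feature are mapped to flow quantities (u, v, p) at every vertex.
import torch
import torch.nn as nn

class PointCFDNet(nn.Module):
    def __init__(self, feat=64):
        super().__init__()
        self.local = nn.Sequential(nn.Conv1d(2, feat, 1), nn.ReLU(),
                                   nn.Conv1d(feat, feat, 1), nn.ReLU())
        self.head = nn.Sequential(nn.Conv1d(2 * feat, feat, 1), nn.ReLU(),
                                  nn.Conv1d(feat, 3, 1))   # (u, v, p) per point

    def forward(self, pts):                 # pts: (batch, 2, n_points)
        f = self.local(pts)                 # shared per-point MLP
        g = f.max(dim=2, keepdim=True).values.expand_as(f)  # global feature
        return self.head(torch.cat([f, g], dim=1))

net = PointCFDNet()
cloud = torch.rand(1, 2, 5000)              # unstructured mesh vertices (placeholder)
uvp = net(cloud)                            # predicted flow field at the vertices
```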
Not Participating |
R01.00037: Identifying Flow Physics in Convolutional Layers Ashley Scillitoe, Pranay Seshadri In many industrial design processes, computational fluid dynamics (CFD) simulations play a key role. However, the simulations are often computationally intensive and time-consuming. Data-driven methods offer the possibility of replacing expensive computational simulations with cheaper approximations. Recently, convolutional neural networks (CNNs) have seen increasing attention for this purpose. They offer accurate and fast data-driven flowfield predictions, allowing for near-immediate feedback for real-time design iterations. Unlike fully connected neural networks, which require large amounts of training data, CNNs have been shown to offer relatively accurate flowfield predictions even with only a small amount of training data. Despite this success, exactly how CNNs are able to provide such accurate results is not well understood, and efforts at interpreting their predictions have been limited. In the present work, we explore a CNN's flowfield predictions using state-of-the-art CNN interpretation techniques. Additionally, we examine parallels between CNNs and another recently proposed method for flowfield prediction, embedded ridge functions (ERFs). By identifying low dimensional structures in the flowfield, ERFs can provide important physical insights into the flow. [Preview Abstract] |
R01.00038: A unifying framework of solving forward and inverse problems in fluid mechanics via deep learning Han Gao, Jian-Xun Wang Numerical simulation has been playing an increasingly important role in understanding and predicting fluid phenomena. The traditional paradigm focuses on forward solutions with given modeling conditions (e.g., flow boundaries or mechanical parameters), some of which, however, are often unknown in many practical scenarios. On the other hand, indirect, sparse, and possibly noisy observations are usually available, which can be leveraged to estimate these unknowns, enabling modeling in an inverse fashion. Nonetheless, existing finite volume or finite element based numerical solvers have difficulties in assimilating data and solving such inverse problems because of considerable computational overhead for most nontrivial cases. In this work, we present a novel deep learning framework that enables us to solve forward and inverse problems in a unified manner, where sparse data can be naturally assimilated based on discrete learning. The proposed method is demonstrated to be effective and efficient in simulating a number of flow transport problems with partially known boundary conditions. [Preview Abstract] |
R01.00039: A Generative Model to Solve Steady Navier-Stokes Equations with Reduced Training Shen Wang, Joshua Agar, Yaling Liu Traditional computational fluid dynamics (CFD) relies on high-performance computing resources to reduce computational time. Recently, machine learning has been deployed to create data-driven surrogate models for CFD that improve computational efficiency. A majority of these approaches rely on labeled CFD datasets, which are computationally intractable to obtain at the scale necessary to build data-driven models. Weakly-supervised learning, as an alternative approach, has shown the ability to solve Laplace’s equation with essentially cost-free training data by building a generative model with a physics-driven loss function based on the finite-difference method. Here we extend such an approach and train a model that instantly generates the steady solutions of the Navier-Stokes equations with various boundary conditions. We improved the model to handle computational domains with internal obstacles. The trained model produces accurate steady solutions facilitated by warm-up initializations given during training. We expect that the model can be generalized to speed up boundary-value CFD problems with minimal training data requirements. [Preview Abstract] |