Bulletin of the American Physical Society
77th Annual Meeting of the Division of Fluid Dynamics
Sunday–Tuesday, November 24–26, 2024; Salt Lake City, Utah
Session X01: Minisymposium: Bayesian Inference for Synthesis of Models and Data in Fluid Mechanics |
Chair: Robert Niven, University of New South Wales | Room: Ballroom A |
Tuesday, November 26, 2024 8:00AM - 8:26AM |
X01.00001: Foundations of Bayesian Inference and Application to Dynamical System Identification Invited Speaker: Robert K Niven This opening, double-length oral presentation of the minisymposium first examines the foundations of Bayesian inference and its breadth and scope for inference problems across all branches of science and engineering. This includes an overview of the different schools of probability, the role of deductive versus plausible reasoning, the interpretation of probabilities as “plausibilities”, the basis of Bayes’ theorem, and the meaning of its key constructs (prior, likelihood, posterior and evidence). These are illustrated by several simple examples. |
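For reference, the constructs named in this abstract are related through Bayes' theorem; in the notation below (chosen here purely for illustration), theta denotes the unknown parameters or hypotheses and y the observed data:

    \[
    \underbrace{p(\theta \mid y)}_{\text{posterior}} \;=\;
    \frac{\overbrace{p(y \mid \theta)}^{\text{likelihood}}\;\overbrace{p(\theta)}^{\text{prior}}}{\underbrace{p(y)}_{\text{evidence}}},
    \qquad
    p(y) \;=\; \int p(y \mid \theta)\, p(\theta)\,\mathrm{d}\theta .
    \]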
Tuesday, November 26, 2024 8:26AM - 8:52AM |
X01.00002: The Elephant in the Room: Adjoint-accelerated Bayesian Inference into multi-parameter CFD Invited Speaker: Matthew P Juniper John von Neumann is often quoted as saying "with four parameters I can fit an elephant, and with five I can make him wiggle his trunk." The implication is that physical models should contain only a handful of parameters. A century later, we seem happy to use physics-agnostic neural networks containing millions of parameters. What would von Neumann say? How should physical modellers respond? I will show that von Neumann's quote is more nuanced than it sounds. I will then frame a response within a probabilistic framework in which priors and data are both treated as sources of information. Some prior information, such as conservation of mass and momentum, is hard-wired into models and cannot be violated. Other prior information, such as the value of viscosity or the position of a boundary, can be soft-wired into model parameters as a prior probability distribution. Data then becomes a source of information for each candidate model. The information content of data can be quantified for each model and the likelihoods of different candidate models can be compared after the data arrives. Crucially, adjoint methods can be used to accelerate Bayesian inference such that it becomes computationally tractable for many important multi-parameter fluids problems. I will demonstrate this through assimilation of 3D Flow-MRI data directly into 3D CFD (i) to segment and reconstruct the flow in an in-vitro aorta, (ii) to infer a non-Newtonian viscosity model from one MR image, (iii) to infer candidate turbulence models from data. This probabilistic framework shows the power of keeping physics in a model if you can, because models cannot violate the imposed physics, can output their uncertainty, and have far fewer parameters than neural networks so require much less training data. |
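As a minimal illustration of the adjoint acceleration described above (a generic sketch, not the speaker's code: the 1D advection-diffusion model, observation locations and noise levels are assumptions made for this example), the gradient of a negative log-posterior with respect to a soft-wired parameter is obtained from one forward and one adjoint solve. Here there is a single parameter (a viscosity), but the cost of the gradient would be the same for many parameters, which is the source of the acceleration:

    import numpy as np
    from scipy.optimize import minimize

    # Discretize -nu*u'' + a*u' = f on (0, 1) with homogeneous Dirichlet boundaries
    n, a = 99, 1.0
    h = 1.0 / (n + 1)
    f = np.ones(n)
    K = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2              # (negative) second difference
    D = (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)) / (2.0 * h)  # first difference

    # Sparse synthetic "measurements" of the true solution (nu_true unknown to the solver)
    rng = np.random.default_rng(0)
    obs_idx = np.array([19, 49, 79])
    C = np.eye(n)[obs_idx]                                  # observation operator
    nu_true, sigma_obs = 0.05, 1e-3
    d = np.linalg.solve(nu_true * K + a * D, f)[obs_idx] + sigma_obs * rng.standard_normal(3)

    nu_prior, sigma_prior = 0.1, 0.05                       # soft-wired prior on viscosity

    def neg_log_posterior_and_grad(nu):
        A = nu * K + a * D
        u = np.linalg.solve(A, f)                           # one forward solve
        r = C @ u - d
        J = 0.5 * (r @ r) / sigma_obs**2 + 0.5 * ((nu - nu_prior) / sigma_prior)**2
        lam = np.linalg.solve(A.T, C.T @ r / sigma_obs**2)  # one adjoint solve
        dJ = -lam @ (K @ u) + (nu - nu_prior) / sigma_prior**2   # dA/dnu = K
        return J, dJ

    def objective(x):                                       # wrapper for the optimizer
        J, g = neg_log_posterior_and_grad(x[0])
        return J, np.array([g])

    res = minimize(objective, x0=np.array([nu_prior]), jac=True,
                   method="L-BFGS-B", bounds=[(1e-3, 1.0)])
    print("MAP estimate of nu:", res.x[0], "(true value:", nu_true, ")")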
Tuesday, November 26, 2024 8:52AM - 9:18AM |
X01.00003: Quantification and reduction of RANS model uncertainties through regional Bayesian calibration and model mixtures Invited Speaker: Paola Cinnella Accurate turbulence closures for the Reynolds-Averaged Navier-Stokes (RANS) equations are essential for a wide range of applications in engineering. Despite a plethora of proposed RANS models, there is no consensus on a single "best" model, and model choice is based on expert judgment. The uncertainty about model choice is an "epistemic" uncertainty, i.e. one due to the loss of information about turbulent motions associated with the averaging process. Furthermore, RANS models require the specification of several closure coefficients using (uncertain) data for a small set of "canonical" flows (representative of limiting behaviors of turbulence), leading to so-called "parametric" uncertainties. The quantification and reduction of both kinds of uncertainty is therefore of the utmost importance for reliable flow simulations. Bayesian statistical methods such as Bayesian updating of model parameters and Bayesian Model Averaging (BMA) can be used to deal with both parametric and epistemic uncertainties. In recent decades, Bayesian calibration and BMA have been applied to the quantification of RANS modelling uncertainties. However, 1) the choice of the calibration scenarios remains a source of uncertainty and can lead to non-optimal compromise solutions for model parameters, while 2) BMA model weights are constant throughout the covariate space, in contrast with the observation that model performance depends on the local flow physics, some models being better than others at capturing particular physical processes. As a consequence, BMA cannot perform better than the best model in the mixture (even if it cannot perform worse than the unknown worst one). In this talk, we present and compare various approaches for calibrating "expert" models that capture specific flow processes, and for automatically combining them through a model aggregation approach that, unlike BMA, assigns regionally variable weights to the competing models. These include Clustered Bayesian averaging and mixtures of expert models. Such methods promote the best-performing models in their regions of expertise while downweighting unsuitable models, thus achieving better performance than any of the individual models. The procedure also provides estimates of the predictive variance. Results are shown for simple flows and for turbomachinery applications. |
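To make the contrast concrete, here is a deliberately simple synthetic illustration (not the speaker's method or data; the two "models", the local error measure and the inverse-error weighting are assumptions made for this sketch) of why regionally variable weights can outperform a single set of constant, BMA-style weights:

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 2.0 * np.pi, 400)
    truth = np.sin(x)

    # Two hypothetical "expert" models, each accurate in only part of the domain
    model_a = np.sin(x) + 0.02 * rng.standard_normal(x.size) + 0.3 * (x > np.pi) * (x - np.pi)
    model_b = np.sin(x) + 0.02 * rng.standard_normal(x.size) + 0.3 * (x < np.pi) * (np.pi - x)
    preds = np.stack([model_a, model_b])                     # shape (2, n_points)

    # Constant weights from global mean-squared error (a crude stand-in for BMA weights)
    mse = ((preds - truth) ** 2).mean(axis=1)
    w_const = (1.0 / mse) / (1.0 / mse).sum()
    mix_const = w_const @ preds

    # Regionally variable weights from a locally averaged error measure
    def local_error(sq_err, half_width=20):
        kernel = np.ones(2 * half_width + 1) / (2 * half_width + 1)
        return np.convolve(sq_err, kernel, mode="same")

    err = np.stack([local_error((m - truth) ** 2) for m in preds])   # (2, n_points)
    w_local = (1.0 / err) / (1.0 / err).sum(axis=0)
    mix_local = (w_local * preds).sum(axis=0)

    for name, yhat in [("model A", model_a), ("model B", model_b),
                       ("constant weights", mix_const), ("regional weights", mix_local)]:
        print(f"{name:17s} RMSE = {np.sqrt(((yhat - truth) ** 2).mean()):.3f}")

With constant weights the mixture inherits each model's error in the region where that model is poor, whereas the locally weighted mixture tracks whichever model is locally accurate.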
Tuesday, November 26, 2024 9:18AM - 9:44AM |
X01.00004: Bayesian-based merging of data assimilation and machine learning to learn unsteady turbulence models from sparse data Invited Speaker: Vincent Mons In this presentation, we describe a Bayesian-based approach to learning unsteady turbulence-model corrections from sparse data. Relying on the Expectation-Maximization (EM) formalism, we rigorously justify performing this learning task in two steps. In a first step, Data Assimilation (DA) techniques, more specifically ensemble Kalman filtering (EnKF), are employed to infer full flow descriptions from the considered sparse data (Expectation step). In a second step, the full flow descriptions thus obtained are gathered to form a training dataset that may be exploited by machine-learning (ML) tools to derive the sought model corrections (Maximization step). As such, and as justified by the EM approach, the present methodology enables a seemingly optimal combination of the respective strengths of DA and ML techniques, namely the ability of the former in state estimation and the ability of the latter in optimizing highly nonlinear model representations. Moreover, thanks to the use of the EnKF and its sequential treatment of data in time, the present approach is essentially non-intrusive and can deal with potentially chaotic flows over arbitrarily long time horizons. The potential of this methodology is illustrated, in particular, through the learning of corrective terms in an Unsteady Reynolds-Averaged Navier-Stokes (URANS) model from synthetic sparse velocity data of the turbulent flow around a circular bluff body. |
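A toy sketch of the two-step, EM-like idea, using a scalar state so that everything fits in a few lines; the cubic true tendency, the polynomial stand-in for the machine-learning regressor and all numerical settings are assumptions made for this illustration, not the speaker's setup:

    import numpy as np

    rng = np.random.default_rng(1)
    dt, n_steps, n_ens, sig_obs, sig_q = 0.05, 400, 50, 0.05, 0.02

    def f_true(x):  return -x**3            # "unknown" true tendency
    def f_model(x): return -x               # imperfect baseline model

    # Synthetic truth and noisy observations of the state
    x_truth = np.empty(n_steps); x_truth[0] = 2.0
    for k in range(n_steps - 1):
        x_truth[k + 1] = x_truth[k] + dt * f_true(x_truth[k])
    obs = x_truth + sig_obs * rng.standard_normal(n_steps)

    correction = np.zeros(4)                # cubic polynomial correction, initially zero

    for outer in range(10):                 # EM-like outer loop
        # E-step: stochastic (perturbed-observation) EnKF with the current corrected model;
        # for a scalar state the update reduces to a Kalman update with the ensemble variance.
        ens = 2.0 + 0.5 * rng.standard_normal(n_ens)
        xa = np.empty(n_steps)
        for k in range(n_steps):
            ens = (ens + dt * (f_model(ens) + np.polyval(correction, ens))
                   + sig_q * rng.standard_normal(n_ens))          # forecast + model noise
            gain = ens.var() / (ens.var() + sig_obs**2)           # Kalman gain
            ens = ens + gain * (obs[k] + sig_obs * rng.standard_normal(n_ens) - ens)
            xa[k] = ens.mean()                                    # analysis mean
        # M-step: regress the missing tendency on the analysed states ("machine learning",
        # here just a cubic least-squares fit for brevity)
        resid = (xa[1:] - xa[:-1]) / dt - f_model(xa[:-1])
        correction = np.polyfit(xa[:-1], resid, 3)

    print("learned correction (coefficients of x^3 ... x^0):", np.round(correction, 2))
    # Ideally close to [-1, 0, 1, 0], i.e. the missing  -x^3 + x  term.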
Tuesday, November 26, 2024 9:44AM - 9:57AM |
X01.00005: Bayesian model selection for the squeeze flow of soft matter Invited Speaker: Aricia Rinkens Soft matter, such as polymeric liquids and particle suspensions, has a microstructure that makes its constitutive behavior dependent on its state (e.g. deformation or stress). To optimize industrial processes such as additive manufacturing, injection molding and extrusion, characterizing the flow behavior is essential. However, as the complexity of the flow setting increases (e.g. in the type of flow or material), calibrating the accompanying models from relatively simple experiments can become difficult. |
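For readers unfamiliar with evidence-based model selection, the sketch below shows the generic mechanics on synthetic data (the "Newtonian" and "power-law" force laws, prior ranges and noise level are assumptions for this illustration, not the presenters' squeeze-flow models): the marginal likelihood, or evidence, of each candidate model is the likelihood integrated over its prior, and the ratio of evidences gives the Bayes factor.

    import numpy as np

    rng = np.random.default_rng(2)
    v = np.linspace(0.1, 1.0, 15)                               # plate speeds (arbitrary units)
    F_obs = 2.0 * v**0.6 + 0.1 * rng.standard_normal(v.size)    # synthetic "measured" forces
    sigma = 0.1                                                 # assumed measurement noise

    def log_like(F_pred):
        return (-0.5 * np.sum((F_obs - F_pred)**2) / sigma**2
                - 0.5 * F_obs.size * np.log(2 * np.pi * sigma**2))

    # Model 1, "Newtonian": F = eta * v, uniform prior on eta.
    # With a uniform prior, the evidence is the likelihood averaged over the prior range.
    eta_vals = np.linspace(0.1, 5.0, 400)
    evidence_1 = np.mean([np.exp(log_like(eta * v)) for eta in eta_vals])

    # Model 2, "power law": F = K * v**n, uniform priors on (K, n).
    K_vals = np.linspace(0.1, 5.0, 300)
    n_vals = np.linspace(0.1, 1.5, 300)
    like_2 = np.array([[np.exp(log_like(K * v**n)) for n in n_vals] for K in K_vals])
    evidence_2 = like_2.mean()

    print("log evidence, Newtonian :", np.log(evidence_1))
    print("log evidence, power law :", np.log(evidence_2))
    print("Bayes factor (power law vs Newtonian):", evidence_2 / evidence_1)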
Tuesday, November 26, 2024 9:57AM - 10:10AM |
X01.00006: Multi-fidelity modeling and uncertainty quantification of heterogeneous roughness Invited Speaker: YoungIn Shin Operational models of the atmosphere used for decision-making, such as numerical weather prediction, cannot resolve atmosphere-surface interactions. Instead, these models rely on surface parameterizations whose required input parameters are typically estimated deterministically from morphometric, geometry-based approaches. In this study, we present a method to improve lower-fidelity operational models through a closed-loop workflow. We leverage geometry-resolving high-fidelity large eddy simulations (LES) to learn the uncertainties in both mid-fidelity (wall-modeled LES) and low-fidelity (RANS) models that parameterize the surface roughness. We achieve this in a computationally tractable manner using a machine-learning-accelerated inverse uncertainty quantification approach that reduces the required model evaluations a thousandfold. To enhance lower-fidelity operational atmospheric models, we address two questions: (1) How can we quantify and reduce uncertainty in parameterizing heterogeneous roughness? (2) To what extent does this reduction lead to improved atmospheric predictions? Focusing on a case study in an idealized urban environment, we evaluate the predictions, with confidence intervals from uncertainty quantification, against morphometric approaches across a range of roughness geometries. Further, we investigate the impact on inversion accuracy of spatial averaging of the assimilated statistics and of assimilating turbulence statistics beyond wind speed. |
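The following is a generic sketch of surrogate-accelerated Bayesian inversion in the spirit described above (illustrative only, not the speakers' workflow: the log-law "expensive model", the polynomial surrogate and all parameter values are assumptions for this example). A handful of expensive evaluations train a cheap surrogate, and the Markov chain Monte Carlo sampling then runs entirely on the surrogate:

    import numpy as np

    rng = np.random.default_rng(3)
    kappa, u_star, z_ref = 0.4, 0.3, 10.0

    def expensive_model(log_z0):
        # Stand-in for a high-fidelity (e.g. LES) evaluation: log-law mean wind at z_ref
        return (u_star / kappa) * (np.log(z_ref) - log_z0)

    # A few "high-fidelity" runs -> polynomial surrogate (the map is nearly linear here,
    # so a low-order fit suffices; a Gaussian process would be the more general choice)
    train_x = np.linspace(np.log(1e-3), np.log(1.0), 8)
    train_y = np.array([expensive_model(x) for x in train_x])
    surrogate = np.poly1d(np.polyfit(train_x, train_y, 2))

    # Synthetic observation and random-walk Metropolis on the surrogate
    z0_true = 0.1
    u_obs = expensive_model(np.log(z0_true)) + 0.1 * rng.standard_normal()
    sigma = 0.1

    def log_post(log_z0):
        if not (np.log(1e-3) < log_z0 < np.log(1.0)):
            return -np.inf                                # uniform prior bounds on log(z0)
        return -0.5 * ((u_obs - surrogate(log_z0)) / sigma) ** 2

    chain, current = [], np.log(0.01)
    lp = log_post(current)
    for _ in range(20000):                                # cheap: no expensive-model calls here
        prop = current + 0.3 * rng.standard_normal()
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:
            current, lp = prop, lp_prop
        chain.append(current)

    z0_samples = np.exp(np.array(chain[5000:]))           # discard burn-in
    print(f"posterior z0: median {np.median(z0_samples):.3f}, "
          f"95% CI [{np.percentile(z0_samples, 2.5):.3f}, {np.percentile(z0_samples, 97.5):.3f}]")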
Tuesday, November 26, 2024 10:10AM - 10:23AM |
X01.00007: Uncertainty Quantification of Separated Flows Using Bayesian Neural Networks Invited Speaker: Tyler S Buchanan Data-driven turbulence modeling is increasingly used to enhance the accuracy of simulations in fluid dynamics, particularly Reynolds-Averaged Navier-Stokes (RANS) simulations. Although traditional data-driven methods have shown promise in improving turbulence predictions, they often struggle to accurately capture complex turbulence dynamics, and uncertainty quantification is generally neglected. Such quantification is crucial because the extrapolation capability of these models can deteriorate significantly when applied to out-of-distribution (OOD) regimes. In response to these challenges, this work introduces a novel Relative Importance Term Analysis (RITA) approach with Bayesian Neural Networks (BNNs) to advance turbulence modeling for separated flows. BNNs offer several advantages over traditional multilayer perceptrons (MLPs), including superior generalization capabilities and an intrinsic framework for capturing epistemic and aleatoric uncertainties. |
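As a conceptual illustration of the epistemic/aleatoric split that BNNs provide (this is not the speakers' network or the RITA approach; it uses a Bayesian last layer on fixed random ReLU features, which is analytically tractable, and all settings are assumptions for the sketch), note how the weight-uncertainty (epistemic) term grows once the inputs leave the training range, while the aleatoric noise term does not:

    import numpy as np

    rng = np.random.default_rng(4)

    # Training data on [-2, 2]; inputs with |x| > 2 are "out of distribution"
    x_train = rng.uniform(-2.0, 2.0, 80)
    sigma_n = 0.1                                      # aleatoric (observation) noise level
    y_train = np.sin(2.0 * x_train) + sigma_n * rng.standard_normal(x_train.size)

    # Fixed random ReLU feature layer; Bayesian inference only on the output weights
    n_feat = 100
    slopes = rng.standard_normal(n_feat)
    kinks = rng.uniform(-3.0, 3.0, n_feat)
    feats = lambda x: np.maximum(0.0, slopes * (np.asarray(x)[:, None] - kinks))

    alpha = 1.0                                        # prior precision on output weights
    Phi = feats(x_train)
    S = np.linalg.inv(alpha * np.eye(n_feat) + Phi.T @ Phi / sigma_n**2)   # posterior cov.
    m = S @ Phi.T @ y_train / sigma_n**2                                   # posterior mean

    x_test = np.array([0.0, 1.5, 3.0, 4.0])            # the last two points are OOD
    Phi_t = feats(x_test)
    mean = Phi_t @ m
    epistemic = np.einsum("ij,jk,ik->i", Phi_t, S, Phi_t)   # variance from weight uncertainty
    aleatoric = np.full_like(mean, sigma_n**2)               # irreducible noise variance

    for xt, mu, e, a in zip(x_test, mean, epistemic, aleatoric):
        print(f"x = {xt:4.1f}   mean = {mu:+.2f}   epistemic var = {e:7.3f}   aleatoric var = {a:.3f}")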
Tuesday, November 26, 2024 10:23AM - 10:36AM |
X01.00008: Particle filters and stochastic transport models for geophysical data assimilation: localization and scalability Invited Speaker: Eliana Fausti Data assimilation (DA) is an essential Bayesian inference methodology used in weather forecasting and ocean prediction. In recent years, higher model resolution and complex observation sensors have made it a priority to develop DA techniques that can handle highly non-linear models. Particle filters, compared to other existing methodologies, are well-suited to deal with non-linear, non-Gaussian models. Their use in geophysical data assimilation, however, is not yet widespread due to the so-called "curse of dimensionality". In this talk we propose a new strategy for the assimilation of high-resolution geophysical fluids data, combining particle filters with ocean models with stochastic transport. The particle trajectories are modelled by runs of stochastic partial differential equations, simulated at a coarser resolution than the data. The stochasticity is introduced into the models in a physical way, to capture subgrid-scale processes. To overcome the problem of weight degeneracy in the particle filter, we introduce a localization method based on the Gaspari-Cohn localization function, which tapers off the importance of the observations the further away they are from each region of interest. In each region we run a smaller, independent particle filter, which not only yields a parallelizable implementation but also reduces the number of resampling, tempering and jittering steps required to avoid degeneracy of the ensemble. We present initial results from testing our particle filter in a "twin experiment" for the 2D (stochastic) rotating shallow water equations, where the signal is taken to be an SPDE path run on the same grid as the particle ensemble. In later experiments we will first generate high-resolution data synthetically using fine-grid PDE runs, and, if successful, we will then test our methodology on SWOT satellite data. This is joint work with Dan Crisan. |
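A minimal sketch of the localization idea on a static, one-dimensional toy problem (not the authors' filter or model; the random fields, region size and all numerical settings are assumptions for this illustration). Each region weights the particles using only nearby observations, tapered by the Gaspari-Cohn function, and resamples independently, which keeps the effective sample size well above the degenerate value obtained with global weights:

    import numpy as np

    rng = np.random.default_rng(5)
    n_grid, n_part, n_obs, sig_obs, c = 200, 50, 40, 0.2, 15.0

    def gaspari_cohn(r, c):
        # Gaspari & Cohn (1999) compactly supported taper with support radius 2c
        z = np.abs(r) / c
        taper = np.zeros_like(z)
        inner, outer = z <= 1.0, (z > 1.0) & (z <= 2.0)
        zi, zo = z[inner], z[outer]
        taper[inner] = -0.25*zi**5 + 0.5*zi**4 + 0.625*zi**3 - 5/3*zi**2 + 1.0
        taper[outer] = (zo**5/12 - 0.5*zo**4 + 0.625*zo**3 + 5/3*zo**2
                        - 5.0*zo + 4.0 - 2.0/(3.0*zo))
        return taper

    def random_field():
        # Smooth random field on a periodic grid (a stand-in for an SPDE ensemble member)
        grid = np.arange(n_grid) * 2.0 * np.pi / n_grid
        amps = rng.standard_normal(6) / np.arange(1, 7)
        phases = rng.uniform(0.0, 2.0 * np.pi, 6)
        return sum(a * np.cos(k * grid + p) for k, (a, p) in enumerate(zip(amps, phases), start=1))

    particles = np.stack([random_field() for _ in range(n_part)])
    truth = random_field()
    obs_idx = rng.choice(n_grid, n_obs, replace=False)
    obs = truth[obs_idx] + sig_obs * rng.standard_normal(n_obs)

    # Per-particle, per-observation log-likelihood contributions
    ll = -0.5 * ((particles[:, obs_idx] - obs) / sig_obs) ** 2        # (n_part, n_obs)

    # Global weights (no localization) degenerate in high dimensions
    lw = ll.sum(axis=1)
    w_glob = np.exp(lw - lw.max()); w_glob /= w_glob.sum()

    # Localized weights: each region uses only tapered, nearby observations
    region, analysis, ess_loc = 20, np.empty(n_grid), []
    for start in range(0, n_grid, region):
        centre = start + region / 2.0
        dist = np.minimum(np.abs(obs_idx - centre), n_grid - np.abs(obs_idx - centre))
        logw = ll @ gaspari_cohn(dist, c)
        w = np.exp(logw - logw.max()); w /= w.sum()
        ess_loc.append(1.0 / np.sum(w**2))
        picks = rng.choice(n_part, n_part, p=w)                       # per-region resampling
        analysis[start:start + region] = particles[picks, start:start + region].mean(axis=0)

    print(f"effective sample size, global weights   : {1.0 / np.sum(w_glob**2):5.1f} of {n_part}")
    print(f"effective sample size, localized (mean) : {np.mean(ess_loc):5.1f} of {n_part}")
    print(f"prior-mean RMSE    : {np.sqrt(((particles.mean(axis=0) - truth)**2).mean()):.3f}")
    print(f"analysis-mean RMSE : {np.sqrt(((analysis - truth)**2).mean()):.3f}")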