76th Annual Meeting of the Division of Fluid Dynamics
Sunday–Tuesday, November 19–21, 2023; Washington, DC
Session J43: Turbulence: Modeling II
4:35 PM–6:32 PM, Sunday, November 19, 2023
Room: 207B
Chair: Dhawal Buaria, New York University
Abstract: J43.00002 : Predicting scalar gradient dynamics in turbulent mixing using deep neural networks
4:48 PM–5:01 PM
Presenter:
Dhawal Buaria
(New York University (NYU))
Authors:
Dhawal Buaria
(New York University (NYU))
Katepalli R Sreenivasan
(New York University)
A defining characteristic of turbulence in fluid flows is that it dramatically enhances the mixing and transport rates of scalars, such as heat or substance concentration. While turbulence stirs the scalars across a wide range of scales, the mixing efficiency is ultimately controlled by scalar gradients at the smallest diffusive scales, where the scalar fluctuations are dissipated. Consequently, understanding the dynamics of scalar gradients is important both for improving our fundamental understanding and for various modeling endeavours. However, measuring scalar gradients in experiments is extremely challenging, especially when scalar diffusivities are very low, i.e., when the Schmidt number, the ratio of kinematic viscosity to scalar diffusivity, is high. In contrast, direct numerical simulations (DNS) can provide any quantity by their very design. However, due to the prohibitive cost of resolving the smallest scales, they are restricted to low Reynolds numbers, particularly for mixing at high Schmidt numbers. In this work, we propose an alternative approach, whereby deep learning is utilized to learn scalar gradient dynamics from available data at lower Reynolds and Schmidt numbers, and to predict unseen dynamics at higher Reynolds and Schmidt numbers. To this end, we consider the evolution equation for scalar gradients and model the unclosed diffusive Laplacian term as a function of velocity and scalar gradients using a physics-informed vector-based neural network (VBNN), which embeds various physical constraints and symmetries. Training is performed using a massive DNS database. For validation, the trained model is run at both seen and higher, unseen Reynolds and Schmidt numbers, demonstrating excellent prediction of various statistical properties, such as probability distributions of scalar gradients and the joint structure of velocity and scalar gradients.
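For context, the evolution equation referred to above can be sketched in standard notation (the symbols phi, u, kappa, and A are our own labels and are not defined in the abstract). For a passive scalar phi with diffusivity kappa advected by a velocity field u, the scalar gradient G_i = \partial \phi / \partial x_i obeys

\frac{D G_i}{D t} = -A_{ji}\, G_j + \kappa\, \nabla^2 G_i ,

where D/Dt is the material derivative and A_{ij} = \partial u_i / \partial x_j is the velocity gradient tensor. Along fluid-particle trajectories the diffusive Laplacian \kappa \nabla^2 G_i is the unclosed term, which the abstract describes modeling as a function of the local velocity and scalar gradients via the VBNN.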