Bulletin of the American Physical Society
76th Annual Meeting of the Division of Fluid Dynamics
Sunday–Tuesday, November 19–21, 2023; Washington, DC
Session A30: Reinforcement Learning for Flow Control
Chair: Ricardo Vinuesa, KTH (Royal Institute of Technology)
Room: 154AB
Sunday, November 19, 2023 8:00AM - 8:13AM
A30.00001: Discovering novel control strategies for turbulent flows through deep reinforcement learning
Ricardo Vinuesa, Luca Guastoni, Jean Rabault, Hossein Azizpour
In this work we introduce a deep-reinforcement-learning (DRL) environment to design and benchmark control strategies aimed at reducing drag in turbulent fluid flows through a channel and over a flat plate. The environment provides a framework for computationally efficient, parallelized, high-fidelity fluid simulations, ready to interface with established DRL agent-programming interfaces. This allows both testing of existing DRL algorithms against a challenging task and advancing our knowledge of a complex, turbulent physical system that has been a major topic of research for over two centuries and remains, even today, the subject of many unanswered questions. The control is applied in the form of blowing and suction at the wall, while the observable state is configurable, allowing different variables, such as velocity and pressure, to be chosen at different locations in the domain. Given the complex nonlinear nature of turbulent flows, the control strategies proposed so far in the literature are physically grounded but too simple. DRL, by contrast, makes it possible to leverage the high-dimensional data that can be sampled from flow simulations to design advanced control strategies. In an effort to establish a benchmark for testing data-driven control strategies, we compare opposition control, a state-of-the-art turbulence-control strategy from the literature, with a commonly used DRL algorithm, deep deterministic policy gradient (DDPG). Our results show that DRL leads to 43% and 30% drag reduction in a minimal and a larger channel (at a friction Reynolds number of 180), respectively, outperforming classical opposition control by around 20 and 10 percentage points, respectively. We also discuss the changes in the control policy for different wall-normal sensing planes and increasing Reynolds numbers, as well as the application of the framework to zero-pressure-gradient (ZPG) turbulent boundary layers (TBLs).
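As a rough illustration of the two strategies being compared, the sketch below contrasts the classical opposition-control law, which blows and sucks against the wall-normal velocity sensed at a detection plane, with a learned DRL policy that maps the same observation to the wall actuation. This is not the authors' code; the names `agent`, `sense_plane`, and `apply_wall_velocity` are hypothetical placeholders for the coupling to the flow solver.

```python
# Minimal sketch: opposition-control baseline vs. a learned DRL policy.
import numpy as np

def opposition_control(v_plane, amplitude=1.0):
    """Classical opposition control: actuate against the wall-normal
    velocity sensed at a detection plane (e.g. y+ ~ 15)."""
    v_wall = -amplitude * v_plane
    return v_wall - v_wall.mean()          # enforce zero net mass flux

def drl_control(agent, obs):
    """A DRL actor (e.g. DDPG) maps the same observation to actuation."""
    v_wall = agent.predict(obs)            # `agent` is a stand-in for a trained policy
    return v_wall - v_wall.mean()

# Example with synthetic sensed data:
v_plane = np.random.randn(32, 32)          # wall-normal velocity at the sensing plane
wall_bc = opposition_control(v_plane)      # blowing/suction boundary condition

# Control loop common to both strategies (pseudocode):
# for each control step:
#     obs    = sense_plane(flow)
#     action = opposition_control(obs)     # or drl_control(agent, obs)
#     apply_wall_velocity(flow, action)
```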
Sunday, November 19, 2023 8:13AM - 8:26AM
A30.00002: Swimming in Turbulent Environments with Physics Informed Reinforcement Learning
Christopher F Koh, Michael Chertkov, Laurent Pagnier
Turbulent diffusion drives the separation of particles initially in close proximity. Understanding
Sunday, November 19, 2023 8:26AM - 8:39AM
A30.00003: HydroGym: A Reinforcement Learning Control Framework for Fluid Dynamics
Ludger Paehler, Jared Callaham, Samuel Ahnert, Nikolaus Adams, Steven L Brunton
We propose HydroGym, a framework for reinforcement learning control of fluid flows. In recent years, reinforcement learning has proven to be a highly effective control paradigm in complex environments ranging from robotics to protein folding, building on a foundation of scalable reinforcement learning frameworks and standardized benchmark problems. Progress in the application of reinforcement learning to flow control has, in contrast, been hampered by the scarcity of such frameworks and benchmarks. To this end, we present HydroGym, a new solver-independent reinforcement learning framework for flow control, which enables the seamless scaling of flow-control reinforcement learning environments with state-of-the-art online, offline, and differentiable reinforcement learning. We present online, offline, and differentiable reinforcement learning results on a set of four canonical fluid-flow environments, demonstrating the framework's ease of use, scalability, and extensibility to new flow-control environments.
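A minimal sketch of the gym-style control loop that such a solver-independent framework exposes is shown below; the environment class and its observation, action, and reward definitions are illustrative placeholders, not HydroGym's actual API.

```python
# Illustrative only: a generic gym-style flow-control loop.
import numpy as np

class FlowControlEnv:
    """Placeholder flow-control environment with a gym-like interface."""
    def reset(self):
        return np.zeros(8)                      # observation, e.g. probe readings
    def step(self, action):
        obs = np.random.randn(8)                # next observation from the solver
        reward = -float(np.abs(action).sum())   # e.g. negative drag / actuation cost
        done = False
        return obs, reward, done, {}

env = FlowControlEnv()
obs = env.reset()
for _ in range(100):
    action = np.tanh(np.random.randn(2))        # stand-in for an RL policy
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
```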
Sunday, November 19, 2023 8:39AM - 8:52AM
A30.00004: Path planning of swimmers in complex flows: Comparing reinforcement learning vs optimizing a discrete loss (ODIL)
Lucas Amoudruz, Petr Karnakov, Petros Koumoutsakos
Path planning for swimmers in complex flow fields is fundamental in domains ranging from targeted drug delivery to underwater navigation. Reinforcement learning (RL) is often used in such problems, where a swimmer repeatedly interacts with an environment to find an optimal path control policy. RL treats the environment as a black box, which may result in poor sampling efficiency. We propose a method for closed-loop optimal control based on the ODIL (Optimizing a DIscrete Loss) framework, in which we combine the dynamics and the control objective into the same optimization problem. We compare this method to RL on a variety of path-planning problems involving swimmers in fluid flow. Our results suggest that ODIL is more robust and requires 10–100 times fewer policy evaluations during training, especially in high-dimensional action spaces. The implementation of the method is straightforward, as it takes advantage of standard machine learning tools for automatic differentiation and gradient-based optimization. Overall, we find that ODIL is a fast and easy-to-adopt computational tool for solving path-planning and control problems in fluid mechanics.
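A toy sketch of the ODIL idea, under the assumption of a point swimmer advected by a prescribed flow: the discretized trajectory and the control sequence are treated as joint decision variables, and a single loss combines the dynamics residual with the control objective and an effort penalty. This open-loop variant is illustrative only; the abstract describes a closed-loop formulation, and the flow field, penalty weights, and optimizer below are assumptions.

```python
# Hedged sketch of optimizing a discrete loss (ODIL) for a toy swimmer.
import torch

T, dt = 50, 0.1
target = torch.tensor([1.0, 1.0])               # desired end point of the swimmer

def flow(x):
    """Toy background flow (solid-body rotation)."""
    return torch.stack([-x[..., 1], x[..., 0]], dim=-1)

x = torch.zeros(T + 1, 2, requires_grad=True)   # trajectory: a decision variable
u = torch.zeros(T, 2, requires_grad=True)       # swimming velocity: a decision variable

opt = torch.optim.Adam([x, u], lr=0.05)
for it in range(2000):
    opt.zero_grad()
    residual = x[1:] - x[:-1] - dt * (flow(x[:-1]) + u)    # discretized dynamics
    loss = (residual ** 2).sum()                           # dynamics enforced as a penalty
    loss = loss + 10.0 * (x[0] ** 2).sum()                 # pin the initial position
    loss = loss + (x[-1] - target).pow(2).sum()            # control objective: reach target
    loss = loss + 1e-2 * (u ** 2).sum()                    # penalize swimming effort
    loss.backward()
    opt.step()
```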
Sunday, November 19, 2023 8:52AM - 9:05AM
A30.00005: SINDy-RL: Interpretable and Efficient Reinforcement Learning for Fluid Flow Control
Nicholas Zolman, Urban Fasel, Nathan Kutz, Steven L Brunton
Deep reinforcement learning (DRL) has shown significant promise for uncovering sophisticated control policies that interact with environments with complicated dynamics, such as stabilizing the magnetohydrodynamics of a tokamak reactor and minimizing the drag force exerted on an object in a fluid flow. However, these algorithms require many training examples and can become prohibitively expensive for many applications. In addition, the reliance on deep neural networks results in an uninterpretable, black-box policy that may be too computationally demanding for certain embedded systems. Recent advances in sparse dictionary learning, such as the sparse identification of nonlinear dynamics (SINDy), have proven to be a promising approach for creating efficient and interpretable data-driven models in the low-data regime. In this work, we extend ideas from the SINDy literature to introduce a unifying framework for combining sparse dictionary learning and DRL to create efficient, interpretable, and trustworthy representations of the dynamics model, reward function, and control policy. We demonstrate the effectiveness of our approaches on benchmark control environments and challenging fluids problems, achieving performance comparable to state-of-the-art DRL algorithms while using significantly fewer interactions with the environment and an interpretable control policy orders of magnitude smaller than a deep neural network policy.
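The sparse-dictionary building block that the abstract builds on can be sketched as a sequentially thresholded least-squares fit of a polynomial library, which (per the abstract) is applied to the dynamics model, reward function, and control policy. The snippet below is a generic sketch of that regression step, not the authors' SINDy-RL implementation; the library terms and threshold are assumptions.

```python
# Minimal SINDy-style sparse regression (sequentially thresholded least squares).
import numpy as np

def library(x):
    """Polynomial dictionary Theta(x) for a 2-state system."""
    x1, x2 = x[:, 0], x[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

def stlsq(theta, dxdt, threshold=0.1, iters=10):
    """Keep only coefficients above the threshold, refitting at each pass."""
    xi = np.linalg.lstsq(theta, dxdt, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        for k in range(dxdt.shape[1]):
            big = ~small[:, k]
            if big.any():
                xi[big, k] = np.linalg.lstsq(theta[:, big], dxdt[:, k], rcond=None)[0]
    return xi

# Usage: given state snapshots X (n x 2) and derivatives dX (n x 2),
# the sparse model is  dX ~ library(X) @ stlsq(library(X), dX).
```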
Sunday, November 19, 2023 9:05AM - 9:18AM
A30.00006: Distributed Actuation of Turbulent Flow Around a Cylinder using Deep Reinforcement Learning
Pedro Ivo Almeida, Ian Jacobi, Beni Cukurel, Siddhartha Verma
The turbulent wake behind a cylinder in crossflow exhibits large-scale unsteadiness that is highly sensitive to perturbations in the freestream. The ability to effectively modulate the wake can benefit various performance metrics, such as drag reduction, noise suppression, and mixing enhancement. Prior work on flow control around cylinders has focused on a variety of actuation methods, such as steady suction and blowing, cylinder rotation, acoustic excitation, electromagnetic forcing, synthetic jets, and various other approaches. Although these actuation methods have succeeded in effectively reducing drag and lift forces by suppressing vortex shedding, traditional control methods usually rely on linearization approaches, which can limit their effectiveness in fully developed turbulent flows. This study presents a data-driven approach to modulating large- and small-scale coherent structures by coupling large-eddy simulations (LES) of fully developed turbulent flow (10^4 < Re < 10^5) with an autonomous technique called deep reinforcement learning (RL). An RL agent is trained to perturb the flow in real time using a coordinated array of actuators distributed over the surface of the cylinder. The RL algorithm dynamically alters the actuation of 4 independent spanwise surface actuators to produce local sources of wall vorticity, thereby modulating the coupling between large- and small-scale coherent structures downstream.
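A hedged sketch of the closed-loop actuation described above: at every control step an RL policy maps flow observations to the amplitudes of the 4 independent spanwise actuators. The coupling functions `read_sensors`, `set_actuators`, and `advance_les` are hypothetical placeholders for the interface to the LES solver, and the policy is a stand-in.

```python
# Sketch of distributed actuation driven by an RL policy.
import numpy as np

n_actuators = 4
rng = np.random.default_rng(0)

def policy(obs):
    """Stand-in for the trained RL actor; returns bounded actuator amplitudes."""
    return np.tanh(rng.standard_normal(n_actuators))

def control_loop(read_sensors, set_actuators, advance_les, n_steps=1000):
    for _ in range(n_steps):
        obs = read_sensors()                  # e.g. pressure probes in the wake
        amplitudes = policy(obs)              # one command per spanwise actuator
        set_actuators(amplitudes)             # local sources of wall vorticity
        advance_les()                         # advance the flow to the next control step
```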
Sunday, November 19, 2023 9:18AM - 9:31AM
A30.00007: Reinforcement learning for real-time flow control of vertical axis wind turbines
Baptiste Corban, Daniel Fernex, Karen Mulleners, Emmanuel Rachelson, Michaël Bauerheim, Thierry Jardin
Vertical axis wind turbines present several advantages, including omni-directionality and low noise production.
Sunday, November 19, 2023 9:31AM - 9:44AM
A30.00008: Deep reinforcement learning for active separation control in a turbulent boundary layer
Francisco Alcántara-Ávila, Bernat Font, Jean Rabault, Ricardo Vinuesa, Oriol Lehmkuhl
Active flow control to reduce the recirculation bubble (RB) in a separated turbulent boundary layer is investigated using deep reinforcement learning (DRL). The RB is induced by imposing wall-normal blowing and suction at the top of the domain, which generates the separation. The separation control is performed by several control surfaces in the form of rectangular jets placed upstream of the RB, aligned with the streamwise direction and parallel to one another in the spanwise direction. These jets apply a wall-normal velocity whose magnitude is defined by the DRL agent. The actions proposed by the DRL agent are based on a partial observation of the velocity components at the RB and aim to maximize the accumulated reward in time. In this case, the wall shear stress is used as a reward proxy for the RB length. Since the flow is periodic in the spanwise direction, the domain can be divided into invariant subdomains, which allows us to use a multi-agent reinforcement learning technique. This technique exploits the invariance of the domain to generate multiple explorations within a single large-eddy simulation. A comparison with classical control techniques to reduce the RB size is also reported, highlighting the improvements that DRL brings to this case.
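A minimal sketch of the multi-agent trick described above, assuming the spanwise direction is split into invariant subdomains: every simulation step then yields one experience tuple per jet, all feeding a single shared policy. The names, shapes, and the stand-in policy are illustrative; the coupling to the LES solver is omitted.

```python
# Sketch of multi-agent experience collection exploiting spanwise invariance.
import numpy as np

n_jets = 6                                      # spanwise-invariant subdomains
shared_buffer = []                              # experience shared by all pseudo-agents

def policy(obs):
    """Stand-in for the shared DRL actor (same weights for every subdomain)."""
    return float(np.tanh(obs.mean()))

def collect_step(local_obs, local_reward):
    """local_obs, local_reward: one entry per subdomain from the same LES step."""
    actions = []
    for j in range(n_jets):
        a = policy(local_obs[j])                # wall-normal jet velocity for jet j
        shared_buffer.append((local_obs[j], a, local_reward[j]))
        actions.append(a)
    return np.array(actions)                    # applied as boundary conditions
```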
Sunday, November 19, 2023 9:44AM - 9:57AM
A30.00009: Control of reacting flows with hybrid differentiable/deep learning flow solver
Nilam Tathawadekar, Camilo Silva, Nils Thuerey, Nguyen Anh Khoa Doan
The control of turbulent reacting flows is very challenging due to the chaotic nature of the flow, the strong nonlinearity of the chemical reactions, and the complex interplay between flow and chemistry. Nonetheless, achieving such control in reacting flows could be of great importance given the various combustion instabilities that can occur in these systems. Typical approaches to this problem rely on the construction of ad hoc reduced-order models of the combustion system for which control laws are designed, mainly based on expert knowledge. Recently, deep reinforcement learning has been applied to nonreacting flow control with some success. Nevertheless, this approach tends to be computationally costly, as it requires many episodes to train the controller. In this work, we propose a novel approach that combines a differentiable reacting-flow solver and a deep learning controller. It uses the differentiability of the flow solver to provide the gradients of the objective function with respect to the controller parameters over multiple timesteps, accelerating training and ensuring stable control over a longer period. We test our framework on the problem of driving an arbitrary initial flame shape toward a target flame shape, which is relevant when the flame must be kept away from low-velocity regions (to prevent flashback, for example). We show that the proposed framework identifies an efficient control strategy for this problem.
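A hedged sketch of the training loop described above, with a 1-D diffusion equation standing in for the differentiable reacting-flow solver: a neural controller forces the solver for several timesteps, and the objective's gradient is backpropagated through the rollout to update the controller. The network, loss, and toy PDE are assumptions for illustration, not the authors' solver.

```python
# Sketch: training a controller through a differentiable solver rollout.
import torch

nx, nt, dt = 64, 20, 0.05
target = torch.sin(torch.linspace(0, 3.14159, nx))          # toy "target flame shape"

controller = torch.nn.Sequential(
    torch.nn.Linear(nx, 32), torch.nn.Tanh(), torch.nn.Linear(32, nx))

def step(u, forcing):
    """One explicit step of a differentiable diffusion equation with control forcing."""
    lap = torch.roll(u, 1) - 2 * u + torch.roll(u, -1)
    return u + dt * (0.1 * lap + forcing)

opt = torch.optim.Adam(controller.parameters(), lr=1e-3)
for epoch in range(500):
    u = torch.zeros(nx)                                      # toy initial state
    for _ in range(nt):                                      # rollout through time
        u = step(u, controller(u))
    loss = ((u - target) ** 2).mean()                        # mismatch to the target shape
    opt.zero_grad()
    loss.backward()                                          # backprop through the solver
    opt.step()
```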