Bulletin of the American Physical Society
72nd Annual Meeting of the APS Division of Fluid Dynamics
Volume 64, Number 13
Saturday–Tuesday, November 23–26, 2019; Seattle, Washington
Session H17: Minisymposium: Machine Learning in Fluid Mechanics
Chair: Jeff Eldredge, University of California, Los Angeles
Room: 4c4
Monday, November 25, 2019, 8:00AM–8:26AM
H17.00001: Opportunities for Machine Learning in Fluid Mechanics
Invited Speaker: Michael Brenner
There are tremendous opportunities to use recent advances in machine learning and artificial intelligence to advance fluid mechanics as a discipline. This talk will give an overview of these opportunities, including (i) making scientific discoveries, such as the discovery of novel flow phenomena; (ii) defining new representations of dynamical flows that make numerical solvers more efficient; (iii) the design of novel methods for experimental imaging and characterization; and (iv) the development of novel coarse-grained solvers for the Navier–Stokes equations. I will summarize the advances in machine learning that have made these opportunities possible and include some recent examples from our own work.
Monday, November 25, 2019, 8:26AM–8:52AM
H17.00002: Interpretable and Generalizable Machine Learning for Fluid Mechanics
Invited Speaker: Steven Brunton
Many tasks in fluid mechanics, such as design optimization and control, are challenging because fluids are nonlinear and exhibit a large range of scales in both space and time. This range of scales necessitates exceedingly high-dimensional measurements and computational discretization to resolve all relevant features, resulting in vast data sets and time-intensive computations. Indeed, fluid dynamics is one of the original big data fields, and many high-performance computing architectures, experimental measurement techniques, and advanced data processing and visualization algorithms were driven by decades of research in fluid mechanics. Machine learning constitutes a growing set of powerful techniques to extract patterns and build models from this data, complementing the existing theoretical, numerical, and experimental efforts in fluid mechanics. In this talk, we will explore current goals and opportunities for machine learning in fluid mechanics, and we will highlight a number of recent technical advances. Because fluid dynamics is central to transportation, health, and defense systems, we will emphasize the importance of machine learning solutions that are interpretable, explainable, generalizable, and that respect known physics.
Monday, November 25, 2019, 8:52AM–9:18AM
H17.00003: Machine Learning for Predictive Turbulence Modeling: A Cautiously Optimistic Perspective
Invited Speaker: Karthik Duraisamy
Machine learning has shown promise in describing, reconstructing, or even predicting properties of a given system, provided large amounts of relevant data. This talk focuses on how one can construct data-augmented models for turbulent flows that can learn from different systems, and transfer this modeling knowledge to make predictions in other systems in a non-parametric context. This defines a paradigm of transfer learning in the sense that the learning should target global rules, rather than problem-specific information, that are common to a class of systems sharing similar physics. In the limit of finite (big or small) data, this requires the enforcement of a variety of physical, physics-inspired, and empirically known constraints. We embed learning architectures within PDE models and train the hybrid model in an integrated fashion, thus enforcing consistency between the learning and the model construction. Examples of the enforcement of hard and soft constraints will be provided. These hybrid models are trained across different systems that are representative of the underlying model discrepancy, yielding predictions on unseen problems with quantified error bounds. Algorithmic, physical, and data-related challenges will be discussed toward the goal of achieving truly robust and generalizable models.
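The soft-constraint idea can be sketched with a toy example: a model is fit to data while a penalty term discourages violating a known physical property. Here the model (a quadratic), the data, the symmetry constraint b = 0, and the penalty weight lam are all invented for illustration and are not the turbulence models or constraints from the talk.

```python
import random

# Toy sketch of a soft physics constraint (hypothetical example).
# Fit y ~ a*x^2 + b*x to noisy data; "physics" says the response is
# symmetric, so a penalty lam*b^2 drives the asymmetric term to zero.

rng = random.Random(0)
xs = [i / 10 - 1 for i in range(21)]                        # grid on [-1, 1]
data = [(x, 2.0 * x * x + rng.gauss(0, 0.05)) for x in xs]  # symmetric "truth"

def fit(lam, lr=0.05, epochs=2000):
    a = b = 0.0
    for _ in range(epochs):
        ga = gb = 0.0
        for x, y in data:
            r = a * x * x + b * x - y          # residual
            ga += 2 * r * x * x / len(data)
            gb += 2 * r * x / len(data)
        gb += 2 * lam * b                      # soft constraint: penalize asymmetry
        a, b = a - lr * ga, b - lr * gb
    return a, b

a0, b0 = fit(lam=0.0)    # unconstrained fit: b absorbs noise
a1, b1 = fit(lam=10.0)   # physics-informed fit: b is driven toward zero
```

The constraint is "soft" because it enters as a penalty weight rather than being imposed exactly; a hard constraint would instead remove b from the parameterization altogether.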
Monday, November 25, 2019, 9:18AM–9:44AM
H17.00004: Deep Reinforcement Learning for Flow Control
Invited Speaker: Petros Koumoutsakos
Reinforcement learning (RL) is a mathematical framework for problem solving that implies goal-directed interactions of an agent with its environment. The agent has a repertoire of actions, perceives states, and learns a policy from its experiences in the form of rewards. Deep recurrent neural networks can be used to encode the state-action policy of the agent, and effective training implies a selective choice of experiences. RL does not require a model of the dynamics, such as a Markov transition model, and relies instead on repeated interactions of the agent with the environment. I consider that this property makes it highly suitable for complex problems in fluid dynamics, albeit, at present, only when significant computing power or experimental automation is available. Deep RL has been very successful in games (from backgammon to video games and the game of Go) and robotics, but remains rather unexplored in fluid mechanics. I hope to illustrate some of the strengths and limitations of deep RL for flow control using simulations of perching and gliding bodies as well as collective fish swimming.
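The agent-environment loop described above can be sketched in a few lines. This toy tabular Q-learning agent on an invented one-dimensional task is illustrative only (no flow physics, no deep networks): the agent never sees a model of the dynamics and learns purely from states, actions, and rewards.

```python
import random

# Toy tabular Q-learning: an agent on a 1-D chain of states learns, from
# rewards alone, a policy that moves toward state 0. The environment and
# reward are invented for illustration.

N_STATES, ACTIONS = 5, (-1, +1)

def step(state, action):
    """Hypothetical environment: reward penalizes distance from state 0."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, -nxt

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = N_STATES - 1
        for _ in range(20):
            # epsilon-greedy: mostly exploit, occasionally explore
            a = rng.choice(ACTIONS) if rng.random() < eps \
                else max(ACTIONS, key=lambda act: Q[(s, act)])
            s2, r = step(s, a)
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
            s = s2
    return Q

Q = train()
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES)}
```

In deep RL the Q-table is replaced by a neural network, but the interaction loop (act, observe, receive reward, update) is the same.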
Monday, November 25, 2019, 9:44AM–10:10AM
H17.00005: Classifying Flows using Neural Networks
Invited Speaker: Eva Kanso
Swimming organisms and robotic vehicles create long-lived flow disturbances that can in principle be detected and even exploited for tracking and navigation. Experimental evidence suggests that many aquatic organisms, from mate seekers to hungry predators, respond to specific hydrodynamic cues created by their respective prey. However, the exact features that make these flows distinguishable, and the sensory measurements and layouts that are needed to detect them, remain elusive. Here, we consider the inverse problem of classifying flow patterns from local sensory measurements. Specifically, we train neural networks to classify flow patterns by relying on flow sensors that measure a time history of the local flow signal at the sensor location. We systematically investigate the network performance for distinct types of sensory measurements: vorticity, flow velocities parallel and transverse to the direction of flow propagation, and flow speed. We show that the networks trained using transverse velocity outperform other networks, even when subjected to aggressive data corruption. We then train the network to classify flow patterns from instantaneous (one-time) measurements, using a spatially distributed array of sensors. The networks based on the spatially distributed sensory arrays exhibit remarkable accuracy in flow classification, even when only a handful of sensors are active. We conclude by commenting on the advantages and limitations of these models for flow detection and classification, and we discuss how these results lay the groundwork for developing combined data-driven and physics-based models for flow sensing using distributed sensory arrays.
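A minimal sketch of this kind of classification setup, with invented stand-ins: two synthetic "flow signals" that differ in oscillation frequency are classified from a single sensor's time history, here by logistic regression rather than the neural networks and real flow data of the talk.

```python
import math
import random

# Toy sketch: classify two synthetic "flow signals" (low- vs high-frequency
# oscillations plus noise) from a single sensor's time history, using
# logistic regression trained by SGD. All signals are invented.

T = 20                      # samples in the time history
rng = random.Random(1)

def signal(label, noise=0.3):
    """Class 0: one oscillation per window; class 1: two oscillations."""
    f = 1.0 if label == 0 else 2.0
    return [math.sin(2 * math.pi * f * t / T) + rng.gauss(0, noise)
            for t in range(T)]

def train(n=200, lr=0.1, epochs=50):
    data = [(signal(y), y) for y in (0, 1) for _ in range(n)]
    w, b = [0.0] * T, 0.0
    for _ in range(epochs):
        for x, y in data:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            z = max(min(z, 30.0), -30.0)       # guard against exp overflow
            p = 1.0 / (1.0 + math.exp(-z))     # sigmoid
            g = p - y                          # gradient of logistic loss wrt z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    return int(sum(wi * xi for wi, xi in zip(w, x)) + b > 0)

w, b = train()
test_set = [(signal(y), y) for y in (0, 1) for _ in range(50)]
acc = sum(predict(w, b, x) == y for x, y in test_set) / len(test_set)
```

A distributed-array version would simply stack one instantaneous measurement per sensor into the feature vector in place of the time history.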
Monday, November 25, 2019, 10:10AM–10:36AM
H17.00006: Differentiable Fluid Simulations for Deep Learning
Invited Speaker: Nils Thuerey
In this talk I will focus on the possibilities that arise from recent advances in the area of deep learning for accelerating and improving physics simulations. In this context, the Navier–Stokes equations represent an interesting and challenging advection-diffusion PDE that poses a variety of challenges for deep learning methods. In particular, I will focus on highlighting how differentiable fluid solvers can guide deep learning processes and support finding desirable solutions. The existing numerical methods for efficient solvers can be leveraged within learning tasks to provide crucial information in the form of reliable gradients to update the weights of a neural network. Interestingly, it turns out to be beneficial to combine supervised and unsupervised approaches. The former poses a much simpler learning task by providing explicit reference data that is typically pre-computed. Unsupervised learning, on the other hand, can provide gradients for a larger space of states that are only encountered during training runs. Here, differentiable solvers are particularly powerful to, e.g., provide neural networks with feedback about how inferred solutions influence the long-term behavior of a physical model. I will demonstrate this concept with several examples of force-based interactions with fluids. Learning with differentiable solvers represents a very promising direction within the larger field of physics-based deep learning. I will conclude by discussing current limitations and by giving an outlook about promising future directions.
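The idea of obtaining gradients through a solver can be sketched with a toy one-dimensional diffusion solver: a forward-mode tangent is propagated alongside the state, yielding the gradient of a mismatch loss with respect to a forcing parameter. The grid, forcing shape, and target are invented for illustration; real differentiable-solver frameworks typically use reverse-mode automatic differentiation instead.

```python
# Toy sketch of learning through a differentiable solver: a 1-D explicit
# diffusion solver is differentiated with respect to a constant forcing
# parameter f by propagating a forward-mode tangent v = du/df alongside
# the state u. Grid, forcing shape, and target are invented.

N, DT, NU, STEPS = 32, 0.1, 0.5, 50

def diffuse(u):
    """One explicit diffusion step with periodic boundaries."""
    return [u[i] + DT * NU * (u[i - 1] - 2 * u[i] + u[(i + 1) % N])
            for i in range(N)]

shape = [1.0 if N // 4 <= i < N // 2 else 0.0 for i in range(N)]  # forcing support

def solve_with_tangent(f):
    """Run the solver and its f-derivative simultaneously."""
    u = [0.0] * N       # state
    v = [0.0] * N       # tangent du/df (the forcing enters linearly)
    for _ in range(STEPS):
        u = [ui + DT * f * si for ui, si in zip(diffuse(u), shape)]
        v = [vi + DT * si for vi, si in zip(diffuse(v), shape)]
    return u, v

target, _ = solve_with_tangent(2.0)   # state produced by the "true" forcing

f, lr = 0.0, 0.002
for _ in range(100):
    u, v = solve_with_tangent(f)
    # gradient of the L2 mismatch, obtained through the solver
    grad = sum(2 * (ui - ti) * vi for ui, ti, vi in zip(u, target, v))
    f -= lr * grad
```

Replacing the scalar f with the output of a neural network, and the tangent with backpropagation through the solver steps, gives the training setup the abstract describes.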