Bulletin of the American Physical Society
63rd Annual Meeting of the APS Division of Plasma Physics
Volume 66, Number 13
Monday–Friday, November 8–12, 2021; Pittsburgh, PA
Session NM10: Mini-Conference: Machine Learning in Plasma Sciences I (On Demand)
Chair: Zhehui Wang, Los Alamos National Laboratory | Room 406
Wednesday, November 10, 2021 9:30AM - 9:35AM
NM10.00001: Welcome Remarks
Wednesday, November 10, 2021 9:35AM - 10:05AM
NM10.00002: Automation and control of laser wakefield accelerators using Bayesian optimization
Rob Shalloo, Stephen J Dann, Jan-Niclas Gruse, Christopher Underwood, Andre F Antoine, Christopher Arran, Michael Backhouse, Christopher Baird, Mario Balcazar, Nicholas Bourgeois, Jason A Cardarelli, Peter W Hatfield, Jiwoong Kang, Karl M Krushelnick, Stuart P.D. Mangles, Chris D Murphy, Ning Lu, Jens Osterhoff, Kristjan Poder, Pattathil Rajeev, Christopher P Ridgers, Savio V Rozario, Matthew P Selwood, Ashwin J Shahani, Dan R Symes, Alec G.R. Thomas, Christopher Thornton, Zulfikar Najmudin, Matthew J. V Streeter
Laser wakefield accelerators promise to revolutionize many areas of accelerator science. However, one of the greatest challenges to their widespread adoption is the difficulty of controlling and optimizing the accelerator outputs, owing to coupling between input parameters and the dynamic evolution of the accelerating structure. Here, we use machine learning techniques to automate a 100 MeV-scale accelerator, which optimized its outputs by simultaneously varying up to six parameters, including the spectral and spatial phase of the laser pulse and the plasma density and length. Crucially, the algorithm incorporates the measurement uncertainties, a key feature for the efficient multi-dimensional optimization of a real machine. Most notably, the model built by the algorithm enabled optimization of the laser evolution that might otherwise have been missed in single-variable scans. In addition, interrogation of the generated models can provide physical insight into the systems under study. In our case, subtle tuning of the laser pulse shape caused an 80% increase in electron beam charge, despite the pulse length changing by just 1% by the usual metrics.
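The closed-loop approach this abstract describes can be illustrated with a toy sketch: a Gaussian-process model that folds the measurement noise into its posterior, driving an upper-confidence-bound acquisition rule over a single control knob. The objective function, kernel length scale, and noise level below are all invented stand-ins, not the experiment's actual model or code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1D stand-in for a beam-charge objective with measurement noise.
def measure(x, noise=0.05):
    return np.exp(-(x - 0.3) ** 2 / 0.02) + noise * rng.standard_normal()

def rbf(a, b, ls=0.1):
    # Squared-exponential kernel between two 1-D point sets.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def gp_posterior(X, y, Xs, noise_var=0.05 ** 2):
    # GP regression with the observation noise built into the model,
    # mirroring the abstract's point that uncertainties must enter the optimizer.
    K = rbf(X, X) + noise_var * np.eye(len(X))
    Ks = rbf(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = 1.0 + noise_var - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mu, np.sqrt(np.maximum(var, 1e-12))

# Bayesian-optimization loop: fit the GP, then pick the next setting with an
# upper-confidence-bound (explore/exploit) rule.
X = list(rng.uniform(0, 1, 3))
y = [measure(x) for x in X]
grid = np.linspace(0, 1, 200)
for _ in range(15):
    mu, sd = gp_posterior(np.array(X), np.array(y), grid)
    x_next = grid[np.argmax(mu + 2.0 * sd)]
    X.append(x_next)
    y.append(measure(x_next))

best = X[int(np.argmax(y))]
print(f"best input found: {best:.3f}")  # true optimum of the toy objective is 0.3
```

In a real experiment the scalar `x` becomes a six-dimensional vector of laser and plasma settings, but the fit-acquire-measure loop is unchanged.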
Wednesday, November 10, 2021 10:05AM - 10:20AM
NM10.00003: Bayesian Methods in Inertial Confinement Fusion Research: Embracing Uncertainty and Learning More from our Data*
Patrick F Knapp, Michael E Glinsky, William E Lewis
Bayesian methods have recently been applied to a wide range of problems in the physical sciences, allowing physicists to extract more information from the available data, with quantified uncertainties. Statistical inference is at the heart of many critical tasks such as feature extraction, hypothesis testing, and parameter estimation. Here we will briefly review the formalism that underlies these tasks before describing the novel ways in which Bayesian methods are being applied in the field of inertial confinement fusion. These applications include data assimilation (the process of combining multiple disparate measurements to constrain a model), optimization of diagnostic configurations to minimize uncertainty, and automated feature extraction. When combined with deep-learning-enabled surrogate models, these tools can efficiently capture complex physics with high fidelity, furthering our understanding in ways that were previously impossible.
*SNL is managed and operated by NTESS under DOE NNSA contract DE-NA0003525
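As a minimal illustration of the data-assimilation idea (combining disparate measurements to tighten a constraint), the Gaussian case can be worked in closed form: precisions add, and the combined mean is a precision-weighted average. The numbers below are invented for illustration, not ICF data.

```python
def combine(mu1, sigma1, mu2, sigma2):
    """Bayesian combination of two independent Gaussian measurements of the
    same quantity: precisions (inverse variances) add, and the posterior mean
    is the precision-weighted average of the two estimates."""
    w1, w2 = 1.0 / sigma1 ** 2, 1.0 / sigma2 ** 2
    mu = (w1 * mu1 + w2 * mu2) / (w1 + w2)
    sigma = (w1 + w2) ** -0.5
    return mu, sigma

# Two instruments measure the same quantity (arbitrary units):
mu, sigma = combine(10.0, 2.0, 12.0, 2.0)
print(mu, sigma)  # 11.0, ~1.414: the combined estimate is tighter than either input
```

The same precision-weighting structure underlies full data assimilation, where the "measurements" become likelihoods evaluated through a forward model.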
Wednesday, November 10, 2021 10:20AM - 10:35AM
NM10.00004: Machine Learning Spike Trains of Uneven Duration and Delay: STUD Pulses for Laser-Plasma Instability Control and Suppression
Bedros B Afeyan, Jeffrey A Hittinger, Jayaraman J Thiagarajan, Anirudh Rushil
We will show how the laser-plasma instability control and suppression problem can be translated into an inverse problem: discovering the strong space-time modulation profile of an incident laser that mimics or reproduces a Stimulated Raman Scattering (SRS) reflectivity or hot-electron generation profile deemed desirable or acceptable for a given IFE scheme. Our goal is to show how to execute this inverse learning program, optimize the search, guide the exploration of the vast terrain of heights, widths, and spacings between laser spikes in a spike train, and map these to desirable plasma responses. Many techniques can be used to simplify or speed up these tasks, such as Koopman operator linearization and spectral analysis, Dynamic Mode Decomposition, suppression of strong turbulence and relaxation to weak (or wave) turbulence regimes, suppression of intermittency, and, more generally, kinetic plasma phase-space control. This program can be executed on very sophisticated kinetic models of SRS, or with mere PIC codes, or with yet simpler PDE and ODE models in one space and/or one time dimension. The question is how well transfer learning will work among these models and real-world data obtained under different laser and plasma conditions. We will turn to ML to explore for answers.
Wednesday, November 10, 2021 10:35AM - 10:50AM
NM10.00005: Data-driven modelling of laser-plasma experiments enabled by large datasets
Andre F Antoine, Alexander G Thomas, Jason A Cardarelli, Matthew J. V Streeter, Chris D Murphy, Karl M Krushelnick, Christopher Arran, Stuart P.D. Mangles, Zulfikar Najmudin, Archis Joglekar, Mario Balcazar, Peter W Hatfield, Nicholas Bourgeois, Stephen J Dann, Jan-Niclas Gruse, Daniel R Symes, Christopher P Ridgers, Ashwin J Shahani, Rob Shalloo, Savio V Rozario, Kristjan Poder, Jens Osterhoff, Matthew P Selwood, Michael Backhouse, Christopher Underwood, Jiwoong Kang, Rajeev Pattathil, Christopher Baird, Ning Lu
Laser Wakefield Acceleration (LWFA) is a process by which high-gradient plasma waves are excited by a laser, leading to the acceleration of electrons. The process is highly nonlinear, which makes it difficult to develop three-dimensional models for a priori and/or ab initio prediction.
Wednesday, November 10, 2021 10:50AM - 11:05AM
NM10.00006: What machine learning can and cannot do for ICF
Baolian Cheng
Machine learning (ML) methodologies have played remarkable roles in solving complex systems with large data, well-defined input-output pairs, and clearly definable goals and metrics. The methodology is especially effective for image analysis, classification, and systems without long chains of logic or reasoning dependent on diverse background knowledge or common sense. Recently, the methodology has been actively applied to inertial confinement fusion (ICF) capsules and to design optimization of NIF (National Ignition Facility) ignition capsules, making significant progress. As it is applied more widely, ML raises concerns about its capabilities and deficiencies for ICF. ICF is a physical system requiring one or more of: long chains of logic, complex planning, and reliance on physics knowledge and human judgement unknown to the computer. Additionally, the experimental database in ICF is not large enough for credible training, so most researchers in ICF use simulation (or a mix of simulation and experimental) results instead of real data to train ML and related tools such as deep learning, and then use the trained model to predict future events. As expected, the present ML predictions are not as accurate as one would like. Also, because of the extreme sensitivity of the neutron yield to the input implosion parameters, the distribution function in the ICF learning models changes rapidly over time, requiring frequent retraining. To be effective, physics-guided machine learning is preferred for ICF, especially while the database is small and the physical capabilities of the learning models are still being developed. In this work, we identify problems in ICF that are suitable for ML and describe circumstances where ML is less likely to be successful.
Wednesday, November 10, 2021 11:05AM - 11:20AM
NM10.00007: Optimizing high energy density experimental designs to elucidate complex coupling between physics phenomena for training ML models
John L Kline, Michael J Grosskopf, Nelson M Hoffman, Bedros B Afeyan
Complex engineered systems in the high energy density regime, such as inertial confinement fusion (ICF), remain challenging to predict with radiation-hydrodynamic codes. While progress has been made with large 3D simulations, computing power limits the number of simulations that can be done. Similarly, the low data return rates for these complex engineered systems make it hard to uncover interactions between the various pieces of unit physics that make up the system. The typical approach to unraveling and validating the unit physics is to hold all parameters constant while varying a single parameter and measuring the system response. These slices through parameter space take many experiments and, except for parameters that cannot be held fixed, provide little information about the coupling between the underlying physics. Here we examine ways to optimize data collection so that it spans the parameter space, aiming not only to validate our understanding of the individual effects but also to capture information about the coupling. This multivariate approach could provide a better set of data for validating our models with the same number of experiments. For machine learning models trained on simulation data and intended to enable faster exploration of the space, these data sets could provide a more global validation of the ML model. This could be especially impactful in the case of transfer learning, by not limiting data to small regions of parameter space focused on optimizing the yield of a given ICF target design.
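A minimal sketch of the contrast this abstract draws: a space-filling Latin-hypercube design against a one-at-a-time (OAT) scan, scored by how much of a two-parameter interaction grid each design visits. The sampler and the coverage metric are illustrative choices, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(2)

def latin_hypercube(n, d):
    # Space-filling design: each of the d parameters is stratified into n bins,
    # with exactly one sample per bin in every dimension.
    perm = rng.permuted(np.tile(np.arange(n), (d, 1)), axis=1).T
    return (perm + rng.random((n, d))) / n

n, d = 20, 4
lhs = latin_hypercube(n, d)

# One-at-a-time scan: vary a single parameter while holding the rest at 0.5.
oat = np.full((n, d), 0.5)
for i in range(n):
    oat[i, i % d] = (i // d) / (n // d)

def pair_coverage(design, bins=4):
    # Fraction of (parameter-0, parameter-1) grid cells visited: a crude proxy
    # for how much a design can reveal about coupling between two parameters.
    idx = np.minimum((design[:, :2] * bins).astype(int), bins - 1)
    return len({tuple(cell) for cell in idx}) / bins ** 2

print(pair_coverage(lhs), pair_coverage(oat))  # OAT only covers a cross through the center
```

With the same shot budget, the OAT scan concentrates its points on two axis-aligned slices, while the space-filling design spreads them over the joint space where interaction effects live.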
Wednesday, November 10, 2021 11:20AM - 11:35AM
NM10.00008: Neural Networks for Rapid Analysis of High Repetition Rate Diagnostics
Derek Mariscal
While many repetition-rate-capable PW-class laser facilities (0.1-10 Hz) are now online across the world, most are unable to regularly operate at their designed shot rate due to the lack of diagnostics that can operate at comparable rates. Diagnostic analysis is also a manual, human-supervised process that is often time-consuming even with a well-designed algorithm. It would be favorable to operate experiments intelligently with software; however, this requires rapid and robust solutions that perform diagnostic analysis "on the fly" at rates higher than the laser operating frequency. Neural networks (NNs) have proven to be very powerful tools for image classification, object recognition, natural language processing, and, more recently, in the sciences. Here we present a proof-of-principle methodology for developing a NN surrogate that rapidly recovers metrics of interest from a rep-rate-compatible diagnostic for laser-accelerated MeV proton beams, analyzing images and returning beam characteristics such as spectral temperature, spatial profile, and total energy contained within the beam. This enables recovery of this information on the millisecond timescale, compatible with current-generation high-rep-rate lasers, with an average error of approximately 3%.
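The surrogate idea can be sketched with a toy inverse problem: synthetic spectra generated from a known "temperature" parameter, and a small network trained to recover it from the spectrum alone. The diagnostic forward model, network size, and training settings are all invented for illustration and are not the proof-of-principle system presented in this talk.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical diagnostic forward model: a spectrum whose shape depends on a
# temperature-like parameter T; the network learns the inverse map.
E = np.linspace(1.0, 20.0, 32)
def spectrum(T):
    s = np.exp(-E / T)
    return s / s.max()

T_train = rng.uniform(2.0, 8.0, 500)
X = np.array([spectrum(T) for T in T_train])
y = (T_train - 2.0) / 6.0  # scale targets to [0, 1]

# One-hidden-layer network trained with plain full-batch gradient descent.
W1 = rng.normal(0.0, 0.3, (32, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.3, 16);       b2 = 0.0
lr = 0.1
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)
    p = h @ W2 + b2
    err = p - y                          # d(loss)/dp for loss = mean(err^2)/2
    gW2 = h.T @ err / len(y); gb2 = err.mean()
    gh = np.outer(err, W2) * (1.0 - h ** 2)
    gW1 = X.T @ gh / len(y);  gb1 = gh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

# Fast inference on unseen "shots": two matrix products per image.
T_test = rng.uniform(2.5, 7.5, 100)
pred = np.tanh(np.array([spectrum(T) for T in T_test]) @ W1 + b1) @ W2 + b2
rel_err = np.mean(np.abs((pred * 6.0 + 2.0) - T_test) / T_test)
print(f"mean relative error: {rel_err:.3f}")
```

The point of the surrogate is the inference cost: once trained, recovering the metric is a couple of small matrix multiplications, which is what makes millisecond-scale, shot-rate analysis feasible.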
Wednesday, November 10, 2021 11:35AM - 11:50AM
NM10.00009: Multi-Output Surrogate Construction for Fusion Simulations
Kathryn Maupin, Anh Tran, Michael E Glinsky, Patrick F Knapp, William E Lewis
Computational simulation has allowed scientists to explore, observe, and test physical regimes previously thought to be unattainable. Bayesian analysis provides a natural framework for incorporating the uncertainties that undeniably exist in computational modeling. In the absence of a reliable low-fidelity physics model, phenomenological surrogate models can be used to mitigate the expense of performing Bayesian analysis and uncertainty quantification; however, phenomenological models may not adhere to known physics or properties. Furthermore, the interactions of complex physics in high-fidelity codes lead to dependencies between quantities of interest (QoIs) that are difficult to quantify and capture when individual surrogates are used for each observable.
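A toy sketch of the dependency problem described above: two synthetic QoIs coupled through a shared unmodeled term. Fitting both at once with a multi-output least-squares surrogate exposes strongly correlated residuals, which is exactly the structure that independent per-QoI surrogates with independent error models would discard. All quantities here are fabricated for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Inputs and two coupled quantities of interest (e.g. a yield-like and a
# temperature-like output); the coupling enters through a shared latent term.
X = rng.uniform(-1.0, 1.0, (200, 3))
shared = rng.normal(0.0, 0.3, 200)  # physics that both QoIs feel
Y = np.column_stack([
    X @ np.array([1.0, 0.5, 0.0]) + shared + rng.normal(0.0, 0.1, 200),
    X @ np.array([0.2, 0.0, 1.0]) + shared + rng.normal(0.0, 0.1, 200),
])

# Multi-output linear surrogate: one least-squares fit for both QoIs at once.
B, *_ = np.linalg.lstsq(X, Y, rcond=None)
resid = Y - X @ B

# The residual cross-correlation is the QoI dependence that a pair of
# independent single-output surrogates would fail to represent.
rho = np.corrcoef(resid.T)[0, 1]
print(f"residual correlation between QoIs: {rho:.2f}")
```

A joint surrogate can model this residual covariance explicitly; two separate surrogates would treat the same errors as independent and misstate joint probabilities over the QoIs.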
Wednesday, November 10, 2021 11:50AM - 12:05PM
NM10.00010: Improving the plasma science data environment
Nicholas Murphy
The ability to perform data science in a field depends strongly on its data environment. At present, most experimental data sets are not openly available, and most laboratory plasma facilities have their own internal practices for storing, organizing, and accessing data sets. This is in contrast to fields like heliophysics and astronomy, where data sets largely follow community-wide conventions and it is hard to think of an observational data set that is not open access. I will present two strategies for improving the data environment of plasma science: (1) community-wide adoption of open metadata standards, and (2) the creation of an online portal to provide open access to plasma data akin to the Virtual Solar Observatory. Finally, I will discuss how these strategies will improve findability, accessibility, interoperability, and reusability of plasma data and move our field closer to the open science paradigm.
Wednesday, November 10, 2021 12:05PM - 12:20PM
NM10.00011: Reducing Economic Costs as an Explicit Requirement for Machine-Learning-Based Classifiers
Matthew S Parsons
With fusion research pushing toward commercialization, there is no time like the present to start thinking about how economic costs play into various aspects of our research. Along those lines, machine learning tools offer cost savings in a variety of ways, including by reducing computing resources needed for complex simulations and real-time analysis, by guiding the use of limited experimental resources through improved predictive modeling, and even by reducing damage to tokamak reactor components through improved disruption prediction. In this work, we will focus on discussing how cost reduction can be used as an explicit requirement for machine-learning-based classifiers. In this intuitive approach we will look at how the balance between the True Positive and False Positive Rates of classification represents a tradeoff of real economic costs, and how a minimum performance threshold can be derived from these costs. The cost-reduction threshold provides both a necessary and sufficient condition for implementing any classifier, and can further be used to assess which classifier provides the best cost savings. We will look at recent examples from the literature on disruption prediction as an illustration of how to use this cost-based framework for assessing classifiers.
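The break-even logic described in this abstract can be written down directly: with a per-shot disruption probability p, an avoided-damage value c_damage, and a false-alarm cost c_alarm, a classifier saves money only if p * TPR * c_damage exceeds (1 - p) * FPR * c_alarm, and setting the two sides equal yields a minimum required TPR. The numbers below are invented for illustration, not drawn from any tokamak cost model.

```python
def expected_savings(tpr, fpr, p_disrupt, c_damage, c_alarm):
    # Per-shot expected saving of deploying the classifier versus doing nothing:
    # caught disruptions avoid c_damage, false alarms each waste c_alarm.
    return p_disrupt * tpr * c_damage - (1.0 - p_disrupt) * fpr * c_alarm

def min_tpr(fpr, p_disrupt, c_damage, c_alarm):
    # Solve expected_savings == 0 for TPR: the minimum performance threshold
    # implied by the economic costs at a given false positive rate.
    return (1.0 - p_disrupt) * fpr * c_alarm / (p_disrupt * c_damage)

# Illustrative numbers: 5% disruption rate, damage 100x the cost of a false alarm.
p, cd, ca = 0.05, 100.0, 1.0
print(min_tpr(0.10, p, cd, ca))                 # required TPR is only ~0.019 here
print(expected_savings(0.90, 0.10, p, cd, ca))  # positive, so deployment pays off
```

The same functions can rank competing classifiers by evaluating `expected_savings` at each one's operating point on its ROC curve.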