Bulletin of the American Physical Society
APS March Meeting 2022
Volume 67, Number 3
Monday–Friday, March 14–18, 2022; Chicago
Session D32: Data Science, Artificial Intelligence and Machine Learning I (Focus Session; Recordings Available)

Sponsoring Units: GDS
Chair: Pavel Lukashev, University of Northern Iowa
Room: McCormick Place W192B
Monday, March 14, 2022, 3:00PM–3:36PM
D32.00001: Data, Disorder and Ceramics
Invited Speaker: Stefano Curtarolo
Disordered multicomponent systems, occupying the mostly uncharted centers of phase diagrams, have been studied for the last two decades for their potentially revolutionary properties. The search for new systems is mostly performed with trial-and-error techniques, as effective computational discovery is challenged by the immense number of configurations: the synthesizability of high-entropy ceramics is typically assessed using the ideal entropy along with formation enthalpies from density functional theory, with simplified descriptors or machine learning methods. As for vibrations, even though they may have a significant impact on phase stability, their contributions are drastically approximated to reduce the high computational cost, or often avoided in the hope that they are negligible, given the technical difficulties of calculating them for disordered systems. This is an area where data-intensive techniques are making the difference, and the presentation will illustrate some recent results.
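The ideal-entropy criterion mentioned in the abstract can be illustrated with a short sketch (not the speaker's code; the function name and the equimolar five-cation carbide example are illustrative assumptions):

```python
import math

def ideal_mixing_entropy(fractions):
    """Ideal configurational entropy per site, in units of k_B:
    S/k_B = -sum_i x_i ln x_i (assumes a fully random sublattice)."""
    if abs(sum(fractions) - 1.0) > 1e-9:
        raise ValueError("mole fractions must sum to 1")
    return -sum(x * math.log(x) for x in fractions if x > 0)

# Equimolar five-cation sublattice, e.g. (Hf,Nb,Ta,Ti,Zr)C: S/k_B = ln 5
s = ideal_mixing_entropy([0.2] * 5)
print(round(s, 4))  # ln(5) ≈ 1.6094
```

In synthesizability screens of this kind, a high ideal entropy is weighed against the spread of DFT formation enthalpies among competing configurations.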
Monday, March 14, 2022, 3:36PM–3:48PM
D32.00002: Novel approaches and bounds for maximum entropy reinforcement learning using nonequilibrium statistical mechanics
Jacob Adamczyk, Argenis Arriojas Maldonado, Stas Tiomkin, Rahul V Kulkarni
Reinforcement learning (RL) is an important subfield of AI that holds great promise for applications such as robotic control and autonomous driving. Maximum entropy RL (MaxEnt RL) is a robust and flexible generalization of RL which has recently been connected to applications of large deviation theory in nonequilibrium statistical mechanics. In this approach, the scaled cumulant generating function (SCGF) from large deviation theory can be mapped onto the soft value functions in MaxEnt RL. Using this mapping, we have developed novel algorithms to determine the optimal policy and soft value functions in MaxEnt RL. Furthermore, the connections of the SCGF to Perron-Frobenius theory allow us to use results from linear algebra to derive bounds and develop useful approximations for the optimal policy and soft value functions. The formalism developed leads to new results for the problem of compositionality in MaxEnt RL that provide insights into how previously learned behaviors can be combined to obtain solutions for more complex tasks.
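As a toy illustration of the SCGF/Perron-Frobenius connection described above (not the authors' algorithm; the uniform prior policy, the temperature, and the tiny random MDP are assumptions for the sketch), a soft value function can be obtained by power iteration on an exponentially tilted transition operator:

```python
import numpy as np

rng = np.random.default_rng(0)
nS, nA, alpha = 4, 2, 1.0                     # tiny MDP, temperature alpha

P = rng.random((nS, nA, nS)); P /= P.sum(-1, keepdims=True)  # P[s,a,s']
r = rng.random((nS, nA))                                     # reward r(s,a)

# Tilted transition operator: T[s,s'] = (1/|A|) sum_a exp(r(s,a)/alpha) P[s,a,s']
T = np.einsum('sa,sap->sp', np.exp(r / alpha) / nA, P)

# Power iteration: Perron-Frobenius theory guarantees a unique positive
# dominant eigenpair (lam, z) for this strictly positive matrix.
z = np.ones(nS)
for _ in range(500):
    z_new = T @ z
    lam = np.linalg.norm(z_new)
    z = z_new / lam

V = alpha * np.log(z)            # soft value function (up to a constant)
scgf_rate = alpha * np.log(lam)  # per-step SCGF / entropy-regularized gain
print(V, scgf_rate)
```

Bounds on the dominant eigenvalue of T (e.g. between the min and max row sums) translate directly into bounds on the soft values, which is the kind of linear-algebra leverage the abstract alludes to.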
Monday, March 14, 2022, 3:48PM–4:00PM
D32.00003: Closed-Form Analytical Results for Maximum Entropy Reinforcement Learning Using Large Deviation Theory
Argenis Arriojas Maldonado, Jacob Adamczyk, Stas Tiomkin, Rahul V Kulkarni
Reinforcement learning (RL) is an important field of current research in artificial intelligence which has seen tremendous accomplishments in recent years. Important advances in RL have resulted from the infusion of ideas from statistical physics, leading to successful approaches like maximum entropy reinforcement learning (MaxEnt RL). With the addition of an entropy-based regularization term, the optimal control problem in RL can be transformed into a problem in Bayesian inference. While this control-as-inference approach to RL has led to several advances, obtaining analytical results for the general case of stochastic dynamics has been an open problem. We establish a mapping between MaxEnt RL and research in nonequilibrium statistical mechanics based on applications of large deviation theory. In the long-time limit, we apply approaches from large deviation theory to derive exact analytical results for the optimal policy and optimal dynamics in Markov Decision Process models of RL. The mapping we establish connects research in reinforcement learning and nonequilibrium statistical mechanics, thereby opening further avenues for the application of analytical and computational approaches from physics to cutting-edge problems in machine learning.
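One standard form of the kind of closed-form result such a mapping yields, sketched here under assumed conditions (infinite-horizon average-reward setting, prior policy $\pi_0$, temperature $\alpha$; not necessarily the authors' exact formulation), is an eigenvalue problem for a tilted operator together with the generalized Doob transform it induces:

```latex
% Tilted (exponentially reweighted) transition operator: Perron-Frobenius
% theory gives a unique positive eigenpair (\lambda, z),
\[
  \sum_a \pi_0(a\mid s)\, e^{r(s,a)/\alpha} \sum_{s'} P(s'\mid s,a)\, z(s')
  \;=\; \lambda\, z(s) .
\]
% Generalized Doob transform: the optimal policy and soft value function
% then follow in closed form,
\[
  \pi^*(a\mid s) \;=\;
  \frac{\pi_0(a\mid s)\, e^{r(s,a)/\alpha} \sum_{s'} P(s'\mid s,a)\, z(s')}
       {\lambda\, z(s)},
  \qquad
  V(s) \;=\; \alpha \ln z(s) .
\]
```

Summing $\pi^*(a\mid s)$ over $a$ reproduces the eigenvalue equation, confirming the policy is normalized.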
Monday, March 14, 2022, 4:00PM–4:12PM
D32.00004: Causality Analysis of Physical Parameters Derived from Atomic-Resolution STEM
Christopher T Nelson, Maxim Ziatdinov, Xiaohang Zhang, Rama K Vasudevan, Eugene A. Eliseev, Anna N. Morozovska, Ichiro Takeuchi, Sergei V Kalinin
Atomic-scale scanning transmission electron microscopy (STEM) data can be parameterized to infer local physical parameters such as structure (e.g. unit cell volume), chemistry, and electrical polarization. In the absence of tunable independent variables, it is challenging to ascertain not just the correlation but the causal direction between such parameters, especially in the presence of noise. In this work we implement a workflow and evaluate causal-analysis methods for parameterized HRSTEM data, using the natural experiment of inherent parameter variation in large datasets. Our system is a Sm_xBi_{1-x}FeO_3 perovskite with 0 ≤ x ≤ 20%, which traverses a ferroelectric phase boundary. Descriptors corresponding to structural, compositional, and polarization parameters are defined from local (unit-cell) atomic HAADF STEM data. These are subjected to information-geometric causal inference (IGCI), additive noise models (ANM), and linear non-Gaussian acyclic models (LiNGAM) to determine pairwise causality and causal chains and, in the latter case, to estimate linear connection coefficients. Generally, chemical effects, including local composition and molar volume, are found to lie higher on the causal chain, with polarization effects secondary, and tetragonality and differential chemical contrast the weakest.
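A minimal sketch of the pairwise ANM idea used above (not the authors' pipeline; it substitutes a crude heteroscedasticity score for a proper independence test such as HSIC, and the synthetic cubic example is an assumption):

```python
import numpy as np

rng = np.random.default_rng(0)

def anm_dependence(cause, effect, deg=5):
    """Fit effect = poly(cause) + resid, then score residual/input dependence
    via |corr| of squared residual with squared input. A near-zero score
    means the residual looks independent, so the ANM in that direction is
    plausible; the direction with the smaller score is inferred as causal."""
    coef = np.polyfit(cause, effect, deg)
    resid = effect - np.polyval(coef, cause)
    return abs(np.corrcoef(resid**2, cause**2)[0, 1])

# Synthetic "natural experiment": x causes y through a nonlinear ANM.
x = rng.uniform(-1, 1, 3000)
y = x**3 + 0.1 * rng.normal(size=3000)

forward, backward = anm_dependence(x, y), anm_dependence(y, x)
print(forward, backward)  # smaller score = inferred causal direction
```

In the forward (true) direction the residual is just the homoscedastic noise, so the score is near zero; fitting the anticausal direction leaves structured residuals and a larger score.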
Monday, March 14, 2022, 4:12PM–4:24PM
D32.00005: signac: Simple Data and Workflow Management
Corwin B Kerr, Brandon L Butler, Bradley D Dice, Sharon C Glotzer
The signac data management framework (https://signac.io) helps researchers execute reproducible computational studies, scaling from laptops to supercomputers, while emphasizing portability and fast prototyping. Through signac, users can track, search, and archive data and metadata in file-based workflows and automate job submission to high-performance computing clusters (http://doi.org/10.1016/j.commatsci.2018.01.035). The signac framework is driven by the data management needs of computational researchers, which increasingly involve large collaborations and open data, where the research data must be stored in a coherent and queryable manner. We will discuss how signac facilitates the sharing and archiving of research data and workflows. In addition, we will emphasize recent developments in the framework that have increased flexibility in metadata storage, workflow execution, and data visualization. To demonstrate signac's utility, we will showcase scientific publications that have made use of the project, particularly those that have made their data public. We will also highlight our push toward better documentation and community engagement, including promoting best practices in data management, encouraging contributions, and providing support.
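The file-based scheme signac builds on, content-addressing each job directory by a hash of its state point, can be sketched with the standard library alone (an illustration of the idea, not signac's API; signac's actual encoding and workspace layout may differ):

```python
import hashlib
import json
import tempfile
from pathlib import Path

def job_id(statepoint: dict) -> str:
    """Content-address a job by its state point, in the spirit of signac:
    a stable hash of the canonical (sorted-key) JSON encoding, so the same
    parameters always resolve to the same job directory."""
    canonical = json.dumps(statepoint, sort_keys=True)
    return hashlib.md5(canonical.encode()).hexdigest()

def open_job(workspace: Path, statepoint: dict) -> Path:
    """Create (or reuse) the job directory and persist its metadata."""
    jobdir = workspace / job_id(statepoint)
    jobdir.mkdir(parents=True, exist_ok=True)
    (jobdir / "statepoint.json").write_text(json.dumps(statepoint, sort_keys=True))
    return jobdir

workspace = Path(tempfile.mkdtemp()) / "workspace"
job = open_job(workspace, {"T": 1.5, "N": 1000})
print(job.name)  # same state point always maps to the same directory
```

Keying directories by a canonical hash is what makes such workspaces searchable and mergeable across collaborators: parameter order and insertion history do not matter, only the state point itself.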
Monday, March 14, 2022, 4:24PM–4:36PM
D32.00006: Predicting polarizabilities of silicon clusters using local chemical environments
Mario G Zauchner, Johannes C Lischner, Andrew Horsfield, Gabor Csanyi, Stefano Dal Forno

Monday, March 14, 2022, 4:36PM–4:48PM
D32.00007: Local Extreme Learning Machines: A Neural Network-Based Spectral Element-Like Method for Computational PDEs
Suchuan Dong
Existing deep neural-network-based methods for solving boundary/initial-value problems suffer from several drawbacks (e.g. lack of convergence with a certain convergence rate, limited accuracy, extremely high computational cost) that make them less attractive numerically and uncompetitive computationally. In this talk we present a neural-network-based method that has largely overcome these drawbacks. This method, termed local extreme learning machines (locELM), combines three ideas: extreme learning machines, domain decomposition, and local neural networks. The field solution on each subdomain is represented by a local feedforward neural network, and $C^k$ continuity conditions are imposed on the subdomain boundaries. The hidden-layer coefficients of the local neural networks are preset to random values and fixed, and only the weight coefficients in the output layers are trainable. The overall neural network is trained by a linear or nonlinear least-squares computation, not by backpropagation (or gradient-descent) type algorithms. The current method exhibits a clear sense of convergence with respect to the degrees of freedom in the neural network: its numerical errors decrease exponentially or nearly exponentially as the number of degrees of freedom (number of training parameters, number of training data points) increases, which is reminiscent of the spectral convergence of traditional spectral or spectral-element-type methods. LocELM far outperforms the physics-informed neural network (PINN) and the deep Galerkin method (DGM) in terms of accuracy and computational cost (network training time). Its computational performance (accuracy/cost) is on par with the classical finite element method (FEM), and it outperforms FEM when the problem size becomes larger. These points will be demonstrated for a number of problems.
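The extreme-learning-machine ingredient of locELM, a random fixed hidden layer with only the output weights trained by a linear least-squares solve, can be sketched in one dimension (a plain regression toy, not the talk's PDE solver; the domain decomposition and $C^k$ continuity conditions are omitted, and the target function is an assumption):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random *fixed* hidden layer: only the linear output weights are trained,
# by least squares rather than backpropagation.
n_hidden = 200
W = rng.normal(scale=4.0, size=(1, n_hidden))   # fixed hidden weights
b = rng.uniform(-4.0, 4.0, size=n_hidden)       # fixed hidden biases

def features(x):
    return np.tanh(x[:, None] @ W + b)          # hidden-layer activations

# Training data: a smooth target on [0, 1] standing in for a field solution.
x_train = np.linspace(0.0, 1.0, 400)
y_train = np.sin(2 * np.pi * x_train) * np.exp(-x_train)

# One linear least-squares solve for the output weights.
beta, *_ = np.linalg.lstsq(features(x_train), y_train, rcond=None)

x_test = np.linspace(0.0, 1.0, 1000)
err = np.max(np.abs(features(x_test) @ beta
                    - np.sin(2 * np.pi * x_test) * np.exp(-x_test)))
print(err)  # max test error; decreases rapidly as n_hidden grows
```

Because training reduces to a single linear solve, increasing the number of hidden units plays a role analogous to raising the polynomial order in a spectral element, which is where the exponential-convergence behavior described in the abstract comes from.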