Bulletin of the American Physical Society
APS March Meeting 2022
Volume 67, Number 3
Monday–Friday, March 14–18, 2022; Chicago
Session D32: Data Science, Artificial Intelligence and Machine Learning I (Focus Session; recordings available)
Sponsoring Units: GDS
Chair: Pavel Lukashev, University of Northern Iowa
Room: McCormick Place W-192B
Monday, March 14, 2022 3:00PM - 3:36PM
D32.00001: Data, Disorder and Ceramics Invited Speaker: Stefano Curtarolo Disordered multicomponent systems, occupying the mostly uncharted centers of phase diagrams, have been studied for the last two decades for their potentially revolutionary properties. The search for new systems is mostly performed with trial-and-error techniques, as effective computational discovery is challenged by the immense number of configurations: the synthesizability of high-entropy ceramics is typically assessed using the ideal entropy along with formation enthalpies from density functional theory, with simplified descriptors or machine-learning methods. Vibrational contributions, even though they can have a significant impact on phase stability, are drastically approximated to reduce the high computational cost, or often neglected in the hope that they are small, owing to the technical difficulties of calculating them for disordered systems. This is an area where data-intensive techniques are making a difference, and the presentation will illustrate some recent results.
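The ideal-entropy descriptor mentioned above is simple enough to state concretely: for an ideal solid solution, the configurational entropy per mixing site is S = -k_B Σ_i x_i ln x_i. A minimal sketch (the equimolar five-cation composition is an illustrative example, not taken from the talk):

```python
import math

K_B = 8.617333262e-5  # Boltzmann constant in eV/K

def ideal_mixing_entropy(fractions):
    """Ideal configurational entropy per mixing site: S = -k_B * sum(x_i ln x_i)."""
    assert abs(sum(fractions) - 1.0) < 1e-9, "mole fractions must sum to 1"
    return -K_B * sum(x * math.log(x) for x in fractions if x > 0)

# Equimolar five-cation mixture (hypothetical high-entropy ceramic example):
s = ideal_mixing_entropy([0.2] * 5)
print(s / K_B)  # -> ln(5) = 1.609... in units of k_B
```

Equimolar compositions maximize this term, which is why the centers of phase diagrams are the natural hunting ground for entropy-stabilized ceramics.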
Monday, March 14, 2022 3:36PM - 3:48PM
D32.00002: Novel approaches and bounds for maximum entropy reinforcement learning using nonequilibrium statistical mechanics Jacob Adamczyk, Argenis Arriojas Maldonado, Stas Tiomkin, Rahul V Kulkarni Reinforcement learning (RL) is an important subfield of AI that holds great promise for applications such as robotic control and autonomous driving. Maximum entropy RL (MaxEnt RL) is a robust and flexible generalization of RL which has recently been connected to applications of large deviation theory in non-equilibrium statistical mechanics. In this approach, the scaled cumulant generating function (SCGF) from large deviation theory can be mapped onto the soft value functions in MaxEnt RL. Using this mapping, we have developed novel algorithms to determine the optimal policy and soft value functions in MaxEnt RL. Furthermore, the connection of the SCGF to Perron-Frobenius theory allows us to use results from linear algebra to derive bounds and develop useful approximations for the optimal policy and soft value functions. The formalism developed leads to new results for the problem of compositionality in MaxEnt RL, providing insight into how previously learned behaviors can be combined to obtain solutions for more complex tasks.
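The SCGF-to-linear-algebra connection can be illustrated on a toy chain: for an ergodic Markov chain with per-state rewards, the SCGF of the cumulative reward is the logarithm of the Perron-Frobenius (dominant) eigenvalue of an exponentially tilted transition matrix. This is a standard large-deviation calculation, not the authors' algorithm; the chain and reward values below are random placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ergodic Markov chain with per-state rewards (all values are placeholders).
n = 4
P = rng.random((n, n))
P /= P.sum(axis=1, keepdims=True)   # row-stochastic transition matrix
r = rng.random(n)                   # per-state reward
beta = 1.0                          # tilting / inverse-temperature parameter

# Exponentially tilted matrix: by Perron-Frobenius theory its dominant
# eigenvalue is real and positive, and the SCGF of the cumulative reward
# is its logarithm.
T = P * np.exp(beta * r)[None, :]
scgf = np.log(np.abs(np.linalg.eigvals(T)).max())

# Row-sum bounds on the Perron root translate into bounds on the SCGF.
print(beta * r.min() <= scgf <= beta * r.max())  # -> True
```

The final line shows the flavor of bound the abstract refers to: elementary row-sum estimates on the Perron root immediately sandwich the SCGF, and hence the soft value functions it maps onto.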
Monday, March 14, 2022 3:48PM - 4:00PM
D32.00003: Closed-Form Analytical Results for Maximum Entropy Reinforcement Learning Using Large Deviation Theory Argenis Arriojas Maldonado, Jacob Adamczyk, Stas Tiomkin, Rahul V Kulkarni Reinforcement learning (RL) is an important field of current research in artificial intelligence that has seen tremendous accomplishments in recent years. Important advances in RL have resulted from the infusion of ideas from statistical physics, leading to successful approaches like maximum entropy reinforcement learning (MaxEnt RL). With the addition of an entropy-based regularization term, the optimal control problem in RL can be transformed into a problem in Bayesian inference. While this control-as-inference approach to RL has led to several advances, obtaining analytical results for the general case of stochastic dynamics has been an open problem. We establish a mapping between MaxEnt RL and research in non-equilibrium statistical mechanics based on applications of large deviation theory. In the long-time limit, we apply approaches from large deviation theory to derive exact analytical results for the optimal policy and optimal dynamics in Markov Decision Process models of RL. The mapping established connects research in reinforcement learning and non-equilibrium statistical mechanics, thereby opening further avenues for the application of analytical and computational approaches from physics to cutting-edge problems in machine learning.
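The entropy-regularized structure underlying the control-as-inference view can be written schematically; the notation below (reference policy $\pi_0$, regularization strength $\beta$) is standard in the MaxEnt RL literature and is assumed for illustration, not taken verbatim from the talk:

```latex
% Entropy-regularized (MaxEnt) objective: expected reward minus a KL-type
% penalty of the policy pi toward a reference policy pi_0, with strength 1/beta.
J(\pi) \;=\; \mathbb{E}_{\pi}\!\left[\,\sum_{t} r(s_t, a_t)
      \;-\; \frac{1}{\beta}\,\ln\frac{\pi(a_t \mid s_t)}{\pi_0(a_t \mid s_t)}\right]

% Soft Bellman equations for the optimal soft value functions:
Q(s,a) \;=\; r(s,a) \;+\; \sum_{s'} p(s' \mid s, a)\, V(s'),
\qquad
V(s) \;=\; \frac{1}{\beta}\,\ln \sum_{a} \pi_0(a \mid s)\, e^{\beta Q(s,a)}
```

The optimal policy then takes the Boltzmann form $\pi^*(a \mid s) \propto \pi_0(a \mid s)\, e^{\beta Q(s,a)}$; the difficulty for stochastic dynamics arises because the expectation over $s'$ sits outside the log-sum-exp, which is where the long-time large-deviation limit becomes useful.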
Monday, March 14, 2022 4:00PM - 4:12PM
D32.00004: Causality Analysis of Physical Parameters Derived from Atomic-Resolution STEM Christopher T Nelson, Maxim Ziatdinov, Xiaohang Zhang, Rama K Vasudevan, Eugene A. Eliseev, Anna N. Morozovska, Ichiro Takeuchi, Sergei V Kalinin Atomic-scale scanning transmission electron microscopy (STEM) data can be parameterized to infer local physical parameters such as structure (e.g. unit-cell volume), chemistry, and electrical polarization. In the absence of tunable independent variables it is challenging to ascertain not just the correlation but the causal direction between such parameters, especially in the presence of noise. In this work we implement a workflow and evaluate causal-analysis methods for parameterized HRSTEM using the natural experiment of inherent parameter variation in large datasets. Our system is a Sm_xBi_{1-x}FeO_3 perovskite with 0 ≤ x ≤ 20%, which traverses a ferroelectric phase boundary. Descriptors corresponding to structural, compositional, and polarization parameters are defined from local (unit-cell) atomic HAADF STEM. These are subjected to information-geometric causal inference (IGCI), additive noise models (ANM), and linear non-Gaussian acyclic models (LiNGAM) to determine pairwise causality, causal chains, and, in the latter case, estimates of linear connection coefficients. Generally, chemical effects, including local composition and molar volume, are found to sit higher on the causal chain, with polarization effects secondary and tetragonality and differential chemical contrast the weakest.
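Of the three methods, the additive-noise-model test is the easiest to sketch: regress each variable on the other, and prefer the direction in which the residual is closer to independent of the presumed cause. The toy below uses synthetic data with known ground truth and distance correlation as the dependence measure (a common stand-in for the HSIC statistic typically used with ANM; everything here is illustrative, not the authors' pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic additive-noise pair with known ground truth x -> y.
n = 800
x = rng.uniform(-1.0, 1.0, n)
noise = rng.uniform(-0.3, 0.3, n)     # non-Gaussian noise, independent of x
y = x**3 + noise

def dcor(a, b):
    """Sample distance correlation (V-statistic); near 0 under independence."""
    A = np.abs(a[:, None] - a[None, :])
    B = np.abs(b[:, None] - b[None, :])
    A = A - A.mean(axis=0) - A.mean(axis=1)[:, None] + A.mean()
    B = B - B.mean(axis=0) - B.mean(axis=1)[:, None] + B.mean()
    dcov2 = max((A * B).mean(), 0.0)  # guard against tiny negative round-off
    return np.sqrt(dcov2 / np.sqrt((A * A).mean() * (B * B).mean()))

def anm_score(cause, effect, deg=7):
    """Fit effect = poly(cause) + residual; score = dependence of residual on cause.
    Lower score = residual closer to independent = more plausible direction."""
    coeffs = np.polyfit(cause, effect, deg)
    resid = effect - np.polyval(coeffs, cause)
    return dcor(resid, cause)

score_xy = anm_score(x, y)  # hypothesis: x causes y (the true direction)
score_yx = anm_score(y, x)  # hypothesis: y causes x
print(score_xy, score_yx)   # the true direction should give the smaller score
```

In a real workflow each pairwise decision of this kind is one edge in the causal chain; IGCI and LiNGAM attack the same question with different identifying assumptions.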
Monday, March 14, 2022 4:12PM - 4:24PM
D32.00005: signac: Simple Data and Workflow Management Corwin B Kerr, Brandon L Butler, Bradley D Dice, Sharon C Glotzer The signac data management framework (https://signac.io) helps researchers execute reproducible computational studies, scaling from laptops to supercomputers, while emphasizing portability and fast prototyping. Through signac, users can track, search, and archive data and metadata in file-based workflows and automate job submission to high performance computing clusters (http://doi.org/10.1016/j.commatsci.2018.01.035). The signac framework is driven by the data management needs of computational researchers, which increasingly involve large collaborations and open data, where the research data needs to be stored in a coherent and queryable manner. We will discuss how signac facilitates the sharing and archiving of research data and workflows. In addition, we will emphasize recent developments in the framework that have increased flexibility in metadata storage, workflow execution, and data visualization. To demonstrate signac's utility, we will showcase scientific publications that have made use of the project, particularly those that have made their data public. We will highlight our push toward better documentation and community engagement, including promoting best practices in data management, encouraging contributions, and providing support.
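signac's core file-based idea, keying each job's directory by a hash of its state-point metadata so that logically equal state points map to the same directory, can be mimicked with the standard library alone. This is an illustrative sketch of the scheme, not signac's actual API or hashing choices:

```python
import hashlib
import json
import os
import tempfile

def job_dir(workspace, statepoint):
    """Map a state-point dict to a stable directory via a content hash,
    in the spirit of signac's workspace layout (illustrative, not signac's API)."""
    # Canonical JSON (sorted keys) so logically equal state points hash equally.
    blob = json.dumps(statepoint, sort_keys=True).encode()
    digest = hashlib.sha1(blob).hexdigest()
    path = os.path.join(workspace, digest)
    os.makedirs(path, exist_ok=True)
    with open(os.path.join(path, "statepoint.json"), "w") as fh:
        fh.write(blob.decode())
    return path

workspace = tempfile.mkdtemp()
a = job_dir(workspace, {"T": 1.5, "N": 1000})
b = job_dir(workspace, {"N": 1000, "T": 1.5})  # same state point, different key order
print(a == b)  # -> True: key order does not change the job directory
```

Because the metadata lives in plain files next to the data, the workspace stays searchable and archivable without a database server, which is the portability property the abstract emphasizes.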
Monday, March 14, 2022 4:24PM - 4:36PM
D32.00006: Predicting polarizabilities of silicon clusters using local chemical environments Mario G Zauchner, Johannes C Lischner, Andrew Horsfield, Gabor Csanyi, Stefano Dal Forno
Monday, March 14, 2022 4:36PM - 4:48PM
D32.00007: Local Extreme Learning Machines: A Neural Network-Based Spectral Element-Like Method for Computational PDEs Suchuan Dong Existing deep neural network-based methods for solving boundary/initial-value problems suffer from several drawbacks (e.g. lack of convergence with a certain convergence rate, limited accuracy, extremely high computational cost) that make them numerically unattractive and computationally uncompetitive. In this talk we present a neural network-based method that largely overcomes these drawbacks. This method, termed local extreme learning machines (locELM), combines three ideas: extreme learning machines, domain decomposition, and local neural networks. The field solution on each sub-domain is represented by a local feed-forward neural network, and $C^k$ continuity conditions are imposed on the sub-domain boundaries. The hidden-layer coefficients of the local neural networks are pre-set to random values and fixed, and only the weight coefficients in the output layers are trainable. The overall neural network is trained by a linear or nonlinear least-squares computation, not by back-propagation (gradient descent) type algorithms. The method exhibits a clear sense of convergence with respect to the degrees of freedom in the neural network: its numerical errors decrease exponentially or nearly exponentially as the number of degrees of freedom (training parameters, training data points) increases, reminiscent of the spectral convergence of traditional spectral or spectral element-type methods. LocELM far outperforms the physics-informed neural network (PINN) and the deep Galerkin method (DGM) in terms of accuracy and computational cost (network training time). Its computational performance (accuracy/cost) is on par with the classical finite element method (FEM), and outperforms FEM as the problem size becomes larger. These points will be demonstrated for a number of problems.