Bulletin of the American Physical Society
APS March Meeting 2022
Volume 67, Number 3
Monday–Friday, March 14–18, 2022; Chicago
Session S01: Machine Learning and Neural Networks in Chemical Physics |
Chair: Susan Kempinger, North Central College Room: McCormick Place W-175A |
Thursday, March 17, 2022 8:00AM - 8:12AM |
S01.00001: Atomistic Line Graph Neural Network for Improved Materials Property Predictions Kamal Choudhary, Brian DeCost Graph neural networks (GNN) have been shown to provide substantial performance improvements for representing and modeling atomistic materials compared with descriptor-based machine-learning models. While most existing GNN models for atomistic predictions are based on atomic distance information, they do not explicitly incorporate bond angles, which are critical for distinguishing many atomic structures. Furthermore, many material properties are known to be sensitive to slight changes in bond angles. We present an Atomistic Line Graph Neural Network (ALIGNN), a GNN architecture that performs message passing on both the interatomic bond graph and its line graph corresponding to bond angles. We demonstrate that angle information can be explicitly and efficiently included, leading to improved performance on multiple atomistic prediction tasks. We use ALIGNN models for predicting 52 solid-state and molecular properties available in the JARVIS-DFT, Materials Project, and QM9 databases. ALIGNN can outperform some previously reported GNN models on atomistic prediction tasks by up to 85% in accuracy with better or comparable model training speed. |
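The central construction here is the line graph of the bond graph: each bond becomes a node, and two bonds sharing an atom are joined by an edge that can carry the bond angle. The sketch below illustrates that construction for a toy molecule; the function names, distance cutoff, and coordinates are illustrative assumptions, not the ALIGNN implementation.

```python
# Minimal sketch (not the ALIGNN code): build a bond graph from a distance cutoff, then
# its line graph, whose edges carry the bond angles passed to the angle-level GNN.
import numpy as np
from itertools import combinations

def bond_graph(positions, cutoff=1.2):
    """Return bonds (i, j) for atom pairs closer than `cutoff` (illustrative, in angstrom)."""
    bonds = []
    for i, j in combinations(range(len(positions)), 2):
        if np.linalg.norm(positions[i] - positions[j]) < cutoff:
            bonds.append((i, j))
    return bonds

def line_graph_with_angles(positions, bonds):
    """Connect bonds that share an atom; store the angle at the shared atom on the edge."""
    edges = []
    for b1, b2 in combinations(range(len(bonds)), 2):
        shared = set(bonds[b1]) & set(bonds[b2])
        if shared:
            k = shared.pop()                               # central atom of the angle
            i = (set(bonds[b1]) - {k}).pop()
            j = (set(bonds[b2]) - {k}).pop()
            v1, v2 = positions[i] - positions[k], positions[j] - positions[k]
            cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
            edges.append((b1, b2, np.arccos(np.clip(cos, -1.0, 1.0))))
    return edges

# Toy example: a bent triatomic molecule (coordinates are made up)
pos = np.array([[0.0, 0.0, 0.0], [0.96, 0.0, 0.0], [-0.24, 0.93, 0.0]])
bonds = bond_graph(pos)
print(line_graph_with_angles(pos, bonds))   # one line-graph edge carrying the bond angle
```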
Thursday, March 17, 2022 8:12AM - 8:24AM |
S01.00002: Graph Neural Networks that incorporate Physical Structure Erik Thiede, Wenda Zhou, Risi Kondor There has been considerable interest in applying neural networks to physical systems that can be represented by graphs. While this is typically done using message-passing neural networks, these networks are strictly limited in their expressivity and there is no obvious way to include our knowledge of physical structure into the network. To address this need, we introduce automorphism-based graph neural networks (Autobahn), a new family of graph neural networks. Whereas in most graph neural networks neurons correspond to individual bonds or edges, in Autobahn neurons correspond to graph substructures. This allows us to incorporate domain knowledge into the design of the network. Moreover, by applying local convolutions that are equivariant to each subgraph's automorphism group, we construct neurons whose action reflects the natural way that a physical substructure transforms. Specific choices of local neighborhoods and subgraphs recover existing graph neural network architectures such as message passing neural networks, but our formalism also encompasses novel architectures: as an example, we introduce a graph neural network that decomposes the graph into paths and cycles. We validate our approach by applying Autobahn to molecular graphs, where we achieve competitive results. |
Thursday, March 17, 2022 8:24AM - 8:36AM |
S01.00003: Data augmentation techniques to improve material property prediction performance using Graph Neural Networks Rishikesh Magar In recent years, Graph Neural Network (GNN) based methodologies have been extensively used for material property prediction. Although these GNNs have been successful in predicting material properties with very high accuracy, they rely on large datasets for training. Often, these large datasets are generated from ab-initio calculations or experiments that are resource intensive and time consuming, limiting the applicability of GNNs. To overcome the lack of data availability, we introduce five physics-informed data augmentations – Perturbation, Rotation, SwapAxes, Translation and SuperCell transformation – that can be applied to crystalline systems and increase the amount of data available for GNN training. Using these augmentation techniques, we show improvements in performance for 4 state-of-the-art GNN models – CGCNN, MEGNet, GINE and SchNet – on 5 different datasets. We observe a performance gain between 10% and 50% on most of the models, proving the effectiveness of data augmentation in training GNNs. We also perform ablation studies to determine the most effective augmentation strategies for a particular material property. Finally, we develop an open-source software package that performs these augmentations under the hood and make it available for public use. |
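For illustration, the sketch below shows the kind of label-preserving transformations the abstract lists, acting on fractional coordinates and a 3x3 lattice matrix. Function names, noise magnitudes, and the choice of operating on fractional coordinates are assumptions, not the authors' implementation.

```python
# Illustrative physics-informed augmentations: perturbation, swap-axes, translation,
# and supercell, all of which leave the target property (for scalar labels) unchanged.
import numpy as np

def perturb(frac, sigma=0.01):
    """Add small Gaussian noise to every atomic position (fractional coordinates)."""
    return frac + np.random.normal(0.0, sigma, frac.shape)

def swap_axes(frac, lattice):
    """Swap two Cartesian axes of the lattice vectors (a physically equivalent setting)."""
    a, b = np.random.choice(3, size=2, replace=False)
    order = np.arange(3)
    order[[a, b]] = order[[b, a]]
    return frac, lattice[:, order]

def translate(frac, shift=None):
    """Rigidly translate all atoms and wrap back into the unit cell."""
    shift = np.random.rand(3) if shift is None else np.asarray(shift)
    return (frac + shift) % 1.0

def supercell(frac, lattice, reps=(2, 1, 1)):
    """Tile the cell `reps` times along each lattice vector."""
    images = [frac + np.array([i, j, k])
              for i in range(reps[0]) for j in range(reps[1]) for k in range(reps[2])]
    return np.vstack(images) / np.array(reps), lattice * np.array(reps)[:, None]
```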
Thursday, March 17, 2022 8:36AM - 8:48AM Withdrawn |
S01.00004: Kinetics studies of gas phase reactions using neural network potentials Adrian Gordon, Jason D Goodpaster Atomistic simulations play an important role in a wide range of chemical investigations, including studies of chemical kinetics. These simulations rely on accurate energies and forces, often obtained through expensive ab initio electronic structure calculations. Recently researchers have explored the use of machine learning models to provide analytical and differentiable potential energy surfaces for use in atomistic simulations. These ML models can provide energies at a fraction of the cost of ab initio methods and are also highly accurate within the chemical space represented in the training data. In this work, we develop highly accurate neural network potentials for targeted organic gas phase reactions, such as the OH + CH4 hydrogen abstraction reaction. We use high-dimensional neural networks, which predict energies based on calculated fingerprints of atomic environments. With these neural networks, the chemical kinetics of these reactions are explored using methods such as ring polymer molecular dynamics. We use active learning techniques to show that highly accurate potential energy surfaces can be developed at the DFT and CCSD(T) levels of theory from a limited amount of training data. |
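The abstract does not specify the fingerprint form; a widely used choice for "calculated fingerprints of atomic environments" in high-dimensional neural network potentials is a Behler-Parrinello-style radial symmetry function, sketched below with illustrative parameter values.

```python
# Sketch of a radial (G2-type) symmetry function: a smooth, rotation- and
# permutation-invariant descriptor of one atom's neighborhood. Parameters are illustrative.
import numpy as np

def g2_fingerprint(positions, center, eta=0.5, r_s=0.0, r_c=6.0):
    """Sum of Gaussian-weighted neighbor distances with a smooth cosine cutoff."""
    value = 0.0
    for j, pos in enumerate(positions):
        if j == center:
            continue
        r = np.linalg.norm(pos - positions[center])
        if r < r_c:
            fc = 0.5 * (np.cos(np.pi * r / r_c) + 1.0)        # cutoff function
            value += np.exp(-eta * (r - r_s) ** 2) * fc
    return value
```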
Thursday, March 17, 2022 8:48AM - 9:00AM |
S01.00005: Yet Another Reaction Prediction v2.0: Advances in Automatic Reaction Prediction and Establishment of Benchmark Systems Qiyuan Zhao, Brett M Savoie Automated reaction prediction has the potential to elucidate complex reaction networks for applications ranging from combustion to materials degradation, but computational cost and inconsistent reaction coverage are still obstacles to exploring deep reaction networks. In our recent study, yet another reaction program (YARP) was developed to simultaneously reduce the cost and increase the reaction coverage through relatively straightforward modifications of the reaction enumeration, geometry initialization, and transition state convergence. Despite the success of YARP, further improvements are implemented to address the remaining limitations and continually increase the reaction coverage. For instance, a reaction conformational sampling strategy is developed to better locate transition states, and a composite double-ended and single-ended search strategy is designed to discover reactions outside of the enumerated ones. On the other hand, the computational cost is controlled by an efficient pre-pruning scheme and reaction exploration at a low-cost semi-empirical quantum chemistry level. This combination of ultra-low cost and high reaction coverage creates opportunities to explore more complex reaction networks and build a quantum-chemistry-based large-scale reaction database. |
Thursday, March 17, 2022 9:00AM - 9:12AM |
S01.00006: Predicting the density of states of crystalline materials via machine learning Francesco Ricci, Shufeng Kong, Dan Guevarra, Carla P Gomes, John M Gregoire, Jeffrey B Neaton Spectral properties, such as the density of states, which are central for understanding the fundamental properties of materials, have so far been accessed via experiments and ab initio computations. Machine learning methods are now widely applied in computational materials science, enabling accelerated discovery mainly via predictions of scalar properties, such as the electronic band gap. However, the application of these methods to predicting spectral properties of crystalline compounds is still in its infancy. In this context, we present an overview of the recent materials-to-spectrum (Mat2Spec) model, which combines different machine learning techniques and outperforms state-of-the-art methods in predicting the ab initio phonon and electronic density of states of crystalline compounds. As a proof of concept, we apply this model to identify new materials with gaps below the Fermi energy in the electronic density of states, which are pertinent to thermoelectrics and transparent conductors, and validate the predictions with DFT calculations. Finally, further developments of the model will be discussed. |
Thursday, March 17, 2022 9:12AM - 9:24AM |
S01.00007: Machine learning Kohn-Sham potentials in time-dependent density functional theory Jun Yang, James D Whitfield The exact time-dependent Kohn-Sham potentials are not available due to the difficulty of approximating the exchange-correlation functional of TDDFT. In an effort to understand this approximation, we have developed a machine learning based method to obtain the Kohn-Sham potentials given the time-dependent density. We approach this potential inversion problem by rewriting the Kohn-Sham equations as classical Hamilton’s equations. |
Thursday, March 17, 2022 9:24AM - 9:36AM |
S01.00008: Machine learning methodologies for accurate electron correlation energies and potential energy surfaces. Jason D Goodpaster, Clara Kirkvold, Andrew M Johannesen, Quin H Hu, Adrian Gordon Quantum systems pose a challenge for theoretical studies due to the compromise between accuracy and computational cost in their calculations. Machine learning methods offer an approach to this trade-off: leveraging large data sets of highly accurate calculations on small molecules for training and then applying the resulting models to larger systems. In this study, we will discuss two machine learning projects: (1) the prediction of electron correlation energies and (2) neural network potentials for chemical reactions. To accurately predict the total correlation energy, we explore different machine learning architectures and features and discuss various trade-offs between complexity and performance. For neural network potentials, we discuss an active learning algorithm which allows for the accurate description of potential energy surfaces. Together, we believe these algorithms will allow for the accurate study of quantum devices. |
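A schematic of the generic active-learning loop referred to above is sketched below. The `train`, `uncertainty`, and `label` callables stand in for a network trainer, an uncertainty estimate (e.g. ensemble disagreement), and a reference ab initio calculation; they are placeholders rather than the authors' actual workflow.

```python
# Generic active-learning loop: only structures where the current model is uncertain
# are labeled with expensive reference calculations and added to the training set.
def active_learning(initial_data, candidates, train, uncertainty, label,
                    threshold, max_iters=10):
    data = list(initial_data)
    model = train(data)
    for _ in range(max_iters):
        # Flag candidate structures with high model uncertainty (placeholder criterion).
        uncertain = [s for s in candidates if uncertainty(model, s) > threshold]
        if not uncertain:
            break                     # potential energy surface converged to target accuracy
        data += [(s, label(s)) for s in uncertain]   # label with DFT/CCSD(T), for example
        model = train(data)
    return model
```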
Thursday, March 17, 2022 9:36AM - 9:48AM |
S01.00009: Size-Extensivity of Machine Learning Potentials for Molecules Murat Keceli, Alvaro Vazquez-Mayagoitia Size-extensivity is an important concept for quantum chemistry methods, ensuring that properties such as total energies scale proportionally with the system size. Machine learning potentials have been shown to be an efficient way to construct accurate potential energy surfaces for molecules and extended systems. The transferability of these potentials has generally been studied on molecules that are similar in size to those in the training set. In this study, we explored both neural network and Gaussian process regression based potentials with a variety of descriptors and compared the accuracy of these potentials as the system size increased. We studied alkanes, molecular clusters, and polycyclic aromatic hydrocarbons and identified techniques to satisfy size-extensivity. |
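As a small illustration of what such a size-extensivity test can look like in practice: if the potential is size-extensive, the per-atom error should stay roughly flat as the molecules grow (for example along the alkane series). The `predict` and `reference` energy functions below are user-supplied placeholders, not a specific package.

```python
# Illustrative size-extensivity check: per-atom error versus system size.
import numpy as np

def per_atom_errors(molecules, predict, reference):
    """molecules: list of structures of increasing size (e.g. the alkane series)."""
    out = []
    for mol in molecules:
        n = len(mol)                                          # number of atoms
        out.append((n, abs(predict(mol) - reference(mol)) / n))
    return np.array(out)                                      # columns: size, error per atom
```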
Thursday, March 17, 2022 9:48AM - 10:00AM |
S01.00010: Semi-Local Density Fingerprints for Machine Learning Molecular Properties, Intra-/Inter-molecular Interactions, and Chemical Reactions Yang Yang, Zachary M Sparrow, Brian G Ernst, Trine K Quady, Justin Lee, Yan Yang, Lijie Tu, Robert A Distasio In this work, we propose a novel machine learning (ML) feature space that is constructed using semi-local descriptors of the electron density (i.e., ρ and ∇ρ), the quantum mechanical objects at the very heart of density functional theory (DFT). The proposed ML descriptor, or "semi-local density fingerprint" (SLDF), can be quickly assembled from any input electron density, provides a compact (system-size-independent) and unique representation for each molecule, accounts for molecular symmetry by construction (and is invariant to translations and rotations), contains transferable information across wide swaths of chemical compound space, and has led to unprecedented levels of accuracy during initial proof-of-principle tests. As a demonstration of the accuracy, reliability, and transferability that one can achieve using SLDFs, we will discuss their performance in the prediction of molecular properties, intra-/inter-molecular interactions, and chemical reactions. |
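The abstract does not give the SLDF construction itself; the sketch below shows one generic way to compress ρ and ∇ρ on a real-space grid into a compact, size-independent, translation- and rotation-invariant feature vector (a density-weighted joint histogram over the density and reduced gradient). It is offered purely as an illustration of the idea, not as the authors' descriptor.

```python
# One generic way (not the authors' SLDF) to turn rho and grad(rho) on a grid into a
# fixed-length, invariant feature vector via a density-weighted joint histogram.
import numpy as np

def density_fingerprint(rho, grad_rho, voxel_volume, bins=16):
    """rho: (N,) grid values; grad_rho: (N, 3) gradients; voxel_volume: scalar or (N,)."""
    # Reduced density gradient (up to constants), a standard semi-local quantity.
    s = np.linalg.norm(grad_rho, axis=1) / (rho ** (4.0 / 3.0) + 1e-12)
    hist, _, _ = np.histogram2d(np.log10(rho + 1e-12), np.log10(s + 1e-12),
                                bins=bins, weights=rho * voxel_volume)
    return hist.ravel() / max(hist.sum(), 1e-12)   # compact, system-size-independent vector
```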
Thursday, March 17, 2022 10:00AM - 10:12AM |
S01.00011: Unsupervised machine learning approach for detecting second order phase transition in three-dimensional liquid mixtures Inhyuk Jang, Supreet Kaur, Arun Yethiraj Phase transitions are among the most challenging topics in physical chemistry, not only because of the singularity at the transition point but also because of the drastic change in thermodynamic properties that makes a mathematical description difficult. In this study we address a specific class of phase transition, phase separation or the mixing-demixing transition, which is usually observed when the temperature or density of a mixture changes. Capturing the phase separation point normally requires simulations in the grand canonical or Gibbs ensemble, but these ensembles are in general not applicable to simulations of complex molecules. We therefore introduce unsupervised machine learning to detect phase separation in symmetric binary mixtures, a technique that can be applied to standard canonical or isothermal-isobaric ensemble simulations. Using Principal Component Analysis (PCA) on two model systems, the Lennard-Jones (LJ) binary mixture and the Widom-Rowlinson mixture, we observe a sharp change of the order parameter and a diverging heat capacity at the critical temperature, which provides clear evidence of critical behavior. We also find that the PCA-derived order parameter of the LJ binary mixture exhibits a critical exponent β consistent with the 3D Ising universality class, and that the standard deviation of the PCA clusters behaves like the heat capacity, with a critical exponent also close to that of the same universality class. Additionally, we compare two different types of feature vector to assess the importance of constructing appropriate feature vectors for the system. We find that a feature vector based on the Euclidean distance is not an appropriate choice for high-dimensional systems, whereas a feature vector based on the concentration fluctuation works accurately regardless of the mixture type or spatial dimension. |
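The sketch below illustrates the PCA-based detection idea in its simplest form: build a feature vector of local concentration fluctuations for each configuration, run PCA over all configurations, and read off the leading principal component as an order parameter. The grid size and exact feature definition are assumptions, not the authors' exact pipeline.

```python
# Minimal sketch of an unsupervised (PCA-based) order parameter from concentration
# fluctuations in a symmetric binary mixture.
import numpy as np
from sklearn.decomposition import PCA

def concentration_features(snapshots, box, n_cells=8):
    """snapshots: list of (pos_A, pos_B) position arrays for the two species in a cubic box."""
    edges = [np.linspace(0.0, box, n_cells + 1)] * 3
    feats = []
    for pos_a, pos_b in snapshots:
        h_a, _ = np.histogramdd(pos_a, bins=edges)
        h_b, _ = np.histogramdd(pos_b, bins=edges)
        x = h_a / np.maximum(h_a + h_b, 1.0)          # local mole fraction of species A
        feats.append((x - x.mean()).ravel())          # concentration fluctuation field
    return np.array(feats)

def order_parameter(features):
    """Mean magnitude of the first principal component across configurations."""
    pc1 = PCA(n_components=1).fit_transform(features)[:, 0]
    return np.abs(pc1).mean()
```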
Thursday, March 17, 2022 10:12AM - 10:24AM |
S01.00012: Fully Automated Nanoscale to Atomistic Structure from Theory and X-Ray Spectroscopy Experiments Davis G Unruh, Chaitanya Kolluru, Eli D Kinigstein, Xiaoyi Zhang, Maria K Chan Photocatalytic reactions often require multiple coordinated reaction steps. It is critical to extract the oxidation state and atomic configuration of transition metal catalysts to understand the photocatalytic reaction mechanism and optimize the catalytic rate and total yield. X-ray Transient Absorption spectroscopy can be used to perform in-situ mechanistic studies, but theoretical insight requires searching a vast structural space where it is critical to not only match experimental data but to also minimize quantities such as the energy to ensure that the structures are physically plausible and realizable. The structural space is further complicated by the simultaneous presence of multiple molecular species. In response, we have extended our previously developed FANTASTX code, a multi-objective evolutionary algorithm which performs structure search using genetic algorithm and basin hopping methods, to include full support for x-ray spectroscopy simulations. To search the multi-molecular structural space more efficiently, we have further extended FANTASTX by incorporating structural fingerprinting and clustering methods, enabling identification of fundamentally different molecular candidates which can be uniformly prioritized through a novel cluster-driven evolutionary approach. |
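As a simplified stand-in for the multi-objective scoring described above, the sketch below judges each candidate structure both on agreement with a measured spectrum and on its energy. A weighted sum is shown only for brevity; a genuine multi-objective evolutionary search would typically use Pareto ranking instead. All callables are placeholders, not the FANTASTX API.

```python
# Illustrative two-objective score: spectrum mismatch plus an energy penalty that keeps
# candidate structures physically plausible.
import numpy as np

def candidate_score(candidate, simulate_spectrum, experiment, energy,
                    w_spec=1.0, w_energy=1.0):
    spec = simulate_spectrum(candidate)                        # simulated spectrum on the experimental grid
    spec_error = np.mean((spec - experiment) ** 2)             # spectrum-agreement objective
    return w_spec * spec_error + w_energy * energy(candidate)  # energy objective (relative energy)
```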
Thursday, March 17, 2022 10:24AM - 10:36AM |
S01.00013: Comprehensive Analysis of Machine-Learning Kernels for Predicting Molecular Properties Mirela Puleva, Leonardo Medrano Sandonas, Artem Kokorin, Alexandre Tkatchenko Exploration of the vast chemical compound space has been widely assisted by machine learning (ML) approaches, e.g., neural networks and kernel ridge regression (KRR). Yet, a comprehensive understanding of the different components in the development of ML models is still lacking. In this work, we analyze the influence of components of the KRR method (representation, kernel function, distance metric) in the prediction performance of (energetic and electronic) quantum-mechanical molecular properties. To do so, we consider the QM7-X dataset containing 42 physicochemical properties for ~4.2M equilibrium and non-equilibrium primarily organic molecular structures. Two- and three-body geometric representations as well as Gaussian and Laplacian kernels are used to develop the KRR models. To probe the distance metric impact, we use a generalized form of the standard Euclidean and Manhattan distances in KRR – the Minkowski metric. This allows for non-integer norms between geometric representations, thus optimizing the impact of outliers in molecular data. We expect our work to provide a deeper understanding of the correlation between KRR components for an optimal prediction of molecular properties of both equilibrium and out-of-equilibrium structures. |
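To make the kernel-component picture concrete, the sketch below shows kernel ridge regression with a generalized Minkowski distance inside an exponential kernel; p = 1 recovers the Manhattan distance of a Laplacian kernel and p = 2 the Euclidean distance (a Gaussian kernel would instead exponentiate the squared distance). The representation X and the hyperparameter values are placeholders.

```python
# Sketch of KRR with a Minkowski-p distance in the kernel.
import numpy as np
from scipy.spatial.distance import cdist

def krr_fit(X_train, y_train, p=1.5, gamma=1e-3, alpha=1e-8):
    D = cdist(X_train, X_train, metric="minkowski", p=p)
    K = np.exp(-gamma * D)                                    # kernel built from the Minkowski distance
    return np.linalg.solve(K + alpha * np.eye(len(y_train)), y_train)

def krr_predict(X_train, coef, X_test, p=1.5, gamma=1e-3):
    K = np.exp(-gamma * cdist(X_test, X_train, metric="minkowski", p=p))
    return K @ coef
```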
Thursday, March 17, 2022 10:36AM - 10:48AM |
S01.00014: Machine learning density functionals from the random-phase approximation Stefan Riemelmoser, Carla Verdi, Merzuk Kaltak, Georg Kresse The RPA-OEP method allows us to construct local exchange-correlation potentials corresponding to the random-phase approximation (RPA) energy functional. We develop a machine learning (ML) approach that short-cuts the RPA-OEP equation and maps the RPA to a pure density functional. The ingredients for the ML-RPA energy density are only averages of the electronic density and its gradient in some real-space environment; that is, our ML-RPA functionals can be considered non-local extensions of the usual gradient approximations. The exchange-correlation potentials provided by RPA-OEP reference calculations serve as derivative information for the ML fit, which greatly enhances the data set size in contrast to common approaches that use only energies for fitting. In this talk, we will present our ML-RPA framework from a theoretical and technical perspective and show practical applications to real systems. |
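An illustration of the kind of feature described above: for every grid point, average the density and the gradient magnitude over spherical real-space environments of a few radii. The radii, the hard-sphere averaging, and the brute-force loop are assumptions made for clarity, not the authors' implementation.

```python
# Illustrative non-local features: environment-averaged density and gradient magnitude.
import numpy as np

def environment_features(coords, rho, grad_norm, radii=(1.0, 2.0, 4.0)):
    """coords: (N, 3) grid points; rho, grad_norm: (N,) values on the grid."""
    feats = np.zeros((len(coords), 2 * len(radii)))
    for i, x in enumerate(coords):
        d = np.linalg.norm(coords - x, axis=1)
        for k, r in enumerate(radii):
            mask = d <= r
            feats[i, 2 * k] = rho[mask].mean()                # averaged density in the environment
            feats[i, 2 * k + 1] = grad_norm[mask].mean()      # averaged gradient magnitude
    return feats
```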