Bulletin of the American Physical Society
APS March Meeting 2020
Volume 65, Number 1
Monday–Friday, March 2–6, 2020; Denver, Colorado
Session M45: Emerging Trends in Molecular Dynamics Simulations and Machine Learning III (Focus Session)
Sponsoring Units: DCOMP GDS DSOFT DPOLY Chair: Dvora Perahia, Clemson University Room: 706 |
Wednesday, March 4, 2020 11:15AM - 11:51AM |
M45.00001: The Self-Learning Kinetic Monte Carlo (SLKMC) method augmented with data analytics for adatom-island diffusion on surfaces Invited Speaker: Talat Rahman The Self-Learning Kinetic Monte Carlo (SLKMC) method, through its use of pattern recognition, has enabled the collection of a large database of diffusion pathways and their energetics for two-dimensional adatom islands containing 2-50 atoms on fcc(111) metal surfaces [1]. A variety of diffusion mechanisms involving single and multiple island atoms were uncovered in long-time KMC simulations (on time scales comparable to experiments). In this talk, I will present results for the diffusion kinetics of two-dimensional adatom islands in homoepitaxial and heteroepitaxial systems. With examples of the diffusion of Ag and Pd adatom islands on Ag(111) and Pd(111), and of Cu and Ni adatom islands on Ni(111) and Cu(111) [2], I will draw attention to the relative roles of lateral interactions and binding energy in the size dependence of island diffusion characteristics. These and a few other descriptors have been key to applying data-driven techniques for training predictive models, which we find yield activation energy barriers that are accurate and obtained at little computational cost. Efforts are also underway to obtain reliable neural-network-derived interatomic potentials with accuracy similar to that of density functional theory based calculations. These results are promising for the development of tools for multiscale modeling of the morphological evolution of nanostructured systems. |
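The kinetic Monte Carlo machinery underlying SLKMC can be sketched generically: events (here, diffusion hops) are chosen with probability proportional to their Arrhenius rates, and the clock advances by an exponential waiting time. This is an illustrative rejection-free (BKL) KMC step, not the authors' SLKMC code; the barrier values and attempt-frequency prefactor are invented.

```python
import math
import random

kB = 8.617e-5  # Boltzmann constant, eV/K

def kmc_step(rates, rng):
    """One rejection-free (BKL) KMC step: choose an event with probability
    proportional to its rate, then advance time by an exponential waiting time."""
    total = sum(rates)
    r = rng.random() * total
    cumulative = 0.0
    for i, rate in enumerate(rates):
        cumulative += rate
        if r < cumulative:
            event = i
            break
    dt = -math.log(rng.random()) / total
    return event, dt

# Arrhenius rates from hypothetical activation barriers (eV) at 300 K
barriers = [0.05, 0.30, 0.45]
rates = [1e12 * math.exp(-E / (kB * 300.0)) for E in barriers]
event, dt = kmc_step(rates, random.Random(0))
```

In SLKMC the rate catalog is not fixed in advance: pattern recognition identifies the local environment and looks up (or computes and stores) the barriers on the fly.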
Wednesday, March 4, 2020 11:51AM - 12:03PM |
M45.00002: Accelerated Discovery of Dielectric Polymer Materials Using Graph Convolutional Neural Networks Ankit Mishra, Pankaj Rajak, Ekin Dogus Cubuk, Ken-ichi Nomura, Rajiv Kalia, Aiichiro Nakano, Ajinkya Deshmukh, Lihua Chen, Greg Sotzing, Yang Cao, Ramamurthy Ramprasad, Priya Vashishta Polynorbornene (PNB) is an important amorphous polymer system with potential applications as a high-energy-density polymer, owing to its high breakdown strength, low dielectric loss, and high thermal stability. Moreover, the electrical properties of PNB can be significantly enhanced by incorporating defects or by synthesis with controlled crystallinity via hydrogenation. However, this process is challenging, since it involves the experimental synthesis and characterization of a combinatorially large number of polymer systems to identify potential candidates. Here, we propose a deep-learning-based graph convolutional neural network (GNN) model that can identify polymer systems capable of exhibiting increased energy and power density. The GNN model is trained to predict the dielectric constant of a polymer, where the training data for the high-frequency dielectric constants of the PNB polymers are computed via ab initio molecular dynamics simulation. Our model can significantly aid the experimental synthesis of potentially new dielectric polymer materials, which would otherwise be difficult using simple statistical procedures. |
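A single graph-convolution layer of the kind used in such GNN models aggregates each atom's features from its bonded neighbors before a learned linear map. The sketch below is a generic, degree-normalized layer in plain NumPy; the toy four-atom chain graph and random weights are invented for illustration and are not the paper's model.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph convolution: aggregate neighbor features with a
    degree-normalized adjacency (self-loops added), then apply a
    linear map followed by a ReLU nonlinearity."""
    A_hat = A + np.eye(len(A))          # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(d ** -0.5)
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# Toy 4-atom "polymer backbone" graph: a chain, with one-hot atom features
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = np.eye(4)
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))
out = gcn_layer(A, H, W)
```

Stacking such layers and pooling the final node features into one vector yields a per-polymer representation from which a scalar property like the dielectric constant can be regressed.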
Wednesday, March 4, 2020 12:03PM - 12:15PM |
M45.00003: Deep Learning embedding layers for better prediction of atomic forces in solids Sivan Niv, Goren Gordon, Amir Natan The evaluation of atomic forces and total energies is a key challenge for large-scale atomistic simulations of materials. In recent years, machine learning techniques have been successfully used to predict potential energies and to derive the atomic forces through the energy gradient. The training data are usually produced by quantum calculations, typically density functional theory (DFT). The direct prediction of atomic forces by deep learning (DL) models was recently demonstrated by us and other groups; it has the advantage of being local and slightly faster, while still maintaining state-of-the-art mean absolute error (MAE). A disadvantage is that the predicted forces might be non-conserving. Like models that predict the energy, direct force models should behave well under symmetry operations and permutations of atoms. Here, we show how the use of self-learned embedding layers helps to achieve better models for the direct prediction of atomic forces. We also examine loss functions designed to ensure that the forces are smooth and close to conserving. We demonstrate this by calculating phonons in several solids and by analyzing force derivatives in systems where we move single atoms and compare the DL-predicted force derivatives to the DFT results. |
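The energy-gradient route to forces that this abstract contrasts with direct prediction can be illustrated on a toy pair potential. Here a Lennard-Jones form stands in for a learned energy model, and a central finite difference stands in for backpropagation; forces obtained this way are conservative by construction, which is exactly the property a direct force model may lose.

```python
def lj_energy(r):
    """Lennard-Jones pair energy in reduced units
    (a stand-in for a learned energy model)."""
    return 4.0 * (r ** -12 - r ** -6)

def force_from_energy(r, h=1e-6):
    """Conservative force as the negative gradient of the energy
    (central difference); an energy-based ML model obtains forces
    the same way, via automatic differentiation."""
    return -(lj_energy(r + h) - lj_energy(r - h)) / (2.0 * h)

r_min = 2.0 ** (1.0 / 6.0)  # the LJ minimum, where the conservative force vanishes
```

A direct force model instead outputs the force components themselves; nothing then guarantees they are the gradient of any scalar energy, which is why the abstract's loss terms push the predictions toward being "close to conserving."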
Wednesday, March 4, 2020 12:15PM - 12:27PM |
M45.00004: A molecular dynamics study of water crystallization using deep neural network potentials of ab-initio quality Pablo Piaggi, Roberto Car We study the crystallization of water into hexagonal ice (Ih) using molecular dynamics simulations. We describe the complex interactions between water molecules using deep neural network potentials [1] and employ state-of-the-art enhanced sampling methods [2] to convert liquid water reversibly into ice Ih. From the simulations we calculate the free energy difference between these two phases. The ice Ih configurations that emerge contain proton disorder, as observed in experiments [3]. The proton disorder makes an important contribution to the entropy of the solid [4] that most free energy methods are unable to capture. We assess whether our technique is able to capture it, and we study the effect of the interaction potential on the proton disorder. |
Wednesday, March 4, 2020 12:27PM - 12:39PM |
M45.00005: Machine learning force field using decomposed atomic energies from ab initio calculations Lin-Wang Wang
Wednesday, March 4, 2020 12:39PM - 12:51PM |
M45.00006: Machine learning to derive quantum-informed and chemically-aware force fields to simulate interfaces and defects in hybrid halide perovskites Ross E Larsen, Matthew Jankousky, Derek Vigil-Fowler, Aaron M Holder, K. Grace Johnson The paradigm for creating materials for energy applications is no longer simply discovering a single material; instead, it involves combining multiple materials to achieve a desired functionality. Simulating interfaces between disparate materials, or entire devices, requires large systems, often approximated by classical simulations based on force fields (FFs). The accuracy of such simulations can be questionable, because standard FFs may not respond accurately to changing chemical environments near an interface. We demonstrate a machine learning (ML) approach to predict quantum-derived atomic properties (e.g., charge, dipole moment, etc.) from descriptors of the local environment. These properties are used to compute chemically aware, many-body interatomic forces, because the local environment descriptors encode more than just pairwise information. We apply these ML-derived FFs to several all-inorganic halide perovskite systems, CsBX3 (B = Sn, Pb; X = Br, I), with local and extended defects, and to the interfaces between these materials, to correctly capture the anomalous charge and electrostatic multipole dynamics observed in these systems. We report on the ML-FF methods developed and the resulting implications of charge dynamics for halide perovskite functionality. |
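The mapping from local-environment descriptors to per-atom quantum properties can be sketched with the simplest possible regressor. This is purely illustrative: the descriptors, "charges," and ridge-regression model below are synthetic stand-ins, not the descriptors or ML model of the talk.

```python
import numpy as np

# Synthetic stand-in: descriptors of 200 atomic environments (6 features each)
rng = np.random.default_rng(2)
D = rng.normal(size=(200, 6))
true_w = np.array([0.5, -0.2, 0.1, 0.0, 0.3, -0.4])   # invented ground truth
q = D @ true_w + 0.01 * rng.normal(size=200)           # noisy per-atom "charges"

# Ridge regression from descriptors to charges: w = (D^T D + lam I)^-1 D^T q
lam = 1e-3
w = np.linalg.solve(D.T @ D + lam * np.eye(6), D.T @ q)
pred = D @ w
```

Once per-atom charges and multipoles can be predicted from local descriptors, the electrostatic contribution to the forces follows from those predicted properties rather than from fixed FF parameters, which is what makes the resulting force field "chemically aware."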
Wednesday, March 4, 2020 12:51PM - 1:03PM |
M45.00007: Active Learning of Coarse Grained Force Fields with Gaussian Process Regression Blake Duschatko, Jonathan Vandermause, Nicola Molinari, Boris Kozinsky Many physically relevant spatial and temporal scales remain inaccessible to leading MD techniques, especially in soft matter and composite materials. Consequently, it is common practice to replace the all-atom representation with effective beads using coarse-graining approaches. However, the development of coarse-grained force fields is a laborious procedure, and available force fields tend to lose orientation information, making all-atom reconstruction, or fine-graining, difficult. We propose a novel machine learning method for automatically constructing coarse-grained force fields by active learning. In addition, these force fields provide predictive uncertainty and allow for fine-grained reconstruction. We demonstrate that Gaussian process regression can alleviate the need for the large initial all-atom trajectories that are generally required for achieving thermodynamically consistent results in the latent space. Moreover, we will discuss the performance of such models in the context of a variety of molecular systems possessing different amounts of rotational symmetry. |
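The uncertainty-driven active learning loop central to this approach can be sketched with a bare-bones Gaussian process: query the point where the predictive variance is largest, label it with the expensive reference calculation, and refit. Everything below is a generic illustration; the RBF kernel, the 1-D toy "expensive calculation" (a sine), and all hyperparameters are invented, not the authors' model.

```python
import numpy as np

def rbf(a, b, ell=1.0):
    """Squared-exponential kernel between two 1-D input arrays."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    """GP posterior mean and standard deviation at test inputs Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    Kss = rbf(Xs, Xs)
    mean = Ks.T @ np.linalg.solve(K, y)
    cov = Kss - Ks.T @ np.linalg.solve(K, Ks)
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

f = lambda x: np.sin(x)            # stand-in for an expensive all-atom calculation
X = np.array([0.0, 3.0])           # initial training inputs
y = f(X)
candidates = np.linspace(0.0, 3.0, 61)

# Active learning: repeatedly label the most uncertain candidate
for _ in range(5):
    _, std = gp_posterior(X, y, candidates)
    xq = candidates[np.argmax(std)]
    X = np.append(X, xq)
    y = np.append(y, f(xq))
```

Each query lands where the model knows least, so accuracy improves with far fewer reference calculations than uniform sampling; the same predictive variance also flags when a simulation wanders outside the training data.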
Wednesday, March 4, 2020 1:03PM - 1:15PM |
M45.00008: External Potential Ensembles to Improve the Learning of Transferable Coarse-Grained Potentials Kevin Shen, Kris T Delaney, M. Scott Shell, Glenn H Fredrickson Simulation of complex, heterogeneous molecular systems requires models that are transferable across environmental conditions. However, current bottom-up strategies for parametrizing accurate coarse-grained (CG) models often rely on explicitly targeting selected thermodynamic quantities (measured from experiments or atomistic simulations), which can be difficult to obtain. We argue that this information limitation can be overcome by coarse graining using thermodynamically informative ensembles, in which variables conjugate to the thermodynamic variables of interest are allowed to fluctuate. We demonstrate this approach by using external potential ensembles with relative entropy optimization to parametrize highly coarse-grained models of solvent mixtures from atomistic force fields. The CG models reproduce the activity coefficient of the atomistic model to within 0.1 kT across the entire composition range, without explicitly measuring and matching chemical potentials during the parameterization process. This approach allows improvements in atomistically detailed models to be transferred efficiently to fast, coarse-grained simulations of macroscale systems. |
Wednesday, March 4, 2020 1:15PM - 1:27PM |
M45.00009: Data-driven parameterization of coarse-grained models of soft materials using machine learning tools Lilian Johnson, Frederick Phelan Many advances have been made in coarse-grained (CG) models of polymers to reduce computational effort while capturing the relaxation behavior imparted by hierarchical structure, resulting in two primary classes: “bottom-up” methods, which preserve chemical specificity, and “top-down” methods, which reproduce physical properties. Here, we combine a bottom-up coarse-grained model with a dissipative potential to obtain a chemically specific, thermodynamically consistent, and dynamically correct model. We parametrize the conservative forces using the iterative Boltzmann inversion (IBI) method to develop a CG force field from short all-atom (AA) simulations that recovers AA structure and, thus, thermodynamics. We employ machine learning and filtering techniques to produce smooth distributions that enable automation and rapid convergence to smooth force profiles. We develop a similar approach for the parameterization of the dissipative potential, to correct the dynamics of the IBI-generated force field. In this method, we match AA diffusivity as a proxy for tuning the monomeric friction. We demonstrate this method for oligomers in the melt state. Efforts to develop these methods into complementary automated packages will be discussed. |
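The core IBI update is compact: the pair potential is corrected by the Boltzmann-inverted mismatch between the model's radial distribution function (RDF) and the target RDF. The sketch below shows one such update step; the single-peak toy RDF, mixing factor alpha, and reduced units are invented for illustration.

```python
import numpy as np

kT = 1.0  # thermal energy in reduced units

def ibi_update(U, g_model, g_target, alpha=0.2, eps=1e-12):
    """One iterative Boltzmann inversion step:
    U_new(r) = U(r) + alpha * kT * ln(g_model(r) / g_target(r)).
    A mixing factor alpha < 1 damps the update for stability; eps
    guards the logarithm where the RDFs vanish."""
    return U + alpha * kT * np.log((g_model + eps) / (g_target + eps))

# Initial guess from direct Boltzmann inversion of the (toy) target RDF
r = np.linspace(0.8, 2.5, 100)
g_target = np.exp(-((r - 1.2) ** 2) / 0.05)   # invented single-peak target RDF
U0 = -kT * np.log(g_target + 1e-12)
```

In practice each iteration reruns a short CG simulation to measure g_model before applying the update; the smoothing and filtering the abstract mentions act on the measured RDFs so the resulting force profiles stay smooth.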
Wednesday, March 4, 2020 1:27PM - 1:39PM |
M45.00010: JAX, M.D.
End-to-End Differentiable, Hardware Accelerated, Molecular Dynamics in Pure Python Sam Schoenholz, Ekin Dogus Cubuk A large fraction of computational science involves simulating the dynamics of particles that interact via pairwise or many-body interactions. These simulations, called Molecular Dynamics (MD), span a vast range of subjects from physics to drug discovery. Most MD software involves significant use of handwritten derivatives and code reuse across C++, FORTRAN, and CUDA. This is reminiscent of the state of machine learning (ML) before automatic differentiation became popular. Here, we bring the substantial advances in software that have taken place in ML to MD. JAX, M.D. is an end-to-end differentiable MD package written entirely in Python that can be just-in-time compiled to CPU/GPU/TPU. JAX MD allows researchers to iterate extremely quickly and lets researchers easily incorporate ML into their workflows. Finally, since all of the simulation code is in Python, researchers can have unprecedented flexibility in setting up experiments. JAX MD also allows researchers to take derivatives through whole-simulations as well as seamlessly incorporate neural networks into simulations. In this presentation we explore the architecture of JAX MD and its capabilities through several vignettes. Code is at github.com/google/jax-md along with a Colab notebook with experiments from the presentation. |
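The "no handwritten derivatives" idea can be illustrated in a few lines of plain JAX (not the JAX MD API itself): define a total energy in Python, and `jax.grad` produces the exact forces, while `jax.jit` compiles the result for the accelerator. The toy Lennard-Jones cluster below is an invented example.

```python
import jax
import jax.numpy as jnp

def total_energy(positions):
    """Total Lennard-Jones energy of a small cluster (free space, no cutoff)."""
    n = positions.shape[0]
    disp = positions[:, None, :] - positions[None, :, :]
    # Pad the diagonal so self-distances are 1; those terms are masked out below.
    r = jnp.sqrt(jnp.sum(disp ** 2, axis=-1) + jnp.eye(n))
    lj = 4.0 * (r ** -12 - r ** -6)
    return 0.5 * jnp.sum(lj * (1.0 - jnp.eye(n)))

# Forces as the exact negative energy gradient, JIT-compiled -- no hand derivation
forces = jax.jit(jax.grad(lambda x: -total_energy(x)))
x0 = jnp.array([[0.0, 0.0, 0.0],
                [1.2, 0.0, 0.0],
                [0.0, 1.3, 0.0]])
f = forces(x0)
```

Because the whole simulation is a differentiable Python function, the same `grad` transformation extends to derivatives of simulation outcomes with respect to potential parameters or initial conditions, which is what "derivatives through whole simulations" refers to.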
Wednesday, March 4, 2020 1:39PM - 1:51PM |
M45.00011: A neural network interatomic potential for molten NaCl Qingjie Li, Emine Kucukbenli, Stephen Lam, Boris Khaykovich, Efthimios Kaxiras, Ju Li Molten salts have been widely exploited for clean energy applications such as molten salt reactor (MSR) and concentrated solar power (CSP) technologies. These applications impose stringent requirements on the choice of molten salts, such as excellent thermophysical properties, stability under extreme conditions, tolerance to impurities, and compatibility with major structural materials. Optimizing and searching for appropriate molten salt systems thus calls for a deep understanding of the underlying molecular structures, chemistry, and dynamics across a vast space of salts. In this talk, we present the application of artificial neural networks (NN) to training accurate interatomic potentials that enable fast evaluations of salt properties on the desired time and length scales. In particular, we highlight the ability of neural network interatomic potentials to accurately predict the short- to medium-range structures and thermophysical properties of ionic liquids. We also propose a generic strategy to address the short-range interactions that are generally difficult for artificial NNs to learn. |
Wednesday, March 4, 2020 1:51PM - 2:03PM |
M45.00012: Simulating Aluminum Corrosion Using DFT Trained Deep Neural Network Potentials Wissam A Saidi, Shyam Dwaraknath Current materials challenges necessitate simulations at length and time scales that often exceed the capabilities of the state of the art in density functional theory (DFT). Many effective Hamiltonian methods that can scale beyond these limits, such as cluster expansion, tight binding, and interatomic potentials, often require a significant amount of expertise to train and employ. Machine learning methods such as deep neural networks exchange expertise for data volume in what are typically expert-driven processes. Here we demonstrate the power of a deep neural network potential (DNP) to model the stability of various phases and terminations of Al2O3 on Al. This model builds on previous work that used DFT to demonstrate that the relative stability of alpha-, gamma-, and amorphous Al2O3 changes with film thickness, but that was limited to one coherency constraint. With our DNP, we are able to find lower-strain but less coherent interfaces for all three phases, altering the layer thickness at which the relative stability shifts. More importantly, we see strong correlations with interface chemistry, suggesting that the chemical state of the environment can play a strong role in the nucleation and early stages of Al2O3 film growth. |
Wednesday, March 4, 2020 2:03PM - 2:15PM |
M45.00013: Tensor-Field Molecular Dynamics: A Deep Learning model for highly accurate, symmetry-preserving force-fields from small data sets Simon Batzner, Lixin Sun, Tess E Smidt, Boris Kozinsky Simulating the dynamic behavior of molecules and extended materials over large time scales and with high fidelity has been a long-standing goal in computational physics. Recently, deep neural networks have shown great promise in learning energies and atomic forces from atomistic data, thereby providing access to efficient and accurate interatomic force fields. However, most existing methods still require the construction of very large reference training sets, consisting of tens of thousands of structures, often computed with expensive first-principles approaches. This creates a challenging bottleneck in the construction of interatomic force fields, limiting deep-learning-based approaches to systems for which such large training sets are feasible to generate. We present a framework to learn highly accurate machine-learning force fields from small training sets. We show that our proposed method obtains high-accuracy force predictions on a variety of atomic systems, including organic molecules, bulk solids, and complex interfaces, and we discuss the resulting molecular dynamics simulations. |
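The symmetry-preservation requirement — predictions must respect rotations and permutations of atoms — can be checked numerically for even the simplest invariant representation. The sorted-pair-distance descriptor below is a deliberately crude illustration, far simpler than the tensor-field networks of the talk, but it makes the invariances concrete.

```python
import numpy as np

def pair_distance_descriptor(positions):
    """Rotation- and permutation-invariant descriptor: the sorted list
    of all pairwise distances in the structure."""
    diff = positions[:, None, :] - positions[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    iu = np.triu_indices(len(positions), k=1)
    return np.sort(dist[iu])

rng = np.random.default_rng(1)
x = rng.normal(size=(5, 3))                      # a random 5-atom structure
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))     # a random orthogonal matrix

d_original = pair_distance_descriptor(x)
d_rotated = pair_distance_descriptor(x @ Q.T)    # rigidly rotated/reflected copy
d_permuted = pair_distance_descriptor(x[::-1])   # atoms relabeled
```

A model built on such invariant (or, more powerfully, equivariant) features never has to learn the symmetry from data, which is one reason symmetry-preserving architectures can get away with far smaller training sets.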