Bulletin of the American Physical Society
APS March Meeting 2023
Volume 68, Number 3
Las Vegas, Nevada (March 5-10)
Virtual (March 20-22); Time Zone: Pacific Time
Session K53: Designing Neural Networks for the Structure of Physics Data (Focus Session)
Sponsoring Units: GDS DCOMP | Chair: William Ratcliff, National Institute of Standards and Technology | Room: 307 |
Tuesday, March 7, 2023 3:00PM - 3:36PM |
K53.00001: Unified Graph Neural Network Force-field for the Periodic Table Invited Speaker: Kamal Choudhary Graph neural networks (GNN) have been shown to provide substantial performance improvements for atomistic material representation and modeling compared with descriptor-based machine learning models. While most existing GNN models for atomistic predictions are based on atomic distance information, they do not explicitly incorporate bond angles, which are critical for distinguishing many atomic structures. Furthermore, many material properties are known to be sensitive to slight changes in bond angles. We present an Atomistic Line Graph Neural Network (ALIGNN), a GNN architecture that performs message passing on both the interatomic bond graph and its line graph corresponding to bond angles. We demonstrate that angle information can be explicitly and efficiently included, leading to improved performance on multiple atomistic prediction tasks for both scalar and vector-valued data. Moreover, we develop a unified atomistic line graph neural network-based force-field (ALIGNN-FF) that can model both structurally and chemically diverse materials with any combination of 89 elements from the periodic table. To train the ALIGNN-FF model, we use the JARVIS-DFT dataset, which contains around 75,000 materials and 4 million energy-force entries, of which 307,113 are used in the training. We demonstrate the applicability of this method for fast optimization of atomic structures in the Crystallography Open Database and for predicting accurate crystal structures of alloys using a genetic algorithm. |
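The bond-graph/line-graph pairing that ALIGNN message-passes over can be illustrated with a minimal sketch (plain Python with hypothetical function names, not the authors' implementation): each bond becomes a node of the line graph, and two bonds that share an atom are joined by a line-graph edge, which is exactly a bond angle.

```python
from itertools import combinations

def line_graph(bonds):
    """Build the line graph of a bond graph: each bond becomes a node,
    and two bonds are connected when they share an atom, so every
    line-graph edge corresponds to a bond angle."""
    edges = []
    for (i, b1), (j, b2) in combinations(enumerate(bonds), 2):
        if set(b1) & set(b2):  # bonds share an atom -> an angle
            edges.append((i, j))
    return edges

# Water-like toy molecule: O (atom 0) bonded to two H atoms (1 and 2).
bonds = [(0, 1), (0, 2)]
angles = line_graph(bonds)  # one line-graph edge: the H-O-H angle
```

In ALIGNN itself both graphs carry learned features and message passing alternates between them; this sketch only shows the structural correspondence.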
Tuesday, March 7, 2023 3:36PM - 3:48PM |
K53.00002: Structure-motif-based material network for functional material discovery Anoj Aryal, Huta Banjade, Qimin Yan Data-driven approaches and machine learning (ML) techniques have had a significant impact on many disciplines, including materials science. The success of any ML algorithm relies on an effective representation of the material systems of interest. Structure motifs, essential building components of solid-state materials, have been recognized to be strongly correlated with material properties and play a significant role in material design. In this talk, we will discuss the construction of a material network using a structure-motif-based connection measure algorithm to identify and categorize materials sharing common properties. Structure motif information and the connection types with neighboring motifs are encoded in a feature vector for each motif in a compound. This set of feature vectors is used for similarity measurements between any two material nodes in a general material network. In our initial effort, all the known oxide materials are mapped in a network graph and the connection patterns among these compounds are analyzed. We will discuss the potential use of this motif-based material network for identifying unknown functional materials for technical applications, including transparent conducting oxides, battery materials, and topological materials. |
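The similarity measurement between motif feature vectors can be sketched as follows (illustrative only: the `connect` helper, the fingerprint values, and the threshold are hypothetical stand-ins, not the authors' connection measure algorithm).

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two motif feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def connect(materials, threshold=0.8):
    """Link material nodes whose motif fingerprints are similar enough,
    producing the edge list of a material network."""
    names = list(materials)
    edges = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if cosine_similarity(materials[a], materials[b]) >= threshold:
                edges.append((a, b))
    return edges
```

With `{"A": [1, 0], "B": [2, 0], "C": [0, 1]}`, only A and B end up connected: their fingerprints point in the same direction.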
Tuesday, March 7, 2023 3:48PM - 4:00PM |
K53.00003: Physically informed graph neural networks for prediction of optical properties of solid materials Can Ataca, Akram Ibrahim Despite the success of machine learning (ML) in predicting complex materials properties, current ML methods are purely data-driven approaches that do not incorporate physical principles. These methods are often difficult to interpret, require large training sets, and may generalize poorly outside the observational domain. Here we develop a physically informed graph neural network (GNN) to predict the frequency-dependent dielectric function of solid crystals, from which we can calculate all optical properties. The accurate prediction of optical properties using first-principles methods such as density functional theory (DFT) can be a computationally tedious task and becomes almost impossible for large systems. The dielectric function is a high-dimensional, complex-valued, tensorial target output, which presents additional difficulties to ML models. We augment our GNN with a learning bias that penalizes the model for unphysical predictions of features of the dielectric function such as band gaps, resonance frequencies, and resonance amplitudes. Our model is trained and validated on a database of DFT-calculated dielectric spectra for a pool of 17,805 different materials obtained from the JARVIS-DFT database [1]. The physical consistency achieved by our physically informed GNN model makes it more generalizable outside the training domain, and thus more reliable for screening new functional materials of arbitrary compositional and structural diversity. |
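A learning bias of this kind can be thought of as an extra penalty term added to the data loss. A minimal sketch, assuming two simple physical rules for the imaginary part of the dielectric function (it must be non-negative, and it should vanish below the band gap); the function name and weighting are illustrative, not the authors' loss:

```python
def physics_penalty(freqs, eps_imag, band_gap, weight=1.0):
    """Penalty for unphysical predictions of Im(eps):
    - Im(eps) must be non-negative (no negative absorption),
    - Im(eps) should vanish for photon energies below the band gap."""
    negative = sum(max(0.0, -e) for e in eps_imag)
    below_gap = sum(abs(e) for f, e in zip(freqs, eps_imag) if f < band_gap)
    return weight * (negative + below_gap)

# A physical spectrum (no absorption below the 1.0 eV gap) incurs no penalty;
# a spectrum with negative or sub-gap absorption is penalized.
```

In training, this term would be added to the usual regression loss so that gradient descent steers the network toward physically consistent spectra.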
Tuesday, March 7, 2023 4:00PM - 4:36PM |
K53.00004: Invited Speaker: Mario Geiger |
Tuesday, March 7, 2023 4:36PM - 4:48PM |
K53.00005: Understanding Self-Assembly Behavior with Self-Supervised Learning Matthew Spellings, Maya Martirossyan, Julia Dshemuchadse Recently, deep learning models trained on enormous amounts of data using simple language modeling tasks have shown great promise when applied to new problems, including the generation of novel text. These results have spurred the proliferation of attention mechanisms, which are particularly useful both for their expressive power and for the ability to inspect model behavior by viewing the attention weights for a given input. In this work, we present several permutation- and rotation-equivariant neural network architectures that use attention mechanisms to solve self-supervised tasks on point clouds. We show how the representations learned by these networks can be applied to understand the structural evolution of systems of self-assembling particles. Equivariant architectures such as those shown here can help apply the power of deep learning to new condensed matter systems, opening the door to powerful ways to analyze and even generate novel local environments within ordered structures. |
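As a reference point, the scaled dot-product attention primitive itself can be sketched in plain Python (illustrative only; the equivariant architectures in this talk build on, but are not reducible to, this operation). The returned weight matrix is precisely what one inspects to interpret model behavior:

```python
import math

def attention(queries, keys, values):
    """Scaled dot-product attention over lists of vectors. Returns the
    attended outputs and the softmax weight matrix used to produce them."""
    d = len(keys[0])
    out, weights = [], []
    for q in queries:
        scores = [sum(a * b for a, b in zip(q, k)) / math.sqrt(d) for k in keys]
        m = max(scores)                      # subtract max for stability
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        w = [e / z for e in exps]
        weights.append(w)
        out.append([sum(wi * v[j] for wi, v in zip(w, values))
                    for j in range(len(values[0]))])
    return out, weights
```

With a single key/value pair the weight is 1 and the value passes through unchanged; with multiple keys the weights sum to 1 and favor keys aligned with the query.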
Tuesday, March 7, 2023 4:48PM - 5:00PM |
K53.00006: Metric geometry tools for automatic structure phase map generation Kiran Vaddi, Karen Li, Lilo Pozzo Extracting a phase map that provides a hierarchical summary of high-throughput experiments is a long-standing bottleneck to accelerating material discovery. A phase map that underpins the inherent properties of materials is typically denoted using a composition-structure map but can be extended to other relevant parameters such as synthesis. In this talk, we describe a statistical tool to efficiently obtain a phase map from high-throughput measurements. Specifically, we focus on the multi-scale characterization of nanostructures using small-angle scattering (SAS), which identifies structural features and correlations at different length scales when the scattering intensity at different angles is measured. The resulting plot of intensity vs. wave-vector q, which is related to angle, is then inspected to understand and correlate structural features with material properties, composition, and processing conditions, collectively defined as the design space. Advances in high-throughput experiments and measurement speeds at synchrotron facilities allow us to collect high-quality data at a much faster rate, but the analysis of such data then becomes a bottleneck. A phase map provides a quick summary of correlations in the design space based on the structures formed and characterized by their SAS profiles. Prior studies have shown that the phase map needs to be continuous over the design space and that the correlations should be defined by a similarity between SAS profiles. We pose both constraints as geometric features of the phase map, where continuity is obtained by defining a shape-based metric topology of SAS profiles and geometric diffusions defined by linear operators on the design space. We apply the proposed methodology to scattering, diffraction, and spectroscopy to showcase its broad applicability. We show that the resulting phase maps are continuous, have an inherent shape similarity between regions identified as the same phase, and are invariant to shifted, broad, or missing peaks that might result from experimental limitations. |
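One ingredient of such a pipeline, a similarity between SAS profiles that ignores overall intensity scale, can be sketched as follows (an illustrative stand-in, not the authors' shape-based metric, which also handles shifted and broadened peaks):

```python
import math

def profile_distance(p, q):
    """Scale-invariant distance between two 1-D scattering profiles:
    each profile is normalized to unit norm before comparison, so a
    uniform change in overall intensity does not separate two samples
    that share the same underlying structure."""
    def unit(v):
        n = math.sqrt(sum(x * x for x in v))
        return [x / n for x in v]
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(unit(p), unit(q))))
```

Two profiles that differ only by a constant intensity factor are at distance zero, while profiles with genuinely different peak structure remain well separated.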
Tuesday, March 7, 2023 5:00PM - 5:12PM |
K53.00007: Graph neural network accelerated generalizable stress field prediction for mesh-based finite element simulations Bowen Zheng, Zeqing Jin, Changgon Kim, Grace X Gu Finite element (FE) simulation is an important numerical method for structural analysis. However, one persistent issue is that the computational cost grows rapidly as the studied geometry becomes more complex. It would therefore be of great value to predict accurate FE results, such as the stress distribution of a component, without running expensive simulations. Deep learning techniques have been widely used for such prediction tasks. However, traditional frameworks such as the convolutional neural network (CNN) are not well suited for this problem, because CNNs are based on grid-like, predefined filters, while the mesh in FE simulations is highly irregular and variable. The graph neural network (GNN), a deep learning architecture that operates on graph objects, can make predictions based on the learned relationships between vertices and edges. This is particularly suitable for our task because the components of a GNN closely resemble the nodes and element edges of a mesh-based FE simulation. In our study, we develop GNN models to predict stress and strain distributions in a body subject to external loads. Our GNN models achieve high prediction accuracy for 2D and 3D solid mechanics problems. In addition, the approach is highly generalizable because the way a GNN learns does not depend on the shape of the structure but on the intrinsic physics of the FE methodology. Our method may shed light on the fast prediction of stress and strain fields for complex engineering structures. |
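The core GNN operation on a mesh graph, aggregating neighbor information along element edges, can be sketched with simple mean aggregation (a toy stand-in for the learned message-passing layers; in a trained model the aggregation is parameterized by neural networks):

```python
def message_pass(features, edges):
    """One round of mean-aggregation message passing on an undirected
    graph given as an edge list: each node's new feature is the average
    of its own feature and those of its neighbors."""
    n = len(features)
    sums = list(features)   # start each sum with the node's own feature
    counts = [1] * n
    for i, j in edges:
        sums[i] += features[j]; counts[i] += 1
        sums[j] += features[i]; counts[j] += 1
    return [s / c for s, c in zip(sums, counts)]

# Chain mesh 0-1-2: one round smooths each nodal value toward its neighbors,
# the same pattern by which a GNN propagates load information across a mesh.
```

Because the update is defined per node and per edge, the same layer applies unchanged to any mesh topology, which is the source of the generalizability noted in the abstract.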
Tuesday, March 7, 2023 5:12PM - 5:24PM |
K53.00008: Modeling the Band Structure of Periodic Crystals with Physics-Informed Neural Networks Circe Hsu, Daniel T Larson, Gabriel R Schleder, Marios Mattheakis, Efthimios Kaxiras Accurate computation of the electronic band structure is important for understanding material properties. Traditional methods, such as density functional theory, are highly successful but become computationally costly for large systems. We propose a neural network architecture to model the wavefunction and band structure of a periodic crystal. This type of Physics-Informed Neural Network (PINN) solves the Schrödinger equation using a data-free approach. We apply our network to a series of 1-dimensional potentials, demonstrating accurate prediction of the Bloch wavefunctions and band structures when compared to numerically computed solutions. Finally, we demonstrate how our approach allows for further generalization, and discuss future directions. |
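The data-free idea is to minimize the residual of the Schrödinger equation itself rather than a fit to labeled solutions. A minimal sketch in units with ħ²/2m = 1, using a finite-difference second derivative in place of the automatic differentiation a PINN would use (illustrative, not the authors' network):

```python
import math

def schrodinger_residual(psi, V, E, xs, h=1e-3):
    """Mean-squared residual of -psi'' + V*psi - E*psi over sample points
    xs, with psi'' approximated by a central finite difference. A PINN
    drives this residual toward zero during training instead of fitting
    labeled data."""
    total = 0.0
    for x in xs:
        d2 = (psi(x + h) - 2.0 * psi(x) + psi(x - h)) / h ** 2
        r = -d2 + V(x) * psi(x) - E * psi(x)
        total += r * r
    return total / len(xs)

# Free particle check: psi(x) = sin(x) solves -psi'' = E*psi with E = 1,
# so the residual should vanish up to finite-difference error.
residual = schrodinger_residual(math.sin, lambda x: 0.0, 1.0, [0.3, 1.0, 2.0])
```

In the actual PINN, `psi` is the network output (with Bloch-periodic boundary conditions built in), and both the network weights and the eigenvalue E are optimized against this residual.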
Tuesday, March 7, 2023 5:24PM - 5:36PM |
K53.00009: Using CycleGANs to construct training data for other Machine Learning models Abid A Khan, Chia-Hao Lee, Pinshane Y Huang, Bryan K Clark Supervised machine learning (ML) has found its way into the scientific community, proving to be incredibly useful for analyzing and classifying large datasets. Constructing these useful ML models, however, requires large amounts of training data that usually come from experiments. Often, this data requires tedious labeling, partially defeating the purpose of ML models in the first place. Simulation data, on the other hand, is usually more efficient to obtain and already comes prelabeled. However, simulated images are often limited by oversimplified models and deviate from experimental images, limiting the accuracy and precision of ML training. We present an approach to generating "experimental"-like data by employing a CycleGAN to automatically add realistic features and noise profiles to simulated data. We specifically use data from scanning transmission electron microscopy (STEM) and show how ML models better evaluate experimental data when trained with data generated from a CycleGAN. |
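The constraint that lets a CycleGAN train without paired images is cycle consistency: mapping a simulated image into the "experimental" domain and back should recover the original. A toy sketch of that loss term (the generators here are placeholder functions, not learned networks):

```python
def cycle_consistency_loss(x, G, F, dist=lambda a, b: abs(a - b)):
    """Cycle-consistency term: translating a sample to the other domain
    and back, F(G(x)), should recover the original sample x. G maps
    simulation -> 'experiment', F maps 'experiment' -> simulation."""
    return dist(F(G(x)), x)

# Toy 'generators': G adds a fixed noise offset, F removes it, so the
# cycle is perfectly consistent and the loss is zero.
G = lambda x: x + 1.0
F = lambda y: y - 1.0
```

In the full model this term is combined with adversarial losses in both domains, so G learns realistic noise and detector artifacts while the cycle term preserves the labeled content of the simulation.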
Tuesday, March 7, 2023 5:36PM - 5:48PM |
K53.00010: Contrastive Learning Reveals the Trajectory of Protein Structure Evolution Yong Wei, Baofu Qiao, Tao Wei, Hanning Chen The molecular structure of a protein in three-dimensional space can be represented by the spatial distances of all possible amino acid residue pairs, forming a symmetric matrix called a contact map. Two categories of protein structure evolution data are investigated in this work: sequences of contact maps of (1) lysozyme adsorption on a graphene surface, obtained by discontinuous molecular dynamics (DMD) simulations, and (2) the human cell receptor ACE2 binding with the wild-type SARS-CoV-2 spike protein and with key mutants, via large-scale all-atom explicit-solvent molecular dynamics simulations. The contrastive learning model learns feature representations of contact maps by maximizing the agreement between a positive pair (xi, xj) via a loss function, in which xi and xj are correlated views of the same contact map x, generated by stochastic data augmentations τ~Τ and τ'~Τ, respectively. The extracted contact map feature representations are then grouped into stages using k-means clustering to reveal the stages of protein structure evolution trajectories. Experimental results show that the protein structure evolution stages obtained by the contrastive learning models are invaluable for studying the protein folding path in the adsorption process and for understanding the allosteric regulation mechanism of the SARS-CoV-2 spike protein in the receptor-binding domain (RBD)-ACE2 binding process. |
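The agreement-maximizing loss for a positive pair can be sketched in the style of the widely used NT-Xent contrastive loss (a common choice for this setup; the abstract does not specify the authors' exact loss, so treat this as a representative example):

```python
import math

def nt_xent(z_i, z_j, negatives, tau=0.5):
    """NT-Xent-style loss for one positive pair (z_i, z_j) of augmented
    views of the same contact map, contrasted against a list of negative
    views from other contact maps. Lower loss = higher agreement."""
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv)
    pos = math.exp(cos(z_i, z_j) / tau)
    denom = pos + sum(math.exp(cos(z_i, n) / tau) for n in negatives)
    return -math.log(pos / denom)
```

An aligned positive pair with an orthogonal negative yields a small loss; a misaligned pair yields a large one, which is exactly the gradient signal that pulls views of the same contact map together in feature space before k-means assigns the stages.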
Tuesday, March 7, 2023 5:48PM - 6:00PM |
K53.00011: Geometric Dynamic Variational Autoencoders for Learning Nonlinear Dynamics Ryan Lopez, Paul J Atzberger We develop data-driven methods incorporating geometric and topological information to learn parsimonious representations of nonlinear dynamics from observations. We develop approaches for learning nonlinear state space models of the dynamics for general manifold latent spaces using training strategies related to Variational Autoencoders (VAEs). Our methods are referred to as Geometric Dynamic (GD) Variational Autoencoders (GD-VAEs). We learn encoders and decoders for the system states and evolution based on deep neural network architectures that include general Multilayer Perceptrons (MLPs), Convolutional Neural Networks (CNNs), and Transpose CNNs (T-CNNs). Motivated by problems arising in parameterized PDEs and physics, we investigate the performance of our methods on tasks for learning low dimensional representations of the nonlinear Burgers equations, constrained mechanical systems, and spatial fields of reaction-diffusion systems. GD-VAEs provide methods for obtaining representations for use in learning tasks involving dynamics. |
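The encode, evolve-in-latent-space, decode pattern that GD-VAEs learn can be sketched abstractly (the three maps below are placeholders for the trained encoder, latent dynamics, and decoder networks; this shows only the rollout structure, not the VAE training):

```python
def latent_rollout(x0, encode, step, decode, n_steps):
    """Roll out learned dynamics: map an initial state to the latent
    space, step the latent dynamics forward, and decode each latent
    state back to the observation space."""
    z = encode(x0)
    traj = []
    for _ in range(n_steps):
        z = step(z)
        traj.append(decode(z))
    return traj

# Placeholder linear maps standing in for the learned networks:
# encode halves the state, the latent dynamics add 1, decode doubles.
```

The payoff is that `step` acts in a low-dimensional (possibly manifold-valued) latent space, so long rollouts of a PDE or mechanical system cost far less than stepping the full state.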
© 2024 American Physical Society