Bulletin of the American Physical Society
APS March Meeting 2019
Volume 64, Number 2
Monday–Friday, March 4–8, 2019; Boston, Massachusetts
Session C22 (Focus): Building the Bridge to Exascale: Applications and Opportunities for Materials, Chemistry, and Biology III
Sponsoring Units: DCOMP, DBIO, DPOLY, DCMP
Chair: Jack Deslippe, Lawrence Berkeley National Laboratory
Room: BCEC 157C
Monday, March 4, 2019 2:30PM - 2:42PM
C22.00001: OpenAtom: massively-parallel simulations for molecular and electronic dynamics Minjung Kim, Subhasish Mandal, Eric Mikida, Kavitha Chandrasekar, Qi Li, Eric Bohm, Nikhil Jain, Laxmikant Kale, Glenn Martyna, Sohrab Ismail-Beigi
OpenAtom (OA) is an open-source, massively parallel software application that performs ab initio molecular dynamics (AIMD) simulations as well as ground- and excited-state calculations using a plane-wave basis set. OA scales well to thousands of compute nodes by exploiting the overdecomposition and asynchrony strategies provided by Charm++, the parallel framework on which OA is built. Here, we describe recent advances in OA's capabilities: 1) an implementation of path-integral Car-Parrinello MD, with performance results for hydrogen adsorption in a metal-organic framework (MOF), and 2) the release of a GW method implementation for quasiparticle properties, with scaling results up to 32K cores on Mira and Blue Waters. We will also discuss our collaborative efforts and ongoing development in OA concerning the projector augmented-wave method, reduced-order O(N³) GW calculations, porting to next-generation machines with multiple GPGPUs per node, and extreme-scale platform concerns for post-petascale and exascale environments.
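To make the path-integral part concrete: in path-integral MD each quantum nucleus is mapped onto a classical ring polymer of P beads joined by harmonic springs, moving on the physical potential scaled by 1/P. The sketch below is a minimal illustration of that mapping, not OpenAtom code; the units and parameter values are assumptions.

```python
# Minimal sketch of the ring-polymer mapping behind path-integral MD.
# Natural units and parameter values below are illustrative assumptions.
import numpy as np

kB, hbar = 1.0, 1.0          # natural units for illustration
P, m, T = 16, 1.0, 0.5       # beads, mass, temperature (assumed values)
omega_P = P * kB * T / hbar  # ring-polymer spring frequency

def ring_polymer_energy(x, V):
    """x: (P, 3) bead positions of one atom; V: external potential V(r)."""
    # harmonic springs between cyclically adjacent beads
    springs = 0.5 * m * omega_P**2 * np.sum((np.roll(x, -1, axis=0) - x) ** 2)
    # physical potential averaged over the beads (the 1/P scaling)
    return springs + np.mean([V(b) for b in x])

# Example: a harmonic external potential
V = lambda r: 0.5 * np.dot(r, r)
x = np.random.normal(scale=0.1, size=(P, 3))
print(ring_polymer_energy(x, V))
```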
Monday, March 4, 2019 2:42PM - 2:54PM
C22.00002: Automated Discovery of Chemical Mechanisms using Reactive Molecular Dynamics James Koval, Ahmed Ismail
Reactive molecular dynamics simulations allow for changes in chemical composition through the dynamic calculation of bond orders between atoms. This makes it possible to determine large-scale reaction mechanisms, provided sufficiently many reaction events can be captured. Determining "critical pathways" and eliminating closed "unstable loops" that return to the original reactants therefore requires studying large systems of potentially tens of thousands of atoms for tens of nanoseconds, creating a large "data science" problem: analyzing gigabytes or even terabytes of data. We demonstrate a recently developed algorithm that can construct such reaction mechanisms and pathways as well as provide inputs for quantum mechanical calculations to determine reaction rates. As examples, we apply this approach to the combustion mechanisms of non-petroleum-based alternative fuel candidates.
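The core bookkeeping step can be illustrated simply (this is a generic sketch, not the authors' algorithm): treat each frame's bond list as a graph, identify molecules as connected components, and record a reaction event whenever the set of molecules changes between frames.

```python
# Minimal sketch of extracting reaction events from per-frame bond lists
# produced by a reactive MD run. Not the authors' algorithm; the loop-
# elimination and pathway-ranking steps they describe are omitted.
from collections import defaultdict

def molecules(n_atoms, bonds):
    """Connected components of the bond graph, as frozensets of atom ids."""
    adj = defaultdict(set)
    for i, j in bonds:
        adj[i].add(j); adj[j].add(i)
    seen, comps = set(), []
    for a in range(n_atoms):
        if a in seen:
            continue
        stack, comp = [a], set()
        while stack:                       # depth-first search
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u] - comp)
        seen |= comp
        comps.append(frozenset(comp))
    return set(comps)

def reaction_events(n_atoms, frames):
    """Yield (reactants, products) whenever the molecule set changes."""
    prev = molecules(n_atoms, frames[0])
    for bonds in frames[1:]:
        cur = molecules(n_atoms, bonds)
        if cur != prev:
            yield prev - cur, cur - prev   # disappeared vs. newly formed
        prev = cur

# usage: frames = [[(0, 1), (1, 2)], [(0, 1)], ...]  # bond lists per frame
# for reactants, products in reaction_events(3, frames): ...
```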
Monday, March 4, 2019 2:54PM - 3:06PM
C22.00003: Molecular Dynamics Simulations of an Entire HIV Virion Juan Perilla, Tyler Reddy
The HIV viral particle contains all the components necessary to infect a human cell. The so-called virion is made of glycoproteins, lipids, the Gag polyprotein, viral RNA, and other essential proteins. After budding, several biological processes occur inside the virion, including a major re-arrangement of the virion's cargo commonly referred to as maturation. Here, we present the steps performed toward the construction of an atomistic model of a mature and an immature virion. Our model includes all major components: glycosylated proteins, twenty-four lipid types, and the capsid protein. This constitutes one of the major efforts to construct a realistic HIV virion at atomic resolution. In addition, we discuss the techniques developed to prepare the system and the steps required to simulate and analyze our atomistic HIV virion model, which contains over 800 million atoms.
Monday, March 4, 2019 3:06PM - 3:18PM
C22.00004: Co-design in molecular dynamics for exascale Sam Reeve, James Belak
The continuing push towards exascale computing capabilities, and the accompanying shift toward heterogeneous hardware, has highlighted the need to rethink even well-established computational methods. We focus here on molecular dynamics, assessing common algorithmic choices (e.g., the ordering of computation and the layout of data in memory) and discussing new choices for novel machines. This talk will primarily explore how changes in data structure, particularly the use of a hybrid array-of-structs-of-arrays (AoSoA), i) both enable and require changes to compute kernels and ii) map well to multiple hardware architectures (e.g., both CPU and GPU). This exploration is enabled by the Cabana library for particle-based simulation methods from the Co-design center for Particle Applications (CoPA).
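As a rough illustration of the AoSoA layout (Cabana itself is a C++ library; this NumPy analogue, with an assumed tile width, is only a sketch): particles are grouped into fixed-width tiles, and within each tile every field is stored contiguously, so inner loops vectorize like structure-of-arrays while a whole tile stays cache-local like array-of-structures.

```python
# Minimal NumPy sketch of an array-of-structs-of-arrays (AoSoA) particle
# container. VECTOR_LEN is an assumed SIMD-friendly tile width.
import numpy as np

VECTOR_LEN = 16
tile = np.dtype({"names": ["x", "v", "id"],
                 "formats": [(np.float64, (VECTOR_LEN, 3)),   # positions
                             (np.float64, (VECTOR_LEN, 3)),   # velocities
                             (np.int64, (VECTOR_LEN,))]})     # particle ids

n_particles = 1000
n_tiles = -(-n_particles // VECTOR_LEN)       # ceiling division
aosoa = np.zeros(n_tiles, dtype=tile)

# Kernel pattern: outer loop over tiles, vectorizable inner op per tile.
dt = 1e-3
for t in aosoa:
    t["x"] += dt * t["v"]     # each field is a contiguous (VECTOR_LEN, 3) slab

# SoA-style whole-field access still works: aosoa["x"] has shape
# (n_tiles, VECTOR_LEN, 3), mapping naturally to both CPU SIMD and GPU blocks.
```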
Monday, March 4, 2019 3:18PM - 3:30PM
C22.00005: Extending the accuracy, size, and duration of atomistic simulations on exascale hardware Danny Perez, Arthur F. Voter, Anders Niklasson, Christian Negre, Marc Cawkwell, Blas Pedro Uberuaga, Steven James Plimpton, Aidan Thompson, Mitchell A Wood, Mary Alice Cusentino, Brian Wirth, Li Yang
Because of its unparalleled predictive power, molecular dynamics (MD) has established itself as a workhorse of computational materials science. However, the limited strong scalability of conventional MD, combined with the exponential increase in available parallelism, currently leaves wide swaths of the theoretically accessible simulation space out of reach in practice: added parallelism allows for the simulation of larger systems, but not of longer times. Fulfilling the promise of the exascale era will therefore require exposing new levels of parallelism in order to make the whole accuracy/size/time simulation space accessible. The EXAALT project aims to address this challenge for systems that evolve through sequences of rare, thermally activated events. By combining conventional domain decomposition with replication, speculation, and localization approaches, we show that novel computational techniques deployed at the exascale have the potential to dramatically extend the simulation space accessible to MD over the next decade.
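Replication is the easiest of these to sketch: in parallel-replica dynamics (one of the accelerated-MD methods in the EXAALT lineage), M statistically independent replicas explore the same state, and the first escape observed on any replica advances the simulation clock by roughly the sum of all replica times. The toy model below uses made-up rates and a first-order escape probability; it only illustrates the time accounting.

```python
# Toy illustration of the parallel-replica time boost for a rare,
# thermally activated event. Rates and parameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)
k, M = 1e-3, 64            # escape rate per unit MD time; number of replicas

def parrep_escape(k, M, dt=1.0):
    """Advance M replicas in lockstep until any one escapes."""
    t_wall = 0.0
    while True:
        t_wall += dt
        # each replica independently escapes this segment with prob ~ k*dt
        if rng.random(M).min() < k * dt:
            # for Markovian escapes, accumulated simulated time ~ M * t_wall
            return t_wall, M * t_wall

wall, simulated = parrep_escape(k, M)
print(f"wall-clock MD time {wall:.0f}, simulated time ~{simulated:.0f}")
```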
Monday, March 4, 2019 3:30PM - 3:42PM
C22.00006: Large-Scale Simulations of Protein Self-Assembly Jens Glaser, Sharon Glotzer
Despite steady advances in computing methodology and in the accuracy of all-atom force fields, large-scale simulations of biological self-assembly processes still defy the current capabilities of computer simulation. With simplified models, however, it is sometimes possible to extract the important physics on the relevant time and length scales. Here we present results of our efforts to simulate the nucleation of protein crystals on the Titan supercomputer, employing large-scale simulations of rigid protein models that form the experimentally observed crystal structure. It has been hypothesized that biological crystallization occurs non-classically, via a liquid-droplet intermediate, to overcome the large barriers to nucleation via critical fluctuations. Using a model with an artificially tunable specificity, we test this hypothesis. We outline how more powerful simulations of biological self-assembly can be achieved on upcoming pre-exascale architectures using HOOMD-blue with support for NVLink node-local communication.
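To give a flavor of what "tunable specificity" can mean in a simplified protein model (this is a generic patchy-particle sketch, not the authors' model), one can modulate an isotropic short-range attraction by patch alignment, with an exponent that sharpens the angular selectivity:

```python
# Generic patchy-particle pair energy with a tunable specificity exponent.
# Not the authors' model; well depth, range, and widths are assumptions.
import numpy as np

eps, sigma = 1.0, 1.0            # well depth and particle size (assumed)

def patchy_energy(r_ij, n_i, n_j, specificity=10.0):
    """r_ij: vector from i to j; n_i, n_j: unit patch directions."""
    r = np.linalg.norm(r_ij)
    if r > 1.5 * sigma:
        return 0.0
    iso = -eps * np.exp(-((r - sigma) / (0.2 * sigma)) ** 2)  # radial well
    # alignment factors in [0, 1]; larger `specificity` -> narrower patches
    a_i = max(0.0,  np.dot(n_i, r_ij) / r)
    a_j = max(0.0, -np.dot(n_j, r_ij) / r)
    return iso * (a_i * a_j) ** specificity

# two particles at contact with patches facing each other bind at full depth:
print(patchy_energy(np.array([1.0, 0, 0]), np.array([1.0, 0, 0]),
                    np.array([-1.0, 0, 0])))
```

Dialing the exponent down makes binding nearly isotropic (liquid-like condensation); dialing it up enforces crystal-like directional contacts, which is the kind of knob needed to probe the two-step nucleation hypothesis.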
Monday, March 4, 2019 3:42PM - 4:18PM
C22.00007: First-principles simulations of electronic excitations and real-time dynamics on high-performance supercomputers Invited Speaker: Andre Schleife
Excited electronic states and their ultrafast dynamics are foundational to how we interact with or probe many materials. Thanks to advanced experimentation, electronic excitations are accessible with high accuracy and time resolution; however, solid theoretical understanding is crucial for a detailed interpretation of experimental results. To this end, first-principles electronic-structure methods, such as many-body perturbation theory and time-dependent density functional theory, are powerful tools that provide highly accurate insight.
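As a heavily simplified illustration of the real-time dynamics such methods propagate, the sketch below advances a toy wavefunction with a Crank-Nicolson step, which is unitary and unconditionally stable. The Hamiltonian here is a fixed tight-binding matrix purely for illustration; in a real-time TDDFT code it would be rebuilt from the time-dependent density at every step.

```python
# Toy real-time propagation with a Crank-Nicolson step:
# psi(t+dt) = (1 + i*H*dt/2)^(-1) (1 - i*H*dt/2) psi(t)
import numpy as np

def crank_nicolson_step(psi, H, dt):
    n = H.shape[0]
    A = np.eye(n) + 0.5j * dt * H
    B = np.eye(n) - 0.5j * dt * H
    return np.linalg.solve(A, B @ psi)

# toy 1D tight-binding chain with a localized initial state
n = 50
H = -np.eye(n, k=1) - np.eye(n, k=-1)
psi = np.zeros(n, complex)
psi[n // 2] = 1.0
for _ in range(100):
    psi = crank_nicolson_step(psi, H, dt=0.05)
print(np.vdot(psi, psi).real)   # norm stays 1: the propagation is unitary
```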
Monday, March 4, 2019 4:18PM - 4:30PM
C22.00008: Massively-parallel time-dependent density functional theory calculations for optical near-field excitations in silicon Masashi Noda, Kenji Iida, Maiku Yamaguchi, Kazuya Ishimura, Takashi Yatsui, Katsuyuki Nobusada, Kazuhiro Yabana
Monday, March 4, 2019 4:30PM - 4:42PM
C22.00009: Combined Next-Generation Neutron Vibrational Spectroscopy and High-Accuracy Massively Parallel DFT Calculations Benchmark Electronic Descriptions of Complex Organic Molecular Systems Anup Pandey, Ada Sedova, Luke Daemen, Yongqiang Cheng, Anibal J. Ramirez-Cuesta
Vibrations in crystals govern their fundamental properties. In organic molecular crystals, low-frequency vibrations depend on intermolecular interactions that are difficult to describe accurately with density functional theory (DFT) and must be carefully benchmarked. Inelastic neutron scattering (INS), complemented by accurate theoretical studies, can provide vital information on vibrations and molecular forces. Next-generation INS instruments measure the THz and far-IR regions with excellent resolution; the VISION spectrometer is unique in providing a high signal that allows for high-quality low-frequency data. Impressive agreement with experimental spectra is obtained using DFT calculations; however, discrepancies still exist. Frequency errors lead to incorrect Helmholtz free-energy estimates used in predictions of the relative stability of crystals, and incorrect intensities indicate poorly described intermolecular forces. By exploiting the parallel nature of finite-displacement methods, we can calculate the spectra of complex pharmaceutical and bio-organic solids using very large basis sets and a range of DFT functionals and dispersion descriptions in a few hours on the Titan supercomputer. Such benchmarks can transform theoretical studies of organic materials.
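The parallelism being exploited is easy to see: in the finite-displacement method, each displaced-atom force evaluation is independent, so the columns of the force-constant matrix can be farmed out across nodes. A minimal sketch, with a stand-in for the expensive DFT force call:

```python
# Finite-displacement force constants by central differences. In practice
# `forces` is a DFT call and the loop over displacements is the independent,
# massively parallel step; here a toy two-atom spring stands in for it.
import numpy as np

def force_constants(positions, masses, forces, h=1e-3):
    """Mass-weighted dynamical matrix; positions (N,3), forces: (N,3)->(N,3)."""
    n = positions.size
    H = np.zeros((n, n))
    flat = positions.ravel()
    for a in range(n):                          # independent -> parallel over a
        for s, sign in ((h, +1.0), (-h, -1.0)):
            x = flat.copy()
            x[a] += s
            # H[:, a] = -(F(x + h e_a) - F(x - h e_a)) / (2h)
            H[:, a] -= sign * forces(x.reshape(-1, 3)).ravel() / (2 * h)
    M = np.repeat(masses, 3)
    return H / np.sqrt(np.outer(M, M))

# toy check: two unit-mass atoms coupled by a unit spring
def toy_forces(p):
    d = p[1] - p[0]
    return np.array([d, -d])

D = force_constants(np.zeros((2, 3)), np.array([1.0, 1.0]), toy_forces)
print(np.sqrt(np.abs(np.linalg.eigvalsh(D))))   # one mode at sqrt(2k/m)
```

The mode frequencies (and, from the eigenvectors, INS intensities) follow from diagonalizing this matrix.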
Monday, March 4, 2019 4:42PM - 4:54PM
C22.00010: ExaAM: Additive manufacturing process modeling at the fidelity of the microstructure James Belak, John Turner
In FY17, the US DOE Exascale Computing Project (ECP) initiated projects to design and develop simulation codes that use exascale computing, including the Exascale Additive Manufacturing Project (ExaAM), a partnership between LLNL, LANL, and ORNL. Exascale computing will enable AM process modeling at the fidelity of the microstructure. Here we discuss what this means, in particular the tight coupling of process-structure-property calculations. Macroscopic continuum codes are used to simulate the metal melt and re-freeze, within which mesoscopic phase-field (PF) and cellular automata (CA) codes are used to simulate the development of microstructure. This microstructure is then used by crystal-plasticity codes to calculate local properties. Here we focus on in situ coupling, so that the calculated microstructure is relevant to the complex thermo-mechanical conditions of AM processing. Examples will be given for the selective laser melting (SLM) process of metal additive manufacturing, along with comparisons to experimental observations.
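The shape of that coupling can be sketched as a control-flow skeleton. All function names and data below are hypothetical stand-ins (ExaAM couples real continuum, PF/CA, and crystal-plasticity codes); the stubs exist only to make the flow runnable.

```python
# Hypothetical orchestration sketch of in situ process-structure-property
# coupling. Every function and number here is an illustrative stub.
def continuum_melt_pool_solve(scan_path):
    """Stub: per-region thermal histories from a macroscopic melt/re-freeze solve."""
    return {region: [1900.0 - 10.0 * step for step in range(100)]
            for region in scan_path}

def mesoscale_solidification(temperature_history):
    """Stub: PF/CA-style grain structure grown under the local cooling history."""
    cooling = temperature_history[0] - temperature_history[-1]
    return {"mean_grain_size_um": 1e4 / cooling}

def crystal_plasticity(grains):
    """Stub: Hall-Petch-like local yield-strength estimate from grain size."""
    return 100.0 + 300.0 / grains["mean_grain_size_um"] ** 0.5

# In situ coupling: each stage consumes the previous stage's computed
# (not assumed) output, region by region along the scan path.
properties = {}
for region, T_of_t in continuum_melt_pool_solve(["r0", "r1"]).items():
    grains = mesoscale_solidification(T_of_t)
    properties[region] = crystal_plasticity(grains)
print(properties)
```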
Monday, March 4, 2019 4:54PM - 5:06PM
C22.00011: Cross-scale atomistic simulations of crystal plasticity Vasily Bulatov
Predicting crystal plasticity directly from atomic motion has been regarded as unthinkable, given the severe limits on the time and length scales accessible to direct MD simulation. We will discuss our recent direct MD simulations of the compressive straining of single crystals of Ta and Al. One of these, the Livermore BigBig (LBB) simulation, is by far the largest MD simulation ever performed: it generated a fully dynamic trajectory of over 2.1 billion atoms spanning 5 microseconds of simulated time. LBB produced nearly 80 exabytes of recordable trajectory data, only a tiny fraction of which was saved to disk, in a highly compressed and post-processed form available for further analysis. As opposed to multiscale, LBB and other simulations of its magnitude can be regarded as cross-scale: sufficiently large to be statistically representative of the collective action of dislocations that produces macroscopic crystal flow, and yet fully resolved down to every atomic "jiggle and wiggle". We will discuss new insights into the physics of crystal plasticity brought about by our simulations, as well as challenges and strategies for on-the-fly learning from the immense data generated in such simulations.
Monday, March 4, 2019 5:06PM - 5:18PM
C22.00012: Beyond Petascale: HPC and Polymeric Materials Design Monojoy Goswami
Designing advanced polymeric materials for novel applications is challenging because of the experimental complexity associated with each polymer. Success in polymeric materials design consequently relies on trial-and-error experimental techniques that often fail to achieve the predetermined goal. Computational modeling alongside experimental techniques has therefore moved to the forefront of materials-design research. Recent developments in HPC, particularly the move from petascale to exascale supercomputing, are quickly shaping precise design principles for polymeric materials. In this talk, I will use molecular dynamics simulations to highlight the importance of beyond-petascale computing for advanced manufacturing, polyelectrolyte complexation, and phase-change materials, and how it can help achieve future materials-design goals.
Monday, March 4, 2019 5:18PM - 5:30PM
C22.00013: Towards Exascale Quantum Transport Calculations Wenchang Lu, Emil Briggs, Jerry Bernholc
Beyond-Moore's-law devices will approach nanometer dimensions and operate in a regime where quantum and atomic-scale effects become important. In this regime, classical concepts of device design cease to be predictive and need to be augmented by quantum simulations of key parts of devices and circuits. Based on the non-equilibrium Green's function (NEGF) methodology within density functional theory, we have developed a NEGF module within the real-space multigrid (RMG) suite of codes (www.rmgdft.org), with which the quantum transport properties of nanoscale devices containing tens of thousands of atoms can be studied. Multilevel parallelization with MPI, threads, and/or CUDA programming is implemented to enable adaptation to future exascale supercomputers. The module can be used to simulate and design new quantum devices at the atomic level. For a system with ten thousand atoms, the NEGF module's performance scales linearly from 100 to 1,000 nodes on the Summit supercomputer at ORNL. With an efficient implementation of GPU acceleration using the new CUDA managed-memory capability, our benchmark calculations show a 3.5 to 4.5x speedup over CPU-only calculations.
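At the heart of any NEGF transport calculation is the Landauer transmission T(E) = Tr[Γ_L G Γ_R G†], where G is the retarded Green's function of the device region and Γ_L,R are the lead broadening matrices. The sketch below evaluates this for a 1D tight-binding chain with analytic lead self-energies; it is a textbook toy, not RMG's real-space implementation, and all parameters are assumptions.

```python
# Toy NEGF transmission for a 1D tight-binding device between two
# semi-infinite 1D leads. Illustrative only; not RMG code.
import numpy as np

t, eta = 1.0, 1e-6               # lead hopping; small retarded broadening

def lead_self_energy(E):
    """Analytic surface self-energy of a semi-infinite 1D chain.
    Retarded branch, valid inside the band |E| < 2t."""
    z = (E + 1j * eta) / (2 * t)
    return t * (z - 1j * np.sqrt(1 - z**2 + 0j))

def transmission(E, H_device):
    n = H_device.shape[0]
    Sig_L = np.zeros((n, n), complex)
    Sig_R = np.zeros((n, n), complex)
    Sig_L[0, 0] = lead_self_energy(E)      # left lead couples to first site
    Sig_R[-1, -1] = lead_self_energy(E)    # right lead couples to last site
    G = np.linalg.inv((E + 1j * eta) * np.eye(n) - H_device - Sig_L - Sig_R)
    Gam_L = 1j * (Sig_L - Sig_L.conj().T)
    Gam_R = 1j * (Sig_R - Sig_R.conj().T)
    return np.trace(Gam_L @ G @ Gam_R @ G.conj().T).real

H = -t * (np.eye(8, k=1) + np.eye(8, k=-1))   # 8-site perfect device region
print(transmission(0.0, H))                    # ~1.0: one open channel
```

The expensive step at scale is forming G for tens of thousands of orbitals, which is where RMG's multilevel MPI/thread/GPU parallelization comes in.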