Bulletin of the American Physical Society
APS March Meeting 2015
Volume 60, Number 1
Monday–Friday, March 2–6, 2015; San Antonio, Texas
Session Y23: Focus Session: Petascale Science and Beyond: Applications and Opportunities in Materials Science and Chemistry III
Sponsoring Units: DCOMP
Chair: Thomas Schulthess, Swiss Federal Institute of Technology (ETH)
Room: 202B
Friday, March 6, 2015 8:00AM - 8:36AM
Y23.00001: Aneesur Rahman Prize Talk: Working at the Speed of Light
Invited Speaker: John Joannopoulos
Photonic crystals are periodic dielectric structures possessing a photonic band gap that forbids propagation of a certain frequency range of light. This gap, and other curious properties of these systems, enable control of light with amazing facility and produce effects that are impossible to achieve with conventional optics. By combining analytical theory with state-of-the-art numerical calculations, examples of novel, and even anomalous, light behavior will be presented.
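As an illustration (not part of the talk), the band-gap condition described above can be shown in one dimension with a transfer-matrix sketch: for a periodic two-layer stack, Bloch's theorem forbids propagation at frequencies where the magnitude of the trace of the unit-cell transfer matrix exceeds 2. The refractive indices and thicknesses below are illustrative values, not parameters from the talk.

```python
import numpy as np

def unit_cell_transfer_matrix(omega, n1, n2, d1, d2):
    """Transfer matrix of one period of a 1D dielectric stack (units with c = 1)."""
    M = np.eye(2, dtype=complex)
    for n, d in ((n1, d1), (n2, d2)):
        k = n * omega  # wavenumber inside the layer
        layer = np.array([[np.cos(k * d), np.sin(k * d) / k],
                          [-k * np.sin(k * d), np.cos(k * d)]])
        M = layer @ M
    return M

def in_band_gap(omega, n1=1.0, n2=3.0, d1=0.75, d2=0.25):
    """Bloch's theorem: propagating modes require |Tr M| <= 2."""
    return abs(np.trace(unit_cell_transfer_matrix(omega, n1, n2, d1, d2)).real) > 2
```

For the quarter-wave defaults above (n1*d1 = n2*d2), the first gap is centered at omega = pi/(n1*d1 + n2*d2), while low frequencies propagate freely.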
Friday, March 6, 2015 8:36AM - 8:48AM
Y23.00002: Scale-Bridging Modeling of Material Dynamics: Petascale Assessments of the Road to Exascale
Timothy Germann
Within the multi-institutional, multi-disciplinary Exascale Co-design Center for Materials in Extreme Environments (ExMatEx), we are engaging domain (computational materials) scientists, applied mathematicians, computer scientists, and hardware architects in order to establish the relationships between algorithms, software stacks, and architectures needed to enable exascale-ready materials science application codes within the next decade. We anticipate that we will be able to exploit hierarchical, heterogeneous architectures to achieve more realistic large-scale simulations with adaptive physics refinement, and are using tractable application scale-bridging proxy application testbeds to assess new approaches and requirements. Our focus has been on scale-bridging strategies that accumulate (or recompute) a distributed response database from fine-scale calculations, in a top-down rather than bottom-up multiscale approach. To evaluate and exercise the task-based programming models, databases, and runtime systems required to perform such many-task computation workflows, we are carrying out petascale demonstrations in 2015, which I will describe in this talk.
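The "accumulate (or recompute) a distributed response database" strategy can be sketched as a cache in front of an expensive fine-scale model: the coarse-scale code queries the database and launches a fine-scale task only on a miss. This is a minimal toy, not ExMatEx code; the `ResponseDatabase` class, the scalar state, and the `math.tanh` stand-in for the fine-scale model are all invented for illustration.

```python
import math

class ResponseDatabase:
    """Toy sketch of a fine-scale response cache for scale-bridging."""

    def __init__(self, fine_scale_model, tol=1e-2):
        self.fine_scale_model = fine_scale_model  # expensive stand-in
        self.tol = tol        # reuse cached entries within this distance
        self.cache = {}       # state -> fine-scale response
        self.fine_calls = 0

    def query(self, state):
        # Reuse the first cached fine-scale result within tolerance.
        for s, r in self.cache.items():
            if abs(s - state) < self.tol:
                return r
        # Cache miss: run the (expensive) fine-scale task and store it.
        self.fine_calls += 1
        r = self.fine_scale_model(state)
        self.cache[state] = r
        return r

db = ResponseDatabase(fine_scale_model=math.tanh, tol=0.05)
responses = [db.query(round(0.02 * i, 3)) for i in range(100)]
```

With tol=0.05 and a 0.02 query spacing, only every third query triggers a fine-scale call; the rest are served from the accumulated database, which is the cost saving the top-down approach relies on.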
Friday, March 6, 2015 8:48AM - 9:00AM
Y23.00003: Large Scale Molecular Dynamics Simulation of Polymeric Materials
Monojoy Goswami, Jan-Michael Carrillo, Rajeev Kumar, Bobby Sumpter
In this talk, I will present a series of large-scale molecular simulations of polymer nanocomposites and block copolymers (BCP). We will discuss three problems that require large-scale computation: (1) hydrated RNA dynamics on a nanodiamond (ND) surface for drug-delivery applications, (2) poly(3-hexylthiophene) (P3HT) and PCBM nanocomposites for applications in organic photovoltaics (OPV), and (3) amphiphilic BCP self-assembly in surfactant solution for membrane separation technology applications. We simulate problem (1) using fully atomistic NAMD simulations and discuss the puzzling discovery of faster RNA dynamics on the ND surface. The LAMMPS MD code is used to simulate problems (2) and (3). Here we explain the importance of nano-domains in P3HT:PCBM nanocomposites for designing OPVs and the criterion for surfactant-mediated self-assembly of amphiphilic BCP in solution.
Friday, March 6, 2015 9:00AM - 9:12AM
Y23.00004: Fast Analysis of Time-Resolved Scattering Data
Alexander Hexemer, Dinesh Kumar, Singanallur Venkatakrishnan, Abhinav Sarje, Simon Patton, Sherry Li, Jack Deslippe, Craig Tull, Eli Dart, Feng Liu, Thomas Russell, Enrique Gomez, Chenhui Zhu, Eric Schaible, Polite Stewart
Organic photovoltaics hold promise to reduce costs and increase efficiency. Most efforts have focused on spin-coating to fabricate high-performance devices, a process that is not amenable to large-scale fabrication. This mismatch in device fabrication processes makes it difficult to translate quantitative results obtained from laboratory-scale devices to commercially prepared large-area devices. Using a mini-slot die coater, designed and built in-house, we address this issue by translating the commercial process to the laboratory setting. Grazing-incidence small-angle x-ray scattering was used to probe the change in morphology during the printing process. HIPGISAXS was used to fit the data in real time by utilizing different ASCR facilities. SPOT orchestrated the workflow for the data: the transfer from the beamline to NERSC, subsequently to the Titan supercomputer for fitting, and back to NERSC.
Friday, March 6, 2015 9:12AM - 9:24AM
Y23.00005: Real-time calculations of dynamical effects in x-ray spectra
J.J. Rehr, J.J. Kas, A.J. Lee
An understanding of dynamical effects and inelastic losses in x-ray spectra due to the sudden creation of a core hole and photoelectron has long been of interest. Here we present a real-time approach for calculations of core-level x-ray absorption and x-ray photoemission spectra that accounts for the dynamic response in terms of a spectral function that includes intrinsic, extrinsic, and interference terms. Our approach is based on a factorization in terms of the core-hole Green's function and a time-correlation function that avoids the need for ultra-short time steps. The approach extends a time-correlation function approach for XAS,\footnote{A. J. Lee, F. D. Vila and J. J. Rehr, Phys. Rev. B {\bf 86}, 115107 (2012)} and a real-time TDDFT approach for XPS.\footnote{J. J. Kas, F. D. Vila, J. J. Rehr, and S. A. Chambers, arXiv:1408.2508} The approach permits a real-space picture of many-body excitations such as satellites and inelastic losses analogous to that for XPS. The method is implemented using an adaptation of the Crank-Nicolson time-evolution algorithm with PAW transition matrix elements. Illustrative examples are presented for a number of systems.
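For readers unfamiliar with the integrator named above: a generic Crank-Nicolson (Cayley-form) time step solves (1 + i*dt*H/2) psi_new = (1 - i*dt*H/2) psi, which is unconditionally stable and exactly unitary for Hermitian H. The sketch below is the textbook scheme applied to a toy tight-binding chain, not the authors' implementation, and uses hbar = 1.

```python
import numpy as np

def crank_nicolson_step(psi, H, dt):
    """One Cayley-form Crank-Nicolson step: solve (1 + i dt H/2) psi' = (1 - i dt H/2) psi."""
    I = np.eye(len(psi))
    A = I + 0.5j * dt * H
    B = I - 0.5j * dt * H
    return np.linalg.solve(A, B @ psi)

# Toy Hermitian Hamiltonian: a 1D tight-binding chain of 64 sites
N = 64
H = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)

psi = np.zeros(N, dtype=complex)
psi[N // 2] = 1.0                      # state localized on the central site
for _ in range(200):
    psi = crank_nicolson_step(psi, H, dt=0.1)
```

Because the Cayley transform is unitary for Hermitian H, the wavefunction norm is preserved to machine precision even over many steps, which is one reason the scheme suits long real-time propagations.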
Friday, March 6, 2015 9:24AM - 9:36AM
Y23.00006: Atomistic Materials Modeling on Petascale Platforms Using SNAP
Aidan Thompson, Laura Swiler, Christian Trott, Stephen Foiles, Garritt Tucker
The growing availability of capacity computing for atomistic materials modeling has encouraged the use of high-accuracy, computationally intensive interatomic potentials such as SNAP. These potentials also happen to scale well on petascale computing platforms. SNAP has a very general form and uses machine-learning techniques to reproduce the energies, forces, and stress tensors of a large set of small configurations of atoms, which are obtained using high-accuracy quantum electronic structure (QM) calculations. The local environment of each atom is characterized by a set of bispectrum components of the local neighbor density projected onto a basis of hyperspherical harmonics in four dimensions. The computational cost per atom is much greater than that of simpler potentials such as Lennard-Jones or EAM, while the communication cost remains modest. We discuss a variety of strategies for implementing SNAP in the LAMMPS molecular dynamics package. We present scaling results obtained running SNAP on three different classes of machine: a conventional Intel Xeon CPU cluster; the Titan GPU-based system; and the combined Sequoia and Vulcan BlueGene/Q systems.
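The machine-learning step described above reduces to linear regression: SNAP writes the energy as a linear function of the bispectrum descriptors, so fitting to reference data is a least-squares problem. The sketch below uses random matrices as stand-ins for descriptors and synthetic "quantum" energies; it illustrates the fitting structure only, not the actual bispectrum computation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in descriptor matrix: one row per training configuration, one column
# per bispectrum component (here random numbers, not real bispectra).
n_configs, n_desc = 200, 10
D = rng.normal(size=(n_configs, n_desc))

# Synthetic reference energies generated from hidden linear coefficients,
# with a little noise playing the role of QM convergence error.
beta_true = rng.normal(size=n_desc)
E_ref = D @ beta_true + 1e-6 * rng.normal(size=n_configs)

# SNAP-style training: the energy model is linear in the descriptors, so the
# fit is ordinary linear least squares.
beta_fit, *_ = np.linalg.lstsq(D, E_ref, rcond=None)
```

In practice the design matrix also stacks rows for forces and stress components (derivatives of the descriptors), but the linear-solve structure is the same, which is what keeps the training step cheap relative to the QM reference calculations.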
Friday, March 6, 2015 9:36AM - 9:48AM
Y23.00007: Molecular dynamics simulation: at a crossroad between molecular biophysics and petascale computing
Xiaolin Cheng
High-performance computing (HPC) has become crucial for most advances made in chemistry and biology today. In particular, biophysical simulation is capable of helping generate critical new insights and drive the direction of experimentation. In this talk, I will discuss our work towards addressing some fundamental membrane biophysical questions using HPC capabilities at Oak Ridge National Laboratory. I will first provide a synopsis of our current progress in developing molecular dynamics (MD) techniques that make efficient use of massively parallel supercomputers. I will then discuss a few examples of large-scale MD simulations of biomembrane vesicles, an effort aimed at shedding light on the lateral organization and cross-layer coupling in biologically relevant membranes. In conclusion, I will discuss a few scientific and technical challenges faced by MD simulation at the exascale.
Friday, March 6, 2015 9:48AM - 10:00AM
Y23.00008: Large-scale atomistic simulations of surface nanostructuring by short pulse laser irradiation
Chengping Wu, Maxim Shugaev, Leonid Zhigilei
The availability of petascale supercomputing resources has expanded the range of research questions that can be addressed in simulations and, in particular, has enabled large-scale atomistic simulations of short pulse laser nanostructuring of metal surfaces. A series of simulations performed for systems consisting of 10$^{\mathrm{8}}$--10$^{\mathrm{9}}$ atoms is used in this study to investigate the mechanisms responsible for the generation of complex multiscale surface morphology and microstructure. At low laser fluence, just below the spallation threshold, the concurrent occurrence of fast laser melting, dynamic relaxation of laser-induced stresses, and rapid cooling and resolidification of the transiently melted surface region is found to produce a sub-surface porous region covered by a nanocrystalline layer. At higher laser fluences, in the spallation and phase explosion regimes, the material disintegration and ejection driven by the relaxation of laser-induced stresses and/or the explosive release of vapor leads to the formation of complex surface morphology that can only be studied in billion-atom simulations. The first results from a billion-atom simulation of surface nanostructuring performed on Titan will be discussed in the presentation.
Friday, March 6, 2015 10:00AM - 10:12AM
Y23.00009: Retention of hydrogen and helium in monocrystal of tungsten
Jack Wells, Predrag Krstic
Beginning with either a perfect or a damaged monocrystal of tungsten, we bombard the surface with a mix of isotopes of H and He at impact energies in the range 1-100 eV in order to predict the retention rate of the impinging atoms as well as their distribution inside the material, in particular inside the vacancies. Our calculation is based on molecular dynamics simulations using high-performance computing and bond-order potentials. The goal is to distinguish between the following alternative outcomes: (1) the retention rate is proportional to the number of vacancies, consistent with recent experiments on the retention of H in damaged W; (2) vacancies are filled by aggregates of H or He, leading to unstable surfaces, e.g., bubbling and blistering of the surface; (3) some fraction of the hydrogen and helium fills the interatomic space in the W crystal lattice, creating a ``protective layer'' or bubbles and blisters close to the surface, even in the absence of significant tungsten lattice defects; and (4) the impact direction and the crystal surface cut significantly influence the ratio of effects (1)-(3).
Friday, March 6, 2015 10:12AM - 10:24AM
Y23.00010: Teraflops and beyond: GPU-based MD exploration of emergent phenomena
Dennis Rapaport
Molecular dynamics (MD) simulation of emergent phenomena can be computationally demanding because of the broad range of length and time scales that must be covered, ranging from the individual particles out to where the collective behavior is expressed; the fact that simulations of this type are often subject to unpredictable outcomes is a further complication. Examples of MD studies of emergent behavior include discrete-particle modeling of hydrodynamic instabilities (e.g., thermal convection cells), complex segregation processes in granular systems modeled with inelastic particles (e.g., in a rotating drum), and supramolecular self-assembly (e.g., the growth of icosahedral shells corresponding to viral capsids). The comparatively large and long simulations required for these problems benefit substantially from massively parallel GPU-based implementation, with even a single GPU typically providing an order-of-magnitude speedup over a conventional CPU. A sampling of newly obtained exploratory results for these and similar problems [arXiv:1409.5958] will be described, along with the methodology; the results offer a tantalizing hint of the kinds of phenomena that can be explored, and what might be achieved given the appropriate resources.
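The inner loop that GPU MD codes parallelize over particles is a simple symplectic integrator. The sketch below shows plain velocity Verlet on a single harmonic test particle; it is a generic textbook scheme for illustration, not Rapaport's GPU implementation.

```python
import numpy as np

def velocity_verlet(pos, vel, force, dt, steps):
    """Minimal velocity-Verlet integrator: half-kick, drift, half-kick.
    In a GPU MD code, each array operation is parallelized over particles."""
    f = force(pos)
    for _ in range(steps):
        vel += 0.5 * dt * f    # first half-step velocity update
        pos += dt * vel        # full position update
        f = force(pos)         # recompute forces at new positions
        vel += 0.5 * dt * f    # second half-step velocity update
    return pos, vel

# Harmonic test particle with force = -x, so the exact period is 2*pi.
pos, vel = np.array([1.0]), np.array([0.0])
pos, vel = velocity_verlet(pos, vel, lambda x: -x, dt=0.001, steps=6283)
```

After roughly one oscillation period (6283 steps of dt = 0.001), the particle returns very close to its starting point, reflecting the long-time energy stability that makes this integrator standard in MD.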