Bulletin of the American Physical Society
APS March Meeting 2014
Volume 59, Number 1
Monday–Friday, March 3–7, 2014; Denver, Colorado
Session G27: Focus Session: Petascale Science and Beyond: Applications and Opportunities for Materials Science II
Sponsoring Units: DCOMP
Chair: Thomas Schulthess, ETH Zurich / Swiss National Supercomputing Center (CSCS)
Room: 501
Tuesday, March 4, 2014, 11:15 AM - 11:27 AM
G27.00001: Multi-Million-Atom Molecular Dynamics Simulations of Polymer Nanoparticle Composites Using an Explicit Solvent Treatment
Sanket Deshmukh, Ganesh Kamath, Derrick Mancini, Subramanian Sankaranarayanan
Poly(N-isopropylacrylamide) (PNIPAM) is a thermosensitive polymer well known for its lower critical solution temperature (LCST) around 305 K. Below the LCST, PNIPAM is soluble in water; above it, the polymer chains collapse into a globule state. Our earlier simulations of single polymer chains in the presence of explicit water molecules ($\sim$50,000 atoms) predicted an LCST for PNIPAM close to the experimental value of $\sim$305 K and demonstrated the importance of an explicit water model for studying the coil-to-globule transition in thermosensitive polymers. In the current study, we carry out MD simulations of composites of PNIPAM-grafted inorganic nanoparticles in aqueous solution with an explicit solvent treatment, examining the effect of grafting density on the coil-to-globule transition of the PNIPAM brushes. We graft PNIPAM chains of 60 monomer units onto a gold nanoparticle at varying grafting densities; the resulting systems contain $\sim$3 million atoms. All simulations were carried out below (275 K) and above (325 K) the LCST of PNIPAM. The trajectories are analyzed for structural and dynamical properties; in particular, we examine the morphology of the uncollapsed and collapsed structures and relate it to observations from scattering measurements. Future work will extend this approach to the dynamics of agglomeration of such brush structures into self-assembled nanocomposites.
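The collapse itself is usually quantified through the chains' radius of gyration, which drops sharply across the LCST. A minimal analysis sketch in Python/NumPy (random stand-in trajectory; not the authors' code):

```python
import numpy as np

def radius_of_gyration(coords, masses):
    """Mass-weighted radius of gyration of one (unwrapped) chain."""
    com = np.average(coords, axis=0, weights=masses)
    sq = np.sum((coords - com) ** 2, axis=1)
    return np.sqrt(np.average(sq, weights=masses))

# Stand-in trajectory: 100 frames of a 60-site chain; real input would
# come from the MD dump. Compare <Rg> at 275 K and 325 K: a sharp drop
# between the two signals the coil-to-globule collapse.
rng = np.random.default_rng(0)
frames = rng.normal(scale=2.0, size=(100, 60, 3))
masses = np.ones(60)
rg = np.array([radius_of_gyration(f, masses) for f in frames])
print(f"<Rg> = {rg.mean():.2f} +/- {rg.std():.2f}")
```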
Tuesday, March 4, 2014, 11:27 AM - 11:39 AM
G27.00002: Experiment-scale molecular simulation study of liquid crystal thin films
Trung Dac Nguyen, Jan-Michael Y. Carrillo, Michael A. Matheson, W. Michael Brown
Supercomputers have now reached a performance level adequate for studying thin films in molecular detail at the experimentally relevant scales. By exploiting the power of the GPU accelerators on Titan, we have performed simulations of characteristic liquid crystal films that show remarkable qualitative agreement with experimental images. We demonstrate that key features of the spinodal instability can only be observed at sufficiently large system sizes, which were not accessible to previous simulation studies. Our study emphasizes the capability and significance of petascale simulations in providing molecular-level insight into thin-film systems and other interfacial phenomena.
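Comparison with experimental images of spinodal patterns typically goes through the structure factor of the film height field, whose peak sets the characteristic spinodal wavelength; resolving that peak is precisely what requires large lateral system sizes. A hypothetical sketch (not the authors' analysis pipeline):

```python
import numpy as np

def spinodal_wavelength(height, box_length):
    """Wavelength of the dominant mode of a film height field h(x, y),
    from the peak of its 2-D structure factor S(k)."""
    n = height.shape[0]
    s = np.abs(np.fft.fft2(height - height.mean())) ** 2
    k = 2 * np.pi * np.fft.fftfreq(n, d=box_length / n)
    kmag = np.hypot(*np.meshgrid(k, k, indexing="ij"))
    s[0, 0] = 0.0                       # discard the mean (k = 0) mode
    return 2 * np.pi / kmag.flat[np.argmax(s)]

# Toy field: a ripple of wavelength ~25 along x plus weak noise.
x = np.arange(128)
h = np.tile(np.sin(2 * np.pi * x / 25.0), (128, 1))
h += np.random.default_rng(1).normal(scale=0.1, size=h.shape)
print(f"recovered wavelength: {spinodal_wavelength(h, 128.0):.1f}")
```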
Tuesday, March 4, 2014, 11:39 AM - 11:51 AM
G27.00003: Petascale Molecular Dynamics Simulations of Thermal Annealing of P3HT:PCBM Active Layers in Bulk Heterojunctions
Jan-Michael Carrillo, Rajeev Kumar, Monojoy Goswami, S. Michael Kilbey II, Bobby Sumpter, W. Michael Brown
Using petascale coarse-grained molecular dynamics simulations, we have investigated the thermal annealing of poly(3-hexylthiophene) (P3HT) and phenyl-C61-butyric acid methyl ester (PCBM) blends in the presence of a silicon substrate. The simulations were run on the Titan supercomputer using 21\% of the capacity of the machine. This is in contrast to recent studies, which were unable to obtain results representative of the entire thermal annealing process because of limited simulation times and sizes. The simulations agree with neutron reflectivity (NR) and near-edge X-ray absorption fine structure (NEXAFS) experiments and reveal a vertical composition profile of the bulk heterojunction normal to the substrate, with enrichment of PCBM near the substrate. We demonstrate that the addition of short P3HT chains as a third component of the blend can be used to alter the morphology of the active layer.
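The NR-style depth profile has a direct simulation analogue: bin the beads in slabs along the substrate normal and track the PCBM fraction per slab. A minimal sketch (hypothetical arrays; not the authors' code):

```python
import numpy as np

def vertical_profile(z, species, z_box, nbins=50):
    """PCBM number fraction in slabs normal to the substrate (z = 0).
    z: (N,) bead heights; species: (N,) labels ('P3HT' or 'PCBM').
    Enrichment in the lowest bins corresponds to the substrate
    segregation inferred from NR/NEXAFS."""
    edges = np.linspace(0.0, z_box, nbins + 1)
    total, _ = np.histogram(z, bins=edges)
    pcbm, _ = np.histogram(z[species == "PCBM"], bins=edges)
    frac = np.where(total > 0, pcbm / np.maximum(total, 1), np.nan)
    return 0.5 * (edges[:-1] + edges[1:]), frac

# Hypothetical usage with a random stand-in configuration:
rng = np.random.default_rng(2)
z = rng.uniform(0.0, 40.0, size=10_000)
species = rng.choice(np.array(["P3HT", "PCBM"]), size=10_000)
centers, frac = vertical_profile(z, species, z_box=40.0)
```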
Tuesday, March 4, 2014, 11:51 AM - 12:27 PM
G27.00004: Petascale resources and CP2K: enabling sampling, large-scale models or correlation beyond DFT
Invited Speaker: Joost VandeVondele
Already with modest computer resources, GGA DFT simulations of models containing a few hundred atoms can contribute greatly to chemistry, physics, and materials science. With the advent of petascale resources, new length, time, and accuracy scales can be explored. Recently, we have made progress in all three directions:
1. A novel Tree Monte Carlo (TMC) algorithm introduces a further level of parallelism and allows long Markov chains to be generated (a minimal sketch of the serial step it parallelizes follows the references below). Sampling hundreds of thousands of configurations with DFT, we have studied the dielectric constant and the order-disorder transition in water ice Ih/XI. [1]
2. The removal of all nonlinear-scaling steps from GGA DFT calculations and the development of a massively parallel, GPU-accelerated sparse matrix library make structural relaxation and MD possible for systems containing tens of thousands of atoms. [2]
3. A well-parallelized implementation of a novel algorithm to compute four-center integrals over molecular states (RI-GPW) allows many-body perturbation theory (MP2, RPA) calculations on a few hundred atoms. Sampling liquid water at the MP2 level yields a very satisfactory model of the liquid, without empirical parameters. [3,4]
References:
[1] Mandes Schönherr, Ben Slater, Jürg Hutter, and Joost VandeVondele, submitted.
[2] Joost VandeVondele, Urban Borstnik, and Jürg Hutter, J. Chem. Theory Comput. 8, 3565 (2012).
[3] Mauro Del Ben, Mandes Schönherr, Jürg Hutter, and Joost VandeVondele, J. Phys. Chem. Lett. 4, 3753 (2013).
[4] Mauro Del Ben, Jürg Hutter, and Joost VandeVondele, J. Chem. Theory Comput. 9, 2654 (2013).
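To make the gain of item 1 concrete, here is the serial Metropolis loop whose accept/reject dependency chain TMC unrolls into a speculatively evaluated tree of future states. This is a toy sketch with a quadratic stand-in energy, not the CP2K/TMC implementation:

```python
import numpy as np

rng = np.random.default_rng(4)

def energy(x):
    # Stand-in for the expensive DFT energy call that dominates cost.
    return 0.5 * float(np.sum(x ** 2))

def metropolis(x0, beta, nsteps, step=0.3):
    """Plain serial Metropolis chain. Step k+1 cannot start before
    step k decides, so the expensive energy calls serialize; TMC
    pre-generates the binary tree of possible future states (accept
    or reject at each node) and evaluates their energies concurrently."""
    x, e = np.array(x0, dtype=float), energy(x0)
    traj = []
    for _ in range(nsteps):
        trial = x + rng.normal(scale=step, size=x.shape)
        e_t = energy(trial)
        if rng.random() < np.exp(min(0.0, -beta * (e_t - e))):
            x, e = trial, e_t
        traj.append(x.copy())
    return np.array(traj)

traj = metropolis(np.zeros(3), beta=1.0, nsteps=2000)
print("<x^2> per dof:", float(np.mean(traj ** 2)))   # ~1/beta expected
```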
Tuesday, March 4, 2014, 12:27 PM - 12:39 PM
G27.00005: Optimizing GW for Petascale HPC and Beyond
Jack Deslippe, Andrew Canning, Yousef Saad, James Chelikowsky, Steven Louie
The traditional GW Bethe-Salpeter equation (GW-BSE) approach has, in practice, been prohibitively expensive for systems with more than 50 atoms. We show that, through a combination of methodological and algorithmic improvements, the standard GW-BSE approach can be applied to systems with hundreds of atoms. We will discuss the massively parallel GW-BSE implementation in the BerkeleyGW package (built on top of common DFT packages), including the importance of hybrid MPI-OpenMP parallelism, parallel I/O, and library performance. We will also discuss optimization strategies for, and performance on, many-core architectures.
Tuesday, March 4, 2014, 12:39 PM - 12:51 PM
G27.00006: Computing quasiparticle energies and band offsets for large systems
Marco Govoni, Giulia Galli
We present a massively parallel implementation [1] of a recently proposed method [2] for calculating the quasiparticle energies of molecules and solids that does not require the explicit evaluation of single-particle virtual states. Explicit inversion and storage of large dielectric matrices are also avoided, and the frequency integration is carried out explicitly, without resorting to plasmon-pole models. We present applications to complex semiconducting interfaces, including ordered and disordered systems with more than one thousand electrons.
[1] M. Govoni and G. Galli, in preparation.
[2] H.-V. Nguyen et al., Phys. Rev. B 85, 081101(R) (2012); T.A. Pham et al., Phys. Rev. B 87, 155148 (2013).
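For context, the step such methods eliminate is the textbook sum over empty states in the independent-particle polarizability. A toy sketch with a random model Hamiltonian (illustrative only; not the implementation of Ref. [1]):

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_occ = 40, 8                       # toy basis size, occupied states

h = rng.normal(size=(n, n))
h = 0.5 * (h + h.T)                    # random Hermitian model Hamiltonian
eps, c = np.linalg.eigh(h)

# Static sum-over-states polarizability: the explicit loop over empty
# ("virtual") states below is exactly what the method avoids, since
# the number of virtuals grows with basis size, not with the physics.
chi0 = np.zeros((n, n))
for v in range(n_occ):                 # occupied states
    for u in range(n_occ, n):          # empty states
        rho = c[:, v] * c[:, u]        # transition density (real orbitals)
        chi0 += 4.0 * np.outer(rho, rho) / (eps[v] - eps[u])

print(f"{n_occ * (n - n_occ)} virtual-state pairs entered the sum")
```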
Tuesday, March 4, 2014, 12:51 PM - 1:03 PM
G27.00007: An accurate and scalable O(N) algorithm for First-Principles Molecular Dynamics computations on petascale computers and beyond
Daniel Osei-Kuffuor, Jean-Luc Fattebert
We present a truly scalable First-Principles Molecular Dynamics algorithm with O(N) complexity and fully controllable accuracy, capable of simulating system sizes previously out of reach at this level of accuracy. By avoiding global communication, we have extended W. Kohn's condensed-matter "nearsightedness" principle to a practical computational scheme capable of extreme scalability. Accuracy is controlled by the mesh spacing of the finite-difference discretization, the size of the localization regions in which the electronic wavefunctions are confined, and a cutoff beyond which components of the overlap matrix can be omitted when computing selected elements of its inverse. We demonstrate the algorithm's excellent parallel scaling for up to 100,000 atoms on 100,000 processors, with a wall-clock time on the order of one minute per molecular dynamics time step.
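The "cutoff plus selected inverse" idea fits in a few lines: approximate one element of $S^{-1}$ using only the orbitals local to it, instead of inverting (and communicating) the global overlap matrix. A toy 1-D sketch with a hypothetical exponentially decaying overlap model (not the production algorithm):

```python
import numpy as np

def selected_inverse_element(s, i, j, cutoff, centers):
    """Approximate (S^{-1})_{ij} from the orbitals within `cutoff` of
    orbital i -- the locality assumption that removes global
    communication. centers: positions of the localized orbitals."""
    near = np.where(np.abs(centers - centers[i]) < cutoff)[0]
    if j not in near:
        return 0.0
    inv_block = np.linalg.inv(s[np.ix_(near, near)])
    ii = int(np.where(near == i)[0][0])
    jj = int(np.where(near == j)[0][0])
    return inv_block[ii, jj]

# Toy 1-D chain of orbitals with exponentially decaying overlaps.
n = 200
pos = np.arange(n, dtype=float)
s = np.exp(-np.abs(pos[:, None] - pos[None, :])) + np.eye(n)
exact = np.linalg.inv(s)[100, 101]
local = selected_inverse_element(s, 100, 101, cutoff=12.0, centers=pos)
print(f"exact {exact:.6f} vs local {local:.6f}")
```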
Tuesday, March 4, 2014, 1:03 PM - 1:15 PM
G27.00008: Accelerating Hybrid Density Functional Theory Molecular Dynamics
William Dawson, Francois Gygi
For many systems, accurate First-Principles Molecular Dynamics (FPMD) simulations require hybrid density functional theory. Molecular dynamics demands short wall-clock times per step and thus highly scalable parallel algorithms. The Qbox [1] code implements the recursive subspace bisection algorithm [2,3], which accelerates hybrid density functional theory calculations by creating a set of localized orbitals that reduces the number of exchange integrals computed. This approach allows for controlled accuracy and requires no a priori assumptions about localization. We discuss heuristic algorithms for improving the scalability and performance of this approach, and demonstrate the improvements in applications to aqueous solutions and water-metal interfaces.
[1] http://eslab.ucdavis.edu/software/qbox
[2] F. Gygi, Phys. Rev. Lett. 102, 166406 (2009).
[3] F. Gygi and I. Duchemin, J. Chem. Theory Comput. 9, 582 (2012).
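The payoff of localization is that exchange integrals between orbital pairs whose supports do not overlap need never be computed. A schematic sketch of that pruning, with boolean subdomain masks standing in for real orbital supports (hypothetical; not Qbox code):

```python
import numpy as np

def overlapping_pairs(supports):
    """Keep only orbital pairs whose spatial supports intersect.
    With delocalized orbitals all n(n+1)/2 pairs survive; bisection-
    localized orbitals prune most of them."""
    n = len(supports)
    return [(i, j) for i in range(n) for j in range(i, n)
            if np.any(supports[i] & supports[j])]

# Toy: 16 orbitals, each confined to 2 of 8 subdomains.
rng = np.random.default_rng(3)
supports = []
for _ in range(16):
    mask = np.zeros(8, dtype=bool)
    mask[rng.choice(8, size=2, replace=False)] = True
    supports.append(mask)
kept = overlapping_pairs(supports)
print(f"{len(kept)} of {16 * 17 // 2} exchange pairs survive")
```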
Tuesday, March 4, 2014, 1:15 PM - 1:27 PM
G27.00009: Phonon Quasi-Particles and Anharmonic Free Energy in Complex Systems
Dong-Bo Zhang, Tao Sun, Renata Wentzcovitch
We use a hybrid strategy to obtain the anharmonic frequency shifts and lifetimes of phonon quasi-particles from first-principles molecular dynamics simulations in modest-size supercells. The approach is effective irrespective of crystal-structure complexity and enables the calculation of full anharmonic phonon dispersions, as long as the phonon quasi-particles are well defined. We validate the approach with calculations on MgSiO$_3$ perovskite, the major Earth-forming mineral phase. First, we reproduce the irregular temperature-induced frequency shifts of well-characterized Raman modes. Second, we combine the phonon gas model (PGM) with the quasi-particle frequencies and reproduce free energies obtained by direct approaches such as thermodynamic integration. Using thoroughly sampled quasi-particle dispersions with the PGM, we then obtain the first-principles anharmonic free energy in the thermodynamic limit ($N \to \infty$).
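One common route to such phonon quasi-particles is to project mass-weighted MD velocities onto the harmonic eigenvectors and read the shifted frequency and linewidth off the mode's power spectrum; a minimal sketch under that assumption (hypothetical arrays; not the authors' code):

```python
import numpy as np

def mode_spectrum(velocities, eigvec, masses, dt):
    """Power spectrum of one phonon mode from an MD trajectory.

    velocities: (nframes, natoms, 3) atomic velocities;
    eigvec:     (natoms, 3) harmonic polarization vector of the mode;
    dt:         MD time step.
    The peak position gives the anharmonically shifted frequency and
    the peak width the inverse quasi-particle lifetime -- meaningful
    only while the peak stays well defined (the quasi-particle
    condition noted in the abstract)."""
    w = np.sqrt(np.asarray(masses))[None, :, None]
    v_mode = np.einsum("tai,ai->t", velocities * w, eigvec)
    spec = np.abs(np.fft.rfft(v_mode)) ** 2
    freqs = np.fft.rfftfreq(len(v_mode), d=dt)
    return freqs, spec

# Usage (hypothetical arrays): freqs, spec = mode_spectrum(v, e, m, dt);
# fit a Lorentzian to spec around its peak for the shift and linewidth.
```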
Tuesday, March 4, 2014, 1:27 PM - 1:39 PM
G27.00010: Million atom DFT calculations using coarse graining and petascale computing
Don Nicholson, Kh. Odbadrakh, G.D. Samolyuk, R.E. Stoller, X.G. Zhang, G.M. Stocks
Researchers performing classical molecular dynamics (MD) on defect structures often need millions of atoms in their models. It would be useful to perform density functional calculations on these large configurations in order to obtain electron-based properties such as the local charge and spin and the Hellmann-Feynman forces on the atoms. The great number of atoms usually requires that a subset be "carved" from the configuration and terminated in a less than satisfactory manner, e.g., in free space or with inappropriate periodic boundary conditions. Coarse graining based on the Locally Self-consistent Multiple Scattering (LSMS) method and petascale computing can circumvent this problem by treating the whole system while dividing the atoms into two groups. In the coarse-grained LSMS (CG-LSMS), one group of atoms has its charge and scattering determined prescriptively from neighboring atoms, while the remaining atoms have their charge and scattering determined by DFT as implemented in the LSMS. The method will be demonstrated for a one-million-atom model of a displacement cascade in Fe, in which 24,130 atoms are treated with full DFT and the remaining atoms are treated prescriptively.
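The two-group decomposition is geometrically simple: atoms near the defect receive the full DFT treatment, and the rest are handled prescriptively. A schematic sketch, assuming a radius-based partition around the cascade center (the actual CG-LSMS selection criteria may differ):

```python
import numpy as np

def partition_atoms(positions, center, r_dft):
    """Split a large configuration into a fully-DFT core (e.g. the
    displacement cascade) and a coarse-grained group treated
    prescriptively from its neighbors, as in CG-LSMS."""
    d = np.linalg.norm(positions - center, axis=1)
    return np.where(d <= r_dft)[0], np.where(d > r_dft)[0]

# Stand-in configuration (a real case would hold ~1e6 Fe positions):
rng = np.random.default_rng(7)
pos = rng.uniform(0.0, 100.0, size=(10_000, 3))
dft, cg = partition_atoms(pos, np.array([50.0, 50.0, 50.0]), r_dft=15.0)
print(len(dft), "DFT atoms;", len(cg), "coarse-grained atoms")
```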
Tuesday, March 4, 2014, 1:39 PM - 1:51 PM
G27.00011: Atomic Structure Prediction with Large-Scale High Performance Computing
Cai-Zhuang Wang, Bruce Harmon, Manh Cuong Nguyen, Xin Zhao, Kai-Ming Ho
Many unknown binary or ternary materials for energy applications have very complex crystal structures, containing a large number of atoms in their unit cells, and their compositions may be uncertain. Computational prediction of the atomic structures of such complex materials is highly demanding. Advances in large-scale high-performance computing resources and in algorithms now make efficient crystal structure prediction feasible. We have developed an adaptive genetic algorithm to perform large-scale structure searches on high-performance supercomputers. Examples of successful structure prediction and structure solving for complex materials will be presented, and further applications of the adaptive genetic algorithm to aid materials discovery will be discussed.
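The genetic-algorithm core is compact. A bare-bones sketch with a toy energy in place of DFT relaxations (illustrative only; the adaptive element, refitting an auxiliary classical potential against DFT data between generations, is omitted):

```python
import numpy as np

rng = np.random.default_rng(5)

def toy_energy(x):
    # Stand-in for a DFT (or classical-potential) relaxation energy.
    return float(np.sum((x - 0.25) ** 2))

def genetic_search(pop_size=20, genes=6, generations=50, mut=0.1):
    """Bare-bones genetic algorithm for structure search: rank
    selection, one-point crossover, Gaussian mutation."""
    pop = rng.uniform(0.0, 1.0, size=(pop_size, genes))
    for _ in range(generations):
        fitness = np.array([toy_energy(p) for p in pop])
        parents = pop[np.argsort(fitness)][: pop_size // 2]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = int(rng.integers(1, genes))
            child = np.concatenate([a[:cut], b[cut:]])
            children.append(child + rng.normal(scale=mut, size=genes))
        pop = np.vstack([parents, children])
    return pop[np.argmin([toy_energy(p) for p in pop])]

print("best genome:", genetic_search())
```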
Tuesday, March 4, 2014, 1:51 PM - 2:03 PM
G27.00012: ABSTRACT WITHDRAWN
Tuesday, March 4, 2014, 2:03 PM - 2:15 PM
G27.00013: Dissipative Particle Dynamics Simulations at Extreme Scale: GPU Algorithms, Implementation and Applications
Yu-Hang Tang, George Karniadakis
We present a scalable dissipative particle dynamics simulation code, fully implemented on graphics processing units (GPUs) using a hybrid CUDA/MPI programming model, which achieves a 10-30 times speedup on a single GPU over 16 CPU cores and almost linear weak scaling across a thousand nodes. A unified framework is developed that addresses both efficient neighbor-list generation and particle data locality. Our algorithm generates strictly ordered neighbor lists in parallel; the construction is deterministic and uses neither atomic operations nor sorting. Such a neighbor list leads to optimal data-loading efficiency when combined with a two-level particle reordering scheme. A faster in situ generation scheme for Gaussian random numbers is proposed, using precomputed binary signatures. We also designed custom transcendental functions that are fast and accurate for evaluating the pairwise interactions. Benchmarks demonstrate the speedup of our implementation over the CPU version as well as its strong and weak scaling. A large-scale simulation of spontaneous vesicle formation with 128 million particles illustrates the practicality of the code in real-world applications.
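The cell-binning idea behind such a deterministic neighbor list can be mimicked on the CPU in a few lines: fixed-order cell scans over members stored in index order yield reproducible, deterministically ordered lists with no atomics and no sorting step. A toy Python sketch (stand-in for the CUDA kernels; not the authors' code):

```python
import numpy as np

def deterministic_neighbor_list(pos, box, rc):
    """Neighbor lists via cell binning. Every particle scans the 27
    surrounding cells in a fixed order and cell members are stored in
    index order, so the lists come out in a fixed, reproducible
    (cell-major) order. Toy version: cubic box, >= 3 cells per side."""
    ncell = int(box // rc)
    assert ncell >= 3
    cell = (pos / (box / ncell)).astype(int) % ncell
    members = {}
    for i, c in enumerate(cell):
        members.setdefault(tuple(c), []).append(i)
    nlist = []
    for i, c in enumerate(cell):
        nbrs = []
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    key = tuple((c + (dx, dy, dz)) % ncell)
                    for j in members.get(key, ()):
                        if j == i:
                            continue
                        d = pos[j] - pos[i]
                        d -= box * np.round(d / box)   # minimum image
                        if np.dot(d, d) < rc * rc:
                            nbrs.append(j)
        nlist.append(nbrs)
    return nlist

rng = np.random.default_rng(6)
pts = rng.uniform(0.0, 10.0, size=(500, 3))
nl = deterministic_neighbor_list(pts, box=10.0, rc=1.5)
print("average neighbor count:", sum(map(len, nl)) / len(nl))
```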