Bulletin of the American Physical Society
APS March Meeting 2017
Volume 62, Number 4
Monday–Friday, March 13–17, 2017; New Orleans, Louisiana
Session B7: Computational Physics at the Petascale and Beyond II (Focus Session)

Sponsoring Units: DCOMP, DMP, DCMP, DCP, DBIO
Chair: Nichols Romero, Argonne National Laboratory
Room: 266
Monday, March 13, 2017, 11:15AM – 11:51AM
B7.00001: Adaptive sampling strategies with high-throughput molecular dynamics. Invited Speaker: Cecilia Clementi. Despite recent significant hardware and software developments, the complete thermodynamic and kinetic characterization of large macromolecular complexes by molecular simulations still presents significant challenges. The high dimensionality of these systems and the complexity of the associated potential energy surfaces (creating multiple metastable regions connected by high free-energy barriers) do not usually allow adequate sampling of the relevant regions of configurational space by means of a single, long molecular dynamics (MD) trajectory. Several different approaches have been proposed to tackle this sampling problem. We focus on the development of ensemble simulation strategies, where data from a large number of weakly coupled simulations are integrated to explore the configurational landscape of a complex system more efficiently. Ensemble methods are of increasing interest as the hardware roadmap is now mostly based on increasing core counts rather than clock speeds. The main challenge in the development of an ensemble approach for efficient sampling is in the design of strategies to adaptively distribute the trajectories over the relevant regions of the system's configurational space, without using any a priori information on the system's global properties. We will discuss the definition of smart “adaptive sampling” approaches that can redirect computational resources towards unexplored yet relevant regions. Our approaches are based on new developments in dimensionality reduction for high-dimensional dynamical systems and on the optimal redistribution of resources.
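The ensemble idea above can be illustrated with a toy "explore the least-visited states" policy: discretize the sampled conformations into states, then seed the next batch of short trajectories preferentially from states with low visit counts. Everything below (the inverse-count weighting, the function name, the example numbers) is an illustrative assumption, not the speaker's actual algorithm.

```python
import numpy as np

def adaptive_seed_states(state_counts, n_new, rng=None):
    """Pick start states for the next batch of short trajectories,
    favoring the least-visited states.  Illustrative sketch only:
    real adaptive-sampling schemes use more sophisticated criteria."""
    rng = rng or np.random.default_rng(0)
    counts = np.asarray(state_counts, dtype=float)
    # Weight each discovered state inversely to how often it was visited,
    # so rarely seen (but relevant) regions receive more trajectories.
    weights = 1.0 / (counts + 1.0)
    weights /= weights.sum()
    return rng.choice(len(counts), size=n_new, p=weights)

# Example: state 2 has been visited only 5 times, so most of the
# next 1000 trajectory seeds are drawn from it.
seeds = adaptive_seed_states([500, 300, 5, 200], n_new=1000)
```

After each round of short simulations, the visit counts are updated and the seeding repeated, which is the "weakly coupled ensemble" loop the abstract describes.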
Monday, March 13, 2017, 11:51AM – 12:03PM
B7.00002: Freud: a software suite for high-throughput simulation analysis. Eric Harper, Matthew Spellings, Joshua Anderson, Sharon Glotzer. Computer simulation is an indispensable tool for the study of a wide variety of systems. As simulations scale to fill petascale and exascale supercomputing clusters, so too does the size of the data produced, as well as the difficulty in analyzing these data. We present Freud, an analysis software suite for efficient analysis of simulation data. Freud makes no assumptions about the system being analyzed, allowing general analysis methods to be applied to nearly any type of simulation. Freud includes standard analysis methods such as the radial distribution function, as well as new methods including the potential of mean force and torque and local crystal environment analysis. Freud combines a Python interface with fast, parallel C++ analysis routines to run efficiently on laptops, workstations, and supercomputing clusters. Data analysis on clusters reduces data transfer requirements, a prohibitive cost for petascale computing. Used in conjunction with simulation software, Freud allows for smart simulations that adapt to the current state of the system, enabling the study of phenomena such as nucleation and growth, intelligent investigation of phases and phase transitions, and determination of effective pair potentials.
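The radial distribution function g(r) mentioned above is the standard measure of pair structure in a simulation. The sketch below computes it with plain NumPy for a cubic periodic box; it is a generic illustration of the method, not Freud's actual API (see Freud's own documentation for that), and the (n-1 ≈ n) normalization is a deliberate simplification.

```python
import numpy as np

def radial_distribution(points, box_length, r_max, n_bins):
    """Histogram pair distances into g(r) for a cubic periodic box.
    Generic textbook method, not Freud's optimized implementation.
    Requires r_max <= box_length / 2 for the minimum-image convention."""
    n = len(points)
    # Minimum-image pair separations in a periodic cubic box.
    diff = points[:, None, :] - points[None, :, :]
    diff -= box_length * np.round(diff / box_length)
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    dist = dist[np.triu_indices(n, k=1)]          # unique pairs only
    hist, edges = np.histogram(dist[dist < r_max], bins=n_bins,
                               range=(0.0, r_max))
    # Normalize by the ideal-gas expectation in each spherical shell
    # (approximating n - 1 by n for simplicity).
    density = n / box_length ** 3
    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    ideal_pairs = 0.5 * n * density * shell_vol
    return edges[:-1], hist / ideal_pairs
```

For uniformly random (ideal-gas-like) points, g(r) ≈ 1 at all r, which makes a convenient sanity check; structured fluids and crystals show peaks at preferred neighbor distances.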
Monday, March 13, 2017, 12:03PM – 12:15PM
B7.00003: Efficient Calculation of Exact Exchange Within the Quantum Espresso Software Package. Taylor Barnes, Thorsten Kurth, Pierre Carrier, Nathan Wichmann, David Prendergast, Paul Kent, Jack Deslippe. Accurate simulation of condensed matter at the nanoscale requires careful treatment of the exchange interaction between electrons. In the context of plane-wave DFT, these interactions are typically represented through the use of approximate functionals. Greater accuracy can often be obtained through the use of functionals that incorporate some fraction of exact exchange; however, evaluation of the exact exchange potential is often prohibitively expensive. We present an improved algorithm for the parallel computation of exact exchange in Quantum Espresso, an open-source software package for plane-wave DFT simulation. Through the use of aggressive load balancing and on-the-fly transformation of internal data structures, our code exhibits speedups of approximately an order of magnitude for practical calculations. Additional optimizations are presented targeting the many-core Intel Xeon Phi “Knights Landing” architecture, which largely powers NERSC's new Cori system. We demonstrate the successful application of the code to difficult problems, including simulation of water at a platinum interface and computation of the X-ray absorption spectra of transition metal oxides.
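For context, the quantity whose evaluation dominates the cost discussed in this abstract is the standard exact (Hartree–Fock-type) exchange energy, a double sum over occupied orbitals with a Coulomb kernel. This is the textbook form for the spin-unpolarized case, not an excerpt from the authors' code:

```latex
E_x^{\mathrm{exact}} = -\frac{1}{2} \sum_{i,j}^{\mathrm{occ}}
\iint \frac{\psi_i^*(\mathbf{r})\,\psi_j(\mathbf{r})\,
            \psi_j^*(\mathbf{r}')\,\psi_i(\mathbf{r}')}
           {|\mathbf{r}-\mathbf{r}'|}\, d\mathbf{r}\, d\mathbf{r}'
```

Because every occupied pair (i, j) requires a Poisson-like solve, the cost scales much more steeply than for semilocal functionals, which is why load balancing across pairs matters so much in a parallel plane-wave implementation.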
Monday, March 13, 2017, 12:15PM – 12:27PM
B7.00004: Parallel performance for large-scale GW calculations using the OpenAtom software. Subhasish Mandal, Minjung Kim, Eric Mikida, Kavitha Chandrasekar, Eric Bohm, Nikhil Jain, Laxmikant V. Kale, Glenn J. Martyna, Sohrab Ismail-Beigi. One of the accurate ab initio electronic structure methods that goes beyond density functional theory (DFT) to describe excited states of materials is the GW-BSE method. Due to the extreme computational demands of this approach, most ab initio GW calculations have been confined to small unit cells of bulk-like materials. We will describe our collaborative efforts to develop new parallel software that permits large-scale, efficiently parallel GW calculations. Our GW software is interfaced with the open-source ab initio plane-wave pseudopotential OpenAtom software (http://charm.cs.uiuc.edu/OpenAtom/), which takes advantage of the Charm++ parallel framework. We will present our real-space computational approach, parallel algorithms, and parallel scaling performance for the GW calculation, and compare to other available open-source software.
Monday, March 13, 2017, 12:27PM – 12:39PM
B7.00005: Large-scale ab initio molecular dynamics using the OpenAtom software. Sohrab Ismail-Beigi, Subhasish Mandal, Minjung Kim, Eric Mikida, Eric Bohm, Prateek Jindal, Nikhil Jain, Laxmikant Kale, Glenn Martyna. First-principles molecular dynamics approaches permit one to simulate dynamic and time-dependent phenomena in physics, chemistry, and materials science without the use of empirical potentials or ad hoc assumptions about the interatomic interactions, since they describe electrons, nuclei, and their interactions explicitly. We describe our collaborative efforts in developing and enhancing OpenAtom, an open-source ab initio density functional software package based on plane waves and pseudopotentials (http://charm.cs.uiuc.edu/OpenAtom/). OpenAtom takes advantage of the Charm++ parallel framework. We present parallel scaling results on a large metal-organic framework (MOF) material of scientific and potential technological interest for hydrogen storage. In the process, we highlight the capabilities of the software, which include molecular dynamics (Car-Parrinello or Born-Oppenheimer), k-points, spin, path-integral “beads” for quantum nuclear effects, and parallel tempering for exploration of complex phase spaces. Particular efforts have been made to ensure that the different capabilities interoperate in various combinations with high performance and scaling. Comparisons to other available open-source software will also be presented.
Monday, March 13, 2017, 12:39PM – 12:51PM
B7.00006: Exploring ultrafast dynamics in photoexcited layered materials by large-scale quantum molecular dynamics simulations. Aravind Krishnamoorthy, Lindsay Bassman, Aiichiro Nakano, Rajiv Kalia, Priya Vashishta, Hiroyuki Kumazoe, Masaaki Misawa, Fuyuki Shimojo. Understanding ultrafast dynamics in photoexcited few-layer transition metal dichalcogenide crystals is crucial for the synthesis and functionalization of these materials. These dynamics also hold the key to unraveling phenomena such as anisotropic thermal transport and anomalous lattice expansion. However, a thorough investigation of such dynamics requires computationally demanding ab initio methods to capture electron-phonon interactions, as well as laterally large simulation cells to account for long-range vibrational modes that are not sampled in small-scale DFT calculations. Here, we present results from our non-adiabatic QMD simulations of mono- and few-layer TMDCs at experimentally realized sub-μm length scales, made possible through our linear-scaling DFT method. We discuss how large-scale simulations allow us to model phenomena like electron-lattice coupling, correlated atomic motion, and localized configurational change, and address recent experimental observations in these material systems.
Monday, March 13, 2017, 12:51PM – 1:03PM
B7.00007: Multimillion-Atom Reactive Molecular Dynamics Simulations of the Oxidation of SiC Nanoparticles. Ying Li, Nichols Romero. High-temperature oxidation of silicon carbide nanoparticles (nSiC) underlies a wide range of technologies, from high-power electronic switches for the efficient electrical grid and thermal protection of space vehicles to self-healing ceramic nanocomposites. Here, multimillion-atom reactive molecular dynamics simulations, validated by ab initio quantum molecular dynamics simulations, predict unexpected condensation of large graphene flakes during high-temperature oxidation of nSiC. Initial oxidation produces a molten silica shell that acts as an autocatalytic ‘nanoreactor’ by actively transporting oxygen reactants while protecting the nanocarbon product from the harsh oxidizing environment. A percolation transition produces porous nanocarbon with fractal geometry, which consists mostly of sp2 carbons with pentagonal and heptagonal defects. This work suggests a simple synthetic pathway to high-surface-area, low-density nanocarbon with numerous energy, biomedical, and mechanical-metamaterial applications, including the reinforcement of self-healing composites.
Monday, March 13, 2017, 1:03PM – 1:15PM
B7.00008: Workflow Management Systems for Molecular Dynamics on Leadership Computers. Jack Wells, Sergey Panitkin, Danila Oleynik, Shantenu Jha. Molecular dynamics (MD) simulations play an important role in a range of disciplines from materials science to biophysical systems, and account for a large fraction of cycles consumed on computing resources. Increasingly, science problems require the successful execution of many MD simulations as opposed to a single MD simulation, so there is a need for scalable and flexible approaches to the execution of this workload. We present preliminary results on the Titan computer at the Oak Ridge Leadership Computing Facility that demonstrate a general capability to manage workload execution agnostic of a specific MD simulation kernel or execution pattern, and in a manner that integrates disparate grid-based and supercomputing resources. Our results build upon our extensive experience of distributed workload management in the high-energy physics ATLAS project using PanDA (Production and Distributed Analysis System), coupled with recent conceptual advances in our understanding of workload management on heterogeneous resources. We will discuss how we will generalize these initial capabilities towards a more production-level service on DOE leadership resources.
Monday, March 13, 2017, 1:15PM – 1:27PM
B7.00009: Neutron Scattering in Chemistry: Experiments, Models and Statistical Description of Physical Phenomena. Timmy Ramirez-Cuesta. Incoherent inelastic neutron scattering spectroscopy is a very powerful technique that requires the use of ab initio models to interpret the experimental data. Albeit not exact, the information obtained from the models gives very valuable insight into the dynamics of atoms in solids and molecules, which, in turn, provides unique access to the vibrational density of states. The technique is extremely sensitive to hydrogen, since the neutron cross section of hydrogen is the largest of all chemical elements; hydrogen, being the lightest element, also exhibits more pronounced quantum effects than the other elements. In the case of non-crystalline or disordered materials, the models provide partial information, and only a reduced sampling of possible configurations can be done at present. With the very large computing power that exascale computing will provide, a new opportunity arises to study these systems and introduce a description of statistical configurations, including energetic and dynamic characterization of configurational entropy. As part of the ICEMAN project, we are developing the tools to manage the workflows and to visualize and analyze the results, so that state-of-the-art computational methods can be applied to the atomistic models used to interpret neutron scattering data.
Monday, March 13, 2017, 1:27PM – 1:39PM
B7.00010: Highly Efficient Parallel Multigrid Solver for Large-Scale Simulation of Grain Growth Using the Structural Phase Field Crystal Model. Zhen Guan, Dmitry Pekurovsky, Jason Luce, Katsuyo Thornton, John Lowengrub. The structural phase field crystal (XPFC) model can be used to model grain growth in polycrystalline materials at diffusive timescales while maintaining atomic-scale resolution. However, the governing equation of the XPFC model is an integral partial differential equation (IPDE), which poses challenges for implementation on high-performance computing (HPC) platforms. In collaboration with the XSEDE Extended Collaborative Support Service, we developed a distributed-memory HPC solver for the XPFC model, which combines parallel multigrid and P3DFFT. Performance benchmarking on the Stampede supercomputer indicates near-linear strong and weak scaling, for both the multigrid solver and the transfer time between the multigrid and FFT modules, up to 1024 cores. Scalability of the FFT module begins to decline at 128 cores, but it is sufficient for the type of problem we will be examining. We have demonstrated simulations using 1024 cores, and we expect to scale to 4096 cores and beyond. Ongoing work involves optimization of the MPI/OpenMP-based code for the Intel KNL many-core architecture. This prepares the code for coming pre-exascale systems, in particular many-core systems such as Stampede 2.0 and Cori 2 at NERSC, without sacrificing efficiency on other general HPC systems.
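The multigrid idea behind the solver — smooth on the fine grid, correct the error on a coarser grid, recurse — can be illustrated with a toy 1D Poisson V-cycle. This sketch (weighted-Jacobi smoother, injection restriction, linear prolongation) is a generic textbook illustration only; the actual XPFC solver works on 3D distributed-memory grids coupled with P3DFFT.

```python
import numpy as np

def v_cycle(u, f, n_smooth=3, omega=2.0 / 3.0):
    """One multigrid V-cycle for -u'' = f on [0,1] with u(0)=u(1)=0,
    discretized on len(u)-1 uniform intervals (a power of two).
    Toy 1D illustration of the multigrid idea only."""
    n = len(u) - 1
    h = 1.0 / n
    for _ in range(n_smooth):            # pre-smooth (weighted Jacobi)
        u[1:-1] += omega * (0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]) - u[1:-1])
    if n <= 2:
        return u
    r = np.zeros_like(u)                 # residual of the -u'' = f equation
    r[1:-1] = f[1:-1] + (u[:-2] - 2.0 * u[1:-1] + u[2:]) / (h * h)
    # Solve the coarse residual equation recursively, then interpolate
    # the coarse-grid error correction back to the fine grid.
    ec = v_cycle(np.zeros(n // 2 + 1), r[::2].copy())
    u += np.interp(np.linspace(0, 1, n + 1), np.linspace(0, 1, n // 2 + 1), ec)
    for _ in range(n_smooth):            # post-smooth
        u[1:-1] += omega * (0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]) - u[1:-1])
    return u
```

Repeating the V-cycle reduces the error at all wavelengths at a rate independent of grid size, which is what makes multigrid attractive at scale; the XPFC work adds the harder problem of doing this efficiently across thousands of distributed-memory cores alongside FFTs.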
Monday, March 13, 2017, 1:39PM – 1:51PM
B7.00011: Large-Scale GW Calculations on the Cori System. Jack Deslippe, Mauro Del Ben, Felipe da Jornada, Andrew Canning, Steven Louie. The NERSC Cori system, powered by 9000+ Intel Xeon Phi processors, represents one of the largest HPC systems for open science in the United States and the world. We discuss the optimization of the GW methodology for this system, including both node-level and system-scale optimizations. We highlight multiple large-scale (thousands of atoms) case studies and discuss both absolute application performance and comparison to calculations on more traditional HPC architectures. We find that the GW method is particularly well suited for many-core architectures due to the ability to exploit a large amount of parallelism across many layers of the system.
Monday, March 13, 2017, 1:51PM – 2:03PM
B7.00012: Open release of the DCA++ project. Urs Haehner, Raffaele Solca, Peter Staar, Gonzalo Alvarez, Thomas Maier, Michael Summers, Thomas Schulthess. We present the first open release of the DCA++ project, a highly scalable and efficient research code to solve quantum many-body problems with cutting-edge quantum cluster algorithms. The implemented dynamical cluster approximation (DCA) and its DCA+ extension with a continuous self-energy capture non-local correlations in strongly correlated electron systems, thereby allowing insight into high-Tc superconductivity. With the increasing heterogeneity of modern machines, DCA++ provides portable performance on conventional and emerging new architectures, such as hybrid CPU-GPU and Xeon Phi, sustaining multiple petaflops on ORNL's Titan and CSCS's Piz Daint. Moreover, we will describe how best practices in software engineering can be applied to make software development sustainable and scalable in a research group. Software testing and documentation not only prevent productivity collapse but, more importantly, are necessary for the correctness, credibility, and reproducibility of scientific results.
Monday, March 13, 2017, 2:03PM – 2:15PM
B7.00013: OpenRBC: Redefining the Frontier of Red Blood Cell Simulations at Protein Resolution. Yu-Hang Tang, Lu Lu, He Li, Leopold Grinberg, Vipin Sachdeva, Constantinos Evangelinos, George Karniadakis. We present the from-scratch development of OpenRBC, a coarse-grained molecular dynamics code capable of performing an unprecedented in silico experiment: simulating an entire mammalian red blood cell lipid bilayer and cytoskeleton, modeled by 4 million mesoscopic particles, on a single shared-memory node. To achieve this, we invented an adaptive spatial-searching algorithm to accelerate the computation of short-range pairwise interactions in an extremely sparse 3D space. The algorithm is based on a Voronoi partitioning of the point cloud of coarse-grained particles, and is continuously updated over the course of the simulation. It enables the construction of a lattice-free cell list, the key spatial-searching data structure in our code, in O(N) time and space, with cells whose position and shape adapt automatically to the local density and curvature. The code implements NUMA/NUCA-aware OpenMP parallelization and achieves perfect scaling with up to hundreds of hardware threads. It outperforms a legacy solver by more than 8 times in time-to-solution and more than 20 times in problem size, thus providing a new venue for probing the cytomechanics of red blood cells.
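For readers unfamiliar with cell lists: the conventional version buckets particles into a fixed uniform lattice so that a neighbor search only scans the 27 surrounding cells instead of all N particles. The sketch below shows that conventional baseline; OpenRBC's contribution is to replace the fixed lattice with adaptive Voronoi cells, which this sketch does not attempt. Function names and parameters here are illustrative.

```python
import numpy as np
from collections import defaultdict

def build_cell_list(points, cell_size):
    """Standard uniform-lattice cell list: bucket particle indices by
    integer cell coordinates.  For correctness of the 27-cell scan
    below, cell_size must be >= the search cutoff r_cut."""
    cells = defaultdict(list)
    for idx, p in enumerate(points):
        cells[tuple((p // cell_size).astype(int))].append(idx)
    return cells

def neighbors_within(points, cells, cell_size, i, r_cut):
    """Indices within r_cut of points[i], scanning only adjacent cells."""
    c = (points[i] // cell_size).astype(int)
    out = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                for j in cells.get((c[0] + dx, c[1] + dy, c[2] + dz), []):
                    if j != i and np.linalg.norm(points[j] - points[i]) < r_cut:
                        out.append(j)
    return out
```

In a red blood cell, particles occupy a thin curved membrane inside a mostly empty bounding box, so a uniform lattice wastes memory on millions of empty cells; that sparsity is what motivates OpenRBC's Voronoi-based, lattice-free alternative.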