Bulletin of the American Physical Society
APS March Meeting 2017
Volume 62, Number 4
Monday–Friday, March 13–17, 2017; New Orleans, Louisiana
Session B7: Computational Physics at the Petascale and Beyond II (Focus Session)
Sponsoring Units: DCOMP DMP-DCMP DCP-DBIO
Chair: Nichols Romero, Argonne National Laboratory
Room: 266
Monday, March 13, 2017 11:15AM - 11:51AM
B7.00001: Adaptive sampling strategies with high-throughput molecular dynamics Invited Speaker: Cecilia Clementi Despite recent significant hardware and software developments, the complete thermodynamic and kinetic characterization of large macromolecular complexes by molecular simulations still presents significant challenges. The high dimensionality of these systems and the complexity of the associated potential energy surfaces (creating multiple metastable regions connected by high free energy barriers) do not usually allow the relevant regions of their configurational space to be sampled adequately by means of a single, long Molecular Dynamics (MD) trajectory. Several different approaches have been proposed to tackle this sampling problem. We focus on the development of ensemble simulation strategies, where data from a large number of weakly coupled simulations are integrated to explore the configurational landscape of a complex system more efficiently. Ensemble methods are of increasing interest as the hardware roadmap is now mostly based on increasing core counts rather than clock speeds. The main challenge in the development of an ensemble approach for efficient sampling is in the design of strategies to adaptively distribute the trajectories over the relevant regions of the system's configurational space, without using any a priori information on the system's global properties. We will discuss the definition of smart “adaptive sampling” approaches that can redirect computational resources towards unexplored yet relevant regions. Our approaches are based on new developments in dimensionality reduction for high-dimensional dynamical systems and on the optimal redistribution of resources.
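To make the ensemble strategy above concrete, the following is a minimal sketch of a generic adaptive-sampling loop in Python: run short swarms of trajectories, discretize the visited configurations in a reduced feature space, and reseed new trajectories from the least-explored states. The function names (run_md, featurize) and the k-means discretization are illustrative placeholders, not the speaker's actual method.

    import numpy as np
    from sklearn.cluster import MiniBatchKMeans

    def adaptive_sampling(seed_states, run_md, featurize, n_rounds=10,
                          n_walkers=50, n_states=100):
        """Illustrative adaptive-sampling loop (not the actual method):
        short MD swarms -> discretize visited space -> reseed from the
        least-visited states so resources flow to unexplored regions."""
        starts, frames = list(seed_states), []
        for _ in range(n_rounds):
            # 1) Run an ensemble of short, weakly coupled trajectories.
            for traj in (run_md(s) for s in starts[:n_walkers]):
                frames.extend(traj)
            # 2) Project onto a low-dimensional feature space and discretize.
            X = np.asarray([featurize(f) for f in frames])
            k = min(n_states, len(X))
            labels = MiniBatchKMeans(n_clusters=k).fit_predict(X)
            # 3) Reseed from the rarest (least-sampled) occupied states.
            counts = np.bincount(labels, minlength=k)
            occupied = np.flatnonzero(counts > 0)
            rare = occupied[np.argsort(counts[occupied])][:n_walkers]
            starts = [frames[int(np.flatnonzero(labels == c)[0])] for c in rare]
        return frames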
Monday, March 13, 2017 11:51AM - 12:03PM
B7.00002: Freud: a software suite for high-throughput simulation analysis Eric Harper, Matthew Spellings, Joshua Anderson, Sharon Glotzer Computer simulation is an indispensable tool for the study of a wide variety of systems. As simulations scale to fill petascale and exascale supercomputing clusters, so too does the size of the data produced, as well as the difficulty in analyzing these data. We present \textit{Freud}, an analysis software suite for efficient analysis of simulation data. Freud makes no assumptions about the system being analyzed, allowing for general analysis methods to be applied to nearly any type of simulation. Freud includes standard analysis methods such as the radial distribution function, as well as new methods including the potential of mean force and torque and local crystal environment analysis. Freud combines a Python interface with fast, parallel C++ analysis routines to run efficiently on laptops, workstations, and supercomputing clusters. Data analysis on clusters reduces data transfer requirements, a prohibitive cost for petascale computing. Used in conjunction with simulation software, Freud allows for smart simulations that adapt to the current state of the system, enabling the study of phenomena such as nucleation and growth, intelligent investigation of phases and phase transitions, and determination of effective pair potentials.
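As an illustration of the Python-on-top, C++-underneath design, here is a short radial distribution function calculation with freud; the call signatures follow the current freud documentation (freud 2.x) and may differ slightly from the release available at the time of the talk.

    import numpy as np
    import freud

    # Random points in a periodic cubic box stand in for a simulation snapshot.
    L = 10.0
    box = freud.box.Box.cube(L)
    points = np.random.uniform(-L / 2, L / 2, size=(5000, 3)).astype(np.float32)

    # Radial distribution function g(r), evaluated by freud's parallel C++ backend.
    rdf = freud.density.RDF(bins=100, r_max=4.0)
    rdf.compute(system=(box, points))

    print(rdf.bin_centers[:5])
    print(rdf.rdf[:5])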
Monday, March 13, 2017 12:03PM - 12:15PM
B7.00003: Efficient Calculation of Exact Exchange Within the Quantum Espresso Software Package Taylor Barnes, Thorsten Kurth, Pierre Carrier, Nathan Wichmann, David Prendergast, Paul Kent, Jack Deslippe Accurate simulation of condensed matter at the nanoscale requires careful treatment of the exchange interaction between electrons. In the context of plane-wave DFT, these interactions are typically represented through the use of approximate functionals. Greater accuracy can often be obtained through the use of functionals that incorporate some fraction of exact exchange; however, evaluation of the exact exchange potential is often prohibitively expensive. We present an improved algorithm for the parallel computation of exact exchange in Quantum Espresso, an open-source software package for plane-wave DFT simulation. Through the use of aggressive load balancing and on-the-fly transformation of internal data structures, our code exhibits speedups of approximately an order of magnitude for practical calculations. Additional optimizations are presented targeting the many-core Intel Xeon Phi “Knights Landing” architecture, which largely powers NERSC's new Cori system. We demonstrate the successful application of the code to difficult problems, including simulation of water at a platinum interface and computation of the X-ray absorption spectra of transition metal oxides.
Monday, March 13, 2017 12:15PM - 12:27PM
B7.00004: Parallel performance for large scale GW calculation using the OpenAtom software Subhasish Mandal, Minjung Kim, Eric Mikida, Kavitha Chandrasekar, Eric Bohm, Nikhil Jain, Laxmikant V. Kale, Glenn J. Martyna, Sohrab Ismail-Beigi One of the accurate {\it ab initio} electronic structure methods that go beyond density functional theory (DFT) to describe excited states of materials is the GW-BSE method. Due to the extreme computational demands of this approach, most {\it ab initio} GW calculations have been confined to small unit cells of bulk-like materials. We will describe our collaborative efforts to develop new parallel software that permits large-scale, efficiently parallel GW calculations. Our GW software is interfaced with the open-source ab initio plane-wave pseudopotential OpenAtom software (http://charm.cs.uiuc.edu/OpenAtom/), which takes advantage of the Charm++ parallel framework. We will present our real-space computational approach, parallel algorithms, and parallel scaling performance for the GW calculation, and compare to other available open source software.
Monday, March 13, 2017 12:27PM - 12:39PM
B7.00005: Large scale ab initio molecular dynamics using the OpenAtom software Sohrab Ismail-Beigi, Subhasish Mandal, Minjung Kim, Eric Mikida, Eric Bohm, Prateek Jindal, Nikhil Jain, Laxmikant Kale, Glenn Martyna First principles molecular dynamics approaches permit one to simulate dynamic and time-dependent phenomena in physics, chemistry, and materials science without the use of empirical potentials or ad hoc assumptions about the interatomic interactions, since they describe electrons, nuclei, and their interactions explicitly. We describe our collaborative efforts in developing and enhancing the OpenAtom open source ab initio density functional software package based on plane waves and pseudopotentials (http://charm.cs.uiuc.edu/OpenAtom/). OpenAtom takes advantage of the Charm++ parallel framework. We present parallel scaling results on a large metal organic framework (MOF) material of scientific and potential technological interest for hydrogen storage. In the process, we highlight the capabilities of the software, which include molecular dynamics (Car-Parrinello or Born-Oppenheimer), k-points, spin, path integral “beads” for quantum nuclear effects, and parallel tempering for exploration of complex phase spaces. Particular efforts have been made to ensure that the different capabilities interoperate in various combinations with high performance and scaling. A comparison to other available open source software will also be presented.
Monday, March 13, 2017 12:39PM - 12:51PM
B7.00006: Exploring ultrafast dynamics in photoexcited layered materials by large-scale quantum molecular dynamics simulations Aravind Krishnamoorthy, Lindsay Bassman, Aiichiro Nakano, Rajiv Kalia, Priya Vashishta, Hiroyuki Kumazoe, Masaaki Misawa, Fuyuki Shimojo Understanding ultrafast dynamics in photoexcited few-layer transition metal dichalcogenide crystals is crucial for the synthesis and functionalization of these materials. These dynamics also hold the key to unraveling phenomena such as anisotropic thermal transport and anomalous lattice expansion. However, a thorough investigation of such dynamics requires computationally demanding \textit{ab initio} methods to capture electron-phonon interactions, as well as laterally large simulation cells to account for long-range vibrational modes that are not sampled in small-scale DFT calculations. Here, we present results from our non-adiabatic QMD simulations of mono- and few-layer TMDCs at experimentally realized sub-$\mu$m length scales, made possible through our linear-scaling DFT method. We discuss how large-scale simulations allow us to model phenomena like electron-lattice coupling, correlated atomic motion, and localized configurational change, and address recent experimental observations in these material systems.
Monday, March 13, 2017 12:51PM - 1:03PM
B7.00007: Multimillion-atom Reactive Molecular Dynamics Simulations on Oxidation of SiC Nanoparticles Ying Li, Nichols Romero High-temperature oxidation of silicon-carbide nanoparticles (nSiC) underlies a wide range of technologies, from high-power electronic switches for an efficient electrical grid and thermal protection of space vehicles to self-healing ceramic nanocomposites. Here, multimillion-atom reactive molecular dynamics simulations validated by ab initio quantum molecular dynamics simulations predict unexpected condensation of large graphene flakes during high-temperature oxidation of nSiC. Initial oxidation produces a molten silica shell that acts as an autocatalytic ‘nanoreactor’ by actively transporting oxygen reactants while protecting the nanocarbon product from the harsh oxidizing environment. A percolation transition produces porous nanocarbon with fractal geometry, which consists mostly of sp$^2$ carbons with pentagonal and heptagonal defects. This work suggests a simple synthetic pathway to high-surface-area, low-density nanocarbon with numerous energy, biomedical, and mechanical-metamaterial applications, including the reinforcement of self-healing composites.
Monday, March 13, 2017 1:03PM - 1:15PM
B7.00008: Workflow Management Systems for Molecular Dynamics on Leadership Computers Jack Wells, Sergey Panitkin, Danila Oleynik, Shantenu Jha Molecular Dynamics (MD) simulations play an important role in a range of disciplines from materials science to biophysical systems, and account for a large fraction of the cycles consumed on computing resources. Increasingly, science problems require the successful execution of “many” MD simulations as opposed to a single MD simulation. There is a need to provide scalable and flexible approaches to the execution of this workload. We present preliminary results on the Titan computer at the Oak Ridge Leadership Computing Facility that demonstrate a general capability to manage workload execution agnostic of the specific MD simulation kernel or execution pattern, and in a manner that integrates disparate grid-based and supercomputing resources. Our results build upon our extensive experience with distributed workload management in the high-energy physics ATLAS project using PanDA (the Production and Distributed Analysis system), coupled with recent conceptual advances in our understanding of workload management on heterogeneous resources. We will discuss how we will generalize these initial capabilities towards a production-level service on DOE leadership resources.
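The essential execution pattern, many independent MD tasks harvested as they complete so idle resources can be refilled, can be sketched with a generic process pool; this is only a schematic stand-in for PanDA-style workload management, and run_md_task is a hypothetical placeholder for launching an MD kernel.

    from concurrent.futures import ProcessPoolExecutor, as_completed

    def run_md_task(spec):
        """Hypothetical placeholder: launch one MD simulation described by
        `spec` (e.g. via subprocess) and return a summary of its output."""
        return {"spec": spec, "status": "done"}

    def run_ensemble(task_specs, max_workers=8):
        """Execute many independent MD tasks; collect results as they finish
        so new work can be scheduled onto idle resources."""
        results = []
        with ProcessPoolExecutor(max_workers=max_workers) as pool:
            futures = {pool.submit(run_md_task, s): s for s in task_specs}
            for fut in as_completed(futures):
                results.append(fut.result())
        return results

    if __name__ == "__main__":
        print(len(run_ensemble([{"task": i} for i in range(32)])))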
Monday, March 13, 2017 1:15PM - 1:27PM
B7.00009: Neutron Scattering in Chemistry: Experiments, Models and Statistical Description of Physical Phenomena Timmy Ramirez Cuesta Incoherent inelastic neutron scattering spectroscopy is a very powerful technique that requires the use of ab initio models to interpret the experimental data. Albeit not exact, the information obtained from these models gives very valuable insight into the dynamics of atoms in solids and molecules, which in turn provides unique access to the vibrational density of states. The technique is extremely sensitive to hydrogen, since the neutron cross section of hydrogen is the largest of all chemical elements; hydrogen, being the lightest element, also shows more pronounced quantum effects than the other elements. In the case of non-crystalline or disordered materials, the models provide only partial information, and only a reduced sampling of possible configurations can be done at present. With the very large computing power that exascale machines will provide, a new opportunity arises to study these systems and to introduce a statistical description of configurations, including the energetics and dynamics that characterize the configurational entropy. As part of the ICE-MAN project, we are developing the tools to manage the workflows and to visualize and analyze the results, bringing state-of-the-art computational methods to the many neutron scattering techniques that rely on atomistic models for the interpretation of experimental data.
Monday, March 13, 2017 1:27PM - 1:39PM
B7.00010: Highly Efficient Parallel Multigrid Solver For Large-Scale Simulation of Grain Growth Using the Structural Phase Field Crystal Model Zhen Guan, Dmitry Pekurovsky, Jason Luce, Katsuyo Thornton, John Lowengrub The structural phase field crystal (XPFC) model can be used to model grain growth in polycrystalline materials at diffusive time scales while maintaining atomic-scale resolution. However, the governing equation of the XPFC model is an integro-partial-differential equation (IPDE), which poses challenges for implementation on high-performance computing (HPC) platforms. In collaboration with the XSEDE Extended Collaborative Support Service, we developed a distributed-memory HPC solver for the XPFC model, which combines parallel multigrid with P3DFFT. Performance benchmarking on the Stampede supercomputer indicates near-linear strong and weak scaling, up to 1024 cores, for both the multigrid solver and the transfer time between the multigrid and FFT modules. Scalability of the FFT module begins to decline at 128 cores, but it is sufficient for the type of problem we will be examining. We have demonstrated simulations using 1024 cores, and we expect to reach 4096 cores and beyond. Ongoing work involves optimization of the MPI/OpenMP-based code for the Intel KNL many-core architecture. This prepares the code for upcoming pre-exascale systems, in particular many-core systems such as Stampede 2.0 and Cori 2 at NERSC, without sacrificing efficiency on other general HPC systems.
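The reason an FFT library (P3DFFT) appears alongside the multigrid solver is that the integral part of the XPFC equation is a convolution with a correlation kernel, which becomes a pointwise product in Fourier space. A minimal serial sketch of that step is shown below; the Gaussian kernel is a generic placeholder, not the actual XPFC correlation function.

    import numpy as np

    def nonlocal_term(psi, C_hat):
        """Evaluate the convolution (C * psi)(r) = IFFT[ C_hat(k) FFT[psi](k) ]
        for a periodic density field psi; the term that makes the governing
        equation integro-differential is local (and cheap) in Fourier space."""
        return np.real(np.fft.ifftn(C_hat * np.fft.fftn(psi)))

    # Placeholder Gaussian correlation kernel on a periodic 2D grid.
    n, L = 128, 32.0
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    C_hat = np.exp(-0.5 * (kx ** 2 + ky ** 2))

    psi = np.random.rand(n, n)
    print(nonlocal_term(psi, C_hat).shape)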
Monday, March 13, 2017 1:39PM - 1:51PM
B7.00011: Large Scale GW Calculations on the Cori System Jack Deslippe, Mauro Del Ben, Felipe da Jornada, Andrew Canning, Steven Louie The NERSC Cori system, powered by 9000+ Intel Xeon Phi processors, represents one of the largest HPC systems for open science in the United States and the world. We discuss the optimization of the GW methodology for this system, including both node-level and system-scale optimizations. We highlight multiple large-scale (thousands of atoms) case studies and discuss both absolute application performance and comparison to calculations on more traditional HPC architectures. We find that the GW method is particularly well suited for many-core architectures due to the ability to exploit a large amount of parallelism across many layers of the system.
Monday, March 13, 2017 1:51PM - 2:03PM
B7.00012: Open release of the DCA++ project Urs Haehner, Raffaele Solca, Peter Staar, Gonzalo Alvarez, Thomas Maier, Michael Summers, Thomas Schulthess We present the first open release of the DCA++ project, a highly scalable and efficient research code to solve quantum many-body problems with cutting-edge quantum cluster algorithms. The implemented dynamical cluster approximation (DCA) and its DCA$^+$ extension with a continuous self-energy capture nonlocal correlations in strongly correlated electron systems, thereby allowing insight into high-T$_c$ superconductivity. With the increasing heterogeneity of modern machines, DCA++ provides portable performance on conventional and emerging architectures, such as hybrid CPU-GPU and Xeon Phi, sustaining multiple petaflops on ORNL's Titan and CSCS' Piz Daint. Moreover, we will describe how best practices in software engineering can be applied to make software development sustainable and scalable in a research group. Software testing and documentation not only prevent productivity collapse but, more importantly, are necessary for the correctness, credibility, and reproducibility of scientific results.
Monday, March 13, 2017 2:03PM - 2:15PM
B7.00013: OpenRBC: Redefining the Frontier of Red Blood Cell Simulations at Protein Resolution Yu-Hang Tang, Lu Lu, He Li, Leopold Grinberg, Vipin Sachdeva, Constantinos Evangelinos, George Karniadakis We present a from-scratch development of OpenRBC, a coarse-grained molecular dynamics code capable of performing an unprecedented in silico experiment: simulating the lipid bilayer and cytoskeleton of an entire mammalian red blood cell, modeled by 4 million mesoscopic particles, on a single shared-memory node. To achieve this, we invented an adaptive spatial searching algorithm to accelerate the computation of short-range pairwise interactions in an extremely sparse 3D space. The algorithm is based on a Voronoi partitioning of the point cloud of coarse-grained particles, and is continuously updated over the course of the simulation. The algorithm enables the construction of a lattice-free cell list, i.e., the key spatial searching data structure in our code, in $O(N)$ time and space, with cells whose position and shape adapt automatically to the local density and curvature. The code implements NUMA/NUCA-aware OpenMP parallelization and achieves perfect scaling with up to hundreds of hardware threads. The code outperforms a legacy solver by more than 8 times in time-to-solution and more than 20 times in problem size, thus providing a new avenue for probing the cytomechanics of red blood cells.
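The idea behind a lattice-free cell list can be illustrated with a toy Voronoi-style partition of the particle cloud itself: cell generators are scattered over the particles, and each particle is assigned to its nearest generator, so cells follow the local density rather than a fixed lattice. This sketch uses a k-d tree query for the assignment and is only a stand-in for the OpenRBC algorithm, which maintains and updates the partition incrementally during the simulation.

    import numpy as np
    from scipy.spatial import cKDTree

    def adaptive_cells(points, n_cells, seed=0):
        """Toy lattice-free cell list: pick cell generators from the particle
        cloud and assign every particle to its nearest generator (a Voronoi
        partition of the point cloud)."""
        rng = np.random.default_rng(seed)
        generators = points[rng.choice(len(points), n_cells, replace=False)]
        owner = cKDTree(generators).query(points)[1]  # nearest-generator index
        cells = [np.flatnonzero(owner == c) for c in range(n_cells)]
        return generators, cells

    # Particles confined near a spherical membrane: a very sparse 3D point set.
    theta = np.arccos(np.random.uniform(-1, 1, 20000))
    phi = np.random.uniform(0, 2 * np.pi, 20000)
    pts = np.c_[np.sin(theta) * np.cos(phi),
                np.sin(theta) * np.sin(phi),
                np.cos(theta)]
    gens, cells = adaptive_cells(pts, n_cells=200)
    print(max(len(c) for c in cells), min(len(c) for c in cells))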