Bulletin of the American Physical Society
APS March Meeting 2012
Volume 57, Number 1
Monday–Friday, February 27–March 2, 2012; Boston, Massachusetts
Session W26: Focus Session: What is Computational Physics? New Technologies and Their Application
Sponsoring Units: DCOMP
Chair: Timothy Germann, Los Alamos National Laboratory
Room: 257B
Thursday, March 1, 2012 11:15AM - 11:51AM
W26.00001: The Future of Scientific Computing Invited Speaker: Thom Dunning Computing technologies are undergoing a dramatic transition. Multicore chips with up to eight cores are now available from many vendors, and the number of cores per chip will continue to increase. In fact, many-core chips, e.g., NVIDIA GPUs, are now being seriously explored in many areas of scientific computing. This technology shift presents a challenge for computational science and engineering: the only significant performance increases in the future will come from increased exploitation of parallelism. At the same time, petascale computers based on these technologies are being deployed at sites across the world. The opportunities arising from petascale computing are enormous: predicting the behavior of complex biological systems, understanding the production of heavy elements in supernovae, designing catalysts at the atomic level, predicting changes in the earth's climate and ecosystems, and designing complex engineered systems. But petascale computers are very complex systems, built from multicore and many-core chips with hundreds of thousands to millions of cores, hundreds of terabytes to petabytes of memory, and tens of thousands of disk drives. The architecture of petascale computers has significant implications for the design of the next generation of science and engineering applications. In this presentation, we provide an overview of directions in computing technologies and describe the petascale computing systems being deployed in the U.S. and elsewhere.
Thursday, March 1, 2012 11:51AM - 12:03PM
W26.00002: GPU Acceleration of the Qbox First-Principles Molecular Dynamics Code William Dawson, Francois Gygi The availability of double-precision graphics cards provides an opportunity to speed up electronic structure computations. We modify the Qbox [1] code to utilize Fermi GPUs on the Keeneland [2] platform. We use the CUFFT library to speed up Fourier transforms and perform asynchronous communication to cut down the cost of data transfers. The modified code is used in simulations of a 64-molecule water system with an 85 Ry plane-wave energy cutoff. Preliminary results show a two- to threefold speedup in the calculation of the charge density and in the application of the Hamiltonian operator to the wave function. We present these findings as well as further speedups measured in other parts of the code. [1] http://eslab.ucdavis.edu/software/qbox [2] http://keeneland.gatech.edu
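As a rough illustration of the charge-density step whose acceleration the abstract reports, the sketch below builds n(r) = sum_i f_i |psi_i(r)|^2 from plane-wave coefficients via a naive inverse discrete Fourier transform; in a production plane-wave code this transform is precisely what a batched CUFFT call replaces. The function names and toy data are illustrative, not taken from Qbox.

```python
import cmath

def idft(coeffs):
    """Naive inverse DFT: psi[r] = sum_G c[G] * exp(2*pi*i*G*r/N).
    This O(N^2) loop stands in for the FFT that CUFFT accelerates."""
    n = len(coeffs)
    return [sum(c * cmath.exp(2j * cmath.pi * g * r / n)
                for g, c in enumerate(coeffs))
            for r in range(n)]

def charge_density(orbitals, occupations):
    """n(r) = sum_i f_i |psi_i(r)|^2, transforming each orbital G -> r."""
    n = len(orbitals[0])
    rho = [0.0] * n
    for f, coeffs in zip(occupations, orbitals):
        psi = idft(coeffs)
        for r in range(n):
            rho[r] += f * abs(psi[r]) ** 2
    return rho
```

On a GPU, each orbital's transform and the accumulation over grid points are independent, which is why this step parallelizes well.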
Thursday, March 1, 2012 12:03PM - 12:15PM
W26.00003: Quantum Monte Carlo in the era of petascale computers Jeongnim Kim, Kenneth Esler, Jeremy McMinis, Miguel Morales, Bryan Clark, Luke Shulenburger, David Ceperley Continuum quantum Monte Carlo (QMC) methods are a leading contender for high-accuracy calculations of the electronic structure of realistic systems, especially on massively parallel high-performance computing (HPC) systems. The performance gain on recent HPC systems is largely driven by increasing parallelism: the number of compute cores of an SMP and the number of SMPs have been going up, as the Top500 list attests. However, the available memory as well as the communication and memory bandwidth per processing element have not kept pace with the increasing parallelism. This severely limits the applicability of QMC and the problem size it can handle. (OpenMP,CUDA)/MPI hybrid programming provides applications with simple but effective solutions to overcome efficiency and scalability bottlenecks on large-scale clusters based on multi/many-core SMPs. We discuss the design and implementation of hybrid methods in QMCPACK and analyze their performance on multi-petaflop platforms characterized by various memory and communication hierarchies. Also presented are QMC calculations of bulk systems, including defects in semiconductors.
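As a generic illustration of the hybrid idea (not QMCPACK's actual code), the sketch below runs only the shared-memory thread level of such a scheme: the walkers owned by one node are advanced by a thread pool, while the MPI layer that would distribute walker blocks across nodes is omitted. The walker update is a deterministic placeholder.

```python
from concurrent.futures import ThreadPoolExecutor

def advance(walker, steps=100):
    """Placeholder per-walker update loop standing in for a QMC
    drift/diffusion step; deliberately deterministic for clarity."""
    x = walker
    for _ in range(steps):
        x = 0.5 * x + 1.0  # contraction toward the fixed point x = 2
    return x

def run_node(walkers, nthreads=4):
    """Thread level of an (OpenMP-style)/MPI hybrid: one node's walkers
    are updated concurrently by a shared-memory pool. In a full hybrid
    code, an outer MPI layer would assign walker blocks to nodes."""
    with ThreadPoolExecutor(max_workers=nthreads) as pool:
        return list(pool.map(advance, walkers))
```

Because walkers are independent between synchronization points, this level of parallelism adds threads without increasing the per-process memory footprint, which is the bottleneck the abstract highlights.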
Thursday, March 1, 2012 12:15PM - 12:27PM
W26.00004: GPU accelerated replica-exchange simulations of polymers Jonathan Gross, Michael Bachmann Precise estimation of physical quantities in Monte Carlo computer simulations depends strongly on the amount of statistical data gathered during the simulation. Increasing the performance of the sampling process allows more accurate results in a shorter time. The parallel tempering (replica-exchange) algorithm turns out to be very well suited to parallel hardware such as multicore CPUs and GPUs. We achieve substantial speedups in our investigation of an exemplary bead-spring polymer model. Phase-like transitions are identified and classified by analyzing the microcanonical entropy.
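The exchange move at the heart of parallel tempering is simple enough to sketch in full: neighboring replicas at inverse temperatures beta_i and beta_j swap configurations with Metropolis probability min{1, exp[(beta_i - beta_j)(E_i - E_j)]}. The toy driver below (a 1D double-well stand-in, not the bead-spring model of the abstract) shows the structure that parallel hardware exploits: the per-replica sweeps are mutually independent.

```python
import math
import random

def swap_prob(beta_a, beta_b, e_a, e_b):
    """Metropolis acceptance probability for exchanging the configurations
    of two replicas at inverse temperatures beta_a and beta_b."""
    return min(1.0, math.exp((beta_a - beta_b) * (e_a - e_b)))

def parallel_tempering(energy, betas, steps=1000, seed=1):
    """Toy parallel tempering on a 1D landscape: each replica performs
    Metropolis moves at its own temperature (the part a GPU would run in
    parallel), followed by neighbor swap attempts."""
    rng = random.Random(seed)
    xs = [0.0] * len(betas)
    for _ in range(steps):
        for i, beta in enumerate(betas):  # independent per-replica sweeps
            trial = xs[i] + rng.uniform(-0.5, 0.5)
            de = energy(trial) - energy(xs[i])
            if rng.random() < min(1.0, math.exp(-beta * de)):
                xs[i] = trial
        for i in range(len(betas) - 1):  # exchange step between neighbors
            if rng.random() < swap_prob(betas[i], betas[i + 1],
                                        energy(xs[i]), energy(xs[i + 1])):
                xs[i], xs[i + 1] = xs[i + 1], xs[i]
    return xs
```

High-temperature replicas cross barriers easily and feed decorrelated configurations down to the low-temperature replicas via the swaps, which is what improves the sampling statistics the abstract is concerned with.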
Thursday, March 1, 2012 12:27PM - 12:39PM
W26.00005: Graphics Processing Unit Accelerated Hirsch-Fye Quantum Monte Carlo Conrad Moore, Sameer Abu Asal, Kaushik Rajagoplan, David Poliakoff, Joseph Caprino, Karen Tomko, Bhupender Thakur, Shuxiang Yang, Juana Moreno, Mark Jarrell In Dynamical Mean Field Theory and its cluster extensions, such as the Dynamic Cluster Algorithm, the bottleneck of the algorithm is solving the self-consistency equations with an impurity solver. Hirsch-Fye Quantum Monte Carlo is one of the most commonly used impurity and cluster solvers. This work implements optimizations of the algorithm, such as enabling large data re-use, suitable for the Graphics Processing Unit (GPU) architecture. The GPU's large number of concurrent parallel computations and high bandwidth to many shared memories exploit the inherent parallelism in the Green function update and measurement routines, and can substantially improve the efficiency of the Hirsch-Fye impurity solver.
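One generic form of the "large data re-use" optimization mentioned above is the delayed-update trick: instead of applying rank-1 Green-function updates one at a time (a BLAS-2-like, memory-bandwidth-bound pattern), store k of them and apply them together as one matrix product (BLAS-3-like and GPU-friendly). The sketch below demonstrates only the equivalence of the two orderings on toy matrices; it is not the actual Hirsch-Fye update equations.

```python
def rank1_sequential(G, updates):
    """Apply updates one at a time: G <- G + u v^T for each (u, v).
    Each pass touches all of G for little arithmetic (bandwidth-bound)."""
    n = len(G)
    out = [row[:] for row in G]
    for u, v in updates:
        for i in range(n):
            for j in range(n):
                out[i][j] += u[i] * v[j]
    return out

def rank1_batched(G, updates):
    """Apply k stored updates at once: G <- G + U V^T, a matrix-matrix
    product that re-uses each loaded element k times (compute-bound)."""
    n, k = len(G), len(updates)
    out = [row[:] for row in G]
    for i in range(n):
        for j in range(n):
            out[i][j] += sum(updates[m][0][i] * updates[m][1][j]
                             for m in range(k))
    return out
```

On a GPU the batched form maps onto a single high-throughput GEMM-style kernel instead of k bandwidth-limited passes over the Green function.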
Thursday, March 1, 2012 12:39PM - 12:51PM
W26.00006: ABSTRACT WITHDRAWN
Thursday, March 1, 2012 12:51PM - 1:03PM
W26.00007: ABSTRACT WITHDRAWN
Thursday, March 1, 2012 1:03PM - 1:15PM
W26.00008: A simple yet powerful open-source tool for scientific computing Larry Engelhardt I will introduce new open-source software for easily carrying out common mathematical tasks (plotting, animating, differentiating, integrating, and solving systems of equations) and fitting experimental data. This software requires no special syntax or programming, and it is designed to allow you to communicate mathematical results to your colleagues or students in a manner that is interactive, productive, and efficient. The current version can be downloaded from http://www.compadre.org/osp/items/detail.cfm?ID=11250.
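Two of the tasks listed above, numerical integration and fitting of data, can each be written in a few lines of ordinary Python; the stdlib sketch below (function names and tolerances are my own, not the tool's) shows the kind of computation the software automates behind its no-programming interface.

```python
def simpson(f, a, b, n=100):
    """Composite Simpson's rule for the integral of f over [a, b],
    using n (forced even) subintervals."""
    if n % 2:
        n += 1
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

def fit_line(xs, ys):
    """Least-squares fit of y = m*x + c to experimental data points."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    c = (sy - m * sx) / n
    return m, c
```

Simpson's rule is exact for polynomials up to cubic order, which makes it easy to sanity-check, and the closed-form normal equations above are the standard route to a straight-line fit.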
Thursday, March 1, 2012 1:15PM - 1:27PM
W26.00009: Multicanonical Modeling of Commercial WDM Optical Communication Systems David Yevick, George Soliman The multicanonical method has been extensively applied to optical and, more recently, wireless communication systems. Here we outline our recent work on the simulation of electronically compensated, polarization-multiplexed, wavelength-division-multiplexed, quadrature-phase-shift-keyed optical communication systems influenced by polarization mode dispersion and fiber nonlinearities. This constitutes, to our knowledge, the first complete multicanonical analysis of a realistic commercial system.
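The core idea can be illustrated on a toy system with an exactly known density of states: multicanonical weights w(E) proportional to 1/g(E) make the expected energy histogram flat, so exponentially rare events (such as the very low error-rate outages of interest in communication systems) are visited as often as typical configurations. Everything below is a generic illustration, not the authors' system model.

```python
from math import comb

def density_of_states(n):
    """Exact g(E) for n independent two-level units, with E counted as
    the number of excited units (a binomial distribution)."""
    return [comb(n, e) for e in range(n + 1)]

def multicanonical_weights(g):
    """Multicanonical weights w(E) = 1/g(E): the expected histogram
    H(E) ~ g(E) * w(E) is then flat, so the rare tails of the
    distribution are sampled as often as the peak."""
    return [1.0 / ge for ge in g]
```

In practice g(E) is unknown and the weights are built up iteratively from trial histograms; the flat-histogram condition above is the fixed point that iteration converges to.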
Thursday, March 1, 2012 1:27PM - 1:39PM
W26.00010: Mantid, A high performance framework for reduction and analysis of neutron scattering data Jon Taylor, O. Arnold, J. Bilheaux, A. Buts, S. Campbell, M. Doucet, N. Draper, R. Fowler, M. Gigg, V. Lynch, A. Markvardsen, K. Palmen, P. Parker, P. Peterson, S. Ren, M. Reuter, A. Savici, R. Taylor, R. Tolchenov, R. Whitley, W. Zhou, J. Zikovsky The use of large-scale facilities by researchers in the fields of condensed matter, soft matter, and the life sciences is becoming ever more prevalent in the modern research landscape. Facilities such as SNS and HFIR at ORNL and ISIS at RAL face ever-increasing user demand and produce ever-increasing volumes of data. One of the single most important barriers between experiment and publication is the complex and time-consuming effort that individual researchers apply to data reduction and analysis. The objective of the Manipulation and Analysis Toolkit for Instrument Data (MANTID) [1] framework is to bridge this gap with a common interface for data reduction and analysis that is seamless between the user experience at the time of the experiment and at the researcher's home institute when performing the final analysis and fitting of the data. [1] http://www.mantidproject.org/
Thursday, March 1, 2012 1:39PM - 1:51PM
W26.00011: SU(N) Clebsch-Gordan coefficients and non-Abelian symmetries Arne Alex, Lukas Everding, Peter Littelmann, Jan von Delft The numerical treatment of models with SU($N$) symmetry benefits greatly from the Wigner-Eckart theorem. Its application requires the explicit knowledge of the Clebsch-Gordan coefficients (CGCs) of the group SU($N$). We present an algorithm for the explicit numerical calculation of SU($N$) CGCs based on the \emph{Gelfand-Tsetlin pattern} calculus. Further exploitation of the Weyl symmetry of SU($N$) irreducible representations (irreps) leads to a significant speed-up compared to our previous algorithm (J.~Math.~Phys.\ 52, 023507, 2011). Our algorithm works for arbitrary $N$ and tensor products of two arbitrary SU($N$) irreps. It is well suited for numerical implementation; we provide a well-tested computer code for download and online use. Possible applications of our code include numerical treatments of quantum many-body systems using the numerical renormalization group (NRG), the density matrix renormalization group (DMRG), and general tensor network methods.
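For the SU(2) special case ($N=2$), CGCs have a closed form (Racah's sum) that makes a handy cross-check for any general-$N$ code. The sketch below is a stdlib Python implementation of that closed form in the Condon-Shortley phase convention; it is not the Gelfand-Tsetlin algorithm of the abstract.

```python
from math import factorial, sqrt

def _f(x):
    """Factorial of an argument that is an integer up to rounding."""
    return factorial(round(x))

def clebsch_gordan(j1, m1, j2, m2, J, M):
    """SU(2) Clebsch-Gordan coefficient <j1 m1; j2 m2 | J M> via
    Racah's closed-form sum (Condon-Shortley convention)."""
    if (round(2 * (m1 + m2)) != round(2 * M)
            or not abs(j1 - j2) <= J <= j1 + j2
            or abs(m1) > j1 or abs(m2) > j2 or abs(M) > J):
        return 0.0
    pre = sqrt((2 * J + 1) * _f(J + j1 - j2) * _f(J - j1 + j2)
               * _f(j1 + j2 - J) / _f(j1 + j2 + J + 1))
    pre *= sqrt(_f(J + M) * _f(J - M) * _f(j1 - m1) * _f(j1 + m1)
                * _f(j2 - m2) * _f(j2 + m2))
    total = 0.0
    for k in range(round(min(j1 + j2 - J, j1 - m1, j2 + m2)) + 1):
        if J - j2 + m1 + k < -1e-9 or J - j1 - m2 + k < -1e-9:
            continue  # a factorial argument would be negative: term vanishes
        total += (-1) ** k / (_f(k) * _f(j1 + j2 - J - k) * _f(j1 - m1 - k)
                              * _f(j2 + m2 - k) * _f(J - j2 + m1 + k)
                              * _f(J - j1 - m2 + k))
    return pre * total
```

The familiar spin-1/2 singlet and triplet coefficients (such as 1/sqrt(2) for the two components of |0,0>) fall out directly and are exactly the values a general SU($N$) code must reproduce at $N=2$.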
Thursday, March 1, 2012 1:51PM - 2:03PM
W26.00012: Efficient interacting many-body simulations using GPUs Tobias Kramer Graphics Processing Units (GPUs) provide an ideal tool for studying interacting systems with classical mechanics, with huge speedups for example in molecular dynamics. Quantum-mechanical calculations of many-body systems require additional work, but are feasible using additional degrees of freedom to incorporate quantum-mechanical effects [1]. As an example of the method, I show that the self-consistent solution for current transport in a magnetic field can be obtained from a microscopic model with thousands of Coulomb-interacting electrons. This yields a microscopic model of the Hall effect [2]. For few-electron systems, I compare the electronic density evolution based on the GPU classical-quantum model to TD-DFT calculations and discuss prospects of GPUs for solving the Schrödinger equation for many particles. [1] Time dependent approach to transport and scattering in atomic and mesoscopic systems, T. Kramer, AIP Conf. Proc. 1334, 142 (2011) [2] Self-consistent calculation of electric potentials in Hall devices, T. Kramer, V. Krueckl, E. Heller, and R. Parrott, Phys. Rev. B 81, 205306 (2010)
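A minimal sketch of the classical ingredient described above: the O(N^2) Coulomb pair loop whose independent pair interactions map naturally onto GPU threads. This is plain Python for clarity; the 2D geometry and unit charges are illustrative assumptions, not the author's model.

```python
def coulomb_forces(pos, charges):
    """Pairwise Coulomb forces in 2D (Gaussian-style units, k = 1).
    The double loop over pairs is the part a GPU kernel parallelizes;
    Newton's third law is applied explicitly to each pair."""
    n = len(pos)
    forces = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            dx = pos[i][0] - pos[j][0]
            dy = pos[i][1] - pos[j][1]
            r3 = (dx * dx + dy * dy) ** 1.5
            f = charges[i] * charges[j]
            fx, fy = f * dx / r3, f * dy / r3
            forces[i][0] += fx
            forces[i][1] += fy
            forces[j][0] -= fx  # equal and opposite reaction
            forces[j][1] -= fy
    return forces
```

Because every pair force is computed independently, total momentum is conserved exactly, which provides a cheap correctness check for a parallelized version.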
© 2024 American Physical Society