Bulletin of the American Physical Society
APS March Meeting 2013
Volume 58, Number 1
Monday–Friday, March 18–22, 2013; Baltimore, Maryland
Session J9: Invited Session: Computational Physics at the Bleeding Edge: To Exascale and Beyond
Sponsoring Units: DCOMP | Chair: Timothy Germann, Los Alamos National Laboratory | Room: 308
Tuesday, March 19, 2013, 2:30PM - 3:06PM
J9.00001: Intricacies of modern supercomputing illustrated with recent advances in simulations of strongly correlated electron systems
Invited Speaker: Thomas C. Schulthess
The continued thousand-fold improvement in sustained application performance per decade on modern supercomputers keeps opening new opportunities for scientific simulations. But supercomputers have become very complex machines, built from thousands or tens of thousands of nodes, each consisting of multiple CPU cores or, most recently, a combination of CPU and GPU processors. Efficient simulations on such high-end computing systems require tailored algorithms that optimally map numerical methods to particular architectures. These intricacies will be illustrated with simulations of strongly correlated electron systems, where the development of quantum cluster methods and Monte Carlo techniques, as well as their optimal implementation by means of algorithms with improved data locality and high arithmetic density, has gone hand in hand with evolving computer architectures. The present work would not have been possible without continued access to computing resources at the National Center for Computational Science of Oak Ridge National Laboratory, which is funded by the Facilities Division of the Office of Advanced Scientific Computing Research, and the Swiss National Supercomputing Center (CSCS), which is funded by ETH Zurich.
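The algorithmic shift this abstract alludes to, reorganizing work for higher arithmetic density, can be illustrated with a small sketch. The example below is illustrative only, not the speaker's actual code; all names are hypothetical. It shows the "delayed update" pattern common in quantum Monte Carlo codes: a batch of memory-bound rank-1 updates is accumulated and applied as a single compute-bound matrix-matrix product, which makes far better use of caches and GPUs.

```python
import numpy as np

def apply_rank1_updates_naive(G, us, vs):
    """Apply k rank-1 updates one at a time (BLAS-2, memory bound):
    G <- G + u_i v_i^T for each i."""
    for u, v in zip(us, vs):
        G += np.outer(u, v)
    return G

def apply_rank1_updates_delayed(G, us, vs):
    """Accumulate the same k updates and apply them as one matrix-matrix
    product (BLAS-3, compute bound): G <- G + U V^T. Same arithmetic,
    much higher arithmetic intensity per byte moved from memory."""
    U = np.column_stack(us)   # n x k
    V = np.column_stack(vs)   # n x k
    G += U @ V.T
    return G

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, k = 512, 32
    G = rng.standard_normal((n, n))
    us = [rng.standard_normal(n) for _ in range(k)]
    vs = [rng.standard_normal(n) for _ in range(k)]
    A = apply_rank1_updates_naive(G.copy(), us, vs)
    B = apply_rank1_updates_delayed(G.copy(), us, vs)
    assert np.allclose(A, B)   # identical result, different data movement
```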
Tuesday, March 19, 2013, 3:06PM - 3:42PM
J9.00002: Seeking a sustainable approach for computational science
Invited Speaker: Robert Harrison
Many are now questioning whether our current approaches to developing software for science and engineering are sustainable. In particular, can we deliver to society and the nation the full benefits expected from high-performance simulation at the peta- and exascales? Or is innovative science being stifled by the increasing complexity of every aspect of our problem space (rapidly changing hardware, software, multidisciplinary physics, etc.)? Focusing on applications in chemistry and materials science, and motivated by the co-design of exascale hardware and software, I will discuss many of these issues, including how chemistry has already been forced to adopt solutions that differ quite sharply from those in the mainstream, and how these solutions position us well for the technology transitions now under way. Radical changes in how we compute, going all the way back to the underlying numerical representation and algorithms used for the simulation, also promise great enhancements to both developer productivity and the accuracy of simulations.
Tuesday, March 19, 2013, 3:42PM - 4:18PM
J9.00003: Nicholas Metropolis Award for Outstanding Doctoral Thesis Work in Computational Physics Lecture: The Janus computer, a new window into spin-glass physics
Invited Speaker: David Yllanes
Spin glasses are a longstanding model for the sluggish dynamics that appear at the glass transition. They enjoy a privileged status in this context, as they provide the simplest model system for both theoretical and experimental studies of glassy dynamics. However, in spite of forty years of intensive investigation, spin glasses still pose a formidable challenge to theoretical, computational and experimental physics. The main difficulty lies in their incredibly slow dynamics. A recent breakthrough has been made possible by our custom-built computer, Janus, designed and built by a collaboration of five universities in Spain and Italy. By employing a purpose-driven architecture, capable of fully exploiting the parallelism intrinsic to these simulations, Janus outperforms conventional computers by several orders of magnitude. After a brief introduction to spin glasses, the talk will focus on the new physics unearthed by Janus. In particular, we recall our numerical study of the nonequilibrium dynamics of the Edwards-Anderson Ising spin glass for a time that spans eleven orders of magnitude, thus approaching the experimentally relevant scale (i.e., seconds). We have also studied the equilibrium properties of the spin-glass phase, with an emphasis on the quantitative matching between non-equilibrium and equilibrium correlation functions through a time-length dictionary. Last but not least, we have clarified the existence of a glass transition in the presence of a magnetic field for a finite-range spin glass (the so-called de Almeida-Thouless line). We will finally mention some of the collaboration's ongoing work, such as the characterization of the non-equilibrium dynamics in a magnetic field and the existence of a statics-dynamics dictionary in these conditions.
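One reason spin-glass simulations map so well onto special-purpose hardware like Janus is the checkerboard structure of nearest-neighbor models: every site of one sublattice color can be updated simultaneously, since its neighbors all belong to the other color. The sketch below is a toy NumPy version of this idea, not Janus firmware; the lattice size, temperature, and update count are arbitrary choices for illustration. It runs Metropolis sweeps of the 3D Edwards-Anderson model with quenched ±J couplings, updating an entire sublattice per step.

```python
import numpy as np

rng = np.random.default_rng(1)
L, T = 8, 1.0                    # lattice size and temperature (arbitrary)
beta = 1.0 / T

# Quenched +/-J couplings along the three axes (periodic boundaries);
# J[axis][x,y,z] couples site (x,y,z) to its +axis neighbor.
J = [rng.choice([-1, 1], size=(L, L, L)) for _ in range(3)]
s = rng.choice([-1, 1], size=(L, L, L))       # Ising spins

x, y, z = np.indices((L, L, L))
parity = (x + y + z) % 2                      # checkerboard sublattices

def local_field(s):
    """h_i = sum_j J_ij s_j over the six nearest neighbors."""
    h = np.zeros_like(s)
    for axis in range(3):
        h += J[axis] * np.roll(s, -1, axis)                    # +axis neighbor
        h += np.roll(J[axis], 1, axis) * np.roll(s, 1, axis)   # -axis neighbor
    return h

def checkerboard_sweep(s):
    """One Metropolis sweep: all sites of one color update at once,
    which is the data parallelism Janus exploits in hardware."""
    for p in (0, 1):
        dE = 2 * s * local_field(s)           # cost of flipping each spin
        accept = (parity == p) & \
                 (rng.random(s.shape) < np.exp(-beta * np.clip(dE, 0, None)))
        s = np.where(accept, -s, s)
    return s

for sweep in range(100):
    s = checkerboard_sweep(s)
# Each bond is counted twice in s * h, hence the factor of 1/2.
print("energy per spin:", -(s * local_field(s)).sum() / (2 * s.size))
```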
Tuesday, March 19, 2013, 4:18PM - 4:54PM
J9.00004: Programming for 1.6 Million cores: Early experiences with IBM's BG/Q SMP architecture
Invited Speaker: James Glosli
With the stall in clock-speed improvements a decade ago, the drive for computational performance has continued along a path of increasing core counts per processor. The multi-core evolution has been expressed both in symmetric multiprocessor (SMP) architectures and in CPU/GPU architectures. Debates rage in the high-performance computing (HPC) community over which architecture best serves HPC. In this talk I will not attempt to resolve that debate, but perhaps fuel it. I will discuss the experience of exploiting Sequoia, a 98,304-node IBM Blue Gene/Q SMP machine at Lawrence Livermore National Laboratory. The advantages and challenges of leveraging the computational power of BG/Q will be detailed through the discussion of two applications. The first is a molecular dynamics code called ddcMD, developed over the last decade at LLNL and ported to BG/Q. The second is a cardiac modeling code called Cardioid, recently designed and developed at LLNL to exploit the fine-scale parallelism of BG/Q's SMP architecture. Through the lens of these efforts I will illustrate the need to rethink how we express and implement our computational approaches.
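A recurring theme in scaling codes of this kind to millions of hardware threads is decomposing short-range work into many small, independent tasks. As a hedged illustration only, and not the actual LLNL decomposition used in ddcMD or Cardioid, the sketch below bins particles into cells of side at least the cutoff radius, so that every cell-pair block of interactions is an independent unit of work that could be farmed out to a thread; the Lennard-Jones potential and all parameters are assumptions for the example.

```python
import numpy as np
from collections import defaultdict
from itertools import product

def cell_pairs(pos, box, rcut):
    """Yield candidate interacting pairs via a 3D cell list. Each
    (cell, neighbor-cell) block is independent work, the kind of
    fine-grained task a many-core SMP node needs to stay busy.
    Assumes box/rcut >= 3 so cells do not wrap onto themselves."""
    n = max(3, int(box / rcut))
    w = box / n
    cells = defaultdict(list)
    for i, r in enumerate(pos):
        cells[tuple((r // w).astype(int) % n)].append(i)
    for c, members in cells.items():
        for d in product((-1, 0, 1), repeat=3):
            nc = tuple((np.array(c) + d) % n)
            if nc < c:                       # visit each cell pair once
                continue
            for i in members:
                for j in cells.get(nc, []):
                    if nc != c or j > i:     # no self or double counting
                        yield i, j

def lj_energy(pos, box, rcut):
    """Truncated Lennard-Jones energy over cell-list pairs,
    using the minimum-image convention for periodic boundaries."""
    e = 0.0
    for i, j in cell_pairs(pos, box, rcut):
        d = pos[i] - pos[j]
        d -= box * np.round(d / box)         # minimum image
        r2 = float(d @ d)
        if r2 < rcut * rcut:
            inv6 = (1.0 / r2) ** 3
            e += 4.0 * (inv6 * inv6 - inv6)
    return e

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    pos = rng.random((200, 3)) * 10.0        # 200 particles in a 10x10x10 box
    print("U =", lj_energy(pos, box=10.0, rcut=2.5))
```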
Tuesday, March 19, 2013, 4:54PM - 5:30PM
J9.00005: Overcoming Communication Latency Barriers in Massively Parallel Molecular Dynamics Simulations on Anton
Invited Speaker: Ron Dror
Strong scaling of scientific applications on parallel architectures is increasingly limited by communication latency. This talk will describe the techniques used to reduce latency and mitigate its effects on performance in Anton, a massively parallel special-purpose machine that accelerates molecular dynamics (MD) simulations by orders of magnitude compared with the previous state of the art. Achieving this speedup required both specialized hardware mechanisms and a restructuring of the application software to reduce network latency, sender and receiver overhead, and synchronization costs. Key elements of Anton's approach, in addition to tightly integrated communication hardware, include formulating data transfer in terms of counted remote writes and leveraging fine-grained communication. Anton delivers end-to-end inter-node latency significantly lower than any other large-scale parallel machine, and the total critical-path communication time for an Anton MD simulation is less than 3% that of the next-fastest MD platform.
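The "counted remote writes" idea named in the abstract can be sketched in a few lines: senders deposit data directly into receiver-side memory and bump a counter, and the receiver's only synchronization is waiting for that counter to reach the number of writes it expects in the current time step. The toy model below uses Python threads standing in for Anton's network hardware; the class and method names are invented for illustration and are not Anton's interfaces.

```python
import threading

class CountedWriteRegion:
    """Toy model of counted remote writes: no per-message handshakes,
    just direct deposits plus one counter the receiver waits on."""
    def __init__(self, nslots):
        self.buf = [None] * nslots
        self.count = 0
        self.cv = threading.Condition()

    def remote_write(self, slot, payload):
        """A 'remote node' writes straight into the receiver's buffer
        and increments the arrival counter."""
        with self.cv:
            self.buf[slot] = payload
            self.count += 1
            self.cv.notify_all()

    def wait_for(self, expected):
        """The receiver's single synchronization point: block until the
        expected number of writes has landed, then consume the buffer."""
        with self.cv:
            self.cv.wait_for(lambda: self.count >= expected)
            data, self.buf = self.buf, [None] * len(self.buf)
            self.count = 0
            return data

if __name__ == "__main__":
    region = CountedWriteRegion(nslots=4)
    # Four "neighbor nodes" push their contributions concurrently.
    senders = [threading.Thread(target=region.remote_write,
                                args=(k, f"forces from node {k}"))
               for k in range(4)]
    for t in senders:
        t.start()
    print(region.wait_for(expected=4))   # proceeds once all 4 arrive
    for t in senders:
        t.join()
```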