Bulletin of the American Physical Society
APS April Meeting 2013
Volume 58, Number 4
Saturday–Tuesday, April 13–16, 2013; Denver, Colorado
Session G6: Invited Session: Computational Physics at the Bleeding Edge
Sponsoring Units: DCOMP | Chair: Timothy Germann, Los Alamos National Laboratory | Room: Governor's Square 15
Sunday, April 14, 2013, 8:30AM - 9:06AM
G6.00001: Computational Astrophysics at the Bleeding Edge: Simulating Core Collapse Supernovae. Invited Speaker: Anthony Mezzacappa. Core collapse supernovae are the single most important source of elements in the Universe, dominating the production of elements between oxygen and iron and likely responsible for half the elements heavier than iron. They result from the death throes of massive stars, beginning with stellar core collapse and the formation of a supernova shock wave that must ultimately disrupt such stars. Past first-principles models most often led to the frustrating conclusion that the shock wave stalls and is not revived, at least given the physics included in the models. However, recent progress in the context of two-dimensional, first-principles supernova models is reversing this trend, giving us hope that we are on the right track toward a solution of one of the most important problems in astrophysics. Core collapse supernovae are multi-physics events, involving general relativity, hydrodynamics and magnetohydrodynamics, nuclear burning, and radiation transport in the form of neutrinos, along with a detailed nuclear physics equation of state and neutrino weak interactions. Computationally, simulating these catastrophic stellar events presents an exascale computing challenge. I will discuss past models and milestones in core collapse supernova theory, the state of the art, and future requirements. In this context, I will present the results and plans of the collaboration led by ORNL and the University of Tennessee.
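The collapse phase described above unfolds on the gravitational free-fall timescale of the stellar core. As a back-of-envelope sketch (not part of the abstract; the density below is a typical textbook value for a pre-collapse iron core, not a figure from the talk), the Newtonian estimate t_ff = sqrt(3*pi / (32*G*rho)) already shows why the dynamical phase lasts only tens of milliseconds:

```python
import math

# Newtonian free-fall time for a uniform sphere: t_ff = sqrt(3*pi / (32*G*rho)).
# Illustrative estimate only; real collapse calculations are fully
# general-relativistic and multi-physics, as the abstract describes.
G = 6.674e-8  # gravitational constant in cgs units (cm^3 g^-1 s^-2)

def free_fall_time(rho):
    """Free-fall collapse time in seconds for mean density rho (g/cm^3)."""
    return math.sqrt(3.0 * math.pi / (32.0 * G * rho))

# A pre-collapse iron core has rho of order 1e9 g/cm^3,
# so the dynamical collapse takes only a few tens of milliseconds.
t_collapse = free_fall_time(1e9)
```

The millisecond dynamical time, combined with the need to follow neutrino transport and nuclear burning over hundreds of such times, is one reason these simulations are an exascale challenge.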
Sunday, April 14, 2013, 9:06AM - 9:42AM
G6.00002: Computational Cosmology at the Bleeding Edge. Invited Speaker: Salman Habib. Large-area sky surveys are providing a wealth of cosmological information to address the mysteries of dark energy and dark matter. Observational probes based on tracking the formation of cosmic structure are essential to this effort, and rely crucially on N-body simulations that solve the Vlasov-Poisson equation in an expanding Universe. As statistical errors from survey observations continue to shrink, and cosmological probes increase in number and complexity, simulations are entering a new regime in their use as tools for scientific inference. Changes in supercomputer architectures provide another rationale for developing new parallel simulation and analysis capabilities that can scale to computational concurrency levels measured in the millions to billions. In this talk I will outline the motivations behind the development of the HACC (Hardware/Hybrid Accelerated Cosmology Code) extreme-scale cosmological simulation framework and describe its essential features. By exploiting a novel algorithmic structure that allows flexible tuning across diverse computer architectures, including accelerated and many-core systems, HACC has attained a performance of 14 PFlops on the IBM BG/Q Sequoia system at 69% of peak, using more than 1.5 million cores.
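The N-body simulations mentioned above sample the Vlasov-Poisson system with gravitating tracer particles. The toy below is a hedged sketch of only the core particle dynamics: direct pairwise summation with a symplectic kick-drift-kick leapfrog, in arbitrary units with G = 1, and without cosmological expansion. It is emphatically not HACC's algorithm, which uses particle-mesh and tree/short-range methods to reach trillions of particles:

```python
import numpy as np

def accelerations(pos, mass, soft=1e-2):
    """Softened pairwise gravitational accelerations (G = 1, direct sum)."""
    d = pos[None, :, :] - pos[:, None, :]   # d[i, j] = pos[j] - pos[i]
    r2 = (d ** 2).sum(-1) + soft ** 2       # softened squared distances
    np.fill_diagonal(r2, np.inf)            # exclude self-interaction
    return (d * (mass[None, :, None] / r2[..., None] ** 1.5)).sum(axis=1)

def kdk_step(pos, vel, mass, dt):
    """One kick-drift-kick leapfrog step (second order, symplectic)."""
    vel = vel + 0.5 * dt * accelerations(pos, mass)
    pos = pos + dt * vel
    vel = vel + 0.5 * dt * accelerations(pos, mass)
    return pos, vel

# Evolve a small random particle cloud; equal masses summing to unity.
rng = np.random.default_rng(0)
pos = rng.standard_normal((64, 3))
vel = np.zeros((64, 3))
mass = np.ones(64) / 64
for _ in range(100):
    pos, vel = kdk_step(pos, vel, mass, 1e-3)
```

Because pairwise forces obey Newton's third law, total momentum is conserved to round-off; at survey scale, the O(N^2) direct sum is replaced by the fast solvers and architecture-specific tuning the abstract alludes to.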
Sunday, April 14, 2013, 9:42AM - 10:18AM
G6.00003: Computational Plasma Physics at the Bleeding Edge: Simulating Kinetic Turbulence Dynamics in Fusion Energy Sciences. Invited Speaker: William Tang. Advanced computing is generally recognized to be an increasingly vital tool for accelerating progress in scientific research in the 21st century. The imperative is to translate the combination of the rapid advances in supercomputing power together with the emergence of effective new algorithms and computational methodologies to help enable corresponding increases in the physics fidelity and the performance of the scientific codes used to model complex physical systems. If properly validated against experimental measurements and verified with mathematical tests and computational benchmarks, these codes can provide more reliable predictive capability for the behavior of complex systems, including fusion energy relevant high temperature plasmas. The magnetic fusion energy research community has made excellent progress in developing advanced codes for which computer run-time and problem size scale very well with the number of processors on massively parallel supercomputers. A good example is the effective usage of the full power of modern leadership class computational platforms from the terascale to the petascale and beyond to produce nonlinear particle-in-cell simulations which have accelerated progress in understanding the nature of plasma turbulence in magnetically confined high temperature plasmas. Illustrative results provide great encouragement for being able to include increasingly realistic dynamics in extreme-scale computing campaigns to enable predictive simulations with unprecedented physics fidelity. Some illustrative examples will be presented of the algorithmic progress from the magnetic fusion energy sciences area in dealing with low memory per core extreme scale computing challenges for the current top 3 supercomputers worldwide. These include advanced CPU systems (such as the IBM Blue Gene/Q system and the Fujitsu K Machine) as well as the GPU-CPU hybrid system (Titan).
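The particle-in-cell method the abstract refers to repeats a scatter/solve/gather/push cycle each timestep. The toy below is a hedged, bare-bones 1D electrostatic version of that cycle in normalized units (unit mass, periodic box, FFT Poisson solve); production gyrokinetic PIC codes execute the same pattern in far richer geometry and physics, and with very different data layouts tuned for the low-memory-per-core machines mentioned above:

```python
import numpy as np

ng, L = 64, 2 * np.pi      # grid cells and periodic box length
dx = L / ng

def deposit(x, q):
    """Scatter: cloud-in-cell charge deposition onto the periodic grid."""
    rho = np.zeros(ng)
    cell = np.floor(x / dx).astype(int) % ng
    frac = x / dx - np.floor(x / dx)
    np.add.at(rho, cell, q * (1 - frac) / dx)
    np.add.at(rho, (cell + 1) % ng, q * frac / dx)
    return rho

def solve_field(rho):
    """Solve: -phi'' = rho spectrally (periodic), then E = -phi'."""
    k = 2 * np.pi * np.fft.fftfreq(ng, d=dx)
    rho_hat = np.fft.fft(rho - rho.mean())   # neutralizing background
    phi_hat = np.zeros_like(rho_hat)
    phi_hat[1:] = rho_hat[1:] / k[1:] ** 2   # skip k = 0 mode
    return np.real(np.fft.ifft(-1j * k * phi_hat))

def gather(E, x):
    """Gather: interpolate the grid field back to particle positions."""
    cell = np.floor(x / dx).astype(int) % ng
    frac = x / dx - np.floor(x / dx)
    return E[cell] * (1 - frac) + E[(cell + 1) % ng] * frac

def step(x, v, q, dt):
    """Push: one leapfrog PIC cycle (deposit -> solve -> gather -> move)."""
    E = solve_field(deposit(x, q))
    v = v + q * gather(E, x) * dt            # unit particle mass
    x = (x + v * dt) % L
    return x, v

# A cold, uniform electron population over a neutralizing ion background.
rng = np.random.default_rng(1)
n = 512
x = rng.uniform(0, L, n)
v = np.zeros(n)
q = -L / n                                   # total particle charge -L
for _ in range(20):
    x, v = step(x, v, q, 0.05)
```

The cloud-in-cell deposit conserves total charge exactly, which is one of the invariants such codes verify when scaling to millions of cores.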