Bulletin of the American Physical Society
60th Annual Meeting of the Division of Fluid Dynamics
Volume 52, Number 12
Sunday–Tuesday, November 18–20, 2007; Salt Lake City, Utah
Session BS: Mini-Symposium I: Turbulence Simulations and Advanced Cyberinfrastructure
Chair: P.K. Yeung, Georgia Institute of Technology
Room: Salt Palace Convention Center Ballroom EG
Sunday, November 18, 2007 10:34AM - 11:00AM
BS.00001: DNS of incompressible turbulence in a periodic box with up to 4096$^{3}$ grid points Invited Speaker: Turbulence of an incompressible fluid obeying the Navier-Stokes (NS) equations under periodic boundary conditions is one of the simplest dynamical systems that retains the essence of turbulence dynamics, and it is well suited to the study of high-Reynolds-number (\textit{Re}) turbulence by direct numerical simulation (DNS). This talk reviews DNS of such a system with the number $N^{3}$ of grid points up to 4096$^{3}$, performed on the Earth Simulator (ES). The ES consists of 640 processor nodes (5120 arithmetic processors) with 10 TB of main memory and a peak performance of 40 Tflops. The DNSs are based on a spectral method free from aliasing error. The convolution sums in wave-vector space were evaluated by radix-4 Fast Fourier Transforms in double-precision arithmetic. A sustained performance of 16.4 Tflops was achieved in the 2048$^{3}$ DNS using 512 processor nodes of the ES. The DNSs comprise two series: Series 1 with $k_{max}\eta \cong 1$ and Series 2 with $k_{max}\eta \cong 2$, where $k_{max}$ is the highest wavenumber in each simulation and $\eta$ is the Kolmogorov length scale. In the 4096$^{3}$ DNS, the Taylor-scale Reynolds number is $R_{\lambda}\cong 1130$ (675), and the ratio $L/\eta$ of the integral length scale $L$ to $\eta$ is approximately 2133 (1040), in Series 1 (Series 2). Such DNS data are expected to shed light on basic questions in turbulence research, including (i) the normalized mean rate of energy dissipation in the high-\textit{Re} limit, (ii) the universality of the energy spectrum at small scales, (iii) the scale and \textit{Re} dependences of the statistics, and (iv) intermittency.
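The pseudo-spectral evaluation of nonlinear terms described above can be sketched in one dimension. The following minimal NumPy example uses the 2/3-rule truncation, one standard way to remove aliasing errors from an FFT-based convolution; it is an illustration only, not the particular alias-free scheme used on the ES.

```python
import numpy as np

def dealiased_product(u_hat, v_hat):
    """Pointwise product u*v computed pseudo-spectrally with the 2/3 rule.

    u_hat, v_hat: complex Fourier coefficients in numpy.fft ordering.
    Returns the Fourier coefficients of u*v with aliased modes removed.
    (Illustrative 1-D sketch of dealiased convolution via FFTs.)
    """
    n = u_hat.size
    k = np.fft.fftfreq(n, d=1.0 / n)   # integer wavenumbers 0..n/2-1, -n/2..-1
    mask = np.abs(k) <= n // 3         # 2/3 rule: keep |k| <= n/3
    u = np.fft.ifft(u_hat * mask)      # transform truncated fields to physical space
    v = np.fft.ifft(v_hat * mask)
    w_hat = np.fft.fft(u * v)          # convolution = product in physical space
    w_hat[~mask] = 0.0                 # discard modes that could be aliased
    return w_hat
```

With both inputs truncated to $|k| \le n/3$, any aliased contribution falls outside the retained band, so the surviving coefficients are exact.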
We have constructed a database consisting of (a) animations and figures of turbulent fields, (b) statistics, including those associated with (i)-(iv) above, and (c) snapshot data of the velocity fields. The data size of (c) can be very large for large $N$; for example, one single-precision snapshot of the velocity vector field of the 4096$^{3}$ DNS requires approximately 0.8 TB.
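The quoted snapshot size follows from simple arithmetic: three single-precision velocity components on a $4096^3$ grid.

```python
# One single-precision velocity snapshot of the 4096^3 DNS.
N = 4096
bytes_per_point = 3 * 4                 # u, v, w at 4 bytes each
snapshot_bytes = bytes_per_point * N**3
snapshot_tb = snapshot_bytes / 1e12     # decimal terabytes
# snapshot_tb is about 0.82 TB -- the "approximately 0.8 TB" quoted above
```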
Sunday, November 18, 2007 11:00AM - 11:26AM
BS.00002: Simulating wall-bounded turbulence Invited Speaker: In the past 20 years, direct simulations of turbulent channels have grown from $Re_\tau \approx 200$ to 2000, and the number of grid points from about 5~Mp to 20~Gp. This has given us access to buffer-layer dynamics, and now to incipient logarithmic layers and cascades. We can now do conceptual experiments on the latter (100~Mp), which should soon lead to dynamical understanding. DNS of channels has become a full partner of experiments, both in data quality and in Reynolds number. Challenges persist. A full log layer would require $Re_\tau \approx 10^4$ and a petapoint; that should happen in 5-10 years. Other flows have progressed more slowly. Zero-pressure-gradient (ZPG) boundary layers are only now being pushed to $Re_\tau \approx 10^3$ (5~Gp), and almost nothing is available on the adverse-pressure-gradient boundary layer. We have grown used to computer growth, but new problems are appearing. The processor count for DNS has grown by a factor of 10 per decade (now about 1000), and it is not clear whether present methods will work on $10^5$-$10^6$ processors. Even now, CPU time is becoming stochastic as different users compete for memory access. Postprocessing is a bottleneck: we now store O(10~\mbox{TB}) per simulation (1~KB/point), and this will soon grow into PBs. Postprocessing software tends to be ad hoc and not well adapted to shared facilities. Managing TBs on `personal' computers is possible, but hard. Even harder, and more important, is sharing this amount of information among groups. The prospects are nevertheless good: the community has faced similar challenges before and, with the help of computer scientists and others, has succeeded up to now.
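The storage figures quoted above are mutually consistent, as a back-of-envelope check shows (taking the ~20 Gp channel at $Re_\tau \approx 2000$ and "1 KB/point" of stored fields per simulation).

```python
# Consistency check of the quoted storage figures (illustrative arithmetic only).
points = 20e9              # ~20 Gp at Re_tau ~ 2000
bytes_per_point = 1024     # "1 KB/point" of stored history
total_tb = points * bytes_per_point / 1e12
# total_tb is about 20 TB, consistent with "O(10 TB) per simulation"
```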
Sunday, November 18, 2007 11:26AM - 11:52AM
BS.00003: Cyber-enabled investigations in Lagrangian turbulence Invited Speaker: Direct numerical simulation (DNS) of three-dimensional homogeneous isotropic turbulence on a periodic domain constitutes an important laboratory for the study of the fundamental dynamical properties of advected particles. Along with the standard integration of the Eulerian field, several models for the transport of Lagrangian particles can be used, from the simplest to the most sophisticated. We will report on recent state-of-the-art numerical efforts aimed at paralleling recent, current (and future) experimental investigations. At present, neither experiments nor numerical simulations have enough resolution to fully disclose the phenomenology of the many aspects of Lagrangian turbulence; in this respect, the route towards Petascale computing is essential for investigating the physics of Lagrangian turbulence. Particularly important, when dealing with large-scale simulations, is the issue of large-volume I/O and, in particular, of its subsequent post-processing. Efforts towards collaborative explorations, the development of common data formats, state-of-the-art numerical databases, and other ideas directed at maximizing the outcome of numerical efforts will be presented and discussed. The international database of CFD cases, iCFDdatabase (http://cfd.cineca.it), from which raw data from tens of different scientific cases can be freely downloaded, will also be briefly presented and discussed.
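The simplest of the transport models mentioned above is a passive tracer, $d\mathbf{x}/dt = \mathbf{u}(\mathbf{x},t)$. A minimal sketch (second-order Runge-Kutta in a periodic box, with the velocity left as an arbitrary callable standing in for an interpolated DNS field; the function and its parameters are illustrative, not any group's actual code):

```python
import numpy as np

def advect_tracer(x0, velocity, dt, n_steps, box=2 * np.pi):
    """Integrate dx/dt = u(x, t) for one tracer in a periodic box (RK2 midpoint).

    velocity: any callable u(x, t) -> velocity vector; in a DNS it would be
    the Eulerian field interpolated to the particle position.
    """
    x = np.asarray(x0, float).copy()
    t = 0.0
    for _ in range(n_steps):
        k1 = velocity(x, t)                          # slope at the start
        k2 = velocity(x + 0.5 * dt * k1, t + 0.5 * dt)  # slope at the midpoint
        x = (x + dt * k2) % box                      # advance and wrap periodically
        t += dt
    return x
```

In practice the interpolation scheme (spline, Lagrange, etc.) and time integrator are among the modeling choices the abstract alludes to.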
Sunday, November 18, 2007 11:52AM - 12:18PM
BS.00004: Analysis of Turbulence Datasets using a Database Cluster: Requirements, Design, and Sample Applications Invited Speaker: The massive datasets now generated by Direct Numerical Simulations (DNS) of turbulent flows create serious new challenges. During a simulation, DNS provides only a few time steps at any instant, owing to storage limitations within the computational cluster. Therefore, traditional numerical experiments done during the simulation examine each time slice only a few times before discarding it. Conversely, if a few large datasets from high-resolution simulations are stored, they are practically inaccessible to most of the turbulence research community, who lack the cyber resources to handle such massive amounts of data. Even those who can compute at that scale must rerun simulations forward in time to answer new questions about the dynamics, duplicating computational effort. The result is that most turbulence datasets are vastly underutilized and not available, as they should be, for creative experimentation. In this presentation, we discuss the desired features and requirements of a turbulence database that will give the widest possible access to the research community. The guiding principle of large databases is ``move the program to the data'' (Szalay et al., ``Designing and mining multi-terabyte Astronomy archives: the Sloan Digital Sky Survey,'' in ACM SIGMOD, 2000). In turbulence research, however, the questions and analysis techniques are highly specific to each client and vary widely from one client to another, which poses particularly hard challenges in the design of database analysis tools. We propose a minimal set of such tools that are of general utility across various applications, and we describe a new approach based on a Web-services interface that allows a client to access the data in a user-friendly fashion while retaining maximum flexibility to execute desired analysis tasks.
Sample applications will be discussed. This work is performed by the interdisciplinary ITR group consisting of the author and Yi Li(1), Eric Perlman(2), Minping Wan(1), Yunke Yang(1), Randal Burns(2), Shiyi Chen(1), Gregory Eyink(3), and Alex Szalay(4), with the following departmental affiliations: (1) Mechanical Engineering, (2) Computer Science, (3) Applied Mathematics \& Statistics, (4) Physics and Astronomy.
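The ``move the program to the data'' principle can be illustrated with a toy, entirely hypothetical store: the class and method names below are invented for illustration and are not the Web-services interface described in the abstract. The client ships a small reduction kernel to where the field lives and receives only the reduced result, instead of downloading the full array.

```python
import numpy as np

class TurbulenceStore:
    """Hypothetical server-side store: the field stays with the database."""
    def __init__(self, field):
        self._field = field                  # e.g. one stored velocity component

    def execute(self, kernel):
        """Run a client-supplied analysis kernel next to the data and return
        only the (small) result -- "move the program to the data"."""
        return kernel(self._field)

# A 64^3 stand-in field (4 MB of doubles); a real snapshot would be ~TBs.
field = np.random.default_rng(0).standard_normal((64, 64, 64))
store = TurbulenceStore(field)

# The client ships a reduction (an "energy"-like statistic), not a download
# request: a single float crosses the network instead of the 64^3 array.
result = store.execute(lambda u: float(np.mean(u ** 2)))
```

The design tension the abstract notes is visible even here: each client's `kernel` is different, so the server must expose analysis primitives general enough to cover them.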
Sunday, November 18, 2007 12:18PM - 12:44PM
BS.00005: On Cyber-enabled investigations of turbulent mixing and dispersion on a periodic domain Invited Speaker: Direct numerical simulation (DNS) of homogeneous isotropic turbulence on a 3D periodic domain is an important benchmark problem in developments towards Petascale computing. Advanced cyberinfrastructure resources are expected to allow simulations using $12288^3$ grid points by the year 2011. Subject to a careful choice of parameters and adherence to appropriate accuracy requirements, simulations of this size (or larger) can be expected to provide great opportunities for fundamental studies of turbulence, mixing, and dispersion, including the role of intermittency, viscous-convective scaling, Lagrangian Kolmogorov similarity, and Richardson scaling. However, as the problem sizes continue to grow, a number of highly nontrivial challenges arise. Effective use of $O(10^4)$ to $O(10^6)$ processors requires a highly scalable domain decomposition scheme with efficient interprocessor communication. Large-volume I/O is very demanding on the system hardware and can become a new bottleneck, especially for post-processing, which is vital for contributions to physical understanding and science impact. In this talk we discuss recent progress and experiences with a 2D domain decomposition scheme that has been tested to perform very well in both strong and weak scaling, up to $4096^3$ on 32768 processors. We also make the case for developing collaborative strategies for making data and algorithms available to the research community, with help from leading supercomputer centers.
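The bookkeeping behind a 2D (``pencil'') decomposition can be sketched as follows. Each phase of a 3D FFT leaves one direction undivided so that 1D transforms along it need no communication, with global transposes between phases. The $256 \times 128$ factorization of the 32768 processors used in the check below is an assumption for illustration; the actual processor grid is not stated in the abstract.

```python
def pencil_shapes(n, p_rows, p_cols):
    """Local array shapes in a 2-D ('pencil') decomposition of an n^3 grid
    over a p_rows x p_cols processor grid.

    In each phase, one axis is kept whole (the pencil axis) and the other
    two are split across the processor grid; global transposes move the
    data between phases.  Assumes p_rows and p_cols divide n.
    """
    assert n % p_rows == 0 and n % p_cols == 0
    return {
        "x-pencils": (n, n // p_rows, n // p_cols),
        "y-pencils": (n // p_rows, n, n // p_cols),
        "z-pencils": (n // p_rows, n // p_cols, n),
    }
```

A 1D (slab) decomposition of an $n^3$ grid cannot use more than $n$ processors; the 2D scheme lifts that ceiling to $n^2$, which is what makes $O(10^4)$ to $O(10^6)$ processors reachable.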