J.E. CAZES, J.E. TOHLINE, H.S. COHL, AND P.M. MOTL
Louisiana State University
Department of Physics & Astronomy
Baton Rouge, LA 70803-4001
The Astrophysical Problem

From a careful analysis of stellar populations, it has been clear for at least the past fifteen years that the vast majority of stars in our neighborhood of the Galaxy are in binary systems.1 That is to say, well over fifty percent of all stars have a stellar companion about which they orbit. In this sense, our own solar system is an exception because the Sun is not gravitationally bound in an orbit about another star. Although we have gained a general appreciation of how stars form from low-density gas in the interstellar medium of our Galaxy,2 we do not yet understand why stars preferentially form in pairs.
In a general sense, we understand that stars form from the gravitational collapse of large, slowly rotating gas clouds. Initially, the collapse is dynamical and, due to conservation of angular momentum, each individual gas cloud rapidly spins up. At the end of this initial phase of dynamical gravitational collapse, the cloud resembles an oblate spheroid rotating about its short axis. The cloud is supported against further dynamical collapse by a combination of thermal pressure and rotation. But cloud contraction generally continues on a longer (thermal) time scale as each cloud slowly loses thermal energy via radiation processes. Still conserving angular momentum throughout this slow contraction phase, the cloud rotates faster and becomes more and more flattened. At a sufficiently rapid rate of rotation, it becomes energetically favorable for a self-gravitating gas cloud to evolve to a nonaxisymmetric configuration. Analytical studies of incompressible fluids suggest that initially the energetically preferred structure will be ellipsoidal in shape. We believe that as this nonaxisymmetric structure continues to cool, however, it will suffer an additional instability, causing it to fission into two centrally condensed clouds. If such a "fission instability" indeed arises, it should explain why the vast majority of stars form in pairs.
In order to build realistic 3D models of these nonaxisymmetric gas clouds, we must turn to numerical techniques. The computational fluid dynamics (CFD) code used to model our fluid flows and the relevant equations are described in the following section. Since the numerical techniques produce large quantities of data, the techniques required to examine the data become almost as important as the techniques that were used in the simulations that produced the data. As we describe in the Visualization section, below, we rely heavily on 3D volume-rendering and animation techniques to analyze our CFD simulations. The heterogeneous computing environment, hereafter referred to as the HCE, is a tool we have developed to link these two pieces of our research project together.
Computational Fluid Dynamic Simulations
In order to test the fission hypothesis of binary star formation, one must first construct physically relevant equilibrium models. Although we are unable to create nonaxisymmetric equilibrium gas cloud models with nontrivial flows, we are able to routinely construct axisymmetric equilibrium models with differential rotation. Using a self-consistent-field (SCF) technique developed by Hachisu3, we are able to create a wide variety of axisymmetric rapidly rotating gas clouds with compressible equations of state. (The interested reader is referred to an online version of Hachisu's SCF technique that we have developed for both instructional4 and application purposes.)
To test the fission hypothesis of binary star formation, one must examine whether these axisymmetric equilibrium models are unstable to the development of nonaxisymmetric structure and, for models found to be unstable, determine whether the nonlinear development of the instability leads to fission. To study the nonlinear development of nonaxisymmetric instabilities in rotating gas clouds, we have developed a CFD code to solve the following coupled set of partial differential equations:
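Written out in a standard form (a sketch only; the precise formulation adopted in the code, e.g. the choice of energy variable, may differ in detail), the set comprises the continuity, Euler (momentum), internal-energy, and Poisson equations:

```latex
% Standard equations of inviscid, self-gravitating fluid flow, using the
% variables named in the text: velocity v, pressure P, mass density rho,
% specific internal energy epsilon, gravitational potential Phi.
\begin{align}
  \frac{\partial \rho}{\partial t} + \nabla\!\cdot\!(\rho\,\mathbf{v}) &= 0
    && \text{(mass conservation)} \\
  \frac{\partial \mathbf{v}}{\partial t}
    + (\mathbf{v}\!\cdot\!\nabla)\,\mathbf{v}
    &= -\frac{1}{\rho}\,\nabla P - \nabla\Phi
    && \text{(momentum)} \\
  \frac{\partial \epsilon}{\partial t}
    + (\mathbf{v}\!\cdot\!\nabla)\,\epsilon
    &= -\frac{P}{\rho}\,\nabla\!\cdot\!\mathbf{v}
    && \text{(internal energy)} \\
  \nabla^{2}\Phi &= 4\pi G \rho
    && \text{(Poisson)}
\end{align}
```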
These equations, describing the inviscid flow of self-gravitating continuum fluids, together with an appropriate equation of state, relate the time and spatial variation of the fluid velocity v to the pressure P, the mass density ρ, the specific internal energy ε, and the gravitational potential Φ in a physically consistent fashion.4, 5 By restricting our discussion to physical systems governed by this set of equations, we are assuming that no electromagnetic forces act on the fluid (e.g., the effects of magnetic fields on an ionized plasma are not considered) and that, in the absence of dynamically generated shocks, all compressions and rarefactions happen adiabatically.
Our CFD code is patterned after the ZEUS-2D code developed by Stone and Norman6, but it has been extended to handle three-dimensional (3D) fluid systems and has been written in High Performance Fortran (HPF) to execute on a variety of different massively parallel computing platforms. To resolve the structural properties of our 3D flows well, we have found it necessary to utilize at least 128³ grid zones. Hence, on parallel computing platforms with 64-bit floating-point processors, each 3D array requires 16 MBytes of storage and a typical simulation demands a minimum of 8 GBytes of RAM. Most of our simulations over the past several years have been performed on an 8K-node MasPar MP-1 in LSU's Concurrent Computing Laboratory for Materials Simulation (CCLMS), and on the Cray T3E at both the NAVOCEANO DoD Major Shared Resource Center and the San Diego Supercomputer Center, although our CFD code's performance has been measured on a variety of different machine architectures.7
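The per-array storage figure follows directly from the grid size (a quick check, assuming 8-byte, 64-bit floating-point words):

```shell
# Each 3D array holds 128^3 grid zones of 8-byte floating-point values.
echo $(( 128 * 128 * 128 * 8 ))            # bytes per array
echo $(( 128 * 128 * 128 * 8 / 1048576 ))  # in MBytes -> 16
```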
Visualization

In a very real sense, each of our astrophysical fluid simulations generates data that can fill a very large, four-dimensional data array for each of the principal fluid variables. The arrays are 4D because each variable is defined on a three-dimensional spatial lattice as a function of time. Figuring out how to sort through this large amount of data to garner useful physical information about the fluid flow is a major challenge by itself.
When conducting nonlinear stability analyses of rapidly rotating gas clouds in the context of star formation, as described above, we have found the most useful diagnostic tool to be an animation sequence showing the time-evolution of isodensity surfaces in each evolving cloud. To generate such an animation sequence, historically we (as well as other researchers) usually have adopted the following plan: run the simulation on the supercomputer, archiving the 3D density data at regular intervals; transfer the archived data to a local workstation or visualization center; and, as a post-processing step, render an isodensity surface from each data dump and assemble the resulting images into an animation.
Although effective, this plan, which relegates the data analysis (visualization) to a post-processing task, is generally inefficient and places large demands on data storage. Moreover, local workstations usually lack the sophisticated software and specialized hardware that visualization centers use to permit interactive exploration of large 3D data sets and to create high-quality images of multiple isosurfaces.
As we summarize in simple tabular form below, our solution to this problem has been to develop a heterogeneous computing environment (HCE) through which our two primary computational tasks (CFD simulation and visualization) are performed simultaneously on two separate computing platforms, each of which has been configured to handle its assigned task in an optimal fashion. Communication between the tasks (the link) is accomplished over existing local area networks.
A Heterogeneous Computing Environment

| Year | CFD Simulation Platform | The Link: Data Transfer | The Link: Process Control | Visualization Platform |
|------|-------------------------|-------------------------|---------------------------|------------------------|
| 1993 | MasPar MP-1             | NFS cross-mounted disks | unix sockets              | Sparcstation           |
| 1998 | Cray T3E                | ftp                     | remote shell script       | SGI Onyx               |
Initially, in 1993, we developed the HCE utilizing existing computing platforms within LSU's CCLMS. Specifically, the fluid simulations were performed on our 8K-node MasPar MP-1 using a MasPar Fortran (mpf) version of the CFD code described above. The primary visualization task (rendering one green 3D isodensity surface at various instants in time during the CFD simulation) was performed on any one of several Sun Microsystems workstations utilizing the commercial software package IDL.
At predetermined intervals of time during the CFD simulation, our mpf program would recognize that a volume-rendered image of the flow should be constructed for inclusion in an animation sequence. At each of these instants in time, data was transferred from the MasPar CFD application to the Sparcstation rendering tool by simply writing one 3D data array (of cloud densities) to an NFS cross-mounted disk. Process control for the visualization task was then passed from the MasPar to the Sparcstation via unix sockets. After spawning the visualization task, the MasPar would continue following the evolution of the fluid flow, running the CFD simulation task in parallel with the visualization task on the Sparcstation. As the volume-rendering algorithm finished generating each image, it would immediately delete the 3D density data array, thereby conserving substantial disk space.
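The essence of this loop can be sketched in a few lines of shell. This is a minimal illustration, not the production code: the path and the rendering step are stand-ins (the real density array was written to an NFS cross-mounted disk, control was passed via unix sockets, and IDL did the rendering on the Sparcstation).

```shell
# Stand-in for the NFS cross-mounted disk shared by MasPar and Sparcstation.
workdir=/tmp/hce1993
mkdir -p "$workdir"
frame="$workdir/rho_0100.dat"
printf 'one 3D density array\n' > "$frame"   # stand-in for the mpf data dump

render_and_cleanup() {
  # Sparcstation side: volume-render the frame, then free the disk space.
  # idl -e "render_isosurface, '$1'"         # (production rendering step)
  touch "${1%.dat}.img"                      # stand-in for the rendered image
  rm "$1"                                    # delete the 3D array immediately
}

render_and_cleanup "$frame" &   # spawned task runs in parallel with the CFD
wait                            # demo only: wait so the result can be checked
ls "$workdir"                   # -> rho_0100.img
```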
One example animation that was created in this manner is presented in Movie 1. This animation shows the evolution of a stable, common-envelope binary star system that was studied recently by New and Tohline.8
More recently, we have developed an HCE at two separate national supercomputing centers: the DoD Major Shared Resource Center at the Naval Oceanographic Office at Stennis Space Center and the NSF-supported San Diego Supercomputer Center. Our CFD simulations are performed on the Cray T3E utilizing an HPF version of our simulation code and the Portland Group's HPF (PGHPF) compiler. The primary visualization task (rendering four nested 3D isodensity surfaces at each specified instant in time) is performed on an SGI Onyx using Alias|Wavefront software.
At both supercomputing centers, these two hardware platforms were not installed with the idea that they would provide an HCE for the general user. Indeed, at neither center do the platforms even share cross-mounted disks. Hence, we have had to rely upon ftp commands to transfer data from the Cray T3E to a disk mounted on the SGI Onyx, and process control for the visualization task has been passed via a remote shell script.
Figure 1 illustrates schematically how the HCE links the Cray T3E and the SGI Onyx together. Each color represents a shell script used in processing the data. The first two shell scripts, JVSRUN and JEEVES, are run on the T3E. JVSRUN is the link between our CFD code and the shell scripts that actually process the data; it is kept separate from JEEVES so that the CFD code can continue running without waiting for the data transfer to finish. JEEVES is responsible only for transferring the data to the SGI for processing, while HOLMES controls the visualization process and archiving of the results. For a more detailed view of these processes, one can click on one of the colored sections of Figure 1 to go to the appropriate appendix file and view the actual shell script.
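The JVSRUN/JEEVES hand-off can be sketched as follows. This is a hypothetical illustration only; the actual scripts are in the appendix files linked from Figure 1, and the hostname, paths, and HOLMES invocation shown in comments here are invented.

```shell
outgoing=/tmp/hce_t3e            # stand-in for the T3E-side dump directory
incoming=/tmp/hce_onyx           # stand-in for the Onyx-mounted disk
mkdir -p "$outgoing" "$incoming"
frame=rho_0042.dat
printf 'one 3D density array\n' > "$outgoing/$frame"

jeeves() {
  # In production: ftp the frame to a disk mounted on the SGI Onyx, then
  # trigger HOLMES there via a remote shell, e.g.  rsh onyx HOLMES "$1"
  cp "$outgoing/$1" "$incoming/"   # local stand-in for the ftp transfer
  rm "$outgoing/$1"                # the T3E-side copy is no longer needed
}

# JVSRUN's essential trick: background the transfer and return at once,
# so the CFD code resumes the simulation without waiting on I/O.
jeeves "$frame" &
wait                               # demo only: wait so results can be checked
ls "$incoming"                     # -> rho_0042.dat
```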
Below are four examples of this technique in use. The first example was produced by the HCE technique described above. The following three were produced by hand from an interesting time slice of our data, although, using the techniques outlined above, we could just as easily have automated any one of these processes to run concurrently with our simulations. In the future, we will probably automate the creation of a VRML file at the end of each simulation run.
Movie 2 is an animation sequence that was produced entirely through our current heterogeneous computing environment. It shows the nonlinear development of a two-armed, spiral mode instability that develops spontaneously in rapidly rotating self-gravitating gas clouds.

In Movie 3, we see an example of an equatorial flow at an instant in time. We've set this up so the green balls follow the streamlines of the instantaneous flow field. By placing tracer particles in our simulation and writing out their positions with each isosurface, it would be trivial to change this from a movie of tracer particles following a streamline fixed in time to a movie of tracer particles tracing a streakline evolving in time.

Movie 4 illustrates the 3D flow field along with various isodensity surfaces. These surfaces could just as easily be isosurfaces of pressure, temperature, or any other scalar quantity.

Finally, we have an example of a VRML model. This VRML file was created in much the same way Movie 4 was created. The major advantage of the VRML file is its interactive nature: it allows us to view the 3D flow along with the density isosurfaces from many different angles. For the previous movies, the viewing angle must be set up ahead of time and chosen carefully, since our technique destroys the data after processing it.
Each of the movies presented here and the VRML model are examples of what can be produced using the HCE. Only the first two movies were actually produced through this technique, but the processes used to create the other movies are very similar. For our current research, the isodensity surfaces and streamlines were sufficient to provide insight into our models. For future problems, however, more physics will need to be added to the simulations, which will increase their complexity and, therefore, the difficulty of analyzing the data. These techniques can just as easily be modified to produce animations or 3D models of other global quantities, such as the temperature, the pressure, or the vorticity. For our purposes, we would like to extend our capabilities to produce movies with evolving vector fields and 3D animations that can be rotated and scaled while they are evolving.
With the advent of parallel supercomputers, we finally have the capability to perform realistic, fully three-dimensional astrophysical fluid simulations, but to achieve these results we sacrifice the interactivity and quick response we get from desktop systems. Analyzing these data sets on our local desktop systems is a slow, cumbersome, and sometimes impossible process. The HCE provides an elegant solution to this problem: it allows us to match the strengths of the expensive high-end systems at the supercomputing centers to the appropriate parts of our problem. The T3E is used for the large-scale CFD simulation, while the SGI Onyx is used to visualize our data promptly. This permits an almost instantaneous examination of results while minimizing the amount of data that must be archived locally, and it allows us to process our data on a much finer timescale than was ever before possible.
There are a few caveats. For this technique to work well, one must have a well-posed problem that, in general, produces results within an expected range; if the results are completely unknown, it would be difficult to set up an automated analysis technique beforehand. Another difficulty lies in automating the analysis itself: most software packages are written for interactive data analysis, and it was nontrivial to set up the rendering software we use to handle the data in a batch mode.
This work has been supported, in part, by the U.S. National Science Foundation through grant AST-9528424; in part by NASA/LaSPACE under grant NGT5-40335 and the Louisiana Board of Regents, LEQSF; and in part by grants of high-performance computing time at the NAVOCEANO DoD Major Shared Resource Center in Stennis, MS, through the PET program, and at the San Diego Supercomputer Center.