Heterogeneous Computing Environment Tohline, Cazes & Cohl

3. Our Solution

As summarized in the table below, our solution to the stated problem has been to develop a heterogeneous computing environment (HCE) in which our two primary computational tasks (CFD simulation and visualization) are performed simultaneously on two separate computing platforms, each configured to handle its assigned task in an optimum fashion. Communication between the tasks (the link) is accomplished over existing local area networks.

A Heterogeneous Computing Environment
  Task    CFD Simulation |            the link                           | Visualization
          Platform       | data transfer           | process control     | Platform
  -----------------------------------------------------------------------------------
  1993    MasPar MP-1    | NFS cross-mounted disks | unix sockets        | Sparcstation
  1998    Cray T3E       | ftp                     | remote shell script | SGI Onyx

Initial Configuration

Initially, in 1993, we developed the HCE utilizing existing computing platforms within LSU's CCLMS. Specifically, the CFD simulations were performed on our 8K-node MasPar MP-1 using a MasPar Fortran (mpf) version of the DNS algorithm described above. The primary visualization task (rendering one green 3D isodensity surface at various instants in time during the CFD simulation) was performed on any one of several Sun Microsystems workstations using the commercial software package IDL.

At predetermined intervals of time during the CFD simulation, our mpf program would recognize that a volume-rendered image of the flow should be constructed for inclusion in an animation sequence. At each such instant, data were transferred from the MasPar CFD application to the Sparcstation rendering tool by simply writing one 3D data array (of cloud densities) to an NFS cross-mounted disk. Process control for the visualization task was then passed from the MasPar to the Sparcstation via unix sockets. After spawning the visualization task, the MasPar would continue following the evolution of the fluid flow, running the CFD simulation in parallel with the visualization task on the Sparcstation. As the volume-rendering algorithm finished generating each image, it would immediately delete the 3D density data array, thereby conserving substantial disk space.
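A minimal sketch of this handoff logic, written in Python for illustration (the original implementation was MasPar Fortran on the simulation side and IDL on the rendering side; the shared path, host name, and port below are hypothetical):

```python
import os
import socket

import numpy as np

DATA_PATH = "/shared/nfs/density"    # hypothetical NFS cross-mounted path
RENDER_HOST = ("sparcstation", 5000) # hypothetical renderer host and port

def hand_off_frame(density, step):
    """Write the 3D density array to the shared disk, then signal the
    renderer over a socket so it can volume-render this frame while the
    simulation continues."""
    path = f"{DATA_PATH}.{step:04d}"
    density.astype(np.float32).tofile(path)           # data transfer: NFS disk
    with socket.create_connection(RENDER_HOST) as s:  # process control: socket
        s.sendall(path.encode() + b"\n")

def render_frame(path, shape):
    """Renderer side (runs on the workstation): read the array, render it,
    then delete the file immediately to conserve disk space."""
    density = np.fromfile(path, dtype=np.float32).reshape(shape)
    # ... volume-render an isodensity surface from `density` here ...
    os.remove(path)
    return density.shape
```

The key design point is that `hand_off_frame` returns as soon as the signal is sent, so the simulation and the rendering proceed in parallel on their two platforms.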

One example animation sequence that was created in this manner is presented in the animated GIF above. The animation shows the evolution of a stable, common-envelope binary star system that was studied recently by New and Tohline [11].

Present Configuration

More recently we have developed an HCE at two separate national supercomputing laboratories: the NSF-supported San Diego Supercomputer Center and the recently established DoD Major Shared Resource Center at the Naval Oceanographic Office at Stennis Space Center. Our CFD simulations are performed on the Cray T3E utilizing an HPF version of our simulation code and the Portland Group's HPF (PGHPF) compiler. The primary visualization task (rendering four nested 3D isodensity surfaces at each specified instant in time) is performed on an SGI Onyx using Alias|Wavefront software.

At both supercomputing centers, these two hardware platforms were not installed with the idea that they would provide an HCE for the general user; indeed, at neither center do the platforms even share cross-mounted disks. Hence, we have had to rely upon ftp commands to transfer data from the Cray T3E to a disk that is mounted on the SGI Onyx, and process control for the visualization task has been passed via a remote shell script. Figure 2a illustrates schematically how the link between the Cray T3E and the SGI platforms is initiated on the Cray T3E, and Figure 2b illustrates how the link is completed on the SGI Onyx.
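The T3E-side half of this link can be sketched as follows, again in Python for illustration. All host names, paths, and the rendering-script name are hypothetical; only the mechanism (ftp for data transfer, a remote shell for process control) comes from the configuration described above:

```python
import subprocess

ONYX_HOST = "onyx"                       # hypothetical SGI Onyx host name
REMOTE_DIR = "/scratch/viz"              # hypothetical landing directory
RENDER_SCRIPT = "/usr/local/bin/render"  # hypothetical rendering script

def build_ftp_script(local_file, remote_dir):
    """Compose the command script fed to a non-interactive ftp session."""
    return f"cd {remote_dir}\nput {local_file}\nbye\n"

def build_rsh_command(host, script, remote_path):
    """Compose the remote-shell command that launches the renderer."""
    return ["rsh", host, script, remote_path]

def push_frame_and_render(local_file):
    """Transfer one density file to the Onyx via ftp, then start the
    rendering script there so it runs in parallel with the CFD job."""
    # Data transfer: drive ftp through its stdin, since the two
    # platforms share no cross-mounted disks.
    subprocess.run(["ftp", ONYX_HOST],
                   input=build_ftp_script(local_file, REMOTE_DIR),
                   text=True, check=True)
    # Process control: launch the renderer remotely without waiting,
    # so the CFD simulation can resume immediately.
    subprocess.Popen(build_rsh_command(
        ONYX_HOST, RENDER_SCRIPT, f"{REMOTE_DIR}/{local_file}"))
```

As in the 1993 configuration, the simulation side returns immediately after handing off the frame, preserving the parallelism between the two tasks.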
