Merger of Compact Stars

Principal Investigator:

Joel E. Tohline
Department of Physics & Astronomy
Louisiana State University
tohline@rouge.phys.lsu.edu

Collaborators:

Kimberly C. B. New [knew@galileo.physics.drexel.edu]
John Cazes [cazes@rouge.phys.lsu.edu]


Scientific Objectives

The mergers of compact stars (white dwarfs and neutron stars) in binary systems are expected to be relatively strong sources of gravitational radiation. Because the final coalescence of compact binaries includes strong, three-dimensional tidal effects, it is necessary to use hydrodynamic techniques to properly follow their evolution.

We have begun our hydrodynamic study of the merger of compact binaries with a detailed stability analysis of synchronized binaries with equal-mass components, using an explicit, finite-difference hydrodynamics code. The objective of this analysis was to identify binary systems whose components were unstable to merger on a dynamical timescale (on the order of a few initial orbital periods) and thus could be used as initial models for simulations of binary coalescence. This analysis demonstrated that binary systems with relatively soft (compressible) equations of state (including the zero-temperature white dwarf equation of state [Chandrasekhar 1967] and polytropic equations of state with large polytropic indices) are dynamically stable to merger. However, we did identify some binaries with stiffer equations of state (polytropes with smaller polytropic indices and realistic neutron star equations of state) that are unstable to merger on a dynamical timescale.
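
For reference, a polytrope relates pressure to density through P = K * rho**(1 + 1/n), where n is the polytropic index; a smaller n gives a larger exponent and hence a stiffer (less compressible) fluid. The fragment below is only an illustrative sketch, with variable names of our own choosing rather than an excerpt from our code:

    ! Illustrative sketch only: evaluate the polytropic pressure
    ! P = K * rho**(1 + 1/n).  Smaller n => stiffer equation of state.
    real function poly_pressure(kpoly, rho, npoly)
       implicit none
       real, intent(in) :: kpoly   ! polytropic constant K
       real, intent(in) :: rho     ! mass density
       real, intent(in) :: npoly   ! polytropic index n
       poly_pressure = kpoly * rho**(1.0 + 1.0/npoly)
    end function poly_pressure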

The following animation sequence illustrates one of our recent simulations of the merger of an equal-mass, binary neutron star system, performed on the 8K-node MasPar MP-1 at Louisiana State University.

[Movie: The Coalescence of Two Neutron Stars (326K)]

We plan to carry out further simulations in order to pinpoint the polytropic index at which this dynamical instability arises. We also plan to simulate the coalescence of several dynamically unstable binaries with different equations of state and total masses to see how these variations affect the outcome of the merger and the characteristics of the gravitational radiation emitted. It would also be of interest to extend these simulations to include the mergers of binaries with unequal-mass components having various spins and to incorporate more complex physics, such as the effects of general relativity and radiation transport, into our hydrodynamics code.


Computational Techniques

Current implementation

Our primary computational tool is a Fortran code that implements a finite-difference representation of the multidimensional equations governing the dynamics of inviscid, compressible gas flows. Specifically, the finite-difference algorithm is based on the van Leer monotonic interpolation scheme described by Stone and Norman (1992), extended to three dimensions on a uniform, cylindrical coordinate mesh. The fluid equations are advanced in time via an explicit integration scheme. At each time step, the Poisson equation is solved in order to determine, in a self-consistent fashion, how the fluid is to be accelerated in response to its own self-gravity. Presently, the Poisson equation is solved using a combined Fourier-transform and ADI (alternating-direction implicit) scheme.
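
As an illustration of the interpolation step (the sketch below uses variable names of our own choosing and is not an excerpt from the production code), the van Leer scheme limits the slope of each zone-centered quantity before upwind interface values are constructed for the advection fluxes:

    ! Illustrative sketch of the van Leer monotonized slope (cf. Stone &
    ! Norman 1992).  dql and dqr are the differences of a zone-centered
    ! quantity q with its left and right neighbors; the returned slope is
    ! used to build upwind interface values for the advection step.
    real function vanleer_slope(dql, dqr)
       implicit none
       real, intent(in) :: dql, dqr
       if (dql*dqr > 0.0) then
          vanleer_slope = 2.0*dql*dqr/(dql + dqr)   ! harmonic mean of the two one-sided slopes
       else
          vanleer_slope = 0.0   ! local extremum: flatten to preserve monotonicity
       end if
    end function vanleer_slope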

This numerical algorithm is currently written in mpf (MasPar Fortran), a language patterned after Fortran 90 that includes certain extensions, along the lines of HPF (High Performance Fortran), permitting efficient implementation on a distributed-memory, SIMD-architecture machine. Numerous simulations have been successfully performed over the past three years on the 8,192-node MP-1 computer in the Concurrent Computing Laboratory for Materials Simulation in the Department of Physics and Astronomy at Louisiana State University and on an MP-2 at the Scalable Computing Laboratory of the Ames Laboratory at Iowa State University. On LSU's MP-1 system, we have achieved a parallel efficiency of approximately 35% and execution speeds approximately three times that of a single-node Cray Y/MP; on the MP-2 the code outperforms the Cray C90 by a factor of two. The table below documents the performance of our hydrocode on six different machine architectures: Cray Y/MP, Cray C90, MasPar MP-1, MasPar MP-2, Thinking Machines CM-5, and IBM SP-2.
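
To convey the data-parallel style this implies, the schematic fragment below (not taken from the production code, and with the geometric factors of the cylindrical mesh omitted) shows how a finite-difference update is expressed with whole-array operations that the compiler spreads across the processor array, with neighbor communication handled implicitly by intrinsics such as CSHIFT:

    ! Schematic data-parallel update, not taken from the production code.
    ! Whole-array syntax lets the compiler distribute the work over the
    ! processor array; CSHIFT supplies neighboring-zone values, so the
    ! interprocessor communication is implicit.  Geometric (1/r) factors
    ! of the cylindrical mesh are omitted for brevity.
    subroutine advect_phi_demo(rho, flux_phi, dt, dphi)
       implicit none
       real, intent(inout) :: rho(:,:,:)        ! zone-centered density (r, z, phi)
       real, intent(in)    :: flux_phi(:,:,:)   ! azimuthal mass fluxes
       real, intent(in)    :: dt, dphi
       ! conservative update in the (periodic) azimuthal direction
       rho = rho - (dt/dphi) * (flux_phi - cshift(flux_phi, shift=-1, dim=3))
    end subroutine advect_phi_demo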

The accompanying movie illustrates one recent simulation of the merger of an equal-mass, binary neutron star system. We consider that our multi-year efforts to develop the necessary tools to simulate the dynamical evolution of astrophysical fluid flows on a massively parallel, SIMD-architecture computer have been successful.

Project Objectives

Distributed-memory, MIMD (or SPMD) architecture machines, such as the IBM SP-2, now offer considerably more computing power and versatility than the SIMD-architecture MasPar. Our objective is to port our hydrodynamic code to the SP-2 so that our simulations can be extended to much more complex systems. In particular, we need to perform the simulations at significantly higher spatial resolution and to incorporate additional equations governing the detailed microscopic physics of such systems.

We plan to port our mpf code to the SP-2 by modifying it to conform to the specifications of IBM's HPF compiler. The required modifications should be relatively minor, as we have already successfully modified the code to execute on the CM-5 (using cmf) and on the Cray C90 (using Cray's Fortran 90). If the implementation of our finite-difference algorithms on the SP-2 proves to be as efficient as on the MasPar, we estimate that execution times on a 16-node SP-2 will roughly equal the execution times we have achieved on our 8K-node MP-1 system. On a 64-node system, we therefore expect to gain a factor of 4 in execution speed.
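
Schematically, and with a hypothetical array name, the principal change is to add HPF data-mapping directives so that the compiler distributes the grid arrays in blocks across the SP-2 nodes; the whole-array syntax of the mpf code is otherwise unchanged:

    ! Schematic sketch (hypothetical array name): HPF directives map a
    ! grid array onto a 4 x 4 arrangement of SP-2 nodes.
    program hpf_layout_demo
       implicit none
       real, dimension(128, 64, 64) :: rho      ! density on the cylindrical grid
    !HPF$ PROCESSORS procs(4, 4)
    !HPF$ DISTRIBUTE rho(BLOCK, BLOCK, *) ONTO procs
       rho = 1.0                                ! whole-array operations run in parallel
    end program hpf_layout_demo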


Hydrocode Timings*

Machine        Compiler        Nodes   Total time (sec)   Seconds per timestep   Y/MP Ratio   MP-1 Ratio
Cray Y/MP      f77                 1             2660.0                  13.00         1.00         0.36
MasPar MP-1    mpf             8,192              947.4                   4.74         2.81         1.00
Cray C90       Fortran 90          1              802.8                   4.01         3.31         1.18
MasPar MP-2    mpf             8,192              388.6                   1.94         6.84         2.44
  "            "               4,096              681.4                   3.41         3.90         1.39
CM-5           cmf Block3D        32             1098.3                   5.49         2.42         0.86
  "            "                  64              584.6                   2.92         4.56         1.62
  "            "                 128              319.3                   1.60         8.33         2.97
  "            "                 256              187.0                   0.93        14.23         5.07
IBM SP-2       XLHPF Block3D      16              982.4                   4.91         2.71         0.96
  "            "                  64              471.4                   2.36         5.64         2.01
  "            "                 128              374.6                   1.87         7.10         2.53

FOOTNOTE:

*To obtain these execution times, the hydrocode was run for 200 integration timesteps on a cylindrical-coordinate grid with a resolution of 128 x 64 x 64. Note that the timing comparisons were obtained with a purely hydrodynamic version of the code; that is, the Poisson equation was not solved, and hence the self-gravity of the fluid was not included. Only minor changes in the mpf code were required before it could be compiled and run successfully on the C90 and the CM-5. However, because a Fortran 90 compiler was not available on the Y/MP, we used VAST to convert the mpf code to f77 before compiling and running it on the Y/MP.