Numrel 2005

Scott Hawley and Richard Matzner, University of Texas at Austin matzner2-at-physics.utexas.edu
This workshop was organized by Dr. Joan Centrella, Leader of the Gravitational Astrophysics Laboratory in the Exploration of the Universe Division at NASA/Goddard. The presentations from the workshop are available online at: http://astrogravs.gsfc.nasa.gov/conf/numrel2005/presentations/

The workshop offered attendees an excellent overview of cutting-edge research throughout the field. Representatives of most of the major numerical relativity groups (AEI, Austin, Baton Rouge, Brownsville, Caltech, Goddard, Penn State...) presented new research results. We heard about remarkable progress in simulating binary black hole interactions and extracting waveforms. Some of this work (by Pretorius, and by a collaboration including Pollney and Diener) had been previously presented and/or posted on the gr-qc arXiv, but two of the presentations (by Zlochower, representing a collaboration led by UTB, and by Choi, representing a collaboration led by NASA/Goddard) described results previously unpublished in any form.

The opening remarks were given by Dr. Nick White, the Director of the Exploration of the Universe Division at NASA/Goddard. We moved immediately into a presentation by Joan Centrella: ``Gravitational Wave Astrophysics, Compact Binaries, and Numerical Relativity". Centrella began with a discussion of the fundamental aspects of gravitational radiation. She followed with a discussion of the sensitivity of the LIGO detectors, which are at essentially full design sensitivity as Science Run S5 begins. Real gravitational wave science has begun. Even higher sensitivity (in a much lower frequency band) is expected for the proposed space-borne detector LISA.

What are the expected sources for gravitational wave detectors? LISA will be sensitive to supermassive black hole mergers. ``Every" galaxy has a central supermassive black hole; ``every" galaxy has undergone a merger. ``X-type" radio sources (in disturbed galaxies) show sudden changes in jet direction, which is assumed to lie along the spin direction; a merger could lead to a flip of the spin, producing such a change, so these are considered supporting evidence for the existence of mergers. For LISA and (perhaps) for LIGO, intermediate mass black holes (hundreds of solar masses) are possible sources of gravitational radiation. Here there is fairly strong evidence from X-ray sources in active star-forming galaxies. Stellar mass black holes can result from the explosion of high mass stars, and there is a chance that binary black holes can form by this route; this is of course the underlying assumption in computational gravitational waveform prediction from binary black hole mergers. Finally, there is recent evidence, from observations of the short gamma-ray bursts GRB 050509b and GRB 050724, suggesting that they are mergers (neutron star/neutron star or neutron star/black hole) leaving a black hole which shows evidence of accretion-driven X-rays after the merger. These could be sources for Advanced LIGO.

Deirdre Shoemaker discussed ``Complexity in Gravitational Waveforms from BH mergers". Her point was that in fact we are seeing little complexity. How do we understand the relative similarity of all gravitational waveforms obtained via numerical simulations? Shoemaker put forth a few ways in which ``complexity" could be added to (and ultimately, of course, extracted from) the waveforms. She began by noting that there is beginning to be a strong convergence of results in binary black hole simulations, at least for simple configurations. She presented head-on simulations from Zlochower et al., Fiske et al., and Sperhake et al. The waveforms are very similar, and are rather smooth with a rapid onset of apparent ringdown behavior. For the existing orbit-and-plunge simulations, the agreement is not especially tight; however, they too show fairly smooth behavior and fairly rapid onset of ringdown.

Shoemaker raised the question: why is there so little complication in the waveforms? She pointed out that this is in contrast to the case with matter present, as in neutron-star binaries. She gave several examples (Duffert et al., Faber et al., Duez et al.) which have more structure near the end of the merger than occurs in the binary black hole case. A core collapse waveform shows quite complicated behavior (Zwerger and Mueller). Shoemaker presented one aspect of complexity in unequal-mass head-on collisions: ``kicks" occur in these cases, and the waveforms have somewhat more structure, but still show early onset of ringdown.

In order to examine the mechanism of early ringdown onset, Shoemaker considered scalar field ringdowns, carried out on a sequence of fixed spacetimes constructed from quasi-circular initial data (Pfeiffer et al. 2004). The scalar ringdown is Fourier transformed to identify its frequency. For a single black hole the frequency of the ringdown is related to the mass (in Schwarzschild) by $M_{BH} = 0.29293/\omega$. In these data, the ADM mass measures the total system mass, roughly twice the horizon mass of one hole. If the holes are well separated (corresponding to a time well prior to the merger in an evolution), $M_{BH}$ determined from the ringdown is the mass of one of the two equal-mass holes. As one considers holes close enough together (in principle corresponding to later times in the merger), the system acts as if it has a single effective ringdown potential with a mass equal to the ADM mass. For her survey Shoemaker finds the transition occurs roughly at the ``ISCO", with a horizon separation of $8m$. This again supports the idea that the merger forms a common potential and quickly moves into the final black hole ringdown, fairly early in the evolution (when the ISCO is reached).
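
As a concrete illustration of this diagnostic, here is a minimal Python sketch (the damped-sinusoid signal and the FFT-peak extraction are hypothetical stand-ins for the actual analysis; only the quoted coefficient 0.29293 comes from the talk):

\begin{verbatim}
import numpy as np

def ringdown_mass(t, phi, coeff=0.29293):
    # Locate the dominant ringdown frequency from the FFT peak,
    # then convert to a mass via M = coeff / omega (Schwarzschild).
    dt = t[1] - t[0]
    omega = 2.0 * np.pi * np.fft.rfftfreq(len(phi), d=dt)
    spectrum = np.abs(np.fft.rfft(phi - phi.mean()))
    peak = np.argmax(spectrum[1:]) + 1      # skip the DC bin
    return coeff / omega[peak]

# Hypothetical scalar ringdown around a hole of mass M = 1
t = np.linspace(0.0, 200.0, 4096)
phi = np.exp(-t / 40.0) * np.cos(0.29293 * t)
print(ringdown_mass(t, phi))                # close to 1.0
\end{verbatim}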

Yosef Zlochower, from The University of Texas at Brownsville, discussed ``Accurate Binary Black Hole Evolutions Without Excision". This previously unpublished work caused quite a stir. Zlochower gave an introduction to the BSSN method with punctures. In the puncture method, one sets data for black hole evolution by using modified Brill-Lindquist data. These data have $r^{-1}$ singularities in the conformal factor, at the coordinate centers of the black holes. Prior to this workshop, known approaches either used excision, to cut the singular region out of the computational domain (obviously not specific to puncture data), or held the punctures fixed at their initial coordinate positions, treated the singular parts analytically, and computationally evolved only the subdominant nonsingular part of the conformal factor. (Work by Bruegmann and collaborators, reported at the workshop by Tichy, uses corotating coordinates to evolve a black hole binary for one orbit in a system where the punctures are at fixed coordinate position, though Bruegmann et al. also considered excised evolution.) Zlochower then showed inspiral and merger results in which the full punctures were evolved and traveled across the computational grid! This was so unexpected that it led to a flurry of questions, and many points were only partially explicated before the next presentation had to start. However, the work has now been posted on arXiv.org: gr-qc/0511048. That publication explains that the code is designed to evolve the inverse of the conformal factor. That, and some care with the behavior of the lapse, allows evolution of the full system, including the ``singular" conformal factor. Second order convergence is shown. The data correspond to late inspiral. The holes perform about half an orbit before a common apparent horizon forms. A waveform is extracted, and the energy radiated is of order 2.8%, argued to be accurate to about 10% of this value. About 14% of the total angular momentum was radiated. The domain represented in the evolution extends to approximately 60M (where M is the total mass), via the use of a multiple-step ``fisheye" transformation.
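
A sketch of why inverting the conformal factor regularizes the evolution (the notation here is ours; the precise evolved variable is specified in gr-qc/0511048): near a puncture with mass parameter $m$ the conformal factor blows up as $\psi \sim m/(2r)$, so an inverse power such as
\[
\chi \equiv \psi^{-4} \sim \left(\frac{2r}{m}\right)^{4} \rightarrow 0 \qquad (r \rightarrow 0)
\]
is finite (indeed vanishing) at the puncture, and can be finite-differenced across it, provided the gauge conditions keep the slice regular there.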

The next presentation, by Dae-Il Choi from Goddard Space Flight Center: ``Gravitational Waveforms from Coalescing Binary Black Holes", presented very similar results obtained by very similar methods. This work has also been posted on arXiv.org: gr-qc/0511103. (Apparently the two groups were unaware of each other's work until these presentations.) Choi's presentation described a numerical regularization of the conformal factor singularity which does not involve introducing the inverse of the conformal factor. Instead, straightforward finite differencing, together with a careful choice of the lapse, allows the evolution of the puncture system. The initial data put the holes, and hence the punctures, on the $z=0$ plane. The Goddard code uses a cell-centered formulation, so no point at which quantities are evaluated actually lies on this plane. For nonspinning holes, the orbits stay in the $z=0$ plane and thus every computed quantity is finite. The Goddard code uses fixed mesh refinement with eight levels in factor-of-two steps. The outer boundary is at $128M$, and the inner box resolution in three different simulations is $M/16$, $M/24$, and $M/32$. Second order convergence is shown. The simulation yields a waveform; the total radiated energy is $0.0330M$, where $M$ is the total mass of the system, and the angular momentum radiated is $J = 0.138M^2$. Further, the radiated energy was computed in two ways: first by integrating the gravitational radiation flux across a sphere at radius equal to $25$ times the horizon radius of the final black hole, and then by taking the difference of the initial and final ADM masses. The two measures of the energy loss agree to better than 5% (of the $\approx 3\%$ of the total energy that is radiated), showing at least $3$-digit accuracy in the total mass.
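
Schematically, the cross-check compares the standard flux integral of the Newman-Penrose scalar $\Psi_4$ at a finite extraction radius with the ADM mass balance (textbook expressions, given here as a sketch rather than the group's exact implementation):
\[
E_{\rm rad} = \lim_{r \rightarrow \infty} \frac{r^2}{16\pi} \int dt \oint d\Omega
\left| \int_{-\infty}^{t} \Psi_4 \, dt' \right|^2 ,
\qquad
E_{\rm rad} \approx M_{\rm ADM} - M_{\rm final} .
\]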

Alessandra Buonanno of the University of Maryland discussed ``Predictions for the last stages of inspiral and plunge using analytical techniques". She described the Effective One Body (EOB) method, which resums post-Newtonian expansions in a way that appears to converge better than the straightforward PN expansions. Among analytical calculations the EOB is the only method which can approach a description of the dynamics and the gravity-wave signal beyond the adiabatic approximation. It can also provide initial data ($g_{ij}$, $K_{ij}$) for black holes close to the plunge, to be used by numerical relativity. The real difficulty in carrying out these programs lies in understanding how accurate the EOB method really is, since it is an expansion in a PN-like parameter; at some point one must use a more accurate (presumably computational) description. Current results indicate good agreement between numerical and analytical estimates of the binding energy without spin effects. Already in its current state the method can be used as a diagnostic for (or to fit) numerical relativity results, and can also suggest parameters to vary in templates to provide more complete coverage of the space of possible waveforms.
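
For orientation, the core EOB idea (a textbook-level sketch, not the specific results presented) maps the PN two-body dynamics onto a single particle of reduced mass $\mu = m_1 m_2/M$ moving in an effective metric, with the real two-body Hamiltonian resummed as
\[
H_{\rm EOB} = M \sqrt{1 + 2\nu \left( \frac{H_{\rm eff}}{\mu} - 1 \right)} ,
\qquad
\nu \equiv \frac{\mu}{M} = \frac{m_1 m_2}{(m_1+m_2)^2} ,
\]
whose PN re-expansion reproduces the known two-body dynamics while remaining better behaved through the late inspiral and plunge.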

Wolfgang Tichy (Florida Atlantic University) gave a discussion of ``Simulations of orbiting black holes". This was a review of work done by Tichy, Jansen and Bruegmann (Phys. Rev. Lett. 92 211101 (2004); gr-qc/0312112). The approach uses puncture initial data for two orbiting black holes, with a (fairly standard) modified BSSN (Baumgarte-Shapiro-Shibata-Nakamura) evolution system which replaces all undifferentiated $\tilde \Gamma^i$ by derivatives of the metric, and enforces the tracelessness of the conformal extrinsic curvature by explicitly subtracting the trace of $\tilde A_{ij}$ from $\tilde A_{ij}$ after each time step. (The time integration is iterated Crank-Nicolson.) The outer boundary is a ``lego sphere", with Sommerfeld outer boundary conditions for all evolved quantities.
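
The trace-subtraction step is simple enough to show concretely; here is a minimal NumPy sketch at a single grid point (array names are ours, not those of the production code):

\begin{verbatim}
import numpy as np

def remove_trace(A, gamma, gamma_inv):
    # Return A_ij - (1/3) gamma_ij (gamma^kl A_kl): the explicit
    # trace subtraction applied to A-tilde after each time step.
    trA = np.einsum('ij,ij->', gamma_inv, A)
    return A - gamma * trA / 3.0

# Check: the result is traceless with respect to gamma.
gamma = np.eye(3)
A = np.random.rand(3, 3)
A = 0.5 * (A + A.T)                       # symmetrize
At = remove_trace(A, gamma, np.linalg.inv(gamma))
print(np.einsum('ij,ij->', np.linalg.inv(gamma), At))   # ~ 0
\end{verbatim}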

The evolution is run with the holes excised (comparable results were found with puncture evolution that analytically handled the singular part of the conformal factor). The excision was carried out on lego spheres inside the black hole horizons, by copying the time derivative at the next interior point onto the excision boundary (``simple excision"). A singularity-avoiding gauge (lapse) was used to prevent the slice from running into physical singularities. The code is based on the BAM infrastructure, using fixed mesh refinement (FMR) for efficiency. Seven levels of nested boxes were constructed around each black hole. Outer boundaries were compared at 24M, 48M, and 96M. The code evolves equal mass nonspinning holes, and uses quadrant symmetry; because of these settings, the code can be run on a laptop. Perhaps the most important innovation was the use of comoving coordinates enforced via the shift vector, which compensates for the black hole orbital motion. The dominant terms in the shift correspond to rigid rotation. However, the data do not describe exactly circular motion, so the black holes can drift from their coordinate locations in a specific evolution. An interesting algorithm is used to modify the shift so as to keep the lapse function (a proxy for the apparent horizon location) centered at a fixed coordinate location. The evolutions can be run up to about 125M, more than the orbital time-scale inferred from the initial data. However, no waveforms have been published from these runs.
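
In one dimension the copy rule is a one-liner; the following sketch (our minimal reconstruction, not the BAM code) illustrates ``simple excision" for a grid excised on the low-index side:

\begin{verbatim}
import numpy as np

def excision_copy(dudt, excised):
    # dudt: time derivatives on the grid; excised: boolean mask,
    # True on excised points. The first non-excised point is the
    # excision boundary; its dudt is copied from the next interior
    # point, and the excised points are simply not updated.
    i_bdry = int(np.argmax(~excised))
    out = dudt.copy()
    out[i_bdry] = out[i_bdry + 1]
    out[excised] = 0.0
    return out

excised = np.array([True, True, True, False, False, False])
dudt = np.linspace(1.0, 6.0, 6)
print(excision_copy(dudt, excised))  # point 3 takes the value at point 4
\end{verbatim}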

Peter Diener (LSU) discussed ``Recent developments in binary black hole evolutions". The results were obtained with a large group of collaborators (M. Alcubierre, B. Bruegmann, F. Guzman, I. Hawke, S. Hawley, F. Herrmann, M. Koppitz, D. Pollney, E. Seidel, R. Takahashi, J. Thornburg and J. Ventrella) associated with the Max Planck Institute for Gravitational Physics (Albert Einstein Institute) in Potsdam, Germany, and with LSU.

This is another example of the surprising gains that have been made in computation of binary black hole interactions. Diener presented work using fixed mesh refinement, with corotating coordinates, and an active adjustment of the shift vector to keep the holes centered in the corotating coordinates. Development of this code was prompted by the earlier work of Bruegmann, Tichy and Jansen, reported above by W. Tichy. Very interestingly, the gauge used in this code had five adjustable parameters, which control the behavior of the lapse and the shift. The lapse and shift are determined by driver conditions that evolve them toward a ``1+log" lapse and a comoving shift (which follows the centers of the black holes). These parameters set, for instance, the power of the lapse that appears in the driver equation for the shift.

Two sets of parameters were used: gauge choice 1 and gauge choice 2. It was found that the lifetime of the simulation depended on the gauge used (gauge choice 1 ran longer, to about 140M; gauge choice 2 ran to about 80M). Also, the computed proper distance (at a given time) between the apparent horizons depended on the gauge choice. When the two gauges were run at various resolutions, however, one could carry out convergence tests on the proper separation, and Richardson extrapolation. The result is that the extrapolated separations from the two sets of simulations overlap very closely in the range where both are available ($t < 80M$). This is a very interesting result. It means that the code is definitely doing something right, and it strongly suggests that the different implementations of the driver conditions lead to very nearly the same time slicing. It also indicates that quite fine discretization is needed to achieve the convergent regime, and to achieve reasonable accuracy.
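
The extrapolation itself is elementary; a minimal Python sketch for a second-order code (with hypothetical separation data, not the actual simulation output):

\begin{verbatim}
import numpy as np

def richardson(f_coarse, f_fine, ratio=2.0, order=2):
    # For f_h = f + C h^p, two resolutions h and h/ratio give
    # f ~ (ratio**p * f_fine - f_coarse) / (ratio**p - 1).
    r = ratio ** order
    return (r * f_fine - f_coarse) / (r - 1.0)

# Hypothetical proper-separation time series at spacings h and h/2,
# with an O(h^2) error model; the extrapolation recovers "exact".
t = np.linspace(0.0, 80.0, 81)
exact = 10.0 - 0.05 * t
sep_h  = exact + 0.3 * np.sin(0.1 * t)
sep_h2 = exact + 0.3 * np.sin(0.1 * t) / 4.0
print(np.max(np.abs(richardson(sep_h, sep_h2) - exact)))  # ~ 1e-15
\end{verbatim}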

Denis Pollney of the Albert Einstein Institute in Potsdam spoke on ``Evolutions of Binary Black Hole Spacetimes in the Last Orbit". He broke his talk into two sections: 1. Evolutions of Helical Killing Vector Data; 2. Evolving Single Black Holes Using a Multi-patch Code.

In the binary orbit (helical Killing vector) work, comparisons were made of evolutions from data set in one of two ways: punctures with parameters along an effective potential sequence developed by Cook (1994), and thin sandwich data with a helical Killing vector condition, constructed by Grandclement, Gourgoulhon and Bonazzola (2002): the ``Meudon" data. The code is as described in Diener's talk above; in particular it is a rectangular coordinate code. A series of orbits and head-on collisions can be produced in this code; in particular, results similar to those of the earlier work by Bruegmann, Tichy and Jansen were reproduced.

The second aspect of Pollney's presentation concerned patching to spherical domains, and in particular conforming surfaces for inner (excision) and outer boundary surfaces. The current work has concentrated on single black holes, and uses Thornburg's inflated-cube coordinate system, which is multi-patch (six patches) and uses interpolation between adjacent grids. Angular coordinates are chosen so that adjacent patches share coordinates perpendicular to their mutual boundary, so the method needs only 1D interpolations. The method was tested in a hydrodynamics code (Whisky) to show that shock propagation is correctly handled across the interfaces in a single-hole background. Evolution of a single distorted black hole by this method showed inter-patch effects well below the finite difference accuracy.

Frans Pretorius of the University of Alberta described his binary black hole simulations [Phys. Rev. Lett. 95 121101 (2005); gr-qc/0507014]. This work integrates several approaches which are not in wide use within the numerical relativity community: a second-order formulation of the Einstein equations, compactification of spatial infinity, adaptive mesh refinement, and a harmonic gauge. The code is based on generalized harmonic coordinates, in which every metric component obeys a wave equation (with nonlinear sources). Source functions are added to the definition of the harmonic gauge in order to be able to choose slicing and shift conditions. These equations are discretized directly into finite difference equations that are second order in time. Adaptive mesh refinement follows the holes and excision removes the singularities from the domain. Numerical dissipation is used to control instabilities that otherwise arise near the black holes. A final interesting point is that the spatial domain is compactified, which provides an ``inexpensive" implementation of the outer boundary. In addition, Pretorius used constraint damping, which adds a linear combination of the gauge constraints to the metric evolution equations and produces extended stable evolution. Initial data are set up using boosted scalar field collapse. The initial data slice is conformally flat and maximal. The harmonic condition gives the initial time derivatives of the lapse and shift. The Hamiltonian and momentum constraints are solved for the initial conformal factor and shift.
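
Schematically (our summary of the generalized harmonic idea; the damping term follows the form proposed by Gundlach et al. 2005), the coordinates satisfy
\[
\Box x^\mu = H^\mu , \qquad C^\mu \equiv H^\mu - \Box x^\mu ,
\]
with prescribed source functions $H^\mu$ replacing the strict harmonic condition $\Box x^\mu = 0$, and a term of the form
\[
\kappa \left( 2\, n_{(\mu} C_{\nu)} - \lambda\, g_{\mu\nu}\, n^{\alpha} C_{\alpha} \right)
\]
added to the evolution equations so that the constraints $C^\mu$ are damped toward zero ($n^\mu$ a timelike unit vector; $\kappa$, $\lambda$ adjustable parameters).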

As set, the data describe equal mass components in an approximately circular orbit (eccentricity of order 0.2 or less), with a proper distance between the holes of approximately 16M (coordinate separation of 13M), where the initial ADM mass is 2.4M. Each black hole has a velocity of about 0.16, and zero spin angular momentum. The evolution traces out about 2/3 of an orbit to one full orbit before the horizons merge. The final black hole has a Kerr angular momentum parameter $a = 0.7M_f$. Presumably because of dissipation and lack of resolution in the spatial compactification, the extracted waveform has a faster than $r^{-1}$ falloff, though the shape is very well preserved at different wave extraction radii. Regardless of the falloff, one estimates that about 5% of the total initial energy is radiated.

Mark Scheel (Caltech) discussed ``Solving Einstein's Equations using Spectral Methods". He began with a description of the method, which provides exponential convergence as the number of basis functions is increased (though the coefficient in the exponential must be studied in each case). In the pseudospectral approach, analysis of the basis functions leads one to define specific collocation points $x_n$. A function is expanded in a series of basis functions truncated at a maximum number $N$; the expansion can be inverted exactly to obtain the basis amplitudes by carrying out an $N$-term weighted sum over the function values at the collocation points, multiplied by the basis functions at those points. This transformation between physical and spectral representations is an algebraic process. In the nonlinear case, derivatives are computed in spectral space, while nonlinear terms are evaluated in physical space. Boundary conditions are imposed analytically on characteristic fields.
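
A minimal pseudospectral demonstration in Python (assuming a Chebyshev basis with Gauss-Lobatto collocation, which is standard in such codes but not necessarily the basis used here): differentiating $e^x$ on $[-1,1]$ with $N = 16$ already gives near machine-precision accuracy.

\begin{verbatim}
import numpy as np

N = 16
k = np.arange(N + 1)
x = np.cos(np.pi * k / N)            # Gauss-Lobatto collocation points

# Standard Chebyshev differentiation matrix (Trefethen's construction)
c = np.ones(N + 1); c[0] = c[N] = 2.0
c = c * (-1.0) ** k
X = np.tile(x, (N + 1, 1)).T
dX = X - X.T + np.eye(N + 1)
D = np.outer(c, 1.0 / c) / dX
D = D - np.diag(D.sum(axis=1))       # negative-sum trick for the diagonal

f = np.exp(x)
print(np.max(np.abs(D @ f - np.exp(x))))   # ~ 1e-10: spectral accuracy
\end{verbatim}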

The method uses characteristic decomposition, and a complicated domain decomposition: every domain maps either to a spherical region or to a sphere, on which the basis functions are defined. An example for a single black hole had 54 sub-domains. Because the truncated sum over basis functions defines a function everywhere, no explicit interpolation is needed to transfer values between patches.

The KST code [Kidder, Scheel, Teukolsky, Phys. Rev. D 64 064017 (2001)] is a parameterized hyperbolic code. It was used with ``quasi-equilibrium" conformal thin sandwich data to compute binary black hole interactions in a comoving frame. By using characteristic decompositions, no boundary conditions are needed at the black hole horizons (excision), and Sommerfeld-like outer boundary conditions were imposed at the outer boundary $r = 320M_{BH}$. This was a free evolution; the constraints were solved only initially. For moderately short evolutions the constraints converged, but by $t \approx 20M$ the convergence was lost. An adjustment of the shift vector to keep the apparent horizons centered in coordinate location (and spherical) improved the behavior for about another $10M$, but convergence was then lost again. Suggested fixes were to impose elliptic gauge conditions, or some sort of driver condition.

The final topic of Scheel's talk concerned an effort to construct a pseudospectral version of Pretorius's code, necessarily adapted to a first-order form. This code works extremely well for single Schwarzschild black holes. However, very strong instabilities are found when attempting corotation problems with moderately large domains. Even flat space is unstable (when $R\Omega > 0.7$, with $R$ the size of the domain and $\Omega$ the angular velocity). The KST system does not have this problem.

Harald Pfeiffer (Caltech) discussed ``Quasi-equilibrium binary black hole initial data". The basic idea is that there is approximate time independence in a corotating frame; this implies an approximate helical Killing vector. Time-independence in the corotating frame implies vanishing time derivatives. The initial data for black holes in a not-too-close orbit approximately satisfy these conditions; the aim is to construct data that have these properties exactly. The solution proceeds by a conformal solve of the resulting elliptic equations to obtain the lapse, the conformal factor and the shift vector. Corotation requires a boundary condition on the shift vector of $\beta^i = (\omega \times r)^i$. The boundary condition on the lapse at infinity is $N=1$, and on the conformal factor $\psi = 1$. Inner boundary conditions on $\psi$ and $\beta$ are written at the apparent horizons, which are assumed in the data to be stationary and isolated (no shear of their generators). The so-called extended conformal thin sandwich formalism also sets the time derivative of the trace of the extrinsic curvature to zero on the initial slice. This formalism leads to some curious double-valuedness of the ADM energy as a function of wave amplitude in the conformal background. This apparently can be understood physically, and can be evaded by considering only the standard conformal thin sandwich approach.
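
A compact summary of the conditions just described (schematic; normalizations follow common conventions rather than a specific paper):
\[
\partial_t \tilde\gamma_{ij} = 0 , \qquad \partial_t K = 0 \quad \mbox{(extended CTS)} ,
\]
\[
N \rightarrow 1 , \quad \psi \rightarrow 1 , \quad \beta^i \rightarrow (\omega \times r)^i
\qquad (r \rightarrow \infty) ,
\]
with shear-free (isolated) apparent-horizon conditions fixing $\psi$ and $\beta^i$ on the inner excision surfaces.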

One possible difficulty is that evolution codes typically excise some distance inside the apparent horizon, but these data are produced with the apparent horizon itself as their inner boundary.

Greg Cook (Wake Forest) gave an update: ``Black-Hole Binary Initial Data: Getting the Spin Right". This was an update on the construction of quasi-circular binary black hole data. All of his results made use of the conformal thin-sandwich method. There are two approaches that yield a sequence of quasi-circular orbits. In one approach, the binding energy is plotted against separation along sequences of fixed angular momentum; the minimum of each of these curves defines an effectively circular orbit. The second approach makes use of the fact that in true quasi-circular motion all the fields should be constant in the corotating frame: one compares the value of the Komar mass (defined only for stationary spacetimes) with the ADM mass. Consider the corotating black hole case. By computing a number of test cases, Cook found a modification of the lowest-order corotating condition for the tidal field seen by a black hole at its horizon, in the presence of a second: $\beta^i = \alpha\psi^{-2} \tilde s^i + \Omega_{BH} \xi^i$ at the horizon, where $\tilde s^i$ is the spacelike normal to the horizon, and $\xi^i$ is a spatial unit vector. The first term $(\alpha\psi^{-2} \tilde s^i)$ is well established; it is an outward directed shift component that counteracts the inward fall of the coordinates. The second term had previously been taken to rotate at the angular rate $\Omega_0$ measured at infinity. However, Alvi (2000) pointed out that the rate of the tidal rotation at the horizon is given by $\Omega = \Omega_0\,(1 - \eta M/b + \ldots)$, where $b$ is a measure of the separation, $\eta = m_1 m_2/M^2$, and $M$ is the total mass. Cook expresses this completely in terms of the rotation rate and finds $\Omega = \Omega_0\,(1 - \eta (M\Omega_0)^{2/3} + \ldots)$. With this correction for corotation, Cook finds complete agreement between the helical Killing vector and the effective potential methods.
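
The step from Alvi's separation-dependent form to the frequency-dependent form is just the lowest-order Kepler relation (a sketch of the substitution, to lowest order):
\[
M\Omega_0 \simeq \left( \frac{M}{b} \right)^{3/2}
\;\Rightarrow\;
\frac{M}{b} \simeq (M\Omega_0)^{2/3} ,
\qquad
\Omega \simeq \Omega_0 \left[ 1 - \eta\, (M\Omega_0)^{2/3} \right] .
\]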

Scott Hawley (University of Texas, Austin) gave a summary of recent work (with Richard Matzner and Michael Vitalo) validating an efficient multigrid-with-excision code that produces binary black hole data. The code is node-centered, and uses a particular way of defining the excision: the excised points are those on any grid which lie inside the excision radius. Consequently, except for very special choices of the parameters, the excised regions are larger on the finer grids. The code is parallelized, and exists in a fixed-refinement version; however, Hawley spoke about the unigrid code. To test the code, Hawley compared the computed binding energy to the predictions of a lowest-order spin-spin coupling due to Wald. For relatively close placement of the momentarily stationary holes in the initial data set (coordinate separation of $10m$, where each hole has mass parameter $m$), one obtains a binding energy variation with spin that has the dependence on angle suggested by Wald, and an amplitude about 10% higher. This latter difference is attributed to the closeness of the holes in these runs. (The binding energy is computed by assuming the horizon area determines the intrinsic mass, and subtracting that from the ADM mass.) More exploratory work will be carried out in the future, with the FMR version of the code.
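
The angular dependence being tested is easy to tabulate; a minimal Python sketch ($G = c = 1$; the dipole-dipole form is the standard lowest-order spin-spin coupling, with the overall sign convention left to Wald 1972):

\begin{verbatim}
import numpy as np

def spin_spin_energy(S1, S2, n_hat, b):
    # Lowest-order spin-spin interaction energy between two spinning
    # holes separated by b along unit vector n_hat (dipole-dipole form).
    return (np.dot(S1, S2)
            - 3.0 * np.dot(S1, n_hat) * np.dot(S2, n_hat)) / b**3

# Rotate the second spin through an angle theta in the plane containing
# the separation axis and tabulate the angular dependence.
n_hat = np.array([1.0, 0.0, 0.0])
S1 = np.array([0.5, 0.0, 0.0])
for theta in np.linspace(0.0, np.pi, 5):
    S2 = 0.5 * np.array([np.cos(theta), np.sin(theta), 0.0])
    print(round(theta, 3), spin_spin_energy(S1, S2, n_hat, b=10.0))
\end{verbatim}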

Stu Shapiro (Illinois), in ``Binaries Containing Neutron Stars: The Merger Aftermath", described a number of results concerning neutron stars, and neutron star/neutron star and neutron star/black hole mergers. He began by showing computations indicating that certain rotating hypermassive stars are dynamically stable, but that other physics (turbulent viscosity, magnetic braking, neutrino or gravitational wave emission) can lead to delayed collapse to black holes and delayed gravitational wave bursts. Shapiro described simulations of neutron-star binary systems with mass ratios of $0.9$ to $1.0$. The work found a critical mass of approximately $2.5$ to $2.7\,M_{\odot}$ for the merged star. Exceeding this critical mass leads to prompt collapse; below the critical mass one obtains a hypermassive remnant and delayed ($\sim 100$ msec) collapse. The associated gravitational wave frequency is in the $3$ to $4$ kHz range, a possible target for AdLIGO. A theoretical question associated with rotating collapse concerns whether data with $J/M^2 > 1$ can collapse beyond the neutron-star stage. Computational experiments, some described by Shapiro, do not do so. (Note that a Kerr solution with $J/M^2 > 1$ has a naked singularity.) The behavior of the matter in the simulations is a rotation-induced bounce. Simple Newtonian arguments explain the results: angular momentum conservation prevents collapse to within a horizon. Shapiro also described new, more realistic simulations involving general relativistic MHD, and including realistic shear viscosity. These effects may contribute to delayed collapse with a gravitational wave burst, enhanced collapse bounce shocks, and possible magnetic jets. These topics are among the most astrophysically relevant that computational physicists can approach, and suggest exciting AdLIGO connections: coincident (triggered) detection of GRBs and their associated gravitational radiation, with a reasonable event rate.
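
A rough version of the Newtonian argument (an order-of-magnitude heuristic, not taken from the talk): a fluid element with specific angular momentum $j$ orbiting an enclosed mass $M$ encounters a centrifugal barrier where gravity balances rotation,
\[
\frac{M}{r_c^2} \sim \frac{j^2}{r_c^3}
\;\Rightarrow\;
r_c \sim \frac{j^2}{M} ,
\]
so reaching horizon scales $r \sim 2M$ requires $j \lesssim M$ (up to factors of order unity) for the bulk of the matter, i.e. $J \lesssim M^2$; configurations with $J/M^2 > 1$ instead bounce.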

John Baker gave a summary and ``future directions" talk on Friday afternoon.

The work presented at this conference that produced waveforms (Zlochower, Choi and Pretorius) yielded waveforms that are remarkably similar in general features. In particular, most of the waveform ``looks like" a ringdown waveform, and this is where most of the energy is radiated. It does appear that we can define a ``generic" waveform for black hole mergers, appropriate to template generation. One outcome of the meeting was the formation of an ad-hoc committee, chaired by John Baker, to define data standards for waveform data from simulations, to collect such waveforms and ensure consistency with the standards, and to post them on a public website.


Jorge Pullin 2006-02-28