The probability of detecting $n$ GW events, given a rate $R$ of CBCs per volume per time in the mass space considered by the high-mass search, is:
\begin{equation}
p(n\,|\,\mu) = \frac{\mu^{n} e^{-\mu}}{n!},
\tag{7.22}
\end{equation}
as the GW signals are expected to be Poisson distributed with a mean number of $\mu$. We can use Bayes' theorem to construct the posterior probability for the rate, given the observation of $n$ events:
\begin{equation}
p(\mu\,|\,n) = \frac{p(n\,|\,\mu)\,p(\mu)}{\int p(n\,|\,\mu)\,p(\mu)\,d\mu},
\tag{7.23}
\end{equation}
where $p(\mu)$ is the prior probability distribution of the expected number of events for our search.
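As a minimal numerical illustration of Equations (7.22)--(7.23) (this is a sketch, not part of the search pipeline; the flat prior and the grid bounds are assumptions made here), the posterior for $\mu$ can be evaluated on a grid:

\begin{verbatim}
import numpy as np
from scipy.stats import poisson
from scipy.integrate import trapezoid

def rate_posterior(n_obs, mu_grid, prior=None):
    """Sketch of Eq. (7.23): posterior p(mu | n_obs) on a grid of Poisson means."""
    if prior is None:
        prior = np.ones_like(mu_grid)          # flat prior -- an assumption of this sketch
    likelihood = poisson.pmf(n_obs, mu_grid)   # Eq. (7.22)
    unnormalized = likelihood * prior
    return unnormalized / trapezoid(unnormalized, mu_grid)

mu_grid = np.linspace(0.0, 20.0, 2001)
posterior = rate_posterior(n_obs=0, mu_grid=mu_grid)  # e.g. no events above threshold
\end{verbatim}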
Using the \emph{loudest event statistic} means that we set the threshold for detection at the FAR of our loudest foreground event, $\widehat{\mathrm{FAR}}$ [126]. The value of this $\widehat{\mathrm{FAR}}$ for each of our analysis times is listed in Table 8.1. We use this threshold when calculating our \emph{sensitivity}. In general, this sensitivity will be a function of the component masses of the CBC system considered. In order to capture this dependence, we calculate the sensitivity separately for different bins in component mass. The first step in evaluating the sensitivity is calculating the efficiency of recovering our software injections in each set of mass bins, as a function of distance to the source:
\begin{equation}
\bar{E}_{i,j}(r) = \frac{N^{\mathrm{found}}_{i,j}(r)}{N^{\mathrm{performed}}_{i,j}(r)},
\tag{7.24}
\end{equation}
where $i$ and $j$ label the bins for the masses of each of the objects in the binary, and the bar indicates that we have averaged over sky position and orientation. To be considered found, the injection must have a lower FAR than the loudest foreground event. This efficiency is calculated separately for each of the analysis times in our experiment; each row in Table 7.2 is one analysis time. The efficiency can be used to calculate the sensitive volume of each of the analysis times:
\begin{equation}
V_{i,j} = \int 4\pi r^{2}\,\bar{E}_{i,j}(r)\,dr.
\tag{7.25}
\end{equation}
The total sensitivity of the search is then simply
\begin{equation}
[VT]_{i,j} = \sum_{t=1}^{24} V^{t}_{i,j}\,T_{t},
\tag{7.26}
\end{equation}
where $t$ indexes the analysis time, $T_{t}$ is the length of the analysis time, and the sensitivity $[VT]_{i,j}$ is still specified separately for each pair of mass bins $i,j$.
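A schematic implementation of Equations (7.24)--(7.26) is sketched below, assuming that counts of found and performed injections are available per distance bin for a single mass bin $(i,j)$; the function and variable names are illustrative rather than those of the actual pipeline:

\begin{verbatim}
import numpy as np
from scipy.integrate import trapezoid

def sensitive_volume(r, n_found, n_performed):
    """Eqs. (7.24)-(7.25) for one mass bin (i, j) and one analysis time.

    r            -- distance-bin centres (e.g. in Mpc), assumed increasing
    n_found      -- injections recovered with FAR below the loudest event, per bin
    n_performed  -- injections performed, per bin
    """
    efficiency = np.where(n_performed > 0,
                          n_found / np.maximum(n_performed, 1), 0.0)
    return trapezoid(4.0 * np.pi * r**2 * efficiency, r)

def total_vt(volumes, live_times):
    """Eq. (7.26): sum of V_t * T_t over the analysis times."""
    return np.sum(np.asarray(volumes) * np.asarray(live_times))
\end{verbatim}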
7.8.1 Upper limit calculation for the rate of high-mass binary mergers
We also use the loudest event statistic to calculate our upper limits on the volume-time density of mergers of black hole binary systems in the mass ranges considered. The subtlety of this approach lies in whether the loudest foreground event is considered to be signal or background. If it is considered background, then we have the probability of detecting 0 events:
\begin{equation}
p(0\,|\,\mu) = e^{-\mu} = e^{-R\,VT},
\tag{7.27}
\end{equation}
where $VT$ is the total sensitivity of the search as defined in Equation (7.26), and $R$ is the rate (per volume, per time) of CBC coalescences in the mass space defined by the high-mass search. $\mu = R\,VT$ is the expected number of signal events, depending on the value of $R$, whose posterior probability density function we wish to determine. Again, the calculations are performed for each pair of mass bins $i,j$, but I will drop the subscripts in this section. On the other hand, if the loudest foreground event is considered signal, we have the probability of detecting 1 event:
\begin{equation}
p(1\,|\,\mu) = \mu\,e^{-\mu}.
\tag{7.28}
\end{equation}
We can express both of these possibilities with the single equation:
\begin{equation}
p([0,1]\,|\,\mu) = \frac{(1+\mu\Lambda)\,e^{-\mu}}{\int (1+\mu\Lambda)\,e^{-\mu}\,d\mu},
\tag{7.29}
\end{equation}
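(If the prior on $\mu$ is taken to be uniform on $[0,\infty)$, as Equation (7.29) implicitly does since no explicit prior appears in the integrand, the denominator can be evaluated in closed form: $\int_{0}^{\infty}(1+\mu\Lambda)\,e^{-\mu}\,d\mu = 1+\Lambda$, so the posterior is simply $(1+\mu\Lambda)\,e^{-\mu}/(1+\Lambda)$.)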
where $\Lambda$ is generally described as
\begin{equation}
\Lambda = \frac{d\ln p_{\mathrm{signal}}(x)}{dx}\left[\frac{d\ln p_{\mathrm{background}}(x)}{dx}\right]^{-1},
\tag{7.30}
\end{equation}
where the distributions for these probabilities are taken from our injections (signal) and timeslides (background), in terms of $x = -\widehat{\mathrm{FAR}}_{t}\,T_{t}$ for the loudest event statistic (where the analysis time index $t$ is written out to make it explicit that the statistic is different for each analysis time). Assuming the background is a Poisson process,
\begin{equation}
p_{\mathrm{background}}(x) = e^{x},
\tag{7.31}
\end{equation}
so $\Lambda$ simplifies to
\begin{equation}
\Lambda_{t} = \frac{d\ln V_{t}(\widehat{\mathrm{FAR}})}{d\,\widehat{\mathrm{FAR}}}\,\frac{1}{T_{t}},
\tag{7.32}
\end{equation}
where $t$ indexes the analysis time and $V_{t}$ is defined as in Equation (7.25). $\Lambda = 0$ corresponds to the loudest foreground event being background, and $\Lambda = \infty$ corresponds to the loudest event being signal [23].
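As a rough numerical sketch of Equation (7.32), assuming the sensitive volume has been tabulated as a function of FAR threshold for each analysis time (the function and argument names below are placeholders, not the pipeline's):

\begin{verbatim}
import numpy as np

def lambda_t(far_grid, volume_at_far, far_hat, live_time):
    """Finite-difference estimate of Eq. (7.32) for one analysis time.

    far_grid       -- FAR thresholds at which the volume was evaluated (increasing)
    volume_at_far  -- sensitive volume V_t at those thresholds
    far_hat        -- FAR of the loudest foreground event in this analysis time
    live_time      -- T_t, the live time of this analysis time
    """
    dlnV_dfar = np.gradient(np.log(volume_at_far), far_grid)
    return np.interp(far_hat, far_grid, dlnV_dfar) / live_time
\end{verbatim}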
We compute the Bayesian likelihoods (which are proportional to the numerator in Equation (7.29)) for this posterior probability distribution for each analysis time, marginalizing over the statistical uncertainties in the volume due to the finite number of software injections; see Reference [127] for details. The likelihoods for each analysis period are then multiplied together. The prior probabilities are taken from the results of the search for high-mass CBCs in LIGO's S5 data. The calibration uncertainty is marginalized over at this final stage because the nature of the errors implies they are significantly correlated between analysis times. To turn this posterior into a rate statement, we normalize the posterior and find the rate below which 90% of the probability lies. This gives us a 90% confidence upper limit on the rate of high-mass CBCs.
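The combination step described above might look schematically like the following; this sketch omits the marginalization over injection-counting statistics and calibration uncertainty described above, and the rate grid, prior, and variable names are assumptions made for illustration:

\begin{verbatim}
import numpy as np
from scipy.integrate import trapezoid, cumulative_trapezoid

def rate_upper_limit(rate_grid, vt_per_time, lambda_per_time, prior, cl=0.90):
    """Combine per-analysis-time loudest-event likelihoods (Eq. 7.29) for one
    mass bin and return the rate upper limit at confidence level `cl`."""
    log_like = np.zeros_like(rate_grid)
    for vt, lam in zip(vt_per_time, lambda_per_time):
        mu = rate_grid * vt                  # expected signal count, mu = R * V_t * T_t
        log_like += np.log1p(mu * lam) - mu  # log of (1 + mu*Lambda_t) * exp(-mu)
    posterior = np.exp(log_like - log_like.max()) * prior
    posterior /= trapezoid(posterior, rate_grid)
    cdf = cumulative_trapezoid(posterior, rate_grid, initial=0.0)
    return np.interp(cl, cdf, rate_grid)     # rate with 90% of the posterior below it
\end{verbatim}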
It should be noted that there are also uncertainties in the waveform models, but these are not taken into account. The calibration errors, which give us a systematic uncertainty of 42% in volume, are sufficiently overestimated that we consider it acceptable not to add the additional waveform uncertainty, which is difficult to quantify in the first place (since we have no astrophysical waveforms against which to compare our theoretical ones).
The upper limit calculation is the main scientific result of a search for GWs in the absence of a detection. Because astrophysical observations of the systems of interest are rare (see Section 2.1), placing an upper limit on the volume-time density of such merging systems is of considerable scientific value.