each time-sliced filter, and the outputs of these new time-sliced filters form an ensemble of partial SNR streams. By linearity of the filtering process, these partial SNR streams can be summed to reproduce the SNR of the full template.
Since waveforms with neighboring intrinsic source parameters have similar time-frequency evolution, it is possible to design computationally efficient time slices for an extended region of parameter space rather than having to design different time slices for each template. We construct from our time-sliced filters $S$ sub-banks $B_s = \{h_i^s\}_i$, and within each sub-bank we choose time slice boundaries with the smallest power-of-two sample rates that sub-critically sample all time-sliced templates in that bank. The time slices consist of the $S$ intervals $[t_0, t_1), [t_1, t_2), \ldots, [t_{S-1}, t_S)$, sampled at frequencies $f_0, f_1, \ldots, f_{S-1}$, where $f_s$ is at least twice the highest nonzero frequency component of any filter in the bank $B_s$ for the $s$th time slice. The time-sliced templates can then be downsampled in each interval without aliasing, so we define them as
\[
h_i^s[k] \equiv
\begin{cases}
h_i\left(k/f_s\right) & \text{if } t_s \le k/f_s < t_{s+1} \\
0 & \text{otherwise.}
\end{cases}
\tag{5.5}
\]
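As a concrete sketch of Eqn. 5.5, the snippet below slices a toy template into disjoint intervals and downsamples each slice. All names and the toy waveform are illustrative, not the gstlal implementation; a production code would low-pass filter before decimating, whereas here we simply assume each slice is already band-limited below $f_s/2$, as the time-slice construction requires.

```python
import numpy as np

def time_slice(h, f, boundaries, slice_rates):
    """Time-sliced, downsampled templates in the spirit of Eqn. 5.5.

    h           -- full template sampled at rate f (Hz)
    boundaries  -- slice edges [t_0, ..., t_S] in seconds
    slice_rates -- reduced rate f_s per slice; must divide f

    NB: plain decimation is alias-free only if each slice is already
    band-limited below f_s / 2, as the construction assumes.
    """
    slices = []
    for (t0, t1), fs in zip(zip(boundaries[:-1], boundaries[1:]), slice_rates):
        step = f // fs                        # decimation factor f / f_s
        k0, k1 = int(t0 * f), int(t1 * f)     # slice boundaries in samples
        slices.append(h[k0:k1:step])          # h_i^s: nonzero only in [t_s, t_{s+1})
    return slices

# toy chirp-like template: 1 s at 256 Hz, instantaneous frequency falls with time
f = 256
t = np.arange(f) / f
h = np.sin(2 * np.pi * (4.0 + 28.0 * (1.0 - t)) * t)

# the late slice is low frequency, so it tolerates the lower sample rate
parts = time_slice(h, f, boundaries=[0.0, 0.5, 1.0], slice_rates=[256, 64])
```

The first slice stays at the full rate while the second carries only a quarter as many samples per second, which is the source of the computational saving.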
We note that the time slice decomposition in Eqn. 5.4 is manifestly orthogonal since the time slices are disjoint in time. In the next section, we examine how to reduce the number of filters within each time slice via the singular value decomposition (SVD) of the time-sliced templates.
In the case where the time-frequency relationship is not known precisely, as for example during merger, we can still apply this multibanding prescription. We treat these cases as we have done here, except that we err on the side of oversampling.
Any $n \times m$ rectangular matrix $M$ can be uniquely decomposed into a unitary $n \times n$ mapping $U$, a unitary $m \times m$ mapping $V$, and an $n \times m$ diagonal scaling map $\Sigma$ such that
\[
M = U \Sigma V^\dagger,
\tag{5.6}
\]
where $\dagger$ denotes the conjugate transpose and $\Sigma$ has all non-negative values. This decomposition is known as the singular value decomposition. An equivalent expression for the SVD is given by
\[
M = \sum_{i=1}^{n} \sigma_i \, |u_i\rangle\langle v_i|,
\tag{5.7}
\]
where $|u\rangle\langle v|$ is the vector outer product, the $\sigma_i$ are called the singular values, and $u_i$ and $v_i$ are the normalized left and right singular vectors, i.e., the columns of $U$ and $V$, respectively. The first representation is most convenient for the actual software implementation of the SVD, whereas the second is useful for understanding the theoretical properties of the expansion and why it works for GW data analysis.
If the singular values are ordered so that $\sigma_i \ge \sigma_{i+1}$, then this decomposition is unique up to phase conventions for the singular vectors.
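The two representations in Eqns. 5.6 and 5.7 are easy to verify numerically. A small sketch with NumPy (a random matrix stands in for $M$; `np.linalg.svd` with `full_matrices=False` returns the reduced form, which suffices for the rank-one expansion):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((6, 10))          # any rectangular matrix

# NumPy returns U, the ordered singular values, and V^dagger directly
U, sigma, Vh = np.linalg.svd(M, full_matrices=False)

# Eqn. 5.7: M as a sum of rank-one outer products sigma_i |u_i><v_i|
M_rebuilt = sum(s * np.outer(U[:, i], Vh[i]) for i, s in enumerate(sigma))

assert np.all(np.diff(sigma) <= 0)        # sigma_i >= sigma_{i+1}
assert np.allclose(M, M_rebuilt)          # the expansion reproduces M
```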
Now we apply the SVD to a template bank. Let $M$ be a matrix whose rows correspond to the whitened, normalized time-domain representation of templates in a bank. As usual, the SNR for a normalized template $\hat h$ in the frequency domain is given by
\[
\rho = 4\,\mathrm{Re} \int_0^\infty \frac{\hat h^*(f)\,\tilde s(f)}{S_n(f)}\, df.
\tag{5.8}
\]
Note that with this expression for the SNR, we have not maximized over coalescence time or phase.
For any time series $s(t)$ and power spectrum $S_n(f)$, we define the whitened time series $s_W(t)$ by
\[
s_W(t) = \int_0^\infty \frac{\tilde s(f)}{\sqrt{S_n(f)}}\, e^{2\pi i f t}\, df.
\tag{5.9}
\]
The whitened time series is the inverse Fourier transform of the signal Fourier transform, after the latter has been rescaled by the noise amplitude spectral density. Note that for white noise, $S_n(f)$ is a constant and the whitening process has no effect, as one might expect.
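Eqn. 5.9 amounts to dividing by the amplitude spectral density in the frequency domain and transforming back. A minimal sketch with NumPy's real FFTs follows; the PSD here is a placeholder, not a measured detector spectrum, and the white-noise case checks the remark above.

```python
import numpy as np

def whiten(s, psd):
    """Whitened time series per Eqn. 5.9: rescale by sqrt(S_n) and invert.

    `psd` is a one-sided PSD evaluated on the rfft frequency grid;
    here it is a placeholder, not a measured detector spectrum.
    """
    s_tilde = np.fft.rfft(s)
    return np.fft.irfft(s_tilde / np.sqrt(psd), n=len(s))

n = 1024
s = np.random.default_rng(1).standard_normal(n)

# white noise: S_n(f) constant, so whitening should do nothing (cf. the text)
s_w = whiten(s, np.ones(n // 2 + 1))
assert np.allclose(s_w, s)
```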
We now use Parseval's theorem to move back to the time-domain representations of the filters. According to our definitions,
\[
\rho = 4\,\mathrm{Re} \int_0^\infty \hat h_W^*(f)\, \tilde s_W(f)\, df,
\tag{5.10}
\]
which by Parseval's theorem implies
\[
\rho = 4\,\mathrm{Re} \int_0^\infty \hat h_W(t)\, s_W(t)\, dt.
\tag{5.11}
\]
Since the matrix $M$ has as rows the time-domain whitened templates, the SNR for all templates may be computed with the single matrix operation
\[
\rho = 4 M s_W,
\tag{5.12}
\]
noting that both the whitened data $s_W$ and the matrix elements of $M$ are real. Now we see the power of the SVD. We carry out the SVD for $M$, and then truncate the expansion in Eqn. 5.7 to obtain an approximate expression for the SNR,
\[
\rho_L \approx \sum_{i=1}^{L} \sigma_i\, |u_i\rangle\langle v_i|s\rangle,
\tag{5.13}
\]
where $L \le n$ is the truncation order. The quantity $\rho_L$ is a column vector whose rows correspond to the signal-to-noise for each template in the bank before maximization over coalescence time and phase.
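A numerical sketch of Eqns. 5.12 and 5.13: random unit vectors stand in for whitened templates (all names illustrative), the exact SNRs come from the single matrix product, and the truncated SVD reconstruction keeps only the first $L$ terms (the factor of 4 from Eqn. 5.12 is carried along for the comparison).

```python
import numpy as np

rng = np.random.default_rng(2)
n, samples, L = 8, 256, 6

M = rng.standard_normal((n, samples))
M /= np.linalg.norm(M, axis=1, keepdims=True)   # rows: whitened, normalized templates
s_w = rng.standard_normal(samples)              # stand-in whitened data

rho = 4 * M @ s_w                               # Eqn. 5.12: all n SNRs at once

U, sigma, Vh = np.linalg.svd(M, full_matrices=False)
partial = sigma[:L] * (Vh[:L] @ s_w)            # sigma_i <v_i|s> for i <= L
rho_L = 4 * U[:, :L] @ partial                  # Eqn. 5.13 (factor 4 from Eqn. 5.12)

# keeping all n terms reproduces the exact SNR
assert np.allclose(rho, 4 * U @ (sigma * (Vh @ s_w)))
```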
By filtering with the orthogonal templates and allowing for a small loss in SNR, we significantly reduce the computational cost. The problem is to achieve $L \ll n$ while still having $\rho_L \approx \rho$. The crucial question therefore is how much SNR is lost in making the approximation in Eqn. 5.13. A straightforward calculation shows that the expected loss of SNR due to truncation of the SVD expansion is given by
\[
(\Delta\rho)^2 = \left\langle |\rho - \rho_L|^2 \right\rangle = \sum_{i=L+1}^{n} \sigma_i^2.
\tag{5.14}
\]
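One simple way to use Eqn. 5.14 in practice is to pick the smallest $L$ whose expected fractional SNR loss lies below a target. The sketch below does this; normalizing the discarded tail by the total $\sum_i \sigma_i^2$ is an assumption of this sketch, and the function name is illustrative.

```python
import numpy as np

def truncation_order(sigma, tolerance):
    """Smallest L whose expected fractional SNR loss (Eqn. 5.14) is below
    `tolerance`.  Normalizing the tail sum by the total sum of sigma_i^2 is
    an assumption of this sketch."""
    sq = np.asarray(sigma, dtype=float) ** 2
    # tail[L] = sum of the discarded sigma_i^2 when the first L filters are kept
    tail = np.concatenate([np.cumsum(sq[::-1])[::-1], [0.0]])
    return int(np.argmax(tail <= tolerance**2 * sq.sum()))

sigma = np.array([10.0, 5.0, 1.0, 0.1, 0.01])
L = truncation_order(sigma, 0.1)    # keep enough filters for a 10% tolerance
```

With these singular values, a 10% tolerance keeps only the first two basis filters, illustrating how quickly decaying singular values translate into large savings.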
The normalized SNR loss $\Delta\rho/\rho$ is referred to as the SVD tolerance and is clearly determined by the number of basis templates that are kept in the approximation. We refer to the inner products
\[
\rho_{\perp,i} = \sigma_i \langle v_i|s\rangle
\tag{5.15}
\]
as the orthogonal or partial SNRs. We note in particular that these SNRs are non-physical. In order to detect real signals, we need to be able to reconstruct the original SNR.

Figure 5.6: Illustration of the transformation from correlated physical templates (a) to orthogonal unphysical templates (b). Template banks for the detection of GW signals from CBCs are highly redundant by design. By applying a singular value decomposition to the template waveforms, one can pick out the dominant contributing pieces and filter only with these. In doing so, we change from filtering with the physical templates (left) to filtering with the unphysical templates (right). With this method, the number of required filters can be reduced by an order of magnitude or more with a negligible loss of SNR.

In terms of the orthogonal SNRs, the reconstructed SNR is given by
\[
\rho_L = \sum_{i=1}^{L} \rho_{\perp,i}\, |u_i\rangle,
\tag{5.16}
\]
which follows from Eqn. 5.13. Since we have not maximized over phase, we must include in $M$ two filters for each set of intrinsic parameters: one filter with $\varphi_0 = 0$ and one with $\varphi_0 = \pi/4$. Then combining the SNRs for these two filters in quadrature will yield the maximization over coalescence phase.
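A sketch tying Eqns. 5.15 and 5.16 to the quadrature phase maximization: two filters per parameter set, partial SNRs from the right singular vectors, reconstruction with the left ones. The sinusoidal filters and frequencies are toy stand-ins for real templates, and the overall factor of 4 is omitted, matching Eqn. 5.16.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(512) / 512.0
freqs = [20.0, 22.0, 25.0]          # toy stand-ins for intrinsic parameters

rows = []
for f0 in freqs:
    rows.append(np.cos(2 * np.pi * f0 * t))   # phi_0 = 0 filter
    rows.append(np.sin(2 * np.pi * f0 * t))   # quadrature filter
M = np.vstack(rows)
M /= np.linalg.norm(M, axis=1, keepdims=True)

s_w = rng.standard_normal(t.size)
U, sigma, Vh = np.linalg.svd(M, full_matrices=False)

rho_perp = sigma * (Vh @ s_w)       # partial SNRs, Eqn. 5.15 (no truncation here)
rho = U @ rho_perp                  # reconstructed physical SNRs, Eqn. 5.16

# quadrature combination over the two phases per parameter set
rho_max = np.hypot(rho[0::2], rho[1::2])

assert np.allclose(rho, M @ s_w)    # reconstruction matches direct filtering
```

Without truncation the reconstruction is exact; in the pipeline only the first $L$ partial SNRs would be kept, trading a small SNR loss for far fewer filters.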
As shown in Fig. 5.5 (right), the SVD can yield greater than an order of magnitude reduction in the required number of filters, even at the strictest of SVD tolerances. The key to the efficacy of this technique is that we fill each matrix $M$ with templates that are very nearby in parameter space, maximizing the redundancy among them. A graphical illustration of the singular value decomposition applied to inspiral signals is shown in Fig. 5.6. The orthogonal templates shown in Fig. 5.6b are sorted in order of decreasing singular values (top to bottom). We see clearly that the orthogonal templates with the smallest singular values bear almost no resemblance to the original templates, an indication of the negligible contribution to the signal-to-noise from this part of the signal space.

Figure 5.7: Architecture of the gstlal analysis pipeline. The above flow chart depicts the workflow of the gstlal pipeline for CBC searches. First one constructs the template bank and performs the time-slicing decomposition discussed above, choosing the optimal sampling frequency for each slice. The data is read from disk, whitened, and split into a number of parallel streams. Each stream corresponds to one of the time slices, and the data in the stream are downsampled to match the filters. Filtering the data with the orthogonal SVD templates, we create an equal number of SNR streams. If the orthogonal filters show a large excursion, then the physical SNR is reconstructed from the partial streams. GStreamer handles the synchronization and upsampling seamlessly such that the SNR may be added together across time slices to obtain the full SNR time series. Finally, the reconstructed SNR time series is further analyzed (clustered, checked for coincidence, etc.) to obtain a trigger.