Gravitational waves (GWs) are now routinely observed by the two Advanced LIGO detectors Aasi and others (2015) and the Advanced Virgo detector Acernese and others (2015). At the end of the last observing period, the KAGRA detector Akutsu and others (2019) joined the network and is expected to aid observations in the future. During three observing runs, GWs from compact binary sources have been identified, almost all of which are consistent with the merger of binary black hole (BBH) systems Abbott and others (2019, 2020); Nitz et al. (2021); Abbott and others (2021).
Many searches for GWs from compact-binary coalescences use matched filtering to separate potential signals from the background detector noise Abbott and others (2020); Messick and others (2017); Dal Canton et al. (2020); Adams et al. (2016). Matched filtering convolves a set of pre-calculated template waveforms, each representing a possible source with different component masses, spins, etc., with the detector's data, and is known to be optimal for Gaussian noise Allen et al. (2012). A signal-to-noise ratio (SNR) time series is calculated for each template waveform; candidates are identified by a peak in the SNR time series that also passes data-quality checks Nuttall and others (2015); Abbott and others (2016, 2020). In a second step, the candidate detections from one detector are cross-validated against those from other detectors to further increase the significance of the reported events and rule out false positives Abbott and others (2020); Usman and others (2016); Sachdev and others (2019). For sources where the gravitational-wave signal is unknown or poorly modelled, other search algorithms detect coincident excess power in different detectors and do not require a model Klimenko and others (2016).
Deep learning has started to be explored as an alternative approach to detecting GWs George and Huerta (2018b, a); Gabbard et al. (2018); Dreissigacker and Prix (2020); Schäfer et al. (2020); Krastev et al. (2021); Wei et al. (2021); Wei and Huerta (2021); Cuoco and others (2021); Huerta and Zhao (2021). It may potentially target signals which are currently challenging for matched filter search algorithms due to computational limitations Wei et al. (2021, 2020). The computational cost of these modeled searches scales with the number of templates required to cover the parameter space. Certain effects like higher-order modes Harry et al. (2018), precession Harry et al. (2016), eccentricity Nitz et al. (2019); Lenon et al. (2021), or the inclusion of sub-solar-mass systems Nitz and Wang (2021, 2021) potentially require millions of templates and are thus computationally prohibitive to analyze. Deep learning may also be more sensitive when the noise is non-Gaussian Zevin and others (2017); Essick et al. (2020); Wei and Huerta (2021).
In our previous work Schäfer et al. (2021) we explored the sensitivity of a simple neural network to non-spinning BBH sources in Gaussian noise for a single detector. We tested how different training strategies influence the training procedure and the final efficiency of the network. Our results showed that under the given conditions the network can closely reproduce the sensitivity of matched filtering, and that the most efficient convergence is reached when a range of low-SNR signals is provided throughout training.
Here we extend our previous work to two detectors. To do so, we use the same single-detector network explored in Schäfer et al. (2021) and apply it individually to the data from both observatories. This procedure produces a list of candidate events for each detector. We then search for coincident events between the two, where two events are assumed to be coincident if they are within the maximum time-of-flight difference between both detectors. We assume this difference to be since the networks are trained to be insensitive to variations on this scale.
The network uses the unbounded Softmax replacement (USR) we introduced in Schäfer et al. (2021). It outputs a single-detector ranking statistic, which we use here to construct a network ranking statistic. This network ranking statistic turns out to be the sum of the individual ranking statistics minus a correction factor.
The main advantage of this approach is the trivial computation of the search background, which enables robust detection claims at statistical significance ( per years) comparable to existing production methodology. By applying time shifts larger than the time-of-flight difference between the detectors to the data from only one observatory, we can create large amounts of data which by construction cannot contain any astrophysical coincident candidates. By applying the time shifts to the single-detector events rather than to the input data directly, we can skip re-evaluating the entire test set and efficiently look for coincident events. This is a well-established method that has already been successfully applied Abbott and others (2020); Usman and others (2016); Sachdev and others (2019). By this approach we can probe the search down to a false-alarm rate (FAR) of 1 false alarm per months. The FAR estimates how often a candidate is produced by the search under the null hypothesis of no astrophysical candidates. Our FAR estimate is limited by the assigned hardware resources rather than by the available data.
We compare this search to an equivalent matched filter search Nitz et al. (2021). We find that the deep learning search retains of the sensitivity of a two-detector matched filter search when the latter is restricted to using the timing difference between the detectors as the only means of determining coincident events. However, the matched filter search also extracts some information on the parameters of the signal. When we additionally require matching templates and consistency of the phase and amplitude of the triggered templates between detectors Nitz et al. (2017), the machine learning search only retains of the sensitivity.
We then construct a single network that operates on the data from both detectors. The idea is that the network may then be able to learn, summarize, and cross-correlate signal characteristics between detectors. To do so, we remove the last layer of the original networks applied to the individual detectors and concatenate their outputs. Thereby the input data are compressed to a -dimensional latent space. Dense layers are used to correlate the concatenated outputs and condense them into a single ranking statistic.
Using a single network complicates the background estimation, as time shifts between the detectors can in principle no longer be applied after evaluating the individual data streams. However, the two-detector network architecture is constructed such that the data from the different detectors are analyzed by individual sub-networks, concatenated, and processed by a third sub-network. This enables us to process the bulk of the data only once and apply time shifts to the individual-detector sub-network outputs. To obtain the ranking statistic we then only need to run the time-shifted data through the final, small sub-network.
We find that networks constructed this way are not able to improve the sensitivity over a time-coincidence analysis of the single-detector machine learning events. We test three different approaches to training these networks, but none shows any improvement.
II Coincident Search from Independent Single-Detector Networks
The algorithm explored in this section uses a network trained on data from a single detector and applies it to find coincidences in multiple detectors. It is one of the simplest extensions and has two advantages. Firstly, networks trained on data from a single detector can be re-used, which reduces the computational resources required. Secondly, the search background can be estimated using well-established and efficient algorithms, allowing for much higher confidence in candidate detections.
We use the same network as in Schäfer et al. (2021), which is an adaptation of the network presented in Gabbard et al. (2018). It consists of 6 stacked convolutional layers followed by 3 dense layers. An overview of the architecture is given in Table 1.
| layer type            | kernel size | output shape |
| Input + BatchNorm1d   |             |              |
| Conv1D + ELU          | 64          |              |
| MaxPool1D + ELU       | 4           |              |
| Conv1D + ELU          | 32          |              |
| MaxPool1D + ELU       | 3           |              |
| Conv1D + ELU          | 16          |              |
| MaxPool1D + ELU       | 2           |              |
| Dense + Dropout + ELU |             |              |
| Dense + Dropout + ELU |             |              |
| Dense + Softmax       |             |              |
The last layer contains a Softmax activation function, which we remove during testing. In Schäfer et al. (2021) we showed that this modification, which we called USR, allows the network to be tested at lower FARs than otherwise possible.
The Softmax activation for the first output neuron is given by

$$\mathrm{Softmax}(x)_0 = \frac{e^{x_0}}{e^{x_0} + e^{x_1}} = \frac{1}{1 + e^{x_1 - x_0}}, \quad (1)$$

where $(x_0, x_1)$ is the network output before the activation function. When $x_0 - x_1$ is strongly positive, the denominator in (1), and thus the fraction, numerically evaluates to 1. This leads to problems when setting the threshold value used to determine true-positive detections Schäfer et al. (2021).
However, equation (1) is bijective and can be inverted:

$$x_0 - x_1 = \log\left(\frac{y}{1 - y}\right),$$

where $y$ is the Softmax output. This quantity is monotonic in $y$, and we can thus do statistics on $x_0 - x_1$ directly, avoiding numerical instabilities while still using the Softmax activation during training.
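The numerical issue and its USR remedy can be illustrated with a short sketch (plain Python, not the code used in this work): the Softmax output of equation (1) saturates to exactly 1.0 in double precision for loud candidates, while the pre-activation difference still separates them.

```python
import math

def softmax_p_signal(x0, x1):
    # Softmax output for the signal class, equation (1).
    return 1.0 / (1.0 + math.exp(x1 - x0))

def usr(x0, x1):
    # Unbounded Softmax replacement: the pre-activation difference.
    # Monotonic in the Softmax output, so thresholds can be set on it.
    return x0 - x1

# For loud candidates the Softmax saturates numerically in float64 ...
p_loud = softmax_p_signal(40.0, 0.0)
p_louder = softmax_p_signal(80.0, 0.0)
# ... while the USR statistic still distinguishes them.
r_loud = usr(40.0, 0.0)
r_louder = usr(80.0, 0.0)
```

At moderate values the two representations are related exactly by the logit inversion above, so thresholds set on the USR output carry the same ordering as thresholds on the Softmax output.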
II.2 Data Sets and Training
The input to the network is a time series of duration sampled at . This allows signals up to a frequency of to be resolved, which is sufficient for the considered parameter space.
The network is trained on signals from non-spinning BBHs with component masses uniformly distributed from to . We enforce and for each pair of masses uniformly draw coalescence phases . The signals are generated with the waveform model
SEOBNRv4_opt Devine et al. (2016) (optimized version of
SEOBNRv4 Bohé and others (2017)) and scaled to varying optimal SNRs in the range during training. The time of merger is varied from to from the start of the input window to decrease the dependency of the network on the exact signal position. Each signal is whitened by the analytic model for the detector power spectral density (PSD)
aLIGOZeroDetHighPower Collaboration (2018). For further details on the training set please refer to Schäfer et al. (2021).
Notably, we do not vary the sky position, inclination or polarization during training. For a single detector, variations in these parameters can be fully expressed by changes in the distance, which is fixed by choosing a specific SNR, and the phase . For a two-detector setup this degeneracy is broken, as a time-of-flight difference is introduced and the amplitudes and phases are correlated in the two detectors. However, our search algorithm is largely parameter agnostic. This means that its output does not depend on the amplitude or phase. Thus, we do not have information on whether or not the search responds to consistent signals. Finally, the time-of-flight difference is on the order of the variation of the merger time within the training set and can, therefore, not be resolved. In section III the network has access to data from both observatories and the data is adjusted accordingly.
All noise is Gaussian and simulated from the
aLIGOZeroDetHighPower PSD Collaboration (2018). We explicitly generate colored noise and whiten it afterwards. This in principle allows our training to be extended to real noise.
The training set contains noise samples, of which are combined with unique signals. The validation set (called the efficiency set in our previous work Schäfer et al. (2021)) contains noise samples and unique signal samples, which we subsequently scale to SNRs . This set is used to calculate the efficiency of the network at a fixed false-alarm probability (FAP) of . The FAP is the fraction of discrete noise samples misclassified as signals. The efficiency is the fraction of discrete signal samples correctly classified as signals at a given FAP.
The test set contains a month of continuous simulated noise for each of the two detectors in Hanford and Livingston. We inject signals with parameters drawn from the distributions shown in Table 2 into both data streams. Injections are separated by a random time between and . To enable the networks to process this data, the continuous stream is sliced into million overlapping, correlated samples. Each sample is whitened individually by the analytic PSD.
We construct a second test set for background estimation. This set contains the same time-domain noise as the first test set, but no injections are performed. We pre-process this second data set in the same way as the first so that the network can process it.
The network is trained for epochs and we use the network with the highest average efficiency over all SNRs for the analysis carried out here. We use the Adam optimizer with a learning rate of , , and Kingma and Ba (2014). As the loss function we use a variant of the binary cross-entropy which was designed to stay finite:

$$L = -\frac{1}{N}\sum_{i=1}^{N}\left[y_i \log\left(\hat{y}_i + \epsilon\right) + \left(1 - y_i\right)\log\left(1 - \hat{y}_i + \epsilon\right)\right],$$

where $y_i$ is 1 for a signal-class sample and 0 for a noise-class sample, $\hat{y}_i$ is the prediction of the network, $N$ is the mini-batch size, and $\epsilon$ is a small regularization constant.
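A minimal sketch of such a regularized cross-entropy is given below; the value of the regularization constant is illustrative, as the text does not fix it.

```python
import math

def regularized_bce(y_true, y_pred, eps=1e-6):
    # Binary cross-entropy with a small constant eps added inside the
    # logarithms so the loss stays finite even for predictions of exactly
    # 0 or 1.  The value of eps is illustrative.
    total = 0.0
    for y, p in zip(y_true, y_pred):
        total += y * math.log(p + eps) + (1.0 - y) * math.log(1.0 - p + eps)
    return -total / len(y_true)

# Confident but wrong predictions would make the standard cross-entropy
# diverge; the regularized variant stays finite.
loss = regularized_bce([1.0, 0.0], [0.0, 1.0])
```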
II.3 Single Detector Events
To apply the network to data of longer duration than its input, we use a sliding window with step size . The contents of each window are whitened individually by the PSD model. At each step the network outputs a pair of numbers, the difference of which we use as our ranking statistic.
We apply the same network to the data from both detectors individually and thus obtain two output time series of ranking statistics. To determine notable events in the individual detectors, we apply a threshold to both time series and cluster the resulting points above the threshold into events. A point exceeding the threshold is counted towards a cluster if it is within of the cluster boundaries. We choose a threshold on the USR output of , which corresponds to a Softmax output of .
The search algorithm produces a list of events, where each event is a tuple of a time, at which the network predicts a signal to be present, and a ranking statistic, which can be used to assign a significance to the event.
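The thresholding and clustering step can be sketched as follows; the function, its names, and the clustering gap are illustrative stand-ins, not the code used in this work.

```python
def cluster_triggers(times, stats, threshold, max_gap):
    # Cluster points above `threshold` into events.  A point joins the
    # current cluster if it lies within `max_gap` seconds of the cluster's
    # boundary; otherwise a new cluster is started.  Each event is the
    # (time, stat) of the loudest point in its cluster.  `max_gap` is a
    # free parameter here; the text leaves the exact value unspecified.
    events = []
    cluster = []  # (time, stat) pairs in the current cluster
    for t, s in zip(times, stats):
        if s <= threshold:
            continue
        if cluster and t - cluster[-1][0] > max_gap:
            events.append(max(cluster, key=lambda ts: ts[1]))
            cluster = []
        cluster.append((t, s))
    if cluster:
        events.append(max(cluster, key=lambda ts: ts[1]))
    return events

times = [0.0, 0.1, 0.2, 5.0, 5.1]
stats = [9.0, 12.0, 10.0, 7.0, 11.0]
events = cluster_triggers(times, stats, threshold=8.0, max_gap=1.0)
```

In this toy input, the first three points form one cluster (its loudest point becomes the event) and the last point above threshold forms a second event.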
II.4 Coincident Events
A signal of astrophysical origin will be present in the data of all detectors. Its SNR in each detector depends on the location and orientation of the source. The number of false alarms can thus be reduced by requiring that an event is picked up by multiple detectors at similar times.
To quantify the significance of an event detected by more than one observatory, a combined ranking statistic is required. For simplicity we restrict our current analysis to two detectors. However, this approach is extendable to any number of detectors.
If the network used the final Softmax activation during evaluation, a combined ranking statistic would follow straightforwardly from the interpretation of the output as a probability. The combined ranking statistic is the sum of the single-detector ranking statistics minus a correction term.
We consider an event in one detector to be coincident with another event in the other detector if the event times are within of each other. This time difference is chosen to be the maximum time resolution the networks can achieve due to the time variation in the training set.
We construct a list of coincident events from the single-detector lists by the above condition. Each coincident event is assigned the combined ranking statistic (II.4) and the time in the Hanford detector.
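A minimal sketch of this coincidence step, with the maximum allowed time difference as a free parameter and the correction term of the combined statistic omitted for brevity:

```python
def find_coincidences(events_h, events_l, max_dt):
    # Pair events from two detectors whose times differ by at most
    # `max_dt` (the maximum allowed time-of-flight difference, left
    # unspecified in the text).  Each coincidence keeps the Hanford time
    # and the sum of the single-detector ranking statistics; the
    # correction term described in the text is omitted in this sketch.
    coincidences = []
    for t_h, s_h in events_h:
        for t_l, s_l in events_l:
            if abs(t_h - t_l) <= max_dt:
                coincidences.append((t_h, s_h + s_l))
    return coincidences

h = [(10.00, 8.0), (50.00, 9.5)]
l = [(10.03, 7.5), (80.00, 6.0)]
coinc = find_coincidences(h, l, max_dt=0.1)
```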
II.5 Background Estimation
To estimate the FAR at different ranking statistic values, we evaluate the same noise used to search for signals but omit injecting the GWs. This ensures that all events found in this data set are noise artifacts and are not influenced by nearby injections.
The lowest FAR that can be probed is limited by the duration of the analyzed data. Our test set covers one month. The duration can be increased by shifting the data in one of the detectors by a time larger than the maximum time-of-flight difference between the detectors. Rather than shifting the data itself, one may instead alter the event times returned by the search. This allows us to skip re-analyzing the full data for each time shift and only requires us to look for coincidences between the events from one detector and the time-shifted events from the second detector. Increasing the amount of background by applying time shifts is a well-established method that has already been successfully applied in production searches Abbott and others (2020); Usman and others (2016); Sachdev and others (2019).
We choose a time shift of and apply any possible integer multiple of this step size. We then search for coincidences in these events as detailed in subsection II.4. This procedure increases our background to years.
A list of FARs at different network ranking statistic values is obtained by counting the background events, generated as described above, with a larger ranking statistic.
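The time-shift background estimation and the FAR counting can be sketched together as follows; all parameter values below are placeholders, not the values used in this work.

```python
def background_far(events_h, events_l, shift, n_shifts, max_dt, obs_time):
    # Shift the second detector's events by every integer multiple of
    # `shift` (assumed larger than the light-travel time between the
    # detectors) and collect the combined statistics of the surviving
    # coincidences.  All parameter values are placeholders.
    def coinc_stats(ev_a, ev_b):
        return [sa + sb for ta, sa in ev_a for tb, sb in ev_b
                if abs(ta - tb) <= max_dt]
    stats = []
    for k in range(1, n_shifts + 1):
        shifted = [(t + k * shift, s) for t, s in events_l]
        stats += coinc_stats(events_h, shifted)
    total_time = n_shifts * obs_time  # effective background duration
    # The FAR at a given statistic is the number of louder background
    # events per unit of effective analysis time.
    return lambda stat: sum(1 for s in stats if s > stat) / total_time

far = background_far([(1.0, 5.0), (3.0, 6.0)], [(1.05, 4.0)],
                     shift=2.0, n_shifts=2, max_dt=0.1, obs_time=100.0)
```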
The sensitive volume of a search can be estimated by

$$V(\mathcal{F}) \approx V(d_{\max}) \frac{N_{\mathrm{found}}(\mathcal{F})}{N_{\mathrm{total}}} \quad (6)$$

when it is derived on data containing injections which are distributed uniformly in volume Usman and others (2016). Here $\mathcal{F}$ is the FAR at which the volume is being calculated, $d_{\max}$ is the maximum distance of any injection, $V(d_{\max})$ is the volume of a sphere with radius $d_{\max}$, $N_{\mathrm{found}}(\mathcal{F})$ is the number of signals detected with a FAR $\leq \mathcal{F}$, and $N_{\mathrm{total}}$ is the total number of injected signals. We report the radius of a sphere with volume $V(\mathcal{F})$ instead of the sensitive volume.
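For injections uniform in volume, this estimate reduces to rescaling the maximum injection distance by the cube root of the recovered fraction; a sketch:

```python
import math

def sensitive_distance(d_max, n_found, n_total):
    # Sensitive volume: the volume of a sphere with the maximum injection
    # distance as radius, scaled by the recovered fraction of injections
    # distributed uniformly in volume.
    v_sens = (4.0 / 3.0) * math.pi * d_max ** 3 * n_found / n_total
    # Report the radius of a sphere with that volume.
    return (3.0 * v_sens / (4.0 * math.pi)) ** (1.0 / 3.0)
```

For example, recovering one eighth of the injections out to some maximum distance corresponds to a sensitive distance of half that maximum.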
We analyze a month of simulated data from the two detectors Hanford and Livingston, assuming the PSD
aLIGOZeroDetHighPower Collaboration (2018). The data contains injections drawn from the distribution shown in Table 2. We apply the network to the data from both detectors individually as described in subsection II.3. The resulting single-detector events are correlated and a list of coincident events is produced as detailed in subsection II.4. We then pick out any events that are within of an injection. These events are called foreground events from here on.
To determine the search background, we evaluate the same month of noise used to find the foreground events. However, this data does not contain any injections. The networks return a list of single-detector events, which are correlated and shifted in time to increase the effective duration of the analyzed data as detailed in subsection II.5. The resulting coincident events are called background events from here on.
We can then assign a FAR to any foreground event. To do so, we count the number of background events with a ranking statistic larger than that of the considered foreground event. This number is divided by the effective duration of the analyzed background to obtain a FAR. The sensitive volume is then obtained from equation (6) and converted to a distance. The sensitive distance as a function of the FAR is obtained by evaluating the sensitive volume at the FARs of all foreground events.
II.7 Matched Filtering
The template bank contains unique waveforms and is constructed such that no more than of the SNR of any signal is lost due to the discreteness of the bank. It covers the same mass range of to as the training set of the networks, and spins are set to zero. The individual templates are generated using the waveform model
IMRPhenomD Husa et al. (2016); Khan et al. (2016) and placed stochastically.
To run the matched filter search we use the program
pycbc_inspiral Nitz et al. (2021). It is set up to use an SNR threshold of in both detectors to create two sets of single-detector triggers. These two sets are then checked for coincidence using two different approaches.
One approach handles the matched filter triggers analogously to the network single-detector triggers, i.e. they are clustered and turned into single-detector events as described in subsection II.3. In this case the ranking statistic is the SNR returned by the best matching template. We then look for coincidences as described in subsection II.4 by requiring two events in different detectors to be separated by no more than . The combined ranking statistic in this case is given by the network SNR

$$\rho_c = \sqrt{\rho_H^2 + \rho_L^2}, \quad (7)$$

where $\rho_H$ and $\rho_L$ are the single-detector SNRs.
This disregards the information about the possible parameters obtained from the best matching template and only looks for time coincidence, i.e. no signal consistency is required.
The other approach leverages the signal information and checks for phase and amplitude correlation as well as requiring that the templates matching the data are consistent between detectors. In particular we utilize the combined ranking statistic given in equation (2) of Nitz et al. (2017) and find coincidences as described therein.
II.8 Evaluation and Comparison to Matched Filtering
In Figure 1 we show the injections that were found and missed by the network coincidence search at a FAR of one false alarm per month. The x-axis shows the optimal SNR of the injections in the Hanford detector and the y-axis shows the optimal SNR in the Livingston detector. The color indicates the network ranking statistic as calculated by equation (II.4). Missed injections are marked with a red cross. A network SNR of as calculated by equation (7) is highlighted by the black line.
Figure 1 shows that the combined ranking statistic (II.4) is correlated with the network SNR: as the network SNR increases, so does the combined ranking statistic. The loudest missed injection has a network SNR of . However, the signal is most dominantly seen in the Hanford detector with a single-detector SNR of , whereas Livingston has an optimal SNR of only due to the location of the source. Therefore, it is not surprising that the signal does not show up in both detectors and is missed by the coincidence search. When considering only the detector in which the signal is observable with the lower SNR, the loudest missed signal has an optimal SNR of in that detector.
In Figure 2 we show the sensitive distance of the different algorithms as a function of the FAR. The orange lines show the sensitivity curves of the machine learning based algorithms, whereas the purple lines show the sensitivities of a comparable matched filter search. The dashed lines show the sensitivity of the searches when only a single detector is considered. We compare those to a two-detector search where we require coincident detections in both detectors. The filled orange line and the dash-dotted purple line show the machine learning and matched filter algorithms, respectively, when both impose the same coincidence condition. The filled purple line shows a more realistic application of matched filtering, where consistency of the time of arrival, the phase, the amplitude, as well as the parameters of the best matching template is required.
We find a significant improvement of up to at a given FAR when the machine learning algorithm has access to data from both detectors compared to using only data from a single detector. Furthermore, we can probe FARs down to false alarms per month without needing to increase the amount of evaluated data by applying time shifts between detectors as described in subsection II.5. In principle this limit may be decreased even further, as time shifts are only limited by the time-of-flight difference between the detectors. The large increase in the available background potentially greatly increases the statistical significance of any event.
The sensitivities of the machine learning search algorithms are compared to an equivalent matched filter search. For the single-detector searches, given by the dashed lines in Figure 2, we find that the machine learning algorithm retains at least of the sensitivity of the matched filter analogue at a fixed FAR. This corresponds to a maximum absolute separation of . This difference in sensitivity is essentially unchanged when data from two detectors are considered and both the machine learning and the matched filter search determine coincidences based only on the timing in the different detectors. The corresponding curves in Figure 2 are the filled orange and the dash-dotted purple line, respectively. In this case, the machine learning algorithm retains at least of the sensitivity of the time-coincidence matched filter search, which corresponds to an absolute separation of .
However, matched filtering also carries information about the intrinsic parameters of the source, the relative phase, and the relative amplitude in the two detectors. This information can be used to further constrain coincidences and improve the ranking statistic by testing for signal consistency Nitz et al. (2017). We compare the time-coincidence machine learning search (filled orange line in Figure 2) to this matched filter coincidence search utilizing signal consistency checks (filled purple line in Figure 2). The machine learning search now only retains at least of the sensitivity in FAR regions where both are defined. This corresponds to an absolute separation of .
We truncate the sensitivity curve of any search that has access to data from both detectors in Figure 2 at a FAR of false alarms per month. This is done due to a large number of true positives at high FARs originating from random noise coincidences: the search returns a coincident event that is caused by a particular noise realization which happens to coincide with an injection whose optimal SNR is below the trigger threshold. Many of these injections should thus not be recoverable but are detected at high FARs due to these noise fluctuations. At a FAR of per month we expect fewer than of these false associations. Another reason to compare the sensitivity of the machine learning and matched filtering based searches only at low FARs is the thresholds used to find triggers. The matched filter search uses an SNR threshold of , whereas the machine learning search uses a threshold on the USR ranking statistic of . Because there is no direct relation between these two statistics, we cannot guarantee that both thresholds correspond to similar signal strengths. It may be possible that one search excludes weak signals which are found by the other based on this difference in threshold.
The sensitivity difference between machine learning and matched filtering stays constant between using data from a single detector and using data from two detectors when matched filtering may only check for time consistency between detection candidates from the two observatories. The performance difference increases when matched filtering also checks for signal consistency. It is therefore reasonable to believe that a multi-detector machine learning search may be more sensitive if it too can check for signal consistency. This would require either the single-detector network to output parameter estimates of the detected signal alongside a ranking statistic, or a single network that uses the data from both detectors as input. In the following section III we explore the second option.
III Two-Detector Network
The deep learning algorithm presented in section II is significantly less sensitive than the full matched filter analysis that takes signal consistency into account. On the other hand, when the deep learning algorithm is compared to the matched filter search where signal consistency is ignored, the difference in sensitivity is comparable to the single-detector case. This gives reason to believe that the gap to the full matched filter search could be reduced if the network operates on the data from both detectors and considers coincidences itself.
We construct a network that uses data from both detectors while still retaining the ability to efficiently estimate a large background. The network from section II is still applied to the data from the two detectors individually. However, the final layer is removed and the output neurons from both networks are concatenated. We then add further fully connected layers to look for coincidences between the detectors. An overview of the network is shown in Figure 3.
The last layer from the single detector network is removed to create a large latent space. A matched filter search compresses the input data into the ranking statistic, the time of the merger, and the parameters of the best matching template. The intention is that neurons may be sufficient for a comparable compression and that the additional layers that operate on the concatenated outputs could perform a signal consistency analysis.
The sub-networks A and B in Figure 3 are intended to act as encoders that reduce the -dimensional input into a latent space of dimension . It may be interesting in the future to train these sub-networks initially as autoencoders Kramer (1991), from which only the encoder is used for detection purposes afterwards. Autoencoders are neural networks which in their simplest form consist of an encoder network and a decoder network. The encoder compresses the input to some lower-dimensional latent representation, whereas the decoder uses that representation to reconstruct the input. Other studies have already found that autoencoders have potential applications in GW data analysis Shen et al. (2019); Gabbard et al. (2019).
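The forward pass of this design can be sketched with plain numpy stand-ins; the random linear maps replace the trained sub-networks, and the input length and latent sizes below are illustrative only, not the paper's actual dimensions.

```python
import numpy as np

rng = np.random.default_rng(1)

def elu(x):
    # ELU activation, as used throughout the network.
    return np.where(x > 0, x, np.exp(np.minimum(x, 0.0)) - 1.0)

# Random linear maps stand in for the trained sub-networks; the input
# length (256) and latent size (8 per detector) are placeholders.
w_a = rng.normal(size=(256, 8), scale=0.05)   # encoder for detector 1
w_b = rng.normal(size=(256, 8), scale=0.05)   # encoder for detector 2
w_c1 = rng.normal(size=(16, 16), scale=0.3)   # dense head, layer 1
w_c2 = rng.normal(size=(16, 1), scale=0.3)    # dense head, layer 2

def two_detector_statistic(x_h, x_l):
    # Encode each detector's window separately, then concatenate the
    # latents; the dense head correlates them and condenses the result
    # into a single ranking statistic.
    latent = np.concatenate([elu(x_h @ w_a), elu(x_l @ w_b)])
    return float(elu(latent @ w_c1) @ w_c2)

stat = two_detector_statistic(rng.normal(size=256), rng.normal(size=256))
```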
III.2 Data Sets and Training
The network is trained on data similar to that presented in subsection II.2. However, the data is extended to two detectors and sources are uniformly distributed in the sky. The latter change is required due to the amplitude and phase correlations in the two detectors.
We utilize the pre-trained single-detector network used in section II in two different ways. In both cases the single-detector parts of the two-detector network (A and B in Figure 3) are initialized with the weights of the pre-trained model from section II. However, for one of the two networks, these weights are then not optimized during training, leaving only the weights of the final fully connected layers (C in Figure 3) to be adjusted. This approach is known as transfer learning Weiss et al. (2016) and has been successfully applied to different problems Tan et al. (2018); George et al. (2018); Mesuga and Bayanay (2021). The second network optimizes the weights of the entire network. We also train a third network of the same architecture, where all parameters are initialized randomly and optimized during training.
The same optimizer settings and loss function described in subsection II.2 are used to train all three networks for epochs. They are trained with a Softmax activation on the final layer, which is removed during evaluation. Each network is only trained once and the epoch with the highest efficiency on the validation set is chosen for further analysis.
III.3 Coincident Events
Because the networks output a single value when given the data from two detectors, we interpret that output as a coincidence ranking statistic at the corresponding time. We then perform the same clustering and thresholding described in subsection II.3 to obtain a list of coincident events.
III.4 Background Estimation
Determining the background of the two-detector network is more challenging than for the single-detector network from subsection II.5, as there is no direct way of performing time shifts in a computationally efficient manner. One would therefore naively be limited by the duration of the analyzed data, or would have to re-evaluate the entire month of test data multiple times. However, the network is designed in such a way that the data from both detectors are still analyzed individually and combined only at a later stage. We evaluate the single-detector data individually with the sub-networks A and B from Figure 3 and store those outputs. We then permute the order of the outputs from sub-network B such that it corresponds to a time shift with respect to the output from sub-network A. Finally, sub-network C is applied to the concatenated data from sub-networks A and B for many different time shifts. Since sub-network C is very simple and time shifts can be generated trivially, this process generates months of background within on an NVIDIA RTX 2070 Super.
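This evaluation strategy can be sketched as follows; the random linear maps stand in for the trained sub-networks, and the window count, input length, and latent size are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the trained encoders A and B and the small head network C.
w_a = rng.normal(size=(128, 8))
w_b = rng.normal(size=(128, 8))
w_c = rng.normal(size=(16, 1))

data_h = rng.normal(size=(1000, 128))  # windows from detector 1
data_l = rng.normal(size=(1000, 128))  # windows from detector 2

# The expensive encoders run only once per detector; outputs are cached.
latent_h = data_h @ w_a
latent_l = data_l @ w_b

def ranking_statistics(shift):
    # A time shift is a cyclic permutation of the cached latents from one
    # detector; only the cheap head network is re-evaluated per shift.
    shifted = np.roll(latent_l, shift, axis=0)
    return (np.concatenate([latent_h, shifted], axis=1) @ w_c).ravel()

# 99 distinct shifts multiply the effective background duration by 99
# while the encoders never run again.
background = np.concatenate([ranking_statistics(k) for k in range(1, 100)])
```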
III.5 Evaluation and Comparison to Matched Filtering
Figure 4 shows the sensitive distance of the various networks as a function of the far and compares them to the results presented in subsection II.8. All curves are truncated at a far of per month due to the large number of false associations described in subsection II.8. The three networks utilizing the data from both detectors described in this section are labeled "Machine learning network coinc.". The matched-filter results are shown in purple, where the dash-dotted line considers only time coincidence and the solid line also takes the consistency of intrinsic source parameters, phase, and amplitude into account. The orange line corresponds to the network from section II.
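The far axis of such curves is obtained by counting background events above a given ranking-statistic threshold and dividing by the total background time. A minimal sketch with a hypothetical helper name and toy numbers:

```python
import numpy as np

def far_of_threshold(background_stats, background_months, thresholds):
    """Map ranking-statistic thresholds to false-alarm rates (per month),
    given background event statistics from time-shifted data.
    (Hypothetical helper, illustrating the far axis of the curves.)"""
    bg = np.sort(np.asarray(background_stats))
    # count of background events at or above each threshold
    n_louder = len(bg) - np.searchsorted(bg, thresholds, side='left')
    return n_louder / background_months

bg = [1.0, 2.5, 3.0, 4.2, 5.1, 6.0]   # toy background statistics
print(far_of_threshold(bg, background_months=3.0, thresholds=[0.0, 4.0, 7.0]))
# 6 events above 0.0 in 3 months -> 2 per month, and so on
```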
The networks described in this section were designed to be able to take signal consistency into account by reducing the input data to a large latent space. We therefore expected their sensitivities at low fars to exceed those obtained from time coincidence between single-detector events produced by the single-detector network.
However, we find that at low fars all of the two-detector networks are roughly as sensitive as the network tested in subsection II.8. They are therefore still less sensitive than the matched-filter equivalent and do not seem to take signal consistency into account. At high fars, on the other hand, they are more sensitive. We suspect that the large allowed time variation of the peak amplitude may be responsible for this behavior. The networks are thereby trained to be insensitive to small variations in timing, which may produce phase and amplitude variations over a broad range.
In this paper we have extended the single-detector deep learning gw search algorithm from Gabbard et al. (2018); Schäfer et al. (2021) to two detectors and compared it to an equivalent matched-filter algorithm. We found that the simplest extension, applying the single-detector network to the data from each detector individually and searching for coincident events, retains of the sensitivity of matched filtering when only time consistency between detectors is required. This fraction drops to when signal consistency between detectors is also considered.
To operate on data from two observatories, we constructed a two-detector ranking statistic for the machine learning search based on the single-detector usr (unbounded Softmax replacement) ranking statistic proposed in Schäfer et al. (2021). This ranking statistic proved to be correlated with the network snr.
We also highlighted the advantages of using a single-detector network to construct a two-detector search. Firstly, the single-detector network does not need to be re-trained to be applied to the second detector if both have similar noise characteristics. Secondly, this approach enables efficient background estimation by applying relative time shifts to the recovered single-detector events. This makes it possible to test the two-detector search down to almost arbitrarily low fars at low computational expense. The method has already proven to be effective and reliable in state-of-the-art classical search algorithms Abbott and others (2020); Usman and others (2016); Sachdev and others (2019).
Because using a single-detector network restricts one to checking for coincidence based solely on the timing difference, we tested a simple network that operates on the data from both detectors directly. This in principle allows the network to construct internal signal representations which can be correlated between observatories. The network was constructed by removing the final layer of the single-detector network, concatenating the outputs, and adding a few fully connected layers to check for coincident events. The final fully connected layers thus receive latent variables for each detector that can be checked for coincidence.
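This architecture can be sketched in a few lines. The layer sizes and the tanh feature maps below are hypothetical stand-ins for the trained CNN sub-networks; the sketch shows only the structure: one shared sub-network per detector, concatenation of the latent vectors, and a small fully connected head producing the coincidence statistic.

```python
import numpy as np

rng = np.random.default_rng(2)

# Shared single-detector sub-network with its final layer removed:
# here a toy map from a strain segment to a latent vector.
# (Hypothetical sizes; the real sub-networks are the trained CNNs.)
W1 = rng.normal(size=(64, 16), scale=0.1)

def sub_network(strain):              # sub-networks A and B share weights
    return np.tanh(strain @ W1)

# Small fully connected head (sub-network C) on the concatenated latents.
W2 = rng.normal(size=(32, 8), scale=0.1)
W3 = rng.normal(size=(8, 1), scale=0.1)

def coincidence_statistic(strain_h, strain_l):
    latents = np.concatenate(
        [sub_network(strain_h), sub_network(strain_l)], axis=1)
    return (np.tanh(latents @ W2) @ W3).ravel()

batch = rng.normal(size=(5, 64))      # toy whitened strain segments
stat = coincidence_statistic(batch, batch)
print(stat.shape)                     # one ranking statistic per segment
```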
This design of the two-detector network allowed us to perform efficient background estimation. By applying relative time shifts to the outputs of the individual detector sub-networks, only the final few fully connected layers need to be evaluated for each shift. The bulk of the computation, namely processing the input data of the detectors, needs to be done only once.
The network architecture was trained in three different ways: randomly initialized parameters for the entire network, parameters of the sub-networks initialized from the trained single-detector network, and parameters of the individual detector sub-networks fixed to the single-detector parameters while optimizing only the final fully connected layers.
We found that all of these networks perform very similarly at low fars. None of them performed substantially better than the initial network that looked for time-coincident events between the single-detector network outputs. It therefore seems as if the network architecture explored here is unable to learn any additional information about the signal. This may be caused by the allowed time variance of signals in the training set, which may limit the time resolution of the network and thus overshadow correlations in any other parameters. More sophisticated network architectures with higher time resolution may improve our findings. First promising steps have already been taken by Wei et al. (2021). Using an autoencoder to find a more meaningful latent representation of the input data may also be of use.
While the sensitivity was not improved by using a single network to process the data of two detectors, we still want to highlight that the method of determining the background may be of use for future networks.
Here we limited our research to gws from non-spinning binary black holes with signal duration and Gaussian noise. It would be desirable to lift each of these simplifications. In particular, considering real noise may increase the gap in sensitivity between the single-detector and multi-detector search algorithms, as coincidence allows glitches to be vetoed. While we considered only two detectors, an extension to a larger network should be straightforward and may follow studies such as Davies et al. (2020).
We thank Ondřej Zelenka, Frank Ohme, and Bernd Brügmann for valuable discussions and their scientific input. We acknowledge the Max Planck Gesellschaft and the Atlas cluster computing team at Albert-Einstein Institut (AEI) Hannover for support.
- Advanced LIGO. Class. Quantum Grav. 32, pp. 074001.
- TensorFlow: large-scale machine learning on heterogeneous systems. Note: software available from tensorflow.org.
- GW150914: The Advanced LIGO Detectors in the Era of First Discoveries. Phys. Rev. Lett. 116 (13), pp. 131103.
- GWTC-1: A Gravitational-Wave Transient Catalog of Compact Binary Mergers Observed by LIGO and Virgo during the First and Second Observing Runs. Phys. Rev. X 9 (3), pp. 031040.
- A guide to LIGO–Virgo detector noise and extraction of transient gravitational-wave signals. Class. Quant. Grav. 37 (5), pp. 055002.
- GWTC-2: Compact Binary Coalescences Observed by LIGO and Virgo During the First Half of the Third Observing Run.
- Observation of Gravitational Waves from Two Neutron Star–Black Hole Coalescences. Astrophys. J. Lett. 915 (1), pp. L5.
- Advanced Virgo: a second-generation interferometric gravitational wave detector. Class. Quantum Grav. 32 (2), pp. 024001.
- Low-latency analysis pipeline for compact binary coalescences in the advanced gravitational wave detector era. Class. Quant. Grav. 33 (17), pp. 175012.
- KAGRA: 2.5 Generation Interferometric Gravitational Wave Detector. Nature Astron. 3 (1), pp. 35–40.
- FINDCHIRP: An Algorithm for detection of gravitational waves from inspiraling compact binaries. Phys. Rev. D 85, pp. 122006.
- Improved effective-one-body model of spinning, nonprecessing binary black holes for the era of gravitational-wave astrophysics with advanced detectors. Phys. Rev. D 95 (4), pp. 044028.
- Keras. Note: https://keras.io.
- LIGO Algorithm Library - LALSuite. Note: free software (GPL).
- Enhancing Gravitational-Wave Science with Machine Learning. Mach. Learn. Sci. Tech. 2 (1), pp. 011002.
- Realtime search for compact binary mergers in Advanced LIGO and Virgo’s third observing run using PyCBC Live.
- Extending the PyCBC search for gravitational waves from compact binary mergers to a global network. Phys. Rev. D 102 (2), pp. 022004.
- Optimizing spinning time-domain gravitational waveforms for Advanced LIGO data analysis. Class. Quant. Grav. 33 (12), pp. 125025.
- Deep-Learning Continuous Gravitational Waves: Multiple detectors and realistic noise. Phys. Rev. D 102 (2), pp. 022005.
- iDQ: Statistical Inference of Non-Gaussian Noise with Auxiliary Degrees of Freedom in Gravitational-Wave Detectors.
- Bayesian parameter estimation using conditional variational autoencoders for gravitational-wave astronomy.
- Matching matched filtering with deep networks for gravitational-wave astronomy. Phys. Rev. Lett. 120 (14), pp. 141103.
- Deep Learning for Real-time Gravitational Wave Detection and Parameter Estimation: Results with Advanced LIGO Data. Phys. Lett. B 778, pp. 64–70.
- Deep Neural Networks to Enable Real-time Multimessenger Astrophysics. Phys. Rev. D 97 (4), pp. 044039.
- Classification and unsupervised clustering of LIGO data with Deep Transfer Learning. Phys. Rev. D 97 (10), pp. 101501.
- Searching for the full symphony of black hole binary mergers. Phys. Rev. D 97 (2), pp. 023004.
- Searching for Gravitational Waves from Compact Binaries with Precessing Spins. Phys. Rev. D 94 (2), pp. 024012.
- Advances in Machine and Deep Learning for Modeling and Real-time Detection of Multi-Messenger Sources.
- Frequency-domain gravitational waves from nonprecessing black-hole binaries. I. New numerical waveforms and anatomy of the signal. Phys. Rev. D 93 (4), pp. 044006.
- Frequency-domain gravitational waves from nonprecessing black-hole binaries. II. A phenomenological model for the advanced detector era. Phys. Rev. D 93 (4), pp. 044007.
- Adam: A Method for Stochastic Optimization. arXiv e-prints, pp. arXiv:1412.6980.
- Method for detection and reconstruction of gravitational wave transients with networks of advanced detectors. Phys. Rev. D 93 (4), pp. 042004.
- Nonlinear principal component analysis using autoassociative neural networks. AIChE Journal 37 (2), pp. 233–243.
- Detection and Parameter Estimation of Gravitational Waves from Binary Neutron-Star Mergers in Real LIGO Data using Deep Learning. Phys. Lett. B 815, pp. 136161.
- Eccentric Binary Neutron Star Search Prospects for Cosmic Explorer.
- Analysis Framework for the Prompt Discovery of Compact Binary Mergers in Gravitational-wave Data. Phys. Rev. D 95 (4), pp. 042001.
- On the Efficiency of Various Deep Transfer Learning Models in Glitch Waveform Detection in Gravitational-Wave Data.
- Gwastro/pycbc: 1.18.0 release of PyCBC.
- 3-OGC: Catalog of gravitational waves from compact-binary mergers.
- Detecting binary compact-object mergers with gravitational waves: Understanding and Improving the sensitivity of the PyCBC search. Astrophys. J. 849 (2), pp. 118.
- Search for Eccentric Binary Neutron Star Mergers in the first and second observing runs of Advanced LIGO. Astrophys. J. 890, pp. 1.
- Search for gravitational waves from the coalescence of sub-solar mass and eccentric compact binaries.
- Search for gravitational waves from the coalescence of sub-solar mass binaries in the first half of Advanced LIGO and Virgo’s third observing run.
- Improving the Data Quality of Advanced LIGO Based on Early Engineering Run Results. Class. Quant. Grav. 32 (24), pp. 245005.
- The GstLAL Search Analysis Methods for Compact Binary Mergers in Advanced LIGO’s Second and Advanced Virgo’s First Observing Runs.
- Detection of gravitational-wave signals from binary neutron star mergers using machine learning. Phys. Rev. D 102 (6), pp. 063015.
- Training Strategies for Deep Learning Gravitational-Wave Searches.
- Denoising Gravitational Waves with Enhanced Deep Recurrent Denoising Auto-Encoders.
- A survey on deep transfer learning. In Artificial Neural Networks and Machine Learning – ICANN 2018, V. Kůrková, Y. Manolopoulos, B. Hammer, L. Iliadis, and I. Maglogiannis (Eds.), Cham, pp. 270–279.
- The PyCBC search for gravitational waves from compact binary coalescence. Class. Quant. Grav. 33 (21), pp. 215004.
- Deep Learning with Quantized Neural Networks for Gravitational Wave Forecasting of Eccentric Compact Binary Coalescence.
- Deep learning for gravitational wave forecasting of neutron star mergers. Phys. Lett. B 816, pp. 136185.
- Deep Learning Ensemble for Real-time Gravitational Wave Detection of Spinning Binary Black Hole Mergers. Phys. Lett. B 812, pp. 136029.
- A survey of transfer learning. Journal of Big Data 3 (1), pp. 9.
- Gravity Spy: Integrating Advanced LIGO Detector Characterization, Machine Learning, and Citizen Science. Class. Quant. Grav. 34 (6), pp. 064003.