Gravitational-wave parameter estimation with autoregressive neural network flows

02/18/2020 · Stephen R. Green, et al.

We introduce the use of autoregressive normalizing flows for rapid likelihood-free inference of binary black hole system parameters from gravitational-wave data with deep neural networks. A normalizing flow is an invertible mapping on a sample space that can be used to induce a transformation from a simple probability distribution to a more complex one: if the simple distribution can be rapidly sampled and its density evaluated, then so can the complex distribution. Our first application to gravitational waves uses an autoregressive flow, conditioned on detector strain data, to map a multivariate standard normal distribution into the posterior distribution over system parameters. We train the model on artificial strain data consisting of IMRPhenomPv2 waveforms drawn from a five-parameter (m_1, m_2, ϕ_0, t_c, d_L) prior and stationary Gaussian noise realizations with a fixed power spectral density. This gives performance comparable to current best deep-learning approaches to gravitational-wave parameter estimation. We then build a more powerful latent variable model by incorporating autoregressive flows within the variational autoencoder framework. This model has performance comparable to Markov chain Monte Carlo and, in particular, successfully models the multimodal ϕ_0 posterior. Finally, we train the autoregressive latent variable model on an expanded parameter space, including also aligned spins (χ_1z, χ_2z) and binary inclination θ_JN, and show that all parameters and degeneracies are well-recovered. In all cases, sampling is extremely fast, requiring less than two seconds to draw 10^4 posterior samples.


I Introduction

The task of gravitational-wave parameter estimation is to determine the system parameters that gave rise to observed detector strain data. This is accomplished using Bayesian inference. Assuming a likelihood model p(y|x) for strain data y conditioned on system parameters x, and a prior distribution p(x), one obtains through Bayes' theorem the posterior distribution,

p(x|y) = p(y|x) p(x) / p(y),    (1)

where the normalization p(y) is called the model evidence. Generally, one can evaluate the likelihood explicitly (although this may be computationally expensive) and the prior is also known, so the posterior can be calculated up to normalization. One can then use an algorithm such as Markov chain Monte Carlo (MCMC) to obtain samples from the posterior.

Standard sampling algorithms are effective, but they can be computationally costly. Indeed, for binary black holes, obtaining a sufficient number of posterior samples can take on the order of days, whereas for binary neutron stars, this extends to weeks. Especially for binary neutron stars, which might have multimessenger counterparts, it is critical to reduce this time to provide accurate information to electromagnetic observers.

There have recently been several efforts to speed up parameter estimation by using deep learning Chua and Vallisneri (2020); Gabbard et al. (2019); Shen et al. (2019). The main tool of deep learning is the neural network, a trainable and very flexible function approximator. Neural networks can have millions of parameters, which are tuned through stochastic gradient descent to optimize a loss function. In this way, very complex functions can be approximated. Since conditional distributions can be parametrized by functions, neural networks can be used to model gravitational-wave posteriors.

A key observation of Chua and Vallisneri (2020); Gabbard et al. (2019) is that Bayes' rule can be applied in such a way that training only requires samples from the prior and the gravitational-wave likelihood, i.e., strain realizations y. Although the network learns the posterior distribution, posterior samples are not needed for training. Training also does not require any likelihood evaluations, and for this reason this approach is known as likelihood-free inference. Since obtaining posterior samples and evaluating the likelihood are expensive, the likelihood-free approach is very fast in comparison. It is also applicable in contexts where a simulation of the data generative process is available, but the likelihood function may be unknown.

In Chua and Vallisneri (2020), this approach was applied successfully with a multivariate Gaussian posterior model, i.e., q(x|y) = N(x; μ(y), Σ(y)), where the mean μ(y) and covariance matrix Σ(y) are given by neural networks. Once trained on simulated waveforms and noise realizations, the Gaussian posterior model is trivial to sample.

The challenge, then, is to define a sufficiently flexible model distribution for the posterior. Gaussians are good approximations at very high signal-to-noise ratio, but posteriors can in general have higher moments and multimodality. Ref. Chua and Vallisneri (2020) also experimented with Gaussian mixture models and posterior histogram outputs. The performance of the Gaussian mixture, however, was not a significant improvement over the single Gaussian, and although the histogram output was effective, it is limited to describing one or two parameters.

A promising approach to increase the flexibility of a posterior model is to introduce latent variables z and perform variational Bayesian inference. The posterior is then given by marginalization over the latent variables, i.e., q(x|y) = ∫ dz q(z|y) q(x|y,z). With q(z|y) and q(x|y,z) both taken to be Gaussian, this nevertheless gives a non-Gaussian q(x|y). This can be implemented using the variational autoencoder framework Kingma and Welling (2013); Rezende et al. (2014), and recent results Gabbard et al. (2019) for gravitational-wave parameter estimation with a conditional variational autoencoder (CVAE) have demonstrated the recovery of non-Gaussian posteriors (although not multimodality).

In this work, we use the method of normalizing flows Rezende and Mohamed (2015) (specifically, masked autoregressive flows Kingma et al. (2016); Papamakarios et al. (2017)) to increase the flexibility of the gravitational-wave posterior. A normalizing flow is an invertible mapping f on a sample space X, with simple Jacobian determinant. Using the change of variables rule for probabilities, this induces a transformation,

q(x) = π(f^{-1}(x)) |det (∂f^{-1}/∂x)|,    (2)

from a base distribution π(u) to a new distribution q(x). Here u = f^{-1}(x). Starting from a simple standard normal distribution for u, one can obtain a more complex distribution q(x) by applying a normalizing flow. To describe conditional distributions, such as gravitational-wave posteriors, a normalizing flow may also be conditioned on additional variables; for our application, we condition on detector strain data y.

The fact that f is invertible means that the normalizing flow enables both sampling and density evaluation of q(x), provided this is possible for the base distribution π(u). To sample, one first draws u ∼ π(u), and then x = f(u) is a sample from q(x). To evaluate the density at given x, the inverse mapping u = f^{-1}(x) is used in the right-hand side of (2).
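For illustration, this sampling and density evaluation can be sketched with PyTorch's distribution utilities; the fixed affine transform below is a toy stand-in for a learned flow, and the numbers are arbitrary.

    import torch
    from torch.distributions import Independent, Normal, TransformedDistribution
    from torch.distributions.transforms import AffineTransform

    # Base distribution pi(u): standard normal in two dimensions.
    base = Independent(Normal(torch.zeros(2), torch.ones(2)), 1)

    # A toy invertible map f (fixed affine transform; a real normalizing flow is learned).
    f = AffineTransform(loc=torch.tensor([1.0, -0.5]), scale=torch.tensor([2.0, 0.3]))

    # q(x) induced by the change of variables rule (2).
    q = TransformedDistribution(base, [f])

    x = q.sample((5,))       # sample: draw u from the base, then x = f(u)
    log_q = q.log_prob(x)    # density: log pi(f^{-1}(x)) + log|det df^{-1}/dx|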

Normalizing flows may be used to model gravitational-wave posterior distributions directly, and this is the first application that we describe. We fix the base distribution to be a multivariate standard normal of the same dimension as the system parameter space. We then take the basic flow unit to be a three-hidden-layer MADE neural network Germain et al. (2015) conditioned on y, and we stack several of these to obtain a sufficiently flexible posterior model. This is called a masked autoregressive flow (MAF) Papamakarios et al. (2017). Since the density can be evaluated directly through (2), the network can be trained using stochastic gradient descent to maximize, over the network parameters, the likelihood that the training data ((x, y) pairs) came from the model.

Normalizing flows can also be incorporated into the CVAE: they can be used to enhance the flexibility of the encoder, decoder, and prior component networks, thereby increasing the overall flexibility of the model Kingma et al. (2016); Chen et al. (2016). In section II of this work, we describe all of the above networks in further detail.

In section III we present the results of our experiments in using these models to describe gravitational-wave posteriors over a five-dimensional space of system parameters. Following Gabbard et al. (2019), we study binary black hole waveforms over the parameters (m_1, m_2, ϕ_0, t_c, d_L), with added noise drawn from a stationary Gaussian distribution with fixed power spectral density. We work in the time domain. We find that the basic MAF network achieves results comparable to Gabbard et al. (2019), but with the advantage of not having any latent variables to marginalize over. In our experiments, however, neither of these networks successfully models the posterior over ϕ_0, which is multimodal. We next test the more powerful autoregressive CVAE, which does succeed in modeling the multimodality in ϕ_0. To validate our recovered posteriors, we present p–p plots consistent with uniformly distributed percentile scores in each parameter, as well as comparisons to MCMC sampling.

We then expand the parameter space in section IV to include the aligned spins (χ_1z, χ_2z) and the binary inclination θ_JN. We train the CVAE network with autoregressive flows to model the posterior over this eight-dimensional space. We find that the network once again successfully models all parameters. This includes the various degeneracies, e.g., between d_L and θ_JN, and between χ_1z and χ_2z. We validate our results again with a p–p plot.

This work is organized as follows. In the following section we describe in more detail the various types of neural networks that we use for parameter estimation. In section III we describe our experiments with the five-dimensional parameter space, and in section IV the enlarged eight-dimensional space. Finally, in section V we conclude with a discussion of potential further improvements.

Software:

All of our neural networks are implemented in PyTorch Paszke et al. (2019), with the autoregressive network implemented using Pyro Bingham et al. (2019). We used emcee Foreman-Mackey et al. (2013) to generate MCMC comparisons, and ChainConsumer Hinton (2016) to produce corner plots.

Notation:

The various vector spaces, along with their dimensionalities, are given in table 1.

space dimension description
X n system parameters
Y m strain data
Z l latent variables
Table 1: Vector spaces.

II Preliminaries

In this section we review important deep learning concepts, and we discuss them in the context of gravitational-wave parameter estimation. The first two subsections describe ideas that have already been implemented for parameter estimation, in Chua and Vallisneri (2020) and Gabbard et al. (2019), respectively, and the last two describe the autoregressive flows that we explore in this work.

II.1 Neural network models of gravitational-wave posteriors

Suppose we have a posterior distribution p(x|y). Our aim is to train a neural network q(x|y) to give an approximation to p(x|y). The "true" posterior p(x|y) is itself a model for the physical system. For gravitational waves, it is defined through Bayes' theorem in terms of a prior p(x) over the system parameters and a likelihood p(y|x). The likelihood depends on a choice of waveform model h(x) and a measured noise power spectral density S_n(f): it is the probability density that the residual y − h(x) is drawn from a stationary Gaussian noise distribution with PSD S_n(f). For further details on the noise model and likelihood, see, e.g., Abbott et al. (2019).

For stationary Gaussian noise and known parameters x, it is trivial to sample from the likelihood. In contrast, the "inverse problem" of sampling from the posterior—sampling over parameters x—requires an algorithm such as MCMC and many evaluations of the waveform model and the likelihood. This repeated comparison between model waveforms and data is computationally expensive, and for this reason we wish to develop the neural network model q(x|y).

The first step in developing the model is to parametrize the posterior in terms of a neural network. For now, we take as our model a multivariate normal distribution Chua and Vallisneri (2020), although our main interest later will be in defining more flexible models. That is, we take

q(x|y) = N(x; μ(y), Σ(y)),    (3)

where the mean μ(y) and covariance matrix Σ(y) are functions of the data y defined by a feedforward neural network. (To ensure Σ is positive definite and symmetric, it is in practice better to take the Cholesky decomposition, Σ = A A^T, where A is lower-triangular with positive diagonal entries. A is then modeled with the neural network.)

Feedforward neural networks can have a variety of configurations, but the simplest consists of a sequence of fully-connected layers. The output of the first hidden layer (of dimension n_1) is

a_1 = σ(W_1 y + b_1),    (4)

where W_1 is an n_1 × m weight matrix and b_1 is an n_1-dimensional bias vector. σ is an element-wise nonlinearity, typically taken to be a Rectified Linear Unit (ReLU),

σ(u) = max(0, u).    (5)

The output is then passed through a second hidden layer,

a_2 = σ(W_2 a_1 + b_2),    (6)

and so on, for as many hidden layers as desired. A final affine transformation is then applied, and the outputs repackaged into a vector μ and lower-triangular matrix A. A suitable nonlinearity should be applied to ensure positive diagonal components of A, but otherwise the components of μ and A should be unconstrained. The weight matrices W_i and bias vectors b_i are initialized randomly (see, e.g., Goodfellow et al. (2016)) and then trained through stochastic gradient descent to optimize a loss function.
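A minimal PyTorch sketch of such a Gaussian posterior model is shown below (the class name and hidden width are ours, and this is an illustration rather than the exact architecture used in the paper); MultivariateNormal with scale_tril handles the Cholesky parametrization of (3).

    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    from torch.distributions import MultivariateNormal

    class GaussianPosterior(nn.Module):
        # Feedforward model of q(x|y) = N(mu(y), Sigma(y)), with Sigma parametrized by its
        # Cholesky factor A (lower-triangular, positive diagonal), as in (3).
        def __init__(self, data_dim, param_dim, hidden_dim=2048):
            super().__init__()
            self.d = param_dim
            self.body = nn.Sequential(
                nn.Linear(data_dim, hidden_dim), nn.ReLU(),
                nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
                nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            )
            n_offdiag = param_dim * (param_dim - 1) // 2
            self.head = nn.Linear(hidden_dim, 2 * param_dim + n_offdiag)
            self.register_buffer("tril_idx", torch.tril_indices(param_dim, param_dim, offset=-1))

        def forward(self, y):
            out = self.head(self.body(y))
            mu = out[:, :self.d]
            diag = F.softplus(out[:, self.d:2 * self.d]) + 1e-6   # positive diagonal of A
            offdiag = out[:, 2 * self.d:]                         # unconstrained entries below the diagonal
            L = out.new_zeros(y.shape[0], self.d, self.d)
            L[:, self.tril_idx[0], self.tril_idx[1]] = offdiag
            A = L + torch.diag_embed(diag)
            return MultivariateNormal(mu, scale_tril=A)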

With the posterior model defined, the loss function should be chosen so that after training, q(x|y) is a good approximation to p(x|y). We therefore take it to be the expectation value (over p(y)) of the cross-entropy between the two distributions,

L = −E_{p(y)} E_{p(x|y)} [ log q(x|y) ],    (7)

i.e., we maximize over the network parameters (the weights and biases) the likelihood that the training data came from the model. Minimizing L is equivalent to minimizing the expectation value of the Kullback-Leibler (KL) divergence between the true and model posteriors, since p(x|y) is fixed.

The loss function in the form (7) is actually not ideal for our purposes. The reason is that it requires sampling from p(x|y)—a very costly operation. Instead, as pointed out by Chua and Vallisneri (2020); Gabbard et al. (2019), we can use Bayes' theorem in a very advantageous way, to write

L = −E_{p(x)} E_{p(y|x)} [ log q(x|y) ].    (8)

To train the network now requires sampling from the likelihood, not the posterior. On a minibatch of training data of size N, we approximate

L ≈ −(1/N) ∑_{i=1}^{N} log q(x^(i)|y^(i)),    (9)

where x^(i) ∼ p(x) and y^(i) ∼ p(y|x^(i)). The loss function can be explicitly evaluated using the Gaussian model (3).

The gradient of L with respect to the network parameters can be calculated using backpropagation (i.e., the chain rule), and the network optimized with stochastic gradient descent. The training set consists of parameter samples x^(i) and waveforms h(x^(i)); random noise realizations are added at train time to obtain data samples y^(i).
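As a concrete illustration of (8)–(9), a minimal PyTorch sketch of a single optimization step is given below; model, x_batch, and h_batch are placeholder names, and model is assumed to return a torch.distributions object for q(x|y), as in the sketch above.

    import torch

    opt = torch.optim.Adam(model.parameters(), lr=1e-4)

    # Fresh noise realization each step: strain = whitened waveform + N(0, 1) noise.
    y_batch = h_batch + torch.randn_like(h_batch)

    # Monte Carlo estimate of the cross-entropy loss (9) on the minibatch.
    loss = -model(y_batch).log_prob(x_batch).mean()

    opt.zero_grad()
    loss.backward()
    opt.step()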

II.2 Variational autoencoders

One way to increase the flexibility of the model is to introduce latent variables z, and define the gravitational-wave posterior by first sampling z from a prior q(z|y), and then sampling x from a distribution q(x|y,z), conditional on z. In other words,

q(x|y) = ∫ dz q(z|y) q(x|y,z).    (10)

If one takes q(z|y) and q(x|y,z) to both be multivariate Gaussians, then q(x|y) is a continuous mixture of Gaussians. In this way one can describe a more flexible posterior.

At first glance it is not clear how to train such a model: the posterior (10) is intractable, since its evaluation involves marginalizing over z. If one knew the latent posterior q(z|x,y) (not to be confused with the gravitational-wave posterior q(x|y); it should be clear from context to which posterior distribution we are referring), then

q(x|y) = q(z|y) q(x|y,z) / q(z|x,y)    (11)

could be evaluated directly, but q(z|x,y) is also intractable.

A (conditional on y) variational autoencoder Kingma and Welling (2013); Rezende et al. (2014) is a deep learning tool for treating such a latent variable model. It introduces a recognition (or encoder) model r(z|x,y), which is an approximation to the posterior q(z|x,y). As with the first two networks, the recognition network should have tractable density and be easy to sample, e.g., a multivariate Gaussian. One can then take the expectation (over r(z|x,y)) of the logarithm of the posterior,

log q(x|y) = E_{r(z|x,y)} [ log ( q(x|y,z) q(z|y) / r(z|x,y) ) ] + D_KL( r(z|x,y) ‖ q(z|x,y) ) ≡ L_v + D_KL( r(z|x,y) ‖ q(z|x,y) ),    (12)

where the last term is the KL divergence,

D_KL( r(z|x,y) ‖ q(z|x,y) ) = E_{r(z|x,y)} [ log ( r(z|x,y) / q(z|x,y) ) ].    (13)

Since this is nonnegative, L_v is known as the variational lower bound on log q(x|y). If r(z|x,y) is identical to q(z|x,y), then L_v = log q(x|y).

The variational autoencoder maximizes the expectation of L_v over the true distribution. The associated loss function can be written

L = −E_{p(x)} E_{p(y|x)} [ E_{r(z|x,y)} [ log q(x|y,z) ] − D_KL( r(z|x,y) ‖ q(z|y) ) ].    (14)

The three networks, r(z|x,y), q(z|y), and q(x|y,z), are trained simultaneously. To evaluate the loss, the outer two expectation values are treated in the same way as in the previous subsection. The inner expectation value is evaluated using a Monte Carlo approximation, typically with a single sample from r(z|x,y). For Gaussian r(z|x,y) and q(z|y) the KL divergence term may be calculated analytically; otherwise, a single Monte Carlo sample suffices.

For training, it is necessary to take the gradient of the loss with respect to network parameters, so the stochasticity of the Monte Carlo integral estimates must be carefully treated. This can be done by using the reparametrization trick Kingma and Welling (2013); Rezende et al. (2014), namely by treating the random variable as an additional input to the network drawn from a fixed distribution. For example, if r(z|x,y) is multivariate Gaussian with mean μ and Cholesky matrix A, then

z = μ + A u,  with u ∼ N(0, 1),    (15)

is a sample from r(z|x,y). With this trick, one can now take the gradient of L with respect to network parameters.
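A minimal PyTorch sketch of the reparametrized, single-sample evaluation of the CVAE loss (14) on one minibatch is given below; encoder, prior_net, and decoder are hypothetical module names, each assumed to return a torch.distributions object for r(z|x,y), q(z|y), and q(x|y,z), respectively.

    import torch

    r = encoder(x_batch, y_batch)
    q_z = prior_net(y_batch)

    z = r.rsample()                                           # z = mu + A u with u ~ N(0, I), as in (15)
    reconstruction = -decoder(y_batch, z).log_prob(x_batch)   # single-sample Monte Carlo estimate
    kl = torch.distributions.kl_divergence(r, q_z)            # analytic for Gaussian r and q(z|y)

    loss = (reconstruction + kl).mean()
    loss.backward()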

This setup is called a variational autoencoder because the first term in the loss function has the form of an autoencoder. The recognition network r(z|x,y) is known as the encoder, and q(x|y,z) as the decoder. This first term (the reconstruction loss) is minimized if x is recovered after being encoded into z and then decoded. The other term in the loss function (the KL loss) pushes the encoder to match the prior q(z|y) and acts as a regulator. When the variational autoencoder works as an autoencoder, the latent variables z can give a useful low-dimensional representation of x.

The recent work Gabbard et al. (2019) used a CVAE with diagonal Gaussian networks to model gravitational-wave posterior distributions, achieving excellent results over a four-dimensional parameter space. In the following two subsections we describe the use of masked autoregressive flows to build even more general distributions.

II.3 Masked autoregressive flows

In this subsection we review the concept of a masked autoregressive flow, a type of normalizing flow that we use in our work to map simple distributions into more complex ones. We refer the reader to Papamakarios et al. (2017) for a much more in-depth discussion.

Consider a probability density q(x) over the n-dimensional parameter space. Without any loss of generality, this may be written using the chain rule as

q(x) = ∏_{i=1}^{n} q(x_i | x_{1:i−1}).    (16)

An autoregressive model restricts each conditional distribution in the product to have a particular parametrized form. We will take this to be univariate normal Papamakarios et al. (2017),

q(x_i | x_{1:i−1}) = N(x_i; μ_i, (exp α_i)^2),  with μ_i = f_{μ_i}(x_{1:i−1}) and α_i = f_{α_i}(x_{1:i−1}),    (17)

for i = 1, …, n.

In Kingma et al. (2016) it was observed that an autoregressive model defines a normalizing flow. In other words, suppose u is drawn from an n-dimensional standard normal distribution. Then

x_i = u_i exp(α_i) + μ_i    (18)

gives a sample from q(x). This mapping x = f(u) is defined recursively in (18), but the inverse mapping,

u_i = (x_i − μ_i) exp(−α_i),    (19)

is nonrecursive. Because f is autoregressive, the Jacobian determinant is very simple,

|det (∂f^{−1}/∂x)| = exp( −∑_i α_i ).    (20)

Hence, f defines a normalizing flow. Starting from a simple base distribution, the change of variables rule (2) can be used to evaluate the density q(x).

An autoregressive flow may be modeled by a neural network with masking Germain et al. (2015). That is, one starts with a fully connected network with n inputs and 2n outputs (the μ_i and α_i) and several hidden layers to define the functions f_{μ_i} and f_{α_i}, but then carefully sets certain connections to zero such that the autoregressive property holds. This defines a MADE network Germain et al. (2015). Because the inverse direction (19) is nonrecursive, it requires just a single pass through the network Kingma et al. (2016). To map in the forward direction (18) requires n passes. During training it is, therefore, preferable to only require inverse passes. For gravitational waves, the parameter space has reasonably low dimensionality, so even the forward pass is not too expensive.
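The masking can be sketched in PyTorch as follows; this is a minimal single-hidden-layer MADE with an element-wise affine transform, with class and method names of our own choosing, and not the Pyro-based implementation used for the results in this paper. Conditioning on the strain data y enters through an unmasked linear map.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MaskedLinear(nn.Linear):
        # Linear layer whose weight matrix is multiplied element-wise by a fixed binary mask.
        def __init__(self, in_features, out_features, mask):
            super().__init__(in_features, out_features)
            self.register_buffer("mask", mask)

        def forward(self, inp):
            return F.linear(inp, self.weight * self.mask, self.bias)

    class AffineMADE(nn.Module):
        # Single-hidden-layer MADE: outputs (mu_i, alpha_i) that depend only on x_{<i} and y.
        def __init__(self, dim, hidden, context_dim):
            super().__init__()
            self.dim = dim
            deg_in = torch.arange(1, dim + 1)                       # input degrees 1..dim
            deg_h = torch.arange(hidden) % max(dim - 1, 1) + 1      # hidden degrees 1..dim-1
            mask1 = (deg_h[:, None] >= deg_in[None, :]).float()     # hidden unit k may see x_d iff deg_h[k] >= d
            deg_out = deg_in.repeat(2)                              # outputs ordered (mu_1..mu_n, alpha_1..alpha_n)
            mask2 = (deg_out[:, None] > deg_h[None, :]).float()     # output i may see hidden k iff i > deg_h[k]
            self.l1 = MaskedLinear(dim, hidden, mask1)
            self.ctx = nn.Linear(context_dim, hidden)               # strain data y conditions every hidden unit
            self.l2 = MaskedLinear(hidden, 2 * dim, mask2)

        def mu_alpha(self, x, y):
            h = torch.relu(self.l1(x) + self.ctx(y))
            out = self.l2(h)
            return out[..., :self.dim], out[..., self.dim:]

        def inverse(self, x, y):
            # u_i = (x_i - mu_i) exp(-alpha_i): a single pass, used for density evaluation, cf. (19)-(20).
            mu, alpha = self.mu_alpha(x, y)
            u = (x - mu) * torch.exp(-alpha)
            return u, -alpha.sum(-1)                                # second output: log|det df^{-1}/dx|

        def forward_flow(self, u, y):
            # x_i = u_i exp(alpha_i) + mu_i: recursive, one pass per dimension, cf. (18).
            x = torch.zeros_like(u)
            for i in range(self.dim):
                mu, alpha = self.mu_alpha(x, y)
                xi = u[..., i] * torch.exp(alpha[..., i]) + mu[..., i]
                x = torch.cat([x[..., :i], xi.unsqueeze(-1), x[..., i + 1:]], dim=-1)
            return x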

With a single MADE network, the first component is independent of all other components, and follows a fixed Gaussian distribution. To achieve sufficient generality, it is necessary to stack several MADE networks in sequence, permuting the order of the components between each pair Kingma et al. (2016). This is called a masked autoregressive flow (MAF) Papamakarios et al. (2017). For stability during training it is also useful to insert batch normalization layers Ioffe and Szegedy (2015) between the MADE layers, and between the base distribution and the first MADE layer; in the context of a flow, these also contribute to the Jacobian Papamakarios et al. (2017). MAFs and related networks have been used to model very complex distributions over high-dimensional spaces, including audio Oord et al. (2016) and images Van den Oord et al. (2016).

In the context of gravitational-wave parameter estimation, the sample space X is relatively low-dimensional. To model the posterior distribution, however, each MADE flow unit should be made conditional on the (high-dimensional) strain data y, while maintaining the autoregressive property over x. We can then take a standard normal base distribution, and flow it through all the MADE and batch normalization layers, to obtain the complex posterior. The loss function is the same as in section II.1, but with the change of variables rule used to evaluate the density, i.e.,

L = −E_{p(x)} E_{p(y|x)} [ log π(f^{−1}(x)) + log |det (∂f^{−1}/∂x)| ],    (21)

where f now denotes the entire sequence of flows in the network, each conditioned on y. Notice that this involves only the inverse flow f^{−1}, which is fast to evaluate. This network is fast to train, but somewhat slower to sample.
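The density evaluation in (21) can be sketched as follows; flows, base, x_batch, and y_batch are placeholders, with flows a list of AffineMADE-like modules as above, and a full MAF would also interleave permutations and batch normalization layers (which contribute their own log-determinants).

    import torch

    def flow_log_prob(flows, base, x, y):
        # Push x through the inverse flows to the base space, accumulating log-determinants, cf. (2).
        log_det = torch.zeros(x.shape[0])
        for flow in reversed(flows):
            x, ld = flow.inverse(x, y)
            log_det = log_det + ld
        return base.log_prob(x) + log_det

    loss = -flow_log_prob(flows, base, x_batch, y_batch).mean()   # minibatch estimate of (21)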

II.4 Combined models

More powerful models can be obtained by combining autoregressive flows with the variational autoencoder. Each of the three networks comprising the CVAE—the encoder r(z|x,y), the decoder q(x|y,z), and the prior q(z|y)—can be made more flexible by applying autoregressive flows to their sample spaces. We discuss each of these possibilities in turn. In our experiments, we found that the best performance was achieved when combining all three. In all cases, the CVAE loss function (14) is optimized, with the change of variables rule used to evaluate the component densities.

II.4.1 Encoder with inverse autoregressive flow

Normalizing and autoregressive flows were first proposed as a way to increase the flexibility of the encoder network Rezende and Mohamed (2015); Kingma et al. (2016). This is important because the CVAE loss function (14) differs from the desired cross-entropy loss (8) by the expectation of the KL divergence between r(z|x,y) and the intractable q(z|x,y). If this can be made smaller, then the two losses converge; hence r(z|x,y) should be as flexible as possible.

A flexible encoder is also desired to avoid a situation called "posterior collapse." The reconstruction and KL loss terms are in competition during training, and if the KL loss term collapses to zero, the network can fail to autoencode. In this situation, the encoder matches the prior q(z|y), so it ignores its input x; the latent variables contain no information about x beyond what is contained in y. This can happen either because the loss gets stuck in an undesired local minimum during training, or because the configuration with vanishing KL loss is actually the global minimum Chen et al. (2016). The former can be alleviated by careful training strategies such as annealing the KL loss term Bowman et al. (2015). The latter can occur because the use of latent variables incurs a cost related to the KL term Chen et al. (2016): if the decoder is sufficiently powerful that it can model the posterior on its own, then the collapsed configuration, with the encoder equal to the prior, is the global minimum. The network simply decides it is not worthwhile to use the latent variables.

Thus, a flexible encoder is important for performance and to make full use of latent variables. Since evaluation of the loss requires sampling from r(z|x,y), an inverse autoregressive flow (IAF) should be used to map the Gaussian into a more complex distribution Kingma et al. (2016). Sampling is fast since the inverse flow (19) does not involve recursion. The density evaluation needed for the KL loss term only needs to take place for z sampled from r(z|x,y), so caching may be used.

Each MADE layer comprising the encoder IAF may be conditioned on x, y, and an optional additional "context" output c of the Gaussian encoder. (The initial work Kingma et al. (2016) used only c.) In our experiments we found it most effective to condition only on y and c. It is possible that the high dimensionality of y meant that it overpowered x.

II.4.2 Decoder with masked autoregressive flow

Since the reconstruction loss requires density evaluation of the decoder distribution, one should apply a forward MAF after the Gaussian distribution to increase the flexibility of q(x|y,z). The MADE layers may be conditioned on y (as in the basic MAF of section III.2.2) and the latent variable z. This powerful decoder increases the risk of posterior collapse, so it is useful to also use the IAF encoder.

II.4.3 Prior with masked autoregressive flow

As described in Chen et al. (2016), an autoregressive flow at the end of the prior network effectively adds flexibility to the encoder network and the decoder network. For fast training, one should again use a forward MAF, and its MADE layers should be conditioned on y.

Although a prior MAF and an encoder IAF are very closely related, they differ in that the IAF can be conditioned on x. We found that this had a significant impact on performance, and we therefore included both autoregressive flows in our models.

III Nonspinning binaries

In this section we describe our experiments in using deep learning models to describe nonspinning coalescing black hole binaries. Binaries are described by five parameters: the masses m_1 and m_2, the luminosity distance d_L, the time of coalescence t_c, and the phase of coalescence ϕ_0. These parameters and their ranges are chosen to facilitate comparison with Gabbard et al. (2019).

III.1 Training data

Training data for our models consist of pairs of parameters x and strain time series y. Parameters are sampled from a prior distribution p(x), which is uniform over each parameter except for the volumetric prior over d_L. Parameter ranges are taken from Gabbard et al. (2019),

35 M_⊙ ≤ m_1, m_2 ≤ 80 M_⊙,
1000 Mpc ≤ d_L ≤ 3000 Mpc,
0.65 s ≤ t_c ≤ 0.85 s,
0 ≤ ϕ_0 ≤ 2π.

We also take m_1 ≥ m_2.

Strain realizations are in the time domain and 1 s long, with a fixed sampling rate. (We found that the lower sampling rate of Gabbard et al. (2019) was not sufficient to eliminate Gibbs ringing.) Strain data consist of a waveform h(x), deterministic from the parameters x, and random noise n, sampled from the Advanced LIGO Aasi et al. (2015) Zero Detuned High Power PSD zdh (2009). We took h(x) to be a single polarization of IMRPhenomPv2 waveforms Khan et al. (2016), which, following Gabbard et al. (2019), we whitened in the frequency domain using the PSD before taking the inverse Fourier transform to the time domain. Finally, we rescaled our waveforms so that, in the end, time-domain noise is described by a standard normal distribution in each time sample.

The whitening and rescaling procedure serves two purposes: it means that we can easily draw noise realizations at train time by sampling from a standard normal distribution, and it ensures that the input data to the neural network have approximately zero mean and unit variance. The latter condition ("standardization" of the data) has been shown to improve training performance. Similarly, we rescale the parameters x to have zero mean and unit variance in each component across the training set.

Our dataset consists of parameter–waveform pairs, which we split into 90% training data and 10% validation data. Noise realizations are sampled at train time to give strain data y; a sample time series is given in figure 1. The median signal-to-noise ratio (SNR) of our training set is 25.8, and the complete SNR distribution is given in figure 2. Our dataset is the same size as that of Gabbard et al. (2019) and smaller than the effective dataset of Chua and Vallisneri (2020). Larger datasets are generally preferred to prevent overfitting, but they are also more costly to prepare and store. We would like to build a system that can generalize well with as small a dataset as possible in order to minimize these costs when moving to more complicated and longer waveforms in the future. By keeping track of performance on the validation set when training our models, we found that our training set was sufficiently large to avoid overfitting. We also experimented with a smaller dataset; this had slightly reduced, but still acceptable, performance.
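A minimal sketch of how fresh noise can be added at train time, assuming waveforms are stored already whitened and rescaled (class and attribute names are ours):

    import torch
    from torch.utils.data import Dataset

    class WaveformDataset(Dataset):
        # Each item returns (x, y) with a new noise realization drawn every epoch.
        def __init__(self, params, whitened_waveforms):
            self.params = params                  # shape (N, n): standardized parameters x
            self.waveforms = whitened_waveforms   # shape (N, m): whitened, rescaled waveforms h(x)

        def __len__(self):
            return len(self.params)

        def __getitem__(self, i):
            y = self.waveforms[i] + torch.randn_like(self.waveforms[i])  # strain = waveform + N(0, 1) noise
            return self.params[i], y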

Figure 1: Sample waveform h(x) and strain realization y. The SNR for this injection is 24.9.
Figure 2: Histogram of SNR values for our training set.

III.2 Experiments

Model   Layers   Parameters (10^6)   Train time (h)
CVAE    12       31.6                5.6
MAF     20       4.0                 6.5
CVAE+   72       18.8                14.8
Table 2: Network depth, number of trainable parameters, and training time. Numbers of trainable parameters are approximated as the numbers of weights; for autoregressive flows, we included an estimate of the number of unmasked weights.
Model Train loss Test loss KL train loss KL test loss
CVAE
MAF
CVAE+
Table 3: Final values of training and validation set loss functions after 250 epochs.

We modeled our gravitational-wave posteriors with three types of neural network described in section II: a CVAE (similar to Gabbard et al. (2019)), a MAF, and a CVAE with autoregressive flows appended to the three subnetworks (denoted CVAE+). We selected hyperparameters based on choices in the literature Papamakarios et al. (2017); Gabbard et al. (2019) and on sweeps through hyperparameter space. Approximate network sizes and training times for each architecture are listed in table 2, and final loss functions after training in table 3.

All of our networks use ReLU nonlinearities. They were trained for 250 epochs with a batch size of 512, using the Adam optimizer Kingma and Ba (2014). The learning rate was reduced by a factor of two every 80 epochs. Sampling performance results for the three network architectures are collected in figures 6 and 10.
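In PyTorch, this training configuration corresponds to something like the following; compute_loss, model, and train_set are placeholders, with compute_loss standing in for the cross-entropy (9) or the variational loss (14), depending on the model.

    import torch
    from torch.utils.data import DataLoader

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=80, gamma=0.5)  # halve lr every 80 epochs
    loader = DataLoader(train_set, batch_size=512, shuffle=True)

    for epoch in range(250):
        for x, y in loader:
            optimizer.zero_grad()
            loss = compute_loss(model, x, y)   # placeholder: (9) for the MAF, (14) for CVAE models
            loss.backward()
            optimizer.step()
        scheduler.step()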

III.2.1 CVAE

Our CVAE network is designed to be similar to that of Gabbard et al. (2019). The encoder, decoder, and prior distributions are all taken to be diagonal Gaussians, each parametrized by fully connected neural networks with three hidden layers of dimension 2048. The latent space is of dimension 8. We used the same initial learning rate as Gabbard et al. (2019), of 0.0001.

Figure 6: One- and two-dimensional marginalized posterior distributions for the strain realization shown in figure 1, comparing output from the CVAE (a), MAF (b), and CVAE+ (c) neural networks. Each panel is constructed from neural-network posterior samples, and contours represent 50% and 90% credible regions. For the CVAE+ network, MCMC results are overlaid for comparison. The CVAE+ network is the only one capable of capturing the multimodality in the ϕ_0 posterior.
Figure 10: p–p plots for the three neural network models: CVAE (a), MAF (b), and CVAE+ (c). For each marginalized one-dimensional posterior distribution, the curve represents the cumulative distribution function of the percentile scores of the true values of the parameter. Each plot is constructed from the same parameter choices and strain realizations. Deviations from the diagonal represent a failure of the model to learn the true posterior for that parameter. KS test p-values are provided in the legends.

To train the CVAE, we optimize the variational lower bound loss (14), with a single Monte Carlo sample to estimate the expectation value over r(z|x,y). Since this loss is only an upper bound on the cross-entropy loss (8), the value of the loss function alone is not entirely indicative of performance. Indeed, when posterior collapse occurs (see section II.4.1), the gap between the two can vanish; in this case, the cross-entropy is larger than that of a network with the same loss value and no posterior collapse. For this reason, it is important to also use other metrics to evaluate performance. For CVAE models we always quote the KL loss as an indication that the latent space is being used.

To encourage use of the latent space, we found it beneficial to use KL annealing during training, i.e., we multiplied the KL part of the loss by a factor between zero and one during the early stages. This reduces the importance of the KL loss term compared to the reconstruction loss. For all CVAE-based models, we adopted a cyclic KL annealing strategy Fu et al. (2019): for the first 6 epochs we used reduced annealing factors, and we repeated this cycle 3 more times; see figure 11. We also ignored the KL loss term whenever it was less than 0.2.
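A sketch of such a cyclic schedule is shown below; the linear ramp is an assumption, since the exact annealing factors used in the paper are not reproduced here, and reconstruction and kl are the per-sample loss terms from the CVAE loss sketch above.

    # Cyclic KL annealing (sketch): a factor in [0, 1] scaling the KL term during the
    # first 6 epochs of each of 4 cycles, then fixed to 1.
    def kl_annealing_factor(epoch, cycle_length=6, num_cycles=4):
        if epoch >= cycle_length * num_cycles:
            return 1.0                                        # annealing switched off after the cycles
        return (epoch % cycle_length) / (cycle_length - 1)    # linear ramp from 0 to 1 (an assumption)

    loss = (reconstruction + kl_annealing_factor(epoch) * kl).mean()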

In figure 6(a), we show a posterior distribution corresponding to the strain data of figure 1. This is constructed from posterior samples, obtained using formula (10) with the prior and decoder networks as follows: (1) pass the strain data y through the prior network to obtain the distribution q(z|y), (2) draw latent-space samples z from this prior, (3) pass these through the decoder network to obtain distributions q(x|y,z), and finally (4) draw one sample x from each of these distributions.
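A sketch of these four steps for a trained model, with prior_net and decoder the hypothetical modules introduced earlier and y_obs the observed strain of shape (1, m):

    import torch

    n_samples = 10_000
    with torch.no_grad():
        y_rep = y_obs.expand(n_samples, -1)
        q_z = prior_net(y_rep)        # (1) prior network conditioned on the strain
        z = q_z.sample()              # (2) latent-space samples
        q_x = decoder(y_rep, z)       # (3) decoder distributions q(x|y,z)
        x_samples = q_x.sample()      # (4) one parameter sample from each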

By inspection of the posterior, it is clear that the latent space is being used in the model, since the distribution is not a diagonal Gaussian. The distributions for most of the parameters look reasonable, and they cover the true values of the parameters. The phase of coalescence, ϕ_0, is, however, not being captured at all. Indeed, the posterior should be precisely π-periodic in ϕ_0 because our training set waveforms only contain the dominant mode of the signal. Moreover, since we are taking a single polarization, ϕ_0 should be resolvable.

We can evaluate the performance of the network with a p–p plot Veitch et al. (2015). To do this, we compute posterior distributions for a large number of different strain realizations (i.e., for parameters x drawn from the prior, with a noise realization added to the waveform). For each one-dimensional posterior, we then compute the percentile value of the true parameter. For each parameter, the p–p curve is then the cumulative distribution function of the percentile values. This is shown in figure 10(a) for the CVAE network. If the CDF is diagonal, then the percentile values are distributed according to a uniform distribution, as one would expect for the true one-dimensional posteriors. We can see that the CVAE appears to capture all of the one-dimensional distributions except for ϕ_0.

To confirm that the percentile scores are well-distributed, we also performed a Kolmogorov-Smirnov test, computing the KS statistic to compare the distribution of percentile scores to a uniform distribution. We found that ϕ_0 had a minuscule p-value, as expected; the smallest p-value among the remaining parameters was 0.15, and all others were greater than 0.29. This indicates that all parameters are well-recovered, except for ϕ_0.
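For reference, the percentile scores and KS test can be computed as follows; posterior_samples (shape (n_samples, n_injections), for one parameter) and true_values (shape (n_injections,)) are placeholders.

    import numpy as np
    from scipy import stats

    # Percentile score of the true value within each one-dimensional posterior.
    percentiles = np.mean(posterior_samples < true_values, axis=0)

    # KS test of the percentile scores against a uniform distribution on [0, 1].
    ks_stat, p_value = stats.kstest(percentiles, "uniform")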

III.2.2 MAF

Next, we modeled the gravitational-wave posterior directly using a masked autoregressive flow. As described in section II.3, the MAF network describes the posterior in terms of an invertible mapping f from a five-dimensional standard normal base distribution into the gravitational-wave posterior q(x|y). The flow is autoregressive over x and has arbitrary dependence on the strain data y. The MAF network does not involve latent variables, and it optimizes the cross-entropy with the data distribution. Thus the loss function (21) can be used to directly compare performance for different models.

Our best performing MAF network consists of five MADE units, each with three hidden layers of dimension 512, and conditioned on y. We also inserted batch normalization layers between each pair of MADE units, and between the first MADE unit and the base space; we found this to be important for training stability. We were able to train the MAF successfully at a higher initial learning rate than the CVAE, of 0.0004. Following Papamakarios et al. (2017), at the end of each training epoch, before computing the validation loss, we set the stored mean and variance vectors of the batch normalization layers to the mean and variance over the entire training set. (We did this also for the CVAE+ network below.)

In figure 6(b), we show a corner plot for the same strain data as before. Samples are obtained by first sampling u from the standard normal base space, and then applying the flow x = f(u). All quantities appear to be well-modeled except, again, for ϕ_0. (In some experiments with deeper MAF networks, we were in fact able to properly resolve the ϕ_0 posterior. This was, however, difficult to reproduce and somewhat unstable, because MAF networks use element-wise affine transformations (18), and these may not be flexible enough to consistently resolve multimodality. We leave further investigation to future work.) This is consistent with the p–p plot shown in figure 10(b). Apart from ϕ_0, the smallest KS statistic p-value was 0.14, and all other parameters had p-values greater than 0.5. (The p–p plots for all networks are computed from the same strain realizations, so it is consistent to compare the KS statistic p-values across the different networks.)

Performance of the MAF network is comparable to that of the CVAE, with (in our case) one eighth the number of trainable parameters. In both cases, all parameters except for ϕ_0 are well-modeled by the networks. In addition, the final loss values given in table 3 are very close for the two networks, with a slight edge in validation loss for the MAF. However, since the CVAE loss is only an upper bound on the cross-entropy, the cross-entropy loss for the CVAE network may actually be lower than that of the MAF. Indeed, the fact that the CVAE made use of the latent space suggests that this gap is nonzero. Comparison of the training and validation loss functions shows slight overfitting for the CVAE, but none for the MAF.

III.2.3 CVAE+

Finally, we experimented with combinations of CVAE and MAF networks. As described in section II.4, all three component distributions of the CVAE can be made more flexible by applying MAF transformations to the diagonal Gaussian distributions, thereby increasing the total modeling power of the network Kingma et al. (2016); Chen et al. (2016). Indeed, appendix A of Kingma et al. (2016) shows that a single linear autoregressive flow is capable of generalizing a Gaussian distribution from diagonal to generic covariance.

For the combined models, the initial Gaussian distributions (the base spaces for the autoregressive flows) are modeled in the same way as in section III.2.1, as fully connected three-hidden-layer networks. However, we reduced the size of the hidden layers to 1024 dimensions. We also kept the 8-dimensional latent space. We found that best performance was achieved by applying autoregressive flows to all three component distributions as follows:

We added an IAF after the initial Gaussian encoder. This was made conditional on y and an extra 8-dimensional context output from the initial encoder. We also experimented with conditioning on x, but found this to reduce performance.

We added a MAF after the initial Gaussian prior network. This was made conditional on y.

We added a MAF after the initial Gaussian decoder network. This was conditional on y and z.

In all cases the MAF/IAF parts consist of 5 MADE units (three hidden layers of 512 dimensions each) and batch normalization layers. Although the CVAE+ network contains far more layers than the basic CVAE, it has roughly half the number of free parameters since we reduced the width of the hidden layers in the Gaussian components.

Training was performed by optimizing the CVAE loss (14), with the change of variables rule used to evaluate the component densities. Sampling was similar to the basic CVAE, but now with the appropriate flows applied after sampling from the Gaussians.

We found that optimization was best when we trained the Gaussian distributions at a learning rate of 0.0001, and the autoregressive networks at a higher rate of 0.0004, combining the rates of the previous two subsections. As always, the learning rate was reduced by half every 80 epochs.

It is especially important for the CVAE+ network to apply some sort of KL annealing to encourage use of the latent space. An important difference compared to the basic CVAE of section III.2.1 is that now the decoder network is sufficiently powerful to model much of the posterior on its own. Indeed, it is just as flexible as the MAF network of the previous subsection, which produced the posterior in figure 6(b), and it is well known that a CVAE will often ignore latent variables if the decoder is sufficiently powerful Bowman et al. (2015); Chen et al. (2016). The annealing strategy, combined with the flexible encoder distribution, resulted in a higher KL validation loss for the CVAE+ network (6.77) than for the basic CVAE (4.49) by the end of training; see table 3. A plot showing the KL and total loss terms as training proceeds is given in figure 11.

Figure 11: Training and validation loss for each epoch for the CVAE+ network. Spikes at early stages arise from the cyclic KL annealing. Since training and validation loss coincide, there is negligible overfitting.

A sample gravitational-wave posterior distribution for CVAE+ is given in figure 6(c). In contrast to the simpler models, this captures the periodicity in ϕ_0. Sampling was very fast, requiring less than two seconds to draw 10^4 posterior samples. The p–p plot in figure 10(c) shows excellent statistical performance, with all KS statistic p-values greater than 0.35.

For validation of our network against standard methods, figure 6(c) also includes samples obtained using MCMC. We used the emcee ensemble sampler Foreman-Mackey et al. (2013) with the same prior and likelihood as defined by our training set construction. There is clear agreement between MCMC sampling and neural network sampling, with slight deviation in the mass posteriors. We expect that this could be reduced with hyperparameter improvements or further training. The KL divergence between the two distributions shown is estimated using the method of Wang et al. (2009) at

(23)
(24)
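A sketch of the MCMC comparison setup; log_posterior, prior_low, prior_high, and y_obs are placeholders for the log-posterior function and prior bounds implied by the training-set construction, and the walker and step counts are illustrative, not those used in the paper.

    import numpy as np
    import emcee

    ndim, nwalkers = 5, 32
    p0 = np.random.uniform(prior_low, prior_high, size=(nwalkers, ndim))   # start walkers inside the prior box

    sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior, args=(y_obs,))
    sampler.run_mcmc(p0, 10_000, progress=True)
    mcmc_samples = sampler.get_chain(discard=2_000, flat=True)             # drop burn-in, flatten walkers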

IV Inclined binaries with aligned spins

To test our method with a more challenging dataset, we augmented the parameter space to include nonzero aligned spins and inclination angle. We took uniform priors on the aligned spins χ_1z and χ_2z and on the inclination θ_JN, keeping the rest of the prior distribution unchanged. The neural network was trained to model posterior distributions over the eight-dimensional parameter space (m_1, m_2, ϕ_0, t_c, d_L, χ_1z, χ_2z, θ_JN). We held the dataset size fixed, again split 90% into the training set and 10% into the validation set. The median SNR for this training set is 17.2 and the mean is 19.1.

We tested only the CVAE+ model, since this performed best in previous experiments. Because gravitational-wave posteriors over the eight-dimensional parameter space have increased complexity, we increased the capacity of our network by doubling the latent space dimension to 16, the number of MADE units to 10 per component network, and the dimension of the IAF context variable to 16. We also found performance was best when we froze the Gaussian part of the prior network, but aside from this all other hyperparameters were unchanged. This increased the total number of trainable parameters.

We trained again for 250 epochs, with final KL loss values of 11.05 (train) and 11.14 (validation) and very little overfitting. The higher KL loss indicates that, for the extended parameter space, the network made heavier use of the latent space. (This observation motivated the increased size of the latent space.)


Figure 12: Sample posterior distribution for the eight-dimensional parameter space, constructed from CVAE+ samples. Inset shows derived quantities: chirp mass M_c, asymmetric mass ratio q, and effective spin parameter χ_eff. The waveform injection has an SNR of 29.6.
Figure 13: p–p plot for one-dimensional posteriors for eight-dimensional parameter space, modeled with CVAE+ network. Constructed from injections.

In figure 12 we show a representative posterior distribution produced by the neural network. It continues to capture the degeneracy between the masses, as well as the new d_L–θ_JN and χ_1z–χ_2z degeneracies that arise in the extended parameter space. In contrast to the smaller parameter space of the previous section, the posterior over ϕ_0 is now simply the prior. The reason for this is that in our waveform model, there is a three-parameter degeneracy between the two spins χ_1z, χ_2z and the phase ϕ_0, where small changes in the spins cause large changes in the phase. We confirmed with MCMC sampling that the posterior over ϕ_0 is correct. A p–p plot for the extended parameter space is presented in figure 13. By inspection, this shows good recovery of all parameters. Moreover, all parameters have KS p-values greater than 0.2.

The inset in figure 12 shows the posterior distribution over the chirp mass M_c, the asymmetric mass ratio q, and the effective spin parameter χ_eff. These are derived quantities, with sampling done instead over the parameters listed above. Although posteriors are simpler over the derived parameters, to test our method we chose to sample over parameters with more nontrivial posteriors.
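For reference, the mapping from sampled to derived parameters is shown below (a small numpy helper with illustrative names, using the standard definitions of chirp mass, mass ratio, and effective aligned spin):

    import numpy as np

    def derived_parameters(m1, m2, chi1z, chi2z):
        mchirp = (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2          # chirp mass M_c = (m1 m2)^(3/5) / (m1+m2)^(1/5)
        q = m2 / m1                                            # mass ratio (q <= 1 for m1 >= m2)
        chi_eff = (m1 * chi1z + m2 * chi2z) / (m1 + m2)        # effective aligned spin
        return mchirp, q, chi_eff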

Sampling from the larger CVAE+ model used for the extended parameter space is slightly slower than from the smaller model of the previous section, now requiring somewhat more time to obtain the same number of posterior samples. This is partially due to the larger 16-dimensional latent space: the forward pass through a MAF is recursive, so twice as many passes are required to sample from the variational prior and decoder. Both MAFs also have twice as many layers.

V Conclusions

In this work we introduced the use of masked autoregressive flows for improving deep learning of gravitational-wave posterior distributions. These learnable mappings on sample spaces induce transformations on probability distributions, and we used them to increase the flexibility of neural network posterior models. The models that we built can all be rapidly sampled, requiring less than two seconds to obtain 10^4 posterior samples.

For nonspinning, quasi-circular binary black holes and a single gravitational-wave detector (a five-dimensional parameter space), we compared models involving a single MAF, a CVAE, and a CVAE with autoregressive flows applied to the encoder, decoder, and prior networks (CVAE+). We found that the performance of the single MAF and CVAE models was comparable, and that best performance was achieved by the CVAE+ model. The CVAE+ model was able to capture the bimodality in the phase ϕ_0, which eluded the other models.

We then considered a larger eight-dimensional parameter space, with aligned spins and nonzero binary inclination angle. With a higher-capacity CVAE+ network, we successfully recovered the posterior distribution over all parameters. This demonstrates that our approach extends to higher-dimensional parameter spaces. Modest increase in network capacity may, however, be required.

Although best performance was achieved with the CVAE+ model, an advantage of models without latent variables (such as the MAF alone) is that it is possible to directly evaluate the probability density function. Since the posterior distribution that the MAF models is normalized, one could then calculate the Bayesian evidence by separately evaluating the likelihood and prior. (In CVAE+, this would require marginalization over the latent variables z.) Moreover, since the MAF loss function is just the cross-entropy loss with the true distribution, the loss functions of competing models can be compared directly. It would therefore be worthwhile to also try to improve the performance of the basic MAF model.

In contrast to typical applications of these deep learning tools, the space Y of strain data on which all of our models are conditioned is of much higher dimensionality than the space X that we are modeling. One way to improve performance further may be to introduce convolutional neural networks to pre-process the strain data y and compress it to lower dimensionality. In the future, when we extend our models to treat longer waveforms, higher sampling rates, and additional detectors, this will become even more important.

Going forward, it will also be important to better understand the uncertainty associated with the neural network itself, particularly in regions of parameter space that are not well covered by training data. Rather than taking the maximum likelihood estimate for the neural network parameters, as we did in this work, one can model them as random variables with some probability distribution. This distribution can be learned through variational inference, and ultimately marginalized over, resulting in a slightly broader posterior over system parameters Blundell et al. (2015); Shen et al. (2019). In our work, overfitting was not a problem, but such approaches to neural network uncertainty could be useful in situations where the binary system parameter space is not well covered due to high waveform generation costs.

As the rate of detected gravitational-wave events grows with improved detector sensitivity, methods for rapid parameter estimation will become increasingly desirable. For deep learning approaches to become viable alternatives to standard inference methods, they must be extended to cover the full space of binary system parameters, and to treat longer duration waveforms from multiple detectors with detector PSDs that vary from event to event. The methods discussed in this work bring us closer to these goals.

Acknowledgements.

We would like to thank R. Cotesta, A. Ghosh, A. Matas, and S. Ossokine for helpful discussions. C.S. was supported by the EPSRC Centre for Doctoral Training in Data Science, funded by the UK Engineering and Physical Sciences Research Council (grant EP/L016427/1) and the University of Edinburgh.

References