On the variance of radio interferometric calibration solutions: Quality-based Weighting Schemes

11/01/2017, by Etienne Bonnassieux et al.

SKA-era radio interferometric data volumes are expected to be such that new algorithms will be necessary to improve images at very low computational costs. This paper investigates the possibility of improving radio interferometric images using an algorithm inspired by an optical method known as "lucky imaging", which would give more weight to the best-calibrated visibilities used to make a given image. A fundamental relationship between the statistics of interferometric calibration solutions and those of the image-plane pixels is derived in this paper, relating their covariances. This "Cov-Cov" relationship allows us to understand and describe the statistical properties of the residual image. In this framework, the noise-map can be described as the Fourier transform of the covariance between residual visibilities in a new $(\delta u, \delta v)$-domain. Image-plane artefacts can be seen as one realisation of the pixel covariance distribution, which can be estimated from the antenna gain statistics. Based on this relationship, we propose a means of improving images made with calibrated visibilities using weighting schemes. This improvement would occur after calibration, but before imaging: it is thus ideally used between major iterations of self-calibration loops. Applying the weighting scheme to simulated data improves the noise level in the final image at negligible computational cost.


1 Introduction

Interferometers sample Fourier modes of the sky brightness distribution, corrupted by instrumental and atmospheric effects, rather than measuring the sky brightness directly. This introduces two problems for astronomers to invert: calibration and imaging. Both of these problems are ill-conditioned.

The problem of imaging consists of correcting for the incomplete uv-coverage of any given interferometer by deconvolving the instrument’s Point-Spread Function (PSF) from images. Its poor conditioning comes from our limited a priori knowledge of the sky brightness distribution, combined with large gaps in our uv-coverage, which prevents us from placing strong constraints on image deconvolution. It can be better-conditioned in different ways, including through the use of weighting schemes (see Briggs 1995; Yatawatta 2014, and references therein) to improve image fidelity at the start of deconvolution. When inverting the imaging problem, we often assume that the sky is stable within the observed $(t, \nu)$ domain (i.e. is constant in time and frequency). There are exceptions, such as wide-band deconvolution algorithms (e.g. Rau & Cornwell 2011) that explicitly take into account the sky’s frequency-dependence, but these still assume that the sky brightness distribution does not vary with time.

The problem of calibration is what concerns us in this paper. It consists of estimating and correcting for instrumental errors (which include effects such as antenna pointing errors, but also the phase-delays caused by the ionosphere, the troposphere, etc.). Calibration consists of solving for gain estimates, where a gain models the relationship between the electromagnetic field of an astrophysical source and the voltage that an antenna measures for this source. Because measurements are noisy, calibration often involves some fine-tuning of solution intervals, to ensure that the solutions are well-constrained while the solution intervals stay as small as signal-to-noise allows. The calibration inverse problem involves three competing statistical effects: thermal noise in the measurements, true gain variability, and sky model incompleteness. If gain solutions are sought individually for each measurement, then calibration estimates will be dominated by thermal noise, and will not adequately describe the actual gains. Similarly, if a single gain estimate is fitted to too many measurements, the intrinsic gain variability will be “averaged out”; for example, a choice of time and frequency interval that is too large will cause the solver to estimate a constant gain while the underlying function varies quickly, thereby missing much of the gain structure. This will introduce errors which will be correlated in time and frequency. This occurs, for example, when solving for ionospheric phase delays: in the most extreme case, where the solution interval is significantly larger than the scale of ionospheric fluctuations, the varying phase can average out to zero over the interval in time and frequency. Finally, if the model being fitted is incomplete, unmodeled physical flux will likely be absorbed unpredictably into both the gain solutions and the residual visibilities: this absorption of physical flux into gain solutions is known as source suppression (see Grobler et al. 2014; Kazemi & Yatawatta 2013, and references therein).

In practice, it is reasonable to assume that gain variation is generally slower than some given scale: we can then reduce the noise of our gain estimates by finding a single gain solution for a small number of measurements, assuming that the underlying gain variation is very small and stable over short intervals. This is generally a valid hypothesis, but the specific value for the variation scale can be contentious. Indeed, while the noise level can often be treated as constant throughout an observation, the gain variability itself is generally not constant: there will be time periods where the gains will tend to remain constant for longer, and others where variability will be very quick. This means that, for any choice of calibration interval, some gain estimates will be better than others, and almost all could have been improved (at a cost to others) by a different choice of time (and frequency) intervals.

Since some measurements are better-calibrated than others (in that better estimates for their gains were obtained through chance alone), we could, in principle, take inspiration from “lucky imaging” (an optical-domain method for making good images; for more details, see Fried 1978, and references therein) to weight our visibilities according to their calibration quality. Those weights would in effect be an improvement over currently existing methods such as clipping noisy residual visibilities: in the extreme case where all visibilities are equally well calibrated except a few which are extremely noisy, weighting should be equivalent to clipping. Otherwise, the weights should show at least a slight improvement over clipping.

The key finding of the present paper is a fundamental relationship between the covariance matrices of residual visibilities and the map of the covariance in the image-plane: the “Cov-Cov” relationship between visibility covariance and image-plane covariance. We show that the pixel statistics in the image-plane are determined by a “noise-PSF”, convolved with each source in the sky (modeled or not). This noise-PSF is the result of Fourier-transforming the gain covariance matrix, with each cell mapped not from $(u, v)$ space to $(l, m)$ coordinates but rather between their respective covariance spaces: from a new differential Fourier plane (henceforth the “$(\delta u, \delta v)$”-plane) to the image-plane covariance space. This image-plane covariance space describes the variance in each pixel and the covariance between pixels (the noise-PSF also relates to $\delta w$, as shown in the matrix formalism, but this is not explicitly referenced in the text since visibility space is usually referred to as “the uv-plane” in the literature, rather than “the uvw-space”). It describes the expected calibration artefacts and thermal noise around each source, does not vary as a function of direction, and is convolved with each source in the field to yield the final error map. Because all unwanted (in our case, unphysical) signal can be thought of as noise, we will refer to the pixel variance map as the “noise-map”.

The notion of a $(\delta u, \delta v)$-plane arises organically from the framework of radio interferometry: we are associating a correlation between visibilities to coordinates in covariance space, just as we associate the visibilities themselves to the uv-domain. The $(\delta u, \delta v)$-plane is the natural domain of these correlations. As previously stated, even if all sources in the field are perfectly known and modeled, a poor choice of calibration interval can introduce correlated noise in the residuals, which would then introduce larger variance near sources in the noise-map. Conversely, if calibration is perfect, the noise-map should be completely flat (i.e. same variance for all pixels), as there would be no noise-correlation between pixels.

The main result of this paper consists of describing a new adaptive, quality-based weighting scheme based on this insight. Using the Cov-Cov relationship, we can create a new weighting scheme by estimating the residual visibility covariance matrix in a given observation. By weighting visibilities so as to change their covariance matrix, one can change the shape of the noise-PSF and thus improve the final image: this manifests as either decreased noise or decreased calibration artefacts. Note that this weighting is applied after calibration, but before image deconvolution: applying it will therefore not only improve the residual noise in the image, and thus the sensitivity achievable with a given pipeline, but will also improve deconvolution by minimising calibration artefacts in the field: it should thus effectively remove spurious, unphysical emission from final data products. Estimating the covariance matrix is the main difficulty of our framework: we do not know the underlying covariance matrix, and the conditioning of our estimation thereof is limited by the number of measurements within each solution interval. As such, we have no guarantee that our estimate of the corrected visibility covariance matrix is accurate. This problem can be alleviated, for example, by estimating the covariance matrix for the antenna gains themselves, and using it to build the visibility covariance matrix: this effectively improves conditioning (cf. Sec. 4).

This paper is structured as follows. In Section 2, we derive the Cov-Cov relationship. With its newfound insights, we propose quality-based weighting schemes with which to improve radio interferometric images in Section 3. We follow in Section 4 by showing how to estimate, from real data, the covariance matrix from which the quality-based weights are derived. We then apply these weighting schemes to simulated data in Section 5 and to real data in Section 6; our approach seems to give good results. Finally, we close the paper with a discussion of the applicability and limitations of the quality-based weighting scheme.

Scalars
$N_{pix}$ : Total number of pixels in image-plane
$N_{cell}$ : Number of cells in the uv-grid
$N_a$ : Total number of antennas in the array
$N_b$ : Number of visibilities
$b$ : Index for a single visibility. Equivalent to $(pq, t\nu)$
$s$ : Index for a single source direction. Equivalent to $(l_s, m_s)$
Vectors
$\mathbf{y}$ : Residual image vector, size $N_{pix}$
$\mathbf{n}$ : Vector of $n_{t\nu}$, size $N_b$
$\tilde{\mathbf{g}}$ : Contains gain products, size $N_b$. See Eq. 12
$\mathbf{1}$ : Vector containing 1 in every cell, size $N_b$
$(\delta u, \delta v)$ : Vector of coordinates in the differential Fourier plane, of length 2
$(l, m)$ : Vector of sky coordinates, of length 2
Matrices
$\mathbf{V}_{pq}$ : Visibility seen by baseline $pq$ at time $t$ and frequency $\nu$. Size $2 \times 2$
$\mathbf{K}^s_p$ : Fourier kernel for direction $s$, antenna $p$ and one $(t, \nu)$ pair. Size $2 \times 2$
$\mathbf{G}_p$ : Jones matrix for antenna $p$ for one $(t, \nu)$ pair. Contains the gains information. Size $2 \times 2$
$\mathbf{B}_s$ : Sky brightness distribution matrix, of size $2 \times 2$
$\mathbf{N}$ : Noise matrix, of size $2 \times 2$. Contains a single realisation of the thermal noise in each cell
$\mathbf{F}$ : Fourier transform matrix, of size $N_{pix} \times N_{cell}$
$\mathbf{S}_b$ : Baseline selection matrix, which picks out 1 visibility out of the full set. Size $N_{cell} \times N_b$
$\mathbf{C}_{bb}$ : Convolution kernel that defines the PSF
$\mathbf{C}_{bb'}$ : Convolution matrix mapping one $c_{bb'}$ to the image-plane. The set of all $\mathbf{C}_{bb'}$ determines the noise-PSF. Size $N_{pix} \times N_{pix}$
Table 1: Table recapitulating the meaning and dimensions of vectors and matrices used in Sec. 2.1. Only scalars which give matrix dimensions or indices are given here.

2 Building the Noise Map

In this section, we derive our first fundamental result: the Cov-Cov relationship, Eq. 25, which describes how the statistics of residual visibilities (and thus of the antenna calibration solutions, henceforth “gains”) relate to the statistics of the image plane, i.e. of images made using the associated visibilities. The dimensions of the matrices (denoted by boldface capital letters) and vectors (denoted by boldface lowercase letters) used in this paper are given in Table 1, along with the scalar numbers used to denote specific dimensions. All other variables are scalars.

2.1 The Cov-Cov Relationship in the $(\delta u, \delta v)$ plane

Let us begin by defining visibility gains. Using the Radio Interferometry Measurement Equation formalism for a sky consisting of a single point source (Hamaker et al. 1996; Smirnov 2011, and companion papers), we can write the following relation between the sky and the signal as measured by a single baseline $pq$ at time $t$ and frequency $\nu$:

$\mathbf{V}_{pq} = \mathbf{G}_p \left( \sum_s \mathbf{K}^s_p\, \mathbf{B}_s\, \mathbf{K}^{s\,H}_q \right) \mathbf{G}_q^H + \mathbf{N} \qquad (1)$

All the quantities above are $2 \times 2$ matrices. Eq. 1 implies a linear relationship between the coherency matrix $\mathbf{B}_s$ and the visibilities recorded by a given baseline ($pq$), with the addition of a thermal noise matrix $\mathbf{N}$, which is also of shape $2 \times 2$ and contains different complex-valued realisations of the noise in each cell. Since electric fields are additive, the sky coherency matrix can be described as the sum of the contributions from individual sources in directions $s$ in the sky. We also assume that the sky does not vary over time, i.e. that $\mathbf{B}_s$ is not a function of time. The Jones matrices ($\mathbf{G}_p$, $\mathbf{G}_q$) contain the antenna gain information in matrix form, while $\mathbf{K}^s_p$ is the Fourier kernel. Let us limit ourselves to the scalar case, which corresponds to assuming that emission is unpolarised. We assume that $\mathbf{B}_s = s_I\, \mathbf{I}$, where $s_I$ is the flux of our single point source and $\mathbf{I}$ is the identity matrix. We also assume that $\mathbf{G}_p = g_{p,t\nu}\, \mathbf{I}$, where $g_{p,t\nu}$ is the complex-valued gain of antenna $p$ at time $t$ and frequency $\nu$. This means that we assume that the gains are direction-independent, and so $\mathbf{G}^s_p$ becomes $\mathbf{G}_p$. Similarly, $\mathbf{K}^s_p = k^s_p\, \mathbf{I}$, where $k^s_p$ is the Fourier kernel in the direction of the source. $\mathbf{N}$ has 1 realisation of $n_{t\nu}$ in each cell, where:

$n_{t\nu} \sim \mathcal{N}\!\left(0, \sigma^2\right) + i\, \mathcal{N}\!\left(0, \sigma^2\right) \qquad (2)$

where $\sigma^2$ is the variance of the thermal noise. Let us denote each $(pq, t\nu)$ pair by $b$, and ignore the sky’s frequency-dependence. The following scalar formulation is then equivalent to Eq. 1:

$v_b = g_{p_b}\, \bar g_{q_b}\, s_I\, k_b + n_b \qquad (3)$
$k_b = e^{-2i\pi \left( u_b l + v_b m + w_b (n - 1) \right)}, \quad n = \sqrt{1 - l^2 - m^2} \qquad (4)$

Calibration is the process of finding an accurate estimate of $g_{p,t\nu}$ for all antennas $p$, at all times $t$ and frequencies $\nu$. Since we are in a direction-independent regime, the quality of our calibration then determines the statistical properties of the residual visibilities (and of their image-plane equivalent, the residual image). The residual visibilities associated with calibration solutions are defined as our measured visibilities minus the gain-corrupted model visibilities. $\hat g_{p,t\nu}$ then denotes our calibration estimate for $g_{p,t\nu}$. We now begin to limit the generality of our framework by assuming that all sufficiently bright sources have been modeled and subtracted: unmodeled flux is then negligible. We can then write the residual visibilities as:

$r_b = v_b - \hat g_{p_b}\, \bar{\hat g}_{q_b}\, s_I\, k_b = \left( g_{p_b}\, \bar g_{q_b} - \hat g_{p_b}\, \bar{\hat g}_{q_b} \right) s_I\, k_b + n_b \qquad (5)$

The flux values in the image-plane pixels (as opposed to the Fourier-plane pixels, which are the elements of the grid onto which the measured visibilities are mapped for imaging) are the Fourier transform of the visibility values mapped onto each pixel. This can be written as follows:

$y_{lm} = \frac{1}{\sum_b w_b} \sum_b w_b\, r_b\, f_{b,lm} \qquad (6)$
$f_{b,lm} = e^{2i\pi \left( u_b l + v_b m \right)} \qquad (7)$

where $(l, m)$ are the directional cosine positions of a given pixel, and $f_{b,lm}$ the Fourier coefficient mapping a point in Fourier space to a point on the image-plane. $w_b$ is the weight associated to a given visibility. Let us now write this using a matrix formalism. The contribution of a single visibility to the image-plane residuals can be written as:

$\mathbf{y}_b = w_b\, \mathbf{F}\, \mathbf{S}_b\, \mathbf{r} \qquad (8)$
$\mathbf{y} = \frac{1}{\sum_b w_b} \sum_b w_b\, \mathbf{F}\, \mathbf{S}_b\, \mathbf{r} \qquad (9)$

where $\mathbf{n}$ is a vector of the $n_{t\nu}$ of Eq. 2 and $\tilde{\mathbf{g}}$ is a vector of size $N_b$, with

$\mathbf{r} = s_I\, \tilde{\mathbf{g}} \odot \mathbf{k} + \mathbf{n} \qquad (10)$
$\mathbf{k} = \left( k_b \right)_{b = 1 \ldots N_b} \qquad (11)$
$\tilde g_b = g_{p_b}\, \bar g_{q_b} - \hat g_{p_b}\, \bar{\hat g}_{q_b} \qquad (12)$
$\mathbf{w} = \left( w_b \right)_{b = 1 \ldots N_b} \qquad (13)$

and $w_b$ is the scalar weight associated with each visibility. By default, $w_b = 1$: all visibilities then have the same weight, and $\mathbf{y}$ then becomes the average of all $\mathbf{y}_b$. $\mathbf{w}$ is a vector of all $w_b$, and thus of size $N_b$. $\mathbf{F}$ is the Fourier kernel, of size $N_{pix} \times N_{cell}$. $\mathbf{S}_b$ is a matrix of size $N_{cell} \times N_b$: its purpose is to encode the uv-coverage. Each $\mathbf{S}_b$ contains only a single non-zero cell, different for different $b$. The height (number of rows) of $\mathbf{S}_b$ is determined by the size of the uv-grid, and its length (number of columns) by the number of visibilities.

The order of operations is thus: each residual visibility $r_b$ is assigned some weight $w_b$ and its uv-coordinates are set by $\mathbf{S}_b$. The inverse Fourier transform ($\mathbf{F}$) is then applied to this grid, and so we recover its image-plane fringe. By averaging over all fringes, we recover the dirty image.
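To make this order of operations concrete, here is a minimal numpy sketch (our own illustration, not code from the paper) of Eqs. 6-9: a weighted dirty image formed by direct Fourier transform of the residual visibilities. All names and coordinate conventions are ours.

```python
import numpy as np

def dirty_image(u, v, resid, w, l, m):
    """Weighted average of fringes (Eqs. 6-7): one complex exponential per
    visibility, evaluated on an (l, m) pixel grid. A slow O(N_b * N_pix)
    direct transform, for illustration only (no gridding, no FFT)."""
    L, M = np.meshgrid(l, m, indexing="ij")
    img = np.zeros(L.shape, dtype=complex)
    for ub, vb, rb, wb in zip(u, v, resid, w):
        img += wb * rb * np.exp(2j * np.pi * (ub * L + vb * M))
    return img / np.sum(w)  # normalise by the sum of weights, as in Eq. 6
```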

The residual image will thus depend on three quantities: the residual gains, the flux in the image, and the weighting scheme. Let us consider the relationship between the statistics of residual visibilities and the variance at a given point in the corresponding residual image.

2.2 Statistical Analysis

Figure 1: Fig. 1a shows the PSF image of a simulated 1 Jy source at phase centre. Colourbar units are in Jansky. Fig. 1b shows the associated uv-track, and Fig. 1c the corresponding $(\delta u, \delta v)$ tracks. Note that the $(\delta u, \delta v)$ plane does not have a homogeneous point density, but is denser near its origin: here, this is shown by plotting only 1 random point in 10000.

In the following analysis, we treat our gain solutions and thermal noise as random variables in order to compute the covariance matrix of our residual image, $\text{Cov}(\mathbf{y})$. The diagonal of this matrix gives the variance for each pixel, while the wings give the covariance between pixels. Using the property that $\langle \mathbf{y} \rangle = \mathbf{0}$ (so that $\text{Cov}(\mathbf{y}) = \langle \mathbf{y}\, \mathbf{y}^H \rangle$), we can apply the $\text{Cov}$ operator to Eq. 9 to write:

$\text{Cov}(\mathbf{y}) = \left\langle \mathbf{y}\, \mathbf{y}^H \right\rangle \qquad (14)$
$= \frac{1}{\left( \sum_b w_b \right)^2} \sum_{b,b'} w_b\, w_{b'}\, \mathbf{F}\, \mathbf{S}_b \left\langle \mathbf{r}\, \mathbf{r}^H \right\rangle \mathbf{S}_{b'}^T\, \mathbf{F}^H \qquad (15)$
$= \frac{1}{\left( \sum_b w_b \right)^2} \sum_{b,b'} w_b\, w_{b'}\, \mathbf{F}\, \mathbf{S}_b\, \text{Cov}(\mathbf{r})\, \mathbf{S}_{b'}^T\, \mathbf{F}^H \qquad (16)$

So far, we have only applied definitions. The net effect of $\mathbf{S}_b\, \text{Cov}(\mathbf{r})\, \mathbf{S}_{b'}^T$ (dimensions of $N_{cell} \times N_{cell}$) is to encode where a given baseline samples the uv-plane, and map one cell at matrix coordinates $(b, b')$ from the correlation matrix onto the visibility grid. $\mathbf{S}_b$ is not the gridding kernel, but rather the sampling matrix, which determines where we have measurements and where we do not. We can thus write that $\mathbf{S}_b\, \text{Cov}(\mathbf{r})\, \mathbf{S}_{b'}^T = c_{bb'}\, \mathbf{S}_b\, \mathbf{1}\, \mathbf{1}^T\, \mathbf{S}_{b'}^T$, where $c_{bb'} = \left[ \text{Cov}(\mathbf{r}) \right]_{bb'}$ is the value from the appropriate cell and $\mathbf{1}$ is the vector-of-ones of appropriate length (here, $N_b$). This allows us to write:

$\text{Cov}(\mathbf{y}) = \frac{1}{\left( \sum_b w_b \right)^2} \sum_{b,b'} w_b\, w_{b'}\, c_{bb'}\, \mathbf{F}\, \mathbf{S}_b\, \mathbf{1}\, \mathbf{1}^T\, \mathbf{S}_{b'}^T\, \mathbf{F}^H \qquad (17)$
$= \frac{1}{\left( \sum_b w_b \right)^2} \sum_{b,b'} w_b\, w_{b'}\, c_{bb'}\, \mathbf{C}_{bb'}, \quad \mathbf{C}_{bb'} = \mathbf{F}\, \mathbf{S}_b\, \mathbf{1}\, \mathbf{1}^T\, \mathbf{S}_{b'}^T\, \mathbf{F}^H \qquad (18)$

Here, $\mathbf{C}_{bb}$ is a Toeplitz matrix, i.e. a convolution matrix, associated with baseline $b$. The set of all $\mathbf{C}_{bb}$ defines the convolution kernel which characterises the Point-Spread Function (henceforth PSF) associated with a given uv-coverage, of size $N_{pix} \times N_{pix}$. $\mathbf{C}_{bb'}$, meanwhile, is not generally Toeplitz. Its cells can be written as:

$\left( \mathbf{C}_{bb'} \right)_{(l,m),(l',m')} = e^{2i\pi \left( u_b l + v_b m - u_{b'} l' - v_{b'} m' \right)} \qquad (19)$

Let us investigate how the sky brightness distribution (i.e. the $s$-dependence) affects the noise-map. We can write the sum over sources inside $c_{bb'}$ as two sums: one over $s$ and one over $s'$. Thus:

$\text{Cov}(\mathbf{y}) = \frac{1}{\left( \sum_b w_b \right)^2} \sum_{b,b'} w_b\, w_{b'} \left[ \text{Cov}\!\left( \tilde g_b, \tilde g_{b'} \right) \sum_s \sum_{s'} s_s\, s_{s'}\, k^s_b\, \bar k^{s'}_{b'} + \sigma^2 \delta_{bb'} \right] \mathbf{C}_{bb'} \qquad (20)$

Note that the only direction-dependent terms in the above are $k^s_b$ and $\bar k^{s'}_{b'}$, which are both inside $c_{bb'}$ (for both $s$ and $s'$). By making the approximation that the Fourier kernels of different sources are orthogonal (i.e. that $\sum_b k^s_b\, \bar k^{s'}_b \approx 0$ for $s \neq s'$; this hypothesis is equivalent to assuming that the sky is dominated by distant point sources, where “distant” means that the sources are multiple PSF Full-Width Half-Maxima apart), we can write:

$\sum_{s,s'} s_s\, s_{s'}\, k^s_b\, \bar k^{s'}_{b'} \approx \sum_s s_s^2\, k^s_b\, \bar k^s_{b'} \qquad (21)$
$k^s_b\, \bar k^s_{b'} = e^{-2i\pi \left( \delta u_{bb'}\, l_s + \delta v_{bb'}\, m_s \right)} \qquad (22)$
$\sum_{s \neq s'} s_s\, s_{s'}\, k^s_b\, \bar k^{s'}_{b'} \approx 0 \qquad (23)$
$\delta u_{bb'} = u_b - u_{b'}, \quad \delta v_{bb'} = v_b - v_{b'} \qquad (24)$

Note that $\delta u_{bb} = \delta v_{bb} = 0$, since those are the coordinates along the diagonal: for these values, the matrix-of-ones at the centre of $\mathbf{C}_{bb}$ becomes the identity matrix. Note also that $k^s_b\, \bar k^s_b = 1$, since $e^0 = 1$. We can then write Eq. 20 as:

$\text{Cov}(\mathbf{y}) = \frac{1}{\left( \sum_b w_b \right)^2} \sum_{b,b'} w_b\, w_{b'} \left[ \text{Cov}\!\left( \tilde g_b, \tilde g_{b'} \right) \sum_s s_s^2\, e^{-2i\pi \left( \delta u_{bb'}\, l_s + \delta v_{bb'}\, m_s \right)} + \sigma^2 \delta_{bb'} \right] \mathbf{C}_{bb'} \qquad (25)$

where we have now limited our formalism to the case where the sky is dominated by distant point-like sources. This is our fundamental result: assuming unpolarised emission coming from distant point sources and normally-distributed thermal noise, it gives a direct relationship between the covariance of the residual visibilities and the covariance of the residual image-pixel values. We thus call it the Cov-Cov relationship. It describes the statistical properties of the image-plane as the result of a convolution process modulating an average noise level at different points in the image-plane, allowing us to describe the behaviour of both variance and covariance in the image. Let us focus on the former.

By applying the $\text{diag}$ operator (which returns the diagonal of an input matrix as a vector) to both sides of Eq. 25, we can find an expression for the variance map in the image-plane:

$\text{var}(\mathbf{y}) = \text{diag}\!\left( \text{Cov}(\mathbf{y}) \right) \qquad (26)$
$= \frac{1}{\left( \sum_b w_b \right)^2} \sum_{b,b'} w_b\, w_{b'}\, c_{bb'}\, \mathbf{f}_{bb'} \qquad (27)$
$c_{bb'} = \text{Cov}\!\left( \tilde g_b, \tilde g_{b'} \right) \sum_s s_s^2\, e^{-2i\pi \left( \delta u_{bb'}\, l_s + \delta v_{bb'}\, m_s \right)} + \sigma^2 \delta_{bb'} \qquad (28)$
$\text{var}(\mathbf{y}) = \frac{1}{\left( \sum_b w_b \right)^2} \left[ \sum_b w_b^2\, c_{bb}\, \mathbf{1} + \sum_{b \neq b'} w_b\, w_{b'}\, c_{bb'}\, \mathbf{f}_{bb'} \right] \qquad (29)$
$c_{bb} = \text{Var}\!\left( \tilde g_b \right) \sum_s s_s^2 + \sigma^2 \qquad (30)$

In Eq. 27, we have:

$\left( \mathbf{f}_{bb'} \right)_{lm} = \text{diag}\!\left( \mathbf{C}_{bb'} \right)_{lm} = e^{2i\pi \left( \delta u_{bb'}\, l + \delta v_{bb'}\, m \right)} \qquad (31)$

For $b \neq b'$, the diagonals of $\mathbf{C}_{bb'}$ are the Fourier kernels mapping $(\delta u, \delta v)$ space to $(l, m)$ space. $\mathbf{f}_{bb'} = \text{diag}(\mathbf{C}_{bb'})$ can then be thought of as a Fourier transform. $\mathbf{C}_{bb'}$ is not a diagonal matrix. It behaves as a covariance fringe, allowing us to extend standard interferometric ideas to covariance space: each fringe can be thought of as a single “spatial filter” applied to the pixel covariance matrix. Just as a given baseline has coordinates in uv-space, a given correlation between baseline residual errors has coordinates in correlation space, which we will henceforth refer to as $(\delta u, \delta v)$-space.

This space warrants further discussion: Fig. 1 shows, for a given uv-track (Fig. 1b), both the corresponding $(\delta u, \delta v)$ domain (Fig. 1c) and point-spread function (Fig. 1a). The symmetric, negative uv-track is treated as a separate track, and thus ignored. This means that we do not fully constrain the noise-PSF (since the covariance matrix of the symmetric track is simply the Hermitian conjugate of the first), but we do not seek to constrain it in this section, merely to show that our results hold. We can see that the $(\delta u, \delta v)$-tracks are symmetrical about the origin. The space corresponding to a given uv-track can thus be most concisely described as a “filled uv-track”, with its boundaries defined by the ends of the uv-track. The set of $\mathbf{f}_{bb'}$, each of which maps one value of the covariance matrix to a fringe in the image-plane, then characterises a PSF equivalent for the noise distribution, which we refer to as the noise-PSF. In our formalism, the only sources of statistical effects in the field are calibration errors and thermal noise. The average variance in all pixels will be given by the diagonal of the covariance matrix and the thermal noise, provided that the latter truly follows a normal distribution. The only effects which will cause the variance in the image-plane to vary from one pixel to the next are those mapped onto the covariance fringes, i.e. such position-dependent variance fluctuations will be caused by correlated gain errors, which are spurious signal introduced by erroneous gain estimates. Assuming all sources in the field are point-like and distant, these variance fluctuations will follow a specific distribution, convolved with every source in the field. Since the variance fluctuations act as tracers for calibration artefacts, artefacts in the image can be understood as one realisation of the variance map, which is characterised by an average level determined by the variance in the gains and thermal noise, and a noise-PSF convolved with the sky brightness distribution. The actual artefacts in the image will still be noisy, as a realisation of the true variance map. For the same reason, in the absence of correlated gain errors, the $c_{bb'}$ ($b \neq b'$) are all zero and $\text{Cov}(\mathbf{r})$ is a diagonal matrix. We then recover a “flat” noise-map: the variance will be the same in all pixels, as the noise-PSF is absent. In the ideal case, were we to recover the true value of the gains for all times and frequencies, this becomes pure thermal noise.
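The geometry of this space is easy to compute: the $(\delta u, \delta v)$ points of a track are all pairwise differences of its $(u, v)$ samples. A small sketch with toy coordinates (ours, purely illustrative):

```python
import numpy as np

def delta_uv(u, v):
    """All pairwise differences of a track's (u, v) samples: the coordinates
    at which the cells of Cov(r) live in the (delta-u, delta-v) plane."""
    du = (u[:, None] - u[None, :]).ravel()
    dv = (v[:, None] - v[None, :]).ravel()
    return du, dv

# Toy example: a short arc standing in for an 8-hour track.
t = np.linspace(0.0, np.pi / 4.0, 200)
u, v = 4.0e4 * np.cos(t), 4.0e4 * np.sin(t)   # hypothetical coordinates
du, dv = delta_uv(u, v)   # symmetric about the origin and densest near it,
                          # bounded by the ends of the uv-track (cf. Fig. 1c)
```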

2.3 Noise Map Simulations

Figure 2: Example of a non-stationary covariance matrix, which can be used to simulate correlated residuals. The colourbar units are Jy. The correlation scale is 40 cells, and the variability period is 500 cells. The matrix is made positive semi-definite (and therefore a covariance matrix) through SVD decomposition. The maximum size of the “bubbles” is determined by the correlation scale.

We have shown in Eq. 25 that there exists an analytical relationship between residual visibility statistics and image-plane residual statistics. This section gives details of simulations we have performed to support our claims on this “Cov-Cov” relationship. Specifically, we simulate residual visibilities for a single baseline by generating a set of correlated random numbers with zero mean and a distribution following a specified covariance matrix $\mathbf{C}_{sim}$. It contains a periodic function of period $T$ along the diagonal, which is then convolved with a Gaussian of width $t_{corr}$ corresponding to the characteristic scale of correlation. The values of these parameters are chosen arbitrarily. A small constant term is added on the diagonal, the net value of which is strictly positive. This simulates a low thermal noise. Finally, singular value decomposition is used to ensure that this matrix is Hermitian positive semi-definite. The net effect is a non-stationary correlation: some residuals are correlated with their neighbours, and uncorrelated with others. An example of this covariance matrix for arbitrary parameter values is shown in Fig. 2. We see that, for any given point, correlation is stronger with some neighbours than others (as determined by $t_{corr}$). Samples are drawn as follows: $\mathbf{C}_{sim}$ is built following user specifications as described above. We then find its matrix square root so as to apply it to random numbers generated from a normal distribution. We generate 2000 realisations of our random variables $\mathbf{x}_i$:

$\mathbf{A} = \mathbf{C}_{sim}^{1/2} \qquad (32)$
$\mathbf{x}_i = \mathbf{A}\, \mathbf{z}_i, \quad \mathbf{z}_i \sim \mathcal{N}\!\left( \mathbf{0}, \mathbf{I} \right) \qquad (33)$
$\text{Cov}\!\left( \mathbf{x}_i \right) = \mathbf{A} \left\langle \mathbf{z}_i\, \mathbf{z}_i^H \right\rangle \mathbf{A}^H = \mathbf{C}_{sim} \qquad (34)$

where $\mathbf{A}$ is a matrix of dimensions $n \times n$, with $n$ the number of samples along the track. Since $\mathbf{z}_i$ follows a normal distribution, so does $\mathbf{x}_i$, and the covariance matrix of each $\mathbf{x}_i$ is, by construction, $\mathbf{C}_{sim}$. The covariance matrix of the full set of realisations is thus also $\mathbf{C}_{sim}$.
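A minimal sketch of this sampling procedure in numpy (toy parameter values; the exact covariance recipe used for Fig. 2 may differ in detail):

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(0)
n = 512                         # number of samples along the track (toy value)
x = np.arange(n)

# Periodic variability along the diagonal, tapered by a Gaussian of the
# chosen correlation scale (toy values: period 500 cells, scale 40 cells).
diag_power = 1.0 + 0.5 * np.sin(2 * np.pi * x / 500.0)
cov = np.sqrt(np.outer(diag_power, diag_power)) \
    * np.exp(-0.5 * ((x[:, None] - x[None, :]) / 40.0) ** 2)
cov += 0.01 * np.eye(n)         # small strictly-positive thermal-noise floor

# Project onto the positive semi-definite cone (the text uses an SVD; for a
# Hermitian matrix an eigendecomposition achieves the same thing).
eigval, eigvec = np.linalg.eigh(cov)
cov = (eigvec * np.clip(eigval, 0.0, None)) @ eigvec.T

A = np.real(sqrtm(cov))                           # matrix square root, Eq. 32
samples = rng.standard_normal((2000, n)) @ A.T    # 2000 realisations, Eq. 33
```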

As for the uv-track, our simulations read a single one from a specified dataset. In this case, we read an 8-hour uv-track for a baseline between two arbitrary LOFAR stations (specifically, CS001HBA0 and RS310HBA) in an observation of the Boötes extragalactic field. The effective baseline length varies between 37.9 km and 51.8 km. The dataset included 20 channels, each with a spectral width of 97.7 kHz; the central observing frequency is 139 MHz. The temporal resolution is 1 measurement per second.

We compare the measured variance map $\text{var}_{meas}(\mathbf{y})$, built by measuring the variance across realisations at each pixel in the image-plane, with the predicted variance map $\text{var}_{pred}(\mathbf{y})$, built using the Cov-Cov relationship (Eq. 25). Since we are only interested in the variance map, rather than the covariance between pixels, we compute only the diagonal terms.

$\text{var}_{meas}(\mathbf{y})_{lm} = \frac{1}{N_r} \sum_{i=1}^{N_r} \left| y_{lm,i} - \left\langle y_{lm} \right\rangle \right|^2, \quad N_r = 2000 \qquad (35)$
$\text{var}_{pred}(\mathbf{y}) = \frac{1}{\left( \sum_b w_b \right)^2} \sum_{b,b'} w_b\, w_{b'} \left( \mathbf{C}_{sim} \right)_{bb'}\, \mathbf{f}_{bb'} \qquad (36)$

where the thermal noise is already incorporated into the diagonal of $\mathbf{C}_{sim}$ and Eq. 36 is merely the diagonal operator applied to the Cov-Cov relationship.
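The comparison itself then takes only a few lines. The sketch below is our own illustration of Eqs. 35-36, not the paper's code; it reuses dirty_image() from the sketch in Sect. 2.1 and cov, samples, n from the sketch above, and assumes u, v hold the n track coordinates (in wavelengths) and l, m the pixel grids:

```python
# Measured map (Eq. 35): pixel-wise variance across the 2000 realisations.
w = np.ones(n)
imgs = np.stack([dirty_image(u, v, s, w, l, m) for s in samples])
var_meas = imgs.var(axis=0)     # for complex images this is E|y - <y>|^2

# Predicted map (Eq. 36): map each cell of cov onto its (delta-u, delta-v)
# point and Fourier-transform to the image plane. O(n^2 * N_pix), i.e. slow,
# but it is the Cov-Cov relationship applied verbatim.
L, M = np.meshgrid(l, m, indexing="ij")
var_pred = np.zeros(L.shape)
for i in range(n):
    for j in range(n):
        du, dv = u[i] - u[j], v[i] - v[j]
        # cov is real and symmetric here, so the imaginary parts cancel.
        var_pred += cov[i, j] * np.cos(2 * np.pi * (du * L + dv * M))
var_pred /= n ** 2
```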

Figure 3: The three lines in each figure correspond to three horizontal cross-sections from images in Fig. 4. The units on the y-axis are dimensionless. The three cases correspond, in decreasing intensity, to decreasing values of the maximum characteristic error correlation length $t_{corr}$. The dashed lines correspond to the variance measured with 2000 realisations for each pixel, while the solid line corresponds to the predicted value at that pixel. There are 31 pixels. We do not show cross-sections for negative $l$ due to image symmetry about the origin.
Figure 4: Simulated noise-maps, compared with theoretical prediction. The pixel values are normalised by the average value of the covariance matrix: the units of the colourbar are thus dimensionless. These are on the same angular scale as the source shown in Fig. 1.

2.3.1 Simulation with a single point source

We model our sky as containing a single 1 Jy point source at phase centre: we thus have $s_I = 1$. The source as seen through the set of uv-tracks used in our simulation, along with their corresponding $(\delta u, \delta v)$ space, are shown in Fig. 1. The simulated noise-map is calculated by drawing a large sample ($N_r = 2000$) of random numbers from the correlated distribution, thereby creating 2000 sets of residual visibilities. By Fourier-transforming the visibilities to the image-plane and taking the variance of the values for each image pixel (i.e. each $(l, m)$ pair) as per Eq. 35, we can estimate $\text{var}_{meas}(\mathbf{y})$. The predicted noise-map, meanwhile, was found by assigning each cell of $\mathbf{C}_{sim}$ to the appropriate point in the $(\delta u, \delta v)$ plane and Fourier transforming from this plane into the image-plane, as per Eq. 36. We compare the outcome of simulating a large number of realisations and taking the variance across these realisations for each pixel with mapping the covariance matrix onto the $(\delta u, \delta v)$-plane and using the Cov-Cov relationship. The results of our simulations are shown side-by-side in Fig. 4: the predicted and simulated noise-PSFs match. The peak-normalised predicted noise-PSF is less noisy, as shown in Fig. 3 for different correlation scales. This is expected, since it is calculated directly from the underlying distribution, rather than an estimate thereof. As $N_r \rightarrow \infty$, we expect the two methods to fully converge. As the maximum characteristic correlation length increases, the variance becomes ever more sharply peaked.

Since our simulated sky consists of a 1Jy source at phase centre, there is only one noise-PSF to modulate the average noise-map, and it lies at phase centre. Let us test our formalism further by considering a model with multiple point sources.

2.3.2 Simulation with 3 point sources

We wish to test our prediction that the noise-map can be described as a convolutional process modulating an average noise level. We thus perform another simulation, this time with three 1Jy point sources. The associated dirty image is shown in Fig. 5d.

Figure 5: Noise-map of sky with correlated gain errors and three point sources. The colourbars of (a), (b) and (c) have dimensionless units, while that of (d) is in Jansky. Note the presence of structure in the residuals (c): these show the limits of our hypothesis that sources are spatially incoherent.

This dirty image simply consists of performing a direct Fourier transform (i.e. without using a Fast Fourier Transform algorithm) on simulated visibilities corresponding to these three point sources. We now perform a similar test as above on this field. Firstly, we “apply” gain errors to these visibilities by multiplying our model with our residual gain errors. This allows us to find 2000 realisations of residual visibilities, and find the variance for each pixel across these realisations. This gives us the simulated noise-map, shown in Fig. 5a. Secondly, we perform a DFT from the differential Fourier plane to the image plane as before, assigning one cell of the covariance matrix to each point of the differential Fourier plane. This time, however, the source term is not simply unity for all points in the differential Fourier plane. Instead, it is calculated for the three-point-source model, and applied for each point. This gives us the predicted noise-map in Fig. 5b. Finally, Fig. 5c shows the absolute value of the difference between the two images. We see that there is some structure present in these residuals: this is expected, as the PSFs of the sources in the dirty image clearly overlap. We are thus not quite in the regime where emission is fully spatially incoherent. Nevertheless, our predictions hold to good accuracy.

It bears repeating that, for correlated noise, this map can be understood as a distribution map for calibration artefacts: the amount of spurious correlated emission seen by each baseline will determine the noise-map, and the true image-plane artefacts will then be one set of realisations of this underlying distribution.

3 Adaptive Quality-based Weighting Schemes

As discussed in Section 1, some intervals of an observation will have lower gain variability. These will show up in the gain covariance matrix as intervals with lower variance. Similarly, those with larger intrinsic gain variability will have greater error in their gain estimate. By giving greater weights to the former, and lower weights to the latter, we expect to be able to improve image reconstruction. We thus talk of adaptive quality-based weighting, as the weights will adapt based on the calibration quality.

The pixel variance is determined by the visibility covariance matrix, as shown in Eq. 27. The diagonal of the visibility covariance matrix will add a flat noise to all pixels, while its wings will determine the calibration artefact distribution, which will be convolved with the sky brightness distribution. We thus have two sources of variance in the image-plane. Minimising the far-field noise (i.e. the variance far from sources) in an image would involve down-weighting noisier calibration intervals while up-weighting the more quiescent ones, without taking noise-correlation between visibilities into account. This is because the far-field noise will be dominated by the diagonal component of the covariance matrix (cf. Eq. 27). By the same token, minimising calibration artefacts would involve down-weighting measurements with strongly-correlated noise, and up-weighting the less-correlated ones. This would not, however, minimise the diagonal component: in fact, it will likely exaggerate its up-weighting and down-weighting. As such, it will increase the constant level of the noise-map, but flatten the noise-PSF’s contribution. There are thus two competing types of noise that we seek to minimise: uncorrelated noise, which corresponds to the $c_{bb}$ (i.e. the diagonal components of the residual covariance matrix), and correlated noise, which corresponds to the $c_{bb'}$, $b \neq b'$ (i.e. its wings). Minimising the former will minimise far-field noise without optimally reducing artefacts, while minimising the latter will minimise noise near sources at a cost to far-field noise. In the following sub-sections, we will discuss weighting schemes used to accomplish this.

3.1 Optimising sensitivity

The Cov-Cov relationship (Eq. 25) tells us that, far from any sources, the variance map (Eq. 27) is dominated by a constant term: the contribution from thermal noise and the diagonal of the residual visibility covariance matrix. Maximising sensitivity far from sources therefore implies minimising $\sum_b w_b^2\, c_{bb} / \left( \sum_b w_b \right)^2$. This is equivalent to assigning visibilities weights inversely proportional to their variance:

$w_b = \frac{1}{c_{bb}} = \frac{1}{\text{Var}\!\left( r_b \right)} \qquad (37)$

For each baseline, those times with larger variance in the residuals will be down-weighted, and those with smaller variance will be up-weighted; this scheme does not require information about the underlying gains, only the error on our solutions. Since we are treating the thermal noise variance $\sigma^2$ as a constant for all antennas and all times, those times where our gain estimates are closer to the true gains will be up-weighted, and those moments where they are farther from the actual gains will be down-weighted: hence the term “adaptive quality-based weighting”. Note that the diagonal of the weighted residuals’ covariance matrix should therefore become constant: this weighting scheme explicitly brings the residuals closer to what is expected in the case of perfect calibration, assuming uncorrelated noise. For the remainder of this paper, we will refer to these weights as sensitivity-optimal weighting.
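In practice, Eq. 37 reduces to an inverse-variance weight per solution interval. A minimal sketch for a single baseline (ours; resid is assumed to be the model-normalised residual time series and n_s the number of samples per calibration interval):

```python
import numpy as np

def sensitivity_optimal_weights(resid, n_s):
    """Eq. 37: weight each calibration interval by the inverse of its
    estimated residual variance, then broadcast back to one weight per
    visibility."""
    n_int = len(resid) // n_s
    r = resid[:n_int * n_s].reshape(n_int, n_s)
    var = np.mean(np.abs(r - r.mean(axis=1, keepdims=True)) ** 2, axis=1)
    return np.repeat(1.0 / var, n_s)
```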

3.2 Minimising Calibration Artefacts

Minimising calibration artefacts, i.e. optimising the sensitivity near bright sources, means flattening the noise-map. Since the noise-map can be understood as a noise-PSF convolved with all the modeled sources in the sky, modulating the background variance level, it will be flattest when its peak is minimised. From the Cov-Cov relationship (Eq. 25), we can see that, at the peak of the noise-PSF (which would be the variance at the pixel where $l = m = 0$), the Fourier kernel is unity: the variance for that pixel is thus the sum of all the cells in the covariance matrix. By accounting for normalisation, we can write the variance at the centre of the noise-PSF as:

$\text{var}(\mathbf{y})_{l=m=0} = \frac{\sum_{b,b'} w_b\, w_{b'}\, c_{bb'}}{\left( \sum_b w_b \right)^2} = \frac{\mathbf{w}^T\, \mathbf{C}\, \mathbf{w}}{\left( \mathbf{w}^T\, \mathbf{1} \right)^2}, \quad \mathbf{C} = \text{Cov}(\mathbf{r}) \qquad (38)$

Our optimality condition is then, after some algebra:

$\frac{\partial}{\partial \mathbf{w}} \left[ \frac{\mathbf{w}^T\, \mathbf{C}\, \mathbf{w}}{\left( \mathbf{w}^T\, \mathbf{1} \right)^2} \right] = \mathbf{0} \qquad (39)$
$\mathbf{C}\, \mathbf{w} \left( \mathbf{w}^T\, \mathbf{1} \right) = \left( \mathbf{w}^T\, \mathbf{C}\, \mathbf{w} \right) \mathbf{1} \qquad (40)$

We find that one $\mathbf{w}$ which satisfies the above is:

$\mathbf{w} = \frac{\mathbf{C}^{-1}\, \mathbf{1}}{\mathbf{1}^T\, \mathbf{C}^{-1}\, \mathbf{1}} \qquad (41)$

where $\mathbf{1}$ is a vector of ones. These weights depend only on calibration quality: badly-calibrated cells will include spurious time-correlated signal introduced by trying to fit the noise on visibilities. Down-weighting these cells helps suppress artefacts in the field, at the cost of far-field sensitivity. This weighting scheme is thus only a function of the relative quality of calibration at different times, boosting better-calibrated visibilities and suppressing poorly-calibrated visibilities. For the remainder of this paper, we will refer to these weights as artefact-optimal weighting.
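A sketch of how such weights could be computed from an estimated residual covariance matrix, under our reading of Eq. 41 as the minimiser of Eq. 38 (a pseudo-inverse is used to guard against the conditioning problems discussed in Sec. 4):

```python
import numpy as np

def artefact_optimal_weights(cov):
    """Eq. 41: w = C^-1 1 / (1^T C^-1 1), which minimises the noise-PSF
    peak of Eq. 38 for a fixed weight normalisation."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.pinv(cov) @ ones   # robust to (near-)singular estimates
    return w / (ones @ w)
```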

4 Estimating the Covariance Matrix

In our simulations, we have worked from a known covariance matrix and shown that our predictions for the residual image’s behaviour hold. With real data, however, we do not have access to this underlying covariance matrix. Since our weights are extracted from said matrix, estimating it as accurately as possible remains a challenge: this is in turn limited by the number of samples which can be used for each cell.

Each cell in the covariance matrix is built by averaging a number of measurements, or samples. The more samples are available, the better our estimate becomes: once we have more samples than degrees of freedom, we say that our estimation is well-conditioned. Otherwise, it is poorly-conditioned. In this section, we will discuss ways in which we can improve the conditioning of the covariance matrix estimation.

4.1 Baseline-based Estimation

One way to improve the conditioning of our covariance matrix estimation is to make the same hypothesis as the calibration algorithm: we can treat the underlying gains as constant within each calibration interval. Provided this interval is known, this allows us to find a single estimate for each interval block of the covariance matrix, turning an $N_b \times N_b$ matrix into a smaller $N_{int} \times N_{int}$ equivalent, where $N_{int}$ is the number of solution intervals used to find the gain solutions. We then improve our conditioning by a factor of $n_s$, which is the number of samples in a calibration interval. The estimate of the covariance matrix is built by applying the covariance operator:

$\widehat{\text{Cov}}(\mathbf{r}) = \left\langle \left( \mathbf{r} - \left\langle \mathbf{r} \right\rangle \right) \left( \mathbf{r} - \left\langle \mathbf{r} \right\rangle \right)^H \right\rangle \qquad (42)$

where the $\langle \cdot \rangle$ operator denotes taking the average over the full vector. If the calibration solver’s gain estimates are unbiased (i.e. $\langle \hat g_p \rangle = \langle g_p \rangle$) and the model of the sky is sufficiently complete, this mean residual $\langle \mathbf{r} \rangle$ should be zero. Having created $\widehat{\text{Cov}}(\mathbf{r})$, which will be of size $N_b \times N_b$, its cells can now be averaged over blocks of $n_s \times n_s$. This allows us to estimate the weights for each baseline and each time.

Mathematically, we retrace the steps of Section 2. In the absence of direction-dependent effects, we define the residual visibilities as before, and use them to define the normalised residual visibilities $\epsilon_b$:

$r_b = v_b - \hat g_{p_b}\, \bar{\hat g}_{q_b}\, s_I\, k_b \qquad (43)$
$\epsilon_b = \frac{r_b}{s_I\, k_b} \qquad (44)$

We then organise the residuals in cells:

$\mathbf{E} = \left[ \boldsymbol{\epsilon}_1, \ldots, \boldsymbol{\epsilon}_{N_{int}} \right] \qquad (45)$
$\left( \boldsymbol{\epsilon}_i \right)_j = \epsilon_b, \quad b \in \text{cell } i, \; j = 1 \ldots n_s \qquad (46)$

$\boldsymbol{\epsilon}_i$ corresponds to a vector containing all the normalised residual visibilities within one calibration cell $i$, i.e. the $\epsilon_b$ for $b \in i$. $\mathbf{E}$ is therefore of size $n_s \times N_{int}$, where $N_{int}$ is the number of calibration intervals in the observation. Normalising the residual visibilities by the model allows us to recover the underlying covariance matrix by multiplying the residual visibility matrix with its Hermitian conjugate:

$\widehat{\text{Cov}}(\boldsymbol{\epsilon}) = \frac{1}{n_s}\, \mathbf{E}^H\, \mathbf{E} \qquad (47)$

Note that we have divided the noise term by the flux model $s_I\, k_b$, which can be very small in some cells. As such, care must be taken not to cause the relative thermal noise contribution to explode: those cells where this would occur are dominated by thermal noise, and information on the covariance matrix cannot be recovered from them.

In this framework, we simply treat the index $i$ as containing all the times and frequencies, for individual baselines, corresponding to a single calibration interval. $\widehat{\text{Cov}}(\boldsymbol{\epsilon})$ is then an estimate of the residual visibility covariance matrix.
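The block-averaging described above can be sketched as follows (our own illustration, for the residuals of a single baseline):

```python
import numpy as np

def blocked_cov(resid, n_s):
    """Sec. 4.1 estimate: outer product of the mean-subtracted residuals
    (Eq. 42), then average each n_s x n_s block down to one cell, one per
    pair of solution intervals."""
    r = resid - resid.mean()
    n_int = len(r) // n_s
    r = r[:n_int * n_s]
    C = np.outer(r, r.conj())   # rank-1 sample covariance, size N_b x N_b
    return C.reshape(n_int, n_s, n_int, n_s).mean(axis=(1, 3))
```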

4.2 Antenna-based Estimation

In the subsection above, we assumed that finding one solution per interval would give us strong enough constraints to make the problem of estimating the covariance matrix well-conditioned: this may not be true in all cases. Conditioning may then need to be improved further: in this subsection, we show one way in which this can be done. There are others, e.g. using the rank of the matrix itself to find better-conditioned estimates of the covariance matrix at a lower resolution (i.e. a single estimate for a greater number of cells). They will not be presented in this paper, but are a possible avenue for future work on this topic.

In estimating the covariance matrix for each baseline and each calibration cell, we are severely limited by the small number of samples in each cell. One way to overcome this problem is to find estimates for the variance of antenna gains, and use these to return to the baseline variances. In this formalism, we extend the sample set to include all visibilities pointing at a single antenna at a given time. Let us begin by writing an expression for the gain vector, which contains the gains for all antennas and all calibration cells:

$\mathbf{g}_i = \left( g_{1,i}, \ldots, g_{N_a,i} \right)^T \qquad (48)$
$\hat{\mathbf{g}}_i = \left( \hat g_{1,i}, \ldots, \hat g_{N_a,i} \right)^T \qquad (49)$

and the variance on each antenna in each calibration cell is then:

$\text{Var}\!\left( \mathbf{g}_i \right) = \left\langle \left( \mathbf{g}_i - \hat{\mathbf{g}}_i \right) \odot \overline{\left( \mathbf{g}_i - \hat{\mathbf{g}}_i \right)} \right\rangle \qquad (50)$

As we can see, Eq. 50 is simply a vector form of Eq. 12. The residual gains of Eq. 12 can now be understood as random samples of the covariance between the gains for antennas $p$ and $q$ at a given time, assuming complete sky model subtraction. We can thus define the variance sample matrix as an estimate of the variance matrix:

$\hat{\boldsymbol{\Sigma}}_i = \left( \mathbf{g}_i - \hat{\mathbf{g}}_i \right) \left( \mathbf{g}_i - \hat{\mathbf{g}}_i \right)^H \qquad (51)$
$\left\langle \hat{\boldsymbol{\Sigma}}_i \right\rangle_{pq} = \text{Cov}\!\left( g_{p,i}, g_{q,i} \right) \qquad (52)$

We define the residual matrix as:

$\mathbf{R}_i = s_I\, \mathbf{K} \odot \left( \mathbf{g}_i\, \mathbf{g}_i^H - \hat{\mathbf{g}}_i\, \hat{\mathbf{g}}_i^H \right) + \mathbf{N} \qquad (53)$

where we explicitly place ourselves in the limits of our formalism, i.e. that we do not have direction-dependent gains. We now see that at the core of Eq. 53 lies $\mathbf{g}_i\, \mathbf{g}_i^H - \hat{\mathbf{g}}_i\, \hat{\mathbf{g}}_i^H$, where $\left( \mathbf{g}_i\, \mathbf{g}_i^H \right)_{pq} = g_{p,i}\, \bar g_{q,i}$. The K-matrix is defined as follows:

$K_{pq} = e^{-2i\pi \left( u_{pq}\, l + v_{pq}\, m + w_{pq} (n - 1) \right)} \qquad (54)$

Since the residual matrix depends on the gains, we define the residual visibility vectors as:

$\mathbf{r}_{p,i} = \mathbf{R}_i\, \mathbf{e}_p \qquad (55)$
$\left( \mathbf{e}_p \right)_q = \delta_{pq} \qquad (56)$

$\mathbf{R}_i$ corresponds to a matrix containing all the residual visibilities within one calibration cell $i$, i.e. the $r_{pq}$ for all antenna pairs $pq$ at the times and frequencies within cell $i$. Let us define $n_s$ as the number of elements in each calibration cell. The residual variance sample matrix can now be built by multiplying the residual visibility matrix with its Hermitian conjugate:

$\hat{\boldsymbol{\Sigma}}_{\mathbf{r},i} = \frac{1}{n_s}\, \mathbf{R}_i\, \mathbf{R}_i^H \qquad (57)$

Note that we do this because it allows us to turn a single noise realisation into a statistical quantity. We can relate $\hat{\boldsymbol{\Sigma}}_{\mathbf{r},i}$ to the variance of individual antenna gains:

$\left\langle \hat{\boldsymbol{\Sigma}}_{\mathbf{r},i} \right\rangle_{pq} = s_I^2 \sum_a \left\langle \tilde g_{pa}\, \bar{\tilde g}_{qa} \right\rangle + \sigma^2 \delta_{pq}, \quad \tilde g_{pq} = g_p\, \bar g_q - \hat g_p\, \bar{\hat g}_q \qquad (58)$

To reach this point, in Eq. 23, we made the hypothesis that the sky brightness distribution is dominated by spatially incoherent emission. Applying this hypothesis again here, we can make the approximation that the cross-terms in the sum over $a$ average to zero: $\left\langle \tilde g_{pa}\, \bar{\tilde g}_{qa} \right\rangle \approx 0$ for $p \neq q$. We then have:

$\left\langle \mathbf{R}_i \odot \bar{\mathbf{R}}_i \right\rangle_{pq} = s_I^2 \left\langle \left| \tilde g_{pq} \right|^2 \right\rangle + \sigma^2 \qquad (59)$
$\left\langle \left| \tilde g_{pq} \right|^2 \right\rangle \approx \sigma^2_{g,p} + \sigma^2_{g,q} \quad \left( \left| g \right| \approx 1 \right) \qquad (60)$
$\left\langle \mathbf{R}_i \odot \bar{\mathbf{R}}_i \right\rangle \approx s_I^2 \left( \boldsymbol{\sigma}^2_{\mathbf{g}}\, \mathbf{1}^T + \mathbf{1} \left( \boldsymbol{\sigma}^2_{\mathbf{g}} \right)^T \right) + \sigma^2\, \mathbf{1}\, \mathbf{1}^T \qquad (61)$

where $\odot$ denotes the Hadamard or entrywise product and $\boldsymbol{\sigma}^2_{\mathbf{g}} = \left( \sigma^2_{g,1}, \ldots, \sigma^2_{g,N_a} \right)^T$ is the vector of antenna gain variances, with $\sigma^2_{g,p} = \text{Var}(g_{p,i})$. Thus, Eq. 61 allows us to estimate the variance of each antenna and for each calibration cell by using all the visibilities pointing to that antenna within that calibration cell. With this information, we can then rebuild the baseline-dependent matrix, having improved our sampling by a factor of $N_a - 1$, the number of visibilities per antenna at each time.
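A sketch of this antenna-based estimate under the simplifying assumptions of Eqs. 59-61 (|g| close to 1 and negligible thermal noise; ant1 and ant2 are the antenna indices of each visibility, and all names are ours):

```python
import numpy as np

def antenna_based_variances(resid, ant1, ant2, n_ant, s_flux=1.0):
    """Estimate per-antenna gain variances from <|r_pq|^2> ~ s^2 (sig_p +
    sig_q) (Eqs. 59-61), solving the linear system in least squares, then
    rebuild the per-baseline variances from the antenna terms."""
    power = np.abs(resid) ** 2 / s_flux ** 2      # one sample per visibility
    A = np.zeros((len(resid), n_ant))
    A[np.arange(len(resid)), ant1] = 1.0          # each row flags the two
    A[np.arange(len(resid)), ant2] = 1.0          # antennas of the baseline
    sig_g, *_ = np.linalg.lstsq(A, power, rcond=None)
    return sig_g, sig_g[ant1] + sig_g[ant2]
```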

(a) Well-calibrated, unweighted restored image of the sky near the centre of the Extended Groth Strip. Used for comparison with the other images. Units of colourbar are Janskys. This image was made with data calibrated following best practice (solution intervals of 8 seconds, half the bandwidth). rms = 5.87 mJy/beam
(b) Poorly-calibrated, unweighted restored image of the sky near the centre of the Extended Groth Strip. Units of colourbar are Janskys. This image was made with the same data as for Fig. 6(a), but averaged in time and calibrated using larger gain solution intervals: 2 minutes and half the bandwidth. rms = 86.4 mJy/beam
(c) Image made using the same imaging parameters and corrected visibilities as Fig. 6(b), using sensitivity-optimal weighting. Units of colourbar are Janskys. rms = 9.69 mJy/beam
(d) Image made using the same imaging parameters and corrected visibilities as Fig. 6(b), using artefact-optimal weighting. Units of colourbar are Janskys. rms = 15.8 mJy/beam
Figure 6: Restored images of the centre of the Extended Groth Strip, as seen with an 8-hour observation using the full LOFAR array. Fig. 6(a) shows an image of the field made with good calibration intervals. Fig. 6(b) shows an image of the field made with poor calibration intervals. Fig. 6(c) shows an image made with the same visibilities and imaging parameters, but with the application of the sensitivity-optimal weighting scheme. Fig. 6(d), similarly, differs from Fig. 6(c) only in that artefact-optimal weights, rather than sensitivity-optimal weights, were used. The histograms of pixel values in each image have 1000 flux bins ranging from -0.16 Jy to 0.16 Jy. Their ordinates are in log scale. Pixel size is 1.5” in all images.

5 Applying the Correction to Simulated Data

Figure 7: The sensitivity-optimal (green) and artefact-optimal (red) weights both give improvements over the unweighted noise-map (blue).

In this section, we show the impact of our weighting schemes on a noise-map made from arbitrarily strongly-correlated residuals. Here, we assume that our sky contains only a single point source at phase centre: there is thus only a single instance of the noise-PSF, placed at phase centre, to modulate the average variance level. We sample this instance by taking a cross-section from $l = 0$ to a large $l$, keeping $m$ constant. The only difference between these cross-sections is the weighting scheme applied: unit weights for all visibilities (“Uncorrected”, blue), sensitivity-optimal weights (green), and artefact-optimal weights (red). We plot both the result predicted by the Cov-Cov relationship (solid line) and the variance estimated across 2000 realisations (dashed line): the result is shown in Fig. 7. The two remain in such agreement throughout the cross-section as to be nearly indistinguishable.

There are a few significant points to note on this figure. Firstly, most of the power in the covariance matrix lies along the diagonal: both weighting schemes thus give good improvements in variance across the map. The artefact-optimal weights, while decreasing the peak further, as expected, also increase the noise far from sources: this is because the artefact-optimal weights are in a sense more “selective” than the sensitivity-optimal weights: they up-weight and down-weight more severely, and will only result in a constant covariance matrix if this matrix is zero everywhere outside of the diagonal. In effect, the noise-map becomes flatter, but much broader.

6 Applying the Correction to Real Data

(a) Well-calibrated, unweighted restored image of the sky near the centre of the Extended Groth Strip. Used for comparison with the other images. Calibration solution intervals used were 8 seconds, half the bandwidth. rms = 5.87 mJy/beam
(b) Image made using the same imaging parameters and corrected visibilities as Fig. 6(b), with the application of antenna-based, sensitivity-optimal weighting. Solution interval of 2 minutes, half the bandwidth. rms = 6.69 mJy/beam
Figure 8: Comparison between the well-calibrated image (i.e. the same image as Fig. 6(a)) and antenna-based sensitivity-optimal weights. Units of both colourbars are Jansky. We see that we recover a very similar image, despite the fact that the data used for the weighted image are averaged by a factor of 8 compared to those used for the unweighted image.

In this section, we show the effect of adaptive quality-based weighting on real data. The dataset used in this section is a single sub-band from an 8-hour LOFAR observation centred on the Extended Groth Strip (α = 14:19:17.84, δ = +52:49:26.49). The observation was performed on August 28th, 2014. The subband includes 8 channels of width 24.414 kHz each, for a total bandwidth ranging from 150.2 to 150.5 MHz. The data have been averaged in time to 1 data point per second. The data was calibrated using Wirtinger calibration (see Tasse 2014; Smirnov & Tasse 2015, and references therein) and a sky model consisting only of a nearby calibrator source, 3C295. A reference image (a cutout of which is shown in Fig. 6(a)) was made by calibrating the data according to best practice for LOFAR survey data: 1 calibration solution per 8 seconds and per 4 channels. The residual data was then corrected by the gain solutions and imaged using Briggs weighting (robust=0) and a pixel size of 1.5”, and deconvolved using the default deconvolution algorithm in DDFacet (Tasse et al., in press).

The data was then time-averaged to create a new, 2.4 GB dataset with 1 data point per 8 seconds. Deliberately poor calibration was then performed on this dataset, solving for 1 calibration solution per 2 minutes (caeteris paribus). The resulting corrected residual data was imaged using the same imaging parameters as the reference image, and a cutout of the result is shown in Fig. 6(b). As expected, the very long calibration intervals introduce calibration artefacts in the image. The brightest sources are still visible, but much of the fainter emission is buried under these artefacts. We are then in a case where our residual visibilities are dominated by calibration error rather than sky model incompleteness.

Weights were then calculated based on the badly-calibrated residual visibilities. Fig. 6(c) was made using the same visibilities as Fig. 6(b), applying baseline-based, sensitivity-optimal weights. Similarly, Fig. 6(d) used the poorly-calibrated residual visibilities with the application of baseline-based, artefact-optimal weighting. These weights are likely to be poorly-conditioned. In both cases, all other parameters were conserved.

Note that applying antenna-based sensitivity-optimal weighting to the badly-calibrated data (Fig. 8(b)) allows us to recover the reference image with only a very small increase in rms (a factor of 1.14). Further testing on complex field simulations will be required to ascertain the usefulness of artefact-optimal weighting: it is likely that it fails to correct the image fully due to the poor conditioning of the covariance matrix used here.

The pixel histograms show us that the weights do not completely mitigate the poor choice of calibration interval, but certainly give a dramatic improvement over the unweighted, poorly-calibrated residuals. This is compatible with our statement that the weights give similar residuals in the image, with a dramatic improvement in calibration time at some cost in sensitivity. It is interesting to note that while Fig. 6(d) looks noisier than Fig. 6(c), its residual flux histogram is actually closer to that of Fig. 6(a).

As for performance, the weights used for Fig. 6(d) took 8 hours of computing time on a single core (core type: Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz), working on a 29 GB dataset, which is not particularly large for LOFAR data. Since the problem is massively parallel, this cost can be alleviated. The main bottleneck is likely due to very poor code optimization. As for the weights used for Fig. 8(b), they were computed in 1 min 6 s on the same single core.

7 Discussion

This paper began by investigating the use of an algorithm inspired by “lucky imaging” to improve images made using radio interferometric data. By investigating the statistics of residual visibilities, we have made the following findings:

  • A relationship between the statistics of residual visibilities and residual pixel values (the “Cov-Cov relationship”).

  • A description of the noise-map in the image plane as a constant variance level modulated by a noise-PSF convolved with the sources in the field. This gives the variance in the flux of the image as a function of distance from the sources in the sky for a given calibration.

  • An Adaptive Quality-Based weighting scheme, which reduces the noise in the image (and the presence of calibration artefacts) by minimising either the constant noise term or the noise-PSF.

While our results are not a panacea for poor calibration, they show that we can not only improve images made with well-calibrated data, but also mitigate the worst effects of poorly-calibrated visibilities in otherwise well-calibrated datasets. Provided that the gain variability timescale is long enough at certain points of the observation, we can effectively get images of similar quality using both the “standard” best-practice calibration interval for LOFAR survey data (calibration solution interval of 8 seconds) and a significantly larger solution interval of 2 minutes (frequency interval unchanged). Of course, if no such stable interval exists, there will be no good intervals to up-weight, and we will be left only with equally-poor data chunks. This means that, in the right conditions, net pipeline time can be shortened by a factor of nearly three, at a slight cost in sensitivity. This speed-up will be greater than what could be achieved with existing comparable methods such as “clipping”.

We emphasize that the adaptive quality-based weighting schemes work because the noise-map describes the underlying noise distribution, of which calibration artefacts are one single realisation. To fully characterise the artefacts, the correlation between different pixels (i.e. the off-diagonal elements of $\text{Cov}(\mathbf{y})$) must be computed; this has not been done in this paper. Nevertheless, lesser constraints on the spatial distribution of artefacts can be found using only the diagonal elements of $\text{Cov}(\mathbf{y})$. The weighting schemes merely seek to minimise this spatial distribution as much as possible: the end result is fewer artefacts, which can be distributed across a much larger area. This is the source of the dramatic improvement from Fig. 6(b) to Fig. 6(c). We have simply down-weighted those visibilities where spurious signal was introduced by the calibration solutions, and up-weighted those visibilities where such signal was lesser. Since this spurious signal is the source of calibration artefacts, down-weighting the associated visibilities reduces it dramatically. The poor improvement from Fig. 6(b) to Fig. 6(d) is likely due to limits in the conditioning of our estimation of the covariance matrix.

The work presented here can be improved upon, notably by improving the conditioning of our covariance matrix estimate: for real observations, it is impossible to have more than one realisation of each gain value for all antennas. By treating each visibility within a calibration interval as a realisation of the true distribution, we can better estimate the covariance matrix per baseline, and thus reach a better estimate of the variance in the image-plane. Of course, in practice, we can never access the true, underlying time-covariance matrix for each baseline. Significant hurdles remain:

  • The impact of sky model incompleteness (since calibration requires a sky model) is ignored in this paper; we start by assuming that we have a complete sky model. In practice, of course, acquiring a complete sky model is often a key science goal in and of itself. The impact of this hypothesis therefore ought to be investigated in future work.

  • The conditioning of our covariance matrix estimation remains a concern. By using an antenna-based approach, we can improve conditioning by a factor of $N_a - 1$, but this is only one approach among many. Further work is needed to investigate which method, if any, proves optimal.

Acknowledgements.
This work is based upon research supported by the South African Research Chairs Initiative of the Department of Science and Technology and National Research Foundation. The authors thank the LOFAR Surveys KSP Primary Investigators for allowing us to use their data for our tests and images in this paper. They would also like to thank M. Atemkeng and T. Grobler for fruitful discussions over the course of this work. They helped shape this work through their insightful comments on both style and substance while proofreading.

References

  • Briggs (1995) Briggs, D. S. 1995, in Bulletin of the American Astronomical Society, Vol. 27, American Astronomical Society Meeting Abstracts, 1444
  • Fried (1978) Fried, D. L. 1978, Journal of the Optical Society of America (1917-1983), 68, 1651
  • Grobler et al. (2014) Grobler, T. L., Nunhokee, C. D., Smirnov, O. M., van Zyl, A. J., & de Bruyn, A. G. 2014, MNRAS, 439, 4030
  • Hamaker et al. (1996) Hamaker, J. P., Bregman, J. D., & Sault, R. J. 1996, A&AS, 117, 137
  • Kazemi & Yatawatta (2013) Kazemi, S. & Yatawatta, S. 2013, MNRAS, 435, 597
  • Rau & Cornwell (2011) Rau, U. & Cornwell, T. J. 2011, A&A, 532, A71
  • Smirnov (2011) Smirnov, O. M. 2011, A&A, 527, A106
  • Smirnov & Tasse (2015) Smirnov, O. M. & Tasse, C. 2015, MNRAS, 449, 2668
  • Tasse (2014) Tasse, C. 2014, ArXiv e-prints
  • Yatawatta (2014) Yatawatta, S. 2014, MNRAS, 444, 790