Distributed image reconstruction for very large arrays in radio astronomy

by André Ferrari et al.

Current and future radio interferometric arrays such as LOFAR and SKA are characterized by a paradox. Their large number of receptors (up to millions) theoretically allows unprecedented imaging resolution. At the same time, the massive amount of samples makes the data transfer and computational loads (correlation and calibration) orders of magnitude too high for any currently existing image reconstruction algorithm to achieve, or even approach, the theoretical resolution. We investigate here decentralized and distributed image reconstruction strategies which select, transfer and process only a fraction of the total data. The loss in MSE incurred by the proposed approach is evaluated theoretically and numerically on simple test cases.





I Introduction

Since the commissioning of the first large radio interferometers in the 70s and 80s (such as the VLA in the USA and the WSRT in the Netherlands), radio astronomy at long wavelengths has grown dramatically, in particular through the development of ever more extended antenna arrays. In anticipation of the most sensitive radio telescope ever built, the SKA, which will be operational in the 2020s, several new-generation radio telescopes are being built or planned: LOFAR in the Netherlands, ASKAP and the Murchison Widefield Array in Australia, e-MERLIN in the UK, the e-EVN based in Europe, MeerKAT in South Africa, and the JVLA in the United States.

As an example, LOFAR consists of 48 groups of antennas (stations); approximately 35,000 elementary antennas are located in the Netherlands. The “superterp”, the heart of LOFAR, is a super-station: a cluster of six stations. Eight other stations, totalling approximately 13,000 antennas, are located in the surrounding countries. A new super-station in Nançay (France) is under consideration. Within each station, the antennas form a phased array which allows digital beamforming simultaneously in several directions and frequency bands. The beam-formed data from the stations are centralized at the University of Groningen in the Netherlands, where a supercomputer combines the beam data from all stations. The resulting data are then stored on a cluster at ASTRON, the Netherlands Institute for Radio Astronomy, where the images (and other deliverables) are reconstructed. By way of comparison, SKA will total 2.5 million antennas, with a square kilometer of collecting area distributed over an area 5,000 km in diameter.

Beyond the specific objectives that distinguish these new fully digital “software telescopes”, they are all characterized by great flexibility. Another common point is the amount of data which must be transferred to the central computer and processed: it amounts to 1 terabit/second for LOFAR and will be of the order of an exabyte/day for SKA (more than 100 times the global internet traffic). LOFAR uses an IBM Blue Gene/P for the data reduction and the computation of correlations. IBM and ASTRON will develop by 2024 a supercomputer to process and store 1 petabyte of data every day [2].

This correspondence investigates the possibility of distributing the image reconstruction over the super-stations. The main objective is to avoid centralizing the sampled electromagnetic fields acquired by all the stations, in order to reduce the data transfer and the exponential increase in the calibration and computational load.

Section II recalls the basics of radio astronomy by aperture synthesis and proposes a strategy where each super-station uses all its antennas and one reference signal from each of the other super-stations. The resulting loss of performance is evaluated on a simple model using the Cramér-Rao Lower Bound (CRLB). Section III shows that the image reconstruction problem can be written as a global variable consensus problem with regularization. Numerical simulations illustrate the performance of the proposed approach. A concluding section presents perspectives.

II Aperture synthesis for radio astronomy

II-A Standard aperture synthesis model

This section provides the basic equations of radio astronomy with a multiple-sensor array and describes a partial aperture synthesis strategy which aims to reduce data transfer, allowing a decentralized image reconstruction.

To simplify the notation, and without loss of generality, we do not make explicit the wavelength dependence or the Earth rotation, and we assume punctual antennas. The station coordinates (within each station, a beam is created from the phased array) are given in a plane perpendicular to the line of sight, and the map of interest (the “image” of a region of the sky) is a function of the angular coordinates on the sky. The fundamental equation of interferometry relates the Fourier transform of the map to the spatial coherency (visibility) of the incoming electromagnetic field. A measurement of the coherency is obtained by correlating the properly delayed signals acquired by a pair of stations, giving in the noiseless case a point of the visibility function at the spatial frequency set by the baseline between the two stations. See for example [9, 10] for a comprehensive description of radio astronomy and the related signal-processing tools.
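As an illustration, this measurement equation can be sketched numerically in 1D. The sky map, station positions and all variable names below are illustrative assumptions for the sketch, not taken from the paper.

```python
import numpy as np

# Minimal 1D sketch of the interferometric measurement equation: each
# station pair samples the Fourier transform of the sky map at the
# spatial frequency set by its baseline. All values are illustrative.

n = 256
sky = np.zeros(n)
sky[70] = 1.0                 # a point source
sky[140:160] = 0.3            # an extended source

# Station positions (arbitrary units): two compact "super-stations".
stations = np.array([0.0, 1.0, 2.0, 10.0, 11.0, 12.0])

# Spatial frequencies sampled by every station pair (the uv coverage).
baselines = np.array([q - p for i, p in enumerate(stations)
                      for q in stations[i + 1:]])

# Noiseless visibilities: Fourier transform of the map at those frequencies.
angles = np.linspace(-0.5, 0.5, n, endpoint=False)
visibilities = np.array([np.sum(sky * np.exp(-2j * np.pi * b * angles))
                         for b in baselines])
```

The short baselines within each group probe the extended emission, while the long baselines between the two groups carry the high-resolution information.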

Computation of a visibility obviously requires that the signals from the two stations be transferred to the same place. The stations are normally grouped in “super-stations” (e.g. the superterp for LOFAR), accounting for the low spatial frequencies (short baselines). Resolution is then increased by correlating signals between stations that can be located up to thousands of kilometers from each other.

II-B Reduced aperture synthesis model

In general, all possible correlations are computed in order to maximize the frequency coverage. This requires a centralized system architecture. The resulting visibility measurements are associated with the filtering of the visibility function by the Full Aperture (FA) spatial transfer function: a sum of weighted impulses at the sampled spatial frequencies, where each weight counts the number of beam pairs measuring the same spatial frequency. This transfer function splits into a term associated with “intra-super-station” low-frequency correlations and a term associated with high-frequency “inter-super-station” correlations. Note that, in order to simplify the derivations, a single remote station will also be called a super-station.

In order to reduce the amount of transferred data, we propose to investigate a solution which consists in:

  1. Exploiting all the low frequencies by computing locally all the correlations inside each super-station. In this case, the low-frequency term of the spatial transfer function is unchanged. Each super-station is associated with the set of indices of its beams.

  2. Recovering the high-frequency information by transferring only one single beam signal from each super-station to all the other remote super-stations. A reference beam is chosen in each super-station and transferred to the others; the resulting sampling pattern of the visibilities between two super-stations is shown in Fig. 1.

This solution is motivated by the fact that it correctly samples the high frequencies, at the cost of a reduced SNR, while reducing the number of transferred electromagnetic signals. Strategies other than transferring a single antenna signal are of course possible. A solution which aims to preserve the SNR is to replace the reference signal by an average of neighboring beams. It is important to emphasize that, unless the averaging process is taken into account in the measurement equation, the bias introduced by averaging nearby beams must be negligible. The resulting gain in SNR is denoted by a multiplicative factor in the sequel. Finally, the overall Reduced Aperture (RA) antenna spatial transfer function is the sum of the unchanged intra-super-station term and the reduced inter-super-station term.
This strategy reduces the transfer of beam data w.r.t. centralized processing as long as the number of stations inside each super-station is larger than the number of super-stations: for, say, K super-stations of M stations each, the centralized scheme requires K·M transfers whereas the reduced scheme requires K(K-1).
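A back-of-the-envelope check of this trade-off; the symbols K (number of super-stations) and M (stations per super-station) are introduced here only for the sketch:

```python
# Beam-signal transfer counts: centralized vs reduced aperture synthesis.
# K super-stations of M stations each (illustrative notation).

def transfers_centralized(K: int, M: int) -> int:
    # Every beam signal is sent to the central correlator.
    return K * M

def transfers_reduced(K: int, M: int) -> int:
    # Each super-station sends one reference beam to the K - 1 others.
    return K * (K - 1)

# The reduced scheme wins as soon as M exceeds K - 1.
K, M = 10, 10   # configuration used in the simulations of Sec. III-B
print(transfers_centralized(K, M), transfers_reduced(K, M))  # 100 90
```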

Fig. 1: Reduced Aperture (RA) synthesis using two super-stations.

II-C Analysis of performance on a simple model

In order to evaluate analytically the loss incurred by RA synthesis w.r.t. FA synthesis, we consider a simple one-dimensional case where the map is a shifted Gaussian shape parameterized by its flux, position and width. These three quantities are the unknown parameters. The visibilities are the Fourier transform of this map sampled at the measured spatial frequencies, corrupted by a measurement noise assumed independent, circular Gaussian. A coefficient takes into account the variance reduction that occurs when a visibility is estimated from several different baselines.
The elements of the Fisher information matrix are computed using the Slepian-Bangs formula [8, p. 293]. Note that the Fisher information is not a function of the source position.
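As a numerical companion to this computation, the sketch below evaluates CRLBs for a 1D Gaussian source observed through noisy visibilities, using explicit derivatives of the mean rather than the Slepian-Bangs formula. The parameter names (alpha, mu, w), the frequency grids and the noise level are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

# CRLBs for the flux (alpha), position (mu) and width (w) of a 1D Gaussian
# source observed through independent circular complex Gaussian noise.

def visibility(f, alpha, mu, w):
    # Fourier transform of a Gaussian of flux alpha, center mu, std w.
    return alpha * np.exp(-2 * (np.pi * w * f) ** 2) * np.exp(-2j * np.pi * f * mu)

def crlb(freqs, alpha, mu, w, sigma2):
    v = visibility(freqs, alpha, mu, w)
    # Derivatives of the noiseless visibility w.r.t. (alpha, mu, w).
    d_alpha = v / alpha
    d_mu = -2j * np.pi * freqs * v
    d_w = -4 * np.pi ** 2 * w * freqs ** 2 * v
    jac = np.stack([d_alpha, d_mu, d_w])               # shape (3, n_freqs)
    # Fisher matrix for independent circular complex Gaussian noise.
    fim = (2.0 / sigma2) * np.real(jac @ jac.conj().T)
    return np.diag(np.linalg.inv(fim))                 # CRLBs on (alpha, mu, w)

# Full coverage vs low frequencies only: dropping the high frequencies
# can only increase the bounds, mostly on the position and width.
freqs_fa = np.linspace(0.05, 2.0, 40)
freqs_ra = freqs_fa[:20]                               # low frequencies only
b_fa = crlb(freqs_fa, alpha=1.0, mu=0.1, w=0.2, sigma2=1e-4)
b_ra = crlb(freqs_ra, alpha=1.0, mu=0.1, w=0.2, sigma2=1e-4)
```

Since the reduced coverage is a subset of the full one, its Fisher matrix is dominated by the full-coverage Fisher matrix, so every bound can only grow.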

We compare the CRLBs on the three source parameters (flux, position and width) for the two spatial transfer functions. In both cases the aperture configuration consists of two uniform sub-apertures separated by a large distance, in order to sketch the behaviour of two super-stations. The number of stations per sub-aperture is assumed even, and in the RA mode the reference beam is taken in the middle of the opposite sub-aperture. As noted above, the low-frequency term of the transfer function does not change; only the high-frequency term is modified.

The source width plays a central role in the estimation. For a point source, the high-frequency measurements bring a lot of information, while the performance for a very extended source is independent of the inter-station visibilities. Figs. 2 and 3 give the results obtained for a fixed array configuration; the source parameters are fixed and results are given for different values of the source width. The SNR gain of the reference beam is set to one (no beam averaging). Fig. 2 shows the two spatial transfer functions. Fig. 3 shows the CRLBs associated with the FA and RA modes. The thresholding effect as the width increases reflects the shape of the visibility modulus given in Fig. 2: the frequency contribution of the source at the inter-station baseline becomes negligible for wide sources. For narrow sources the order of magnitude of the loss of performance is 4 dB. This loss naturally depends strongly on the configuration; in less favorable settings it reaches 13 dB.

Fig. 2: 1D illustration of the FA and RA spatial transfer functions. The bottom plot shows the visibility modulus for 2 characteristic values of the source width.
Fig. 3: CRLBs for the FA and RA modes.

III Distributed image reconstruction with a partial aperture synthesis model

III-A Decentralized map reconstruction

The classical model for map reconstruction is obtained by vectorizing the sampled map and the visibilities, and reads [5, 6]: the visibility vector is the Fourier transform of the map, weighted by a diagonal matrix including various operations (calibration, signal-to-noise weighting), sampled by a binary matrix which codes the positions of the measured visibilities in the frequency plane, and corrupted by an additive noise vector assumed white and Gaussian. With these assumptions, the image restoration problem is usually cast as a general inverse problem combining a quadratic data-fidelity term with a regularization term, which can be for example a sparse analysis prior [3], a sparse synthesis prior [11] or a hybrid prior [4].


Introducing the zero-padded visibilities and the so-called “dirty image” (the inverse Fourier transform of the zero-padded visibilities), (11) can be rewritten in terms of a (circulant) convolution matrix, and the inverse problem (12) turns out to be equivalent to a deconvolution problem on the dirty image.
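This dirty-image equivalence can be illustrated numerically; the grid size, sky and random sampling mask below are arbitrary choices for the sketch.

```python
import numpy as np

# Sketch of the "dirty image" construction: visibilities are placed on a
# regular frequency grid (zero elsewhere) and inverse-Fourier-transformed.
# The result equals the true map circularly convolved with the "dirty
# beam", i.e. the inverse transform of the binary sampling mask.

n = 64
sky = np.zeros((n, n))
sky[20, 30] = 1.0                      # a point source
sky[40:44, 10:14] = 0.5                # a small extended source

rng = np.random.default_rng(1)
mask = rng.random((n, n)) < 0.3        # binary sampling of the uv plane

vis = np.fft.fft2(sky)                 # ideal full-coverage visibilities
vis_sampled = np.where(mask, vis, 0.0) # zero-padded measured visibilities

dirty_image = np.real(np.fft.ifft2(vis_sampled))
dirty_beam = np.real(np.fft.ifft2(mask.astype(float)))

# Convolution theorem check: dirty image == sky convolved with dirty beam.
conv = np.real(np.fft.ifft2(np.fft.fft2(sky) * np.fft.fft2(dirty_beam)))
```

The check in the last line is exactly the rewriting used above: masking in the frequency plane is a circulant convolution in the image plane.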
The visibility vector can be partitioned into blocks: for each super-station, one block collects the visibilities obtained by correlating all the beams inside that super-station, and further blocks collect the visibilities obtained by correlating its beams with the reference beams transferred from the other super-stations (see Sec. II-B). Using the same partitioning for the sampling, weighting and Fourier matrices, Eq. (12) can be rewritten as a sum of per-super-station data-fidelity terms sharing the common unknown map. Using again the property of the zero-padding matrix inside each norm of the sum (16), each term can in turn be expressed with a local dirty image.

In (16), each sub-problem in the sum amounts to reconstructing the map from the intra-super-station and inter-super-station visibilities available at one super-station. In (17), each sub-problem amounts to reconstructing the map from the local dirty image obtained using only those visibilities.

Eqs. (16, 17) are particularly interesting for the derivation of distributed optimization algorithms, since they correspond to a global variable consensus problem [1, 7]. The next subsections evaluate the impact of the partial aperture model w.r.t. the standard model by numerical simulations.

III-B Array configuration and aperture synthesis

The sensor array used in the simulations consists of 10 super-stations of 10 stations each, as shown in Fig. 4. The stations follow a classical Y configuration with different rotations. Ten measurements are performed over a range of 3 hours, taking into account the Earth rotation. The RA is obtained by computing, for each station, the correlation with the center beam of the other super-stations, with no SNR gain from beam averaging. For this experiment the RA retains only a fraction of the FA visibilities, and the visibilities are binned on a grid with Shannon sampling. The corresponding coverage is shown in Fig. 4.

Fig. 4: Left: sensor array configuration (without Earth rotation). Right: final coverage in normalized frequencies for the FA, Eq. (3), and the RA, Eq. (4).

The original image is shown in the upper part of Fig. 5 along with its Fourier transform. Note that the image contains both low- and high-frequency content. The noise variance is such that the SNR of the measured visibilities is 70 dB. The observed dirty images are also reported. The RA clearly leads to a less detailed dirty image.

Fig. 5: Original image and its Fourier transform (up). Dirty images observed with the full aperture (down left) and reduced aperture (down right).

III-C Distributed image reconstruction

The image reconstruction is performed by solving problem (17) with a regularized global variable consensus ADMM algorithm, as described in [1]. The regularization term is a small quadratic penalty; it ensures that the problem is strictly convex while limiting the bias. At each iteration, the algorithm requires each super-station to solve a large-scale linear problem whose size is the number of pixels [1, Eq. (7.6)]. This linear problem can easily be solved in the Fourier domain, as discussed in [5]; note that the quadratic regularization can be included in this step. The consensus step [1, Eq. (7.7)] is simply a projection onto the positive orthant. A super-station sends its current image and Lagrange multiplier, and receives the consensus image.

Fig. 6 shows the reconstructed images obtained by solving (17) with and without positivity constraints for both aperture cases, along with the relative norm of the reconstruction error. The FA obviously leads to a better reconstruction in both cases. However, while the loss in performance is rather large in the unconstrained case, the gap is significantly reduced by the positivity constraint.

Fig. 6: Reconstructed images and relative errors. Top (bottom): reconstruction without (with) positivity constraint. Left: FA mode; right: RA mode.

IV Conclusion and perspectives

This correspondence investigates a distributed strategy for the image reconstruction problem in radio astronomy when the number of stations inside each super-station is larger than the number of super-stations. It relies on a reduced aperture synthesis where each super-station uses all its beams and a single reference beam from each of the other super-stations. Part of the missing information for each super-station is then exchanged during the consensus step of the distributed algorithm. The loss of performance incurred by the reduced aperture synthesis is evaluated on the image reconstruction by computer simulations.

The approach proposed in the paper processes all the data after they have been acquired. A natural extension is to reconstruct the image sequentially. This setup perfectly fits the operating mode of interferometers, which progressively fill the frequency plane using the Earth rotation; among its benefits is the possibility of optimizing the observation mode in real time. A straightforward solution is to make a number of iterations of the reconstruction algorithm of section III-C after each measurement, using a “warm start”. A much more challenging perspective is to select sequentially the frequency measurements used at each iteration. Whereas this communication relies on a “deterministic” pattern of frequency measurements, a better strategy would be to select at each iteration, among all visibilities, the ones that optimize the reconstruction according to some predefined criterion.


  • [1] S. Boyd, N. Parikh, and E. Chu. Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers. Foundations and Trends in Machine Learning. Now Publishers Inc, June 2011.
  • [2] P. C. Broekema, A.-J. Boonstra, V. C. Cabezas, et al. DOME: towards the ASTRON & IBM center for exascale technology. In Workshop on High-Performance Computing for Astronomy, June 2012.
  • [3] R. E. Carrillo, J. D. McEwen, and Y. Wiaux. Sparsity Averaging Reweighted Analysis (SARA): a novel algorithm for radio-interferometric imaging. Monthly Notices of the Royal Astronomical Society, 426(2):1223–1234, Oct. 2012.
  • [4] A. Dabbech, D. Mary, and C. Ferrari. Astronomical image deconvolution using sparse priors: An analysis-by-synthesis approach. In Acoustics, Speech and Signal Processing (ICASSP), 2012 IEEE International Conference on, pages 3665–3668, 2012.
  • [5] J. F. Giovannelli and A. Coulais. Positive deconvolution for superimposed extended source and point sources. Astronomy and Astrophysics, 439(1):401–412, Aug. 2005.
  • [6] U. Rau, S. Bhatnagar, M. A. Voronkov, and T. J. Cornwell. Advances in calibration and imaging techniques in radio interferometry. Proceedings of the IEEE, 97(8):1472–1481, 2009.
  • [7] A. H. Sayed. Diffusion Adaptation over Networks. In R. Chellapa and S. Theodoridis, editors, E-Reference Signal Processing. Elsevier, 2012.
  • [8] P. Stoica and R. Moses. Spectral analysis of signals. Pearson Prentice Hall, 2005.
  • [9] A. R. Thompson, J. M. Moran, and G. W. Swenson Jr. Interferometry and Synthesis in Radio Astronomy. John Wiley & Sons, 2008.
  • [10] A.-J. van der Veen and S. J. Wijnholds. Signal Processing Tools for Radio Astronomy. Handbook of Signal Processing Systems. Springer, 2013.
  • [11] Y. Wiaux, L. Jacques, G. Puy, A. M. M. Scaife, and P. Vandergheynst. Compressed sensing imaging techniques for radio interferometry. Monthly Notices of the Royal Astronomical Society, 395(3):1733–1742, May 2009.