VLBInet: Radio Interferometry Data Classification for EHT with Neural Networks

10/14/2021
by Joshua Yao-Yu Lin, et al.

The Event Horizon Telescope (EHT) recently released the first horizon-scale images of the black hole in M87. Combined with other astronomical data, these images constrain the mass and spin of the hole as well as the accretion rate and magnetic flux trapped on the hole. An important question for the EHT is how well key parameters, such as trapped magnetic flux and the associated disk models, can be extracted from present and future EHT VLBI data products. The process of modeling visibilities and analyzing them is complicated by the fact that the data are sparsely sampled in the Fourier domain while most of the theory and simulation work is constructed in the image domain. Here we propose a data-driven approach to analyze complex visibilities and closure quantities for radio interferometric data with neural networks. Using mock interferometric data, we show that our neural networks are able to infer the accretion state as either high magnetic flux (MAD) or low magnetic flux (SANE), suggesting that it is possible to perform parameter extraction directly in the visibility domain without image reconstruction. We have applied VLBInet to real M87 EHT data taken on four different days in 2017 (April 5, 6, 10, 11), and our neural networks give score predictions of 0.52, 0.40, 0.43, and 0.76 for the four days, with an average score of 0.53, which shows no significant indication that the data lean toward either the MAD or SANE state.


1 Introduction

The Event Horizon Telescope (EHT) is a globe-spanning network of millimeter wavelength observatories. Data from individual observatories are combined to measure the Fourier components of the source intensity on the sky (EHTC III). The sparse set of Fourier components, together with a regularization procedure, can then be used to reconstruct an image of the source (EHTC IV). The resulting images of the black hole at the center of M87 (hereafter M87*) have a ringlike structure—attributed to emission from hot plasma surrounding the black hole—with an asymmetry that contains information about the motion of plasma around the hole. Combined with data from other sources, the EHT images constrain the black hole spin and mass as well as the structure and strength of the surrounding magnetic field. More recently, the EHT has produced polarized emission maps of M87* (EHTC VII), and these imply a highly organized magnetic field structure in the source (EHTC VIII).

Black hole accretion flows can be divided into two qualitatively different states, called MAD and SANE, depending on the strength of the magnetic field near the event horizon. In the magnetically arrested disk (MAD) state, magnetic fields near the horizon are dynamically important enough to limit the flow of plasma onto the hole (Bisnovatyi-Kogan and Ruzmaikin, 1974; Ichimaru, 1977; Igumenshchev et al., 2003; Narayan et al., 2003). When MAD, the magnetic flux through the black hole horizon is at the maximum value sustainable by the accretion flow; additional flux escapes outward through the inflowing plasma. The flux can be characterized by the dimensionless flux $\phi \equiv \Phi_{\rm BH}/(\dot{M}\, r_g^2\, c)^{1/2}$, where $\Phi_{\rm BH}$ is the magnetic flux through the horizon, $\dot{M}$ is the accretion rate, and $r_g = GM/c^2$. In the MAD state $\phi$ sits at its maximum, saturated value.[2] In the standard and normal evolution (SANE) state, by contrast, $\phi$ is well below this value and the flow is organized in a conventional, geometrically thick, centrifugally supported disk. For a fixed accretion rate and black hole spin, MAD flows have stronger jets driven by the Blandford–Znajek effect. Is it possible to identify whether a source is in the MAD or SANE state directly from VLBI data?

[2] We use Lorentz–Heaviside units rather than Gaussian units to be consistent with Porth et al. (2019). In the latter system the numerical value of $\phi$ is larger by a factor of $\sqrt{4\pi}$.
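As a point of reference, the MAD saturation threshold commonly quoted in the GRMHD simulation literature (a value drawn from that broader literature rather than restated in this section) is

$$\phi \gtrsim 15 \qquad \text{(Lorentz–Heaviside units)},$$

with SANE flows lying well below this value; in Gaussian units the threshold is larger by the $\sqrt{4\pi}$ conversion factor noted in the footnote.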

Various methods have been proposed for using black hole images or VLBI data to constrain system parameters—see (Johnson et al., 2020; Gralla and Lupsasca, 2020; Lupsasca et al., 2018) for measuring black hole parameters using long-baseline visibilities and (Wong, 2021; Hadar et al., 2021; Chesler et al., 2021) through autocorrelations of position-dependent light echoes—however, the general parameter extraction problem is challenging because simulated images contain an abundance of information about not only the black hole spacetime and gross accretion flow properties, but also about fluctuating, turbulent structure in the accreting plasma. Past efforts to compare simulated GRMHD images with VLBI data have typically either fit to the visibilities constructed from simulated images (e.g., the “average image scoring” procedure from EHTC V, which runs into issues of attempting to match or marginalize over specific instantiations of turbulence between simulation and observation) or else have focused on observationally accessible proxies for physical quantities (e.g., the metric derived in Palumbo et al. 2020 and used in EHTC VIII). Here we consider a neural network (NN)-based approach to parameter extraction from VLBI data. This approach has the benefit of using all the data (as opposed to observationally accessible proxies) and can ideally distill from the data all information that constrains a model parameter.

Earlier work by van der Gucht et al. (2020) found evidence that neural networks could be used to infer values for a limited set of parameters in a library of synthetic black hole images of SANE flows. Work by Lin et al. (2020b) explored both MAD and SANE models and investigated neural network interpretability. Both of these earlier efforts worked in the image domain, though the real data correspond to sparse sampling in the Fourier domain. In recent years, work by Sun and Bouman (2020), Sun et al. (2020), and Morningstar et al. (2018) has shown that deep neural networks can be used for visibility-to-image reconstruction, as well as for tracking the underlying image distribution probabilistically using suitable evaluation loss metrics. Work by Popov et al. (2021) used convolutional neural networks to estimate an inclination angle from complex visibilities for a synthetic disk-based model. Here we explore the possibility of using neural networks to perform an end-to-end analysis of synthetic M87* data in the visibility domain.

Our paper is organized as follows. In Section 2, we describe our procedure for generation of synthetic observations. In Section 3 we describe our neural network, and we present results in Section 4 and provide a discussion in Section 5.

2 Synthetic data generation


Figure 1: Example synthetic images based on numerical models of black hole accretion flows with M87-like parameters. Top row: sample images of MAD accretion flows. Bottom row: sample images of SANE flows.

2.1 Simulating black hole images with GRMHD/GRRT

The training data for our supervised machine learning pipeline consist of synthetic images based on general relativistic magnetohydrodynamic (GRMHD) simulations of black hole accretion. The image dataset used in this work is the same as the one used in Lin et al. (2020b). We obtained images from the image library and split them in half for training and testing.

The GRMHD simulations were generated with the iharm3D code (Prather et al., JOSS, submitted) for five black hole spins in both MAD and SANE states. Notice that there is a continuum of possible SANE states; our SANE models occupy the low-flux end of this continuum. For selected spins and magnetic states we consider multiple GRMHD simulations initialized with different seed perturbations to assess the likelihood that our neural networks are over-fitting the simulations.

We then imaged the GRMHD simulation output using ipole (Mościbrodzka and Gammie, 2018). To do this we must specify additional physical parameters: the approximate size of the black hole and its distance from the observer; the accretion rate (set so that the intensity integrated over the images matches the 1.3 mm flux density of M87*); and a parameter, $R_{\rm high}$, that regulates assignment of the electron temperature from fluid variables (see EHTC V). $R_{\rm high}$ is drawn uniformly from a discrete set of allowed values. In addition we must specify parameters related to the imaging process: the position angle (PA) of the source, the angular resolution of the image, and the coordinates of the black hole center on the image. To prevent the neural network from correlating imaging parameters with source parameters we introduce image-by-image variations: the PA is drawn uniformly over the full circle; the angular scale is drawn uniformly from a small interval corresponding to a 10% range in the angular scale of the source (equivalently a 10% range in black hole mass or distance); and the image center is drawn uniformly from a square region corresponding to a small displacement in each coordinate.
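As an illustration of this image-by-image randomization, the sketch below draws the imaging parameters described above; the field of view, the exact 10% interval, and the maximum center shift are placeholder values for illustration, not the values used to build the training set.

import numpy as np

rng = np.random.default_rng(seed=0)

def sample_imaging_params(fov_base_uas=160.0, shift_max_uas=2.0):
    """Draw randomized imaging parameters for one training image.

    The field of view, scale interval, and center-shift bound below are
    illustrative placeholders, not the values used in the paper.
    """
    pa = rng.uniform(0.0, 2.0 * np.pi)                 # position angle over the full circle
    scale = rng.uniform(0.95, 1.05)                    # ~10% range in source angular scale
    dx, dy = rng.uniform(-shift_max_uas, shift_max_uas, size=2)  # image-center offset (uas)
    return {"pa_rad": pa, "fov_uas": scale * fov_base_uas, "center_shift_uas": (dx, dy)}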


Figure 2: Visibility amplitude versus baseline length for a randomly selected single training image with different noise components applied. Panel (a) shows the noise-free Fourier samples, panel (b) shows these samples after the addition of complex Gaussian noise, and panel (c) shows the same as (b) after additionally modulating each visibility by station-based complex gain noise (see subsection 2.2).

2.2 Synthetic VLBI data generation

For each image in the training and testing sets, we generated synthetic VLBI observations using eht-imaging (Chael et al., 2016, 2018) in a manner similar to that described in EHTC IV. The baseline coverage and observing cadence match the 2017 April 11 observations of M87 carried out by the EHT, after coherently time-averaging each baseline's complex visibilities on a per-scan basis. The source image in each case is assumed to be located at the sky position of the M87 black hole, and the baseline thermal noise values have been taken from the original calibrated observations provided by the EHT collaboration (Event Horizon Telescope Collaboration, 2019).

In the absence of noise the complex visibilities are, according to the van Cittert–Zernike theorem (Thompson et al., 2017), samples of the Fourier transform of the sky emission. Real-world telescope arrays, however, suffer from a number of a priori unknown systematics, the most severe of which are variations in the phase and amplitude of the signal received at each station caused by atmospheric turbulence and other signal path effects.

We have thus generated three classes of mock observations for each of the training and testing images: (1) a “noise free” observation that contains only the Fourier transform of the input image sampled on the EHT baselines, (2) a “thermal noise” observation in which each complex visibility has been modulated according to its measurement uncertainty, and (3) a “full noise” observation that includes station-based complex gain fluctuations. The synthetic gain amplitudes are modeled as Gaussian-distributed with unit mean and a standard deviation of 10% for all stations, while the synthetic gain phases are drawn uniformly from $[0, 2\pi)$; both gain amplitudes and phases are sampled independently for each station at each timestamp.
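The corruption model can be summarized with a short sketch that applies the two noise terms directly to model visibilities; this illustrates the statistical model described above (complex Gaussian thermal noise plus station-based gains with 10% amplitude scatter and uniformly random phases) rather than reproducing the eht-imaging implementation.

import numpy as np

rng = np.random.default_rng(1)

def corrupt_visibilities(vis, sigma, ant1, ant2, n_stations, gain_amp_std=0.1):
    """Apply thermal noise and station-based complex gains to model visibilities.

    vis        : complex array of noise-free visibilities (one per baseline sample)
    sigma      : per-visibility thermal noise standard deviation
    ant1, ant2 : integer station indices for each visibility
    """
    # Thermal noise: independent Gaussian noise on the real and imaginary parts.
    thermal = sigma * (rng.standard_normal(vis.shape) + 1j * rng.standard_normal(vis.shape))

    # Station gains: amplitudes ~ N(1, 0.1), phases ~ Uniform(0, 2*pi); in practice
    # these are redrawn independently for every station at every timestamp.
    g_amp = rng.normal(1.0, gain_amp_std, size=n_stations)
    g_phase = rng.uniform(0.0, 2.0 * np.pi, size=n_stations)
    g = g_amp * np.exp(1j * g_phase)

    # Each visibility V_ij on baseline (i, j) is multiplied by g_i * conj(g_j).
    return g[ant1] * np.conj(g[ant2]) * vis + thermal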

Gain variations are particularly severe at the short (1.3 mm) observing wavelength of the EHT, motivating the construction and use of so-called “closure” data products that are immune to station-based corruption. The two standard sets of closure quantities are closure phases and closure amplitudes, which are constructed using triangles and quadrangles of baselines, respectively. Denoting a visibility measurement on the baseline between stations $i$ and $j$ as $V_{ij}$, the closure phase on triangle $(i, j, k)$ is given by the argument of the directed triple product,

$$\psi_{ijk} = \arg\left( V_{ij}\, V_{jk}\, V_{ki} \right) \qquad (1)$$

The closure amplitude on a quadrangle $(i, j, k, l)$ is given by

$$A_{ijkl} = \frac{|V_{ij}|\,|V_{kl}|}{|V_{ik}|\,|V_{jl}|} \qquad (2)$$

Both closure phases and closure amplitudes have the property that station-based noise cancels, making them robust observables. In this paper we construct two classes of neural networks, one to treat complex visibilities and the other to treat closure quantities.
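Equations (1) and (2) translate directly into code; a minimal sketch, assuming a dictionary of complex visibilities keyed by ordered station pairs with V[(j, i)] equal to the conjugate of V[(i, j)]:

import numpy as np

def closure_phase(V, i, j, k):
    """Closure phase (radians) on the triangle (i, j, k), Eq. (1)."""
    return np.angle(V[(i, j)] * V[(j, k)] * V[(k, i)])

def closure_amplitude(V, i, j, k, l):
    """Closure amplitude on the quadrangle (i, j, k, l), Eq. (2)."""
    return (np.abs(V[(i, j)]) * np.abs(V[(k, l)])) / (np.abs(V[(i, k)]) * np.abs(V[(j, l)]))

Station-based gain factors of the form $g_i g_j^*$ cancel in both combinations, which is why these quantities are robust to the complex gain corruption described in subsection 2.2.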

Figure 3: VLBInet Pipeline.

3 Neural networks

Figure 4: Two neural network classes used in this paper. The first accepts complex visibilities as input, and the second accepts closure quantities as input.

We decided to use fully connected neural networks, also known as multilayer perceptrons (MLPs), for the MAD/SANE classification task. Previous work used convolutional NNs (CNNs) and image domain data (van der Gucht et al., 2020; Lin et al., 2020b). We use sparsely sampled visibility domain data—where the input is not translation invariant—and therefore selected the MLP architecture as one of the simplest architectures that avoids possible inductive biases associated with the translational invariance embedded in, for example, CNNs (Battaglia et al., 2018). We will refer to this arrangement as VLBInet.

When working with complex visibilities, we decompose the complex numbers into amplitudes and phases and then project the phases onto their sines and cosines to handle phase wrapping. For both complex visibilities and closure quantities the data are organized in a consistent way, so that each neuron in the input layer is associated with a particular point in the uv domain (complex visibilities) or with a particular triangle or quadrangle (closure quantities).
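A minimal sketch of this input encoding follows; the concatenation order and feature layout are an assumption for illustration, and the essential point is that the phase enters through its sine and cosine so the network never sees the $2\pi$ discontinuity.

import numpy as np

def visibility_features(vis):
    """Encode complex visibilities as (amplitude, cos phase, sin phase) features.

    vis is a 1D complex array ordered consistently by uv point, so that each
    entry of the returned vector maps to a fixed input neuron.
    """
    amp = np.abs(vis)
    phase = np.angle(vis)
    return np.concatenate([amp, np.cos(phase), np.sin(phase)])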

We trained 6 different versions of VLBInet, spanning 2 different visibility inputs (complex/closure) and 3 different noise scenarios. All versions consist of 7 layers; the complex-visibility and closure-quantity networks differ in the number of neurons per layer and hence in the number of trainable parameters. In an MLP each connection carries a weight and each neuron a bias, and each hidden layer is followed by a nonlinear activation function. We use the binary cross-entropy (BCE) loss for the MAD/SANE task, since the output is expected to fall in one of two categories: $\mathcal{L}_{\rm BCE} = -\left[ y \log p + (1 - y) \log(1 - p) \right]$, where $y = 0$ for MAD and $y = 1$ for SANE. Here $p$ is the score prediction, with range from 0 to 1. VLBInet is built using the Python 3 deep learning library pytorch (Paszke et al., 2019).
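A minimal PyTorch sketch of a VLBInet-style classifier is given below; the layer widths, ReLU activations, and optimizer settings are illustrative placeholders (the paper's exact layer sizes and activation are not reproduced here), but the single-logit output with a binary cross-entropy loss and the 0 = MAD, 1 = SANE label convention match the setup described above.

import torch
import torch.nn as nn

class VLBInetMLP(nn.Module):
    """Fully connected MAD/SANE classifier (illustrative layer widths)."""

    def __init__(self, n_inputs, hidden=(512, 256, 128, 64, 32)):
        super().__init__()
        layers, width = [], n_inputs
        for h in hidden:
            layers += [nn.Linear(width, h), nn.ReLU()]
            width = h
        layers.append(nn.Linear(width, 1))  # single logit; sigmoid gives the score p
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x).squeeze(-1)

model = VLBInetMLP(n_inputs=3 * 200)  # e.g. amp/cos/sin features for 200 uv samples
loss_fn = nn.BCEWithLogitsLoss()      # binary cross-entropy on the logit
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(x, y):
    """One optimization step; y holds labels with 0 = MAD and 1 = SANE."""
    optimizer.zero_grad()
    loss = loss_fn(model(x), y.float())
    loss.backward()
    optimizer.step()
    return loss.item()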

4 Results

4.1 Results on synthetic data

Figure 5: Histogram of scores for MAD and SANE models. If the neural network could identify the model perfectly then all MADs would score 0 and all SANEs would score 1. The grey dashed line is the decision boundary: data with score > 0.5 are classified as SANE, and data with score < 0.5 are classified as MAD.

Figure 5 shows the distribution of MAD/SANE scores for 20,000 samples drawn with uniform probability from the test set. Notice that the vertical axis is logarithmic. Evidently the data contain information about whether the source is MAD or SANE, and this information can be detected by the NN.

Figure 6 shows the performance of VLBInet for each combination of noise model and VLBI data type (complex visibilities or closure quantities), quantified by the area under the ROC curve (AUC) (Bradley, 1997).

Figure 6: Area under the receiver operating characteristic (ROC) curve as a performance measure for the different noise models and data types (complex visibilities or closure quantities). Here the SANE model is treated as the positive class.

To quantify how well the NN performs in the classification task, we split the test data into 20 subsets and calculate the mean and standard deviation of the prediction accuracy across the subsets. The resulting uncertainty estimates are shown in Table 1.
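One way to implement this subset-based estimate (a sketch assuming scikit-learn's roc_auc_score; the accuracy at the 0.5 threshold could be substituted for the AUC in the same loop):

import numpy as np
from sklearn.metrics import roc_auc_score

def auc_with_uncertainty(scores, labels, n_subsets=20, seed=0):
    """Mean and standard deviation of the AUC over random, equal-sized test subsets.

    scores : predicted MAD/SANE scores in [0, 1]
    labels : ground-truth labels, 0 = MAD and 1 = SANE (SANE treated as positive)
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(scores))
    aucs = [roc_auc_score(labels[part], scores[part])
            for part in np.array_split(idx, n_subsets)]
    return float(np.mean(aucs)), float(np.std(aucs))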

For complex visibilities it is evident that almost no information is lost even when thermal noise is added; the real loss of information is due to gain corruption. By contrast, the closure quantities lose information via thermal noise but lose nothing through gain corruption. This is unsurprising since closure quantities are unaffected by gain errors.

For complex visibilities, the MAD/SANE classifier correctly identifies the accretion state with 80% accuracy in the noise-free case, with nearly the same accuracy when thermal noise is present, and with 71% accuracy when thermal noise, amplitude gain errors, and phase errors are all included. For closure quantities, the classifier achieves 76% accuracy in the noise-free case and roughly 73% accuracy when thermal noise is present, with essentially no further degradation when gain corruptions are added.

4.2 Results on real M87* data

We ran the trained VLBInet (trained on models with added noise and using closure quantities) on real M87 VLBI data taken on April 5, 6, 10, and 11, 2017. VLBInet output scores of 0.52, 0.40, 0.43, and 0.76, respectively. Among the 20,000 samples in our test set, there are instances with scores close to these predictions (within one standard deviation of the four-day score distribution); this subset contains both MAD and SANE models.


Figure 7: Visibility amplitude of M87* taken on 2017 April 5, 6, 10, and 11. The corresponding MAD/SANE scores are 0.52, 0.40, 0.43, and 0.76 for each day's observation.

It is interesting that the results are similar across days; the data show no strong preference for either the MAD or SANE model. If the models faithfully represent the physical situation in M87*, improvements in data quality (e.g., better baseline coverage) are likely to enable a confident classification of the source as MAD or SANE.

Classification accuracy
Visibility form    Noise free    Thermal noise    Thermal noise + complex gains
Complex            80%           ≈ 80%            71%
Closure            76%           ≈ 73%            73%
Table 1: Summary statistics: classification accuracy for each visibility form and noise model.
Neural network MAD/SANE prediction scores with closure quantities
Observation date    Score    Prediction
2017 April 5        0.52     SANE
2017 April 6        0.40     MAD
2017 April 10       0.43     MAD
2017 April 11       0.76     SANE
Table 2: Neural network MAD/SANE prediction scores for M87 on all four observation dates.

5 Discussion

We have presented a neural network pipeline that can perform data classification to identify the accretion state of M87* as either MAD or SANE using VLBI observations, after training on a library of simulations. Our pipeline operates directly on interferometric data products—either complex visibilities or closure quantities—and thus does not rely on an image reconstruction procedure. We find that the networks achieve roughly 70-80% classification accuracy across the board, even in the presence of realistic levels of both thermal and complex gain data corruptions.

Our network performs its most accurate classification (80%) when the data are provided as noise-free complex visibilities and its least accurate classification (71%) when the data are provided as complex visibilities containing both thermal and complex gain uncertainties. The network achieves intermediate performance (76%) when it is provided with noise-free closure quantities; when the closure quantities contain realistic levels of noise, the performance (73%) is closer to that of the corrupted complex visibilities. These performance differences are qualitatively consistent with the expected relative amounts of source information retained in visibilities versus closure quantities (Blackburn et al., 2020). We also note that the classification accuracy using any variant of interferometric data product is considerably worse than that expected when performing classification using the original images, which can achieve essentially perfect (99%; Lin et al. 2020b) classification accuracy; again, this difference reflects the loss of information due to sparse Fourier sampling and the limited angular resolution of the data.

Although we did not provide the neural network with explicit priors, our input training data set covered a limited set of parameter values and thus produced an implicit, data-driven prior through the machine learning process.

5.1 Comparison with traditional analysis pipelines

The initial theoretical analysis performed by the EHT included a comparison between observational data products and simulated images of black hole accretion flows. In a comparison such as this, a reduced chi-squared distance comparison of real and synthetic data is not useful because intra-model variations—due to changes in image features caused by turbulent fluctuations in the source, for example—are large compared to measurement errors.

In contrast to snapshot-by-snapshot comparisons, the average image scoring (AIS) procedure described in EHTC V was used to compute the likelihood the data would be drawn from a given model. The AIS procedure ruled out a few retrograde-spin accretion flow models based on variability, but it left much of the parameter space unconstrained. Compared to our data-driven approach, the AIS procedure is less constraining.

5.2 Future work

We have trained VLBInet only to classify M87 source models as MAD or SANE based on the 2017 EHT configuration. In future work we plan to extend VLBInet to estimate black hole mass and spin as well as parameters describing the state of the plasma, and we plan to consider other arrays such as the EHT 2021 configuration and prospective ngEHT configurations (Blackburn et al., 2019). We will also introduce more realistic data corruptions using a more sophisticated end-to-end VLBI synthetic data generation pipeline (e.g., Roelofs et al. 2020).

In this work, we applied the neural network to M87 VLBI data. There are a few limitations, however: 1) if our GRMHD and GRRT simulations are flawed, then the neural network, which has been trained on a simulation-based synthetic dataset, may not be able to generalize to realistic data. 2) Similarly, if the noise models used in the generation of synthetic VLBI observables are flawed, then the neural network, which has been trained on the flawed noise model in the synthetic dataset, may not be able to generalize to realistic noise (e.g., realistic atmospheric, instrumental, and calibration effects) and hence could produce spurious results. 3) Our training set covers the model parameter space sparsely—for example, we use only five values for black hole spin—and the predictions could therefore be biased.

Although we have demonstrated the utility of our framework only for the EHT, we believe that it should in principle work for general classification analyses using interferometric data. For instance, a similar network trained on an appropriate set of simulated protoplanetary disk observations with ALMA could plausibly constrain properties such as disk inclination, thickness, and mass. In practice, using the network in such a fashion is likely to be limited by the availability of suitable simulations, perhaps motivating the development of new simulation capabilities. This could have broad impact on the community, as interferometric techniques have been crucial for many astrophysical observations, including studies of dark matter substructure in strong gravitational lensing (Hezaveh et al., 2016; Lin et al., 2020a) and starburst galaxies (Vieira et al., 2013).

Since the EHT captures information about source polarization and time dependence, we also plan to study whether these additional data can further constrain physical parameters of the source. Synchrotron emission is naturally polarized and the local properties of the magnetic field in the plasma alter the electric vector position angle as radiation propagates through the plasma. Thus, the structure of the magnetic field in the plasma influences the orientation of linear polarization observed in black hole images.

Palumbo et al. (2020) and EHTC VIII showed that the qualitative and quantitative differences in near-horizon magnetic field strength and structure between MAD and SANE models suggest that linear polarization data might be particularly useful in differentiating between them. We will apply interpretability methods (Ghorbani et al., 2019; Lin et al., 2020b) to gain a more quantitative understanding of the relevant features in the visibility domain. It is critical to understand the origin of accurate neural network classifications/regressions so that we can be confident that we are not overfitting.

As pointed out by Sun et al. (2020), the VLBI observing process can be viewed as a physics-constrained autoencoder in which the encoder is the sampling strategy imposed by the radio telescope array. Our work could potentially be helpful for evaluating telescope site candidates in a fast and automatic way (Raymond et al., 2021). The visibilities are, in some sense, analogous to the hidden representation in the bottleneck of a variational autoencoder. Traditionally, one would reconstruct the image and then perform data analysis on the reconstructed images. Since the whole VLBI process resembles an autoencoder, we could in principle optimize not only the decoder (from measurements to parameters) but also the encoder (from source to an encoded representation that preserves the most information through sparse sampling) (Sun and Bouman, 2020; Bakker et al., 2020).

The authors thank He Sun, Katie Bouman, Michael Janssen, Alexander Raymond, Bart Ripperda, Shep Doeleman, Gil Holder, Max Welling, Abhishek Joshi, Vedant Dhruv, David Ruhe, Michael Eickenberg, Shirley Ho, and David Spergel for useful discussion. The authors thank John Wardle, Shiro Ikeda for useful feedback. JL, GNW, BP, and CFG were supported by the National Science Foundation under grants AST 17-16327, OISE 17-43747, and AST 20-34306. GNW was supported in part by a Donald C. and F. Shirley Jones Fellowship and the Institute for Advanced Study. DWP acknowledges support provided by the NSF through grants AST-1952099, AST-1935980, AST-1828513, and AST-1440254, and by the Gordon and Betty Moore Foundation through grant GBMF-5278. This work has been supported in part by the Black Hole Initiative at Harvard University, which is funded by grants from the John Templeton Foundation and the Gordon and Betty Moore Foundation to Harvard University. This work used the Extreme Science and Engineering Discovery Environment (XSEDE) resource stampede2 at TACC through allocation TG-AST170024. This work utilizes resources supported by the National Science Foundation’s Major Research Instrumentation program, grant 1725729, as well as the University of Illinois at Urbana–Champaign. JL thanks the AWS Cloud Credits for Research program. JL and CFG thank the GCP Research Credits program.

References

  • T. Bakker, H. van Hoof, and M. Welling (2020) Experimental design for mri by greedy policy search. Advances in Neural Information Processing Systems 33. Cited by: §5.2.
  • P. W. Battaglia, J. B. Hamrick, V. Bapst, A. Sanchez-Gonzalez, V. Zambaldi, M. Malinowski, A. Tacchetti, D. Raposo, A. Santoro, R. Faulkner, et al. (2018) Relational inductive biases, deep learning, and graph networks. arXiv preprint arXiv:1806.01261. Cited by: §3.
  • G. S. Bisnovatyi-Kogan and A. A. Ruzmaikin (1974) The Accretion of Matter by a Collapsing Star in the Presence of a Magnetic Field. Ap&SS 28 (1), pp. 45–59. External Links: Document Cited by: §1.
  • L. Blackburn, S. Doeleman, J. Dexter, J. L. Gómez, M. D. Johnson, D. C. Palumbo, J. Weintroub, K. L. Bouman, A. A. Chael, J. R. Farah, et al. (2019) Studying black holes on horizon scales with vlbi ground arrays. arXiv preprint arXiv:1909.01411. Cited by: §5.2.
  • L. Blackburn, D. W. Pesce, M. D. Johnson, M. Wielgus, A. A. Chael, P. Christian, and S. S. Doeleman (2020) Closure statistics in interferometric data. The Astrophysical Journal 894 (1), pp. 31. Cited by: §5.
  • A. P. Bradley (1997) The use of the area under the roc curve in the evaluation of machine learning algorithms. Pattern recognition 30 (7), pp. 1145–1159. Cited by: §4.1.
  • A. A. Chael, M. D. Johnson, K. L. Bouman, L. L. Blackburn, K. Akiyama, and R. Narayan (2018) Interferometric Imaging Directly with Closure Phases and Closure Amplitudes. ApJ 857 (1), pp. 23. External Links: Document, 1803.07088 Cited by: §2.2.
  • A. A. Chael, M. D. Johnson, R. Narayan, S. S. Doeleman, J. F. C. Wardle, and K. L. Bouman (2016) High-resolution Linear Polarimetric Imaging for the Event Horizon Telescope. ApJ 829 (1), pp. 11. External Links: Document, 1605.06156 Cited by: §2.2.
  • P. Chesler, L. Blackburn, S. Doeleman, M. Johnson, J. Moran, R. Narayan, and M. Wielgus (2021) Light echos and coherent autocorrelations in a black hole spacetime. Classical and Quantum Gravity. Cited by: §1.
  • Event Horizon Telescope Collaboration (2019) First M87 Event Horizon Telescope Results. I. The Shadow of the Supermassive Black Hole. ApJ 875 (1), pp. L1. External Links: Document, 1906.11238 Cited by: §1.
  • Event Horizon Telescope Collaboration (2019a) First M87 Event Horizon Telescope Results. II. Array and Instrumentation. ApJ 875 (1), pp. L2. External Links: Document, 1906.11239 Cited by: §1, §2.2.
  • Event Horizon Telescope Collaboration (2019b) First M87 Event Horizon Telescope Results. III. Data Processing and Calibration. ApJ 875 (1), pp. L3. External Links: Document, 1906.11240 Cited by: §1, §2.2.
  • Event Horizon Telescope Collaboration (2019c) First M87 Event Horizon Telescope Results. IV. Imaging the Central Supermassive Black Hole. ApJ 875 (1), pp. L4. External Links: Document, 1906.11241 Cited by: §1, §2.2.
  • Event Horizon Telescope Collaboration (2019d) First M87 Event Horizon Telescope Results. V. Physical Origin of the Asymmetric Ring. ApJ 875 (1), pp. L5. External Links: Document, 1906.11242 Cited by: §1, §1, §2.1, §5.1.
  • Event Horizon Telescope Collaboration (2019e) First M87 Event Horizon Telescope Results. VI. The Shadow and Mass of the Central Black Hole. ApJ 875 (1), pp. L6. External Links: Document, 1906.11243 Cited by: §1.
  • Event Horizon Telescope Collaboration (2019) First M87 EHT results: calibrated data. CyVerse Data Commons. Note: Calibrated M87 data (low and high bands, 2017 April 5, 6, 10, 11) from Science Release 1 of the EHT April 2017 observation campaign, processed through the EHT-HOPS pipeline; see the release README and Paper III for details. External Links: Document Cited by: §2.2.
  • Event Horizon Telescope Collaboration (2021) First M87 Event Horizon Telescope Results. VIII. Magnetic Field Structure near The Event Horizon. ApJ 910 (1), pp. L13. External Links: Document Cited by: §1, §1, §5.2.
  • Event Horizon Telescope Collaboration (2021) First m87 event horizon telescope results. vii. polarization of the ring. The Astrophysical Journal Letters 910 (1), pp. L12. Cited by: §1.
  • A. Ghorbani, J. Wexler, J. Y. Zou, and B. Kim (2019) Towards automatic concept-based explanations. In Advances in Neural Information Processing Systems, pp. 9273–9282. Cited by: §5.2.
  • S. E. Gralla and A. Lupsasca (2020) Observable shape of black hole photon rings. Physical Review D 102 (12), pp. 124003. Cited by: §1.
  • S. Hadar, M. D. Johnson, A. Lupsasca, and G. N. Wong (2021) Photon ring autocorrelations. Phys. Rev. D 103 (10), pp. 104038. External Links: Document Cited by: §1.
  • Y. D. Hezaveh, N. Dalal, D. P. Marrone, Y. Mao, W. Morningstar, D. Wen, R. D. Blandford, J. E. Carlstrom, C. D. Fassnacht, G. P. Holder, et al. (2016) Detection of lensing substructure using alma observations of the dusty galaxy sdp. 81. The Astrophysical Journal 823 (1), pp. 37. Cited by: §5.2.
  • S. Ichimaru (1977) Bimodal behavior of accretion disks - Theory and application to Cygnus X-1 transitions. ApJ 214, pp. 840–855. External Links: Document Cited by: §1.
  • I. V. Igumenshchev, R. Narayan, and M. A. Abramowicz (2003) Three-dimensional Magnetohydrodynamic Simulations of Radiatively Inefficient Accretion Flows. ApJ 592 (2), pp. 1042–1059. External Links: Document, astro-ph/0301402 Cited by: §1.
  • M. D. Johnson, A. Lupsasca, A. Strominger, G. N. Wong, S. Hadar, D. Kapec, R. Narayan, A. Chael, C. F. Gammie, P. Galison, et al. (2020) Universal interferometric signatures of a black hole’s photon ring. Science advances 6 (12), pp. eaaz1310. Cited by: §1.
  • J. Y. Lin, H. Yu, W. Morningstar, J. Peng, and G. Holder (2020a) Hunting for dark matter subhalos in strong gravitational lensing with neural networks. arXiv preprint arXiv:2010.12960. Cited by: §5.2.
  • J. Y. Lin, G. N. Wong, B. S. Prather, and C. F. Gammie (2020b) Feature extraction on synthetic black hole images. arXiv preprint arXiv:2007.00794. Cited by: §1, §2.1, §3, §5.2, §5.
  • A. Lupsasca, A. P. Porfyriadis, and Y. Shi (2018) Critical emission from a high-spin black hole. Physical Review D 97 (6), pp. 064017. Cited by: §1.
  • W. R. Morningstar, Y. D. Hezaveh, L. P. Levasseur, R. D. Blandford, P. J. Marshall, P. Putzky, and R. H. Wechsler (2018) Analyzing interferometric observations of strong gravitational lenses with recurrent and convolutional neural networks. arXiv preprint arXiv:1808.00011. Cited by: §1.
  • M. Mościbrodzka and C. F. Gammie (2018) IPOLE - semi-analytic scheme for relativistic polarized radiative transport. MNRAS 475 (1), pp. 43–54. External Links: Document, 1712.03057 Cited by: §2.1.
  • R. Narayan, I. V. Igumenshchev, and M. A. Abramowicz (2003) Magnetically Arrested Disk: an Energetically Efficient Accretion Flow. PASJ 55, pp. L69–L72. External Links: astro-ph/0305029, Document Cited by: §1.
  • D. C. M. Palumbo, G. N. Wong, and B. S. Prather (2020) Discriminating Accretion States via Rotational Symmetry in Simulated Polarimetric Images of M87. ApJ 894 (2), pp. 156. External Links: Document, 2004.01751 Cited by: §1, §5.2.
  • A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala (2019) PyTorch: an imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32, H. Wallach, H. Larochelle, A. Beygelzimer, F. dAlché-Buc, E. Fox, and R. Garnett (Eds.), pp. 8024–8035. Cited by: §3.
  • A. Popov, V. Strokov, and A. Surdyaev (2021) A proof-of-concept neural network for inferring parameters of a black hole from partial interferometric images of its shadow. Astronomy and Computing, pp. 100467. Cited by: §1.
  • O. Porth et al. (2019) The Event Horizon General Relativistic Magnetohydrodynamic Code Comparison Project. ApJS 243 (2), pp. 26. External Links: Document, 1904.04923 Cited by: footnote 2.
  • A. W. Raymond, D. Palumbo, S. N. Paine, L. Blackburn, R. C. Rosado, S. S. Doeleman, J. R. Farah, M. D. Johnson, F. Roelofs, R. P. Tilanus, et al. (2021) Evaluation of new submillimeter vlbi sites for the event horizon telescope. The Astrophysical Journal Supplement Series 253 (1), pp. 5. Cited by: §5.2.
  • F. Roelofs, M. Janssen, I. Natarajan, R. Deane, J. Davelaar, H. Olivares, O. Porth, S. Paine, K. Bouman, R. Tilanus, et al. (2020) SYMBA: an end-to-end vlbi synthetic data generation pipeline-simulating event horizon telescope observations of m 87. Astronomy & Astrophysics 636, pp. A5. Cited by: §5.2.
  • H. Sun and K. L. Bouman (2020) Deep probabilistic imaging: uncertainty quantification and multi-modal solution characterization for computational imaging. arXiv preprint arXiv:2010.14462. Cited by: §1, §5.2.
  • H. Sun, A. V. Dalca, and K. L. Bouman (2020) Learning a probabilistic strategy for computational imaging sensor selection. In 2020 IEEE International Conference on Computational Photography (ICCP), pp. 1–12. Cited by: §1, §5.2.
  • A. R. Thompson, J. M. Moran, and G. W. Swenson, Jr. (2017) Interferometry and Synthesis in Radio Astronomy, 3rd Edition. External Links: Document Cited by: §2.2.
  • J. van der Gucht, J. Davelaar, L. Hendriks, O. Porth, H. Olivares, Y. Mizuno, C. M. Fromm, and H. Falcke (2020) Deep horizon: a machine learning network that recovers accreting black hole parameters. Astronomy & Astrophysics 636, pp. A94. Cited by: §1, §3.
  • J. Vieira, D. P. Marrone, S. Chapman, C. De Breuck, Y. Hezaveh, A. Wei, J. Aguirre, K. Aird, M. Aravena, M. Ashby, et al. (2013) Dusty starburst galaxies in the early universe as revealed by gravitational lensing. Nature 495 (7441), pp. 344–347. Cited by: §5.2.
  • G. N. Wong (2021) Black Hole Glimmer Signatures of Mass, Spin, and Inclination. ApJ 909 (2), pp. 217. External Links: Document, 2009.06641 Cited by: §1.