1 Introduction
Galaxy Zoo was created because SDSS-scale surveys could not be visually classified by professional astronomers (Lintott et al., 2008). In turn, Galaxy Zoo is being gradually outpaced by the increasing scale of modern surveys like DES (Flaugher, 2005), Pan-STARRS (Kaiser et al., 2010), the Kilo-Degree Survey (de Jong et al., 2015), and Hyper Suprime-Cam (Aihara et al., 2018).
Each of these surveys can image galaxies as fast as or faster than those galaxies are being classified by volunteers. For example, DECaLS (Dey et al., 2018) contains (as of Data Release 5) approximately 350,000 galaxies suitable for detailed morphological classification (applying r < 17 and petroR90_r > 3 arcsec, where petroR90_r is the Petrosian radius containing 90% of the r-band flux; these are the cuts used for Galaxy Zoo 2 in Willett et al. 2013). Collecting 40 independent volunteer classifications for each galaxy, as for Galaxy Zoo 2 (Willett et al., 2013), would take approximately five years at the current classification rate. The Galaxy Zoo science team must therefore both judiciously select which surveys to classify and, for the selected surveys, reduce the number of independent classifications per galaxy. The speed at which we can accurately classify galaxies severely limits the scale, detail, and quality of our morphology catalogues, diminishing the scientific value of such surveys.
The next generation of surveys will make this speed limitation even more stark.
Euclid (15,000 deg² at 0.30″ half-light radius PSF, from 2022; Laureijs et al. 2011), LSST (18,000 deg² at 0.39″ half-light radius PSF, from 2023; LSST Science Collaboration et al. 2009) and WFIRST (2,000 deg² at 0.12″ half-light radius PSF, from approx. 2025; Spergel et al. 2013) are expected to resolve the morphology of unprecedented numbers of galaxies.
This could be revolutionary for our understanding of galaxy evolution, but only if such galaxies can be classified.
The future of morphology research therefore inevitably relies on automated classification methods.
Supervised approaches (given human-labelled galaxies, predict labels for new galaxies) using convolutional neural networks (CNNs) are increasingly common and effective (Cheng et al. 2019).
CNNs outperform previous non-parametric approaches (Dieleman et al., 2015; Huertas-Company et al., 2015), and can be rapidly adapted to new surveys (Domínguez Sánchez et al., 2019a) and to related tasks such as light profile fitting (Tuccillo et al., 2017).
Unsupervised approaches (cluster examples without any human labels) also show promise (Hocking et al., 2015).
However, despite major progress in raw performance, the increasing complexity of classification methods poses a problem for scientific inquiry. In particular, CNNs are ‘black box’ algorithms which are difficult to introspect and do not typically provide estimates of uncertainty. In this work, we combine a novel generative model of volunteer responses with Monte Carlo dropout (Gal et al., 2017a) to create Bayesian CNNs that predict posteriors for the morphology of each galaxy. Posteriors are crucial for drawing statistical conclusions that account for uncertainty, and so including posteriors significantly increases the scientific value of morphology catalogues. Our Bayesian CNNs can predict posteriors for surveys of any conceivable scale.

Limited volunteer classification speed remains a hurdle: we need to collect enough responses to train our Bayesian networks. How do we train Bayesian networks to perform well while minimising the number of new responses required? Recent work suggests that transfer learning (Lu et al., 2015) may be effective. In transfer learning, models are first trained to solve similar tasks where training data is plentiful and then ‘fine-tuned’ with new data to solve the task at hand. Results using transfer learning to classify new surveys, or to answer new morphological questions, suggest that models can be fine-tuned using only thousands (Ackermann et al., 2018; Khan et al., 2018) or even hundreds (Domínguez Sánchez et al., 2019b) of newly-labelled galaxies, with only moderate performance losses compared to the original task.

Each of these authors randomly selects which new galaxies to label. However, this may not be optimal. Each galaxy, if labelled, provides information to our model; our hypothesis is that all galaxies are informative, but some galaxies are more informative than others. We use our galaxy morphology posteriors to apply an active learning strategy (Houlsby et al., 2011): intelligently selecting the most informative galaxies for labelling by volunteers. By prioritising the galaxies that our strategy suggests would, if labelled, be most informative to the model, we can create or fine-tune models with even fewer newly-labelled data.
2 Posteriors for Galaxy Morphology
A vast number of automated methods have been used as proxies for ‘traditional’ visual morphological classification. Non-parametric methods such as CAS (Conselice, 2003) and Gini (Lotz et al., 2004) have been commonly used, both directly and to provide features for increasingly sophisticated machine learning strategies (Scarlata et al., 2007; Banerji et al., 2010; Huertas-Company et al., 2011; Freeman et al., 2013; Peth et al., 2016). Most of these methods provide imperfect proxies for expert classification (Lintott et al., 2008). The key advantage of CNNs is that they learn to approximate human classifications directly from data, without the need to hand-design functions aimed at identifying relevant features (LeCun et al., 2015). CNNs work by applying a series of spatially-invariant transformations to represent the input image at increasing levels of abstraction, and then interpreting the final abstraction level as a prediction. These transformations are initially random, and are ‘learned’ by iteratively minimising the difference between predictions and known labels. We refer the reader to LeCun et al. (2015) for a brief introduction to CNNs and to Dieleman et al. (2015), Lanusse et al. (2018), Kim & Brunner (2017) and Hezaveh et al. (2017) for astrophysical applications.

Early work with CNNs immediately surpassed non-parametric methods in approximating human classifications (Huertas-Company et al., 2015; Dieleman et al., 2015). Recent work extends CNNs across different surveys (Domínguez Sánchez et al., 2019a; Khan et al., 2018) or to increasingly specific tasks (Domínguez Sánchez et al., 2018; Tuccillo et al., 2017; Huertas-Company et al., 2018; Walmsley et al., 2018). However, these previous CNNs do not account for uncertainty in training labels, limiting their ability to learn from all available data (one common approach is to train only on ‘clean’ subsets). Previous CNNs are also not designed to make probabilistic predictions (though they have been interpreted as such), limiting the reliability of conclusions drawn using such methods (see Appendix A).
Here, we present Bayesian CNNs for morphology classification. Bayesian CNNs provide two key improvements over previous work:

We account for varying (i.e. heteroskedastic) uncertainty in volunteer responses

We predict full posteriors over the morphology of each galaxy
We first introduce a novel framework for thinking about Galaxy Zoo classifications in probabilistic terms, where volunteer responses are drawn from a binomial distribution according to an unobserved (latent) parameter: the ‘typical’ response probability (Section 2.1). We use this framework to construct CNNs that make probabilistic predictions of Galaxy Zoo classifications (Section 2.2). These CNNs predict a typical response probability for each galaxy by maximising the likelihood of the observed responses. By maximising the likelihood, they learn effectively from heteroskedastic labels; the likelihood reflects the fact that more volunteer responses are more indicative of the ‘typical’ response than fewer responses. To account for uncertainty in the CNN weights, we use Monte Carlo dropout (Gal et al., 2017a) to marginalise over possible CNNs (Section 2.3). Our final predictions (Section 2.7) are posteriors of how a typical volunteer would have responded, had they been asked about each galaxy. These can then be used to classify surveys of any conceivable scale (e.g. LSST, Euclid), helping researchers make reliable inferences about galaxy evolution using millions of labelled galaxy images.

2.1 Probabilistic Framework for Galaxy Zoo
Galaxy Zoo asks members of the public to volunteer as ‘citizen scientists’ and label galaxy images by answering a series of questions. Figure 1 illustrates the web interface.
Formally, each Galaxy Zoo decision tree question asks volunteers to view a galaxy image x and select the most appropriate answer a from the set of available answers A. This reduces to a binary choice: where there are more than two available answers (|A| > 2), we can consider each volunteer response as either a (a positive response) or not-a (a negative response).
Let k be the number of volunteers (out of N) observed to answer a for image x. We assume that there is a true fraction ρ of the population (i.e. all possible volunteers) who would give the answer a for image x. We assume that volunteers are drawn uniformly from this population, so that if we ask N volunteers about image x, we expect the distribution over the number of positive answers k to be binomial:
k \sim \mathrm{Bin}(\rho, N) \qquad (1)

\mathrm{Bin}(k \mid \rho, N) = \binom{N}{k} \rho^{k} (1 - \rho)^{N - k} \qquad (2)
This will be our model for how each volunteer response is generated. Note that ρ is a latent variable: we only observe the responses k, never ρ itself.
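As a concrete illustration of this generative model, the binomial distribution and the response-generation process can be sketched in a few lines of Python (a hedged sketch: function names are ours, not from any released codebase):

```python
import math
import random

def binomial_pmf(k, rho, n):
    """Probability of k positive responses from n volunteers,
    given a typical response probability rho (Eqn 2)."""
    return math.comb(n, k) * rho**k * (1 - rho)**(n - k)

def simulate_responses(rho, n, rng=random):
    """Draw k ~ Bin(rho, n): each volunteer independently answers
    positively with probability rho."""
    return sum(1 for _ in range(n) if rng.random() < rho)

# The pmf over all possible vote counts k = 0..N sums to one.
total = sum(binomial_pmf(k, 0.3, 40) for k in range(41))
```

Here `rho` plays the role of the latent typical response probability and `n` the number of volunteers asked; only the sampled count is ever observed, mirroring the fact that ρ itself is never measured directly.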
2.2 Probabilistic Prediction with CNNs
Having established a novel generative model for our data, we now aim to infer the likelihood of observing a particular k for each galaxy (for brevity, we omit galaxy subscripts).
Let us consider the scalar output of our neural network, f^w(x), as a (deterministic) prediction for ρ, and hence a probabilistic prediction for k:
p(k \mid x, w) = \mathrm{Bin}(k \mid f^{w}(x), N) \qquad (3)
For each labelled galaxy, we have observed k positive responses from N volunteers. We would like to find the network weights w such that the likelihood \mathcal{L} = p(k \mid x, w) is maximised (i.e. to make a maximum likelihood estimate given the observations):
\mathcal{L} = \binom{N}{k} f^{w}(x)^{k} (1 - f^{w}(x))^{N - k} \qquad (4)

\log \mathcal{L} = \log \binom{N}{k} + k \log f^{w}(x) + (N - k) \log (1 - f^{w}(x)) \qquad (5)
The combinatorial term is fixed and hence our objective function to minimise is
\mathcal{L}_{\mathrm{obj}} = - k \log f^{w}(x) - (N - k) \log (1 - f^{w}(x)) \qquad (6)
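A minimal sketch of this per-galaxy objective (Eqn 6), assuming the network output f^w(x) is available as a float `rho`; the clipping constant is our addition for numerical stability and is not part of the paper's formulation:

```python
import math

def binomial_nll(k, n, rho, eps=1e-8):
    """Negative log-likelihood of k positive responses out of n,
    given a predicted typical response probability rho (Eqn 6).
    The constant log-binomial-coefficient term is dropped."""
    rho = min(max(rho, eps), 1 - eps)  # clip to avoid log(0)
    return -(k * math.log(rho) + (n - k) * math.log(1 - rho))
```

The loss is minimised when `rho` equals the observed vote fraction k/n, and galaxies with more responses contribute a more sharply peaked loss, which is how heteroskedastic label uncertainty enters training.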
We can create a probabilistic model for k by optimising our network to make maximum likelihood estimates for the latent parameter ρ from which k is drawn.
In short, each network predicts the typical response probability ρ: the probability that a random volunteer will select a given answer for a given image.
2.3 From Probabilistic to Bayesian CNN
So far, our model is probabilistic (i.e. the output is the parameter of a probabilistic model) but not Bayesian. If we asked N volunteers, we would predict k positive answers with a posterior of Bin(k | ρ̂, N), where ρ̂ = f^w(x) is our network prediction of ρ for galaxy x. However, this treats the model, w, as fixed and known. Instead, the Bayesian approach treats the model itself as a random variable.
Intuitively, there are many possible models w that could be trained from the same training data D. To predict the posterior of k given D, we should marginalise over these possible models:
p(k \mid x, D) = \int p(k \mid x, w) \, p(w \mid D) \, dw \qquad (7)
We need to know how likely we were to train a particular model given the data available: p(w | D). Unfortunately, we don’t know how likely each model is; we only observe the single model we actually trained.
Instead, consider dropout (Srivastava et al., 2014). Dropout is a regularization method that temporarily removes random neurons according to a Bernoulli distribution, where the probability of removal (the ‘dropout rate’) is a hyperparameter to be chosen. Dropout may be interpreted as taking the trained model and permuting it into a different one (Srivastava et al., 2014). Gal (2016) introduced the approach of approximating the distribution of models one might have trained, but didn’t, with the distribution of networks produced by applying dropout:

p(w \mid D) \approx q(w) \qquad (8)

where q(w) is the distribution over networks formed by removing neurons from the trained network according to the dropout distribution. This is the Monte Carlo Dropout approximation (hereafter MC Dropout). See Appendix B for a more formal overview.
Choosing the dropout rate affects the approximation; greater dropout rates lead the model to estimate higher uncertainties (on average). Following convention, we arbitrarily choose a dropout rate of 0.5. We discuss the implications of using an arbitrary dropout rate, and opportunities for improvement, in Section 4.
Applying MC Dropout to marginalise over models (Eqn. 7):
p(k \mid x, D) \approx \int p(k \mid x, w) \, q(w) \, dw \qquad (9)
In practice, following Gal (2016), we sample from q(w) with T forward passes using dropout at test time (i.e. Monte Carlo integration):
p(k \mid x, D) \approx \frac{1}{T} \sum_{t=1}^{T} \mathrm{Bin}(k \mid f^{w_t}(x), N), \quad w_t \sim q(w) \qquad (10)
Using MC Dropout, we can improve our posteriors by (approximately) marginalising over the possible models we might have trained.
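The Monte Carlo integration above amounts to averaging binomial pmfs over T dropout forward passes. A toy sketch, with a stand-in callable replacing the dropout-perturbed CNN (all names are ours):

```python
import math
import random

def binomial_pmf(k, rho, n):
    return math.comb(n, k) * rho**k * (1 - rho)**(n - k)

def mc_dropout_posterior(stochastic_rho, n, T=30):
    """Approximate p(k | x, D) by averaging binomial pmfs over T
    dropout forward passes (Eqn 10). `stochastic_rho` is any callable
    returning one network's prediction rho_t with dropout applied."""
    pmf = [0.0] * (n + 1)
    for _ in range(T):
        rho_t = stochastic_rho()
        for k in range(n + 1):
            pmf[k] += binomial_pmf(k, rho_t, n) / T
    return pmf

# Toy stand-in for a dropout-perturbed CNN: predictions scatter near 0.3.
posterior = mc_dropout_posterior(lambda: random.uniform(0.25, 0.35), n=40)
```

Because each pass centres its binomial on a different ρ̂, the averaged posterior is broader than any single pass, which is exactly the widening effect described in Section 2.7.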
To demonstrate our probabilistic model and the use of MC Dropout, we train models to predict volunteer responses to the ‘Smooth or Featured’ and ‘Bar’ questions on Galaxy Zoo 2 (Section 2.5).
2.4 Data  Galaxy Zoo 2
Galaxy Zoo 2 (GZ2) classified all 304,122 galaxies from the Sloan Digital Sky Survey (SDSS) DR7 Main Galaxy Sample (Strauss et al., 2002; Abazajian et al., 2009) with r < 17 and petroR90_r > 3 arcsec (where petroR90_r is the Petrosian radius containing 90% of the r-band flux). Classifying 304,122 galaxies required 60 million volunteer responses collected over 14 months.
GZ2 is the largest homogeneous galaxy sample with reliable measurements of detailed morphology, and hence an ideal data source for this work. GZ2 has been extensively used as a benchmark for comparing machine learning methods for classifying galaxy morphology. The original GZ2 data release (Willett et al., 2013) included comparisons with (pre-CNN) machine learning methods by Baillard et al. (2011) and Huertas-Company et al. (2011). GZ2 subsequently provided the data for seminal work on CNN morphology classification (Dieleman et al., 2015) and continues to be used for validating new approaches (Domínguez Sánchez et al., 2018; Khan et al., 2018).
We use the ‘GZ2 Full Sample’ catalogue (hereafter ‘GZ2 catalogue’), available from data.galaxyzoo.org. To avoid the possibility of duplicated galaxies or varying depth imaging, we exclude the ‘stripe82’ subset.
The GZ2 catalogue provides aggregate volunteer responses at each of three post-processing stages: raw vote counts (and derived vote fractions), consensus vote fractions, and redshift-debiased vote fractions. The raw vote counts are simply the number of users who selected each answer. The consensus vote fractions are calculated by iteratively reweighting each user based on their overall agreement with other users. The debiased fractions estimate how each galaxy would have been classified if viewed at low redshift (Hart et al., 2016). Unlike recent work (Domínguez Sánchez et al., 2018; Khan et al., 2018), we use the raw vote counts. The redshift-debiased fractions estimate the true morphology of a galaxy, not what the image actually shows; to predict what volunteers would say about an image, we should only consider what the volunteers see. We believe that debiasing is better applied after predicting responses, not before. We caution the reader that our performance metrics are therefore not directly comparable to those of Domínguez Sánchez et al. (2018) and Khan et al. (2018), who use the debiased fractions as ground truth.
2.5 Application
2.5.1 Tasks
To test our probabilistic CNNs, we aim to predict volunteer responses for the ‘Smooth or Featured’ and ‘Bar’ questions.
The ‘Smooth or Featured’ question asks volunteers ‘Is the galaxy simply smooth and rounded, with no sign of a disk?’, with common answers ‘Smooth’ and ‘Featured or Disk’ (the question includes a third ‘Artifact’ answer, but artifacts are sufficiently rare, with 0.08% of galaxies having ‘Artifact’ as the majority response, that predicting ‘Smooth’ or ‘Not Smooth’ is sufficient to separate smooth and featured galaxies in practice). As ‘Smooth or Featured’ is the first decision tree question, it is always asked, and therefore every galaxy has N = 40 ‘Smooth or Featured’ responses (technical limitations during GZ2 caused 26,530 galaxies to have N ≠ 40; we exclude these galaxies for simplicity). With N fixed to 40 responses, the loss function (Eqn. 6) depends only on k (for a given model w).

The ‘Bar’ question asks volunteers ‘Is there a sign of a bar feature through the center of the galaxy?’, with answers ‘Bar (Yes)’ and ‘No Bar’. Because ‘Bar’ is only asked if volunteers respond ‘Featured’ and ‘Not Edge-On’ to previous questions, each galaxy can have anywhere from 0 to 40 total responses, typically around 10 (Figure 2). This scenario is common; only 2 questions are always asked, and most questions have fewer than 40 total responses (Figure 2). Building probabilistic CNNs that learn better by appreciating the varying count uncertainty in volunteer responses is a key advantage of our design. We achieve this by maximising the likelihood of the observed responses k given our predicted ‘typical’ response ρ̂ and N (Section 2.2).
2.5.2 Architecture
Our CNN architecture is shown in Figure 3. It is inspired by VGG16 (Simonyan & Zisserman, 2015), but scaled down to be shallower and narrower in order to fit our computational budget. We use a softmax final layer to ensure the predicted typical vote fraction ρ̂ lies between 0 and 1, as required by our binomial loss function (Eqn. 6).
We are primarily concerned with accounting for label uncertainty and predicting posteriors, rather than maximising performance metrics. That said, our architecture is competitive with, or outperforms, previous work (Section 2.7.1). Our overall performance can likely be significantly improved with more recent architectures (Szegedy et al., 2015; He et al., 2015; Huang et al., 2017) or a larger computational budget.
2.5.3 Augmentations
To generate our training and test images, we resize the original 424×424×3 pixel GZ2 png images shown to volunteers into 256×256×3 uint8 matrices (unsigned 8-bit integers, i.e. 0–255 inclusive; after rescaling, this is sufficient to express the dynamic range of the images, as judged by visual inspection, while significantly reducing memory requirements versus the original 32-bit float flux measurements) and save these matrices in TFRecords (to facilitate rapid loading). When serving training images to our model, each image has the following transformations applied:

Average over channels to create a greyscale image

Random horizontal and/or vertical flips

Rotation through a randomly selected angle (using nearest-neighbour interpolation to fill pixels)

Adjusting the image contrast by a factor uniformly selected from 98% to 102% of the original contrast

Cropping either randomly (‘Smooth or Featured’) or centrally (‘Bar’) according to a zoom level uniformly selected from 1.1x to 1.3x (‘Smooth or Featured’) or 1.7x to 1.9x (‘Bar’)

Resizing to a target size of 128x128(x1)
We train on greyscale images because colour is often predictive of galaxy type (E and S0 are predominantly redder, while S are bluer, Roberts & Haynes 1994) and we wish to ensure that our classifier does not learn to make biased predictions from this correlation. For example, a galaxy should be classified as smooth because it appears smooth, and not because it is red and therefore more likely to be smooth. Otherwise, we bias any later research investigating correlations between morphology and colour.
Random flips, rotations, contrast adjustments, and zooms (via crops) help the CNN learn that predictions should be invariant to these transformations; our predictions should not change because the image is flipped, for example. We choose a higher zoom level for ‘Bar’ because the original image radius for GZ2 was designed to show the full galaxy and any immediate neighbours (Willett et al., 2013), yet bars are generally found at the centre of galaxies (Kruk et al., 2017). We know that the ‘Bar’ classification should be invariant to all but the central region of the image, and therefore choose to sacrifice the outer regions in favour of increased resolution in the centre. Cropping and resizing are performed last to minimise resolution loss due to aliasing. Images are resized to match our computational budget.
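The per-image augmentation parameters described above might be sampled roughly as follows (a hedged sketch: function and field names are ours, and the rotation bound is an assumption, as the text does not state the range):

```python
import random

def sample_augmentation(question):
    """Sample the random augmentation parameters applied to one image.
    Zoom ranges follow the text: wider random crops for 'Smooth or
    Featured', tighter central crops for 'Bar'."""
    if question == 'smooth':
        zoom, crop = random.uniform(1.1, 1.3), 'random'
    else:  # 'bar': bars sit at the galaxy centre, so crop centrally
        zoom, crop = random.uniform(1.7, 1.9), 'central'
    return {
        'flip_horizontal': random.random() < 0.5,
        'flip_vertical': random.random() < 0.5,
        # rotation bound assumed; the text does not specify the range
        'rotation_deg': random.uniform(0.0, 360.0),
        'contrast': random.uniform(0.98, 1.02),  # 98% to 102%
        'zoom': zoom,
        'crop': crop,
        'target_size': (128, 128, 1),  # greyscale, resized last
    }
```

Applying a freshly sampled parameter set to every image, at training and at test time, is what lets the aggregated posterior average over orientations and crops.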
We also apply these augmentations at test time. This allows us to marginalise over any unlearned invariances with MC Dropout, as part of marginalising over networks (Section 2.3). Each permuted network makes predictions on a uniquely-augmented image. The aggregated posterior (over many forward passes T) is therefore independent of e.g. orientation, enforcing our domain knowledge.
2.6 Experimental Setup
For each question, we randomly select 2500 galaxies as a test subset and train on the remaining galaxies (following the selection criteria described in Section 2.4). Unlike Domínguez Sánchez et al. (2018) and Khan et al. (2018), we do not select a ‘clean’ sample of galaxies with extreme vote fractions on which to train. Instead, we take full advantage of the responses collected for every galaxy by carefully accounting for the vote uncertainty in galaxies with fewer responses (Eqn 6).
For ‘Smooth or Featured’, we use a final training sample of 176,328 galaxies. For ‘Bar’, we train and test only on galaxies with at least 10 ‘Bar’ responses (56,048 galaxies). Without applying this cut, we find that models fail to learn; performance fails to improve from random initialisation. This may be because galaxies with few ‘Bar’ responses must have few ‘Featured’ responses and so are almost all smooth and unbarred, leading to increasingly unbalanced typical vote fractions ρ.
Training was performed on an Amazon Web Services (AWS) p2.xlarge EC2 instance with an NVIDIA K80 GPU. Training each model from random initialisation takes approximately eight hours.
Using the trained models, we make predictions for the typical vote fraction ρ of each galaxy in the test subsets. We then evaluate performance by comparing p(k | x, D), our posterior for k positive responses from N volunteers, with the observed k from the N Galaxy Zoo volunteers actually asked.
2.7 Results
We find that our probabilistic CNNs produce posteriors which are reliable and informative.
For each question, we first compare a random selection of posteriors from either 1 or 30 MC Dropout forward passes (i.e. 1 or 30 MC-dropout-approximated ‘networks’). Figures 4 and 5 show our posteriors for ‘Smooth or Featured’ and ‘Bar’, respectively.
Without MC Dropout, our posteriors are binomial. The spread of each posterior reflects two effects. First, it reflects the extremity of ρ̂, which previous authors have expressed as ‘volunteer agreement’ or ‘confidence’ (Dieleman et al., 2015; Domínguez Sánchez et al., 2018): Bin(k | ρ̂, N) is narrower where ρ̂ is close to 0 or 1. Second, the spread reflects N, the number of volunteers asked. For ‘Smooth or Featured’, where N is approximately fixed, this second effect is minor. For ‘Bar’, where N varies significantly between 10 and 40, the posteriors are more spread (less precise) where fewer volunteers have been asked.
With MC Dropout, our posteriors are a superposition of binomials from each forward pass, each centered on a different ρ̂. In consequence, the MC Dropout posteriors are more uncertain. This matches our intuition: by marginalising over the different weights and augmentations we might have used, we expect our predictions to broaden.
Given that each single network is relatively confident and the MC-dropout-marginalised model is relatively uncertain, which should be used? We prefer posteriors which are well-calibrated, i.e. which reflect the true uncertainty in our predictions.
To quantify calibration, we introduce a novel method: we compare the predicted and observed counts of k within increasing ranges of acceptable error. We outline this procedure below.
Choose some maximum acceptable error ε in predicting each k. Over all galaxies, sum the total probability (from our predicted posteriors) that |k − k̂| ≤ ε for each galaxy. We call this the expected count: how many galaxies the posteriors suggest should have an observed k within ε of the model prediction k̂. For example, our ‘Bar’ model expects 2488 of 2500 galaxies in the ‘Bar’ test set to have an observed k within ε of k̂.
C_{\mathrm{exp}}(\epsilon) = \sum_{i} \sum_{k : \, |k - \hat{k}_i| \leq \epsilon} p(k \mid x_i, D) \qquad (11)
Next, over all galaxies, count how often the observed k is within that maximum error ε. We call this the ‘actual’ count: how many galaxies are actually observed to have k within ε of the model prediction k̂. For example, we observe 2404 of 2500 galaxies in the ‘Bar’ test set to have k within ε of k̂.
C_{\mathrm{act}}(\epsilon) = \sum_{i} \mathbb{1}\left[ \, |k_i - \hat{k}_i| \leq \epsilon \, \right] \qquad (12)
For a perfectly calibrated posterior, the actual and expected counts would be identical: the model would be correct (within some given maximum error) as often as it expects to be correct. For an overconfident posterior, the expected count will be higher, and for an underconfident posterior, the actual count will be higher.
We find that our predicted posteriors of volunteer votes are fairly well-calibrated; our model is correct approximately as often as it expects to be correct. Figure 6 compares the expected and actual counts for our model, with ε varying between 0 and 40. Tables 1 and 2 show calibration results for our ‘Smooth’ and ‘Bar’ models, with and without MC Dropout, evaluated on their respective test sets. Coverage error is calculated as:
\mathrm{Coverage\ error}(\epsilon) = \frac{C_{\mathrm{exp}}(\epsilon) - C_{\mathrm{act}}(\epsilon)}{N_{\mathrm{galaxies}}} \qquad (13)
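The expected count, actual count, and coverage error can be computed directly from the per-galaxy posteriors. A sketch (function and argument names are ours, and normalising the coverage error by the number of galaxies is our reading of the quoted percentages):

```python
def calibration_counts(posteriors, observed, predicted, eps):
    """Compare expected vs actual counts of |k - k_hat| <= eps.
    posteriors: per-galaxy pmf lists over k = 0..N
    observed:   per-galaxy observed vote counts k
    predicted:  per-galaxy most likely vote counts k_hat"""
    # Expected count: total posterior probability mass within eps of k_hat
    expected = sum(
        sum(p for k, p in enumerate(pmf) if abs(k - k_hat) <= eps)
        for pmf, k_hat in zip(posteriors, predicted)
    )
    # Actual count: galaxies whose observed k really fell within eps
    actual = sum(
        1 for k, k_hat in zip(observed, predicted) if abs(k - k_hat) <= eps
    )
    coverage_error = (expected - actual) / len(observed)
    return expected, actual, coverage_error
```

A perfectly calibrated model gives zero coverage error at every ε; a positive value signals overconfidence (the model expects to be right more often than it is).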
Table 1: Calibration of the ‘Smooth or Featured’ model.

Max Error ε | Coverage Error without MC | Coverage Error with MC
0 | 42.5% | 13.6%
2 | 41.4% | 16.3%
5 | 22.5% | 9.1%
10 | 5.0% | 3.4%
15 | 1.2% | 0.7%
Table 2: Calibration of the ‘Bar’ model.

Max Error ε | Coverage Error without MC | Coverage Error with MC
0 | 58.0% | 32.7%
2 | 44.0% | 19.2%
5 | 20.2% | 10.6%
10 | 5.0% | 2.9%
15 | 1.1% | 0.6%
For both questions, the single network (without MC Dropout) is visibly overconfident. The MC-dropout-marginalised network shows a significant improvement in calibration over the single network. We interpret this as evidence of the importance of marginalising over both networks and augmentations when estimating uncertainty (Section 2.3).
When making precise predictions, the MC-dropout-marginalised network remains somewhat overconfident. However, as the acceptable error is allowed to increase, the network is increasingly well-calibrated. For example, the predicted probability that exactly k̂ (i.e. ε = 0) of N volunteers respond ‘Bar’ is overestimated by 33%. In contrast, the predicted probability that the observed k lies within ε = 10 of k̂ is within 10% of the true probability. We discuss future approaches to further improve calibration in Section 4.
A key method for galaxy evolution research is to compare the distribution of some morphology parameter across different samples (e.g. are spirals more common in dense environments, Wang et al. 2018; do bars fuel AGN, Galloway et al. 2015; do mergers inhibit LERGs, Gordon et al. 2019; etc.). We would therefore like the distributions of predicted k and ρ, over all galaxies, to approximate the observed distributions of k and ρ (the ‘observed’ ρ is approximated as k/N, which has a similar distribution to the true latent, unobserved ρ over a large sample). In short, we would like our predictions to be globally unbiased. Figure 7 compares our predicted and actual distributions of k and ρ. We find that our predicted distributions match the observed distributions well for most values of k and ρ. Our model appears somewhat reticent to predict extreme ρ (and therefore extreme k) for both questions. This may be a consequence of the difficulty of predicting the behaviour of single volunteers. We discuss this further in Section 4.
2.7.1 Comparison to Previous Work
The key goals of this paper are to introduce probabilistic predictions for votes and (in the following section) to apply this to perform active learning. However, by reducing our probabilistic predictions to point estimates, we can also provide conventional predictions and performance metrics.
Previous work has focused on deterministic predictions of either the votes (Dieleman et al., 2015) or the majority response (Domínguez Sánchez et al., 2018; Khan et al., 2018). While differences in sample selection and training data prevent a precise comparison, our model performs well at both tasks.
When reducing our posteriors to the most likely vote count k̂, we achieve a root-mean-square error of 0.10 in vote fraction (approximately 4 votes) for ‘Smooth or Featured’ and 0.15 for ‘Bar’. We can also reduce the same posteriors to the most likely majority responses. Below, we present our results in the style of the ROC curves of Domínguez Sánchez et al. (2018) (hereafter DS+18, Figure 8) and the confusion matrices of Khan et al. (2018) (hereafter K+18, Figure 9), using our reduced posteriors. We find that our model likely outperforms DS+18 and is likely comparable with K+18.
Overall, these conventional metrics demonstrate that our models are sufficiently accurate for practical use in galaxy evolution research even when reduced to point estimates.
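Reducing a vote-count posterior to a point estimate, and scoring it by root-mean-square error on the vote fraction, might look like the following (a sketch; helper names are ours):

```python
import math

def most_likely_count(pmf):
    """Reduce a vote-count posterior to its most likely k (point estimate)."""
    return max(range(len(pmf)), key=lambda k: pmf[k])

def vote_fraction_rmse(posteriors, observed, n_volunteers):
    """Root-mean-square error between predicted and observed vote
    fractions, given per-galaxy pmfs, observed counts k, and N values."""
    errors = [
        (most_likely_count(pmf) / n - k / n) ** 2
        for pmf, k, n in zip(posteriors, observed, n_volunteers)
    ]
    return math.sqrt(sum(errors) / len(errors))
```

Majority-response metrics (ROC curves, confusion matrices) follow the same pattern: threshold the point estimate at a fraction of 0.5 and compare against the observed majority.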
3 Active Learning
In the first half of this paper, we presented Bayesian CNNs that predict posteriors for the morphology of each galaxy. In the second, we show how we can use these posteriors to select the most informative galaxies for labelling by volunteers, helping humans and algorithms work together to do better science than either alone.
CNNs, and other deep learning methods, rely on vast training sets of labelled examples
(Simonyan & Zisserman, 2015; Szegedy et al., 2015; Russakovsky et al., 2015; He et al., 2015; Huang et al., 2017). As we argued in Section 1, we urgently need methods that reduce this demand for labelled data in order to fully exploit current and next-generation surveys.

Previous approaches to morphology classification have largely used fixed datasets of labelled galaxies acquired prior to model training. This is true both for authors applying direct training (Huertas-Company et al., 2015; Domínguez Sánchez et al., 2018; Fischer et al., 2018; Walmsley et al., 2018; Huertas-Company et al., 2018) and for those applying transfer learning (Ackermann et al., 2018; Pérez-Carrasco et al., 2018; Domínguez Sánchez et al., 2019b). Instead, we ask: to train the best model, which galaxies should volunteers label?
Selecting the most informative data to label is known as active learning. Active learning is useful when acquiring labels is difficult (expensive, time-consuming, requiring experts, private, etc.). This scenario is common for many, if not most, real-world problems. Terrestrial examples include detecting cardiac arrhythmia (Rahhal et al., 2016), sentiment analysis of online reviews (Zhou et al., 2013), and Earth observation (Tuia et al., 2011; Liu et al., 2017). Astrophysical examples include stellar spectral analysis (Solorio et al., 2005), variable star classification (Richards et al., 2012), telescope design and time allocation (Xia et al., 2016), redshift estimation (Hoyle et al., 2016) and spectroscopic follow-up of supernovae (Ishida et al., 2018).

3.1 Active Learning Approach for Galaxy Zoo
Given that only a small subset of galaxies can be labelled by humans, we should intelligently select which galaxies to label. The aim is to make CNNs which are just as accurate without having to label as many galaxies.
Our approach is as follows. First, we train our CNN on a small randomly chosen initial training set. Then, we repeat the following active learning loop:

Measure the CNN prediction uncertainty on all currently-unlabelled galaxies (excluding a fixed test set)

Apply an acquisition function (Section 3.2) to select the most uncertain galaxies for labelling

Upload these galaxies to Galaxy Zoo and collect volunteer classifications (in this work, simulated with historical classifications)

Retrain the CNN and repeat
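The loop above can be sketched as a skeleton, with placeholder callables standing in for the CNN, the acquisition function (Section 3.2), and the Galaxy Zoo labelling step (all names, and the batch size and iteration count, are illustrative assumptions):

```python
def active_learning_loop(model, labelled, unlabelled, test_set,
                         acquisition, request_labels,
                         batch_size=128, iterations=5):
    """Skeleton of the active learning loop described above.
    `acquisition` scores how informative each unlabelled galaxy would
    be if labelled; `request_labels` stands in for uploading galaxies
    to Galaxy Zoo (here, simulated with historical classifications)."""
    for _ in range(iterations):
        model.train(labelled)  # (re)train on all labelled galaxies
        # Score every currently-unlabelled galaxy, test set excluded
        pool = [g for g in unlabelled if g not in test_set]
        scores = {g: acquisition(model, g) for g in pool}
        # Select the most informative galaxies for labelling
        chosen = sorted(pool, key=scores.get, reverse=True)[:batch_size]
        labelled.extend(request_labels(chosen))
        unlabelled = [g for g in pool if g not in chosen]
    return model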
Other astrophysics research has combined crowdsourcing with machine learning models. Wright et al. (2017) classified supernovae in Pan-STARRS (Kaiser et al., 2010) by aggregating crowdsourced classifications with the predictions of an expert-trained CNN, and showed that the combined human/machine ensemble outperforms either alone. However, this approach is not directly feasible for Galaxy Zoo, where the scale of the data prevents us from recording crowdsourced classifications for every image.
A previous effort to optimise task assignment was made by Beck et al. (2018), who developed a ‘decision engine’ to allocate galaxies for classification by either human or machine (via a random forest). Their system assigns each galaxy to the categories ‘Smooth’ or ‘Featured’ (the actual categories used were ‘Featured’ and ‘Not Featured’, i.e. Smooth plus Artifact, but they argue that Artifact is sufficiently rare not to affect the results), using SWAP (Marshall et al., 2016) to decide how many responses to collect. In contrast, the system presented here only requests responses for informative galaxies, but (for simplicity) requests the same number of responses for each informative galaxy. Another important difference is that Beck et al. (2018) train their model exclusively on galaxies which can be confidently assigned to a class, while the use of uncertainty in our model allows learning to occur from every classified galaxy.

This work is the first time active learning has been used for morphological classification, and the first time in astrophysics that active learning has been combined with CNNs or crowdsourcing.
In the following sections (3.2, 3.3, 3.4), we derive an acquisition function that selects the most informative galaxies for labelling by volunteers. We do this by combining the general acquisition strategy BALD (MacKay, 1992; Houlsby et al., 2011) with our probabilistic model and Monte Carlo Dropout (Gal, 2016). We then use historical data to simulate applying our active learning strategy to Galaxy Zoo (Section 3.5) and compare the performance of models trained on galaxies selected using the mutual information versus galaxies selected randomly (Section 3.6).
3.2 BALD and Mutual Information
Bayesian Active Learning by Disagreement, BALD (MacKay, 1992; Houlsby et al., 2011), is a general information-theoretic acquisition strategy. BALD selects subjects to label by maximising the mutual information between the model parameters $\theta$ and the probabilistic label prediction $y$. We begin deriving our acquisition function by describing BALD and the mutual information.
We have observed data $D = \{(x_i, y_i)\}$. Here, $x_i$ is the $i$th subject and $y_i$ is the label of interest. We assume there are (unknown) parameters $\theta$ that model the relationship between input subjects $x$ and output labels $y$, $p(y|x, \theta)$. We would like to infer the posterior of $\theta$, $p(\theta|D)$. Once we know $p(\theta|D)$, we can make predictions on new galaxy images.
The mutual information $\mathbb{I}[A, B]$ measures how much information some random variable $A$ carries about another random variable $B$, defined as:

$$\mathbb{I}[A, B] = \mathbb{H}[A] - \mathbb{E}_{p(B)}\left[\mathbb{H}[A|B]\right] \qquad (14)$$

where $\mathbb{H}$ is the entropy operator and $\mathbb{E}_{p(B)}[\mathbb{H}[A|B]]$ is the expected entropy of $A$ given $B$, marginalised over $B$ (Murphy, 2012).
We would like to know how much information each label $y$ provides about the model parameters $\theta$. We can then pick subjects to maximise the mutual information $\mathbb{I}[y, \theta]$, helping us to learn efficiently. Substituting $y$ and $\theta$ for $A$ and $B$:

$$\mathbb{I}[y, \theta] = \mathbb{H}[y|x, D] - \mathbb{E}_{p(\theta|D)}\left[\mathbb{H}[y|x, \theta]\right] \qquad (15)$$
The first term is the entropy of our prediction for $y$ given the training data $D$, implicitly marginalising over the possible model parameters $\theta$. We refer to this as the predictive entropy. The predictive entropy reflects our overall uncertainty in $y$ given the training data available.
The second term is the expected entropy of our prediction made with a given $\theta$, sampling over each $\theta$ we might have inferred from $D$. The expected entropy reflects the typical uncertainty of each particular model on $y$. Expected entropy has a lower bound set by the inherent difficulty in predicting $y$ from $x$, regardless of the available labelled data.
Confident disagreement between possible models leads to high mutual information. For high mutual information, we should be highly uncertain about $y$ after marginalising over all the models we might infer (high $\mathbb{H}[y|x, D]$), but have each particular model be confident (low expected $\mathbb{H}[y|x, \theta]$). If we are uncertain overall, but each particular model is certain, then the models must confidently disagree.
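The ‘confident disagreement’ intuition can be checked numerically. The sketch below is our own minimal illustration (not the paper's code), comparing two model ensembles over a binary label: one where the models confidently agree and one where they confidently disagree.

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a discrete distribution (natural log)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def mutual_information(model_probs):
    """BALD score: entropy of the mean prediction (predictive entropy),
    minus the mean entropy of each individual model's prediction."""
    model_probs = np.asarray(model_probs, dtype=float)
    predictive = entropy(model_probs.mean(axis=0))
    expected = np.mean([entropy(p) for p in model_probs])
    return predictive - expected

# Two models that confidently agree: low predictive entropy, low score
agree = [[0.95, 0.05], [0.95, 0.05]]
# Two models that confidently disagree: high predictive entropy, but still
# low per-model entropy, hence a high score
disagree = [[0.95, 0.05], [0.05, 0.95]]

print(mutual_information(agree))     # 0: a label teaches us nothing
print(mutual_information(disagree))  # ~0.49 nats: a label is informative
```

In practice the ‘models’ are dropout-sampled versions of a single trained network, as described in the next section.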
3.3 Estimating Mutual Information
Rewriting the mutual information explicitly, replacing $y$ with our labels $k$ and $\theta$ with the network weights $w$:

$$\mathbb{I}[k, w] = \mathbb{H}[k|x, D] - \mathbb{E}_{p(w|D)}\left[\mathbb{H}[k|x, w]\right] \qquad (16)$$
Gal et al. (2017a) showed that we can use Eqn. 8 to replace $p(w|D)$ in the mutual information (Eqn. 16) with the dropout approximating distribution $q(w)$:

$$\mathbb{I}[k, w] \approx \mathbb{H}[k|x, D] - \mathbb{E}_{q(w)}\left[\mathbb{H}[k|x, w]\right] \qquad (17)$$
and again sample $w \sim q(w)$ with $T$ forward passes using dropout at test time (i.e. Monte Carlo integration):

$$\mathbb{E}_{q(w)}\left[\mathbb{H}[k|x, w]\right] \approx \frac{1}{T}\sum_{t=1}^{T}\mathbb{H}[k|x, \hat{w}_t], \qquad \hat{w}_t \sim q(w) \qquad (18)$$
Next, we need a probabilistic prediction for $k$, $p(k|x, w)$. Here, we diverge from previous work.
Recall that we trained our network to make probabilistic predictions for $k$ by estimating the latent parameter $\rho$ from which $k$ is Binomially drawn (Eqn. 3). Substituting the probabilistic predictions of Eqn. 3, $p(k|x, w) = \text{Bin}(k|f^w(x), N)$, into the mutual information:

$$\mathbb{I}[k, w] \approx \mathbb{H}\left[\mathbb{E}_{q(w)}\left[\text{Bin}(k|f^w(x), N)\right]\right] - \mathbb{E}_{q(w)}\left[\mathbb{H}\left[\text{Bin}(k|f^w(x), N)\right]\right] \qquad (19)$$

Or concisely, writing $\rho_t = f^{\hat{w}_t}(x)$ for the prediction of the $t$th dropout-sampled network:

$$\mathbb{I}[k, w] \approx \mathbb{H}\left[\frac{1}{T}\sum_{t=1}^{T}\text{Bin}(k|\rho_t, N)\right] - \frac{1}{T}\sum_{t=1}^{T}\mathbb{H}\left[\text{Bin}(k|\rho_t, N)\right] \qquad (20)$$
A novel complication is that we do not know $N$, the total number of responses, prior to labelling. In GZ2, each subject is shown to a fixed number of volunteers, but (due to the decision tree) $N$ for each question will depend on responses to the previous question. Further, technical limitations mean that even for the first question (‘Smooth or Featured’), $N$ can vary (Figure 2). We (implicitly, for clarity) approximate $N$ with the expected value $\bar{N}$ for that question. In effect, we calculate our acquisition function with $N$ set to the number of responses we would expect to receive, were we to ask volunteers to label this galaxy.
To summarise, Eqn. 20 asks: how much additional information would be gained about the network parameters $w$, which we use to predict $\rho$ and hence $k$, were we to ask $\bar{N}$ people about subject $x$?
3.4 Entropy Evaluation
Having approximated $p(w|D)$ with dropout and calculated $p(k|x, w)$ with our probabilistic model, all that remains is to calculate the entropies of each term.
$k$ is discrete and hence we can directly calculate the entropy by summing over each possible state:

$$\mathbb{H}[k|x, w] = -\sum_{k=0}^{N}\text{Bin}(k|f^w(x), N)\,\log\text{Bin}(k|f^w(x), N) \qquad (21)$$
For $\mathbb{H}[k|x, D]$, we can also enumerate over each possible $k$, where the probability of each $k$ is the mean of the posterior predictions (sampled with dropout) for that $k$:

$$\mathbb{H}[k|x, D] \approx -\sum_{k=0}^{N}\left[\frac{1}{T}\sum_{t=1}^{T}\text{Bin}(k|\rho_t, N)\right]\log\left[\frac{1}{T}\sum_{t=1}^{T}\text{Bin}(k|\rho_t, N)\right] \qquad (22)$$
and hence our final expression for the mutual information is:

$$\mathbb{I}[k, w] \approx -\sum_{k=0}^{N}\left[\frac{1}{T}\sum_{t=1}^{T}\text{Bin}(k|\rho_t, N)\right]\log\left[\frac{1}{T}\sum_{t=1}^{T}\text{Bin}(k|\rho_t, N)\right] + \frac{1}{T}\sum_{t=1}^{T}\sum_{k=0}^{N}\text{Bin}(k|\rho_t, N)\,\log\text{Bin}(k|\rho_t, N) \qquad (23)$$
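Since $k$ is discrete, this final expression can be evaluated directly. The sketch below is our own illustration (not the released code): it computes the acquisition score from $T$ samples of the predicted latent parameter $\rho$, with $N$ fixed at its expected value.

```python
import numpy as np
from scipy.stats import binom

def entropy_discrete(p):
    """Entropy of a discrete distribution given as a vector of probabilities."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def binomial_mutual_information(rho_samples, n_responses):
    """Mutual information between label k and the weights, estimated from
    T dropout samples of the latent parameter rho (Eqn. 23, up to MC error)."""
    k = np.arange(n_responses + 1)
    # p(k | rho_t, N) for each dropout sample t: shape (T, N + 1)
    per_sample = np.stack([binom.pmf(k, n_responses, rho) for rho in rho_samples])
    predictive_entropy = entropy_discrete(per_sample.mean(axis=0))        # Eqn. 22
    expected_entropy = np.mean([entropy_discrete(p) for p in per_sample]) # Eqn. 21
    return predictive_entropy - expected_entropy

# Dropout samples that disagree about rho score far higher than samples
# that agree, mirroring the 'confident disagreement' picture
print(binomial_mutual_information(np.array([0.2, 0.8]), 40))
print(binomial_mutual_information(np.array([0.5, 0.5]), 40))
```

In the real system, `rho_samples` would be the $T$ dropout forward passes for one galaxy, and `n_responses` the expected $\bar{N}$ for the question.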
3.5 Application
To evaluate our active learning approach, we simulate applying active learning during GZ2. We compare the performance of our models when trained on galaxies selected using the mutual information versus galaxies selected randomly. For simplicity, each simulation trains a model to predict either ‘Smooth or Featured’ responses or ‘Bar’ responses.
For the ‘Smooth or Featured’ simulation, we begin with a small initial training set of 256 random galaxies. We train a model and predict $p(k|x, \bar{N})$, where $\bar{N}$ is the expected number of volunteers to answer the question, calculated as the mean total number of responses for that question over all previous galaxies (see Figure 2). We then use our BALD acquisition function (Eqn. 20) to identify the 128 most informative galaxies to label. To simulate uploading the informative galaxies to GZ and receiving classifications, we retrieve previously collected GZ2 classifications. Finally, we add the newly-labelled informative galaxies to our training set. We refer to each execution of this process (training our model, selecting new galaxies to label, and adding them to the training set) as an iteration. We repeat for 20 iterations, recording the performance of our model throughout.
We selected 256 initial galaxies and 128 further galaxies per iteration to match the scale of training data over which our ‘Smooth or Featured’ model performance varies. Our relatively shallow model reaches peak performance at around 3000 random galaxies; more galaxies do not significantly improve performance.
For the ‘Bar’ simulation, we observe that performance saturates after more galaxies (approx. 6000) and so we double the scale; we start with 512 galaxies and acquire 256 further galaxies per iteration. This matches previous results (and intuition) that ‘Smooth or Featured’ is an easier question to answer than ‘Bar’. Identifying bars, particularly weak bars, is challenging for both humans (Masters et al., 2012; Kruk et al., 2018) and machines (including CNNs, Domínguez Sánchez et al. 2018).
To measure the effect of our active learning strategy, we also train a baseline classifier by providing batches of randomly selected galaxies. We aim to compare two acquisition strategies for deciding which galaxies to label: selecting galaxies with maximal mutual information (active learning via BALD and MC Dropout) or selecting randomly (baseline). We evaluate performance on a fixed test set of 2500 random galaxies. We repeat each simulation four times to reduce the risk of spurious results from random variations in performance.
3.6 Results
For both ‘Smooth’ and ‘Bar’ simulations, our probabilistic models achieve equal performance on fewer galaxies using active learning versus random galaxy selection. We show model performance by iteration for the ‘Smooth’ (Figure 10) and ‘Bar’ (Figure 11) simulations. We display three metrics: training loss (model surprise on previously-seen images, measured by Eqn. 6), evaluation loss (model surprise on unseen images), and root-mean-square error (RMSE). We measure the RMSE between our maximum-likelihood estimates $\hat{\rho}$ and the observed vote fractions, as $\rho$ itself is never observed and hence cannot be used for evaluation. Due to the high variance in metrics between batches, we smooth our metrics via LOWESS (Cleveland, 1979) and average across 4 simulation runs.

For ‘Smooth’, we achieve equal RMSE scores with, at best, 60% fewer newly-labelled galaxies (RMSE of 0.117 with 256 vs. 640 new galaxies, Figure 10). Similarly for ‘Bar’, we achieve equal RMSE scores with, at best, 38% fewer newly-labelled galaxies (RMSE of 0.17 with 1280 vs. 2048 new galaxies, Figure 11). Active learning outperforms random selection in every run.
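The LOWESS smoothing used to de-noise the per-iteration metrics can be sketched from scratch; the minimal version below (tricube-weighted local linear fits, following Cleveland 1979) is our own illustration, and any library implementation would serve equally well.

```python
import numpy as np

def lowess_smooth(x, y, frac=0.3):
    """Minimal LOWESS: for each point, fit a weighted straight line using
    tricube weights over the nearest `frac` of the data. Illustrative only."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    k = max(2, int(np.ceil(frac * n)))
    smoothed = np.empty(n)
    for i in range(n):
        d = np.abs(x - x[i])
        window = np.sort(d)[k - 1]                      # span of the local fit
        w = np.clip(1 - (d / max(window, 1e-12)) ** 3, 0, 1) ** 3  # tricube
        coeffs = np.polyfit(x, y, 1, w=np.sqrt(w))      # weighted least squares
        smoothed[i] = np.polyval(coeffs, x[i])
    return smoothed

# Smooth a noisy, decaying 'evaluation loss' curve
iterations = np.arange(50, dtype=float)
noisy = np.exp(-iterations / 20.) + np.random.default_rng(0).normal(0., 0.05, 50)
smoothed = lowess_smooth(iterations, noisy, frac=0.3)
```

The `frac` parameter controls the span of each local regression: larger values give smoother (but more biased) curves.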
Given sufficient galaxies (approximately 3000 for ‘Smooth’, 6000 for ‘Bar’), our models eventually converge to similar performance levels, regardless of galaxy selection. We speculate that this is because our relatively shallow model architecture places an upper limit on performance. In general, model complexity should be large enough to exploit the information in the training set yet small enough to avoid fitting to spurious patterns. Model complexity increases with the number of free parameters, and decreases with regularization (Friedman et al., 2001). Our model is both shallow and well-regularized (recall that dropout was originally used as a regularization technique, Section 2.3). A more complex (deeper) model may be able to perform better by learning from additional galaxies.
3.6.1 Selected Galaxies
Which galaxies do the models identify as informative? To investigate, we randomly select one ‘Smooth or Featured’ and one ‘Bar’ simulation.
For the ‘Smooth or Featured’ simulation, Figure 12
shows the observed ‘Smooth’ vote fraction distribution, per iteration (set of new galaxies) and in total (summed over all new galaxies). Highly smooth galaxies are common in the general GZ2 catalogue. Random selection therefore leads to a training sample skewed towards highly smooth galaxies. In contrast, our acquisition function is far more likely to select galaxies which are featured, leading to a more balanced sample. This is especially true for the first few iterations; we speculate that this counteracts the skew towards smooth galaxies in the randomly selected initial training sample. By the final training sample, featured galaxies become moderately more common than smooth (mean ‘Smooth’ vote fraction = 0.38). This suggests that featured galaxies are (on average) more informative for the model, over and above correcting for the skewed initial training sample. We speculate that featured galaxies may be more visually diverse, leading to a greater challenge in fitting volunteer responses, more disagreement between dropout-approximated models, and ultimately higher mutual information.

For the ‘Bar’ simulation, Figure 13 shows the ‘Bar’ vote fraction distribution, per iteration and in total, as well as the total redshift distribution. Again, our acquisition function selects a more balanced sample by prioritising (rarer) barred galaxies. This selection remains approximately constant (within statistical noise) as more galaxies are acquired. With respect to redshift, our acquisition function prefers to select galaxies at lower redshifts. Based on inspection of the selected images (Figure 15), we suggest that these galaxies are more informative to our model because they are better resolved (i.e. less ambiguous) and more likely to be barred.
We present the most and least informative galaxies from the (fixed and never labelled) test subset for ‘Smooth’ (Figure 14) and ‘Bar’ (Figure 15), as identified by our novel acquisition function and the final models from each simulation.
4 Discussion
Learning from fewer examples is an expected benefit of both probabilistic predictions and active learning. Our models approach peak performance on remarkably few examples: 2816 galaxies for ‘Smooth’ and 5632 for ‘Bar’. With our system, volunteers could complete Galaxy Zoo 2 in weeks rather than years (for example, classifying 10,000 galaxies, sufficient to train our models to peak performance, at the mean GZ2 classification rate of 800 galaxies/day would take 13 days), if the peak performance of our models is sufficient for their research. Further, reaching peak performance on relatively few examples indicates that an expanded model with additional free parameters is likely to perform better (Murphy, 2012).
For this work, we rely on GZ2 data where $N$ (the number of responses to a galaxy) is unknown before making a (historical) classification request. Therefore, when deriving our acquisition function, we approximated $N$ as $\bar{N}$ (the expected number of responses). However, during live application of our system, we can control the Galaxy Zoo classification logic to collect exactly $N$ responses per image, for any desired $N$. This would allow our model to request (for example) one more classification for this galaxy, and three more for that galaxy, before retraining. Precise classification requests from our model will enable us to ask volunteers exactly the right questions, helping them make an even greater contribution to scientific research.
We also hope that this humanmachine collaboration will provide a better experience for volunteers. Inspection of informative galaxies (Figures 12, 13) suggests that more informative galaxies are more diverse than less informative galaxies. We hope that volunteers will find these (now more frequent) informative galaxies interesting and engaging.
Our results motivate various improvements to the probabilistic morphology models we introduce. In Section 2.7, we showed that our models were approximately well-calibrated, particularly after applying MC Dropout. However, the calibration was imperfect; even after applying MC Dropout, our models remain slightly overconfident (Figure 6). We suggest two reasons for this remaining overconfidence. First, within the MC Dropout approximation, the dropout rate is known to affect the calibration of the final model (Gal et al., 2017b). We chose our dropout rate arbitrarily; this rate may not vary the model sufficiently to approximate training many models. One solution is to ‘tune’ the dropout rate until the calibration is correct (Gal et al., 2017b). Second, the MC Dropout approximation is itself imperfect; removing random neurons with dropout is not identical to training many networks. As an alternative, one could simply train several models and ensemble the predictions (Lakshminarayanan et al., 2016). Both of these approaches are straightforward given a sufficient computational budget.
We also showed that the distribution of model predictions over all galaxies generally agrees well with the distribution of predictions from volunteers (i.e. we are globally unbiased, Section 2.7). However, we noted that the models are ‘reluctant’ to predict extreme values of $\rho$ (the typical response probability, Section 2.1). We suggest that this is a limitation of our generative model for volunteer responses. The binomial likelihood becomes narrow when the probability parameter (here, $\rho$) is extreme, and hence the model is heavily penalised for incorrect extreme estimates. If volunteer responses were precisely binomially distributed (i.e. $N$ independent identically-distributed trials per galaxy, each with a fixed probability $\rho$ of a positive response), this heavy penalty would correctly reflect the significance of the error. However, our binomial model of volunteers is only approximate; one volunteer may give consistently different responses to another. In consequence, the true likelihood of non-extreme responses given extreme $\rho$ is wider than the binomial likelihood from the ‘typical’ response probability suggests, and the network is penalised ‘unfairly’. The network therefore learns to avoid making risky extreme predictions.
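The narrowing of the binomial likelihood at extreme $\rho$, and the correspondingly heavy penalty for incorrect extreme estimates, can be seen directly. This is our own illustration; $N = 40$ is an arbitrary choice of responses per galaxy.

```python
import numpy as np
from scipy.stats import binom

N = 40  # responses per galaxy (arbitrary illustrative choice)

# The binomial is far narrower at extreme rho than at rho = 0.5 ...
print(binom.std(N, 0.5))   # ~3.2 responses
print(binom.std(N, 0.02))  # ~0.9 responses

# ... so a wrong extreme estimate is penalised much more heavily.
# Suppose k = 4 of 40 volunteers respond positively (vote fraction 0.1):
k = 4
nll_moderate = -binom.logpmf(k, N, 0.10)  # estimate matching the vote fraction
nll_extreme = -binom.logpmf(k, N, 0.02)   # extreme estimate: heavy penalty
print(nll_moderate, nll_extreme)
```

A network trained under this likelihood therefore pays a steep price for extreme estimates that miss, which is consistent with the risk-averse behaviour described above.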
If this suggestion is correct, the risk-averse prediction shift will be monotonic (i.e. extreme galaxies will have slightly different $\hat{\rho}$ but still be ranked in the same order) and hence researchers selecting galaxies near extreme $\rho$ may simply choose a slightly higher or lower threshold. To resolve this issue, one could apply a monotonic rescaling to the network predictions (as we do in Appendix A), introduce a more sophisticated model of volunteer behaviour (Marshall et al., 2016; Beck et al., 2018; Dickinson et al., 2019), or calibrate the loss to reflect the scientific utility of extreme predictions (Cobb et al., 2018). As predictions are globally unbiased for all non-extreme $\rho$, and extreme predictions can be corrected post hoc (above), our network is ready for use.
Finally, we highlight that our approach is highly general. We hope that Bayesian CNNs and active learning can contribute to the wide range of astrophysical problems where CNNs are applicable (e.g. images, time series), uncertainty is important, and the data is expensive to label, noisy, imbalanced, or includes rare objects of interest. In particular, imbalanced datasets (where some labels are far more common than others) are common throughout astrophysics. Topics include transient classification (Wright et al., 2017), fast radio burst searches (Zhang et al., 2018), and exoplanet detection (Osborn et al., 2019). Active learning is known to be effective at correcting such imbalances (Ishida et al., 2018). Our results suggest that this remains true when active learning is combined with CNNs (this work is the first astrophysics application of such a combination). Recall that smooth galaxies are far more common in GZ2 but featured galaxies are strongly preferentially selected by active learning – automatically, without our instruction – apparently to compensate for the imbalanced data (Figure 12). If this observation proves to be general, we suggest that Bayesian CNNs and active learning can drive intelligent data collection to overcome research challenges throughout astrophysics.
5 Conclusion
Previous work on predicting visual galaxy morphology with deep learning has either taken no account of uncertainty or trained only on confidently-labelled galaxies. Our Bayesian CNNs model and exploit the uncertainty in Galaxy Zoo volunteer responses using a novel generative model of volunteers. This enables us to accurately answer detailed morphology questions using only sparse labels (a limited number of volunteer responses per galaxy). Our CNNs can also express uncertainty, by predicting probability distribution parameters and using Monte Carlo Dropout (Gal et al., 2017a). This allows us to predict posteriors for the expected volunteer responses to each galaxy. These posteriors are reliable (i.e. well-calibrated), show minimal systematic bias, and match or outperform previous work when reduced to point estimates (for comparison). Using our posteriors, researchers will be able to draw statistically powerful conclusions about the relationships between morphology and AGN, mass assembly, quenching, and other topics.

Previous work has also treated labelled galaxies as a fixed dataset from which to learn. Instead, we ask: which galaxies should we label to train the best model? We apply active learning (Houlsby et al., 2011): our model iteratively requests new galaxies for human labelling and then retrains. To select the most informative galaxies for labelling, we derive a custom acquisition function for Galaxy Zoo based on BALD (MacKay, 1992). This derivation is only possible using our posteriors. We find that active learning provides a clear improvement in performance over random selection of galaxies. The galaxies identified as informative are generally more featured (for the ‘Smooth or Featured’ question) and better resolved (for the ‘Bar’ question), matching our intuition.
As modern surveys continue to outpace traditional citizen science, probabilistic predictions and active learning become particularly crucial. The methods we introduce here will allow Galaxy Zoo to produce visual morphology measurements for surveys of any conceivable scale on a timescale of weeks. We aim to launch our active learning strategy on Galaxy Zoo in 2019.
Acknowledgements
MW would like to thank H. Domínguez Sánchez and M. Huertas-Company for helpful discussions.
MW acknowledges funding from the Science and Technology Facilities Council (STFC) Grant Code ST/R505006/1. We also acknowledge support from STFC under grant ST/N003179/1. LF, CS, HD and DW acknowledge partial support from one or more of the US National Science Foundation grants IIS-1619177, OAC-1835530, and AST-1716602.
This research made use of the open-source Python scientific computing ecosystem, including SciPy (Jones et al., 2001), Matplotlib (Hunter, 2007), scikit-learn (Pedregosa et al., 2011), scikit-image (van der Walt et al., 2014) and Pandas (McKinney, 2010).
This research made use of Astropy, a communitydeveloped core Python package for Astronomy (The Astropy Collaboration et al., 2013, 2018).
This research made use of TensorFlow (Abadi et al., 2015).

All code is publicly available on GitHub at www.github.com/mwalmsley/galaxyzoobayesiancnn (Walmsley, 2019).
Appendix A
CNN predictions are not necessarily probabilities. Modern CNNs are susceptible to overconfidence when predicting class labels (Lakshminarayanan et al., 2016; Guo et al., 2017). To illustrate this problem, we show how the CNN ‘probabilities’ published in DS+18 (Domínguez Sánchez et al., 2018) are not well-calibrated and therefore may cause systematic errors in later analysis. We chose DS+18 as the most recent deep learning morphology catalogue made publicly available, and thank the authors for their openness. We do not believe this issue is unique to DS+18.
DS+18 trained a CNN to predict the probability that a galaxy is barred. Barred galaxies were defined as those galaxies labelled as having any kind of bar (weak/intermediate/strong) in the expert catalogue of Nair & Abraham (2010, hereafter N10). We refer to such galaxies as Nair Bars.
We first show that these CNN ‘probabilities’ are not wellcalibrated. We then demonstrate a simple technique to infer probabilities for Nair Bars from GZ2 vote fractions. Finally, we show that, as our Bayesian CNN estimates of GZ2 vote fractions are wellcalibrated, these vote fractions can be used to estimate probabilities for Nair Bars. The practical application is to predict what Nair & Abraham (2010) would have recorded, had the expert authors visually classified every SDSS galaxy.
We select a random subset of 1211 galaxies classified by N10 (this subset is motivated below). How many barred galaxies are in this subset? Summing the DS+18 Nair Bar ‘probabilities’ over each galaxy predicts 559 Nair Bars. However, only 379 are actually Nair Bars (Figure A16). This error is caused by the DS+18 Nair Bar ‘probabilities’ being, on average, skewed towards predicting ‘Bar’, as shown by the calibration curve of the DS+18 Nair Bar probabilities (Figure A17).
How can we better predict the total number of Nair Bars? GZ2 collected volunteer responses for many of the galaxies classified by N10 (6,051 of 14,034 galaxies match on sky, after filtering for total ‘Bar?’ votes as in Section 2.6). The fraction of volunteers who responded ‘Bar’ to the question ‘Bar?’ is predictive of Nair Bars, but is not a probability (Lintott et al., 2008). For example, volunteers are less able than experts to recognise weak bars (Masters et al., 2012), and hence the ‘Bar’ vote fraction only slightly increases for galaxies with weak Nair Bars vs. galaxies without. We need to rescale the GZ2 vote fractions. To do this, we divide the N10 catalogue into 80% train and 20% test subsets and use the train subset to fit (via logistic regression) a rescaling function (Figure A18) from GZ2 vote fractions to Nair Bar probabilities. We then evaluate the calibration of these probabilities on the test subset, which is the subset of 1211 galaxies used above. We predict 396 Nair Bars, which compares well with the correct answer of 379 vs. the DS+18 answer of 559 (Figure A16). This directly demonstrates that our rescaled GZ2 predictions are correctly calibrated over the full test subset. The calibration curve shows no systematic skew, unlike DS+18 (Figure A17).
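The rescaling step amounts to a one-dimensional logistic regression. The sketch below is our own illustration using scikit-learn and synthetic vote fractions (the real fit uses the matched N10/GZ2 catalogue); all data shapes and coefficients are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for the matched catalogue: GZ2 'Bar' vote fractions and
# binary expert (Nair) bar labels. Volunteers under-report weak bars, so even
# modest vote fractions often correspond to expert-labelled bars.
vote_fractions = rng.uniform(0., 1., size=2000)
p_nair_bar = 1. / (1. + np.exp(-(6. * vote_fractions - 1.5)))
nair_bar = rng.random(2000) < p_nair_bar

# 80/20 train/test split; fit the monotonic rescaling on the train subset
train = np.arange(2000) < 1600
clf = LogisticRegression()
clf.fit(vote_fractions[train, None], nair_bar[train])

# Rescaled probabilities on the held-out subset; their sum estimates the
# total number of expert-labelled bars (the calibration check in the text)
probs = clf.predict_proba(vote_fractions[~train, None])[:, 1]
predicted_total = probs.sum()
```

Because logistic regression is monotonic, the rescaling preserves the ranking of galaxies by vote fraction while correcting the calibration of the implied probabilities.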
Since the GZ2 vote fractions can be rescaled to Nair Bar probabilities, and the Bayesian CNN makes predictions of the GZ2 vote fractions, we can also rescale the Bayesian CNN predictions into Nair Bar probabilities using the same rescaling function. The rescaled Bayesian CNN GZ2 vote predictions correctly estimate the count of Nair Bars (372 bars predicted vs. 379 observed bars, Figure A16).
Finally, we note that the rescaled GZ2 votes, both observed from volunteers and predicted by the Bayesian CNN, outperform DS+18 in identifying Nair Bars (Figure A19). Nair Bars are labelled through repeated expert classification (as close to ‘gold standard’ ground truth as exists for imaging data) and hence accurate automated identification is directly useful for morphology research.
Appendix B  Theoretical Background on Variational Inference
The general problem of Bayesian inference can be framed in terms of a probabilistic model where we have some observed random variables $x$ and some latent variables $z$, and we wish to infer $p(z|x)$ after observing some data. Our probabilistic model allows us to use Bayes' rule to do so: $p(z|x) = p(x|z)p(z)/p(x)$. In the setting of discriminative learning, the observed variables are the inputs and outputs of our classification task, $x$ and $y$, and we directly parameterise the distribution $p(y|x, w)$ in order to make predictions by marginalising over the unknown weights $w$; that is, the prediction for an unseen point $x^*$ given training data $D$ is

$$p(y^*|x^*, D) = \int p(y^*|x^*, w)\,p(w|D)\,dw \qquad (24)$$
While this is a simple framework, in practice the integrals required to normalise Bayes’ rule and to take this marginal are often not analytically tractable, and we must resort to numerical approaches.
While there are many possible ways to perform approximate Bayesian inference, here we focus on the framework of variational inference. The essential idea of variational inference is to approximate the posterior $p(w|D)$ with a simpler distribution $q(w)$ which is ‘as close as possible’ to it, and then use $q(w)$ in place of the posterior. This can take the form of analytically finding the optimal $q$ subject only to some factorisation assumptions, using the tools of the calculus of variations, but the case that is relevant to our treatment is when we fix $q$ to be some family of distributions parameterised by $\lambda$ and fit $\lambda$, changing an integration problem into an optimisation one.
The measure of ‘as close as possible’ used in variational inference is the Kullback-Leibler (KL) divergence, or relative entropy, a measure of distance between two probability distributions defined as

$$\mathrm{KL}\left(q\,\|\,p\right) = \int q(w)\,\log\frac{q(w)}{p(w|D)}\,dw \qquad (25)$$
The objective of variational inference is to choose the parameters $\lambda$ such that $\mathrm{KL}(q_\lambda(w)\,\|\,p(w|D))$ is minimised. Minimising this objective can be shown to be equivalent to maximising the ‘log Evidence Lower BOund’, or ELBO:

$$\mathrm{ELBO}(\lambda) = \mathbb{E}_{q_\lambda(w)}\left[\log p(y|x, w)\right] - \mathrm{KL}\left(q_\lambda(w)\,\|\,p(w)\right) \qquad (26)$$
The reason for the name is the relationship

$$\log p(y|x) = \mathrm{ELBO}(\lambda) + \mathrm{KL}\left(q_\lambda(w)\,\|\,p(w|D)\right) \qquad (27)$$

which implies, since the KL divergence is non-negative, that the ELBO provides a lower bound on the log of the evidence $p(y|x)$, the denominator in Bayes' rule above. By optimising the parameters $\lambda$ of $q$ with respect to the ELBO, one can find the best approximation (in terms of the ELBO) to the posterior within the chosen family of parameterised distributions.
The key advantage of this formalism is that the ELBO only involves the tractable terms of the model, $p(y|x, w)$ and $p(w)$. The expectation is over the approximating distribution, but since we are able to choose $q$, we can make a choice that is easy to sample from. It is therefore straightforward to obtain a Monte Carlo approximation of the ELBO via sampling, which is sufficient to obtain the stochastic gradients of the ELBO used for optimisation. The integral over the posterior on $w$ in the marginalisation step can likewise be approximated by sampling from $q$ if necessary.
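The lower-bound property can be verified numerically for a toy conjugate model where the evidence is known in closed form. This is our own illustration, entirely separate from the paper's pipeline: a Gaussian prior and likelihood with a single observation, and a Gaussian approximating family.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
y = 1.3  # single observation; model: theta ~ N(0, 1), y | theta ~ N(theta, 1)

def elbo(m, s, n_samples=200_000):
    """Monte Carlo ELBO for q = N(m, s^2):
    E_q[log p(y|theta) + log p(theta)] + H[q]."""
    theta = rng.normal(m, s, n_samples)
    log_joint = norm.logpdf(y, theta, 1.) + norm.logpdf(theta, 0., 1.)
    q_entropy = 0.5 * np.log(2. * np.pi * np.e * s ** 2)
    return log_joint.mean() + q_entropy

# Analytic evidence: marginally, y ~ N(0, 2)
log_evidence = norm.logpdf(y, 0., np.sqrt(2.))

print(elbo(0.0, 1.0))               # a loose lower bound
print(elbo(y / 2., np.sqrt(0.5)))   # q = exact posterior: bound is tight
```

The gap between the ELBO and the log evidence is exactly the KL divergence from $q$ to the true posterior, so the bound becomes tight as $q$ approaches the posterior $N(y/2, 1/2)$.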
For neural networks, a common approximating distribution is dropout (Srivastava et al., 2014). The dropout distribution over the weights of a single neural network layer is parameterised by a weight matrix $M$ and a dropout probability $p$. Draws from this distribution are described by

$$W = M \cdot \mathrm{diag}(z), \qquad z_j \sim \mathrm{Bernoulli}(1 - p) \qquad (28)$$

where $z$ is the vector of binary variables indicating which units are retained. Gal (2016) introduced approximating $p(w|D)$ with a dropout distribution over the weights of a network, and showed that in this case optimising the standard likelihood-based loss is equivalent to the variational objective that would be obtained for the dropout distribution, so we may interpret the dropout distribution over the weights of a trained model as an approximation to the posterior distribution $p(w|D)$.
We can use this approximating distribution as a proxy for the true posterior when we marginalise over models to make predictions:

$$p(y^*|x^*, D) \approx \int p(y^*|x^*, w)\,q(w)\,dw \approx \frac{1}{T}\sum_{t=1}^{T}p(y^*|x^*, \hat{w}_t), \qquad \hat{w}_t \sim q(w) \qquad (29)$$
A more detailed mathematical exposition of dropout as variational inference can be found in Gal (2016).
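This marginalisation amounts to averaging stochastic forward passes with dropout left active at test time. The following framework-free sketch is our own toy illustration with a hypothetical one-layer network; all names and shapes are invented for the example.

```python
import numpy as np

def dropout_forward(x, M, p_drop, rng):
    """One stochastic forward pass: sample a Bernoulli keep-mask over the
    units of a toy layer (as in Eqn. 28), then apply a sigmoid output."""
    z = (rng.random(M.shape[1]) > p_drop).astype(float)  # keep w.p. 1 - p_drop
    w_sample = M * z                                     # W = M . diag(z)
    return 1. / (1. + np.exp(-(x @ w_sample).sum(axis=-1)))

rng = np.random.default_rng(0)
M = rng.normal(size=(3, 8))   # stand-in for trained layer weights
x = rng.normal(size=(5, 3))   # five toy 'galaxies'

# Monte Carlo marginalisation (as in Eqn. 29): average T stochastic passes
T = 100
samples = np.stack([dropout_forward(x, M, 0.5, rng) for _ in range(T)])
posterior_mean = samples.mean(axis=0)    # predictive mean per input
posterior_spread = samples.std(axis=0)   # dropout-induced uncertainty
```

The spread across the $T$ passes is what the acquisition function of Section 3.2 exploits: inputs where the sampled networks disagree receive high mutual information scores.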
References
 Abadi et al. (2015) Abadi M., et al., 2015, TensorFlow: LargeScale Machine Learning on Heterogeneous Systems, https://www.tensorflow.org/
 Abazajian et al. (2009) Abazajian K. N., et al., 2009, The Astrophysical Journal Supplement Series, 182, 543
 Ackermann et al. (2018) Ackermann S., Schawinski K., Zhang C., Weigel A. K., Turp M. D., 2018, Monthly Notices of the Royal Astronomical Society, 479, 415
 Aihara et al. (2018) Aihara H., et al., 2018, Publications of the Astronomical Society of Japan, 70
 Baillard et al. (2011) Baillard A., Bertin E., Lapparent V. D., Fouqué P., Arnouts S., Mellier Y., Pelló R., Leborgne J., 2011, Astronomy & Astrophysics, 532
 Banerji et al. (2010) Banerji M., et al., 2010, Monthly Notices of the Royal Astronomical Society, 406, 342
 Beck et al. (2018) Beck M. R., et al., 2018, Monthly Notices of the Royal Astronomical Society, 476, 5516
 Cleveland (1979) Cleveland W. S., 1979, Journal of the American Statistical Association, 74, 829
 Cobb et al. (2018) Cobb A. D., Roberts S. J., Gal Y., 2018, arXiv
 Conselice (2003) Conselice C. J., 2003, The Astrophysical Journal Supplement Series, 147, 1
 Dey et al. (2018) Dey A., et al., 2018, eprint arXiv:1804.08657
 Dickinson et al. (2019) Dickinson H., Fortson L., Scarlata C., Beck M., Walmsley M., 2019, Proceedings of the International Astronomical Union
 Dieleman et al. (2015) Dieleman S., Willett K. W., Dambre J., 2015, Monthly Notices of the Royal Astronomical Society, 450, 1441
 Domínguez Sánchez et al. (2018) Domínguez Sánchez H., et al., 2018, Monthly Notices of the Royal Astronomical Society, 476, 3661
 Domínguez Sánchez et al. (2019a) Domínguez Sánchez H., et al., 2019a, arxiv
 Domínguez Sánchez et al. (2019b) Domínguez Sánchez H., et al., 2019b, Monthly Notices of the Royal Astronomical Society, 484, 93
 Fischer et al. (2018) Fischer J.-L., Domínguez Sánchez H., Bernardi M., 2018, arXiv
 Flaugher (2005) Flaugher B., 2005, International Journal of Modern Physics A, 20, 3121
 Freeman et al. (2013) Freeman P. E., Izbicki R., Lee A. B., Newman J. A., Conselice C. J., Koekemoer A. M., Lotz J. M., Mozena M., 2013, Monthly Notices of the Royal Astronomical Society, 434, 282
 Friedman et al. (2001) Friedman J., Hastie T., Tibshirani R., 2001, The Elements of Statistical Learning. Springer, New York
 Gal (2016) Gal Y., 2016, PhD thesis, University of Cambridge
 Gal et al. (2017a) Gal Y., Islam R., Ghahramani Z., 2017a, NIPS
 Gal et al. (2017b) Gal Y., Hron J., Kendall A., 2017b, Arxiv preprint, pp 3581–3590
 Galloway et al. (2015) Galloway M. A., et al., 2015, Monthly Notices of the Royal Astronomical Society, 448, 3442
 Gordon et al. (2019) Gordon Y. A., et al., 2019, arXiv
 Guo et al. (2017) Guo C., Pleiss G., Sun Y., Weinberger K. Q., 2017, International Conference on Machine Learning
 Hart et al. (2016) Hart R. E., et al., 2016, Monthly Notices of the Royal Astronomical Society, 461, 3663
 He et al. (2015) He K., Zhang X., Ren S., Sun J., 2015, arXiv preprint
 Hezaveh et al. (2017) Hezaveh Y. D., Levasseur L. P., Marshall P. J., 2017, Nature, 548, 555
 Hocking et al. (2015) Hocking A., Geach J. E., Davey N., Sun Y., 2015, Monthly Notices of the Royal Astronomical Society
 Houlsby et al. (2011) Houlsby N., Huszár F., Ghahramani Z., Lengyel M., 2011, preprint (arXiv:1112.5745), http://arxiv.org/abs/1112.5745
 Hoyle et al. (2016) Hoyle B., Paech K., Rau M. M., Seitz S., Weller J., 2016, Monthly Notices of the Royal Astronomical Society, 458, 4498
 Huang et al. (2017) Huang G., Liu Z., van der Maaten L., Weinberger K. Q., 2017, The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp 4700–4708
 Huertas-Company et al. (2011) Huertas-Company M., Aguerri J. A. L., Bernardi M., Mei S., Almeida J. S., 2011, Astronomy & Astrophysics, 525, 1
 Huertas-Company et al. (2015) Huertas-Company M., et al., 2015, Astrophysical Journal, Supplement Series, 221
 Huertas-Company et al. (2018) Huertas-Company M., et al., 2018, arXiv preprint
 Hunter (2007) Hunter J. D., 2007, Computing in Science and Engineering, 9, 99
 Ishida et al. (2018) Ishida E. E. O., et al., 2018, Monthly Notices of the Royal Astronomical Society
 Jones et al. (2001) Jones E., Oliphant T., Peterson P., et al., 2001–, SciPy: Open source scientific tools for Python, http://www.scipy.org/
 Kaiser et al. (2010) Kaiser N., et al., 2010, International Society for Optics and Photonics, p. 77330E, doi:10.1117/12.859188, http://proceedings.spiedigitallibrary.org/proceeding.aspx?doi=10.1117/12.859188
 Khan et al. (2018) Khan A., Huerta E. A., Wang S., Gruendl R., 2018, arXiv preprint
 Kim & Brunner (2017) Kim E. J., Brunner R. J., 2017, Monthly Notices of the Royal Astronomical Society, 464, 4463
 Kruk et al. (2017) Kruk S. J., et al., 2017, Monthly Notices of the Royal Astronomical Society, 469, 3363
 Kruk et al. (2018) Kruk S. J., et al., 2018, Monthly Notices of the Royal Astronomical Society, 473, 4731
 LSST Science Collaboration et al. (2009) LSST Science Collaboration et al., 2009, preprint, (arXiv:0912.0201)
 Lakshminarayanan et al. (2016) Lakshminarayanan B., Pritzel A., Blundell C., 2016, arXiv preprint
 Lanusse et al. (2018) Lanusse F., Ma Q., Li N., Collett T. E., Li C. L., Ravanbakhsh S., Mandelbaum R., Póczos B., 2018, Monthly Notices of the Royal Astronomical Society, 473, 3895
 Laureijs et al. (2011) Laureijs R., et al., 2011, arXiv preprint
 LeCun et al. (2015) LeCun Y. A., Bengio Y., Hinton G. E., 2015, Nature, 521, 436
 Lintott et al. (2008) Lintott C. J., et al., 2008, Monthly Notices of the Royal Astronomical Society, 389, 1179
 Liu et al. (2017) Liu P., Zhang H., Eom K. B., 2017, IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 10, 712
 Lotz et al. (2004) Lotz J. M., Primack J., Madau P., 2004, The Astronomical Journal, 128, 163
 Lu et al. (2015) Lu J., Behbood V., Hao P., Zuo H., Xue S., Zhang G., 2015, Knowledge-Based Systems, 80, 14
 MacKay (1992) MacKay D. J. C., 1992, Neural Computation, 4, 590
 Marshall et al. (2016) Marshall P. J., et al., 2016, Monthly Notices of the Royal Astronomical Society, 455, 1171
 Masters et al. (2012) Masters K. L., et al., 2012, Monthly Notices of the Royal Astronomical Society, 424, 2180
 McKinney (2010) McKinney W., 2010, Data Structures for Statistical Computing in Python, http://conference.scipy.org/proceedings/scipy2010/mckinney.html
 Murphy (2012) Murphy K. P., 2012, Machine Learning: A Probabilistic Perspective. MIT Press, Cambridge, MA
 Nair & Abraham (2010) Nair P. B., Abraham R. G., 2010, The Astrophysical Journal Supplement Series, 186, 427
 Osborn et al. (2019) Osborn H. P., Ansdell M., Ioannou Y., Sasdelli M., Angerhausen D., Caldwell D., Jenkins J. M., Smith J. C., 2019, arXiv preprint
 Pedregosa et al. (2011) Pedregosa F., et al., 2011, Journal of Machine Learning Research, 12, 2825
 Pérez-Carrasco et al. (2018) Pérez-Carrasco M., Cabrera-Vives G., Martinez-Marín M., Cerulo P., Demarco R., Protopapas P., Godoy J., Huertas-Company M., 2018, arXiv preprint
 Peth et al. (2016) Peth M. A., et al., 2016, Monthly Notices of the Royal Astronomical Society, 458, 963
 Rahhal et al. (2016) Rahhal M. A., Bazi Y., Al-Hichri H., Alajlan N., Melgani F., Yager R., 2016, Information Sciences, 345, 340
 Richards et al. (2012) Richards J. W., et al., 2012, Astrophysical Journal, 744, 192
 Roberts & Haynes (1994) Roberts M. S., Haynes M. P., 1994, Annual Review of Astronomy and Astrophysics, 32, 115
 Russakovsky et al. (2015) Russakovsky O., et al., 2015, International Journal of Computer Vision, 115, 211
 Scarlata et al. (2007) Scarlata C., et al., 2007, The Astrophysical Journal Supplement Series, 172, 406
 Simonyan & Zisserman (2015) Simonyan K., Zisserman A., 2015, in International Conference on Learning Representations. (arXiv:1409.1556), http://arxiv.org/abs/1409.1556
 Solorio et al. (2005) Solorio T., Fuentes O., Terlevich R., Terlevich E., 2005, Monthly Notices of the Royal Astronomical Society, 363, 543
 Spergel et al. (2013) Spergel D., et al., 2013, arXiv preprint
 Srivastava et al. (2014) Srivastava N., Hinton G., Krizhevsky A., Sutskever I., Salakhutdinov R., 2014, Journal of Machine Learning Research, 15, 1929
 Strauss et al. (2002) Strauss M. A., et al., 2002, The Astronomical Journal, 124, 1810
 Szegedy et al. (2015) Szegedy C., et al., 2015, in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. pp 1–9 (arXiv:1409.4842), doi:10.1109/CVPR.2015.7298594, http://arxiv.org/abs/1409.4842
 The Astropy Collaboration et al. (2013) The Astropy Collaboration et al., 2013, Astronomy & Astrophysics, 558
 The Astropy Collaboration et al. (2018) The Astropy Collaboration et al., 2018, The Astronomical Journal, 156, 123
 Tuccillo et al. (2017) Tuccillo D., Huertas-Company M., Decencière E., Velasco-Forero S., Sánchez H. D., Dimauro P., 2017, arXiv preprint
 Tuia et al. (2011) Tuia D., Volpi M., Copa L., Kanevski M., Munoz-Mari J., 2011, IEEE Journal of Selected Topics in Signal Processing, 5, 606
 Walmsley (2019) Walmsley M., 2019, Galaxy Zoo Bayesian CNN: Initial public release, doi:10.5281/zenodo.2677874, https://zenodo.org/record/2677874
 Walmsley et al. (2018) Walmsley M., Ferguson A. M. N., Mann R. G., Lintott C. J., 2018, Monthly Notices of the Royal Astronomical Society, 483, 2968
 Wang et al. (2018) Wang L., et al., 2018, arXiv preprint
 Willett et al. (2013) Willett K. W., et al., 2013, Monthly Notices of the Royal Astronomical Society, 435, 2835
 Wright et al. (2017) Wright D. E., et al., 2017, Monthly Notices of the Royal Astronomical Society, 472, 1315
 Xia et al. (2016) Xia X., Protopapas P., Doshi-Velez F., 2016, Proceedings of the 2016 SIAM International Conference on Data Mining, pp 477–485
 Zhang et al. (2018) Zhang Y. G., Gajjar V., Foster G., Siemion A., Cordes J., Law C., Wang Y., 2018, arXiv preprint
 Zhou et al. (2013) Zhou S., Chen Q., Wang X., 2013, Neurocomputing, 120, 536
 de Jong et al. (2015) de Jong J. T. A., et al., 2015, Astronomy & Astrophysics, 582, A62
 van der Walt et al. (2014) van der Walt S., Schönberger J. L., Nunez-Iglesias J., Boulogne F., Warner J. D., Yager N., Gouillart E., Yu T., 2014, PeerJ, 2, e453