1 Introduction

Social media enables members of the public to post real-time text messages, videos and photographs describing events taking place close to them. While many posts may be extraneous or misleading, social media nonetheless provides streams of up-to-date information across a wide area. For example, after the Haiti 2010 earthquake, Ushahidi gathered thousands of text messages that provided valuable first-hand information about the disaster situation. An effective way to extract information from large unstructured datasets such as these is to employ crowds of non-expert annotators, as demonstrated by Galaxy Zoo. Besides social media, crowdsourcing provides a means to obtain geo-tagged annotations from other unstructured data sources, such as imagery from satellites or unmanned aerial vehicles (UAVs).
A natural way to visualise such data is a heatmap, which overlays colours onto a map to indicate the intensity or probability of phenomena of interest. Probabilistic methods have been used to generate heatmaps from observations at sparse point locations [9, 1, 8], using a Bayesian treatment of Poisson process models. However, these approaches model the rate of occurrence of events, so they are not suitable for classification problems. Instead, a Gaussian process (GP) classifier can be used to model a class label that varies smoothly over space or time. This uses a latent function over input coordinates, which is mapped through a sigmoid function to obtain probabilities. However, standard GP classifiers are unsuitable for heterogeneous, crowdsourced data since they do not account for the differing relevance, error rates and bias of individual information sources and annotators.
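As a minimal sketch of the GP classification idea described above, the following snippet draws a latent function from a GP prior over one-dimensional coordinates and maps it through a logistic sigmoid to obtain class probabilities; the RBF kernel, length-scale and grid are illustrative choices, not the settings used in this paper.

```python
import numpy as np

# GP classification sketch: sample a latent function from a GP prior and
# squash it through a logistic sigmoid to get class probabilities.
def rbf_kernel(x1, x2, length_scale=1.0):
    d = np.subtract.outer(x1, x2)                 # pairwise differences
    return np.exp(-0.5 * (d / length_scale) ** 2)

def sigmoid(f):
    return 1.0 / (1.0 + np.exp(-f))

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)                    # input coordinates
K = rbf_kernel(x, x) + 1e-8 * np.eye(50)          # jitter for stability
f = rng.multivariate_normal(np.zeros(50), K)      # latent function sample
p = sigmoid(f)                                    # probabilities in (0, 1)
```

Because the kernel correlates nearby inputs, the resulting probabilities vary smoothly along the input axis, which is the property exploited for spatial interpolation.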
A key challenge in exploiting crowdsourced information is to account for its unreliability and to combine it with trusted data as it becomes available, such as reports from experienced first responders in a disaster zone. For regression problems, differing levels of accuracy can be handled using sensor fusion approaches such as [12, 25]. One such approach uses heteroskedastic GPs to produce heatmaps that account for sensor accuracy through variance scaling. This method could be applied to spatial classification by mapping GPs through a softmax function; however, it cannot handle label bias or accuracy that depends on the true class. A recently proposed method learns a GP classifier from crowdsourced annotations, but it uses a coin-flipping noise model that would suffer from the same drawbacks. Furthermore, it trains the model using a maximum likelihood (ML) approach, which may incorrectly estimate reliability when the data for some workers are insufficient [20, 17, 7].
For classification problems, each information source can be modelled by a confusion matrix, which quantifies the likelihood of observing a particular annotation from an information source given the true class label. This approach naturally accounts for bias toward a particular answer and for accuracy that varies with the true class, and has been shown to outperform techniques such as majority voting and weighted sums [20, 17, 7]. Recent extensions following a Bayesian treatment can further improve results: by identifying clusters of crowd workers with shared confusion matrices [13, 23]; by accounting for the time each worker takes to complete a task; or by additionally modelling language features in text classification tasks [4, 21]. However, these methods depend on receiving multiple labels from different workers for the same data points, or, in the case of [4, 21], on correlations between text features and target classes. None of the existing confusion-matrix-based approaches can model the spatial distribution of each class, and therefore, when reports are sparsely distributed over an area of interest, they cannot compensate for the lack of data at each location.
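To make the confusion-matrix idea concrete, here is a small sketch of Dawid–Skene-style aggregation for a single data point; the worker confusion matrices, class prior and reports are invented for illustration and are not taken from any dataset in this paper.

```python
import numpy as np

# Confusion-matrix aggregation sketch for one data point.
# pi[k][j, l] = p(report l | true class j) for worker k; per-worker
# likelihoods multiply with the class prior to give the class posterior.
priors = np.array([0.5, 0.5])              # p(true class j), illustrative
pi = np.array([
    [[0.9, 0.1], [0.2, 0.8]],              # a fairly reliable worker
    [[0.6, 0.4], [0.5, 0.5]],              # a noisier worker
])
reports = [1, 1]                           # labels from workers 0 and 1

log_post = np.log(priors)
for k, l in enumerate(reports):
    log_post += np.log(pi[k][:, l])        # likelihood of report l per class
post = np.exp(log_post - np.logaddexp.reduce(log_post))  # normalise
```

Note how the noisier worker's report moves the posterior less than the reliable worker's, which is exactly the behaviour confusion matrices provide over majority voting.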
In this paper, we propose a novel Bayesian approach to aggregating sparse, geo-tagged reports from sources of varying reliability, which combines independent Bayesian classifier combination (IBCC) with a GP classifier to infer discrete state values across an area of interest. Our model, HeatmapBCC, assumes that states at neighbouring locations are correlated, allowing us to fuse neighbouring reports and interpolate between them to predict the state at locations with no reports. HeatmapBCC uses confusion matrices to model the error rates, relevance and bias of each information source, permitting the use of non-expert crowds providing heterogeneous annotations. The GP handles the uncertainty that arises from sparse spatial data in a principled Bayesian manner, allowing us to incorporate prior information, such as physical models of disaster events like earthquakes, and to visualise the resulting posterior distribution as a spatial heatmap. We derive a variational inference method that learns the reliability model for each information source without the need for ground-truth training data. This method learns full distributions over latent variables that can be used to prioritise locations for further data gathering using an active learning approach. The next section presents the HeatmapBCC model in detail and describes our efficient approximate inference algorithm. The following section then provides an empirical evaluation of our method on both synthetic and real-world problems, showing that HeatmapBCC can outperform rival methods. We make our code publicly available at https://github.com/OxfordML/heatmap_expts.
2 The HeatmapBCC Model
Our goal is to classify locations of interest, e.g. to identify them as “flooded” or “not flooded”. We can then choose locations in a grid over an area of interest and plot the classifications on a map as a spatial heatmap. The task is to infer a vector of target state values at the locations of interest, where each state takes one of a fixed number of discrete values or classes. Each row of the location matrix is a coordinate vector that specifies a point on the map. We observe a matrix of potentially unreliable geo-tagged reports, with discrete values, from a number of different information sources at a set of training locations.
HeatmapBCC assumes that each report label from a given information source at a given location is drawn from a categorical distribution whose parameters are the row of that source's confusion matrix [3, 20] selected by the target state. The confusion matrix describes the errors and biases of the information source as a dependency between the report labels and the ground-truth state. As in standard IBCC, the reports from each information source are conditionally independent of one another given the target state, and each row of the confusion matrix is drawn from a Dirichlet prior. The hyperparameters of this prior encode the prior trust in each information source.
We assume that the state at each location is drawn from a categorical distribution whose parameters are the state probabilities at that location. The generative process for the state probabilities is as follows. First, draw latent functions for the classes from a Gaussian process prior, specified by a prior mean function, a prior covariance function with its hyperparameters, and an inverse scale. Then map the latent function values to state probabilities through a link function. Appropriate link functions include the logistic sigmoid and probit functions for binary classification, and the softmax and multinomial probit for multi-class classification. We assume that the inverse scale is drawn from a conjugate gamma hyperprior with a shape parameter and an inverse scale parameter.
While the reports are modelled in the same way as in standard IBCC, HeatmapBCC introduces location-specific state probabilities to replace the global class proportions, which IBCC assumes are constant for all locations. Using a Gaussian process prior means the state probabilities vary smoothly between locations, thereby encoding correlations in the distribution over states at neighbouring locations. The covariance function is chosen to suit the scenario we wish to model and may be tailored to specific spatial phenomena (the geo-spatial impact of an earthquake, for example). The covariance function hyperparameters typically include a length-scale, which controls the smoothness of the function. Here, we assume a stationary covariance function that depends only on the distance between two points scaled by the length-scale. The joint distribution for the complete model is the product of the report likelihoods given the confusion matrices and target states, the categorical state distributions given the latent functions, the GP prior over the latent functions, the gamma hyperprior over the inverse scale, and the Dirichlet priors over the confusion matrix rows.
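As an illustration of a stationary covariance that depends only on the scaled distance between points, the sketch below implements a Matérn 3/2 kernel over 2-D coordinates; this particular form and the length-scale are assumptions for the example, not necessarily the function used in our experiments.

```python
import numpy as np

# Stationary kernel sketch: covariance depends only on the Euclidean
# distance between points divided by a length-scale (Matern 3/2 form).
def matern32(X1, X2, length_scale=2.0):
    d = np.sqrt(((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1))
    r = np.sqrt(3.0) * d / length_scale
    return (1.0 + r) * np.exp(-r)

X = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 5.0]])  # three map coordinates
K = matern32(X, X)
```

The resulting matrix has unit diagonal and assigns higher covariance to nearby points than to distant ones, which is what lets the model interpolate between sparse reports.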
3 Variational Inference for HeatmapBCC
We use variational Bayes (VB) to efficiently approximate the posterior distribution over all latent variables, allowing us to handle streaming data reports online by restarting the VB algorithm from the previous estimate as new reports are received. To apply variational inference, we replace the exact posterior distribution with a variational approximation that factorises into separate latent variables and parameters:
We perform approximate inference by optimising the variational posterior using Algorithm 1. In the remainder of this section we define the variational factors, expectation terms, variational lower bound and prediction step required by the algorithm.
Variational Factor for Targets:
The variational factor for the targets further factorises over individual data points, since the target value at each input point is independent given the state probability vector at that point.
Missing reports can be handled simply by omitting the terms for information sources that have not provided a report at a given location.
Variational Factor for Confusion Matrix Rows:
The update for this factor involves pseudo-counts of the expected number of times each report value is observed for each target class, computed using the Kronecker delta. Since we assumed a Dirichlet prior, the variational distribution is also a Dirichlet, whose parameters are the prior hyperparameters plus these pseudo-counts. Using the digamma function, the expectation required for Equation 2 is the digamma of each parameter minus the digamma of its row sum.
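As a concrete sketch of this update, the snippet below adds illustrative expected counts to Dirichlet pseudo-count priors and computes the digamma-based expectation of the log confusion-matrix entries; the numbers are invented for illustration.

```python
import numpy as np
from scipy.special import digamma

# Dirichlet variational update sketch for one worker's confusion matrix.
# alpha0[j, l] are prior pseudo-counts; counts[j, l] are expected numbers
# of (true class j, report l) pairs accumulated from the data.
alpha0 = np.array([[2.0, 1.0], [1.0, 2.0]])   # weak prior toward correct labels
counts = np.array([[8.0, 2.0], [1.0, 9.0]])   # illustrative expected counts
alpha = alpha0 + counts                        # variational Dirichlet parameters

# E[ln pi_{j,l}] = digamma(alpha_{j,l}) - digamma(sum_m alpha_{j,m})
E_ln_pi = digamma(alpha) - digamma(alpha.sum(axis=1, keepdims=True))
```

These log expectations are exactly the quantities the other VB updates consume, so no point estimate of the confusion matrix is ever needed.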
Variational Factor for Latent Function:
The variational factor for the latent function factorises between target classes, since the function value for each class at each point is independent given the target states, giving a separate factor for each class.
This variational factor cannot be computed analytically, but can itself be approximated using a variational method based on the extended Kalman filter (EKF) [18, 22] that is amenable to inclusion in our overall VB algorithm. Here, we present a multi-class variant of this method. We approximate the likelihood in Equation 4 with a Gaussian distribution centred on the expected target values, whose variance is that of the binary indicator variables given by their Bernoulli distribution. We then approximate Equation 5 by linearising the sigmoid using a Taylor series expansion to obtain a multivariate Gaussian distribution. Consequently, we estimate the posterior mean and covariance of the latent function using EKF-like equations [18, 22], in which a Kalman gain combines the vector of target-state probabilities computed using Equation 2 at the input points with the diagonal sigmoid Jacobian matrix and a diagonal observation noise variance matrix. The diagonal elements of the Jacobian are derived from the sigmoid of the latent mean values for all classes.
The diagonal elements of the noise covariance matrix are the expected variances of the Bernoulli observations, which we approximate as follows. Since the observations are Bernoulli distributed with an uncertain parameter, the conjugate prior over that parameter is a beta distribution, which can be updated to a posterior beta distribution given the observations. We then estimate the expected variance under this posterior.
We determine values for the prior beta parameters by moment matching with the prior mean and variance of the sigmoid-transformed latent function, found using numerical integration. According to Jensen's inequality, the resulting convex function is a lower bound on the true expectation, so our approximation provides a tractable estimate of the expected variance.
The calculation of the observation noise requires evaluating the latent function at the input points, while Equation 6 in turn requires the noise estimate, causing a circular dependency. Although we could fold both expressions directly into the VB cycle and update each variable in turn, we found that solving for the latent mean and noise variance within each VB iteration facilitated faster inference. We use the following iterative procedure to estimate them:
1. Initialise the expected state probabilities using Equation 9.
2. Estimate the observation noise variance using the current estimate of the latent mean.
3. Update the latent mean using Equation 6, inserting the current noise estimate.
4. Repeat from step 2 until the estimates converge.
The latent means are then used to estimate the expectation terms required for Equation 2.
Variational Factor for Inverse Function Scale:
The inverse covariance scale can also be inferred using VB by taking expectations with respect to the other factors, yielding a gamma distribution with updated shape and inverse scale parameters. We use these parameters to compute the expected latent model precision in Equation 7; for the lower bound described in the next section we also require the expected log of the inverse scale.
Variational Lower Bound:
Due to the approximations described above, we cannot guarantee that the variational lower bound increases at each cycle of the VB algorithm. Nonetheless, we test for convergence of the variational approximation efficiently by comparing the lower bound on the model evidence calculated at successive iterations. The lower bound for HeatmapBCC is given in Equation 11.
Once the algorithm has converged, we predict target states and probabilities at a set of output points by estimating their expected values. For a heatmap visualisation, the output points are a set of evenly-spaced points on a grid placed over the region of interest. We cannot compute the posterior distribution over the state probabilities analytically due to the non-linear sigmoid function; we therefore estimate their expected values by sampling the latent function from its posterior and mapping the samples through the sigmoid function. The posterior of the latent function at the output points is multivariate Gaussian, with mean and covariance determined by the prior mean at the output points, the covariance matrix of the output points, the covariance matrix between the input and output points, and the Kalman gain. The predictions for the output states are the expected probabilities of each state at each output point, computed using Equation 2. In a multi-class setting, the predictions for each class could be plotted as separate heatmaps.
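The sampling-based prediction step can be sketched as follows: draw latent function samples from a Gaussian posterior and average their sigmoid transforms. The mean vector and (diagonal) covariance here are illustrative stand-ins, not values produced by the model.

```python
import numpy as np

# Prediction sketch: estimate E[sigmoid(f)] at output points by sampling
# the Gaussian posterior over the latent function and averaging.
def sigmoid(f):
    return 1.0 / (1.0 + np.exp(-f))

mu = np.array([1.5, 0.0, -2.0])                 # posterior latent mean (assumed)
C = np.diag([0.3, 1.0, 0.5])                    # posterior latent covariance (assumed)

rng = np.random.default_rng(2)
f_samples = rng.multivariate_normal(mu, C, size=10_000)
rho_hat = sigmoid(f_samples).mean(axis=0)       # expected state probabilities
```

Averaging sigmoid samples, rather than applying the sigmoid to the mean, correctly propagates the latent uncertainty into the heatmap probabilities.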
4 Experiments

We compare the efficacy of our approach with alternative methods on synthetic data and two real datasets. In the first real-world application we combine crowdsourced annotations of images in the aftermath of a disaster, while in the second we aggregate crowdsourced labels assigned to geo-tagged text messages to predict emergencies in the aftermath of an earthquake. All experiments are binary classification tasks in which reports may be negative or positive. In all experiments, we examine the effect of data sparsity using an incremental train/test procedure:
1. Train all methods on a random subset of reports (initially a small subset).
2. Predict states at grid points in an area of interest. For HeatmapBCC, we use the predictions described in Section 3.
3. Evaluate predictions using the area under the ROC curve (AUC) or cross-entropy classification error.
4. Increment the subset of training labels at random and repeat from step 1.
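The incremental train/test procedure above can be sketched schematically as below; a trivial base-rate predictor stands in for the models compared in the text, and the synthetic labels and subset sizes are placeholders.

```python
import numpy as np

# Schematic incremental train/test loop: grow a random training subset,
# fit a stand-in "model", and score predictions with cross entropy.
rng = np.random.default_rng(3)
labels = rng.integers(0, 2, size=200)          # all available binary reports
order = rng.permutation(len(labels))           # random incremental order
eps = 1e-9

scores = []
for n in range(20, len(labels) + 1, 30):       # growing training subsets
    train = labels[order[:n]]
    p = np.clip(train.mean(), eps, 1 - eps)    # stand-in model: base rate
    test = labels[order[n:]] if n < len(labels) else labels
    ce = -np.mean(test * np.log(p) + (1 - test) * np.log(1 - p))
    scores.append(ce)                          # cross entropy per subset size
```

Plotting such scores against subset size is how the sparsity curves in the figures below are produced, with the real models substituted for the base-rate predictor.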
Specific details vary in each experiment and are described below. We evaluate HeatmapBCC against the following alternatives: a kernel density estimator (KDE) [19, 15], a non-parametric technique that places a Gaussian kernel at each observation point, then normalises the sum of Gaussians over all observations; a GP classifier, which applies a Bayesian non-parametric approach but assumes reports are equally reliable; IBCC with VB, which performs no interpolation between spatial points but is a state-of-the-art method for combining unreliable crowdsourced classifications; and an ad-hoc combination of IBCC and the GP classifier (IBCC+GP), in which the output classifications of IBCC are used as training labels for the GP classifier. This last method illustrates whether the single VB learning approach of HeatmapBCC is beneficial, for example, by transferring information between neighbouring data points when learning confusion matrices. For the first real dataset, we include additional baselines: an SVM with a radial basis function kernel; a K-nearest-neighbours classifier (NN); and majority voting (MV), which defaults to the most frequent class label (negative) in locations with no labels.
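As a sketch of the KDE baseline described above, the snippet below places a Gaussian kernel at each observation and averages the kernels at query points; the bandwidth and coordinates are illustrative.

```python
import numpy as np

# KDE baseline sketch: heatmap value at a query point is the normalised
# sum of Gaussian kernels centred on the observation locations.
def kde_heat(obs, query, bandwidth=1.0):
    d2 = ((query[:, None, :] - obs[None, :, :]) ** 2).sum(-1)  # squared distances
    k = np.exp(-0.5 * d2 / bandwidth ** 2)                     # Gaussian kernels
    return k.sum(axis=1) / len(obs)                            # normalise over obs

obs = np.array([[0.0, 0.0], [0.5, 0.5], [5.0, 5.0]])   # observation locations
query = np.array([[0.2, 0.2], [9.0, 9.0]])             # heatmap grid points
heat = kde_heat(obs, query)
```

Note that KDE reflects only observation density, not report reliability, which is the limitation the confusion-matrix methods address.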
4.1 Synthetic Data
We ran three experiments with synthetic data to illustrate the behaviour of HeatmapBCC with different types of unreliable reporters. For each experiment, we generated binary ground truth datasets as follows: obtain coordinates at all points in a grid; draw latent function values from a multivariate Gaussian distribution with zero mean and a Matérn covariance with fixed length-scale and inverse scale; apply the sigmoid function to obtain state probabilities; then draw binary target values at all locations.
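The generative recipe above can be sketched as follows, under assumed settings: a Matérn 3/2 covariance on a small grid with an illustrative length-scale, a sigmoid link, and Bernoulli draws for the targets.

```python
import numpy as np

# Synthetic ground-truth sketch: sample a smooth latent field on a grid,
# squash it to probabilities, then draw binary target states.
rng = np.random.default_rng(4)
side = 8                                            # illustrative grid size
xx, yy = np.meshgrid(np.arange(side), np.arange(side))
coords = np.column_stack([xx.ravel(), yy.ravel()]).astype(float)

d = np.sqrt(((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1))
r = np.sqrt(3.0) * d / 4.0                          # length-scale 4 (assumed)
K = (1.0 + r) * np.exp(-r) + 1e-8 * np.eye(side * side)  # Matern 3/2 + jitter

f = rng.multivariate_normal(np.zeros(side * side), K)    # latent function
rho = 1.0 / (1.0 + np.exp(-f))                      # state probabilities
t = rng.binomial(1, rho)                            # binary ground truth
```

The smoothness of the latent field means neighbouring grid squares tend to share the same target state, which is the structure the interpolating methods can exploit.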
Figure 1: median improvement of HeatmapBCC over alternatives across 25 datasets, against the number of crowdsourced labels. Shaded areas show the inter-quartile range. Top-left: AUC, 25% noisy reporters. Top-right: AUC, 50% noisy reporters. Bottom-left: AUC, 75% noisy reporters. Bottom-right: NLPD of state probabilities with 50% noisy reporters.
Noisy reporters: the first experiment tests robustness to error-prone annotators. For each of the 25 ground truth datasets, we generated three crowds of reporters. In each crowd, we varied the number of reliable reporters, while the remainder were noisy reporters with high random error rates. We simulated reliable reporters by drawing confusion matrices from beta distributions with larger parameters along the diagonals than elsewhere; for noisy workers, all parameters were set equally. For each proportion of noisy reporters, we selected reporters and grid points at random, and generated reports by drawing binary labels from the confusion matrices. We ran the incremental train/test procedure for each crowd with each of the ground truth datasets. For HeatmapBCC, GP and IBCC+GP, the kernel hyperparameters were set to fixed values. For HeatmapBCC, IBCC and IBCC+GP, we set the confusion matrix hyperparameters to larger values along the diagonals than elsewhere, assuming a weak tendency toward correct labels; for IBCC we also set the class-proportion hyperparameters.
Figure 1 shows the median differences in AUC between HeatmapBCC and the alternative methods for noisy reporters. Plotting the difference between methods allows us to see consistent performance differences when AUC varies substantially between runs. More reliable workers increase the AUC improvement of HeatmapBCC. With all proportions of workers, the performance improvements are smaller with very small numbers of labels, except against IBCC, as none of the methods produce a confident model with very sparse data. As more labels are gathered, there are more locations with multiple reports, and IBCC is able to make good predictions at those points, thereby reducing the difference in AUC as the number of labels increases. However, for the other three methods, the difference in AUC continues to increase, as they improve more slowly as more labels are received. With more than 700 labels, using the GP to estimate the class labels directly is less effective than using IBCC classifications at points where we have received reports, hence the poorer performance of GP and IBCC+GP.
In Figure 1 we also show the improvement in negative log probability density (NLPD) of the state probabilities. We compare HeatmapBCC only against the methods that place a posterior distribution over their estimated state probabilities. As more labels are received, the IBCC+GP method begins to improve slightly, as it starts to identify the noisy reporters in the crowd. The GP is much slower to improve due to the presence of these noisy labels.
Biased reporters: the second experiment simulates the scenario where some reporters choose the negative class label overly frequently, e.g. because they fail to observe the positive state when it is present. We repeated the procedure used for noisy reporters, but replaced the noisy reporters with biased reporters generated using a beta parameter matrix skewed toward the negative label. We observe performance improvements similar to the first experiment, as shown in Figure 2, suggesting that HeatmapBCC is also better able than rival approaches to model biased reporters from sparse data.
Figure 3 shows an example of the posterior distributions over the state probabilities produced by each method when trained on a random subset of labels from a simulated crowd containing biased reporters. The ground truth appears most similar to the HeatmapBCC estimates, while IBCC is unable to perform any smoothing.
Continuous report locations: in the previous experiments we drew reports from discrete grid points, so that multiple reporters produced noisy labels for the same target. The third experiment tests the behaviour of our model with reports drawn from continuous locations, with 50% noisy reporters drawn as in the first experiment. In this case, our model receives only one report for each object at the input locations. Figure 4 shows that the difference in AUC between HeatmapBCC and the other methods is significantly reduced, although still positive. This may be because we rely on the interpolated state probabilities to make classifications, since we have not observed any reports at the exact test locations; if the interpolated probability is close to 0.5, the predicted class label is uncertain. However, the improvement in NLPD of the state probabilities is less affected by using continuous locations, as seen by comparing Figure 1 with Figure 4, suggesting that HeatmapBCC remains advantageous when there is only one report at each training location. In practice, reports at neighbouring locations may be intended to refer to the same target, so treating all reports as relating to separate objects could bias the state probabilities. Grouping reports into discrete grid squares avoids this problem and means we obtain a state classification for each square in the heatmap. We therefore continue to use discrete grid locations in our real-world experiments.
4.2 Crowdsourced Labels of Satellite Images
We obtained a set of 5,477 crowdsourced labels from a trial run of the Zooniverse Planetary Response Network project (http://www.planetaryresponsenetwork.com/beta/). In this application, volunteers labelled satellite images showing damage to Tacloban, Philippines, after Typhoon Haiyan/Yolanda. The volunteers' task was to mark features such as damaged buildings, blocked roads and floods. For this experiment, we first divided the area into a grid. The goal was then to combine the crowdsourced labels to classify grid squares according to whether or not they contain buildings with major damage. We treated cases where a user observed an image but did not mark any features as a set of multiple negative labels, one for each of the grid squares covered by the image. Our dataset contained 1,641 labels marking buildings with major structural damage and 1,245 negative labels. Although this dataset does not contain ground-truth annotations, it contains enough crowdsourced annotations that we can confidently determine labels for most of the region of interest using all of the data. The aim is to test whether our approach can replicate these results using only a subset of crowdsourced labels, thereby reducing the workload of the crowd by allowing for sparser annotations. We therefore defined gold-standard labels by running IBCC on the complete set of crowdsourced labels, then extracting the IBCC posterior probabilities for data points with crowdsourced labels where the posterior of the most probable class exceeded a confidence threshold. The IBCC hyperparameters were set to larger values along the confusion matrix diagonals than elsewhere.
We ran our incremental train/test procedure 20 times with initial subsets of 178 random labels. Each of these 20 repeats required approximately 45 minutes of runtime on an Intel i7 desktop computer. The length-scales for HeatmapBCC, GP and IBCC+GP were optimised at each iteration using maximum likelihood II, by maximising the variational lower bound on the log likelihood (Equation 11). The inverse scale hyperparameters were fixed a priori, and the other hyperparameters were set as for gold-label generation. We did not find a significant difference when varying the diagonal confusion matrix hyperparameter values.
In Figure 5 (left) we can see how AUC varies as more labels are introduced, with HeatmapBCC, GP and IBCC+GP converging close to our gold-standard solution. HeatmapBCC performs best initially, potentially because it can learn a suitable length-scale from less data than GP and IBCC+GP. SVM outperforms GP and IBCC+GP when few labels are available, but is outperformed when more labels are provided. Majority voting, nearest neighbour and IBCC produce much lower AUCs than the other approaches. The benefits of HeatmapBCC can be seen more clearly in Figure 5 (right), which shows a substantial reduction in cross-entropy classification error compared to the alternative methods, indicating that HeatmapBCC produces better probability estimates.
4.3 Haiti Earthquake Text Messages
Here we aggregate text reports written by members of the public after the Haiti 2010 earthquake. The dataset we use was collected and labelled by Ushahidi. We selected 2,723 geo-tagged reports that were sent mainly by SMS and were categorised by Ushahidi volunteers. The category labels describe the type of situation that is reported, such as “medical emergency” or “collapsed building”. In this experiment, we aim to predict a binary class label, “emergency” or “no emergency”, by combining all reports. We model each category as a different information source; if a category label is present for a particular message, we observe a report from that information source at the message's geo-location. This application differs from the satellite labelling task because many of the reports do not explicitly report emergencies and may be irrelevant. In the absence of ground-truth data, we establish a gold-standard test set by training IBCC on all 2,723 reports, placed into 675 discrete locations on a grid, so that each grid square has approximately 4 reports. We set the IBCC hyperparameters to larger values along the confusion matrix diagonals than elsewhere.
Since the Ushahidi dataset contains only reports of emergencies, and no reports stating that no emergency is taking place, we cannot learn the length-scale from this data and must rely on background knowledge. We therefore selected another dataset from the Haiti 2010 earthquake that has gold-standard labels, namely the building damage assessment provided by UNOSAT. We expect this data to have a similar length-scale, because the underlying cause of both the building damage and the medical emergencies was an earthquake affecting built-up areas where people were present. We estimated the length-scale on this dataset using maximum likelihood II optimisation, then transferred the resulting point estimate to the model of the Ushahidi data. Our experiment repeated the incremental train/test procedure 20 times, with the confusion matrix hyperparameters again set to larger values along the diagonals than elsewhere.
Figure 6 shows that HeatmapBCC is able to achieve low error rates when the reports are sparse. The IBCC and HeatmapBCC results do not quite converge, due to the interpolation performed by HeatmapBCC, which can still affect the results when there are several reports per grid square. The gold-standard predictions from IBCC also contain some uncertainty, so the cross entropy does not reach zero even with all labels. The GP alone is unable to determine the different reliability levels of each report type; while it can interpolate between sparse reports, HeatmapBCC and IBCC detect the reliable data and produce different predictions when more labels are supplied. In summary, HeatmapBCC produces predictions with 439 labels (65%) that have an AUC within 0.1 of the gold-standard predictions produced using all 675 labels, and reduces the cross entropy to 0.1 bits with 400 labels (59%), showing that it is effective at predicting emergency states with reduced numbers of Ushahidi reports. Using an Intel i7 laptop, HeatmapBCC inference over 675 labels required approximately one minute.
We use HeatmapBCC to visualise emergencies in Port-au-Prince, Haiti, after the 2010 earthquake, by plotting the posterior class probabilities as the heatmap shown in Figure 6. Our example shows how HeatmapBCC can combine reports from trusted sources with crowdsourced information. The blue area shows a negative report from a simulated first responder, whose confusion matrix hyperparameters were set to strong values along the diagonals, so that the negative report was highly trusted and had a stronger effect than the many surrounding positive reports. Uncertainty in the latent function can be used to identify regions where information is lacking and further reconnaissance is necessary. Probabilistic heatmaps therefore offer a powerful tool for situation awareness and planning in disaster response.
5 Conclusions

In this paper we presented a novel Bayesian approach to aggregating unreliable discrete observations from different sources to classify the state across a region of space or time. We showed how this method can combine noisy, biased and sparse reports and interpolate between them to produce probabilistic spatial heatmaps for applications such as situation awareness. Our experiments demonstrated the advantages of integrating a confusion-matrix model, which captures the unreliability of different information sources, with Gaussian processes, which share information between sparse report locations. In future work we intend to improve the scalability of the GP using stochastic variational inference [6] and to investigate clustering confusion matrices using a hierarchical prior, as per [13, 23], which may improve the ability to learn confusion matrices when data for individual information sources is sparse.
Acknowledgements. We thank Brooke Simmons at Planetary Response Network for invaluable support and data. This work was funded by the EPSRC ORCHID programme grant (EP/I011587/1).
References

1. Adams, R.P., Murray, I., MacKay, D.J.: Tractable nonparametric Bayesian inference in Poisson processes with Gaussian process intensities. In: Proceedings of the 26th Annual International Conference on Machine Learning. pp. 9–16. ACM (2009)
2. Corbane, C., Saito, K., Dell'Oro, L., Bjorgo, E., Gill, S.P., Emmanuel Piard, B., Huyck, C.K., Kemper, T., Lemoine, G., Spence, R.J., et al.: A comprehensive analysis of building damage in the 12 January 2010 Mw7 Haiti earthquake using high-resolution satellite and aerial imagery. Photogrammetric Engineering & Remote Sensing 77(10), 997–1009 (2011)
3. Dawid, A.P., Skene, A.M.: Maximum likelihood estimation of observer error-rates using the EM algorithm. Journal of the Royal Statistical Society, Series C (Applied Statistics) 28(1), 20–28 (1979)
4. Felt, P., Ringger, E.K., Seppi, K.D.: Semantic annotation aggregation with conditional crowdsourcing models and word embeddings. In: International Conference on Computational Linguistics. pp. 1787–1796 (2016)
5. Girolami, M., Rogers, S.: Variational Bayesian multinomial probit regression with Gaussian process priors. Neural Computation 18(8), 1790–1817 (2006)
6. Hensman, J., Matthews, A.G.d.G., Ghahramani, Z.: Scalable variational Gaussian process classification. In: International Conference on Artificial Intelligence and Statistics (2015)
7. Kim, H., Ghahramani, Z.: Bayesian classifier combination. Technical report, Gatsby Computational Neuroscience Unit, London, UK (2003)
8. Kom Samo, Y.L., Roberts, S.J.: Scalable nonparametric Bayesian inference on point processes with Gaussian processes. In: International Conference on Machine Learning. pp. 2227–2236 (2015)
9. Kottas, A., Sansó, B.: Bayesian mixture modeling for spatial Poisson process intensities, with applications to extreme value analysis. Journal of Statistical Planning and Inference 137(10), 3151–3163 (2007)
10. Lintott, C.J., Schawinski, K., Slosar, A., Land, K., Bamford, S., Thomas, D., Raddick, M.J., Nichol, R.C., Szalay, A., Andreescu, D., et al.: Galaxy Zoo: morphologies derived from visual inspection of galaxies from the Sloan Digital Sky Survey. Monthly Notices of the Royal Astronomical Society 389(3), 1179–1189 (2008)
11. Long, C., Hua, G., Kapoor, A.: A joint Gaussian process model for active visual recognition with expertise estimation in crowdsourcing. International Journal of Computer Vision 116(2), 136–160 (2016)
12. Meng, C., Jiang, W., Li, Y., Gao, J., Su, L., Ding, H., Cheng, Y.: Truth discovery on crowd sensing of correlated entities. In: 13th ACM Conference on Embedded Networked Sensor Systems. pp. 169–182. ACM (2015)
13. Moreno, P.G., Teh, Y.W., Perez-Cruz, F.: Bayesian nonparametric crowdsourcing. Journal of Machine Learning Research 16, 1607–1627 (2015)
14. Morrow, N., Mock, N., Papendieck, A., Kocmich, N.: Independent evaluation of the Ushahidi Haiti Project. Development Information Systems International 8, 2011 (2011)
15. Parzen, E.: On estimation of a probability density function and mode. The Annals of Mathematical Statistics 33(3), 1065–1076 (1962)
16. Rasmussen, C.E., Williams, C.K.I.: Gaussian Processes for Machine Learning. The MIT Press, Cambridge, MA, USA (2006)
17. Raykar, V.C., Yu, S.: Eliminating spammers and ranking annotators for crowdsourced labeling tasks. Journal of Machine Learning Research 13, 491–518 (2012)
18. Reece, S., Roberts, S., Nicholson, D., Lloyd, C.: Determining intent using hard/soft data and Gaussian process classifiers. In: 14th International Conference on Information Fusion. pp. 1–8. IEEE (2011)
19. Rosenblatt, M.: Remarks on some nonparametric estimates of a density function. The Annals of Mathematical Statistics 27(3), 832–837 (1956)
20. Simpson, E., Roberts, S., Psorakis, I., Smith, A.: Dynamic Bayesian combination of multiple imperfect classifiers. In: Decision Making and Imperfection, Intelligent Systems Reference Library, pp. 1–35 (2013)
21. Simpson, E.D., Venanzi, M., Reece, S., Kohli, P., Guiver, J., Roberts, S.J., Jennings, N.R.: Language understanding in the wild: combining crowdsourcing and machine learning. In: 24th International Conference on World Wide Web. pp. 992–1002 (2015)
22. Steinberg, D.M., Bonilla, E.V.: Extended and unscented Gaussian processes. In: Advances in Neural Information Processing Systems. pp. 1251–1259 (2014)
23. Venanzi, M., Guiver, J., Kazai, G., Kohli, P., Shokouhi, M.: Community-based Bayesian aggregation models for crowdsourcing. In: 23rd International Conference on World Wide Web. pp. 155–164 (2014)
24. Venanzi, M., Guiver, J., Kohli, P., Jennings, N.R.: Time-sensitive Bayesian information aggregation for crowdsourcing systems. Journal of Artificial Intelligence Research 56, 517–545 (2016)
25. Venanzi, M., Rogers, A., Jennings, N.R.: Crowdsourcing spatial phenomena using trust-based heteroskedastic Gaussian processes. In: 1st AAAI Conference on Human Computation and Crowdsourcing (HCOMP) (2013)