Large Data and (Not Even Very) Complex Ecological Models: When Worlds Collide
We consider the challenges that arise when fitting complex ecological models to 'large' data sets. In particular, we focus on random effect models, which are commonly used to describe the individual heterogeneity often present in ecological populations. In general, these models lead to a likelihood that is expressible only as an analytically intractable integral. Common techniques for fitting such models to data include numerical approximation of the integral and Bayesian data augmentation. However, as the size of the data set (i.e. the number of individuals) increases, these tools can become computationally infeasible. We present an efficient Bayesian model-fitting approach in which we initially sample from the posterior distribution given a smaller subsample of the data, before correcting this sample via importance sampling to obtain an estimate of the posterior distribution given the full dataset. We consider several practical issues, including the subsampling mechanism, computational efficiencies (including the ability to parallelise the algorithm), and the combination of estimates from multiple subsampled datasets. We demonstrate the approach for capture-recapture models with individual heterogeneity. We first establish the feasibility of the approach using simulated data, before considering a challenging real dataset of approximately 30,000 guillemots, for which we obtain posterior estimates in substantially reduced computational time.
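To make the subsample-then-reweight idea concrete, below is a minimal illustrative sketch, not the authors' implementation: a toy conjugate normal model (flat prior, unit variance) stands in for the intractable random-effect likelihood, and all variable names and sizes are assumptions. Posterior samples are drawn using only a subsample of the data, then corrected towards the full-data posterior with importance weights w(theta) = p(y_full | theta) / p(y_sub | theta), the prior terms cancelling.

```
import numpy as np

rng = np.random.default_rng(1)

# Toy "full" dataset: y_i ~ Normal(mu, 1) with a flat prior on mu.
# (In the paper's setting the likelihood would involve an intractable
# integral over random effects; this conjugate model is a stand-in.)
n_full = 30_000
y_full = rng.normal(2.0, 1.0, size=n_full)

# Step 1: subsample, then draw from the subsample posterior.
# With a flat prior and unit variance, mu | y_sub ~ Normal(mean(y_sub), 1/n_sub).
n_sub = 1_000
y_sub = rng.choice(y_full, size=n_sub, replace=False)
theta = rng.normal(y_sub.mean(), np.sqrt(1.0 / n_sub), size=5_000)

def log_lik(y, mu):
    # Gaussian (unit-variance) log-likelihood via sufficient statistics,
    # vectorised over an array of mu values; additive constants in mu
    # are dropped since they cancel after normalising the weights.
    n, s, ss = y.size, y.sum(), np.sum(y ** 2)
    return -0.5 * (ss - 2.0 * mu * s + n * mu ** 2)

# Step 2: importance-sampling correction. The prior cancels, so
# log w(theta) = log p(y_full | theta) - log p(y_sub | theta).
log_w = log_lik(y_full, theta) - log_lik(y_sub, theta)
log_w -= log_w.max()          # stabilise before exponentiating
w = np.exp(log_w)
w /= w.sum()

# Weighted posterior summary, plus the effective sample size (ESS) as a
# diagnostic of how well the subsample posterior covers the full-data
# posterior.
post_mean = np.sum(w * theta)
ess = 1.0 / np.sum(w ** 2)
print(f"full-data posterior mean ~ {post_mean:.4f}, ESS = {ess:.1f} / {theta.size}")
```

If the subsample posterior sits far from, or is much narrower than, the full-data posterior, the weights degenerate and the ESS collapses; this is one reason the choice of subsampling mechanism, and the combination of estimates from multiple subsampled datasets, matter in practice.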