Reliable variance propagation for spatial density surface models

07/20/2018
by Mark V Bravington, et al.

Density Surface Models (DSMs) are two-stage models for estimating animal density from line-transect data. First, detectability is estimated by distance sampling, and then a Generalized Additive Model (GAM) is used to estimate animal density as a function of location and/or other spatially-varying environmental covariates. One criticism of DSMs has been that uncertainty from the two stages is not usually propagated correctly into the final variance estimates. Here we show how to reformulate a DSM so that the uncertainty in detection probability from the distance sampling stage (regardless of its complexity) is captured as a random effect in the GAM stage. This allows straightforward computation of the overall variance via standard GAM machinery. We further extend the formulation to allow for spatial variation in group size, which can be an important covariate for detectability. We illustrate these models using line transect survey data of minke whales and harbour porpoise from the SCANS-II survey in northern Europe.



1 Introduction

Line transect surveys are a standard method for surveying both marine and terrestrial populations; distance sampling methods are then used to estimate abundance while accounting for detectability (Buckland et al., 2001). There is particular interest in spatially-explicit (model-based) estimates of cetacean density for management and mitigation purposes (International Whaling Commission, 2012; Hammond et al., 2013; Roberts et al., 2016), where it is important that abundances (and their variances) can be calculated over arbitrary areas. Spatial models for distance sampling data must take detectability into account, usually via an offset in the model. Uncertainty about the (fixed) offset must be incorporated, and the covariance between the detectability and spatial parts of the model needs to be estimated. This can be particularly problematic when sighting conditions change as a function of space; in that case the detection and distribution processes are confounded. For example, during cetacean surveys the weather changes in space as the survey progresses, so sightability and habitat covariates are both functions of space. In this paper we show how detection function uncertainty can be accommodated painlessly within standard software.

There are many methods for obtaining spatially-explicit abundance estimates from distance sampling data; here we consider density surface models (DSMs; Hedley and Buckland, 2004; Miller et al., 2013): two-stage models that first use a detection function to model detectability, followed by a spatial model (in our case a generalized additive model; GAM, e.g. Wood, 2017) to capture animal density as a function of environmental covariates such as sea surface temperature or bathymetry. Detectability is modelled using (for example) the methods described in Marques and Buckland (2003), where covariates such as weather or observer are used to estimate the effect of observation conditions on detectability.

This two-stage approach is convenient because it uses well-established software and diagnostic tests for each part of the problem. It does, however, complicate variance estimation, as uncertainty from the detection function must be propagated through to the spatial model, and joint fitting of the detection function and spatial model is not possible in most GAM packages. In this paper we fit a detection model, then fit the GAM including a quadratic approximation to the detection function log-likelihood centred on the point estimate, which enters the linear predictor as a zero-mean normal random effect. We therefore effectively (re)fit both models at once. This approach allows us to use well-tested model checking and diagnostic procedures, first for detection function fitting and then for GAMs. We also propose a diagnostic based on our variance propagation method that can be used to assess model specification issues.

Following a summary of DSMs and notation in Section 2, we present our new formulation in Section 3, including variance computation and diagnostics. Section 4 comments on problems with existing approaches to variance propagation in DSMs. In Section 5, we extend the formulation to cover DSMs where group size varies spatially and affects detectability (a common situation with whales and dolphins). Section 6 gives examples of the variance propagation method and the group size model. Some discussion is given in Section 7, including comparison with single stage methods such as Johnson et al. (2009) and Yuan et al. (2017).

2 Density surface models

We focus on modelling line transect distance sampling data: observers move along a set of survey lines, counting (groups of) animals, recording distances from the line to the observed groups (or their cues, such as blows for cetaceans), and the size of each detected group.

To fully describe the DSMs in this paper, we distinguish four different classes of variable.

  1. Density covariates, $z$, vary in space and potentially affect local animal abundance: e.g., latitude and depth; they are required for prediction and fitting, and are assumed known across the entire region of interest.

  2. Effort covariate(s), $x$ (or $x_j$ for segment $j$), affect detection probability: e.g., sea conditions measured on the Beaufort scale, or observer name; assumed known along each transect, but not necessarily in unsurveyed areas.

  3. Individual covariates, $w$, that affect detection probability and are a persistent property of each group (independent of whether the group is observed or not) during its window of observability: e.g., size (number of animals), and perhaps behaviour. $w$ is assumed known for each observed group (see Discussion). The random variable $w$ varies from one group to the next, and its statistical distribution may vary spatially. $w$ may have a direct effect on abundance (via the mean group size), as well as on detection probability, in which case it is also necessary to estimate certain properties of its distribution, such as its local mean.

  4. Observation variables, $y$, which are random properties of one observation on one group: e.g., perpendicular distance between the group and the trackline. In certain settings, $y$ may contain other elements. For example, in a multi-observer-platform survey (e.g., MRDS; Buckland et al., 2004), $y$ might also include which of the active observers saw the group; in a cue-based setting, $y$ might include the bearing between sighting and trackline.

These classes are assumed to be mutually exclusive; overlaps can lead to fundamental problems for distance sampling which we do not address here (such as non-uniform animal distribution within the sample unit; e.g., Marques et al., 2012). The distinction between individual and effort covariates is often glossed over but they have rather different implications for abundance estimation (see below).

In the first stage of a DSM, the detection function $g(y, x, w; \theta)$, which involves unknown parameters $\theta$ as well as $x$ and $w$, describes the probability of making an observation at perpendicular distance $y$. The parameters are usually estimated by maximizing the following log-likelihood over the observations $i = 1, \dots, n$:

$$\ell(\theta) = \sum_{i=1}^{n} \log \frac{g(y_i, x_{j(i)}, w_i; \theta)}{p(x_{j(i)}, w_i; \theta)}, \qquad (1)$$

where $j(i)$ is the transect containing sighting $i$. Here $p(x, w; \theta)$ is the overall detection probability for a group, defined by

$$p(x, w; \theta) = \int g(y, x, w; \theta)\, \pi(y)\, \mathrm{d}y,$$

where $\pi$ is the distribution of $y$. In standard distance sampling, where $y$ consists only of perpendicular distance, $\pi$ is uniform between 0 and some fixed truncation distance, beyond which observations are discarded. This formulation encompasses a wide range of models, including multiple covariate distance sampling with $x$ and $w$ as covariates (MCDS; Marques and Buckland, 2003), multi-observer mark-recapture distance sampling (MRDS; Borchers et al., 1998), and cue-based "hazard probability models" (Skaug and Schweder, 1999).
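As a concrete illustration of this first stage, the sketch below fits a hazard-rate detection function with a Beaufort covariate using the Distance package; the toy data frame and its column names are assumptions for illustration, not part of the original analysis.

```r
library(Distance)

# Toy sightings table (one row per detected group): perpendicular distance
# in metres, Beaufort category and group size. Entirely simulated here.
set.seed(1)
dist_data <- data.frame(
  distance = abs(rnorm(200, sd = 300)),
  beaufort = factor(sample(c("[0,1]", "(1,2]", "(2,4]"), 200, replace = TRUE)),
  size     = 1
)

# Hazard-rate detection function with Beaufort as a covariate (MCDS),
# truncated at 870 m as in the minke whale example of Section 6.
df_fit <- ds(dist_data, truncation = 870, key = "hr", formula = ~beaufort)
summary(df_fit)
```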

The second stage of a DSM models the local count of observations via a GAM, to capture spatial variation in animal density. This allows us both to estimate abundance within any sub-region of interest, and to compensate as far as possible for uneven survey coverage (whether by design, or by virtue of field logistics and weather conditions). Since transects are generally very long in comparison to their width and therefore contain a range of density and density covariate values, we divide transects into smaller segments, which are the sample units for the GAM; environmental covariates are assumed not to change much within each segment. The relationship between the count $n_j$ in segment $j$ and the density covariates $z_j$ is modelled as an additive combination of smooth functions:

$$\mathbb{E}(n_j) = A_j \hat{p}_j \exp\left[\beta_0 + \sum_k f_k(z_{jk})\right], \qquad (2)$$

where each segment $j$ is of area $A_j$, and $n_j$ follows some count distribution such as quasi-Poisson, Tweedie or negative binomial, with a log link assumed. The $f_k$ are smooth functions, represented by a basis expansion (i.e., $f_k(z) = \sum_l \beta_{kl} b_{kl}(z)$ for some basis functions $b_{kl}$) penalized by a sum of quadratic penalties; $\beta_0$ is an intercept term, included in the parameter vector $\beta$; $\lambda$ is a vector of smoothing (hyper)parameters which control the wiggliness of the $f_k$. We take a Bayesian interpretation of GAMs (Wood, 2017), in which $\lambda$ controls the variance of a multivariate improper Gaussian prior (Marra and Wood, 2011):

$$\beta \sim N\left(0,\; \phi\Big(\sum_k \lambda_k S_k\Big)^{-}\right),$$

with scale parameter $\phi$, smoothing parameters $\lambda_k$ and penalty matrices $S_k$ ($^{-}$ indicates pseudoinverse). We estimate $\lambda$ via REML (Wood, 2011), an empirical Bayes procedure. Fully Bayesian approaches, placing hyperpriors on $\lambda$, are also possible. Ignoring group size for now, we write $\hat{p}_j = p(x_j; \hat{\theta})$ (we return to group size in Section 5). We may think of the product $A_j \hat{p}_j$ as the effective area of segment $j$, analogous to the effective strip (half-)width (Buckland et al., 2001).

We are interested in the uncertainty of a predicted abundance estimate, $\hat{N}$. We assume below that we have created some prediction grid with all density covariates available for each cell in the grid. Abundance is predicted for each cell and summed to give an overall abundance, $\hat{N}$, over some region of interest, which may not be the entire surveyed area. Although $\hat{p}$ does not appear explicitly in the prediction

$$\hat{N} = \sum_r A_r \exp\left[\hat{\beta}_0 + \sum_k \hat{f}_k(z_{rk})\right],$$

where the sum is over prediction cells $r$ with areas $A_r$, the GAM offsets clearly do affect $\hat{\beta}$, so it is important to account somehow for detection probability uncertainty. (2) assumes the offset is fixed, so extra steps are required.
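A minimal sketch of this second stage in mgcv follows, with a simulated segment table and prediction grid standing in for real data; the column names (count, area, p_hat and so on) are assumptions for illustration only.

```r
library(mgcv)

# Simulated segment table: counts per segment, segment area, first-stage
# detection probability p_hat, and density covariates (coordinates, depth).
set.seed(1)
segs <- data.frame(x = runif(200), y = runif(200),
                   depth = runif(200, 20, 200), area = 2, p_hat = 0.4)
segs$count <- rpois(200, segs$area * segs$p_hat *
                         exp(-1 + segs$x + sin(pi * segs$y)))

# Equation (2): smooths of density covariates, log(effective area) offset.
dsm_fit <- gam(count ~ s(x, y) + s(depth),
               offset = log(area * p_hat),   # effective area of each segment
               family = tw(),                # Tweedie response, log link
               method = "REML", data = segs)

# Prediction grid: p_hat does not appear, only cell areas and covariates.
pred <- expand.grid(x = seq(0, 1, length.out = 25),
                    y = seq(0, 1, length.out = 25))
pred$depth <- 100
pred$cell_area <- 0.5
N_hat <- sum(pred$cell_area * predict(dsm_fit, newdata = pred, type = "response"))
```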

3 Variance propagation for Density Surface Models

Let $p_j(\theta)$ be the true probability of detection in segment $j$, and omit $w$ for now, therefore assuming that there are no individual-level covariates (e.g., that group size is always 1; see Section 5). If $\theta_0$ is the true (unknown) value of $\theta$, and $\hat{\theta}$ is its MLE, we use the shorthand $p_j = p_j(\theta_0)$ and $\hat{p}_j = p_j(\hat{\theta})$ when the dependence is clear. The expected number of encounters in segment $j$ is $\mathbb{E}(n_j) = A_j p_j D_j$, where $D_j$ is the underlying density.

Given $\theta_0$, we can re-write (2) on the link scale as:

$$\log \mathbb{E}(n_j) = X_j \beta + \log A_j + \log p_j(\theta_0), \qquad (3)$$

where $X_j$ is the (known) $j$th row of the design matrix, i.e., the values of the basis functions in segment $j$, so $X_j \beta = \beta_0 + \sum_k f_k(z_{jk})$, and $\log A_j + \log p_j(\theta_0)$ is an offset. The complication is that we only have an estimate $\hat{\theta}$ of $\theta_0$. To tackle this, we first rewrite the linear predictor as

$$\log \mathbb{E}(n_j) = X_j \beta + \log A_j + \log p_j(\hat{\theta}) + \left[\log p_j(\theta_0) - \log p_j(\hat{\theta})\right]$$

and then take a Taylor series expansion of $\log p_j(\theta_0)$ about $\hat{\theta}$:

$$\log p_j(\theta_0) \approx \log p_j(\hat{\theta}) + \left.\frac{\partial \log p_j(\theta)}{\partial \theta}\right|_{\theta = \hat{\theta}} (\theta_0 - \hat{\theta}) + O\!\left(\|\theta_0 - \hat{\theta}\|^2\right). \qquad (4)$$

By defining the vectors $\delta = \theta_0 - \hat{\theta}$ and $\tilde{X}_j = \partial \log p_j(\theta) / \partial \theta \big|_{\theta = \hat{\theta}}$, we can rewrite (3) as

$$\log \mathbb{E}(n_j) = X_j \beta + \tilde{X}_j \delta + \log A_j + \log p_j(\hat{\theta}). \qquad (5)$$

Asymptotic arguments suggest that the $O(\|\theta_0 - \hat{\theta}\|^2)$ term can be neglected for reasonable sample sizes. Since estimation of spatial models requires a large amount of data, the requirements for the quadratic approximation are probably met in most cases (if the spatial model is reasonable).

We have, approximately, that $\delta \sim N(0, V_\theta)$, where the covariance matrix $V_\theta$ is calculated as the negative inverse Hessian of (1). In other words, the "posterior distribution" of $\theta$ from fitting the detection function now becomes a prior distribution for $\delta$. To first order, $\delta$ then plays the same structural role in (5) as the basis coefficients $\beta$. The design matrix for $\delta$ ($\tilde{X}$ in (5)) is obtained by differentiating the log-detection probabilities $\log p_j$ with respect to $\theta$ at $\hat{\theta}$. Derivatives do not need to be especially accurate, and need only be computed once, so simple 3-point numerical differentiation is perfectly adequate. $V_\theta$ should be readily available (via the Hessian) from the distance sampling stage regardless of the complexity of the model, so this method can be applied automatically to almost any distance sampling setup. Simultaneous estimates $\hat{\beta}$ and $\hat{\delta}$ can be obtained from standard GAM fitting software. Posterior inferences about $\beta$ and $\delta$ (and therefore about density and abundance) automatically propagate the uncertainty from fitting the detection function.
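A small sketch of the derivative computation follows; the function p_fun (returning per-segment detection probabilities for a given parameter vector) and the objects theta_hat and segs are assumptions standing in for whatever the first-stage software provides.

```r
# Central (3-point) finite differences of log p_j(theta) with respect to
# theta, evaluated at the detection function MLE theta_hat. The result has
# one row per segment and one column per element of theta.
log_p_derivs <- function(p_fun, theta_hat, seg_covars, eps = 1e-4) {
  sapply(seq_along(theta_hat), function(i) {
    up <- dn <- theta_hat
    up[i] <- up[i] + eps
    dn[i] <- dn[i] - eps
    (log(p_fun(up, seg_covars)) - log(p_fun(dn, seg_covars))) / (2 * eps)
  })
}

# XD <- log_p_derivs(p_fun, theta_hat, segs)  # extra design matrix for delta
```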

The only technical difference from fitting a standard GAM is that $\phi$ is usually unknown and has to be estimated (i.e., the prior on $\beta$ has known covariance structure, but unknown scale), whereas the prior on $\delta$ is completely determined by the detection function fitting (i.e., in effect the smoothing parameter attached to $\delta$ must equal $\phi$, where $\phi$ is the scale parameter). This setup cannot be specified directly in the R package mgcv because of implementation details (at least up to version 1.8; it may be possible within other GAM implementations), unless $\phi$ is fixed (as opposed to estimated). With per-segment count data, a fixed $\phi$ of 1 would correspond to a purely Poisson response, which might be appropriate if the maximum count per segment is only 1 or 2. In our experience, better fits can usually be obtained by estimating $\phi$ and using a Tweedie response distribution. In order to implement (5) for a general response distribution using mgcv (Wood, 2011), we therefore use a one-dimensional search over $\phi$ to maximize the marginal REML. At each iteration, given the working value of $\phi$, we re-fit the GAM with $\phi$ and the smoothing parameter for $\delta$ both fixed at that value. Speed can be improved by re-using some of the setup computations (design matrices, etc.) at each iteration.
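Below is one way this could be expressed directly in mgcv; it is a hedged sketch rather than the dsm package's actual implementation. A Poisson response is assumed so that $\phi$ is fixed at 1 (avoiding the search over $\phi$ described above), and XD and V_theta are the derivative matrix and detection-function covariance from the earlier sketches.

```r
library(mgcv)

# delta enters as a parametric matrix term XD with Gaussian prior N(0, V_theta):
# supply the prior precision solve(V_theta) as a parametric penalty (paraPen)
# with its smoothing parameter fixed at 1, and use a Poisson family so that
# the scale parameter phi is also fixed at 1.
dat     <- as.list(segs)                 # segment table from the earlier sketch
dat$XD  <- XD                            # derivative matrix (one column per element of theta)
dat$off <- log(segs$area * segs$p_hat)   # offset using the estimated p_hat

fit_vp <- gam(count ~ s(x, y) + s(depth) + XD + offset(off),
              family = poisson(),
              paraPen = list(XD = list(solve(V_theta), sp = 1)),
              method = "REML", data = dat)

# The coefficients attached to XD are the posterior mode of delta.
```

For a Tweedie response, the one-dimensional search over $\phi$ described above would wrap this fit; the sketch is only meant to show where the prior on $\delta$ enters.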

3.1 Diagnostics and interpretation of $\hat{\delta}$

Posterior modes $\hat{\delta}$ are available after fitting, and may help with diagnosing inconsistencies between the detection function and spatial model. For example, if weather is systematically worse in some parts of the survey region, then both $\beta$ and $\delta$ will contribute to the expected pattern of sightings, and the two sets of parameters will be confounded. Fortunately, because both $\beta$ and $\delta$ are penalized (i.e., partly constrained by their prior distributions), the spatial model has limited ability to affect $\hat{\delta}$, but it is nevertheless worth checking whether there is any such tension between the two parts of the model. This can be checked in a couple of ways. The first is to compare the inferred spatial distribution and abundance from fitting (3) with the "naive" estimates where detection uncertainty is ignored and the offset is treated as exact. The second is to check whether the detection probabilities (by covariate level) would be substantially changed by fitting the spatial model; in other words, whether $\hat{\delta}$ is close enough to zero given its prior distribution, or, perhaps more usefully, whether the overall detectability by covariate level has changed. Since the fitted spatial model still includes the information from the first stage, any shift of more than about one standard deviation (based on the covariance from the detection function stage) might merit investigation.

As a general diagnostic tool for density surface models, we have found it useful to compare total observed and expected numbers of sightings, grouped by detection covariates (e.g., Beaufort). This can be helpful in diagnosing detection function problems (e.g., failure of the assumption of certain detection at zero distance under poor weather conditions) as well as failures of the spatial model (e.g., an abrupt change in density). In addition, one could also use standard detection function model checking (e.g., quantile-quantile plots) with the adjusted parameters $\hat{\theta} + \hat{\delta}$.
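Continuing the earlier sketches, this comparison takes only a couple of lines; here the beaufort column of the segment table is an assumed, hypothetical covariate and fit_vp is the refitted model from the previous sketch.

```r
# Observed versus expected sightings per Beaufort category, where the
# expected counts are the fitted values from the refitted DSM (fit_vp).
data.frame(
  observed = tapply(segs$count,     segs$beaufort, sum),
  expected = tapply(fitted(fit_vp), segs$beaufort, sum)
)
```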

3.2 Calculating the variance of $\hat{N}$

Given a model that includes detection function uncertainty, we now discuss two methods to calculate the variance of $\hat{N}$.

Posterior simulation. The posterior for the coefficients $\beta$ (and $\delta$), given the data and smoothing parameters $\lambda$, is approximately $N(\hat{\beta}, V_\beta)$, where $V_\beta$ is the posterior covariance matrix of the GAM coefficients (Wood, 2017, Sections 5.8 & 6.9.3). We form the prediction matrix $L_p$, which maps model coefficients to values of the linear predictor for the prediction data, so that $\hat{\eta} = L_p \hat{\beta}$ (Wood, 2017, Section 6.10). The following algorithm can be used:

  1. For $b = 1, \dots, B$:

    1. Simulate $\beta_b$ from $N(\hat{\beta}, V_\beta)$.

    2. Calculate the predicted abundance $\hat{N}_b = \mathbf{a} \exp(L_p \beta_b)$ (where $\mathbf{a}$ is a row vector of areas for the prediction cells).

  2. Calculate the empirical variance or percentiles of the $\hat{N}_b$.

In practice $B$ does not have to be particularly large; Marra et al. (2011) achieve reasonable results with a modest number of simulations.
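A sketch of the algorithm, continuing from the earlier mgcv example (dsm_fit and pred); the number of simulations and the use of MASS::mvrnorm are incidental choices.

```r
library(MASS)  # for mvrnorm

# Map coefficients to the linear predictor over the prediction grid.
Lp <- predict(dsm_fit, newdata = pred, type = "lpmatrix")

# Draw coefficient vectors from their approximate posterior and push each
# draw through the prediction grid (log link, so exponentiate).
betas <- mvrnorm(1000, coef(dsm_fit), vcov(dsm_fit))
N_sim <- apply(betas, 1, function(b) sum(pred$cell_area * exp(Lp %*% b)))

var(N_sim)                        # posterior variance of N_hat
quantile(N_sim, c(0.025, 0.975))  # 95% interval
```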

Analytically. One can avoid the computational burden of posterior simulation (which can be large for complex models) by calculating:

$$\mathrm{Var}(\hat{N}) \approx \left(\frac{\partial \hat{N}}{\partial \beta}\right)^{\!\top} V_\beta \left(\frac{\partial \hat{N}}{\partial \beta}\right) \qquad (6)$$

(the delta method; Seber 1982, Section 1.3.3) where derivatives are evaluated at the estimated values of the model parameters.
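The delta-method calculation is also only a few lines in R; the sketch below again uses the hypothetical dsm_fit, pred and Lp objects from earlier and relies on the log link, for which the derivative of $\hat{N}$ with respect to $\beta$ has a closed form.

```r
# Equation (6): with a log link, dN/dbeta = t(Lp) %*% (cell_area * exp(Lp %*% beta_hat)).
mu    <- as.vector(pred$cell_area * exp(Lp %*% coef(dsm_fit)))
dN_db <- t(Lp) %*% mu
var_N <- as.numeric(t(dN_db) %*% vcov(dsm_fit) %*% dN_db)
sqrt(var_N) / sum(mu)   # CV of the abundance estimate
```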

4 Previous methods for estimating uncertainty in Density Surface Models

Several approaches have been suggested to combine detection function and spatial model uncertainties in predicted abundance; we review them briefly here. We need to estimate the following:

$$\mathrm{Var}(\hat{N}) = \mathbb{E}_p\!\left[\mathrm{Var}(\hat{N} \mid p)\right] + \mathrm{Var}_p\!\left[\mathbb{E}(\hat{N} \mid p)\right],$$

where $p$ here is a random variable for the probability of detection and the subscripts indicate that the expectation/variance is taken over that variable. The first part of this can be derived from GAM theory and is detailed below; the second is more tricky.

Assuming independence. When $\hat{p}$ is the same for all observations, it enters (2) only as a constant multiplier, so the spatial model component and $\hat{p}$ are independent. The total variance of the abundance estimate can then be calculated by combining the GAM variance estimate with the variance of the probability of detection, summing the squared coefficients of variation (CVs) (Goodman, 1960). Hence, when there are no covariates in the detection function, we calculate:

$$\mathrm{CV}(\hat{N})^2 = \mathrm{CV}_{\mathrm{GAM}}(\hat{N})^2 + \mathrm{CV}(\hat{p})^2. \qquad (7)$$

In the case where detectability is a function of covariates, it is hard to justify the use of the CV decomposition, as there are correlations between the spatial distribution and the covariates that affect detectability. Program Distance (Thomas et al., 2010) uses the Horvitz-Thompson-corrected counts per segment as the response (thus removing detectability from the right-hand side of (2)). Variance is then calculated using a probability of detection averaged over the observations: the Horvitz-Thompson estimate of abundance in the covered area is calculated first ($\hat{N}_{\mathrm{covered}} = \sum_i s_i / \hat{p}_i$, where $s_i$ is the group size of observation $i$ and $\hat{p}_i$ is the probability of detecting that group), and this is then used to calculate the implied average detectability, had the analysis not contained covariates ($\tilde{p} = n / \hat{N}_{\mathrm{covered}}$, where $n$ is the number of observed groups). The numerical derivatives of $\tilde{p}$ with respect to $\theta$ can then be used in (7) to derive a variance for this probability of detection, averaged over the observations.

We do not recommend this approach, for several reasons. Transforming the response through multiplication by a random variable breaks the mean-variance and independence assumptions of the GAM, so that the computed $\mathrm{Var}_{\mathrm{GAM}}(\hat{N})$ is invalid (unless there are no covariates in the detection function). Additionally, there is no coherent way to generalize the formula to small-area predictions: the effort covariates within a small area will not have the same range as those in the larger survey area (i.e., weather conditions will not be homogeneous throughout the survey area). Hence, the uncertainty that applies to the overall $\tilde{p}$ is usually not the appropriate uncertainty to apply to a small area where observing conditions may be untypical. While some ad hoc correction could perhaps be contrived, it is not obvious how to do so reliably in general.

The bootstrap. Bootstraps are sometimes seen as an attractive alternative for dealing with all aspects of variance in DSMs, bypassing awkward analytical calculations. There are many different ways to bootstrap a DSM; Hedley and Buckland (2004) describe two possible implementations (one parametric, one non-parametric), which are not easy to choose between and which do not necessarily give similar answers. Ignoring computational time issues, the first practical difficulty in setting up a "good" non-parametric bootstrap for a DSM is choosing sampling units that are independent of the fitted model: in situations where there is more than one survey occasion, it is unclear exactly how one should decide on the units used for resampling.

The second, more substantial, issue is the fundamental statistical problem with combining smoothers (which essentially come from a Bayesian/random effects paradigm) with bootstraps (a deeply frequentist concept). The problem does not seem to be well-known in the statistical ecology literature, so we give an explanation here. The basic problem is that (most) bootstraps use only the posterior modes of random effects, thus omitting a key part of the posterior uncertainty. To see this, consider a simple “spatial model” where the region is divided into blocks, each with its own independent random effect, and a bootstrap that generates new data at each original observation/transect, either parametrically or non-parametrically. If one of the blocks is unsampled in the original data, it will be unsampled in every realization too, and the “spatial model” simply sets the point estimate of that random effect to zero in every bootstrap realization; hence a bootstrap will ascribe zero uncertainty about the density in that block. The correct inference would of course be for the random effect to retain its prior variance. This phenomenon has been well-known in statistics since at least Laird and Louis (1987) (see also the discussants), who coined the term “naïve bootstrap” for such procedures that ignore the point estimate shrinkage inevitable in mixed or random effect models (fixed effect models are not susceptible in the same way). They proposed some parametric modifications (“type II” and “type III” bootstraps) that are more sensible in the limited situations they consider. However, the underlying theory is complex (Carlin and Gelfand, 1991; Carlin and Louis, 2008) and it is far from clear whether simple yet reliable bootstraps can be devised for complicated multi-stage random effect situations like DSMs.

The GAM formulation, whereby each random effect spreads its influence across a wide area and the posterior distributions of the coefficients are not independent, disguises the naïve bootstrap issue but does not fully remove it, especially in line transect settings. Figure 1 shows a simple unidimensional Poisson process, sampled at either end but not in the middle (rug plot). Bootstrap replicates (shown in light grey, of which there are 500) largely fail to capture our uncertainty in the unsampled middle area. The analytical estimate (dark grey band) illustrates how little we know about the unsampled area.

Figure 1: Comparison of bootstrap and analytical uncertainty for a Poisson process. The black line is the true intensity function (on the response scale) and points are observations. The blue line is a smooth of space; the light grey wiggly lines are 500 bootstrap predictions; dashed lines are point-wise upper and lower 95% quantiles from the bootstrap; the dark grey band is the analytical GAM confidence band using (6). The bootstrap appears confident that there is nothing in the unsampled area, but the analytical estimate illustrates how little we know.

The above does not imply that simple or indeed complicated bootstraps will never give reliable results in line transect density surface modelling; given plenty of observations and good, even coverage, many approaches to inference will give similar and good results. However, it is sometimes not obvious whether this holds for a specific dataset, nor what to do bootstrap-wise if not. Instead, the (empirical) Bayes framework of GAMs offers a coherent and general-purpose way to capture uncertainty which we explore next. See Wood (2017, Section 6.10.3) for further comment.

5 A new model for group size

Our variance propagation method addresses the case where detectability depends only on effort covariates, not on individual covariates such as group size. Incorporating individual covariates in the detection function is not problematic, but it is not obvious how to allow for the resulting different detection probabilities in the GAM. Further, it is not obvious how to combine predictions across different group sizes, since average group size may vary spatially (Ferguson et al., 2006).

One approach is to use Horvitz-Thompson estimates of abundance per segment as the response in the model, correcting for detectability in the response rather than incorporating it via the offset of the model (Hedley and Buckland, 2004), but this does not allow variance propagation. One could fit a separate spatial model to subsets of the data for each group size, though it seems inefficient not to share information between those subsets. To account for group size variation in space and in detectability, we propose a new model that allows us to propagate variance in the manner described above.

We form $C$ categories of group sizes, denoted $c = 1, \dots, C$, where groups within each category have similar detectabilities, and fit a detection function incorporating these group size categories. We then fit a GAM to a $C$-fold replicate of the dataset, with the response in the $c$th replicate of segment $j$ being $n_{jc}$, the number of groups in category $c$ that were seen in that segment. (The total number of observations is unchanged; each observation is allocated to just one of the $C$ "replicates".) Group size category (as a factor) is included as an explanatory variable, and the smooths are modified to allow similar variations in density for groups of different sizes. There are no extra assumptions in this formulation beyond those of the model in Section 3, except that the numbers of groups of different size categories in a given segment are assumed independent, given the underlying density (which is allowed to vary with group size).

Factor-smooth interactions

We extend (2) to include multiple smooths of space which correspond to the different categorizations of group size, so our model is:

$$\mathbb{E}(n_{jc}) = A_j \hat{p}_{jc} \exp\left[\beta_0 + f_c(\mathbf{u}_j) + \sum_k f_k(z_{jk})\right],$$

for $c = 1, \dots, C$, where $n_{jc}$ is the number of observed groups in group size class $c$ in segment $j$ and $f_c(\mathbf{u})$ is the spatial smooth (where $\mathbf{u}$ is a spatial coordinate) for group size class $c$. Smoothers like $f_c(\mathbf{u})$ are referred to as factor-smooth interactions (Wood, 2017; Pedersen et al., 2018). The $f_k$ are any other smooths (of covariates $z$). For clarity we make the dependence on group size class explicit: $\hat{p}_{jc} = p(x_j, c; \hat{\theta})$, i.e., the probability of detection given the segment-level detection covariates and group size class $c$. In practice we duplicate the full segment data as many times as there are group size categories (so our model now has $C \times$ (number of segments) rows of data), with the response being the number of groups of that category that occur in that segment.

There are a number of different possible forms for $f_c$. These vary in two main ways: (1) do the levels share a smoothing parameter, or have separate ones? (2) do the smooths tend toward a "global" smooth that dictates a general spatial effect? Here we adopt the "fs" basis in mgcv, which can be thought of as a non-linear version of a random slopes model: smooths are generated for each factor level, defined as deviations from a reference level. Each of the levels has the same smoothing parameter; this is appealing, as we might expect that the spatial smooths for each group size are similar, but that there might be some process that generates larger groups in certain places (e.g., large prey aggregations attracting large groups of animals). This approach is easily extended to non-spatial smoothers (e.g., of depth).
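As a hedged sketch (one possible specification, not necessarily the exact model used in Section 6), the replicated-data GAM might look like the following; segs_rep and its column names, including the size_class factor, are assumptions.

```r
library(mgcv)

# Segment table replicated once per group size class: `count` is the number
# of groups of that class seen in the segment, `p_hat` is the class-specific
# detection probability, and `size_class` is a factor (e.g. "1", "2", "3-5").
fit_grp <- gam(count ~ size_class + s(x, y, size_class, bs = "fs") + s(depth),
               offset = log(area * p_hat),
               family = tw(), method = "REML", data = segs_rep)
```

The shared smoothing parameter across the levels of size_class is what makes the class-specific surfaces shrink towards a common degree of smoothness.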

Abundance and uncertainty estimation with group size smooths

Abundance is estimated by summing over the predicted number of groups in each group size category, $\hat{N}_c$, weighted by the corresponding mean group size, $\bar{s}_c$:

$$\hat{N} = \sum_{c=1}^{C} \bar{s}_c \hat{N}_c.$$

We can find $\mathrm{Var}(\hat{N}_c)$ from the variance propagation procedure above, but we need $\mathrm{Var}(\hat{N})$, which we can obtain from the law of total variance:

$$\mathrm{Var}(\hat{N}) \approx \bar{\mathbf{s}}^\top \mathbf{V}_{\hat{N}} \bar{\mathbf{s}} + \sum_{c=1}^{C} \hat{N}_c^2 \, \mathrm{Var}(\bar{s}_c), \qquad (8)$$

where $\mathrm{Var}(\bar{s}_c)$ is the variance of the mean group size within category $c$, estimated from the empirical variance of group size within that category. The contribution of $\mathrm{Var}(\bar{s}_c)$ should be small (because categories are narrow, and the mean must lie within the category), and $\bar{s}_c$ should also not vary much spatially, so further spatial adjustment to that variance component is not required. The estimates $\hat{N}_c$ are of course correlated, as they are fitted within a single GAM, so $\mathbf{V}_{\hat{N}}$ in the first term is their covariance matrix.
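A sketch of assembling these pieces, under the assumption that equation (8) takes the form above; N_c, V_Nc, s_bar and var_s_bar are hypothetical objects holding the per-class predicted group abundances, their covariance matrix, the per-class mean group sizes and the variances of those means.

```r
# Combine per-class group abundances with mean group sizes (equation 8).
N_hat <- sum(s_bar * N_c)
var_N <- as.numeric(t(s_bar) %*% V_Nc %*% s_bar) + sum(N_c^2 * var_s_bar)
sqrt(var_N) / N_hat  # CV of total abundance
```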

6 Examples

We re-analyse line transect data for two species from the SCANS-II surveys in European Atlantic waters. For survey details and a serious analysis see Hammond et al. (2013). Here we assume certain detection on the trackline and ignore island/coastline effects in the spatial model. We computed a diagnostic for both models comparing observed versus expected counts at the observed levels of the observation-level covariates (in addition to the usual distance sampling/GAM diagnostics). Models were fitted using the dsm package in R, using the dsm_varprop function to calculate the variance of $\hat{N}$.
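For orientation, the workflow with the dsm package looks roughly like the following; the argument names are given from memory and should be checked against the package documentation, and df_fit, segs, obs and pred are the usual hypothetical inputs (fitted detection function, segment table, observation table, prediction grid).

```r
library(dsm)

# Spatial model with the detection function attached, then variance
# propagation for predictions over the grid `pred`.
mod <- dsm(count ~ s(x, y) + s(depth), ddf.obj = df_fit,
           segment.data = segs, observation.data = obs,
           family = tw())
vp  <- dsm_varprop(mod, newdata = pred)
# `vp` contains the refitted model and the propagated variance of N_hat.
```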

6.1 Minke whales

Minke whale (Balaenoptera acutorostrata) shipboard survey data (from the North Sea; strata U and V in Hammond et al. 2013) included Beaufort as a covariate in a hazard-rate detection function (Beaufort was recorded at each segment and binned as [0, 1], (1, 2], (2, 4]; only segments with sea state in these ranges were included). The following spatial model was then fitted:

$$\mathbb{E}(n_j) = A_j \hat{p}_j \exp\left[\beta_0 + f(\mathtt{x}_j, \mathtt{y}_j)\right],$$

where $n_j$ follows a Tweedie distribution constrained to have a minimum power parameter of 1.2, since values in the range 1–1.2 make little practical difference to variances but can lead to numerical problems; $\mathtt{x}$ and $\mathtt{y}$ are projected coordinates. Distances were truncated at 870 m for detection function fitting, as in Hammond et al. (2013); we ignore group size as group sizes were predominantly 1 (68 observations in total: three of size 2, one of size 5, all others singletons). Table 1 shows the number of predicted minke whales per kilometre of transect from the model by Beaufort category: minke density is correlated with weather conditions, so we need to account for this when calculating variance.

Table 2 compares estimated detectability at different Beaufort levels from the detection function with those "corrected" during the fitting of the variance propagation model; we see there is little difference in the probability of detection after refitting. The CV of abundance was 0.2465 using the variance propagation method versus 0.2643 when using (7). Our estimates are lower, indicating that there is some negative covariance between density and detectability. Estimated detectability increases from the middle to the most severe sea state category (last row of Table 2), so this feature is perhaps driving the covariance between the spatial model and detection function.

Beaufort   Predicted abundance per km   $\hat{p}$   Segments
[0,1]      0.026                        0.508       255
(1,2]      0.008                        0.216       316
(2,4]      0.007                        0.247       386
Table 1: Predicted number of minke whales per linear kilometre of transect by observed Beaufort, as categorised in the detection function, with the estimated detectability $\hat{p}$. The number of segments at each sea state category is listed in the final column.
Beaufort   $\hat{p}$   se($\hat{p}$)   $\hat{p}^*$
[0,1]      0.508       0.115           0.482
(1,2]      0.216       0.078           0.203
(2,4]      0.247       0.088           0.304
Table 2: Changes in estimated detectability at the levels of Beaufort between the fitted detection function (denoted $\hat{p}$, with its standard error) and the "corrected" detectability from the variance propagation model (denoted $\hat{p}^*$) for the minke whale data. Beaufort is treated as a factor in the analysis, hence detectability is non-monotonic with increasing sea state.
Beaufort   [0,1]   (1,2]   (2,4]
Observed   41.00   14.00   20.00
Expected   42.58   15.28   14.59
Table 3: Observed versus expected counts from the minke whale DSM at the levels of Beaufort used in the detection function.

6.2 Harbour porpoise

Harbour porpoise (Phocoena phocoena) aerial survey data from SCANS-II in the Irish Sea, coastal Irish waters and Western coastal Scotland (strata O, R and N, respectively) was used to illustrate the group size model. Aerial surveys were preferred as error in group size is likely less of an issue than in shipboard surveys for harbour porpoise (Phil Hammond, Debi Palka, personal communication, November 2017). Three group size bins were formed: size 1 (131 observations), 2 (35 observations) and 3-5 (14 observations). We fitted the following model:

$$\mathbb{E}(n_{jc}) = A_j \hat{p}_{jc} \exp\left[\beta_0 + f_c(\mathtt{x}_j, \mathtt{y}_j)\right], \qquad c = 1, 2, 3,$$

where $\hat{p}_{jc}$ is the detectability (estimated from a hazard-rate detection function) in segment $j$ for group size class $c$, given the Beaufort level observed in that segment (binned as for the minke whales). $n_{jc}$ followed a Tweedie distribution with the power parameter constrained as in the minke whale model. Each $f_c$ had a maximum basis size of 20 (a total basis size of 60); the fitted model had a total effective degrees of freedom of 20.47 for the three spatial smooths. Table 4 shows observed versus expected counts by Beaufort: there is some misfit at the highest sea state (perhaps due to our assumption of certain detection at zero distance; Hammond et al. estimated this probability as 0.45 for Beaufort 0–1 and 0.31 for Beaufort 2–3). GAM checking showed reasonable fit to the data (the low deviance explained of 26.4% is not uncommon for DSMs). Plots of the per-group-size-bin predictions and the combined prediction are given in Figure 2. Using (7), the CV of abundance was estimated to be 0.0236; when our new variance propagation method was used, the CV was estimated as 0.0965. So a variance calculated assuming independence (via (7)) underestimates uncertainty in the case where group size (and therefore detectability) varies in space. Only a small piece of code implementing (8) was required in addition to the dsm_varprop function.

Figure 2: Predicted density surfaces from the new group size model for harbour porpoise. The first three plots are density maps for the given group size class (i.e., group abundance multiplied by mean group size); the right plot shows the combined map, summing the previous three plots per prediction cell. We can see that the distribution is roughly similar in all three group size categories, though with almost no larger groups in the north and far more animals occurring as singletons than in larger groups.
Beaufort   [0,1]   (1,2]   (2,4]
Observed   98      36.00   30.00
Expected   96      34.32   37.14
Table 4: Observed versus expected counts from the harbour porpoise DSM at the levels of Beaufort used in the detection function.

7 Discussion

We have shown how to (i) propagate uncertainty from detectability models to the spatial models for a particular class of detection function (i.e., those without individual-level covariates), and (ii) include group size as a covariate in the detection function while still being able to propagate uncertainty and address spatial variation in group size. These methods are implemented in the dsm package for R, but can be implemented in any standard GAM fitting software. We have also reiterated the issues with using bootstraps to estimate uncertainty in random effects models, a model class that includes DSMs. It is straightforward to apply our group size approach to covariates which affect detectability and vary in space but do not directly affect abundance, e.g., behaviour: if feeding groups are less conspicuous (by being underwater) than resting groups (at the surface) of similar size, and feeding is more prevalent in some parts of the surveyed area. A major advantage of this approach over simple (or complex) stratification schemes is that we are now sharing information between the levels of our categorized variable; this makes the results less sensitive to over-specifying the number of categories, as the model will shrink back towards the simpler model in the absence of strongly informative data.

We have assumed that all variables are measured without much error. Measurement error for individual-level covariates such as group size can be a serious problem in distance sampling (Hodgson et al., 2017) — distance between observer and group can affect not just detectability, but also the extent of group size error. If group size varies spatially, it is hard to see how to separate the spatial modelling stage from the distance sampling stage. A full discussion is beyond the scope of this paper, but we suspect that specially-designed observation protocols and bespoke analyses may be the only way to tackle such thorny cases.

All-in-one fitting of both detection and spatial models is also possible (e.g., Johnson et al., 2009; Yuan et al., 2017). If the models are specified correctly, the all-in-one approach could be slightly more efficient, but adding the full detection function likelihood to the spatial model would likely make little difference beyond the quadratic approximation used here. Nevertheless, our own preference is for the two-stage approach, mainly because in our experience the careful fitting of detection functions is a complicated business which can require substantial model exploration and as few "distractions" as possible (such as simultaneously worrying about the spatial model). The two-stage process allows any form of detection function to be used, without having to make deep modifications to software. In summary, if one knew one had the correct model to begin with, one-stage fitting would be more efficient, but this is never the case in practice. It is valuable to check for any tension or confounding between the detection function and density surface parts of the model, which can occur if there are large-scale variations in sighting conditions across the survey region (Section 3.1); this is readily diagnosed in a two-stage model. Although this did not appear to cause problems in the datasets we have analysed with the software described in this paper, we have come across it in other variants of line transect-based spatial models with different datasets. It may not be so easy to detect partial confounding when using all-in-one frameworks.

Finally, we note that the approach outlined here (using a first-stage estimate as a prior for a second stage, and propagating variance appropriately) is quite general and is comparable to standard sequential Bayesian approaches to so-called "integrated data models". The first-stage model need not be a detection function (or such a disparate model) but could instead come from another GAM (or other latent Gaussian model). Again, this allows us to ensure that first-stage models are correct before moving to more complex modelling. Nor need modelling stop at two stages; the approach could extend to multi-stage models.

Supplementary Materials

R code implementing the group size model in Section 5 along with data and code for the minke whale and harbour porpoise analyses in Section 6 are available at https://github.com/dill/varprop-suppmaterials.

Acknowledgements

The authors thank Natalie Kelly, Jason Roberts, Eric Rexstad, Phil Hammond, Steve Buckland and Len Thomas for useful discussions, Devin Johnson for the suggestion of this as a general method, and Simon Wood for continued development of mgcv and GAM theory. Data are from the SCANS-II project supported by the EU LIFE Nature programme (project LIFE04NAT/GB/000245) and governments of range states: Belgium, Denmark, France, Germany, Ireland, Netherlands, Norway, Poland, Portugal, Spain, Sweden and UK. This work was funded by OPNAV N45 and the SURTASS LFA Settlement Agreement, and being managed by the U.S. Navy’s Living Marine Resources program under Contract No. N39430-17-C-1982, US Navy, Chief of Naval Operations (Code N45), grant number N00244-10-1-0057 and the International Whaling Commission.

References

  • Borchers et al. [1998] D. L. Borchers, W. Zucchini, and R. M. Fewster. Mark-Recapture Models for Line Transect Surveys. Biometrics, 54(4):1207, 1998.
  • Buckland et al. [2001] S. T. Buckland, D. R. Anderson, K. P. Burnham, D. L. Borchers, and L. Thomas. Introduction to Distance Sampling. Estimating Abundance of Biological Populations. Oxford University Press, Oxford, UK, 2001.
  • Buckland et al. [2004] S. T. Buckland, D. R. Anderson, K. P. Burnham, J. L. Laake, D. L. Borchers, and L. Thomas. Advanced Distance Sampling. Estimating abundance of biological populations. Oxford University Press, Oxford, UK, Aug. 2004.
  • Carlin and Gelfand [1991] B. P. Carlin and A. E. Gelfand. A sample reuse method for accurate parametric empirical Bayes confidence intervals. Journal of the Royal Statistical Society: Series B, 1991.
  • Carlin and Louis [2008] B. P. Carlin and T. A. Louis. Bayesian Methods for Data Analysis, Third Edition. CRC Press, June 2008.
  • Ferguson et al. [2006] M. C. Ferguson, J. Barlow, P. Fiedler, S. B. Reilly, and T. Gerrodette. Spatial models of delphinid (family Delphinidae) encounter rate and group size in the eastern tropical Pacific Ocean. Ecological Modelling, 193(3-4):645–662, Mar. 2006.
  • Goodman [1960] L. A. Goodman. On the Exact Variance of Products. Journal of the American Statistical Association, 55(292):708, 1960.
  • Hammond et al. [2013] P. S. Hammond, K. Macleod, P. Berggren, D. L. Borchers, L. Burt, A. Cañadas, et al. Cetacean abundance and distribution in European Atlantic shelf waters to inform conservation and management. Biological Conservation, 164(C):107–122, Aug. 2013.
  • Hedley and Buckland [2004] S. L. Hedley and S. T. Buckland. Spatial models for line transect sampling. Journal of Agricultural, Biological, and Environmental Statistics, 9(2):181–199, June 2004.
  • Hodgson et al. [2017] A. Hodgson, D. Peel, and N. Kelly. Unmanned aerial vehicles for surveying marine fauna: assessing detection probability. Ecological applications : a publication of the Ecological Society of America, 27(4):1253–1267, 2017.
  • International Whaling Commission [2012] International Whaling Commission. Requirements and Guidelines for Conducting Surveys and Analysing Data within the Revised Management Scheme. Journal of Cetacean Research and Management, 13(SUPPL):509–517, 2012.
  • Johnson et al. [2009] D. S. Johnson, J. L. Laake, and J. M. Ver Hoef. A Model-Based Approach for Making Ecological Inference from Distance Sampling Data. Biometrics, 66(1):310–318, 2009.
  • Laird and Louis [1987] N. M. Laird and T. A. Louis. Empirical Bayes Confidence Intervals Based on Bootstrap Samples. Journal of the American Statistical Association, 82(399):739–750, Sept. 1987.
  • Marques and Buckland [2003] F. F. C. Marques and S. T. Buckland. Incorporating covariates into standard line transect analyses. Biometrics, 59(4):924–935, Dec. 2003.
  • Marques et al. [2012] T. A. Marques, S. T. Buckland, R. Bispo, and B. Howland. Accounting for animal density gradients using independent information in distance sampling surveys. Statistical Methods & Applications, 22(1):67–80, Nov. 2012.
  • Marra and Wood [2011] G. Marra and S. N. Wood. Practical variable selection for generalized additive models. Computational Statistics and Data Analysis, 55(7):2372–2387, July 2011.
  • Marra et al. [2011] G. Marra, D. L. Miller, and L. Zanin. Modelling the spatiotemporal distribution of the incidence of resident foreign population. Statistica Neerlandica, 66(2):133–160, Nov. 2011.
  • Miller et al. [2013] D. L. Miller, M. L. Burt, E. A. Rexstad, and L. Thomas. Spatial models for distance sampling data: recent developments and future directions. Methods in Ecology and Evolution, 4(11):1001–1010, Aug. 2013.
  • Pedersen et al. [2018] E. J. Pedersen, D. L. Miller, G. L. Simpson, and N. Ross. Hierarchical generalized additive models: an introduction with mgcv. PeerJ Preprints, 6:e27320v1, Nov. 2018. ISSN 2167-9843. doi: 10.7287/peerj.preprints.27320v1. URL https://doi.org/10.7287/peerj.preprints.27320v1.
  • Roberts et al. [2016] J. J. Roberts, B. D. Best, L. Mannocci, E. Fujioka, P. N. Halpin, D. L. Palka, L. P. Garrison, K. D. Mullin, T. V. N. Cole, C. B. Khan, W. A. McLellan, D. A. Pabst, and G. G. Lockhart. Habitat-based cetacean density models for the U.S. Atlantic and Gulf of Mexico. Scientific Reports, pages 1–12, Mar. 2016.
  • Seber [1982] G. A. F. Seber. The Estimation of Animal Abundance and Related Parameters. Macmillan, New York, 1982.
  • Skaug and Schweder [1999] H. J. Skaug and T. Schweder. Hazard models for line transect surveys with independent observers. Biometrics, 55(1):29–36, Mar. 1999.
  • Thomas et al. [2010] L. Thomas, S. T. Buckland, E. A. Rexstad, J. L. Laake, S. Strindberg, S. L. Hedley, et al. Distance software: design and analysis of distance sampling surveys for estimating population size. Journal of Applied Ecology, 47(1):5–14, Feb. 2010.
  • Wood [2011] S. N. Wood. Fast stable restricted maximum likelihood and marginal likelihood estimation of semiparametric generalized linear models. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 73(1):3–36, 2011.
  • Wood [2017] S. N. Wood. Generalized Additive Models: An Introduction with R, Second Edition. Chapman & Hall/CRC Texts in Statistical Science. CRC Press, 2017.
  • Yuan et al. [2017] Y. Yuan, F. E. Bachl, F. Lindgren, D. L. Borchers, J. B. Illian, S. T. Buckland, et al. Point process models for spatio-temporal distance sampling data from a large-scale survey of blue whales. The Annals of Applied Statistics, 11(4):2270–2297, 2017.