Percentile-Based Residuals for Model Assessment

10/08/2019 ∙ by Sophie Bérubé, et al. ∙ Johns Hopkins Medicine

Residuals are a key component of diagnosing model fit. The usual practice is to compute standardized residuals using expected values and standard deviations of the observed data, then use these values to detect outliers and assess model fit. Approximate normality of these residuals is key for this process to have good properties, but in many modeling contexts, especially for complex, multi-level models, normality may not hold. In these cases outlier detection and model diagnostics aren't properly calibrated. Alternatively, as we demonstrate, residuals computed from the percentile location of a datum's value in its full predictive distribution lead to well calibrated evaluations of model fit. We generalize an approach described by Dunn and Smyth (1996) and evaluate properties mathematically, via case-studies and by simulation. In addition, we show that the standard residuals can be calibrated to mimic the percentile approach, but that this extra step is avoided by directly using percentile-based residuals. For both the percentile-based residuals and the calibrated standard residuals, the use of full predictive distributions with the appropriate location, spread and shape is necessary for valid assessments.



KEYWORDS
Percentile-based residuals, Model assessment, Outlier detection, non-Gaussian predictions, Well-calibrated diagnostics.

1 Introduction

Residuals are a key component of diagnosing model fit. They are used to identify outlying data points, and plotted against predicted values they can reveal model lack of fit. The commonly used standard residuals are computed as Residual = (Observed - Expected)/SD, where the 'Expected' and 'SD' (standard deviation) come from a data point-specific predictive distribution. Irrespective of model form, if the predictive distributions are well estimated, these residuals have mean 0 and variance 1; however, there is no guarantee that they will have a N(0,1) distribution. If they aren't approximately N(0,1), outlier detection and global model assessments can perform poorly, with poorly calibrated Type I error for outlier detection, potentially low power, and misleading residual plots. Therefore, a residual that is well calibrated for any predictive distribution has the potential to improve performance, and a percentile-based approach achieves this goal.
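As a hedged illustration of this point (our own sketch, not from the paper): standardizing draws from a hypothetical skewed lognormal predictive distribution yields residuals with mean 0 and variance 1, yet their right-tail frequency at the nominal 1% Gaussian cutoff is roughly triple the nominal level.

```python
import math
import random
import statistics
from statistics import NormalDist

random.seed(1)
nd = NormalDist()

# Hypothetical skewed predictive distribution: lognormal(0, 1).
# Its mean and SD are available in closed form.
mean = math.exp(0.5)
sd = mean * math.sqrt(math.e - 1)

# "Observed" data drawn from the same lognormal, so the model is correct.
obs = [math.exp(random.gauss(0.0, 1.0)) for _ in range(200_000)]
resid = [(y - mean) / sd for y in obs]

# Mean ~ 0 and variance ~ 1, as standardization guarantees...
print(round(statistics.fmean(resid), 2), round(statistics.pvariance(resid), 2))

# ...but the right-tail frequency beyond the Gaussian 99th percentile
# is far from the nominal 0.01 because the residuals are skewed.
z99 = nd.inv_cdf(0.99)
tail = sum(r > z99 for r in resid) / len(resid)
print(round(tail, 3))  # roughly 0.03, about triple the nominal level
```

The skew means the Gaussian cutoff sits in the wrong place for this predictive distribution, even though the model is exactly correct.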

Herein, we develop and evaluate a generalization of Dunn and Smyth (1996). Expanding on their randomized percentile-based residual, we consider an approach that goes beyond the first two moments: all data points are used to derive the full predictive distribution, which is then used in its entirety to compute the percentile-based residuals. We further derive mathematical properties of these percentile-based residuals, including their power.

Percentile-based residuals are computed by finding the percentile location of an observation in its full predictive distribution, then computing the corresponding Gaussian quantile to produce the residual. For continuous distributions, when the full predictive distribution matches the underlying truth, these residuals are distributed N(0,1). The definition is general in that the full predictive distribution can come from a Bayesian analysis (including using the MCMC draws as the distribution), from frequentist modeling, from machine learning (e.g., classification and regression trees (CART), support vector machines, etc.) or from any other modeling approach. More recent work on this topic includes Cook et al. (2006), who describe the 'percentile, and Gaussian quantile' approach to evaluating computer software, and Efron (2008), who presents an example of transforming to z-values. The use of inverse Gaussian transformations was introduced much earlier than 2006, though, with Efron (1987) showing an example of the ability of normalizing inverse transformations to automatically improve the performance of bootstrap confidence intervals without the user having to re-calibrate the process for each new application.
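In the spirit of this generalization, a minimal sketch (an assumed setup, not the paper's code): given predictive draws for one unit, such as MCMC samples, locate the observed value's percentile among the draws and map it through the standard normal quantile function.

```python
import bisect
import random
from statistics import NormalDist

random.seed(2)
nd = NormalDist()

def percentile_residual(y, draws):
    """Percentile location of y among predictive draws, mapped to a
    N(0,1) quantile; ties get a mid-rank (one-half) treatment."""
    s = sorted(draws)
    n = len(s)
    below = bisect.bisect_left(s, y)         # draws strictly below y
    at_or_below = bisect.bisect_right(s, y)  # draws at or below y
    p = (below + at_or_below) / (2 * n)      # mid-rank percentile
    p = min(max(p, 0.5 / n), 1 - 0.5 / n)    # keep away from 0 and 1
    return nd.inv_cdf(p)

# Predictive distribution approximated by draws from N(3, 2);
# an observation at 7 sits two predictive SDs above the mean.
draws = [random.gauss(3.0, 2.0) for _ in range(100_000)]
r = percentile_residual(7.0, draws)
print(round(r, 1))  # close to 2.0
```

The clipping away from 0 and 1 anticipates the boundary issue discussed in Section 2.2, where an observation outside the support of the draws would otherwise map to an infinite residual.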

With 'SD' the standard deviation of the predictive distribution, if the working predictive distribution is Gaussian, the standard (Observed - Expected)/SD residuals are identical to the percentile-based residuals. However, the Gaussian assumption is commonly inappropriate. For instance, Dunn and Smyth (1996) consider a log-linear model in the context of survival data as well as a logistic regression in the context of binomial data. Beyond these examples, we consider models for which the full predictive distribution may incorporate parameter uncertainty, and hierarchical models with distributions that aren't Gaussian. We show that in such cases, assuming normality and using the standard residuals can lead to poorly calibrated model diagnostics and outlier detection: in the context of formal hypothesis testing, conservative or inflated Type I errors and similar influences on the power of the test. Against this background, we show that replacing the standard residuals by the percentile-based residuals properly calibrates model assessment and testing. Similar advantages are associated with using percentile-based residuals in diagnostic plots. Finally, we show that the usual residuals can be calibrated to mimic the percentile-based approach, but this step is avoided by direct use of percentiles.

2 Notation and Methods

Let Y_i represent all data (dependent variable, covariates) for the i-th (i = 1, ..., N) sampling unit, and let Y = (Y_1, ..., Y_N) represent all data. We focus on a scalar Y_i, which can be a unit-specific summary statistic. The analyst produces a working model with covariates X_i and parameters θ (all parameters: slopes, variances, variance components, etc.). Embedded in the working model are all modeling assumptions and data analytic choices. Examples include linear and logistic regression, CART, random forests and other machine learning approaches (for these, θ represents the underlying algorithm's end result). Data analysis produces the working predictive cumulative distribution function for unit i,

F_i(y) = F(y | X_i, Y),   (1)

but the true predictive cumulative distribution function for the unit is,

G_i(y) = G(y | X_i).   (2)

F_i is determined by the modeling approach, and producing it is quite general. The full posterior distribution of θ can be used to generate an in- or out-of-sample predictive distribution for Y_i (e.g., the collection of MCMC samples); a plug-in approach can substitute an estimate for θ with no attention to uncertainty in the estimate; or F_i can be the end result of a machine-learning algorithm, with or without infusion of uncertainty.

2.1 The (O - E)/SD, or standard residuals

With E_i and SD_i the mean and standard deviation of Y_i under the working distribution F_i, the standard (Observed - Expected)/SD residuals are,

R_i^S = (Y_i - E_i) / SD_i.   (3)

An example of the above residual is linear regression, where E_i is the fitted mean and SD_i the estimated residual standard deviation. In order to perform model diagnostics like outlier identification and goodness of fit, the empirical distribution of the R_i^S is evaluated relative to the N(0,1) distribution; plotting R_i^S versus E_i can identify the need for model enhancement. If E_i and SD_i are the true mean and standard deviation under G_i, then the R_i^S have mean 0 and variance 1, but the full distribution can be far from Gaussian unless the G_i are Gaussian or approximately Gaussian due to the Central Limit Theorem (CLT).

2.2 Percentile-based residuals

The standard residuals can be represented as,

R_i^S = Φ^{-1}( Φ( (Y_i - E_i)/SD_i ) ),   (4)

where Φ denotes the CDF, or cumulative distribution function, of a N(0,1) distribution and Φ^{-1} its inverse. This framework supports generalizing the definition of a residual by using the full predictive distribution of Y_i. To relax dependence on the CLT, we find the percentile location of Y_i in the working predictive distribution and map it to the associated quantile of a N(0,1) distribution. Specifically, we define

R_i^P = Φ^{-1}( F_i(Y_i^-) + (1/2)[F_i(Y_i) - F_i(Y_i^-)] ),   (5)

where F_i(Y_i^-) is the probability of values strictly below Y_i. The 'one-half correction' is needed to balance the computation for a discrete distribution. For example, if F_i puts all mass at a single point and Y_i is that point, the uncorrected residual is Φ^{-1}(1) = ∞; the corrected (and correct) residual is Φ^{-1}(1/2) = 0. If the direct estimate, Y_i, is equal to the largest value of the predictive distribution, the correction brings R_i^P from infinity to a finite value. Even for a continuous F_i, the percentile can be 0 or 1, for example if the observed value is beyond the support of the predictive distribution. In these cases, truncating the residual at a large finite value is often appropriate.

Equation (5) is equivalent to replacing the inner Gaussian CDF in equation (4) with the predictive distribution F_i. It is also evident that when F_i is Gaussian, the standard residuals are identical to the percentile-based residuals. Importantly, this approach allows the user to estimate the working predictive distribution, and thus the residuals, using all data points.
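To make the one-half correction concrete, here is a small sketch (our own illustration, assuming a Poisson working predictive distribution): the percentile is taken as P(Y < y) plus half of P(Y = y) before applying the Gaussian quantile.

```python
import math
from statistics import NormalDist

nd = NormalDist()

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam**k / math.factorial(k)

def midp_residual(y, lam):
    """Percentile-based residual for integer y under Poisson(lam),
    using the mid-probability (one-half) correction."""
    below = sum(poisson_pmf(k, lam) for k in range(y))  # P(Y < y)
    at = poisson_pmf(y, lam)                            # P(Y = y)
    return nd.inv_cdf(below + 0.5 * at)

# An observation at the mean of Poisson(4) yields a residual near 0;
# an observation far in the right tail yields a large positive residual.
print(round(midp_residual(4, 4.0), 2))
print(round(midp_residual(10, 4.0), 2))

# Without the correction, using P(Y <= y) directly overstates the residual
# at the mean because it lumps all of P(Y = y) into the lower tail.
print(round(nd.inv_cdf(sum(poisson_pmf(k, 4.0) for k in range(5))), 2))
```

The mid-probability treatment splits the point mass at the observed value evenly, which is what keeps the residual finite and centered in the degenerate single-point example above.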

3 Properties

The data analyst generates the working predictive distribution using data from N units, with n_i observations in unit i. The n_i can be small, and so the unit-specific, direct estimates may be far from Gaussian. Thus, assuming that the R_i^S are N(0,1) may induce false positive and false negative rates far from the nominal values in assessing the relation between the observed and expected values. More generally, when each G_i is far from Gaussian, the empirical distribution of the R_i^S can deviate substantially from N(0,1).

Dropping the i subscript with the understanding that the distributions are unit-specific, we evaluate properties under F = G and under F ≠ G. A discrepancy between F and G detected under F = G is a Type I error (a false positive), and failing to detect a discrepancy between F and G under F ≠ G is a Type II error (a false negative). Discrepancies between F and G can be induced by various conditions, including the use of incorrect parametric families, an incorrect mean model, the lack of uncertainty infusion, or some combination of these. In analyzing residuals, the goal is to detect discrepancies between F and G while controlling Type I error, optimizing power and producing valid and informative residual plots. We show herein that, depending on the choice of F, the use of standard residuals (R^S) can cause the Type I error rate to be inflated or conservative, and the shape of the true predictive distribution, G, can be mis-represented.

3.1 Properties of R^S

Let μ and σ denote the mean and standard deviation of the working distribution F. We first show that the Type I error for the standard residuals is not well calibrated.

Theorem 1.

[Type I error] The Type I error at level α for the standard residuals, R^S, is given by:

1 - G(μ + σ Φ^{-1}(1 - α)).   (6)

Proof: We give the proof for the right-sided test. Let z_{1-α} = Φ^{-1}(1 - α); then

P_G(R^S > z_{1-α}) = P_G(Y > μ + σ z_{1-α}) = 1 - G(μ + σ z_{1-α}).

So, the effective Type I error for the standard residuals is 1 - G(μ + σ Φ^{-1}(1 - α)). ∎

The proof for the left-sided test is essentially the same as for the right-sided; the proof for the two-sided test combines the right and left tail probabilities.

From Theorem 1, it follows that the Type I error for a right-sided test using R^S is inflated, correct, or conservative depending on whether 1 - G(μ + σ Φ^{-1}(1 - α)) is greater than, equal to, or less than the nominal level α.

It is now easy to identify the conditions that lead to inflated or conservative Type I error. If G places more probability mass on the right tail than the Gaussian distribution with matching mean and standard deviation, the standard residuals will produce inflated Type I error. Conversely, if G places less probability mass on the right tail than that Gaussian distribution, then the standard residuals will produce conservative Type I error. Similar conditions can be derived for left- and two-sided tests.
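These conditions can be checked in closed form. A sketch (our own numerical illustration, with assumed distributions): a heavy-right-tailed Exponential(1) truth inflates the effective right-sided level above the nominal 0.05, while a short-tailed Uniform(0,1) truth gives a conservative level.

```python
import math
from statistics import NormalDist

alpha = 0.05
z = NormalDist().inv_cdf(1 - alpha)  # Gaussian cutoff for the nominal test

# Exponential(1): mean 1, SD 1, survival exp(-x). The effective level
# 1 - G(mu + sigma * z) exceeds alpha because of the heavy right tail.
inflated = math.exp(-(1.0 + 1.0 * z))
print(round(inflated, 3))  # ~0.071 > 0.05

# Uniform(0,1): mean 1/2, SD 1/sqrt(12). The effective level is conservative.
mu, sd = 0.5, 1 / math.sqrt(12)
conservative = 1 - min(1.0, mu + sd * z)
print(round(conservative, 3))  # ~0.025 < 0.05
```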

The proof of Theorem 1 immediately identifies the raw power of R^S under F ≠ G, where 'raw' indicates that the power is not adjusted for the poorly calibrated Type I error.

Theorem 2 (Power for R^S).

R^S has power for rejection to the right, with nominal right-sided Type I error α,

1 - G(μ + σ Φ^{-1}(1 - α)),   (8)

where μ and σ are the mean and standard deviation of the working distribution F and G is the true distribution.

3.2 Properties of R^P

For continuous F and G, R^P = Φ^{-1}(F(Y)).

Consequently, under F = G, F(Y) is Uniform(0,1), so R^P is N(0,1), and the Type I error using the right-sided rejection region R^P > Φ^{-1}(1 - α) is perfectly calibrated.
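This calibration is easy to verify numerically; a minimal sketch under an assumed N(5, 2) truth with a matching working distribution:

```python
import random
from statistics import NormalDist, fmean, pvariance

random.seed(3)
nd = NormalDist()          # standard normal, for the quantile map
F = NormalDist(5.0, 2.0)   # working predictive distribution = truth

# Percentile-based residuals when F = G: F(Y) is Uniform(0,1),
# so inv_cdf(F(Y)) is N(0,1) and tail rates match nominal levels.
ys = [random.gauss(5.0, 2.0) for _ in range(100_000)]
rp = [nd.inv_cdf(F.cdf(y)) for y in ys]

print(round(fmean(rp), 2), round(pvariance(rp), 2))  # ~0 and ~1
tail = sum(r > nd.inv_cdf(0.95) for r in rp) / len(rp)
print(round(tail, 3))  # ~0.050, the nominal right-sided level
```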

Theorem 3 gives the full distribution of R^P for the continuous case.

Theorem 3.

[Distribution of R^P] For continuous F and G with densities f and g, let H be the distribution of R^P, with density h, computed under G. Then, if G is absolutely continuous with respect to F,

h(t) = φ(t) g(F^{-1}(Φ(t))) / f(F^{-1}(Φ(t))),   (10)

where φ is the N(0,1) density. Consequently, h = φ if F = G.

Proof.

H(t) = P_G(R^P ≤ t) = P_G(Φ^{-1}(F(Y)) ≤ t) = P_G(Y ≤ F^{-1}(Φ(t))) = G(F^{-1}(Φ(t))),

so taking the derivative with respect to t gives the density in equation (10). ∎

From equation (10), for h to be a Gaussian density, the ratio g/f must be 1.0 (as it is under F = G). More generally, h contains φ(t) as a multiplicative factor, which can produce a Gaussian-like shape.

3.3 Power comparisons

We begin by deriving power (e.g., the probability of detecting an outlier) for the right-sided test at level α.

Theorem 4 (Power for R^P).

The right-side rejection power of R^P for a one-sided test of nominal (and actual) size α is,

1 - G(F^{-1}(1 - α)).   (11)

Proof.

Evaluate 1 - H(t) from the proof of Theorem 3 at t = Φ^{-1}(1 - α). ∎

Using equations 8 and 11, various comparisons can be made between the standard residuals (R^S) and the percentile-based residuals (R^P). For a right-sided test, since G is monotonically increasing, R^S will have higher, equal, or lower power than R^P depending on whether μ + σ Φ^{-1}(1 - α) is less than, equal to, or greater than F^{-1}(1 - α). Combining the power comparisons with the results for the Type I error of R^S in Theorem 1, we have the following result:

Theorem 5.

For a right-sided test, the standard residuals have inflated, correct, or conservative Type I error, and higher, equal, or lower power than R^P, depending on whether μ + σ Φ^{-1}(1 - α) is less than, equal to, or greater than F^{-1}(1 - α).

The result shows why it is inappropriate to use the standard residuals. When μ + σ Φ^{-1}(1 - α) < F^{-1}(1 - α), R^S may have higher power than R^P, but that apparent win is induced at least in part by inflated Type I error. On the other hand, when μ + σ Φ^{-1}(1 - α) > F^{-1}(1 - α), R^S will have conservative Type I error and lower power than R^P. By contrast, the R^P have properly calibrated Type I error and valid power. Importantly, even with well calibrated percentile-based residuals, multiple testing corrections should be performed when appropriate.

3.3.1 Calibrating R^S

The R^S can be adjusted to have properly calibrated Type I error and thus valid power. From Theorem 1, with the working F as the reference distribution, using a right-sided rejection region of the form R^S > c gives a Type I error of 1 - F(μ + σc). Equating this to the nominal level α, we have:

c_α = (F^{-1}(1 - α) - μ) / σ.   (12)

Constructing the right-sided rejection region based on the quantile c_α now gives a test based on the usual residuals with calibrated Type I error. As Theorem 6 shows, the power for the calibrated R^S equals the power for the percentile-based residuals.

Theorem 6.

For the right-sided test, the power of the calibrated R^S equals that of R^P.

Proof.

From equation 8, the power for the right-sided test based on R^S using the calibrated rejection region is 1 - G(μ + σ c_α). Using the definition of c_α from equation 12, this gives 1 - G(F^{-1}(1 - α)), which is the power for the percentile-based residuals from equation 11. ∎

This power equivalence also holds for a left-sided test, but does not hold exactly for a two-sided test.
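The equivalence can be checked in closed form. A sketch under assumed distributions (working F = Exponential(1), truth G = Exponential with rate 1/2; our own illustration, not from the paper):

```python
import math

alpha = 0.05

# Working F: Exponential(rate 1), so mean 1 and SD 1.
mu, sd = 1.0, 1.0
F_inv = lambda p: -math.log(1.0 - p)    # quantile function of Exp(1)
G_surv = lambda x: math.exp(-0.5 * x)   # survival of truth, Exp(rate 1/2)

# Calibrated cutoff for the standard residual (equation 12).
c = (F_inv(1 - alpha) - mu) / sd

power_calibrated_rs = G_surv(mu + sd * c)  # 1 - G(mu + sd * c)
power_rp = G_surv(F_inv(1 - alpha))        # 1 - G(F^{-1}(1 - alpha))
print(round(power_calibrated_rs, 4), round(power_rp, 4))  # identical
```

The two expressions agree exactly, as Theorem 6 requires, because the calibrated cutoff maps the standard residual's rejection region onto the same data-scale threshold F^{-1}(1 - α) that the percentile-based residual uses.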

Theorems 5 and 6 illustrate the pitfalls of using the standard residuals and highlight the importance of using the full predictive distribution for model assessment, via either R^P or the calibrated R^S.

4 Simulation study

We conducted a simulation study to evaluate and compare properties of R^S (both uncalibrated and calibrated) and R^P.

4.1 Data generation

For true values generated from Beta distributions, we consider two true models, G_0 and G_1; G_0 and G_1 can be interpreted as the cumulative distributions under the null and alternative hypotheses, respectively.

In the working model (F), the parameters are unknown, and Bayesian analysis produces the working predictive distribution based on the collection of the MCMC samples. Iterations were obtained after a burn-in period, and convergence was checked visually by the trace plots as well as by the Gelman-Rubin convergence statistic (Gelman et al., 1992).

4.2 Results

Based on a single replication of N sampling units, Figure 1 compares R^S and R^P from the working model under the true models G_0 (the null hypothesis) and G_1 (the alternative hypothesis). Note that under the null (the left column), the distribution of R^P is very close to the N(0,1) reference, which is not the case for R^S, indicating that R^S is poorly calibrated. Also, under the alternative (right column), R^P is more likely to detect a model discrepancy than is R^S.

[Figure: simu4_residuals_v0.jpeg]

Figure 1: R^S and R^P based on a single replication of N sampling units. Panel A, Q-Q plots; Panel B, smoothed densities; Panel C, empirical CDFs. Panel A includes the reference line (black). Panels B and C include the N(0,1) reference density and distribution. The vertical dashed line in Panel B corresponds to the right-sided rejection region. The N(0,1) reference overlaps with the R^P curve in the left column (null hypothesis) in Panels B and C.

Table 1 reports the estimated rejection rate across replications under the null and alternative hypotheses, with nominal, right-sided tests for R^S, R^P, and the calibrated R^S using c_α from equation 12. Type I error is inflated for R^S, but neither for R^P nor for the calibrated R^S. All simulations produce similar comparisons.

True Rejection Rate Calibrated
Model Hypothesis N (calibrated)
Null 150 0.074 0.050 0.049 0.026
175 0.074 0.050 0.050 0.026
200 0.074 0.050 0.050 0.026
225 0.074 0.050 0.050 0.026
250 0.073 0.050 0.049 0.026
Alternative 150 0.103 0.324 0.321 0.428
175 0.103 0.326 0.323 0.429
200 0.102 0.326 0.324 0.430
225 0.102 0.319 0.316 0.424
250 0.102 0.323 0.320 0.426
Table 1: Estimated rejection rate based on replications with right-sided tests for R^S, R^P, and the calibrated R^S using c_α from equation 12.

5 Application to Protein Microarrays

When a protein microarray is assembled, probes, or specific proteins, are arranged in rows and columns on a glass slide. After the sample of interest has been loaded onto the slide, the array is scanned, and light of different wavelengths produces a signal at each probe, the intensity of which depends on the presence and quantity of a particular target protein in the sample. The scanning apparatus for protein microarrays produces two measurements at each probe: an observed foreground signal and an observed background signal. We present simulated arrays composed of individual foreground and background signals generated under three conditions. Generally, the goal of this simulation is to compare a case where the analysis model incorporates more variability than is in the data-generating model and a case where the analysis model and the data-generating model match exactly. In this sense, we aim to show how percentile-based residuals can effectively assess general model fit.

5.1 Model and estimation

In this simulation, we posit that a true underlying foreground signal and a true underlying background signal multiply to produce a true, underlying quantity. In the protein array, the foreground and background signals are measured with multiplicative errors to produce the observed values. The measurement errors are independent and log-normal, with spreads governed by the respective variances of the underlying normal distributions.

These assumptions are encoded in the Bayesian hierarchical model (the analysis model, ),

(13)

We performed simulations by generating an array's worth of foreground and background signals on a three-tiered basis for the true distribution (G). Tier 1 involves fixing a single true foreground and background signal, Tier 2 involves fixing an intermediate set of quantities (not shown below), and Tier 3 involves randomly generating the underlying signals. The analysis model (F, equation 13) matches Tier 3.

We report results for Tier 1 and Tier 3 (results for Tier 2 are very similar to those for Tier 1). The analysis model is used to generate MCMC draws from the relevant posterior distributions. For each analysis, three chains were run with randomly selected initial values for all parameters, with a burn-in period followed by thinned iterations, producing a posterior sample for each of the simulated probes on an array. Convergence was checked visually by the trace plots as well as by the Gelman-Rubin convergence statistic (Gelman et al., 1992).

5.2 Analysis of residuals

We define the standard residual and the percentile-based residual in this context as in Section 2: the data are all generated foreground and background signals, the working model is the hierarchical model of Section 5.1 estimated using all of the data, and the working predictive distribution is the full posterior predictive distribution of each observation. Note that best practice would be to set aside each observation in turn, fit the model to the remaining data, and use the fitted model to predict the set-aside value. However, with the large sample size in our example, the difference is negligible.

While in Tier 3 the analytic and data-generating models match, in Tier 1 the analytic model brings in more stochastic elements than are present in the data. This discrepancy is clearly displayed in Panel B of Figure 2. The percentile-based residuals of the true values in the posterior predictive distribution have a standard normal distribution in Tier 3, whereas the percentile-based residuals in Tier 1 have a much narrower spread and are centered slightly to the left of zero.

[Figure: y_fg_residuals.jpeg]

Figure 2: Distribution of the percentile location of in its full posterior predictive distribution, Tier 1 (left column), Tier 3 (right column). Panel A, Q-Q plots; panel B, smoothed densities; Panel C, empirical CDFs. Panel A includes the reference line (black). Panels B and C include the reference density and distribution.

Importantly, in Tier 3, where the data analysis model and the data generation model match exactly, the standard residuals depart from the standard normal distribution. As evidenced clearly in Panel A of Figure 2, comparing the quantiles of the standard residuals to the N(0,1) quantiles would lead to an inflated Type I error rate, that is, a higher probability of erroneously detecting a discrepancy between the true predictive distribution and the working predictive distribution.

6 Sub-national estimates of contraceptive use

Access to family planning provides multi-faceted benefits to women and their families. It has been shown to reduce maternal and child mortality, empower women and girls, and enhance environmental sustainability. Remarkable progress has been made in the past several decades in measuring family planning use rates around the world, and in low-income countries in particular. However, the national focus of those surveys (e.g., Demographic and Health Surveys) makes the results impractical for monitoring and evaluation at sub-national levels. Given that many policies are made and implemented at subnational levels, there is an urgent need to reach policy makers at local levels and empower them with relevant estimates.

6.1 The Performance Monitoring and Evaluation 2020 Survey

The data analyzed here are drawn from the Performance Monitoring and Evaluation 2020 (PMA2020) survey conducted in Kenya during May-July of 2014. The survey interviewed 3479 women in 111 enumeration areas (EAs) in Kenya. Funded by the Bill and Melinda Gates Foundation, PMA2020 was originally designed to facilitate annual progress reporting in support of the goals and principles of the Family Planning 2020 (FP2020) initiative across priority countries in Africa and Asia (Zimmerman et al., 2017). The survey uses mobile devices (smartphones) to routinely gather nationally representative data on key family planning indicators. Data are collected at the woman, household, and facility levels by a network of resident enumerators stationed throughout the country.

Given its original goal of providing national estimates, PMA2020 surveys are adequately powered to provide reliable estimates for the whole nation, and in some cases for urban and rural regions separately. As national estimates have become more available, regional, county and district officials have expressed interest in estimates of indicators to monitor and evaluate progress at their level of operation and responsibility. To meet this need, a Bayesian hierarchical model was developed for several African countries with multiple rounds of PMA surveys (Li et al., 2019). Described below is a model for a single cross-section of the repeated cross-sectional data collected in that study. The sufficiently large number of women per EA (about 30) makes the standard and percentile-based residuals nearly identical, and to illustrate the difference between the two, we present results based on a 30%, unstratified random sample of the original PMA dataset.

6.2 Bayesian Modeling

Let Y_ij indicate whether woman j in EA i is not using (0) or using (1) contraceptive methods, let X_ij be the woman-specific vector of covariates (e.g., age, education, parity), and let u_i be an EA-specific random effect. The working model is a logistic regression of Y_ij on X_ij with the EA-specific random effect u_i.

The woman-level fitted probabilities are 'rolled up' to the EA level, producing the model-based, EA-specific rates.
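As a hedged sketch of the 'roll up' step (illustrative values only; the actual covariates and coefficients come from the fitted model): for one posterior draw, a model-based EA rate is the average of the fitted probabilities over the women in that EA.

```python
import math
import random

random.seed(5)

def inv_logit(t):
    return 1.0 / (1.0 + math.exp(-t))

# Illustrative, assumed values: one posterior draw of an intercept,
# an age slope, and an EA-specific random effect.
beta0, beta_age, u_i = -0.4, 0.02, 0.3
ages = [random.randint(18, 45) for _ in range(30)]  # ~30 women per EA

# Roll woman-level probabilities up to the EA-level rate.
ea_rate = sum(inv_logit(beta0 + beta_age * a + u_i) for a in ages) / len(ages)
print(round(ea_rate, 2))
```

Repeating this over all posterior draws yields the full predictive distribution of each EA-level rate, which is what the percentile-based residuals below are computed against.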

As described in Li et al. (2019), we used Markov chain Monte Carlo, implemented using JAGS (version 4.3.0) and conducted in R, to generate draws from the joint posterior distribution of the model parameters, and consequently for the woman-level probabilities and the EA-specific rates, the latter being 'Small Area Estimates.' The quantity 'Avelogistic' is produced by mixing over the posterior for the regression coefficients, but over the mixture of priors for the EA-specific random effects. This approach produces residuals relative to a population model rather than relative to a model tuned to the EA, and so evaluates model adequacy.

6.3 Data Analysis

Figure 3 shows the percentile-based and standardized residuals for the 111 EAs. The variation in the percentile-based residuals is smaller than in the standard residuals (Panel A). Panel B suggests that both R^S and R^P are close to normally distributed, and that each set has a variance greater than 1.0, with the percentile-based residuals having a smaller spread. Panel C reveals additional detail, showing that the R^S are close to Gaussian (albeit with a large spread), but the R^P are bimodal, possibly indicating that a binary covariate is missing from the model. We haven't been able to find a covariate that removes the bimodality, but note that the R^P identify the problem; the R^S do not.

[Figure: PMAfig.png]

Figure 3: (A) R^S and R^P (left and right panels) vs. Avelogistic; (B) Q-Q plots for the R^S and R^P residuals; (C) histograms of the R^S and R^P residuals, along with the N(0,1) reference density.

7 Discussion

Residuals are a mainstay for assessing model fit and detecting outliers. We define and evaluate a percentile-based approach that, by respecting all aspects of a predictive distribution, is a considerable improvement over use of the standard residuals. Improvements include properly calibrated Type I error when testing a working hypothesis, improved power, and more revealing diagnostic plots. While the nominal level for testing can be adjusted to calibrate the standard residuals to a desired Type I error, and careful adjustments can improve residual plots and other model diagnostics, the percentile approach automatically takes care of these issues. The percentile-based approach has the added benefit of encouraging development of a full predictive distribution that incorporates sampling, measurement, and modeling-induced uncertainties. Consequently, we encourage its use.

Funding
The authors gratefully acknowledge partial support from the following sources: SB, NIH-NIAID, U19-AI089680; QL, Performance Monitoring and Evaluation 2020 project from the Bill & Melinda Gates Foundation; CW, NCI P30CA006973, NCI 1 P50 CA098252; TAL, NIH-NIAID, U19-AI089680

Acknowledgments
The authors are grateful to the Southern and Central Africa International Centers for Excellence in Malaria Research for useful discussions particularly in relation to the section on applications to protein microarrays.

Bibliography

  • Cook et al. (2006) Cook, S. R., Gelman, A., and Rubin, D. B. “Validation of software for Bayesian models using posterior quantiles.” J. Comput. Graph. Statist., 15:675–692 (2006).
  • Dunn and Smyth (1996) Dunn, P. K. and Smyth, G. K. “Randomized Quantile Residuals.” Journal of Computational and Graphical Statistics, 5(3):236–244 (1996).
  • Efron (1987) Efron, B. “Better Bootstrap Confidence Intervals.” Journal of the American Statistical Association, 82:171–185 (1987).
  • Efron (2008) —. “Microarrays, Empirical Bayes and the Two-Group Model.” Statistical Science, 23:1–22 (2008).
  • Gelman et al. (1992) Gelman, A., Rubin, D. B., et al. “Inference from iterative simulation using multiple sequences.” Statistical science, 7(4):457–472 (1992).
  • Li et al. (2019) Li, Q., Louis, T. A., Liu, L., Wang, C., and Tsui, A. “Subnational estimation of modern contraceptive prevalence in five sub-Saharan African countries: a Bayesian hierarchical approach.” BMC Public Health, 19(1):216 (2019).
  • Zimmerman et al. (2017) Zimmerman, L., Olson, H., Tsui, A., and Radloff, S. “PMA2020: Rapid Turn-Around Survey Data to Monitor Family Planning Service and Practice in Ten Countries.” Studies in Family Planning, 48(3):293–303 (2017).