Heckman (1974, 1976) introduced a model for dealing with the sample selection bias problem with the aid of a bivariate normal distribution to relate the outcome of interest and a selection rule. A semiparametric alternative to this model, known as Heckman's two-step method, was proposed by Heckman (1979), so that the non-robustness of the normal distribution in the presence of outliers could be handled.
The most discussed problem regarding the Heckman model is its sensitivity to the normal assumption of errors. Misspecification of the error distribution leads to inconsistent maximum likelihood estimators, yielding biased estimates (Lai and Tsay, 2018). On the other hand, when the error terms are correctly specified, the estimation by maximum likelihood or by procedures based on likelihood produces consistent and efficient estimators (Leung and Yu, 1996; Enders, 2010).
However, even when the shape of the error density is correctly specified, heteroskedasticity of the error terms can cause inconsistencies in the parameter estimates, as shown by Hurd (1979) and Arabmazar and Schmidt (1981). In response to this concern, Donald (1995) discussed how heteroskedasticity in sample selection models is relatively neglected and provided two reasons motivating the importance of taking it into account in practice. The first reason is that the data used to fit sample selection models typically comprise large databases, in which heterogeneity is commonly found. The second reason is that the parameter estimates obtained by fitting the usual selection models may in some cases be more severely affected by heteroskedasticity than by an incorrect distribution of the error terms (Powell, 1986).
Nevertheless, even though there is a large body of recent research on sample selection models, few studies have been carried out to correct or minimize the impact of heteroskedasticity. This is one of the aims of our paper.
Chib et al. (2009) proposed a semiparametric model for data with sample selection bias. They considered nonparametric functions in their model, which allowed great flexibility in the way covariates affect response variables. They also presented a Bayesian method for the analysis of such models. Subsequently, Wiesenfarth and Kneib (2010) introduced another general estimation method based on Markov chain Monte Carlo simulation techniques and used a simultaneous equation system that incorporates Bayesian versions of penalized smoothing splines.
Recent works on sample selection models have aimed to provide robust alternatives to the Heckman model. In this direction, Marchenko and Genton (2012) proposed a Student-\(t\) sample selection model for dealing with the robustness to the normality assumption in the Heckman model. Zhelonkin et al. (2016) proposed a modified robust semiparametric alternative based on Heckman's two-step estimation method. They proved the asymptotic normality of the proposed estimators and provided the asymptotic covariance matrix. To deal with departures from normality due to skewness, Ogundimu and Hutton (2016) introduced the skew-normal sample selection model to mitigate the remaining effect of skewness after applying a logarithmic transformation to the outcome variable.
Another direction that has been explored in the last few years is the modeling of discrete data with sample selection. For instance, Marra and Wyszynski (2016) introduced sample selection models for count data, potentially allowing for the use of any discrete distribution, non-Gaussian dependencies between the selection and outcome equations, and flexible covariate effects. The modeling of zero-inflated count data with sample selection bias is discussed by Wyszynski and Marra (2018). Mu and Zhang (2018) considered the semiparametric identification and estimation of a heteroscedastic binary choice model with endogenous dummy regressors, and no parametric restriction on the distribution of the error term was assumed. This yields general multiplicative heteroscedasticity in both selection and outcome equations and multiple discrete endogenous regressors. A class of sample selection models for discrete and other non-Gaussian outcomes was recently proposed by Azzalini et al. (2019).
Wojtys et al. (2018) introduced a generalized additive model for location, scale and shape, which accounts for non-random sample selection, and Kim et al. (2019) proposed a Bayesian methodology to correct the bias of estimation of the sample selection models based on a semiparametric Bernstein polynomial regression model that incorporates the sample selection scheme into a stochastic monotone trend constraint, variable selection, and robustness against departures from the normality assumption.
In the aforementioned papers, the solution to deal with departures from normality for continuous outcomes is to assume robust alternatives like Student-\(t\) or skew-normal distributions. Another common approach is to consider non-parametric structures for the density of the error terms. Our proposal goes in a different direction, one which has not yet been explored in the literature.
In this paper, we propose a generalization of the Heckman sample selection model by allowing the sample selection bias and dispersion parameters to depend on covariates. We show that the non-robustness of the Heckman model may be due to the assumption of a constant sample selection bias parameter rather than the normality assumption. Our proposed methodology allows us to understand which covariates are important to explain the sample selection bias phenomenon, rather than solely drawing conclusions about its presence. It is worth mentioning that our methodology can be straightforwardly adapted to existing sample selection models such as those proposed by Marchenko and Genton (2012), Ogundimu and Hutton (2016), and Wyszynski and Marra (2018). We now highlight other contributions of our paper:
We explore the inferential aspects of the maximum likelihood estimators (MLEs) for our proposed generalized Heckman model. More specifically, we show that this model satisfies regularity conditions that ensure consistency and asymptotic normality of the MLEs. In particular, we show that the classic Heckman model satisfies these regularity conditions, which is a new finding.
A proper residual for sample selection models is proposed as a byproduct of our asymptotic analysis. This is another relevant contribution of our paper, since this point has not yet been thoroughly addressed.
We develop an R package for fitting our proposed generalized Heckman model. It also includes the Student-\(t\) and skew-normal sample selection models, which had not been implemented in R (R Core Team, 2020) before. This makes the paper replicable and facilitates the use of our generalized Heckman model by practitioners.
We show that the normal assumption for analyzing medical expenditure data is suitable and that the conclusions drawn using our approach are consistent with findings from prior literature. Moreover, we identify which covariates are relevant for explaining the presence of sample selection bias in this important dataset.
This paper is organized as follows. In Section 2 we define the generalized Heckman (GH) sample selection model and discuss estimation of the parameters through the maximum likelihood method. Further, diagnostic tools and residual analysis are discussed. Section 3 is devoted to showing that the GH model satisfies regularity conditions that ensure consistency and asymptotic normality of the maximum likelihood estimators. In Section 4, we present Monte Carlo simulation results for evaluating the performance of the maximum likelihood estimators of our proposed model and for checking the behavior of other existing methodologies under misspecification. In Section 5 we apply our generalized Heckman model to the data on ambulatory expenditures from the 2001 Medical Expenditure Panel Survey and show that our methodology overcomes an existing problem in a simple way. Concluding remarks are addressed in Section 6. This paper is accompanied by Supplementary Material, which can be obtained from the authors upon request.
2 Generalized Heckman Model
Assume that the latent variables \(Y_i^*\) and \(U_i^*\) are linearly related to covariates \(x_i\) and \(w_i\) through the following regression structures:
\[
Y_i^* = x_i^\top \beta + \epsilon_{1i}, \quad (1)
\]
\[
U_i^* = w_i^\top \gamma + \epsilon_{2i}, \quad (2)
\]
where \(\beta\) and \(\gamma\) are vectors of unknown parameters with associated covariate vectors \(x_i\) and \(w_i\), for \(i = 1, \dots, n\), and \(\{(\epsilon_{1i}, \epsilon_{2i})^\top\}_{i=1}^{n}\) is a sequence of independent bivariate normal random vectors. More specifically, we suppose that
\[
\begin{pmatrix} \epsilon_{1i} \\ \epsilon_{2i} \end{pmatrix} \sim N_2\!\left( \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \begin{pmatrix} \sigma_i^2 & \rho_i \sigma_i \\ \rho_i \sigma_i & 1 \end{pmatrix} \right),
\]
with the following regression structures for the sample selection bias and dispersion parameters:
\[
\operatorname{arctanh}(\rho_i) = e_i^\top \kappa, \qquad \log(\sigma_i) = v_i^\top \lambda,
\]
where \(\kappa\) and \(\lambda\) are parameter vectors, with associated covariate vectors \(e_i\) and \(v_i\), for \(i = 1, \dots, n\). The arctanh (inverse hyperbolic tangent) link function used for the sample selection bias parameter ensures that it belongs to the interval \((-1, 1)\). The variable \(Y_i^*\) is observed only if \(U_i^* > 0\), while the variable \(U_i^*\) is latent: we only know whether it is greater or less than 0. Equation (1) is the primary interest equation and Equation (2) represents the selection equation. In practice, we observe the variables
\[
U_i = \mathbb{1}\{U_i^* > 0\} \quad \text{and} \quad Y_i = Y_i^* U_i,
\]
for \(i = 1, \dots, n\), where \(U_i = 1\) if \(U_i^* > 0\) and equals 0 otherwise.
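To make the data-generating mechanism concrete, the following Python sketch simulates from a model of this form (the paper's own implementation is in R). The parameter values and the choice of which covariates enter the dispersion and correlation equations are illustrative assumptions, with a log link for the dispersion and the tanh inverse of the arctanh link for the correlation, as described above.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000

# Illustrative design matrices (first column is an intercept).
x = np.column_stack([np.ones(n), rng.standard_normal(n)])  # outcome equation
w = np.column_stack([np.ones(n), rng.standard_normal(n),
                     rng.standard_normal(n)])              # selection equation
v, e = x, w[:, :2]   # dispersion / correlation covariates (assumed here)

beta = np.array([1.0, 0.5])          # outcome coefficients (made up)
gamma = np.array([0.3, 0.8, -0.4])   # selection coefficients (made up)
lam = np.array([0.2, 0.3])           # dispersion coefficients (made up)
kappa = np.array([0.1, 0.5])         # correlation coefficients (made up)

sigma = np.exp(v @ lam)      # log link: sigma_i = exp(v_i' lam)
rho = np.tanh(e @ kappa)     # arctanh link: rho_i = tanh(e_i' kappa)

# Bivariate normal errors with Var(eps1) = sigma^2, Var(eps2) = 1, Corr = rho.
eps2 = rng.standard_normal(n)
eps1 = sigma * (rho * eps2 + np.sqrt(1 - rho**2) * rng.standard_normal(n))

u_star = w @ gamma + eps2                       # latent selection variable
u = (u_star > 0).astype(int)                    # selection indicator
y = np.where(u == 1, x @ beta + eps1, np.nan)   # outcome observed only if selected
```

The outcome is recorded only for the selected observations; the censored entries carry no value, mirroring the observation rule above.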
The mixed distribution of \(Y_i\) is composed of the discrete component
\[
P(U_i = 0) = \Phi(-w_i^\top \gamma),
\]
and of a continuous part given by the conditional density function
\[
f(y_i \mid U_i = 1) = \frac{\phi\left( (y_i - x_i^\top \beta)/\sigma_i \right)}{\sigma_i\, \Phi(w_i^\top \gamma)}\, \Phi\!\left( \frac{w_i^\top \gamma + \rho_i (y_i - x_i^\top \beta)/\sigma_i}{\sqrt{1 - \rho_i^2}} \right),
\]
where \(\phi(\cdot)\) and \(\Phi(\cdot)\) denote the density and cumulative distribution functions of the standard normal distribution, respectively. More details about this are provided in the Supplementary Material.
Let \(\theta = (\beta^\top, \gamma^\top, \lambda^\top, \kappa^\top)^\top\) be the parameter vector. The log-likelihood function is given by
\[
\ell(\theta) = \sum_{i=1}^{n} \left\{ u_i \left[ \log \phi(z_i) - \log \sigma_i + \log \Phi\!\left( \frac{w_i^\top \gamma + \rho_i z_i}{\sqrt{1 - \rho_i^2}} \right) \right] + (1 - u_i) \log \Phi(-w_i^\top \gamma) \right\},
\]
where \(u_i = 1\) if \(y_i\) is observed and \(u_i = 0\) otherwise, \(z_i = (y_i - x_i^\top \beta)/\sigma_i\), \(\sigma_i = \exp(v_i^\top \lambda)\), and \(\rho_i = \tanh(e_i^\top \kappa)\), for \(i = 1, \dots, n\).
Expressions for the score function and the respective Hessian matrix are presented in the Supplementary Material. The maximum likelihood estimators (MLEs) are obtained as the solution of the non-linear system of equations \(S(\theta) = \mathbf{0}\), which does not have an explicit analytic form. We use the Broyden–Fletcher–Goldfarb–Shanno (BFGS) quasi-Newton algorithm via the optim function of the software R (R Core Team, 2020) to maximize the log-likelihood function.
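As a sketch of this estimation step, a log-likelihood of the form above can be coded and maximized with BFGS in Python via scipy (standing in for R's optim). The simulated data, the parameter partition, and the zero starting values are illustrative assumptions, not the interface of the authors' package.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 2000
x = np.column_stack([np.ones(n), rng.standard_normal(n)])   # outcome covariates
w = np.column_stack([np.ones(n), rng.standard_normal(n)])   # selection covariates
v, e = x, w   # dispersion / correlation covariates (assumed)

# True parameter values used to simulate the data (made up).
b0, g0 = np.array([1.0, 0.5]), np.array([0.5, 1.0])
l0, k0 = np.array([0.1, 0.2]), np.array([0.3, 0.2])
sig, rho = np.exp(v @ l0), np.tanh(e @ k0)
eps2 = rng.standard_normal(n)
eps1 = sig * (rho * eps2 + np.sqrt(1 - rho**2) * rng.standard_normal(n))
u = (w @ g0 + eps2 > 0).astype(float)
y = np.where(u == 1, x @ b0 + eps1, 0.0)

def negloglik(theta):
    """Negative log-likelihood of the generalized Heckman model (sketch)."""
    b, g, l, k = theta[:2], theta[2:4], theta[4:6], theta[6:8]
    s, r = np.exp(v @ l), np.tanh(e @ k)
    z = (y - x @ b) / s
    a = (w @ g + r * z) / np.sqrt(1 - r**2)
    ll = u * (norm.logpdf(z) - np.log(s) + norm.logcdf(a)) \
        + (1 - u) * norm.logcdf(-(w @ g))
    return -ll.sum()

res = minimize(negloglik, np.zeros(8), method="BFGS")  # quasi-Newton maximization
```

In the paper, the starting values come from a classic Heckman fit; zeros are used here only to keep the sketch short.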
It is important to emphasize that the maximum likelihood method may suffer from possible multicollinearity problems when the selection equation has the same covariates as the regression equation (for example, see Marchenko and Genton (2012)). To reduce the impact of this problem in parameter estimation, the exclusion restriction is suggested in the literature. According to this approach, at least one significant covariate included in the selection equation should not be included in the primary regression. The interested reader can find more details on the exclusion restriction procedure for the Heckman sample selection model in Heckman (1976), Leung and Yu (2000) and Newey (2009).
We now discuss diagnostic techniques, which have been proposed to detect observations that could exert some influence on the parameter estimates or on inference in general. In what follows, for the generalized Heckman model, we describe the generalized Cook distance (GCD), and in the next section we propose the score residual.
Cook's distance is a method commonly used in statistical modeling to evaluate changes in the estimated vector of parameters when observations are deleted. It allows us to assess the effect of each observation on the estimated parameters. The methodology proposed by Cook (1977) suggests the deletion of each observation and the evaluation of the log-likelihood function without such a case. According to Xie and Wei (2007), the generalized Cook distance (GCD) is defined by
\[
\mathrm{GCD}_i = (\widehat{\theta} - \widehat{\theta}_{(i)})^\top M\, (\widehat{\theta} - \widehat{\theta}_{(i)}),
\]
where \(M\) is a nonnegative definite matrix, which measures the weighted combination of the elements of the difference \(\widehat{\theta} - \widehat{\theta}_{(i)}\), and \(\widehat{\theta}_{(i)}\) is the MLE of \(\theta\) when removing the \(i\)th observation. Many choices for \(M\) were considered by Cook and Weisberg (1982). We use the inverse variance-covariance matrix of \(\widehat{\theta}\). To determine whether the \(i\)th case is potentially influential on inference about \(\theta\), we check whether its associated GCD value is greater than a reference cutoff; in this case, this point would be a possible influential observation.
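The GCD formula can be illustrated in a plain linear-regression setting, where the leave-one-out estimates are cheap to recompute. This generic Python sketch uses the inverse estimated covariance of the coefficients as the weight matrix \(M\); it is not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 3
X = np.column_stack([np.ones(n), rng.standard_normal((n, p - 1))])
y = X @ np.array([1.0, 2.0, -1.0]) + rng.standard_normal(n)

def ols(Xm, ym):
    # Least-squares estimate, the MLE under normal errors.
    return np.linalg.solve(Xm.T @ Xm, Xm.T @ ym)

theta_hat = ols(X, y)
s2 = np.sum((y - X @ theta_hat) ** 2) / (n - p)   # residual variance estimate
M = (X.T @ X) / s2                                # inverse covariance of theta_hat

gcd = np.empty(n)
for i in range(n):
    keep = np.arange(n) != i
    d = ols(X[keep], y[keep]) - theta_hat         # shift from deleting case i
    gcd[i] = d @ M @ d                            # generalized Cook distance
```

Cases whose GCD stands far above the rest are candidates for influential observations, to be inspected and possibly refitted without.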
We illustrate the usage of the GCD in the analysis of the medical expenditure data in Section 5. Regarding the residual analysis, in the next section, we propose a proper residual for sample selection models, which is one of the aims of this paper.
3 Asymptotic Properties and Score Residuals
Our aim in this section is to show that, under some conditions, our proposed generalized Heckman sample selection model satisfies the regularity conditions stated by Cox and Hinkley (1979). As a consequence, the maximum likelihood estimators discussed in the previous section are consistent and asymptotically normally distributed. As a by-product of our findings here, we propose a score residual that is approximately normally distributed. Proofs of the theorems stated in this section can be found in the Appendix.
Let \(\Theta\) be the parameter space and \(\ell_i(\theta)\) be the contribution of the \(i\)th observation to the log-likelihood function, where \(\theta\) retains its definition from the previous section, for \(i = 1, \dots, n\).
Theorem 3.1. The score function \(S(\theta)\) associated with the generalized Heckman model has mean zero and satisfies the identity \(E\{S(\theta) S(\theta)^\top\} = -E\{\partial S(\theta)/\partial \theta^\top\}\).
We now propose a new residual for sample selection models, inspired by Theorem 3.1. From (15), we define the ordinary score residual for the non-censored observations (where \(u_i = 1\)) and the standardized score residual by
where , for such that . Alternatively, a score residual based on all observations (including the censored ones) can be defined by
for \(i = 1, \dots, n\). In practice, we replace the unknown parameters by their maximum likelihood estimates. The goodness-of-fit of our proposed generalized Heckman model will be evaluated through this score residual analysis. Based on this approach, discrepant observations can be identified, and it is also possible to evaluate the existence of serious departures from the assumptions inherent to the model. If the model is appropriate, plots of residuals versus predicted values should behave randomly around zero. Alternatively, a common approach is to build residual plots with simulated envelopes (Atkinson, 1985). In this case, it is not necessary to know the distribution of the residuals; they just need to lie within the region formed by the envelopes, thus indicating a good fit. Otherwise, residuals outside the envelopes are possible outliers or indicate that the model is not properly specified. We will apply the proposed score residual (10) to the MEPS data analysis. As will be shown, the residual analysis indicates that the normal assumption for the data is suitable, in contrast with the non-robustness of the Heckman model mentioned in existing papers in the literature on sample selection models.
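A simulated envelope of the kind referred to above can be sketched as follows; standard normal draws stand in for the fitted model's score residuals, and the band levels are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(7)
n, B = 150, 200
resid = np.sort(rng.standard_normal(n))   # stand-in for the sorted score residuals

# Envelope: sort each of B samples simulated under the fitted model, then take
# pointwise bands across the simulations for every order statistic.
sims = np.sort(rng.standard_normal((B, n)), axis=1)
lo = np.percentile(sims, 2.5, axis=0)
hi = np.percentile(sims, 97.5, axis=0)

inside = np.mean((resid >= lo) & (resid <= hi))  # fraction of points in the band
```

A high fraction of points inside the band indicates a good fit; points escaping the envelope flag possible outliers or model misspecification.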
We now establish the consistency and asymptotic normality of the MLEs for our proposed generalized Heckman model. For this, we need to assume some usual regularity conditions.
(C1) The parameter space is closed and compact, and the true parameter value, say , is an interior point of .
(C2) The covariates are a sequence of iid random vectors, and is the information matrix conditional on the covariates.
(C3) is well-defined and positive definite, and , where is the Frobenius norm.
Theorem 3.3. Under Conditions (C1)–(C3), the maximum likelihood estimator \(\widehat{\theta}\) of \(\theta\) for the generalized Heckman model is consistent and satisfies the weak convergence \(K_n(\theta_0)^{1/2} (\widehat{\theta} - \theta_0) \Rightarrow N(\mathbf{0}, \mathbb{I})\), where \(\mathbb{I}\) is the identity matrix, \(K_n(\cdot)\) is the conditional information matrix, and \(\theta_0\) denotes the true parameter vector value.
An important consequence of Theorem 3.3 is that the classic Heckman model is regular under Conditions (C1)–(C3). Therefore, the MLEs for this model are consistent and asymptotically normally distributed, which is a new finding of this paper.
4 Monte Carlo Simulations
4.1 Simulation design
In this section, we develop Monte Carlo simulation studies to evaluate and compare the performance of the maximum likelihood estimators under the generalized Heckman, classic Heckman, Heckman-Skew, and Heckman-\(t\) models when the assumption of either a constant sample selection bias parameter or constant dispersion is not satisfied. To do this, six different scenarios with relevant characteristics were considered for a more detailed evaluation. In Scenarios 1 and 2, we use models with both varying dispersion and correlation (sample selection bias parameter), (I) with the exclusion restriction and (II) without the exclusion restriction.
For Scenarios 3–6, the exclusion restriction is considered. More specifically, in Scenarios 3, 4, and 5, we have specified the following: (III) constant dispersion and varying correlation; (IV) varying dispersion and constant correlation; (V) both constant dispersion and constant correlation. To evaluate the sensitivity of the parameter estimation of the selection models under high censoring, in Scenario 6 we simulated from the generalized Heckman model with (VI) both the sample selection bias and dispersion parameters varying and a high average percentage of censoring.
Scenario 1 aims to evaluate the performance of the generalized Heckman model and compare it with its competitors when the assumption of constant sample selection bias parameter and dispersion is not satisfied. Scenario 2 is devoted to demonstrating that despite the absence of exclusion restriction, our model can yield satisfactory parameter estimates. Scenarios 3 and 4 aim to justify the importance of modeling through covariates the correlation and dispersion parameters, respectively. Scenario 5 illustrates some problems that the generalized Heckman model can face as with the classic Heckman model. Finally, Scenario 6 was included to demonstrate the sensitivity of selection models to high correlation and high censoring. We here present the results from Scenario 1. The remaining results are presented in the Supplementary Material.
All scenarios were based on the following regression structures:
for . All covariates were generated from a standard normal distribution and were kept constant throughout the experiment. The responses were generated from the generalized Heckman model according to each of the six configurations. We set the sample sizes and Monte Carlo replicates. Pilot simulations showed that the choice of parameters used in the simulations does not affect the results, as long as they maintain the same average percentage of censorship.
We would like to highlight that there is no R package for fitting the Heckman-\(t\) and Heckman skew-normal models. Therefore, we developed an R package (to be submitted) able to fit our proposed generalized Heckman model and also the sample selection models by Marchenko and Genton (2012) and Ogundimu and Hutton (2016).
In the maximization procedure to estimate the parameters based on the optim function, we consider as initial values for \(\beta\), \(\gamma\), \(\sigma\), and \(\rho\) the maximum likelihood estimates obtained from the classic Heckman model; the remaining parameters are initialized at zero. The initial values for the degrees of freedom of the Heckman-\(t\) model and the skewness parameter of the Heckman-Skew model were chosen after some pilot simulations.
4.2 Scenario 1: Varying sample selection bias and dispersion parameters
In Table 1, we present the empirical mean and root mean square error (RMSE) of the maximum likelihood estimates of the parameters based on the generalized Heckman, classic Heckman, Student-\(t\), and skew-normal sample selection models under Scenario 1. From this table, we observe a good performance of the MLEs based on the generalized Heckman model, even for estimating the parameters related to the sample selection bias and dispersion. The bias and the RMSE under this model decrease for all the estimates as the sample size increases, suggesting the consistency of the MLEs, which is in line with our Theorem 3.3. On the other hand, even with the regression structures for the sample selection bias and dispersion parameters correctly specified, the MLEs for these parameters based on the classic Heckman, skew-normal, and Student-\(t\) models do not provide satisfactory estimates, even for a large sample. This illustrates the importance of considering covariates for the sample selection bias and dispersion parameters. The mean estimates of the degrees of freedom and skewness for the Student-\(t\) and skew-normal sample selection models were and , respectively.
The above comments are also supported by Figures 1, 2, and 3, where boxplots of the parameter estimates are presented for sample sizes , , and , respectively. We did not present the boxplots of the estimates of , , and since they behaved similarly to other boxplots.
We now provide some simulations to evaluate the size and power of likelihood ratio, gradient, and Wald tests. We consider Scenario 1 and present the empirical significance level of the tests in Table 2 for nominal significance levels at 1%, 5%, and 10%.
Under the null hypothesis of absence of sample selection bias (\(\rho_i = 0\) for all \(i\), that is, \(\kappa = \mathbf{0}\)), the likelihood ratio, gradient, and Wald tests presented empirical levels close to the nominal values only under the generalized Heckman model. For the other models, the type-I error was inflated and indicated the presence of selection bias. This suggests that the tests should be used with caution when testing parameters of sample selection models and that some confounding can be caused by either varying sample selection bias or heteroskedasticity. It is important to point out that, even for the generalized Heckman model, the Wald test presents a considerably inflated type-I error for small sample sizes.
In Table 3, we present the empirical power of the likelihood ratio, gradient, and Wald tests (in percentage) for data simulated according to Scenario 1 under the generalized Heckman, classic Heckman, Heckman-Skew, and Heckman-\(t\) models. From these results, we can observe that the tests under the generalized Heckman model provide high power, mainly as the sample size increases. On the other hand, since tests based on the other models do not attain the correct nominal significance level, the power of the tests in these cases is not really comparable.
5 MEPS Data Analysis
We present an application of the proposed model to a set of real data. Consider the outpatient expense data of the 2001 Medical Expenditure Panel Survey (MEPS) available in the R software in the package ssmrob (Zhelonkin et al., 2014). These data were also used by Cameron and Trivedi (2009), Marchenko and Genton (2012), and Zhelonkin et al. (2016) to fit the classic Heckman model, Heckman- model, and the robust version of the two-step method, respectively. The MEPS is a set of large-scale surveys of families, individuals, and their medical providers (doctors, hospitals, pharmacies, etc.) in the United States. It has data on the health services Americans use, how often they use them, the cost of these services, and how they are paid, as well as data on the cost and reach of health insurance available to American workers.
The sample is restricted to persons aged between 21 and 64 years and contains a response variable with observations of outpatient costs, of which 526 (15.8%) correspond to unobserved expenditure values identified as zero expenditure. It also includes the following explanatory variables: Age represents age measured in tens of years; Fem is an indicator variable for gender (female receives value 1); Educ represents years of schooling; Blhisp is an indicator for ethnicity (black or Hispanic receive a value of 1); Totchr is the total number of chronic diseases; Ins is the insurance status; and Income denotes the individual income.
The variable of interest \(Y_i^*\) represents the log-expenditure on medical services of the \(i\)th individual. We consider the logarithm of the expenditure since it is highly skewed (see Figure 4, where plots of the expenditure and log-expenditure are presented). The variable \(U_i^*\), denoting the willingness of the \(i\)th individual to spend, is not observed. We only observe \(U_i\), which represents the decision of the \(i\)th individual to spend on medical care or not.
According to Cameron and Trivedi (2009) and Zhelonkin et al. (2016), it is natural to fit a sample selection model to such data, since the willingness to spend is likely to be related to the expense amount. However, after fitting the classic Heckman model and using the Wald statistic for testing the absence of sample selection bias, the conclusion is that there is no statistical evidence to reject the null hypothesis; that is, there is no sample selection bias. Cameron and Trivedi (2009) doubted this conclusion on the absence of sample selection bias, and Marchenko and Genton (2012) argued that a more robust model could evidence the presence of sample selection bias in the data; these authors proposed a Student-\(t\) sample selection model to deal with this problem. However, as will be illustrated in this application, this problem of the classic Heckman model can be due to the assumption of constant sample selection bias and constant dispersion parameters rather than the normality assumption itself.
After a preliminary analysis, we consider the following regression structures for our proposed generalized Heckman model:
for \(i = 1, \dots, n\). The selection equation has the same covariates as the primary equation plus the additional covariate Income, so that the exclusion restriction is in force. In Table 4, we present the summary of the fits of the classic Heckman and generalized Heckman models. From this table, we can observe that the covariates Fem and Totchr are significant to explain the sample selection bias at any usual significance level. We performed a likelihood ratio (LR) test for checking the absence or presence of sample selection bias, which rejected the absence of bias. Therefore, our proposed generalized Heckman model is able to detect the presence of sample selection bias even under the normality assumption. We also performed the gradient and Wald tests, which confirmed this conclusion.
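The LR test arithmetic is the standard one: twice the gap in maximized log-likelihoods, referred to a chi-squared distribution whose degrees of freedom equal the number of restricted coefficients. The log-likelihood values and degrees of freedom below are hypothetical, not the ones from this application.

```python
from scipy.stats import chi2

ll_full = -5213.4   # maximized log-likelihood, unrestricted model (hypothetical)
ll_null = -5226.9   # maximized log-likelihood, no-selection-bias model (hypothetical)
df = 3              # number of restricted coefficients (hypothetical)

lr = 2 * (ll_full - ll_null)   # likelihood ratio statistic
pval = chi2.sf(lr, df)         # upper-tail chi-squared p-value
```

A small p-value rejects the restricted model, i.e., it evidences the presence of sample selection bias.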
Further, the covariates Age, Totchr, and Ins were significant for the dispersion parameter. For the selection equation, the covariate Income is not significant (at the 5% significance level) under the generalized Heckman model, in contrast with the classic Heckman model; nevertheless, it is important to keep it in order to satisfy the exclusion restriction. Regarding the primary equation, we observe that the covariate Ins is significant only under the classic Heckman model. Another interesting point is that Educ is strongly significant for the primary equation under our proposed generalized Heckman model.
Summary fits of the classic Heckman model (HM) and generalized Heckman model (GHM). The GHM summary fit contains estimates with their respective standard errors, \(z\)-values, \(p\)-values, and lower and upper bounds of the confidence intervals.
We conclude this application by checking the goodness-of-fit of the fitted generalized Heckman sample selection model. In Figure 5, we provide the QQ-plot of the score residuals given in (10) with simulated envelopes, and also a Cook distance plot for detecting global influence. Based on this last plot, we do not detect any outlying observation, since all points are below the reference line. Anyway, we investigated whether the highlighted point #2602 is influential. We fitted our model removing this observation, and neither changes in the parameter estimates nor different conclusions about the significance of covariates were obtained. Regarding the QQ-plot of the score residuals, we observe a good performance of our model, since 96% of the points are inside the envelope. This confirms that the normal assumption for this particular dataset is adequate and that our generalized Heckman model is suitable for the MEPS data analysis.
6 Concluding Remarks
In this paper, a generalization of the Heckman model was proposed by allowing both sample selection bias and dispersion parameters to vary across covariates. We showed that the proposed model satisfies certain regularity conditions that ensure consistency and asymptotic normality of the maximum likelihood estimators. Furthermore, a proper score residual for sample selection models was proposed. These findings are new contributions on this topic. The MEPS data analysis based on the generalized Heckman model showed that the normal assumption for the data is suitable, in contrast with existing findings in the literature. Future research should address (i) generalization of other sample selection models, such as the Student-\(t\) and skew-normal ones, to allow varying sample selection bias and dispersion parameters; (ii) proposal of proper residuals for other sample selection models; and (iii) a deeper study of influence analysis. An R package for fitting our proposed generalized Heckman, Student-\(t\), and skew-normal models has been developed and will be available soon.
Proof of Theorem 3.1. We here show the results for the derivatives with respect to the \(\beta\)'s. The results involving the other derivatives follow similarly and are therefore omitted.
For and , we have that
By using basic properties of conditional expectation, it follows that and it is immediate that . Let us now compute the conditional expectations involved in .
Here it is worth remembering the notations and , for . We now use the conditional density function given in (8) to obtain that
where the last equality follows by identifying a normal kernel in the integral. On the other hand, we use the fact that given has mean equal to (more details are given in the Supplementary Material) and get
With the results above, we obtain that almost surely and therefore for .
We now concentrate our attention to prove the identity stated in the theorem. It follows that
where we have used that (since ) in the last equality. It is immediate that .
Following in a similar way as before, after some algebra we obtain that
By combining these results, we have that
Since the conditional expectations coincide, the marginal expectations also coincide so giving the desired result.
Proof of Theorem 3.3. Conditions (C1)–(C3) and Theorem 3.1 give us the consistency of the MLEs. To establish the asymptotic normality of the estimators, we need to show that the third derivatives of the log-likelihood function are bounded by integrable functions not depending on the parameters.
We will show here that this is possible for the derivatives involving the 's. The other cases follow similarly, as discussed in the proof of Theorem 3.1, and are therefore omitted.
By computing the third derivatives with respect to the 's and using the triangle inequality, we have that
for , where , which is well-defined due to Assumption (C1).
We now need to show that the expectations of the terms in are finite. Let us show that , where denotes the expectation with respect to the true parameter vector value and is defined as by replacing by . The proofs for the remaining terms follow from this one or proceed similarly.
For , it follows that . Now, consider . Theorem 1.2.6 from Durrett (2019) gives us the following inequality for :
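Assuming the result being invoked is the standard Gaussian tail bound, Theorem 1.2.6 of Durrett (2019) states that, for $x > 0$,

```latex
\left(\frac{1}{x} - \frac{1}{x^{3}}\right) e^{-x^{2}/2}
\;\le\;
\int_{x}^{\infty} e^{-y^{2}/2}\,dy
\;\le\;
\frac{1}{x}\, e^{-x^{2}/2}.
```

The upper bound is what makes the normal tail (and hence the inverse Mills ratio terms appearing in these expectations) controllable by an integrable function.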
Using this inequality and under , we obtain that . These results imply that
with being proved along the same lines as the first two moments presented in the Supplementary Material, which completes the proof of the desired result.
- Arabmazar and Schmidt (1981) Arabmazar, A. and Schmidt, P. (1981). Further evidence on the robustness of the tobit estimator to heteroskedasticity. Journal of Econometrics 17, 253–258.
- Atkinson (1985) Atkinson, A. C. (1985). Plots, Transformations, and Regression. Oxford: Oxford University Press.
- Azzalini et al. (2019) Azzalini, A. and Kim, H. M. and Kim, H. J. (2019). Sample selection models for discrete and other non-Gaussian response variables. Statistical Methods and Applications 19, 27–56.
- Cameron and Trivedi (2009) Cameron, C. A. and Trivedi, P. K. (2009). Microeconometrics Using Stata. TX: Stata Press.
- Chib et al. (2009) Chib, S. and Greenberg, E. and Jeliazkov, I. (2009). Estimation of semiparametric models in the presence of endogeneity and sample selection. Journal of Computational and Graphical Statistics 18, 321–348.
- Cook (1977) Cook, R. D. (1977). Detection of influential observation in linear regression. Technometrics 19, 15–18.
- Cook and Weisberg (1982) Cook, R.D. and Weisberg, S. (1982). Residuals and Influence in Regression. New York: Chapman and Hall.
- Cox and Hinkley (1979) Cox, D.R. and Hinkley, D.V. (1979). Theoretical Statistics. Chapman and Hall/CRC.
- Donald (1995) Donald, S.G. (1995). Two-Step estimation of heteroskedastic sample selection models. Journal of Econometrics 65, 347–380.
- Durrett (2019) Durrett, R. (2019). Probability: Theory and Examples. Cambridge University Press.
- Enders (2010) Enders, C.K. (2010). Applied Missing Data Analysis. Guilford Press: New York, NY.
- Fahrmeir and Kaufmann (1985) Fahrmeir, L. and Kaufmann, H. (1985). Consistency and asymptotic normality of the maximum likelihood estimator in generalized linear models. Annals of Statistics 13, 342–368.
- Heckman (1974) Heckman, J.J. (1974). Shadow prices, market wages, and labor supply. Econometrica 42, 679–694.
- Heckman (1976) Heckman, J.J. (1976). The common structure of statistical models of truncation, sample selection and limited dependent variables and a simple estimator for such models. Annals of Economic and Social Measurement 5, 475–492.
- Heckman (1979) Heckman, J.J. (1979). Sample selection bias as a specification error. Econometrica 47, 153–161.
- Hurd (1979) Hurd, M. (1979). Estimation in truncated samples when there is heteroscedasticity. Journal of Econometrics 11, 247–258.
- Kim et al. (2019) Kim, H., Roh, T. and Choi, T. (2019). Bayesian analysis of semiparametric Bernstein polynomial regression models for data with sample selection. Statistics 53, 1–30.
- Lai and Tsay (2018) Lai, H.P. and Tsay, W.J. (2018). Maximum simulated likelihood estimation of the panel sample selection model. Econometric Reviews 37, 744–759.
- Leung and Yu (1996) Leung, S.F. and Yu, S. (1996). On the choice between sample selection and two-part models. Journal of Econometrics 72, 197–229.
- Leung and Yu (2000) Leung, S. F. and Yu, S. (2000). Collinearity and two-step estimation of sample selection models: Problems, origins, and remedies. Computational Economics 15, 173–199.
- Marchenko and Genton (2012) Marchenko, Y. V. and Genton, M. G. (2012). A Heckman selection-t model. Journal of the American Statistical Association 107, 304–317.
- Marra and Wyszynski (2016) Marra, G. and Wyszynski, K. (2016). Semi-parametric copula sample selection models for count responses. Computational Statistics and Data Analysis 104, 110–129.
- Mu and Zhang (2018) Mu, B. and Zhang, Z. (2018). Identification and estimation of heteroscedastic binary choice models with endogenous dummy regressors. Econometrics Journal 21, 218–246.
- Newey (2009) Newey, W.K. (2009). Two-step series estimation of sample selection models. Econometrics Journal 12, 217–229.
- Ogundimu and Hutton (2016) Ogundimu, E.O. and Hutton, J.L. (2016). A sample selection model with skew-normal distribution. Scandinavian Journal of Statistics 43, 172–190.
- Powell (1986) Powell, J.L. (1986). Symmetrically trimmed least squares estimation for tobit models. Econometrica 54, 1435–1460.
- R Core Team (2020) R Core Team (2020). R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing.
- Wiesenfarth and Kneib (2010) Wiesenfarth, M. and Kneib, T. (2010). Bayesian geoadditive sample selection models. Journal of the Royal Statistical Society - Series C 59, 381–404.
- Wojtys et al. (2018) Wojtys, M. and Marra, G. and Radice, R. (2018). Copula based generalized additive models for location, scale and shape with non-random sample selection. Computational Statistics and Data Analysis 127, 1–14.
- Wyszynski and Marra (2018) Wyszynski, K. and Marra, G. (2018). Sample selection models for count data in R. Computational Statistics 33, 1385–1412.
- Xie and Wei (2007) Xie, F.-C. and Wei, B.-W. (2007). Diagnostics analysis for log-Birnbaum–Saunders regression models. Computational Statistics and Data Analysis 51, 4692–4706.
- Zhelonkin et al. (2014) Zhelonkin, M. and Genton, M. G. and Ronchetti, E. (2014). R package ssmrob: Robust estimation and inference in sample selection models. https://CRAN.R-project.org/package=ssmrob.
- Zhelonkin et al. (2016) Zhelonkin, M. and Genton, M. G. and Ronchetti, E. (2016). Robust inference in sample selection models. Journal of the Royal Statistical Society - Series B 78, 805–827.