The proliferation of panel data studies is well documented, and much of it has been attributed to greater data availability and methodological developments (Hsiao, 2007). While panel data have been attractive for understanding behavior and dynamics, the modeling complexities involved have drawn attention away from their unique capacities. Modeling features such as a binary outcome variable or a quantile analysis, which are relatively straightforward to implement with cross-sectional data, are challenging and computationally burdensome for panel data. However, these features are important, as they allow for the modeling of probabilities and lead to a richer view of how the covariates influence the outcome variable. Motivated by these difficulties, this paper adds to the methodological advancements for panel data by developing quantile regression methods for binary longitudinal data and designing a computationally efficient estimation algorithm. The approach is applied to two empirical studies: female labor force participation and home ownership.
The paper touches on three growing econometric literatures: discrete panel data, quantile regression for panel data, and quantile regression for discrete data. In reference to the last of these, quantile regression has been implemented in binary data models (Kordas, 2006; Benoit and Poel, 2012), ordered data models (Rahman, 2016; Alhamzawi and Ali, 2018), count data models (Machado and Silva, 2005; Harding and Lamarche, 2015), and censored data models (Portnoy, 2003; Harding and Lamarche, 2012). For limited dependent variables, the concern is modeling the latent utility differential in the quantile framework, since the response variable takes limited values and does not yield continuous quantiles. Our paper follows the work in this literature by using the latent utility setting and interpreting the utility as a "propensity" or "willingness" that underlies the latent scale, thus increasing our understanding of the impact of the covariates on the binary outcomes.
The literature on quantile regression in panel data settings includes (but is not limited to) Koenker (2004), Geraci and Bottai (2007), Liu and Bottai (2009), Galvao (2010), Galvao and Kato (2016), Lamarche (2010), Harding and Lamarche (2009), and Harding and Lamarche (2017). The last of these papers discusses the issues associated with solely focusing on fixed effects estimators and highlights the usefulness of allowing for a flexible specification of individual heterogeneity associated with covariates, which is also of interest in the present paper. In a recent Bayesian paper, Luo et al. (2012) develop a hierarchical model to estimate the parameters of conditional quantile functions with random effects. The authors do so by adopting an Asymmetric Laplace (AL) distribution for the residual errors and suitable prior distributions for the parameters. However, directly using the AL distribution does not yield tractable conditional densities for all of the parameters, and hence a combination of Metropolis-Hastings (MH) and Gibbs sampling is required for model estimation. The use of the MH algorithm may require tuning at each quantile. To overcome this limitation, Luo et al. (2012) also present a full Gibbs sampling algorithm that utilizes the normal-exponential mixture representation of the AL distribution. This mixture representation is also followed in our work, with important computational improvements.
Finally, for discrete panel data, recent work by Bartolucci and Nigro (2010) introduces a quadratic exponential model for binary panel data and utilizes a conditional likelihood approach, which is computationally simpler than previous classical estimators. Bayesian approaches to binary panel data models include work by Albert and Chib (1996), Chib and Carlin (1999), Chib and Jeliazkov (2006), and Burda and Harding (2013). These works influence the estimation methods designed in our quantile approach to binary panel data.
This paper contributes to these three literatures by extending the various methodologies to a hierarchical Bayesian quantile regression model for binary longitudinal data and proposing a Markov chain Monte Carlo (MCMC) algorithm to estimate the model. The model handles both common (fixed) and individual-specific (random) parameters, commonly referred to as mixed effects in statistics. The algorithm implements a blocking procedure that is computationally efficient, and the distributions involved allow for straightforward calculation of covariate effects. The framework is implemented in two empirical applications. The first application examines female labor force participation, which has been heavily studied in panel form. The topic became of particular interest in the state dependence versus heterogeneity debate (Heckman, 1981a). We revisit this question and implement our panel quantile approach, which has been otherwise unexplored for this topic. The results offer new insights regarding the determinants of female labor force participation and how the ages of children have different effects across the quantiles and the utility scale. The findings suggest that policy should be focused on women's transitions into the labor force after childbirth and in the few years after.
The second application considers the probability of home ownership during the Great Recession. Micro-level empirical analyses on individuals moving into and out of housing markets are lacking in the recent literature. Past studies include Carliner (1974) and Poirier (1977), but the recent housing crisis offers a new opportunity to reevaluate the topic. Furthermore, a full quantile analysis of home ownership is yet to be explored. Since home ownership is a choice that requires years of planning, individual characteristics may vary drastically across the latent utility scale. The analysis presented in this paper controls for multivariate heterogeneity in individuals and wealth, and investigates the determinants of home ownership, state dependence in home ownership, and how the shock to housing markets affected these factors. The results provide an understanding of how individuals of particular demographics and socioeconomic status fared during the collapse of the housing market.
The rest of the paper is organized as follows. Section 2 reviews quantile regression and the AL distribution. Section 3 introduces the quantile regression model for binary longitudinal data, presents a simulation study, and discusses methods for covariate effects. Section 4 considers the two applications, and concluding remarks are offered in Section 5.
2 Quantile Regression and Asymmetric Laplace Distribution
The $\tau$-th quantile of a random variable $Y$ is the value $q_\tau$ such that the probability that $Y$ will be less than $q_\tau$ equals $\tau$. Mathematically, if $F^{-1}$ denotes the inverse of the cumulative distribution function (cdf) of $Y$, the $\tau$-th quantile is defined as
$$ q_\tau = F^{-1}(\tau), \qquad \tau \in (0, 1). $$
Quantile regression implements the idea of quantiles within the regression framework, with $F^{-1}$ modified to denote the inverse cdf of the dependent variable given the covariates. The objective is to estimate conditional quantile functions; to this purpose, regression quantiles are estimated by minimizing the quantile objective function, which is a sum of asymmetrically weighted absolute residuals.
To formally explain the quantile regression problem, consider the following linear model,
$$ y_i = x_i'\beta_\tau + \epsilon_{i}, \qquad i = 1, \ldots, n, \tag{1} $$
where $y_i$ is a scalar response variable, $x_i$ is a $k \times 1$ vector of covariates, $\beta_\tau$ is a $k \times 1$ vector of unknown parameters that depend on the quantile $\tau$, and $\epsilon_i$ is the error term such that its $\tau$-th quantile equals zero. Henceforth, we will drop the subscript $\tau$ for notational simplicity. In classical econometrics, the error does not (or is not assumed to) follow any distribution and estimation requires minimizing the following objective function,
$$ \min_{\beta} \; \sum_{i:\, y_i \ge x_i'\beta} \tau\, |y_i - x_i'\beta| \; + \sum_{i:\, y_i < x_i'\beta} (1 - \tau)\, |y_i - x_i'\beta|. \tag{2} $$
The minimizer $\hat{\beta}_\tau$ gives the $\tau$-th regression quantile and the estimated conditional quantile function is obtained as $\hat{Q}_{y_i}(\tau \mid x_i) = x_i'\hat{\beta}_\tau$. Alternatively, the objective function (2) can be written as a sum of piecewise linear or check functions as follows,
$$ \min_{\beta} \; \sum_{i=1}^{n} \rho_\tau(y_i - x_i'\beta), $$
where $\rho_\tau(u) = u\,\{\tau - I(u < 0)\}$ and $I(\cdot)$ is an indicator function, which equals 1 if the condition inside the parenthesis is true and 0 otherwise. The check function, as seen in Figure 1, is not differentiable at the origin. Hence, classical econometrics relies on computational techniques to estimate quantile regression models. Such computational methods include the simplex algorithm (Dantzig, 1963; Dantzig and Thapa, 1997, 2003; Barrodale and Roberts, 1973; Koenker and d'Orey, 1987), the interior point algorithm (Karmarkar, 1984; Mehrotra, 1992; Portnoy and Koenker, 1997), the smoothing algorithm (Madsen and Nielsen, 1993; Chen, 2007), and metaheuristic algorithms (Rahman, 2013).
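To make the check-function objective concrete, the following sketch (with illustrative simulated data, not from the paper) verifies numerically that the sample $\tau$-th quantile minimizes the sum of check losses:

```python
import numpy as np

def check_loss(u, tau):
    """Quantile check function: rho_tau(u) = u * (tau - 1{u < 0})."""
    u = np.asarray(u)
    return u * (tau - (u < 0).astype(float))

rng = np.random.default_rng(0)
y = rng.normal(size=5000)
tau = 0.25

# Evaluate the objective sum_i rho_tau(y_i - q) over a grid of candidates q.
grid = np.linspace(-3.0, 3.0, 1201)
obj = np.array([check_loss(y - q, tau).sum() for q in grid])
q_hat = grid[obj.argmin()]

# The minimizer is, up to grid resolution, the empirical 25th percentile.
print(q_hat, np.quantile(y, tau))
```

With covariates, the same loss is minimized over $\beta$ rather than over a scalar $q$, which is what the algorithms cited above accomplish.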
In contrast to classical quantile regression, Bayesian quantile regression assumes that the error follows an AL distribution because the AL pdf contains the quantile loss function (2) in its exponent. This facilitates the construction of a working likelihood, required for Bayesian analysis. Maximizing an AL likelihood is equivalent to minimizing the quantile objective function (Koenker and Machado, 1999; Yu and Moyeed, 2001). A random variable $Y$ follows an AL distribution, denoted $Y \sim AL(\mu, \sigma, p)$, if its probability density function (pdf) is given by:
$$ f(y \mid \mu, \sigma, p) = \frac{p(1-p)}{\sigma} \exp\left\{ -\rho_p\left( \frac{y - \mu}{\sigma} \right) \right\}, \tag{3} $$
where $\rho_p(\cdot)$ is the check function as defined earlier, $\mu$ is the location parameter, $\sigma$ is the scale parameter, and $p \in (0, 1)$ is the skewness parameter (Kotz et al., 2001; Yu and Zhang, 2005). The mean and variance of $Y$ with pdf (3) are
$$ E(Y) = \mu + \sigma\, \frac{1 - 2p}{p(1-p)}, \qquad V(Y) = \sigma^2\, \frac{1 - 2p + 2p^2}{p^2 (1-p)^2}. $$
If $\mu = 0$ and $\sigma = 1$, then both mean and variance depend only on $p$ and hence are fixed for a given value of $p$.
The Bayesian approach to quantile regression for binary data assumes that $\epsilon \sim AL(0, 1, p)$. Here, the variance is held constant to serve as a normalization for identification, as is typical in probit and logit models (Poirier and Ruud, 1988; Koop and Poirier, 1993; Jeliazkov and Rahman, 2012). However, working directly with the AL distribution is not conducive to constructing a Gibbs sampler, and hence the normal-exponential mixture representation of the AL distribution is often employed (Kozumi and Kobayashi, 2011). Several recent papers have utilized the mixture representation, including Ji et al. (2012) for Bayesian model selection in binary and Tobit quantile regression, Luo et al. (2012) for estimating linear longitudinal data models, and Rahman (2016) for estimating ordinal quantile regression models. We also exploit the normal-exponential mixture representation of the AL distribution to derive the estimation algorithm for quantile regression in binary longitudinal data settings.
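The normal-exponential mixture can be illustrated with a short simulation (our own sketch; the constants $\theta = (1-2p)/(p(1-p))$ and $\tau = \sqrt{2/(p(1-p))}$ follow the standard mixture representation of Kozumi and Kobayashi, 2011):

```python
import numpy as np

rng = np.random.default_rng(1)
p = 0.25                                  # skewness (quantile) parameter
theta = (1 - 2 * p) / (p * (1 - p))
tau = np.sqrt(2 / (p * (1 - p)))

# AL(0, 1, p) draws via the mixture: eps = theta*w + tau*sqrt(w)*u,
# with w ~ Exp(1) and u ~ N(0, 1) mutually independent.
n = 200_000
w = rng.exponential(scale=1.0, size=n)
u = rng.standard_normal(n)
eps = theta * w + tau * np.sqrt(w) * u

# Checks: the p-th quantile of AL(0, 1, p) is 0, and E[eps] = (1-2p)/(p(1-p)).
print(np.quantile(eps, p), eps.mean())
```

Conditional on $w$, the draw is Gaussian, which is precisely what makes the mixture attractive for Gibbs sampling.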
3 The Quantile Regression Model for Binary Longitudinal Data
This section presents the quantile regression model for binary longitudinal data (QBLD) and an estimation algorithm to fit the model. The performance of the proposed algorithm is illustrated in a simulation study. The last part of this section considers methods for model comparison and covariate effects.
3.1 The Model
The proposed model looks at quantiles of binary longitudinal data expressed as a function of covariates with common effects and individual-specific effects. The individual-specific effects offer additional flexibility in that both intercept and slope heterogeneity can be captured, which is important to avoid biases in the parameter estimates. The QBLD model can be conveniently expressed in the latent variable formulation (Albert and Chib, 1993) as follows,
$$ z_{it} = x_{it}'\beta + s_{it}'\alpha_i + \epsilon_{it}, \qquad y_{it} = I(z_{it} > 0), \tag{4} $$
where the latent variable $z_{it}$ denotes the value of $z$ at the $t$-th time period for the $i$-th individual, $x_{it}$ is a $k \times 1$ vector of explanatory variables, $\beta$ is a $k \times 1$ vector of common parameters, $s_{it}$ is an $l \times 1$ vector of covariates that have individual-specific effects, $\alpha_i$ is an $l \times 1$ vector of individual-specific parameters, and $\epsilon_{it}$ is the error term assumed to be independently distributed as $AL(0, 1, p)$. This implies that the conditional density of $z_{it}$ is an $AL(x_{it}'\beta + s_{it}'\alpha_i, 1, p)$ for $i = 1, \ldots, n$ and $t = 1, \ldots, T$, with $p$ set to the quantile of interest. Note that $s_{it}$ may contain a constant for intercept heterogeneity, as well as other covariates (which are often a subset of those in $x_{it}$) to account for slope heterogeneity of those variables. The variable $z_{it}$ is unobserved and represents the latent utility associated with the observed binary choice $y_{it}$. The latent variable formulation serves as a convenient tool in the estimation process (Albert and Chib, 1993). Furthermore, latent utility underlies the interpretation of the results at the various quantiles.
While working directly with the AL density is an option, the resulting posterior will not yield the full set of tractable conditional distributions necessary for a Gibbs sampler. Thus, we utilize the normal-exponential mixture representation of the AL distribution, presented in Kozumi and Kobayashi (2011), and express the error as follows,
$$ \epsilon_{it} = \theta w_{it} + \tau \sqrt{w_{it}}\, u_{it}, \tag{5} $$
where $u_{it} \sim N(0, 1)$ is mutually independent of $w_{it} \sim \mathcal{E}(1)$, with $\mathcal{E}$ representing an exponential distribution, and the constants $\theta = \frac{1 - 2p}{p(1-p)}$ and $\tau = \sqrt{\frac{2}{p(1-p)}}$. The mixture representation gives access to the appealing properties of the normal distribution.
Longitudinal data models often involve a moderately large amount of data, so it is important to take advantage of any opportunity to reduce the computational burden. One such trick is to stack the model for each individual (Hendricks et al., 1979). We define $z_i = (z_{i1}, \ldots, z_{iT})'$, $X_i = (x_{i1}, \ldots, x_{iT})'$, $S_i = (s_{i1}, \ldots, s_{iT})'$, $w_i = (w_{i1}, \ldots, w_{iT})'$, $u_i = (u_{i1}, \ldots, u_{iT})'$, and $D_i = \mathrm{diag}(w_{i1}, \ldots, w_{iT})$. Building on equations (4) and (5), the resulting hierarchical model can be written as,
$$
\begin{aligned}
z_i \mid \beta, \alpha_i, w_i &\sim N(X_i \beta + S_i \alpha_i + \theta w_i, \; \tau^2 D_i), \\
y_{it} &= I(z_{it} > 0), \\
\alpha_i \mid \varphi^2 &\sim N(0, \varphi^2 I_l), \qquad w_{it} \sim \mathcal{E}(1), \\
\beta &\sim N(\beta_0, B_0), \qquad \varphi^2 \sim IG(c_1/2, d_1/2),
\end{aligned} \tag{6}
$$
where we assume that the $\alpha_i$ are identically distributed as a normal distribution. The last row represents the prior distributions, with $N$ and $IG$ denoting the normal and inverse-gamma distributions, respectively. Here, we note that the form of the prior distribution on $\alpha_i$ holds a penalty interpretation on the quantile loss function (Koenker, 2004). A normal prior on $\alpha_i$ implies an $\ell_2$ penalty and has been used in Yuan and Yin (2010) and Luo et al. (2012). One may also employ a Laplace prior distribution on $\alpha_i$ that imposes $\ell_1$ penalization, as used in several articles such as Alhamzawi and Ali (2018). While Alhamzawi and Ali (2018) also work with quantile regression for discrete panel data (ordered, in particular), our work contributes by considering multivariate heterogeneity (not just intercept heterogeneity) and introducing computational improvements outlined below.
By Bayes' theorem, we express the "complete joint posterior" density as proportional to the product of the likelihood function and the prior distributions as follows,
$$
\begin{aligned}
\pi(z, \beta, \alpha, \varphi^2, w \mid y) &\propto f(y, z \mid \beta, \alpha, w)\, \pi(\beta)\, \pi(\alpha \mid \varphi^2)\, \pi(\varphi^2)\, \pi(w) \\
&= f(y \mid z)\, f(z \mid \beta, \alpha, w)\, \pi(\beta)\, \pi(\alpha \mid \varphi^2)\, \pi(\varphi^2)\, \pi(w),
\end{aligned} \tag{7}
$$
where the first line uses independence between the prior distributions and the second line follows from the fact that, given $z$, the observed $y$ is independent of all parameters because the second line of (6) determines $y$ given $z$ with probability 1. Substituting the distributions of the variables associated with the likelihood and the prior distributions in (7) yields the full expression for the joint posterior, labeled (8).
The joint posterior density (8) does not have a tractable form, and thus simulation techniques are necessary for estimation. Bayesian methods are increasing in popularity (Poirier, 2006), and this paper takes this approach for a couple of reasons. First, with discrete panel data, working directly with the likelihood function is complicated because it is analytically intractable, and the inclusion of individual-specific effects makes matters worse. Second, while numerical simulation methods are available for discrete panel data, they are often slow and difficult to implement (Burda and Harding, 2013). The availability of a full set of conditional distributions (outlined below) makes Gibbs sampling an attractive option that is simpler to implement, both conceptually and computationally.
We can derive the conditional posteriors of the parameters and latent variables by a straightforward extension of the estimation technique for the linear mixed-effects model presented in Luo et al. (2012). This is presented as Algorithm 2 in Appendix A, which shows the conditional posterior distributions for the parameters and latent variables necessary for a Gibbs sampler. While this Gibbs sampler is straightforward, it has the potential for poor mixing due to correlation between $\beta$ and $\alpha_i$. The correlation often arises because the covariates in $s_{it}$ are often a subset of those in $x_{it}$. Thus, by conditioning these items on one another, the mixing of the Markov chain will be slow.
Algorithm 1 (Blocked Sampling)

1. Sample $(\beta, z)$ in one block. The two objects are sampled in the following two sub-steps.

(a) Let $\Psi_i = \tau^2 D_i + \varphi^2 S_i S_i'$. Sample $\beta$ marginally of $\alpha$ from $\beta \mid z, w, \varphi^2 \sim N(\tilde{\beta}, \tilde{B})$, where,
$$ \tilde{B} = \Big( \sum_{i=1}^{n} X_i' \Psi_i^{-1} X_i + B_0^{-1} \Big)^{-1}, \qquad \tilde{\beta} = \tilde{B} \Big( \sum_{i=1}^{n} X_i' \Psi_i^{-1} (z_i - \theta w_i) + B_0^{-1} \beta_0 \Big). $$

(b) Sample the vector $z_i \mid \beta, w_i, \varphi^2, y_i \sim TMVN_{B_i}(X_i \beta + \theta w_i, \Psi_i)$ marginally of $\alpha_i$ for all $i$, where $B_i = (B_{i1} \times \cdots \times B_{iT})$ and $B_{it}$ is the interval $(0, \infty)$ if $y_{it} = 1$ and the interval $(-\infty, 0]$ if $y_{it} = 0$. This is done by sampling $z_{it}$ at the $j$-th pass of the MCMC iteration using a series of conditional posterior distributions as follows:
$$ z_{it} \mid z_{i,-t}, \beta, w_i, \varphi^2, y_{it} \sim TN_{B_{it}}(\mu_{t|-t}, \Sigma_{t|-t}), $$
where $TN$ denotes a truncated normal distribution. The terms $\mu_{t|-t}$ and $\Sigma_{t|-t}$ are the conditional mean and variance, respectively, and are defined as,
$$ \mu_{t|-t} = x_{it}'\beta + \theta w_{it} + \Psi_{i,t,-t}\, \Psi_{i,-t,-t}^{-1} \big( z_{i,-t} - X_{i,-t}\beta - \theta w_{i,-t} \big), \qquad \Sigma_{t|-t} = \Psi_{i,t,t} - \Psi_{i,t,-t}\, \Psi_{i,-t,-t}^{-1}\, \Psi_{i,-t,t}, $$
where $z_{i,-t}$ is the column vector $z_i$ with the $t$-th element removed, $\Psi_{i,t,t}$ denotes the $(t,t)$-th element of $\Psi_i$, $\Psi_{i,t,-t}$ denotes the $t$-th row of $\Psi_i$ with the element in the $t$-th column removed, and $\Psi_{i,-t,-t}$ is the matrix $\Psi_i$ with the $t$-th row and $t$-th column removed.

2. Sample $\alpha_i \mid z_i, \beta, w_i, \varphi^2 \sim N(\tilde{a}_i, \tilde{A}_i)$ for $i = 1, \ldots, n$, where,
$$ \tilde{A}_i = \big( \tau^{-2} S_i' D_i^{-1} S_i + \varphi^{-2} I_l \big)^{-1}, \qquad \tilde{a}_i = \tilde{A}_i \big( \tau^{-2} S_i' D_i^{-1} (z_i - X_i \beta - \theta w_i) \big). $$

3. Sample $w_{it} \mid z_{it}, \beta, \alpha_i \sim GIG(1/2, \tilde{\lambda}_{it}, \tilde{\eta})$ for all $i$ and $t$, where,
$$ \tilde{\lambda}_{it} = \Big( \frac{z_{it} - x_{it}'\beta - s_{it}'\alpha_i}{\tau} \Big)^2, \qquad \tilde{\eta} = \frac{\theta^2}{\tau^2} + 2. $$

4. Sample $\varphi^2 \mid \alpha \sim IG(\tilde{c}_1/2, \tilde{d}_1/2)$, where $\tilde{c}_1 = c_1 + nl$ and $\tilde{d}_1 = d_1 + \sum_{i=1}^{n} \alpha_i'\alpha_i$.
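As an aside on implementation, the draw of the latent weights from a generalized inverse Gaussian (GIG) distribution can be sketched with SciPy's `geninvgauss`. The parameter values below are purely illustrative, and we assume the three-parameter GIG$(\lambda, \chi, \psi)$ form with density proportional to $x^{\lambda-1}\exp\{-(\chi/x + \psi x)/2\}$:

```python
import numpy as np
from scipy import stats
from scipy.special import kv

def sample_gig(lam, chi, psi, size, rng):
    """Draw from GIG(lam, chi, psi), density proportional to
    x^(lam-1) * exp(-(chi/x + psi*x)/2) for x > 0.  SciPy's geninvgauss
    is parameterized as (p, b, scale) = (lam, sqrt(chi*psi), sqrt(chi/psi))."""
    return stats.geninvgauss.rvs(lam, np.sqrt(chi * psi),
                                 scale=np.sqrt(chi / psi),
                                 size=size, random_state=rng)

rng = np.random.default_rng(2)
lam, chi, psi = 0.5, 1.3, 2.4   # lam = 1/2 as in the weight update; chi, psi assumed
draws = sample_gig(lam, chi, psi, size=100_000, rng=rng)

# Check the sample mean against the known GIG moment
# E[X] = sqrt(chi/psi) * K_{lam+1}(sqrt(chi*psi)) / K_{lam}(sqrt(chi*psi)),
# where K is the modified Bessel function of the second kind.
b = np.sqrt(chi * psi)
true_mean = np.sqrt(chi / psi) * kv(lam + 1, b) / kv(lam, b)
print(draws.mean(), true_mean)
```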
To avoid this issue, we develop an alternative algorithm which jointly samples $(\beta, z)$ in one block within the Gibbs sampler. This blocked approach significantly improves the mixing properties of the Markov chain. The success of these blocking techniques can be found in Liu (1994), Chib and Carlin (1999), and Chib and Jeliazkov (2006). The details of our blocked sampler are described in Algorithm 1; the derivations of the conditional posterior densities are presented in Appendix B. In particular, $\beta$ is sampled marginally of $\alpha$ from a multivariate normal distribution. The latent variable $z_i$ is sampled marginally of $\alpha_i$ from a truncated multivariate normal distribution, denoted $TMVN_{B_i}(\cdot)$, where $B_i$ is the truncation region given by $B_i = (B_{i1} \times \cdots \times B_{iT})$ such that $B_{it}$ is the interval $(0, \infty)$ if $y_{it} = 1$ and the interval $(-\infty, 0]$ if $y_{it} = 0$. To draw from a truncated multivariate normal distribution, we utilize the method proposed in Geweke (1991), which involves drawing from a series of conditional posteriors that are univariate truncated normal distributions. Previous work using this approach includes Chib and Greenberg (1998) and Chib and Carlin (1999). The random effects parameter $\alpha_i$ is sampled conditionally on $(\beta, z_i)$ from another multivariate normal distribution. The variance parameter $\varphi^2$ is sampled from an inverse-gamma distribution, and finally the latent weight $w_{it}$ is sampled element-wise from a generalized inverse Gaussian (GIG) distribution (Dagpunar, 1988, 1989; Devroye, 2014).
We end this section with a cautionary note on sampling from a truncated multivariate normal distribution, with the hope that it will be useful to researchers working on quantile regression. In our algorithm above, we sample from a truncated multivariate normal distribution using a series of conditional posteriors which are univariate truncated normal distributions. This method is distinctly different from, and should not be confused with, sampling from a recursively characterized truncation region typically related to the Geweke-Hajivassiliou-Keane (GHK) estimator (Geweke, 1991; Börsch-Supan and Hajivassiliou, 1993; Keane, 1994; Hajivassiliou and McFadden, 1998). In the latter scenario, the model can be written as $z = \mu + L\eta$, where $L$ is a lower triangular Cholesky factor of $\Sigma$ such that $\Sigma = LL'$. To be general, let the lower and upper truncation vectors for $z$ be $a$ and $b$, respectively. Then the random variable $\eta_t$ is sampled from a standard normal distribution truncated to the interval $\big( \frac{a_t - \mu_t - \sum_{j<t} l_{tj}\eta_j}{l_{tt}}, \; \frac{b_t - \mu_t - \sum_{j<t} l_{tj}\eta_j}{l_{tt}} \big)$, where the $l_{tj}$ are the elements of $L$. This is a recursively characterized truncation region, since the range of $\eta_t$ depends on the draws of $\eta_j$ for $j < t$. The vector $z$ can be obtained by substituting the recursively drawn $\eta$ into $z = \mu + L\eta$. However, the draws so obtained are not the same as drawing from a multivariate normal distribution truncated to the region $(a, b)$. The difference between the two samplers has been exhibited in Breslaw (1994) and carefully discussed in Jeliazkov and Lee (2010).
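For illustration, the univariate-conditional scheme of Geweke (1991) can be sketched as follows; this is our own minimal example with an assumed mean, covariance, and positive-orthant truncation region, not the paper's implementation:

```python
import numpy as np
from scipy.stats import truncnorm

def tmvn_gibbs(mu, Sigma, lower, upper, n_sweeps, rng):
    """Gibbs sampling from N(mu, Sigma) truncated to the box [lower, upper],
    cycling through the univariate truncated-normal full conditionals."""
    T = len(mu)
    z = np.clip(mu.copy(), lower, upper)      # feasible starting point
    draws = np.empty((n_sweeps, T))
    for s in range(n_sweeps):
        for t in range(T):
            idx = [j for j in range(T) if j != t]
            # Conditional moments of z_t given z_{-t}.
            S12 = Sigma[t, idx]
            S22_inv = np.linalg.inv(Sigma[np.ix_(idx, idx)])
            cond_mean = mu[t] + S12 @ S22_inv @ (z[idx] - mu[idx])
            cond_var = Sigma[t, t] - S12 @ S22_inv @ S12
            sd = np.sqrt(cond_var)
            a = (lower[t] - cond_mean) / sd
            b = (upper[t] - cond_mean) / sd
            z[t] = truncnorm.rvs(a, b, loc=cond_mean, scale=sd,
                                 random_state=rng)
        draws[s] = z
    return draws

rng = np.random.default_rng(3)
mu = np.zeros(3)
Sigma = 0.5 * np.eye(3) + 0.5        # equicorrelated covariance
lower = np.zeros(3)                  # positive orthant (e.g., all y_it = 1)
upper = np.full(3, np.inf)
draws = tmvn_gibbs(mu, Sigma, lower, upper, 2000, rng)
print(draws.mean(axis=0))
```

Note that every retained sweep lies in the truncation region, unlike the recursive GHK-style draws discussed above.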
3.2 Simulation Study
This subsection evaluates the performance of the algorithm in a simulation study, where the data are generated from a model that has common effects and individual-specific effects in both the intercept and slopes. We estimate the quantile regression model for binary longitudinal data (QBLD) using our proposed blocked sampler (Algorithm 1) and the non-blocked sampler (Algorithm 2).
The data are simulated from the model $z_{it} = x_{it}'\beta + s_{it}'\alpha_i + \epsilon_{it}$ with $y_{it} = I(z_{it} > 0)$. The covariates are generated from a standard uniform distribution, $U(0, 1)$, the individual-specific effects $\alpha_i$ are drawn from a normal distribution, and the error is generated from a standard AL distribution, $AL(0, 1, p)$, at each of the quantiles considered. The binary response variable $y_{it}$ is constructed by assigning 1 to all positive values of $z_{it}$ and 0 to all negative values of $z_{it}$. Since the values generated from an AL distribution are different at each quantile, the number of 0s and 1s is also different at each of the 25th, 50th, and 75th quantiles.
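A sketch of this data-generating process is below; the sample sizes and parameter values are our own illustrative choices, not those used in the paper:

```python
import numpy as np

rng = np.random.default_rng(4)
n, T, p = 100, 7, 0.25               # illustrative sizes and quantile
beta = np.array([0.5, -1.0, 1.0])    # assumed common effects
phi = 1.0                            # assumed std. dev. of random intercepts

theta = (1 - 2 * p) / (p * (1 - p))
tau = np.sqrt(2 / (p * (1 - p)))

# Covariates: intercept plus two U(0,1) regressors; random intercept in s_it.
X = np.dstack([np.ones((n, T)),
               rng.uniform(size=(n, T)),
               rng.uniform(size=(n, T))])      # n x T x k array
alpha = rng.normal(0.0, phi, size=n)           # individual-specific intercepts

# AL(0, 1, p) errors via the normal-exponential mixture.
w = rng.exponential(size=(n, T))
eps = theta * w + tau * np.sqrt(w) * rng.standard_normal((n, T))

# Latent utilities and observed binary outcomes.
z = np.einsum('itk,k->it', X, beta) + alpha[:, None] + eps
y = (z > 0).astype(int)
print(y.mean())
```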
Table 1: Posterior means (mean), standard deviations (std), and inefficiency factors (if) of the parameters in the simulation study from the QBLD model, at the 25th, 50th, and 75th quantiles. The first panel presents results from Algorithm 1 and the second panel presents results from Algorithm 2.
The posterior estimates of the model parameters are based on the generated data and independent prior distributions on $\beta$ and $\varphi^2$ of the form given in (6). Table 1 reports the posterior means, standard deviations, and inefficiency factors calculated from the retained MCMC iterations after a burn-in period. The inefficiency factors are calculated using the batch-means method discussed in Greenberg (2012). The simulation exercise was repeated for various covariates, sample sizes, and common and individual-specific parameters, and the results do not change from this baseline case; hence they are not presented.
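The batch-means calculation of the inefficiency factor can be sketched as follows; this is our own implementation of the standard recipe, checked on an independent series (factor near 1) and a persistent AR(1) series (factor well above 1):

```python
import numpy as np

def inefficiency_factor(draws, batch_size=50):
    """Batch-means inefficiency factor: the ratio of the batch-means
    estimate of the long-run variance to the naive i.i.d. variance.
    Values near 1 indicate i.i.d.-like mixing; larger values indicate
    autocorrelated draws."""
    G = len(draws) // batch_size
    batches = draws[:G * batch_size].reshape(G, batch_size).mean(axis=1)
    var_bm = batch_size * batches.var(ddof=1)   # long-run variance estimate
    return var_bm / draws.var(ddof=1)

rng = np.random.default_rng(5)
iid = rng.standard_normal(20_000)

# AR(1) chain with coefficient 0.9: its asymptotic inefficiency factor is
# (1 + 0.9)/(1 - 0.9) = 19; the batch-means estimate sits somewhat below
# this for finite batch sizes.
ar = np.empty(20_000)
ar[0] = 0.0
for t in range(1, len(ar)):
    ar[t] = 0.9 * ar[t - 1] + rng.standard_normal()

print(inefficiency_factor(iid), inefficiency_factor(ar))
```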
The posterior means of the regression coefficients for both samplers (blocked and non-blocked) are near the true values, and the standard deviations are small. Across each quantile, the number of 0s and 1s varies, and the samplers perform well in each case. Furthermore, starting the algorithms at different values appears inconsequential, which is a benefit of the full Gibbs sampler.
Turning attention to the differences between the two algorithms, it is clear that the inefficiency factors from the blocked algorithm are much lower, suggesting better sampling performance and good mixing of the Markov chain. The advantages of the blocking procedure are more apparent from the autocorrelation in the MCMC draws at different lags. Table 2 presents the autocorrelations in the MCMC draws at lag 1, lag 5, and lag 10. Looking at lag 10, the autocorrelations for the $\beta$'s in the blocked algorithm are nearly half of those obtained from the non-blocked sampler. Recall that in our data generating process, we did not make the covariates in $s_{it}$ a subset of those in $x_{it}$, whereas in real-data exercises it is typical for $s_{it}$ to be a subset. Therefore, we expect the benefits of the blocked sampler to be even more pronounced in real data settings.
Table 2: Autocorrelations in the MCMC draws at lags 1, 5, and 10 for the 25th, 50th, and 75th quantiles. The first panel presents results from the blocked algorithm (Algorithm 1) and the second panel presents results from the non-blocked algorithm (Algorithm 2).
Finally, Figure 2 presents the trace plots of the parameters at the 25th quantile for the blocked algorithm, which graphically demonstrate the good mixing of the chain. Given the computational efficiency of the blocking procedure, it is our preferred method for estimating QBLD models and is used in the subsequent real-data applications.
3.3 Additional Considerations
In this section, we briefly discuss methods for model comparison and computation of covariate effects. For model comparison, we follow standard techniques for longitudinal data models. Specifically, in the application sections we provide the log-likelihood, conditional AIC (Greven and Kneib, 2010), and conditional BIC (Delattre et al., 2014). This is a bit unusual for a Bayesian analysis; however, we want the results in our empirical applications to align with the classical work on these topics, such as Bartolucci and Farcomeni (2012). Thus, we follow these approaches so as to allow for better comparisons and cross-references.
For covariate effects, in general terms, we are interested in the average difference in the implied probabilities between the case when a covariate of interest $x_j$ is set to the value $x_j^{\dagger}$ and when it is set to the value $x_j^{\ddagger}$. Given the values of the other covariates, denoted $x_{-j}$, and those of the model parameters $(\beta, \alpha)$, one can obtain the probabilities $\Pr(y = 1 \mid x_j^{\dagger}, x_{-j}, \beta, \alpha)$ and $\Pr(y = 1 \mid x_j^{\ddagger}, x_{-j}, \beta, \alpha)$. Following Jeliazkov et al. (2008) and Jeliazkov and Vossmeyer (2018), if one is interested in the distribution of the difference marginalized over $x_{-j}$ and the parameters given the data $y$, a practical procedure is to marginalize out the covariates using their empirical distribution, while the parameters are integrated out with respect to their posterior distribution. Formally, the goal is to obtain a sample of draws from the distribution of the difference in success probabilities, marginalized in this way.
The computation of these probabilities is straightforward because the difference between the probabilities of success is related to differences in the AL cdf, marginalized over $x_{-j}$ and the posterior distribution of the parameters. The procedure also handles uncertainty stemming from the sample and the estimation strategy. This approach is demonstrated in each of the following applications.
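A sketch of this computation is below, with mock covariates and mock "posterior" draws standing in for actual model output (all values illustrative). It uses the AL cdf to convert the latent index into success probabilities:

```python
import numpy as np

def al_cdf(x, mu, sigma, p):
    """CDF of the asymmetric Laplace distribution AL(mu, sigma, p)."""
    u = (np.asarray(x) - mu) / sigma
    return np.where(u <= 0,
                    p * np.exp((1 - p) * u),
                    1 - (1 - p) * np.exp(-p * u))

def prob_success(xb, p):
    """P(y = 1) = P(z > 0) when the latent z ~ AL(xb, 1, p)."""
    return 1.0 - al_cdf(0.0, xb, 1.0, p)

rng = np.random.default_rng(6)
p = 0.5
n_obs, n_draws = 500, 1000

# Mock empirical covariates and mock posterior draws (illustrative only):
# column 0 of beta is the coefficient on the covariate being switched.
x_other = rng.uniform(size=n_obs)
beta_draws = rng.normal(loc=[0.5, 1.0], scale=0.05, size=(n_draws, 2))

# Covariate effect: x_j switched from 0 to 1, averaged over the empirical
# covariate distribution and the posterior draws.
xb0 = np.outer(beta_draws[:, 1], x_other)      # latent index with x_j = 0
xb1 = beta_draws[:, [0]] + xb0                 # latent index with x_j = 1
effect = (prob_success(xb1, p) - prob_success(xb0, p)).mean()
print(effect)
```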
4 Applications
4.1 Female Labor Force Participation
Modeling female labor force participation has been an important area of work in the economics and econometric literature for decades. The list of work is vast, but a partial list includes Heckman and Macurdy (1980), Heckman and Macurdy (1982), Mroz (1987), Hyslop (1999), Arellano and Carrasco (2003), Chib and Jeliazkov (2006), Kordas (2006), Carro (2007), Bartolucci and Nigro (2010), and Eckstein and Lifshitz (2011).
Within the literature, several pertinent questions have been analyzed, including the relationship between participation and age, education, fertility, and permanent and transitory incomes. However, serial persistence in the decision to participate and its two competing theories, heterogeneity and state dependence, have been of substantive interest. Heterogeneity implies that females may differ in terms of certain unmeasured variables that affect their probability of labor force participation. If heterogeneity is not properly controlled for, then past decisions may appear significant to current decisions, leading to what is called spurious state dependence. In contrast, pure state dependence implies that dynamic effects of past participation genuinely affect current employment decisions. Consideration of heterogeneity and state dependence is important in modeling female labor force participation and can have economic implications, as discussed in Heckman (1981a), Heckman (1981b), and Hsiao (2014, pp. 261-270). We re-examine the above-mentioned aspects using our proposed Bayesian quantile regression model for binary longitudinal data. To our knowledge, this is the first attempt to analyze female labor force participation within a longitudinal quantile framework. So, what can we learn from a panel quantile approach? Of particular interest are the impacts of infants and children across the various quantiles. Understanding the differential effects across the latent utility scale can help shape female labor force policies, such as maternity leave and child care.
Before proceeding forward, we draw attention to Kordas (2006) who evaluated female labor force participation using cross sectional data and smoothed binary regression quantiles. His results offer interesting insights across the quantiles, which further motivate our application and extension to transitions into and out of the labor force in the panel setting. We also follow his interpretation where the latent utility differential between working and not working may be interpreted as a “propensity” or “willingness-to-participate” (WTP) index.
Table 3: Summary statistics for the full sample and for subgroups: employed all 7 years, employed 0 years, single transition from work, single transition to work, and multiple transitions.
The data for this study are taken from Bartolucci and Farcomeni (2012), which were originally extracted from the Panel Study of Income Dynamics (PSID) conducted by the University of Michigan. The data consist of a sample of females who were followed for the period 1987 to 1993 with respect to their employment status and a host of demographic and socio-economic variables. The dependent variable in the model is employment status (1 if the individual is employed, 0 otherwise) and the covariates include age (in 1986), education (number of years of schooling), child 1-2 (number of children aged 1 to 2, referred to the previous year), child 3-5, child 6-13, child 14-, Black (indicator for Black race), income of the husband (in US dollars, referred to the previous year), and fertility (indicator for the birth of a child in a given year). Lagged employment status is also included as a covariate to examine state dependence in the female labor force participation decision.
Table 3 presents summary statistics for the variables. The presentation of the table follows from Hyslop (1999), where statistics are broken up into subgroups of women that have worked 0 years, 7 years, or transitioned during the period. As one can see from the table, the average age in the sample is roughly 30, about 40% of the sample is employed throughout the entire period, 10% are not in the labor force throughout the entire period, 20% transition into or out of the labor force once, and 30% transition multiple times. Looking closely at the different variables for children, there is a decent amount of variation across the subgroups. For mothers who are employed 0 years, the average values for child 1-2 and child 3-5 are 0.46 and 0.56, respectively. These numbers are more than double compared to that of mothers who are employed for all the 7 years. Further, as children age (child 6-13) more mothers have a single transition to work. While these differences demonstrate some observed heterogeneity, unobserved heterogeneity still plays a role, which motivates further analysis. Particularly, a quantile setting will reveal information not available in the raw observed data by utilizing the latent scale as the willingness-to-participate index.
The data are modeled following equations (4) and (5), and the QBLD model is specified with a random intercept (i.e., $s_{it}$ only includes a constant). We also estimate the probit model for binary longitudinal data (PBLD) using the algorithm presented in Koop et al. (2007) and Greenberg (2012) and identical priors for the relevant parameters. The results for the QBLD and PBLD models are presented in Table 4 and are based on data for the years 1988-1993, since using a lagged dependent variable drops information for the year 1987. The reported posterior estimates are based on 12,000 MCMC draws after a burn-in of 3,000 draws, with prior distributions specified on $\beta$ and $\varphi^2$. Table 4 presents the posterior means, standard deviations, and inefficiency factors at the 25th, 50th, and 75th quantiles, and for the binary probit model. Furthermore, the log-likelihood, conditional AIC (Greven and Kneib, 2010), and conditional BIC (Delattre et al., 2014) are reported for each model.
Table 4: Posterior means, standard deviations, and inefficiency factors from the QBLD model at the 25th, 50th, and 75th quantiles, and from the PBLD model. Demeaned covariates denote the variable minus the sample average.
First, note that across the quantiles the inefficiency factors are low, implying a nice mixing of the Markov chain. These results, which were demonstrated in the simulation study, hold in empirical applications as well. Next, if we consider each quantile as corresponding to a different likelihood, then the 25th quantile has the lowest conditional AIC and conditional BIC. This result is not surprising since the unconditional probability of participation is around 70% in the sample. Our result also finds support in Kordas (2006), where he reports that the 30th conditional quantile would be the one most efficiently estimable.
The results for the education variable are positive, statistically different from zero, and show various incremental differences across the quantiles. Education is found to have stronger effects in the upper part of the latent index, which is expected since these are women who have a high utility for working and thus have obtained the requisite education. Regarding the state dependence versus heterogeneity debate, we find that employment is serially positively correlated, which is a consequence of state dependence. The effect gets incrementally larger as one moves up the latent utility scale. While we are controlling for individual heterogeneity with the random intercept, we still find evidence of state dependence. This result agrees with Bartolucci and Farcomeni (2012), who investigate the question with a latent class model. Other papers that find empirical evidence of strong state dependence effects include Heckman (1981a), Hyslop (1999), and Chib and Jeliazkov (2006).
To further understand the results, covariate effects are computed for several variables at the three quantiles and for the PBLD model. The calculations follow Section 3.3 and the results are displayed in Table 5. Note that the 50th quantile results are similar to those of the PBLD, as expected. The covariate effect for education is calculated on the restricted sample of individuals with a high school degree (12 years of schooling), and the computed effect corresponds to 4 additional years of schooling, i.e., a college degree. The effect for income is a discrete change of $10,000, the effect for children is an increase in the count by one, and the effect for fertility is a discrete change in the indicator variable.
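The mechanics of such a covariate effect can be sketched as follows, assuming the binary quantile model uses the asymmetric Laplace CDF as its link (the standard choice in this literature); the data, coefficient values, and function names below are hypothetical, and the sketch ignores the random effects, which the full calculation in Section 3.3 would average over.

```python
import numpy as np

def ald_cdf(z, p):
    """CDF of a standard asymmetric Laplace distribution at quantile level p."""
    z = np.asarray(z, dtype=float)
    return np.where(z <= 0.0,
                    p * np.exp((1.0 - p) * z),
                    1.0 - (1.0 - p) * np.exp(-p * z))

def avg_covariate_effect(X0, X1, beta, p):
    """Average change in P(y = 1) when covariates move from X0 to X1,
    holding the coefficient vector fixed (sketch only)."""
    return np.mean(ald_cdf(X1 @ beta, p) - ald_cdf(X0 @ beta, p))

# Hypothetical example: discrete change in a binary "fertility" indicator.
rng = np.random.default_rng(0)
n = 500
X0 = np.column_stack([np.ones(n), rng.normal(size=(n, 1)), np.zeros(n)])
X1 = X0.copy()
X1[:, -1] = 1.0                      # switch the indicator on
beta = np.array([0.5, 0.3, -0.8])    # hypothetical coefficients at p = 0.25
print(avg_covariate_effect(X0, X1, beta, p=0.25))  # negative effect
```

Because the ALD CDF is strictly increasing, a negative coefficient on the indicator guarantees a negative average effect; the magnitude, as in Table 5, varies with the quantile level p.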
The results show that the birth of a child in that year (fertility) reduces the probability that a woman works by 16.7 percentage points at the 25th quantile, 17.4 percentage points at the 50th quantile, and 13.3 percentage points at the 75th quantile. For individuals in the lower part of the latent index, having children ages 1–2 affects the employment decision less than it does at the upper quantiles. Perhaps women with a low utility for working are less affected by infants and toddlers because they often wish to stay home with the child for a few years, whereas women with a high utility for working face larger negative impacts because of their desire to enter the work force.
The most pronounced negative effect of children occurs when the child is ages 3–5. Women often exit the work force temporarily until children are ready for pre-school, and this result provides evidence of the difficulty mothers face re-entering the work force after a leave of absence of several years (Drange and Rege, 2013). The finding is interesting from a policy standpoint: if policy is focused on increasing participation, offering more support in the years when the child is likely no longer breastfeeding but not yet in kindergarten would be beneficial.
The covariate effect of a college degree ranges from 5.2 to 7.1 percentage points across the quantiles, while the effect of husband’s income is approximately −1 percentage point across the quantiles. Thus, a college degree increases the probability that a woman works by about 6 percentage points, whereas an increase in family income decreases the probability by only 1 percentage point for every $10,000. While many of these results align with existing findings, the behavior at the high and low quantiles provides useful information that was otherwise unexplored in panel data.
4.2 Home Ownership
The recent financial crisis had major implications for home ownership in the United States. Figure 3 displays the home ownership rates in the United States from the 1960s to 2017; the data are taken from the FRED website provided by the Federal Reserve Bank of St. Louis. The rate of home ownership rose in the late 1990s and early 2000s but started to decline after 2007. The determinants of home ownership were examined in the 1970s (Carliner, 1974; Poirier, 1977), and the recent crisis, a unique shock to housing markets, offers an opportunity to reevaluate the topic.
The literature on home ownership has examined racial gaps (Charles and Hurst, 2002; Turner and Smith, 2009), wealth accumulation and income (Turner and Luea, 2009), mobility and the labor market (Ferreira et al., 2010; Fairlie, 2013), and tax policy (Hilber and Turner, 2014). However, unlike the labor force context, state dependence has only been lightly examined with regard to housing tenure. (Chen and Ost (2005) control for state dependence in a study of housing allowances in Sweden.) Given the large down payments and extensive mortgage processes typical of home ownership, state dependence is likely to be a key factor, as is individual heterogeneity.
Furthermore, quantile analyses are lacking in the home ownership literature. The quantiles represent degrees of willingness, or utility, of owning a home. Owning a home in the United States usually requires a large upfront investment, a promising credit history, and a willingness to take on a 30-year mortgage, resulting in less liquidity. Given these requirements, interest lies in how the determinants of home ownership vary across the latent utility scale. This paper therefore adds to the literature on the probability of home ownership by employing the QBLD model. The approach has several advantages: we can control for multivariate heterogeneity, revisit the state dependence versus heterogeneity argument in the housing context, and analyze willingness of home ownership across the quantiles.
The dataset is constructed from the Panel Study of Income Dynamics (PSID) and consists of a balanced panel of 4092 individuals observed for the years 2001, 2003, 2005, 2007, 2009, 2011, and 2013. The sample is restricted to individuals aged 25-65 who answered the relevant questions for the 7 years and captures the period before, during, and after the Great Recession. The dependent variable is defined as follows:
$$y_{it} = \begin{cases} 1 & \text{if individual } i \text{ owns a home at time } t,\\ 0 & \text{otherwise,} \end{cases}$$
for $i = 1, \dots, 4092$ and $t = 2, \dots, 7$ (2001 is dropped because the model is dynamic). The covariates include demographics, marital status, employment, job industry, health insurance, education, socioeconomic status, lagged home ownership, and an indicator for the post-recession years (2009–2013). The model includes a random intercept and a random slope on an income-to-needs variable, which allows for individual heterogeneity and heterogeneity in income. Heterogeneity in income is an important control because a marginal increase in income could have a wide range of effects on the probability of owning a home: for some, the effect could be 0 (perhaps those who own their home outright, or those who have no desire for ownership), whereas for others, increases in income could translate directly into home ownership utility. Table LABEL:Table:HomeOwnDataSummary presents summary statistics for the variables. Once again, the presentation of the table follows Hyslop (1999), where statistics are broken into subgroups of people who have always been home owners, have never been home owners, or transitioned during the period of interest.
In the sample, about 56% of individuals own a home across the entire sample period, 18% never own, and the remainder transition at least once. The age of the head of the household is measured in 2003. Job industry is classified into four categories. JobCat1 is an indicator for jobs in construction, manufacturing, agriculture, and wholesale. JobCat2 is an indicator for jobs in business, finance, and real estate. JobCat3 is an indicator for jobs in the military and public services. The omitted category (JobCat4) consists of jobs in professional and technical services, entertainment and arts services, health care, and other. Education is broken into three categories: less than high school (omitted), high school degree or some college (Below Bachelors), and college or advanced degree (Bachelors & Above). Race is broken into white/asian (omitted), black, and other. Marital status is discretized into married, single, and divorced/widowed (omitted). Region is discretized into west, south, northeast, and midwest (omitted). We have two income measures: the income-to-needs ratio and net wealth. (This measure of net wealth excludes home equity and housing assets, so as not to conflate it with the outcome of interest.) We employ an inverse hyperbolic sine (IHS) transformation for net wealth because it adjusts for skewness and retains negative and zero values, a common feature of data on net wealth (Friedline et al., 2015).
The table shows some drastic differences across the subgroups. As expected, the “owned 6 years” group is older and wealthier than the others. Families that transition tend to have more children, and higher proportions of females and singles are in the “owned 0 years” group. These differences in the raw data motivate our question of interest: with so much state dependence in home ownership, and heterogeneity among individuals and in income, what are the determinants of home ownership through an economic downturn? The results should provide insights into discrepancies across subgroups of the population and should better inform policy aiming to assist home owners during downturns. Standard methods for investigating a binary panel dataset of this sort do not capture the extensive heterogeneity, nor do they offer quantile analyses, which highlights the usefulness of our approach.
The results for the home ownership application are presented in Table 6. Posterior means, standard deviations, and inefficiency factors calculated using the batch-means method are presented for the 25th, 50th, and 75th quantiles, as well as for the binary longitudinal probit model (PBLD). The results are based on 12,000 MCMC draws with a burn-in of 3,000 draws. The priors on the parameters are: , and . As in the female labor force application, the inefficiency factors are low, implying good mixing of the Markov chain.
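A batch-means inefficiency factor of the kind reported in Table 6 can be sketched as follows; the batch size and the simulated chains are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def inefficiency_factor(draws, batch_size=50):
    """Batch-means inefficiency factor of a scalar MCMC chain (sketch).
    Ratio of the long-run variance (estimated from batch means) to the
    naive variance; values near 1 indicate nearly i.i.d.-quality draws."""
    draws = np.asarray(draws, dtype=float)
    n_batches = len(draws) // batch_size
    trimmed = draws[:n_batches * batch_size]
    batch_means = trimmed.reshape(n_batches, batch_size).mean(axis=1)
    # Long-run variance estimate from the spread of the batch means.
    sigma2_bm = batch_size * batch_means.var(ddof=1)
    return sigma2_bm / trimmed.var(ddof=1)

# An i.i.d. chain should give a factor close to 1,
# while a persistent AR(1) chain gives a factor well above 1.
rng = np.random.default_rng(1)
iid = rng.normal(size=12_000)
ar1 = np.empty(12_000)
ar1[0] = 0.0
for t in range(1, len(ar1)):
    ar1[t] = 0.9 * ar1[t - 1] + rng.normal()
print(inefficiency_factor(iid), inefficiency_factor(ar1))
```

Low factors, as found here, mean the effective sample size is close to the nominal 12,000 retained draws.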
[Table 6: posterior results at the 25th, 50th, and 75th quantiles and for PBLD; covariates include log Age of Head and IHS Net Wealth.]