1 Introduction
The evaluation of university courses, which is essential for quality assurance, is typically based on student ratings. A large body of literature studies the factors associated with the expressed evaluations, such as the characteristics of the student, the teacher, and the course (Spooren et al., 2013). It is widely recognized that teaching quality is a key determinant of student satisfaction, even if observed teacher characteristics often reveal weak effects (Hanushek and Rivkin, 2006). It is therefore helpful to gather further information about teacher practices and beliefs through specific surveys involving the teachers themselves (Goe et al., 2008). In this vein, the PRODID project, launched in 2013 by the University of Padua (Dalla Zuanna et al., 2016), is a valuable source, as it implemented a new CAWI survey addressed to teachers to collect information on their practices and beliefs about teaching.
We aim to analyse the relationship between student ratings and teacher practices and beliefs, controlling for the available characteristics of students, teachers and courses. Given the hierarchical structure of ratings nested within teachers, we exploit multilevel modelling (Goldstein, 2010; Rampichini et al., 2004). Teacher practices and beliefs from the PRODID survey enter the model as level 2 predictors, but they are missing for nearly half of the teachers due to nonresponse. Thus, the multilevel analysis must face a serious issue of missing data at level 2, an issue that is receiving increasing attention in the literature (Grund et al., 2018). In addition, modelling the effects of teacher practices and beliefs is complicated, since they are measured by a wide set of binary and ordinal items, calling for suitable model selection techniques. The case study therefore raises the methodological challenge of selecting level 2 predictors affected by missing values. We handle the missing values through multiple imputation by chained equations, exploiting information at both level 1 and level 2 (Grund et al., 2017; Mistler and Enders, 2017). For the selection of predictors, we consider regularization techniques for ordinal predictors (Gertheiss and Tutz, 2010) and propose a strategy that combines the selection of predictors with the imputation of their missing values.
The rest of the paper is organized as follows. Section 2 describes the data structure and the model specification. Section 3 outlines the imputation procedure to handle missing data at level 2, then Section 4 presents the regularization method chosen to deal with ordinal predictors. Section 5 outlines the proposed strategy to combine imputation and model selection, while Section 6 illustrates the application of the strategy to the case study, reporting the main results. Section 7 concludes with some remarks and directions for future work.
2 Data description and model specification
As anticipated in Section 1, we wish to analyse the relationship between student ratings and teacher practices and beliefs, controlling for the available characteristics of the student, the teacher, and the course. To this end, we exploit a dataset of the University of Padua for academic year 2012/13, obtained by merging three sources: (i) the traditional course evaluation survey with items on a scale from 1 to 10, where 10 is the maximum; (ii) administrative data on students, teachers, and courses; (iii) the innovative PRODID survey collecting information on teacher practices and beliefs (Dalla Zuanna et al., 2016).
Data have a two-level hierarchical structure, with student ratings at level 1 and teachers at level 2. Group sizes range from 5 to 442 ratings per teacher.
We investigate student opinion about the teacher's ability to motivate students, which is one of the items of the course evaluation questionnaire (https://www.unipd.it/opinionestudentisulleattivitadidattiche). The analysis is based on the following two-level linear model for the rating expressed by student $i$ on teacher $j$:

$$y_{ij} = \mathbf{x}_{ij}'\boldsymbol{\beta} + \mathbf{w}_{j}'\boldsymbol{\gamma} + \mathbf{z}_{j}'\boldsymbol{\delta} + u_j + e_{ij} \qquad (1)$$

where $\mathbf{x}_{ij}$ is the vector of level 1 covariates (student characteristics) including the constant, $\mathbf{w}_{j}$ is the vector of fully observed level 2 covariates (administrative data on teachers and courses), and $\mathbf{z}_{j}$ is the vector of partially observed level 2 covariates (teacher practices and beliefs). Model errors are assumed independent across levels with the standard distributional assumptions, namely $u_j \sim N(0,\sigma^2_u)$ and $e_{ij} \sim N(0,\sigma^2_e)$.

The survey on teacher beliefs and practices has about fifty percent missing questionnaires, posing a serious issue of missing data at level 2. An analysis based on listwise deletion would discard the entire set of student ratings for non-responding teachers, causing two main problems: (i) a dramatic reduction of the sample size, and thus of statistical power, and (ii) possibly biased estimates if the missing data mechanism is not MCAR. To overcome these issues, we impute missing values by means of multiple imputation, which allows us to retain all the observations and to perform the analysis under the more plausible MAR assumption (Seaman et al., 2013).

3 Handling missing data at level 2
In multilevel models, the treatment of missing data requires special techniques since missing values can occur at any level of the hierarchy. Furthermore, missing values can alter variance components and correlations.
Multiple imputation (MI) is a flexible approach to handling missing data while taking into account the uncertainty deriving from the imputation procedure. MI is carried out in two steps: (i) generate several imputed data sets according to a suitable imputation model; (ii) fit the substantive model on each imputed data set and combine the results using Rubin's rules (Little and Rubin, 2002). The two main approaches to implement MI are Joint Modelling (JM) and fully conditional specification, also known as Multivariate Imputation by Chained Equations (MICE); see van Buuren (2018) for a comprehensive treatment, and Mistler and Enders (2017) and Grund et al. (2017)
for a comparison of these approaches in multilevel settings. In the JM approach, data are assumed to follow a joint multivariate distribution and imputations are generated as draws from the fitted distribution. In the MICE approach, missing data are replaced by iteratively drawing from the fitted conditional distributions of partially observed variables, given the observed and imputed values of the remaining variables in the imputation model. In our case missing data are only at level 2, so we can apply MI techniques to the level 2 data set and then merge level 1 and level 2 data sets. According to the literature on MI in multilevel settings, the imputation model used to simulate missing information at level 2 should include level 2 covariates, the cluster size, and proper summaries of level 1 variables (covariates and response variable). Several strategies may be adopted to summarize level 1 variables: if all the variables are normal, the sample cluster mean is the optimal choice
(Carpenter and Kenward, 2013). For the imputation of categorical variables there are no theoretical results, but simulation studies show that the sample cluster mean is a good compromise between accuracy and computational speed
(Erler et al., 2016; Grund et al., 2018). Therefore, we summarise level 1 variables through the cluster mean, which is easy to implement in our case since level 1 variables are completely observed.

In our case the imputation step is challenging, since we have to impute many categorical variables: about 50% of the teachers did not respond to the questionnaire at all, producing missing values on binary items (teacher practices) and ordinal items (teacher beliefs on a 7-point scale). The JM approach is computationally demanding with many ordinal items, so we rely on the MICE approach, performing imputations with the mi impute chained command of Stata (Stata Corp., 2017). The imputation model is composed of binary logit models for the binary items (teacher practices) and cumulative logit models for the ordinal items (teacher beliefs). The imputation model includes the following fully observed covariates: teacher characteristics, course characteristics (including the number of ratings), and the cluster means of the ratings for all questions of the course evaluation questionnaire, including the response variable. The inclusion of mean ratings increases the plausibility of the MAR assumption.

4 Selecting ordinal predictors with regularization techniques
The PRODID questionnaire measures teacher practices with 10 binary items and teacher beliefs with 20 ordinal items on a 7-point Likert scale. Such items contain information on a few dimensions of teaching that in principle could be summarized using latent variable models for ordinal items (Bartholomew et al., 2011). However, about 50% of the teachers did not respond to the questionnaire, so applying latent variable methods to the complete cases can lead to biased results. On the other hand, fitting latent variable models on imputed data sets raises two main problems: (i) how to combine the results in order to identify the latent dimensions and assign the corresponding scores to the teachers, and (ii) how to take into account the variability of the predicted scores in the main model. The literature on factor analysis in the presence of missing responses is growing (Lorenzo-Seva et al., 2016; Nassiri et al., 2018), but the issue is still controversial, so we prefer to directly use the imputed PRODID items as covariates in the main model and select them by applying model selection techniques.
The imputation method outlined in Section 3 preserves the 7-point scale of the ordinal items. A simple way to specify the effect of an ordinal predictor in a regression model is to treat the category codes as scores and include a single regression coefficient. This specification imposes a strong linearity assumption, which may be relaxed by dummy coding, where each category except the reference one is represented by an indicator variable. For example, when model (1) is fitted with dummy coding for the ordinal items (first category as reference), some items do not show a linear effect (Figure 1).
This example makes clear that, in our case, it is advisable to begin with dummy coding for all items and to apply data-driven methods to choose a suitable specification for each item. However, dummy coding yields a flexible but not parsimonious model, since it entails estimating a large number of parameters for the ordinal predictors. Therefore, we need regularization methods allowing us to retain the flexibility of the dummy coding specification while ensuring model parsimony.
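The point can be made concrete with a small simulation (illustrative data, not the PRODID items): an outcome driven by a step-shaped effect of a 7-point ordinal predictor is fitted under both specifications, and only dummy coding captures the non-linearity.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# hypothetical 7-point ordinal item with a step-shaped (non-linear) effect:
# categories 1-4 share one mean, categories 5-7 another
item = rng.integers(1, 8, size=n)
y = 5.0 + np.where(item <= 4, 0.0, 1.0) + rng.normal(scale=0.5, size=n)

# (a) linear-score specification: a single slope on the raw category codes
X_lin = np.column_stack([np.ones(n), item.astype(float)])
_, res_lin, *_ = np.linalg.lstsq(X_lin, y, rcond=None)

# (b) dummy coding: one indicator per category, first category as reference
X_dum = np.column_stack([np.ones(n)] + [(item == k).astype(float) for k in range(2, 8)])
_, res_dum, *_ = np.linalg.lstsq(X_dum, y, rcond=None)

# dummy coding nests the linear specification, so its fit is never worse;
# with a step-shaped true effect it is strictly better
print(res_lin[0], res_dum[0])
```

Dummy coding buys this flexibility at the cost of six parameters per 7-point item, which is what motivates the regularized selection considered next.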
Regularization methods for ordinal predictors (Gertheiss and Tutz, 2010; Tutz and Gertheiss, 2016) have a twofold aim: (i) investigating which variables should be included in the model; (ii) investigating which categories of an ordinal predictor should be distinguished and which can be collapsed. In the presence of ordinal predictors, the $l$-th having $K_l$ categories, Gertheiss and Tutz (2010) suggest implementing the lasso with the following penalty term:

$$J(\boldsymbol{\delta}) = \sum_{l} \sum_{k=2}^{K_l} w_{l,k}\, |\beta_{l,k} - \beta_{l,k-1}| \qquad (2)$$

where $\beta_{l,k}$ is the coefficient of the dummy variable identifying the $k$-th category of the $l$-th predictor (with $\beta_{l,1}=0$ for the baseline category) and the $w_{l,k}$ are weights allowing for the adaptive lasso. This approach can be applied to select all items of the PRODID questionnaire, including both ordinal and binary items, since a binary predictor is just an ordinal predictor with $K_l = 2$.

In order to exploit existing software for regularization, we use backward difference coding, also known as split coding (Walter et al., 1987; Gertheiss and Tutz, 2010). This coding reparameterises the model in equation (1) in terms of new parameters for the ordinal predictors:

$$\delta_{l,k} = \beta_{l,k} - \beta_{l,k-1}, \qquad k = 2, \dots, K_l, \qquad (3)$$

and, in turn, allows the estimation of model parameters by means of a standard lasso-type optimisation. The original dummy coefficients are simply obtained by back-transforming the $\delta_{l,k}$; that is, $\beta_{l,k} = \sum_{m=2}^{k} \delta_{l,m}$. Note that split coding does not affect binary items, so that for such items $\delta_{l,2} = \beta_{l,2}$.
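A minimal numeric sketch of split coding and its back-transform (the step sizes below are made up for illustration): the split design marks whether an observation is at or above each category, so each coefficient is the increment between adjacent categories, and dummy coefficients are recovered as cumulative sums.

```python
import numpy as np

K = 7  # a 7-point ordinal item
rng = np.random.default_rng(1)
item = rng.integers(1, K + 1, size=12)

# standard dummy coding: indicator of each category above the baseline
dummies = np.array([[float(x == k) for k in range(2, K + 1)] for x in item])

# split (backward-difference) coding: indicator of being AT OR ABOVE category k
split = np.array([[float(x >= k) for k in range(2, K + 1)] for x in item])

# illustrative increments delta_k = beta_k - beta_{k-1}; zero increments mean
# that adjacent categories are collapsed
delta = np.array([0.3, 0.0, 0.0, 0.5, 0.0, 0.1])
beta = np.cumsum(delta)  # back-transform to dummy coefficients

# both codings represent the same effect of the item on the response
assert np.allclose(dummies @ beta, split @ delta)
print(beta)  # -> [0.3 0.3 0.3 0.8 0.8 0.9]: categories 2-4 collapse, as do 5-6
```

Penalizing the absolute values of the `delta` coefficients is thus equivalent to penalizing the adjacent-category differences in equation (2).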
The weights in equation (2) are chosen as suggested by Zou (2006), yielding an adaptive lasso procedure for parameter estimation with the following penalty term:

$$J(\boldsymbol{\delta}) = \sum_{l} \sum_{k=2}^{K_l} \frac{|\delta_{l,k}|}{|\hat{\delta}_{l,k}^{\,OLS}|} \qquad (4)$$

with $\hat{\delta}_{l,k}^{\,OLS}$ denoting the Ordinary Least Squares estimate of the corresponding model parameter.

In practice, we used the command lasso2 included in the lassopack module of Stata (Ahrens et al., 2018) for parameter estimation. In the following, we outline the regularization algorithm as implemented in this procedure, which relies on Belloni et al. (2012). In particular, the regularization procedure of lasso2 minimizes the following penalized criterion:
$$\frac{1}{N}\, RSS(\boldsymbol{\beta}, \boldsymbol{\gamma}, \boldsymbol{\delta}) + \frac{\lambda}{N}\, J(\boldsymbol{\delta}) \qquad (5)$$

where $N$ is the sample size, $\boldsymbol{\beta}$, $\boldsymbol{\gamma}$ and $\boldsymbol{\delta}$ denote the model parameters, $RSS(\cdot)$ is the residual sum of squares corresponding to model (1) (after split-coding the variables), $\lambda$ is the overall penalty parameter, and $J(\boldsymbol{\delta})$ is the penalty term in equation (4). To minimize the objective function (5), lasso2 exploits a coordinate descent algorithm (Fu, 1998).
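To show the mechanics of criterion (5), here is a small re-implementation of a weighted lasso solved by coordinate descent with soft-thresholding; this is an illustrative sketch on simulated data, not the actual lasso2 code.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def weighted_lasso_cd(X, y, lam, w, n_iter=200):
    """Minimize (1/N)||y - Xb||^2 + (lam/N) * sum_j w_j |b_j| by coordinate descent."""
    N, p = X.shape
    b = np.zeros(p)
    col_ss = (X ** 2).sum(axis=0)
    r = y.copy()                       # residual for b = 0
    for _ in range(n_iter):
        for j in range(p):
            r += X[:, j] * b[j]        # partial residual excluding predictor j
            b[j] = soft_threshold(X[:, j] @ r, lam * w[j] / 2.0) / col_ss[j]
            r -= X[:, j] * b[j]
    return b

rng = np.random.default_rng(2)
N = 200
X = rng.normal(size=(N, 5))
b_true = np.array([1.5, 0.0, -2.0, 0.0, 0.0])
y = X @ b_true + rng.normal(scale=0.5, size=N)

b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
w = 1.0 / np.abs(b_ols)               # adaptive-lasso weights (Zou, 2006)
b_hat = weighted_lasso_cd(X, y, lam=20.0, w=w)
print(np.round(b_hat, 2))             # the three noise coefficients are exactly zero
```

In the paper's setting the columns of X would be the split-coded items, so a coefficient set to zero collapses two adjacent categories.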
As far as the penalty parameter $\lambda$ in equation (5) is concerned, it is chosen by minimizing the extended BIC index (EBIC) proposed by Chen and Chen (2008) and implemented in the lasso2 procedure as follows:

$$EBIC = N \ln\left(\frac{RSS}{N}\right) + s \ln N + 2\,\gamma\, s \ln p \qquad (6)$$

In the equation above, $s$ is the number of parameters in the fitted model, $p$ is the number of parameters in the full model, and $\gamma$ is a tuning constant. Note that $EBIC$ is equal to the standard $BIC$ plus the term $2\,\gamma\, s \ln p$.
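The trade-off encoded by equation (6) can be sketched as follows (simulated data; in lasso2 the candidate models arise from the lasso path over the penalty parameter, approximated here by a sequence of nested OLS fits):

```python
import numpy as np

def ebic(rss, n, s, p, gamma=1.0):
    """Extended BIC: standard BIC plus the extra penalty 2*gamma*s*ln(p)."""
    return n * np.log(rss / n) + s * np.log(n) + 2.0 * gamma * s * np.log(p)

rng = np.random.default_rng(3)
n, p = 300, 20
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = [1.0, -1.0, 0.5]            # only the first three predictors matter
y = X @ beta + rng.normal(size=n)

# candidate models of increasing size (first s columns of X)
scores = []
for s in range(1, 8):
    _, rss, *_ = np.linalg.lstsq(X[:, :s], y, rcond=None)
    scores.append(ebic(rss[0], n, s, p))

best = int(np.argmin(scores)) + 1
print(best)                            # EBIC picks the three relevant predictors
```

Adding a spurious predictor barely reduces the RSS term, while the penalty grows by $\ln N + 2\gamma \ln p$, so EBIC stops at the correct model size here.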
It is worth noticing that the lasso2 procedure relies on a standard linear model, while the model of interest in equation (1) is a linear random intercept model. We tried a specific procedure for linear mixed models, namely the lmmlasso package of R (Schelldorfer et al., 2011; Groll and Tutz, 2014), but we encountered computational difficulties due to the large size of the data set. However, the random effects are expected to play little role in the regularization process for the predictors. Moreover, in order to reduce the bias induced by penalization, it is in general advisable to refit the model using only the selected predictors (Gertheiss and Tutz, 2010; Belloni and Chernozhukov, 2013). Thus, we use the computationally efficient algorithm of lasso2 to perform variable selection, and then we fit the random intercept model (1) on the selected predictors.

5 Combining variable selection and multiple imputation
Our case study raises the additional issue of combining variable selection with multiple imputation. Zhao and Long (2017) review different approaches, highlighting that many aspects are still not fully explored. We choose the common approach of applying variable selection to each imputed data set and then combining the results. Specifically, we devise the following strategy:

(a) generate several imputed data sets using MICE, as described in Section 3;

(b) for each imputed data set, perform variable selection using the adaptive lasso for ordinal predictors, as outlined in Section 4;

(c) retain the predictors selected in at least half of the imputed data sets; this 50% threshold corresponds to a setting of Wood et al. (2008), which performed well in their simulation study;

(d) for each imputed data set, fit the linear random intercept model (1) including the retained predictors;

(e) combine the resulting vectors of estimated coefficients and the corresponding standard errors exploiting Rubin's rules (Little and Rubin, 2002);

(f) perform statistical tests on the regression parameters to refine the set of retained predictors;

(g) repeat steps (d)–(f) to choose the final model.
This strategy allows us to select the ordinal predictors while giving proper standard errors, namely accounting for both the hierarchical structure of the data and the uncertainty due to multiple imputation. Step (f) is advisable since it allows us to exploit the availability of proper standard errors to refine variable selection.
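The pooling step can be sketched as a direct implementation of Rubin's rules; the numbers below are illustrative, not results from the case study.

```python
import numpy as np

def rubin_pool(estimates, std_errors):
    """Pool M sets of point estimates and standard errors with Rubin's rules."""
    est = np.asarray(estimates, dtype=float)   # shape (M, n_params)
    se = np.asarray(std_errors, dtype=float)
    M = est.shape[0]
    qbar = est.mean(axis=0)                    # pooled point estimate
    W = (se ** 2).mean(axis=0)                 # within-imputation variance
    B = est.var(axis=0, ddof=1)                # between-imputation variance
    T = W + (1.0 + 1.0 / M) * B                # total sampling variance
    return qbar, np.sqrt(T)

# hypothetical coefficient of one predictor across M = 5 imputed data sets
est = [[0.21], [0.19], [0.25], [0.18], [0.22]]
se = [[0.05], [0.05], [0.06], [0.05], [0.05]]
qbar, pooled_se = rubin_pool(est, se)
print(qbar[0], pooled_se[0])  # pooled SE exceeds the average within-imputation SE
```

The between-imputation term inflates the pooled standard error, which is what makes the refinement tests on the regression parameters honest about the imputation uncertainty.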
6 Results
The strategy outlined in Section 5 is applied to the case study on student ratings presented in Section 2, which raises problems of missing data and selection of ordinal predictors.
The model of interest is the random intercept model (1). At level 1, the model includes student predictors, which are centered around their cluster average in order to interpret the associated parameters as within effects (Snijders and Bosker, 2012). At level 2, the model includes teacher and course predictors from administrative archives (fully observed), together with teacher practices and beliefs (subject to missingness). The latter are represented by one dummy variable for each of the 10 binary items and six dummy variables for each of the 20 ordinal items. Adopting the backward-difference coding of Section 4, the total number of parameters for the ordinal items is 120.
The imputation step is carried out with MICE, generating the imputed data sets as described at the end of Section 3.
The variable selection procedure begins by applying the regularization method described in Section 4 to each imputed data set, in order to select binary and ordinal items from the PRODID questionnaire, while the other predictors are included in the model without penalization. We retain the predictors selected in at least half of the imputed data sets, namely 5 binary items and 13 ordinal items. For each retained ordinal item, the procedure selects only a subset of the parameters defined in equation (3), implying a collapsing of categories. Overall, the regularization procedure substantially reduces the number of parameters.
The analysis proceeds by fitting model (1) with the retained predictors on the imputed data sets and combining the results with Rubin's rules. The model is fitted by maximum likelihood using the mixed and mi commands of Stata (Stata Corp., 2017). The variable selection procedure is then refined using statistical tests based on the standard errors obtained by Rubin's rules, as suggested in step (f) of Section 5. After this step, the final model includes one binary item and four ordinal items. Table 1 reports the final model results, while descriptive statistics of the model variables are reported in Tables 2 and 3 in the Appendix.

As shown in Table 1, the selection procedure on ordinal predictors yielded a predictor-specific collapsing of categories. For some items, Table 1 reports only two coefficients, corresponding to two blocks of collapsed categories: the effect of such an item on the response variable is constant within each block. This happens when the selection procedure retains only two of the six parameters in equation (3) for that item. Due to backward-difference coding, the parameters of the ordinal items represent contrasts between adjacent categories: each retained parameter is the effect of passing from one collapsed block of categories to the next, and the sum of two retained parameters is the effect of passing from the first block to the third.

[Table 1 here]
We briefly comment on the main findings about the effects of teacher characteristics on the ability to motivate students. Table 1 shows that older teachers and female teachers obtain on average lower ratings, controlling for the remaining covariates. The contribution of external experts has a positive effect; this is the only item about practices retained by the selection process out of the ten available. As for beliefs, only four out of 20 items are significantly related to the ratings. In particular, ratings tend to be higher for teachers who feel that teaching is an exciting experience and that student opinions are a key indicator of course quality. On the contrary, ratings tend to be lower for teachers who think that cooperation among students helps learning and for teachers interested in discussing didactic methods with colleagues.
In order to assess the overall contribution of teacher practices and beliefs in explaining differences in ratings among courses, we compare the residual level 2 variance under different model specifications. Fitting model (1) without any predictor yields an estimated level 2 variance that reduces to 1.2306 after introducing all the predictors except teacher practices and beliefs. The final model gives a further reduction of the residual level 2 variance. Thus, teacher practices and beliefs are the most relevant observed factors in explaining differences in ratings among courses.
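The comparison amounts to computing proportional reductions of the estimated level 2 variance across nested specifications; a sketch using the value 1.2306 from the text and otherwise hypothetical variances:

```python
# level 2 variance estimates from three nested specifications
var_null = 1.60     # model (1) with no predictors (illustrative value)
var_admin = 1.2306  # all predictors except teacher practices and beliefs
var_final = 0.95    # final model with practices and beliefs (illustrative value)

# proportional reduction of residual level 2 variance at each stage
reduction_admin = 1.0 - var_admin / var_null
reduction_final = 1.0 - var_final / var_admin
print(f"{reduction_admin:.1%} then a further {reduction_final:.1%}")
```

With these illustrative inputs, each stage removes roughly a fifth of the remaining level 2 variance; the second reduction is the share attributable to practices and beliefs.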
To evaluate the performance of the imputation procedure, the last two columns of Table 1 report the diagnostic measures FMI and RE, which are derived from the decomposition of the total sampling variance of an estimator (Enders, 2010):

$$T = W + \left(1 + \frac{1}{M}\right) B \qquad (7)$$

where $B$ is the between-imputation variance, while $W = \frac{1}{M}\sum_{m=1}^{M} SE_m^2$ is the within-imputation variance, with $SE_m$ denoting the standard error obtained from the $m$-th imputed data set. The FMI index (Fraction of Missing Information) quantifies the influence of multiple imputation on the sampling variance of a parameter estimate:

$$FMI = \frac{(1 + 1/M)\, B}{T} \qquad (8)$$

The RE index (Relative Efficiency) is the relative efficiency of using a finite number $M$ of imputations versus the theoretically optimal infinite number of imputations:

$$RE = \left(1 + \frac{FMI}{M}\right)^{-1} \qquad (9)$$
As regards the level 1 predictors, Table 1 shows values of FMI near zero and values of RE near one. Indeed, level 1 predictors are fully observed and cluster-mean centered, so they are not affected by the imputation of level 2 predictors. Fully observed level 2 predictors (i.e. teacher and course characteristics) are also little affected by imputations, showing small values of FMI and relative efficiency close to 1. For imputed level 2 predictors (i.e. teacher practices and beliefs) FMI is larger, but the average fraction of sampling variance attributable to missing data remains lower than the fraction of missing values in the data set (about 50%). This points out a favourable trade-off between the increase of sampling error due to imputations and its reduction due to data augmentation. Moreover, the relative efficiency for imputed predictors remains high, suggesting that the chosen number of imputations is sufficient to obtain a satisfactory level of efficiency.
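The diagnostics in equations (7)–(9) can be computed directly from the per-imputation estimates; a sketch using the simple large-sample form of FMI (illustrative numbers, M = 5):

```python
import numpy as np

def fmi_re(estimates, std_errors):
    """Fraction of missing information and relative efficiency for M imputations."""
    est = np.asarray(estimates, dtype=float)
    se = np.asarray(std_errors, dtype=float)
    M = est.shape[0]
    W = (se ** 2).mean(axis=0)            # within-imputation variance
    B = est.var(axis=0, ddof=1)           # between-imputation variance
    T = W + (1.0 + 1.0 / M) * B           # total sampling variance, equation (7)
    fmi = (1.0 + 1.0 / M) * B / T         # large-sample FMI, equation (8)
    re = 1.0 / (1.0 + fmi / M)            # relative efficiency, equation (9)
    return fmi, re

# hypothetical estimates of one coefficient across M = 5 imputed data sets
fmi, re = fmi_re([0.30, 0.35, 0.25, 0.33, 0.27], [0.10, 0.11, 0.10, 0.10, 0.09])
print(fmi, re)  # a fully observed predictor would instead give fmi ~ 0, re ~ 1
```

Enders (2010) also gives a small-sample version of FMI with a degrees-of-freedom correction; the large-sample form above is the simplest variant.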
7 Concluding remarks
In this paper we considered a complex analysis involving a multilevel model with many level 2 ordinal and binary predictors affected by a high rate of missing values. We proposed a strategy to jointly handle missing values and select categorical predictors. The proposed solution combines existing methods in an original way to solve the specific problem at hand, but it is generally applicable to settings requiring the selection of categorical predictors affected by missing values. Specifically, we handled missing data using Multiple Imputation by Chained Equations. This allowed us to retain all the observations and analyse the data under the MAR assumption instead of the unrealistic MCAR assumption. The MAR assumption seems plausible since the imputation model exploits all the information from level 1 and level 2 observed values. The ordinal and binary predictors were selected using an ad hoc regularization method, namely the lasso for ordinal predictors. The regularization procedure induces a data-driven specification of the relationship between the response and the ordinal predictors by collapsing their categories. This method can be easily extended to handle nominal predictors as well (Tutz and Gertheiss, 2016). Regularization was applied separately to each imputed data set and the results were combined, retaining the parameters selected in at least half of the imputed data sets. Finally, the random effect model of interest was fitted including the chosen predictors, with the uncertainty due to imputation accounted for by Rubin's rules. The proposed procedure allowed us to specify the model in a flexible, though parsimonious, way.
The results obtained with the final model pointed out that some teacher practices and beliefs are significantly related to ratings about teacher ability to motivate students.
The complexity of the case study, especially in terms of the number of observations and the number of categorical variables affected by missing values, forced us to rely on computationally light algorithms. Overcoming these computational constraints would allow us to extend the proposed approach in several ways. For example, in the imputation step MICE could be replaced by Joint Modelling (Goldstein et al., 2014; Quartagno and Carpenter, 2016) or by the latent class approach (Vidotto et al., 2018), which give valid inferences for a wider set of specifications of the model of interest, including nonlinear effects and interactions.
Combining model selection with multiple imputation is an open issue (Zhao and Long, 2017). We devised a simple strategy to face a computationally demanding setting, but it would be interesting to explore other approaches.
Acknowledgements
The authors gratefully acknowledge the support of the University of Padua project "Advances in Multilevel and Longitudinal Modelling", principal investigator Omar Paccagnella, grant no. SID2016.
References
 Ahrens et al. (2018) Ahrens A., Hansen C.B., Schaffer M.E. (2018). LASSOPACK: Stata module for lasso, squareroot lasso, elastic net, ridge, adaptive lasso estimation and crossvalidation. Statistical Software Components S458458, Boston College Department of Economics, revised 07 Apr 2018.
 Bartholomew et al. (2011) Bartholomew, D.J., Knott, M., Moustaki, I. (2011). Latent Variable Models and Factor Analysis: A Unified Approach, 3rd Edition. John Wiley & Sons.
 Belloni et al. (2012) Belloni, A., Chen, D., Chernozhukov, V., Hansen, C. (2012). Sparse Models and Methods for Optimal Instruments With an Application to Eminent Domain. Econometrica, 80, 2369–2429.
 Belloni and Chernozhukov (2013) Belloni, A., Chernozhukov, V. (2013). Least squares after model selection in highdimensional sparse models. Bernoulli, 19, 521–547.
 Carpenter and Kenward (2013) Carpenter, J., Kenward, M. (2013). Multiple imputation and its application. Chichester, United Kingdom: John Wiley & Sons, Ltd.
 Chen and Chen (2008) Chen, J., Chen, Z. (2008). Extended Bayesian information criteria for model selection with large model spaces. Biometrika, 95, 759–771.
 Dalla Zuanna et al. (2016) Dalla Zuanna, G., Clerici, R., Paccagnella, O., Paggiaro, A., Martinoia, S., Pierobon, S. (2016). Evaluative research in education: a survey among professors of University of Padua. Excellence and Innovation in Learning and Teaching, 1, 17–34.
 Enders (2010) Enders, C.K. (2010). Applied Missing Data Analysis. The Guilford Press.
 Erler et al. (2016) Erler, N.S., Rizopoulos, D., van Rosmalen, J., Jaddoe, V.W.V., Franco, O.H., Lesaffre, E.M.E.H. (2016). Dealing with missing covariates in epidemiologic studies: a comparison between multiple imputation and a full Bayesian approach. Statistics in Medicine, 35, 2955–2974.
 Fu (1998) Fu, W. J. (1998). Penalized Regressions: The Bridge Versus the Lasso. Journal of Computational and Graphical Statistics, 7(3), 397–416.
 Gertheiss and Tutz (2010) Gertheiss J., Tutz G. (2010). Sparse modeling of categorial explanatory variables. The Annals of Applied Statistics, 4, 2150–2180.
 Goe et al. (2008) Goe, L., Bell, C., Little, O.(2008). Approaches to evaluating teacher effectiveness: A research synthesis. National Comprehensive Center for Teacher Quality, Washington, DC.
 Goldstein (2010) Goldstein H.(2010). Multilevel Statistical Models, 4th Edition John Wiley & Sons, Ltd.
 Goldstein et al. (2014) Goldstein H., Carpenter J.R., Browne W.J. (2014). Fitting multilevel multivariate models with missing data in responses and covariates that may include interactions and nonlinear terms. Journal of RSS Series A, 177, 553–564.
 Groll and Tutz (2014) Groll, A., Tutz, G. (2014). Variable selection for generalized linear mixed models by L1-penalized estimation. Statistics and Computing, 24, 137–154.
 Grund et al. (2017) Grund S., Ludtke O., Robitzsch A. (2017). Multiple imputation of missing data for multilevel models: Simulations and recommendations. Organizational Research Methods, 21, 111–149.
 Grund et al. (2018) Grund S., Ludtke O., Robitzsch A. (2018). Multiple Imputation of Missing Data at Level 2: A Comparison of Fully Conditional and Joint Modeling in Multilevel Designs. Journal of Educational and Behavioral Statistics, 43, 316–353.
 Hanushek and Rivkin (2006) Hanushek, E.A., Rivkin, S.G.(2006). Teacher quality. In: Hanushek, E.A., Welch, F. (eds), Handbook of the economics of education (Vol. 1). North Holland, Amsterdam, 1050–1078.
 Little and Rubin (2002) Little, R.J.A., Rubin, D.B. (2002). Statistical Analysis with Missing Data. 2nd Edition, Wiley.
 LorenzoSeva et al. (2016) Lorenzo-Seva, U., Van Ginkel, J.R. (2016). Multiple imputation of missing values in exploratory factor analysis of multidimensional scales: estimating latent trait scores. Anales de psicología, 32, 596–608.
 Mistler and Enders (2017) Mistler, S.A., Enders, C.K. (2017). A Comparison of Joint Model and Fully Conditional Specification Imputation for Multilevel Missing Data. Journal of Educational and Behavioral Statistics, 42, 371–404.
 Nassiri et al. (2018) Nassiri, V., Lovik, A., Molenberghs, G., Verbeke, G. (2018). On using multiple imputation for exploratory factor analysis of incomplete data. Behavior Research Methods, 50, 501–51.
 Quartagno and Carpenter (2016) Quartagno, M., Carpenter, J.R. (2016) Multiple Imputation of IPD Metaanalysis: allowing for heterogeneity and studies with missing covariates. Statistics in Medicine. 35, 2938–2954.
 Rampichini et al. (2004) Rampichini, C., Grilli, L., Petrucci A. (2004). Analysis of university course evaluations: from descriptive measures to multilevel models. Statistical Methods & Applications 13, 357–373.
 Seaman et al. (2013) Seaman, S., Galati, J., Jackson, D., Carlin, J. (2013). What Is Meant by ‘Missing at Random’? Statistical Science, 28, 257–268.
 Schelldorfer et al. (2011) Schelldorfer, J., Buhlmann, P., Van de Geer, S. (2011). Estimation for High-Dimensional Linear Mixed-Effects Models Using l1-Penalization. Scandinavian Journal of Statistics, 38, 197–214.
 Snijders and Bosker (2012) Snijders, TAB, Bosker, RJ (2012). Multilevel analysis: An introduction to basic and advanced multilevel modeling (2nd ed.). SAGE Publications Inc.
 Spooren et al. (2013) Spooren, P., Brockx, B., Mortelmans, D. (2013). On the validity of student evaluation of teaching: The state of the art. Review of Educational Research 83, 598–642.
 Stata Corp. (2017) Stata Corp. (2017). Stata: Release 15. Statistical Software. College Station, TX: StataCorp LLC.
 Tutz and Gertheiss (2016) Tutz G., Gertheiss J. (2016). Regularized regression for categorical data. Statistical Modelling, 16(3): 161–200
 van Buuren (2018) van Buuren, S. (2018). Flexible Imputation of Missing Data (2nd ed.). CRC Press Taylor & Francis Group: Boca Raton, FL.
 Vidotto et al. (2018) Vidotto D., Vermunt, J.K., van Deun K. (2018). Bayesian Multilevel Latent Class Models for the Multiple Imputation of Nested Categorical Data. Journal of Educational and Behavioral Statistics, 43, 511–539.
 Walter et al. (1987) Walter, S.D., Feinstein, A.R., Wells, C.K. (1987). Coding ordinal independent variables in multiple regression analyses. American Journal of Epidemiology, 125, 319–323.
 Wood et al. (2008) Wood, A.M., White, I.R., Royston, P. (2008). How should variable selection be performed with multiply imputed data? Statistics in Medicine, 27, 3227–3246. DOI: 10.1002/sim.3177.
 Zhao and Long (2017) Zhao Y., Long Q. (2017). Variable selection in the presence of missing data: imputationbased methods. WIREs Comput Stat, 9:e1402. doi: 10.1002/wics.1402
 Zou (2006) Zou, H. (2006). The Adaptive Lasso and Its Oracle Properties. Journal of the American Statistical Association, 101(476), 1418–1429.
Appendix
[Tables 2 and 3 here]