In this paper we focus on the variable-selection performance of solar on empirical data with complicated dependence structures and, hence, severe multicollinearity and grouping-effect issues. We choose the prostate cancer data and the Sydney house price data and apply two lasso solvers, elastic net and solar to them (code can be found at <https://github.com/isaac2math/>). The results show that (i) lasso is affected by the grouping effect and randomly drops variables with high correlations, resulting in unreliable and uninterpretable results; (ii) elastic net is more robust to the grouping effect; however, it completely loses variable-selection sparsity when the dependence structure of the data is complicated; (iii) solar demonstrates superior robustness to complicated dependence structures and the grouping effect, returning variable-selection results with better stability and sparsity. Moreover, such stability and sparsity make solar a reliable variable pre-estimation filter for linear dependence structure estimation (linear probabilistic graph learning). The linear probabilistic graph estimated on the variables selected by solar returns an intuitive, sparse and stable dependence structure.
Variable selection is essential in modern statistics. It has been successfully applied to high-dimensional prediction, dependence structure estimation and structure learning across many fields, including finance, machine learning, biostatistics and signal processing. In the field of linear modelling, researchers have introduced a variety of new algorithms (e.g., lasso and variable screening) with better theoretical properties and simulation performance. With ever-expanding data dimensions, those new algorithms are designed to select a sparse set of variables that works well for prediction, interpretation and dependence structure estimation. However, due to the complicated dependence structures and, hence, the multicollinearity in real-world data, these new algorithms may not perform well in real-world applications. Hence, it is necessary and still challenging to perform accurate and stable variable selection under complicated dependence structures and multicollinearity.
To improve the accuracy and stability of variable selection, in the last chapter we propose the subsample-ordered least-angle regression (solar). Using directed acyclic graphs, examples and simulations, we show that solar yields substantial improvements over lasso in terms of the sparsity, stability and accuracy of variable selection. However, these improvements have not been confirmed on real-world data with complicated dependence structures and multicollinearity issues. In this paper, we focus on the variable-selection performance of solar on two real-world datasets with heavy multicollinearity and complicated dependence structures: the prostate cancer data (small $p$) and the Sydney house price data (moderate $p$). Taking two lasso solvers and elastic net as competitors, we show that solar outperforms them by reaching a better balance between sparsity and variable-selection stability. We also demonstrate that, based on the variable-selection result of solar, dependence structure estimation and probabilistic graph learning can be conducted more stably and reliably.
As a major issue in linear modelling for decades, multicollinearity causes problems for classical techniques from several perspectives. Firstly, since linear modelling can be viewed as error minimization in a linear space, multicollinearity reduces the magnitude of the minimal eigenvalue in that space, causing various issues for numerical convergence (e.g., in the Cholesky decomposition or gradient descent) and model estimation. Moreover, severe multicollinearity amplifies the instability of parameter estimates across samples: the more severe the multicollinearity, the more dramatically the sample regression coefficients change across samples, implying that the coefficients cannot be interpreted reliably and accurately. Furthermore, multicollinearity also causes problems for statistical tests. A severe multicollinearity issue unnecessarily inflates the standard errors of the regression coefficients. As a result, the finite-sample performance of all statistical tests that rely on the sample covariance (e.g., the post-OLS t-test, the covariance test of lasso
(Lockhart et al., 2014), and the conditional correlation tests of dependence structure estimation (Scutari and Denis, 2014)) will be weakened (Farrar and Glauber, 1967). Last but not least, multicollinearity may also reduce the algorithmic stability of the model (Elisseeff et al., 2003), which reduces the generalization and prediction ability of the estimated model.

Multicollinearity also affects the reliability of variable-selection algorithms in linear modelling. For example, the lasso regression (Tibshirani, 1996) is unstable if a group of variables are highly correlated with one another (Zou and Hastie, 2005; Jia and Yu, 2010): lasso randomly selects one variable from the group and drops the others out of the regression model, which is referred to as the grouping effect. For all linear modelling techniques, the variable-selection decision is based on the conditional correlation between a covariate and the response while controlling for the other covariates. As a result, the grouping effect may well apply to other variable-selection methods such as the best-subset methods (including AIC, BIC and Mallow's $C_p$), reducing the stability and accuracy of variable selection in linear modelling.
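To make the grouping effect concrete, the following minimal Python sketch (an illustration we add here, not code from the paper) simulates two nearly collinear covariates and re-runs cross-validated lasso on fresh samples; which members of the correlated pair survive selection varies from sample to sample.

```python
# Minimal grouping-effect demo (illustrative only): x1 and x2 are nearly
# collinear, and CV lasso keeps an essentially arbitrary subset of the pair.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
survivors = []
for trial in range(20):
    n = 200
    z = rng.standard_normal(n)
    x1 = z + 0.05 * rng.standard_normal(n)   # corr(x1, x2) is close to 1
    x2 = z + 0.05 * rng.standard_normal(n)
    x3 = rng.standard_normal(n)              # an unrelated control covariate
    X = np.column_stack([x1, x2, x3])
    y = x1 + x2 + 0.5 * x3 + rng.standard_normal(n)
    coef = LassoCV(cv=5).fit(X, y).coef_
    survivors.append(tuple(np.flatnonzero(np.abs(coef[:2]) > 1e-6)))

# Which member(s) of {x1, x2} survive tends to vary across trials:
# the grouping effect in action.
print(survivors)
```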
The consequences of the grouping effect and multicollinearity go beyond variable selection in linear modelling. Since (i) it is NP-hard to estimate the dependence structure (also referred to as probabilistic graph learning) on data with large $p$ (Heckerman et al., 1995; Chickering et al., 2004) and (ii) dependence structure estimation algorithms typically work on data with large $n$ and a very sparse dependence structure, variable-selection methods from linear modelling (e.g., SCAD (Fan and Li, 2001), ISIS (Fan and Lv, 2008) and different lasso-type estimators (Fan et al., 2009)) are frequently used to filter out redundant variables before estimating linear dependence structures in biostatistics and machine learning. However, due to the complicated linear structure and, hence, the grouping effect, lasso or other classical variable-selection methods may randomly drop some of the highly correlated variables, resulting in the omission of important variables from the linear structure.
Different attempts have been made to reduce the effect of multicollinearity. For a more stable regression coefficient estimate, Hoerl and Kennard (1970) apply Tikhonov regularization to OLS, resulting in the ridge regression. However, since ridge sacrifices unbiasedness for a smaller regression coefficient variance (i.e., it is a James-Stein-type estimator), extra difficulty is brought to the statistical tests and post-estimation inference of ridge. To reduce the grouping effect and obtain a stable variable-selection result, the cross-validated group lasso and the cross-validated elastic net (CV-en) are introduced by
Zou and Hastie (2005) and Friedman et al. (2010). By grouping highly correlated variables together (i.e., they are dropped or included as a group), group lasso improves the robustness of lasso to the grouping effect. However, group lasso relies on manual grouping of variables, which heavily depends on the accuracy of field knowledge. On the other hand, even though Zou and Hastie (2005) and Jia and Yu (2010) show that in some cases CV-en improves the stability and accuracy of lasso variable selection, Jia and Yu (2010) also show that the improvement is marginal and that “when the lasso does not select the true model, it is more likely that the elastic net does not select the true model either.”

In this paper, we compare the variable-selection performance of solar with lasso and CV-en on two real-world datasets: the prostate cancer data and the Sydney house price data. The prostate cancer data is a small, industry-standard dataset for testing the variable-selection and prediction performance of new estimators and algorithms in machine learning and biostatistics. On the prostate cancer data, due to the heavy multicollinearity among the explanatory variables, lasso randomly drops an important variable and returns counterintuitive results; by contrast, solar and CV-en include all important variables and return stable variable-selection results that are consistent with biostatistics theory. Alongside the simulations in the last chapter, the performance of solar on the prostate cancer data confirms the advantage of solar over lasso in terms of variable-selection stability and accuracy.
We also apply solar, lasso and CV-en to the Sydney house price data. Compared with the prostate cancer data, the Sydney house price data has more variables, a more severe multicollinearity issue and a more complicated dependence structure. As a result, lasso and CV-en lose their sparsity on the Sydney house price data, selecting respectively 44 and 57 variables out of 57. By contrast, the variables selected by solar remain sparse (9 out of 57) and intuitive. By dropping 48 variables, solar reduces $R^2$ and adjusted $R^2$ by only a very small margin. By conducting post-selection inference on the variable-selection result of solar, we further correct the possible variable-selection issue in solar caused by multicollinearity. Based on the rectified solar variable-selection result, we implement probabilistic graph learning and estimate the dependence structure of the Sydney house price data (centered on the variable ‘price’), which is also intuitive and consistent with economic facts.
In the last chapter we introduce the solar algorithm, which is specifically designed for high-dimensional variable selection under severe multicollinearity. In the simulation section of the last chapter, we illustrate that (i) solar outperforms lasso in terms of variable-selection stability and accuracy; (ii) solar can still successfully identify informative variables under harsh irrepresentable condition (IRC) settings. In this section, we verify the advantage of solar on the prostate cancer data (Friedman et al., 2001), a representative dataset with small $n$, small $p$, and severe multicollinearity and grouping effect.
The prostate cancer data is collected for the prediction of prostate cancer aggression. As shown in table 1, the data contains 9 medical variables for 97 prostate cancer patients with positive and conclusive diagnoses. Among all variables, ‘lpsa’ is the log PSA test score, which the U.S. Food and Drug Administration (FDA) has used for decades to measure the cancer aggression of a prostate cancer patient: the higher the PSA score, the more aggressive the cancer. As a result, ‘lpsa’ is the response variable of the regression. The other variables in the data are used as covariates; they are other medical test results frequently used in prostate cancer diagnosis. Among the covariates, ‘gleason’ and ‘pgg45’ are pathologically the most relevant to cancer aggression. ‘gleason’ is the current Gleason test score – a score that ranges from 1 to 10 – and is another standard FDA test score for prostate cancer aggression; the higher the Gleason score, the more aggressive the cancer. Likewise, ‘pgg45’ is the percentage of Gleason scores of 4 or 5 recorded over the patient history (not including the current Gleason score). The Gleason score and the PSA score are major tools for prostate cancer diagnosis. From the perspective of pathology theory, all covariates in the data are relevant to prostate cancer diagnosis. Using these variables, we aim to predict the aggression of the prostate cancer as accurately as possible.
name | description |
---|---|
lpsa | the log PSA score |
lcavol | the log cancer volume |
lweight | the log weight of prostate |
age | age of the patient |
lbph | the log amount of benign prostatic hyperplasia |
svi | the presence of seminal vesicle invasion (binary) |
lcp | the log amount of capsular penetration |
gleason | the current Gleason score (most cancers score 3 or higher) |
pgg45 | the percentage of 4 or 5 Gleason scores that were recorded over the patient history |
However, due to the complicated dependence structure in this biostatistics data, the multicollinearity among the covariates is severe. As shown in table 2, {lcavol, svi, lcp, gleason, pgg45} are highly correlated with one another, implying that classical variable-selection methods may be affected by the ‘grouping effect’ and randomly drop some of them. Besides destabilizing variable selection, the high multicollinearity may also weaken the accuracy of the regression coefficient estimates of ordinary least squares (OLS) or maximum likelihood estimation (MLE) on this sample: the standard error of each regression coefficient estimate will be unnecessarily large, implying that the coefficient estimates may change dramatically across samples.
lcavol | lweight | age | lbph | svi | lcp | gleason | pgg45 | |
lcavol | 1 | |||||||
lweight | 0.280521 | 1 | ||||||
age | 0.225 | 0.347969 | 1 | |||||
lbph | 0.02735 | 0.442264 | 0.350186 | 1 | ||||
svi | 0.538845 | 0.155385 | 0.117658 | -0.085843 | 1 | |||
lcp | 0.67531 | 0.164537 | 0.127668 | -0.006999 | 0.673111 | 1 | ||
gleason | 0.432417 | 0.056882 | 0.268892 | 0.07782 | 0.320412 | 0.51483 | 1 | |
pgg45 | 0.433652 | 0.107354 | 0.276112 | 0.07846 | 0.457648 | 0.631528 | 0.751905 | 1 |
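A correlation table like Table 2 can be reproduced with a few lines of pandas. The sketch below assumes the copy of the prostate data distributed alongside *The Elements of Statistical Learning*; swap in a local path if that URL moves.

```python
# Reproduce a pairwise correlation table like Table 2 (illustrative sketch).
import pandas as pd

url = "https://web.stanford.edu/~hastie/ElemStatLearn/datasets/prostate.data"
df = pd.read_csv(url, sep="\t")
df = df.loc[:, ~df.columns.str.startswith("Unnamed")]  # drop the index column

covariates = ["lcavol", "lweight", "age", "lbph", "svi", "lcp", "gleason", "pgg45"]
corr = df[covariates].corr().round(3)
print(corr)  # large off-diagonal entries flag potential grouping effects
```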
Table 3 shows the variable-selection comparison among solar, the lasso solvers and CV-en. Since most of the covariates have been shown to be pathologically relevant to the severity of prostate cancer, all variable-selection methods select a similar combination of variables. Most of the selected variables make perfect sense in pathology. For example, benign prostatic hyperplasia (variable ‘lbph’) and inflammation may also cause the PSA score to increase significantly, so the inclusion of ‘lbph’, ‘svi’ and ‘lcp’ is intuitive.
However, the only difference between lasso and solar/CV-en is the variable ‘gleason’: the two lasso solvers include ‘pgg45’ instead of ‘gleason’. For prostate cancer patients with positive and conclusive diagnoses, this variable-selection result seems to suggest that the current PSA score – an accurate measure of the current cancer aggression – is not relevant to the current Gleason score, another accurate measure of the current cancer aggression. By contrast, it suggests that the current PSA score is relevant to ‘pgg45’, the historical Gleason values. This result is very counterintuitive. Consider a prostate cancer patient who has never had a positive diagnosis before: his ‘pgg45’ value will be 0 while the corresponding values of ‘gleason’ and ‘lpsa’ would be high. In the prostate cancer data, we have 35 patients with ‘pgg45’ equal to 0 but a high ‘gleason’ score. Hence, for these patients ‘pgg45’ is not useful for prostate cancer diagnosis.
However, both CV-en and solar include ‘gleason’. From the perspective of variable selection, CV-en is known to be more robust to multicollinearity. As a result, the stability and accuracy of solar variable selection is corroborated by the fact that CV-en and solar select the same variables. Nevertheless, we still need to investigate whether the drop of ‘gleason’ by lasso is due to high multicollinearity and the grouping effect.
 | Variables selected | Total
---|---|---
solar | lcavol, lweight, age, lbph, svi, lcp, gleason, pgg45 | 8
CV-en | lcavol, lweight, age, lbph, svi, lcp, gleason, pgg45 | 8
CV-lars-lasso | lcavol, lweight, age, lbph, svi, lcp, pgg45 | 7
CV-cd | lcavol, lweight, age, lbph, svi, lcp, pgg45 | 7
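The Table 3 comparison can be sketched with scikit-learn’s two lasso solvers (lars-based and coordinate-descent) plus cross-validated elastic net, continuing from the loading sketch above; the hyperparameter grids here are illustrative, not the paper’s exact settings.

```python
# Sketch of the Table 3 comparison on the standardized prostate data.
from sklearn.linear_model import LassoLarsCV, LassoCV, ElasticNetCV
from sklearn.preprocessing import scale

X = scale(df[covariates].to_numpy())
y = scale(df["lpsa"].to_numpy())

models = {
    "CV-lars-lasso": LassoLarsCV(cv=10),
    "CV-cd": LassoCV(cv=10),
    "CV-en": ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9], cv=10),
}
for name, model in models.items():
    model.fit(X, y)
    kept = [v for v, c in zip(covariates, model.coef_) if abs(c) > 1e-6]
    print(f"{name}: selects {kept}")
```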
To investigate the grouping effect in the prostate cancer data, we first report the average solution path of solar in table 4. As shown in the table, on average ‘gleason’ and ‘pgg45’ are included near the end of the lars path on each subsample, with ‘gleason’ included by lars before ‘pgg45’. As shown in the last chapter, the average solution-path value of a variable (denoted $\hat q$ in the tables below) measures the stage at which forward regression, on average, includes that variable: a variable with a larger $\hat q$ value is, on average, included at an earlier stage, implying that it is more likely to be informative. As a result, solar suggests that on average ‘gleason’ is more likely to be informative than ‘pgg45’. Compared with the variable-selection result of lasso, the result of solar makes more sense in pathology, especially for the 35 patients with ‘pgg45’ $= 0$ in the data.
$\hat q$ | Variables included
---|---
1 | lcavol |
0.844 | lcavol, lweight |
0.622 | lcavol, lweight, age |
0.511 | lcavol, lweight, age, lbph |
0.444 | lcavol, lweight, age, lbph, svi |
0.266 | lcavol, lweight, age, lbph, svi, lcp, gleason |
0.088 | lcavol, lweight, age, lbph, svi, lcp, gleason, pgg45 |
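An average solution path like Table 4 can be approximated by running lars on many subsamples and averaging how early each variable enters the path. The sketch below (continuing from the `X`, `y` and `covariates` defined above) is only a schematic of solar’s ranking; the exact scoring and subsampling scheme of the original algorithm may differ in detail.

```python
# Schematic average solution path: average, over subsamples, a [0, 1] score
# that is larger the earlier a variable enters the lars path.
import numpy as np
from sklearn.linear_model import lars_path

def average_solution_path(X, y, n_sub=200, frac=0.9, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    q = np.zeros(p)
    for _ in range(n_sub):
        idx = rng.choice(n, size=int(frac * n), replace=False)
        _, _, coefs = lars_path(X[idx], y[idx], method="lar")
        steps = coefs.shape[1]
        entry = np.argmax(np.abs(coefs) > 0, axis=1).astype(float)
        entry[~np.any(np.abs(coefs) > 0, axis=1)] = steps  # never entered
        q += 1.0 - entry / steps                           # early entry -> near 1
    return q / n_sub

q_hat = average_solution_path(X, y)
for v, val in sorted(zip(covariates, q_hat), key=lambda t: -t[1]):
    print(f"{val:.3f}  {v}")
```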
Table 4 also shows that ‘pgg45’ is included right after ‘gleason’. We know from the last chapter that the $\hat q$ value of a variable can be seen as its conditional relevance to the response: two highly correlated variables may have similar relevance to the response, implying that they may be ranked close to each other in the average solution path. As a result, the positions of ‘gleason’ and ‘pgg45’ in table 4 suggest that, conditional on {lcavol, lweight, age, lbph, svi, lcp}, ‘gleason’ and ‘pgg45’ may be highly correlated with each other. To validate this hypothesis, we first report the marginal correlation of ‘gleason’ with every other covariate in the data (table 5). Table 5 verifies that the multicollinearity is severe for ‘gleason’ in the prostate cancer data, which may potentially lead to an IRC violation. In this case, due to sampling randomness and multicollinearity, the lasso solvers may randomly drop ‘gleason’ from the regression even though it may be informative.
 | lcavol | lweight | age | lbph | svi | lcp | pgg45
---|---|---|---|---|---|---|---
gleason | 0.432 | 0.057 | 0.269 | 0.078 | 0.320 | 0.515 | 0.752
As shown in the last chapter, the IRC is vital for lasso: if the IRC is violated, lasso variable selection may not be reliable, resulting in the inclusion of redundant variables and the exclusion of informative ones. Based on the pathological intuition and the variable-selection results of solar and CV-en, ‘gleason’ is likely to be informative. Hence, to check whether the IRC is violated with respect to ‘gleason’, we standardize all variables and estimate equation (2.1),
$$\text{gleason} = \beta_0 + \beta_1\,\text{lcavol} + \beta_2\,\text{lweight} + \beta_3\,\text{age} + \beta_4\,\text{lbph} + \beta_5\,\text{svi} + \beta_6\,\text{lcp} + \beta_7\,\text{pgg45} + \varepsilon \qquad (2.1)$$
and check the $\ell_1$ norm of its regression coefficients, which can be found in Table 6.
No. Observations: | 97 | AIC: | 202.6 |
---|---|---|---|
R-squared: | 0.595 | BIC: | 223.2 |
Adj. R-squared: | 0.563 | F-statistic: | 18.68 |
Dep. Variable: | gleason | Prob (F-statistic): | 4.24e-15 |
const | ||||
---|---|---|---|---|
lcavol | ||||
lweight | ||||
age | ||||
lbph | ||||
svi | ||||
lcp | ||||
pgg45 |
Table 6 reports the OLS result of (2.1). In this regression, the response variable and all covariates are standardized, implying that each regression coefficient represents the conditional correlation between a covariate and the response. Table 6 shows that around 60% of the variation of ‘gleason’ can be explained by {lcavol, lweight, age, lbph, svi, lcp, pgg45}. Moreover, controlling for the other covariates in this regression, the conditional correlation between ‘pgg45’ and ‘gleason’ remains large. This clearly shows that we have a grouping-effect problem among {lcavol, lweight, age, lbph, svi, lcp, pgg45, gleason}.
Even worse, table 6 also confirms the possible violation of the IRC with respect to ‘gleason’. As shown in the table, the $\ell_1$ norm of the slope coefficients in (2.1) is around 1.1. This implies that, even though ‘gleason’ appears to be an informative variable in pathology and is included by solar and CV-en, lasso will still drop it. Moreover, the dropout of ‘gleason’ may be a random decision driven by sampling randomness and the grouping effect among {lcavol, age, svi, lcp, pgg45, gleason}. Put another way, if we collected another sample from the same population and re-applied lasso, lasso would likely drop another variable from {lcavol, age, svi, lcp, pgg45, gleason} at random, probably ‘pgg45’ due to its high sample correlation. As a result, it is difficult to interpret the variable-selection result of lasso.
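The IRC check in (2.1) is easy to reproduce: standardize everything, regress ‘gleason’ on the remaining covariates by OLS, and inspect the $\ell_1$ norm of the slope coefficients. A minimal sketch, reusing the `df` loaded earlier:

```python
# IRC check for 'gleason' (sketch): an l1 norm of the slopes above 1 is the
# warning sign discussed in the text.
import numpy as np
import statsmodels.api as sm
from sklearn.preprocessing import scale

others = ["lcavol", "lweight", "age", "lbph", "svi", "lcp", "pgg45"]
Xg = sm.add_constant(scale(df[others].to_numpy()))
res = sm.OLS(scale(df["gleason"].to_numpy()), Xg).fit()

print(f"R2 = {res.rsquared:.3f}")                       # variance explained
print(f"l1 norm = {np.abs(res.params[1:]).sum():.3f}")  # slopes only, no const
```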
The variable-selection comparison among lasso, CV-en and solar confirms the advantage of solar that we demonstrated in the last chapter. The dependence structure in the prostate cancer data may not be very complicated, due to the relatively small $p$ (9 variables). Even in this scenario, lasso is affected by the grouping effect and may be unreliable. By contrast, solar and CV-en are both robust to the grouping effect and return an intuitive and stable variable-selection result. In the next section, to check the robustness of each variable-selection method to the grouping effect, we apply CV-en, solar and lasso to the Sydney house price data, a dataset with larger $p$ and, hence, a much more complicated dependence structure.
In the last section, we demonstrate the advantage of solar on the prostate cancer data, an industry-representative dataset with small $p$ and a severe grouping effect. Due to well-implemented laboratory experiments/tests and accurate clinical data-collection procedures, many biostatistical datasets are known to have clear and well-defined dependence structures; moreover, with a small $p$, the dependence structure in the prostate cancer data is still manageable. In this section, we focus on the performance of solar, lasso and CV-en on the Sydney house price data, a dataset with larger $p$, a much more complicated dependence structure and a more severe grouping effect.
The Sydney house price data is collected for price prediction of all the second-hand houses on the market in Mid and East Sydney in 2010. As shown in Table 13
(at the end of the paper), the Sydney house price data contains 57 covariates, which can be classified into 4 categories: the features of the house; distances to key locations (public transport, shopping, etc.); local school quality; and localized socio-economic data. The features of the house are reported in the real-estate transactions alongside the house price. The distances of each house to the nearest key locations are computed in QGIS – an open-source geographical information system – based on the GPS location of each house and the geo-data collected from the Department of Land and Natural Resources, New South Wales, Australia. The ICSEA score – an indication of the socio-educational backgrounds of students – is collected from the Australian Curriculum, Assessment and Reporting Authority (ACARA). The variables on local school quality (the average examination scores) are collected from the Department of Education, New South Wales, Australia.
To check for possible multicollinearity and grouping effects in the Sydney house price data, we also need to inspect the pairwise correlation table among all covariates. Due to the size of the table, we report it as an additional CSV file, which shows that multiple covariates in the Sydney house price data are highly correlated with one another. As expected, the possible grouping effect and multicollinearity issue in the Sydney house price data may be much worse than in the prostate cancer data, which implies a much more complicated dependence structure.
Table 7 shows the selection results of solar, lasso and CV-en. On data with complicated dependence structures and severe multicollinearity, both lasso and CV-en lose sparsity of variable selection: the two lasso solvers only manage to drop 13 of the 57 variables, and CV-en selects all 57. This is consistent with the finding of Jia and Yu (2010) that “when the lasso does not select the true model, it is more likely that the elastic net does not select the true model either.” Heuristically increasing the value of $\lambda$ in lasso (e.g., via the one-standard-error rule) may improve the sparsity of lasso; however, it may also lead lasso to randomly drop variables due to the grouping effect. On the other hand, CV-en is designed to tolerate multicollinearity and the grouping effect while returning a sparse and stable regression model; yet due to the complicated dependence structure, CV-en completely fails to accomplish variable selection here. In sum, the shrinkage methods fail to simultaneously maintain sparsity and stability on this data. By contrast, as a variable-selection algorithm robust to complicated dependence structures, solar still returns a very sparse regression model, with only 9 variables selected out of 57.
 | Variables selected | Total
---|---|---
solar | Median_mortgage_repay_monthly, Median_rent_weekly, Median_Tot_fam_inc_weekly, | 9
Bedrooms, Baths, Parking, Beach, Gaol, ICSEA | ||
CV-lars-lasso/ | Lang_spoken_home_Eng_only_P, Australian_citizen_P, Median_age_persons, | 44
CV-cd | Median_mortgage_repay_monthly, Median_rent_weekly, Median_Tot_fam_inc_weekly, | |
Average_num_psns_per_bedroom, Average_household_size, TVO2009, Suburb_Area, | ||
AreaSize, Bedrooms, Baths, Parking, Airport, Beach, Cemetery, ChildCare, Club, | ||
GolfCourt, High, Lib, Museum, Park, Police, PreSchool, PrimaryHigh, Primary, | ||
RailStat, Rubbish, SportsCenter, SportsCourtField, Swimming, Tertiary, DistBound, | ||
ICSEA, ReadingY3, WritingY3, SpellingY3, GrammarY3, NumeracyY3, WritingY5, | ||
SpellingY5, GrammarY5 | ||
CV-en | Tot_P_P, Lang_spoken_home_Eng_only_P, Australian_citizen_P, Median_age_persons, | 57
Median_mortgage_repay_monthly, Median_Tot_prsnl_inc_weekly, Median_rent_weekly, | ||
Median_Tot_fam_inc_weekly, Average_num_psns_per_bedroom, Average_household_size, | ||
TVO2010, TPO2010, TVO2009, TPO2009, Suburb_Area, AreaSize, Bedrooms, | ||
Baths, Parking, Airport, Beach, Cemetery, ChildCare, CommunityFacility, Club, | ||
Gaol, GeneralHospital, GolfCourt, High, Lib, MedCenter, Museum, Park, PO, Police, | ||
PreSchool, PrimaryHigh, Primary, RailStat, Rubbish, Sewage, SportsCenter, | ||
SportsCourtField, Swimming, Tertiary, DistBound, ICSEA, ReadingY3, WritingY3, | ||
SpellingY3, GrammarY3, NumeracyY3, ReadingY5, WritingY5, SpellingY5, | ||
GrammarY5, NumeracyY5 |
Table 8 reports the regression coefficients of OLS, the lasso solvers and solar. Due to the dimensionality, we only focus on the coefficients of the variables selected by solar. Since the lasso regression coefficients are estimated under the $\ell_1$ penalty, they are biased toward zero and typically smaller in magnitude than the OLS coefficients, implying that the magnitudes of the lasso coefficients are not particularly helpful for marginal-effect evaluation. Since (i) the elastic-net coefficients are estimated under the composite $\ell_1$-$\ell_2$ penalty and (ii) the elastic-net penalty (and hence the bias in its coefficients) is more complicated than lasso’s, we omit the elastic-net coefficient values from table 8
. Even though the solar and OLS regression coefficients are both unbiased under regularity conditions, the solar coefficients are still preferred in this scenario due to the sparsity of the solar regression model, which reduces the severity of the curse of dimensionality and retains only the most important variables in the regression model.
OLS | CV-lars-lasso | CV-cd | solar | |
---|---|---|---|---|
Median_mortgage_repay_monthly | ||||
Median_rent_weekly | ||||
Median_Tot_fam_inc_weekly | ||||
Bedrooms | ||||
Baths | ||||
Parking | ||||
Beach | ||||
Gaol | ||||
ICSEA |
As shown in the simulations of the last chapter and in the prostate cancer data, the accuracy and stability of variable selection may be reduced by multicollinearity and the grouping effect embedded in the data, especially when the potential dependence structure is large. To investigate whether solar variable selection is affected by the grouping effect, we first report the average solution path on the house price data, shown in Table 9.
$\hat q$ | Variables included
---|---
1 | Baths |
0.977 | Baths, Median_mortgage_repay_monthly |
0.955 | Baths, Median_mortgage_repay_monthly, Bedrooms |
0.956 | Baths, Median_mortgage_repay_monthly, Bedrooms, Median_Tot_fam_inc_weekly |
0.933 | Baths, Median_mortgage_repay_monthly, Bedrooms, Median_Tot_fam_inc_weekly, ICSEA |
0.911 | Baths, Median_mortgage_repay_monthly, Bedrooms, Median_Tot_fam_inc_weekly, ICSEA, |
Median_rent_weekly | |
0.888 | Baths, Median_mortgage_repay_monthly, Bedrooms, Median_Tot_fam_inc_weekly, ICSEA, |
Median_rent_weekly, Parking | |
0.866 | Baths, Median_mortgage_repay_monthly, Bedrooms, Median_Tot_fam_inc_weekly, ICSEA, |
Median_rent_weekly, Parking, Gaol, Beach | |
0.844 | Baths, Median_mortgage_repay_monthly, Bedrooms, Median_Tot_fam_inc_weekly, ICSEA, |
Median_rent_weekly, Parking, Gaol, Beach, ChildCare |
Table 9 shows that the variables ‘Gaol’ and ‘Beach’ have similar $\hat q$ values (around 0.866), implying that they may be highly correlated. Also, ‘ChildCare’ is included right after ‘Beach’ and ‘Gaol’ with $\hat q = 0.844$, suggesting that ‘ChildCare’ may also be correlated with ‘Beach’ and ‘Gaol’. Hence, we check the group of variables that are highly correlated with ‘Gaol’, reported in table 11. Table 11 shows that ‘Airport’, ‘Rubbish’ and ‘ChildCare’ are all highly correlated with ‘Gaol’ (pairwise correlations larger than 0.5). Such high correlation may trigger the grouping effect in variable selection and potentially violate the IRC.
 | ChildCare | Airport | Rubbish | Beach
---|---|---|---|---
Gaol | 0.756 | 0.715 | 0.671 | 0.528
Based on table 11, it is necessary to check whether the IRC with respect to ‘Gaol’ is violated. Hence, we standardize all variables and estimate regression equation (3.1),
$$\text{Gaol} = \beta_0 + \beta_1\,\text{Airport} + \beta_2\,\text{ChildCare} + \beta_3\,\text{Rubbish} + \beta_4\,\text{Beach} + \varepsilon \qquad (3.1)$$
No. Observations: | F-statistic: | ||
R-squared: | Prob(F-statistic): | ||
Adj. R-squared: | Df Model: |
const | ||||
---|---|---|---|---|
Airport | ||||
ChildCare | ||||
Rubbish | ||||
Beach |
As the estimation results of (3.1) show, the collinearity between ‘Gaol’ and {ChildCare, Airport, Rubbish, Beach} is very severe: almost 90% of the variation of ‘Gaol’ can be explained by {ChildCare, Airport, Rubbish, Beach} in (3.1). Compared with the prostate cancer data, the grouping effect and multicollinearity in the Sydney house price data are much more severe. As a result, even solar may not be completely immune to the severe grouping effect. This implies that, even if one of the variables in {ChildCare, Airport, Rubbish, Beach, Gaol} is informative, variable-selection algorithms are likely to fail to identify which one. As a result, the inclusion of ‘Gaol’, ‘ChildCare’ and ‘Beach’ may actually serve as a placeholder for the group {ChildCare, Airport, Rubbish, Beach, Gaol} in the variable-selection result. To avoid being misled by the grouping effect, it is statistically reasonable to replace {ChildCare, Beach, Gaol} in the solar selection with the full group {ChildCare, Airport, Rubbish, Beach, Gaol}. We refer to this revised result as the ‘rectified solar selection’.
There is an empirical reason why {Gaol, ChildCare, Airport, Rubbish, Beach} are highly correlated with each other. All observations in the house price data were collected in East and Mid Sydney, Australia, in 2010. As shown on Google Maps, in East and Mid Sydney the gaol (Long Bay Correctional Complex), childcare centers (e.g., Blue Gum Cottage Child Care, Alouette Child Care), the airport (Kingsford Smith Airport) and waste-treatment facilities (e.g., Malabar Wastewater Treatment Plant, Sydney Desalination Plant, Cronulla Wastewater Treatment Plant, Bondi Wastewater Treatment Plant) are all concentrated along the southeast coastline of East Sydney, which explains the collinearity among those variables.
At the end of this subsection, we estimate OLS regressions based on the variable-selection results of lasso, CV-en and solar and compare their $R^2$ and adjusted $R^2$. For completeness, we also estimate an OLS regression based on the ‘rectified solar selection’ (equation (3.3)) and compare its performance with the solar regression result (equation (3.2)).
$$\text{price} = \alpha_0 + \sum_{j \in S_{\text{solar}}} \alpha_j x_j + u \qquad (3.2)$$

$$\text{price} = \gamma_0 + \sum_{j \in S_{\text{rectified}}} \gamma_j x_j + v \qquad (3.3)$$

where $S_{\text{solar}}$ and $S_{\text{rectified}}$ denote the solar and rectified-solar variable sets, respectively.
The comparison results are summarized in table 12. In this case, lasso clearly does a better job of variable selection than CV-en: with slightly better sparsity, the variables selected by lasso produce the same $R^2$ with respect to ‘price’ as CV-en does. Note that the variables lasso drops are highly correlated with those it selects, which hints at another potential grouping-effect problem. Still, the results of CV-en and lasso may be considered not sparse enough for many economic and econometric analyses. By contrast, the rectified solar selection deletes 46 variables and returns a very sparse model; moreover, this sparsity is accomplished while reducing $R^2$ by only 0.03. This clearly shows that the rectified solar selection balances $R^2$ against the number of variables better than lasso and CV-en on the Sydney house price data.
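The table 12 style comparison boils down to fitting OLS on each candidate variable set and comparing fit against sparsity. A hedged sketch using statsmodels follows; the file name `sydney_house_price.csv` and the column name `price` are hypothetical placeholders, while the covariate names follow Table 7.

```python
# Compare fit vs. sparsity for the solar and rectified-solar variable sets.
import pandas as pd
import statsmodels.formula.api as smf

house = pd.read_csv("sydney_house_price.csv")   # hypothetical local file

solar_set = ["Median_mortgage_repay_monthly", "Median_rent_weekly",
             "Median_Tot_fam_inc_weekly", "Bedrooms", "Baths", "Parking",
             "Beach", "Gaol", "ICSEA"]
# rectified selection: complete the {ChildCare, Airport, Rubbish, Beach, Gaol} group
rectified_set = solar_set + ["Airport", "ChildCare", "Rubbish"]

for name, cols in [("solar", solar_set), ("rectified solar", rectified_set)]:
    fit = smf.ols("price ~ " + " + ".join(cols), data=house).fit()
    print(f"{name}: k={len(cols)}, R2={fit.rsquared:.3f}, "
          f"adj R2={fit.rsquared_adj:.3f}")
```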
Variable-selection algorithms like lasso are frequently used as a pre-estimation variable filter for linear dependence structure estimation (also referred to as linear probabilistic graph estimation), where ‘linear’ means that the population dependence among variables is based on correlation. However, due to multicollinearity and the grouping effect, lasso may mistakenly drop informative variables, making the follow-up dependence structure estimation inaccurate. Also, on data with complicated dependence structures (e.g., the Sydney house price data), lasso and CV-en may completely lose sparsity. Since linear dependence structure estimation typically performs well only on data with large $n$ and a sparse set of variables, failing to return a sparse set of selected variables causes extra difficulty. The rectified solar selection returns a very sparse set of variables with good prediction power. As a result, it is reasonable to conduct dependence structure estimation based on the rectified solar selection. In this subsection, based on the variables in the rectified solar selection, we estimate the linear dependence structure of the Sydney house price data.
There are two general methods for linear dependence structure estimation. The first is constraint-based learning, which runs a number of conditional and marginal correlation tests among all possible pairs of variables. The second is score-based learning: assuming an exact distribution for each variable, it computes the BIC score of each possible dependence structure and selects the one with the minimal BIC score. Both methods are global, combinatorial searches, which typically require a very small number of variables. Moreover, the score-based learning algorithms are currently developed only for dependence structures with discrete or Gaussian variables. As a result, using the R package ‘bnlearn’, we conduct constraint-based learning on the rectified solar selection and estimate the dependence structure centered on the variable ‘price’, also referred to as the Markov blanket of ‘price’. We use Monte-Carlo correlation tests to purge unnecessary edges in the dependence structure graph, a linear, directed acyclic graph. Based on field knowledge and economic reasoning, we add causal directions to the remaining edges. The learning result is shown in figure 1, where {Median_Tot_fam_inc_weekly, ICSEA} are grouped manually: since ‘ICSEA’ is partially defined by ‘Median_Tot_fam_inc_weekly’, the two variables are deeply related. Likewise, {Gaol, ChildCare, Airport, Rubbish, Beach} are grouped manually due to the multicollinearity within the group.
Since all the variables in {Bedrooms, Baths, Parking, Beach, Airport, ChildCare, Rubbish, ICSEA, Median_mortgage_repay_monthly, Median_rent_weekly, Median_Tot_fam_inc_weekly} are selected by solar, they are highly likely to be correlated with ‘price’ in the population, either conditionally or marginally. However, it is possible that these variables are correlated with ‘price’ in different roles: some serve as parents of ‘price’ while others serve as children or spouses. To determine the actual role of each variable, we implement both conditional and marginal correlation tests. The logic of role determination is straightforward: a parent of ‘price’ is correlated with a child of ‘price’ via ‘price’, so the parent-child correlation should shrink significantly (sometimes directly to zero) once we control for ‘price’. Put statistically, the absolute value of the marginal correlation between a parent and a child should be significantly larger than the corresponding conditional correlation. If (i) $x$, $y$ and $z$ are variables generated chronologically in that order, (ii) the marginal correlation between $x$ and $z$ is statistically significantly nonzero, and (iii) after we control for $y$, the conditional correlation between $x$ and $z$ is statistically indistinguishable from zero, then we conclude that $x$ is a parent of $y$ and $z$ is a child of $y$.
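A Monte-Carlo conditional correlation test of the kind used here can be sketched with a residual permutation scheme: partial out the conditioning variable, then compare the observed residual correlation against a permutation distribution. This is our illustration of the idea, not the exact test implemented in ‘bnlearn’.

```python
# Permutation sketch of a conditional (partial) correlation test:
# H0: corr(x, y | z) = 0.
import numpy as np

def partial_corr_test(x, y, z, n_perm=10_000, seed=0):
    rng = np.random.default_rng(seed)
    Z = np.column_stack([np.ones_like(z), z])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]   # residual of x given z
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]   # residual of y given z
    obs = abs(np.corrcoef(rx, ry)[0, 1])
    perm = np.array([abs(np.corrcoef(rng.permutation(rx), ry)[0, 1])
                     for _ in range(n_perm)])
    return (perm >= obs).mean()   # permutation p-value

# Usage on the (hypothetical) house DataFrame from the earlier sketch:
# p = partial_corr_test(house["Baths"].to_numpy(),
#                       house["Median_rent_weekly"].to_numpy(),
#                       house["price"].to_numpy())
```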
For example, in our data, the marginal correlations of ‘Median_mortgage_repay_monthly’ and ‘Median_rent_weekly’ with ‘Baths’ are both statistically significantly nonzero. However, after we control for ‘price’, the corresponding conditional correlations both reduce to almost 0. We therefore conduct the following two Monte-Carlo conditional correlation tests,
$$H_0\colon \operatorname{corr}\left(\text{Median\_mortgage\_repay\_monthly},\ \text{Baths} \mid \text{price}\right) = 0 \qquad (3.4)$$

$$H_0\colon \operatorname{corr}\left(\text{Median\_rent\_weekly},\ \text{Baths} \mid \text{price}\right) = 0 \qquad (3.5)$$
The p-values of both tests imply that the corresponding conditional correlations are statistically indistinguishable from zero. Since all houses in our data are second-hand, we know that (i) mortgaging and leasing typically happen after the house price is determined; (ii) as part of the construction, the number of bathrooms is determined before the sale of the house. Hence, the directions on the causal paths are

$$\text{Baths} \rightarrow \text{price} \rightarrow \text{Median\_mortgage\_repay\_monthly}$$

and

$$\text{Baths} \rightarrow \text{price} \rightarrow \text{Median\_rent\_weekly}.$$
Likewise, we run similar marginal/conditional correlation tests on the other variables. Since our data covers second-hand houses, both the house features and the distances of each house to specific locations are determined before the house price on the second-hand market, implying that {Bedrooms, Baths, Parking, Beach, Airport, ChildCare, Rubbish} are determined before {Median_mortgage_repay_monthly, Median_rent_weekly}, which are computed at the end of the year. It turns out that, after controlling for ‘price’, the absolute correlations between ‘Parking’ and ‘Median_mortgage_repay_monthly’ and between ‘Parking’ and ‘Median_rent_weekly’ are clearly reduced, but not all the way to zero. Moreover, the conditional correlation tests do reject the following hypotheses:
$$H_0\colon \operatorname{corr}\left(\text{Parking},\ \text{Median\_mortgage\_repay\_monthly} \mid \text{price}\right) = 0 \qquad (3.6)$$

$$H_0\colon \operatorname{corr}\left(\text{Parking},\ \text{Median\_rent\_weekly} \mid \text{price}\right) = 0 \qquad (3.7)$$
The reduction of the absolute correlations after controlling for ‘price’ implies that ‘price’ is an intermediate variable on one of the indirect causal paths between ‘Parking’ and ‘Median_rent_weekly’. However, the rejection of (3.6) and (3.7) implies that there may be more than one causal relation (direct or indirect) between ‘Parking’ and ‘Median_rent_weekly’. Since Expectation-Maximization-based tests would be required for a complete estimation of the dependence structure, and solar only serves as a pre-estimation filter, we stop here and allow ‘Parking’ to influence ‘Median_mortgage_repay_monthly’/‘Median_rent_weekly’ both directly and indirectly (via ‘price’).
For a similar reason, in the graph {Gaol, ChildCare, Airport, Rubbish, Beach} causes ‘Mortgage’ and ‘Rent’ both directly and indirectly (via ‘price’). One possible explanation is that leasing and sale are two different real-estate markets whose dynamics may work differently. Given the features of the house, in the sale market the price is determined by the bargaining between supply and demand; the mortgage repayment, by contrast, is determined by the bank based on the sale price, the house features, the wealth/income of the applicant and the possible rent payment of the house.
The only causal directions that cannot be determined in this data are those between ‘price’ and {Median_Tot_fam_inc_weekly, ICSEA}. On one hand, families with higher incomes can afford more expensive houses, implying that family income causes house price. On the other hand, the median family income of the local suburb and the ICSEA score of the local school will both increase after a wealthy family moves in. Since these variables are only observed once in our data, we cannot determine the corresponding causal directions. By conducting different marginal/conditional tests, we only know that these two variables are both directly and indirectly (via ‘price’) correlated with both ‘Median_mortgage_repay_monthly’ and ‘Median_rent_weekly’.
In this paper we demonstrate the performance of solar variable selection on empirical data with severe multicollinearity and, hence, a severe grouping effect. As a competitor of solar, lasso is affected by the grouping effect and returns unreliable variable-selection results; CV-en, even though more robust to the grouping effect than lasso, loses the sparsity of its variable-selection result when $p$ gets large. By contrast, solar returns a stable and sparse variable-selection result and shows better robustness to the grouping effect. As a result, the advantage of solar that we demonstrated in the simulations of the last chapter is verified on empirical data.
Chickering, D. M., Heckerman, D., Meek, C., 2004. Large-sample learning of Bayesian networks is NP-hard. Journal of Machine Learning Research 5, 1287–1330.
Farrar, D. E., Glauber, R. R., 1967. Multicollinearity in regression analysis: the problem revisited. The Review of Economics and Statistics, 92–107.