In most medical research settings, we aim to learn the effect of a certain exposure on the risk of a specific disease in a well-defined target population. To achieve this goal, well-designed trials are usually used. However, the sample sizes of such studies are often limited by the high cost of recruitment, so a study is typically powered only to detect the effect of the primary exposure of interest. Meanwhile, observational studies and trials from other populations are accumulating quickly. Can we use information from these data to improve inference for the population from which the small study is drawn? Here we refer to the randomized clinical trial data, which have a clearly defined target population by design and sampling scheme, as the "small data", and to other external data as the "big data". Our goal is to efficiently combine the information from these two types of data to obtain more accurate estimates of the association between readily available quantities present in both data sets (e.g., age, gender) and disease risk in the population from which the small data are drawn. Since the distributions of the predictors, as well as the relationship between these predictors and the event of interest, are likely to differ between the big data and the small data, we risk introducing bias when using information from the big data. However, given its size, the big data can still provide useful information for risk prediction in the target population, assuming the two populations share a certain degree of similarity in the form of their prediction models. This motivates us to seek an estimator that improves on using the small data alone through a bias-variance trade-off. Directly pooling the two data sets can lead to substantial bias and increased mean squared error (MSE) when the difference between the two sources is large. We therefore look for estimators that never lead to higher MSE than using the small data alone, and that reduce the MSE in favorable situations.
Chen, Owen, and Shi (2015) studied the combination of regression results from small and big data in the linear regression setting and showed a Stein-type result for Gaussian responses: the use of the small data alone is inadmissible when the degrees of freedom are more than 10. Gross and Tibshirani (2016) proposed the data shared Lasso to achieve this in the linear regression setting. However, similar enhanced regression approaches for non-Gaussian outcomes have not been fully studied. To our knowledge, existing risk prediction work mostly relies on the assumption that certain reduced marginal models, or marginal information from the big data, are accurate (Cheng et al., 2018; Chatterjee et al., 2016). In this work, we fill this gap for risk prediction with a binary outcome, combining information from small and big data under an alternative structural assumption: we assume that the two populations share the same effect structure rather than the same effect magnitudes.
The structure of the paper is as follows. In Section 2, we introduce the notation and models, followed by our proposed estimators and their connection to existing estimators. In Section 3, we study the finite sample performance of the different estimators and show the improvement achieved by our proposal. In Section 4, we provide theoretical results guaranteeing that our proposed estimator is no worse than the small-data-only analysis in terms of MSE. In Section 5, we apply our method to the Asia Cohort Consortium data, with sensitivity analyses for potential violations of the model assumptions. In Section 6, we discuss potential extensions of our proposed estimator to more general settings.
2.1 Notation and Model
We denote our outcome of interest by $Y$ and the design matrices by $X_s$ and $X_b$, where $n_s$ and $n_b$ represent the sample sizes of the small data and the big data, respectively. In general, we reserve the subscripts $b$ and $s$ for quantities related to the big data and the small data, respectively. Since the outcome of interest is binary (disease occurrence), denoting the $i$th outcome by $Y_i \in \{0, 1\}$, we assume logistic regression models for both the small data and the big data and write them as
$$\mathrm{logit}\, P(Y_{si} = 1 \mid X_{si}) = X_{si}^T \beta_s, \qquad \mathrm{logit}\, P(Y_{bi} = 1 \mid X_{bi}) = X_{bi}^T \beta_b,$$
where $\beta_s$ and $\beta_b$ are unknown regression parameters. Our goal is to obtain an accurate estimate of $\beta_s$, treating $\beta_b$ as a nuisance parameter, using the information from both the small and the big data. In this project, we propose novel weighted shrinkage estimators and relate them to penalized-regression-based estimators. We compare the performance of the weighted estimators, the penalized estimators, and the James-Stein type shrinkage estimator (James and Stein, 1961; Efron and Morris, 1973).
2.2 Penalized regression based estimators
Here we first introduce penalized-regression-based estimators that can be used to integrate the information from the two data sets. Writing $\beta_b = \beta_s + \gamma$, we consider minimizing the following objective function to obtain an estimate of $\beta_s$, where a penalty is placed on the deviation $\gamma$ only:
$$-\ell_s(\beta_s) - \ell_b(\beta_s + \gamma) + \lambda\, p(\gamma),$$
where $\ell_s$ and $\ell_b$ are the logistic log-likelihoods of the small and the big data and $\lambda\, p(\gamma)$ is the penalty term. The estimators from the above optimization problem can be implemented with penalized logistic regression software such as glmnet in R (Friedman, Hastie, and Tibshirani, 2008), using an augmented design matrix with rows $(X_{si}^T, 0^T)$ for the small data and $(X_{bi}^T, X_{bi}^T)$ for the big data, and a penalty factor of zero on the $\beta_s$ block. The form of the penalty term is flexible: for example, $p(\gamma) = \|\gamma\|_1$ for the LASSO penalty (Tibshirani, 1996; Efron et al., 2004) and $p(\gamma) = \|\gamma\|_2^2$ for the ridge penalty. Other penalties such as the elastic net (Zou and Hastie, 2005), SCAD (Fan and Li, 2001), and MCP (Zhang, 2010) can also be used, but for comparison purposes we use the $\ell_1$ and $\ell_2$ penalties to represent the performance of this class of estimators.
When prediction in the small data set is more important than estimation of the regression parameter $\beta_s$, instead of penalizing the parameter deviation $\gamma$, we may penalize the extra linear predictor $X_b \gamma$, replacing $p(\gamma)$ by $p(X_b \gamma)$. The tuning parameter $\lambda$ can be determined via $K$-fold cross-validation (Arlot and Celisse, 2010) in the small data set.
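To make the augmented-design implementation concrete, the following sketch fits the shared-parameter logistic model with the ridge version of the penalty on $\gamma$ only, using Newton-Raphson on the stacked design; the function name and data layout are illustrative assumptions, not part of the formal proposal.

```python
import numpy as np

def ridge_logistic_shared(Xs, ys, Xb, yb, lam, n_iter=50):
    """Fit logistic models sharing beta_s, with big-data coefficients
    beta_s + gamma, penalizing lam * ||gamma||^2 only (ridge on the
    deviation). Uses Newton-Raphson on the augmented design
    [[Xs, 0], [Xb, Xb]]."""
    ns, p = Xs.shape
    nb = Xb.shape[0]
    Z = np.block([[Xs, np.zeros((ns, p))], [Xb, Xb]])
    y = np.concatenate([ys, yb])
    # Penalty matrix: zero on the beta_s block, lam on the gamma block.
    P = np.diag(np.r_[np.zeros(p), np.full(p, lam)])
    theta = np.zeros(2 * p)
    for _ in range(n_iter):
        mu = 1.0 / (1.0 + np.exp(-Z @ theta))
        w = mu * (1.0 - mu)
        grad = Z.T @ (y - mu) - 2.0 * P @ theta
        step = np.linalg.solve((Z.T * w) @ Z + 2.0 * P, grad)
        theta = theta + step
        if np.max(np.abs(step)) < 1e-10:
            break
    return theta[:p], theta[p:]  # (beta_s estimate, gamma estimate)
```

Sending $\lambda \to \infty$ forces $\gamma \to 0$ and recovers the pooled fit, while $\lambda \to 0$ decouples the two data sets.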
2.3 Weighted shrinkage estimator
We propose an alternative to the penalized regression method via a weighted shrinkage approach. This approach has been shown to be useful under the linear regression model (Chen, Owen, and Shi, 2015); however, as we will see in this section, its application to this nonlinear model is not straightforward.
The basic idea of the weighted shrinkage estimator is to first fit the logistic regression model in the small data and the big data separately, obtaining $\hat\beta_s$ and $\hat\beta_b$ as estimators of $\beta_s$ and $\beta_b$, and then combine the two estimators through a weighted average
$$\hat\beta_W = W \hat\beta_s + (I - W) \hat\beta_b$$
for a specific weight matrix $W$. When $W = I$, this is just the small-data-only estimator of $\beta_s$, and when $W = (\mathcal{I}_s + \mathcal{I}_b)^{-1} \mathcal{I}_s$, with $\mathcal{I}_s$ and $\mathcal{I}_b$ the information matrices of the two data sets, $\hat\beta_W$ can be approximately viewed as a pooled estimator of the small and the big data assuming $\beta_s = \beta_b$.
Clearly, the performance of $\hat\beta_W$ depends heavily on the choice of the weight matrix $W$. The major goal here is to find the optimal weight as a function of $\hat\beta_s$, $\hat\beta_b$, and the data $(X_s, X_b)$, where $X_s$ and $X_b$ are the design matrices of the small and the large data, respectively. We define the optimal weight as the weight that minimizes the coefficient estimation error $E\|\hat\beta_W - \beta_s\|^2$.
To derive the form of the optimal weight, we use asymptotic expansions of $\hat\beta_s$ and $\hat\beta_b$. The optimal weight obtained via a second-order approximation is denoted $W_2$, and the optimal weight obtained via a higher-order Edgeworth expansion (Hall, 1992) is denoted $W_h$. For each of these weights, we can plug in $\hat\beta_s$ and $\hat\beta_b$ for $\beta_s$ and $\beta_b$ to obtain estimated versions, $\hat W_2$ and $\hat W_h$.
For the James-Stein estimator, we use the form from An, Fung, and Krewski (2010), i.e.,
$$\hat\beta_{JS} = \hat\beta_b + c\,(\hat\beta_s - \hat\beta_b), \qquad c = 1 - \frac{p - 2}{T},$$
or its positive-part version with $c^+ = \max\{0, c\}$, where $p$ is the dimension of $\beta_s$ and
$$T = (\hat\beta_s - \hat\beta_b)^T \hat V_s^{-1} (\hat\beta_s - \hat\beta_b)$$
is the test statistic for $H_0: \beta_s = \beta_b$, with $\hat V_s$ the estimated variance of $\hat\beta_s$.
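As a minimal sketch of this combination (assuming the standard positive-part James-Stein form with shrinkage driven by the test statistic $T$; the function name is illustrative):

```python
import numpy as np

def js_positive_part(beta_s_hat, V_s_hat, beta_b_hat):
    """Positive-part James-Stein combination (p > 2 assumed): shrink the
    small-data MLE toward the big-data MLE by the factor
    c+ = max(0, 1 - (p - 2)/T), where T is the Wald statistic for
    H0: beta_s = beta_b based on the small-data covariance."""
    d = beta_s_hat - beta_b_hat
    T = d @ np.linalg.solve(V_s_hat, d)
    p = len(beta_s_hat)
    c = max(0.0, 1.0 - (p - 2) / T)
    return beta_b_hat + c * d
```

When $T$ is large relative to $p - 2$, little shrinkage toward the big data occurs; when $T < p - 2$, the estimate collapses entirely to $\hat\beta_b$.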
Now we provide more details on how these weights are obtained, beginning with the second-order approximation. Using the standard expansion of the logistic regression estimator, we have
$$\hat\beta_s = \beta_s + Z_s + O_p(n_s^{-1}), \qquad \hat\beta_b = \beta_b + Z_b + O_p(n_b^{-1}),$$
where $Z_s$ and $Z_b$ are mean-zero terms whose variances $V_s$ and $V_b$ are the variances of $\hat\beta_s$ and $\hat\beta_b$. Ignoring the $O_p(n_s^{-1})$ and $O_p(n_b^{-1})$ terms in the above expansion, the MSE $E\|\hat\beta_W - \beta_s\|^2$ is approximately
$$\mathrm{tr}(W V_s W^T) + \mathrm{tr}\{(I - W)(V_b + \Delta\Delta^T)(I - W)^T\},$$
which leads to the second-order optimal weight
$$W_2 = (V_b + \Delta\Delta^T)(V_s + V_b + \Delta\Delta^T)^{-1},$$
where $\Delta = \beta_b - \beta_s$, and the corresponding estimated second-order optimal weight $\hat W_2$, in which $V_s$, $V_b$, and $\Delta$ are replaced by their consistent estimators.
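The estimated second-order optimal weight can be computed directly from the two separate fits. The sketch below, with illustrative function names, assumes plain Newton-Raphson logistic fits and plugs $\hat\Delta = \hat\beta_b - \hat\beta_s$ and the estimated variances into the formula $W_2 = (V_b + \Delta\Delta^T)(V_s + V_b + \Delta\Delta^T)^{-1}$:

```python
import numpy as np

def fit_logistic(X, y, n_iter=50):
    """Newton-Raphson logistic fit; returns the MLE and the estimated
    covariance matrix (inverse observed information)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = 1.0 / (1.0 + np.exp(-X @ beta))
        step = np.linalg.solve((X.T * (mu * (1.0 - mu))) @ X, X.T @ (y - mu))
        beta = beta + step
        if np.max(np.abs(step)) < 1e-10:
            break
    mu = 1.0 / (1.0 + np.exp(-X @ beta))
    V = np.linalg.inv((X.T * (mu * (1.0 - mu))) @ X)
    return beta, V

def w2_combined(Xs, ys, Xb, yb):
    """Combine separate small- and big-data fits with the plug-in
    second-order optimal weight W2 = (Vb + dd')(Vs + Vb + dd')^{-1},
    where d = beta_b_hat - beta_s_hat."""
    bs, Vs = fit_logistic(Xs, ys)
    bb, Vb = fit_logistic(Xb, yb)
    d = bb - bs
    D = np.outer(d, d)
    W2 = (Vb + D) @ np.linalg.inv(Vs + Vb + D)
    return W2 @ bs + (np.eye(len(bs)) - W2) @ bb
```

When the two fits coincide ($\hat\Delta = 0$), the combined estimator reduces to the common fit, and as $\hat\Delta$ grows the weight on the big data shrinks.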
For the higher-order approximation, we use the expansion
$$\hat\beta_s = \beta_s + Z_s + R_s + O_p(n_s^{-3/2}), \qquad \hat\beta_b = \beta_b + Z_b + R_b + O_p(n_b^{-3/2}),$$
where $R_s$ and $R_b$ are the $O_p(n_s^{-1})$ and $O_p(n_b^{-1})$ higher-order terms; their expressions can be found in the appendix. Collecting the first two moments of these expansions and ignoring the $O_p(n_s^{-3/2})$ and $O_p(n_b^{-3/2})$ remainders, the approximate MSE $E\|\hat\beta_W - \beta_s\|^2$ is minimized at the higher-order optimal weight $W_h$, whose explicit expression, involving the means and covariances of $Z_s$, $Z_b$, $R_s$, and $R_b$, is given in the appendix. The estimated version $\hat W_h$ is obtained by replacing the unknown quantities in $W_h$ with their consistent estimators.
2.4 Relationship between two types of estimators
The shrinkage estimator defined above is closely related to the penalized estimator: using the asymptotic linear expansion of the GLM estimators in equation 2, $\hat\beta_W$ can be written as the solution of a quadratically penalized problem, where the higher-order terms in equation 5 can be obtained from the Edgeworth expansion (Sun, Loader, and McCormick, 2000). To make this connection explicit, consider the quadratically penalized version of the objective in Section 2.2: find $\hat\beta_s$ and $\hat\gamma$ that minimize
$$-\ell_s(\beta_s) - \ell_b(\beta_s + \gamma) + \gamma^T \Lambda \gamma.$$
Denote the information matrices of the two data sets by $\mathcal{I}_s$ and $\mathcal{I}_b$. Profiling out $\gamma$ and linearizing the score equation for $\beta_s$ around $(\hat\beta_s, \hat\beta_b)$ gives the approximation
$$\tilde\beta_s \approx W_\Lambda \hat\beta_s + (I - W_\Lambda)\hat\beta_b, \qquad W_\Lambda = (\mathcal{I}_s + A_\Lambda)^{-1}\mathcal{I}_s, \quad A_\Lambda = \mathcal{I}_b(\mathcal{I}_b + 2\Lambda)^{-1}\, 2\Lambda.$$
As $\Lambda \to 0$ we recover the small-data-only estimator ($W_\Lambda = I$), and as $\Lambda \to \infty$ we recover the pooled estimator. Thus, with different choices of the penalty matrix $\Lambda$, the penalized estimator is asymptotically equivalent to a weighted estimator.
To see how much efficiency we can gain using our proposed estimator in finite samples, we use a detailed simulation exercise, described below, to compare the different methods. In these simulations, we consider different values of $p$, the dimension of $\beta_s$ (including the intercept), while keeping the norm $\|\beta_s\|$ fixed. We also vary $\Delta = \beta_b - \beta_s$, the amount of bias in the big data, over a grid of values of the relative bias $\|\Delta\|/\|\beta_s\|$. We generate the covariates of the small data and the big data from the same Gaussian distribution, with the covariates assumed to be uncorrelated with each other. We also vary the size of the small data, $n_s$, over a grid of values, while we consider two fixed sizes for the big data, $n_b = 1000$ and $n_b = 10000$. For each simulation, we generate the outcomes based on our assumed logistic models for the small and the big data, given in equation 1.
For each simulation scenario, we perform 100 simulations to compute the mean squared error (MSE) in estimation, $E\|\hat\beta - \beta_s\|^2$. We obtain estimates of $\beta_s$ by (1) using the small data only (Small), (2) pooling the big and the small data (Pool), (3) weighting with the estimated optimal weight from the second-order approximation ($\hat W_2$), (4) weighting with the estimated optimal weight from the higher-order approximation ($\hat W_h$), (5) $\ell_1$ penalized regression (L1), (6) $\ell_2$ penalized regression (L2), or (7) the positive-part James-Stein weighted estimator (JSP).
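A stripped-down version of this simulation, comparing only the Small, Pool, and $\hat W_2$ procedures for one configuration (the sample sizes, bias direction, and coefficient values below are illustrative, not the paper's exact grid), can be sketched as:

```python
import numpy as np

def fit_logistic(X, y, n_iter=50):
    """Newton-Raphson logistic fit; returns MLE and estimated covariance."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = 1.0 / (1.0 + np.exp(-X @ beta))
        step = np.linalg.solve((X.T * (mu * (1.0 - mu))) @ X, X.T @ (y - mu))
        beta = beta + step
        if np.max(np.abs(step)) < 1e-10:
            break
    mu = 1.0 / (1.0 + np.exp(-X @ beta))
    return beta, np.linalg.inv((X.T * (mu * (1.0 - mu))) @ X)

def simulate_mse(ns=200, nb=2000, bias=0.3, n_rep=40, seed=3):
    """Monte Carlo MSE of the Small, Pool, and W2 estimators for one
    illustrative configuration (p = 3 with intercept, bias in one slope)."""
    rng = np.random.default_rng(seed)
    beta_s = np.array([-1.0, 0.5, -0.5])
    beta_b = beta_s + bias * np.array([0.0, 1.0, 0.0])
    err = {"Small": 0.0, "Pool": 0.0, "W2": 0.0}
    for _ in range(n_rep):
        Xs = np.c_[np.ones(ns), rng.normal(size=(ns, 2))]
        Xb = np.c_[np.ones(nb), rng.normal(size=(nb, 2))]
        ys = rng.binomial(1, 1.0 / (1.0 + np.exp(-Xs @ beta_s)))
        yb = rng.binomial(1, 1.0 / (1.0 + np.exp(-Xb @ beta_b)))
        bs, Vs = fit_logistic(Xs, ys)
        bb, Vb = fit_logistic(Xb, yb)
        bp, _ = fit_logistic(np.r_[Xs, Xb], np.r_[ys, yb])
        d = bb - bs
        D = np.outer(d, d)
        W2 = (Vb + D) @ np.linalg.inv(Vs + Vb + D)   # plug-in optimal weight
        bw = W2 @ bs + (np.eye(3) - W2) @ bb
        err["Small"] += np.sum((bs - beta_s) ** 2)
        err["Pool"] += np.sum((bp - beta_s) ** 2)
        err["W2"] += np.sum((bw - beta_s) ** 2)
    return {k: v / n_rep for k, v in err.items()}
```

In configurations like this one, the weighted estimator is expected to show lower average estimation error than the small-data-only fit, while naive pooling suffers from the bias in the big data.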
The simulation results are shown in Figures 1 and 4. Each figure presents the different simulation settings for a fixed size of the big data: Figure 1 shows plots for $n_b = 1000$, and Figure 4 shows the same for $n_b = 10000$. In each figure, nine plots are presented in a grid of three rows and three columns, where the columns correspond to particular values of the relative bias $\|\Delta\|/\|\beta_s\|$ (increasing from left to right) and the rows correspond to different dimensions $p$ (increasing from top to bottom). Each plot in the grid presents the log-transformed ratio of the MSE of $\hat\beta_s$ under each procedure (Small, Pool, $\hat W_2$, $\hat W_h$, L1, L2, JSP) to that of the small-data-only analysis (Small), as a function of the varying size $n_s$. From the plots, we can make the following observations:
The performance of the $\hat W_2$ and $\hat W_h$ procedures is very close in every setting, suggesting that the second-order optimal weight probably suffices for our problem (in fact, it is difficult to visually distinguish the graph for $\hat W_h$, as it is exactly overlaid by the graph for $\hat W_2$).
In every simulation setting, the $\hat W_2$ and $\hat W_h$ procedures outperform every other method. Their gain over the next best performing method increases with the dimension $p$ and with the relative bias $\|\Delta\|/\|\beta_s\|$. The same trend is observed for both sizes of the big data ($n_b = 1000$ and $10000$).
The $\ell_2$ penalized procedure (L2) is the third best performing method overall (after $\hat W_2$ and $\hat W_h$), and its performance is similar to the $\hat W_2$/$\hat W_h$ procedures when the dimension $p$ is small and the relative bias $\|\Delta\|/\|\beta_s\|$ is low.
The pooled procedure is the worst performing method overall and is quite sensitive to the bias $\Delta$. Although it shows relatively good performance when the dimension $p$ is small and the relative bias is low, its performance becomes very poor as the dimension and, especially, the bias increase. Apart from JSP in some scenarios, it is the only procedure that shows extremely elevated MSEs in comparison to the small-data-only analysis.
The performance of the proposed weighted procedure is similar to JSP in some settings but better in others. For example, with increasing dimension and increasing bias, the JSP procedure sometimes has higher MSE than the small-data-only analysis (Small), especially when the size of the small data is on the higher end, whereas the proposed weighted procedure always performs better than Small.
All methods (except Pool and, in some instances, JSP) show lower MSE than the estimates obtained from the small data alone, and the gain in efficiency is most pronounced when the size of the small data is small.
4 Theoretical Results
The simulation results indicate that our proposed estimator always outperforms the small-data-only analysis in terms of MSE, and the improvement is sometimes substantial. However, to apply the proposed method in general, a natural question is whether there exist scenarios in which the proposed estimator underperforms the small-data-only analysis, especially when the difference between the two sources of data is large. To answer this question, we summarize the theoretical guarantees for our proposed weighted estimators in the following theorems; the detailed expressions and proofs can be found in the appendix.
Theorem 1: The second-order optimal weight $W_2$ and its estimated version $\hat W_2$ approximately minimize $E\|\hat\beta_W - \beta_s\|^2$ at level $o(n_s^{-1})$ in the sense that
$$E\|\hat\beta_{\hat W_2} - \beta_s\|^2 \le \inf_W E\|\hat\beta_W - \beta_s\|^2 + o(n_s^{-1}),$$
where the infimum is taken over all random weight matrices $W$ that are measurable given the data. $W_2$ and $\hat W_2$ are also approximately optimal weights for prediction purposes in the sense that
$$E\|X_s(\hat\beta_{\hat W_2} - \beta_s)\|^2/n_s \le \inf_W E\|X_s(\hat\beta_W - \beta_s)\|^2/n_s + o(n_s^{-1}).$$
Theorem 2: The higher-order optimal weight $W_h$ and its estimated version $\hat W_h$ approximately minimize $E\|\hat\beta_W - \beta_s\|^2$ at level $o(n_s^{-3/2})$ in the sense that
$$E\|\hat\beta_{\hat W_h} - \beta_s\|^2 \le \inf_W E\|\hat\beta_W - \beta_s\|^2 + o(n_s^{-3/2}).$$
Theorem 3: Under the regularity conditions stated in the appendix, the weighted estimator based on the estimated higher-order optimal weight is more efficient than using the small data only, i.e.,
$$E\|\hat\beta_{\hat W_h} - \beta_s\|^2 \le E\|\hat\beta_s - \beta_s\|^2$$
holds asymptotically as $n_s \to \infty$.
Here we outline the general idea of the proofs; the details can be found in the appendix. We use the second- or higher-order expansion of $\hat\beta_W$ to obtain the optimal weight as a function of $\beta_s$, $\beta_b$, and the data, and the improvement from using this weight matrix is of order $O(n_s^{-1})$ in comparison to using the small data only. We then show that the difference between the MSE based on the estimated optimal weight and that based on the oracle optimal weight is of smaller order, $o(n_s^{-1})$.
5 Analysis of ACC data
The Asia Cohort Consortium (ACC) is a collaborative effort born out of the need to study Asian populations, seeking to understand the relationship between genetics, environmental exposures, and disease etiology through the establishment of a cohort of at least one million healthy people across different countries in Asia, followed over time for various disease endpoints and death. This pooling project, with its huge sample size across 29 subcohorts from 10 Asian countries (https://www.asiacohort.org/ParticipatingCohorts/index.html), provides an excellent opportunity to explore informative relationships (associations of exposures with disease, genomic variability with disease, etc.) among major Asian ethnic groups.
Over the last few decades, obesity has become an important health issue in many countries. According to World Health Organization estimates, more than a billion adults around the world are overweight, and at least 300 million of them are obese (see Abelson and Kennedy, 2004). Many epidemiological studies have found associations between the body-mass index (BMI) and a variety of health outcomes, including mortality (see Haslam and James, 2005). However, most of these inferences have been drawn from studies in populations of European origin, and very little attention has been given to the relationship between BMI and the overall risk of death among Asians, who account for more than 60% of the world population (see Zheng et al., 2011). The data collected as part of the ACC can be used to answer these important questions.
To show the usefulness of our proposed methodology in a practical setting, we use data from the ACC to explore the relationship between BMI and mortality. In particular, we concentrate only on the cohorts from China: data from the Shanghai Cohort Study (SCS) form our small data, while data from the rest of the Chinese subcohorts, the China Hypertension Survey Epidemiology Follow-up Study (CHEFS), the Linxian General Population Trial Cohort, the Shanghai Men's Health Study (SMHS), and the Shanghai Women's Health Study (SWHS), are pooled together to form the initial big data. Since the SCS cohort only included males, we restrict the big data to male participants from the other subcohorts (which completely excludes the SWHS). For individuals in the small data, enrollment started in 1986 and the study continued until 2007, while for the pooled large data, enrollment started in 1985 and the last year of follow-up is 2011. Missingness in the covariates is not a big concern (no missingness in the small data, and only 0.79% missingness in the large data). The baseline age distribution is found to differ between the small and the large data, and since mortality is a strong function of age, for better comparability we restrict the two data sets to individuals whose baseline age varied between 50 and 60. Because the methods described in this paper pertain to binary outcomes only, and the duration of follow-up varies across individuals in the two data sets, we consider only the first year of follow-up for each individual. Firstly, this makes the binary mortality statuses comparable for individuals in the two data sets, and secondly, the short period of follow-up ensures that we do not lose too many individuals to loss to follow-up. Such individuals form only 0.06% of the small data and 2.88% of the large data, and are removed from the analysis.
After performing these data management steps, the small data contain 10675 individuals with 40 mortality events, while the large data contain 46779 individuals with 206 events. Apart from BMI, baseline age is included as a covariate in the model, as well as indicators of each individual's smoking and drinking habits, as these covariates have been shown to be important predictors of mortality in many settings.
We start by analyzing the small and the big data separately, using the standard logistic regression model, and then by pooling them together. We then estimate the regression coefficients using the proposed weighted shrinkage methods, namely with the optimal second-order weights ($\hat W_2$) and the optimal higher-order Edgeworth weights ($\hat W_h$), and, for comparison, the James-Stein weights (JSP). We also obtain the penalized estimates using the $\ell_1$ and $\ell_2$ procedures.
The estimates and their standard errors for the various procedures are presented below in Table 1. As can be seen, the pooled procedure attains the lowest standard errors, owing to the fact that it uses the entirety of the big and the small data. However, this also means that the pooled estimates are inherently biased toward those obtained from the big data alone, which, because of its size, contains far more information than the small data; naive pooling thus inappropriately shifts most of the weight to the big data. The weighted shrinkage procedures seem better calibrated in this respect, with estimates shrunk somewhat but much closer to those from the small data, yet with much lower standard errors than the small-data estimates. The optimal second-order and higher-order Edgeworth weights ($\hat W_2$ and $\hat W_h$) perform similarly in this regard, and have lower standard errors than the James-Stein adjusted weights (JSP), except for BMI, for which JSP obtains a lower standard error than $\hat W_2$ or $\hat W_h$; however, the BMI estimate from JSP is shrunk completely to the one obtained from the big data alone. Among the penalized procedures, $\ell_2$ seems to borrow more strength from the big data, and thus has lower standard errors and a greater amount of shrinkage, while the $\ell_1$ estimates are closer to the small-data estimates and thus have higher standard errors.
Table 1: Estimated regression coefficients (Est) for the ACC data under the different procedures.

| Covariate | | Small | Big | Pool | $\hat W_2$ | $\hat W_h$ | JSP | $\ell_2$ | $\ell_1$ |
| Age at Baseline | Est | 0.03 | 0.12 | 0.10 | 0.04 | 0.04 | 0.05 | 0.10 | 0.03 |
| Body Mass Index | Est | -0.14 | -0.09 | -0.09 | -0.13 | -0.13 | -0.09 | -0.12 | -0.14 |
| Ever Used Alcohol | Est | 0.06 | -0.37 | -0.30 | 0.02 | 0.02 | -0.13 | -0.02 | 0.06 |
In this paper, we proposed estimators that allow more accurate estimation of the regression coefficients and of the predicted risk for a target population, using information from a different population with more observations. Although the expansions and the detailed forms of the weights we provided are specific to logistic regression, the optimal weight formulas are general in terms of the expansion terms. The framework proposed here could therefore be extended straightforwardly to generalized linear models and estimating-equation models, though the more complicated Edgeworth expansions for these estimating-equation-based estimators would need to be derived to obtain the optimal weights.
To utilize the big data, although we do not need to know the exact relationship between the small and big data in terms of association strength, we do need the model form in the big data to be correctly specified. In our setting, the same logistic form needs to hold for both the big and the small data. When the covariates are few and categorical, this assumption is weak and easy to satisfy. When there are continuous covariates, we can apply existing model checking tools to the big data to check whether the model assumption holds.
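For instance, one such standard model checking tool is the Hosmer-Lemeshow goodness-of-fit test, which can be applied to the fitted probabilities from the big data; a minimal sketch (grouping by deciles of fitted risk) is:

```python
import numpy as np
from scipy.stats import chi2

def hosmer_lemeshow(y, p_hat, n_groups=10):
    """Hosmer-Lemeshow goodness-of-fit check: partition observations by
    deciles of fitted risk, then compare observed and expected event
    counts with a chi-square statistic on n_groups - 2 degrees of freedom."""
    order = np.argsort(p_hat)
    stat = 0.0
    for g in np.array_split(order, n_groups):
        obs = y[g].sum()         # observed events in the risk group
        exp = p_hat[g].sum()     # expected events under the fitted model
        pbar = exp / len(g)
        stat += (obs - exp) ** 2 / (len(g) * pbar * (1.0 - pbar))
    return stat, chi2.sf(stat, df=n_groups - 2)
```

A small p-value flags systematic miscalibration of the assumed logistic form in the big data.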
In our analysis of the ACC data, we only concentrated on the first year of follow-up, because the methods presented in this paper are relevant only for binary outcomes, and the short period of follow-up ensured that we did not lose too many individuals to loss to follow-up, which would otherwise have introduced unforeseen sources of bias into our analysis. However, in doing so, we lost the rich information contained in the time-to-event data. This shows the need to extend our methods to the case of time-to-event outcomes, and this is indeed one of our future research goals.
7 Supplementary Materials
Web Appendices referenced in Sections 2 and 3 are available with this article at the Biometrics website on Wiley Online Library.
- Abelson and Kennedy (2004) Abelson, P. and Kennedy, D. (2004). The obesity epidemic. Science, 304(5676), 1413-1413.
- An, Fung, Krewski (2010) An, L., Fung, K.Y., Krewski, D. (2010). Mining pharmacovigilance data using Bayesian logistic regression with James-Stein type shrinkage estimation. J Biopharm Stat., 20(5), 998-1012.
- Arlot and Celisse (2010) Arlot, S. and Celisse, A. (2010). A survey of cross-validation procedures for model selection. Statistics Surveys, 4, 40-79.
- Chatterjee et al. (2016) Chatterjee, N., Chen, Y., Maas, P. Carroll, R. J. (2016). Constrained maximum likelihood estimation for model calibration using summary-level information from external big data sources. Journal of the American Statistical Association, 111, 107-117.
- Chen, Owen, and Shi (2015) Chen, A., Owen, A. B., Shi, M. (2015). Data enriched linear regression. Electronic Journal of Statistics, 9, 1078-1112.
- Cheng et al. (2018) Cheng, W., Taylor, J.M.G., Gu, T., Tomlins, S. A., Mukherjee, B. (2018). Informing a risk prediction model for binary outcomes with external coefficient information. Journal of the Royal Statistical Society: Series C, early view online.
- Efron and Morris (1973) Efron, B., Morris, C. (1973). Stein’s estimation rule and its competitors—an empirical Bayes approach. Journal of the American Statistical Association 68(341), 117-130.
- Efron et al (2004) Efron, B., Hastie, T., Johnstone, I., Tibshirani, R. (2004). Least angle regression (with discussion). The Annals of Statistics 32, 407-499.
- Fan and Li (2001) Fan J, Li R. (2001) Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association 96(456), 1348-1360.
- Friedman, Hastie, Tibshirani (2008) Friedman, J., Hastie, T. and Tibshirani, R. (2008). Regularization Paths for Generalized Linear Models via Coordinate Descent. Journal of Statistical Software 33 (1), 1-22.
- Gross and Tibshirani (2016) Gross, S.M., Tibshirani, R. (2016). Data shared Lasso: A novel tool to discover uplift. Computational Statistics & Data Analysis 101, 226-235.
- Hall (1992) Hall, P. (1992). The Bootstrap and Edgeworth Expansion. Springer, New York.
- Haslam and James (2005) Haslam, D. W. and James, W. P. (2005). Obesity. Lancet, 366, 1197-1209.
- James and Stein (1961) James, W., Stein, C. (1961). Estimation with quadratic loss. Proc. Fourth Berkeley Symp. Math. Statist. Prob. 1, 361-379.
- Stein (1981) Stein, C.M. (1981). Estimation of the mean of a multivariate normal distribution. The Annals of Statistics 9, 1135-1151.
- Sun, Loader, McCormick (2000) Sun, J. Loader, C. and McCormick W.P. (2000). Confidence bands in generalized linear models. The Annals of Statistics 28, 429-460.
- Tibshirani (1996) Tibshirani, R. (1996) Regression shrinkage and selection via the Lasso. Journal of the Royal Statistical Society: Series B 58, 267-288.
- Zhang (2010) Zhang, CH. (2010) Nearly unbiased variable selection under minimax concave penalty. The Annals of Statistics 38(2), 894-942.
- Zheng et al (2011) Zheng, W., McLerran, D. F., Rolland, B. et al. (2011). Association between body-mass index and risk of death in more than 1 million Asians. New England Journal of Medicine 364 (8), 719-729.
- Zou and Hastie (2005) Zou, H., Hastie, T. (2005) Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society: Series B 67, 301-320.