Transfer learning of regression models from a sequence of datasets by penalized estimation

07/04/2020, by Wessel N. van Wieringen, et al., Amsterdam UMC

Transfer learning refers to the promising idea of initializing model fits based on pre-training on other data. We particularly consider regression modeling settings where parameter estimates from previous data can be used as anchoring points, yet may not be available for all parameters, thus covariance information cannot be reused. A procedure that updates through targeted penalized estimation, which shrinks the estimator towards a nonzero value, is presented. The parameter estimate from the previous data serves as this nonzero value when an update is sought from novel data. This naturally extends to a sequence of data sets with the same response, but potentially only partial overlap in covariates. The iteratively updated regression parameter estimator is shown to be asymptotically unbiased and consistent. The penalty parameter is chosen through constrained cross-validated loglikelihood optimization. The constraint bounds the amount of shrinkage of the updated estimator toward the current one from below. The bound aims to preserve the (updated) estimator's goodness-of-fit on all-but-the-novel data. The proposed approach is compared to other regression modeling procedures. Finally, it is illustrated on an epidemiological study where the data arrive in batches with different covariate-availability and the model is re-fitted with the availability of a novel batch.


1 Introduction

In the field of machine learning, many approaches for transfer learning have been developed, most of which amount to algorithmic procedures for pre-training a model on some other data before moving to the data of interest (Pan and Yang, 2010; Shilo, Rossman and Segal, 2020). The idea of transfer learning has also been shown to be more generally useful, e.g., for regression modeling, where it may also be more feasible to get analytical insight into algorithmic approaches (Minami et al., 2020). While transfer learning for regression models allows for differences in the response as well as in the parameters when moving from data set to data set, the framework of transfer learning typically assumes that there is an initial model and that the corresponding structure stays the same when new data become available and need to be incorporated. Transfer learning then refers to the updating of knowledge of the model parameter when new information becomes available (Shalev-Shwartz, 2012). As such it produces a sequence of parameter estimates. The latest estimate takes into account the most recent information but also exploits previously acquired data, while minimizing a certain loss function. Examples of transfer regression techniques abound. Statistical readers will be most familiar with recursive least squares (Plackett, 1950), mixed models (Verbeke and Molenberghs, 2009), or state-space models (Durbin and Koopman, 2012). In a related context, we recently showed how to perform transfer learning of the normal distribution's precision matrix through targeted ridge penalized estimation (van Wieringen, Stam, Peeters, and van de Wiel, 2020).

In the following, we consider settings where the response stays the same when moving along a sequence of data sets, but the set of covariates may change. For example, data of large epidemiological studies often arrive in batches. Each batch sheds light on the same phenomenon, e.g. an environment-disease relation, observed with possibly different covariate information. As such, with the arrival of a new batch, the currently learned model of the phenomenon under study needs updating. In this paper we show how updating can be achieved by means of penalized estimation.

Here targeted ridge penalized estimation is proposed as an enrichment of the transfer learning regression toolbox. The presented procedure updates the regression parameter estimate by minimizing a penalized loss function on the novel data, in which the shrinkage target is formed by the current parameter estimate derived from the hitherto available data. The approach is designed to not depend on covariance estimates of parameters from the previous data, as a change of available covariates will typically invalidate the usefulness of such covariance estimates. Incorporation of multiple competing current estimates, when available, is discussed. Properties, e.g. moments and consistency, of this estimator for transfer regression learning are proven. All this is then translated to the learning of the logistic regression model. The penalty parameter is chosen via constrained cross-validation to warrant learning and avoid one-off estimation. Transfer regression learning via targeted ridge penalized estimation is compared to standard alternatives. Finally, the potential of the method is illustrated on an epidemiological study.

2 Methods

2.1 Model and penalty structure

Throughout, either a linear or a logistic regression model, the most widely used variants of generalized linear models, is entertained. Both are learned by means of targeted ridge penalized estimation, an extension of ridge regression (inspired by Singh, Chaubey, and Dwivedi, 1986). It considers estimating the $p$-dimensional regression coefficient vector $\beta$ of the linear regression model, $Y = X \beta + \varepsilon$ with $\varepsilon \sim \mathcal{N}(0_n, \sigma^2 I_{nn})$, by minimizing the nonzero centered ridge regression loss function:

$$\| Y - X \beta \|_2^2 + \lambda \| \beta - \beta_0 \|_2^2, \qquad (1)$$

with penalty parameter $\lambda > 0$ and nonrandom shrinkage target $\beta_0 \in \mathbb{R}^p$. An analytic expression for the minimizer of the above loss function exists and gives rise to the nonzero centered ridge regression estimator: $\hat{\beta}(\lambda, \beta_0) = (X^\top X + \lambda I_{pp})^{-1} (X^\top Y + \lambda \beta_0)$. When $\beta_0 = 0_p$ the traditional ridge regression loss function is recovered. The motivation for this estimator becomes evident after reformulation of the penalized estimation problem above, i.e. the minimization of loss (1), as a constrained estimation problem. When $\beta_0 = 0_p$, the regression parameter constraint is centered at the origin, but when $\beta_0 \neq 0_p$ the constraint on $\beta$ is nonzero centered. Of particular interest is the case when an informative choice for $\beta_0$ is available. Then, the nonzero centered parameter constraint may contain the true value of $\beta$ and, in principle, no shrinkage for the estimate of $\beta$ is needed. The targeted ridge estimator then becomes less biased (Singh, Chaubey, and Dwivedi, 1986).
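
A minimal sketch, in R, of the targeted ridge estimator just given (the minimizer of loss (1)); the function and argument names are illustrative, not taken from the paper.

```r
# Nonzero centered (targeted) ridge estimator: (X'X + lambda I)^{-1} (X'Y + lambda b0)
targeted_ridge <- function(Y, X, lambda, b0) {
  p <- ncol(X)
  solve(crossprod(X) + lambda * diag(p),   # X'X + lambda * I
        crossprod(X, Y) + lambda * b0)     # X'Y + lambda * b0
}
```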

Two alternative interpretations of this estimator are relevant for the presented work. First, the nonzero centered ridge regression can be interpreted as an attempt to correct for the nonzero target. This is easily seen when using the change-of-variable $\gamma = \beta - \beta_0$. Substitution of $\beta = \gamma + \beta_0$ in the loss function (1) gives: $\| Y - X \beta_0 - X \gamma \|_2^2 + \lambda \| \gamma \|_2^2$. In this $Y - X \beta_0$ is the residual vector after invoking the prior knowledge. This residual is regressed on $X$ to correct for the variance in $Y$ wrongly attributed to $\beta$ by the prior knowledge as condensed in the nonzero target $\beta_0$. Secondly, from a Bayesian perspective the nonzero centered ridge penalty still corresponds (in the sense that the posterior mean coincides with the optimum of the loss function above) to a normal prior on the elements of the regression parameter $\beta$. But this prior now has a mean equal to the corresponding element of $\beta_0$.
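
A short numerical check of this change-of-variable interpretation, assuming the estimator form given above; all values and names below are illustrative.

```r
# Verify: targeted ridge on (Y, X) equals zero-centered ridge on the
# 'residual' Y - X b0, with b0 added back afterwards.
set.seed(1)
n <- 50; p <- 4
X  <- matrix(rnorm(n * p), n, p)
b0 <- rep(0.5, p)                       # shrinkage target
Y  <- X %*% rep(1, p) + rnorm(n)
lambda <- 2

bhat <- solve(crossprod(X) + lambda * diag(p), crossprod(X, Y) + lambda * b0)
ghat <- solve(crossprod(X) + lambda * diag(p), crossprod(X, Y - X %*% b0))
all.equal(c(bhat), c(ghat) + b0)        # TRUE: both formulations coincide
```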

2.2 Updating the linear regression model

The targeted ridge regression estimator introduced in the previous section is put to use in the case where multiple data sets become available sequentially at time points $t = 1, 2, \ldots$. At each time $t$ we fit the linear regression model (now equipped with the index $t$ to separate the data sets): $Y_t = X_t \beta + \varepsilon_t$ with $\varepsilon_t \sim \mathcal{N}(0_{n_t}, \sigma_t^2 I_{n_t n_t})$, using knowledge from the preceding data sets. To this end we use the nonzero centered ridge regression framework. Note that the design matrix $X_t$ may differ among data sets, in the sense that for the theoretical exposé the number of covariates is fixed but their settings and the sample size are allowed to change. The effect of these covariates on the response, however, is assumed to be fixed through time.

At the arrival of the new, $t$-th data set, the estimates of the parameters $\beta$ and $\sigma^2$ obtained from the previous $t-1$ data set(s) are available. These may be used as prior knowledge in the updating of the parameter estimates from the $t$-th data set through:

$$\hat{\beta}_t(\lambda_t) = (X_t^\top X_t + \lambda_t I_{pp})^{-1} (X_t^\top Y_t + \lambda_t \hat{\beta}_{t-1}),$$

where the previous estimate $\hat{\beta}_{t-1}$ of $\beta$ thus plays the role of $\beta_0$. In addition, $\hat{\beta}_{t-1}$ and $\hat{\sigma}^2_{t-1}$ may be used to choose the penalty parameter by minimization of the MSE, plugging in these previous estimates for $\beta$ and $\sigma^2$.
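
The updating scheme, sketched in R under the same illustrative naming as before (the choice of each penalty parameter, e.g. by the constrained cross-validation of Section 2.4, is left out here).

```r
# One update step: targeted ridge fit on the new data set (Y_t, X_t),
# shrinking towards the previous estimate beta_prev.
update_ridge <- function(Y_t, X_t, lambda_t, beta_prev) {
  p <- ncol(X_t)
  solve(crossprod(X_t) + lambda_t * diag(p),
        crossprod(X_t, Y_t) + lambda_t * beta_prev)
}

# Sequential use over a list of data sets, initiated by some beta0:
# beta_hat <- beta0
# for (t in seq_along(datasets)) {
#   beta_hat <- update_ridge(datasets[[t]]$Y, datasets[[t]]$X, lambdas[t], beta_hat)
# }
```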

The updated ridge linear regression estimator is a linear combination of the observed responses from the subsequent studies. Consequently, it is normally distributed; expressions for its expectation and variance are provided in Supplementary Material I. In their derivation the $X_t$'s have been assumed to be nonrandom.

To provide some intuition assume i) every data set is equipped with the same orthonormal design matrix $X$ (i.e. $X^\top X = I_{pp}$) and ii) $\lambda_t = \lambda$ and $\sigma_t^2 = \sigma^2$ for all $t$. The expectation and variance then reduce to $\mathbb{E}[\hat{\beta}_t(\lambda)] = \beta + [\lambda / (1 + \lambda)]^t (\hat{\beta}_0 - \beta)$ and $\operatorname{Var}[\hat{\beta}_t(\lambda)] = \sigma^2 (1 + \lambda)^{-2} \sum_{s=0}^{t-1} [\lambda / (1 + \lambda)]^{2s} I_{pp}$. In particular, $\lim_{t \to \infty} \mathbb{E}[\hat{\beta}_t(\lambda)] = \beta$ and $\lim_{t \to \infty} \operatorname{Var}[\hat{\beta}_t(\lambda)] = \sigma^2 (1 + 2\lambda)^{-1} I_{pp}$. In words, when the number of novel data sets grows large enough the updated ridge estimator becomes unbiased. Moreover, its variance is smaller than that of the OLS estimator for any $\lambda > 0$.

More general results on the convergence of the sequence of ridge updated regression estimators and the associated linear predictor can be obtained. To this end notice that the above generated sequence of regression estimators $\{ \hat{\beta}_t \}_{t=1}^{\infty}$ (in which the arguments of $\hat{\beta}_t$ have been dropped to reduce notational clutter) forms a first order Markov chain with continuous state space $\mathbb{R}^p$ as:

$$P(\hat{\beta}_t \in A \mid \hat{\beta}_1, \ldots, \hat{\beta}_{t-1}) = P(\hat{\beta}_t \in A \mid \hat{\beta}_{t-1})$$

for any measurable $A \subseteq \mathbb{R}^p$. Moreover, it is time-homogeneous:

$$P(\hat{\beta}_{t+1} \in A \mid \hat{\beta}_t = b) = P(\hat{\beta}_t \in A \mid \hat{\beta}_{t-1} = b)$$

for any $t$ and $b \in \mathbb{R}^p$.

The well-known theory of Markov processes can now be exploited to show (under some conditions) the asymptotic unbiasedness of the ridge updated linear regression estimator (Theorem 1). To this end the following assumption is made:

  • A1.linear: There exists an infinite sequence of studies into the linear relationship between a continuous response and a set of covariates. The data from these studies, $\{ (Y_t, X_t) \}_{t=1}^{\infty}$, are used to fit the linear regression model by means of the updated ridge linear regression estimator, which yields the sequence of estimators $\{ \hat{\beta}_t \}_{t=1}^{\infty}$, initiated by an arbitrary, nonrandom $\hat{\beta}_0$.

The theorem's proof, deferred – as all proofs – to the Supplementary Material II, requires showing that the updating process has a stationary distribution with expectation equal to $\beta$.

Theorem 1.

(Asymptotic unbiasedness of updated ridge linear regression estimator)
Adopt assumption A1.linear. Let $t$ be sufficiently large. Then, $\mathbb{E}(\hat{\beta}_t) \to \beta + v$ as $t \to \infty$ for some $v \in \bigcap_{s} \mathcal{N}(X_s)$, where $\mathcal{N}(X_s)$ denotes the null space of the linear map induced by $X_s$. If $\bigcap_{s} \mathcal{N}(X_s) = \{ 0_p \}$, then $v = 0_p$ and, consequently, the updated ridge linear regression estimator is asymptotically unbiased.

Asymptotic unbiasedness can also be shown for the linear predictor (Theorem 2), for which the restriction on the intersection of the null spaces of the design matrices is not needed.

Theorem 2.

(Asymptotic unbiasedness of updated ridge linear regression predictor)
Adopt assumption A1.linear. Let $X_{\text{new}}$ be the design matrix with covariate information on novel samples for which a prediction is needed. The updated linear predictor $X_{\text{new}} \hat{\beta}_t$ associated with the updated ridge linear regression estimator is then asymptotically unbiased: $\lim_{t \to \infty} \mathbb{E}(X_{\text{new}} \hat{\beta}_t) = X_{\text{new}} \beta$.

A few notes on these theorems are in order. First, both theorems describe asymptotic behaviour, but not in the traditional sense, in which the sample size increases by one at a time. Here that may happen if a new study with a degenerate sample size of one is included, but as most studies are expected to have a sample size exceeding one, the limit is approached in larger steps. Moreover, the data of the new study are not added to the data of the previous studies; rather, the former are weighed against a summary obtained from the latter: the current updated ridge linear regression estimator.

Another, more important note concerns the role of the penalty parameters in Theorems 1 and 2. The asymptotic unbiasedness of estimator and predictor is independent of the choice of the penalty parameters. Practically, this means that whether one chooses the penalty by means of cross-validation or by some other means, the asymptotic unbiasedness of the updated ridge linear regression estimator and predictor is warranted. Moreover, Theorems 1 and 2 hold irrespective of the amount of penalization applied at each update.

In addition to asymptotic unbiasedness the updated ridge linear regression estimator can be shown to be consistent (Theorem 3), although under assumptions on the penalty parameter sequence.

Theorem 3.

(Consistency of the updated ridge linear regression estimator and predictor)
Adopt assumption A1.linear. Let $t$ be sufficiently large. Assume for all $t$ that the penalty parameter $\lambda_t$ is appropriately bounded in terms of the largest singular value of $X_t$. Then, the updated ridge linear regression estimator and the associated linear predictor are consistent for $\beta$ and $X_{\text{new}} \beta$, respectively.

The proposed updating scheme may be seen as a frequentist analogue of Bayesian updating (Berger, 2013). In the Bayesian narrative, the ridge penalty corresponds to a normal prior. The resulting posterior of $\beta$ is also a normal distribution. When updating, this normal posterior then serves as (normal) prior for the next update of $\beta$. The prior for the $t$-th update then becomes $\mathcal{N}(\hat{\beta}_{t-1}, \lambda_t^{-1} \sigma_t^2 I_{pp})$, which yields a posterior mean coinciding with the frequentist estimator $\hat{\beta}_t(\lambda_t)$.

Finally, the target for the estimation of the regression parameter from the current data need not be the most recently updated ridge regression estimator. It may be replaced by alternative estimators obtained from the preceding data sets. These targets may even be simultaneously available prior to the estimation of the regression parameter from the current data. One would then preferably have the current data choose among these targets. This can be done straightforwardly within the current framework. Hereto replace the targeted ridge penalty by what could be considered a mixture of such penalties. For $G$, $G \in \mathbb{N}$, such targets $\beta_{0,1}, \ldots, \beta_{0,G}$, this yields the loss function: $\| Y - X \beta \|_2^2 + \lambda \sum_{g=1}^{G} \alpha_g \| \beta - \beta_{0,g} \|_2^2$, where all $\alpha_g \geq 0$ and $\sum_{g=1}^{G} \alpha_g = 1$. This loss is minimized with respect to $\beta$ by:

$$\hat{\beta}(\lambda, \{ \alpha_g, \beta_{0,g} \}_{g=1}^{G}) = (X^\top X + \lambda I_{pp})^{-1} \Big( X^\top Y + \lambda \sum_{g=1}^{G} \alpha_g \beta_{0,g} \Big).$$

The targets are thus effectively averaged, with weights $\alpha_g$, to form a novel target. The weights of this average are unknown. Like the penalty parameter, the weights are tuning parameters and are to be chosen prior to the estimation, e.g. through a procedure like cross-validation (cf. Section 2.4).
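
A minimal sketch of this multiple-target variant (illustrative names; the weights are assumed given here, while in practice they are tuned as described above).

```r
# Targeted ridge with a mixture of targets: the weighted average of the
# targets acts as a single novel target.
targeted_ridge_multi <- function(Y, X, lambda, targets, alpha) {
  stopifnot(length(targets) == length(alpha), all(alpha >= 0),
            abs(sum(alpha) - 1) < 1e-12)
  b0 <- Reduce(`+`, Map(`*`, targets, alpha))   # weighted average of targets
  p  <- ncol(X)
  solve(crossprod(X) + lambda * diag(p), crossprod(X, Y) + lambda * b0)
}
```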

2.3 Updating the logistic regression model

The results of the previous section carry over to generalized linear models. This is exemplified here for the logistic regression model. Hereto the same experimental set-up as for the linear regression model is assumed to apply, now with a binary response: $Y_i \in \{0, 1\}$ for $i = 1, \ldots, n$. The probability of a '1' – and thereby that of a '0' – is modeled by the logistic link function: $P(Y_i = 1) = \exp(X_{i,\ast} \beta) / [1 + \exp(X_{i,\ast} \beta)]$ for every $i$. Again the regression parameter is estimated by maximization of the loglikelihood augmented with a nonzero centered ridge penalty:

$$\mathcal{L}(Y, X; \beta) - \tfrac{1}{2} \lambda \| \beta - \beta_0 \|_2^2.$$

The corresponding estimating equation is:

$$X^\top [ Y - g^{-1}(X \beta) ] - \lambda (\beta - \beta_0) = 0_p, \qquad (2)$$

where $g^{-1}$, the inverse of the link function $g$, is defined by $g^{-1}(x) = \exp(x) / [1 + \exp(x)]$ and applied element-wise. The root of Equation (2) is found by means of a Newton-Raphson algorithm, reformulated as an iteratively re-weighted least squares algorithm. This requires a minor modification of that presented in Schaefer, Roi, and Wolfe (1984) and Le Cessie and van Houwelingen (1992) for the zero-centered ridge logistic regression estimator. Starting from an initial guess $\hat{\beta}^{(0)}$, the estimate is updated by:

$$\hat{\beta}^{(k+1)} = (X^\top W X + \lambda I_{pp})^{-1} (X^\top W Z + \lambda \beta_0),$$

where $W$ is diagonal with $(W)_{ii} = g^{-1}(X_{i,\ast} \hat{\beta}^{(k)}) [1 - g^{-1}(X_{i,\ast} \hat{\beta}^{(k)})]$ and the adjusted response variable $Z = X \hat{\beta}^{(k)} + W^{-1} [Y - g^{-1}(X \hat{\beta}^{(k)})]$. This recursive formula is applied until convergence, which yields the desired nonzero centered ridge logistic regression estimate.
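
A minimal sketch of this iteratively re-weighted least squares scheme, assuming the update formula given above; function names and convergence settings are illustrative.

```r
# Targeted (nonzero centered) ridge logistic regression via IRLS.
targeted_ridge_logistic <- function(Y, X, lambda, b0, max_iter = 100, tol = 1e-8) {
  p    <- ncol(X)
  beta <- b0                                   # initial guess
  for (iter in seq_len(max_iter)) {
    eta  <- drop(X %*% beta)
    prob <- 1 / (1 + exp(-eta))                # inverse logistic link
    W    <- prob * (1 - prob)                  # diagonal of the weight matrix
    Z    <- eta + (Y - prob) / W               # adjusted response
    beta_new <- solve(crossprod(X, W * X) + lambda * diag(p),
                      crossprod(X, W * Z) + lambda * b0)
    if (max(abs(beta_new - beta)) < tol) { beta <- beta_new; break }
    beta <- beta_new
  }
  drop(beta)
}
```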

Now assume data from a sequence of studies into the same logistic regression model, i.e. with a common regression parameter $\beta$ but possibly different design matrices $X_t$, are available. From each study the parameter is estimated by means of the nonzero centered ridge logistic regression estimator with the estimate of the previous study as shrinkage target. The resulting estimate $\hat{\beta}_t$ is the one that solves:

$$X_t^\top [ Y_t - g^{-1}(X_t \beta) ] - \lambda_t (\beta - \hat{\beta}_{t-1}) = 0_p.$$

This yields a sequence of estimators $\{ \hat{\beta}_t \}_{t=1}^{\infty}$, in which (again) the arguments of the $\hat{\beta}_t$ have been dropped to reduce notational clutter. The $t$-th estimator of this sequence relates, after convergence of the iteratively re-weighted least squares algorithm, to its predecessor by:

$$\hat{\beta}_t = (X_t^\top W_t X_t + \lambda_t I_{pp})^{-1} (X_t^\top W_t Z_t + \lambda_t \hat{\beta}_{t-1}),$$

with $W_t$ and $Z_t$ now involving $X_t$ and $\hat{\beta}_t$.

The thus defined sequence of updated logistic regression estimators can again be seen as being generated by a first order Markov chain with $\mathbb{R}^p$ as its state space. Like before this fact is exploited to show asymptotic properties of the updated ridge logistic regression estimator. Throughout the presented asymptotic results the following assumption is adopted:

  • A2.logistic: Assume an infinite sequence of studies into the generalized linear relationship between a binary response and a set of covariates. The data from these studies, $\{ (Y_t, X_t) \}_{t=1}^{\infty}$, are used to fit the logistic regression model by means of the updated ridge logistic regression estimator, which yields the sequence of estimators $\{ \hat{\beta}_t \}_{t=1}^{\infty}$, initiated by an arbitrary, nonrandom $\hat{\beta}_0$.

Theorems 4 and 5 below state the asymptotic unbiasedness of the updated estimator and predictor, respectively, while Theorem 6 states their consistency.

Theorem 4.

(Asymptotic unbiasedness of updated logistic regression estimator)
Adopt assumption A2.logistic. Let $t$ be sufficiently large. Then, $\mathbb{E}(\hat{\beta}_t) \to \beta + v$ as $t \to \infty$ for some $v \in \mathbb{R}^p$. If $\bigcap_{s} \mathcal{N}(X_s) = \{ 0_p \}$, then $v = 0_p$ and, consequently, the updated ridge logistic regression estimator is asymptotically unbiased.

Theorem 5.

(Asymptotic unbiasedness of updated logistic regression predictor)
Adopt assumption A2.logistic. Let $X_{\text{new}}$ be the design matrix with covariate information on novel samples for which a prediction is needed. The updated linear predictor $X_{\text{new}} \hat{\beta}_t$ formed from the updated ridge logistic regression estimator is then asymptotically unbiased: $\lim_{t \to \infty} \mathbb{E}(X_{\text{new}} \hat{\beta}_t) = X_{\text{new}} \beta$.

Theorem 6.

(Consistency of the updated ridge logistic regression estimator and predictor)
Adopt assumption A2.logistic. Let $t$ be sufficiently large. Assume for all $t$ that the penalty parameter $\lambda_t$ is appropriately bounded in terms of the largest singular value of $X_t$. Then, the updated ridge logistic regression estimator and the associated linear predictor are consistent for $\beta$ and $X_{\text{new}} \beta$, respectively.

The updated ridge logistic regression estimator too is a frequentist analogue of a Bayesian updating scheme. Now the analogy is not exact but approximate. This is due to the fact that the normal prior is not conjugate for the logistic likelihood and, consequently, the posterior is not a well-known and characterized distribution. Nonetheless, the latter may be approximated in a Laplacian manner by a normal distribution (cf. Bishop, 2006), for which the Bernstein-von Mises theorem (van der Vaart, 1998) provides conditions on its quality. The mean of this approximating normal corresponds to the mode of the posterior (i.e. the updated ridge logistic regression estimator), while its variance relates to the curvature of the posterior at its mode. This normal-like posterior then serves as prior when updating the knowledge on the logistic regression parameter at the arrival of a novel data set.

The targeted ridge logistic regression estimator may also be modified to accommodate multiple targets acquired from different estimators. Again, this amounts to a weighted averaging of the targets.

2.4 Choice of the tuning parameter

Informative choices for the tuning parameters, the penalty parameter $\lambda_t$ and (if applicable) the mixing weights $\alpha_g$, of the updated ridge estimators are presented. The ridge penalty parameter of a novel data set is chosen through a form of cross-validation. Cross-validation chooses the penalty parameter that yields the best performance of the model on novel data. In the absence of novel data, part of the data, referred to as the test or left-out data, is set aside to serve as such. The remaining data, called the training or left-in data, are used to learn the model that is evaluated on the test data. The thus acquired performance depends on the selection of the test data, and may accidentally yield an overly optimistic performance. To remove the effect of the particular split, the data are split into training and test data repetitively, say $k$ times, thus giving rise to $k$-fold cross-validation. Splitting may be done randomly but is done here in a stratified manner. The latter procedure ensures approximately equally sized test data sets and the single occurrence of each sample in a test data set. The performance averaged over the test data sets is considered to be representative for novel data. The chosen penalty parameter optimizes this performance. The model's performance is usually measured by its fit on the test data. For the linear regression model this amounts to the cross-validated sum-of-squares $\sum_{f=1}^{k} \| Y_f - X_f \hat{\beta}_{-f}(\lambda) \|_2^2$, where $Y_f$ and $X_f$ are formed by subsetting $Y$ and $X$ to the $f$-th test data, while $\hat{\beta}_{-f}(\lambda)$ is the ridge regression estimator obtained from all but the $f$-th test data. In case of logistic regression the cross-validated likelihood is used as performance measure. The optimal $\lambda$ is found by minimization/maximization of the appropriate criterion using standard machinery.

The proposed cross-validation procedure does not take into account the data sets prior to the $t$-th one. Unforeseen circumstances may have led to a realization of the $t$-th data set that is not representative. For instance, the response-covariate relation may be absent, i.e. $\beta = 0_p$, in this particular data set. Cross-validation, which optimizes the predictive performance in the $t$-th data set, is likely to select a rather small $\lambda_t$ that will not shrink towards the updated ridge regression estimator obtained from the previous data sets. As such the information from these prior data sets accumulated in the updated ridge regression estimator is ignored. With the $t$-th data set being the odd-one-out, estimation of the regression parameter then effectively starts anew with an uninformative shrinkage target. To safeguard against such phenomena the cross-validated performance measure is still optimized with respect to the penalty parameter, but with a constraint on the deterioration of the fit on the preceding data sets. The minimization takes place over the set of penalty parameters $\lambda_t$ for which the sum-of-squares of the updated estimator on the data sets preceding the $t$-th one exceeds that of the current estimator $\hat{\beta}_{t-1}$ by at most a fraction $\nu_t$, with $\nu_t$ denoting the sample size fraction of the $t$-th data set in relation to the sample size accumulated over all data sets up to $t$. In the optimization the constraint is enforced using a barrier function. Constraining the choice of $\lambda_t$ to this set ensures that the fit on the previously acquired data sets does not deteriorate by more than the fraction $\nu_t$. In the presence of multiple targets the constraint simply demarcates a viable subspace of the penalty parameter and the weights $\alpha_g$. Finally, a similar constraint can be conceived for the choice of the ridge logistic regression estimator's penalty parameter through cross-validation. The sum-of-squares are then to be replaced by the (minus) loglikelihoods.
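
A sketch of this constrained choice of the penalty parameter, under two simplifications that are assumptions of this illustration rather than the paper's procedure: a grid search replaces the barrier-function optimization, and the constraint is taken in the fractional form described above. All names are illustrative, and the response arguments are plain vectors.

```r
# Choose lambda for the new data set by k-fold CV, restricted to values for
# which the fit on the preceding data deteriorates by at most a fraction nu.
choose_lambda <- function(Y_new, X_new, Y_prev, X_prev, beta_prev, nu,
                          lambda_grid = 10^seq(-3, 3, length.out = 50), k = 10) {
  n     <- nrow(X_new)
  folds <- sample(rep(seq_len(k), length.out = n))   # balanced random split
  fit <- function(Y, X, lambda, b0)
    solve(crossprod(X) + lambda * diag(ncol(X)), crossprod(X, Y) + lambda * b0)

  # k-fold cross-validated sum-of-squares on the novel data, per lambda
  cv_error <- sapply(lambda_grid, function(lambda)
    sum(sapply(seq_len(k), function(f) {
      inn <- folds != f
      b   <- fit(Y_new[inn], X_new[inn, , drop = FALSE], lambda, beta_prev)
      sum((Y_new[!inn] - X_new[!inn, , drop = FALSE] %*% b)^2)
    })))

  # Constraint: fit on the preceding data may worsen by at most the fraction nu
  rss_prev   <- sum((Y_prev - X_prev %*% beta_prev)^2)
  admissible <- sapply(lambda_grid, function(lambda) {
    b <- fit(Y_new, X_new, lambda, beta_prev)
    sum((Y_prev - X_prev %*% b)^2) <= (1 + nu) * rss_prev
  })
  lambda_grid[admissible][which.min(cv_error[admissible])]
}
```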

An informative choice of the target $\hat{\beta}_0$ that initiates the updating may be obtained from profound knowledge of the relationship described by the linear regression model. Usually, however, such knowledge is at best present in tacit form. One way out could be literature providing univariate estimates of (some of) the elements of $\beta$. An alternative would 'sacrifice' the first data set to obtain an initial estimate with which the updating is initiated. Traditional ridge regression (with a zero target) or, following literature, univariate estimates from covariate-wise simple linear regression could be used to arrive at this initial estimate. Due to the shrinkage to zero the former tends to underestimate the elements of $\beta$, whereas the latter has the opposite result (as confounding covariates are omitted). Alternatively, an estimate from other transfer regression learning procedures – preferably an unbiased one – can be used to initiate the updating.

Initiation is required not only at the initial time point. With the onset of the big data age, more and more information is registered over time. Consequently, 'early' data sets comprise a limited number of covariates, while more recent ones comprise many additional ones. Or, it may be too costly to measure certain covariates at each instance, but the budget allows for their measurement at (say) every other study. Hence, in practice not every covariate will be present in each data set. A pragmatic choice of the target is then to use the latest available estimate of each covariate's effect, element-wise, as illustrated in the sketch below.
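
A minimal sketch of this element-wise target construction; matching covariates by name and defaulting never-before-seen covariates to zero are illustrative choices, not prescriptions from the paper.

```r
# Build the shrinkage target from the most recent available estimate of each
# covariate. past_estimates: list of named vectors, ordered oldest to newest.
elementwise_target <- function(covariates_now, past_estimates) {
  target <- setNames(rep(0, length(covariates_now)), covariates_now)
  for (est in past_estimates) {            # later estimates overwrite earlier ones
    common <- intersect(names(est), covariates_now)
    target[common] <- est[common]
  }
  target
}
```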

3 Comparisons

Two comparisons are conducted. The first illustrates that updating is beneficial compared to de novo estimation. Secondly, updating is contrasted, both in papyro and in silico, to estimation from the pooled data.

3.1 Regular ridge regression

In a small simulation study the behavior of the updated ridge linear regression estimator is contrasted to its traditional ridge counterpart. First, an initial data set is generated from the linear regression model described below, and $\beta$ is estimated with the traditional ridge regression estimator in combination with a LOOCV-chosen penalty parameter. This estimate serves as the target $\hat{\beta}_0$ that initiates the updated ridge regression. Now sequentially another 25 data sets are drawn. From each, $\beta$ is estimated using both the traditional and the updated ridge estimators, with LOOCV (without any constraint but positivity) for penalty parameter determination. For the latter the previous updated ridge regression estimate is used as target. This process is repeated a hundred times.

All data sets are drawn as follows. Throughout, the elements of the vector of regression coefficients $\beta$ are fixed and identical across data sets. Each element of the $t$-th design matrix $X_t$ is drawn from the standard normal distribution. Then, $Y_t = X_t \beta + \varepsilon_t$ with each element of $\varepsilon_t$ sampled from a zero-centered normal distribution. Hence, $X_t$, $\varepsilon_t$, and consequently $Y_t$, are generated anew at each $t$.

Figure 1: The top panels show quantile intervals of the traditional (left) and updated (right) ridge estimates of selected elements of $\beta$, plotted against $t$. The solid, colored line inside these intervals is the corresponding median. The dotted, grey lines are the true values of these elements of $\beta$. The bottom panels show the mean squared errors over the hundred runs of the mixed model's maximum likelihood and updated ridge regression estimators (the latter initiated both uninformatively and informatively), plotted against $t$. All data sets are sampled from the same regression model, but in the right panel every tenth data set is sampled from an empty model.

In Figure 1 the lower, middle and upper quantiles of the traditional and updated ridge estimates of selected elements of $\beta$ are plotted against $t$. These quantiles of the traditional ridge estimates are constant over $t$ (left panel of Figure 1). Those of the updated ridge estimates (right panel of Figure 1) clearly improve as $t$ increases. The improvement is two-fold: i) they become less biased, and ii) the distance between the lower and upper quantiles vanishes.

3.2 Mixed model

A natural and widespread alternative to the proposed approach would be to employ a mixed model that includes a data set-specific random effect of the covariates. The mixed model is then refitted at the arrival of a new data set, each time to an enlarged data set encompassing the newly arrived data and all those that preceded it. This may be computationally demanding but is considered a mere practical nuisance for the moment. To make matters more precise consider, for the data set formed from all those up to the $t$-th one, the mixed model $Y_{1:t} = X_{1:t} \beta + Z_{1:t} \gamma + \varepsilon$ with:

  • $Y_{1:t} = (Y_1^\top, \ldots, Y_t^\top)^\top$ the response vector of length $n_{1:t}$, where $n_{1:t} = \sum_{s=1}^{t} n_s$,

  • $X_{1:t} = (X_1^\top, \ldots, X_t^\top)^\top$ the $(n_{1:t} \times p)$-dimensional design matrix,

  • $Z_{1:t}$ the $(n_{1:t} \times tp)$-dimensional matrix constructed as a block matrix with diagonal blocks $X_1, \ldots, X_t$

    and off-diagonal blocks all equalling the zero matrix of appropriate dimensions,

  • $\gamma = (\gamma_1^\top, \ldots, \gamma_t^\top)^\top$ the vector of length $tp$ with the data set-specific random effects, and

  • $\varepsilon$ the error vector of length $n_{1:t}$.

The random effects and errors are assumed to be independent and to obey multivariate normal laws, $\gamma_s \sim \mathcal{N}(0_p, \sigma_\gamma^2 I_{pp})$ and $\varepsilon_s \sim \mathcal{N}(0_{n_s}, \sigma_\varepsilon^2 I_{n_s n_s})$ for all $s$, and with zero covariance across data sets, e.g. $\operatorname{Cov}(\gamma_s, \gamma_{s'}) = 0_{pp}$ for $s \neq s'$.

The maximum likelihood estimator of the above formulated mixed model's fixed regression parameter $\beta$, given the variance parameters $\sigma_\gamma^2$ and $\sigma_\varepsilon^2$, is (cf. Bates and DebRoy, 2004):

$$\hat{\beta}^{\mathrm{ml}}_{1:t} = (X_{1:t}^\top V^{-1} X_{1:t})^{-1} X_{1:t}^\top V^{-1} Y_{1:t},$$

with its first two moments:

$$\mathbb{E}(\hat{\beta}^{\mathrm{ml}}_{1:t}) = \beta \qquad \text{and} \qquad \operatorname{Var}(\hat{\beta}^{\mathrm{ml}}_{1:t}) = (X_{1:t}^\top V^{-1} X_{1:t})^{-1},$$

in which $V = \sigma_\gamma^2 Z_{1:t} Z_{1:t}^\top + \sigma_\varepsilon^2 I_{n_{1:t} n_{1:t}}$.
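
A minimal sketch of this fixed-effect estimator in its generalized least squares form, assuming the variance parameters are given; names are illustrative.

```r
# Mixed model fixed-effect estimator: (X' V^{-1} X)^{-1} X' V^{-1} Y,
# with marginal covariance V = sigma2_gamma * Z Z' + sigma2_eps * I.
mixed_fixed_effects <- function(Y, X, Z, sigma2_gamma, sigma2_eps) {
  n      <- nrow(X)
  V      <- sigma2_gamma * tcrossprod(Z) + sigma2_eps * diag(n)
  Vinv_X <- solve(V, X)
  solve(crossprod(X, Vinv_X), crossprod(Vinv_X, Y))
}
```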

The updated ridge regression estimator and the maximum likelihood estimator of the mixed model’s fixed effect parameter can now be compared with respect to their mean squared error. Theorem 7 does so for a sequence of studies with orthonormal design matrices.

Theorem 7.

(Mean squared error of mixed vs. updated estimator)
Adopt assumption A1.linear. Let $X_t$ be orthonormal for all $t$ and let the penalty parameters be chosen suitably. Then, when initiated by $\hat{\beta}_0 = \beta$, the updated ridge regression estimator outperforms, in the mean squared error sense, the maximum likelihood estimator of the mixed model's fixed effects parameter as given above. Hence, $\operatorname{MSE}[\hat{\beta}_t(\lambda_t)] \leq \operatorname{MSE}(\hat{\beta}^{\mathrm{ml}}_{1:t})$.

This theorem states that the updated linear regression estimator may, when initiated appropriately and equipped with the right penalty parameter scheme, outperform in the MSE sense the maximum likelihood estimator of the mixed model's fixed effects parameter. Admittedly, Theorem 7 assumes an orthonormal design matrix for all studies, which is rather restrictive from a practical perspective. In principle, however, a result as in Theorem 7 can be obtained for design matrices of sufficient rank. The proof follows that of Theorem 7, but the mathematics is more cumbersome and the result less insightful.

Theorem 7 assumes the variance parameters $\sigma_\gamma^2$ and $\sigma_\varepsilon^2$ known. In practice, these parameters too need to be estimated from data. This is done in the simulation study that follows.

The performance of the presented updated ridge regression estimator is compared to the maximum likelihood estimator of the mixed model's fixed effect parameter in two simulations. The first simulation investigates the result of Theorem 7 under more realistic assumptions. The second simulation reveals a better performance of the updated ridge regression estimator than of the maximum likelihood estimator of the mixed model's fixed effect parameter when, for a particular study in the sequence, data are accidentally sampled from a different model. This accidentally erroneous sampling from a different model will introduce bias in the latter estimator, as it cannot reduce the influence of the data from this study. The updated ridge regression estimator, however, decides by the size of the penalty parameter how it weighs the current data against the regression estimate obtained from the preceding studies. In principle, it may favor the latter, effectively ignoring the data from the erroneously sampled study. This may cause a small reduction in efficiency compared to its competitor, but this is expected to be outweighed by the avoided bias. Both simulations are modified from that presented in Section 3.1, with otherwise identical settings. Moreover, in a second scenario every tenth data set is sampled from an empty model, i.e. one with $\beta = 0_p$. With the arrival of each new data set the parameter $\beta$ is estimated using the ML estimator of the mixed model's fixed parameter (from the point onwards at which the accumulated sample size exceeds the dimension). But it is also estimated using the updated ridge regression estimator, initiated uninformatively with $\hat{\beta}_0 = 0_p$ and informatively with $\hat{\beta}_0 = \beta$. Its penalty parameter is chosen through constrained cross-validation to prevent the deterioration of the fit on the preceding data sets. The average (over the hundred repeats) quadratic loss, i.e. $\| \beta - \hat{\beta} \|_2^2$, of the mixed model's fixed parameter ML and updated ridge regression estimators are plotted against $t$ (bottom panels of Figure 1). In the first scenario, with all data sets drawn from the same linear regression model, the maximum likelihood estimator of the mixed model's fixed parameter performs better than the uninformatively initiated updated ridge estimator but worse than the informatively initiated one. For small $t$ this picture is also seen in the second scenario with every tenth data set sampled from an empty model. But the maximum likelihood estimator of the mixed model's fixed parameter suffers most (relatively) from the 'outlying' data sets, and is for larger $t$ even overtaken by the uninformatively initiated updated ridge regression estimator.

3.3 State space model

A commonly applied form of transfer regression learning is the linear Gaussian state space model with time-varying coefficients (Durbin and Koopman, 2012). The model comprises an observation equation, describing the observations at each $t$ by a linear regression model, and a state equation, describing the fluctuations in the regression parameter over time. A state space model that applies to the situation studied here has observation equation $Y_t = X_t \beta_t + \varepsilon_t$ and a state equation that lets the regression parameter $\beta_t$ fluctuate around a common $\beta$, where the observation errors $\varepsilon_t$ and the state disturbances are both normally, identically and independently distributed over $t$ as well as independent of each other. The estimator of $\beta$ obtained through likelihood maximization coincides with the mixed model's one. Hence, we refer to the previous subsection.

4 Application

The fingertips study registers annual information on a range of public health indicators for the counties of England (https://fingertips.phe.org.uk/). Following van Schaik, Peng, Ojelabi, and Ling (2019), who advocate the use of the fingertips data for the illustration of novel statistical techniques, we apply the proposed transfer learning procedure to relate the counties' suicide rate to the other health indicators. Building on the R-script of van Schaik, Peng, Ojelabi, and Ling (2019), the data are downloaded using the fingertipsR-package (Fox and Flowers, 2019). Preprocessing of the data amounts to removal of cases without full information on the registered health indicators. This is done per year, as the set of registered health indicators varies over the years. In particular, the size of this set tends to increase over time. Moreover, the data are zero-centered year-wise. The resulting data comprise the years 2008 to 2016; over these years the number of registered health indicators ranges from 1 to 23 (out of a total of 23 different indicators), while the number of counties with full case information ranges from 57 to 159 (refer to the Supplementary Material III for more details).

Figure 2: The panels show the trajectories of the ML estimates (left) and of the updated ridge regression estimates with the penalty parameter chosen via constrained LOOCV (right). Each trajectory represents a single covariate. The presence of a health indicator in the data of a particular year is evident from a symbol on its trajectory at the corresponding year. The symbol is omitted in years in which the health indicator was not registered.

Analysis of the data commences with the first year available, i.e. 2008. The linear model is fitted with the targeted ridge regression estimator with a zero target, i.e. $\beta_0 = 0_p$, and the penalty parameter is chosen by LOOCV. For subsequent years the same estimator is used, but the target is formed element-wise from previous estimates: the $j$-th element of the target is taken from the most recent estimate. If the $j$-th covariate was not present in the preceding year's data, it is taken from the year before that, and so on, until it is taken from the initial target for the year 2008. Moreover, the penalty parameter is chosen by both unrestricted and constrained LOOCV (as discussed in Section 2.4). Finally, for the purpose of reference the regression parameter is also estimated year-wise with the maximum likelihood regression estimator.

The trajectories of the resulting estimates are shown in Figure 2 and the Supplementary Material III. The main takeaway of the plots is the volatile behaviour of the ML regression estimates. In part this is due to the differing sample sizes and, probably more important, the varying set of health indicators registered each year. The updated ridge regression estimators yield, irrespective of the employed cross-validation method, much smoother trajectories. Hence, providing last year's parameter estimate as a suggestion for the estimation of the current year's one appears to guard against the involvement of a different set of covariates. Indirectly, the usefulness of this suggestion for the updating of the estimate is also evident from the increase of both the un- and constrained cross-validated penalty parameters over the years (not shown). This indicates that the suggestion becomes more relevant as years go by. A convenient consequence of the updated ridge regression estimator's smooth trajectories is the consistency of the sign of the regression parameter's estimates over the years, most obvious from the estimators' trajectories for two elements of the regression parameter (provided in the Supplementary Material III), facilitating a sensible interpretation. Of course, there is no free lunch. The updated ridge regression estimators exhibit a small deterioration in the fit, as can be witnessed from the residuals (see Supplementary Material III), although this appears to be negligible.

5 Conclusion

We presented methodology for continuously learning regression models from studies executed over large periods of time, with data coming in bit-by-bit as they run their course. It could be considered a frequentist analogue of Bayesian updating and comprises sequential targeted ridge penalized estimation, shrinking towards the latest estimate, when novel data become available. At each update the penalty parameter represents the extent to which the latest estimate yields an adequate model of the novel data. The iteratively updated estimator and its associated linear predictor have been shown to be asymptotically unbiased and consistent. The penalty parameter is chosen through cross-validation in which the search domain of the penalty parameter is constrained to ensure that the newly learned estimate causes little to no deterioration of the fit on the historic data in comparison to the previous estimate. In a comparison with other standard statistical transfer learning procedures, situations were identified where these procedures were outperformed by the proposed method. Finally, the proposed method was illustrated on data from an epidemiological study, showing its promise.

The current transfer learning proposal employs a single penalty parameter per update. Hence, all elements of the novel estimate are shrunken by the same amount towards the current one. However, it may be that certain elements of the current estimate provide a better fit than others on the novel data. Then, to differentiate the shrinkage among the novel estimate's elements, the single-parameter penalty is to be replaced by a multi-parameter one: $(\beta - \beta_0)^\top \Lambda (\beta - \beta_0)$, where the penalty parameter $\Lambda$ is now a matrix. This matrix may be unstructured, leaving one with the problem of selecting many penalty parameters. More practically, it may be diagonal and possibly parameterized by a few scalar penalty parameters, thereby shrinking groups of elements similarly. Alternatively, a parametrization in line with generalized ridge regression (Hoerl and Kennard, 1970; Hemmerle, 1975; Lawless, 1981), only shrinking differentially in the canonical directions, may be considered. Either choice provides greater flexibility in the information propagated between updates.
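
A minimal sketch of this matrix-penalty variant of the targeted ridge estimator; the function name, arguments, and the example penalty matrix are illustrative assumptions.

```r
# Targeted ridge with a matrix penalty Lambda (positive definite):
# minimizes ||Y - X b||^2 + (b - b0)' Lambda (b - b0).
targeted_ridge_matrix <- function(Y, X, Lambda, b0) {
  solve(crossprod(X) + Lambda, crossprod(X, Y) + Lambda %*% b0)
}

# Example: a diagonal Lambda shrinking two groups of covariates differently.
# Lambda <- diag(c(rep(1, 3), rep(10, 2)))
```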

References

  • Bates and DebRoy (2004) Bates, D. M., and DebRoy, S. (2004). Linear mixed models and penalized least squares. Journal of Multivariate Analysis, 91(1), 1–17.
  • Berger (2013) Berger, J. O. (2013). Statistical Decision Theory and Bayesian Analysis. Springer Science & Business Media.
  • Bishop (2006) Bishop, C. M. (2006). Pattern Recognition and Machine Learning. Springer.
  • Carroll and Pederson (1993) Carroll, R. J. and Pederson, S. (1993). On robustness in the logistic regression model. Journal of the Royal Statistical Society, Series B (Methodological), 55(3), 693–706.
  • Fox and Flowers (2019) Fox, S., and Flowers, J. (2019). fingertipsR: Fingertips Data for Public Health. R package version 0.2.9. https://CRAN.R-project.org/package=fingertipsR
  • Hemmerle (1975) Hemmerle, W. J. (1975). An explicit solution for generalized ridge regression. Technometrics, 17(3), 309–314.
  • Hoerl and Kennard (1970) Hoerl, A. E. and Kennard, R. W. (1970). Ridge regression: biased estimation for nonorthogonal problems. Technometrics, 12(1), 55–67.
  • Huber and Ronchetti (2009) Huber, P.J., and Ronchetti, E.M. (2009). Robust Statistics. John Wiley & Sons.
  • Durbin and Koopman (2012) Durbin, J., and Koopman, S.J. (2012). Time series analysis by state space methods. Oxford University Press.
  • Lawless (1981) Lawless, J. F. (1981). Mean squared error properties of generalized ridge estimators. Journal of the American Statistical Association, 76(374), 462–466.
  • Le Cessie and van Houwelingen (1992) Le Cessie, S., and van Houwelingen, J. C. (1992). Ridge estimators in logistic regression. Journal of the Royal Statistical Society, Series C (Applied Statistics), 41(1), 191–201.
  • Minami et al. (2020) Minami, S., Liu, S., Wu, S., Fukumizu, K. and Yoshida, R. (2020). A general class of transfer learning regression without implementation cost. https://arxiv.org/abs/2006.13228.
  • Pan and Yang (2010) Pan, S.J. and Yang, Q. (2010). A Survey on Transfer Learning. IEEE Transactions on Knowledge and Data Engineering, 22(10), 1345–1359.
  • Plackett (1950) Plackett, R. L. (1950). Some theorems in least squares. Biometrika, 37(1/2), 149–157.
  • Schaefer, Roi, and Wolfe (1984) Schaefer, R. L., Roi, L. D., and Wolfe, R. A. (1984). A ridge logistic estimator. Communications in Statistics – Theory and Methods, 13(1), 99–113.
  • Shalev-Shwartz (2012) Shalev-Shwartz, S. (2012). Online learning and online convex optimization. Foundations and Trends in Machine Learning, 4(2), 107–194.
  • Shilo, Rossman and Segal (2020) Shilo, S., Rossman, H. and Segal, E. (2020). Axes of a revolution: challenges and promises of big data in healthcare. Nature Medicine, 26(1), 29–38.
  • Singh, Chaubey, and Dwivedi (1986) Singh, B., Chaubey, Y.P., and Dwivedi, T.D. (1986). An almost unbiased ridge estimator. Sankhya: The Indian Journal of Statistics, Series B, 48(3), 342–346.
  • Tutz and Binder (2007) Tutz, G., and Binder, H. (2007). Boosting ridge regression. Computational Statistics & Data Analysis, 51(12), 6044–6059.
  • van der Vaart (1998) van der Vaart, A.W. (1998). Asymptotic Statistics. Cambridge University Press.
  • van Schaik, Peng, Ojelabi, and Ling (2019) van Schaik, P., Peng, Y., Ojelabi, A., and Ling, J. (2019). Explainable statistical learning in public health for policy development: the case of real-world suicide data. BMC Medical Research Methodology, 19:152.
  • van Wieringen, Stam, Peeters, and van de Wiel (2020) van Wieringen, W. N., Stam, K. A., Peeters, C. F. W., and van de Wiel, M. A. (2020). Updating of the Gaussian graphical model through targeted penalized estimation. Journal of Multivariate Analysis, 178, Article 104621.
  • Verbeke and Molenberghs (2009) Verbeke, G., and Molenberghs, G. (2009). Linear mixed models for longitudinal data. Springer.