Regularized Nonlinear Regression for Simultaneously Selecting and Estimating Key Model Parameters

04/23/2021 · Kyubaek Yoon, et al. · Yonsei University

In system identification, estimating parameters of a model using limited observations results in poor identifiability. To cope with this issue, we propose a new method to simultaneously select and estimate sensitive parameters as key model parameters and fix the remaining parameters to a set of typical values. Our method is formulated as a nonlinear least squares estimator with L1-regularization on the deviation of parameters from a set of typical values. First, we provide consistency and oracle properties of the proposed estimator as a theoretical foundation. Second, we provide a novel approach based on Levenberg-Marquardt optimization to numerically find the solution to the formulated problem. Third, to show the effectiveness, we present an application identifying a biomechanical parametric model of a head position tracking task for 10 human subjects from limited data. In a simulation study, the variances of the estimated parameters are decreased by 96.1% compared to those of the estimated parameters without L1-regularization. In an experimental study, our method improves the model interpretation by reducing the number of parameters to be estimated while maintaining the variance accounted for (VAF) at above 82.5%. Moreover, the variances of the estimated parameters are reduced by 71.1% compared to those of the estimated parameters without L1-regularization. Our method is 54 times faster than the standard simplex-based optimization in solving the regularized nonlinear regression.


1 Introduction

In parameter estimation, a model is considered to be identifiable when a unique set of parameters is specified for given measurement data. However, when the data is limited, estimating unknown parameters of a model results in poor identifiability Gutenkunst et al. (2007). In such a case, small changes in the data could result in very different estimated parameters, for example, an essentially randomly chosen local minimum among multiple local minima Lund et al. (2019); Ramadan et al. (2018). The resulting overfitting impairs model parsimony and generalizability Pitt and Myung (2002). The overfitted model may yield good results with a training data set used to estimate parameters, but it may yield poor estimates with a new test data set. Moreover, this issue becomes worse when parameters are estimated from data corrupted by random noise Geman et al. (1992).

A biomechanical model often has unknown parameters to be estimated with limited data due to unavailable internal states and the non-invasive nature of human data collection Little et al. (2010). For a limited observation data set, one way to improve identifiability is to build a parsimonious model by lumping a large number of parameters into a small number of lumped parameters Do et al. (2018a). Such a parsimonious model has better interpretability and provides higher estimation accuracy for arbitrary data Babyak (2004).

As a way to build a parsimonious model, the least absolute shrinkage and selection operator (Lasso) was first introduced in Tibshirani (1996) and then further developed in Zou and Hastie (2005); Tibshirani et al. (2005); Yuan and Lin (2006); Zou (2006). The Lasso is typically used to select sensitive parameters among the parameters of a linear model. A sensitive parameter subset is considered to have a relatively large impact on the output of the model. That is, small changes in the sensitive parameters result in large changes in the model response Tibshirani and Wasserman (1988).

To formally introduce the Lasso, we suppose that we have $y_i = f(x_i, \theta^*) + \epsilon_i$ for $i = 1, \ldots, n$, where $x_i$ is the input and $y_i \in \mathbb{R}$ is the observation. Here $f$ is a function which depends on the true parameter vector $\theta^* = (\theta_1^*, \ldots, \theta_p^*)^\top$, and the noise $\epsilon_i$ is independent and identically distributed with $E[\epsilon_i] = 0$ and $\mathrm{Var}(\epsilon_i) = \sigma^2 < \infty$. Without loss of generality, we assume that the true parameter vector has the first $s$ entries non-zero. That is, $\theta_k^* \neq 0$ for $1 \leq k \leq s$ and $\theta_k^* = 0$ for $s < k \leq p$. Finally, when $f$ is linear in the parameters, i.e., $f(x_i, \theta) = x_i^\top \theta$, we consider the following linear least squares problem with L1-regularization:

$$\hat{\theta} = \arg\min_{\theta} \, \| y - X\theta \|_2^2 + \lambda \sum_{k=1}^{p} |\theta_k|, \qquad (1)$$

where $X = (x_1, \ldots, x_n)^\top$ is the input and $y = (y_1, \ldots, y_n)^\top$ is the observation vector. In (1), the hyperparameter $\lambda \geq 0$ determines the amount of regularization. In general, the Lasso shrinks more parameters toward 0 as $\lambda$ increases. Moreover, insensitive parameters are shrunk to 0 if $\lambda$ is sufficiently large Zou (2006). The remaining parameters, which are not shrunk to 0, are considered sensitive parameters Ramadan et al. (2018). In this paper, we consider the sensitive parameters to be key model parameters. L1-regularization methods and the weighted least squares method were used to select nonlinear autoregressive with exogenous variables (NARX) models Qin et al. (2012). The Lasso was used to remove insensitive parameters when all unknown parameters are constrained to be non-negative Kump et al. (2012). A modified Lasso was proposed for the nonlinear induction motor identification problem, which is similar to the problem considered in this study Rasouli et al. (2012). However, they do not provide consistency and asymptotic normality results. A modified principal component analysis (PCA) based on the Lasso was proposed for dimension reduction Jolliffe et al. (2003). To select and estimate sensitive parameters of a model, sensitive parameters were also selected based on parameter estimation variances predicted by the Fisher information matrix Ramadan et al. (2018, 2019).
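To illustrate the shrinkage behavior of (1), the following minimal sketch (our own illustration, not code from the paper; the toy data and the ISTA solver are assumptions) solves the Lasso by proximal gradient descent with soft-thresholding and shows more coefficients shrinking to 0 as the regularization grows:

```python
# Minimal Lasso sketch via ISTA (proximal gradient descent); illustrative only.
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t*||.||_1 (elementwise soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(X, y, lam, n_iter=5000):
    """Minimize ||y - X @ theta||^2 + lam * ||theta||_1."""
    n, p = X.shape
    theta = np.zeros(p)
    step = 1.0 / (2 * np.linalg.norm(X, 2) ** 2)   # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = -2 * X.T @ (y - X @ theta)
        theta = soft_threshold(theta - step * grad, step * lam)
    return theta

# Toy data: only the first 2 of 6 parameters are truly non-zero ("sensitive").
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 6))
theta_true = np.array([3.0, -2.0, 0.0, 0.0, 0.0, 0.0])
y = X @ theta_true + 0.1 * rng.standard_normal(50)

for lam in (0.0, 1.0, 10.0):
    print(lam, np.round(lasso_ista(X, y, lam), 2))   # more entries shrink to 0 as lam grows
```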

In this paper, our objective is to develop a regularized nonlinear parameter estimation method for a model with unknown parameters using a limited data set. Thus, we formulate the model parameter estimation problem as a nonlinear least squares problem with L1-regularization as follows.

$$\hat{\theta} = \arg\min_{\theta} \, \sum_{i=1}^{n} \bigl( y_i - f(x_i, \theta) \bigr)^2 + \lambda \sum_{k=1}^{p} |\theta_k - \bar{\theta}_k|, \qquad (2)$$

where $\theta = (\theta_1, \ldots, \theta_p)^\top$ and $\bar{\theta} = (\bar{\theta}_1, \ldots, \bar{\theta}_p)^\top$ is the set of typical values. Note that these values may be obtained as the mean values of the parameters based on preliminary information or estimation. The details of (2) are introduced in Section 3. In our preliminary work, we developed a parameter selection method for system identification with application to head-neck position tracking and reported the model parameter estimates Kyubaek and Jongeun (in press).
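For concreteness, a minimal sketch of the objective in (2) is shown below; `f`, `x`, `y`, `theta_bar`, and `lam` are placeholders rather than the head-neck model and data of Section 3:

```python
# Sketch of the objective in (2): nonlinear least squares with an L1 penalty
# on the deviation of the parameters from typical values theta_bar.
import numpy as np

def penalized_objective(theta, f, x, y, theta_bar, lam):
    residual = y - f(x, theta)
    return np.sum(residual ** 2) + lam * np.sum(np.abs(theta - theta_bar))
```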

The contributions of the paper are as follows. First, we consider nonlinear regression with a generalized penalty function that includes the L1-penalty function and provide its consistency and oracle properties (i.e., convergence to the correct sparsity and asymptotic normality) in Section 2. To the best of the authors' knowledge, our work is the first to provide such analysis for nonlinear regression with a generalized penalty function. For example, convergence properties for various penalized linear regressions have been discussed in Johnson et al. (2008); Fan and Li (2001). Note that we do not assume a particular distribution of the errors, which is different from the assumption of Fan and Li (2001). In Section 3, we then reformulate the regularized nonlinear regression for simultaneously selecting and estimating key model parameters by defining the penalized quantity in (2) as the deviation of the k-th parameter from its nominal value. Next, we improve the optimization algorithm of Rasouli et al. (2012) to numerically solve the nonlinear least squares problem. Finally, to show the effectiveness, we present an application identifying a biomechanical parametric model of a head-neck position tracking task from limited data in simulation and experimental studies. In the simulation study, our algorithm reduces the variance of most parameter estimates as well as the bias. In the experimental study, the variance of the selected sensitive parameters is reduced by 71.4% on average while maintaining the goodness of fit at above 82.5% VAF. In addition, our method is 54 times faster compared to the Lasso with brute force optimization, e.g., the standard simplex-based optimization we presented recently Ramadan et al. (2018).

2 Consistency and Oracle Properties

We propose a penalized nonlinear regression approach for parameter selection and estimation. First of all, we adopt the following equivalent nonlinear least squares estimator with a generalized penalty function from (2):

$$\hat{\theta} = \arg\min_{\theta} Q_n(\theta), \qquad Q_n(\theta) = \sum_{i=1}^{n} \bigl( y_i - f(x_i, \theta) \bigr)^2 + \sum_{k=1}^{p} p_{\lambda}(|\theta_k|), \qquad (3)$$

where $\theta = (\theta_1, \ldots, \theta_p)^\top$. The first term in $Q_n(\theta)$ corresponds to nonlinear least squares estimation, and the second term, $\sum_{k=1}^{p} p_{\lambda}(|\theta_k|)$, is the penalty function used for parameter selection. The hyperparameter $\lambda$ in the penalty function is a nonnegative regularization parameter. Note that we consider a generalized penalty function in (3). The appropriate choice of the penalty function, including L1-regularization, is further investigated in Assumption 2.

Without loss of generality, we assume only a few parameters are non-zero, such that the true parameter vector has the first $s$ entries non-zero. That is, $\theta_k^* \neq 0$ for $1 \leq k \leq s$ and $\theta_k^* = 0$ for $s < k \leq p$. For the nonlinear function $f$ in (3), we further consider the following assumptions.

Assumption 1

(Nonlinear function)

  1. The true parameter $\theta^*$ is in the interior of the bounded parameter set, and $f(x_i, \theta)$ is twice differentiable with respect to $\theta$ near $\theta^*$ for all $i$.

  2. Let and . Then, there exists a positive definite matrix such that as .

  3. As and ,

    uniformly, where

    is the identity matrix

  4. There exists a such that

For the penalty function in (3), we further consider the following assumptions.

Assumption 2

(Penalty function) The first and second derivatives of the penalty function, denoted by $p_{\lambda}'$ and $p_{\lambda}''$, have the following properties:

  1. For a nonzero fixed ,

  2. For any ,

Remark 1

Assumption 2 is satisfied for several well-known penalty functions, e.g., the SCAD, adaptive Lasso, and hard thresholding penalties, with proper choices of $\lambda$. Assumption 2-(1) is satisfied for L1-regularization with a proper choice of $\lambda$. The details are discussed in Johnson et al. (2008).
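For reference, the following sketch (illustrative only, using the usual textbook forms; the SCAD shape parameter a = 3.7 is a common default and not a choice made in this paper) writes the L1 and SCAD penalties as functions of the parameter magnitude:

```python
# Illustrative penalty functions mentioned in Remark 1, evaluated at |theta|.
import numpy as np

def l1_penalty(t, lam):
    return lam * np.abs(t)

def scad_penalty(t, lam, a=3.7):
    t = np.abs(t)
    small = lam * t                                             # |theta| <= lam
    mid = (2 * a * lam * t - t ** 2 - lam ** 2) / (2 * (a - 1)) # lam < |theta| <= a*lam
    large = lam ** 2 * (a + 1) / 2                              # |theta| > a*lam
    return np.where(t <= lam, small, np.where(t <= a * lam, mid, large))
```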

The following lemma shows the existence of a local minimizer of $Q_n(\theta)$ within a ball of radius of order $n^{-1/2}$ around $\theta^*$.

Lemma 1

Under Assumptions 1 and 2-(1), for any $\epsilon > 0$, there exists a positive constant $C$ such that, for large enough $n$,

$$P\Bigl( \inf_{\|u\| = C} Q_n\bigl(\theta^* + n^{-1/2} u\bigr) > Q_n(\theta^*) \Bigr) \geq 1 - \epsilon,$$

where $u \in \mathbb{R}^p$.

Theorem 1

Under the assumptions in Lemma 1, there exists, with probability tending to 1, a root-n-consistent local minimizer $\hat{\theta}$ of $Q_n(\theta)$, that is, $\|\hat{\theta} - \theta^*\| = O_p(n^{-1/2})$.

The next theorem shows the oracle properties (i.e., convergence to the correct sparsity and asymptotic normality) of the estimator on the true set.

Theorem 2

Assume that $\hat{\theta}$ is the local minimizer of $Q_n(\theta)$ with root-n-consistency. If Assumptions 1 and 2 hold,

  • for the set $A = \{k : \theta_k^* = 0\}$, $P\bigl(\hat{\theta}_k = 0 \text{ for all } k \in A\bigr) \to 1$;

  • for $\hat{\theta}_{(1)} = (\hat{\theta}_1, \ldots, \hat{\theta}_s)^\top$, $\sqrt{n}\,\bigl(\hat{\theta}_{(1)} - \theta_{(1)}^*\bigr) \xrightarrow{d} N\bigl(0, \sigma^2 V_1^{-1}\bigr)$,

    where $\theta_{(1)}^* = (\theta_1^*, \ldots, \theta_s^*)^\top$, and $V_1$ is the first $s \times s$ submatrix of the positive definite matrix in Assumption 1-(2).

The proofs of Lemma 1 and Theorem 1 are given in Appendix A and Appendix B, respectively, and the proof of Theorem 2 is given in Appendix C.

Figure 1: Sensorimotor control block diagram for the head-neck system Ramadan et al. (2018)

3 Application to Head Position Tracking

In this section, we evaluate our approach for solving the nonlinear least squares problem with L1-regularization in simulation and experimental studies of a biomechanical parametric model of a head-neck position tracking task from Ramadan et al. (2018). The reliability of the head-neck position tracking task for quantifying head-neck motor control is demonstrated in Popovich Jr et al. (2015). As compared to the Levenberg optimization algorithm in Rasouli et al. (2012), our method implements the Levenberg-Marquardt optimization algorithm to numerically solve our nonlinear least squares problem with L1-regularization. The Levenberg-Marquardt optimization uses the diagonal elements of the Hessian matrix approximation to overcome the slow convergence problem when the value of the damping factor is large Moré (1978). The details of our algorithm are shown in Appendix D and Table 3.
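The following is a minimal sketch of a Levenberg-Marquardt iteration with Marquardt's diagonal scaling for a generic model f(x, theta); it omits the parameter bounds, normalization, and the L1 term of (2), and is not the algorithm of Appendix D:

```python
# Levenberg-Marquardt sketch with Marquardt's diagonal scaling (illustrative).
import numpy as np

def numeric_jacobian(f, x, theta, eps=1e-6):
    """Forward-difference Jacobian of the model output with respect to theta."""
    base = f(x, theta)
    J = np.zeros((base.size, theta.size))
    for k in range(theta.size):
        step = np.zeros_like(theta)
        step[k] = eps
        J[:, k] = (f(x, theta + step) - base) / eps
    return J

def levenberg_marquardt(f, x, y, theta0, mu=1e-3, n_iter=100):
    theta = theta0.astype(float).copy()
    for _ in range(n_iter):
        r = y - f(x, theta)
        J = numeric_jacobian(f, x, theta)
        A = J.T @ J
        # Marquardt's modification: damp with the *diagonal* of J^T J instead of
        # the identity, which rescales each direction and mitigates the slow
        # convergence that occurs when the damping factor mu is large.
        step = np.linalg.solve(A + mu * np.diag(np.diag(A)), J.T @ r)
        if np.sum((y - f(x, theta + step)) ** 2) < np.sum(r ** 2):
            theta, mu = theta + step, mu * 0.5   # accept step, relax damping
        else:
            mu *= 2.0                            # reject step, increase damping
    return theta
```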

Additionally, in order to simultaneously select and estimate key model parameters of the head-neck system, we reformulate the Lasso penalty function as L1-regularization on the deviation of the parameters from a set of typical values. In this paper, we adopt the mean of the nominal parameter values obtained from preliminary estimation as the set of typical values. Therefore, our method simultaneously selects and estimates only the sensitive parameters while fixing the insensitive parameters to the mean of the nominal parameter values. In order to compare with the method of Ramadan et al. (2018), we set the number of sensitive parameters to 5. In this case, we increase the regularization hyperparameter value until we obtain 5 sensitive parameters. The goodness of fit is quantitatively evaluated by the variance accounted for (VAF). VAF represents how much of the experimental data can be explained by a fitted regression model. VAF is formally defined as follows:

$$\mathrm{VAF} = \left( 1 - \frac{\mathrm{var}(y - \hat{y})}{\mathrm{var}(y)} \right) \times 100\%,$$

where $y$ is the measured response and $\hat{y}$ is the estimated response.
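As an illustration of this tuning rule (a sketch with an assumed estimator interface and tolerance, not the authors' code), one can increase the regularization hyperparameter until at most 5 parameters deviate from their typical values:

```python
# Increase lambda until at most n_target parameters remain "sensitive",
# i.e., deviate from their typical values theta_bar by more than tol.
import numpy as np

def select_sensitive(estimate_fn, theta_bar, n_target=5, lam=1e-3, tol=1e-6):
    while True:
        theta_hat = estimate_fn(lam)                      # solves (2) for this lambda
        sensitive = np.flatnonzero(np.abs(theta_hat - theta_bar) > tol)
        if sensitive.size <= n_target:
            return sensitive, theta_hat, lam
        lam *= 2.0                                        # more shrinkage toward theta_bar
```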

Max      Min      Description
50       –        Visual feedback gain
500      –        Vestibular feedback gain
300      1        Proprioceptive feedback gain
0.4      0.1      Visual feedback delay
0.2      0.01     Lead time constant of the irregular vestibular afferent neurons
1        0.05     Lead time constant of the central nervous system
5        0.1      Lag time constant of the irregular vestibular afferent neurons
60       5        Lag time constant of the central nervous system
1        0.01     First lead time constant of the neck muscle spindle
1        0.01     Second lead time constant of the neck muscle spindle
5        0.1      Intrinsic damping
5        0.1      Intrinsic stiffness
0.0148   0.0148   Head inertia
0.1      0.1      Torque converter time constant

Table 1: The neurophysiological parameters of the head position model. All the information is obtained from Ramadan et al. (2018).

3.1 Subjects

10 healthy subjects participated in the experimental study. None of them had a history of neck pain lasting more than three days or any neurological motor control impairments. The Michigan State University Biomedical and Health Institutional Review Board approved the test protocol. The subjects signed an informed consent form before participating in the experiment Ramadan et al. (2018, 2019).

Figure 2: The experimental setup for the head-neck position tracking task. The reference command signal and the measured head rotation angle are shown, and SP indicates one of the string potentiometers on both sides of the helmet.

3.2 Parametric model

Fig. 1 shows the block diagram of the head-neck system for position tracking. This is a representative physiological feedback control model Peng et al. (1996); Chen et al. (2002). The model consists of 14 parameters. As shown in Table 1, 2 out of the 14 parameters are set to fixed values from Peng et al. (1996). The remaining 12 parameters are to be estimated; they are listed with their descriptions in Table 1. These 12 parameters have the lower and upper bounds given in Ramadan et al. (2018) and are normalized using min-max normalization in order to remove the scale differences between parameters.
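A minimal sketch of the min-max normalization step, assuming the lower and upper bounds of Table 1 are stored as arrays:

```python
# Min-max normalization of parameters to [0, 1] using per-parameter bounds,
# so that scale differences between parameters are ignored (illustrative only).
import numpy as np

def normalize(theta, lower, upper):
    return (theta - lower) / (upper - lower)

def denormalize(theta_norm, lower, upper):
    return lower + theta_norm * (upper - lower)
```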

Figure 3: The bias (a) and variance (b) of estimated parameters in the simulation studies. Comparing our method with a nonlinear least squares problem without L1-regularization (white bar), we set the mean of the nominal parameters (black bar) obtained from preliminary estimation as the set of typical values. The y-axis is on a logarithmic scale.
Figure 4: The curve fitting with 10 experimental cases (Subjects 1-10). Solid lines represent estimated responses from the fitted models and dotted lines represent measured responses.
Subject    Improvement (%)
1 69.25
2 65.79
3 74.83
4 89.36
5 87.44
6 65.02
7 53.62
8 77.70
9 63.44
10 67.62
Avg. 71.41
Table 2: Improvement in the variance of estimated parameters in the experimental study. The given values are $(1 - \sigma^2_{\mathrm{reg}} / \sigma^2_{\mathrm{unreg}}) \times 100\%$, where $\sigma^2_{\mathrm{reg}}$ is the variance of the estimated parameters obtained from our method, and $\sigma^2_{\mathrm{unreg}}$ is the variance of the estimated parameters without regularization.

3.3 The Experiment

As shown in Fig. 2, each subject wears a helmet with two string potentiometers attached, measuring the axial rotation of the head. Subjects rotated their heads about the vertical axis to track the command signals on the display. The command signal is a pseudorandom sequence of steps with random step durations and amplitudes, and the angle of the signal is bounded. The output signal is the head rotation angle. Each subject performed three 30-second trials, and the sampling rate was 60 Hz Ramadan et al. (2018).

3.4 Simulation study

In this section, we analyze the bias and variance of estimated parameters from a simulation study in which the true parameters are known for comparison. First, we generate the simulated input and observation data from the model with the known true parameters. Additionally, we obtain 20 sets of nominal parameter values from preliminary estimation performed 20 times over all three trials for each subject. Finally, we evaluate our method by setting the mean of the 20 sets of nominal parameter values as the set of typical values. In Fig. 3, as compared to the nonlinear least squares estimator without L1-regularization, the biases of all parameters except one were decreased on average. In addition, the variances of all estimated parameters were decreased by 96.1% on average.
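The kind of Monte-Carlo bias/variance evaluation described above can be organized as in the following sketch; the simulator, the estimator, and the number of runs are placeholders rather than the paper's actual setup:

```python
# Bias and variance of a parameter estimator over repeated simulated data sets.
import numpy as np

def bias_and_variance(estimate_fn, simulate_fn, theta_true, n_runs=20, seed=0):
    """simulate_fn(rng) returns the arguments of estimate_fn for one noisy data set."""
    rng = np.random.default_rng(seed)
    estimates = np.array([estimate_fn(*simulate_fn(rng)) for _ in range(n_runs)])
    bias = estimates.mean(axis=0) - theta_true
    variance = estimates.var(axis=0)
    return bias, variance
```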

3.5 In vivo experimental study

In this section, the variances of estimated parameters from our method are compared with the results of the standard simplex-based optimization in Ramadan et al. (2018). All parameters are pushed further toward the mean of the nominal parameter values obtained from preliminary estimation as the regularization hyperparameter increases. The regularization hyperparameter is increased until 5 sensitive parameters are selected. After selecting a sensitive parameter subset for each subject, we select the most frequent subset among all subjects for a fair comparison with Ramadan et al. (2018). The Lasso in Ramadan et al. (2018) selected a most frequent subset of 5 sensitive parameters, and our method selected its own subset of 5 sensitive parameters. As a result, 4 out of 5 sensitive parameters are selected by both methods.
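The subset-frequency step described above can be sketched as follows (an illustrative helper, not the authors' code; subsets are represented as sets of parameter names):

```python
# Pick the most frequently selected sensitive-parameter subset across subjects.
from collections import Counter

def most_frequent_subset(subsets_per_subject):
    return Counter(frozenset(s) for s in subsets_per_subject).most_common(1)[0][0]
```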

Second, we evaluate our method based on the goodness of fit measured by the variance accounted for (VAF). All values are given as mean ± standard deviation across subjects. The goodness of fit of our method (VAF above 82.5%) over the 10 subjects is almost equal to that of the Lasso in Ramadan et al. (2018). Without L1-regularization, the VAF is about 84.9% over all subjects. In Fig. 4, for all subjects, the estimated responses are almost the same as the measured responses, and the estimated responses are smoother than the measured responses.
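For reference, the VAF values quoted here can be computed from measured and estimated responses with a small helper like the following (our own sketch, assuming the standard definition stated in Section 3):

```python
import numpy as np

def vaf(y_measured, y_estimated):
    """Variance accounted for, in percent."""
    return 100.0 * (1.0 - np.var(y_measured - y_estimated) / np.var(y_measured))
```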

Third, as shown in Table 2, the variance of the estimated parameters is reduced by 71.4% on average across parameters and subjects as compared to that of the estimated parameters without regularization.

Finally, we compute the average computation time using the “timeit” function in MATLAB (The MathWorks Inc., Natick, MA, U.S.A.). The average computation time of our method for a subject is 54 times shorter than that of the Lasso with the standard simplex-based optimization in Ramadan et al. (2018). In particular, the average computation time of our method across subjects is 5.6 seconds per trial, and that of the Lasso in Ramadan et al. (2018) is 302.0 seconds per trial.

4 Discussion

We provided consistency and oracle properties (i.e., convergence to the correct sparsity and asymptotic normality) for a nonlinear regression approach with a generalized penalty function. As a result, we proved the existence of a local minimizer and the convergence to the sparse unknown parameters for the penalized nonlinear least squares estimator. It is important to note that we have proved, for the first time, convergence properties of the penalized nonlinear least squares estimator, in contrast to previous studies Johnson et al. (2008); Wu (1981); Fan and Li (2001).

In the simulation study, we showed that the bias and variance of the parameters estimated by our method were decreased as compared to those of the parameters estimated without L1-regularization. When we set the mean of the nominal parameter values as the typical values for the non-selected estimates, the variance was significantly decreased. In addition, although L1-regularization is known to induce biased estimates, the bias with our method was also slightly decreased, except for one parameter with an increased bias. The reason might be that our method pushes all parameters toward the mean of the nominal parameter values obtained from the 20 preliminary estimations. Therefore, if we set appropriate values as the typical values, we can achieve lower bias and variance errors.

In the experimental study, our proposed method was compared with the Lasso in Ramadan et al. (2018) using three performance criteria. First, we confirmed that the proposed method simultaneously selected and estimated sensitive parameters in the nonlinear model. As a result, five parameters were selected and estimated by our method, and the remaining parameters were fixed to the mean of the nominal parameters obtained from preliminary estimation. With our method, 4 out of 5 sensitive parameters were the same as those selected in Ramadan et al. (2018). This result showed that our method behaved similarly to the sensitive parameter selection of Ramadan et al. (2018) based on the Fisher information matrix. The selected sensitive parameters may vary slightly depending on the performance of the optimizer and the initial points. However, they did not change much over repeated randomized realizations. In addition, we presented the VAF to quantitatively evaluate the goodness of fit of the estimated model. For the standard nonlinear least squares problem without L1-regularization, the VAF was about 84.9%, and our method achieved about 82.5%. Hence, the goodness of fit of the model estimated by our method was similar to that of the model estimated by the standard nonlinear least squares problem without L1-regularization, although only the 5 selected parameters were estimated in our method. Moreover, as shown in Fig. 4, most curve fitting errors occur around the peak points, which shows a limitation of the presented dynamics models: they do not perfectly reflect the real physiological head-neck control processes. This could be due to the switching of human control strategies with sudden changes in head-neck orientations Chen et al. (2002).

Next, the model identifiability was improved by the sensitive parameter selection of our method. The variance of the 12 estimated parameters is reduced by 71.4% on average. In general, when the number of parameters to be estimated is large relative to the limited data, the model has poor (or a lack of) identifiability. Therefore, through our method with key parameter selection, i.e., the nonlinear least squares problem with L1-regularization on the deviation of the parameters from the mean of the nominal parameter values, the uniqueness of the estimated solution can be ensured even for an original problem that lacks identifiability due to limited data.

Finally, the average computation time of our method was 54 times shorter than that of the Lasso with the brute force optimization algorithm in Ramadan et al. (2018). Our method reduced the computation time by eliminating large matrix inversions, reformulating the Jacobian-based update as a minimization problem.

5 Conclusions

In this paper, we tackled a parameter estimation problem with limited data by formulating it as a nonlinear least squares estimator with L1-regularization.

We effectively improved the model identifiability by applying the Lasso to the nonlinear least squares problem. As asymptotic results, we provided consistency and oracle properties for a nonlinear regression approach with a generalized penalty function. Based on these results, we proposed a novel solution to our problem by solving the nonlinear least squares problem with L1-regularization on the deviation of the parameters from the nominal values in order to simultaneously select and estimate model parameters.

From simulation and experimental studies, we successfully demonstrated that the proposed method simultaneously selected and estimated sensitive parameters, improved the model identifiability by reducing the variance of estimated parameters, and required a much shorter computation time than the Lasso in Ramadan et al. (2018).

Future work would be to apply our method to other clinical applications, such as patient-specific calibration of disease models Do et al. (2018b); Zhang et al. (2019).

Acknowledgements.
This work was supported by the Mid-career Research Programs through the National Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT (NRF-2018R1A2B6008063, 2019R1A2C1002213). This publication was made possible by grant number U19AT006057 from the National Center for Complementary and Integrative Health (NCCIH) at the National Institutes of Health. Its contents are solely the responsibility of the authors and do not necessarily represent the official views of NCCIH. The research of Wei-Ying Wu was supported by the Ministry of Science and Technology of Taiwan under grant MOST 107-2118-M-259-001.

Conflict of interest

The authors declare that they have no conflict of interest.

Appendix A Proof of Lemma 1

We first find the lower bound of .

For the sake of simplicity,
we define , where . For the term ,

By Assumption 1, and we claim that,

The claim follows from Lemma 2.1 in Huber (1973). The condition for the lemma in our setting is

where denote the maximum absolute value of all elements of matrix . Since is an matrix and is a matrix,

The first and second conditions from Assumption 1 imply the last equality. Therefore, we have

(1)

For the term ,

If we show , with Assumption 1, we obtain

(2)

can be rewritten as,

By Assumption 1, the first term converges to almost surely. By conditions 1, 2, and 4 of Assumption 1 and the Cauchy-Schwarz inequality, the second term converges to zero almost surely. For the last term, it is enough to show that

(3)

uniformly on with probability 1, because of conditions 1 and 2 of Assumption 1. Then, by condition 4 of Assumption 1, (3) can be shown in a manner similar to that of Wu (1981). For the term , by Assumption 2, for a fixed ,

(4)

Thus, combining (2) with (4), we have

which leads to the desired result for large enough $n$.

Appendix B Proof of Theorem 1

By Lemma 1 and the continuity of $Q_n(\theta)$, we obtain Theorem 1.

Appendix C Proof of Theorem 2

Proof of (i)
First, we break into two sets:

Then, it is enough to show for any , and . For any , we can show for large enough $n$ because of the consistency.

To verify for large enough $n$, we first show on the set . Note that

where lies between and . Since , and Thus, we have

(1)

Combining (1) with , we have

(2)

for which satisfies the inequality above. Since $\hat{\theta}$ is the local minimizer of $Q_n(\theta)$ with root-n-consistency, we attain

(3)

from

Therefore, there exists such that, for large enough $n$,

In addition, by the second condition of Assumption 2,

for large enough $n$, which leads to for large enough $n$.

Proof of (ii)
By the Taylor expansion,

where Since is the local minimizer of , so that

Finally,

because , and the consistency of .

Appendix D Nonlinear least squares estimator with L1-regularization

In order to apply the Lasso to the nonlinear least squares problem, we reformulate the Levenberg-Marquardt (LM) optimization algorithm as a linear least squares problem as follows.
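As an illustration (a sketch in our own notation, not necessarily the exact formulation used here), the damped LM step with Marquardt's diagonal scaling can be written as an augmented linear least squares problem, which avoids explicitly inverting $J^\top J + \mu D$:

$$\delta = \arg\min_{\delta} \left\| \begin{bmatrix} J \\ \sqrt{\mu}\, D^{1/2} \end{bmatrix} \delta - \begin{bmatrix} r \\ 0 \end{bmatrix} \right\|_2^2, \qquad D = \mathrm{diag}\bigl(J^\top J\bigr),$$

where $J$ is the Jacobian of the model output, $r$ is the residual vector, and $\mu$ is the damping factor; solving this stacked problem, e.g., by QR factorization, is one way to avoid the large matrix inversion mentioned in the Discussion.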