I Introduction
In this paper we consider the problem of learning a nonlinear dynamical system model with multiple outputs and multiple inputs (when they exist). Generally this identification problem can be tackled using different model structures, with the class of linear models being arguably the most well studied in engineering, statistics and econometrics [27, 15, 6, 2, 7].
Linear models are often used even when the system is known to be nonlinear [10, 25]. However, certain nonlinearities, such as saturations, cannot always be neglected. In such cases, block-oriented models are a popular approach to capture static nonlinearities [11]. Recently, such models have been given semiparametric formulations and identified using machine learning methods, cf. [22, 23]. To model nonlinear dynamics, a common approach is to use Narmax models [26, 5].

In this paper we are interested in recursive identification methods [18]. In cases where the model structure is linear in the parameters, recursive least-squares can be applied. For certain models with nonlinear parameters, the extended recursive least-squares method has been used [8]. Another popular approach is the recursive prediction error method, which has been developed, e.g., for Wiener models, Hammerstein models, and polynomial state-space models [33, 19, 31].
Nonparametric models are often based on weighted sums of the observed data [24]. The weights vary for each predicted output and the number of weights increases with each observed datapoint. The weights are typically obtained in a batch manner; in [1, 4] they are computed recursively but must be recomputed for each new prediction of the output.
For many nonlinear systems, however, linear models work well as an initial approximation. The strategy in [21] exploits this fact by first finding the best linear approximation using a frequency domain approach. Then, starting from this approximation, a nonlinear polynomial state-space model is fitted by solving a nonconvex problem. This two-step method cannot be readily implemented recursively and it requires input signals with appropriate frequency domain properties.
In this paper, we start from a nominal model structure. This class can be based on insights about the system, e.g. that linear model structures can approximate a system around an operating point. Given a record of past outputs, and inputs , that is,
a nominal model yields a predicted output which differs from the output . The resulting prediction error is denoted [16]. By characterizing the nominal prediction errors in a data-driven manner, we aim to develop a refined predictor model of the system. Thus we integrate classic and data-driven system modeling approaches in a natural way.
The general model class and problem formulation are introduced in Section II. Then in Section III we apply the principle of maximum likelihood to derive a statistically motivated learning criterion. In Section IV this nonconvex criterion is minimized using a majorization-minimization approach that gives rise to a convex, user-parameter-free method. We derive a computationally efficient recursive algorithm for solving the convex problem, which can be applied to large datasets as well as online learning scenarios. In Section V, we evaluate the proposed method using both synthetic and real data examples.
In a nutshell, the contribution of the paper is a modelling approach and identification method for nonlinear multiple-input multiple-output systems that:

- explicitly separates modelling based on application-specific insights from general data-driven modelling,
- circumvents the choice of regularization parameters and initialization points,
- learns parsimonious predictor models,
- admits a computationally efficient implementation.
Notation: denotes the th standard basis matrix. and denote the Kronecker and Hadamard products, respectively. is the vectorization operation. , and , where , denote the -, - and weighted norms, respectively. The Moore-Penrose pseudoinverse of is denoted .

Remark 1.
An implementation of the proposed method is available at https://github.com/magni84/lava.
II Problem formulation
Given , the -dimensional output of a system can always be written as
(1)
where is any one-step-ahead predictor based on a nominal model. Here we consider nominal models of the form
(2)
where the vector is a given function of and denotes the unknown parameters.
Remark 2.
A typical example of is
(3)
in which case the nominal predictor is linear in the data and therefore captures the linear system dynamics. Known nonlinearities can also be incorporated, in which case will be nonlinear in the data.
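As a concrete illustration of an ARX-type regressor of the kind in (3), the sketch below stacks past outputs and inputs, with an optional constant term for the affine case used in Section V. The ordering, the constant term, and all variable names are illustrative assumptions, not the paper's exact definition.

```python
import numpy as np

def arx_regressor(y_past, u_past, na, nb, affine=True):
    """Stack na past outputs and nb past inputs into a regressor vector.

    y_past : array of shape (T, ny), most recent sample last
    u_past : array of shape (T, nu), most recent sample last
    The ordering and the trailing constant (affine term) are illustrative choices.
    """
    phi = []
    for k in range(1, na + 1):
        phi.append(y_past[-k])          # y(t-1), ..., y(t-na)
    for k in range(1, nb + 1):
        phi.append(u_past[-k])          # u(t-1), ..., u(t-nb)
    phi = np.concatenate(phi)
    if affine:
        phi = np.append(phi, 1.0)       # constant term for an affine model
    return phi

# Example: ny = 2 outputs, nu = 1 input, na = nb = 2
rng = np.random.default_rng(0)
y_hist = rng.standard_normal((10, 2))
u_hist = rng.standard_normal((10, 1))
print(arx_regressor(y_hist, u_hist, na=2, nb=2).shape)  # (2*2 + 2*1 + 1,) = (7,)
```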
The popular Arx model structure, for instance, can be cast into the framework (1) and (2) by assuming that the nominal prediction error is a white noise process [27, 15]. For certain systems, (2) may accurately describe the dynamics around the operating point, and consequently the white noise assumption on may be a reasonable approximation. However, this ceases to be the case even for systems with weak nonlinearities, cf. [10].

Next, we develop a data-driven model of the prediction errors in (1), conditioned on the past data . Specifically, we assume the conditional model
(4)
where is an matrix of unknown latent variables, is an unknown covariance matrix, and the vector is any given function of . This is a fairly general model structure that can capture correlated data-dependent nominal prediction errors.
Note that when , the prediction errors are temporally white and the nominal model (2) captures all relevant system dynamics. The latent variable is modeled as random here. Before data collection, we assume to have mean
as we have no reason to depart from the nominal model assumptions until after observing data. Using a Gaussian distribution, we thus get
(5)
where is an unknown covariance matrix.
Our goal here is to identify a refined predictor of the form
(6)
from a data set , by maximizing the likelihood function. The first term is an estimate of the nominal predictor model, while the second term aims to capture structure in the data that is not accounted for by the nominal model. Note that when is sparse, we obtain a parsimonious predictor model.

Remark 3.
The model (1)-(4) implies that we can write the output in the equivalent form
where is a white process with covariance . In order to formulate a flexible data-driven error model (4), we overparametrize it using a high-dimensional . In this case, regularization of is desirable, which is achieved by (5). Note that and are both unknown. Estimating these covariance matrices corresponds to using a data-adaptive regularization, as we show in subsequent sections.
Remark 4.
The nonlinear function of can be seen as a basis expansion which is chosen to yield a flexible model structure of the errors. In the examples below we will use the Laplace operator basis functions [28]. Other possible choices include the polynomial basis functions, Fourier basis functions, wavelets, etc. [26, 15, 32].
Remark 5.
In (6), is a one-step-ahead predictor. However, the framework can be readily applied to -step-ahead prediction where and depend on .
III Latent variable framework
Given a record of data samples, , our goal is to estimate and to form the refined predictor (6). In Section III-A, we employ the maximum likelihood approach based on the likelihood function , which requires the estimation of nuisance parameters and . For notational simplicity, we write the parameters as and in Section III-B we show how an estimator of is obtained as a function of and .
III-A Parameter estimation
We write the output samples in matrix form as
In order to obtain maximum likelihood estimates of , we first derive the likelihood function by marginalizing out the latent variables from the data distribution:
(7)
where the data distribution and are given by (4) and (5), respectively.
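For reference, and in generic notation (the symbols Y for the stacked outputs, Z for the latent variables, and θ for the parameters are assumptions here), the marginalization in (7) takes the standard form

```latex
p(\mathbf{Y}\mid\theta)
  \;=\; \int p(\mathbf{Y}\mid \mathbf{Z},\theta)\,p(\mathbf{Z}\mid\theta)\,\mathrm{d}\mathbf{Z}.
```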
To obtain a closed-form expression of (7), we begin by constructing the regression matrices
It is shown in Appendix A that (7) can be written as
(8)
where
(9)
are vectorized variables, and
(10)
(11)
Note that (8) is not a Gaussian distribution in general, since may be a function of . It follows that maximizing (8) is equivalent to solving
(12)
where
(13)
and is nothing but the vector of nominal prediction errors.
III-B Latent variable estimation
Next, we turn to the latent variable which is used to model the nominal prediction error in (4). As we show in Appendix A, the conditional distribution of is Gaussian and can be written as
(14)
with conditional mean
(15)
and covariance matrix
(16)
An estimate is then given by evaluating the conditional (vectorized) mean (15) at the optimal estimate obtained via (12).
IV Majorization-minimization approach
The quantities in the refined predictor (6) are readily obtained from the solution of (12). In general, may have local minima and (12) must be tackled using computationally efficient iterative methods to find the optimum. The obtained estimates will then depend on the choice of initial point . Such methods include the majorization-minimization approach [12, 35], which in turn includes Expectation Maximization [9] as a special case.

The majorization-minimization approach is based on finding a majorizing function with the following properties:
(17)
with equality when . The key is to find a majorizing function that is simpler to minimize than . Let denote the minimizer of . Then
(18)
This property leads directly to an iterative scheme that decreases monotonically, starting from an initial estimate .
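In generic notation (with V denoting the criterion in (12), g the majorizing function, and θ the parameters; these symbols are assumptions here), the standard majorization-minimization argument behind (17)–(18) reads

```latex
\theta^{(k+1)} = \arg\min_{\theta}\, g\!\left(\theta \mid \theta^{(k)}\right)
\quad\Longrightarrow\quad
V\!\left(\theta^{(k+1)}\right)
\;\le\; g\!\left(\theta^{(k+1)} \mid \theta^{(k)}\right)
\;\le\; g\!\left(\theta^{(k)} \mid \theta^{(k)}\right)
\;=\; V\!\left(\theta^{(k)}\right),
```

so the criterion can never increase between iterations.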
Given the overparameterized error model (4), it is natural to initialize at points in the parameter space which correspond to the nominal predictor model structure (2). That is,
(19)
at which .
IV-A Convex majorization
For a parsimonious parameterization and computationally advantageous formulation, we consider a diagonal structure of the covariance matrices in (4), i.e., we let
(20)
and we let denote the covariance matrix corresponding to the th row of , so that
(21)
We begin by majorizing (13) via linearization of the concave term :
(22)
where and are arbitrary diagonal covariance matrices and is obtained by inserting and into (11). The right-hand side of the inequality above is a majorizing tangent plane to , see Appendix B. The use of (22) yields a convex majorizing function of in (12):
(23)
where is a constant. To derive a computationally efficient algorithm for minimizing (23), the following theorem is useful:
Theorem 1.
Proof.
Remark 6.
The augmented form in (23) enables us to solve for the nuisance parameters and in closed form, and also yields the conditional mean as a by-product.
To prepare for the minimization of the function (25) we write the matrix quantities using variables that denote the th rows of the following matrices:
Theorem 2.
Proof.
See Appendix C. ∎
Remark 7.
Remark 8.
The iterative scheme (18) is executed by initializing at and solving (26). The procedure is then repeated by updating the majorization point using the new estimate . It follows that the estimates will converge to a local minimum of (13). The following theorem establishes that the local minimum found, and thus the resulting predictor (6), is invariant to in the form (19).
Theorem 3.
Proof.
See Appendix D. ∎
Remark 9.
As a result we may initialize the scheme (18) at . This obviates the need for carefully selecting an initialization point, which would be needed in e.g. the Expectation Maximization algorithm.
IV-B Recursive computation
We now show that the convex problem (26) can be solved recursively, for each new sample and .
IV-B1 Computing
If we fix and only solve for , the solution is given by
(31)
where
Note that both and are independent of , and that they can be computed for each sample using a standard recursive least-squares (LS) algorithm:
(32)
(33)
(34)
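The recursions (32)–(34) follow the standard recursive least-squares pattern; as a reference, a textbook RLS update can be sketched as follows. The variable names, the initialization constant, and the absence of a forgetting factor are assumptions here, not the paper's exact equations.

```python
import numpy as np

class RecursiveLS:
    """Textbook RLS for a linear-in-parameters model y(t) ~ Theta^T phi(t)."""
    def __init__(self, dim_phi, dim_y, delta=1e3):
        self.Theta = np.zeros((dim_phi, dim_y))   # parameter estimate
        self.P = delta * np.eye(dim_phi)          # "inverse information" matrix

    def update(self, phi, y):
        phi = phi.reshape(-1, 1)
        y = y.reshape(1, -1)
        Pphi = self.P @ phi
        k = Pphi / (1.0 + phi.T @ Pphi)           # gain vector
        err = y - phi.T @ self.Theta              # one-step-ahead prediction error
        self.Theta += k @ err                     # parameter update
        self.P -= k @ Pphi.T                      # rank-one update of P
        return self.Theta

# usage: feed one regressor/output pair per time step
rls = RecursiveLS(dim_phi=7, dim_y=2)
```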
IV-B2 Computing
Using (31), we concentrate out from (26) to obtain
where
In Appendix E it is shown how the minimum of can be found via cyclic minimization with respect to the elements of , similar to what has been done in [36] in a simpler case. This iterative procedure is implemented using recursively computed quantities and produces an estimate at sample .
IV-B3 Summary of the algorithm
The algorithm computes and recursively by means of the following steps at each sample :
- Compute via the cyclic minimization of . Cycle through all elements times.
- Compute
The estimates are initialized as and . In practice, small works well since we cycle times through all elements of for each new data sample. The computational details are given in Algorithm 1 in Appendix E, which can be readily implemented e.g. in Matlab.
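As a structural sketch only: the per-sample flow of the recursive algorithm can be organized as below. The element-wise update derived in Appendix E is replaced by a placeholder callable, and all names and dimensions are assumptions.

```python
import numpy as np

def lava_r_step(theta_hat, z_hat, phi, gamma, n_cycles=2, elementwise_update=None):
    """One recursive step after the RLS update of the nominal parameters.

    theta_hat : current nominal parameter estimate (from the RLS recursion)
    z_hat     : current latent-variable estimate, updated coordinate-wise
    phi, gamma: regressor vectors of the nominal and error models
    `elementwise_update` stands in for the scalar minimizer derived in
    Appendix E; it is a placeholder, not the paper's actual formula.
    """
    for _ in range(n_cycles):                        # cycle K times through all elements
        for idx in np.ndindex(z_hat.shape):
            if elementwise_update is not None:
                z_hat[idx] = elementwise_update(z_hat, idx)
    # refined one-step-ahead prediction, in the spirit of (6)
    y_refined = theta_hat.T @ phi + z_hat.T @ gamma
    return z_hat, y_refined
```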
V Numerical experiments
In this section we evaluate the proposed method and compare it with two alternative identification methods.
V-A Identification methods and experimental setup
The numerical experiments were conducted as follows. Three methods have been used: LS identification of an affine Arx model, Narx using wavelet networks (Wave for short), and the latent variable method (Lava) presented in this paper. From our numerical experiments we found that even a single iteration of the majorization-minimization algorithm produces good results, and doing so leads to a computationally efficient recursive implementation (which we denote Lava-R, for recursive).
For each method the function is taken as the linear regressor in (3). Then the dimension of equals . For affine Arx, the model is given by
where is estimated using recursive least squares [27]. Note that in Lava-R, is computed as a by-product of (32).
For the wavelet network, nlarx in the System Identification Toolbox for Matlab was used, with the number of nonlinear units automatically detected [17].
For Lava-R, the model is given by (6) and are found by the minimization of (26) using . The minimization is performed using the recursive algorithm in Section IV-B3 with . The nonlinear function of the data can be chosen to be a set of basis functions evaluated at . Then can be seen as a truncated basis expansion of some nonlinear function. In the numerical examples, uses the Laplace basis expansion due to its good approximation properties [28]. Each element in the expansion is given by
(35)
where are the boundaries of the inputs and outputs for each channel and are the indices for each element of . Then the dimension of
equals , where is a user parameter which determines the resolution of the basis.
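As an illustration of the kind of basis used, the following sketch evaluates tensor-product Laplace-operator eigenfunctions of the type in [28] on a hypercube. The exact normalization and indexing conventions of (35) are assumptions here.

```python
import numpy as np
from itertools import product

def laplace_basis(x, L, max_index):
    """Evaluate tensor-product Laplace-operator eigenfunctions at a point x.

    x : array of shape (d,), the current regressor (past inputs/outputs)
    L : array of shape (d,), domain boundaries per channel, x_i in [-L_i, L_i]
    max_index : resolution per dimension (the user parameter setting the basis size)

    Returns a vector with max_index**d elements. The normalization
    sin(pi*k*(x+L)/(2L))/sqrt(L) is a common convention, assumed here.
    """
    x = np.asarray(x, dtype=float)
    L = np.asarray(L, dtype=float)
    d = int(x.size)
    gamma = []
    for k in product(range(1, max_index + 1), repeat=d):
        k = np.asarray(k, dtype=float)
        gamma.append(np.prod(np.sin(np.pi * k * (x + L) / (2 * L)) / np.sqrt(L)))
    return np.asarray(gamma)

# example: 3-dimensional regressor, boundaries 5 per channel, resolution 4
print(laplace_basis(np.zeros(3), 5 * np.ones(3), max_index=4).shape)  # (64,)
```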
Finally, an important part of the identification setup is the choice of input signal. For a nonlinear system it is important to excite the system both in frequency and in amplitude. For linear models a commonly used input signal is a pseudorandom binary sequence (PRBS), which is a signal that shifts between two levels in a certain fashion. One reason for using a PRBS is its good correlation properties [27]. Hence, a PRBS excites the system well in frequency, but poorly in amplitude. A remedy to the poor amplitude excitation is to multiply each interval of constant signal level by a random factor that is uniformly distributed on some interval , cf. [34]. Hence, if the PRBS takes the values and , then the resulting sequence will contain constant intervals with random amplitudes between and . We denote such a random amplitude sequence , where is the maximum amplitude.

V-B Performance metric
For the examples considered here the system does not necessarily belong to the model class, and thus there is no true parameter vector to compare with the estimated parameters. Hence, the different methods will instead be evaluated with respect to the simulated model output . For Lava-R,
and is computed as in (35) with replaced by .
The performance can then be evaluated using the root mean squared error (RMSE) for each output channel ,
The expectations are computed using 100 Monte Carlo simulations on validation data.
For the dataset collected from a real system, it is not possible to evaluate the expectation in the RMSE formula. For such data sets we use the fit (FIT) of the data, i.e.,
where contains the simulated outputs for channel , is the empirical mean of and is a vector of ones. Hence, FIT compares the simulated output errors with those obtained using the empirical mean as the model output.
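For reference, the FIT measure described above is commonly computed as 100(1 − ‖y − ŷ‖/‖y − ȳ1‖); the sketch below assumes that standard definition.

```python
import numpy as np

def fit_percent(y, y_sim):
    """FIT value (in percent) for one output channel.

    y     : measured validation outputs, shape (N,)
    y_sim : simulated model outputs, shape (N,)
    Assumes FIT = 100 * (1 - ||y - y_sim|| / ||y - mean(y)||).
    """
    y = np.asarray(y, dtype=float)
    y_sim = np.asarray(y_sim, dtype=float)
    return 100.0 * (1.0 - np.linalg.norm(y - y_sim) / np.linalg.norm(y - y.mean()))
```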
V-C System with saturation
Consider the following state-space model,
(36)
(37)
(38)
where and
A block diagram of the above system is shown in Fig. 1. The measurement noise was chosen as a white Gaussian process with covariance matrix where .
Data was collected from the system using an input signal for several different amplitudes . The identification was performed using , , , and data samples. This means that and , and therefore there are 10 parameters in and in .
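To illustrate this kind of experimental setup, here is a minimal sketch that generates a random-amplitude, binary-type input and simulates a toy system with an input saturation. The dynamics, dimensions, and signal-generation details are illustrative placeholders and do not reproduce (36)–(38) or the exact input sequence used in the paper.

```python
import numpy as np

def random_amplitude_prbs(n_samples, period, amp_max, rng):
    """Piecewise-constant input: PRBS-like sign switching, with each constant
    interval scaled by a random amplitude in [0, amp_max] (cf. Section V-A)."""
    u = np.zeros(n_samples)
    level = 1.0
    for start in range(0, n_samples, period):
        if rng.random() < 0.5:
            level = -level                         # random level switch
        u[start:start + period] = level * amp_max * rng.random()
    return u

def simulate_saturated_system(u, sat_level=1.0, noise_std=0.01, rng=None):
    """Toy first-order linear system driven by a saturated input.
    The pole, gain, and dimensions are hypothetical, not those of (36)-(38)."""
    rng = rng or np.random.default_rng(0)
    a, b = 0.9, 0.5                                # hypothetical pole and input gain
    x, y = 0.0, np.zeros(len(u))
    for t in range(len(u)):
        u_sat = np.clip(u[t], -sat_level, sat_level)   # static saturation
        x = a * x + b * u_sat
        y[t] = x + noise_std * rng.standard_normal()
    return y

rng = np.random.default_rng(1)
u = random_amplitude_prbs(n_samples=1000, period=20, amp_max=2.0, rng=rng)
y = simulate_saturated_system(u, rng=rng)
```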
[Fig. 1: Block diagram of the system (36)–(38).]
Note that, for sufficiently low amplitudes , will be smaller than the saturation level for all , and thus the system will behave as a linear system. However, when increases, the saturation will affect the system output more and more. The RMSE was computed for eight different amplitudes , and the result is shown in Fig. 2. It can be seen that for small amplitudes, when the system is effectively linear, the Arx model gives a marginally better result than Lava-R and Wave. However, as the amplitude is increased, the nonlinear effects become more important, and Lava-R outperforms both Wave and Arx models.
[Fig. 2: RMSE as a function of the input amplitude for Lava-R, Wave and Arx.]
V-D Water tank
In this example a real cascade tank process is studied. It consists of two tanks mounted on top of each other, with free outlets. The top tank is fed with water by a pump. The input signal is the voltage applied to the pump, and the outputs are the water levels in the two tanks. The setup is described in more detail in [34]. The data set consists of 2500 samples collected every five seconds. The first 1250 samples were used for identification, and the last 1250 samples for validation.
[Fig. 3: Measured and simulated outputs for the cascade tank process.]
The identification was performed using , and . With two outputs, this results in 14 parameters in and 1458 parameters in . Lava-R found a model with only 37 nonzero parameters in ; the simulated output together with the measured output is shown in Fig. 3. The FIT values, computed on the validation data, are shown in Table I. It can be seen that an affine Arx model gives a good fit, but also that Lava-R improves the FIT measure significantly. In this example, Wave did not perform very well.
Table I: FIT values computed on the validation data for the water tank example.

|  | Lava-R | Wave | Arx |
| --- | --- | --- | --- |
| Upper tank |  |  |  |
| Lower tank |  |  |  |
V-E Pick-and-place machine
In the final example, a real pick-and-place machine is studied. This machine is used to place electronic components on a circuit board, and is described in detail in [13]. This system exhibits saturation, different modes, and other nonlinearities. The data used here are from a real physical process, and were also used in e.g. [3, 14, 20]. The data set consists of a 15s recording of the single input and the vertical position of the mounting head . The data was sampled at 50 Hz, and the first 8s () were used for identification and the last 7s for validation.
The identification was performed using , and . For the SISO system considered here, this results in 5 parameters in and 1296 parameters in . Lava-R found a model with only 33 nonzero parameters in ; its simulated output is shown in Fig. 4.
[Fig. 4: Simulated output for the pick-and-place machine.]
Table II: FIT values computed on the validation data for the pick-and-place example.

|  | Lava-R | Wave | Arx |
| --- | --- | --- | --- |
| FIT |  |  |  |
The FIT values, computed on the validation data, for Lava-R, Wave and affine Arx are shown in Table II. Lava-R outperforms Narx using wavelet networks, and both are better than Arx.
VI Conclusion
We have developed a method for learning nonlinear systems with multiple outputs and inputs. We began by modelling the errors of a nominal predictor using a latent variable formulation. The nominal predictor could, for instance, be a linear approximation of the system, but could also include known nonlinearities. A learning criterion was derived based on the principle of maximum likelihood, which obviates the tuning of regularization parameters. The criterion was then minimized using a majorization-minimization approach. Specifically, we derived a convex, user-parameter-free formulation, which led to a computationally efficient recursive algorithm that can be applied to large datasets as well as online learning problems.
The method introduced in this paper learns parsimonious predictor models and captures nonlinear system dynamics. This was illustrated using both synthetic and real data examples. As shown in these examples, a recursive implementation of the proposed method was capable of outperforming a batch method based on a Narx model with a wavelet network.
Appendix A Derivation of the distributions (8) and (14)
We start by computing given in (7). The function can be found from (1)–(4) and the chain rule:
(39)
where we have neglected initial conditions [27]. Since
it follows that
(40)
Using the vectorized variable in (9)-(10) we can see that
and thus,
Next, we note that the following useful equality holds:
(41)
where is given by (11), by (15), and by (16). To see that the equality holds, expand the norms on both sides of (41) and apply the matrix inversion lemma.
Appendix B Derivation of the majorizing tangent plane (22)
The first-order Taylor expansion of the log-determinant can be written as
where is the vector of diagonal elements in and contains the diagonal elements in .
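For a diagonal covariance matrix with diagonal vector σ (generic notation assumed here), the tangent-plane bound used in this appendix follows from the concavity of the logarithm:

```latex
\log\det\!\big(\operatorname{diag}(\boldsymbol{\sigma})\big)
  \;=\; \sum_{i}\log\sigma_i
  \;\le\; \sum_{i}\Big(\log\tilde{\sigma}_i
          + \frac{\sigma_i-\tilde{\sigma}_i}{\tilde{\sigma}_i}\Big),
```

with equality when the diagonal vector coincides with the expansion point.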
For the derivatives with respect to we have
Note that
so . Hence
In the same way
Since is concave in and , it follows that
(44)
where
Appendix C Proof of Theorem 2
It follows from (21) that
where . Hence,
Thus, we can rewrite (25) as (to within an additive constant):
(45)
where .
Appendix D Proof of Theorem 3
Initializing (18) at and where , produces two sequences denoted and for , respectively. This also results in sequences and . The theorem states that:
(46)
(47)
We now show the stronger result that the covariance matrices converge as
(48)
where . Note that as . Hence (48) implies (47). We prove (48) and (46) by induction. That (48) and (46) hold for follows directly from Theorem 2. Now assume that (48) holds for some . Let