# Recursive nonlinear-system identification using latent variables

In this paper we develop a method for learning nonlinear systems with multiple outputs and inputs. We begin by modelling the errors of a nominal predictor of the system using a latent variable framework. Then using the maximum likelihood principle we derive a criterion for learning the model. The resulting optimization problem is tackled using a majorization-minimization approach. Finally, we develop a convex majorization technique and show that it enables a recursive identification method. The method learns parsimonious predictive models and is tested on both synthetic and real nonlinear systems.

## I Introduction

In this paper we consider the problem of learning a nonlinear dynamical system model with multiple outputs and multiple inputs (when they exist). Generally this identification problem can be tackled using different model structures, with the class of linear models being arguably the most well studied in engineering, statistics and econometrics [27, 15, 6, 2, 7].

Linear models are often used even when the system is known to be nonlinear [10, 25]. However, certain nonlinearities, such as saturations, cannot always be neglected. In such cases, using block-oriented models is a popular approach to capture static nonlinearities [11]. Recently, such models have been given semiparametric formulations and identified using machine learning methods, cf. [22, 23]. To model nonlinear dynamics a common approach is to use Narmax models [26, 5].

In this paper we are interested in recursive identification methods [18]. In cases where the model structure is linear in the parameters, recursive least-squares can be applied. For certain models with nonlinear parameters, the extended recursive least-squares has been used [8]. Another popular approach is the recursive prediction error method which has been developed, e.g., for Wiener models, Hammerstein models, and polynomial state-space models [33, 19, 31].

Nonparametric models are often based on weighted sums of the observed data [24]. The weights vary for each predicted output and the number of weights increases with each observed datapoint. The weights are typically obtained in a batch manner; in [1, 4] they are computed recursively but must be recomputed for each new prediction of the output.

For many nonlinear systems, however, linear models work well as an initial approximation. The strategy in [21] exploits this fact by first finding the best linear approximation using a frequency domain approach. Then, starting from this approximation, a nonlinear polynomial state-space model is fitted by solving a nonconvex problem. This two-step method cannot be readily implemented recursively and it requires input signals with appropriate frequency domain properties.

In this paper, we start from a nominal model structure. This class can be based on insights about the system, e.g. that linear model structures can approximate a system around an operating point. Given a record of past outputs $y$ and inputs $u$ up to time $t$, that is,

 $\mathcal{D}_t\triangleq\{(y(1),u(1)),\ldots,(y(t),u(t))\},$

a nominal model yields a predicted output $y_0(t)$ which differs from the observed output $y(t)$. The resulting prediction error is denoted $\varepsilon(t)$ [16]. By characterizing the nominal prediction errors in a data-driven manner, we aim to develop a refined predictor model of the system. Thus we integrate classic and data-driven system modeling approaches in a natural way.

The general model class and problem formulation are introduced in Section II. Then in Section III we apply the principle of maximum likelihood to derive a statistically motivated learning criterion. In Section IV this nonconvex criterion is minimized using a majorization-minimization approach that gives rise to a convex, user-parameter-free method. We derive a computationally efficient recursive algorithm for solving the convex problem, which can be applied to large datasets as well as online learning scenarios. In Section V, we evaluate the proposed method using both synthetic and real data examples.

In a nutshell, the contribution of the paper is a modelling approach and identification method for nonlinear multiple input-multiple output systems that:

• explicitly separates modeling based on application-specific insights from general data-driven modelling,

• circumvents the choice of regularization parameters and initialization points,

• learns parsimonious predictor models,

• admits a computationally efficient implementation.

Notation: $E_{i,j}$ denotes the $ij$th standard basis matrix. $\otimes$ and $\odot$ denote the Kronecker and Hadamard products, respectively.

$\operatorname{vec}(\cdot)$ is the vectorization operation.

$\|\cdot\|_1$, $\|\cdot\|_2$ and $\|x\|_W=\sqrt{x^\top Wx}$, where $W\succ0$, denote the $\ell_1$-, $\ell_2$- and weighted norms, respectively. The Moore-Penrose pseudoinverse of a matrix $A$ is denoted $A^\dagger$.

###### Remark 1.

An implementation of the proposed method is available at https://github.com/magni84/lava.

## II Problem formulation

Given $\mathcal{D}_{t-1}$, the $n_y$-dimensional output of a system can always be written as

 $y(t)=y_0(t)+\varepsilon(t),$ (1)

where $y_0(t)$ is any one-step-ahead predictor based on a nominal model. Here we consider nominal models of the form

 $y_0(t)=\Theta\varphi(t),$ (2)

where the vector $\varphi(t)$ is a given function of $\mathcal{D}_{t-1}$ and $\Theta$ denotes the unknown parameters.

###### Remark 2.

A typical example of $\varphi(t)$ is

 $\varphi(t)=[\,y^\top(t-1)\ \cdots\ y^\top(t-n_a)\ \ u^\top(t-1)\ \cdots\ u^\top(t-n_b)\ \ 1\,]^\top,$ (3)

in which case the nominal predictor is linear in the data and therefore captures the linear system dynamics. Known nonlinearities can also be incorporated, in which case $\varphi(t)$ will be nonlinear in the data.
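As a concrete illustration, the regressor (3) can be assembled as follows; this is a minimal sketch (the function name and the most-recent-first argument convention are ours, not the paper's):

```python
import numpy as np

def build_regressor(y_past, u_past, na, nb):
    """Build the regressor phi(t) of eq. (3) from past outputs
    y(t-1), ..., y(t-na) and inputs u(t-1), ..., u(t-nb).
    y_past, u_past: lists of 1-D arrays, most recent first."""
    parts = [y_past[k] for k in range(na)] + [u_past[k] for k in range(nb)]
    # trailing 1 gives the affine (constant) term of the nominal model
    return np.concatenate(parts + [np.ones(1)])
```

With $n_y=n_u=2$ and $n_a=n_b=1$ this yields a regressor of dimension $2+2+1=5$.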

The popular Arx model structure, for instance, can be cast into the framework (1) and (2) by assuming that the nominal prediction error $\varepsilon(t)$ is a white noise process [27, 15]. For certain systems, (2) may accurately describe the dynamics of the system around its operating point, and consequently the white noise assumption on $\varepsilon(t)$ may be a reasonable approximation. However, this ceases to be the case even for systems with weak nonlinearities, cf. [10].

Next, we develop a data-driven model of the prediction errors in (1), conditioned on the past data $\mathcal{D}_{t-1}$. Specifically, we assume the conditional model

 $\varepsilon(t)\,|\,\mathcal{D}_{t-1}\sim\mathcal{N}\big(Z\gamma(t),\Sigma\big),$ (4)

where $Z$ is an $n_y\times q$ matrix of unknown latent variables, $\Sigma$ is an unknown covariance matrix, and the vector $\gamma(t)$ is any given function of $\mathcal{D}_{t-1}$. This is a fairly general model structure that can capture correlated, data-dependent nominal prediction errors.

Note that when $Z\equiv0$, the prediction errors are temporally white and the nominal model (2) captures all relevant system dynamics. The latent variable $Z$ is modeled as random here. Before data collection, we assume $Z$ to have mean zero, as we have no reason to depart from the nominal model assumptions until after observing data. Using a Gaussian distribution, we thus get

 $\operatorname{vec}(Z)\sim\mathcal{N}(0,D),$ (5)

where $D$ is an unknown covariance matrix.

Our goal here is to identify a refined predictor of the form

 $\hat y(t)=\underbrace{\hat\Theta\varphi(t)}_{\hat y_0(t)}+\underbrace{\hat Z\gamma(t)}_{\hat\varepsilon(t)},$ (6)

from a data set $\mathcal{D}_N$, by maximizing the likelihood function. The first term is an estimate of the nominal predictor model, while the second term tries to capture structure in the data that is not taken into account by the nominal model. Note that when $\hat Z$ is sparse we obtain a parsimonious predictor model.

###### Remark 3.

The model (1)-(4) implies that we can write the output in the equivalent form

 $y(t)=\Theta\varphi(t)+Z\gamma(t)+v(t),$

where $v(t)$ is a white process with covariance $\Sigma$. In order to formulate a flexible data-driven error model (4), we overparametrize it using a high-dimensional $\gamma(t)$. In this case, regularization of $Z$ is desirable, which is achieved by (5). Note that $\Sigma$ and $D$ are both unknown. Estimating these covariance matrices corresponds to using a data-adaptive regularization, as we show in subsequent sections.

###### Remark 4.

The nonlinear function $\gamma(t)$ of the data can be seen as a basis expansion which is chosen to yield a flexible model structure for the errors. In the examples below we will use the Laplace operator basis functions [28]. Other possible choices include polynomial basis functions, Fourier basis functions, wavelets, etc. [26, 15, 32].

###### Remark 5.

In (6), $\hat y(t)$ is a one-step-ahead predictor. However, the framework can be readily applied to $k$-step-ahead prediction, where $\varphi(t)$ and $\gamma(t)$ then depend on $\mathcal{D}_{t-k}$.

## III Latent variable framework

Given a record of $N$ data samples, $\mathcal{D}_N$, our goal is to estimate $\Theta$ and $Z$ to form the refined predictor (6). In Section III-A, we employ the maximum likelihood approach based on the likelihood function $p(Y|\Omega)$, which requires the estimation of the nuisance parameters $\Sigma$ and $D$. For notational simplicity, we write the parameters as $\Omega=\{\Theta,\Sigma,D\}$, and in Section III-B we show how an estimator of $Z$ is obtained as a function of $\Omega$ and the data.

### III-A Parameter estimation

We write the output samples in matrix form as

 $Y=[\,y(1)\ \cdots\ y(N)\,]\in\mathbb{R}^{n_y\times N}.$

In order to obtain maximum likelihood estimates of $\Omega$, we first derive the likelihood function by marginalizing out the latent variables from the data distribution:

 $p(Y|\Omega)=\int p(Y|\Omega,Z)\,p(Z)\,dZ,$ (7)

where the data distribution $p(Y|\Omega,Z)$ and the prior $p(Z)$ are given by (4) and (5), respectively.

To obtain a closed-form expression of (7), we begin by constructing the regression matrices

 $\Phi=[\,\varphi(1)\ \cdots\ \varphi(N)\,]\in\mathbb{R}^{p\times N},\qquad \Gamma=[\,\gamma(1)\ \cdots\ \gamma(N)\,]\in\mathbb{R}^{q\times N}.$

It is shown in Appendix A that (7) can be written as

 $p(Y|\Omega)=\frac{1}{\sqrt{(2\pi)^{Nn_y}|R|}}\exp\left(-\tfrac12\|y-F\theta\|^2_{R^{-1}}\right),$ (8)

where

 $y=\operatorname{vec}(Y),\quad\theta=\operatorname{vec}(\Theta),\quad z=\operatorname{vec}(Z),$ (9)

are vectorized variables, and

 $F=\Phi^\top\otimes I_{n_y},\qquad G=\Gamma^\top\otimes I_{n_y},$ (10)
 $R\triangleq GDG^\top+I_N\otimes\Sigma.$ (11)

Note that (8) is not a Gaussian distribution in general, since $R$ may be a function of $y$ (through $\Gamma$). It follows that maximizing (8) is equivalent to solving

 minΩV(Ω), (12)

where

 $V(\Omega)\triangleq\|y-F\theta\|^2_{R^{-1}}+\ln|R|,$ (13)

and $y-F\theta$ is nothing but the vector of nominal prediction errors.

### III-B Latent variable estimation

Next, we turn to the latent variable $Z$, which is used to model the nominal prediction error in (4). As we show in Appendix A, the conditional distribution of $z=\operatorname{vec}(Z)$ is Gaussian and can be written as

 $p(Z|\Omega,Y)=\frac{1}{\sqrt{(2\pi)^{n_yq}|\Sigma_z|}}\exp\left(-\tfrac12\|z-\zeta\|^2_{\Sigma_z^{-1}}\right),$ (14)

with conditional mean

 $\zeta=DG^\top R^{-1}(y-F\theta),$ (15)

and covariance matrix

 $\Sigma_z=\big(D^{-1}+G^\top(I_N\otimes\Sigma^{-1})G\big)^{-1}.$ (16)

An estimate $\hat z$ is then given by evaluating the conditional (vectorized) mean (15) at the optimal estimate $\hat\Omega$ obtained via (12).
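The conditional mean (15) and covariance (16) can be sketched densely as follows (small-$N$ illustration with our own function name; the two expressions for $\Sigma_z$ are related through the matrix inversion lemma, which the test below checks):

```python
import numpy as np

def latent_posterior(Y, Phi, Gamma, Theta, Sigma, D):
    """Conditional mean zeta (15) and covariance Sigma_z (16) of vec(Z)
    given Omega = {Theta, Sigma, D}."""
    ny, N = Y.shape
    G = np.kron(Gamma.T, np.eye(ny))
    R = G @ D @ G.T + np.kron(np.eye(N), Sigma)
    e = (Y - Theta @ Phi).flatten(order="F")          # y - F theta
    zeta = D @ G.T @ np.linalg.solve(R, e)            # eq. (15)
    Sigma_z = np.linalg.inv(                          # eq. (16)
        np.linalg.inv(D)
        + G.T @ np.kron(np.eye(N), np.linalg.inv(Sigma)) @ G)
    return zeta, Sigma_z
```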

## IV Majorization-minimization approach

The quantities in the refined predictor (6) are readily obtained from the solution of (12). In general, $V(\Omega)$ may have local minima, and (12) must be tackled using computationally efficient iterative methods to find the optimum. The obtained estimates will then depend on the choice of initial point $\Omega_0$. Such methods include the majorization-minimization approach [12, 35], which in turn includes Expectation Maximization [9] as a special case.

The majorization-minimization approach is based on finding a majorizing function $V'(\Omega|\tilde\Omega)$ with the following properties:

 $V(\Omega)\le V'(\Omega|\tilde\Omega),\quad\forall\Omega,$ (17)

with equality when $\Omega=\tilde\Omega$. The key is to find a majorizing function that is simpler to minimize than $V$. Let $\Omega_{k+1}$ denote the minimizer of $V'(\Omega|\Omega_k)$. Then

 $V(\Omega_{k+1})\le V'(\Omega_{k+1}|\Omega_k)\le V'(\Omega_k|\Omega_k)=V(\Omega_k).$ (18)

This property leads directly to an iterative scheme that decreases $V$ monotonically, starting from an initial estimate $\Omega_0$.

Given the overparameterized error model (4), it is natural to initialize at points in the parameter space which correspond to the nominal predictor model structure (2). That is,

 $\Omega_0=\{\Theta_0,\,0,\,\Sigma_0\},\quad\text{where}\ \Sigma_0\succ0,$ (19)

at which $D=0$, so that the latent-variable part of the model vanishes.

### IV-A Convex majorization

For a parsimonious parameterization and a computationally advantageous formulation, we consider a diagonal structure of the covariance matrices in (4), i.e., we let

 $\Sigma=\operatorname{diag}(\sigma_1,\sigma_2,\ldots,\sigma_{n_y}),$ (20)

and we let $D_i$ denote the covariance matrix corresponding to the $i$th row of $Z$, so that

 $D=\sum_{i=1}^{n_y}D_i\otimes E_{i,i}.$ (21)

We begin by majorizing (13) via a linearization of the concave term $\ln|R|$:

 $\ln|R|\le\ln|\tilde R|-\operatorname{tr}\{\tilde R^{-1}(I_N\otimes\tilde\Sigma)\}-\operatorname{tr}\{G^\top\tilde R^{-1}G\tilde D\}+\operatorname{tr}\{\tilde R^{-1}(I_N\otimes\Sigma)\}+\operatorname{tr}\{G^\top\tilde R^{-1}GD\},$ (22)

where $\tilde\Sigma$ and $\tilde D$ are arbitrary diagonal covariance matrices and $\tilde R$ is obtained by inserting $\tilde\Sigma$ and $\tilde D$ into (11). The right-hand side of the inequality above is a majorizing tangent plane to $\ln|R|$; see Appendix B. The use of (22) yields a convex majorizing function of the criterion in (12):

 $V'(\Omega|\tilde\Omega)=\|y-F\theta\|^2_{R^{-1}}+\operatorname{tr}\{\tilde R^{-1}(I_N\otimes\Sigma)\}+\operatorname{tr}\{G^\top\tilde R^{-1}GD\}+\tilde K,$ (23)

where $\tilde K$ is a constant. To derive a computationally efficient algorithm for minimizing (23), the following theorem is useful:

###### Theorem 1.

The majorizing function (23) can also be written as

 $V'(\Omega|\tilde\Omega)=\min_Z V'(\Omega|Z,\tilde\Omega),$ (24)

where

 $V'(\Omega|Z,\tilde\Omega)=\|Y-\Theta\Phi-Z\Gamma\|^2_{\Sigma^{-1}}+\|\operatorname{vec}(Z)\|^2_{D^{-1}}+\operatorname{tr}\{\tilde R^{-1}(I_N\otimes\Sigma)\}+\operatorname{tr}\{G^\top\tilde R^{-1}GD\}+\tilde K.$ (25)

The minimizing $Z$ is given by the conditional mean (15).

###### Proof.

The problem in (24) has a minimizing $Z$ which, after vectorization, equals $\zeta$ in (15). Inserting the minimizing $Z$ into (25) and using (41) yields (23). ∎

###### Remark 6.

The augmented form in (25) enables us to solve for the nuisance parameters $\Sigma$ and $D$ in closed form, and also yields the conditional mean $\hat Z$ as a by-product.

To prepare for the minimization of the function (25), we write the matrix quantities using variables $y_i$, $\theta_i$, $z_i$ and $\gamma_i$ that denote the $i$th rows of the following matrices:

 $Y=\begin{bmatrix}\vdots\\ y_i^\top\\ \vdots\end{bmatrix},\quad\Theta=\begin{bmatrix}\vdots\\ \theta_i^\top\\ \vdots\end{bmatrix},\quad Z=\begin{bmatrix}\vdots\\ z_i^\top\\ \vdots\end{bmatrix},\quad\Gamma=\begin{bmatrix}\vdots\\ \gamma_i^\top\\ \vdots\end{bmatrix}.$
###### Theorem 2.

After concentrating out the nuisance parameters, the minimizing arguments $\hat\Theta$ and $\hat Z$ of the function (25) are obtained by solving the following convex problem:

 $\min_{\Theta,Z}\ \sum_{i=1}^{n_y}\Big(\|y_i-\Phi^\top\theta_i-\Gamma^\top z_i\|_2+\|w_i\odot z_i\|_1\Big),$ (26)

where

 $w_i=[\,w_{i,1}\ \cdots\ w_{i,q}\,]^\top,\qquad w_{i,j}=\sqrt{\frac{\gamma_j^\top\tilde R_i^{-1}\gamma_j}{\operatorname{tr}\{\tilde R_i^{-1}\}}},$ (27)
 $\tilde R_i=\Gamma\tilde D_i\Gamma^\top+\tilde\sigma_iI_N.$ (28)

The closed-form expressions for the minimizing nuisance parameters in (20) and (21) are given by

 $\hat\sigma_i=\frac{\|y_i-\Phi^\top\theta_i-\Gamma^\top z_i\|_2}{\sqrt{\operatorname{tr}\{\tilde R_i^{-1}\}}},\qquad \hat d_{i,j}=\frac{|z_{i,j}|}{\sqrt{\gamma_j^\top\tilde R_i^{-1}\gamma_j}}.$ (29)
###### Proof.

See Appendix C. ∎

###### Remark 7.

Problem (26) contains a data-adaptive regularizing term which typically leads to parsimonious estimates of $Z$, cf. [30].

###### Remark 8.

Majorizing at a nominal predictor model, i.e. $\tilde\Omega=\Omega_0$ as discussed above, yields $\tilde R_i=\tilde\sigma_iI_N$, and the weights are given by

 $w_{i,j}=\frac{\|\gamma_j\|_2}{\sqrt{N}}.$ (30)

Then problem (26), and consequently the minimization of (24), becomes invariant with respect to $\Omega_0$.

The iterative scheme (18) is executed by initializing at $\Omega_0$ and solving (26). The procedure is then repeated by updating the majorization point $\tilde\Omega$ using the new estimate. It follows that the estimates will converge to a local minimum of (13). The following theorem establishes that the local minimum found, and thus the resulting predictor (6), is invariant to $\Omega_0$ of the form (19).

###### Theorem 3.

All initial points of the form (19) result in the same sequence of minimizers $\{\Theta_k,Z_k\}$ of (26), for all $k$. Moreover, the sequence converges to a unique point as $k\to\infty$.

###### Proof.

See Appendix D. ∎

###### Remark 9.

As a result we may initialize the scheme (18) at any point of the form (19). This obviates the need for carefully selecting an initialization point, which would be needed in e.g. the Expectation Maximization algorithm.

### IV-B Recursive computation

We now show that the convex problem (26) can be solved recursively, for each new sample $y(t)$ and regressors $\varphi(t)$, $\gamma(t)$.

#### IV-B1 Computing $\hat\Theta$

If we fix $Z$ and solve only for $\Theta$, the solution is given by

 $\hat\Theta=\overline\Theta-\hat ZH^\top,$ (31)

where

 $\overline\Theta=Y\Phi^\dagger\quad\text{and}\quad H^\top=\Gamma\Phi^\dagger.$

Note that both $\overline\Theta$ and $H$ are independent of $Z$, and that they can be computed for each sample using a standard recursive least-squares (RLS) algorithm:

 $\overline\Theta(t)=\overline\Theta(t-1)+\big(y(t)-\overline\Theta(t-1)\varphi(t)\big)\varphi^\top(t)P(t),$ (32)
 $H(t)=H(t-1)+P(t)\varphi(t)\big(\gamma^\top(t)-\varphi^\top(t)H(t-1)\big),$ (33)
 $P(t)=P(t-1)-\frac{P(t-1)\varphi(t)\varphi^\top(t)P(t-1)}{1+\varphi^\top(t)P(t-1)\varphi(t)}.$ (34)
###### Remark 10.

Natural initial values are $\overline\Theta(0)=0$ and $H(0)=0$. The matrix $\overline\Theta(t)$ equals $Y\Phi^\dagger$ once the observed samples render $\Phi$ full rank, and the matrix $P(t)$ converges to $(\Phi\Phi^\top)^{-1}$. A common choice for the initial value of $P$ is $P(0)=cI$ with a large constant $c>0$, where a larger constant leads to a faster convergence of (34), cf. [27, 29].
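The joint updates (32)-(34) can be sketched as follows; the class structure and the constant $c$ are our own choices (a minimal sketch, not the paper's Algorithm 1):

```python
import numpy as np

class RecursiveLS:
    """RLS updates (32)-(34) for Theta_bar(t) ~ Y Phi^+ and H(t) with
    H^T = Gamma Phi^+, sharing one gain matrix P(t)."""
    def __init__(self, ny, p, q, c=1e6):
        self.Theta_bar = np.zeros((ny, p))
        self.H = np.zeros((p, q))
        self.P = c * np.eye(p)  # P(0) = c*I; large c -> fast convergence

    def update(self, y, phi, gamma):
        Pphi = self.P @ phi
        self.P -= np.outer(Pphi, Pphi) / (1.0 + phi @ Pphi)             # (34)
        self.Theta_bar += np.outer(y - self.Theta_bar @ phi, phi) @ self.P  # (32)
        self.H += np.outer(self.P @ phi, gamma - phi @ self.H)          # (33)
```

If $\gamma(t)=C\varphi(t)$ for some matrix $C$, then $H$ converges to $C^\top$, since $\Gamma\Phi^\dagger=C$ for a full-rank $\Phi$.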

#### IV-B2 Computing $\hat Z$

Using (31), we concentrate out $\Theta$ from (26) to obtain

 $V'(Z)=\sum_{i=1}^{n_y}\Big(\big\|\xi_i-(\Gamma^\top-\Phi^\top H)z_i\big\|_2+\|w_i\odot z_i\|_1\Big),$

where

 $\xi_i=y_i-\Phi^\top\bar\theta_i.$

In Appendix E it is shown how the minimum of $V'(Z)$ can be found via cyclic minimization with respect to the elements of $Z$, similar to what has been done in [36] in a simpler case. This iterative procedure is implemented using recursively computed quantities and produces an estimate $\hat Z(t)$ at each sample $t$.
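A simple, non-recursive stand-in for this cyclic minimization is sketched below. It minimizes one row's contribution $\|\xi_i-Az_i\|_2+\|w_i\odot z_i\|_1$ coordinate by coordinate, using a numeric one-dimensional search in place of the closed-form updates of Algorithm 1 (function name and interface are ours):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def cyclic_minimize(xi, A, w, n_cycles=3):
    """Cyclically minimize ||xi - A z||_2 + ||w * z||_1 over one row z,
    one coordinate at a time. Each 1-D subproblem is convex."""
    q = A.shape[1]
    z = np.zeros(q)
    for _ in range(n_cycles):
        for j in range(q):
            r = xi - A @ z + A[:, j] * z[j]  # residual excluding coordinate j
            f = lambda zj: np.linalg.norm(r - A[:, j] * zj) + w[j] * abs(zj)
            z[j] = minimize_scalar(f).x
            if abs(z[j]) < 1e-10:            # snap near-zeros to exact zeros
                z[j] = 0.0
    return z
```

Each coordinate update can only decrease the objective, so the iterate is never worse than the zero initialization.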

#### IV-B3 Summary of the algorithm

The algorithm computes $\hat\Theta(t)$ and $\hat Z(t)$ recursively by means of the following steps at each sample $t$:

1. Compute $\overline\Theta(t)$, $H(t)$ and $P(t)$, using (32)-(34).

2. Compute $\hat Z(t)$ via the cyclic minimization of $V'(Z)$, cycling through all elements of $Z$ a fixed number of times.

3. Compute $\hat\Theta(t)=\overline\Theta(t)-\hat Z(t)H^\top(t)$ as in (31).

The estimates are initialized as $\hat\Theta(0)=0$ and $\hat Z(0)=0$. In practice, a small number of cycles works well, since we cycle through all elements of $Z$ for each new data sample. The computational details are given in Algorithm 1 in Appendix E, which can be readily implemented e.g. in Matlab.

## V Numerical experiments

In this section we evaluate the proposed method and compare it with two alternative identification methods.

### V-A Identification methods and experimental setup

The numerical experiments were conducted as follows. Three methods were used: LS identification of an affine Arx model, Narx using wavelet networks (Wave for short), and the latent variable method (Lava) presented in this paper. From our numerical experiments we found that even a single iteration of the majorization-minimization algorithm produces good results, and doing so leads to a computationally efficient recursive implementation, which we denote Lava-R (for recursive).

For each method, the function $\varphi(t)$ is taken as the linear regressor in (3), so its dimension equals $p=n_an_y+n_bn_u+1$. For affine ARX, the model is given by

 $\hat y_{\mathrm{ARX}}(t)=\overline\Theta\varphi(t),$

where $\overline\Theta$ is estimated using recursive least squares [27]. Note that in Lava-R, $\overline\Theta$ is computed as a byproduct in (32).

For the wavelet network, nlarx in the System Identification Toolbox for Matlab was used, with the number of nonlinear units automatically detected [17].

For Lava-R, the model is given by (6), and $\hat\Theta$ and $\hat Z$ are found by the minimization of (26) using the weights (30). The minimization is performed using the recursive algorithm in Section IV-B3. The nonlinear function $\gamma(t)$ of the data can be chosen as a set of basis functions evaluated at $\varphi(t)$, in which case $Z\gamma(t)$ can be seen as a truncated basis expansion of some nonlinear function. In the numerical examples, $\gamma(t)$ uses the Laplace basis expansion due to its good approximation properties [28]. Each element in the expansion is given by

 $\gamma_{k_1,\ldots,k_{p-1}}(t)=\prod_{i=1}^{p-1}\frac{1}{\sqrt{\ell_i}}\sin\!\left(\frac{\pi k_i\big(\varphi_i(t)+\ell_i\big)}{2\ell_i}\right),$ (35)

where $\pm\ell_i$ are the boundaries of the inputs and outputs for each channel and $k_1,\ldots,k_{p-1}\in\{1,\ldots,m\}$ are the indices for each element of $\gamma(t)$. Then the dimension of

 $\gamma(t)=[\,\gamma_{1,\ldots,1}(t)\ \cdots\ \gamma_{m,\ldots,m}(t)\,]^\top$

equals $q=m^{p-1}$, where $m$ is a user parameter which determines the resolution of the basis.
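The expansion (35) can be sketched directly by enumerating all index combinations (the function name and argument layout are ours; here `phi` is the non-constant part of the regressor):

```python
import numpy as np
from itertools import product

def laplace_basis(phi, ell, m):
    """Laplace-operator basis (35): for a regressor phi of length p-1
    with channel bounds ell[i], return the m**(p-1)-dimensional gamma(t)
    over all index combinations k_i in {1, ..., m}."""
    d = len(phi)
    gamma = []
    for ks in product(range(1, m + 1), repeat=d):
        val = 1.0
        for i, ki in enumerate(ks):
            val *= np.sin(np.pi * ki * (phi[i] + ell[i]) / (2 * ell[i])) / np.sqrt(ell[i])
        gamma.append(val)
    return np.array(gamma)
```

Note how quickly the dimension grows: already $d=2$, $m=3$ gives $3^2=9$ basis elements.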

Finally, an important part of the identification setup is the choice of input signal. For a nonlinear system it is important to excite the system both in frequency and in amplitude. For linear models a commonly used input signal is a pseudorandom binary sequence (PRBS), which is a signal that shifts between two levels in a certain fashion. One reason for using a PRBS is its good correlation properties [27]. Hence, a PRBS excites the system well in frequency, but poorly in amplitude. A remedy to the poor amplitude excitation is to multiply each interval of constant signal level by a random factor that is uniformly distributed on some interval, cf. [34]. The resulting sequence then contains constant intervals with random amplitudes up to some maximum amplitude, and we denote such a random amplitude sequence by its maximum amplitude.
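One way to generate such a random-amplitude excitation is sketched below. This is a simplified stand-in: a true PRBS is produced by a shift register, whereas here the level switches randomly at a fixed interval length (all names and parameters are our own):

```python
import numpy as np

def random_amplitude_sequence(n, period, amp, rng=None):
    """Two-level switching signal where each constant interval is scaled
    by a Uniform(0, 1) factor, giving amplitudes in [0, amp]."""
    rng = np.random.default_rng(rng)
    u = np.zeros(n)
    for start in range(0, n, period):
        level = amp * rng.uniform() if rng.integers(2) else 0.0  # switch level
        u[start:start + period] = level
    return u
```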

### V-B Performance metric

For the examples considered here the system does not necessarily belong to the model class, and thus there is no true parameter vector to compare with the estimated parameters. Hence, the different methods will instead be evaluated with respect to the simulated model output $\hat y_s(t)$. For Lava-R,

 $\hat\varphi(t)=[\,\hat y_s^\top(t-1)\ \cdots\ \hat y_s^\top(t-n_a)\ \ u^\top(t-1)\ \cdots\ u^\top(t-n_b)\ \ 1\,]^\top,\qquad \hat y_s(t)=\hat\Theta\hat\varphi(t)+\hat Z\hat\gamma(t),$

and $\hat\gamma(t)$ is computed as in (35) with $\varphi(t)$ replaced by $\hat\varphi(t)$.

The performance can then be evaluated using the root mean squared error (RMSE) for each output channel $i$,

 $\mathrm{RMSE}_i=\sqrt{\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}\big[\|y_i(t)-\hat y_{s,i}(t)\|_2^2\big]}.$

The expectations are computed using 100 Monte Carlo simulations on validation data.

For the dataset collected from a real system, it is not possible to evaluate the expectation in the RMSE formula. For such sets we use the fit of the data, i.e.,

 $\mathrm{FIT}_i=100\left(1-\frac{\|y_i-\hat y_{s,i}\|_2}{\|y_i-\bar y_i\mathbf{1}\|_2}\right),$

where $\hat y_{s,i}$ contains the simulated outputs for channel $i$, $\bar y_i$ is the empirical mean of $y_i$ and $\mathbf{1}$ is a vector of ones. Hence, FIT compares the simulated output errors with those obtained when using the empirical mean as the model output.
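The FIT metric is a one-liner (function name is ours): 100 means a perfect match, while 0 means the simulation is no better than predicting the empirical mean.

```python
import numpy as np

def fit_score(y, y_sim):
    """FIT = 100 * (1 - ||y - y_sim||_2 / ||y - mean(y)||_2) for one channel."""
    return 100.0 * (1.0 - np.linalg.norm(y - y_sim) / np.linalg.norm(y - y.mean()))
```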

### V-C System with saturation

Consider the following state-space model,

 $x_1(t+1)=\operatorname{sat}_2\big[0.9x_1(t)+0.1u_1(t)\big],$ (36)
 $x_2(t+1)=0.08x_1(t)+0.9x_2(t)+0.6u_2(t),$ (37)
 $y(t)=x(t)+e(t),$ (38)

where $x(t)=[x_1(t)\ \ x_2(t)]^\top$ and

 $\operatorname{sat}_a(x)=\begin{cases}x,&|x|\le a,\\ a\,\operatorname{sign}(x),&\text{otherwise.}\end{cases}$

A block diagram for the above system is shown in Fig. 1. The measurement noise $e(t)$ was chosen as a white Gaussian process with a given covariance matrix.
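The system (36)-(38) is straightforward to simulate; the sketch below uses our own function name, and the noise level is an assumption since the paper does not restate it here:

```python
import numpy as np

def simulate_saturation_system(u, noise_std=0.0, rng=None):
    """Simulate (36)-(38): a saturated first state feeding a linear
    second state, with additive white Gaussian measurement noise."""
    rng = np.random.default_rng(rng)
    N = u.shape[1]
    x = np.zeros(2)
    Y = np.zeros((2, N))
    for t in range(N):
        Y[:, t] = x + noise_std * rng.standard_normal(2)      # eq. (38)
        x1 = np.clip(0.9 * x[0] + 0.1 * u[0, t], -2.0, 2.0)   # eq. (36), sat_2
        x2 = 0.08 * x[0] + 0.9 * x[1] + 0.6 * u[1, t]         # eq. (37)
        x = np.array([x1, x2])
    return Y
```

For instance, a large constant input drives $x_1$ into saturation at the level 2.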

Data was collected from the system using a random-amplitude input signal for several different maximum amplitudes. The identification was performed using $n_a=n_b=1$, so that $p=5$ and there are 10 parameters in $\Theta$, while $Z$ contains a much larger number of parameters determined by the basis resolution.

Note that, for sufficiently low amplitudes, the argument of the saturation stays below the saturation level for all $t$, and thus the system behaves as a linear system. However, as the amplitude increases, the saturation affects the system output more and more. The RMSE was computed for eight different amplitudes, and the result is shown in Fig. 2. It can be seen that for small amplitudes, when the system is effectively linear, the Arx model gives a marginally better result than Lava-R and Wave. However, as the amplitude is increased, the nonlinear effects become more important, and Lava-R outperforms both the Wave and Arx models.

### V-D Water tank

In this example a real cascaded tank process is studied. It consists of two tanks mounted on top of each other, with free outlets. The top tank is fed with water by a pump. The input signal is the voltage applied to the pump, and the output consists of the water levels in the two tanks. The setup is described in more detail in [34]. The data set consists of 2500 samples collected every five seconds. The first 1250 samples were used for identification, and the last 1250 samples for validation.

The identification was performed using $n_a=n_b=2$ and $m=3$. With two outputs, this results in 14 parameters in $\Theta$ and 1458 parameters in $Z$. Lava-R found a model with only 37 nonzero parameters in $\hat Z$, and the simulated output together with the measured output are shown in Fig. 3. The FIT values, computed on the validation data, are shown in Table I. It can be seen that an affine ARX model gives a good fit, but also that the FIT measure can be improved significantly using Lava-R. In this example, Wave did not perform very well.

### V-E Pick-and-place machine

In the final example, a real pick-and-place machine is studied. This machine is used to place electronic components on a circuit board, and is described in detail in [13]. The system exhibits saturation, different modes, and other nonlinearities. The data used here come from a real physical process, and were also used in e.g. [3, 14, 20]. The data set consists of a 15 s recording of the single input $u(t)$ and the vertical position of the mounting head $y(t)$. The data was sampled at 50 Hz, and the first 8 s were used for identification and the last 7 s for validation.

The identification was performed using $n_a=n_b=2$ and $m=6$. For the SISO system considered here, this results in 5 parameters in $\Theta$ and 1296 parameters in $Z$. Lava-R found a model with 33 of the parameters in $\hat Z$ being nonzero, the output of which is shown in Fig. 4.

The FIT values, computed on the validation data, for Lava-R, Wave and affine Arx are shown in Table II. Lava-R outperforms Narx using wavelet networks, and both are better than Arx.

## VI Conclusion

We have developed a method for learning nonlinear systems with multiple outputs and inputs. We began by modelling the errors of a nominal predictor using a latent variable formulation. The nominal predictor could, for instance, be a linear approximation of the system, but could also include known nonlinearities. A learning criterion was derived based on the principle of maximum likelihood, which obviates the tuning of regularization parameters. The criterion was then minimized using a majorization-minimization approach. Specifically, we derived a convex, user-parameter-free formulation, which led to a computationally efficient recursive algorithm that can be applied to large datasets as well as online learning problems.

The method introduced in this paper learns parsimonious predictor models and captures nonlinear system dynamics. This was illustrated via synthetic as well as real data examples. As shown in these examples a recursive implementation of the proposed method was capable of outperforming a batch method using a Narx model with a wavelet network.

## Appendix A Derivation of the distributions (8) and (14)

We start by computing $p(Y|\Omega)$ given in (7). The function $p(Y|\Omega,Z)$ can be found from (1)-(4) and the chain rule:

 $p(Y|\Omega,Z)=\prod_{t=1}^{N}p_\varepsilon\big(y(t)-\Theta\varphi(t)\,\big|\,\mathcal{D}_{t-1},Z\big),$ (39)

where we have neglected initial conditions [27]. Since

 pε(y(t)−Θφ(t)|Dt−1,Z)∝exp(−12∥y(t)−Θφ(t)−Zγ(t)∥2Σ−1),

it follows that

 p(Y|Ω,Z)=1√(2π)nyN|Σ|Nexp(−12∥Y−ΘΦ−ZΓ∥2Σ−1). (40)

Using the vectorized variables in (9)-(10), we can see that

 $\operatorname{vec}(\Theta\Phi)=F\theta\quad\text{and}\quad\operatorname{vec}(Z\Gamma)=Gz,$

and thus

 $\|Y-\Theta\Phi-Z\Gamma\|^2_{\Sigma^{-1}}=\|y-F\theta-Gz\|^2_{I_N\otimes\Sigma^{-1}}.$

Next, we note that the following useful equality holds:

 $\|y-F\theta-Gz\|^2_{I_N\otimes\Sigma^{-1}}+\|z\|^2_{D^{-1}}=\|y-F\theta\|^2_{R^{-1}}+\|z-\zeta\|^2_{\Sigma_z^{-1}},$ (41)

where $R$ is given by (11), $\zeta$ by (15), and $\Sigma_z$ by (16). To see that the equality holds, expand the norms on both sides of (41) and apply the matrix inversion lemma.

The sought-after distribution $p(Y|\Omega)$ is given by (7). By using (41) it follows that

 $p(Y|\Omega,Z)\,p(Z)\propto\exp\left(-\tfrac12\|y-F\theta\|^2_{R^{-1}}\right)\exp\left(-\tfrac12\|z-\zeta\|^2_{\Sigma_z^{-1}}\right),$ (42)

up to a normalization constant. Noting that

 $\int\exp\left(-\tfrac12\|z-\zeta\|^2_{\Sigma_z^{-1}}\right)dZ=\sqrt{(2\pi)^{n_yq}|\Sigma_z|},$

it can be seen that

 $p(Y|\Omega)=\frac{1}{\sqrt{(2\pi)^{Nn_y}|R|}}\exp\left(-\tfrac12\|y-F\theta\|^2_{R^{-1}}\right),$ (43)

which proves (8). To obtain an expression for $p(Z|\Omega,Y)$, simply insert (42) and (43) into Bayes' rule to get (14).

## Appendix B Derivation of the majorizing tangent plane (22)

The first-order Taylor expansion of the log-determinant can be written as

 $\ln|R|\simeq\ln|\tilde R|+\big(\partial_\sigma\ln|R|\big)\big|_{R=\tilde R}\,(\sigma-\tilde\sigma)+\big(\partial_d\ln|R|\big)\big|_{R=\tilde R}\,(d-\tilde d),$

where $\sigma$ is the vector of diagonal elements of $\Sigma$ and $d$ contains the diagonal elements of $D$.

For the derivatives with respect to $d_{i,j}$ we have

 $\frac{\partial}{\partial d_{i,j}}\ln|R|=\operatorname{tr}\left(R^{-1}\frac{\partial R}{\partial d_{i,j}}\right)=\operatorname{tr}\left(G^\top R^{-1}G\frac{\partial D}{\partial d_{i,j}}\right).$

Note that

 $\sum_{i=1}^{n_y}\sum_{j=1}^{q}d_{i,j}\frac{\partial D}{\partial d_{i,j}}=D,$

so

 $\big(\partial_d\ln|R|\big)\big|_{R=\tilde R}\,(d-\tilde d)=\operatorname{tr}\big(G^\top\tilde R^{-1}G(D-\tilde D)\big).$

In the same way,

 $\big(\partial_\sigma\ln|R|\big)\big|_{R=\tilde R}\,(\sigma-\tilde\sigma)=\operatorname{tr}\big(\tilde R^{-1}(I_N\otimes(\Sigma-\tilde\Sigma))\big).$

Since $\ln|R|$ is concave in $\Sigma$ and $D$, it follows that

 $\ln|R|\le\tilde K+\operatorname{tr}\big(G^\top\tilde R^{-1}GD\big)+\operatorname{tr}\big(\tilde R^{-1}(I_N\otimes\Sigma)\big),$ (44)

where

 $\tilde K=\ln|\tilde R|-\operatorname{tr}\big(G^\top\tilde R^{-1}G\tilde D\big)-\operatorname{tr}\big(\tilde R^{-1}(I_N\otimes\tilde\Sigma)\big).$

## Appendix C Proof of Theorem 2

It follows from (21) that

 $R=\sum_{i=1}^{n_y}\big(R_i\otimes E_{i,i}\big),$

where $R_i=\Gamma D_i\Gamma^\top+\sigma_iI_N$. Hence,

 $R^{-1}=\sum_{i=1}^{n_y}\big(R_i^{-1}\otimes E_{i,i}\big).$

Thus, we can rewrite (25), to within an additive constant, as

 $V'(\Omega|Z,\tilde\Omega)=\sum_{i=1}^{n_y}\left(\frac{1}{\sigma_i}\|\bar y_i\|_2^2+\|z_i\|^2_{D_i^{-1}}+\sigma_i\operatorname{tr}\big(\tilde R_i^{-1}\big)+\operatorname{tr}\big(\Gamma\tilde R_i^{-1}\Gamma^\top D_i\big)\right),$ (45)

where $\bar y_i=y_i-\Phi^\top\theta_i-\Gamma^\top z_i$.

We next derive analytical expressions for the $\sigma_i$ and $D_i$ that minimize (45). Note that

 $\frac{\partial}{\partial\sigma_i}V'(\Omega|Z,\tilde\Omega)=-\frac{1}{\sigma_i^2}\|\bar y_i\|_2^2+\operatorname{tr}\big(\tilde R_i^{-1}\big),$

and setting the derivative to zero yields the estimate $\hat\sigma_i$ in (29). In the same way it can be seen that the minimum over $d_{i,j}$ is attained at $\hat d_{i,j}$ in (29).

Inserting $\hat\sigma_i$ and $\hat d_{i,j}$ into (45), we see that we can find the minimizing $\Theta$ and $Z$ by minimizing

 $2\sum_{i=1}^{n_y}\left(\sqrt{\operatorname{tr}\big(\tilde R_i^{-1}\big)}\,\|y_i-\Phi^\top\theta_i-\Gamma^\top z_i\|_2+\sum_{j=1}^{q}|z_{i,j}|\sqrt{\gamma_j^\top\tilde R_i^{-1}\gamma_j}\right).$

Since the $i$th term in the above sum depends only on $\theta_i$ and $z_i$, we can divide each term by $2\sqrt{\operatorname{tr}\big(\tilde R_i^{-1}\big)}$ and see that minimizing the criterion above is equivalent to (26).

## Appendix D Proof of Theorem 3

Initializing (18) at two points $\Omega_0$ and $\bar\Omega_0$, both of the form (19), produces two sequences denoted $\{\Omega_k\}$ and $\{\bar\Omega_k\}$, respectively. This also results in sequences $\{\Theta_k,Z_k\}$ and $\{\bar\Theta_k,\bar Z_k\}$. The theorem states that:

 $\Theta_k=\bar\Theta_k\quad\text{and}\quad Z_k=\bar Z_k,$ (46)
 $D_k-\bar D_k\to0\quad\text{and}\quad\Sigma_k-\bar\Sigma_k\to0.$ (47)

We now show the stronger result that the covariance matrices are proportional,

 $D_i^{(k)}=c_i^{(k)}\bar D_i^{(k)},\qquad\sigma_i^{(k)}=c_i^{(k)}\bar\sigma_i^{(k)},\quad\forall k>0,$ (48)

where $c_i^{(k)}>0$ is a scalar that tends to $1$ as $k\to\infty$. Hence (48) implies (47). We prove (48) and (46) by induction. That (48) and (46) hold for the first iteration follows directly from Theorem 2. Now assume that (48) holds for some $k$. Let

 ¯¯¯¯¯Ri =Γ¯¯¯¯¯D(k)i