I. Introduction
The problem of blind source separation (BSS) has been studied extensively, and a plethora of algorithms based on the statistical properties of the source signals have been proposed [1, 2]. Among those employing second-order statistics (SOS), one particular class builds on the canonical correlation analysis (CCA) approach [3, 4, 5, 6, 7, 8, 9, 10, 11], where the demixing matrix is found by maximizing the autocorrelation of each of the recovered signals. This approach rests on the idea that the autocorrelation of a sum of uncorrelated signals is less than or equal to the maximum autocorrelation of the individual signals. As shown in [8], maximizing the autocorrelation value is equivalent to finding the generalised eigenvectors within the matrix-pencil approach [12].
As also shown in [8], the CCA approach works in the noise-free situation. For noisy mixtures, its performance will inevitably degrade. If we can estimate the variance of the white noise in the mixtures, then we can remove the effect of the noise from the mixtures before applying the noise-free BSS algorithms [13, 14]. However, the noise variance is not always available, and in that case the algorithm proposed in [14] will not work.
In this paper, we generalise the traditional CCA to include the noisy case and then apply it to the separation of noisy mixtures [15, 16]. A key advantage of this approach is that successful separation of the source signals can be achieved without estimating the white noise parameters. An online adaptive algorithm is derived accordingly as its adaptive realisation.
Moreover, similar to the CCA case [8], we also relate the generalised CCA (GCCA) approach to a dual-linear predictor based blind source extraction (BSE) structure, and an adaptive algorithm based on this structure is derived, with a rigorous proof of its effectiveness in this context.
This paper is organised as follows. In Section II, the generalised CCA approach is introduced, with a detailed proof and analysis of the conditions under which it can be applied to the BSS problem. The class of adaptive BSE algorithms is derived in Section III. Simulation results are shown in Section IV and conclusions are drawn in Section V.
II. Generalised CCA and Its Application to BSS
II-A. Overview of the CCA Approach to BSS
The instantaneous mixing problem in BSS with mixtures, sources and a mixing matrix can be expressed as
(1) 
with
(2) 
For BSS employing second-order statistics (SOS), we assume the sources are spatially uncorrelated and have different temporal structures:
(3) 
with being the autocorrelation function of the th source signal and for some nonzero delays .
The BSS problem can be solved in a single step by the CCA approach. In CCA [17], two sets of zero-mean variables (with components) and (with components) with a joint distribution are considered. For convenience, we assume . The linear combinations of the variables in the two vectors are respectively given by
(4) 
where and are vectors containing the combination coefficients and they are determined by maximizing the correlation between and
(5) 
with
(6)  
where , , and denotes the statistical expectation operator.
After finding the first pair of optimal vectors and , we can proceed to find the second pair and which maximizes the correlation and at the same time ensures that the new pair of combinations is uncorrelated with the first set . This process is repeated until we find all the pairs of optimal vectors and , , , .
It has been shown [8] that can be obtained by solving the following generalised eigenvalue problem
(7) 
can be found in the same way by changing the subscripts of the matrices in (7) accordingly.
To apply CCA to the BSS problem [8], we choose the vector as in CCA and as . Then the eigenvalue problem in (7) becomes
(8) 
In the context of BSS, and are the same, and we use to represent both. Then (8) can be simplified to
(9) 
Multiplying both sides with , we arrive at the following generalised eigenvector problem
(10) 
Moreover, the correlation maximization problem in (5) becomes
(11) 
and we can prove that, by CCA, the source signals will be recovered completely [8]. However, with added noise, the proof given in the noise-free case is no longer valid, since the denominator in (6) will contain a noise component. As a result, the performance of the CCA approach degrades with increasing noise level.
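For the noise-free case, the generalised eigenvector formulation in (10) can be illustrated numerically. The following is a minimal sketch under assumed conditions — the AR(1) sources and the mixing matrix below are hypothetical, since the paper's symbols are not reproduced here — showing that the generalised eigenvectors of the pencil formed by the lag-1 and lag-0 correlation matrices of the mixtures act as demixing vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000

# Two hypothetical AR(1) sources with different temporal structures.
s = np.zeros((2, n))
for i, a in enumerate([0.9, -0.5]):
    v = rng.standard_normal(n)
    for t in range(1, n):
        s[i, t] = a * s[i, t - 1] + v[t]

A = np.array([[1.0, 0.6], [0.4, 1.0]])   # hypothetical mixing matrix
x = A @ s                                 # noise-free mixtures

# Correlation matrices of the mixtures at lags 0 and 1.
R0 = x @ x.T / n
R1 = x[:, 1:] @ x[:, :-1].T / (n - 1)
R1 = 0.5 * (R1 + R1.T)                    # symmetrise the lagged estimate

# Generalised eigenvalue problem R1 w = lambda R0 w, cf. (10).
_, W = np.linalg.eig(np.linalg.solve(R0, R1))
W = np.real(W)

y = W.T @ x                               # recovered signals (up to scale/order)
G = np.abs(W.T @ A)                       # global matrix: one dominant entry per row
```

Each row of the global matrix should then be dominated by a single entry, indicating that each recovered signal corresponds to exactly one source.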
II-B. Generalised CCA (GCCA)
For noisy mixtures, is given by
(12) 
where is the additive noise vector, which is spatially and temporally white and uncorrelated with the source signals. Its correlation matrix is given by
(13) 
where
is the identity matrix and
is the variance of the noise.
Similarly, we can form a modified CCA problem for two sets of variables with added white noise. Now consider the two sets of zero-mean variables
(14) 
and their corresponding linear combinations:
(15) 
where and are the added white noise vectors, which are not correlated with each other. Now the two vectors and are no longer given by (5), but by
(16) 
with
(17)  
where , , and is a nonzero integer. In this new function, the correlation between the two variables and is not normalised by their variances, but by their own correlations for a common nonzero time lag . The remaining pairs and , , are obtained in a similar way with the same normalisation. Since the noise components are not correlated with each other, nor with and , we have , and for nonzero the denominator in (17) does not include any noise information. Therefore, although noise components exist in the original variables, the vectors and obtained in this way do not depend on the noise at all: the effect of the noise has been removed without estimating its variance.
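The key property exploited here — that white noise contributes only to the zero-lag correlation — can be checked numerically. A minimal sketch with a hypothetical moving-average signal (the actual variables of (14)–(17) are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200000

# A temporally correlated "clean" signal: moving average of white noise.
u = rng.standard_normal(n + 1)
x_clean = u[1:] + 0.8 * u[:-1]
noise = 0.7 * rng.standard_normal(n)   # additive white noise, variance 0.49
x = x_clean + noise

def corr(sig, lag):
    """Sample correlation of a signal with its lagged version."""
    if lag == 0:
        return np.mean(sig * sig)
    return np.mean(sig[lag:] * sig[:-lag])

# Lag-0 correlation is inflated by the noise variance ...
r0_clean, r0_noisy = corr(x_clean, 0), corr(x, 0)
# ... but the lag-1 correlation is (asymptotically) unaffected by the noise.
r1_clean, r1_noisy = corr(x_clean, 1), corr(x, 1)
```

Normalising by a nonzero-lag correlation therefore removes the noise contribution without any knowledge of its variance, which is exactly what the denominator in (17) achieves.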
II-C. Applying GCCA to the BSS Problem
Applying this generalised CCA to the BSS problem, we can replace by in (12) and by with . The two vectors and will be the same as the extraction vector [8]. The extracted signal will be
(18) 
Then the maximization problem in (16) becomes
(19) 
with
(20) 
where , , is the correlation matrix of the observed mixtures. From (12), we have
(21)  
since for .
So the cost function can be further simplified to
(22) 
We assume that all of the diagonal elements of are positive, which means each of the source signals themselves should have a positive correlation value with its delayed version by .
Next, we give a brief proof that maximization of with respect to will lead to successful extraction of one of the source signals in the presence of noise.
Let denote the first global mixing vector. Then in (22) changes into
(23) 
Since all of the diagonal elements of the diagonal matrix are positive, we shall assume , as the differences in the diagonal elements of can always be absorbed into the mixing matrix . This way, the diagonal elements of become the “normalised” autocorrelation values of each source signal and they are assumed to be different from each other. Note the “normalisation” here is not by , but by , . Now we have
(24) 
where , which has a property .
This is an eigenvalue problem and starting from here, we can use the results given in [8] to complete the proof and draw the conclusion that when we maximize with respect to , this will result in a successful extraction of the source signal with the maximum “normalised” autocorrelation value.
After extracting the first source signal, we may use a deflation approach to remove it from the mixtures and then subsequently perform the next extraction [2]. This procedure is repeated until the last source signal is recovered.
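One standard deflation scheme — a sketch, not necessarily the exact procedure of [2] — removes the extracted signal from the mixtures by least-squares projection:

```python
import numpy as np

def deflate(x, y):
    """Remove the contribution of the extracted signal y (length n)
    from the mixtures x (m x n) by least-squares projection."""
    a = (x @ y) / (y @ y)        # regression coefficient per channel
    return x - np.outer(a, y)
```

After deflation, every channel of the residual mixtures is exactly uncorrelated (in the sample sense) with the extracted signal, so the next extraction stage cannot recover the same source again.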
III. Adaptive Realisation
III-A. A Direct Approach
As in the noise-free case [8], the proof shows that the correlation matrices in both the numerator and the denominator of the cost function can be replaced by linear combinations of the correlation matrices at different time lags to improve robustness, as long as the one in the denominator is positive definite. More specifically, instead of maximizing the correlation between and , we maximize the correlation between and a weighted sum of , [16]. The new cost function is given by
(25) 
where
(26) 
with
(27) 
As shown in (25), we have chosen because, in reality, a signal is more likely to be positively correlated with its version delayed by one sample.
Applying the standard gradient descent method to , we have
(28)  
where
(29) 
The correlation can be estimated recursively by
(30) 
where is the corresponding forgetting factor with .
Following some standard stochastic approximation techniques [18], we obtain the following online update equation
(31)  
where is the updating step size.
To avoid the critical case where the norm of becomes too small, after each update we normalise it to unit length, which yields
(32) 
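The update loop of (28)–(32) can be sketched as follows. This is only an assumed realisation — the paper's symbols, lags, and weighting coefficients are elided above, so the numerator lags, the lag-one denominator, and all parameter values below are hypothetical: the ratio of a weighted sum of lagged output correlations to a lag-one output correlation is maximised by stochastic gradient ascent, with recursively estimated correlation matrices and unit-norm renormalisation after each step.

```python
import numpy as np

def gcca_online(x, lags_num=(2, 3), lag_den=1, mu=0.01, beta=0.98,
                warmup=500, seed=0):
    """Online GCCA extraction (assumed form): maximise
    J(w) = (w^T R_num w) / (w^T R_den w) by stochastic gradient ascent,
    where R_num sums lagged mixture correlations and R_den uses lag one."""
    m, n = x.shape
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(m)
    w /= np.linalg.norm(w)
    Rn = np.zeros((m, m))
    Rd = np.zeros((m, m))
    kmax = max(max(lags_num), lag_den)
    for t in range(kmax, n):
        # Recursive (forgetting-factor) correlation estimates, cf. (30).
        Cn = sum(np.outer(x[:, t], x[:, t - k]) for k in lags_num)
        Rn = beta * Rn + (1 - beta) * 0.5 * (Cn + Cn.T)
        Cd = np.outer(x[:, t], x[:, t - lag_den])
        Rd = beta * Rd + (1 - beta) * 0.5 * (Cd + Cd.T)
        if t < kmax + warmup:
            continue                    # let the estimates settle first
        num = w @ Rn @ w
        den = w @ Rd @ w
        if abs(den) < 1e-6:
            continue
        # Gradient of the ratio J = num / den, cf. (28).
        grad = 2.0 * ((Rn @ w) * den - (Rd @ w) * num) / den**2
        w = w + mu * grad               # gradient ascent step, cf. (31)
        w /= np.linalg.norm(w)          # renormalise to unit length, cf. (32)
    return w
```

Under these assumptions, the extraction vector converges towards the source with the largest "normalised" autocorrelation value, in line with the analysis of Section II-C.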
III-B. Adaptive Realisation Based on the Dual-Linear Predictor Structure
For noise-free mixtures, a linear predictor can be employed to extract one of the sources [19, 20, 21, 22], and it is closely related to the CCA approach, as shown in [8]. Similarly, for the GCCA approach, we can develop a corresponding dual-linear predictor structure for its implementation [15].
III-B1. The Structure
For noise-free mixtures, a linear predictor can be employed to extract one of the sources, as shown in Figure 1, where the extracted signal and the instantaneous output error of the linear predictor with a length are given by
(33) 
where is the demixing vector and
(34) 
The cost function is given by
(35) 
As proved in [21], by minimising with respect to , the sources can be extracted successfully.
However, in the presence of noise, there will be a noise term in both the numerator and the denominator of (35), and the proof in [21] is no longer valid. To remove the effect of the noise, as in GCCA, we propose to exploit the white nature of the noise components and employ a dual-linear predictor structure, as shown in Fig. 2, where a second linear predictor with coefficient vector of length is employed and the error signal is given by
(36) 
where
(37) 
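The predictor output errors of (33) and (36) take the following assumed form (a sketch; the actual lags and lengths are as defined in the paper, and are elided above):

```python
import numpy as np

def prediction_error(y, coeffs, delay=1):
    """Linear-predictor output error, cf. (33)/(36) (assumed form):
    e(n) = y(n) - sum_k coeffs[k] * y(n - delay - k)."""
    P = len(coeffs)
    start = delay + P - 1
    return np.array([y[n] - sum(coeffs[k] * y[n - delay - k]
                                for k in range(P))
                     for n in range(start, len(y))])
```

The dual structure evaluates two such errors with different coefficient vectors and lengths; a predictor matched to the signal's temporal structure yields a much smaller mean square prediction error than a mismatched one.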
For the first linear predictor, the mean square prediction error (MSPE) is given by
(38)  
where with , and is when or , and otherwise. From (12), (13) and (3), we have
(39)  
for . Then we have
(40)  
with denoting the global demixing vector and is a diagonal matrix given by
(41) 
with its diagonal element given by
(42) 
Similarly, for the second linear predictor, we have
(43)  
with with and is a diagonal matrix given by
(44) 
with its diagonal element given by
(45) 
III-B2. The Proposed Cost Function
Note that in the second term of both (40) and (43), there is no noise component. We can therefore construct a new cost function as follows
(46) 
Now we impose another condition on the second linear predictor: suppose the coefficients are chosen in such a way that all of the diagonal elements of are positive. This is a difficult condition to verify due to the blind nature of the problem. However, for the special case with and , i.e. a one-step-ahead predictor, we have
(47) 
which is the correlation matrix of the source signals with a time lag of . Then the condition means that each of the source signals should have a positive correlation with a version of itself delayed by lag . As mentioned in Sec. III-A, in reality many signals have this correlation property and can therefore meet this requirement. We can now see that the cost function has the same form as in (23). Therefore, this dual-linear predictor structure can be considered an indirect implementation of the GCCA approach for solving the BSS problem.
Since all of the diagonal elements of are positive, we shall assume , i.e. , , as the differences in the diagonal elements can always be absorbed into the mixing matrix . This way, the diagonal elements , , of in the numerator become the “normalised” autocorrelation values of each source signal and they are assumed to be different from each other. For the case with and , the “normalisation” here is not by , but by .
Now we have
(48) 
where , which has a property . Clearly, according to the proof provided earlier, we can conclude that minimizing with respect to results in successful extraction of the source signal with the minimum "normalised" autocorrelation value. Note that the extracted signal here is not the one with the maximum "normalised" autocorrelation value.
III-B3. Adaptive Algorithm
Applying the standard gradient descent method to , we have
(49)  
where
(50) 
, and can be estimated respectively by
(51) 
where , and are the corresponding forgetting factors with .
IV. Simulations
Here we only provide some preliminary simulation results based on the dual-linear predictor structure [15]. Three source signals are used, generated by passing three randomly generated white Gaussian signals through three different filters. The power of the sources is normalised to one. The correlation value of each source signal for a one-sample shift is checked to make sure it is positive and not close to zero. Fig. 3 shows the three source signals, denoted by , and , respectively.
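The source generation described above can be sketched as follows. The FIR coefficients below are hypothetical (the actual filters are not given in the text); each filter colours white Gaussian noise, and the result is normalised to unit power:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10000

# Hypothetical FIR colouring filters; all give a positive lag-1 correlation.
filters = [np.array([1.0, 0.8]),
           np.array([1.0, 0.4, 0.2]),
           np.array([1.0, 0.2, 0.1])]

sources = []
for h in filters:
    s = np.convolve(rng.standard_normal(n), h, mode="same")
    s /= np.sqrt(np.mean(s ** 2))        # normalise the power to one
    sources.append(s)
sources = np.vstack(sources)

# Check that each one-sample-shift correlation is positive and not near zero.
lag1 = [np.mean(s[1:] * s[:-1]) for s in sources]
```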
The coefficients of the first linear predictor were randomly generated with a length of , and are given by
(53) 
For the second linear predictor, , , and .
The normalised correlation values of the source signals with this dual-linear predictor configuration are , and , respectively. As already proved, since the first source signal has the smallest correlation value of , it will be extracted by minimizing the cost function.
The mixing matrix is randomly generated and given by
(54) 
Each of its row vectors is normalised to unity to make sure the signal power is comparable to the noise variance, which is . The forgetting factors are and the step size is . A learning curve for this case is shown in Fig. 4, with the performance index defined as [2]
(55) 
with .
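Since the exact definition in (55) is elided above, the following sketch uses one common form of the extraction performance index from the BSE literature [2] (an assumption, not necessarily the paper's exact formula), which is zero for perfect extraction:

```python
import numpy as np

def performance_index(g):
    """A common blind-extraction performance index (assumed form):
    PI = (1 / (m - 1)) * (sum_i |g_i|^2 / max_i |g_i|^2 - 1),
    where g is the global (mixing-then-demixing) vector.
    PI = 0 when g is proportional to a unit vector (perfect extraction)."""
    g2 = np.abs(np.asarray(g, dtype=float)) ** 2
    return (g2.sum() / g2.max() - 1.0) / (len(g2) - 1)
```

A successful extraction drives this index towards zero as the demixing vector converges.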
To show its performance in a more general context, we randomly change the initial value of the demixing vector for each run of the algorithm; the average learning curve over runs is given in Fig. 5. Both curves show successful extraction of the source signal.
V. Conclusions
The traditional canonical correlation analysis has been generalised to noisy signals, where the effect of noise can be effectively eliminated by the proposed approach. It was then applied to the blind source separation problem and adaptive implementations were derived. In particular, a dual-linear predictor structure was proposed to blindly extract the source signals from their noisy mixtures, which can be considered an indirect implementation of GCCA. Some preliminary simulation results have been provided to show the effectiveness of the proposed approach.
References
 [1] A. Hyvarinen, J. Karhunen, and E. Oja, Independent Component Analysis, John Wiley & Sons, Inc., New York, 2001.
 [2] A. Cichocki and S. Amari, Adaptive Blind Signal and Image Processing, John Wiley & Sons, Inc., New York, 2003.
 [3] S. V. Schell and W. A. Gardner, “Programmable canonical correlation analysis: a flexible framework for blind adaptive spatial filtering,” IEEE Transactions on Signal Processing, vol. 42, no. 12, pp. 2898–2908, December 1994.
 [4] J. Galy and C. Adnet, “Canonical correlation analysis: a blind source separation using noncircularity,” in IEEE Workshop on Neural Networks for Signal Processing, Australia, December 2000, vol. 1, pp. 465–473.
 [5] M. Borga and H. Knutsson, “A canonical correlation approach to blind source separation,” Technical report LiU-IMT-EX-0062, Department of Biomedical Engineering, Linköping University, Sweden, June 2001.
 [6] O. Friman, M. Borga, P. Lundberg, and H. Knutsson, “Exploratory fMRI analysis by autocorrelation maximization,” NeuroImage, vol. 16, pp. 454–464, June 2002.
 [7] W. Liu, D. P. Mandic, and A. Cichocki, “An analysis of the CCA approach for blind source separation and its adaptive realization,” in Proc. IEEE International Symposium on Circuits and Systems, Kos, Greece, May 2006, pp. 3590–3593.
 [8] W. Liu, D. P. Mandic, and A. Cichocki, “Analysis and online realization of the CCA approach for blind source separation,” IEEE Transactions on Neural Networks, vol. 18, no. 5, pp. 1505–1510, September 2007.
 [9] Y. O. Li, T. Adalı, W. Wang, and V. D. Calhoun, “Joint blind source separation by multiset canonical correlation analysis,” IEEE Transactions on Signal Processing, vol. 57, no. 10, pp. 3918–3929, October 2009.
 [10] B. Peng, W. Liu, and D. P. Mandic, “Subband-based joint blind source separation for convolutive mixtures employing MCCA,” in Proc. the Constantinides International Workshop on Signal Processing, January 2013.
 [11] B. Peng, W. Liu, and D. P. Mandic, “Design of oversampled generalized discrete Fourier transform filter banks for application to subband based blind source separation,” IET Signal Processing, pp. 843–853, December 2013.
 [12] C. Chang, Z. Ding, S. Yau, and F. Chan, “A matrix-pencil approach to blind separation of non-white signals in white noise,” in Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing, Seattle, USA, May 1998, vol. 4, pp. 2485–2488.
 [13] W. Liu and D. P. Mandic, “A normalised kurtosis based algorithm for blind source extraction from noisy measurements,” Signal Processing, vol. 86, pp. 1580–1585, July 2006.
 [14] W. Liu, D. P. Mandic, and A. Cichocki, “Blind second-order source extraction of instantaneous noisy mixtures,” IEEE Trans. Circuits and Systems II: Express Briefs, vol. 53, no. 9, pp. 931–935, September 2006.
 [15] W. Liu, D. P. Mandic, and A. Cichocki, “A duallinear predictor approach to blind source extraction for noisy mixtures,” in Proc. IEEE Workshop on Sensor Array and Multichannel Signal Processing, Darmstadt, Germany, July 2008, pp. 515–519.
 [16] W. Liu, D. P. Mandic, and A. Cichocki, “Blind source separation based on generalised canonical correlation analysis and its adaptive realization,” in Proc. International Congress on Image and Signal Processing, Hainan, China, May 2008, vol. 5, pp. 417–421.
 [17] T. W. Anderson, An Introduction to Multivariate Statistical Analysis, John Wiley & Sons, New York, 2nd edition, 1984.
 [18] S. Haykin, Adaptive Filter Theory, Prentice Hall, Englewood Cliffs, NJ, 3rd edition, 1996.
 [19] W. Liu, D. P. Mandic, and A. Cichocki, “A class of novel blind source extraction algorithms based on a linear predictor,” in Proc. IEEE International Symposium on Circuits and Systems, Kobe, Japan, May 2005, pp. 3599–3602.
 [20] W. Liu, D. P. Mandic, and A. Cichocki, “Blind source extraction of instantaneous noisy mixtures using a linear predictor,” in Proc. IEEE International Symposium on Circuits and Systems, Kos, Greece, May 2006, pp. 4199–4202.
 [21] W. Liu, D. P. Mandic, and A. Cichocki, “Blind source extraction based on a linear predictor,” IET Signal Processing, vol. 1, no. 1, pp. 29–34, March 2007.
 [22] W. Y. Leong, W. Liu, and D. P. Mandic, “Blind source extraction: standard approaches and extensions to noisy and post-nonlinear mixing,” Neurocomputing, vol. 71, no. 10–12, pp. 2344–2355, June 2008.