I. Introduction
Sensory devices often receive signals from multiple physical stimuli that evolve simultaneously but are unrelated to one another. In many of these situations, it is necessary to create separate representations of one or more of these stimuli by blindly processing the observed signals (i.e., by processing them without prior knowledge of the nature of the stimuli). In recent years, there has been considerable progress in the solution of this “blind source separation” (BSS) problem for the special case in which the signals and source variables are linearly related. However, although nonlinear BSS is often performed effortlessly by humans, computational methods for doing this are quite limited [1].
Consider a time series of data x(t), where x is a multiplet of n measurements (x_k for k = 1, …, n). The usual objectives of nonlinear BSS are: 1) determine if these data are instantaneous mixtures of statistically independent source components s(t),

x(t) = f[s(t)]   (1)

where f is a possibly nonlinear, invertible mixing function; 2) if this is the case, compute the mixing function. In other words, the problem is to find a coordinate transformation that transforms the observed data from the measurement-defined coordinate system (x) on state space to a special source coordinate system (s) in which the components of the transformed data are statistically independent. Let ρ(s) be the state space probability density function (PDF) in the source coordinate system, defined so that ρ(s) ds is the fraction of total time that the source trajectory is located within the volume element ds at location s. In the usual formulation of the BSS problem, the source components are required to be statistically independent in the sense that their state space PDF is the product of the density functions of the individual components

ρ(s) = ∏_k ρ_k(s_k)   (2)
In every formulation of BSS, multiple solutions can be created by permutations and componentwise transformations of any one solution. However, it is well known that the criterion in (2) is so weak that it suffers from a much worse nonuniqueness problem: namely, in this form of the BSS problem, multiple solutions can be created by transformations that mix the source variables (see [2] and references therein).
The issue of nonuniqueness can be circumvented by considering the data’s trajectory in (s, ṡ)-space instead of s-space (i.e., state space). First, let ρ(s, ṡ) be the PDF in this space, defined so that ρ(s, ṡ) ds dṡ is the fraction of total time that the location and velocity of the source trajectory are within the volume element ds dṡ at location (s, ṡ). An earlier paper [3] described a formulation of the BSS problem in which this PDF was required to be the product of the density functions of the individual components

ρ(s, ṡ) = ∏_k ρ_k(s_k, ṡ_k)   (3)
Separability in (s, ṡ)-space is a stronger requirement than separability in state space. To see this, note that (2) can be recovered by integrating both sides of (3) over all velocities ṡ, but the latter equation cannot be deduced from the former one. In fact, it can be shown that (3) is strong enough to guarantee that the BSS problem in (s, ṡ)-space has a unique solution, up to permutations and componentwise transformations [3]. Furthermore, this type of statistical independence has the virtue of being satisfied by almost all classical physical systems that are composed of noninteracting subsystems, which are the generators of most signals of interest.
The author previously demonstrated [3] that the PDF of a time series induces a Riemannian geometry on the state space, with the metric equal to the local second-order correlation matrix of the data’s velocity. Nonlinear BSS can be performed by computing this metric in the x coordinate system (i.e., by computing the second-order correlation of ẋ at each point x), as well as its first and second derivatives with respect to x. However, although this is a mathematically correct and complete method of solving the nonlinear BSS problem, it suffers from a practical difficulty: namely, if the dimensionality of state space is high, a great deal of data is required to cover it densely enough in order to calculate these derivatives accurately. The current paper [4] shows how to perform nonlinear BSS by computing higher-order local correlations of the data’s velocity, instead of computing derivatives of its second-order correlation. This approach is advantageous because it requires much less data for an accurate computation. For example, in the synthetic speech separation experiment in Section III, the new method can separate two synthetic utterances recorded with a single microphone after 16 minutes of observation, rather than the hours of observation required by the differential geometric method.
The method described in this paper differs significantly from the methods proposed by other investigators because it uses a criterion of statistical independence in (s, ṡ)-space, instead of state space. In addition, there are technical differences between the proposed method and conventional ones. First of all, the technique in this paper exploits statistical constraints on the data that are locally defined in state space, in contrast to the usual criteria for statistical independence, which are global conditions on the data time series or its time derivatives [5]. Furthermore, unlike many other methods [6, 7], the mixing function is derived in a constructive, deterministic, and nonparametric manner, without employing iterative algorithms, without using probabilistic learning methods, and without parameterizing it with a neural network architecture or other means. In addition, the proposed method can handle any differentiable mixing function, unlike some other techniques that only apply to a restricted class of mixing functions [8].

The next section describes how to separate two-dimensional data into two one-dimensional source variables. Section III illustrates the method by using it to separate two simultaneous speech-like sounds that are recorded with a single microphone. The implications of this work are discussed in the last section. The appendix describes how the method can be generalized to separate data of arbitrary dimensionality into possibly multidimensional source variables.
II. Method
The BSS procedure, which is described in this section, is initiated by constructing scalar functions on the data space from combinations of local velocity correlations. The values of these scalars are invariant under any nonlinear transformations of coordinates on the data space. It is relatively easy to show that separability imposes necessary conditions on these scalar functions in the source coordinate system. Because of their scalarity, these conditions can readily be transferred to the measurement-defined coordinate system (x), where they can be tested with the data. If the data do not satisfy these necessary conditions, the data are simply not separable. If the data do satisfy these conditions, we show that there is only one possible source coordinate system, and it can be explicitly constructed. The data can then be transformed into this putative source coordinate system to see if their PDF and/or correlations factorize there. The data are separable if and only if this factorization occurs.
The first step is to construct local correlations of the data’s velocity, such as

C_{kl…}(x) = ⟨ẋ_k ẋ_l …⟩   (4)

where ẋ = dx/dt, where the bracket denotes the time average over the trajectory’s segments in a small neighborhood of the point x, and where “…” denotes possible additional indices on the left side and corresponding factors of ẋ on the right side. The definition of the PDF implies that this velocity correlation is one of its moments

C_{kl…}(x) = ∫ ẋ_k ẋ_l … ρ(x, ẋ) dẋ / ∫ ρ(x, ẋ) dẋ   (5)

where ρ(x, ẋ) is the PDF in the x coordinate system. Incidentally, although (5) is useful in a formal sense, in practical applications, all required correlation functions can be computed directly from local time averages of the data ((4)), without explicitly computing the data’s PDF. Also, note that velocity “correlations” with a single subscript vanish identically.
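As a concrete illustration, the local time averages in (4) can be estimated directly from a sampled trajectory. The sketch below is an illustrative assumption, not the paper's implementation: the function name, the finite-difference velocity estimate, and the fixed-radius definition of a "small neighborhood" are all choices made here for clarity, and the data are assumed to be uniformly sampled.

```python
import numpy as np

def local_velocity_correlations(x, dt, center, radius):
    """Estimate the local velocity correlations of (4) at `center`.

    x      : (T, n) uniformly sampled trajectory x(t)
    dt     : sampling interval
    center : (n,) point at which the correlations are evaluated
    radius : size of the neighborhood used for the local time average

    Returns the second-, third-, and fourth-order local velocity moments.
    """
    x = np.asarray(x, float)
    v = np.gradient(x, dt, axis=0)                      # finite-difference velocities dx/dt
    near = np.linalg.norm(x - center, axis=1) < radius  # trajectory segments near `center`
    v = v[near]
    if len(v) < 10:
        raise ValueError("too few trajectory segments near this point")
    m = float(len(v))
    C2 = np.einsum('ti,tj->ij', v, v) / m                # C_{kl}(x)
    C3 = np.einsum('ti,tj,tk->ijk', v, v, v) / m         # C_{klm}(x)
    C4 = np.einsum('ti,tj,tk,tl->ijkl', v, v, v, v) / m  # C_{klmn}(x)
    return C2, C3, C4
```

In practice the radius trades bias against variance: a larger neighborhood averages over more trajectory segments but smears the x-dependence of the correlations.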
Next, let M(x) be a local n × n matrix, and use it to define transformed velocity correlations

I_{kl…}(x) = Σ_{k′l′…} M_{kk′}(x) M_{ll′}(x) … C_{k′l′…}(x)   (6)

where “…” denotes possible additional indices of I and C, as well as corresponding factors of M. Because C_{kl}(x) is positive definite at any point x, it is always possible to find an M(x) such that

I_{kl}(x) = δ_{kl}   (7)

Σ_m I_{klmm}(x) = D_{kl}(x)   (8)

where D is a diagonal matrix. Such an M can always be constructed from the product of three matrices: 1) a rotation that diagonalizes C_{kl}; 2) a diagonal rescaling matrix that transforms this diagonalized correlation into the identity matrix; 3) another rotation that diagonalizes Σ_m C_{klmm} after the fourth-order correlation has been transformed by the first rotation and the rescaling matrix. As long as the last-diagonalized matrix is not degenerate, M(x) is unique, up to arbitrary local permutations and reflections. In almost all realistic applications, the velocity correlations will be continuous functions of the state space coordinate x. Therefore, in any neighborhood of state space, there will always be a continuous solution for M(x), and this solution is unique, up to arbitrary global reflections and permutations.
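The three-matrix construction can be sketched numerically for a single point. The helper below is a hypothetical implementation (its name and the use of eigendecompositions for the two rotations are assumptions); it takes the local second- and fourth-order correlations, estimated for example as in (4), and returns one valid choice of M.

```python
import numpy as np

def local_M(C2, C4):
    """Construct an M satisfying (7) and (8) at one point:
    M C2 M^T = identity, and the contraction sum_m I_{klmm} is diagonal,
    where I denotes C4 with every index transformed by M."""
    # 1) rotation that diagonalizes the second-order correlation: C2 = U diag(evals) U^T
    evals, U = np.linalg.eigh(C2)
    # 2) rescaling that maps the diagonalized correlation to the identity
    W = (U / np.sqrt(evals)).T                  # W = diag(evals^-1/2) U^T, so W C2 W^T = I
    # transform the fourth-order correlation by the rotation and rescaling
    C4w = np.einsum('ka,lb,mc,nd,abcd->klmn', W, W, W, W, C4)
    # 3) rotation that diagonalizes the contracted fourth-order correlation
    S = np.einsum('klmm->kl', C4w)              # sum_m (C4w)_{klmm}, symmetric
    _, R = np.linalg.eigh(S)
    return R.T @ W                              # M = (second rotation) (rescaling) (first rotation)
```

Because `R` is orthogonal, the product still satisfies (7), and by construction it diagonalizes the contracted fourth-order correlation, satisfying (8).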
In order to show that the velocity correlations (i.e., the I_{kl…}) transform like scalars, imagine constructing these quantities in some other coordinate system x′. An M′ that satisfies (7) and (8) in the x′ coordinate system is given by

M′_{kl}(x′) = Σ_m M_{km}(x) ∂x_m/∂x′_l   (9)

where M(x) is a matrix that satisfies (7) and (8) in the x coordinate system. To prove this, substitute this equation into the definition of I′_{kl…}(x′). Because velocity correlations transform as contravariant tensors, the partial derivative factors within M′ transform correlations from the x′ coordinate system to the x coordinate system, leading to

I′_{kl…}(x′) = I_{kl…}(x)

Therefore, because M and I satisfy (7) and (8), so do M′ and I′, thereby proving that (9) is one of the solutions for M′ in the x′ coordinate system. All other solutions for M′ differ from this one by global reflections and permutations. Similar reasoning shows that, for any choice of M and M′, each of the functions I′_{kl…}(x′) equals the corresponding function I_{kl…}(x), up to possible global permutations and reflections. In other words,

I′_{kl…}(x′) = Σ_{k′l′…} P_{kk′} P_{ll′} … I_{k′l′…}(x)   (10)

where P_{kk′} denotes an element of a product of permutation, reflection, and identity matrices. That is, the functions I_{kl…} transform as scalar functions on the state space, except for possible reflections and index permutations.
We now assume that the system is separable and derive some necessary conditions on these scalar functions in the source coordinate system (s). Because these separability conditions involve scalar functions, they can then be transferred to the measurement-defined coordinate system (x), where they can be tested with the data. In order to keep the notation simple, it is assumed that n = 2 in the following. However, the appendix describes how the methodology can be generalized in order to separate higher-dimensional data into possibly multidimensional source variables.
Separability implies that there is a transformation from the x coordinate system to a source coordinate system (s) in which (3) is true. Because of (5), the velocity correlation functions in the s coordinate system are products of correlations of the independent sources

C_{1⋯1 2⋯2}(s) = C^(1)_{1⋯1}(s_1) C^(2)_{2⋯2}(s_2)   (11)

where 1⋯1 and 2⋯2 denote arbitrary numbers of indices equal to 1 and 2, respectively, and where C^(1) and C^(2) are velocity correlation functions of the individual sources. It follows from this equation and from the vanishing of all velocity “correlations” with one index that the source variable correlations C_{kl}(s) and Σ_m C_{klmm}(s) are diagonal. Therefore, in the s coordinate system, (7) and (8) are satisfied by a diagonal matrix of the form

M(s) = diag( M_1(s_1), M_2(s_2) )   (12)

It follows from (11) and (12) that the scalar functions I_{kl…} with all subscripts equal to 1 (2) must equal the corresponding functions derived for subsystem 1 (2), and these latter functions depend on s_1 (s_2) alone. Although these constraints were derived in the s coordinate system, scalarity ((10)) implies that these separability conditions are true in all coordinate systems, except for possible permutations and reflections. Therefore, in the measurement-defined coordinate system (x), the functions with all subscripts equal to 1 must be functions of either s_1 or s_2. Likewise, the functions with all subscripts equal to 2 must be functions of the other source variable (s_2 or s_1, respectively).
This coordinate-system-independent consequence of separability can be used to perform nonlinear BSS in the following manner:

1) Use (4) to compute velocity correlations C_{kl…}(x) from the data x(t).

2) Use (6) to compute the functions I_{kl…}(x).

3) Form the triplets

V_1(x) = ( I_{111}(x), I_{1111}(x), I_{11111}(x) )   (13)

V_2(x) = ( I_{222}(x), I_{2222}(x), I_{22222}(x) )   (14)

4) Plot the values of V_1(x) and V_2(x) as x varies over the measurement-defined coordinate system.

5) If the plotted values of V_1 and/or V_2 do not lie in one-dimensional subspaces within the three-dimensional space of the plots, V_1 and/or V_2 cannot be functions of single source components (s_1 or s_2) as required by separability, and the data are not separable.

6) If the plotted values of both V_1 and V_2 do lie on one-dimensional manifolds, define one-dimensional coordinates (s̄_1 and s̄_2, respectively) on those subspaces. Then, compute the function s̄(x) that maps each coordinate x onto the value of s̄ = (s̄_1, s̄_2) that parameterizes the point (V_1(x), V_2(x)). Notice that, because of the Takens embedding theorem [9], s̄ is invertibly related to the six components of V_1 and V_2, and, therefore, it is invertibly related to x.

7) Transform the PDF (or correlations) of the measurements from the x coordinate system to the s̄ coordinate system. The data are separable if and only if the PDF factorizes (the correlations factorize) in the s̄ coordinate system.
The last statement can be understood in the following manner. As shown above, separability implies that V_1 must be a function of a single source variable (s_1 or s_2), and the Takens theorem implies that this function is invertible. Because V_1 is also an invertible function of s̄_1, it follows that s̄_1 must be invertibly related to one of the source variables, and, in a similar manner, s̄_2 must be invertibly related to the other source variable. Thus, separability implies that s̄_1 and s̄_2 are themselves source variables. It follows that the data are separable if and only if the PDF factorizes in the s̄ coordinate system.
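The geometric test in this procedure — whether the plotted triplet values lie on a one-dimensional subspace — can be approximated numerically. A minimal sketch, assuming a local-PCA residual criterion (the function name, neighborhood size k, subsampling, and tolerance are all illustrative choices, not the paper's method):

```python
import numpy as np

def near_1d_manifold(points, k=20, tol=0.05):
    """Crude test: do the plotted triplet values lie close to a
    one-dimensional manifold?  PCA is applied to the k nearest neighbors
    of a subsample of points; on a 1-D manifold the first principal
    component should carry almost all of the local variance."""
    pts = np.asarray(points, float)
    resid = []
    for p in pts[:: max(1, len(pts) // 200)]:      # subsample centers for speed
        d = np.linalg.norm(pts - p, axis=1)
        nb = pts[np.argsort(d)[:k]]                # k nearest neighbors of p
        nb = nb - nb.mean(axis=0)
        svals = np.linalg.svd(nb, compute_uv=False)
        # fraction of local variance NOT captured by the leading direction
        resid.append(1.0 - svals[0] ** 2 / (svals ** 2).sum())
    return float(np.mean(resid)) < tol             # small residual => ~1-D
```

A smooth curve in the three-dimensional plot passes this test, while a space-filling cloud fails it, mirroring the separable/inseparable dichotomy of step 5.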
Although the above-described procedure will perform BSS for any mixing function, it is interesting to consider the special case in which source variables exist that are linearly related to the measurements; namely,

s = A x   (15)

where A is a constant matrix. In general, the above BSS procedure will construct source variables s̄ that are related to these “linear” source variables by

s_1 = g_1(s̄_1)   (16)

s_2 = g_2(s̄_2)   (17)

where g_1 and g_2 are some invertible nonlinear transformations determined by the choice of the s̄_1 and s̄_2 coordinates, respectively. Therefore, at each point x, the partial derivatives ∂s̄_1/∂x and ∂s̄_2/∂x will be proportional to constant (i.e., x-independent) vectors (denoted by V^(1) and V^(2)), which are themselves proportional to the first and second rows of A, respectively. Furthermore, these vectors can be used to construct other linearly-related source variables

s̃_1 = V^(1) · x,   s̃_2 = V^(2) · x   (18)

that are just rescaled versions of the ones in (15). Consequently, given the source variables s̄ produced by the BSS procedure, the following process can be used to determine whether these can be transformed into source variables that are linearly related to the measurements: 1) compute the above-mentioned partial derivatives and determine if each is proportional to an x-independent vector; 2) if the partial derivatives do not satisfy this condition, there are no linearly-related source variables; 3) if the partial derivatives do satisfy condition 1, transform the data into the s̃ coordinate system in order to see if the data’s PDF factorizes there. There are linearly-related source variables if and only if this factorization occurs.
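The first step of this linear-source test can be sketched numerically: given samples of a gradient field at many points, a rank test decides whether all of the sampled gradients share a common direction. The helper below is a hypothetical illustration (its name and tolerance are assumptions):

```python
import numpy as np

def constant_direction(grads, tol=1e-3):
    """Decide whether sampled gradients (one per row, sampled at many
    points x) are all proportional to a single x-independent vector."""
    g = np.asarray(grads, float)
    g = g / np.linalg.norm(g, axis=1, keepdims=True)   # keep directions only
    g = g * np.sign(g @ g[0])[:, None]                 # fix the overall sign per row
    svals = np.linalg.svd(g, compute_uv=False)
    # the direction matrix has rank 1 (to tolerance) iff a common direction exists
    return svals[1] / svals[0] < tol
```

If the test passes for both ∂s̄_1/∂x and ∂s̄_2/∂x, the two recovered directions play the roles of V^(1) and V^(2) above.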
III. Numerical Example: Separating Two Speech-Like Sounds Recorded with a Single Microphone
This section describes a numerical experiment in which two speech-like sounds were synthesized and then summed, as if they were simultaneously recorded with a single microphone. Each sound simulated an “utterance” of a vocal tract resembling a human vocal tract, except that: 1) it had one degree of freedom, instead of the 3–5 degrees of freedom of the human vocal tract; 2) its impulse response was characterized by one pole pair, instead of the 4–6 pole pairs characteristic of the human vocal tract. The methodology of Section II was blindly applied to a time series of two features extracted from the synthetic recording, in order to recover the time dependence of the state variable of each vocal tract (up to an unknown transformation on each voice’s state space). BSS was performed with only 16 minutes of data, instead of the hours of data required to separate similar sounds using a differential geometric method [3].

Each speaker was simulated by having a simulated glottis drive a simulated resonant cavity that represented the vocal tract. The glottal waveform of each “voice” was a series of spikes separated by a pitch interval (corresponding to 100 Hz and 160 Hz for the two voices). The impulse response of each “vocal tract” was taken to be a characteristic damped sinusoid, whose amplitude, resonant frequency, and damping were linear functions of a single state variable. For each voice, a 16 minute utterance was produced by convolving its glottal waveform with the impulse response of its vocal tract, which was a function of a slowly-varying state variable. The state variable time series of each voice was synthesized by smoothly interpolating among successive states. The latter were chosen at 100–120 msec intervals so that the state variable time series of the two voices were statistically independent of each other. The resulting utterances had energies differing by 2.4 dB, and they were summed and sampled at 16 kHz with 16-bit depth. Then, this “recorded” waveform was preemphasized and subjected to a short-term Fourier transform (using frames with 25 msec length and 5 msec spacing). The log energies of a bank of 12 mel-frequency filters between 0–8000 Hz were computed for each frame, and these were then averaged over pairs of consecutive frames. These log filterbank outputs were nonlinear functions of the two vocal tract state variables.
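A toy version of this synthesis can be sketched as follows. All constants here (impulse-response length, the linear coefficients tying amplitude, resonant frequency, and damping to the state) are illustrative stand-ins, not the values used in the experiment:

```python
import numpy as np

def synth_utterance(state, fs=16000, pitch_hz=100.0):
    """Generate a toy one-degree-of-freedom 'utterance': a glottal spike
    train is convolved (by overlap-add) with a damped sinusoid whose
    amplitude, resonant frequency, and damping are linear functions of a
    slowly varying state variable."""
    n = len(state)
    period = int(fs / pitch_hz)
    spikes = np.arange(0, n, period)                   # glottal spikes at the pitch period
    out = np.zeros(n)
    ir_len = 400
    tt = np.arange(ir_len) / fs
    for i in spikes:
        s = state[i]
        amp  = 1.0 + 0.5 * s                           # linear functions of the state
        freq = 500.0 + 300.0 * s                       # (illustrative coefficients)
        damp = 60.0 + 30.0 * s
        ir = amp * np.exp(-damp * tt) * np.sin(2 * np.pi * freq * tt)
        seg = min(ir_len, n - i)
        out[i:i + seg] += ir[:seg]                     # overlap-add the impulse responses
    return out
```

Summing two such utterances with different pitches and independent state trajectories yields a single "recorded" waveform of the kind analyzed in this section.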
In order to blindly analyze these data, we first determined if any data components were redundant in the sense that they were simply functions of other components. Fig. 1a shows the first three principal components of the log filterbank outputs during a typical short recording of the simultaneous utterances. Inspection showed that these data lay on a curved two-dimensional surface within the ambient 12-dimensional space, making it apparent that they were produced by a “hidden” dynamical system with two degrees of freedom. The redundant components were eliminated by using dimensional reduction (principal components analysis in small overlapping neighborhoods of the data) to establish a coordinate system x on this surface and to find x(t), the trajectory of the recorded sound in that coordinate system. Next, the BSS procedure in Section II was used to determine if x(t) was a nonlinear mixture of two source variables that were statistically independent of one another. Following steps 1–4 of the BSS procedure, x(t) of the entire recording was used to compute invariants with up to five indices, and the related functions V_1(x) and V_2(x) were plotted, as illustrated in Figs. 1b–c. It was evident that the plotted values of both V_1 and V_2 lay in or close to one-dimensional subspaces. Following step 6 of the BSS procedure, a dimensional reduction procedure [12] was used to define coordinates (s̄_1 and s̄_2) on these one-dimensional manifolds, and s̄(x) was computed. If the data were separable, s̄ must be a set of source variables, and s̄(x(t)) must describe the evolution of the underlying vocal tract states (up to invertible componentwise transformations). As illustrated in Figs. 2a–b and Figs. 3a–b, the time courses of the putative source variables (s̄(t)) did resemble distorted versions of the state variable time series that were originally used to generate the voices’ utterances. The scatter plots in Fig. 2c and Fig. 3c show that, in each case, the recovered source variable and the corresponding state variable were related by a nonlinear transformation that was nearly monotonic, except for the effects of noise due to the limited number of data samples. Thus, starting with a single-microphone recording, the BSS procedure was able to extract the information encoded in the time series of each speaker’s state variable. The time course of the analogous multidimensional state variable of the human vocal tract contains the speech content of each utterance. This indicates that the BSS procedure is capable of recovering the speech content of superposed utterances, without recovering their original waveforms.

IV. Discussion
In a previous paper [3], the nonlinear BSS problem was reformulated in (state, state velocity) space, instead of state space as in conventional formulations. This approach is attractive because: 1) the reformulated BSS problem has a unique solution in the following sense: either the data are inseparable, or they can be separated by a mixing function that is unique (up to permutations and transformations of independent source variables); 2) statistical independence in (state, state velocity) space is manifested by almost all classical physical systems that are composed of noninteracting subsystems. This paper [4] shows how a general solution of this problem can be constructed in a deterministic manner, which avoids the difficulties of the iterative, probabilistic, and parametric BSS techniques proposed by other investigators. Furthermore, an accurate computation can be performed with far less data than that required by the differential geometric solution previously proposed by the author [3].
The BSS procedure in Section II shows how to compute s̄(x(t)), the trajectory of each independent subsystem in a specific coordinate system on that subsystem’s state space. In many practical applications, a pattern recognition “engine” has been trained to recognize the meaning of trajectories of one subsystem in another coordinate system (e.g., s′_1) on that subsystem’s state space. In order to use this information, it is necessary to know the transformation to this particular coordinate system (s′_1). For example, subsystem 1 may be the vocal tract of speaker 1, and subsystem 2 may be a noise generator of some sort. In this example, we may have trained an automatic speech recognition (ASR) engine on the quiet speech of speaker 1 (or, equivalently, on the quiet speech of another speaker who mimics speaker 1 in the sense that their state space trajectories are related by an invertible transformation when they speak the same utterances). In order to recognize the speaker’s utterances in the presence of the noise, we must know the transformation from the vocal tract coordinates recovered by BSS (s̄_1) to the coordinates used to train the ASR engine (s′_1). This mapping can be determined by using the training data to compute invariants (like those in (6)) as functions of s′_1. These must equal the invariants of one of the subsystems identified by the BSS procedure, up to a global permutation and/or reflection ((10)). This global transformation can be determined by permuting and reflecting the distribution of invariants produced by the training data, until it matches the distribution of invariants of one of the subsystems produced by the BSS procedure. Then, the mapping can be determined by finding paired values of s̄_1 and s′_1 that correspond to the same invariant values within these matching distributions. This type of analysis of human speech data is currently underway.

Appendix: Separating Data of Any Dimensionality into Possibly Multidimensional Sources
The procedure in Section II is capable of separating two-dimensional data into one-dimensional source variables. This appendix describes the solution of the more general nonlinear BSS problem in which data of any dimensionality may be separated into possibly multidimensional source variables, each of which is statistically independent of the others but each of which may contain statistically dependent components. This is sometimes called multidimensional independent component analysis, subspace independent component analysis, or independent subspace analysis [10, 11].

Separability implies that there is a transformation from the x coordinate system to a source coordinate system (s) in which
ρ(s, ṡ) = ρ_1(s_1, ṡ_1) ρ_2(s_2, ṡ_2)   (19)

where s_1 is a possibly multidimensional source variable with components s_k (1 ≤ k ≤ n_1) and s_2 is a possibly multidimensional source variable with components s_k (n_1 < k ≤ n). Because of (5), the velocity correlation functions in the s coordinate system are products of correlations of independent sources

C_{a⋯b⋯}(s) = C^(1)_{a⋯}(s_1) C^(2)_{b⋯}(s_2)   (20)

where a⋯ and b⋯ denote arbitrary series of indices in the ranges 1 ≤ a ≤ n_1 and n_1 < b ≤ n, respectively. It follows from this equation and from the vanishing of all velocity “correlations” with one index that the source variable correlations C_{kl}(s) and Σ_m C_{klmm}(s) have block-diagonal forms with n_1 × n_1 and n_2 × n_2 upper and lower blocks, respectively. Consequently, in the s coordinate system, (7) and (8) are satisfied by a block-diagonal matrix of the form

M(s) = blockdiag( M_1(s_1), M_2(s_2) )   (21)

where M_1 and M_2 are n_1 × n_1 and n_2 × n_2 matrices that satisfy (7) and (8) for the s_1 and s_2 subsystems, respectively. In order to prove that (21) satisfies (7), substitute it into the definition of I_{kl}, and note that each block of M is defined to transform the corresponding block of C_{kl} into an identity matrix. In order to prove that (21) satisfies (8), substitute it into the definition of

Σ_m I_{klmm} = Σ_{m k′l′m′m″} M_{kk′} M_{ll′} M_{mm′} M_{mm″} C_{k′l′m′m″}   (22)

Then, note that: 1) when k and l belong to different blocks, each term in this sum vanishes because it factorizes into a product of a one-index correlation and a three-index correlation; 2) when k and l belong to the same block and are unequal, each term with m in the other block contains a factor equal to I_{kl} with k ≠ l, which vanishes, as proved above; 3) when k and l belong to the same block and are unequal, the sum over m in the same block vanishes, because each block of M is defined to satisfy (8) for the corresponding subsystem.
It follows from (6), (20), and (21) that the scalar functions I_{kl…} with all subscripts in the range 1 ≤ k ≤ n_1 (or in the range n_1 < k ≤ n) must depend on s_1 (or s_2) alone. Although these constraints were derived in the s coordinate system, scalarity ((10)) implies that these separability conditions are true in all coordinate systems, except for possible permutations. Therefore, in the measurement-defined coordinate system (x), it must be possible to partition the n indices of x (k = 1, …, n) into n_1 and n_2 groups (containing “1” indices and “2” indices, respectively) so that the functions I_{kl…} with all subscripts in the “1” (or “2”) group are functions of s_1 (or s_2) alone.
This coordinate-system-independent consequence of separability can be used to perform nonlinear BSS in the following manner:

1) Use (4) to compute velocity correlations C_{kl…}(x) from the data x(t).

2) Use (6) to compute the functions I_{kl…}(x).

3) Consider each choice of an integer n_1 in the range 1 ≤ n_1 < n, and consider each way of partitioning the n data indices (k = 1, …, n) into n_1 and n_2 = n − n_1 groups (containing “1” indices and “2” indices, respectively).

4) For each of these choices, let V_1 (V_2) be any set of more than 2n_1 (2n_2) of the functions I_{kl…} for which all subscripts belong to the “1” (“2”) group, and plot the values of V_1 and V_2 as x varies over the measurement-defined coordinate system.

5) Suppose that, for all of the choices in step 4, the plotted values of V_1 and/or V_2 do not lie in n_1-dimensional (n_2-dimensional) subspaces within the higher-dimensional space of the plots. Then, there is no way that V_1 and V_2 can be functions of single source variables (s_1 and s_2) as required by separability, and the data are not separable.

6) Suppose that, for one or more of the choices in step 4, the plotted values of both V_1 and V_2 do lie in n_1-dimensional and n_2-dimensional manifolds, respectively. In that case, define n_1-dimensional and n_2-dimensional coordinates (s̄_1 and s̄_2, respectively) on those subspaces. Then, compute the function s̄(x) that maps each coordinate x onto the value of s̄ = (s̄_1, s̄_2) that parameterizes the point (V_1(x), V_2(x)). Notice that, because of the Takens embedding theorem [9], s̄ is invertibly related to the components of V_1 and V_2, and, therefore, it is invertibly related to x.

7) Transform the PDF (or correlations) of the measurements from the x coordinate system to the s̄ coordinate system. The data are separable, and s̄_1 and s̄_2 are source variables, if and only if the PDF factorizes (the correlations factorize) in a coordinate system created in this way.
The last statement can be understood in the following manner. As shown above, separability implies that, for some choice of n_1 and index partitioning, V_1 must be a function of s_1, and the Takens theorem implies that this function is invertible. Because V_1 is also an invertible function of s̄_1, it follows that s̄_1 must be invertibly related to s_1, and, in a similar manner, s̄_2 must be invertibly related to s_2. Thus, separability implies that s̄_1 and s̄_2 are themselves source variables and, therefore, the PDF factorizes in the s̄ coordinate system. Finally, note that, if the data are separable, the same procedure can then be used to determine if each multicomponent source variable (s̄_1 or s̄_2) can be further separated into lower-dimensional source variables.
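The enumeration of candidate splittings considered by this procedure can be sketched directly. The helper below is a hypothetical illustration; note that, because the two groups play symmetric roles, each unordered partition is generated twice (once for each assignment of the roles "1" and "2"):

```python
from itertools import combinations

def index_partitions(n):
    """Enumerate the choices considered in the appendix procedure:
    every n1 with 1 <= n1 < n, and every way of splitting the data
    indices {0, ..., n-1} into a '1' group of size n1 and a '2' group
    of size n - n1."""
    for n1 in range(1, n):
        for group1 in combinations(range(n), n1):
            group2 = tuple(k for k in range(n) if k not in group1)
            yield group1, group2
```

The number of choices grows quickly with n, which is why the procedure tests every partition only once per candidate dimension split.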
The above-described procedure will perform BSS for any linear or nonlinear mixing. However, a few comments should be made about the special case in which source variables exist that are linearly related to the measurements; namely,

s_1 = A_1 x   (23)

s_2 = A_2 x   (24)

where s_1 = (s_k for 1 ≤ k ≤ n_1), where s_2 = (s_k for n_1 < k ≤ n), and where A_1 and A_2 are constant n_1 × n and n_2 × n matrices, respectively. In general, the above BSS procedure will construct source variables s̄ that are related to these “linear” source variables by

s_1 = g_1(s̄_1)   (25)

s_2 = g_2(s̄_2)   (26)

where g_1 and g_2 are some nonlinear transformations (with n_1 and n_2 components, respectively), determined by the choice of the s̄_1 and s̄_2 coordinates, respectively. Therefore, at each point x, the sets of partial derivatives ∂s̄_{1k}/∂x and ∂s̄_{2k}/∂x will lie in the subspaces spanned by the rows of A_1 and A_2, respectively. Let V^(1)_i and V^(2)_j (for 1 ≤ i ≤ n_1 and 1 ≤ j ≤ n_2) denote any sets of constant vectors that span these two subspaces. Then, another set of linearly-related source variables is given by

s̃_{1i} = V^(1)_i · x   (27)

s̃_{2j} = V^(2)_j · x   (28)

which are just linear combinations of the ones in (23) and (24), respectively. Consequently, given the source variables s̄ produced by the BSS procedure, the following process can be used to determine whether these can be transformed into source variables that are linearly related to the measurements: 1) compute the above-mentioned sets of partial derivatives and determine if each set is spanned by the appropriate number of x-independent vectors; 2) if one or both sets of partial derivatives do not satisfy this condition, there are no linearly-related source variables; 3) if both sets of partial derivatives do satisfy condition 1, transform the data into the s̃ coordinate system in order to see if the data’s PDF factorizes there. There are linearly-related source variables if and only if this factorization occurs.
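The multidimensional version of the span test can likewise be sketched as a rank test: the matrix of stacked gradient samples should have numerical rank n_1 exactly when the gradients lie in a common n_1-dimensional subspace. The helper below is an assumed illustration (name and tolerance are not from the paper):

```python
import numpy as np

def spanned_by(grads, n1, tol=1e-3):
    """Decide whether the sampled gradients (rows of `grads`, one per
    point x and component) lie in a common n1-dimensional subspace of
    measurement space."""
    g = np.asarray(grads, float)
    svals = np.linalg.svd(g, compute_uv=False)
    # the gradients span an n1-dimensional subspace iff the singular
    # values beyond the first n1 are negligible
    return svals[n1] / svals[0] < tol
```

When the test passes, the leading n_1 right singular vectors provide one convenient choice of the spanning vectors V^(1)_i used in (27).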
References
 [1] C. Jutten and J. Karhunen, “Advances in nonlinear blind source separation,” in Proceedings of the International Symposium on Independent Component Analysis and Blind Signal Separation, Nara, Japan, April 2003.
 [2] A. Hyvärinen and P. Pajunen, “Nonlinear independent component analysis: existence and uniqueness results,” Neural Networks, vol. 12, pp. 429-439, 1999.
 [3] D. N. Levin, “Using state space differential geometry for nonlinear blind source separation,” J. Applied Physics, vol. 103, art. no. 044906, 2008.
 [4] D. N. Levin, “Using signal invariants to perform nonlinear blind source separation,” in Proc. of the International Conference on Independent Component Analysis and Signal Separation, LNCS, vol. 5441, T. Adali, C. Jutten, J. M. T. Romano, A. K. Barros (eds). Heidelberg: Springer, 2009, pp. 58-65.
 [5] S. Lagrange, L. Jaulin, V. Vigneron, C. Jutten, “Analytic solution of the blind source separation problem using derivatives,” in Independent Component Analysis and Blind Signal Separation, LNCS, vol. 3195, C. G. Puntonet and A. G. Prieto (eds). Heidelberg: Springer, 2004, pp. 81-88.
 [6] H. H. Yang, S.-I. Amari, A. Cichocki, “Information-theoretic approach to blind separation of sources in nonlinear mixture,” Signal Processing, vol. 64, pp. 291-300, 1998.
 [7] S. Haykin, Neural Networks: A Comprehensive Foundation. New York: Prentice Hall, 1998.
 [8] A. Taleb and C. Jutten, “Source separation in post-nonlinear mixtures,” IEEE Trans. Signal Process., vol. 47, pp. 2807-2820, 1999.
 [9] T. Sauer, J. A. Yorke, M. Casdagli, “Embedology,” J. Statistical Physics, vol. 65, pp. 579-616, 1991.
 [10] J.-F. Cardoso, “Multidimensional independent component analysis,” in Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. 4, Seattle, 12-15 May, 1998, pp. 1941-1944.
 [11] Y. Nishimori, S. Akaho, M. D. Plumbley, “Riemannian optimization method on the flag manifold for independent subspace analysis,” in Proc. International Conference on Independent Component Analysis and Blind Source Separation, LNCS, vol. 3889. Berlin: Springer, 2006, pp. 295-302.
 [12] S. T. Roweis and L. K. Saul, “Nonlinear dimensionality reduction by locally linear embedding,” Science, vol. 290, pp. 2323-2326, 2000.