Unsupervised Feature Extraction by Time-Contrastive Learning and Nonlinear ICA

Aapo Hyvärinen et al. · May 20, 2016

Nonlinear independent component analysis (ICA) provides an appealing framework for unsupervised feature learning, but the models proposed so far are not identifiable. Here, we first propose a new intuitive principle of unsupervised deep learning from time series which uses the nonstationary structure of the data. Our learning principle, time-contrastive learning (TCL), finds a representation which allows optimal discrimination of time segments (windows). Surprisingly, we show how TCL can be related to a nonlinear ICA model, when ICA is redefined to include temporal nonstationarities. In particular, we show that TCL combined with linear ICA estimates the nonlinear ICA model up to point-wise transformations of the sources, and this solution is unique --- thus providing the first identifiability result for nonlinear ICA which is rigorous, constructive, as well as very general.


1 Introduction

Unsupervised nonlinear feature learning, or unsupervised representation learning, is one of the biggest challenges facing machine learning. Various approaches have been proposed, many of them in the deep learning framework. Some of the most popular methods are multi-layer belief nets and Restricted Boltzmann Machines [13] as well as autoencoders [14, 31, 21], which form the basis for the ladder networks [30]. While some success has been obtained, the general consensus is that the existing methods are lacking in scalability, theoretical justification, or both; more work is urgently needed to make machine learning applicable to big unlabeled data.

Better methods may be found by using the temporal structure in time series data. One approach which has shown great promise recently is based on a set of methods variously called temporal coherence [17] or slow feature analysis [32]. The idea, originally proposed in [6], is to find features which change as slowly as possible. Kernel-based methods [12, 26] and deep learning methods [23, 27, 9] have been developed to extend this principle to the general nonlinear case. However, it is not clear how one should optimally define the temporal stability criterion; these methods typically use heuristic criteria and are not based on generative models.

In fact, the most satisfactory solution for unsupervised deep learning would arguably be based on estimation of probabilistic generative models, because probabilistic theory often gives optimal objectives for learning. This has been possible in linear unsupervised learning, where sparse coding and independent component analysis (ICA) use independent, typically sparse, latent variables that generate the data via a linear mixing. Unfortunately, at least without temporal structure, the nonlinear ICA model is seriously unidentifiable [18], which means that the original sources cannot be found. In spite of years of research [20], no generally applicable identifiability conditions have been found. Nevertheless, practical algorithms have been proposed [29, 1, 5] with the hope that some kind of useful solution can still be found even for i.i.d. data.

Here, we combine a new heuristic principle for analysing temporal structure with a rigorous treatment of a nonlinear ICA model, leading to a new identifiability proof. The structure of our theory is illustrated in Figure 1.

Figure 1: An illustration of how we combine a new generative nonlinear ICA model with the new learning principle called time-contrastive learning (TCL). (A) The probabilistic generative model of nonlinear ICA, where the observed signals are given by a nonlinear transformation of source signals, which are mutually independent and have segment-wise nonstationarity. (B) In TCL we train a feature extractor sensitive to the nonstationarity of the data by using multinomial logistic regression which attempts to discriminate between the segments, labelling each data point with the index of the segment to which it belongs. The feature extractor and the logistic regression together can be implemented by a conventional multi-layer perceptron.

First, we propose to learn features using the (temporal) nonstationarity of the data. The idea is that the learned features should enable discrimination between different time windows; in other words, we search for features that provide maximal information on which part of the time series a given data point comes from. This provides a new, intuitively appealing method for feature extraction, which we call time-contrastive learning (TCL).

Second, we formulate a generative model in which independent components have different distributions in different time windows, and we observe nonlinear mixtures of the components. While a special case of this principle, using nonstationary variances, has been very successfully used in linear ICA [22], our extension to the nonlinear case is completely new. Such nonstationarity of variances seems to be prominent in many kinds of data, for example EEG/MEG [2] and natural video [17], and is closely related to changes in volatility in financial time series; but here we further generalize the nonstationarity to modulated exponential families.

Finally, we show that as a special case, TCL estimates the nonlinear part of the nonlinear ICA model, leaving only a simple linear mixing to be determined by linear ICA, and a final indeterminacy in terms of a component-wise nonlinearity similar to squaring. For modulated Gaussian sources, even the squaring can be removed and we have “full” identifiability. This gives the very first identifiability proof for a high-dimensional, nonlinear, ICA mixing model — together with a practical method for its estimation.

2 Time-contrastive learning

TCL is a method to train a feature extractor by using a multinomial logistic regression (MLR) classifier which aims to discriminate all segments (time windows) in a time series, given the segment indices as the labels of the data points. In more detail, TCL proceeds as follows:

  1. Divide a multivariate time series $\mathbf{x}_t$ into segments, i.e. time windows, indexed by $\tau = 1, \dots, T$. Any temporal segmentation method can be used, e.g. simple equal-sized bins.

  2. Associate each data point with the corresponding segment index in which the data point is contained; i.e. all the data points in segment $\tau$ are given the same segment label $\tau$.

  3. Learn a feature extractor $\mathbf{h}(\mathbf{x}_t; \boldsymbol\theta)$ together with an MLR with a linear regression function $\mathbf{w}_\tau^{\top}\mathbf{h}(\mathbf{x}_t;\boldsymbol\theta) + b_\tau$ to classify all data points, with the corresponding segment labels defined above used as class labels $C_t$. (For example, by ordinary deep learning with $\mathbf{h}(\mathbf{x}_t;\boldsymbol\theta)$ being the outputs of the last hidden layer and $\boldsymbol\theta$ being the network weights.)

The purpose of the feature extractor is to extract a feature vector that enables the MLR to discriminate the segments. Therefore, it seems intuitively clear that the feature extractor needs to learn a useful representation of the temporal structure of the data, in particular the differences of the distributions across segments. Thus, we are effectively using a classification method (MLR) to accomplish unsupervised learning. Methods such as noise-contrastive estimation [11] and generative adversarial nets [8], see also [10], are similar in spirit, but clearly distinct from TCL which uses the temporal structure of the data by contrasting different time segments.

In practice, the feature extractor needs to be capable of approximating a general nonlinear relationship between the data points and the log-odds of the classes, and it must be easy to learn from data simultaneously with the MLR. To satisfy these requirements, we use here a multilayer perceptron (MLP) as the feature extractor. Essentially, we use ordinary MLP/MLR training according to very well-known neural network theory, with the last hidden layer working as the feature extractor. Note that the MLR is only used as an instrument for training the feature extractor, and has no practical meaning after the training.
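To make the three steps above concrete, here is a minimal sketch in Python with NumPy and scikit-learn. It is only an illustration under simplifying assumptions, not the authors' implementation: the feature extractor and the MLR are implemented jointly by scikit-learn's MLPClassifier, whose last hidden layer plays the role of $\mathbf{h}(\mathbf{x};\boldsymbol\theta)$; the function name, layer sizes, and number of segments are hypothetical choices.

```python
# Minimal TCL sketch (illustrative only, not the authors' implementation).
# Steps 1-3 above: segment the series, label each point by its segment
# index, and train an MLP + multinomial logistic regression to classify
# the segment labels; the last hidden layer then serves as h(x; theta).
import numpy as np
from sklearn.neural_network import MLPClassifier

def tcl_features(X, n_segments=64, hidden=(64, 32)):
    """X: array of shape (T, n) holding the multivariate time series."""
    T = X.shape[0]
    # Steps 1-2: equal-sized segments and per-point segment labels.
    labels = np.minimum(np.arange(T) * n_segments // T, n_segments - 1)
    # Step 3: MLP feature extractor with a softmax (MLR) output layer.
    clf = MLPClassifier(hidden_layer_sizes=hidden, activation="relu",
                        max_iter=500).fit(X, labels)
    # Recover the last-hidden-layer activations h(x; theta) by a manual
    # forward pass through all layers except the softmax output layer.
    H = X
    for W, b in zip(clf.coefs_[:-1], clf.intercepts_[:-1]):
        H = np.maximum(0.0, H @ W + b)
    return H  # shape (T, hidden[-1]); the MLR is discarded after training
```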

3 TCL as approximator of log-pdf ratios

We next show how the combination of the optimally discriminative feature extractor and MLR learns to model the nonstationary log-pdf’s of the data. The posterior over classes for one data point in the multinomial logistic regression of TCL is given by well-known theory as

\[
p(C_t = \tau \mid \mathbf{x}_t;\, \boldsymbol\theta, \mathbf{W}, \mathbf{b})
  = \frac{\exp\!\big(\mathbf{w}_\tau^{\top}\mathbf{h}(\mathbf{x}_t;\boldsymbol\theta) + b_\tau\big)}
         {1 + \sum_{j=2}^{T} \exp\!\big(\mathbf{w}_j^{\top}\mathbf{h}(\mathbf{x}_t;\boldsymbol\theta) + b_j\big)}
\tag{1}
\]

where $C_t \in \{1,\dots,T\}$ is the class (segment) label of the data point at time $t$, $\mathbf{x}_t$ is the $n$-dimensional data point at time $t$, $\boldsymbol\theta$ is the parameter vector of the $m$-dimensional feature extractor (neural network) $\mathbf{h}(\mathbf{x}_t;\boldsymbol\theta)$, and $\mathbf{W} = (\mathbf{w}_1,\dots,\mathbf{w}_T)$, $\mathbf{b} = (b_1,\dots,b_T)$ are the weight and bias parameters of the MLR. We fixed the elements of $\mathbf{w}_1$ and $b_1$ to zero to avoid the well-known indeterminacy of the softmax function.

On the other hand, the true posteriors of the segment labels can be written, by Bayes' rule, as

\[
p(C_t = \tau \mid \mathbf{x}_t)
  = \frac{p_\tau(\mathbf{x}_t)\, p(C_t = \tau)}{\sum_{j=1}^{T} p_j(\mathbf{x}_t)\, p(C_t = j)}
\tag{2}
\]

where $p(C_t = \tau)$ is a prior distribution of the segment label $\tau$, and $p_\tau(\mathbf{x}_t) = p(\mathbf{x}_t \mid C_t = \tau)$.

Assume that the feature extractor has a universal approximation capacity, and that the amount of data is infinite, so that the MLR converges to the optimal classifier. Then, we will have equality between the model posterior in Eq. (1) and the true posterior in Eq. (2) for all $\tau$. Well-known developments, intuitively based on equating the numerators in those equations and taking the pivot class into account, lead to the relationship

\[
\mathbf{w}_\tau^{\top}\mathbf{h}(\mathbf{x}_t;\boldsymbol\theta) + b_\tau
  = \log p_\tau(\mathbf{x}_t) - \log p_1(\mathbf{x}_t) + \log\frac{p(C_t = \tau)}{p(C_t = 1)}
\tag{3}
\]

where the last term on the right-hand side is zero if the segments have equal prior probability (i.e. equal length). In other words, what the feature extractor computes after TCL training (under optimal conditions) is the log-pdf of the data point in each segment (relative to that in the first segment, which was chosen as the pivot above). This gives a clear probabilistic interpretation of the intuitive principle of TCL, and will be used below to show its connection to nonlinear ICA.
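As a sanity check of Eq. (3), the following toy sketch (a hypothetical example, not from the paper) fixes the feature extractor to $h(x) = x^2$ and fits a logistic regression to discriminate two "segments" that are zero-mean Gaussians with different variances; the learned weight should then approach the coefficient of $x^2$ in $\log p_2(x) - \log p_1(x)$, namely $\tfrac{1}{2}(1/\sigma_1^2 - 1/\sigma_2^2)$.

```python
# Numerical check of Eq. (3) in a toy case: two "segments" with Gaussian data
# of different variances, and a fixed feature h(x) = x^2. The optimal logit of
# segment 2 vs. segment 1 is x^2 * 0.5*(1/sigma1^2 - 1/sigma2^2) + const, so
# the fitted weight should approach 0.375 for the sigmas chosen below.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
sigma1, sigma2 = 1.0, 2.0
x1 = rng.normal(0.0, sigma1, size=100_000)        # segment 1
x2 = rng.normal(0.0, sigma2, size=100_000)        # segment 2
h = np.concatenate([x1, x2])[:, None] ** 2        # feature extractor h(x) = x^2
y = np.r_[np.zeros(len(x1), dtype=int), np.ones(len(x2), dtype=int)]

w = LogisticRegression(C=1e6, max_iter=1000).fit(h, y).coef_[0, 0]
print(w, 0.5 * (1 / sigma1**2 - 1 / sigma2**2))   # both close to 0.375
```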

4 Nonlinear nonstationary ICA model

In this section, seemingly unrelated to the preceding section, we define a probabilistic generative model; the connection will be explained in the next section. We assume, as typical in nonlinear ICA, that the observed multivariate time series $\mathbf{x}(t)$ is a smooth and invertible nonlinear mixture of a vector of source signals $\mathbf{s}(t) = (s_1(t),\dots,s_n(t))$; in other words:

\[
\mathbf{x}(t) = \mathbf{f}(\mathbf{s}(t)).
\tag{4}
\]

The components $s_i(t)$ in $\mathbf{s}(t)$ are assumed mutually independent over $i$ (but not over time $t$). The crucial question is how to define a suitable model for the sources, which is general enough while allowing strong identifiability results.

Here, we start with the fundamental principle that the source signals $s_i(t)$ are nonstationary. For example, the variances (or similar scaling coefficients) could be changing as proposed earlier in the linear case [22, 24, 16]. We generalize that idea and propose a generative model for nonstationary sources based on the exponential family. Merely for mathematical convenience, we assume that the nonstationarity is much slower than the sampling rate, so the time series can be divided into segments in each of which the distribution is approximately constant (but the distribution is different in different segments). The probability density function (pdf) of the source signal $s_i$ with index $i$ in the segment $\tau$ is then defined as:

\[
\log p_\tau(s_i) = q_{i,0}(s_i) + \sum_{v=1}^{V} \lambda_{i,v}(\tau)\, q_{i,v}(s_i) - \log Z\big(\lambda_{i,1}(\tau),\dots,\lambda_{i,V}(\tau)\big)
\tag{5}
\]

where $q_{i,0}$ is a “stationary baseline” log-pdf of the source, and the $q_{i,v}$, $v \ge 1$, are nonlinear scalar functions defining the exponential family for source $i$. The essential point is that the parameters $\lambda_{i,v}(\tau)$ of the source $i$ depend on the segment index $\tau$, which creates nonstationarity. The normalization constant $Z$ is needed in principle, although it disappears in all our proofs below.

A simple example would be obtained by setting $q_{i,0}=0$, $V=1$, i.e., using a single modulated function $q_{i,1}$, with $q_{i,1}(s_i) = -s_i^2$, which means that the variance of a Gaussian source is modulated, or $q_{i,1}(s_i) = -|s_i|$, a modulated Laplacian source. Another interesting option might be to use two ReLU-like nonlinearities $q_{i,1}(s_i) = \max(s_i, 0)$ and $q_{i,2}(s_i) = \min(s_i, 0)$ to model both changes in scale (variance) and location (mean). Yet another option is to use a Gaussian baseline $q_{i,0}(s_i) = -s_i^2/2$ with a nonquadratic function $q_{i,1}$.

Our definition thus generalizes the linear model [22, 24, 16] to the nonlinear case, as well as to very general modulated non-Gaussian densities, by allowing the $q_{i,v}$ to be non-quadratic and using more than one $q_{i,v}$ per source (i.e. we can have $V > 1$). Note that our principle of nonstationarity is clearly distinct from the principle of linear autocorrelations previously used in the nonlinear case [12, 26]; also, some authors prefer to use the term blind source separation (BSS) for generative models with temporal structure.
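For concreteness, here is a hedged sketch (illustrative sizes and parameter ranges, not the paper's exact settings) of sampling data from this model with $V = 1$ and $q_{i,1}(s) = -|s|$, i.e. segment-wise variance-modulated Laplacian sources, mixed by a smooth invertible MLP with leaky-ReLU layers and square weight matrices, as also used in the simulations of Section 6.

```python
# Sketch of the generative model of Eqs. (4)-(5): Laplacian sources whose
# scale is modulated per segment and per component, then mixed by an
# invertible leaky-ReLU MLP. All sizes and ranges are illustrative.
import numpy as np

def generate_data(n=20, n_segments=256, seg_len=512, n_layers=3, seed=0):
    rng = np.random.default_rng(seed)
    # Segment-wise modulation parameters lambda_i(tau) (illustrative range).
    lam = rng.uniform(0.2, 1.0, size=(n_segments, n))
    # Laplacian sources, scaled segment by segment.
    s = rng.laplace(size=(n_segments, seg_len, n)) * lam[:, None, :]
    s = s.reshape(-1, n)                                 # (T, n) source matrix
    labels = np.repeat(np.arange(n_segments), seg_len)   # segment index of each point
    # Mixing MLP: square random matrices (a.s. invertible) + bijective leaky ReLU.
    x = s
    for _ in range(n_layers):
        A = rng.normal(size=(n, n)) / np.sqrt(n)
        z = x @ A.T
        x = np.where(z > 0, z, 0.2 * z)                  # leaky ReLU, slope 0.2
    return x, s, labels
```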

5 Solving nonlinear ICA by TCL

Now we consider the case where TCL as defined in Section 2 is applied on data generated by the nonlinear ICA model in Section 4. We refer again to Figure 1 which illustrates the total system. For simplicity, we consider the case $V = 1$, i.e. the exponential family has a single modulated function per source, and this function is the same for all sources; we will discuss the general case separately below. The modulated function $q_{i,1}$ will be simply denoted by $q$ in the following.

First, we show that the nonlinear functions $q(s_i(t))$, $i = 1,\dots,n$, of the sources can be obtained as unknown linear transformations of the outputs of the feature extractor $\mathbf{h}(\mathbf{x}_t;\boldsymbol\theta)$ trained by TCL:

Theorem 1.

Assume the following:

  1. We observe data which is obtained by generating sources according to (5), and mixing them as in (4) with a smooth invertible $\mathbf{f}$. For simplicity, we assume only a single function defining the exponential family, i.e. $V = 1$ and $q_{i,1} = q$ as explained above.

  2. We apply TCL on the data so that the dimension $m$ of the feature extractor $\mathbf{h}$ is equal to the dimension $n$ of the data vector $\mathbf{x}_t$, i.e., $m = n$.

  3. The modulation parameter matrix $\mathbf{L}$ with elements $[\mathbf{L}]_{\tau,i} = \lambda_{i,1}(\tau) - \lambda_{i,1}(1)$, $\tau = 1,\dots,T$, $i = 1,\dots,n$, has full column rank $n$. (Intuitively speaking, the variances of the independent components are modulated sufficiently independently of each other.)

Then, after learning the parameter vector $\boldsymbol\theta$, the outputs of the feature extractor are equal to $\mathbf{q}(\mathbf{s}_t) = (q(s_1(t)),\dots,q(s_n(t)))^{\top}$ up to an invertible linear transformation. In other words,

\[
\mathbf{q}(\mathbf{s}_t) = \mathbf{A}\,\mathbf{h}(\mathbf{x}_t;\boldsymbol\theta) + \mathbf{d}
\tag{6}
\]

for some constant invertible matrix $\mathbf{A}$ and a constant vector $\mathbf{d}$.

Sketch of proof: (see supplementary material for full proof) The basic idea is that after convergence we must have equality between the model of the log-pdf in each segment given by TCL in Eq. (3) and that given by nonlinear ICA, obtained by summing the RHS of Eq. (5) over $i$:

\[
\mathbf{w}_\tau^{\top}\mathbf{h}(\mathbf{x}_t;\boldsymbol\theta) + b_\tau
  = \sum_{i=1}^{n} \lambda_{i,1}(\tau)\, q(s_i(t)) + c(\mathbf{x}_t) + a_\tau
\tag{7}
\]

where $c(\mathbf{x}_t)$ does not depend on $\tau$, and $a_\tau$ does not depend on $\mathbf{x}_t$ or $\mathbf{s}_t$. We see that the functions $h_i(\mathbf{x}_t;\boldsymbol\theta)$ and $q(s_i(t))$ must span the same linear subspace. (TCL looks at differences of log-pdf's, introducing the $\tau$-independent term $c(\mathbf{x}_t)$, but this does not actually change the subspace.) This implies that the $q(s_i(t))$ must be equal to some invertible linear transformation of $\mathbf{h}(\mathbf{x}_t;\boldsymbol\theta)$ and a constant bias term, which gives (6). ∎

To further estimate the linear transformation in (6), we can simply use linear ICA:

Corollary 1.

The estimation (identification) of the $q(s_i(t))$ can be performed by first performing TCL, and then linear ICA on the hidden representation $\mathbf{h}(\mathbf{x}_t;\boldsymbol\theta)$.

Proof: We only need to combine the well-known identifiability proof of linear ICA [3] with Theorem 1, noting that the quantities $q(s_i(t))$ are independent, and since $q$ has a strict upper bound (which is necessary for integrability), $q(s_i(t))$ must be non-Gaussian. ∎

In general, TCL followed by linear ICA does not allow us to exactly recover the independent components, because the function $q$ can hardly be invertible, typically being something like squaring or absolute values. However, for a specific class of $q$, including the modulated Gaussian family, we can prove a stricter form of identifiability. Slightly counterintuitively, we can recover the signs of the $s_i(t)$, since we also know the corresponding $\mathbf{x}_t$ and the transformation $\mathbf{f}$ is invertible:

Corollary 2.

Assume $q(s)$ is a strictly monotonic function of $|s|$. Then, we can further identify the original $s_i(t)$, up to strictly monotonic transformations of each source.

Proof: To make $p_\tau(s_i)$ integrable, necessarily $q(s) \to -\infty$ when $|s| \to \infty$, and $q$ must have a finite maximum, which we can set to zero without restricting generality. For each fixed $i$, consider the manifold defined by $q(g_i(\mathbf{x})) = 0$, where $\mathbf{g} = \mathbf{f}^{-1}$. By invertibility of $\mathbf{f}$, this divides the space of $\mathbf{x}$ into two halves. In one half, define $\tilde{s}_i = q(s_i)$, and in the other, $\tilde{s}_i = -q(s_i)$. With such $\tilde{s}_i$, we have thus recovered the original sources, up to the strictly monotonic transformation $\tilde{s}_i = c\,q(s_i)$, where $c$ is either $+1$ or $-1$. (Note that in general, the $s_i$ are meaningfully defined only up to a strictly monotonic transformation, analogously to multiplication by an arbitrary constant in the linear case [3].) ∎

Summary of Theory

What we have proven is that in the special case of a single $q$ which is a monotonic function of $|s|$, our nonlinear ICA model is identifiable, up to inevitable component-wise monotonic transformations. We also provided a practical method for the estimation of the nonlinear transformations $q(s_i(t))$ for any general $q$, given by TCL followed by linear ICA. (The method provided in the proof of Corollary 2 may be very difficult to implement in practice.)
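The estimation procedure of Corollary 1 is easy to sketch in code. The snippet below is a rough illustration reusing the hypothetical generate_data and tcl_features helpers sketched earlier, with arbitrary sizes: it runs TCL on simulated data, applies FastICA to the learned features, and scores recovery by absolute correlations with $|s_i|$, which plays the role of $q(s_i)$ up to sign for Laplacian sources.

```python
# Illustrative TCL + linear ICA pipeline (Theorem 1 and Corollary 1).
import numpy as np
from sklearn.decomposition import FastICA

x, s, labels = generate_data(n=5, n_segments=128, seg_len=256, n_layers=2)
H = tcl_features(x, n_segments=128, hidden=(32, 5))      # feature dimension m = n = 5
q_est = FastICA(n_components=5, max_iter=1000).fit_transform(H)

# Absolute correlations between |s_i| (proxy for q(s_i) = -|s_i|) and the
# estimated components; for each true source take its best-matching estimate.
corr = np.abs(np.corrcoef(np.abs(s).T, q_est.T)[:5, 5:])
print(corr.max(axis=1).mean())
```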

Extension 1: Combining ICA with dimension reduction

In practice we may want to set the feature extractor dimension $m$ to be smaller than $n$, to accomplish dimension reduction. It is in fact simple to modify the generative model and the theorem so that a dimension reduction similar to nonlinear PCA can be included, and performed by TCL. It is enough to assume that while in the nonlinear mixing (4) we have the same number of dimensions for both $\mathbf{x}$ and $\mathbf{s}$, in fact some of the components $s_i$ are stationary, i.e. for them, the $\lambda_{i,v}(\tau)$ do not depend on $\tau$. The nonstationary components will then be identified as in the Theorem, using TCL.

Extension 2: General case with many nonlinearities

With many $q_{i,v}$ ($V > 1$), the left-hand side of (6) will have entries given by all the possible $q_{i,v}(s_i(t))$, and the dimension of the feature extractor must be increased accordingly; the condition of full rank on $\mathbf{L}$ is likewise more complicated. Corollary 1 must then consider an independent subspace model, but it can still be proven in the same way.

6 Simulation on artificial data

Data generation

We created data from the nonlinear ICA model in Section 4, using the simplified case of the Theorem as follows. Nonstationary source signals ($n = 20$, segment length 512) were randomly generated by modulating Laplacian sources by modulation parameters $\lambda_{i,1}(\tau)$ drawn independently for each segment and component from a uniform distribution. As the nonlinear mixing function $\mathbf{f}$, we used an MLP (“mixing-MLP”). In order to guarantee that the mixing-MLP is invertible, we used leaky ReLU units and the same number of units in all layers.

TCL settings, training, and final linear ICA

As the feature extractor to be trained by TCL, we adopted an MLP (“feature-MLP”). The segmentation in TCL was the same as in the data generation, and the number of layers $L$ was the same in the mixing-MLP and the feature-MLP. Note that when $L = 1$, both the mixing-MLP and the feature-MLP are one-layer models, and then the observed signals are simply linear mixtures of the source signals as in a linear ICA model. As in the Theorem, we set the dimension of the feature extractor equal to that of the data, $m = n$. As the activation function in the hidden layers, we used a “maxout” unit, constructed by taking the maximum across a group of affine fully connected units. However, the output layer has “absolute value” activation units exclusively. This is because the output of the feature-MLP (i.e., $\mathbf{h}(\mathbf{x};\boldsymbol\theta)$) should resemble $q(s_i)$, based on Theorem 1, and here we used the Laplacian distribution for the sources. The initial weights of each layer were randomly drawn from a uniform distribution for each layer, scaled as in [7]. To train the MLP, we used back-propagation with a momentum term. To avoid overfitting, we used regularization for the feature-MLP and MLR.

According to the Corollary above, after TCL we further applied linear ICA (FastICA, [15]) to the feature values $\mathbf{h}(\mathbf{x}_t;\boldsymbol\theta)$, and used its outputs as the final estimates of $q(s_i(t))$. To evaluate the performance of source recovery, we computed the mean correlation coefficients between the true $q(s_i(t))$ and their estimates. For comparison, we also applied a linear ICA method based on nonstationarity of variance (NSVICA) [16], a kernel-based nonlinear ICA method (kTDSEP) [12], and a denoising autoencoder (DAE) [31] to the observed data. We took absolute values of the estimated sources to make a fair comparison with TCL. In kTDSEP, we selected the 20 estimated components with the highest correlations with the source signals. We initialized the DAE by the stacked DAE scheme [31], and sigmoidal units were used in the hidden layers; we omitted one setting of $L$ because of instability of training.

Results

Figure 2a) shows that after training the feature-MLP by TCL, the MLR achieved higher classification accuracies than chance level, which implies that the feature-MLP was able to learn a representation of the data nonstationarity. (Here, chance level denotes the performance of the MLP with a randomly initialized feature-MLP.) We can see that the larger the number of layers is (which means that the nonlinearity in the mixing-MLP is stronger), the more difficult it is to train the feature-MLP and the MLR. The classification accuracy also goes down when the number of segments increases, since when there are more and more classes, some of them will inevitably have very similar distributions and are thus difficult to discriminate; this is why we computed the chance level as above.

Figure 2b) shows that the TCL method could reconstruct the $q(s_i(t))$ reasonably well even for the nonlinear mixture case ($L > 1$), while all other methods failed (NSVICA obviously performed very well in the linear case). The figure also shows that (1) the larger the number of segments (amount of data) is, the higher the performance of the TCL method is (i.e. the method seems to converge), and (2) again, more layers make learning more difficult.

To summarize, this simulation confirms that TCL is able to estimate the nonlinear ICA model based on nonstationarity. Using more data increases performance, perhaps obviously, while making the mixing more nonlinear decreases performance.


Figure 2: Simulation on artificial data. a) Mean classification accuracies of the MLR simultaneously trained with the feature-MLP to implement TCL, with different settings of the number of layers and segments. Note that chance levels (dotted lines) change as a function of the number of segments (see text). The MLR achieved higher accuracy than chance level. b) Mean absolute correlation coefficients between the true $q(s_i(t))$ and the features learned by TCL (solid line) and, for comparison: nonstationarity-of-variance-based linear ICA (NSVICA, dashed line), kernel-based nonlinear ICA (kTDSEP, dotted line), and denoising autoencoder (DAE, dash-dot line). TCL has much higher correlations than DAE or kTDSEP, and in the nonlinear case ($L > 1$), higher than NSVICA.

7 Experiments on real brain imaging data

To evaluate the applicability of the TCL method to real data, we applied it on magnetoencephalography (MEG), i.e. measurements of the electrical activity in the human brain. In particular, we used data measured in a resting-state session, during which the subjects did not have any task nor were receiving any particular stimulation. In recent years, many studies have shown the existence of networks of brain activity in resting state, with MEG as well [2, 4]. Such networks mean that the data is nonstationary, and thus this data provides an excellent target for TCL.

Data and preprocessing

We used MEG data from an earlier neuroimaging study [25], graciously provided by P. Ramkumar. MEG signals were measured from nine healthy volunteers by a Vectorview helmet-shaped neuromagnetometer at a sampling rate of 600 Hz with 306 channels. The experiment consisted of two kinds of sessions, i.e., resting sessions (2 sessions of 10 min) and task sessions (2 sessions of 12 min). In the task sessions, the subjects were exposed to a sequence of 6–33 s blocks of auditory, visual and tactile stimuli, which were interleaved with 15-s rest periods. We exclusively used the resting-session data for the training of the network, and task-session data was only used in the evaluation. The modality of the sensory stimulation (incl. no stimulation, i.e. rest) provided a class label that we used in the evaluation, giving in total four classes. We preprocessed the MEG signals by Morlet filtering around the alpha frequency band.

TCL settings

We used segments of equal size, of length 12.5 s or 625 data points (after downsampling to 50 Hz). The number of layers $L$ was varied, and the number of nodes of each hidden layer was a function of $L$: we always fixed the number of output-layer nodes to 10, and gradually increased the number of nodes when going to earlier layers. We used ReLU units in the middle layers, and adaptive units exclusively for the output layer, which is more flexible than the “absolute value” unit used in the simulation. In order to prevent overfitting, we applied dropout [28] to inputs, and batch normalization [19] to hidden layers. Since different subjects and sessions are likely to have artefactual differences, we used a multi-task learning scheme, with a separate top-layer MLR classifier for each measurement session and subject, but a shared feature-MLP. (In fact, if we use the MLR to discriminate all segments of all sessions, it tends to mainly learn the artefactual differences across sessions.) Otherwise, all the settings and comparisons were as in Section 6.
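A hedged sketch of this multi-task arrangement is given below: one shared feature extractor and a separate MLR head per measurement session, so that segments are discriminated only within each session. The class and argument names are hypothetical, and the shared extractor could be any module mapping data points to feature vectors (e.g., the feature part of the FeatureMLP sketched earlier).

```python
# Multi-task TCL sketch: shared feature-MLP, one MLR head per session
# (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskTCL(nn.Module):
    def __init__(self, shared_extractor, feat_dim, segments_per_session):
        super().__init__()
        self.extractor = shared_extractor              # shared feature-MLP
        self.heads = nn.ModuleList(                    # one MLR per session/subject
            [nn.Linear(feat_dim, n_seg) for n_seg in segments_per_session])

    def loss(self, x, seg_labels, session):
        h = self.extractor(x)                          # shared features h(x; theta)
        return F.cross_entropy(self.heads[session](h), seg_labels)
```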

Evaluation methods

To evaluate the obtained features, we performed classification of the sensory stimulation categories (modalities) by applying feature extractors trained with (unlabeled) resting-session data to (labeled) task-session data. Classification was performed using a linear support vector machine (SVM) classifier trained on the stimulation modality labels, and its performance was evaluated by a session-average of session-wise one-block-out cross-validation (CV) accuracies. The hyperparameters of the SVM were determined by nested CV without using the test data. The average activities of the feature extractor during each block were used as feature vectors in the evaluation of TCL features. However, we used log-power activities for the other (baseline) methods because the average activities had much lower performance with those methods. We balanced the number of blocks between the four categories. We measured the CV accuracy 10 times by changing the initial values of the feature extractor training, and showed their average performance. We also visualized the spatial activity patterns obtained by TCL, using weighted-averaged sensor signals; i.e., the sensor signals are averaged while weighted by the activities of the feature extractor.
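A rough sketch of this evaluation in Python with scikit-learn is shown below (illustrative; the nested cross-validation over SVM hyperparameters is omitted). Here block_features and block_labels are hypothetical arrays of block-averaged feature activities and the four stimulation-modality labels for one session.

```python
# One-block-out evaluation sketch: a linear SVM predicts stimulation modality
# from block-averaged features, scored by leave-one-block-out cross-validation.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

def modality_cv_accuracy(block_features, block_labels, C=1.0):
    svm = LinearSVC(C=C)
    scores = cross_val_score(svm, block_features, block_labels, cv=LeaveOneOut())
    return scores.mean()   # session-wise one-block-out CV accuracy
```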

Results

Figure 3a) shows the comparison of classification accuracies between the different methods, for different numbers of layers $L$. The classification accuracies by the TCL method were consistently higher than those by the other (baseline) methods. (Note that the classification using the final linear ICA is equivalent to using whitening, since ICA only makes a further orthogonal rotation, and could be replaced by whitening without affecting classification accuracy.) We can also see a superior performance of multi-layer networks ($L > 1$) compared with that of the linear case ($L = 1$), which indicates the importance of nonlinear demixing in the TCL method.

Figure 3b) shows an example of spatial patterns learned by the TCL method. For simplicity of visualization, we plotted spatial patterns for the three-layer model. We manually picked one out of the ten hidden nodes from the third layer, and plotted its weighted-averaged sensor signals (Figure 3b, L3). We also visualized the most strongly contributing second- and first-layer nodes. We see progressive pooling of L1 units to form left temporal, right temporal, and occipito-parietal patterns in L2, which are then all pooled together in L3, resulting in a bilateral temporal pattern with a negative contribution from the occipito-parietal region. Most of the spatial patterns in the third layer (not shown) are actually similar to those previously reported using functional magnetic resonance imaging (fMRI) and MEG [2, 4]. Interestingly, none of the hidden units seems to represent artefacts, in contrast to ICA.


Figure 3: Real MEG data. a) Classification accuracies of linear SVMs newly trained with task-session data to predict stimulation labels in task sessions, with feature extractors trained in advance with resting-session data. Error bars give standard errors of the mean across ten repetitions. For TCL and DAE, accuracies are given for different numbers of layers $L$. The horizontal line shows the chance level (25%). b) Example of spatial patterns of nonstationary components learned by TCL. Each small panel corresponds to one spatial pattern, with the measurement helmet seen from three different angles (left, back, right); red/yellow is positive and blue is negative. “L3” shows the approximate total spatial pattern of one selected third-layer unit. “L2” shows the patterns of the three second-layer units maximally contributing to this L3 unit. “L1” shows, for each L2 unit, the two most strongly contributing first-layer units.

8 Conclusion

We proposed a new learning principle for unsupervised feature (representation) learning. It is based on analyzing nonstationarity in temporal data by discriminating between time segments. The ensuing “time-contrastive learning” is easy to implement since it only uses ordinary neural network training: a multi-layer perceptron with logistic regression. However, we showed that, surprisingly, it can estimate independent components in a nonlinear mixing model up to certain indeterminacies, assuming that the independent components are nonstationary in a suitable way. The indeterminacies include a linear mixing (which can be resolved by a further linear ICA step), and component-wise nonlinearities, such as squares or absolute values. TCL also avoids the computation of the gradient of the Jacobian, which is a major problem with maximum likelihood estimation [5].

Our developments also give by far the strongest identifiability proof of nonlinear ICA in the literature. The indeterminacies actually reduce to just inevitable monotonic component-wise transformations in the case of modulated Gaussian sources. Thus, our results pave the way for further developments in nonlinear ICA, which has so far seriously suffered from the lack of almost any identifiability theory.

Experiments on real MEG found neuroscientifically interesting networks. Other promising future application domains include video data, econometric data, and biomedical data such as EMG and ECG, in which nonstationary variances seem to play a major role.

References

  • [1] L. B. Almeida. MISEP—linear and nonlinear ICA based on mutual information. J. of Machine Learning Research, 4:1297–1318, 2003.
  • [2] M. J. Brookes et al. Investigating the electrophysiological basis of resting state networks using magnetoencephalography. Proc. Natl. Acad. Sci., 108(40):16783–16788, 2011.
  • [3] P. Comon. Independent component analysis—a new concept? Signal Processing, 36:287–314, 1994.
  • [4] F. de Pasquale et al. A cortical core for dynamic integration of functional networks in the resting human brain. Neuron, 74(4):753–764, 2012.
  • [5] L. Dinh, D. Krueger, and Y. Bengio. NICE: Non-linear independent components estimation. arXiv:1410.8516 [cs.LG], 2015.
  • [6] P. Földiák. Learning invariance from transformation sequences. Neural Computation, 3:194–200, 1991.
  • [7] X. Glorot and Y. Bengio. Understanding the difficulty of training deep feedforward neural networks. In AISTATS’10, 2010.
  • [8] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In NIPS, pages 2672–2680, 2014.
  • [9] R. Goroshin, J. Bruna, J. Tompson, D. Eigen, and Y. LeCun. Unsupervised feature learning from temporal data. arXiv:1504.02518, 2015.
  • [10] M. U. Gutmann, R. Dutta, S. Kaski, and J. Corander. Likelihood-free inference via classification. arXiv:1407.4981 [stat.CO], 2014.
  • [11] M. U. Gutmann and A. Hyvärinen. Noise-contrastive estimation of unnormalized statistical models, with applications to natural image statistics. J. of Machine Learning Research, 13:307–361, 2012.
  • [12] S. Harmeling, A. Ziehe, M. Kawanabe, and K.-R. Müller. Kernel-based nonlinear blind source separation. Neural Comput., 15(5):1089–1124, 2003.
  • [13] G. E. Hinton. Learning multiple layers of representation. Trends Cogn. Sci., 11:428–434, 2007.
  • [14] G. E. Hinton and R. S. Zemel. Autoencoders, minimum description length, and Helmholtz free energy. Adv. Neural Inf. Process. Syst., 1994.
  • [15] A. Hyvärinen. Fast and robust fixed-point algorithms for independent component analysis. IEEE Trans. Neural Netw., 10(3):626–634, 1999.
  • [16] A. Hyvärinen. Blind source separation by nonstationarity of variance: A cumulant-based approach. IEEE Transactions on Neural Networks, 12(6):1471–1474, 2001.
  • [17] A. Hyvärinen, J. Hurri, and P. O. Hoyer. Natural Image Statistics. Springer-Verlag, 2009.
  • [18] A. Hyvärinen and P. Pajunen. Nonlinear independent component analysis: Existence and uniqueness results. Neural Netw., 12(3):429–439, 1999.
  • [19] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. CoRR, abs/1502.03167, 2015.
  • [20] C. Jutten, M. Babaie-Zadeh, and J. Karhunen. Nonlinear mixtures. Handbook of Blind Source Separation, Independent Component Analysis and Applications, pages 549–592, 2010.
  • [21] D. P. Kingma and M. Welling. Auto-encoding variational bayes. arXiv:1312.6114 [stat.ML], 2014.
  • [22] K. Matsuoka, M. Ohya, and M. Kawamoto. A neural net for blind separation of nonstationary signals. Neural Netw., 8(3):411–419, 1995.
  • [23] H. Mobahi, R. Collobert, and J. Weston. Deep learning from temporal coherence in video. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 737–744, 2009.
  • [24] D.-T. Pham and J.-F. Cardoso. Blind separation of instantaneous mixtures of non stationary sources. IEEE Trans. Signal Processing, 49(9):1837–1848, 2001.
  • [25] P. Ramkumar, L. Parkkonen, R. Hari, and A. Hyvärinen. Characterization of neuromagnetic brain rhythms over time scales of minutes using spatial independent component analysis. Hum. Brain Mapp., 33(7):1648–1662, 2012.
  • [26] H. Sprekeler, T. Zito, and L. Wiskott. An extension of slow feature analysis for nonlinear blind source separation. J. of Machine Learning Research, 15(1):921–947, 2014.
  • [27] J. T. Springenberg and M. Riedmiller. Learning temporal coherent features through life-time sparsity. In Neural Information Processing, pages 347–356. Springer, 2012.
  • [28] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res., 15(1):1929–1958, 2014.
  • [29] Y. Tan, J. Wang, and J. M. Zurada. Nonlinear blind source separation using a radial basis function network. IEEE Transactions on Neural Networks, 12(1):124–134, 2001.
  • [30] H. Valpola. From neural PCA to deep unsupervised learning. In Advances in Independent Component Analysis and Learning Machines, pages 143–171. Academic Press, 2015.
  • [31] P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P.-A. Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. J. Mach. Learn. Res., 11:3371–3408, 2010.
  • [32] L. Wiskott and T. J. Sejnowski. Slow feature analysis: Unsupervised learning of invariances. Neural Comput., 14(4):715–770, 2002.

Supplementary Material

Proof of Theorem

We start by computing the log-pdf of a data point $\mathbf{x}$ in the segment $\tau$ under the nonlinear ICA model. Denote for simplicity $\lambda_i(\tau) := \lambda_{i,1}(\tau)$. Using the probability transformation formula, the log-pdf is given by

\[
\log p_\tau(\mathbf{x}) = \sum_{i=1}^{n}\Big[ q_{i,0}\big(g_i(\mathbf{x})\big) + \lambda_i(\tau)\, q\big(g_i(\mathbf{x})\big) - \log Z\big(\lambda_i(\tau)\big) \Big] + \log\big|\det \mathbf{J}\mathbf{g}(\mathbf{x})\big|
\tag{8}
\]

where we drop the time index $t$ from $\mathbf{x}_t$ for simplicity, $\mathbf{g} = \mathbf{f}^{-1}$ is the inverse function of the (true) mixing function $\mathbf{f}$, and $\mathbf{J}\mathbf{g}$ denotes its Jacobian; thus, $\mathbf{s} = \mathbf{g}(\mathbf{x})$ by definition. By Assumption A1, this holds for the data for any $\tau$. Based on Assumptions A1 and A2, the optimal discrimination relation in Eq. (3) holds as well and is here given by

\[
\log p_\tau(\mathbf{x}) = \sum_{i=1}^{n} w_{\tau,i}\, h_i(\mathbf{x}) + b_\tau + \log p_1(\mathbf{x}) - c_\tau
\tag{9}
\]

where $w_{\tau,i}$ and $h_i(\mathbf{x})$ are the $i$th elements of $\mathbf{w}_\tau$ and $\mathbf{h}(\mathbf{x};\boldsymbol\theta)$, respectively, we drop $\boldsymbol\theta$ from $\mathbf{h}$ for simplicity, and $c_\tau = \log\big(p(C_t=\tau)/p(C_t=1)\big)$ is the last term in (3).

Now, from Eq. (8) with $\tau = 1$, we have

\[
\log p_1(\mathbf{x}) = \sum_{i=1}^{n}\Big[ q_{i,0}\big(g_i(\mathbf{x})\big) + \lambda_i(1)\, q\big(g_i(\mathbf{x})\big) - \log Z\big(\lambda_i(1)\big) \Big] + \log\big|\det \mathbf{J}\mathbf{g}(\mathbf{x})\big|.
\tag{10}
\]

Substituting Eq. (10) into Eq. (9), we have equivalently

\[
\log p_\tau(\mathbf{x}) = \sum_{i=1}^{n} w_{\tau,i}\, h_i(\mathbf{x}) + b_\tau - c_\tau
  + \sum_{i=1}^{n}\Big[ q_{i,0}\big(g_i(\mathbf{x})\big) + \lambda_i(1)\, q\big(g_i(\mathbf{x})\big) - \log Z\big(\lambda_i(1)\big) \Big] + \log\big|\det \mathbf{J}\mathbf{g}(\mathbf{x})\big|.
\tag{11}
\]

Setting Eq. (11) and Eq. (8) to be equal for arbitrary $\tau$, we have:

\[
\sum_{i=1}^{n} \tilde\lambda_i(\tau)\, q\big(g_i(\mathbf{x})\big)
  = \sum_{i=1}^{n} w_{\tau,i}\, h_i(\mathbf{x}) + \beta_\tau
\tag{12}
\]

where $\tilde\lambda_i(\tau) = \lambda_i(\tau) - \lambda_i(1)$ and $\beta_\tau = b_\tau - c_\tau + \sum_{i=1}^{n}\big[\log Z(\lambda_i(\tau)) - \log Z(\lambda_i(1))\big]$. Remarkably, the log-determinants of the Jacobians cancel out and disappear here.

Collecting the equations in Eq. (12) for all the segments, and noting that by definition $\tilde\lambda_i(1) = 0$, we have a linear system with the “tall” $T \times n$ matrix $\mathbf{L}$ in Assumption A3 on the left-hand side:

\[
\mathbf{L}\,\mathbf{q}\big(\mathbf{g}(\mathbf{x})\big) = \mathbf{W}\,\mathbf{h}(\mathbf{x}) + \boldsymbol\beta
\tag{13}
\]

where we collect the $\beta_\tau$ in the vector $\boldsymbol\beta$ and the $w_{\tau,i}$ in the matrix $\mathbf{W}$. Assumption A3 ($\mathbf{L}$ has full column rank) implies that its pseudoinverse $\mathbf{L}^{+}$ fulfills $\mathbf{L}^{+}\mathbf{L} = \mathbf{I}$. We multiply the equation above from the left by this pseudoinverse and obtain

\[
\mathbf{q}\big(\mathbf{g}(\mathbf{x})\big) = \mathbf{L}^{+}\mathbf{W}\,\mathbf{h}(\mathbf{x}) + \mathbf{L}^{+}\boldsymbol\beta.
\tag{14}
\]

Here, we see that the $q(s_i)$ are obtained as a linear transformation of the feature values $\mathbf{h}(\mathbf{x};\boldsymbol\theta)$, plus an additional bias term $\mathbf{L}^{+}\boldsymbol\beta$, denoted by $\mathbf{d}$ in the Theorem. Furthermore, the matrix $\mathbf{L}^{+}\mathbf{W}$, denoted by $\mathbf{A}$ in the Theorem, must be full rank (i.e. invertible), because if it were not, the functions $q(g_i(\mathbf{x}))$ would be linearly dependent, which is impossible since they are each a function of a unique variable $s_i$. ∎
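A quick numerical sanity check of the pseudoinverse step in Eqs. (13)-(14), under arbitrary illustrative dimensions:

```python
# For a random "tall" L with full column rank, pinv(L) @ L = I, so the vector
# q can be recovered from the right-hand side of Eq. (13). Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
T, n = 50, 5
L = rng.normal(size=(T, n))                 # full column rank almost surely
q = rng.normal(size=n)                      # stand-in for q(g(x))
rhs = L @ q                                 # W h(x) + beta in Eq. (13)
q_rec = np.linalg.pinv(L) @ rhs             # Eq. (14)
print(np.allclose(q, q_rec))                # True
```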