Deeply Coupled Auto-encoder Networks for Cross-view Classification

02/10/2014, by Wen Wang et al.

The comparison of heterogeneous samples arises widely in many applications, especially in image classification. In this paper, we propose a simple but effective coupled neural network, called Deeply Coupled Autoencoder Networks (DCAN), which builds two deep neural networks coupled with each other at every corresponding layer. In DCAN, each deep structure is developed by stacking multiple discriminative coupled auto-encoders, each a denoising auto-encoder trained with a maximum margin criterion consisting of intra-class compactness and inter-class penalty. This single-layer component makes our model simultaneously preserve local consistency and enhance discriminative capability. As the number of layers increases, the coupled networks gradually narrow the gap between the two views. Extensive experiments on cross-view image classification tasks demonstrate the superiority of our method over state-of-the-art methods.







1 Introduction

Real-world objects often have different views that share the same semantics. For example, face images can be captured in different poses yet reveal the identity of the same subject; images of one face can also come in different modalities, such as pictures under different lighting conditions or poses, or even sketches drawn by artists. Many computer vision applications, such as image retrieval, require comparing two types of heterogeneous images, which may come from different views or even different sensors. Since the spanned feature spaces are quite different, it is very difficult to classify these images across views directly. To decrease the discrepancy across views, most previous works endeavor to learn view-specific linear transforms that project cross-view samples into a common latent space, and then employ the newly generated features for classification.

Although there are many approaches for learning view-specific projections, they can be divided roughly by whether supervised information is used. Unsupervised methods such as Canonical Correlation Analysis (CCA) [14] and Partial Least Squares (PLS) [26] have been applied to cross-view recognition. Both use two linear mappings to project samples into a common space where the correlation is maximized, while PLS considers the variations rather than only the correlation in the target space. Besides, using mutual information, a Coupled Information-Theoretic Encoding (CITE) method [34] was developed to narrow the inter-view gap for the specific photo-sketch recognition task, and in [30] a semi-coupled dictionary is used to bridge two views. All the methods above aim to reduce the discrepancy between two views; however, the label information is not explicitly taken into account. With label information available, many methods were further developed to learn a discriminant common space. For instance, Discriminative Canonical Correlation Analysis (DCCA) [16] was proposed as an extension of CCA, and in [22], with an additional local smoothness constraint, two linear projections are simultaneously learnt for Common Discriminant Feature Extraction (CDFE). There are also other such methods, such as the large margin approach [8] and Coupled Spectral Regression (CSR) [20]. Recently, multi-view analysis [27, 15] has been developed to jointly learn multiple view-specific transforms when more than two views are available.

Although the above methods have been extensively applied to the cross-view problem with encouraging performance, they all employ linear transforms to capture the shared features of samples from the two views. However, such linear discriminant analysis methods usually rely on the assumption that the data of each class follows a Gaussian distribution, while real-world data usually has a much more complex distribution [33]. This indicates that linear transforms are insufficient to extract the common features of cross-view images, so it is natural to consider learning nonlinear features.

A recent topic of interest in nonlinear learning is deep learning, which learns nonlinear representations hierarchically via deep structures and has been applied successfully to many computer vision problems. Classical deep learning methods often stack or compose multiple basic building blocks to yield a deeper structure; see [5] for a recent review of deep learning algorithms. Many such building blocks have been proposed, including sparse coding [19], the restricted Boltzmann machine (RBM) [12], and the auto-encoder [13, 6]. In particular, the (stacked) auto-encoder has shown its effectiveness in image denoising [32], domain adaptation [7], audio-visual speech classification [23], etc.

As is well known, the kernel method, such as Kernel Canonical Correlation Analysis (Kernel CCA) [1], is also a widely used approach to learning nonlinear representations. Compared with kernel methods, deep learning is more flexible and efficient, because the transform is learned rather than fixed and the time needed for training and inference is not tied to the size of the training set.

Inspired by the deep learning works above, we intend to solve the cross-view classification task via deep networks. It is natural to build one single deep neural network with samples from both views, but such a network cannot handle complex data from totally different modalities and may suffer from inadequate representation capacity. Another way is to learn two different deep neural networks, one per view. However, two independent networks project samples from different views into different spaces, which makes comparison infeasible. Hence, building two neural networks coupled with each other seems the better solution.

In this work, we propose the Deeply Coupled Auto-encoder Networks (DCAN) method, which learns common representations for cross-view classification by building two deeply coupled neural networks, one for each view. We build the DCAN by stacking multiple discriminative coupled auto-encoders, i.e., denoising auto-encoders with a maximum margin criterion. The discriminative coupled auto-encoder shares the corrupted-input, reconstruction-error-minimizing mechanism of the denoising auto-encoder proposed in [28], but is modified by adding a maximum margin criterion. This kind of criterion has been used in previous works, e.g., [21, 29, 35]. Note that counterparts from the two views enter the maximum margin criterion simultaneously since they come from the same class, which naturally couples the corresponding layers of the two deep networks. A schematic illustration can be seen in Fig. 1.

The proposed DCAN is related to Multimodal Auto-encoders [23], Multimodal Restricted Boltzmann Machines, and Deep Canonical Correlation Analysis [3]. The first two methods learn a single network with one or more layers connected to both views and predict one view from the other; Deep Canonical Correlation Analysis builds two deep networks, one per view, but constrains only the representations of the highest layer to be correlated. The key difference is that we learn two deep networks whose representations are coupled in every layer, which is of great benefit because the DCAN not only learns two separate deep encodings but also makes better use of data from both views. Moreover, these differences allow our model to handle the recognition task even when data are impure and insufficient.

The rest of this paper is organized as follows. Section 2 details the formulation and solution of the proposed Deeply Coupled Auto-encoder Networks. Experimental results in Section 3 demonstrate the efficacy of the DCAN. Section 4 concludes the paper.

2 Deeply Coupled Auto-encoder Networks

In this section, we first present the basic idea. The second part gives a detailed description of the discriminative coupled auto-encoder. Then, we describe how to stack multiple layers to build a deep network. Finally, we briefly describe the optimization of the model.

2.1 Basic Idea

Figure 1: An illustration of our proposed DCAN. The left-most and right-most schematics show the structures of the two coupled networks respectively. The schematic in the middle illustrates how the whole network gradually enhances separability with increasing layers, where pictures with solid-line borders denote samples from view 1, those with dotted-line borders denote samples from view 2, and different colors denote different subjects.

As shown in Fig. 1, the Deeply Coupled Auto-encoder Networks (DCAN) consist of two deep networks coupled with each other, one for each view. The structures of the two deep networks are shown in the left-most and right-most parts of Fig. 1, where circles denote the units in each layer (pixels of an input image for the input layer, hidden representations in higher layers) and arrows denote the full connections between adjacent layers. The middle part of Fig. 1 illustrates how the whole network projects samples from different views into a common space and gradually enhances separability with increasing layers.

The two deep networks are both built by stacking multiple similar coupled single-layer blocks, because a single coupled layer might be insufficient, and greedy layer-wise stacking and training has proven effective in many previous works, such as [13, 6]. As the number of layers increases, the whole network can compactly represent a significantly larger set of transforms than shallow networks, and gradually narrows the cross-view gap while enhancing discriminative capacity.

We use a discriminative coupled auto-encoder trained with a maximum margin criterion as the single-layer component. Concretely, we inject additional noise during training while maximizing the margin criterion, which makes the learnt mapping more stable as well as discriminative. Note that the maximum margin criterion also serves to couple the two corresponding layers. Formally, the discriminative coupled auto-encoder can be written as follows:


where the former denote inputs from the two views and the latter denote hidden representations of the two views respectively; the transforms are what we intend to learn. The reconstructive error and the maximum margin criterion are described in detail in the next subsection, and the remaining parameter is the threshold of the maximum margin criterion.
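Since the displayed formula itself did not survive extraction, a plausible form consistent with the surrounding description is the following, where all notation is our assumption rather than the authors': $f_1, f_2$ are the per-view transforms, $J_{\mathrm{rec}}$ the denoising reconstruction error, $J_{\mathrm{mar}}$ the maximum margin criterion, and $\tau$ its threshold.

```latex
\min_{f_1, f_2}\; J_{\mathrm{rec}}(f_1) + J_{\mathrm{rec}}(f_2)
\qquad \text{s.t.} \quad J_{\mathrm{mar}}(f_1, f_2) \le \tau
```

The constraint is what couples the two networks: it is evaluated over samples of both views pooled by class, as described below.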

2.2 Discriminative coupled auto-encoder

In the cross-view problem, there are two types of heterogeneous samples. Without loss of generality, we denote samples from one view and those from the other view as two sets of the same sample size. Note that the corresponding labels are known, and the hidden representations of the two views are what we want to learn.

The DCAN attempts to learn two nonlinear transforms that respectively project the samples from the two views into one discriminant common space, in which the local neighborhood relationships as well as class separability are well preserved for each view. The auto-encoder-like structure stands out in preserving local consistency, and the denoising form enhances the robustness of the learnt representations. However, discrimination is not yet taken into consideration. Therefore, we modify the denoising auto-encoder by adding a maximum margin criterion consisting of intra-class compactness and inter-class penalty. The best nonlinear transformation is then a trade-off between preserving local consistency and enhancing separability.

Just like the one in the denoising auto-encoder, the reconstructive error in Eq. (1) is formulated as follows:


where the expectation is taken over corrupted versions of the examples obtained from a corruption process. The weight matrix and the biases of the encoder and decoder respectively specify the two nonlinear transforms, and the reconstructions are calculated through the decoder process:


The hidden representations are obtained from the encoder, which is a mapping similar to the decoder:



where the nonlinear activation function is, for example, the point-wise hyperbolic tangent applied to linearly projected features,



in which is the gain parameter.
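The corrupt-encode-decode mechanism described above can be sketched in NumPy. The symbol names (`W`, `b_e`, `b_d`, gain `g`), the additive-Gaussian corruption, and the tied-weight decoder are our assumptions, since the paper's own equations were lost from this copy:

```python
import numpy as np

rng = np.random.default_rng(0)

def corrupt(x, noise=0.1):
    # Corruption process: additive Gaussian noise (masking noise is
    # another common choice for denoising auto-encoders).
    return x + noise * rng.standard_normal(x.shape)

def encode(x, W, b_e, g=1.0):
    # Encoder: h = tanh(g * (W x + b_e)); g is the gain parameter.
    return np.tanh(g * (W @ x + b_e))

def decode(h, W, b_d, g=1.0):
    # Decoder mirrors the encoder, here with tied weights W^T.
    return np.tanh(g * (W.T @ h + b_d))

def reconstruction_error(X, W, b_e, b_d, g=1.0):
    # Squared error between each clean input and the reconstruction of
    # its corrupted version, averaged over the sample set.
    err = 0.0
    for x in X:
        h = encode(corrupt(x), W, b_e, g)
        err += np.sum((decode(h, W, b_d, g) - x) ** 2)
    return err / len(X)
```

In training, the expectation over the corruption process is approximated by drawing fresh corrupted copies at each evaluation, as done here.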

Moreover, for the maximum margin criterion consisting of intra-class compactness and inter-class penalty, the constraint term in Eq. (1) realizes the coupling, since samples of the same class are treated alike no matter which view they come from.

Assume one set collects the sample pairs from the same class, and another the sample pairs from different classes. Note that counterparts from the two views are naturally added to the same-class set, since it is the class rather than the view that is considered.

Then, we characterize the compactness as follows,


where each hidden representation corresponds to an input sample from either view 1 or view 2, and the normalizer is the size of the same-class pair set.

Meanwhile, the goal of the inter-class separability is to push the adjacent samples from different classes far away, which can be formulated as follows,


where belongs to the nearest neighbors of with different class labels, and is the number of all pairs satisfying the condition.

The function of the criterion is illustrated in the middle part of Fig. 1. In the projected common space, the compactness term (shown by the red ellipse) pulls intra-class samples together, while the penalty term (shown by the black ellipse) pushes adjacent inter-class samples apart.
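The two terms can be sketched in NumPy as follows. The function name, the pooling of both views' hidden representations into one array (which is how the criterion couples the views by class rather than by view), and the use of squared Euclidean distances are our assumptions:

```python
import numpy as np

def margin_criterion(H, labels, k=3):
    # H: hidden representations (n, d) pooled from BOTH views;
    # labels: (n,) class labels, shared across views.
    n = len(H)
    d2 = np.sum((H[:, None, :] - H[None, :, :]) ** 2, axis=-1)

    # Intra-class compactness: mean squared distance over same-class pairs.
    same = labels[:, None] == labels[None, :]
    np.fill_diagonal(same, False)
    compact = d2[same].mean()

    # Inter-class penalty: mean squared distance from each sample to its
    # k nearest neighbors that carry a different label.
    penalties = []
    for i in range(n):
        diff = np.flatnonzero(labels != labels[i])
        near = diff[np.argsort(d2[i, diff])[:k]]
        penalties.extend(d2[i, near])

    # Minimizing (compactness - penalty) pulls same-class samples
    # together and pushes adjacent inter-class samples apart.
    return compact - np.mean(penalties)
```

A well-separated embedding drives this quantity strongly negative, which is the direction the constraint in Eq. (1) favors.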

Finally, by solving the optimization problem in Eq. (1), we learn a couple of nonlinear transforms that map the original samples from both views into a common space.

2.3 Stacking coupled auto-encoder

Through the training process above, we model the mapping between the original sample space and a preliminary discriminant subspace with the cross-view gap reduced, and build a hidden representation that trades off approximate preservation of local consistency against the distinctiveness of the projected data. But since real-world data is highly complicated, a single coupled layer might be insufficient to model vast and complex real scenes. We therefore stack multiple coupled network layers of the kind described in subsection 2.2. As the number of layers increases, the whole network can compactly represent a significantly larger set of transforms than shallow networks, gradually narrowing the gap while enhancing discriminative ability.

Training a deep network with coupled nonlinear transforms can be achieved by the canonical greedy layer-wise approach [12, 6]. More precisely, after training a single-layer coupled network, one computes new features via the encoder in Eq. (6) and feeds them into the next layer as input. In practice, we find that stacking multiple such layers gradually reduces the gap and improves recognition performance (see Fig. 1 and Section 3).
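The greedy stacking loop can be sketched as below. The per-layer trainer is a placeholder standing in for the solution of Eq. (1) (here random projections, purely so the stacking logic is runnable); the function names and signatures are our assumptions:

```python
import numpy as np

def train_coupled_layer(X1, X2, n_hidden):
    # Placeholder for single-layer training (Section 2.2): in the real
    # method the coupled objective is solved with L-BFGS (Section 2.4).
    # Here random tanh projections stand in for the learnt encoders.
    rng = np.random.default_rng(0)
    W1 = rng.standard_normal((n_hidden, X1.shape[1])) * 0.1
    W2 = rng.standard_normal((n_hidden, X2.shape[1])) * 0.1
    return (lambda X: np.tanh(X @ W1.T)), (lambda X: np.tanh(X @ W2.T))

def stack_dcan(X1, X2, layer_sizes):
    # Greedy layer-wise stacking: train one coupled layer, encode both
    # views with it, then feed the codes to the next layer as input.
    encoders = []
    for n_hidden in layer_sizes:
        f1, f2 = train_coupled_layer(X1, X2, n_hidden)
        encoders.append((f1, f2))
        X1, X2 = f1(X1), f2(X2)  # new features for the next layer
    return encoders, X1, X2
```

Each layer is trained only on the previous layer's codes, so the layers never need to be optimized jointly.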

2.4 Optimization

We adopt the Lagrangian multiplier method to solve the objective function Eq.(1) with the constraints Eq.(2) as follows:


where the first term is the reconstruction error, the second term is the maximum margin criterion, and the last term is the shrinkage constraint, called the Tikhonov regularizer in [11], which decreases the magnitude of the weights and thus helps prevent over-fitting. One balance parameter trades off local consistency against empirical separability, and the weight decay parameter is usually set to a small value, e.g., 1.0e-4.

To optimize the objective function (10), we use back-propagation to calculate the gradient and then employ the limited-memory BFGS (L-BFGS) method [24, 17], which is often used to solve unconstrained nonlinear optimization problems. L-BFGS is particularly suitable for problems with a large number of variables under moderate memory requirements. To apply L-BFGS, we need the gradients of the objective function. The objective function in (10) is differentiable with respect to the parameters, and we use back-propagation [18] to derive the derivatives of the overall cost function. In our setting, the objective converges as fast as described in [17].
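The cost-plus-gradient pattern that L-BFGS consumes can be illustrated with a toy stand-in for the DCAN objective: a linear reconstruction error plus the Tikhonov (weight-decay) term. The real cost would add the margin criterion and obtain its gradient by back-propagation; everything here besides that pattern is our simplification:

```python
import numpy as np
from scipy.optimize import minimize

def cost_and_grad(theta, X, lam=1e-4):
    # Toy objective: 0.5*||X W - X||^2 + 0.5*lam*||W||^2, whose optimum
    # is W close to the identity. Returns (cost, flattened gradient),
    # the interface L-BFGS expects when jac=True.
    W = theta.reshape(X.shape[1], X.shape[1])
    R = X @ W - X
    cost = 0.5 * np.sum(R ** 2) + 0.5 * lam * np.sum(W ** 2)
    grad = X.T @ R + lam * W
    return cost, grad.ravel()

X = np.random.default_rng(0).standard_normal((50, 5))
theta0 = np.zeros(25)
res = minimize(cost_and_grad, theta0, args=(X,), jac=True,
               method="L-BFGS-B")
```

Packing all parameters into one flat vector and returning the analytic gradient alongside the cost is exactly how a back-propagated DCAN gradient would be fed to L-BFGS.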

3 Experiments

In this section, the proposed DCAN is evaluated on two datasets, Multi-PIE [9] and CUHK Face Sketch FERET (CUFSF) [34, 31].

3.1 Databases

Multi-PIE dataset [9] is employed to evaluate face recognition across pose. A subset covering the 337 subjects in 7 poses, 3 expressions (Neutral, Smile, Disgust), and no-flash illumination from 4 sessions is selected to validate our method. We randomly choose 4 images for each pose of each subject, then randomly partition the data into two parts: a training set with 231 subjects and a testing set with the remaining subjects.

CUHK Face Sketch FERET (CUFSF) dataset [34, 31] contains two types of face images: photo and sketch. In total, 1,194 images (one per subject) were collected, with lighting variations, from the FERET dataset [25]. For each subject, a sketch is drawn with shape exaggeration. Following the configuration of [15], we use the first 700 subjects as training data and the rest as testing data.

3.2 Settings

All images from Multi-PIE and CUFSF are cropped to 64×80 pixels without any preprocessing. We compare the proposed DCAN method with several baseline and state-of-the-art methods, including CCA [14], Kernel CCA [1], Deep CCA [3], FDA [4], CDFE [22], CSR [20], PLS [26], and MvDA [15]. The first seven are pairwise methods for cross-view classification. MvDA jointly learns all transforms when multiple views are available, and has achieved state-of-the-art results in its report [15].

Principal Component Analysis (PCA) [4] is used for dimension reduction. In our experiments, we set the default dimensionality to 100, preserving most of the energy, except for Deep CCA, PLS, CSR and CDFE, where the dimensionality is tuned in [50, 1000] for the best performance. For all these methods, we report the best performance obtained by tuning the related parameters according to their papers. For Kernel CCA, we experiment with the Gaussian and polynomial kernels and adjust their parameters for the best performance. For Deep CCA [3], we strictly follow the published algorithm and tune all possible parameters, but its performance remains inferior to CCA. One possible reason is that Deep CCA only considers correlations on the training data (as reported in the paper), so the learnt model overfits the training data, leading to poor generalization on the testing set. Besides, the two CDFE parameters are traversed in [0.2, 2] and [0.0001, 1] respectively, the two CSR parameters are searched in [0.001, 1], and the reduced dimensionality is tuned for CCA, PLS, FDA and MvDA.

As for our proposed DCAN, the performance on the CUFSF database under varied parameter values is shown in Fig. 3, and the following experiments use the settings chosen there. With increasing layers, the number of hidden neurons is gradually reduced layer by layer.

Method Accuracy
CCA[14] 0.698
KernelCCA[10] 0.840
DeepCCA[3] 0.599
FDA[4] 0.814
CDFE[22] 0.773
CSR[20] 0.580
PLS[26] 0.574
MvDA[15] 0.867
DCAN-1 0.830
DCAN-2 0.877
DCAN-3 0.884
DCAN-4 0.879
Table 1: Evaluation on Multi-PIE database in terms of mean accuracy. DCAN-k denotes a stacked k-layer network.

1.000 0.816 0.588 0.473 0.473 0.515 0.511
0.816 1.000 0.858 0.611 0.664 0.553 0.553
0.588 0.858 1.000 0.894 0.807 0.602 0.447
0.473 0.611 0.894 1.000 0.909 0.604 0.484
0.473 0.664 0.807 0.909 1.000 0.874 0.602
0.515 0.553 0.602 0.604 0.874 1.000 0.768
0.511 0.553 0.447 0.484 0.602 0.768 1.000
(a) CCA,
1.000 0.878 0.810 0.756 0.706 0.726 0.737
0.878 1.000 0.892 0.858 0.808 0.801 0.757
0.810 0.892 1.000 0.911 0.880 0.861 0.765
0.756 0.858 0.911 1.000 0.938 0.759 0.759
0.706 0.808 0.880 0.938 1.000 0.922 0.845
0.726 0.801 0.861 0.759 0.922 1.000 0.912
0.737 0.757 0.765 0.759 0.845 0.912 1.000
(b) KernelCCA,
1.000 0.854 0.598 0.425 0.473 0.522 0.523
0.854 1.000 0.844 0.578 0.676 0.576 0.566
0.598 0.844 1.000 0.806 0.807 0.602 0.424
0.425 0.578 0.806 1.000 0.911 0.599 0.444
0.473 0.676 0.807 0.911 1.000 0.866 0.624
0.522 0.576 0.602 0.599 0.866 1.000 0.756
0.523 0.566 0.424 0.444 0.624 0.756 1.000
(c) DeepCCA,
1.000 0.847 0.754 0.686 0.573 0.610 0.664
0.847 1.000 0.911 0.847 0.807 0.766 0.635
0.754 0.911 1.000 0.925 0.896 0.821 0.602
0.686 0.847 0.925 1.000 0.964 0.872 0.684
0.573 0.807 0.896 0.964 1.000 0.929 0.768
0.610 0.766 0.821 0.872 0.929 1.000 0.878
0.664 0.635 0.602 0.684 0.768 0.878 1.000
(d) FDA,
1.000 0.854 0.714 0.595 0.557 0.633 0.608
0.854 1.000 0.867 0.746 0.688 0.697 0.606
0.714 0.867 1.000 0.887 0.808 0.704 0.579
0.595 0.746 0.887 1.000 0.916 0.819 0.651
0.557 0.688 0.808 0.916 1.000 0.912 0.754
0.633 0.697 0.704 0.819 0.912 1.000 0.850
0.608 0.606 0.579 0.651 0.754 0.850 1.000
(e) CDFE,
1.000 0.914 0.854 0.763 0.710 0.770 0.759
0.914 1.000 0.947 0.858 0.812 0.861 0.766
0.854 0.947 1.000 0.923 0.880 0.894 0.775
0.763 0.858 0.923 1.000 0.938 0.900 0.750
0.710 0.812 0.880 0.938 1.000 0.923 0.807
0.770 0.861 0.894 0.900 0.923 1.000 0.934
0.759 0.766 0.775 0.750 0.807 0.934 1.000
(f) MvDA,
1.000 0.872 0.819 0.730 0.655 0.708 0.686
0.856 1.000 0.881 0.825 0.754 0.737 0.650
0.807 0.874 1.000 0.869 0.865 0.781 0.681
0.757 0.854 0.896 1.000 0.938 0.858 0.790
0.688 0.777 0.854 0.916 1.000 0.900 0.823
0.708 0.735 0.788 0.834 0.918 1.000 0.916
0.719 0.715 0.697 0.752 0.832 0.909 1.000
(g) DCAN-1,
1.000 0.905 0.876 0.783 0.714 0.779 0.796
0.927 1.000 0.954 0.896 0.850 0.825 0.730
0.867 0.929 1.000 0.905 0.905 0.867 0.757
0.832 0.876 0.925 1.000 0.958 0.896 0.808
0.765 0.865 0.907 0.951 1.000 0.929 0.874
0.779 0.832 0.870 0.916 0.945 1.000 0.949
0.794 0.777 0.785 0.812 0.876 0.938 1.000
(h) DCAN-3,
Table 2: Results of CCA, FDA [4], CDFE [22], MvDA [15] and DCAN on MultiPIE dataset in terms of rank-1 recognition rate. DCAN-k means a stacked k-layer network. Due to space limitation, the results of other methods cannot be reported here, but their mean accuracies are shown in Table 1.

3.3 Face Recognition across Pose

First, to explicitly illustrate the learnt mapping, we conduct an experiment on the Multi-PIE dataset by projecting the learnt common features into a 2-D space with Principal Component Analysis (PCA). As shown in Fig. 2, the classical method CCA can only roughly align the data in the principal directions, and the state-of-the-art method MvDA [15] attempts to merge the two types of data but seems to fail. Thus, we argue that linear transforms are too rigid to convert data from two views into an ideal common space. The three diagrams below show that DCAN gradually separates samples from different classes as layers increase, just as described in the above analysis.

Figure 2: After learning common features by the cross-view methods, we project the features into a 2-D space using the two principal components of PCA. The depicted samples are randomly chosen from the Multi-PIE [9] dataset. The two marker types denote points from the two views respectively, and different colors belong to different classes. DCAN-k is our proposed method with a stacked k-layer neural network.

Next, we compare our method with several state-of-the-art methods on the cross-view face recognition task on the Multi-PIE data set. Since the images are acquired over seven poses, comparison experiments need to be conducted for every pose pair. The detailed results are shown in Table 2, where the two poses serve as gallery and probe set for each other and the rank-1 recognition rate is reported. Further, the mean accuracy over all pairwise results for each method is reported in Table 1.

From Table 1, we find that the supervised methods except CSR are significantly superior to CCA owing to the use of label information, and the nonlinear methods except Deep CCA are significantly superior to the linear methods owing to the use of nonlinear transforms. Compared with FDA, the proposed DCAN with only a one-layer network performs better, with a 1.6% improvement. With increasing layers, the accuracy of DCAN peaks at three stacked layers. The degradation of the four-layer DCAN is mainly an effect of reduced dimensionality, where 10 dimensions are cut from the layer above. Compared with the two-view based methods, the proposed three-layer DCAN improves performance greatly (88.4% vs. 81.4%). Besides, MvDA also achieves a considerably good performance by using samples from all poses. It is unfair to compare the two-view based methods (including DCAN) with MvDA, because the latter implicitly uses five additional views beyond the two currently compared; even so, our method performs better than MvDA, 88.4% vs. 86.7%. As observed in Table 2, the three-layer DCAN achieves a large improvement over CCA, FDA and CDFE for all cross-view cases, and over MvDA for most cross-view cases.

3.4 Photo-Sketch Recognition

Method Photo-Sketch Sketch-Photo
CCA[14] 0.387 0.475
KernelCCA[10] 0.466 0.570
DeepCCA[3] 0.364 0.434
CDFE[22] 0.456 0.476
CSR[20] 0.502 0.590
PLS[26] 0.486 0.510
FDA[4] 0.468 0.534
MvDA[15] 0.534 0.555
DCAN-1 0.535 0.555
DCAN-2 0.603 0.613
DCAN-3 0.601 0.652
Table 3: Evaluation on CUFSF database in terms of mean accuracy. DCAN-k denotes a stacked k-layer network.
Figure 3: The performance of our proposed DCAN with varied parameter values. The sketch and photo images in CUFSF [34, 31] are used as the gallery and probe sets respectively. (a) One parameter varied with the other fixed. (b) The other parameter varied with the first fixed.

Photo-sketch recognition is conducted on the CUFSF dataset, whose samples come from only two views, photo and sketch. The comparison results are provided in Table 3. As shown in the table, since only two views are available in this case, MvDA degrades to performance comparable with the previous two-view based methods. Our proposed DCAN with three layers achieves more than a 6% further improvement, which again indicates that DCAN benefits from its nonlinear, multi-layer structure.

Discussion and analysis: The above experiments demonstrate that our method works well even with a small sample size. The reasons are threefold:

  1. The maximum margin criterion makes the learnt mapping more discriminative, which is a straightforward strategy in the supervised classification task.

  2. The auto-encoder approximately preserves local neighborhood structure.
    Alain et al. [2] theoretically prove that the representation learnt by an auto-encoder can recover local properties of the data from the manifold point of view. To further validate this, we use the first 700 photo images from the CUFSF database to perform nonlinear self-reconstruction with an auto-encoder. In the hidden representations, the 1, 2, 3, 4 and 5 nearest neighbors are preserved with probabilities of 99.43%, 99.00%, 98.57%, 98.00% and 97.42% respectively. Thus, the use of the auto-encoder intrinsically reduces the complexity of the discriminant model, which in turn gives the learnt model better generalization on the testing set.

  3. The deep structure yields a gradual model, which makes the learnt transform more robust. With only one layer, the model cannot represent complex data very well; but as the layers go deeper, the coupled networks learn much more flexible transforms and can hence handle more complex data.
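The neighbor-preservation check in point 2 can be sketched as follows. The function name and the use of exact pairwise Euclidean distances are our assumptions; the idea is simply to count how many of each sample's nearest neighbors in the input space remain nearest neighbors in the hidden space:

```python
import numpy as np

def neighbor_preservation(X, H, k):
    # Fraction of each sample's k nearest neighbors in input space X
    # that also appear among its k nearest neighbors in hidden space H.
    def knn(Z):
        d2 = np.sum((Z[:, None, :] - Z[None, :, :]) ** 2, axis=-1)
        np.fill_diagonal(d2, np.inf)  # exclude the sample itself
        return np.argsort(d2, axis=1)[:, :k]
    nn_x, nn_h = knn(X), knn(H)
    hits = [len(set(a) & set(b)) for a, b in zip(nn_x, nn_h)]
    return sum(hits) / (k * len(X))
```

Applied to the auto-encoder's hidden codes of the 700 CUFSF photos, this statistic is what yields the preservation rates quoted above.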

4 Conclusion

In this paper, we propose a deep learning method, the Deeply Coupled Auto-encoder Networks (DCAN), which gradually generates a coupled discriminant common representation for cross-view object classification. In each layer we take both the local consistency and the discrimination of the projected data into consideration. By stacking multiple such coupled network layers, DCAN gradually improves the learnt shared features in the common space. Moreover, experiments on cross-view classification tasks demonstrate the superiority of our method over other state-of-the-art methods.


  • [1] S. Akaho. A kernel method for canonical correlation analysis, 2006.
  • [2] G. Alain and Y. Bengio. What regularized auto-encoders learn from the data generating distribution. arXiv preprint arXiv:1211.4246, 2012.
  • [3] G. Andrew, R. Arora, J. Bilmes, and K. Livescu. Deep canonical correlation analysis.
  • [4] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman. Eigenfaces vs. fisherfaces: Recognition using class specific linear projection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7):711–720, 1997.
  • [5] Y. Bengio, A. Courville, and P. Vincent. Representation learning: A review and new perspectives. 2013.
  • [6] Y. Bengio, P. Lamblin, D. Popovici, and H. Larochelle. Greedy layer-wise training of deep networks, 2007.
  • [7] M. Chen, Z. Xu, K. Weinberger, and F. Sha. Marginalized denoising autoencoders for domain adaptation, 2012.

  • [8] N. Chen, J. Zhu, and E. P. Xing. Predictive subspace learning for multi-view data: a large margin approach, 2010.
  • [9] R. Gross, I. Matthews, J. Cohn, T. Kanade, and S. Baker. The cmu multi-pose, illumination, and expression (multi-pie) face database, 2007.
  • [10] D. R. Hardoon, S. Szedmak, and J. Shawe-Taylor. Canonical correlation analysis: An overview with application to learning methods. Neural Computation, 16(12):2639–2664, 2004.
  • [11] T. Hastie, R. Tibshirani, and J. J. H. Friedman. The elements of statistical learning, 2001.
  • [12] G. E. Hinton, S. Osindero, and Y.-W. Teh. A fast learning algorithm for deep belief nets. Neural computation, 18(7):1527–1554, 2006.
  • [13] G. E. Hinton and R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, 2006.
  • [14] H. Hotelling. Relations between two sets of variates. Biometrika, 28(3/4):321–377, 1936.
  • [15] M. Kan, S. Shan, H. Zhang, S. Lao, and X. Chen. Multi-view discriminant analysis. pages 808–821, 2012.
  • [16] T.-K. Kim, J. Kittler, and R. Cipolla. Discriminative learning and recognition of image set classes using canonical correlations. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(6):1005–1018, 2007.
  • [17] Q. V. Le, J. Ngiam, A. Coates, A. Lahiri, B. Prochnow, and A. Y. Ng. On optimization methods for deep learning, 2011.
  • [18] Y. LeCun, L. Bottou, G. B. Orr, and K.-R. Müller. Efficient backprop. In Neural networks: Tricks of the trade, pages 9–50. Springer, 1998.
  • [19] H. Lee, A. Battle, R. Raina, and A. Y. Ng. Efficient sparse coding algorithms, 2007.
  • [20] Z. Lei and S. Z. Li. Coupled spectral regression for matching heterogeneous faces, 2009.
  • [21] H. Li, T. Jiang, and K. Zhang. Efficient and robust feature extraction by maximum margin criterion. Neural Networks, IEEE Transactions on, 17(1):157–165, 2006.
  • [22] D. Lin and X. Tang. Inter-modality face recognition. pages 13–26, 2006.
  • [23] J. Ngiam, A. Khosla, M. Kim, J. Nam, H. Lee, and A. Y. Ng. Multimodal deep learning, 2011.
  • [24] J. Nocedal and S. J. Wright. Numerical optimization, 2006.
  • [25] P. J. Phillips, H. Wechsler, J. Huang, and P. J. Rauss. The feret database and evaluation procedure for face-recognition algorithms. Image and vision computing, 16(5):295–306, 1998.
  • [26] A. Sharma and D. W. Jacobs. Bypassing synthesis: Pls for face recognition with pose, low-resolution and sketch, 2011.
  • [27] A. Sharma, A. Kumar, H. Daume, and D. W. Jacobs. Generalized multiview analysis: A discriminative latent space, 2012.
  • [28] P. Vincent, H. Larochelle, Y. Bengio, and P.-A. Manzagol. Extracting and composing robust features with denoising autoencoders, 2008.
  • [29] F. Wang and C. Zhang. Feature extraction by maximizing the average neighborhood margin. In Computer Vision and Pattern Recognition (CVPR), pages 1–8. IEEE, 2007.
  • [30] S. Wang, L. Zhang, Y. Liang, and Q. Pan. Semi-coupled dictionary learning with applications to image super-resolution and photo-sketch synthesis, 2012.

  • [31] X. Wang and X. Tang. Face photo-sketch synthesis and recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(11):1955–1967, 2009.
  • [32] J. Xie, L. Xu, and E. Chen. Image denoising and inpainting with deep neural networks, 2012.
  • [33] S. Yan, D. Xu, B. Zhang, H.-J. Zhang, Q. Yang, and S. Lin. Graph embedding and extensions: a general framework for dimensionality reduction. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(1):40–51, 2007.
  • [34] W. Zhang, X. Wang, and X. Tang. Coupled information-theoretic encoding for face photo-sketch recognition, 2011.
  • [35] B. Zhao, F. Wang, and C. Zhang. Maximum margin embedding. In Data Mining, 2008. ICDM’08. Eighth IEEE International Conference on, pages 1127–1132. IEEE, 2008.