1 Introduction
Real-world objects often have different views that are endowed with the same semantics. For example, face images can be captured in different poses yet reveal the identity of the same person; images of one face can also come in different modalities, such as pictures under different lighting conditions or poses, or even sketches drawn by artists. Many computer vision applications, such as image retrieval, require comparing two types of heterogeneous images, which may come from different views or even different sensors. Since the spanned feature spaces are quite different, it is very difficult to classify these images across views directly. To decrease the discrepancy across views, most previous works endeavor to learn view-specific linear transforms that project cross-view samples into a common latent space, and then employ these newly generated features for classification.
Although many approaches have been proposed to learn view-specific projections, they can be roughly divided according to whether supervised information is used. Unsupervised methods such as Canonical Correlation Analysis (CCA) [14] and Partial Least Squares (PLS) [26] have been employed for cross-view recognition. Both use two linear mappings to project samples into a common space where their correlation is maximized, while PLS additionally considers the variations, rather than only the correlation, in the target space. Besides, using mutual information, a Coupled Information-Theoretic Encoding (CITE) method [34] was developed to narrow the inter-view gap for the specific photo-sketch recognition task, and in [30] a semi-coupled dictionary is used to bridge two views. All the methods above aim to reduce the discrepancy between two views, but none of them explicitly takes label information into account. With label information available, many methods were further developed to learn a discriminant common space. For instance, Discriminative Canonical Correlation Analysis (DCCA) [16] was proposed as an extension of CCA, and in [22] two linear projections are learnt simultaneously, with an additional local smoothness constraint, for Common Discriminant Feature Extraction (CDFE). Other such methods include the large margin approach [8] and Coupled Spectral Regression (CSR) [20]. Recently, multi-view analysis [27, 15] has been developed to jointly learn multiple view-specific transforms when multiple views (usually more than two) are available.

Although the above methods have been extensively applied to the cross-view problem with encouraging performance, they all employ linear transforms to capture the features shared by samples from two views. These linear discriminant analysis methods usually rely on the assumption that the data of each class follows a Gaussian distribution, whereas real-world data usually has a much more complex distribution [33]. This suggests that linear transforms are insufficient to extract the common features of cross-view images, so it is natural to consider learning nonlinear features.

A recent topic of interest in nonlinear learning is deep learning, which learns nonlinear representations hierarchically via deep structures and has been applied successfully to many computer vision problems. Classical deep learning methods often stack or compose multiple basic building blocks to yield a deeper structure; see [5] for a recent review of deep learning algorithms. Many such building blocks have been proposed, including sparse coding [19], the restricted Boltzmann machine (RBM) [12], and the autoencoder [13, 6]. In particular, the (stacked) autoencoder has shown its effectiveness in image denoising [32], domain adaptation [7], audio-visual speech classification [23], etc.

The kernel method, such as Kernel Canonical Correlation Analysis (Kernel CCA) [1], is another widely used approach to learning nonlinear representations. Compared with kernel methods, deep learning is more flexible and time-saving, because the transform is learned rather than fixed, and the cost of training and inference is not bound to the size of the training set.
Inspired by the deep learning works above, we intend to solve the cross-view classification task via deep networks. It is natural to build a single deep neural network on samples from both views, but such a network cannot handle complex data from totally different modalities and may suffer from inadequate representation capacity. Another option is to learn two separate deep neural networks, one per view; however, two independent networks project samples from the different views into different spaces, which makes comparison infeasible. Hence, building two neural networks coupled with each other is a better solution.
In this work, we propose a Deeply Coupled Autoencoder Networks (DCAN) method that learns common representations for cross-view classification by building two deeply coupled neural networks, one for each view. We build the DCAN by stacking multiple discriminative coupled autoencoders, i.e., denoising autoencoders augmented with a maximum margin criterion. The discriminative coupled autoencoder shares the corrupt-the-input, minimize-the-reconstruction-error mechanism of the denoising autoencoder proposed in [28], but is modified by adding a maximum margin criterion. This kind of criterion has been used in previous works such as [21, 29, 35]. Note that the counterparts from the two views are added into the maximum margin criterion simultaneously, since they both come from the same class, which naturally couples the corresponding layers of the two deep networks. A schematic illustration is given in Fig.1.
The proposed DCAN is related to Multimodal Autoencoders [23], Multimodal Restricted Boltzmann Machines, and Deep Canonical Correlation Analysis [3]. The first two methods learn a single network with one or more layers connected to both views and predict one view from the other, while Deep Canonical Correlation Analysis builds two deep networks, one per view, in which only the representations of the highest layer are constrained to be correlated. The key difference is that we learn two deep networks whose representations are coupled in every layer, which is of great benefit: the DCAN not only learns two separate deep encodings but also makes better use of the data from both views. Moreover, these differences allow our model to handle the recognition task even when data are impure and insufficient.
2 Deeply Coupled Autoencoder Networks
In this section, we first present the basic idea. The second part gives a detailed description of the discriminative coupled autoencoder. Then, we describe how to stack multiple layers to build a deep network. Finally, we briefly describe the optimization of the model.
2.1 Basic Idea
As shown in Fig.1, the Deeply Coupled Autoencoder Networks (DCAN) consist of two deep networks coupled with each other, one for each view. The structures of the two deep networks are depicted in the leftmost and rightmost parts of Fig.1, where circles denote the units in each layer (pixels of an input image in the input layer, hidden representations in higher layers), and arrows denote the full connections between adjacent layers. The middle part of Fig.1 illustrates how the whole network projects samples from different views into a common space and gradually enhances their separability as the number of layers increases.

The two deep networks are both built by stacking multiple similar coupled single-layer blocks, because a single coupled layer might be insufficient, and training stacked layers greedily has been proved effective in many previous works, such as [13, 6]. As the number of layers increases, the whole network can compactly represent a significantly larger set of transforms than shallow networks, and it gradually narrows the cross-view gap while enhancing discriminative capacity.
We use a discriminative coupled autoencoder trained with a maximum margin criterion as the single-layer component. Concretely, we inject additional noise into the inputs during training while enforcing the margin criterion, which makes the learnt mapping more stable as well as discriminant. Note that the maximum margin criterion also serves to couple the two corresponding layers. Formally, the discriminative coupled autoencoder can be written as follows:
$$\min_{f_x,\, f_y}\ J_r(X) + J_r(Y) \tag{1}$$
$$\text{s.t.}\quad J_m(H_X, H_Y) \le \tau \tag{2}$$
where $X$ and $Y$ denote the inputs from the two views, and $H_X$ and $H_Y$ denote the hidden representations of the two views respectively. $f_x$ and $f_y$ are the transforms we intend to learn; we denote the reconstruction error by $J_r$ and the maximum margin criterion by $J_m$, both described in detail in the next subsection. $\tau$ is the threshold of the maximum margin criterion.
2.2 Discriminative coupled autoencoder
In the cross-view problem, there are two types of heterogeneous samples. Without loss of generality, we denote the samples from one view as $X = \{x_i\}_{i=1}^{N_x}$ and those from the other view as $Y = \{y_j\}_{j=1}^{N_y}$, where $N_x$ and $N_y$ are the sample sizes. Note that the corresponding labels are known, and $H_X$ and $H_Y$ denote the hidden representations of the two views that we want to learn.
The DCAN attempts to learn two nonlinear transforms $f_x$ and $f_y$ that project the samples from the two views into one discriminant common space, in which the local neighborhood relationships as well as class separability should be well preserved for each view. The autoencoder-like structure stands out in preserving local consistency, and the denoising form enhances the robustness of the learnt representations; however, discrimination is not taken into consideration. Therefore, we modify the denoising autoencoder by adding a maximum margin criterion consisting of an intra-class compactness term and an inter-class penalty. The best nonlinear transformation is a trade-off between preserving local consistency and enhancing separability.
Just like in the denoising autoencoder, the reconstruction error $J_r$ in Eq.(1) is formulated as follows:
$$J_r(X) = \sum_{i} \mathbb{E}_{\tilde{x}_i \sim q(\tilde{x}_i \mid x_i)} \big[\, \| x_i - \hat{x}_i \|_2^2 \,\big] \tag{3}$$
$$J_r(Y) = \sum_{j} \mathbb{E}_{\tilde{y}_j \sim q(\tilde{y}_j \mid y_j)} \big[\, \| y_j - \hat{y}_j \|_2^2 \,\big] \tag{4}$$
where $\mathbb{E}$ calculates the expectation over corrupted versions $\tilde{x}$ of the examples $x$, obtained from a corruption process $q(\tilde{x} \mid x)$. $\theta_x = \{W_x, b_x, c_x\}$ specifies the nonlinear transform $f_x$ (and $\theta_y = \{W_y, b_y, c_y\}$ specifies $f_y$), where $W_x$ is the weight matrix, and $b_x$ and $c_x$ are the biases of the encoder and decoder respectively. The reconstruction $\hat{x}$ is calculated through the decoder process:
$$\hat{x} = s(W_x^{\top} h_x + c_x) \tag{5}$$
The hidden representation is obtained from the encoder, a mapping similar in form to the decoder:
$$h_x = s(W_x \tilde{x} + b_x) \tag{6}$$
where $s(\cdot)$ is the nonlinear activation function, such as the pointwise hyperbolic tangent operation on the linearly projected features, i.e.,
$$s(z) = \tanh(\gamma z) \tag{7}$$
in which $\gamma$ is the gain parameter.
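For concreteness, the corruption, encoding (Eqs.(6)-(7)) and decoding (Eq.(5)) steps for one view can be sketched in NumPy as below. The additive-Gaussian corruption process, the tied decoder weights $W^\top$, and the layer sizes are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def corrupt(x, noise_std=0.1):
    # corruption process q(x_tilde | x): here, additive Gaussian noise (an assumption)
    return x + noise_std * rng.standard_normal(x.shape)

def encode(x_tilde, W, b_e, gamma=1.0):
    # Eqs.(6)-(7): h = tanh(gamma * (W x_tilde + b_e))
    return np.tanh(gamma * (W @ x_tilde + b_e))

def decode(h, W, b_d, gamma=1.0):
    # Eq.(5): reconstruction through tied weights W^T (an assumption)
    return np.tanh(gamma * (W.T @ h + b_d))

# one sample from one view: 64*80 = 5120 pixels, 100 hidden units
d, k = 5120, 100
x = rng.standard_normal(d)
W = 0.01 * rng.standard_normal((k, d))
b_e, b_d = np.zeros(k), np.zeros(d)

x_tilde = corrupt(x)
h = encode(x_tilde, W, b_e)
x_hat = decode(h, W, b_d)
recon_err = np.sum((x - x_hat) ** 2)   # one term of J_r in Eq.(3)
```

In a full implementation the same pipeline runs in parallel for the second view with its own parameters $\theta_y$.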
Moreover, for the maximum margin criterion consisting of intra-class compactness and inter-class penalty, the constraint term $J_m$ in Eq.(2) is used to realize the coupling, since samples of the same class are treated alike no matter which view they come from.
Assume $S$ is the set of sample pairs from the same class, and $D$ is the set of sample pairs from different classes. Note that counterparts from the two views are naturally added into $S$, since it is the class rather than the view that is considered.
Then, we characterize the compactness as follows,
$$J_c = \frac{1}{|S|} \sum_{(h_i,\, h_j) \in S} \| h_i - h_j \|_2^2 \tag{8}$$
where $h_i$ denotes the hidden representation of an input that may come from either view 1 or view 2, and $|S|$ is the size of $S$.
Meanwhile, the goal of the inter-class separability term is to push adjacent samples from different classes far apart, which can be formulated as follows:
$$J_p = \frac{1}{|D|} \sum_{(h_i,\, h_j) \in D} \| h_i - h_j \|_2^2 \tag{9}$$
where $h_j$ belongs to the nearest neighbors of $h_i$ with different class labels, and $|D|$ is the number of all pairs satisfying this condition.
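A minimal NumPy sketch of the two criteria follows. Pooling both views' hidden representations into one matrix `H` and the choice of `k` nearest impostors are illustrative assumptions:

```python
import numpy as np

def margin_terms(H, labels, k=1):
    """Compactness J_c (Eq.(8)) over same-class pairs, and penalty J_p
    (Eq.(9)) over each sample's k nearest different-class neighbours.
    H: (n, d) hidden representations pooled from both views."""
    n = H.shape[0]
    # pairwise squared Euclidean distances between hidden representations
    d2 = ((H[:, None, :] - H[None, :, :]) ** 2).sum(-1)
    same = (labels[:, None] == labels[None, :]) & ~np.eye(n, dtype=bool)
    jc = d2[same].mean()                       # average over pairs in S
    jp_terms = []
    for i in range(n):                         # pairs in D: nearest impostors
        diff = np.where(labels != labels[i])[0]
        nearest = diff[np.argsort(d2[i, diff])[:k]]
        jp_terms.extend(d2[i, nearest])
    jp = float(np.mean(jp_terms))
    return jc, jp

# two tight same-class clusters far apart: J_c should be small, J_p large
H = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
labels = np.array([0, 0, 1, 1])
jc, jp = margin_terms(H, labels)
```

A good common space drives `jc` down while keeping `jp` up, which is exactly what the margin constraint in Eq.(2) asks for.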
The effect of the maximum margin criterion $J_m = J_c - J_p$ is illustrated in the middle part of Fig.1. In the projected common space, the compactness term, shown by the red ellipse, pulls intra-class samples together, while the penalty term, shown by the black ellipse, pushes adjacent inter-class samples apart.
Finally, by solving the optimization problem in Eqs.(1)-(2), we learn a pair of nonlinear transforms that map the original samples from both views into a common space.
2.3 Stacking coupled autoencoder
Through the training process above, we model the map between the original sample space and a preliminary discriminant subspace with the cross-view gap reduced, building a hidden representation that trades off approximate preservation of local consistency against the separability of the projected data. But since real-world data are highly complicated, a single coupled layer might be insufficient to model vast and complex real scenes, so we stack multiple coupled layers of the kind described in subsection 2.2. As the number of layers increases, the whole network can compactly represent a significantly larger set of transforms than shallow networks, and it gradually narrows the gap while enhancing discriminative ability.
Training a deep network with coupled nonlinear transforms can be achieved by the canonical greedy layer-wise approach [12, 6]. More precisely, after training a single-layer coupled network, one computes the new features via the encoder in Eq.(6) and feeds them into the next layer as its input. In practice, we find that stacking multiple such layers gradually reduces the gap and improves recognition performance (see Fig.1 and Section 3).
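The greedy layer-wise data flow can be sketched as below. The `train_coupled_layer` body is a stand-in (random weights) for minimizing Eq.(10); the batch sizes and layer widths are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(X, W, b, gamma=1.0):
    # Eq.(6) applied to a batch (rows are samples)
    return np.tanh(gamma * (X @ W.T + b))

def train_coupled_layer(X1, X2, n_hidden):
    """Stand-in for training one discriminative coupled autoencoder.
    A real implementation would minimize Eq.(10); random weights here
    merely illustrate the data flow of greedy layer-wise stacking."""
    W1 = 0.01 * rng.standard_normal((n_hidden, X1.shape[1]))
    W2 = 0.01 * rng.standard_normal((n_hidden, X2.shape[1]))
    return (W1, np.zeros(n_hidden)), (W2, np.zeros(n_hidden))

# greedy layer-wise stacking: each layer's encodings feed the next layer
X1 = rng.standard_normal((8, 5120))   # view 1 (e.g. photos)
X2 = rng.standard_normal((8, 5120))   # view 2 (e.g. sketches)
layer_sizes = [100, 90, 80]           # hidden units shrink with depth
for n_hidden in layer_sizes:
    (W1, b1), (W2, b2) = train_coupled_layer(X1, X2, n_hidden)
    X1, X2 = encode(X1, W1, b1), encode(X2, W2, b2)
# X1, X2 now hold the top-layer common-space representations
```

Each iteration trains one coupled layer and then re-encodes both views, so every layer sees as input the representation learnt by the layer below it.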
2.4 Optimization
We adopt the Lagrange multiplier method to solve the objective function Eq.(1) under the constraint Eq.(2), as follows:
$$\mathcal{L} = J_r(X) + J_r(Y) + \lambda \,(J_c - J_p) + \frac{\eta}{2}\left( \|W_x\|_F^2 + \|W_y\|_F^2 \right) \tag{10}$$
where the first two terms are the reconstruction errors, the third term is the maximum margin criterion, and the last term is the shrinkage constraint, called the Tikhonov regularizer in [11], which decreases the magnitude of the weights and helps prevent overfitting. $\lambda$ is the balance parameter between local consistency and empirical separability, and $\eta$ is the weight decay parameter, usually set to a small value, e.g., $10^{-4}$.
To optimize the objective function (10), we use backpropagation to compute the gradients and then employ the limited-memory BFGS (L-BFGS) method [24, 17], which is widely used for unconstrained nonlinear optimization. L-BFGS is particularly suitable for problems with a large number of variables under moderate memory requirements. To apply L-BFGS, we need the gradients of the objective function; the objective in (10) is differentiable with respect to the parameters $\theta_x, \theta_y$, and we use backpropagation [18] to derive the derivatives of the overall cost. In our setting, we find that the objective converges as fast as described in [17].
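As a self-contained illustration of this optimization scheme, the snippet below minimizes a simplified single-view instance of Eq.(10) (reconstruction error plus Tikhonov weight decay; the margin term and input corruption are omitted for brevity) with SciPy's L-BFGS implementation. All sizes are toy assumptions, and a full implementation would pass the backpropagated gradient via the `jac` argument instead of relying on finite differences:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
d, k, n = 20, 5, 30          # input dim, hidden units, samples (toy sizes)
X = rng.standard_normal((n, d))
eta = 1e-4                   # weight decay parameter, as in Eq.(10)

def objective(w_flat):
    # reconstruction error through a tied-weight tanh autoencoder,
    # plus the Tikhonov regularizer (eta/2)*||W||_F^2
    W = w_flat.reshape(k, d)
    H = np.tanh(X @ W.T)             # encoder, cf. Eq.(6)
    X_hat = np.tanh(H @ W)           # decoder, cf. Eq.(5)
    return np.sum((X - X_hat) ** 2) + 0.5 * eta * np.sum(W ** 2)

w0 = 0.01 * rng.standard_normal(k * d)
res = minimize(objective, w0, method="L-BFGS-B")  # finite-difference gradients
```

The returned loss `res.fun` is never worse than the starting loss `objective(w0)`, since L-BFGS-B reports the best point it has evaluated.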
3 Experiments
In this section, the proposed DCAN is evaluated on two datasets, MultiPIE [9] and CUHK Face Sketch FERET (CUFSF) [34, 31].
3.1 Databases
MultiPIE dataset [9] is employed to evaluate face recognition across poses. A subset of the 337 subjects in 7 poses, 3 expressions (Neutral, Smile, Disgust), under no-flash illumination, from 4 sessions is selected to validate our method. We randomly choose 4 images for each pose of each subject, then randomly partition the data into two parts: a training set with 231 subjects and a testing set with the remaining subjects.

CUHK Face Sketch FERET (CUFSF) dataset [34, 31] contains two types of face images: photos and sketches. In total 1,194 images (one image per subject) were collected, with lighting variations, from the FERET dataset [25]. For each subject, a sketch was drawn with shape exaggeration. Following the configuration of [15], we use the first 700 subjects as training data and the remaining subjects as testing data.
3.2 Settings
All images from MultiPIE and CUFSF are cropped to 64×80 pixels without any other preprocessing. We compare the proposed DCAN with several baselines and state-of-the-art methods, including CCA [14], Kernel CCA [1], Deep CCA [3], FDA [4], CDFE [22], CSR [20], PLS [26] and MvDA [15]. The first seven are pairwise methods for cross-view classification; MvDA jointly learns all transforms when multiple views are available, and has achieved state-of-the-art results [15].
Principal Component Analysis (PCA) [4] is used for dimension reduction. In our experiments, we set the default dimensionality to 100, which preserves most of the energy, except for Deep CCA, PLS, CSR and CDFE, where the dimensionality is tuned in [50, 1000] for the best performance. For all these methods, we report the best performance obtained by tuning the related parameters according to their papers. For Kernel CCA, we experiment with Gaussian and polynomial kernels and adjust the parameters for the best performance. For Deep CCA [3], we strictly follow their algorithm and tune all possible parameters, but the performance is inferior to CCA; one possible reason is that Deep CCA only considers the correlations on the training data (as reported in their paper), so the learnt model overfits the training data, which leads to poor generality on the testing set. Besides, the two parameters of CDFE are traversed in [0.2, 2] and [0.0001, 1] respectively, the parameters of CSR are searched in [0.001, 1], and the reduced dimensionality is tuned for CCA, PLS, FDA and MvDA. As for the proposed DCAN, its performance on the CUFSF database under varied parameters is shown in Fig.3, and the resulting settings are used in the following experiments. With increasing layers, the number of hidden neurons is gradually reduced by 10 per layer, i.e., 100, 90, 80, 70 for a four-layer network.

Method  Accuracy

CCA[14]  0.698 
KernelCCA[10]  0.840 
DeepCCA[3]  0.599 
FDA[4]  0.814 
CDFE[22]  0.773 
CSR[20]  0.580 
PLS[26]  0.574 
MvDA[15]  0.867 
DCAN1  0.830 
DCAN2  0.877 
DCAN3  0.884 
DCAN4  0.879 
Table 1: Evaluation on the MultiPIE database in terms of mean accuracy. DCANk denotes a stacked k-layer network.
3.3 Face Recognition across Pose
First, to explicitly illustrate the learnt mapping, we conduct an experiment on the MultiPIE dataset by projecting the learnt common features into a 2D space with Principal Component Analysis (PCA). As shown in Fig.2, the classical method CCA can only roughly align the data along the principal directions, and the state-of-the-art method MvDA [15] attempts to merge the two types of data but seems to fail. Thus, we argue that linear transforms are too rigid to convert data from two views into an ideal common space. The three diagrams below show that DCAN gradually separates samples from different classes as the number of layers increases, just as described in the above analysis.
Next, we compare our method with several state-of-the-art methods on the cross-view face recognition task on the MultiPIE data set. Since the images are acquired over seven poses, comparison experiments over all pairs of poses need to be conducted. The detailed results are shown in Table 2, where each pair of poses serves as gallery and probe set for each other and the rank-1 recognition rate is reported. Further, the mean accuracy over all pairwise results for each method is reported in Table 1.
From Table 1, we find that the supervised methods except CSR are significantly superior to CCA thanks to the use of label information, and that the nonlinear methods except Deep CCA are significantly superior to the linear methods thanks to the use of nonlinear transforms. Compared with FDA, the proposed DCAN with only a one-layer network already performs better, with a 1.6% improvement. With increasing layers, the accuracy of DCAN reaches its peak with three stacked layers; the degradation of the four-layer DCAN is mainly an effect of the reduced dimensionality, since 10 dimensions are cut off from the layer above. Obviously, compared with the two-view based methods, the proposed DCAN with three layers improves the performance greatly (88.4% vs. 81.4%). Besides, MvDA also achieves a considerably good performance by using samples from all poses. It is unfair to compare the two-view based methods (including DCAN) with MvDA, because the latter implicitly uses five additional views beyond the two being compared; even so, our method outperforms MvDA (88.4% vs. 86.7%). As observed in Table 2, the three-layer DCAN achieves a large improvement over CCA, FDA and CDFE for all cross-view cases, and over MvDA for most cross-view cases (see Tables 1 and 2).
3.4 PhotoSketch Recognition
Method  Photo→Sketch  Sketch→Photo

CCA[14]  0.387  0.475 
KernelCCA[10]  0.466  0.570 
DeepCCA[3]  0.364  0.434 
CDFE[22]  0.456  0.476 
CSR[20]  0.502  0.590 
PLS[26]  0.486  0.510 
FDA[4]  0.468  0.534 
MvDA[15]  0.534  0.555 
DCAN1  0.535  0.555 
DCAN2  0.603  0.613 
DCAN3  0.601  0.652

Table 3: Photo-sketch recognition results on the CUFSF dataset.
Photo-sketch recognition is conducted on the CUFSF dataset, where the samples come from only two views: photo and sketch. The comparison results are provided in Table 3. Since only two views are available in this case, MvDA degrades to a performance comparable with the previous two-view based methods. Our proposed DCAN with three stacked layers achieves even better results, with more than a 6% improvement, which further indicates that DCAN benefits from its nonlinear and multi-layer structure.
Discussion and analysis: The above experiments demonstrate that our method can work very well even with a small sample size. The reasons are three-fold:

The maximum margin criterion makes the learnt mapping more discriminative, which is a straightforward strategy in the supervised classification task.

Autoencoder approximately preserves the local neighborhood structures.
For this, Alain et al. [2] theoretically prove that the representation learnt by an autoencoder can recover local properties of the data from a manifold point of view. To further validate this, we use the first 700 photo images from the CUFSF database to perform nonlinear self-reconstruction with an autoencoder. With the hidden representations, we find that local neighborhoods with 1, 2, 3, 4 and 5 neighbors are preserved with probabilities of 99.43%, 99.00%, 98.57%, 98.00% and 97.42% respectively. Thus, the use of the autoencoder intrinsically reduces the complexity of the discriminant model, which gives the learnt model better generality on the testing set.

The deep structure generates a gradual model, which makes the learnt transform more robust. With only one layer, the model cannot represent complex data well; but as the layers go deeper, the coupled networks learn increasingly flexible transforms and can therefore handle more complex data.
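The neighborhood-preservation statistic quoted in the second point can be measured with a short routine like the one below. The paper does not specify its exact protocol, so this is only one plausible way to compute it, with synthetic data standing in for the CUFSF photos:

```python
import numpy as np

def knn_preservation(X, H, k):
    """Average fraction of each sample's k nearest neighbours in the
    input space X that remain among its k nearest neighbours in the
    hidden space H."""
    def knn(Z):
        d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
        np.fill_diagonal(d2, np.inf)          # exclude self-matches
        return np.argsort(d2, axis=1)[:, :k]
    nx, nh = knn(X), knn(H)
    overlap = [len(set(a) & set(b)) for a, b in zip(nx, nh)]
    return float(np.mean(overlap)) / k

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))
# a distance-preserving map (here: the identity) keeps every neighbourhood
rate = knn_preservation(X, X, k=3)
```

In the paper's setting, `H` would be the autoencoder's hidden representations of the photo images, and preservation rates near 1.0 support the local-consistency claim.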
4 Conclusion
In this paper, we propose a deep learning method, the Deeply Coupled Autoencoder Networks (DCAN), which gradually generates a coupled discriminant common representation for cross-view object classification. In each layer we take both the local consistency and the discrimination of the projected data into consideration. By stacking multiple such coupled network layers, DCAN gradually improves the learnt shared features in the common space. Moreover, experiments on cross-view classification tasks demonstrate the superiority of our method over other state-of-the-art methods.
References
 [1] S. Akaho. A kernel method for canonical correlation analysis, 2006.
 [2] G. Alain and Y. Bengio. What regularized autoencoders learn from the data generating distribution. arXiv preprint arXiv:1211.4246, 2012.
 [3] G. Andrew, R. Arora, J. Bilmes, and K. Livescu. Deep canonical correlation analysis, 2013.
 [4] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman. Eigenfaces vs. fisherfaces: Recognition using class specific linear projection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7):711–720, 1997.
 [5] Y. Bengio, A. Courville, and P. Vincent. Representation learning: A review and new perspectives. 2013.
 [6] Y. Bengio, P. Lamblin, D. Popovici, and H. Larochelle. Greedy layerwise training of deep networks, 2007.

 [7] M. Chen, Z. Xu, K. Weinberger, and F. Sha. Marginalized denoising autoencoders for domain adaptation, 2012.
 [8] N. Chen, J. Zhu, and E. P. Xing. Predictive subspace learning for multiview data: a large margin approach, 2010.
 [9] R. Gross, I. Matthews, J. Cohn, T. Kanade, and S. Baker. The cmu multipose, illumination, and expression (multipie) face database, 2007.
 [10] D. R. Hardoon, S. Szedmak, and J. ShaweTaylor. Canonical correlation analysis: An overview with application to learning methods. Neural Computation, 16(12):2639–2664, 2004.
 [11] T. Hastie, R. Tibshirani, and J. J. H. Friedman. The elements of statistical learning, 2001.
 [12] G. E. Hinton, S. Osindero, and Y.W. Teh. A fast learning algorithm for deep belief nets. Neural computation, 18(7):1527–1554, 2006.
 [13] G. E. Hinton and R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, 2006.
 [14] H. Hotelling. Relations between two sets of variates. Biometrika, 28(3/4):321–377, 1936.
 [15] M. Kan, S. Shan, H. Zhang, S. Lao, and X. Chen. Multiview discriminant analysis. pages 808–821, 2012.
 [16] T.K. Kim, J. Kittler, and R. Cipolla. Discriminative learning and recognition of image set classes using canonical correlations. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(6):1005–1018, 2007.
 [17] Q. V. Le, J. Ngiam, A. Coates, A. Lahiri, B. Prochnow, and A. Y. Ng. On optimization methods for deep learning, 2011.
 [18] Y. LeCun, L. Bottou, G. B. Orr, and K.R. Müller. Efficient backprop. In Neural networks: Tricks of the trade, pages 9–50. Springer, 1998.
 [19] H. Lee, A. Battle, R. Raina, and A. Y. Ng. Efficient sparse coding algorithms, 2007.
 [20] Z. Lei and S. Z. Li. Coupled spectral regression for matching heterogeneous faces, 2009.
 [21] H. Li, T. Jiang, and K. Zhang. Efficient and robust feature extraction by maximum margin criterion. Neural Networks, IEEE Transactions on, 17(1):157–165, 2006.
 [22] D. Lin and X. Tang. Inter-modality face recognition. pages 13–26, 2006.
 [23] J. Ngiam, A. Khosla, M. Kim, J. Nam, H. Lee, and A. Y. Ng. Multimodal deep learning, 2011.
 [24] J. Nocedal and S. J. Wright. Numerical optimization, 2006.
 [25] P. J. Phillips, H. Wechsler, J. Huang, and P. J. Rauss. The feret database and evaluation procedure for facerecognition algorithms. Image and vision computing, 16(5):295–306, 1998.
 [26] A. Sharma and D. W. Jacobs. Bypassing synthesis: Pls for face recognition with pose, lowresolution and sketch, 2011.
 [27] A. Sharma, A. Kumar, H. Daume, and D. W. Jacobs. Generalized multiview analysis: A discriminative latent space, 2012.
 [28] P. Vincent, H. Larochelle, Y. Bengio, and P.A. Manzagol. Extracting and composing robust features with denoising autoencoders, 2008.

 [29] F. Wang and C. Zhang. Feature extraction by maximizing the average neighborhood margin. In Computer Vision and Pattern Recognition, 2007. CVPR'07. IEEE Conference on, pages 1–8. IEEE, 2007.
 [30] S. Wang, L. Zhang, Y. Liang, and Q. Pan. Semi-coupled dictionary learning with applications to image super-resolution and photo-sketch synthesis, 2012.
 [31] X. Wang and X. Tang. Face photo-sketch synthesis and recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(11):1955–1967, 2009.
 [32] J. Xie, L. Xu, and E. Chen. Image denoising and inpainting with deep neural networks, 2012.
 [33] S. Yan, D. Xu, B. Zhang, H.J. Zhang, Q. Yang, and S. Lin. Graph embedding and extensions: a general framework for dimensionality reduction. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(1):40–51, 2007.
 [34] W. Zhang, X. Wang, and X. Tang. Coupled information-theoretic encoding for face photo-sketch recognition, 2011.
 [35] B. Zhao, F. Wang, and C. Zhang. Maximum margin embedding. In Data Mining, 2008. ICDM’08. Eighth IEEE International Conference on, pages 1127–1132. IEEE, 2008.