1 Introduction
Learning to rank is an important research topic in information retrieval and data mining, which aims to learn a ranking model that produces a query-specific ranking list. The ranking model establishes a relationship between each pair of data samples by combining the corresponding features in an optimal way [1]. A score is then assigned to each pair to evaluate its relevance, forming a global ranking list across all pairs. The success of learning to rank solutions has enabled a wide spectrum of applications, including online advertising [2] [3] and multimedia retrieval [4]. Learning an appropriate data representation and a suitable scoring function are two vital steps in the ranking problem. Traditionally, a feature mapping models the data distribution in a latent space to match the relevance relationship, while the scoring function quantifies the relevance measure [1]. However, the ranking problem in the real world emerges from multiple facets, and data patterns are mined from diverse domains. For example, universities are positioned differently based on the numerous factors and weights used for quality evaluation by different ranking agencies. Therefore, a global agreement across sources and domains should be achieved while still maintaining high ranking performance.
Multi-view learning has received wide attention, with a special focus on subspace learning [5, 6] and co-training [7], yet few attempts have been made in ranking problems [8]. It introduces a new paradigm to jointly model and combine information encoded in multiple views to enhance the learning performance. Specifically, subspace learning finds a common space from different input modalities using an optimization criterion. Canonical Correlation Analysis (CCA) [9, 10] is one of the prevailing unsupervised methods used to measure cross-view correlation. By contrast, Multi-view Discriminant Analysis (MvDA) [6] is a supervised learning technique seeking the most discriminant features across views by maximizing the between-class scatter while minimizing the within-class scatter in the underlying feature space. Furthermore, a generalized multi-view embedding method [5] was proposed using a graph embedding framework for numerous unsupervised and supervised learning techniques, with extensions to nonlinear transforms including (approximate) kernel mappings [11, 12] and neural networks [5, 13]. A nonparametric version of [5] was also proposed in [14]. On the other hand, co-training [7] was introduced to maximize the mutual agreement between two distinct views, and can easily be extended to multiple inputs by subsequently training over all pairs of views. A solution to the learning to rank problem was provided by minimizing the pairwise ranking difference using the same co-training mechanism [8]. Although several applications could benefit from a multi-view learning to rank approach, the topic has been insufficiently studied to date [15]
. Ranking of multi-facet objects is generally performed using composite indicators. The usefulness of a composite indicator depends on the selected functional form and the weights associated with the component facets. Existing solutions for university ranking are an example of using subjective weights in the method of composite indicators. However, the functional form and its assigned weights are difficult to define. Consequently, there is a high disparity in the evaluation metrics between agencies, and the produced ranking lists often cause dissension among academic institutes. One observation, however, is that the indicators from different agencies may partially overlap and be highly correlated with each other. We present an example in Fig. 1 to show that several attributes in the THE dataset [16], including teaching, research, student-staff ratio and student number, are highly correlated with all of the attributes in the ARWU dataset [17]. Therefore, the motivation of this paper is to find a composite ranking by exploiting the correlation between individual rankings. Earlier success in multi-view subspace learning provides a promising way towards composite ranking. Concatenating multiple views into a single input overlooks possible view discrepancy and does not fully exploit their mutual agreement in ranking. Our goal is to go beyond direct multi-view subspace learning for ranking. This paper offers a multi-objective solution to ranking by capturing relevant information of the feature mapping within each view as well as across views. Moreover, we propose an end-to-end method to optimize the trade-off between view-specific ranking and a discriminant combination of multi-view rankings. To this end, we can improve cross-view ranking performance while maintaining the individual ranking objectives.
Intermediate feature representations in the neural network are exploited in our ranking solutions. Specifically, the first contribution is to provide two closely related methods adopting an autoencoder-like network. We first train a network to learn view-specific feature mappings, and then maximize the correlation between the intermediate representations using either an unsupervised or a discriminant projection to a common latent space. A stochastic optimization method is introduced to fit the correlation criterion. Both the autoencoding subnetwork per view, with a reconstruction objective, and the feedforward subnetworks, with a joint correlation-based objective, are iteratively optimized in the entire network. The projected feature representations in the common subspace are then combined and used to learn the ranking function.
The second contribution (graphically described in Fig. 2) is an end-to-end multi-view learning to rank solution. A subnetwork for each view is trained with its own ranking objective. Features from intermediate layers of the subnetworks are then combined after a discriminant mapping to a common space, and trained towards the global ranking objective. As a result, a network assembly is developed to enhance the joint ranking with minimum view-specific ranking loss, so that we achieve the maximum view agreement within a single optimization process.
The rest of the paper is organized as follows. In Section 2, we describe the work most closely related to our proposed methods. The proposed methods are introduced in Section 3. In Section 4, we present quantitative results to show the effectiveness of the proposed methods. Finally, Section 5 concludes the paper.
2 Related Work
2.1 Learning to rank
Learning to rank aims to optimize the combination of data representations for ranking problems [18]. It has been widely used in a number of applications, including image retrieval and ranking [4, 19], image quality ratings [20], online advertising [2], and text summarization [8]. Solutions to this problem can be decomposed into several key components, including the input features, the output vector and the scoring function. The framework is developed by training the scoring function from the input features to the output ranking list, and then scoring the ranking of new data. Traditional methods also include engineering the features, using the PageRank model [21] for example, to optimally combine them for obtaining the output. Later, research focused on discriminatively training the scoring function to improve the ranking outputs. The ranking methods can be organized into three categories according to the scoring function: the pointwise approach, the pairwise approach, and the listwise approach. We consider the pairwise approach in this paper and review the related methods as follows. A preference network was developed in [22] to evaluate the pairwise order between two documents. The network learns the preference function directly on the binary ranking output without using an additional scoring function. RankNet [23] defines a cross-entropy loss and learns a neural network to model the ranking. Assuming the scoring function to be linear [24], the ranking problem can be transformed into a binary classification problem, and therefore many classifiers can be applied to rank document pairs. RankBoost [25] adopts the AdaBoost algorithm [26], which iteratively focuses on the classification errors between each pair of documents and subsequently improves the overall output. Ranking SVM [27] applies an SVM to perform pairwise classification. GBRank [28] is a ranking method based on gradient boosted trees. Semi-supervised multi-view ranking (SmVR) [8] follows the co-training scheme to rank pairs of samples. Moreover, recent efforts focus on using the evaluation metric to guide the gradient with respect to the ranking pair during training. These studies include AdaRank [29], which adaptively optimizes the ranking errors rather than the classification errors, and LambdaRank [30]. However, all of the methods above consider the case of single-view inputs, while multi-view learning to rank is overlooked.
2.1.1 Bipartite ranking
The pairwise approach of the ranking methods serves as the basis of our ranking method, and is therefore reviewed explicitly in this section. Suppose that the training data is organized in query-sample pairs $(q, x_i, y_i)$, where $x_i$ is the $d$-dimensional feature vector of the $i$th sample for query $q$, $y_i$ is the relevance score, and $n_q$ is the number of query-specific samples. We perform the pairwise transformation before the relevance prediction of each query-sample pair, so that only the samples that belong to the same query are evaluated [24].
The modeled probability between each pair in this paper is defined in terms of a linear scoring function $f(x) = w^{\top} x$, which maps the input feature vectors to scores. Due to its linearity, the score difference of a pair can be written as $f(x_i) - f(x_j) = f(x_i - x_j)$. In case an ordered list is given as the raw input, each data sample paired with its query is investigated, and the raw orders are transformed into pairwise relevance labels. In pairwise ranking, the relevance $\bar{P}_{ij} = 1$ if sample $i$ is ranked above sample $j$ for the query, and $\bar{P}_{ij} = 0$ otherwise.
The feature difference $x_{ij} = x_i - x_j$ becomes the new feature vector used as input data for nonlinear transforms and subspace learning. Therefore, the probability can be rewritten as
$$P_{ij} = \frac{1}{1 + e^{-w^{\top}(x_i - x_j)}} \quad (1)$$
The objective of producing the right ranking order can then be formulated as the cross-entropy loss
$$L = -\sum_{i,j} \left[ \bar{P}_{ij} \log P_{ij} + (1 - \bar{P}_{ij}) \log (1 - P_{ij}) \right] \quad (2)$$
which is proved in [23] to be an upper bound of the pairwise 0-1 loss function, and is optimized using gradient descent. Logistic regression or the softmax function in neural networks can be used to learn the scoring function.
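A minimal sketch of the pairwise transform, the modeled pairwise probability, and the cross-entropy loss, assuming a linear scoring function; the function and variable names are illustrative, and the class-balancing sign flip follows the scheme described later for the multilingual experiments:

```python
import numpy as np

def pairwise_transform(X, y):
    """Turn query-specific samples into difference vectors x_i - x_j
    with label 1 when sample i should rank above sample j.
    Signs are flipped for half of the pairs to balance the two classes."""
    diffs, labels, k = [], [], 0
    for i in range(len(y)):
        for j in range(i + 1, len(y)):
            if y[i] == y[j]:
                continue  # equally relevant pairs carry no ordering signal
            sign = 1.0 if k % 2 == 0 else -1.0  # alternate to balance classes
            diffs.append(sign * (X[i] - X[j]))
            labels.append(1 if sign * (y[i] - y[j]) > 0 else 0)
            k += 1
    return np.array(diffs), np.array(labels)

def pair_probability(w, D):
    """Modeled probability P_ij = sigmoid(w^T (x_i - x_j))."""
    return 1.0 / (1.0 + np.exp(-D @ w))

def cross_entropy(w, D, t):
    """Pairwise cross-entropy loss averaged over the pairs."""
    p = pair_probability(w, D)
    return -np.mean(t * np.log(p) + (1 - t) * np.log(1 - p))
```

With an untrained (zero) weight vector every pair probability is 0.5, so the loss starts at log 2 and decreases as gradient descent orders the pairs correctly.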
2.2 Multiview deep learning
Multi-view learning considers enhancing the feature discriminability by taking inputs from diverse sources. One important approach is subspace learning, which traces back to CCA [31, 32] between two input domains; its multi-view extension has been studied in [33, 34, 35]. This approach can also be generalized using higher-order correlations [35]. The main idea behind these techniques is to project the data representations of the two domains to a common subspace that optimizes their mutual correlation. Subspace learning with supervision has also been extensively studied. Multi-view Discriminant Analysis [6] performs dimensionality reduction of features from multiple views by exploiting the class information. Recently, these methods were generalized within the same framework [5, 36], which accommodates multiple views, supervision and nonlinearity. Co-training [7] first trains two separate regressors and then iteratively maximizes their agreement.
Deep learning, which exploits nonlinear transforms of the raw feature space, has also been studied in the multi-view scenario. The multimodal deep autoencoder [37] was proposed to learn the common characteristics of a pair of views from their nonlinear representations. Deep CCA [13] is another two-view method, which maximizes the pairwise correlation using neural networks. Thereafter, a two-view correlated autoencoder was developed [38, 39] with objectives to correlate the view pairs but also to reconstruct the individual views in the same network. Multi-view Deep Network [40] was also proposed as an extension of MvDA [6]. It optimizes the ratio trace of the graph embedding [41] to avoid the complexity of solutions without a closed form [42]. In this paper, however, we show that the trace ratio optimization can be solved efficiently in the updates of the multi-view networks. Deep Multi-view Canonical Correlation Analysis (DMvCCA) and Deep Multi-view Modular Discriminant Analysis (DMvMDA) [5] are closely related to our work, and hence they are described in the following sections.
2.2.1 Deep Multiview Canonical Correlation Analysis (DMvCCA)
The idea behind DMvCCA [5] is to find a common subspace, using a set of linear transforms $\{W_v\}_{v=1}^{m}$, onto which the nonlinearly mapped input samples $f_v(X_v)$ from the $v$th view are projected such that their correlation is maximized. Specifically, it aims to maximize
$$\max_{\{W_v\}} \frac{\sum_{p \neq q} \operatorname{tr}\left( W_p^{\top} \Sigma_{pq} W_q \right)}{\sum_{p} \operatorname{tr}\left( W_p^{\top} \Sigma_{pp} W_p \right)} \quad (3)$$
where the matrix $J = I - \frac{1}{n} \mathbf{1}\mathbf{1}^{\top}$ centralizes the input data matrix of each view, and $\mathbf{1}$ is a vector of ones. By defining the cross-view covariance matrix between views $p$ and $q$ as $\Sigma_{pq} = \frac{1}{n} Y_p^{\top} Y_q$, where $Y_v = J f_v(X_v)$ is the centered $v$th view, the data projection matrix $W$, which holds the column vectors of $W_v$ for the $v$th view, can be obtained by solving the generalized eigenvalue problem
$$\tilde{\Sigma}_B \mathbf{w} = \lambda \tilde{\Sigma}_W \mathbf{w} \quad (4)$$
in which the block matrix $\tilde{\Sigma}_B$ collects the inter-view covariances $\Sigma_{pq}$, $p \neq q$, in its off-diagonal blocks, and $\tilde{\Sigma}_W$ collects the intra-view covariances $\Sigma_{pp}$ on its block diagonal. The solution to this problem thus attains maximal inter-view covariances and minimal intra-view covariances.
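As an illustration, a generalized eigenvalue problem of this block form can be solved with a standard symmetric-definite eigensolver. The sketch below builds the two-view block matrices for a toy pair of correlated views; the small ridge regularization and all variable names are our own assumptions, not part of the original method:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n, d = 200, 3
X1 = rng.standard_normal((n, d))
# Second view: a linear transform of the first plus a little noise,
# so the two views are strongly correlated.
X2 = X1 @ rng.standard_normal((d, d)) + 0.1 * rng.standard_normal((n, d))

Z1 = X1 - X1.mean(axis=0)  # centering (the role of the J matrix)
Z2 = X2 - X2.mean(axis=0)

S12 = Z1.T @ Z2 / n                      # inter-view covariance
S11 = Z1.T @ Z1 / n + 1e-6 * np.eye(d)   # intra-view covariances,
S22 = Z2.T @ Z2 / n + 1e-6 * np.eye(d)   # slightly regularized for stability

# Off-diagonal blocks hold inter-view covariances, diagonal blocks intra-view.
A = np.block([[np.zeros((d, d)), S12], [S12.T, np.zeros((d, d))]])
B = np.block([[S11, np.zeros((d, d))], [np.zeros((d, d)), S22]])

# Generalized symmetric-definite eigenproblem A v = lambda B v.
vals, vecs = eigh(A, B)       # eigenvalues in ascending order
W = vecs[:, ::-1]             # leading eigenvectors give the projections
```

For two views, the leading eigenvalue equals the top canonical correlation, so with near-linearly-related views it approaches 1.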
2.2.2 Deep Multiview Modular Discriminant Analysis (DMvMDA)
DMvMDA [5] is the neural network-based multi-view extension of LDA, which maximizes the ratio of the between-class scatter of all view pairs to the within-class scatter. Mathematically, the projection matrix $W$ of DMvMDA is derived by optimizing the objective function
$$\max_{W} \frac{\operatorname{tr}\left( W^{\top} L_B W \right)}{\operatorname{tr}\left( W^{\top} L_W W \right)} \quad (5)$$
where $L_B$ is the between-class Laplacian matrix and $L_W$ is the within-class Laplacian matrix, both constructed over all pairs of views as in [5].
3 Model Formulation
We first introduce the formulations of MvCCAE and MvMDAE, and then their extension to multi-view subspace learning to rank. Finally, the end-to-end ranking method is described.
3.1 Multiview Canonically Correlated AutoEncoder (MvCCAE)
In contrast to DMvCCA and DMvMDA, where only the nonlinear correlation between multiple views is optimized, we propose a multi-objective solution that maximizes the between-view correlation while minimizing the reconstruction error of each view source. Given the data matrices $\{X_v\}_{v=1}^{m}$ of $m$ views, the encoding networks $f_v$, the decoding networks $g_v$, and the projection matrices $W_v$, the objective of MvCCAE is formulated as follows,
(6) 
where we introduce the correlation objective $\mathcal{J}$ of (3) computed on the encoded representations, and the loss function of the $v$th autoencoder is $\ell_v = \lVert g_v(f_v(X_v)) - X_v \rVert_F^2$, with the regularization at the intermediate layer of the $v$th view denoted by $r_v$. Here, $\lambda$ and $\gamma$ are controlling parameters for the trade-off between the terms.
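The composite objective can be sketched as follows, assuming linear per-view encoders and decoders for simplicity and omitting the layer regularization term; the function name and the exact weighting are illustrative, not the authors' implementation:

```python
import numpy as np

def mvccae_loss(Xs, encoders, decoders, Ws, lam):
    """Composite MvCCAE objective (sketch): a trace-ratio correlation term
    over the projected encodings (to maximize, hence negated) plus
    lambda-weighted per-view reconstruction losses.
    `encoders`/`decoders` are per-view callables; `Ws` holds the per-view
    projection matrices. All names here are illustrative."""
    n = Xs[0].shape[0]
    J = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    Zs = [J @ enc(X) @ W for X, enc, W in zip(Xs, encoders, Ws)]
    # Between-view correlation: sum of cross-view traces over all pairs,
    # normalized by the within-view traces (trace-ratio form).
    between = sum(np.trace(Zs[p].T @ Zs[q])
                  for p in range(len(Zs)) for q in range(len(Zs)) if p != q)
    within = sum(np.trace(Z.T @ Z) for Z in Zs)
    # Per-view autoencoder reconstruction losses.
    recon = sum(np.mean((dec(enc(X)) - X) ** 2)
                for X, enc, dec in zip(Xs, encoders, decoders))
    return -between / within + lam * recon
```

With identical views, identity mappings and perfect reconstruction, the correlation ratio is 1 and the reconstruction term vanishes, so the loss reaches its minimum of -1 for this two-view toy case.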
3.1.1 Optimization
Following the objective of DMvCCA [5], we aim to directly optimize the trace ratio in (3), and let $h_1$ denote its numerator and $h_2$ its denominator, both computed on the encoded representations. Here, the output of each subnetwork is denoted by $Y_v = f_v(X_v)$. Then, we have
(7) 
and
(8) 
By using (7) and (8) and following the quotient rule, $\partial (h_1 / h_2) = (h_2\, \partial h_1 - h_1\, \partial h_2) / h_2^2$, we derive the stochastic optimization of MvCCAE to be
(9) 
The gradient of the autoencoding loss is derived from the view-specific encoding and decoding subnetworks $f_v$ and $g_v$. The encoding subnetwork $f_v$ is optimized to obtain the output $Y_v$, while the gradient of the decoding network $g_v$ with respect to its parameters can be obtained using the chain rule from the reconstruction loss $\ell_v$.
3.2 Multi-view Modularly Discriminant AutoEncoder (MvMDAE)
Similar to MvCCAE, the objective of MvMDAE is to optimize the view-specific reconstruction error and the cross-view correlation as follows,
(10) 
3.2.1 Optimization
The detailed optimization is derived by replacing the covariance matrices of MvCCAE with the between-class and within-class Laplacian matrices $L_B$ and $L_W$. Letting $h_1$ and $h_2$ denote the numerator and denominator of the resulting trace ratio, we have
(11) 
and
(12) 
The stochastic optimization of MvMDAE can be derived by using (11), (12) and applying the quotient rule as follows,
(13) 
The gradient of the objective can be calculated using the chain rule, and the stochastic gradient descent (SGD) is used with minibatches for optimization.
3.3 Multiview Subspace Learning to Rank (MvSL2R)
Multi-view subspace learning to rank is formulated based on the fact that the projected features in the common subspace can be used to train a scoring function for ranking. We generate the training data from the intersection of the ranking samples between views, so that the same samples carry different representations from their respective view origins. The overall ranking agreement is obtained by averaging the votes over the intersected ranking orders as
(14) 
By performing the pairwise transform of Section 2.1.1 over the ranking data, we obtain the inputs of the $m$ views and the cross-view relevance scores derived from the average ranking orders. The proposed ranking method consists of mapping the features into the common subspace, training a logistic regressor as the scoring function, and predicting the relevance of new sample pairs using the probability function
(15) 
where $W_v$ is the data projection matrix of the $v$th view, and $w$ is the weight vector of the logistic regressor described in (1). We summarize these steps in the algorithm below.
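A minimal sketch of the average-voting step, under the assumption that (14) simply averages the per-view rank positions of the common samples; the function name is our own:

```python
import numpy as np

def average_rank_agreement(rank_lists):
    """Cross-view agreement (sketch): the fused relevance of each common
    sample is the average of its rank positions across the views."""
    R = np.array(rank_lists, dtype=float)  # shape: (n_views, n_samples)
    return R.mean(axis=0)

# Example: three agencies rank the same four universities differently.
r = average_rank_agreement([[1, 2, 3, 4],
                            [2, 1, 3, 4],
                            [1, 3, 2, 4]])
order = np.argsort(r)  # fused ranking: lowest average position first
```

The fused order then supplies the cross-view relevance labels used by the pairwise transform before training the logistic scoring function.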
3.4 Deep Multiview Discriminant Ranking (DMvDR)
Multi-view subspace learning to rank provides a promising method with MvCCAE and MvMDAE. However, it has no direct connection to ranking. Continuing the idea of multi-objective optimization, we propose to optimize the view-specific and the joint ranking together in a single network, as shown in Fig. 2. Taking university ranking as an example, various ranking lists are generated from different agencies, and each agency uses a different set of attributes to represent the universities. In training, given the input views, the cross-entropy loss (2) is optimized with both the view-specific relevance and the joint-view relevance. Based on their evaluation metrics, the attributes of each view are trained through the view-specific subnetwork. The resulting nonlinear representations are the inputs of the joint network after the mappings to a common space, generating the joint university ranking list. Each representation is also the input to a view-specific ranking network, which minimizes its distance to the original ranking of that view. We similarly exploit the effectiveness of the intermediate layers between the subnetworks, but now train them towards the ranking loss of DMvDR. The detailed procedure of this method is described below.
The gradient of each view-specific subnetwork is calculated from its output with respect to its parameters. Since the loss is passed from each view-specific ranking output back through the corresponding subnetwork, the gradient can be calculated independently with respect to the output of each view-specific subnetwork. The gradient with respect to the network weights is then determined through backpropagation [43]. All subnetworks contain several layers with Sigmoid activation functions.
The fused subnetwork is updated with the gradient of the ranking loss computed from the cross-view relevance scores. Similar to the generation of training data in MvSL2R, we find the intersection of the ranking data with different representations or measurements from the various sources, and perform the pairwise transform to obtain the sample pairs as input, together with the cross-view ranking orders in (14). As a result, the input to the fused subnetwork is the concatenation of the nonlinear mappings from the view-specific networks as
(16) 
During testing, we can distinguish two possible scenarios: (a) if the samples are aligned and all views are present, the results of the nonlinear mappings are combined in the same manner as in the training phase to generate a fused ranking list at the end of the fused subnetwork; and (b) if samples are missing or completely unaligned in the test data, the prediction is made through the view-specific pipeline of the corresponding view. The resulting view-specific prediction still maintains the cross-view agreement learned by the trained joint network. The gradients can afterwards be easily calculated using SGD.
Joint ranking is achieved using a multi-view subspace embedding layer. Similar to MvMDAE, we take the mappings from the outputs of the view-specific subnetworks. The gradient of the multi-view subspace embedding (MvSE) in the trace ratio form is calculated by combining (11) and (12):
(17) 
The embedding layer is important as its output is passed forward to the fused subnetwork, while its gradient is propagated backward through the preceding layers to reach the inputs. In turn, the parameters of the view-specific subnetworks are also affected by the outputs of the fused subnetwork.
The update of each view-specific subnetwork depends on both the view-specific ranking output and the cross-view relevance, as the subnetwork is shared by both pipelines of the network. Through backpropagation, the $v$th view-specific subnetworks are optimized consecutively with respect to the corresponding gradient. Meanwhile, the training error with respect to the fused ranking is passed through the multi-view subspace embedding (MvSE) of (16) as the input to the fused subnetwork. The resulting gradient of each subnetwork is given by
(18) 
where the two scaling factors control the magnitude of the respective ranking losses. Similar to the other subnetworks, the gradients with respect to the parameters can be obtained by following the chain rule.
The update of the entire network of DMvDR can be summarized using SGD with mini-batches. The parameters of a subnetwork are denoted by $\theta$. A gradient descent step is $\theta_{t+1} = \theta_t - \eta \nabla_{\theta} L$, where $\eta$ is the learning rate. The gradient update step at time $t$ can be written down collectively with the chain rule:
(19) 
We generate the training data using the pairwise transform presented in Section 2.1.1. During testing, the test samples can also be transformed into pairs to evaluate the relative relevance of each sample to its query. The raw ranking data can also be fed into the trained model to predict their overall ranking positions.
4 Experiments
In this section, we evaluate the performance of the proposed multi-view learning to rank methods on three challenging problems: university ranking, multilingual ranking and image data ranking. The proposed methods are also compared to the related subspace learning and co-training methods. The subspace learning methods follow the steps proposed in Section 3.3 for ranking. We compare the performance of the following methods in the experiments:

Best Single View: a method which shows the best performance of Ranking SVM [27] over the individual views.

Feature Concat: a method which concatenates the features of the common samples for training a Ranking SVM [27].

LMvCCA [5]: a linear multi-view CCA method.

LMvMDA [5]: a linear supervised method for multi-view subspace learning.

MvDA [6]: another linear supervised method for multi-view subspace learning. It differs from the above in that the view difference is not encoded in this method.

SmVR [8]: a semi-supervised method that seeks a global agreement in ranking. It belongs to the co-training category. We use the complete data for training in the following experiments so that its comparison with the subspace learning methods is fair; therefore, SmVR becomes a supervised method in this paper.

DMvCCA [5]: a nonlinear extension of LMvCCA using neural networks.

DMvMDA [5]: a nonlinear extension of LMvMDA using neural networks.

MvCCAE: the first multi-view subspace learning to rank method proposed in this paper.

MvMDAE: the supervised multi-view subspace learning to rank method proposed in this paper.

DMvDR: the end-to-end multi-view learning to rank method proposed in this paper.
We present quantitative results using several evaluation metrics, including the Mean Average Precision (MAP), classification accuracy and Kendall's tau. The Average Precision (AP) measures the relevance of all query-sample pairs with respect to the same query, while the MAP score is the mean AP across all queries [44]. After performing the pairwise transform on the ranking data, the relevance prediction can be considered a binary classification problem, and therefore classification accuracy is utilized for evaluation. Kendall's tau measures the ordinal association between two lists of samples.
We also present the experimental results graphically using the following measures. The MAP score, which is the average precision at the ranks where recall changes, is illustrated on 11-point interpolated precision-recall (PR) curves to show the ranking performance. The ROC curve provides a graphical representation of the binary classification performance, showing the true positive rate against the false positive rate at different thresholds. The correlation plots show linear correlation coefficients between two ranking lists.
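The two ranking metrics can be computed as follows; `average_precision` is our own minimal implementation of AP for a single query, while Kendall's tau is taken from SciPy:

```python
import numpy as np
from scipy.stats import kendalltau

def average_precision(relevant, ranked):
    """AP for one query: the mean of precision@k over every rank k
    at which a relevant item is retrieved."""
    hits, precisions = 0, []
    for k, item in enumerate(ranked, start=1):
        if item in relevant:
            hits += 1
            precisions.append(hits / k)
    return float(np.mean(precisions)) if precisions else 0.0

# One discordant pair out of six -> tau = (5 - 1) / 6.
tau, _ = kendalltau([1, 2, 3, 4], [1, 2, 4, 3])

# Relevant items "a" and "b" retrieved at ranks 1 and 3 -> AP = (1 + 2/3) / 2.
ap = average_precision({"a", "b"}, ["a", "c", "b", "d"])
```

The MAP score is then the mean of such AP values over all queries.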
4.1 University Ranking
The university ranking dataset available on Kaggle.com [45] collects world ranking data from three rating agencies: the Times Higher Education (THE) World University Ranking, the Academic Ranking of World Universities (ARWU), and the Center for World University Rankings (CWUR). Despite political and controversial influences, they are widely considered authorities on university ranking. The measurements are used as feature vectors after feature preprocessing, which includes feature standardization and the removal of categorical variables and ground-truth indicators, including the ranking orders, university name, location, year and total scores. The common universities from 2012 to 2014 are considered for training, and the pairwise transform in each year generates the training samples. The entire data of 2015 is used for testing. The data distribution (after the binary transform) of the common universities in 2015 is shown in Fig. 3.
We can make several observations from the data distribution in Fig. 3. Firstly, the pairwise transform is applied to the university ranking data, which equally assigns the original ranking data to two classes. The dimensionality of the data is then reduced to two using PCA in order to display it in the plots of Fig. 3, and the data is labelled with two colors, red and green, indicating the relevance between samples. We notice a high overlap between the two classes in the raw data (left plot of Fig. 3), while the data on the right is clearly better separated after the projection by the proposed MvMDAE. This shows the discriminative power of the proposed supervised embedding method.
Furthermore, a rank correlation matrix of plots is presented in Fig. 4, with correlations among pairs of ranking lists from views 1-3 and the predicted list denoted by 'Fused'. Histograms of the ranking data are shown along the matrix diagonal, while scatter plots of data pairs appear off the diagonal. The slopes of the least-squares reference lines in the scatter plots are equal to the displayed correlation coefficients. The fused ranking list is produced by the proposed DMvDR, and the results are also generated from the common universities in 2015. We first take a closer look at the correlations between views 1-3. The correlation coefficients are generally low, with the highest between views 1 and 3. In contrast, the fused rank has a high correlation with each view: the scatter plots and the reference lines are well aligned, and the correlation coefficients are uniformly high, demonstrating that the proposed DMvDR effectively exploits the global agreement with all views.
Finally, the average prediction results of the proposed and competing methods over the three university datasets are reported in Table I. Due to the misalignment of the 2015 ranking data across datasets, we make the ranking prediction based on each view input, as elaborated in Section 3.4. We observe that Ranking SVM [27] on a single feature set or on the concatenated features performs poorly compared to the other methods. This shows that when the data is heterogeneous, simply combining the features cannot enhance joint ranking. The Kendall's tau scores of the linear subspace learning methods are comparatively higher than those of their nonlinear counterparts. This is due to the fact that the nonlinear methods aim to maximize the correlation in the embedding space, while the scoring function is not optimized for ranking. In contrast, DMvDR optimizes the entire ranking process, which is confirmed by the highest ranking and classification performance.


Methods  Kendall's tau  Accuracy 
Best Single View  65.38   
Feature Concat  35.10   
LMvCCA [5]  86.04  94.49 
LMvMDA [5]  87.00  94.97 
MvDA [6]  85.81  94.34 
SmVR [8]  80.75   
DMvCCA [5]  70.07  93.20 
DMvMDA [5]  70.81  94.75 
MvCCAE (ours)  75.94  94.01 
MvMDAE (ours)  81.04  94.85 
DMvDR (ours)  89.28  95.30 

4.2 Multilingual Ranking




2 views  3 views  4 views  5 views  
Methods  MAP@100  Accuracy  MAP@100  Accuracy  MAP@100  Accuracy  MAP@100  Accuracy 
Feature Concat  58.87  70.41  56.97  70.10  57.59  69.88  58.46  69.97 
LMvCCA [5]  59.10  70.20  62.40  72.01  54.41  66.61  60.41  72.62 
LMvMDA [5]  59.09  70.16  58.81  71.94  61.54  72.45  59.28  72.07 
MvDA [6]  55.95  69.03  55.42  67.57  55.64  68.64  58.93  68.46 
SmVR [8]  78.37  71.44  78.24  71.15  78.66  71.37  79.36  71.64 
DMvCCA [5]  53.87  67.41  42.68  62.02  54.51  68.03  57.27  65.00 
DMvMDA [5]  60.08  71.40  63.12  70.93  61.55  72.12  62.52  70.78 
MvCCAE (ours)  48.75  66.43  49.10  62.90  60.70  71.86  48.80  63.05 
MvMDAE (ours)  62.63  74.20  63.02  71.04  60.74  72.60  62.74  71.20 
DMvDR (ours)  80.01  72.68  79.34  72.23  80.32  73.07  81.64  72.39 

The multilingual ranking experiment is performed on the Reuters RCV1/RCV2 Multilingual, Multiview Text Categorization Test collection [3]; we use Reuters to refer to this dataset in later paragraphs. It is a large collection of documents with news articles written in five languages and grouped into categories by topic. A bag-of-words (BOW) representation based on TF-IDF weighting [44] is used to represent the documents. The vocabulary is large, and the resulting representation is very sparse.
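A minimal TF-IDF sketch with the usual logarithmic inverse document frequency; the exact weighting variant used in [44] may differ, and the function name is our own:

```python
import math
from collections import Counter

def tfidf(docs):
    """Minimal TF-IDF weighting (sketch): normalized term frequency
    scaled by log(N / document frequency). Terms appearing in every
    document receive weight zero under this variant."""
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc.split()))
    vectors = []
    for doc in docs:
        tf = Counter(doc.split())
        total = sum(tf.values())
        vectors.append({t: (c / total) * math.log(n / df[t])
                        for t, c in tf.items()})
    return vectors

vecs = tfidf(["trade talks resume", "trade deal signed", "rain expected"])
```

In practice the resulting vectors are kept sparse, since each document contains only a tiny fraction of the vocabulary.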
We consider the English documents and their translations to the other 4 languages in our experiment. Specifically, the 5 views are numbered as follows:

View 1: original English documents;

View 2: English documents translated to French;

View 3: English documents translated to German;

View 4: English documents translated to Italian;

View 5: English documents translated to Spanish.
Due to its high dimensionality, the BOW representation of each document is projected using a sparse SVD to a 50-dimensional compact feature vector. We randomly select 40 samples from each category in each view as training data. The training data is generated between pairs of English documents based on the pairwise transform in Section 2.1.1, and the translations into the other languages are used for augmenting the views. We select another 360 samples from 6 categories to create a test dataset of document pairs. Considering the ranking function to be linear, as shown in [24], we make document pairs comparable and balance them by assigning some of the pairs to the opposite class with the signs of their feature vectors flipped, so that the samples are equally distributed over both classes.
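The sparse SVD projection can be sketched with SciPy's truncated solver; the matrix below is a random stand-in for the BOW data, and all dimensions except the 50 components are illustrative:

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import svds

# Stand-in for the sparse BOW matrix: 300 documents x 2000 vocabulary terms.
X = sparse_random(300, 2000, density=0.01, random_state=0)

k = 50
U, s, Vt = svds(X, k=k)   # truncated SVD of the sparse matrix
X_compact = U * s         # 50-dimensional compact document representations
```

Only the k leading singular triplets are ever computed, which keeps the projection tractable for very sparse, high-dimensional vocabularies.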
We first analyze the PR and ROC curves in Fig. 5. Since we have all translations of the English documents, each sample is well aligned across all views, and therefore we perform joint learning and prediction in all multilingual experiments. The experiments start with 2 views, English and its French translation, after which the views are augmented with the documents of the other languages. The subspace ranking methods are trained by embedding an increasing number of views, while SmVR, as a co-training method, takes two views at a time, and the average performance over all pairs is reported. The proposed methods and two competing ones are included in the plots of Fig. 5. The proposed DMvDR clearly performs best across all views, as can be seen in both the PR and ROC plots. SmVR is second best, with a lower precision and a smaller area under the curve compared to DMvDR. Among the remaining three methods, DMvMDA performs favorably in the PR curves but not as well in the ROC plots. The results are comparatively consistent across all views.
The quantitative MAP and accuracy results are given in Table II. The linear methods, together with the feature concatenation, yield similar results, which are generally inferior to the nonlinear methods in classification. Note also that the nonlinear subspace learning methods do not provide superior MAP scores, which can be explained by the fact that the embedding is only intended to construct a discriminative feature space for classifying the pairs of data. We can also observe that the MAP scores and accuracies are stable across views. This can be interpreted as follows: a global ranking agreement can be reached to a certain level when all languages correspond to each other. It is again confirmed that the end-to-end solution consistently provides the highest scores, while SmVR is a few percentage points behind. When the features from different views follow a similar data distribution, the co-training method performs well and competes with the proposed DMvDR.
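The MAP scores discussed above follow the standard definition of mean average precision over per-query ranked lists; a minimal sketch of that computation (the ranked lists here are hypothetical):

```python
import numpy as np

def average_precision(relevance):
    """AP for one query; `relevance` is a binary list in ranked order."""
    rel = np.asarray(relevance, dtype=float)
    if rel.sum() == 0:
        return 0.0
    # Precision at each rank, accumulated only at relevant positions.
    prec_at_k = np.cumsum(rel) / (np.arange(len(rel)) + 1)
    return float((prec_at_k * rel).sum() / rel.sum())

def mean_average_precision(ranked_lists):
    """MAP: mean of the per-query average precisions."""
    return float(np.mean([average_precision(r) for r in ranked_lists]))

# Two hypothetical queries with ranked results (1 = relevant pair).
queries = [[1, 0, 1, 0], [0, 1, 1, 0]]
map_score = mean_average_precision(queries)  # (5/6 + 7/12) / 2
```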
4.3 Image Data Ranking
Image data ranking is the problem of evaluating the relevance between two images represented by different types of features. We adopt the Animals with Attributes (AWA) dataset [46] for this problem due to its diversity of animal appearance and large number of classes. The dataset is composed of 50 animal classes annotated with 85 attributes. We follow the feature generation in [5] and adopt 3 feature types forming the views:

Image Feature by the VGG16 pretrained model: a 1000-dimensional feature vector is produced from each image by resizing it to the network input size and taking the outputs of the last layer of a 16-layer VGGNet [47].

Class Label Encoding: a 100-dimensional word2vec feature is extracted from each class label. Then, we map the visual feature of each image to the text feature space using a ridge regressor with a similar setting as in [5], generating another set of textual features connected to the visual world. The text embedding space is constructed by training a skip-gram [48] model on the full set of English Wikipedia articles.

Attribute Encoding: an 85-dimensional feature vector is produced with a similar idea as above. Since each class of animals contains some typical attribute patterns, a lookup table can be constructed to connect the classes and attributes [49, 50]. We then map each image feature to the attribute space to produce the mid-level feature.
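The ridge-regression mapping used to produce the second and third views can be sketched in closed form. The dimensions below are reduced stand-ins for the 1000-d VGG features and the 100-d/85-d target embeddings, and the code is standard ridge regression, not the authors' exact implementation:

```python
import numpy as np

def fit_ridge(X, T, lam=1.0):
    """Closed-form ridge regressor W mapping visual features X (n x d)
    to target embeddings T (n x k): W = (X^T X + lam*I)^{-1} X^T T."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ T)

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 10))     # stand-in for VGG image features
W_true = rng.normal(size=(10, 5))
T = X @ W_true                    # stand-in for word2vec/attribute targets
W = fit_ridge(X, T, lam=1e-6)
T_hat = X @ W                     # projected "textual" view of each image
```

With a small regularizer and an exact linear relation, the fitted projection recovers the targets almost exactly; in practice `lam` trades fit against stability.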
We generate the image ranking data as follows. From the 50 classes of animal images, we form both in-class and out-of-class image pairs from each class, yielding the training pairs and, similarly, the test pairs. A set of images from each class is used for training, and a separate set of samples as test data. Another 10 images are used as queries: 5 of them are associated with the in-class images as positive sample pairs and 5 as negative sample pairs. For the negative sample pairs, we randomly select classes from the remaining animal classes at a time, with one image per class associated with each query image under study.
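The sampling of in-class and out-of-class pairs can be sketched as follows; the helper name and per-class pair counts are hypothetical and only illustrate the protocol described above:

```python
import numpy as np

def make_ranking_pairs(labels, n_pos, n_neg, seed=0):
    """Sample in-class (label +1) and out-of-class (label -1) index pairs."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    pairs, pair_labels = [], []
    for c in np.unique(labels):
        same = np.flatnonzero(labels == c)
        other = np.flatnonzero(labels != c)
        for _ in range(n_pos):  # in-class pair: two images of one class
            i, j = rng.choice(same, size=2, replace=False)
            pairs.append((int(i), int(j)))
            pair_labels.append(1)
        for _ in range(n_neg):  # out-of-class pair: one image from elsewhere
            i, j = rng.choice(same), rng.choice(other)
            pairs.append((int(i), int(j)))
            pair_labels.append(-1)
    return pairs, pair_labels

labels = [0] * 6 + [1] * 6   # toy dataset: two classes, six images each
pairs, pair_y = make_ranking_pairs(labels, n_pos=2, n_neg=2)
```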


Table III: MAP@100 and accuracy of all methods on the AWA dataset.

Methods            MAP@100   Accuracy
Feature Concat       38.08      50.60
LMvCCA [5]           49.97      51.85
LMvMDA [5]           49.70      52.35
MvDA [6]             49.20      52.82
SmVR [8]             52.12      50.33
DMvCCA [5]           51.38      50.83
DMvMDA [5]           51.52      51.38
MvCCAE (ours)        49.01      53.28
MvMDAE (ours)        48.99      53.30
DMvDR (ours)         76.83      71.48

The performance of the methods on the animal dataset is shown graphically in Fig. 6 and quantitatively in Table III. DMvDR outperforms the other competing methods by a large margin, as shown in the plots of Fig. 6. Due to the variety of data distributions from the different feature types used as view inputs, the co-training type of SmVR can no longer compete with the end-to-end solution. From Table III, one can observe that the feature concatenation suffers from the same problem. On the other hand, our proposed subspace ranking methods produce satisfactory classification rates, while their precisions remain somewhat low. This again implies that it is critical to train the scoring function together with the feature mappings. The other linear and nonlinear subspace ranking methods have comparatively similar performance at a lower level.
5 Conclusion
Learning to rank has been a popular research topic with numerous applications, while multi-view ranking remains a relatively new research topic. In this paper, we aimed to associate multi-view subspace learning methods with the ranking problem and proposed three methods in this direction. MvCCAE is an unsupervised multi-view embedding method, while MvMDAE is its supervised counterpart. Both incorporate multiple objectives, maximizing correlation on one hand and minimizing the reconstruction error on the other, and both have been extended into the multi-view subspace learning to rank scheme. Finally, DMvDR is proposed to exploit the global agreement while minimizing the individual ranking losses in a single optimization process. The experimental results validate the superior performance of DMvDR compared to the other subspace and co-training methods on multi-view datasets with both homogeneous and heterogeneous data representations.
In the future, we will explore scenarios with missing data, which are beyond the scope of the proposed subspace ranking methods during training. Multiple networks can also be combined by concatenating their outputs and further optimized in a single subnetwork. This solution may also be applicable to homogeneous representations.
References
 [1] T.-Y. Liu, “Learning to rank for information retrieval,” Foundations and Trends in Information Retrieval, vol. 3, no. 3, pp. 225–331, 2009.
 [2] Y. Zhu, G. Wang, J. Yang, D. Wang, J. Yan, J. Hu, and Z. Chen, “Optimizing search engine revenue in sponsored search,” in Proceedings of the 32nd international ACM SIGIR conference on Research and development in information retrieval. ACM, 2009, pp. 588–595.
 [3] M. Amini, N. Usunier, and C. Goutte, “Learning from multiple partially observed views – an application to multilingual text categorization,” in Advances in Neural Information Processing Systems 22. Curran Associates, Inc., 2009, pp. 28–36.
 [4] J. Yu, D. Tao, M. Wang, and Y. Rui, “Learning to rank using user clicks and visual features for image retrieval,” IEEE Transactions on Cybernetics, vol. 45, no. 4, pp. 767–779, 2015.
 [5] G. Cao, A. Iosifidis, K. Chen, and M. Gabbouj, “Generalized multiview embedding for visual recognition and crossmodal retrieval,” IEEE Transactions on Cybernetics, 2017, doi: 10.1109/TCYB.2017.2742705.
 [6] M. Kan, S. Shan, H. Zhang, S. Lao, and X. Chen, “Multi-view discriminant analysis,” IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), vol. 38, no. 1, pp. 188–194, Jan 2016.

 [7] A. Blum and T. Mitchell, “Combining labeled and unlabeled data with co-training,” in Proceedings of the Eleventh Annual Conference on Computational Learning Theory. ACM, 1998, pp. 92–100.
 [8] N. Usunier, M.-R. Amini, and C. Goutte, “Multi-view semi-supervised learning for ranking multilingual documents,” Machine Learning and Knowledge Discovery in Databases, pp. 443–458, 2011.
 [9] D. R. Hardoon, S. Szedmak, and J. Shawe-Taylor, “Canonical correlation analysis: An overview with application to learning methods,” Neural Computation, vol. 16, no. 12, pp. 2639–2664, 2004.
 [10] J. Rupnik and J. Shawe-Taylor, “Multi-view canonical correlation analysis,” in Slovenian KDD Conference on Data Mining and Data Warehouses (SiKDD 2010), 2010, pp. 1–4.
 [11] A. Iosifidis, A. Tefas, and I. Pitas, “Kernel reference discriminant analysis,” Pattern Recognition Letters, vol. 49, pp. 85–91, 2014.
 [12] A. Iosifidis and M. Gabbouj, “Nyström-based approximate kernel subspace learning,” Pattern Recognition, vol. 57, pp. 190–197, 2016.
 [13] G. Andrew, R. Arora, J. Bilmes, and K. Livescu, “Deep canonical correlation analysis,” in Proceedings of the 30th International Conference on Machine Learning, 2013, pp. 1247–1255.
 [14] G. Cao, A. Iosifidis, and M. Gabbouj, “Multi-view nonparametric discriminant analysis for image retrieval and recognition,” IEEE Signal Processing Letters, vol. 24, no. 10, pp. 1537–1541, Oct 2017.
 [15] F. Feng, L. Nie, X. Wang, R. Hong, and T.-S. Chua, “Computational social indicators: A case study of Chinese university ranking,” in Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval. New York, NY, USA: ACM, 2017, pp. 455–464.
 [16] “The Times Higher Education World University Rankings,” https://www.timeshighereducation.com/worlduniversityrankings, 2016.
 [17] N. C. Liu and Y. Cheng, “The academic ranking of world universities,” Higher education in Europe, vol. 30, no. 2, pp. 127–136, 2005.
 [18] T. Liu, J. Wang, J. Sun, N. Zheng, X. Tang, and H.Y. Shum, “Picture collage,” IEEE Transactions on Multimedia (TMM), vol. 11, no. 7, pp. 1225 –1239, 2009.
 [19] X. Li, T. Pi, Z. Zhang, X. Zhao, M. Wang, X. Li, and P. S. Yu, “Learning Bregman distance functions for structural learning to rank,” IEEE Transactions on Knowledge and Data Engineering, vol. 29, no. 9, pp. 1916–1927, Sept 2017.
 [20] O. Wu, Q. You, X. Mao, F. Xia, F. Yuan, and W. Hu, “Listwise learning to rank by exploring structure of objects,” IEEE Transactions on Knowledge and Data Engineering, vol. 28, no. 7, pp. 1934–1939, 2016.
 [21] L. Page, S. Brin, R. Motwani, and T. Winograd, “The pagerank citation ranking: Bringing order to the web.” Stanford InfoLab, Tech. Rep., 1999.
 [22] W. W. Cohen, R. E. Schapire, and Y. Singer, “Learning to order things,” in Advances in Neural Information Processing Systems, 1998, pp. 451–457.
 [23] C. Burges, T. Shaked, E. Renshaw, A. Lazier, M. Deeds, N. Hamilton, and G. Hullender, “Learning to rank using gradient descent,” in Proceedings of the 22nd international conference on Machine learning. ACM, 2005, pp. 89–96.
 [24] R. Herbrich, T. Graepel, and K. Obermayer, “Large margin rank boundaries for ordinal regression,” 2000.
 [25] Y. Freund, R. Iyer, R. E. Schapire, and Y. Singer, “An efficient boosting algorithm for combining preferences,” The Journal of machine learning research, vol. 4, pp. 933–969, 2003.
 [26] Y. Freund and R. E. Schapire, “A decision-theoretic generalization of on-line learning and an application to boosting,” in European Conference on Computational Learning Theory. Springer, 1995, pp. 23–37.
 [27] T. Joachims, “Optimizing search engines using clickthrough data,” in Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 2002, pp. 133–142.
 [28] J. H. Friedman, “Greedy function approximation: a gradient boosting machine,” Annals of statistics, pp. 1189–1232, 2001.
 [29] J. Xu and H. Li, “AdaRank: a boosting algorithm for information retrieval,” in Proceedings of the 30th annual international ACM SIGIR conference on Research and development in information retrieval. ACM, 2007, pp. 391–398.
 [30] C. J. Burges, R. Ragno, and Q. V. Le, “Learning to rank with nonsmooth cost functions,” in Advances in neural information processing systems, 2007, pp. 193–200.
 [31] H. Hotelling, “Relations between two sets of variates,” Biometrika, pp. 321–377, 1936.
 [32] M. Borga, “Canonical correlation: a tutorial,” http://people.imt.liu.se/~magnus/cca/tutorial/tutorial.pdf, 2001.
 [33] A. A. Nielsen, “Multiset canonical correlations analysis and multispectral, truly multitemporal remote sensing data,” IEEE Transactions on Image Processing (TIP), vol. 11, no. 3, pp. 293–305, 2002.

 [34] Y. Gong, Q. Ke, M. Isard, and S. Lazebnik, “A multi-view embedding space for modeling internet images, tags, and their semantics,” International Journal of Computer Vision, vol. 106, no. 2, pp. 210–233, 2014.
 [35] Y. Luo, D. Tao, K. Ramamohanarao, C. Xu, and Y. Wen, “Tensor canonical correlation analysis for multi-view dimension reduction,” IEEE Transactions on Knowledge and Data Engineering, vol. 27, no. 11, pp. 3111–3124, Nov 2015.
 [36] A. Sharma, A. Kumar, H. Daume III, and D. W. Jacobs, “Generalized multiview analysis: A discriminative latent space,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2012, pp. 2160–2167.
 [37] J. Ngiam, A. Khosla, M. Kim, J. Nam, H. Lee, and A. Y. Ng, “Multimodal deep learning,” in Proceedings of the 28th International Conference on Machine Learning (ICML-11), 2011, pp. 689–696.
 [38] W. Wang, R. Arora, K. Livescu, and J. Bilmes, “On deep multiview representation learning,” in Proceedings of the 32nd International Conference on Machine Learning (ICML), 2015, pp. 1083–1092.
 [39] S. Chandar, M. M. Khapra, H. Larochelle, and B. Ravindran, “Correlational neural networks,” Neural computation, 2016.
 [40] M. Kan, S. Shan, and X. Chen, “Multi-view deep network for cross-view classification,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 4847–4855.
 [41] S. Yan, D. Xu, B. Zhang, H.J. Zhang, Q. Yang, and S. Lin, “Graph embedding and extensions: a general framework for dimensionality reduction,” IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), vol. 29, no. 1, pp. 40–51, 2007.
 [42] Y. Jia, F. Nie, and C. Zhang, “Trace ratio problem revisited,” IEEE Transactions on Neural Networks, vol. 20, no. 4, pp. 729–735, 2009.
 [43] Y. LeCun, L. Bottou, G. B. Orr, and K.-R. Müller, “Efficient backprop,” in Neural Networks: Tricks of the Trade. Springer, 1998, pp. 9–50.
 [44] C. D. Manning, P. Raghavan, H. Schütze et al., Introduction to information retrieval. Cambridge university press Cambridge, 2008, vol. 1, no. 1.
 [45] “World university rankings: A kaggle dataset.” https://www.kaggle.com/mylesoneill/worlduniversityrankings, 2016.
 [46] C. H. Lampert, H. Nickisch, and S. Harmeling, “Attribute-based classification for zero-shot visual object categorization,” IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), vol. 36, no. 3, pp. 453–465, 2014.
 [47] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” International Conference on Learning Representations (ICLR), 2015.

 [48] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean, “Distributed representations of words and phrases and their compositionality,” in Advances in Neural Information Processing Systems (NIPS), 2013, pp. 3111–3119.
 [49] C. Kemp, J. B. Tenenbaum, T. L. Griffiths, T. Yamada, and N. Ueda, “Learning systems of concepts with an infinite relational model,” in AAAI, vol. 3, 2006, p. 5.
 [50] D. N. Osherson, J. Stern, O. Wilkie, M. Stob, and E. E. Smith, “Default probability,” Cognitive Science, vol. 15, no. 2, pp. 251–269, 1991.