Deep Multi-view Learning to Rank

by Guanqun Cao, et al.

We study the problem of learning to rank from multiple sources. Though multi-view learning and learning to rank have been studied extensively leading to a wide range of applications, multi-view learning to rank as a synergy of both topics has received little attention. The aim of the paper is to propose a composite ranking method while keeping a close correlation with the individual rankings simultaneously. We propose a multi-objective solution to ranking by capturing the information of the feature mapping from both within each view as well as across views using autoencoder-like networks. Moreover, a novel end-to-end solution is introduced to enhance the joint ranking with minimum view-specific ranking loss, so that we can achieve the maximum global view agreements within a single optimization process. The proposed method is validated on a wide variety of ranking problems, including university ranking, multi-lingual text ranking and image data ranking, providing superior results.




1 Introduction

Learning to rank is an important research topic in information retrieval and data mining, which aims to learn a ranking model to produce a query-specific ranking list. The ranking model establishes a relationship between each pair of data samples by combining the corresponding features in an optimal way [1]. A score is then assigned to each pair to evaluate its relevance, forming a global ranking list across all pairs. The success of learning to rank solutions has brought a wide spectrum of applications, including online advertising [2], natural language processing [3] and multimedia retrieval [4].

Learning an appropriate data representation and a suitable scoring function are two vital steps in the ranking problem. Traditionally, a feature mapping models the data distribution in a latent space to match the relevance relationship, while the scoring function is used to quantify the relevance measure [1]. However, real-world ranking problems emerge from multiple facets, and data patterns are mined from diverse domains. For example, universities are positioned differently based on the numerous factors and weights used for quality evaluation by different ranking agencies. Therefore, a global agreement across sources and domains should be achieved while still maintaining a high ranking performance.

Multi-view learning has received wide attention with a special focus on subspace learning [5, 6] and co-training [7], but few attempts have been made in ranking problems [8]. It introduces a new paradigm to jointly model and combine information encoded in multiple views to enhance the learning performance. Specifically, subspace learning finds a common space from different input modalities using an optimization criterion. Canonical Correlation Analysis (CCA) [9, 10] is one of the prevailing unsupervised methods used to measure cross-view correlation. By contrast, Multi-view Discriminant Analysis (MvDA) [6] is a supervised learning technique seeking the most discriminant features across views by maximizing the between-class scatter while minimizing the within-class scatter in the underlying feature space. Furthermore, a generalized multi-view embedding method [5] was proposed using a graph embedding framework for numerous unsupervised and supervised learning techniques, with extensions to nonlinear transforms including (approximate) kernel mappings [11, 12] and neural networks [5, 13]. A nonparametric version of [5] was also proposed in [14]. On the other hand, co-training [7] was introduced to maximize the mutual agreement between two distinct views, and can be easily extended to multiple inputs by subsequently training over all pairs of views. A solution to the learning to rank problem was provided by minimizing the pairwise ranking difference using the same co-training mechanism [8].

Although several applications could benefit from a multi-view learning to rank approach, the topic has been insufficiently studied to date [15]. Ranking of multi-facet objects is generally performed using composite indicators. The usefulness of a composite indicator depends upon the selected functional form and the weights associated with the component facets. Existing solutions for university ranking are an example of using subjective weights in the method of composite indicators. However, the functional form and its assigned weights are difficult to define. Consequently, there is a high disparity in the evaluation metric between agencies, and the produced ranking lists often cause dissension in academic institutes. One observation, however, is that the indicators from different agencies may partially overlap and have a high correlation with each other. We present an example in Fig. 1 to show that several attributes in the THE dataset [16], including teaching, research, student-staff ratio and student number, are highly correlated with all of the attributes in the ARWU dataset [17]. Therefore, the motivation of this paper is to find a composite ranking by exploiting the correlation between individual rankings.

Earlier success in multi-view subspace learning provides a promising way towards composite ranking. Concatenating multiple views into a single input overlooks possible view discrepancy and does not fully exploit their mutual agreement in ranking. Our goal is to go beyond direct multi-view subspace learning for ranking. This paper offers a multi-objective solution to ranking by capturing relevant information of the feature mapping both within each view and across views. Moreover, we propose an end-to-end method to optimize the trade-off between view-specific ranking and a discriminant combination of multi-view ranking. In this way, we can improve cross-view ranking performance while maintaining the individual ranking objectives.

Fig. 1: The correlation matrix between the measurements of Times Higher Education (THE) and Academic Ranking of World Universities (ARWU) rankings. The data is extracted and aligned based on the performance of the common universities in 2015 between the two ranking agencies. The reddish color indicates high correlation, while the matrix elements with low correlation are represented in bluish colors.

Intermediate feature representations in the neural network are exploited in our ranking solutions. Specifically, the first contribution is to provide two closely related methods by adopting an autoencoder-like network. We first train a network to learn view-specific feature mappings, and then maximize the correlation of the intermediate representations using either an unsupervised or a discriminant projection to a common latent space. A stochastic optimization method is introduced to fit the correlation criterion. Both the autoencoding sub-network of each view, with a reconstruction objective, and the feedforward sub-networks, with a joint correlation-based objective, are iteratively optimized in the entire network. The projected feature representations in the common subspace are then combined and used to learn the ranking function.

The second contribution (graphically described in Fig. 2) is an end-to-end multi-view learning to rank solution. A sub-network for each view is trained with its own ranking objective. Then, features from intermediate layers of the sub-networks are combined, after a discriminant mapping to a common space, and trained towards the global ranking objective. As a result, a network assembly is developed to enhance the joint ranking with minimum view-specific ranking loss, so that we can achieve the maximum view agreement within a single optimization process.

The rest of the paper is organized as follows. In Section 2, we describe the related work close to our proposed methods. The proposed methods are introduced in Section 3. In Section 4, we present quantitative results to show the effectiveness of the proposed methods. Finally, Section 5 concludes the paper.

2 Related Work

2.1 Learning to rank

Learning to rank aims to optimize the combination of data representations for ranking problems [18]. It has been widely used in a number of applications, including image retrieval and ranking [4, 19], image quality ratings [20], online advertising [2], and text summarization. Solutions to this problem can be decomposed into several key components: the input feature, the output vector and the scoring function. The framework is developed by training the scoring function from the input feature to the output ranking list, and then scoring the ranking of new data. Traditional methods also include engineering the features, for example using the PageRank model [21], to optimally combine them for obtaining the output. Later, research focused on discriminatively training the scoring function to improve the ranking outputs. The ranking methods can be organized into three categories according to the scoring function: the pointwise approach, the pairwise approach, and the listwise approach.

We consider the pairwise approach in this paper and review the related methods as follows. A preference network was developed in [22] to evaluate the pairwise order between two documents. The network learns the preference function directly from the binary ranking output without using an additional scoring function. RankNet [23] defines the cross-entropy loss and learns a neural network to model the ranking. Assuming the scoring function to be linear [24], the ranking problem can be transformed into a binary classification problem, and therefore many classifiers can be applied to rank document pairs. RankBoost [25] adopts the AdaBoost algorithm [26], which iteratively focuses on the classification errors between each pair of documents and subsequently improves the overall output. Ranking SVM [27] applies SVM to perform pairwise classification. GBRank [28] is a ranking method based on gradient boosted trees. Semi-supervised multi-view ranking (SmVR) [8] follows the co-training scheme to rank pairs of samples. Moreover, recent efforts focus on using the evaluation metric to guide the gradient with respect to the ranking pair during training. These studies include AdaRank [29], which adaptively optimizes the ranking errors rather than the classification errors, and LambdaRank [30]. However, all of the methods above consider the case of single-view inputs, while multi-view learning to rank is overlooked.

2.1.1 Bipartite ranking

The pairwise approach serves as the basis of our ranking method, and is therefore reviewed explicitly in this section. Suppose that the training data is organized in query-sample pairs $(q, x_i)$, where $x_i \in \mathbb{R}^d$ is the $d$-dimensional feature vector of the $i$-th sample paired with query $q$, $y_i$ is its relevance score, and $n_q$ is the number of query-specific samples. We perform the pairwise transformation before the relevance prediction of each query-sample pair, so that only the samples that belong to the same query are evaluated [24].

The modeled probability between each pair $(i, j)$ in this paper is defined as

$$P_{ij} = \frac{\exp\big(s(x_i) - s(x_j)\big)}{1 + \exp\big(s(x_i) - s(x_j)\big)}, \qquad (1)$$

where $s$ is the linear scoring function $s(x) = w^\top x$, which maps the input feature vectors to scores. Due to its linearity, the score difference can be written in terms of the feature difference, $s(x_i) - s(x_j) = w^\top (x_i - x_j)$. In case an ordered list is given as the raw input, each data sample paired with its query is investigated, and the raw orders are transformed into pairwise labels. In pairwise ranking, the relevance $\bar{P}_{ij} = 1$ if the query and sample are relevant, and $\bar{P}_{ij} = 0$ otherwise.

The feature difference $x_{ij} = x_i - x_j$ becomes the new feature vector used as the input data for nonlinear transforms and subspace learning. Therefore, the probability can be rewritten as

$$P_{ij} = \frac{1}{1 + \exp(-w^\top x_{ij})}.$$

The objective of producing the right ranking order can then be formulated as the cross-entropy loss

$$L = -\sum_{(i,j)} \Big( \bar{P}_{ij} \log P_{ij} + (1 - \bar{P}_{ij}) \log(1 - P_{ij}) \Big), \qquad (2)$$

which is proven in [23] to be an upper bound of the pairwise 0-1 loss function and is optimized using gradient descent. Logistic regression or the softmax function in neural networks can be used to learn the scoring function.
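The pairwise transform and the cross-entropy objective above can be sketched in a few lines; this is a minimal single-query illustration with a linear scoring function, with names of our own choosing, not the authors' implementation.

```python
import numpy as np

def pairwise_transform(X, y):
    """Turn the samples of one query into pairwise difference vectors.

    X: (n, d) feature matrix, y: (n,) relevance scores.
    Returns difference vectors x_i - x_j and binary labels indicating
    whether sample i should be ranked above sample j.
    """
    diffs, labels = [], []
    for i in range(len(y)):
        for j in range(len(y)):
            if y[i] == y[j]:
                continue  # ties carry no ordering information
            diffs.append(X[i] - X[j])
            labels.append(1.0 if y[i] > y[j] else 0.0)
    return np.asarray(diffs), np.asarray(labels)

def pairwise_cross_entropy(w, diffs, labels):
    """Cross-entropy loss of the linear scoring function s(x) = w^T x
    evaluated on pairwise differences, as in the RankNet-style loss (2)."""
    p = 1.0 / (1.0 + np.exp(-diffs @ w))  # modeled P(i ranked above j)
    eps = 1e-12  # numerical guard for log(0)
    return -np.mean(labels * np.log(p + eps)
                    + (1 - labels) * np.log(1 - p + eps))
```

A weight vector that ranks the pairs correctly yields a lower loss than its sign-flipped counterpart, which is what gradient descent on (2) exploits.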

2.2 Multi-view deep learning

Multi-view learning considers enhancing the feature discriminability by taking inputs from diverse sources. One important approach is subspace learning, which traces back to CCA [31, 32] between two input domains; its multi-view extension has been studied in [33, 34, 35]. This approach can also be generalized using higher-order correlations [35]. The main idea behind these techniques is to project the data representations of the two domains to a common subspace optimizing their mutual correlation. Subspace learning with supervision has also been extensively studied. Multi-view Discriminant Analysis [6] performs dimensionality reduction of features from multiple views exploiting the class information. Recently, these methods were generalized within the same framework [5, 36], which accommodates multiple views, supervision and nonlinearity. Co-training [7] first trains two separate regressors and then iteratively maximizes their agreement.

Deep learning, which exploits nonlinear transforms of the raw feature space, has also been studied in the multi-view scenario. The multi-modal deep autoencoder [37] was proposed to learn the common characteristics of a pair of views from their nonlinear representations. Deep CCA [13] is another two-view method, which maximizes the pairwise correlation using neural networks. Thereafter, a two-view correlated autoencoder was developed [38, 39] with objectives to both correlate the view pairs and reconstruct the individual views in the same network. Multi-view Deep Network [40] was also proposed as an extension of MvDA [6]. It optimizes the ratio trace of the graph embedding [41] to avoid the complexity of solutions without a closed form [42]. In this paper, however, we show that the trace ratio optimization can be solved efficiently in the updates of the multi-view networks. Deep Multi-view Canonical Correlation Analysis (DMvCCA) and Deep Multi-view Modular Discriminant Analysis (DMvMDA) [5] are closely related to our work, and hence they are described in the following sections.

2.2.1 Deep Multi-view Canonical Correlation Analysis (DMvCCA)

The idea behind DMvCCA [5] is to find a common subspace using a set of linear transforms $\{W_v\}_{v=1}^{V}$ that project the nonlinearly mapped input samples $Y_v = f_v(X_v)$ from the $v$-th view, such that the cross-view correlation is maximized. Specifically, it aims to maximize

$$\max_{W_1, \dots, W_V} \; \frac{\sum_{v \neq w} \operatorname{tr}\big(W_v^\top \hat{Y}_v \hat{Y}_w^\top W_w\big)}{\sum_{v} \operatorname{tr}\big(W_v^\top \hat{Y}_v \hat{Y}_v^\top W_v\big)}, \qquad (3)$$

where the matrix $C = I - \frac{1}{N}\mathbf{1}\mathbf{1}^\top$ centralizes the input data matrix of each view, $\mathbf{1}$ is a vector of ones, and $\hat{Y}_v = Y_v C$ is the centered $v$-th view. By defining the cross-view covariance matrix between views $v$ and $w$ as $\Sigma_{vw} = \hat{Y}_v \hat{Y}_w^\top$, the data projection matrix $W$, which stacks the column vectors $W_v$ of the $v$-th view, can be obtained by solving the generalized eigenvalue problem

$$(\Sigma - D)\, W = \lambda\, D\, W,$$

where $\Sigma$ collects all pairwise covariance blocks $\Sigma_{vw}$ and $D$ is its block-diagonal part containing the intra-view covariances $\Sigma_{vv}$. The solution is thus derived with maximal inter-view covariances and minimal intra-view covariances.
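As an illustration of this subspace-learning machinery, a minimal two-view linear CCA can be solved as an eigenvalue problem in a few lines of NumPy. All names and the ridge regularizer `reg` are assumptions of this sketch; the deep variant replaces the raw inputs with sub-network outputs.

```python
import numpy as np

def cca_projections(X1, X2, dim=2, reg=1e-6):
    """Two-view linear CCA solved via an eigenvalue problem.

    X1: (n, d1), X2: (n, d2) row-sample matrices. Returns projection
    matrices W1, W2 whose columns maximize cross-view correlation.
    """
    n = X1.shape[0]
    X1c = X1 - X1.mean(0)  # centering = multiplying by C = I - (1/n) 1 1^T
    X2c = X2 - X2.mean(0)
    S11 = X1c.T @ X1c / n + reg * np.eye(X1.shape[1])  # intra-view covariances
    S22 = X2c.T @ X2c / n + reg * np.eye(X2.shape[1])
    S12 = X1c.T @ X2c / n                              # cross-view covariance
    # Solve S11^{-1} S12 S22^{-1} S21 w1 = rho^2 w1 for the first view
    M = np.linalg.solve(S11, S12) @ np.linalg.solve(S22, S12.T)
    vals, vecs = np.linalg.eig(M)
    order = np.argsort(-vals.real)[:dim]
    W1 = vecs[:, order].real
    W2 = np.linalg.solve(S22, S12.T) @ W1  # matching directions in view 2
    return W1, W2
```

Projecting two noisy views of the same latent signal with these matrices yields highly correlated coordinates, which is the property the trace-ratio objective (3) generalizes to more than two views.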

2.2.2 Deep Multi-view Modular Discriminant Analysis (DMvMDA)

DMvMDA [5] is the neural-network-based multi-view counterpart of LDA, which maximizes the ratio of the between-class scatter of all view pairs to the within-class scatter. Mathematically, the projection matrix of DMvMDA is derived by optimizing the objective function

$$\max_{W_1, \dots, W_V} \; \frac{\sum_{v, w} \operatorname{tr}\big(W_v^\top Y_v L_B Y_w^\top W_w\big)}{\sum_{v, w} \operatorname{tr}\big(W_v^\top Y_v L_W Y_w^\top W_w\big)},$$

where $L_B$ is the between-class Laplacian matrix and $L_W$ is the within-class Laplacian matrix.
3 Model Formulation

We first introduce the formulation of MvCCAE and MvMDAE, then their extension to multi-view subspace learning to rank. Finally, the end-to-end ranking method is described.

3.1 Multi-view Canonically Correlated Auto-Encoder (MvCCAE)

In contrast to DMvCCA and DMvMDA, where only the nonlinear correlation between multiple views is optimized, we propose a multi-objective solution that maximizes the between-view correlation while minimizing the reconstruction error of each view source. Given the data matrices $X_v$ of the $V$ views, the encoding networks $f_v$, the decoding networks $g_v$, and the projection matrices $W_v$, the objective of MvCCAE is formulated as

$$\min_{f_v, g_v, W_v} \; -F + \sum_{v=1}^{V} \big( \alpha\, \ell_v + \beta\, R_v^{(l)} \big), \qquad (6)$$

where $F$ is the cross-view correlation objective in (3) evaluated on the encoded representations, the loss function of the $v$-th autoencoder is $\ell_v = \lVert X_v - g_v(f_v(X_v)) \rVert_F^2$, and $R_v^{(l)}$ denotes the regularization at the $l$-th intermediate layer of the $v$-th view. Here, $\alpha$ and $\beta$ are controlling parameters for the trade-off between the terms.

3.1.1 Optimization

Following the objective of DMvCCA [5], we aim to directly optimize the trace ratio in (3). The output of each sub-network is denoted by $Y_v = f_v(X_v)$. Differentiating the numerator and the denominator of the trace ratio separately, we have




By using (7) and (8) and following the quotient rule, we derive the stochastic optimization of MvCCAE to be


The gradient of the autoencoding loss is derived from the view-specific sub-networks $f_v$ and $g_v$. The encoding sub-network $f_v$ is optimized to obtain the output $Y_v$, while the gradient of the decoding network $g_v$ with respect to its parameters can be obtained using the chain rule.
3.2 Multi-view Modularly Discriminant Auto-Encoder (MvMDAE)

Similar to MvCCAE, the objective of MvMDAE is to jointly optimize the view-specific reconstruction error and the cross-view discriminant correlation,

$$\min_{f_v, g_v, W_v} \; -F_D + \sum_{v=1}^{V} \big( \alpha\, \ell_v + \beta\, R_v^{(l)} \big), \qquad (10)$$

where $F_D$ is the trace-ratio objective of DMvMDA evaluated on the encoded representations, $\ell_v$ is the reconstruction loss of the $v$-th autoencoder, and $R_v^{(l)}$ is its layer-wise regularization.
3.2.1 Optimization

The detailed optimization is derived by replacing the Laplacian matrix in MvCCAE with $L_B$ and $L_W$. We let


Then, we have




The stochastic optimization of MvMDAE can be derived by using (11), (12) and applying the quotient rule as follows,


The gradient of the objective can be calculated using the chain rule, and the stochastic gradient descent (SGD) is used with mini-batches for optimization.

Fig. 2: System diagram of the Deep Multi-view Discriminant Ranking (DMvDR). First, features are extracted as the data representations of the different views and fed through the individual sub-networks to obtain the nonlinear representation of each view. The results are then passed through two pipelines of networks. One pipeline goes through the projection that maps all view representations to the common subspace, where their concatenation is trained to optimize the fused ranking loss with the fused sub-network. The other pipeline connects to the view-specific sub-network for the optimization of each view's own ranking loss.

3.3 Multi-view Subspace Learning to Rank (MvSL2R)

Multi-view subspace learning to rank is formulated based on the fact that the projected features in the common subspace can be used to train a scoring function for ranking. We generate the training data from the intersection of the ranking samples between views, so as to have the same samples with various representations from different view origins. The overall ranking agreement is obtained by calculating the average vote over the intersected ranking orders (14).

By performing the pairwise transform of Section 2.1.1 over the ranking data, we obtain the inputs of the $V$ views and the cross-view relevance scores computed from the average ranking orders (14). The proposed ranking method consists of mapping the features into a common subspace, training a logistic regressor as the scoring function, and predicting the relevance of new sample pairs using the probability function


where $W_v$ is the data projection matrix of the $v$-th view, and $w$ is the weight vector of the logistic regressor described in (1). We summarize these steps in the algorithm below.

1 Function MvSL2R;
Input: The feature vectors of the $V$ views, the relevance scores, and the dimensionality of the subspace.
Output: The predicted relevance probabilities of the new data.
2 Perform the nonlinear transformation to obtain the representation of each view in its autoencoder.
3 Perform the subspace learning by optimizing (6) or (10) to obtain the projection matrices.
4 Train a logistic regressor (1) as the scoring function to obtain the weight matrix.
5 Predict the relevance probabilities of the new sample pairs using (15) with the trained sub-networks and the obtained weights.
Algorithm 1: Multi-view Subspace Learning to Rank.
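The scoring step of the algorithm above can be sketched as a logistic regressor trained by gradient descent on the pairwise differences already projected into the common subspace. Function names and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def train_scoring_function(Z, labels, lr=0.5, epochs=200):
    """Fit logistic-regression weights on pairwise feature differences Z
    that have been projected into the common subspace (step 4 above)."""
    w = np.zeros(Z.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-Z @ w))        # predicted relevance
        w -= lr * Z.T @ (p - labels) / len(labels)  # gradient step
    return w

def predict_relevance(Z_new, w):
    """Probability that the first sample of each pair outranks the second."""
    return 1.0 / (1.0 + np.exp(-Z_new @ w))
```

On linearly separable pairwise data this converges to weights that place the relevant pairs above 0.5 and the irrelevant ones below, which is all the final prediction step requires.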

3.4 Deep Multi-view Discriminant Ranking (DMvDR)

Multi-view Subspace Learning to Rank provides a promising method with MvCCAE and MvMDAE. However, it has no direct connection to ranking. Continuing the idea of multi-objective optimization, we propose to optimize the view-specific and the joint ranking together in a single network, as shown in Fig. 2. Taking university ranking as an example, various ranking lists are generated from different agencies, and each agency uses a different set of attributes to represent the universities. In training, the cross-entropy loss (2) is optimized with both the view-specific relevance and the joint-view relevance. The attributes of each view are passed through the corresponding view-specific sub-network, and the resulting nonlinear representations, after being mapped to the common subspace, are the inputs of the joint network that generates the joint university ranking list. Each representation is also the input of a view-specific ranking network, which minimizes its distance to the original ranking of that view. As in MvSL2R, we exploit the effectiveness of the intermediate layers between the view-specific sub-networks, but here they are trained towards the ranking loss for DMvDR. The detailed procedure of this method is described below.

The gradient of each view-specific sub-network is calculated from its output with respect to its parameters. Since the loss is passed back through each view-specific sub-network, the gradient can be calculated independently with respect to the output of each view-specific sub-network. The gradient of each sub-network with respect to its weights can then be determined through backpropagation. All sub-networks contain several layers with sigmoid activation functions.

The fused sub-network is updated with the gradient of the ranking loss from the cross-view relevance scores. Similar to the generation of training data in MvSL2R, we find the intersection of the ranking data with different representations or measurements from various sources, and perform the pairwise transform to obtain the sample pairs as the input, with the relevance labels derived from the cross-view ranking orders in (14). As a result, the input to the fused sub-network is the concatenation of the nonlinear mappings from the view-specific networks (16).


During testing, we can distinguish two possible scenarios: (a) if the samples are aligned and all views are present, the results of the nonlinear mappings are combined in the same manner as in the training phase to generate a fused ranking list at the end of the fused sub-network; and (b) if samples are missing or completely unaligned in the test data, each view is passed only through its own sub-network. The resulting view-specific prediction still maintains the cross-view agreement learned by the trained joint network. The remaining gradients can easily be calculated afterwards using SGD.

Joint ranking is achieved using a multi-view subspace embedding layer. Similar to MvMDAE, we take the mappings from the outputs of the view-specific sub-networks. The gradient of the multi-view subspace embedding (MvSE) in the trace ratio form is calculated by combining (11) and (12):


The embedding layer is important as its output is passed forward to the fused sub-network, while the training error is propagated backward through its layers to reach the input. In turn, the parameters of the view-specific sub-networks are also affected by the fused ranking loss.

The update of each view-specific sub-network depends on both the view-specific ranking output and the cross-view relevance, as it is a common sub-network in both pipelines. Through backpropagation, the $v$-th sub-networks are optimized consecutively with respect to their gradients. Meanwhile, the training error with respect to the fused ranking is passed through the multi-view subspace embedding (MvSE) in (16) as the input to the fused sub-network. The resulting gradient of each sub-network is given by


where the two scaling factors control the magnitude of the respective ranking losses. Similar to the other sub-networks, the gradients with respect to the parameters can be obtained by following the chain rule.

The update of the entire network of DMvDR can be summarized using SGD with mini-batches. The parameters of the sub-networks are denoted collectively by $\theta$. A gradient descent step is $\theta_{t+1} = \theta_t - \eta\, \nabla_\theta L$, where $\eta$ is the learning rate. The gradient update step at time $t$ can be written down with the chain rule collectively:


We generate the training data using the pairwise transform presented in Section 2.1.1. During testing, the test samples can also be transformed into pairs to evaluate the relative relevance of each sample to its query. The raw ranking data can also be fed into the trained model to predict their overall ranking positions.

4 Experiments

In this section, we evaluate the performance of the proposed multi-view learning to rank methods on three challenging problems: university ranking, multi-lingual ranking and image data ranking. The proposed methods are also compared to the related subspace learning and co-training methods. The subspace learning methods follow the steps proposed in Section 3.3 for ranking. We compare the performance of the following methods in the experiments:

  • Best Single View: a method which shows the best performance of Ranking SVM [27] over the individual views.

  • Feature Concat: a method which concatenates the features of the common samples for training a Ranking SVM [27].

  • LMvCCA [5]: a linear multi-view CCA method.

  • LMvMDA [5]: a linear supervised method for multi-view subspace learning.

  • MvDA [6]: another linear supervised method for multi-view subspace learning. It differs from the above in that the view difference is not encoded in this method.

  • SmVR [8]: a semi-supervised method that seeks a global agreement in ranking. It belongs to the category of co-training. We use the complete data for training in the following experiments so that its comparison with the subspace learning methods is fair; SmVR therefore becomes a supervised method in this paper.

  • DMvCCA [5]: a nonlinear extension of LMvCCA using neural networks.

  • DMvMDA [5]: a nonlinear extension of LMvMDA using neural networks.

  • MvCCAE: the first multi-view subspace learning to rank method proposed in this paper.

  • MvMDAE: the supervised multi-view subspace learning to rank method proposed in the paper.

  • DMvDR: the end-to-end multi-view learning to rank method proposed in the paper.

We present the quantitative results using several evaluation metrics, including the Mean Average Precision (MAP), classification accuracy and Kendall's tau. The Average Precision (AP) measures the relevance of all query-sample pairs with respect to the same query, while the MAP score calculates the mean AP across all queries [44]. After performing the pairwise transform on the ranking data, the relevance prediction can be considered a binary classification problem, and therefore classification accuracy is utilized for evaluation. Kendall's tau measures the ordinal association between two lists of samples.

We also present the experimental results graphically, and the following measures are used. The Mean Average Precision (MAP) score, which is the average precision at the ranks where recall changes, is illustrated on the 11-point interpolated precision-recall curves (PR curve) to show the ranking performance. Also, the ROC curve provides a graphical representation of the binary classification performance. It shows the true positive rates against the false positive rate at different thresholds. The correlation plots show linear correlation coefficients between two ranking lists.
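For reference, the two ranking metrics can be sketched in a few lines. These are simplified textbook definitions (Kendall's tau-a without tie correction, and uninterpolated AP over one query), not the exact evaluation code used in the experiments.

```python
import numpy as np

def kendalls_tau(a, b):
    """Kendall's tau-a: (concordant - discordant) pairs over all pairs."""
    n = len(a)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = np.sign(a[i] - a[j]) * np.sign(b[i] - b[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

def average_precision(labels, scores):
    """AP for one query: mean of the precisions at each relevant hit,
    with candidates ranked by descending score."""
    order = np.argsort(-np.asarray(scores, dtype=float))
    ranked = np.asarray(labels)[order]
    hits, precisions = 0, []
    for rank, rel in enumerate(ranked, start=1):
        if rel:
            hits += 1
            precisions.append(hits / rank)
    return float(np.mean(precisions)) if precisions else 0.0
```

Identical lists give tau = 1 and fully reversed lists give tau = -1, matching the intuition that tau measures ordinal agreement.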

4.1 University Ranking

The university ranking dataset available in [45]

collects the world ranking data from three rating agencies, including the Times Higher Education (THE) World University Ranking, the Academic Ranking of World Universities (ARWU), and the Center for World University Rankings (CWUR). Despite political and controversial influences, they are widely considered as authorities for university ranking. The measurements are used as the feature vectors after feature preprocessings, which includes feature standardization and removal of categorical variables and groundtruth indicators including the ranking orders, university name, location, year and total scores. The

common universities from 2012 to 2014 are considered for training. After the pairwise transform in each year, samples are generated as the training data. The entire data in 2015 is considered for testing. The data distribution (after binary transform) of the common universities in 2015 is shown in Fig. 3.

(a) Raw data distribution.
(b) Projected data distribution.
Fig. 3: The left plot shows the data distribution by concatenating the measurements as features of the common universities from different agencies in 2015. The right plot shows the concatenated and projected features using MvMDAE for the same universities.

We can make several observations from the data distribution in Fig. 3. First, the pairwise transform is applied to the university ranking data, which assigns the original ranking data equally to two classes. Then, the dimensionality of the data is reduced to two using PCA in order to display it in the plots of Fig. 3. The data is labelled with two colors, red and green, indicating the relevance between samples. We can notice a high overlap between the two classes in the case of the raw data (left plot of Fig. 3), while the data on the right is clearly better separated after the projection using the proposed MvMDAE. This shows the discriminative power of the proposed supervised embedding method.

Fig. 4: Rank correlation matrix for views 1-3 and the fused view.

Furthermore, a rank correlation matrix of plots is presented in Fig. 4, with correlations among pairs of ranking lists from views 1-3 and the predicted list denoted by 'Fused'. Histograms of the ranking data are shown along the matrix diagonal, while scatter plots of data pairs appear off the diagonal. The slopes of the least-squares reference lines in the scatter plots are equal to the displayed correlation coefficients. The fused ranking list is produced by the proposed DMvDR, and the results are also generated from the common universities in 2015. We first take a closer look at the correlations between views 1-3. The correlation coefficients are generally low, with the highest between views 1 and 3. In contrast, the fused rank has a high correlation with each view: the scatter plots and the reference lines are well aligned, and the correlation coefficients are all high, demonstrating that the proposed DMvDR effectively exploits the global agreement with all views.

Finally, the average prediction results of the proposed and competing methods over the three university ranking datasets are reported in Table I. Due to the misalignment of the 2015 ranking data across datasets, we make the ranking prediction based on each view input, as further elaborated in Section 3.4. We observe that Ranking SVM [27] on a single feature set or on the concatenated features performs poorly compared to the other methods. This shows that when the data is heterogeneous, simply combining the features cannot enhance joint ranking. The Kendall's tau scores of the linear subspace learning methods are comparatively higher than those of their nonlinear counterparts. This is due to the fact that the nonlinear methods aim to maximize the correlation in the embedding space, while the scoring function is not optimized for ranking. In contrast, DMvDR optimizes the entire ranking process, which is confirmed by the highest ranking and classification performance.


Methods            Kendall's tau   Accuracy
Best Single View   65.38           -
Feature Concat     35.10           -
LMvCCA [5]         86.04           94.49
LMvMDA [5]         87.00           94.97
MvDA [6]           85.81           94.34
SmVR [8]           80.75           -
DMvCCA [5]         70.07           93.20
DMvMDA [5]         70.81           94.75
MvCCAE (ours)      75.94           94.01
MvMDAE (ours)      81.04           94.85
DMvDR (ours)       89.28           95.30

TABLE I: Average Prediction Results (%) on 3 University Ranking Datasets in 2015.

4.2 Multi-lingual Ranking

(a) PR curve on Reuters.
(b) ROC curve on Reuters.
Fig. 5: The PR and ROC curves with 2-5 views applied to Reuters dataset.


                    2 views            3 views            4 views            5 views
Methods             MAP@100  Accuracy  MAP@100  Accuracy  MAP@100  Accuracy  MAP@100  Accuracy
Feature Concat      58.87    70.41     56.97    70.10     57.59    69.88     58.46    69.97
LMvCCA [5]          59.10    70.20     62.40    72.01     54.41    66.61     60.41    72.62
LMvMDA [5]          59.09    70.16     58.81    71.94     61.54    72.45     59.28    72.07
MvDA [6]            55.95    69.03     55.42    67.57     55.64    68.64     58.93    68.46
SmVR [8]            78.37    71.44     78.24    71.15     78.66    71.37     79.36    71.64
DMvCCA [5]          53.87    67.41     42.68    62.02     54.51    68.03     57.27    65.00
DMvMDA [5]          60.08    71.40     63.12    70.93     61.55    72.12     62.52    70.78
MvCCAE (ours)       48.75    66.43     49.10    62.90     60.70    71.86     48.80    63.05
MvMDAE (ours)       62.63    74.20     63.02    71.04     60.74    72.60     62.74    71.20
DMvDR (ours)        80.01    72.68     79.34    72.23     80.32    73.07     81.64    72.39

TABLE II: Quantitative Results (%) on the Reuters Dataset.

The multi-lingual ranking is performed on the Reuters RCV1/RCV2 Multilingual, Multi-view Text Categorization Test collection [3], referred to as Reuters in the following. It is a large collection of documents with news articles written in five languages and grouped into categories by topic. A bag-of-words (BOW) representation based on TF-IDF weighting [44] is used to represent the documents. The vocabulary is large on average, making the representation very sparse.

We consider the English documents and their translations to the other 4 languages in our experiment. Specifically, the 5 views are numbered as follows:

  • View 1: original English documents;

  • View 2: English documents translated to French;

  • View 3: English documents translated to German;

  • View 4: English documents translated to Italian;

  • View 5: English documents translated to Spanish.

Due to its high dimensionality, the BOW representation of each document is projected with a sparse SVD onto a 50-dimensional compact feature vector. We randomly select 40 samples from each category in each view as training data. The pairwise training samples are generated between pairs of English documents using the pairwise transform of Section 2.1.1, and the translations to the other languages are used to augment the views. We select another 360 samples from 6 categories and create a test dataset of document pairs. Since the ranking function can be taken as linear, as shown in [24], we make document pairs comparable and balance the classes by negating the feature vectors of part of the pairs, so that the samples are equally distributed between both classes.
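The pairwise transform with class balancing described above can be sketched as follows; the feature matrix and relevance scores are synthetic placeholders, and the exact construction of Section 2.1.1 may differ in detail.

```python
import numpy as np

def pairwise_transform(X, rel, seed=0):
    """Turn pointwise data (features X, relevance scores rel) into pairwise
    data: difference vectors x_i - x_j labeled +1 when item i should rank
    above item j. Roughly half of the pairs are negated (features and label)
    so that both classes are equally populated, as described in the text."""
    rng = np.random.default_rng(seed)
    diffs, labels = [], []
    n = len(rel)
    for i in range(n):
        for j in range(i + 1, n):
            if rel[i] == rel[j]:
                continue  # incomparable pair, skip
            d = X[i] - X[j] if rel[i] > rel[j] else X[j] - X[i]
            if rng.random() < 0.5:   # balance by flipping sign and label
                diffs.append(-d)
                labels.append(-1)
            else:
                diffs.append(d)
                labels.append(1)
    return np.array(diffs), np.array(labels)

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 50))       # e.g. 50-d SVD-projected documents (toy)
rel = rng.integers(0, 3, size=20)   # toy relevance levels
P, y = pairwise_transform(X, rel)
```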

We first analyze the PR and ROC curves in Fig. 5. Since we have all translations of the English documents, each sample is well aligned across all views, and we therefore perform joint learning and prediction in all multi-lingual experiments. The experiments start with 2 views, English and its French translation, and the views are then augmented with the documents of the other languages. Subspace ranking methods are trained by embedding an increasing number of views, while SmVR, as a co-training method, takes two views at a time, and the average performance over all pairs is reported. The proposed methods and two competing ones are included in the plots of Fig. 5. The proposed DMvDR clearly performs the best across all views, as can be seen in both the PR and ROC plots. SmVR is second best, with a lower precision and a smaller area under the curve than DMvDR. Among the remaining three methods, DMvMDA performs favorably in the PR curves but not as well in the ROC plots. The results are comparatively consistent across all views.
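Curves like those in Fig. 5 can be generated from a ranker's scores on the test pairs roughly as follows; the labels and scores below are synthetic placeholders, not the outputs of any of the compared models.

```python
# Sketch: PR and ROC curves from binary pair labels and ranking scores.
import numpy as np
from sklearn.metrics import precision_recall_curve, roc_curve, auc

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=500)              # 1 = relevant pair (toy)
scores = y_true + rng.normal(0.0, 0.7, size=500)   # noisy ranking scores

precision, recall, _ = precision_recall_curve(y_true, scores)
fpr, tpr, _ = roc_curve(y_true, scores)
roc_auc = auc(fpr, tpr)                            # area under the ROC curve
# precision/recall and fpr/tpr can then be plotted as in Fig. 5
```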

We can observe the quantitative MAP and accuracy results in Table II. It shows that the linear methods, together with the feature concatenation, have similar results, which are generally inferior to the nonlinear methods in classification. Note also that the nonlinear subspace learning methods cannot provide any superior MAP scores, which can be explained by the fact that the embedding is only intended to construct a discriminative feature space for classifying the pairs of data. We can also observe that the MAP scores and accuracies are stable across views. This can be interpreted as meaning that a certain level of global ranking agreement can be reached when all languages correspond to each other. It is again confirmed that the end-to-end solution consistently provides the highest scores, while SmVR is a few percentage points behind. When the features from different views follow a similar data distribution, the co-training method performs well and competes with the proposed DMvDR.
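The MAP@100 figures in Table II can be computed with the following common variant of average precision, which averages the precision at the positions of relevant items within the top k; the relevance lists in the example are illustrative only.

```python
def average_precision_at_k(ranked_rel, k=100):
    """AP@k for one query: ranked_rel is a 0/1 relevance list in ranked
    order. Precision is accumulated at each relevant position in the top k
    and averaged over the number of relevant items retrieved."""
    hits, total = 0, 0.0
    for i, rel in enumerate(ranked_rel[:k]):
        if rel:
            hits += 1
            total += hits / (i + 1)
    return total / hits if hits else 0.0

def mean_average_precision(queries, k=100):
    """MAP@k: mean of AP@k over all queries."""
    return sum(average_precision_at_k(q, k) for q in queries) / len(queries)

# Relevant items at ranks 1 and 3 -> AP = (1/1 + 2/3) / 2 = 5/6
print(average_precision_at_k([1, 0, 1, 0]))
```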

4.3 Image Data Ranking

Image data ranking is the problem of evaluating the relevance between two images represented by different types of features. We adopt the Animals with Attributes (AWA) dataset [46] for this problem due to its diversity of animal appearance and large number of classes. The dataset is composed of 50 animal classes annotated with 85 animal attributes. We follow the feature generation in [5] and adopt 3 feature types forming the views:

  • Image feature by a pre-trained VGG-16 model: a 1000-dimensional feature vector is produced from each image by resizing it to the network input size and taking the outputs of the final layer of a 16-layer VGGNet [47].

  • Class label encoding: a 100-dimensional word2vec feature is extracted from each class label. We then map the visual feature of each image to the text feature space using a ridge regressor with a setting similar to [5], generating another set of textual features with a connection to the visual world. The text embedding space is constructed by training a skip-gram [48] model on the entire English Wikipedia, comprising billions of words.

  • Attribute encoding: an 85-dimensional feature vector is produced with a similar idea as above. Since each class of animals exhibits typical attribute patterns, a lookup table can be constructed to connect the classes and attributes [49, 50]. Then, we map each image feature to the attribute space to produce the mid-level feature.
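The label and attribute encodings above both rely on mapping visual features into a target embedding space with ridge regression. A minimal closed-form sketch with toy dimensions follows; the paper uses 1000-dimensional VGG features and 100-dimensional word2vec targets, and the penalty value here is an assumption, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 20))    # visual features, n x d_v (toy sizes)
T = rng.normal(size=(80, 5))     # target embeddings per image, n x d_t

lam = 1.0                        # ridge penalty (assumed, not from the paper)
# Closed-form ridge regression: W = (X^T X + lam*I)^(-1) X^T T
W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ T)
text_view = X @ W                # mid-level features in the target space
```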

We generate the image ranking data as follows. From the 50 classes of animal images, we form in-class image pairs and out-of-class image pairs from each class to obtain the training pairs, and similarly the test pairs. A subset of images from each class is used for training, with a separate set of samples as test data. Another 10 images are used as queries: 5 of them are associated with in-class images as positive sample pairs and 5 with out-of-class images as negative sample pairs. For the negative sample pairs, we randomly select classes from the remaining animal classes at a time, and one image per class is associated with each query image under study.

Fig. 6: PR and ROC curves on AWA.


Methods            MAP@100   Accuracy
Feature Concat     38.08     50.60
LMvCCA [5]         49.97     51.85
LMvMDA [5]         49.70     52.35
MvDA [6]           49.20     52.82
SmVR [8]           52.12     50.33
DMvCCA [5]         51.38     50.83
DMvMDA [5]         51.52     51.38
MvCCAE (ours)      49.01     53.28
MvMDAE (ours)      48.99     53.30
DMvDR (ours)       76.83     71.48

TABLE III: Quantitative Results (%) on the AWA Dataset.

We can observe the performance of the methods on the animal dataset graphically in Fig. 6 and quantitatively in Table III. DMvDR outperforms the other competing methods by a large margin, as shown in the plots of Fig. 6. Due to the variety of data distributions across the feature types used as view inputs, the co-training type of SmVR can no longer compete with the end-to-end solution. From Table III, one can observe that the performance of the feature concatenation suffers from the same problem. On the other hand, our proposed subspace ranking methods produce satisfactory classification rates while their precisions remain somewhat low. This again implies that it is critical to train the scoring function together with the feature mappings. The other linear and nonlinear subspace ranking methods have comparatively similar performance at a lower level.

5 Conclusion

Learning to rank has been a popular research topic with numerous applications, while multi-view ranking remains a relatively new research direction. In this paper, we aimed to associate multi-view subspace learning methods with the ranking problem and proposed three methods in this direction. MvCCAE is an unsupervised multi-view embedding method, while MvMDAE is its supervised counterpart. Both incorporate multiple objectives, maximizing the correlation on the one hand and minimizing the reconstruction error on the other, and both have been extended into the multi-view subspace learning to rank scheme. Finally, DMvDR is proposed to exploit the global agreement while minimizing the individual ranking losses in a single optimization process. The experimental results validate the superior performance of DMvDR compared to the other subspace and co-training methods on multi-view datasets with both homogeneous and heterogeneous data representations.

In the future, we will explore the scenario in which data are missing, which is beyond the scope of the proposed subspace ranking methods during training. Multiple networks can also be combined by concatenating their outputs and further optimized in a single sub-network. This solution may also be applicable to homogeneous representations.


  • [1] T.-Y. Liu, “Learning to rank for information retrieval,” Foundations and Trends in Information Retrieval, vol. 3, no. 3, pp. 225–331, 2009.
  • [2] Y. Zhu, G. Wang, J. Yang, D. Wang, J. Yan, J. Hu, and Z. Chen, “Optimizing search engine revenue in sponsored search,” in Proceedings of the 32nd international ACM SIGIR conference on Research and development in information retrieval.   ACM, 2009, pp. 588–595.
  • [3] M. Amini, N. Usunier, and C. Goutte, “Learning from multiple partially observed views - an application to multilingual text categorization,” in Advances in Neural Information Processing Systems 22.   Curran Associates, Inc., 2009, pp. 28–36.
  • [4] J. Yu, D. Tao, M. Wang, and Y. Rui, “Learning to rank using user clicks and visual features for image retrieval,” IEEE transactions on cybernetics, vol. 45, no. 4, pp. 767–779, 2015.
  • [5] G. Cao, A. Iosifidis, K. Chen, and M. Gabbouj, “Generalized multi-view embedding for visual recognition and cross-modal retrieval,” IEEE Transactions on Cybernetics, 2017, doi: 10.1109/TCYB.2017.2742705.
  • [6] M. Kan, S. Shan, H. Zhang, S. Lao, and X. Chen, “Multi-view discriminant analysis,” IEEE Transactions on Pattern Analysis and Machine Intelligence (T-PAMI), vol. 38, no. 1, pp. 188–194, Jan 2016.
  • [7] A. Blum and T. Mitchell, “Combining labeled and unlabeled data with co-training,” in Proceedings of the Eleventh Annual Conference on Computational Learning Theory.   ACM, 1998, pp. 92–100.
  • [8] N. Usunier, M.-R. Amini, and C. Goutte, “Multiview semi-supervised learning for ranking multilingual documents,” Machine Learning and Knowledge Discovery in Databases, pp. 443–458, 2011.
  • [9] D. R. Hardoon, S. Szedmak, and J. Shawe-Taylor, “Canonical correlation analysis: An overview with application to learning methods,” Neural computation, vol. 16, no. 12, pp. 2639–2664, 2004.
  • [10] J. Rupnik and J. Shawe-Taylor, “Multi-view canonical correlation analysis,” in Slovenian KDD Conference on Data Mining and Data Warehouses (SiKDD 2010), 2010, pp. 1–4.
  • [11] A. Iosifidis, A. Tefas, and I. Pitas, “Kernel reference discriminant analysis,” Pattern Recognition Letters, vol. 49, pp. 85–91, 2014.
  • [12] A. Iosifidis and M. Gabbouj, “Nyström-based approximate kernel subspace learning,” Pattern Recognition, vol. 57, pp. 190–197, 2016.
  • [13] G. Andrew, R. Arora, J. Bilmes, and K. Livescu, “Deep canonical correlation analysis,” in Proceedings of the 30th International Conference on Machine Learning, 2013, pp. 1247–1255.
  • [14] G. Cao, A. Iosifidis, and M. Gabbouj, “Multi-view nonparametric discriminant analysis for image retrieval and recognition,” IEEE Signal Processing Letters, vol. 24, no. 10, pp. 1537–1541, Oct 2017.
  • [15] F. Feng, L. Nie, X. Wang, R. Hong, and T.-S. Chua, “Computational social indicators: A case study of Chinese university ranking,” in Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval.   New York, NY, USA: ACM, 2017, pp. 455–464.
  • [16] “The Times Higher Education world university ranking,” 2016.
  • [17] N. C. Liu and Y. Cheng, “The academic ranking of world universities,” Higher education in Europe, vol. 30, no. 2, pp. 127–136, 2005.
  • [18] T. Liu, J. Wang, J. Sun, N. Zheng, X. Tang, and H.-Y. Shum, “Picture collage,” IEEE Transactions on Multimedia (TMM), vol. 11, no. 7, pp. 1225 –1239, 2009.
  • [19] X. Li, T. Pi, Z. Zhang, X. Zhao, M. Wang, X. Li, and P. S. Yu, “Learning Bregman distance functions for structural learning to rank,” IEEE Transactions on Knowledge and Data Engineering, vol. 29, no. 9, pp. 1916–1927, Sept 2017.
  • [20] O. Wu, Q. You, X. Mao, F. Xia, F. Yuan, and W. Hu, “Listwise learning to rank by exploring structure of objects,” IEEE Transactions on Knowledge and Data Engineering, vol. 28, no. 7, pp. 1934–1939, 2016.
  • [21] L. Page, S. Brin, R. Motwani, and T. Winograd, “The pagerank citation ranking: Bringing order to the web.” Stanford InfoLab, Tech. Rep., 1999.
  • [22] W. W. Cohen, R. E. Schapire, and Y. Singer, “Learning to order things,” in Advances in Neural Information Processing Systems, 1998, pp. 451–457.
  • [23] C. Burges, T. Shaked, E. Renshaw, A. Lazier, M. Deeds, N. Hamilton, and G. Hullender, “Learning to rank using gradient descent,” in Proceedings of the 22nd international conference on Machine learning.   ACM, 2005, pp. 89–96.
  • [24] R. Herbrich, T. Graepel, and K. Obermayer, “Large margin rank boundaries for ordinal regression,” 2000.
  • [25] Y. Freund, R. Iyer, R. E. Schapire, and Y. Singer, “An efficient boosting algorithm for combining preferences,” The Journal of machine learning research, vol. 4, pp. 933–969, 2003.
  • [26] Y. Freund and R. E. Schapire, “A decision-theoretic generalization of on-line learning and an application to boosting,” in European Conference on Computational Learning Theory.   Springer, 1995, pp. 23–37.
  • [27] T. Joachims, “Optimizing search engines using clickthrough data,” in Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining.   ACM, 2002, pp. 133–142.
  • [28] J. H. Friedman, “Greedy function approximation: a gradient boosting machine,” Annals of statistics, pp. 1189–1232, 2001.
  • [29] J. Xu and H. Li, “AdaRank: a boosting algorithm for information retrieval,” in Proceedings of the 30th annual international ACM SIGIR conference on Research and development in information retrieval.   ACM, 2007, pp. 391–398.
  • [30] C. J. Burges, R. Ragno, and Q. V. Le, “Learning to rank with nonsmooth cost functions,” in Advances in neural information processing systems, 2007, pp. 193–200.
  • [31] H. Hotelling, “Relations between two sets of variates,” Biometrika, pp. 321–377, 1936.
  • [32] M. Borga, “Canonical correlation: a tutorial,” 2001.
  • [33] A. A. Nielsen, “Multiset canonical correlations analysis and multispectral, truly multitemporal remote sensing data,” IEEE Transactions on Image Processing (TIP), vol. 11, no. 3, pp. 293–305, 2002.
  • [34] Y. Gong, Q. Ke, M. Isard, and S. Lazebnik, “A multi-view embedding space for modeling internet images, tags, and their semantics,” International Journal of Computer Vision, vol. 106, no. 2, pp. 210–233, 2014.
  • [35] Y. Luo, D. Tao, K. Ramamohanarao, C. Xu, and Y. Wen, “Tensor canonical correlation analysis for multi-view dimension reduction,” IEEE Transactions on Knowledge and Data Engineering, vol. 27, no. 11, pp. 3111–3124, Nov 2015.
  • [36] A. Sharma, A. Kumar, H. Daume III, and D. W. Jacobs, “Generalized multiview analysis: A discriminative latent space,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR).   IEEE, 2012, pp. 2160–2167.
  • [37] J. Ngiam, A. Khosla, M. Kim, J. Nam, H. Lee, and A. Y. Ng, “Multimodal deep learning,” in Proceedings of the 28th international conference on machine learning (ICML-11), 2011, pp. 689–696.
  • [38] W. Wang, R. Arora, K. Livescu, and J. Bilmes, “On deep multi-view representation learning,” in Proceedings of the 32nd International Conference on Machine Learning (ICML), 2015, pp. 1083–1092.
  • [39] S. Chandar, M. M. Khapra, H. Larochelle, and B. Ravindran, “Correlational neural networks,” Neural computation, 2016.
  • [40] M. Kan, S. Shan, and X. Chen, “Multi-view deep network for cross-view classification,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 4847–4855.
  • [41] S. Yan, D. Xu, B. Zhang, H.-J. Zhang, Q. Yang, and S. Lin, “Graph embedding and extensions: a general framework for dimensionality reduction,” IEEE Transactions on Pattern Analysis and Machine Intelligence (T-PAMI), vol. 29, no. 1, pp. 40–51, 2007.
  • [42] Y. Jia, F. Nie, and C. Zhang, “Trace ratio problem revisited,” IEEE Transactions on Neural Networks, vol. 20, no. 4, pp. 729–735, 2009.
  • [43] Y. LeCun, L. Bottou, G. B. Orr, and K.-R. Müller, “Efficient backprop,” in Neural networks: Tricks of the trade.   Springer, 1998, pp. 9–50.
  • [44] C. D. Manning, P. Raghavan, H. Schütze et al., Introduction to information retrieval.   Cambridge university press Cambridge, 2008, vol. 1, no. 1.
  • [45] “World university rankings: A Kaggle dataset,” 2016.
  • [46] C. H. Lampert, H. Nickisch, and S. Harmeling, “Attribute-based classification for zero-shot visual object categorization,” IEEE Transactions on Pattern Analysis and Machine Intelligence (T-PAMI), vol. 36, no. 3, pp. 453–465, 2014.
  • [47] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” International Conference on Learning Representations (ICLR), 2015.
  • [48] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean, “Distributed representations of words and phrases and their compositionality,” in Advances in Neural Information Processing Systems (NIPS), 2013, pp. 3111–3119.
  • [49] C. Kemp, J. B. Tenenbaum, T. L. Griffiths, T. Yamada, and N. Ueda, “Learning systems of concepts with an infinite relational model,” in AAAI, vol. 3, 2006, p. 5.
  • [50] D. N. Osherson, J. Stern, O. Wilkie, M. Stob, and E. E. Smith, “Default probability,” Cognitive Science, vol. 15, no. 2, pp. 251–269, 1991.