cs-ranking
Context-sensitive ranking in Python with Tensorflow
Object ranking is an important problem in the realm of preference learning. On the basis of training data in the form of a set of rankings of objects, which are typically represented as feature vectors, the goal is to learn a ranking function that predicts a linear order of any new set of objects. Current approaches commonly focus on ranking by scoring, i.e., on learning an underlying latent utility function that seeks to capture the inherent utility of each object. These approaches, however, are not able to take possible effects of context-dependence into account, where context-dependence means that the utility or usefulness of an object may also depend on what other objects are available as alternatives. In this paper, we formalize the problem of context-dependent ranking and present two general approaches based on two natural representations of context-dependent ranking functions. Both approaches are instantiated by means of appropriate neural network architectures. We demonstrate empirically that our methods outperform traditional approaches on benchmark tasks in which context-dependence plays a relevant role.
In preference learning (PL-book), the learner is generally provided with a set of items (e.g., products) for which preferences are known, and the task is to learn a function that predicts preferences for a new set of items (e.g., new products not seen so far), or for the same set of items in a different situation (e.g., the same products but for a different user). Frequently, the predicted preference relation is required to form a total order, in which case we also speak of a ranking problem. In fact, among the problems in the realm of preference learning, the task of “learning to rank” has probably received the most attention in the literature so far, and a number of different ranking problems have already been introduced.
The focus of this paper is on so-called object ranking (cohe_lt98; kami_as10). Given training data in the form of a set of exemplary rankings of subsets of objects, the goal in object ranking is to learn a ranking function that is able to predict the ranking of any new set of objects. As a typical example, consider an eCommerce scenario, in which a customer is ranking a set of products, each characterized by different properties and attributes, according to her preferences.
In economics, classical choice theory assumes that, for a given user, each alternative has an inherent utility, and that choices and decisions are made on the basis of these utilities. Yet, many studies have shown that these idealized assumptions are often violated in practice. For example, choices are also influenced by the decision context, i.e., by the availability of other alternatives (huber1982adding; simonson1992; tversky1993; dhar2000). Motivated by observations of that kind, the focus of this paper is on the problem of context-dependent ranking. In this regard, our contributions are as follows. First, we formalize the problem of context-dependent ranking and present two general approaches based on two natural representations of context-dependent ranking functions: First Evaluate Then Aggregate (FETA) and First Aggregate Then Evaluate (FATE). Second, both approaches are instantiated by means of appropriate neural network architectures, called FETA-Net and FATE-Net, respectively. These architectures can be trained in an end-to-end manner. Third, we conduct an experimental evaluation of our methods, using both synthetic and real-world data in which context-dependence plays a relevant role. Empirically, we are able to show that our methods outperform traditional approaches on these tasks.
We assume a reference set of objects denoted by $\mathcal{X}$, where each object is described by a feature vector; thus, an object is a vector $\boldsymbol{x} = (x_1, \ldots, x_d) \in \mathbb{R}^d$, and $\mathcal{X} \subseteq \mathbb{R}^d$. A ranking task is specified by a finite subset of objects $Q = \{\boldsymbol{x}_1, \ldots, \boldsymbol{x}_n\} \subseteq \mathcal{X}$, for some $n \in \mathbb{N}$, and the task itself consists of predicting a preferential ordering of these objects, that is, a ranking. The latter is encoded in terms of a permutation $\pi \in \mathbb{S}_n$, where $\mathbb{S}_n$ denotes the set of all permutations of length $n$, i.e., all bijective mappings $[n] \to [n]$ (the symmetric group of order $n$). A permutation $\pi$ represents the total order $\boldsymbol{x}_{\pi^{-1}(1)} \succ \boldsymbol{x}_{\pi^{-1}(2)} \succ \cdots \succ \boldsymbol{x}_{\pi^{-1}(n)}$, where $\pi(i)$ is the position of the $i$th object $\boldsymbol{x}_i$, and $\pi^{-1}(j)$ the index of the object on position $j$ ($\pi$ is often called a ranking and $\pi^{-1}$ an ordering). Formally, a ranking function can thus be understood as a mapping $\rho: \mathcal{Q} \to \mathcal{R}$, where $\mathcal{Q}$ is the ranking task space (or simply task space) and $\mathcal{R}$ the ranking space.
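The relationship between rankings and orderings above can be made concrete in a few lines of NumPy; this is an illustrative sketch (using 0-based positions instead of the 1-based convention in the text), not code from the cs-ranking package:

```python
import numpy as np

def ranking_and_ordering(scores):
    """Derive the ordering (object indices sorted by decreasing score)
    and the ranking (position of each object) from a score vector."""
    scores = np.asarray(scores)
    ordering = np.argsort(-scores)  # position -> object index
    ranking = np.argsort(ordering)  # object index -> position
    return ranking, ordering

# Object 2 has the highest score, so it occupies position 0;
# ranking and ordering are inverse permutations of each other.
pi, sigma = ranking_and_ordering([0.1, 0.5, 0.9])
```

Applying `sigma` after `pi` (or vice versa) recovers the identity, mirroring the $\pi$/$\pi^{-1}$ relationship.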
Methods for object ranking seek to induce a ranking function from training data in the form of exemplary ranking tasks $Q_1, \ldots, Q_N$ together with observed rankings $\pi_1, \ldots, \pi_N$. Typically, this is done by learning a latent utility function

$$U: \mathcal{X} \to \mathbb{R} \qquad (1)$$
which assigns a real-valued score $U(\boldsymbol{x})$ to each object $\boldsymbol{x}$. Given a task $Q$, a ranking is then simply constructed by sorting the objects in decreasing order of their scores. This approach implies important properties of the induced preferences, i.e., the set of rankings produced by $\rho$ on the ranking task space. In particular, preferences have to be transitive and, moreover, context-independent.
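The ranking-by-scoring approach can be sketched as follows; the utility function used here (negative distance to the origin) is a hypothetical stand-in chosen only to make the example concrete:

```python
import numpy as np

def rank_by_scoring(Q, U):
    """Context-insensitive ranking: score every object independently
    with a latent utility U, then sort by decreasing score."""
    scores = np.array([U(x) for x in Q])
    ordering = np.argsort(-scores)
    return np.argsort(ordering)  # 0-based position of each object

# Hypothetical utility: objects closer to the origin are preferred.
Q = np.array([[3.0, 4.0], [0.0, 1.0], [1.0, 1.0]])
U = lambda x: -np.linalg.norm(x)
```

Because each score depends on one object alone, removing or adding alternatives can never change the relative order of the remaining objects, which is exactly the context-independence discussed next.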
Context-independence means that the preference between two items does not depend on the set of other items in the query (which we consider as defining the context in which the two items are ranked). More formally, consider any pair of objects $\boldsymbol{x}_i, \boldsymbol{x}_j$ and any subsets $Q, Q' \subseteq \mathcal{X}$ such that $\{\boldsymbol{x}_i, \boldsymbol{x}_j\} \subseteq Q \cap Q'$. Moreover, let $\pi$ be the ranking induced by $\rho$ on $Q$ and $\pi'$ the corresponding ranking on $Q'$. Then, context-independence implies that $\boldsymbol{x}_i$ precedes $\boldsymbol{x}_j$ in $\pi$ if and only if $\boldsymbol{x}_i$ precedes $\boldsymbol{x}_j$ in $\pi'$. Obviously, context-independence is closely connected to the famous Luce axiom of choice (luce1959).
Let us note that the notion of “context” is also used with a different meaning in the learning-to-rank literature (and in machine learning in general), namely as a kind of extra dimension. For instance, agrawal06 illustrate their notion of “context-sensitive ranking” with an example in which objects are actors and the extra dimension is the film genre: “Contextual preferences take the form that one item is preferred to another item in a given context. For example, a preference might state the choice for Nicole Kidman over Penelope Cruz in drama movies, whereas another preference might choose Penelope Cruz over Nicole Kidman in the context of Spanish dramas.” Obviously, this differs from our definition of “context”, which is derived from its use in the economics literature.

In practice, the assumption of context-independence of preferences is often violated, because the preferences of individuals are influenced by the context in which decisions are made (bettman1998constructive). In economics, three major context effects have been identified in the literature: the compromise effect (simonson1989choice), the attraction effect (huber1983market), and the similarity effect (tversky1972elimination). To capture effects of context-dependence, our goal is to learn a generalized latent utility function
$$U: \mathcal{X} \times 2^{\mathcal{X}} \to \mathbb{R} \qquad (2)$$
which can be used in the same way as (1) to assign a score to each object of the ranking task. Since the utility function has a second argument, namely a context, it allows for representing context-dependent ranking functions

$$\rho(Q) = \operatorname{argsort}_{\boldsymbol{x} \in Q} \; U\big(\boldsymbol{x}, C(\boldsymbol{x})\big),$$

where, for each object $\boldsymbol{x}$ in a task $Q$, we denote by $C(\boldsymbol{x}) = Q \setminus \{\boldsymbol{x}\}$ its context in this task.
In this section, we present two general approaches based on two natural representations of context-dependent ranking functions. These representations are based on two rather natural ways to decompose the problem of assigning a context-dependent score to an object: “First evaluate then aggregate” (FETA) first evaluates the object in each “sub-context” of a fixed size, and then aggregates these evaluations, whereas “first aggregate then evaluate” (FATE) first aggregates the entire set of alternatives into a single representative, and then evaluates the object in the context of that representative. Interestingly, the former approach has already been used in the literature (volkovs2009), at least implicitly, while the latter is novel to the best of our knowledge. Before explaining these approaches in more detail, we make a few more general remarks on the representation of context-dependent ranking functions.
The representation of a context-dependent utility function (2) comes with (at least) two important challenges, both connected to the fact that the second argument of such a function is a set of variable size. First, the arity of the function is not fixed, because different ranking tasks, and hence different contexts, can have different sizes. Second, the function should be permutation-invariant (symmetric) with regard to the elements in the second argument, the context, because the order in which the alternative objects are presented does not play any role. Formally, a function $f$ is permutation-invariant if and only if $f(\boldsymbol{x}, \{\boldsymbol{y}_1, \ldots, \boldsymbol{y}_m\}) = f(\boldsymbol{x}, \{\boldsymbol{y}_{\sigma(1)}, \ldots, \boldsymbol{y}_{\sigma(m)}\})$ for all permutations $\sigma$ of the indices. A function with this property is also called symmetric (stanley2001).
As for the problem of rating objects in contexts of variable size, one possibility is to decompose a context into sub-contexts of a fixed size $k$. More specifically, the idea is to learn context-dependent utility functions of the form $U_k: \mathcal{X} \times \binom{\mathcal{X}}{k} \to \mathbb{R}$, and to represent the original function (2) as an aggregation

$$U\big(\boldsymbol{x}, C\big) = \binom{|C|}{k}^{-1} \sum_{T \in \binom{C}{k}} U_k(\boldsymbol{x}, T) \qquad (3)$$
Note that, provided permutation-invariance holds for $U_k$ as well as for the aggregation, $U$ itself will also be symmetric. Taking the arithmetic average as the aggregation function, the second condition is obviously satisfied. Thus, the problem that essentially remains is to guarantee the symmetry of $U_k$.
Roughly speaking, the idea of the above decomposition is that dependencies and interaction effects between objects only occur up to a certain order $k$, or at least can be limited to this order without losing too much information. This is an assumption that is commonly made in the literature on aggregation functions (grab_af) and also in other types of applications. The special cases $k = 0$ and $k = 1$ correspond to independence and pairwise interaction, respectively.
An interesting question concerns the expressivity of a $k$th-order approximation (3), where, for example, expressivity could be measured in terms of the number of different ranking functions that can be defined on $\mathcal{Q}$. To study this question, suppose that $\mathcal{X}$ is finite and consists of $M$ objects. Obviously, for $k = 0$, only $M!$ different ranking functions can be produced, because the entire function is determined by the order on the maximal ranking task $Q = \mathcal{X}$. Naturally, the number of possible ranking functions should increase with increasing $k$. For the extreme case $k = M - 1$, we can indeed show that all ranking functions can be generated (see Proposition 1 in the Appendix).
Our first approach realizes (3) for the special case $k = 1$, which can be seen as a first-order approximation of a fully context-dependent ranking function. Thus, we propose the representation of a ranking function which, in addition to a utility function $U_0: \mathcal{X} \to \mathbb{R}$, is based on a pairwise predicate $U_1: \mathcal{X} \times \mathcal{X} \to \mathbb{R}$. Given a ranking task $Q$, a ranking is obtained as follows:

$$\rho(Q) = \operatorname{argsort}_{\boldsymbol{x} \in Q} \left( U_0(\boldsymbol{x}) + \sum_{\boldsymbol{y} \in Q \setminus \{\boldsymbol{x}\}} U_1(\boldsymbol{x}, \boldsymbol{y}) \right) \qquad (4)$$
We refer to this approach as First Evaluate Then Aggregate (FETA).
The observation that FETA is able to capture context-dependence is quite obvious. As a simple illustration, suppose that $U_0 \equiv 0$ and that $U_1$ is defined on a set of objects $\{a, b, c, d\}$ in such a way that $a$ receives much support from $c$ but little from $d$, while $b$ receives much support from $d$ but little from $c$. For the queries $\{a, b, c\}$ and $\{a, b, d\}$, the induced rankings then differ: the preference between $a$ and $b$ changes depending on whether the third item to be ranked is $c$ or $d$.
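This effect can be reproduced with a toy first-order FETA model; the support values in `U1` below are hypothetical and chosen only so that the pairwise aggregation flips the preference between `a` and `b`:

```python
# Hypothetical pairwise supports U1(x, y): how much y's presence helps x.
U1 = {
    ('a', 'b'): 1, ('b', 'a'): 1,
    ('a', 'c'): 2, ('c', 'a'): 0,
    ('b', 'c'): 0, ('c', 'b'): 0,
    ('a', 'd'): 0, ('d', 'a'): 0,
    ('b', 'd'): 2, ('d', 'b'): 0,
    ('c', 'd'): 0, ('d', 'c'): 0,
}

def feta_scores(Q):
    """First-order FETA score: aggregate pairwise supports over the context."""
    return {x: sum(U1[(x, y)] for y in Q if y != x) for x in Q}

def feta_ranking(Q):
    """Sort the objects of Q by decreasing FETA score."""
    scores = feta_scores(Q)
    return sorted(Q, key=lambda x: -scores[x])

print(feta_ranking(['a', 'b', 'c']))  # a before b
print(feta_ranking(['a', 'b', 'd']))  # b before a
```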
It is important to note that our interpretation of $U_1$ is not the standard interpretation in terms of a pairwise preference relation. Specific properties such as asymmetry are therefore not necessarily required, although they could be incorporated for the purpose of regularization. Instead, $U_1$ should be interpreted more generally as a measure of support given by one object to another. This interpretation is in line with Ragain2016, who model distributions on rankings using Markov chains. There, individual preferences are defined in terms of probabilities (of the stationary distribution), and binary relations define the transition probabilities: the larger the probability of being in a state, the higher the preference for the corresponding item. Roughly speaking, $U_1(\boldsymbol{x}, \boldsymbol{y})$ is a measure of how favorable it is for $\boldsymbol{x}$ that $\boldsymbol{y}$ is part of its context $C(\boldsymbol{x})$. In other words, a large value suggests that, whenever $\boldsymbol{x}$ and $\boldsymbol{y}$ are among the objects to be ranked, $\boldsymbol{x}$ tends to occupy a high position.

volkovs2009 introduce the algorithm BoltzRank, which learns a combination of pairwise and individual scoring functions, thus falling under our category of FETA approaches.
To deal with the problem of contexts of variable size, our previous approach was to decompose the context into sub-contexts of a fixed size, evaluate an object in each of the sub-contexts, and then aggregate these evaluations into an overall assessment. An alternative to this “first evaluate then aggregate” strategy, and in a sense a contrary approach, consists of first aggregating the context into a representation of fixed size, and then evaluating the object in this “super-context”.
More specifically, consider a ranking task $Q$. To evaluate an object $\boldsymbol{x} \in Q$ in its context $C(\boldsymbol{x})$, the “first aggregate then evaluate” (FATE) strategy first computes a representative for the context:

$$\mu_{C(\boldsymbol{x})} = \frac{1}{|C(\boldsymbol{x})|} \sum_{\boldsymbol{y} \in C(\boldsymbol{x})} \phi(\boldsymbol{y}) \qquad (5)$$

where $\phi: \mathcal{X} \to \mathbb{R}^m$ maps each object to an $m$-dimensional embedding space. The evaluation itself is then realized by a context-dependent utility function $U: \mathcal{X} \times \mathbb{R}^m \to \mathbb{R}$, so that we eventually obtain a ranking

$$\rho(Q) = \operatorname{argsort}_{\boldsymbol{x} \in Q} \; U\big(\boldsymbol{x}, \mu_{C(\boldsymbol{x})}\big) \qquad (6)$$
A computationally more efficient variant of this approach is obtained by including an object in its own context, i.e., by setting $C(\boldsymbol{x}) = Q$ for all $\boldsymbol{x} \in Q$. In this case, the aggregation (5) only needs to be computed once. Note that this approach bears resemblance to the recent work by zaheer2017 on dealing with set-valued inputs and the general approach proposed by ravan2017 on encoding equivariance with respect to group operations.
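A minimal sketch of FATE scoring under this simplification (objects included in their own context, so the representative is the mean embedding of the whole task); the identity embedding and the distance-to-mean utility are hypothetical placeholders for the learned networks:

```python
import numpy as np

def fate_ranking(Q, phi, U):
    """FATE: embed all objects, average the embeddings into one context
    representative mu, then score each object jointly with mu."""
    Q = np.asarray(Q)
    mu = np.mean([phi(x) for x in Q], axis=0)  # objects in their own context
    scores = np.array([U(x, mu) for x in Q])
    return np.argsort(np.argsort(-scores))  # 0-based positions

# Hypothetical stand-ins for the learned networks: identity embedding,
# and a utility preferring objects far from the set's mean -- an
# inherently context-dependent rule.
phi = lambda x: x
U = lambda x, mu: np.linalg.norm(x - mu)
```

Adding or removing a single object shifts `mu` and can therefore reorder all remaining objects, which a context-insensitive scorer cannot do.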
In this section, we propose realizations of the FETA and FATE approaches in terms of the neural network architectures FETA-Net and FATE-Net, respectively. Our design goals for both neural networks are twofold. First, they should be end-to-end trainable on any differentiable listwise ranking loss function. Second, the architectures should be able to generalize beyond the ranking task sizes encountered in the training data, since in practice it is unreasonable to expect all rankings to be of similar size. Our focus is on optimizing the 0/1-ranking loss, for which we introduce a suitable differentiable surrogate, the hinge ranking loss, described in the Appendix. However, it is also possible to substitute any other differentiable loss function.

FETA as outlined above requires the binary predicate $U_1$ to be given. In FETA-Net, learning this predicate is accomplished by means of a deep neural network architecture. More specifically, we make use of the CmpNN architecture (Rigutini2011; Huybrechts2016).
In our FETA-Net architecture (shown in Figure 1), we evaluate the CmpNN network on all pairs of objects in the ranking task and build up a pairwise relation $P$ with entries $P_{i,j}$ for $i \neq j$. Using the notation of Rigutini2011, this relation is defined as follows:

$$P_{i,j} = \tfrac{1}{2}\big(N_{>}(\boldsymbol{x}_i, \boldsymbol{x}_j) + N_{<}(\boldsymbol{x}_j, \boldsymbol{x}_i)\big) \qquad (7)$$
This step is highlighted in blue in Figure 1. Then, each row of the relation $P$ is summed up to obtain a first-order score for each object $\boldsymbol{x}_i$. Each $\boldsymbol{x}_i$ is also passed through a zeroth-order network that directly outputs a latent utility $U_0(\boldsymbol{x}_i)$; here, we use a densely connected, deep neural network with one output unit. The final score for object $\boldsymbol{x}_i$ is then given by $U_0(\boldsymbol{x}_i) + \sum_{j \neq i} P_{i,j}$.
The training complexity of FETA-Net is $\mathcal{O}(N n^2 d)$, where $N$ denotes the number of rankings, $d$ is the number of features per object, and $n$ is an upper bound on the number of objects in each ranking. For a new ranking task (note that we can predict the ranking for any task size), the prediction time is in $\mathcal{O}(n^2 d)$, due to the pairwise evaluations.
The FATE-Net architecture is depicted in Figure 2. Inputs are the objects of the ranking task (shown in green). Each object is independently passed through a deep, densely connected embedding layer (shown in blue). The embedding layer approximates the function $\phi$ in (5), where, for reasons of computational efficiency, we assume objects to be part of their own context (i.e., $C(\boldsymbol{x}) = Q$ for all $\boldsymbol{x} \in Q$). Note that we employ weight sharing, i.e., the same embedding is used for each object. Then, the representative $\mu_Q$ for the context is computed by averaging the representations of all objects. To calculate the score for an object $\boldsymbol{x}$, its feature vector is concatenated with $\mu_Q$ to form the input to the joint hidden layer (here depicted in orange).
The training complexity of FATE-Net is $\mathcal{O}(N n d)$, where $N$ denotes the number of rankings, $d$ is the number of features per object, and $n$ is an upper bound on the number of objects in each ranking. For a new query (note that we can predict the ranking for any query size), the prediction can be done in time $\mathcal{O}(n d)$ (i.e., linear in the number of objects). This is because, if objects are part of their own context, the representative has to be computed only once for the forward pass. This makes the FATE-Net architecture more efficient to use than FETA-Net.
In order to empirically evaluate our FATE-Net and FETA-Net architectures, we make use of synthetic and real-world data. We mainly address the following questions: Are the architectures suitable for learning context-dependent ranking functions, and how do the approaches FETA and FATE compare with each other? Can the representation learned on one query size be generalized to arbitrary sizes of ranking tasks?
For typical real-world datasets such as OHSUMED and LETOR (letor), context-dependence is difficult to ascertain. For the evaluation of context-dependent ranking models, we therefore propose two new challenging benchmark problems, both inspired by real-world problems: the medoid and the hypervolume problem. Besides, we also analyze a real-world dataset related to the problem of depth estimation in images.

As baselines to compare with, we selected representative algorithms for three important classes of ranking methods: expected rank regression (ERR) (kami05) as a representative of so-called pointwise ranking algorithms, RankSVM (RankSVM) as a state-of-the-art pairwise ranking model, and deep versions of RankNet (Burges2005; burges2010; Tesauro1989) and ListNet (Cao2007; Luo2015a), which represent the family of deep latent utility models.

All experiments are implemented in Python, and the code is publicly available at https://github.com/kiudee/cs-ranking. The hyperparameters of each algorithm were tuned with scikit-optimize (skopt) using nested cross-validation. We evaluate the algorithms in terms of 0/1-accuracy, 0/1-ranking accuracy, and Spearman rank correlation, each computed between the ground-truth ranking and the ranking induced by the predicted score vector. All implementation details for the experiments are listed in the Appendix.

Our first synthetic problem is called the medoid problem. The goal of the algorithms is to sort a set of randomly generated points based on their distance to the medoid of the set. This problem is inspired by the setting of similarity learning, where the goal is to learn a similarity function from triplets of objects (Wang2014). The rankings produced by this procedure take the distance to each point in the ranking task into account; thus, the medoid, and consequently the resulting ranking, is sensitive to changes of the points in the ranking task.
For the experiment, we generate sets of random points and determine the rankings as described above. The instances are split into training and test data, and this procedure is repeated multiple times to get an estimate of the variation across datasets.
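A minimal sketch of this data-generating process (the task size `n` and dimension `d` here are arbitrary illustration parameters, not the values used in the paper):

```python
import numpy as np

def make_medoid_instance(n=5, d=2, seed=None):
    """One medoid ranking task: n random points in [0, 1]^d, ranked by
    distance to the medoid (the point minimizing total distance)."""
    rng = np.random.default_rng(seed)
    Q = rng.uniform(size=(n, d))
    dists = np.linalg.norm(Q[:, None, :] - Q[None, :, :], axis=-1)
    medoid = Q[np.argmin(dists.sum(axis=1))]
    # 0-based position of each point, closest to the medoid first
    ranking = np.argsort(np.argsort(np.linalg.norm(Q - medoid, axis=1)))
    return Q, ranking
```

By construction, the medoid itself always receives the top position, and perturbing any single point can change the medoid and hence the whole ranking.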
In multi-objective optimization, the goal is to find the set of objects that are non-dominated by any other object in terms of their fitness (solution quality). The set of all non-dominated objects is called the Pareto-set. Multi-objective evolutionary algorithms (MOEAs) approximate the Pareto-set by iteratively improving a population of objects. During optimization, it is not only important to improve the population’s fitness, but also to preserve its diversity. This allows the population to cover the complete Pareto-front.
The hypervolume is a set measure, which computes the volume dominated by a given set of objects. This very naturally encodes both dominance as well as diversity, which makes it a popular fitness criterion (bader2010). Usually, we are also interested in the contribution of each object/point on the Pareto-front to the hypervolume. bringmann12 proved that computing exact hypervolume contributions is #P-hard and NP-hard to approximate.
Our idea is to convert this problem into a challenging, context-dependent ranking problem. The input for the learner consists of sets of points on the Pareto-front, and the target is a ranking of these points based on their contribution to the hypervolume. It is apparent that a learner, given only a set of data points as input, needs to take all of the points into account to establish an accurate ranking.
Similar to the medoid dataset, we generate sets of random points and determine the rankings as described. The instances are split into training and test data, and we repeat this procedure to get an estimate of the variation.
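The paper computes exact hypervolume contributions with the PyGMO library in the general case; for illustration only, the two-dimensional minimization case admits a simple closed form, which the following sketch uses to rank a hypothetical front:

```python
import numpy as np

def hv_contributions_2d(points, ref=(1.0, 1.0)):
    """Exact hypervolume contributions for a 2-d minimization front.
    Points must be mutually non-dominated and dominated by ref."""
    pts = np.asarray(points, dtype=float)
    order = np.argsort(pts[:, 0])  # ascending x implies descending y
    x, y = pts[order, 0], pts[order, 1]
    x_next = np.append(x[1:], ref[0])
    y_prev = np.append(ref[1], y[:-1])
    contrib = (x_next - x) * (y_prev - y)  # exclusive dominated area
    out = np.empty_like(contrib)
    out[order] = contrib  # restore the input order
    return out

# Hypothetical front; the target ranking sorts by decreasing contribution.
front = [(0.1, 0.9), (0.3, 0.5), (0.8, 0.2)]
c = hv_contributions_2d(front)
ranking = np.argsort(np.argsort(-c))
```

Note how each point's contribution depends on its neighbors on the front, making the target ranking context-dependent by design.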
As a real-world case study, we tackle the problem of relative depth estimation of regions in monocular images. Ewerth2017 motivate the formalization of this task as an object ranking problem and construct an object ranking dataset on the basis of the Make3D dataset, which consists of 534 photos (Make3D). The images are segmented into super pixels, and different feature sets are extracted for each super pixel. We use the feature set Basic, in which basic monocular depth cues are available for each super pixel: linear perspective, atmospheric perspective, texture gradients, occlusion of objects, usual size of objects, relative height, relative size, and distribution of light. These depth cues suggest that context-dependence could be a relevant issue in depth estimation. The ground-truth rankings are constructed by ordering the super pixels based on their absolute depth.
Since the size of the rankings is too large for most of the approaches, we sample subrankings of a fixed size for training. In addition, FETA-Net also samples several smaller subrankings during the training process. Predictions are always the complete rankings, which are compared with the ground-truth rankings using 0/1-ranking accuracy. Super pixels beyond a distance threshold are treated as tied, because this exceeds the range of the sensor.
The results on the Medoid and the Hypervolume dataset are shown in Table 1. ERR and RankSVM completely fail on both tasks, both having a correlation of 0 with the target rankings. This can be explained by the fact that both approaches ultimately learn a linear model, while the problems are highly non-linear. Being non-linear latent-utility approaches, RankNet and ListNet are able to improve upon random guessing in terms of 0/1-ranking accuracy. This result is surprising, considering that both networks establish the final ranking by scoring each point independently, i.e., without taking the other points of the ranking task into account.
Our architectures FETA-Net and FATE-Net are both able to make use of the context provided by the ranking task and beat the context-insensitive approaches by a wide margin. In terms of 0/1-ranking accuracy, FATE-Net even performs significantly better than FETA-Net. This suggests that the pairwise decomposition (first-order approximation) is not able to completely capture the higher-order interactions between the objects.
Since the ranking size is fixed for the Medoid and Hypervolume datasets, we ran additional experiments in which we varied the size of the rankings at test time. The results are shown in Figure 2 of the Appendix.
Ranker | Spearman correlation | 0/1-Ranking accuracy |
---|---|---|
ERR | ||
RankSVM | ||
RankNet | ||
ListNet | ||
RankBoost | ||
FETA-Net | ||
FATE-Net |
The results for the relative depth estimation problem are shown in Table 2. We additionally report the results obtained by Ewerth2017 using RankBoost on the same dataset (using the same split into training and test data). Both our architectures achieve comparably high Spearman correlation and ranking accuracy, and slightly outperform the competitors.
In this paper, we addressed the novel problem of learning context-dependent ranking functions in the setting of object ranking and, moreover, proposed two general solutions to this problem. These solutions are based on two principled ways of representing context-dependent ranking functions that accept ranking problems of any size as input and guarantee symmetry. FETA (first evaluate then aggregate) is a first-order approximation of a more general latent-utility decomposition, which we proved to be flexible enough to learn any ranking function in the limit. FATE (first aggregate then evaluate) first transforms each object into an embedding space and computes a representative of the context by averaging. Objects are then scored with this representative as a fixed-size context.
To enable end-to-end optimization of differentiable ranking losses using these decompositions, we further contribute two new neural network architectures called FETA-Net and FATE-Net. We demonstrate empirically that both architectures are able to learn context-dependent ranking functions on both synthetic and real-world data.
While FETA and FATE appear to be natural approaches to context-dependent ranking, and first experimental results are promising, the theoretical foundation of context-dependent ranking is still weakly developed. One important question concerns the expressivity of the two representations, i.e., what type of context effects they are able to capture, and what class of context-dependent ranking functions they can model. As noted above, a first result could be established in the case of FETA, showing that any ranking function on $M$ objects can be modeled by a decomposition of order $M - 1$ (cf. supplementary material). Yet, while this result is theoretically interesting, a quantification of the expressivity for practically meaningful model classes (i.e., $k$th-order approximations with small $k$) is an open question. Likewise, for FATE, there are no results in this direction so far.
This work is part of the Collaborative Research Center “On-the-Fly Computing” at Paderborn University, which is supported by the German Research Foundation (DFG). Calculations leading to the results presented here were performed on resources provided by the Paderborn Center for Parallel Computing.
The compromise effect states that the relative utility of an object increases by adding an extreme option that makes it a compromise in the set of alternatives (rooderkerk2011incorporating). For instance, consider the set of objects in Figure 3, which differ in quality and price. The ordering of these objects depends on how much the consumer weighs the quality against the price of the product. If price is the binding constraint, the cheaper object will be preferred. But as soon as another, even more extreme option becomes available, the middle object becomes a compromise option between the three alternatives, and the preference relation between the two original objects may get inverted.
Figure 3 also illustrates the attraction effect. Here, if we add another object that is slightly dominated by one of the existing alternatives, the relative utility share of the dominating object increases. The major psychological reason is that consumers have a strong preference for dominating products (huber1983market). Thus, the preference relation between the other alternatives may again be influenced.
The similarity or substitution effect is another phenomenon, according to which the presence of similar objects tends to reduce the overall probability of an object being chosen, as it divides the loyalty of potential consumers (huber1983market). In Figure 3, two of the objects are similar to each other: consumers who prefer high quality will be divided amongst these two, resulting in a decrease of their relative utility shares. Again, this may invert a preference, at least on an aggregate (population) level, if preferences are defined on the basis of choice probabilities.
Let $\mathcal{X}$ be a set of $M$ objects and $\mathcal{Q}$ the corresponding ranking task space. Let $\rho: \mathcal{Q} \to \mathcal{R}$ be a ranking function mapping from tasks to rankings. There always exist preference functions $U_1, \ldots, U_{M-1}$ such that the corresponding ranking rule of order $M - 1$,

$$\widehat{\rho}(Q) = \operatorname{argsort}_{\boldsymbol{x} \in Q} U\big(\boldsymbol{x}, C(\boldsymbol{x})\big), \qquad (8)$$
$$U\big(\boldsymbol{x}, C(\boldsymbol{x})\big) = \sum_{k=1}^{|C(\boldsymbol{x})|} \binom{|C(\boldsymbol{x})|}{k}^{-1} \sum_{T \in \binom{C(\boldsymbol{x})}{k}} U_k(\boldsymbol{x}, T), \qquad (9)$$

satisfies $\widehat{\rho}(Q) = \rho(Q)$ for all ranking tasks $Q \in \mathcal{Q}$.
Let $C(\boldsymbol{x})$ denote the context for the ranking task when scoring object $\boldsymbol{x}$. First, notice that, for a given ranking task, the ranking rule reproduces $\rho$ if and only if all resulting inequalities defined on the scores are satisfied.
The result can be shown by induction over the maximum ranking task size for which $\rho$ is defined. For the base case, the corresponding rankings are all of size one, for which the claim trivially holds.
For tasks of size two, we have ranking functions defined on pairs of objects. Here it suffices to use a preference function with contexts of size one. Note that, for one fixed ranking task, each preference score only appears in one inequality (i.e., the one in which the two objects are compared). It follows that we can set the preference scores for all pairs independently.
For the inductive step, assume that the equality holds for all ranking tasks considered so far, and consider the next task size. We know that the newly introduced preference scores only occur in inequalities defined for rankings of the larger size, and each appears in only one inequality, since it is evaluated only once for each object.
This time it is not possible to set these preference scores independently, as we did before, since we have to take into account all the summands in equation (2) of the main paper. We already know by the induction hypothesis that there exists a preference function reproducing $\rho$ on all smaller ranking tasks. For any larger ranking task, let
(10) |
be the maximum score difference using only the existing preference scores. We will now use this quantity as a step size to define the preference scores for rankings of the next size.
Then set the preference scores as follows:
(11) |
In other words, we simply set the scores inversely proportional to the position of the object in their respective ranking. To guarantee that the preference scores which were defined on smaller tasks do not have an effect, we additionally scale by the step size.
It follows that for any and any with :
(12) | |||
(13) | |||
(14) | |||
(15) |
Since this equation is if and only if and therefore if and otherwise.
We can conclude that we can obtain all possible rankings of size using this construction. Thus, the statement follows.
A key advantage of the above architectures is that they are fully differentiable, allowing us to use any differentiable loss function. In our case, a loss is supposed to compare a ground-truth ranking $\pi$ for a task $Q$ with a vector of scores $\boldsymbol{s}$ predicted for the objects in $Q$. Thus, the loss is of the form $L(\pi, \boldsymbol{s})$.
Unfortunately, many interesting ranking losses, such as the (normalized) 0/1-ranking loss

$$L_{0/1}(\pi, \boldsymbol{s}) = \binom{n}{2}^{-1} \sum_{\pi(i) < \pi(j)} [\![\, s_i \le s_j \,]\!] \qquad (16)$$
or the popular nDCG, are not differentiable. Yet, just like in the binary classification setting, we can define a surrogate loss function that upper-bounds the true binary ranking loss, is differentiable almost everywhere, and ideally is even convex. We propose to use the hinge ranking loss:

$$L_{\text{hinge}}(\pi, \boldsymbol{s}) = \sum_{\pi(i) < \pi(j)} \max\big(0,\, 1 - (s_i - s_j)\big) \qquad (17)$$
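A direct transcription of this surrogate can be sketched as follows, assuming `pi` holds 0-based positions (smaller = better) and larger scores mean higher rank; this is an illustrative sketch, not the package implementation:

```python
import numpy as np

def hinge_ranking_loss(pi, scores, margin=1.0):
    """Sum of hinge penalties over all object pairs whose score
    difference violates the ground-truth order pi (with a margin)."""
    pi, s = np.asarray(pi), np.asarray(scores)
    loss = 0.0
    for i in range(len(s)):
        for j in range(len(s)):
            if pi[i] < pi[j]:  # object i should be ranked above object j
                loss += max(0.0, margin - (s[i] - s[j]))
    return loss
```

The loss is zero exactly when every correctly ordered pair is separated by at least the margin, and it grows linearly with each violation, giving the constant subgradient mentioned below.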
It is convex and has a constant subgradient with respect to the individual scores. Another choice for a differentiable loss function is the Plackett-Luce (PL) loss:

$$L_{\text{PL}}(\pi, \boldsymbol{s}) = -\sum_{i=1}^{n} \left[ s_{\pi^{-1}(i)} - \log \sum_{j=i}^{n} \exp\big(s_{\pi^{-1}(j)}\big) \right] \qquad (18)$$
which corresponds to the negative logarithm of the PL-probability of observing $\pi$ given the scores $\boldsymbol{s}$ as parameters. The networks can then be trained by gradient descent, backpropagating the loss through the network.
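The PL loss can likewise be sketched directly from its definition; again an illustrative stand-alone version rather than the package implementation:

```python
import numpy as np

def plackett_luce_loss(pi, scores):
    """Negative log-likelihood of the ranking pi under the Plackett-Luce
    model, with the predicted scores acting as log-parameters."""
    pi, s = np.asarray(pi), np.asarray(scores)
    ordering = np.argsort(pi)  # object indices from best to worst
    s_ord = s[ordering]
    loss = 0.0
    for i in range(len(s_ord)):
        # log-probability of picking the object at position i
        # among all objects not yet placed
        loss -= s_ord[i] - np.log(np.sum(np.exp(s_ord[i:])))
    return loss
```

With all scores equal, every ranking of $n$ objects is equally likely, so the loss reduces to $\log(n!)$, a useful sanity check.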
All experiments are implemented in Python, and the code is publicly available at https://github.com/kiudee/cs-ranking. The hyperparameters of each algorithm were tuned with scikit-optimize (skopt) using nested cross-validation. For all neural network models, we make use of the following techniques: we use either ReLU non-linearities with batch normalization (Ioffe2015) or SELU non-linearities (selu) for each hidden layer. For regularization, both $\ell_1$ and $\ell_2$ penalties are applied. For optimization, stochastic gradient descent with Nesterov momentum (nesterov1983) is used. ListNet has an additional parameter $k$, which specifies the size of the top-$k$ rankings used for training; this parameter was kept fixed in all experiments.

We evaluate the algorithms in terms of 0/1-accuracy, 0/1-ranking accuracy, and Spearman rank correlation, each computed between the ground-truth ranking and the ranking induced by the predicted score vector.
Specifically, we generate the medoid data as follows:

Generate the data points for the ranking tasks uniformly at random.

Construct the corresponding rankings as follows: for each ranking task $Q_i$,

compute the medoid $\bar{\boldsymbol{x}}_i = \operatorname{argmin}_{\boldsymbol{x} \in Q_i} \sum_{\boldsymbol{y} \in Q_i} \lVert \boldsymbol{x} - \boldsymbol{y} \rVert$,

and compute the ranking

$$\pi_i = \operatorname{argsort}_{\boldsymbol{x} \in Q_i} \lVert \boldsymbol{x} - \bar{\boldsymbol{x}}_i \rVert \qquad (19)$$
The input for the learners consists of the sets of points on the Pareto-front, and the target is a ranking of these points based on their contribution to the hypervolume. Data generation is done as follows:

Generate the points for the ranking tasks uniformly on the negative surface of the unit sphere.

Construct the corresponding rankings as follows: for each ranking task $Q_i$,

compute the contribution $\Delta_{\mathrm{HV}}(\boldsymbol{x}, Q_i)$ of each object to the hypervolume of the ranking task (we use the PyGMO library to compute exact contributions),

and then compute the ranking

$$\pi_i = \operatorname{argsort}_{\boldsymbol{x} \in Q_i} \big( -\Delta_{\mathrm{HV}}(\boldsymbol{x}, Q_i) \big) \qquad (20)$$
Given that our approaches FATE-Net and FETA-Net outperform the other approaches, we are interested in how well all of the approaches generalize to unseen ranking task sizes. To this end, we apply them to the Medoid and Hypervolume datasets with 5 objects as the ranking task size, and then test them on ranking task sizes between 3 and 24. The results are shown in Figure 4. Expected rank regression and RankSVM did not achieve better results than random guessing, which is why we removed them from the plot. It is interesting to note that, for models which model the latent utility (i.e., FETA-Net and RankNet), the accuracy improves with increasing ranking task size. This hints at the context-dependency vanishing with an increasing number of objects, since they densely populate the space. For FATE-Net, increasing the size of the ranking task apparently leads to a slightly lower ranking accuracy. Since the representative is computed as the average of the object embeddings, this behavior is to be expected when the ranking function behaves similarly for different ranking task sizes.