1 Introduction
Label ranking is a topic in the machine learning literature
furnkranz+05 ; cheng2009icml ; VembuG10 that studies the problem of learning a mapping from instances to rankings over a finite number of predefined labels. One characteristic that clearly distinguishes label ranking problems from classification problems is the order relation between the labels. While a classifier aims at finding the true class of a given unclassified example, the label ranker focuses on the relative preferences between a set of labels/classes. These relations represent relevant information from a decision support perspective, with possible applications in various fields such as elections, dominance of certain species over others, user preferences, etc.
Due to their intuitive representation, Association Rules (AR) Agrawal1994 have become very popular in data mining and machine learning tasks (e.g. mining rankings HenzgenH14 , classification liu1998integrating and even label ranking rebelosa2011 ). Label Ranking Association Rules (LRAR) rebelosa2011 , the adaptation of AR for label ranking, are similar to their classification counterpart, Class Association Rules (CAR) liu1998integrating . LRAR can be used for predictive or descriptive purposes.
Like typical association rules, LRAR are relations between an antecedent and a consequent, defined by interest measures. The distinction lies in the fact that the consequent is a complete ranking. Because the degree of similarity between rankings can vary, this leads to several interesting challenges, for instance, how to treat rankings that are very similar but not exactly equal. To tackle this problem, similarity-based interest measures were defined to evaluate LRAR. Such measures can be applied with existing rule generation methods rebelosa2011 (e.g. APRIORI Agrawal1994 ).
One important issue in the use of LRAR is the threshold that determines what should and should not be considered sufficiently similar. Here we present the results of a sensitivity analysis study showing how LRAR behave in different scenarios, in order to understand the effect of this threshold better. Whether there is a rule of thumb or the threshold is data-specific is the type of question we investigate here. Ultimately, we also want to understand which parameters have more influence on the predictive accuracy of the method.
Another important issue is related to the large number of distinct rankings. Despite the existence of many competitive approaches in Label Ranking, such as decision trees todorovski+02 ; cheng2009icml , k-Nearest Neighbors brazdil+03 ; cheng2009icml or LRAR rebelosa2011 , problems with a large number of distinct rankings can be hard to predict. One real-world example with a relatively large number of rankings is the sushi dataset kamishima03 . This dataset compares demographics of 5000 Japanese citizens with their preferred sushi types. With only 10 labels, it has more than 4900 distinct rankings. Even though it has been known in the preference learning community for a while, no results with high predictive accuracy have been published, to the best of our knowledge. Cases like this have motivated the appearance of new approaches, e.g. to mine ranking data HenzgenH14 , where association rules are used to find patterns within rankings. We propose a method which combines the two approaches mentioned above rebelosa2011 ; HenzgenH14 , because it could contribute to a better understanding of such datasets. We define Pairwise Association Rules (PAR) as association rules with one or more pairwise comparisons in the consequent. In this work we present an approach to identify PAR and analyze the findings in two real-world datasets.
By decomposing rankings into unitary preference relations, i.e. pairwise comparisons, we can look for sub-ranking patterns, from which, as explained before, we expect to find more frequent patterns than with complete rankings.
LRAR and PAR can be regarded as specializations of general association rules that are obtained from data containing preferences, which we refer to as Preference Rules. These two approaches are complementary in the sense that they can give different insights into preference data. We use LRAR and PAR in this work as predictive and descriptive models, respectively.
The paper is organized as follows: Sections 2 and 3 introduce the task of association rule mining and the label ranking problem, respectively; Section 4 describes Label Ranking Association Rules and Section 5 the Pairwise Association Rules proposed here; Section 6 presents the experimental setup and discusses the results; finally, Section 7 concludes this paper.
2 Association Rule Mining
An association rule (AR) is an implication A → C, where A, C ⊆ desc(X) and A ∩ C = ∅, and desc(X) is the set of descriptors of instances in the instance space X, typically pairs ⟨attribute, value⟩. The training data is represented as D = {⟨x_i⟩}, i = 1, …, n, where x_i is a vector containing the values x_i^j, j = 1, …, m, of m independent variables describing instance i. We also denote by desc(x_i) the set of descriptors of instance x_i.
2.1 Interest measures
There are many interest measures to evaluate association rules Omiecinski03 , but typically they are characterized by support and confidence. Here, we summarize some of the most common, assuming a rule A → C in D.
Support
percentage of the instances in D that contain both A and C:
sup(A → C) = #{x_i | A ∪ C ⊆ desc(x_i), x_i ∈ D} / n
Confidence
percentage of instances that contain C from the set of instances that contain A:
conf(A → C) = sup(A → C) / sup(A)
Coverage
proportion of examples in D that contain the antecedent of a rule HalkidiV05 :
coverage(A → C) = sup(A)
We say that a rule A → C covers an instance x if A ⊆ desc(x).
Lift
measures the independence of the consequent, C, relative to the antecedent, A:
lift(A → C) = conf(A → C) / sup(C)
Lift values vary from 0 to +∞. If C is independent of A then lift(A → C) ≈ 1.
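As an illustration, these four measures can be computed directly from a transaction database. The following Python sketch implements the definitions above over a toy database (function names and data are ours, not from any AR library):

```python
def support(D, itemset):
    """Fraction of transactions in D that contain every item of itemset."""
    return sum(1 for t in D if itemset <= t) / len(D)

def confidence(D, A, C):
    """sup(A -> C) divided by sup(A): an estimate of P(C | A)."""
    return support(D, A | C) / support(D, A)

def coverage(D, A):
    """Fraction of transactions matching the antecedent A."""
    return support(D, A)

def lift(D, A, C):
    """Ratio of the rule confidence to the baseline frequency of C."""
    return confidence(D, A, C) / support(D, C)

# Toy transaction database: each transaction is a set of descriptors.
D = [{"a", "b", "c"}, {"a", "b"}, {"b", "c"}, {"a", "c"}]
```

For example, with this database, support(D, {"a"}) is 0.75 and confidence(D, {"a"}, {"b"}) is 2/3.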
2.2 APRIORI Algorithm
The original method for the induction of AR is the APRIORI algorithm, proposed in 1994 Agrawal1994 . APRIORI identifies all AR that have support and confidence higher than a given minimal support threshold (minsup) and a minimal confidence threshold (minconf), respectively. Thus, the model generated is a set of AR of the form A → C, where sup(A → C) ≥ minsup and conf(A → C) ≥ minconf. For a more detailed description see Agrawal1994 .
Despite the usefulness and simplicity of APRIORI, it runs a time-consuming candidate generation process and needs substantial time and memory space, proportional to the number of possible combinations of the descriptors. Additionally, it needs multiple scans of the data and typically generates a very large number of rules. Because of this, many alternative methods have been proposed, such as hashing Park1995 , dynamic itemset counting Brin1997 , parallel and distributed mining Park1995b and mining integrated into relational database systems Sarawagi1998 .
In contrast to itemset-based algorithms, which carry out frequent itemset computation and rule generation in two steps, there are rule-based approaches such as FP-Growth (the frequent pattern growth method) HanPYM04 . This means that rules are generated at the same time as frequent itemsets are computed.
2.3 Pruning
AR algorithms typically generate a large number of rules (possibly tens of thousands), some of which represent only small variations of others. This is known as the rule explosion problem bayardo2000constraint , which should be dealt with by pruning mechanisms. Many rules must be discarded for computational and simplicity reasons.
Pruning methods are usually employed to reduce the number of rules without reducing the quality of the model. For example, an AR algorithm might find rules whose confidence is only marginally improved by adding further conditions to their antecedent. Another example is when the consequent C of a rule has the same distribution independently of the antecedent A. In these cases, such rules should not be considered meaningful.
Improvement
A common pruning method is based on the improvement that a refined rule yields in comparison to the original one bayardo2000constraint . The improvement of a rule is defined as the smallest difference between the confidence of the rule and the confidence of all sub-rules sharing the same consequent:
imp(A → C) = min{ conf(A → C) − conf(A′ → C) : A′ ⊂ A }
As an example, if one defines a minimum improvement threshold minImp, the rule A → C will only be kept if imp(A → C) ≥ minImp.
If imp(A → C) > 0, we say that A → C is a productive rule.
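A minimal sketch of this computation, with the support and confidence definitions of Section 2.1 repeated as helpers (all names are ours):

```python
from itertools import combinations

def _support(D, items):
    return sum(1 for t in D if items <= t) / len(D)

def _confidence(D, A, C):
    return _support(D, A | C) / _support(D, A)

def improvement(D, A, C):
    """conf(A -> C) minus the highest confidence among all proper
    sub-rules A' -> C, where A' is a strict subset of A
    (the empty antecedent is included, with confidence sup(C))."""
    subs = [_confidence(D, set(s), C)
            for r in range(len(A)) for s in combinations(A, r)]
    return _confidence(D, A, C) - max(subs)

D = [{"a", "b", "c"}, {"a", "b"}, {"b", "c"}, {"a", "c"}]
```

On this toy database the rule {a, b} → {c} has negative improvement, so it is not productive and would be pruned.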
Significant rules
Another way to prune non-productive rules is to use statistical tests Webb06 . A rule is significant if the confidence improvement over all its generalizations is statistically significant for a given significance level (α).
3 Label Ranking
In Label Ranking (LR), given an instance x from the instance space X, the goal is to predict the ranking of the labels L = {λ_1, …, λ_k} associated with x Chenga . A ranking can be represented as a strict total order over L, defined on the permutation space Ω.
The LR task is similar to the classification task, but instead of a class we want to predict a ranking of labels. As in classification, we do not assume the existence of a deterministic mapping. Instead, every instance is associated with a probability distribution over Ω cheng2009icml . This means that, for each x ∈ X, there exists a probability distribution P(·|x) such that, for every π ∈ Ω, P(π|x) is the probability that π is the ranking associated with x. The goal in LR is to learn the mapping X → Ω. The training data contains a set of instances D = {⟨x_i, π_i⟩}, i = 1, …, n, where x_i is a vector containing the values x_i^j, j = 1, …, m, of m independent variables describing instance i, and π_i is the corresponding target ranking. The rankings can be either total or partial orders.
Total orders
A strict total order over L is defined as a binary relation ≻ under which every pair of distinct labels is comparable (for convenience, we say total order but in fact we mean a totally ordered set; strictly speaking, a total order is a binary relation), which represents a strict ranking VembuG10 , a complete ranking FurnkranzH10 , or simply a ranking. A strict total order can also be represented as a permutation π of the set {1, …, k}, such that π(a) is the position, or rank, of λ_a in π. For example, the strict total order λ_2 ≻ λ_3 ≻ λ_1 can be represented as π = (3, 1, 2).
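For illustration, converting an ordered list of labels into this permutation (rank-vector) representation can be sketched as follows (function and label names are hypothetical):

```python
def to_rank_vector(order, labels):
    """Rank vector of a strict total order: the 1-based position of
    each label (in the fixed label set 'labels') within 'order'."""
    pos = {lab: i + 1 for i, lab in enumerate(order)}
    return [pos[lab] for lab in labels]

# The order l2 > l3 > l1 over the label set (l1, l2, l3):
ranks = to_rank_vector(["l2", "l3", "l1"], ["l1", "l2", "l3"])  # [3, 1, 2]
```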
However, in realworld ranking data, we do not always have clear and unambiguous preferences, i.e. strict total orders BrandenburgGH13 . Hence, sometimes we have to deal with indifference and incomparability. For illustration purposes, let us consider the scenario of elections, where a set of voters vote on candidates. If a voter feels that two candidates have identical proposals, then these can be expressed as indifferent so they are assigned the same rank (i.e. a tie).
To represent ties, we need a more relaxed setting, called non-strict total orders, or simply total orders, over L, obtained by replacing the binary strict order relation, ≻, with the binary partial order relation, ⪰.
These non-strict total orders can represent partial rankings (rankings with ties) VembuG10 . For example, the non-strict total order λ_2 ≻ λ_1 ∼ λ_3 , with a tie between λ_1 and λ_3 , can be represented as π = (2, 1, 2).
Additionally, real-world data may lack preference information regarding two or more labels, which is known as incomparability. Continuing with the elections example, the lack of information about two of the candidates, λ_a and λ_b , leads to incomparability, λ_a ⊥ λ_b . In other words, the voter cannot decide whether the candidates are equivalent or prefer one of them, because he does not know the candidates. Incomparability should not be confused with intrinsic properties of the objects, as when we are comparing apples and oranges. Instead, it is like trying to compare two different types of apple without ever having tried either. In these cases, we can use partial orders.
Partial orders
Similarly to total orders, there are strict and non-strict partial orders. Let us consider non-strict partial orders (which can also be referred to simply as partial orders) over L, in which some pairs of labels may not be comparable.
We can represent partial orders with sub-rankings HenzgenH14 . For example, the partial order λ_2 ≻ λ_1 , with no preference information involving λ_3 , can be represented as π = (2, 1, 0), where 0 represents the missing label.
3.1 Methods
Several learning algorithms have been proposed for modeling label ranking data in recent years. These can be grouped as decomposition-based or direct. Decomposition-based methods divide the problem into several simpler problems (e.g., multiple binary problems). Examples are ranking by pairwise comparisons furnkranz+05 and mining rank data HenzgenH14 . Direct methods treat the rankings as target objects without any decomposition. Examples include decision trees todorovski+02 ; cheng2009icml , k-Nearest Neighbors brazdil+03 ; cheng2009icml and the linear utility transformation harpeled+02 ; dekel2003 . This second group of algorithms can be divided into two approaches. The first one contains methods that are based on statistical distributions of rankings (e.g. cheng2009icml ), such as Mallows lebanon+02b or Plackett-Luce ChengDH10 . The other group contains methods based on measures of similarity or correlation between rankings (e.g. todorovski+02 ; aiguzhinov+10 ).
LR-specific preprocessing methods have also been proposed, e.g. MDLPR rebelosa2013 and EDiRa rebelosa2016 . Both are direct methods based on measures of similarity. Considering that supervised discretization approaches usually provide better results than unsupervised methods Dougherty1995 , such methods can be of great importance in the field, in particular for AR-like algorithms, such as the ones proposed in this work, which are typically not suitable for numerical data.
More information on label ranking learning methods can be found in plbook .
3.1.1 Label Ranking by Learning Pairwise Preferences
Ranking by pairwise comparisons basically consists of reducing the problem of ranking into several classification problems. In the learning phase, the original problem is formulated as a set of pairwise preference problems. Each problem is concerned with one pair of labels of the ranking, (λ_a, λ_b) ∈ L, and the target attribute is the relative order between them, λ_a ≻ λ_b or λ_b ≻ λ_a . Then, a separate model is obtained for each pair of labels. Considering k labels, there will be k(k−1)/2 classification problems to model.
In the prediction phase, each model is applied to every pair of labels to obtain a prediction of their relative order. The predictions are then combined to derive rankings, which can be done in several ways. The simplest is to order the labels, for each example, considering the predictions of the models as votes. This topic has been well studied and documented fodor1994fuzzy ; Chenga .
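The simple voting scheme described above can be sketched as follows (a simplified illustration; names are ours, and ties between vote counts are broken by label order):

```python
def rank_by_voting(labels, pairwise_winners):
    """Order labels by the number of pairwise 'wins' (votes).
    pairwise_winners maps each label pair (a, b) to the label the
    corresponding binary model predicted as preferred."""
    votes = {lab: 0 for lab in labels}
    for winner in pairwise_winners.values():
        votes[winner] += 1
    # More votes first; Python's stable sort keeps label order on ties.
    return sorted(labels, key=lambda lab: -votes[lab])

# Predictions of the 3 binary models for labels (x, y, z):
preds = {("x", "y"): "x", ("x", "z"): "x", ("y", "z"): "y"}
ranking = rank_by_voting(["x", "y", "z"], preds)  # ["x", "y", "z"]
```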
3.2 Evaluation
Given an instance x_i with label ranking π_i and a ranking π̂_i predicted by a LR model, several loss functions on Ω can be used to evaluate the accuracy of the prediction. One such function is the number of discordant label pairs:
D(π, π̂) = #{(a, b) | π(a) > π(b) ∧ π̂(a) < π̂(b)}
If there are no discordant label pairs, the distance D = 0. Alternatively, the number of concordant pairs is:
C(π, π̂) = #{(a, b) | π(a) > π(b) ∧ π̂(a) > π̂(b)}
Kendall Tau
Kendall's τ coefficient kendall1970rank is the normalized difference between the number of concordant pairs, C, and discordant pairs, D:
τ = (C − D) / (k(k−1)/2)
where k(k−1)/2 is the number of possible pairwise combinations of the k labels. The values of this coefficient range within [−1, 1], where τ = 1 if the rankings are equal and τ = −1 if one ranking denotes the inverse order of the other (e.g. π = (1, 2, 3, 4) and π̂ = (4, 3, 2, 1)). Kendall's τ can also be computed in the presence of ties, using tau-b agresti2010analysis .
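A direct implementation of this coefficient for rank vectors without ties can be sketched as (names are ours):

```python
from itertools import combinations

def kendall_tau(pi, sigma):
    """Normalized difference between concordant and discordant pairs of
    two rank vectors of the same length, assuming no ties."""
    n = len(pi)
    c = d = 0
    for i, j in combinations(range(n), 2):
        # Same sign of rank difference in both rankings => concordant.
        if (pi[i] - pi[j]) * (sigma[i] - sigma[j]) > 0:
            c += 1
        else:
            d += 1
    return (c - d) / (n * (n - 1) / 2)

kendall_tau([1, 2, 3, 4], [4, 3, 2, 1])  # -1.0: inverse orders
```

In the presence of ties, the tau-b variant (which adjusts the denominator) should be used instead.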
An alternative measure is Spearman's rank correlation coefficient spearman04 .
Gamma coefficient
If we want to measure the correlation between two partial orders (sub-rankings), or between total and partial orders, we can use the Gamma coefficient kruskal1954 :
γ = (C − D) / (C + D)
which is identical to Kendall's τ coefficient in the presence of strict total orders, because then C + D = k(k−1)/2.
Weighted rank correlation measures
When it is important to give more relevance to higher ranks, a weighted rank correlation coefficient can be used. Such coefficients are typically adaptations of existing similarity measures, such as the one in costa+04 , which is based on Spearman's coefficient.
These correlation measures are not only used for evaluation; they can also be used within learning rebelosa2011 or preprocessing rebelosa2016 methods. Since Kendall's τ has been used for evaluation in many recent LR studies cheng2009icml ; rebelosa2013 , we use it here as well. The accuracy of a label ranker can be estimated by averaging the values of any of the measures explained here over the rankings predicted for a set of test examples. Given a dataset, D = {⟨x_i, π_i⟩}, i = 1, …, n, the usual resampling strategies, such as holdout or cross-validation, can be used to estimate the accuracy of a LR algorithm.
4 Label Ranking Association Rules
Association rules were originally proposed for descriptive purposes. However, they have been adapted for predictive tasks such as classification (e.g., liu1998integrating ). Given that label ranking is a predictive task, the adaptation of AR for label ranking comes naturally. A Label Ranking Association Rule (LRAR) rebelosa2011 is defined as:
A → π
where A ⊆ desc(X) and π ∈ Ω. Let R^LR be the set of label ranking association rules generated from a given dataset. When an instance x is covered by the rule A → π, the predicted ranking is π. A rule A → π covers an instance x if A ⊆ desc(x).
We can use the CAR framework liu1998integrating for LRAR. However, this approach has two important problems. First, the number of classes can be extremely large, up to a maximum of k!, where k is the size of the set of labels, L. This means that the amount of data required to learn a reasonable mapping is unreasonably large.
The second disadvantage is that this approach does not take into account the differences in nature between label rankings and classes. In classification, two examples either have the same class or not. In this regard, label ranking is more similar to regression than to classification. In regression, a large number of observations with a given target value, say 5.3, increases the probability of observing similar values, say 5.4 or 5.2, but not so much of very different values, say 3.1 or 100.2. This property must be taken into account in the induction of prediction models. A similar reasoning can be made in label ranking. Let us consider the case of a data set in which a ranking π occurs in 1% of the examples. Treating rankings as classes would mean that P(π) = 0.01. Let us further consider that the rankings π′ and π″, which are obtained from π by swapping a single pair of adjacent labels, occur in 50% of the examples. Taking into account the stochastic nature of these rankings cheng2009icml , P(π) = 0.01 seems to underestimate the probability of observing π. In other words, it is expected that the observation of π′ and π″ increases the probability of observing π and vice versa, because they are similar to each other.
This affects even rankings which are not observed in the available data. For example, even though a given ranking is not present in the dataset, it would not be entirely unexpected to see it in future data. This also means that it is possible to compute the probability of unseen rankings.
To take all this into account, similaritybased interestingness measures were proposed to deal with rankings rebelosa2011 .
4.1 Interestingness measures in Label Ranking
As mentioned before, because the degree of similarity between rankings can vary, similarity-based measures can be used to evaluate LRAR. These measures are able to distinguish rankings that are very similar to a given ranking from rankings that are very distinct from it. In practice, the measures described below can be applied with existing rule generation methods rebelosa2011 (e.g. APRIORI Agrawal1994 ).
Support
The support of a ranking π should increase with the observation of similar rankings, and that variation should be proportional to the similarity. Given a measure of similarity between rankings s(π_i, π), we can adapt the concept of support of the rule A → π as follows:
sup^LR(A → π) = ( Σ_{i: A ⊆ desc(x_i)} s(π_i, π) ) / n
Essentially, what we are doing is assigning a weight to each target ranking in the training data that represents its contribution to the probability that may be observed. Some instances give a strong contribution to the support count (i.e., 1), while others will give a weaker or even no contribution at all.
Any function that measures the similarity between two rankings or permutations can be used, such as Kendall's τ kendall1970rank or Spearman's ρ spearman04 . The function used here is of the form:

s(π_i, π) = s′(π_i, π) if s′(π_i, π) ≥ θ, and 0 otherwise  (1)

where s′ is a similarity function. This general form assumes that, below a given threshold θ, similarity is not useful to discriminate between rankings, as they are too different from π. This means that the support of A → π will be based only on the items of the form ⟨desc(x_i), π_i⟩, for all i where s′(π_i, π) ≥ θ.
Many functions can be used as s′. However, given that the loss function we aim to minimize is known beforehand, it makes sense to use it to measure the similarity between rankings. Therefore, we use Kendall's τ as s′.
Concerning the threshold, given that anti-monotonicity can only be guaranteed with non-negative values PeiHL01 , it follows that θ ≥ 0. Therefore we think that θ = 0 is a reasonable default value, because it separates positive from negative correlations between rankings.
Table 1 shows an example of a label ranking dataset represented according to this approach. Instance 1 (TID = 1) contributes to the support count of its own ruleitem with 1, as expected. However, that same instance also gives a contribution of 0.33 to the support count of the ruleitem with the ranking of instance 3, given the similarity of their rankings. On the other hand, it gives no contribution to the support of the ruleitem with the ranking of instance 2, because these rankings are clearly different, i.e. their similarity lies below the threshold.
TID  A  s(π_i, π_1)  s(π_i, π_2)  s(π_i, π_3)
1  L  0.33  0.00  1.00
2  L  0.00  1.00  0.00
3  L  1.00  0.00  0.33
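The similarity-based support of Equation 1, with Kendall's τ as s′ and threshold θ, can be sketched as follows; the dataset encoding and all names are our own:

```python
from itertools import combinations

def kendall(pi, sigma):
    """Kendall tau between two rank vectors (no ties)."""
    pairs = list(combinations(range(len(pi)), 2))
    s = sum(1 if (pi[i] - pi[j]) * (sigma[i] - sigma[j]) > 0 else -1
            for i, j in pairs)
    return s / len(pairs)

def lr_support(D, A, pi, theta=0.0):
    """Similarity-based support of the LRAR A -> pi: each covered
    training ranking contributes its similarity to pi, but only when
    that similarity reaches the threshold theta."""
    total = 0.0
    for desc_i, pi_i in D:
        if A <= desc_i:                 # rule antecedent covers instance
            tau = kendall(pi, pi_i)
            if tau >= theta:            # Equation 1: below theta -> 0
                total += tau
    return total / len(D)

# Toy dataset: (descriptor set, target rank vector) pairs.
D = [({"a"}, [1, 2, 3]), ({"a"}, [2, 1, 3]), ({"b"}, [3, 2, 1])]
```

Note that for k = 3 a ranking one adjacent swap away from π contributes τ = 1/3 ≈ 0.33, matching the contributions shown in Table 1.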
Confidence
The confidence of a rule A → π comes naturally if we replace the classical measure of support with the similarity-based sup^LR:
conf^LR(A → π) = sup^LR(A → π) / sup(A)
Improvement
Improvement in association rule mining is defined as the smallest difference between the confidence of a rule and the confidence of all sub-rules sharing the same consequent bayardo2000constraint . In LR it is not suitable to compare targets simply as equal or different (Section 4). Therefore, to implement pruning based on improvement for LR, some adaptation is required as well. Given that the relation between target values is different from classification, as discussed in Section 4.1, we restrict the comparison to rules sharing the same consequent ranking π.
Improvement for Label Ranking is defined as:
imp^LR(A → π) = min{ conf^LR(A → π) − conf^LR(A′ → π) : A′ ⊂ A }
As an illustrative example, consider the two rules A → π and A′ → π, where A is a superset of A′, A′ ⊂ A. Given a minimum improvement threshold minImp, the rule A → π will be kept if, and only if, conf^LR(A → π) − conf^LR(A′ → π) ≥ minImp for every such A′.
Lift
The lift measures the independence between the consequent and the antecedent of the rule AzevedoJ07 . The adaptation of lift for LRAR is straightforward since it only depends on the concept of support, for which a similarity-based version already exists:
lift^LR(A → π) = sup^LR(A → π) / (sup(A) × sup^LR(π))
4.2 Generation of LRAR
Given the adaptations of the interestingness measures proposed, the task of learning LRAR can be defined essentially in the same way as the task of learning AR, i.e. to identify the set of LRAR that have support and confidence higher than the thresholds defined by the user. More formally, given a training set D, the algorithm aims to create a set of high-accuracy rules R^LR to cover a test set T. If R^LR does not cover some x ∈ T, a default ranking (Section 4.3.1) is assigned to it.
4.2.1 Implementation of LRAR in CAREN
The association rule generator we are using is CAREN Azevedo2010 (http://www4.di.uminho.pt/~pja/class/caren.html). CAREN implements an association rule algorithm to derive rule-based prediction models, such as CAR and LRAR. For label ranking datasets, CAREN derives association rules whose consequent is a complete ranking.
CAREN is specialized in generating association rules for predictive models and employs a bitwise, depth-first frequent pattern mining algorithm. Rule pruning is performed using a Fisher exact test Azevedo2010 . Like CMAR Pei2010 , CAREN is a rule-based algorithm rather than itemset-based. This means that frequent itemsets are derived at the same time as rules are generated, whereas itemset-based algorithms carry out the two tasks in two separate steps. Rule-based approaches allow for different pruning methods. For example, let us consider the rule A → λ, where λ is the most frequent class in the examples covering A. If sup(A → λ) < minsup, there is no need to search for supersets A′ ⊃ A, since no rule of the form A′ → λ′ can have a support higher than sup(A → λ).
CAREN generates significant rules Webb06 . The statistical significance of a rule is evaluated using a Fisher exact test, by comparing its support to the support of its direct generalizations. The direct generalizations of a rule A → C are the rules A \ {a} → C, where a ∈ A is a single item.
The final set of rules obtained defines the label ranking prediction model, which we can also refer to as the label ranker.
For strict rankings, CAREN can generate predictions using consensus ranking (Section 4.3), best rule, among other methods.
4.3 Prediction
We use a very straightforward method to generate predictions with the label ranker. The set of rules R^LR can be represented as an ordered list of rules, sorted by some user-defined measure of relevance.
As mentioned before, a rule A → π covers (or matches) an instance x if A ⊆ desc(x). If only one rule matches x, the predicted ranking for x is its consequent π. However, in practice, it is quite common to have more than one rule covering the same instance x, and these rules can make conflicting ranking recommendations. There are several methods to address such conflicts, such as selecting the best rule, computing the majority ranking, etc. However, it has been shown that the ranking obtained by ordering the average ranks of the labels across all rankings minimizes the Euclidean distance to all those rankings kemeny+72 . In other words, it maximizes the similarity according to Spearman's ρ spearman04 . This can be referred to as the average ranking brazdil+03 .
Given any set of rankings {π_1, …, π_s} with k labels, we compute the average rank of each label λ_j as:

π̄(j) = ( Σ_{i=1}^{s} π_i(j) ) / s  (2)
The average ranking π̄ can be obtained by ranking the values of π̄(j), j = 1, …, k. A weighted version of this method can be obtained by using the confidence or support of the rules as weights.
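The average ranking of Equation 2 can be sketched as follows (assuming rank vectors without ties; names are ours):

```python
def average_ranking(rankings):
    """Consensus ranking: average the rank of each label across all
    rankings, then rank those averages (1 = smallest average = top)."""
    n = len(rankings[0])
    avg = [sum(r[j] for r in rankings) / len(rankings) for j in range(n)]
    order = sorted(range(n), key=lambda j: avg[j])
    ranks = [0] * n
    for pos, j in enumerate(order):
        ranks[j] = pos + 1
    return ranks

# Three conflicting rule consequents over 3 labels:
average_ranking([[1, 2, 3], [1, 3, 2], [2, 1, 3]])  # [1, 2, 3]
```

A weighted variant would multiply each π_i(j) by the confidence (or support) of the rule that recommended π_i, and divide by the sum of the weights.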
4.3.1 Default rules
As in classification, in some cases the label ranker might not find any rule that covers a given instance x. To avoid this, we need to define a default rule, ∅ → π_default, with an empty antecedent, which can be used in such cases.
A default class is also often used in classification tasks HanK2000 , usually the majority class of the training set D. In a similar way, we could define the majority ranking as our default ranking. However, some label ranking datasets have as many rankings as instances, making the majority ranking not very representative.
4.4 Parameter tuning
Due to the intrinsic nature of each dataset, or even of the preprocessing methods used to prepare the data (e.g., the discretization method), the maximum minconf needed to obtain a rule set R^LR that covers all the examples may vary significantly LiuHM99a . A trivial solution would be to set a very low threshold, which would generate many rules, hence increasing the coverage. However, this would probably lead to a lot of uninteresting rules as well, and the model would overfit the data. Our goal is thus to obtain a rule set that gives maximal coverage while keeping high-confidence rules.
Let us define cov(R^LR) as the coverage of the model, i.e. the coverage of the set of rules R^LR. Algorithm 1 presents a simple heuristic method to determine the minconf that obtains a rule set R^LR such that a given minimal coverage, minCov, is guaranteed. This procedure has the important advantage that it does not take into account the accuracy of the rule sets generated, thus reducing the risk of overfitting.
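A sketch of such a tuning loop, assuming user-supplied rule generation and coverage functions (the step value and all names are illustrative, not the exact procedure of Algorithm 1):

```python
def tune_minconf(generate_rules, coverage, min_cov=0.95, step=0.05):
    """Decrease the minimum confidence threshold from 1.0 in fixed
    steps until the induced rule set reaches the target coverage.
    generate_rules(minconf) -> rule set; coverage(rules) -> [0, 1]."""
    minconf = 1.0
    while minconf > 0:
        rules = generate_rules(minconf)
        if coverage(rules) >= min_cov:
            return minconf, rules       # highest minconf that suffices
        minconf = round(minconf - step, 10)
    return 0.0, generate_rules(0.0)     # fall back to minimal threshold
```

Note that accuracy is never consulted, only coverage, which is what keeps the procedure from overfitting.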
5 Pairwise Association Rules
Association rules use sets of descriptors to represent meaningful subsets of the data Hastie2009 , hence providing an easy interpretation of the patterns mined. Due to this intuitive representation, since their first application in market basket analysis Agrawal1993 , they have become very popular in data mining and machine learning tasks (mining rankings HenzgenH14 , classification liu1998integrating , label ranking rebelosa2011 , etc.).
LRAR proved to be an effective predictive model; however, they are designed to find complete rankings. Despite their similarity-based measures, which account for possible ranking noise, LRAR cannot capture sub-ranking patterns because they always try to infer complete rankings. On the other hand, association rules have been used to find patterns within rankings HenzgenH14 ; however, these do not relate the patterns to the independent variables. Besides, in HenzgenH14 , the consequent is limited to one pairwise comparison.
In this work, we propose a decomposition method to look for meaningful associations between independent variables and preferences (in the form of pairwise comparisons), the Pairwise Association Rules (PAR), which can be regarded as a predictive or descriptive model. We define a PAR as:
A → C, where A ⊆ desc(X) and C is a set of pairwise comparisons over L
where, as in the original AR paper Agrawal1994 , we allow rules with multiple items, not only in the antecedent but also in the consequent, i.e. PAR can have multiple sets of pairwise comparisons in the consequent.
Similarly to RPC (Section 3.1.1), we decompose the target rankings into pairwise comparisons. Therefore, PAR can be obtained from data with strict rankings, partial rankings and sub-rankings. (To derive the PAR, we added a pairwise decomposition method to the CAREN Azevedo2010 software.)
Contrary to LRAR, we use the same interestingness measures that are used in typical AR approaches (i.e. sup, conf, etc.), instead of the similarity-based versions defined for LR problems. This allows PAR to filter out non-frequent or uninteresting patterns, and makes it harder to derive strict rankings. When the method cannot find interesting rules with enough pairwise comparisons to define a strict ranking, partial rankings, sub-rankings or even sets of disjoint pairwise comparisons can be found. That is, the interest measures define the borders between what the model will predict and where it will abstain.
Abstention is used in machine learning to describe the option of not making a prediction when the confidence in the output of a model is insufficient. The simplest case is classification, where the model can abstain from making a decision bartlettW08 . In the label ranking task, a method that makes partial abstentions was proposed in cheng12abs . A similar reasoning is used here, both for predictive and descriptive models.
More formally, let us consider a target ranking π, which can be a complete ranking, a partial ranking or a sub-ranking. For each π over m labels we can extract up to m(m−1)/2 pairwise comparisons. We consider 4 possible outcomes for each pairwise comparison between λ_a and λ_b:

λ_a ≻ λ_b

λ_a ≺ λ_b

λ_a ∼ λ_b (indifference)

λ_a ⊥ λ_b (incomparability)
As an example, a PAR can be of the form:
A → λ_a ≻ λ_b ∧ λ_a ≻ λ_c
The consequent can be simplified into λ_a ≻ {λ_b, λ_c} or represented as a sub-ranking.
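The pairwise decomposition with the four outcomes above can be sketched as follows, using the sub-ranking encoding of Section 3 where rank 0 marks a missing label (all names are ours):

```python
from itertools import combinations

def pairwise_decompose(ranks, labels):
    """Decompose a rank vector into pairwise relations.
    Lower rank = preferred; rank 0 = label absent from the ranking,
    which yields incomparability; equal ranks yield indifference."""
    rel = {}
    for i, j in combinations(range(len(labels)), 2):
        a, b = labels[i], labels[j]
        if ranks[i] == 0 or ranks[j] == 0:
            rel[(a, b)] = "incomparable"
        elif ranks[i] == ranks[j]:
            rel[(a, b)] = "indifferent"
        elif ranks[i] < ranks[j]:
            rel[(a, b)] = a + ">" + b
        else:
            rel[(a, b)] = b + ">" + a
    return rel

# Sub-ranking (2, 1, 0) over labels (A, B, C): B preferred to A,
# no information about C.
pairwise_decompose([2, 1, 0], ["A", "B", "C"])
```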
6 Experimental Results
In this section we start by describing the datasets used in the experiments, then we introduce the experimental setup and finally present the results obtained.
6.1 Datasets
The datasets in this work were taken from the KEBI Data Repository at the Philipps University of Marburg cheng2009icml (Table 2).
To illustrate domain-specific interpretations of the results, we experiment with two additional datasets. We use an adapted dataset from the 1999 COIL Competition Bache+Lichman:2013 , Algae rebelosa2016epm , concerning the frequencies of algae populations in different environments. The original dataset consisted of 340 examples, each representing measurements of a sample of water from different European rivers in different periods. The measurements include concentrations of chemical substances like nitrogen (in the form of nitrates, nitrites and ammonia), oxygen and chlorine. The pH, season, river size and flow velocity were also registered. For each sample, the frequencies of 7 types of algae were measured as well. In this work, we considered the algae concentrations as preference relations by ordering them from larger to smaller concentrations. Those with frequency 0 are placed in the last position, and equal frequencies are represented as ties. Missing values in the independent variables were set to 0.
Finally, the Sushi preference dataset kamishima03 , which is composed of demographic data about 5000 people and their sushi preferences, is also used. Each person sorted a set of 10 different sushi types by preference. The 10 types of sushi are: a) shrimp, b) sea eel, c) tuna, d) squid, e) sea urchin, f) salmon roe, g) egg, h) fatty tuna, i) tuna roll and j) cucumber roll. Since the attribute names were not transformed in this dataset, we can make a richer analysis of it.
Table 2: Summary of the datasets.

Datasets     | type | #examples | #labels | #attributes | U_π
bodyfat      | B    | 252       | 7       | 7           | 94%
calhousing   | B    | 20,640    | 4       | 4           | 0.1%
cpusmall     | B    | 8,192     | 5       | 6           | 1%
elevators    | B    | 16,599    | 9       | 9           | 1%
fried        | B    | 40,769    | 5       | 9           | 0.3%
glass        | A    | 214       | 6       | 9           | 14%
housing      | B    | 506       | 6       | 6           | 22%
iris         | A    | 150       | 3       | 4           | 3%
segment      | A    | 2,310     | 7       | 18          | 6%
stock        | B    | 950       | 5       | 5           | 5%
vehicle      | A    | 846       | 4       | 18          | 2%
vowel        | A    | 528       | 11      | 10          | 56%
wine         | A    | 178       | 3       | 13          | 3%
wisconsin    | B    | 194       | 16      | 16          | 100%
Algae (COIL) |      | 316       | 7       | 10          | 72%
Sushi        |      | 5,000     | 10      | 10          | 98%
Table 2 also presents a simple measure of the diversity of the target rankings, the Unique Rankings' Proportion, U_π. U_π is the proportion of distinct target rankings in a given dataset. As a practical example, the iris dataset has 5 distinct rankings for 150 instances, which results in U_π = 5/150 ≈ 3%.
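U_π is straightforward to compute. A minimal sketch (the particular distribution of the 5 rankings below is made up; only the 5-distinct-out-of-150 proportion matches iris):

```python
def unique_ranking_proportion(target_rankings):
    """U_pi: the proportion of distinct target rankings in a dataset.
    Rankings are given as tuples of rank positions, one per instance."""
    return len(set(target_rankings)) / len(target_rankings)

# 150 instances sharing only 5 distinct rankings, as in the iris dataset
rankings = ([(1, 2, 3)] * 80 + [(1, 3, 2)] * 40 + [(2, 1, 3)] * 15
            + [(2, 3, 1)] * 10 + [(3, 1, 2)] * 5)
print(unique_ranking_proportion(rankings))  # 5/150 ≈ 0.033
```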
6.2 Experimental setup
Continuous variables were discretized with two distinct methods: (1) Entropy-based Discretization for Ranking data (EDiRa) rebelosa2016 and (2) equal-width bins. EDiRa is the state-of-the-art supervised discretization method in Label Ranking, while equal width is a simple, general method that serves as a baseline.
The evaluation measure used in all experiments is Kendall's τ. Ten-fold cross-validation was used to estimate its value for each experiment. The generation of Label Ranking Association Rules (LRAR) and PAR was performed with CAREN Azevedo2010 , which uses a depth-first approach.
The confidence-tuning procedure (Algorithm 1) was used to set the minimum confidence. The chosen step value seems reasonable because the minimum confidence can be found in, at most, 20 iterations. We use a common minimum support value from Association Rule (AR) mining as the default for all datasets, and we define minM so as to get reasonable coverage while avoiding rule explosion.
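A hedged sketch of the kind of greedy confidence-tuning loop described above (Algorithm 1 itself is not reproduced here; the function and parameter names `mine_rules`, `max_uncovered`, `start` and `step` are illustrative, not from the paper):

```python
def tune_min_confidence(examples, mine_rules, max_uncovered=0.05,
                        start=1.0, step=0.05):
    """Lower minconf stepwise until the fraction of examples not covered
    by any rule drops below `max_uncovered`.  `mine_rules(minconf)` is
    assumed to return the rule set mined at that confidence level, each
    rule being a predicate over an example."""
    minconf = start
    while minconf > 0:
        rules = mine_rules(minconf)  # mine rules at this confidence level
        uncovered = sum(1 for x in examples
                        if not any(rule(x) for rule in rules))
        if uncovered / len(examples) <= max_uncovered:
            return minconf
        minconf = round(minconf - step, 10)  # greedy step down
    return 0.0

# toy miner: a single rule covering x >= int(minconf * 10)
examples = list(range(10))
toy_miner = lambda c: [lambda x, t=int(c * 10): x >= t]
print(tune_min_confidence(examples, toy_miner))  # → 0.05
```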
In terms of similarity functions, we use Kendall's τ normalized to the interval [0, 1] as our similarity function (Equation 1).
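Equation 1 is not reproduced here, but one common way to map Kendall's τ from [−1, 1] to [0, 1] can be sketched as follows (an assumed normalization, not necessarily the paper's exact definition):

```python
from itertools import combinations

def kendall_tau(r1, r2):
    """Kendall's tau between two rankings given as rank vectors (no ties)."""
    pairs = list(combinations(range(len(r1)), 2))
    concordant = sum(1 for i, j in pairs
                     if (r1[i] - r1[j]) * (r2[i] - r2[j]) > 0)
    discordant = len(pairs) - concordant
    return (concordant - discordant) / len(pairs)

def similarity(r1, r2):
    """Linearly map tau from [-1, 1] to [0, 1]."""
    return (kendall_tau(r1, r2) + 1) / 2

print(similarity([1, 2, 3, 4], [1, 2, 3, 4]))  # identical rankings → 1.0
print(similarity([1, 2, 3, 4], [4, 3, 2, 1]))  # reversed rankings → 0.0
```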
6.3 Results with LRAR
In the experiments described in this section we analyze the performance from different perspectives (accuracy, number of rules and average confidence) as the similarity threshold θ varies. We aim to understand the impact of using similarity measures in the generation of LRAR and provide some insights about their usage.
LRAR, despite being based on similarity measures, are consistent with the classical concepts underlying association rules. A special case is θ = 1, where, as in CAR, only equal rankings are considered. Therefore, by varying the threshold we can also understand how similarity-based interest measures (θ < 1) contribute to the accuracy of the model, in comparison to frequency-based approaches (θ = 1).
We would also like to understand how some properties of the data relate to the sensitivity to θ. We can extract two simple measures of ranking diversity from the datasets: the Unique Rankings' Proportion (U_π), mentioned before, and the ranking entropy rebelosa2016 .
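As a rough illustration of ranking diversity, a plain Shannon entropy over the distribution of distinct target rankings can be computed as follows. This is only a proxy: the ranking entropy of rebelosa2016 is defined specifically for rankings, so the two measures need not coincide.

```python
from collections import Counter
from math import log2

def ranking_shannon_entropy(target_rankings):
    """Shannon entropy of the distribution of distinct target rankings
    (a simple diversity proxy, not the measure of rebelosa2016)."""
    counts = Counter(target_rankings)
    n = len(target_rankings)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# a dataset where one ranking dominates has low entropy ...
print(ranking_shannon_entropy([(1, 2, 3)] * 95 + [(2, 1, 3)] * 5))
# ... while equally frequent distinct rankings give high entropy
print(ranking_shannon_entropy([(1, 2, 3), (1, 3, 2),
                               (2, 1, 3), (3, 1, 2)] * 25))  # → 2.0
```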
6.3.1 Sensitivity analysis
Here we analyze how the similarity threshold affects the accuracy, number and quality (in terms of confidence) of LRAR.
Accuracy
In Figure 1 we can see the behavior of the accuracy of CAREN as a function of θ. It shows that, in general, there is a tendency for the accuracy to decrease as θ gets closer to 1. This happens in 12 out of the 14 datasets analyzed. On the other hand, in 9 out of 14 datasets, the accuracy is rather stable for lower values of θ.
If we take into consideration that the model ignores all similarities between rankings for θ = 1, the observed behavior seems to favor the similarity-based approach. In line with that, two extreme cases can be seen in the fried and wisconsin datasets, where CAREN was not able to find any LRAR for θ = 1. (The default rule was not used in these experiments because it is not related to θ.)
Let us consider the accuracy range, i.e. the maximum accuracy minus the minimum accuracy. To find out which datasets are more likely to be affected by the choice of θ, we can compare their ranking entropy with the accuracy range measured in Figure 1. In Figure 2 we compare the accuracy range with the ranking entropy rebelosa2016 . We can see that the higher the entropy, the more the accuracy can be affected by the choice of θ.
Results seem to indicate that, when mining LRAR in datasets with low ranking entropy, the choice of θ is not so relevant. On the other hand, as the entropy grows, θ should be chosen more carefully.
One interesting behavior can be found in the fried dataset. Despite its very low proportion of unique rankings (Table 2), its entropy is quite high (Figure 2), which makes it more sensitive to θ, as seen in Figure 1. On the other hand, iris and wine, with very low entropy, seem unaffected by θ.
Number of rules
Ideally, we would like to obtain a small number of rules with high accuracy. However, such a balance is not expected to happen frequently. Ultimately, as accuracy is the most important evaluation criterion, if a reduction in the number of rules comes at a high cost in accuracy, it is better to have more rules. Thus, it is important to understand how the number of LRAR varies with the similarity threshold θ, while also taking into account the impact on the accuracy of the model.
In Figure 3 we see how many LRAR are generated per dataset as θ varies. The majority of the plots, 10 out of 14, show a decrease in the number of rules as θ gets closer to 1. As discussed before, the accuracy in general also decreases as θ approaches 1, so let us focus on the lower part of the range.
For lower values of θ, the number of rules generated is quite stable in 9 out of 14 datasets; in the first half of that range, this holds for 13 datasets.
We expected the number of rules to decrease as θ increases; however, the results show that the number of rules does not decrease that much, especially for values of θ up to 0.3. This is because θ is also used in the pruning step (Section 4.1), reducing the number of rules against which the improvement of an extension is measured and, thus, increasing the probability of an extension not being kept in the model. This means that the θ-based pruning is effective in reducing the number of LRAR.
As mentioned before, the improvement measure compares not only rules whose antecedents are related, but also rules whose consequent rankings are similar. In other words, with θ we are pruning LRAR with similar rankings too.
These results do not support any strong conclusion about the ideal value of θ regarding the number of rules. However, they are in line with the previous analysis of accuracy.
Minimum Confidence
As mentioned before, we use a greedy algorithm to automatically adjust the minimum confidence in order to reduce the number of examples that are not covered by any rule. This means that the method has to adapt the minimum confidence per dataset and per θ, as seen in Figure 4.
In general, the minimum confidence decreases monotonically as θ increases. As θ approaches 1, the minimum confidence reaches its minimum in 13 out of 14 datasets, which is consistent with the accuracy plots (Figure 1). This means that, if we want to generate rules with as much confidence as possible, we should use the minimum value of θ.
Support versus accuracy
We vary the minimum support threshold, minsup, to test how it affects the accuracy of our learner. A similar study has been carried out for CBA iqbal2013comparison . Specifically, we vary minsup over a range of values with a fixed step size. Due to the computational cost of these experiments, we only considered the six smallest datasets.
In general, as we increase minsup the accuracy decreases, which is a strong indicator that the support threshold should be small (Figure 5). All lines are monotonically non-increasing, i.e. the values either remain constant or decrease as minsup increases.
From a different perspective, the changes are generally very small for lower values of minsup. Considering that a lower minsup potentially generates more rules, we recommend starting experiments with a small minimum support.
Discretization techniques
To test the influence of the discretization method used, we performed the same analysis using an unsupervised discretization method, equal-width binning.
In general, the accuracy behaved the same way as a function of θ as with EDiRa, i.e. the results are highly correlated (Figure 6). However, the supervised approach is consistently better.
These results add further evidence that EDiRa is a suitable discretization method for label ranking rebelosa2016 .
Similar behavior was observed concerning the number of rules generated and the minimum confidence.
Summary
It is well known that general, simple rules for setting the parameters of machine learning algorithms do not exist. Nevertheless, it is good to know where reasonable values lie. Hence, we think the values of θ and minsup identified above are good defaults for LRAR with CAREN. In terms of discretization methods, our results confirm that a supervised approach, such as EDiRa, is a good choice.
6.4 Results with PAR
In this work we use PAR as a descriptive model to find patterns concerning subsets of labels. We focus on the descriptive task for two reasons: it keeps the approach simpler, and it complements the predictive LRAR approach.
The minimum support and confidence presented here define the abstention level of the model. They were adjusted manually to generate a small set of rules, between 150 and 200.
In the generation of PAR, we also set a minimum lift threshold. Although many interesting rules were found, due to space limitations we present only the most relevant ones.
Algae data
Using the Algae dataset, we found 179 PAR. The rule with the highest lift (approximately 6) was:
The consequent of this rule can be represented as a set of pairwise preferences. Considering that the labels represent algae populations, this rule states that, under these conditions, type 6 is always more prevalent than type 2. It also states that type 7 is less prevalent than types 2, 5 and 6.
The second rule with the highest lift is:
The target of this rule is a partial ranking.
If this PAR were used for prediction, the corresponding subranking would have been the prediction.
Sushi data
When analyzing the sushi dataset we obtained 166 rules. The following rule was found with high lift:
In the whole dataset, only part of the people show these relative preferences. This PAR shows that this proportion almost doubles if we consider males from Eastern Japan in a certain age range.
A related rule was also found concerning a different group of people, with a different age range and from a different region:
This rule includes one relative preference found in this group which is the opposite of what was observed in the previous rule. Based on this information, we analyzed the data and found that people living in Eastern Japan and people from Western Japan show opposite preferences between these two types of sushi.
7 Conclusions
In this paper we address the problem of finding association patterns in label rankings. We present an extensive empirical analysis of the behavior of a label ranking method, the CAREN implementation of Label Ranking Association Rules. The performance was analyzed from different perspectives: accuracy, number of rules and average confidence. The results show that similarity-based interest measures contribute positively to the accuracy of the model, in comparison to frequency-based approaches (i.e. θ = 1). The results confirm that LRAR are a viable label ranking tool that helps solve complex label ranking problems (i.e. problems with high ranking entropy). They also enabled the identification of some parameter values that are good candidates to be used as defaults.
Results also seem to indicate that the higher the entropy, the more the accuracy can be affected by the choice of θ. A user can measure the ranking entropy of a dataset beforehand and adjust θ accordingly.
Additionally, we propose Preference Association Rules (PAR), which are association rules whose consequent represents multiple pairwise preferences. We illustrated the usefulness of this approach to identify interesting patterns in label ranking datasets which cannot be obtained with LRAR.
In future work, we will use PAR for predictive tasks.
Acknowledgments
This work is financed by the ERDF  European Regional Development Fund through the Operational Programme for Competitiveness and Internationalization  COMPETE 2020 Programme within project POCI010145FEDER006961, and by National Funds through the FCT  Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology) as part of project UID/EEA/50014/2013.
References
 (1) J. Fürnkranz, E. Hüllermeier, Preference learning, KI 19 (1) (2005) 60–.
 (2) W. Cheng, J. Hühn, E. Hüllermeier, Decision tree and instance-based learning for label ranking, in: ICML '09: Proceedings of the 26th Annual International Conference on Machine Learning, ACM, New York, NY, USA, 2009, pp. 161–168.
 (3) S. Vembu, T. Gärtner, Label ranking algorithms: A survey, in: Preference Learning, 2010, pp. 45–64. doi:10.1007/978-3-642-14125-6_3.
 (4) R. Agrawal, R. Srikant, Fast algorithms for mining association rules in large databases, in: VLDB'94, Proceedings of 20th International Conference on Very Large Data Bases, September 12-15, 1994, Santiago de Chile, Chile, 1994, pp. 487–499. URL http://www.vldb.org/conf/1994/P487.PDF
 (5) S. Henzgen, E. Hüllermeier, Mining rank data, in: Discovery Science - 17th International Conference, DS 2014, Bled, Slovenia, October 8-10, 2014, Proceedings, 2014, pp. 123–134. doi:10.1007/978-3-319-11812-3_11.
 (6) B. Liu, W. Hsu, Y. Ma, Integrating classification and association rule mining, Knowledge Discovery and Data Mining (1998) 80–86.
 (7) C. R. de Sá, C. Soares, A. M. Jorge, P. J. Azevedo, J. P. da Costa, Mining association rules for label ranking, in: PAKDD (2), 2011, pp. 432–443.
 (8) L. Todorovski, H. Blockeel, S. Džeroski, Ranking with predictive clustering trees, in: T. Elomaa, H. Mannila, H. Toivonen (Eds.), Proc. of the 13th European Conf. on Machine Learning, no. 2430 in LNAI, Springer-Verlag, 2002, pp. 444–455.
 (9) P. Brazdil, C. Soares, J. Costa, Ranking learning algorithms: Using IBL and meta-learning on accuracy and time results, Machine Learning 50 (3) (2003) 251–277.
 (10) T. Kamishima, Nantonac collaborative filtering: recommendation based on order responses, in: Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Washington, DC, USA, August 24-27, 2003, 2003, pp. 583–588. doi:10.1145/956750.956823.
 (11) E. Omiecinski, Alternative interest measures for mining associations in databases, IEEE Trans. Knowl. Data Eng. 15 (1) (2003) 57–69. doi:10.1109/TKDE.2003.1161582.
 (12) M. Halkidi, M. Vazirgiannis, Quality assessment approaches in data mining, in: The Data Mining and Knowledge Discovery Handbook, 2005, pp. 661–696.
 (13) J. S. Park, M.-S. Chen, P. S. Yu, An effective hash-based algorithm for mining association rules, ACM SIGMOD Record 24 (2) (1995) 175–186. doi:10.1145/568271.223813.
 (14) S. Brin, R. Motwani, J. D. Ullman, S. Tsur, Dynamic itemset counting and implication rules for market basket data, in: Proceedings of the 1997 ACM SIGMOD International Conference on Management of Data - SIGMOD '97, 1997, pp. 255–264. doi:10.1145/253260.253325.
 (15) J. S. Park, M.-S. Chen, P. S. Yu, Efficient parallel data mining for association rules, in: CIKM, 1995, pp. 31–36.
 (16) S. Thomas, S. Sarawagi, Mining generalized association rules and sequential patterns using SQL queries, in: KDD, 1998, pp. 344–348.
 (17) J. Han, J. Pei, Y. Yin, R. Mao, Mining frequent patterns without candidate generation: A frequent-pattern tree approach, Data Min. Knowl. Discov. 8 (1) (2004) 53–87. doi:10.1023/B:DAMI.0000005258.31418.83.
 (18) R. J. Bayardo Jr., R. Agrawal, D. Gunopulos, Constraint-based rule mining in large, dense databases, Data Min. Knowl. Discov. 4 (2/3) (2000) 217–240. doi:10.1023/A:1009895914772.
 (19) G. I. Webb, Discovering significant rules, in: Proceedings of the Twelfth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Philadelphia, PA, USA, August 20-23, 2006, 2006, pp. 434–443. doi:10.1145/1150402.1150451.
 (20) E. Hüllermeier, J. Fürnkranz, W. Cheng, K. Brinker, Label ranking by learning pairwise preferences, Artif. Intell. 172 (16-17) (2008) 1897–1916.
 (21) J. Fürnkranz, E. Hüllermeier, Preference learning: An introduction, in: Preference Learning, 2010, pp. 1–17. doi:10.1007/978-3-642-14125-6_1.
 (22) F. Brandenburg, A. Gleißner, A. Hofmeier, Comparing and aggregating partial orders with Kendall tau distances, Discrete Math., Alg. and Appl. 5 (2). doi:10.1142/S1793830913600033.
 (23) S. Har-Peled, D. Roth, D. Zimak, Constraint classification: a new approach to multiclass classification, in: Proc. of the International Workshop on Algorithmic Learning Theory (ALT), Springer-Verlag, 2002, pp. 135–150.
 (24) S. Thrun, L. K. Saul, B. Schölkopf (Eds.), Advances in Neural Information Processing Systems 16 [Neural Information Processing Systems, NIPS 2003, December 8-13, 2003, Vancouver and Whistler, British Columbia, Canada], MIT Press, 2004.
 (25) G. Lebanon, J. D. Lafferty, Conditional models on the ranking poset, in: NIPS, 2002, pp. 415–422.
 (26) W. Cheng, K. Dembczynski, E. Hüllermeier, Label ranking methods based on the Plackett-Luce model, in: Proceedings of the 27th International Conference on Machine Learning (ICML-10), June 21-24, 2010, Haifa, Israel, 2010, pp. 215–222. URL http://www.icml2010.org/papers/353.pdf
 (27) A. Aiguzhinov, C. Soares, A. P. Serra, A similarity-based adaptation of naive Bayes for label ranking: Application to the metalearning problem of algorithm recommendation, in: Discovery Science, 2010, pp. 16–26.
 (28) C. R. de Sá, C. Soares, A. Knobbe, P. J. Azevedo, A. M. Jorge, Multi-interval discretization of continuous attributes for label ranking, in: Discovery Science, 2013, pp. 155–169.
 (29) C. R. de Sá, C. Soares, A. Knobbe, Entropy-based discretization methods for ranking data, Inf. Sci. 329 (2016) 921–936. doi:10.1016/j.ins.2015.04.022.
 (30) J. Dougherty, R. Kohavi, M. Sahami, Supervised and unsupervised discretization of continuous features, in: Machine Learning, Proceedings of the Twelfth International Conference on Machine Learning, Tahoe City, California, USA, July 9-12, 1995, 1995, pp. 194–202.
 (31) J. Fürnkranz, E. Hüllermeier (Eds.), Preference Learning, Springer, 2010. doi:10.1007/978-3-642-14125-6.
 (32) J. Fodor, M. Roubens, Fuzzy Preference Modelling and Multicriteria Decision Support, Springer, 1994.
 (33) M. Kendall, J. Gibbons, Rank Correlation Methods, Griffin, London, 1970.
 (34) A. Agresti, Analysis of Ordinal Categorical Data, Vol. 656, John Wiley & Sons, 2010.
 (35) C. Spearman, The proof and measurement of association between two things, American Journal of Psychology 15 (1904) 72–101.
 (36) L. A. Goodman, W. H. Kruskal, Measures of association for cross classifications, Journal of the American Statistical Association 49 (268) (1954) 732–764. URL http://www.jstor.org/stable/2281536
 (37) J. Pinto da Costa, C. Soares, A weighted rank measure of correlation, Australian & New Zealand Journal of Statistics 47 (4) (2005) 515–529.
 (38) J. Pei, J. Han, L. V. S. Lakshmanan, Mining frequent item sets with convertible constraints, in: Proceedings of the 17th International Conference on Data Engineering, April 2-6, 2001, Heidelberg, Germany, 2001, pp. 433–442. doi:10.1109/ICDE.2001.914856.
 (39) P. J. Azevedo, A. M. Jorge, Comparing rule measures for predictive association rules, in: Machine Learning: ECML 2007, 18th European Conference on Machine Learning, Warsaw, Poland, September 17-21, 2007, Proceedings, 2007, pp. 510–517. doi:10.1007/978-3-540-74958-5_47.
 (40) P. J. Azevedo, A. M. Jorge, Ensembles of jittered association rule classifiers, Data Mining and Knowledge Discovery (March). doi:10.1007/s10618-010-0173-y.
 (41) W. Li, J. Han, J. Pei, CMAR: Accurate and efficient classification based on multiple class-association rules, Citeseer. URL http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.24.9014&rep=rep1&type=pdf
 (42) J. Kemeny, J. Snell, Mathematical Models in the Social Sciences, MIT Press, 1972.
 (43) J. Han, M. Kamber, Data Mining: Concepts and Techniques, Morgan Kaufmann, 2000.
 (44) B. Liu, W. Hsu, Y. Ma, Mining association rules with multiple minimum supports, in: Proceedings of the Fifth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Diego, CA, USA, August 15-18, 1999, 1999, pp. 337–341. doi:10.1145/312129.312274.
 (45) T. Hastie, R. Tibshirani, J. Friedman, The Elements of Statistical Learning: Data Mining, Inference, and Prediction, Springer, New York, NY, 2009, Ch. Unsupervised Learning, pp. 485–585. doi:10.1007/978-0-387-84858-7_14.
 (46) R. Agrawal, T. Imieliński, A. Swami, Mining association rules between sets of items in large databases, ACM SIGMOD Record 22 (2) (1993) 207–216.
 (47) P. L. Bartlett, M. H. Wegkamp, Classification with a reject option using a hinge loss, Journal of Machine Learning Research 9 (2008) 1823–1840.
 (48) W. Cheng, E. Hüllermeier, W. Waegeman, V. Welker, Label ranking with partial abstention based on thresholded probabilistic models, in: Proceedings of Neural Information Processing Systems 2012, 2012, pp. 2510–2518.
 (49) K. Bache, M. Lichman, UCI machine learning repository (2013).
 (50) C. R. de Sá, W. Duivesteijn, C. Soares, A. Knobbe, Exceptional preferences mining, in: Discovery Science, 2016.
 (51) M. Iqbal, I. Mukhlash, H. M. Astuti, The comparison of CBA algorithm and CBS algorithm for meteorological data classification, ISICO 2013.