Scalable Multi-Output Label Prediction: From Classifier Chains to Classifier Trellises

01/20/2015
by J. Read, et al. (Aalto University)
Multi-output inference tasks, such as multi-label classification, have become increasingly important in recent years. A popular method for multi-label classification is classifier chains, in which the predictions of individual classifiers are cascaded along a chain, thus taking into account inter-label dependencies and improving the overall performance. Several varieties of classifier chain methods have been introduced, and many of them perform very competitively across a wide range of benchmark datasets. However, scalability limitations become apparent on larger datasets when modeling a fully-cascaded chain. In particular, the methods' strategies for discovering and modeling a good chain structure constitute a major computational bottleneck. In this paper, we present the classifier trellis (CT) method for scalable multi-label classification. We compare CT with several recently proposed classifier chain methods to show that it occupies an important niche: it is highly competitive on standard multi-label problems, yet it can also scale up to thousands or even tens of thousands of labels.



1 Introduction

Multi-output classification (MOC), also known variously as multi-target, multi-objective, and multi-dimensional classification, is the supervised learning problem where an instance is associated with a set of qualitative discrete variables (a.k.a. labels), rather than with a single variable. (Footnote 1: We henceforth avoid the term 'class'; it generates confusion since it is used variously in the literature to refer to both the target variable and a value that the variable takes. Rather, we refer to label variables, each of which takes a number of values.) Since these label variables are often strongly correlated, modeling the dependencies between them allows MOC methods to improve their performance at the expense of an increased computational cost. Multi-label classification (MLC) is a special case of MOC where all the labels are binary; it has already attracted a great deal of interest and development in the machine learning literature over the last few years. In

[27], the authors give a recent review of, and many references to, a number of recent and popular methods for MLC. fig:intro shows the relationship between different classification paradigms, according to the number of labels ($L = 1$ vs. $L > 1$) and their type (binary, $K = 2$, or not, $K > 2$).

Figure 1: Different classification paradigms: $L$ is the number of labels and $K$ is the number of values that each label variable can take.

There is a vast range of active applications of MLC, including tagging images, categorizing documents, labelling video and other media, and learning the relationships among genes and biological functions. Labels (e.g., tags, categories, genres) are either relevant or not. For example, an image may be labelled beach and urban; a news article may be sectioned under europe and economy. Relevance is usually indicated by 1, and irrelevance by 0. The general MOC scheme may add other information such as month, age, or gender. Note that a variable such as month takes one of several values and is therefore not simply relevant or irrelevant. This MOC task has received relatively less attention than MLC (although there is some work emerging, e.g., [25] and [17]). However, most MLC-transformation methods (e.g., treating each label variable as a separate multi-class problem) are equally applicable to MOC. Indeed, in this paper we deal with a family of methods based on this approach. Note also that, as any integer can be represented in binary form (e.g., 5 as 101), any MOC task can be 'decoded' into an MLC task and vice versa.
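As a concrete illustration of this 'decoding', a $K$-valued variable can be mapped to and from a set of binary labels via its binary representation; a minimal sketch (the helper names are ours, not from the paper):

```python
def to_binary_labels(y, n_bits):
    """Encode a K-valued label y (0 <= y < 2**n_bits) as a list of binary labels."""
    return [(y >> i) & 1 for i in range(n_bits)]

def from_binary_labels(bits):
    """Decode a list of binary labels back into a single multi-class value."""
    return sum(b << i for i, b in enumerate(bits))
```

In this encoding each bit becomes one binary label of an equivalent MLC problem, and decoding simply reassembles the integer.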

In this paper, we focus on scalable MLC methods, able to effectively deal with large datasets at feasible complexity. Many recent MLC methods, particularly those based on classifier chains, tend to be over-engineered, investing ever more computational power to model label dependencies while presenting poor scalability properties. In the first part of the paper, we review some state-of-the-art methods from the MLC and MOC literature to show that the more powerful solutions are not well suited to dealing with large label sets. For instance, classifier chains [3, 20] consider a full cascade of labels along a chain to model their joint probability distribution and either explore all possible label orders in the chain, incurring exponential complexity in the number of labels, or compare a small subset of them chosen at random, which is ineffective for high-dimensional problems.

The main contribution of the paper is a novel highly-scalable method: the classifier trellis (CT). Rather than imposing a long-range and ultimately computationally complex dependency model, as in classifier chains, CT captures the essential dependencies among labels very efficiently. This is achieved by considering a predefined trellis structure for the underlying graphical model, where dependent nodes (labels) are sequentially placed in the structure according to easily-computable probabilistic measures. Experimental results across a wide set of datasets show that CT is able to scale up to large sets (namely, thousands and tens of thousands of labels) while remaining very competitive on standard MLC problems. In fact, in most of our experiments, CT was very close to the running time of the naive baseline method, which neglects any statistical dependency between labels. Also, an ensemble version of CT, where the method is run multiple times with different random seeds and classification is done through majority voting, does not significantly outperform the single-shot CT. This demonstrates that our method is quite robust against initialization.

The paper is organized as follows. First, in sec:notation we formalize the notation and describe the problem setting. In sec:prior we review some state-of-the-art methods from the MLC and MOC literature, as well as their various strategies for modeling label dependence. This review is augmented with empirical results. In sec:CT we make use of the studies and theory from the earlier sections to present the classifier trellis (CT) method. In sec:experiments we carry out two sets of experiments: firstly, we compare CT to some state-of-the-art multi-label methods on an MLC task; secondly, we show that CT can also provide competitive performance on a typical structured output prediction task (namely, localization via segmentation). Finally, in sec:conclusions we discuss the results and draw conclusions.

Two appendices have been included to aid the readability of the paper and support the presented results. In A, we compare two low-complexity methods to infer the label dependencies from training data. In B we review Monte Carlo methods, which are required in this paper to perform approximate probabilistic inference of the label set associated with a new test input.

2 Problem Setup and Notation

Following standard machine learning notation, a feature vector can be represented as $\mathbf{x} = [x_1, \ldots, x_D]$, where $D$ is the number of features and $x_d \in \mathcal{X}_d$ ($d = 1, \ldots, D$) indicates the support of each feature. In the traditional multi-class classification task, we have a single target variable $y$ which can take one out of $K$ values, $y \in \{1, \ldots, K\}$, and for some test instance $\tilde{\mathbf{x}}$ we wish to predict

$\hat{y} = \arg\max_{y} p(y \mid \tilde{\mathbf{x}})$    (1)

in such a way that it coincides with the true (unknown) test label with high probability. (Footnote 2: Eq. (1) corresponds to the widely used maximum a posteriori (MAP) estimator of $y$ given $\tilde{\mathbf{x}}$, but other approaches are possible.) Furthermore, the conditional distribution $p(y \mid \mathbf{x})$ is usually unknown and has to be estimated during the classifier construction stage. In the standard setting, classification is a supervised task where we have to infer the model $f$ from a set of labelled examples (training data) $\mathcal{D} = \{(\mathbf{x}^{(i)}, y^{(i)})\}_{i=1}^{N}$, and then apply it to predict the labels for a set of novel unlabelled examples (test data). This prediction phase is usually straightforward in the single-output case, since only one of $K$ values needs to be selected.

In the multi-output classification (MOC) task, we have $L$ such output labels, $\mathbf{y} = [y_1, \ldots, y_L]$, where $y_j \in \{1, \ldots, K_j\}$, with $K_j$ being the finite number of values associated with the $j$-th label. For some test instance $\tilde{\mathbf{x}}$, and provided that we know the conditional distribution $p(\mathbf{y} \mid \mathbf{x})$, the MOC solution is given by

$\hat{\mathbf{y}} = \arg\max_{\mathbf{y}} p(\mathbf{y} \mid \tilde{\mathbf{x}})$    (2)

Once more, $p(\mathbf{y} \mid \mathbf{x})$ is usually unknown and has to be estimated from the training data, $\mathcal{D}$, in order to construct the model $\mathbf{f}$. Therein precisely lies the main challenge behind MOC, since $\hat{\mathbf{y}}$ must select one out of $\prod_{j=1}^{L} K_j$ possible values (Footnote 3: or $K^L$ when $K_j = K$ for all $j$, a simplification to keep the notation cleaner); clearly a much more difficult task than in eq:MC. Furthermore, finding $\hat{\mathbf{y}}$ for a given $\tilde{\mathbf{x}}$ and $p(\mathbf{y} \mid \mathbf{x})$ is quite challenging from a computational point of view for large values of $L$ and $K_j$ [17, 25].

In MLC, all labels are binary, namely $K_j = 2$ for $j = 1, \ldots, L$, with the two possible label values typically notated as 0 or 1. fig:intro2 shows one toy example of MLC with three labels (thus $L = 3$). Because of the strong co-occurrence, we can interpret that the first label ($y_1$) implies the second label ($y_2$) with high probability, but not the other way around. When learning the model in (2), the goal of MLC (and MOC in general) is to capture this kind of dependence among labels in order to improve classification performance, and to do so efficiently enough to scale up to the size of the data in the application domains of interest. This typically means connecting labels (i.e., learning labels together) in an appropriate structure. TableNot summarizes the main notation used in the paper.
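To make the combinatorial difficulty of (2) concrete: exact MAP inference over $L$ binary labels means scoring all $2^L$ candidate vectors. A brute-force sketch, using a toy unnormalized posterior of our own invention (not a trained model):

```python
from itertools import product

def map_inference(p_joint, L):
    """Exhaustive MAP: score all 2**L binary label vectors and keep the best.
    p_joint(y) returns an (unnormalized) posterior score for label vector y."""
    best, best_score = None, float("-inf")
    for y in product([0, 1], repeat=L):
        score = p_joint(y)
        if score > best_score:
            best, best_score = y, score
    return best

def toy_posterior(y):
    """Hypothetical unnormalized posterior over three binary labels."""
    s = 0.6 if y[0] == 1 else 0.4       # label 1 is marginally likely
    s *= 0.9 if y[1] == y[2] else 0.1   # labels 2 and 3 tend to agree
    s *= 0.7 if y[1] == 1 else 0.3      # label 2 is marginally likely
    return s

best = map_inference(toy_posterior, 3)
```

The exponential loop over `product([0, 1], repeat=L)` is exactly why exhaustive inference is infeasible beyond a handful of labels, motivating the approximate schemes reviewed next.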

Figure 2: Toy example of multi-label classification (MLC), with two possible values for each label and $L = 3$ labels (thus $y_j \in \{0, 1\}$ for $j = 1, 2, 3$) and, implicitly, the features. Circles, squares and triangles are elements with only one active label (i.e., either $y_1 = 1$ or $y_2 = 1$, but not both). Hexagons show vectors such that $y_1 = y_2 = 1$.
Notation | Description
$\mathbf{x} = [x_1, \ldots, x_D]$ | instance / input vector
$y_j \in \{0, 1\}$ | a label (binary variable)
$y_j \in \{1, \ldots, K_j\}$ | an output (multi-class variable), $K_j$ possible values
$\mathbf{y} = [y_1, \ldots, y_L]$ | $L$-dimensional label/output vector
$\mathcal{D} = \{(\mathbf{x}^{(i)}, \mathbf{y}^{(i)})\}_{i=1}^{N}$ | training data set, $N$ examples
$\hat{y} = f(\tilde{\mathbf{x}})$ | binary or multi-class classification (test instance $\tilde{\mathbf{x}}$)
$\hat{\mathbf{y}} = \mathbf{f}(\tilde{\mathbf{x}})$ | multi-label / multi-output classification (MLC, MOC)
Table 1: Summary of the main notation.

3 Background and Related Work

In this section, we step through some of the most relevant recently developed methods for MLC/MOC, as well as several works specifically related to the novel method presented in later sections. All the methods discussed here, and also the CT method presented in sec:CT, aim to build a model for $\hat{\mathbf{y}}$ in (2) by first selecting a suitable model for the label joint posterior distribution $p(\mathbf{y} \mid \mathbf{x})$ and then using this model to provide a prediction for a new test input $\tilde{\mathbf{x}}$. It is in the first step that state-of-the-art methods present a complexity bottleneck when dealing with large sets of labels, and where CT offers a significantly better complexity-performance trade-off.

(a) Independent Classifiers (IC) (b) Classifier Chain (CC)
(c) Bayesian Classifier Chain (BCC) (d) Conditional Dependency Network (CDN)
Figure 3: Several multi-label methods depicted as directed/undirected graphical models.

3.1 Independent Classifiers

A naive solution to multi-output learning is training $L$ multi-class models as in eq:MC, i.e., independent classifiers (IC) (Footnote 4: In the MLC literature, the IC approach is also known as the binary relevance method.), and using them to classify a test instance $\tilde{\mathbf{x}}$ $L$ times, as $\hat{\mathbf{y}} = [f_1(\tilde{\mathbf{x}}), \ldots, f_L(\tilde{\mathbf{x}})]$. IC is represented by the directed graphical model shown in fig:models (a). Note that this approach implicitly assumes independence among the target variables, i.e., $p(\mathbf{y} \mid \mathbf{x}) = \prod_{j=1}^{L} p(y_j \mid \mathbf{x})$, which is not the case in most (if not all) multi-output datasets.

3.2 Classifier Chains

The classifier chains methodology is based on the decomposition of the conditional probability of the label vector using the product rule of probability:

$p(\mathbf{y} \mid \mathbf{x}) = p(y_1 \mid \mathbf{x}) \, p(y_2 \mid \mathbf{x}, y_1) \cdots p(y_L \mid \mathbf{x}, y_1, \ldots, y_{L-1})$    (3)
$= \prod_{j=1}^{L} p(y_j \mid \mathbf{x}, y_1, \ldots, y_{j-1})$    (4)

which is approximated with $L$ probabilistic classifiers $f_j(\mathbf{x}, y_1, \ldots, y_{j-1}) \approx p(y_j \mid \mathbf{x}, y_1, \ldots, y_{j-1})$. As a graphical model, this approach is illustrated by fig:models (b).

The complexity associated with learning eq:approx_joint increases with $L$, but with fast greedy inference, as in [20], it reduces to

$\hat{y}_j = \arg\max_{y_j} p(y_j \mid \tilde{\mathbf{x}}, \hat{y}_1, \ldots, \hat{y}_{j-1})$    (5)

for $j = 1, \ldots, L$. This is not significant for most datasets, and time complexity is close to that of IC in practice. In fact, it would be identical if not for the $j - 1$ extra attributes.

With greedy inference comes the concern of error propagation along the chain, since an incorrect estimate $\hat{y}_j$ will negatively affect all following labels. However, this problem is not always serious, and it is easily overcome with an ensemble [20, 3]. Therefore, although there exist a number of approaches for avoiding error propagation via exhaustive iteration or various search options [3, 13, 17], we opt for the ensemble approach.
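Greedy chain inference as in (5) can be sketched in a few lines; the toy conditional classifiers below are hypothetical stand-ins of our own, not trained models:

```python
def greedy_chain_predict(x, classifiers):
    """Greedy inference in a classifier chain: each classifier j sees the input x
    plus the committed predictions for labels 1..j-1, and we take the argmax at each step."""
    y_hat = []
    for clf in classifiers:
        p1 = clf(x, y_hat)              # P(y_j = 1 | x, y_1, ..., y_{j-1})
        y_hat.append(1 if p1 >= 0.5 else 0)
    return y_hat

# toy chain over 3 labels: label 2 follows label 1, label 3 opposes label 2
chain = [
    lambda x, prev: 0.8,                      # label 1 likely on
    lambda x, prev: 0.9 if prev[0] else 0.1,  # label 2 follows label 1
    lambda x, prev: 0.2 if prev[1] else 0.7,  # label 3 opposes label 2
]
prediction = greedy_chain_predict(None, chain)
```

The early commitment in `y_hat.append(...)` is also the source of the error-propagation concern discussed above: a wrong decision is fed to every later classifier.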

3.3 Bayesian Classifier Chains

Instead of considering a fully parameterized Markov chain model for

, we can use a simpler Bayesian network. Hence, (3) becomes

(6)

where are the parents of the -th label, as proposed in [25, 26], known as as Bayesian Classifier Chains (BCC), since it may remind us of Bayesian networks. Using a structure makes training the individual classifiers faster, since there are fewer inputs to them, and also speeds up any kind of inference. fig:models (c) shows one example of many possible such network structures.

Unfortunately, finding the optimal structure is NP-hard due to the impossibly large search space. Consequently, a recent point of interest has been finding a good suboptimal structure, such that eq: BN can be used. The literature has focused on the idea of label dependence (see [5] for an excellent discussion). The least complex approach is to measure marginal label dependence, i.e., the relative co-occurrence frequencies of the labels. Such an approach has been considered in [25, 8]. In the latter, the authors exploited the frequent sets approach [1], which measures the co-occurrence of several labels, to incorporate edges into the Bayesian network. However, they noted problems with attributes and with negative co-occurrence (i.e., mutual exclusiveness: labels whose presence indicates the absence of others). The resulting algorithm (hereafter referred to as the fs algorithm) can deal with moderately large datasets, but the final network construction ends up being rather involved.
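Marginal label dependence of this kind can be estimated directly from the label matrix; a minimal sketch of empirical pairwise mutual information (the function name is ours):

```python
from math import log

def pairwise_mi(Y, j, k, eps=1e-9):
    """Empirical mutual information between binary labels j and k,
    estimated from the N x L label matrix Y (marginal dependence only)."""
    n = len(Y)
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            p_ab = sum(1 for row in Y if row[j] == a and row[k] == b) / n
            p_a = sum(1 for row in Y if row[j] == a) / n
            p_b = sum(1 for row in Y if row[k] == b) / n
            if p_ab > 0:
                mi += p_ab * log(p_ab / (p_a * p_b + eps))
    return mi

# labels 0 and 1 always agree; labels 0 and 2 are independent
Y = [[1, 1, 0], [1, 1, 1], [0, 0, 1], [0, 0, 0]]
```

Because the estimate uses only label co-occurrence counts, it captures both co-occurrence and mutual exclusiveness, but says nothing about conditional dependence given the features.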

Finding a graph based on conditional label dependence is inherently more demanding, because the input feature space must be taken into account, i.e., classifiers must be trained. Of course, training time is a strongly limiting factor here. However, a particularly interesting approach to modelling conditional dependence, the so-called lead method, was presented in [26]. This scheme first tries to remove the dependency of the labels on the feature set, which is the common parent of all the labels, to facilitate learning the label dependencies. To do so, lead first trains an independent classifier for each label (i.e., it builds $L$ independent models, as in the IC approach), and then uses the dependency relations in the residual errors of these classifiers to learn a Bayesian network following some standard approach (the errors can in fact be treated exactly as if they were labels and plugged, e.g., into the fs approach). lead is thus a fast method for finding conditional label dependencies, and has shown good performance on small-sized datasets.

Neither the fs nor the lead methods assume any particular constraint on the underlying graph and are well suited for MOC in the high dimensional regime because of their low complexity. However, if the underlying directed graph is sparse, the PC algorithm and its modifications [12, 24] are the state-of-the-art solution in directed structured learning. The PC-algorithm runs in the worst case in exponential time (as a function of the number of nodes), but if the true underlying graph is sparse, this reduces to a polynomial runtime. However, this is typically not the case in MLC/MOC problems.

For the sake of comparison between the different MLC/MOC approaches, in this paper we only consider the fs and lead methods to infer direct dependencies between labels. In order to delve deeper into the issue of structure learning with the fs and lead methods, in A we generate a synthetic dataset where the underlying structure is known, and compare their solutions and the degree of similarity with respect to the true graphical model. As these experiments illustrate, one of the main problems with learning the graphical model structure from scratch is that we typically obtain networks that are too dense, where we cannot control the complexity associated with training and evaluating each of the probabilistic classifiers corresponding to the resulting factorization in (6). This issue is solved by the classifier trellis method proposed in sec:CT.

3.4 Conditional Dependency Networks (Cdn)

Conditional Dependency Networks represent an alternative approach, in which the conditional distribution factorizes according to an undirected graphical model, i.e.,

$p(\mathbf{y} \mid \mathbf{x}) = \frac{1}{Z} \prod_{c} \psi_c(\mathbf{y}_c, \mathbf{x})$    (7)

where $Z$ is a normalizing constant, $\psi_c$ is a positive function or potential, and $\mathbf{y}_c$ is a subset of the labels (a clique in the undirected graph). The notion of directionality is dropped, thus simplifying the task of learning the graph structure. Undirected graphical models are more natural for domains such as spatial or relational data. Therefore, they are well suited for tasks such as image segmentation (e.g., [4], [14]), and regular MLC problems (e.g., [7]).

Unlike classifier chain methods, a CDN does not construct an approximation to $p(\mathbf{y} \mid \mathbf{x})$ based on a product of probabilistic classifiers. Instead, for each conditional probability of the form

$p(y_j \mid \mathbf{x}, \mathbf{y}_{-j})$    (8)

a probabilistic classifier is learnt, where $\mathbf{y}_{-j}$ denotes all labels except $y_j$. In an undirected graph, where all the labels that belong to the same clique are connected to each other, it is easy to check that

$p(y_j \mid \mathbf{x}, \mathbf{y}_{-j}) = p(y_j \mid \mathbf{x}, \mathrm{ne}(y_j))$    (9)

where $\mathrm{ne}(y_j)$ is the set of variables connected to $y_j$ in the undirected graph. (Footnote 5: In other words, $\mathrm{ne}(y_j)$ is the so-called Markov blanket of $y_j$ [2].) Finally, $p(y_j \mid \mathbf{x}, \mathrm{ne}(y_j))$, for $j = 1, \ldots, L$, is approximated by a probabilistic classifier $f_j$.

In order to classify a new test input , approximate inference using Gibbs sampling is a viable option. In B, we present the formulation of Monte Carlo approaches (including Gibbs sampling) specially tailored to perform approximate inference in MLC/MOC methods based on Bayesian networks and undirected graphical models.
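A minimal sketch of such Gibbs-based inference, assuming each conditional $P(y_j = 1 \mid \mathbf{x}, \mathbf{y}_{-j})$ is available as a callable (the toy conditionals below are hypothetical stand-ins, not trained classifiers):

```python
import random

def gibbs_predict(x, cond_classifiers, n_iters=200, burn_in=50, seed=0):
    """Approximate marginal inference by Gibbs sampling: repeatedly resample each
    label from P(y_j = 1 | x, all other labels), then average the retained samples."""
    rng = random.Random(seed)
    L = len(cond_classifiers)
    y = [rng.randint(0, 1) for _ in range(L)]
    counts = [0] * L
    kept = 0
    for t in range(n_iters):
        for j in range(L):
            p1 = cond_classifiers[j](x, y)   # P(y_j = 1 | x, y_{-j})
            y[j] = 1 if rng.random() < p1 else 0
        if t >= burn_in:                     # discard the settling (burn-in) samples
            kept += 1
            for j in range(L):
                counts[j] += y[j]
    return [c / kept for c in counts]        # estimated marginals P(y_j = 1 | x)

# hypothetical conditionals: label 1 is usually on; label 2 tends to follow label 1
conds = [
    lambda x, y: 0.9,
    lambda x, y: 0.9 if y[0] == 1 else 0.1,
]
marginals = gibbs_predict(None, conds)
```

The burn-in and iteration counts are the same settling-time/total-iteration quantities that appear in the CDT test procedure later in the paper.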

3.5 Other MLC/MOC Approaches

A final note on related work: there are many other 'families' of methods designed for multi-label, multi-output and structured output prediction and classification, including many 'algorithm-adapted' methods. A fairly complete and recent overview can be found in [27], for example. However, most of these methods face challenges similar to those of the classifier chains family, and similarly attempt to model dependence while remaining tractable by using approximations and some form of randomness [22, 18, 23]. To cite a more recent example, [21] uses 'random graphs' in a way that resembles the CDN of [9], since it uses undirected graphical models, and the BCC of [25] in the sense of the randomness of the graphs considered.

3.6 Comparison of State-of-the-art Methods for Extracting Structure

It is our view that many methods employed for multi-label chain classifiers have not been properly compared in the literature, particularly with regard to their method for finding structure. There is not yet any conclusive evidence on whether modelling the marginal dependencies (among $y_1, \ldots, y_L$) is enough, or whether it is advisable to also model the conditional dependencies (for best performance); nor on how much return one gets on a heavy investment in searching for a 'good' graph structure, over random structures.

We performed two experiments to get an idea, comparing the following methods:

IC Labels are independent, no structure.
ECC Ensemble of CCs, each with a random order [20]
EBCC-fs Ensemble of BCCs [25], based on marginal dependence [8]
EBCC-lead As above, but based on conditional dependence, as per [26]
OCC The optimal CC, selected from all possible (complete) chain orders
IC | ECC | OCC | EBCC-fs | EBCC-lead
Music
Scene
Table 2: Comparison of the classification accuracy of existing methods under cross-validation (CV). We chose the two smallest datasets so that the 'optimal' OCC could complete.

Results of cross-validation (CV) on two small real-world datasets (Music and Scene) are shown in tab:2; small datasets were chosen to make the comparison with OCC ($L!$ possible chain orderings) feasible.

fig:musicrecon shows an example of the structure found by BCC-fs and BCC-lead on a real dataset, namely Music (emotions associated with pieces of music). As a base classifier, we use support vector machines (SVMs), fitted with logistic models (as in [11]) in order to obtain a probabilistic output, with the default hyper-parameters provided in the SMO implementation of the Weka framework [10]. tab:2 confirms that IC's assumption of independence harms its performance, and that the bulk of the MOC literature is justified in trying to overcome this. However, it also suggests that investing factorial and exponential time to find the best-fitting chain order and label combination (respectively) does not guarantee the best results. In fact, comparing the results of ECC with EBCC-lead and EBCC-fs, even the relatively higher investment in conditional label dependence over marginal dependence (EBCC-lead vs. EBCC-fs) does not necessarily pay off. Finally, ECC's performance is quite close to that of OCC. Surprisingly, the ECC method tends to provide excellent performance, even though it only learns randomly ordered (albeit fully connected) chains. However, as discussed in seq:cc, its complexity is prohibitively large in high-dimensional MLC problems.

(a)
(b)
Figure 4: Graphs derived from the Music dataset, with links based on (a) marginal dependence (FS, label-frequency) and (b) conditional dependence (LEAD, error-frequency). Here we have based the links on mutual information, so links represent both co-occurrences and mutual exclusiveness. Generally, we see that the graph makes intuitive sense: amazed and happy are neither strongly similar nor opposite emotions, and thus there is not much benefit in modelling them together; the same holds for angry and sad.
Figure 5: Three possible directed trellises. Each trellis is defined by a fixed pattern for the parents of each vertex (varying only near the edges, where some parents are not possible). Note that no directed loops can exist.

4 A Scalable Approach: Classifier Trellis (CT)

The goal of a highly scalable CC-based method brings up the common question of which structure to use. On the one hand, ignoring important dependency relations will harm performance on typical MLC problems. On the other hand, assumptions must be made in order to scale up to large problems. Even though in certain types of MLC problems there is a clear notion of the local structure underlying the labels (e.g., in image segmentation, pixels next to each other should exhibit interdependence), this assumption is not valid for general MLC problems, where, for example, the $j$-th and $k$-th labels might be highly correlated regardless of their positions. Therefore, we cannot escape the need to discover structure, but we must do it efficiently. Furthermore, the structure used should allow for fast inference.

Our proposed solution is the classifier trellis (CT). To relieve the burden of specification 'from scratch', we maintain a fixed structure, namely a lattice or trellis (hence the name). This escapes the high complexity of learning a complete structure (as in [20] and [3]), and at the same time avoids the complexity involved in discovering a structure (e.g., [25] and [26]). Instead, we impose a structure a priori, and only seek an improvement to the order of labels within that structure.

fig:trellis gives three simple example trellises (specifically, we use the first one in the experiments). Each of the vertices of the trellis corresponds to one of the $L$ labels of the dataset. Note the relationship to eq: BN; we simply fix the same parent pattern for each label. Namely, the parents of each label are the labels lying on the vertices above and to the left in the trellis structure (except, obviously, the vertices at the top and left of the trellis, which form the border). A more densely linked structure will model dependence among more labels.
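For this parent pattern (above and to the left), the parent set of each vertex follows directly from its grid position; a small sketch (the function name is ours):

```python
def trellis_parents(j, width):
    """Parents of vertex j in a trellis of the given width, read row by row:
    the vertex directly above and the vertex to the left (when they exist)."""
    row, col = divmod(j, width)
    parents = []
    if row > 0:
        parents.append(j - width)  # vertex directly above
    if col > 0:
        parents.append(j - 1)      # vertex to the left
    return parents
```

Since every edge points from a smaller vertex index to a larger one, no directed cycle can arise, which is exactly the acyclicity property exploited later.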

Hence, instead of trying to solve the NP-hard structure discovery problem, we use a simple heuristic (label-frequency-based pairwise mutual information) to place the labels into a fixed structure (the trellis) in a sensible order: one that tries to maximize label dependence between parents and children. This ensures a good structure, which captures some of the main label dependencies, while maintaining scalability to a large number of labels and data points. Namely, we employ an efficient hill-climbing method to insert nodes into this trellis according to marginal dependence information, in a manner similar to the fs method in [1].

The process followed is outlined in code:CT, to which we pass a pairwise matrix of mutual information, where entry $(j, k)$ measures the marginal dependence between labels $j$ and $k$. Essentially, new nodes are progressively added to the graph based on the mutual information. Since the algorithm starts by placing a random label at the upper-left corner vertex, a different trellis will be discovered for different random seeds. Each label is directly connected to a fixed number of parent labels in the directed graph (e.g., in Figure 5 each node has two parents in the first two graphs, and four in the third one), except for the border cases where not all parents are possible.

Constructing a Classifier Trellis

input : $\mathbf{Y}$ (an $N \times L$ matrix of labels), $w$ (width), a parent-node function $\mathrm{pa}(\cdot)$
begin
       compute the pairwise mutual information matrix $\mathbf{M}$ from $\mathbf{Y}$;
       place a randomly chosen label at the first vertex;
       mark all other labels as unplaced;
       for each remaining vertex $v$ (in row order) do
             select the unplaced label with the highest total mutual information with the labels already placed at $\mathrm{pa}(v)$;
             place it at vertex $v$;
       end for
end
output : the trellis structure, i.e., the label assigned to each vertex.

We henceforth assume that $w = \lceil \sqrt{L} \rceil$. The computational cost of this algorithm grows quadratically with the number of labels but, due to the simple calculations involved, in practice it is easily able to scale up to tens of thousands of labels. Indeed, we show later in sec:experiments that this method is fast and effective. Furthermore, it is possible to limit the complexity by searching over only a subset of the labels (e.g., building clusters of labels).
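The greedy placement described above can be sketched as follows, assuming a precomputed pairwise mutual-information matrix; for brevity the example uses a chain (a width-1 trellis) as the parent pattern, and all names are ours rather than the paper's:

```python
def chain_parents(v, width):
    # minimal parent pattern: just the previous vertex (a chain is a width-1 trellis)
    return [v - 1] if v > 0 else []

def build_trellis(mi, L, width, parent_fn):
    """Greedy placement of labels into a fixed trellis: start from an arbitrary label,
    then fill each vertex with the unplaced label having the highest total mutual
    information with the labels already sitting at that vertex's parents."""
    placed = [0]                      # label at vertex 0 (could be chosen at random)
    remaining = set(range(1, L))
    for v in range(1, L):
        parent_labels = [placed[p] for p in parent_fn(v, width)]
        best = max(remaining,
                   key=lambda lab: sum(mi[lab][p] for p in parent_labels))
        placed.append(best)
        remaining.remove(best)
    return placed                     # placed[v] = label assigned to vertex v

# toy MI matrix: label 2 is strongly dependent on label 0
mi = [[0.0, 0.1, 0.9],
      [0.1, 0.0, 0.2],
      [0.9, 0.2, 0.0]]
```

With this toy matrix, label 2 is placed right after label 0 because it is the most dependent on it, mirroring the hill-climbing intuition of the algorithm.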

Given a proper user-defined parent pattern (see fig:trellis), we are ensured that the trellis obtained in code:CT is a directed acyclic graph. Hence, there is no need to check for cycles during construction, which is a very time-consuming stage in many algorithms (e.g., in the PC algorithm, see sec:BCC). We can now employ probabilistic classifiers to construct an approximation to $p(\mathbf{y} \mid \mathbf{x})$ according to the directed graph. This approach is simply referred to as the Classifier Trellis (CT). Afterwards, we can do inference either greedily or via Monte Carlo sampling (see B for a detailed discussion of Monte Carlo methods).

Alternatively, note that we can interpret the trellis structure provided by code:CT in terms of an undirected graph, following the approach of Classifier Dependency Networks described in sec:CDNG. For example, in fig:trellis (middle), with directionality removed, each label is connected to its former parents and children as neighbours.

Classifier Trellis (CT)

Train begin
       Find the directed trellis graph using code:CT. Train $L$ classifiers $f_j$, each taking all parents in the directed graph as additional features, such that $f_j(\mathbf{x}, \mathrm{pa}(y_j)) \approx p(y_j \mid \mathbf{x}, \mathrm{pa}(y_j))$.
end
Test begin
       for $j = 1, \ldots, L$ do
              $\hat{y}_j = \arg\max_{y_j} f_j(\tilde{\mathbf{x}}, \hat{\mathrm{pa}}(y_j))$;
       end for
      return $\hat{\mathbf{y}} = [\hat{y}_1, \ldots, \hat{y}_L]$
end

This undirected variant is comparable to CT. (Footnote 6: We did not add the diagonals to the neighbour set, since we wish it to be comparable to CT in terms of the number of connections.) We refer to this approach as the Classifier Dependency Trellis (CDT).

Both CT and CDT are outlined in code:CT2 and code:CDNG, respectively. Some may argue that the undirected version is more powerful, since learning an undirected graph is typically easier than learning a directed graph that encodes causal relations. However, CDT constructs an undirected graphical model where greedy inference cannot be implemented, so we have to rely on (slower) Monte Carlo sampling methods at the test stage. This effect can be clearly noticed in tab:times.

Finally, we will consider a simple ensemble method for CT, similar to those proposed in [19] and [25] to improve classifier chain methods: several CT classifiers, each built from a different random seed, where the final label decision is made by majority voting. This increases training and inference times proportionally to the ensemble size. We denote this method ECT. A similar approach could be followed for CDT (thus building an ECDT) but, given the higher computational cost of CDT during the test stage, we have concerns regarding the scalability of this approach, so we have excluded it from the simulations.
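Per-label majority voting over such an ensemble is straightforward; a minimal sketch (names ours):

```python
def ensemble_vote(predictions):
    """Majority vote per label over the predictions of M trellis classifiers.
    predictions is a list of M binary label vectors of equal length L."""
    m = len(predictions)
    L = len(predictions[0])
    # a label is predicted relevant if more than half of the members vote for it
    return [1 if 2 * sum(p[j] for p in predictions) > m else 0 for j in range(L)]

votes = [[1, 0, 1],
         [1, 1, 0],
         [0, 0, 1]]
combined = ensemble_vote(votes)
```

Because each member is built from a different random seed (and hence a different trellis), the vote averages out unlucky label placements.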

Classifier Dependency Trellis (CDT)

Train begin
       Find the undirected trellis graph using code:CT. Train $L$ classifiers $f_j$, each taking all neighbouring labels as additional features, such that $f_j(\mathbf{x}, \mathrm{ne}(y_j)) \approx p(y_j \mid \mathbf{x}, \mathrm{ne}(y_j))$.
end
Test begin
       for $t = 1, \ldots, T$ do
             for $j = 1, \ldots, L$ do
                    sample $y_j$ from $f_j(\tilde{\mathbf{x}}, \mathrm{ne}(y_j))$ given the current values of the neighbours;
             end for
       end for
      return $\hat{\mathbf{y}}$, obtained from the samples collected after the settling time
end
where $T_s < T$; $T_s$: settling time (discarded samples), $T$: total iterations.

5 Experiments

Key Description/Name Reference
IC Independent Classifiers Sec. 1
ECC Ensemble of random CCs (majority vote per label) Sec. 3, [20]
MCC Best of random CC Sec. 3, [17]
EBCC Ensemble of BCCs (discovered directed graph) Sec. 3, [25]
CT Classifier trellis, as in fig:trellis, left (directed trellis) Sec. 4, Alg. 4
ECT Ensemble of CTs Sec. 4
CDT Classifier dependency trellis, as in fig:trellis, middle (but undirected); Sec. 4, Alg. 4
Table 3: Methods tested in experiments.

Firstly, in sec:experiments1, we compare E/CT and CDT with some high-performance MLC methods (namely ECC, BCC, MCC) that were discussed in sec:prior. We show that an imposed trellis structure can compete with fully-cascaded chains such as ECC and MCC, and with discovered structures like those provided by BCC. Our approach based on trellis structures achieves similar (or, in many cases, better) MLC performance while presenting improved scalability and, consequently, significantly lower running times.

All the methods considered are listed in tab:methods. In tab:complexity we summarize their complexity, where the input dimension is the dataset-dependent number of features times the number of instances; ensemble methods use several models, and CDT uses a number of Gibbs iterations. While this complexity is just an intuitive measure, the experimental results reported in sec:experiments1 confirm that CT running times are indeed very close to those of IC.

tab:datasets summarizes the collection of datasets that we use, of varied type and dimensions; most of them familiar to the MLC/MOC community [22, 3, 20].

method train complexity test complexity
IC
CT
EBCC
CDT
ECT
ECC
Table 4: Complexity per algorithm, roughly sorted by training complexity. represents the input dimensions (number of features times number of instances). Train complexity indicates roughly how many values are looked at by a classifier. Test complexity indicates how many (times) individual models are addressed as inference.

The last two sets (Local400 and Local10k) are synthetically generated and they correspond to the localization problem described in sec:experiments2.

We use some standard metrics from the multi-label literature (see, e.g., [15]). Writing $\hat{\mathbf{y}}^{(i)}$ for the prediction of the true label vector $\mathbf{y}^{(i)}$ of the $i$-th of $N$ test instances, with $L$ labels per instance, these are

\[
\text{EXACT MATCH} = \frac{1}{N}\sum_{i=1}^{N} \mathbf{1}\big[\hat{\mathbf{y}}^{(i)} = \mathbf{y}^{(i)}\big],
\qquad
\text{HAMMING SCORE} = \frac{1}{NL}\sum_{i=1}^{N}\sum_{j=1}^{L} \mathbf{1}\big[\hat{y}_j^{(i)} = y_j^{(i)}\big],
\]
\[
\text{ACCURACY} = \frac{1}{N}\sum_{i=1}^{N} \frac{\big|\hat{\mathbf{y}}^{(i)} \wedge \mathbf{y}^{(i)}\big|}{\big|\hat{\mathbf{y}}^{(i)} \vee \mathbf{y}^{(i)}\big|},
\]

where $\mathbf{1}[\cdot]$ is an indicator function, returning 1 if its condition is true and 0 otherwise, whereas $\wedge$ and $\vee$ are the bitwise logical AND and OR operations, respectively.
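These three measures can be computed directly; a minimal sketch (function names are ours), assuming labels stored as binary instance-by-label matrices, with the convention that an instance with empty true and predicted label sets counts as fully accurate:

```python
import numpy as np

def exact_match(Y, Yhat):
    # Fraction of instances whose whole label vector is predicted correctly.
    return float(np.mean(np.all(Y == Yhat, axis=1)))

def hamming_score(Y, Yhat):
    # Fraction of individual labels predicted correctly.
    return float(np.mean(Y == Yhat))

def accuracy(Y, Yhat):
    # Jaccard index per instance: |y AND yhat| / |y OR yhat|, averaged.
    inter = np.sum(np.logical_and(Y, Yhat), axis=1)
    union = np.sum(np.logical_or(Y, Yhat), axis=1)
    per_instance = np.where(union == 0, 1.0, inter / np.maximum(union, 1))
    return float(np.mean(per_instance))
```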

Dataset N L d LC Type
Music 593 6 72 1.87 audio
Scene 2407 6 294 1.07 image
Yeast 2417 14 103 4.24 biology
Medical 978 45 1449 1.25 medical/text
Enron 1702 53 1001 3.38 text
TMC2007 28596 22 500 2.16 text
MediaMill 43907 101 120 4.38 video
Delicious 16105 983 500 19.02 text
Local400 10000 400 30 12.84 localization
Local10k 50000 10000 30 25.78 localization
Table 5: A collection of datasets and associated statistics, where N is the number of instances, L the number of labels, d the number of features, and LC the label cardinality: the average number of labels relevant to each example.
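The LC statistic in the table can be computed directly from a binary label matrix; a one-line sketch (name is ours):

```python
import numpy as np

def label_cardinality(Y):
    # LC: mean number of labels set to 1 per instance (per row of Y).
    return float(np.mean(np.sum(Y, axis=1)))
```

For instance, a dataset where half the instances carry two labels and half carry one has LC = 1.5.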

5.1 Comparison of E/CT to other MLC methods

First of all, to confirm that our proposed hill-climbing strategy can actually have a beneficial effect (i.e., an improvement over a random trellis), we perform cross validation (CV) on the smaller datasets. Results are displayed in tab:ECTcompare. A significant increase in performance can be seen, together with a decrease in the standard deviation (especially relevant on the Scene dataset). This confirms that the proposed hill-climbing strategy helps both to optimize performance and to decrease the sensitivity of CT with respect to the initialization.

Then, we compare all the methods listed in Table 3 on all the datasets of Table 5. Predictive performance is displayed in Table 7, and running times in Table 8. On the small datasets we use Support Vector Machines (SVMs) as base classifiers, with fitted logistic models (as in [11]) for CDN, which requires probabilistic output for inference. As an alternative, other authors have used Logistic Regression directly (e.g., [3, 9]) due to its probabilistic output; in our experience, we obtain better and faster all-round performance with SVMs. Note that, for best accuracy, it is highly recommended to tune the base classifier; however, we wish to avoid this “dimension” and instead focus on the multi-label methods. On the larger datasets we instead use Stochastic Gradient Descent (SGD), limited to a small number of epochs, to deal with the scale of these problems. All our methods are implemented and made available within the Meka framework (http://meka.sourceforge.net), an open-source multi-output learning framework based on the Weka machine learning framework [10]. The SMO SVM and SGD implementations are those of Weka.

CT-random CT-HC
Music
Scene
Table 6: Comparing CT accuracy (on Scene, Music) without (just a random trellis) and with our ‘hill-climbing’ (HC) method, over CV.

Results confirm that both ECT and CT are very competitive in terms of performance and running time. Using the Hamming score as the figure of merit, and given the running times reported, CT is clearly superior to the rest of the methods. Note that tab:times shows CT running times close to those of IC, the method that neglects all statistical dependence between labels. With respect to exact match and accuracy, measures oriented to the recovery of the whole label set, ECT and CT remain very competitive with ECC and MCC, the high-complexity methods that model the full chain of labels. tab:times reports the average training and test times of the different MLC methods. Note also that even though ECT considers a set of 10 random initializations, it does not significantly improve on CT (a single initialization) in most cases, which suggests that the hill-climbing strategy makes the CT algorithm quite robust with respect to initialization. Regarding scalability, the measured train/test times for CT scale roughly linearly with the number of labels: the number of labels in Delicious is approximately one order of magnitude higher than in MediaMill, and CT running times grow by roughly a factor of 10 between the two datasets. The same conclusion can be drawn for the largest datasets; compare, for instance, the running times of Local400 and Local10k. Finally, CDT, our scalable modification of CDN, shows worse performance than CT while requiring larger test running times.

Accuracy

Dataset IC ECC MCC EBCC CT ECT CDT
Music 0.483 (7) 0.572 (2) 0.568 (4) 0.566 (5) 0.577 (1) 0.571 (3) 0.505 (6)
Scene 0.571 (7) 0.684 (2) 0.685 (1) 0.618 (4) 0.602 (6) 0.666 (3) 0.604 (5)
Yeast 0.502 (6) 0.538 (2) 0.534 (4) 0.535 (3) 0.533 (5) 0.541 (1) 0.438 (7)
Medical 0.699 (7) 0.733 (3) 0.721 (5) 0.731 (4) 0.755 (2) 0.769 (1) 0.704 (6)
Enron 0.406 (5) 0.448 (1) 0.403 (6) 0.441 (3) 0.409 (4) 0.443 (2) 0.310 (7)
TMC07 0.614 (5) 0.645 (1) 0.619 (4) 0.628 (3) 0.613 (6) 0.633 (2) 0.601 (7)
MediaMill 0.379 (2) 0.350 (5) 0.349 (6) 0.375 (3) 0.391 (1) 0.344 (7) 0.374 (4)
Delicious 0.122 (3) DNF 0.121 (5) DNF 0.127 (2) 0.157 (1) 0.122 (3)
Local400 0.536 (7) 0.625 (1) 0.583 (3) 0.578 (4) 0.542 (6) 0.587 (2) 0.559 (5)
Local10k 0.125 (4) DNF DNF 0.175 (1) 0.133 (3) 0.166 (2) 0.122 (5)
avg rank 5.30 2.12 4.22 3.33 3.60 2.40 5.50

Hamming Score

Dataset IC ECC MCC EBCC CT ECT CDT
Music 0.785 (6) 0.795 (3) 0.789 (5) 0.800 (1) 0.798 (2) 0.795 (3) 0.768 (7)
Scene 0.886 (4) 0.892 (1) 0.892 (1) 0.886 (4) 0.884 (6) 0.891 (3) 0.871 (7)
Yeast 0.800 (1) 0.789 (4) 0.794 (2) 0.787 (5) 0.791 (3) 0.786 (6) 0.719 (7)
Medical 0.988 (4) 0.988 (4) 0.989 (2) 0.988 (4) 0.990 (1) 0.989 (2) 0.986 (7)
Enron 0.943 (1) 0.940 (4) 0.942 (3) 0.939 (5) 0.943 (1) 0.939 (5) 0.922 (7)
TMC07 0.947 (2) 0.948 (1) 0.947 (2) 0.946 (5) 0.947 (2) 0.946 (5) 0.937 (7)
MediaMill 0.965 (2) 0.947 (7) 0.958 (4) 0.954 (5) 0.966 (1) 0.951 (6) 0.965 (2)
Delicious 0.982 (1) DNF 0.981 (4) DNF 0.982 (1) 0.981 (4) 0.982 (1)
Local400 0.968 (3) 0.969 (1) 0.967 (6) 0.968 (3) 0.969 (1) 0.968 (3) 0.962 (7)
Local10k 0.968 (1) DNF DNF 0.968 (1) 0.968 (1) 0.968 (1) 0.968 (1)
avg rank 2.50 3.12 3.22 3.67 1.90 3.80 5.30

Exact Match

Dataset IC ECC MCC EBCC CT ECT CDT
Music 0.252 (7) 0.327 (1) 0.292 (5) 0.302 (4) 0.312 (2) 0.312 (2) 0.257 (6)
Scene 0.491 (7) 0.579 (2) 0.638 (1) 0.516 (5) 0.542 (4) 0.557 (3) 0.503 (6)
Yeast 0.160 (5) 0.190 (3) 0.212 (1) 0.150 (6) 0.198 (2) 0.169 (4) 0.067 (7)
Medical 0.614 (4) 0.612 (5) 0.634 (3) 0.612 (5) 0.670 (1) 0.655 (2) 0.598 (7)
Enron 0.121 (3) 0.112 (5) 0.126 (1) 0.114 (4) 0.123 (2) 0.112 (5) 0.067 (7)
TMC07 0.330 (4) 0.342 (2) 0.345 (1) 0.316 (6) 0.341 (3) 0.317 (5) 0.263 (7)
MediaMill 0.055 (2) 0.034 (5) 0.053 (3) 0.019 (6) 0.058 (1) 0.007 (7) 0.052 (4)
Delicious 0.003 (3) DNF 0.006 (1) DNF 0.004 (2) 0.002 (5) 0.003 (3)
Local400 0.064 (4) 0.108 (2) 0.129 (1) 0.059 (6) 0.079 (3) 0.063 (5) 0.029 (7)
Local10k 0.000 (1) DNF DNF 0.000 (1) 0.000 (1) 0.000 (1) 0.000 (1)
avg rank 4.00 3.12 1.89 4.78 2.10 3.90 5.50
Table 7: Predictive performance and dataset-wise (rank). DNF = Did Not Finish (within 24 hours or 2 GB memory).

To illustrate the most significant statistical differences between the methods, fig:Nemenyi shows the results of the Nemenyi test based on tab:results and tab:times; note that here we excluded the two dataset rows containing DNFs. The Nemenyi test [6] rejects the null hypothesis that two methods perform equivalently if the difference between their average ranks exceeds a critical distance, which depends on the number of algorithms and datasets compared and on a critical value taken from the corresponding table for the chosen significance level. A method whose average rank exceeds that of another by at least the critical distance is considered statistically better. In fig:Nemenyi, for each method, we place a bar spanning from its average rank to that value plus the critical distance; thus, any pair of bars that do not overlap corresponds to methods that differ statistically in performance. Note that, regarding both training and test running times, CT overlaps considerably with IC, whereas other methods such as ECC and MCC need significantly more training time. ECC and ECT are statistically stronger than IC and CDT, though not with respect to exact match. CT performs particularly well on the Hamming score, indicating that error propagation is limited compared to other CC methods.
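The critical distance itself is simple to compute; a sketch following Demšar [6], where the q-values are assumed from the published table and should be verified against it:

```python
import math

# Critical q-values for the two-tailed Nemenyi test at alpha = 0.05,
# for k = 2..10 compared methods (assumed from the table in Demsar, 2006).
Q_ALPHA_05 = {2: 1.960, 3: 2.343, 4: 2.569, 5: 2.728,
              6: 2.850, 7: 2.949, 8: 3.031, 9: 3.102, 10: 3.164}

def nemenyi_cd(k, n, q_table=Q_ALPHA_05):
    # Two methods differ significantly if their average ranks over
    # n datasets differ by more than this critical distance.
    return q_table[k] * math.sqrt(k * (k + 1) / (6.0 * n))
```

With the seven methods and ten datasets used here, the critical distance is roughly 2.85 rank positions.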

In the following section we present the framework behind the localization datasets Local400 and Local10k in the tables presented above.

Training Time

Dataset IC ECC MCC EBCC CT ECT CDT
Music 1 (3) 4 (6) 4 (7) 2 (5) 0 (1) 2 (4) 1 (2)
Scene 3 (1) 10 (5) 28 (7) 7 (4) 3 (3) 10 (6) 3 (2)
Yeast 11 (3) 53 (6) 79 (7) 45 (5) 5 (2) 26 (4) 5 (1)
Medical 4 (2) 19 (6) 67 (7) 17 (4) 3 (1) 19 (5) 5 (3)
Enron 51 (3) 207 (6) 734 (7) 95 (4) 24 (1) 100 (5) 37 (2)
TMC07 11402 (2) 48019 (6) 73433 (7) 34559 (4) 10847 (1) 44986 (5) 13547 (3)
MediaMill 42 (1) 347 (6) 1121 (7) 238 (5) 45 (2) 219 (4) 55 (3)
Delicious 468 (1) DNF 18632 (5) DNF 529 (2) 2791 (4) 599 (3)
Local400 2 (1) 15 (5) 57 (7) 8 (3) 2 (2) 9 (4) 31 (6)
Local10k 3 (1) DNF DNF 13 (4) 3 (2) 15 (5) 3 (3)
avg rank 1.80 5.75 6.78 4.22 1.70 4.60 2.80

Test Time

Dataset IC ECC MCC EBCC CT ECT CDT
Music 0 (3) 1 (7) 0 (1) 0 (5) 0 (4) 0 (2) 0 (6)
Scene 0 (1) 1 (5) 0 (2) 0 (4) 0 (3) 2 (6) 7 (7)
Yeast 1 (2) 3 (5) 3 (6) 1 (3) 0 (1) 2 (4) 8 (7)
Medical 4 (2) 28 (6) 9 (3) 26 (5) 2 (1) 22 (4) 141 (7)
Enron 4 (1) 112 (6) 8 (3) 19 (4) 4 (2) 45 (5) 310 (7)
TMC07 7 (2) 50 (5) 3 (1) 42 (4) 7 (3) 76 (6) 534 (7)
MediaMill 15 (1) 211 (5) 31 (2) 143 (4) 32 (3) 972 (6) 5469 (7)
Delicious 167 (1) DNF 322 (3) DNF 207 (2) 7532 (4) 31985 (5)
Local400 1 (1) 17 (6) 3 (3) 9 (4) 1 (2) 17 (5) 398 (7)
Local10k 4 (2) DNF DNF 18 (3) 4 (1) 39 (4) 4228 (5)
avg rank 1.60 5.62 2.67 4.00 2.20 4.60 6.50
Table 8: Time results (seconds). DNF = Did Not Finish (within 24 hours or 2 GB memory).
Figure 6: Results of the Nemenyi test, based on tab:results and tab:times. If two methods’ bars overlap, they can be considered statistically indistinguishable. The graphs based on time should be interpreted such that a higher rank (more to the left) corresponds to slower (i.e., less desirable) times.

5.2 A Structured Output Prediction Problem

We investigate the application of CT (and the other MLC methods) to a type of structured output prediction problem: segmentation for localization. In this section we consider a localization application using light sensors, based on the real-world scenario described in [16], where a number of light sensors are arranged around a room for the purpose of detecting the location of a person. We take a ‘segmentation’ view of this problem, and use synthetic models (which are based on real sensor data) to generate our own observations, thus creating a semi-synthetic dataset, which allows us to easily control the scale and complexity. fig:scenL shows the scenario. It is a top-down view of a room with light sensors arranged around the edges, one light source (a window, drawn as a thin rectangle) on the bottom edge and four targets. Note that targets can only be detected if they come between a light sensor and the light source, so the target in the lower right corner is undetectable.

Figure 7: A tiled localization scenario. Light sensors are arranged around the edges of the scenario, and there is a light source (shown as a thick black line) on the horizontal axis. In this example, three observations are positive. Note that the object in the bottom-right tile cannot be detected.

We divide the scenario into L square “tiles”. Given an instance, the label vector has one binary entry per tile: y_j = 1 if the j-th tile (i.e., pixel) is active, and y_j = 0 otherwise, for j = 1, …, L. For each instance we also have one binary observation per sensor: x_k = 1 if the k-th sensor detects an object inside its “detection zone” (shown in colors in fig:scenL), and x_k = 0 otherwise.

5.2.1 Sensor Model

Consider, for simplicity, a specific instance (so that we can drop the instance superindex). Let s_k denote the position of the k-th sensor, and let T_k be the triangle whose vertices are s_k and the two corners of the light source; this triangle is the “detection zone” of the k-th sensor. Now, we define the indicator variable

\[ \delta_j^{(k)} = \begin{cases} 1, & m_j \in T_k, \\ 0, & \text{otherwise}, \end{cases} \]

where m_j is the middle point of the j-th pixel (tile). Next, we define the variable

\[ c_k = \sum_{j=1}^{L} y_j \, \delta_j^{(k)}, \qquad (10) \]

which corresponds to the number of active tiles/pixels inside the triangle associated with the k-th sensor. The likelihood function for the k-th sensor is then given by

\[ P(x_k = 1 \mid c_k > 0) = 1 - p_{\mathrm{fn}} \quad \text{and} \quad P(x_k = 1 \mid c_k = 0) = p_{\mathrm{fp}}, \qquad (11) \]

where p_fn is the false negative rate and p_fp is the false positive rate.
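A minimal sketch of this sensor model (function names and the sign-based point-in-triangle test are ours; `p_fp` and `p_fn` are the false positive and false negative rates described above):

```python
def in_triangle(p, a, b, c):
    # Sign test: True if point p lies inside (or on) triangle (a, b, c).
    def cross(p1, p2, p3):
        return (p1[0] - p3[0]) * (p2[1] - p3[1]) - (p2[0] - p3[0]) * (p1[1] - p3[1])
    d1, d2, d3 = cross(p, a, b), cross(p, b, c), cross(p, c, a)
    has_neg = (d1 < 0) or (d2 < 0) or (d3 < 0)
    has_pos = (d1 > 0) or (d2 > 0) or (d3 > 0)
    return not (has_neg and has_pos)

def active_in_zone(y, centers, triangle):
    # The count of active tiles whose midpoints fall in the detection zone.
    return sum(1 for yj, m in zip(y, centers) if yj and in_triangle(m, *triangle))

def detection_prob(c_k, p_fp, p_fn):
    # P(x_k = 1 | c_k): a false positive if the zone is empty,
    # a detection (1 - false negative rate) otherwise.
    return p_fp if c_k == 0 else 1.0 - p_fn
```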

5.2.2 Generation of Artificial Data

fig:scenL shows a low-dimensional scenario for the purpose of clear illustration, but we consider datasets with much higher levels of segmentation (namely Local400, where L = 400, and Local10k, where L = 10000 – see tab:datasets) to compare the performance of several MOC techniques on this problem. Given a scenario with a fixed number of tiles, sensors, and observations, we generate the synthetic data as follows:

  1. Start with an ‘empty’ scene, i.e., all tile labels set to 0.

  2. Set the labels of the relevant tiles to 1 to create a rectangle of fixed width and height starting from some random point.

  3. Create a square in the corner furthest from the rectangle.

  4. Generate the observations according to eq:observation_model.

  5. Add dynamic noise by flipping tiles uniformly at random.
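The five steps above can be sketched as follows (a minimal sketch under stated assumptions: the grid side, rectangle and square sizes, noise level, and the observation function are illustrative parameters, not the paper’s exact values):

```python
import random

def generate_instance(grid, rect_w, rect_h, square, noise_flips, observe):
    # grid: side length of the square tile grid (L = grid * grid labels).
    L = grid * grid
    y = [0] * L                             # step 1: empty scene
    x0 = random.randrange(grid - rect_w)    # step 2: random rectangle of 1s
    y0 = random.randrange(grid - rect_h)
    for r in range(rect_h):
        for c in range(rect_w):
            y[(y0 + r) * grid + (x0 + c)] = 1
    # step 3: a square in the corner furthest from the rectangle
    cr = 0 if y0 + rect_h / 2 > grid / 2 else grid - square
    cc = 0 if x0 + rect_w / 2 > grid / 2 else grid - square
    for r in range(square):
        for c in range(square):
            y[(cr + r) * grid + (cc + c)] = 1
    x = observe(y)                          # step 4: sensor observations
    for _ in range(noise_flips):            # step 5: flip random tiles
        j = random.randrange(L)
        y[j] = 1 - y[j]
    return x, y
```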

Any MLC method can be applied to this problem to infer the binary label vector, which encodes the presence of light-blocking elements in the room, given the vector of measurements from the light sensors. Finally, we also consider that each sensor provides several repeated observations for the same scene.

Algorithm: MAP inference using the sensor model

input: the sensor measurements, the sensor positions, and the light source location.
begin
  1. Initialize the estimated label of every tile to 0.
  2. for each sensor do
       (a) Calculate its detection triangle.
       (b) Apply the per-sensor MAP decision of Sec. 5.2.3 to its measurements.
       (c) If the decision is that the sensor detects no object, mark every tile inside its triangle as empty.
     end for
  3. For every tile covered by at least one detecting sensor and not marked empty, check that the decision of every sensor covering it remains consistent with an active tile; if so, set its estimated label to 1.
  4. The remaining tiles correspond to “shadow” zones, where we leave the estimated label at 0.
end
output: the estimated label of every tile.
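Read as a rule-based procedure, the algorithm above admits the following sketch (our reconstruction under assumptions: `detections` holds each sensor’s MAP decision from Sec. 5.2.3, and `zones` the tile indices inside each detection triangle):

```python
def map_tiles(detections, zones, L):
    # detections[k]: True if sensor k's MAP decision is "object present".
    # zones[k]: set of tile indices inside sensor k's detection triangle.
    yhat = [0] * L
    blocked = set()                    # tiles covered by a non-detecting sensor
    for det, zone in zip(detections, zones):
        if not det:
            blocked |= zone
    for det, zone in zip(detections, zones):
        if det:
            for j in zone:
                if j not in blocked:   # consistent with every sensor's decision
                    yhat[j] = 1
    # tiles in no detecting zone ("shadow" zones) stay at 0
    return yhat
```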

5.2.3 Maximum A Posteriori (MAP) Estimator

Given the likelihood function of eq:observation_model, and considering a uniform prior over each count variable c_k, the posterior of the count associated with the k-th triangle is proportional to the likelihood of that sensor’s measurements. If we also assume independence among the received measurements, the joint posterior density factorizes across the d sensors:

\[ p(c_1, \ldots, c_d \mid \bar{x}) \propto \prod_{k=1}^{d} p(\bar{x}_k \mid c_k). \qquad (12) \]

We are interested in the tile labels, but we can only compute the posterior distribution of the counts c_k, which depend on the labels through eq:cd, so making inference on the labels directly from this posterior is not straightforward. We address the problem in two steps. First, the measurements received by each sensor can be considered Bernoulli trials: if c_k > 0, each measurement equals 1 with probability 1 - p_fn; if c_k = 0, each measurement equals 1 with probability p_fp. Given the measurements of each sensor and a uniform prior, the MAP decision for the sensor reduces to comparing the likelihood of its measurements under these two hypotheses: if the hypothesis c_k > 0 is more likely, we decide that the sensor has detected an object; otherwise, we decide c_k = 0. Then, considering a uniform prior over the tiles, a simple procedure to estimate the labels from these per-sensor decisions is the one described in Algorithm 5.2.2.
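The per-sensor MAP decision with a uniform prior reduces to a likelihood comparison between the two Bernoulli hypotheses; a minimal sketch (function name is ours):

```python
def sensor_detects(xs, p_fp, p_fn):
    # xs: repeated binary measurements from one sensor.
    # Compare likelihoods of the two hypotheses under a uniform prior:
    #   H0 (empty zone):    x ~ Bernoulli(p_fp)
    #   H1 (object inside): x ~ Bernoulli(1 - p_fn)
    s = sum(xs)
    n = len(xs)
    lik0 = (p_fp ** s) * ((1 - p_fp) ** (n - s))
    lik1 = ((1 - p_fn) ** s) * (p_fn ** (n - s))
    return lik1 > lik0
```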

5.2.4 Classifier Trellis vs. MAP Estimator

Results for CT are already given in tab:results (predictive performance) and tab:times (running time). The results in tab:results illustrate the robustness of the CT algorithm for multi-output classification across several scenarios: beyond the training set, no further knowledge about the underlying model is needed to achieve remarkable classification performance. To emphasize this property of CT, we now compare it to the MAP estimator presented above, which exploits perfect knowledge of the sensor model.

Table 9 shows the results of Algorithm 5.2.2 with 30 sensors and different numbers of tiles (i.e., grid precisions). The corresponding results obtained by CT are provided in Table 10. A detailed discussion of these results is given at the end of the next section. Let us remark, however, that increasing the number of tiles for a given number of sensors makes the problem harder, as a finer resolution is sought; this explains the decrease in performance seen in the tables as L increases.

Measure L = 400 L = 10000
Accuracy 0.523 0.141
Hamming score 0.857 0.795
Time (total, s) 1 3
Table 9: Results using Algorithm 5.2.2 (30 sensors).
Measure L = 400 L = 10000
Accuracy 0.542 0.133
Hamming score 0.969 0.968
Time (total, s) 3 7
Table 10: Results using CT (30 sensors).

    6 Discussion

As in most of the multi-label literature, we found that independent classifiers consistently under-perform, justifying the development of more complex methods that model label dependence. However, contrary to what much of the multi-label literature suggests, greater investments in modelling label dependence do not always yield greater returns; in fact, many methods from the literature appear over-engineered. Our small experiment in tab:2 suggests that none of the approaches we investigated was particularly dominant in its ability to uncover structure with respect to predictive performance; indeed, our results indicate that no technique is significantly better than another. Using ECC is a ‘safe bet’ in terms of high accuracy, since it models long-term dependencies with a fully cascaded chain, as also noted previously (e.g., [20, 3]). For EBCC (which we chose to represent methods that uncover a structure), there was no clear advantage over the other methods, and, surprisingly, also no clear difference between searching for a structure based on marginal versus conditional label dependence. This makes it harder to justify computationally expensive dependence modelling on the basis of improved accuracy, particularly for large datasets, where scalability is crucial.

We presented the classifier trellis (CT) as an alternative to methods that model a full chain (such as MCC or ECC) or that learn the label graphical-model structure from scratch (such as BCC). Our approach is systematic: we fix a trellis structure and place the labels in it in an ordered procedure according to easily computable mutual information measures (see code:CT). An ensemble version of CT performs particularly well on exact match but, surprisingly, does not improve much over CT itself. It does not perform as strongly overall as ECC (although the difference is not statistically significant), but it is much more scalable, as indicated in tab:complexity.

The CT algorithm thus emerges as a powerful MLC algorithm, able to deliver excellent performance (especially in terms of the average number of successfully classified labels) with near-IC running times. Through the Nemenyi test, we have shown the statistical similarity between the classification outputs of (E)CT and MCC/ECC, indicating that the classifier trellis captures the inter-label dependencies necessary for high-performance classification. We have not yet analyzed the impact of the chosen trellis structure on CT performance; in future work, we intend to experiment with trellis structures of different degrees of connectedness.

7 Acknowledgements

This work was supported by the Aalto University AEF research programme; by the Spanish government (projects ’COMONSENS’, id. CSD2008-00010; ’ALCIT’, id. TEC2012-38800-C03-01; and ’DISSECT’, id. TEC2012-38058-C03-01); by the Comunidad de Madrid (project ’CASI-CAM-CM’, id. S2013/ICE-2845); and by ERC grant 239784 and AoF grant 251170.

    References

    • [1] R. Agrawal, T. Imielinski, and A. Swami. Mining association rules between sets of items in large databases. In Proc. of ACM SIGMOD 12, pages 207–216, 1993.
    • [2] D. Barber. Bayesian Reasoning and Machine Learning. Cambridge University Press, 2012.
    • [3] Weiwei Cheng, Krzysztof Dembczyński, and Eyke Hüllermeier. Bayes optimal multilabel classification via probabilistic classifier chains. In ICML ’10: 27th International Conference on Machine Learning, Haifa, Israel, June 2010. Omnipress.
    • [4] Andrea Pohoreckyj Danyluk, Léon Bottou, and Michael L. Littman, editors. Proceedings of the 26th Annual International Conference on Machine Learning, ICML 2009, Montreal, Quebec, Canada, June 14-18, 2009, volume 382 of ACM International Conference Proceeding Series. ACM, 2009.
    • [5] Krzysztof Dembczyński, Willem Waegeman, Weiwei Cheng, and Eyke Hüllermeier. On label dependence and loss minimization in multi-label classification. Mach. Learn., 88(1-2):5–45, July 2012.
    • [6] Janez Demšar. Statistical comparisons of classifiers over multiple data sets. The Journal of Machine Learning Research, 7:1–30, 2006.
    • [7] Nadia Ghamrawi and Andrew McCallum. Collective multi-label classification. In CIKM ’05: 14th ACM international Conference on Information and Knowledge Management, pages 195–200, New York, NY, USA, 2005. ACM Press.
    • [8] Anna Goldenberg and Andrew Moore. Tractable learning of large bayes net structures from sparse data. In Proceedings of the twenty-first international conference on Machine learning, ICML ’04, pages 44–, New York, NY, USA, 2004. ACM.
    • [9] Yuhong Guo and Suicheng Gu. Multi-label classification using conditional dependency networks. In IJCAI ’11: 24th International Conference on Artificial Intelligence, pages 1300–1305. IJCAI/AAAI, 2011.
    • [10] Mark Hall, Eibe Frank, Geoffrey Holmes, Bernhard Pfahringer, Reutemann Peter, and Ian H. Witten. The weka data mining software: An update. SIGKDD Explorations, 11(1), 2009.
    • [11] Trevor Hastie and Robert Tibshirani. Classification by pairwise coupling. In Michael I. Jordan, Michael J. Kearns, and Sara A. Solla, editors, Advances in Neural Information Processing Systems, volume 10. MIT Press, 1998.
    • [12] Markus Kalisch and Peter Bühlmann. Estimating high-dimensional directed acyclic graphs with the pc-algorithm. Journal of Machine Learning Research, 8:613–636, 2007.
    • [13] Abhishek Kumar, Shankar Vembu, Aditya Krishna Menon, and Charles Elkan. Learning and inference in probabilistic classifier chains with beam search. In Peter A. Flach, Tijl De Bie, and Nello Cristianini, editors, Machine Learning and Knowledge Discovery in Databases, volume 7523, pages 665–680. Springer, 2012.
    • [14] L. Ladický, C. Russell, P. Kohli, and P. H. S. Torr. Associative hierarchical CRFs for object class image segmentation. In Computer Vision, 2009 IEEE 12th International Conference on, pages 739–746, Sept 2009.
    • [15] Gjorgji Madjarov, Dragi Kocev, Dejan Gjorgjevikj, and Sašo Džeroski. An extensive experimental comparison of methods for multi-label learning. Pattern Recognition, 45(9):3084–3104, September 2012.
    • [16] Jesse Read, Katrin Achutegui, and Joaquin Miguez. A distributed particle filter for nonlinear tracking in wireless sensor networks. Signal Processing, 98:121–134, 2014.
    • [17] Jesse Read, Luca Martino, and David Luengo. Efficient monte carlo methods for multi-dimensional learning with classifier chains. Pattern Recognition, 47(3), 2014.
    • [18] Jesse Read, Bernhard Pfahringer, and Geoff Holmes. Multi-label classification using ensembles of pruned sets. In ICDM’08: Eighth IEEE International Conference on Data Mining, pages 995–1000. IEEE, 2008.
    • [19] Jesse Read, Bernhard Pfahringer, Geoff Holmes, and Eibe Frank. Classifier chains for multi-label classification. In ECML ’09: 20th European Conference on Machine Learning, pages 254–269. Springer, 2009.
    • [20] Jesse Read, Bernhard Pfahringer, Geoffrey Holmes, and Eibe Frank. Classifier chains for multi-label classification. Machine Learning, 85(3):333–359, 2011.
    • [21] Hongyu Su and Juho Rousu. Multilabel classification through random graph ensembles. In Asian Conference on Machine Learning (ACML), pages 404–418, 2013.
    • [22] Grigorios Tsoumakas and Ioannis P. Vlahavas. Random k-labelsets: An ensemble method for multilabel classification. In ECML ’07: 18th European Conference on Machine Learning, pages 406–417. Springer, 2007.
    • [23] Celine Vens and Fabrizio Costa. Random forest based feature induction. In Proceedings of the 2011 IEEE 11th International Conference on Data Mining, ICDM ’11, pages 744–753, Washington, DC, USA, 2011. IEEE Computer Society.
    • [24] Raanan Yehezkel and Boaz Lerner. Bayesian network structure learning by recursive autonomy identification. Journal of Machine Learning Research, 10:1527–1570, 2009.
    • [25] Julio H. Zaragoza, Luis Enrique Sucar, Eduardo F. Morales, Concha Bielza, and Pedro Larrañaga. Bayesian chain classifiers for multidimensional classification. In 24th International Conference on Artificial Intelligence (IJCAI ’11), 2011.
    • [26] Min-Ling Zhang and Kun Zhang. Multi-label learning by exploiting label dependency. In KDD ’10: 16th ACM SIGKDD International conference on Knowledge Discovery and Data mining, pages 999–1008. ACM, 2010.
    • [27] Min-Ling Zhang and Zhi-Hua Zhou. A review on multi-label learning algorithms. IEEE Transactions on Knowledge and Data Engineering, 99(PrePrints):1, 2013.

    Appendix A Graphical Model Structure Learning: Fs vs. Lead

In order to delve deeper into the issue of structure learning, we generated a synthetic dataset in which the underlying structure is known. The synthetic generative model is as follows. For the feature vector, we draw a Gaussian vector with independent components. We then fix a binary vector of the same dimension containing a given number of ones, and assume a directed acyclic graph between the labels in which each label has at most one parent; both the binary vector and the label dependency graph are generated uniformly at random. Given the value of its parent label, the following probabilistic model is used to generate the j-th label:

    (13)

where the model involves a real constant and additive Gaussian noise. According to the model, each label is a Bernoulli random variable whose probability of taking the same value as its parent is governed by this constant through the cumulative distribution function of the standard Gaussian distribution. Consequently, the constant controls how likely a label is to be equal to its parent, thus modulating the difficulty of inferring such dependencies with the fs and lead methods.

In fig:toyrecon we show three examples of synthetically generated datasets, in terms of their ground-truth structure and the structures discovered using the fs and lead methods, for three different scenarios: ‘easy’, ‘medium’, and ‘hard’. Recall that we use a mutual information matrix for both methods, the difference being that the lead matrix is based on the error frequencies rather than the label frequencies. Visually, both fs and lead are able to discover the original structure, relative to the difficulty of the dataset, with a small apparent improvement of fs over lead. This is confirmed in a batch analysis using the F-measure over 10 random datasets of difficulty ranging between ‘easy’ and ‘hard’, in which fs scores slightly higher than lead. A more in-depth comparison, taking into account varying numbers of labels and features, is left for future work.
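Both strategies start from a pairwise mutual information matrix over the labels (fs on the label frequencies, lead on the error frequencies). A minimal sketch of the underlying MI computation for two binary columns (function name is ours):

```python
import math
import numpy as np

def mutual_info(a, b):
    # Mutual information (in nats) between two binary label columns.
    mi = 0.0
    for u in (0, 1):
        for v in (0, 1):
            p_uv = np.mean((a == u) & (b == v))
            p_u, p_v = np.mean(a == u), np.mean(b == v)
            if p_uv > 0:
                mi += p_uv * math.log(p_uv / (p_u * p_v))
    return mi
```

Labels that always agree attain MI = ln 2, while independent labels score (close to) zero.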

    Figure 8: Ground-truth graphs of the synthetic dataset (left) – easy, medium, and hard according to the table – and their reconstruction found by the fs strategy [8] (middle) and lead strategy [26] (right).

    Appendix B Approximate inference via Monte Carlo

A better understanding of the MLC/MOC approaches described in Section 3, and of the novel scheme introduced in this work, can be achieved by describing the Monte Carlo (MC) procedures used to perform approximate inference over the graphical models constructed to approximate the conditional label distribution.

Given a probabilistic model of the conditional label distribution and a new test input, the goal of an MC scheme is to generate samples from the model that can be used to estimate its mode (the MAP estimate of the labels given the input), the marginal distribution of each label, or any other relevant statistical function of the data.

    b.1 Bayesian networks

In a directed acyclic graphical model, the probabilistic dependencies between variables are ordered. For instance, in the CC scheme the conditional distribution factorizes according to eq:chain. If it is possible to draw samples directly from each conditional density, then exact (ancestral) sampling can be performed in a simple manner: for each of the desired samples, the labels are drawn one at a time, each conditioned on the values already drawn for its predecessors. For Bayesian networks that are not fully connected, as in BCC, the procedure is similar: each component is drawn conditioned only on its parents in the graph.
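The ancestral sampling procedure for the chain case can be sketched as follows (names are ours; each entry of `conditionals` returns the probability that the corresponding label equals 1 given the previously drawn labels):

```python
import random

def ancestral_sample(conditionals):
    # conditionals[j](y_prefix) -> P(y_j = 1 | y_1, ..., y_{j-1}).
    # Draw one full label vector by sampling each label in chain order.
    y = []
    for cond in conditionals:
        p1 = cond(tuple(y))
        y.append(1 if random.random() < p1 else 0)
    return y
```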

    b.2 Markov networks

In an undirected graphical model (like that of a CDN), exact sampling is generally unfeasible. However, a Markov chain Monte Carlo (MCMC) technique can be implemented to generate samples from the target density; within this class, Gibbs sampling is often the most adequate approach. Assume the conditional distribution factorizes according to an undirected graphical model as in eq:undirected. Then, starting from an initial configuration, we repeatedly resample each label in turn, conditioning only on the current state of its neighbors in the graph (its Markov blanket), as seen from eq:Markov.
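A minimal Gibbs sampling sketch (names are ours; `cond` plays the role of the neighbor-conditional in eq:Markov, and the settling period corresponds to the discarded burn-in samples mentioned earlier):

```python
import random

def gibbs(cond, neighbors, L, burn_in, iters, seed=0):
    # cond(j, nbr_vals) -> P(y_j = 1 | neighbors of j set to nbr_vals).
    # neighbors[j]: indices of the Markov blanket of label j.
    rng = random.Random(seed)
    y = [0] * L
    samples = []
    for t in range(burn_in + iters):
        for j in range(L):
            p1 = cond(j, tuple(y[k] for k in neighbors[j]))
            y[j] = 1 if rng.random() < p1 else 0
        if t >= burn_in:          # discard the settling period
            samples.append(list(y))
    return samples
```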

    Following this approach, the state of the chain