Recent years have witnessed substantial advances in machine learning techniques that promise to address many complex large-scale problems previously thought intractable. However, in many applications, annotating enough representative training data to train a recognition system is costly, and in such cases, one can resort to active learning (AL) to reduce the annotation burden Freund et al. (1997); Dasgupta (2011). Moreover, several applications make it possible to leverage targeted interactions with human experts, as needed, to label informative data and drive the training process. AL has been used in various applications to reduce the cost of annotations, e.g., in medical image segmentation Konyushkova et al. (2015), text classification Tong and Koller (2001); Hoi et al. (2006) and visual object detection Vijayanarasimhan and Grauman (2014).
Alternatively, the cost of annotations can be reduced through weakly supervised learning, which generalizes many kinds of learning paradigms, including semi-supervised learning and multiple instance learning (MIL) in partially observable environments, or learning from uncertain labels. With MIL, training instances are grouped in sets (commonly referred to as bags), and a label is only provided for an entire set, but not for each individual instance. MIL has been shown to efficiently reduce annotation costs in several applications such as object detection (where labels are obtained for whole images) Ren et al. (2016), image description sentences Xu et al. (2016); Karpathy and Fei-Fei (2015); Fang et al. (2015) and web search engine results Zhu et al. (2015). This is particularly attractive for medical image analysis, where a system can learn using labeled images that were not locally annotated by experts Quellec et al. (2017). Other successful applications of MIL include text classification Ray and Craven (2005); Zhang et al. (2013, 2015), and sound classification Briggs et al. (2012).
This paper focuses on methods that are suitable for multiple instance active learning (MIAL) problems. Although several AL methods exist for single instance learning Settles (2009), only a handful of methods have been proposed to address MIAL problems Meessen et al. (2007); Settles et al. (2008); Zhang et al. (2010); Melendez et al. (2016). Single instance active learning (SIAL) methods are not suitable for MIL because: 1) in MIL, instances are grouped in sets or bags, and 2) training instances have weak labels. The arrangement of instances into bags gives rise to several different tasks, such as bag classification and instance classification, which must be addressed differently Carbonneau et al. (2016a).
Different learning scenarios exist for active MIL Settles et al. (2008). In this paper, we focus on the scenario where the learner has a set of labeled bags at its disposal, and must predict the label of each individual instance. The learner can query the oracle to label the bag’s content. The final objective is to uncover the true labels of the instances, which corresponds to the transduction setting described in Garcia-Garcia and Williamson (2011). Given instances that are correctly labeled, any classifier can be used in a supervised fashion to classify instances not belonging to the training set in an inductive setting Garcia-Garcia and Williamson (2011). To our knowledge, this scenario has never been studied in the literature. The few existing MIAL methods focus on bag classification Meessen et al. (2007); Settles et al. (2008); Zhang et al. (2010) or select groups of instances in a scenario where there is only one query round Melendez et al. (2016).
The MIAL scenario that we address is relevant in several real-world problems. For example, in some computer-assisted diagnosis applications, a classifier is trained to identify localized regions of organs or tissues afflicted by a given pathology. Such a classifier is typically trained using afflicted regions identified by an expert or a committee of experts, which is costly in terms of time and resources. This limits the quantity of data available for training. However, it is easier to obtain images along with a subject diagnosis as a weak label (bag label). In order to make better use of the experts, the MIAL learner identifies the subject whose local annotations would most improve the classifier. In this example, we believe that our learning scenario is more plausible than a scenario in which instances are queried individually. When experts are asked to provide local annotations of afflicted tissues or organs, it makes more sense to present an entire image (bag) of the patient rather than isolated regions (instances). In this kind of application, it is important for the annotator to be aware of the context provided by the surroundings of a segment when assigning a label. A similar argument can be made for text classification, where an instance can be a sentence or a paragraph: it is easier to provide an accurate label for individual parts with knowledge of the entire text.
Beyond the well-known difficulties associated with AL, MIL instance classification raises several challenges. First, leveraging the weak supervision provided by bag labels is challenging because it is not explicitly known how each instance relates to its bag label. Also, the fact that training instances are arranged in sets adds an extra layer of complexity regarding relations between training instances. Moreover, in MIL, instance classification is often associated with severe class imbalance problems. Finally, AL and weakly supervised learning are often used to reduce the annotation cost of large amounts of data, which calls for algorithms with low computational complexity. For cost-effective design of an instance classifier through MIL, an AL algorithm should:
characterize uncertainty in the instance space – assess which regions of the instance space are most ambiguous to the classifier, and thus informative for design.
identify the most informative bag for the learner given multiple regions of the instance space.
leverage bag label information, from queried and non-queried bags. This is in contrast to traditional AL problems because in our context bag labels provide weak indication of the instance labels.
Two new MIAL methods are proposed in this paper for bag-level aggregation of instance informativeness, allowing the learner to select the most informative bags to query, and then learn from. The first method – aggregated informativeness (AGIN) – assesses the informativeness of each instance to compute the informativeness of bags. Informativeness is based on classifier uncertainty, and instances near the decision boundaries are prioritized. The second method – cluster-based aggregative sampling (C-BAS) – characterizes clusters in the instance space by computing a criterion based on how much is known about the cluster composition and on the level of conflict between bag and instance labels. The criterion enforces the exploration of the instance space and promotes queries in regions near the decision boundary. Moreover, the criterion discourages the learner from querying instances whose label can be inferred from bag labels. Extensive experiments have been conducted to assess the benefits of both proposed methods in three application domains: text, image and sound classification.
2 Multiple Instance Active Learning
This paper focuses on pool-based AL methods Settles (2009), where the learner is supplied with a collection of unlabeled and labeled samples. The learner must select the best instance, or group of instances, to query. Pool-based AL problems have been tackled following two intuitions Dasgupta (2011): 1) queried instances should shrink the classifier hypothesis space as much as possible, and 2) the cluster structure of the data should be exploited for efficient exploration of the input space. The methods proposed in this paper address the MIAL problem from each of these two perspectives.
Several types of approaches shrink the classifier hypothesis space. Methods based on uncertainty query the most ambiguous instances for the classifier Tong and Koller (2001); Lewis and Gale (1994) or the instances causing the most disagreement in a pool of classifiers Seung et al. (1992); Melville and Mooney (2004). A drawback of these methods is that they tend to choose outliers for query, since outliers are often ambiguous for the classifier Tang et al. (2002); Zhu et al. (2008). To avoid this problem, some methods compute the expected error reduction Roy and McCallum (2001); Guo and Greiner (2007) or the expected model change Settles et al. (2008). They estimate the impact of obtaining each individual instance label on the generalization error or the model parameters. However, these methods are computationally expensive because classifiers must be trained for each possible label assignment of each unlabeled data sample. To avoid this cost, some methods aim to reduce generalization error by minimizing the model variance Hoi et al. (2006); Cohn et al. (1994), typically by inverting a Fisher information matrix for each training instance. The size of the matrix depends on the number of parameters in the model, which can rapidly become intractable Settles (2009). All these approaches are subject to sampling bias problems Dasgupta (2011), where some true instance labels may never be discovered for multi-modal distributions. This is because, at the start of the learning process, a classifier is trained using sampled data, and later queries are proposed near the decision boundaries of this classifier. If data structure exists but was not captured by the initial samples, it may never be discovered.
Another group of AL methods relies on the characterization of the data distribution in the input space Settles and Craven (2008); Fujii et al. (1998); Nguyen and Smeulders (2004). Instead of concentrating on decision boundaries, they assess the structure of input data in order to query informative instances that are representative of the input distribution. Leveraging the input data structure promotes exploration and discourages the selection of outliers. As a result, methods characterizing the input space yield better performance than other types of methods when the quantity of labeled data is limited. However, as more labels are queried, methods that shrink the hypothesis space tend to perform better Wang and Ye (2015). The complexity of these approaches is generally similar to that of other kinds of approaches, with an added initial cost for a clustering or density estimation step Settles and Craven (2008).
As will be described in Section 3, the AL methods proposed in this paper follow these two different intuitions. AGIN seeks to shrink the hypothesis space based on classifier uncertainty, while C-BAS characterizes the data distribution. These methods have been developed with computational efficiency in mind, which is increasingly important to address the growing complexity of large-scale applications.
Although MIL methods were initially proposed for bag classification Amores (2013), instance classification problems have more recently attracted growing interest Xu et al. (2016); Zhu et al. (2015); Vanwinckelen et al. (2015); Vezhnevets and Buhmann (2010). These are different tasks that require different approaches Carbonneau et al. (2016a); Vanwinckelen et al. (2015). MIL methods fall into one of two main categories depending on the level, bag or instance, at which discriminant information is extracted Amores (2013). Bag-level methods compare bags directly using set distance metrics, or embed bags in a single summarizing feature vector Chen et al. (2006); Wang and Zucker (2000); Cheplygina et al. (2015); Gärtner et al. (2002); Zhou et al. (2009). These methods do not perform instance classification and are unsuitable in our context. In contrast, instance-level methods predict the class of instances and combine these predictions to infer the bag label (e.g., APR Dietterich et al. (1997), DD and EM-DD Maron and Lozano-Pérez (1998); Zhang and Goldman (2001), mi-SVM and MI-SVM Andrews et al. (2002), RSIS Carbonneau et al. (2016b) and MI-Boost Babenko et al. (2008)). While these methods are usually designed for bag classification, they can be employed for instance classification tasks. It has been shown that bag classification and instance classification tasks have different misclassification costs Carbonneau et al. (2016a), which means that the best bag classifier is not necessarily the best instance classifier Vanwinckelen et al. (2015). Moreover, experiments in Ray and Craven (2005); Carbonneau et al. (2016a) show that single instance classifiers often perform comparably to MIL methods, especially for instance classification.
The literature on MIAL is limited, and almost every method is proposed for a specific learning scenario. Some methods query bag labels for bag classification. The method in Meessen et al. (2007) embeds bags in a single feature vector using a representation based on MILES Chen et al. (2006). An SVM is used for classification, and the embedded bags closest to the decision hyper-plane are selected as in Tong and Koller (2001). This method has been generalized in Zhang et al. (2010), where a selection method based on Fisher's information criterion has also been proposed. The learning scenario in Settles et al. (2008) is similar to ours in that all bag labels are known and the learner queries instance labels from positive bags. However, our goal is to train an instance classifier (not a bag classifier), and the learner queries all instance labels from a bag (instead of only one), which we believe to be more efficient in practice. They train a logistic regression classifier optimized for bag-level classification accuracy. Their selection method is based on uncertainty sampling and expected gradient length. Queried instances are duplicated and added to the training set as singleton bags. While this method works well in practice, it is computationally expensive, and the expected gradient length method is sensitive to feature scale Settles (2009). The method proposed in Melendez et al. (2016) targets the instance classification task in a particular MIAL scenario where there is only one query round. First, instances are classified using a MIL algorithm Melendez et al. (2014); then, the most valuable instances are grouped into regions. These hundreds of regions are then labeled by an expert, and the MIL classifier is retrained. This differs from the scenario in this paper because there is only one query round, and the expert must annotate a region instead of an image.
3 Proposed Methods
Figure 1 presents an overview of the MIAL framework for our learning scenario. The training data set is a set of $N$ bags, each one associated with a label $Y_i \in \{-1, +1\}$ and containing $n_i$ instances: $B_i = \{\mathbf{x}_{i1}, \ldots, \mathbf{x}_{in_i}\}$. Each instance $\mathbf{x}_{ij}$ has an associated label $y_{ij} \in \{-1, +1\}$. All the bag labels are known a priori. Following the standard MIL assumption Dietterich et al. (1997), the labels of instances in negative bags are assumed to be negative, while positive bags contain negative instances and at least one positive instance:

$$Y_i = \begin{cases} +1 & \text{if } \exists\, j : y_{ij} = +1, \\ -1 & \text{otherwise.} \end{cases} \qquad (1)$$
The task consists in training a classifier $f(\mathbf{x})$ to correctly predict the label of each individual instance. The classifier's decision function can be iteratively improved by querying an oracle about a bag. To select the most informative bag for query, the function $g(B_i)$ assigns an informativeness score to each of them. Once a bag $B^*$ has been selected for query, the oracle provides labels for all its instances. Then, the hypothesis on instance labels is updated, and the classifier is retrained. The next best candidate bag for query is selected, and so on. The rest of this section presents two new methods to derive $g(\cdot)$ for selecting bags for query.
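This query loop can be sketched as follows. This is a minimal illustration with hypothetical names: `train_fn`, `score_fn` and `oracle` are placeholders for the classifier training routine, the bag-scoring function and the expert, respectively.

```python
# Sketch of the bag-query loop described above (hypothetical names).
import numpy as np

def mial_loop(bags, bag_labels, train_fn, score_fn, n_queries, oracle):
    """bags: list of (n_i, d) arrays; bag_labels: list of +/-1 labels.
    train_fn(X, y) -> classifier; score_fn(clf, bag) -> informativeness.
    oracle(i) -> true instance labels of bag i."""
    # Initial hypothesis: every instance inherits its bag label.
    inst_labels = [np.full(len(b), lab) for b, lab in zip(bags, bag_labels)]
    # Negative bags need not be queried: their instance labels are known.
    queried = set(i for i, lab in enumerate(bag_labels) if lab < 0)
    for _ in range(n_queries):
        X = np.vstack(bags)
        y = np.concatenate(inst_labels)
        clf = train_fn(X, y)          # retrain on the current hypothesis
        candidates = [i for i in range(len(bags)) if i not in queried]
        if not candidates:
            break
        # Select the most informative non-queried bag.
        best = max(candidates, key=lambda i: score_fn(clf, bags[i]))
        inst_labels[best] = oracle(best)   # oracle labels the whole bag
        queried.add(best)
    return inst_labels
```

Any classifier and any bag-scoring function $g$ (such as AGIN or C-BAS below) can be plugged in.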
3.1 Aggregated Informativeness (AGIN)
This method is inspired by SIAL methods (like Tong and Koller (2001)) that select the instance expected to provide the largest reduction in the set of all consistent hypotheses. When working with SVM classifiers, this amounts to selecting the instance closest to the decision hyper-plane. However, in MIL problems, instances are grouped into bags, and the bag containing the single most informative instance is not necessarily the optimal choice. If the most informative instance is part of a bag containing only trivial instances, it may be advantageous to select another bag containing several difficult instances, even if none of them is the single most informative instance in the entire data set. In other words, a bag should be selected based on the combined informativeness of its instances.
Here we describe the method as an adaptation of Tong and Koller (2001). The SVM classifier is used as an example, but it can easily be replaced with any type of classifier. First, the distance to the decision hyper-plane must be transformed into instance informativeness. Let $h(\mathbf{x})$ be a function returning a classification score for an instance $\mathbf{x}$. This is the same as the classifier function $f(\mathbf{x})$, without a decision threshold. For an SVM, the decision hyper-plane is defined by $h(\mathbf{x}) = 0$. The informativeness of an instance can be obtained using a radial basis function centered at 0. Any type of function can be used as long as it is maximized at the decision threshold and decreases monotonically with distance. In this paper we use:

$$I(\mathbf{x}) = \exp\left(-h(\mathbf{x})^2\right) \qquad (2)$$
This function decreases exponentially as the magnitude of $h(\mathbf{x})$ increases. This ensures that instances located close to the hyper-plane are highly prioritized over other less ambiguous instances.
The informativeness score of a bag is the aggregation of informativeness scores over all its instances:

$$g(B_i) = \sum_{j=1}^{n_i} I(\mathbf{x}_{ij}) \qquad (3)$$
The bag $B^*$ with the highest informativeness score is selected for query:

$$B^* = \operatorname*{argmax}_{B_i}\; g(B_i) \qquad (4)$$
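The AGIN selection rule can be sketched as follows, assuming the classifier exposes a real-valued score $h(\mathbf{x})$ (function names are illustrative):

```python
# Minimal AGIN sketch: I(x) = exp(-h(x)^2) is the instance informativeness,
# and a bag's score is the sum of the informativeness of its instances.
import numpy as np

def instance_informativeness(scores):
    # Peaks at the decision boundary h(x) = 0, decays with |h(x)|.
    return np.exp(-np.asarray(scores, dtype=float) ** 2)

def select_bag(bags_scores):
    """bags_scores: list of arrays, one array of h(x) values per bag.
    Returns the index of the bag with highest aggregated informativeness."""
    g = [instance_informativeness(s).sum() for s in bags_scores]
    return int(np.argmax(g))
```

For example, with scores `[0.0, 5.0]` for one bag and `[0.3, 0.4, 0.5]` for another, the second bag is selected even though the first one contains the single most ambiguous instance, which is the intended effect of aggregation.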
3.2 Clustering-Based Aggregative Sampling (C-BAS)
This method is proposed to alleviate problems associated with sampling bias, and to leverage the weak information provided by bag labels and classifier predictions on instance labels. The intuition behind C-BAS is that a cluster of instances should meet three conditions to be informative: 1) bag label disagreement, 2) instance label disagreement, and 3) a considerable proportion of non-queried labels. If a cluster contains instances from only one class of bags, the label of these instances is the same as the label of their bag, and obtaining the true labels for these instances is not informative. Inversely, if a cluster contains different types of instances, it should contain a decision boundary, and acquiring labels in this cluster is likely to help refine the overall decision boundary. Finally, to encourage exploration, clusters for which few labels are known are considered informative. Figure 2 illustrates these situations.
C-BAS starts by hierarchical clustering of data in the instance space. As in Dasgupta and Hsu (2008), we employ agglomerative hierarchical clustering, although it can be replaced with any type of hierarchical clustering algorithm. This type of method does not require setting the number of expected clusters a priori, and creates a clustering dendrogram, or tree, that is used to create space partitionings of different granularities. The informativeness of instances in each cluster is evaluated by a criterion that accounts for the composition of the cluster. The criterion is composed of 3 terms enforcing the aforementioned conditions of informativeness:

$$\theta(C) = H_B(C) \cdot H_I(C) \cdot U(C) \qquad (5)$$
The term $H_B(C)$ measures the level of disagreement between bag labels with an entropy-based function:

$$H_B(C) = -p_B \log_2 p_B - (1 - p_B) \log_2 (1 - p_B) \qquad (6)$$

where $p_B$ is the proportion of instances from positive bags among the instances assigned to the cluster. If all instances come from bags of the same class, this term is equal to 0, which inhibits further search in this cluster. When bag labels are equally divided among the two classes, the term is equal to 1. Similarly, the term $H_I(C)$ measures the degree of disagreement between instance labels:
$$H_I(C) = -p_I \log_2 p_I - (1 - p_I) \log_2 (1 - p_I) \qquad (7)$$

where $p_I$ is the proportion of positive instances among the instances assigned to the cluster. When the true label of an instance remains unknown, the classifier's prediction is used as its label. Finally, the term $U(C)$ promotes cluster exploration based on the proportion of unlabeled instances that the cluster contains:

$$U(C) = \frac{n_u}{|C|} \qquad (8)$$

where $n_u$ is the number of non-queried instances in $C$ and $|C|$ is the total number of instances assigned to $C$.
When all instance labels are known, this term is equal to 0, and when none are known, it is 1.
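The three conditions can be combined per cluster as in the following sketch. The multiplicative combination and the names are assumptions made for illustration; the key property is that each term vanishing drives the criterion to zero.

```python
# Sketch of the per-cluster C-BAS criterion: a cluster is informative only if
# bag labels disagree, (predicted) instance labels disagree, and many of its
# instances are still unqueried.
import numpy as np

def entropy(p):
    # Binary entropy in bits; 0 when p is 0 or 1, 1 when p = 0.5.
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return float(-p * np.log2(p) - (1 - p) * np.log2(1 - p))

def cluster_criterion(bag_of, inst_label, queried):
    """bag_of: +/-1 bag label of each instance in the cluster;
    inst_label: current (true or predicted) instance labels;
    queried: boolean mask, True if the instance's true label is known."""
    bag_of = np.asarray(bag_of)
    inst_label = np.asarray(inst_label)
    queried = np.asarray(queried)
    h_bag = entropy(np.mean(bag_of > 0))        # bag-label disagreement
    h_inst = entropy(np.mean(inst_label > 0))   # instance-label disagreement
    u = float(np.mean(~queried))                # proportion of unqueried labels
    return h_bag * h_inst * u
```

A cluster drawn entirely from negative bags scores 0, so the learner never wastes queries on labels it can already infer.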
Exploring the Clustering Tree
The clustering tree is explored from top to bottom. Iteratively, the tree is pruned farther away from the trunk, each time yielding a clustering of finer granularity. For each clustering level $l$, the informativeness criterion of each cluster is computed. The informativeness of an instance is the accumulation of the informativeness of each cluster to which it was assigned:

$$I(\mathbf{x}) = \sum_{l} \sum_{C \in \mathcal{C}_l} \mathbb{1}\left[\mathbf{x} \in C\right] \theta(C) \qquad (9)$$

where $\mathcal{C}_l$ is the set of clusters obtained when the tree is cut at level $l$.
Different levels of granularity are necessary to correctly assess the informativeness of instances. By considering only the large clusters obtained at the top of the tree, all instances would provide the same level of information: they would all be assigned to a few large clusters, which are likely to present a high level of disagreement between labels and to include many non-queried instances. Inversely, by considering very fine cluster granularity (bottom of the tree), the levels of disagreement between labels $H_B$ and $H_I$ tend towards 0, which means $\theta(C) \rightarrow 0$ and thus $I(\mathbf{x}) \rightarrow 0$ for all $\mathbf{x}$. This is equivalent to randomly picking any unlabeled instance. Accumulating evidence on informativeness over several levels of cluster granularity provides a compromise between these two extreme cases. Once all instance informativeness scores are computed, the query bag is selected in the same way as for AGIN (see (3) and (4)).
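The accumulation over granularity levels can be sketched as follows. The data layout is hypothetical: one cluster-assignment array and one criterion table per level.

```python
# Sketch of accumulating the cluster criterion over clustering granularities:
# each level assigns every instance to one cluster, and an instance's
# informativeness is the sum of the criteria of its clusters.
import numpy as np

def accumulate_informativeness(levels, criteria):
    """levels: list of integer arrays, levels[l][k] = cluster id of instance k
    at granularity level l; criteria: list of dicts, criteria[l][c] = criterion
    of cluster c at level l. Returns one informativeness score per instance."""
    n = len(levels[0])
    info = np.zeros(n)
    for assign, crit in zip(levels, criteria):
        for k in range(n):
            info[k] += crit[assign[k]]
    return info
```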
4 Experiments

Table 1: Properties of the data sets (min / max / mean values across problems).

| Data set | Classes | Bags | Instances | Features | Inst. per bag (min/max/mean) | Class imbalance (min/max/mean) |
|---|---|---|---|---|---|---|
| SIVAL Settles et al. (2008); Rahmani et al. (2005) | 25 | 180 | 5690 | 30 | 31 / 32 / 32 | 0.035 / 0.218 / 0.095 |
| Birds Briggs et al. (2012) | 13 | 548 | 10232 | 38 | 2 / 43 / 19 | 0.003 / 0.143 / 0.040 |
| Newsgroups Settles et al. (2008) | 20 | 100 | 4060 | 200 | 8 / 84 / 40 | 0.012 / 0.035 / 0.018 |
All experiments were repeated 100 times and conducted with the following protocol. The data sets were randomly split into test (1/3) and training (2/3) subsets. For fair comparison, all MIAL methods are the same except for the bag selection scheme. The initial hypothesis for the label of each individual instance is that it inherits the label of its bag, which is often successful in practice Ray and Craven (2005); Carbonneau et al. (2016a). Bags are queried one by one until there are no positive bags left to query in the training set. After each query, the performance of classifiers is measured on the training and test subsets. This corresponds to the transductive and inductive learning settings described in Garcia-Garcia and Williamson (2011).
As bags are queried, class imbalance of instance labels grows, which is an important concern for MIL instance classification tasks Herrera et al. (2016). This is particularly true in data sets where the proportion of positive instances in positive bags is low. We handle class imbalance using the Different Error Costs SVM (DEC-SVM) Veropoulos et al. (1999). This SVM method assigns different misclassification costs to different classes. Table 2 reports the configuration of the SVM used for each data set. These parameters were obtained with 5-fold cross-validation using the real instance labels. We used the LIBSVM implementation Chang and Lin (2011). The ratio between the misclassification penalty costs of the classes corresponds to the class imbalance ratio $N^-/N^+$, where $N^+$ and $N^-$ are the number of positive and negative instances in the training set. Each time an SVM is trained, the class imbalance ratio is recomputed and the misclassification costs are adjusted accordingly.
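This cost-weighting scheme can be sketched with scikit-learn's `SVC` (an assumed substitute for the LIBSVM interface used in the paper; the function name and default parameters are illustrative):

```python
# Sketch of the different-error-costs SVM: the penalty ratio between classes
# is set to the class-imbalance ratio N-/N+ and recomputed at every retrain.
import numpy as np
from sklearn.svm import SVC

def train_dec_svm(X, y, C=1.0, gamma='scale'):
    n_pos = int(np.sum(y > 0))
    n_neg = int(np.sum(y <= 0))
    # Misclassifying a (rare) positive costs N-/N+ times more.
    weights = {1: n_neg / max(n_pos, 1), -1: 1.0}
    clf = SVC(C=C, gamma=gamma, class_weight=weights)
    clf.fit(X, y)
    return clf
```

Calling `train_dec_svm` inside the query loop, rather than fitting one SVM up front, is what keeps the cost ratio in sync with the evolving label hypothesis.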
Performance is reported in terms of the $F_1$-Score and the area under the precision-recall curve ($AUC_{PR}$), which are appropriate metrics for problems with class imbalance.
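Both metrics can be computed as in the following sketch (using scikit-learn is an assumption about tooling; the toy labels and scores are illustrative):

```python
# F1 is computed from hard predictions; the area under the precision-recall
# curve is computed from real-valued scores (average precision estimator).
from sklearn.metrics import f1_score, average_precision_score

y_true = [1, 1, -1, -1, -1]          # ground-truth instance labels
y_pred = [1, -1, -1, -1, -1]         # hard predictions of the classifier
scores = [0.9, 0.4, 0.35, 0.2, 0.1]  # real-valued classifier scores

f1 = f1_score(y_true, y_pred)                    # 2PR / (P + R)
auprc = average_precision_score(y_true, scores)  # area under the PR curve
```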
To assess the benefits of employing bag selection schemes for query selection, the first reference method selects bags at random. It selects only positive bags, since the labels of instances in negative bags are assumed to be known. The few MIAL methods proposed in the literature were not designed for instance classification, so the simple margin method Tong and Koller (2001) was considered as the second reference method. It consists of picking the unlabeled instance closest to the decision hyper-plane of the SVM. In our experiments, the method selects the bag containing this most informative instance. This method was originally intended for single instance learning scenarios and is closely related to AGIN. It is therefore relevant for showing the effect of the proposed aggregation schemes.
4.1 Data Sets
The MIAL methods are evaluated using the three most widely used collections of MIL data sets providing instance annotations: Birds Briggs et al. (2012), SIVAL and Newsgroups. The last two were introduced to compare MIAL methods in Settles et al. (2008). They represent 3 different application domains: content-based image retrieval, text and sound classification. Each data set contains different classes which are in turn used as the positive class, yielding a total of 58 different problems. Table 1 gives an overview of the properties of each data set.
The Spatially Independent, Variable Area and Lighting (SIVAL) data set for visual object retrieval Rahmani et al. (2005) contains 1500 images, each depicting one of 25 complex objects photographed from different viewpoints in various environments. The version used in this paper has been segmented and hand-labeled to compare MIAL approaches in Settles et al. (2008). Each object is in turn considered as the positive class, and all remaining objects are part of the negative class. This yields 25 different 2-class learning problems. Each of the 25 data sets contains 60 positive images and 120 negative images sampled uniformly from all 24 negative classes. Images are represented as bags, which are collections of segments. Texture and color features are extracted from segments, as well as neighborhood information, yielding a 30-dimensional feature vector for each. The proportion of positive instances in positive bags is 25.5% on average and ranges from 3.1% to 90.6%. This data set exhibits high intra-class variation, which means that the positive instance distribution is multimodal.
This data set Briggs et al. (2012) contains recordings of bird songs captured by unattended microphones in a forest. Each bag is the spectrogram of a 10-second recording. The recording is temporally segmented, and 38 features characterizing shape, time and frequency profile statistics are extracted from each segment. The data set contains 13 species of birds, which are in turn considered as the positive class, yielding 13 problems. This data set is difficult because in some cases there is extreme class imbalance at the bag and instance levels. For example, only 32 instances out of 10232 belong to the hermit thrush. In the best case, positive instances represent 12.5% of all instances. As opposed to the other data sets, each class (except for background noise) is represented by a single compact cluster in space.
This MIL data set was created in Settles et al. (2008) using instances from the 20 Newsgroups corpus. Instances are posts from newsgroups about 20 different subjects. Each post is represented by a 200-term frequency-inverse document frequency feature vector. For each version of the data set, a subject is selected as the positive class and the 19 remaining subjects constitute the negative class. A bag is a collection of posts. The feature vectors used for this data set are sparse histograms, which makes the distribution different from the two other problems. It constitutes a good way to evaluate the robustness of the proposed methods to different data distribution types. Moreover, the average proportion of positive instances in positive bags is rather low, which also makes the problem difficult and accentuates problems related to class imbalance.
4.2 Implementation Details for C-BAS
Here we detail the particular implementation of C-BAS used in the experiments. The clustering tree is obtained using Ward's linkage algorithm. We then obtain different clustering refinements by cutting the tree at different levels. To make sure to cut at significant levels in the tree, we compute the inconsistency coefficient of all links in the tree:

$$IC(l) = \frac{h_l - \mu_l}{\sigma_l} \qquad (10)$$

where $h_l$ is the height of the link $l$ (the cophenetic distance between the clusters it joins). The set $\mathcal{L}_l$ contains all links in the $d$ hierarchical levels under $l$, and $\mu_l$ and $\sigma_l$ are the average and the standard deviation of the heights of the links contained in $\mathcal{L}_l$. A high inconsistency coefficient means that the two clusters joined by the link are farther apart than the clusters linked in the levels below, which indicates a natural separation in the data structure.
Once the inconsistency coefficients have been computed for all links, they are sorted from highest to lowest. Clusterings are obtained using these values as thresholds: instances or clusters can only be linked together if the inconsistency coefficient of the link is lower than the threshold. Iteratively, the threshold is lowered and finer clusterings are obtained. In the experiments of this paper, we use 20 threshold levels, and $d$ has been arbitrarily set to 16 for all data sets. Both parameters could be optimized depending on the application.
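This tree-cutting step can be sketched with SciPy's hierarchical clustering utilities (an assumption about the implementation; the paper's own code may differ):

```python
# Sketch of inconsistency-based tree cutting: thresholding the inconsistency
# coefficient at decreasing values yields increasingly fine clusterings.
import numpy as np
from scipy.cluster.hierarchy import linkage, inconsistent, fcluster

def clusterings_by_inconsistency(X, n_levels=20, depth=16):
    Z = linkage(X, method='ward')
    # Inconsistency of each link w.r.t. links up to `depth` levels below it
    # (column 3 of the matrix returned by `inconsistent`).
    coeffs = inconsistent(Z, d=depth)[:, 3]
    thresholds = np.sort(coeffs)[::-1][:n_levels]
    # One flat clustering per threshold, from coarse to fine.
    return [fcluster(Z, t=t, criterion='inconsistent', depth=depth)
            for t in thresholds]
```

Each returned array assigns a cluster id to every instance, giving the per-level partitions over which the C-BAS criterion is accumulated.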
5 Results and Discussion
MIAL methods are evaluated based on their ability to uncover the true instance labels in the training set (transductive learning task) and to classify a test set with a classifier trained using these uncovered labels (inductive learning task). Fig. 3 shows an example (over 100 runs) of the evolution of average $F_1$-Score values on the training subset as a function of the number of queries to the oracle. Similar learning curves were obtained with $AUC_{PR}$ but are not shown here since they do not provide pertinent additional information. Results show that for each data set, the proposed methods can significantly improve the learning process. Each curve starts (no bags have been queried) and finishes (all true instance labels are known) at the same level of performance.
From these curves, it is possible to see how many queries are necessary to achieve the same level of performance with different methods. For example, selecting random bags may require as many as 23 (out of 40) more queries than C-BAS to obtain the same $F_1$-Score on the Glaze Wood Pot training set. This is a best case scenario, but nonetheless, out of the 58 data sets, using AGIN has led to a reduction of the number of queries necessary on all but 1 test data set with the $F_1$ metric. Similarly, C-BAS has resulted in a query reduction for all but 2 data sets.
In some of these curves, after a certain number of queries, the performance starts to decrease (see Fig. 3). While it seems counter-intuitive, this can be explained by the fact that the metric reported in the graph is different from the surrogate loss function used as an optimization objective. In our case, the SVM optimizes the hinge loss over all instances, which does not guarantee the optimization of the $F_1$-Score (see Loog and Duin (2012); Loog et al. (2017) for a more detailed discussion on the subject).
To compare the overall performance of methods over the entire AL sequence, the normalized area under the learning curve (NAULC) was used for both the $F_1$-Score and $AUC_{PR}$ metrics. It corresponds to the area under curves such as those displayed in Fig. 3, divided by the total number of queries. For each problem in each data set, we compute the average NAULC and identify the best performing method as a win. Statistical significance of results is assessed using a t-test ($\alpha = 5\%$). Table 3 reports the number of wins for all methods (complete result tables can be found in the supplementary material document). Both proposed methods outperform the reference methods for all three application domains and for both the transductive and inductive tasks. Results indicate that aggregating instance informativeness to select queried bags is a better strategy than selecting the most ambiguous instance, and that SIAL methods should be adapted to MIL problems to improve performance.
The results suggest that the proposed methods are better suited to different types of data. For example, AGIN outperforms the other methods on the Birds data set, while C-BASS yields better results on the SIVAL data. Indeed, the positive instances in the Birds data are likely to be grouped in very few clusters, since birds of the same species tend to have similar songs. In that case, the best strategy is to concentrate on refining the decision boundary, since there is no hidden cluster structure to discover. Conversely, the positive distribution in the SIVAL data is likely to have several modes. The appearance of an object, and thus its corresponding feature representation, can be very different depending on point of view, scale and illumination. In that case, it is important to discover these multiple clusters as rapidly as possible, which favors the C-BASS approach.
[Legend: Random Bags, Simple Margin, AGIN, C-BASS]
The results in Table 3 suggest that AGIN and C-BASS are better suited to different tasks. This is because uncovering the labels of instances in labeled bags is slightly different from training a classifier that generalizes well to unseen data. It has to do with how the algorithms approach the problem, with class imbalance, and with the initial hypothesis on instance labels. The initial hypothesis, that all instances inherit their bag label, ensures that all positive instances are used to train the classifier. At the same time, many negative instances are falsely labeled positive (FP). These noisy labels do not necessarily pose a serious difficulty when training the classifier. In regions densely populated with negative instances, FPs are outweighed by true negative instances, and are thus overlooked by the classifier. In regions where true positives and negatives are mixed, FPs artificially expand the classifier's positive regions, which has the effect of increasing the sensitivity of the classifier. This means that, as bags are queried, precision increases but recall decreases. The initial increased sensitivity of the classifier has a beneficial effect on generalization (under these metrics) in contexts where there is class imbalance. Therefore, preserving this effect while refining the decision boundary ensures better generalization while learning. This explains why AGIN performs better for test set classification. Conversely, C-BASS uncovers FPs in all regions of the instance space, which helps it yield better results on the transductive task, but mitigates the beneficial effect of the temporarily increased sensitivity when compared to AGIN.
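The initial hypothesis can be made concrete with a short sketch. The bags below are hypothetical, and the SVM is a generic one rather than the paper's exact configuration:

```python
import numpy as np
from sklearn.svm import SVC

def initial_instance_labels(bags, bag_labels):
    """Initial hypothesis: every instance inherits the label of its bag.
    All true positives are kept, at the cost of false positives coming
    from the negative instances inside positive bags."""
    X = np.vstack(bags)
    y = np.concatenate([np.full(len(b), lab) for b, lab in zip(bags, bag_labels)])
    return X, y

# Two hypothetical bags: a positive bag with one true positive instance
# hidden among negatives, and a fully negative bag.
rng = np.random.default_rng(1)
pos_bag = np.vstack([rng.normal(-1, 0.2, (4, 2)),   # negatives, mislabeled FP
                     rng.normal(3, 0.2, (1, 2))])   # the true positive
neg_bag = rng.normal(-1, 0.2, (5, 2))

X, y = initial_instance_labels([pos_bag, neg_bag], [1, -1])

# In the dense negative region around (-1, -1), the 4 FPs are outweighed by
# the 5 true negatives; the isolated true positive still anchors a positive
# region around (3, 3).
clf = SVC(kernel="rbf", class_weight="balanced").fit(X, y)
```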
It has previously been shown that when very few instances are labeled, methods characterizing the distribution of the input space, like C-BASS, perform better than methods reducing the classifier hypothesis space, like AGIN, and vice versa Wang and Ye (2015). This is observed in our experiments (see Fig. 4). This is because C-BASS pushes the learner to quickly explore the most promising data clusters, while also preventing the learner from querying instance labels that can be inferred from bag labels. After a certain number of queries, it becomes more important to refine the decision boundaries, and that is when AGIN performs better.
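The cluster-exploration behavior can be sketched as follows. This is an illustrative stand-in (k-means clusters, counting instances in as-yet-unexplored clusters), not C-BASS's exact criterion:

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_exploration_scores(bags, labeled_instances, n_clusters=5, seed=0):
    """Score bags by how many of their instances fall in clusters that no
    labeled instance has reached yet, steering queries toward unexplored
    modes of the instance distribution."""
    X = np.vstack(bags)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(X)
    explored = set(km.predict(labeled_instances)) if len(labeled_instances) else set()
    scores = []
    for bag in bags:
        assignments = km.predict(bag)
        scores.append(np.sum([a not in explored for a in assignments]))
    return np.asarray(scores)
```

A bag containing instances in a cluster untouched by labeled data scores higher than a bag whose instances all lie in already-explored clusters.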
For instance classification problems in MIAL, exploration of the instance space is always promoted indirectly, which reduces the severity of the sample-bias problems found in SIAL. This implicit exploration comes from the fact that all instances of a queried bag are labeled together. Even if a bag is selected because it contains instances near a decision boundary, the other instances in the bag provide information about other regions of the instance space. This helps AGIN achieve a high level of performance.
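The bag-level aggregation idea behind AGIN can be sketched generically. Here, instance informativeness is taken as the inverse distance to the SVM decision boundary and summed over the bag; this is an illustrative choice, not necessarily the paper's exact formulation:

```python
import numpy as np
from sklearn.svm import SVC

def bag_scores(clf, bags, aggregate=np.sum):
    """Score each unlabeled bag by aggregating the informativeness of its
    instances; informativeness = inverse distance to the decision boundary."""
    return np.array([aggregate(1.0 / (np.abs(clf.decision_function(b)) + 1e-8))
                     for b in bags])

def query_next_bag(clf, bags):
    """Query the bag whose aggregated instance informativeness is highest."""
    return int(np.argmax(bag_scores(clf, bags)))

# Toy 1-D classifier with its decision boundary at x = 0.
clf = SVC(kernel="linear").fit(np.array([[-1.0], [1.0]]), [-1, 1])

far_bag = np.array([[5.0], [6.0]])         # all instances far from the boundary
ambiguous_bag = np.array([[0.05], [5.0]])  # one instance near the boundary
```

A single near-boundary instance dominates the aggregated score, so the ambiguous bag is queried first even though its other instance is far away.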
Based on these experiments, the AGIN method appears preferable in many situations. It achieves a high level of accuracy while remaining fairly simple to implement, and it exhibits competitive performance in both transductive and inductive learning tasks. There are two situations where it is preferable to use C-BASS: 1) when few labels are known, and 2) when the positive instances are distributed in several regions of the input space.
While the proposed methods perform well with the types of data used in our experiments, we believe there are some types of MIL problems where they might not yield optimal performance. As explained in Carbonneau et al. (2016a), MIL problems can possess several characteristics that require special care, some of which would probably be difficult to address with the proposed algorithms. For example, the proposed methods assume that all features are relevant for classification. This makes it difficult to deal with MIL data presenting strong intra-bag similarity, where instances from the same bag are similar and thus located in the same region of the feature space. Also, AGIN and C-BASS were developed under the standard MIL assumption, where all instances in negative bags are assumed to be negative. This assumption is sometimes violated in practice. Finally, the algorithms are designed for single-bag queries. In batch-mode AL contexts, the oracle is asked to label a set of queries. The proposed algorithms do not implement a mechanism ensuring that the bags contained in a query set are different, which might be sub-optimal in this context.
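As a point of reference, a standard way to add such a diversity mechanism (not part of the proposed algorithms) is a greedy selection that trades bag informativeness against similarity to already-selected bags, each bag summarized by its centroid:

```python
import numpy as np

def diverse_batch(bag_scores, bag_centroids, k, lam=0.5):
    """Greedily build a batch of k bags, rewarding informativeness while
    penalizing similarity to bags already in the batch. A generic sketch;
    lam controls the informativeness/diversity trade-off."""
    selected = [int(np.argmax(bag_scores))]  # start with the top-scoring bag
    while len(selected) < k:
        best, best_val = None, -np.inf
        for i in range(len(bag_scores)):
            if i in selected:
                continue
            # distance to the closest already-selected bag (diversity bonus)
            d = min(np.linalg.norm(bag_centroids[i] - bag_centroids[j])
                    for j in selected)
            val = bag_scores[i] + lam * d
            if val > best_val:
                best, best_val = i, val
        selected.append(best)
    return selected
```

With lam = 0 the selection degenerates to picking the k highest-scoring bags, which may all be near-duplicates; a positive lam spreads the batch across the instance space.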
This paper introduces two methods for MIAL in instance classification problems. Experiments show that leveraging the bag-level structure of the data provides a significant reduction in the number of queries needed to train accurate classifiers on different benchmark problems. Future research includes studying how different types of structure and correlation within and between bags affect the behavior of MIAL algorithms. Extensions of the methods should be proposed to mitigate the effect of similar instances within the same bag and to improve the batch-mode learning process. Finally, experiments will be conducted to measure the benefit of MIAL on data collected from large real-world clinical contexts.
- Freund et al. (1997) Y. Freund, H. S. Seung, E. Shamir, N. Tishby, Selective Sampling Using the Query by Committee Algorithm, Mach. Learn. 28 (2) (1997) 133–168.
- Dasgupta (2011) S. Dasgupta, Two faces of active learning, Theor. Comput. Sci. 412 (19) (2011) 1767–1781.
- Konyushkova et al. (2015) K. Konyushkova, R. Sznitman, P. Fua, Introducing Geometry in Active Learning for Image Segmentation, in: ICCV, 2015.
- Tong and Koller (2001) S. Tong, D. Koller, Support Vector Machine Active Learning with Applications to Text Classification, J. Mach. Learn. Res. 2 (2001) 45–66, ISSN 1532-4435.
- Hoi et al. (2006) S. C. H. Hoi, R. Jin, M. R. Lyu, Large-scale Text Categorization by Batch Mode Active Learning, in: WWW, 2006.
- Vijayanarasimhan and Grauman (2014) S. Vijayanarasimhan, K. Grauman, Large-Scale Live Active Learning: Training Object Detectors with Crawled Data and Crowds, Int. J. Comput. Vis. 108 (1) (2014) 97–114.
- Ren et al. (2016) W. Ren, K. Huang, D. Tao, T. Tan, Weakly Supervised Large Scale Object Localization with Multiple Instance Learning and Bag Splitting, IEEE Trans. Pattern Anal. Mach. Intell. 38 (2) (2016) 405–416.
- Xu et al. (2016) H. Xu, S. Venugopalan, V. Ramanishka, M. Rohrbach, K. Saenko, A Multi-scale Multiple Instance Video Description Network, CoRR abs/1505.0.
- Karpathy and Fei-Fei (2015) A. Karpathy, L. Fei-Fei, Deep Visual-Semantic Alignments for Generating Image Descriptions, in: CVPR, 2015.
- Fang et al. (2015) H. Fang, S. Gupta, F. Iandola, R. K. Srivastava, L. Deng, P. Dollar, J. Gao, X. He, M. Mitchell, J. C. Platt, C. Lawrence Zitnick, G. Zweig, From Captions to Visual Concepts and Back, in: CVPR, 2015.
- Zhu et al. (2015) J. Y. Zhu, J. Wu, Y. Xu, E. Chang, Z. Tu, Unsupervised Object Class Discovery via Saliency-Guided Multiple Class Learning, IEEE Trans. Pattern Anal. Mach. Intell. 37 (4) (2015) 862–875.
- Quellec et al. (2017) G. Quellec, G. Cazuguel, B. Cochener, M. Lamard, Multiple-instance learning for medical image and video analysis, IEEE Rev. Biomed. Eng.
- Ray and Craven (2005) S. Ray, M. Craven, Supervised Versus Multiple Instance Learning: An Empirical Comparison, in: ICML, 2005.
- Zhang et al. (2013) D. Zhang, J. He, R. Lawrence, MI2LS: Multi-instance Learning from Multiple Informationsources, in: KDD, 2013.
- Kotzias et al. (2015) D. Kotzias, M. Denil, N. de Freitas, P. Smyth, From Group to Individual Labels Using Deep Features, in: KDD, 2015.
- Briggs et al. (2012) F. Briggs, X. Z. Fern, R. Raich, Rank-Loss Support Instance Machines for MIML Instance Annotation, in: KDD, 2012.
- Settles (2009) B. Settles, Active Learning Literature Survey, Computer Sciences Technical Report 1648, University of Wisconsin–Madison, 2009.
- Meessen et al. (2007) J. Meessen, X. Desurmont, J. F. Delaigle, C. D. Vleeschouwer, B. Macq, Progressive Learning for Interactive Surveillance Scenes Retrieval, in: CVPR, ISSN 1063-6919, 2007.
- Settles et al. (2008) B. Settles, M. Craven, S. Ray, Multiple-Instance Active Learning, in: NIPS, 2008.
- Zhang et al. (2010) D. Zhang, F. Wang, Z. Shi, C. Zhang, Interactive localized content based image retrieval with multiple-instance active learning, Pattern Recognit. 43 (2) (2010) 478–484.
- Melendez et al. (2016) J. Melendez, B. van Ginneken, P. Maduskar, R. H. H. M. Philipsen, H. Ayles, C. I. Sánchez, On Combining Multiple-Instance Learning and Active Learning for Computer-Aided Detection of Tuberculosis, IEEE Trans. Med. Imaging 35 (4) (2016) 1013–1024.
- Carbonneau et al. (2016a) M.-A. Carbonneau, V. Cheplygina, E. Granger, G. Gagnon, Multiple Instance Learning: A Survey of Problem Characteristics and Applications, ArXiv e-prints abs/1612.0.
- Garcia-Garcia and Williamson (2011) D. Garcia-Garcia, R. C. Williamson, Degrees of supervision, in: NIPS, 897–904, 2011.
- Lewis and Gale (1994) D. D. Lewis, W. A. Gale, A Sequential Algorithm for Training Text Classifiers, in: SIGIR, 1994.
- Seung et al. (1992) H. S. Seung, M. Opper, H. Sompolinsky, Query by Committee, in: COLT, 1992.
- Melville and Mooney (2004) P. Melville, R. J. Mooney, Diverse Ensembles for Active Learning, in: ICML, 2004.
- Tang et al. (2002) M. Tang, X. Luo, S. Roukos, Active Learning for Statistical Natural Language Parsing, in: ACL, 2002.
- Zhu et al. (2008) J. Zhu, H. Wang, T. Yao, B. K. Tsou, Active Learning with Sampling by Uncertainty and Density for Word Sense Disambiguation and Text Classification, in: COLING, 2008.
- Roy and McCallum (2001) N. Roy, A. McCallum, Toward Optimal Active Learning Through Sampling Estimation of Error Reduction, in: ICML, 2001.
- Guo and Greiner (2007) Y. Guo, R. Greiner, Optimistic Active Learning Using Mutual Information, in: IJCAI, San Francisco, CA, USA, 2007.
- Cohn et al. (1994) D. A. Cohn, Z. Ghahramani, M. I. Jordan, Active Learning with Statistical Models, in: NIPS, 1994.
- Settles and Craven (2008) B. Settles, M. Craven, An Analysis of Active Learning Strategies for Sequence Labeling Tasks, in: EMNLP, 2008.
- Fujii et al. (1998) A. Fujii, T. Tokunaga, K. Inui, H. Tanaka, Selective Sampling for Example-based Word Sense Disambiguation, Comput. Linguist. 24 (4) (1998) 573–597.
- Nguyen and Smeulders (2004) H. T. Nguyen, A. Smeulders, Active Learning Using Pre-clustering, in: ICML, 2004.
- Wang and Ye (2015) Z. Wang, J. Ye, Querying Discriminative and Representative Samples for Batch Mode Active Learning, ACM Trans. Knowl. Discov. Data 9 (3) (2015) 17:1–17:23.
- Amores (2013) J. Amores, Multiple Instance Classification: Review, Taxonomy and Comparative Study, Artif. Intell. 201 (2013) 81–105.
- Vanwinckelen et al. (2015) G. Vanwinckelen, V. do O, D. Fierens, H. Blockeel, Instance-level accuracy versus bag-level accuracy in multi-instance learning, Data Min. Knowl. Discov. (2015) 1–29.
- Vezhnevets and Buhmann (2010) A. Vezhnevets, J. M. Buhmann, Towards weakly supervised semantic segmentation by means of multiple instance and multitask learning, in: CVPR, 2010.
- Chen et al. (2006) Y. Chen, J. Bi, J. Z. Wang, MILES: Multiple-Instance Learning via Embedded Instance Selection, IEEE Trans. Pattern Anal. Mach. Intell. 28 (12) (2006) 1931–1947.
- Wang and Zucker (2000) J. Wang, J.-D. Zucker, Solving the Multiple-Instance Problem: A Lazy Learning Approach, in: ICML, 2000.
- Cheplygina et al. (2015) V. Cheplygina, D. M. J. Tax, M. Loog, Dissimilarity-Based Ensembles for Multiple Instance Learning, IEEE Trans. on Neural Networks and Learning Systems (2015) 1–13.
- Gärtner et al. (2002) T. Gärtner, P. A. Flach, A. Kowalczyk, A. J. Smola, Multi-Instance Kernels, in: ICML, 2002.
- Zhou et al. (2009) Z.-H. Zhou, Y.-Y. Sun, Y.-F. Li, Multi-Instance Learning by Treating Instances As non-I.I.D. Samples, in: ICML, 2009.
- Dietterich et al. (1997) T. G. Dietterich, R. H. Lathrop, T. Lozano-Pérez, Solving the Multiple Instance Problem with Axis-parallel Rectangles, Artif. Intell. 89 (1-2) (1997) 31–71.
- Maron and Lozano-Pérez (1998) O. Maron, T. Lozano-Pérez, A Framework for Multiple-Instance Learning, in: NIPS, 1998.
- Zhang and Goldman (2001) Q. Zhang, S. A. Goldman, EM-DD : An Improved Multiple-Instance Learning Technique, in: NIPS, 2001.
- Andrews et al. (2002) S. Andrews, I. Tsochantaridis, T. Hofmann, Support Vector Machines for Multiple-Instance Learning, in: NIPS, 2002.
- Carbonneau et al. (2016b) M.-A. Carbonneau, E. Granger, A. J. Raymond, G. Gagnon, Robust multiple-instance learning ensembles using random subspace instance selection, Pattern Recognit. 58 (2016b) 83–99.
- Babenko et al. (2008) B. Babenko, P. Dollár, Z. Tu, S. Belongie, Simultaneous Learning and Alignment: Multi-Instance and Multi-Pose Learning, in: ECCV, 2008.
- Melendez et al. (2014) J. Melendez, et al., A novel multiple-instance learning-based approach to computer-aided detection of tuberculosis on chest x-rays, TMI 31 (1) (2014) 179–192.
- Dasgupta and Hsu (2008) S. Dasgupta, D. Hsu, Hierarchical Sampling for Active Learning, in: ICML, 208–215, 2008.
- Rahmani et al. (2005) R. Rahmani, S. A. Goldman, H. Zhang, J. Krettek, J. E. Fritts, Localized Content Based Image Retrieval, in: SIGMM, 2005.
- Herrera et al. (2016) F. Herrera, S. Ventura, R. Bello, C. Cornelis, A. Zafra, D. Sánchez-Tarragó, S. Vluymans, Multiple Instance Multiple Label Learning, chap. 9, Springer International Publishing, 191–206, 2016.
- Veropoulos et al. (1999) K. Veropoulos, C. Campbell, N. Cristianini, Controlling the Sensitivity of Support Vector Machines, in: IJCAI, 1999.
- Chang and Lin (2011) C.-C. Chang, C.-J. Lin, LIBSVM: A Library for Support Vector Machines, ACM Trans. Intell. Syst. Technol. 2 (3) (2011) 27:1–27:27.
- Loog and Duin (2012) M. Loog, R. P. W. Duin, The Dipping Phenomenon, in: Structural, Syntactic, and Statistical Pattern Recognition: Joint IAPR International Workshop, SSPR&SPR, 2012.
- Loog et al. (2017) M. Loog, J. H. Krijthe, A. C. Jensen, On Measuring and Quantifying Performance: Error Rates, Surrogate Loss, and an Example in SSL, ArXiv abs/1707.04025.