1 Introduction
Multiple Instance Learning (MIL) is a generalization of supervised learning in which the data is given as bags, and each bag is a set of objects. Each object can be either positive or negative, but we are not given this information. Instead, we are given the label of the bag as a whole, such that the bag is positive if and only if at least one of the objects in the bag is positive. The goal is to learn the instance classifier – a classifier for objects – using only the bag labels. In recent years, there has been a constant stream of work on the MIL problem. We refer to [2] and [5] for surveys of recent results and applications.
One natural and important application of MIL is in the domain of images with weak labels. Here, one considers a large image which may contain several objects, such as “car” or “tree”, but the locations of the objects in the image are not specified. In this case, one may divide the image into smaller overlapping patches, which together constitute a bag, such that some of the patches correspond to some of the labels. The labels themselves can be derived from text related to the image, such as the captions in the COCO dataset. This scheme was, for instance, an important part of automatic image description generation methods such as [9] and [6], but it has numerous other applications.
One approach to the MIL problem is via the Single Instance (SI) method. In this method, one simply unpacks the bags, and considers a supervised learning problem where the data is the set of all objects from all the bags, and the label of each object is the label of the bag from which the object was extracted. In what follows we refer to this assignment of labels to objects as the SI assignment.
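The unpacking step behind the SI assignment can be sketched in a few lines of Python; the bag structures and the function name below are hypothetical stand-ins for a real MIL dataset, not part of any particular implementation.

```python
# A sketch of the SI assignment: unpack bags into a flat supervised dataset,
# giving every object the label of its bag. All names here are illustrative.

def si_unpack(bags, bag_labels):
    """bags: list of lists of objects; bag_labels: one 0/1 label per bag."""
    objects, labels = [], []
    for bag, y in zip(bags, bag_labels):
        for x in bag:
            objects.append(x)
            labels.append(y)  # every object inherits its bag's label
    return objects, labels

# Example: one positive bag (contains at least one positive object)
# and one negative bag.
X, y = si_unpack([["a", "b"], ["c"]], [1, 0])
```

After unpacking, any standard supervised learner can be trained on `(X, y)`; the analysis in Section 3 describes what the resulting classifier converges to.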
An oft-cited advantage of the SI method is its conceptual and computational simplicity. However, perhaps an even more important advantage is scalability. Indeed, non-SI MIL methods typically compute a certain score for each bag, which depends on the individual scores of the objects in it. In order to compute this score, one may therefore need to design iterative procedures if the bag is too large to fit in a single batch. In contrast, in SI the objects are no longer tied to a bag, and each bag can be divided into independent batches. Additional details are given in Section 5.
While the SI method has been empirically investigated in the literature, there seems to be no complete picture of when the method is effective and why, and at the same time significant efforts are made to construct new and highly involved MIL methods.
In this paper, we show that in the case of unbalanced labels, and when the class of classifiers is rich enough, the SI method is effective. We now describe the results in more detail.
Let $n_+$ and $n_-$ be the numbers of positive and negative bags in the dataset, respectively. Let $r$ be the ratio, such that $r = n_-/n_+$. We call the dataset unbalanced if $r$ is large. For instance, in the COCO dataset, for a label such as “car”, $r$ is large.
An additional dataset characteristic that affects the performance of all MIL algorithms is the amount of intra-bag dependence. Roughly speaking, we say that the dataset has low intra-bag dependence if the negative features in positive bags look like generic negative features. The full definition is given in Section 3.2, where we refer to this as the mixing assumption. It is known empirically, and for some methods theoretically, that under this assumption the MIL problem is relatively easy. Here we show that this is also the case for the SI method.
More importantly, however, we analyze the SI method in cases where the mixing assumption does not hold. In these cases, we find that the larger the imbalance constant $r$ is, the more tolerant the SI method is to the lack of mixing. Since many natural datasets exhibit lack of mixing, but also data imbalance, it follows that the SI method is expected to perform well. The lack of mixing for image data in particular is discussed in Section 3.2.
Finally, as discussed in more detail in the Literature section, the evaluation of SI in the literature is done with linear classifiers. On the other hand, our results, Theorems 3.2 and 3.3, express the optimizer of the SI objective as a certain functional related to the optimal instance classifier. This functional, however, is rarely a linear classifier even if the optimal instance classifier is. This strongly suggests that, to use the SI method, at least some nonlinearity should be added, and that the SI method is particularly well suited for use with neural networks.
In Section 4 we perform experiments on synthetic data, and on the COCO dataset with captions as weak labels. On the synthetic data, we demonstrate that the SI method is indeed tolerant to the lack of mixing, and that the use of a one-hidden-layer neural network significantly improves the results in comparison to a linear classifier, even when the ground truth data is linearly separable. We also employ this example to show that one possible alternative to the SI method, based on noisy label methods (see the discussion in Sections 2 and 3.2), is strongly sensitive to the lack of mixing. In the COCO experiment, we reproduce the MIL setting of [6], with 1000 tokens from captions as bag labels, and compare the SI objective with the noisy-OR objective used in [6]. Since in this setting one cannot measure instance-level performance, due to the lack of ground truth, we measure bag-level performance. Our results show that both objectives have very similar performance, although the SI results are slightly lower. Possible reasons for this are discussed.
To summarize, the contributions of this paper are as follows: We provide a large-sample analysis of the SI method, and show (a) The optimal instance classifier can be obtained from the optimal SI assignment classifier simply by thresholding at an appropriate level. (b) The balance of the data plays an important role. The more unbalanced the data is (larger $r$), the more tolerant the algorithm becomes to data dependencies inside bags. To the best of our knowledge, these results are new, and in particular the important role of the balance has not been previously noted. Next, we provide a link between the performance of the SI method and the richness of the classifier class, and show that the SI method is particularly well suited to work with neural network classifiers. Finally, we show that an SI method achieves performance comparable to the state-of-the-art on image and caption data, and in addition demonstrate the tolerance of the SI method to dependencies in bags, to support our theoretical results.
The rest of this paper is organized as follows: In Section 2 we review the related literature. Section 3 contains the main results. In Section 4 the experiments on synthetic data, comparison to a noisy label classifier, and the experiment on COCO data are presented. We conclude the paper in Section 5 with a discussion of possible future research directions.
2 Literature
As discussed in the Introduction, the field of MIL has generated a large amount of interest and is still growing. General surveys can be found in [2] and in the very recent [5]. Examples of some recent work related to, or using, MIL methods may be found in [7], [16], [10], [9], [6], and [8].
We now discuss specifically the SI-related literature. SI methods were empirically evaluated and compared to other methods in [13], and more recently in [1]. In [13], it was found that in many cases the SI methods yield the most competitive results. It is of interest to note that the evaluation in [13] is done with linear classifiers. As discussed in the Introduction and shown in Section 4.1, the results would likely have improved even more if one were to add even a slight nonlinearity.
In [1], it was found that MIL-specific objectives perform better than SI methods in cases with intra-bag dependency in the data. Here, it is important to distinguish between two situations. First, in part of the experiments in [1], the label is not assigned to the bag by the rule which we discuss here: the bag is positive if and only if it contains at least one positive instance. These experiments simply investigate a different scenario. Second, in the experiments where the bag label is assigned as above, only linear classifiers are evaluated. Again, one of the insights of this paper is that once we allow nonlinearity, the results improve significantly.
In [4] it is argued that in some scenarios involving sparse bags, SI methods may not perform well, and alternative methods are proposed. Note that the example of images with caption labels does qualify as a sparse-bag situation. For instance, if “frisbee” is the label, an image may contain hundreds of patches, but only a few of them would contain the frisbee. Nevertheless, in this paper we show that, at least in the unbalanced data situation, bag sparsity is not an issue. All our experiments are with sparse bags, and in Section 3 the parameter $s$, which controls sparsity, may be either small or large.
One possible approach to the MIL problem is to consider the SI label assignment as a noisy label problem. The idea is that the assignment of a positive label to a negative object in a positive bag may be considered as label noise. One may then apply noisy label learning methods to recover the original labels. A variation of this approach was analyzed in terms of sample complexity in [3], although that result does not lend itself to a practical algorithm. Another possibility is to use the noisy labels approach of [12]. In [12], given the noisy label data, a new cost is constructed, such that the minimization of the new cost with respect to the noisy labels yields a classifier that is optimal with respect to the original labels. Unfortunately, both the arguments and the actual methods in both [3] and [12] rely heavily on intra-bag independence. We refer to Section 3.2 for additional details. In Section 4.1, we show how the method based on the cost from [12] fails, while the SI method does not, when the independence assumptions are violated.
3 SI Analysis
In this section we present a theoretical analysis of the SI method. Definitions are given in the following subsection. In Section 3.2 we discuss the mixing assumption and intra-bag dependence. The theorems and their interpretations are presented in Section 3.3.
3.1 Definitions
In this section we analyze the SI algorithm for the MIL problem. The loss function for multiple classes is obtained by summing the losses of the individual classes, and we therefore discuss here the case of a single class with a binary label.
The MIL dataset $\mathcal{B}$ is given as a set of bags $B_1,\dots,B_n$, where each bag $B_i$ contains objects $x_{i1},\dots,x_{im}$, and for every bag there is a $\{0,1\}$-valued label $y_i$. We assume that the labels belong to objects – each object $x_{ij}$ in $B_i$ has a label $y_{ij}$ – and we make the classical MIL assumption that $y_i = 1$ if and only if $y_{ij} = 1$ for some $x_{ij}$ in $B_i$. Our objective is to learn an instance-level classifier $f$, mapping a single object to $[0,1]$, by employing only the bag-level training data $\{(B_i, y_i)\}_{i \le n}$.
Denote by $P$ the set of positive bags, and by $N$ the set of negative bags, with $n_+ = |P|$ and $n_- = |N|$. We assume that each positive bag contains $s$ positive samples, where $s$ may be small compared to the size of the bag $m$.
The balance of the dataset will be denoted by $r$,
(1) $r = n_- / n_+$.
The balance is the ratio of negative to positive bags. For instance, in the COCO dataset, the label “car” has a much smaller $r$ than the rarer label “fish”. Note that here we refer to the image-level labels extracted from the captions, not the hard labels of the dataset, although the balance numbers there are in general similar to those of the corresponding labels in captions.
The unpacked dataset $X$ is the collection of all objects from all the bags in $\mathcal{B}$. Denote by $X_p^+$ the collection of all positive objects in $X$,
(2) $X_p^+ = \{ x_{ij} \,:\, B_i \in P,\; y_{ij} = 1 \}$,
and similarly set
(3) $X_p^- = \{ x_{ij} \,:\, B_i \in P,\; y_{ij} = 0 \} \quad \text{and} \quad X_n^- = \{ x_{ij} \,:\, B_i \in N \}$.
In words, $X_p^+$ is the collection of positive objects from positive bags, $X_p^-$ are the negative objects from positive bags, and $X_n^-$ are all the (negative) objects from all the negative bags.
Denote $X = X_p^+ \cup X_p^- \cup X_n^-$. Then we have
(4) $|X_p^+| = s\, n_+, \qquad |X_p^-| = (m-s)\, n_+, \qquad |X_n^-| = m\, n_-$.
The ground truth label assignment $\bar{y}$ gives label $1$ to objects in $X_p^+$ and label $0$ to objects in $X_p^-$ and $X_n^-$. The assignment that is available to us is the SI assignment $\tilde{y}$, which gives label $1$ to objects in $X_p^+$ and $X_p^-$, and label $0$ to objects in $X_n^-$.
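The set sizes in (4), the balance (1), and the number of objects mislabeled by the SI assignment can be checked with a toy numeric example; all counts below are hypothetical.

```python
# Numeric check of the unpacked-set sizes for hypothetical values of
# n_plus (positive bags), n_minus (negative bags), bag size m, and s
# positives per positive bag.
n_plus, n_minus, m, s = 10, 1000, 100, 3

X_p_pos = s * n_plus         # |X_p^+|: positive objects in positive bags
X_p_neg = (m - s) * n_plus   # |X_p^-|: negative objects in positive bags
X_n_neg = m * n_minus        # |X_n^-|: all objects in negative bags

r = n_minus / n_plus         # the balance r = n_- / n_+
mislabeled = X_p_neg         # SI gives label 1 to these negative objects
total = X_p_pos + X_p_neg + X_n_neg
```

Note that even though the SI assignment mislabels all of `X_p_neg`, in the unbalanced regime these objects are a small fraction of all negatives, which is the effect the analysis below quantifies.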
3.2 Feature Dependence in Bags
As has been previously noted in the literature, the statistical distribution of features inside positive and negative bags can have a significant impact on the performance of MIL algorithms. Empirical observations on datasets with different kinds of distributions may be found in [13]. See also [14] for connections of the MIL problem to NP-hardness in cases where no restrictions on the distributions are imposed.
Here we first discuss two extreme cases, that of complete dependence and that of independence. Then we discuss realistic cases in between, and the relation of the dependence to the data balance and the SI objective. Specifically, in what follows we will be interested in the relation between the distributions of objects in $X_p^-$ and $X_n^-$, the negative features in positive and negative bags.
To describe an example of complete dependence, consider a hypothetical COCO-type dataset, with labels “car” and “tree” given at the image level, where each image is a bag of patches. We are interested in an instance-level classifier for “car”. However, suppose that the dataset is such that “car” and “tree” either both appear in an image or neither does. In that situation, without additional assumptions, it is clear that any MIL classifier, with any objective, will have to classify any instance of “tree” as “car”. This is simply because “tree” and “car” are indistinguishable from the label information.
On the other hand, one may consider a situation where the features in $X_p^-$ and $X_n^-$ are generated by the same distribution. We formulate this as the mixing assumption, for future reference.
Assumption 3.1 (Mixing).
Objects in $X_p^-$ are generated from the same distribution as those in $X_n^-$.
To understand this assumption, consider the label “car” in a more realistic dataset. The patches with cars will belong to $X_p^+$. Patches with “trees”, however, will belong to $X_p^-$ or to $X_n^-$, depending on whether they were extracted from an image containing a car or not. The mixing assumption essentially states that the probability of observing a tree in an image is independent of whether there is a car in the image, and also that it is impossible to distinguish between the types of trees that appear in car images and in images without cars. In short, the types of patches one expects in an image are independent of whether a car is present in the image or not.
When the mixing assumption holds, it is generally known that an SI assignment translates the MIL problem into a noisy label problem. One thinks of the label $1$ on objects from $X_p^-$ as noise. Then, classification-with-noise methods, such as [12], may be applied. See [3] for a variation of this approach (under some additional assumptions on bag composition). As we discuss further in Section 4.1, noisy label estimators depend heavily on the mixing assumption. On the other hand, in this paper we show that if the data is unbalanced, then the straightforward supervised learning classifier obtained from the SI assignment is extremely robust against violations of the mixing assumption, which is indeed violated in real datasets.
Indeed, consider finally the real COCO dataset. While the concepts of “car” and “tree” may be independent, it is clear and easy to verify that the appearance of “car” in an image is strongly (but not completely) positively correlated with the concept “traffic light”, and strongly negatively correlated with the concept “bear”.
3.3 Results
We assume that we work with classifiers that take values in the interval $(0,1)$, for instance classifiers of the form $f(x) = \sigma(g(x))$, where $g$ is a logit of a neural network and $\sigma$ is the sigmoid.
Theorem 3.2.
Assume there is a classifier $\bar f$ which fits the ground truth assignment perfectly: $\bar f(x) = 1$ for $x \in X_p^+$, and $\bar f(x) = 0$ for $x \in X_p^-$ and $x \in X_n^-$. If the mixing assumption 3.1 holds, then the SI loss objective
(5) $L(f) = -\sum_{x \in X_p^+} \log f(x) \;-\; \sum_{x \in X_p^-} \log f(x) \;-\; \sum_{x \in X_n^-} \log(1 - f(x))$
is minimized by $f^*$ such that
(6) $f^*(x) = 1$ for $x \in X_p^+$, and $f^*(x) = c$ otherwise, where $c = \frac{(m-s)\, n_+}{(m-s)\, n_+ + m\, n_-}$.
The loss (5) corresponds to supervised learning with the SI label assignment. With our definitions, we have
(7) $c = \frac{1}{1 + \tau}, \qquad \text{where } \tau = \frac{m\, n_-}{(m-s)\, n_+} = r \cdot \frac{m}{m-s} \ge r$.
Therefore, by learning the SI objective and thresholding the result at a value higher than $c$, we obtain a perfect classification with respect to the ground truth. In particular, $f^*$ has the same precision-recall curve as $\bar f$. Thus, if the rest of the assumptions hold, and the family of classifiers is rich enough to contain classifiers of the form $f^*$, we can obtain instance-level classification from bag-level labels and an SI assignment. Note that $f^*$ is simply a linear modification of $\bar f$,
(8) $f^*(x) = c + (1 - c)\, \bar f(x)$,
and we assume $f^*$ is in the family. On the other hand, note also that if $\bar f$ is, say, a logistic regression, then $f^*$ is no longer exactly realizable by a logistic regression. See also Section 4.1 for an additional discussion and an example where the richness of the class plays a role. We now prove Theorem 3.2.
Proof.
First, to minimize (5), it is clear that one has to set $f(x) = 1$ for $x \in X_p^+$. Our objective is therefore to show that the two other terms of (5) are minimized by a constant value $f(x) = c$.
Let $x$ be sampled from the common distribution of objects in $X_n^-$. Then $f(x)$ is a scalar random variable, with some distribution $\nu$. By the mixing assumption, $f(x)$ will have the same distribution $\nu$ when $x$ is sampled from $X_p^-$. We can therefore rewrite the last two terms of (5) as
(9) $\sum_{x \in X_p^-} \log f(x) + \sum_{x \in X_n^-} \log(1 - f(x)) \;\simeq\; (m-s)\, n_+ \,\mathbb{E}_{v \sim \nu}\!\left[\log v\right] + m\, n_- \,\mathbb{E}_{v \sim \nu}\!\left[\log(1 - v)\right]$,
where we have also removed the minus sign, and we seek to maximize (9) over all possible distributions $\nu$. We have used the identity of the distributions of $f(x)$ on $X_p^-$ and $X_n^-$ in the passage from the first to the second expression. In this passage we have also assumed that sample averages may be replaced by the respective expectations, that is, that we work in the large-sample limit. A more detailed discussion of this assumption may be found in Section 5.
Next, one readily verifies that the function
(10) $g(v) = a \log v + b \log(1 - v)$
with $a, b > 0$ is maximized at $v = \frac{a}{a+b}$. This can be seen either directly by taking the derivative, or as a consequence of the non-negativity of the Kullback-Leibler divergence between the distributions on two points given by $(v, 1-v)$ and $(\frac{a}{a+b}, \frac{b}{a+b})$. With $a = (m-s)\, n_+$ and $b = m\, n_-$, it therefore follows that (9) is maximized when $\nu$ is an atomic distribution taking the value $\frac{a}{a+b} = c$ with probability 1. ∎
We now analyze the case where the mixing assumption does not hold.
Theorem 3.3.
Denote by $\mu_p$ and $\mu_n$ the distributions from which objects in $X_p^-$ and $X_n^-$ are generated, respectively. Assume there is a classifier $\bar f$ which fits the ground truth assignment perfectly: $\bar f(x) = 1$ for $x \in X_p^+$, and $\bar f(x) = 0$ for $x \in X_p^-$ and $x \in X_n^-$.
Then the SI loss objective
(11) $L(f) = -\sum_{x \in X_p^+} \log f(x) \;-\; \sum_{x \in X_p^-} \log f(x) \;-\; \sum_{x \in X_n^-} \log(1 - f(x))$
is minimized by $f^*$ such that
(12) $f^*(x) = 1$ for $x \in X_p^+$, and $f^*(x) = \frac{(m-s)\, n_+ \,\mu_p(x)}{(m-s)\, n_+ \,\mu_p(x) + m\, n_- \,\mu_n(x)}$ otherwise.
Proof.
As in the proof of Theorem 3.2, the existence of a separating classifier implies that $X_p^+$ and $X_p^- \cup X_n^-$ are disjoint, and similarly we set $f(x) = 1$ for $x \in X_p^+$. We now consider the last two terms of the cost (11). In the large-sample limit, rewrite these terms as
(13) $\sum_{x \in X_p^-} \log f(x) \;\simeq\; (m-s)\, n_+ \,\mathbb{E}_{x \sim \mu_p}\!\left[\log f(x)\right]$
and
(14) $\sum_{x \in X_n^-} \log(1 - f(x)) \;\simeq\; m\, n_- \,\mathbb{E}_{x \sim \mu_n}\!\left[\log(1 - f(x))\right]$.
Define the mixture $\mu$ by
(15) $\mu = \tfrac{1}{2}\left(\mu_p + \mu_n\right)$.
Since both $\mu_p$ and $\mu_n$ are absolutely continuous with respect to $\mu$, we can change the measure to write
(16) $\mathbb{E}_{x \sim \mu_p}\!\left[\log f(x)\right] = \mathbb{E}_{x \sim \mu}\!\left[\tfrac{d\mu_p}{d\mu}(x) \log f(x)\right]$
and
(17) $\mathbb{E}_{x \sim \mu_n}\!\left[\log(1 - f(x))\right] = \mathbb{E}_{x \sim \mu}\!\left[\tfrac{d\mu_n}{d\mu}(x) \log(1 - f(x))\right]$.
Thus, the sum of (13) and (14) equals
(18) $\mathbb{E}_{x \sim \mu}\!\left[(m-s)\, n_+ \tfrac{d\mu_p}{d\mu}(x) \log f(x) + m\, n_- \tfrac{d\mu_n}{d\mu}(x) \log(1 - f(x))\right]$.
It remains to observe that, similarly to the argument in Theorem 3.2, for each fixed $x$, the expression
(19) $a(x) \log f + b(x) \log(1 - f), \qquad a(x) = (m-s)\, n_+ \tfrac{d\mu_p}{d\mu}(x), \quad b(x) = m\, n_- \tfrac{d\mu_n}{d\mu}(x)$,
is maximized over $f \in (0,1)$ at $f = f^*(x)$ iff
(20) $f^*(x) = \frac{a(x)}{a(x) + b(x)} = \frac{(m-s)\, n_+ \,\mu_p(x)}{(m-s)\, n_+ \,\mu_p(x) + m\, n_- \,\mu_n(x)}$,
which concludes the proof. ∎
Note that Theorem 3.3 is a proper generalization of Theorem 3.2. However, we chose to present them separately for illustrative purposes.
As discussed earlier, Theorem 3.3 reveals the real power of the SI method in the unbalanced case. Consider the expression for $f^*(x)$ in (12), for $x \notin X_p^+$:
(21) $f^*(x) = \frac{(m-s)\, n_+ \,\mu_p(x)}{(m-s)\, n_+ \,\mu_p(x) + m\, n_- \,\mu_n(x)}$
(22) $\phantom{f^*(x)} = \frac{1}{1 + \tau \cdot \frac{\mu_n(x)}{\mu_p(x)}}$,
(23) $\text{with } \tau = \frac{m\, n_-}{(m-s)\, n_+} = r \cdot \frac{m}{m-s} \ge r$.
In terms of Theorem 3.3, the mixing assumption 3.1 is equivalent to the assertion $\mu_p(x) = \mu_n(x)$ for all $x$. In this case, the term
(24) $f^*(x) = \frac{1}{1 + \tau \cdot \frac{\mu_n(x)}{\mu_p(x)}}$
reduces to
(25) $f^*(x) = \frac{1}{1 + \tau} = c$.
As discussed previously, this allows us to place a decision threshold above $c$ and make a perfect classification. Next, if $\mu_p(x) \le \mu_n(x)$, then
(26) $f^*(x) \le \frac{1}{1 + \tau} = c$,
and therefore the same decision threshold will still result in a correct classification. These therefore are the easier cases. Consider now what happens when $\mu_p(x) > \mu_n(x)$. The extreme case discussed above, of “tree” appearing in the image if and only if “car” appears, would correspond to $\mu_n(x) = 0$ and $\mu_p(x) > 0$ for features $x$ corresponding to “tree”. Therefore one asks how much larger $\mu_p(x)$ can be compared to $\mu_n(x)$. Suppose that we wish to place the decision threshold at $\tfrac{1}{2}$. Then $f^*(x) < \tfrac{1}{2}$ iff
(27) $\frac{\mu_p(x)}{\mu_n(x)} < \tau$.
Therefore, the frequency of $x$ in $X_p^-$ can be up to $\tau \ge r$ times larger than that in $X_n^-$ and still obtain the right classification. In other words, the lack of balance in the data provides a large margin in which the mixing assumption may not hold. The larger the imbalance is, the larger the dependence in features the SI method can tolerate. Therefore, in a typically unbalanced MIL dataset, SI is a robust classification method.
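This tolerance margin can be illustrated numerically. The sketch below uses $\tau = m n_- / ((m-s) n_+)$ as in our analysis; the dataset sizes are hypothetical.

```python
# Sketch of the tolerance margin: the large-sample SI optimizer on a
# negative object x is f*(x) = 1 / (1 + tau * mu_n(x)/mu_p(x)), so at a
# decision threshold of 1/2, x is classified negative as long as the
# density ratio mu_p(x)/mu_n(x) stays below tau. Values are hypothetical.
n_plus, n_minus, m, s = 10, 1000, 100, 3
r = n_minus / n_plus
tau = m * n_minus / ((m - s) * n_plus)   # tau = r * m / (m - s) >= r

def f_star(density_ratio):
    """SI optimizer on a negative object with ratio mu_p(x)/mu_n(x)."""
    return 1.0 / (1.0 + tau / density_ratio)

ok_below = f_star(0.5 * tau) < 0.5   # ratio below tau: classified negative
ok_above = f_star(2.0 * tau) < 0.5   # ratio above tau: misclassified
```

So with a balance of $r = 100$, a negative feature can appear roughly a hundred times more frequently in positive bags than in negative bags before the SI method misclassifies it.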
To conclude this section, let us make a few notes regarding the separability assumption – the assumption in Theorems 3.2 and 3.3 that there is a classifier $\bar f$ which separates $X_p^+$ and $X_p^- \cup X_n^-$ perfectly. One could consider a more general case where the optimal, in terms of the cross-entropy cost, classifier $\bar f$ of the unpacked dataset has a precision-recall curve that is not identically one (and hence has an average precision score smaller than $1$). This could happen, for instance, if the features are not strong enough to completely separate the classes. If the mixing assumption holds, arguments similar to those of Theorem 3.2 imply that the $f^*$ learned from the SI assignment would still have the form (8), and, since this form is monotone in $\bar f$, would have the same precision-recall curve as $\bar f$. When the mixing assumption does not hold, instead of considering the ratio of densities between the positive and negative classes, one would have to consider the ratios at all level sets of $\bar f$. While this would significantly complicate the notation, conclusions similar to those of Theorem 3.3 would still hold.
4 Experiments
4.1 Nonlinearity and the Noisy Label Estimator
In this section we evaluate the SI method on data where the mixing assumption does not hold. We demonstrate the utility of adding a nonlinearity. In addition, as described in Section 3.2, we evaluate the noisy label cost from [12], referred to as UC, and show that it does not perform well when the mixing assumption does not hold.
We work with the unpacked dataset (as defined in Section 3.1); the sets $X_p^+$, $X_p^-$, and $X_n^-$ are shown in Figure 1(a). The set $X_p^+$ is located on a line, with the first coordinate uniformly distributed in an interval. The negative objects are split between two parallel lines, one above and one below. In order to break the mixing assumption, the points of $X_p^-$ are distributed unevenly between the two lines, with the complementary proportions used for $X_n^-$; the mixing assumption would correspond to an even split. Clearly, this dataset is linearly separable by a horizontal line. The noisy data is illustrated in Figure 1(b). Note that for clarity only a small fraction of the points appear on the plots.
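A rough, self-contained sketch of a synthetic construction in this spirit is given below: positives on one line, negatives split unevenly between two other lines to break mixing. All numeric constants (offsets, proportions, sample counts) are illustrative placeholders, not the exact values used in the experiment.

```python
import random

# Sketch of a synthetic mixing-breaking dataset. Positives lie on one
# line; negatives are split between two parallel lines, with an uneven
# (70/30-style) split for X_p^- mirrored in X_n^-. All constants here
# are illustrative, not the paper's exact values.
random.seed(0)

def line_points(n, y_offset, x_lo=0.0, x_hi=1.0):
    return [(random.uniform(x_lo, x_hi), y_offset) for _ in range(n)]

pos = line_points(100, 2.0)                            # X_p^+
neg_p = line_points(70, 1.0) + line_points(30, -1.0)   # X_p^-: uneven split
neg_n = line_points(30, 1.0) + line_points(70, -1.0)   # X_n^-: mirrored

# SI assignment: every object from a positive bag gets label 1.
data = [(p, 1) for p in pos + neg_p] + [(p, 0) for p in neg_n]
```

The key property is that the conditional distribution over the two negative lines differs between positive and negative bags, which is exactly the violation of Assumption 3.1 that the experiment probes.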
For both SI and UC we trained two models on the data with the SI assignment labels (Figure 1(b)). The first model is a one-layer neural network, i.e., a linear model. The second model is a two-layer neural network, with two neurons in the hidden layer and sigmoid activations. We trained each model with the ADAM optimizer, with the batch size equal to the dataset size (running more epochs did not change the conclusions). We trained with a constant learning rate and chose the classifier achieving the lowest training loss. The average precision scores (computed with the average_precision_score function from sklearn.metrics) of the resulting classifiers with respect to the true labels (Figure 1(b)) are shown in Table 1.
In Figure 2 the prediction score (the output sigmoid of the model) is shown as a heatmap for each case. We first note that although the UC cost is theoretically guaranteed to find the correct classifier when mixing holds, here it fails with both architectures. For the SI cost, observe that the linear classifier only poorly approximates the optimal SI classifier $f^*$ of (12). However, the two-layer model approximates $f^*$ much better (Figure 2, bottom left), and thresholding it at an appropriate level separates ground truth positives from negatives perfectly, thereby yielding an AP score of 1.
Table 1:
Method               One Layer   Two Layers
SI                   0.21        1
Unbiased Estimator   0.30        0.23
4.2 COCO
As described in the Introduction, we consider the problem of object classification from captions data on the COCO 2014 dataset [11]. This problem can be naturally interpreted as an MIL problem.
We adopt the experimental setting of [6], and compare the performance of the SI classifier to the performance of the MIL objective used in [6].
In this setting, each image is rescaled to a fixed size and divided into overlapping patches on a regular grid; each image is therefore a bag of patch objects. Each image is fed into a VGG16 network [15], and the output of an intermediate layer yields a feature representation for each patch. Next, a convolutional layer is applied to the patch features to produce per-patch classifier scores for the labels. We refer to [6] for full architectural details. The image labels are derived from captions. No preprocessing was done on the captions, except a conversion to lower case. The vocabulary of labels consists of the most frequent tokens appearing in captions. Note that a certain fraction of these tokens are stopwords. However, to allow a direct comparison to the code of [6], we chose to maintain the same vocabulary, and to measure the performance on all the labels, and separately on a selected subset of labels, as discussed below.
For a fixed label $w$, let $p^w(x)$ be the sigmoid output of the classifier corresponding to $w$. For the patches $x_1,\dots,x_k$ of an image $I$, the MIL objective used in [6] corresponding to the image is the noisy-OR probability
(28) $p^w(I) = 1 - \prod_{j} \left(1 - p^w(x_j)\right)$,
and the total cost term corresponding to $I$ is obtained by summing the cost over all labels,
(29) $C(I) = \sum_{w} CE\!\left(y^w(I),\, p^w(I)\right)$,
where $y^w(I)$ is the indicator of the label $w$ in $I$ and $CE$ is the cross-entropy cost. The SI objective for the image is given by
(30) $C_{SI}(I) = \sum_{w} \sum_{j} CE\!\left(y^w(I),\, p^w(x_j)\right)$.
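Both per-image objectives can be sketched for a single label; here `probs` stands for the per-patch sigmoid outputs and `y` for the 0/1 image-level label, and the noisy-OR form of the MIL cost reflects our reading of [6] rather than its exact implementation.

```python
import math

# Sketch of the two per-image, per-label costs compared in this section.
# probs: per-patch sigmoid outputs p^w(x_j); y: 0/1 image-level label.

def mil_cost(probs, y, eps=1e-12):
    """Noisy-OR bag probability plugged into the cross-entropy cost."""
    p_bag = 1.0 - math.prod(1.0 - p for p in probs)
    p_bag = min(max(p_bag, eps), 1.0 - eps)  # clamp for numerical safety
    return -(y * math.log(p_bag) + (1 - y) * math.log(1.0 - p_bag))

def si_cost(probs, y, eps=1e-12):
    """SI objective: every patch is trained against the image-level label."""
    total = 0.0
    for p in probs:
        p = min(max(p, eps), 1.0 - eps)
        total += -(y * math.log(p) + (1 - y) * math.log(1.0 - p))
    return total
```

Note the structural difference that drives the scalability discussion in Section 5: `mil_cost` couples all patches of a bag through the product, while `si_cost` decomposes into independent per-patch terms.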
We have evaluated the performance at the bag level. Specifically, for a label $w$ and an image $I$, given the patch scores $p^w(x_j)$ we construct a bag-level score via
(31) $s^w(I) = \max_j\, p^w(x_j)$.
Then we evaluate the Average Precision of the scores $s^w(I)$ against the labels $y^w(I)$ on the COCO eval set. The mean Average Precision (mAP) is the mean over all labels $w$. In addition, as discussed above, since some labels are stopwords, and some labels appear very few times in the dataset, we also measure the mAP on a smaller subset of “strong labels”. These are the labels whose token appears as one of the object categories of COCO, since these tend to be better represented in the dataset. For instance, “car” is a strong label, while “water” is not. The matching between caption labels and categories was done via text matching. Since some categories are described by two words (e.g., “traffic light”), they were not included. This process generated the subset of strong labels. It is important to note that the object categories were only used to select the subset of labels. Training and evaluation of all models were performed solely using the images and caption data.
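The bag-level evaluation can be sketched as follows; the max-pooled bag score and the tiny average-precision routine are illustrative stand-ins (the actual evaluation uses sklearn's average_precision_score, as in Section 4.1).

```python
# Sketch of the bag-level evaluation for one label: the bag score is the
# max over patch scores, and AP is computed against the image-level labels.
# A minimal average-precision routine is included for self-containment.

def bag_score(patch_probs):
    return max(patch_probs)

def average_precision(scores, labels):
    """Mean of the precision values at each positive, by score rank."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    hits, precisions = 0, []
    for rank, i in enumerate(order, start=1):
        if labels[i] == 1:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / max(len(precisions), 1)

# Three hypothetical images, each a list of per-patch scores for one label.
scores = [bag_score(p) for p in [[0.9, 0.2], [0.1, 0.3], [0.4, 0.8]]]
ap = average_precision(scores, [1, 0, 1])
```

The mAP then averages this per-label AP over the vocabulary (or over the strong-label subset).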
To obtain the results for the MIL objective (28) we have used the code from [6], which is available online. These results were reproduced in our own code, implemented in Tensorflow. To obtain the results for the SI objective, we replaced the cost with the SI cost (30) in our implementation. The models were trained until both of them converged. The results are given in Table 2. One can see that the results are close, although the MIL (28) results are slightly higher. We believe that the difference is due to the hyperparameters rather than to intrinsic properties of the costs involved. We have not attempted any hyperparameter tuning, due to the high computational cost of this operation. Instead, we have used the given, heavily tuned hyperparameters of the [6] code (hardcoded bias term initializations, SGD learning rate type and decay, hardcoded varying training rates for different layers). These hyperparameters were designed for the original objective, but are not necessarily optimal for SI.
5 Conclusions and Future Work
We have shown that SI learning is an effective classification method for MIL data if the problem has the following characteristics: (a) The bag labels are derived from objects, in the sense that a bag is positive if and only if it contains a positive object. (b) The data is unbalanced – there are more negative bags than positive ones. This allows the classification to be tolerant to a significant amount of dependence in the bags. (c) The class of classifiers is rich enough to contain not only the reference ground truth classifier, but also the classifiers described by Theorems 3.2 and 3.3.
We now describe two possible directions for future work. From the theoretical perspective, our results are large-sample limit results. In particular, we have assumed that we may replace sample averages by the respective expectations, as was done in (9) and (13), (14). While these computations allow us to understand the essential features of the problem, it is still an intriguing question what can be said at the finite-sample level. Classically, such questions may be answered within the framework of bounded-complexity classifier classes, via notions such as the Rademacher complexity. However, these notions are well known not to be an adequate measure of complexity for neural networks, and therefore one must find a different approach.
From the practical perspective, the most appealing feature of the SI method is the ability, in principle, to deal with arbitrarily large bags. As discussed earlier, typical MIL objectives compute a score, such as (28), which depends on all objects in the bag. Therefore, one either has to hold all objects in memory at once, or to design a cumbersome architecture to compute such a score sequentially. The SI approach, on the other hand, does not have this problem. Note that large bags may appear naturally in applications. Consider for example the situation where a news article is treated as a bag containing several images. Even for a relatively modest number of images, keeping several copies of a modern visual CNN in memory is already prohibitive. We hope that the considerations in this paper shed light on the usefulness of the SI method and thereby open the door for such applications.
References
 [1] E. Alpaydin, V. Cheplygina, M. Loog, and D. M. Tax. Single vs. multiple-instance classification. Pattern Recogn., 48, 2015.
 [2] J. Amores. Multiple instance classification: Review, taxonomy and comparative study. Artificial Intelligence, 201, 2013.
 [3] A. Blum and A. Kalai. A note on learning from multipleinstance examples. Mach. Learn., 30, 1998.
 [4] R. C. Bunescu and R. J. Mooney. Multiple instance learning for sparse positive bags. In Proceedings of the 24th International Conference on Machine Learning, pages 105–112. ACM, 2007.
 [5] M.-A. Carbonneau, V. Cheplygina, E. Granger, and G. Gagnon. Multiple instance learning: A survey of problem characteristics and applications. Pattern Recognition, 77, 2018.
 [6] H. Fang, S. Gupta, F. Iandola, R. K. Srivastava, L. Deng, P. Dollar, J. Gao, X. He, M. Mitchell, J. C. Platt, C. Lawrence Zitnick, and G. Zweig. From captions to visual concepts and back. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015.
 [7] J. Hoffman, D. Pathak, T. Darrell, and K. Saenko. Detector discovery in the wild: Joint multiple instance and representation learning. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015.
 [8] M. Ilse, J. M. Tomczak, and M. Welling. Attentionbased deep multiple instance learning. In ICML, 2018.
 [9] A. Karpathy and F. Li. Deep visualsemantic alignments for generating image descriptions. In CVPR, 2015.
 [10] W. Li and N. Vasconcelos. Multiple instance learning for soft bags via top instances. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015.
 [11] T.Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollar, and C. L. Zitnick. Microsoft coco: Common objects in context. In ECCV, 2014.
 [12] N. Natarajan, I. S. Dhillon, P. K. Ravikumar, and A. Tewari. Learning with noisy labels. In Advances in Neural Information Processing Systems 26. 2013.
 [13] S. Ray and M. Craven. Supervised versus multiple instance learning: An empirical comparison. In Proceedings of the 22Nd International Conference on Machine Learning, ICML ’05, 2005.
 [14] S. Sabato and N. Tishby. Multi-instance learning with any hypothesis class. J. Mach. Learn. Res., 2012.
 [15] K. Simonyan and A. Zisserman. Very deep convolutional networks for largescale image recognition, 2014.
 [16] J. Wu, Y. Yu, C. Huang, and K. Yu. Deep multiple instance learning for image classification and autoannotation. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015.