Multiple Instance Dictionary Learning using Functions of Multiple Instances

11/09/2015 ∙ by Changzhe Jiao, et al. ∙ University of Missouri

A multiple instance dictionary learning method using functions of multiple instances (DL-FUMI) is proposed to address target detection and two-class classification problems with inaccurate training labels. Given inaccurate training labels, DL-FUMI learns a set of target dictionary atoms that describe the most distinctive and representative features of the true positive class, as well as a set of nontarget dictionary atoms that account for the shared information found in both the positive and negative instances. Experimental results show that the target dictionary atoms estimated by DL-FUMI are more representative prototypes and identify better discriminative features of the true positive class than those found by existing methods in the literature. DL-FUMI is also shown to perform significantly better than other multiple instance learning (MIL) dictionary learning algorithms on a variety of target detection and classification problems.


I Introduction

Obtaining accurate training label information is often time consuming, expensive, and/or infeasible for large data sets. Furthermore, annotators may be inconsistent during labeling, providing inherently imprecise labels. Thus, in many applications, one has access only to inaccurately or weakly labeled training data.

Sparse coding and dictionary learning methods, whose low-rank data representations generally reduce redundancy and improve discrimination ability, have been successfully applied to many applications [1]. DL-FUMI leverages the benefits of discriminative dictionary learning for target detection applications given only inaccurate labels. This is accomplished through the use of a novel model that assumes each target data point is a mixture of both target and background atoms whereas non-target data points are composed of only background atoms. In other words, unlike the majority of discriminative dictionary learning methods, DL-FUMI does not learn a separate dictionary for each class. Instead, DL-FUMI introduces a shared background dictionary that is used in reconstruction of both target and non-target points. The advantage of this model over class-specific dictionaries is that the target atoms only need to account for the unique characteristics of target (and do not need to address any shared background variability) resulting in more discriminative, representative target atoms.

Furthermore, the target atoms estimated by DL-FUMI can be examined to uncover what discriminates target data points. Since most approaches estimate class-specific dictionaries, each dictionary must characterize both the class-specific characteristics and the characteristics shared among all data points. Thus, in these methods, it is often difficult to pin down what is unique about each class without prior insight since the class-specific features are mixed with background features and spread across the atoms. In contrast, DL-FUMI provides that insight by pulling out the unique target characteristics and identifying which atoms contain those characteristics. In summary, DL-FUMI advances discriminative dictionary learning by (1) addressing multiple instance learning problems and (2) using a shared background model resulting in improved target characterization and discrimination.

I-A Multiple-Instance Learning (MIL)

MIL [2] is a variation on supervised learning for problems with inaccurate label information. In particular, training data is segmented into positive and negative bags. A bag is defined to be a multi-set of data points. In the case of target detection, the MIL problem requires that a positive bag contains at least one instance from the target class and that negative bags are composed entirely of non-target data. Given training data of this form, the overall goal can be to predict either unknown instance-level or bag-level labels on test data. MIL methods are effective for problems where accurately labeled training data is unavailable.
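As a concrete illustration, the bag structure described above can be sketched with a toy construction (the data and bag sizes below are hypothetical, not the paper's experimental setup):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_bags(target, background, n_bags=4, bag_size=5, noise=0.1):
    """Build MIL bags: each positive bag hides at least one target instance
    among background instances; negative bags contain background only."""
    bags, labels = [], []
    for j in range(n_bags):
        is_positive = (j % 2 == 0)
        bag = [background + noise * rng.standard_normal(background.shape)
               for _ in range(bag_size)]
        if is_positive:
            bag[0] = target + noise * rng.standard_normal(target.shape)
        bags.append(np.stack(bag))
        labels.append(int(is_positive))  # only bag-level labels are observed
    return bags, labels

target = np.array([1.0, 0.0, 0.0])
background = np.array([0.0, 1.0, 1.0])
bags, labels = make_bags(target, background)
```

Only `labels` (the bag-level labels) would be available to a MIL learner; which instance inside a positive bag is actually the target remains latent.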

Most MIL approaches focus on learning a classification decision boundary to distinguish between positive and negative instances/bags [3, 4]. Although these decision-boundary approaches are effective at training classifiers given inaccurate labels, they do not provide an intuitive description or representative concept that characterizes the salient and discriminative features of the target class. The approaches that do estimate target representatives [5, 6, 7] often find only a single target concept and are, thus, unable to account for large variation in the target class. To address this, DL-FUMI learns a set of target atoms (and background atoms) to characterize target variation.


I-B Supervised Dictionary Learning

Sparse coding refers to the task of decomposing a signal into a sparse linear combination of dictionary atoms [8, 9]. Of particular relevance are supervised (i.e., task-driven or discriminative) dictionary learning methods [10, 11]. However, among supervised dictionary learning methods, only a few approaches address the problem given inaccurate MIL labels. These include MMDL [12], which trains many linear SVM classifiers and views the estimated parameters as dictionary atoms, and DMIL [13, 14], which learns class-specific dictionaries by maximizing a noisy-OR model such that all negative instances are poorly represented by the estimated target dictionary. As outlined in Sec. I, DL-FUMI differs from these existing methods through its use of a shared background dictionary.

II DL-FUMI

Let $\mathbf{X} = [\mathbf{x}_1, \ldots, \mathbf{x}_N] \in \mathbb{R}^{d \times N}$ be training data, where $d$ is the dimensionality of an instance and $N$ is the total number of training instances. The data is grouped into $K$ bags, $\mathbf{B} = \{\mathbf{B}_1, \ldots, \mathbf{B}_K\}$, with associated binary bag-level labels, $L = \{L_1, \ldots, L_K\}$, where $L_j \in \{0, 1\}$ and $\mathbf{x}_i \in \mathbf{B}_j$ denotes an instance in bag $\mathbf{B}_j$. Given training data in this form, DL-FUMI models each instance as a sparse linear combination of target and/or background atoms, $\mathbf{x}_i \approx \mathbf{D}\boldsymbol{\alpha}_i$, where $\boldsymbol{\alpha}_i$ is the sparse vector of weights for instance $\mathbf{x}_i$. Positive bags (i.e., $\mathbf{B}_j$ with $L_j = 1$, denoted $\mathbf{B}_j^+$) contain at least one instance composed of some target:

$$\mathbf{x}_i = \sum_{t=1}^{T} \alpha_{it}\, \mathbf{d}_t^+ + \sum_{k=1}^{M} \alpha_{ik}\, \mathbf{d}_k^- + \boldsymbol{\varepsilon}_i, \quad \exists\, \mathbf{x}_i \in \mathbf{B}_j^+ \qquad (1)$$

where $\boldsymbol{\varepsilon}_i$ is a noise term. However, the number of instances in a positive bag with a target component is unknown.

If $\mathbf{B}_j$ is a negative bag (i.e., $L_j = 0$, denoted $\mathbf{B}_j^-$), then this indicates that $\mathbf{B}_j$ does not contain any target:

$$\mathbf{x}_i = \sum_{k=1}^{M} \alpha_{ik}\, \mathbf{d}_k^- + \boldsymbol{\varepsilon}_i, \quad \forall\, \mathbf{x}_i \in \mathbf{B}_j^- \qquad (2)$$
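The presence or absence of target atoms in the two models above can be sketched numerically (toy atoms and weights, chosen only for illustration):

```python
import numpy as np

def reconstruct(has_target, D_pos, a_pos, D_neg, a_neg):
    """Eqs. (1)-(2): target atoms contribute only when the instance
    contains target; background atoms contribute in both cases."""
    return int(has_target) * (D_pos @ a_pos) + D_neg @ a_neg

D_pos = np.array([[1.0], [0.0], [0.0]])                 # one target atom
D_neg = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # two background atoms
a_pos = np.array([0.7])
a_neg = np.array([0.4, 0.2])

x_target = reconstruct(True, D_pos, a_pos, D_neg, a_neg)       # eq. (1)-style
x_background = reconstruct(False, D_pos, a_pos, D_neg, a_neg)  # eq. (2)-style
```

The two reconstructions share the same background component; only the target-containing instance picks up a contribution along the target atom.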

Given this problem formulation, the goal of DL-FUMI is to estimate the dictionary $\mathbf{D} = [\mathbf{D}^+, \mathbf{D}^-]$, where $[\cdot, \cdot]$ denotes horizontal concatenation, $\mathbf{D}^+ = \{\mathbf{d}_t^+\}_{t=1}^{T}$ are the target atoms, and $\mathbf{D}^- = \{\mathbf{d}_k^-\}_{k=1}^{M}$ are the background atoms. This is accomplished by minimizing (3), which is proportional to the complete negative data log-likelihood, where $\mathbf{B}^+$ and $\mathbf{B}^-$ are the subsets of $\mathbf{B}$ corresponding to positive and negative bags, respectively. The first term in (3) computes the squared residual error between each instance and its estimate using the dictionary. In this term, a set of hidden binary latent variables $\{z_i\}_{i=1}^{N}$ is introduced to indicate whether an instance is or is not a target (i.e., $z_i = 1$ when $\mathbf{x}_i$ contains target). For all points in negative bags, $z_i = 0$. For points in positive bags, the value of $z_i$ is unknown. Also, a weight $w_i$ is included, where $w_i = 1$ if $\mathbf{x}_i \in \mathbf{B}^-$ and $w_i = \psi$ if $\mathbf{x}_i \in \mathbf{B}^+$, with $\psi$ a fixed parameter. This weight helps balance the terms when there is a large imbalance between the number of negative and positive instances.

The second term is an $\ell_1$ regularization term to promote sparse weights. It also includes the latent variables, $z_i$, to account for the uncertain presence of target in positive bags.

The third term is a robust penalty term that promotes discriminative target atoms (inspired by a term presented in [15]). Instead of using a fixed penalty coefficient, we introduce an adaptive coefficient, defined in (5):

(5)

where $\theta_{tk}$ is the vector angle between the background atom $\mathbf{d}_k^-$ and the target atom $\mathbf{d}_t^+$. Since the coefficient is nonnegative, this discriminative term is always positive and adds a large penalty when $\mathbf{d}_k^-$ and $\mathbf{d}_t^+$ have similar shape. Thus, this term encourages a discriminative dictionary by promoting background atoms that are orthogonal to target atoms. In implementation, the coefficient is updated once per iteration using the values of $\mathbf{D}^+$ and $\mathbf{D}^-$ from the previous iteration.
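Since the exact form of (5) is not recoverable here, the sketch below uses an illustrative angle-based coefficient with the stated behavior: the penalty on a target/background atom pair grows as the atoms' shapes become similar and vanishes when they are orthogonal.

```python
import numpy as np

def pair_penalty(d_pos, d_neg, eps=0.1):
    """Illustrative discriminative penalty (a stand-in for eq. (5)):
    large when target atom d_pos and background atom d_neg are nearly
    (anti-)parallel, zero when they are orthogonal."""
    cos = d_pos @ d_neg / (np.linalg.norm(d_pos) * np.linalg.norm(d_neg))
    theta = np.arccos(np.clip(cos, -1.0, 1.0))       # vector angle
    gamma = 1.0 / (eps + min(theta, np.pi - theta))  # adaptive coefficient
    return gamma * (d_pos @ d_neg) ** 2

aligned = pair_penalty(np.array([1.0, 0.0]), np.array([0.9, 0.1]))
orthogonal = pair_penalty(np.array([1.0, 0.0]), np.array([0.0, 1.0]))
```

Minimizing a sum of such pair penalties pushes background atoms away from target-atom directions, which is the discriminative effect the text describes.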

III DL-FUMI Optimization

Expectation-Maximization (EM) is used to optimize (3) and estimate $\mathbf{D}$. During optimization, the fact that many of the binary latent variables $z_i$ are unknown is addressed by taking the expected value of the log-likelihood with respect to $z_i$, as shown in (4). In (4), $\boldsymbol{\theta}^{(l)}$ is the set of parameters estimated at iteration $l$, and $P(z_i \mid \mathbf{x}_i, \boldsymbol{\theta}^{(l)})$ is the probability that each instance is or is not a true target instance. During the E-step of each iteration, $P(z_i \mid \mathbf{x}_i, \boldsymbol{\theta}^{(l)})$ is computed as:

$$P(z_i = 1 \mid \mathbf{x}_i, \boldsymbol{\theta}^{(l)}) = \begin{cases} 1 - e^{-\beta \left\| \mathbf{x}_i - \sum_{k=1}^{M} \alpha_{ik} \mathbf{d}_k^- \right\|_2^2} & \text{if } \mathbf{x}_i \in \mathbf{B}^+ \\ 0 & \text{if } \mathbf{x}_i \in \mathbf{B}^- \end{cases} \qquad (6)$$

where $\beta$ is a fixed scaling parameter. If $\mathbf{x}_i$ is a non-target instance, then it should be characterized well by the background atoms, and thus $P(z_i = 1 \mid \mathbf{x}_i, \boldsymbol{\theta}^{(l)}) \to 0$. Otherwise, if $\mathbf{x}_i$ is a true target instance, it will not be characterized well using only the background atoms and $P(z_i = 1 \mid \mathbf{x}_i, \boldsymbol{\theta}^{(l)}) \to 1$.
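A sketch of this E-step computation, assuming the probability takes the form implied by the text (the background-only residual drives the target probability):

```python
import numpy as np

def p_target(x, D_neg, a_neg, beta=2.0, in_positive_bag=True):
    """E-step sketch: an instance that the background-only dictionary
    reconstructs poorly is likely a true target instance."""
    if not in_positive_bag:
        return 0.0  # negative-bag instances are known to be non-target
    residual = np.sum((x - D_neg @ a_neg) ** 2)
    return 1.0 - np.exp(-beta * residual)

D_neg = np.array([[0.0], [1.0]])
a_neg = np.array([0.5])
p_bg = p_target(np.array([0.0, 0.5]), D_neg, a_neg)  # perfectly explained
p_tg = p_target(np.array([3.0, 0.5]), D_neg, a_neg)  # large background residual
```

Here `p_bg` is 0 (the background atoms explain the instance exactly) while `p_tg` is close to 1, matching the limiting behavior described above.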

1:  Initialize $\mathbf{D}^{(0)}$, $l \leftarrow 0$
2:  repeat
3:        E-step: Compute $P(z_i \mid \mathbf{x}_i, \boldsymbol{\theta}^{(l)})$
4:        M-step:
5:        Update $\mathbf{d}_t^+$ using (9)
6:        Update $\mathbf{d}_k^-$ using (8)
7:        for $i = 1$ to $N$ do
8:               Update $\boldsymbol{\alpha}_i$ for $\mathbf{x}_i \in \mathbf{B}^+$ using (11), (12)
9:               Update $\boldsymbol{\alpha}_i$ for $\mathbf{x}_i \in \mathbf{B}^-$ using (13)
10:        end for
11:        $l \leftarrow l + 1$
12:  until convergence
13:  return $\mathbf{D}$, $\{\boldsymbol{\alpha}_i\}_{i=1}^{N}$
Algorithm 1: DL-FUMI EM algorithm. DL-FUMI code can be found at: https://github.com/TigerSense/FUMI

The M-step is performed by iteratively optimizing (4) with respect to each of the desired parameters. The dictionary $\mathbf{D}$ is updated atom-by-atom using a block coordinate descent scheme [16, 17]. The sparse weights, $\boldsymbol{\alpha}_i$, are updated using an iterative shrinkage-thresholding algorithm [18, 19]. For readability, the derivations of the update equations are given in Sec. VII. The method is summarized in Alg. 1.

IV Classification using Estimated Dictionary

Given $\mathbf{D}$, a confidence that instance $\mathbf{x}_i$ is target can be computed using a ratio of the reconstruction error given only the background atoms, $\mathbf{D}^-$, to that given the full dictionary of target and background atoms, $\mathbf{D}$:

$$\mathrm{conf}(\mathbf{x}_i) = \frac{\left\| \mathbf{x}_i - \mathbf{D}^- \hat{\boldsymbol{\alpha}}_i^- \right\|_2^2}{\left\| \mathbf{x}_i - \mathbf{D} \boldsymbol{\alpha}_i \right\|_2^2} \qquad (7)$$

where $\hat{\boldsymbol{\alpha}}_i^-$ are the sparse weights for the non-target atoms for instance $\mathbf{x}_i$. If the numerator (background-only) error is large and the denominator (full-dictionary) error is low, then the target atoms are needed to reconstruct instance $\mathbf{x}_i$.
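A sketch of this confidence computation, using least squares in place of sparse coding purely to keep the example short (the weights would come from the sparse updates in practice):

```python
import numpy as np

def target_confidence(x, D_pos, D_neg):
    """Eq. (7)-style score: background-only reconstruction error
    divided by full-dictionary reconstruction error."""
    D_full = np.hstack([D_pos, D_neg])
    a_neg, *_ = np.linalg.lstsq(D_neg, x, rcond=None)
    a_full, *_ = np.linalg.lstsq(D_full, x, rcond=None)
    err_bg = np.sum((x - D_neg @ a_neg) ** 2)
    err_full = np.sum((x - D_full @ a_full) ** 2)
    return err_bg / (err_full + 1e-12)  # guard against division by zero

D_pos = np.array([[1.0], [0.0]])  # toy target atom
D_neg = np.array([[0.0], [1.0]])  # toy background atom
score_target = target_confidence(np.array([1.0, 0.2]), D_pos, D_neg)
score_background = target_confidence(np.array([0.0, 1.0]), D_pos, D_neg)
```

An instance that needs the target atom for its reconstruction receives a much larger score than one the background atom already explains.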

V Experiments

DL-FUMI is evaluated on two MIL recognition problems using the AR Face data [20, 21] and one MIL recognition problem using the USPS hand-written digits data [22, 23]. In all of our experiments, target atoms were initialized by computing the means of random subsets drawn from the union of all positive bags. K-means was applied to the union of all negative bags, and the resulting cluster centers were used as the initial background atoms.
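The initialization described above can be sketched as follows (the subset size and iteration count are arbitrary choices for illustration, not the paper's settings):

```python
import numpy as np

def init_dictionary(pos_instances, neg_instances, n_target, n_background,
                    subset_size=5, n_iter=10, rng=None):
    """Target atoms: means of random subsets of positive-bag instances.
    Background atoms: cluster centers from a minimal Lloyd's k-means
    run on the negative-bag instances."""
    if rng is None:
        rng = np.random.default_rng(0)
    D_pos = np.stack([
        pos_instances[rng.choice(len(pos_instances), subset_size,
                                 replace=False)].mean(axis=0)
        for _ in range(n_target)])
    centers = neg_instances[rng.choice(len(neg_instances), n_background,
                                       replace=False)].copy()
    for _ in range(n_iter):
        dists = np.linalg.norm(neg_instances[:, None] - centers[None], axis=2)
        assign = dists.argmin(axis=1)
        for k in range(n_background):
            members = neg_instances[assign == k]
            if len(members):
                centers[k] = members.mean(axis=0)
    return D_pos, centers

rng = np.random.default_rng(0)
pos = rng.standard_normal((40, 3)) + 2.0  # synthetic positive-bag instances
neg = rng.standard_normal((60, 3))        # synthetic negative-bag instances
D_pos, D_neg = init_dictionary(pos, neg, n_target=2, n_background=3, rng=rng)
```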

V-A AR Face Recognition

The AR Face data set consists of frontal-pose images with 26 images per person (2 sessions, 13 per session) covering different expressions, illuminations, and occlusions. The pre-processed and cropped imagery of 50 male and 50 female subjects provided by Martinez and Kak [21] was used. Each image was down-sampled, and the raw gray-scale pixel values were used as features.

For the first AR Face experiment, sun-glasses were the target concept. Specifically, 50 positive training bags of 10 instances each were created. Each positive bag contained only two instances of randomly selected images of people wearing sun-glasses; the other eight were randomly chosen from images of people without sun-glasses. 50 negative bags were constructed by randomly selecting 10 instances per bag from images of individuals not wearing sun-glasses. The test data included all imagery that was not used for training.

The parameters for DL-FUMI for this experiment were set to , , , and . After dictionary estimation, the target confidence was computed for each test instance following Sec. IV, and receiver operating characteristic (ROC) curve analysis was conducted. Fig. 1 shows one of the 10 ROCs obtained by DL-FUMI, DMIL, and EM-DD; the TPR vs. FPR operating points obtained by mi-SVM and MMDL are also plotted. The average TPRs of DL-FUMI, EM-DD, and DMIL over 10 runs are shown in Table I at FPRs of 1%, 18.4%, and 41.9%, where 18.4% and 41.9% are the average FPRs of the two classification algorithms MMDL and mi-SVM, respectively. Fig. 2 and Fig. 3 show the target and background atoms estimated by DL-FUMI and the comparison methods, respectively. To estimate the DMIL background atoms, we flipped the sign of the positive and negative bags (i.e., swapped the target and background classes) and re-trained the dictionary; this was done since, as stated in [14], DMIL does not learn a set of background atoms simultaneously. As shown, the DL-FUMI target atoms are highly discriminative and representative of the target class, e.g., there are male and female sun-glasses atoms and variation in light reflections. Finally, the overall dictionary set estimated by DL-FUMI is qualitatively smoother, which helps reduce classification error.

Fig. 1: ROC analysis for sun-glasses detection using AR face database comparing DL-FUMI, DMIL [13] and EM-DD [6] (code from [24]). True Positive Rate vs. False Positive Rate of mi-SVM [3] and MMDL [12] (code from author’s website) are also plotted.
Algorithm | TPR(%) @ FPR=1% | TPR(%) @ FPR=18.4% | TPR(%) @ FPR=41.9%
mi-SVM [3] | - | - | 96.2
EM-DD [6] | 78.2 | 93.8 | 98.1
MMDL [12] | - | 98.0 | -
DMIL [13] | 60.2 | 95.2 | 99.5
DL-FUMI | 97.5 | 100 | 100
TABLE I: Average TPR at fixed FPRs over 10 runs
Fig. 2: Plot of estimated dictionary atoms for sun-glasses. (a): DL-FUMI. (b): DMIL. (c): EM-DD.
Fig. 3: Plot of estimated dictionary atoms for background. (a): DL-FUMI. (b): DMIL.

For the second AR Face experiment, Woman No. 10 was selected as the positive target class. Two positive training bags containing 50 instances each were created. The first positive bag contained 6 images from Woman No. 10 set 1 and the second positive bag contained the remaining 7 images from Woman No. 10 set 1; the rest of the instances in each positive bag were randomly selected from other individuals. Three negative bags with 200 instances per bag were constructed by randomly selecting images from the data set, excluding those of Woman No. 10. Given this, there are only 13 positive training instances and 687 negative training instances, making this a more difficult problem than sun-glasses detection. The test data contained the 13 images of Woman No. 10 from set 2 and 100 images randomly selected from images that are not of Woman No. 10. There is no overlap between the training and testing data.

The parameters used in DL-FUMI for this experiment were , , , and . One of the 10 ROCs is shown in Fig. 4. The average TPRs of DL-FUMI, EM-DD, and DMIL are shown in Table II at FPRs of 2.9%, 5%, and 12.0%, where 2.9% and 12.0% are the average FPRs of the two classification algorithms MMDL and mi-SVM, respectively. Table II and Fig. 4 clearly show that DL-FUMI outperforms all of the comparison algorithms. To further show that the target dictionary estimated by DL-FUMI is effective at characterizing the target class, the subspace adaptive cosine estimator (subACE) target detection algorithm [25] was applied directly using the target dictionary estimated by DL-FUMI. One of the subACE ROCs using the DL-FUMI dictionary, shown in Fig. 4, reaches a 100% TPR at a 0% FPR. Since subACE is a target detection algorithm that relies on target signatures encompassing the distinguishing characteristics of a target class, this further emphasizes that the target dictionary estimated by DL-FUMI is highly representative of the target class. Fig. 5 shows the target atoms estimated by DL-FUMI, DMIL, and EM-DD for Woman No. 10, where it can be seen that the target dictionary atoms estimated by DL-FUMI are more discriminative, as they capture distinct features of the positive class (different occlusions, expressions, etc.). Fig. 6 shows the background atoms estimated for Woman No. 10, where it can be seen that the background dictionary estimated by DL-FUMI has better representative quality.

Fig. 4: Woman No. 10 detection on AR face database
Fig. 5: Plot of estimated dictionary atoms for Woman No. 10. (a): DL-FUMI. (b): DMIL. (c): EM-DD
Fig. 6: Plot of estimated background dictionary atoms for Woman No. 10.
Algorithm | TPR(%) @ FPR=2.9% | TPR(%) @ FPR=5% | TPR(%) @ FPR=12.0%
mi-SVM [3] | - | - | 69.2
EM-DD [6] | 0 | 0 | 0
MMDL [12] | 31.5 | - | -
DMIL [13] | 54.5 | 69.1 | 78.8
DL-FUMI (using (7)) | 78.9 | 91.5 | 95.5
DL-FUMI (subACE) | 94.6 | 100 | 100
TABLE II: Average TPR at fixed FPRs over 10 runs

V-B USPS Digit Classification

DL-FUMI is further evaluated on a multi-class classification task using the USPS data set (available at: http://www-i6.informatik.rwth-aachen.de/ keysers/usps.html). The USPS data set contains 9298 images of handwritten digits from 0 to 9, each 16×16 pixels in size. The raw gray-level pixel values are used as features in this experiment. The training and testing data partitions in this paper mimic the experimental set-up in [14]. Specifically, for each class $c$, 50 positive training bags were generated. Each bag contains 4 instances in total; only one comes from the true positive class, and the other three instances are randomly chosen from the other classes. 50 negatively labeled bags were also constructed by randomly selecting 50 instances per bag from classes other than $c$. The test data contains 2000 samples in total, 200 from each class.

In this experiment, the parameters used were , , , and . For instance-level classification, the approach described in Sec. IV was applied using a dictionary estimated with each class treated in turn as the target class. The final class label for multi-class classification was then assigned by selecting the class with the largest confidence value computed via (7). The classification results of DL-FUMI and the comparison algorithms are listed in Table III, where the results for GD-MIL are as reported in [14]. Table III shows that DL-FUMI outperforms the two multiple instance dictionary learning methods, GD-MIL and MMDL, and the two MIL methods, mi-SVM and EM-DD. Fig. 7 shows the estimated DL-FUMI target and non-target dictionary atoms, from which we can see that DL-FUMI is able to learn a discriminative target dictionary as well as a characteristic background dictionary (i.e., each target dictionary atom looks like the target digit and the background dictionary atoms look like all the other digits).
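The multi-class decision rule reduces to an argmax over per-class confidences; a sketch with toy per-class dictionaries (least squares again stands in for sparse coding):

```python
import numpy as np

def confidence(x, D_pos, D_neg):
    """Eq. (7)-style ratio of background-only to full reconstruction error."""
    D_full = np.hstack([D_pos, D_neg])
    a_neg, *_ = np.linalg.lstsq(D_neg, x, rcond=None)
    a_full, *_ = np.linalg.lstsq(D_full, x, rcond=None)
    err_bg = np.sum((x - D_neg @ a_neg) ** 2)
    err_full = np.sum((x - D_full @ a_full) ** 2)
    return err_bg / (err_full + 1e-12)

def classify(x, class_dicts):
    """Pick the class whose target dictionary best explains x."""
    scores = {c: confidence(x, D_pos, D_neg)
              for c, (D_pos, D_neg) in class_dicts.items()}
    return max(scores, key=scores.get)

# toy dictionaries in a 3-D feature space:
# class 0 targets direction e0, class 1 targets direction e1
e = np.eye(3)
class_dicts = {0: (e[:, :1], e[:, 1:]),
               1: (e[:, 1:2], np.hstack([e[:, :1], e[:, 2:]]))}
pred0 = classify(np.array([1.0, 0.1, 0.0]), class_dicts)
pred1 = classify(np.array([0.0, 1.0, 0.1]), class_dicts)
```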

To get insight into classification errors, Fig. 8(a)-8(d) show several randomly selected misclassified instances and Fig. 8(e)-8(h) show the corresponding images reconstructed by DL-FUMI. For example, Fig. 8(a) has a true class label of 0, but was misclassified as 6; the reconstructed image is shown in Fig. 8(e). As can be seen, this data point appears very similar to the digit 6 and is difficult even for a human to recognize correctly. Similarly, Fig. 8(b)-8(d) show the other three misclassified images and Fig. 8(f)-8(h) show the corresponding reconstructions.

Fig. 7: USPS dictionary atoms estimated by DL-FUMI. (a): Target atoms. (b): Non-target atoms.
Fig. 8: Examples of misclassified images by DL-FUMI. (a): true class 0 misclassified to 6. (b): true class 5 misclassified to 3. (c): true class 9 misclassified to 8. (d): true class 9 misclassified to 4. (e)-(h): corresponding reconstructed images by DL-FUMI
Alg. | Acc.(%) | Alg. | Acc.(%)
mi-SVM [3] | 81.1 | GD-MIL [14] (results as reported in [14]) | 83.4
EM-DD [6] | 76.55 | MMDL [12] | 80.8
DL-FUMI | 86.5 | |
TABLE III: Classification accuracies on the USPS data set

VI Conclusion

In this paper, a multiple-instance dictionary learning algorithm, DL-FUMI, is proposed. DL-FUMI leverages the shared information between the positive and negative classes to improve the discriminative ability of the estimated target atoms. Experimental results show that the estimated DL-FUMI target atoms provide a good representation of the positive class and improve target detection and classification performance over comparison methods.

VII Derivation of DL-FUMI update equations

This section provides a derivation of the DL-FUMI update equations. When updating the dictionary $\mathbf{D}$, the sparse weights $\boldsymbol{\alpha}_i$ are held fixed. To update one of the atoms in $\mathbf{D}$, (4) is minimized with respect to the corresponding atom while keeping all other atoms constant. The resulting update equations for $\mathbf{d}_t^+$ and $\mathbf{d}_k^-$ are shown in (9) and (8).

(9)

Note, $P(z_i = 1 \mid \mathbf{x}_i, \boldsymbol{\theta}^{(l)})$ is denoted as $p_i$ for simplicity.

When updating the sparse weights, $\boldsymbol{\alpha}_i$, it should be noted that the sparse weight vector for instance $\mathbf{x}_i$ does not depend on any other instance.

The gradient with respect to $\boldsymbol{\alpha}_i$, without considering the $\ell_1$ penalty term, is:

(10)

Then $\boldsymbol{\alpha}_i$ at iteration $l+1$ can be updated using gradient descent,

(11)

followed by a soft-thresholding:

(12)


Following a proof similar to that in [26], when the step length satisfies $\eta \le 1/\lambda_{\max}(\mathbf{D}^{\top}\mathbf{D})$, the update of $\boldsymbol{\alpha}_i$ using gradient descent with step length $\eta$ monotonically decreases the value of the objective function, where $\lambda_{\max}(\cdot)$ denotes the maximum eigenvalue of its argument. For simplicity, $\eta$ was set to $1/\lambda_{\max}(\mathbf{D}^{\top}\mathbf{D})$ for all $i$ and $l$.
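The gradient-plus-shrinkage update above can be sketched as a generic ISTA step (the paper's actual updates (11)-(13) include the latent-variable weighting, which is omitted here):

```python
import numpy as np

def ista_step(x, D, a, lam):
    """One gradient descent step on the squared residual followed by
    soft-thresholding, with step length 1 / lambda_max(D^T D)."""
    eta = 1.0 / np.linalg.eigvalsh(D.T @ D).max()
    a = a - eta * (D.T @ (D @ a - x))                           # gradient step
    return np.sign(a) * np.maximum(np.abs(a) - eta * lam, 0.0)  # shrinkage

rng = np.random.default_rng(1)
D = rng.standard_normal((8, 4))       # toy dictionary, 8-D data, 4 atoms
a_true = np.array([1.0, 0.0, 0.0, 0.5])
x = D @ a_true                        # noiseless synthetic instance
a = np.zeros(4)
for _ in range(300):
    a = ista_step(x, D, a, lam=1e-3)
```

With this step length the residual decreases monotonically, and the iterate recovers a sparse weight vector that reconstructs `x` closely.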

A similar update can be used for points from negative bags. The resulting update equation for negative points is:

(13)

The sparse weights corresponding to target dictionary atoms are set to 0 for all points in negative bags.

Acknowledgment

This material is based upon work supported by the National Science Foundation under Grant No. IIS-1350078 - CAREER: Supervised Learning for Incomplete and Uncertain Data.

References

  • [1] J. Mairal, F. Bach, and J. Ponce, “Sparse modeling for image and vision processing,” Found. and Trends in Comput. Graph. and Vision, vol. 8, no. 2-3, pp. 85–283, 2014.
  • [2] T. G. Dietterich, R. H. Lathrop, and T. Lozano-Perez, “Solving the multiple-instance problem with axis-parallel rectangles,” Artificial Intell., vol. 89, no. 1-2, pp. 31–71, 1997.
  • [3] S. Andrews, I. Tsochantaridis, and T. Hofmann, “Support vector machines for multiple-instance learning,” in Advances in Neural Inf. Process. Syst., 2002, pp. 561–568.
  • [4] Y. Chen, J. Bi, and J. Z. Wang, “MILES: Multiple-instance learning via embedded instance selection,” IEEE Trans. Pattern Anal. Mach. Intell, vol. 28, no. 12, pp. 1931–1947, 2006.
  • [5] O. Maron and T. Lozano-Perez, “A framework for multiple-instance learning.” in Advances in Neural Inf. Process. Syst., vol. 10, 1998.
  • [6] Q. Zhang and S. Goldman, “EM-DD: An improved multiple-instance learning technique,” in Advances in Neural Inf. Process. Syst., 2002, pp. 1073–1080.
  • [7] C. Jiao and A. Zare, “Functions of multiple instances for learning target signatures,” IEEE Trans. on Geosci. Remote Sens., vol. 53, no. 8, pp. 4670 – 4686, 2015.
  • [8] S. G. Mallat and Z. Zhang, “Matching pursuits with time-frequency dictionaries,” IEEE Trans. Signal Process., vol. 41, no. 12, pp. 3397–3415, 1993.
  • [9] D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory, vol. 52, no. 4, pp. 1289–1306, 2006.
  • [10] J. Mairal, F. Bach, and J. Ponce, “Task-driven dictionary learning,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 34, no. 4, pp. 791–804, 2012.
  • [11] Z. Jiang, Z. Lin, and L. S. Davis, “Label consistent K-SVD: Learning a discriminative dictionary for recognition,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 35, no. 11, pp. 2651–2664, 2013.
  • [12] X. Wang, B. Wang, X. Bai, W. Liu, and Z. Tu, “Max-margin multiple-instance dictionary learning,” in Int. Conf. On Mach. Learning, 2013, pp. 846–854.
  • [13] A. Shrivastava, J. K. Pillai, V. M. Patel, and R. Chellappa, “Dictionary-based multiple instance learning,” in IEEE Int. Conf. on Image Process., 2014, pp. 160–164.
  • [14] A. Shrivastava, V. M. Patel, J. K. Pillai, and R. Chellappa, “Generalized dictionaries for multiple instance learning,” Int. J. of Comput. Vision, vol. 114, no. 2, pp. 288–305, September 2015.
  • [15] I. Ramirez, P. Sprechmann, and G. Sapiro, “Classification and clustering via dictionary learning with structured incoherence and shared features,” in IEEE Comput. Vision and Pattern Recognition, 2010, pp. 3501–3508.
  • [16] D. P. Bertsekas, Nonlinear Programming.   Athena Scientific, 1999.
  • [17] J. Mairal, F. Bach, J. Ponce, and G. Sapiro, “Online learning for matrix factorization and sparse coding,” J. of Mach. Learning Research, vol. 11, pp. 19–60, 2010.
  • [18] M. A. Figueiredo and R. D. Nowak, “An EM algorithm for wavelet-based image restoration,” IEEE Trans. Image Process., vol. 12, no. 8, pp. 906–916, 2003.
  • [19] I. Daubechies, M. Defrise, and C. De Mol, “An iterative thresholding algorithm for linear inverse problems with a sparsity constraint,” Commun. on Pure and Appl. Math., vol. 57, pp. 1413–1457, 2004.
  • [20] A. M. Martínez, “The AR face database,” CVC Tech. Rep., vol. 24, 1998.
  • [21] A. M. Martínez and A. C. Kak, “PCA versus LDA,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 23, no. 2, pp. 228–233, 2001.
  • [22] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel, “Backpropagation applied to handwritten zip code recognition,” Neural Computation, vol. 1, no. 4, pp. 541–551, 1989.
  • [23] P. D. Gader and M. A. Khabou, “Automatic feature generation for handwritten digit recognition,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 18, no. 12, pp. 1256–1261, 1996.
  • [24] Z.-H. Zhou and M.-L. Zhang, “Ensembles of multi-instance learners,” in European Conf. on Mach. Learning.   Springer, 2003, pp. 492–502.
  • [25] S. Kraut, L. Scharf, and L. McWhorter, “Adaptive subspace detectors,” IEEE Trans. Signal Process., vol. 49, no. 1, pp. 1–16, 2001.
  • [26] F. Facchinei and J.-S. Pang, Finite-dimensional variational inequalities and complementarity problems.   Springer Science & Business Media, 2007.