Multi-label learning, which allows an instance to be associated with multiple labels simultaneously, has been applied successfully to various real-world problems, including images, texts and biological data. An important issue with multi-label learning is that collecting all labels requires investigating a large number of candidate labels one by one, and thus labels are usually missing in practice due to limited resources.
Multi-label learning with such missing labels, which is often called weakly supervised multi-label learning (WSML), has been investigated thoroughly [5, 6, 7]. Among these, the most popular methods are based on matrix completion (MC) [5, 8, 9], which is a technique to complete an (approximately) low-rank matrix with uniformly randomly missing entries [10, 11].
The MC-based WSML methods mentioned above assume that only the label matrix has missing entries, while the feature matrix is complete. However, in reality, features can also be missing. To deal with such missing features, a naive solution is to first complete the feature matrix using a classical MC technique, and then employ a WSML method to fill in the label matrix. However, such a two-step approach may not work well, since recovery of the feature matrix is performed in an unsupervised way (i.e., label information is completely ignored). Thus, when both labels and features are missing, it would be desirable to employ the label information to complete the feature matrix in a supervised way. Following this spirit, previous work proposed concatenating the feature matrix and the label matrix into a single big matrix, and employed an MC algorithm to recover both features and labels simultaneously.
In the WSML methods reviewed above, the label matrix is commonly assumed to be (approximately) low-rank, based on the natural observation that labels are correlated in the multi-label setting. However, such a low-rank assumption may not be true in reality, since entries of the label matrix take binary values, and a binary matrix is unlikely to be low-rank. Indeed, as we observe in Figure 1, the singular values of the label matrix of the CAL500 data have a heavy tail, and thus such a low-rank assumption may not be reasonable. Another common assumption is that there is a linear relationship between the feature and label matrices. Such an assumption may not hold well on some datasets, where classical multi-label methods learn a nonlinear classifier [14, 15].
In this paper, we propose a method to deal with WSML when both features and annotations are incomplete. Motivated by work on learning a low-dimensional shared subspace between labels and features, we assume that there is some latent matrix generating the annotation matrix. More specifically, each entry of the latent matrix is mapped by a nonlinear function to a value in $[0, 1]$, which corresponds to the probability of setting the corresponding entry of the annotation matrix to $1$. Considering label correlation, we assume the latent matrix is low-rank. Motivated by the elastic net and one-bit MC [18, 19, 20], we propose a subgradient-based MC method named co-completion (COCO), which can recover the features and the latent matrix simultaneously. Furthermore, we give a theoretical bound on the recovery quality of COCO. In the experiments, we demonstrate that COCO not only achieves better recovery performance than the baselines, but also achieves lower test error when the recovered data are used to train new classifiers.
In this section, we will first give the formal definition of our studied problem. Then, we will give our learning objective, as well as the optimization algorithm.
We will assume $X \in \mathbb{R}^{n \times d}$ is the feature matrix, in which $n$ is the number of instances and $d$ is the number of features. There is a label matrix $Y \in \{0, 1\}^{n \times m}$, where $m$ is the number of labels in multi-label learning. We assume that a latent matrix $Z \in \mathbb{R}^{n \times m}$ generates the label matrix $Y$, that is, $P(Y_{ij} = 1) = \sigma(Z_{ij})$. In the following, we will assume $\sigma$ is the sigmoid function, i.e., $\sigma(z) = 1 / (1 + e^{-z})$.
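To make the generative assumption concrete, here is a minimal NumPy sketch of sampling a label matrix from a latent matrix through the sigmoid; the sizes `n, d, m` and the random latent matrix are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    """Elementwise sigmoid: sigma(z) = 1 / (1 + exp(-z))."""
    return 1.0 / (1.0 + np.exp(-z))

n, d, m = 100, 20, 15                    # illustrative sizes: instances, features, labels
Z = rng.standard_normal((n, m))          # latent matrix generating the labels
# Each label entry is Bernoulli: P(Y_ij = 1) = sigma(Z_ij).
Y = (rng.random((n, m)) < sigmoid(Z)).astype(int)
```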
Our basic assumption is that $X$ and $Z$ are concatenated together into a big matrix $M = [X, Z]$, and $M$ is low-rank. Note that previous work has two assumptions when using matrix completion to solve a multi-label problem. One is that the label matrix $Y$ has a linear relationship with the feature matrix $X$. However, such an assumption may not hold well on real data; otherwise, there would not exist so many algorithms learning a nonlinear mapping between features and labels. Another assumption is that the label matrix $Y$ is low-rank. Such an assumption is motivated by the fact that labels in multi-label learning are correlated, so only a few factors determine their values. However, we argue that a sparse matrix need not be low-rank; instead, we assume that the latent matrix $Z$ generating the labels is low-rank. Thus, in our problem, we assume the concatenation of $X$ and $Z$ forms a low-rank matrix, and we will recover such a matrix when entries in both $X$ and $Y$ are missing.
Similarly to previous works [5, 9], we assume that entries are uniformly randomly missing, i.e., each entry is observed independently with probability $\rho$. Let $\Omega_X$ and $\Omega_Y$ be the subsets containing all the indices of observed entries in $X$ and $Y$, respectively. Based on these notations, in the following we give our learning objective.
2.2 Learning Objective
In our learning objective, we need to consider three factors. The first focuses on the feature matrix. To recover the feature matrix, a classical way is to use the Frobenius norm on the observed entries, i.e.,

$$\sum_{(i,j) \in \Omega_X} \big(\hat{X}_{ij} - X_{ij}\big)^2.$$
Note that the Frobenius norm on matrices corresponds to the L2 norm on vectors, while the trace norm on matrix singular values is analogous to the L1 norm on vectors. Motivated by the advantage of the elastic net, which uses both the L1 norm and the L2 norm for regularization, we additionally optimize the trace norm of the difference between the recovered feature matrix and the observed feature matrix, i.e.,

$$\big\| \hat{X} - X^{\Omega} \big\|_{\mathrm{tr}},$$

where $X^{\Omega}$ denotes the observed feature matrix with unobserved entries set to zero.
For the label matrix $Y$, motivated by previous work on one-bit matrix completion, we consider the log-likelihood of the observed entries, i.e.,

$$\mathcal{L}(\hat{Z}) = \sum_{(i,j) \in \Omega_Y} \Big[ \mathbb{1}(Y_{ij} = 1) \log \sigma(\hat{Z}_{ij}) + \mathbb{1}(Y_{ij} = 0) \log\big(1 - \sigma(\hat{Z}_{ij})\big) \Big],$$

where $\mathbb{1}(\cdot)$ is the indicator function. Note that we will minimize the negative log-likelihood instead of maximizing the log-likelihood, in order to agree with the other components of the objective.
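The negative log-likelihood of the observed label entries can be sketched as follows; `mask` (a boolean observation matrix standing in for $\Omega_Y$) and the function name are illustrative choices, not identifiers from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neg_log_likelihood(Z_hat, Y, mask):
    """Negative log-likelihood of the observed label entries.

    mask is a boolean matrix: True where Y_ij is observed.
    """
    p = sigmoid(Z_hat[mask])
    y = Y[mask]
    # - sum over observed entries of [ 1(y=1) log p + 1(y=0) log(1-p) ]
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1.0 - p))
```

When the latent entries strongly agree with the observed labels, the negative log-likelihood approaches zero, which matches its role as a fitting loss in the objective.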
To incorporate all the above considerations, we have our final learning objective,

$$\min_{\hat{M}} \; \sum_{(i,j) \in \Omega_X} \big(\hat{X}_{ij} - X_{ij}\big)^2 + \lambda_1 \big\|\hat{X} - X^{\Omega}\big\|_{\mathrm{tr}} - \lambda_2 \mathcal{L}(\hat{Z}) + \lambda_3 \big\|\hat{M}\big\|_{\mathrm{tr}}, \qquad (1)$$

where $\mathcal{L}(\hat{Z})$ is the log-likelihood of the observed labels, $\lambda_1, \lambda_2, \lambda_3$ are trade-off parameters, and $\hat{M}$ is the concatenation of $\hat{X}$ and $\hat{Z}$, i.e., $\hat{M} = [\hat{X}, \hat{Z}]$.
Previous methods for trace-norm-regularized problems always assume that the loss function is composed of two parts. One part is a differentiable convex function, and the other part is the trace norm of the whole matrix, which is not differentiable but convex. In this way, it is easy to find a closed-form solution using algorithms such as proximal gradient descent, because minimizing the trace norm of the whole matrix plus a simple loss function has a closed-form solution.
However, in our problem Eq (1), besides the trace norm on the whole matrix, we also have the trace norm on the submatrix $\hat{X}$, which is a linear transformation of the whole matrix $\hat{M}$. Thus, classical methods based on proximal gradient descent cannot be directly employed.
We divide the learning objective into two parts and consider each part separately. One part consists of everything except the trace norm on the whole matrix:

$$g(\hat{M}) = \sum_{(i,j) \in \Omega_X} \big(\hat{X}_{ij} - X_{ij}\big)^2 + \lambda_1 \big\|\hat{X} - X^{\Omega}\big\|_{\mathrm{tr}} - \lambda_2 \mathcal{L}(\hat{Z}), \qquad (2)$$

where $\mathcal{L}(\hat{Z})$ is the log-likelihood of the observed labels, $\hat{X} = \hat{M} D$, and $D \in \mathbb{R}^{(d+m) \times d}$ is a matrix whose diagonal entries are $1$ and other entries are $0$.
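A selection matrix of this kind simply extracts the feature columns from the concatenated matrix. A minimal sketch, with illustrative sizes `d` and `m`:

```python
import numpy as np

d, m = 4, 3                         # illustrative sizes
D = np.zeros((d + m, d))
D[:d, :d] = np.eye(d)               # diagonal entries 1, all other entries 0

M_hat = np.arange(2 * (d + m)).reshape(2, d + m).astype(float)
X_hat = M_hat @ D                   # right-multiplying extracts the first d columns
assert np.allclose(X_hat, M_hat[:, :d])
```

Because the submatrix is a linear transformation of the whole matrix, any subgradient taken with respect to the submatrix can be mapped back to the whole matrix through this same linear map.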
The other part contains only the trace norm $\|\hat{M}\|_{\mathrm{tr}}$. Note that previous works on the stochastic L1-regularized loss minimization problem [23, 24] first perform gradient descent on the loss function without considering the L1 part, and then derive a closed-form solution for the L1 part. Motivated by this, we will first perform gradient descent on Eq (2) and then obtain a closed-form solution taking $\|\hat{M}\|_{\mathrm{tr}}$ into consideration.
Eq (2) is convex, and the derivatives of its differentiable terms are easy to calculate. To calculate the subgradient of the trace-norm term, we need the following result [25]: for a matrix $A$ with thin SVD $A = U \Sigma V^{\top}$, $U V^{\top}$ is a subgradient of $\|A\|_{\mathrm{tr}}$ at $A$. In this way, a subgradient of Eq (2) is given by the sum of the gradients of its differentiable terms and $\lambda_1 (U V^{\top}) D^{\top}$, where $U \Sigma V^{\top}$ is the thin SVD of $\hat{X} - X^{\Omega}$.
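The trace-norm subgradient from Watson's characterization can be sketched with NumPy's SVD; the function name and the numerical-rank tolerance `tol` are illustrative choices.

```python
import numpy as np

def trace_norm_subgradient(A, tol=1e-10):
    """A subgradient of the trace (nuclear) norm at A.

    For the thin SVD A = U diag(s) V^T, the matrix U V^T restricted to the
    singular vectors with nonzero singular values is a valid subgradient.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    r = int(np.sum(s > tol))        # numerical rank
    return U[:, :r] @ Vt[:r, :]
```

Two properties worth checking: the subgradient has spectral norm at most one, and its inner product with $A$ recovers $\|A\|_{\mathrm{tr}}$, which is exactly what the subgradient inequality requires at the point $A$.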
We will perform iterative optimization. In the $t$-th iteration, after we have a subgradient $g_t$ of Eq (2), $\hat{M}$ will be updated using gradient descent by

$$\hat{M}_{t + \frac{1}{2}} = \hat{M}_t - \eta_t g_t,$$

where $\eta_t$ is the step size. We then have a closed-form solution taking the trace norm into consideration, which is

$$\hat{M}_{t+1} = U \max\big(\Sigma - \lambda_3 \eta_t I, \, 0\big) V^{\top},$$

where $U \Sigma V^{\top}$ is the SVD of $\hat{M}_{t + \frac{1}{2}}$ and the maximum is taken entrywise.
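The closed-form step above is singular value thresholding (soft-thresholding of the spectrum). A minimal sketch, where `eta` (step size) and `lam` (the trace-norm weight) are illustrative parameter names:

```python
import numpy as np

def svt_update(M_half, eta, lam):
    """Closed-form proximal step for the trace norm.

    M_half is the iterate after the (sub)gradient step; each singular
    value is shrunk by eta * lam and clipped at zero.
    """
    U, s, Vt = np.linalg.svd(M_half, full_matrices=False)
    s_shrunk = np.maximum(s - eta * lam, 0.0)   # soft-threshold the spectrum
    return U @ np.diag(s_shrunk) @ Vt
```

A large shrinkage drives all singular values to zero, while a zero step size leaves the iterate unchanged, so the update interpolates between the input matrix and a low-rank approximation of it.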
We will call our proposed method co-completion (COCO) and give the whole process in Algorithm 1.
Our use of subgradient descent for trace-norm minimization is related to previous works on stochastic subgradient descent for nuclear-norm regularization [26, 27]. In both works, a random probe matrix is constructed, and the gradient is multiplied by the probe matrix in each iteration to generate a stochastic gradient. In this way, the expectation of the stochastic gradient calculated in each iteration equals the exact gradient, which agrees with the principle of stochastic gradients in ordinary stochastic gradient descent (SGD). These works also provided theoretical guarantees on the convergence rate for such problems. As their objective is to save space for trace-norm minimization, and here we do not consider space limitations, we use plain subgradient descent instead of its stochastic version. However, their convergence results on SGD can serve as a weak guarantee for the convergence of our algorithm.
In this section, we give a bound for the following constrained counterpart of Eq (1):

$$\max_{\hat{M}} \; \mathcal{L}(\hat{Z}) - \lambda \sum_{(i,j) \in \Omega_X} \big(\hat{X}_{ij} - X_{ij}\big)^2 \quad \text{s.t.} \quad \big\|\hat{M}\big\|_{\mathrm{tr}} \le t_1, \; \big\|\hat{X} - X^{\Omega}\big\|_{\mathrm{tr}} \le t_2, \qquad (5)$$

where $\mathcal{L}(\hat{Z})$ is the log-likelihood of the observed labels. Note that if we change the $\max$ operator in Eq (5) to $\min$ and change the objective to its additive inverse, we obtain an equivalent problem. We can then use Lagrange multipliers to move the two inequality constraints into the objective. In this way, the problem takes a form similar to Eq (1). Thus, by appropriately setting the parameters $\lambda_1$ and $\lambda_3$ in Eq (1), the maximization problem Eq (5) and the minimization problem Eq (1) are equivalent.
We assume that the ground-truth matrix $M = [X, Z]$ is feasible for Eq (5). We further define

$$M^{\Omega} = [X^{\Omega}, \mathbf{0}],$$

where $\mathbf{0}$ is an all-zero matrix of size $n \times m$. Since $X^{\Omega}$ is a constant matrix, subtracting it will not affect optimization of the objective function; i.e., maximizing the objective with or without this constant shift under the same constraints will result in the same solution $\hat{M}$.
In the following, we will start deriving our theoretical results.
for some , and . Then
where $C_1$ and $C_2$ are constants, and the expectations are both over the choice of $\Omega$ and the draw of $Y$.
With Lemma 2, we can have the following results:
Assume that $M$ is feasible for Eq (5) and the largest entry of $Z$ is less than $\alpha$. Suppose that $\Omega$ is chosen independently at random following a binomial model with probability $\rho$. Suppose that $Y$ is generated using $\sigma(Z)$. Let $\hat{M}$ be the solution to the optimization problem Eq (5). Then, with high probability, we have
where $\mathrm{KL}(\cdot \| \cdot)$ denotes the Kullback-Leibler divergence between two matrices. For two scalars $p, q \in (0, 1)$, it is defined as

$$\mathrm{KL}(p \,\|\, q) = p \log \frac{p}{q} + (1 - p) \log \frac{1 - p}{1 - q},$$

and for matrices it is applied entrywise and averaged.
By enforcing the constraints of Eq (5) and using a standard property of the sigmoid function $\sigma$, we can obtain our main result:
Assume that $M$ is feasible for Eq (5). Suppose that $\Omega$ is chosen independently at random following a binomial model with probability $\rho$. Suppose that $Y$ is generated using $\sigma(Z)$. Let $\hat{M}$ be the solution to the optimization problem Eq (5). Then, with high probability, we have
Furthermore, as long as , and further assuming that , we will have
Theorem 2 tells us that the average KL-divergence between the recovered and true label matrices, together with the suitably weighted average Frobenius norm of the feature recovery error, is bounded above. Note that when the feature part dominates the bound, the bound implies that we can achieve nearly perfect feature recovery with a sample complexity agreeing with previous perfect-recovery results, although the confidence degenerates a bit. Otherwise, our bound also agrees with the previous bound on one-bit matrix completion.
We evaluate the proposed algorithm COCO on both synthetic and real datasets. Our implementation is in Matlab, except for the neural network, which is implemented in Python and used to evaluate the generalization performance of classifiers trained on the recovered data.
4.1 Experimental Results on Synthetic Data
Our goal is to show the recovery performance of our proposed algorithm on both the feature matrix and the label matrix. We will also show how adding the trace-norm term on the feature difference can enhance recovery.
Settings and Baselines
To create synthetic data, following previous works generating a low-rank matrix, we first generate random matrices $U$ and $V$ with each entry drawn uniformly and independently at random. We then construct $M$ by $M = U V^{\top}$. The first $d$ columns of $M$ are regarded as the feature matrix $X$, and the rest is regarded as the latent matrix $Z$. We then set each entry of $Y$ to $1$ with probability $\sigma(Z_{ij})$ and to $0$ otherwise, where $\sigma$ is the sigmoid function. Finally, both $X$ and $Y$ are observed with probability $\rho$ for each entry.
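The synthetic-data protocol above can be sketched in NumPy; the sizes `n, d, m`, the rank `k`, the observation probability `rho`, and the uniform range `[0, 1)` are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n, d, m, k, rho = 200, 30, 10, 5, 0.4    # illustrative sizes, rank, observe prob

# Low-rank ground truth: M = U V^T, entries of U and V uniform on [0, 1).
U = rng.random((n, k))
V = rng.random((d + m, k))
M = U @ V.T
X, Z = M[:, :d], M[:, d:]                # first d columns = features, rest = latent

# Labels: Y_ij = 1 with probability sigma(Z_ij), 0 otherwise.
Y = (rng.random((n, m)) < sigmoid(Z)).astype(int)

# Each entry of X and Y is observed independently with probability rho.
mask_X = rng.random((n, d)) < rho
mask_Y = rng.random((n, m)) < rho
```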
We set a variety of different values for the matrix sizes, the rank, and the observation probability. In the experiments, we fix the weight of the log-likelihood term and set all other weight parameters to fixed values. The step size is set to an initial value and decays geometrically until it falls below a threshold. We compare two cases: one (COCO-0) does not consider the trace-norm term on the feature difference in the optimization; the other (COCO-1) includes it, motivated by the elastic net. The Maxide method first completes the features using proximal gradient descent and then performs weakly supervised multi-label learning [9]. The Mc method completes the concatenation of $X$ and $Y$, as proposed in [8]. We repeat each experiment five times and report the average results.
We measure the recovery performance on the feature matrix by the relative error $\|\hat{X} - X\|_F / \|X\|_F$. The classification performance is measured by the Hamming loss. More specifically, after obtaining $\hat{Z}$, we set $\hat{Y}_{ij} = 1$ if $\sigma(\hat{Z}_{ij}) \ge 1/2$ and $\hat{Y}_{ij} = 0$ otherwise. The recovery performance on $Y$ is then measured by the average zero-one loss between $\hat{Y}$ and $Y$. The results are shown in Table 1. Note that we have results for all parameter combinations; we present part of them here and put all others in the appendix. From the results, we can see that, when the data satisfy our assumption, our proposed COCO with the trace-norm term in the optimization objective is always better at feature recovery. For label recovery, our proposal is always better than the two baselines, i.e., Maxide and Mc; occasionally it is merely comparable to COCO-0. This is reasonable, since the trace-norm term puts more emphasis on feature recovery and does not aid label recovery much. Comparing Maxide and Mc, we find that both algorithms have the same recovery results on $X$, but Maxide performs much worse on $Y$ than Mc. This may be due to the fact that, when recovering $Y$, Mc uses additional information on the structure of the label matrix instead of using only the imperfectly recovered feature data.
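The two evaluation metrics can be sketched as follows; the function names are illustrative, not identifiers from our Matlab implementation.

```python
import numpy as np

def relative_error(X_hat, X):
    """Relative recovery error: ||X_hat - X||_F / ||X||_F."""
    return np.linalg.norm(X_hat - X) / np.linalg.norm(X)

def hamming_loss(Y_hat, Y):
    """Fraction of label entries on which the prediction disagrees."""
    return np.mean(Y_hat != Y)
```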
To further study the impact of the trace-norm term on the final performance, we illustrate in Figure 2 how the recovery errors of $X$ and $Y$ decrease as the iterations proceed. We can see that the feature recovery error of COCO-1 decreases to a lower point at convergence, achieving slightly better recovery results than COCO-0. Although the label recovery error also decreases to a lower point, the difference is not obvious. We conclude that adding the trace-norm term to the optimization objective benefits feature recovery.
The best result and comparable ones (pairwise single-tailed t-tests at the given confidence level) in each row are shown in bold.
4.2 Experimental Results on Real Data
We evaluate the proposed algorithm on real data, using the CAL500 dataset [13]. CAL500 is a music dataset containing instances, features, and labels. As previously shown in Figure 1, CAL500's annotation matrix does not have the low-rank or approximately low-rank property. In this experiment, we not only report the recovery performance of COCO, but also use the recovered data to train new classifiers and report the test errors of the trained classifiers.
Settings and Baselines
We first divide the dataset into two parts, one for training and one for testing. For the training data, we randomly sample a fraction of entries as observed, and make all other entries unobserved. We use the same parameter setting as in Section 4.1, except that the step size keeps decaying without stopping. Here we also compare with Maxide and Mc; for the two compared methods, we use the default parameter settings in their original codes. After the data are recovered, we use the state-of-the-art multi-label classification method LIMO (label-wise and instance-wise margins optimization) and a single-hidden-layer neural network to test the generalization performance when the recovered data are used to train a classifier. For a fair comparison, we also use the clean data to train a classifier and record its test error, which serves as the best baseline for the current model; we call this method the oracle. All experiments are repeated twenty times, and we report the average results.
The results are reported in Table 2. We can see that our proposed COCO achieves the best recovery results among all three methods. For generalization performance, our method also achieves the best results among all compared methods, and it is closest to the oracle trained on clean data.
Table 2: Recovery error and test error on CAL500.
In this paper, we considered the problem where both features and labels have missing values in weakly supervised multi-label learning. Realizing that previous methods either recover the features while ignoring supervised information or make unrealistic assumptions, we proposed a new method to deal with such problems. More specifically, we considered a latent matrix generating the label matrix; since labels are correlated, this latent matrix together with the features forms a big low-rank matrix. We then gave our optimization objective and algorithm, motivated by the elastic net. Experimental results on both simulated and real-world data validated the effectiveness of our proposed method.
We want to thank Bo-Jian Hou for discussion and polishing of the paper.
-  Zhi-Hua Zhou and Min-Ling Zhang. Multi-label learning. In Claude Sammut and Geoffrey I. Webb, editors, Encyclopedia of Machine Learning and Data Mining, pages 875–881. Springer US, 2017.
-  Minmin Chen, Alice X. Zheng, and Kilian Q. Weinberger. Fast image tagging. In Proceedings of the 30th International Conference on Machine Learning, pages 1274–1282, 2013.
-  Viet-An Nguyen, Jordan L. Boyd-Graber, Philip Resnik, and Jonathan Chang. Learning a concept hierarchy from multi-labeled documents. In Advances in Neural Information Processing Systems 27, pages 3671–3679, 2014.
-  Zheng Chen, Minmin Chen, Kilian Q. Weinberger, and Weixiong Zhang. Marginalized denoising for link prediction and multi-label learning. In Proceedings of the 29th AAAI Conference on Artificial Intelligence, pages 1707–1713, 2015.
-  Hsiang-Fu Yu, Prateek Jain, Purushottam Kar, and Inderjit S. Dhillon. Large-scale multi-label learning with missing labels. In Proceedings of the 31th International Conference on Machine Learning, pages 593–601, 2014.
-  Yu-Yin Sun, Yin Zhang, and Zhi-Hua Zhou. Multi-label learning with weak label. In Proceedings of the 24th AAAI Conference on Artificial Intelligence, 2010.
-  Serhat Selcuk Bucak, Rong Jin, and Anil K. Jain. Multi-label learning with incomplete class assignments. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2801–2808, 2011.
-  Andrew B. Goldberg, Xiaojin Zhu, Ben Recht, Jun-Ming Xu, and Robert D. Nowak. Transduction with matrix completion: Three birds with one stone. In Advances in Neural Information Processing Systems 23, pages 757–765, 2010.
-  Miao Xu, Rong Jin, and Zhi-Hua Zhou. Speedup matrix completion with side information: Application to multi-label learning. In Advances in Neural Information Processing Systems 26, pages 2301–2309, 2013.
-  Emmanuel J. Candès and Benjamin Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717–772, 2009.
-  Emmanuel J. Candès and Yaniv Plan. Matrix completion with noise. Proceedings of the IEEE, 98(6):925–936, 2010.
-  Ofer Dekel and Ohad Shamir. Learning to classify with missing and corrupted features. In Proceedings of the 25th International Conference on Machine Learning, pages 216–223, 2008.
-  Douglas Turnbull, Luke Barrington, David A. Torres, and Gert R. G. Lanckriet. Semantic annotation and retrieval of music and sound effects. IEEE Transactions on Audio, Speech & Language Processing, 16(2):467–476, 2008.
-  André Elisseeff and Jason Weston. A kernel method for multi-labelled classification. In Advances in Neural Information Processing Systems 14, pages 681–687, 2001.
-  Min-Ling Zhang and Zhi-Hua Zhou. Multilabel neural networks with applications to functional genomics and text categorization. IEEE Transactions on Knowledge and Data Engineering, 18(10):1338–1351, 2006.
-  Sheng-Jun Huang, Wei Gao, and Zhi-Hua Zhou. Fast multi-instance multi-label learning. In Proceedings of the 28th AAAI Conference on Artificial Intelligence, pages 1868–1874, 2014.
-  Hui Zou and Trevor Hastie. Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society, Series B, 67:301–320, 2005.
-  Mark A. Davenport, Yaniv Plan, Ewout van den Berg, and Mary Wootters. 1-bit matrix completion. arXiv, abs/1209.3672, 2012.
-  Mark Herbster, Stephen Pasteris, and Massimiliano Pontil. Mistake bounds for binary matrix completion. In Advances in Neural Information Processing Systems 29, pages 3954–3962, 2016.
-  Renkun Ni and Quanquan Gu. Optimal statistical and computational rates for one bit matrix completion. In Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, pages 426–434, 2016.
-  Yurii Nesterov. Gradient methods for minimizing composite objective function. Technical report, Université catholique de Louvain, Center for Operations Research and Econometrics (CORE), 2007.
-  Paul Tseng. On accelerated proximal gradient methods for convex-concave optimization. Technical report, University of Washington, WA, 2008.
-  Shai Shalev-Shwartz and Ambuj Tewari. Stochastic methods for l1 regularized loss minimization. In Proceedings of the 26th International Conference on Machine Learning, pages 929–936, 2009.
-  John Langford, Lihong Li, and Tong Zhang. Sparse online learning via truncated gradient. In Advances in Neural Information Processing Systems 21, pages 905–912, 2008.
-  G. A. Watson. Characterization of the subdifferential of some matrix norms. Linear Algebra and its Applications, 170:33–45, 1992.
-  Haim Avron, Satyen Kale, Shiva Prasad Kasiviswanathan, and Vikas Sindhwani. Efficient and practical stochastic subgradient descent for nuclear norm regularization. In Proceedings of the 29th International Conference on Machine Learning, 2012.
-  Lijun Zhang, Tianbao Yang, Rong Jin, and Zhi-Hua Zhou. Stochastic proximal gradient descent for nuclear norm regularization. arXiv, abs/1511.01664, 2015.
-  Benjamin Recht. A simpler approach to matrix completion. Journal of Machine Learning Research, 12:3413–3430, 2011.
-  Jian-Feng Cai, Emmanuel J. Candès, and Zuowei Shen. A singular value thresholding algorithm for matrix completion. SIAM Journal on Optimization, 20(4):1956–1982, 2010.
-  Xi-Zhu Wu and Zhi-Hua Zhou. A unified view of multi-label performance measures. In Proceedings of the 34th International Conference on Machine Learning, pages 3780–3788, 2017.