In data mining and machine learning tasks, a set of data objects is usually represented as a feature matrix, where each row is an object and each column is one dimension of the features. In many applications, the feature matrix may be only partially observed, with missing values arising for various reasons (Liu and Motoda, 1998). For example, in disease diagnosis, a patient is an object, and the feature space consists of physical examination results. Some patients may selectively take only some of the examinations, leaving the other features missing (Lim et al., 2005). In wireless sensor network analysis, multiple sensors detect different aspects of the environment, and an expired sensor will cause missing values in the corresponding features (Hou et al., 2017).
When feature values are severely missing, the performance of a classification model trained on such a dataset degrades significantly. It is thus important to recover the missing values. The most reliable way is to acquire the ground-truth values of the missing features. Unfortunately, acquiring a feature value usually involves special devices or complex processes, leading to high acquisition costs. Nevertheless, features are often correlated with each other, and redundant information is shared across features. Thus it may not be necessary to query all feature values; instead, one can query a subset of the features and recover the others from the observed entries.
Matrix completion would be a useful tool for recovering missing entries of the feature matrix, which has been extensively studied (Chen et al., 2014; Király et al., 2015; Zeng et al., 2015; Sun et al., 2017). However, existing approaches neglect the class labels, which may provide supervised information to guide the matrix completion to a desired solution. In practice, the observed entries may be noisy, and are not adequate to provide sufficient information to recover the missing values. Especially when the missing rate is high, there could be a large number of possible matrices that can well fit the observed values. The class labels, which strongly depend on the feature representations, are expected to narrow the choice over all possible matrices.
Furthermore, different features may have different contributions to recovering the missing values as well as improving the classification model. Some features are crucial while others may be less important. It is thus practical to actively select the most informative features to acquire their ground-truth values, and recover the missing values based on the observed features.
Traditional active learning algorithms select the most informative unlabeled instances to query their labels, and can significantly reduce the annotation cost (Settles, 2012; Huang et al., 2014). Similar ideas have been extended to perform active feature acquisition (Ruchansky et al., 2015; Bhargava et al., 2017; Mavroforakis et al., 2017). These methods typically try to estimate the expected utility of a feature value for improving the model performance, and then query the ground-truth value of the feature with the maximum expected utility. However, some features with high potential utility can be recovered by matrix completion, and querying their values can therefore be a waste of acquisition cost.
In this paper, we jointly perform active feature querying and supervised matrix completion to minimize the acquisition cost. To exploit the label information for effective matrix completion, we propose an objective function that consists of the reconstruction error, the low rank regularizer and the empirical classification error. By minimizing this objective function, the recovered feature matrix is expected to on one hand well fit the structure in feature space, and on the other hand follow the label supervision to be discriminative. To select the most informative entry for active feature acquisition, we propose a variation based criterion, which estimates the informativeness of a feature value on recovering the missing values as well as improving the classification model. Furthermore, we introduce a bi-objective optimization method to handle the case where the acquisition cost varies for different features.
Theoretical analysis is presented to give an upper bound on the reconstruction error of the proposed matrix completion algorithm. Further, experiments are performed on different datasets to validate the effectiveness of the proposed approach. Results demonstrate that our approach can recover the matrix accurately, and achieve effective classification with less feature acquisition cost.
The rest of the paper is organized as follows: we review related works in Section 2, and introduce the proposed approach in Section 3. Section 4 presents the settings and results of the experiments, followed by the conclusion in Section 5.
2. Related Work
Active learning has been widely studied for reducing the labeling cost (Settles, 2012; Huang et al., 2014). Classical studies focused on designing a selection criterion such that selected instances can improve the model maximally. Informativeness is one of the most commonly used criteria, which estimates the ability of an instance in reducing the uncertainty of a statistical model. Typical techniques for informative sampling include statistical methods (Cohn et al., 1995), SVM-based methods (Tong and Koller, 2001) and query-by-committee methods (Freund et al., 1997), etc.
Different from traditional active learning, which targets reducing the labeling cost, another branch of research employs similar ideas to reduce the feature acquisition cost (Luo et al., 2015). These methods iteratively query the ground-truth values of actively selected features, and are expected to improve the learning performance with the fewest queries. Some methods estimate the expected utility of each feature for improving the model, and then select the top features with the maximum expected utility to query their values. For example, in (Melville et al., 2005), a criterion was proposed to estimate the expected improvement in accuracy per unit cost, and the most cost-effective feature values were then iteratively acquired. A similar approach was proposed in (Vu et al., 2008), where the learning task is clustering instead of classification, so the corresponding criterion estimates the expected improvement in clustering quality per unit cost. There is another category of methods called instance completion. Instead of querying one specific feature value, these methods select a small batch of incomplete instances with missing features, and query all missing values of the selected instances at a time. The instances are actively selected with the aim of improving the classification performance. For example, the authors of (Sankaranarayanan and Dhurandhar, 2013) proposed to estimate the expected utility of each instance for active selection, and also derived a probabilistic lower bound on the error reduction achieved by the proposed technique. The method in (Dhurandhar and Sankaranarayanan, 2015) chose the top-ranked instances based on a derived upper bound on the expected distance between the next classifier and the final classifier.
A common limitation of these methods is that they do not consider the case where some of the missing features can be accurately recovered from the observed entries, and thus may waste acquisition cost on unnecessary queries. One study tried to query both missing features and labels, and built an imputation model for missing features (Moon et al., 2014). However, it requires a complete set of training examples for building the model, which may not be available in real applications.
Matrix completion is a classical approach for recovering the missing entries of a partially observed matrix. It has been successfully applied to collaborative filtering (Rennie and Srebro, 2005), dimensionality reduction (Weinberger and Saul, 2006), multi-class/multi-label learning (Goldberg et al., 2010; Cabral et al., 2015), clustering (Eriksson et al., 2011; Yi et al., 2012), etc. One main category of existing methods is statistical matrix completion based on the low-rank assumption (Candès and Recht, 2009; Chen et al., 2014; Keshavan et al., 2010; Wen et al., 2012; Jain et al., 2013; Negahban and Wainwright, 2012). These methods usually transform the matrix completion task into an optimization problem, and try to find a low-rank matrix to fit the observed entries. There are some structural matrix completion methods which explicitly analyze the information contained in the observed entries and are capable of evaluating whether the observations are theoretically sufficient for recovering the missing values (Meka et al., 2009; Singer and Cucuringu, 2010; Király et al., 2015).
In some cases the observed entries are not sufficient to recover the others, and further queries are needed to acquire the ground-truth values of some missing entries. Against this background, several active learning approaches have been proposed to query the most informative entries for completion (Ruchansky et al., 2015). For example, a general framework was proposed in (Chakraborty et al., 2013) for active matrix completion, where existing matrix completion methods can be enhanced with an uncertainty sampling strategy. In (Sutherland et al., 2013), the authors first estimated the posterior distribution with variational approximations or Markov chain Monte Carlo sampling, and then queried the entries for collaborative prediction. The algorithm in (Ruchansky et al., 2015) unified active querying and matrix completion in a single framework. There are some other approaches that study active completion with specific requirements on the matrix (Bhargava et al., 2017; Mavroforakis et al., 2017).
While the above studies are not theoretically grounded, there are two works focusing on adaptive querying for matrix completion with theoretical results. One is (Krishnamurthy and Singh, 2013), which first samples several rows and adaptively decides which columns need to be fully observed. The other is (Bhargava et al., 2017), which actively completes a low-rank positive semi-definite matrix. Although these two works are theoretically sound, they do not consider any supervision information.
3. The Proposed Approach
We denote by $\mathcal{D} = \{(\mathbf{x}_i, y_i)\}_{i=1}^{n}$ a dataset with $n$ instances, where $\mathbf{x}_i \in \mathbb{R}^{d}$ is a $d$-dimensional real feature vector for the $i$-th instance and $y_i$ is its class label. Let $X \in \mathbb{R}^{n \times d}$ be the ground-truth feature matrix of the instances, where each column represents one dimension of the $d$-dimensional feature space. Here we consider the feature missing problem, where $X$ is only partially observed. We denote by $\Omega$ the set of indices for the observed entries of $X$. In the rest of this section, we will first propose a supervised matrix completion method, and then present an active feature acquisition approach.
3.1. Supervised matrix completion
We focus on the matrix completion problem in the supervised classification setting, where the task is to learn a function $f$ for predicting the class labels of instances. Matrix completion is a challenging problem because the observed entries are usually limited, and often do not contain sufficient information for recovering the missing values. Since there can be a large number of possible matrices that perfectly match the observed entries, external knowledge is needed to find the one closest to the ground-truth. Low rank is a common assumption for matrix completion, which exploits the structure information in the feature space. In this paper, we further exploit the supervised information contained in class labels to guide the matrix completion to a desirable solution. The classification function is a mapping from the feature space to the label space, and thus can be utilized to inversely transfer the label information for feature recovery. For example, given an instance with missing features and its class label $y$, we denote by $\hat{\mathbf{x}}$ a recovered feature vector. Assuming the classifier is reliable, if the prediction $f(\hat{\mathbf{x}})$ is far away from the ground-truth label $y$, then it is likely that the feature vector has not been accurately recovered. Based on this motivation, we propose to minimize the empirical classification error along with the reconstruction error and the matrix rank within one unified framework, where the feature matrix and the classification model are alternately optimized.
On one hand, we want to accurately recover the ground-truth feature matrix $X$ from the partial observation with the low-rank assumption. On the other hand, the classification model $f$, which is trained with the recovered matrix $\hat{X}$, is expected to have a small empirical error. Based on this argument, we define our objective function as follows:

$$\min_{\hat{X}, f} \; \frac{1}{2}\big\|\mathcal{P}_{\Omega}(\hat{X} - X)\big\|_F^2 + \lambda \|\hat{X}\|_* + \mu \sum_{i=1}^{n} \ell\big(f(\hat{\mathbf{x}}_i), y_i\big), \quad (1)$$

where $\mathcal{P}_{\Omega}$ keeps the entries in $\Omega$ and zeros out the others, $\|\cdot\|_*$ is the trace norm, $\|\cdot\|_F$ is the Frobenius norm, and $\lambda, \mu$ are regularization parameters.
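To make the objective concrete, the following sketch evaluates Eq. (1) for the linear classifier with the squared loss. This is a minimal NumPy illustration; the function and array names (`objective`, `X_hat`, `mask`, and so on) are our own, not part of the paper's implementation.

```python
import numpy as np

def objective(X_hat, X_obs, mask, w, y, lam=1.0, mu=1.0):
    """Value of Eq. (1): masked reconstruction error + trace-norm
    regularizer + empirical squared loss of a linear classifier."""
    recon = 0.5 * np.sum(((X_hat - X_obs) * mask) ** 2)           # ||P_Omega(X_hat - X)||_F^2 / 2
    trace_norm = np.sum(np.linalg.svd(X_hat, compute_uv=False))   # sum of singular values
    clf_loss = np.sum((X_hat @ w - y) ** 2)                       # squared classification loss
    return recon + lam * trace_norm + mu * clf_loss
```

Here `mask` is the 0/1 indicator of $\Omega$, so only observed entries contribute to the reconstruction term.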
We assume that the loss function $\ell$ can be written as a function of $\hat{X}$ parameterized by $\mathbf{w}$, and that it is Lipschitz smooth with respect to $\hat{X}$. One example is the linear classifier with the squared loss, i.e., $\ell\big(f(\hat{\mathbf{x}}_i), y_i\big) = (\hat{\mathbf{x}}_i^\top \mathbf{w} - y_i)^2$ with $f(\hat{\mathbf{x}}) = \hat{\mathbf{x}}^\top \mathbf{w}$; its gradient with respect to $\hat{\mathbf{x}}_i$ is $2(\hat{\mathbf{x}}_i^\top \mathbf{w} - y_i)\mathbf{w}$, which is Lipschitz continuous with constant $2\|\mathbf{w}\|_2^2$, where $\|\cdot\|_2$ denotes the $\ell_2$ norm. In the following, we write the empirical loss term as $\ell(\hat{X}; \mathbf{w})$ for notational simplicity. Then the optimization problem becomes

$$\min_{\hat{X}, \mathbf{w}} \; \frac{1}{2}\big\|\mathcal{P}_{\Omega}(\hat{X} - X)\big\|_F^2 + \lambda \|\hat{X}\|_* + \mu\, \ell(\hat{X}; \mathbf{w}), \quad (2)$$

which can be solved by alternately optimizing $\hat{X}$ and $\mathbf{w}$.
When optimizing $\hat{X}$ with $\mathbf{w}$ fixed, we have the subproblem

$$\min_{\hat{X}} \; \frac{1}{2}\big\|\mathcal{P}_{\Omega}(\hat{X} - X)\big\|_F^2 + \mu\, \ell(\hat{X}; \mathbf{w}) + \lambda \|\hat{X}\|_*.$$

We will exploit accelerated proximal gradient descent (Tseng, 2008), a classical optimization technique for trace norm minimization, to solve this problem. Let $g(\hat{X}) = \frac{1}{2}\|\mathcal{P}_{\Omega}(\hat{X} - X)\|_F^2 + \mu\, \ell(\hat{X}; \mathbf{w})$ denote the smooth part of the objective, and let $L$ be the Lipschitz constant of $\nabla g$. We summarize the main steps here:

Choose $\hat{X}_0 = \hat{X}_1$, $Z_1 = \hat{X}_1$, $t_1 = 1$. Set $k = 1$.

In the $k$-th iteration,

$$\hat{X}_k = \arg\min_{X} \; \frac{L}{2}\Big\|X - \Big(Z_k - \frac{1}{L}\nabla g(Z_k)\Big)\Big\|_F^2 + \lambda \|X\|_*, \qquad t_{k+1} = \frac{1 + \sqrt{1 + 4 t_k^2}}{2}, \qquad Z_{k+1} = \hat{X}_k + \frac{t_k - 1}{t_{k+1}}\big(\hat{X}_k - \hat{X}_{k-1}\big).$$
The iteration continues until convergence. In the above steps, we have not specified how to obtain $\hat{X}_k$; we explain this next. Writing $Y_k = Z_k - \frac{1}{L}\nabla g(Z_k)$, we rewrite the problem as

$$\hat{X}_k = \arg\min_{X} \; \frac{L}{2}\|X - Y_k\|_F^2 + \lambda \|X\|_*,$$

which is equivalent to

$$\hat{X}_k = \arg\min_{X} \; \frac{1}{2}\|X - Y_k\|_F^2 + \frac{\lambda}{L} \|X\|_*.$$

This can be solved by Singular Value Thresholding (SVT) (Cai et al., 2010), which performs singular value decomposition on $Y_k$. Let $Y_k = U \Sigma V^\top$; the solution is given by $\hat{X}_k = U \max\big(\Sigma - \frac{\lambda}{L} I,\, 0\big) V^\top$.
Finally, the classification model $\mathbf{w}$ is optimized with $\hat{X}$ fixed, which can be solved efficiently using existing algorithms. These two procedures are repeated until convergence.
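As an illustration, the alternating scheme with the SVT proximal step can be sketched in NumPy. This is a simplified sketch assuming the linear classifier with squared loss; the step size, iteration counts, and function names are placeholders of our own, not the paper's settings.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of
    tau * ||.||_* evaluated at M (Cai et al., 2010)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def complete(X_obs, mask, y, lam=1.0, mu=0.1, step=0.05, iters=100):
    """Alternate a proximal-gradient step on X_hat with refitting
    the linear classifier w by least squares."""
    X_hat = X_obs * mask
    w = np.zeros(X_hat.shape[1])
    for _ in range(iters):
        # gradient of the smooth part: masked reconstruction + classification loss
        grad = (X_hat - X_obs) * mask + 2.0 * mu * np.outer(X_hat @ w - y, w)
        X_hat = svt(X_hat - step * grad, step * lam)   # proximal step on the trace norm
        w = np.linalg.lstsq(X_hat, y, rcond=None)[0]   # refit classifier with X_hat fixed
    return X_hat, w
```

The SVT call realizes the closed-form solution $U \max(\Sigma - \frac{\lambda}{L} I, 0) V^\top$ described above, with `step` playing the role of $1/L$.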
3.2. Active feature acquisition
In this subsection, we discuss how to actively query the ground-truth values of the most informative features, with the goal of improving the model the most with the smallest number of queries. We will first present a novel criterion for estimating the informativeness of a feature, and then introduce a method to handle the case where the acquisition cost varies across features.
3.2.1. Variance-based selection
In traditional active learning, if the model is less certain about its prediction on an instance, the instance is considered more informative for improving the model, and is more likely to be selected for label querying (Huang et al., 2014). Inspired by this idea, we also propose an uncertainty criterion to estimate the informativeness of a feature. The challenge here is that the informativeness should reflect the usefulness of a feature both for recovering other entries and for training the classification model. Notice that the objective function defined in Eq. (1) does consider the two aspects simultaneously. At each iteration of active learning, after a small batch of feature values is acquired, the algorithm in Section 3.1 will be employed to optimize Eq. (1) for matrix completion. The output of the matrix completion may vary from iteration to iteration. If the variance of an entry over iterations is large, it implies that the entry cannot be confidently determined by the algorithm, and thus may contain more useful information for recovering the feature matrix and optimizing the classification model. Denoting by $\hat{X}^{(t)}$ the completed matrix at the $t$-th iteration, the informativeness of the $j$-th feature of the $i$-th instance after $T$ iterations is defined as:

$$I(i, j) = \frac{1}{T} \sum_{t=1}^{T} \Big(\hat{X}^{(t)}_{ij} - \bar{X}_{ij}\Big)^2, \quad (6)$$

where $\bar{X}_{ij}$ is the mean value of $\hat{X}^{(t)}_{ij}$ over all $T$ iterations. Then a small batch of the most uncertain features, i.e., those with the largest informativeness, is selected to query their ground-truth values. The pseudo code of the algorithm for active feature acquisition is summarized in Algorithm 1. We call the proposed algorithm Active Feature Acquisition with Supervised Matrix Completion (AFASMC).
Note that it is not necessary to calculate the variance based on all iterations. Generally speaking, it is more important to capture the change of an entry within recent iterations. For example, if an entry has a large variance at an early stage, but becomes stable after a few queries, it implies that this entry may have been well recovered from the recently acquired features, and thus does not need to be queried any more. We will discuss this in the experiments in more detail.
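Assuming the per-iteration completed matrices are stored, the variance criterion with a recent-iteration window can be sketched as follows (the array layout, the `window` parameter, and the function names are our own assumptions):

```python
import numpy as np

def informativeness(history, window=8):
    """Per-entry variance of the completed matrix over the last
    `window` iterations; higher variance = more informative."""
    H = np.stack(history[-window:])          # shape: (iterations, n, d)
    return H.var(axis=0)

def select_batch(history, missing_mask, batch_size, window=8):
    """Pick the `batch_size` missing entries with the largest variance."""
    score = informativeness(history, window) * missing_mask  # zero out observed entries
    flat = np.argsort(score, axis=None)[::-1][:batch_size]
    return [np.unravel_index(i, score.shape) for i in flat]
```

Restricting `history[-window:]` to recent iterations implements the idea above: an entry that was volatile early on but stabilized after recent queries no longer scores highly.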
3.2.2. Cost-aware selection
Finally, we discuss a more complicated case, where the cost of acquiring a feature value varies across features. This is common in real applications. For example, it is much more costly to perform an fMRI scan than a blood examination when diagnosing a patient. Since there is typically a conflict between the informativeness and the acquisition cost of a feature, we propose to balance these two factors for the best cost-effectiveness. We denote by $c_j$ the cost of acquiring the $j$-th dimension of the features. Note that here we assume the acquisition cost is independent of the instance. We offer two optional strategies to account for the acquisition cost. The most straightforward method is to simply divide the informativeness by the acquisition cost, which gives the selection strategy:

$$(i^*, j^*) = \arg\max_{(i,j)} \frac{I(i, j)}{c_j}. \quad (7)$$

This strategy provides a simple solution for cost-aware selection, but may fail when one of the two factors dominates the other.
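Assuming the informativeness scores and per-feature costs are available as arrays, the cost-normalized strategy above amounts to a simple ranking (names are our own):

```python
import numpy as np

def select_cost_normalized(scores, costs, batch_size):
    """Rank missing entries by informativeness divided by acquisition cost.
    `scores` is an (n, d) array of per-entry variances; `costs` has length d."""
    ratio = scores / costs[np.newaxis, :]            # I(i, j) / c_j
    top = np.argsort(ratio, axis=None)[::-1][:batch_size]
    return [np.unravel_index(t, ratio.shape) for t in top]
```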
In what follows, we introduce another solution based on bi-objective optimization. In each iteration of our algorithm, we select a small batch of missing entries of the feature matrix and acquire their ground-truth values. This is a typical subset selection problem. Generally, a subset selection problem tries to select a subset $S$ from a large set $V$ with an objective function $f$ and a constraint on the subset size. It can be formalized as

$$\max_{S \subseteq V} f(S) \quad \text{s.t.} \quad |S| \le k, \quad (8)$$

where $|S|$ denotes the size of a set, and $k$ is the maximum number of selected elements. Further, for convenience of presentation, the subset selection problem is reformulated as optimizing a binary vector. We introduce a binary vector $\mathbf{s} \in \{0, 1\}^{|V|}$ to indicate the subset membership, where $s_q = 1$ if the $q$-th element in $V$ is selected, and $s_q = 0$ otherwise. Following the method in (Qian et al., 2015), the subset selection problem in Eq. (8) can be written as a bi-objective minimization problem:

$$\min_{\mathbf{s}} \; \big(f_1(\mathbf{s}),\, f_2(\mathbf{s})\big), \quad \text{where } f_1(\mathbf{s}) = \begin{cases} +\infty, & \mathbf{s} = \mathbf{0} \text{ or } |\mathbf{s}| \ge 2k, \\ -f(\mathbf{s}), & \text{otherwise}, \end{cases} \qquad f_2(\mathbf{s}) = |\mathbf{s}|, \quad (9)$$

where $|\mathbf{s}|$ denotes the number of 1s in $\mathbf{s}$. Obviously, the problem performs sparse selection with the target of minimizing $f_1$. Here $f_1$ is set to $+\infty$ to avoid trivial solutions or over-sized subsets. In our case, we want to maximize the informativeness in Eq. (6), and at the same time minimize the acquisition cost of the selected entries. We thus redefine the two objective functions $f_1$ and $f_2$ correspondingly, and obtain the following bi-objective optimization problem.
$$\min_{\mathbf{s}} \; \big(f_1(\mathbf{s}),\, f_2(\mathbf{s})\big), \quad \text{where } f_1(\mathbf{s}) = \begin{cases} +\infty, & \mathbf{s} = \mathbf{0} \text{ or } f_2(\mathbf{s}) \ge 2B, \\ -\sum_{(i,j)} s_{ij}\, I(i, j), & \text{otherwise}, \end{cases} \qquad f_2(\mathbf{s}) = \sum_{(i,j)} s_{ij}\, c_j. \quad (10)$$

Here $B$ is the budget for the acquisition cost in each iteration, and $s_{ij}$ denotes the element of $\mathbf{s}$ corresponding to the entry in the $i$-th row and $j$-th column of the matrix. Again, $f_1$ is set to $+\infty$ to exclude trivial or over-cost solutions. We employ a recently proposed Pareto optimization algorithm called Pareto Optimization for Subset Selection (POSS) (Qian et al., 2015) to solve this problem. POSS is an evolutionary-style algorithm, which maintains a solution archive and iteratively updates the archive by replacing some solutions with better ones. In detail, it initializes the archive with the solution corresponding to the empty subset. In each iteration, a solution $\mathbf{s}$ is selected from the current archive, and a new solution $\mathbf{s}'$ is generated by randomly flipping bits of $\mathbf{s}$. The two objective values $f_1(\mathbf{s}')$ and $f_2(\mathbf{s}')$ are then computed and compared with the archived solutions. Specifically, if there exists one solution $\mathbf{z}$ in the archive that satisfies both of the following conditions:

$$f_1(\mathbf{z}) \le f_1(\mathbf{s}') \quad \text{and} \quad f_2(\mathbf{z}) \le f_2(\mathbf{s}'),$$

then $\mathbf{s}'$ will be ignored; otherwise, $\mathbf{s}'$ will be added to the solution archive, and at the same time all the archived solutions $\mathbf{z}$ that satisfy

$$f_1(\mathbf{s}') \le f_1(\mathbf{z}) \quad \text{and} \quad f_2(\mathbf{s}') \le f_2(\mathbf{z})$$

will be removed from the solution archive. This process is repeated until a specified number of iterations is reached. At last, the best solution with the minimal value of $f_1$ within the cost budget will be selected as the final solution.
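The POSS procedure described above can be sketched as follows. This is a simplified, self-contained version following the generic POSS algorithm (dominance-based archive with bit-wise mutation); the function signature and iteration budget are our own assumptions rather than the paper's exact implementation.

```python
import random

def poss(scores, costs, budget, iters=3000, seed=0):
    """POSS-style bi-objective subset selection (Qian et al., 2015):
    f1 = negative total informativeness (infinite if over budget),
    f2 = total acquisition cost; keep an archive of non-dominated solutions."""
    rng = random.Random(seed)
    m = len(scores)

    def objs(sel):
        cost = sum(c for s, c in zip(sel, costs) if s)
        value = sum(v for s, v in zip(sel, scores) if s)
        f1 = -value if cost <= budget else float("inf")  # exclude over-cost solutions
        return f1, cost

    archive = [[False] * m]                       # start from the empty selection
    for _ in range(iters):
        parent = rng.choice(archive)
        child = [s != (rng.random() < 1.0 / m) for s in parent]  # flip each bit w.p. 1/m
        f1c, f2c = objs(child)
        # ignore the child if some archived solution weakly dominates it
        if any(a1 <= f1c and a2 <= f2c for a1, a2 in map(objs, archive)):
            continue
        # otherwise add it and drop every archived solution it weakly dominates
        archive = [a for a in archive
                   if not (f1c <= objs(a)[0] and f2c <= objs(a)[1])]
        archive.append(child)
    feasible = [a for a in archive if objs(a)[1] <= budget]
    return min(feasible, key=lambda a: objs(a)[0])
```

A toy call with one valuable expensive entry and two cheap low-value entries illustrates the trade-off the archive explores.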
3.3. Theoretical analysis
In this subsection, we present a theoretical bound on the reconstruction error of the supervised matrix completion method introduced in Section 3.1. For the loss between the prediction and the label, i.e., the last term in Eq. (1), here we discuss a stricter case by enforcing the prediction and the label to be equal. It is reasonable to relax this strict constraint as in Eq. (1) to cope with possible noise; the relaxation also allows a more flexible choice of the loss function, as well as ease of optimization. For convenience of presentation, we rewrite the noiseless counterpart of Eq. (1) as:

$$\min_{\hat{X}} \; \frac{1}{2}\big\|\mathcal{P}_{\Omega}(\hat{X} - X)\big\|_F^2 + \lambda \|\hat{X}\|_* \quad \text{s.t.} \quad \hat{X}\mathbf{w} = \mathbf{y}, \quad (11)$$

where $\lambda$ and $\mathbf{w}$ are constants. We assume $X^*$ is the optimal solution of Eq. (11), and analyze the difference between the solution of our algorithm and this optimal solution. Before discussing the properties of the solution, we first define the coherence of a matrix, which will be used later.
Definition 3.1 (Coherence).

For a rank-$r$ matrix $M$ whose SVD is $M = U \Sigma V^\top$, we use the following value as the coherence:

$$\mu(M) = \max\Big(\max_{i} \big\|U^{(i)}\big\|_2^2,\; \max_{j} \big\|V^{(j)}\big\|_2^2\Big),$$

where $U^{(i)}$ denotes the $i$-th row of $U$ and $V^{(j)}$ the $j$-th row of $V$.
Note that, compared to (Candès and Recht, 2009; Xu et al., 2013), for ease of use we do not normalize the coherence by the size of the matrix in this paper. Coherence measures how the values of the entries are distributed in a matrix: the lower the coherence, the more evenly the entries are distributed. Intuitively, if no entry has a "peak" value, the matrix is easier to complete from partial observations. Based on this definition, we give our theoretical results in Theorem 1.
Theorem 1.
Theorem 1 provides an upper bound on the reconstruction error of the proposed supervised matrix completion algorithm. Moreover, a smaller upper bound can be expected by increasing the number of observed entries. This also motivates us to iteratively acquire more feature values. Below we present a sketch of the proof of Theorem 1; a detailed proof is available in a longer version on arXiv (Huang et al., 2018).
To prove Theorem 1, we first define the error matrix $\Delta = \hat{X} - X^*$. Note that because $X^*$ is a constant matrix, subtracting it will not affect the optimization of the objective function, i.e., both formulations lead to the same optimum. Then we will use the following three lemmas to prove Theorem 1:
Lemma 3.2.
Assume that , and , then with probability at least we have
where the expectation is over the choice of $\Omega$.
We can also easily derive the following result from (Davenport et al., 2012):
Lemma 3.3.
If each entry of $\epsilon$ is an independent Rademacher random variable, and each entry of the sampling matrix is independently $1$ with probability $p$ and $0$ with probability $1-p$, then we have
provided that the matrix dimensions are sufficiently large and $C$ is a constant.
Further, the trace norm of the Hadamard product of two matrices is bounded as follows:
Lemma 3.4.

Assume that $A$ and $B$ are two matrices of the same shape; then $\|A \circ B\|_*$ can be bounded accordingly, where $\circ$ denotes the Hadamard product.
where the expectation is over $\Omega$. We can also have

Replacing $\hat{X}$ by $X^*$, which is the optimal solution to Eq. (11), and noting the optimality of $X^*$, we have

Using Lemma 3.2, with the probability stated there, we have

Further applying Lemma 3.4, we have
4. Experiments

In this section, we experimentally investigate the proposed method.
4.1. Settings

We perform experiments on 6 benchmark datasets, namely abalone, letter, image, chess, HillValley and HTRU2. The number of entries in the feature matrix varies from 22,960 to 143,184. For each dataset, we randomly split the data into two subsets, one with 70% of the examples for training and the other with 30% for testing. We repeat the random partition 10 times and report the average results.
In the experiments, we examine the performance of both matrix completion and classification after active queries. The proposed supervised matrix completion algorithm AFASMC is compared with the following methods: OptSpace (Keshavan et al., 2010)—a low-rank matrix completion method based on spectral techniques and manifold optimization; LmaFit (Wen et al., 2012)—a low-rank factorization model based on the nonlinear successive over-relaxation (SOR) algorithm; NNLS (Toh and Yun, 2010)—an accelerated proximal gradient algorithm for low-rank matrix completion.
Our active feature acquisition method AFASMC is also compared with the following methods: QBC (Chakraborty et al., 2013)—active matrix completion with a Query-by-Committee strategy; Stability (Chakraborty et al., 2013)—an active matrix completion method based on committee stability; EM Inference (Moon et al., 2014)—selects the instances with the maximum expected utility; Random—selects features at random.
For AFASMC, the parameters $\lambda$ and $\mu$ are fixed to 1 by default on all datasets. For the other methods, parameters are set or tuned as suggested in the corresponding literature. We employ a linear SVM with default parameters as the classifier for all baselines.
4.2. Results on matrix completion
First, we examine the effectiveness of the proposed method for supervised matrix completion. The performance is evaluated by the matrix reconstruction error as well as the classification accuracy. For each dataset, we compare all the methods under different missing rates. The results are reported in Table 1. The first row of each dataset corresponds to the case where 60% of the entries of the training set are observed, while the second row corresponds to the case with 80% observed. From the table we can see that the proposed method AFASMC achieves the best performance in terms of both reconstruction and classification. The only exception is on HillValley with 60% observed entries, where AFASMC is outperformed by NNLS on the reconstruction error by a tiny margin, but still achieves the best test accuracy.
4.3. Results on classification performance
In this subsection we examine the performance of active feature acquisition. The feature matrix is initialized with 60% observed entries for each dataset, while the remaining 40% are randomly missing. Then active selection is performed iteratively based on the variance criterion. After each query, we perform matrix completion, and then train a linear SVM on the training data. The accuracy of the classifier on the test set is recorded.
Figure 1 plots the performance curves of the compared methods as the number of queried features increases. Note that the performance of EM Inference is unacceptably poor on the abalone, letter and HillValley datasets, so its curves are omitted on these three datasets to avoid distorting the visualization of the other curves. Also, the initial points differ because the methods employ different matrix completion algorithms. It can be observed that the proposed approach AFASMC achieves the best performance in most cases. The performance of EM Inference is not stable: it achieves decent performance on image and chess, but loses its edge on the others. The QBC and Stability methods perform similarly and are less competitive than AFASMC in most cases. Lastly, as expected, the Random method is not effective compared to the active methods. We also present in Table 2 the AUC results after different percentages of entries queried. It can be observed that the proposed approach outperforms the others in most cases.
4.4. Study with varied acquisition costs
As discussed previously, the acquisition costs of different features may be diverse. In this subsection, we examine the performance of the proposed strategies for cost-effective feature acquisition. We compare the two optional methods: AFASMC+Cost1, which simply divides the informativeness by the cost; and AFASMC+Cost2, which balances the informativeness and cost via bi-objective optimization. We specify the acquisition cost of each feature dimension as a random integer drawn from a fixed range. Due to the space limit, we present the results on the largest dataset HTRU2 as an example.
We record the accuracy after each query, and plot the performance curves in Figure 2. The curve of the original AFASMC is also presented for reference. It can be observed that both strategies that consider the acquisition cost achieve better performance than the original AFASMC. Comparing AFASMC+Cost1 and AFASMC+Cost2, the method with bi-objective optimization achieves significantly better performance.
4.5. Study on the variance computation
In Section 3.2, when calculating the informativeness based on the variance, we count all previous iterations of active learning. As discussed before, it is more important to capture the change of an entry within recent iterations: an entry with large variance in the early iterations may have been well recovered from recent queries. To examine this idea, we perform experiments comparing the results of calculating the variance over different numbers of iterations. Specifically, we use the values of an entry during the last $k$ iterations to calculate the variance, and set $k$ to 2, 4, 8 and 16, respectively. Again, due to the space limit, we report the results on the largest dataset HTRU2 as an example.
The performance curves are plotted in Figure 3. We also plot the curve of counting all iterations as the original AFASMC method. It can be seen that a moderate number of recent iterations is the best choice, while counting too few or too many iterations may degrade the performance. This observation is consistent with our conjecture that the variance computation should emphasize recent iterations. Note that all the results of AFASMC in previous sections were obtained by counting all iterations as the default on all datasets. It is thus expected that the performance of the proposed approach can be further improved by tuning the number of iterations used for evaluating the informativeness.
5. Conclusion

In this paper, we studied the problem of learning from data with missing features. Since the acquisition of ground-truth feature values is usually expensive, our target was to train an effective classification model with the least acquisition cost. We proposed a unified framework to jointly perform matrix completion and active feature acquisition. On one hand, missing values of the feature matrix are recovered by supervised matrix completion, which exploits the feature correlations with a low-rank regularizer, and the label supervision is utilized by minimizing the empirical classification error. On the other hand, the missing entries are actively queried based on a novel selection criterion, which simultaneously evaluates the potential contribution of a feature to both recovering other entries and improving the classification model. Moreover, a bi-objective optimization method was introduced to handle the case where acquisition costs vary for different features. Extensive experimental results validated the superiority of our approach on matrix completion as well as classification performance. In the future, we plan to extend our approach and theoretical analysis to perform active querying for both missing features and class labels.
Acknowledgments

This research was partially supported by the National Key R&D Program of China (2018YFB1004300), NSFC (61503182, 61732006), JiangsuSF (BK20150754), the Collaborative Innovation Center of Novel Software Technology and Industrialization, and the International Research Center for Neurointelligence (WPI-IRCN) at The University of Tokyo Institutes for Advanced Study. The authors want to thank Bo-Jian Hou for proofreading.
References

- Bhargava et al. (2017) Aniruddha Bhargava, Ravi Ganti, and Rob Nowak. 2017. Active positive semidefinite matrix completion: Algorithms, theory and applications. In International Conference on Artificial Intelligence and Statistics. 1349–1357.
- Cabral et al. (2015) Ricardo Cabral, Fernando De la Torre, Joao Paulo Costeira, and Alexandre Bernardino. 2015. Matrix completion for weakly-supervised multi-label image classification. IEEE Transactions on Pattern Analysis and Machine Intelligence 37, 1 (2015), 121–135.
- Cai et al. (2010) Jian-Feng Cai, Emmanuel Candès, and Zuowei Shen. 2010. A singular value thresholding algorithm for matrix completion. SIAM Journal on Optimization 20, 4 (2010), 1956–1982.
- Candès and Recht (2009) Emmanuel Candès and Benjamin Recht. 2009. Exact matrix completion via convex optimization. Foundations of Computational Mathematics 9, 6 (2009), 717.
- Chakraborty et al. (2013) Shayok Chakraborty, Jiayu Zhou, Vineeth Balasubramanian, Sethuraman Panchanathan, Ian Davidson, and Jieping Ye. 2013. Active matrix completion. In IEEE International Conference on Data Mining. 81–90.
- Chen et al. (2014) Yudong Chen, Srinadh Bhojanapalli, Sujay Sanghavi, and Rachel Ward. 2014. Coherent matrix completion. In International Conference on Machine Learning. 674–682.
- Cohn et al. (1995) David Cohn, Zoubin Ghahramani, and Michael Jordan. 1995. Active learning with statistical models. In Advances in Neural Information Processing Systems, Vol. 8. 705–712.
- Davenport et al. (2012) Mark Davenport, Yaniv Plan, Ewout van den Berg, and Mary Wootters. 2012. 1-bit matrix completion. arXiv 1209.3672 (2012).
- Dhurandhar and Sankaranarayanan (2015) Amit Dhurandhar and Karthik Sankaranarayanan. 2015. Improving classification performance through selective instance completion. Machine Learning 100, 2-3 (2015), 425–447.
- Eriksson et al. (2011) Brian Eriksson, Laura Balzano, and Robert Nowak. 2011. High-rank matrix completion and subspace clustering with missing data. arXiv 1112.5629 (2011).
- Freund et al. (1997) Yoav Freund, Sebastian Seung, Eli Shamir, and Naftali Tishby. 1997. Selective sampling using the query by committee algorithm. Machine Learning 28, 2-3 (1997), 133–168.
- Goldberg et al. (2010) Andrew Goldberg, Xiaojin Zhu, Ben Recht, Jun-Ming Xu, and Robert Nowak. 2010. Transduction with matrix completion: Three birds with one stone. In Advances in Neural Information Processing Systems. 757–765.
- Hou et al. (2017) Bo-Jian Hou, Lijun Zhang, and Zhi-Hua Zhou. 2017. Learning with feature evolvable streams. In Advances In Neural Information Processing Systems. 1416–1426.
- Huang et al. (2018) Sheng-Jun Huang, Miao Xu, Ming-Kun Xie, Masashi Sugiyama, Gang Niu, and Songcan Chen. 2018. Active Feature Acquisition with Supervised Matrix Completion. arXiv 1802.05380 (2018).
- Huang et al. (2014) Sheng-Jun Huang, Rong Jin, and Zhi-Hua Zhou. 2014. Active learning by querying informative and representative examples. IEEE Transactions on Pattern Analysis and Machine Intelligence 10 (2014), 1936–1949.
- Jain et al. (2013) Prateek Jain, Praneeth Netrapalli, and Sujay Sanghavi. 2013. Low-rank matrix completion using alternating minimization. In ACM Symposium on Theory of Computing. 665–674.
- Keshavan et al. (2010) Raghunandan Keshavan, Andrea Montanari, and Sewoong Oh. 2010. Matrix completion from a few entries. IEEE Transactions on Information Theory 56, 6 (2010), 2980–2998.
- Király et al. (2015) Franz Király, Louis Theran, and Ryota Tomioka. 2015. The algebraic combinatorial approach for low-rank matrix completion. Journal of Machine Learning Research 16 (2015), 1391–1436.
- Krishnamurthy and Singh (2013) Akshay Krishnamurthy and Aarti Singh. 2013. Low-rank matrix and tensor completion via adaptive sampling. In Advances In Neural Information Processing Systems. 836–844.
- Lim et al. (2005) Chee-Peng Lim, Jenn-Hwai Leong, and Mei-Ming Kuan. 2005. A hybrid neural network system for pattern classification tasks with missing features. IEEE Transactions on Pattern Analysis and Machine Intelligence 27, 4 (2005), 648–653.
- Liu and Motoda (1998) Huan Liu and Hiroshi Motoda. 1998. Feature extraction, construction and selection: A data mining perspective. Vol. 453. Springer Science & Business Media.
- Luo et al. (2015) Yong Luo, Tongliang Liu, Dacheng Tao, and Chao Xu. 2015. Multiview matrix completion for multilabel image classification. IEEE Transaction on Image Processing 24, 8 (2015), 2355–2368.
- Mavroforakis et al. (2017) Charalampos Mavroforakis, Dóra Erdös, Mark Crovella, and Evimaria Terzi. 2017. Active positive-definite matrix completion. In SIAM International Conference on Data Mining. 264–272.
- Meka et al. (2009) Raghu Meka, Prateek Jain, and Inderjit Dhillon. 2009. Matrix completion from power-law distributed samples. In Advances in Neural Information Processing Systems. 1258–1266.
- Melville et al. (2005) Prem Melville, Foster Provost, and Raymond Mooney. 2005. An expected utility approach to active feature-value acquisition. In IEEE International Conference on Data Mining. 745–748.
- Moon et al. (2014) Seungwhan Moon, Calvin McCarter, and Yu-Hsin Kuo. 2014. Active learning with partially featured data. In International Conference on World Wide Web. 1143–1148.
- Negahban and Wainwright (2012) Sahand Negahban and Martin Wainwright. 2012. Restricted strong convexity and weighted matrix completion: Optimal bounds with noise. Journal of Machine Learning Research 13, May (2012), 1665–1697.
- Qian et al. (2015) Chao Qian, Yang Yu, and Zhi-Hua Zhou. 2015. Subset selection by Pareto optimization. In Advances in Neural Information Processing Systems. 1774–1782.
- Rennie and Srebro (2005) Jason Rennie and Nathan Srebro. 2005. Fast maximum margin matrix factorization for collaborative prediction. In International Conference on Machine Learning. 713–719.
- Ruchansky et al. (2015) Natali Ruchansky, Mark Crovella, and Evimaria Terzi. 2015. Matrix completion with queries. In ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 1025–1034.
- Sankaranarayanan and Dhurandhar (2013) Karthik Sankaranarayanan and Amit Dhurandhar. 2013. Intelligently querying incomplete instances for improving classification performance. In Conference on Information and Knowledge Management. 2169–2178.
- Settles (2012) Burr Settles. 2012. Active learning. Synthesis Lectures on Artificial Intelligence and Machine Learning 6, 1 (2012), 1–114.
- Singer and Cucuringu (2010) Amit Singer and Mihai Cucuringu. 2010. Uniqueness of low-rank matrix completion by rigidity theory. SIAM Journal on Matrix Analysis Applications 31, 4 (2010), 1621–1641.
- Sun et al. (2017) Leilei Sun, Chonghui Guo, Chuanren Liu, and Hui Xiong. 2017. Fast affinity propagation clustering based on incomplete similarity matrix. Knowledge and Information System 51, 3 (2017), 941–963.
- Sutherland et al. (2013) Dougal Sutherland, Barnabás Póczos, and Jeff Schneider. 2013. Active learning and search on low-rank matrices. In ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 212–220.
- Toh and Yun (2010) Kim-Chuan Toh and Sangwoon Yun. 2010. An accelerated proximal gradient algorithm for nuclear norm regularized linear least squares problems. Pacific Journal of Optimization 6, 3 (2010), 615–640.
- Tong and Koller (2001) Simon Tong and Daphne Koller. 2001. Support vector machine active learning with applications to text classification. Journal of Machine Learning Research 2 (2001), 45–66.
- Tseng (2008) Paul Tseng. 2008. On accelerated proximal gradient methods for convex-concave optimization. Technical Report. University of Washington, Seattle.
- Vu et al. (2008) Duy Vu, Mikhail Bilenko, Maytal Saar-Tsechansky, and Prem Melville. 2008. Intelligent information acquisition for improved clustering. Folia Veterinaria (2008).
- Weinberger and Saul (2006) Kilian Weinberger and Lawrence Saul. 2006. Unsupervised learning of image manifolds by semidefinite programming. International Journal of Computer Vision 70, 1 (2006), 77–90.
- Wen et al. (2012) Zaiwen Wen, Wotao Yin, and Yin Zhang. 2012. Solving a low-rank factorization model for matrix completion by a nonlinear successive over-relaxation algorithm. Mathematical Programming Computation 4, 4 (2012), 333–361.
- Xu et al. (2013) Miao Xu, Rong Jin, and Zhi-Hua Zhou. 2013. Speedup matrix completion with side information: Application to multi-label learning. In Advances In Neural Information Processing Systems. 2301–2309.
- Yi et al. (2012) Jinfeng Yi, Tianbao Yang, Rong Jin, Anil Jain, and Mehrdad Mahdavi. 2012. Robust ensemble clustering by matrix completion. In IEEE International Conference on Data Mining. 1176–1181.
- Zeng et al. (2015) Guangxiang Zeng, Ping Luo, Enhong Chen, Hui Xiong, Hengshu Zhu, and Qi Liu. 2015. Convex matrix completion: A trace-ball optimization perspective. In SIAM International Conference on Data Mining. 334–342.