Minimally Supervised Feature Selection for Classification (Master's Thesis, University Politehnica of Bucharest)

12/09/2015, by Alexandra Maria Radu et al.

In the context of the rapidly increasing number of features available nowadays, we design a robust and fast method for feature selection. The method selects the most representative features, which are independent from each other but strong together. We propose an algorithm that requires very limited labeled data (as few as one labeled frame per class) and can accommodate as many unlabeled samples as are available. We also present the supervised approach from which we started. We compare our two formulations with established methods such as AdaBoost, SVM, Lasso, Elastic Net and FoBa and show that our method is much faster and has constant training time. Moreover, the unsupervised approach outperforms all the methods with which we compared, and the difference can be quite prominent. The supervised approach is in most cases better than the other methods, especially when the number of training shots is very limited. All that the algorithm needs is to choose from a pool of positively correlated features. The methods are evaluated on the Youtube-Objects video dataset and on the MNIST digits dataset, while at training time we also used features obtained on the CIFAR10 dataset and others pre-trained on the ImageNet dataset. Thereby, we also show that transfer learning is useful, even though the datasets differ very much: from low-resolution centered images from 10 classes, to high-resolution images with objects from 1000 classes occurring in different regions of the images, to very difficult videos with very high intraclass variance.


1 Our method

Through our method we approach the case of binary classification, while for the multi-class scenario we apply the one vs. all strategy. Our training set is composed of N samples, each i-th sample being expressed as a column vector f_i of m feature values in [0, 1]; such features could themselves be outputs of classifiers. We want to find a vector w, with elements in [0, 1/k] and unit l1-norm, such that w^T f_i ≈ p when the i-th sample is from the positive class and w^T f_i ≈ n otherwise, with p > n. For a positive labeled training sample i, we fix the ground truth target t_i = p and for a negative one we fix it to t_i = n. Our novel constraints on w limit the impact of each individual feature, encouraging the selection of features that are powerful in combination, with no single one strongly dominating. This produces solutions with good generalization power. In Sec. 2 we show that k is equal to the number of selected features, all with weights 1/k. The solution we look for is a weighted feature average with an ensemble response that is stronger on positives than on negatives. For that, we want any feature to have an expected value over positive samples greater than its expected value over negatives. From the labeled samples we estimate the sign of each feature and, if it is negative, we simply flip the feature values: f_i(j) ← 1 - f_i(j).

1.1 Supervised learning

We begin with the supervised learning task, which we formulate as a least-squares constrained minimization problem. Given the N x m feature matrix F, with f_i^T on its i-th row, and the ground truth target vector t, we look for the w that minimizes ||Fw - t||^2 = w^T (F^T F) w - 2 (F^T t)^T w + t^T t and obeys the required constraints. We drop the last constant term and obtain the following convex minimization problem:

w* = argmin_w  w^T (F^T F) w - 2 (F^T t)^T w,   subject to  Σ_j w_j = 1  and  w_j ∈ [0, 1/k] for all j.

Our least-squares formulation is related to Lasso, Elastic Net and other regularized approaches, with the distinction that in our case the individual elements of w are restricted to [0, 1/k], which leads to important theoretical properties regarding sparsity and directly impacts generalization power (Sec. 2). It also leads to our (almost) unsupervised approach, presented in the next section.
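
To make the formulation above concrete, the following is a minimal sketch (ours, not the thesis code) of how the supervised problem could be set up and handed to a generic constrained solver; F, t and k are assumed given, and all names are our own.

  import numpy as np
  from scipy.optimize import minimize

  def supervised_weights(F, t, k):
      # Quadratic and linear terms of the objective w'(F'F)w - 2(F't)'w.
      A = F.T @ F
      b = F.T @ t
      m = F.shape[1]
      obj = lambda w: w @ A @ w - 2.0 * b @ w
      grad = lambda w: 2.0 * (A @ w - b)
      # Constraints: each w_j in [0, 1/k] and the weights sum to 1.
      # Uniform start, feasible whenever the pool has at least k features.
      res = minimize(obj, np.full(m, 1.0 / m), jac=grad, method="SLSQP",
                     bounds=[(0.0, 1.0 / k)] * m,
                     constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
      return res.x

In practice the dedicated optimizer described in Sec. 2 is used instead; this snippet only illustrates the objective and the feasible set.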

1.2 Unsupervised learning

Consider a pool of signed features, correctly flipped according to their signs, which could be known a priori or estimated from a small set of labeled data. We make the simplifying assumption that the signed features' expected values for positive and negative samples, respectively, are close to the ground truth target values p and n. Then, for a given sample i and any w obeying the constraints, the expected value of the weighted average w^T f_i is also close to the ground truth target: E[w^T f_i] ≈ t_i. Then, over all samples we have the expectation E[Fw] ≈ t, such that any feasible solution w will produce, on average, approximately correct answers. Thus, we can regard the supervised learning scheme as attempting to reduce the variance of the feature ensemble output, since its expected value is already close to the ground truth target. If we now introduce the approximation t ≈ Fw into the learning objective w^T (F^T F) w - 2 t^T F w, we obtain our new ground-truth-free objective with the following learning scheme, which is unsupervised once the feature signs are determined. Here M = F^T F:

w* = argmin_w  - w^T M w,   subject to  Σ_j w_j = 1  and  w_j ∈ [0, 1/k] for all j.

Interestingly, while the supervised case is a convex minimization problem, the unsupervised learning scheme is a concave minimization problem, which is NP-hard. This is due to the change in sign of the matrix M. However, since in the unsupervised case M can be created from large quantities of unlabeled data, it could in fact be less noisy than its supervised counterpart and produce significantly better local optimal solutions, a fact confirmed by our experiments.
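
The substitution mentioned above can be written out explicitly; the short derivation below is ours, in the same notation, and is only meant to make the sign change visible:

  \[
  J(w) \;=\; w^\top F^\top F\, w \;-\; 2\, t^\top F\, w
  \;\approx\; w^\top M\, w \;-\; 2\,(F w)^\top F\, w
  \;=\; -\, w^\top M\, w .
  \]

Minimizing the right-hand side under the same constraints is equivalent to maximizing w^T M w, hence the concavity of the unsupervised problem.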

1.3 Intuition

Let us take a closer look at the two terms involved in our objectives: the quadratic term w^T (F^T F) w and the linear term -2 t^T F w. If we assume that feature outputs have similar expected values, then minimizing the linear term in the supervised case will give more weight to features that are strongly correlated with the ground truth and are good for classification even independently. However, things become more interesting when looking at the role played by the quadratic term in the two learning cases. The positive semidefinite matrix F^T F contains the dot-products between pairs of feature responses over the samples. In the supervised case, minimizing the quadratic term should find a group of features that are as uncorrelated as possible. Thus we seek a group of features that are individually relevant due to the linear term, but not redundant with respect to each other due to the quadratic term. They should be conditionally independent given the class, an observation that is consistent with earlier research in machine learning (e.g., [22]) and neuroscience (e.g., [36]).
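
For reference, writing f^(j) for the column of responses of feature j over all samples, the two terms discussed above expand (in our notation) as:

  \[
  w^\top F^\top F\, w \;=\; \sum_{j,l} w_j\, w_l\, \langle f^{(j)}, f^{(l)} \rangle ,
  \qquad
  -2\, t^\top F\, w \;=\; -2 \sum_{j} w_j\, \langle f^{(j)}, t \rangle ,
  \]

which makes explicit that the quadratic term accumulates pairwise feature correlations, while the linear term rewards features that are individually correlated with the ground truth.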

In the unsupervised case, the task seems reversed: maximize the same quadratic term, with no linear term involved. We could interpret this as transforming the learning problem into a special case of clustering with pairwise constraints, related to norm-constrained formulations such as spectral clustering [37] and robust hypergraph clustering [38, 39]. The problem is addressed by finding the group of features with the strongest intra-cluster score, that is, the largest amount of covariance. In the absence of ground truth labels, if we assume that the features in the pool are, in general, correctly signed and not redundant, then the maximum covariance is attained by features whose collective average varies the most as the hidden class labels also vary. Thus, the unsupervised variant seeks features that respond in a united manner to the distributions of the two classes.

2 Algorithms

In both our approaches, we first need to determine the sign of each feature, as defined before. Once the signs are estimated, we can set up the optimization problems and find w. In Algorithms 1 and 2 we present our supervised and unsupervised learning methods. The supervised case is a convex minimization problem, for which efficient global optimization is possible in polynomial time. The unsupervised case is a concave minimization problem, which is NP-hard, so only efficient local optimization is possible.

  Learn feature signs from the labeled samples.
  Create F with flipped features from the labeled samples.
  Set A ← F^T F, b ← F^T t.
  Find w* = argmin_w (w^T A w - 2 b^T w) s.t. Σ_j w_j = 1, w_j ∈ [0, 1/k].
  return w*
Algorithm 1 Supervised learning
  Learn feature signs from a small set of labeled samples.
  Create F with flipped features from the unlabeled data.
  Set M ← F^T F.
  Find w* = argmax_w (w^T M w) s.t. Σ_j w_j = 1, w_j ∈ [0, 1/k].
  return w*
Algorithm 2 Unsupervised learning from signed features
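
The first two steps of both algorithms (estimating the feature signs from the few labeled samples and flipping the negatively correlated features) can be sketched as follows; this is an illustration in our own notation, assuming features already scaled to [0, 1], not the original implementation.

  import numpy as np

  def estimate_signs(F_labeled, labels):
      # A feature is considered positively correlated if its mean over the
      # positive labeled samples exceeds its mean over the negative ones
      # (labels are +1 / -1).
      mu_pos = F_labeled[labels == 1].mean(axis=0)
      mu_neg = F_labeled[labels == -1].mean(axis=0)
      return np.where(mu_pos >= mu_neg, 1, -1)

  def flip_features(F, signs):
      # Flip negatively correlated features with f <- 1 - f, so that every
      # column of the returned matrix is positively correlated with the class.
      F = F.copy()
      F[:, signs < 0] = 1.0 - F[:, signs < 0]
      return F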

There are many possible fast methods for optimization. In our implementation we adapted the integer projected fixed point (IPFP) approach [40, 41], related to the Frank-Wolfe algorithm [42], which is efficient in practice (Fig. 2c) and is applicable to both the supervised and the unsupervised cases. The method converges to a stationary point, which is the global optimum in the supervised case. At each iteration, IPFP approximates the original objective with a linear, first-order Taylor approximation that can be optimized immediately within the feasible domain. That step is followed by a line search with a rapid closed-form solution, and the process is repeated until convergence. In practice, a small number of iterations brings us close to the stationary point; nonetheless, for thoroughness, we use a larger fixed number of iterations in all our experiments. See, for example, comparisons to Matlab's quadprog run-time for the convex supervised learning case in Fig. 2 and to other learning methods in Fig. 8. Note that once the linear and quadratic terms are set up, the learning problems are independent of the number of samples and depend only on the number of features considered, since F^T F is m x m and F^T t is m x 1.
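
The sketch below illustrates this kind of iteration (linearize, solve the linear step over the feasible set, then a closed-form line search) for the quadratic objective w^T A w - 2 b^T w under the simplex-and-box constraints; it is written by us as an illustration, not taken from the original IPFP code, and omits numerical safeguards.

  import numpy as np

  def ipfp(A, b, k, num_iter=100):
      # Minimize w' A w - 2 b' w  subject to  sum(w) = 1 and 0 <= w_j <= 1/k.
      m = A.shape[0]
      w = np.full(m, 1.0 / m)            # feasible start (assumes m >= k)
      for _ in range(num_iter):
          g = 2.0 * (A @ w - b)          # gradient of the objective at w
          # Linear step: the first-order approximation is minimized by putting
          # weight 1/k on the k coordinates with the smallest gradient values.
          d = np.zeros(m)
          d[np.argsort(g)[:k]] = 1.0 / k
          u = d - w
          # Closed-form line search for the quadratic objective along w + a*u.
          uAu = u @ A @ u
          slope = 2.0 * (u @ (A @ w) - b @ u)    # derivative at a = 0 (<= 0)
          if uAu > 0:
              a = float(np.clip(-slope / (2.0 * uAu), 0.0, 1.0))
          else:                                   # concave along the segment
              a = 1.0
          if a <= 0.0 or np.allclose(w, d):
              break
          w = w + a * u
      return w

Under the assumptions above, the supervised case would correspond to ipfp(F.T @ F, F.T @ t, k) and the unsupervised case to ipfp(-M, np.zeros(M.shape[0]), k), with M built from the unlabeled data.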

Figure 2: Optimization and sensitivity analysis: a) Sensitivity to k. Performance improves as features are added, is stable around the peak and falls for large k, as useful features are exhausted. b) Features ordered by weight, confirming that our method selects equal weights up to the chosen k. c) Our method almost converges within the first few iterations. d) Runtime of the interior point method divided by the runtime of our method, both in Matlab and with a limited maximum number of iterations. All results are averages over multiple random runs.
Figure 3: Sensitivity analysis for Lasso. Left: sensitivity to the number of features with non-zero weights in the solution. Note the higher sensitivity compared to our approach. Lasso's best performance is achieved with fewer features, but the accuracy is generally worse than in our case. Right: sensitivity to lambda, which controls the L1-regularization penalty.

3 Theoretical analysis

First we show that the solutions are sparse with equal non-zero weights (P1), which is also observed in practice (Fig. 2b). This property makes our classifier learning an excellent feature selection mechanism as well. Next, we show that simple equal-weight solutions are likely to minimize the output variance over samples of a given class (P2) and minimize the error rate. This explains the good generalization power of our method. Then we show that the error rate is expected to go towards zero when the number of considered non-redundant features increases (P3), which explains why a large, diverse pool of features is beneficial. Let J(w) be our objective for either the supervised or the unsupervised case:

J(w) = w^T A w - 2 b^T w,  with A and b set up as in Algorithms 1 and 2 (in the unsupervised case, A = -M and b = 0).

Proposition 1: Let ∇J(w) be the gradient of J. At a stationary point, the partial derivatives ∂J(w)/∂w_j corresponding to those elements w_j that take non-sparse, real values in (0, 1/k) must be equal to each other.

Proof: The stationary points of the Lagrangian satisfy the Karush-Kuhn-Tucker (KKT) necessary optimality conditions. The Lagrangian is

L(w, λ, μ, β) = J(w) - λ(1^T w - 1) - μ^T w + β^T (w - (1/k) 1).

From the KKT conditions at a stationary point w we have:

∇J(w) - λ 1 - μ + β = 0,   μ_j w_j = 0,   β_j (w_j - 1/k) = 0 for all j.

Here the Lagrange multipliers μ and β have non-negative elements, so μ_j = 0 if w_j > 0 and β_j = 0 if w_j < 1/k. Then there must exist a constant λ such that:

∂J(w)/∂w_j = λ for every j with w_j ∈ (0, 1/k).

This implies that all w_j that are different from 0 or 1/k correspond to partial derivatives that are equal to some constant λ, therefore those partial derivatives must be equal to each other, which concludes our proof.

From Proposition 1 it follows that, in the general case, when the partial derivatives of the objective error function at the Lagrangian stationary point are unique (no two of them are equal), the elements of the solution w are either 0 or 1/k. Since the weights sum to 1, it follows that the number of nonzero weights is exactly k, in the general case. Thus, our solution is not just a simple linear separator (hyperplane), but also a sparse representation and a feature selection procedure that effectively averages the selected k (or close to k) features. The method is robust to the choice of k (Fig. 2a) and seems to be less sensitive to the number of features selected than the Lasso (see Fig. 3). In terms of memory cost, compared to a solution with real weights for all features, whose storage requires a floating point number per feature, our averaging of the selected features only needs to record which k features out of the pool were selected, their weights being automatically set to 1/k. Next, for a better statistical interpretation, we assume the somewhat idealized case when all features have equal means (μ_P, μ_N) and equal standard deviations (σ_P, σ_N) over the positive (P) and negative (N) training sets, respectively.

Proposition 2: If we assume that the input soft classifiers are independent and better than random chance, the error rate converges towards zero as their number goes to infinity.

Proof: Given a classification threshold θ for the ensemble output, such that μ_N < θ < μ_P, then, as the number of selected features k goes to infinity, the probability that a negative sample will have an average response greater than θ (a false positive) goes to 0. This follows from Chebyshev's inequality. By a similar argument, the chance of a false negative also goes to 0 as k goes to infinity.
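
The Chebyshev step can be written out as follows (our notation: an average of k independent signed features, each with mean μ_N and variance at most σ_N^2 on negative samples, and a threshold θ with μ_N < θ < μ_P):

  \[
  P\Big( \tfrac{1}{k} \sum_{j=1}^{k} f_j \ge \theta \;\Big|\; \text{negative} \Big)
  \;\le\;
  P\Big( \Big| \tfrac{1}{k} \sum_{j=1}^{k} f_j - \mu_N \Big| \ge \theta - \mu_N \Big)
  \;\le\;
  \frac{\sigma_N^2}{k\,(\theta - \mu_N)^2} \;\longrightarrow\; 0 \quad \text{as } k \to \infty .
  \]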

Proposition 3: The weighted average with smallest variance over positives (and negatives) has equal weights.

Proof: We consider the case when the samples are from the positive class, the same argument holding for the negatives. Then the variance of the ensemble output is Var(Σ_j w_j f_j) = σ_P^2 Σ_j w_j^2. We minimize Σ_j w_j^2 subject to Σ_j w_j = 1 by setting the partial derivatives of the corresponding Lagrangian to zero and get w_j = 1/k. Then the minimum variance is σ_P^2 / k.
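
Written out in full (our notation, assuming the k selected features are uncorrelated with common standard deviation σ_P over positives):

  \[
  \mathcal{L}(w, \lambda) = \sum_j w_j^2 - \lambda \Big( \sum_j w_j - 1 \Big), \qquad
  \frac{\partial \mathcal{L}}{\partial w_j} = 2 w_j - \lambda = 0 \;\Rightarrow\; w_j = \frac{\lambda}{2} ,
  \]

so all weights are equal and, since they sum to 1, each equals 1/k, giving a minimum variance of σ_P^2 / k.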

4 Youtube-Objects experiments

4.1 Features design

Training dataset Testing dataset
No. of frames 436970 134119
No. of shots 4200 1284
No. of classes 10 10
Table 1: Youtube-Objects dataset statistics.
Dataset description:

To train and test our system we used the Youtube-Objects video dataset [43] and features obtained on ImageNet and CIFAR10. Details about Youtube-Objects are given in Table 1. The 10 classes are aeroplane, bird, boat, car, cat, cow, dog, horse, motorbike and train. This information refers to the entire training dataset, but in our experimental design we used only part of it to train the two methods. Details about the actual training set can be found in the next sections. Each video in the dataset consists of a number of shots. The labeling is done per video; this means that some frames appearing in a video labeled as "dog" might contain only people and no dogs. This fact makes our task more difficult, because we consider that some frames show a certain object even though they do not.

Figure 4: Three types of features are created to form our pool. They are classifiers trained on three different datasets: CIFAR10, Youtube-Objects and ImageNet. We encourage feature diversity and independence by looking either at different parts of the input space (type I) or at different locations in the image (types II and III). For type I we train separate classifiers for each cluster of each class. For types II and III we train different classifiers for each class at every sub-window. In total we get 6160 features. As our experiments show, the large variety of our features helps improve classification.
Feature generation:

For the experiments we used a pool of 6160 features obtained on three different datasets, in different ways. The feature types are the following:

  1. Features obtained by training binary classifiers on the CIFAR10 dataset [44]. These classifiers are trained on the data obtained by clustering the images of each of the 10 classes into 5 clusters. The positive examples for each classifier are the examples from the corresponding class, while the negative examples are those from the other classes (8 times more negatives than positives). This makes a total of 50 features. The classes from CIFAR10 coincide only partially (7 classes) with those from the Youtube-Objects dataset. The classes from CIFAR10 are frog, truck, deer, automobile, bird, horse, ship, cat, dog and aeroplane.

  2. Features obtained by training a multiclass SVM classifier on HOG descriptors computed on different parts of the frames. We applied PCA to the resulting HOG descriptors, obtaining smaller descriptors of length 46, so as to avoid overfitting as much as possible when using the SVM. The training set used to obtain these features is a subset of Youtube-Objects (25000 frames, equally distributed among the 10 classes). We then applied these classifiers to each image and took as features the class probabilities returned by the SVM, i.e., 10 features for each part-classifier. The parts of the image that we considered are the whole image, the center of the image (with length and height half the initial size), the four corners of the image, the center of the center of the image and the corners of the center of the image (see the sketch after this list for one way these regions can be cropped). Finally, we have 11 classifiers, each producing 10 probabilities, summing up to 110 features.

  3. Features obtained by using a pretrained network from Caffe [5]. The convolutional neural network was trained on the ImageNet dataset [45] and it outputs 1000 class probabilities, which we use as features. We applied the network to our own dataset as follows: on the whole image, on the center and on the 4 corners of each frame, thus obtaining 6000 new features.
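
The sub-windows mentioned in items 2 and 3 can be obtained with simple fixed crops. The sketch below is our own illustration of one plausible way to compute them (the crop conventions and the classifier function are assumptions, not the original code):

  import numpy as np

  def crops(img):
      # One "level" of regions: the image itself, its center (half the width
      # and height) and its four corner quadrants.
      h, w = img.shape[:2]
      return {
          "whole":        img,
          "center":       img[h // 4: 3 * h // 4, w // 4: 3 * w // 4],
          "top_left":     img[: h // 2, : w // 2],
          "top_right":    img[: h // 2, w // 2:],
          "bottom_left":  img[h // 2:, : w // 2],
          "bottom_right": img[h // 2:, w // 2:],
      }

  def sub_windows(frame):
      # The 11 regions of type II: whole, center, 4 corners, plus the center
      # of the center and the 4 corners of the center (re-cropping the center).
      first = crops(frame)
      second = {"center_" + k: v for k, v in crops(first["center"]).items()
                if k != "whole"}
      return {**first, **second}

  # A type II or type III feature vector then concatenates the per-region
  # classifier outputs (10 SVM probabilities or 1000 CNN scores per region):
  # feats = np.concatenate([classifier(r) for r in sub_windows(frame).values()])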

4.2 Experiments and results

We evaluate our method in the context of a limited training dataset and intend to show that it generalizes better than other well-known methods. We combine features obtained from different image datasets and prove that knowledge transfer is useful by testing the system on videos. In all the experiments we report the accuracy per frame. In order to compare the different methods that we took into consideration, we varied several dimensions of the problem: the number of shots used for training, the number of frames from each shot and the number of features considered. To detect overfitting, besides studying the testing accuracy, we also monitor training accuracy vs. testing accuracy.

The Youtube-Objects dataset is a difficult one because the movies are taken in the wild and several categories of objects may appear simultaneously in a frame. Moreover, in some frames of the videos the labeled object is missing altogether or other objects appear instead (e.g., a video is labeled as "dog", but some of its frames contain only a car). The shots in each video may differ very much. The differences are caused by orientation, size, luminosity, or by the presence of several object categories in the same frame. In some shots the object is occluded, appears in a corner, or moves in and out of view. Due to these facts, learning a class of objects from a small number of frames becomes a very challenging task. The split of the videos into training and testing is done as in [6]. In Fig. 5 we show one of the random sets of training frames used in the case of one-shot learning. In this scenario we feed only one labeled frame from each class to the algorithm to learn the signs of the features. Notice that some of the frames are not at all representative of their category, even though we chose the frame found in the middle of the randomly selected shot. To ensure the reliability of the results, we averaged the results of 30 or 100 random experiments for each method.

Figure 5: One of the 30 random training sets used for one-shot learning - we use only one labeled frame per class. Note that some frames do not contain the object, making the learning task really difficult.
Locations W C TL TR BL BR
aeroplane 65.6 30.2 0 0 2.1 2.1
bird 78.1 21.9 0 0 0 0
boat 45.8 21.6 0 0 12.3 20.2
car 54.1 40.2 2.0 0 3.7 0
cat 76.4 17.3 5.0 0 1.3 0
cow 70.8 22.2 1.8 2.4 0 2.8
dog 92.8 6.2 1.0 0 0 0
horse 75.9 14.7 0 0 8.3 1.2
motorbike 65.3 33.7 0 0 0 1.0
train 56.5 20.0 0 2.4 12.8 8.4
Table 2: The distribution of sub-windows for the input classifiers selected for each category. The most frequently selected location is the whole image (W), indicating that information from the whole image is relevant, including the background. For some classes the object itself is also important, as indicated by the high percentage of center (C) classifiers. The presence of classifiers at other locations indicates that the object of interest, or objects co-occurring with it, might be present at different places. Note, for example, the difference in distribution between cats and dogs. The results suggest that looking more carefully at location, and moving towards detection, could benefit the overall classification.

Regarding the type II features, we performed an experiment to study whether there is a preference for features computed on a certain region. In Table 2 we show, for each class, the distribution of the classifiers selected by our supervised method with respect to the position of the region on which they were computed. We can make some observations regarding this distribution. First, the whole image (W) is the most important for many classes, which means that apart from the object, the environment is also important. Secondly, for some categories like car, motorbike and aeroplane the center (C) of the image is important, while for others, off-center regions seem to be more representative than the center. Thirdly, for some classes that seem similar to humans, the chosen classifiers are rather different, as in the case of cats and dogs. For dogs the whole image is more representative, while for cats different off-center regions are preferred. This might be due to the fact that cats can be found in more unconventional places than dogs, which are bigger and usually found on the ground.

We evaluated and compared eight methods: ours, SVM, AdaBoost, ours + SVM (SVM fed only with the features selected by our method), Lasso, Elastic Net, forward-backward selection (FoBa) and simple averaging. For SVM we used the most recent version (at the time of the experiments) of LIBSVM [46], while for Lasso and Elastic Net we used the implementation provided in MATLAB. All features have values between 0 and 1 and are expected to be positively correlated with the positive class. In the case of our method, we select the features whose weights are greater than a threshold. After the features are selected, they can be used with any classifier. Most of the tested algorithms use the features exactly as they are, while for AdaBoost they must be transformed into weak classifiers by finding, for each feature, the threshold that minimizes the expected exponential loss at each iteration. This is the reason why AdaBoost proved to be much slower than the other methods.

We performed extensive experiments on both variants of our method (supervised and (almost) unsupervised) and also on the methods mentioned above. We evaluated: testing accuracy, training accuracy (to make sure the algorithm is not overfitting), training time, sensitivity to input parameters, accuracy of sign estimation, sparsity of the solutions and the influence of the quantity of unlabeled data on the recognition accuracy. In the majority of our experiments we use four subsets of features: 1) all features of type I (50 features), 2) all features of types I and II (160 features), 3) 2000 of the 6000 features of type III, namely those computed on the whole image and on the central part of the image, 4) all features of types I and II plus the type III features selected in the previous case.

Sign estimation:

The (almost) unsupervised setting assumes very limited labeled data, used only for computing the signs of the features. It leads to very good performance even when only one labeled frame per class is used to flip the features. In Table 3 we show that the accuracy of sign estimation is usually high and increases with the number of labeled shots and frames used. We can also notice an improvement in the sign estimation accuracy when the feature sets contain stronger features, as in the third and fourth case. The fact that the performance of our algorithm is good even for very few labeled samples, as we will see in the next experiments, supports our claim that the algorithm is robust and not sensitive to errors in the signs of the features.

No. of shots (frames)    Features I    Features I+II    Features III    Features I+II+III
1 (1 frame) 61.06% 64.19% 66.03% 65.89%
1 (10 frames) 62.61% 65.64% 67.53% 67.39%
3 (10 frames) 66.53% 69.31% 73.21% 72.92%
8 (10 frames) 72.17% 73.36% 78.33% 77.97%
16 (10 frames) 74.83% 75.44% 79.97% 79.63%
20 (10 frames) 75.51% 76.23% 80.54% 80.22%
30 (10 frames) 76.60% 77.15% 80.98% 80.70%
50 (10 frames) 77.41% 77.79% 81.52% 81.24%


Table 3: Accuracy of feature sign estimation for different numbers of labeled shots. Note that the signs are estimated mostly correctly even in the case of 1 labeled training frame per class. While not perfectly correct, the agreement between signs estimated from a single frame per class and signs estimated from the entire test set shows that estimating these feature signs from very limited labeled training data is feasible. Also, the experiments presented here indicate that our approach is robust to sign estimation errors.
Figure 6: Accuracy of feature sign estimation when the signs computed on the testing set are considered to be the ground truth (the testing set is much larger, and it is more probable to predict the signs correctly when the samples are more numerous). Note that the signs are better estimated when the features are stronger (as is the case for the type III and types I + II + III features), but the accuracy is quite high in all four cases. The first value corresponds to the 1-shot-1-frame case (see also Table 3).
Class       Feats. I    Feats. I+II    Feats. III    Feats. I+II+III
Aeroplane 99.00% 100.00% 100.00% 100.00%
Bird 93.17% 85.33% 64.83% 63.87%
Boat 92.67% 67.50% 77.67% 75.47%
Car 90.67% 80.17% 60.67% 61.73%
Cat 92.00% 93.83% 88.50% 86.00%
Cow 90.50% 92.83% 70.17% 74.67%
Dog 91.33% 96.50% 89.33% 88.47%
Horse 92.67% 96.17% 88.67% 85.53%
Motorbike 90.17% 87.00% 71.83% 78.33%
Train 96.00% 93.67% 81.50% 80.53%
Mean 92.82% 89.30% 79.32% 79.46%
Table 4: Sign estimation accuracy for each class, for the features selected by our algorithm, when using 16 labeled shots with 10 labeled frames each. Results are averages over multiple random experiments.

In Fig. 6 we show the accuracy of the sign prediction for each feature, which depends only on the ground truth and the feature values. We consider the ground truth for feature signs to be the signs obtained by using the whole testing set as labeled data. In Table 4 we show how the sign estimation accuracy varies for each class. In this case we computed the accuracy of the sign estimation only for the features selected by our unsupervised algorithm, not for all features. Both experiments were done on the four subsets of features described before. In the second experiment we studied only the case of 16 shots, each with 10 labeled frames per class, while in the first one we varied the number of shots and frames in order to see how they influence the sign accuracy. Notice, by comparing row 5 of Table 3 with the last row of Table 4 (which use the same settings), that our method selects a higher proportion of correctly flipped features, which means that the algorithm tends to select reliable features: the percentage of selected features with correct signs is higher than the percentage of all features with correct signs.

Methods comparison:

In Fig. 7 we compare our supervised and unsupervised approaches with other methods used for feature selection. We notice that the results of ours-unsup2 are better than those of all the other methods. Table 5 shows the results obtained with regularized methods such as Lasso and Elastic Net, compared to the results obtained with our supervised method. Notice how our algorithm performs much better when the number of features is higher, while Elastic Net performs slightly better on the first set of features, which is the least numerous. This suggests that our algorithm is better at selecting from a large number of features, which is quite encouraging because feature selection is needed most acutely when the number of features is very large. We tested Lasso and Elastic Net with different values of the regularization parameter lambda for each of the four subsets of features, and we chose the value for which we obtained the best results in the case of eight shots. The value of lambda is the same for Elastic Net and Lasso on the same subset of features.

Figure 7: Comparison of our unsupervised approach to the supervised case for different methods. In our case, unsup1 uses training data only from the shots used by all supervised methods; unsup2 also includes the testing data for learning, with unknown test labels; one-shot is unsup2 with a single labeled image per class, used only for feature sign estimation. This demonstrates the ability of our approach to use a minimal amount of supervision and accurately learn in an unsupervised manner.
No. of shots    Ours supervised    Lasso    Elastic Net
1 shot 17.83% 17.70% 18.18%
3 shots 21.98% 22.73% 22.76%
8 shots 29.12% 30.22% 30.25%
16 shots 29.75% 30.70% 30.72%
(a) Features of type I
No. of shots    Ours supervised    Lasso    Elastic Net
1 shot 36.71% 32.67% 33.59%
3 shots 46.63% 42.89% 44.00%
8 shots 51.06% 48.33% 48.75%
16 shots 51.94% 48.90% 49.02%
(b) Features of types I + II
No. of shots    Ours supervised    Lasso    Elastic Net
1 shot 49.58% 27.95% 28.31%
3 shots 62.78% 45.83% 46.37%
8 shots 69.16% 56.14% 56.39%
16 shots 70.23% 59.31% 59.33%
(c) Features of type III
No. of shots    Ours supervised    Lasso    Elastic Net
1 shot 52.39% 27.57% 28.90%
3 shots 64.52% 45.81% 46.66%
8 shots 71.21% 54.84% 55.77%
16 shots 72.66% 59.69% 60.72%
(d) Features of type I + II + III
Table 5: Recognition accuracy for Lasso, Elastic Net and our supervised method.
Figure 8: Accuracy and training time on Youtube-Objects, with a varying number of training video shots (a fixed number of frames per shot; results averaged over multiple random runs). Input feature pool, row 1: type I features on CIFAR10; row 2: type II features on Youtube-Parts + type I features on CIFAR10; row 3: type III features on ImageNet; row 4: all features. Ours outperforms SVM, Lasso, AdaBoost and FoBa.

The results shown in Fig. 8 show that our supervised method is much faster than the other methods, has a constant training time and also outperforms them in accuracy, even SVM in many cases. The difference is more prominent when the number of shots is smaller. In the unsupervised case, we added unlabeled training data. Here, we outperformed all the other methods by a very large margin (Table 6 and Fig. 7). We tested with different amounts of unlabeled data. While being almost insensitive to the number of labeled shots (used only to estimate the feature signs), performance improved as more unlabeled data was added. Of particular note is the case when only a single labeled image per class was used to estimate the feature signs, with all the other data being unlabeled (Fig. 7).

Training # shots 1 3 8 16
Feature I +15.1% +15.2% +12.6% +12.6%
Feature I+II +16.7% +10.4% +6.4% +6.2%
Feature III +23.6% +11.3% +5.1% +3.7%
Feature I+II+III +24.4% +13.5% +6.9% +5.3%
Table 6: Experiments on our unsupervised learning. Improvement in recognition accuracy over our supervised method, obtained by using unsupervised learning with unlabeled test data. Feature signs are learned using the same shots as in the supervised case. Note that the first column presents the one-shot learning case, when the unsupervised method uses a single labeled frame per class. Results are averages over multiple random runs.
Figure 9: Our method shows good generalization power. It quickly learns subsets of features that are more powerful on the testing set than when combined with SVM, or than SVM alone. Note that, in our case, the training and testing errors are closer to each other than for the competitors.
Accuracy          Feats. I (50)    Feats. I+II (160)    Feats. I+II+III (6160)
train shots 29.69% 51.57% 69.99%
train shots 31.97% 52.37% 71.31%
Table 7: Recognition accuracy of our supervised method on the Youtube-Objects dataset, using two different numbers of training shots (first and second row), as we combine features from several datasets. The accuracy increases significantly (it more than doubles) as the pool of features grows and becomes more diverse. Features are: type I: CIFAR; types I+II: CIFAR + Youtube-Parts; types I+II+III: CIFAR + Youtube-Parts + ImageNet (6000).

Our method also exhibits good generalization, as we can see in Fig. 9, where training vs. testing accuracy is plotted. This experiment is performed on the supervised variant of our algorithm. We analysed the evolution of the testing accuracy with respect to the training accuracy in order to detect overfitting. The size of the pool of features from which we choose a subset is very important for accuracy, as we can see in Table 7; this experiment is also done on the supervised method. In this experiment we wanted to emphasize that the performance of our algorithm increases with the number of features used.

Mean accuracy per class (%)
Aeroplane   Bird    Boat    Car     Cat     Cow     Dog     Horse   Motorbike   Train
91.53       91.61   99.11   86.67   70.02   78.13   61.63   53.65   72.66       83.37
Table 8: Mean accuracy per class, over 30 random runs of unsupervised learning with 16 labeled training shots.
No. of shots Same frames Diff. frames Diff. shots Diff. videos
3 shots 37.24% 38.89% 27.18% 30.66%
8 shots 41.79% 43.11% 29.29% 34.21%
16 shots 42.41% 43.79% 27.65% 33.54%
(a) Features of type I
No. of shots Same frames Diff. frames Diff. shots Diff. videos
3 shots 57.01% 56.76% 52.96% 50.02%
8 shots 57.49% 57.29% 53.89% 50.07%
16 shots 58.11% 58.03% 54.01% 50.05%
(b) Features of types I + II
No. of shots Same frames Diff. frames Diff. shots Diff. videos
3 shots 74.04% 73.95% 56.66% 55.60%
8 shots 74.25% 74.05% 57.86% 55.33%
16 shots 73.87% 73.50% 58.29% 55.42%
(c) Features of type III
No. of shots Same frames Diff. frames Diff. shots Diff. videos
3 shots 78.02% 77.95% 61.24% 62.44%
8 shots 78.13% 77.96% 61.28% 62.11%
16 shots 77.93% 77.79% 61.24% 62.11%
(d) Features of types I + II + III
Table 9: Recognition accuracy for our unsupervised method for four cases: 1) same frames used for unsupervised learning and for testing, 2) different frames for unsupervised learning and for testing, 3) different shots used for unsupervised learning and for testing and 4) different videos used for unsupervised learning and for testing. In the first three cases we used the testing dataset for the unsupervised learning, while in the fourth case we used only the training set for the unsupervised learning, therefore the videos are different in training and testing.
Unsupervised learning:

We also evaluate how the testing accuracy is influenced by the quantity of unlabeled data used for learning. To assess this, we trained our unsupervised algorithm on different quantities of unlabeled data and observed that, as expected, the accuracy increases with this quantity, but the variation is not very large. Once more than 25% of the unlabeled data is provided, the accuracy reaches a plateau. These results are summarized in Table 10 and Fig. 10.

Unsupervised data Feats. I Feats. I+II Feats. III Feats. I+II+III
Train + 0% test 30.86% 48.96% 49.03% 53.71%
Train + 25% test 41.26% 55.50% 66.90% 72.01%
Train + 50% test 42.72% 56.66% 71.31% 76.78%
Train + 75% test 42.88% 57.24% 73.65% 77.39%
Train + 100% test 43.00% 57.44% 74.30% 78.05%
Table 10: Testing accuracy when varying the quantity of unlabeled data used for unsupervised learning. We used a fixed number of video shots per class, with 10 frames each, for estimating the signs of the features. Results are averages over multiple random runs.
Figure 10: Testing accuracy vs amount of unlabeled data used for unsupervised learning. Note that the performance almost reaches a plateau after a relatively small fraction of unlabeled frames are added.

In Table 8 we show how the testing accuracy varies among classes. For this experiment we used 16 shots with 10 labeled frames per class to compute the signs of the features, and we used features of types I, II and III. Note that the accuracy is generally higher for classes denoting man-made objects, such as boat, aeroplane and car, while for natural classes such as dog, cat and horse the recognition accuracy is lower. The difference in accuracy might be explained by the fact that the videos for classes like boat, aeroplane and bird, for which the accuracy is very high, are rather static, with limited changes of the background and of the object itself, while dogs, cats and horses are more dynamic and the recognition task becomes more difficult. Especially for videos in the aeroplane and boat classes, the foreground is very uniform.

For the unsupervised case we also performed experiments in which the unlabeled training set contained examples distinct from those used for testing, because we wanted to see the influence of the unlabeled set on the results. We considered four cases, presented in Table 9: 1) the same frames for unsupervised training and testing, 2) different frames for unsupervised training and testing, 3) different shots for unsupervised training and testing, 4) different videos for unsupervised training and testing. The results are as expected: the accuracy is higher when the frames used for unsupervised learning are also used for testing, and it decreases as the samples used for learning and testing become more distinct. The frames in the same shot are very similar to each other, and the shots within the same video are more similar than shots coming from different videos. However, the performance is still good even in the most difficult case, when the examples used for training are very different from those used for testing.

Figure 11: Lists of ten classified frames per category, for which the ratio of correct to incorrect samples matches the mean class recognition accuracy.
Figure 12: Ten classified frames per class, for which the ratio of correct to incorrect samples matches the mean class recognition accuracy.

For a better understanding of the performance of our unsupervised method, we present in Figs. 11 and 12, for each of the 10 object categories, images that were classified correctly and incorrectly. For each class, the proportion of correct and incorrect examples is consistent with the per-class recognition accuracy. These results are obtained when testing our one-shot learning algorithm (only one labeled example is used per class). Note that we considered all frames in a video shot as belonging to a single category, even though sometimes a significant number of frames did not contain any of the above categories. Therefore, our results often look qualitatively better than the quantitative evaluation suggests.

Sign transfer:

Another idea that we investigated during our experiments was the possibility of transferring the signs from one category to another. For instance, we computed the signs of the features for the class cat and used them also for the class dog. This would be very useful if we already have some learnt classifiers and want to learn a new category for which we have no labeled images: we can then take the signs from one of the existing classifiers and use them for the new class. We performed two experiments. In the first one we computed the binary accuracy for each class individually in three distinct cases: 1) the signs used were the real signs, 2) the signs were taken from a very similar category, 3) the signs were taken from a very dissimilar category (the assessment of the similarity or dissimilarity of the classes was made by us, so it might be subjective). We notice a decrease in accuracy when the signs used were not the original ones, and the decrease is more pronounced when borrowing the signs from more dissimilar classes. In Table 11 we present the results for two sets of features, types I + II and types I + II + III, while in Table 12 we show the classes from which we borrowed the signs.

Testing accuracy (%)
Class        Its own sign    From sim. class    From dissim. class
Aeroplane 96.85 84.93 87.77
Bird 96.73 94.53 94.58
Boat 98.34 88.34 87.50
Car 97.53 96.90 96.81
Cat 92.30 92.23 92.08
Cow 94.16 94.14 94.13
Dog 80.79 80.73 81.26
Horse 88.95 88.99 80.72
Motorbike 95.10 95.10 95.10
Train 93.35 87.33 87.06
Mean 93.41 90.32 89.70
(a) Features of types I + II
Testing accuracy (%)
Class        Its own sign    From sim. class    From dissim. class
Aeroplane 96.63 91.73 93.01
Bird 99.01 97.68 94.59
Boat 97.43 98.29 93.25
Car 98.67 98.47 97.73
Cat 94.48 94.66 93.12
Cow 95.80 95.87 94.21
Dog 84.87 82.13 80.88
Horse 92.05 91.03 81.26
Motorbike 98.09 97.67 97.19
Train 94.28 92.26 87.82
Mean 95.13 93.97 91.30
(b) Features of types I + II + III.
Table 11: Mean binary accuracy per class, over 30 random runs of unsupervised learning with 8 labeled training shots. We tested three cases: 1) the signs are computed on each class, 2) the signs are borrowed from another very similar class, 3) the signs are borrowed from a very dissimilar class. Results are obtained on two subsets of features. See also Table 12a for the classes considered similar / dissimilar.
Class        Very sim. class    Very dissim. class
Aeroplane Bird Train
Bird Aeroplane Train
Boat Aeroplane Cat
Car Motorbike Bird
Cat Dog Boat
Cow Horse Cat
Dog Cat Boat
Horse Cow Aeroplane
Motorbike Car Cat
Train Car Cow
(a) The classes that we considered similar / dissimilar in order to borrow the feature signs from.
Class        6 orig. signs    4 orig. signs
Aeroplane Aeroplane Aeroplane
Bird Bird Aeroplane
Boat Aeroplane Aeroplane
Car Car Car
Cat Cat Cat
Cow Cow Horse
Dog Cat Cat
Horse Cow Horse
Motorbike Car Horse
Train Train Car
(b) Classes chosen to borrow the feature signs from for the experiments in which we kept 6 original signs and 4 original signs.
Table 12: The classes from which we borrowed the signs in the experiments with the sign transfer.

In the second experiment we evaluated the multiclass accuracy in 3 different cases: 1) all the signs were the original ones, 2) the signs were the original ones for 6 classes, while for the other 4 they were borrowed, 3) the signs for 4 classes were the original ones, while for the other 6 classes they were borrowed. The results are summarized in Table 13. We notice that the accuracy generally decreases when the signs are borrowed and we do not use all the original ones. For these two new settings, we present in Table 12b, for each class, the class on which we computed its signs.

Multiclass accuracy
No. of labeled shots    All orig. signs    6 orig. signs    4 orig. signs
1 34.67% 35.14% 34.28%
3 38.79% 37.37% 34.06%
8 43.06% 40.05% 34.91%
16 43.67% 40.45% 35.38%
(a) Features of type I
Multiclass accuracy
No. of labeled shots    All orig. signs    6 orig. signs    4 orig. signs
1 54.95% 54.14% 51.40%
3 57.01% 55.67% 51.02%
8 57.49% 55.25% 50.32%
16 58.11% 55.77% 50.25%
(b) Features of types I + II
Multiclass accuracy
No. of labeled shots    All orig. signs    6 orig. signs    4 orig. signs
1 73.52% 72.46% 72.66%
3 74.04% 72.46% 71.96%
8 74.25% 72.12% 73.32%
16 73.87% 72.38% 73.08%
(c) Features of type III
Multiclass accuracy
No. of labeled shots    All orig. signs    6 orig. signs    4 orig. signs
1 76.86% 76.08% 75.95%
3 78.02% 76.01% 75.61%
8 78.13% 76.34% 76.19%
16 77.93% 76.29% 75.93%
(d) Features of types I + II + III
Table 13: Multiclass accuracy for 3 settings: 1) with the original signs, 2) with 6 original signs, 3) with 4 original signs. See also Table 12b for the classes from which the signs are borrowed.
Figure 13: Youtube-Objects class similarity based on the signs of the features. For each pair of classes we show the percentage of feature signs that coincide. The higher this percentage, the higher the similarity.

Another interesting experiment related to sign transfer was to evaluate the similarity or dissimilarity of the classes based on the percentage of feature signs that coincide for each pair of classes. We try to find a more objective, numerical criterion to decide which classes are similar and which are not. In Fig. 13 we show, for each class, how similar it is to all the classes in the dataset. We notice that the similarities computed in this way are quite intuitive, and the more and stronger features we have, the more intuitive the similarities found are. The similarities computed with the fourth subset of features (types I + II + III) are better than those found with features of type I alone. Let us focus on this last case and look at the classes with the highest similarity: for aeroplane: boat, motorbike, bird, train; for bird: cat, dog, motorbike, aeroplane, cow; for boat: train, car, aeroplane; for car: train, boat, motorbike; for cat: dog, bird, cow, horse; for cow: horse, dog, cat, bird; for dog: cow, horse, cat, bird; for horse: cow, dog, cat; for motorbike: aeroplane, bird, car; for train: car, boat, aeroplane. We remark that, in general, the classes designating animals are similar to each other, while the classes related to transportation (which are also man-made) are more similar among themselves according to the signs of the features. This result is not at all surprising; we expected the semantically related categories to be more similar than those that are not. It would have been counterintuitive to find that the train is similar to the cat, for example.
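
The similarity measure described above is straightforward to compute once the per-class (one-vs-all) feature signs are available. The sketch below is our own illustration; the dictionary layout and names are assumptions, not the original code.

  import numpy as np

  def sign_similarity(signs_per_class):
      # signs_per_class: dict {class_name: array of +1/-1 feature signs
      # estimated for that class}. Returns the class names and the matrix of
      # percentages of coinciding signs for every pair of classes.
      names = sorted(signs_per_class)
      S = np.stack([signs_per_class[c] for c in names])   # (classes, features)
      sim = (S[:, None, :] == S[None, :, :]).mean(axis=-1)
      return names, 100.0 * sim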

5 MNIST experiments

Figure 14: The chosen classifiers, shown in white and black: white for the positively correlated ones, black for the negatively correlated ones and grey for those not chosen.

In order to assess the performance of our algorithm more thoroughly, we also tested it on the MNIST dataset (containing images of handwritten digits). For these experiments we made a small change to the algorithm. The data are normalized to have zero mean and unit standard deviation, so the values are no longer between 0 and 1; they can be negative and larger than 1 in absolute value. We noticed that when we flip the features it is better to use f ← -f instead of f ← 1 - f as we did before. This new way of flipping the features is used in the MNIST experiments. We show in Fig. 14 the classifiers chosen for each class. We represent in black the negatively correlated features (before flipping, because after flipping all features are positively correlated), in white the positively correlated features (before flipping) and in grey the features that were not chosen. The number of classifiers chosen was k = 400. The number of labeled images per class used for learning the signs was 2000. We notice that the chosen classifiers are those from the center of the image. The positively correlated ones are precisely those that represent the shape of the digit, while the negatively correlated ones lie around them and emphasize the shape of the digit.
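
A minimal sketch of this MNIST-specific preprocessing (standardization followed by flipping through negation), in our own notation and under the assumption of +1/-1 labels:

  import numpy as np

  def standardize_and_flip(F, labels):
      # Standardize each feature, then negate the features whose mean over
      # the positive samples does not exceed their mean over the negatives,
      # so that every feature is positively correlated with the class.
      F = (F - F.mean(axis=0)) / (F.std(axis=0) + 1e-8)
      signs = np.where(F[labels == 1].mean(axis=0) >= F[labels == -1].mean(axis=0),
                       1.0, -1.0)
      return F * signs, signs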

No. of labeled images Whole image Center
1 64.36% 32.93%
8 74.75% 58.88%
16 75.70% 62.26%
64 76.57% 65.89%
128 76.63% 66.51%
512 76.63% 67.19%
1024 76.61% 66.90%
2048 76.68% 67.07%
6000 76.57% 67.80%
Table 14: Mean multiclass accuracy for unsupervised learning on MNIST dataset for 30 random experiments. We varied the number of labeled images on which we learnt the signs, and we used the whole testing set for unsupervised learning.
Figure 15: Recognition accuracy for the unsupervised learning algorithm when: a) the testing set is used for unsupervised learning, b) the training set only is used for the unsupervised learning.

The multiclass recognition accuracies obtained by our algorithm on the MNIST dataset are given in Table 14. We learnt the signs of the features on different numbers of images per class, ranging from one image per class up to all (around 6000) images per class. We also present the results obtained when we use only the central part of the image instead of the whole image. Even though the number of features is halved in this case, the accuracy when only the center is used approaches the accuracy obtained with the whole image as the number of labeled examples increases, although for 1 labeled image the accuracy with only the center is half of the accuracy obtained with the whole image.

In Fig. 15 we show the testing accuracy of our unsupervised algorithm in two different settings: 1) the unsupervised learning is done on the testing set, 2) the unsupervised learning is done on the training set. We notice that the difference in accuracy between the two is extremely small, which means that the algorithm generalizes very well and that the power of the classification method does not necessarily come from the testing examples used during unsupervised learning. We also performed an experiment that assesses the similarity between the ten classes (digits) by evaluating the percentage of feature signs that coincide between each pair of classes. In Fig. 16 we show the level of sign coincidence for each pair of digits.

Figure 16: Digit similarity in the MNIST dataset based on the percentage of feature signs that coincide between each pair of digits.

6 Discussion

Discussion on (almost) unsupervised learning:

We demonstrated that our approach is able to learn superior classifiers in the case when no data labels are available, but only the signs of the features are known. In our experiments, we used only minimal data to estimate these signs. Once they are known, any amount of unlabeled data can be incorporated. This reveals a key insight: being able to label the features, rather than the data, is sufficient for learning. For example, when learning to separate oranges from cucumbers, if we knew the features that are positively correlated with the "orange" class (roundness, redness, sweetness, temperature, latitude where the image was taken) in the immense sea of potential cues, we could then employ huge amounts of unlabeled images of oranges and cucumbers to find the best relatively small group of such features. Also note that, since only a small number of images is used for estimating the feature signs (as few as one per class), some signs may be wrong. However, the very weak sensitivity of the method to the number of labeled training samples strongly indicates that it is robust to noise in sign estimation, as long as most of the features are correctly signed.

Discussion on the selected features:
Figure 17: For each training target class from Youtube-Objects videos (labels on the left), we present the most frequently selected ImageNet classifiers (input features), over multiple independent experiments with randomly chosen training shots. In images we show the classes that were always selected by our method. On the right we show the probability of selection for the most important features, together with other relevant classes and their frequency of selection, presented as a list. Note how stable the selection process is and how related (but not identical) the selected classes are, in terms of appearance, context or geometric part-whole relationships, to work robustly together as an ensemble. We find two aspects genuinely surprising: 1) the high probability (a perfect 1) of selection of the same classes, even for such small random training sets, and 2) the fact that classes unrelated in meaning can be so useful for classification, based on their surprising shape and appearance similarity.

We have noticed some surprising ways in which the class of a frame in Youtube-Objects is associated with a series of classes in ImageNet. These associations arise in different ways:

  1. similarity of the global appearance of the two objects, but no semantic relation: e.g., train vs. banister, tiger shark vs. plane, Polaroid camera vs. car, scorpion vs. motorbike, remote control vs. cat's face, space heater vs. cat's head.

  2. co-occurrence and similar context: helmet vs. motorbike

  3. part-to-whole object relation: grille, mirror and wheel vs. car

  4. combinations of the previous: dock vs. boat, steel bridge vs. train, albatross vs. plane.

Another observation is that some of the selected classes play the role of borders between the positive class and the others, ensuring the separation between the main class and the neighbouring classes. Another benefit is that, although there may be no classifier for a certain class, the method manages to learn how to distinguish this class from the others by using together other existing classes that are similar to it. For example, even though there is no "cow" class in ImageNet, the method learns the new concept from the classes that are available. In order to support our claims, we show in Figure 17, for each class in Youtube-Objects, the ImageNet classes whose weights were the largest, meaning that they mattered most. We notice that many selected classes are similar in appearance to the positive class (this is most visible in the case of the aeroplane class), while for other classes the resemblance is also at the semantic level, not only in appearance.

7 Conclusion

We present a fast feature selection and learning method that requires minimal supervision, with strong theoretical properties and excellent generalization and accuracy in practice. The crux of our approach is its ability to learn from unlabeled data once the feature signs are determined. Our contribution could open doors for new and exciting research in machine learning, with practical and theoretical impact. Both our supervised and unsupervised approaches can quickly learn from limited data and identify sparse combinations of features that outperform powerful methods such as SVM, AdaBoost, Lasso and greedy sequential selection — in both time and accuracy. With a formulation that permits very fast optimization and effective learning from large heterogeneous feature pools, our approach provides a useful tool for many recognition tasks, suited for real-time, dynamic environments. Our work complements much of the machine learning research on developing new, more powerful, classifiers. While this thesis has primarily demonstrated the effectiveness of our feature combinations in a specific context, our methods are general and could be used in conjunction with any machine learning algorithm.

We tested the method on a difficult video dataset and also showed that knowledge transfer is possible between datasets with very different characteristics: different object classes, different image quality and different positioning of the target object. The method needs very limited labeled data for computing the signs of the features (whether they are positively or negatively correlated) and manages to estimate the signs quite well even when only one frame per class is provided. The method can also successfully handle large quantities of unlabeled data. Moreover, after a fraction of the unlabeled data has been presented to the algorithm, the recognition accuracy reaches a plateau, which means that even fewer examples are enough for learning. Both the supervised and the unsupervised approaches are better than most of the methods mentioned above.

The proposed method has strong theoretical properties: it guarantees the sparsity of the solution, all selected features have the same contribution and, together, their weights sum to 1. The original supervised formulation is a convex optimization problem with a global minimum, while the unsupervised formulation is a concave minimization problem, which is more difficult to solve and for which only local optima can be guaranteed.

Even though our algorithm is a standalone feature selection method, it can also be used in combination with other machine learning methods; for example, it can be combined with SVM by applying SVM only on the features selected by our method, as we did in our experiments.

8 Future work

In the next steps of this work we intend to apply our method to new datasets. We can also apply our feature selection method to different problems, because it is a general approach that is not designed especially for object recognition. Moreover, we want to combine it with neural networks, by taking features obtained at different levels of the networks and feeding them to our feature selection algorithm. We also consider using other, more video-oriented features, such as motion; until now, we used only features that could also be applied to still images. We might also improve our prediction by taking into account the fact that some frames come from the same shot and taking the class predicted in the majority of the frames as the class of the given shot.

Another idea is to create an unsupervised hierarchy starting from the unsupervised variant of our algorithm. We want to add a new level to the algorithm by creating new features. We can consider regions of images that contain a pattern built from the pixels already chosen in the previous stage, on which we can apply functions like max, min and mean to create new features. We can also perform a local search around the centers of these regions, because the pattern contained in the current region might respond better if it is shifted by a few pixels. The values obtained by applying these functions on the regions can be considered higher-level features that can be used either in parallel with the old ones or separately. We need to optimize some parameters that characterize these features: the size of the region and the distance over which we search for a better position. We choose the centroids of these regions using our unsupervised algorithm, thus creating the new level of unsupervised learning. We are currently working on this idea, but it requires more investigation.
