Scalable Solution for Approximate Nearest Subspace Search

03/29/2016 · by Masakazu Iwamura, et al.

Finding the nearest subspace is a fundamental problem that underlies many applications. In particular, a scalable solution that is fast and accurate for large problems has a great impact. The existing methods for the problem are, however, impractical in large-scale problems with a large number of subspaces and a high-dimensional feature space. A cause is that they are designed based on the traditional idea of representing a subspace by a single point. In this paper, we propose a scalable solution for the approximate nearest subspace search (ANSS) problem. Intuitively, the proposed method represents a subspace by multiple points, unlike the existing methods. This makes a large-scale ANSS problem tractable. In the experiment with 3036 subspaces in the 1024-dimensional space, we confirmed that the proposed method was 7.3 times faster than the previous state-of-the-art without loss of accuracy.


1 Introduction

Subspace representation, which represents an entity by a linear subspace of the Euclidean space, has attracted increasing attention in the computer vision community. Examples of related research include activity recognition [1, 2], video clustering [1], pedestrian detection [3], face recognition [4, 1, 5], object recognition [5, 6, 7], feature representation [8, 9], gender recognition [10] and MRI data analysis [11].

A major usage of the subspace representation is pattern recognition, which requires finding the nearest subspace to a query subspace. This problem, called the nearest subspace search (NSS) problem, is therefore fundamental to many applications. In particular, a scalable solution that is fast and accurate for large problems has a great impact and is expected to be indispensable in the near future.

The main difficulty of the NSS problem is that the distance between subspaces is not measured by a common distance (e.g., the Euclidean distance) defined in the Euclidean space. Instead, it is measured by a special kind of distance defined on the Grassmannian, which is regarded as the set of linear subspaces; a point on the manifold represents a subspace. Thus, solutions of the well-studied approximate nearest neighbor search (ANNS) problem, which finds the nearest point to a query point, are not directly applicable to the problem. (Footnote: to avoid confusion between ANSS and ANNS, we add "(representing a subspace)" to ANSS and "(representing a point)" to ANNS where needed.)

To cope with the difficulty, two approaches have been proposed. One is to develop an approximate nearest subspace search (ANSS) method dedicated to the Grassmannian [12]. The method uses a distance defined on the Grassmannian, and the existing framework of the ANNS problem is applied to the ANSS problem. However, even with an excellent algorithm, one cannot avoid the heavy computational burden of principal component analysis (PCA) or singular value decomposition (SVD) required to calculate every distance between subspaces. The other is to map points in the Grassmannian to the Euclidean space [13, 14, 15]. In this approach, solutions of the ANNS problem are usable as they are. However, the dimensionality of the mapped space is too high to benefit from the ANNS methods.

The existing methods of both approaches do not efficiently solve the problem. This is because they are developed based on the traditional idea, cultivated through the successful experience of solving the ANNS problem, that a single instance (such as a subspace or a point) should be represented by a single point. Indeed, they all represent a subspace by a single point in either the Grassmannian or the Euclidean space and directly apply ANNS techniques to those points.

In this paper, we propose a novel method that is computationally efficient in large-scale ANSS problems. The proposed method is scalable with respect to both the number of subspaces and the dimensionality of the feature space. Its main idea is to decompose a distance calculation on the Grassmann manifold into multiple distance calculations in the Euclidean space, each of which is efficiently realized by an existing ANNS method used in a manner different from its usual usage. Thus, this approach can be interpreted as representing a subspace by multiple points in the Euclidean space, which makes a large-scale ANSS problem tractable.

The contributions of this paper are listed below.

  • The paper presents a scalable solution to the ANSS problem with regard to both the number of subspaces and the dimensionality of the feature space.

  • The proposed method can be intuitively interpreted as representing a subspace by multiple points, while the existing methods adhere to the traditional idea of representing a subspace by a single point.

  • In the experiment with 3036 subspaces in the 1024-dimensional space, we confirmed that the proposed method was 7.3 times faster than the previous state-of-the-art without loss of accuracy.

2 Preparation

This section provides the background needed to read the paper: the Grassmannian, principal angles and distances between subspaces, as well as the relationship between the Euclidean distance and the inner product. Parts of Secs. 2.2, 2.3 and 2.4 are based on [16].

Throughout the paper, bold capital letters denote matrices (e.g., $\bm{Y}$) and bold lower-case letters denote column vectors (e.g., $\bm{y}$).

2.1 Squared Euclidean distance and inner product

Let us begin by reviewing the equivalence of the squared Euclidean distance and the inner product under a certain condition.

The squared Euclidean distance between two vectors $\bm{x}$ and $\bm{y}$ is given as

$d^2(\bm{x}, \bm{y}) = \|\bm{x} - \bm{y}\|^2$   (1)
$= \|\bm{x}\|^2 + \|\bm{y}\|^2 - 2\,\bm{x}^\top \bm{y}.$   (2)

If $\bm{x}$ and $\bm{y}$ are unit vectors, i.e., $\|\bm{x}\| = \|\bm{y}\| = 1$, it becomes

$d^2(\bm{x}, \bm{y}) = 2 - 2\,\bm{x}^\top \bm{y}.$   (3)

In this case, the squared Euclidean distance and the inner product are related by a one-to-one mapping: a small distance corresponds to a large inner product and vice versa.
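
This one-to-one relationship is easy to check numerically. The following is a minimal sketch (numpy, illustrative only; the paper's implementation is in C++) verifying Eq. (3) for random unit vectors.

    # Check Eq. (3): for unit vectors, squared distance and inner product
    # determine each other (small distance <-> large inner product).
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.standard_normal(16); x /= np.linalg.norm(x)  # random unit vector
    y = rng.standard_normal(16); y /= np.linalg.norm(y)  # random unit vector

    d2 = np.sum((x - y) ** 2)                  # squared Euclidean distance
    print(np.isclose(d2, 2.0 - 2.0 * x @ y))   # True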

2.2 Grassmannian

Definition 1

The Grassmannian $\mathcal{G}(m, D)$ is the set of $m$-dimensional linear subspaces of $\mathbb{R}^D$.

Let $\bm{Y}$ be a $D \times m$ orthonormal matrix such that $\bm{Y}^\top \bm{Y} = \bm{I}_m$, where $\bm{I}_m$ is the $m \times m$ identity matrix. $\bm{Y}$ spans a subspace, and $\mathrm{span}(\bm{Y})$ denotes the subspace spanned by the columns of $\bm{Y}$. (Footnote: the subspace spanned by $\bm{Y}$ is also spanned by any other orthonormal matrix $\bm{Y}\bm{R}$, where $\bm{R} \in O(m)$ and $O(m)$ is the group of $m \times m$ orthonormal matrices.) $\mathrm{span}(\bm{Y})$ is regarded as a point in the Grassmannian.

2.3 Principal angles

To find the nearest subspace, we have to define a distance between subspaces. Some distances are calculated using the principal angles defined below.

Definition 2

Let $\bm{Y}_1$ and $\bm{Y}_2$ be orthonormal matrices of size $D \times m$. The principal angles $0 \le \theta_1 \le \cdots \le \theta_m \le \pi/2$ between the two subspaces $\mathrm{span}(\bm{Y}_1)$ and $\mathrm{span}(\bm{Y}_2)$ can be computed from the SVD of $\bm{Y}_1^\top \bm{Y}_2$ as

$\bm{Y}_1^\top \bm{Y}_2 = \bm{U} (\cos\bm{\Theta}) \bm{V}^\top,$   (4)

where $\cos\bm{\Theta}$ is the $m \times m$ diagonal matrix $\mathrm{diag}(\cos\theta_1, \ldots, \cos\theta_m)$, and $\bm{U}$ and $\bm{V}$ are orthonormal matrices such that $\bm{U}^\top \bm{U} = \bm{I}_m$ and $\bm{V}^\top \bm{V} = \bm{I}_m$, respectively.

2.4 Distances between subspaces

The geodesic distance is a formal measure of the distance between two subspaces; it is the length of the shortest geodesic connecting the two corresponding points on the Grassmannian. Using the principal angles, the geodesic distance is given as

$d_G(\bm{Y}_1, \bm{Y}_2) = \left(\sum_{i=1}^{m} \theta_i^2\right)^{1/2}.$   (5)

The geodesic distance is computationally expensive because it requires either PCA or SVD to calculate the principal angles.
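
As an illustration, the following minimal numpy sketch (a reconstruction under the notation above, not the authors' code) computes the principal angles of Definition 2 from the SVD of $\bm{Y}_1^\top \bm{Y}_2$ and the geodesic distance of Eq. (5).

    # Principal angles (Definition 2) and geodesic distance (Eq. (5)).
    import numpy as np

    def random_basis(rng, D, m):
        # Orthonormal basis of a random m-dimensional subspace of R^D.
        return np.linalg.qr(rng.standard_normal((D, m)))[0]

    def principal_angles(Y1, Y2):
        # Singular values of Y1^T Y2 are cos(theta_1), ..., cos(theta_m).
        s = np.linalg.svd(Y1.T @ Y2, compute_uv=False)
        return np.arccos(np.clip(s, -1.0, 1.0))

    def geodesic_distance(Y1, Y2):
        # Eq. (5): needs the SVD, hence the computational cost noted above.
        return np.sqrt(np.sum(principal_angles(Y1, Y2) ** 2))

    rng = np.random.default_rng(0)
    Y1, Y2 = random_basis(rng, 32, 4), random_basis(rng, 32, 4)
    print(geodesic_distance(Y1, Y2))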

In the literature [16, 17], other distances are also defined using the principal angles. Among them, we introduce the projection metric:

$d_P(\bm{Y}_1, \bm{Y}_2) = \left(\sum_{i=1}^{m} \sin^2\theta_i\right)^{1/2}$   (6)
$= \left(m - \sum_{i=1}^{m} \cos^2\theta_i\right)^{1/2}.$   (7)

This is one of the two metrics kernelized in [16]. The kernelized version of the projection metric, called the projection kernel, is given as

$k_P(\bm{Y}_1, \bm{Y}_2) = \|\bm{Y}_1^\top \bm{Y}_2\|_F^2 = \sum_{i=1}^{m} \cos^2\theta_i.$   (8)

Note that the projection kernel represents a similarity while the projection metric represents a distance. The projection kernel is computationally cheaper than the geodesic distance because it can be computed as $\|\bm{Y}_1^\top \bm{Y}_2\|_F^2$ without computing the principal angles. As described in Sec. 4, it has a desirable property for the proposed method.
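
A minimal sketch, under the reconstruction of Eqs. (6)-(8) above: the projection kernel is just a Frobenius norm, so it needs no SVD or PCA, and the projection metric follows from it.

    # Projection kernel (Eq. (8)) and projection metric (Eqs. (6)-(7)).
    import numpy as np

    def projection_kernel(Y1, Y2):
        return np.linalg.norm(Y1.T @ Y2, 'fro') ** 2      # no SVD/PCA needed

    def projection_metric(Y1, Y2):
        return np.sqrt(Y1.shape[1] - projection_kernel(Y1, Y2))

    rng = np.random.default_rng(0)
    Y1 = np.linalg.qr(rng.standard_normal((32, 4)))[0]    # orthonormal bases
    Y2 = np.linalg.qr(rng.standard_normal((32, 4)))[0]

    cos_t = np.linalg.svd(Y1.T @ Y2, compute_uv=False)    # cos(theta_i)
    print(np.isclose(projection_kernel(Y1, Y2), np.sum(cos_t ** 2)))               # True
    print(np.isclose(projection_metric(Y1, Y2), np.sqrt(np.sum(1 - cos_t ** 2))))  # True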

Other kernels for the Grassmannian are proposed in [18, 10]. Among them, we introduce the Grassmannian radial basis function (Grassmannian RBF; GRBF) kernel [10], given as

$k_{GRBF}(\bm{Y}_1, \bm{Y}_2) = \exp\!\left(\beta \, \|\bm{Y}_1^\top \bm{Y}_2\|_F^2\right),$   (9)

where $\beta > 0$ is a parameter.

The Grassmannian RBF kernel also has a desirable property for the proposed method.
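
Under the same reconstruction, the GRBF kernel of Eq. (9) is a monotone function of the projection kernel (cf. Eq. (12) and the observation in Sec. 5 that PK and GRBF yield identical accuracies because the ordering of similarities does not change); a minimal sketch:

    # GRBF kernel (Eq. (9)) as a monotone transform of the projection kernel.
    import numpy as np

    def grassmannian_rbf_kernel(Y1, Y2, beta=1.0):
        # beta > 0 is the kernel parameter; a larger k_P always gives a larger k_GRBF.
        return np.exp(beta * np.linalg.norm(Y1.T @ Y2, 'fro') ** 2)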

3 Related Work

In this section, we review the related work in greater detail than in the introduction, using the terminology introduced in the previous section.

A representative method to solve the ANSS problem is the one proposed by Basri et al. in a series of studies [13, 14, 15]. We call the method BHZ, following [12]. Their main idea is to map a linear subspace (i.e., a point in the Grassmannian) to a point in the Euclidean space so as to apply an ANNS method. As already pointed out, its major drawback is that the dimensionality of the mapped space is too high to benefit from the ANNS methods. Letting $D$ be the dimensionality of the original feature space, that of the mapped space is $D(D+1)/2$. For example, subspaces in the 1024- and 256-dimensional spaces used in the experiments in this paper are mapped to points in the 524800- and 32896-dimensional spaces, respectively. In such a high-dimensional space, no ANNS method works efficiently and even the brute-force search does not work.
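
The blow-up is easy to quantify; a small sketch, assuming the $D(D+1)/2$ mapped dimensionality reconstructed above:

    # Dimensionality of the BHZ-mapped space for the feature spaces used here.
    def bhz_mapped_dim(D: int) -> int:
        return D * (D + 1) // 2

    print(bhz_mapped_dim(256))    # 32896
    print(bhz_mapped_dim(1024))   # 524800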

Wang et al. propose another kind of method called Grassmannian-based locality hashing (GLH) [12]. It realizes the framework of locality sensitive hashing (LSH) [19] in the Grassmannian using the geodesic distance. Its main idea is to index the subspaces in the database with random vectors. More precisely, for each random vector, the subspaces are divided into two states based on whether the angle between the random vector and the subspace is within a threshold angle or not. It is, however, not practical in the large-scale ANSS problem. A cause is that in a high-dimensional space, the angle between two vectors tends to be close to a right angle, so there is almost no chance for the angle to fall below the threshold. This means that GLH cannot divide the subspaces as desired. In addition, as already pointed out, the fact that computationally expensive PCA or SVD is required to calculate every geodesic distance between subspaces is a heavy burden.
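
The near-orthogonality claim is easy to reproduce; the following small experiment (an illustration, not from the paper) shows the angle between random unit vectors concentrating around 90 degrees as the dimensionality grows.

    # Angles between a fixed direction and random unit vectors in R^D.
    import numpy as np

    rng = np.random.default_rng(0)
    for D in (16, 256, 1024):
        v = rng.standard_normal((10000, D))
        v /= np.linalg.norm(v, axis=1, keepdims=True)              # random unit vectors
        angles = np.degrees(np.arccos(np.clip(v[:, 0], -1, 1)))    # angle to e_1
        print(D, round(angles.mean(), 1), round(angles.std(), 1))  # std shrinks with D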

4 Proposed Method

4.1 Problem definition

The problem we address is to find, among the $N$ subspaces stored in the database, each denoted by $\mathrm{span}(\bm{P}_n)$ ($n = 1, \ldots, N$), the one nearest to a given query subspace, denoted by $\mathrm{span}(\bm{Q})$. Here, $\bm{P}_n$ and $\bm{Q}$ are orthonormal matrices of size $D \times m$. The problem is formulated as

$n^* = \mathop{\arg\min}_{n} \, d(\bm{P}_n, \bm{Q})$   (10)

or

$n^* = \mathop{\arg\max}_{n} \, s(\bm{P}_n, \bm{Q}),$   (11)

where $n^*$ is the ID of the nearest subspace to the query subspace, and $d(\cdot,\cdot)$ and $s(\cdot,\cdot)$ are a distance function and a similarity function between subspaces, respectively. In this paper, we use the projection kernel given in Eq. (8) and the Grassmannian RBF kernel given in Eq. (9) as the similarity function in Eq. (11). (Footnote: one may ask about the pros and cons of replacing the geodesic distance with these kernels with regard to accuracy. It is, however, almost impossible to discuss this in general because the accuracy fully depends on the data. Thus, the accuracies of these kernels may be worse than that of the geodesic distance, though the experimental results in this paper were the opposite.) Hereafter, we present the proposed method using the projection kernel. The outcome on the projection kernel is directly applicable to the Grassmannian RBF kernel because Eq. (9) can be rewritten as

$k_{GRBF}(\bm{Y}_1, \bm{Y}_2) = \exp\!\left(\beta \, k_P(\bm{Y}_1, \bm{Y}_2)\right).$   (12)

4.2 Approximation of distance calculation

As mentioned above, the proposed method calculates a distance between subspaces based on multiple distances in the Euclidean space. This is realized by decomposing the similarity function.

Letting $\bm{P} = [\bm{p}_1, \ldots, \bm{p}_m]$ and $\bm{Q} = [\bm{q}_1, \ldots, \bm{q}_m]$, the elements of their matrix product $\bm{P}^\top \bm{Q}$ are given as

$(\bm{P}^\top \bm{Q})_{ij} = \bm{p}_i^\top \bm{q}_j.$   (13)

Thus, Eq. (8) of the projection kernel can be expanded as

$k_P(\bm{P}, \bm{Q}) = \|\bm{P}^\top \bm{Q}\|_F^2 = \sum_{i=1}^{m}\sum_{j=1}^{m} \left(\bm{p}_i^\top \bm{q}_j\right)^2.$   (14)

In contrast to the squared Euclidean distance, which involves only one inner product in Eq. (3), the projection kernel involves $m^2$ inner products. Most of them, however, contribute little to the value of the projection kernel. Fig. 1 shows typical distributions of the inner products.

(a) 256-dimensional feature vector of the object recognition task.
(b) 256-dimensional feature vector of the handwritten character recognition task.
(c) 1024-dimensional feature vector of the handwritten character recognition task.
Figure 1: Histogram of inner products in the object recognition task in Sec. 5.1 and handwritten character recognition task in Sec. 5.2.

As seen in the figure, most of the inner products take values close to zero. Favorably, in a higher-dimensional space, even more inner products take values close to zero. This is because the sum of the squared elements of each column of $\bm{P}^\top \bm{Q}$ in Eq. (13) is bounded by one regardless of the dimensionality. That is, for a query eigenvector $\bm{q}_j$, $\sum_{i=1}^{m} (\bm{p}_i^\top \bm{q}_j)^2 = \|\bm{P}^\top \bm{q}_j\|^2 \le 1$. Geometrically, the length of a unit vector projected onto a subspace is at most one. This implies that only a limited number of inner products contribute to the value of the projection kernel and, more importantly, that they determine the order of these values over different subspaces.
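
Both facts, the decomposition of Eq. (14) into $m^2$ squared inner products and the per-column bound, can be checked directly; a minimal numpy sketch with the notation assumed above:

    # Eq. (14) as a sum of squared inner products, and the bound ||P^T q_j||^2 <= 1.
    import numpy as np

    rng = np.random.default_rng(0)
    D, m = 1024, 5
    P = np.linalg.qr(rng.standard_normal((D, m)))[0]   # database subspace basis
    Q = np.linalg.qr(rng.standard_normal((D, m)))[0]   # query subspace basis

    inner = P.T @ Q                                    # Eq. (13): m x m inner products
    print(np.isclose(np.sum(inner ** 2),               # Eq. (14)
                     np.linalg.norm(P.T @ Q, 'fro') ** 2))       # True
    print(np.all(np.sum(inner ** 2, axis=0) <= 1.0 + 1e-12))     # True for any D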

As is easily conceivable from this property, taking only the dominating inner products is sufficient to select the nearest subspace with the maximum similarity. Here a question arises: how can we efficiently obtain the dominating (large-valued) inner products? One might think of a process such as (1) calculating all the inner products first and then (2) selecting the large-valued ones. However, this does not reduce the computational time because process (1) is the time-consuming part, not (2).

Our strategy realizes this efficiently by using an ANNS method in a manner different from its usual usage. As seen in Eq. (3), the squared Euclidean distance and the inner product are equivalent for unit vectors; recall that a small distance means a large inner product. Thus, finding some vectors with large-valued inner products is equivalent to finding the same number of vectors with small Euclidean distances. An important note is that special care is needed because squared values of the inner products are used in Eq. (14): not only vectors having small distances but also ones having large distances (large negative inner products) should be retrieved. To cope with this, the query vectors with the opposite signs (e.g., $-\bm{q}_j$ for $\bm{q}_j$) are also used as queries of the ANNS problem. Since $-\bm{q}_j$ is the most distant vector from $\bm{q}_j$ on the surface of the unit hypersphere, the most distant vectors from $\bm{q}_j$ are obtained as the nearest neighbors of $-\bm{q}_j$.
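
A minimal sketch of this retrieval strategy with a brute-force stand-in for the ANNS method (the paper uses BDH): the $k$ stored unit vectors with the largest squared inner products are always contained in the union of the $k$ nearest neighbors of $\bm{q}$ and the $k$ nearest neighbors of $-\bm{q}$.

    # Large |p.q| via two Euclidean nearest-neighbor queries (q and -q).
    import numpy as np

    def nearest(query, vectors, k):
        d2 = np.sum((vectors - query) ** 2, axis=1)    # squared Euclidean distances
        return set(np.argsort(d2)[:k])

    rng = np.random.default_rng(0)
    vectors = rng.standard_normal((1000, 64))
    vectors /= np.linalg.norm(vectors, axis=1, keepdims=True)   # unit vectors
    q = rng.standard_normal(64); q /= np.linalg.norm(q)

    k = 5
    top_by_abs_inner = set(np.argsort(-np.abs(vectors @ q))[:k])
    retrieved = nearest(q, vectors, k) | nearest(-q, vectors, k)
    print(top_by_abs_inner <= retrieved)               # True: the k largest are retrieved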

One might worry about the robustness of the proposed method against rotation of the bases that span the subspaces (i.e., replacing $\bm{P}$ by $\bm{P}\bm{R}$ with $\bm{R} \in O(m)$), for the following reason. The Euclidean distance between two vectors is not preserved when the two vectors are rotated differently; that is, in general,

$\|\bm{x} - \bm{y}\| \neq \|\bm{R}_1 \bm{x} - \bm{R}_2 \bm{y}\|$ for $\bm{R}_1 \neq \bm{R}_2$.   (15)

The proposed method selects large-valued inner products based on the Euclidean distance. Thus, if the bases are rotated, the proposed method would seem unable to correctly select the large-valued inner products.

Although the Euclidean distance is affected by rotation of the bases as in Eq. (15) and the proposed method is based on the Euclidean distance, this does not mean that the proposed method is spoiled by rotation of the bases. The reason is that the proposed method does not try to select the same inner products regardless of the rotation; it adaptively selects whichever inner products are large. Rotation of the bases changes the values of the elements (inner products) of the matrix $\bm{P}^\top \bm{Q}$ in Eq. (13), and hence the large-valued inner products that should be selected change as well. They are adaptively selected by the ANNS method, which efficiently finds points near a query (near points correspond to large-valued inner products).
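
A small check of this argument (a sketch, not from the paper): right-multiplying a basis by an orthogonal matrix changes the individual inner products of Eq. (13) but leaves the projection kernel unchanged, so the set of large-valued inner products simply moves and is re-found by the nearest-neighbor queries.

    # Rotating a basis changes the inner products but not the projection kernel.
    import numpy as np

    rng = np.random.default_rng(0)
    D, m = 64, 4
    P = np.linalg.qr(rng.standard_normal((D, m)))[0]
    Q = np.linalg.qr(rng.standard_normal((D, m)))[0]
    R = np.linalg.qr(rng.standard_normal((m, m)))[0]   # random orthogonal m x m matrix

    print(np.isclose(np.linalg.norm(P.T @ Q, 'fro') ** 2,
                     np.linalg.norm((P @ R).T @ Q, 'fro') ** 2))   # True: kernel unchanged
    print(np.allclose(P.T @ Q, (P @ R).T @ Q))                     # False: entries change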

Input: Vector data from which the subspaces are calculated
Output: Eigenvectors indexed by an ANNS method
// [Step 1] Calculate eigenvectors
Calculate the eigenvectors $\bm{P}_n = [\bm{p}_{n,1}, \ldots, \bm{p}_{n,m}]$ of the subspace to be stored in the database, for all $n = 1, \ldots, N$.
// [Step 2] Store eigenvectors into the database
Store all the column vectors $\bm{p}_{n,1}, \ldots, \bm{p}_{n,m}$ ($n = 1, \ldots, N$) in the database.
// [Step 3] Indexing
Execute the indexing process of an ANNS method.
Algorithm 1: Indexing procedure of the proposed method.
Input: Query vectors from which the query subspace is calculated; the eigenvectors indexed by the ANNS method
Output: The nearest subspace(s)
// [Step 1] Calculate query eigenvectors
Calculate $\bm{Q} = [\bm{q}_1, \ldots, \bm{q}_m]$ in the same manner as in the indexing procedure.
// [Step 2] ANNS search
Using the ANNS method, search for the $k$ (approximate) nearest neighbors of each of the $m$ column vectors $\bm{q}_j$ ($j = 1, \ldots, m$) of $\bm{Q}$ and of the vectors with opposite signs ($-\bm{q}_j$).
// Preparation for Step 3
Let $s \in \{+, -\}$ be an indicator representing either $\bm{q}_j$ or $-\bm{q}_j$: "$+$" for $\bm{q}_j$ and "$-$" for $-\bm{q}_j$. Let $n^s_{jk}$ and $m^s_{jk}$ be the subspace ID ($1 \le n^s_{jk} \le N$) and the eigenvector ID ($1 \le m^s_{jk} \le m$) of the $k$-th nearest neighbor of either $\bm{q}_j$ or $-\bm{q}_j$, switched by $s$. The nearest neighbors of $\bm{q}_j$ and $-\bm{q}_j$ are thus given as the stored eigenvectors
$\bm{p}_{n^s_{jk},\, m^s_{jk}}.$   (16)
// [Step 3] Calculate an approximate similarity of the projection kernel
Calculate an approximate similarity of the projection kernel (Eq. (14)) in the incremental manner described in Steps 3-1 and 3-2.
// [Step 3-1] Initialize similarities
Initialize the similarities of all subspaces with 0:
$\hat{k}_P(\bm{P}_n, \bm{Q}) \leftarrow 0$ for all $n$.   (17)
// [Step 3-2] Update similarities
For all $j$, $k$ and $s$, update the similarity as follows:
$\hat{k}_P(\bm{P}_{n^s_{jk}}, \bm{Q}) \leftarrow \hat{k}_P(\bm{P}_{n^s_{jk}}, \bm{Q}) + \left(\bm{p}_{n^s_{jk},\, m^s_{jk}}^\top \bm{q}_j\right)^2.$   (18)
// [Step 4] Select the nearest subspace(s)
Select the subspace(s) having the largest similarity(ies) as the nearest subspace(s).
Algorithm 2: Searching procedure of the proposed method.

4.3 Procedure

The procedure of the proposed method is given as follows.

4.3.1 Indexing

The indexing procedure of the proposed method is shown in Algorithm 1. In this process, all the eigenvectors spanning the subspaces are stored in the database. Then, they are indexed in the manner of an ANNS method.

4.3.2 Search

The searching procedure of the proposed method is shown in Algorithm 2. In this process, Eq. (14) is calculated in an incremental manner; larger inner products are summed up earlier. The computation is approximated by quitting the process before all the inner products are calculated. If the ANNS method finds $k$ nearest neighbors for each of the $2m$ query vectors, the total number of inner products calculated is $2mk$. The $2m$ query vectors consist of the $m$ query eigenvectors $\bm{q}_j$ and the ones with opposite signs (i.e., $-\bm{q}_j$). With $k$ large enough to cover all the $Nm$ stored eigenvectors, the proposed method outputs the result without approximation, where $N$ is the number of subspaces.

It is noteworthy that the inner product in Eq. (18) does not need to be calculated explicitly because it is provided by the ANNS method. That is, from Eq. (3), the inner product is obtained from the squared distance returned by the ANNS method as

$\bm{p}^\top \bm{q} = 1 - \tfrac{1}{2}\, d^2(\bm{p}, \bm{q}).$   (19)

This also helps reduce computational time of the proposed method.
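
For concreteness, the following is a minimal end-to-end sketch of Algorithms 1 and 2 in numpy (not the authors' C++/BDH implementation; a brute-force scan stands in for the ANNS method, and names such as `search` are illustrative).

    # Indexing (Algorithm 1): pool all database eigenvectors with their subspace IDs.
    import numpy as np

    rng = np.random.default_rng(0)
    D, m, N = 256, 5, 100
    bases = [np.linalg.qr(rng.standard_normal((D, m)))[0] for _ in range(N)]  # P_1..P_N
    db_vectors = np.hstack(bases).T                    # (N*m) x D stored eigenvectors
    db_subspace_id = np.repeat(np.arange(N), m)        # subspace ID of each stored vector

    # Search (Algorithm 2): accumulate squared inner products per subspace ID.
    def search(Q, k):
        scores = np.zeros(N)                           # Step 3-1: initialise similarities
        for q in Q.T:                                  # each query eigenvector q_j
            for query in (q, -q):                      # Step 2: query both q_j and -q_j
                d2 = np.sum((db_vectors - query) ** 2, axis=1)
                for idx in np.argsort(d2)[:k]:         # k (approximate) nearest neighbors
                    ip = 1.0 - 0.5 * d2[idx]           # Eq. (19): inner product from distance
                    scores[db_subspace_id[idx]] += ip ** 2   # Step 3-2: update similarity
        return int(np.argmax(scores))                  # Step 4: most similar subspace

    # A rotated basis of subspace 42 should be matched back to subspace 42.
    R = np.linalg.qr(rng.standard_normal((m, m)))[0]
    print(search(bases[42] @ R, k=10))                 # expected: 42

For simplicity this sketch may count an inner product twice when $k$ is very large (a stored vector can then appear in the lists of both $\bm{q}_j$ and $-\bm{q}_j$); in the small-$k$ regime the method targets, the two lists are disjoint.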

5 Experiments

To evaluate the scalability of the proposed methods, a dataset with a large number of subspaces is desirable. As far as the authors know, however, there is no appropriate dataset that satisfies both of the following: (1) the number of categories (subspaces) is large (preferably 10,000+), and (2) multiple samples per category are available. Thus, we used a dataset for object recognition that is commonly used to evaluate subspace classification methods and a relatively large dataset for handwritten Japanese character recognition.

Type | Abbreviation | Method | Parameters used in Sec. 5.1 | Parameters used in Sec. 5.2
NSS | GD | Geodesic distance in Eq. (5) | N/A | N/A
NSS | PK | Projection kernel in Eq. (8) | N/A | N/A
NSS | GRBF | Grassmannian RBF kernel in Eq. (9) | … | …
ANSS | BHZ | ANSS method by Basri et al. [15] | N/A | N/A
ANSS | GLH | Grassmannian-based locality hashing [12] | Combinations of … and … | Combinations of … and …
ANSS | APK* | Approximate projection kernel | … | …
ANSS | AGRBF* | Approximate Grassmannian RBF kernel | … | …
Table 1: Summary of the methods and parameters used in the experiments. * in the abbreviation indicates the proposed methods.

The methods used in the experiments are summarized in Table 1. As the proposed methods, in addition to the approximate projection kernel (APK) presented in Sec. 4, the approximate Grassmannian RBF kernel (AGRBF), obtained by replacing the projection kernel (PK) of Eq. (8) with the Grassmannian RBF kernel of Eq. (9), was also used. The proposed methods were compared with existing NSS and ANSS methods. The purpose of the comparison with the NSS methods is to evaluate how much computational time the proposed methods can save with a reasonable loss of accuracy. As the NSS methods, the geodesic distance (GD) in Eq. (5), the projection kernel (PK) in Eq. (8), and the Grassmannian RBF kernel (GRBF) in Eq. (9) were used. The purpose of the comparison with the ANSS methods is to evaluate how much computational time the proposed methods can save without loss of accuracy. As the ANSS methods, the one proposed by Basri et al. (BHZ) [15] and Grassmannian-based locality hashing (GLH) [12] were used.

All the methods were implemented in C++ by the authors. The liboctave library was used to efficiently calculate the SVD and PCA as well as matrix products. As the ANNS method for the proposed methods (APK and AGRBF), bucket distance hashing (BDH) [20] was used. Though we tried to use it also for BHZ, it did not work due to the excessively high dimensionality; thus, the brute-force search was used instead. GLH indexes the subspaces in the database by thresholding the angles between the subspaces and random vectors into two states; we fixed its threshold to the value suggested in the paper. For the other methods (GD, PK and GRBF), the brute-force search was used. Note that PK and GRBF achieve the same recognition accuracy because their only difference is whether the exponential function is applied, which does not change the order of the similarities of the subspaces. For the same reason, APK and AGRBF achieve the same recognition accuracy.

We employed servers equipped with four CPUs (Intel Xeon E5-4627 v2, 3.3 GHz, 8 cores) and 512 GB of memory. All data were stored in memory. Each program was executed as a single thread on a single core. The averages over seven runs are shown below.

5.1 Object recognition

The ETH-80 dataset contained eight object categories, each of which contained ten objects [21]. Thus, we had 80 categories in total. For each object, 41 images captured from different viewpoints were available. Sample images are shown in Fig. 2.

Figure 2: Sample images in the ETH-80 database.
Figure 3: Sample images of the ETL9B database.

The images were down-sampled and converted to 16 x 16 grayscale images, and each pixel value was used as a feature; thus, 256-dimensional feature vectors were obtained. The odd-numbered images (21 images per category) were used for training and the even-numbered ones (20 images per category) for testing. The number of subspaces stored in the database was 80 (equal to the number of categories) and that of the query subspaces was 880, i.e., 11 query subspaces per category. The reason we had 11 query subspaces per category was that, among the 20 test images, 10 consecutive images were selected in 11 different ways.

The dimensionality of the subspaces was determined in a preliminary experiment so as to achieve the highest accuracy with PK; it was $m = 7$. Since the dimensionality of the subspaces was 7, the number of inner products of Eq. (13) was $49 = 7^2$ for each category, and since the number of subspaces (categories) in the database was 80, the total number of inner products was $3920 = 80 \times 49$. Thus, with $k$ large enough to cover all the $80 \times 7 = 560$ stored eigenvectors, the proposed methods APK and AGRBF achieve the same accuracies as PK and GRBF as long as all the true $k$-nearest neighbors are retrieved by the ANNS method.

(a) $k$ vs accuracy
(b) $k$ vs time
(c) Time vs accuracy (vs NSS methods)
(d) Time vs accuracy (vs ANSS methods)
Figure 4: Experimental results on the object recognition task.

The recognition results are shown in Fig. 4. In Figs. 4(a) and 4(b), the accuracies and computational times of both proposed methods, APK and AGRBF, increased almost monotonically as $k$ increased. Their accuracies increased rapidly and reached the accuracy of PK and GRBF at a certain $k$; at that $k$, the computational times of APK and AGRBF were 1.72 and 1.64 times as large as those of PK and GRBF, respectively, due to the computational overhead. In Figs. 4(c) and 4(d), APK is compared with the existing NSS and ANSS methods. Fig. 4(c) shows that APK was 4.5 times faster than GD with almost the same accuracy, and 8.4 times faster than PK with a 15.57% loss of accuracy. Fig. 4(d) shows that, without loss of accuracy, APK was 112 and 5.5 times faster than BHZ and GLH, respectively.

5.2 Handwritten Japanese character recognition

We used the handwritten Japanese character dataset ETL9B, which is the binarized version of the ETL9 dataset released in the 1980s [22]. Sample images are shown in Fig. 3. It contains handwritten Japanese characters of 3036 categories, each with 200 samples written by 200 subjects. The first 100 and the latter 100 samples per category were used for training and testing, respectively.

The character images in the ETL9B dataset are black-and-white (binary) images. They were nonlinearly normalized onto a 64 x 64-pixel grid using a nonlinear normalization method [23]. Then, 1024- and 256-dimensional feature vectors were calculated. For the 1024-dimensional feature vectors, 2 x 2 image patches were laid out on the 64 x 64-pixel image without overlapping, and the sum of the four pixel values in each patch (from 0 to 4) was used as a feature, yielding 1024 features. The 256-dimensional feature vectors were created in almost the same manner; the only difference was that 4 x 4 image patches were used instead of the 2 x 2 patches.

The dimensionality of the subspaces was explored in the same manner as in the object recognition task. The best accuracy of PK was, however, achieved at a subspace dimensionality so small that the task is effectively not an NSS task. For the purpose of confirming the effectiveness of the proposed methods as ANSS methods, we selected $m = 5$ for both the 1024- and 256-dimensional feature vectors. Since the number of subspaces in the database was 3036, the total number of inner products was $75900 = 3036 \times 5^2$. With $k$ large enough to cover all the $3036 \times 5 = 15180$ stored eigenvectors, the proposed methods achieve the same accuracies as the ones without approximation.

(a) $k$ vs accuracy
(b) $k$ vs time
(c) Time vs accuracy (vs NSS methods)
(d) Time vs accuracy (vs ANSS methods)
Figure 5: Experimental results on the character recognition task with the 256-dimensional feature vectors.

The recognition results with the 256- and 1024-dimensional feature vectors are shown in Figs. 5 and 6, respectively. Figs. 5(a), 5(b), 6(a) and 6(b) show the same tendency as observed in the object recognition task: the accuracies and computational times of the proposed methods increased monotonically as $k$ increased. With the 256-dimensional feature vectors, Fig. 5(a) shows that both APK and AGRBF reached the accuracy of PK and GRBF at a certain $k$; at that $k$, Fig. 5(b) shows that their computational times were 0.93 and 1.2 times those of PK and GRBF, respectively. With the 1024-dimensional feature vectors, Fig. 6(a) shows that both APK and AGRBF reached the accuracy of PK and GRBF at a certain $k$; at that $k$, Fig. 6(b) shows that their computational times were 0.22 and 0.26 times those of PK and GRBF, respectively. In Figs. 5(c), 5(d), 6(c) and 6(d), APK is compared with the existing NSS and ANSS methods. With the 256-dimensional feature vectors, Fig. 5(c) shows that APK was 17.1 times faster than GD with almost the same accuracy, and 311.1 times faster than GRBF with an 11.00% loss of accuracy. Fig. 5(d) shows that, without loss of accuracy, APK was 66.0 and 16.1 times faster than BHZ and GLH, respectively. With the 1024-dimensional feature vectors, Fig. 6(c) shows that, with almost the same accuracy, APK was 38.8 and 3.6 times faster than GD and GRBF, respectively. Fig. 6(d) shows that, without loss of accuracy, APK was 2341 and 7.3 times faster than BHZ and GLH, respectively. Comparing the results with the 256- and 1024-dimensional feature vectors, the advantage of the proposed methods over the other methods was larger with the 1024-dimensional feature vectors. This supports the effectiveness of the proposed methods.

(a) $k$ vs accuracy
(b) $k$ vs time
(c) Time vs accuracy (vs NSS methods)
(d) Time vs accuracy (vs ANSS methods)
Figure 6: Experimental results on the character recognition task with the 1024-dimensional feature vectors.

6 Conclusion

In this paper, we presented a scalable solution for the approximate nearest subspace search problem. The proposed methods are computationally efficient in large-scale problems, with regard to both the number of subspaces and the dimensionality of the feature space. The key idea is to decompose a distance calculation on the Grassmann manifold into multiple distance calculations in the Euclidean space. This makes it possible to efficiently approximate the distance calculation even under such difficult conditions. In the experiment with 3036 subspaces in the 1024-dimensional space, we confirmed that one of the proposed methods was 7.3 times faster than the previous state-of-the-art without loss of accuracy.

Future work includes an extension of the proposed methods to cope with subspaces of different dimensionalities. The requirement that all subspaces have the same dimensionality comes from the definition of the Grassmannian, and all subspace distances defined on it share this limitation. This problem may be addressed by using recent results such as [24].

Acknowledgment

This work was partially supported by the JST CREST project and JSPS KAKENHI #25240028.

References

  • [1] P. Turaga, A. Veeraraghavan, A. Srivastava, and R. Chellappa, “Statistical computations on Grassmann and Stiefel manifolds for image and video-based recognition,” IEEE TPAMI, vol. 33, no. 11, pp. 2273–2286, Nov. 2011.
  • [2] A. Sanin, C. Sanderson, M. T. Harandi, and B. C. Lovell, “Spatio-temporal covariance descriptors for action and gesture recognition,” in Proc. WACV, Jan. 2013, pp. 103–110.
  • [3] Y. Hong, R. Kwitt, N. Singh, B. Davis, N. Vasconcelos, and M. Niethammer, “Geodesic regression on the Grassmannian,” in Proc. ECCV, ser. Lecture Notes in Computer Science, vol. 8690, Sep. 2014, pp. 632–646.
  • [4] L. Wang, X. Wang, and J. Feng, “Subspace distance analysis with application to adaptive Bayesian algorithm for face recognition,” Pattern Recognition, vol. 39, no. 3, pp. 456–464, Mar. 2006.
  • [5] M. T. Harandi, C. Sanderson, S. Shirazi, and B. C. Lovell, “Graph embedding discriminant analysis on Grassmannian manifolds for improved image set matching,” in Proc. CVPR, Jun. 2011, pp. 2705–2712.
  • [6] S. Chen, C. Sanderson, M. T. Harandi, and B. C. Lovell, “Improved image set classification via joint sparse approximated nearest subspaces,” in Proc. CVPR, 2013.
  • [7] A. Cherian and S. Sra, “Riemannian sparse coding of positive definite matrices,” in Proc. ECCV, ser. Lecture Notes in Computer Science, vol. 8691, Sep. 2014, pp. 299–314.
  • [8] K. Kise and T. Kashiwagi, “1.5 million subspaces of a local feature space for 3d object recognition,” in Proc. 1st Asian Conference on Pattern Recognition, Nov. 2011, pp. 672–676.
  • [9] Z. Wang, B. Fan, and F. Wu, “Affine subspace representation for feature description,” in Proc. ECCV, ser. Lecture Notes in Computer Science, vol. 8695, Sep. 2014, pp. 94–108.
  • [10] M. T. Harandi, M. Salzmann, S. Jayasumana, R. Hartley, and H. Li, “Expanding the family of Grassmannian kernels: An embedding perspective,” in Proc. 13th European Conference on Computer Vision, Part VII, ser. Lecture Notes in Computer Science, vol. 8695, Sep. 2014, pp. 408–423.
  • [11] H. J. Kim, N. Adluru, B. B. Bendlin, S. C. Johnson, B. C. Vemuri, and V. Singh, “Canonical correlation analysis on Riemannian manifolds and its applications,” in Proc. ECCV, ser. Lecture Notes in Computer Science, vol. 8690, Sep. 2014, pp. 251–267.
  • [12] X. Wang, S. Atev, J. Wright, and G. Lerman, “Fast subspace search via Grassmannian based hashing,” in Proc. ICCV, Dec. 2013, pp. 2776–2783.
  • [13] R. Basri, T. Hassner, and L. Zelnik-Manor, “Approximate nearest subspace search with applications to pattern recognition,” in Proc. CVPR, Jun. 2007, pp. 1–8.
  • [14] ——, “A general framework for approximate nearest subspace search,” in Proc. 2nd International Workshop on Subspace Methods, Sep. 2009, pp. 109–116.
  • [15] ——, “Approximate nearest subspace search,” IEEE TPAMI, vol. 33, no. 2, pp. 266–278, Feb. 2011.
  • [16] J. Hamm and D. D. Lee, “Grassmann discriminant analysis: a unifying view on subspace-based learning,” in Proc. ICML, 2008.
  • [17] A. Edelman, T. A. Arias, and S. T. Smith, “The geometry of algorithms with orthogonality constraints,” SIAM Journal on Matrix Analysis and Applications, vol. 20, no. 2, pp. 303–353, 1998.
  • [18] M. T. Harandi, C. Sanderson, A. Wiliem, and B. C. Lovell, “Kernel analysis over Riemannian manifolds for visual recognition of actions, pedestrians and textures,” in Proc. WACV, Jan. 2012, pp. 433–439.
  • [19] S. Har-Peled, P. Indyk, and R. Motwani, “Approximate nearest neighbor: Towards removing the curse of dimensionality,” Theory of Computing, vol. 8, pp. 321–350, Jul. 2012.
  • [20] M. Iwamura, T. Sato, and K. Kise, “What is the most efficient way to select nearest neighbor candidates for fast approximate nearest neighbor search?” in Proc. ICCV, Dec. 2013, pp. 3535–3542.
  • [21] B. Leibe and B. Schiele, “Analyzing appearance and contour based methods for object categorization,” in Proc. CVPR, Jun. 2003.
  • [22] T. Saito, H. Yamada, and K. Yamamoto, “On the data base ETL9 of handprinted characters in JIS Chinese characters and its analysis,” Trans. IEICE, vol. J68-D, no. 4, pp. 757–764, Apr. 1985.
  • [23] H. Yamada, K. Yamamoto, and T. Saito, “A nonlinear normalization method for handprinted Kanji character recognition — line density equalization —,” Pattern Recognition, vol. 23, pp. 1023–1029, 1990.
  • [24] K. Ye and L.-H. Lim, “Distance between subspaces of different dimensions,” arXiv:1407.0900, Jul. 2014. [Online]. Available: http://arxiv.org/abs/1407.0900