1 Introduction
Sparse modeling has proven to be a useful framework for signal processing. Each point from a dataset consisting of vectors in a Euclidean space is represented by a vector with only a few nonzero coefficients. Sparse modeling has led to state-of-the-art algorithms in image denoising, inpainting, supervised learning, and, of particular interest here, object recognition. The systems described in
[1, 2, 5, 6, 7] use sparse coding as an integral element. Since the coding is done densely in an image with relatively large dictionaries, it is a computationally expensive part of the recognition system, and a barrier to real-time application. The main contribution of this paper is a fast approximate algorithm for finding sparse representations; we use this algorithm to build a system with near state-of-the-art recognition performance that runs in real time. During inference the algorithm uses a tree to assign an input to a group of allowed dictionary elements and then finds the corresponding coefficient values using a cached pseudoinverse. We give an algorithm for learning the tree, the dictionary, and the dictionary element assignment, and along the way discuss methods for the more general problem of learning the groups in group structured sparse modeling.

One standard formulation of sparse coding is to consider n-dimensional real input vectors x_1, ..., x_N and represent them using m-dimensional coefficient vectors z_1, ..., z_N via an n x m dictionary matrix W by solving
\min_{W, z_1, \dots, z_N} \sum_{i=1}^{N} \| x_i - W z_i \|_2^2 \quad \text{subject to} \quad \| z_i \|_0 \le q,   (1)
where || · ||_0 measures the number of nonzero elements of a vector; each input vector is thus represented as a vector with at most q nonzero coefficients. While this problem is not convex, and in fact the problem in the variable z alone is NP-hard, there exist algorithms for solving both the problem in z (e.g. Orthogonal Matching Pursuit, OMP) and the problem in both variables (e.g. K-SVD [4]) that work well in many practical situations.
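As a reference point for the approximations discussed later, exact greedy coding with OMP can be sketched as follows (a minimal NumPy sketch, not the authors' implementation; the dictionary is assumed to have unit-norm columns):

```python
import numpy as np

def omp(W, x, q):
    """Orthogonal Matching Pursuit: greedily select at most q atoms
    (columns) of W to approximate x, refitting the coefficients on the
    active set by least squares at every step."""
    residual = x.copy()
    active = []
    for _ in range(q):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(W.T @ residual)))
        if j not in active:
            active.append(j)
        # re-fit coefficients on the active set and update the residual
        coeffs, *_ = np.linalg.lstsq(W[:, active], x, rcond=None)
        residual = x - W[:, active] @ coeffs
    z = np.zeros(W.shape[1])
    z[active] = coeffs
    return z
```

The expensive step at inference time is the correlation `W.T @ residual`, whose cost grows with the dictionary size; the tree-based scheme below avoids it.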
It is sometimes appropriate to enforce more structure on z than just sparsity. For example, many authors have noted that the solution to the minimization in (1) (and its convex relaxation) is very unstable, in the sense that nearby inputs can have very different coefficients, in part because of the combinatorially large number of possible active sets (i.e. sets of nonzero coordinates of z). This can be a problem in classification tasks. At other times we may know in advance some structure in the data that the coefficients should preserve. Various forms of structured sparsity are explored in [8, 9, 10, 11].
A simple form of structured sparsity is given by specifying a list of allowable active sets and a function associating each input to one of these configurations. An example of this is the output of many subspace clustering algorithms. There, the dictionary W is reordered and partitioned into blocks W_1, ..., W_K (perhaps via a permutation of its columns), so that each block spans a low-dimensional subspace near which part of the data lies. Supposing for simplicity that each block has the same number of columns d, the allowable active sets are {1, ..., d}, {d + 1, ..., 2d}, etc. By setting the allowable active sets to the blocks, and the function to simply map each point to its nearest subspace (say in the standard sense of Euclidean projections), we get an example of structured sparsity as described above; this sort of method is used in object recognition in [6].
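The nearest-subspace assignment described above can be sketched as follows (an illustrative NumPy sketch; `bases` is a hypothetical list of orthonormal basis matrices, one per block):

```python
import numpy as np

def nearest_subspace(x, bases):
    """Assign x to the subspace (given by an orthonormal basis matrix B,
    columns = basis vectors) whose Euclidean projection is closest to x."""
    best, best_err = 0, np.inf
    for g, B in enumerate(bases):
        proj = B @ (B.T @ x)          # orthogonal projection onto span(B)
        err = np.linalg.norm(x - proj)
        if err < best_err:
            best, best_err = g, err
    return best
```

Note that this scan over all subspaces is exactly the per-input cost that the tree-based hashing of Section 2.3 is designed to avoid.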
In this work we will try to learn the configurations as well as the dictionary. We introduce a Lloyd-like algorithm that alternates between updating the dictionary, updating the assignments of each data point to the groups, and updating the dictionary elements associated to a group via simultaneous orthogonal matching pursuit (SOMP) [12].
At inference time, we need a fast method for determining which group an input x belongs to. This is computationally expensive if there is a large number of groups and one needs to check the projection onto each group. However, by specializing the Lloyd-type algorithm to the case where each group is composed of a union of (perhaps only one) leaves of a binary decision tree, we build a fast inference scheme into the learned dictionary. The key idea is that by using SOMP, we can learn which leaves should use which dictionary elements as we train the dictionary. To code an input, we march it down the tree until we arrive at the appropriate leaf. In addition to the decision vectors and thresholds, we store a lookup table with the active set of each leaf as learned above, and the pseudoinverse of the columns of W corresponding to that active set. Thus, after following the input down the tree, we need only one matrix multiplication to get the coefficients.
Finally, we would like to use these algorithms to build an accurate real-time recognition system. We focus on a particular architecture studied in [1, 2, 6, 7]. First, SIFT descriptors are calculated densely over the image. Then (a form of) sparse coding is used to calculate a sparse vector at every location from the corresponding SIFT vector. Then each feature is pooled over a small number of spatial regions and the results are concatenated. Finally, the labels are obtained using a linear SVM or logistic regression.
We use this pipeline with two modifications. First, we write our own fast implementation of the SIFT descriptor. Second, we use our fast algorithm for the sparse coding step. The resulting system achieves nearly the same performance as the exact sparse coding calculation, but processes images at real-time rates (see Section 3.4) on a laptop computer with a quad-core CPU.
The rest of this paper is organized as follows: in Section 2, we discuss greedy structured sparse modeling, and describe in depth how to train a model that learns the structure, and one that respects a given set of groups defined by a tree. In Section 3, we show experiments on image patches to qualitatively demonstrate what the learned groups look like, and then we apply our methods to object recognition.
2 Hashing and dictionary learning
2.1 A simple form of structured dictionary learning
Here we will first suppose that a list of perhaps overlapping groups on the coefficients is given. That is, if we are learning a representation with m atoms, a list of groups G_1, ..., G_K, each a subset of {1, ..., m}, is specified. We can generalize the Lloyd algorithm for k-means or K-flats to this setting. After initializing the dictionary W, we find the distance of each data point x to its projection onto the span of the atoms indexed by G_j, for each j. Each x is associated to the group with the smallest distance
g(x) = \arg\min_j \, \| x - W_{G_j} W_{G_j}^{+} x \|_2,   (2)
and we find the corresponding coefficients by least squares on the selected atoms.
Then we update the dictionary W to be the minimizer of the resulting convex least-squares problem,
and repeat. Each of the subproblems either has an explicit solution or is convex, and so the energy decreases. When the training is finished, we define g to be the function that maps each point x to the group minimizing the error of the projection of x onto the span of that group's atoms.
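One round of this Lloyd-like alternation might look like the following (a simplified NumPy sketch, assuming non-overlapping groups and an unpenalized least-squares dictionary update; the authors' exact update rules may differ):

```python
import numpy as np

def structured_lloyd_step(X, W, groups):
    """One round of the Lloyd-like alternation: (i) assign each column of X
    to the group whose atoms best reconstruct it, (ii) refit coefficients
    on that group, (iii) update the dictionary by least squares.
    `groups` is a list of index lists into the columns of W."""
    n = X.shape[1]
    Z = np.zeros((W.shape[1], n))
    for i in range(n):
        best_err, best_z = np.inf, None
        for g in groups:
            c, *_ = np.linalg.lstsq(W[:, g], X[:, i], rcond=None)
            err = np.linalg.norm(X[:, i] - W[:, g] @ c)
            if err < best_err:
                best_err = err
                best_z = np.zeros(W.shape[1])
                best_z[g] = c
        Z[:, i] = best_z
    # dictionary update: least-squares fit of W given the codes Z
    W_new = X @ np.linalg.pinv(Z)
    return W_new, Z
```

Each of the three sub-steps can only lower the reconstruction energy, which is what guarantees the monotone decrease noted in the text.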
We can also run the same sort of algorithm when, in addition to specifying a list of indices, each group specifies a cost for the use of the dictionary elements associated to each index. If we choose a quadratic cost for each of the coefficients, we still get explicit updates and the decrease of the energy at each round.
Note that if the number of groups is very large, it may be too costly to find the best group for each x exhaustively. However, we can make a greedy approximation by running a modified OMP. Here, supposing that at an iteration of the OMP we have an active set S, the available dictionary elements to add to S are the union of all groups containing S. It is not necessary to be able to enumerate all the groups to use this method, only to have a subroutine which, given S, can return this union. However, using this sort of greedy approximation removes the guarantee that the energy decreases at each iteration.
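The group-restricted greedy selection can be sketched like this (a hypothetical NumPy sketch; `allowed_atoms` stands in for the subroutine that returns the union of all groups containing the current active set):

```python
import numpy as np

def group_omp(W, x, q, allowed_atoms):
    """OMP in which each greedy selection is restricted to the atoms
    returned by allowed_atoms(active) -- the union of all groups that
    contain the current active set."""
    residual = x.copy()
    active = []
    for _ in range(q):
        cand = [j for j in allowed_atoms(active) if j not in active]
        if not cand:
            break
        # best candidate by correlation with the residual
        corr = np.abs(W[:, cand].T @ residual)
        active.append(cand[int(np.argmax(corr))])
        c, *_ = np.linalg.lstsq(W[:, active], x, rcond=None)
        residual = x - W[:, active] @ c
    z = np.zeros(W.shape[1])
    if active:
        z[active] = c
    return z
```

With `allowed_atoms = lambda a: range(W.shape[1])` this reduces to plain OMP; a restrictive subroutine confines the code to one group's atoms after the first selection.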
2.2 Learning the groups with simultaneous orthogonal matching pursuit
In the previous section the groups were specified in advance. If we want to learn the groups, we can add a step to the algorithm. Now, instead of taking the list of groups as input, we input just the number of dictionary elements and the number of coefficients allowed per data point. After associating to each x the group that best represents it, we can turn around and consider all the x associated to that group. Our task is then to choose a subset of the dictionary that best represents that group. A greedy approximation to this problem in the least squares sense is given by the Simultaneous Orthogonal Matching Pursuit (SOMP) algorithm [12]. This algorithm proceeds just as a standard OMP, but at each iteration, all the x associated to a given group have to choose the next dictionary element added to the group together. See Algorithm 1.
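A minimal sketch of SOMP for selecting a shared active set follows (illustrative NumPy code, not Algorithm 1 itself; the summed absolute correlation used here is one common choice of joint selection statistic):

```python
import numpy as np

def somp(W, X, q):
    """Simultaneous OMP: pick q atoms of W shared by *all* columns of X.
    At each step the atom maximizing the summed absolute correlation with
    the residuals is added; coefficients are refit jointly."""
    R = X.copy()
    active = []
    for _ in range(q):
        scores = np.sum(np.abs(W.T @ R), axis=1)
        scores[active] = -np.inf           # don't pick an atom twice
        active.append(int(np.argmax(scores)))
        C, *_ = np.linalg.lstsq(W[:, active], X, rcond=None)
        R = X - W[:, active] @ C
    return sorted(active)
```

In the group-learning loop, `X` would hold all the data points currently assigned to one group, and the returned atom set becomes that group's new definition.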
Unfortunately, because neither OMP nor SOMP is guaranteed to find the optimal solution to the NP-hard problems they address, the energy may not decrease at each iteration with this scheme; however, as usual, we have found that in practice these methods do usually lead to a decrease in the energy. As in k-means, it may happen that no group uses a dictionary element; in such a situation one can remove a dictionary element from one of the groups, find the residual, and replace the unused dictionary element by the principal component of the residual.
We note that the model presented here can be thought of as a greedy sparse coding version of a “topic model”. The dictionary elements act as the words, the data points as the documents, and the groups are the topics. The algorithm learns the topics and the dictionary simultaneously.
2.3 Hashing, quantization, and dictionary learning
The main focus of this work will be choosing an assignment function g that can be computed rapidly, and learning a dictionary that respects it. We will consider g to be a hash function on the input space, and hash buckets will be the atomic units of the groups; that is, the groups will either be the hash buckets or will be glued together from the hash buckets. This can be considered a sort of geometric regularization of the sparse coding problem: the active set will be forced to remain constant on the region of the input space corresponding to each hash bucket.
Once g is chosen, we will learn the dictionary (and perhaps the groups) as above, but instead of allowing each x to choose the group that best represents it individually, the x in a hash bucket will need to choose the group that best represents them together on average. We will also try to approximate standard greedy dictionary learning; in this case, there will be one group for every hash bucket. As above, and as with k-means, it may happen that no hash bucket uses a particular group; in that case we can just pick a bucket at random and use the output of SOMP on that bucket to regenerate the unused group.
Learning how to quantize is a much studied (but still not completely understood) problem. One common motivation is to build a data structure allowing nearest neighbors from a given data set to be quickly computed. Another common motivation is to use the buckets of the quantization as words to build bag-of-words feature representations. The relationship between vector quantization and sparse coding has been studied before by many authors. In particular, k-means is simply sparse coding with the coefficients restricted to 0 and 1, and only one nonzero allowed per data point (“shape gain coding” allows a non-binary coefficient).
In this work we will use a k-means tree to define g (although perhaps not exactly standard usage, we will call the data structure obtained from binary partitions of the data with subdivisions along medians a hash). We start by taking the entire data set and running 2-means, obtaining two centers. We project each data point onto the direction between the two centers; the data is divided at the median projection. We then repeat on each of the pieces, continuing until each piece is within a given distance of its mean, or a set depth d is reached, whichever comes first. We initialize the means with farthest insertion, as in [13]. Our experience is that very few iterations are necessary, and really the farthest insertion is sufficient; in fact, cutting in random directions (with some additional tricks and randomizations) has been shown to lead to good partitions when the underlying data has a “manifold” structure, see [14]. The number of buckets at the bottom of the tree is upper bounded by 2^d; we will choose d small enough so that it is simple to store a lookup table with the indices into the dictionary for each bucket, as well as the decision vectors for each branch in the tree.
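The tree construction can be sketched as follows (a simplified sketch: the 2-means step is replaced by a farthest-insertion choice of the two centers, which the text notes is often sufficient; names and parameters are illustrative):

```python
import numpy as np

def build_tree(X, depth, min_size=8, rng=None):
    """Binary partition tree in the spirit of a 2-means tree: at each node,
    pick two far-apart 'centers' (farthest-insertion style), project the
    points onto the direction between them, and split at the median
    projection.  Returns a nested dict of (direction, threshold) nodes."""
    rng = np.random.default_rng(0) if rng is None else rng
    if depth == 0 or X.shape[0] <= min_size:
        return {"leaf": True, "points": X}
    c1 = X[rng.integers(X.shape[0])]
    c2 = X[np.argmax(np.linalg.norm(X - c1, axis=1))]   # farthest point
    w = c2 - c1                     # decision vector for this branch
    proj = X @ w
    t = np.median(proj)             # median split threshold
    left, right = X[proj <= t], X[proj > t]
    return {"leaf": False, "w": w, "t": t,
            "left": build_tree(left, depth - 1, min_size, rng),
            "right": build_tree(right, depth - 1, min_size, rng)}
```

The median split keeps the tree balanced, so a depth-d tree costs d inner products per query and has at most 2^d leaves.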
We could also use mappings of the form h(x) = θ(s(Ax + b)), where A is a matrix, s is some sort of nonlinearity, b is an offset, and θ is a thresholding function. These mappings require less storage and are somewhat simpler to compute for the same bit depth, but on the data sets we work on, they have the disadvantage that many of the buckets are often empty or have very few entries for reasonable bit depths. While this can be remedied by simply gluing (nearly) empty buckets to nearby full buckets and updating the lookup table, we have found the trees to work better. Note also that, unlike in nearest neighbor data structures, it is unnecessary for leaf nodes to keep track of spatially nearby leaves that are far away in the tree metric, because all we care about is which dictionary atoms are used at each node.
After building the tree and training the dictionary, in order to compute the coefficients of a new data point x, we pass it through the tree, obtaining its leaf. We look the leaf up in a table, and this gives a set of indices of columns of W; at this point we solve the linear system restricted to those columns to get the outputs. Alternatively, for each group, we can store the pseudoinverse of those columns (or some stable factorization), and just do the requisite matrix multiplication.
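The inference step (tree descent followed by a cached pseudoinverse multiplication) can be sketched as follows (an illustrative NumPy sketch with a hypothetical tree/table layout; leaves carry an `id` into the lookup table):

```python
import numpy as np

def tree_code(x, tree, table, W):
    """Walk x down the decision tree to a leaf, look up that leaf's active
    set and cached pseudoinverse, and produce the sparse code with a
    single matrix multiplication."""
    node = tree
    while not node["leaf"]:
        node = node["right"] if x @ node["w"] > node["t"] else node["left"]
    active, pinv = table[node["id"]]
    z = np.zeros(W.shape[1])
    z[active] = pinv @ x          # coefficients on the leaf's atoms
    return z
```

The per-input cost is one inner product per tree level plus one small matrix-vector product, independent of the total dictionary size.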
2.4 Discussion of related work
The idea of clustering the input space and then using a different dictionary for each cluster has appeared several times before. As mentioned in the introduction, a simple example is the K-flats algorithm, or other subspace clustering algorithms [15]. There, the subdictionaries serve the dual purpose of determining the clusters and also finding the coefficients for the data points associated to them. More recently this technique has been successfully applied to object recognition by [6, 7]. In those works, the clusters are determined by k-means (or a Gaussian mixture model); in the first, there is a different dictionary for each cluster, and the code is the size of the union of all the subdictionaries, but only the blocks corresponding to the centroids near the input are nonzero. In the second work, the dictionaries for each centroid are the same, but the code is still a concatenation of the codes associated to each centroid (and is set to zero if the input does not belong to that centroid). The current work differs from these in two ways. The first is the use of a fast method for clustering, and the second is the use of shared parts across the dictionaries, where the organization of the part sharing has been learned from the data.
In [16] the authors construct a dictionary on the backbone of a hierarchical clustering with fast evaluation. They also use shared parts. However, in that work the part sharing is determined by the tree structure of the clustering, and is not learned.
There is now a large literature on structured sparsity. Like this work, [11, 17] use a greedy approach for structured sparse coding based on OMP or CoSaMP. Unlike this work, they have provable recovery properties when the true coefficients respect the structure and the dictionaries satisfy certain incoherence properties. On the other hand, those works do not attempt to learn the dictionary, and only discuss the forward problem of finding the coefficients from the data and the dictionary. The works in [8, 9, 10] use an approach to structured sparsity that allows for convex optimization in the coefficients. In these works the coefficients are arranged into a predetermined set of groups, and the sparsity term penalizes the number of active groups, rather than the number of active elements; the dictionary is trained to fit the data. None of these works attempt to learn the group structure along with the dictionary.
Finally, we note that other works have explored the idea of accelerating sparse coding by training the dictionary along with an approximation method, e.g. [5, 18]. In the first, the approximation is via a single-layer feed-forward network, and in the second, via a multilayer feed-forward network with a shrinkage nonlinearity. This work uses a tree and a lookup table instead.
3 Experiments
3.1 What do the groups look like?
To get a sense of what the groups learned by Algorithm 2 look like, we train a dictionary on 500,000 image patches and view the results. The image patches are drawn from the PASCAL dataset, and their means are removed. We train a dictionary with 256 elements and 512 groups; each group has 5 dictionary elements in it. We train using the batch method, with a K-SVD update for the dictionary.
After training, some of the dictionary elements are used by many groups, and others are used by only a few. The median number of groups using a given element is 6; 47 elements are in exactly 1 group, and 15 are in more than 30. In Figure 1 we display the dictionary ordered by the number of groups containing each element; this number increases down each column and from left to right. Unsurprisingly, “popular” elements that belong to many groups are low frequency. In this figure we also show the groups containing a few chosen atoms.
3.2 Review of the image classification pipeline
Here we will review a standard pipeline for object recognition [1, 2], while giving details about our implementation, which streamlines certain components. It consists of the following parts: 1) calculation of SIFT vectors at every location (SIFT grid); 2) calculation of the feature vectors for every SIFT vector using the “tree sparse coding” described above; 3) spatial pyramidal max pooling; 4) logistic regression or SVM classification. Care is taken to calculate each of these parts efficiently.
3.2.1 SIFT grid
We run tests with two different implementations of dense SIFT. The first is MATLAB code by S. Lazebnik [1]. We also use a fast, approximate C++ version that we coded ourselves. The details are as follows:
The x and y derivatives. We convolve the image with two filters that are the x and y derivatives of a Gaussian. This results in the values of the x and y derivatives of the image intensity at every location of the image.
Orientation histogram. This operation takes the two gradient values at every location and smoothly bins them into a histogram of eight orientations as follows. First we calculate the orientation angle theta = atan2(dy, dx) and the magnitude m = sqrt(dx^2 + dy^2); the magnitude is then split smoothly between the orientation bins nearest to theta. Some of these operations are computationally expensive, and therefore we precompute the values: we bin the dx and dy values, and for every combination of bins we precompute the resulting eight histogram contributions. The bin range is chosen so that the values of dx and dy never fall outside the range of the binning, so no checks are needed. After this computation we obtain eight values at every location of the image.
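A direct (non-precomputed) version of the smooth orientation binning might look like this (a hedged NumPy sketch: linear interpolation between the two nearest orientation bins is assumed, and the table-lookup acceleration described above is omitted):

```python
import numpy as np

def orientation_histogram(gx, gy, n_bins=8):
    """Soft-bin gradient magnitudes into n_bins orientations: each pixel's
    magnitude is split linearly between the two bins nearest its angle."""
    theta = np.arctan2(gy, gx) % (2 * np.pi)
    mag = np.hypot(gx, gy)
    pos = theta / (2 * np.pi) * n_bins        # fractional bin position
    lo = np.floor(pos).astype(int) % n_bins
    hi = (lo + 1) % n_bins
    w = pos - np.floor(pos)                   # interpolation weight
    hist = np.zeros(gx.shape + (n_bins,))
    idx = np.indices(gx.shape)
    hist[(*idx, lo)] = mag * (1 - w)
    hist[(*idx, hi)] += mag * w
    return hist
```

A lookup-table version would tabulate these eight outputs for every quantized (dx, dy) pair, replacing the trigonometry with a single table read per pixel.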
Smooth subsampling. We subsample the resulting features by two in each direction. Specifically, let f(i, x, y) be the input value obtained from the previous step, where i is the feature number and (x, y) is the location. The output value is f(i, 2x, 2y) + f(i, 2x+1, 2y) + f(i, 2x, 2y+1) + f(i, 2x+1, 2y+1). This is efficient since it only involves additions. Note that it results in output values that are essentially four times larger than the input values at each location.
Smoothing. We convolve each feature map with a small smoothing filter. This is again calculated using additions only, and again results in output values essentially four times larger than the input values.
Combining and normalizing into a SIFT vector. We now obtain a 128-component SIFT vector at every location of the feature maps from the previous step. At every location (of the subsampled feature image) we first obtain a 128-component vector by concatenating the eight-component vectors at a grid of neighboring locations. Then we normalize this vector as follows: if the norm of the vector is smaller than a threshold, we keep the vector; if it is larger, we normalize it to have norm equal to the threshold. The result is placed into the appropriate location of the final output array, whose dimensions are slightly smaller than those of the original image due to boundary effects. This last operation (combining and normalizing) is the most expensive operation in the SIFT grid calculation, and we took care to implement it efficiently. Note that in Lazebnik's (and Lowe's original) SIFT, the smoothing is done over a larger neighborhood, with inputs near the center weighted more than those further away. This makes the output vary more smoothly under translations; in our case we used equal weighting over small neighborhoods for computational efficiency.
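The threshold normalization step can be sketched as follows (a minimal sketch; the threshold value is a free parameter here):

```python
import numpy as np

def clip_normalize(v, threshold=1.0):
    """Keep vectors whose norm is below the threshold as-is; rescale
    longer ones to have norm equal to the threshold."""
    n = np.linalg.norm(v)
    return v if n <= threshold else v * (threshold / n)
```

This caps the influence of high-contrast patches without discarding the magnitude information of low-contrast ones.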
3.2.2 Hashed sparse coding.
We use the main procedure of this paper to calculate a feature vector for each SIFT vector. Each such computation consists essentially of one inner product between the SIFT vector and a tree decision vector per level of the tree, followed by multiplication of the SIFT vector by the appropriate cached pseudoinverse matrix. This is a large reduction compared to the full dictionary multiplication needed for OMP, and the cost does not grow with the size of the dictionary. Our model was trained on randomly selected SIFT vectors from the Pascal 2011 dataset.
3.2.3 Spatial pyramidal pooling.
We use the same spatial pyramidal max pooling as in [2]. Since the feature vectors are in sparse format, the resulting computation is very efficient, and negligible compared to either the SIFT or the tree sparse coding. The details are as follows. We need to calculate the maximum over the features in 1 x 1, 2 x 2, and 4 x 4 spatial regions. First we split the image into the 4 x 4 regions. Let K be the number of features, z(x, y) the input feature vector at location (x, y), and v_r the part of the final feature vector corresponding to region r. We calculate v_r using the following.
v_r(j) = \max_{(x, y) \in r} z_j(x, y).   (3)
This calculation is done by looping over all feature vectors and their nonzero indices and filling the pooled feature vector, so the number of computations is of the order of the total number of nonzero features. We can get the 2 x 2 and 1 x 1 parts of the final feature vector analogously. However, it is more efficient to take the 4 x 4 result and pool it into the 2 x 2 regions, and then pool that result into the single 1 x 1 region. The final output vector is the concatenation of these vectors, resulting in a (16 + 4 + 1)K = 21K-dimensional vector.
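The sparse max pooling over spatial cells can be sketched as follows (an illustrative sketch for one pyramid level; codes are assumed given as lists of (index, value) pairs and positions as normalized coordinates in [0, 1)):

```python
import numpy as np

def pyramid_max_pool(codes, positions, grid, n_feat):
    """Max-pool sparse codes over a grid x grid partition of the unit
    square.  `codes` is a list of sparse codes, each a list of
    (feature_index, value) pairs; `positions` the matching (x, y)
    locations.  Returns the concatenated pooled vector, one block per
    spatial cell."""
    out = np.zeros((grid * grid, n_feat))
    for (px, py), code in zip(positions, codes):
        cell = int(py * grid) * grid + int(px * grid)
        for j, v in code:                 # touch only nonzero entries
            out[cell, j] = max(out[cell, j], v)
    return out.ravel()
```

Because the inner loop touches only nonzero coefficients, the cost scales with the number of nonzeros rather than with K times the number of locations; coarser levels can then be pooled from this output instead of from the raw codes.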
3.2.4 Classification.
Subsequently, a logistic regression classifier is trained on the feature vectors using the liblinear package [19].

3.2.5 Implementation.
Each of the following operations is implemented using multicore processing: all steps of the SIFT computation, finding the group using the tree, and multiplying by the pseudoinverses. In each of these steps the image/feature image is split into parts that are sent to different cores. The system was implemented in C++. BLAS from the Accelerate framework was used in the tree sparse coding. We report results on a MacBook Pro with a 2.3 GHz Intel Core i7 processor with 4 cores; the observed speedups compared to a single core are reported in Table 3.
We also test the run time of just the coding, compared with coding using OMP with the SPAMS package [20].
3.3 Accuracy on Caltech 101 and 15 scenes
We test the accuracy of the standard pipeline with the hashed dictionary and with standard sparse coding on two object recognition benchmarks, Caltech 101 and 15 scenes. As mentioned before, for all data sets, we train the hashed dictionary on randomly selected SIFT vectors from the Pascal 2011 dataset. Caltech 101 consists of 101 image categories with approximately 50 images per category; many classes have more training examples, and we do the usual normalization of error by class size. We use 30 training examples per class. The 15 scenes database contains 15 categories and 4485 images, with between 200 and 400 images per category. We use 100 training images per class on this data set. For each data set, we run over 10 random splits and record the mean and standard deviation of the test accuracy. We record the results in Tables
1 and 2. The first two columns of each table correspond to hashed sparse coding run with two different numbers of nonzero entries on Lazebnik's SIFT. The next two columns correspond to the “real time” system: hashed sparse coding run on our approximate SIFT. The last two columns correspond to OMP, trained and coded with SPAMS [20] on Lazebnik's SIFT. Each row corresponds to the number of atoms in the dictionary. As far as we know, the state of the art with single features on grayscale images on Caltech 101 with 30 training examples per category is .773, in [7], and .898 for the 15 scenes, in [21]. Both of these methods use the same basic pipeline as this work, but with variations on the sparse coding; our method can be used in conjunction with their methods. As has been observed by other authors, increasing the size of the dictionary only seems to increase the accuracy. Note that for our method, the only places where the size of the dictionary affects the computational cost are in training, where we use SOMP, and in the final classification stage. The latter is small for these experiments, but if we wanted to use the system for detection at many locations in an image, it would start to be significant.
3.4 Running speed.
We tested the speed of the full pipeline from image to classification. We show results on images from the Berkeley dataset and Caltech 101. The Caltech 101 images were resized so that the larger side was at most 300 pixels, with the aspect ratio fixed. We get the results in Table 3. The entire Caltech 101 dataset was processed in 4 minutes and 1 second with 1024 features, and in 5 minutes and 35 seconds with 8092 features. This corresponds to 38 fps and 27 fps respectively.
We also tested the speed of just the sparse coding. (This test was done on a quad-core Intel i5 running 64-bit Linux with 4 GB of RAM; both our code and SPAMS were run as MEX files through Matlab.) Coding 15,000 SIFT vectors with a depth-16 tree and 5 nonzeros per vector takes .034 seconds with one core, and .018 with four. In comparison, SPAMS with a dictionary of size 1024 costs .898 seconds using four cores. This is not exactly a fair test, as SPAMS must calculate a Cholesky decomposition of the Gram matrix of the dictionary when it runs, and this could be cached; however, simply multiplying the dictionary matrix by the data vectors takes .294 seconds. As the size of the dictionary increases, this cost will increase, but our method will not get any slower.
Table 1: Accuracy on Caltech 101.

hashed        hashed        hashed, R.T.  hashed, R.T.  OMP           OMP
.722 ± .011   .704 ± .010   .710 ± .007   .697 ± .010   .725 ± .008   .721 ± .010
.735 ± .007   .731 ± .011   .723 ± .007   .716 ± .005   .747 ± .008   .738 ± .008
.741 ± .011   .740 ± .006   .736 ± .005   .724 ± .004   .754 ± .008   .757 ± .010
.751 ± .009   .739 ± .003
Table 2: Accuracy on 15 scenes.

hashed        hashed        hashed, R.T.  hashed, R.T.  OMP           OMP
.792 ± .006   .789 ± .004   .786 ± .004   .770 ± .007   .801 ± .006   .802 ± .004
.807 ± .006   .800 ± .004   .796 ± .007   .788 ± .007   .814 ± .006   .813 ± .006
.810 ± .007   .810 ± .004   .807 ± .003   .804 ± .004   .826 ± .007   .822 ± .007
.811 ± .004   .815 ± .004
Table 3: Running speed.

                      single images                              Caltech 101 (on 4 cores)
                      1 core (s)  4 cores (s)  1 core (fps)  4 cores (fps)  total time (m:s)  (fps)  performance
SIFT                  0.039       0.017        25            59
SIFT+TreeSC+pyramid   0.143       0.045        7             22.5
full (1024)           0.145       0.0465      6.9           21             4:01              38
full (2048)           0.1473      0.050       6.8           20             4:45              32
full (4096)           0.1495      0.052       6.7           19             4:42              32
full (8092)           0.155       0.0565      6.4           18             5:35              27
Tables 1 and 2 use 30 (Caltech 101) and 100 (15 scenes) training images per category. (The speeds vary, probably due to disk access, and are faster after one or more sweeps through the dataset.)
4 Conclusion
In this paper we presented a fast approximate sparse coding algorithm and used it to build an accurate real-time object recognition system. Our contributions can be summarized in four parts. 1) We describe a general method for learning the groups for greedy structured sparse coding, using a generalization of Lloyd's algorithm and SOMP. 2) We use this method to design a fast approximation of greedy sparse coding that uses a tree structure for inference. 3) We give a fast approximate implementation of the SIFT descriptor. 4) These algorithms together allow us to build a real-time object recognition system in the framework of [2]. It processes the entire Caltech 101 dataset in under 5 minutes (with images resized so that the larger side is at most 300 pixels). As far as we know, this is the first time that a fast implementation of this type of system has been put together with comparable accuracy.
We see many possible future directions, both for improving the group sparse coding algorithm and for applying our system to vision. We would like to learn the hash or tree jointly with the dictionary, rather than build it before the dictionary training. We would like to train the system on larger datasets and work on real-time object detection (as opposed to classification). At the measured speeds, the algorithm allows us to process around 2 million medium-sized images in less than a day on a single computer. Object detection should also be feasible, given that the expensive part, the calculation of features at different parts of the image from which detection is computed, is fast.
References
 [1] S. Lazebnik, C. Schmid, and J. Ponce, “Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories”, in CVPR'06, 2006.
 [2] Jianchao Yang, Kai Yu, Yihong Gong, and Thomas Huang, “Linear spatial pyramid matching using sparse coding for image classification”, in CVPR’09, 2009.
 [3] B.A. Olshausen and D. Field, “Emergence of simple-cell receptive field properties by learning a sparse code for natural images”, Nature, vol. 381, no. 6583, pp. 607–609, 1996.
 [4] M. Aharon, M. Elad, and A. Bruckstein, “K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation”, IEEE Transactions on Signal Processing, vol. 54, no. 11, pp. 4311–4322, 2006.
 [5] Koray Kavukcuoglu, Marc'Aurelio Ranzato, and Yann LeCun, “Fast inference in sparse coding algorithms with applications to object recognition”, Tech. Rep. CBLL-TR-2008-12-01, Computational and Biological Learning Lab, Courant Institute, NYU, 2008.

 [6] J. Yang, K. Yu, and T. Huang, “Efficient highly over-complete sparse coding using a mixture model”, in European Conference on Computer Vision, 2010.
 [7] Y. Boureau, N. Le Roux, F. Bach, J. Ponce, and Y. LeCun, “Ask the locals: multi-way local pooling for image recognition”, in International Conference on Computer Vision, 2011.

 [8] R. Jenatton, J. Mairal, G. Obozinski, and F. Bach, “Proximal methods for sparse hierarchical dictionary learning”, in International Conference on Machine Learning (ICML), 2010.
 [9] Seyoung Kim and Eric P. Xing, “Tree-guided group lasso for multi-task regression with structured sparsity”, in ICML, 2010, pp. 543–550.
 [10] Laurent Jacob, Guillaume Obozinski, and Jean-Philippe Vert, “Group lasso with overlap and graph lasso”, in Proceedings of the 26th Annual International Conference on Machine Learning, New York, NY, USA, 2009, ICML '09, pp. 433–440, ACM.
 [11] Richard G. Baraniuk, Volkan Cevher, Marco F. Duarte, and Chinmay Hegde, “Model-Based Compressive Sensing”, Dec 2009.
 [12] Anna C. Gilbert, Martin J. Strauss, and Joel A. Tropp, “Simultaneous Sparse Approximation via Greedy Pursuit”, IEEE Trans. Acoust. Speech Signal Process., vol. 5, pp. 721–724, 2005.

 [13] R. Ostrovsky, Y. Rabani, L. Schulman, and C. Swamy, “The effectiveness of Lloyd-type methods for the k-means problem”, in FOCS 2006, 2006.
 [14] S. Dasgupta and Y. Freund, “Random projection trees and low dimensional manifolds”, in STOC 2008, 2008.
 [15] R. Vidal, “Subspace clustering”, IEEE Signal Processing Magazine, vol. 28, pp. 52–68, 2011.
 [16] W. Allard, G. Chen, and M. Maggioni, “Multiscale geometric methods for data sets II: Geometric multiresolution analysis”, to appear in Applied and Computational Harmonic Analysis.
 [17] Junzhou Huang, Tong Zhang, and Dimitris N. Metaxas, “Learning with structured sparsity”, in ICML, 2009, p. 53.
 [18] K. Gregor and Y. LeCun, “Learning fast approximations of sparse coding”, in International Conference on Machine Learning (ICML), 2010.
 [19] Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang-Rui Wang, and Chih-Jen Lin, “LIBLINEAR: A library for large linear classification”, Journal of Machine Learning Research, vol. 9, pp. 1871–1874, 2008.
 [20] SPAMS: SPArse Modeling Software, http://www.di.ens.fr/willow/SPAMS/.
 [21] Shenghua Gao, Ivor Wai-Hung Tsang, Liang-Tien Chia, and Peilin Zhao, “Local features are not lonely – Laplacian sparse coding for image classification”, in CVPR 2010, 2010.