Graph Regularized Tensor Sparse Coding for Image Representation

03/27/2017 · by Fei Jiang, et al.

Sparse coding (SC) is an unsupervised learning scheme that has received an increasing amount of interest in recent years. However, conventional SC vectorizes the input images, which destroys the intrinsic spatial structures of the images. In this paper, we propose a novel graph regularized tensor sparse coding (GTSC) scheme for image representation. GTSC preserves the local proximity of elementary structures in the image by adopting the newly proposed tubal-tensor representation. Simultaneously, it considers the intrinsic geometric properties by imposing graph regularization, which has been successfully applied to uncover the geometric distribution of image data. Moreover, the sparse representations returned by GTSC have better physical explanations, as the key operation (i.e., circular convolution) in the tubal-tensor model preserves the shifting-invariance property. Experimental results on image clustering demonstrate the effectiveness of the proposed scheme.


1 Introduction

Sparse coding (SC), which encodes images using only a few active coefficients, has been successfully applied to many areas across computer vision and pattern recognition [1, 2, 3], since it is computationally efficient and has physical interpretations.

However, conventional SC [4] for image representation suffers from two major problems: (i) the vectorization preprocessing breaks apart the local proximity of pixels and destroys the object structures of images; and (ii) the geometric distribution of the image space is ignored, although such information can significantly enhance the learning performance.

Two different kinds of sparse coding models have been proposed to preserve the intrinsic spatial structures of images: tensor sparse coding (TenSR) [5, 6] and convolutional sparse coding (CSC) [7, 8]. In TenSR models [5, 6], tensors are exploited for the image representation and a series of separable dictionaries is used to approximate the structures in each mode of the images. Although the spatial structures are preserved by the tensor representation, the relationships between the learned sparse coefficients and the dictionaries are more complicated, which makes the encodings (sparse coefficients) hard to interpret. In CSC models [7, 8], images are represented as the sum of convolutions between filters, which capture the local patterns of images, and the corresponding feature maps. Each feature map has nearly the same size as the image, which significantly increases the computational complexity of further analyses, such as image classification and clustering. Figure 2 shows the fundamental theoretical differences among the above-mentioned sparse coding models. Moreover, neither kind of model considers the geometric structures of the image space.

Figure 1: Shifted versions of a basis based on the tensor-product. The shifted versions correspond to a dynamic flight in a counter-clockwise direction.
Figure 2: Four sparse coding models: (i) the original image; (ii) conventional sparse coding; (iii) tensor sparse coding based on the Tucker decomposition; (iv) convolutional sparse coding (CSC) based on the convolution operation, where the feature maps are almost the same size as the image; (v) our tubal-tensor model based on the circular convolution operation, where the representation is much smaller than that in CSC.

Several sparse coding models incorporating the geometric structures of the image space have been proposed. They are based on the locally invariant idea, which assumes that two close points in the original space are likely to have similar encodings. It has been shown [9] that the learning performance can be significantly enhanced if the geometric structure is exploited and local invariance is considered. However, these sparse coding models ignore the spatial structures of the images due to the vectorization preprocessing.

Motivated by the progress in tubal-tensor representation [10, 11], in this paper we propose a novel graph regularized tensor sparse coding (GTSC) scheme for image representation that simultaneously considers the spatial structures of images and the geometric distribution of the image space. First, we propose a novel tensor sparse coding model based on the tensor-product operation, which preserves the spatial structures of images through the tensor representation. Unlike TenSR [5, 6], the coefficients learned by our model have better physical explanations: they show the contributions of the corresponding bases and their shifted versions, as illustrated in Figure 1. We then incorporate the geometric distribution of the image space by using the graph Laplacian as a smooth regularizer to preserve the local geometric structures. By preserving the locally invariant property, GTSC has better discriminating power than conventional SC [4].

The rest of this paper is organized as follows: Section 2 introduces the proposed tubal-tensor sparse representation of images. Section 3 presents the proposed GTSC model and the alternating minimization algorithm for GTSC. The experimental results on image clustering are presented in Section 4. Finally, we conclude the paper in Section 5.

2 Tubal-tensor Sparse Representation

2.1 Notation

A third-order tensor of size $n_1 \times n_2 \times n_3$ is denoted as $\mathcal{A}$. $\mathcal{A}^{(k)}$ represents the $k$-th frontal slice $\mathcal{A}(:, :, k)$, which is an $n_1 \times n_2$ matrix; $\mathrm{unfold}(\mathcal{A})$ represents the expansion of $\mathcal{A}$ along the third dimension, obtained by stacking the frontal slices $\mathcal{A}^{(1)}, \ldots, \mathcal{A}^{(n_3)}$ into an $n_1 n_3 \times n_2$ matrix; $\hat{\mathcal{A}}$ is the discrete Fourier transform (DFT) along the third dimension of $\mathcal{A}$; and $\mathcal{A}^{\mathsf{T}}$ represents the tensor transpose of $\mathcal{A}$, obtained by transposing each frontal slice and then reversing the order of the slices $2$ through $n_3$. The superscript "$\mathsf{T}$" also denotes the transpose of matrices.

For convenience, $[n]$ denotes the set $\{1, 2, \ldots, n\}$. The $\ell_1$ and Frobenius norms of tensors are denoted as $\|\cdot\|_1$ and $\|\cdot\|_F$, respectively.

2.2 Tensor-linear Combination

A two-dimensional image is represented as a third-order tensor $\mathcal{Y} \in \mathbb{R}^{n_1 \times 1 \times n_3}$ (a lateral slice), which can be approximated by the tensor-product between a tensor dictionary $\mathcal{D}$ and a tensor representation $\mathcal{C}$ as

$\mathcal{Y} \approx \mathcal{D} * \mathcal{C},$   (1)

where $*$ denotes the tensor-product introduced in [10].

Note that $\mathcal{D} * \mathcal{C}$ can be rewritten as a tensor-linear combination of the tensor bases $\overrightarrow{\mathcal{D}}_1, \ldots, \overrightarrow{\mathcal{D}}_r$ (the lateral slices of $\mathcal{D}$) with the corresponding tubal coefficients $\vec{c}_1, \ldots, \vec{c}_r$:

$\mathcal{Y} \approx \overrightarrow{\mathcal{D}}_1 * \vec{c}_1 + \overrightarrow{\mathcal{D}}_2 * \vec{c}_2 + \cdots + \overrightarrow{\mathcal{D}}_r * \vec{c}_r.$   (2)

Equation (2) is quite similar to a linear combination, as tubes play the same role as scalars do in the matrix representation.
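In the Fourier domain, the tensor-product reduces to independent matrix products between frontal slices, which gives a compact way to compute such tensor-linear combinations. Below is a minimal NumPy sketch of the t-product; the function name `t_product` and the toy sizes are our own illustrative choices, not code from the paper:

```python
import numpy as np

def t_product(D, C):
    """Tensor-product (t-product) of D (n1 x r x n3) and C (r x n2 x n3).

    Computed in the Fourier domain: the FFT along the third mode turns the
    circular convolutions of tubes into independent frontal-slice
    matrix products.
    """
    n3 = D.shape[2]
    Df = np.fft.fft(D, axis=2)
    Cf = np.fft.fft(C, axis=2)
    Yf = np.empty((D.shape[0], C.shape[1], n3), dtype=complex)
    for k in range(n3):
        Yf[:, :, k] = Df[:, :, k] @ Cf[:, :, k]   # slice-wise product
    return np.real(np.fft.ifft(Yf, axis=2))

# Each lateral slice of Y is a tensor-linear combination of the lateral
# slices (tensor bases) of D, with the tubes of C acting as "scalars".
rng = np.random.default_rng(0)
D = rng.standard_normal((4, 3, 5))   # 3 tensor bases of size 4 x 1 x 5
C = rng.standard_normal((3, 2, 5))   # tubal coefficients for 2 images
Y = t_product(D, C)
print(Y.shape)  # (4, 2, 5)
```

Entry $(i, j, :)$ of the result equals the sum over $k$ of the circular convolutions of the tubes $\mathcal{D}(i, k, :)$ and $\mathcal{C}(k, j, :)$, matching the definition of the t-product.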

2.3 Tubal-tensor Sparse Representation

Given $n_2$ images of size $n_1 \times n_3$, we represent them as a third-order tensor $\mathcal{Y} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$. Let $\mathcal{D} \in \mathbb{R}^{n_1 \times r \times n_3}$ be the tensor dictionary, where each lateral slice $\overrightarrow{\mathcal{D}}_k$ represents a tensor basis, and let $\mathcal{C} \in \mathbb{R}^{r \times n_2 \times n_3}$ be the corresponding tensor representations. Each image is approximated by a sparse tensor-linear combination of those tensor bases. Our tubal-tensor sparse coding (TubSC) model can be formulated as:

$\min_{\mathcal{D}, \mathcal{C}} \ \frac{1}{2} \|\mathcal{Y} - \mathcal{D} * \mathcal{C}\|_F^2 + \beta \|\mathcal{C}\|_1, \quad \text{s.t.} \ \|\overrightarrow{\mathcal{D}}_k\|_F^2 \leq 1, \ k \in [r],$   (3)

where $\beta$ is the sparsity regularizer.

Remark 1

Conventional SC is a special case of TubSC.

2.4 Explanations of Tensor Representations

To explain the tensor representations, we introduce Lemma 1 which bridges the tensor-product with the matrix-product.

Lemma 1

[10] The tensor-product has an equivalent matrix-product formulation:

$\mathcal{D} * \mathcal{C} = \mathrm{fold}\big(\mathrm{circ}(\mathcal{D}) \cdot \mathrm{unfold}(\mathcal{C})\big),$   (4)

where $\mathrm{fold}(\cdot)$ is the inverse of $\mathrm{unfold}(\cdot)$, and $\mathrm{circ}(\mathcal{D})$ is the block-circulant matrix of $\mathcal{D}$ defined as follows:

$\mathrm{circ}(\mathcal{D}) = \begin{bmatrix} \mathcal{D}^{(1)} & \mathcal{D}^{(n_3)} & \cdots & \mathcal{D}^{(2)} \\ \mathcal{D}^{(2)} & \mathcal{D}^{(1)} & \cdots & \mathcal{D}^{(3)} \\ \vdots & \vdots & \ddots & \vdots \\ \mathcal{D}^{(n_3)} & \mathcal{D}^{(n_3 - 1)} & \cdots & \mathcal{D}^{(1)} \end{bmatrix}.$   (5)

Assuming the vectorized formulations of the tensor bases $\overrightarrow{\mathcal{D}}_1, \ldots, \overrightarrow{\mathcal{D}}_r$ are $d_1, \ldots, d_r \in \mathbb{R}^{n_1 n_3}$, the columns of $\mathrm{circ}(\mathcal{D})$ are exactly these vectors together with all of their cyclic shifts.

If we denote the shifted versions of $d_k$ by $d_k^{(s)}$, $s \in \{0, \ldots, n_3 - 1\}$, with corresponding scalar coefficients $c_k^{(s)}$, the tensor-product (1) is further transformed into a linear combination as follows:

$\mathrm{vec}(\mathcal{Y}) \approx \sum_{k=1}^{r} \sum_{s=0}^{n_3 - 1} c_k^{(s)} \, d_k^{(s)}.$   (6)

From (6), we can see the explicit meaning of the tensor representations $\mathcal{C}$: they display the reconstruction contributions of the corresponding original bases and of the shifted versions of those bases simultaneously. Figure 1 shows the shifted versions of an image basis. Note that the shifted versions of the basis are used for image reconstruction without being stored explicitly.
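The shift interpretation in (6) can be checked numerically: multiplying by a circulant matrix built from a vectorized basis is exactly a weighted combination of that basis and its cyclic shifts. A small illustrative sketch (the vectors `d` and `c` below are made up for the example):

```python
import numpy as np

def circulant(v):
    """Circulant matrix whose s-th column is v cyclically shifted by s."""
    return np.stack([np.roll(v, s) for s in range(len(v))], axis=1)

# A coefficient tube weights a basis vector AND its cyclic shifts:
d = np.array([1.0, 2.0, 3.0, 4.0])    # vectorized basis (hypothetical)
c = np.array([0.5, 0.0, -1.0, 0.0])   # coefficient tube (hypothetical)
y = circulant(d) @ c
# The same result, written explicitly as a combination of shifted bases:
y_explicit = 0.5 * d + (-1.0) * np.roll(d, 2)
print(np.allclose(y, y_explicit))  # True
```

The product `circulant(d) @ c` is also exactly the circular convolution of `d` and `c`, which is why the tubal model inherits the shifting-invariance property mentioned in the abstract.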

3 Graph Regularized Tensor Sparse Coding

In this section, we present our graph regularized tensor sparse coding (GTSC) model which simultaneously takes into account the spatial structures of images and local geometric information of the image space.

3.1 Problem Formulation

On the one hand, the TubSC model (3) can preserve the spatial structures of images through the tensor-linear combination in (2). On the other hand, one might further hope that the learned tensor dictionary respects the intrinsic geometric information of the image space. A natural assumption is local invariance: the learned sparse representations of two points that are close in the original space should also be close to each other. This assumption is usually referred to as manifold learning [9], and it can significantly enhance the learning performance.

Given a set of $n_2$ images, a $p$-nearest-neighbor graph with weight matrix $W$ is constructed. Considering the problem of mapping the weighted graph to the tensor sparse representation $\mathcal{C}$, we first expand $\mathcal{C}$ along the third dimension into a matrix $S$, in which each column is the sparse representation of one image. A reasonable criterion for choosing a good mapping is to minimize the following objective function:

$\frac{1}{2} \sum_{i, j} W_{ij} \|s_i - s_j\|^2 = \mathrm{Tr}(S L S^{\mathsf{T}}),$   (7)

where $\mathrm{Tr}(\cdot)$ represents the trace of a matrix, $L = G - W$ is the graph Laplacian matrix, and $G$ is the diagonal degree matrix with $G_{ii} = \sum_j W_{ij}$.
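As a concrete sketch of this construction, the following NumPy snippet builds a k-nearest-neighbor weight matrix and its Laplacian. The function name and the simple binary weighting are our own illustrative choices (the paper does not spell out its weighting scheme):

```python
import numpy as np

def knn_graph_laplacian(X, k=3):
    """Binary k-nearest-neighbor graph over the columns of X (one image
    per column) and its Laplacian L = G - W, where G is the diagonal
    degree matrix."""
    n = X.shape[1]
    # Pairwise Euclidean distances between columns.
    dist = np.linalg.norm(X[:, :, None] - X[:, None, :], axis=0)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(dist[i])[1:k + 1]   # skip the point itself
        W[i, nbrs] = 1.0
    W = np.maximum(W, W.T)                    # symmetrize
    L = np.diag(W.sum(axis=1)) - W
    return W, L
```

The identity $\mathrm{Tr}(S L S^{\mathsf{T}}) = \frac{1}{2} \sum_{ij} W_{ij} \|s_i - s_j\|^2$ makes the effect of the regularizer transparent: minimizing it pulls the codes of neighboring images toward each other.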

By incorporating the Laplacian regularizer (7) into the TubSC model (3), we propose a novel model named graph regularized tensor sparse coding (GTSC):

$\min_{\mathcal{D}, \mathcal{C}} \ \frac{1}{2} \|\mathcal{Y} - \mathcal{D} * \mathcal{C}\|_F^2 + \alpha \, \mathrm{Tr}(S L S^{\mathsf{T}}) + \beta \|\mathcal{C}\|_1, \quad \text{s.t.} \ \|\overrightarrow{\mathcal{D}}_k\|_F^2 \leq 1, \ k \in [r],$   (8)

where $\alpha$ is the graph regularizer and $\beta$ is the sparsity regularizer.

Problem (8) is quite challenging due to its non-convex objective function and the convolutional operation. Instead of transforming (8) into a conventional graph regularized SC formulation based on Lemma 1, we propose a much more efficient algorithm that alternately optimizes $\mathcal{D}$ and $\mathcal{C}$ directly in the tensor space.

1:  Input: images $\mathcal{Y}$, dictionary $\mathcal{D}$, regularizers $\alpha$, $\beta$, graph Laplacian $L$, maximum iterative steps num,
2:  Initialization: set $\mathcal{C}^{0} = \mathcal{C}^{1} = \mathcal{O}$, $t_0 = t_1 = 1$,
3:  for iter = 1 to num do
4:     Set the extrapolated point $\mathcal{B} = \mathcal{C}^{iter} + \frac{t_{iter-1} - 1}{t_{iter}} \big(\mathcal{C}^{iter} - \mathcal{C}^{iter-1}\big)$,
5:     Compute $\nabla f(\mathcal{B})$ via Equation (13),
6:     Compute $\mathcal{C}^{iter+1}$ via $\mathrm{prox}_{\beta / L_f}\big(\mathcal{B} - \nabla f(\mathcal{B}) / L_f\big)$,
7:     $t_{iter+1} = \big(1 + \sqrt{1 + 4\, t_{iter}^2}\,\big) / 2$,
8:  end for
9:  Output: Sparse coefficients $\mathcal{C}$.
Algorithm 1 Iterative Shrinkage Thresholding algorithm based on Tensor representation (ISTT)

3.2 Graph Regularized Tensor Sparse Representations

In this subsection, we discuss how to solve (8) with the tensor dictionary $\mathcal{D}$ fixed. Problem (8) becomes:

$\min_{\mathcal{C}} \ \frac{1}{2} \|\mathcal{Y} - \mathcal{D} * \mathcal{C}\|_F^2 + \alpha \, \mathrm{Tr}(S L S^{\mathsf{T}}) + \beta \|\mathcal{C}\|_1.$   (9)

By Lemma 1, (9) is equivalent to:

$\min_{\mathcal{C}} \ \frac{1}{2} \|\mathrm{unfold}(\mathcal{Y}) - \mathrm{circ}(\mathcal{D}) \cdot \mathrm{unfold}(\mathcal{C})\|_F^2 + \alpha \, \mathrm{Tr}(S L S^{\mathsf{T}}) + \beta \|\mathcal{C}\|_1.$   (10)

The size of the dictionary $\mathrm{circ}(\mathcal{D})$ in (10) grows significantly for high-dimensional images, which demands considerably more computational resources.

To alleviate this problem, we propose a novel Iterative Shrinkage Thresholding algorithm based on the Tensor representation (ISTT) to solve (9) directly, which we rewrite as:

$\min_{\mathcal{C}} \ f(\mathcal{C}) + g(\mathcal{C}),$   (11)

where $f(\mathcal{C})$ stands for the smooth part $\frac{1}{2} \|\mathcal{Y} - \mathcal{D} * \mathcal{C}\|_F^2 + \alpha \, \mathrm{Tr}(S L S^{\mathsf{T}})$, and $g(\mathcal{C})$ stands for the sparsity constraint $\beta \|\mathcal{C}\|_1$.

The iterative shrinkage step is then constructed by linearizing $f$ around the previous estimate of $\mathcal{C}$, with a proximal regularization and the nonsmooth regularization. Thus, at the $t$-th iteration, $\mathcal{C}^{t+1}$ is updated by:

$\mathcal{C}^{t+1} = \arg\min_{\mathcal{C}} \ f(\mathcal{C}^{t}) + \langle \nabla f(\mathcal{C}^{t}), \mathcal{C} - \mathcal{C}^{t} \rangle + \frac{L_f}{2} \|\mathcal{C} - \mathcal{C}^{t}\|_F^2 + g(\mathcal{C}),$   (12)

where $L_f$ is a Lipschitz constant of $\nabla f$.

To solve (12), we first derive the gradient of $f$ with respect to $\mathcal{C}$, i.e., the data reconstruction and graph terms:

$\nabla f(\mathcal{C}) = \mathcal{D}^{\mathsf{T}} * (\mathcal{D} * \mathcal{C} - \mathcal{Y}) + 2 \alpha \, \mathrm{fold}(S L),$   (13)

where $\mathrm{fold}(S L)$ reshapes the matrix $S L$ back into the tensor space of $\mathcal{C}$.

Second, we discuss how to determine the Lipschitz constant $L_f$ in (12). For every $\mathcal{C}_1$ and $\mathcal{C}_2$, we have

$\|\nabla f(\mathcal{C}_1) - \nabla f(\mathcal{C}_2)\|_F \leq \Big( \max_{k} \big\|(\hat{\mathcal{D}}^{(k)})^{\mathsf{H}} \hat{\mathcal{D}}^{(k)}\big\|_2 + 2 \alpha \|L\|_2 \Big) \|\mathcal{C}_1 - \mathcal{C}_2\|_F,$   (14)

where the superscript "$\mathsf{H}$" represents the conjugate transpose and $\|\cdot\|_2$ denotes the spectral norm.

Thus, the Lipschitz constant used in our algorithm is $L_f = \max_{k} \big\|(\hat{\mathcal{D}}^{(k)})^{\mathsf{H}} \hat{\mathcal{D}}^{(k)}\big\|_2 + 2 \alpha \|L\|_2$.

Finally, (12) can be solved by the proximal operator $\mathcal{C}^{t+1} = \mathrm{prox}_{\beta / L_f}\big(\mathcal{C}^{t} - \nabla f(\mathcal{C}^{t}) / L_f\big)$, where $\mathrm{prox}_{\tau}$ is the soft-thresholding operator $\mathrm{prox}_{\tau}(x) = \mathrm{sign}(x) \max(|x| - \tau, 0)$.

To speed up the convergence of the iterative shrinkage algorithm, an extrapolation operator is adopted [5]. Algorithm 1 summarizes the proposed Iterative Shrinkage Thresholding algorithm based on the Tensor representation (ISTT).
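The shrinkage step at the heart of ISTT is the generic iterative soft-thresholding update. The sketch below applies it to a plain vectorized lasso problem rather than the tensor formulation, purely to illustrate the gradient-then-threshold structure (the function names and the toy problem are our own, not the paper's):

```python
import numpy as np

def soft_threshold(z, tau):
    """Proximal operator of tau * ||.||_1: entrywise soft-thresholding."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def ista_step(x, grad, lip, beta):
    """One shrinkage step: gradient descent on the smooth part, then prox."""
    return soft_threshold(x - grad / lip, beta / lip)

# Toy lasso: min_x 0.5*||A x - b||^2 + beta*||x||_1 (noiseless, sparse x).
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 10))
x_true = np.zeros(10)
x_true[[2, 7]] = [1.5, -2.0]
b = A @ x_true
lip = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the gradient
x = np.zeros(10)
for _ in range(500):
    x = ista_step(x, A.T @ (A @ x - b), lip, beta=0.1)
print(np.round(x, 2))                 # close to x_true, slightly shrunk
```

In ISTT the same update runs in the tensor space, with the gradient of Equation (13) and the Lipschitz constant derived above in place of the lasso quantities.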

0:  Input: images $\mathcal{Y}$, the number of atoms $r$, regularizers $\alpha$, $\beta$, graph Laplacian $L$, maximum iterative steps num,
1:  Initialization: randomly initialize $\mathcal{D}$, set $\mathcal{C} = \mathcal{O}$, and initialize the Lagrange dual variables $\lambda$,
2:  for $t = 1$ to num do
3:     // Graph Regularized Tensor Sparse Representation
4:     Solve for $\mathcal{C}$ via Equation (12) in Algorithm 1,
5:     // Tensor Dictionary Learning
6:     Compute $\hat{\mathcal{Y}}$ and $\hat{\mathcal{C}}$ by the DFT along the third dimension,
7:     for $k = 1$ to $n_3$ do
8:        Solve (19) for the dual variables $\lambda$ by Newton's method,
9:        Calculate $\hat{\mathcal{D}}^{(k)}$ from (18),
10:     end for
11:     Recover $\mathcal{D}$ from $\hat{\mathcal{D}}$ by the inverse DFT,
12:  end for
13:  Output: tensor dictionary $\mathcal{D}$, sparse coefficients $\mathcal{C}$.
Algorithm 2 Algorithm for GTSC

3.3 Tensor Dictionary Learning

To learn the dictionary $\mathcal{D}$ with $\mathcal{C}$ fixed, the optimization problem is:

$\min_{\mathcal{D}} \ \frac{1}{2} \|\mathcal{Y} - \mathcal{D} * \mathcal{C}\|_F^2, \quad \text{s.t.} \ \|\overrightarrow{\mathcal{D}}_k\|_F^2 \leq 1, \ k \in [r],$   (15)

where the atoms are coupled together by the circular convolution operation. Therefore, we first decompose (15) into nearly independent subproblems (coupled only through the norm constraints) by applying the DFT along the third dimension:

$\min_{\hat{\mathcal{D}}} \ \frac{1}{2} \sum_{k=1}^{n_3} \big\|\hat{\mathcal{Y}}^{(k)} - \hat{\mathcal{D}}^{(k)} \hat{\mathcal{C}}^{(k)}\big\|_F^2, \quad \text{s.t.} \ \|\overrightarrow{\mathcal{D}}_j\|_F^2 \leq 1, \ j \in [r].$   (16)

We then adopt the Lagrange dual [4], solving for the dual variables by Newton's method. Another advantage of the Lagrange dual is that the number of optimization variables is $r$, which is much smaller than the $n_1 r n_3$ variables of the primal problem.
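To illustrate the frequency-domain decoupling, the sketch below computes the per-slice least-squares dictionary update while ignoring the norm constraints: it corresponds to Equation (18) with the dual variables set to zero (the tiny ridge term is our own addition, purely for numerical stability, and the function name is our own):

```python
import numpy as np

def dictionary_slices(Y, C, eps=1e-8):
    """Unconstrained per-slice dictionary update in the Fourier domain:
    D_k = Y_k C_k^H (C_k C_k^H + eps*I)^{-1} for each frontal slice k.
    This is Equation (18) with the dual variables set to zero."""
    Yf = np.fft.fft(Y, axis=2)
    Cf = np.fft.fft(C, axis=2)
    r, n3 = C.shape[0], C.shape[2]
    Df = np.empty((Y.shape[0], r, n3), dtype=complex)
    for k in range(n3):
        Ck = Cf[:, :, k]
        gram = Ck @ Ck.conj().T + eps * np.eye(r)   # regularized Gram matrix
        Df[:, :, k] = Yf[:, :, k] @ Ck.conj().T @ np.linalg.inv(gram)
    return np.real(np.fft.ifft(Df, axis=2))
```

In the full algorithm, the diagonal matrix of dual variables replaces the ridge term inside the inverse, and Newton's method tunes those duals until the recovered atoms satisfy the norm constraints.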

To use the Lagrange dual algorithm, we first consider the Lagrangian of (15):

$\mathcal{L}(\hat{\mathcal{D}}, \lambda) = \frac{1}{2} \sum_{k=1}^{n_3} \big\|\hat{\mathcal{Y}}^{(k)} - \hat{\mathcal{D}}^{(k)} \hat{\mathcal{C}}^{(k)}\big\|_F^2 + \sum_{j=1}^{r} \lambda_j \big( \|\overrightarrow{\mathcal{D}}_j\|_F^2 - 1 \big),$   (17)

where $\lambda = [\lambda_1, \ldots, \lambda_r] \geq 0$ are the dual variables and $\Lambda = \mathrm{diag}(\lambda)$.

Second, minimizing over $\hat{\mathcal{D}}$ analytically, we obtain the optimal formulation of each frontal slice:

$\hat{\mathcal{D}}^{(k)} = \hat{\mathcal{Y}}^{(k)} \big(\hat{\mathcal{C}}^{(k)}\big)^{\mathsf{H}} \Big( \hat{\mathcal{C}}^{(k)} \big(\hat{\mathcal{C}}^{(k)}\big)^{\mathsf{H}} + \Lambda \Big)^{-1}.$   (18)

Substituting this expression into the Lagrangian $\mathcal{L}(\hat{\mathcal{D}}, \lambda)$, we obtain the Lagrange dual function

$g(\lambda) = \min_{\hat{\mathcal{D}}} \ \mathcal{L}(\hat{\mathcal{D}}, \lambda),$   (19)

and the optimal dual variables are obtained by maximizing (19) with Newton's method.

Once the dual variables are obtained, the dictionary can be recovered using Equation (18).

The algorithm we propose for GTSC is shown in Algorithm 2.

3.4 Complexity Analysis

Given images of size , the numbers of bases and nearest neighbors, the computational complexities of GTSC and GraphSC [9] are as follows:

For sparse representation learning, GTSC is based on an iterative shrinkage thresholding algorithm in the tensor space, whereas GraphSC [9] is based on a feature-sign search algorithm whose complexity depends on the number of non-zero coefficients.

For dictionary learning, both GTSC and GraphSC [9] are based on Lagrange-dual algorithms. For GTSC, the optimal dictionary is obtained slice by slice, so each subproblem is much smaller.

Overall, the computational complexity of GTSC is lower than that of GraphSC [9], especially for high-dimensional data.

4 Evaluation

We apply our proposed TubSC model in (3) and GTSC model in (8) to image clustering tasks on four image databases: COIL20 (http://www1.cs.columbia.edu/CAVE/software/softlib/coil-100.php), USPS (http://www.cad.zju.edu.cn/home/dengcai/Data/MLData.html), ORL (http://www.uk.research.att.com/facedatabase.html), and Yale (http://www.cad.zju.edu.cn/home/dengcai/Data/FaceData.html). The important statistics of these datasets are summarized in Table 1. Two metrics, the accuracy (ACC) and the normalized mutual information (NMI), are used for evaluation. ACC measures the percentage of correct labels obtained by an algorithm, and NMI measures how similar two sets of clusters are. The details of these two metrics can be found in [9].
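For reference, ACC requires a best one-to-one matching between predicted cluster labels and ground-truth labels, which can be computed with the Hungarian algorithm. A minimal sketch (the function name is our own; this is the standard construction, not code from [9]):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(y_true, y_pred):
    """ACC: fraction of samples correctly labeled under the best
    one-to-one matching of predicted clusters to true classes."""
    n_cls = max(y_true.max(), y_pred.max()) + 1
    count = np.zeros((n_cls, n_cls), dtype=int)
    for t, p in zip(y_true, y_pred):
        count[t, p] += 1
    rows, cols = linear_sum_assignment(-count)  # maximize matched counts
    return count[rows, cols].sum() / len(y_true)

y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([1, 1, 0, 0, 2, 2])   # same clustering, permuted labels
print(clustering_accuracy(y_true, y_pred))  # 1.0
```

Note that ACC is invariant to a permutation of the cluster labels, which is why the matching step is needed before counting correct assignments.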

Data Class Size Number of Images
COIL20 20 32×32 1440
USPS 10 16×16 9298
ORL 40 32×32 400
YALE 15 64×64 165
Table 1: Statistics of the Four Datasets

4.1 Compared Algorithms

To evaluate the clustering performance, we compare our proposed methods against conventional SC [4] and GraphSC [9]. The performance scores are obtained by averaging over 10 tests. For each test, we first apply the compared methods to learn new representations for the images, and then apply K-means in the new representation space. For the SC [4] and GraphSC [9] methods, PCA is used to reduce the data dimensionality by retaining a fixed percentage of the variance; the numbers of bases for USPS and YALE are set to 128, and those for COIL20 and ORL are set to 256. Our methods do not need dimensionalityity reduction. Moreover, the numbers of bases are much smaller than those used in SC [4] and GraphSC [9], owing to the powerful representation generated by the tensor-product: they are set to 45 for all datasets except YALE, for which 80 bases are used. Based on the physical explanations of the tensor sparse representations, we use $\mathcal{C}$ as the final image representation, which is defined as

(20)

For GraphSC [9] and GTSC, we empirically set the graph regularization parameter $\alpha$ to 1 and the number of nearest neighbors to 3.

4.2 Clustering Results

Table 2 shows the clustering results in terms of ACC and NMI. As can be seen, our GTSC algorithm performs best in all cases. TubSC performs much better than conventional SC, which indicates that considering the spatial proximity information of images significantly enhances the learning performance. Moreover, GraphSC outperforms SC, which shows that encoding the geometric distribution of the image space also improves the learning performance.

We would like to point out that our proposed models use much smaller dictionaries than SC and GraphSC, without any dimensionality reduction preprocessing. We do not compare the clustering performance with TenSR [5] and CSC [8], which also consider the spatial structures of images: without dimensionality reduction, the representations learned by TenSR and CSC are larger than the original images, which significantly increases the computational complexity of clustering.

Data COIL20 USPS ORL YALE AVG.
ACC (%)
K-Means 60.49 67.45 53.50 51.52 58.24
SC [4] 67.43 68.62 53.25 54.55 60.96
GraphSC [9] 75.28 67.89 59.50 57.58 65.06
TubSC (Ours) 72.29 71.16 60.00 56.36 64.95
GTSC (Ours) 83.19 76.11 67.50 60.00 71.70
NMI (%)
K-Means 73.86 62.20 71.82 53.69 65.39
SC [4] 73.24 65.91 72.18 54.70 66.51
GraphSC [9] 81.03 67.22 76.00 60.21 71.12
TubSC (Ours) 80.52 68.21 76.94 61.15 71.70
GTSC (Ours) 89.66 79.31 81.22 63.98 78.54
Table 2: Clustering Performance of Different Algorithms on the Four Datasets

5 Conclusion

In this paper, we proposed a novel graph regularized tensor sparse coding (GTSC) model for image representation, which explicitly considers both the spatial proximity information of images and the geometric structures of the image space. GTSC is based on a novel tubal-tensor sparse coding (TubSC) model, whose tensor encodings have richer explanations than those of conventional sparse coding. Experimental results on image clustering have demonstrated that our proposed algorithm has better representation power and significantly enhances the clustering performance.

6 Acknowledgements

This work is supported by NSFC (No. 61671290) of China and the Key Program for International S&T Cooperation Projects (No. 2016YFE0129500) of China, and partially supported by the Basic Research Project of the Innovation Action Plan (No. 16JC1402800) of the Shanghai Science and Technology Committee.

References

  • [1] Zheng Zhang, Yong Xu, Jian Yang, Xuelong Li, and David Zhang, “A survey of sparse representation: algorithms and applications,” IEEE Access, vol. 3, pp. 490–530, 2015.
  • [2] Mehrdad J Gangeh, Ahmed K Farahat, Ali Ghodsi, and Mohamed S Kamel, “Supervised dictionary learning and sparse representation-a review,” arXiv preprint arXiv:1502.05928, 2015.
  • [3] John Wright, Yi Ma, Julien Mairal, Guillermo Sapiro, Thomas S Huang, and Shuicheng Yan, “Sparse representation for computer vision and pattern recognition,” Proceedings of the IEEE, vol. 98, no. 6, pp. 1031–1044, 2010.
  • [4] Honglak Lee, Alexis Battle, Rajat Raina, and Andrew Y Ng, “Efficient sparse coding algorithms,” in NIPS, 2006, pp. 801–808.
  • [5] Na Qi, Yunhui Shi, Xiaoyan Sun, and Baocai Yin, “Tensr: Multi-dimensional tensor sparse representation,” in CVPR, 2016, pp. 5916–5925.
  • [6] Na Qi, Yunhui Shi, Xiaoyan Sun, Jingdong Wang, and Baocai Yin, “Two dimensional synthesis sparse model,” in ICME, 2013, pp. 1–6.
  • [7] Hilton Bristow, Anders Eriksson, and Simon Lucey, “Fast convolutional sparse coding,” in CVPR, 2013, pp. 391–398.
  • [8] Felix Heide, Wolfgang Heidrich, and Gordon Wetzstein, “Fast and flexible convolutional sparse coding,” in CVPR, 2015, pp. 5135–5143.
  • [9] Miao Zheng, Jiajun Bu, Chun Chen, Can Wang, Lijun Zhang, Guang Qiu, and Deng Cai, “Graph regularized sparse coding for image representation,” IEEE Transactions on Image Processing, vol. 20, no. 5, pp. 1327–1336, 2011.
  • [10] Misha E Kilmer, Karen Braman, Ning Hao, and Randy C Hoover, “Third-order tensors as operators on matrices: A theoretical and computational framework with applications in imaging,” SIAM Journal on Matrix Analysis and Applications, vol. 34, no. 1, pp. 148–172, 2013.
  • [11] Xiao-Yang Liu, Shuchin Aeron, Vaneet Aggarwal, and Xiaodong Wang, “Low-tubal-rank tensor completion using alternating minimization,” arXiv preprint arXiv:1610.01690, 2016.